I've had an NS4300 for a couple of years. It's configured as RAID 5 with a hot spare (Disks 1-3 are 1 TB data disks and Disk 4 is a 1 TB spare).
The unit has started dropping off the network: it becomes inaccessible from both SmartNavi and the web interface until I force a reboot by pulling the power cord (it won't respond to front-button pushes once in this state, though I'm not sure it's ever supposed to power down from the front button). An interesting side note: even when the unit isn't responding to any file or admin access, it still responds to pings.
Anyway, there have been no warning beeps, and the three LEDs for the data disks are green on the front panel.
The system status page reports "functional", but I noticed in the event log that Disk 1 has been throwing off the following errors:

BSL log disk 1 at LBA 0x06a4e04bf cleared
WARNING BSL update on disk 1 at LBA 06a4e04bf
WARNING Task 20 disk error on disk 1 at LBA 0x06a4e04bf (Length 0x1) with status 51

It had been doing that for a couple of days, and then all of a sudden last night it started throwing out:

WARNING S.M.A.R.T threshold exceeded on disk 1
Obviously Disk 1 is going bad and needs to be replaced. I've got a new drive on order and should be replacing it tomorrow.
My concern is that the system seems perfectly content despite a disk going bad. I'm guessing the spare won't be called upon until Disk 1 actually fails completely, correct? Also, why does the unit keep dropping off the network and needing a reboot before it's accessible again?
What I'd like to do is force the spare (Disk 4) to kick in now, before Disk 1 fails outright, so that no data is at risk, and then use the new replacement disk as the new spare. Is there any way to do this, and would there be any point to it?
Additional info: I had the drives set to go to sleep after 3 minutes. I just changed that to "never", and the system has stayed on the network much longer than it did when I was checking on it last night.
I have firmware version 01.05.0000.12, the DLNA plugin, and the ********** plugin (which has never worked for me).
I'm a little leery of the NS4300 to begin with because I had a drive go bad in it a couple of years ago (there are probably some posts about it here in the archives) when I had the unit configured as RAID 5 with 4 data disks. I hot-swapped in a replacement disk and the array died: no rebuild, 3 terabytes of data gone. A total failure of the whole point of RAID. Fortunately I had backups or originals of most of what was lost, but I'm not sure that's the case now.
I'm afraid that pulling the bad disk is going to kill the array like it did before, despite the hot spare this time around.
Any feedback is appreciated. A couple of screenshots are attached.