This logical arrangement of physical drives in a RAID array is susceptible to errors, often caused by hardware/software failure and human error. Events such as malware or virus infection, power surges or abrupt shutdowns, device driver issues, metadata conflicts, etc. can lead to a RAID controller no longer detecting or recognizing RAID drives. Further, SMART disk errors and overheating can also lead to such errors. And in some rare cases, a RAID may stop detecting RAID drives after a soft reboot. Below are some quick fixes for a RAID controller when it fails to recognize RAID drives.

The time of chipset RAID, except for NVMe RAID, has clearly passed its prime. Anyone needing such large volumes uses a NAS. Through all of my recent struggles, all I could think was: this is so much easier in Linux.

I am running a home NAS now with consumer stuff. Until recently, I had all 6 drives below attached to the Intel SATA controller on the motherboard. This week I added the two LSI controllers for more ports, and moved the HDDs to one of the LSI cards. This was completely seamless in Ubuntu 18.04. My main volume is RAID-Z2 with 5 x 10TB drives.

Asus Z170-AR motherboard (3 year old mobo)
Skylake 6600K CPU running at 4.4 GHz (3 year old chip)
Cooler Master HAF-XM case (6 year old case)
32GB DDR4-3000 Patriot RAM, at 2400 MHz (3 year old RAM, unreliable beyond 2400 unfortunately!)
1 x Kingston 96GB SSD (SATA II) mounted to the back of the motherboard for the OS
5 x WD 10TB easystore, shucked in December
1 x Aquantia AQN-107 10 GbE NIC, PCIe 3.0 x4, running at x2 but still manages 10 GbE in iperf

There is space for many more drives in my case: 1 x 3.5 left inside, and 2 x 3.5 through the x-dock bays in the front. In one 5.25 bay, I added a 4 x 2.5 SSD SATA dock. Another 5.25 bay has a SATA dock with 1 x 3.5 and 1 x 2.5. The other 5.25 bay is force-converted to 3.5 in the HAF-XM, and currently has a USB 3.0 hub / card reader; it could be swapped for a dual SSD SATA dock.

Altogether, I have 18 internal SATA ports, between the 6 on the motherboard and the 12 from the two LSI controllers. 6 of those are connected to the HDDs and SSD, and 8 of them to the existing docks. I still have 4 free SATA ports internally - 3 on the Intel, and one on the LSI. I could use those 4 free ports for 4 more SSDs, using the space that remains in the case. And of course I still have one external miniSAS SFF-8088 port left for expansion.

The only issue I have encountered is that one HDD disconnects sometimes. I think one of my Easystore drives is intermittently bad. This was the case on the Intel SATA controller, and is still the case on the LSI. I still don't know which drive it is exactly, but I'm going to track it down. Once I do, I think I can successfully unshuck it to return it to BB.

If I only wanted 6 drives total, the Intel would be fine, IMO, as long as those are HDDs. If your drives are SSDs, that's a major issue: the Intel controller has a bandwidth limitation of PCIe 2.0 x4, which is 1 GB/s. Those easystores average 170 MB/s over the whole surface, but have peaks at 230 MB/s. 5 x 230 = 1150 MB/s, which is more than the bandwidth of the Intel controller. If you want an array of fast SSDs, forget the Intel SATA controller. The PCIe 3.0 x8 LSI controllers each have 7800 MB/s of bandwidth.

Why so many hot-swap SATA docks? Mostly because I am worried about backing up my ZFS array, and I want to use a bunch of old 1.5 / 3.0 / 4.0 / 6.0 TB drives to do that - I have a pair of each of them. I also have eSATA port multiplier enclosures, but those require a controller which supports them. Even with just the first drive, it doesn't work reliably, which is surprising. I think I will return my SFF-8088 to 4 x eSATA breakout cable to Amazon. I'm now looking for a cheap external SAS enclosure for backup purposes.
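The controller-bandwidth reasoning above can be written as a quick back-of-the-envelope check. This is just a sketch using the figures quoted in the post (230 MB/s peak per drive, ~1000 MB/s for the Intel link, ~7800 MB/s for a PCIe 3.0 x8 HBA); the helper name is mine, not from any library:

```python
# Back-of-the-envelope check: can the host link feed every drive at peak speed?
# All figures are the ones quoted in the post above; adjust for your hardware.
def link_saturated(drive_count, per_drive_mbps, link_mbps):
    """Return True if the drives' aggregate peak throughput exceeds the link."""
    aggregate = drive_count * per_drive_mbps
    return aggregate > link_mbps

# 5 shucked easystores peaking at 230 MB/s vs. the Intel link (~1000 MB/s)
print(link_saturated(5, 230, 1000))   # True: 1150 MB/s > 1000 MB/s
# The same 5 drives on a PCIe 3.0 x8 LSI HBA (~7800 MB/s): plenty of headroom
print(link_saturated(5, 230, 7800))   # False
```

The same check explains why an SSD array is a non-starter on the chipset ports: even four SATA SSDs at ~550 MB/s would want more than double what the link can deliver.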
Your experience is likely to be the same, AMD or Intel, as they are both going to be using the Windows storport miniport driver with a compatibility shim layer for their chipset. I have not had any reliability issues on AMD or Intel chipset RAID: I have had drives fail and rebuild properly, and have grown an array by pulling a drive, replacing it with a larger one, then doing it again after the rebuild.

I have had copious issues with the current AMD Windows installer when booted from NVMe and trying to do SATA RAID. You have to insert the SATA-only RAID driver during OS install to have a working setup, and side-install the OS-side monitoring. Evidently they never tested my setup, and assumed anyone with NVMe was doing NVMe RAID. (It always tries to install NVMe RAID if you have an NVMe drive installed, even from the SATA-only package.)

And for Intel: I tried updating an 1155 box to larger-than-500GB rust, dropped a 3TB drive in, and it showed up as 720GB or such. Windows 7 and UEFI shouldn't have the >2TB issue... but it did. I had to power off, unplug the mirror, power on, disable RAID, go to the OS and format the 3TB drive, power off, replug the drives and re-enable RAID, and all was happy again.
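The 3TB-shows-as-720GB symptom looks like the classic 32-bit LBA wraparound: with 512-byte sectors, a storage stack that keeps the sector count in 32 bits tops out at 2 TiB, and capacity beyond that wraps around. The arithmetic below is my interpretation of the symptom, not something the poster confirmed, and the 3TB byte count is the nominal decimal capacity of a typical "3TB" drive:

```python
# 32-bit LBA wraparound: a driver or option ROM that stores the sector count
# in 32 bits can address at most 2**32 sectors, so larger drives wrap around.
SECTOR = 512                       # bytes per logical sector
LBA32_LIMIT = 2**32 * SECTOR       # 2 TiB addressable with 32-bit LBAs

capacity = 3_000_592_982_016       # nominal bytes on a typical "3TB" drive
visible = capacity % LBA32_LIMIT   # what a 32-bit-limited stack would report

print(visible / 1024**3)           # ~746 GiB -- in the ballpark of "720GB or such"
```

That the drive came back after formatting it outside the RAID BIOS fits this picture: once the metadata was rewritten with the full capacity, the 32-bit truncation no longer applied.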