Originally Posted by dksc318
It is safe to say that here we are not building servers that require top performance, as corporate or commercial servers do, so few of us really spend the money on these cards or are particularly critical of their performance.
From what I can see, the Marvell/Highpoint cards were used in large quantities in data centers, since I could get quite a few of them cheap locally. From that I concluded they don't have "reliability" problems, so when "reliability" popped up as a reason they should not be used, I knew that argument wouldn't hold up.
Data centers here are built at massive scale; they are dotted everywhere. This one is near my office.
Vantage is about 50MW, and the customer is reportedly a large social networking website. When they change equipment, we benefit from the disposal.
Without knowing why they were sold and what they were replaced with, you really don't know, do you? "Changing equipment" could very well be a euphemism for "I'm bleeding cash, so I'm shutting down infrastructure and having a fire sale to save money".
Here's a little conjecture from me. Whoever designed their storage realized that they could build 100TB+ whitebox storage arrays that gave them high data availability by replicating their data across the 4-drive or JBOD volumes they create with those cards. They probably got the idea from Backblaze. Backblaze builds their own storage that uses cheap Syba SiI-based controllers and 3TB consumer SATA drives. By doing this they can build incredibly dense storage that's so cheap they simply use multiple cheap boxes to create redundancy. The reliability doesn't come from the hardware they use; it comes from having enough hardware to absorb a large amount of failure. By building this way, they can lose entire storage boxes and not lose any data. They build cheap with failure in mind and keep someone onsite to manage the hardware.
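To put rough numbers on that idea: if each whole box fails independently, the chance of losing every replica of a piece of data drops off exponentially with the replica count. A minimal back-of-the-envelope sketch in Python (the 5% annual box-failure rate is a made-up illustration, not a Backblaze figure, and it ignores re-replication after a failure):

```python
# Back-of-the-envelope durability of N-way replication across whole boxes.
# Simplifying assumptions (mine): box failures are independent, and no
# re-replication happens within the window -- real systems rebuild replicas
# after a failure, which makes the real numbers much better.

def p_data_loss(p_box_failure: float, replicas: int) -> float:
    """Probability that every box holding a replica of the data fails."""
    return p_box_failure ** replicas

# Illustrative value only: assume a 5% chance a given cheap box dies in a year.
p = 0.05
for n in (1, 2, 3, 4):
    print(f"{n} replica(s): ~{p_data_loss(p, n):.6%} chance of losing the data")
```

The takeaway matches the conjecture above: durability comes from the replica count across boxes, not from any single box being reliable.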
It is a cool concept though, so we were inspired by Backblaze at our company to do the same kind of thing. We have our infrastructure spread over 4 DCs in the U.S. and in Europe, and we were pretty excited about the possibility of building out a cheap storage network to use for things like D2D backup, DR replication, and archival purposes. The first test box we built started off with cheap RAID controllers like the ones from Syba that Backblaze was using at the time, plus some Highpoint controllers, in a 24-drive Supermicro 846 chassis with the 24-port SATA/SAS backplane and consumer-grade 2TB SATA drives. We tried all sorts of setups: Windows with fakeraid, Windows with RAID on card, Windows with JBOD, Linux with JBOD, OpenSolaris with ZFS, etc. Aside from the mediocre performance, the problem was reliability: drives would disappear while the box was running, and with the Windows setups, entire volumes sometimes wouldn't get picked back up after a reboot.
We switched to the 846 chassis with the 8087 ports on the backplane and a single $150.00 LSI card with 8087 SAS connectors, and our problems went away. Performance and reliability were like night and day, and the setup was a lot simpler, which was important for storage that was going into our datacenters. We don't have the luxury of having our own on-site tech like Backblaze; we rely on hands-on work from the DC staff, which can be a tad shaky sometimes.

We build almost the exact same box now. We've moved to 36-drive chassis and hardware RAID 5 cards for the OS flexibility they provide. We're close to 500TB of storage built on this model now, and the boxes are rock solid; other than the usual drive failures, the only problem we've run into is a 3-year-old RAID controller battery that needed replacement. Our only point of failure is the single motherboard/CPU/memory/RAID controller, but we may be moving to the SBB chassis from Supermicro so we have redundant controllers. By the time we're done building it all out, I plan on making sure that we could lose access to 3 out of our 4 DCs and still have access to our critical data at the 4th.
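To give a sense of how that ~500TB adds up under this model, here's a rough capacity sketch. The 2TB drive size (carried over from the earlier test boxes) and a single RAID 5 group per chassis are my assumptions, not details from the build:

```python
import math

# Rough usable-capacity math for the 36-drive RAID 5 chassis described above.
# Assumptions (mine, not from the post): 2TB drives as in the earlier test
# boxes, one RAID 5 group per chassis (n-1 drives usable, one drive's worth
# of parity), and no hot spares or filesystem overhead.

DRIVE_TB = 2
DRIVES_PER_CHASSIS = 36

usable_per_chassis = (DRIVES_PER_CHASSIS - 1) * DRIVE_TB  # RAID 5: n-1 usable
print(f"Usable per chassis: {usable_per_chassis} TB")     # 70 TB

target_tb = 500
boxes = math.ceil(target_tb / usable_per_chassis)
print(f"Chassis needed for ~{target_tb} TB: {boxes}")     # 8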
The big thing we learned from this exercise was that in the long haul you save money, time, and frustration by spending a little more at the outset. You may not be building enterprise-class servers, but that still doesn't mean you shouldn't plan ahead, spend a little extra, and buy cards that offer better performance, reliability, and expandability.