Need a SATA controller card recommendation - AVS Forum
post #1 of 36 Old 08-31-2012, 01:12 PM - Thread Starter
wyen78 - Senior Member (Join Date: Oct 2011 | Location: Philly, PA | Posts: 307)
I have this MOBO

Gigabyte GA-Z68MA-D2H-B3
http://www.gigabyte.com/products/product-page.aspx?pid=3855#ov

I have a GPU in the PCIe x16 slot; all other expansion slots are open (x8, x4, x1). I would prefer to use the x4 or x1 since the x8 shares bandwidth with the x16.

I am using Windows 7 Ultimate 64-bit.

I need to add 4 SATA ports for 4x 3TB drives for media storage (5900 or 7200 rpm drives). I'm not against using two 2-port cards if needed.

I do not need RAID, I just need to hook up more drives and not have them bottlenecked by the card.
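A quick back-of-the-envelope on that bottleneck question (my rough assumptions, not measured numbers: ~150 MB/s sustained per 5900/7200 rpm drive, ~500 MB/s usable per PCIe 2.0 lane):

Code:
# Rough check: can one card feed 4 media drives running flat out?
# Assumed figures (mine): ~150 MB/s per spinning drive,
# ~500 MB/s usable per PCIe 2.0 lane after encoding overhead.
drives = 4
per_drive = 150   # MB/s, sustained sequential
per_lane = 500    # MB/s, per PCIe 2.0 lane

need = drives * per_drive   # 600 MB/s worst case, all 4 streaming
for lanes in (1, 4):
    have = lanes * per_lane
    print(f"x{lanes} slot: need {need} MB/s, have {have} MB/s ->",
          "fine" if have >= need else "possible bottleneck")

So an x1 card could in theory pinch all 4 drives going at once, while an x4 card has headroom to spare; for media playback I'd rarely stream from all 4 at full speed anyway.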

If any additional info is needed to make a recommendation, please let me know and I'll provide whatever I can. I think I'll also need some kind of adapter/splitter to provide more SATA power connections from the power supply; any suggestions on what I need?

Thanks

post #2 of 36 Old 08-31-2012, 02:12 PM
captain_video - AVS Special Member (Join Date: Jan 2002 | Location: Ellicott City, MD | Posts: 3,486)
Check out the Lime Technology forums for unRAID. There are numerous recommended SATA cards there that can be used either as individual drive controllers or for controlling drives in an array. Adaptec makes a good four-drive controller, but I don't recall the model number offhand.
post #3 of 36 Old 08-31-2012, 07:54 PM
dksc318 - Senior Member (Join Date: Apr 2012 | Location: San Jose, CA | Posts: 453)
I am using HighPoint cards such as the 2300 and 2640. These are RAID cards, but they work as JBOD too. The newer 620 and 640L series support SATA3 6Gb/s HDDs.
post #4 of 36 Old 08-31-2012, 08:25 PM
jrwalte - AVS Special Member (Join Date: Jun 2007 | Posts: 2,537)
Supermicro AOC-SASLP-MV8. PCIe x4, and it supports 8 SATA drives. You'll need to get 2x SAS SFF-8087 to SATA breakout cables (make sure the cable does not say reverse; a forward breakout cable runs from the card's 8087 port out to individual SATA drives, while a reverse breakout is for wiring motherboard SATA ports to a backplane and won't work here).

post #5 of 36 Old 09-01-2012, 06:37 PM
captain_video - AVS Special Member (Join Date: Jan 2002 | Location: Ellicott City, MD | Posts: 3,486)
The Adaptec 1430SA was the adapter I was thinking of. It's a 4-port controller for a PCIe x4 slot, but it works fine in a PCIe x16 slot. However, it costs about the same as the Supermicro card referenced above, making the Supermicro the better deal. OTOH, the Adaptec has standard SATA connectors, allowing you to use standard SATA cables; the Supermicro requires special cables that can cost you about $10-15 apiece (you need two). I got one of the Adaptec adapters on eBay a while back for about $50 (there's currently one listed with a Buy-It-Now price of $35 and only $5 shipping). I'm currently using two of the Supermicro adapters in an unRAID server. Both cards work equally well and I've never had any issues with either model.
post #6 of 36 Old 09-02-2012, 05:45 PM
hdkhang - AVS Special Member (Join Date: Aug 2004 | Location: Sydney, Australia | Posts: 2,164)
Quote:
Originally Posted by captain_video View Post

The Adaptec 1430SA was the adapter I was thinking of. It's a 4-port controller for a PCIe x4 slot, but it works fine in a PCIe x16 slot...

Do you know if the 1430SA supports 3TB drives?

To the OP: the cheapest way to add 4 drives, from what I can gather, is 2x ASM1061 cards off eBay (about $12 each)... shipping might take a while. They support 3TB drives and are SATA3 (I wouldn't use them for SSDs, though, since each card sits on a single PCIe lane).
post #7 of 36 Old 09-02-2012, 08:17 PM
robnix - AVS Special Member (Join Date: Dec 2003 | Posts: 1,580)
Quote:
Originally Posted by jrwalte View Post

Supermicro AOC-SASLP-MV8. PCIe x4, and it supports 8 SATA drives...

I use the Intel SASUC8I, based on the LSI 1068E. Supermicro's version is the USAS-L8i. Solid cards with better OS support than the AOC-SASLP-MV8. Never was a big fan of Marvell chips anyway.

post #8 of 36 Old 09-03-2012, 08:14 AM
xfett - Senior Member (Join Date: Jan 2008 | Posts: 294)
The Intel SASUC8I and the IBM BR10i are both rebranded LSI SAS3082E cards based on the LSI 1068e chipset. All of these cards can be flashed with IT firmware to act as basic SAS adapters. The only drawback is no 3TB support; unless someone knows of a fix, my cards won't recognize 3TB drives.

"Man, I told you we shouldn't have shot Niedermeyer"

xfett is offline  
post #9 of 36 Old 09-03-2012, 08:30 AM
dksc318 - Senior Member (Join Date: Apr 2012 | Location: San Jose, CA | Posts: 453)
Did you change the disk to a GUID Partition Table (GPT)? The maximum for MBR is 2TB.
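That limit falls straight out of the MBR format (assuming the usual 512-byte sectors), since partition sizes are stored as 32-bit sector counts:

Code:
# Why MBR tops out near 2TB: 32-bit LBA sector count x 512-byte sectors.
max_sectors = 2 ** 32
sector_bytes = 512
limit = max_sectors * sector_bytes
print(limit)          # 2199023255552 bytes
print(limit / 1e12)   # ~2.2 decimal TB

That ~2.2TB figure is exactly what a bigger drive gets truncated to when something in the chain is stuck at 32-bit LBA.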
post #10 of 36 Old 09-03-2012, 08:46 AM
robnix - AVS Special Member (Join Date: Dec 2003 | Posts: 1,580)
Quote:
Originally Posted by xfett View Post

The Intel SASUC8I and the IBM BR10i are both rebranded LSI SAS3082E cards based on the LSI 1068e chipset... The only drawback is no 3TB support.

IBM ServeRAID M1015 cards are usually around $160.00 on eBay, but can be had for under $100.00 if you get lucky. They're the same card as the LSI 9240, which uses the LSI SAS2008 chipset, and that one does have 3TB support.
post #11 of 36 Old 09-03-2012, 09:08 AM
xfett - Senior Member (Join Date: Jan 2008 | Posts: 294)
Quote:
Originally Posted by dksc318 View Post

Did you change the disk to a GUID Partition Table (GPT)? The maximum for MBR is 2TB.

Doesn't matter what you change; the limit is in the card, not the partition table. If I plug a 3TB drive into any of my BR10i's it shows up as 2.2TB. Like robnix said, I'd have to upgrade to a card that uses the LSI SAS2008 chipset, which supports 3TB drives. No biggie; I just run the 3TB drive off my mobo.

"Man, I told you we shouldn't have shot Niedermeyer"

xfett is offline  
post #12 of 36 Old 09-04-2012, 12:13 PM - Thread Starter
wyen78 - Senior Member (Join Date: Oct 2011 | Location: Philly, PA | Posts: 307)
Quote:
Originally Posted by dksc318 View Post

I am using HighPoint cards such as the 2300 and 2640. These are RAID cards, but they work as JBOD too. The newer 620 and 640L series support SATA3 6Gb/s HDDs.

Thanks. I think I will get this
http://www.newegg.com/Product/Product.aspx?Item=N82E16816115114&SortField=0&SummaryType=0&PageSize=10&SelectedRating=-1&VideoOnlyMark=False&IsFeedbackTab=true#scrollFullInfo

The reviews say there is an issue with RAID, but I only need JBOD.

Will this work?

Thanks for all the suggestions, but this card is a bit cheaper and I don't need extra cabling for it, so if it works with 3TB drives and doesn't create a speed bottleneck, I'd be happy with it. If anyone sees an issue with this card in my system, please let me know... I'm planning on buying one in the next 6 months. I still have 4TB free, so I'm good for a little while longer.

post #13 of 36 Old 09-04-2012, 01:07 PM
robnix - AVS Special Member (Join Date: Dec 2003 | Posts: 1,580)
Quote:
Originally Posted by wyen78 View Post

Thanks. I think I will get this ... The reviews say there is an issue with RAID, but I only need JBOD. Will this work?

You get what you pay for. JBOD performance isn't very good on those cards with more than two drives attached, and Marvell chipsets aren't exactly known for stability. Better performance, better reliability, and more room for expansion down the road are only $100.00 more.

post #14 of 36 Old 09-04-2012, 01:35 PM
assassin - AVS Addicted Member (Join Date: Jul 2004 | Posts: 12,894)
Quote:
Originally Posted by robnix View Post

You get what you pay for. JBOD performance isn't very good on those cards with more than two drives attached, and Marvell chipsets aren't exactly known for stability. Better performance, better reliability, and more room for expansion down the road are only $100.00 more.

Watch it. I made this same statement and a certain someone in this thread accused me of "defamation".
post #15 of 36 Old 09-04-2012, 02:22 PM
bomberjim - Member (Join Date: Jan 2008 | Posts: 136)
Maybe I misread it, but I didn't see where it says JBOD is supported. I also didn't see that it supports 3TB drives. I'd be careful with this card.
post #16 of 36 Old 09-05-2012, 09:54 AM
robnix - AVS Special Member (Join Date: Dec 2003 | Posts: 1,580)
Quote:
Originally Posted by assassin View Post

Watch it. I made this same statement and a certain someone in this thread accused me of "defamation".

Marvell makes second-tier chips; there's a reason that you see them in budget cards.

post #17 of 36 Old 09-06-2012, 08:00 PM
dksc318 - Senior Member (Join Date: Apr 2012 | Location: San Jose, CA | Posts: 453)
Quote:
Originally Posted by robnix View Post

Marvell makes second-tier chips; there's a reason that you see them in budget cards.

Great info. So we all should stop buying WD and Samsung HDDs with second-tier chips inside?
http://www.marvell.com/company/news/pressDetail.do?releaseID=2236
post #18 of 36 Old 09-06-2012, 10:12 PM
robnix - AVS Special Member (Join Date: Dec 2003 | Posts: 1,580)
Quote:
Originally Posted by dksc318 View Post

Great info. So we all should stop buying WD and Samsung HDDs with second-tier chips inside?
http://www.marvell.com/company/news/pressDetail.do?releaseID=2236

For RAID/SATA controllers they're still second-tier chips. Performance and reliability are worse than anything you can get from LSI, Areca, or even Intel.

Here's some awesome BSOD fun with the 9100 series and SSD drives.

http://lmgtfy.com/?q=marvell+91xx+bsod

People with X58 boards that have the Intel and Marvell chipsets on board were moving all their drives to the Intel controller.

For consumer drives, some of your drives will most likely fail, so buy the drive with the best warranty.

post #19 of 36 Old 09-06-2012, 10:25 PM
dksc318 - Senior Member (Join Date: Apr 2012 | Location: San Jose, CA | Posts: 453)
I see, so you are saying Marvell uses different reliability standards for HDD chips and controller chips: higher standards for the cheaper HDD chips and lower standards for the higher-priced controller chips?

WD ships some bare-drive kits with HighPoint/Marvell controllers (the Rocket 620). You think WD accepts a lower reliability standard?
post #20 of 36 Old 09-06-2012, 11:30 PM
dksc318 - Senior Member (Join Date: Apr 2012 | Location: San Jose, CA | Posts: 453)
Quote:
Originally Posted by bomberjim View Post

Maybe I misread it, but I didn't see where it says JBOD is supported. I also didn't see that it supports 3TB drives. I'd be careful with this card.

According to the HighPoint website:
http://www.highpoint-tech.com/PDF/RR640L/RocketRAID%2064xL_Rocket%2064xL_Compatibility_List.pdf
3TB support: yes
http://www.highpoint-tech.com/USA_new/series_rr640L.htm
JBOD support: yes
post #21 of 36 Old 09-07-2012, 04:58 AM
arnyk - AVS Addicted Member (Join Date: Oct 2002 | Location: Grosse Pointe Woods, MI | Posts: 13,654)
Quote:
Originally Posted by dksc318 View Post

I see, so you are saying Marvell uses different reliability standards for HDD chips and controller chips: higher standards for the cheaper HDD chips and lower standards for the higher-priced controller chips?
WD ships some bare-drive kits with HighPoint/Marvell controllers (the Rocket 620). You think WD accepts a lower reliability standard?

Often what you get from the so-called first-tier suppliers is dominated by a fancier package: better instructions, more in-the-box support (both hardware and software), and a nice web site with a well-managed forum, etc.

That gives you warm fuzzies and may help if you can't solve your own problems.

Very picky motherboard manufacturers such as Asus put some of these same so-called second-tier chips on their motherboards.

Some of the first-tier chips also have more intelligence and put less load on the host CPU. I can still remember when the CPU load from a cheap controller could make a difference, but I haven't run 486s for many years. ;-)
post #22 of 36 Old 09-07-2012, 08:27 AM
dksc318 - Senior Member (Join Date: Apr 2012 | Location: San Jose, CA | Posts: 453)
I'm too poor to buy the ones in the pretty boxes, lol. I got my cards from data center equipment recycling places around Silicon Valley. If these things were so bad, they wouldn't have been in data centers in the first place.
post #23 of 36 Old 09-07-2012, 01:50 PM
robnix - AVS Special Member (Join Date: Dec 2003 | Posts: 1,580)
Quote:
Originally Posted by dksc318 View Post

I see, so you are saying Marvell uses different reliability standards for HDD chips and controller chips... You think WD accepts a lower reliability standard?

The 620 has the same 91xx chipset that I linked to. It's been unusable at its worst and mediocre at best. It advertised 6Gb/s SATA III but only had a single PCIe lane, so it got outperformed by SATA II controllers. It wasn't stable until some firmware and driver fixes down the road helped out, but even then it was still outperformed by SATA II controllers.

The reason it was bundled is simple: WD needed a controller that had 3TB support. They got those cards for next to nothing because of their existing deals with Marvell, so the cards let WD keep the thing that's most important to them: low shelf prices. I'd be stunned if anyone at WD marketing thought about anything other than finding a way to get those 3TB drives into consumer hands.
Quote:
Originally Posted by arnyk View Post

Often what you get from the so-called first-tier suppliers is dominated by a fancier package... Very picky motherboard manufacturers such as Asus put some of these same so-called second-tier chips on their motherboards.

I don't disagree that there are companies making great products sold in plain packages with few frills, but this isn't one of those cases. ASUS isn't as picky as you think, either. Ever since ASUS made its push into markets other than motherboards and video cards, their QC has gone downhill in favor of cost savings. Google "Asus X58 Sabertooth Marvell": the going recommendation on that $200.00 motherboard was to avoid the performance and BSOD issues the Marvell 9128 SATA III chipset was causing and use the Intel SATA II controller instead.

post #24 of 36 Old 09-07-2012, 06:30 PM
dksc318 - Senior Member (Join Date: Apr 2012 | Location: San Jose, CA | Posts: 453)
Quote:
Originally Posted by robnix View Post

For RAID/SATA controllers they're still second-tier chips. Performance and reliability are worse than anything you can get from LSI, Areca, or even Intel.
Here's some awesome BSOD fun with the 9100 series and SSD drives.
http://lmgtfy.com/?q=marvell+91xx+bsod
....

Google is very powerful for sure:
http://lmgtfy.com/?q=lsi+controller+bsod

http://lmgtfy.com/?q=areca+bsod

You can pretty much get it to support anything.
post #25 of 36 Old 09-07-2012, 08:52 PM
robnix - AVS Special Member (Join Date: Dec 2003 | Posts: 1,580)
Quote:
Originally Posted by dksc318 View Post

Google is very powerful for sure:
http://lmgtfy.com/?q=lsi+controller+bsod
http://lmgtfy.com/?q=areca+bsod
You can pretty much get it to support anything.

Not much there once you go past the summary. Eight of the first 11 links in the LSI search were about the VMware/Citrix virtual LSI driver and VM issues, and link 7 was a review from Tom's Hardware that ends with the LSI card as the top choice of the RAID controller test, ahead of Areca and Intel. The Marvell-based HighPoint RAID controller came in last.
Quote:
Overall, LSI's MegaRAID 9265-8i is the fastest controller, especially with regard to I/O throughput. While it does have some weak spots, like not-so-stellar RAID 5 and 6 performance, the MegaRAID 9265-8i wins most benchmarks and presents itself as a well-rounded, professional-grade solution. Its $630 street price is also the highest in this round-up though, so you have to keep that in mind. But for that price you get a future-proof controller that runs rings around its competition, especially mated to SSDs. It clearly has performance in reserve, which could come in handy when more storage is added down the road. Moreover, you can enhance the performance of LSI's MegaRAID 9265-8i by adding the FastPath or CacheCade upgrades, so long as you're willing to pay extra for them.

The Adaptec RAID 6805 and Areca ARC-1880i controllers offer similar performance at similar mid-range price points ($460 versus $540 street price). Both are good performers as far as data throughput goes, in addition to I/O. The Adaptec controller ekes out a small performance advantage over the Areca controller, and it also offers the desirable Zero Maintenance Cache Protection (ZMCP) feature, replacing conventional battery backup units and the service that goes into keeping them running with a bit of NAND flash and a capacitor.

HighPoint's RocketRAID 2720SGL sells for about $170, which is the bargain bin compared to the three other controllers reviewed today. Its performance is acceptable if you're using it with conventional disk drives, though clearly a step down from what Adaptec or Areca give you. Don't use this one with SSDs.

But at $170.00 it's the cheapest card in the test, so I guess you're getting what you pay for there.

The Areca link isn't much different: most of the first page is FixYa threads, which have a tendency to get grouped together in Google searches, or the same question from one person posted on different forums.

So yes, you can get Google to come up with anything you want, really; whether it's relevant or not is another story.

post #26 of 36 Old 09-08-2012, 01:32 AM
dksc318 - Senior Member (Join Date: Apr 2012 | Location: San Jose, CA | Posts: 453)
It is safe to say that here we are not building servers that require top performance, as in corporate or commercial servers, so few of us really spend that kind of money or are particularly critical of these cards' performance.

From what I can see, the Marvell/HighPoint cards were used in large quantities in data centers, since I could get quite a few of them cheap locally. From that I concluded they don't have "reliability" problems. When I saw "reliability" pop up as a reason they should not be used, I knew that argument was not sustainable.

Data centers here are massive in scale, and they are dotted everywhere. This one is near my office:
http://www.vantagedatacenters.com/data-centers/
Vantage is about 50MW; the customer is reportedly a large social networking website. When they change equipment, we benefit from the disposal.
post #27 of 36 Old 09-08-2012, 11:00 AM
captain_video - AVS Special Member (Join Date: Jan 2002 | Location: Ellicott City, MD | Posts: 3,486)
Say what you want about Marvell chips. All I know is that I'm using two of the Supermicro AOC-SASLP-MV8 controllers in my unRAID server and they work as advertised. Never had a problem with either of them.
post #28 of 36 Old 09-08-2012, 11:07 AM
robnix - AVS Special Member (Join Date: Dec 2003 | Posts: 1,580)
Quote:
Originally Posted by dksc318 View Post

It is safe to say that here we are not building servers that require top performance... From what I can see, the Marvell/HighPoint cards were used in large quantities in data centers, since I could get quite a few of them cheap locally. From that I concluded they don't have "reliability" problems.

Without knowing why they were sold and what they were replaced with, you really don't know, do you? "Changing equipment" could very well be a euphemism for "I'm bleeding cash, so I'm shutting down infrastructure and having a fire sale to save money."

Here's a little conjecture from me. Whoever designed their storage realized that they could build 100TB+ whitebox storage arrays that gave them high data availability by replicating their data across the 4-drive or JBOD volumes they create with those cards. They probably got the idea from Backblaze. Backblaze builds their own storage using cheap Syba Silicon Image (SiI) based controllers and 3TB consumer SATA drives. By doing this they can build incredibly dense storage that's so cheap they simply use multiple cheap boxes to create redundancy. The reliability doesn't come from the hardware they use; it comes from having enough hardware to absorb a large amount of failure. By building this way, they can lose entire storage boxes and not lose any data. They build cheap with failure in mind and keep someone onsite to manage the hardware.
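To put toy numbers on that idea (my figures, not Backblaze's, and assuming box failures are independent):

Code:
# Toy model: availability from replica count, not premium hardware.
# Assume each cheap storage box has a 5% chance of being down at
# any given moment.
p_box = 0.05
for copies in (1, 2, 3):
    p_lost = p_box ** copies   # data unreachable only if every box holding a copy is down
    print(f"copies={copies}: {p_lost:.4%} chance all are down at once")

Three sloppy boxes end up orders of magnitude more available than one carefully built box, which is the whole trick.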

It is a cool concept though, so we were inspired by Backblaze at our company to do the same kind of thing. We have our infrastructure spread over 4 DCs in the U.S. and in Europe, and we were pretty excited about the possibility of building out a cheap storage network to use for things like D2D backup, DR replication, and archival purposes. The first test box we built started off with cheap RAID controllers like the ones from Syba that Backblaze was using at the time, plus some HighPoint controllers, in a 24-drive Supermicro 846 chassis with the 24-port SATA/SAS backplane and consumer-grade 2TB SATA drives. We tried all sorts of setups: Windows with fakeraid, Windows with RAID on card, Windows with JBOD, Linux with JBOD, OpenSolaris with ZFS, etc. Aside from the mediocre performance, the problem was reliability: drives would disappear while the box was running, and with the Windows setup, entire volumes sometimes wouldn't get picked back up after a reboot.

We switched to the 846 chassis with the 8087 ports on the backplane and a single $150.00 LSI card with 8087 SAS connectors, and our problems went away. Performance and reliability were like night and day, and the setup was a lot simpler, which was important for storage going into our datacenters. We don't have the luxury of our own on-site tech like Backblaze; we rely on hands-on work from the DC staff, which can be a tad shaky sometimes. We build almost the exact same box now, though we've moved to 36-drive chassis and hardware RAID 5 cards for the OS flexibility they provide. We're close to 500TB of storage built on this model now and they're rock solid; other than the usual drive failures, the only problem we've run into is a 3-year-old RAID controller battery that needed replacement. Our only point of failure is the single motherboard/CPU/memory/RAID controller, but we may be moving to the SBB chassis from Supermicro so we have redundant controllers. By the time we're done building it all out, I plan on making sure that we could lose access to 3 of our 4 DCs and still have access to our critical data at the 4th.

The big thing we learned from this exercise was that in the long haul you save money, time, and frustration by spending a little more at the outset. You may not be building enterprise-class servers, but that still doesn't mean you shouldn't plan ahead, spend a little extra, and buy cards that offer better performance, reliability, and expandability.

post #29 of 36 Old 09-08-2012, 12:53 PM
dksc318 - Senior Member (Join Date: Apr 2012 | Location: San Jose, CA | Posts: 453)
Quote:
Originally Posted by robnix View Post

Without knowing why they were sold and what they were replaced with, you really don't know, do you? ... The big thing we learned from this exercise was that in the long haul you save money, time, and frustration by spending a little more at the outset.

Time in service is important too. Of course I checked their date codes: mainly 2005-2008, so I know they've been in service for quite a while.

Google's and Facebook's "warehouse-scale computers" use the concept you outlined. There are no add-in cards; the 4-6 HDDs are connected directly to the server motherboard. The HPs, IBMs, Dells, Hitachi Data Systems, and Ciscos of the world don't get any business from them. They completely changed the concepts of how to build a large server. The key people are top brains from all over the world, though many are from Stanford.

By the way, the Supermicro card captain_video referred to is Marvell-based.
post #30 of 36 Old 09-08-2012, 02:01 PM
robnix - AVS Special Member (Join Date: Dec 2003 | Posts: 1,580)
Quote:
Originally Posted by dksc318 View Post

Time in service is important too. Of course I checked their date codes: mainly 2005-2008... Google's and Facebook's "warehouse-scale computers" use the concept you outlined. There are no add-in cards...

We're discussing slowly migrating off the EMC/Isilon gear that we currently use for our critical data and moving to much cheaper custom-built hardware with some sort of distributed FS architecture. Personally, I think the days of big, expensive hardware dominating the storage landscape are dwindling, and that people will move towards purpose-built whitebox hardware or towards the smaller storage companies that offer more efficient and elegant solutions. We have some interesting plans for 7U 8-node ESX packages all built around whitebox hardware. Personally, I'm looking forward to building and testing my first flash array next year; 16 SSDs used for temp storage with a 128-core test compute lab should be fun.
Quote:
Originally Posted by dksc318 View Post

By the way, the Supermicro card captain_video referred to is Marvell-based.

If you don't do much with them they can be OK, but they're still single-lane PCI Express (x1) cards. They had some serious issues with their Linux drivers as recently as last year as well; drives would simply disappear. AFAIK they've been EOL'd, and Supermicro is using LSI or Intel almost exclusively now.
