
Registered · 3,714 Posts · Discussion Starter · #1


Where I first saw it http://arstechnica.com/gadgets/2015...rked-to-death-and-last-far-longer-than-rated/

The Experiment
http://techreport.com/review/24841/introducing-the-ssd-endurance-experiment

Pushing till they died
http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead


The first deaths chart


As you can see, only the Samsung 840 even had reallocated sectors after 100TB of writes; other models went to 600TB before seeing any reallocated sectors.

Putting that into perspective, say you wanted to do something silly on your SSD like rip a blu-ray to it every day. Doing that every day for a year would put you at around 11TB of writes (assuming the high-end blu-ray rip is around 38GB and the low end around 22GB, and just taking 30GB as an average).

Even with that silly example, you could stress the SSD that way for 9 years before seeing any reallocated sectors in the worst case of all the drives tested. According to their test results, you'd have to do this for over 50 years before any of the drives they tested failed, and that was the earliest failure :D Or you could write two blu-rays a day for 25 years ;)
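If anyone wants to check my math, here's a quick Python sketch (the 30GB/day figure and the TB thresholds are just the assumptions from above):

Code:
# Back-of-the-envelope SSD endurance math (decimal units throughout).
GB_PER_DAY = 30            # assumed average blu-ray rip size
DAYS_PER_YEAR = 365

tb_per_year = GB_PER_DAY * DAYS_PER_YEAR / 1000    # ~11 TB/yr

milestones = [
    ("first reallocated sectors (Samsung 840)", 100),
    ("first reallocated sectors (other drives)", 600),
    ("earliest outright failure", 700),
]
for label, tb in milestones:
    print(f"{label}: {tb / tb_per_year:.0f} years at one rip per day")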

Many people have said this isn't really an issue, and many of us had already decided the performance boost in day-to-day Windows use was worth it regardless of whether we'd have to replace the drive within a few years. Prices were also cheap enough that replacement wasn't a huge concern, but now we can breathe a sigh of relief: these drives will be irrelevant technology in terms of speed long before my typical usage ever sees that volume of writes to any of the SSDs I own. Now we just need a cost-benefit analysis showing all the datacenters of the world how much they'd save by migrating all their data to SSDs ;) Then more production and lower prices. Can't wait for my solid-state server :D I guess the jury is still out on MTBF in hours, but I don't think SSDs will ever see the same treatment, since power-on hours just don't degrade SSD life the way they do for spinning drives.
 

Registered · 5,044 Posts
You just made Mfusick a very happy man. :D

TBH, I don't see SSDs as a viable solution for mass storage anytime in the near future. The cost per GB is not even close to being competitive with standard rotating hard drives. SSDs have come down considerably in price, but they still have a long way to go to catch up with standard drives, especially with 6-8TB drives being introduced in the $200-300 price range.
 

Registered · 3,714 Posts · Discussion Starter · #3
TBH, I don't see SSDs as a viable solution for mass storage anytime in the near future. The cost per GB is not even close to being competitive with standard rotating hard drives. SSDs have come down considerably in price, but they still have a long way to go to catch up with standard drives, especially with 6-8TB drives being introduced in the $200-300 price range.
I know this and realize it's the current situation, but look at today's landscape:

1) Personal home media servers (we don't matter and account for hardly anything :D )
2) Corporate PC sales (count for a lot) - still shipping with HDDs, but for how much longer? A few of our smaller clients (100-500 computers) have already upgraded from their old Dell XP boxes to Dell 6250s with SSDs across the board. Of course, our own (1000+) computers still come with 320GB WD Blacks. Our large clients (2000+) have mostly gone with HP recently, but still with HDDs.
3) Consumer boxes (don't count for very much, and the higher-end models are exclusively SSD)
4) Tablets/smartphones - all flash (accounts for a lot)
5) Big data (accounts for a lot - NSA, Facebook, Google, etc.) - already hybridized, but the bulk is still HDD

There's no way to cut it other than to realize how freakin' much NAND we're going to be consuming from here on out, until something much better comes along (and this trend started 4+ years ago).

In my opinion, if segment 2 or 5 goes (even if only one big datacenter starts the trend), all the rest will soon follow. If 1-5 are all heading that direction, prices will have no option but to fall, because you can guarantee supply will be created to meet the demand.

There is also a lot less to engineer for flash-based datacenters ~ dust, heat, noise, and vibration become much less significant factors.
 

Banned · 2,696 Posts
I will preface this by saying I firmly believe that SSDs are perfectly adequate from a reliability/longevity standpoint for home use (including HTPC use). And I didn't re-read the entire thing (I read it over a year ago), but I did give it a glance again.

That said, I think the methodology was a bit questionable for this test.

They were writing mostly sequential files, filling the drives, then deleting and starting over, keeping 10GB of static data. That pretty much negates most wear leveling, so while the numbers they arrive at aren't completely meaningless, they aren't completely realistic either.

Usage patterns will obviously vary from user to user, but I think a drive with 50% or even 66% static data is a more realistic estimate for most people, and writing a lot of dynamic data to a drive that is half (or two thirds) full is going to challenge the wear leveling much more than one with only 5% of the drive filled.

I think the only use of an SSD that would yield a longer life than this test suggests is as a landing disk that is constantly filled and flushed (which is a pretty rare usage case overall).
 

Registered · 3,714 Posts · Discussion Starter · #5
What about a manufactured reserve? A few drive makers are doing this already (selling 256GB drives with 240GB available).
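Just to put a number on that reserve idea, a quick sketch using the 256GB/240GB example (this is just the raw arithmetic; actual vendor over-provisioning schemes vary):

Code:
# Rough over-provisioning math for a 256GB drive sold as 240GB.
raw_gb, usable_gb = 256, 240
reserve_gb = raw_gb - usable_gb
print(f"reserve: {reserve_gb} GB ({reserve_gb / raw_gb:.1%} of raw NAND)")
# -> reserve: 16 GB (6.2% of raw NAND)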

I agree 10% static data is not a realistic use case. I'd guess they arrived at that with just a bare Windows 8 install size.

For long-term storage (a use case that still has a few economies of scale to work through before prices get there), good wear-leveling performance isn't really required. Fill levels and static data become very well-known quantities.
 

Registered · 2,488 Posts
What about a manufactured reserve? A few drive makers are doing this already (selling 256GB drives with 240GB available).
Technically that's not a reserve (if you mean re-allocation space?); that's "overhead" or working space for the drive's controller. Though I'm guessing this is where the reallocated sectors come from anyway (?): the drive re-allocates sectors from the overhead and thus lessens the available overhead. So in a way it could be a reserve, but that's not the primary purpose of features like RAISE, etc.


TBH though, I was never worried about this wear/failure business anyway. I have seven SSDs myself and have put one each in my sister's and mom's laptops (making nine total that I have close experience with), and I've never seen any problems with any of them (*knocks on wood*). My oldest is a Barefoot-based Vertex [1] 60GB drive and it still works fine too.


Granted, about half of these drives don't see much use and none of them see any heavy/frequent writes, but this is "normal use" for me, so that's what matters to me. For the drives that have drive-life/expectancy ratings in SMART, I think they're all still at 100%. Surely some drives fail earlier than that, but I've been lucky enough to have had no problems across the board.


But good to know drives are holding up well to these tests/studies, nonetheless.
 

Registered · 3,127 Posts
Well, that tells me the drives will handle a lot of sustained writes.
I'm not sure that's necessarily an indicator of good longevity.

And the failure state, particularly for the Intel SSD, is quite worrying.

http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead said:
Intel's 335 Series failed much earlier, though to be fair, it pulled the trigger itself. The drive's media wear indicator ran out shortly after 700TB, signaling that the NAND's write tolerance had been exceeded. Intel doesn't have confidence in the drive at that point, so the 335 Series is designed to shift into read-only mode and then to brick itself when the power is cycled. Despite suffering just one reallocated sector, our sample dutifully followed the script. Data was accessible until a reboot prompted the drive to swallow its virtual cyanide pill.
So the drive destroys all the data on it even though there was only one reallocated sector? That's absurd.
And why is the failure state to destroy all the data? I would much rather the drive locked itself into a read-only state than destroy itself.

In the event of a hard drive failure, if you really need that data, you can usually recover some if not all of it.
It may require sending it off to a specialist to rebuild the drive, but your data can often be recovered.

And if the data is important, look at the cost of the drives.
I can buy 8x 500GB hard drives for the price of one 500GB SSD.
Would you rather trust a single SSD with your data, or have 8 copies of it?
 

Registered · 2,488 Posts
I think it's absurd that you can't recover the data, because as an unknowing user you probably won't get any sort of warning that the data won't be readable after the next power cycle. Granted, if you have SMART monitoring on, in your BIOS/UEFI or in a program in Windows (like SpeedFan), you may get an alert when you get down to a low drive life % remaining.

I'm going to assume the data is not actually destroyed, though; I'm thinking it's still on the drive, but the firmware has made the controller completely lock out access (write and read). I wonder if Intel will release a firmware update so the data is still readable once the drive reaches 0% life remaining/100% write tolerance, in light of this discovery. It shouldn't require any kind of specialty service (the cost of which is very high/prohibitive for the home user); the drive should just be readable (at least), plain and simple. Hence my thinking Intel might be pressed for a firmware update to fix this.


However, in reality, getting to that 0% life/100% tolerance state is highly unlikely for the regular everyday end user. By the time that state could even theoretically be reached by most users, the drive will be well outdated/obsolete. You don't see people keeping 500MB-1GB IDE drives from 20 years ago, right? Even if they did still work fine, nobody in their right mind would use one.

700TB of writes would be the equivalent of installing Windows around 30,000 times, meaning you'd need to do a new install of Windows every day for 80 years to get to 700TB. Obviously there are other write operations than installing Windows, but that's just to put it in perspective; for the majority of [Windows] users, installing Windows is probably the largest set of one-time writes they will ever do. Saying they could do that every day for 80 years tells you it's not something most people will even get near in the time the drive remains technologically useful. Even if heavy use means you do the equivalent of four times that kind of writing, you're still looking at >20 years before you hit 700TB.
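A quick sketch of that arithmetic (the ~23GB per install is my assumption; it's roughly what 700TB divided by 30,000 installs implies):

Code:
# Sanity-checking the "Windows installs" framing above.
TOTAL_TB = 700
GB_PER_INSTALL = 23        # assumed rough size of one Windows install

installs = TOTAL_TB * 1000 / GB_PER_INSTALL    # ~30,000 installs
years_at_one_per_day = installs / 365          # ~83 years
years_at_4x = years_at_one_per_day / 4         # heavy use: 4x the writes

print(f"{installs:,.0f} installs; {years_at_one_per_day:.0f} years at one/day; "
      f"{years_at_4x:.0f} years at 4x")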

As for "trusting data to one SSD", if you have anything important to store I wouldn't trust it to any one drive, whether it's a HDD, SSD, flash drive, optical disc or whatever. One copy of any important data is never enough--hence why backups are made.
 

Registered · 6,808 Posts
Interesting...
The new TiVo firmware lets you spin down the hard drive after two hours of no requests/use.
For someone like me, who only records 2 or 3 shows a week, an SSD may work in the TiVo.
 

Registered · 2,488 Posts
I always thought SSD will go obsolete (size+speed) before they exhaust their r/w cycles, then as already mentioned, SSD will never be cost effective per GB against rotating HDs, nobody buy all SSD-based mass storage right?
I don't understand how you came to that conclusion (I don't really understand your last sentence TBH, but I gather that's what you're saying?).

They will eventually become more cost-effective, because as you said, what will go obsolete first is the size/speed of the drives, not solid-state storage itself.

In terms of speed they're already more cost-effective: even cheap old SSDs are fast enough that pitting them against the fastest HDDs out there still isn't a contest. On speed there is no cost comparison to make, because no single consumer HDD is anywhere near as fast as a single consumer SSD. The only place the HDD can even be considered somewhat competitive is sequential transfer. The rest is a landslide victory for the solid-state drive.

Speaking of consumer/mainstream standards: consider that SSDs are now saturating SATA 6Gbps links, requiring SATA Express and newer standards to be developed just to keep up. HDDs, on the other hand, are barely doing better than SATA 1.5Gbps links, and using even a current/modern HDD on a SATA 1.5Gbps link does little harm to its performance. In other words, HDDs are now around three generations of interface behind on performance.
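To put rough numbers on that (the 8b/10b line encoding is how these SATA generations actually work; the drive speeds are ballpark 2015-era figures I'm assuming):

Code:
# Usable bandwidth per SATA generation vs. ballpark drive throughput.
# SATA 1.5/3/6 Gbps use 8b/10b encoding, so usable bytes/s ~ line rate / 10.
for gbps in (1.5, 3.0, 6.0):
    print(f"SATA {gbps:>3} Gbps ~ {gbps * 1000 / 10:.0f} MB/s usable")

# Assumed ballpark sequential speeds (consumer drives, ca. 2015):
SSD_MBPS, HDD_MBPS = 550, 150
print(f"SATA SSD ~{SSD_MBPS} MB/s (saturates 6 Gbps), "
      f"HDD ~{HDD_MBPS} MB/s (barely past 1.5 Gbps)")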

If current mainstream SSDs go obsolete in terms of speed, what will that mean for HDDs which are already just about obsolete in the same terms?

Eventually solid-state should become cheaper than HDDs per unit of storage as well (for typical consumer storage needs), though I can't say when that will be. What usually happens with technology is that once something newer/better becomes well adopted and mainstream, it causes a reversal in cost where the old technology becomes more expensive for a time: for example, DDR2 becoming more expensive than DDR3, AGP video cards becoming less cost-effective than PCIe, etc.

So, eventually, there should come a time when HDDs end up costing more per GB/TB/whatever than solid-state devices. This is of course barring something totally different from current solid-state storage coming along and usurping it before that happens.

HDDs may stick around for very large storage if they can keep increasing capacities, but if the speed of solid-state continues to increase, HDDs will be like tape backup was in the 90s: yeah, you can store lots, but being slow as molasses means they'll be largely relegated to professional backup use.

Saying "nobody buys all-SSD storage" may be correct today, but it's about as useful for future prediction as "no one will ever need more than 640kB of RAM" LOL. I remember in the 486 days looking at the maximum RAM spec for motherboards and thinking, "wow, who will ever have as much as 64MB of RAM?" :D Here we are today, around 20 years later, about three orders of magnitude up on that.

In 20 years' time you still won't have reached the write limit of the SSD you have today (it may fail for other reasons, just like anything, but you will almost certainly not reach those limits), and that SSD will certainly not be useful by then anyway; you'd long be rid of it. And HDDs, in 20 years? We'll likely be laughing at them like we do floppy disks. Of course some of us will probably be dead by then too, so I'll laugh at HDDs right now while I still can haha :p
 

Registered · 3,121 Posts
So, what you're saying is SSDs are only good for fast reboots and should never be used for live TV buffering, right?

;)
 

Registered · 467 Posts
The thing that troubles me about SSDs isn't that they fail (or how frequently, compared to traditional hard drives); it's how they fail. Typically you get no warning. It just dies. At least with a traditional SATA hard drive you get some health warnings, or maybe you hear the drive get noisier than usual. That's a strong hint that you'd better pull anything of value off of it in a hurry. SSDs offer no such courtesy.

I have SSDs as the primary boot drives in both my HTPCs, and if they die... I don't care. All the content is located on a server and the OS itself is backed up, so I throw another disk in and do a restore. All suffering is limited to the minor inconvenience of having to watch TV someplace else in the house for a while. But to use such a device in a server? That's crazy. I don't see how you can sleep well at night knowing all your precious content sitting on a massive SSD could disappear in the time it takes to blink your eyes.
 

Registered · 2,488 Posts
So, what you're saying is SSDs are only good for fast reboots and should never be used for live TV buffering, right?

;)
I would avoid it myself, yeah (when I'm ripping CDs I even use a RAMdisk, lol); but let's take a look at this empirically anyway...

What's the maximum data rate for live TV? I don't know what it would be, but Netflix seems to top out at 5800kbps I think (?), which is around 2.5GB/hr. I'm going by this for now, but correct me if live TV works differently, as perhaps some kind of realtime encoding/transcoding is at work there.

The average person watches 30hrs of TV per week (according to a Google lookup--5hrs/day for Americans; 30hrs/wk for Canadians; I settled on the latter).

30hrs/week * 2.5GB/hr = 75GB/wk

To get to 700TB write** (I'm just going to go with base-10 numbers here 'cause I'm not sure what is using what at this point, lol):
700TB = 700000 GB
700000GB / 75GB/wk = 9333 wk

9333wk / 52wk/yr = 179 years

If you have multiple tuners and/or recordings going at the same time, somehow all on the same SSD--let's say four going all the time, and you watch twice as much TV as the average person (so 8x the average)--you're still at over 20 years, theoretically.

**I should probably take a moment to explain, for those who haven't read the TR articles, that 700TB of writes is well beyond the warranty of all the drives; it's just where TR started recognising failures. The Intel that failed at 700TB is only warrantied for 22TB of writes. In other words, it lasted 31x longer than it should have.

If we apply a 22TB limit to our live-TV calculations, we only get about 6 years before hitting that marker; and if we go with 8x that usage, we're obviously down to less than a year. Keep in mind the warranty period of the Intel 335 is only 3 years anyway, and as the test proved, it will last a lot longer in terms of total data written.
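For anyone who wants to poke at the assumptions, here's the whole calculation in a few lines of Python (all decimal units, same inputs as above):

Code:
# Live-TV buffering endurance math (decimal units).
GB_PER_HOUR = 2.5          # ~5800 kbps stream
HOURS_PER_WEEK = 30        # average viewer
WEEKS_PER_YEAR = 52

gb_per_week = GB_PER_HOUR * HOURS_PER_WEEK     # 75 GB/wk

def years_to(tb_limit, write_multiplier=1):
    weeks = tb_limit * 1000 / (gb_per_week * write_multiplier)
    return weeks / WEEKS_PER_YEAR

print(f"700TB, average viewer : {years_to(700):.0f} years")     # ~179
print(f"700TB, 8x the writes  : {years_to(700, 8):.0f} years")  # ~22
print(f"22TB warranty, average: {years_to(22):.1f} years")      # ~5.6
print(f"22TB warranty, 8x     : {years_to(22, 8):.1f} years")   # ~0.7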
 

Registered · 3,121 Posts
I was just joking :)

For years there have been knock-down, drag-out fights over this, with people adamant that SSDs have no place in an HTPC, especially for live TV, because they're just too fragile -- even though user experience showed otherwise.

I've had the same 64GB and 128GB SSDs in all my MCE HTPCs for years now, all happily buffering away to the SSD when in use (we stop the live TV stream when done, to keep tuners free) and storing recordings until they're swept to the server, all without an issue. Even my original crappy MicroCenter-rebranded 64GB ADATA SSD, which I paid $99 for and which has been used in various HTPCs since what seems like the dawn of time, still works.

I fully expected, based on reputation, to be replacing them every 1-2 years, but I haven't had to replace a single SSD in anything.

I could never not have an SSD.
 

Registered · 2,488 Posts
I was just joking :)

For years there have been knock-down, drag-out fights over this, with people adamant that SSDs have no place in an HTPC, especially for live TV, because they're just too fragile -- even though user experience showed otherwise.
Ohhh I see, lol :)



I could never not have an SSD.
Indeed. And no PC from C2D onward should be without one either.


Clearly the "sky is falling" crowd have nothing to confirm their fears/fear-mongering.
 

Premium Member · 11,855 Posts
I'm an old guy and both of my computers have SSDs: my desktop (Samsung 840, 120GB) with a spinning data disc, and the laptop (Intel 520, 240GB).

I'm looking at a new computer and will equip it with the Samsung 850 Pro 512GB. The 850 has a 10-year guarantee (or some huge amount of writes); it's really meant for the business market, but I lean towards reliability more than cost, up to the point I can't afford it ;).

Samsung went to a smaller planar process for the EVO series, and unfortunately cell wear-out there can occur in as few as 300 cycles. For the 850 Pro they went back to a larger process geometry, which lasts much longer, and to maintain the die size they constructed the cells vertically (V-NAND). 512GB will run me around $400, but it's, hopefully, a one-time deal. Good chance it will be functioning longer than I am :)

I back up pretty often, especially before/after system updates or installing new programs. Spinning discs are fine for this and get little wear.
 

Registered · 1,390 Posts
I fully expected, based on reputation, to be replacing them every 1-2 years, but I haven't had to replace a single SSD in anything.
In 6 years I've had 3 personal SSDs die out of a dozen or so: one cheapo and two mid-range ones. They all caught me off guard. It would be nice to know what the real failure rates are, to compare with HDDs, but I know they aren't zero.
 