Quote:
Originally posted by richardkaufmann
Whatever! 80 Mbits/sec = 10 Mbytes/sec, which is well in doable range for an IDE drive, even factoring in file system overhead.
I feel I should point out that hard drive performance has been hugely overstated in recent years. You see the manufacturers quoting transfer speeds of 33...40...66...80...133...160 megabytes per second, and they aren't telling you the whole story.
The whole story is that that's how fast the bus is, and if you're really super lucky, the drive will be able to pump bits between its internal buffer and the bus that fast - but that buffer is... what? 16MB at best. That's clearly less than a second's worth of data at a 40MB/sec transfer rate.
For sustained data transfer, the actual data rate is ultimately limited by the rate at which the drive's heads fly over the bits on the disk (or, more accurately, the rate at which the bits on the disk spin under the heads). That rate is governed by two factors: how fast the disk spins, and how densely the data is packed on the disk itself.
Here, higher density drives do mean a higher transfer rate, but the difference isn't growing as fast as the manufacturers' claims. Areal density is two-dimensional: part of any increase goes into packing the bits more tightly along the track (which does raise the transfer rate) and part goes into squeezing the tracks closer together (which only adds capacity). Quadruple the data density (as measured in bits per square inch) and you'll see roughly a doubling of transfer rate.
So, in '94, I bought a 4GB 7200 RPM fast/wide SCSI drive that on a good day could do around 6MB/second sustained (20MB/sec burst). Come forward 7 years and pick up a 160GB drive in a similar form factor. All other things being equal, that's a 40x increase in data density, meaning about a 6.25x increase in data rate. Oh, you say that 160GB drive is only 5400 RPM (actually, I don't know)? Make that about a 4.75x increase in data rate.
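If you want to sanity-check that arithmetic, here's the back-of-the-envelope version. The scaling rule (sustained rate goes with the square root of areal density, times the spindle speed ratio) and the "same platter count" assumption are mine; the capacities, RPMs and the 6MB/sec figure are from above:

# Sanity check of the density arithmetic above. Assumptions: capacity ratio ~=
# areal density ratio (same form factor, same number of platters), and the
# sustained media rate scales with linear (along-the-track) density, which
# grows roughly as the square root of areal density, times the RPM ratio.
from math import sqrt

capacity_ratio = 160 / 4                  # 160GB drive vs. the '94 4GB drive
linear_density_gain = sqrt(capacity_ratio)
print(linear_density_gain)                # ~6.3x -- the "about 6.25x" above

rpm_penalty = 5400 / 7200                 # if the big drive only spins at 5400 RPM
print(linear_density_gain * rpm_penalty)  # ~4.7x -- the "4.75x" above

old_sustained_mb_s = 6                    # what the '94 drive did on a good day
print(old_sustained_mb_s * linear_density_gain * rpm_penalty)  # ~28 MB/s, best case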
In reality, if you can get a SCSI LVD 10,000 RPM drive to sustain a real 20MB/second for an hour, you're doing pretty well - and those are the fast drives. Not impossible, but still high end. If you're talking a 10MB/second data rate for video, then you're going to need at least 20MB/second to record one show while watching another, and that's still leaving one huge factor out of the equation.
The huge factor left out of the equation is disk head seek time. Even for track-to-track seeks, you end up spending a significant amount of your time waiting for the seek. That's why a sustained, hour-long transfer will come in at less than the theoretical maximum - you're going to be doing a lot of one-track seeks during that time, and in the world of 20MB/sec transfers, any mechanical movement is going to seem like it takes ages. Which is why even a drive that advertises a big "native transfer rate" (meaning the rate at which the bits fly under the heads) is going to show considerably less in a sustained transfer.
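To put a rough number on what even the short seeks cost, here's a toy model. The per-track capacity and the track-to-track seek time are assumptions picked for illustration, not any real drive's specs, and I'm assuming track skew hides the rotational part of each hop:

# Rough sketch of why sustained throughput comes in under the "native transfer
# rate": every track boundary costs a track-to-track seek, even on a perfectly
# sequential read. All numbers are illustrative assumptions, not real specs.
rpm = 7200
rev_ms = 60_000 / rpm                  # ~8.33 ms per revolution
track_mb = 0.25                        # assume ~256KB of data per track
native_rate = track_mb / (rev_ms / 1000)
print(native_rate)                     # ~30 MB/s while the bits fly under the head

tt_seek_ms = 1.0                       # assumed track-to-track seek time
# Assume track skew hides the rotational wait, so each track costs one
# revolution of reading plus one short seek.
sustained = track_mb / ((rev_ms + tt_seek_ms) / 1000)
print(sustained)                       # ~27 MB/s -- ~10% gone before any long seeks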
That's all assuming the data you're reading/writing is on adjacent tracks. For a continuous read or continuous write process on an unfragmented disk, that'll probably be the case.
But take the case where we're playing one show while recording another. Here, the seek time becomes huge and effective transfer rates plummet, because we're seeking over large distances. Whereas access times for reading while on track can be measured in micro- or nanoseconds, seek times are measured in milliseconds. We're talking several orders of magnitude here. And even if that doesn't kill you, you'll have, on average, half a rotation's worth of rotational latency to deal with after each seek. The best way to compensate is to have a REALLY BIG RAM buffer to reduce the frequency of the seeking. If your data demand is 10MB/sec to read or write, it would seem that a 1-second buffer in each direction would be a start, meaning a 20MB buffer. Pretty cheap these days, but still a lot of RAM.
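Here's a quick sketch of that record-one/watch-another bookkeeping, and of why a bigger buffer helps. The average seek time and the spindle speed are assumptions picked to be plausible, nothing more:

# Sketch of the record-one / watch-another case: the heads bounce between two
# regions of the disk, paying a long seek plus half a rotation on every switch.
# All numbers are assumptions for illustration, not any particular drive's specs.
stream_rate = 10.0                       # MB/s per stream (one read, one write)
avg_seek_ms = 10.0                       # assumed seek time between the two regions
half_rot_ms = 0.5 * 60_000 / 7200        # ~4.2 ms average rotational latency
switch_s = (avg_seek_ms + half_rot_ms) / 1000

for buffer_mb in (1, 10):                # per-stream RAM buffer
    switches_per_sec = 2 * stream_rate / buffer_mb   # each buffer drains/fills once
    lost = switches_per_sec * switch_s               # fraction of time spent seeking
    need = 2 * stream_rate / (1 - lost)              # media rate needed to keep up
    print(buffer_mb, round(lost, 3), round(need, 1))
# 1MB buffers:  ~28% of the time spent seeking, need ~28 MB/s off the platters
# 10MB buffers: ~3% of the time spent seeking, need ~20.6 MB/s (the 20MB total above)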
Of course, throw one recoverable error (and the retry that comes with it) into the mix and the whole house of cards collapses.
Now, there's a lot of other monkeying you can do. For one thing, you could double your data rate by reading more than one bit at a time - that is, from more than one head in parallel. For all I know, some drives may do that (or not - I really don't know).
But there are other gotchas. In the name of squeezing every last ounce of data into the drive, they build them with a constant data density. That means the bits are packed as tightly on the outer tracks as they are on the inner tracks. But the disk rotates at a constant speed, so... the data rate is variable. You get a faster data rate on the outer tracks than on the inner tracks. Hook the new drive up, run an hour's worth of speed benchmarks, figure you have a fast enough drive, only to discover that it isn't fast enough once it's nearly full and you're working the slower inner tracks. "Native Transfer Rates" also tend to emphasize the fast end of the scale.
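For a feel of how big the outer-versus-inner spread can be, here's a toy calculation. The track radii and the linear density are assumed ballpark figures for a 3.5" platter, not measurements of any actual drive:

# Sketch of the constant-density effect: with the bits packed at (roughly) the
# same linear density everywhere and a constant spindle speed, an outer track
# moves more data past the head per revolution than an inner one does.
import math

rpm = 7200
rev_s = 60 / rpm
bits_per_mm = 9_000                    # assumed linear density along the track
outer_radius_mm = 46.0                 # assumed outermost data track radius
inner_radius_mm = 20.0                 # assumed innermost data track radius

def track_rate_mb_s(radius_mm):
    # bytes per track divided by the time for one revolution
    track_bytes = 2 * math.pi * radius_mm * bits_per_mm / 8
    return track_bytes / 1e6 / rev_s

print(round(track_rate_mb_s(outer_radius_mm), 1))   # ~39 MB/s out at the edge
print(round(track_rate_mb_s(inner_radius_mm), 1))   # ~17 MB/s near the hub -- same drive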
If someone told me that they could get an IDE drive to sustain a 10MB/second read and a simultaneous 10MB/second write (and the file system overhead would be pretty negligible) for an hour or two at a shot - that wouldn't hugely surprise me. I just don't take it as a foregone conclusion that it'll happen with just any IDE drive.
(In case anyone actually read this far and has enough interest left to wonder, yes, I did start my early years computing theoretical disk transfer rates - at first using a slide rule. As demonstrated here, I can be absolutely insufferable when it comes to talking about seek latency, rotational latency, transfer rates and channel utilizations.)