I was reading a story where Seagate were talking about 60TB disk drives. That’s all well and good, but how quickly can I get data to and from them? If I need a certain number of spindles to get the performance I require, then I’m just going to end up with masses of wasted capacity.
I can picture the scene now. I have a database of “x” terabytes in size and I need “y” number of spindles to get the performance I require, so I end up having to buy disks amounting to “z” petabytes of space to meet my performance needs. Not only is it hard to justify, but you know the “spare” capacity will get used to store stuff that’s got nothing to do with my database.
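To make that “x/y/z” scenario concrete, here’s a rough back-of-the-envelope sketch. All the figures (IOPS target, per-spindle IOPS) are assumptions for illustration only, not numbers from the story:

```python
import math

# Hypothetical workload and drive figures -- adjust to taste.
required_iops = 20_000      # what the database workload needs
iops_per_spindle = 150      # roughly what a 7200rpm SATA drive delivers
drive_size_tb = 60          # the drives from the story

# Spindle count is driven by IOPS, not by how much data you have.
spindles = math.ceil(required_iops / iops_per_spindle)
raw_capacity_tb = spindles * drive_size_tb

print(f"{spindles} spindles -> {raw_capacity_tb} TB raw")
# With these assumed numbers: 134 spindles, ~8 PB of raw capacity,
# regardless of whether the database itself is only a few TB.
```

So a database of a few terabytes could drag in petabytes of disk just to hit its IOPS target, which is exactly the justification problem described above.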
Just think of those 60TB bad-boys in a RAID5 configuration. Shudder. 🙂
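Part of the shudder is rebuild time. A quick sketch, assuming an optimistic sustained rebuild rate of 200 MB/s with no competing I/O (both figures are assumptions, not vendor numbers):

```python
# How long does a RAID5 rebuild of a single 60TB drive take,
# assuming a flat-out sequential rebuild with no other load?
drive_size_tb = 60
rebuild_rate_mb_s = 200  # assumed sustained rate

seconds = (drive_size_tb * 1_000_000) / rebuild_rate_mb_s
days = seconds / 86_400

print(f"~{days:.1f} days")
# With these assumptions: roughly 3.5 days, during which a RAID5
# array is running with no redundancy at all.
```

In practice rebuilds share the drives with production I/O, so the real window would likely be longer still.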
Feel free to insert an “SSD/Flash will solve the world’s storage problems” comment of your choice here. 🙂
Cheers
Tim….
I can hear the SAN makers already:
“you need to spread the I/O in your database across more drives!”
“We can sell you big ones too, 60TB each!”
Will the dementia never end?…
I wonder what that would do to my graph:
http://tylermuth.wordpress.com/2011/11/02/a-little-hard-drive-history-and-the-big-data-problem/
@Tyler: That’s exactly it! 🙂