At work we have this wonderful, big, honkin' NetApp SAN. It has two storage controllers, five shelves of fourteen 300GB 10,000 RPM FibreChannel drives, and one shelf of 500GB 7,200 RPM SATA drives. The FC drives house databases and virtual machines. The SATA drives are for disk-to-disk backup. That SATA shelf cost $24,500. That's $3.50/GB raw, and roughly $4.90 per usable GB of storage.
About the same time we made our initial SAN purchase, I was looking to do some experimenting. I wanted to see what sort of performance I could get from a storage server assembled from parts -- my "white box NAS". For $2,466 I assembled a machine with 4x750GB 7,200 RPM SATA drives, yielding 1.5TB of usable storage in its best-performing configuration (mirrored pairs, striped together -- a.k.a. RAID10). That works out to 82¢ per raw GB and $1.64 per usable GB. The box runs Linux (Slackware 12) and serves as the backup destination for a number of other Linux boxes (it also contributes to Team MHIS).
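For anyone curious how "mirrored pairs, striped together" looks in practice on Linux, here's a sketch using mdadm. The device names (sdb through sde) and array numbers are hypothetical, not my actual build commands:

# One-step version: mdadm's native "raid10" level, which stripes
# across mirrored pairs (near layout) on four drives.
mdadm --create /dev/md2 --level=10 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Or build it literally as described: two RAID1 mirrors,
# then a RAID0 stripe across them.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

Either way you end up with half the raw capacity usable (4x750GB raw, 1.5TB usable) and can lose one drive from each mirror without losing data.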
Granted, the NetApp with its redundant everything beats my Frankenserver hands down for enterprise-class reliability, but it costs nearly three times as much per usable GB of space.
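The "nearly three times" figure checks out with some quick awk. The 5TB usable figure for the NetApp shelf is back-derived from the $4.90/GB number quoted above:

```shell
# Back-of-the-envelope check of the per-GB figures
awk 'BEGIN {
    netapp   = 24500 / 5000   # $24,500 shelf / ~5000GB usable (implied)
    whitebox = 2466  / 1500   # $2,466 box / 1500GB usable
    printf "NetApp: $%.2f/GB, white box: $%.2f/GB, ratio: %.1fx\n", \
        netapp, whitebox, netapp / whitebox
}'
# NetApp: $4.90/GB, white box: $1.64/GB, ratio: 3.0x
```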
My "built from pieces" storage server has been plugging away for a year now, and is very successful at its job:
hm-lnx-nas01:~# uptime
 13:49:04 up 346 days, 7:38, 1 user, load average: 1.16, 1.65, 1.51
hm-lnx-nas01:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.4G  3.3G  5.7G  37% /
/dev/sda3             3.8G  135M  3.5G   4% /var
/dev/sda4              19G  173M   18G   1% /work
/dev/md2              1.4T  518G  788G  40% /data
The take-away is that if you put some thought into matching the job to the machine, there's no reason -- even in an enterprise environment -- to pay through the nose for something that really should be inexpensive.