We are in the midst of starting our virtualization effort (both servers and desktop PCs are going virtual); we have all of the new gear -- servers, SAN space and switches -- and the guys will be racking it over the next couple of days. Also part of this effort is FrankenSAN.
FrankenSAN is what happens when you get disgusted with the astronomical prices that the storage vendors want for storage, and decide to see just how far (and fast) you can go with a build-it-yourself box. So I did some shopping at JDR Microdevices and CDW-G and picked up an industrial rack PC case, a 2.2GHz Core 2 Duo motherboard with 2GB of RAM, two gigabit NICs, a pair of 36GB SCSI disks with a SCSI RAID card (to make a RAID1 set to boot off of), and a quartet of 750GB SATA drives. (Yes, that's 3 terabytes of disk. It'll be more like 2.2TB of usable space.)
I spent yesterday afternoon and this morning putting the pieces together, getting Linux loaded (Slackware 10.1, which runs a 2.4 kernel) and fiddling with the fiddly bits. Wow, that's a lot of disk. Total cost works out to about $1 per usable gigabyte vs. ~$9.40 per usable gig from NetApp. Sometime in the near future, there will be side-by-side performance tests.
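For the curious, the fiddly bits boil down to a handful of commands. This is just a sketch -- the device names and mount point are examples from my setup, everything here needs root, and it will cheerfully eat any disks you point it at:

```shell
# Create the 4-disk RAID5 array from the SATA drives
# (example device names -- substitute your own)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Watch the initial parity sync
cat /proc/mdstat

# Put an ext3 filesystem on it (-j adds the journal) and mount it
# (/export/frankensan is a made-up path for illustration)
mke2fs -j /dev/md0
mkdir -p /export/frankensan
mount /dev/md0 /export/frankensan
```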
The picture shows the two volumes I set up and shared out via Samba -- look at the P: and Q: drives -- I think it's just too cool for words to see 1.43TB. Terabytes. I'm the proud owner of terabytes!
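The Samba side is nothing exotic; the shares behind P: and Q: come from stanzas along these lines in smb.conf (the share names and paths here are made up for illustration):

```
[raid-a]
   path = /export/raid-a
   read only = no
   browseable = yes

[raid-b]
   path = /export/raid-b
   read only = no
   browseable = yes
```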
I have a question for my fellow Linux geeks -- ever tried to build a RAID5 set with 4x750GB disks? The RAID tool -- mdadm -- successfully creates and assembles the RAID set, but mke2fs (creating an ext3 file system) only sees a 50GB volume ... which is obviously wrong. When I split it into two RAID5 sets -- one built from 4x200GB partitions and the other from 4x500GB partitions -- mdadm and mke2fs are both happy. Any thoughts?
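One clue worth throwing out there (a hunch, not a diagnosis): 2.4 kernels track block-device sizes with 32-bit sector counts, which caps a single device at 2TiB, and a size past that can wrap around. The ~50GB figure mke2fs reports is suspiciously close to my array's size modulo 2TiB:

```shell
# A 4x750GB RAID5 has 3 disks' worth of usable space.
# With 512-byte sectors, a 32-bit sector count tops out at
# 2^41 bytes (2TiB). Leftover after wrapping, in decimal GB:
echo $(( (3 * 750 * 1000**3 - 2**41) / 1000**3 ))
# prints: 50
```

If that's what's going on, it would also explain why the two smaller arrays behave -- each one stays comfortably under the 2TiB mark.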