New ESXi 5 “White Boxes” & SAN

Over the years, I’ve built a few ESXi white boxes for my home lab. I really got serious back in 2009 when “BirkleNET” was built. I should add that I didn’t name it that; my colleagues gave it that name and it stuck. Building these boxes helps me stay on top of the new offerings from VMware. Through teaching for the Purdue CIT program, I recently inherited an older EMC CX3-10c SAN from the Department of Medicine: 2 storage processors, power cables, 3 shelves of 300GB 10k drives, 2 FC switches with their PSUs, and 3 PCI-E FC cards. Their trash is my treasure! Perfect timing, too, as I was beating the life out of the home-brew Openfiler box I had been using there. Big thanks to Kyle Ruddy!

I decided it was also time to upgrade my white boxes.

I wanted something small that would still pack some power. It needed enough juice to run vCloud Director, vMA, VMware View 5, etc. I ended up getting 2 Shuttle XPC systems, very similar to Kendrick Coleman’s home lab.

I did not purchase any disks because I don’t plan on using any local disk immediately, although I will admit that I have a few things with local SSDs that I want to try out with VMware View in the near future! I am just going to install ESXi and boot these from USB sticks for now … and who doesn’t have 20 or so of those lying around??

The goodies:

* 2x Shuttle SH67H3
* 2x 16GB DDR3 1333 Memory
* 2x Intel i7 Sandy Bridge 3.4GHz
* 2x Intel EXPI9402PT PRO/1000 PT Dual Port Server Adapter

Each of these machines cost around $800.00 to build. You can probably save some money shopping around for the dual NIC and the memory; prices on those seem to vary a lot from place to place. If you’re reading this 30 days or so after the original post date, you’ll probably pay half of what I paid. Haha.

After updating the BIOS to the latest version and making the appropriate tweaks (enabling VT-d, setting USB to boot in hard disk mode, etc.), I had no issues installing ESXi 5. All of the NICs, even the integrated one, came up and appear to work well.
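
As a quick sanity check after the install, you can confirm what ESXi actually sees from the shell. This is just a sketch, assuming you’ve enabled the local shell or SSH; the vmnic numbering below is a placeholder and will vary per box:

```
# List every NIC the VMkernel detected, with driver and link status
esxcli network nic list

# Drill into a single port for driver details, e.g. one port of the
# PRO/1000 PT card (vmnic1 is a guess; use a name from the list above)
esxcli network nic get -n vmnic1
```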

The BIOS update is important: on the original firmware, the 1 x PCI Express 2.0 x16 slot is meant for graphics cards only, and ESXi will PSOD if you try to use a NIC in that slot. After the update, the system no longer PSODs.

These were the easiest white boxes I’ve ever set up. I’ll do another post soon on how I plan to configure the SAN in this environment. Brian Wuchner over at EnterpriseAdmins.org, Jake Robinson over at http://geekafterfive.com, and I plan to play with it sometime next week and will hopefully be able to start work on our next project … building a hybrid cloud!
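
For what it’s worth, once the FC cards go in, the first thing I’ll check is whether ESXi sees the HBAs at all. A minimal sketch from the ESXi 5 shell (adapter names will differ on your hardware):

```
# List all storage adapters the VMkernel has claimed; the FC HBAs
# should show up alongside the local SATA/USB controllers
esxcli storage core adapter list

# Rescan everything after cabling/zoning changes so new LUNs appear
esxcli storage core adapter rescan --all
```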


3 Responses to New ESXi 5 “White Boxes” & SAN

  1. Brett says:

    Hi Ryan,

    Did you have any luck getting your CX3-10 to work with ESXi 5? I’d like to upgrade to 5, but according to the SAN HCL the CX3-10 is only supported up to 4.1 U2…

    Thanks,
    Brett

  2. Ryan Birk says:

    Brett,

    I’m actually in the process of getting the SAN racked and tested. Hoping to get to it this week. That said, I would not recommend running anything in production that’s not on the HCL, mostly for support reasons, in case you ever need it.

    I don’t anticipate any problems with ESXi 5 though. I’ll keep you posted.

    Ryan

  3. Pingback: Ryan Birk – Virtual Insanity » VCP-5 Exam Thoughts
