So much power in so little space.
Over the years I have tweaked and built up many different ESXi whitebox “servers” for lab use. I wanted to share some experiences with you in a blog post and recommend some new hardware that will help you get a nice little home lab setup for ESXi 5.
I’ve bought vanilla Intel motherboards and have had a lot of success with those. The Intel network adapters seem to have the greatest compatibility overall. My most recent builds have been based on the Shuttle XPC platform. They work right out of the box and this post will focus primarily on those.
If you’re looking to build a VMware home lab, a hardware-based one gives you the most flexibility overall. When you nest your lab inside VMware Workstation, some features like Fault Tolerance and EVC won’t work, though you can get them going with a bit of hacking. I’ve always gone with a hardware lab, so that’s what we’ll stick with here.
The ESXi 5 Host:
It’s recommended to have two hosts minimum so you can do all of the cool stuff. Without the second one, it will still work fine but what fun is it without HA/DRS and vMotion?!
Shuttle XPC SH67H3 - We will use this as our base system. It’s a compact system that supports current-generation (Sandy Bridge) processors and up to 32GB of memory. Update to the newest BIOS from the Shuttle website: early BIOS revisions reportedly locked the PCI Express slot to video card expansions only, and we want the option of adding a dual-port NIC to bring the host to three NICs total. The integrated adapter works fine out of the box with no modifications to the ESXi 5 install media. These machines have been running flawlessly in my lab for almost a year now.
Intel PRO/1000 Dual Port Server Adapter - This adapter works in the spare slot that the Shuttle has. Again, as of this writing it works flawlessly out of the box with the vanilla ESXi 5 installer. No modifications needed. Just be sure to flash the BIOS to the newest revision.
Your Choice: Intel Core i7 2600 Sandy Bridge Processor or Intel Core i5 2500 Processor - I went with the i7, but this is entirely your choice. If you’re on a tighter budget, the i5 would save you some cash; in fact, if money is tight, I’d recommend going with the i5 and buying two hosts rather than one host with an i7. You almost always run out of memory before CPU anyway. Be sure to buy the non-K processors. (The ones linked are the ones you want in a virtualization lab.) K processors do not support Intel TXT, Intel VT-d, or vPro. (Yes, Sandy Bridge works with Fault Tolerance.)
Your Choice: 8GB Corsair DDR3 (2 x 4GB) 1333 MHz or 16GB Corsair DDR3 (2 x 8GB) 1333 MHz - I’ve had a ton of success with Corsair memory. You’ll probably want to shop around, but this memory has been priced well and has worked every time I’ve bought it. Remember that the Shuttle system above maxes out at 32GB of DDR3 across 4 slots, so if it’s in your budget, order two of the 16GB kits and you’re set. I went with 16GB in each box.
Kingston Digital 8 GB USB 2.0 Flash Drive - I’ve used 4GB USB sticks before with no issue, but went with this 8GB stick on the last build. I’m sure you have plenty of these lying around, and any of them should do. I do try to avoid the really cheap vendor sticks given out at VMUGs and conferences, since I want the host to run reliably and not have the stick die within a week’s use.
This guy might be proud of his lab, but it’s time to upgrade.
The Storage Setup/Options:
Now we reach a crossroads. The hardware above gets you a basic host, but we all know a standalone host can only go so far. There are a couple of options when it comes to storage, and if you’re totally new to the homebrew ESXi lab, you’ll need to decide which path to take.
- The cheapest option: Find an old computer (you know you have one), install Openfiler or FreeNAS on it, and share out its storage. This costs next to nothing, but performance will depend on the box, disks, network, etc.
- No old computer to re-purpose? Then pick up a Seagate Barracuda 7200 1 TB 7200RPM SATA 6 Gb/s - I’ve used these before, but any simple, cheap SATA drive installed inside the standalone Shuttle host will do. Try to stay away from the green drives if you can, especially if you’re putting them in a NAS. You can then build a large virtual machine on the host (running Openfiler or FreeNAS, if you’d like) and share its storage back out to the host(s) over iSCSI/NFS. Remember that in this case the disk files of the VMs you build will live inside this new storage VM itself. Think of the movie Inception: a dream within a dream.
- Buy a new NAS. I own a Thecus N5200 Pro and cannot recommend it: Thecus markets it as a great device but never updates the firmware, so avoid them if you can. Instead I would recommend the Synology DS411 NAS, which I’ve heard great reviews of. It’s not the cheapest solution, but the performance is great and Synology has impressed as of late. Throw in 4 of the Seagate Barracuda 7200 1 TB 7200RPM SATA 6 Gb/s disks and you’ve got a winner! Kyle Ruddy has a recent post and review on his Synology DS411.
So you can see with storage in a VMware home lab you have a lot of options. I would recommend starting small and growing it later.
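Whichever path you pick, the share gets mounted on the hosts the same way in the end. Here’s a minimal sketch for the NFS case, assuming a hypothetical storage VM at 192.168.1.50 exporting /mnt/vol1/nfs (substitute your own values); the script prints the ESXi 5 esxcli commands as a dry run so you can review them before pasting into an ESXi shell:

```shell
#!/bin/sh
# Hypothetical values -- substitute your storage box/VM's IP and export path.
NFS_HOST="192.168.1.50"     # IP of the Openfiler/FreeNAS box or VM
NFS_SHARE="/mnt/vol1/nfs"   # exported path on that box
DATASTORE="lab-nfs"         # datastore name as it will appear in vSphere

# Printed rather than executed, so the sketch can be reviewed anywhere.
echo "esxcli storage nfs add --host=$NFS_HOST --share=$NFS_SHARE --volume-name=$DATASTORE"

# Confirm the mount afterwards with:
echo "esxcli storage nfs list"
```

Run the first command on each host and the same datastore shows up everywhere, which is exactly what vMotion/HA/DRS need.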
The Networking Setup:
To keep this home lab as real-world as possible, you’ll want a way to set up some VLANs and keep the traffic your home lab generates off of your “Production” network.
I would recommend a router running dd-wrt, with separate VLANs set up for your storage, vMotion, and other traffic. I use dd-wrt with good results and pair it with a 24-port managed Trendnet switch. It works, but I’ve also recently started playing with an HP 1810G-8 switch, which is exactly what I need for my hosts.
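On the ESXi side, those VLANs map to tagged port groups on the standard vSwitch. A minimal sketch, assuming hypothetical VLAN IDs 20 (storage) and 30 (vMotion) that match whatever you configure on the dd-wrt router and managed switch; again printed as a dry run for review before pasting into an ESXi 5 shell:

```shell
#!/bin/sh
# Hypothetical VLAN IDs -- match the trunk config on your router/switch.
STORAGE_VLAN=20
VMOTION_VLAN=30
VSWITCH="vSwitch0"

# Printed rather than executed, so the sketch can be reviewed anywhere.
echo "esxcli network vswitch standard portgroup add --portgroup-name=Storage --vswitch-name=$VSWITCH"
echo "esxcli network vswitch standard portgroup set --portgroup-name=Storage --vlan-id=$STORAGE_VLAN"
echo "esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=$VSWITCH"
echo "esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=$VMOTION_VLAN"
```

The switch port facing the host then needs to trunk both VLANs; the port group tags take care of the rest.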
Leave your home lab components in the comments below! Let’s get a good thread going with hardware that’s supported on ESXi 5.