In the process of setting up a new Hyper-V Server 2008 R2 box, I accidentally disabled "Host Access" on both network cards, thus killing remote access to the server (which is in my basement). Since Hyper-V Server, like a Server Core install, has no full GUI, I don't have access to the normal Network Connections applet to unbind the virtual switch protocol from the network cards.
I did some searching around the web, and all the solutions I found involved downloading some script or tool, and since I'm too lazy to put one of those on a USB drive and walk downstairs to run it, I wanted a solution that I could run from the command line remotely (as I have access to the server via Intel AMT's VNC KVM). I finally found what I was looking for in this blog post at ENIAC KB.
So, the steps are simple:
- Remove the virtual switch protocol from all network adapters:
netcfg -u vms_pp
- Re-install the virtual switch protocol (which will leave it disabled by default):
netcfg -l c:\windows\winsxs\amd64_wvms_pp.inf_31bf3856ad364e35_6.1.7600.16385_none_beda85050b13680c\wvms_pp.inf -c p -i vms_pp
So if you've broken remote access to your Hyper-V server under Server Core and you can still reach the CLI, either through a remote KVM or at the physical console, you can uninstall the virtual switch protocol, reboot, re-install it, and continue your network configuration remotely from VMM or the Hyper-V console.
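The two commands can be collected into a small batch file so the whole recovery is one step. This is just a sketch I haven't tested end to end: the WinSxS directory name varies by build and patch level, so instead of hard-coding it, the `for /f` loop here locates wvms_pp.inf with a recursive `dir` search.

```batch
@echo off
rem Unbind the virtual switch protocol from all adapters
netcfg -u vms_pp

rem Locate wvms_pp.inf under WinSxS (the hashed directory name varies by build)
for /f "delims=" %%I in ('dir /s /b c:\windows\winsxs\wvms_pp.inf') do set INF=%%I

rem Re-install the protocol; it is left unbound by default
netcfg -l "%INF%" -c p -i vms_pp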
I'm waiting for my shiny new parts from Episode 1 to arrive, so I decided to do some more research on the hypervisor options I have available. I'm definitely staying in the realm of free because of licensing costs, and most of the proprietary solutions offer similar features in this category. However, since I use my home network as my learning lab, I typically use more enterprise-class features than one normally might at home. A few features I currently rely on with Microsoft Hyper-V Server 2008 R2, such as software-based RAID 1 and hot backups, are problematic with some of the other options I'm looking at. While this isn't intended to be an exhaustive analysis, I've noted some of my findings below.
Initially I was planning on going with VMware vSphere Hypervisor. However, after realizing that software RAID 1 is not supported under vSphere, I don't think it's a good option for home. I understand VMware's reasons for this, because in an enterprise environment an iSCSI SAN or onboard hardware RAID controller would usually be the better choice. However, I mostly steer clear of hardware RAID at home because it's an unnecessary complication, and typically expensive. I have been contemplating building an iSCSI host at home to centralize all my storage, but on the wise counsel of a friend, I think I've decided against that added complication.
I think hot backups are going to be more complicated with vSphere, at least without buying an actual backup product. Most of the time I set up a backup solution on the guest VM to back up the important data on it, but I also like to schedule periodic full backups of the disk images. With Hyper-V, I can write a script around the diskshadow command-line tool to create a shadow copy and mount it to a drive letter to perform the backup. This method means no downtime for Windows guests and only a very short pause for Linux guests while the shadow copy is being created. Once the copy is mounted, I can back up the VHDs with whatever tool I want.
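For the curious, the diskshadow script looks roughly like this (run with `diskshadow /s backup-vms.dsh`). The drive letters, metadata path, and the copy-vhds.cmd helper are all illustrative, not the actual script I run:

```text
# backup-vms.dsh -- snapshot the VM volume and expose it for backup
SET CONTEXT PERSISTENT
SET METADATA C:\vmbackup\backup.cab
BEGIN BACKUP
# D: holds the VHDs in this example
ADD VOLUME D: ALIAS VmVol
CREATE
# Mount the shadow copy at Z: and run the copy job against it
EXPOSE %VmVol% Z:
EXEC C:\scripts\copy-vhds.cmd
UNEXPOSE Z:
END BACKUP
```

The EXEC step is what lets the copy run while the snapshot is exposed; the guests keep running against the live volume the whole time.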
Citrix XenServer is still an option. XenServer does appear to work fine with software RAID, at least according to this guide by Major Hayden over at Racker Hacker. I think this one is at least worth installing and trying out before I make a final selection.
The Linux Kernel-based Virtual Machine (KVM) really interests me; I think mostly because it seems like a fun new challenge. KVM is relatively new, although the same can be said of Hyper-V. Ubuntu seems to have quite a bit of documentation on setting up KVM and managing it with libvirt (and without), so I could probably start there. This one also goes on the "install it and try it out" list.
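From skimming the Ubuntu docs, getting started looks to be roughly this. The package names are from the Ubuntu releases of this era, and the guest name, disk size, and ISO are just placeholder examples:

```shell
# Install KVM, libvirt, and the virt-install tool
sudo apt-get install qemu-kvm libvirt-bin virtinst

# Create a test guest (all names and sizes here are illustrative)
sudo virt-install --name testvm --ram 1024 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=8 \
  --cdrom ubuntu-10.04-server-amd64.iso --vnc

# List guests with virsh to confirm it registered
sudo virsh list --all
```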
Stay tuned for Episode 3.
So I finished Phase 1 of building my new virtual host today: "the purchase". I've been watching prices and the e-mail deals from Newegg for a few weeks now (and I checked around a few other places like TigerDirect).
I based the build around the Intel DQ67SWB3 motherboard with an Intel Core i5-2400 quad-core processor. The new motherboard has vPro, so it will have some nice remote management capabilities such as remote KVM via VNC. The Core i5-2400 and the Q67 chipset together offer both Intel VT and Intel Virtualization Technology for Directed I/O (VT-d).
While this isn't the latest and greatest of hardware, it's a big step up from my five-year-old Dell PowerEdge 860 with a dual-core Pentium D. The current virtual host is running Hyper-V Server with 2 Windows guests and 2 Ubuntu Linux guests.
For now, this new build will run these same guests, but I'm also considering virtualizing my main Windows file server / domain controller. I'm planning on running VMware vSphere Hypervisor (formerly ESXi) on this new machine since it's free, but I've also considered KVM and XenServer, so I'm not set in stone there yet. I could also continue running Hyper-V Server, but I'm kinda looking to experience something new, and I think VMware might have some better Linux guest support (although Ubuntu and CentOS have run just fine under Hyper-V for a while now).
The full build details are below:
The total purchase for my build was $686 after about $97 in promo code savings, which I think is pretty decent.
Stay tuned for Episode 2.