Justin R. Buchanan

on Software Development, Systems Administration, Networking, and Random other Stuff

Fancy For Loops – Part 2

So this post is somewhat more abbreviated than originally intended, because I finished it 2 years after it was started. Since the last time I posted something on my blog, I’ve had another kid and built a house, so my blog got neglected. This post had been sitting in my drafts since 2014. It’s not my finest work or anything, but I figured I might as well get it posted.

Array.prototype.filter() or “Give me all the items where some condition is truthy”

Consider the following example that uses a for loop to produce an array of numbers that are all evenly divisible by 2 from an input list of numbers (without modifying the original array or its contents).

var numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var evenNumbers = [];
for (var i = 0; i < numbers.length; i++){
    if (numbers[i] % 2 === 0){
        evenNumbers.push(numbers[i]);
    }
}

// evenNumbers now contains:
// [2, 4, 6, 8, 10]

Again, since the above code uses a plain for loop, it is faster than the array prototype methods that accept callbacks, but it is not as concise as the equivalent code written with the filter method, shown below:

var numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var evenNumbers = numbers.filter(function(num){
    return num % 2 === 0;
});

The .filter() method again accepts two parameters, a callback function, and a thisArg. The callback function is executed once for each item in the array. The return value of .filter() is a new array containing all the items in the original array where executing callback() returned a truthy value.
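
As a quick sketch of those parameters (the `threshold` object and its `min` property here are made up for illustration, and aren't from the original draft), the optional thisArg becomes `this` inside the callback:

```javascript
// Hypothetical example: the thisArg (second parameter to filter) becomes
// `this` inside the callback, so the filter criteria can live on an object.
var threshold = { min: 5 };
var numbers = [1, 4, 6, 9];
var big = numbers.filter(function(num) {
    return num >= this.min; // `this` is the threshold object passed below
}, threshold);
// big now contains [6, 9]
```

Note an arrow function would not work here, since arrows ignore thisArg and keep the enclosing `this`.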

When combined with map, the .filter() method really shines over the for-loop when it comes to writing concise code. Consider the example below, where we need to filter a set of people to only those whose age is greater than or equal to 18, and produce an array of just their ages.

people.filter(function(x) { return x.age >= 18; }).map(function(x) { return x.age; });

Great, now we have an array of ages of people 18 or over. But what if we want to reduce that set to a single value, such as the maximum or average age?

Array.prototype.reduce() or “Reduce a set to a single value”

So what does reduce do exactly, and how can it help? Reduce takes a set of values and reduces it to a single value. This can be useful in all kinds of ways, but a simple example is the easiest to start with. Taking a list of numbers, we could compute the sum of those numbers like this:

var numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var sum = 0;
for (var i = 0; i < numbers.length; i++){
    sum += numbers[i];
}

Using reduce, we could write the above code this way:

var sum = numbers.reduce(function(accum, cur) {
    return accum + cur;
}, 0);

The reduce method accepts two parameters, the first being a callback function, just like map & filter. The arguments to the callback function are as follows:

  • accumulator: the current value of the accumulator
  • current value: the value at the current index of the source array.
  • current value index: the index of the current array element
  • array: the array on which reduce() was called.

The second parameter to reduce is the initial value of the accumulator. That is, the value that will be passed as the accumulator (first parameter) on the first invocation of the callback function. In the above example, I specify 0 as the initial value, because I want the sum.
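
For example (not from the original draft), computing a maximum instead of a sum just means a different callback and a different initial value:

```javascript
var ages = [21, 34, 18, 40];
// Start the accumulator at -Infinity so any real age wins the first comparison
var maxAge = ages.reduce(function(max, age) {
    return age > max ? age : max;
}, -Infinity);
// maxAge is now 40
```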

Using this knowledge we can take our list of ages of people we produced above, and incorporate reduce to compute the average age in our list.

var averageAge = people
    .filter(function(x) { return x.age >= 18; })
    .map(function(x) { return x.age; })
    .reduce(function(acc, cur, index, arr) {
        var retval = acc + cur;
        // if we are looking at the last value, return the avg instead of the sum
        if (index === arr.length - 1) {
            return retval / arr.length;
        }
        return retval;
    }, 0);

While the above code is filled with potential performance issues, the clarity it provides probably outweighs the performance overhead in most cases. It is worth noting that beyond the performance hit of using the callback functions above, there is a second performance issue here that may not be entirely obvious. Namely, we are iterating over an array three times in the above code, instead of what could be once. Libraries like Underscore.js and Lodash can mitigate this using chaining, which *can* reduce many algorithms like the above into a single loop that performs better.
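
For comparison, here is a sketch of what that single-loop version might look like (the people array is hypothetical sample data; the post never defines it):

```javascript
var people = [{ age: 25 }, { age: 16 }, { age: 35 }]; // hypothetical sample data

var sum = 0;
var count = 0;
for (var i = 0; i < people.length; i++) {
    if (people[i].age >= 18) {  // the filter step
        sum += people[i].age;   // the map + reduce steps, fused into one pass
        count++;
    }
}
var averageAge = count > 0 ? sum / count : 0;
// averageAge is 30 for this sample data
```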

It’s worth noting that since this post was originally drafted, arrow functions have become widely available, either natively in browsers or via Babel or TypeScript, and would probably be a better way to write some of the above examples. Using arrow functions, the last code sample would look like this:

var averageAge = people
    .filter(x => x.age >= 18)
    .map(x => x.age)
    .reduce((acc, cur, index, arr) => {
        var retval = acc + cur;
        // if we are looking at the last value, return the avg instead of the sum
        if (index === arr.length - 1) {
            return retval / arr.length;
        }
        return retval;
    }, 0);

Fancy For Loops – Part 1

Using libraries like Underscore (or Lo-Dash) for traversing and transforming arrays or objects can be a great time saver (even though it seems like they did it wrong). However, when helping developers with anything new, I have found that the fewer “black box” libraries you throw at someone, the better they are able to learn what’s really going on. Performance and style discussions aside, I’d rather see a beginner JavaScript developer write tons of for or while loops before finding out what library X, Y, or Z can do for them. Not using a library is also a great learning experience in writing your own algorithms, or in using polyfills because IE8 doesn’t support map() or filter().

Most of the array prototype methods can be implemented with a simple for or while loop, generally with better performance, but possibly not as elegantly or with the same level of reusability. This post is not intended to be a guide on when to use and when not to use these methods (or the underscore/Lo-Dash equivalents), but rather help understand the concepts.

Disclaimer: I am by no means a JavaScript performance or functional programming expert, but I play one at work.

Array.prototype.forEach() or “A for loop, but all functional and stuff”

Consider the following example that loops over each item in an array and logs the value to the debug console.

var values = [1,2,3,4,5,6,7,8,9,10];
for (var i = 0; i < values.length; i++){
    console.log('value at index ' + i + ': ' + values[i]);
}

This is pretty much the “Hello World” of for loops in JavaScript. We can write the same thing using the array prototype method .forEach(). According to the MDN, arrays have a forEach() method that accepts a callback function as the first argument, and thisArg as an optional second. We’ll ignore thisArg for this particular post.

As an aside, if you aren’t using the MDN while writing web apps, you are doing it wrong (or you have it all memorized, in which case you should be working for NASA or maybe as one of those waiters that never writes anything down to be all impressive and stuff).


The callback function will be executed once per array element, passing in the array element as the first argument, the index of the element as the second, and the array itself as the third. Using the forEach() method, we can produce the same output as the above code this way:

var values = [1,2,3,4,5,6,7,8,9,10];
values.forEach(function(value, idx, arr){
    console.log('value at index ' + idx + ': ' + value);
});

Array.prototype.map() or “Make all Items in an Array into Something Else”

Consider the following example that uses a for loop to produce an array of upper case letters from an array of lower case letters (without modifying the original array or its contents).

var letters = ['a','b','c','d','e','f'];
var upperLetters = new Array(letters.length);

for (var i = 0; i < letters.length; i++){
    upperLetters[i] = letters[i].toUpperCase();
}

// upperLetters now contains:
// ['A','B','C','D','E','F'];

We can write the same thing using map(). According to the MDN, the map() method on JavaScript arrays accepts two arguments: callback, and optionally thisArg. The first argument, callback, will be executed over each item in the array. The return value of map() will be a new array containing all the return values from callback. Underscore's map does the same thing, but it works reliably in browsers that do not implement native JavaScript map. The following code produces the same output as the above code.

var letters = ['a','b','c','d','e','f'];

var upperLetters = letters.map(function(value){
    return value.toUpperCase();
});

// upperLetters now contains:
// ['A','B','C','D','E','F'];

So why would you use the second example over the first, especially considering it is slower? The answer, of course, is “it depends.” Using map() with a named function or function variable can be really useful for writing more concise code. Consider the following example code that creates copies of three arrays, while removing leading and trailing whitespace from the array elements:

var arr1 = [' bob', 'sally  ', '  tod', ' phil  '];
var arr2 = [' teresa ', ' julie  ', '  sandy  ', ' ron  '];
var arr3 = [' jason', ' jill  ', ' jane  ', ' sam  '];

function trim(value) {
    return value.trim();
}

var arr1trimmed = arr1.map(trim);
// ['bob', 'sally', 'tod', 'phil']
var arr2trimmed = arr2.map(trim);
// ['teresa', 'julie', 'sandy', 'ron']
var arr3trimmed = arr3.map(trim);
// ['jason', 'jill', 'jane', 'sam']

This code reuses the trim() function, rather than passing an anonymous function into forEach() like we did earlier.  It should be noted that this simplistic trim() function example will throw an exception if any of the values in the array are undefined, null, or not a string.
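
One way to guard against that exception (a sketch, not from the original draft; safeTrim is a made-up name) is a small named function that passes non-string values through untouched:

```javascript
// Hypothetical defensive variant: only trim actual strings
function safeTrim(value) {
    return typeof value === 'string' ? value.trim() : value;
}

var mixed = [' bob ', null, 42, ' sally'];
var cleaned = mixed.map(safeTrim);
// cleaned now contains ['bob', null, 42, 'sally']
```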

Hopefully this helps un-black-box things a bit. In Part 2, we’ll look at Array.prototype.filter(), Array.prototype.reduce(), and look more at what the underscore and Lo-Dash libraries provide relating to for-loops.

Linux Cheat Sheet

No blog would be complete without a Linux cheat sheet. I compiled this list when I first started learning Linux, and I’ve had it in my private wiki for a while. This list is mainly just a brain-dump, is not comprehensive, and is mostly targeted at Debian or derivatives.

If you want a good list of quick diagnostic commands, check out First 5 Minutes Troubleshooting A Server.

Get Help for Most Any Command

$ man ls

This would display the help file for the command "ls".

Execute Something as SuperUser

This assumes you are in the sudoers file. You will be prompted for your password.

$ sudo <command>

To become super user permanently for your shell session, use sudo -s. Note this is dangerous and you shouldn’t do it most of the time!

Get the Current Version of Linux

$ uname -a
Linux toboe 2.6.13-15-default #1 Tue Sep 13 14:56:15 UTC 2005 i686 i686 i386 GNU/Linux

Print the Contents of a Text File

The first command here (cat) dumps the whole file, while the second command (less) lets you page through the file.

$ cat /etc/crontab
$ less /var/log/kern.log

Analyze Disk Usage

$ df -h
/dev/hda3              14G  2.2G   12G  17% /
tmpfs                 252M   12K  252M   1% /dev/shm
/dev/hda2              87M   39M   49M  45% /boot

Find a Running Process

$ ps -ae | grep ssh
1435 ?      00:00:00 sshd
1877 ?      00:00:00 sshd
1879 ?      00:00:00 sshd

Find Devices Available (e.g. CDROM’s, HD’s, etc)

This command uses the wildcard character * to find all devices starting with ‘sd’. sd is the prefix typically given to hard disks. You could replace ‘sd’ with ‘cd’ to find cdrom drives, etc.

$ ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda5

Mount a CD/DVD ROM Drive

$ mount /dev/cdrom /media/cdrom

List all Loaded Modules

$ lsmod
Module                  Size  Used by
ext2                   52937  1
loop                   11799  0
i2c_i801                7830  0
snd_hda_codec_analog    64562  1
radeon                574812  0

Install a Package

For this example, we will install tcpdump. Note we are using sudo to run as super-user.

$ sudo apt-get update
$ sudo apt-get install tcpdump

Foreground/Background Operations

To suspend a running foreground process, press CTRL-Z (it will show as “Stopped”; use the bg command if you want it to keep running in the background).

To restore it to the foreground, use ‘fg’.

[1]+  Stopped                 sudo tcpdump -i eth0
$ do-something-else
$ fg

Return Paged Data from an IQueryable<T>

I finally broke down and wrote a standard extension method for IQueryable<T> that returns a single “page” of data from a source set. This method is terminal in a query chain, in that it returns an IEnumerable<T> and not an IQueryable<T>. It works by calling Count() to get the total number of records (which it sends back in the output parameter itemCount), and then uses the normal .Skip() and .Take() methods to return a single page. It should work against any IQueryable<T>, including LINQ to SQL and Entity Framework sequences.

/// <summary>
/// Gets a single page of items from a sequence.
/// </summary>
/// <typeparam name="T">The data type of the result items.</typeparam>
/// <param name="query">The sequence</param>
/// <param name="pageNumber">The page number to retrieve, starting at 1.</param>
/// <param name="pageSize">The number of items in each page.</param>
/// <param name="pageCount">Provides the total number of pages available.</param>
/// <returns></returns>
public static IEnumerable<T> TakePage<T>(this IQueryable<T> query, int pageNumber, int pageSize, out int pageCount)
{
    int itemCount;
    return TakePage(query, pageNumber, pageSize, out pageCount, out itemCount);
}

/// <summary>
/// Gets a single page of items from a sequence.
/// </summary>
/// <typeparam name="T">The data type of the result items.</typeparam>
/// <param name="query">The sequence</param>
/// <param name="pageNumber">The page number to retrieve, starting at 1.</param>
/// <param name="pageSize">The number of items in each page.</param>
/// <param name="pageCount">Provides the total number of pages available.</param>
/// <param name="itemCount">Provides the total number of items available.</param>
/// <returns></returns>
public static IEnumerable<T> TakePage<T>(this IQueryable<T> query, int pageNumber, int pageSize, out int pageCount, out int itemCount)
{
    if (pageNumber < 1)
        throw new ArgumentException("The value for 'pageNumber' must be greater than or equal to 1", "pageNumber");

    itemCount = query.Count();

    pageCount = (int)Math.Ceiling((double)itemCount / (double)pageSize);

    if (pageNumber > pageCount)
        pageNumber = pageCount;

    if (pageNumber > 1)
        return query.Skip((pageNumber - 1) * pageSize).Take(pageSize);

    return query.Take(pageSize);
}

How Not to Future-Proof an Application

I was recently troubleshooting a reported application crash for a client along with a fellow engineer from the client (let’s call him Bob).  A message box along the lines of “An Error Happened, Good Luck” showed up when a user clicked a certain button. Google searches for problems with this application typically turn up one post from 2003 that’s totally unhelpful.


This application is a behemoth comprised of some VB6, .NET and who knows what else running against SQL Server 2000.  Bob and I have some experience with this application, so we started a SQL Profiler trace to look at what was happening when the user clicked “The Button”.  As it turns out, when clicked, it was executing a SQL statement that looked a lot like this:

SELECT * FROM SomeDataTable WHERE SurrogateKey =  ORDER BY SomeDataField

So if you know anything about SQL (spend the next few hours here if you don’t), this statement is wrong. Knowing the application, it looked suspiciously like there was code somewhere like this:

command = "SELECT * FROM SomeDataTable WHERE SurrogateKey = " + SurrogateKeyParam + " ORDER BY SomeDataField"

SurrogateKeyParam was obviously empty for some reason. So Bob started running some queries to look at the data in SomeDataTable, which looked like this:

SurrogateKey CreateDate DataField1 DataField2 DataField3 etc…
10000009 12/6/2012  
10000010 12/6/2012  
10000011 12/6/2012  
10000012 12/6/2012  

I halfway jokingly said to him, “Surely that surrogate key isn’t too big for some reason”. I think we both laughed in a scary “That might really be it” sort of way, and then proceeded to run the numbers in our heads to see if 10,000,000 crossed some computer science wizard boundary somewhere. It was much too big to have crossed Int16 or UInt16 recently, and well below Int32’s maximum value. So we went back to the data from the day when this issue was first reported. It looked something like this:

SurrogateKey CreateDate DataField1 DataField2 DataField3 etc…
9999998 12/5/2012  
9999999 12/5/2012  
10000000 12/5/2012  
10000001 12/5/2012  

Based on the title of the post, you may already know where this is going. The issue was first reported on 12/5, and the values went from 7 to 8 digits on that day. We were both getting a “That can’t be a coincidence” feeling at this point, and Bob, in his East Coast accent, said something to the effect of “That can’t really be it”. We both knew there had to be some unplanned obsolescence in this application somewhere, and we were both hoping this wasn’t it.

So off to the “Test” environment we went. “Test” was well below reaching 8 digits for this value, and clicking “The Button” worked without a problem. Since the surrogate key values are allocated from a sequence table, we artificially set the value up to 9,999,999 and then clicked “The Button”. BAM! The “An Error Happened, Good Luck” message appeared. Confirmation of our worst fear, somewhere in the vast expanse of the GUI code mess that is this application, there was a variable or statement that was arbitrarily limiting the value of SurrogateKey to a length of 7 characters for who-knows-what reason. However, it wasn’t just trimming the value, somehow it was ending up NULL or empty.

Since we don’t have the source, we can only guess this is the result of incorrect data types and/or maximum lengths on an ADO Recordset, or worse (I guess) an = LEFT(SurrogateKey,7) is somewhere in the code. Who knows what it is really.

So after doing a quick search to see if anyone else was hiring so we could both quit before the helpdesk called back to check on our progress, we realized the data in this table is only used temporarily and then deleted.  Nothing else outside the process that “The Button” is used for references the SurrogateKey column.  Our fix was as easy as writing the following query:

UPDATE SequenceTable SET SurrogateKey=1 WHERE TableName='SomeDataTable'

We also made the necessary sacrifices to the volcano in the event this breaks something, and may the Lords of Kobol help us when (not if), we see this error on a table that needs referential integrity. In the past, tinkering in the database manually has caused numerous problems, but we had no other resolution.

So next time you are thinking to yourself, “I’ll just shove these Int32 values into 7 character strings”, don’t. Plan for your application to be used 5 years past the “Surely they will upgrade by now” or “surely they will never get to X” date. This one is going on 6 years past.

ASP.NET MVC HtmlHelper Extension Method for Menu Highlighting

I built an extension method on the ASP.NET MVC HtmlHelper class to handle the creation of navigation menu links (tabs) that have a different CSS class applied to them if they are active (i.e. the current page). I had been doing this manually in the master layout (_Layout.cshtml) view with a bunch of if blocks. Of course you can roll out a full-fledged sitemap to handle this, but I didn’t want to.

/// <summary>
/// Extension method for <see cref="HtmlHelper"/> to support highlighting the active tab on the default MVC menu
/// </summary>
/// <param name="htmlHelper"></param>
/// <param name="linkText">The text to display in the link</param>
/// <param name="actionName">Link target action name</param>
/// <param name="controllerName">Link target controller name</param>
/// <param name="activeClass">The CSS class to apply to the link if active</param>
/// <param name="checkAction">If true, checks the current action name to determine if the menu item is 'active', otherwise only the controller name is matched</param>
/// <returns></returns>
public static MvcHtmlString MenuLink(this HtmlHelper htmlHelper, string linkText, string actionName, string controllerName, string activeClass, bool checkAction)
{
    string currentAction = htmlHelper.ViewContext.RouteData.GetRequiredString("action");
    string currentController = htmlHelper.ViewContext.RouteData.GetRequiredString("controller");

    if (string.Compare(controllerName, currentController, StringComparison.OrdinalIgnoreCase) == 0 && ((!checkAction) || string.Compare(actionName, currentAction, StringComparison.OrdinalIgnoreCase) == 0))
        return htmlHelper.ActionLink(linkText, actionName, controllerName, null, new { @class = activeClass });

    return htmlHelper.ActionLink(linkText, actionName, controllerName);
}

activeClass sets the CSS class name that will be applied, and setting checkAction to true applies the class only if the action name and controller name match.

So in my _Layout.cshtml view, I changed all the Html.ActionLink calls to Html.MenuLink and added the activeClass and checkAction parameters.

    <ul id="menu">                        
        <li>@Html.MenuLink("Home", "Index", "Home", "active", true)</li>
        <li>@Html.MenuLink("About", "About", "Home", "active", true)</li>
        <li>@Html.MenuLink("Contact", "Contact", "Home", "active", true)</li>
        <li>@Html.MenuLink("Something Else", "Index", "Something", "active", false)</li>
    </ul>

The Home, About and Contact actions are all handled by the HomeController, and since they are distinct menu choices, we set checkAction to true. The Something Else option, on the other hand, should be highlighted anytime we hit an action on the SomethingController.

One final tweak was to add an overload to default the checkAction parameter to true.

public static MvcHtmlString MenuLink(this HtmlHelper htmlHelper, string linkText, string actionName, string controllerName, string activeClass)
{
    return MenuLink(htmlHelper, linkText, actionName, controllerName, activeClass, true);
}

And the updated view code:

    <ul id="menu">                        
        <li>@Html.MenuLink("Home", "Index", "Home", "active")</li>
        <li>@Html.MenuLink("About", "About", "Home", "active")</li>
        <li>@Html.MenuLink("Contact", "Contact", "Home", "active")</li>
        <li>@Html.MenuLink("Something Else", "Index", "Something", "active", false)</li>
    </ul>

Secure Remote Access through Application Specific and One-time Passwords

This is the first part of a series I will be doing on how I implemented one-time password support for remote access to resources inside my home network such as my webmail client, along with supporting application and device specific passwords for use on my mobile phone, tablet, etc.

I always hate the feeling of using any of my username and password combinations on a sketchy public computer somewhere. You know the kind I am talking about: those computers at hotels running Windows XP and IE6, signed in as "Administrator", with every toolbar and add-on installed, from a 9-year-old version of RealPlayer to three different versions of some Internet poker game. There's bound to be a key logger in there someplace.

One-time passwords have been around for a long time to mitigate this type of scenario. As the name suggests, a one-time password is valid only once. In theory, if someone sees or captures that password, it's worthless to them. Typically one-time passwords are accompanied by a normal password or a PIN. This mostly satisfies two-factor authentication, which requires something you know (the password or PIN) and something you have (the phone giving you the password). Google started offering one-time password support for Google accounts through the 2-step verification system and the Authenticator app.

So we have three different surfaces to protect:

  1. Apache HTTP access to the Roundcube webmail client.
  2. Postfix SMTP access to send mail from an external e-mail client.
  3. Dovecot IMAP access to retrieve mail from an external e-mail client.

We want to be able to use our one-time password to access #1, but since an e-mail client may login many times during a single session, #2 and #3 are better served by an application specific password that is sufficiently random but never changes.

So I set out by sketching out a few requirements:

  1. Make use of my existing e-mail platform (Ubuntu 12.04 + Postfix + Dovecot + RoundCube on Apache).
  2. Close any existing access points into home network via simple username/password combinations.
  3. Support application/device specific passwords for IMAP and SMTP clients (thunderbird, etc.).
  4. Support one-time passwords to access RoundCube webmail.
  5. Phone app to generate one-time passwords.
  6. Backup codes that can be printed for use in the case where the app is inaccessible.

Application Specific Passwords

Since the first step is to prevent logins to IMAP and SMTP from outside the firewall with passwords we may be typing in on a public computer, we need to provide secure passwords that will only be entered once, when configuring a device or application. We don’t need to remember these passwords, so we can revoke them and re-configure an application or device at any time.

Dovecot is configured to authenticate users against PAM, and PAM is configured to authenticate users with mod-auth-kerb. Postfix is configured to authenticate via SASL to Dovecot. So ultimately, there is a single username/password for all users through my Kerberos database.

Dovecot separates the concept of a user database and a password database, so I can keep my existing user database (Linux passwd, LDAP, etc.) and just alter the password database. To add additional password validation options to Dovecot, you simply add more passdb entries to the configuration file. One of those options happens to be SQL (including MySQL), so I went ahead and made a simple database and table to store our application specific passwords.

CREATE TABLE `dovecot_passwords` (
  `username` varchar(100) NOT NULL,
  `appname` varchar(50) NOT NULL,
  `password` varbinary(256) NOT NULL,
  PRIMARY KEY (`username`,`appname`)
);
You will notice username and appname make the unique key here, since we want to have multiple passwords for the same account. The data in this table might look something like this:

username  appname                 password
justin    K9 Mail on Phone        ********
justin    Thunderbird on Laptop   ********
justin    Thunderbird on Desktop  ********
sarah     iPhone                  ********

The value in the password field is the MD5 hash of the password without any whitespace (Yes, it should be salted and maybe using SHA1 instead). In order to make the application specific passwords more secure, I’m using rather long passwords, and so when I generate them I usually format them in blocks of four separated by spaces, such as xRtg Dbea 4d9g aP44. This is easier to type into a mobile keyboard while glancing back and forth between the device and the keyboard. The password database will need to ignore the whitespace, because we don’t care either way if it is there. For now I'll just manage entries in this table manually, but later I plan on writing a fancy CLI or GUI tool. So to insert new records in this table, I generate a random password, and do an insert:

INSERT INTO dovecot_passwords (username, appname, password) VALUES( 'justin', 'smartphone', MD5('xRtgDbea4d9gaP44') );

Now I can configure Dovecot to check this password database instead of PAM by changing my passdb entry to use the SQL driver instead of PAM.

passdb { 
    args = /etc/dovecot/dovecot-sql-other.conf 
    driver = sql 
}

And the associated SQL config file:

driver = mysql 
connect = host=localhost dbname=mail_db user=dovecot password=******** 
default_pass_scheme = PLAIN 
password_query = SELECT NULL AS password,'Y' as nopassword, username AS user 
                FROM dovecot_passwords
                WHERE username = '%u' AND password=MD5(REPLACE('%w',' ',''))

You can read more about how this configuration file works on the Dovecot Wiki page for SQL passdb, but essentially my query is removing any whitespace from the supplied password, and matching the MD5 hash. To really make this secure, we should be adding a password salt into the mix and probably using SHA1 for the hash algorithm. It is worth mentioning, that if you want to support regular username/password authentication via PAM for users on the internal network, but the application specific passwords everywhere else, this is possible by adding the pam_access module into your PAM configuration for Dovecot.

So now we have satisfied requirements #1, #2, and #3. Dovecot (and Postfix by means of SASL) will now authenticate users against the custom password database, and will not authenticate users with their old username/password.

Next up: Implementing #4, #5, #6 to support One-time passwords.

I Broke Remote Access to Hyper-V Server

In the process of setting up a new Hyper-V 2008 R2 server, I accidentally disabled "Host Access" to both network cards, thus killing remote access to the server (which is in my basement). Since I'm running Hyper-V Server which has no full GUI like Server Core installs, I don't have access to the normal Network Connections GUI to unbind the virtual switch protocol from the network cards.

I did some searching around the web, and all the solutions I found involve downloading some scripts or tools, and since I'm too lazy to put one of these tools on a USB drive and walk downstairs to run them, I wanted a solution that I could run from the command line remotely (as I have access to the server via Intel AMT's VNC KVM). I finally found what I was looking for on this blog post at ENIAC KB.

So, the steps are simple:

  1. Remove the virtual switch protocol from all network adapters:
    netcfg -u vms_pp
  2. Reboot
    shutdown /r
  3. Re-install the virtual switch protocol (which will leave it disabled by default):
    netcfg -l c:\windows\winsxs\amd64_wvms_pp.inf_31bf3856ad364e35_6.1.7600.16385_none_beda85050b13680c\wvms_pp.inf -c p -i vms_pp

So if you broke remote access to your Hyper-V server under server core, and you can still get access to the CLI either through a remote KVM or at the physical console, you can uninstall the Virtual switch protocol, reboot, re-install, and continue your network configuration from VMM or the Hyper-V Console remotely.

Windows 8 Rant


I installed Windows 8 in Virtual Box at home on my desktop with all the Direct3D/2D/etc. enhancements enabled so that it got the absolute best performance. I ran it full screen across all 3 of my monitors to give it a real honest trial.

I absolutely hate the stupid Metro start menu thing. It's so random what was made Metro and what wasn't. You are constantly switching between the Metro thing and the Desktop. It's just horrible on multiple monitors because the Metro start menu thing moves all around to other monitors whenever you move a window (do we still call them that?) between monitors. The hovering on the corners and then clicking thing to get the "Start" button is so much slower and non-obvious than clicking Start (granted using the Windows key or CTRL-ESC is faster than both).

Also, everything says "Tap here" now instead of click, which just secures my belief the design team spent almost zero hours thinking about non-tablet users (and I'm not even convinced it will be a good UI there, but I don't have any touch screen devices to test it on).

It is worse on Windows Server 2012, where every tool you use is not Metro, but the Start menu still is, so you are constantly switching. They redesigned the Server Manager tool to sort of look Metro-ish, but it's still a "legacy" desktop app. That said, it is pretty awesome in the way it centralizes management better than the 2008 Server Manager (I added this after using it for a while longer).

The way I see it, Windows 8 is half-finished, like a lot of things Microsoft has released recently (e.g. Vista).

At least they are offering an upgrade price of $39.99, presumably because they are expecting no one to want it.

I just don't get it… GET OFF MY LAWN!


Also, this video of some guy's Dad using Windows 8 rings true, even if it is a little staged.

New Virtual Host - Episode 2: Software Options

I'm waiting for my shiny new parts to arrive from Episode 1, so I decided to do some more research on the Hypervisor software options I have available. I'm definitely looking in the realm of free for the licensing cost factor, and most of the proprietary solutions offer similar features in this category. However, since I use my home network as my learning lab, I typically use more enterprise-class features than one normally might. There are a few features such as software-based RAID 1 and hot backups that I currently do make use of with Microsoft Hyper-V Server 2008 R2 that are problematic with some of the other options I'm looking at. While this isn't intended to be an exhaustive analysis, I've noted some of my findings below.

Initially I was planning on going with VMware vSphere Hypervisor. However, after realizing that software RAID 1 is not supported under vSphere, I don't think this is a good option for home. I understand VMware's reasons for this, because in an enterprise environment, an iSCSI SAN or onboard hardware RAID controller would usually be the better choice. However, I mostly steer clear of hardware RAID for home because it's an unnecessary complication, and typically expensive. I have been contemplating building an iSCSI host at home to centralize all my storage, but at the wise counsel of a friend, I think I've decided against that added complication.

I think hot backups are going to be more complicated with vSphere, at least without buying an actual backup software product. Most of the time I set up a backup solution on the guest VM to back up the important data on it, but I also like to schedule periodic full backups of the disk images. With Hyper-V, I can write a script around the diskshadow command line tool to make and mount a shadow copy to a drive letter to perform the backup. This method offers no downtime for Windows guests and a very short downtime for Linux guests while the shadow copy is being created. Once mounted, I can back up the VHDs with whatever tool I want.

Citrix XenServer is still an option. XenServer does appear to work fine with software RAID, at least according to this guide by Major Hayden over at Racker Hacker. I think this one is at least worth installing and trying out before I make a final selection.

The Linux Kernel-based Virtual Machine (KVM) really interests me; I think mostly because it seems like a fun new challenge. KVM is relatively new, although the same can be said for Hyper-V. Ubuntu seems to have quite a bit of documentation on setting up KVM and managing it with libvirt (and without), so I could probably start there. This one also goes on the "install it and try it out" list.

Stay tuned for Episode 3.