Migrating a Cross-targeted project.json .NET Core Library to csproj Using the .NET Core CLI

Today I migrated several of the internal C# .NET Core libraries we use at CSG from the project.json format to csproj using the .NET Core CLI tools. Everything went well except for one snag when building the NuGet packages. Here are the steps I went through to migrate and the errors I had to resolve along the way.

First, I deleted the global.json file that was pinning the sdk/cli tooling to a pre-csproj version. Then I opened a command prompt in the project directory and ran the migration command, dotnet migrate.

[screenshot: dotnet migrate output]

So far so good. I went ahead and tried to build a Release build with a NuGet package by running dotnet build --configuration Release, which is what our continuous integration build script does to generate the NuGet packages.

[screenshot: dotnet build output]

Well crap, the build command was failing with the following errors:

error MSB4018: The "PackTask" task failed unexpectedly.
error MSB4018: System.IO.FileNotFoundException: File not found: '...\bin\Release\net461\Csg.Data.Dapper.dll'.

There were various other flavors of this error involving the netstandard1.6 path instead of net461, but they were all similar. It appeared to be failing while building the nupkg because of missing build output files. What was weirder is that running the Build command (CTRL-SHIFT-B) in Visual Studio 2017 26228.4 showed the same errors in the Output window, but the build completed with a “Build: 1 succeeded, 0 failed, 0 skipped”. It wasn’t picking up on the fact that the dotnet pack command was failing. That seems like an unrelated problem with VS2017, so I went back to the command-line build. I tried a few variations of dotnet build using the --framework argument, which led me to believe it was some issue with the multi-targeting being used in the project.
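
For what it's worth, the variations I tried were along these lines (the framework monikers here match this particular project):

dotnet build --configuration Release --framework net461
dotnet build --configuration Release --framework netstandard1.6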

These projects target both the full .NET Framework 4.6.1 and .NET Standard 1.6. Before the migration, the frameworks section of the project.json looked like this:

"frameworks": {
  "netstandard1.6": {
    "dependencies": {
      "NETStandard.Library": "1.6.0"
    }
  },
  "net461": {
  }
}

It’s worth mentioning here that there is no support in the Visual Studio 2017 UI (the project properties page) for configuring multiple target frameworks, as mentioned by Damian Edwards in one of the recent ASP.NET Community Standups. The equivalent csproj syntax looks like this:

<PropertyGroup>
...
  <Authors>Justin R. Buchanan</Authors>
  <TargetFrameworks>netstandard1.6;net461</TargetFrameworks>
  <DebugType>portable</DebugType>
...
</PropertyGroup>

Note that the framework list is semicolon-delimited and lives in a <TargetFrameworks> element (plural) instead of <TargetFramework>.
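
As an aside, the per-framework dependencies from project.json (like the NETStandard.Library reference shown above) end up in conditional ItemGroups. It looks roughly like this in my migrated file; yours may differ:

<ItemGroup Condition=" '$(TargetFramework)' == 'netstandard1.6' ">
  <PackageReference Include="NETStandard.Library" Version="1.6.0" />
</ItemGroup>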

While I was in the csproj I noticed it had migrated this project.json scripts content:

"scripts": {
  "postcompile": [
    "dotnet pack --no-build --configuration %compile:Configuration%"
  ]
}

to this csproj equivalent:

<Target Name="PostcompileScript" AfterTargets="Build" Condition=" '$(IsCrossTargetingBuild)' != 'true' ">
  <Exec Command="dotnet pack --no-build --configuration $(Configuration)" />
</Target>

I removed the new <Target Name="PostcompileScript"> element entirely, then manually ran dotnet build followed by dotnet pack, and yay, it worked!

EDIT: I was reading over this again today and noticed the Condition on the <Target> that seems like it should have prevented me from having this error (although it wouldn’t have generated a NuGet package). I’ll have to do more research on exactly when $(IsCrossTargetingBuild) evaluates to true. It seems like I would have still needed to add the new dotnet pack step outlined below, though.

[screenshot: dotnet build succeeding]
[screenshot: dotnet pack succeeding]

The only change I had to make at this point was to add an extra step to our build.cmd (used by our build server) to run the dotnet pack command. Previously we had not needed this because dotnet build was running pack via the postcompile script. Now it looks something like this:

SET SOLUTION=Csg.Data.Dapper.sln
SET BUILD_CONFIG=Release
...
dotnet build %SOLUTION% --configuration %BUILD_CONFIG%
...
dotnet pack %SOLUTION% --no-build --configuration %BUILD_CONFIG%
...

I’m not sure if this is a bug, or if I’m doing something else wrong, or both, but this got me past the error and I was able to continue migrating the rest of our libraries from project.json to csproj.

Authenticate ASP.NET Core Identity Users via Active Directory or LDAP Password

Update: I have published an updated 2.0.0-preview00 release that supports ASP.NET Core Identity 2.0 on .NET Standard 2.0 at NuGet.org. I'll publish 2.0.0 without the "preview" tag once I hear back from a couple folks that this resolved their reported issues.

In a project I was recently working on, I needed a way to store and manage user accounts in a stock ASP.NET Core Identity Entity Framework Core based database, but validate user passwords against an existing Active Directory domain. In this situation, I could not leverage Kerberos/Windows Authentication because users were outside the Intranet, nor could I use ADFS or equivalent SSO services as it was beyond the scope of my project to deploy such a solution.

To achieve this, I created a simple UserManager wrapper class that overrides the base CheckPasswordAsync method with one that uses the Novell LDAP library for NETStandard 1.3 to perform an LDAP bind against a directory, and thus perform simple password validation.

I began by creating a UserManager class that inherits from Microsoft.AspNetCore.Identity.UserManager.

/// <summary>
/// Provides a custom user manager that overrides password-related methods to validate the user's password against LDAP.
/// </summary>
/// <typeparam name="TUser"></typeparam>
public class LdapUserManager<TUser> : Microsoft.AspNetCore.Identity.UserManager<TUser>
where TUser : class
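
The constructor isn't shown here; in my version it just forwards the standard Identity services to the base UserManager and grabs the LDAP options from dependency injection. A rough sketch (the published source may differ slightly):

private readonly LdapAuthenticationOptions _ldapOptions;

public LdapUserManager(
    IOptions<LdapAuthenticationOptions> ldapOptions,
    IUserStore<TUser> store,
    IOptions<IdentityOptions> optionsAccessor,
    IPasswordHasher<TUser> passwordHasher,
    IEnumerable<IUserValidator<TUser>> userValidators,
    IEnumerable<IPasswordValidator<TUser>> passwordValidators,
    ILookupNormalizer keyNormalizer,
    IdentityErrorDescriber errors,
    IServiceProvider services,
    ILogger<UserManager<TUser>> logger)
    : base(store, optionsAccessor, passwordHasher, userValidators, passwordValidators, keyNormalizer, errors, services, logger)
{
    // Capture the LDAP settings bound from configuration so CheckPasswordAsync can use them.
    _ldapOptions = ldapOptions.Value;
}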

Then I implement CheckPasswordAsync() using an LdapAuthentication class, which is just a loose abstraction around the Novell LDAP library.

/// <summary>
/// Checks the given password against the configured LDAP server.
/// </summary>
/// <param name="user"></param>
/// <param name="password"></param>
/// <returns></returns>
public override async Task<bool> CheckPasswordAsync(TUser user, string password)
{
    using (var auth = new LdapAuthentication(_ldapOptions))
    {
        string dn;

        // This gives a custom way to extract the DN from the user if it is different from the username.
        if (this.Store is IUserLdapStore<TUser>)
        {
            dn = await ((IUserLdapStore<TUser>)this.Store).GetDistinguishedNameAsync(user);
        }
        else
        {
            dn = await this.Store.GetNormalizedUserNameAsync(user, CancellationToken.None);
        }

        if (auth.ValidatePassword(dn, password))
        {
            return true;
        }
    }

    return false;
}
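
The IUserLdapStore<TUser> interface referenced above is tiny. Sketched here from how it's used; the definition in the actual library may differ:

/// <summary>
/// Optional store interface that supplies the LDAP distinguished name for a user
/// when it differs from the normalized user name.
/// </summary>
public interface IUserLdapStore<TUser> where TUser : class
{
    Task<string> GetDistinguishedNameAsync(TUser user);
}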

The meat of the LdapAuthentication class is in the ValidatePassword() method.

/// <summary>
/// Gets a value that indicates if the password for the user identified by the given DN is valid.
/// </summary>
/// <param name="distinguishedName"></param>
/// <param name="password"></param>
/// <returns></returns>
public bool ValidatePassword(string distinguishedName, string password)
{
    if (_isDisposed)
    {
        throw new ObjectDisposedException(nameof(LdapConnection));
    }

    if (string.IsNullOrEmpty(_options.Hostname))
    {
        throw new InvalidOperationException("The LDAP Hostname cannot be empty or null.");
    }

    _connection.Connect(_options.Hostname, _options.Port);

    try
    {
        _connection.Bind(distinguishedName, password);
        return true;
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine(ex.Message);
        return false;
    }
    finally
    {
        _connection.Disconnect();
    }
}
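
The rest of LdapAuthentication is just plumbing around a Novell LdapConnection. It has roughly this shape, though this is a sketch rather than the exact library source:

private readonly LdapAuthenticationOptions _options;
private readonly LdapConnection _connection = new LdapConnection();
private bool _isDisposed = false;

public LdapAuthentication(LdapAuthenticationOptions options)
{
    _options = options;
}

public void Dispose()
{
    // ValidatePassword() disconnects after every bind attempt, so all that's left
    // to do here is flag the instance as no longer usable.
    _isDisposed = true;
}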

At this point, I just needed some basic configuration and DI code to get things wired up in the Startup.cs of an ASP.NET Core app.

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<Justin.AspNetCore.LdapAuthentication.LdapAuthenticationOptions>(this.Configuration.GetSection("ldap"));
    services.AddLdapAuthentication<ApplicationUser>();
    services.AddIdentity<ApplicationUser, IdentityRole>()
        .AddUserManager<Justin.AspNetCore.LdapAuthentication.LdapUserManager<ApplicationUser>>()
        .AddEntityFrameworkStores<ApplicationDbContext>()                
        .AddDefaultTokenProviders();
}

This expects configuration to come from an appsettings.json section, which looks like this:

"ldap": {
  "Hostname": "dc1.example.com",
  "Port": 389
}
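
For reference, the options class that section binds to is essentially just the connection settings used by ValidatePassword() above. A sketch (the published package may expose additional settings):

public class LdapAuthenticationOptions
{
    public string Hostname { get; set; }

    // 389 is the standard non-TLS LDAP port.
    public int Port { get; set; } = 389;
}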

This allows me to keep the user accounts in a database (in this instance, a MySQL database), but eliminates the need for the user to have a separate password. It’s important to note that in my case, users do not need to be able to change, reset, or otherwise manage their user account password through the web interface, as they have a separate existing process in place for that.

I intend on coming back at some point and implementing more of the UserManager methods that *can* be implemented via LDAP, but for now all I needed was to eliminate the need for users to create a separate account password for this app.

The full source code is available on GitHub, or you can install the NuGet package:

Install-Package -Pre Justin.AspNetCore.LdapAuthentication

Fancy For Loops – Part 2

So this post is somewhat more abbreviated than originally intended, because I finished it two years after it was started. Since the last time I posted something on my blog, I’ve had another kid and built a house, so the blog got neglected. This post had been sitting in my drafts since 2014. It’s not my finest work or anything, but I figured I might as well get it posted.

Array.prototype.filter() or “Give me all the items where some condition is truthy”

Consider the following example that uses a for loop to produce an array of numbers that are all evenly divisible by 2 from an input list of numbers (without modifying the original array or its contents).

var numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var evenNumbers = [];
for (var i = 0; i < numbers.length; i++){
    if (numbers[i] % 2 === 0){
        evenNumbers.push(numbers[i]);
    }
}

// evenNumbers now contains:
// [2, 4, 6, 8, 10]

Again, since the above code uses a plain for loop, it is faster than the array prototype methods that accept callbacks, but it isn’t as concise as the equivalent written with the filter() method, shown below:

var numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var evenNumbers = numbers.filter(function(num){
    return num % 2 === 0;
});

The .filter() method again accepts two parameters, a callback function, and a thisArg. The callback function is executed once for each item in the array. The return value of .filter() is a new array containing all the items in the original array where executing callback() returned a truthy value.

When combined with map, the .filter() method vs. the for-loop can really shine when it comes to writing concise code. Consider the example below where we need to filter a set of people to only those whose age is greater than or equal to 18, and produce an array of just their age.

people.filter(function(x) { return x.age >= 18; }).map(function(x) { return x.age; });

Great, now we have an array of ages of people 18 or over. But what if we want to reduce that set to a single value, such as the maximum or average age?

Array.prototype.reduce() or “Reduce a set to a single value”

So what does reduce do exactly, and how can it help? Reduce takes a set of values and reduces it to a single value. This can be useful in all kinds of ways, but a simple example is the easiest place to start. Given a list of numbers, we could compute their sum like this:

var numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var sum = 0;
for (var i = 0; i < numbers.length; i++){
    sum += numbers[i];
}

Using reduce, we could write the above code this way:

var sum = numbers.reduce(function(accum, cur) {
    return accum + cur;
}, 0);

The reduce method accepts two parameters, the first being a callback function, just like map & filter. The arguments to the callback function are as follows:

  • accumulator: the value accumulated so far (the return value from the previous invocation of the callback, or the initial value on the first call)
  • current value: the value at the current index of the source array
  • current index: the index of the current array element
  • array: the array on which reduce() was called

The second parameter to reduce is the initial value of the accumulator. That is, the value that will be passed as the accumulator (first parameter) on the first invocation of the callback function. In the above example, I specify 0 as the initial value, because I want the sum.
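
If you leave the initial value off, reduce() seeds the accumulator with the first array element and starts the callback at index 1, which is handy for things like the maximum mentioned earlier. A quick sketch:

var numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var max = numbers.reduce(function(acc, cur) {
    // keep whichever of the two values is larger
    return cur > acc ? cur : acc;
});
// max is now 10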

Using this knowledge we can take our list of ages of people we produced above, and incorporate reduce to compute the average age in our list.

var averageAge = people
    .filter(function(x) { return x.age >= 18; })
    .map(function(x) { return x.age; })
    .reduce(function(acc, cur, index, arr) {
        var retval = acc + cur;
        // if we are looking at the last value, return the avg instead of the sum
        if (index === arr.length - 1) {
            return retval / arr.length;
        }
        return retval;
    });

While the above code is filled with potential performance issues, the clarity it provides probably outweighs the performance overhead in most cases. It is worth noting that, beyond the overhead of the callback functions, there is a second performance issue here that may not be entirely obvious: we are iterating over the array three times when it could be done in a single pass. Libraries like Underscore.js and Lodash can mitigate this with chaining, which *can* reduce many algorithms like the above into a single loop that performs better.
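
For comparison, here is roughly what the single-pass version looks like with a plain for loop (same people array as above):

var sum = 0;
var count = 0;
for (var i = 0; i < people.length; i++) {
    if (people[i].age >= 18) {
        sum += people[i].age;
        count++;
    }
}
var averageAge = count > 0 ? sum / count : 0;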

It’s worth noting that since this post was originally drafted, arrow functions have become widely available, either natively in browsers or via Babel or TypeScript, and would probably be a better way to write some of the above examples. Using arrow functions, the last code sample would look like this:

var averageAge = people
    .filter(x => x.age >= 18)
    .map(x => x.age)
    .reduce((acc, cur, index, arr) => {
        var retval = acc + cur;
        // if we are looking at the last value, return the avg instead of the sum
        if (index === arr.length - 1) {
            return retval / arr.length;
        }
        return retval;
    });

Fancy For Loops – Part 1

Using libraries like Underscore (or Lo-Dash) for traversing and transforming arrays or objects can be a great time saver (even though it seems like they did it wrong). However, when helping developers with anything new, I have found that the fewer “black box” libraries you throw at someone, the better they are able to learn what’s really going on. Performance and style discussions aside, I’d rather see a beginner JavaScript developer write tons of for or while loops before finding out what library X, Y, or Z does for them. Not using a library is also a great way to learn about writing your own algorithms, or about using polyfills because IE8 doesn’t support map() or filter().

Most of the array prototype methods can be implemented with a simple for or while loop, generally with better performance, but possibly not as elegantly or with the same level of reusability. This post is not intended to be a guide on when to use and when not to use these methods (or the Underscore/Lo-Dash equivalents), but rather to help explain the concepts.

Disclaimer: I am by no means a JavaScript performance or functional programming expert, but I play one at work.

Array.prototype.forEach() or “A for loop, but all functional and stuff”

Consider the following example that loops over each item in an array and logs the value to the debug console.

var values = [1,2,3,4,5,6,7,8,9,10];
for (var i = 0; i < values.length; i++){
    console.log('value at index ' + i + ': ' + values[i]);
}

This is pretty much the “Hello World” of for loops in JavaScript. We can write the same thing using the array prototype method .forEach(). According to the MDN, arrays have a forEach() method that accepts a callback function as the first argument, and thisArg as an optional second. We’ll ignore thisArg for this particular post.

As an aside, if you aren’t using the MDN while writing web apps, you are doing it wrong (or you have it all memorized, in which case you should be working for NASA or maybe as one of those waiters that never writes anything down to be all impressive and stuff).

The callback function will be executed once per array element, passing in the array element as the first argument, the index of the element as the second, and the array itself as the third. Using the forEach() method, we can produce the same output as the above code this way:

var values = [1,2,3,4,5,6,7,8,9,10];
values.forEach(function(value, idx, arr){
    console.log('value at index ' + idx + ': ' + value);
});

Array.prototype.map() or “Make all Items in an Array into Something Else”

Consider the following example that uses a for loop to produce an array of upper case letters from an array of lower case letters (without modifying the original array or its contents).

var letters = ['a','b','c','d','e','f'];
var upperLetters = [];

for (var i = 0; i < letters.length; i++){
    upperLetters.push(letters[i].toUpperCase());
}

// upperLetters now contains:
// ['A','B','C','D','E','F'];

We can write the same thing using map(). According to the MDN, the map() method on JavaScript arrays accepts two arguments: callback, and optionally thisArg. The first argument, callback, will be executed once for each item in the array. The return value of map() is a new array containing all the return values from callback. Underscore's map does the same thing, but it also works reliably in browsers that do not implement the native JavaScript map. The following code produces the same output as the code above.

var letters = ['a','b','c','d','e','f'];

var upperLetters = letters.map(function(value){
    return value.toUpperCase();
});

// upperLetters now contains:
// ['A','B','C','D','E','F'];

So why would you use the second example over the first, especially considering it is slower? The answer, of course, is “it depends”. Using map() with a named function or a function variable can be really useful for writing more concise code. Consider the following example that creates copies of three arrays while removing leading and trailing whitespace from the array elements:

var arr1 = [' bob', 'sally  ', '  tod', ' phil  '];
var arr2 = [' teresa ', ' julie  ', '  sandy  ', ' ron  '];
var arr3 = [' jason', ' jill  ', ' jane  ', ' sam  '];

// A tiny reusable trim function. (Passing String.prototype.trim directly into
// map() doesn't work, because trim() operates on `this`, not on an argument.)
var trim = function(value) {
    return value.trim();
};

var arr1trimmed = arr1.map(trim);
// ['bob', 'sally', 'tod', 'phil']
var arr2trimmed = arr2.map(trim);
// ['teresa', 'julie', 'sandy', 'ron']
var arr3trimmed = arr3.map(trim);
// ['jason', 'jill', 'jane', 'sam']

This code reuses the trim function, rather than passing an anonymous function inline like we did earlier. It should be noted that this simplistic trim() example will throw an exception if any of the values in the array are undefined, null, or anything else without a trim() method.
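
If you need to guard against non-string values, a slightly more defensive wrapper does the trick. A quick sketch:

var safeTrim = function(value) {
    // only trim actual strings; pass everything else through untouched
    return typeof value === 'string' ? value.trim() : value;
};

var trimmed = arr1.map(safeTrim);
// ['bob', 'sally', 'tod', 'phil']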

Hopefully this helps un-black-box things a bit. In Part 2, we’ll look at Array.prototype.filter(), Array.prototype.reduce(), and look more at what the underscore and Lo-Dash libraries provide relating to for-loops.

Linux Cheat Sheet

No blog would be complete without a Linux cheat sheet. I compiled this list when I first started learning Linux, and it has been sitting in my private wiki for a while. It is mainly a brain dump, is not comprehensive, and is mostly targeted at Debian and its derivatives.

If you want a good list of quick diagnostic commands, check out First 5 Minutes Troubleshooting A Server.

Get Help for Most Any Command

$ man ls

This displays the manual page (man page) for the ls command.

Execute Something as SuperUser

This assumes you are in the sudoers file. You will be prompted for your password.

$ sudo <command>

To become the superuser for the rest of your shell session, use sudo -s. Note this is dangerous and you shouldn’t do it most of the time!
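
For example (the # prompt indicates you are now root):

$ sudo -s
# whoami
root
# exit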

Get the Current Version of Linux

$ uname -a
Linux toboe 2.6.13-15-default #1 Tue Sep 13 14:56:15 UTC 2005 i686 i686 i386 GNU/Linux

Print the Contents of a Text File

The first command here (cat) dumps the whole file, while the second command (less) lets you page through the file.

$ cat /etc/crontab
$ less /var/log/kern.log

Analyze Disk Usage

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda3              14G  2.2G   12G  17% /
tmpfs                 252M   12K  252M   1% /dev/shm
/dev/hda2              87M   39M   49M  45% /boot

Find a Running Process

$ ps -ae | grep ssh
1435 ?      00:00:00 sshd
1877 ?      00:00:00 sshd
1879 ?      00:00:00 sshd

Find Available Devices (e.g. CD-ROMs, Hard Disks, etc.)

This command uses the wildcard character * to find all devices starting with ‘sd’. sd is the prefix typically given to hard disks. You could replace ‘sd’ with ‘cd’ to find cdrom drives, etc.

$ ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda5

Mount a CD/DVD ROM Drive

$ sudo mount /dev/cdrom /media/cdrom

List all Loaded Modules

$ lsmod
Module                  Size  Used by
ext2                   52937  1
loop                   11799  0
i2c_i801                7830  0
snd_hda_codec_analog    64562  1
radeon                574812  0
...etc...

Install a Package

For this example, we will install tcpdump. Note we are using sudo to run as super-user.

$ sudo apt-get update
$ sudo apt-get install tcpdump

Foreground/Background Operations

Pressing CTRL-Z suspends (stops) the current foreground process and returns you to the shell prompt.

To restore it to the foreground, use fg. To let it keep running in the background instead, use bg.

[1]+  Stopped                 sudo tcpdump -i eth0
$ do-something-else
$ fg
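
If you would rather let it keep running in the background instead of bringing it back, jobs shows what is suspended and bg resumes it (the job number will vary):

$ jobs
[1]+  Stopped                 sudo tcpdump -i eth0
$ bg %1
[1]+ sudo tcpdump -i eth0 &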