NuGet and chocolatey behind a proxy

NuGet historically didn’t handle company proxies, a common enough problem, so a lot of the guidance on the ‘net about dealing with a proxy that isn’t playing nicely with your default credentials is now out of date. Here are two tips: the first will let you get Chocolatey installed and the second will configure NuGet so that Chocolatey can do its goodness.

install chocolatey with windows auth proxy

The above gist will let you install Chocolatey from a PowerShell command line. I used the normal download URL like this:

Set-ExecutionPolicy -ExecutionPolicy Unrestricted;$a=new-object net.webclient;$a.proxy.credentials=[system.net.credentialcache]::defaultnetworkcredentials;$a.downloadstring('https://chocolatey.org/install.ps1')|iex
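That one-liner is dense, so here it is split out for readability (functionally identical, just reformatted):

Set-ExecutionPolicy -ExecutionPolicy Unrestricted

$client = New-Object System.Net.WebClient
$client.Proxy.Credentials =
    [System.Net.CredentialCache]::DefaultNetworkCredentials

# Download the install script and run it.
$client.DownloadString('https://chocolatey.org/install.ps1') | Invoke-Expression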

NuGet configuration settings

From the linked configuration settings above I added:

.\nuget config -Set http_proxy=company-proxy:8080
.\nuget config -Set http_proxy.user=mylogin
.\nuget config -Set http_proxy.password=secret

Note that the password is encrypted in your NuGet.config so you can’t edit it directly.

Now I can use Chocolatey without seeing:

Please provide proxy credentials:
UserName: Cannot prompt for input in non-interactive mode.

I believe that PowerShell and NuGet are trying to present the default web credentials rather than the default network credentials, and in our case the proxy we lurk behind isn’t satisfied with that, hence the need to be explicit. It’s possible that NuGet was failing to present any credentials at all; I could investigate with Fiddler but these fixes have solved it for me. YMMV.
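For what it’s worth, the distinction in .NET terms is between CredentialCache.DefaultCredentials and CredentialCache.DefaultNetworkCredentials. Being explicit about the latter looks something like this (a sketch of the .NET API, not anything NuGet itself exposes):

// Attach the default *network* credentials, rather than the default
// (web) credentials, to the default proxy.
IWebProxy proxy = WebRequest.DefaultWebProxy;
proxy.Credentials = CredentialCache.DefaultNetworkCredentials;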


Git for Windows --help

Git for Windows wasn’t finding my web browser for launching help pages. The HTML was launched in Notepad.

First I made sure that help should be launching as web pages. This is the default on Windows anyway, as neither an info nor a man viewer is available.

$ git config --global help.format web

I tried tracing what was happening:

$ GIT_TRACE=1 git stash --help
trace: built-in: git 'help' 'stash'
Launching default browser to display HTML ...

Not enough information. I tried to use the web--browse script explicitly:

$ GIT_TRACE=1 git web--browse --browser=chrome 'http://news.bbc.co.uk/'
trace: exec: 'git-web--browse' '--browser=chrome' 'http://news.bbc.co.uk/'
trace: run_command: 'git-web--browse' '--browser=chrome' 'http://news.bbc.co.uk/'
trace: built-in: git 'config' 'browser.chrome.path'
The browser chrome is not available as 'chrome'.

Looks like the script couldn’t find my web browser:

$ git config --global browser.chrome.path "C:\Program Files\Google\Chrome\Application\chrome.exe"

No joy. I tried Doug Knox’s Windows XP File Association Fixes for HTM/HTML Associations. Nope.

Finally I went to the files in Git\doc\git\html and opened the Properties for one of the .html files and set the ‘Opens with:’ to Chrome. Joy!

So I can’t be sure exactly what worked here. I changed the git config to work with Firefox and the file ‘Opens with:’ to Firefox and it still worked. I’ve not repeated it with IE because, well, you wouldn’t, would you?

Windows Azure Cloud Storage using the Repository pattern

Repository pattern instead of an ORM but with added Unit of Work and Specification patterns

When querying Azure Tables you will usually use the .NET client for the RESTful interface. The .NET client provides a familiar ADO.NET syntax that is easy to use and works wonderfully with LINQ. To prevent the access code becoming scattered through your codebase you should collect it into some kind of data access layer (DAL). You should also be thinking about the testability of your code, and the simplest way to provide that is to put interfaces in front of your data access code. Okay, so there’s nothing earth-shattering here, but getting the patterns together and learning to use Azure Tables to their best is probably new to you or your project.

IRepository

What do you want to provide to every object that needs a backing store? I’d suggest searching and saving so here are the two methods every repository is going to need.

public interface IRepository<TEntity> where TEntity : TableServiceEntity
{
  IEnumerable<TEntity> Find(params Specification<TEntity>[] specifications);

  void Save(TEntity item);
}

IEntityRepository

What about getting back a particular entity, making changes and saving that back? The first thing to note is that in Azure Tables an entity is stored in the properties of a Table row *but* other entities may also be stored in the same Table. So think entity and not table, which is different to how you would normally think of a repository.

Let’s say for this example I want to be able to get a single entity, get a range of entities, delete a given entity and even page through a range of entities.

To keep the code cleaner I’m going to pass in the parameters as already formed predicates for my where clause. There’s little advantage to using the Specification pattern here other than I think it makes the code a little more explicit.
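The Specification<TEntity> class itself isn’t shown in this post. A minimal sketch of what these interfaces assume (the class just wraps the predicate used in the Where clause; ByPartitionKeySpecification is one concrete example used later) might look like:

public abstract class Specification<TEntity>
{
    // The predicate the repository will pass to Where().
    public Expression<Func<TEntity, bool>> Predicate { get; protected set; }
}

public class ByPartitionKeySpecification : Specification<Entity>
{
    public ByPartitionKeySpecification(string partitionKey)
    {
        this.Predicate = entity => entity.PartitionKey == partitionKey;
    }
}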

public interface IEntityRepository : IRepository<Entity>
{
    void Delete(Entity item);

    Entity GetEntity(params Specification<Entity>[] specifications);

    IEnumerable<Entity> GetEntities(
        params Specification<Entity>[] specifications);

    IEnumerable<Entity> GetEntitiesPaged(
        string key, int pageIndex, int pageSize);
}

EntityRepository

public class EntityRepository : RepositoryBase, IEntityRepository
{
    public EntityRepository(IUnitOfWork context) 
        : base(context, "table")
    {
    }

    public void Save(Entity entity)
    {
        // Insert or Merge Entity aka Upsert (>= v.1.4).
        // In case we are already tracking the entity we must 
        // first detach for the Upsert to work.
        this.Context.Detach(entity);
        this.Context.AttachTo(this.Table, entity);
        this.Context.UpdateObject(entity);
    }

    public void Delete(Entity entity)
    {
        this.Context.DeleteObject(entity);
    }

    public Entity GetEntity(
        params Specification<Entity>[] specifications)
    {
        return this.Find(specifications).FirstOrDefault();
    }

    public IEnumerable<Entity> GetEntities(
        params Specification<Entity>[] specifications)
    {
        return this.Find(specifications);
    }

    public IEnumerable<Entity> GetEntitiesPaged(
        string partitionKey, int pageIndex, int pageSize)
    {
        var results = this.Find(
            new ByPartitionKeySpecification(partitionKey));

        return results.Skip(pageIndex * pageSize).Take(pageSize);
    }

    public IEnumerable<Entity> Find(
        params Specification<Entity>[] specifications)
    {
        IQueryable<Entity> query = 
            this.Context
            .CreateQuery<Entity>(this.Table)
            .AsTableServiceQuery();

        query = specifications.Aggregate(
            query, (current, spec) => 
            current.Where(spec.Predicate));

        return query.ToArray();
    }
}

It’s easy enough to pass in a context for your repository following the Unit of Work pattern. You can create this quite simply (see TableStorageContext following). You have to define which Table your entity is stored in, and you want both that and your context as properties of your class. I find it cleaner to manage (and easier for the next developer to implement) if that work is done in a base class, RepositoryBase.
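The IUnitOfWork interface isn’t shown here; from its use in TableStorageContext below it only needs Commit and Rollback, so a sketch would be:

public interface IUnitOfWork
{
    // Push the pending changes to storage.
    void Commit();

    // Discard the pending changes.
    void Rollback();
}

And RepositoryBase itself: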

public class RepositoryBase
{
    public RepositoryBase(IUnitOfWork context, string table)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }

        if (string.IsNullOrEmpty(table))
        {
            throw new ArgumentNullException(
                "table", "Expected a table name.");
        }

        this.Context = context as TableServiceContext;
        this.Table = table;

        // belt-and-braces code - 
        // ensure the table is there for the repository.
        if (this.Context != null)
        {
            var cloudTableClient = 
                new CloudTableClient(
                    this.Context.BaseUri, 
                    this.Context.StorageCredentials);
            cloudTableClient.CreateTableIfNotExist(this.Table);
        }
    }

    protected TableServiceContext Context { get; private set; }

    protected string Table { get; private set; }
}

So now we actually get to the meat of the matter and implement our TableServiceContext methods for the CRUD functionality we need. In this example I’ve a single Save method that uses the ‘Upsert’ (InsertOrMerge) functionality available in Azure since v.1.4 (2011-08). The Find method is there for convenience – if it doesn’t suit your query then simply don’t use it.

TableStorageContext

public class TableStorageContext : TableServiceContext, IUnitOfWork
{
    // Constructor allows for setting up a specific 
    // connection string (for testing).
    public TableStorageContext(string connectionString = null)
        : base(
            BaseAddress(connectionString),
            CloudCredentials(connectionString))
    {
        this.SetupContext();
    }

    // NOTE: the implementation of Commit may vary depending on 
    // your desired table behaviour.
    public void Commit()
    {
        try
        {
            // Insert or Merge Entity aka Upsert (>=v.1.4) uses 
            // SaveChangesOptions.None to generate a merge request.
            this.SaveChanges(SaveChangesOptions.None);
        }
        catch (DataServiceRequestException exception)
        {
            var dataServiceClientException =       
                exception.InnerException as 
                DataServiceClientException;
            if (dataServiceClientException != null)
            {
                if (
                    dataServiceClientException.StatusCode == 
                    (int)HttpStatusCode.Conflict)
                {
                    // a conflict may arise on a retry where it
                    // succeeded so this is ignored.
                    return;
                }
            }

            throw;
        }
    }

    public void Rollback()
    {
        // TODO: clean up context.
    }

    private static string BaseAddress(string connectionString)
    {
        return CloudStorageAccount(connectionString)
            .TableEndpoint.ToString();
    }

    private static StorageCredentials CloudCredentials(
        string connectionString)
    {
        return CloudStorageAccount(connectionString).Credentials;
    }

    private static CloudStorageAccount CloudStorageAccount(
        string connectionString)
    {
        var cloudConnectionString = 
            connectionString ?? 
                CloudConfigurationManager
                .GetSetting("CloudConnectionString");
        var cloudStorageAccount =     
            Microsoft.WindowsAzure.CloudStorageAccount.Parse(
                cloudConnectionString);
        return cloudStorageAccount;
    }

    private void SetupContext()
    {
        /*
         * This retry policy will introduce a greater delay on retries
         * than the original setting of 3 retries in 3 seconds, but it
         * will surface a problem with the system without the system
         * failing completely.
         */
        this.RetryPolicy = 
            RetryPolicies.RetryExponential(
                RetryPolicies.DefaultClientRetryCount, 
                RetryPolicies.DefaultClientBackoff);

        // don't throw a DataServiceRequestException when 
        // a row doesn't exist.
        this.IgnoreResourceNotFoundException = true;
    }
}

In my ServiceDefinition config I have a CloudConnectionString. This has to be parsed to get the endpoint and account details before I can create the TableServiceContext. A couple of static methods do the job. This object also implements the Commit and Rollback methods for the Unit of Work. My Commit is implementing ‘Upsert’ so you may want it to be different or you may want to have different implementations of TableStorageContext that you can pass in to your Repository class depending on how it needs to talk to storage.
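Wiring the pieces together might look something like this (a hypothetical usage; the entity values are made up):

// One unit of work shared with the repository.
var context = new TableStorageContext();
var repository = new EntityRepository(context);

repository.Save(new Entity { PartitionKey = "GB", RowKey = "42" });
context.Commit();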

Further Architectural Options

I favour Uncle Bob’s Clean Architecture and as such I wouldn’t expose my Repository classes to other modules. I would wrap them in a further service layer that would receive and pass back Model objects. Cloud Table Storage is much more flexible than relational database storage but you have to think about it quite differently and the structure of your code will be very different to what you may be used to.

I’ve placed the Repository project on github: WindowsAzureRepository.

Unit testing Expressions with Moq

When setting up a mock object with the Moq framework you can specify what parameters may be passed to the mock and thus what to return when the mock encounters those specific parameters.

This falls down in the odd instance when you’re trying to pass a lambda expression to an optional parameter. This occurs, for example, on IRepository<T>.Find() as the where: and the orderby: parameters are both optional. You can’t pass an expression tree to an optional parameter as it hasn’t been compiled yet.
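The post doesn’t show the Find() signature in question, but from the test below its shape is roughly this (the parameter names and IOrderByClause are taken from that test; the rest is assumption):

public interface IRepository<T>
{
    IEnumerable<T> Find(
        Expression<Func<T, bool>> where = null,
        params IOrderByClause[] orderBy);
}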

Moq gets around this by allowing It.IsAny<T>(), so at least we can specify the type of the expression to accept. How do we know, though, whether our mocked interface was called correctly? Quite different expressions, and any parameters, could have been used.

Fortunately, you can access the actual expression used in the .Returns() callback on the mock. Here you can create an anonymous function that will test the signature, the parameters used and still return the object mothers you’ve specified.

Here’s what we’ve got so far.

   
[TestMethod]
public void ShouldFindUser()
{
    Expression<Func<User, bool>> expected =
        x =>
        x.Name == this.sut.Name &&
        x.Memberships[0].Password ==
            this.sut.Memberships[0].Password;

    // note: optional parameters that are passed expression trees
    // can't be compiled in .NET 4.0 but Moq's It.IsAny<T> saves
    // the day.
    repository.Setup(
        r =>
        r.Find(
            It.IsAny<Expression<Func<User, bool>>>(),
            It.IsAny<IOrderByClause[]>()))
        .Returns(
            (Expression<Func<User, bool>> where,
             IOrderByClause[] order) =>
                {
                    // note: before the expressions can be compared
                    // they must be partially evaluated.
                    this.ExpressionMatch(
                        Evaluator.PartialEval(where),
                        Evaluator.PartialEval(expected));
                    return suts;
                });

    service = new UserService(repository.Object);

    bool isValidUser =
        this.service.ValidateUser(
            this.sut.Name,
            this.sut.Memberships[0].Password);

    Assert.IsTrue(isValidUser, "Expected to find user.");
}

Comparing the expressions, however, introduces more problems. The expressions have not been compiled yet, so they hold unevaluated references to closed-over variables, and those references will differ between the actual expression and the expected expression you defined using your object mother for the parameters.

You need to partially evaluate the expressions, turning those references into constants, before you can compare them (the comparison essentially compares the two .ToString() products). Finally, you can wrap a unit test assertion around the equality comparison.
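ExpressionEqualityComparer isn’t shown in this post; given the description above, a minimal sketch would simply compare the string renderings of the two (already partially evaluated) expressions:

public static class ExpressionEqualityComparer
{
    // Assumes both expressions have been through Evaluator.PartialEval
    // so that closed-over variables have become constants.
    public static bool ExpressionEqual(Expression actual, Expression expected)
    {
        return actual.ToString() == expected.ToString();
    }
}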

private void ExpressionMatch(Expression actual, Expression expected)
{
    var isEqual =
        ExpressionEqualityComparer.ExpressionEqual(actual, expected);

    Assert.IsTrue(isEqual, "Expected the expressions to match.");
}

Now if someone alters the code that calls to the interface the test will fail. Otherwise it would have been joyfully returning object mothers for any old query passed to the interface.

Technical Debt

We need to start recording our technical debt, I think.

For example, if Identity Server was a packaged third party product we’d be okay but it’s actually quite rough demonstration code with only a small integration test harness. It should be brought up to the same standard as the rest of the code base (eventually). So it works but we have a fair amount of technical debt that we need to record.

Note that we can’t use stories for technical debt: the debt was accrued in getting a story to ‘done’, and the points for that story have already been earned.

Also, as we are developing, there will be times we add TODO/HACK into the code but the story still meets its acceptance tests. This extra work should be recorded as tasks in the backlog and then ordered. A rough estimate in hours added to each task will reveal our technical debt.

There will be times when we deliberately let the code quality slip, usually by choosing to ignore some of our metrics going into the red, in order to make a release available. That technical debt needs to be recorded, too.

How we pay down the technical debt is another matter. Preferably we wait until we are revisiting that piece of work and have a need to refactor. If we have so much cruft in a piece of code that adding a new feature is risky then that’s another time when the debt needs to be paid down. Otherwise, let it accrue; debt is a useful resource in the project budget.

Easy to say, harder to do. 🙂

Mercurial with a Subversion central repository

This How-To is for developers wishing to use Mercurial (Hg) on their Windows development boxes when the source code is in a central Subversion (SVN) repository.

Install TortoiseHg.

In a Windows development environment the easiest way to get started with Hg is to install TortoiseHg. The Windows Explorer extension not only makes it easy to work with files by using icon overlays and context menus but also packages up Python, the SVN bindings and a large number of Hg extensions that will prove very useful.

If you are behind a proxy you must first configure Hg with the proxy settings. Use the TortoiseHg context menu and Global Settings > Proxy.

Working with SVN.

Given that you are working against a central SVN repository, Hg has a number of extensions that can talk to the SVN repo. I use hgsubversion. To install this extension you need to clone the hgsubversion repo:

hg clone http://bitbucket.org/durin42/hgsubversion c:\hgsvn

Use the TortoiseHg context menu and Global Settings > Edit File.

[extensions]
hgsubversion = c:\hgsvn

Start with a single project. It’s better to clone the whole repo, the trunk, branches and tags, rather than just a branch. This way you only need to do the clone once – clones can take a while. Create a project folder, e.g. c:\development\local\myproject, change to it and then clone the SVN repo:

hg clone svn://subversion/myproject/

Ignore.

It’s best to set up the ignore list for Hg now. Save this as .hgignore and copy it to the root of your local Hg clone (next to .hg).

# Mercurial .hgignore file template by Abdullin.com
# For syntax see: http://linux.die.net/man/5/hgignore
# Source: http://bitbucket.org/abdullin/snippets/

# Visual Studio
glob:*.user
glob:*.suo
glob:*.cache
glob:_ReSharper.*/
relre:/obj/
relre:/bin/
relre:/Bin/

# Subversion
glob:.svn/
glob:_svn/

# Build structure
relre:^Build/

# Misc
glob:Thumbs.db
glob:*.bak
glob:*.log

Local Development.

If you use local branches and merge them in your local Hg repo you won’t be able to push changes to SVN.

If you commit often then you will end up with lots of changesets to push to SVN that will make it harder to see what the intention of your update was.

A good way to solve both problems is to submit patches to the central repo. Hg has an extension for patch management called Mercurial Queues (MQ).

Mercurial Queues.

Mercurial Queues (MQ) provides patch management integrated with Hg. This is incredibly useful for packaging up a lot of small changes that you record as you work locally into a single submission to the central repo. You can also work on a number of patches concurrently.

Enable the MQ extension in Hg, TortoiseHg > Global Settings > Extensions and check ‘mq’. In the mercurial.ini set diffs to use the git format.

[diff]
git = True

In the Hg Repository Explorer you can create a new patch from the latest revision. This means you will need to commit something you want to go into a patch before you can create the patch. The other way is to use the command line.

hg qnew mylatestpatch

qnew will include the latest changes found in the working directory.
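A typical local cycle might then look something like this (qrefresh and qfinish are standard MQ commands; the exact flow is a sketch):

hg qnew mylatestpatch    # capture the working directory changes as a patch
# ...more local work...
hg qrefresh              # fold the new changes into the current patch
hg qfinish --applied     # convert the applied patches into changesets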

qfold

One way to work with plenty of local commits but to have a single, tidy submission to the central repo is to use MQ’s qfold to create a patch.

Once you have some changesets you wish to publish you convert them into patches. In Hg Repository Explorer use the context menu on the latest changeset first: Mercurial Queues > Import Revision to MQ. Repeat for each of the changesets. Next, unapply the latest patches until just the earliest is still applied. Now at the command line use qfold. When you refresh Hg Repository Explorer you will see a single patch. Use the patch context menu’s ‘Finish Applied’ to change the patch into a single changeset you can push to the central repo.

For example, you have three changesets: 16, 17 and 18. You then create three patches: 16.diff, 17.diff and 18.diff. You unapply 17.diff and 18.diff, then go to the command line:

hg qfold 17
hg qfold 18

Now you have a single patch, 16.diff, that contains all the work committed in the original three changesets.

SpecFlow

Acceptance Tests for User Stories allow the Product Owner to more easily say whether or not they accept a Story as ‘Done’. Also, Acceptance Tests can be used in Behaviour Driven Development (BDD) to provide an “outside in” development process that complements the “inside out” coding style of Test Driven Development (TDD).

SpecFlow brings Cucumber BDD to .NET without the need for a Ruby intermediary like IronRuby.

In your Tests project add a Features folder. SpecFlow installs some templates into VS.NET, so add a new SpecFlowFeature and fill it out like the following example, InterestRatings.feature:

Feature: Interest Ratings
	In order to manage the Interest Ratings
	As a Trader
	I want an Interest Ratings screen

@view
Scenario: View the Interest Ratings
	Given a repository of Interest Rating records
	When I view the Interest Rating screen 
	Then the full list is created

@create
Scenario: Create an Interest Ratings record
	Given a repository of Interest Rating records
	When I create an Interest Rating with name Test 
	And Interest Rating code 4
	Then the Interest Rating is saved to the repository with name Test and code 4

The scenarios are the tests. The format is a clear Given-When-Then description. As you create the scenario the .feature.cs will be updated for you.

Now you need to link up the statements in your scenario to steps that the unit test framework can execute. Create a Steps folder under Features and add a SpecFlowStepDefinition. You’ll find the generated file has some useful placeholders to get you started. Here, for example, is InterestSteps.cs:

    [Binding]
    public class InterestSteps
    {
        private IInterestRatingService interestService;
        private InterestRatingViewModel interestRatingViewModel;
        private InterestRating rating;
        private Mock<IValidationService> validationService 
                      = new Mock<IValidationService>();
        private Mock<ILoadScreen> loadScreen 
                      = new Mock<ILoadScreen>();
        private Mock<IServiceLocator> serviceLocator 
                      = new Mock<IServiceLocator>();
        private Mock<IRepository<InterestRating>> interestRepository 
                      = new Mock<IRepository<InterestRating>>();
        private List<InterestRating> interestRatings;

        [Given("a repository of Interest Rating records")]
        public void GivenARepositoryOfInterestRatingRecords()
        {
            Mock<IValidator> validator = new Mock<IValidator>();
            this.serviceLocator
                     .Setup(s => s.GetInstance<IValidationService>())
                     .Returns(this.validationService.Object);
            this.validationService
                     .Setup(v => v.GetValidator(
                         It.IsAny<InterestRating>()))
                     .Returns(validator.Object);
            this.serviceLocator
                     .Setup(s => s.GetInstance<IEventAggregator>())
                     .Returns(new EventAggregator());
            this.serviceLocator
                     .Setup(s => s.GetInstance<ILoadScreen>())
                     .Returns(this.loadScreen.Object);
            ServiceLocator.SetLocatorProvider(() => this.serviceLocator.Object);
            this.interestRatings = 
                      InterestRatingMother
                          .CreateGoodInterestRatingMother()
                          .InterestRatings
                          .Cast<InterestRating>().ToList();
            this.interestRepository
                     .Setup(s => s.GetAll())
                     .Returns(this.interestRatings);
            this.interestService 
                  =  new InterestRatingService(
                                this.interestRepository.Object);
        }


        [When("I view the Interest Rating screen")]
        public void WhenIViewTheInterestRatingScreen()
        {
            this.interestRatingViewModel 
                  = new InterestRatingViewModel(this.interestService);
            this.interestRatingViewModel.Load();
        }

        [When("I create an Interest Rating with name (.*)")]
        public void WhenICreateAnInterestRatingWithName(string name)
        {
            this.interestRatingViewModel 
                  = new InterestRatingViewModel(this.interestService);
            this.interestRatingViewModel.Load();
            this.interestRatingViewModel.Add();
            this.rating 
                  = this.interestRatingViewModel
                              .InterestRatings[
                                   this.interestRatingViewModel
                                        .InterestRatings.Count - 1];
            this.rating.InterestRatingName = name;
        }

        [When("Interest Rating code (.*)")]
        public void AndInterestRatingCode(int code)
        {
            this.rating.InterestRatingCode = code;
            this.interestRatingViewModel.Save();
        }

        [Then("the full list is created")]
        public void ThenTheFullListIsCreated()
        {
            Assert.That(
                this.interestRatings.Count 
                      == this.interestRatingViewModel
                               .InterestRatings.Count);  
        }


        [Then("the Interest Rating is saved to the repository 
         with name (.*) and code (.*)")]
        public void ThenTheInterestRatingIsSavedToTheRepository(
                           string name, int code)
        {
            InterestRating rating 
                = (from m in this.interestRatingViewModel.InterestRatings
                   where m.InterestRatingName.Equals(name)
                   select m).Single();

            Assert.That(
                rating.InterestRatingName.Equals(name),
                "The interest rating name was not saved.");

            Assert.That(
                rating.InterestRatingCode == code,
                "The interest rating code was not saved.");
        }
    }

In particular, notice the reuse of steps, for example GivenARepositoryOfInterestRatingRecords(), and the use of variable placeholders like (.*) to allow the passing of variables into the tests.

BDD wraps TDD. A reasonable flow would be to start with the Story, write up the Acceptance Tests and sketch out some of the steps. As you sketch out the steps you can see what unit tests you need so you go and develop the code using TDD. Once your code is ready and all the unit tests are passing you can integrate the layers with the BDD tests and when those are passing you have fulfilled your Acceptance Test.

Gherkin parsers for SpecFlow are on the way as are VS.NET language plugins (Cuke4VS – currently this crashes my VS.NET 2008).

Cuke4Nuke is another Cucumber port that is worth looking at.

The readability of the Features makes it easy to take Acceptance Tests from User Stories so that the Product Owner and Stakeholders can see what the system is doing. The “outside in” nature of creating the code gives focus to fulfilling the User Story.

VsSettings

I’ve created a dark scheme for VS.NET as I was finding the white screen was too bright (yes, I did also turn down the brightness on my screen). It’s based on the Aloha scheme.

I’ve saved the .vssettings. You’ll also need the Consolas and Dina fonts (Consolas is probably already installed).

To use these settings go to Tools > Import and Export Settings…

I find this much more legible than the VS.NET defaults, especially for highlighting search results and for editing Xaml. You can port it to VS.NET 2010, too.

UPDATE

There are a couple of downloads of these settings each month. I’d be interested in seeing any tweaks people may have so I’ve placed a copy on GitHub for folks to fork: http://github.com/Boggin/vssettings

KeyboardShortcuts

I’ve put together a couple of scripts, poached from the interWebs, that will give those VS.NET keyboard ninjas some more shortcut-fu. Rather than relying on different, incomplete cheatsheets for each of your plugin’s keyboard shortcuts, this macro will find all of your current shortcuts and present them in a single table.

From VS.NET 2008 open the Macros IDE:
Tools > Macros > Macros IDE

Once in the Macros IDE, create a new Module:
Project Explorer > My Macros (context menu) > Add > Add Module ("KeyboardShortcuts")

Copy the following over the file’s contents:

Imports EnvDTE
Imports System.Diagnostics

Public Module KeyboardShortcuts

    Sub ListKeyboardShortcuts()
        Dim i As Integer
        Dim j As Integer
        Dim pane As OutputWindowPane = GetOutputWindowPane("Commands")
        Dim keys As System.Array

        pane.Clear()
        pane.OutputString("<font face=arial>")
        pane.OutputString("<table border=1 cellspacing=0 cellpadding=2 bgcolor=f0f0ff>" + Chr(10))
        pane.OutputString("<tr><th colspan=2 bgcolor=d0d0e0>Keyboard Mappings</th></tr>" + Chr(10))
        pane.OutputString("<tr><th bgcolor=e0e0f0>Action</th>")
        pane.OutputString("<th bgcolor=e0e0f0>Key</th></tr>" + Chr(10))

        For i = 1 To DTE.Commands.Count
            keys = DTE.Commands.Item(i).Bindings
            If keys.Length > 0 Then
                pane.OutputString("<tr>")

                'DTE.Commands.Item(i).Name() is sometimes blank.
                'We will print an m-dash in this case, as printing a blank table cell is visually
                'misleading, as such a cell has no borders, making it appear to be attached to
                'another cell.
                If DTE.Commands.Item(i).Name() <> "" Then
                    pane.OutputString("<td valign=top>" + DTE.Commands.Item(i).Name())
                Else
                    pane.OutputString("<td><center>&mdash;</center>")
                End If

                pane.OutputString("</td><td>")
                For j = 0 To keys.Length - 1
                    If j > 0 Then
                        pane.OutputString("<br/>")
                    End If
                    pane.OutputString(keys(j))
                Next
                pane.OutputString("</td></tr>" + Chr(10))
            End If
        Next

        pane.OutputString("</table></font>")

    End Sub

End Module

You’ll notice the error squiggle under the GetOutputWindowPane() call. We have to add that Utility in (it used to come with the Samples):
Project Explorer > My Macros (context menu) > Add > Add Module ("Utilities")

Imports System
Imports EnvDTE
Imports EnvDTE80
Imports EnvDTE90
Imports System.Diagnostics

Public Module Utilities

    ' This function retrieves the output window pane
    Function GetOutputWindowPane(ByVal Name As String, Optional ByVal show As Boolean = True) As OutputWindowPane
        Dim window As Window
        Dim outputWindow As OutputWindow
        Dim outputWindowPane As OutputWindowPane

        window = DTE.Windows.Item(EnvDTE.Constants.vsWindowKindOutput)
        If show Then window.Visible = True
        outputWindow = window.Object
        Try
            outputWindowPane = outputWindow.OutputWindowPanes.Item(Name)
        Catch e As System.Exception
            outputWindowPane = outputWindow.OutputWindowPanes.Add(Name)
        End Try
        outputWindowPane.Activate()
        Return outputWindowPane
    End Function

End Module

To execute the macro go back to VS.NET:
Tools > Macros > Macros Explorer > ListKeyboardShortcuts (context menu) > Run

Yes, the output goes to the Output view and you have to cut-and-paste it into a .html but hey, if you want it to do more, update the script. 🙂 HTH