Sunday, 27 December 2015

Testing exceptions with NUnit 3.x

If you are moving from NUnit 2.x to 3.x you will find that the old ExpectedException attribute is missing. Not to worry. There are alternatives.

Here’s an example. In Domain-Driven Design (DDD) there's the concept of an 'entity': an object with an identifier. I have been experimenting with a supporting framework for DDD which includes a base class for entities. I decided I didn’t want entities to be instantiated without an identifier, and I didn’t want that identifier to be the default value for the identity type. Here’s a simple entity class together with a fragment of the base class:

private class Customer : Entity<int>
{
    public Customer(int id) : base(id)
    {
    }
}

public abstract class Entity<T> : IEquatable<Entity<T>>
{
    protected Entity(T id)
    {
        if (object.Equals(id, default(T)))
        {
            throw new ArgumentException("The ID cannot be the type's default value.", "id");
        }

        this.Id = id;
    }

    // ... snip ...
}

With 2.x tests you could write something like this:

[Test]
[ExpectedException(typeof(ArgumentException))]
public void EntityConstructor_WithDefaultValue_ThrowsException()
{
    var customer1 = new Customer(default(int));
}

If you have tests like that and you update NUnit to 3.x they won’t even compile because the attribute isn’t there anymore. No need to panic though; there are a couple of options available to you. The first is to use Exception Asserts (i.e. the Assert.Throws method).

[Test]
public void EntityConstructor_WithDefaultValue_ThrowsException()
{
    Assert.Throws<ArgumentException>(() => new Customer(default(int)));
}

The second option is to use the Throws Constraint which is my preference.

/// <summary>
/// Tests that the ID can't be the type's default value.
/// </summary>
[Test]
public void EntityConstructor_WithDefaultValue_ThrowsException()
{
    Assert.That(() => new Customer(default(int)), Throws.ArgumentException);
}

Actually, the last two alternatives offer a subtle improvement over the ExpectedException attribute.

The attribute approach doesn’t allow you to specify exactly when and where the exception is expected to be thrown. If you have a test with multiple lines of setup code, any one of those lines could throw an exception which would be caught by the attribute (assuming the exception type matches), so you might not be testing what you think you are testing.

The assertion approach allows you to specify the exact line of code you expect to throw the exception.

Saturday, 26 December 2015

Bitdefender and the hard sell

Update 03/01/2016 – My issue was resolved by BitDefender (see details at the end of the post).

OK, so I’ve been using Bitdefender again. I should know better I guess after the trouble I had in the past (e.g. Are BitDefender (Avangate/Digital River) behaving like confidence tricksters?, BitDefender and underhanded automatic re-subscription, and more besides).

Now they have taken to spamming me every single day and often more than once a day with this little gem that pops up on my desktop.


There is no way of stopping this pop-up as far as I can see. I have 115 days left on my subscription so am I to expect this to appear every single day until I renew?

Bitdefender, thanks for the offer but now let me decline and stop this goddam thing from hogging my desktop. I will renew when I want to renew which will be at the end of the current license period. At that time you will make an offer of how much it will cost me and I will make my own mind up whether it’s acceptable or not. If not, I will take my business elsewhere.

Make the pop-ups go away!

Update 03/01/2016

BitDefender gave me the following instructions via Twitter:

  1. Open Bitdefender
  2. Click on the human-shaped button, in the upper right corner
  3. Click on Settings (General Settings)
  4. Remove the check-mark next to "Display special offers and product notifications".


Now I checked the settings before and really can’t remember seeing this option (obviously). Anyway, hopefully this will make the ads go away. I’ll keep you posted if it doesn’t.

Monday, 21 December 2015

Building a .Net 4.6.1 project with Jenkins

This post covers the basic steps I took to get a .Net 4.6.1 project written with Visual Studio 2015 and C#6 to build with Jenkins. At the time of writing Jenkins is at version 1.642. I include instructions for getting NUnit 3.0 tests to run and to output an XML test report. The project I was working on also used NuGet version 3.3.0.

Step 1 – Prepare the build server with .Net 4.6.1

You may need to install some additional packages on the build server so Jenkins can build .Net 4.6.1 projects – typically the Microsoft Build Tools 2015 and the .Net Framework 4.6.1 Developer Pack.

Step 2 – Add MSBuild v14.0 to Jenkins

Go to Manage Jenkins > Configure System. Find the MSBuild section and click MSBuild installations… Click Add MSBuild and enter the new MSBuild details for version 14.0:


Save the configuration.

Step 3 – Create the Jenkins job

Start by creating your Jenkins build in the usual way. Setup your basic project data, source code management and build triggers as usual.

NuGet package restore

The next step is to get NuGet package restore working. NuGet package restore has changed since version 2.x and the easiest way I have found to get it to work with a basic Jenkins build is to use the command-line package restore option. This involved downloading the latest NuGet command line distribution and putting it in a known location on the server.

Add a build step to Execute Windows batch command.


Edit the command to call the NuGet executable with the restore command line argument. I installed the NuGet executable to C:\Tools\NuGet.
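With the screenshot missing, the batch command boils down to a single line. As a sketch, using my NuGet install location from above and the solution file that is built later in the post:

```shell
C:\Tools\NuGet\nuget.exe restore Andy.French.Fat.Calculation.sln
```

Running this before the MSBuild step ensures all packages are present when compilation starts.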


Add MSBuild step

Next add the new MSBuild step using version 14.0 of MSBuild.



Alternative – use a batch command to call MSBuild

If you don’t want to add a new MSBuild version to Jenkins you can call MSBuild directly. When it comes to adding the MSBuild step don’t choose “Build a Visual Studio project or solution using MSBuild” but use “Execute Windows batch command” instead.


Enter the command to call MSBuild directly:

"C:\Program Files (x86)\MSBuild\14.0\Bin\MsBuild.exe" Andy.French.Fat.Calculation.sln

Add a step to run NUnit

Before adding this step I had to download and install the latest version of NUnit (version 3.0.1) because that’s what I’d used in the project as a NuGet package to write my unit tests.


Note that the command line options for the executable have changed in version 3.x. I had to add the new --result option specifying an output file name. I also specified NUnit version 2 output format (format=nunit2) to make the data suitable for use with the existing test result plugins.
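The screenshot of the batch command is missing; as a sketch, the command looks something like this (the console runner path is the default NUnit 3 install location, and the test assembly name is illustrative):

```shell
"C:\Program Files (x86)\NUnit.org\nunit-console\nunit3-console.exe" Andy.French.Fat.Calculation.Tests.dll "--result=TestResult.xml;format=nunit2"
```

The quotes around the --result argument stop the batch interpreter from treating the semicolon as a separator.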

Don’t forget to add a post build action to publish your test results using the same file name as mentioned in the previous batch command.


Save your changes and you should be good to go. Phew!


Thursday, 10 December 2015

How to upgrade TeamCity

I’m assuming you have already installed JetBrains TeamCity and you are looking to upgrade an existing installation. You can check what version of TeamCity you are running by looking in the footer of the web interface.


At the time of writing TeamCity is at version 9.1.4.

So, assuming you don’t have the latest version and need to upgrade let’s press on. Grab the installer from the JetBrains site and follow the steps below.

Step 1 – Backup TeamCity

Make sure that TeamCity is running and access the web user interface. Under Administration – which can be found top-right - choose Backup from the Server Administration section – bottom-left. Select an appropriate Backup scope and click Start Backup.


Wait patiently until your backup completes. You can see where TeamCity has put the backup file in the report section at the bottom of the page.


Step 2 – Install the latest version of TeamCity

Double-click on the installer and follow the wizard.


Follow the wizard through. It is very likely you’ll be prompted to uninstall the previous version of TeamCity. I always do so.


Uninstall the previous version.


Hopefully the previous version will have been uninstalled.


Now we can start installing the new version.


Keep running through the wizard. Eventually you’ll be asked to allocate an account to the TeamCity server and the build agent. I always use the system account for each.



Keep ploughing through the wizard until you hit the final step.


Check the “Open TeamCity Web UI after Setup is completed” checkbox and click Finish. The web UI should open for you.

Step 3 – Data upgrade

It is quite possible that when you view the TeamCity web UI you’ll be notified that data upgrade is required.


Assuming you are an administrator, click the “I’m a server administrator, show me the details” link. On the page that follows you’ll be prompted for a token that’s in the TeamCity server log. On my system – and I use default TeamCity settings – that’s found in C:\TeamCity\logs and is called teamcity-server.log.
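If you don’t fancy scrolling through the log by hand, something like this narrows it down (Windows findstr; the exact wording of the log message may vary between TeamCity versions):

```shell
findstr /i "token" C:\TeamCity\logs\teamcity-server.log
```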


Copy the token and paste it into the first field on the form before clicking Confirm.


On the next form click Upgrade and cross your fingers.


Hopefully TeamCity will now start.


Once it has started you should be dropped back in the management UI.


Job done. Congratulate yourself on how awesome you are.

What if it all went wrong?

If you are unlucky enough for there to have been a problem you might have to restore TeamCity from the backup you took at the start of the process.

Tuesday, 8 December 2015

Saving an SSH key when using Git

Note to self: Here’s a quick reminder about how to save an SSH key for use with Git command line operations. You should have your private key file already. Following this procedure makes the OpenSSH key file available to Git command line operations against a repository that requires authentication.

Step 1 - Open PuTTYgen and load your private key file. Enter your pass phrase when prompted.


Step 2 – Export the OpenSSH key using the Conversions > Export OpenSSH key menu item.


Step 3 – Save the file to the .ssh folder in your user home directory. Name the file id_rsa.


Step 4 – When you use a Git command line operation that requires authentication expect to be prompted for the pass phrase.


Sunday, 6 December 2015

The Inside Of My Head–No.8

I do enjoy a good doodle and I thought I’d show you my latest effort. My doodles can take weeks or even months to complete and they are typically done in the notebook I use for work.

Here’s a picture of the notebook with the original full page doodle.


And here’s a scan of the page before any additional processing.


What I like to do is to use some basic image processing to add a splash of colour. Nothing too fancy - they are only doodles after all - but adding some layers to the image using an appropriate blend mode can have quite an effect.

Here I have added a series of layers using a burn blend mode with a different colour in each layer. By varying the strength of the blend you can control the subtlety of the effect.


There’s plenty more where that came from!

Sunday, 15 November 2015

The basics of garbage collection in .Net

Time for some revision again. Every now and then I like to go back to basics and refresh my memory about some aspect of software development and this time it’s garbage collection in .Net.

If you are looking for a more definitive explanation of garbage collection you can do no better than the Garbage Collection documentation on the MSDN site [1]. This post will only contain a few titbits that I find useful to remember. Go to the horse’s mouth for details.

What is the garbage collector?

One of the key differences between .Net development and many other programming paradigms is that you as the programmer are no longer directly responsible for memory management in your application. That’s not to say you can’t still have memory leaks and other problems – which is why having an understanding of garbage collection is important – but the business of allocating and de-allocating memory is largely taken out of your hands.

The garbage collector  is the automatic memory management component of the common language runtime (CLR). It provides a number of benefits including:

  • Enabling you to develop your application without having to free memory yourself.
  • Allocating objects on the managed heap efficiently (a description of the managed heap follows).
  • Reclaiming the memory occupied by objects that are no longer in use.
  • Providing memory safety by ensuring that an object cannot use the content of another object.


This isn’t a get out of jail free card though. You can still run in to a number of problems if you are careless including running out of memory (out of memory exceptions), high CPU usage during garbage collection and other performance problems.[2]

The managed heap

The managed heap is a segment of memory used to store managed objects in a .Net process and is allocated by the garbage collector when the process is initialised by the CLR.

Each process has one managed heap with all threads in the process allocating memory for objects on the same managed heap.

Actually, the managed heap can be considered as two heaps: the large object heap and the small object heap. The large object heap contains very large objects – usually arrays - that are 85,000 bytes and larger.

Managed heap generations

A nice performance optimisation of the garbage collector is to use generations of which there are 3 in the managed heap:

Generation 0

  • This is the youngest generation, containing short-lived objects.
  • Garbage collection occurs most frequently in this generation.
  • Newly allocated objects are implicitly generation 0 unless they are large objects (large objects go on the large object heap, which is collected as part of generation 2).
  • Most objects are reclaimed by garbage collection in generation 0 and do not survive to the next generation.

Generation 1

  • Contains short-lived objects.
  • Serves as a buffer between short-lived objects and long-lived objects.

Generation 2

  • This generation contains long-lived objects.
Garbage collection occurs most frequently on Generation 0, and successively less frequently for generations 1 and 2. Details of how this optimisation works follow in the next section.

Generations 0 and 1 are known as the ephemeral generations because the objects they contain are short-lived.


Figure 1 – A super simplified view of the managed heap

What is garbage collection?

The garbage collector reclaims the memory occupied by dead objects. It also compacts live objects so that they are moved together and the dead space is removed. The overall effect is to make the managed heap smaller making more memory available to the process.

Garbage collection in the CLR is basically a mark and sweep operation which includes compaction of the managed heap. During the marking phase, the garbage collector runs through the objects in the managed heap - actually through a generation in the heap - and marks those objects it identifies as being live, that is those objects that still have references in the process. The remaining objects are considered as being dead and are therefore candidates for clean-up.

So, the phases of garbage collection are:

  • Marking – find and create a list of all live objects.
  • Relocating - update the references to the objects that will be compacted.
  • Compacting - reclaims the space occupied by the dead objects and compacts the surviving objects.


The survivors of a garbage collection run are promoted to the next generation. This is a neat trick because the garbage collector can perform garbage collections at a different rate for each generation (i.e.  more frequently at generation 0). If objects survive to generation 2 it’s a fair assumption that they will be hanging around in the application for a while. As a result garbage collection can be performed less frequently on generations 1 and 2 thereby saving system resources (garbage collection uses CPU).

So, the following rules describe how objects are promoted through the generations:

  • Objects that survive a generation 0 garbage collection are promoted to generation 1
  • Objects that survive a generation 1 garbage collection are promoted to generation 2
  • Objects that survive a generation 2 garbage collection remain in generation 2


Note that in the past the large object heap would not be compacted because of the performance penalty incurred by copying large objects around. From .NET Framework 4.5.1 onwards you can use the GCSettings.LargeObjectHeapCompactionMode property to compact the large object heap on demand.

When garbage collection occurs

Garbage collection occurs when:

  • the system has low physical memory, or
  • the memory that is used by the managed heap is greater than a constantly adjusted threshold, or
  • the GC.Collect method is called


Wrapping up

So that’s it for this quick look at garbage collection. There are a number of other concepts to look at including root objects, flavours of garbage collection (workstation or server), concurrency, and object finalization. All suitable subjects for future posts. Stay tuned.


[1] Garbage Collection, MSDN
[2] Garbage Collector Basics and Performance Hints, MSDN


Saturday, 15 August 2015

Setting up a Consul cluster for testing and development with Vagrant (Part 2)

Using Vagrant to provision machines


In the first post in this series (Setting up a Consul cluster for testing and development with Vagrant (Part 1)) we looked at the Vagrant files and associated files required to automatically provision a Consul cluster for testing and development. We didn’t get as far as actually provisioning any machines, which is what we will do here. If you haven’t read the previous post it’s probably a good idea to do so before going any further with this one.

All the files we created in the previous post are available on BitBucket.

Provisioning the Consul servers

If you recall from the previous post the Vagrant file defined 4 machines: 3 machines hosting Consul servers and another machine hosting a Consul client which runs the Consul Web UI.

I’m using CygWin so the first step is to change to the consul-cluster folder I created to contain all my working files, including the Vagrantfile. To provision all the machines in one go you can simply run the following command:

vagrant up

However, I don’t want to provision all the machines in one go. I want to use ConEmu to create a tab for each machine. To provision the machines one at a time you can simply pass in the name of the box to provision. For example:

vagrant up consul1

If all goes well you should see Vagrant provision the machine. This means Vagrant will download the box image (hashicorp/precise64) and run the provisioner, which in this case is a Shell provisioner that runs the provisioning script.

The last thing in the provisioning script is starting the Consul agent in bootstrap mode. You should end up with something like this:

That’s it! You now have a virtual machine up-and-running with the Consul agent running in bootstrap mode.

You can now repeat this process for the remaining servers and the client. I open a new tab for each machine in ConEmu.

As each new instance is started you can see it joining the cluster:


Back on the consul1 instance you will see the new Consul instance joining the cluster too:

It’s just a case of repeating the process for the consul3 instance to get the completed server cluster up-and-running.

Provisioning the Consul client

You provision the Consul client in exactly the same way as the server instances:
vagrant up consulclient

The client will join the cluster just like the servers. Once it’s up-and-running you will be able to access the Consul Web UI with a browser from your host workstation (point it at the client instance’s address). You should see something like this:



Excellent! Note there’s a data center ‘DC1’ listed top-right. If you were observant you’d have noticed we gave each Consul instance a data center in the config.json files. This is reflected in the Consul Web UI.

{
    "bootstrap": true,
    "server": true,
    "datacenter": "dc1",
    "data_dir": "/var/consul",
    "encrypt": "Dt3P9SpKGAR/DIUN1cDirg==",
    "log_level": "INFO",
    "enable_syslog": true,
    "bind_addr": "",
    "client_addr": ""
}
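You can also confirm the datacenter from inside one of the boxes via Consul's HTTP API, assuming the agent's HTTP endpoint is reachable on the default port 8500:

```shell
curl http://localhost:8500/v1/agent/self
```

The JSON returned includes the agent's configuration, with the datacenter among the fields.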

Halting a virtual machine

To halt a virtual machine you just need to have a command prompt (CygWin in my case) open in the directory containing the Vagrantfile and type “vagrant halt” followed by the name of the instance to stop. For example:
vagrant halt consul3

Once the instance has halted you should see this reflected in the Consul Web UI.



If you want to halt all virtual machines in one go just type “vagrant halt” but don’t specify a machine name. Vagrant will iterate through all the virtual machines you have defined in the Vagrantfile and halt each one in turn.

Restarting a virtual machine

If you halt a virtual machine you can easily bring it back up again by typing “vagrant up” followed by the machine name. However, when you do this you’ll notice something different; the provisioner – and therefore the provisioning script – doesn’t get run.



This makes perfect sense because the machine has already been provisioned, we’re just restarting it.

Never fear, Consul will be running because of the upstart script we created. We can check that by connecting to the virtual machine with SSH.

Connecting to a virtual machine with SSH

We can connect to a virtual machine with SSH by using another Vagrant command, “vagrant ssh” followed by the machine name:
vagrant ssh consul1


This connects you to the machine using the ‘vagrant’ user that is automatically created for you. We can now verify that the Consul agent is up-and-running:
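For example, asking the local agent for its view of the cluster lists all the members that have joined (the output will reflect however many instances you have started):

```shell
consul members
```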


Forcing the provisioner to be run again

If you want to restart a virtual machine that has already been provisioned but you want to force the provisioning step to be rerun, you have a few options including passing the “--provision” argument to “vagrant up”.
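For example, to restart consul1 and force the provisioning script to run again:

```shell
vagrant up consul1 --provision
```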

Refer to the Vagrant documentation for details.

Destroying virtual machines

One of the joys of using something like Vagrant is knowing you can completely remove a virtual machine from your system and be able to easily recreate it later, safe in the knowledge that it will be provisioned and configured exactly the same each time.

To completely remove a virtual machine from your system type “vagrant destroy” followed by the name of the machine to remove. As with most Vagrant commands if you omit the machine name Vagrant will iterate through all of the machines defined in the Vagrantfile and destroy them all.
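For example (Vagrant will ask for confirmation unless you add --force):

```shell
# Remove a single machine...
vagrant destroy consul2

# ...or destroy every machine defined in the Vagrantfile
vagrant destroy
```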


Wrapping up

So that’s it for this brief introduction to using Vagrant to provision a Consul cluster for testing and development. Don’t forget, the source files can be found on BitBucket.

Friday, 14 August 2015

Setting up a Consul cluster for testing and development with Vagrant (Part 1)

Setup and scripting


When you start to move into a service-oriented way of working, particularly with microservices, the proliferation of services becomes an issue in itself. We know that we should avoid the ‘nano services’ anti-pattern and take care to identify our service boundaries pragmatically, perhaps using the DDD notion of bounded contexts to help discover those boundaries, but sooner or later you are going to have multiple services and perhaps multiple instances of services too.

Pretty quickly you’ll run in to a number of related problems. For example, how do client applications discover and locate services? If there are multiple instances of a service which one should the client use? What happens if an instance is down? How do you manage configuration across services? The list goes on.

A possible solution to these problems is Consul by Hashicorp. Consul is open source - being distributed under a MPL 2.0 license - and offers DNS and HTTP interfaces for managing service registration and discovery, functionality for health checking services so you can implement circuit breakers in your code, and a key-value store for service configuration. It is also designed to work across data centres.

This post isn’t going to look at the details of Consul and the functionality it offers. Rather it is focused on setting up a Consul cluster you can use for development or testing. Why a cluster? Well the Consul documentation suggests that Consul be setup this way:

“While Consul can function with one server, 3 to 5 is recommended to avoid failure scenarios leading to data loss. A cluster of Consul servers is recommended for each datacenter.” [1]

A Windows distribution of Consul is available but it’s not recommended for production environments where Linux is the preference. With consistency in mind I elected to investigate setting up a cluster of Linux virtual machines running Consul even when developing on a Windows machine.

When it comes to provisioning virtual machines there are a number of alternatives you can choose including Vagrant (also by Hashicorp), Chef, Puppet etc.  Consul is actually pretty simple to install and configure with comparatively few steps required to get a cluster up-and-running. Because of this – and because I wanted to look at Vagrant in a bit more detail – I opted to use Vagrant on its own to provision the server cluster. The addition of Chef  at this point seemed like overkill.

The advantage of using something like Vagrant is that provisioning the cluster is encapsulated in text form - infrastructure as code – so you can reprovision your environment at any time in the future. This is great because it means I could wipe the cluster off my system at any time and know with confidence that I could easily provision it again later. By encapsulating the features of the cluster this way I know it will be reconstructed exactly the same way each time it’s provisioned.

A lot of what follows is based on a blog post by Justin Ellingwood - How to Configure Consul in a Production Environment on Ubuntu 14.04. I’ve made some changes to Justin’s approach but it’s basically the same. The key difference is that I’ve used Vagrant to provision the cluster.


Before getting started I elected to use a couple of tools to help me out. The first is ConEmu which I use habitually anyway. ConEmu is an alternative to the standard Windows command prompt and adds a number of useful features including the ability to open multiple tabs. If like me you use Chocolatey, installing ConEmu is a breeze.

c:\> choco install conemu 

The next tool I chose to use is CygWin. CygWin provides a Unix-like environment and command-line interface for Windows. Why use this? Well it’s completely optional and everything that follows will run from the Windows command line but I found it helpful to work in a Unix-like way from the outset. To create the Consul cluster  I knew I’d be writing Bash scripts so using CygWin meant I wouldn’t be context switching between Windows DOS and Linux shell commands.

To install CygWin I used cyg-get which itself can be installed using Chocolatey.

c:\> choco install cyg-get

Once you have cyg-get you can use it to install CygWin:

c:\> cyg-get default

Once you have ConEmu and CygWin in place it is useful to configure ConEmu so you can open CygWin Bash from the ConEmu ‘new console’ menu. To do that you need to add a new console option.



Now you can open a new CygWin bash console in ConEmu which results in a Unix-like prompt:


You’ll note that this prompt is located at your home directory (identified by the ~). On my system CygWin is installed to c:\tools\cygwin. In the CygWin folder there’s a ‘home’ subfolder which itself contains a further subfolder named after your login name. That’s your CygWin home folder (~) and that’s where I added the Vagrant files and scripts required to build out the Consul cluster later on.


The tools above are really just ‘nice to haves’. The main components you’ll need are VirtualBox and Vagrant. Both can be installed by the trusty Chocolatey.

c:\> choco install virtualbox 
c:\> choco install vagrant 

What do we need in our Consul cluster?

The first thing to note is that Consul – actually the Consul agent - can run in 2 basic modes: server, or client. Without going in to details (you can read the documentation here) a cluster should contain 3 to 5 Consul agents running in server mode and it’s recommended that the servers run on dedicated instances.

So that’s the first requirement: at least 3 Consul agents running in server mode each in a dedicated instance.

Consul also includes a web-based user interface, the Consul Web UI. This is actually hosted by a Consul agent by passing in the appropriate configuration options at start-up time.

So the second requirement is to have a Consul agent running in client mode which also hosts the Consul Web UI. We’ll create this in a separate instance.

That’s 4 virtual machines each hosting a Consul agent. Three instances will be running the agent in server mode and one in client mode and will host the Consul Web UI.

Creating the working folder

If you’re not using CygWin you just need to create a folder to work in. If you’re using CygWin the first job is to create a working folder in the CygWin home. I called mine ‘consul-cluster’. You’ll see from the image below that I also created a Git repository in this folder, hence the green tick courtesy of Tortoise Git. I added all of my Vagrant files and scripts to Git (infrastructure as code!).


The Vagrantfile

The main artefact when using Vagrant is the Vagrantfile.

“The primary function of the Vagrantfile is to describe the type of machine required for a project, and how to configure and provision these machines.” [2]

I created the Vagrantfile in the consul-cluster folder I created previously. Let’s cut to the chase. Here’s my completed Vagrantfile.

# -*- mode: ruby -*-
# vi: set ft=ruby :
#^syntax detection

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| = "hashicorp/precise64"

  config.vm.define "consul1" do |consul1|
    config.vm.provision "shell" do |s|
      s.path = ""
      s.args = ["/vagrant/consul1/config.json"]
    consul1.vm.hostname = "consul1" "private_network", ip: ""

  config.vm.define "consul2" do |consul2|
    config.vm.provision "shell" do |s|
      s.path = ""
      s.args = ["/vagrant/consul2/config.json"]
    consul2.vm.hostname = "consul2" "private_network", ip: ""

  config.vm.define "consul3" do |consul3|
    config.vm.provision "shell" do |s|
      s.path = ""
      s.args = ["/vagrant/consul3/config.json"]
    consul3.vm.hostname = "consul3" "private_network", ip: ""

  config.vm.define "consulclient" do |client|
    config.vm.provision "shell" do |s|
      s.path = ""
      s.args = ["/vagrant/consulclient/config.json"]
    client.vm.hostname = "consulclient" "private_network", ip: ""
end
A Vagrantfile can be used to provision a single machine but I wanted to use the same file to provision the 4 machines needed for the cluster. You’ll notice there are 4 config.vm.define statements in the Vagrantfile with each one defining one of the 4 machines we need. The first 3 (named consul1, consul2 and consul3) are our Consul servers. The last (named consulclient) is the Consul client that will host the web UI.

For each of the machines we provide a host name (e.g. client.vm.hostname) and an IP address (via The IP addresses are important because we will be using them later when we configure the Consul agents to join the cluster.

Vagrant boxes

Prior to the config.vm.define statements there’s a line that defines the box to use ( Because this is outside the config.vm.define statements it’s effectively global and will be inherited by each of the defined machines that follow.

The statement specifies the type of box we are going to use. A box is a packaged Vagrant environment and in this case it’s the hashicorp/precise64 box – a packaged standard Ubuntu 12.04 LTS 64-bit box.

Vagrant provisioning

For each of the machines we’ve defined there is a ‘provisioner’ (config.vm.provision). In Vagrant, provisioning is the process by which you can install software and alter configuration as part of creating the machine. There are a number of provisioners available including some that leverage Chef, Docker and Puppet, to name but a few. However in this case all I needed to do was run Bash scripts on the server. That’s where the Shell Provisioner comes in.

The 3 server instances (consul1, consul2, and consul3) all run the same Bash provisioning script. Let’s take a look at it.


# Step 1 - Get the necessary utilities and install them.
apt-get update
apt-get install -y unzip

# Step 2 - Copy the upstart script to the /etc/init folder.
cp /vagrant/consul.conf /etc/init/consul.conf

# Step 3 - Get the Consul Zip file and extract it.  
cd /usr/local/bin
unzip *.zip
rm *.zip

# Step 4 - Make the Consul directory.
mkdir -p /etc/consul.d
mkdir /var/consul

# Step 5 - Copy the server configuration.
cp $1 /etc/consul.d/config.json

# Step 6 - Start Consul
exec consul agent -config-file=/etc/consul.d/config.json

The steps are fairly straightforward. We’ll take a look at step 2 in more detail shortly but basically the process is as follows:

  • Install the unzip utility so we can extract Zip archives.
  • Copy an upstart script to /etc/init so the Consul agent will be restarted if we restart the virtual machine.
  • Grab the Consul Zip file containing the 64-bit Linux distribution. Unzip it to /usr/local/bin.
  • Make the directories that will hold the Consul configuration and its data.
  • Copy the Consul configuration file to /etc/consul.d/ (more on that shortly too).
  • Finally, start the Consul agent using the configuration file.


Hang on. How can we be copying files? Well, when you provision a machine using Vagrant the working folder is mounted inside the virtual machine (at /vagrant). That means that all the files you create in the working folder are available inside the virtual machine.
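As a rough analogy (this is just a symlink standing in for Vagrant’s actual synced-folder machinery, and the paths are made up for illustration), the effect is that the same files are visible at two paths:

```shell
# Simulate the effect of Vagrant's synced folder with a symlink:
# the "host" working folder and the "guest" mount show the same files.
mkdir -p /tmp/host-workdir
echo 'config.vm.box = "hashicorp/precise64"' > /tmp/host-workdir/Vagrantfile
ln -sfn /tmp/host-workdir /tmp/guest-vagrant   # stands in for /vagrant
cat /tmp/guest-vagrant/Vagrantfile
# → config.vm.box = "hashicorp/precise64"
```

Vagrant does this with a real shared mount rather than a link, but the consequence is the same: edit a file in the working folder on the host and the change is immediately visible under /vagrant in the guest.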

I know we haven’t actually provisioned anything yet, but here’s an example from a machine I have provisioned with Vagrant. If I SSH onto the box and look at the root of the file system I can see the /vagrant directory.


Listing the contents of the /vagrant directory reveals all the files and folders in my working directory on the host machine, including the Vagrantfile and the provisioning scripts.


You’ll also notice that in step 5 of the provisioning script we are using an argument passed into the script ($1). This argument is used to pass in the path to the appropriate Consul configuration file for the instance, because each Consul agent has a different configuration file. You can see this argument being set in the Vagrantfile in the Shell provisioner. For example:

s.args   = ["/vagrant/consul1/config.json"]
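If shell positional parameters are unfamiliar, here’s a tiny standalone sketch (the script path and echo text are made up for illustration) of how a value from s.args arrives in the script as $1:

```shell
# Write a minimal script that, like step 5 of the provisioning script,
# reads its first positional argument ($1).
cat > /tmp/demo-provision.sh <<'EOF'
#!/bin/sh
echo "would copy $1 to /etc/consul.d/config.json"
EOF
chmod +x /tmp/demo-provision.sh

# Vagrant's Shell provisioner appends the s.args values when it invokes
# the script, so calling it by hand with the same argument is equivalent:
/tmp/demo-provision.sh /vagrant/consul1/config.json
# → would copy /vagrant/consul1/config.json to /etc/consul.d/config.json
```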

The upstart script

The upstart script looks like this:

description "Consul process"

start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]


exec consul agent -config-file=/etc/consul.d/config.json

This script is copied to /etc/init on the virtual machine in step 2 of the provisioning script. It simply makes sure that the agent is restarted when the box is restarted.

Why is this necessary? Well, provisioning won’t always run when you start a box. Sometimes the box is simply restarted and the provisioning scripts don’t run. Under those circumstances you still want to make sure Consul is fired up.

Consul configuration

Consul configuration can be handled by passing in arguments to the Consul agent on the command line. An alternative is to use a separate configuration file, which is what I’ve done. Each instance has a slightly different configuration but here’s the configuration for the first server, the consul1 instance:

{
    "bootstrap": true,
    "server": true,
    "datacenter": "dc1",
    "data_dir": "/var/consul",
    "encrypt": "Dt3P9SpKGAR/DIUN1cDirg==",
    "log_level": "INFO",
    "enable_syslog": true,
    "bind_addr": "",
    "client_addr": ""
}

In each Consul cluster you need one of the servers to start up in bootstrap mode. You’ll note that this server has the bootstrap configuration option set to true. It’s the only server in the cluster to specify this. In Consul terms this is manual bootstrapping where we are specifying which server will be the leader. With later versions of Consul it is possible to use automatic bootstrapping.
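A typo in config.json will stop the agent from starting, so it can be worth sanity-checking that the file parses as JSON before handing it to the agent. A minimal sketch, assuming python3 is available on the box (the sample file below is abbreviated, and the path is illustrative):

```shell
# Write an abbreviated Consul config and check that it parses as JSON
# using python's standard-library json module.
cat > /tmp/config.json <<'EOF'
{
    "bootstrap": true,
    "server": true,
    "datacenter": "dc1",
    "data_dir": "/var/consul"
}
EOF
python3 -m json.tool /tmp/config.json > /dev/null && echo "config OK"
# → config OK
```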

The configuration for the other servers looks something like this:

{
    "bootstrap": false,
    "server": true,
    "datacenter": "dc1",
    "data_dir": "/var/consul",
    "encrypt": "Dt3P9SpKGAR/DIUN1cDirg==",
    "log_level": "INFO",
    "enable_syslog": true,
    "bind_addr": "",
    "client_addr": "",
    "start_join": ["", ""]
}

Note that the bootstrap configuration option is now set to false. We also tell the Consul agent what servers it is to join to form the cluster by specifying the IP addresses of the other 2 servers (start_join).
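Because exactly one server must set bootstrap to true, a quick check over the per-instance config folders can catch mistakes before you provision. A hedged sketch using made-up temp paths in place of the real working folder:

```shell
# Recreate the per-instance layout with two sample configs (illustrative
# paths; in the real working folder these are consul1/config.json, etc.).
mkdir -p /tmp/cluster/consul1 /tmp/cluster/consul2
printf '{ "bootstrap": true, "server": true }\n'  > /tmp/cluster/consul1/config.json
printf '{ "bootstrap": false, "server": true }\n' > /tmp/cluster/consul2/config.json

# Count how many configs enable bootstrap; it should be exactly one.
count=$(grep -l '"bootstrap": true' /tmp/cluster/*/config.json | wc -l)
[ "$count" -eq 1 ] && echo "exactly one bootstrap server"
# → exactly one bootstrap server
```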

The Consul agent running in client mode

To provision the machine hosting the Consul agent running in client mode we use a slightly different provisioning script. This script simply adds a new step to download the Consul Web UI and extract it in the /usr/local/bin directory of the provisioned machine.


# Step 1 - Get the necessary utilities and install them.
apt-get update
apt-get install -y unzip

# Step 2 - Copy the upstart script to the /etc/init folder.
cp /vagrant/consul.conf /etc/init/consul.conf

# Step 3 - Get the Consul Zip file and extract it.  
cd /usr/local/bin
unzip *.zip
rm *.zip

# Step 4 - Get the Consul UI and extract it.
unzip *.zip
rm *.zip

# Step 5 - Make the Consul directory.
mkdir -p /etc/consul.d
mkdir /var/consul

# Step 6 - Copy the server configuration.
cp $1 /etc/consul.d/config.json

# Step 7 - Start Consul
exec consul agent -config-file=/etc/consul.d/config.json

We also use a slightly different Consul configuration file:

{
    "bootstrap": false,
    "server": false,
    "datacenter": "dc1",
    "data_dir": "/var/consul",
    "ui_dir": "/usr/local/bin/dist",
    "encrypt": "Dt3P9SpKGAR/DIUN1cDirg==",
    "log_level": "INFO",
    "enable_syslog": true,
    "bind_addr": "",
    "client_addr": "",
    "start_join": ["", "", ""]
}

In this configuration file we add the directory where the Consul Web UI has been extracted on the virtual machine (ui_dir).

Wrapping up

In the following blog post (Setting up a Consul cluster for testing and development with Vagrant (Part 2)) we’ll look at how you actually use the Vagrantfile and scripts here to provision the server cluster.

You can get the files referred to in this blog post from BitBucket.

