Tuesday, 9 June 2015

Jaybird BlueBuds X bluetooth earbuds review

I work out a lot – always have and hopefully always will – but one thing I hate is the piped music you get in most gyms. On the other hand I find wired headphones really irritating, so the idea of some wireless Bluetooth earbuds was very appealing. Enter the Jaybird BlueBuds X.

So why the review? Well unfortunately what sounded like a great idea fell down in practice and I thought I’d relate my experience.

If you’re in a rush skip to Quality below. Also see the Updates at the bottom, where Jaybird are being a pain about giving me a replacement.

Appearance

I have the black BlueBuds X. Considering the amount of electronics and battery they have to cram in, they are reasonably compact. The included carry case is practical and looks good. Overall, the appearance of the product was fine.

Fit

There are 2 ways of wearing the BlueBuds X. The first, and most obvious, is to let the cord that connects the earbuds dangle down from your ears and loop around the back of your neck. This really didn’t work for me because the weight of the cord pulled the earbuds out as I exercised.


An alternative is to use an over-ear placement. When wearing the BlueBuds this way the cord is kept short and its weight is supported by your ears. This is the way I used them. The one concern I have is that wearing them like this puts the control unit in direct contact with your head, which might allow sweat to penetrate and cause damage.


One novel feature of the BlueBuds is the inclusion of flexible ‘Secure Fit Ear Cushions’. These comma-shaped attachments lock the earbuds in place inside your ear. I found they worked quite well and provided as secure a fit as you could reasonably expect from earbuds.

Sound quality

I listen to a lot of Rock-based music when I’m in the gym with subgenres ranging from aggressive metal to alternative. I found the sound quality of the BlueBuds X to be really good for this type of music. It’s quite well balanced and doesn’t have that excessive bass response you get from Dr Dre Beats.

As mentioned above I found the fit very good and the ear inserts blocked out a lot of background noise so overall the sound quality was very good.

You can make phone calls with the BlueBuds but this is something I did very infrequently (I was using an iPhone 5S). Standing in the gym apparently talking to yourself isn’t my style. On the few occasions I did make calls I could hear the other party very well but they usually struggled to hear me. I guess the microphone sits too far from your mouth to be particularly effective.

Even if you don’t use the BlueBuds to make calls it is useful to get the audio notifications for incoming texts etc.

Range

I tended to keep my phone in a small bag I carry around with me in the gym. I always have gloves, wraps and suchlike so it was natural to pop my phone in the same bag. I never had a dropout when using the BlueBuds, even when walking several metres away from my bag.

Connecting and Charging

Charging the BlueBuds was easy. All that was required was to connect them to a PC using the included USB cable. One of the earbuds has an indicator light that shows red while charging and turns green when charging is complete.

Connecting them to my iPhone 5S was a doddle. I just switched on the BlueBuds, activated Bluetooth on the phone and they were detected right away.

Quality

OK, this is where it all goes wrong. In 6 months I have gone through 2 pairs of BlueBuds, both failing in exactly the same way.

I experienced occasional uninvited power-offs and loss of audio but these incidents were infrequent and restarting the BlueBuds always fixed the issue. However, with both pairs I have owned I eventually hit another problem, one that killed the BlueBuds stone dead.

The first indication that something was wrong would be turning up at the gym to find the BlueBuds would not switch on, despite having been fully charged. I’d return home assuming the charge hadn’t worked and plug them in using the USB cable. On connecting the BlueBuds to the PC the indicator light would immediately turn green, suggesting they were fully charged. I’d leave them plugged in ‘just in case’ but they still would not switch on. End of story. Dead BlueBuds.

The cause of the failure is a mystery but you have to suspect they are not as sweat-proof as Jaybird would have us believe. I do wonder if the over-ear fitting is part of the problem because it puts the small plastic control unit that holds the buttons in contact with your head.

Support and Returns

Jaybird support was quite good overall, with prompt responses to contact made through the support page on their website. The most irritating thing was being asked to return the BlueBuds to the USA. I’m in the UK so I really didn’t want the added expense. Jaybird’s alternative is for you to send 2 photographs: one with the headphones intact and the other with the wire cut. You also have to include a piece of paper in each photo showing your signature and the date.

Really?

I mean, really?

Good job Jaybird on making me feel like you think I’m trying to con you out of a new pair of BlueBuds. Definitely not the kind of customer experience I appreciate.

Conclusion

A great idea and so close to being an excellent product. Wearability, usability and sound quality were great. The only issue was the build quality. The BlueBuds are clearly not up to the wear and tear of gym use and the most likely culprit is sweat.

Having had 2 pairs fail on me in exactly the same way I can only suggest you avoid the Jaybird BlueBuds X if you intend to use them when training. At £123.00 a pop they are far too expensive to think of as a disposable product.

Update 10/06/2015

After my second pair of BlueBuds failed I contacted Jaybird again for a replacement. This is the response I got:

XXXX XXXXX (Jaybird)
Jun 9, 2:56 PM
Hi,
Try this first,
Firmly but quickly press the power button 8 times and then hold for 5 seconds. Repeat 2 times.
When finished, plug the headset into the charger and tell me what you see.
Most of the time if the headset turns RED it will charge and function normally again, Make sure that if the light is red to fully charge the unit again.
Try holding the power button down for 10 seconds also some times it takes longer to turn them on.
let us know and if this does not work and I would be happy to help with an exchange.
My best,
XXXX

Now I was stupid enough to actually try this and guess what? No change. Back to Jaybird.

Incidentally, I was told when the last pair failed that pressing and holding the power button for 10 seconds does a factory reset.

Update 12/06/2015

OK, I’m getting pretty fed up with Jaybird now. For some reason I can’t get their support person to understand that the BlueBuds were bought online and that there’s no physical receipt. He really wants a ‘picture’.

Frankly I’m done. They don’t work anyway.

Sunday, 31 May 2015

Creating a JSON service using TopShelf and Nancy

I love TopShelf. I first encountered it when I was looking at MassTransit. Both MassTransit and TopShelf originated from the same developers (Chris Patterson and Dru Sellers). The NServiceBus host is also built with TopShelf.

What TopShelf allows you to do is write a service as a console application – which is great for development and testing – and then deploy it as a Windows service. You do this simply by passing some command line parameters to the console application. Fantastic.
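
For example, once the service is built you can install and manage it using TopShelf’s standard command line verbs, something like this (assuming the compiled executable is called VenueService.exe):

VenueService.exe install
VenueService.exe start
VenueService.exe stop
VenueService.exe uninstall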

“Topshelf is a framework for hosting services written using the .NET framework. The creation of services is simplified, allowing developers to create a simple console application that can be installed as a service using Topshelf. The reason for this is simple: It is far easier to debug a console application than a service. And once the application is tested and ready for production, Topshelf makes it easy to install the application as a service.” – TopShelf Overview

I have been encouraged to look at Nancy by a colleague (thanks Matt!) who has used it on a number of projects. It’s been on my radar for a while but somehow I’ve never got round to looking at it. As it happens for a small side project I’m working on I need to expose some JSON data from a service. How opportune!

“Nancy is a lightweight, low-ceremony, framework for building HTTP based services on .Net and Mono. The goal of the framework is to stay out of the way as much as possible and provide a super-duper-happy-path to all interactions.” – Introduction (Nancy documentation)

Visual Studio and NuGet packages

Firstly I created a Visual Studio solution and a console application project. Once that was done I installed a number of NuGet packages:

  • TopShelf
  • Ninject (for dependency injection)
  • Ninject.Web.Common (so we can scope Ninject bindings to an HTTP request)
  • TopShelf.Ninject (which provides extensions to TopShelf so it can painlessly use Ninject)
  • Nancy
  • Nancy.Hosting.Self (so Nancy can be hosted inside the service)

I also added a few other packages for logging, configuration and JSON serialisation but they aren’t core to the task at hand so I’ll ignore them for now.



Defining the service

The first thing I did was to define a service contract - an interface - for the service that would be hosted by TopShelf.

namespace Andy.French.Region.Growing.Venue.Service
{
    /// <summary>
    /// Classes implementing this interface provide methods for getting
    /// or manipulating venue data.
    /// </summary>
    public interface IVenueService
    {
        /// <summary>
        /// Starts the service.
        /// </summary>
        void Start();

        /// <summary>
        /// Stops the service.
        /// </summary>
        void Stop();
    }
}

The Start() and Stop() methods align with the TopShelf service configurator that allows you to assign actions when a service is started and stopped. More on that later.

The implementation of the interface would be the service hosted by TopShelf, and it would spin up the Nancy endpoint. Let’s see what that looked like.

namespace Andy.French.Region.Growing.Venue.Service
{
    using System;
    using Andy.French.Configuration.Service;
    using Andy.French.Logging;
    using global::Nancy.Hosting.Self;

    /// <summary>
    /// The venue service.
    /// </summary>
    public class VenueService : IVenueService
    {
        /// <summary>The logger factory.</summary>
        private readonly ILoggerFactory loggerFactory;

        /// <summary>The logger.</summary>
        private ILogger logger;

        /// <summary>The nancy host.</summary>
        private NancyHost nancyHost;

        /// <summary>The host URL.</summary>
        private string hostUrl;

        /// <summary>The JSON file path.</summary>
        private string jsonFilePath;

        /// <summary>
        /// Initialises a new instance of the <see cref="VenueService"/> class.
        /// </summary>
        /// <param name="loggerFactory">The logger factory.</param>
        /// <param name="configurationService">The configuration service.</param>
        public VenueService(ILoggerFactory loggerFactory, IConfigurationService configurationService)
        {
            this.loggerFactory = loggerFactory;
            this.hostUrl = configurationService.GetString("host.url");
        }

        /// <summary>
        /// Gets the logger.
        /// </summary>
        /// <value>The logger.</value>
        public ILogger Logger
        {
            get
            {
                if (this.logger == null)
                {
                    this.logger = this.loggerFactory.CreateInstance(typeof(VenueService));
                }

                return this.logger;
            }
        }

        /// <summary>
        /// Starts the service. In this case it starts the Nancy endpoint.
        /// </summary>
        public void Start()
        {
            this.Logger.Information("The venue service is starting.");

            this.nancyHost = new NancyHost(new Uri(this.hostUrl));
            this.nancyHost.Start();
        }

        /// <summary>
        /// Stops the service. In this case it stops and disposes of the Nancy host.
        /// </summary>
        public void Stop()
        {
            this.Logger.Information("The venue service is stopping.");
            this.nancyHost.Stop();
            this.nancyHost.Dispose();
        }
    }
}

The interesting stuff is in the Start() method, where I create a Nancy host and start it. The URL passed in (the hostUrl) is loaded from configuration but it’s actually something like http://localhost:1234. In the Stop() method I stop the Nancy host and dispose of it.

Basically, this is all the code it took to get the Nancy host up-and-running but there was a little bit of plumbing required to set up routes and handlers.

The Nancy module

At this point I had a Nancy host but that was only part of the story. I also needed to define a route and a handler for an HTTP verb (e.g. GET). It’s actually very easy to do this in Nancy and is accomplished using a Nancy module.

namespace Andy.French.Region.Growing.Venue.Service.Nancy
{
    using System.Collections.Generic;
    using System.IO;
    using System.Threading.Tasks;

    using Andy.French.Configuration.Service;
    using Andy.French.Region.Growing.Domain;

    using global::Nancy;

    using Newtonsoft.Json;

    /// <summary>
    /// A Nancy module for the venues service.
    /// </summary>
    public class VenuesModule : NancyModule
    {
        /// <summary>The venues.</summary>
        private List<Venue> venues;

        /// <summary>The JSON file path.</summary>
        private string jsonFilePath;

        /// <summary>
        /// Initialises a new instance of the <see cref="VenuesModule"/> class.
        /// </summary>
        /// <param name="configurationService">The configuration service.</param>
        public VenuesModule(IConfigurationService configurationService)
        {
            this.jsonFilePath = configurationService.GetString("json.file.path");

            this.Get["/venues", true] = async (x, ct) =>
            {
                await this.LoadVenues();
                return this.Response.AsJson(this.venues);
            };
        }

        /// <summary>
        /// Loads the venues.
        /// </summary>
        /// <returns>The task to load the venues.</returns>
        private async Task LoadVenues()
        {
            await Task.Factory.StartNew(
                () =>
                    {
                        this.venues = JsonConvert.DeserializeObject<List<Domain.Venue>>(File.ReadAllText(this.jsonFilePath));
                    });
        }
    }
}

You can see that I created a class that extends NancyModule and defines a route and a handler in the constructor. This basically says that we are expecting GET requests to arrive at ‘/venues’ and defines what we want to do with them. In my naïve service I simply load a JSON file and return the contents in the response as JSON.
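
The Venue domain type itself isn’t shown in the post. For the purposes of the example it only needs to be a serialisable POCO, something along these lines (the properties here are illustrative, not the real ones):

namespace Andy.French.Region.Growing.Domain
{
    /// <summary>
    /// A venue. The properties here are illustrative only - any
    /// serialisable POCO will work with JsonConvert and Response.AsJson.
    /// </summary>
    public class Venue
    {
        /// <summary>Gets or sets the venue identifier.</summary>
        public int Id { get; set; }

        /// <summary>Gets or sets the venue name.</summary>
        public string Name { get; set; }
    }
}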

Dependency injection

You’ll notice that the Nancy module and the Venue service both have dependencies passed in on their constructors (e.g. a logger factory or a configuration service). To get these resolved I needed to plug in Ninject. That was done with a simple Ninject module:

namespace Andy.French.Region.Growing.Venue.Service.Ninject
{
    using Andy.French.Configuration.Service;
    using Andy.French.Logging;
    using Andy.French.Logging.Log4Net;

    using global::Ninject.Modules;
    using global::Ninject.Web.Common;

    /// <summary>
    /// A <c>Ninject</c> module.
    /// </summary>
    public class VenueServiceModule : NinjectModule
    {
        /// <summary>
        /// Loads the module into the kernel.
        /// </summary>
        public override void Load()
        {
            this.Bind<ILoggerFactory>().To<LoggerFactory>().InRequestScope();
            this.Bind<IVenueService>().To<VenueService>().InRequestScope();
            this.Bind<IConfigurationService>().To<AppSettingsConfigurationService>().InRequestScope();
        }
    }
}

That was all there was to that.

Wiring up TopShelf

All that remained was to bring everything together with TopShelf. That was done in the main program method:

namespace Andy.French.Region.Growing.Venue.Service
{
    using Andy.French.Region.Growing.Venue.Service.Ninject;

    using Topshelf;
    using Topshelf.Ninject;

    /// <summary>
    /// The main application program.
    /// </summary>
    public class Program
    {
        /// <summary>
        /// Defines the entry point of the application.
        /// </summary>
        public static void Main()
        {
            HostFactory.Run(hostConfigurator =>
            {
                hostConfigurator.UseNinject(new VenueServiceModule());

                hostConfigurator.Service<IVenueService>(serviceConfigurator =>
                {
                    serviceConfigurator.ConstructUsingNinject();
                    serviceConfigurator.WhenStarted(service => service.Start());
                    serviceConfigurator.WhenStopped(service => service.Stop());
                });

                hostConfigurator.RunAsLocalSystem();
                hostConfigurator.SetDescription("The venue service provides venue data.");
                hostConfigurator.SetDisplayName("Region Growing Venue Service");
                hostConfigurator.SetServiceName("VenueService"); 
            });
        }
    }
}

Trying it out

Aside from some log4Net configuration that was pretty much it. Hitting F5 in Visual Studio resulted in the console application being launched and the service starting up. You can see the INFO message indicating the Start() method of the service has been called and therefore the Nancy host has been started.




Using the Fiddler composer I fired a request at the service.



This resulted in JSON being returned (albeit a small incomplete example at this stage). The content type was correctly set to application/json.



Hitting Ctrl-C caused TopShelf to call the Stop() method of the service which stops and disposes of the Nancy host.




That’s it! All too easy.

Sunday, 24 May 2015

How to set up a TeamCity build server with Azure

I thought it would be interesting to see what it would take to set up a Continuous Integration (CI) build server on Azure infrastructure. No doubt there are alternative ways of doing it but this approach certainly got me to a point where I had a CI server installed and ready to go. In this case I used TeamCity.

Throughout this post I’ll be using the updated Azure portal.

Create a virtual network (optional)

The first thing I did was create a Virtual Network (VN). Why? Well, this would allow backend virtual machines to communicate with each other should that be necessary. If you’re unsure if you need one this article might help.

To set up a VN click the NEW icon and choose Networking > Virtual Network.

To create a VN enter a name and click ADDRESS SPACE; this section is required. I added an address space CIDR block of 10.0.0.0/16 and then created a subnet with a CIDR block of 10.0.1.0/24. I’ll be adding other resources such as a Virtual Machine (VM) to this subnet.

You might be interested in this if you’re new to Azure VN: http://azure.microsoft.com/en-gb/documentation/services/virtual-network/

Create storage account (optional)

The next thing I did was to create a Storage Account. This would be used for the virtual disks etc. required by the VM I created later. You can omit this step in which case a storage account will be created for you when you create the virtual machine.


To add a new storage account click the NEW icon and go to Data + Storage > Storage. It’s quite annoying, but when adding the name for your storage account you’re restricted to letters and numbers only. If like me you use a naming convention that involves hyphens, you’re hosed.

Create SQL server database (optional)

I chose to use a SQL Server database for TeamCity. Of course there are other options including the default HSQLDB database engine that ships with TeamCity. The problem with the default database is that it can crash and lose your data as well as other problems:

“The internal database may crash and lose all your data (e.g. on out of disk space condition). Also, internal database can become extremely slow on large data sets (say, database storage files over 200Mb). Please also note that our support does not cover any performance or database data loss issues if you are using the internal database.
In short, do not EVER use internal HSQLDB database for production TeamCity instances.” - Setting up an External Database

SQL Server it is. To add a new SQL Server database click the NEW icon and go to Data + Storage > SQL Database. You may need to add a new SQL Server instance or you can use an existing one.



It’s worth having a look at the pricing information for SQL Server in Azure. I chose S0 for this installation, which comes out at about £9.16 per month at the time of writing.



You can get the connection details for your server from the settings but if you try to connect straightaway using SQL Server Management Studio (SSMS) from your local machine you will be disappointed. Before you can connect you need to punch a hole in the firewall.

Select your SQL Server from the browse list in the Azure portal and go to Settings > Firewall. Add a rule with a name and an IP address range. I didn’t add a range because I just wanted my desktop machine to be able to connect - so the start and end IP addresses were the same.



Don’t forget to save! I forgot first time around.


At this point I was able to connect to the newly created SQL Server using SSMS and was able to create a blank database ready to be used by TeamCity. I also added a login and a database user, something like this:

CREATE LOGIN teamcity 
    WITH PASSWORD = 'password_goes_here' 
GO

CREATE USER teamcity
    FOR LOGIN teamcity
GO

Create a Virtual Machine

So far I had a virtual network, a storage account and a SQL Server complete with blank database. Now it was time to create the Virtual Machine (VM) ready to install TeamCity.
To create a new VM click the NEW icon and go to Compute > Windows Server 2012 R2 Datacenter. Fill in the host name, user name and password. The next step is to choose your pricing plan. For this example I chose D1 but you can always check out the pricing guide.



After choosing the price band etc. it’s time to set up the networking side of things. Remember the VN I set up right at the beginning? Well, this is where it came into play. By selecting OPTIONAL CONFIGURATION I was able to set the Virtual Network and assign the VM to a subnet under NETWORK.



I was also able to assign the storage account that I created earlier, this time under STORAGE ACCOUNT in the optional configuration.

So, at this point I had a Virtual Machine on my Virtual Network, using my storage account. Time to log on to the Virtual Machine.

Log on to the Virtual Machine and install TeamCity

Logging on to the VM was quite easy. Just click Connect when viewing the VM properties.

 
HINT: I had a few issues connecting this way from the new portal. I was able to connect by switching to the old portal and doing it from there.

Once I had logged in to the new VM the first job was to download the TeamCity installation package and run it. For this exercise I installed to the default locations but you might consider setting up a separate disk in your VM. Note that I configured TeamCity to run on port 80. More on this later.

Once the basic installation was complete I ran the browser-based UI which starts the setup of TeamCity. When it came to the database I selected the SQL Server database type and entered the details I prepared earlier on my Azure-based SQL Server.

One thing to remember is that you need to install the Microsoft SQL Server JDBC driver by copying the driver jar into TeamCity’s JDBC directory (under the TeamCity data directory, in lib/jdbc).



Once the setup was complete I could check with SSMS back on my local machine and see the tables etc. had been created in the SQL Server database.


At this point TeamCity was installed and the UI was viewable locally on the VM on port 80. However, to make the TeamCity UI visible from my local development machine there were a couple more steps to take.

Firstly, in the VM I needed to add some firewall rules to allow inbound and outbound traffic on port 80. To do this I opened Server Manager and selected Tools > Windows Firewall with Advanced Security.



Then it was a case of adding the new rules. For the inbound rule (the outbound rule is much the same) the wizard steps were: create a new port rule, select the TCP protocol and port 80, allow the connection, apply the rule to the profiles you need, and give it a name.




With the firewall rules completed in the VM there was one thing left to do: add a firewall rule to the VM in the Azure portal. This involved the following:

  • Select the VM in the browse list in the Azure portal.
  • Click Settings.
  • Click Endpoints.
  • Click Add and create a new endpoint for the TCP protocol on port 80. 



With that done I was able to view the TeamCity UI over the web from my local development machine.

Job’s a good ‘un.

Monday, 20 April 2015

When is a repository not a repository?

I think the answer to this question is “most of the time”, at least that’s the case judging by a lot of the code I’ve seen lately.

For example, during a recent recruitment campaign we’ve been setting a fairly open technical test for the candidates, a test that required them to write some code to solve a problem. A number of candidates chose to expose data using something called a ‘repository’ but in almost every case the so-called repository didn’t look like a repository to me at all.

Does the following look like an interface for a repository to you?

public interface ISeatRepository
{
    IEnumerable<Seat> GetAllSeats();

    IEnumerable<Seat> GetSeatsForVenue(Venue venue);

    IEnumerable<Seat> GetSeatsForCustomer(Customer customer);
}

This is typical of the kind of thing I have been seeing.

My concern is that the term ‘repository’ is being applied to anything that exposes data, with no real understanding of the original pattern. This is at best misleading and in most cases demonstrates a lack of understanding of the pattern and/or laziness about choosing a more appropriate architecture.

Defining the Repository Pattern

Martin Fowler’s web site describes a repository in the following way:

“A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.” [1]

I like this definition. The take-away point is that the repository “Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.” Note, ‘a collection-like interface’.

To me the pattern is quite clear: the repository should look like a collection. For example:

public interface IRepository<T>
{
    void Add(T instance);

    void Remove(T instance);

    IEnumerable<T> FindAll();

    T Find(int id);

    IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
}

There are many alternative approaches to repositories but the one above serves well enough as an example.

One of the key drivers behind the repository pattern is that it isolates domain objects from details of the database access code. This can be a benefit or a drawback depending on your requirements. If for example you need tight control over the SQL then a repository may not be the best way to go; the pattern is designed to hide that sort of detail from client code.
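
To make that concrete, here’s what client code might look like (illustrative only – the Customer type and the way the repository is obtained are hypothetical):

// Illustrative only - the repository hides whether the data lives in
// SQL Server, a document store or an in-memory collection.
IRepository<Customer> customers = GetCustomerRepository(); // hypothetical factory method

// Find by identifier.
var customer = customers.Find(42);

// Find using a predicate, just as you would query a collection.
var activeCustomers = customers.Find(c => c.IsActive);

// Add a new object to the 'collection'.
customers.Add(new Customer { Name = "New customer" });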

Eric Evans in his book Domain-Driven Design, Tackling Complexity in the Heart of Software talks extensively about repositories. In DDD the repository is a key concept to be used in combination with Aggregates or Aggregate Roots. In fact, repositories should be limited to aggregate roots – you don’t have repositories for every domain type in your system! A discussion of those concepts is beyond the scope of this post but I’d encourage the reader to investigate further.

Evans provides this definition:

“For each type of object that needs global access, create an object that can provide the illusion of an in-memory collection of all objects of that type. Set up access through a well-known global interface. Provide methods to add and remove objects, which will encapsulate the actual insertion or removal of data in the data store. Provide methods that select objects based on some criteria and return fully instantiated objects or collections of objects whose attribute values meet the criteria, thereby encapsulating the actual storage and query technology.” [2]

If you are interested, Vaughn Vernon goes into some detail about repositories, including collection-oriented repositories, in his book Implementing Domain-Driven Design.

Repositories - how and when

Use repositories if you want to present clients with a simple model for obtaining persistent objects and managing their life cycle. Be aware that using this pattern will provide a bland interface to the data – that’s the whole point. If you need to write optimised SQL then this probably isn’t the way to go.

In my experience layering repositories on top of Object Relational Mappers (ORMs) like Entity Framework is much less successful, although still possible.

Note that if you use repositories you will probably need an alternative method of managing transactionality. The Unit of Work pattern is commonly used in combination with repositories to solve this problem. Rob Conery has an interesting article on why Repositories On Top UnitOfWork Are Not a Good Idea but I think a good part of the issues there are caused by Entity Framework and the need to pass DbContext objects around.
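
For reference, the Unit of Work abstraction typically has a shape something like this (a sketch only, not code from the article referenced):

/// <summary>
/// A sketch of a typical Unit of Work interface. Changes made through
/// the repositories are tracked and committed as a single transaction.
/// </summary>
public interface IUnitOfWork : IDisposable
{
    /// <summary>
    /// Commits all changes made within this unit of work.
    /// </summary>
    void Commit();
}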

One category of data access technology I don’t think is a good fit with repositories is the micro-ORMs (e.g. PetaPoco, Dapper). These tend to have APIs that stay quite close to the SQL and just don’t sit comfortably behind a repository. If you use these technologies and your data access API doesn’t have a collection-like interface then don’t call your data access classes repositories!

An alternative

An alternative that I like is to implement Commands and Queries. Having seen the potential benefits of Command Query Responsibility Segregation (CQRS), originally put forward by Udi Dahan, I found the basic concept of separating operations that modify data (Commands) from operations that fetch data (Queries) very attractive. CQRS itself is quite an involved architecture but it’s really an extension of Command Query Separation, a principle that’s been around for years.

Note that:

  • Queries: Return a result and do not change the observable state of the system (are free of side effects).
  • Commands: Change the state of a system but do not return a value.


Jimmy Bogard discusses a similar approach in his post Favor query objects over repositories.

By providing some simple interfaces commands and queries can be easily defined. Firstly queries:

/// <summary>
/// This interface encapsulates queries - read-only operations against a backing store.
/// </summary>
/// <typeparam name="TParameters">The query parameters to use</typeparam>
/// <typeparam name="TResult">The type of result to return</typeparam>
public interface IQuery<TParameters, TResult>
{
    /// <summary>
    /// Execute the query against the backing store.
    /// </summary>
    /// <param name="parameters">The query parameters to use</param>
    /// <returns>The result of the query</returns>
    TResult Execute(TParameters parameters);
}

/// <summary>
/// This interface encapsulates queries - read-only operations against a backing store.
/// </summary>
/// <typeparam name="TResult">The type of result to return</typeparam>
public interface IQuery<TResult>
{
    /// <summary>
    /// Execute the query against the backing store.
    /// </summary>
    /// <returns>The result of the query</returns>
    TResult Execute();
}
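
As an illustration, a concrete query might look like the following (a hypothetical class, reusing the User type from the command example further down):

/// <summary>
/// A query for finding a <see cref="User"/> by identifier.
/// </summary>
public class FindUserByIdQuery : IQuery<int, User>
{
    /// <summary>
    /// Execute the query against the backing store.
    /// </summary>
    /// <param name="parameters">The identifier of the user to find</param>
    /// <returns>The matching user</returns>
    public User Execute(int parameters)
    {
        // Fetch the user using your technology of choice...
        return null;
    }
}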

And now commands:

/// <summary>
/// An interface for commands. Commands write to a backing store.
/// </summary>
/// <remarks>
/// Commands do not return values because it is expected that the operation will succeed. If it fails an exception will be thrown.
/// </remarks>
public interface ICommand
{
    /// <summary>
    /// Execute the command.
    /// </summary>
    void Execute();
}

/// <summary>
/// An interface for commands with parameters. Commands write to a backing store.
/// </summary>
/// <remarks>
/// Commands do not return values because it is expected that the operation will succeed. If it fails an exception will be thrown.
/// You should create classes to represent you <typeparamref name="TParameters"/>, simple POCOs with properties for the
/// individual parameters to be passed into the <see cref="Execute"/> method.
/// </remarks>
/// <typeparam name="TParameters">The type of parameters to use</typeparam>
public interface ICommand<TParameters>
{
    /// <summary>
    /// Execute the command using the given parameters.
    /// </summary>
    /// <param name="parameters">The parameters to use</param>
    void Execute(TParameters parameters);
}

/// <summary>
/// An interface for commands with parameters. Commands write to a backing store.
/// </summary>
/// <remarks>
/// Commands do not return values because it is expected that the operation will succeed. If it fails an exception will be thrown.
/// You should create classes to represent your <typeparamref name="TParameters"/>, simple POCOs with properties for the
/// individual parameters to be passed into the <see cref="Execute"/> method.
/// </remarks>
/// <typeparam name="TParameters">The type of parameters to use</typeparam>
/// <typeparam name="TResult">The type of result to return</typeparam>
public interface ICommand<TParameters, TResult>
{
    /// <summary>
    /// Execute the command using the given parameters.
    /// </summary>
    /// <param name="parameters">The parameters to use</param>
    /// <returns>The result</returns>
    TResult Execute(TParameters parameters);
}

Interfaces such as these allow transactionality to be implemented around the Execute() methods. For example, a base class could be created that automatically applies a transaction to an abstract Execute() method using an attribute (I have done this using Spring.Net, which provides a TransactionAttribute).
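
To sketch the idea (using a TransactionScope directly rather than the Spring.Net attribute; the class and method names here are mine):

/// <summary>
/// A base class that wraps command execution in a transaction.
/// Illustrative only - with Spring.Net the transaction would be
/// applied declaratively via the TransactionAttribute instead.
/// </summary>
/// <typeparam name="TParameters">The type of parameters to use</typeparam>
public abstract class TransactionalCommand<TParameters> : ICommand<TParameters>
{
    /// <summary>
    /// Execute the command inside a transaction.
    /// </summary>
    /// <param name="parameters">The parameters to use</param>
    public void Execute(TParameters parameters)
    {
        using (var scope = new System.Transactions.TransactionScope())
        {
            this.ExecuteCore(parameters);
            scope.Complete();
        }
    }

    /// <summary>
    /// Derived classes implement this method to do the actual work.
    /// </summary>
    /// <param name="parameters">The parameters to use</param>
    protected abstract void ExecuteCore(TParameters parameters);
}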

A simple command might look like this:

/// <summary>
/// A command for saving a <see cref="User"/>.
/// </summary>
public class SaveUserCommand : ICommand<User>
{
    /// <summary>
    /// Execute the command using the given parameters.
    /// </summary>
    /// <param name="user">The user to save</param>
    public void Execute(User user)
    {
        // Save the user using your technology of choice...
    }
}

One thing I have noticed about this approach is that you quickly build up a library of commands and queries, which can become difficult to manage and error-prone. They all start to look the same. A case of buyer beware.

Wrapping up

If you are writing a data access API and it doesn’t have a collection-like interface then don’t call it a repository. Don’t use the name ‘repository’ as a crutch for poor, lazy design. There are alternatives.

References

[1] Repository, by Edward Hieatt and Rob Mee, http://martinfowler.com/eaaCatalog/repository.html
[2] Evans, Domain-Driven Design, Tackling Complexity in the Heart of Software, p.151-152

Sunday, 19 April 2015

Could Not Detect Test Framework error when running ApprovalTests from an automated build

I was making some changes to an automated continuous integration build in TeamCity when I encountered an unexpected error. I had a suite of unit tests using the ApprovalTests framework but when the build was triggered all of the tests failed, tests which ran perfectly from within Visual Studio.

Investigation in TeamCity revealed the following error details (the same for each test):

Test(s) failed. System.Exception : 
Could Not Detect Test Framework

Either:
1) Optimizer Inlined Test Methods

Solutions:
a) Add [MethodImpl(MethodImplOptions.NoInlining)]
b) Set Build->Opitmize Code to False 
   & Build->Advanced->DebugInfo to Full

or
2) Approvals is not set up to use your test framework.
It currently supports [NUnit, MsTest, MbUnit, xUnit.net, xUnit.extensions, Machine.Specifications (MSpec)]

Solution:
To add one use ApprovalTests.Namers.StackTraceParsers.StackTraceParser.AddParser() method to add implementation of ApprovalTests.Namers.StackTraceParsers.IStackTraceParser with support for your testing framework.
To learn how to implement one see http://blog.approvaltests.com/2012/01/creating-namers.html
   at ApprovalTests.Namers.StackTraceParsers.StackTraceParser.Parse(StackTrace stackTrace)
   at ApprovalTests.Namers.UnitTestFrameworkNamer..ctor()
   at ApprovalTests.Approvals.<.cctor>b__c()
   at ApprovalTests.Approvals.GetDefaultNamer()
   at ApprovalTests.Approvals.Verify(IApprovalWriter writer)
   at ApprovalTests.Approvals.VerifyFile(String receivedFilePath)
   at ApprovalTests.Approvals.Verify(FileInfo receivedFilePath)

 


The first point to note is that the build was set to run using the Release configuration.


The fix requires changes to the Visual Studio project settings for the test project.
Open the test project settings for the Release configuration. On the Build tab uncheck Optimize code and click Advanced.


In the advanced settings dialog set Debug Info to full.


OK the dialog and save the project. Commit changes to source control and trigger the build again.

That should do it!

The error is caused by the absence of stack trace information when the source is built using the Release configuration. ApprovalTests infers the name of the approval file by walking the stack trace to get the test method name. No stack trace, no inferred method name, and therefore ApprovalTests can’t find the approval file.
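
Alternatively, as the error message itself suggests, you can decorate individual test methods so the optimizer can’t inline them. A minimal sketch, assuming NUnit:

using System.Runtime.CompilerServices;

using ApprovalTests;
using NUnit.Framework;

[TestFixture]
public class MyApprovalTests
{
    [Test]
    [MethodImpl(MethodImplOptions.NoInlining)]
    public void VerifySomething()
    {
        // NoInlining keeps this method in the stack trace so
        // ApprovalTests can infer the approval file name from it.
        Approvals.Verify("the result to approve");
    }
}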

Sunday, 12 April 2015

Comparing .Net statsd clients

This post follows on from 2 previous posts: Running statsd on Windows and Testing Log4Net.Async with statsd and Graphite. In those posts I set up a virtual Linux box which hosted statsd and Graphite before experimenting with log4Net to compare performance when using different appenders. In this post I’m going to look at some .Net statsd clients to see how they compare with regards to performance and usability.

Now, the performance tests I describe below aren’t particularly scientific given my machine is running a lot of applications - including the virtual machine hosting statsd and Graphite. I ran each of the tests a number of times and the elapsed times I provide below are rough averages and should be taken as indicative of performance only. This is ‘finger in the air’ testing and any conclusions should be taken with a pinch of salt.

Statsd C# Client

I created a really simple console application to send a counter to statsd from inside a ‘for’ loop which ran 100,000 iterations. For the first version I used the Statsd C# Client which I used in my previous posts.

namespace StatsD.Client.Test
{
    using System;

    using StatsdClient;

    /// <summary>
    /// The main program class.
    /// </summary>
    class Program
    {
        /// <summary>The stopwatch.</summary>
        private static System.Diagnostics.Stopwatch stopwatch;

        /// <summary>
        /// Defines the entry point of the application.
        /// </summary>
        /// <param name="args">The arguments.</param>
        static void Main(string[] args)
        {
            Configure();
            Start();

            for (int i = 0; i < 100000; i++)
            {
                Metrics.Counter("test-counter");
            }
            
            Stop();
        }

        /// <summary>
        /// Creates and starts the stopwatch.
        /// </summary>
        private static void Start()
        {
            stopwatch = System.Diagnostics.Stopwatch.StartNew();
        }

        /// <summary>
        /// Stops the stopwatch and outputs the elapsed time.
        /// </summary>
        private static void Stop()
        {
            stopwatch.Stop();
            Console.WriteLine("Elapsed time: {0}", stopwatch.ElapsedMilliseconds);
            Console.ReadKey();
        }

        /// <summary>
        /// Configure the <c>statsd</c> client.
        /// </summary>
        private static void Configure()
        {
            var metricsConfig = new MetricsConfig
            {
                StatsdServerName = "localhost",
                StatsdServerPort = 8125
            };

            Metrics.Configure(metricsConfig);   
        }
    }
}

Running this took 591 milliseconds.

From a usability point of view the API is quite nice offering the static Metrics class used above or a more direct Statsd class which you have to instantiate. The nice thing about the static Metrics class is that you can use it from within a ‘using’ block.

using (Metrics.StartTimer("stat-name"))
{
  DoMagic();
}

However, I’m rather cautious of using static classes as they tend to lead to code that violates the open/closed principle and can make testing awkward. I’m inclined to wrap frameworks in my own service classes that implement interfaces I can easily mock in unit tests.
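
For example, a thin wrapper might look something like this (a sketch only – the interface and class names are mine, not part of the library):

/// <summary>
/// A thin wrapper around the statsd client so metrics calls can be
/// mocked in unit tests. Illustrative only.
/// </summary>
public interface IMetricsService
{
    /// <summary>Increments a counter.</summary>
    /// <param name="statName">The name of the statistic.</param>
    void Counter(string statName);

    /// <summary>Starts a timer that stops when disposed.</summary>
    /// <param name="statName">The name of the statistic.</param>
    /// <returns>A disposable timer.</returns>
    IDisposable StartTimer(string statName);
}

/// <summary>
/// An implementation that delegates to the static Metrics class
/// from the Statsd C# Client library.
/// </summary>
public class StatsdMetricsService : IMetricsService
{
    public void Counter(string statName)
    {
        StatsdClient.Metrics.Counter(statName);
    }

    public IDisposable StartTimer(string statName)
    {
        return StatsdClient.Metrics.StartTimer(statName);
    }
}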

NStatsD.Client

For the next test I used the NStatsD.Client by Rob Bihun. There haven’t been any commits to the GitHub repo for 7 months or so. This package adds a bit of configuration to the App.config file so there’s no need for configuration in code.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="statsD" type="NStatsD.StatsDConfigurationSection, NStatsD.Client" />
  </configSections>
  <startup> 
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1" />
  </startup>
  <statsD>
    <server host="localhost" port="8125" />
  </statsD>
</configuration>

Running the NStatsD.Client version took 1031 milliseconds. Interesting.

The API is ‘no frills’ but gets the job done. It doesn’t use static classes but rather employs the singleton pattern.

NStatsD.Client.Current.Increment("increment");
NStatsD.Client.Current.Increment("increment", 0.5); // Optional Sample Rate included on all methods
NStatsD.Client.Current.Decrement("decrement");
NStatsD.Client.Current.Timing("timing", 2345);
NStatsD.Client.Current.Gauge("gauge", 45);

NStatsD.HighPerformance

Next I tried the NStatsD.HighPerformance package. This is actually a fork of NStatsD.Client and was created by Aaron Stannard, one of the driving forces behind Akka.Net, a really interesting framework for creating distributed systems using the Actor model.

Aaron has a very interesting blog post on the Tradeoffs in High Performance Software which talks about his fork of the NStatsD.Client. The big change is in the private SendToStatsD method of the Client class. Originally it looked like this:

private void SendToStatsD(Dictionary<string, string> sampledData, AsyncCallback callback)
{
	var prefix = Config.Prefix;
	var encoding = new System.Text.ASCIIEncoding();

	foreach (var stat in sampledData.Keys)
	{
		var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
		var sendData = encoding.GetBytes(stringToSend);
		_client.BeginSend(sendData, sendData.Length, callback, null);
	}
}

By simply changing the async BeginSend call to an inline Send he got an improvement in performance of around 40%.

private void SendToStatsD(Dictionary<string, string> sampledData, AsyncCallback callback)
{
	var prefix = Config.Prefix;
	foreach (var stat in sampledData.Keys)
	{
		var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
		var sendData = Encoding.ASCII.GetBytes(stringToSend);
		UdpClient.Send(sendData, sendData.Length);
	}
}

There hasn’t been any activity on the GitHub repo for over 9 months so I guess this is considered done.

Running the NStatsD.HighPerformance version resulted in an elapsed time of about 691 milliseconds. Not surprisingly the API is identical to NStatsD.Client.

Graphite

The next package I tried was Graphite, a project inspired by MiniProfiler. This project seems to have a larger scope than some of the others I’ve looked at and provides hooks for Elmah and WCF. With this package you can send metrics to statsd or directly to Graphite (the graphing framework). I note that there haven’t been any commits to the GitHub repo for a year or so.

Running the Graphite version resulted in an elapsed time of about 608 milliseconds.

From an API point of view Graphite looks quite nice. Configuration is done through App.config again and the API provides for ‘using’ statements for timers (you can see the MiniProfiler influence here). For timers there’s a special Step method that basically measures the time from the call to Step until Dispose is called.

using (MetricsPipe.Current.Step("duration"))
{
    // Do some work here...
}

Wrapping up

As my old Dad used to say, ‘you pays your money and you takes your choice’. That seems to be the case here. There isn’t a massive difference in performance between any of them. One thing you might like to take into consideration is the user base.

Judging by the number of NuGet downloads and performance the Statsd C# Client comes out on top.

Package                   Elapsed time (ms)   NuGet downloads   Description
Statsd C# Client          591                 9830              Allows ‘using’ statements for timers and has static methods for metrics.
NStatsD.Client            1031                1635              Configuration in App.config. Singleton pattern for sending metrics. No ‘using’ statements for timers.
NStatsD.HighPerformance   691                 295               Same as NStatsD.Client.
Graphite                  608                 930               Configuration in App.config. API allows calls direct to Graphite. Includes ‘using’ statement for timers.

Sunday, 5 April 2015

Testing Log4Net.Async with statsd and Graphite

Example code: You can get the example code for this post on BitBucket.

Carrying on from my previous post, Running statsd on Windows, I thought it would be interesting to try to gather some application metrics and send them to statsd. I’ve also been meaning to have a look at the Log4Net.Async library for a while and thought I might be able to combine the two objectives: having a look at Log4Net.Async, and experimenting with statsd metrics to see how the Log4Net.Async appenders compare with standard log4Net ones.

The first question was: how do I get metrics out of an application and into statsd? Not surprisingly someone has beaten me to it – quite a few people actually – and there are a number of libraries out there. I chose the Statsd C# Client library, which seems to have had a good number of downloads on NuGet.

 

I have a simple logging library of my own that can use log4Net. So, I decided to use my library to create a simple application to write to a log file in a loop and to time how long it took to do a large number of writes.

Getting the NuGet packages

The first step was to create a console application and to add Log4Net.Async and the Statsd C# Client NuGet packages.


The code

I started by creating a simple interface for a logging service. This allowed me to create separate implementations that could be assigned separate log4Net loggers and subsequently separate appenders.

namespace Logging.StatsD
{
    /// <summary>
    /// Classes implementing this interface will write to the log.
    /// </summary>
    public interface ILoggingService
    {
        /// <summary>Does the logging.</summary>
        /// <param name="numberOfTimes">The number of times to log.</param>
        void DoLogging(int numberOfTimes);
    }
}

I created 3 implementations of the ILoggingService:

  • SynchronousLoggingService
    • Logged messages from this class would be handled by a standard log4Net RollingFileAppender
  • AsynchronousLoggingService
    • Logged messages from this class would be handled by one of the Log4Net.Async appenders.
  • AsyncAwaitLoggingService
    • Just out of interest I used async await – somewhat naively – to wrap calls to the logger in a task.

Here’s an example of one of the logging services, the SynchronousLoggingService:

namespace Logging.StatsD
{
    using Andy.French.Logging;

    using StatsdClient;

    /// <summary>
    /// This class will write to the log synchronously.
    /// </summary>
    public class SynchronousLoggingService : ILoggingService
    {
        /// <summary>The logger.</summary>
        private ILogger logger;

        /// <summary>
        /// Initialises a new instance of the <see cref="SynchronousLoggingService"/> class.
        /// </summary>
        /// <param name="loggerFactory">The logger factory.</param>
        public SynchronousLoggingService(ILoggerFactory loggerFactory)
        {
            this.logger = loggerFactory.CreateInstance(typeof(SynchronousLoggingService));
        }

        /// <summary>
        /// Does the logging.
        /// </summary>
        /// <param name="numberOfTimes">The number of times to log.</param>
        public void DoLogging(int numberOfTimes)
        {
            using (Metrics.StartTimer("time-to-log-sync"))
            {
                for (int i = 0; i < numberOfTimes; i++)
                {
                    this.logger.Debug("This is test message {0} from the SynchronousLoggingService.", i);
                }
            }
        }
    }
}

You can see that the DoLogging method simply writes a Debug message to the log in a ‘for’ loop. The most interesting part is the using statement around the loop: Metrics.StartTimer is a call to the Statsd C# Client library and creates a timer around the enclosed code. Note the name of the metric (time-to-log-sync). Similar code was added to the other logging services, each using a unique metric name.
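
The AsyncAwaitLoggingService isn’t shown in the post (the full code is in the BitBucket repository) but, for comparison, the naive version was along these lines – a sketch only, with the metric name assumed:

namespace Logging.StatsD
{
    using System.Threading.Tasks;

    using Andy.French.Logging;

    using StatsdClient;

    /// <summary>
    /// This class wraps each logging call in a task - a naive approach.
    /// A sketch only: the real code is in the BitBucket repository.
    /// </summary>
    public class AsyncAwaitLoggingService : ILoggingService
    {
        /// <summary>The logger.</summary>
        private ILogger logger;

        /// <summary>
        /// Initialises a new instance of the <see cref="AsyncAwaitLoggingService"/> class.
        /// </summary>
        /// <param name="loggerFactory">The logger factory.</param>
        public AsyncAwaitLoggingService(ILoggerFactory loggerFactory)
        {
            this.logger = loggerFactory.CreateInstance(typeof(AsyncAwaitLoggingService));
        }

        /// <summary>
        /// Does the logging, wrapping each call to the logger in a task.
        /// </summary>
        /// <param name="numberOfTimes">The number of times to log.</param>
        public void DoLogging(int numberOfTimes)
        {
            using (Metrics.StartTimer("time-to-log-async-await"))
            {
                var tasks = new Task[numberOfTimes];

                for (int i = 0; i < numberOfTimes; i++)
                {
                    var index = i;
                    tasks[i] = Task.Run(() => this.logger.Debug("This is test message {0} from the AsyncAwaitLoggingService.", index));
                }

                Task.WaitAll(tasks);
            }
        }
    }
}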

So far we only have half of the story. What remains is the need to configure the Statsd C# Client library before it can successfully send messages to statsd. That was done in the main program.

namespace Logging.StatsD
{
    using System;

    using Andy.French.Configuration.Service;
    
    using Ninject;

    using StatsdClient;

    /// <summary>
    /// The main program.
    /// </summary>
    public class Program
    {
        /// <summary>The configuration service.</summary>
        private static IConfigurationService configurationService;

        /// <summary>The synchronous logging service.</summary>
        private static ILoggingService synchronousLoggingService;

        /// <summary>The asynchronous logging service.</summary>
        private static ILoggingService asynchronousLoggingService;

        /// <summary>The async await logging service.</summary>
        private static ILoggingService asyncAwaitLoggingService;

        /// <summary>
        /// Defines the entry point of the application.
        /// </summary>
        /// <param name="args">The arguments.</param>
        public static void Main(string[] args)
        {
            try
            {
                ConfigureMetrics();
                ResolveDependencies();

                var numberOfTimes = configurationService.GetInt("Number.Of.Times");

                asynchronousLoggingService.DoLogging(numberOfTimes);
                synchronousLoggingService.DoLogging(numberOfTimes);
                asyncAwaitLoggingService.DoLogging(numberOfTimes);
            }
            catch (Exception ex)
            {
                Console.WriteLine("An error occurred: {0}", ex.Message);
            }
        }

        /// <summary>
        /// Configures the <c>statsd</c> client.
        /// </summary>
        private static void ConfigureMetrics()
        {
            var metricsConfig = new MetricsConfig
            {
                StatsdServerName = "localhost",
                StatsdServerPort = 8125
            };

            Metrics.Configure(metricsConfig);
        }

        /// <summary>
        /// Resolves the dependencies.
        /// </summary>
        private static void ResolveDependencies()
        {
            var kernel = new StandardKernel(new ProgramModule());
            configurationService = kernel.Get<IConfigurationService>();
            synchronousLoggingService = kernel.Get<ILoggingService>("Synchronous");
            asynchronousLoggingService = kernel.Get<ILoggingService>("Asynchronous");
            asyncAwaitLoggingService = kernel.Get<ILoggingService>("AsyncAwait");
        }
    }
}

The Statsd C# Client library is configured in the call to the ConfigureMetrics method. Here we specify the statsd server name and port by creating a new instance of MetricsConfig and passing it to the static Metrics.Configure method.

Note – To find the statsd port I went back into the Chef cookbooks created when I set up the virtual machine. In <statsd-graphite-vm repository directory>\cookbooks\statsd\attributes there’s a default.rb file. The default.rb file contains the settings used to set up statsd in the virtual machine, including the port number.

default["statsd"]["dir"]                          = "/usr/share/statsd"
default["statsd"]["conf_dir"]                     = "/etc/statsd"
default["statsd"]["repository"]                   = "https://github.com/etsy/statsd.git"
default["statsd"]["reference"]                    = "master"
default["statsd"]["flush_interval"]               = 10000
default["statsd"]["percent_threshold"]            = 90
default["statsd"]["address"]                      = "0.0.0.0"
default["statsd"]["port"]                         = 8125
default["statsd"]["mgmt_address"]                 = "0.0.0.0"
default["statsd"]["mgmt_port"]                    = 8126
default["statsd"]["graphite_host"]                = "127.0.0.1"
default["statsd"]["graphite_port"]                = 2003
default["statsd"]["graphite_role"]                = "graphite_server"
default["statsd"]["graphite_query"]               = "roles:#{node['statsd']['graphite_role']} AND chef_environment:#{node.chef_environment}"
default["statsd"]["delete_idle_stats"]            = false
default["statsd"]["delete_timers"]                = false
default["statsd"]["delete_gauges"]                = false
default["statsd"]["delete_sets"]                  = false
default["statsd"]["delete_counters"]              = false
default["statsd"]["username"]                     = "statsd"
default["statsd"]["debug"]                        = false
default["statsd"]["dump_messages"]                = false

# Graphite storage config
default["statsd"]["graphite"]["legacy_namespace"] = true
default["statsd"]["graphite"]["global_prefix"]    = "stats"
default["statsd"]["graphite"]["global_suffix"]    = ""
default["statsd"]["graphite"]["prefix_counter"]   = "counters"
default["statsd"]["graphite"]["prefix_timer"]     = "timers"
default["statsd"]["graphite"]["prefix_gauge"]     = "gauges"
default["statsd"]["graphite"]["prefix_set"]       = "sets"

# nodejs
default["statsd"]["nodejs_bin"] = "#{node["nodejs"]["dir"]}/bin/node"

Configuring log4Net

At this point I had the code to write to a log but I still needed to configure log4Net. The log4Net configuration was all in App.config.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  
  <log4net>
    <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="log.txt" />
      <immediateFlush value="true" />
      <appendToFile value="true" />
      <maximumFileSize value="100MB" />
      <maxSizeRollBackups value="2" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
      </layout>
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
    </appender>

    <appender name="AsyncForwarder" type="Log4Net.Async.AsyncForwardingAppender,Log4Net.Async">
      <appender-ref ref="RollingFileAppender" />
    </appender>

    <logger name="Logging.StatsD.AsynchronousLoggingService">
      <level value="DEBUG" />
      <appender-ref ref="AsyncForwarder" />
      <!--<appender-ref ref="AsyncRollingFileAppender" />-->
    </logger>

    <logger name="Logging.StatsD.SynchronousLoggingService">
      <level value="DEBUG" />
      <appender-ref ref="RollingFileAppender" />
    </logger>

    <logger name="Logging.StatsD.AsyncAwaitLoggingService">
      <level value="DEBUG" />
      <appender-ref ref="RollingFileAppender" />
    </logger>
    
  </log4net>

  <appSettings>
    <add key="Number.Of.Times" value="10000" />
  </appSettings>
  
  <startup> 
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1" />
  </startup>
</configuration>

Towards the bottom of the log4net section there are 3 loggers, one for each of the logging service classes I created. Each logger was configured to use one of the two appenders defined above them:

  • RollingFileAppender – a standard log4Net rolling file appender
  • AsyncForwarder – A Log4Net.Async appender. This appender uses a ring buffer and it is possible for message loss to take place. More on that later.

 

A note about Log4Net.Async appenders

It pays to read documentation and that’s certainly true of the Log4Net.Async library. In my test application the logger created for the AsynchronousLoggingService is set to use the AsyncForwarder appender. This appender – one of the Log4Net.Async appenders – provides high performance at the expense of potential message loss.

And lost messages is what I got. When running the example I saw occasional log entries like this:

2015-04-05 10:15:13,886 [AsyncForwarder Forwarding Appender Thread] ERROR AsyncForwardingAppender - Buffer overflow. 13 logging events have been lost in the last 30 seconds. [BufferSize: 1000]

This isn’t an error; it’s by design.

“This appender utilizes a ring buffer with a a default limit of 1000 messages. If the head of the buffer overtakes the tail, message loss occurs. This behavior prioritizes application performance over logging fidelity. The current implementation uses a 10ms polling (sleep) period on the background thread.” - https://github.com/cjbhaines/Log4Net.Async

If you need lossless messaging another option is to use the ParallelForwardingAppender.

“This appender utilizes System.Collections.Concurrent.BlockingCollection(T) and other facets of the Task Parallel Library (TPL) to implement a lossless message queue. This implementation does not use polling but rather waits until new messages are available to append. This results in less CPU overhead for queue polling and appending starts without having to wait on sleep expiration.” - https://github.com/cjbhaines/Log4Net.Async

Trying things out

I ran the application a number of times over the course of a few minutes and then viewed the Graphite UI in my browser (to get to it I went to http://localhost:8888 as described in my previous post). By expanding the Graphite > stats > timers folder on the Graphite page I was able to see the 3 metrics I added to the application. That confirmed that the metrics were in fact being forwarded to statsd. Cool!


By expanding each of the timers I could see various values which I could select and add to the graph. I chose the ‘upper’ value for each of the 3 timers.


Once the values had been chosen for each of the metrics the results appeared in the graph. I ran the test application approximately once a minute over a 10 minute period so to make things clearer I configured the graph to only show data for the last 10 minutes. I also changed the line type to be a connected line.


Now I have to say I was blown away by what I saw. The version of the logging service that was using the Log4Net.Async forwarding appender was blisteringly fast. Check out the graph for yourself.


I’m a bit suspicious so I did some additional tests using a Stopwatch in the code but the results were confirmed. The downside was lost messages (roughly between 8 and 20 per run).

My naive async await implementation certainly wasn’t any faster – it was actually slower – although I could see that logging work was being assigned to different threads.

As an experiment I switched to a different Log4Net.Async appender, the ParallelForwardingAppender, which guarantees message delivery and tried again.


Not as fast as the AsyncForwardingAppender but still much faster than the standard log4Net appender.

Wrapping up

So there we have it. We’ve managed to get metrics from an application into statsd using the Statsd C# Client library and we’ve seen that the Log4Net.Async appenders offer performance gains over the standard log4Net equivalents. We’ve also managed to view the metrics in Graphite.

Result! Glass of wine anybody?