Wednesday, 12 December 2012

NServiceBus 3.3.2 processes and high CPU usage

Problem

I recently ran into a problem with a simple Windows Forms application that I was using to test an NServiceBus endpoint. The application simply allowed me to add messages to the endpoint's input queue without having to invoke a number of other components in the system, which made it useful for development and testing.

The forms application had originally been written using NServiceBus 2.6 but had since been upgraded to NServiceBus 3.3.2. When I ran the upgraded version it was using over 40% of the available CPU, something that hadn't happened with NServiceBus 2.6.

Solution

The issue turned out to be a permissions problem when the forms application tried to access its input queue. The solution was to delete the existing queues and to configure the application to run the NServiceBus installers on start-up.

In this case NServiceBus was self-hosted within the forms application, so I invoked the installers when I created the bus, something like this:

var bus = NServiceBus.Configure.With()
              .Log4Net()
              .DefaultBuilder()
              .XmlSerializer()
              .MsmqTransport()
              .UnicastBus()
                .LoadMessageHandlers()
              .DisableTimeoutManager()
              .CreateBus()
              .Start(() => Configure.Instance.ForInstallationOn<NServiceBus.Installation.Environments.Windows>().Install());

 

Note that if your process doesn't actually need an input queue because it only sends messages, you can avoid creating an input queue altogether by using send-only mode:

var bus = NServiceBus.Configure.With()
              .Log4Net()
              .DefaultBuilder()
              .XmlSerializer()
              .MsmqTransport()
              .UnicastBus()
              .DisableTimeoutManager()
              .SendOnly();

 

Wednesday, 10 October 2012

Using CruiseControl.Net to build branched code from Subversion

WARNING: You will want to treat this post with caution. Subsequent investigation has indicated that the original solution provided is inaccurate. I've left the post in place as a record of my investigation into the Subversion Source Control Block in CCNet. There's an Update at the bottom of the page that describes how CCNet implements autoGetSource and cleanCopy for Subversion and the probable cause of our issue.
We recently branched some code in Subversion ready to kick off development for the next phase of a project. We employ continuous integration (CI) and use CruiseControl.Net (CCNet) so we thought it would be a simple matter of cloning the existing CCNet projects and modifying the Subversion repository path to point to the new branch rather than the trunk. However, we discovered that the branch builds were actually getting the source from the trunk even though the repository URL was pointing to the new branch. The solution to this problem turned out to be simple but took a bit of head scratching.
We are using CCNet version 1.6.7981.1.
Firstly, our Subversion repository has a structure similar to the following:

So, we have a repository root under which are a set of projects. Each project has a branches folder and a tags folder. Branches and tags can contain as many branches and tags as necessary. Each project also has a single trunk for the current working copy.
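In outline, the layout looks something like this (the project and branch names are illustrative):

Repository root
    ProjectName
        Branches
            v2.0.0.0
        Tags
        Trunk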
To build the trunk we had a CCNet source control configuration like the following (note that the trunkUrl points to the trunk in our repository):
<sourcecontrol type="filtered">
  <sourceControlProvider type="svn">
    <trunkUrl>http://<server name here>/repository/ProjectName/Trunk</trunkUrl>
    <workingDirectory>C:\SomePathToWorkingDirectory</workingDirectory>
    <executable>c:\svn\bin\svn.exe</executable>
    <username>someusername</username>
    <password>somepassword</password>
  </sourceControlProvider>
  <exclusionFilters>
    <pathFilter>
      <pattern>/SomePathToFilter/*.*</pattern>
    </pathFilter>
  </exclusionFilters>
</sourcecontrol>

To build a branch we had a CCNet source control configuration like the following (note that the trunkUrl now points to a branch in our repository):
<sourcecontrol type="filtered">
  <sourceControlProvider type="svn">
    <trunkUrl>http://<server name here>/repository/ProjectName/Branches/v2.0.0.0</trunkUrl>
    <workingDirectory>C:\SomePathToWorkingDirectory</workingDirectory>
    <executable>c:\svn\bin\svn.exe</executable>
    <username>someusername</username>
    <password>somepassword</password>
  </sourceControlProvider>
  <exclusionFilters>
    <pathFilter>
      <pattern>/SomePathToFilter/*.*</pattern>
    </pathFilter>
  </exclusionFilters>
</sourcecontrol>

But this configuration failed as described above; we ended up building the trunk code, not the branch.
The solution turned out to be to use the autoGetSource configuration element of the Subversion Source Control Block [1]. There is limited documentation for this element, but what we are told is that it indicates “whether to retrieve the updates from Subversion for a particular build”.
<sourcecontrol type="filtered">
  <sourceControlProvider type="svn">
    <autoGetSource>true</autoGetSource>
    <trunkUrl>http://<server name here>/repository/ProjectName/Branches/v2.0.0.0</trunkUrl>
    <workingDirectory>C:\SomePathToWorkingDirectory</workingDirectory>
    <executable>c:\svn\bin\svn.exe</executable>
    <username>someusername</username>
    <password>somepassword</password>
  </sourceControlProvider>
  <exclusionFilters>
    <pathFilter>
      <pattern>/SomePathToFilter/*.*</pattern>
    </pathFilter>
  </exclusionFilters>
</sourcecontrol>
This seems to have solved the problem and the branch builds are now working correctly. However, I'm not altogether sure why this works because the documentation for our version of CCNet indicates that autoGetSource is optional but defaults to ‘true’.

Update

Having been confused by this behaviour I’ve had a look at the CCNet source code for the Subversion Source Control Block (ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn class) for the version we are using (1.6.7981.1).
Firstly, the AutoGetSource property is set to ‘true’ in the class constructor and – as far as I can see – it is only referenced in the GetSource(IIntegrationResult result) method. So, it should be ‘true’ if you don't set it in your CCNet config file.
public override void GetSource(IIntegrationResult result)
{
    result.BuildProgressInformation.SignalStartRunTask("Getting source from SVN");

    if (!AutoGetSource) return;

    if (DoesSvnDirectoryExist(result) && !CleanCopy)
    {
        UpdateSource(result);
    }
    else
    {
        if (CleanCopy)
        {
            if (WorkingDirectory == null)
            {
                DeleteSource(result.WorkingDirectory);
            }
            else
            {
                DeleteSource(WorkingDirectory);
            }
        }
        CheckoutSource(result);
    }
}
Looking at the code above, if you set autoGetSource to ‘false’ CCNet won't try to check out or update the source from Subversion at all.
Next, if the Subversion directory exists and you haven’t set cleanCopy to ‘true’ in CCNet config, CCNet will do a Subversion update on the existing code. Otherwise it will end up doing a Subversion checkout, deleting the working directory first if cleanCopy was set to ‘true’.
It now seems very unlikely that explicitly setting autoGetSource to ‘true’ would have had the effect of fixing our problem. It seems much more likely that the first time the build ran it did a checkout against the trunk and not the branch (perhaps because the CCNet config trunkUrl was incorrect at that time). Subsequent builds were therefore doing an update against the trunk. As part of trying to resolve the issue we deleted the working directory (and the SVN directory within it), which would have forced a fresh checkout, and we can assume that the trunkUrl was then correctly pointing to the branch.
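Based on that behaviour, if you want to guarantee a fresh checkout on every build you could set cleanCopy in the source control block instead; a sketch (untested against our setup) would look like this:

<sourcecontrol type="filtered">
  <sourceControlProvider type="svn">
    <cleanCopy>true</cleanCopy>
    <trunkUrl>http://<server name here>/repository/ProjectName/Branches/v2.0.0.0</trunkUrl>
    <workingDirectory>C:\SomePathToWorkingDirectory</workingDirectory>
    <executable>c:\svn\bin\svn.exe</executable>
    <username>someusername</username>
    <password>somepassword</password>
  </sourceControlProvider>
</sourcecontrol>

Bear in mind that, as the GetSource code above shows, cleanCopy deletes the working directory and does a full checkout for every build, so builds will take longer.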

References

[1] CruiseControl.NET: Subversion Source Control Block

Monday, 24 September 2012

Classic and integrated application pool modes in IIS 7

You may have noticed that when creating or editing application pools in IIS 7 you can choose between two different modes: Classic and Integrated. So what's the difference?

Firstly, a quick reminder on how to get to the application pools. Crack open the IIS manager and select Application Pools from the connections tree-view on the left. You’ll see a list of application pools which you can select. If you right-click on an application pool and choose “Basic settings…” in the pop-up menu you can change the “Managed pipeline mode” using a drop-down. [2]

 


IIS manager showing basic settings for an application pool
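The pipeline mode can also be changed from the command line using appcmd; something along these lines (the application pool name here is just an example):

%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /managedPipelineMode:Integrated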

 

Microsoft documentation describes an application pool in the following terms:

“An application pool is a group of one or more URLs that are served by a worker process or a set of worker processes. Application pools set boundaries for the applications they contain, which means that any applications that are running outside a given application pool cannot affect the applications in the application pool.” [1]

It goes on to say:

“The application pool mode affects how the server processes requests for managed code. If a managed application runs in an application pool with integrated mode, the server will use the integrated, request-processing pipelines of IIS and ASP.NET to process the request. However, if a managed application runs in an application pool with classic mode, the server will continue to route requests for managed code through Aspnet_isapi.dll, processing requests the same as if the application was running in IIS 6.0.” [1]

In versions of IIS prior to version 7, ASP.NET integrated with IIS via an ISAPI extension (aspnet_isapi.dll) and an ISAPI filter (aspnet_filter.dll). It therefore exposed its own application and request processing model which resulted in “ASP.NET components executing entirely inside the ASP.NET ISAPI extension bubble and only for requests mapped to ASP.NET in the IIS script map configuration” [3]. So, in effect, there were two pipelines: one for native ISAPI filters and another for managed application components (ASP.NET). This architecture had limitations:

“The major limitation of this model was that services provided by ASP.NET modules and custom ASP.NET application code were not available to non-ASP.NET requests. In addition, ASP.NET modules were unable to affect certain parts of the IIS request processing that occurred before and after the ASP.NET execution path.” [4]

In IIS 7 the ASP.NET runtime was integrated with the core web server, providing a unified request processing pipeline exposed to both native and managed components.

Some benefits of the new architecture include:

  • Allowing services provided by both native and managed modules to apply to all requests, regardless of handler. For example, managed Forms Authentication can be used for all content, including ASP pages, CGIs, and static files.
  • Empowering ASP.NET components to provide functionality that was previously unavailable to them due to their placement in the server pipeline. For example, a managed module providing request rewriting functionality can rewrite the request prior to any server processing, including authentication.
  • A single place to implement, configure, monitor and support server features, such as a single module and handler mapping configuration, a single custom errors configuration and a single URL authorization configuration. [3]
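As a concrete illustration of the first point, in classic mode a managed HTTP module is registered under <system.web> and only sees requests that are mapped to ASP.NET, whereas in integrated mode it is registered under <system.webServer> and can run for every request. A sketch of the two registrations (the module name and type are hypothetical):

<!-- Classic mode: the module only runs for requests handled by ASP.NET -->
<system.web>
  <httpModules>
    <add name="MyAuthModule" type="MyApp.MyAuthModule, MyApp" />
  </httpModules>
</system.web>

<!-- Integrated mode: the module runs in the unified IIS pipeline for all requests -->
<system.webServer>
  <modules>
    <add name="MyAuthModule" type="MyApp.MyAuthModule, MyApp" />
  </modules>
</system.webServer>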

There’s a nice description of how ASP.Net is integrated with IIS 7 here.

References

Thursday, 20 September 2012

aspnet_regiis.exe error 0x8007000B on Windows 7

Problem

The following error occurred while registering ASP.Net in IIS on Windows 7 using aspnet_regiis.exe -i:

Operation failed with 0x8007000B

An attempt was made to load a program with an incorrect format.


Solution

The solution was to run the 64-bit version of aspnet_regiis.exe located in the Framework64 folder.
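For example, for .NET 4.0 on a 64-bit machine the command is along these lines (the framework version folder will vary depending on the version installed):

%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i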

Saturday, 18 August 2012

Flash player – stutter during audio playback

Problem

Audio stutter occurs when playing back videos using Flash player. The CPU is not overloaded and network bandwidth is available.

Solution

In my case this was caused by Flash Player 11. I don't know what they've done in that version but it simply won't play back videos without audio stutter, regardless of which web browser hosts the plugin.

The solution was to uninstall Flash 11 and install version 10.3. Links to the installers can be found here:

Where can I find direct downloads of Flash Player 10.3 for Windows or Macintosh?

Thursday, 19 July 2012

Enabling failed request tracing in IIS 7 on Windows Server 2008

Make sure failed request tracing (FRT) is installed

You can tell whether FRT is installed by opening the IIS Manager and selecting a web site: if FRT is not installed, the Failed Request Tracing… option is missing.

 


Fig 1 – IIS Manager with no Failed Request Tracing… option

 

To enable this feature:

1. Open the Server Manager.

2. Expand Roles and select Web Server (IIS).

 


Fig 2 – The Server Manager

 

3. Scroll down to the Role Services section; you will see that the Tracing role service is not installed.

 


Fig 3 – Tracing feature not installed.

 

4. Click Add Role Services.

5. Enable Tracing.


Fig 4 – Enable Tracing

 

6. Click Next etc. to install the feature.

7. Close the Server Manager, then reopen the IIS Manager and select a web site.

8. Failed Request Tracing… is now available.

 


Fig 5 – Failed Request Tracing… is now available.

 

9. Click on Failed Request Tracing… and select Enable.

10. Click OK.

 


Fig 6 – Enable Failed Request Tracing…
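As an alternative to the IIS Manager dialogs, failed request tracing rules can also be defined in a site's web.config; a minimal sketch (the path, providers and status codes are only examples):

<system.webServer>
  <tracing>
    <traceFailedRequests>
      <add path="*">
        <traceAreas>
          <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
          <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications" verbosity="Verbose" />
        </traceAreas>
        <failureDefinitions statusCodes="500" />
      </add>
    </traceFailedRequests>
  </tracing>
</system.webServer>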

Thursday, 24 May 2012

Collections in NServiceBus 2.6 messages

I needed to create an NServiceBus message type (an implementation of IMessage) that contained a collection of sub items. My initial thought was to expose the collection as an IEnumerable in order to preserve encapsulation - I didn’t want client code to be able to modify the collection. Here’s an example:

public class MyMessage : IMessage
{
    public int SomeCode { get; set; }

    public IEnumerable<ListItem> Items { get; set; }
}
public class ListItem
{
    public string Key { get; set; }
    
    public string Message { get; set; } 
}

The problem was that the collection was turning up empty at the destination.

This turns out to be a limitation of the NServiceBus XML serializer, which is described as follows:

“NServiceBus has its own custom XML serializer which is capable of handling both classes and interfaces as well as dictionaries and does not use the WCF DataContractSerializer. Binary serialization is done using the standard .net binary serializer.” - http://nservicebus.com/Performance.aspx

However, it seems the XML serializer isn't as fully featured as other XML serializers; instead it is focussed on moving messages around quickly and efficiently.

In this case the solution was to change from using IEnumerable<T> to List<T>. Not too painful really.

public class MyMessage : IMessage
{
    public int SomeCode { get; set; }

    public List<ListItem> Items { get; set; }
}

Note that a number of serialization issues in NServiceBus – including this one – can be addressed by using the NServiceBus binary serializer.
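For a self-hosted endpoint, switching serializers is just a change to the fluent configuration; something like the following sketch, which is based on the NServiceBus 3.3.2 configuration shown in the post above (the exact extension methods may differ slightly for 2.6):

var bus = NServiceBus.Configure.With()
              .Log4Net()
              .DefaultBuilder()
              .BinarySerializer()   // use the binary serializer instead of .XmlSerializer()
              .MsmqTransport()
              .UnicastBus()
                .LoadMessageHandlers()
              .CreateBus()
              .Start();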