Thursday 20 November 2014

Idempotence

More revision, a few musings and another aide-mémoire.

Idempotence is a subject of much debate and seems to mean different things to different people depending on the context. This post is just a quick mile-high overview by way of a bit of revision.

Idempotence defined

Let’s pop over to Wikipedia and get a definition of idempotence:

“Idempotence (/ˌaɪdɨmˈpoʊtəns/ EYE-dəm-POH-təns) is the property of certain operations in mathematics and computer science, that can be applied multiple times without changing the result beyond the initial application.” [1]

“In computer science, the term idempotent is used more comprehensively to describe an operation that will produce the same results if executed once or multiple times. This may have a different meaning depending on the context in which it is applied. In the case of methods or subroutine calls with side effects, for instance, it means that the modified state remains the same after the first call…

This is a very useful property in many situations, as it means that an operation can be repeated or retried as often as necessary without causing unintended effects. With non-idempotent operations, the algorithm may have to keep track of whether the operation was already performed or not.” [2]

So for our purposes idempotence is the property of an operation whereby, whether it is executed once or multiple times, the result will always be the same. Now it’s the result that seems to be the cause of the debate.

There are often side effects – sometimes quite subtle – that could be considered part of the result. For example, if you were to create an audit record every time an apparently idempotent operation is executed, is that still idempotent? The answer is ‘probably not’ but it will depend on the expected behaviour of the application. Logging, auditing and monitoring may be considered side effects of message handling:

“These side effects are not relevant to the semantics of the application behavior, so the processing of an idempotent request is still considered idempotent even if side effects exist.” [6]

Idempotence in HTTP

In HTTP there are some methods that are considered idempotent.

“Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE should not have side effects, and so are inherently idempotent.” [3]

In practice there are some potential pitfalls with methods like DELETE. Although the result of a delete operation on the server may be idempotent – deleting the same resource multiple times has the same effect – it is possible for repeated DELETE requests on the same resource to return different HTTP status codes. An initial call to DELETE may return a status code of 200 but subsequent calls could return 404, so from the client side the operation may not appear to be idempotent.
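
As a client-side illustration, one way to make retried deletes behave idempotently is to treat a 404 on a repeat attempt as success, since the end state – the resource is gone – is the same. Here’s a minimal C# sketch using HttpClient; the method and URL are illustrative rather than taken from any particular API:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class SafeDelete
{
    /// <summary>Issues a DELETE, treating 404 as success so the call can be retried safely.</summary>
    public static async Task<bool> DeleteResourceAsync(HttpClient client, string resourceUrl)
    {
        HttpResponseMessage response = await client.DeleteAsync(resourceUrl);

        // 2xx: deleted on this attempt; 404: already deleted by an earlier attempt.
        return response.IsSuccessStatusCode || response.StatusCode == HttpStatusCode.NotFound;
    }
}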

Idempotency features quite a bit in REST and is regarded as an important characteristic of fault-tolerant APIs. In REST the result may well be the resource representation, so concerns about audit records etc. may not apply. To illustrate, here’s a quote from http://restcookbook.com:

“An idempotent HTTP method is a HTTP method that can be called many times without different outcomes. It would not matter if the method is called only once, or ten times over. The result should be the same. Again, this only applies to the result, not the resource itself. This still can be manipulated (like an update-timestamp), provided this information is not shared in the (current) resource representation.” [4]

So the author is drawing a distinction here between the actual resource and its representation. As long as the representation doesn’t change then the operation is regarded as idempotent.

 

Idempotence in messaging

Idempotence in message-based systems is very useful because it helps avoid the necessity of using strategies such as two-phase commit (2PC) to manage distributed transactions. Essentially 2PC in a message-based system means that a message must be processed exactly once by the cohorts participating in the distributed transaction. As we know there are potential drawbacks to using 2PC because it is a blocking protocol that potentially ties up resources.

With a 2PC strategy in place the coordinating process has to determine if a failure condition has occurred and abort the distributed transaction. This causes all participating cohorts to abort their part of the distributed transaction. In a message-based system this effectively means that the messages sent to the cohorts are regarded as not having been processed. Recovering from the failure requires the coordinator to replay the whole distributed transaction by republishing the messages.

An alternative approach is to use a strategy where a message is processed at least once. With this strategy, if a failure occurs the original message may be published again by the coordinating process, but recipients that have already processed the message are expected to ignore it. This can be achieved in two ways: have a mechanism in place to prevent duplicate messages from being processed by participating cohorts, or make the messages idempotent. [5]

A typical pattern for handling messages is for a message to be read from a queue, processed, and then removed from the queue. [6] In this scenario, if a failure occurs after the message has been processed but before the message has been removed from the queue (e.g. there’s a power failure) the message will be processed twice. If the messages are idempotent this is no longer a concern.

Strategies for implementing idempotent messages include:

  • Natural idempotency – some messages are naturally idempotent (processing them multiple times has the same effect). In much the same way as an HTTP DELETE can be idempotent, so can a message to delete a resource.
  • Use a correlation identifier – add a unique identifier to each message and use it to see if the message has been processed before (see the sketch below).
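
Here’s a minimal C# sketch of the correlation identifier approach. The IProcessedMessageStore interface is hypothetical – in practice it might be backed by a database table keyed on the message identifier:

using System;

public interface IProcessedMessageStore
{
    bool HasProcessed(Guid correlationId);
    void MarkProcessed(Guid correlationId);
}

public class DeduplicatingHandler
{
    private readonly IProcessedMessageStore _store;

    public DeduplicatingHandler(IProcessedMessageStore store)
    {
        _store = store;
    }

    public void Handle(Guid correlationId, Action processMessage)
    {
        // A redelivered message will already have been recorded, so ignore it.
        if (_store.HasProcessed(correlationId))
        {
            return;
        }

        processMessage();
        _store.MarkProcessed(correlationId);
    }
}

Note that checking the store and marking the message as processed should ideally happen atomically (e.g. in the same transaction as the message’s side effects), otherwise a crash between the two steps reopens the duplicate-processing window.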

 

Idempotence and SOA

Just a quick note here that idempotence has a part to play in a service-oriented architecture because it facilitates fault tolerance:

“Idempotency guarantees that repeated invocations of a service capability are safe and will have no negative effect.

Idempotent capabilities are generally limited to read-only data retrieval and queries. For capabilities that do request changes to service state, their logic is generally based on "set", "put" or "delete" actions that have a post-condition that does not depend on the original state of the service.

The design of an idempotent capability can include the use of a unique identifier with each request so that repeated requests (with the same identifier value) that have already been processed will be discarded or ignored by the service capability, rather than being processed again.” [7]

 

References

[1] Idempotence (Wikipedia)

[2] Idempotence – Computer science meaning (Wikipedia)

[3] RFC 2616 Part 9

[4] The RESTful CookBook – Idempotency

[5] (Un) Reliability in messaging: idempotency and de-duplication (Jimmy Bogard, Los Techies)

[6] Idempotence Is Not a Medical Condition (Pat Helland, acm.org)

[7] Idempotent Capability

Wednesday 19 November 2014

Two-phase Commit (2PC)

Time for a bit of revision. What is Two-phase Commit (2PC)?

Firstly, there’s lots of information out there on 2PC including Wikipedia and MSDN. Those articles will go into much more detail about 2PC than I will here. This post is really just an aide-mémoire.

In a nutshell:

“It is a distributed algorithm that coordinates all the processes that participate in a distributed atomic transaction on whether to commit or abort (roll back) the transaction (it is a specialized type of consensus protocol). The protocol achieves its goal even in many cases of temporary system failure (involving either process, network node, communication, etc. failures), and is thus widely utilized.” [1]

Essentially 2PC provides a mechanism for tasks to be executed across separate systems as a single atomic distributed transaction. For example, you might want to make updates to separate databases on different servers – with each update running in its own transaction – and have the whole process run as a single distributed transaction. If an error occurs during any of the component transactions then all of the transactions should be aborted (rolled back).

Note that 2PC does not have to apply to database transactions. A step in the process could mean executing a program.

“The term transaction (or any of its derivatives, such as transactional), might be misleading. In many cases, the term transaction describes a single program executing on a mainframe computer that does not use the 2PC protocol. In other cases, however, it is used to denote an operation that is carried out by multiple programs on multiple computers that are using the 2PC protocol.” [2]

There are two basic actors in 2PC: a coordinating process (the coordinator) that manages the distributed transaction, and participating processes (participants, cohorts, or workers).

The 2PC protocol calls for 2 phases (see reference [1] for full details):

  • Commit-request phase (or Voting phase)
    • The coordinator sends an instruction to all cohorts to undertake their part of the distributed transaction and waits until it has received a reply from all cohorts.
    • The cohorts execute the transaction up to the point where they will be asked to commit. They each write an entry to their undo log and an entry to their redo log.
    • Each cohort replies with an agreement message (cohort votes Yes to commit), if the cohort's actions succeeded, or an abort message (cohort votes No, not to commit), if the cohort experiences a failure that will make it impossible to commit.
  • Commit phase (or Completion phase)
    • Success
      • If the coordinator received an agreement message from all cohorts during the commit-request phase:
        • The coordinator sends a commit message to all the cohorts.
        • Each cohort completes the operation, and releases all the locks and resources held during the transaction.
        • Each cohort sends an acknowledgment to the coordinator.
        • The coordinator completes the transaction when all acknowledgments have been received.
    • Failure
      • If any cohort votes No during the commit-request phase (or the coordinator's timeout expires):
        • The coordinator sends a rollback message to all the cohorts.
        • Each cohort undoes the transaction using the undo log, and releases the resources and locks held during the transaction.
        • Each cohort sends an acknowledgement to the coordinator.
        • The coordinator undoes the transaction when all acknowledgements have been received.

The key point is that the cohorts do their work up to the point that they need to commit their transactions. They then vote on whether or not to commit. If all cohorts vote “Yes” then the coordinator tells all cohorts to commit. If any cohort votes “No” then the coordinator tells all cohorts to abort (rollback).
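
To make the flow concrete, here’s a simplified C# sketch of the coordinator’s logic. The ICohort interface is hypothetical, and a real implementation would also need timeouts, logging and recovery handling:

using System.Collections.Generic;
using System.Linq;

public interface ICohort
{
    bool Prepare();   // do the work up to the commit point and return the vote (true = Yes)
    void Commit();
    void Rollback();
}

public class Coordinator
{
    public bool Execute(IEnumerable<ICohort> cohorts)
    {
        var all = cohorts.ToList();

        // Commit-request (voting) phase: every cohort prepares and votes.
        var votes = all.Select(cohort => cohort.Prepare()).ToList();

        // Commit (completion) phase: commit only if every vote was Yes.
        if (votes.All(vote => vote))
        {
            all.ForEach(cohort => cohort.Commit());
            return true;
        }

        all.ForEach(cohort => cohort.Rollback());
        return false;
    }
}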

An interesting scenario to be considered is what happens if a cohort crashes having already voted but before it receives or processes the coordinator’s instruction to commit. The trick here is that the distributed transaction is not committed until all cohorts have acknowledged that they have committed. The coordinator will instruct the crashed cohort to commit when it becomes available again. To make this kind of scenario work the cohorts need to use logging to keep track of what steps they have taken (e.g. the database transaction log):

“To accommodate recovery from failure (automatic in most cases) the protocol's participants use logging of the protocol's states. Log records, which are typically slow to generate but survive failures, are used by the protocol's recovery procedures.” [1]

 

Disadvantages

The two-phase commit protocol is a blocking protocol. So,

  • Resources may be locked by cohorts while waiting for an instruction from the coordinator to commit or abort
  • If the coordinator fails permanently, some cohorts will never resolve their transactions

 

References

[1] Two-phase commit protocol (Wikipedia)

[2] Two-Phase Commit (MSDN)

Thursday 30 October 2014

Using Process Monitor to solve a file copy failure during an automated build

The Problem

We use CruiseControl.Net (CCNet) for continuous integration and also for automated deployment of applications. The deployment processes usually include compiling an application, configuring it appropriately for a target environment (e.g. test, production, etc.), creating a deployment package such as a Zip archive, and then copying the package to the appropriate server(s) where it is installed.

However, we suddenly started getting failures when running these deployment builds. After checking the CCNet build logs we could see errors such as “Could not find a part of the path \\servername\deploymentfolder”.

Initial checks on the build server showed that the Universal Naming Convention (UNC) paths were actually valid. So what was going on?

The Solution

To find out, I used Process Monitor to check whether there were issues accessing the UNC paths.

Firstly, I grabbed the latest version of Process Monitor from Microsoft SysInternals. It’s free and does not need installation – it runs as a single executable. Then I copied Process Monitor (procmon.exe) to the machine in question and ran it. Once it had opened I did the following:

  1. Disabled event capture by clicking the magnifying glass icon so it has the red ‘X’ overlay.
  2. Clicked the eraser icon to clear all existing captured events.
  3. Made sure Process Monitor would capture only file system activity by selecting only the filing cabinet icon.

[Screenshot: Process Monitor toolbar]

Then I needed to add a filter so I only saw events relating to the file/folder I was interested in:

  1. Clicked the filter icon (the Filter > Filter… menu option does the same thing).
  2. Once the Process Monitor Filter dialog opened, clicked the Reset button.
  3. Using the drop-down menus etc. I created a filter that said “Path begins with \\servername\deploymentfolder then include”.
  4. Clicked the Add button to add the new filter.

[Screenshot: Process Monitor Filter dialog]

Finally I clicked the magnifying glass icon again to start event capture and forced one of the failing builds to run. Once the build completed I stopped capturing events again. This was the result:

[Screenshot: captured events showing the login failure]

So I could see that NAnt had been unable to copy the deployment file to the server because of a login failure. By double-clicking on an entry in the event list I could see more details about the issue including which account was being used.

Further digging identified the cause of the issue as the account being used to run the CCNet service, and it was easy to correct.

Thursday 23 October 2014

Date operations with Noda Time

I was looking at a coding exercise recently that consisted of source code that needed refactoring. Embedded in part of the code was a check to see if someone was 21 or over. I thought this was a great chance to use the Noda Time library created by Jon Skeet to write nicely understandable code. Noda Time is described in the following terms:

“Noda Time is an alternative date and time API for .NET. It helps you to think about your data more clearly, and express operations on that data more precisely.” [1]

So, here’s a quick piece of playful code using Noda Time to check a person’s age in years. Firstly, let’s create a service interface.

using System;

namespace AgeService
{
    public interface IAgeService
    {
        bool IsOfAgeAtDate(DateTime dateOfTest, DateTime dateOfBirth, int expectedAgeInYears);
    }
}

Now let’s implement the interface.

using System;
using NodaTime;

namespace AgeService
{
    public class AgeService : IAgeService
    {
        public bool IsOfAgeAtDate(DateTime dateOfTest, DateTime dateOfBirth, int expectedAgeInYears)
        {
            var localDateOfTest = new LocalDate(dateOfTest.Year, dateOfTest.Month, dateOfTest.Day);
            var localDateOfBirth = new LocalDate(dateOfBirth.Year, dateOfBirth.Month, dateOfBirth.Day);

            Period age = Period.Between(localDateOfBirth, localDateOfTest, PeriodUnits.Years);

            return age.Years >= expectedAgeInYears;
        }
    }
}

A LocalDate is the date portion of a Noda Time LocalDateTime that has no concept of the time of day; it’s just a date. A Period is described as follows:

“A Period is a number of years, months, weeks, days, hours and so on, which can be added to (or subtracted from) a LocalDateTime, LocalDate or LocalTime. The amount of elapsed time represented by a Period isn't fixed: a period of "one month" is effectively longer when added to January 1st than when added to February 1st, because February is always shorter than January.” [2]
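
To see that variable length in action, here’s a quick sketch of my own (the end-of-month clamping in the comments is Noda Time’s documented behaviour):

using NodaTime;

public static class PeriodDemo
{
    public static void Main()
    {
        var jan31 = new LocalDate(2014, 1, 31);
        var oneMonthOn = jan31 + Period.FromMonths(1);     // 2014-02-28: the day-of-month is clamped

        var feb1 = new LocalDate(2014, 2, 1);
        var alsoOneMonthOn = feb1 + Period.FromMonths(1);  // 2014-03-01: a ‘month’ of a different length
    }
}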

Finally here are a couple of simple tests to demonstrate it works.

using System;
using NUnit.Framework;

namespace AgeServiceTests
{
    [TestFixture]
    public class AgeServiceTests
    {
        [Test]
        public void IsOfAgeAtDate_WhenOfCorrectAge_ReturnsTrue()
        {
            // Arrange
            var ageService = new AgeService.AgeService();
            var dateOfTest = new DateTime(2014, 10, 23);
            var dateOfBirth = new DateTime(1993, 10, 23);

            // Act
            var result = ageService.IsOfAgeAtDate(dateOfTest, dateOfBirth, 21);

            // Assert
            Assert.That(result, Is.True);
        }

        [Test]
        public void IsOfAgeAtDate_WhenUnderAge_ReturnsFalse()
        {
            // Arrange
            var ageService = new AgeService.AgeService();
            var dateOfTest = new DateTime(2014, 10, 23);
            var dateOfBirth = new DateTime(1993, 10, 24);

            // Act
            var result = ageService.IsOfAgeAtDate(dateOfTest, dateOfBirth, 21);

            // Assert
            Assert.That(result, Is.False);
        }
    }
}

There’s lots more to Noda Time than that but it’s a start! Happy coding.

References

[1] http://nodatime.org

[2] http://nodatime.org/1.3.x/userguide/core-types.html


Thursday 14 August 2014

Using TeamCity to generate NuGet packages that reference other NuGet packages containing binaries for specific .Net versions

The Problem

I have been using the TeamCity continuous integration server to generate and publish NuGet packages automatically. The approach I have taken is based on that proposed by David Peden in this StackOverflow thread (see Option #1). Much appreciated, David.

This works pretty well until you try to generate a NuGet package that has dependencies on other NuGet packages, and in particular if a referenced package has different .Net builds in its lib folder. This problem can be illustrated in Visual Studio. If you change the .Net version of a project that has a NuGet reference to a package that contains specific .Net builds you might see an error.

 

[Screenshot: Visual Studio reference error]

This is because the reference was created when the NuGet package was added to the project and has a path to the appropriate binaries in the NuGet package. Looking in the .csproj file for the references illustrates this further:

 

<ItemGroup>
    <Reference Include="Andy.French.Logging">
        <HintPath>..\packages\Andy.French.Logging.1.0.9\lib\net451\Andy.French.Logging.dll</HintPath>
    </Reference>
    <Reference Include="System" />
    <Reference Include="System.Core" />
    <Reference Include="System.Xml.Linq" />
    <Reference Include="System.Data.DataSetExtensions" />
    <Reference Include="Microsoft.CSharp" />
    <Reference Include="System.Data" />
    <Reference Include="System.Xml" />
</ItemGroup>

 

Notice the logging framework reference path points to the net451 folder and therefore to binaries built for that .Net version.

The same thing happened when I tried to generate the NuGet packages using David Peden’s approach without modification because it runs separate build steps for the different .Net versions. As a result in some cases the NuGet references were wrong for the step in question.

 

The Solution

For the time being I have come up with a somewhat hacky solution. It works for now but I am concerned it may not prove particularly maintainable. Time will tell.

The solution involves manually editing the .csproj file to include conditional references, something like this:

<ItemGroup>
    <Reference Condition="'$(TargetFrameworkVersion)' == 'v4.5.1'" Include="Andy.French.Logging, Version=1.0.1.0, Culture=neutral, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\Andy.French.Logging.1.0.9\lib\net451\Andy.French.Logging.dll</HintPath>
    </Reference>
    <Reference Condition="'$(TargetFrameworkVersion)' == 'v4.5'" Include="Andy.French.Logging, Version=1.0.1.0, Culture=neutral, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\Andy.French.Logging.1.0.9\lib\net45\Andy.French.Logging.dll</HintPath>
    </Reference>
    <Reference Condition="'$(TargetFrameworkVersion)' == 'v4.0'" Include="Andy.French.Logging, Version=1.0.1.0, Culture=neutral, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\Andy.French.Logging.1.0.9\lib\net40\Andy.French.Logging.dll</HintPath>
    </Reference>
    <Reference Include="log4net, Version=1.2.13.0, Culture=neutral, PublicKeyToken=669e0ddf0bb1aa2a, processorArchitecture=MSIL">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\packages\log4net.2.0.3\lib\net40-full\log4net.dll</HintPath>
    </Reference>
    <Reference Include="System" />
    <Reference Include="System.Core" />
    <Reference Include="System.Xml.Linq" />
    <Reference Include="System.Data.DataSetExtensions" />
    <Reference Include="Microsoft.CSharp" />
    <Reference Include="System.Data" />
    <Reference Include="System.Xml" />
</ItemGroup>

When TeamCity runs the build for each target framework the appropriate reference will be used. For more complex sets of references the following approach can be used:

<Choose>
    <When Condition="'$(TargetFrameworkVersion)' == 'v4.5.1'">
        <ItemGroup>
            <Reference Include="Andy.French.Domain.Driven.Design, Version=1.0.6.0, Culture=neutral, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\packages\Andy.French.Domain.Driven.Design.1.0.6\lib\net451\Andy.French.Domain.Driven.Design.dll</HintPath>
            </Reference>
            <Reference Include="Andy.French.Repository, Version=1.0.1.0, Culture=neutral, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\packages\Andy.French.Repository.1.0.6\lib\net451\Andy.French.Repository.dll</HintPath>
            </Reference>
            <Reference Include="EntityFramework">
              <HintPath>..\packages\EntityFramework.6.1.1\lib\net45\EntityFramework.dll</HintPath>
            </Reference>
            <Reference Include="EntityFramework.SqlServer">
              <HintPath>..\packages\EntityFramework.6.1.1\lib\net45\EntityFramework.SqlServer.dll</HintPath>
            </Reference>
        </ItemGroup>
    </When>
    <When Condition="'$(TargetFrameworkVersion)' == 'v4.5'">
        <ItemGroup>
            <Reference Include="Andy.French.Domain.Driven.Design, Version=1.0.6.0, Culture=neutral, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\packages\Andy.French.Domain.Driven.Design.1.0.6\lib\net45\Andy.French.Domain.Driven.Design.dll</HintPath>
            </Reference>
            <Reference Include="Andy.French.Repository, Version=1.0.1.0, Culture=neutral, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\packages\Andy.French.Repository.1.0.6\lib\net45\Andy.French.Repository.dll</HintPath>
            </Reference>
            <Reference Include="EntityFramework">
              <HintPath>..\packages\EntityFramework.6.1.1\lib\net45\EntityFramework.dll</HintPath>
            </Reference>
            <Reference Include="EntityFramework.SqlServer">
              <HintPath>..\packages\EntityFramework.6.1.1\lib\net45\EntityFramework.SqlServer.dll</HintPath>
            </Reference>
        </ItemGroup>
    </When>
    <When Condition="'$(TargetFrameworkVersion)' == 'v4.0'">
        <ItemGroup>
            <Reference Include="Andy.French.Domain.Driven.Design, Version=1.0.6.0, Culture=neutral, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\packages\Andy.French.Domain.Driven.Design.1.0.6\lib\net40\Andy.French.Domain.Driven.Design.dll</HintPath>
            </Reference>
            <Reference Include="Andy.French.Repository, Version=1.0.1.0, Culture=neutral, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\packages\Andy.French.Repository.1.0.6\lib\net40\Andy.French.Repository.dll</HintPath>
            </Reference>
            <Reference Include="EntityFramework">
              <HintPath>..\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll</HintPath>
            </Reference>
            <Reference Include="EntityFramework.SqlServer">
              <HintPath>..\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.SqlServer.dll</HintPath>
            </Reference>
        </ItemGroup>
    </When>
</Choose>
<ItemGroup>
    <Reference Include="System" />
    <Reference Include="System.ComponentModel.DataAnnotations" />
    <Reference Include="System.Core" />
    <Reference Include="System.Xml.Linq" />
    <Reference Include="System.Data.DataSetExtensions" />
    <Reference Include="Microsoft.CSharp" />
    <Reference Include="System.Data" />
    <Reference Include="System.Xml" />
</ItemGroup>

 

It’s not perfect but it keeps me moving on for now.


Saturday 9 August 2014

ReSharper keyboard shortcuts stopped working

This post applies to ReSharper 8.2 running in Visual Studio Professional 2013 (12.0.30501.00 Update 2).

I’ve been installing a few Visual Studio extensions and ReSharper plugins recently and when I started Visual Studio this morning I found that the ReSharper keyboard shortcuts had stopped working. No problem though, it turned out to be an easy fix.

If you get this problem simply follow these steps:

  1. Go to ReSharper > Options.
  2. Locate Keyboard and Menus under the Environment section.
  3. I chose the Visual Studio keyboard scheme but choose whatever is appropriate for you.
  4. Click the Apply Scheme button.

That should be it. The first time you use certain ReSharper shortcuts you may be prompted to confirm what you want to do, but this process has reset the ReSharper keyboard scheme.

 

[Screenshot: ReSharper Keyboard & Menus options page]

Sunday 20 July 2014

Null-coalescing operator in C#

It feels like I’ve been using C# forever. When you’ve been doing something for a long time it’s easy to fall into habits and miss or forget about techniques that could make your life a little easier so it doesn’t hurt to remind ourselves of things from time to time.

I have used many DI containers like Spring.Net, Unity, NInject etc. but I recently started using StructureMap for the first time. I added it to an MVC project using NuGet and when I looked at the code that had been added to my project I saw code like this:

protected override IEnumerable<object> DoGetAllInstances(Type serviceType) 
{
	return (this.CurrentNestedContainer ?? this.Container).GetAllInstances(serviceType).Cast<object>();
}

It really struck me that I couldn’t remember the last time I used the null-coalescing operator (??). Some quick revision:

“The ?? operator is called the null-coalescing operator. It returns the left-hand operand if the operand is not null; otherwise it returns the right hand operand.” [1]

The C# Reference goes on to describe more features of the ?? operator:

  • You can use the ?? operator’s syntactic expressiveness to return an appropriate value when the left operand has a nullable type whose value is null.
  • If you try to assign a nullable value type to a non-nullable value type without using the ?? operator, you will generate a compile-time error.
  • If you use a cast, and the nullable value type is currently undefined, an InvalidOperationException exception will be thrown.
  • The result of a ?? operator is not considered to be a constant even if both its arguments are constants.
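
A quick sketch illustrating those points with a nullable value type (the variable names are mine):

int? count = null;

// int total = count;        // compile-time error: cannot convert int? to int
// int total = (int)count;   // compiles, but throws InvalidOperationException because count is null
int total = count ?? 0;      // safe: evaluates to 0 when count is null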

So, by way of example, the following statements are all equivalent:

return nullableValue ?? someOtherValue;

return nullableValue != null ? nullableValue : someOtherValue;

if(nullableValue != null)
{
    return nullableValue;
}
else
{
    return someOtherValue;
}

References

[1] ?? Operator (C# Reference), MSDN


Sunday 13 July 2014

Configuring ELMAH to use SQL Server

ELMAH (Error Logging Modules and Handlers for ASP.NET) is a great little project but the documentation could be improved. One thing I like to do is to get ELMAH logging to a SQL Server database pretty much as soon as it’s integrated into a project but the documentation is a bit scant. Here’s how you do it, in this case for an MVC application.

At the time of writing the core ELMAH library is in version 1.2.1 and the Elmah.MVC package is in version 2.1.1.

Install ELMAH

In your MVC application, open Manage NuGet Packages… and search for ‘elmah’. Install the Elmah.MVC package, which will also install the ELMAH core library as a dependency.

[Screenshot: Elmah.MVC package in the NuGet package manager]

Create the ELMAH database

Hop over to the ELMAH site and locate the Microsoft SQL Server DDL script on the downloads page. Download the DDL script to your machine.

[Screenshot: ELMAH downloads page]

Open SQL Server Management Studio and create a database. I called mine Elmah. Open the DDL script and run it against the new database. This will create the tables and stored procedures used by ELMAH.

[Screenshot: running the DDL script against the Elmah database]

Create a SQL login which will be used by ELMAH to connect to the Elmah database from your MVC application. I use SQL Server authentication. You’ll probably want to cancel password policy enforcement etc. if you do the same.

[Screenshot: creating the Elmah SQL login]

Create a new User in the Elmah database using the Elmah login. Give the user data reader and data writer roles.

[Screenshot: the Elmah database user with data reader and data writer roles]

You will also need to grant execute permissions to the ELMAH stored procedures:

USE Elmah; 
GRANT EXECUTE ON OBJECT::ELMAH_GetErrorsXml
    TO Elmah;
GO 

GRANT EXECUTE ON OBJECT::ELMAH_GetErrorXml
    TO Elmah;
GO 

GRANT EXECUTE ON OBJECT::ELMAH_LogError
    TO Elmah;
GO 

Modify the ELMAH configuration in the MVC application

When you added ELMAH to your MVC application it will have created an elmah section in your Web.config. You will need to update Web.config to include a connection string for the Elmah database and then update the elmah section to use that connection string.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- other configuration removed for clarity -->
  <connectionStrings>
    <add name="elmah" connectionString="server=localhost;database=Elmah;uid=Elmah;password=password;" />
  </connectionStrings>
  <!-- other configuration removed for clarity -->
  <elmah>
    <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="elmah" applicationName="YourApplicationName"/>
  </elmah>
</configuration>

That should be all there is to get started. Now you’ll probably want to secure ELMAH.

Adding bundling and minification to an empty MVC site

If you create an empty MVC application you’ll probably want to add features like bundling and minification. The process is very simple and starts with NuGet.

At the time of writing MVC is in version 5.

Get the NuGet optimization package

In Visual Studio, open Manage NuGet Packages…

[Screenshot: the Manage NuGet Packages menu]

Search on the term ‘optimization’.

[Screenshot: searching NuGet for ‘optimization’]

Install the Microsoft ASP.Net Web Optimization Framework. Notice that this has dependencies that will also be installed.

[Screenshot: the Microsoft ASP.NET Web Optimization Framework package]

Create your bundle configuration class

In the App_Start folder of your application create a new class called BundleConfig. Create a RegisterBundles(BundleCollection bundles) static method in the class which you will use to register your bundles. If you’re not familiar with bundles you’ll probably want to look up ScriptBundle and StyleBundle from the System.Web.Optimization namespace to see how to do that.

Here’s an example that registers my CSS files:

namespace Mansio.Web.UI
{
    using System.Web.Optimization;

    /// <summary>
    /// This class handles bundle configuration.
    /// </summary>
    public class BundleConfig
    {
        /// <summary>
        /// Registers bundles with the application.
        /// </summary>
        /// <param name="bundles">The bundles to register.</param>
        public static void RegisterBundles(BundleCollection bundles)
        {
            bundles.Add(new StyleBundle("~/Content/css").Include("~/Content/*.css"));
        } 
    }
}
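
Scripts can be registered in just the same way by adding a ScriptBundle inside RegisterBundles. For example (the bundle name and script path here are illustrative):

bundles.Add(new ScriptBundle("~/bundles/scripts").Include("~/Scripts/*.js"));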

Update Global.asax

The final step is to call the bundle configuration from Global.asax by calling your static RegisterBundles method from Application_Start.

/// <summary>
/// Called when the application starts.
/// </summary>
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);
    BundleConfig.RegisterBundles(BundleTable.Bundles);
}

That’s all there is to it.


Wednesday 25 June 2014

Using Entity Framework in integration tests

Entity Framework code first includes an interesting feature called database initializers. When I first encountered this feature I wondered if it would be possible to use a database initializer to drop and recreate a database as part of a suite of integration tests. It might prove very useful if we were able to create a set of repeatable and atomic tests around, for example, a data access layer. Of course it turns out that this is possible.

At the time of writing Entity Framework is in version 6.1.1.

 

What is a database initializer?

A database initializer is an implementation of the IDatabaseInitializer<TContext> interface and is used by Entity Framework to set up the database when the context is used for the first time. This could involve dropping and recreating the entire database, or just updating the schema if the model has changed. MSDN describes the interface as follows:

“An implementation of this interface is used to initialize the underlying database when an instance of a DbContext derived class is used for the first time. This initialization can conditionally create the database and/or seed it with data.” [1]

There are several implementations available out of the box:

  • DropCreateDatabaseIfModelChanges<TContext> – will drop, recreate, and optionally re-seed the database only if the model has changed since the database was created.
  • DropCreateDatabaseAlways<TContext> - will always recreate and optionally re-seed the database the first time that a context is used in the app domain. To seed the database, create a derived class and override the Seed method.
  • CreateDatabaseIfNotExists<TContext> - will create and optionally seed the database only if the database does not exist. To seed the database, create a derived class and override the Seed method.

In our scenario we always want to reinitialize the database before each test. This will give us a repeatable baseline at the start of each test case. Also, we probably want to be able to insert some known test data. The DropCreateDatabaseAlways<TContext> class looks like it does exactly what we want: it always recreates the schema and can optionally re-seed the database.

 

Creating an initializer derived from DropCreateDatabaseAlways<TContext>

Creating our custom initializer turns out to be very simple:

namespace Andy.French.Repository.Entity.Framework.Tests
{
    using System.Data.Entity;
    using System.Data.Entity.Migrations;

    using Andy.French.Repository.Entity.Framework.Tests.Domain;

    public class TestInitializer : DropCreateDatabaseAlways<TestContext>
    {
        protected override void Seed(TestContext context)
        {
            context.Customers.AddOrUpdate(
                                c => c.Id,
                                new Customer { Name = "Customer 1" },
                                new Customer { Name = "Customer 2" });

            base.Seed(context);
        }
    }
}

In this example I’m seeding the database with a couple of customers. Naturally you would seed the database according to your needs using your domain classes.
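
For reference, the Customer and TestContext types used above aren’t shown in this post. A minimal sketch of what they might look like:

namespace Andy.French.Repository.Entity.Framework.Tests.Domain
{
    public class Customer
    {
        public int Id { get; set; }

        public string Name { get; set; }
    }
}

namespace Andy.French.Repository.Entity.Framework.Tests
{
    using System.Data.Entity;

    using Andy.French.Repository.Entity.Framework.Tests.Domain;

    public class TestContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }
}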

 

Invoking the initializer in our test suite

I use NUnit for tests - and have done so for a very long time – so the example below is based on NUnit.

For truly atomic, repeatable tests you should drop and recreate the database before each and every test case, but for this example I have chosen a coarser approach. I created a class marked with the SetUpFixture attribute. NUnit picks this up and runs the method marked with the SetUp attribute once, before any of the test fixtures (classes that contain tests) in the same namespace are run.

namespace Andy.French.Repository.Entity.Framework.Tests
{
    using System.Data.Entity;
    using NUnit.Framework;

    [SetUpFixture]
    public class SetUpFixture
    {
        public SetUpFixture()
        {
        }

        [SetUp]
        public void SetUp()
        {
            Database.SetInitializer(new TestInitializer());
            
            var context = new TestContext();
            context.Database.Initialize(true);
        }
    }
}

That’s all there is to it! In the SetUp method we call Database.SetInitializer so Entity Framework will use our custom initializer. Remember, the custom initializer extends DropCreateDatabaseAlways<TestContext> so the initializer is set for the test context type.

Next we simply create a context and call context.Database.Initialize(true), which causes the database to be dropped, recreated and re-seeded. MSDN describes the Initialize method in the following terms:

“Runs the registered IDatabaseInitializer<TContext> on this context. If "force" is set to true, then the initializer is run regardless of whether or not it has been run before. This can be useful if a database is deleted while an app is running and needs to be reinitialized. If "force" is set to false, then the initializer is only run if it has not already been run for this context, model, and connection in this app domain. This method is typically used when it is necessary to ensure that the database has been created and seeded before starting some operation where doing so lazily will cause issues, such as when the operation is part of a transaction.” [2]

 

Running a test

With everything in place we can now run some tests. Here’s an example where we are testing a customer repository:

namespace Andy.French.Repository.Entity.Framework.Tests
{
    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerRepositoryTest
    {
        private CustomerRepository _repository;

        [SetUp]
        public void SetUp()
        {
            var context = new TestContext();
            _repository = new CustomerRepository(context);
        }

        [Test]
        public void FindAll_WhenCalled_FindsAllInstances()
        {
            // Arrange

            // Act
            var result = _repository.FindAll();

            // Assert
            Assert.That(result.Count(), Is.EqualTo(2));
        }
    }
}

 

A note on configuration

You will need to add an app.config file to your project. The config file will have to tell Entity Framework which database to use. It might look something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false"/>
  </configSections>
  
  <entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
      <parameters>
        <parameter value="v11.0"/>
      </parameters>
    </defaultConnectionFactory>
    
    <providers>
      <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer"/>
    </providers>
  </entityFramework>
  
  <startup>
     <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1"/>
  </startup>
</configuration>

Note that we are using a SQL Server Express LocalDB database (v11.0). In my case the result is a couple of database files (.mdf and .ldf) in my user directory, named after the context used by the database initializer.

 

[Screenshot: the LocalDB database files in the user directory]

 

References

[1] DatabaseInitializer<TContext> Interface (MSDN)

[2] Database.Initialize Method (MSDN)

Saturday 21 June 2014

Integrating StyleCop with TeamCity

Having used CruiseControl.Net for some time I thought it was time to try something new: TeamCity from JetBrains. I’m a bit fussy about code quality so one thing I like my integration builds to do is run StyleCop and fail the build if violations are found.

Create an MSBuild file

After some research I tracked down some basic guidance on StackOverflow and adapted it to my needs [1].

I created an MSBuild file that could be referenced from a number of TeamCity build configurations. This build file invokes StyleCop, counts the violations and fails the build if StyleCop violations are encountered. I saved the build file to a shared location where it could be used from different builds. Here’s the basic script:

 

<Project DefaultTargets="RunStyleCop" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
	<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
    <Import Project="$(ProgramFiles)\MSBuild\StyleCop\v4.7\StyleCop.targets" />
	<UsingTask TaskName="XmlRead" AssemblyFile="C:\MSBuild\lib\MSBuild.Community.Tasks.dll" />

	<Target Name="RunStyleCop">
        <CreateItem Include="$(teamcity_build_checkoutDir)\**\*.cs">
            <Output TaskParameter="Include" ItemName="StyleCopFiles" />
        </CreateItem>

        <StyleCopTask ProjectFullPath="$(MSBuildProjectFile)"
                      SourceFiles="@(StyleCopFiles)"
                      ForceFullAnalysis="true"
                      TreatErrorsAsWarnings="true"
                      OutputFile="StyleCopReport.xml"
                      CacheResults="true"
                      AdditionalAddinPaths="$(ProgramFiles)\StyleCop 4.7\Andy.French.StyleCop.Rules.dll"
                      OverrideSettingsFile="$(teamcity_build_checkoutDir)\Settings.StyleCop" />
                      
        <XmlRead XPath="count(//Violation)" XmlFileName="StyleCopReport.xml">
            <Output TaskParameter="Value" PropertyName="StyleCopViolations" />
        </XmlRead>

        <TeamCitySetStatus Status="$(AllPassed)" Text="StyleCop violations: $(StyleCopViolations)" />

        <Error Condition="$(StyleCopViolations) > 0" Text="There were $(StyleCopViolations) StyleCop violations." />
	</Target>
</Project>

 

The Import of StyleCop.targets pulls in the targets file from the StyleCop installation directory. This makes the StyleCopTask used in the RunStyleCop target available. If you examine the targets file you’ll find it references the StyleCop.dll in the StyleCop installation directory; the StyleCopTask is actually in that DLL.

The UsingTask element imports MSBuild.Community.Tasks.dll. This is an open source project that adds some useful MSBuild tasks, including the XmlRead task used later in the script (see [2] below).

You may have to hop on over to the project GitHub site to grab a release [3]. I downloaded the Zip file and extracted the DLLs that I wanted, putting them in a shared location (C:\MSBuild\lib\ in this case).

The RunStyleCop target does all the work. The CreateItem element grabs all the C# files in the checkout directory. Note that we are using a TeamCity variable here: teamcity.build.checkoutDir. NB: Don’t forget you have to replace all instances of “.” with “_” if you are using MSBuild.

 

“Make sure to replace "." with "_" when using properties in MSBuild scripts; e.g. use teamcity_dotnet_nunitlauncher_msbuild_task instead of teamcity.dotnet.nunitlauncher.msbuild.task” [4]

 

Now I have some custom StyleCop rules and I like to disable a couple of the default rules. To activate my custom StyleCop rules DLL I had to specify the path to it using the AdditionalAddinPaths attribute. I also include a Settings.StyleCop file with overridden settings in each solution, so I set the OverrideSettingsFile attribute to point to that file.

The XmlRead task reads the output from StyleCop and makes the result available in a property called StyleCopViolations. The TeamCitySetStatus task then uses this property to report the number of violations to TeamCity. Finally, the Error task fails the build if there are any violations.

TeamCity configuration

It’s quite straightforward then in TeamCity. You need to add a Build Step to your Build Configuration. I set the path to the shared build file created above and specified the target:

 

[Screenshot: the TeamCity build step configuration]

 

Note I have set the path to the shared build file and included the target to run. Here’s an example of a failed build, first in the Projects overview where our status message is displayed:

 

[Screenshot: the Projects overview showing the StyleCop violation count]

 

And in the build results page where the Error message can be seen:

 

[Screenshot: the build results page showing the error message]

 

There’s more work to do, for example getting the violations to display better but for now I can get at them via the build log tab.

References

[1] Stack Overflow thread with guidance on running StyleCop from MSBuild

[2] MSBuild Community Tasks

[3] MSBuild Community Tasks releases (GitHub)

[4] TeamCity documentation on MSBuild property names

Wednesday 18 June 2014

Download SQL Server Express

Only a few days ago we were moaning in the office about how complicated it was to download the correct version of SQL Server Express. Well, it seems we were not alone.

Scott Hanselman has come to the rescue with an awesome blog post that provides easy links to the various SQL Server Express binaries. Here's Scott's short link to his blog post:

http://downloadsqlserverexpress.com

One for the bookmark list. Thanks Scott!



Thursday 22 May 2014

Minimising deadlocks with READ_COMMITTED_SNAPSHOT

The problem

We have a system that has a number of separate Windows services that all access a shared database hosted by SQL Server 2008 R2. The system is used to provide online hydraulic modelling in support of a water utility company’s maintenance activities. The services are built around SynerGEE Water, a hydraulic modelling product from DNVGL.

One of the Windows services is responsible for monitoring a model library – SynerGEE models are stored as MDB files on the file system – and when a new or updated model is detected it adds a MODEL record to the database. It also indexes all of the pipes in the model and adds them to a MODEL_PIPE table in the database.

A second service checks the database for new MODEL records and then invokes SynerGEE to perform some hydraulic analysis. The results of this analysis are used to update the MODEL_PIPE records.

We observed that if a number of models had been updated in one go, a database deadlock sometimes occurred when the second service was querying the MODEL_PIPE table. This was because the first service was in the process of adding MODEL_PIPE records for other models at the same time.

We are using NHibernate for all data access and all database queries or updates are wrapped in transactions with the assistance of the Spring.Net transaction template. NHibernate Profiler was used to confirm that all the transactions were correctly formed and we could see that the transactions were using the READ COMMITTED isolation level.

The solution

Firstly, I did some research around Minimizing Deadlocks and noted that using a row-based isolation level can help. In particular, activating READ_COMMITTED_SNAPSHOT on the database can help by allowing SQL Server to use row versioning rather than shared locks.

“When the READ_COMMITTED_SNAPSHOT database option is set ON, a transaction running under read committed isolation level uses row versioning rather than shared locks during read operations.” [1]

Further research around snapshot isolation levels provided further insight:

“The READ_COMMITTED_SNAPSHOT database option determines the behavior of the default READ COMMITTED isolation level when snapshot isolation is enabled in a database. If you do not explicitly specify READ_COMMITTED_SNAPSHOT ON, READ COMMITTED is applied to all implicit transactions. This produces the same behavior as setting READ_COMMITTED_SNAPSHOT OFF (the default). When READ_COMMITTED_SNAPSHOT OFF is in effect, the Database Engine uses shared locks to enforce the default isolation level. If you set the READ_COMMITTED_SNAPSHOT database option to ON, the database engine uses row versioning and snapshot isolation as the default, instead of using locks to protect the data.” [2]

Bingo! If the READ_COMMITTED_SNAPSHOT database option is set to ON, row versioning is used instead of locks.

“Once snapshot isolation is enabled, updated row versions for each transaction are maintained in tempdb. A unique transaction sequence number identifies each transaction, and these unique numbers are recorded for each row version. The transaction works with the most recent row versions having a sequence number before the sequence number of the transaction. Newer row versions created after the transaction has begun are ignored by the transaction…

…Snapshot isolation uses an optimistic concurrency model. If a snapshot transaction attempts to commit modifications to data that has changed since the transaction began, the transaction will roll back and an error will be raised. ” [2]

In our case this looked very interesting because the first Windows service adds new MODEL and MODEL_PIPE records in a transaction and is done with them. The second Windows service then reads the new MODEL and MODEL_PIPE records and updates them in a separate transaction. The chances of an optimistic concurrency issue are minimal. Although the two services are accessing the same table they are not accessing the same rows. Therefore, using row version-based locks would allow the two services to work better together.

So, I enabled READ_COMMITTED_SNAPSHOT on the database [3] and found that the deadlocks no longer occurred.

-- Switch to single-user mode, rolling back open transactions, so the option can be set
ALTER DATABASE <dbname> SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE <dbname> SET READ_COMMITTED_SNAPSHOT ON;
-- Restore normal multi-user access
ALTER DATABASE <dbname> SET MULTI_USER;

Result!

References

[1] Minimizing Deadlocks, Technet.

[2] Snapshot Isolation in SQL Server, MSDN.

[3] Enabling Row Versioning-Based Isolation Levels, Technet.

Monday 21 April 2014

BitDefender subscription update (2)

OK, an hour or two after submitting the email to BitDefender in step 9 of my previous post I received a reply saying my automatic renewal for BitDefender Internet Security 2014 has been cancelled. Praise be!

BitDefender, if you are listening, please change the way you manage automatic renewals of your product. Allow users to cancel their subscription or automatic renewal from the product page as easily as they can renew them. You are operating in a sector that requires the trust of your users. Engender that trust by making the process honest and transparent. I think your business will benefit from it. 

Sunday 20 April 2014

BitDefender subscription update

OK, here’s an update to the situation regarding my BitDefender subscriptions (see previous post Are BitDefender (Avangate/Digital River) behaving like confidence tricksters?).

I had a couple of emails waiting for me this morning. There has been some movement, but not on everything.

Step 8 - Subscription cancellation (huzzah!)

BitDefender have cancelled my “BitDefender Internet Security 2013 subscription”. Thank you BitDefender. Thank goodness for that.

However, I note that they have not cancelled the automatic renewal of the BitDefender Internet Security 2014 subscription so I’m going to have to do that separately again. I’m going to use the email address I was given on Twitter (see below).

Step 9 - Twitter power

Well, yesterday I popped an update on Twitter and it looks like BitDefender heard me. I’m now going to contact them using the email address they provided (bitsy@bitdefender.com) to get the automatic renewal of the 2014 subscription cancelled. Let’s see what happens then.

[Screenshots: the Twitter exchange with BitDefender]

Saturday 19 April 2014

Are BitDefender (Avangate/Digital River) behaving like confidence tricksters?


I have been a user of BitDefender by Digital River for a few years now. In January of this year I ‘upgraded’ my BitDefender installation to a BitDefender Internet Security 2014 subscription. I was surprised to find that today – 19th April 2014 – I was charged £40.46 GBP because my subscription had been renewed automatically.

I don’t want auto-renewal of anything and I couldn’t understand why I was being charged again with 281 days left on my subscription.

What I discovered is that the subscriptions for the old versions of the product are still in place and are being automatically renewed and I can’t cancel them!

This seems to be a trend amongst the anti-virus vendors. I had a similar experience with Kaspersky. These are companies that are operating in an environment where you are inclined to trust them. After all, they are working to protect you, aren’t they? What they are actually doing is making you accidentally sign up for automatic renewals (there was probably some small print and an inconspicuous checkbox on their payment page) and then not letting you cancel the subscription or making it very hard to do so.

This post is a description of all the steps I’ve taken to try and cancel the automatic renewal. At this point all attempts have failed but I’ll update this post if I succeed. Please read on and make your own minds up as to whether BitDefender are behaving like confidence tricksters.

Suffice it to say, I would advise anybody to avoid BitDefender like the plague.
  

Step 1 – Do some checks

OK, so BitDefender shows me that I have 281 days left.

[Screenshot: BitDefender showing 281 days left on the subscription]

So, let’s hop over to their website and see what gives. I log on to my account and head over to the product page and this is what I see:

[Screenshot: my account’s product page listing all previous products as active]

Looks like all my previous products are still active. I don’t use them anymore because I’ve upgraded to the 2014 version so how can I cancel the automatic renewals? Well, not on this page and there are no instructions here either.

Isn’t it reasonable to expect to see a button allowing you to cancel a subscription? After all they are very keen for you to renew. Even a little link next to each subscription would be a help. 
 

Step 2 – Check the email

The automatic renewal email I received from BitDefender had some instructions about how to follow up:

[Screenshot: the automatic renewal email]
 

Step 3 – Find my order

So I head over to www.findmyorder.com to see what gives. The site requires an order number and a password. Luckily the email from BitDefender included an order number so I put it in along with my account password:

[Screenshot: findmyorder.com rejecting the order number and password]

What? Incorrect order number and/or password?

OK, let’s try the ‘forgot your password’ link to see if it’s the password. This gives me a form asking for the order number again. No problem, I enter the order number, click Submit and it sends me an email.

[Screenshot: the password reminder email]

The weird thing is the password is completely different to my account password (by the way, I strongly suspect everyone is getting the same password back for this page). Never mind, maybe it’s me. Let’s enter the password and see what happens:

[Screenshot: the order page with a ‘manage your subscription’ link]

Great, we can manage a subscription. Let’s click the link…

Step 4 – Manage subscription (not)

Ah, another login.

[Screenshot: another login page]

Never mind. Let’s try logging in.

[Screenshot: the ‘Enter a valid email address’ error]

Now take my word for it, it doesn’t matter what password I use (my BitDefender password or the one they sent me in the email previously) I get the same thing: “Enter a valid email address”.

It could be that it’s bad validation but by now I’m getting suspicious.

Step 5 – Contact support

So, back on the product page from step 1 I use the Support link and get this dialog:

[Screenshot: the support dialog with no 2014 product listed]

OK, where’s the 2014 product (I have a 2014 subscription listed on the product page)? And take my word for it, clicking the “full list of products” link doesn’t list it either. Oh well, let’s use the 2013 version for now and see what gives.

[Screenshot: the support search form]

I click FIND HELP and get a useful looking result:

[Screenshot: bd12]

Let’s click the link:

[Screenshot: bd13]

Oh good, another link. OK, here goes:

[Screenshot: bd14]

What you get is a nice form to fill in. The problem is that there’s no 2014 version of the subscription listed, but the automatic renewal I want to cancel is for BitDefender Internet Security 2014, not 2013.

Anyway, I have submitted this form a number of times, once for each ‘version’ I have, asking for automatic renewal of my subscriptions to be stopped. I have also submitted an extra one listing all three of the products I own, asking the same.

Now, I don’t know where this form goes, but I haven’t even received an automated response and as far as I can see nothing has happened.

But really, why do I have to go through all these pages to try – and fail – to cancel a subscription or an automatic renewal when all subscriptions are listed on my product page? Why oh why can’t I do it there? Why should I have to contact support for this?

Anyway, this hasn’t worked so what can I do now?

Step 6 – eHow makes a suggestion

Getting desperate, I do a Google search with Bing and find a link to an eHow page that suggests going to http://shop.BitDefender.com and completing some simple steps. The steps aren’t right, though; it looks like the article is out of date. So I ended up clicking Contact Us at the top of the page and then SUPPORT, but that gets you right back to the support page from step 5. Bummer.

However, if you click “My BitDefender” at the top of the page you get something that looks quite useful:

[Screenshot: bd15]

But yet again there is no way to cancel subscriptions or automatic renewals.

But look, there’s a support link at the top of the page. I wonder where that goes:

[Screenshot: bd16]

Well, it goes nowhere. You stay on the same page.

Step 7 – The mystery page

OK, I can’t remember how I found this page, but it looks like the one the eHow article from step 6 was suggesting.

Anyway, I completed the form but it absolutely will not submit because it tells me that no orders were found! Remember, this is using the order number I was given in BitDefender’s automatic renewal email, the same order number that worked in step 3. Funny that.

You really do have to question if any of this is accidental.

[Screenshot: bd17]

Step 8 – Email customer support directly

Well, on the mystery page in Step 7 there was an email address listed (customerservice@bitdefender.com). So, I sent an email to that address.

[Screenshot: bd18]

OK, guess what came back. Remember, this was an email address taken from a publicly accessible page on the BitDefender web site.

[Screenshot: bd19]

What a surprise.


Thursday 17 April 2014

Modifying layout and formatting rules in ReSharper

We use StyleCop to implement some of our coding standards. Although in the main we use StyleCop defaults, there are a few exceptions for which we have created custom rules. I found that some of ReSharper’s formatting and layout rules clash with our StyleCop rules.

What follows is a description of how I set about changing a couple of those ReSharper formatting and layout rules.

Using directives before namespaces


There may be arguments for putting using directives inside the namespace, but our standards (and my preference) require that they appear at the top of the file. I’m not alone in this.

By default, when ReSharper helps out by adding using directives for you, it adds them inside the namespace. To make ReSharper put them at the top of the file instead, I did this (a sketch of the result follows the steps):
  1. In Visual Studio go to ReSharper > Options.
  2. Navigate to Code Editing > C# > Namespace Imports.
  3. Deselect “Add using directives to the deepest scope”.
  4. Save the changes.
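
With that option deselected, ReSharper inserts new using directives above the namespace. A minimal illustration (the namespace and type names are invented):

// With “Add using directives to the deepest scope” deselected,
// ReSharper places new using directives here, above the namespace...
using System.Collections.Generic;

namespace MyCompany.Widgets
{
    // ...rather than inside the namespace, which is the default.
    public class WidgetRepository
    {
        private readonly List<string> _names = new List<string>();
    }
}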



Private instance fields prefixed with an underscore


To prefix private instance fields with an underscore, do the following (an example follows the steps):
  1. In Visual Studio go to ReSharper > Options.
  2. Navigate to Code Editing > C# > Naming Style.
  3. Double-click on “Instance Fields (private)”.
  4. Add a Name Prefix and click Set.
  5. Save the changes.
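
With the prefix set, fields that ReSharper generates or renames come out like this (a minimal sketch; the class is invented):

public class Account
{
    // With a Name Prefix of “_” configured for private instance fields,
    // ReSharper suggests and generates names like these.
    private readonly string _owner;
    private decimal _balance;

    public Account(string owner)
    {
        _owner = owner;
    }
}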

   


Don't use 'this' qualifier for instance members

 

To prevent ReSharper from including "this." for instance members (an example follows the steps):

  1. In Visual Studio go to ReSharper > Options.
  2. Navigate to Code Editing > C# > Formatting Style > Other.
  3. Scroll down to "Force 'this.' qualifier for instance member" and select "Do not use" from the dropdown list.
  4. Save the changes.
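
With the option set to "Do not use", ReSharper leaves member access unqualified. A minimal sketch (the class is invented):

public class Counter
{
    private int _count;

    public void Increment()
    {
        // ReSharper no longer rewrites this as this._count++.
        _count++;
    }
}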




Monday 3 March 2014

Validation adorners not displaying in WPF

The problem

I was developing a WPF application using the IDataErrorInfo interface to perform validation on view models. When errors were encountered I wanted them to be displayed in the UI using adorners to change border appearances and show error messages. I also wanted these adorners to show when the form first loaded, so the user could see straight away which fields needed to be completed.

The problem was that the adorners only appeared after the user changed the values of the validated controls. When the form first loaded things like required fields were not adorned.


Figure 1 – No adorners showing on required fields etc. when the form first loaded.



Figure 2 – The desired result with adorners displayed correctly.

After spending some time checking the behaviour of the application I could see that the IDataErrorInfo members were being called when the form loaded, that INotifyPropertyChanged was correctly implemented, and that the PropertyChanged event was being raised as expected.
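
For reference, the view models followed the usual IDataErrorInfo pattern, along these lines (a simplified sketch; the class and property names are invented):

using System.ComponentModel;

public class PersonViewModel : INotifyPropertyChanged, IDataErrorInfo
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; OnPropertyChanged("Name"); }
    }

    // Called by the binding engine for each bound property.
    public string this[string columnName]
    {
        get
        {
            if (columnName == "Name" && string.IsNullOrWhiteSpace(Name))
                return "Name is required.";
            return null;
        }
    }

    public string Error
    {
        get { return null; }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}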

The solution

The solution was to wrap the form elements in an AdornerDecorator. So, where I previously had a Grid element containing the rows of input controls and labels, I wrapped that grid in an AdornerDecorator.

<StackPanel Orientation="Vertical">
    <AdornerDecorator>
        <Grid>
            <Grid.RowDefinitions>
                <!-- Row definitions omitted -->
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
                <!-- Column definitions omitted -->
            </Grid.ColumnDefinitions>

            <!-- Input controls and labels omitted -->
            
        </Grid>
    </AdornerDecorator>
    
    <!-- Other layout omitted -->
    
</StackPanel>

Explanation

The controls displaying validation errors in the application were based on styles that used Validation.ErrorTemplate to provide the adorned layout shown when errors occur. For adorners to be displayed there must be an AdornerLayer available in the visual tree.

“An Adorner is a custom FrameworkElement that is bound to a UIElement. Adorners are rendered in an AdornerLayer, which is a rendering surface that is always on top of the adorned element or a collection of adorned elements.” [1]

There will be times when there isn’t an AdornerLayer available, so you may have to provide one. The way to do this is to add an AdornerDecorator to your XAML, remembering that somewhere there will be a call to the static AdornerLayer.GetAdornerLayer method, which walks up the visual tree looking for an AdornerLayer.

“This static method traverses up the visual tree starting at the specified Visual and returns the first adorner layer found.” [2]
“The AdornerDecorator specifies the position of the AdornerLayer in the visual tree. It is typically used in a ControlTemplate for a control that might host Adorner objects. For example, the ControlTemplate of a Window contains an AdornerDecorator so that the child elements of the window can be adorned. The GetAdornerLayer method returns null if you pass in an element that does not have an AdornerDecorator as an ancestor in its visual tree.” [3]
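
You can check this from code with a small helper (a sketch; the class and method names are invented):

using System.Windows;
using System.Windows.Documents;

public static class AdornerDiagnostics
{
    // Returns true if the element can display adorners, i.e. an
    // AdornerLayer (such as the one provided by an AdornerDecorator)
    // exists above it in the visual tree.
    public static bool CanShowAdorners(UIElement element)
    {
        AdornerLayer layer = AdornerLayer.GetAdornerLayer(element);
        return layer != null;
    }
}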

Note that, as stated above, the AdornerDecorator is typically used in a ControlTemplate. You might want to explore that avenue further.

References

[1] Adorners Overview
[2] AdornerLayer.GetAdornerLayer Method
[3] AdornerDecorator Class

Saturday 1 March 2014

Create a self-signed certificate for development in IIS 7

This post is a quick follow-up to an earlier one, Generating temp SSL certificates for development, which showed a manual method for creating self-signed certificates. If you’ve got IIS 7 then the following method is much quicker and easier.

The method

Open the Internet Information Services (IIS) Manager.

Select the server node in the connections pane.

[Screenshot: server-cert-001]

Find the Server Certificates item and double-click on it.

Right-click in the Server Certificates pane and choose Create Self-Signed Certificate… from the pop-up menu.

[Screenshot: server-cert-002]

In the dialog box provide a friendly name for the certificate and click OK.

[Screenshot: server-cert-003]

That’s all there is to creating a self-signed certificate in IIS 7!

To use the certificate on a web site hosted in IIS you need to open the Sites node in the Connections pane and select the web site that needs the SSL certificate.

In the Actions pane on the right, click Bindings…

[Screenshot: server-cert-004]

If there isn’t an HTTPS binding, you’ll have to add one in the Site Bindings dialog. You could edit an existing HTTPS binding instead if you need to.

[Screenshot: server-cert-005]

You’ll be prompted to select an SSL certificate for the binding. Just select the self-signed certificate you’ve created. Click OK and you’re done.

[Screenshot: server-cert-006]

You are now ready to use SSL on your website.
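
One caveat: because the certificate is self-signed, browsers will warn about it and .NET clients will reject it by default. For local test code you can relax certificate validation, though never do this in production. A minimal sketch, assuming the site is bound to https://localhost/:

using System;
using System.IO;
using System.Net;

class SelfSignedSmokeTest
{
    static void Main()
    {
        // DEVELOPMENT ONLY: accept any certificate for this process,
        // including our self-signed one.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, errors) => true;

        var request = (HttpWebRequest)WebRequest.Create("https://localhost/");
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine("Status: {0}", response.StatusCode);
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}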

References