Thursday 27 October 2011

Handling WCF faults in Silverlight

Here’s a quick reminder to self about handling SOAP faults in Silverlight applications; I had to do this recently and found I’d forgotten a simple step.

Before going any further read this (it will probably answer all your questions): Creating and Handling Faults in Silverlight

Firstly, I had a WCF service that was exposing faults contracts using the [FaultContract] attribute.

[ServiceContract(Namespace = "http://some_namespace_here/ConfigurationService/2011/10/01/01", Name = "ConfigurationService")]
public interface IConfigurationService
{
    [OperationContract(Name = "GetConfiguration")]
    [FaultContract(typeof(MyFault))]
    [FaultContract(typeof(ValidationFault))]
    ConfigurationResponse GetConfiguration(ConfigurationRequest request);
}

The service was implemented along with fault types, for example:

[DataContract(Namespace = "http://some_namespace_here/WIRM/2011/10/01/01", Name = "MyFault")]
public class MyFault
{
    [DataMember(IsRequired = true, Name = "FaultDetail", Order = 0)]
    public string FaultDetail { get; set; }

    [DataMember(IsRequired = true, Name = "FaultCode", Order = 1)]
    public string FaultCode { get; set; }
}
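
On the service side these fault types are raised as typed FaultException<T> instances. Here’s a minimal sketch of what the implementation might look like (the validation check and the messages are made up for illustration):

public class ConfigurationService : IConfigurationService
{
    public ConfigurationResponse GetConfiguration(ConfigurationRequest request)
    {
        if (request == null)
        {
            // Throwing FaultException<MyFault> serialises the fault detail back to the client.
            var fault = new MyFault { FaultCode = "InvalidRequest", FaultDetail = "The request was empty." };
            throw new FaultException<MyFault>(fault, new FaultReason("Invalid configuration request."));
        }

        // Normal processing here...
        return new ConfigurationResponse();
    }
}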

The problem was that back in Silverlight when I was handling exceptions generated from service calls the exception didn’t have the specific detail of the fault. The trick to making this work is in the article linked to above. We chose to use the “alternative client HTTP stack” approach by adding this to App.xaml.cs:

public partial class App : Application
{
    public App()
    {
        bool registerResult = WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
        InitializeComponent();
    }
}

Thereafter it was possible to get at the actual fault exceptions and take advantage of specific exception details:

private void ClientGetConfigurationCompleted(object sender, GetConfigurationCompletedEventArgs e)
{
    if (e.Error == null)
    {
        // Do normal processing here...
        return;
    }
            
    if (e.Error is FaultException<ValidationFault>)
    {   
        var ex = e.Error as FaultException<ValidationFault>;
        // Do stuff with validation errors (via ex.Detail)
    }
            
    if (e.Error is FaultException<MyFault>)
    {
        var ex = e.Error as FaultException<MyFault>;
        // Do stuff with MyFault (via ex.Detail)
    }
}

That’s it.

Wednesday 19 October 2011

Feature layers not displaying when using the ArcGIS Silverlight API

I have been using the ArcGIS Silverlight API to create a mapping application. To provide some context, the application had to show a pipe network together with valves and other associated assets. The pipes were to be selectable in the interface so that pipe asset IDs could be used to drive queries and other processing.
In order to render the valves etc. I chose to use the ESRI FeatureLayer. I also used a FeatureLayer with a Mode of SelectionOnly for pipe selection.
One of the requirements of the system was that background imagery be used. This was created using an ArcGISDynamicMapServiceLayer. The feature layers and the background layer were taking their data from different ArcGIS services.
Although my code was using MVVM the scenario could be replicated in XAML like this (this is the map control XAML only with a number of layers omitted):
<esri:Map x:Name="AWMap">
    <esri:Map.Layers>
        <esri:ArcGISDynamicMapServiceLayer 
                 ID="BaseLayerStreets" 
                 Url="http://servername/ArcGIS/rest/services/projectname/backgroundservicename/MapServer" />
        
        <esri:FeatureLayer ID="Hydrants" 
            Url="http://servername/ArcGIS/rest/services/projectname/featureservicename/MapServer/0"
            Where="1 = 1"
            Mode="OnDemand"
            Renderer="{StaticResource ValveRenderer}">
            <esri:FeatureLayer.Clusterer>
                <esri:FlareClusterer 
                    FlareBackground="Red" 
                    FlareForeground="White" 
                    MaximumFlareCount="9" />
            </esri:FeatureLayer.Clusterer>
        </esri:FeatureLayer>
    
    </esri:Map.Layers>
</esri:Map>

The problem

The problem was that as soon as the background layer was included the feature layers simply didn’t render. Handling the feature LayerInitialized and InitializationFailed events showed that the feature layers were initialised and that no errors were reported.
So what was going on?

The solution

After hours of head-scratching I reread the ESRI documentation and this popped out:
“By default, the first layer with a valid spatial reference defines the spatial reference for the map. Dynamic ArcGIS Server map and image services as well as feature layers (FeatureLayer) will be reprojected to the map's spatial reference if necessary.” - http://help.arcgis.com/en/webapi/silverlight/help/index.html#/Creating_a_map/016600000011000000/
When I checked the metadata for the 2 services in the ArcGIS Services Directory I noticed that the Spatial Reference was different for the 2 services. So, I changed the XAML to something like this (note the Map.Extent element with its explicit SpatialReference):
<esri:Map x:Name="AWMap">
    <esri:Map.Extent>
        <esri:Envelope XMin="111111" YMin="222222" XMax="333333" YMax="444444" >
            <esri:Envelope.SpatialReference>
                <esri:SpatialReference WKID="27700"/>
            </esri:Envelope.SpatialReference>
        </esri:Envelope>
    </esri:Map.Extent>
    <esri:Map.Layers>
        <esri:ArcGISDynamicMapServiceLayer 
                 ID="BaseLayerStreets" 
                 Url="http://servername/ArcGIS/rest/services/projectname/servicename/MapServer" />
        
        <esri:FeatureLayer ID="Hydrants" 
            Url="http://servername/ArcGIS/rest/services/projectname/servicename/MapServer/0"
            Where="1 = 1"
            Mode="OnDemand"
            Renderer="{StaticResource ValveRenderer}">
            <esri:FeatureLayer.Clusterer>
                <esri:FlareClusterer 
                    FlareBackground="Red" 
                    FlareForeground="White" 
                    MaximumFlareCount="9" />
            </esri:FeatureLayer.Clusterer>
        </esri:FeatureLayer>
    
    </esri:Map.Layers>
</esri:Map>
The result was that the missing feature layer magically appeared.
So, if you are having problems with missing feature layers, check your spatial references. My guess is that it would be better still to align the spatial references at the server so that no re-projection is needed on the client; performance may benefit too.
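
If you prefer to set things up from code-behind or a view model rather than XAML, the equivalent is a short assignment. A minimal sketch, assuming the Envelope and SpatialReference types live in the ESRI.ArcGIS.Client.Geometry namespace and using the same made-up coordinates:

AWMap.Extent = new ESRI.ArcGIS.Client.Geometry.Envelope
{
    XMin = 111111,
    YMin = 222222,
    XMax = 333333,
    YMax = 444444,
    // Pin the map's spatial reference so the first layer to initialise doesn't dictate a different one.
    SpatialReference = new ESRI.ArcGIS.Client.Geometry.SpatialReference { WKID = 27700 }
};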

Wednesday 28 September 2011

Open XAML file in XAML view by default in VS 2010

When working with XAML in Visual Studio 2010 I prefer to see XAML as text rather than in the design or split views. To set this as a default in Visual Studio 2010:

  1. Open Visual Studio 2010.
  2. Go to Tools > Options > Text Editor > XAML > Miscellaneous.
  3. Check “Always open documents in full XAML view”.
  4. Click OK.
  5. Job done.

 


Wednesday 27 July 2011

No send action on Visio 2007 activity diagram signal shapes

One thing that’s not intuitive in Visio 2007 is how to change the text on a UML activity diagram signal element. When you first drag the signal shape on to an activity diagram it displays the text “<no send action>”.


No amount of editing properties will change that unless you do something like the following:

1. Create a new Static Structure Diagram (if you haven’t got one already).

2. Using the UML Static Structure shape palette drag a new Signal on to the Static Structure Diagram.


3. Right-click on the signal and open Properties.

4. Change the Name to whatever you want the signal to be called.

5. Return to your Activity Diagram and using the UML Activity Diagram palette add a Signal to your activity diagram.

6. Right-click on the signal and open Properties.

7. Choose Actions from the left-hand pane.

8. Select the Send Action and choose Properties.

9. Under Send Action you can now see the signal you added to the Static Structure diagram in the Signal drop down. Select it.


10. Click OK on all the dialogs and the name of the signal now appears on your signal shape.


Intuitive, isn’t it!

Monday 25 July 2011

Renaming files with PowerShell

Problem: I had a directory full of PDF files that needed renaming. Specifically, I had to remove part of each file name.

Solution: Let’s try PowerShell!

PowerShell has been sitting on my machine for ages but for some reason I haven’t got around to using it. This seemed like a great opportunity to get my feet wet and – happily – this example turned out to be very straightforward. Firstly, I discovered that PowerShell was installed to C:\WINDOWS\system32\windowspowershell\v1.0 and I checked that the directory was in my Path system environment variable.

I launched Console2 – it’s so much more fun than using the standard Windows command prompt – and changed directory to the one holding the files I wanted to rename. I then ran powershell.exe, which brought up the PowerShell prompt, and used the following PowerShell command to rename the files in the current directory:

get-childitem *.pdf | foreach{rename-item $_ $_.Name.Replace("text to replace", "")}

 


That was it. The files were renamed replacing “text to replace” with an empty string.


Wednesday 22 June 2011

Bulk copy, full text and a severe error

I have a program that is using SqlBulkCopy to batch update data in a SQL Server Express 2008 database. Everything was working fine until, for no apparent reason, the bulk inserts failed with the following exception:

“A severe error occurred on the current command. The results, if any, should be discarded.”

After much hair pulling the cause of the error turned out to be the addition of a full text index on one of the fields being updated. Removing the full text index stopped the exception from being thrown. That’s all well and good but I needed a full text index.
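
For context, the bulk copy code itself was unremarkable; a minimal sketch (the connection string, table name and DataTable contents are hypothetical) looks something like this:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.MyTable";

        // dataTable contains the rows to insert, including the column covered by the full text index.
        bulkCopy.WriteToServer(dataTable);
    }
}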

The index had been defined something like this:

EXEC sp_fulltext_database 'enable'
GO

CREATE FULLTEXT CATALOG MyCatalog
GO

CREATE FULLTEXT INDEX ON dbo.MyTable
(
    MyColumn
    Language 0X0
)

KEY INDEX PK_MyTable ON MyCatalog WITH CHANGE_TRACKING AUTO

After some experimentation the problem seemed to be with the CHANGE_TRACKING option. With it set to OFF the bulk copy worked fine, but it failed with either AUTO or MANUAL.* For my purposes this was acceptable because the data was fairly static and updated infrequently. I was left with having to ensure the index was rebuilt or populated appropriately as a separate process.

 

References

* Full-Text Index Population - http://msdn.microsoft.com/en-us/library/ms142575.aspx

Monday 20 June 2011

Error installing SQL Server Express 2008

I ran into an issue when trying to install SQL Server Express 2008 with Advanced Services. The installation file was a single executable (SQLEXPRADV_x86_ENU.exe) but when I ran it I got the following error in a dialog box:

“SQL Server Setup has encountered the following error:

Invoke or BeginInvoke cannot be called on a control until the window handle has been created..”

The solution to the problem is:

  1. SQLEXPRADV_x86_ENU.exe is a self-extracting Zip file. Use WinZip to extract it to a local folder.
  2. Open a command window and navigate to the folder containing the extracted setup files.
  3. Run setup.exe from the command prompt.

That’s it.

Note that if you want to use the Add or Remove Programs feature to add new features later, you can use the extracted files as the ‘installation media’.

Sunday 12 June 2011

A few useful exception types

Sometimes it’s useful to throw exceptions from your code, for example if an incoming method argument is incorrect for some reason. Throwing Exception isn’t very specific so what should we throw and when? Here’s a quick aide-mémoire for a few exception types I use:

  • ArgumentException – The exception that is thrown when one of the arguments provided to a method is not valid.
  • InvalidOperationException – The exception that is thrown when a method call is invalid for the object's current state.
  • FormatException – The exception that is thrown when the format of an argument does not meet the parameter specifications of the invoked method.
  • NotImplementedException – The exception that is thrown when a requested method or operation is not implemented.
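
For example, a couple of simple guard clauses might look something like this (a minimal sketch):

public class Account
{
    private bool _isOpen;

    public void Deposit(decimal amount)
    {
        if (amount <= 0)
        {
            // The caller passed a bad value - an argument problem.
            throw new ArgumentException("The deposit amount must be positive.", "amount");
        }

        if (!_isOpen)
        {
            // The argument is fine; it's the object's current state that makes the call invalid.
            throw new InvalidOperationException("Cannot deposit into a closed account.");
        }

        // Apply the deposit here...
    }
}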

References

http://msdn.microsoft.com/en-us/library/system.systemexception.aspx

Monday 6 June 2011

Saving changes is not permitted

I’ve run into this problem a couple of times now and it’s really annoying every time it happens. I created a table in SQL Express using the SQL Management Studio and saved it. I then tried adding some new columns to the table but when I saved it again the following error appeared:

“Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created.”

The dialog is really helpful:

[Screenshot of the ‘Saving changes is not permitted’ dialog]

The solution is as follows:

  1. Go to Tools > Options…
  2. Select the Designers node.
  3. Uncheck the Prevent saving changes that require table re-creation option.

 


Job done.

Wednesday 25 May 2011

SQL revision - the GROUP BY clause

Every now and then I like to go back and have a look at something I think I know and double check my assumptions and understanding. In this case I’m going to look at the GROUP BY clause in SQL.

Partitions

Firstly, let’s look at partitions in sets. This is important because it directly relates to the ‘groups’ returned by a GROUP BY clause.

“A partition of a set X is a set of nonempty subsets of X such that every element x in X is in exactly one of these subsets.” *

We can derive certain properties of partitions:

  • The union of all the subsets (partitions) gives back the original set
  • The intersection of any two distinct subsets (partitions) is empty

We can think of this like dividing a pizza into pieces. Each piece is a partition, and joining the partitions together gives us the original pizza.


The ‘groups’ returned by a GROUP BY clause are effectively simple partitions of the original set of data.

The GROUP BY clause

When we use a GROUP BY clause we take the set of data resulting from a query (the FROM and WHERE clauses) and then put the rows into groups (partitions) based on the values of the columns specified in the GROUP BY clause.

Each group becomes a single row in the result table. Each column in the row must be a characteristic of the group, not of a single row in the group. Therefore the SELECT list must be made up of grouping columns or aggregate functions. Note also that groups, by definition, must have at least one row (i.e. they can’t be empty). This means that a COUNT will never return zero when used in a query against a non-empty table. Groups are also distinct.

The resulting table of a GROUP BY is called a group table. All subsequent operations are executed on the rows in the group table rather than the original rows.

NULL values are generally treated as a single group.


The non-aggregated columns in the SELECT list must always be a subset of the columns in the GROUP BY clause; you can group by columns that you don’t select, but you can’t select a non-aggregated column that isn’t in the GROUP BY clause.
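
The same partitioning behaviour is easy to see outside SQL too. Here’s a quick LINQ sketch in C# (made-up data) where each group is a distinct, non-empty partition of the original rows:

var orders = new[]
{
    new { Customer = "A", Total = 10m },
    new { Customer = "A", Total = 25m },
    new { Customer = "B", Total = 5m }
};

// Each group is a partition of the original rows: the groups are distinct,
// none is empty, and together they contain every original row exactly once.
var totalsByCustomer = orders
    .GroupBy(o => o.Customer)
    .Select(g => new { Customer = g.Key, Count = g.Count(), Total = g.Sum(o => o.Total) });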

References

* Partition of a set, Wikipedia


Monday 23 May 2011

What is the difference between the various WCF programming models?

OK, I’m getting confused again about the differences between the various WCF-based programming models. There are so many variations on the theme - WCF Data Services, WCF RIA Services, WCF Web API, etc. – that I can no longer see the wood for the trees. Microsoft’s insistence on renaming things doesn’t help. So, here’s a quick aide-mémoire for me based on WCF, Data Services and RIA Services Alignment Questions and Answers.

WCF (also known as Indigo)
  • SOAP-based services
  • Full flexibility for building operation-centric services
  • Supports interoperability (e.g. with Java)

WCF WebHttp Services (also known as WCF REST)
  • Operation centric
  • Expose WCF service operations to non-SOAP endpoints
  • Best when
    • building operation-centric HTTP services to be deployed at web scale
    • or building a RESTful service where you want full control over the URI/format/protocol

WCF Data Services (also known as Astoria and ADO.Net Data Services)
  • Data centric
  • Best when exposing a data model through a RESTful interface
  • Includes a full implementation of the OData protocol

WCF RIA Services (also known as .Net RIA Services)
  • For building end-to-end Silverlight applications
  • Provides a prescriptive pattern that defaults many options for the best experience in the common cases

WCF Web API (the new kid on the block)
  • Expose applications, data and services to the web directly over HTTP
  • Replaces the REST Starter Kit
  • Supporting SOAP is not a goal


Wednesday 18 May 2011

MEF basics

I’m digging into the Managed Extensibility Framework (MEF) and need to keep track of some simple terms, definitions and concepts. This post is just an aide-mémoire so don’t expect any detail. I’ve crushed the MEF documentation into some bullet points.

What is MEF?

“MEF offers discovery and composition capabilities that you can leverage to load application extensions.” *
The basic namespace for everything MEF is System.ComponentModel.Composition.

Basic terms and definitions

  • Catalog
    • Responsible for discovering extensions (ComposableParts)
    • Assembly Catalog
      • Discovers all the exports in a given assembly
    • Directory Catalog
      • Discover all the exports in all the assemblies in a directory
      • Does a one-time scan of the directory and will not automatically refresh when there are changes in the directory (can call Refresh() to rescan)
      • Not supported in Silverlight
    • Aggregate Catalog
      • Use when a combination of catalogs is needed
    • Type Catalog
      • Discovers all the exports in a specific set of types
    • Deployment Catalog
      • Silverlight only
      • For dynamically downloading remote XAPs
  • CompositionContainer
    • Interacts with Catalogs
    • Resolves a part's dependencies and exposes Exports to the outside world
  • ComposablePart
    • A composable unit within MEF
    • Offers up one or more Exports
    • May depend on one or more externally provided Imports
    • Attributed with the [System.ComponentModel.Composition.Export] and [System.ComponentModel.Composition.Import] attributes in order to declare their exports and imports.
    • Either added to the container explicitly or created through the use of catalogs
    • A common pattern is for a ComposablePart to export an interface or an abstract type contract rather than a concrete type
      • Allows the importer to be decoupled from the specific implementation of the export
  • Contract
    • The bridge between exports and imports
    • A contract is a string identifier
    • If no contract is specified, MEF will implicitly use the fully qualified name of the type as the contract
    • Every export has a contract, and every import declares the contract it needs
  • Contract Assembly
    • An assembly which contains contract types that extenders can use for extending your application
  • Exports
    • Composable Part export
      • Used when a Composable Part needs to export itself
      • Decorate the Composable Part class with the [Export] attribute
    • Property export
      • Decorate a property with the [Export] attribute
      • Allows exporting sealed types such as the core CLR types, or other third party types
      • Allows decoupling the export from how the export is created
      • Allows having a family of related exports in the same Composable Part
    • Method export
      • Methods are exported as delegates which are specified in the Export contract
      • Allows finer grained control as to what is exported
      • Shields the caller from any knowledge of the type
      • Can be generated through light code gen
    • Inherited Exports
      • Base class / interface defines exports which are automatically inherited by implementers
      • Use [InheritedExport]
    • Lazy Exports
      • Can delay instantiation
      • Can prevent recursive composition down the graph
      • Import a System.Lazy<T> instead of T directly
  • Imports
    • Property import
      • Decorate the property with [Import]
    • Constructor parameters
      • Specify imports through constructor parameters
      • Add [ImportingConstructor] to the constructor
      • Add parameters to the constructor for each import
    • Field imports
      • Decorate the field with [Import]
    • Optional imports
      • MEF allows you to specify that an import is optional ([Import(AllowDefault=true)])
      • The container will provide an export if one is available, otherwise it will set the import to default(T)
    • Importing collections
      • Can import collections with the [ImportMany] attribute
      • All instances of the specific contract will be imported from the container and added to the collection
      • Recomposition
        • As new exports become available in the container, collections are automatically updated with the new set
        • [ImportMany(AllowRecomposition=true)]
    • IPartImportsSatisfiedNotification
      • Defines an OnImportsSatisfied method, which is called when all imports that could be satisfied have been satisfied
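
Putting a few of these pieces together, here’s a minimal sketch of an export, an import and composition via an AssemblyCatalog (the type names are made up):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IMessageSender
{
    void Send(string message);
}

// Export the interface (contract) rather than the concrete type.
[Export(typeof(IMessageSender))]
public class EmailSender : IMessageSender
{
    public void Send(string message) { /* ... */ }
}

public class Notifier
{
    // The container will satisfy this import from any matching export it discovers.
    [Import]
    public IMessageSender Sender { get; set; }
}

public static class CompositionExample
{
    public static Notifier Compose()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);

        var notifier = new Notifier();
        container.ComposeParts(notifier); // satisfies the [Import]s on notifier
        return notifier;
    }
}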

References

* MEF Overview

See also

Managed Extensibility Framework, on CodePlex

Tuesday 17 May 2011

Refreshing attribute table data in ArcMap

I’ve been working on a piece of code that updates data in an attribute table in ArcGIS. As part of integration testing it was necessary to setup test data using ArcMap and then run the code which would modify other data. Unfortunately, even though the code appeared to be working, the data in ArcMap didn’t look like it had changed. Even closing the ArcMap attribute table view and reopening it didn’t have any effect. I recalled that ArcGIS can use versioning so it occurred to me that this might be part of the problem. I realised I needed to refresh the data but couldn’t remember how to do it. It turns out I needed to use the Versioning toolbar.
  • Right-click on the toolbar area at the top of the ArcMap screen.
  • In the pop-up menu scroll to the bottom and tick Versioning to open the Versioning tool bar.
  • Click the Refresh icon on the Versioning toolbar.
Figure 1 – Right-click on the toolbar area.

Figure 2 – The Versioning tools (the Refresh icon circled).

Service locator anti-pattern

Some time ago I blogged about Breaking dependencies on specific DI containers. That post was concerned with resolving a WCF service’s dependencies using an IInstanceProvider and how to break a tight dependency on Unity. I chose to use the interface provided by the Common Service Locator library on CodePlex. But there is still a problem, and one that didn’t occur to me as I was writing the code because I was so wrapped up in the details of WCF.

In short, the use of a Service Locator is considered by some to be an anti-pattern. Having thought things over I think I have to agree.

The problem with a Service Locator is that it hides dependencies in your code making them difficult to figure out and potentially leads to errors that only manifest themselves at runtime. If you use a Service Locator your code may compile but hide the fact that the Service Locator has been incorrectly configured. At runtime, when your code makes use of the Service Locator to resolve a dependency that hasn’t been configured, an error will occur. The compiler wasn’t able to help. Moreover, by using a Service Locator you can’t see from the API exactly what dependencies a given class may have.

At least one proposed solution to this problem is to use Constructor Injection to explicitly pass all of a class’s dependencies in when it is instantiated. I like the logic of this solution. You can see from the constructor signature exactly what dependencies a class has and if one is missing you’ll get a compiler error.

Let’s look at an example. Imagine some code like this:

class Program
{
    static void Main(string[] args)
    {
        var needsDependencies = new NeedsDependencies(new ServiceLocator());
        needsDependencies.CallMe();
    }
}

Can you tell from the above code what dependencies the NeedsDependencies class actually has? How can you tell if the ServiceLocator has been configured correctly without running the code and seeing if it fails? In fact you can only see the dependencies by looking at the internal code of NeedsDependencies:

public class NeedsDependencies
{
    private IServiceLocator _serviceLocator;

    public NeedsDependencies(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public void CallMe()
    {
        var dependency = (Dependency)_serviceLocator.GetInstance(typeof(Dependency));
        dependency.CallMe();
    }
}

From the source code we can see that the NeedsDependencies class actually needs an instance of the Dependency class. If we didn’t have access to the source code (e.g. we were using a library provided by a 3rd party) we’d be none the wiser and would only see that the class needs a Service Locator.

If we remove the Service Locator and replace it with the actual dependencies we can see exactly what a class needs up front:

public class NeedsDependencies
{
    private Dependency _dependency;

    public NeedsDependencies(Dependency dependency)
    {
        _dependency = dependency;
    }

    public void CallMe()
    {
        _dependency.CallMe();
    }
}

To call this version we’d do something like this:

class Program
{
    static void Main(string[] args)
    {
        var needsDependencies = new NeedsDependencies(new Dependency());
        needsDependencies.CallMe();
    }
}

We can see right away that NeedsDependencies has a dependency on the Dependency class – the constructor makes this clear – and if we fail to provide one the compiler will complain.

It is important to note that we can still use IoC to resolve dependencies and we can even use a Service Locator. The point is we shouldn’t pass the Service Locator around but should restrict its use to one place in the application, usually at start-up when dependencies are being resolved.
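
For completeness, here’s a minimal sketch of what that single composition point might look like using Unity (any container would do, and the registration shown is illustrative only):

using Microsoft.Practices.Unity;

class Program
{
    static void Main(string[] args)
    {
        // The container is configured and used in one place only - the composition root.
        var container = new UnityContainer();
        container.RegisterType<Dependency>();

        // Constructor injection: the container sees that NeedsDependencies wants a Dependency.
        var needsDependencies = container.Resolve<NeedsDependencies>();
        needsDependencies.CallMe();
    }
}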


Monday 16 May 2011

Best .Net podcasts and screen casts

Podcasts

Here are my favourite .Net podcasts:

  • .Net Rocks - .NET Rocks is a weekly talk show presented by Carl Franklin and Richard Campbell for anyone interested in programming on the Microsoft .NET platform. The show can get a bit chatty but overall it’s an entertaining and informative podcast.
  • Hanselminutes - .Net’s answer to the stand-up comedian Scott Hanselman. Short programs get right to the point.
  • Herding Code – Presented by an ensemble of .Net luminaries this podcast covers a lot of ground. Subject matter is diverse.
  • Deep Fried Bytes – As they say about themselves, “Deep Fried Bytes is an audio talk show with a Southern flavor hosted by technologists and developers Keith Elder and Chris Woodruff.”
  • Yet Another Podcast – This podcast has a strong Silverlight skew but covers other topics as well. Described as “Intelligent Technical Conversation Focused on Windows Phone 7, Silverlight, And Best Practices”.

Screen casts

The best screen casts come from conferences and Microsoft has done a great job at putting conference content online. Here are my favourites:

For a wide range of video content you can always stop by MSDN’s Channel9.

Thursday 5 May 2011

The mythical ‘attributed relationship’ in ArcGIS

Following on from a recent post on Relationship classes (and other things) in ArcGIS I thought I’d take a look at a confusing term that’s bandied around in ArcGIS documentation, namely the attributed relationship.
Firstly I have noticed that there are 2 variations on the term: attribute relationship and attributed relationship. These 2 terms, although painfully similar, refer to quite different concepts.
The book Modeling Our World (Second Edition) draws a distinction between spatial relationships and attribute relationships:
Modeling with spatial relationships
Your first choice for modeling relationships is to use the GIS to manage the spatial relationships inherent among features… [snip] …
Modeling with attribute relationships
There are also many associations that require attribute relationships to be defined. You can have an association between a geographic feature, such as a parcel of land, and a non-geographic entity, such as one or more parcel owners… [snip] …
Features in feature classes and rows in tables can have attribute relationships established between them through a common field called a key… [snip] … You can make these associations in several ways, including joining or relating tables temporarily in your map or by creating relationship classes in your geodatabase that maintain more permanent associations.” *
So here the definition of an attribute relationship is quite simple; an attribute relationship refers to a relationship between a feature class and a table. That’s it.
A few pages on you get this:
Attributed relationship classes
An attributed relationship class is a type of relationship class that uses a table to store the key values for each individual relationship. Because of this, attributes can be optionally added to each relationship.” **
So, the definition of an attributed relationship is also simple but more specific; an attributed relationship is a relationship class with additional attributes other than just the keys.
I also came across this explanation in an old ESRI PDF:
If a relationship class is not attributed and does not have a cardinality of many-to-many, it is stored as a set of foreign keys on the feature or object classes. Attributed relationships are stored in tables.
Again this seems to confirm that an attributed relationship is just a relationship class (which I visualise as just a database link table) with some extra columns for additional attributes.
NB: If you are programming against ArcGIS you will find that there is an AttributedRelationship class, so from an API point of view the attributed relationship is a first class entity. I’m afraid I can’t comment on this aspect because I’ve yet to encounter it.
If you search ESRI documentation you’ll see the two terms coming up quite frequently. They express the notion that there can be relationships other than pure spatial ones between entities in the geodatabase and that relationship classes specifically can have attributes other than key values.

References

* p.78 – p79, Modeling Our World (Second Edition), Michael Zeiler, ISBN 978-1-58948-278-4
** p.85, Modeling Our World (Second Edition), Michael Zeiler, ISBN 978-1-58948-278-4

Wednesday 4 May 2011

Unit of Work

Health warning – The code featured in this post hasn’t been tested. These are just my musings on the subject of the Unit of Work pattern.

The Unit of Work pattern has been around for quite a while. I first encountered this pattern when I adopted NHibernate as the basis of my data access code. In NHibernate the Session object represents a Unit of Work. I’ve been investigating the Microsoft Entity Framework again - the latest code first option in particular - and I’ve noticed that in many examples the Unit of Work pattern is being implemented explicitly. This is particularly prevalent when the Repository pattern is being used.

So, what is the Unit of Work pattern?

Martin Fowler defines Unit of Work in the following terms:

"Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.

…When you're pulling data in and out of a database, it's important to keep track of what you've changed; otherwise, that data won't be written back into the database. Similarly you have to insert new objects you create and remove any objects you delete…

…A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work." *

Fowler goes on to suggest an interface for the Unit of Work pattern. In C# – and taking advantage of Generics – the interface might look something like this:

public interface IUnitOfWork<T>
{
    void RegisterNew(T instance);
    void RegisterDirty(T instance);
    void RegisterClean(T instance);
    void RegisterDeleted(T instance);
    void Commit();
    void Rollback();
}

For my purposes I think I can drop the RegisterClean(T instance) and Rollback() methods leaving:

public interface IUnitOfWork<T>
{
    void RegisterNew(T instance); // for tracking new instances
    void RegisterDirty(T instance); // for tracking updated instances 
    void RegisterDeleted(T instance); // for tracking deleted instances
    void Commit(); // to flush registered changes to the backing store
}

Many people seem to remove the Rollback() method and simply choose to not commit changes when something goes wrong. If you need to work with transactions you might choose differently.

Once you have an interface for your Unit of Work you need to decide what is going to implement it. I have observed 2 basic approaches to this:

  1. Create a Unit of Work class that implements the IUnitOfWork<T> interface and pass this to a Repository. In this case the IUnitOfWork<T> acts as a wrapper or adapter for the NHibernate session or EF context etc.
  2. Have your repository implement IUnitOfWork<T>. In this case the Repository is the Unit of Work.

I do not like the second option because I think it blurs the distinction between the 2 patterns. When we look at things from the point of view of separation of concerns or single responsibility the first option looks better to me.

So, given our IUnitOfWork<T> interface, let’s imagine an NHibernate implementation of the Unit of Work as well as a Repository:

public class NHibernateUnitOfWork<T> : IUnitOfWork<T>
{
    private ISession _session;

    public NHibernateUnitOfWork(ISession session)
    {
        _session = session;
    }

    public void RegisterNew(T instance)
    {
        _session.Save(instance);
    }

    public void RegisterDirty(T instance)
    {
        _session.Update(instance);
    }

    public void RegisterDeleted(T instance)
    {
        _session.Delete(instance);
    }

    public void Commit()
    {
        _session.Flush();
    }
}

public interface IRepository<T>
{
    void Add(T instance);
    void Remove(T instance);
    void Update(T instance);
}

public class Repository<T> : IRepository<T>
{
    private IUnitOfWork<T> _unitOfWork;

    public Repository(IUnitOfWork<T> unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void Add(T instance)
    {
        _unitOfWork.RegisterNew(instance);
    }

    public void Remove(T instance)
    {
        _unitOfWork.RegisterDeleted(instance);
    }

    public void Update(T instance)
    {
        _unitOfWork.RegisterDirty(instance);
    }
}

public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

To make use of the Unit of Work we might do something like this:

//Configure NHibernate
var configuration = new Configuration();
configuration.Configure();

ISessionFactory sessionFactory = configuration.BuildSessionFactory();
ISession session = sessionFactory.OpenSession();

// Create and use a Unit of Work
IUnitOfWork<User> unitOfWork = new NHibernateUnitOfWork<User>(session);
IRepository<User> repository = new Repository<User>(unitOfWork);

var user = new User {FirstName = "Andy", LastName = "French"};

repository.Add(user);
// Maybe do other things with the repository here...
            
unitOfWork.Commit();

Note that in this example we are simply delegating to the NHibernate session which already represents a Unit of Work. Rather than do this some people like to keep lists of the affected entities in the Unit of Work class and then process the lists as a batch in the Commit() method. Perhaps such an implementation would look like this:

public class NHibernateUnitOfWork<T> : IUnitOfWork<T>
{
    private ISession _session;
    private IList<T> _added = new List<T>();
    private IList<T> _deleted = new List<T>();
    private IList<T> _updated = new List<T>();

    public NHibernateUnitOfWork(ISession session)
    {
        _session = session;
    }

    public void RegisterNew(T instance)
    {
        _added.Add(instance);
    }

    public void RegisterDirty(T instance)
    {
        _updated.Add(instance);
    }

    public void RegisterDeleted(T instance)
    {
        _deleted.Add(instance);
    }

    public void Commit()
    {
        using (ITransaction transaction = _session.BeginTransaction())
        {
            ProcessAdded();
            ProcessUpdated();
            ProcessDeleted();

            transaction.Commit();
        }
    }

    private void ProcessAdded()
    {
        foreach(var instance in _added)
        {
            _session.Save(instance);
        }
    }

    private void ProcessUpdated()
    {
        foreach (var instance in _updated)
        {
            _session.Update(instance);
        }
    }

    private void ProcessDeleted()
    {
        foreach (var instance in _deleted)
        {
            _session.Delete(instance);
        }
    }
}

Note that in this version we’ve been able to use NHibernate transaction support. Choose your poison.

References

* Unit of Work, Martin Fowler, http://martinfowler.com/eaaCatalog/unitOfWork.html

Tuesday 19 April 2011

NuGet package manager error

I just got bitten by a known issue with NuGet when Reflector is added to Visual Studio 2010 as an add-in. When the package manager console was opened the following error appeared:

The following error occurred while loading the extended type data file: Microsoft.PowerShell.Core, C:\WINDOWS\system32\WindowsPowerShell\v1.0\types.ps1xml(2943) : Error in type "System.Security.AccessControl.ObjectSecurity": Exception: Cannot convert the "Microsoft.PowerShell.Commands.SecurityDescriptorCommandsBase" value of type "System.String" to type "System.Type".
The following error occurred while loading the extended type data file: Microsoft.PowerShell.Core, C:\WINDOWS\system32\WindowsPowerShell\v1.0\types.ps1xml(2950) : Error in type "System.Security.AccessControl.ObjectSecurity": Exception: Cannot convert the "Microsoft.PowerShell.Commands.SecurityDescriptorCommandsBase" value of type "System.String" to type "System.Type".
The following error occurred while loading the extended type data file: Microsoft.PowerShell.Core, C:\WINDOWS\system32\WindowsPowerShell\v1.0\types.ps1xml(2957) : Error in type "System.Security.AccessControl.ObjectSecurity": Exception: Cannot convert the "Microsoft.PowerShell.Commands.SecurityDescriptorCommandsBase" value of type "System.String" to type "System.Type".
The following error occurred while loading the extended type data file: Microsoft.PowerShell.Core, C:\WINDOWS\system32\WindowsPowerShell\v1.0\types.ps1xml(2964) : Error in type "System.Security.AccessControl.ObjectSecurity": Exception: Cannot convert the "Microsoft.PowerShell.Commands.SecurityDescriptorCommandsBase" value of type "System.String" to type "System.Type".
The following error occurred while loading the extended type data file: Microsoft.PowerShell.Core, C:\WINDOWS\system32\WindowsPowerShell\v1.0\types.ps1xml(2971) : Error in type "System.Security.AccessControl.ObjectSecurity": Exception: Cannot convert the "Microsoft.PowerShell.Commands.SecurityDescriptorCommandsBase" value of type "System.String" to type "System.Type".
System.Management.Automation.CmdletInvocationException: Could not load file or assembly 'Scripts\nuget.psm1' or one of its dependencies. The parameter is incorrect. (Exception from HRESULT: 0x80070057 (E_INVALIDARG)) ---> System.IO.FileLoadException: Could not load file or assembly 'Scripts\nuget.psm1' or one of its dependencies. The parameter is incorrect. (Exception from HRESULT: 0x80070057 (E_INVALIDARG)) ---> System.ArgumentException: Illegal characters in path.
   at System.IO.Path.CheckInvalidPathChars(String path)
   at System.IO.Path.Combine(String path1, String path2)
   at Microsoft.VisualStudio.Platform.VsAppDomainManager.<AssemblyPaths>d__1.MoveNext()
   at Microsoft.VisualStudio.Platform.VsAppDomainManager.InnerResolveHandler(String name)
   at Microsoft.VisualStudio.Platform.VsAppDomainManager.ResolveHandler(Object sender, ResolveEventArgs args)
   at System.AppDomain.OnAssemblyResolveEvent(RuntimeAssembly assembly, String assemblyFullName)
   --- End of inner exception stack trace ---
   at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadBinaryModule(Boolean trySnapInName, String moduleName, String fileName, Assembly assemblyToLoad, String moduleBase, SessionState ss, String prefix, Boolean loadTypes, Boolean loadFormats, Boolean& found)
   at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadModuleNamedInManifest(String moduleName, String moduleBase, Boolean searchModulePath, String prefix, SessionState ss, Boolean loadTypesFiles, Boolean loadFormatFiles, Boolean& found)
   at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadModuleManifest(ExternalScriptInfo scriptInfo, ManifestProcessingFlags manifestProcessingFlags, Version version)
   at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadModule(String fileName, String moduleBase, String prefix, SessionState ss, Boolean& found)
   at Microsoft.PowerShell.Commands.ImportModuleCommand.ProcessRecord()
   at System.Management.Automation.Cmdlet.DoProcessRecord()
   at System.Management.Automation.CommandProcessor.ProcessRecord()
   --- End of inner exception stack trace ---
   at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
   at System.Management.Automation.Runspaces.Pipeline.Invoke()
   at NuGetConsole.Host.PowerShell.Implementation.PowerShellHost.Invoke(String command, Object input, Boolean outputResults)
   at NuGetConsole.Host.PowerShell.Implementation.PowerShellHostExtensions.ImportModule(PowerShellHost host, String modulePath)
   at NuGetConsole.Host.PowerShell.Implementation.PowerShellHost.LoadStartupScripts()
   at NuGetConsole.Host.PowerShell.Implementation.PowerShellHost.Initialize()
   at NuGetConsole.Implementation.Console.ConsoleDispatcher.Start()
   at NuGetConsole.Implementation.PowerConsoleToolWindow.MoveFocus(FrameworkElement consolePane)

Basically this error is fixed by removing Reflector as an add-in. Go to Tools > Add-in Manager and uncheck the Startup checkbox next to the .Net Reflector entry and restart Visual Studio.


Friday 8 April 2011

Geometric Networks in ArcGIS

When modelling networks in ArcGIS there are 2 basic options you can choose:
  • Network datasets
  • Geometric networks
Network datasets are good for modelling undirected networks because they can allow flow in any direction. Network datasets are based on junctions, edges, and turn sources, which are simple feature classes. Network datasets are best suited for modelling transportation networks such as roads. Given that the work I do is not really concerned with road networks I’m going to put network datasets on the back burner.
Geometric networks are better suited to modelling utilities (e.g. gas or oil pipelines) and natural resources (e.g. river systems). This is of much greater interest to me at the moment, hence this post. In essence a geometric network is a set of features that form a connected system of edges and junctions. An important characteristic of geometric networks is that features are regarded as being connected if they exist at the same x,y coordinate (geometric coincidence).
So, geometric networks are based on 2 main elements: edges and junctions.

Edges

Edges are features that have a length and through which some commodity flows.
There are 2 types of edge:
  • Simple – A line between two adjacent junctions. Resources enter one end of an edge and exit at the other end.
  • Complex – A set of connected lines with 2 or more junctions. Resources flow from one end to another but may be ‘siphoned off’ along the edge without having to split the edge feature.
[Diagram: a complex edge intersected by two simple edges]
Note that in the diagram above the complex edge is intersected by 2 simple edges but it is not split by them. The complex edge is one continuous feature. This could represent a gas main (complex edge) intersected by service mains (simple edges).
Connectivity
Connectivity for simple edges occurs only at the end points of the feature. Connectivity for complex edges occurs at the ends of features but also at mid-span intersections.
Connections between coincident junctions are not permitted. You can only connect junctions through edges.
You can specify rules to constrain connectivity between features. For example you can specify edge-junction rules to stipulate that only edges of a certain type can connect to junctions of a certain type. Similarly you can specify edge-edge rules to constrain what types of edge can connect to each other.

Junctions

Junctions represent the locations of the endpoints of edges or the places where multiple edges connect. A junction can connect 2 or more edges and facilitates flow of a commodity (traffic, water, etc.) from one edge to another.

Logical network

A geometric network will have a corresponding logical network. The logical network is not concerned with coordinate values but rather with data about how the network is connected (connectivity, weights, and other information). The logical network is a graph stored as tables in the geodatabase and is used for tracing and flow operations. This is primarily achieved through the use of a connectivity table that lists, for every junction, the adjacent junctions and edges.

Sources and Sinks

Any of the features in the geometric network can take on the role of a source (where a commodity flows from) or a sink (where a commodity terminates). These roles are used to define flow in the network.

Weights

Features can have a weight which represents the cost of traversing the edge or passing through a junction. Any number of attributes can be used as weights, for example lengths, capacity or slope.

Monday 21 March 2011

Using Moq to verify that a method does NOT get called

I recently had a bit of a brain fade when using Moq. I wanted to check that one method on a mocked class wasn’t called but that another was. Imagine a class like this:

public class Testable
{
    IService _service;
    
    public Testable(IService service)
    {
        _service = service;
    }

    public void MethodToTest(int i)
    {
        if(i <= 0)
        {
            _service.DoSomething();
            return;
        }
        
        _service.DoSomethingElse();
    }
}

I want to test that when i > 0 only the _service.DoSomethingElse() method is called. I need the mock implementation of IService to throw an exception if the wrong method is called. The solution is to ask Moq to create a strict mock:

using Moq;
using NUnit.Framework;

[TestFixture]
public class TestableTest 
{
    [Test]
    public void MethodToTest_WhenArgumentGreaterThanZero_DoesSomethingElse()
    {
        // Arrange
        Mock<IService> mockService = new Mock<IService>(MockBehavior.Strict);
        mockService.Setup(x => x.DoSomethingElse());
        
        Testable testable = new Testable(mockService.Object);
        
        // Act
        testable.MethodToTest(1);
        
        // Assert
        mockService.Verify(x => x.DoSomethingElse());
    }
}

When MockBehavior.Strict is used, Moq will throw an exception if any method is called that does not have a corresponding setup. In this case, if the DoSomething() method is called an exception will be thrown and the test will fail.
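
An alternative (or complementary) check, which doesn’t need a strict mock, is to assert explicitly how many times each method was called:

// With a loose (default) mock you can assert the negative directly:
mockService.Verify(x => x.DoSomething(), Times.Never());
mockService.Verify(x => x.DoSomethingElse(), Times.Once());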

Friday 18 March 2011

Relationship classes (and other things) in ArcGIS

I’ve started dipping my toes in the ESRI ArcGIS APIs and wanted to get something clear in my mind - What is a relationship class?
Firstly, I’ve made the somewhat obvious observation that the ESRI concepts around classes, tables, joins etc. are really a layer over the top of relational database concepts. It seems that the ArcGIS tools provide a non-SQL way to design or access the data in the geodatabase – sensible if you want a tool for users not familiar with databases or SQL. However, if you are a developer who is familiar with SQL and databases the ArcGIS interface gets a bit confusing until you can draw a mapping between well understood relational database concepts and what the ArcGIS tools give you.

Tables

Tables in ArcGIS are just the same as tables in an RDBMS. They have fields (columns if you prefer) and rows.
Note that in ArcGIS a table will always have an ObjectID field. This is the unique identifier for a row and is maintained by ArcGIS. If the requirements of your application mean that you need to use your own identifiers then you will have to add a new field to the table (in addition to the ObjectID) and use that to store your identifiers.
Subtypes
Tables can have subtypes. A subtype is just an integer value that is used as a discriminator to group rows together (this is somewhat similar to the discriminator column when using a single table to store a class hierarchy in NHibernate). Note that it becomes a design decision as to whether you represent different feature types as separate feature classes or by using subtypes.
Domains
The term domain pops up all over the place but the explanation for what they are is quite simple.
Attribute domains are rules that describe the legal values for a field type, providing a method for enforcing data integrity.” ***
That looks very much like a way to maintain constraints to me. There are 2 types of domain:
  • Coded-value domains – where you specify a set of valid values for an attribute.
  • Range domains – where you specify a range of valid values for an attribute.

Feature classes

First up, a feature class is just a table. The way I think about it is that feature class could have been named feature table. The difference between a feature class and a plain table is that a feature class has a special field for holding shape data – the feature type.
All features in a feature class have the same feature type.

Relationship classes

This is what started me off on this investigation. I visualise a relationship class as just a link table in a database (i.e. the way you would normally manage many-to-many relationships in an RDBMS).
Here are some quotes from ESRI documentation:
A relationship class is an object in a geodatabase that stores information about a relationship between two feature classes, between a feature class and a nonspatial table, or between two nonspatial tables. Both participants in a relationship class must be stored in the same geodatabase.” *
In addition to selecting records in one table and seeing related records in the other, with a relationship class you can set rules and properties that control what happens when data in either table is edited, as well as ensure that only valid edits are made. You can set up a relationship class so that editing a record in one table automatically updates related records in the other table.” *
Relationship classes have a cardinality which is used to specify the number of objects in the origin class that can relate to a number of objects in the destination class. A relationship class can have one of three cardinalities:
  • One-to-one
  • One-to-many
  • Many-to-many
No surprises there; we are following a relational data model. Cardinality can be set using the relationship cardinality constants provided by the ESRI API. Note that once a relationship class is created, the cardinality parameter cannot be altered. If you need to change the cardinality the relationship class must be deleted and recreated.
In a relationship one class will act as the origin and another as the destination. Different behaviour is associated with the origin and destination classes so it is important to define them correctly.
In code, relationships are expressed as instances of classes that implement the IRelationshipClass interface.
Note that relationships can have additional attributes (this is like adding extra columns to a link table to store data specific to an instance of the relationship). See The mythical ‘attributed relationship’ in ArcGIS for more details.
Simple relationships
In a simple relationship, related objects can exist independently of each other. The cascade behaviour is restricted; when deleting an origin object the key field value for the matching destination object is set to null. Deleting a destination object has no effect on the origin object.
Composite relationships
In a composite relationship, the destination objects are dependent on the lifetime of the origin objects. Deletes are cascaded from an origin object to all of the related destination objects.
The IRelationshipClass interface has an IsComposite parameter; a Boolean. Setting it to true indicates the relationship will be composite and setting it to false indicates that the relationship will be simple.

Relates

A relate is distinct from a relationship and is a mechanism for defining a relationship between two datasets based on a key. In other words this is very much like an RDBMS foreign-key relationship (where one table has a foreign-key column that references the primary key of another table). So, a relate can be used to represent one-to-one and one-to-many relationships.

Joins

Although regarded as a separate construct in ArcGIS the join is really just what you’d expect a join to be in an RDBMS:
Joining datasets in ArcGIS will append attributes from one table onto the other based on a field common to both. When an attribute join is performed the data is dynamically joined together, meaning nothing is actually written to disk.” ***

 

References

* Relates vs. Relationship Classes
** How to create relationship classes in the geodatabase
*** Modeling Our World, Michael Zeiler, ISBN 978-1-58948-278-4

Friday 4 March 2011

Command-Query Responsibility Segregation (CQRS)

If you listen to the .Net Rocks! podcast you may have heard the episode where Udi Dahan Clarifies CQRS. CQRS? What’s that all about then?
Health warning: I’m learning here. Errors and misconceptions probable. Jump to References below to go to more authoritative resources.
It looks like CQRS started with Bertrand Meyer as part of his work on the Eiffel programming language. At that time Meyer referred to the principle as Command-Query Separation (CQS). That principle states:
“…that every method should either be a command that performs an action, or a query that returns data to the caller, but not both. In other words, asking a question should not change the answer. More formally, methods should return a value only if they are referentially transparent and hence possess no side effects.” - Wikipedia
The implications of this principle are that if you have a return value you cannot mutate state. It also implies that if you mutate state your return type must be void.
CQRS extends CQS by effectively mandating that when we interact with data stores objects are assigned to one of two ‘types’: Commands and Queries. This division of responsibility extends into the rest of the architecture. You may have separate services dedicated to either commands (e.g. create, update and delete operations) or queries (read-only operations). It can even extend to the level of the data store – you may choose to have separate stores for read-only queries and for commands.
Why is this separation important? Well, it recognises that the two sides have very different requirements:
  • Data store – Commands: store normalised data for transactional updates etc. Queries: store denormalised data for fast and convenient querying (e.g. minimise the number of joins required).
  • Scalability – Commands: scalability may be less important; many web systems are skewed towards more frequent read-only operations. Queries: scalability may be very important, especially in web systems. We may want to use caching here.
CQRS recognises that rather than having one unified system where create, read, update and delete operations are treated as being the same it may be better to have two complementary systems working side by side: one for read-only query operations and another for command operations. The following diagram is somewhat over-simplified but…
[Diagram: CQRS v1.0]

Features of the Command side

  • Our ‘data access layer’ for command operations becomes behavioural as opposed to data-centric.
  • Our data transfer objects don’t need to expose internal state – we create a DTO and fire it off with a command.
  • We don’t need to process commands immediately – they can be queued.
  • We might not even require an ‘always on’ connection to the data store.
  • We might use events to manage interaction between components in the command system.

Features of the Query side (the Thin Read Layer)

  • We can create an object model that is optimised for display purposes (e.g. view models).
  • We can create a separate data store that is optimised to meet the needs of the display object model (e.g. data can be denormalised to fit the requirements of specific views).
  • We can optimise data access to prevent round trips to the data layer so that all the data required by a view is returned in one operation.
  • We can optimise read-only queries in isolation from the requirements of update operations (this might be compromised in a non-CQRS system where, for example, an ORM is used for all data access operations).
  • We can bypass direct interaction with the data store and use cached data (although we should be explicit about this when we do it – let the user know how fresh the data is).
  • We can use different levels of caching for different parts of the application (perhaps one screen requires data to be fresher than another).

How do we keep the query data store in sync?

This is where events come in to the picture and I think I’ll leave that for another post!

References

Thursday 3 March 2011

Properties versus direct access to fields

I was recently required to pick up some legacy code that included public fields along these lines:
public class SomeClass
{
    public int someInt;

    // ... snip ...
}
I have grown used to using auto-implemented properties under such circumstances:
public class SomeClass
{
    public int SomeInt { get; set; }

    // ... snip ...
}
To make matters worse, different classes used different casing, so some public fields started with a capital letter and some did not (trivial but annoying). After my initial gasps of horror I got to thinking about the difference between exposing fields directly versus using properties. What really are the pros and cons?
MSDN describes properties in the following terms:
“A property is a member that provides a flexible mechanism to read, write, or compute the value of a private field. Properties can be used as if they are public data members, but they are actually special methods called accessors. This enables data to be accessed easily and still helps promote the safety and flexibility of methods.”
It turns out that Jeff Atwood is quite opinionated about the subject. Jeff acknowledges some arguments in favour of using properties over fields such as the following:
  • Reflection works differently on fields than on properties. If you rely on reflection, properties are a more natural fit (see the short sketch after this list).
  • You can't databind against a field.
  • Changing a field to a property is a breaking change requiring client code to be recompiled.
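A tiny example of the reflection point (my own sketch, not from Jeff’s post):

using System;
using System.Reflection;

public class WithField { public int Value; }
public class WithProperty { public int Value { get; set; } }

public static class ReflectionDemo
{
    public static void Main()
    {
        // Fields and properties are reached through different reflection APIs.
        FieldInfo field = typeof(WithField).GetField("Value");
        PropertyInfo property = typeof(WithProperty).GetProperty("Value");

        Console.WriteLine(field != null);      // True
        Console.WriteLine(property != null);   // True

        // Swapping the lookups returns null: code written against one member kind
        // breaks when a field is later changed to a property (or vice versa).
        Console.WriteLine(typeof(WithField).GetProperty("Value") == null);   // True
        Console.WriteLine(typeof(WithProperty).GetField("Value") == null);   // True
    }
}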
I also think that using a property signifies intent; it identifies that you intended the encapsulated field to be exposed outside the class. In effect by using a property you are declaring part of a contract, which is probably why you can declare properties in interfaces. This opinion seems to be confirmed by Jon Skeet:
“A property communicates the idea of "I will make a value available to you, or accept a value from you." It's not an implementation concept, it's an interface concept. A field, on the other hand, communicates the implementation - it says "this type represents a value in this very specific way". There's no encapsulation, it's the bare storage format. This is part of the reason fields aren't part of interfaces - they don't belong there, as they talk about how something is achieved rather than what is achieved.” - http://csharpindepth.com/Articles/Chapter8/PropertiesMatter.aspx
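As a trivial illustration of that point (my example, not Skeet’s), a property can be part of an interface contract whereas a field cannot:

public interface INamed
{
    // The interface declares the contract; implementers decide how the value is stored.
    string Name { get; set; }
}

public class Person : INamed
{
    public string Name { get; set; }
}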
Properties also allow you to use combined access modifiers should you need to (note the private setter here):
public class SomeClass
{
    public int SomeInt { get; private set; }

    // ... snip ...
}
More verbose than simple public fields they may be, but I like properties.
Thursday 3 March 2011

Friday 4 February 2011

Known Types in a WCF service

Firstly, this post has been prompted by an MSDN Magazine article by renowned brain-box Juval Lowy: Data Contract Inheritance. Having recently been forced to resort to the [KnownType] attribute on a WCF data contract, and having been slightly confused as to why it was needed (not to mention uncomfortable with having a data contract base class now coupled to its subclasses), I have found the article most illuminating.

For my own benefit here are a few choice bits from the article (my bold).

What’s the problem?

“Unlike traditional object orientation or the classic CLR programming model, WCF passes all operation parameters by value, not by reference… The parameters are packaged in the WCF message and transferred to the service, where they are then deserialized to local references for the service operation to work with…

… With multitier applications, marshaling the parameters by value works better than by reference because any layer in the architecture is at liberty to provide its own interpretation to the behavior behind the data contract. Marshaling by value also enables remote calls, interoperability, queued calls and long-running workflows.

If you do pass a subclass reference to a service operation that expects a base class reference, how would WCF know to serialize into the message the derived class portion?”

What does [KnownType] do?

“When the client passes a data contract that uses a known type declaration, the WCF message formatter tests the type (akin to using the is operator) and sees if it’s the expected known type. If so, it serializes the parameter as the subclass rather than the base class…

The WCF formatter uses reflection to collect all the known types of the data contracts, then examines the provided parameter to see if it’s of any of the known types…

… Because the KnownType attribute may be too broad in scope, WCF also provides ServiceKnownTypeAttribute, which you can apply on a specific operation or on a specific contract.”
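As a minimal illustration (my own sketch rather than code from the article), declaring the derived type as a known type on the base data contract lets WCF serialise the derived portion when an operation is declared in terms of the base class:

using System.Runtime.Serialization;

[DataContract]
[KnownType(typeof(Customer))]
public class Contact
{
    [DataMember]
    public string Name { get; set; }
}

[DataContract]
public class Customer : Contact
{
    // Without the [KnownType] declaration on Contact, passing a Customer to an
    // operation declared in terms of Contact would fail during serialisation.
    [DataMember]
    public int CustomerNumber { get; set; }
}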

.Net 4 to the rescue

“To alleviate the problem, in the .NET Framework 4 WCF introduced a way of resolving the known types at run time. This programmatic technique, called data contract resolvers, is the most powerful option because you can extend it to completely automate dealing with the known type issues. In essence, you’re given a chance to intercept the operation’s attempt to serialize and deserialize parameters and resolve the known types at run time both on the client and service sides.”
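For reference, a data contract resolver derives from DataContractResolver and overrides two methods. The sketch below follows the common assembly-qualified-name pattern; it is my own simplified illustration, not code from the article:

using System;
using System.Runtime.Serialization;
using System.Xml;

public class AssemblyQualifiedNameResolver : DataContractResolver
{
    public override bool TryResolveType(Type type, Type declaredType, DataContractResolver knownTypeResolver,
        out XmlDictionaryString typeName, out XmlDictionaryString typeNamespace)
    {
        // Try the default behaviour first; otherwise emit the CLR type and assembly names.
        if (!knownTypeResolver.TryResolveType(type, declaredType, null, out typeName, out typeNamespace))
        {
            var dictionary = new XmlDictionary();
            typeName = dictionary.Add(type.FullName);
            typeNamespace = dictionary.Add(type.Assembly.FullName);
        }
        return true;
    }

    public override Type ResolveName(string typeName, string typeNamespace, Type declaredType,
        DataContractResolver knownTypeResolver)
    {
        // Fall back to resolving the assembly-qualified name written out above.
        return knownTypeResolver.ResolveName(typeName, typeNamespace, declaredType, null)
               ?? Type.GetType(typeName + ", " + typeNamespace);
    }
}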

Friday 4 February 2011

Wednesday 2 February 2011

Remember Oracle?

OK, it looks like I’m going to have to work with Oracle again after some considerable time working exclusively with MS SQL Server. I’m now so used to T-SQL I thought I’d better spend some time getting up to speed with PL/SQL again. As usual there’s no detail here; this post is just an aide-mémoire.

What is PL/SQL?

The abbreviation PL/SQL refers to Oracle's Procedural Language extension to SQL. In general PL/SQL is executed on the database server but some tools (e.g. SQL Developer) can execute PL/SQL on the client. PL/SQL is a procedural language that also enables you to embed SQL statements within its procedural code.

PL/SQL basics

A PL/SQL Block consists of three sections:

  • The Declaration section
    • Optional
    • Starts with the keyword DECLARE
    • Used to declare any placeholders like variables, constants, records and cursors
  • The Execution section
    • Mandatory
    • Starts with the keyword BEGIN and ends with END
  • The Exception (or Error) Handling section
    • Optional
    • Starts with the keyword EXCEPTION

Basic PL/SQL looks like this:

[DECLARE
     -- Variable declaration here]
BEGIN
     -- Program Execution here
[EXCEPTION
     -- Exception handling here]
END;

For example:

DECLARE
 x   NUMBER;
BEGIN
 x := 123456;
 dbms_output.put_line('x = ');
 dbms_output.put_line(x);
END;
/

The general syntax for assigning a value to a variable is variable_name := value; a value can also be assigned from a SQL statement using the INTO keyword. Note that if dbms_output.put_line doesn’t appear to print anything, try running SET SERVEROUTPUT ON first. We can select values into a variable like this:

SELECT column_name
INTO variable_name 
FROM table_name 
[WHERE condition]; 

Records

A record is a composite data type (i.e. a combination of different scalar data types like char, varchar, number, etc.). Each scalar data type in the record holds a value. A record is therefore somewhat analogous to a row of data.

TYPE record_type_name IS RECORD 
(first_col_name column_datatype, 
second_col_name column_datatype, ...); 

To assign values to elements of a record use the following syntax:

record_name.column_name := value;  

For example:

SELECT col1, col2 
INTO record_name.col_name1, record_name.col_name2 
FROM table_name 
[WHERE clause]; 

Functions

NB: A function must always return a value. Basic function definitions look like this:

CREATE OR REPLACE FUNCTION my_function
 RETURN NUMBER AS
  x   NUMBER;
 BEGIN
  x := 123456;
  RETURN x;
 END;
/

Want to see errors?

SHOW ERRORS;

Execute the function:

SELECT my_function FROM DUAL;

Procedures

NB: A procedure may or may not return a value. A procedure is a PL/SQL Block that is stored and named for reuse. Parameters can be passed to and from a procedure in the following ways:

  • IN-parameters
  • OUT-parameters
  • IN OUT-parameters

The basic syntax for creating a procedure is:

CREATE [OR REPLACE] PROCEDURE proc_name [list of parameters] 
IS    
   Declaration section 
BEGIN    
   Execution section 
EXCEPTION    
  Exception section 
END; 

To execute a procedure from the SQL prompt use the following syntax:

 EXECUTE [or EXEC] procedure_name; 

Or from within another procedure:

  procedure_name;

The DUAL table

Why select from DUAL? The DUAL table is used in Oracle when you need to run SQL that does not have a table name. It is a special table that has exactly one row and one column (called DUMMY). Because it has one row it is guaranteed to return exactly one row in SQL statements.

Packages

“A package is a schema object that groups logically related PL/SQL types, items, and subprograms. Packages usually have two parts, a specification and a body, although sometimes the body is unnecessary. The specification (spec for short) is the interface to your applications; it declares the types, variables, constants, exceptions, cursors, and subprograms available for use. The body fully defines cursors and subprograms, and so implements the spec.” - http://download.oracle.com/docs/cd/B10500_01/appdev.920/a96624/09_packs.htm

See also

http://plsql-tutorial.com/

Wednesday 2 February 2011

What is Progressive Enhancement?

Progressive Enhancement is a web design strategy that approaches the same problems tackled by Graceful Degradation but from the opposite direction. Rather than a designer creating a compelling experience for the latest browsers and then making it degrade acceptably for older browsers, a designer ensures that basic functionality is available to all browsers and then offers additional functionality to those with a higher specification.
Wikipedia states that Progressive Enhancement consists of the following core principles:
  • basic content should be accessible to all browsers
  • basic functionality should be accessible to all browsers
  • sparse, semantic mark-up contains all content
  • enhanced layout is provided by externally linked CSS
  • enhanced behaviour is provided by unobtrusive, externally linked JavaScript
  • end user browser preferences are respected
This approach has led to the adoption of related ideas such as Unobtrusive JavaScript (as now supported by ASP.NET MVC 3).

References

Thursday 27 January 2011

Breaking dependencies on specific DI containers

Warning: Before going any further you probably want to have a look at Service locator anti-pattern. The code featured in this post uses Service Locator, but the pattern is regarded by some as an anti-pattern and as such should be avoided.

I’ve been working on a WCF service that uses Unity for dependency resolution. Everything has been working but I’ve been unhappy about a tight dependency on Unity itself that I have introduced in my code. I recalled that there is a service locator library knocking around that defines an interface that IoCs can adopt.

“The Common Service Locator library contains a shared interface for service location which application and framework developers can reference. The library provides an abstraction over IoC containers and service locators. Using the library allows an application to indirectly access the capabilities without relying on hard references. The hope is that using this library, third-party applications and frameworks can begin to leverage IoC/Service Location without tying themselves down to a specific implementation.” - http://commonservicelocator.codeplex.com/

The Common Service Locator library provides a simple interface for service location:

public interface IServiceLocator : IServiceProvider
{
    object GetInstance(Type serviceType);
    object GetInstance(Type serviceType, string key);
    IEnumerable<object> GetAllInstances(Type serviceType);
    TService GetInstance<TService>();
    TService GetInstance<TService>(string key);
    IEnumerable<TService> GetAllInstances<TService>();
}

It turns out that the library is supported by Unity (as well as a bunch of other IoC implementations). The Common Service Locator library site links to an adapter for Unity but peeking around I found the UnityServiceLocator class in the Microsoft.Practices.Unity assembly.

I have now been able to replace all references to IUnityContainer with IServiceLocator (i.e. breaking the tight dependency on Unity). In the case of the WCF service all I needed to do was create a ServiceHostFactory implementation that passes an instance of UnityServiceLocator around rather than an instance of UnityContainer.

public class UnityServiceLocatorServiceHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        var unityContainer = new UnityContainer();
        unityContainer.LoadConfiguration();
        var unityServiceLocator = new UnityServiceLocator(unityContainer);
        return new ServiceLocatorServiceHost(serviceType, unityServiceLocator, baseAddresses);
    }  
}
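As an aside, if you’d rather not use the Unity configuration section the same mapping can be registered in code. A hypothetical sketch (IMyService and MyService are made-up names; in my case the registrations came from config via LoadConfiguration()):

public interface IMyService { }
public class MyService : IMyService { }

public static class ContainerBootstrapper
{
    public static UnityServiceLocator Build()
    {
        // Equivalent of the <register> elements that LoadConfiguration() would read.
        var container = new UnityContainer();
        container.RegisterType<IMyService, MyService>();
        return new UnityServiceLocator(container);
    }
}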

The UnityServiceLocatorServiceHostFactory is the only class that has a tight dependency on Unity and can be farmed off into a separate Unity assembly. All other classes, including the service host implementation, only need to deal with IServiceLocator:

public class ServiceLocatorServiceHost : ServiceHost
{
    private IServiceLocator _serviceLocator;
        
    public ServiceLocatorServiceHost(IServiceLocator serviceLocator) : base()
    {
        _serviceLocator = serviceLocator;
    }

    public ServiceLocatorServiceHost(Type serviceType, IServiceLocator serviceLocator, params Uri[] baseAddresses)
        : base(serviceType, baseAddresses)
    {
        _serviceLocator = serviceLocator;
    }

    protected override void OnOpening()
    {
        if (Description.Behaviors.Find<ServiceLocatorServiceBehavior>() == null)
        {
            Description.Behaviors.Add(new ServiceLocatorServiceBehavior(_serviceLocator));
        }

        base.OnOpening();
    }
}

The ServiceLocatorServiceBehavior adds a service locator instance provider to the endpoint dispatcher, something like this:

public class ServiceLocatorServiceBehavior : IServiceBehavior
{   
    private readonly IServiceLocator _serviceLocator;
	
    public ServiceLocatorServiceBehavior(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        // Nothing to see here. Move along...
    }

    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
    {
        // Nothing to see here. Move along...
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
        {
            foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
            {
                string contractName = endpointDispatcher.ContractName;
                ServiceEndpoint serviceEndpoint = serviceDescription.Endpoints.FirstOrDefault(e => e.Contract.Name == contractName);
                endpointDispatcher.DispatchRuntime.InstanceProvider = new ServiceLocatorInstanceProvider(_serviceLocator, serviceEndpoint.Contract.ContractType);
            }
        }
    }
}

And finally the ServiceLocatorInstanceProvider uses the service locator to resolve dependencies, something like this:

public class ServiceLocatorInstanceProvider : IInstanceProvider
{
    private readonly IServiceLocator _serviceLocator;
    private readonly Type _contractType;

    public ServiceLocatorInstanceProvider(IServiceLocator serviceLocator, Type contractType)
    {
        this._serviceLocator = serviceLocator;
        this._contractType = contractType;
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        return _serviceLocator.GetInstance(_contractType);
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
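        // Nothing to release here; no explicit clean-up is required in this example.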
            
    }
}

Thursday 20 January 2011

Some notes on ArcGIS and associated technologies

I’m getting started with ArcGIS so I need to keep some notes. NB: This post is just an aide-mémoire for me as I get started so nothing is covered in any detail.

What is Esri?

Esri is a company providing Geographic Information System (GIS) software and geodatabase management applications. They are based in California and have about 30% of the GIS software market (see http://en.wikipedia.org/wiki/Esri).

What is the APDM?

APDM (ArcGIS Pipeline Data Model) is an open standard for storing geographical data associated with pipelines:

“The ArcGIS Pipeline Data Model is designed for storing information pertaining to features found in gathering and transmission pipelines, particularly gas and liquid systems. The APDM was expressly designed for implementation as an ESRI geodatabase for use with ESRI's ArcGIS and ArcSDE® products. A geodatabase is an object-relational construct for storing and managing geographic data as features within an industry-standard relational database management system (RDBMS).” - http://www.apdm.net/

What is ArcSDE?

“ArcSDE technology is a core component of ArcGIS Server. It manages spatial data in a relational database management system (RDBMS) and enables it to be accessed by ArcGIS clients.” - http://www.esri.com/software/arcgis/arcsde/index.html

“ArcSDE technology serves as the gateway between GIS clients and the RDBMS. It enables you to easily store, access, and manage spatial data within an RDBMS package…

ArcSDE technology is critical when you need to manage long transactions and versioned-based workflows such as

* Support for multiuser editing environments
* Distributed editing
* Federated replicas managed across many RDBMS architectures
* Managing historical archives

The responsibility for defining the specific RDBMS schema used to represent geographic data and for application logic is retained in ArcGIS, which provides the behavior, integrity, and utility of the underlying records.” - http://www.esri.com/software/arcgis/geodatabase/storage-in-an-rdbms.html

What is a geodatabase?

“The geodatabase is the common data storage and management framework for ArcGIS. It combines "geo" (spatial data) with "database" (data repository) to create a central data repository for spatial data storage and management.” - http://www.esri.com/software/arcgis/geodatabase/index.html

Basic terms and concepts

There are four fundamental types upon which geographic representations in a GIS are based:

  • Features (collections of points, lines, and polygons)
    • Representations of things located on or near the surface of the earth.
    • Can be natural (rivers, vegetation, etc.).
    • Can be constructions (roads, pipelines, buildings, etc.).
    • Can be subdivisions of land (counties, political divisions, land parcels, etc.).
    • Most commonly represented as points, lines, and polygons.
  • Attributes (descriptive attributes of features)
    • Managed in tables based on simple relational database concepts.
  • Imagery
    • Imagery is managed as a raster data type composed of cells organized in a grid of rows and columns.
    • In addition to the map projection, the coordinate system for a raster dataset includes its cell size and a reference coordinate (usually the upper left or lower left corner of the grid).
    • These properties enable a raster dataset to be described by a series of cell values starting in the upper left row.
    • Each cell location can be located using the reference coordinate, the cell size, and the number of rows and columns.
  • Continuous surfaces (such as elevation)
    • A surface describes an occurrence that has a value for every point on the earth.
    • Surface elevation is a continuous layer of values for ground elevation above mean sea level.
    • Other surface type examples include rainfall, pollution concentration, and sub-surface representations of geological formations.

See the ArcGIS Desktop Help file for further details.

GIS data structures

Features, rasters, attributes, and surfaces are managed using three primary GIS data structures:

  • Feature classes
  • Attribute tables
  • Raster datasets

The map layer types correspond to GIS dataset types as follows:

  • Features (points, lines, and polygons) – Feature classes
  • Attributes – Tables
  • Imagery – Raster datasets
  • Surfaces – Feature classes, raster datasets, or TINs (see below)

Both features and rasters can be used to provide a number of alternative surface representations:

  • Feature classes (such as contours)
  • Raster-based elevation datasets
  • TINs built from XYZ points and 3D line feature classes

In a GIS, datasets hold data about a particular feature collection (for example, roads) that is geographically referenced to the earth's surface. A dataset is a collection of homogeneous features; most datasets are collections of simple geographic elements.

Users work with geographic data in two fundamental ways:

  • As datasets (homogeneous collections of features, rasters, or attributes)
  • As individual elements (e.g. individual features, rasters, and attribute values) contained within each dataset

Datasets are:

  • The primary inputs and outputs for geoprocessing.
  • The primary means of sharing data.

 

See also

There’s some good basic information on GIS systems on the Ordnance Survey website: http://www.ordnancesurvey.co.uk/oswebsite/gisfiles/index.html

Thursday 20 January 2011