Friday, 4 February 2011

Known Types in a WCF service

Firstly, this post has been prompted by an MSDN Magazine article by renowned brain-box Juval Lowy: Data Contract Inheritance. Having recently been forced to resort to the [KnownType] attribute on a WCF data contract, and having been slightly confused as to why (not to mention uncomfortable with a data contract base class now being coupled to its subclasses), I found the article most illuminating.

For my own benefit here are a few choice bits from the article (my bold).

What’s the problem?

Unlike traditional object orientation or the classic CLR programming model, WCF passes all operation parameters by value, not by reference… The parameters are packaged in the WCF message and transferred to the service, where they are then deserialized to local references for the service operation to work with…

… With multitier applications, marshaling the parameters by value works better than by reference because any layer in the architecture is at liberty to provide its own interpretation to the behavior behind the data contract. Marshaling by value also enables remote calls, interoperability, queued calls and long-running workflows.

If you do pass a subclass reference to a service operation that expects a base class reference, how would WCF know to serialize into the message the derived class portion?

What does [KnownType] do?

When the client passes a data contract that uses a known type declaration, the WCF message formatter tests the type (akin to using the is operator) and sees if it’s the expected known type. If so, it serializes the parameter as the subclass rather than the base class…

The WCF formatter uses reflection to collect all the known types of the data contracts, then examines the provided parameter to see if it’s of any of the known types…

… Because the KnownType attribute may be too broad in scope, WCF also provides ServiceKnownTypeAttribute, which you can apply on a specific operation or on a specific contract.
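
For my own reference, this is roughly what the two attributes look like in code (a minimal sketch with made-up contract names, not lifted from the article):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
[KnownType(typeof(Customer))] // the base contract declares its subclass
public class Contact
{
    [DataMember]
    public string Name { get; set; }
}

[DataContract]
public class Customer : Contact
{
    [DataMember]
    public int CustomerNumber { get; set; }
}

[ServiceContract]
public interface IContactManager
{
    // The narrower alternative: declare the known type per operation (or per contract).
    [OperationContract]
    [ServiceKnownType(typeof(Customer))]
    void AddContact(Contact contact);
}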

.NET 4 to the rescue

To alleviate the problem, in the .NET Framework 4 WCF introduced a way of resolving the known types at run time. This programmatic technique, called data contract resolvers, is the most powerful option because you can extend it to completely automate dealing with the known type issues. In essence, you’re given a chance to intercept the operation’s attempt to serialize and deserialize parameters and resolve the known types at run time both on the client and service sides.
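
I haven’t tried this yet, but the shape of it is: derive from DataContractResolver and override two methods, one used during serialization and one during deserialization. A simplistic sketch (assuming both sides share the same types, and falling back to assembly-qualified type names) might look like this:

using System;
using System.Runtime.Serialization;
using System.Xml;

public class SharedTypeResolver : DataContractResolver
{
    // Serialization: map a CLR type to an XML name and namespace.
    public override bool TryResolveType(Type type, Type declaredType, DataContractResolver knownTypeResolver,
        out XmlDictionaryString typeName, out XmlDictionaryString typeNamespace)
    {
        if (knownTypeResolver.TryResolveType(type, declaredType, null, out typeName, out typeNamespace))
        {
            return true; // already a known type, nothing more to do
        }

        var dictionary = new XmlDictionary();
        typeName = dictionary.Add(type.FullName);
        typeNamespace = dictionary.Add(type.Assembly.FullName);
        return true;
    }

    // Deserialization: map the XML name and namespace back to a CLR type.
    public override Type ResolveName(string typeName, string typeNamespace, Type declaredType,
        DataContractResolver knownTypeResolver)
    {
        return knownTypeResolver.ResolveName(typeName, typeNamespace, declaredType, null)
               ?? Type.GetType(typeName + ", " + typeNamespace);
    }
}

The resolver is then attached to each operation via the DataContractSerializerOperationBehavior.DataContractResolver property, on both the client and the service.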


Wednesday, 2 February 2011

Remember Oracle?

OK, it looks like I’m going to have to work with Oracle again after some considerable time working exclusively with MS SQL Server. I’m now so used to T-SQL I thought I’d better spend some time getting up to speed with PL/SQL again. As usual there’s no detail here; this post is just an aide-mémoire.

What is PL/SQL?

The abbreviation PL/SQL refers to Oracle's Procedural Language extension to SQL. In general PL/SQL is executed on the database server but some tools (e.g. SQL Developer) can execute PL/SQL on the client. PL/SQL is a procedural language that also enables you to embed SQL statements within its procedural code.

PL/SQL basics

A PL/SQL Block consists of three sections:

  • The Declaration section
    • Optional
    • Starts with the keyword DECLARE
    • Used to declare any placeholders like variables, constants, records and cursors
  • The Execution section
    • Mandatory
    • Starts with the keyword BEGIN and ends with END
  • The Exception (or Error) Handling section
    • Optional
    • Starts with the keyword EXCEPTION

Basic PL/SQL looks like this:

[DECLARE
     -- Variable declaration here]
BEGIN
     -- Program Execution here
[EXCEPTION
     -- Exception handling here]
END;

For example:

DECLARE
 x   NUMBER;
BEGIN
 x := 123456;
 dbms_output.put_line('x = ');
 dbms_output.put_line(x);
END;
/

A value can be assigned to a variable either directly, using variable_name := value;, or from a SQL statement using the INTO keyword. Note that if dbms_output.put_line doesn’t appear to print anything, try running SET SERVEROUTPUT ON first. We can select a value into a variable like this:

SELECT column_name
INTO variable_name 
FROM table_name 
[WHERE condition];

Records

A record is a composite data type (i.e. a combination of different scalar data types like char, varchar, number, etc.). Each scalar data type in the record holds a value, so a record is somewhat analogous to a row of data.

TYPE record_type_name IS RECORD 
(first_col_name column_datatype, 
second_col_name column_datatype, ...); 

To assign values to elements of a record use the following syntax:

record_name.column_name := value;  

For example:

SELECT col1, col2 
INTO record_name.col_name1, record_name.col_name2 
FROM table_name 
[WHERE clause];

Functions

NB: A function must always return a value. Basic function definitions look like this:

CREATE OR REPLACE FUNCTION my_function
 RETURN NUMBER AS
  x   NUMBER;
 BEGIN
  x := 123456;
  RETURN x;
 END;
/

Want to see errors?

SHOW ERRORS;

Execute the function:

SELECT my_function FROM DUAL;

Procedures

NB: A procedure may or may not return a value. A procedure is a PL/SQL Block that is stored and named for reuse. Parameters can be passed to and from a procedure in the following ways:

  • IN-parameters
  • OUT-parameters
  • IN OUT-parameters

The basic syntax for creating a procedure is:

CREATE [OR REPLACE] PROCEDURE proc_name [list of parameters] 
IS    
   Declaration section 
BEGIN    
   Execution section 
EXCEPTION    
  Exception section 
END; 
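
For example, a minimal procedure with an IN and an OUT parameter might look like this (it uses the emp table from the classic SCOTT demo schema; substitute your own table and columns):

CREATE OR REPLACE PROCEDURE get_employee_name
   (p_empno IN  NUMBER,      -- IN-parameter: the employee to look up
    p_ename OUT VARCHAR2)    -- OUT-parameter: the name passed back to the caller
IS
BEGIN
   SELECT ename
   INTO   p_ename
   FROM   emp
   WHERE  empno = p_empno;
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      p_ename := NULL;
END;
/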

To execute a procedure from the SQL prompt use the following syntax:

 EXECUTE [or EXEC] procedure_name; 

Or from within another procedure:

  procedure_name;

The DUAL table

Why select from DUAL? The DUAL table is used in Oracle when you need to run SQL that does not have a table name. It is a special table that has exactly one row and one column (called DUMMY). Because it has one row it is guaranteed to return exactly one row in SQL statements.

Packages

“A package is a schema object that groups logically related PL/SQL types, items, and subprograms. Packages usually have two parts, a specification and a body, although sometimes the body is unnecessary. The specification (spec for short) is the interface to your applications; it declares the types, variables, constants, exceptions, cursors, and subprograms available for use. The body fully defines cursors and subprograms, and so implements the spec.” - http://download.oracle.com/docs/cd/B10500_01/appdev.920/a96624/09_packs.htm
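
A minimal sketch of a spec and body (the names are invented and the function just wraps a simple lookup against the SCOTT demo schema):

CREATE OR REPLACE PACKAGE emp_pkg AS
   -- The spec: only what is declared here is visible to callers.
   FUNCTION get_name (p_empno IN NUMBER) RETURN VARCHAR2;
END emp_pkg;
/

CREATE OR REPLACE PACKAGE BODY emp_pkg AS
   -- The body: implements everything declared in the spec.
   FUNCTION get_name (p_empno IN NUMBER) RETURN VARCHAR2 IS
      v_ename emp.ename%TYPE;
   BEGIN
      SELECT ename INTO v_ename FROM emp WHERE empno = p_empno;
      RETURN v_ename;
   END get_name;
END emp_pkg;
/

Other code then calls it as emp_pkg.get_name(...).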

See also

http://plsql-tutorial.com/


What is Progressive Enhancement?

Progressive Enhancement is a web design strategy that approaches the same problems tackled by Graceful Degradation but from the opposite direction. Rather than a designer creating a compelling experience for the latest browsers and then making it degrade acceptably for older browsers, a designer ensures that basic functionality is available to all browsers and then offers additional functionality to those with a higher specification.

Wikipedia states that Progressive Enhancement consists of the following core principles:

  • basic content should be accessible to all browsers
  • basic functionality should be accessible to all browsers
  • sparse, semantic mark-up contains all content
  • enhanced layout is provided by externally linked CSS
  • enhanced behaviour is provided by unobtrusive, externally linked JavaScript
  • end user browser preferences are respected

This approach has led to the adoption of related ideas such as Unobtrusive JavaScript (as now supported by ASP.NET MVC 3).

References

http://en.wikipedia.org/wiki/Progressive_enhancement
Thursday, 27 January 2011

Breaking dependencies on specific DI containers

Warning: Before going any further you probably want to have a look at Service locator anti-pattern. The code featured in this post uses Service Locator, but the pattern is regarded by some as an anti-pattern and as such should be avoided.

I’ve been working on a WCF service that uses Unity for dependency resolution. Everything has been working but I’ve been unhappy about a tight dependency on Unity itself that I have introduced in my code. I recalled that there is a service locator library knocking around that defines an interface that IoCs can adopt.

“The Common Service Locator library contains a shared interface for service location which application and framework developers can reference. The library provides an abstraction over IoC containers and service locators. Using the library allows an application to indirectly access the capabilities without relying on hard references. The hope is that using this library, third-party applications and frameworks can begin to leverage IoC/Service Location without tying themselves down to a specific implementation.” - http://commonservicelocator.codeplex.com/

The Common Service Locator library provides a simple interface for service location:

public interface IServiceLocator : IServiceProvider
{
    object GetInstance(Type serviceType);
    object GetInstance(Type serviceType, string key);
    IEnumerable<object> GetAllInstances(Type serviceType);
    TService GetInstance<TService>();
    TService GetInstance<TService>(string key);
    IEnumerable<TService> GetAllInstances<TService>();
}

It turns out that the library is supported by Unity (as well as a bunch of other IoC implementations). The Common Service Locator library site links to an adapter for Unity but peeking around I found the UnityServiceLocator class in the Microsoft.Practices.Unity assembly.

I have now been able to replace all references to IUnityContainer with IServiceLocator (i.e. breaking the tight dependency on Unity). In the case of the WCF service all I needed to do was create a ServiceHostFactory implementation that passes an instance of UnityServiceLocator around rather than an instance of UnityContainer.

public class UnityServiceLocatorServiceHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        var unityContainer = new UnityContainer();
        unityContainer.LoadConfiguration();
        var unityServiceLocator = new UnityServiceLocator(unityContainer);
        return new ServiceLocatorServiceHost(serviceType, unityServiceLocator, baseAddresses);
    }  
}

The UnityServiceLocatorServiceHostFactory is the only class that has a tight dependency on Unity and can be farmed off into a separate Unity assembly. All other classes, including the service host implementation, only need to deal with IServiceLocator:

public class ServiceLocatorServiceHost : ServiceHost
{
    private IServiceLocator _serviceLocator;
        
    public ServiceLocatorServiceHost(IServiceLocator serviceLocator) : base()
    {
        _serviceLocator = serviceLocator;
    }

    public ServiceLocatorServiceHost(Type serviceType, IServiceLocator serviceLocator, params Uri[] baseAddresses)
        : base(serviceType, baseAddresses)
    {
        _serviceLocator = serviceLocator;
    }

    protected override void OnOpening()
    {
        if (Description.Behaviors.Find<ServiceLocatorServiceBehavior>() == null)
        {
            Description.Behaviors.Add(new ServiceLocatorServiceBehavior(_serviceLocator));
        }
        }

        base.OnOpening();
    }
}

The ServiceLocatorServiceBehavior adds a service locator instance provider to the endpoint dispatcher, something like this:

public class ServiceLocatorServiceBehavior : IServiceBehavior
{   
    private readonly IServiceLocator _serviceLocator;
	
    public ServiceLocatorServiceBehavior(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        // Nothing to see here. Move along...
    }

    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
    {
        // Nothing to see here. Move along...
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
        {
            foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
            {
                string contractName = endpointDispatcher.ContractName;
                ServiceEndpoint serviceEndpoint = serviceDescription.Endpoints.FirstOrDefault(e => e.Contract.Name == contractName);
                endpointDispatcher.DispatchRuntime.InstanceProvider = new ServiceLocatorInstanceProvider(_serviceLocator, serviceEndpoint.Contract.ContractType);
            }
        }
    }
}

And finally the ServiceLocatorInstanceProvider uses the service locator to resolve dependencies, something like this:

public class ServiceLocatorInstanceProvider : IInstanceProvider
{
    private readonly IServiceLocator _serviceLocator;
    private readonly Type _contractType;

    public ServiceLocatorInstanceProvider(IServiceLocator serviceLocator, Type contractType)
    {
        this._serviceLocator = serviceLocator;
        this._contractType = contractType;
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        return _serviceLocator.GetInstance(_contractType);
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
        // Nothing to release here; instance lifetime is left to the container.
    }
}
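
To tie it all together when self-hosting, the host can be constructed directly. MyService below is a stand-in for your own service implementation and the address is just an example; when hosting in IIS you would instead point the .svc file’s Factory attribute at UnityServiceLocatorServiceHostFactory.

// A rough self-hosting sketch.
var container = new UnityContainer();
container.LoadConfiguration(); // pick up registrations from config, as in the factory above

var host = new ServiceLocatorServiceHost(
    typeof(MyService),
    new UnityServiceLocator(container),
    new Uri("http://localhost:8080/MyService"));

host.Open();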

Thursday, 20 January 2011

Some notes on ArcGIS and associated technologies

I’m getting started with ArcGIS so I need to keep some notes. NB: This post is just an aide-mémoire, so nothing is covered in any detail.

What is Esri?

Esri is a company providing Geographic Information System (GIS) software and geodatabase management applications. They are based in California and have about 30% of the GIS software market (see http://en.wikipedia.org/wiki/Esri).

What is the APDM?

APDM (ArcGIS Pipeline Data Model) is an open standard for storing geographical data associated with pipelines:

“The ArcGIS Pipeline Data Model is designed for storing information pertaining to features found in gathering and transmission pipelines, particularly gas and liquid systems. The APDM was expressly designed for implementation as an ESRI geodatabase for use with ESRI's ArcGIS and ArcSDE® products. A geodatabase is an object-relational construct for storing and managing geographic data as features within an industry-standard relational database management system (RDBMS).” - http://www.apdm.net/

What is ArcSDE?

“ArcSDE technology is a core component of ArcGIS Server. It manages spatial data in a relational database management system (RDBMS) and enables it to be accessed by ArcGIS clients.” - http://www.esri.com/software/arcgis/arcsde/index.html

“ArcSDE technology serves as the gateway between GIS clients and the RDBMS. It enables you to easily store, access, and manage spatial data within an RDBMS package…

ArcSDE technology is critical when you need to manage long transactions and versioned-based workflows such as

* Support for multiuser editing environments
* Distributed editing
* Federated replicas managed across many RDBMS architectures
* Managing historical archives

The responsibility for defining the specific RDBMS schema used to represent geographic data and for application logic is retained in ArcGIS, which provides the behavior, integrity, and utility of the underlying records.” - http://www.esri.com/software/arcgis/geodatabase/storage-in-an-rdbms.html

What is a geodatabase?

“The geodatabase is the common data storage and management framework for ArcGIS. It combines "geo" (spatial data) with "database" (data repository) to create a central data repository for spatial data storage and management.” - http://www.esri.com/software/arcgis/geodatabase/index.html

Basic terms and concepts

There are four fundamental types upon which geographic representations in a GIS are based:

  • Features (collections of points, lines, and polygons)
    • Representations of things located on or near the surface of the earth.
    • Can be natural (rivers, vegetation, etc.).
    • Can be constructions (roads, pipelines, buildings, etc.).
    • Can be subdivisions of land (counties, political divisions, land parcels, etc.).
    • Most commonly represented as points, lines, and polygons.
  • Attributes (descriptive attributes of features)
    • Managed in tables based on simple relational database concepts.
  • Imagery
    • Imagery is managed as a raster data type composed of cells organized in a grid of rows and columns.
    • In addition to the map projection, the coordinate system for a raster dataset includes its cell size and a reference coordinate (usually the upper left or lower left corner of the grid).
    • These properties enable a raster dataset to be described by a series of cell values starting in the upper left row.
    • Each cell location can be located using the reference coordinate, the cell size, and the number of rows and columns.
  • Continuous surfaces (such as elevation)
    • A surface describes an occurrence that has a value for every point on the earth.
    • Surface elevation is a continuous layer of values for ground elevation above mean sea level.
    • Other surface type examples include rainfall, pollution concentration, and sub-surface representations of geological formations.

See the ArcGIS Desktop Help file for further details.

GIS data structures

Features, rasters, attributes, and surfaces are managed using three primary GIS data structures:

  • Feature classes
  • Attribute tables
  • Raster datasets

The map layer types correspond to GIS datasets as follows:

  • Features (points, lines, and polygons) → Feature classes
  • Attributes → Tables
  • Imagery → Raster datasets
  • Surfaces → Feature classes, raster datasets, and TINs (see below)

Both features and rasters can be used to provide a number of alternative surface representations:

  • Feature classes (such as contours)
  • Raster-based elevation datasets
  • TINs built from XYZ points and 3D line feature classes

In a GIS, datasets hold data about a particular feature collection (for example, roads) that is geographically referenced to the earth's surface. A dataset is a collection of homogeneous features. Most datasets are collections of simple geographic elements.

Users work with geographic data in two fundamental ways:

  • As datasets (homogeneous collections of features, rasters, or attributes)
  • As individual elements (e.g. individual features, rasters, and attribute values) contained within each dataset

Datasets are:

  • The primary inputs and outputs for geoprocessing.
  • The primary means for data sharing.

See also

There’s some good basic information on GIS systems on the Ordnance Survey website: http://www.ordnancesurvey.co.uk/oswebsite/gisfiles/index.html


Tuesday, 18 January 2011

Log on as a batch job in Windows Server 2008

Problem

When creating a scheduled task on Windows Server 2008 I needed to assign a local user to run the task. For this to work the user must be given “Log on as a batch job” privileges.

Solution

1. Administrative Tools > Local Security Policy
2. Security Settings > Local Policies > User Rights Assignment
3. Find and double-click on the “Log on as a batch job” policy.
4. Add User or Group…
5. Add the user and click OK.


Multiple X.509 certificates found

Problem

I was configuring a WCF service to use SSL and had created and installed a self-signed certificate. The WCF service configuration looked something like this:

<serviceBehaviors>
  <behavior name="EnquirySubmissionServiceBehavior">
    <serviceMetadata httpsGetEnabled="true" />
    <serviceDebug includeExceptionDetailInFaults="true" />
    <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="SqlRoleProvider" />
    <serviceCredentials>
      <serviceCertificate findValue="CertificateNameHere" storeLocation="LocalMachine" storeName="My" x509FindType="FindBySubjectName" />
    </serviceCredentials>
  </behavior>
</serviceBehaviors>

When trying to access the service metadata in a browser I received an error stating that multiple X.509 certificates had been found using the given search criteria.

Solution

The solution was to change the configuration to use an alternative method to find the certificate. In this case I used FindByThumbprint and provided the certificate thumbprint. To obtain the thumbprint do the following:

1. Start > Run > mmc
2. File > Add/Remove snap in…
3. Find and add Certificates (local machine).
4. Find the certificate and double-click on it.
5. In the pop-up dialog scroll to Thumbprint and click on it to view the value.
6. Copy the thumbprint value and remove spaces.


I then changed the WCF service configuration to look something like this:

<serviceBehaviors>
  <behavior name="EnquirySubmissionServiceBehavior">
    <serviceMetadata httpsGetEnabled="true" />
    <serviceDebug includeExceptionDetailInFaults="true" />
    <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="SqlRoleProvider" />
    <serviceCredentials>
      <serviceCertificate findValue="46677f6006fb15fe64e5f394d1d99c22f3729155" storeLocation="LocalMachine" storeName="My" x509FindType="FindByThumbprint" />
    </serviceCredentials>
  </behavior>
</serviceBehaviors>
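
As a quick sanity check (a sketch of mine, not part of the original fix), you can confirm that the thumbprint now matches exactly one certificate in the LocalMachine\My store:

using System;
using System.Security.Cryptography.X509Certificates;

class CertificateCheck
{
    static void Main()
    {
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);

        // validOnly is false so that self-signed certificates are included in the search.
        X509Certificate2Collection matches = store.Certificates.Find(
            X509FindType.FindByThumbprint,
            "46677f6006fb15fe64e5f394d1d99c22f3729155",
            false);

        Console.WriteLine("Certificates found: " + matches.Count);
        store.Close();
    }
}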