Single Instance Windows Service

I recently had a requirement to prevent users from running more than one instance of a particular set of applications. I couldn’t find any immediate way of controlling this with Windows (either locally or via Group Policy).

Someone on the team pointed out the following handy utility which limits the number of application instances. We gave it a whirl and it did the job – you can specify a list of processes and a polling time and away you go.

However, this app runs in the System Tray, so the user can easily turn it off and/or edit the config. As a system administrator that did not give me enough control, since the need was to guarantee (as far as possible) that only one instance would run.

So I decided to write a small Windows Service in C#. This way it could run in the background, benefit from the service auto-restart options and, if so desired, have the process list hard-coded to prevent tampering.

It really is quite straightforward; the main body of the code looks like this:

foreach (var processToLimit in processesToLimit)
{
    var processes = Process.GetProcessesByName(Path.GetFileNameWithoutExtension(processToLimit));
    if (processes.Length > 1)
    {
        Logger.Warn($"Found >1 instance of {processToLimit}, attempting to kill, start time {processes[1].StartTime}");

        // This attempts to ensure that the most recent instance is killed, not the original one.
        processes.OrderByDescending(p => p.StartTime).First().Kill();
    }
}

The Process.GetProcessesByName API (remembering to strip the file extension) returns an array of matches; some simple LINQ then finds the most recently started one and kills it:

processes.OrderByDescending(p => p.StartTime).First().Kill();
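The same selection logic, sketched in Python for illustration (the service itself is C#; the pick_kill_candidate helper and the sample data are hypothetical):

```python
from datetime import datetime

def pick_kill_candidate(processes):
    """Given (pid, start_time) pairs for same-named processes, return
    the pid of the most recently started instance -- the one to kill,
    leaving the original running."""
    if len(processes) <= 1:
        return None  # zero or one instance: nothing to kill
    return max(processes, key=lambda p: p[1])[0]

procs = [
    (101, datetime(2024, 1, 1, 9, 0)),  # original instance
    (202, datetime(2024, 1, 1, 9, 5)),  # second, newer instance
]
print(pick_kill_candidate(procs))  # prints 202
```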

View and download the code from the SingleInstance repository on my GitHub page.

Configuring SQL Server Kerberos for Double-Hop Authentication

The Requirement

We have one database stored on SQL Server (A), which has some synonyms to tables in SQL Server (B).  We want our .NET 4.5 application (running under IIS) to invoke some queries to move data from tables in SQL Server (A) to SQL Server (B), using the synonyms (so the web application doesn’t need to know about SQL Server (B)).


  • Windows Server 2012 R2

  • SQL Server 2012, services running as a domain service account

  • IIS Application Pool Identity running as a domain service account

  • SQL Server (A) has a linked server to SQL Server (B)

  • Both SQL Servers running named instances

The Problem

When IIS talks to SQL Server (A), it does so using its domain service account (as that is the account running the AppPool).  That account has been granted sufficient privileges over the database on SQL Server (A), so it can happily perform operations on it.

When the application wishes to work on tables residing in a database on SQL Server (B), via the synonyms in SQL Server (A), SQL Server (A) must run commands on SQL Server (B) on the application's behalf.  Because that is a different server, by default it has no user identity to pass with the request.  This results in the following exception being thrown from SQL Server (B):


SQL Server (A) has not been granted delegation rights to submit commands to SQL Server (B) using the IIS AppPool’s Identity.

The Solution (Overview)

The solution is to use Kerberos authentication throughout the flow.  When using Windows Authentication in the connection between the IIS application and SQL Server, as indicated by a connection string entry similar to the following:

Data Source=.;initial catalog=MyDb;integrated security=SSPI;

IIS will first attempt Kerberos authentication (if it can), otherwise it will fall back to NTLM authentication (this is seamless to the client, but it can be seen if you run a network trace).

If Kerberos authentication succeeds between the IIS application and SQL Server (A), then provided SQL Server (A) has been given delegation rights over the IIS AppPool Identity account, it can make a subsequent request to SQL Server (B) (when it needs to) using the IIS AppPool Identity account, rather than NT AUTHORITY\ANONYMOUS LOGON.

There are six steps in getting this to work:

  1. Configure Service Principal Names (SPNs) for the appropriate services / accounts for SQL and IIS

  2. Check that IIS is authenticating to SQL Server (A) using Kerberos

  3. Grant SQL Server (A) delegation rights for the IIS AppPool Identity account

  4. Grant that account permissions on the SQL Server (B) database as appropriate

  5. Enable DTC Options

  6. Tweak remote queries

The Solution (Steps)

Step 1 – Configure SPNs


  • The domain account that the SQL Server services are running under needs an SPN for the MSSQL service (and several variations of each one)

  • The domain account that the IIS AppPool is running under needs a SPN for each IIS website that will be connecting to the SQL Server


For the default SQL instance:

setspn -a domain\sqlsvc-account MSSQLSvc/<FQDN>
setspn -a domain\sqlsvc-account MSSQLSvc/<FQDN>:1433
setspn -a domain\sqlsvc-account MSSQLSvc/host
setspn -a domain\sqlsvc-account MSSQLSvc/host:1433

For a named SQL instance:

setspn -a domain\sqlsvc-account MSSQLSvc/host:instanceName
setspn -a domain\sqlsvc-account MSSQLSvc/host:<TCPPORT>
setspn -a domain\sqlsvc-account MSSQLSvc/<FQDN>:instanceName
setspn -a domain\sqlsvc-account MSSQLSvc/<FQDN>:<TCPPORT>

For the IIS AppPool account:

setspn -a domain\apppool-account http/mywebsitehost
setspn -a domain\apppool-account http/<website FQDN>

The SQL SPNs will be created automatically if (and only if) the account the service runs under has permission to create SPNs (which it attempts to do on start-up).  In most scenarios this will not be the case, so you can add them manually as above.

Make sure you check for duplicate SPNs (any duplicates will stop Kerberos Authentication from working):

setspn -x

See the following MSDN article for more details:


By default SQL Server NAMED INSTANCES allocate a TCP port dynamically, so creating the SPN by hand is tricky.  There are two options:

  1. Set a static port (recommended when using clusters)

  2. Grant the SQL service account permissions to create the SPNs itself when the service starts up

The latter option requires an edit via AdsiEdit.msc as follows:

  • Expand the domain you are interested in

  • Locate the OU where the service account resides

  • Right click on the CN=<service account name> and click Properties

  • Click on the Security tab and click Advanced

  • In Advanced Security Settings dialog box select SELF under Permission Entries

  • Click Edit, select the Properties tab

  • Scroll down and tick Allow against:

    • Read servicePrincipalName

    • Write servicePrincipalName

See the following KB article for more information:

Step 2 – Check Kerberos between IIS and SQL Server (A)

Restart IIS and access a page which causes some database traffic to hit SQL Server (A).  You can run the following query on SQL Server (A) to check the authentication method being used by the current active connections:

select session_id,net_transport,client_net_address,auth_scheme from sys.dm_exec_connections

You should see something like this:

Check the auth_scheme column to see what is being used.  This will tell you if you need to recheck the SPNs.  If you’re still having trouble, fire up a network monitor (e.g. Wireshark) on the IIS server and filter for Kerberos traffic.

Step 3 – Grant Delegation Rights

Once SQL Server (A) has been presented with the Kerberos ticket from IIS, it still won’t be able to use those credentials to contact SQL Server (B) until it is explicitly allowed.  There are two approaches to this: one is to allow the SQL service account to delegate credentials to any service; the more secure way is to use constrained delegation whereby we specify exactly which services this account can delegate credentials to.

Open Active Directory Users & Computers, right click on the SQL service account and choose Properties.  After adding the SPNs (step 1) a new tab will appear called Delegation.

  • Select Trust this user for delegation to specified services only

  • Use Kerberos only

  • Click Add, enter the SQL service account name and select both sets of SPNs added

  • Click OK

Step 4 – Grant SQL Permissions

Don’t forget to do this – the account used by the IIS Application Pool needs to be given suitable permissions on SQL Server (B).

Step 5 – Enable DTC Options

On both SQL Servers the Distributed Transaction Coordinator needs configuring to allow remote connections.  Open the DTC properties:

  • Control Panel

  • Administrative Tools

  • Component Services

  • Computers > My Computer > Distributed Transaction Coordinator

  • Right click on Local DTC and select Properties

  • Select the Security tab

  • Enable Network DTC Access, Allow Remote Clients, Allow Remote Administration, Allow Inbound, Allow Outbound, No Authentication Required

  • Click OK

  • This causes the DTC Service to be restarted

See this article for more details:

Step 6 – Tweak the Stored Procedures / Remote Queries

After getting Kerberos authentication fully working I hit another issue to do with SQL spawning nested transactions on the linked tables.  The exception thrown was:

Unable to start a nested transaction for OLE DB provider "SQLNCLI11" for linked server "SERVERXXX". A nested transaction was required because the XACT_ABORT option was set to OFF.

It turns out there’s some more SQL voodoo needed, namely the following statement at the start of each stored procedure we were running (switching on the XACT_ABORT option named in the error):

SET XACT_ABORT ON;

That did the trick.  See this SO post for more details:

Firewall Requirements

Both SQL Servers need an inbound allow rule for the Distributed Transaction Coordinator.  This can be done by enabling the predefined rule in Windows Firewall for Distributed Transaction Coordinator (TCP-In):

This is usually disabled by default.


Enable Kerberos Logging:

Solving PPTP VPN Error 619 when behind a TMG 2010 firewall

I was recently configuring a test environment which had a Microsoft Threat Management Gateway (TMG) 2010 firewall between the private network and the Internet.  From a test Windows 7 client I was trying to establish an outbound PPTP VPN – but I kept getting Error 619 “A connection to the remote computer could not be established”.


I knew the VPN connection was OK, as when I ran it from the other side of the TMG firewall it connected straight away.

After digging around a bit I discovered that although I had set up a rule in TMG to allow PPTP requests through (from the Internal network to the External network in my case), there was another setting necessary to enable this to work (which was not obvious).


I found that disabling the PPTP Filter on the PPTP protocol in TMG 2010 resolved this problem.  To change this setting, do the following:

1. Open Forefront TMG Management console


2. On the right hand side, click on the Toolbox tab, click on Protocols, expand VPN and IPSec, right click on PPTP and click Properties



3. Click on the Parameters tab and uncheck the PPTP Filter option in the Application Filters section:



4. Click OK and apply the change in TMG, then re-test the VPN; it should now work.

I haven’t got to the bottom of the ‘why’ behind this change but I hope it saves someone else the hours it took me to solve this problem.


Solving ‘An exception occurred in publishing: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)’ with Visual Studio 2012 RC

So I’m using Visual Studio 2012 RC and loving web deploy as a simple way to publish my projects to different environments.  However, a problem cropped up today after installing some web tooling updates:

‘An exception occurred in publishing: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)’

I couldn’t even open the publish settings dialog to see if anything was wrong in there.  I tried restarting Visual Studio, then running it as Admin, no change.  I eventually found the following steps on a forum which resolved the problem for me:

  1. Run Command Prompt as Administrator
  2. regsvr32 actxprxy.dll
  3. Restart Visual Studio

And hey presto, project publishing was working again!

Hopefully this will be of use to someone else out there banging their head against a brick wall trying to figure this one out.

Configuring multiple public DHCP IP addresses on a Linksys WRT54G with OpenWrt

I hit a problem the other day whilst trying to map a bunch of public IP addresses (provided by Virgin Business) to various services within the network.  Essentially I’m running a VMWare ESXi server with several web servers on, and I want to use the public IP addresses to expose these servers to the Internet through the business broadband connection.

Rather than splash out on some expensive networking kit, I decided to have a go at hacking with OpenWrt (an open-source mini-Linux router firmware which can run on a number of low-end network devices).  I decided to plump for a new Linksys WRT54G, as these are renowned for their support of firmware upgrades like OpenWrt (also dd-wrt and Tomato, to name a few).

The Challenge

When you buy public IP addresses from your ISP, most will hand you a static block which is assigned specifically to your account.  However, some ISPs (Virgin Business in my case) assign your public IPs by DHCP (I initially bought 5).  So every time you connect a different device to the cable modem, their DHCP server hands out a different public IP address.  Whilst this all seems very nice, it makes life more difficult when configuring a router to listen on all of those addresses.  DHCP hands out a public IP and stores the hardware MAC address of the requesting device in its lease table.  The problem with trying to grab more than one address from a single router is that it only has one physical network port (for WAN), and therefore only one hardware MAC address.  The trick here is to create multiple virtual interfaces in the router, each with their own (made up) MAC address, so they can each make a DHCP request to the ISP.
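To make the idea concrete, here is a toy Python model (purely illustrative; the DhcpServer class and the TEST-NET-2 addresses are made up) showing that a DHCP server keyed on MAC hands out one lease per MAC, so extra virtual MACs mean extra public IPs:

```python
from itertools import count

class DhcpServer:
    """Minimal model of an ISP DHCP server: one lease per client MAC."""
    def __init__(self, pool_prefix="198.51.100."):
        self.pool_prefix = pool_prefix
        self.leases = {}       # MAC -> public IP
        self._next = count(2)  # next free host number in the pool

    def request(self, mac):
        # A repeat request from the same MAC renews the same lease;
        # a new (virtual) MAC gets a fresh address.
        if mac not in self.leases:
            self.leases[mac] = self.pool_prefix + str(next(self._next))
        return self.leases[mac]

isp = DhcpServer()
router_wan = "58:55:ca:23:32:e9"
print(isp.request(router_wan))   # one physical port, one lease
print(isp.request(router_wan))   # same MAC -> same lease again
virtual_macs = ["58:4f:4a:df:40:01", "58:4f:4a:df:40:02"]
ips = [isp.request(m) for m in virtual_macs]
print(len(isp.leases))           # three MACs, three public IPs
```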

The Hardware

It turns out that there are a plethora of models of Linksys WRT54G, some with different hardware and supporting different firmware features.  The way to check is to turn the router upside down:


(N.B. I have highlighted the area to look at with the red box.)

In my case I had a WRT54GL v1.1, which I bought from eBuyer for £45.

The Firmware

I originally looked at tomato, but the project seems to be languishing and gathering Internet dust.  I settled on OpenWRT because I found the kmod-macvlan package which allows you to create virtual MAC interfaces on the router, which is exactly what I needed.  I followed the installation instructions here (scroll down to the Installing OpenWRT section).

Now, this is important: you must use the brcm47xx targeted build of OpenWRT to get access to the kmod-macvlan package (this took me a while to figure out).  The one I used came from here:

After flashing your device (either via SSH if enabled, or via the web GUI), I suggest you enable the SSH daemon; life gets much easier that way.


Here’s an overview of my network setup:


OpenWRT comes with a package manager called opkg which is incredibly useful for installing/managing additional packages.

opkg update
opkg install ip
opkg install kmod-macvlan

This will get the latest list of packages from OpenWRT and install the ip and kmod-macvlan packages, which we need to configure the virtual MAC interfaces.

Next I modified /etc/rc.local to create the virtual MAC interfaces:

# set up virtual mac addresses as aliases on the main WAN i/f eth0.1

ip link add link eth0.1 eth2 type macvlan
ifconfig eth2 hw ether 58:55:ca:23:32:e9

ip link add link eth0.1 eth3 type macvlan
ifconfig eth3 hw ether 5d:a4:02:04:24:0d

ip link add link eth0.1 eth4 type macvlan
ifconfig eth4 hw ether 8c:89:a5:57:80:e7

ip link add link eth0.1 eth5 type macvlan
ifconfig eth5 hw ether 58:4f:4a:df:40:03

ifup -a

# default route
route add default gw <gateway-ip> dev eth0.1

exit 0

This script configures 4 additional virtual interfaces on top of the main WAN interface (eth0.1), each with its own unique MAC address (you can generate a random MAC address using the instructions here).  I’ve added a default route to Virgin’s router to make life easier when it comes to configuring the firewall (you can find out what yours is by running ifconfig before making any of these changes).
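For instance, a random MAC can be generated with a few lines of Python (a sketch; the random_mac helper is hypothetical, and any generator that sets the locally-administered bit and clears the multicast bit will do):

```python
import random

def random_mac():
    """Generate a random unicast, locally-administered MAC address.
    The first octet has the locally-administered bit (0x02) set and
    the multicast bit (0x01) clear, so the address will never clash
    with a vendor-assigned one."""
    first = (random.randint(0, 255) & 0b11111100) | 0b00000010
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{octet:02x}" for octet in [first] + rest)

print(random_mac())  # e.g. 06:3f:a1:9c:00:7b
```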

For each new WAN interface I added a section to OpenWRT’s network config (in /etc/config/network):

config 'interface' 'wan2'
    option 'ifname' 'eth2'
    option 'proto' 'dhcp'
    option 'defaultroute' '0'
    option 'peerdns' '0'
    option 'gateway' '<wan-gateway-ip>'

This maps the WAN2 interface onto the eth2 hardware device, and specifies that it should obtain an address using DHCP.  The route and gateway entries are to force all outgoing requests through the main WAN interface (eth0.1).

After saving these changes you’ll need to reboot your router (use reboot -f).  You can then check the status with ifconfig: look at each interface and check they all have public IP addresses.


The next step was to configure some Network Address Translation (NAT) rules in the firewall to forward traffic coming in on certain public IPs to the relevant hosts on my internal network.  This was achieved relatively easily by adding the following sections to OpenWRT’s firewall config (in /etc/config/firewall):

config zone
    option name             wan1
    option network          'wan1'
    option input            REJECT
    option output           ACCEPT
    option forward          REJECT
    option masq             1
    option mtu_fix          1

# forwards from 1st WAN i/f to SP2010 Web01
config redirect
    option src              wan1
    option src_dport        3389
    option proto            tcp
    option dest_ip          '<internal-host-ip>'

The first block configures a firewall zone named wan1 which maps to the wan1 network, with some default rules (e.g. reject all input by default, accept all output by default).  The second block forwards TCP traffic on port 3389 (Remote Desktop Protocol) from wan1 to a local IP on the internal network; this happens to be a SharePoint 2010 web server sitting on the ESX host.

Tidying Up

I had one other problem which was that my Linksys box was sitting on the 192.168.0.x network but needed an additional interface to talk to the Virtual Machines on the ESX server.  This was simply achieved by adding an alias section to the OpenWRT network config (in /etc/config/network):

config 'alias'
    option 'interface' 'lan'
    option 'proto' 'static'
    option 'ipaddr' '<alias-ip>'
    option 'netmask' '<alias-netmask>'

After another reboot of the router everything was looking good.

Next Steps

The next thing I want to get working is OpenVPN server running on the Linksys, so I can support remote VPNs into the local network.  Naturally there’s a package for this, but it looks like it needs a bit of configuration, as always.

Happy hacking!

Claims Proxy – A C# Library for Calling Claims Protected Web Services

The ClaimsProxy library enables you to get a WIF cookie collection for a SharePoint site which is protected by Claims Based Authentication. It assumes that ADFS is configured as the Trusted Identity Token Issuer and that the down-stream identity provider is based on the StarterSTS / IdentityServer project.

In some recent work I needed to call some SharePoint 2010 web services from a client outside of the SharePoint farm.  Using traditional network credentials and Windows Authentication this was a straightforward matter, for example:

ICredentials credentials = new NetworkCredential("username", "password", "domain");
SharepointUserGroupsWCF.UserGroupSoapClient client = new SharepointUserGroupsWCF.UserGroupSoapClient();
client.ClientCredentials.Windows.ClientCredential = (NetworkCredential)credentials;
client.ClientCredentials.Windows.AllowedImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Impersonation;
client.Endpoint.Address = new EndpointAddress(webPath + "/_vti_bin/usergroup.asmx");

XElement groupXml = client.GetGroupCollectionFromSite();

However, when the target service is protected by Claims Based Authentication, it’s not so straightforward to call such services.  In my scenario I had a SharePoint 2010 site in a web application configured with Claims Based Authentication.  I had configured SharePoint to direct authentication to ADFS (Microsoft Active Directory Federation Services), and I had a custom claims provider configured based on Dominick Baier’s Identity Server open source STS product (formerly StarterSTS); however, my approach could easily be adapted to work with any number of final claims providers.

Even if you just want the code, read this bit first.

I’ve implemented the code as a .NET 4.0 assembly, although it should be relatively easy to get it to compile under .NET 3.5.  The overall approach works as follows:

  • Use username + password to request a symmetric token from our custom STS (you could get a Windows token here or whatever you are using) – set the realm to that configured for ADFS
  • Use that token to request a bearer token from ADFS – set the realm to the SharePoint site realm as configured in ADFS
  • Use the token from ADFS to authenticate with SharePoint
  • From the resulting authentication response, extract the WIF (Windows Identity Foundation) cookie for authentication (commonly named FedAuth)
  • Return this cookie to the client

I have included a test application in the solution which demonstrates using the returned cookie to make a call to a web service in the SharePoint site.  I’ve left the responsibility of caching the cookie to the client (but included this in the test app).

IMPORTANT NOTE: There is one modification required to the library which I haven’t done yet.  If you read the finer details of WIF you will know that if the SAML token data is too large for a single cookie, then WIF spreads the data over multiple cookies named FedAuth1, FedAuth2 … FedAuthN.  The library assumes that there is only one FedAuth cookie at present – which will be sufficient for most applications – but it’s worth keeping an eye on if you’re running into trouble.  I will get around to sorting this at some point, but feel free to make a pull request on my Github repository if you get there first.
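As a sketch of what that modification might look like (a hypothetical reassemble_fedauth helper, not part of the library): WIF writes the token across FedAuth, FedAuth1 … FedAuthN, and the original value is simply the chunks concatenated in order.

```python
import re

def reassemble_fedauth(cookies):
    """Join a WIF chunked cookie set (FedAuth, FedAuth1 ... FedAuthN)
    back into the single token value, concatenating in chunk order."""
    def chunk_index(name):
        match = re.fullmatch(r"FedAuth(\d*)", name)
        # Bare "FedAuth" is chunk 0; "FedAuthN" is chunk N.
        return int(match.group(1) or 0) if match else None

    chunks = {chunk_index(n): v for n, v in cookies.items()
              if chunk_index(n) is not None}
    return "".join(chunks[i] for i in sorted(chunks))

cookies = {"FedAuth": "AAAA", "FedAuth2": "CCCC",
           "FedAuth1": "BBBB", "OtherCookie": "ignored"}
print(reassemble_fedauth(cookies))  # prints AAAABBBBCCCC
```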

Download the ClaimsProxy library and sample application from Github

ClaimsProxy is straightforward to use:

// configure our SPServiceRequestor.
var requestor = new SPServiceRequestor
{
    DobstsEndpoint = "",
    DobstsUsername = "",
    DobstsPassword = "password",
    DobstsAdfsRealm = "",
    AdfsEndpoint = "",
    SharepointRealm = "",
    SharepointSiteUrl = "",
    IgnoreSslValidation = true,
    DebugMode = true,
    DebugEventCallback = (data) =>
    {
        // your debug function here
    }
};

string spCookieRaw = requestor.GetCookie();

N.B.: DobSts is our own implementation of the StarterSTS project.

Check out the sample application for a full example of how to use the library.

I’d welcome any comments / feedback on this – hopefully it will be of some use to others out there, it certainly has been to me.

Setting up Google Apps Single Sign On (SSO) with ADFS 2.0 and a custom STS such as IdentityServer

I recently had to undertake some work to enable users to seamlessly authenticate to Google Apps using an identity stored in a custom Secure Token Service, such as the excellent IdentityServer open source STS by Dominick Baier.  The work involved is mostly configuration in Google Apps and ADFS, but there are quite a number of steps, and as it was non-trivial I thought I’d document it here for reference.  Note that Google Apps uses SAML 2.0 tokens; because ADFS 2.0 can issue SAML 2.0 tokens and is brokering the authentication, you shouldn’t have any problems with compatibility.

Here’s a quick architecture diagram:



Green arrows = user request flow

Blue arrows = service response flow


For those of you impatient, here’s a quick overview of the steps required:

  1. Enable SSO in Google Apps
  2. Enter correct ADFS urls into Google Apps
  3. Upload ADFS Token Signing Certificate so Google Apps can verify the SAML tokens
  4. Add Google Apps as a Relying Party in ADFS
  5. Test

I will now walk through each stage, for those who want the details.

Enable SSO in Google Apps

The first stage is to enable Single Sign-on in Google Apps.  Log in to your administration console for your domain.  Click on Advanced Tools and in the Authentication section click on Set up single sign-on (SSO):


This will take you through to a configuration screen.  Make sure the checkbox next to Enable Single Sign-on is ticked, and then enter the following values:

Sign-in page URL:

Sign-out page URL:

Change password URL:

Verification certificate: Upload the ADFS Token Signing cert (.cer file) which you can obtain from the AD FS 2.0 Management Console (under Service > Certificates).  Remember to click Upload.

Check the box next to “Use a domain specific issuer”.

Enter some network addresses into the Network masks box if you wish.


At this point Single sign-on is configured and enabled.  Note that this will take immediate effect on your access to Google Apps services so beware!  However it does not affect your login to the admin console – that is always accessed via a manual login, so you can get in and turn it off again.

Configure ADFS

Open up the AD FS 2.0 Management Console and navigate to the Relying Parties section.  Click Add Relying Party Trust and follow these steps:

Choose Enter data about the relying party manually


Provide a name for the trust (not important, only so you can easily identify it)


Choose AD FS 2.0 profile


Tick Enable support for the SAML 2.0 WebSSO protocol and enter<your-domain>/acs into the Relying party SAML 2.0 SSO service URL


Enter<your-domain> as the relying party identifier


Complete the wizard.

Then click on the newly added item and click Properties.  Click on the Signature tab and click add:


Here we add the Token Signing Certificate – it must be the same one that we uploaded in the Google admin console, and this should be the ADFS Token Signing Certificate.

Once you’ve done that click OK to close the Properties dialog.

Now click Edit Claim Rules and click Add Rule:


Select Transform an Incoming Claim from the Claim rule template drop-down:


Give the rule a name, select E-Mail Address as the Incoming Claim Type, set the Outgoing claim type to Name ID and the Outgoing name ID format to Email:


Finish the wizard.


I’ve assumed here that you’ve already got your custom STS configured as a Claims Provider in ADFS.  To test the end-to-end service, visit<your-domain>.  You should get redirected to ADFS.  Choose your STS and then enter your credentials.  You should then be redirected back to Google Apps and arrive at your mailbox, logged in.

If you hit problems, check these items:

– You’ve got the correct certificate uploaded to Google Apps and configured in ADFS

– The time on the ADFS server and custom STS servers is correct

– Google Apps SSO configuration is correct

– If all else fails, try Googling!

How to run StarterSTS on IIS 6 / Windows 2003

I’ve been using the awesome StarterSTS project created by Dominick Baier.  In the words of Dominick:

StarterSTS is a compact, easy to use security token service that is completely based on the ASP.NET provider infrastructure. It is built using the Windows Identity Foundation and supports WS-Federation, WS-Trust, REST, OpenID and Information Cards.

The StarterSTS System Requirements specify IIS 7.x, which implies a Windows 2008 variant operating system.  I recently had a requirement to get it running on a Windows 2003 server (with IIS 6), and it isn’t all that straightforward, so I thought I’d post the steps (and as much reasoning as possible) here.


You’ll need the following things to hand to get going:


Follow the StarterSTS instructions to extract the web package and create the site in IIS.  Make sure you install the SSL certificate on the IIS web site, and follow the subsequent instructions to set up the relevant StarterSTS configuration files.

If you haven’t installed the WIF package you’ll get an error similar to this:

Could not load file or assembly Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35 or one of its dependencies. The system cannot find the file specified.

Run the WIF install package and that error will go away.

You should then be able to bring up the StarterSTS homepage:


Go ahead and create yourself a user account by visiting /signup.aspx (remember to create the database as in the StarterSTS instructions):


Token Issuance

This is where the configuration gets a bit more involved.

Note: The auto-generation of federation meta-data doesn’t work on IIS 6 (normally you would hit /FederationMetadata/2007-06/FederationMetadata.xml).

Try visiting the following URL:


(Remember to replace the first part with the address of your server.  The value of wtrealm depends on what you’ve put in the configuration/relyingParties.config file in StarterSTS.  In this example I’m just using the sample entry that comes with the source download.)

The first error you’ll see is:

The handle is invalid. System.Security.Cryptography.CryptographicException: The handle is invalid.

This error comes about because the identity of the IIS Process executing the StarterSTS code cannot access the private key of your signing certificate (as specified in configuration/certificates.config).  In Windows 2008, this is easily fixed using the Certificates MMC Snap-In, but not so in Windows 2003 (the option isn’t available).  This is where the WinHTTPCertCfg tool comes in handy.  It allows us to check and set the user identities with access to certificates installed on the machine [read X509 Certificates on Windows Server 2003 for more info].

If you followed the default install settings, then the tool will have installed to C:\Program Files\Windows Resource Kits\Tools.  Open a Command Prompt, switch to this directory and run the following command to see which user identities have access to the certificate:

winhttpcertcfg.exe -l -c LOCAL_MACHINE\My -s "sts.gws.lab.local"


Note: Replace the “sts.gws.lab.local” with the Common Name on your certificate.

Now you need to check which user is running IIS.  This is done by looking at the properties on the Application Pool assigned to the StarterSTS virtual directory:


My server is running with the out-of-the-box account Network Service.  Now that we know this, we can go back to the Command Prompt and grant this account permissions over the private key:

winhttpcertcfg.exe -g -c LOCAL_MACHINE\My -s "sts.gws.lab.local" -a "NetworkService"


Restart IIS and go back to the issue.aspx URL and you’ll likely get this error now:

Object identifier (OID) is unknown. System.Security.Cryptography.CryptographicException: Object identifier (OID) is unknown.


This comes about due to the need to configure RSA-SHA256 on Windows 2003.  For more details on this see this MSDN Forum Post – CryptographicException – Object identifier (OID) is unknown.  Follow the steps in the forum answer:

using Security.Cryptography;

class Program
{
    static void Main(string[] args)
    {
        // Register the RSA-SHA256 signature algorithm here,
        // as described in the forum answer.
    }
}
  • Build and run the console application (alongside the DLL) on the server

Alternatively, save yourself the hassle and download the ZIP that I made earlier.

After that, everything should fly, and StarterSTS should issue a token and redirect you back to the relying party specified in the wtrealm parameter.  I grabbed a couple of screenshots using Fiddler to show it working:


And that’s it!  I hope this has been helpful, it was very rewarding when I got this working and I have repeated it several times on different boxes.

Moving away from and 10 must-have WordPress plugins


Last night, with a couple of hours to spare, I decided to get on and move my blog away from to my own hosting.  Why do this?

  • Access to my own Google Analytics data
  • Use of any plugin I like
  • Associating my content with my domain name
  • Greater control over the site, for example to add Google Authorship links

I’ve been running the blog on WordPress.com since 6th December 2007, but the time has come to move to pastures new.   The move process was very simple:

  1. Set up WordPress on my Linux server (5 minutes)
  2. Export existing blog content using WordPress XML export
  3. Import XML into new blog
  4. Pay $12 / year to 301 all pages / posts from the old blog to the new domain

This way I won’t lose any of my Google ranking weight, and users won’t get the dreaded 404 on that key post they were looking for.

To get my new blog up to scratch I did some research and loaded up the following 10 awesome WordPress plugins:
  1. Akismet – the de facto comment and trackback spam protection
  2. Digg Digg – add as many social media buttons as you can shake a stick at
  3. Google Analytics for WordPress – all in the name
  4. Head Space2 – manage SEO meta-data to a fine grained level
  5. Multi Social Favicon – auto-generate the blog favicon using a social media account
  6. Smart Video – simple YouTube, Vimeo video embeds
  7. SyntaxHighlighter Evolved – awesome syntax highlighting for code in posts
  8. Tweet old post – keep old posts alive by tweeting them every now and then
  9. W3 Total Cache – speed up your blog with caching
  10. WP-DBManager – database management, backups – essential
WordPress is an awesome blogging platform; it moves out of the way so you can focus on writing content.  And with the large plugin and theme offerings out there, there’s no excuse not to share your knowledge with the wider community.
Get on and write something!

Clearing up the confusion over session timeouts in PHP and Zend Framework

I’ve recently made a foray into the world of Zend Framework.  If you’ve not come across it, it is one of several popular PHP frameworks that implements the Model View Controller software architecture.

By day I’m mostly devoted to hacking ASP.NET MVC, but having had some experience with PHP I decided to get my teeth stuck into Zend for a new PHP project I’ve been working on.  Somewhere in my head I have another blog post to write about the dire state that is documentation for Zend, but that’s a topic for another night.

This post details my journey through figuring out how PHP sessions really work, and how to make them properly bring down a user’s session after a period of inactivity, in the context of Zend Framework (although most of the post applies to PHP proper).

Get your server’s clock synchronised

The PHP session files are date-stamped when they are created/updated.  The PHP Session cookie that gets sent to the browser has an expiry date-stamp on it.  While you can’t rely on the client’s system clock to be correct, making sure your server’s is makes debugging less of a headache.

On a Windows server this should already be set up – if not check the Date & Time Properties in Control Panel and pay attention to the Internet Time tab.  On a Linux server it’s a case of installing and configuring the ntp package – a bit of Googling should show you the way with whichever distribution you are using.

The options for session timeouts

There are a number of options as to how you can handle session timeouts, as I see it:

  1. Keep the session alive until the user closes their browser
  2. End the session after a fixed period of time (regardless of activity)
  3. End the session after a fixed period of time with no activity
  4. Some combination of the above

I’d put my money on the best user experience being a combination of point 1) and point 3).  This, rather surprisingly, isn’t as straightforward as you would hope.

PHP Session Settings – What they mean


There are a handful of PHP settings relating to sessions, and understanding how sessions work and what these settings do is critical to getting the behaviour you desire.

PHP sessions are stored as files (by default) – the files contain a serialised version of the $_SESSION superglobal array.  The path to these files is set by session.save_path.

N.B. Security Warning – make sure no other users can get to the session directory.  Files are stored unencrypted and therefore should be treated with the same caution as password files.

An interesting side note: if you inspect the cookie PHPSESSID (the default name for the PHP Session Cookie), you’ll notice a funny looking value.  Now take a look at the contents of the PHP session directory – hey presto, there’s a file called sess_<SESSIONID>. Simple eh.
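
You can see this mapping for yourself with a short sketch (paths are illustrative – /tmp stands in for whatever session.save_path is on your box):

```php
<?php
// Point PHP at a known session directory and start a session.
session_save_path('/tmp');          // stand-in for your session.save_path
session_start();

$_SESSION['user'] = 'alice';
session_write_close();              // serialise $_SESSION out to disk

// The session id (the PHPSESSID cookie value) doubles as the file suffix.
$file = session_save_path() . '/sess_' . session_id();
echo $file, "\n";
```

Open that file and you’ll see the serialised array contents in plain text – which is exactly why the directory needs locking down.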

Each time PHP session_start() is called (which is not just when you first start a session, but every time you want to load session data into your app, i.e. on every request for most apps), PHP works out whether or not it should run the session garbage collector.

Garbage – what garbage?

The session garbage collector runs through all of the session files, comparing each file’s timestamp to the current time less the session.gc_maxlifetime value, to work out whether a file should be deleted (because it has exceeded its lifetime).

You might think that the garbage collector would run on every session_start(), to avoid any expired session files being left hanging around.  However, there is a performance overhead in doing this, so PHP uses two settings, session.gc_probability and session.gc_divisor, to calculate the probability that it should run the garbage collector (it divides the two and then rolls its own dice to see whether it should run).
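
A rough sketch of that dice roll (PHP’s internal RNG differs, and the function name here is mine, but the probability works out the same):

```php
<?php
// Hypothetical model of PHP's GC decision: on session_start() the
// collector fires with probability gc_probability / gc_divisor.
function should_run_gc(int $probability, int $divisor): bool
{
    // mt_rand(1, $divisor) lands at or below $probability roughly
    // $probability times out of every $divisor draws.
    return mt_rand(1, $divisor) <= $probability;
}

// With the defaults of 1 and 100, roughly 1 request in 100 triggers GC.
$runs = 0;
for ($i = 0; $i < 10000; $i++) {
    if (should_run_gc(1, 100)) {
        $runs++;
    }
}
echo $runs, "\n"; // somewhere in the region of 100
```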

N.B. Remember that each time you access the session, the timestamp on the session file gets updated; this stops the garbage collector from killing a user’s session while they are still active.

This all sounds very cosy and like it will do just what I want – dump the user’s session after a period of inactivity (as specified by session.gc_maxlifetime).  And on low volume sites you could force the garbage collector to run every time by making the probability 100%.
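
Forcing the collector to run every time is just a matter of the ini settings – set them before session_start(), e.g. in php.ini, .htaccess or at the top of your bootstrap:

```php
<?php
// 1/1 = 100% chance: the garbage collector runs on every session_start().
ini_set('session.gc_probability', '1');
ini_set('session.gc_divisor', '1');

// Sessions idle for more than an hour become eligible for collection.
ini_set('session.gc_maxlifetime', '3600');

session_start();
```

Only sensible on low-volume sites – on anything busy, a full sweep of the session directory on every request becomes a measurable cost.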

However, all is not quite so straightforward!

The order of things

I’m sure there’s a good reason for this, but I’m not aware of what it is.  PHP first loads up the session, then works out if it should run the garbage collector.  So even if the garbage collector runs every time and the session file has expired, on the first request the user will appear to still have a session, and only on the next request will it be gone.

Imagine this scenario.  A user has logged in to your site.  They go away for lunch, and during that time their session has expired.  They come back and hit refresh – the screen reloads with their session apparently intact.  All is looking good.  Then they click somewhere else and bam – their session is gone.  Kinda confusing really.

In an ideal world there would be a way to check for dead sessions before the session fires up, and in fact there is, but you have to roll it yourself.

Did I mention Zend Framework?

So far everything I have covered is PHP proper, no mention of Zend.  However the Zend_Session object is based on all of these settings, and therefore it warrants understanding the underlying PHP behaviour first.

The solution to this problem is to roll your own inactivity timeout check.  I needed this to implement inactivity logouts.  I used the following code to do my tracking:

$idleTimeout = 3600; // timeout after 1 hour

if (isset($_SESSION['timeout_idle']) && $_SESSION['timeout_idle'] < time()) {
    Zend_Session::destroy();
    Zend_Session::regenerateId();
    header('Location: /account/signin');
    exit;
}

$_SESSION['timeout_idle'] = time() + $idleTimeout;

N.B. If you’re not using Zend Framework you can exchange the Zend_Session lines for their standard PHP equivalents.

All this does is check if we have an idle timeout set in our session (remember this gets loaded even if the session has technically expired) and if it has fallen past the current time we tear down the session, regenerate the session ID (to avoid picking up the existing session file) and head off to the log in screen.

Specifically to Zend, you can set the following settings in the application.ini file:

resources.session.gc_probability = 1
resources.session.gc_divisor = 1
resources.session.gc_maxlifetime = 3600
resources.session.idle_timeout = 3600

And then place the custom idle time out code inside this function in Bootstrap.php:

protected function _initSession()
{
    # set up the session as per the config.
    $options = $this->getOptions();
    $sessionOptions = array(
        'gc_probability' => $options['resources']['session']['gc_probability'],
        'gc_divisor'     => $options['resources']['session']['gc_divisor'],
        'gc_maxlifetime' => $options['resources']['session']['gc_maxlifetime']
    );
    Zend_Session::setOptions($sessionOptions);
    Zend_Session::start();

    $idleTimeout = $options['resources']['session']['idle_timeout'];

    # now check for idle timeout.
    if (isset($_SESSION['timeout_idle']) && $_SESSION['timeout_idle'] < time()) {
        Zend_Session::destroy();
        Zend_Session::regenerateId();
        header('Location: /account/signin');
        exit;
    }

    $_SESSION['timeout_idle'] = time() + $idleTimeout;
}

And that’s all there is to it!

If you know the reasoning behind some of these apparently strange PHP session operations then please post them here – I’m left scratching my head in a daze at how complex a matter it has been to implement such a standard bit of user experience.

Technology, Literature and the Rest