Category Archives: Technology

This all-encompassing category covers hardware, software, and anything fun in between.

Configuring SQL Server Kerberos for Double-Hop Authentication

The Requirement

We have one database stored on SQL Server (A), which has some synonyms to tables in SQL Server (B).  We want our .NET 4.5 application (running under IIS) to invoke some queries to move data from tables in SQL Server (A) to SQL Server (B), using the synonyms (so the web application doesn’t need to know about SQL Server (B)).
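
A synonym here is simply a local name on SQL Server (A) that points, via the linked server, at a table in a database on SQL Server (B). As a rough sketch (the linked server, database and table names below are made up for illustration):

CREATE SYNONYM dbo.RemoteOrders
FOR [SQLSERVERB].[SalesDb].[dbo].[Orders];

-- the application can then write to SQL Server (B) without knowing it exists
INSERT INTO dbo.RemoteOrders (OrderId, CustomerId)
SELECT OrderId, CustomerId FROM dbo.StagingOrders;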

Environment

  • Windows Server 2012 R2

  • SQL Server 2012, services running as a domain service account

  • IIS Application Pool Identity running as a domain service account

  • SQL Server (A) has a linked server to SQL Server (B)

  • Both SQL Servers running named instances

The Problem

When IIS talks to SQL Server (A), it does so using its domain service account (as that is the account running the AppPool).  That account has been granted sufficient privileges over the database on SQL Server (A) such that it can happily perform operations on it.

When the application wishes to work on tables residing in a database on SQL Server (B), through the synonyms in SQL Server (A), it is talking to a different server: SQL Server (A) has no user identity to pass along when it tries to run the commands on SQL Server (B).  This results in the following exception being thrown from SQL Server (B):

Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

SQL Server (A) has not been granted delegation rights to submit commands to SQL Server (B) using the IIS AppPool’s Identity.

The Solution (Overview)

The solution is to use Kerberos authentication throughout the flow.  When using Windows Authentication in the connection between the IIS application and SQL Server, as indicated by a connection string entry similar to the following:

Data Source=.;initial catalog=MyDb;integrated security=SSPI;

IIS will first attempt Kerberos authentication (if it can), otherwise it will fall back to NTLM authentication (this is seamless to the client, but it can be seen if you run a network trace).

If Kerberos authentication succeeds between the IIS application and SQL Server (A), then provided SQL Server (A) has been given delegation rights over the IIS AppPool Identity account, it can make a subsequent request to SQL Server (B) (when it needs to) using the IIS AppPool Identity account, rather than NT AUTHORITY\ANONYMOUS LOGON.
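
A related setting worth checking (it is not one of the numbered steps below, but the double hop depends on it): the linked server definition on SQL Server (A) should pass the caller's own security context through to SQL Server (B), otherwise the delegated identity never gets used.  A minimal sketch, assuming the linked server is called SQLSERVERB:

-- run on SQL Server (A): map logins to themselves so the caller's
-- (delegated) Windows identity is presented to SQL Server (B)
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'SQLSERVERB',
    @useself    = 'TRUE',
    @locallogin = NULL;

This is the T-SQL equivalent of choosing "Be made using the login's current security context" on the linked server's Security page in Management Studio.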

There are six steps in getting this to work:

  1. Configure Service Principal Names (SPNs) for the appropriate services / accounts for SQL and IIS

  2. Check that IIS is authenticating to SQL Server (A) using Kerberos

  3. Grant SQL Server (A) delegation rights for the IIS AppPool Identity account

  4. Grant that account permissions on the SQL Server (B) database as appropriate

  5. Enable DTC Options

  6. Tweak remote queries

The Solution (Steps)

Step 1 – Configure SPNs

Overview:

  • The domain account that the SQL Server services are running under needs an SPN for the MSSQL service (and several variations of it)

  • The domain account that the IIS AppPool is running under needs an SPN for each IIS website that will be connecting to the SQL Server

SQL

setspn -a domain\sqlsvc-account MSSQLSvc/host.domain.com:1433
setspn -a domain\sqlsvc-account MSSQLSvc/host.domain.com
setspn -a domain\sqlsvc-account MSSQLSvc/host
setspn -a domain\sqlsvc-account MSSQLSvc/host:1433

OR

setspn -a domain\sqlsvc-account MSSQLSvc/host:instanceName
setspn -a domain\sqlsvc-account MSSQLSvc/host:<TCPPORT>
setspn -a domain\sqlsvc-account MSSQLSvc/host.domain.com:instanceName
setspn -a domain\sqlsvc-account MSSQLSvc/host.domain.com:<TCPPORT>

IIS

setspn -a domain\apppool-account http/mywebsitehost
setspn -a domain\apppool-account http/mywebsitefqdn.com

The SQL SPNs will be created automatically if (and only if) the account SQL Server is running under has permission to create them (which it attempts to do on start-up).  In most scenarios this will not be the case, so you can add them manually as shown above.

Make sure you check for duplicate SPNs (any duplicates will stop Kerberos Authentication from working):

setspn -x

See the following MSDN article for more details:

https://msdn.microsoft.com/en-us/library/ms191153(v=sql.110).aspx

IF USING SQL SERVER NAMED INSTANCES…

By default SQL Server NAMED INSTANCES allocate a TCP port dynamically, so creating the SPN by hand is tricky.  There are two options:

  1. Set a static port (recommended when using clusters)

  2. Grant the SQL service account permissions to create the SPNs itself when the service starts up

The latter option requires an edit via AdsiEdit.msc as follows:

  • Expand the domain you are interested in

  • Locate the OU where the service account resides

  • Right click on the CN=<service account name> and click Properties

  • Click on the Security tab and click Advanced

  • In the Advanced Security Settings dialog box, select SELF under Permission Entries

  • Click Edit, select the Properties tab

  • Scroll down and tick Allow against:

    • Read servicePrincipalName

    • Write servicePrincipalName

See the following KB article for more information: http://support.microsoft.com/kb/319723

Step 2 – Check Kerberos between IIS and SQL Server (A)

Restart IIS and access a page which causes some database traffic to hit SQL Server (A).  You can run the following query on SQL Server (A) to check the authentication method being used by the current active connections:

SELECT session_id, net_transport, client_net_address, auth_scheme
FROM sys.dm_exec_connections;

Check the auth_scheme column of the results to see what is being used (SQL, NTLM or KERBEROS) – this will tell you whether you need to recheck the SPNs.  If you’re still having trouble, fire up a network monitor (e.g. Wireshark) on the IIS server and filter for Kerberos traffic.
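
If there are lots of connections, it can also help to see which login each one belongs to.  Here is a slightly expanded version of the query above (the join is standard; the column choice is just a suggestion):

SELECT s.login_name, c.auth_scheme, c.net_transport, c.client_net_address
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s ON s.session_id = c.session_id
WHERE s.is_user_process = 1;

The rows for the IIS AppPool identity should show KERBEROS in the auth_scheme column before you move on to the next step.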

Step 3 – Grant Delegation Rights

Once SQL Server (A) has been presented with the Kerberos ticket from IIS, it still won’t be able to use those credentials to contact SQL Server (B) until it is explicitly allowed.  There are two approaches to this: one is to allow the SQL service account to delegate credentials to any service; the more secure way is to use constrained delegation whereby we specify exactly which services this account can delegate credentials to.

Open Active Directory Users & Computers, right click on the SQL service account and choose Properties.  After adding the SPNs (step 1) a new tab will appear called Delegation.

  • Select Trust this user for delegation to specified services only

  • Use Kerberos only

  • Click Add, enter the SQL service account name and select both sets of SPNs added

  • Click OK

Step 4 – Grant SQL Permissions

Don’t forget to do this – the account used by the IIS Application Pool needs to be given suitable permissions on SQL Server (B).
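
As a minimal sketch of what that might look like (run against SQL Server (B); the account, database and role names are placeholders, so grant only what your application actually needs):

CREATE LOGIN [DOMAIN\apppool-account] FROM WINDOWS;
GO
USE SalesDb;
GO
CREATE USER [DOMAIN\apppool-account] FOR LOGIN [DOMAIN\apppool-account];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\apppool-account];
ALTER ROLE db_datawriter ADD MEMBER [DOMAIN\apppool-account];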

Step 5 – Enable DTC Options

On both SQL Servers the Distributed Transaction Coordinator needs configuring to allow remote connections.  Open the DTC properties:

  • Control Panel

  • Administrative Tools

  • Component Services

  • Computers > My Computer > Distributed Transaction Coordinator

  • Right click on Local DTC and select Properties

  • Select the Security tab

  • Enable Network DTC Access, Allow Remote Clients, Allow Remote Administration, Allow Inbound, Allow Outbound, No Authentication Required

  • Click OK

  • This causes the DTC Service to be restarted

See this article for more details:

http://www.sqlvillage.com/Articles/Distributed%20Transaction%20Issue%20for%20Linked%20Server%20in%20SQL%20Server%202008.asp

Step 6 – Tweak the Stored Procedures / Remote Queries

After getting Kerberos authentication fully working I hit another issue to do with SQL spawning nested transactions on the linked tables.  The exception thrown was:

Unable to start a nested transaction for OLE DB provider "SQLNCLI11" for linked server "SERVERXXX". A nested transaction was required because the XACT_ABORT option was set to OFF.

It turns out there’s some more SQL voodoo needed, namely the following statement at the start of each stored procedure we were running:

SET XACT_ABORT ON

That did the trick.  See this SO post for more details:

http://stackoverflow.com/questions/6036357/making-an-entity-framework-model-span-multiple-databases
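
For illustration, each procedure that moves data through the synonyms ended up looking roughly like this (the procedure and table names are invented; the important line is the first SET statement):

CREATE PROCEDURE dbo.CopyOrdersToRemote
AS
BEGIN
    -- without this, the insert through the synonym fails with the
    -- "Unable to start a nested transaction" error shown above
    SET XACT_ABORT ON;
    SET NOCOUNT ON;

    BEGIN TRANSACTION;

    -- dbo.RemoteOrders is a synonym for a table on SQL Server (B)
    INSERT INTO dbo.RemoteOrders (OrderId, CustomerId)
    SELECT OrderId, CustomerId
    FROM dbo.StagingOrders;

    COMMIT TRANSACTION;
END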

Firewall Requirements

Both SQL Servers need an inbound allow rule so that the Distributed Transaction Coordinator can communicate.  This can be done by enabling the predefined Windows Firewall rule Distributed Transaction Coordinator (TCP-In), which is usually disabled by default.

Troubleshooting

Enable Kerberos Logging: http://support.microsoft.com/kb/262177


Solving PPTP VPN Error 619 when behind a TMG 2010 firewall

I was recently configuring a test environment which had a Microsoft Threat Management Gateway (TMG) 2010 firewall between the private network and the Internet.  From a test Windows 7 client I was trying to establish an outbound PPTP VPN – but I kept getting Error 619 “A connection to the remote computer could not be established”.


I knew the VPN connection was OK, as when I ran it from the other side of the TMG firewall it connected straight away.

After digging around a bit I discovered that although I had set up a rule in TMG to allow PPTP requests through (from the Internal network to the External network in my case), there was another setting necessary to enable this to work (which was not obvious).


I found that disabling the PPTP Filter on the PPTP protocol in TMG 2010 resolved this problem.  To change this setting, do the following:

1. Open Forefront TMG Management console


2. On the right hand side, click on the Toolbox tab, click on Protocols, expand VPN and IPSec, right click on PPTP and click Properties


3. Click on the Parameters tab and uncheck the PPTP Filter option in the Application Filters section:


4. Click OK and apply the change in TMG, then re-test the VPN – it should work now.

I haven’t got to the bottom of the ‘why’ behind this change but I hope it saves someone else the hours it took me to solve this problem.

 

Solving ‘An exception occurred in publishing: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE))’ with Visual Studio 2012 RC

So I’m using Visual Studio 2012 RC and loving web deploy as a simple way to publish my projects to different environments.  However, a problem cropped up today after installing some web tooling updates:

‘An exception occurred in publishing: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE))’

I couldn’t even open the publish settings dialog to see if anything was wrong in there.  I tried restarting Visual Studio, then running it as Admin – no change.  I eventually found the following steps on a forum which resolved the problem for me:

  1. Run Command Prompt as Administrator
  2. regsvr32 actxprxy.dll
  3. Restart Visual Studio

And hey presto, project publishing was working again!

Hopefully this will be of use to someone else out there banging their head against a brick wall trying to figure this one out.

Configuring multiple public DHCP IP addresses on a Linksys WRT54G with OpenWrt

I hit a problem the other day whilst trying to map a bunch of public IP addresses (provided by Virgin Business) to various services within the network.  Essentially I’m running a VMWare ESXi server with several web servers on, and I want to use the public IP addresses to expose these servers to the Internet through the business broadband connection.

Rather than splash out on some expensive networking kit, I decided to have a go at hacking with OpenWrt (an open-source mini-Linux router firmware which can run on a number of low-end network devices).  I decided to plump for a new Linksys WRT54G, as these are renowned for their support of firmware replacements like OpenWrt (also dd-wrt and tomato, to name a few).

The Challenge

When you buy public IP addresses from your ISP, most will hand you a static block which is assigned specifically to your account.  However, some ISPs (Virgin Business in my case) assign your public IPs by DHCP (I initially bought 5).  So every time you connect a different device to the cable modem their DHCP server hands out a different public IP address.  Whilst this seems all very nice, it makes life more difficult when configuring a router to listen on all of those addresses.  DHCP hands out a public IP and stores the hardware MAC address of the device requesting an address in its lease table.  The problem with trying to grab more than one address from a single router is that it only has one physical network port (for WAN), and therefore only one hardware MAC address.  The trick here is to create multiple virtual interfaces in the router, each with their own (made up) MAC address, so they can each make a DHCP request to the ISP.

The Hardware

It turns out that there are a plethora of models of Linksys WRT54G, some with different hardware and supporting different firmware features.  The way to check is to turn the router upside down:

The model and version number are printed on the label there.

In my case I had a WRT54GL v1.1, which I bought from eBuyer for £45.

The Firmware

I originally looked at tomato, but the project seems to be languishing and gathering Internet dust.  I settled on OpenWRT because I found the kmod-macvlan package which allows you to create virtual MAC interfaces on the router, which is exactly what I needed.  I followed the installation instructions here http://wiki.openwrt.org/toh/linksys/wrt54g (scroll down to the Installing OpenWRT section).

Now, this is important: you must use the brcm47xx target build of OpenWRT to get access to the kmod-macvlan package (this took me a while to figure out).  The one I used came from here:

http://downloads.openwrt.org/backfire/10.03.1/brcm47xx/

After flashing your device (either via SSH if enabled, or via the web GUI), I suggest you enable the SSH daemon; life gets much easier that way.

Network

My network setup, in brief: the Virgin cable modem connects to the WRT54G’s WAN port, and the internal network (including the ESXi host running the web servers) sits behind it.

OpenWRT comes with a package manager called opkg which is incredibly useful for installing/managing additional packages.

opkg update
opkg install ip
opkg install kmod-macvlan

This will get the latest list of packages from OpenWRT and install the ip and kmod-macvlan packages, which we need to configure the virtual MAC interfaces.

Next I modified /etc/rc.local to create the virtual MAC interfaces:

# set up virtual mac addresses as aliases on the main WAN i/f eth0.1

ip link add link eth0.1 eth2 type macvlan
ifconfig eth2 hw ether 58:55:ca:23:32:e9

ip link add link eth0.1 eth3 type macvlan
ifconfig eth3 hw ether 5d:a4:02:04:24:0d

ip link add link eth0.1 eth4 type macvlan
ifconfig eth4 hw ether 8c:89:a5:57:80:e7

ip link add link eth0.1 eth5 type macvlan
ifconfig eth5 hw ether 58:4f:4a:df:40:03

ifup -a

# default route
route add default gw 82.7.16.1 dev eth0.1

exit 0

This script configures 4 additional virtual interfaces on top of the main WAN interface (eth0.1), each with its own unique MAC address (you can generate a random MAC address using the instructions here).  I’ve added a default route to Virgin’s router to make life easier when it comes to configuring the firewall (you can find out what yours is by running ifconfig before making any of these changes).

For each new WAN interface I added a section to OpenWRT’s network config (in /etc/config/network):

config 'interface' 'wan2'
    option 'ifname' 'eth2'
    option 'proto' 'dhcp'
    option 'defaultroute' '0'
    option 'peerdns' '0'
    option 'gateway' '0.0.0.0'

This maps the wan2 interface onto the eth2 virtual device, and specifies that it should obtain an address using DHCP.  The defaultroute and gateway entries stop this interface from installing its own default route, so all outgoing requests go through the main WAN interface (eth0.1).

After saving these changes you’ll need to reboot your router (use reboot -f).  You can then check the status with ifconfig – look at each interface and check they all have public IP addresses.

Firewall

The next step was to configure some Network Address Translation (NAT) rules in the firewall to forward traffic coming in on certain public IPs to the relevant hosts on my internal network.  This was achieved relatively easily by adding the following sections to OpenWRT’s firewall config (in /etc/config/firewall):

config zone
    option name             wan1
    option network          'wan1'
    option input            REJECT
    option output           ACCEPT
    option forward          REJECT
    option masq             1
    option mtu_fix          1

# forwards from 1st WAN i/f to SP2010 Web01
config redirect
    option src              wan1
    option src_dport        3389
    option proto            tcp
    option dest_ip          192.168.180.94

The first block configures a firewall zone named wan1 which maps to the wan1 network, with some default rules (e.g. reject all input by default, accept all output by default).  The second block forwards tcp traffic on port 3389 (remote desktop protocol) from wan1 to a local IP of 192.168.180.94.  This happens to be a SharePoint 2010 web server sitting on the ESX host.

Tidying Up

I had one other problem which was that my Linksys box was sitting on the 192.168.0.x network but needed an additional interface to talk to the Virtual Machines on the ESX server.  This was simply achieved by adding an alias section to the OpenWRT network config (in /etc/config/network):

config 'alias'
    option 'interface' 'lan'
    option 'proto' 'static'
    option 'ipaddr' '192.168.180.1'
    option 'netmask' '255.255.255.0'

After another reboot of the router everything was looking good.

Next Steps

The next thing I want to get working is OpenVPN server running on the Linksys, so I can support remote VPNs into the local network.  Naturally there’s a package for this, but it looks like it needs a bit of configuration, as always.

Happy hacking!

Setting up Google Apps Single Sign On (SSO) with ADFS 2.0 and a custom STS such as IdentityServer

I recently had to undertake some work to enable users to seamlessly authenticate to Google Apps using an identity stored in a custom Security Token Service (STS) such as the excellent open source IdentityServer by Dominick Baier.  The work involved is mostly configuration in Google Apps and ADFS, but there are quite a number of steps, and as it was non-trivial I thought I’d document it here for reference.  Note that Google Apps uses SAML 2.0 tokens; because ADFS 2.0 can issue SAML 2.0 tokens and is brokering the authentication, you shouldn’t have any compatibility problems.

Here’s a quick overview of the architecture: the user’s browser hits Google Apps, which redirects to ADFS 2.0 for sign-in; ADFS hands authentication off to the custom STS (IdentityServer), and the resulting SAML response flows back through ADFS to Google Apps.

Overview

For those of you who are impatient, here’s a quick overview of the steps required:

  1. Enable SSO in Google Apps
  2. Enter correct ADFS urls into Google Apps
  3. Upload ADFS Token Signing Certificate so Google Apps can verify the SAML tokens
  4. Add Google Apps as a Relying Party in ADFS
  5. Test

I will now walk through each stage in detail, for those who like the details.

Enable SSO in Google Apps

The first stage is to enable Single Sign-on in Google Apps.  Log in to your administration console at http://www.google.com/a/<your-domain>/.  Click on Advanced Tools and, in the Authentication section, click on Set up single sign-on (SSO).


This will take you through to a configuration screen.  Make sure the checkbox next to Enable Single Sign-on is ticked, and then enter the following values:

Sign-in page URL: https://adfs.yourdomain.com/adfs/ls/

Sign-out page URL: https://adfs.yourdomain.com/adfs/ls/

Change password URL: https://sts.yourdomain.com/startersts/users/password.aspx

Verification certificate: Upload the ADFS Token Signing cert (.cer file) which you can obtain from the AD FS 2.0 Management Console (under Service > Certificates).  Remember to click Upload.

Check the box next to “Use a domain specific issuer”.

Enter some network addresses into the Network masks box if you wish.


At this point Single sign-on is configured and enabled.  Note that this will take immediate effect on your access to Google Apps services so beware!  However it does not affect your login to the admin console – that is always accessed via a manual login, so you can get in and turn it off again.

Configure ADFS

Open up the AD FS 2.0 Management Console and navigate to the Relying Parties section.  Click Add Relying Party Trust and follow these steps:

Choose Enter data about the relying party manually


Provide a name for the trust (not important, only so you can easily identify it)


Choose AD FS 2.0 profile


Tick Enable support for the SAML 2.0 WebSSO protocol and enter https://www.google.com/a/<your-domain>/acs into the Relying party SAML 2.0 SSO service URL


Enter google.com/a/<your-domain> as the relying party identifier


Complete the wizard.

Then click on the newly added item and click Properties.  Click on the Signature tab and click Add.


Here we add the Token Signing Certificate – it must be the same ADFS Token Signing Certificate that we uploaded in the Google admin console.

Once you’ve done that click OK to close the Properties dialog.

Now click Edit Claim Rules and click Add Rule:


Select Transform an Incoming Claim from the Claim rule template drop-down:


Give the rule a name, select E-Mail Address as the Incoming Claim Type, set the Outgoing claim type to Name ID and the Outgoing name ID format to Email:


Finish the wizard.

Test

I’ve assumed here that you’ve already got your custom STS configured as a Claims Provider in ADFS.  To test the end-to-end service, visit http://mail.google.com/a/<your-domain>.  You should get redirected to ADFS.  Choose your STS and then enter your credentials.  You should then be redirected back to Google Apps and arrive at your mailbox, logged in.

If you hit problems, check these items:

– You’ve got the correct certificate uploaded to Google Apps and configured in ADFS

– The time on the ADFS server and custom STS servers is correct

– Google Apps SSO configuration is correct

– If all else fails, try Googling!

Clearing up the confusion over session timeouts in PHP and Zend Framework

I’ve recently made a foray into the world of Zend Framework.  If you’ve not come across it, it is one of several popular PHP frameworks that implements the Model View Controller software architecture.

By day I’m mostly devoted to hacking ASP.NET MVC, but having had some experience with PHP I decided to get my teeth stuck into Zend for a new PHP project I’ve been working on.  Somewhere in my head I have another blog post to write about the dire state of Zend’s documentation, but that’s a topic for another night.

This post details my journey through figuring out how PHP sessions really work and how to make them properly end a user’s session after a period of inactivity, in the context of Zend Framework (although most of the post applies to PHP proper).

Get your server’s clock synchronised

The PHP session files are date-stamped when they are created/updated.  The PHP Session cookie that gets sent to the browser has an expiry date-stamp on it.  While you can’t rely on the client’s system clock to be correct, making sure your server’s is makes debugging less of a headache.

On a Windows server this should already be set up – if not check the Date & Time Properties in Control Panel and pay attention to the Internet Time tab.  On a Linux server it’s a case of installing and configuring the ntp package – a bit of Googling should show you the way with whichever distribution you are using.

The options for session timeouts

There are a number of options as to how you can handle session timeouts, as I see it:

  1. Keep the session alive until the user closes their browser
  2. End the session after a fixed period of time (regardless of activity)
  3. End the session after a fixed period of time with no activity
  4. Some combination of the above

I’d put my money on the best user experience being a combination of point 1) and point 3).  This, rather surprisingly, isn’t as straightforward as you would hope.

PHP Session Settings – What they mean


There are a handful of PHP settings relating to sessions, and understanding how sessions work and what these settings do is critical to getting the behaviour you desire.

PHP sessions are stored as files (by default) – the files contain a serialised version of the $_SESSION superglobal array.  The path to these files is set by session.save_path.

N.B. Security Warning – make sure no other users can get to the session directory.  Files are stored unencrypted and therefore should be treated with the same caution as password files.

An interesting side note: if you inspect the cookie PHPSESSID (the default name for the PHP Session Cookie), you’ll notice a funny looking value.  Now take a look at the contents of the PHP session directory – hey presto, there’s a file called sess_<SESSIONID>. Simple eh.

Each time PHP session_start() is called (which is not just when you first start a session, but every time you want to load session data into your app, i.e. on every request for most apps), PHP works out whether or not it should run the session garbage collector.

Garbage – what garbage?

The session garbage collector checks the timestamp on each session file and compares it to the current time less the session.gc_maxlifetime value, to work out whether the file should be deleted (because it has exceeded its lifetime).

You might think that the garbage collector would run on every session_start(), to avoid any expired session files being left hanging around.  However there is a performance overhead with this, so PHP uses two settings, session.gc_probability and session.gc_divisor, to calculate the probability that it should run the garbage collector (it divides the two and then rolls its own dice to see whether it should run).

N.B. Remember that each time you access the session the timestamp on the session file gets updated, this stops the garbage collector from killing a user’s session while they are still active.

This all sounds very cosy and like it will do just what I want – dump the user’s session after a period of inactivity (as specified by session.gc_maxlifetime).  And on low volume sites you could force the garbage collector to run every time by making the probability 100%.

However, all is not quite so straightforward!

The order of things

I’m sure there’s a good reason for this, but I’m not aware of what it is.  PHP first loads up the session, then works out if it should run the garbage collector.  So even if the garbage collector runs every time and the session file has expired, on the first request the user will appear to still have a session, and then only on the next request they won’t.

Imagine this scenario.  A user has logged in to your site.  They go away for lunch, and during that time their session has expired.  They come back and hit refresh – the screen reloads with their session apparently intact.  All is looking good.  Then they click somewhere else and bam – their session is gone.  Kinda confusing really.

In an ideal world there would be a way to check for dead sessions before the session fires up, and in fact there is, but you have to roll it yourself.

Did I mention Zend Framework?

So far everything I have covered is PHP proper, no mention of Zend.  However the Zend_Session object is based on all of these settings, and therefore it warrants understanding the underlying PHP behaviour first.

The solution to this problem is to roll your own inactivity timeout check.  I needed this to implement inactivity logouts.  I used the following code to do my tracking:

$idleTimeout = 3600; // timeout after 1 hour

if (isset($_SESSION['timeout_idle']) && $_SESSION['timeout_idle'] < time()) {
    Zend_Session::destroy();
    Zend_Session::regenerateId();
    header('Location: /account/signin');
    exit();
}

$_SESSION['timeout_idle'] = time() + $idleTimeout;

N.B. If you’re not using Zend Framework you can exchange the Zend_Session lines for their standard PHP equivalents.

All this does is check if we have an idle timeout set in our session (remember this gets loaded even if the session has technically expired) and if it has fallen past the current time we tear down the session, regenerate the session ID (to avoid picking up the existing session file) and head off to the log in screen.

Specifically to Zend, you can set the following settings in the application.ini file:

resources.session.gc_probability = 1
resources.session.gc_divisor = 1
resources.session.gc_maxlifetime = 3600
resources.session.idle_timeout = 3600

And then place the custom idle time out code inside this function in Bootstrap.php:

protected function _initSession()
{
    # set up the session as per the config.
    $options = $this->getOptions();
    $sessionOptions = array(
        'gc_probability' => $options['resources']['session']['gc_probability'],
        'gc_divisor'     => $options['resources']['session']['gc_divisor'],
        'gc_maxlifetime' => $options['resources']['session']['gc_maxlifetime']
    );

    $idleTimeout = $options['resources']['session']['idle_timeout'];

    Zend_Session::setOptions($sessionOptions);
    Zend_Session::start();

    # now check for idle timeout.
    if (isset($_SESSION['timeout_idle']) && $_SESSION['timeout_idle'] < time()) {
        Zend_Session::destroy();
        Zend_Session::regenerateId();
        header('Location: /account/signin');
        exit();
    }

    $_SESSION['timeout_idle'] = time() + $idleTimeout;
}

And that’s all there is to it!

If you know the reasoning behind some of these apparently strange PHP session operations then please post them here – I’m left scratching my head in a daze at how complex a matter it has been to implement such a standard bit of user experience.

Using O2 ZTE MF100 Mobile Broadband on Mac OS X Lion

A few days ago I took the leap and upgraded my Macbook Air to OS X Lion.  After a seamless (and typically Apple) upgrade process, I was enjoying the benefits of an even more refined operating system.

However, one of the first things I did was test out my mobile broadband – and there the problem began.

I have an O2 mobile broadband dongle, the ZTE MF100 USB stick. When I originally installed it a small application called O2 Mobile Connect installed and worked a treat for connecting to the O2 service.

After upgrading to OS X Lion the application crashed as soon as it opened – I assume because of a change in an API somewhere.  After hunting around the O2 site (and Google) I could find no update on getting the Mobile Connect application to work on Lion, or even where to download it.  It would appear that O2 have ditched supplying the ZTE dongles in favour of Huawei branded sticks.

Anyway, I was damned if I was going to lose my mobile broadband (and I certainly didn’t want to uninstall Lion), so here’s how I eventually got the stick working:

  1. Open System Preferences and go to the Network pane
  2. Click on the plus sign to Create a new service
  3. Choose ZTEUSBModem as the interface
  4. Name the service O2 Mobile Broadband (or something recognisable)
  5. Set Telephone Number to *99#
  6. Set Account name to o2web
  7. Set Password to password

Note that this is for Pay Monthly Broadband – I think the details are different for Pay & Go customers.

Key step: set DNS server to 193.113.200.201

And hey presto, I now have mobile broadband working again.

Hope this helps anyone out there trying to get it working.