Monday, 23 January 2012

Worst 10 Trusts for Management of Access Targets - Disproportionately FTs



It is an open secret that targets based on clock-stops promote inappropriate behaviour.  They motivate Trusts to focus on ensuring that most of the people starting definitive treatment are within 18 weeks, rather than on treating patients in referral order (after allowing for clinical urgency).  It is a subtle but important distinction, and Trusts can get stuck in the trap of managing clock-stops - not their total patient base.

This post names and shames the worst of these Trusts.  And it finds that Foundation Trusts are actually poorer at governing access targets.  So much for the ability of FTs to improve performance and governance.

This inappropriate management does not necessarily happen by strategy.  It can happen through the natural dynamics of actions within a Trust, without anyone actively intending it.  For example, how many Trusts have breach lists - patients who will breach the target if not treated imminently?  And how many service managers run around trying to get these patients onto the next available operating list?  The effect is not to treat patients in order of referral, but to focus on those patients who are about to breach 18 weeks - and so to treat people within 18 weeks disproportionately.  The toy simulation below illustrates the dynamic.
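To see how a breach list distorts treatment order, here is a toy simulation in Python.  Every number in it is invented (arrivals, capacity, horizon - it is not based on any Trust's data), and demand is deliberately set above capacity so a backlog builds under both schedulers; the only difference between the two runs is who gets picked from the queue each week.

```python
# Toy model: patients arrive each week; capacity treats a fixed number.
# Scheduler A treats in strict referral order (longest wait first).
# Scheduler B runs a "breach list": it treats the longest waiters still
# under 18 weeks first, and parks anyone already over 18 weeks, because
# treating them would drag down the month's clock-stop percentage.

WEEKS, ARRIVALS, CAPACITY, TARGET = 200, 25, 20, 18  # invented numbers

def simulate(breach_list: bool):
    """Return (waits of treated patients, waits of those still queued)."""
    queue, treated = [], []
    for _ in range(WEEKS):
        queue = [w + 1 for w in queue] + [0] * ARRIVALS  # age, then add new
        if breach_list:
            # Under-18s first, longest wait first; 18+ waiters go to the back.
            queue.sort(key=lambda w: (w >= TARGET, -w if w < TARGET else w))
        else:
            queue.sort(reverse=True)  # pure referral order
        treated += queue[:CAPACITY]
        queue = queue[CAPACITY:]
    return treated, queue

for name, flag in [("referral order", False), ("breach list", True)]:
    treated, queue = simulate(flag)
    recent = treated[-CAPACITY * 4:]  # clock-stops in the final "month"
    pct = lambda ws: 100 * sum(w < TARGET for w in ws) / len(ws)
    print(f"{name:>14}: clock-stops <18w {pct(recent):3.0f}% | "
          f"longest wait still queued {max(queue)}w")
```

Run it and the breach-list scheduler reports a near-perfect monthly clock-stop percentage while quietly growing a tail of very long waiters; the referral-order scheduler reports dreadful clock-stop figures on exactly the same demand and capacity, but keeps the longest wait far shorter.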

It also happens because of the culture within the system to achieve 18 weeks.  To take this out of context, it is a bit like the treatment of Iraqi prisoners.  The Western forces had no active strategy to degrade prisoners at Abu Ghraib; but the overall culture and philosophy of those forces was to demonise and dehumanise the opposition.  In addition, there were subliminal signals from leaders that international humanitarian law was over-zealous.  That slowly results in (to coin a phrase) "institutional torture".  Similarly, the culture, targets and performance management within Trusts are all focused on 18 weeks.  Senior management do not focus on how long the longest waiters (those still waiting) have waited, but on the 18-week target - which directs attention to those being treated in the month.  The result is "institutional neglect" of long-waiters.

This culture also results in many concrete actions that intensify the problem.  In any management meeting, more time is spent on those specialities and services whose performance is at the threshold - those that have marginally breached, or where a marginal change would lead them to breach.  Central resources, such as analytics, IT, transformation (what is that, by the way?), strategy and - critically - investment, get disproportionately devoted to these areas.  The end result is that those areas with much greater difficulties (e.g., orthopaedics) get ghettoised into the "too difficult" box, and resources are instead spent negotiating different profiles with commissioners for them.  The overall result is that patients get treated out of referral order (without clinical justification).

So if that is the problem, how do you identify the worst offenders?  This can be done by looking at the discrepancy between how long the people being treated have waited and how long those not yet treated have waited.  In general, if you are managing by referral order (setting clinical urgency aside), people being treated should have waited longer than those still waiting for treatment.  So if we look at the referral-to-treatment statistics published by the DoH, the percentage of patients treated within 18 weeks should be far lower than the percentage of untreated people within 18 weeks.  A Trust exactly meeting the existing "clock-stop" targets would, on average, treat at least 92.5% of all patients within 18 weeks - the average of the 90% admitted and 95% non-admitted targets, assuming equal volumes on admitted and non-admitted pathways (a simplifying assumption).  Therefore, much more than 92.5% of patients waiting for treatment should have waited less than 18 weeks.  But if we find that only 87.5% of patients not yet treated have waited less than 18 weeks, then I conclude that there has been inappropriate management going on.  And I define a new score - which I have called the MM Obstacle Score - of -5% (that is, 87.5% minus 92.5%).  In fact, any negative score - and even a slightly positive one - probably indicates that a Trust is mismanaging access targets.

[I realise that clinical urgency changes this picture significantly.  I will return to incorporating that factor at a future date.  For now, this is a simplified picture.]
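To make the definition concrete, here is a minimal sketch of the score in Python.  It assumes, as above, that the clock-stop targets imply 92.5% (the average of the 90% admitted and 95% non-admitted targets, with equal volumes); the function name and the input counts are mine, not from any published dataset - you would take the incomplete-pathway counts from the DoH statistics yourself.

```python
# MM Obstacle Score, as defined above: how far the still-waiting list
# falls short of the 92.5% implied by the clock-stop targets.

TARGET_PCT = (90.0 + 95.0) / 2  # 92.5%: admitted (90%) and non-admitted
                                # (95%) targets averaged, assuming equal
                                # volumes on both pathway types

def obstacle_score(waiting_within_18w: int, waiting_total: int) -> float:
    """Score in percentage points.  Negative (or only slightly positive)
    scores suggest the Trust is managing clock-stops, not its whole list."""
    pct_within = 100.0 * waiting_within_18w / waiting_total
    return pct_within - TARGET_PCT

# Worked example from the text: only 87.5% of the untreated list is
# within 18 weeks, so the score is 87.5 - 92.5 = -5.
print(obstacle_score(875, 1_000))  # -> -5.0
```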

And now to the list of the worst 10 performers.  This is presented at the top of the blog, with their MM Obstacle Scores based on November 2011 statistics.

8 of the worst 10 are actually FTs.  And they are meant to be the ones with a proven ability to govern themselves better.  In fact, if one considers the 180 Trusts that admitted more than 5 patients in November 2011, 60% of the worst performers were FTs, whereas only 51% of the best performers were.  FTs are systematically poorer performers than other Trusts on this measure.  And they are meant to be better at self-governance!


Blog Post Updated on Monday 23rd January 2012, 16:00, to bring out the fact that Foundation Trusts were disproportionately poorer performers.

1 comment:

  1. Doctor at one of the named hospitals (25 January 2012 at 02:27)

    This is a great bit of simple analysis – though I’m not in a position to know if it’s true.

    Another way of looking at it would be that these 10 Trusts are best at working to meet a target – and perhaps the observation should be about how absurd the target is – why 18 weeks, why 90%, why clock stops and starts etc., etc.?

    As NHS managers are trained, motivated and performance managed to meet targets, we shouldn’t be surprised when they use every ounce of innovation to figure out how best to do it.

    We know that targets work – so the real issue is that no-one in a position that matters is trained (or interested enough) to develop the clinical targets we need. And we could do so much better for waiting times.
