The Quality and Outcomes Framework (QOF), included in the GP Contract, has its friends and its foes. Like it or loathe it, a significant proportion of general practice income depends on performance against its targets. I have been vocal in my criticism of the reduction in the key HbA1c indicator to 7.0% (53 mmol/mol) this year, largely because I fear the potential for iatrogenic hypoglycaemia, particularly in older people taking sulphonylureas or insulin. Overall, however, I believe that the framework has succeeded in focusing primary care practitioners on achieving evidence-based indicators that should result in improved outcomes.
Inevitably, because of the limitations in the methodology of data collection, it concentrates on easily quantifiable parameters – blood pressure or HbA1c, for example – or on declarations of “processes undertaken”, such as testing of feet for neuropathy and ischaemia. It can take little account of other issues such as the quality of advice and support offered to people with diabetes or their carers, and this is unlikely to change soon.
Critics argue, perhaps justifiably, that areas of practice not included in the QOF may lose out or even be ignored. Thus, there is a queue of advocates seeking to have their priority interests brought within the system. Maybe that is a testament to its success?
Most commentators, both inside and outside of primary care, were surprised by the high scores achieved by most practices, even in the first years of the scheme. Within my own practice, our pre-QOF estimate of our likely performance fell far short of our actual achievement. I believe this was attributable both to a gross underestimate of existing practice achievement and to the enormous efforts made by practices when the contract was enacted. Almost all of the indicators measured by QOF improved in the first years of the system, before reaching an expected plateau (see data tables overleaf). Interestingly, where data exist for equivalent indicators before the contract, performance was already improving year on year, possibly in response to other initiatives. The table of disease prevalence overleaf, however, shows a particular increase in diabetes compared with other domains, and this can only rise further with more widespread use of screening programmes (Table 1).
Over the years since the implementation of QOF, I have been one of a team of GPs performing visits to practices in my PCT. These serve to check, with significant but constructive rigour, that practice claims are supported by evidence of the quantity and quality of work recorded in clinical notes. Of at least equal value is the opportunity to share experience with visited practices, and with their permission to disseminate new ideas and spread best practice. A significant area where doubts often arise is that of “exception reporting”, the subject to which I now turn.
The principle of exception reporting is that clinical guidelines are not applicable to every individual and that practices should not be penalised for making appropriate clinical decisions that result in an indicator not being achieved. Critics and cynics see this as providing loopholes in the system and claim that abuse, or “gaming”, is prevalent. Fortunately, evidence does not support this assertion (Doran et al, 2008), and PCTs have both the powers and the responsibility to ensure compliance with both the letter and the spirit of the rules. The overall rate of exception reporting has been between 4% and 7%, although it is higher for some specific indicators and lower for others (Doran et al, 2008).
So when is it legitimate to consider exception reporting? The Department of Health (2004) has published criteria that seem generally clear and logical. There is, however, room for differing interpretations. Among my assessor colleagues, for example, it would not be deemed acceptable to exclude people in a residential care or nursing home from appropriate blood pressure monitoring, HbA1c tests, or checks for foot pulses and neuropathy, simply because of the logistical difficulty of arranging the tests. It would be acceptable, however, to except some of the same individuals if they were unable to attend for, or collaborate with, digital retinal screening. The observation that the national exclusion rate for indicator DM 11 (a measure of blood pressure in the preceding 15 months) is only 1.46% (The Information Centre, 2007), whereas that for DM 21 (performance of retinal screening) is 6.84% (personal communication from Norfolk PCT), suggests that this point is generally accepted by practices.
More open to interpretation will be DM 23 (percentage of people achieving HbA1c ≤7% [≤53 mmol/mol]) together with DM 24 (HbA1c ≤8% [≤64 mmol/mol]) and DM 25 (HbA1c ≤9% [≤75 mmol/mol]). It would be a tenable position to argue that a significant number of people may be on “maximal tolerated medication” when still short of the set HbA1c indicator.
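For readers checking the dual units quoted for DM 23–25, the paired values follow the standard NGSP/DCCT-to-IFCC master equation, IFCC (mmol/mol) = (DCCT % − 2.15) × 10.929, rounded to the nearest whole number. A minimal illustrative sketch (the function name is my own, not part of any QOF documentation):

```python
def percent_to_mmol_per_mol(hba1c_percent: float) -> int:
    """Convert HbA1c from DCCT-aligned % to IFCC mmol/mol.

    Applies the standard master equation:
        IFCC = (DCCT% - 2.15) * 10.929
    rounded to the nearest whole mmol/mol, as reported clinically.
    """
    return round((hba1c_percent - 2.15) * 10.929)

# The three QOF thresholds cited above:
for pct in (7.0, 8.0, 9.0):
    print(f"{pct}% -> {percent_to_mmol_per_mol(pct)} mmol/mol")
```

Running this reproduces the pairings in the text: 7% with 53 mmol/mol, 8% with 64 mmol/mol, and 9% with 75 mmol/mol.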
When you consider exception reporting, make sure you record your logic and reasoning clearly in the clinical notes. Then at least you can justify your exception reports if or when they are challenged by the PCT assessors.