During the last two decades the context of care has changed greatly, with a greater emphasis placed on evidence-based practice, clinical effectiveness and value for money. This has led to significant changes in the organisation and delivery of care and the culture in which healthcare professionals operate (Trinder and Reynolds, 2000).
Audit
Clinical audit is a process designed to improve clinical care and has been an increasingly prevalent feature of the current health system since the white paper titled Working for patients (Department of Health [DoH], 1989) advocated audit on a much greater scale than had previously been the case. Although initially promoted with a focus on medicine, it is now considered to be a multi-professional activity.
According to Crombie et al (1993) the need for clinical audit may arise for the following reasons.
- Variation in the care delivered: within a nation or internationally there may be quite marked differences in the delivery of care. At face value it may not be possible to say which form of delivery is best, but an audit may make it possible to determine whether one method is better than another.
- Limitation of resources: as all resources are limited, it is important that the best possible use is made of them, and audit may help inform this debate.
- Evident deficiencies in the care delivered: deficiencies in the delivery of care can be demonstrated through audit.
- Organisational need for audit: audit provides a means of monitoring the quality of work within an organisation.
- Technological advances and professional education: identified weaknesses in the provision of care can be remedied through the use of new technology or professional education, to bring about change.
- Political power of audit: documented evidence of deficiencies in care can be used to bring about change.
The above reasons for audit indicate that it is an activity to investigate the quality of care and to help make a case, where necessary, for improvements in care. At face value this is clear-cut, but in reality there appears to be a grey area in which it may be difficult to decide whether a situation calls for an audit or a research investigation. This is especially so when the research concerns issues related to quality of care and involves survey rather than experimental designs.
For example, Winocour et al (2002) report on a survey examining the provision and role of diabetes specialist nurses (DSNs) in the UK, published in Diabetic Medicine in 2002 as part of a series of papers on the ‘State of diabetes and diabetic care in the UK’. They report that they sent a questionnaire to 456 consultant physicians providing diabetes services. The study was conducted to gain a sense of the workforce and of the roles and responsibilities of DSNs, in order to judge the adequacy of the current situation and to make recommendations to ensure that the service will be able to provide appropriate care to the growing numbers of people with diabetes. The authors concluded that there were far fewer DSNs than recommended in national strategy documents and suggested a need for a nationally co-ordinated approach to training and recruitment. This survey appears to fulfil the criteria of an audit, although it is presented as research. Whether labelled research or audit, the work has the potential to lead to recommendations for practice development that could influence the quality of care for people with diabetes in the future.
Research governance processes
While, on the one hand, we can argue that the difference between audit and research does not matter as long as the work is done, nowadays the distinction does matter, particularly since the introduction of the Research Governance Framework (DoH, 2001). The rules and regulations surrounding research have been made much stricter, and anyone wishing to conduct research must gain appropriate approval. Audit is considered a routine part of clinical governance and, according to the National Council on Ethics in Human Research (1995), it can be conducted without informed consent and does not require ethical approval. This is in contrast to research, which should always have ethical approval if it involves patients or healthcare staff or if it takes place on healthcare premises.
Furthermore, as the processes by which research approval is gained can be perceived as difficult and frustrating (Jones and Bamford, 2004; Watson and Manthorpe, 2002; Smith, 2000), there is great temptation to submit work as audit rather than research. Kneafsey and Howarth (2004) suggest that in the current research climate we might anticipate a surge in audit activity. They also warn that this is not an ideal situation, as it will mean that some proposals that would benefit from the research governance processes will not have the opportunity to make use of them. Another concern is that clinicians who cannot get their work approved through a clinical audit route may abandon potentially valuable research ideas, as the thought of all the bureaucracy may deter them entirely.
Defining audit and research
One useful definition of audit is that given by the DoH (1989):
‘the systematic critical analysis of the quality of medical care, including the procedures used for diagnosis and treatment, the use of resources and the resulting outcome and quality of life for the patient.’
Another useful definition is that of Crombie et al (1993), who defined audit as:
‘the process of reviewing the delivery of health care to identify deficiencies so that they may be remedied.’
In contrast, research may be defined (DoH, 1993) as:
‘rigorous and systematic enquiry conducted on a scale and using methods commensurate with the issues investigated and designed to lead to generalisable contributions to knowledge.’
According to this definition, the goal of research is to produce knowledge that can be generalised to other populations (although this is in itself controversial) while the purpose of audit is to improve care.
The process of audit
Typically the process of audit is illustrated as a cycle that includes both the identification of a problem and the remedy of the problem (Crombie et al, 1993). The audit cycle is illustrated in Figure 1.
The process of an audit starts with identification of the topic and this is best achieved in discussion with other members of the clinical team as the care to be investigated is likely to be influenced by the whole team. In addition to gaining the support of clinical colleagues, most healthcare providers will have an audit committee that must approve the topic in advance. Once the topic is agreed it is usually important to find appropriate standards or criteria against which current practice can be assessed. The use of standards is a central component in the audit process. If there are no published standards then they will need to be formulated according to the evidence in the literature. While there may be several factors which influence or are part of a standard, for the purposes of audit it is recommended that only one or two criteria are reviewed at a time (Cooper and Benjamin, 2004). In contrast, standards are not a feature of research.
The next stage in the audit cycle, measuring practice against standards, can involve similar methods to those used in research. Firstly, when measuring practice, the target population and the sample size have to be chosen, and if there is a large number of potentially suitable individuals the auditors must decide how they should be selected. As in research, it is important to have an unbiased sample that is representative of the clinical population from which it is drawn. However, a more rigorous sampling strategy would be expected in a research project than in an audit, which may be based on more practical factors (Clinical Central Audit Office, 2000).
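Purely as an illustration of this sampling step (the identifiers and numbers below are invented, not drawn from any audit discussed here), a minimal sketch of drawing a simple random sample of case notes might look like this:

```python
import random

# Hypothetical case-note identifiers for the target population, e.g. all
# patients seen by a diabetes service during the audit period.
population = [f"case-{i:04d}" for i in range(1, 501)]

# A fixed seed makes the selection reproducible if it is queried later.
random.seed(42)

# Simple random sampling helps keep the sample unbiased and representative
# of the clinical population from which it is drawn.
sample = random.sample(population, k=50)

print(f"Reviewing {len(sample)} of {len(population)} case notes")
print(sample[:5])
```

In practice an audit may well use a more pragmatic selection rule, such as all patients seen in a given month; the point is simply that the rule should be decided before data collection begins.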
The auditors must then decide how the data are to be gathered, and the methods can again be similar to those of research. The range of methods is far wider than checking written case notes, which is the image many have when visualising audit activity. Depending on the nature of the care to be assessed, it may be appropriate to ask participants questions (using a questionnaire or an interview), to observe practice or to collect samples to send for laboratory analysis. These are all methods that are employed in research investigations. Specially designed data-gathering instruments will be needed to document the data in an efficient and robust way. It is recommended that audit methods are piloted, as is the case for research studies.
Once gathered, the data must be analysed, and this may also involve similar analytical techniques to those employed in research. However, when considering outcomes, the differences between audit and research become more obvious. Data will usually be presented as descriptive summary statistics to illustrate the extent to which the standard has, or has not, been met. It is then crucial that results are discussed with colleagues to identify areas which need to be changed. Because the rationale for audit is the improvement of care (Crombie et al, 1993):
‘there is no point in describing a health care problem if nothing is done to ameliorate it.’
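As a minimal, hypothetical sketch of that analysis step (the criterion, figures and target below are invented for illustration), the findings are typically reduced to a simple descriptive summary against the agreed standard:

```python
# Hypothetical audit data: for each sampled case note, whether the agreed
# criterion (e.g. 'HbA1c recorded within the last 12 months') was met.
criterion_met = [True, True, False, True, False, True, True, True, False, True]

met = sum(criterion_met)
percentage = 100 * met / len(criterion_met)

# A descriptive summary statistic is usually all that is needed: it shows
# the extent to which the standard has, or has not, been achieved.
print(f"Criterion met in {met}/{len(criterion_met)} cases ({percentage:.0f}%)")

TARGET = 90  # the agreed standard, e.g. 90% of patients (invented here)
print("Standard met" if percentage >= TARGET else "Standard not met")
```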
It is recommended that an action plan is developed, detailing what is to be done, who is to do it and what the time frame is to be. In order to complete the audit cycle, plans must be made for a re-audit at a specified date. The changes required may be modest. Alternatively, the change required may be on a large scale: for example, changing the education curriculum to influence professional development, purchasing equipment or changing schedules for service delivery. Such changes could involve many individuals and require significant discussion. A sketch of the shape such an action plan can take is given after the following example.
For instance, Wallymahmed et al (2005) report a service in which a baseline audit, conducted 12 years previously, detected deficiencies that led to the appointment of an inpatient diabetes liaison nurse. Twelve years later the service was re-audited to monitor its current status. While not all re-audits would be so far apart, complex changes may take years to complete. Furthermore, having the two sets of data enables powerful comparisons to be made about service provision and also about the changing context of care.
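To illustrate the shape such an action plan can take (every entry below is invented, not taken from the audits cited), it can be recorded as a simple structured list covering what is to be done, who is to do it, the time frame and the re-audit date that closes the cycle:

```python
from datetime import date

# Hypothetical action plan: what, who and when, plus the re-audit date.
action_plan = [
    {"action": "Revise the patient education curriculum",
     "owner": "Diabetes specialist nurse team",
     "due": date(2006, 3, 31)},
    {"action": "Purchase blood glucose meters for the admissions ward",
     "owner": "Service manager",
     "due": date(2005, 12, 31)},
]
re_audit_date = date(2006, 9, 30)  # specified date completing the audit cycle

for item in sorted(action_plan, key=lambda x: x["due"]):
    print(f"{item['due']}: {item['action']} ({item['owner']})")
print(f"Re-audit scheduled for {re_audit_date}")
```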
The subject of conducting an audit is covered in greater detail elsewhere (e.g. Morrell and Harvey, 1999; Crombie et al, 1993).
One of the problems with the confusion between audit and research is that often only the first three steps of audit are undertaken, and deficiencies are identified and reported as though that were the end of the task. In research, completion of the task may end with appropriate dissemination of results; there is not necessarily an expectation that the researcher will then begin to amend any deficiencies detected as a result of the investigation. However, it could be expected that the research will generate evidence that may be used to inform future standards (Crombie et al, 1993):
‘The difference is between adding to the body of medical knowledge and ensuring that knowledge is effectively used.’
Although at times audit and non-experimental research appear to overlap, Cooper and Benjamin (2004) draw attention to the fact that the intention of the activities is quite different:
‘The distinction is in their intention […] As a general rule, clinical audit ensures we achieve the outcomes that have been agreed in set standards; research provides new knowledge and the evidence on which to base such standards.’
Testing Times
To illustrate some of the points mentioned above, the methods used in an audit to review diabetes services in England and Wales, Testing Times (Audit Commission, 2000), will be outlined. The audit was based on the premise that there is sound evidence of what constitutes good management of diabetes, which includes, for example, prompt diagnosis, regular checks to identify complications at an early stage and treatment to control blood glucose and blood pressure. Support and education are also listed as crucial elements of diabetes services. The audit was designed to indicate whether patients were receiving best care.
Audit sample
In terms of sample size, nine sites were selected from across England and Wales. Selection was influenced by the need to ensure a reasonable spread of demographic variables, a mixture of urban and rural services, and a good range of regions across the country. No details were given regarding the specific selection of these sites, although the report does state that the audit team had access to a multidisciplinary advisory group and these members may have been involved in the selection process. Once the hospitals were selected, their associated health authority and community trust were also invited to participate in the audit to enable the full picture of diabetes provision in the locality to be audited. This appears to be a pragmatic rather than a scientifically robust justification of the sample and is in keeping with an audit. Such a sampling strategy would be termed a ‘convenience sample’ if it were applied to a research study. Such a broad sample indicates that it would be incorrect to assume that audits only include local populations while research studies aim for generalisability.
Data collection
Data were collected by site visits and by postal and telephone surveys. During the site visits to each hospital, which lasted for 3–4 days, data were gathered using the following methods:
- review of documents
- review of resources
- clinic quality survey
- inpatient census
- patient interviews.
During the postal surveys data were gathered by:
- a survey of people with diabetes
- a survey of general practitioners.
A telephone survey of the health authorities was also completed. In addition, another 30 visits were made to identify good practice and current issues.
Ethical approval was sought from each site where necessary, even though this would not normally be required for an audit. These data-gathering methods involved a wide variety of research instruments – which can be obtained from the Audit Commission website if required (www.audit-commission.gov.uk [accessed 21.06.05]) – and yielded both quantitative and qualitative data. These data were then analysed to illustrate the state of service provision. Explicit standards were not used, but broad indicators of good practice were listed. For example, with regard to patient education, the report (Audit Commission, 2000) states that:
‘Good quality patient education has a number of key components.’
It then lists ten statements describing what patient education should enable people with diabetes to do, such as:
‘Know the basics of the condition and the complications that it can cause and when to access more information as they need it.’
‘Know about patient groups and how to get in touch with other people with diabetes.’
These are not tightly defined criteria and it would not be possible for the auditors to gain a precise measurement of the extent to which these identified aspects of good practice are being achieved.
Audit results
The audit results offer a general impression of the standards of care, with findings presented in broad terms (Audit Commission, 2000), as exemplified by:
‘Four out of nine [trusts] did not have a structured programme of education with a written curriculum.’
‘Only one hospital had taken steps to evaluate the education that patients received.’
The report did include examples of good practice (Audit Commission, 2000), such as the use of:
‘Varied modes of delivery, including both group and one to one sessions.’
It was not possible to state that certain percentages of trusts or patients had achieved a particular standard of education.
Therefore, in many respects the type of data reported is very similar to that of a research report. The final chapter of the report, ‘Meeting the challenges of the 21st Century’, considers the growing demands, current service provision and the changes that may be needed for services to cope. The audit results provide baseline data to help hospitals consider how their services may be improved, and examples of good practice are included that may serve as inspiration for others to appreciate what is possible and potentially applicable to their own locality. Thus, there is scope for service improvement but at a broad and general level rather than at a site-specific level.
Overall, the magnitude of this audit, the inclusion of quantitative and qualitative approaches, the range of data-gathering methods involved and the use of a combination of descriptive summary statistics, case studies and quotations all serve to illustrate that audit and research activity can be very similar. The specific difference lies, as Cooper and Benjamin (2004) state, in the intent rather than in the method or the process. The intent of the Audit Commission was to gauge the extent to which patients were in receipt of best practice. The auditors noted examples of good practice and, as a result of the report, recommendations were made to improve the quality of care offered by diabetes services across England and Wales. The audit was conducted in anticipation of the National Service Framework for diabetes in 2001, and to enable hospitals to assess their local services and prepare for the framework.
In making recommendations for future work the next stage of the audit cycle – ‘Identify areas which need to be changed’ – has been accomplished. The following stage, that of ‘Implementing change in practice’, was not covered in the report but the style of the report indicated that service improvements were to be made. While specific time frames for re-auditing were not given, it was stated that ‘this study is accompanied by a programme of audits of hospital diabetes services in England and Wales,’ so there is the explicit understanding that this is not a single, isolated audit.
After such a rigorous investigation it can be claimed that this audit has produced evidence of the state of diabetes services in England and Wales, and overall it suggests that there is much scope for improvement. The examples of good practice could be used by others as standards to be applied in their own settings.
Conclusion
This paper has outlined the reasons why audit may be undertaken and the process by which it is conducted. The key differences between audit and research activity were discussed. Through using the Testing Times audit it was illustrated that a narrow interpretation of audit activity may not be helpful in delineating the activities. It could be that an audit does not include specific standards, may not at the time of reporting include all stages of the audit cycle, may use a design often thought to be indicative of research and, indeed, may involve gaining ethical approval. However, if there is doubt about whether a proposed investigation should be categorised as an audit or as research then the overall purpose of the work should be considered. If the remit is to improve care in a direct and specific way and with an intention to return to the clinical area to monitor the situation in the future, then it should be considered an audit. In order to comply with the research governance regulations, it is important that research is not conducted as though it is an audit, however tempting that avenue may seem.