Measuring hospital performance
Measuring hospital performance is a complex and essential activity.
Nikolas Matthes (European Hospital, Vol 14, issue 6/05, page 7) raises some interesting issues, but his analysis is incomplete. Two central questions must be answered before proceeding to manage performance: Why? and How?
There are many reasons for measuring performance. The ultimate goal is to ensure that the hospital delivers good quality care, and the most important aspect of ‘quality care’ is whether medical interventions improve patient outcomes, in terms of the length and quality of patients’ lives.
The way in which most healthcare systems approach this issue is to measure and manage processes of care and activity. Process quality performance focuses on whether doctors and nurses are polite to patients and whether hospital food is edible, car parking is adequate and how the institution fares in terms of patient and staff satisfaction surveys.
Activity performance measures often tend to focus on waiting time. Thus the Welsh wait longer for elective procedures than the English; Norwegians who wait can access Danish hospitals; Danes who have to wait for elective care can access German hospitals; and the French and Germans wait only short periods because of their high capacity levels and their payment systems.
Activity performance measurement and management may also include aspects of care such as the timely provision of thrombolytic interventions after myocardial infarction, as well as waiting times for outpatients and diagnostic procedures.
Such measures highlight the universal problem of variation in healthcare: whatever is analysed in medicine, doctors do different things to similar patients. Thus, in the US Medicare system there is enduring evidence of variation in expenditure and activity: patients in this universal, federally financed healthcare system cost more in the East of the USA than in the West, owing to differences in the volume of care delivered, and with no observable benefit in terms of patient satisfaction or outcome. (References: A. Maynard, ‘Enduring problems in health care delivery’, in A. Maynard (ed.), The Public-Private Mix for Health: plus ça change, plus c’est la même chose, Radcliffe Publishing, Oxford and Seattle, 2005; V. Fuchs, ‘Perspective: More variation in the use of care, more flat-of-the-curve medicine’, Health Affairs, electronic supplement, 7 October 2004, www.healthaffairs.org.)
This focus on activity performance and variation raises questions such as: if you reduce waiting times, do you always improve patient health, or do you reach a point of diminishing returns, and when? Similarly, in any analysis of variation: will reducing the variation in the activity levels of orthopaedic surgeons, for instance by incentivising a shift in the mean of the distribution, lead to increased activity but poorer patient outcomes?
Performance management that ignores patient outcomes is potentially dangerous. Often politicians, anxious to reduce waiting times and squeeze more activity out of doctor stocks, ignore this conclusion. Increasingly there is recognition that there is a need to measure outcomes, but all too often this involves a narrow medical approach that focuses on failure.
The narrow orthodoxy of outcome performance management is epitomised by President Reagan’s decision, nearly a quarter of a century ago, to publish the mortality rates of all hospitals treating Medicare patients. The hospitals were outraged and insisted that many of the data were inaccurate; the administration responded that these were the very data the hospitals themselves had supplied to the Federal authorities, and asked whether they had been lying. The Reagan administration’s work in publicising hospital mortality outcomes has yet to be emulated in many European countries.
Mortality data are useful in performance management, but mortality is a measure of failure rather than of success in medicine. Similarly, measures such as complication rates and re-admission rates are measures of failure.
All such measures have to be carefully collected and analysed because of variations in case mix and other confounding factors. Furthermore, their use will inevitably affect behaviour. For instance, in Pennsylvania and New York, postoperative mortality rates after cardiac surgery are published by individual surgeon. The goal of publishing these data was to inform patient choice. However, the evidence from this and similar American studies shows that publication of such data has little effect on patient behaviour: patients continue to seek medical advice to interpret the available information and to make their choice of doctor.
However, in Pennsylvania and New York, the publication of postoperative cardiac mortality by individual practitioner had an immediate effect on providers. Poorly performing providers changed their patient selection practices, treating less complex patients with fewer co-morbidities and, in so doing, shifting the mean of the distribution (thereby apparently ‘improving’ outcomes) by excluding the riskier patients from surgical procedures. These high-risk patients were then treated medically, at higher cost and with poorer outcomes. This example clearly shows the significant, and sometimes perverse, effects of publishing performance data.
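The arithmetic behind this selection effect can be sketched briefly. The numbers below are purely illustrative assumptions, not data from the Pennsylvania or New York schemes; they simply show how turning away high-risk patients lowers a provider’s crude mortality rate even when the quality of care delivered is unchanged.

```python
# Illustrative sketch (hypothetical numbers): excluding high-risk patients
# "improves" crude mortality without any change in quality of care.

# Two risk groups with fixed, unchanged true mortality risks.
low_risk = {"n": 900, "mortality": 0.01}   # 900 patients, 1% risk
high_risk = {"n": 100, "mortality": 0.10}  # 100 patients, 10% risk

def crude_rate(groups):
    """Crude mortality rate: expected deaths divided by patients treated."""
    deaths = sum(g["n"] * g["mortality"] for g in groups)
    patients = sum(g["n"] for g in groups)
    return deaths / patients

before = crude_rate([low_risk, high_risk])  # all patients operated on
after = crude_rate([low_risk])              # high-risk patients turned away

print(f"crude mortality before selection: {before:.3f}")  # 0.019
print(f"crude mortality after selection:  {after:.3f}")   # 0.010
```

The published rate nearly halves, yet no patient received better surgery; the high-risk group has merely been moved off the surgeon’s books.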
It is now time to complement measures of failure, such as mortality, with measures of success in medical interventions, i.e. patient reported outcome measures (PROMs). The measurement of changes in mental and physical well-being uses generic instruments (i.e. measures that can be used across clinical specialties) such as the Short Form 36 (SF-36; www.sf36.org) and the EQ-5D (www.euroqol.org). These instruments have been translated into dozens of languages and used in thousands of clinical trials. However, they are not used in routine clinical care, and in particular not as performance measures.
An exception is the British private insurer BUPA (the British United Provident Association), which offers the SF-36 to its elective patients on admission to hospital and again 3-6 months after completion of the procedure. This protects consumers against poor clinical practice and enables BUPA to monitor and manage the relative performance of the practitioners it employs in terms of restoring patients’ mental and physical functioning. This pioneering work is now being emulated in experimental work in the UK NHS.
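The before-and-after logic of such PROM monitoring can be sketched as follows. The practitioner names, scores, and the simple mean-change summary are all illustrative assumptions for exposition, not BUPA’s actual method or data; real comparisons would also require case-mix adjustment, as noted above.

```python
# Hypothetical sketch of PROM-based monitoring: compare a generic health
# score (e.g. an SF-36 summary on a 0-100 scale) recorded at admission
# with the score 3-6 months after the procedure, per practitioner.
from statistics import mean

# (practitioner, score at entry, score at 3-6 month follow-up)
records = [
    ("Dr A", 45, 70), ("Dr A", 50, 78), ("Dr A", 40, 66),
    ("Dr B", 48, 52), ("Dr B", 42, 47), ("Dr B", 55, 58),
]

def mean_improvement(records, practitioner):
    """Average change in score (follow-up minus entry) for one practitioner."""
    changes = [post - pre for who, pre, post in records if who == practitioner]
    return mean(changes)

for who in ("Dr A", "Dr B"):
    print(f"{who}: mean improvement {mean_improvement(records, who):.1f}")
```

A persistently small mean improvement, relative to peers treating similar patients, is the kind of signal such a system would flag for closer review.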
Performance management is complex and has to use both activity and outcome data, especially PROMs. These data can be used for the appraisal of practitioner performance and for revalidation, as well as for ensuring value for money in public and private healthcare systems. It is curious how little use is made of PROMs, and how often measures of activity and outcome are not linked to incentive systems through which clinical practice can be changed.
01.03.2006