Article • QC at AACC 2023
New risk-based quality control assessment for clinical labs
Weighing the cost of quality control (QC) resources against the risk of testing error is a balancing act no clinical laboratory manager enjoys. It is an inexact process, itself prone to error, and one that can impact the operations of hospital labs and independent clinical testing companies. In the current resource-constrained healthcare environment, there is pressure to improve the cost-effectiveness of QC programmes and reduce the considerable resources clinical labs spend on them.
Report: Cynthia E. Keen
A new method in development to more accurately determine the level of acceptable risk was introduced at the 2023 Association for Diagnostics and Laboratory Medicine (formerly the AACC) Annual Scientific Meeting held in July in Anaheim, California, USA. The Precision QC (PQC) model offers more precision than the one-size-fits-all models currently in use in many clinical hospital laboratories, according to its developers from the University of Utah in Salt Lake City. It is dynamic and flexible, and can be adapted to a wide range of QC monitoring methods, QC behaviours, and clinical scenarios.
A watchful eye on systematic and measurement errors
In spite of best efforts, errors in laboratory test results occur, caused by systematic error, measurement error, or both. These errors result in false positive (FP) and false negative (FN) events. FP events incur costs associated with troubleshooting to identify their cause, reagent use, downtime, repeat testing, and recalibration. Identifying the reason for an FN event can stretch lab financial and staff resources, hinder lab operations, and can potentially have significant health consequences for a patient.
We view an assay as a dynamic system that evolves from state to state over time. It will shift, and the questions that need to be answered to determine risk are the frequency of shifts and the shift size distribution
Robert Schmidt
QC systems are designed to detect when systematic errors start occurring. How stringently parameters are set has a direct impact on the number of FPs that occur. Assessing the risk of error is important, because QC settings that are too stringent may be just as problematic for a lab as those that are too lax.
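To illustrate why stringency matters: assuming in-control QC results follow a Gaussian distribution (a common simplification for illustration, not a detail of the PQC model), the per-run probability of a false rejection follows directly from where the control limits are set. A minimal sketch:

```python
import math

def false_rejection_prob(k: float) -> float:
    """P(|z| > k): chance a single in-control QC result falls
    outside +/-k SD control limits, under a standard normal model."""
    return math.erfc(k / math.sqrt(2))

for k in (2.0, 2.5, 3.0):
    print(f"+/-{k} SD limits: false-rejection probability per run = "
          f"{false_rejection_prob(k):.4f}")
```

Tightening limits from ±3 SD to ±2 SD raises the per-run false-rejection probability from roughly 0.3% to about 4.6%, which across many assays and daily runs translates into substantial troubleshooting and repeat-testing cost.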
'We view an assay as a dynamic system that evolves from state to state over time. It will shift, and the questions that need to be answered to determine risk are the frequency of shifts and the shift size distribution,' says Robert Schmidt, MD, PhD, formerly a professor of pathology and principal investigator of the Utah team.
The expert compared these shifts in laboratory equipment to changes in performance in other equipment, such as cars or bikes. In laboratory equipment, however, there are countless components subject to change over time, and that change can manifest itself as a change in performance, he explains.
Assessing the “problem potential” of lab tests
Incorporating risk can be beneficial for a QC program. Some assays are reliable, robust, and stable, with few errors. For others, measurement error will not result in immediate clinical harm, or, the results of the tests being performed have a low impact on clinical outcome if an error occurs. At the other extreme are “problem” assays, with known performance volatility, or whose results have critical cut points, and which have a high cost when failures occur.
The researchers define the risk of an event as the product of its probability and its cost. It is also important to factor in intuition, ideally without introducing bias from previously experienced events.
'The PQC model views an assay as a dynamic system that evolves through various states over time, and is not necessarily subject to a single shift that remains constant until discovered,' explains Joseph W. Rudolf, MD, an assistant professor and medical director in the Clinical Pathology Division's Automated Core Lab. 'The system starts in the in-control (IC) state and at some point moves to an out-of-control (OOC) state, where it remains until the QC monitoring system raises a detection signal, at which point it is restored to the IC state. The overall behaviour of the system is determined by the proportion of time spent in each state (and thus the level of systematic error), which determines the risk. One of the key features of the PQC model is that it explicitly includes the shift probability, which links the rates of FP and FN events. This feature makes it possible to construct trade-off curves between FP risk and FN risk, which characterise the performance of the QC monitoring system.'
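The state-based behaviour Rudolf describes can be sketched with a toy Monte Carlo simulation. This is an illustrative sketch, not the PQC model itself: the shift probability, shift size, and ±3 SD detection rule below are assumed values chosen purely for demonstration.

```python
import random

def simulate_qc(n_runs=100_000, shift_prob=0.002, shift_size=2.0,
                limit=3.0, seed=1):
    """Toy two-state (IC/OOC) simulation of a QC-monitored assay.

    On each run an in-control assay may shift out of control with
    probability `shift_prob`; a QC result outside +/-`limit` SD raises
    a signal that restores the IC state. Returns the rate of
    false-positive signals (signal while IC), false-negative runs
    (OOC but no signal), and the overall fraction of runs spent OOC.
    """
    rng = random.Random(seed)
    bias = 0.0                         # 0.0 means in control
    fp = fn = ooc_runs = 0
    for _ in range(n_runs):
        if bias == 0.0 and rng.random() < shift_prob:
            bias = shift_size          # system shifts out of control
        qc = rng.gauss(bias, 1.0)      # QC measurement, in SD units
        signal = abs(qc) > limit
        if bias == 0.0 and signal:
            fp += 1                    # false rejection while in control
        if bias != 0.0:
            ooc_runs += 1
            if not signal:
                fn += 1                # error state missed this run
        if signal:
            bias = 0.0                 # troubleshooting restores IC state
    return fp / n_runs, fn / n_runs, ooc_runs / n_runs

fp_rate, fn_rate, ooc_frac = simulate_qc()
print(f"FP rate {fp_rate:.4f}, FN rate {fn_rate:.4f}, fraction OOC {ooc_frac:.4f}")
```

Varying `limit` shows the linkage Rudolf points to: tighter limits shorten the undetected OOC sojourns (fewer FN runs) at the cost of more false-positive signals, and vice versa.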
'This model will help you quantitatively assess whether the QC resources spent on each assay to maintain its testing accuracy are appropriate, too little, or excessive,' Schmidt and Rudolf advise. 'By mixing and matching parameters relating to different assays, you will be able to clearly visualise the results and make informed decisions.'
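The trade-off curve between FP and FN risk that the researchers describe can also be traced analytically under a simple Gaussian assumption (the 2 SD shift size below is an assumed value for illustration, not drawn from the PQC model): each choice of control limit fixes both the false-rejection probability while in control and the probability of missing an out-of-control run.

```python
import math

def upper_tail(k: float) -> float:
    """P(Z > k) for a standard normal variable."""
    return 0.5 * math.erfc(k / math.sqrt(2))

shift = 2.0  # assumed systematic shift, in SD units
print("limit     FP per IC run   miss per OOC run")
for limit in (2.0, 2.5, 3.0, 3.5):
    fp = 2 * upper_tail(limit)  # in-control result outside limits
    # shifted (OOC) result still inside the limits:
    miss = 1 - upper_tail(limit - shift) - upper_tail(limit + shift)
    print(f"+/-{limit} SD   {fp:.4f}          {miss:.4f}")
```

Sweeping the limit traces the curve: each row is one operating point, and no setting improves one error rate without worsening the other, which is why the choice is a risk decision rather than a purely technical one.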
Profiles:
Joseph W. Rudolf, MD, is an assistant professor in the Department of Pathology at the University of Utah. He serves as Medical Director of the Automated Core Laboratory at ARUP Laboratories, a national nonprofit and academic reference laboratory. His clinical and research interests focus on the intersection of informatics and clinical operations, including clinical decision support, utilisation management, reporting and analytics, and clinical process improvement.
Robert Schmidt, MD, PhD, is the Medical Director for healthcare systems and head of population analytics at Labcorp. His work focuses on the use of laboratory data to identify and close gaps in care. Before joining Labcorp, Schmidt was Professor of Pathology at the University of Utah, where he was the Director of the Center for Effective Medical Testing. His research focused on cost-effectiveness, utilisation analysis, and evidence-based evaluation of diagnostic testing.
08.02.2024