
Patient safety and quality improvement: challenges of technical and adaptive work

Christine Goeschel
RN MPA MPS

Peter Pronovost
MD PhD FCCM

Johns Hopkins
Quality & Safety Research Group (QSRG)
Baltimore, MD
USA

Since 1999, when the Institute of Medicine (IOM) made a compelling case for improving quality of care and enhancing patient safety in its report To Err Is Human: Building a Safer Health System, feasible yet effective methods to achieve these goals have been slow to emerge.(1) Though the number of reported quality and safety initiatives has increased substantially, and global dialogue on how to focus, measure and evaluate quality and safety efforts has intensified, there are still limited examples of broad-scale, measurable improvements in patient outcomes or safety culture.(2–4)

In 2004 the Johns Hopkins Quality and Safety Research Group (QSRG) began working with the Michigan Health & Hospital Association and 127 intensive care units on an improvement project that has become a safety exemplar (Keystone ICU). The project produced rapid and widespread improvement in the culture of safety and teamwork and a reduction in central-line-associated bloodstream infections (CLABSIs; formerly known as catheter-related bloodstream infections, or CRBSIs) (see Table 1).(5)

[Table 1]

We believe that an important factor in this success was project leaders’ understanding of the differences between technical and adaptive work, and their commitment to get both right. In his book Leadership Without Easy Answers, Dr Ron Heifetz clarifies the distinction between technical work (issues for which there is science and answers) and adaptive work (which requires a change in attitudes, beliefs and behaviours).(6)

Research at Johns Hopkins
In the Keystone ICU project we applied these distinctions to a model we had developed for leading change. The model includes four phases: engagement, education, execution and evaluation.(7,8) The technical work (the education and evaluation phases of the model) included compiling and synthesising the empiric evidence for the project interventions, developing standardised evidence-based measures, creating a sound data quality-control plan and a user-friendly yet stringent database for collection of ICU-level data, and rigorously analysing data at the team and collaborative level. This work was led by the Hopkins technical team. The approach capitalised on academic familiarity with the evidence and well-honed research skills, and directed the resources of the local teams toward the adaptive work of changing clinical practice, where their leadership was pivotal.
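
One of those standardised measures, the CLABSI rate, is conventionally expressed as infections per 1,000 catheter-days. As a purely illustrative sketch (in Python; the function name and the figures are our own invention, not the QSRG database code), the calculation looks like this:

def clabsi_rate(infections: int, catheter_days: int) -> float:
    # CLABSI rate as infections per 1,000 catheter-days, the standard
    # surveillance unit. The inputs below are hypothetical, not Keystone data.
    if catheter_days <= 0:
        raise ValueError("catheter_days must be positive")
    return 1000.0 * infections / catheter_days

# A hypothetical ICU, one quarter before and one after the intervention:
baseline  = clabsi_rate(infections=8, catheter_days=2900)   # ~2.8 per 1,000
follow_up = clabsi_rate(infections=2, catheter_days=3100)   # ~0.6 per 1,000

Standardising the denominator (catheter-days rather than, say, patient-days) is what makes such rates comparable across units and over time.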

The local ICU teams led the hard work of improving safety and teamwork climate in their own organisations (engage and execute phases of the model). Hopkins supported this adaptive work via monthly coaching and content calls, face-to-face meetings (two a year) and by creating intervention toolkits that teams personalised and used in their local setting. These kits included: references to sources of evidence; examples of intervention tools; educational slides and presentations; and suggestions on how to use the materials.

In addition, the commitment to get both the technical work and the adaptive work right facilitated the creation of a virtual learning community that continues to thrive. Within the community, respect for and reliance on the unique knowledge and skills of local teams and research experts benefits all.

The dearth of similarly successful efforts, with measurable improvement in patient outcomes, may be testament to the fact that the differences between technical and adaptive work are likely not well understood, and methods to rigorously address each are even less clear. Hence, while efforts to improve quality and safety flourish, adoption of methods to measure with certainty that outcomes are improving languishes. The reasons for this lack of stringent measurement and evaluation are just beginning to surface. Not surprisingly, perhaps, the immaturity of quality and safety improvement in healthcare is potentially at the root of this dilemma. In rapid response to the IOM reports, many well-intentioned organisations and clinicians rushed into action and ramped up what were often perfunctory QI programmes staffed by clinicians with little if any advanced training. Unfortunately, study design and measurement skills are foreign to most clinical leaders, and empiric evidence on what works in the implementation of evidence-based care is sparse. The industry quickly responded with short courses, training programmes and seminars designed to help committed organisations rise to the challenge of providing safer care. We are learning that, just as the quality and safety issues that confront us did not develop overnight, neither will successful methods to address them.

Practising quality and safety
We have come to recognise that the skills needed to achieve improvement are different from the skills needed to evaluate that improvement. Training clinicians to implement a plan–do–study–act (PDSA) cycle does not equate to having reliable data with which to make legitimate inferences about quality of care or the safety of patients. (In fact, the “study” step in the cycle is often interpreted to mean looking at the results, not scientifically analysing them.) Projects that purport to represent legitimate quality outcomes when they are in fact rife with biases squander scarce resources and may be strategically bereft of value.
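
To make that distinction concrete, the sketch below shows one minimal form of “scientifically analysing” a before-and-after result: comparing two infection rates with a large-sample test rather than eyeballing a chart. It is an illustration under stated assumptions (Poisson counts, a normal approximation, invented numbers), not a prescription for study design:

import math

def rate_difference_z_test(x1, t1, x2, t2):
    # Approximate z-test for the difference between two Poisson rates.
    # x1, x2: event counts (e.g. CLABSIs); t1, t2: exposure (catheter-days).
    # The normal approximation is crude for small counts, which is exactly
    # when an exact method, and a statistician, should be consulted.
    r1, r2 = x1 / t1, x2 / t2
    se = math.sqrt(x1 / t1**2 + x2 / t2**2)   # estimated standard error
    z = (r1 - r2) / se
    p = math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value
    return 1000.0 * (r1 - r2), p

# Invented before/after data for a single hypothetical ICU:
diff, p = rate_difference_z_test(x1=8, t1=2900, x2=2, t2=3100)
print(f"rate change: {diff:.2f} per 1,000 catheter-days, p = {p:.3f}")

Even this toy example surfaces the questions a perfunctory review skips: what is the denominator, how precise is the estimate, and could the change be chance?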

Thus, a twofold challenge now faces the healthcare industry: how can we curtail the plethora of resource-intensive QI and safety projects that offer no hope of answering with scientific rigour whether care is actually improving or patients are safer, while maintaining momentum for the development of a more scientifically sound process for the design, implementation, measurement, evaluation and reporting of quality and safety efforts?

Potential answers, unsurprisingly, involve both technical and adaptive work. Considerations for improving the design, measurement, evaluation and reporting of quality and safety projects are primarily technical and may include:

  • Creation of industry infrastructure to support rigorous training of personnel assigned responsibility for designing and managing projects. 
  • Enhancement of national and international efforts to standardise measures.
  • Development and enforcement of standards for the reporting of quality and safety data.
  • Increased funding for research to empirically evaluate the effectiveness of strategies to implement evidence-based care.

Considerations for improving the implementation of evidence-based quality and safety behaviours and discontinuation of projects that cannot reliably answer the question “Are we safer?” are primarily adaptive and may include:

  • Industry commitment to the philosophy that harm is untenable.
  • Leadership support for enhanced measurement, evaluation and reporting as part of the natural evolution of a maturing field.
  • New structured and unstructured communication between senior administrators and clinicians at all levels, with acknowledgement that improvement by “mandate” does not work.
  • Dedication of resources and creation of institutional infrastructure to prioritise quality and safety efforts on a par with other core business functions, such as finance.

Conclusion
In summary, the evolution of healthcare’s commitment to quality and safety improvement is at an important juncture. The rapid improvements hoped for after the IOM report To Err Is Human have not been realised. Efforts to improve safety have increased, but rigorous evaluation of most of them is missing. Valuable resources are being invested in projects that offer little hope of an empirically sound assessment of impact. Stating that culture must improve and care must be evidence-based requires a commitment to invest in the resources and infrastructure to support measurement and evaluation of care, and in research to understand what works. These technical changes will require significant new thinking but will be relatively easy in comparison with the adaptive work that must accompany them if quality and safety are to improve. The adaptive work of changing the minds, behaviours and beliefs of clinicians, administrators, policymakers, purchasers and consumers will be a much slower process. Measurably improving quality of care and safety of patients will take a commitment by all parties to address both the technical and the adaptive work of change.

References

  1. Kohn L, et al, editors. To err is human: building a safer health system. Institute of Medicine Report. Washington, DC: National Academy Press; 1999.
  2. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
  3. Wachter RM. The end of the beginning: patient safety five years after To err is human. Health Affairs 2004;1-12.
  4. Leape L, Berwick D. Five years after To err is human: what have we learned? JAMA 2005;293:2384-90.
  5. Pronovost P, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725-32.
  6. Heifetz R. Leadership without easy answers. Cambridge: Harvard University Press; 1994.
  7. Pronovost PJ, et al. Developing and implementing an innovative improvement model: patient safety research experts and hospital associations as partners. International Federation of Hospitals World Congress; Sept 2005.
  8. Pronovost PJ, et al. Creating high reliability in healthcare organisations. Health Serv Res 2006;41:1599-617.
