Quality health care “means doing the right thing at the right time, in the right way, for the right person, and getting the best possible results.”1 The term quality, by definition, can mean excellence, status, or grade; thus, it can be measured and quantified. The patient, and perhaps the patient’s family, may interpret quality health care differently from the way that health care providers interpret it. Therefore, it is important to determine—if possible—what is “right” and what is “wrong” with regard to quality health care. The study and analysis of health care are important to maintain a level of quality that is satisfactory to all parties involved. As a result of the current focus on patient safety, and in an attempt to reduce deaths and complications, providing the best quality health care while maintaining cost controls has become a challenge to all involved. Current quality initiatives are multifaceted and include government-directed, private sector-supported, and consumer-driven projects.
This chapter explores the historical development of health care quality including a review of the important pioneers and the tools they developed. Their work has been studied, refined, and widely used in a variety of applications related to performance-improvement activities. Risk management is discussed, with emphasis on the importance of coordination with quality activities. The evolution of utilization management is also reviewed, with a focus on its relationship to quality management.
In addition, this chapter explores current trends in data collection and storage, and their application to improvements in quality care and patient safety. Current events are identified that influence and provide direction to legislative support and funding. This chapter also provides multiple tips and tools for both personal and institutional use.
Data quality refers to the high grade, superiority, or excellence of data. Data quality is intertwined with the concept of quality patient care; it refers to data that can demonstrate and represent in an objective sense the delivery of quality patient care. When the data collected are reflective of the care provided, one can reach conclusions about the quality of care the patient received.
The concept of studying the quality of patient care has been a part of the health care field for almost 100 years. Individual surgeons, such as E. A. Codman, pioneered the practice of monitoring surgical outcomes in patients and documenting physician errors concerning specific patients. These physicians began the practice of conducting morbidity and mortality conferences as a means to improve patient care. Building on the prior work of individual surgeons, the American College of Surgeons (ACS) created the Hospital Standardization Program in 1918. This program served as the genesis for the accreditation movement of the 20th century, which included the concept of quality patient care and the formation of the Joint Commission on Accreditation of Hospitals (JCAH) in 1951. The ACS transferred the Hospital Standardization Program to the JCAH in 1953.
Efforts to improve the quality of patient care have varied during the 20th century, beginning with the establishment of formalized mechanisms to measure patient care against established criteria. A timeline illustrating these efforts is shown in Figure 7-1. These mechanisms focused on an organization’s reaction to individual events and the mistakes of individual health care providers. A variety of quality efforts followed, including ones developed in other industries that were adapted to the health care environment. The concepts of total quality management, defined as the organization-wide approach to quality improvement, and continuous quality improvement, defined as the systematic, team-based approach to process and performance improvement, introduced the team-based approach to quality health care. These newer efforts moved the focus from individual events and health care providers to an organization’s systems and their potential for improvement.
Figure 7-1 | Quality management timeline
Accompanying the change in focus were new terms such as quality management, quality assurance, process improvement, and performance improvement. Quality management generally means that every aspect of health care quality may be subject to managerial oversight. Quality assurance refers to those actions taken to establish, protect, promote, and improve the quality of health care. Process improvement refers to the improvement of processes involved in the delivery of health care. Performance improvement refers to the improvement of performance as it relates to patient care. Regardless of the names applied and their respective approaches, most health care organizations in the 21st century are bound by the requirements of various accrediting and regulatory bodies to engage in some function that focuses on the quality of patient care.2
In order to measure patient care for quality purposes, one must first possess data. The data crucial to supporting any quality initiative are the data found in the patient health record. These data must be reliable with respect to quality. Data errors can be made during many stages, such as when data are entered into the record (the documentation process), when data are retrieved from the record (the abstracting process), when data are manipulated (the coding process), when data are processed (the indexing and registry processes), and when data are used (the interpreting process). At each stage, the data must be both consistent and accurate. Furthermore, good quality data are the result of coordinated efforts to ensure integrity at each stage. A recent focus on the legibility of handwritten data, the appropriate use of abbreviations, and their relationship to medication errors has increased pressure from accrediting agencies to improve the quality of data as a means to improve patient safety.
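The idea that data must be checked for consistency and accuracy at each stage can be illustrated with a small validation routine. This is a minimal sketch only; the field names, record layout, and rules below are hypothetical and are not drawn from any standard or from this text.

```python
# Minimal sketch of stage-level data-quality checks on one abstracted record.
# All field names and rules here are hypothetical, chosen for illustration.

record = {
    "mrn": "004512",
    "discharge_date": "2013-06-14",
    "principal_dx": "428.0",   # an ICD-9-CM-style code (assumed format)
}

def check_record(rec):
    """Return a list of data-quality problems found in one abstracted record."""
    problems = []
    # Documentation/abstracting stage: required fields must be present and non-empty.
    for field in ("mrn", "discharge_date", "principal_dx"):
        if not rec.get(field):
            problems.append(f"missing {field}")
    # Coding stage: a crude format check on the diagnosis code.
    dx = rec.get("principal_dx", "")
    if dx and not dx.replace(".", "").isdigit():
        problems.append("principal_dx not in expected numeric format")
    return problems

print(check_record(record))        # a clean record passes with no problems
print(check_record({"mrn": ""}))   # an incomplete record is flagged
```

In practice such edits would run at each stage named above—entry, abstracting, coding, indexing—so that errors are caught close to where they are introduced.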
Quality health care management is the result of the dedication of a variety of professionals working in all levels of employment and in all aspects of health care. These professionals are supported by governmental offices at the federal, state, and local levels that define what data they require to be reported to them. When data definitions are not specified by the agency or organization requiring a report, the responsibility to define the data falls to the team or group that is responsible for collecting and disseminating the data. Fundamental to the collection and dissemination of data is the application of the appropriate collection format and reporting tools. However, before data collection can begin, there must be consensus on the parameters of the data to be collected. The team or group should also select an assessment model, such as quality circles, PDSA, or FOCUS PDCA. Quality circles are small groups of workers who perform similar work that meet regularly to analyze and solve work-related problems and to recommend solutions to management. These groups are also known as Kaizen teams, from the Japanese term kaizen, meaning continuous improvement.3 PDSA (Plan, Do, Study, Act), also known as PDCA (Plan-Do-Check-Act),4 is illustrated in Figure 7-2. FOCUS PDCA5 involves finding a process to improve, organizing a team that knows the process, clarifying the current knowledge of the process, understanding the causes of process variation, and selecting the process improvement. Figure 7-3 illustrates the FOCUS PDCA approach.
Essentially, these assessment models provide groups with guidance about how to organize the process. These models were developed largely as a result of the manufacturing industry quality movement of the 1950s and 1960s led by W. Edwards Deming, J. M. Juran, and Philip Crosby. In the 1960s, these models were applied to the health care sector by Avedis Donabedian, who separated the quality of health care measures into three distinct categories: structure, process, and outcomes.6 In the 1970s, when the Joint Commission on Accreditation of Healthcare Organizations, now known as the Joint Commission, and the Health Care Financing Administration (HCFA), now known as Centers for Medicare and Medicaid Services (CMS), began to mandate quality initiatives, health care looked to the successes of the manufacturing industry for direction and ideas.
Figure 7-2 | Plan, do, study (or check), and act assessment model
Figure 7-3 | FOCUS assessment model
The quest for quality, and the tools necessary to achieve it, eventually led to the development of the Malcolm Baldrige National Quality Award. The U.S. Congress created this award in 1987,7 establishing a new public-private partnership. Principal support for the award comes from the Foundation for the Malcolm Baldrige National Quality Award. The U.S. president announces the award annually. The award initially recognized the manufacturing and service sectors, including both large and small businesses, but it was expanded in 1999 to include the education and health care sectors; several health care organizations have applied for and received this award since then. In 2006, the program expanded even further to consider nonprofit and governmental organizations in the application process. The seven categories in which participants are judged for the Malcolm Baldrige Award are listed in Table 7-1. The focus of the evaluation centers on total quality management with emphasis on sustaining results.
Table 7-1 | Health Care Criteria in the Malcolm Baldrige Award
Leadership
Strategic planning
Customer and market focus
Measurement, analysis, and knowledge management
Human resource focus
Process management
Results
Source: Malcolm Baldrige National Quality Award, http://www.quality.nist.gov Courtesy of The National Institute of Standards and Technology (NIST).
Early pioneers who applied the Malcolm Baldrige concepts found it difficult at times to achieve effective implementation and/or sustain improvement. In an effort to achieve the greatest possible savings from the improvement projects, the Juran Institute, working with Motorola, developed a methodology called Six Sigma.8 Six Sigma is defined as the measurement of quality to a level of near perfection or without defects. General Electric (GE) and Allied Signal (now Honeywell) also contributed to the development and popularity of the methodology. Part of its success is attributed to the organization of training and leadership. High-level executives are trained and appointed as “champions” to drive the program, and employees receive training and support to become certified internal experts. The amount of training one receives results in different belt levels: black belts are technical personnel who are trained to apply the statistically based methodology. Master black belts coach black belts and coordinate projects. The project team members are referred to as green belts and also receive basic process-improvement training.
The Six Sigma Improvement Methodology is similar to PDCA and FOCUS PDCA, but it uses five steps, known as DMAIC: Define, Measure, Analyze, Improve, and Control. Many components of the health care industry have applied the Six Sigma improvement methodology toward the elimination of errors rather than the correction of defects (as it has been applied in industry). The approaches are similar, and both ultimately strive for perfection. Because a single error can have catastrophic consequences, up to and including a sentinel event or death, the concept of near perfection in the Six Sigma standards is important for all applications of health care delivery.
Federal Efforts Whereas the quest for quality led to the development of the Baldrige Award and Six Sigma, efforts at the federal level resulted in the formation of the Agency for Health Care Policy and Research (AHCPR) in 1989. Renamed the Agency for Healthcare Research and Quality (AHRQ) as part of the Healthcare Research and Quality Act of 1999, this body is a scientific research agency located within the Public Health Service (PHS) of the U.S. Department of Health and Human Services. AHRQ focuses on quality of care research and acts as a “science partner” between the public and private sectors to improve the quality and safety of patient care. Over time, the agency has changed its focus from developing and supporting clinical practice guidelines to developing evidence-based guidelines. AHRQ’s mission is to develop scientific evidence that enables health care decision makers to reach more informed health care choices. The agency assumes the responsibility to conduct, support, and disseminate scientific research designed to improve the outcomes, quality, and safety of health care. The agency is also committed to supporting efforts to reduce health care costs, broaden access to services, and improve the efficiency and effectiveness of the ways health care services are organized, delivered, and financed.
AHRQ has achieved numerous accomplishments since its inception. These accomplishments range in focus from the Medical Expenditure Panel Survey (MEPS), the Healthcare Cost and Utilization Project (HCUP), and the Consumer Assessment of Healthcare Plans Survey (CAHPS), to the grant component of AHRQ’s Translation of Research into Practice (TRIP) activity and the Quality/Safety of Patient Care program. The latter program encompasses both the Patient Safety Health Care Information program and the Health Care Information Technology program. Each of the programs listed here provides valuable information to the agency. For example, the Medical Expenditure Panel Survey (MEPS) serves as the only national source for annual data on how Americans use and pay for medical care. The survey collects detailed information from families on access, use, expense, insurance coverage, and quality. This information provides public and private sector decision makers with important data to analyze changes in behavior and the market. The Healthcare Cost and Utilization Project (HCUP) also provides information regarding the cost and use of health care resources but focuses on how health care is used by the consumer. HCUP is a family of databases containing routinely collected information that is translated into a uniform format to facilitate comparison. The Consumer Assessment of Healthcare Plans Survey (CAHPS) uses surveys to collect data from beneficiaries about their health care plans. The grant component, Translation of Research into Practice (TRIP), provides financial support to initiate or improve programs where needs are identified. Patient safety research is also an important element of these activities and includes a significant effort directed toward promoting information technology, particularly in small and rural communities where health information technology has been limited due to cost and availability.
Other research efforts for patient safety are focused on reducing medical errors and improving pharmaceutical outcomes through the Centers of Excellence for Research and Therapeutics (CERT) program.
AHRQ has provided grants to increase the use of health information technology, including electronic health records.
As a result of the growing concern for the increased use of health information technology (HIT) to improve the quality of health care and control costs, AHRQ awarded $139 million in contracts and grants in 2004 to promote the use of health information technology. The goals of the AHRQ projects are listed in Table 7-2. Grants were awarded to providers, hospitals, and health care systems, including rural health care settings, critical access hospitals, hospitals and programs for children, as well as university hospitals in urban areas. The locations were spread throughout the country from coast to coast, border to border, and included Alaska and Hawaii. Many grant recipients sought to develop HIT infrastructure and data-sharing capacity among clinical provider organizations. Other grant recipients sought to improve existing systems that were considered outdated, or to install technology where it had not previously existed, such as pharmacy dispensing systems, bar coding, patient scheduling, and decision-support systems. Some grants went toward the construction of a fully integrated electronic health record (EHR), such as one effort by the Tulare District Hospital Rural Health Consortium. Some universities received grants to employ technology for disease-specific projects, such as the Trial of Decision Support to Improve Diabetes Outcomes at Case Western Reserve University; others sought to develop cancer care management programs, such as the Technology Exchange for Cancer Health Network (TECH-Net) established by the University of Tennessee; and others worked to automate tracking of adverse events, such as the Automated Adverse Drug Events Detection and Intervention System established by Duke University. Still other grants focused on promoting statewide and regional networks for health information exchange, sometimes referred to as regional health information organizations (RHIOs). 
The goal of these projects is to develop a health information exchange that connects the systems of various local health care providers so they can better coordinate care and enable clinicians to obtain patient information at the point of care.9 More information concerning the work of RHIOs is found in Chapter 10, “Database Management.”
Table 7-2 | Goals of the AHRQ Projects
Improve patient safety by reducing medical errors
Increase health information sharing between providers, labs, pharmacies, and patients
Help patients transition between health care settings
Reduce duplicative and unnecessary testing
Increase our knowledge and understanding of the clinical, safety, quality, financial, and organizational values and benefits of HIT
© 2014 Cengage Learning, All Rights Reserved.
Among its accomplishments of the 21st century, the AHRQ has begun certifying patient safety organizations (PSOs). These organizations were created pursuant to the Patient Safety and Quality Improvement Act of 2005 and are designed to serve as independent entities that collect, analyze, and aggregate information about patient safety. They use these data to identify the underlying causes of lapses in patient safety. PSOs gather data through the voluntary reporting of health care providers and organizations according to the terms of the Patient Safety and Quality Improvement Final Rule (Safety Rule).
A second 21st century accomplishment of the AHRQ involves the creation of the National Strategy for Quality Improvement in Health Care (National Quality Strategy). Created pursuant to the Patient Protection and Affordable Care Act, the National Quality Strategy aims to improve the overall quality of patient care, reduce costs, and improve patient health. AHRQ developed the National Quality Strategy using evidence-based results of medical research and input from a wide range of stakeholders across the health care system.
A similar effort at the federal level to improve quality patient care originated in the U.S. Department of Health and Human Services and resulted in creation of the Center for Medicare and Medicaid Innovation. Also created pursuant to the Patient Protection and Affordable Care Act, the Center is designed to test innovative care and payment models and encourage adoption of practices that deliver high-quality patient care at lower cost.
The U.S. President connects the use of electronic health records with improvement in quality patient care.
One of the most significant efforts to focus attention on the importance of advancing health information technology as a means to improve the quality of patient care was made by U.S. President George W. Bush. In his State of the Union Address on January 20, 2004, he stated, “By computerizing health records, we can avoid dangerous medical mistakes, reduce costs, and improve care.”10 He acted on this statement shortly thereafter, establishing a national coordinator for health information technology within the U.S. Department of Health and Human Services. This coordinator announced that a 10-year plan would be developed to outline the steps necessary to transform the delivery of health care by adopting health information technology in both the public and private sectors. Included in these steps are the EHR and a national health information infrastructure (NHII), topics that are addressed in further detail in Chapter 10, “Database Management,” and Chapter 11, “Information Systems and Technology.”
Private Efforts Concern for improving the quality of health care also moved others to action. The Institute of Medicine, a private nonprofit organization that provides health policy advice under a congressional charter granted to the National Academy of Sciences, conducted an in-depth analysis of the U.S. health care system and issued a report in 2001. This report, Crossing the Quality Chasm: A New Health System for the 21st Century,11 identified a significant number of changes that had affected the delivery of health care services, specifically the shift from care of acute illnesses to care of chronic illnesses. The report recognized that current health care systems are more devoted to dealing with acute, episodic conditions, and are poorly organized to meet the challenges of continuity of care. The report challenged all health care constituencies—health professionals, federal and state policy makers, purchasers of health care, regulators, organization managers and governing boards, and consumers—to commit to a national statement of purpose and adopt a shared vision of six specific aims for improvement.
The report did not include a specific “blueprint” or standard for the future because it encouraged imagination and innovation to drive the effort. Specific recommendations included a set of guiding principles known as the Ten Steps for Redesign, the establishment of the Health Care Quality Innovation Fund to initiate the process of change, and development of care processes for common health conditions—most of them chronic—that afflict great numbers of people. This report served as a driving force behind the funding of grants through AHRQ and the other programs that have already been identified.
The National Committee for Quality Assurance (NCQA) is another organization involved in improving health care quality. Established in 1990, this organization focuses on the managed care industry. It began accrediting these organizations in 1991 in an effort to provide standardized information about them. Its Managed Care Organization (MCO) program is voluntary, and approximately 50 percent of the current HMOs in this country have undergone review by NCQA. Earning the accreditation status is important to many HMOs, because some large employers refuse to conduct business with health plans that have not been accredited by NCQA. In addition, more than 30 states recognize the accreditation for regulatory requirements and do not conduct separate reviews.
In 1992, NCQA assumed responsibility for management of the Health Plan Employer Data and Information Set (HEDIS), a tool used by many health plans to measure performance of care and service. Purchasers and consumers use the data to compare the performances of managed health care plans. Because the data set contains more than 60 highly specific measures, performance comparisons are considered reliable and comprehensive. The NCQA has designed an audit process that utilizes certified auditors to assure data integrity and validity. HEDIS data are frequently the source of health plan “report cards” that are published in magazines and newspapers. Included in HEDIS is the CAHPS 3.0H survey that measures members’ satisfaction with their care in areas such as claims processing, customer service, and receiving needed care quickly. The data are also used by the plans to help identify opportunities for improvement. A sample of HEDIS measures is shown in Table 7-3.
Table 7-3 | Sample HEDIS Measures, Addressing a Broad Range of Important Topics
Asthma medication use
Controlling high blood pressure
Antidepressant medication management
Smoking cessation programs
Beta-blocker treatment after a heart attack
Source: Information compiled from the National Association for Healthcare Quality (NAHQ), http://www.nahq.org.
Courtesy of the National Association for Healthcare Quality.
The NCQA also operates recognition programs for individual physicians and medical groups. These programs are voluntary, and physicians may apply through NCQA. Doctors who qualify must meet widely accepted evidence-based standards of care. One program includes a Diabetes Physician Recognition Program that was developed in conjunction with the American Diabetes Association. This program recognizes physicians who keep their patients’ blood sugar and blood pressure at acceptable levels and routinely perform eye and foot examinations. The Heart/Stroke Recognition Program (HSRP) is a partnership with the American Heart Association/American Stroke Association and recognizes doctors and practices that control their patients’ blood pressure and cholesterol levels, prescribe antithrombotics such as aspirin, and provide advice for smokers looking to quit.
Table 7-4 | NCQA Accrediting Domains for Accountable Care Organizations
ACO structure and operations
The organization clearly defines its organizational structure, demonstrates the capability to manage resources, and aligns provider incentives through payment arrangements and other mechanisms to promote the delivery of efficient and effective care.
Access to needed providers
The organization has sufficient numbers and types of practitioners and provides timely access to culturally competent health care.
Patient-centered primary care
The primary-care practices within the organization act as medical homes for patients.
Care management
The organization collects, integrates, and uses data from various sources for care management, performance reporting, and identifying patients for population health programs. The organization provides resources to patients and practitioners to support care management activities.
Care coordination and transitions
The organization facilitates timely exchange of information between providers, patients, and their caregivers to promote safe transitions.
Patient rights and responsibilities
The organization informs patients about the role of the ACO and its services. It is transparent about its clinical performance and any performance-based financial incentives offered to practitioners.
Performance reporting and quality improvement
The organization measures and publicly reports performance on clinical quality of care, patient experience, and cost measures. The organization identifies opportunities for improvement and brings together providers and stakeholders to collaborate on improvement initiatives.
Source: National Committee for Quality Assurance, www.ncqa.org.
Courtesy of the National Committee for Quality Assurance.
In 2011, NCQA began accrediting accountable care organizations, an entity created pursuant to the Affordable Care Act of 2010. An accountable care organization (ACO) refers to a group of providers and suppliers of services (e.g., hospitals, physicians, and others involved in patient care) that work together to coordinate care for the patients who receive Medicare health benefits. An ACO is designed to focus on preventive care, coordinate care among providers to reduce error and duplication of services, involve patients in their health care, and contain costs. The accreditation domains to be applied by NCQA to accountable care organizations are listed in Table 7-4.
The organization that brings all of the professionals involved in quality health care management together is the National Association for Healthcare Quality (NAHQ). This organization is based on the idea that quality health care professionals drive the delivery of vital data for effective decision making in health care systems. Organized in 1975 as the National Association for Quality Assurance Professionals (NAQAP) to represent these health care workers, the organization provides educational, research, and certification programs to its membership. Members include a wide range of professionals who focus on quality management, quality improvement, case/care/disease/utilization management, and risk management. The membership is composed of all levels of employment from all types of health care settings. Members achieve certification through examination and earn the credential of Certified Professional in Healthcare Quality (CPHQ); the examination recognizes professional and academic achievement. The organization also promotes networking and mentoring through educational meetings and publications. Membership includes physicians, nurses, health information management professionals, health care management professionals, information systems management professionals, social workers, and physical and occupational therapists, all with a common focus on improving the outcomes of health care.
Equally important as selecting a methodology is using assessment tools effectively. Several tools are often employed, including idea generation, data gathering and organizing techniques, cause analysis, and data display methods. While each tool is applicable in many environments, they apply especially well in the context of data quality because they assist in identifying progress, relationships, and the presence or absence of trends. This process of identification leads to a determination of the presence, absence, or level of quality. One useful resource for quality assessment tools is the Web site of the American Society for Quality (http://www.asq.org), where instructions and samples are available.
When new ideas are needed to address an issue or problem, brainstorming and benchmarking are often employed. Brainstorming refers to an idea-generating tool in which ideas are offered on a particular topic, in an unrestrained manner, by all members of a group within a short period of time. Brainstorming can be structured or unstructured, and it generally employs guidelines to assure that ideas are not criticized and that all ideas are accepted during the process. Benchmarking refers to the structured process of comparing outcomes or work practices generated by one group or organization against those of an acknowledged superior performer as a means of improving performance.
Once ideas are generated, the challenge lies in organizing them in a fashion in which they can be processed or analyzed. Organizational tools frequently used include affinity diagrams, nominal group techniques, Gantt charts, and PERT. An affinity diagram refers to a diagram that organizes information into a visual pattern to show the relationship between factors in a problem. This diagram is developed following a brainstorming session by grouping ideas into categories. Nominal group technique is an organizational tool wherein a list of ideas is labeled alphabetically and then prioritized by determining which ideas have the highest degree of importance or should be considered first. Gantt charts are graphic representations that show the time relationships in a project; these are often used to track the progress of a project and the completion of milestones and goals. Within the health care context, they are often used in process improvement activities to depict clinical guidelines or critical paths of treatment. PERT stands for Program Evaluation and Review Technique and is a tool used to track activities according to a time sequence, thereby showing the interdependence of activities. Concurrent activities, called parallel activities, are drawn along separate arrows to document their paths. PERT is often used by health care teams as a means to complete process improvement activities on time and in the proper order.
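The interdependence of activities that PERT documents can be sketched as an earliest-start calculation over an activity network. The activities, durations, and dependencies below are invented for illustration, not taken from any real project.

```python
# Minimal PERT-style earliest-finish sketch over a hypothetical activity network.
# Durations are in days; "depends_on" expresses the interdependence PERT documents.
activities = {
    "collect data":  {"duration": 3, "depends_on": []},
    "analyze data":  {"duration": 2, "depends_on": ["collect data"]},
    "design change": {"duration": 4, "depends_on": ["analyze data"]},
    "train staff":   {"duration": 2, "depends_on": ["analyze data"]},
    "implement":     {"duration": 1, "depends_on": ["design change", "train staff"]},
}

earliest_finish = {}

def finish(name):
    # An activity can start only after all its predecessors finish; parallel
    # activities (design change, train staff) simply share a predecessor.
    if name not in earliest_finish:
        start = max((finish(d) for d in activities[name]["depends_on"]), default=0)
        earliest_finish[name] = start + activities[name]["duration"]
    return earliest_finish[name]

project_length = max(finish(a) for a in activities)
print(project_length)  # minimum project length in days
```

Here "design change" (finishing on day 9) rather than "train staff" (day 7) determines when "implement" can begin, which is exactly the kind of sequencing insight a PERT diagram makes visible.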
Figure 7-4 | A sample cause-and-effect diagram
When the root of a problem or situation is particularly difficult to understand, analysis tools such as cause-and-effect diagrams and Pareto charts may be used. A cause-and-effect diagram, sometimes referred to as a fishbone or Ishikawa diagram, identifies major categories of factors that influence an effect and the sub-factors within each of those categories. The diagram begins with broad causes and works toward specifics, often examining the categories of the 4 Ms (methods/manpower/materials/machinery) or the 4 Ps (policies/procedures/people/plant). See Figure 7-4 for a sample cause-and-effect diagram. Within the health care context, this diagram is often used to conduct root-cause analysis of sentinel events as required by the Joint Commission. A Pareto chart is a bar graph used to identify and separate major and minor problems. It is based on the Pareto Principle, which posits that, for many events, roughly 80 percent of the effects come from 20 percent of the causes. This chart orders categories by frequency in descending order from left to right and is used to determine priorities in problem solving.
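Preparing the data behind a Pareto chart is a simple exercise in sorting and cumulative percentages. The sketch below uses hypothetical documentation-deficiency counts to show how the descending order and running percentage reveal the "vital few" categories that account for most of the impact.

```python
# Pareto-chart data preparation: order problem categories by frequency
# (descending) and compute cumulative percentages. Counts are hypothetical.

counts = {"illegible entry": 42, "missing signature": 31,
          "wrong form": 12, "late filing": 9, "other": 6}

total = sum(counts.values())
ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
for category, n in ordered:
    cumulative += n
    print(f"{category:18s} {n:3d}  {100 * cumulative / total:5.1f}%")
```

Reading down the cumulative column shows where the curve flattens; categories to the left of that point are the priorities for problem solving.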
In addition to tools that generate ideas, tools are available to gather data in both time- and labor-efficient fashions. Data gathering can be accomplished using forms, check sheets, surveys, questionnaires, written inventories, or computer screens with database or spreadsheet applications. Data can be gathered concurrently (i.e., at the same time the activity occurs) or retrospectively (i.e., looking backward at activity), with a time limit set for the period in which data are collected. The decision about which tool to employ rests on issues of whether a given project is time sensitive, cost sensitive, or both.
Once data are gathered, one must determine how to display them. Frequently used methods include bar graphs, histograms, pie charts, line graphs, control charts, and scatter diagrams. A bar graph demonstrates the frequency of data through the use of horizontal and vertical axes. Typically, the horizontal axis, or x-axis, shows discrete categories; the vertical axis, or y-axis, shows the number or frequency, as seen in Figure 7-5. A histogram is similar to a bar graph, containing both the x- and y-axes, with the exception that it can display data proportionally. This proportionality is shown through the use of continuous intervals for categories on the horizontal axis, as seen in Figure 7-6. Histograms are chosen over bar graphs when trying to identify problems or changes in a system or process, or where large amounts of continuous data are difficult to interpret in lists or other nongraphic forms. A pie chart is a graph used to show relationships to the whole, or how each part contributes to the total product or process. The frequency of data is shown through the use of a circle divided into sections that correspond to the frequency in each category. The 360 degrees of the circle, or pie, represent the total, or 100 percent. The "slices" of the pie are proportional to each component's percentage of the whole. A pie chart is seen in Figure 7-7. A line graph uses lines to represent data in numerical form, as seen in Figure 7-8. These graphs can show a process or progress over time, with several sets of data displayed concurrently to show relationships. A control chart is a graph with statistically generated upper and lower control limits used to measure key processes over time. Control charts focus attention on variation in the process and help a team determine whether a variation is normal or the result of special circumstances. An example of a control chart is shown in Figure 7-9.
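The "statistically generated" limits of a control chart are conventionally set three standard deviations above and below the process mean. A minimal sketch, using hypothetical weekly counts rather than real clinical data, shows how the limits are derived and how out-of-limit points would be flagged for investigation:

```python
# Control-chart limits: mean plus/minus three standard deviations.
# The weekly counts below are hypothetical, for illustration only.
import statistics

weekly_counts = [12, 14, 11, 13, 15, 12, 14, 13]

mean = statistics.mean(weekly_counts)
sd = statistics.stdev(weekly_counts)
ucl = mean + 3 * sd   # upper control limit
lcl = mean - 3 * sd   # lower control limit

# Points outside the limits suggest special-cause (not normal) variation.
out_of_control = [x for x in weekly_counts if not lcl <= x <= ucl]
print(mean, ucl, lcl, out_of_control)
```

In practice a team would plot each point against these limits over time; a point beyond a limit signals the special circumstances the text describes.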
A scatter diagram is a graph that shows the relationship between two variables and is often used as the first step in regression analysis. The graph pairs numerical data, with one variable in each axis, to help identify a relationship. An example is shown in Figure 7-10.
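The "first step in regression analysis" mentioned above can be made concrete with a least-squares fit through paired data. The (x, y) pairs in this sketch are illustrative only; the formulas are the standard slope and intercept of simple linear regression.

```python
# Simple linear regression over paired data, as a scatter diagram pairs
# one variable per axis. The (x, y) values below are hypothetical.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares slope: covariance of x and y over variance of x.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(f"fitted line: y = {slope:.2f}x + {intercept:.2f}")
```

A slope near zero would suggest no linear relationship between the two variables; a clearly positive or negative slope quantifies the trend a scatter diagram shows visually.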
Figure 7-5 | Bar graph
Figure 7-6 | Histogram
Deciding which tool to employ is often driven by determining the purpose behind an assignment, question, or project (e.g., is it to analyze, compare, or plan?) and how best to display the data. By becoming familiar with the tools and learning what they are used for, the learner is better able to reach these decisions. The tools, for the most part, can be useful for both planning and organizing, whether drawn by hand or using automated means via a computer or template. For example, in the study stage of a project, one may choose to employ a cause-and-effect diagram to help sort the information into categories. Whereas textbooks display neatly drawn diagrams with examples of completed projects, it is sometimes beneficial to use the tools oneself as a means to understand how to better choose from among them. Drawing diagrams, graphs, or charts by hand after generating ideas is a common way to organize thoughts. These diagrams, graphs, or charts can be changed repeatedly as the process progresses and formalized by automated means.
Applications
Data quality refers to more than just the "correctness" of data. Inherent in the concept of quality data is that data must be comprehensive, current, relevant, accurate, complete, legible, timely, and appropriate. To accomplish this, data must be viewed from prospective, concurrent, and retrospective approaches. Using the prospective approach, appropriate protocols and procedures for capturing required data must be established before a patient is even treated. For example, protocols regarding what data should be captured during preregistration and at the time of patient admission, who is responsible for capturing the data, and what procedures must be employed should all be coordinated in advance. Such protocols often involve the use of minimum data sets, a concept discussed in detail in Chapter 10, "Database Management." The concurrent approach to data viewing allows for the ability to clarify, verify, and edit data while the patient receives treatment. The concurrent approach is most practical when the patient receives treatment over time, as in an inpatient setting or during a nursing home stay. The retrospective approach to data viewing applies to a review of all data after the fact, allowing for editing where necessary and the completion of the coding and billing processes.
Just as important as putting the processes and procedures in place is determining what to do with the data that have been captured. Originally, the focus rested on internal examination of data, such as a hospital tracking the number of patients who had a certain diagnosis during a certain window of time. Reporting requirements to public health agencies and accrediting bodies gradually emerged, along with the needs of third-party payers to verify the provision of services for reimbursement purposes. Researchers began to demand quality data for studies, as did health care administrators and policy makers who compare costs associated with specific diseases. As a result of these demands for quality, the scope and amount of data required for comparison and study have also increased.
Figure 7-7 | Pie chart
Figure 7-8 | Line graph
Figure 7-9 | Control chart
During the most recent decade, the discussion of data quality has focused on how to use data to improve patient care and safety. One way to improve care and safety is through careful and constant observation. This observation activity involves the use of the quality monitoring cycle PDSA, the steps of which are described in Figure 7-11. The quality monitoring cycle uses data to recognize patterns and trends. This recognition serves as the connection between raw data and real-life circumstances, thereby turning data into meaningful information.
A second way to improve patient care and safety using observation is through benchmarking. Benchmarking is the process of comparing outcomes with those of an acknowledged superior performer. Utilizing this definition, outcomes refers to the changes or end results, whether positive or negative, that can be attributed to the task at hand (i.e., the delivery of health care). For example, outcomes could include changes in health status, knowledge, behavior, or satisfaction. The usefulness of benchmarking, however, extends beyond the mere comparison of outcomes. Rather, it helps people and organizations to learn how the superior performer achieved its goals and to determine how to incorporate those methods into operational practice. Within the data quality context, benchmarking has most recently involved the collection of core measurements—standardized sets of valid, reliable, and evidence-based measures. These measures are used in benchmarking as a way to determine whether a health care institution meets the standards of superior performance. For example, a surgical prevention measure may require administration of prophylactic antibiotics one hour prior to surgery as a means to reduce postoperative infections. A health care institution may examine its own records to determine the number of times it meets this standard compared to the number of surgeries performed.
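The core-measure comparison described above reduces to simple arithmetic: the number of cases meeting a standard divided by the number of eligible cases, expressed as a rate that can be set against a benchmark. The counts and function name in this sketch are hypothetical.

```python
# Core-measure compliance rate: cases meeting a standard (e.g., prophylactic
# antibiotic within one hour of surgery) over eligible cases. Hypothetical data.

def compliance_rate(cases_meeting_standard, eligible_cases):
    """Return compliance as a percentage, guarding against division by zero."""
    if eligible_cases == 0:
        return 0.0
    return 100.0 * cases_meeting_standard / eligible_cases

rate = compliance_rate(187, 200)   # e.g., 187 of 200 surgeries met the measure
print(f"facility rate: {rate:.1f}%")
```

An institution would compare this rate against the benchmark performer's rate to judge whether it meets the standard of superior performance.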
Benchmarking has often been used in conjunction with the quality improvement model in the health care context. Using the quality improvement model, a problem or process is chosen for study, data are collected to measure the problem or process, data are assessed, and a method for improvement is developed. It is at the data assessment level that benchmarking comes into play. Once the state of the problem or process is assessed using the appropriate data, the health care provider compares itself to the benchmarked competitor and decides the “what and how” of incorporating the competitor’s methods into its own operational practice. Within the patient safety context, common areas of focus for error reduction have included medication prescribing, dispensing, administering, and monitoring; exposures to communicable diseases and bodily fluids; and patient injuries.
A third way to improve patient care and safety is by the pressure exerted from the issuance of quality indicator (QI) reports. QI reports originally developed as an outgrowth of reporting requirements to public health agencies. For example, nursing homes have submitted a tremendous volume of data to CMS since 1996 through the use of the Minimum Data Set (MDS). CMS in turn has used this data to develop both public and nonpublic reports on the quality of care in nursing homes.
Figure 7-10 | Scatter diagram
The public report, entitled Nursing Home Compare, currently focuses on 15 quality measures.12 Because virtually every nursing home in the United States accepts patients whose care is funded by CMS, each nursing home receives a Nursing Home Compare report from CMS indicating its performance in relation to the quality measures. The Nursing Home Compare report also compares the data from an individual nursing home against national and state averages. Of the 15 measures, 12 are considered long-term measures and 3 are short-stay measures. The observation (or “look back”) time varies for each measure, lasting 7, 14, or 30 days. Regulations currently require that an MDS assessment be performed during admission, quarterly, annually, and whenever a resident experiences a change in status. Using this report, the nursing home is able to gauge its performance with regard to the 15 quality measures. More significantly, the Nursing Home Compare reports are available to the public over the Internet. This enables consumers to make more informed comparisons when choosing a nursing home. It also permits public scrutiny of nursing homes, which may result in improved patient care. The reports to the public are updated quarterly.
Other information that is made available to the public at the Nursing Home Compare Web site is compiled from the CMS Online Survey, Certification, and Reporting (OSCAR) database. This very comprehensive report includes nursing home characteristics, citations issued during the three most recent state inspections, and recent complaint investigations. The information is a combination of that reported by the nursing home through the survey and that of the state survey agencies compiled during on-site evaluations. The OSCAR data are updated on a monthly basis but may not reflect the most recent survey results.
The nonpublic report provides data on 24 performance measures to state public health agencies. These agencies use the data to specify deficiencies that require investigation during on-site inspections. Comparisons are made during the on-site inspection, and quality indicator reports are developed. State agencies issue these QI reports to the nursing homes in their respective states. The facilities then have the ability to gauge their performance against state averages and, where appropriate, introduce measures to improve the quality of patient care.
Figure 7-11 | The PDSA cycle model
In some states, the QI reports issued to nursing homes are also made available to the public over the Internet.
The concept of using quality indicator reports has followed a slow process to fruition. Because the reports are only as accurate as the data collected, some reports have been criticized as not reflecting the actual quality of care delivered in a given institution. This perceived lack of correlation has highlighted the need to ensure the recording and transmission of quality data, because those data may serve as a representation to the public of the quality of care at a given institution. The use of QI reports shows no sign of abating, though; CMS now issues similar QI reports for the hospital and home health industries.
Table 7-5 | Additional HQA Measures
Heart Attack (Acute Myocardial Infarction)
• Thrombolytic agent received within 30 minutes
• Percutaneous Coronary Intervention (PCI) received within 120 minutes of hospital arrival (previously Percutaneous Transluminal Coronary Angioplasty)
• Adult smoking cessation advice/counseling
Heart Failure
• Discharge instructions
• Adult smoking cessation advice/counseling
Pneumonia
• Blood culture performed prior to first antibiotic received in hospital
• Adult smoking cessation advice/counseling
• Appropriate initial antibiotic selection
Surgical Infection Prevention
• Prophylactic antibiotic received within one hour prior to surgical incision
• Prophylactic antibiotic discontinued within 24 hours after surgery end time
Source: Information adapted from Hospital Compare, http://www.hospitalcompare.hhs.gov.
Courtesy of the U.S. Department of Health and Human Services, www.hospitalcompare.hhs.gov.
The Hospital Compare report currently reports on four core measures: heart attack (acute myocardial infarction, or AMI), heart failure, pneumonia, and surgical infection prevention. Measures exist for each condition that represent the best practices for treatment. The performance rate for a particular facility is reported, along with comparisons to state and national averages. There is also a checklist for the consumer to use to gather and document comparative information. This information is available on the Hospital Compare Web site.13
The requirement for reporting data is one of the components of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), Section 501(b),14 which established the incentive payment program for eligible acute care hospitals to report on an initial set of 10 quality performance measures, known as the "starter set," and to agree to have their data publicly displayed. This new requirement became effective with patient discharges beginning in 2004. These are the same measures collected by the JC for participation in their accreditation programs as well as the voluntary reporting effort established by the Hospital Quality Alliance (HQA), a collaboration that includes CMS, the American Hospital Association, the Federation of American Hospitals, and the Association of American Medical Colleges. This collaboration received the support of numerous public and private organizations, including the Agency for Healthcare Research and Quality, the Joint Commission, the National Quality Forum, the American Medical Association, the American Nurses Association, the National Association of Children's Hospitals and Related Institutions, the Consumer-Purchaser Disclosure Project, the AFL-CIO, the AARP, and the U.S. Chamber of Commerce.
Additional measures exist for which hospitals may submit data and elect the option of displaying this data to the public on the Hospital Compare Web site. Some of these additional measures are listed in Table 7-5. Four core measures have been identified as the leading causes of hospitalization or for extending the length of stay. Heart attack and heart failure are common causes of admission for patients aged 65 or older, and these present high rates for morbidity and mortality. The measures are evidence based and represent best practices for treatment. One of the AMI measures is the documentation of the administration of aspirin within 24 hours of the patient’s arrival at the hospital. Another measure relating to AMI is the documentation of the prescription of aspirin at discharge. For both sets of core measures with patients who have a history of smoking, there is the additional requirement that they be given smoking cessation advice or counseling. The core measure for pneumonia also includes a requirement for smoking cessation advice or counseling and includes other measures related to the appropriate selection and timing of the administration of antibiotics. Another measure requires the documentation of patient screenings for pneumococcal vaccine status and the administration of the vaccine prior to discharge, if appropriate.
Table 7-6 | Selected Surgeries
Hip and knee arthroplasty
Abdominal and vaginal hysterectomy
Cardiac surgery (including coronary artery bypass grafts [CABG] and vascular surgery)
© 2014 Cengage Learning, All Rights Reserved.
The core measures for surgical infection prevention represent the best practices for the prevention of infection after selected surgeries, as listed in Table 7-6. The evidence-based measures indicate that the best practices for the prevention of infections after these procedures are related to the timing of the administration of antibiotics and the avoidance of prolonged administration of prophylaxis with antibiotics. One core measure requires the documentation of the prophylactic antibiotic within one hour of surgery, with another measure requiring documentation that the prophylactic antibiotic be discontinued within 24 hours after surgery.
Table 7-7 | Home Health Outcome and Assessment Information Set
Measures related to improvement in getting around
• Patients who get better at walking and moving around
• Patients who get better at getting in and out of bed
• Patients who have less pain when moving around
Measures related to meeting the patient’s activities of daily living
• Patients whose bladder control improves
• Patients who get better at bathing
• Patients who get better at taking their medicines correctly (by mouth)
• Patients who are short of breath less often
Measures related to patient medical emergencies
• Patients who have to be admitted to the hospital
• Patients who need urgent, unplanned medical care
Measures related to living at home after an episode of home health care ends
• Patients who stay at home after an episode of home health care ends
Source: Information adapted from Home Health Compare, http://www.medicare.gov/homehealthcompare
Courtesy of the Centers for Medicare & Medicaid Services, www.cms.hhs.gov.
Home health data went public with the Home Health Quality Initiative (HHQI), which started publishing data in the spring of 2003. The Home Health Compare report examines 10 quality measures related to outcomes of an episode or service (see Table 7-7). Three of the measures are related to improvement in mobility; four measures are related to meeting the patient’s activities of daily living, such as improvement in bladder control and the ability to take medicines correctly; two measures are related to patient medical emergencies, such as the percentage of patients who had to be admitted to the hospital and needed urgent, unplanned medical care; and one measure indicates the percentage of patients who remain at home after the episode of home health care ends. The information collected is called the Home Health Outcome and Assessment Information Set (OASIS). The public information is updated monthly, but there is a two- to three-month data lag time.
Quality indicator reports have begun to take a foothold at the state level, perhaps forecasting a trend toward wider use. For example, Pennsylvania, Missouri, and Illinois each require hospitals to report data concerning hospital-acquired or nosocomial infections to the state health department.15 This reporting allows not only for the tracking of trends related to antibiotic-resistant microbes but also for the development of quality indicators related to infection rates that are disseminated to the public via the Internet. Public demand for this information has increased, so that patients may make informed choices when selecting health care facilities; additionally, this information has been somewhat instrumental in passing legislation for public reporting of statistical and study results.
Pennsylvania was the first state to release reports to the public, beginning with information about four types of hospital-acquired infections in 2004. The hospitals were required to submit data to the Pennsylvania Health Care Cost Containment Council (PHC4). The reporting covered four types of hospital-acquired infections: three surgical site infection categories, Foley catheter-associated urinary tract infections, ventilator-associated pneumonia, and central-line-associated bloodstream infections. In 2006, the PHC4 began requiring hospitals to submit data on all hospital-acquired infections. The intent is to encourage health care facilities to take appropriate actions to decrease the risks of infection. Making information available to the public encourages facilities to direct resources toward improving or maintaining their statistical reports on infections.
One of the driving forces behind the passage of the legislation in Missouri was a father whose adolescent son developed an infection following a sledding accident and resultant fractured arm. The infection led to osteomyelitis and required six subsequent surgical procedures and five months of drug treatment, forcing the young man to miss school, a season of sports, and a summer lifeguarding job. The purpose of the legislation was not to be punitive but to spur hospitals to reduce the incidence of infection. The Missouri Nosocomial Infection Control Act of 2004 includes hospitals, ambulatory surgery centers, and other facilities that have procedures for monitoring compliance with infection-control regulations and standards. Physician offices are exempt. This information will also be available for the licensing of hospitals and ambulatory surgical centers in Missouri at a future date.
Mandatory reporting of infections is a part of the Illinois Hospital Report Card Act. Like Missouri, Illinois requires reporting of nosocomial infections related to Class I surgical site infections, central-line-related bloodstream infections, and ventilator-associated pneumonia. Other states, including Florida, New York, and Virginia, have since joined this trend to require hospitals to disclose or report information about infection rates to federal or state authorities.16 These requirements center upon nosocomial infections and may include tracking and reporting data concerning surgical site infections, infections associated with catheters, and pneumonia in patients on ventilators.
Reports are also available to the public that rate hospitals, physicians, and nursing homes. Some use Medicare Provider Analysis and Review (MEDPAR) data, which are composed of data from the Medicare population that are reported from claims data submitted by health care facilities. One organization, HealthGrades.com, uses MEDPAR data to form parts of comparison reports that are made available to the public. Another organization, the Leapfrog Group (leapfroggroup.org), uses data that are submitted voluntarily by health care organizations. One advantage offered by these data is that they are derived from a wide variety of organizations and include multiple categories of third-party payers, as opposed to data collected from only Medicare claims. Still other reports are issued as rankings by benchmarking organizations to which hospitals and other health care delivery systems may subscribe. One quality rating system for health care that uses surveys is available from the Consumer Checkbook, a nonprofit consumer information and service resource. The results of physician surveys that rank facilities by "desirability" ratings, risk-adjusted mortality figures, and adverse outcomes for several surgical procedures are available on a subscription basis for consumers; however, the rankings are a matter of physician opinion. Still other reports—such as those generated by WebMD (part of WebMD Health), a leading provider of health information services to consumers, physicians, and health care professionals—are only available to members who participate.
One newer application related to improving patient care and reducing errors, particularly with regard to medication, is the development of the personal health record (PHR). As patients and consumers become better informed of the expected outcomes of illnesses, the modalities of treatment, and the interactions of medications, they are taking more responsibility for keeping their own records. Through the promotion and use of patient PHRs, health care providers are offered the opportunity to compare their records against those of their patients, thereby leading to the possibility of improved consistency of data between both parties. This, in turn, can decrease the risk of errors and complications from current treatments and medications, and it helps patients recall all historical information accurately when they provide information to providers at various facilities.
PERFORMANCE IMPROVEMENT AND RISK MANAGEMENT
Two areas that relate to data quality and quality patient care are performance improvement and risk management. Similar to statistics and research, both performance improvement and risk management rely upon data that are collected, stored, and retrieved by automated methods. Some data collection, however, may still be abstracted from open or closed health care records, either in paper or electronic versions. Performance improvement and risk management are not limited to acute care facilities but are integral parts of the quality management programs within all types of health care systems, such as skilled nursing facilities (SNFs); home health agencies; and ambulatory, long-term, and rehabilitation facilities.
Performance improvement is a clinical function that focuses on how to improve patient care. It is related to database management in that the trend is toward the use of automated data to measure the performance of a health care provider or institution. The HIM professional may be involved in collecting data and compiling reports, as well as providing trending reports. Strong coding and analytical skills, along with database management skills, are essential to provide the appropriate data for effective performance improvement activities.
Fundamental to the concept of performance improvement is the review of a given process, including a determination of how well that process should function. During the review activity, it is important to understand who is affected by the process (e.g., patients and staff), what product is produced by the process (e.g., quality health care), and what is not working with regard to the current process. Some of this understanding can be gained through the extraction of data from clinical data repositories, data warehouses, and data marts.
It is often helpful to use a benchmarking methodology for performance improvement. Benchmarking, as previously discussed, is the process of comparing outcomes with those of an acknowledged superior performer as a means to improve performance. Data for benchmarking are available from many agencies, as described earlier in this chapter.
To compare its performance with those of other organizations, the health care organization can utilize the data found in external databases. As stated in Chapter 8, “Health Statistics,” health care data are reported to local, state, and federal government agencies pursuant to legal requirements. In addition, some health care organizations voluntarily report data to nongovernmental institutions pursuant to access/participation agreements. These data are collected and maintained under recognized standards and guidelines that govern form and content.
Within the context of accreditation, the most influential performance improvement method of recent years has been the ORYX Initiative of the Joint Commission. The goal of the ORYX Initiative is to “provide a continuous, data-driven accreditation process that focuses on the actual results of care (performance measurement) and is more comprehensive and valuable to all stakeholders.”17 Under the ORYX Initiative, performance data are defined, collected, analyzed, transmitted, reported, and used to examine a health care organization’s internal performance over time and to compare a health care organization’s performance with others. Those data serve as part of the information used by the Joint Commission to determine the accreditation status of health care organizations.
To assess its internal performance under ORYX, a health care organization would collect and aggregate its own data to measure patient outcomes. For example, an organization could aggregate data collected from similar patients and analyze them to determine whether certain treatment options are more effective than others. This analysis could further indicate if the effectiveness of the treatment options has varied over time. From this analysis, the organization could determine the need for additional improvement. Data used for comparisons concerning the ORYX initiative are available to the public through the JC Web site (under “Quality Check”). Reports are available for hospitals, nursing homes, home care agencies, mental health facilities, HMOs, and outpatient services that are accredited by the Joint Commission.
Core measures under ORYX support the integration of outcome data and other performance measurements into the accreditation process. The Joint Commission has developed specific core performance measures that can be applied across health care accreditation programs. These core performance measures are developed using precisely defined data elements, calculation algorithms, and standardized data collection protocols based on uniform medical language. These measures have been communicated to health care organizations for embedding in their respective databases, and data about these measures are to be reported to the Joint Commission on a quarterly basis.18 At this time, three core measures must be reported to the Joint Commission, with additional core measures scheduled for reporting over the next few years.19
Performance improvement as a continuous process has been a part of the Joint Commission reviews since the early 1990s. Prior to then, requirements for quality assessment focused on outcomes and processes; some of these reviews of clinical processes must still be conducted. Documentation review is one type of peer review that has changed its requirements, moving from a review of a designated number of elements—in a specified number of records in monthly and quarterly cycles—to a focused review based on periodic sampling. This review is essential to ensure that the health care record accurately reflects the care provided to the patient; it also supports safety and quality of care, as well as reimbursement and compliance. Health information management professionals usually conduct these documentation reviews, compile the data, and report the results to the appropriate committee or department responsible for initiating corrective action or improvement. Study results are reported to the medical staff as defined in the medical staff bylaws and a hospital's performance improvement plan. Deficiencies in documentation that become discipline issues are also included in the physician's record for re-credentialing considerations. Although the format for review and criteria may have changed, the responsibility still remains with the medical staff. The involvement and leadership of the medical staff in these activities is crucial to the success of the performance improvement program.
Physician involvement in other performance activities, such as surgical case review, medication usage review, blood and blood component review, mortality review, and infection control, is often accomplished by committees composed of medical staff, with assistance in data collection and abstraction provided by members of the Health Information Management and Quality Assurance staff. The Joint Commission specifies that these activities be consistent, timely, defensible, balanced, useful, and ongoing. The processes need to be defined clearly, with the participants and their roles, design methods, and criteria all identified. Criteria are the standards upon which judgments can be made, or the expected level(s) of achievement; the JC describes criteria as the specifications against which performance or quality may be compared.
Within the public health context, the most respected performance improvement initiative is the Comprehensive Assessment for Tracking Community Health (CATCH).20 Developed by the University of South Florida and supported by multiple public and private entities, CATCH collects, organizes, analyzes, prioritizes, and reports data on over 250 health and social indicators on a local community level. These data are gathered from hospitals; local, state, and federal government agencies; and national health care groups. Data are also gathered from door-to-door and mail-in surveys. These data are stored in a data warehouse, then mined and disseminated to Florida communities in the form of indicators of community health. This information brings greater awareness to communities and allows them to focus on initiatives, such as training and education, to improve the public’s health.
Risk management is a nonclinical function that focuses on how to reduce medical, financial, and legal risk to an organization. This reduction is tied to the definition of risk: the estimated probability that a given event will cause a loss affecting the operational or financial performance of an organization. Understanding the universe of probable events, the strategies employed to mitigate and minimize the effects of each of these events, and how to contain negative consequences is central to managing risk.
Traditionally, risk management dealt with assessing patient outcomes and events, writing incident reports, and reviewing past events to determine the need for changes in policy and procedure. An incident report refers to the documentation of an adverse incident, describing the time, date, and place of occurrence; the incident itself; the condition of the subject of the incident; statements or observations of witnesses; and any responsible action taken by the health care provider or organization. Adverse incidents may include accidents or medical errors that result in personal injury or loss of property. Incident reports are generally protected by the work-product privilege, meaning that they need not be released in response to a litigation request. More information about the work-product privilege can be found in Chapter 3, “Legal Issues.” Traditional statistical methods were employed to measure risk, and these statistics were reported to higher management levels and boards of directors. Risk management still uses these processes but now includes more focus on database management, primarily in two areas: using data in an automated fashion to measure a health care institution’s risk, and identifying the risk inherent with databases that contain enormous amounts of sensitive data.
Automated databases can be powerful tools in risk management. Because a database is a structured collection of data on multiple entities and their relationships, often arranged for ease and speed of retrieval, it is an ideal method for storing risk management data. The traditional approach of storing paper-based incident reports in a file cabinet did not provide a mechanism for sophisticated information searches, which can be performed in a database format with ease. Using a common and controlled database approach, data can be added and modified over time, thereby providing end users the data needed to perform their jobs as efficiently as possible. With the advent of sophisticated software applications and techniques such as data mining, databases can be searched for risk patterns that may be difficult to detect using traditional statistical methods. Once discovered, these data can be analyzed to predict the probability of future occurrences and to determine how to proceed with action, including mitigation efforts. This effort can lead to more effective loss prevention and reduction programs.
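The idea of searching an incident database for risk patterns can be pictured with a minimal sketch. The snippet below groups hypothetical incident records by type and location and flags any combination whose frequency exceeds a simple review threshold; the field names, sample records, and threshold are illustrative assumptions, not a standard incident-report schema, and real data mining tools are far more sophisticated.

```python
from collections import Counter

# Hypothetical incident records as they might be abstracted into a database;
# the field names (incident_type, location) are invented for this example.
incidents = [
    {"incident_type": "fall", "location": "3 West"},
    {"incident_type": "fall", "location": "3 West"},
    {"incident_type": "medication error", "location": "ICU"},
    {"incident_type": "fall", "location": "3 West"},
    {"incident_type": "mislabeled specimen", "location": "Lab"},
]

# Count occurrences of each (type, location) pair.
counts = Counter((i["incident_type"], i["location"]) for i in incidents)

# Flag any pair occurring more often than an (assumed) review threshold.
THRESHOLD = 2
patterns = {pair: n for pair, n in counts.items() if n > THRESHOLD}
print(patterns)  # {('fall', '3 West'): 3}
```

A risk manager reviewing this output would see that falls cluster on one nursing unit, a pattern that might be invisible when paper reports sit in a file cabinet.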
The incident report is still an integral component of any loss prevention program. This report can be prepared and submitted electronically in many facilities, although the paper version is usually still available. The data from the paper report may then be abstracted to facilitate data storage and documentation requirements. Once abstracted, the data can be analyzed to determine approaches to reduce risk in the future. An example of this analysis can be seen in Figure 7-12. A trend has emerged toward developing specialized reports, such as medication and surgical occurrence reports. Other occurrences that organizations often require to be reported to the risk manager are falls, lost property, IV complications, mislabeled lab specimens, and against medical advice discharges. Management of these types of occurrences is integral to an effective loss prevention program. In addition, risk managers are involved in investigations coordinated with clinical engineering to comply with the federal Safe Medical Devices Act, safety inspections mandated by the Joint Commission, and COBRA investigations.
Figure 7-12 | Incident Report Data
Risk management also involves claims management; risk managers often act as liaison to a health care organization’s attorneys. This may include conducting record reviews, arranging depositions, and providing the necessary documentation for claims investigations. The risk manager may also participate in interviews with professional and other staff related to adverse occurrences.
The Security Rule requires a risk analysis of electronically protected health information.
Risk management and database management also intersect with regard to the clinical data stored in automated systems, such as an electronic health record. The security management process standards (Security Rule) issued pursuant to the Health Insurance Portability and Accountability Act (HIPAA) require a covered entity to perform a risk analysis to determine security risks and implement standards to reduce risks and vulnerabilities to electronic protected health information.21 Such security risks may include breaches to the confidentiality, integrity, and availability of the electronic protected health information. The standards of the Security Rule do not specify the approach for this analysis nor do they specify what security measures should be implemented, allowing for flexibility by the covered entity. The standards do require, however, that the covered entity document its efforts, maintain this documentation for six years, and provide review and modification of the efforts on a regular basis.22
Security risks to electronic health information arise from both technical and nontechnical sources.
Installing security measures such as access and integrity controls is only the beginning of risk management efforts relating to an EHR; non-technological risks also pose threats. For example, access and security controls installed at the technological level can help prevent unauthorized access to sensitive patient information, and, on a non-technological level, in-service education programs can raise employee awareness about handling the same information. Similarly, complete and accurate information in the EHR can support the claims management function, serve as the basis of a defense in a lawsuit, and assist in promoting safety education programs—all areas that are central to a successful risk management program. With the use of data mining techniques, the EHR can be searched to assist in analyzing different areas of a health care delivery system, such as obstetrics, psychiatry, anesthesia, and surgery, to determine if they carry higher levels of risk. Finally, the EHR has been helpful in the risk management context through analyzing the occurrence of medication errors, inconsistent data entries, and contradictions in data.
Another part of an effective risk management program is Sentinel Event Review, a requirement of the Joint Commission since 1998. A sentinel event is an unexpected occurrence involving death or serious physical or psychological injury, or the risk thereof; serious injury includes loss of limb or limb function. The standards that relate specifically to the management of sentinel events are found in the Improving Organization Performance section of the JC accreditation manual. Organizations are required to establish mechanisms to identify, report, and manage these events. Organizations are also required to conduct a root-cause analysis to identify the cause of the event; the analysis should include a clinical as well as an administrative review. Examples of sentinel events that must be reviewed include significant medication errors, significant adverse drug reactions, confirmed transfusion reactions, and surgery on the wrong patient or wrong body part. Infant abduction or the discharge of an infant to the wrong family are also considered sentinel events.
Facilities are encouraged but not required to report sentinel events to the JC within 45 days of the event. If a facility chooses not to report the event and a family member makes the JC aware, or the JC becomes aware by other means, the JC will communicate to the facility the requirement to submit the findings of the root-cause analysis and action plans. Failure to do so within the specified time frame could result in placing the organization on Accreditation Watch status until the response is received and the protocol approved. An on-site review will not occur unless the JC deems it necessary due to a potential threat to patient health or safety or if there appears to be significant noncompliance with the Joint Commission standards.
Although risk management has already moved from a traditional focus to one that includes database management, it is evolving even further in the new century. In view of the many external factors that influence health care organizations, particularly those beyond the organization’s control, a new concept has been applied to risk management: enterprise risk management. Enterprise risk management (ERM) refers to the function of analyzing and evaluating all of the risks that confront an organization, not just the legal, financial, and medical risks that are traditionally considered. These additional risks include the threat of terrorism and its impact on professionals, patients, and the community; the heightened emphasis on corporate governance and compliance with statutes, regulations, and ethical standards; the increased presence of oversight authorities over business practices; the expanded awareness of patients and the public in general to medical and medication errors; the shortage of qualified staff in certain health care professions or in certain geographic regions; and the effect of the economy in general and in specific local regions upon the demand for unreimbursed health care. ERM considers these risks, and others not listed here, in combination and determines how they affect the health care organization’s strategic plan and overall health. ERM also considers risks in the context of the opportunities they may present, with the goal of exploring how those risks may be exploited to gain a competitive advantage.
A feature central to ERM is the focus on interrelationships and interdependencies. Instead of viewing risks in isolation and organizational departments as separate entities, ERM examines risks together across departmental lines. ERM also examines risks across activities and functions, factoring in how they interplay. Furthermore, ERM examines the health care organization’s relationship with external entities, sometimes resulting in a collaborative regional effort to mitigate and control loss. Such an approach is particularly applicable to emergency preparedness planning, because it permits the risk manager to examine the organization’s infrastructure and estimate how it will be affected by a catastrophic event. Such a proactive approach may well reduce costs to the health care organization, in both financial terms and how well the organization accomplishes its mission. As ERM increases in acceptance, its use in the health care industry should also increase.
One additional way in which risk management is addressing the 21st century is in its relationship with social media. Given that patient use of social media is becoming more common, health care providers have identified instances where their reputations were called into question publicly by patients complaining about the quality of care they received. Those statements that place the health care provider in a negative light pose risk to the health care provider’s reputation, not only with that patient but also with all who may encounter the negative statements in social media. Left unchecked, these statements not only harm reputations but also may influence potential patients to avoid using the health care provider for any future care. A reduction in potential patient load may have serious financial consequences for the health care provider. For these reasons, some risk managers now include social media as part of their responsibilities.
Utilization management refers to a combination of planned functions directed to patients in a health care facility or setting that includes prudent use of resources, appropriate treatment management, and early comprehensive discharge planning for continuation of care. The process uses established criteria as specified in the organization’s utilization review plan. Utilization review is the clinical review of the appropriateness of admission and planned use of resources; it can be, and often is, initiated prior to admission and is conducted at specific time frames as defined in an organization’s utilization review plan. This review involves the process of comparing pre-established criteria against the health care services to be provided to the patient to determine whether the care is necessary. To understand how utilization management differs from performance improvement and risk management, see Table 7-8.
Efforts at utilization management began in the 1950s and were employed at facilities that had frequent bed shortages as a way to allocate space to patients who demonstrated the greatest need. Utilization management first became mandatory in 1965 with the passage of the federal law establishing the Medicare program. The focus of the legislation at that time was on reducing the patient’s length of stay (LOS) in an effort to control the rising costs of health care. Medical evaluation studies were also part of the review process that focused on improving the quality of patient care. Physician involvement was central to the process and continues to this day, although many changes in the procedures employed have taken place through the years.
Table 7-8 | Contrasts between Performance Improvement, Risk Management, and Utilization Management
Performance improvement: improve patient care (clinical); reviews a process; example tools include benchmarking, data comparisons, the ORYX initiative, and CATCH; overseen by accrediting agencies and regulatory bodies.
Risk management: reduce risk and liability (nonclinical); reviews an adverse event; example tools include incident report analysis and sentinel event review.
Utilization management: use resources wisely (clinical); compares patient data against preestablished criteria; example tools include evidence-based guidelines and medical necessity review; overseen by third-party payers and QIOs.
© 2014 Cengage Learning, All Rights Reserved.
During the 1970s, utilization management became a required component of JC accreditation standards as well as a requirement for participation in the Medicaid reimbursement program. Further legislation in 1972 led to the formation of Professional Standards Review Organizations (PSROs), groups tasked with monitoring the appropriateness and quality of outcomes. In 1977, new legislation known as the Utilization Review Act defined the review process by requiring hospitals to conduct continued-stay reviews for medical necessity and the appropriateness of Medicare and Medicaid inpatient hospitalizations. The Health Care Financing Administration (HCFA), now called Centers for Medicare and Medicaid Services, began operation, charged with managing the Medicare and Medicaid programs that had previously been the responsibility of the Social Security Administration. Simultaneously, Congress passed fraud and abuse legislation to enable enforcement of the provisions of the act.
With enactment of the Tax Equity and Fiscal Responsibility Act (TEFRA) in 1982, the titles of these PSROs changed to Peer Review Organizations (PROs). TEFRA also established the first Medicare prospective payment system (PPS), which was implemented the following year. Using PPS, reimbursement was no longer based on a per diem rate, but on a predetermined rate based on the discharge diagnosis in relation to diagnosis-related groups (DRGs). More information concerning DRGs can be found in Chapter 6, “Nomenclatures and Classification Systems,” and Chapter 16, “Reimbursement Methodologies.” TEFRA’s changes placed additional focus on managing the length of stay through early and effective discharge planning. While these changes in the reporting and scope of utilization management occurred, the focus continued to be directed toward managing the cost of health care and assuring the best level of quality health care possible. CMS changed the PRO designation to Quality Improvement Organization (QIO) as a part of the “7th Scope of Work” (SOW), a document that updates the direction and focus of the organization.23
By the 1990s, the process of determining medical necessity expanded beyond the beneficiaries of Medicare and Medicaid to include the efforts of many managed care and group health insurance plans. Precertification for hospital admissions and surgical procedures became requirements of many of these private entities. In addition, some plans required authorization from primary care physicians before treatment in emergency care centers in nonemergency circumstances would be reimbursed as well as preauthorization for diagnostic radiological procedures.
Utilization review has evolved in the 21st century to incorporate evidence-based guidelines as part of the screening process. Several private companies, such as Milliman and McKesson (InterQual), have published evidence-based guidelines that are widely used in the health care field. The guidelines may be used at the time of preadmission, admission, and continued stay or concurrent review, as well as during discharge planning. Some are based on the level of illness and the patient services required, whereas others focus on ambulatory care, observation status, inpatient and surgical care, general recovery, home care, and chronic care.
Complying with the changing aspects of utilization review has been a challenge for many health care professionals. Case management refers to the ongoing review of patient care in various health care settings related to assuring the medical necessity of the encounter and the appropriateness of the clinical services provided. Case managers, also known as utilization coordinators, are frequently nurses or health information managers with responsibility for managing the review process and coordinating the patient’s care with physicians, nurses, and other allied health professionals. In many settings, the case management function is organized into a department and may also include social workers and clerical assistants to help with communication and coordination of the review activities. Utilization management continues to be a physician-centered function, though it is coordinated by case managers. In large facilities, case managers may specialize in specific areas, such as cardiology, orthopedics, or pediatrics; in smaller facilities, case managers must be trained to facilitate the variety of cases that the organization treats. Long-term care facilities and home health services are also required to have an established utilization management plan, although their requirements differ. In all settings, the focus rests on medical necessity and appropriate management of health care resources.
Figure 7-13 | Steps in the utilization review process
Utilization Review Process
The utilization review process consists of several steps or levels of review; these are listed in Figure 7-13. The process may begin with preadmission review, an element often required by managed care organizations. Preadmission review is performed prior to admission to the facility and operates to determine if the admission or procedure/treatment plan is medically necessary and appropriate for the setting. The case manager uses criteria and screening software, and in some cases may contact the patient’s third-party payer, to confirm that the admission is approved. If the admission is deemed inappropriate, the patient is directed to the appropriate level of care.
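The screening step can be pictured as comparing a proposed admission against preestablished criteria. The sketch below is purely illustrative: the criteria, patient fields, and settings are invented for the example, and commercial screening software applies far more detailed, clinically validated guideline sets.

```python
# Illustrative-only preadmission screen; the criteria values and patient
# attributes below are hypothetical, not drawn from any published guidelines.
CRITERIA = {
    "inpatient": {"min_severity": 3, "requires_iv_therapy": True},
}

def screen_admission(patient, setting="inpatient"):
    """Return 'approve' if the patient meets the preestablished criteria
    for the requested setting; otherwise direct to another level of care."""
    c = CRITERIA[setting]
    meets = (patient["severity"] >= c["min_severity"]
             and patient["iv_therapy"] == c["requires_iv_therapy"])
    return "approve" if meets else "redirect to appropriate level of care"

print(screen_admission({"severity": 4, "iv_therapy": True}))   # approve
print(screen_admission({"severity": 1, "iv_therapy": False}))  # redirect to appropriate level of care
```

The same compare-against-criteria logic recurs at admission review and concurrent review, only with different criteria and time frames.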
If preadmission review is not conducted, admission review is performed at the time of admission or as soon as possible thereafter to determine the medical necessity of the admission and the appropriateness of the plan of treatment. Criteria are also used here, as well as consultation with the patient’s third-party payer for authorization. An estimate of the length of stay may be established during this step. If some services cannot be performed at the facility, plans are initiated for the appropriate transfer. If any services are not deemed appropriate, the patient is notified that responsibility for payment rests with the patient and not with the third-party payer. The notification process is defined by the patient’s third-party payer and varies according to the types of notifications that must be made to the patient.
Concurrent review, or continued-stay review, is similar to preadmission and admission review. This review must assure the continued medical necessity and appropriateness of care being delivered to the patient. The review continues at specific intervals that may be tied to the diagnosis or procedure, or determined by the patient’s third-party payer. Case managers and health information management professionals are also responsible for assuring that appropriate documentation exists to support the decisions made regarding appropriateness of admission and the continued necessity and appropriateness of care. Facilities must have a Corporate Compliance Policy and Plan that addresses the documentation of appropriateness of admission and continued stay, and observation versus inpatient documentation requirements.
Discharge planning is the process of coordinating the activities employed to facilitate the patient’s release from the hospital when inpatient services are no longer needed. Discharge planning can be initiated at any stage of the utilization review process and evolves with the determination of the patient’s needs following discharge from the facility. When discharge planning is initiated at preadmission, there may be coordination with outside agencies, such as a home health agency for continuation of care and delivery of durable medical equipment to the patient’s home. Other arrangements may include transfer to another type of facility for continuation of care. Social workers and other health care professionals may become involved in the stages of discharge planning as well. Changes in the patient’s recovery can alter these plans and the participation of the various agencies involved. Case managers may coordinate this process; alternatively, separate discharge planners might possess primary responsibility for this function. Good communication and coordination are essential to efficient discharge planning.
An important aspect of the discharge planning process is the appropriate and clear documentation of the discharge status of the patient in the patient’s health record. CMS established this requirement in 1998 as part of its post-acute transfer policy (PACT). The discharge status is the description of the facility or service—such as a skilled nursing facility, rehabilitation care facility, or home health service—to which the patient will be transferred upon discharge. This documentation is essential to establish the appropriate patient status code that identifies where the patient will be sent at the conclusion of a health facility encounter or at the end of a billing cycle.
Often referred to as the Special 10 Transfer DRGs, the initial PACT policy specified how reimbursement was calculated for all cases assigned to one of the 10 DRGs when the patient was discharged to certain facilities considered a continuation of the episode of care. Patients discharged to home or custodial care, such as residential care or assisted living facilities, were not included in the calculation. In 2004, CMS expanded the list of DRGs to which the transfer rule applies from 10 to 29. The next year, CMS again expanded the list to include 182 DRGs.
The patient status code is a two-digit code that is entered on the UB-04 claim form, formerly known as the UB-92 claim form, an example of which is seen in Figure 7-14. Examples of patient discharge status codes are listed in Table 7-9. Omitting the status code or submitting a claim with the incorrect status code is a claim billing error and could result in rejection of the claim and loss of revenue. The Office of the Inspector General (OIG) has focused attention on those facilities that have high error rates and may assess fines for failure to achieve compliance with the requirement to document accurately. CMS performs edits to assure compliance, comparing the patient status code at discharge against the code used by the post-acute care facility in its billing process. For example, if a patient is transferred from an acute care facility to a skilled nursing facility, the discharge code on the UB-04 should correlate with the billing code from the SNF. This editing underscores the importance of documentation in the record that clearly supports the appropriate transfer status code. Facilities often conduct random audits as a part of their Corporate Compliance Policy and Plan, seeking to assure compliance and initiate corrective measures as well as reduce the potential for revenue loss and fines.
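The CMS edit described above can be thought of as a simple cross-check: does the discharge status code on the acute care facility's claim agree with the type of provider that subsequently billed for post-acute care? The sketch below illustrates the idea only; the code-to-facility pairings shown are assumptions for the example and should not be taken as the official patient status code list.

```python
# Hypothetical mapping from a claim's two-digit patient status code to the
# provider type expected to submit the follow-on bill (illustrative pairings).
EXPECTED_POST_ACUTE = {
    "03": "skilled nursing facility",
    "62": "inpatient rehabilitation facility",
}

def status_code_edit(discharge_code, post_acute_biller):
    """Return True when the discharge status code on the acute care claim
    is consistent with the facility type that billed for post-acute care."""
    expected = EXPECTED_POST_ACUTE.get(discharge_code)
    return expected == post_acute_biller

print(status_code_edit("03", "skilled nursing facility"))  # True
print(status_code_edit("03", "home health agency"))        # False
```

A mismatch like the second call is the kind of inconsistency that triggers claim rejection and that internal compliance audits try to catch before the claim is submitted.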
Case managers work closely with health information managers to conduct various audits and reviews. The activities may include audits for compliance with regard to observation versus inpatient stays, premature discharges and subsequent readmissions, and inpatient procedures performed in outpatient settings. Familiarity with the current OIG Work Plan assists case managers in designing the annual compliance activities accordingly. The Work Plan is not limited to acute care facilities but includes review of every facet of health care that receives reimbursement from the Medicare and Medicaid programs.
Although this discussion of the steps in the utilization review process has focused on the acute care setting, utilization review may vary in other settings. For example, utilization review in home care and skilled nursing facilities is similar to the acute care setting in design and process, but it uses criteria that are more specific to the scope of the facility. Medical necessity and appropriateness of the plan of care are central to utilization review. Coordinators who work with the discharge planners or case managers at acute care facilities usually conduct preadmission reviews before the patient receives care in the new setting. The type and amount of service provided are determined using specific criteria or in consultation with the patient’s third-party payer.
Utilization management remains central to the delivery of patient care in the 21st century. Both accrediting and licensing standards contain elements of utilization management with which health care organizations must comply. For example, the current JC standards specify that the provisions of ongoing care are based on patient needs even when denial of payment has been determined. The standard also includes provisions for the patient’s family to be involved in the decision-making process. Similar requirements are present in the Conditions of Participation in the Medicare and Medicaid reimbursement programs.24 Utilization management will continue to evolve as health care in the United States adapts to change.
Figure 7-14 | The UB-04 claim form
Table 7-9 | Patient Discharge Status Codes
Home or self-care: home, assisted living, home IV without home care, retirement home, foster care, home with home O2, homeless shelter, residential care facility, jail, or prison. Also includes transfer to outpatient services, such as catheter labs or radiology.
Short-term hospital for acute inpatient care: all acute hospitals except children’s hospitals, VA hospitals, psychiatric hospitals, and rehabilitation facilities.
Skilled nursing facility (SNF): SNF with Medicare certification. Does not include SNF with Medicaid only.
Intermediate care facility (ICF): facilities without Medicare or Medicaid certification. Includes patients returning to Medicare facilities for custodial care.
Another type of facility for inpatient care: does not include SNFs, rehabilitation, or long-term hospitals with specific status codes. Children’s, cancer, or chemical dependency hospitals are examples.
Home health service: home care services for skilled services. Does not include durable medical equipment (DME) supplier or home IV service only.
Left against medical advice or discontinued care.
Still a patient: partial billing or interim bill.
VA facility: any inpatient care at a VA facility (acute, psychiatric, rehabilitation, or SNF).
Hospice at home.
Hospice facility: hospice service in a hospital, SNF, or ICF.
Swing bed: SNF care in a swing bed arrangement.
Inpatient rehabilitation facility.
Long-term acute care hospital.
Medicaid-only nursing facility: nursing facility certified under Medicaid only, not Medicare.
Psychiatric hospital or distinct part or unit of a hospital.
Critical access hospital: hospitals with designation as critical access hospitals.
Source: Information from UB-92 Handbook for Hospital Billing, 2006 edition; American Medical Association and Patient Status Code FAQs; National Uniform Billing Committee, http://www.nubc.org.
© 2014 Cengage Learning, All Rights Reserved.
As this chapter illustrates, quality health care management is important to the entire health care system, affecting patients, health care providers, governmental entities, accrediting bodies, and third-party payers. Whether used to study clinical outcomes, support performance improvement and risk management efforts, or facilitate the utilization review process, the ready availability and quality of data are essential. Quality management is an equally integral part of health information management. The data that are collected, abstracted, coded, stored, and reported by health information management professionals must be accurate and timely to meet the demands of the health care professionals who use them for patient care delivery, as well as the needs of others for billing, payment, and health care research. Furthermore, the growing use of data to improve patient safety, reduce risks, and improve the allocation of health care dollars signals exciting developments for the future of health care.
Because of its ability to provide objectivity, data are an essential element used to measure the quality of patient care. Data have been used to study the quality of patient care for over a century, leading in part to the formation of accrediting organizations that focus on improving patient care. Data collected from the patient’s health record are a crucial part of any quality initiative in the health care field, including those at the federal level. Private efforts to improve patient care have measured the performance of care and service through data collected at the health care provider level, with the HEDIS data set serving as a model for managed care plans. The use of quality monitoring cycles, benchmarking processes, and quality indicator reports has expanded greatly in the last few decades, helping those within the health care field to improve the delivery of patient care and those outside the field to evaluate care given. Two areas that rely greatly upon data quality are performance improvement and risk management, with performance improvement focusing on the review of clinical processes as a way to improve the quality of patient care and risk management focusing on the review of nonclinical processes as a way to reduce medical, financial, and legal risk to an organization. Both areas have received considerable attention because of their potential to affect both the administration and delivery of patient care. By contrast, utilization management focuses on the appropriateness and planned use of resources as an effort to control health care costs. This focus has become central to the delivery of patient care, as both accrediting and licensing standards require health care organizations to comply with utilization management requirements.
Case Study

You are a quality analyst in a health information management department appointed to lead a project team. Your team must assess a problem with the documentation of the patient’s discharge disposition status in the health record. An increasing number of errors have been reported, and frustration among the coders has risen. The coders claim that conflicting information is often present in the record, requiring them to spend an inordinate amount of time obtaining verification; coding productivity has suffered as a result. How would you assess the problem? Give examples of tools you would use in a team meeting, ideas that might develop, and a mechanism for studying the problem.
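One tool the project team might bring to a meeting is a Pareto tally of error causes, which ranks problems by frequency so the team can focus on the "vital few." The sketch below is purely hypothetical: the error categories and counts are invented for illustration, as if gathered during a sample review of records:

```python
from collections import Counter

# Hypothetical tally of discharge-disposition documentation errors,
# as might be gathered by the project team during a record review.
errors = Counter({
    "Conflicting disposition in notes vs. discharge summary": 42,
    "Disposition field left blank": 18,
    "Transfer facility not specified": 9,
    "Illegible or ambiguous entry": 6,
    "Other": 5,
})

def pareto(counts):
    """Return (category, count, cumulative percent) rows, largest first."""
    total = sum(counts.values())
    rows, running = [], 0
    for category, n in counts.most_common():
        running += n
        rows.append((category, n, round(100 * running / total, 1)))
    return rows

for category, n, cumulative in pareto(errors):
    print(f"{cumulative:5.1f}%  {n:3d}  {category}")
```

Sorting the tally and tracking the cumulative percentage makes it easy to see which one or two causes account for most of the errors, which is the usual starting point for a performance improvement study.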
Review Questions

1. Name the stages in which data quality errors found in a health record most commonly occur.
2. What are the steps in the quality improvement model, and how is benchmarking involved?
3. What agency is focused on developing the scientific evidence used in decision making?
4. When should a histogram be used to display data?
5. How do performance improvement and risk management relate to database management?
6. Identify the tools that could be used when a group needs to develop new ideas or organize the performance improvement project.
7. How would an organization examine its internal performance under ORYX?
8. How is information collected in the Compare reports for hospitals, nursing homes, and home health agencies?
9. During what stage of the utilization review process is the appropriateness of the admission assessed?
10. Why is it important to have accurate and clear documentation of the patient’s discharge status?
Enrichment Activity

1. Using Medicare’s Web site, http://www.medicare.gov, search for “Nursing Home Compare” and compare reports for nursing homes located in your geographic area. Compare the data from at least three individual nursing homes against the national and state averages. Report your conclusions to your instructor. As an alternative, three hospitals or three home health agencies can be researched by searching the Medicare home page for “Hospital Compare” or “Home Health Compare.” Use the comparison tools of your choice to display the data.
Web Resources

Academy of Certified Case Managers, http://www.academyccm.org
Agency for Healthcare Research and Quality, http://www.ahrq.gov
American Health Information Management Association, http://www.ahima.org
American Society for Quality, http://www.asq.org
Case Management Society of America, http://www.cmsa.org
Center for Medicare and Medicaid Innovation, http://www.innovations.cms.gov
Commission for Case Manager Certification, http://www.ccmcertification.org
Consumers’ Checkbook, http://www.checkbook.org
Home Health Compare, http://www.medicare.gov/hhcompare/home.asp
Hospital Compare, http://www.hospitalcompare.hhs.gov
Institute of Medicine of the National Academies, http://www.iom.edu
iSix Sigma, http://www.isixsigma.com
Joint Commission, http://www.jointcommission.org
Juran Institute, http://www.juran.com
The Leapfrog Group Hospital Safety Score, http://www.leapfroggroup.org
Malcolm Baldrige National Quality Program, http://www.quality.nist.gov
National Association for Healthcare Quality, http://www.nahq.org
National Committee for Quality Assurance, http://www.ncqa.org
NCQA’s Health Plan Report Card, http://www.hprc.ncqa.org
Nursing Home Compare, http://www.medicare.gov/nhcompare/home.asp
Patient Safety Organizations, http://www.pso.ahrq.gov
Pennsylvania Health Care Cost Containment Council, http://www.phc4.org
Select Quality Care, http://www.selectqualitycare.com
The W. Edwards Deming Institute, http://www.deming.org
References

Carroll, R. (Ed.). (2004). Risk management handbook for health care organizations (4th ed.). San Francisco: Jossey-Bass.
Donabedian, A. (2003). An introduction to quality assurance in health care. Oxford, UK: Oxford University Press.
Green, M. A., & Bowie, M. J. (2005). Essentials of health information management. Clifton Park, NY: Delmar.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Rockville, MD: Author.
Kavaler, F., & Spiegel, A. (2003). Risk management in health care institutions: A strategic approach (2nd ed.). Sudbury, MA: Jones and Bartlett.
LaTour, K., & Eichenwald, S. (2002). Health information management concepts, principles, and practice. Chicago: American Health Information Management Association.
Shaw, P., Elliott, C., Isaacson, P., & Murphy, E. (2003). Quality and performance improvement in healthcare. Chicago: American Health Information Management Association.
Notes

Centers for Medicare and Medicaid Services (CMS) Resources, Glossary of Definitions, http://www.hospitalcompare.hhs.gov/.
See, e.g., 2003 Comprehensive Accreditation Manual for Hospitals: The Official Handbook, Performance Improvement Measures PI.1–PI.5, Performance Measurement, and the ORYX Initiative.
http://tutor2u.net/business/production/quality_circles_kaizen.htm (last accessed 01/11/12).
http://www.isixsigma.com/dictionary/Deming_cycle_PDCA-650.htm (last accessed August 7, 2011).
Donabedian, A. (1966). Evaluating the quality of medical care. Retrieved February 19, 2012, from http://www.milbank.org/quarterly/830416donabedian.pdf.
Public Law 100-107, signed into law on August 20, 1987, created the Malcolm Baldrige National Quality Award. See http://www.quality.nist.gov.
Six Sigma is a federally registered trademark of Motorola Corporation. See http://www.isixsigma.com.
Listings of other grants are available on the Agency for Healthcare Research and Quality Web site at http://www.ahrq.gov.
January 26, 2004, Weekly Compilation of Presidential Documents 94, 2004 WLNR 11425351.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. See http://www.iom.edu.
Nursing Home Compare reports available at http://www.medicare.gov/nhcompare/home.asp (last accessed 01/11/12).
Hospital Compare reports available at http://www.hospitalcompare.hhs.gov.
Medicare Prescription Drug Improvement and Modernization Act (MMA), 42 U.S.C. § 1395w (2012).
Hospital Report Card Act, 210 ILL. COMP. STAT. 86/1-99 (West 2012); Missouri Nosocomial Infection Control Act of 2004, MO. REV. STAT. § 192.667 (2012); Healthcare Cost Containment Act, PA. STAT. ANN. tit. 35, § 449.1-.19 (West 2012).
FLA. STAT. ANN. § 408.05 (1–3) (2012) (hospitals to report infection rates); N.Y. PUB. HEALTH LAW § 2819 (McKinney 2012) (hospitals to report nosocomial infections); VA. CODE ANN. § 32.1–35.1 (Michie 2012) (hospitals to report infections to federal and state authorities).
Joint Commission. (2003). Comprehensive accreditation manual for hospitals: The official handbook, performance measurement, and the ORYX initiative (IM.8, IM.10). Chicago: Author.
Ibid. Current core measures include acute myocardial infarction (AMI), heart failure (HF), community-acquired pneumonia (CAP), and pregnancy-related conditions (PR).
Information on CATCH may be found on the USF Center for Health Outcomes Research Web site at http://www.chor.hsc.usf.edu.
45 C.F.R. § 164.306 (2012).
45 C.F.R. § 164.306(e) (2012).
The 7th Scope of Work (SOW), Title XI of the Social Security Act, Part B, as amended by the Peer Review Act of 1982. Details of the most current work plan (9th Scope of Work) are available at https://www.cms.gov/OpenDoorForums/Downloads/QIO111306.pdf (last accessed 01/11/12).
42 C.F.R. Part 456, Utilization Control, Subparts B and C (2012).