Pay for performance and public reporting: risks to patients outweigh benefits.
Nearly all public and private-sector third-party payers have called
for a new medical services delivery system based on the use of pay for
performance (PFP) and public reporting (PR) programs (PFP/PR). These
programs seek to drive physicians to comply with quality and efficiency
standards created by third parties, through financial incentives and
disincentives, increased regulation, and public labeling of doctors as
inefficient or substandard. The programs and their metrics are being
created in committees subject to significant political influence.
Based on numerous studies reviewed here, PFP and PR benefit third parties but put patients at risk. Compliance with "best practice" standards does not improve patient outcomes. Adverse effects include physician avoidance of high-risk patients and system gaming by physicians and hospitals. These effects have a disproportionate effect on patients in minority and lower socioeconomic groups. Administrative and claims source data used in such programs are often inaccurate and invalid. Risk-adjustment methods are not adequate to fully account for the complex features of the highly variable patient population in the United States.
Given the lack of demonstrated benefit and the significant risks of injury to individual patients, the Take Back the Profession Advisory Group (TBPAG) at the AMA recommends immediate cessation of PFP/PR in the public and private sectors.
Publication: Journal of American Physicians and Surgeons. Publisher: Association of American Physicians and Surgeons, Inc. Copyright 2009. ISSN: 1543-4826.
Issue: Winter 2009; Volume 14, Issue 4.
Geographic: United States.
Governments face pressures from increased entitlement spending on Medicare and Medicaid, and private firms from the cost of employee benefits. Centrally designed and implemented pay for performance (PFP) and public reporting (PR) programs (PFP/PR) are proposed as a solution to perceived quality gaps (1) as well as excess spending, by groups such as the Institute of Medicine (IOM) and the Institute for Healthcare Improvement (IHI). It is frequently asserted that 100,000 Americans die every year from medical errors, that patients receive only 50% of "appropriate" medical care, and that they pay excessively for it. (2)
In treating an individual patient, it is understood that the risks of treatment must be outweighed by the benefits, considering available evidence, physician training and experience, and patient preferences. Programs designed to improve the health of "populations" must also weigh risks and benefits. But as PFP/PR programs proliferate, their benefits remain unclear and liabilities are appearing. Petersen et al. (4) have reviewed numerous studies.
Competent studies on PFP demonstrate that such programs simply reward health professionals who are already performing well, rather than improving the care delivered by those who are "under-performing." (5) PR programs of outcomes from coronary artery bypass grafting (CABG) have led to "gaming" of the system and to exclusion of high-risk patients in order to appear to meet standards, as detailed below.
As performance measures proliferate, doctors spend time "teaching to the test" (6) and focus more time on ensuring compliance than on providing patient care. Expensive systems such as electronic medical records place further economic burdens on hospitals and physician practices while often returning few or no improvements in outcomes or measurable cost savings. (7) They do, however, provide easy-to-manage information to the government and others who use the data to rate doctors' compliance. (7)
Further, such programs risk interfering with the economic viability of a physician's practice (as identified in the AMA's Principles and Guidelines on Pay for Performance), thus reducing patients' access to care. Werner and Asch point out that "in its current state, performance measurement is better suited to improving measured care than improving the care of individual patients." (8) Additionally, they note that performance measures may create only a small clinical benefit after great effort, the measures may not be prioritized to areas with greater clinical benefit, and doctors' attention may be diverted from larger individual needs of a patient in favor of compliance with narrow sets of measures.
Certainly the issues of medical quality and excess cost require careful consideration and innovative solutions. Solutions that work for patients must be found when problems exist. However, the patient safety problem has been greatly exaggerated to justify such PFP/PR programs. Clement McDonald has commented that the often-reported "100,000 deaths due to medical error" (9) number was largely exaggerated. (10) The same study in New York that produced this number was replicated in Utah and Colorado and estimated less than half that number of deaths (44,000). (11) The IHI has claimed to have saved 100,000 lives without offering significant supporting data, (12) and it has now embarked on a mission to stop five million episodes of "medical harm" over the next two years. (13) The leader of IHI, Donald Berwick, was initially opposed to PFP (14) but now supports the program. (15)
Healthcare spending has increased and now stands at 17% of the national economy, but few recipients of those services would consider them unnecessary. No one would disagree that there is room for improved quality; medicine is about constant improvement in practice. Few would also disagree that dollars could be better spent. The key question is this: Who decides on such use--third-party payers, or patients based on the advice of their physicians?
Advocates of PFP/PR have used patient safety to justify their programs, while a former AMA president has commented that the real intent is to limit spending. (16) PFP/PR programs propose to solve an economic problem with clinical solutions through third-party practice of medicine--without asking what is the cause of the economic problem. PFP/PR would simply perpetuate and expand the current centrally planned economy that started with the creation of Medicare and the reliance on employer-owned, third-party-payment policies. This situation has led Americans to believe that a small copayment or low annual out-of-pocket expense is all that is required to receive as many medical services as they choose. The most powerful constraint on spending--consumers spending their own money--has thus been removed from the equation. Instead of empowering consumers, PFP/PR would try to solve the problem by enlarging it. The suggested "consensus-building organizations" and "health-care transparency" are just new terms for more central planning.
Compliance with Performance Measures Does Not Improve Patient Outcome
Werner and Bradlow (17) demonstrated that compliance with Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) performance measures for acute myocardial infarction (AMI), congestive heart failure (CHF), and pneumonia did not measurably improve patient outcomes. The authors later note in a reply to letters that "three of the performance measures used by the Hospital Quality Alliance that were studied are not based on evidence from randomized controlled trials. ... When the potential benefit from a measured intervention is uncertain or small, there is increased risk that the inaccuracy of performance measures will outweigh the benefits [emphasis added]." (18,19) Despite the failure of compliance with process measures to improve outcome, the authors suggested creation of still more measures, an approach that has been questioned. (20)
Williams et al. (21) evaluated performance on 18 JCAHO performance measures for AMI, CHF, and pneumonia, which were later adopted in the Centers for Medicare and Medicaid (CMS) Premier Project. Sixteen of 17 process measures showed increased compliance, yet inpatient mortality did not change. The authors point out that inpatient death was not an accurate indicator of efficacy of these process measures since it included all sources of death, not just that from pneumonia, CHF, and AMI. They then state that the inpatient death "would not be expected to mirror trends observed for process measures," but they do not state the value of the process measures. It is fairly certain that if a positive impact on death had been seen in these three conditions, it would have been reported, since these researchers had access to the Diagnosis Related Groups (DRG) data.
Fonarow et al. (22) evaluated the association between performance measure compliance and outcome for heart failure more directly. They demonstrated that compliance with "best practices," designed to decrease mortality and re-hospitalization, produced no improvement in outcome at 60 and 90 days after discharge. These measures included: (a) use of discharge instructions for patients; (b) evaluation of left ventricular systolic function; (c) angiotensin converting enzyme (ACE) inhibitor or angiotensin II receptor blocker (ARB) use; (d) adult smoking cessation counseling; (e) anticoagulant use at discharge in atrial fibrillation; (f) prescription of beta-blocker at discharge (not part of the original measure set).
After adjusting the measures and outcomes for risk, there was no improvement in mortality for any indicator from the original American College of Cardiology/American Heart Association (ACC/AHA) set. When mortality and re-hospitalization rates were combined, there seemed to be a small beneficial effect for compliance with ACE inhibitor/ARB usage, but the rates of re-hospitalization were not reported separately and compliance did not independently reduce mortality. Despite assurances from the ACC/AHA in 2003 that these measures would improve outcomes, (23,24) they did not help.
Further, the ACC/AHA did not envision at that time the usage of beta-blockade as a means of improving outcome, although it appeared to have an effect without inducement of its use by a PFP program. Determining the reasons for lack of improvement in outcomes after application of the clinical practice guidelines is beyond the scope of this analysis, but could include poor studies serving as the basis for selection of the performance measures; poor patient compliance with recommended therapy; physician judgment excluding patients from application of the guidelines; or other unknown reasons. Alarmingly, members of the ACC and AHA offered in an editorial reply: "However, the absence of such a relationship [compliance improving outcome] for the other ACC/AHA performance measures does not refute their value. The purpose of process-of-care performance measures is not to improve outcome directly but to improve the provision of appropriate care processes [emphasis added]." (25) They do not explain how "provision of appropriate care processes" is beneficial to patients who see no improved outcome as compliance with performance measures is improved.
Peterson et al. (26) did report an improved outcome in inpatient mortality for AMI based on compliance with a composite of nine ACC/AHA measures. However, this was for composite of measures only, and there were significant exclusions in the study group for transfers, hospitals with fewer than 40 cases, "early" death, and low-risk patients. Thus, results are difficult to interpret due to study bias.
Further, the study purported to show improved inpatient mortality, but correlated this with compliance on discharge medication use (five medications), further clouding interpretation of the results. Moreover, this was a study of hospitals voluntarily participating in the "CRUSADE" trial (biasing the source of data), and significant compliance variances remained between one hospital and another.
Glickman (27) recently reported the failure of compliance with process measures to improve AMI mortality for patients in the CRUSADE trial of PFP. For CMS measures, compliance rates rose in hospitals equally whether or not they received financial incentives. For non-CMS measures, compliance rates also rose in both paid and unpaid participants, but more so for two of six of these indicators. Despite compliance rates above 90% (for CMS composite scores), by the end of the study outcomes for patients did not improve.
Bradley et al. (28) found that "hospital performance on the CMS/JCAHO process measures for AMI explained only 6% of the hospital-level variation in short-term, risk-standardized mortality rates for patients with AMI." They concluded that this "finding suggests that a hospital's short-term mortality rate cannot be reliably inferred from performance on the publicly reported process measures."
Pogach et al. (3) raise serious questions about whether adherence to a HbA1c level <7% will result in improved outcome. They indicate that the macrovascular benefits for patients with type II diabetes (90%-95% of diabetics) remain to be defined. Despite this, the National Committee for Quality Assurance (NCQA) has advocated public reporting of the rates of achieving levels <7%. The authors state that the unintended consequences of adherence to such a guideline include targeting individuals marginally above the target value, selection biases, patient safety concerns, and less regard for patient preferences.
Landon et al. (29) demonstrated that compliance with process measures for chronic disease at community health centers did not improve outcomes. Measure compliance improved for diabetes, hypertension, and asthma, yet the measured outcomes of hospitalization rates for asthma, blood pressure control, and control of glycosylated hemoglobin did not improve.
Compliance with CMS and JCAHO Performance Measures May Actually Harm Patients
One publicly reported quality measure is administration of antibiotics within 4 hours of presentation to an emergency department, even when it is unclear whether the patient has pneumonia or CHF. The 4-hour antibiotic rule is part of the "Hospital Compare" performance measures, compliance with which Werner and Asch (8) found to have little or no impact on hospital mortality. Early antibiotic use may be leading to false-negative sputum cultures. (6,18) Inappropriate antibiotic use may increase the incidence of Clostridium difficile colitis. The Surgical Infection Prevention (SIP) standard of ceasing an antibiotic by 24 hours after surgery ends was designed to decrease C. difficile colitis. (30) These two measures appear to be somewhat at odds with each other, if they have any beneficial effect at all. In fact, many surgeons have been reluctant to arbitrarily stop postoperative antibiotics at 24 hours as required by the SIP measures, (31) as this may lead to an increased rate of infection. (32-34) The Society of Thoracic Surgeons had recommended continuing antibiotics until 48 hours after sternotomy because of concerns about a higher rate of infection if they were stopped at 24 hours, yet CMS did not change its standard from 24 to 48 hours for some time. (35) Further, JCAHO delayed accepting ARBs as an alternative to ACE inhibitors for CHF and AMI when deemed appropriate by a physician. (36)
Wachter reports that due to the inpatient pneumococcal vaccination program, many patients are inappropriately receiving multiple doses of vaccine to ensure compliance. (6) In the same report, Wachter also points out that administrators who focus on ensuring compliance with PR measures may divert attention from more urgent clinical problems such as AMI or septic shock. Further, the need to comply with multiple standards may inappropriately increase the use of pharmacologic agents in the elderly, leading to patient harm or financial hardship for the patient. (6,37)
Use of information technology to ensure compliance through computerized physician order entry and electronic medical records may inordinately increase time at the computer and decrease the time doctors and nursing staff spend with the patient. (6,38) The phenomenon of copying and pasting progress notes and its negative impact on care has also been reported. (39) Further, the costs of information technology may divert valuable resources from patient care. Blumenthal recently reviewed the benefit of health information technology and reported benefits to health systems of guideline compliance, surveillance of disease conditions, reduced medication errors, and decreased utilization of care. However, physician workload was negatively impacted and other problems appeared, including increased incidence of certain medication errors, and increased mortality in a pediatric ICU setting that was later disputed. (7)
PFP/PR Leads to Gaming of the System to Allow Better "Grades"
Rather than improving actual quality, PFP/PR-induced compliance with "best practices," as defined by third parties, may simply represent "gaming" the reporting system. (40) Epstein (41) points out that physician bonuses for performance have been shown to increase documentation without changing quality of care. Lindenauer et al. (42) showed modest improvement in compliance with process measures, without improvement in outcomes, based on a financial bonus for compliance in Medicare's "Premier Pilot evaluation." However, several authors noted that compliance may have been achieved merely by having physicians and hospitals "teach to the test," reallocate care toward rewarded dimensions of quality at the expense of others, or by "sophisticated gaming of quality measures" without actually improving quality. (43,44)
In response to these criticisms, Lindenauer et al. (42) state that "more thorough documentation of patient ineligibility rather than more frequent use of recommended interventions might explain why improved performance on quality measures does not always lead to improved patient outcomes," thus providing a specific description of "gaming."
Gaming behavior has been amply demonstrated in a British PFP program in which a group of family physicians, given bonuses to meet certain objectives, simply excluded large numbers of patients by exception reporting. (45) Physicians may select patients so as to improve their rankings, note Werner and Asch. (46) Such behavior, described more completely below, should be considered a form of gaming.
PFP/PR Induces Physicians to Avoid High-Risk Patients
In 1989 New York began to report mortality in patients undergoing CABG, and some have credited this with lowering mortality from 3.52% in 1989 to 2.78% in 1992. (47,48) However, in 1999 Burack et al. (49) demonstrated that public reporting of CABG outcomes led to denial of surgical treatment to high-risk patients in New York. Werner and Asch (46) described a 31% increased transfer rate from New York State to the Cleveland Clinic for CABG surgery, and an increase in racial disparities in those who received CABG. Further, they noted that Pennsylvania cardiologists had more difficulty finding a surgeon for CABG, as surgeons in that state were also reluctant to operate on high-risk patients, given public reporting of outcomes. This has called into question the purported improvement in outcomes, given the exclusion of those patients likely to increase a surgeon's mortality rating.
In a recent survey, (50) 82% of internists reported that they would avoid high-risk patients as well as patients who are poorly compliant with treatment recommendations, if quality measure data were made public. Moscucci et al. (51) compared the case mix of patients undergoing percutaneous coronary intervention (PCI) in Michigan, which did not have PR, compared to New York, which did. There were fewer PCIs in New York for patients with AMI or CHF.
PFP/PR Likely to Harm Minority Patients More
Patients belonging to minority groups, especially black patients, are reportedly subject to "disparities" in care. (52,53) Liu et al. (54) have described a decreased rate of referral to or use of high-volume hospitals for complex surgery for non-white, Medicaid and unfunded patients. Casalino (55) recently raised concerns that PFP would increase racial disparities in care.
Werner et al. (56) have reported that New York heart surgeons began avoiding nonwhite minorities immediately following institution of PR on CABG-related mortality. Their rate of CABG was 19% lower than expected from 1992-1995, and the racial disparity took 9 years to recover to its baseline. This differential was not observed in states without such reporting, or for the incidence of PCIs in non-white patients in New York. PCI outcome was not reported publicly in New York during the study period.
Fitzgerald (57) writes that in West Virginia only those patients who keep appointments, receive recommended screenings, take medications as directed, and follow "health improvement" plans are eligible for an "enhanced plan" in Medicaid with better benefits. He points out that many socioeconomic factors may interfere with patient compliance with these requirements. Like these provisions, PFP could have unintended consequences in jeopardizing access of minorities and low-income patients to superior care. (50)
PR of measures including pressure sores and ability to walk or self-feed has not appeared to improve these outcomes in nursing homes. (46) PR programs divert resources, and by creating a false sense of security about the benefits of quality reporting programs may minimize attention to the true causes of low quality in nursing homes, where poor and minority patients are more likely to be placed. Angelelli et al. (59) observe that nursing homes that receive lower quality ratings are more likely to exit the Medicare and Medicaid markets, compromising still further the quality of care received by their black residents.
Some have suggested that quality reporting and use of quality measures will decrease racial disparities. (52) However, other centrally planned government programs, such as forced integration through school busing, the "War on Poverty," and the Federal Emergency Management Agency have failed to solve racial disparities in educational achievement, incarceration rates, and aid to Hurricane Katrina victims in New Orleans. Any possible gains for minorities through PFP/PR would probably be more than cancelled out by avoidance of high-risk patients and other adverse effects. (60)
Inaccurate Data, Poor Risk-Adjustment Methods
PFP/PR programs at this point rely chiefly on administrative and claims data, which have been found to be inaccurate in characterizing the complex features of medical care. (61,62) Problems with such data include inaccurate diagnoses as defined by ICD-9 methodology, missing co-morbidities, failure to accurately distinguish complications arising during inpatient stays from presenting diagnoses, and failure to control case mix. (63) Sherman et al. (64) found that administrative data have only a 20% positive predictive value for accurately identifying hospital-acquired infections. Based on administrative data, case volumes for CABG procedures were over-reported by as much as 20% for all patients, and under-reported by as much as 16% for Medicare in one study. (65)
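The practical meaning of the 20% positive predictive value reported by Sherman et al. (64) can be made concrete with a little arithmetic. The sketch below uses hypothetical counts chosen only to match that 20% figure; the numbers are illustrative and are not taken from the study itself.

```python
# Illustration of what a 20% positive predictive value (PPV) implies.
# The counts below are hypothetical, chosen only to be consistent with the
# 20% PPV reported by Sherman et al. (64); they are not from the study.

def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV = fraction of flagged cases that are genuine: TP / (TP + FP)."""
    return true_positives / (true_positives + false_positives)

# Suppose administrative data flag 100 admissions as hospital-acquired
# infections, but chart review confirms only 20 of them.
ppv = positive_predictive_value(true_positives=20, false_positives=80)
print(f"PPV = {ppv:.0%}")  # → PPV = 20%
```

In other words, at a 20% PPV, four of every five "infections" identified from claims data would be spurious, which is why rankings built on such data can misclassify hospitals.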
Difficulties with risk adjustment of patient populations have also made comparisons between those populations difficult. (66) Thus, the effect of more complex patients may not be accurately reflected in reported data for PFP/PR. This has led to the additional burden of requiring doctors to document "present on admission" diagnosis data in hospital records. (67) This shifts the administrative burden for coding from hospital administrative staff to physicians. Risk adjustment may not account for various social factors such as race and economic status.
Werner and Asch further note that even with the best risk adjustment of data, physicians may still shy away from high-risk patient groups to improve their reported ranking. (46) Boyd et al. reported the significant difficulties of properly controlling for numerous high-risk variables in older patients, and cautioned against using PFP in this population. (37) These patients often meet "exclusion" criteria for data-gathering purposes in PFP/PR, thus minimizing any putative benefit of these programs for these patients. Pogach reported a similar problem based on the presence of co-morbid conditions in one-third of diabetic veterans, thus interfering with the accuracy of HbA1c reporting in these patients and negating any benefits these "excluded" patients may receive. (68)
Hayward (69) recently pointed out that performance measurement is significantly different from use of clinical guidelines to educate doctors about treatment options. He states that "basic guidelines are rarely appropriate as 'all or nothing' performance measures," and that "the reasons that guidelines often make poor performance measures are non-intuitive and easily forgotten by those who do not take care of patients." He concludes that the selection of process measures appears to be a highly political and high-stakes process. Influential parties such as major corporations and health insurance companies are in controlling positions at bodies creating PFP/PR standards.
While the AMA has promulgated stringent Principles and Guidelines on PFP, these are often forgotten or blatantly ignored as third-party payers (employers, health insurers, and government regulators) feel increasing pressure to do whatever they can to control their escalating costs. The best example of this is the continuing development of cost-of-care measures for low back pain by the AQA (originally, the Ambulatory Care Quality Alliance), despite the absence of valid quality measures for back pain. I personally led several surgical specialty societies at the AQA to prevent this, but these efforts were overruled by representatives of third-party payers. The AQA also operates without defined due process or voting methods, and does not record the votes of involved parties. Thus, third parties that claim to be advocating for accountability and transparency in daily medical practice are not offering a transparent, accountable, fair, or scientifically sound process for centrally determining how medicine is practiced.
Corporations are responsible to their stockholders, and government responds to political pressure. The programs they propose to solve their economic problems are fraught with danger to patients. Physicians must serve their patients, and avoid doing them harm.
The Take Back the Profession Advisory Group (TBPAG) is a coalition of Delegates and Alternate Delegates to the AMA House of Delegates from various state and specialty medical societies. The group is working to ensure that physicians retain control of their profession, to protect and serve the best interests of their patients. The group is not an official arm of the AMA.
(1) Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: National Academy Press; 2001.
(2) McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635-2645.
(3) Pogach L, Engelgau M, Aron D. Measuring progress toward achieving hemoglobin A1c goals in diabetes care: pass/fail or partial credit. JAMA 2007;297:520-523.
(4) Petersen LA, Woodard LD, Urech T, Daw C, Sookanan S. Does pay-for-performance improve the quality of health care? Ann Intern Med 2006;145:265-272.
(5) Rosenthal MB, Frank RG, Li Z, Epstein AM. Early experience with pay-for-performance: from concept to practice. JAMA 2005;294:1788-1793.
(6) Wachter RM. Expected and unanticipated consequences of the quality and information technology revolutions. JAMA 2006;295:2780-2783.
(7) Blumenthal D, Glaser JP. Information technology comes to medicine. N Engl J Med 2007;356:2527-2534.
(8) Werner RM, Asch DA. Clinical concerns about clinical performance measurement. Ann Fam Med 2007;5:159-163.
(9) Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med 1991;324:377-384.
(10) McDonald CJ, Weiner M, Hui SL. Deaths due to medical errors are exaggerated in Institute of Medicine report. JAMA 2000;284:93-95.
(11) Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000;38:261-271.
(12) Wachter RM, Pronovost PJ. The 100,000 Lives Campaign: A scientific and policy review. Joint Commission Journal on Quality & Patient Safety 2006;32:621-627.
(13) Berwick DM, Hackbarth AD, McCannon CJ. IHI replies to "The 100,000 Lives Campaign": A scientific and policy review. Joint Commission Journal on Quality & Patient Safety 2006;32:628-630.
(14) Berwick DM. The toxicity of pay for performance. Qual Manag Health Care 1995;4(1):27-33.
(15) Berwick DM, DeParle NA, Eddy DM, et al. Paying for performance: Medicare should lead. Health Aff (Millwood) 2003;22(6):8-10.
(16) Plested WGI. Pay-for-performance: It's about cost control, not quality. Am Med News 2007;2:19.
(17) Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA 2006;296:2694-2702.
(18) Fierer J. Medicare's Hospital Compare performance measures and mortality rates. JAMA 2007;297:1430.
(19) Shekelle P. Medicare's Hospital Compare performance measures and mortality rates. JAMA 2007;297:1430-1431.
(20) McKalip D, Harbaugh RE. Process measures and short-term mortality for acute myocardial infarction. JAMA 2006;296:2557-2558.
(21) Williams SC, Schmaltz SP, Morton DJ, et al. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med 2005;353:255-264.
(22) Fonarow GC, Abraham WT, Albert NM, et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA 2007;297:61-70.
(23) Bonow RO, Bennett S, Casey DE Jr, et al. ACC/AHA clinical performance measures for adults with chronic heart failure: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures (Writing Committee to Develop Heart Failure Clinical Performance Measures) endorsed by the Heart Failure Society of America. J Am Coll Cardiol 2005;46:1144-1178.
(24) Fonarow GC, Yancy CW, Heywood JT, ADHERE Scientific Advisory Committee. Adherence to heart failure quality-of-care indicators in US hospitals: analysis of the ADHERE Registry. Arch Intern Med 2005;165:1469-1477.
(25) Radford MJ, Bonow RO, Gibbons RJ, Nissen SE. Performance measures and outcomes for patients hospitalized with heart failure. JAMA 2007;297:1547-1548.
(26) Peterson ED, Roe MT, Mulgund J, et al. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA 2006;295:1912-1920.
(27) Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA 2007; 297:2373-2380.
(28) Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA 2006;296:72-78.
(29) Landon BE, Hicks LS, O'Malley AJ, et al. Improving the management of chronic disease at community health centers. N Engl J Med 2007;356:921-934.
(30) Yee J, Dixon CM, McLean AP, Meakins JL. Clostridium difficile disease in a department of surgery. The significance of prophylactic antibiotics. Arch Surg 1991;126:241-246.
(31) Fry DE. The surgical infection prevention project: processes, outcomes, and future impact. Surg Infect (Larchmt) 2006;7(Suppl 3):s17-s26.
(32) Hedrick TL, Anastacio MM, Sawyer RG. Prevention of surgical site infections. Expert Rev Anti Infect Ther 2006;4:223-233.
(33) McCahill LE, Ahern JW, Gruppi LA, et al. Enhancing compliance with Medicare guidelines for surgical infection prevention: experience with a cross-disciplinary quality improvement team. Arch Surg 2007; 142:355-361.
(34) Bratzler DW. The Surgical Infection Prevention and Surgical Care Improvement Projects: promises and pitfalls. Am Surg 2006;72:1010-1016.
(35) JCAHO. Change in the National Hospital Quality Measure SIP-3 for patients undergoing cardiac surgery. Prophylactic Antibiotics Discontinued Within 48 Hours After Cardiac Surgery End Time. Effective January 2006.
(36) JCAHO. JCAHO/CMS Joint Statement, Jan 2006. Change in ACEI for LVSD measures (HF-3, AMI-3): Incorporation of ARBs, Nov 15, 2004.
(37) Boyd CM, Darer J, Boult C, et al. Clinical practice guidelines and quality of care for older patients with multiple comorbid diseases: implications for pay for performance. JAMA 2005;294:716-724.
(38) Wachter RM. Could computerization harm patient safety? Med Gen Med 2006;8:84.
(39) Hirschtick RE. A piece of my mind. Copy-and-paste. JAMA 2006; 295:2335-2336.
(40) Scott IA, Ward M. Public reporting of hospital outcomes based on administrative data: risks and opportunities. Med J Aust 2006;184:571-575.
(41) Epstein AM. Pay for performance at the tipping point. N Engl J Med 2007; 356:515-517.
(42) Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med 2007;356:486-496.
(43) Mullen KJ, Bradley EH. Public reporting and pay for performance. N Engl J Med 2007;356:1782-1783.
(44) Mansi IA. Public reporting and pay for performance. N Engl J Med 2007;356:1783.
(45) Doran T, Fullwood C, Gravelle H, et al. Pay-for-performance programs in family practices in the United Kingdom. N Engl J Med 2006;355:375-384.
(46) Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA 2005;293:1239-1244.
(47) Hannan EL, Kilburn H Jr, Racz M, Shields E, Chassin MR. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA 1994;271:761-766.
(48) Narins CR, Dozier AM, Ling FS, Zareba W. The influence of public reporting of outcome data on medical decision making by physicians. Arch Intern Med 2005;165:83-87.
(49) Burack JH, Impellizzeri P, Homel P, et al. Public reporting of surgical mortality: a survey of New York State cardiothoracic surgeons. Ann Thorac Surg 1999;68:1195-1200.
(50) Casalino LP, Alexander GC, Jin L, et al. General internists' views on pay-for-performance and public reporting of quality scores: a national survey. Health Aff (Millwood) 2007;26:492-499.
(51) Moscucci M, Eagle KA, Share D, et al. Public reporting and case selection for percutaneous coronary interventions: an analysis from two large multicenter percutaneous coronary intervention databases. J Am Coll Cardiol 2005;45:1759-1765.
(52) Trivedi AN, Zaslavsky AM, Schneider EC, Ayanian JZ. Relationship between quality of care and racial disparities in Medicare health plans. JAMA 2006;296:1998-2004.
(53) Schneider EC, Zaslavsky AM, Epstein AM. Racial disparities in the quality of care for enrollees in Medicare managed care. JAMA 2002;287:1288-1294.
(54) Liu JH, Zingmond DS, McGory ML, et al. Disparities in the utilization of high-volume hospitals for complex surgery. JAMA 2006; 296:1973-1980.
(55) Casalino LP, Elster A. Will pay-for-performance and quality reporting affect health care disparities? Health Aff (Millwood) 2007;26(3):w405-w414.
(56) Werner RM, Asch DA, Polsky D. Racial profiling: the unintended consequences of coronary artery bypass graft report cards. Circulation 2005;111:1257-1263.
(57) Fitzgerald F. The pitfalls of pay for performance. J Natl Med Assoc 2007;99(2):123-124.
(58) Angelelli J, Grabowski DC, Mor V. Effect of educational level and minority status on nursing home choice after hospital discharge. Am J Public Health 2006;96:1249-1253.
(59) Angelelli J, Mor V, Intrator O, Feng Z, Zinn J. Oversight of nursing homes: pruning the tree or just spotting bad apples? Gerontologist 2003;43(Spec No 2):67-75.
(60) Fisher ES. Paying for performance-risks and recommendations. N Engl J Med 2006;355:1845-1847.
(61) Tang PC, Ralston M, Arrigotti MF, Qureshi L, Graham J. Comparison of methodologies for calculating quality measures based on administrative data versus clinical data from an electronic health record system: implications for performance measures. J Am Med Inform Assoc 2007;14(1):10-15.
(62) Holcomb J. The role of administrative data in measurement and reporting of quality of hospital care. Tex Med 2000;96(10):48-52.
(63) Scott IA, Ward M. Public reporting of hospital outcomes based on administrative data: risks and opportunities. Med J Aust 2006;184:571-575.
(64) Sherman ER, Heydon KH, St John KH, et al. Administrative data fail to accurately identify cases of healthcare-associated infection. Infect Control Hosp Epidemiol 2006;27:332-337.
(65) Mack MJ, Herbert M, Prince S, et al. Does reporting of coronary artery bypass grafting from administrative databases accurately reflect actual clinical outcomes? J Thorac Cardiovasc Surg 2005;129:1309-1317.
(66) Stukel TA, Fisher ES, Wennberg DE, et al. Analysis of observational studies in the presence of treatment selection bias: effects of invasive cardiac management on AMI survival using propensity score and instrumental variable methods. JAMA 2007;297:278-285.
(67) Pine M, Jordan HS, Elixhauser A, et al. Enhancement of claims data to improve risk adjustment of hospital mortality. JAMA 2007;297:71-76.
(68) Pogach LM, Tiwari A, Maney M, et al. Should mitigating comorbidities be considered in assessing healthcare plan performance in achieving optimal glycemic control? Am J Manag Care 2007;13:133-140.
(69) Hayward RA. Performance measurement in search of a path. N Engl J Med 2007;356:951-953.
David McKalip, M.D., is a practicing neurosurgeon. Contact: 1201 5th Ave. N., #201, St. Petersburg, FL 33705. Tel: (727) 822-3500. Email: email@example.com.