Can administrative data identify active diagnoses for long-term care resident assessment?
Article Type: Report
Authors: Berlowitz, Dan R.
Hickey, Elaine C.
Saliba, Debra
Pub Date: 12/30/2010
Publication: Journal of Rehabilitation Research & Development, Department of Veterans Affairs; ISSN 0748-7711; Copyright 2010 Department of Veterans Affairs
Issue: Volume 47, Issue 8 (December 30, 2010)
INTRODUCTION

Rehabilitation for patients with disabilities is increasingly being provided in skilled nursing facilities [1], now known in the Department of Veterans Affairs (VA) as Community Living Centers (CLCs). Critical to assessing and improving the quality of this rehabilitation care is a comprehensive understanding of resident outcomes [2]. Outcomes data may be used to profile CLCs on the quality of their care and to identify benchmarks for best practices within the entire VA. In the examination of outcomes, risk adjustment helps ensure that any observed variations reflect differences in care rather than differences in patient mix. Risk adjustment for rehabilitation outcomes should incorporate many different patient-mix factors, including sociodemographics, functional status, cognitive ability, and sensory function [3-4]. A number of studies have also shown that comorbidities are an important patient risk factor to consider when adjusting rehabilitation outcomes [5-7]. Capturing information on comorbidities is therefore essential to the development of an outcomes tracking system for VA rehabilitation patients residing in CLCs.

Information on comorbidities is available on all nursing home residents, including those in VA CLCs, through the Minimum Data Set (MDS). This comprehensive resident assessment system was developed in response to the 1986 Institute of Medicine report on improving care in nursing homes [8] and includes information necessary for care planning. Specific sections address topics such as physical function, cognition, behavior, health conditions, and disease diagnoses. However, concerns have long been raised about the use of MDS data for purposes such as quality assessment and research [9-10]. In part, these concerns have been fueled by questions about the reliability of resident assessments, and studies have shown that the correlation among specially trained nurse assessors on various items may be low [11]. The Disease Diagnoses section of the MDS, which contains information on important comorbidities, has been viewed as especially difficult, in part because of the requirement that only active diagnoses be recorded. This requirement reflects the importance of the MDS in care planning, where knowledge of active diagnoses, as opposed to all diagnoses, is critical. Active diagnoses are defined as those that have a relationship to the resident's current functional status, cognitive status, mood or behavioral status, treatments, monitoring plan, or prognosis. The recently completed Data Assessment and Verification project, performed for the Centers for Medicare and Medicaid Services, identified Disease Diagnoses as one of the most common sections for discrepancies, mostly because of diagnoses that were no longer active being recorded in the MDS.

VA has a wealth of diagnostic data in its National Patient Care Database. Because these data are generated from recent hospital, outpatient, or long-term care encounters between patients and clinicians, they may be an alternate source of information on active diagnoses for use on the MDS. Therefore, as part of a validation of the proposed MDS version 3.0, we examined the agreement between VA administrative data and diagnostic data recorded in the MDS. Specifically, for the MDS data, we used MDS assessments performed by both specially trained research nurses and clinical nurses as part of routine care. These results could help establish the accuracy of VA administrative data and determine whether they may replace assessments currently performed by clinical nurses.

METHODS

Study Setting and Sample

This study was a part of the larger VA MDS 3.0 pilot testing and validation study funded by the Health Services Research and Development Service. Among the many goals of this study was to improve the accuracy of the diagnostic data collected during MDS assessments. Study participants were from four VA CLCs located in the northeast. At each CLC, residents were selected based on being scheduled for their routine MDS 2.0 assessment, which is typically done on admission, quarterly, and with significant changes in health status. Residents who were comatose were excluded.

Minimum Data Set Assessments

Within 48 hours of the required MDS 2.0 assessment, either of two research nurses completed an additional pilot MDS 3.0 assessment. We used this pilot version of MDS 3.0 to collect information on active diagnoses. The Disease Diagnoses section of the pilot MDS 3.0 is similar to that of the currently used MDS 2.0 in terms of the specific diseases captured. However, a major change is the development of more detailed protocols to describe when a disease is active, where in the medical record this information should be sought, and the time frame to be considered for activity. Thus, it stresses first determining whether the condition is present and then whether it is active. As an example, for heart failure, active disease requires a physician-documented diagnosis of heart failure plus one or more of the following: a physician note indicating active disease; a positive test, such as a chest X-ray, within the past 30 days indicating heart failure; signs or symptoms, such as dyspnea, attributed to heart failure; current medication treatment; or hospitalization for heart failure within the past 30 days. Specific International Classification of Diseases-9th Revision-Clinical Modification (ICD-9-CM) codes were assigned to each MDS 3.0 diagnosis to facilitate comparisons with administrative data.
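The two-step logic described above (first establish that the condition is present, then check for any evidence of activity) can be illustrated for the heart failure criteria as a simple boolean rule. This is a minimal sketch: the record fields are hypothetical names abstracted from the chart-review protocol, not part of the MDS 3.0 instrument itself.

```python
from dataclasses import dataclass


@dataclass
class HeartFailureEvidence:
    """Hypothetical chart-review findings for one resident."""
    physician_documented_dx: bool    # step 1: diagnosis present in the record
    active_disease_note: bool        # physician note indicating active disease
    positive_test_past_30d: bool     # e.g., chest X-ray indicating heart failure
    attributed_signs_symptoms: bool  # e.g., dyspnea attributed to heart failure
    current_medication: bool         # current medication treatment
    hospitalized_past_30d: bool      # hospitalization for heart failure


def heart_failure_is_active(e: HeartFailureEvidence) -> bool:
    # Step 1: without a physician-documented diagnosis, the condition
    # cannot be coded as active regardless of other evidence.
    if not e.physician_documented_dx:
        return False
    # Step 2: any one supporting criterion establishes activity.
    return any([
        e.active_disease_note,
        e.positive_test_past_30d,
        e.attributed_signs_symptoms,
        e.current_medication,
        e.hospitalized_past_30d,
    ])
```

A resident with a documented diagnosis and current medication treatment would be coded as active; a resident with signs and symptoms but no physician-documented diagnosis would not.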

The trained research nurses conducted a detailed review of medical records to identify active diagnoses. These two research nurses had received extensive training in the use of MDS 3.0 and, in the case of the Disease Diagnoses section, had helped in the development of the criteria used to determine disease activity. Thus, the research nurses may be considered as the "gold standard" assessment. A total of 120 patients were evaluated by the research nurses.

Fifty-eight of these patients also had a pilot MDS 3.0 assessment completed by the clinical team. Typically, these assessments were performed by the MDS coordinator on the unit and were based on the assessor's knowledge of the resident, discussions at team meetings, and review of the medical records. These nurse-assessors had received more limited training in the use of the instrument and could be considered to represent how the assessment would typically be completed in actual clinical practice.

Department of Veterans Affairs Administrative Data

We used the VA National Patient Care Database to collect all ICD-9-CM codes from the year before the MDS 2.0 assessment for the 120 patients. We used ICD-9-CM codes from hospital, outpatient, and long-term care settings. However, we excluded codes from nonclinician visits, such as laboratory or radiology. No diagnostic data were collected from non-VA sources such as Medicare.
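The extraction rule above (a 1-year lookback before the assessment, clinician encounters only) can be sketched as follows. The encounter record layout is a hypothetical illustration for clarity, not the actual National Patient Care Database schema.

```python
from datetime import date, timedelta

# Settings whose codes are excluded as nonclinician visits.
NONCLINICIAN_SETTINGS = {"laboratory", "radiology"}


def codes_in_lookback(encounters, assessment_date, lookback_days=365):
    """Collect ICD-9-CM codes from clinician encounters in the year
    before the MDS assessment. Each encounter is a dict with keys
    'date', 'setting', and 'icd9_codes' (hypothetical layout)."""
    window_start = assessment_date - timedelta(days=lookback_days)
    codes = set()
    for enc in encounters:
        if enc["setting"] in NONCLINICIAN_SETTINGS:
            continue  # skip laboratory/radiology visits
        if window_start <= enc["date"] < assessment_date:
            codes.update(enc["icd9_codes"])
    return codes
```

A patient's resulting code set is then compared against the ICD-9-CM codes assigned to each MDS 3.0 diagnosis.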

Analyses

Separate analyses were performed for the research and clinical nurses. We examined those MDS 3.0 diagnoses present in at least 15 percent of the patients when assessed by any source, whether research nurse, clinical nurse, or administrative data. Two-by-two tables were constructed for the presence or absence of each diagnosis in the nurse assessment and in the VA administrative data. The overall level of agreement between the two data sources was calculated for each diagnosis with the kappa statistic. Sensitivity and specificity were then calculated, with the nursing assessment as the gold standard. Thus, sensitivity described the proportion of patients identified by the nurse as having the disease who were also identified as having the disease in the administrative data, and specificity described the proportion of patients identified by the nurse as not having the disease who were also identified as not having the disease in the administrative data.
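The statistics described above can be computed directly from each diagnosis's two-by-two table. The sketch below assumes each patient contributes a simple (nurse, administrative) pair of boolean indicators; the function name is illustrative, not from the study's analysis code.

```python
def kappa_sens_spec(pairs):
    """Given (nurse, admin) boolean pairs for one diagnosis, return
    (kappa, sensitivity, specificity) with the nurse as gold standard."""
    n = len(pairs)
    tp = sum(1 for nurse, admin in pairs if nurse and admin)
    fn = sum(1 for nurse, admin in pairs if nurse and not admin)
    fp = sum(1 for nurse, admin in pairs if not nurse and admin)
    tn = n - tp - fn - fp
    p_obs = (tp + tn) / n  # observed agreement
    # Expected chance agreement, from the marginal totals of the 2x2 table.
    p_exp = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    sensitivity = tp / (tp + fn)  # admin finds the nurse-positive cases
    specificity = tn / (tn + fp)  # admin agrees on the nurse-negative cases
    return kappa, sensitivity, specificity
```

For example, a table with 40 true positives, 10 false negatives, 5 false positives, and 45 true negatives yields kappa = 0.70, sensitivity = 80 percent, and specificity = 90 percent.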

RESULTS

Nineteen diagnoses were evaluated. For most diagnoses, limited agreement existed between the research nurses and the administrative data (Table). In only eight diagnoses did the kappa value equal or exceed 0.60: uncomplicated diabetes mellitus, stroke/transient ischemic attack (TIA), coronary artery disease, chronic heart failure, thyroid disorder, hemiplegia/paraplegia/quadriplegia, asthma/chronic obstructive pulmonary disease, and schizophrenia. For other diagnoses, the level of agreement was generally poor, with a kappa value as low as 0.18 for depression. Results were very similar when clinical nurses were compared with administrative data (Table), again with only eight of the diagnoses having kappa values exceeding 0.60. Research and clinical nurses were also similar in terms of which diagnoses showed high and low levels of agreement with administrative data. The eight diagnoses with the highest kappa values (>0.60) for the research nurses included six diagnoses with the highest kappa values for the clinical nurses. The six diagnoses with low kappa values for the research nurses (<0.40) included the four with the lowest kappa values for the clinical nurses. No clear pattern was evident as to which diagnoses had high or low levels of agreement. Mental health disorders included the diagnoses with the highest kappa value, schizophrenia, and the lowest kappa value, depression.

Sensitivity of administrative data compared with the research nurses varied considerably, ranging from 30 percent for depression to 100 percent for both stroke/TIA and hemiplegia/paraplegia/quadriplegia. Low sensitivity indicates that diagnoses identified by the research nurse as present may not be listed in the administrative data. Specificity, in most cases, was better than sensitivity and varied less, with a range from 77 to 100 percent. High specificity indicates that the administrative data rarely listed diagnoses as present within the past year when the nurses indicated they were absent. Results from the clinical nurses were generally similar to those from the research nurses.

DISCUSSION

Accurate information on diagnoses is essential in care planning and tracking of outcomes of nursing home residents receiving rehabilitation. Numerous studies have confirmed the validity of the diagnostic data contained in VA administrative records [12-14]. However, for nursing homes, the MDS requires that the diagnosis be not only present but active. This added requirement has not been previously examined in the VA. Given the difficulties clinical staff have in identifying active diagnoses, we hypothesized that VA administrative data might serve as a useful substitute in the completion of MDS for CLC residents.

Our results did not support this hypothesis. We found that the level of agreement, as reflected by kappa values, was generally low when we compared administrative data and research nurses. Kappa values were greater than or equal to 0.60 for only 8 of the 19 conditions. Tremendous variability was also found in sensitivity and specificity of administrative data, although specificities were generally higher. This suggests that when diagnoses are listed in VA administrative data within the past year, they do reflect conditions that are active. Using a time frame longer than 1 year would be expected to increase the sensitivity and reduce this specificity.

Results were not substantially different when we compared clinical nurses and administrative data. Levels of agreement were often poor, and for only eight conditions was the kappa greater than 0.60. However, perhaps reflecting the fact that information in administrative databases is generally derived from clinicians, specificity remained high.

Relatively few studies have examined whether administrative databases could replace clinicians' assessments on the MDS. In one study of Ontario nursing homes, the MDS often did not include many important diagnoses that had been present in the discharge diagnosis database from the preceding hospitalization [15]. Reasons for these discrepancies were unclear but may reflect the incomplete transfer of diagnostic information when residents move between settings. Other studies have either compared different research nurses [11] or compared clinicians' MDS assessments with those of trained assessors using standard protocols [16-17].

VA administrative data are generally felt to be more comprehensive than databases from other healthcare settings. Thus, it is difficult to imagine that other databases would be better able to identify active diagnoses. Results from this study, then, would be applicable to MDS assessments outside the VA. However, our study did not use the additional diagnostic data available in Medicare files. Studies have shown that incorporation of Medicare data improves the capture of comorbidity burden in veterans who are dual users. The addition of this diagnostic data contained in Medicare files would be expected to increase sensitivity but reduce specificity; the effect on kappa values would be uncertain. Further studies would be required to determine whether additional diagnoses from Medicare would assist in the accurate identification of active diagnoses within the VA.

Several additional limitations of this study should be noted. We only examined four nursing homes located in the northeast. Results could differ in other locations. Our sample size was also relatively small, so the number of patients per diagnosis was low. Furthermore, several diagnoses from MDS 3.0 were excluded because they were present in less than 15 percent of the sample. We do not know whether administrative data would be better at coding these rare conditions.

While study results highlight that administrative data should not be used for the identification of active diagnoses on the MDS, our results do not suggest how the identification of active diagnoses may be improved. Our assumption is that the research nurses most accurately identified active diagnoses because of their reliance on strict protocols for medical record review. Additional training of clinical nurses on completion of the MDS would then be required to ensure the most accurate information on active diagnoses in nursing home residents. However, research nurses could miss important diagnoses because of poor documentation. The Ontario study suggests that improved transfer of data from hospital stays could help improve the identification of MDS diagnoses [15]. Given VA's electronic medical records, we believe that incomplete data transfer is a less likely explanation for our results. Additional studies are clearly indicated.

CONCLUSIONS

The MDS is a valuable tool for VA clinicians, managers, and researchers working with rehabilitation patients in CLC settings. An important aspect of the MDS is the Disease Diagnoses section that provides information essential in care planning and outcomes measurement. Despite the importance of these data, studies have shown that clinical staff poorly identify active diagnoses when completing the MDS [16]. Our results suggest that administrative data cannot substitute for the assessments currently performed by VA clinicians.

Abbreviations: CLC = Community Living Center, ICD-9-CM = International Classification of Diseases-9th Revision-Clinical Modification, MDS = Minimum Data Set, TIA = transient ischemic attack, VA = Department of Veterans Affairs.

ACKNOWLEDGMENTS

Author Contributions:

Study concept and design: D. R. Berlowitz, E. C. Hickey, D. Saliba. Acquisition of data: E. C. Hickey.

Analysis and interpretation of data: D. R. Berlowitz, E. C. Hickey, D. Saliba.

Drafting of manuscript: D. R. Berlowitz.

Critical revisions of manuscript: E. C. Hickey, D. Saliba.

Obtained funding: D. R. Berlowitz.

Financial Disclosures: The authors have declared that no competing interests exist.

Funding/Support: This material was based on work supported by the VA, Veterans Health Administration, Health Services Research and Development Service (grant SDR 03-217).

Institutional Review: The study protocol was approved through the VA Human Subjects Protection and Privacy Board process.

REFERENCES

[1.] Retchin SM, Brown RS, Yeh SC, Chu D, Moreno L. Outcomes of stroke patients in Medicare fee for service and managed care. JAMA. 1997;278(2):119-24. [PMID: 9214526] DOI:10.1001/jama.278.2.119

[2.] Rao P, Boradia P, Ennis J. Shift happens: Using outcomes to survive and thrive under PPS. Top Stroke Rehabil. 2005; 12(2):1-3. [PMID: 15940579] DOI:10.1310/UG5D-4RBT-VET6-AJ29

[3.] Iezzoni LI. Risk adjusting rehabilitation outcomes: An overview of methodologic issues. Am J Phys Med Rehabil. 2004;83(4):316-26. [PMID: 15024335] DOI:10.1097/01.PHM.0000118041.17739.BB

[4.] Kane RL. Improving outcomes in rehabilitation. A call to arms (and legs). Med Care. 1997;35(6 Suppl):JS21-27. [PMID: 9191711] DOI:10.1097/00005650-199706001-00004

[5.] Hoenig H, Sloane R, Horner RD, Zolkewitz M, Reker D. Differences in rehabilitation services and outcomes among stroke patients cared for in veterans hospitals. Health Serv Res. 2001;35(6):1293-1318. [PMID: 11221820]

[6.] Ottenbacher KJ, Smith PM, Illig SB, Linn RT, Ostir GV, Granger CV. Trends in length of stay, living setting, functional outcome, and mortality following medical rehabilitation. JAMA. 2004;292(14):1687-95. [PMID: 15479933] DOI:10.1001/jama.292.14.1687

[7.] Berlowitz DR, Hoenig H, Cowper DC, Duncan PW, Vogel WB. Impact of comorbidities on stroke rehabilitation outcomes: Does the method matter? Arch Phys Med Rehabil. 2008;89(10):1903-6. [PMID: 18929019] DOI:10.1016/j.apmr.2008.03.024

[8.] Institute of Medicine Committee on Nursing Home Regulation. Improving the quality of care in nursing homes. Washington (DC): National Academy Press; 1986.

[9.] Teresi JA, Holmes D. Should MDS data be used for research? Gerontologist. 1992;32(2):148-49. [PMID: 1577305]

[10.] Brooks S. What's wrong with the MDS (Minimum Data Set)? Contemp Longterm Care. 1996;19(11):41,43,45-47. [PMID: 10162289]

[11.] Hawes C, Morris JN, Phillips CD, Mor V, Fries BE, Nonemaker S. Reliability estimates for the Minimum Data Set for nursing home resident assessment and care screening (MDS). Gerontologist. 1995;35(2):172-78. [PMID: 7750773]

[12.] Kashner TM. Agreement between administrative files and written medical records: A case of the Department of Veterans Affairs. Med Care. 1998;36(9):1324-36. [PMID: 9749656] DOI:10.1097/00005650-199809000-00005

[13.] Szeto HC, Coleman RK, Gholami P, Hoffman BB, Goldstein MK. Accuracy of computerized outpatient diagnoses in a Veterans Affairs general medical clinic. Am J Manag Care. 2002;8(1):37-43. [PMID: 11814171]

[14.] Borzecki AM, Wong AT, Hickey EC, Ash AS, Berlowitz DR. Identifying hypertension-related comorbidities from administrative data: What's the optimal approach? Am J Med Qual. 2004;19(5):201-6. [PMID: 15532912] DOI:10.1177/106286060401900504

[15.] Wodchis WP, Naglie G, Teare GF. Validating diagnostic information on the Minimum Data Set in Ontario Hospital based long-term care. Med Care. 2008;46(8):882-87. [PMID: 18665069]

[16.] Centers for Medicare & Medicaid Services [Internet]. MDS 2.0 for nursing homes. Baltimore (MD): Centers for Medicare & Medicaid Services. [updated 2010 Mar 29; cited 2009 Aug 5]. Available from: http://www.cms.gov/nursinghomequalityinits/20nhqimds20.asp

[17.] Stevenson KB, Moore JW, Sleeper B. Validity of the Minimum Data Set in identifying urinary tract infections in residents of long-term care facilities. J Am Geriatr Soc. 2004; 52(5):707-11. [PMID: 15086649] DOI:10.1111/j.1532-5415.2004.52206.x

Submitted for publication August 13, 2009. Accepted in revised form December 29, 2009.

This article and any supplementary material should be cited as follows:

Berlowitz DR, Hickey EC, Saliba D. Can administrative data identify active diagnoses for long-term care resident assessment? J Rehabil Res Dev. 2010;47(8):719-24.

DOI: 10.1682/JRRD.2009.08.0123

Dan R. Berlowitz, MD, MPH; (1) * Elaine C. Hickey, RN, MS; (1) Debra Saliba, MD, MPH (2)

(1) Center for Health Quality, Outcomes, and Economic Research, Edith Nourse Rogers Memorial Veterans Hospital, Bedford, MA; and Boston University School of Public Health, Boston, MA; (2) Greater Los Angeles Department of Veterans Affairs Geriatric Research, Education, and Clinical Center and Health Sciences Research and Development Center of Excellence, Los Angeles, CA; and University of California Los Angeles/Los Angeles Jewish Homes Borun Center, Los Angeles, CA

* Address all correspondence to Dan R. Berlowitz, MD, MPH; Bedford VA Hospital-CHQOER, 200 Springs Road, Bedford, MA 01730; 781-687-2962; fax: 781-687-2227. Email: Dan.Berlowitz@va.gov
Table.
Comparisons between administrative data and research and clinical
nurses' identification of nursing home residents' active diagnoses.
In calculating sensitivity and specificity, we considered nurses
as the "gold standard."

                                          Research Nurse        Clinical Nurse
Diagnosis in                          Kappa   Sens   Spec   Kappa   Sens   Spec
Administrative Data                            (%)    (%)            (%)    (%)

Arrhythmias (nonatrial fibrillation)  0.31     57     90    0.38    100     86
Coronary Artery Disease               0.65     68     94    0.72     80     91
Chronic Heart Failure                 0.60     68     93    0.56     78     88
Hypertension                          0.47     74     77    0.46     89     55
GERD/Peptic Ulcer                     0.48     54     91    0.68     65    100
Benign Prostatic Hypertrophy          0.52     57     93    0.59     64     94
Anemia                                0.36     48     87    0.38     45     93
Uncomplicated Diabetes Mellitus       0.69     78     91    0.65    100     80
Arthritis                             0.36     53     87    0.53     60     91
Stroke/TIA                            0.77    100     93    0.90     92     98
Hemiplegia/Paraplegia/Quadriplegia    0.91    100     97    0.71     89     92
Dementia: Alzheimer Disease           0.47     57     96    0.62    100     91
Dementia: Non-Alzheimer Disease       0.37     34     97    0.31     36     92
Asthma/COPD                           0.63     68     94    0.63     59     98
Cancer                                0.35     80     83    0.59     80     84
Thyroid Disorder                      0.67     55    100    0.46     50     96
Anxiety Disorder                      0.50     55     95    0.48     50     94
Depression                            0.18     30     87    0.20     33     85
Schizophrenia                         0.94     94     99    0.85    100     96

COPD = chronic obstructive pulmonary disease, GERD = gastroesophageal
reflux disease, Sens = sensitivity, Spec = specificity, TIA =
transient ischemic attack.