Between-Seasons Test-Retest Reliability of Clinically Measured Reaction Time in National Collegiate Athletic Association Division I Athletes.
Abstract: Context: Reaction time is typically impaired after concussion. A clinical test of reaction time ([RT.sub.clin]) that does not require a computer to administer may be a valuable tool to assist in concussion diagnosis and management.

Objective: To determine the test-retest reliability of [RT.sub.clin] measured over successive seasons in competitive collegiate athletes and to compare these results with a computerized measure of reaction time ([RT.sub.comp]).

Design: Case series with repeated measures.

Setting: Preparticipation physical examinations for the football, women's soccer, and wrestling teams at a single university.

Patients or Other Participants: 102 National Collegiate Athletic Association Division I athletes.

Intervention(s): The [RT.sub.clin] was measured using a measuring stick embedded in a weighted rubber disk that was released and caught as quickly as possible. The [RT.sub.comp] was measured using the simple reaction time component of CogState Sport.

Main Outcome Measure(s): Data were collected at 2 time points, 1 season apart, during preparticipation physical examinations. Outcomes were mean simple [RT.sub.clin] and [RT.sub.comp].

Results: The intraclass correlation coefficient estimates from season 1 to season 2 were 0.645 for [RT.sub.clin] (n=102, entire sample) and 0.512 for [RT.sub.comp] (n=62 athletes who had 2 consecutive valid baseline CogState Sport test sessions).

Conclusions: The test-retest reliability of [RT.sub.clin] over consecutive seasons compared favorably with that of a concurrently tested computerized measure of reaction time and with literature-based estimates of computerized reaction time measures. This finding supports the potential use of [RT.sub.clin] as part of a multifaceted concussion assessment battery. Further prospective study is warranted.

Key Words: intraclass correlation coefficient, concussions, traumatic brain injuries, assessment
Concussion is a common and potentially serious injury that frequently results from sport participation. Recent management guidelines (1, 2) emphasize a multifaceted approach to concussion assessment with the goal of determining that an athlete has fully recovered before returning to play. Full recovery from a concussion includes subjective symptom resolution and complete normalization of the physical examination, including postural stability and cognitive assessment. Over the past 2 decades, the use of computerized concussion assessment batteries such as the Immediate Post-Concussion Assessment and Cognitive Test (ImPACT; ImPACT Applications, Inc, Pittsburgh, PA), (3) CogState Sport (CogState Ltd, Melbourne, Australia), (4) and the Automated Neuropsychological Assessment Metrics (ANAM; Defense and Veterans Brain Injury Center, Washington, DC) (5) has grown substantially to assist sports medicine professionals in assessing a concussed athlete's cognitive recovery. One measure that is included in all widely used computerized concussion assessment batteries is reaction time.

Impaired reaction time after sport-related concussion has been demonstrated repeatedly. (6-10) Reaction time measures provide one of the most sensitive indices of cognitive changes after concussion in both athletic and general head injury populations. (11, 12) Decreased speed of information processing is thought to account for the cognitive performance deficits seen after concussion. (11) Impaired reaction time after concussion parallels the persistence of postconcussive symptoms (7, 9) and may persist beyond resolution of self-reported symptoms and clinical findings. (8) Reaction time assessment can add sensitivity to the sports medicine practitioner's concussion assessment battery. However, the measurement of reaction time relies on specialized computer programs, which limits its accessibility to many athletes, especially younger athletes, who are traditionally underserved by the sports medicine community as compared with athletes at the collegiate and professional levels. This point is especially concerning given that people aged 5-18 years account for 65% of all sport-related concussions. (13)

To provide clinicians with a reaction time measure that does not rely on computers, we (14) developed a simple clinical method for measuring reaction time ([RT.sub.clin]). Pilot reliability and validity studies (14) in a small sample of healthy adult volunteers demonstrated excellent test-retest and intertester reliabilities for this technique and conformity to known reaction time characteristics, such as slowing with age and under dual-task conditions, as well as correlation with a computerized reaction time measure. We (15) have also demonstrated a significant positive correlation between [RT.sub.clin] and the simple reaction time component of the CogState Sport computerized neuropsychological test battery in a sample of collegiate football players during preparticipation physical examinations. Furthermore, we (16) have shown [RT.sub.clin] to correlate strongly with a test participant's ability to protect the head using the hands during a laboratory task designed to simulate a sport-specific protective response. In ongoing research, we (17) are investigating the effect of concussion on [RT.sub.clin] in collegiate athletes using a postinjury comparison with baseline design.

It is critical to determine the stability of any measure intended to be used serially, as when comparing postinjury results with a preseason baseline. Only a small body of literature describes the test-retest reliability over various time intervals of currently available computerized sport concussion assessment batteries, each including a computerized measure of reaction time. Generally speaking, such studies have demonstrated the greatest stability over short test-retest intervals, on the order of 1-2 weeks, and decreasing reliability over intervals up to 2 months. Season-to-season comparisons are rare, even though a full competitive season is a common retest interval in practice.

In pilot work, we (14) reported a test-retest intraclass correlation coefficient (ICC) of 0.860 (P = .004) for [RT.sub.clin] in a sample of 10 healthy adult volunteers tested by a single examiner on 2 occasions within 1 month. However, little can be inferred from these data with respect to the test's long-term stability in athletes given the small sample size, the lack of inclusion of athletes, and the short between-tests time interval. Therefore, the purpose of our current study was to measure the 1-year test-retest reliability of [RT.sub.clin] in a larger sample of National Collegiate Athletic Association Division I athletes. To accomplish this, we compared year 1 and year 2 mean [RT.sub.clin] values in athletes who completed preseason [RT.sub.clin] testing over 2 consecutive seasons. This testing was performed as part of an ongoing concussion monitoring program. A secondary goal of the study was to compare the stability of [RT.sub.clin] over seasons with that of a concurrently administered computerized measure of reaction time ([RT.sub.comp]), namely the simple reaction time component of the CogState Sport test battery.

METHODS

Study Participants

We recruited student-athletes from the football, women's soccer, and wrestling teams from a single university during their preparticipation physical examination sessions. Recruitment of football players began before the 2007-2008 season, whereas recruitment of wrestlers and soccer players began the next year. Recruitment continued through the 2009-2010 season for all 3 teams. Before testing, all student-athletes provided informed written consent. This research was approved by the University of Michigan Institutional Review Board. All members of the football, women's soccer, and wrestling teams who were at least 18 years of age at the time of recruitment were eligible to participate. Athletes were excluded if they were recovering from a concussion or had an acute upper extremity injury affecting their ability to complete the clinical reaction time task at the time of testing. The order of [RT.sub.clin] and [RT.sub.comp] testing was determined by convenience during the preparticipation examination and was not strictly controlled or counterbalanced.

Measurement of [RT.sub.clin]

Clinical reaction time is determined using a simple, manual visuomotor task: the time needed to catch a suspended vertical shaft by hand closure. The device is equipped with a weighted spacer at the lower end to ensure near verticality and standardize finger closure distance. Testing took place in an isolated room when one was available and otherwise in the corner of a larger room where the athletes waited as they moved between components of the preparticipation examinations. The [RT.sub.clin] test protocol used has been previously described. (14,15) In brief, the athlete sat with the forearm resting on a horizontal desk or table surface with the hand positioned at the edge of the surface. The athlete held the hand sufficiently open to fit around, but not touch, the weighted disk portion of the clinical reaction time apparatus. The examiner suspended the apparatus and released it after randomly determined time delays between 2 and 5 seconds so as to minimize the athlete's ability to anticipate release of the device. Upon release, the athlete caught the device as rapidly as possible by hand closure (Figure). Clinical reaction time was calculated from the fall distance of the device using the formula for a body falling under the influence of gravity (d = ½gt²), with fall distance measured from the most superior aspect of the athlete's hand after catching the device. Anticipatory grasps and "drop" trials were excluded, as previously described. (15) Each athlete performed 2 practice trials, immediately followed by 8 data acquisition trials.
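
For illustration, the distance-to-time conversion described above can be expressed as a short computational sketch. The following Python snippet is not part of the study protocol; the function names, the centimeter units, the sample distances, and the trial-exclusion mechanism are illustrative assumptions. It converts catch distances to reaction times via d = ½gt², that is, t = √(2d/g), and averages the retained trials.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def rt_clin_from_fall_distance(distance_m: float) -> float:
    """Convert a measured fall distance (m) to a reaction time (s) via d = 1/2*g*t^2 => t = sqrt(2d/g)."""
    return math.sqrt(2.0 * distance_m / G)

def mean_rt_clin_ms(fall_distances_cm, excluded_trials=()):
    """Mean reaction time (ms) over the data-acquisition trials, after dropping any trials
    flagged as anticipatory grasps or 'drop' trials (their indices listed in excluded_trials)."""
    kept = [d for i, d in enumerate(fall_distances_cm) if i not in set(excluded_trials)]
    return sum(rt_clin_from_fall_distance(d / 100.0) * 1000.0 for d in kept) / len(kept)

# Example: 8 acquisition trials with catch distances recorded in centimeters.
print(f"mean RT_clin = {mean_rt_clin_ms([22.0, 20.5, 21.0, 19.5, 23.0, 20.0, 21.5, 22.5]):.0f} ms")
```

Under this conversion, a catch distance of roughly 21 cm corresponds to approximately 207 milliseconds, on the order of the group means reported below.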

Measurement of [RT.sub.comp]

During the same preparticipation physical examination session, each athlete completed a single baseline CogState Sport (version 5.6.4) computerized neuropsychological test session. Testing was completed as previously described, (15) in groups of up to 8 athletes seated at separate personal computers in a computer laboratory supervised by physicians familiar with the program. Each athlete wore noise-blocking headphones. The simple reaction time component of the test battery involves depressing the "k" key as rapidly as possible when a playing card presented on the computer screen turns face up. The raw simple reaction time data for all nonanticipatory trials (all trials that were not preceded by a keystroke before the card turned face up) were extracted for analysis. The [RT.sub.comp] data were included for all athletes who had 2 valid CogState Sport test sessions, as determined by the program's internal integrity check process.
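
As an illustration of the extraction step only, the sketch below averages nonanticipatory trials per athlete. It assumes a hypothetical per-trial export with columns "athlete_id", "rt_ms", and "anticipatory"; the actual CogState Sport export format is not described here, so the file layout and field names are assumptions rather than the program's real interface.

```python
import csv
from collections import defaultdict

def mean_rt_comp(csv_path: str) -> dict:
    """Mean simple reaction time (ms) per athlete, using nonanticipatory trials only."""
    sums, counts = defaultdict(float), defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Drop trials in which a key was pressed before the card turned face up.
            if row["anticipatory"].strip().lower() in ("1", "true", "yes"):
                continue
            sums[row["athlete_id"]] += float(row["rt_ms"])
            counts[row["athlete_id"]] += 1
    return {aid: sums[aid] / counts[aid] for aid in sums}
```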

[FIGURE OMITTED]

Statistical Analyses

Means and SDs of [RT.sub.clin] and [RT.sub.comp] were calculated for each athlete during each test session. Test-retest reliability for [RT.sub.clin] between the first and second test sessions was characterized by ICC(2,8), determined by a 2-way random-effects analysis-of-variance model, with a corresponding ICC(2,35) for [RT.sub.comp]. (18,19) Test-retest ICCs are interpreted from 0 to 1, with a value of zero representing no, or random, consistency and a value of 1 representing perfect consistency between test sessions. (20) In general, higher ICC values indicate less error variance and better test-retest reliability. (21) Standard error of measurement (SEM) values (22) were also calculated for [RT.sub.clin] and [RT.sub.comp] at year 1 and year 2.
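
The reliability calculations can be sketched as follows. This is a generic two-way random-effects, average-measures ICC in the Shrout and Fleiss form, computed from an athletes-by-sessions matrix, and is offered only as an illustration: the study's ICC(2,8) and ICC(2,35) models (which treat the individual trials) and its exact SEM formula are not reproduced here, and the SD × √(1 − reliability) expression below is the conventional form rather than a confirmed detail of the original analysis.

```python
import numpy as np

def icc_2k(x: np.ndarray) -> float:
    """Two-way random-effects, average-measures ICC for an n-subjects x k-sessions matrix."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between-subjects sum of squares
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between-sessions sum of squares
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual sum of squares
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return float((msr - mse) / (msr + (msc - mse) / n))

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement from an SD and a reliability coefficient (conventional form)."""
    return float(sd * np.sqrt(1.0 - reliability))

# Example: rows are athletes; columns are the year 1 and year 2 mean reaction times (ms).
rts = np.array([[205.0, 198.0], [212.0, 201.0], [199.0, 190.0], [220.0, 205.0]])
print(round(icc_2k(rts), 3), round(sem(rts[:, 0].std(ddof=1), icc_2k(rts)), 1))
```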

We compared [RT.sub.clin] directly with [RT.sub.comp] across athletes at year 1 and year 2 using paired t tests. To investigate a systematic change from year 1 to year 2 that would suggest a learning effect, we compared mean [RT.sub.clin] and [RT.sub.comp] values within each athlete over the 2 test sessions using paired t tests. The [RT.sub.clin] was also analyzed after dividing the overall population into athletes who had 2 valid CogState Sport baseline test sessions and those who did not in order to determine whether [RT.sub.clin] differences were present between these subgroups. Independent-samples t tests were used to compare mean [RT.sub.clin] between groups. All statistical analyses were conducted using SPSS for Windows (version 16.0; SPSS Inc, Chicago, IL).
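
The group comparisons can likewise be illustrated with standard t tests. The snippet below uses simulated values and SciPy rather than SPSS, so the numbers and library are assumptions; it simply mirrors the paired (within-athlete) and independent-samples (between-subgroup) comparisons described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated per-athlete mean reaction times (ms); the study's actual data are not reproduced.
rt_clin_y1 = rng.normal(207, 23, 62)
rt_clin_y2 = rt_clin_y1 - rng.normal(11, 15, 62)   # simulated year-to-year improvement
rt_comp_y1 = rt_clin_y1 + rng.normal(50, 20, 62)   # simulated slower computerized measure

# Paired, within-athlete comparisons.
print(stats.ttest_rel(rt_clin_y1, rt_comp_y1))  # RT_clin versus RT_comp, year 1
print(stats.ttest_rel(rt_clin_y1, rt_clin_y2))  # RT_clin, year 1 versus year 2

# Independent-samples comparison between two subgroups of athletes.
print(stats.ttest_ind(rt_clin_y1[:31], rt_clin_y1[31:]))
```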

RESULTS

Of 251 student-athletes who participated in the study, 102 athletes completed baseline [RT.sub.clin] test sessions during 2 consecutive seasons and were included in the data analysis. Of these, 78 played football, 14 wrestled, and 10 played women's soccer. Sixty-two of these athletes had 2 corresponding valid [RT.sub.comp] data points and were included in the [RT.sub.comp] analysis. Of the 40 athletes excluded from [RT.sub.comp] analysis, 27 had at least 1 invalid CogState Sport session and 13 had missing CogState Sport data because of athlete identifier coding errors.

The overall mean [RT.sub.clin] and [RT.sub.comp] values and their SDs, as well as ICCs and SEMs for [RT.sub.clin] and [RT.sub.comp], are presented in Table 1. The [RT.sub.clin] had a higher overall test-retest ICC than did [RT.sub.comp], and the SEM was lower for [RT.sub.clin] than for [RT.sub.comp]. In athletes with valid CogState Sport data for comparison, mean [RT.sub.clin] was 50 milliseconds shorter than mean [RT.sub.comp] in year 1 (t = -12.01, P < .001) and 62 milliseconds shorter in year 2 (t = -12.02, P < .001). A learning effect occurred in [RT.sub.clin] (year 1 to year 2: 11 milliseconds, t = 4.66, P < .001), whereas [RT.sub.comp] did not differ between the years (t = -0.16, P = .875). The [RT.sub.clin] did not differ between athletes with and without valid [RT.sub.comp] data for comparison in year 1 (205 ± 22 milliseconds versus 209 ± 25 milliseconds, respectively; t = -0.894, P = .373) or year 2 (194 ± 23 milliseconds versus 198 ± 25 milliseconds, respectively; t = -0.834, P = .406). Furthermore, the ICCs for [RT.sub.clin] in athletes with and those without 2 valid CogState Sport computerized test sessions (ICC = 0.548 and ICC = 0.746, respectively) both fall within the overall 95% confidence interval for [RT.sub.clin] (0.422, 0.775).

DISCUSSION

The results of this study demonstrate that [RT.sub.clin] has sufficient long-term stability, as measured by its test-retest reliability over successive seasons, to be used serially as a concussion assessment tool for preseason baseline to postinjury comparisons. The 1-year reliability estimate for [RT.sub.clin] (ICC = 0.645) compares favorably with that of concurrently measured [RT.sub.comp]. Although the ICC for [RT.sub.clin] obtained in this study is somewhat lower than ICCs in the literature for computerized reaction time measures over retest intervals of 1 hour to 2 weeks, (3,11,12) it is comparable with or greater than published values for time intervals ranging from 45 days to 2 years. (21,23) Table 2 summarizes literature-based reliability estimates for the reaction time components of ImPACT, CogState Sport, Concussion Resolution Index (HeadMinder, Inc, New York, NY), and Concussion Sentinel (CogState Ltd). Overall, the 1-year stability of [RT.sub.clin] falls within the range of values reported for computerized reaction time measures that are currently in widespread use.

The level of test-retest agreement thought to be acceptable for clinical use varies among authors. The 1-year [RT.sub.clin] ICC falls above the minimum acceptable ICC value of 0.60 proposed by Anastasi (24) but below the 0.90 ICC value cited by Randolph et al (25) as desirable for making decisions about individual change in the context of sport-related concussion. In reality, it is rare for a clinical test to achieve test-retest reliability greater than 0.90, and none of the currently available and widely used computerized concussion batteries have yielded ICCs for reaction time or any other test measure that approach this level for retest intervals longer than 1 hour. Although reliability estimates of ICC > 0.90 would be ideal, this appears to be an unrealistic benchmark for a test assessing something as complex as the brain's processing speed over prolonged time periods. Tests with lower reliability indices, as demonstrated for [RT.sub.clin] and the numerous computerized reaction time measures detailed here, can still be of clinical value.

An important contributor to a test's reliability is the learning effect associated with repeated administration of the test. Even though the pilot [RT.sub.clin] reliability and validity study did not suggest a learning effect in nonathletes, (14) a learning effect over 8 [RT.sub.clin] trials appeared to be present during our initial baseline assessment (15) of [RT.sub.clin] in 94 collegiate football players. The present study further suggests the possibility of a learning effect in athletes by demonstrating an average [RT.sub.clin] decrease of 11 milliseconds from year 1 to year 2. The test protocol for determining [RT.sub.clin] in athletes has been consistent across studies, with participants given 2 practice trials followed by 8 data acquisition trials. Investigation into the effect of more practice trials before data collection is warranted.

It is noteworthy that almost one-third of the athletes who participated in this study had one or more invalid CogState Sport test sessions. Although a variety of reasons for an invalid computerized test session exist, poor effort or motivation on the part of the athlete is probably one of the most common. The finding that [RT.sub.clin] did not differ between athletes with and without valid [RT.sub.comp] data for comparison parallels our prior findings (15) and suggests that [RT.sub.clin] may be immune to this concern. Furthermore, [RT.sub.clin] may have actually been more stable across seasons in the athletes without valid [RT.sub.comp] data for comparison, that is, primarily athletes excluded from [RT.sub.comp] analysis due to invalid CogState Sport sessions. A possible explanation is that [RT.sub.clin] may be more intrinsically motivating than computerized reaction time tasks. This could be because of several factors, including the more physical nature of the [RT.sub.clin] task and the direct one-on-one interaction inherent in [RT.sub.clin] testing that is absent from computerized test batteries. Less than full effort and motivation has a potent influence on cognitive test performance, as demonstrated by Green et al, (26) who showed that suboptimal effort suppressed overall test battery performance 4.5 times more than did moderate to severe brain injury. Ensuring optimal effort is critical when preseason baseline tests are the basis for comparison with postinjury testing, when an athlete is often highly motivated to perform well so as to return to play. Therefore, an intrinsically motivating test is likely to provide higher-quality baseline test results. Additional study is necessary to specifically address the role of motivation in baseline reaction time testing.

The strength of the conclusions in this study must be tempered by the limitations. We investigated only Division I athletes, 76% (78/102) of whom were football players and 90% (92/102) of whom were male. Therefore, the results are applicable primarily to this population and may not generalize to female athletes or athletes participating at high school or youth levels. We would not expect test-retest reliability to differ in these populations, but additional study in female and younger populations is warranted. This is especially true given that the simplicity and anticipated low cost of [RT.sub.clin] may make it most valuable in younger athletes who do not have access to computerized neuropsychological test batteries. To our knowledge, Erlanger et al (11) provided the only test-retest reliability estimates for computerized concussion batteries that allowed comparison of athletes across age groups. The reliability estimates reported do not differ greatly between high school students and collegiate or adult club athletes on processing speed (0.79 and 0.90, respectively), simple reaction time (0.72 and 0.73, respectively), or complex reaction time (0.65 and 0.72, respectively). In addition, the results may not apply to the shorter time periods that more closely approximate the typical timeframe seen in sport-related concussion assessment and management. (21) An additional study limitation is the lack of counterbalancing in the order of [RT.sub.clin] and [RT.sub.comp] testing. This lack of control was pragmatic because of the logistics of testing large numbers of athletes during their preparticipation physical examinations. We do not hypothesize any order effect, but the true effect of test order is unknown. A final limitation of this study is that the [RT.sub.comp] data used for comparison with [RT.sub.clin] were not of optimal quality. All athletes completed only a single CogState Sport test session each season, and 40 athletes had at least 1 missing or invalid CogState Sport test session. With regard to the 13 athletes with missing [RT.sub.comp] data, a systematic problem in the way athlete identifiers were coded within the CogState system in the first year of testing, which was subsequently fixed, is to blame. Regarding the large number of invalid CogState Sport test sessions with unusable data, authors of future prospective studies should consider a double-baseline CogState Sport test protocol to reduce the learning effect associated with the test and improve data quality. (27) Although it would not affect the primary outcome variable of this study ([RT.sub.clin] stability), this change in protocol may allow a better comparison of [RT.sub.clin] with [RT.sub.comp].

In summary, the test-retest reliability of [RT.sub.clin] over consecutive seasons compares favorably with computerized reaction time measures. This suggests that it is a stable measure across seasons and, taken in the context of our previous work, (14-17) supports its potential use as part of the sports medicine practitioner's multifaceted concussion assessment battery. We caution that impaired simple reaction time is only one of many typical signs indicating that an athlete has sustained a concussion. Therefore, it must be interpreted within the greater clinical context of the concussed athlete. Furthermore, in athletes who have access to computerized neuropsychological testing, [RT.sub.clin] should not be considered a replacement for computerized tests, which measure multiple indices of concussion in addition to simple reaction time. However, the simplicity and low cost of [RT.sub.clin] could facilitate its use in youth athletes and others who do not have access to computerized concussion assessment programs. In athletes who do have access to computerized testing, [RT.sub.clin] may serve a complementary role as a true sideline tool in the initial concussion diagnosis, when use of computerized test batteries is impractical. Further study is warranted to examine the influence of motivation on baseline reaction time assessment, evaluate the use of [RT.sub.clin] in younger populations, and prospectively investigate the effect of concussion on [RT.sub.clin]. Controlled research involving concussed athletes will also need to define a clinically meaningful change in [RT.sub.clin] from baseline on which management decisions can be based.

Key Points

* Reaction time is typically prolonged after sport-related concussion and can be measured clinically.

* Test-retest reliability of the clinical measure of reaction time over consecutive seasons compared favorably with a computerized measure of reaction time.

* The potential use of the clinical measure of reaction time as part of the sports medicine practitioner's multifaceted concussion assessment battery is supported.

ACKNOWLEDGMENTS

We thank Dr. David Darby, chief medical officer for CogState Ltd, for his advice and assistance relating to the CogState Sport program and Mr. Steve Nordwall and his athletic training staff for their support in organizing and conducting this project. We also thank Katherine Bohard, James Burke, Sri Krishna Chandran, Heather Eckner, Burton Engel, Shawn Heiler, Jennifer Kendall, Michael Louwers, Raman Malhotra, Stephen Oh, and Devon Shuchman for their assistance with data collection and management. We thank the University of Michigan Biostatistics Core (UL1RR024986 Clinical & Translational Science Award funded) for its statistical advice. We acknowledge the support of our sponsors, the Foundation for Physical Medicine and Rehabilitation (Chicago, IL; awarded to J.T.E.) and the University of Michigan Bone & Joint Injury Prevention & Rehabilitation Center (Ann Arbor, MI; awarded to J.T.E. and J.K.R.).

REFERENCES

(1.) Guskiewicz KM, Bruce SL, Cantu RC, et al. National Athletic Trainers' Association position statement: management of sport-related concussion. J Athl Train. 2004;39(3):280-297.

(2.) McCrory P, Meeuwisse W, Johnston K, et al. Consensus statement on concussion in sport: the 3rd International Conference on Concussion in Sport held in Zurich, November 2008. Clin J Sport Med. 2009;19(3):185-200.

(3.) Iverson GL, Lovell MR, Collins MW. Validity of ImPACT for measuring processing speed following sports-related concussion. J Clin Exp Neuropsychol. 2005;27(6):683-689.

(4.) Collie A, Maruff P, Darby D, Makdissi M, McCrory P, McStephen M. CogSport. In: Echemendia RJ, ed. Sports Neuropsychology: Assessment and Management of Traumatic Brain Injury. New York, NY: Guilford Publications; 2006:240-262.

(5.) Cernich A, Reeves D, Sun W, Bleiberg J. Automated Neuropsychological Assessment Metrics sports medicine battery. Arch Clin Neuropsychol. 2007;22(suppl 1):S101-S114.

(6.) Hugenholtz H, Stuss DT, Stethem LL, Richard MT. How long does it take to recover from a mild concussion? Neurosurgery. 1988;22(5):853-858.

(7.) Makdissi M, Collie A, Maruff P, et al. Computerised cognitive assessment of concussed Australian rules footballers. Br J Sports Med. 2001;35(5):354-360.

(8.) Warden DL, Bleiberg J, Cameron KL, et al. Persistent prolongation of simple reaction time in sports concussion. Neurology. 2001;57(3):524-526.

(9.) Collins MW, Field M, Lovell MR, et al. Relationship between postconcussion headache and neuropsychological test performance in high school athletes. Am J Sports Med. 2003;31(2):168-173.

(10.) Collie A, Makdissi M, Maruff P, Bennell K, McCrory P. Cognition in the days following concussion: comparison of symptomatic versus asymptomatic athletes. J Neurol Neurosurg Psychiatry. 2006;77(2):241-245.

(11.) Erlanger D, Saliba E, Barth J, Almquist J, Webright W, Freeman J. Monitoring resolution of postconcussion symptoms in athletes: preliminary results of a Web-based neuropsychological test protocol. J Athl Train. 2001;36(3):280-287.

(12.) Collie A, Maruff P, Makdissi M, McCrory P, McStephen M, Darby D. CogSport: reliability and correlation with conventional cognitive tests used in postconcussion medical evaluations. Clin J Sport Med. 2003;13(1):28-32.

(13.) Centers for Disease Control and Prevention (CDC). Nonfatal traumatic brain injuries from sports and recreation activities: United States, 2001-2005. MMWR Morb Mortal Wkly Rep. 2007;56(29):733-737.

(14.) Eckner JT, Whitacre RD, Kirsch N, Richardson JK. Evaluating a clinical measure of reaction time: an observational study. Percept Mot Skills. 2009;108(3):717-720.

(15.) Eckner JT, Kutcher JS, Richardson JK. Pilot evaluation of a novel clinical test of reaction time in National Collegiate Athletic Association Division I football players. J Athl Train. 2010;45(4):327-332.

(16.) Eckner J, Lipps D, Kim H, Richardson J, Ashton-Miller JA. Can a clinical test of reaction time predict a functional head-protective response? Med Sci Sports Exerc. 2011;43(3):382-387.

(17.) Eckner J, Kutcher J, Richardson J. Effect of concussion on clinically measured reaction time in nine NCAA Division I collegiate athletes: a pilot study. PM R. 2011;3(3):212-218.

(18.) McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods. 1996;1(1):30-46.

(19.) Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86(2):420-428.

(20.) Tammemagi MC, Frank JW, Leblanc M, Artsob H, Streiner DL. Methodological issues in assessing reproducibility: a comparative study of various indices of reproducibility applied to repeat ELISA serologic tests for Lyme disease. J Clin Epidemiol. 1995;48(9):1123-1132.

(21.) Broglio SP, Ferrara MS, Macciocchi SN, Baumgartner TA, Elliott R. Test-retest reliability of computerized concussion assessment programs. J Athl Train. 2007;42(4):509-514.

(22.) Jacobson NS, Truax P. Clinical significance: a statistical approach to defining meaningful change in psychotherapy research. J Consult Clin Psychol. 1991;59(1):12-19.

(23.) Schatz P. Long-term test-retest reliability of baseline cognitive assessments using ImPACT. Am J Sports Med. 2010;38(1):47-53.

(24.) Anastasi A. Psychological Testing. 6th ed. New York, NY: Macmillan; 1988.

(25.) Randolph C, McCrea M, Barr WB. Is neuropsychological testing useful in the management of sport-related concussion? J Athl Train. 2005;40(3):139-152.

(26.) Green P, Rohling ML, Lees-Haley PR, Allen LM III. Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Inj. 2001;15(12):1045-1060.

(27.) Straume-Naesheim TM, Andersen TE, Bahr R. Reproducibility of computer based neuropsychological testing among Norwegian elite football players. Br J Sports Med. 2005;39(suppl 1):i64-i69.

Address correspondence to James T. Eckner, MD, MS, Department of Physical Medicine & Rehabilitation, University of Michigan, 325 E. Eisenhower, Suite 100A, Ann Arbor, MI 48108. Address e-mail to jeckner@med.umich.edu.

James T. Eckner, MD, MS*; Jeffrey S. Kutcher, MD†; James K. Richardson, MD*

* Department of Physical Medicine & Rehabilitation and † Department of Neurology, University of Michigan, Ann Arbor
Table 1. Overall Group Results for Clinical and Computerized Measures of Reaction Time

Measure                                   n     Mean ± SD, ms              SEM, ms (a)          ICC      95% CI for ICC
                                                Year 1       Year 2        Year 1    Year 2
Clinical measure of reaction time         102   207 ± 23     196 ± 24      16.6      16.6       0.645    0.422, 0.775
Computerized measure of reaction time      62   255 ± 29     256 ± 34      23.5      27.5       0.512    0.186, 0.707

Abbreviations: CI, confidence interval; ICC, intraclass correlation coefficient; SEM, standard error of measurement.

(a) Calculated as SD × [mathematical expression not reproducible in the source].

Table 2. Summary of Studies Reporting Test-Retest Reliability for Computerized Reaction Time Measures

Study                        Sample Tested                               Computerized Test Battery Used   Retest Interval   Reliability
Iverson et al (3) (2005)     56 healthy adolescents and young adults    ImPACT (a)                       1-13 d            Pearson correlation coefficient r = 0.79
Collie et al (12) (2003)     60 healthy young adults                    CogState Sport (b)               1 h               ICC = 0.90
                                                                                                          1 wk              ICC = 0.76
Erlanger et al (11)          High school, collegiate, and adult         Concussion Resolution Index (d)  2 wk              0.73 (collegiate and adult club athletes)
(2001) (c)                   club athletes                                                                                 0.72 (high school students)
Broglio et al (21) (2007)    118 college students                       ImPACT                           0-45 d            ICC = 0.39
                                                                                                          45-50 d           ICC = 0.51
                                                                         Concussion Sentinel (e)          0-45 d            ICC = 0.60
                                                                                                          45-50 d           ICC = 0.55
                                                                         Concussion Resolution Index      0-45 d            ICC = 0.65
                                                                                                          45-50 d           ICC = 0.36
Schatz (23) (2010)           95 collegiate athletes                     ImPACT                           1.9 ± 0.6 y       ICC = 0.676

Abbreviation: ICC, intraclass correlation coefficient.

(a) ImPACT Applications, Inc, Pittsburgh, PA.

(b) CogState Ltd, Melbourne, Australia.

(c) The number of athletes tested in each group and the reliability index used were not reported.

(d) HeadMinder, Inc, New York, NY.

(e) CogState Ltd.