Preseason neurocognitive tests in young athletes have surprisingly high base rates of failure, according to a large observational study.
More than 50% of the nearly 8,000 participants who completed baseline neurocognitive testing failed at least one of four published validity indicators in the cross-sectional study.
Base rates of failure ranged from 29% in 21-year-old athletes to 84% among 10-year-olds, reported Christopher Abeare, PhD, of the University of Windsor in Canada, and colleagues in JAMA Neurology.
Action Points
- Note that this observational study found a very high rate of failure on baseline cognitive tests in young athletes.
- This speaks to potential invalidity of the "comparison to baseline" method of determining whether an athlete can return to play after a concussion.
"There's been growing concern about the validity of baseline test results -- meaning there's concern over the degree to which the scores on these baseline tests actually reflect an athlete's true cognitive ability," Abeare said. "These findings suggest that the rates of invalid performance on baseline testing may be alarmingly high."
Baseline testing assesses an athlete's preseason cognitive abilities to provide a guideline for return-to-play decisions if an injury occurs.
ImPACT (Immediate Post-Concussion Assessment and Cognitive Testing) -- a computerized neurocognitive test that evaluates athletes both at baseline and post-injury -- is the most widely used tool. About 75% of member schools use ImPACT, as do thousands of high schools. The tests often are administered in group settings.
ImPACT has five cognitive indices -- reaction time, visual motor speed, visual memory, impulse control, and verbal memory. Its 1-year test-retest reliabilities are variable, with intraclass correlation coefficients ranging from low to high.
To date, ImPACT has four published embedded validity indicators. Two are included in the ImPACT Clinical Manual -- a default indicator that automatically flags invalid performance, and "Red Flags," an index of suboptimal performance that is more liberal than the default indicator.
The other two were developed by independent research teams: Higgins et al and Schatz and Glatts each published alternative indicators that have been shown to produce higher base rates of failure.
In this study, researchers compared these four indicators at baseline for 7,897 consecutively evaluated athletes, from ages 10 to 21, in the Midwest. Participants were tested in groups of about 20 athletes and the online test took approximately 45 minutes.
About half (52%) were male, and the average age was 15. They played football (20.9%), soccer (15.9%), volleyball (9.8%), basketball (9.3%), hockey (9.2%) and field hockey (8.6%). Nearly all (99%) spoke English and most (87%) were right-handed. The most commonly reported preexisting diagnoses were attention-deficit/hyperactivity disorder (11%), dyslexia (1.8%), and autism (0.3%).
Most of the sample -- 55.7% -- failed at least one of the four validity indicators. The base rates of failure varied widely: 6.4% for the ImPACT default indicator, 31.8% for Red Flags, 34.9% for Higgins et al, and 47.6% for Schatz and Glatts.
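Note that the cumulative rate (55.7%) is not the sum of the four individual rates: an athlete who fails several indicators is counted once. The cumulative base rate is the rate of the *union* of failures, so it can exceed every single indicator's rate. A minimal sketch with hypothetical data (the flags below are made up, not the study's data):

```python
# Hypothetical illustration: each tuple holds one athlete's pass/fail
# flags on the four validity indicators (True = failed). Data are
# invented for illustration only.
athletes = [
    # (default, red_flags, higgins, schatz)
    (False, True,  False, True),
    (False, False, True,  True),
    (False, False, False, False),
    (True,  True,  True,  True),
    (False, True,  False, False),
]
n = len(athletes)
indicator_names = ["default", "red_flags", "higgins", "schatz"]

# Per-indicator base rates of failure
rates = {
    name: sum(a[i] for a in athletes) / n
    for i, name in enumerate(indicator_names)
}

# Cumulative base rate: failed at least one of the four indicators
cumulative = sum(any(a) for a in athletes) / n

print(rates)       # each individual rate is at most 0.6 here
print(cumulative)  # 0.8: higher than any single indicator's rate
```

In this toy sample the cumulative rate (0.8) exceeds the highest individual rate (0.6), mirroring how the study's 55.7% cumulative figure tops the 47.6% rate of the strictest single indicator.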
"This research supports the clinical practices of most sports neurologists, who have found this type of testing to be of little use when making a diagnosis of concussion or determining when it is safe for an athlete to return to a collision sport," said Anthony Alessi, MD, of the University of Connecticut, who was not involved in the study. "It reinforces the importance of a detailed neurologic history and exam before making a diagnosis."
The results also showed a strong age association. Ten-year-olds had the highest cumulative base rate of failure at 83.6%, and 21-year-olds had the lowest at 29.2% (risk ratio 2.86, 95% CI 2.60-3.16, P<0.001). Three-fourths of participants ages 10 to 12 failed at least one validity indicator.
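The reported risk ratio is simply the ratio of the two groups' cumulative failure rates; a quick check of the arithmetic (using only the two percentages given above, not the underlying counts, so the confidence interval cannot be reproduced here):

```python
# Risk-ratio arithmetic from the reported base rates of failure:
# 10-year-olds' cumulative rate divided by 21-year-olds' rate.
rate_age_10 = 0.836  # cumulative base rate of failure, 10-year-olds
rate_age_21 = 0.292  # cumulative base rate of failure, 21-year-olds

risk_ratio = rate_age_10 / rate_age_21
print(round(risk_ratio, 2))  # 2.86, matching the reported RR
```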
This type of testing may not be useful for very young people, observed William Mullally, MD, of Harvard Medical School, who also was not involved in the research. "We must be cognizant of the fact that athletes under the age of 16 are physically, cognitively, and emotionally different from adults."
Computerized neuropsychological testing like ImPACT can be a useful adjunct, but "must be interpreted in the context of history, neurologic exam, and clinical course and is certainly not a stand-alone tool," added Mullally.
While invalid results may stem from fatigue, test-taking in noisy group environments, or inattention to instructions, "some athletes will actually intentionally under-perform in the hope that a post-concussion test in the future will compare more favorably to the baseline test," he said.
"To ensure a baseline test is valid, testing should only be conducted by a trained health professional in a controlled environment, and athletes should be retested if there is possible invalidity," he said. And after a concussion, young athletes may need more specialized testing under the direct supervision of a neuropsychologist rather than a computerized neuropsychological screen, he added.
This high overall base rate of failure signals a potential confound in the measurement model, and the degree to which invalid performance reflects false-positive errors versus truly invalid response sets is unknown, Abeare and colleagues wrote. They advised clinicians to consider performance validity and age-specific base rates of failure when making return-to-play decisions.
Disclosures
The authors reported no conflicts of interest.
Primary Source
JAMA Neurology
Abeare C, et al "Prevalence of invalid performance on baseline testing for sport-related concussion by age and validity indicator" JAMA Neurol 2018; DOI: 10.1001/jamaneurol.2018.0031.