Researchers: Little evidence that brain-training games yield real-world benefits

Study led by U of I psychologists challenges industry claims

Psychology professor Daniel Simons and educational psychology professor Elizabeth Stine-Morrow, pictured here, and their colleagues found no compelling evidence that brain-training games provide cognitive benefits that are relevant to daily life.

A systematic review of the scientific studies cited by brain-training companies has found no evidence to support their claims that their products improve cognition in daily life.

The new study, led by researchers at the U of I, reports that while people tend to improve on the specific tasks they practice, it’s premature to conclude that computerized brain-training programs yield broader cognitive benefits or improve real-world outcomes for their users.

The analysis and an independent commentary on the findings appear in the journal Psychological Science in the Public Interest. The study also included researchers from Florida State University, Michigan State University, Union College, and the Medical Research Council in Cambridge, U.K.

Daniel Simons, professor of psychology at Illinois who led the study with U of I educational psychology professor Elizabeth Stine-Morrow, said the idea behind “brain training” is that if you practice a task that taps a core component of cognitive ability, such as memory, the training will improve your ability to perform other tasks that also rely on memory. The researchers cast doubt on claims that brain games can achieve this, however.

“If you practice remembering playing cards, you’ll get really good at remembering playing cards,” Simons said. “But does that help you remember which medications to take, and when? Does it help you remember your friends’ names? Historically, there is not much evidence that practicing one task improves different tasks in other contexts, even if they seem to rely on the same ability.”

The researchers closely examined 132 journal articles cited by a large group of brain-training proponents in support of their claims. The team supplemented that list with all of the published articles cited on the websites of leading brain-training companies that were identified by SharpBrains, an independent market-research firm that follows the industry.

The review found numerous problems with the way many of the cited studies were designed and how the evidence was reported and interpreted. The problems included small sample sizes and studies in which researchers reported only a handful of significant results from the many measures collected.

“Sometimes the effects of a single brain-training intervention are described in many separate papers without any acknowledgment that the results are from the same study,” Simons said. “That gives the misleading impression that there is more evidence than actually exists, and it makes it hard to evaluate whether the study provided any evidence at all.”

Some studies conducted with special groups (such as people diagnosed with schizophrenia, children with language delays, or older adults with dementia) were used as support for broad claims about the benefits of brain training for the general population.

One of the most glaring problems in the cited research was the use of inadequate control groups as a baseline for measuring improvements. Ideally, participants in a control group do not engage in the intervention but are otherwise matched closely with those who do, the researchers said. Not only should the control group’s demographics (age, sex, race, income and education) match those of the intervention group as closely as possible, but control-group participants also should be equally engaged, Simons said. That way, if the group that receives treatment improves more than the group that does not, the difference can be credited to the treatment itself.

Some of the studies in question had no control group, however. Some had a passive control group, whose members took the same pre- and post-test as the intervention group, but were not engaged in any other way. Some studies had participants in a control group come into the lab and play crossword puzzles, watch educational DVDs or just socialize with the experimenters. Such control groups differ in many ways from the intervention group, so greater improvement in the treatment group might be due to those other differences, including differences in expected improvement, rather than to the brain-training intervention itself, the researchers said.

Most of the research in question tested for improvements on simplified, abstract laboratory tasks rather than on measures of real-world performance.

“There are relatively few studies in this literature that objectively measure improvements on the sorts of real-world tasks that users of the programs presumably want to improve – and that the programs’ marketing materials emphasize,” Simons said.

“Based on our comprehensive review of the evidence cited by brain-training proponents and companies, we found little evidence for broad transfer from brain-training tasks to other tasks,” Simons said. “We hope future studies will adopt more rigorous methods and better control groups to assess possible benefits of brain training, but there is little evidence to date of real-world benefits from brain training.”

The research team included Walter Boot and Neil Charness, of Florida State University; Susan Gathercole, of the Medical Research Council, Cambridge, U.K.; Christopher Chabris, of Union College and Geisinger Health System; and David Hambrick, of Michigan State University. Simons and Stine-Morrow are affiliates of the Beckman Institute for Advanced Science and Technology at Illinois.

News Source

Diana Yates, Illinois News Bureau

Date