The Stanford Center on Longevity and the Max Planck Institute for Human Development in Berlin released “A Consensus on the Brain-Training Industry From the Scientific Community” (2014), a statement objecting “to the claim that brain games offer consumers a scientifically grounded avenue to reduce or reverse cognitive decline.”

Seventy professors of psychology, neuroscience, and gerontology signed this letter.

The general conclusion was:

“Some of the initial results are promising and make further research highly desirable. However, at present, these findings do not provide a sound basis for the claims made by commercial companies selling brain games. … The promise of a magic bullet detracts from the best evidence to date, which is that cognitive health in old age reflects the long-term effects of healthy, engaged lifestyles.”

Points Made

  • Exaggerated claims. It is customary for advertising to highlight and overstate the potential advantages of a product. The consensus of the group is that claims promoting brain games are frequently exaggerated and at times misleading.
    • Consumers are assured that claims and promises rest on solid scientific evidence, since the games are “designed by neuroscientists” at top universities and research centers.
    • The cited research is only tangentially related to the scientific claims of the company.
    • Claims often rely on single studies, frequently with only a small number of participants, rather than on findings integrated across a body of research.
  • Limitations of current evidence. Cognitive training produces statistically significant improvement in practiced skills that sometimes extends to improvement on other cognitive tasks administered in the lab. However:
    • The effects are often small.
    • It is often not appropriate to conclude that training-induced changes go significantly beyond the learned skills, that they affect broad abilities with real-world relevance, or that they generally promote “brain health”.
    • Studies reporting positive effects of brain games on cognition are more likely to be published than studies with null results (the so-called “file-drawer effect”), such that even the available evidence is likely to paint an overly positive picture of the true state of affairs.
    • Some meta-analyses report small positive effects of training on cognition, while others note substantial disparities in methodological rigor among the studies.
  • Scientific evidence – required conditions
    • Performance gains are not enough: you must show that benefits are not explained by factors long known to improve performance, such as the acquisition of new strategies or changes in motivation.
    • Newly developed psychological tests must meet specific psychometric standards, including reliability and validity. The same standards should be extended into the brain game industry.
    • Additional systematic research is needed to replicate, clarify, consolidate, and expand existing positive results.
    • Does the improvement encompass a broad array of tasks that constitute a particular ability, or does it just reflect the acquisition of specific skills?
    • Do the gains persist for a reasonable amount of time?
    • Are the positive changes reflected in real-life indices of cognitive health?
    • What role do motivation and expectations play in bringing about improvements in cognition when they are observed?
    • Opportunity costs need to be evaluated: would the time spent on brain training be better spent learning a language or a musical instrument, exercising, or socializing?
    • Need for meta-studies that estimate effect size and likelihood of biased result reporting.
  • Recommendations of the group
    • Consider opportunity costs: Physical activity, intellectual challenges and social engagement may be more effective for brain health.
    • Aerobic exercise has particular promise for brain health / wellness.
    • Findings need to be replicated at multiple sites, based on studies conducted by independent researchers who are funded by independent sources. And there need to be good control groups.
    • Even if programs do work, in all likelihood, gains won’t last long after you stop the challenge.
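The “effects are often small” point above is usually quantified with a standardized effect size such as Cohen’s d. A minimal sketch in Python; the two groups of scores are made up purely for illustration:

```python
import math
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups.

    d = (mean_a - mean_b) / pooled_sd, where the pooled SD weights
    each group's sample variance by its degrees of freedom.
    """
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = stdev(group_a) ** 2, stdev(group_b) ** 2
    pooled_sd = math.sqrt(
        ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    )
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical post-test scores for a trained and a control group.
trained = [5, 6, 7, 8, 9]
control = [4, 5, 6, 7, 8]
d = cohens_d(trained, control)  # ~0.63
```

By Cohen’s rough benchmarks (0.2 small, 0.5 medium, 0.8 large), a statistically significant result can still correspond to a d well under 0.2, which is the sense in which the consensus calls many training effects “small.”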

The Chronicle of Higher Education Feature on the Brain Training Industry

The following points were made in the Oct. 22, 2014, article in The Chronicle of Higher Education on the brain-training industry.

1. Avoiding exaggerated claims

“There is a lot of concern in the scientific community because consumers hear that the games are based on scientific evidence when that ‘scientific evidence’ is generally tangential to the exaggerated claims,” said Laura L. Carstensen, director of the Stanford Center on Longevity and a professor of public policy and psychology.

2. Navigating conflict of interest

Scientists are asked to serve as advisers in return for equity (personal compensation, not research funding).

“Personally I think it falls in the same kind of category as medical researchers’ taking money from drug companies. … It’s incredible how much money is thrown around there. … It’s hard to get around the fact that people don’t argue against their pocketbook” (i.e., when they stand to profit if the company does well), said L. Todd Rose, an educational-neuroscience professor at Harvard University.

3. Better Industry-Research relations needed

“In general I think academics and industry have to work together to have real innovations in people’s lives. … We have to learn how to work together in as transparent and effective a way as possible,” said Adam Gazzaley, a professor of neurology, physiology, and psychiatry at the University of California at San Francisco.

4. Funding Research

Posit Science, the cognitive-training company behind a program called BrainHQ, contributes funds to Dr. Gazzaley’s lab.

One former lab member, Jyoti Mishra, split her postdoctoral research time between the lab and the company. She’s now an assistant professor of neurology at the University of California at San Francisco.

Michael Merzenich, co-founder and chief scientific officer of the company, said the company does not tamper with the research it supports.

Lumosity awarded its first Human Cognition Grant of $150,000 the previous year to Joseph B. Hopfinger, a professor of cognitive psychology, and Kathleen Gates, an assistant professor of quantitative psychology, both at the University of North Carolina at Chapel Hill. Ms. Gates said the process had been similar to receiving funding from the National Institutes of Health.

Lumosity uses these criteria when it considers funding research:

  1. sample size
  2. total training time of 20 hours or greater
  3. sufficient randomization and active controls
  4. relevant outcome measures or assessments
  5. processes and controls to ensure study integrity
  6. suitability of investigator(s) and institution

5. Research Ethics

  • Need for transparency with data. Many psychologists cannot get direct access to the commercial products to do research, so they rely on tasks that are virtually identical to the training tasks used by brain-training companies. A number of scientists have reported that even if you do work with a brain-training company, it is very difficult to get the data from them.
  • Need for Federal/Government regulation on claims made (currently lacking).
  • Need to use control groups typical in peer-reviewed scientific research.
  • Clear ways of showing transfer (beyond improvements on the game itself).
  • Company should publish proposal abstracts and hypotheses in advance and require researchers to submit their results for publication in peer-reviewed journals no matter the outcomes. Studies not accepted should still be summarized on the website.

PLOS article ‘How to Make More Published Research True’ by John P. A. Ioannidis at Stanford.

The following recommendations from this PLOS article describe how to improve standards for developing interventions more generally. We could incorporate any of them into our own criteria for research ethics.

  • Large-scale collaborative research
  • Adoption of replication culture
  • Registration (of studies, protocols, analysis codes, datasets, raw data, and results)
  • Sharing (of data, protocols, materials, software, and other tools)
  • Reproducibility practices
  • Containment of conflicted sponsors and authors
  • More appropriate statistical methods (e.g., statistical power rather than just a significant effect)
  • Standardization of definitions and analyses
  • More stringent thresholds for claiming discoveries or “successes”
  • Improvement of study design standards
  • Improvements in peer review, reporting, and dissemination of research
  • Better training of scientific workforce in methods and statistical literacy
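Ioannidis’s point about statistical power (rather than just a significant effect) can be made concrete with a standard power calculation. Here is a minimal sketch using the normal-approximation formula for a two-sample, two-sided comparison of means; the effect size of 0.3 is purely illustrative:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample, two-sided
    test of means, via n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2.
    (Normal approximation; an exact t-based calculation adds a bit more.)
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = .05
    z_power = z.inv_cdf(power)           # 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Detecting a small-to-medium effect (d = 0.3) at 80% power
# requires roughly 175 participants per group.
n = n_per_group(0.3)  # 175
```

Many of the single studies criticized above enroll far fewer participants than such a calculation would call for, which is exactly why a “significant” result alone is weak evidence.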

General Research Ethics Criteria

Here are some methodological approaches that help ensure we’re making progress in developing cognitive interventions.

  • Demand large sample size.
  • Demand replication, preferably exact replication, most preferably multiple exact replications.
  • Trust systematic reviews and meta-analyses rather than individual studies. Meta-analyses must assess the homogeneity of the studies they analyze.
  • Use Bayesian rather than frequentist analysis, or even combine both techniques.
  • Stricter p-value criteria. It is far too easy to massage p-values to get below 0.05. Also, have meta-analyses look for “p-hacking” by examining the distribution of p-values in the included studies.
  • Require pre-registration of trials.
  • Address publication bias by searching for unpublished trials, displaying funnel plots, and using statistics like “fail-safe N” to investigate the possibility of suppressed research.
  • Do heterogeneity analyses or at least observe and account for differences in the studies you analyze.
  • Demand randomized controlled trials. None of this “correlated even after we adjust for confounders” BS.
  • Stricter effect size criteria. It’s easy to get small effect sizes in anything.
