Cognitive Training as an Intervention to Improve Driving Ability in the Older Adult


zzzz

Mar 12, 2010, 3:49:25 PM
to Dual N-Back, Brain Training & Intelligence

cev

Mar 12, 2010, 4:10:30 PM
to Dual N-Back, Brain Training & Intelligence
Nice one - this is by Jaeggi and co.

"The training effects transferred to differential improvements for the
training group on other measures of working
memory and speed of processing. Unlike in our previous work (Jaeggi et
al., 2008) we did
not observe transfer to measures of intelligence."

Jonathan Toomim

Mar 12, 2010, 4:37:57 PM
to brain-t...@googlegroups.com
Currently, their sample size is very small:  only 11 young adults and 13 older adults have completed training and testing.  Their results so far are promising.  The finding that it improved older drivers' performance on the simulated driving task is particularly noteworthy, since this means DnB could save lives.

Quote from the paper regarding transfer to measures of intelligence:

There were no significant group by test session interactions for the intelligence measures or complex motor tasks for the young adults, although one of the intelligence measures exhibited a trend for transfer effects that scaled with training task gains. Again, there is insufficient data to statistically evaluate these transfer effects for older adults at this point in the study. Figure 3 shows a promising trend in the Walking While Talking task (Verghese et al., 2002), however, with the older adults in the intervention group exhibiting larger improvements on this measure of dual tasking while walking. This is particularly significant given that scores on this assessment are predictive of falls in older adults.

Jonathan

wzeller

Mar 12, 2010, 4:49:26 PM
to brain-t...@googlegroups.com
But doesn't this new finding call into question the merit of young adults spending hours using DNB to attempt to increase their fluid intelligence?


Jonathan Toomim

Mar 12, 2010, 5:58:36 PM
to brain-t...@googlegroups.com
Well, it's not good news for us young adults, but it doesn't mean that DnB is meaningless.  Absence of evidence is not evidence of absence.  The explanation that the authors gave in their Conclusion section is plausible:

This may have been a by-product of the rather extensive pre and post test battery of assessments that we performed, particularly given that one of the intelligence measures was always performed last in the sequence of tests. Given this, participants may have been too fatigued and / or unmotivated to perform these tests well.

I misreported in my last message:  56 young adults and 23 older adults have completed both training and testing.  In contrast, Jaeggi (2008) had 69 young adult subjects.

Gwern Branwen

Mar 12, 2010, 5:59:39 PM
to brain-training
On Fri, Mar 12, 2010 at 4:49 PM, wzeller <wze...@gmail.com> wrote:
> But doesn't this new finding call into question the merit of young adults
> spending hours using DNB to attempt to increase their fluid intelligence?

It definitely is important, and I think I'll be including it in the FAQ. It's certainly not a good result for gf.

The conclusion offers the lame explanation that maybe participants were sick of testing by the time they got to the end. But this doesn't seem too plausible to me. It says that only 'one of the intelligence measures was always performed last', so shouldn't the other measures show gains? And if people were just tired, that ought to depress both sets of scores (before & after) equally and not affect relative gains: presumably a bored 150-IQ person still scores better than a bored 130-IQ person, and if someone went from 130 to 150, their bored scores would still reflect that.

There is one thing I noticed: the absolutely huge variance in the young adults' DnB scores. Besides the zero overlap in scores (did I read that graph right? Did the worst young adult always score better than the best older adult in a given session? And why did the variance widen so massively in the last round, with the floor falling out?), at the end the young adults were anywhere from D3B to D7B, while the older adults were bunched at D2B-D3B.

A spread of 4 compared to a spread of 1 is pretty noticeable; whatever mental changes are happening to n-backers, there would seem to be *much* more of them in the young adults.

But I suppose a technical report won't tell us all that much. For
example, they used Raven's, but did they pull the same
you-only-have-10-minutes trick as with Jaeggi 2008, or did they take
Moody to heart and allot the full time span? It doesn't seem to say.

--
gwern

Jonathan Toomim

Mar 12, 2010, 11:18:22 PM
to brain-t...@googlegroups.com
I believe the error bars show the standard error of the mean (SEM), which equals stdev/sqrt(n). Thus, as n decreases (not all subjects completed the extra sessions), the standard error increases. They might also show the 95% confidence interval for the mean, which is roughly the mean plus or minus two standard errors.

The lack of overlap of the error bars does not mean that the data don't overlap, just that the uncertainty about the true values of the means is small compared to the apparent difference between the means.
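
A minimal sketch of that arithmetic in Python (the group sizes, means, and spreads below are made-up numbers for illustration, not values from the paper):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores for two groups of different sizes and spreads.
young = rng.normal(loc=5.0, scale=1.5, size=50)
older = rng.normal(loc=2.5, scale=0.5, size=20)

for name, scores in (("young", young), ("older", older)):
    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(len(scores))  # SEM = stdev / sqrt(n)
    half = 1.96 * sem                                # ~95% CI half-width
    print(f"{name}: mean={mean:.2f}, SEM={sem:.3f}, "
          f"95% CI=({mean - half:.2f}, {mean + half:.2f})")

# Non-overlapping confidence intervals for the means do not imply that
# the raw scores themselves don't overlap:
print("raw ranges:", (young.min(), young.max()), (older.min(), older.max()))

Shrinking n inflates the SEM, which is why the error bars widen in the sessions that fewer subjects completed.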

Also, keep in mind that the young adults were probably university students (average IQ ~115) and the older adults were probably drawn from the general population (average IQ ~100), which should account for some of the difference between the two groups in DnB scores.

Test scores from tired subjects are much more variable than test scores from alert subjects. The ability to detect statistically significant results depends on the number of data points (i.e. subjects), the variance of the data, the distribution of the data, the size of the effect, and chance. If you quadruple the variance (i.e. double the stdev), you need roughly four times as many data points to reach the same significance.
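
To make that scaling concrete, here is a back-of-the-envelope sketch using the standard two-sample normal-approximation formula; the effect size, alpha, and power values are illustrative assumptions, not numbers from the study:

import math

# Textbook approximation:
#   n per group ~ 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2
def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Two-sided alpha = 0.05, power = 0.80."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

delta = 1.0  # hypothetical true difference between groups, in raw score units
print(n_per_group(sigma=1.0, delta=delta))  # 16 subjects per group
print(n_per_group(sigma=2.0, delta=delta))  # 63 per group: double the stdev, ~4x the n

Quadrupling sigma^2 quadruples the required n, exactly as described above.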

jttoto

Mar 22, 2010, 10:09:49 PM
to Dual N-Back, Brain Training & Intelligence
Either way, this is big. Many people are playing this under the assumption that many aspects of their intellect will be improved, mainly Gf. This finding is congruent with Moody's criticism that n-back trains WM, so the gains are WM-specific, not applicable to the more difficult Raven's questions or other domains of intelligence. This also matches my own experience: my WM improved, but not my score on the Raven's.

In addition, since WM improves but intelligence does not, this finding conflicts with the idea that many cognitive measures can be reduced to WM; otherwise intelligence would have improved as well. Perhaps intelligent people, on average, display cognitive versatility (they score highly in many cognitive areas), and therefore correlations look like direct relations. It's the old correlation-is-not-causation cliche, but relevant nonetheless.

That being said, I think we are too hung up on improving intelligence. WM does indeed improve. And since WM is a stronger predictor of academic success than IQ, n-backing is still very useful. Still, now I am even more wary about WM's relation to success. Is it indeed WM contributing to success, or a second, unidentified factor that is correlated with WM? And if the latter, is n-backing improving that factor?

