Reinterpretation of Jaeggi et al.

Ari

unread,
May 24, 2009, 1:42:54 PM5/24/09
to Dual N-Back, Brain Training & Intelligence
This was published in the journal Intelligence by David E. Moody, who is
apparently a math tutor for high school students. What do you make of it?

http://dx.doi.org/10.1016/j.intell.2009.04.005

The May 13, 2008 issue of Proceedings of the National Academy of
Sciences featured a cover article that purported to demonstrate
increases in fluid intelligence following training on a task of
working memory (Jaeggi, Buschkuehl, Jonides, & Perrig, 2008). The
authors described their own findings as a “landmark result”. Their
study was the subject of an introductory comment by Robert Sternberg
(2008), as well as articles in the mainstream media, including a
lengthy column in a recent edition of The New York Times (Wang &
Aamodt, 2009).

In view of the potential significance of the study and the quantity of
attention it has received, the results have been subjected to
remarkably little critical analysis. A close examination of the
evidence reported by Jaeggi et al. shows that it is not in fact
sufficient to support the authors' conclusion of any increase in their
subjects' fluid intelligence.

What Jaeggi et al. reported were modest increases in performance on a
test of fluid intelligence following several days of training on a
task of working memory. The reported increases in performance are not
in question here. But the manner in which the test was administered
severely undermines the authors' interpretation that their subjects'
intelligence itself was increased.

The subjects were divided into four groups, differing in the number of
days of training they received on the task of working memory. The
group that received the least training (8 days) was tested on Raven's
Advanced Progressive Matrices (Raven, 1990), a widely used and well-
established test of fluid intelligence. This group, however,
demonstrated negligible improvement between pre- and post-test
performance.

The other three groups were not tested using Raven's Matrices, but
rather on an alternative test of much more recent origin. The Bochumer
Matrices Test (BOMAT) (Hossiep, Turck, & Hasella, 1999) is similar to
Raven's in that it consists of visual analogies. In both tests, a
series of geometric and other figures is presented in a matrix format
and the subject is required to infer a pattern in order to predict the
next figure in the series. The authors provide no reason for switching
from Raven's to the BOMAT.

The BOMAT differs from Raven's in some important respects, but is
similar in one crucial attribute: both tests are progressive in
nature, which means that test items are sequentially arranged in order
of increasing difficulty. A high score on the test, therefore, is
predicated on subjects' ability to solve the more difficult items.

However, this progressive feature of the test was effectively
eliminated by the manner in which Jaeggi et al. administered it. The
BOMAT is a 29-item test which subjects are supposed to be allowed 45
min to complete. Remarkably, however, Jaeggi et al. reduced the
allotted time from 45 min to 10. The effect of this restriction was to
make it impossible for subjects to proceed to the more difficult items
on the test. The large majority of the subjects—regardless of the
number of days of training they received—answered fewer than 14 test
items correctly.

By virtue of the manner in which they administered the BOMAT, Jaeggi
et al. transformed it from a test of fluid intelligence into a speed
test of ability to solve the easier visual analogies.

The time restriction not only made it impossible for subjects to
proceed to the more difficult items, it also limited the opportunity
to learn about the test—and so improve performance—in the process of
taking it. This factor cannot be neglected because test performance
does improve with practice, as demonstrated by the control groups in
the Jaeggi study, whose improvement from pre- to post-test was about
half that of the experimental groups. The same learning process that
occurs from one administration of the test to the next may also
operate within a given administration of the test—provided subjects
are allowed sufficient time to complete it.

Since the whole weight of their conclusion rests upon the validity of
their measure of fluid intelligence, one might assume the authors
would present a careful defense of the manner in which they
administered the BOMAT. Instead they do not even mention that subjects
are normally allowed 45 min to complete the test. Nor do they mention
that the test has 29 items, of which most of their subjects completed
fewer than half.

The authors' entire rationale for reducing the allotted time to 10 min
is confined to a footnote. That footnote reads as follows:

Although this procedure differs from the standardized procedure,
there is evidence that this timed procedure has little influence on
relative standing in these tests, in that the correlation of speeded
and non-speeded versions is very high (r = 0.95; ref. 37).

The reference given in the footnote is to a 1986 study (Frearson &
Eysenck, 1986) that is not in fact designed to support the conclusion
stated by Jaeggi et al. The 1986 study merely contains a footnote of
its own, which refers in turn to unpublished research conducted forty
years earlier. That research involved Raven's matrices, not the BOMAT,
and entailed a reduction in time of at most 50%, not the reduction of
more than 75% used in the Jaeggi study.

So instead of offering a reasoned defense of their procedure, Jaeggi
et al. provide merely a footnote which refers in turn to a footnote in
another study. The second footnote describes unpublished results,
evidently recalled from memory over a span of 40 years, involving a
different test and a much less severe reduction in time.

In this context it bears repeating that the group that was tested on
Raven's matrices (with presumably the same time restriction) showed
virtually no improvement in test performance, in spite of eight days'
training on working memory. Performance gains only appeared for the
groups administered the BOMAT. But the BOMAT differs in one important
respect from Raven's. Raven's matrices are presented in a 3 × 3
format, whereas the BOMAT consists of a 5 × 3 matrix configuration.

With 15 visual figures to keep track of in each test item instead of
9, the BOMAT puts added emphasis on subjects' ability to hold details
of the figures in working memory, especially under the condition of a
severe time constraint. Therefore it is not surprising that extensive
training on a task of working memory would facilitate performance on
the early and easiest BOMAT test items—those that present less of a
challenge to fluid intelligence.

This interpretation acquires added plausibility from the nature of one
of the two working-memory tasks administered to the experimental
groups. The authors maintain that those tasks were “entirely
different” from the test of fluid intelligence. One of the tasks
merits that description: it was a sequence of letters presented
auditorily through headphones.

But the other working-memory task involved recall of the location of a
small square in one of several positions in a visual matrix pattern.
It represents in simplified form precisely the kind of detail required
to solve visual analogies. Rather than being “entirely different” from
the test items on the BOMAT, this task seems well-designed to
facilitate performance on that test.

More generally, the foregoing considerations suggest a deeper problem
with the conclusions presented by Jaeggi et al.: To what extent does
improvement on any test of fluid intelligence reflect an increase in
actual intelligence rather than merely an increase in test-taking
skills? A full analysis of this issue is beyond the scope of the
present review, but the methodological challenges involved are
formidable and deserve further discussion.

Whatever the meaning of the modest gains in performance on the BOMAT,
the evidence produced by Jaeggi et al. does not support the conclusion
of an increase in their subjects' intelligence. Their research may be
sufficient to encourage further investigation, but any larger
inferences are unwarranted.

Iron

unread,
May 24, 2009, 1:58:45 PM5/24/09
to Dual N-Back, Brain Training & Intelligence
An interesting point I would like to bring into this thread is the
mention earlier of people with autism performing higher than average
on the Raven's despite lower-than-average working memory. That suggests
to me that the Raven's as a test may not require a large working memory
capacity. It looks to me like there may be some other bottleneck for
this test.

It is rather curious that they administered the tests in that fashion
and that they changed the tests mid-experiment. I hope that in the
future a more extensive test is administered after an even longer
period of DNB training. If all this is good for is increasing our
speed at solving problems we could already solve, I still feel that it
is an incredibly valuable exercise.

There are other studies indicating significant improvements in ADHD
symptoms and unrelated skills, so I still maintain the belief that DNB
is good for us and we are improving something vitally important by
training.

Pontus Granström

unread,
May 24, 2009, 2:03:57 PM5/24/09
to brain-t...@googlegroups.com

Pontus Granström

unread,
May 24, 2009, 2:20:29 PM5/24/09
to brain-t...@googlegroups.com
Here's another report supporting the claim. "General intelligence (g) is highly correlated with working-memory capacity (WMC). It has been argued that these central psychological constructs should share common neural systems. The present study examines this hypothesis using structural magnetic resonance imaging to determine any overlap in brain areas where regional grey matter volumes are correlated to measures of general intelligence and to memory span. In normal volunteers (N = 48) the results (p < .05, corrected for multiple comparisons) indicate that a common anatomic framework for these constructs implicates mainly frontal grey matter regions belonging to Brodmann area (BA) 10 (right superior frontal gyrus and left middle frontal gyrus) and, to a lesser degree, the right inferior parietal lobule (BA 40). These findings support the nuclear role of a discrete parieto-frontal network. "

http://www.informaworld.com/smpp/content~content=a788318424~db=all

jttoto

unread,
May 24, 2009, 2:31:01 PM5/24/09
to Dual N-Back, Brain Training & Intelligence

Moody brings up an excellent point, that perhaps DNB really does train
for the easier questions on the BOMAT, considering the larger working
memory load of the BOMAT. While he does highlight flaws in the Jaeggi
study, there are some things I would like to point out in Moody's
critique.

He admits that improvement on the BOMAT happens because of
improvements in spatial working memory, which DNB trains. He seems to
imply that there is no proof that DNB improves one's analytical
ability (which may be true), which is crucial for solving the more
complex problems on the Raven's. Instead, he says the improvements
appear because improved working memory alone can help on the easier
portions of the BOMAT.

This may be true, but there are plenty of studies that suggest a
correlation between working memory and fluid intelligence, so it would
seem reasonable that improving one will improve the other.

Also, isn't it still significant that DNB improves working memory?
Even if that is all it does, I'd say it is still pretty helpful in the
real world.

Ari

unread,
May 24, 2009, 2:33:45 PM5/24/09
to Dual N-Back, Brain Training & Intelligence
Yes. While Raven's seems to be more of a test of ability to deal with
complexity (intelligence?), the earlier levels of BOMAT test mostly
working memory. Working memory is indisputably related to
intelligence, but it may be the case that DNB training improves only
speed or capacity, not intelligence however constituted. As you point
out, though, this is a useful skill regardless.

Pontus Granström

unread,
May 24, 2009, 2:41:13 PM5/24/09
to brain-t...@googlegroups.com
But you forget that they saw improvements even among those with a high capacity working memory.

"However, our additional analyses show that there is more to
transfer than mere improvement in working memory capacity in
that the increase in Gf was not directly related to either
preexisting individual differences in working memory capacity
or to the gain in working memory capacity as measured by simple
or complex spans, or even, by the specific training effect itself."

Mike L.

unread,
May 24, 2009, 2:54:01 PM5/24/09
to Dual N-Back, Brain Training & Intelligence
But wait, isn't the speed or even the capacity with which one is
endowed to accomplish whatever task they might have at hand, also
strongly correlated with intelligence?

That would be like saying, a faster and more fuel efficient car does
not have a better engine than that of a slower, less fuel efficient
car.

To use a more concrete example, on the WAIS-IV IQ test there is
indeed a section on processing information and spatial reasoning, as
well as digit-span memorization. These categories, it can be said,
will indeed rise due to DNB training and thus increase overall IQ.
Therefore, even under the hypothesis Moody advances, that DNB does
not improve analytical processes, training with DNB will still
result in a higher IQ.

Mike L.

unread,
May 24, 2009, 3:52:29 PM5/24/09
to Dual N-Back, Brain Training & Intelligence
I wanted to add something:

Moody himself states that the processing of the "easier" tasks became
quicker in those people who trained with DNB. Now, using logic: when
someone does something quicker, or finds that they can perform it
quicker, that task has become easier for them. For this reason, to say
that someone performs something quicker but does not find it easier is
almost illogical; and therefore, to say that harder tasks remain just
as hard for the person who now finds the easy tasks easier just
doesn't make sense.

Taking this into consideration, it is more sensible to believe that
the easy tasks have become easier and that the harder tasks have
become easier too.

Pontus Granström

unread,
May 24, 2009, 4:27:15 PM5/24/09
to brain-t...@googlegroups.com
I agree. What they are claiming is that the first questions of the Raven's and the BOMAT are just questions that take working memory into account, but then after a few questions it measures analytical intelligence?

Ron Williams

unread,
May 24, 2009, 11:39:10 PM5/24/09
to brain-t...@googlegroups.com
You are assuming that the harder tasks are possible at all for a given person. This may not be so. If you are required to keep ten things in mind all at once and do a manipulation of them to achieve the result, it may never be possible, if you are only capable of holding seven things in mind.

The harder tasks of this kind will remain just as impossible as before, and will result in a big X on the score sheet. On the other hand, the easier tasks might well be done much more quickly than before if there's a speed improvement. I.e. speeding up attempts on the impossible will not give a better result!

Mike L.

unread,
May 25, 2009, 1:07:42 AM5/25/09
to Dual N-Back, Brain Training & Intelligence
No, but getting faster at an easier task, at least in terms of
intelligence and for the most part, indicates the possibility that one
has expanded one's capabilities, and as a result is much less prone to
struggle on that easier task and much more likely to succeed at a
harder one.

I mean, you simply cannot rule that out.

On May 24, 11:39 pm, Ron Williams <rhwil...@gmail.com> wrote:
> You are assuming that the harder tasks are possible at all for a given
> person. This may not be so. If you are required to keep ten things in mind
> all at once and do a manipulation of them to achieve the result, it may
> never be possible, if you are only capable of holding seven things in mind.
>
> The harder tasks of this kind will remain just has impossible as before, and
> will result in a big X on the score sheet. On the other hand, the easier
> tasks might well be done much more quickly than before if there's a speed
> improvement. I.e. speeding up attempts on the impossible will not give a
> better result!
>

Pontus Granström

unread,
May 25, 2009, 5:16:36 AM5/25/09
to brain-t...@googlegroups.com
Hi, read this article http://www.psych.rutgers.edu/~jose/courses/578/Conway_etal_2003.pdf (especially the "Neuroimaging studies of WMC and g" section). It shows that there is a high correlation between n-back "lure trials" and Raven's (r=0.54), so it's the lure trials (which take a lot of executive control) that correlate with intelligence. It also shows that n-back and Raven's engage the same areas of the brain (DLPFC and ACC) and that the correlation depends 92% on lure trials and activity in the DLPFC area, which means that the DLPFC is the mediator between g and WMC. Very interesting! Dual n-back's lure trials engage the DLPFC and perhaps strengthen it when we fail? Just like a muscle that fails sends signals to build more muscle tissue? Or just the fact that we activate the area strengthens the DLPFC and therefore the IQ score?

Ashirgo

unread,
May 25, 2009, 5:22:17 AM5/25/09
to Dual N-Back, Brain Training & Intelligence
Unfortunately, I cannot read this article: "404 Page Not Found: The
Internet Just Broke". Moreover, could you present your interpretation
of "lure trials"? I have a guess of my own, but... ;)

Pontus Granström

unread,
May 25, 2009, 5:38:07 AM5/25/09
to brain-t...@googlegroups.com
Hi, I've uploaded it in the files section. A "lure trial" is basically B-R-B-A in a 3-back session. Here the second B "lures the mind", since it's almost a 3-back match, and it therefore takes a lot of executive control not to press the L button. Since Raven's correlates with the lure trials, we might want to add a feature to the Brainworkshop program that displays the number of times we didn't fall for a lure trial? So after every session it would display
Visual: 80% Audio: 80% and Lure Trials: 75%?
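
To make the proposed statistic concrete, here is a minimal Python sketch (the function names, session format, and scoring rule are my own assumptions, not Brainworkshop's actual code or API) that finds lure positions in a single-modality n-back stimulus list and reports how often the user correctly withheld a response on them:

# Hypothetical sketch of the proposed lure-trial statistic; not Brainworkshop code.
# A "lure" is taken to be a stimulus that repeats at n-1 or n+1 positions back
# (tempting a response) without being a true n-back match.

def find_lure_positions(stimuli, n):
    """Indices whose stimulus matches n-1 or n+1 back, but not exactly n back."""
    lures = []
    for i, s in enumerate(stimuli):
        exact = i >= n and stimuli[i - n] == s
        near = ((n > 1 and i >= n - 1 and stimuli[i - (n - 1)] == s) or
                (i >= n + 1 and stimuli[i - (n + 1)] == s))
        if near and not exact:
            lures.append(i)
    return lures

def lure_accuracy(stimuli, responses, n):
    """responses[i] is True if the user pressed 'match' at position i.
    A lure is handled correctly when no match response was given."""
    lures = find_lure_positions(stimuli, n)
    if not lures:
        return None
    correct = sum(1 for i in lures if not responses[i])
    return 100.0 * correct / len(lures)

# The B-R-B-A example from above, in a 3-back session: the second B is a lure.
stimuli = ["B", "R", "B", "A"]
responses = [False, False, True, False]      # user wrongly pressed on the second B
print(find_lure_positions(stimuli, 3))       # [2]
print(lure_accuracy(stimuli, responses, 3))  # 0.0

A per-session "Lure Trials: 75%" line would then just be this percentage computed over the session's visual and audio streams.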

Ashirgo

unread,
May 25, 2009, 6:06:02 AM5/25/09
to Dual N-Back, Brain Training & Intelligence
Thank you. One could also consider an option of manipulating the math a
little so as to increase the chance of "lure trials" :) Regrettably, I
cannot see how it would work at higher n-levels. That is to say, I
have never perceived lures as a problem/nuisance/etc.

Pontus Granström

unread,
May 25, 2009, 6:34:33 AM5/25/09
to brain-t...@googlegroups.com
I think there are some cases where you really are "lured"; not that it's necessarily hard, but it's still relatively harder than just a "K-L-R-[K]" type of sequence. For 4- and 5-back there is some really "luring" stuff.

Wade

unread,
May 25, 2009, 10:35:06 AM5/25/09
to Dual N-Back, Brain Training & Intelligence
Ron,

Your argument makes a hidden assumption that working memory and "speed
of processing" are fully separable components of mental performance.
They apparently are not. Studies have consistently found that working
memory and "processing speed" covary:

From "Variation in Working Memory", page 61:

"In many studies, WMC (Working memory capacity) was found to be
strongly correlated to measures of processing speed [they give several
citations]. ...one obvious explanation is that working memory tasks
are complex span tasks which require a processing component
themselves, and the speed of performing the component is one source of
variance in complex span tasks. However, this cannot be the whole
story, because not all WMC tasks are complex span tasks, and speed
measures are also correlated to working memory tasks lacking a
processing component. A clear example of this is in the study by
Conway et al. (2002), who found a substantial correlation of speed
with a factor that extracted the common variance of short-term and
working memory tasks, but none with a second factor capturing the
specific variance of working memory. This is the opposite of what one
would expect if speed is related to complex span through the
processing component of the latter."

A question that needs answering is whether the "analytical" abilities
that are supposedly required of the more difficult BOMAT questions
covary with WM and processing speed too. Although it is tempting to
think that they do not, I would not jump to this conclusion either.



On May 24, 10:39 pm, Ron Williams <rhwil...@gmail.com> wrote:
> You are assuming that the harder tasks are possible at all for a given
> person. This may not be so. If you are required to keep ten things in mind
> all at once and do a manipulation of them to achieve the result, it may
> never be possible, if you are only capable of holding seven things in mind.
>
> The harder tasks of this kind will remain just has impossible as before, and
> will result in a big X on the score sheet. On the other hand, the easier
> tasks might well be done much more quickly than before if there's a speed
> improvement. I.e. speeding up attempts on the impossible will not give a
> better result!
>

Wade

unread,
May 25, 2009, 10:36:30 AM5/25/09
to Dual N-Back, Brain Training & Intelligence
What are "lure" trials?

On May 25, 4:16 am, Pontus Granström <lepon...@gmail.com> wrote:
> Hi, read this articlehttp://www.psych.rutgers.edu/~jose/courses/578/Conway_etal_2003.pdfes...
> the "Neuroimaging studies of WMC and g" it shows that there is
> high correlation between n-back "lure trials" and ravens (r=0.54), so it's
> the lure trials (which takes alot of executive control) that correlates with
> intelligence. It also shows that n-back and Ravens engages the same areas of
> the brain (DLPFC and ACC) and that the correlation depends 92% on lure
> trials and activity in the DLPFC area, which means that DLPFC is the
> mediator between G and WMC. Very interesting! Dual-n-backs lure trials
> engages DLPFC and perhaps strengthen it with we fail? Just like a muscle
> that fails sends signals to build more muscle tissue? Or just the fact the
> we activate the area strengthens DLPFC and therefore IQ-score?
>

Wade

unread,
May 25, 2009, 10:41:03 AM5/25/09
to Dual N-Back, Brain Training & Intelligence
Cool idea.

On May 25, 4:38 am, Pontus Granström <lepon...@gmail.com> wrote:
> Hi, I've uploaded it in the files section. A "lure trial" is basiclly
> B-R-B-A in a 3-back session. Here the second b "lures the mind" since it's
> almost 3-back and therefore takes alot of executive control not to press the
> L button. Since ravens correlates with the "lure trials" we might want to
> add a feature to the the Brainworkshop program that display the number of
> time we didnt fall for a "lure trials"?  So after every session it would
> display
> Visual: 80% Audio: 80% and Lure Trials: 75%?
>

Vlad

unread,
May 25, 2009, 11:54:29 AM5/25/09
to Dual N-Back, Brain Training & Intelligence
Nice article, but for me there was hardly anything new (I hadn't
caught that change to ONLY the BOMAT in Jaeggi). Nevertheless, in my
study (people are flaking on the final tests :(, let's hope at least 15
more will take it) I use only the Raven's; we'll see. And I think the
other theoretical questions raised in this article (efficiency vs. a
new quality of intellect) have already been discussed here.

Mike L.

unread,
May 25, 2009, 2:36:33 PM5/25/09
to Dual N-Back, Brain Training & Intelligence
hmm, someone else posted on here that you had already finished your
studies. Guess it was misinformation.

In any case, interesting stuff; keep us posted, it'd be interesting to
see what other results have come from secondary studies [even if they
are on a more minimal scale].

Ari

unread,
May 25, 2009, 2:54:58 PM5/25/09
to Dual N-Back, Brain Training & Intelligence
"For this reason, to say that performing something quicker but not
finding it easier to do, is almost illogical"

Why? I'll give you an example: caffeine vs. amphetamines. The first
improves speed in many people; you'll work faster but not necessarily
better (and possibly worse). There will be no increase in the
complexity of your thoughts.

Amphetamine, by contrast, improves not only speed but also maximum
thought complexity. (At least this has been my experience!)

It's like the distinction between processor speed (GHz) and processor
architecture. The former is a measure of cycles per second, the latter
is design. A well-designed, more efficient processor can outsmart a
poorly-designed one that works faster.

Ari

unread,
May 25, 2009, 3:12:53 PM5/25/09
to Dual N-Back, Brain Training & Intelligence
Also, I've uploaded the PDF version.

Ron Williams

unread,
May 25, 2009, 5:53:53 PM5/25/09
to brain-t...@googlegroups.com
My point is that this is _not_ a continuum, and so one can achieve a
speed increase that will make the easier tasks easier, but leave the
difficult tasks still impossible.

Pontus Granström

unread,
May 25, 2009, 5:55:14 PM5/25/09
to brain-t...@googlegroups.com
But then you are saying that these questions don't take any "analytical intelligence" into account, just speed and memory?

Ron Williams

unread,
May 25, 2009, 5:56:49 PM5/25/09
to brain-t...@googlegroups.com
I meant to say 'speed/working memory increase'

Mike L.

unread,
May 25, 2009, 6:36:49 PM5/25/09
to Dual N-Back, Brain Training & Intelligence
The examples you give, though, do not relate to the study that was
done; the people in the study became quicker not after having
ingested something alien but rather after having trained.
The example I pose, and one which I think bears more fruit for
something like the study that was done, is the following:
A man who daily maxes out at 10 repetitions (in the span of 30
seconds) of a 200-pound weight will eventually build muscle, become
more efficient in his reps at that weight, and become capable of
lifting more weight. This same man, then, after 19 days of training
(familiar?) would theoretically have gained muscle and can now lift
250 pounds for a max of 8 reps (same span of 30 seconds). Now, if
this same man were to revisit the earlier weight of 200 pounds, he
would find his ability to lift that weight much greater, and thus a
weight at which he once maxed out at 10 reps he can now max out at
15 reps (span of 30 seconds), not only becoming more efficient by
virtue of his ability to lift more weight, but also quicker in his
ability to do more reps of the same weight (200 lb) in the same
time.

Toto

unread,
May 26, 2009, 5:23:18 AM5/26/09
to Dual N-Back, Brain Training & Intelligence
to say that performing something quicker but not finding it
> easier to do, is almost illogical, and therefore, to say that harder
> tasks remain just as hard for the person who has found it easier to
> perform the easy tasks just doesn't make sense.

Of course, the easy tasks became easier to do, but that does not yet
mean that the harder ones became easier too. You have probably seen a
culture-free test. Once you understand the logic behind a problem, you
have to remember what elements the correct answer should include. You
have to form an image of it, if you can, or check the elements one by
one. If you don't understand the logic, WM is of no use.

> But wait, isn't the speed or even the capacity with which one is
> ENDOWED to accomplish whatever task they might have at hand, also
> strongly correlated with intelligence?

It is strongly correlated if it's not trained. If there is a
correlation between two things, that doesn't necessarily mean that one
causes the other. There is said to be some correlation between looks
and intelligence. Does this mean that having plastic surgery will make
you smarter?

I admit that by training WM it may be possible to increase an IQ SCORE
significantly if the test is an easy one or if the initial score was
low, but then intelligence and score will probably be two different
things.

"hmm, someone else posted on here that you had already finished your
studies. Guess it was a misinformation".

I never said Vlado had finished his studies. I said he had not tested
the control group yet. It seems he still hasn't. So, it was not a
misinformation, it was a misunderstanding.


Pontus Granström

unread,
May 26, 2009, 5:33:47 AM5/26/09
to brain-t...@googlegroups.com
But haven't you missed the whole point here, Toto? What you are claiming is that the easier questions are just a matter of working memory? How can you then call it a test of analytical intelligence (formally you would have to split the test)?
Who decides what's easy and what's not? To me the interesting thing that's also mentioned is that it's the shared neural networks that cause the increase in IQ (read the article I posted), and that there is a high correlation between "lure trials" and RAPM score (based on the neurological structure and plasticity). The increased score could not be accounted for by an increase in working memory (since all people increased their IQ scores).

Pontus Granström

unread,
May 26, 2009, 5:39:26 AM5/26/09
to brain-t...@googlegroups.com
All questions are supposed to be "harder", and as such they cannot all depend equally on the same thing, because if they did they would lose their power to discriminate.

Toto

unread,
May 26, 2009, 6:21:01 AM5/26/09
to Dual N-Back, Brain Training & Intelligence
I haven't missed anything. I am not saying that the easier questions
are just a matter of working memory. I'm just saying they are within
the reach of most people. To solve a problem you need to understand
its logic. Then it's all about memory.
Easy is what most people can do.

>To me the interesting thing that's also
> mentioned is that it's the shared neural networks that causes the increase
> in IQ .
That must be a hypothesis. We still don't even know if there is an
increase in IQ.

>The increased score could not be accounted for by increase
> in working memory (since all people increased their IQ scores).

All people gave more correct answers. Those with the lowest initial
scores showed the most improvement. There was an increase in the scores
as long as there was an increase in n-level (look at the graphs in the
study paper). We don't know what happens afterwards.

The reason I think that IQ score cannot be increased if the test is a
difficult one is my own experience. I took a high-ceiling timed test
before and after training and there was no difference. I could solve
the easier items before DNB, and the difficult ones... well, they were
still difficult for me :). I admit that at first I thought you were
all suffering from a mass psychosis :) I hope there really is some
improvement on easier tasks, though I'm sure many people just take a
test too many times, less than six months after the first time (some
tests should not be taken a second time at all), or refuse to
understand that scores on different tests can be different too.

On May 26, 12:33 pm, Pontus Granström <lepon...@gmail.com> wrote:
> But havent you missed the whole point here Toto? What you are claiming is
> that the easier questions are just a matter of working memory? How can you
> then call it a test of analytical intelligence (formally you would have to
> split the test)?
> Who decides what's easy and not? To me the interesting thing that's also
> mentioned is that it's the shared neural networks that causes the increase
> in IQ (read the article i posted) and that there is a high correlation
> between "lure trials" and RAPM score (based on the neurological structure
> and plasticity). The increased score could not be accounted for by increase
> in working memory (since all people increased their IQ scores).
>

Pontus Granström

unread,
May 26, 2009, 6:36:11 AM5/26/09
to brain-t...@googlegroups.com
So you mean that the IQ score isn't Gaussian distributed, that it's more "flat" and then rises (which doesn't make any mathematical sense)? What you are saying is that those who score low don't have a low IQ, just not a good enough working memory. If it were like that, a person could score low on the easy part (due to a lack of WM) but complete the more difficult items? I still don't think your reasoning is consistent/logical.

Toto

unread,
May 26, 2009, 7:26:22 AM5/26/09
to Dual N-Back, Brain Training & Intelligence
"So you mean that the IQ-score isnt gaussian distributed, that it's
more
"flat" and then raises (doesnt make any mathematical sense)?"

I have no idea what you are talking about :)

" What you are
saying is that those who score low doesnt have a low IQ just not a
good
enough working memory".
Didn't you say that WM strongly correlates with IQ? If that is true,
then in most cases those with bad WM will have low intelligence too.

"If it were like that a person could score low on the
easy part (due to a lack of WM) but complete the more difficult
items?"

Tests are usually not divided into an easy and a difficult part. You
have a certain amount of time for the whole test, and the score does
not depend on the difficulty of the problems solved, it depends on
their number. If WM capacity is lower, even if you are very
intelligent, it will probably take you more time to solve a problem.
The result will be fewer problems solved and, consequently, a lower
score. If the problems in a test are more difficult, they are supposed
to be solved in more time and the stress is not so much on WM, it is
on some reasoning ability.

On May 26, 1:36 pm, Pontus Granström <lepon...@gmail.com> wrote:
> So you mean that the IQ-score isnt gaussian distributed, that it's more
> "flat" and then raises (doesnt make any mathematical sense)? What you are
> saying is that those who score low doesnt have a low IQ just not a good
> enough working memory . If it were like that a person could score low on the
> easy part (due to a lack of WM) but complete the more difficult items?. I
> still doesnt think your reasoning is consistent/logical.
>

Pontus Granström

unread,
May 26, 2009, 9:56:38 AM5/26/09
to brain-t...@googlegroups.com
Well, have you ever heard of the normal distribution? IQ is supposed to be distributed like that. The problems on the Raven's etc. take you further to the right on the scale for every problem you solve. So if you get 4 more questions right, your IQ will probably go up by something like 10-15 points. But according to you this shouldn't be considered a gain in IQ, just in "working memory". So if someone scores 10 correct answers and then 14, he should still not be considered to have analytical intelligence, just a good working memory? For which type of questions does working memory not play a role?
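
For what it's worth, here is the arithmetic behind the "4 more right, roughly 10-15 points" figure as a small Python sketch, assuming raw scores in the norm group are roughly normally distributed; the norm mean and SD below are invented for illustration and are not actual BOMAT or Raven's norms:

# Illustration only: converting a raw matrices score to an IQ-style score
# (mean 100, SD 15) under an assumed normal model. Norms are made up.

def raw_to_iq(raw, norm_mean, norm_sd):
    z = (raw - norm_mean) / norm_sd       # how many SDs above the norm mean
    return 100 + 15 * z

norm_mean, norm_sd = 10.0, 4.0            # assumed raw-score norms
for raw in (10, 14):
    print(raw, "->", round(raw_to_iq(raw, norm_mean, norm_sd)))
# 10 -> 100, 14 -> 115: with these assumed norms, four extra correct answers
# move the score by about 15 points, the kind of jump being discussed.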

Pontus Granström

unread,
May 26, 2009, 10:01:18 AM5/26/09
to brain-t...@googlegroups.com
Is it really fair to assume that the "easy questions" all depend 100% on working memory and the hard ones don't?

Wade

unread,
May 26, 2009, 10:35:29 AM5/26/09
to Dual N-Back, Brain Training & Intelligence
Vlad,

So people are participating in your study and they aren't even showing
up to take the test? I wonder why. Maybe they feel they didn't get too
far on dual n-back and so don't feel motivated to retake it?

When you say they are scoring 2 or 3 points higher, do you mean their
IQ scores? Or do you mean they are successfully answering 2 or 3 more
questions? Also, when they retake it, is it the exact same questions or
is there an alternate form of the Raven's?

Wade

unread,
May 26, 2009, 10:52:38 AM5/26/09
to Dual N-Back, Brain Training & Intelligence
Ron,

Well, I guess that testing intelligence cannot be a continuum from
easier questions to harder ones, since the questions provide
discrete feedback on a person's performance. However, this doesn't
necessarily mean that the brain circuitry responsible for varying
levels of performance on these tests doesn't vary along a continuum
from slow (with poor analytical ability) to fast (with good analytical
ability).

My point is that, to my knowledge at least, it is not established that
the neural circuitry responsible for faster processing in one person
and slower processing in another, and for better WM in one person and
poorer WM in another, is not the exact same circuitry. In fact,
according to many neuroscientists, it's tempting to think it is, since
the two covary. In other words, the phenomena of "fast thinking" and
"working memory" may only be epiphenomena of the same physical
circuitry.

Also, while we do understand "analytical ability" as a separate
phenomenon, it is still a human construct, and we don't yet know just
how it relates to other human constructs like WM and speed. Increases
in WM may tend to improve speed and might also improve analytical
ability. Maybe analytical ability improves with improved processing
speed (not CAUSED by it).

IQ tests, which are discrete, may not be sensitive enough to catch
what are very likely subtle changes in these things after only a
month of training.

Wade

unread,
May 26, 2009, 10:58:34 AM5/26/09
to Dual N-Back, Brain Training & Intelligence
Ari,

I think this may be a false analogy to what Mike is trying to say.
"Performing something quicker" is assumed to mean "successfully
performing it quicker." When a person is hyped up on caffeine they may
move from thought to thought or from task to task quicker, but if they
aren't doing it as accurately then this isn't the same level of
performance.

Wade

unread,
May 26, 2009, 11:42:24 AM5/26/09
to Dual N-Back, Brain Training & Intelligence

"Of course, the easy tasks became easier to do, but that doesnot yet
mean that the harder became easier too. You probably have seen a
culture-free test. Once you understand the logic behind a problem, you
have to remember what elements the correct answer should include. You
have to form an image of it, if you can, or check the elements one by
one. If you don't understand the logic, WM is of no use."


I think this confuses the issue at the heart of Mike's argument: what
enables one person to "understand the logic" of a problem of which
they had no prior knowledge and arrive at a correct solution, when
another person cannot? One's ability to analyze a problem or object
and synthesize a novel solution involves the executive network, which
is responsible for fast processing of *novel* information. I realize
that solving a problem fast and solving it correctly are two distinct,
measurable phenomena. That fact, however, creates the confusion:

Consider by analogy how fast a person can play a scale on a piano, or
how fast a person can play a lick on a guitar. This would not in any
way predict the intelligence of the person. It may only reflect
practice time. However, when we "solve a problem" either slow or fast,
we are talking about a *new* problem, the solution to which has yet to
be learned. Moody must be assuming that the solutions to easier
questions on an iq test are already "known" and that working memory
and a person's "speed of processing" help you think of the answers
more quickly. While this may actually be true, Mike's argument is
still valid.

Raven's Progressive Matrices presumably doesn't involve crystallized
knowledge, even on the easier problems. A person still has to use
their executive ability to analyze what is given and synthesize *new*
knowledge to give an answer. A person who can go through the process
of synthesizing new knowledge faster, relative to a level of
difficulty at which a duller person can still perform, will certainly
perceive this process as easier than the duller person does. This
person is also probably smarter. Since they analyze and synthesize
faster, they have probably crystallized more knowledge in their
lifetime as well.

"If you don't understand the logic, WM is of no use." ...this seems
illogical :). If you don't understand the logic, you may need more of
the "stuff" working WM is made of.

This is probably oversimplifying the issue too, though. There have
been studies showing that parietal lobe activity is responsible for
some of what we call "attention" and certainly for problem solving. A
person can have a faulty executive network and a superior parietal
region, which results in the kind of "freakish" intelligence where a
person can figure out how the universe works but can't figure out how
to get along socially or pay their bills on time. All else being equal,
though, better executive control = better WM = higher intelligence =
faster processing of information at a given level of difficulty.

Iron

unread,
May 26, 2009, 2:12:12 PM5/26/09
to Dual N-Back, Brain Training & Intelligence
So, if we consider each person to have stats (yes... like in an RPG),
then for the sake of discussion let's assume one has a WM stat and a
"critical faculty" stat (the ability to determine relations between
objects). Now, if we assume that every problem has a lower limit on
the required stats to solve it, and that both of these stats must be
exceeded by an individual for them to successfully solve the problem,
then these requirements could be very different for different
problems. For example, one problem could require a superior critical
faculty while only taxing one's working memory up to 3 objects. This
problem would not be solved more easily by someone who improves their
working memory if they don't already have the superior critical
faculty: an individual with an average critical faculty and a working
memory of "8" still has too weak a critical faculty to solve the
problem.

So if we assume the possibility that there are two distinct stats, one
of which is not trained by DNB, we can see why there are still
problems that DNB cannot train us to solve.

This doesn't mean, however, that we are not training ourselves to be
able to solve problems that we could not solve before. It simply
means that on these tests, the problems become more difficult by
elevating the critical-faculty requirement more than by increasing
the WM requirement.
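
Here is a toy Python sketch of that two-stat model (the stat names and all numbers are invented for illustration): a problem is solved only when both requirements are met, so raising the WM stat alone never unlocks a problem gated on the critical-faculty stat.

# Toy model of the "RPG stats" argument above; all values are illustrative.

def can_solve(person, problem):
    # Both requirements must be met; a surplus in one stat does not
    # compensate for a deficit in the other.
    return (person["wm"] >= problem["wm_req"] and
            person["critical"] >= problem["crit_req"])

problem = {"wm_req": 3, "crit_req": 9}   # easy on WM, hard on critical faculty

before = {"wm": 5, "critical": 6}
after = {"wm": 8, "critical": 6}         # WM trained up, critical faculty unchanged

print(can_solve(before, problem))  # False
print(can_solve(after, problem))   # False: more WM alone does not unlock it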

Mike L.

unread,
May 26, 2009, 3:46:13 PM5/26/09
to Dual N-Back, Brain Training & Intelligence
I mentioned earlier in this same group the theory that DNB, like many
medicines, might not work for some people. Simply put, some medicines
just do not have the desired effect in some people (oftentimes, due to
an allergy, they have quite the opposite effect), so perhaps we can
postulate that DNB, like a medicine, just doesn't work for certain
people's brain composition. Why? Who knows. The bottom line is, if
many people claim they feel a difference and have proof of that
difference, while a select few state the contrary, there must be a
sound reason, one which does not completely reject the purported
results of DNB. Why? Simply because there is no concrete evidence to
corroborate such a rejection.

Iron

unread,
May 26, 2009, 4:33:28 PM5/26/09
to Dual N-Back, Brain Training & Intelligence
And as I recall I replied saying you had no basis for determining that
we have seen anyone for whom it does not work. You reasoned that
slower n-level progression indicated a lack of efficacy when n-level
is not correlated with IQ increase! The reason some people do not
notice a difference could simply be that they do not self-monitor
well enough to notice the differences in their cognition. If they are
not taking an objective test to determine the effect, they are self-
reporting, which can be very suspect.

Can you relate your hypothesis further to the topic of this thread?
I'm having a hard time making a connection.

Mike L.

unread,
May 26, 2009, 5:04:41 PM5/26/09
to Dual N-Back, Brain Training & Intelligence
There was someone earlier on this thread who said that they themselves
had not seen any difference in their cognition based on a test they
took. This might be for a plethora of reasons, but when everything
else is ruled out, I hypothesized, one could say that it simply does
not work for them. That is a last resort, though, nothing to jump to
immediately.

And as for earlier threads in which I touched upon the efficacy of DNB
as related to n-level, I merely stated that if you wanted to reach a
higher n-level (if that was your purpose) yet were stuck at a certain
level, you would have to proceed to the next n-level and struggle in
that one (because upon revisiting the previous n-level, you would find
that you had indeed improved)...

But that was something else completely..

Toto

unread,
May 27, 2009, 6:14:22 AM5/27/09
to Dual N-Back, Brain Training & Intelligence
I admit I'm simplifying the issue. I know that WM is involved in
the process of solving a problem, not just at the end of it. But I
think it is just a basis: you can't solve a problem if you don't keep
certain things in your mind, but they are just things you operate
with... I'm sorry I can't express myself better :)

I was trying to find out what the correlation between WM and
intelligence is and came across this: http://redalyc.uaemex.mx/redalyc/pdf/727/72718421.pdf

According to it, there is a higher correlation between intelligence and
executive functioning (and the correlation is higher for the more
difficult items) than between intelligence and WM. This might explain
why the participants with lower initial scores improved most: the
questions in the beginning are easier and the difficulty is supposed
to increase with each question.

"Moody must be assuming that the solutions to easier
questions on an iq test are already "known" and that working memory
and a person's "speed of processing" help you think of the answers
more quickly"
I don't think anyone is assuming such a thing...

Ari

unread,
May 27, 2009, 5:01:58 PM5/27/09
to Dual N-Back, Brain Training & Intelligence
"Now, if we assume that every problem has a lower limit for
the required stats to solve the problem, and that both of these stats
must be exceeded by an individual for them to successfully solve the
problem, these stats could be very different for different problems."

This is precisely the crux of the issue. Working memory, whatever its
correlates and networks (regarding the studies mentioned previously),
is unquestionably not the sole determinant of intelligence, or even of
fluid intelligence. There are multiple components that comprise the
faculty of problem solving, of which WM is but one. So a score of 0 on
any given set of problems could reflect a weakness in any, some, or
all of these components, and raising a specific component could make
your score anywhere from 1-100, or even not improve it at all. If all
the questions on the test are unsolvable if one lacks the necessary WM
capacity, but only some tax other abilities, then improving WM will
only improve one's scores on the first few problems, not on all of
them.

Working memory is a short-term ability to store and manipulate
information. The higher centers of cognition presumably operate on
whatever is given to them by working memory. If you can't handle the
data, then your prefrontal cortex certainly can't work with it, so the
"stat" (borrowing the RPG analogy, which I think is quite apt) that
determines the operating capacity of everything else is WM. Actually,
the Windows Experience Index is an excellent metaphor:
http://www.microsoft.com/windows/windows-vista/get/experience-index.aspx
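
As a tiny sketch of that metaphor (component names and scores are invented; the min() rule is simply how the Windows Experience Index reports its base score): the effective capacity is set by the weakest component, so a low WM subscore caps everything built on top of it.

# Illustration only: base score = lowest subscore, as in the Windows
# Experience Index metaphor. Component names and values are made up.

subscores = {"working_memory": 4.2, "processing_speed": 6.8, "reasoning": 7.1}
base_score = min(subscores.values())
bottleneck = min(subscores, key=subscores.get)
print(base_score, bottleneck)   # 4.2 working_memory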

Pontus Granström

unread,
May 27, 2009, 5:35:37 PM5/27/09
to brain-t...@googlegroups.com
Still, it seems strange that even at the midpoint of the BOMAT, for example, the items should only tax WM. At what number do the "real questions" begin? And besides, if everybody increases their IQ score with DNB, maybe they would have to remove the questions that correlate 100% with DNB?

Pontus Granström

unread,
May 30, 2009, 2:14:09 PM5/30/09
to brain-t...@googlegroups.com
The whole idea of IQ being normally distributed falls apart if the "easy questions" are questions that everybody can solve. There shouldn't be any difference in principle between a question separating someone from 80 to 90 IQ and a question separating a person from 130 to 140. I have a few objections to your idea. First of all, the participants were university students; it's fair to assume they were average or above average in IQ, therefore the questions they got correct the first time could not be considered easy. There are actually questions that almost everyone can solve, but this wasn't the case here, since they solved more questions the second time.

Ari

unread,
May 31, 2009, 2:38:03 PM5/31/09
to Dual N-Back, Brain Training & Intelligence
"the questions that they got correct the first time could not be
consider easy, there are actually questions that almost every can
solve but his wasnt the case since they solved more questions the
second time."

Regardless of your intelligence, when you have 10 minutes to take a
test designed to be taken in 45, you will only be able to answer the
easy questions.

On May 30, 1:14 pm, Pontus Granström <lepon...@gmail.com> wrote:
> The whole idea of IQ being normal distrubted falls if the "easy questions"
> are questions that everybody can solve. It shouldnt be any principle
> diffrence between a question separating one from 80 to 90 IQ then a
> questions separating a person from 130 to 140. I have a few objections to
> your idea. First of all the group participation were University students
> it's fair to assume that they were average or above average in IQ, therefore
> the questions that they got correct the first time could not be consider
> easy, there are actually questions that almost every can solve but his wasnt
> the case since they solved more questions the second time.
>
> On Wed, May 27, 2009 at 11:35 PM, Pontus Granström <lepon...@gmail.com>wrote:
>
> > Still it seems strange that even at midpoint at BOMAT for example only
> > should tax WM, so at what number do the "real questions begin", and besides
> > if everybody increase their IQ-score with DNB maybe they would have to
> > remove those questions that are correlates with DNB 100%?
>

Pontus Granström

unread,
May 31, 2009, 2:39:10 PM5/31/09
to brain-t...@googlegroups.com
How do you explain that they didn't all get the same score, and that it varied with dual n-back training time?

Shamanu999

unread,
May 31, 2009, 3:40:24 PM5/31/09
to brain-t...@googlegroups.com
This is an interesting discussion.

I have come up with my own theory about the Jaeggi study.
There are two problems with the study:
1. why only 19 days?
2. why only 10 minutes for the test?
These points are really strange; would anyone who seriously wants to test a real IQ improvement from training use such extremely tight limitations?

Let's compare the 19 days to fitness training: what would you expect after 19 days of training in the gym?
I would expect to get used to the training, but not much in the way of physical changes at all.
So how can we explain the improvements? Well, first there wasn't a well-suited control group;
in my opinion the controls should have gotten a "challenging" task where we wouldn't expect any IQ changes, like Sudoku.
But the more important part of this theory is that most people (especially after school) aren't really good at concentrating on one task.
So give these people a challenging task where they have to concentrate, and their ability will increase quickly to some degree in a short time.

Theoretical support for this theory:
* the 10 minutes part - with higher concentration you will complete especially simple tasks faster
* the 19 days - might be perfect for measuring improvements in people who aren't used to concentrating
* don't use a system - if you are used (compared to the average) to concentrating because of solving Sudoku games or something similar, the chance is higher that you
will use a strategy for any task like n-back (using a strategy is more familiar to you).
So the ability to concentrate in these people would on average already be higher -> less improvement.

Any thoughts on this?

Pontus Granström wrote:

Pontus Granström

unread,
May 31, 2009, 4:00:58 PM5/31/09
to brain-t...@googlegroups.com
Don't your arguments contradict each other? First of all, the observation is clear: the more time spent training, the more correct answers. If DNB just trains concentration, and that's the reason they got more correct answers, then the brain does form new cells/neural pathways in just 19 days, and especially in those that trained more than 8 days (otherwise they would have gotten the same score).
If, let's say, 14 questions out of 29 are trainable, then I would say that the whole test is invalid and that it measures not intelligence but concentration. On the other hand, high intelligence demands concentration and memory, so they might be a bottleneck for some; but that they should be the bottleneck for everyone with a "low IQ", and that each of the 14 questions just demands slightly more memory than the previous one, seems very unlikely! Do you get my point?

Mike L.

unread,
May 31, 2009, 4:36:44 PM5/31/09
to Dual N-Back, Brain Training & Intelligence
I think I understand. To say that subjects simply got used to
concentrating (which is what Sham is saying) and got higher scores
purely for that reason is to forget all too easily that the reason the
Jaeggi study did not use a control group which performed another task
was because it did not need to; it was not necessary. Why?

Simply put, it's been done before. The study itself mentions how other
studies before it, which used other methods of training such as
playing chess or Sudoku, failed to prove their efficacy in
improving IQ scores.

For this reason, to have used a control group which performed a task
whose efficacy in improving IQ scores had already been disproved would
have been redundant and highly unnecessary; all that needed to be
shown, in fact, was whether or not a different task, DNB, would prove
to be effective where the other tasks were not.

So Sham's theory of concentration simply does not bear enough
fruit to hold true. The only truly disputable thing in the Jaeggi
study is the 10-minute tests (which Sham's theory, once again, does
not even dispute validly), and even that proves somewhat
insubstantial in undermining the results of the Jaeggi study.

Shamanu999

unread,
May 31, 2009, 4:47:35 PM5/31/09
to brain-t...@googlegroups.com
My interpretation of a higher concentration level in this connection was a bit different.
As I see it, an average person doesn't need much effort to concentrate at a higher level, but they aren't used to it.
(The phrase "higher level" here is limited to unused resources that already exist.)
It's like already having enough physical strength to balance two chairs: you just have to learn the coordination part,
which is a lot easier than for someone who first needs to strengthen his body for the task.
So they physically have the ability to concentrate better but can't access it fully for the 10-minute test.

Yes, high intelligence demands a higher degree of concentration and memory, but there might be other problems with this training if Jaeggi was only measuring basic concentration at the level of unused resources.

Pontus Granström wrote:

Shamanu999

unread,
May 31, 2009, 5:00:20 PM5/31/09
to brain-t...@googlegroups.com
The 10-minute part and the 19 days in combination are exactly the problem I see with regard to the necessity of a control group.
Yes, such studies with Sudoku and others have been done before, but as far as I know not with the same changes
(only 10 minutes for the test).
And this is the critical part I see, as the changes in combination might lead to measuring only the ability to concentrate.

Mike L. wrote:

Gwern Branwen

unread,
May 31, 2009, 10:33:21 PM5/31/09
to brain-t...@googlegroups.com
On Wed, May 27, 2009 at 5:01 PM, Ari <ariz...@gmail.com> wrote:
>
> This is precisely the crux of the issue. Working memory, whatever its
> correlates and networks (regarding the studies mentioned previously),
> is unquestionably not the sole determinant of intelligence, or even of
> fluid intelligence. There are multiple components that comprise the
> faculty of problem solving, of which WM is but one. So a score of 0 on
> any given set of problems could reflect a weakness in any, some, or
> all of these components, and raising a specific component could make
> your score anywhere from 1-100, or even not improve it at all. If all
> the questions on the test are unsolvable if one lacks the necessary WM
> capacity, but only some tax other abilities, then improving WM will
> only improve one's scores on the first few problems, not on all of
> them.
>
> Working memory is a short-term ability to store and manipulate
> information. The higher centers of cognition presumably operate on
> whatever is given to them by working memory. If you can't handle the
> data, then your prefrontal cortex certainly can't work them, so the
> "stat" (borrowing the RPG analogy, which I think is quite apt) that
> determines the operating capacity of everything else is WM. Actually,
> the Windows Experience Index is an excellent metaphor:
> http://www.microsoft.com/windows/windows-vista/get/experience-index.aspx

If we're using computer metaphors, I think the right one for this
discussion is Amdahl's Law
https://secure.wikimedia.org/wikipedia/en/wiki/Amdahl's_law

The relevant bit being:
"or example, if a program needs 20 hours using a single processor
core, and a particular portion of 1 hour cannot be parallelized, while
the remaining promising portion of 19 hours (95%) can be parallelized,
then regardless of how many processors we devote to a parallelized
execution of this program, the minimal execution time cannot be less
than that critical 1 hour."
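
(For anyone who wants to see the arithmetic behind that quote, here is a
minimal Python sketch of the formula - my own illustration, not anything
from the paper or from Moody; the function name amdahl_speedup is just a
made-up label:

# Amdahl's Law: overall speedup when a fraction p of the work can be
# sped up by a factor s, while the remaining (1 - p) cannot be sped up.
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# The quoted example: 19 of 20 hours (95%) parallelizable.
print(amdahl_speedup(0.95, 2))    # ~1.9x with 2 processors
print(amdahl_speedup(0.95, 1e9))  # ~20x - the 1 serial hour caps the gain

No matter how large s gets, the run can never finish faster than that
critical serial hour.)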

The application here is, I think, fairly clear. Moody's 2 essential points are:
1) the 8-dayers showed no improvement on Raven's, which tests WM & Gf
2) there was BOMAT improvement, but the data is only on the easiest questions
2.1) these easy questions only meaningfully test WM, and not Gf
∴ the simplest explanation is that N-back improves WM but not Gf

So even if one 'parallelizes' the WM bits, the analysis still
bottlenecks the solution.

I don't know if Moody is right, since the picture is incomplete: we
don't know what the full BOMAT would've shown, and the 8-dayers aren't
a definitive datapoint against N-back (in the previous Jaeggi studies,
the clearest results were for people who were doing N-back noticeably
longer than 8-days, and it seems plausible to me that it could take 2
weeks or so for neurological changes - muscles adapt on that
timescale, after all).

(But I'm a little embarrassed personally. I had noticed and read
quizzically that footnote, but I did not look at the old study; and I
misunderstood the time limit comment as indicating that subjects had
10 minutes for each question - which I considered eminently reasonable
- and not 10 minutes for the *entire* test! ~_~)

(Or if you don't like Amdahl's Law, then one could point out that if
your program is IO-bound, using less CPU time won't make it go
faster.)

--
gwern

Pontus Granström

unread,
Jun 1, 2009, 3:33:24 AM6/1/09
to brain-t...@googlegroups.com
First of all, there was an improvement in Raven's, and it was proportional to the training time. I still think there are some key issues that you miss here: WM and Gf might share common capacity constraints and activate the same areas of the brain, while some of you see WM and Gf as totally independent of each other - which they only are from a "services point of view", not from a neurological point of view. The research has never stated that working memory and Gf are the same; it states that training working memory in a certain way (updating often, parallel executive functions, etc.) would strengthen shared neural pathways, leading to an improvement. Assuming that the first 14 questions of 29 (?) should tax only working memory seems highly unlikely.

Gwern Branwen

unread,
Jun 2, 2009, 5:42:08 PM6/2/09
to brain-t...@googlegroups.com
2009/6/1 Pontus Granström <lepo...@gmail.com>:

> First of all there was an improvement in Ravens and it was propotional to
> the training time.

"A subsequent analysis of the gain scores (posttest minus pretest) as a function
of training time (F(3,30) ϭ 9.25;
P Ͻ 0.001; ␩2 ϭ 0.48; Fig. 3b). Analyses of covariance (AN-
COVA) with the factor group (trained vs. control), the posttest
scores as the dependent variable, and the pretest scores as the
covariate revealed a trend for group differences after 12 days
(F(1,19) ϭ 1.93; P ϭ 0.09; ␩2 ϭ 0.09), and statistically significant
group differences after 17 (F(1,13) ϭ 4.65; P Ͻ 0.05; ␩2 ϭ 0.26),
and 19 training days (F(1,12) ϭ 4.53; P Ͻ 0.05; ␩2 ϭ 0.27). Post
hoc analyses (Gabriel’s procedure; two-tailed) for the training
group revealed significant differences between the following
groups: 8 vs. 17 days (P Ͻ 0.01); 8 vs. 19 days (P Ͻ 0.001); and
12 vs. 19 days (P Ͻ 0.01). There was a trend for a difference
between 12 and 17 days (P ϭ 0.06). "

Yes, the 8-day group fit the dosage-dependent graph, inasmuch as if
you set them to 0, then their near-0 improvement fits perfectly.

> I still think there are some key issues that you miss
> here, WM and Gf might share common capacity constraints and activates same
> areas of the brain while some of you see WM and Gf as totally independent of
> each other which they only are from "services point of view" but not from a
> neurological point of view. The research has never stated that working
> memory and Gf is the same, it's stated that by training the working memory
> in a certain way (updating often, pararell executive functions etc) would
> strengthen shared neural pathways leading to an improvement.

Yes, I suppose that's possible. We do have all those other results
linking the two. But as I said before, my takeaway from Moody is that
Jaeggi 2008 doesn't prove it by boosting WM and then Gf.

> Assuming that
> the first 14 questions of 29 (?) only should tax working memory seems highly
> unlikely.

It's true that assuming any particular number of questions taxes only
working memory would be arbitrary, but we might expect some number of early questions to
test working memory more. The diagrams can only have so many objects
in them, while the relationships can be made ever more
subtle/arbitrary.

In this way it's perfectly possible that the early questions owe more
of their difficulty to WM than to pattern. But per above, with each
question the pattern becomes more difficult, while the WM difficulty,
even if it increases, is quickly bounded.

Or to put it another way, any individual should be surprised to win
the lottery; but we shouldn't be surprised that an individual won the
lottery. It's a little surprising 14 won the lottery, but not
surprising that some number did.

--
gwern

Pontus Granström

unread,
Jun 2, 2009, 5:56:36 PM6/2/09
to brain-t...@googlegroups.com
But then, if we assume that 14 questions of the BOMAT are within reach of everybody, shouldn't they then not be considered a measure of intelligence? However, let's say you can solve 40% more questions in a 10-minute interval; then you would have more time for completing the more difficult items, which for many might make a difference.

Gwern Branwen

unread,
Jun 2, 2009, 6:32:54 PM6/2/09
to brain-t...@googlegroups.com
On Tue, Jun 2, 2009 at 5:56 PM, Pontus Granström <lepo...@gmail.com> wrote:
> But then let's assume that 14 questions of BOMAT is of reach of everybody
> then they shouldnt be considered a measure of intelligence?

A worse measure, certainly. They may be fine for 'stupid' people,
though. One doesn't use the same questions or tests for geniuses as
for morons. But anyway, given the strong correlation between Gf and
WM, they may be fine questions for the purpose, until one is
interested in WM and Gf as distinct from each other.

> However let's
> say you can solve 40% more questions in a 10 minute interval then you would
> have more time completing the more difficult items which for many might make
> a diffrence.

All that much of one?

Suppose I'm solving 9 out of the 29 questions before training. (The
'vast majority' are solving <14 questions, so this doesn't seem too
bad.)
Now N-back improves me by an amazing 40% - I can do those 9 problems in
just 6 minutes instead of 10. In other words, the second time around,
I now have a luxurious 4 minutes to do 20 harder questions.
Supposing that I solve these harder questions at the same rate I did
the easier ones before training, that means I'm answering less than a
question per minute. So I'll answer another 3 questions, bumping me
all the way up to 12 questions out of 29.
On a 10 minute test, you have so little time that even dramatic
improvements don't buy you much. Amdahl's law again.
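
(For concreteness, here is a tiny Python sketch of that back-of-the-envelope
calculation - purely illustrative, using the same hypothetical numbers as
above and reading "40% improvement" as 40% less time per easy item:

baseline_solved = 9                  # questions solved in the 10-minute pretest
test_minutes = 10
time_for_easy = test_minutes * 0.6   # 40% faster: the same 9 items in 6 minutes
time_left = test_minutes - time_for_easy         # 4 minutes remain
baseline_rate = baseline_solved / test_minutes   # 0.9 questions per minute
# Generously assume the harder items fall at the old, easy-item rate.
extra_solved = int(time_left * baseline_rate)    # 3 more questions
print(baseline_solved + extra_solved)            # 12 out of 29

Even a dramatic per-item speedup only moves the raw score from 9 to about 12.)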

--
gwern

Mike L.

unread,
Jun 2, 2009, 7:32:14 PM6/2/09
to Dual N-Back, Brain Training & Intelligence
I think we're just going to have to wait for another study to make
things a bit more concrete. I must admit, though, that the probability
that the Jaeggi study is indeed valid seems to me to be high; but for
the sake of removing most doubt, another, more thorough and less faulty
study is necessary.

On Jun 2, 6:32 pm, Gwern Branwen <gwe...@gmail.com> wrote:

Pontus Granström

unread,
Jun 3, 2009, 3:40:43 AM6/3/09
to brain-t...@googlegroups.com
It's valid in the sense that dual n-back without doubt has a strong effect on solving IQ problems under time pressure. What's interesting is to discuss the underlying biology rather than the technical details of test-taking; of course there must be some correlation between a 10-minute test and, let's say, a 45-minute one. Many tests depend on speed, for example the FRT test. I myself took a high-end test when applying for the air force, and I can say that highly intense focus (like in 4-5-6 back) together with a strong memory and executive function (calculations/comparisons) was an absolute must. The way I see it, "dnb" thinking activates processes that are containers for thought in general, just like getting a better, faster CPU and more RAM; but that doesn't imply that all the thoughts of the high-performance machine are ingenious.

Ari

unread,
Jun 3, 2009, 9:24:56 PM6/3/09
to Dual N-Back, Brain Training & Intelligence
"I think we're just gonna have to wait for another study to make
things a bit more concrete"

And to simply reproduce the results.

Ron Williams

unread,
Jun 10, 2009, 8:16:06 AM6/10/09
to brain-t...@googlegroups.com
That implies that if we want to see gains in actual thinking ability
we should be trying to develop new thinking tools that can now operate
in the 'expanded space' that was formerly too small to contain them.

It sounds like an interesting train of thought, and requires some
creativity. Just playing the same small game in a bigger field won't
give you a qualitatively different result.

Assuming, of course, that DNB does stretch the mind in the way we're guessing.

Paul

unread,
Jun 10, 2009, 8:34:57 AM6/10/09
to brain-t...@googlegroups.com
In terms of existing thinking tools, Edward De Bono is probably the most well known.

Pontus Granström

unread,
Jun 16, 2009, 5:45:40 AM6/16/09
to brain-t...@googlegroups.com
We had a discussion some time ago about whether dual n-back is just a working memory game or something more. Today I found an article saying that n-back is considered a "monitoring" task and is part of executive function. Task shifting and inhibition of impulses (lure trials) are also part of the executive functions. Actually, there are three parts of executive function: shifting, inhibition, and monitoring. Executive function, in turn, is considered a low-level controller of cognitive processes, so its relation to IQ seems obvious. DNB is also progressive, which loads memory more and more, but also the other systems, making the brain adapt to working with more information; hence the increase in IQ. Any thoughts?

Ron Williams

unread,
Jun 16, 2009, 9:36:28 AM6/16/09
to brain-t...@googlegroups.com
I've always thought DeBono states the bleeding obvious. That isn't
the kind of 'thinking tool' I was imagining. What DeBono comes up with
are pathetic crutches that deadhead management types will think are
miraculous.

On 6/16/09, Pontus Granström <lepo...@gmail.com> wrote:
> We had a discussion some time ago whether dual-n-back is just a working
> memory game or something more, today I found a article saying that n-back is
> considered a "monitoring" task and is a part of the executive function.
> Shifting task and inhibition of impulses (lure trials) are also a part of
> the executive functions. Actually there are three parts of the executive
> function: *shifting*, *inhibition* och *monitoring. *The executive function