Re: Jaeggi 2011


Pontus Granström
Jun 15, 2011, 3:42:28 PM
to brain-t...@googlegroups.com
Gwern, she doesn't avoid it; she doesn't have to deal with his questions, because they come from a layman with his own neuropsychological constructs.

On Wed, Jun 15, 2011 at 9:35 PM, Gwern Branwen <gwe...@gmail.com> wrote:
### Jaeggi 2011

Jaeggi's work at the University of Michigan is available as a preprint:
Jaeggi, Buschkuehl, Jonides & Shah. 2011. ["Short- and long-term
benefits of cognitive
training"](http://www.pnas.org/content/early/2011/06/03/1103228108.abstract)
([PDF](http://www.pnas.org/content/early/2011/06/03/1103228108.full.pdf))

> "We trained elementary and middle school children by means of a videogame-like working memory task. We found that only children who considerably improved on the training task showed a performance increase on untrained fluid intelligence tasks. This improvement was larger than the improvement of a control group who trained on a knowledge-based task that did not engage working memory; further, this differential pattern remained intact even after a 3-mo hiatus from training. We conclude that cognitive training can be effective and long-lasting, but that there are limiting factors that must be considered to evaluate the effects of this training, one of which is individual differences in training performance. We propose that future research should not investigate whether cognitive training works, but rather should determine what training regimens and what training conditions result in the best transfer effects, investigate the underlying neural and cognitive mechanisms, and finally, investigate for whom cognitive training is most useful."

It is worth noting that the study used Single N-back (visual). Unlike
Jaeggi 2008, "despite the experimental group’s clear training effect,
we observed no significant group × test session interaction on
transfer to the measures of Gf" (so perhaps the training was long
enough for subjects to hit their ceilings). The group which did n-back
could be split, based on final IQ & n-back scores, into 2 groups;
interestingly "Inspection of n-back training performance revealed that
there were no group differences in the first 3 wk of training; thus,
it seems that group differences emerge more clearly over time [first 3
wk: t(30) < 1; P = ns; last week: t(16) = 3.00; P < 0.01] (Fig. 3)." 3
weeks is ~21 days, or >19 days (the longest period in Jaeggi 2008).
It's also worth noting that Jaeggi 2011 seems to avoid Moody's most
cogent criticism, the speeding of the IQ tests; from the paper's
'Materials and Methods' section:

> "We assessed matrix reasoning with two different tasks, the Test of Nonverbal Intelligence (TONI) (23) and Raven’s Standard Progressive Matrices (SPM) (24). Parallel versions were used for the pre, post-, and follow-up test sessions in counterbalanced order. For the TONI, we used the standard procedure (45 items, five practice items; untimed), whereas for the SPM, we used a shortened version (split into odd and even items; 29 items per version; two practice items; timed to 10 min after completion of the practice items. Note that virtually all of the children completed this task within the given timeframe)."

The IQ results were, specifically: the control group averaged
15.33/16.20 (before/after) correct answers on the SPM and 20.87/22.50
on the TONI; the n-back group averaged 15.44/16.94 SPM and 20.41/22.03
TONI. A gain of ~1.5 questions rather than ~1 may not seem like much,
but the split groups look quite different: the 'small training gain'
n-backing group actually fell on its second SPM and improved by <0.2
questions on the TONI, while the 'large training gain' group increased
by >3 questions on the SPM and TONI. The difference is not so dramatic
in the followup 3 months later: the small group is now at 17.43/23.43
(SPM/TONI), and the large group at 15.67/24.67. Strangely, in the
followup the control group has a higher SPM than the large group (but
not the small group), and a higher TONI than either group. (The
control group has higher IQ scores on both TONI & SPM in the followup
than the aggregate n-back group.)
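For concreteness, the raw gains implied by those group means (a quick sketch using only the averages quoted above; the per-child data aren't public):

```python
# Pre/post mean correct answers reported for Jaeggi 2011 (aggregate groups).
scores = {
    "control": {"SPM": (15.33, 16.20), "TONI": (20.87, 22.50)},
    "n-back":  {"SPM": (15.44, 16.94), "TONI": (20.41, 22.03)},
}

# Gain = post-test mean minus pre-test mean, per group and per test.
gains = {
    group: {test: round(post - pre, 2) for test, (pre, post) in tests.items()}
    for group, tests in scores.items()
}

for group, tests in gains.items():
    for test, gain in tests.items():
        print(f"{group:7s} {test:4s} gain: {gain:+.2f}")
```

So the aggregate n-back advantage shows up only on the SPM (+1.50 vs +0.87); the TONI gains are essentially identical.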

Jaeggi 2011 has been discussed in mainstream media. From the _Wall
Street Journal_'s ["Boot Camp for Boosting
IQ"](http://online.wsj.com/article/SB10001424052702304432304576371462612272884.html):

> "...when several dozen elementary- and middle-school kids from the Detroit area used this exercise for 15 minutes a day, many showed significant gains on a widely used intelligence test. Most impressive, perhaps, is that these gains persisted for three months, even though the children had stopped training...these schoolchildren showed gains in fluid intelligence roughly equal to five IQ points after one month of training...There are two important caveats to this research. The first is that not every kid showed such dramatic improvements after training. Initial evidence suggests that children who failed to increase their fluid intelligence found the exercise too difficult or boring and thus didn't fully engage with the training."

From _Discover_'s blogs, ["Can intelligence be boosted by a simple
task? For some…"](http://blogs.discovermagazine.com/notrocketscience/2011/06/13/can-intelligence-be-boosted-by-a-simple-task-for-some/),
come additional details:

> She [Jaeggi] recruited 62 children, aged between seven and ten. While half of them simply learned some basic general knowledge questions, the other half trained with a cheerful computerised n-back task. They saw a stream of images where a target object appeared in one of six locations – say, a frog in a lily pond. They had to press a button if the frog was in the same place as it was two images ago, forcing them to store a continuously updated stream of images in their minds. If the children got better at the task, this gap increased so they had to keep more images in their heads. If they struggled, the gap was shortened.
>
> Before and after the training sessions, all the children did two reasoning tests designed to measure their fluid intelligence. At first, the results looked disappointing. On average, the n-back children didn’t become any better at these tests than their peers who studied the knowledge questions. But according to Jaeggi, that’s because some of them didn’t take to the training. When she divided the children according to how much they improved at the n-back task, she saw that those who showed the most progress also improved in fluid intelligence. The others did not. Best of all, these benefits lasted for 3 months after the training. That’s a first for this type of study, although Jaeggi herself says that the effect is “not robust.” Over this time period, all the children showed improvements in their fluid intelligence, “probably [as] a result of the natural course of development”.
>
> ...Philip Ackerman, who studies learning and brain training at the University of Illinois, says, “I am concerned about the small sample, especially after splitting the groups on the basis of their performance improvements.” He has a point – the group that showed big improvements in the n-back training only included 18 children....Why did some of the children benefit from the training while others did not? Perhaps they were simply uninterested in the task, no matter how colourfully it was dressed up with storks and vampires. In Jaeggi’s earlier study with adults, every volunteer signed up themselves and were “intrinsically motivated to participate and train.” By contrast, the kids in this latest study were signed up by their parents and teachers, and some might only have continued because they were told to do so.
>
> It’s also possible that the changing difficulty of the game was frustrating for some of the children. Jaeggi says, “The children who did not benefit from the training found the working memory intervention too effortful and difficult, were easily frustrated, and became disengaged. This makes sense when you think of physical training – if you don’t try and really run and just walk instead, you won’t improve your cardiovascular fitness.” Indeed, a recent study on IQ testing which found that [they reflect motivation as well as intelligence](http://blogs.discovermagazine.com/notrocketscience/2011/04/26/iq-scores-reflect-motivation-as-well-as-intelligence/).

--
gwern
http://www.gwern.net



Gwern Branwen
Jun 15, 2011, 3:35:18 PM
to N-back

Windt
Jun 22, 2011, 7:05:09 PM
to Dual N-Back, Brain Training & Intelligence
"When she divided the children according to how much they improved at
the n-back task, she saw that those who showed the most progress also
improved in fluid intelligence. The others did not."

It certainly sounds like correlation rather than causation. The kids
who did best at an intelligence-related task also did best at some
other intelligence-related task later on (what exactly IS a test of
fluid intelligence? I thought quantifying intelligence was
problematic).

Pontus Granström
Jun 23, 2011, 2:16:48 AM
to brain-t...@googlegroups.com
When they design IQ tests, they group subtests according to first- and second-order factors. For example, matrix reasoning (adding or removing figures, rules for their movement, and so on) correlates highly with, say, finding the pattern in a number series, so they are placed in the same "bucket". Word-knowledge items intercorrelate highly, so those are placed in another bucket. Remembering a set of visual objects correlates with rotating pictures, so those are placed in the "visuospatial" bucket.

Then they name the buckets after their main apparent feature: adding pictures is considered reasoning, explaining words verbal ability, and so on. After this has been done, they calculate their predictive power; for example, Gf tests are more predictive of overall performance than word knowledge, so they are highly g-loaded, and this usually reveals a hierarchical structure. Almost all predictive power comes from the g-loading. Pure Gv/Gc explains only about one percent of the variance in actual performance (what one tries to predict), and its only value is usually that it is statistically linked to g. That is, the faster your brain, the better your genetic "make-up", hence the more you learn per unit of time; this only holds if you don't study words, since that expands the time spent and thus lowers the bits/s.

This is how they do it.


Gwern Branwen
Jun 28, 2011, 9:05:05 PM
to N-back
I'm surprised and a little disappointed to see so little discussion of
Jaeggi 2011, as compared to a random slacker kid or any of the other
random topics that seem to obsess this ML these days.

I expected as much, though, so after pondering the strange IQ test
scores and followups and still not finding any convincing explanation,
I asked another group of people to look at it: LessWrong. I posted
links as a Discussion article:
http://lesswrong.com/lw/68k/nback_news_jaeggi_2011_or_is_there_a/
(Notice I didn't describe what misgivings I had and I specifically
asked people to read the paper *first*.)

Of the 18 comments (many more than here), none seemed to regard it as
even weak evidence, which is interesting. I'll quote some of the most
relevant comments since I know otherwise a lot of people won't bother
reading the link.

[Jonathan Graehl](http://lesswrong.com/lw/68k/nback_news_jaeggi_2011_or_is_there_a/4d34)
(who, incidentally, has expertise in probability & statistics;
http://www.isi.edu/~graehl/publications.html &
http://www.isi.edu/~graehl/CV.html) writes:

> My primary objection is: perhaps some of the students in both groups got smarter (these are 8-9 year olds and still developing) for reasons independent of the interventions, which caused them to improve on the n-back training task AND on the other intelligence tests (fluid intelligence, Gf). If you separated the "active control" group into high and low improvers post-hoc just like was done for the n-back group, you might see that the active control "high improvers" are even smarter than the n-back "high improvers". We should expect some 8-9 year olds to improve in intelligence or motivation over the course of a month or two, without any intervention.
>
> Basically, this result sucks, because of the artificial post-hoc division into high- and low- responders to n-back training, needed to show a strong "effect". I'm not certain that the effect is artificial; I'd have to spend a lot of time doing some kind of sampling to show how well the data is explained by my alternative hypothesis.
>
> It's definitely legitimate to look at the whole n-back group vs. the whole active control group. Those results there aren't impressive at all. I just can't give any credit for the post-hoc division because I don't know how to properly penalize it and it's clearly self-serving for Jaeggi. It's borderline deceptive that the graphs don't show the unsplit n-back population.
>
> It's unsurprising (probably offering no evidence against my explanation) that the initial average n-back score for the low improvers is higher than the initial average for the high improvers; this is what you'd expect if you split a set of paired samples drawn from the same distribution with no change at all, for example.
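Jonathan's alternative hypothesis is easy to simulate; here is a minimal sketch in which training has zero causal effect, yet the post-hoc median split still manufactures a "responder" group (all parameters are made up for illustration, and the sample is made large so the means are stable):

```python
import random
import statistics

random.seed(0)
n = 10_000  # large simulated sample so the group means are stable

# Latent month-to-month change (development, motivation, illness...) that
# affects BOTH the training task and the transfer test; training itself
# has zero causal effect in this simulation.
latent = [random.gauss(0, 1) for _ in range(n)]
train_gain = [d + random.gauss(0, 1) for d in latent]     # n-back improvement
transfer_gain = [d + random.gauss(0, 1) for d in latent]  # Gf improvement

# Post-hoc median split on training gain, mirroring the paper's analysis.
order = sorted(range(n), key=lambda i: train_gain[i])
low, high = order[: n // 2], order[n // 2:]

low_gf = statistics.mean(transfer_gain[i] for i in low)
high_gf = statistics.mean(transfer_gain[i] for i in high)
print(f"low improvers' Gf gain:  {low_gf:.2f}")
print(f"high improvers' Gf gain: {high_gf:.2f}")
# The 'high improvers' show a larger Gf gain despite no causal training effect.
```

Any shared source of variance between the two tests will produce this pattern, which is exactly why the split tells us so little on its own.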

[Douglas Knight](http://lesswrong.com/lw/68k/nback_news_jaeggi_2011_or_is_there_a/4d3s)
in replying to Jonathan notices the same problem I did in the IQ score
section:

> When you say that the aggregate results "aren't impressive," you imply that they are positive, but if I read table 1 correctly, the aggregate results are often negative.

[Unnamed](http://lesswrong.com/lw/68k/nback_news_jaeggi_2011_or_is_there_a/4d3h)
offers what seems like a pretty good summary:

> The result looks pretty weak. They had 62 kids. First, they gave all the kids a fluid intelligence test to measure their baseline fluid intelligence. Then half the kids (32) were given a month of n-back training (which the authors expect to increase their fluid intelligence) while the other half (30) did a control training which was not supposed to influence fluid intelligence. At the end of the month's training all of the kids took another fluid intelligence test to see if they'd improved, and 3 months later they all took a fluid intelligence test once more to see if they'd retained any improvement.
>
> The result that you'd look for with this design, if n-back training improves fluid intelligence, is that the group that did n-back training would show a larger increase in fluid intelligence scores from the baseline test to the test after training. They looked and did not find that result - in fact, it was not even close to significant (F < 1). That's the effect that the study was designed to find, and it wasn't there. So that's not a good sign.
>
> The kids who did n-back training did improve at the n-back task, so the authors decided to look at the data in another way - they divided the 32 kids in that group in half based on how much they had improved on the n-back task, and looked separately at the 16 who improved the most and the 16 who improved the least. The group of 16 high-improvers did improve on the fluid intelligence test, significantly more than the control group, and they retained that improvement on the follow-up test of fluid intelligence. That is the main result that the paper reports, which they interpret as a causal effect of n-back training. The 16 low-improvers did not have a statistically significant difference from the control group on the fluid intelligence test.
>
> But this just isn't that convincing a result, as the study no longer has an experimental design when you're using n-back performance to divide up the kids. If you give kids 2 intelligence tests (one the n-back task, one the fluid intelligence test), and a month later you give them both intelligence tests again, then it's not surprising that the kids who improved the most on one test would tend to also improve the most on the other test. And that's basically all that they found. Their study design involved training the kids on one of those two tests (n-back) during the month-long gap, but there's no particular reason to think that this had a causal effect on their improvement on the other test. There are plenty of variables that could affect intelligence test performance which would affect performance on both tests similarly (amount of neural development, being sick, learning disability, etc.).
>
> If there is a causal benefit of n-back, then it should show up in the effect that they were originally looking for (more fluid intelligence improvement in the group that did n-back training than the control group). Perhaps they'd need a larger sample size (200 kids instead of 62?) to find it if the benefit only happens to some of the kids (as they claim), but if some kids benefit from the training while others get no effect from it then the net effect should be a measurable benefit. I'd want to see that result before I'm persuaded.
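On Unnamed's sample-size point, a back-of-the-envelope power calculation supports the "200 kids" guess; the formula is the standard normal approximation for a two-sample comparison, and the effect sizes below are illustrative assumptions, not estimates from the paper:

```python
# Rough per-group n for a two-sample comparison (normal approximation):
#   n ~ 2 * (z_alpha/2 + z_beta)^2 / d^2
# where d is the standardized effect size (Cohen's d).
z_alpha = 1.96  # two-sided alpha = 0.05
z_beta = 0.84   # 80% power

def n_per_group(d):
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

# Hypothetical numbers: if training helps responders with d ~ 0.6 but only
# half the kids respond, the group-level effect is diluted to d ~ 0.3.
print(round(n_per_group(0.6)))  # per group, undiluted effect
print(round(n_per_group(0.3)))  # per group, diluted effect
```

Under those assumptions the diluted effect needs roughly 174 children per group (~350 total), so a 62-child study would be badly underpowered to detect a net group-level benefit.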

--
gwern
http://www.gwern.net

Message has been deleted

ao
Jun 28, 2011, 9:34:00 PM
to Dual N-Back, Brain Training & Intelligence
Preferably adults in their 30s onward, i.e., when the brain has
reached a semblance of genetic maturity and before performance begins
to decline.

argumzio


On Jun 28, 8:29 pm, Zaraki <zaraki...@gmail.com> wrote:
> Well, personally I don't put much stock in this study. I got
> disappointed the moment I heard that kids were used, even before I
> read it. The problem with them still developing and being poorly
> motivated is not something that should be involved in this mix. Folly
> to choose children from the get-go in other words. Why Jaeggi decided
> to use them I cannot fathom. Did she hope for greater results in
> children, or simply want schools to start implementing the n-back
> training? More proof is needed with adults first.

Zaraki
Jun 28, 2011, 9:42:55 PM
to Dual N-Back, Brain Training & Intelligence
woopsy.. deleted my own post. Had just planned to edit it. Anyways, I
basically said that this study is weak and that older testees should
have been used, not developing children who don't even understand the
value of getting better at this task.

ao
Jun 28, 2011, 10:01:32 PM
to Dual N-Back, Brain Training & Intelligence
"Children who don't even understand the value of getting better at
this task" is irrelevant as other factors can be used as motivators.

"Now, listen, Jimmy boy, if you improve your n-level by this much,
I'll give you something really, really nice, okay?" "Okay! I just love
toys/money/someglucsucfructosepackedthing."

How do you think scientists are able to train animals of sub-human
intelligence to do seemingly complicated tasks? Conditioning. No need
to understand anything besides "I want more of something good and less
of something bad". There's plenty of "value" in that.

The real issue is children are well known not to have terribly
reliable scores on psychometric variables like I.Q. tests, because
their brains are under siege by constantly changing gene expression.
I.Q. test scores from the period of birth to 16 years are remarkably
unreliable measures of the final adult-age I.Q. level. On those
grounds, if there were any kind of gain on account of training, it
could easily be masked by the developmental trajectory of the children
– and it is not all too likely that 62 of them qualify as a random
sample at the end of the day. Speaking generally, of course.

argumzio
Message has been deleted

Zaraki
Jun 28, 2011, 10:08:37 PM
to Dual N-Back, Brain Training & Intelligence
well, we don't know if they used motivators or just told them to do
it. I expect it to be the latter, as Jaeggi used that reason to divide
the experimental group later on.

ao
Jun 28, 2011, 10:28:04 PM
to Dual N-Back, Brain Training & Intelligence
Actually, on the penultimate page, they write:

"After the posttest, we assessed the children’s engagement and motivation for training with a self-report questionnaire consisting of 10 questions in which they rated the training or control task on dimensions such as how much they liked it, how difficult it was, and whether they felt that they became better at it."

Turns out that .67 of the total variance [for completing the task, if
not task-related gains, one assumes] can be explained by these
intrinsic motivators. Not surprising. Whoever is motivated to do a
task and get better at it usually does. But whoever is motivated ("I'm
smart, I can do it"; "hey, this problem is fun") also tends to be more
intelligent, particularly among children. Nothing new.

argumzio
Message has been deleted

ao
Jun 28, 2011, 10:49:23 PM
to Dual N-Back, Brain Training & Intelligence
Ah, it's so easy to ignore null or negative results but so hard in the
case of positive ones.

Unfortunately, scientific practice doesn't work that way.

The result of this study isn't the first of its kind, and it isn't
going to be the last.

argumzio


On Jun 28, 9:36 pm, ailambris <ailamb...@gmail.com> wrote:
> I don't think they were ignorant of the fact that there would be some
> confounding motivational issues. Maybe they believed it would actually
> play in their favor, who knows? Every study is a product to be sold to
> the medical community. Sometimes you gamble, and sometimes you do
> really well for the circumstances. The opposite may be true here. It
> could very well be that they were targeting youth because, well, maybe
> somehow the application may be marketable, is there a patent pending
> or something? If I were a parent, I would be willing to drop heavy
> sums of money on an application if it had been confirmed to produce
> the kind of improvements that n-back is purported to offer.
>
> With a rudimentary understanding of statistics, I know just as well
> that even if your molecule, your device, whatever, actually *does*
> what it is supposed to do, there is still a probability that your
> study is going to be a complete failure. That's why we reproduce them.
> That's why Jaeggi 2011 doesn't bother me.
> ...
Message has been deleted

whoisbambam
Jun 29, 2011, 12:44:45 AM
to Dual N-Back, Brain Training & Intelligence
So what is everybody implying here?

just that the study sucks?

or do you all believe that dnb does not help improve working memory in
adults who already don't have a good working memory (demonstrated by
the fact that they suck at dnb)?

or what?
> ...

The.Fourth.Deviation.
Jun 29, 2011, 12:45:50 AM
to Dual N-Back, Brain Training & Intelligence
With more studies done, and the results being replicated, it gets
more difficult to suggest that the results are invalid. People in the
thread keep saying "this study only has N=8, and this one N=25, or
N=62". Is it not completely obvious that, as you add all of those
together, the actual sample that has been trained with generally the
same tasks, and with the same improvements, has approached one thousand
or more people, in several different locations and under several
different conditions? Yet the results are the same, positive, time and
again. Retorts?

Also, the people who supposedly don't benefit may actually be
benefitting in some way that is not being tested. E.g., after taking a
few weeks or more off training, DNB immediately improves my short-term
memory whenever I go back to using it. This occurs without fail. Yet
this will not show up in RAPM, because that has to do with mental
rotations etc. In other words, I suggest that these studies may
understate the benefits of DNB rather than overstate them, as some here
imply.

ao
Jun 29, 2011, 1:04:38 AM
to Dual N-Back, Brain Training & Intelligence
Children aren't adults. That's what's being said here, not implied.
Children don't prove squat for a number of obvious, already-stated
reasons.

I regard the study as unconvincing.

argumzio
> ...

ailambris
Jun 29, 2011, 1:12:14 AM
to Dual N-Back, Brain Training & Intelligence
It'd be nice if you could add the numbers that way, but you can't.

On Jun 28, 9:45 pm, "The.Fourth.Deviation." <davidsky...@gmail.com>
wrote:

The.Fourth.Deviation.
Jun 29, 2011, 1:14:35 AM
to Dual N-Back, Brain Training & Intelligence
I think that's irrelevant. The gestalt is the more important picture.
If DNB causes particular effects in adults, it should also cause the
same effects in children. Not only this, but it should probably cause
stronger adaptations, because training done in youth is quickly
adapted to and causes greater overall benefit. I.e. language training,
musical training before age 12, etc.

The study is also useful, because it can simply be compared to a
longitudinal study which measures the changes in fluid IQ of a normal
group of aging children. If this group appears to show more rapid
increases over such a short time frame, it does indeed give us some
useful information. And the same point I made above, about DNB in
adults vs children, is especially potent. There is no question that
adaptations are being made in all subjects put to training. The
question is how and where those adaptations occur; i.e., all subjects
should show increased cortical activity. It is just the task of the
scientists to show what that means in the case of each individual,
especially since individuals may have difficulty pointing out what has
changed. The battery of tests should be more robust.
> ...

The.Fourth.Deviation.
Jun 29, 2011, 1:16:41 AM
to Dual N-Back, Brain Training & Intelligence
I don't see any convincing reason that you should not add them for a
rough estimate of overall results.

genvirO
Jun 29, 2011, 1:25:23 AM
to Dual N-Back, Brain Training & Intelligence
Regardless of how unconvinced you are, the results of the study still
remain positive. Regardless of how positive the results of the study
are, we still need more studies but with more substance (brain scans &
other). For instance, we could hypothetically have 20 more studies
that come out with exactly the same thing by the end of the year, yet
we would still have the same unanswered questions.

My belief is that in general n-back does "work" (it's obvious that
motivation is a key variable in deciding how much it "works"); however,
there are still a lot of unanswered questions.
> ...

ao
Jun 29, 2011, 2:00:37 AM
to Dual N-Back, Brain Training & Intelligence
I'm guessing you actually read the data table, correct? Are you so
sure such results are "positive"? If you look at the standard
deviation and the mean of both groups for their post results on the
SPM, 2 SD above the mean is equivalent for both groups (active:
mean=16.2, SD=5.1; n-back group: mean=16.94, SD=4.75). In effect,
we're seeing no change here except in the mean and variance.
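Spelled out, with the Table 1 figures just cited (a quick check of the arithmetic):

```python
# SPM post-test figures from Table 1, as cited above.
active_mean, active_sd = 16.20, 5.1
nback_mean, nback_sd = 16.94, 4.75

# Two standard deviations above each group's mean:
active_upper = active_mean + 2 * active_sd  # ~26.40
nback_upper = nback_mean + 2 * nback_sd     # ~26.44

print(round(active_upper, 2), round(nback_upper, 2))
```

The upper tails of the two distributions land in essentially the same place.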

By the way, I wrote a response to David earlier, but it hasn't posted
yet, but when it does, if it does, you'll get a better flavor of where
I'm coming from on the "gestalt" of the issue as well.

As regards the asinine idea of adding together different samples from
different studies, such is an erroneous means of deriving explanations
or observations, because the studies were conducted under different
conditions, with different procedures, different individuals, and
different levels of experimental validity. Painting in broad strokes
might work with literature, but in statistics, it isn't so.

A note for others to ruminate on. Let's consider the validity of
separating a large-training-gain group from a small-training-gain
group: would that not be equivalent to separating the wheat from
the chaff, i.e., sorting out who learns a task more quickly and
hence would almost by default do better on a test the second time?

Hmm... if only I could say, "yes! yes! it's all so positive!"

argumzio

Pontus Granström

unread,
Jun 29, 2011, 2:24:45 AM6/29/11
to brain-t...@googlegroups.com
Gwern, the reason I "accuse" you of being biased against n-backing is that I think you do not give "the other side" a chance. For example, you post a full-length article that Moody
has written, but Jaeggi's reply is given as a little link and you have to browse through the whole thread to see her "reply". Same goes with the research: you first said it was "fraud", then
when more recent research was posted you ignored it. There are articles that can explain what might not seem intuitively obvious. This is the case in many areas. We often
tend to overlook small details, just as when programming, and that's also why we need much more research.


Pontus Granström

unread,
Jun 29, 2011, 3:57:45 AM6/29/11
to brain-t...@googlegroups.com
The study clearly showed that those who advanced in n-backing improved on Gf, which is the whole rationale for transfer. Since they were dealing with kids, some of them were unmotivated or found the task
too difficult, and hence didn't improve on Gf. Of course, to satisfy critics, merely looking at the test would have to improve Gf for it to be "valid". Since they saw improvements on the untimed tests in the group that
did improve on n-back, it's very likely that this was due to the training; either that, or it was just a bunch of "lucky guesses", which of course isn't the case. The same goes if you want to improve
your long-distance running with intervals: if you do not run the intervals at maximum effort, there will be no transfer. If I asked a bunch of kids to do this, some of them would most certainly not complete them, but this does not prove that intervals are useless. That's my analysis.

Pontus Granström

unread,
Jun 29, 2011, 5:17:47 AM6/29/11
to brain-t...@googlegroups.com
It might also confirm one of my worries: that n-backing will be utilized most by people who are already above average and strongly motivated, which will lead to even larger gaps
between different groups.

genvirO

unread,
Jun 29, 2011, 5:21:31 AM6/29/11
to Dual N-Back, Brain Training & Intelligence
"I'm guessing you actually read the data table, correct? Are you so
sure such results are "positive"?"

I don't think the gains are significant, but I also don't think
they're "positively negligible". I was only making reference to the
group that saw the largest gains. The study doesn't enable anyone to
celebrate the glorification of 'intelligence training' (which is, to
the discredit of the "Jaeggi team", what they attempted, being nice),
but it does bring one closer to understanding what may or may not work
in relation to task parameters and the motivation of subjects with
respect to potential training gains.


Pontus Granström

unread,
Jun 29, 2011, 5:26:54 AM6/29/11
to brain-t...@googlegroups.com
From what I could see there was a clear gain; I do not have the exact numbers in front of me now.


ao

unread,
Jun 29, 2011, 1:00:59 AM6/29/11
to Dual N-Back, Brain Training & Intelligence
I would suggest that you read more of the literature that discusses
this with greater claims to expertise than yourself, but then again,
you have presumably already done so, which is why I shouldn't have to
bring up something like this: http://tinyurl.com/26vu6z2 .

Interesting how someone comes away from the literature thinking "the
results are the same, positive, time and again". What a strange state
of affairs. Or perhaps not for a group that does not, almost by
definition, preclude group-think.

Instead of vaguely gesturing at a sample of "one thousand or more
people" why not compile the data and refer to it clearly for us all to
scrutinize without vagueness or ambiguity?

argumzio



Pontus Granström

unread,
Jun 29, 2011, 9:17:04 AM6/29/11
to brain-t...@googlegroups.com
We have study after study showing the same thing, yet people still ask the question: does n-backing work? I am not sure what you would like to see. The exclusion of every probable explanation? That God himself descends and tells us that n-backing works? If the training gain was proportional to the training time, does that mean n-backing was behind it, or was it a coincidence? We don't know; that's why we need more research. That's the problem with statistical analysis: it might be due to chance. We don't know why this happened, but we know it happened in a group that engaged in n-backing. What else can we say?


ao

unread,
Jun 29, 2011, 12:24:57 PM6/29/11
to Dual N-Back, Brain Training & Intelligence
The results are negligible because what we see in the group with the
largest gains (the point I was trying to make) is that they're able to
learn more with the same amount of training, i.e., are more able,
i.e., are more intelligent, i.e., are more motivated to learn because
they know they're good at it. It isn't as though the training had a
greater effect on them; it is more likely the case that they are more
able, which shows up in both their training gains and their retest
results. One could easily have made groups of higher-retest scores
and lower-retest scores and seen the same general thing.
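The selection argument above can be illustrated with a toy simulation (purely hypothetical numbers; it assumes training gain and post-test score both track a common latent ability, with no causal training effect at all):

```python
import random

random.seed(0)

# Toy model: each child has a latent ability; both training gain and
# post-test score reflect that ability plus independent noise.  The
# training itself has NO effect on the post-test in this model.
n = 200
ability = [random.gauss(0, 1) for _ in range(n)]
gain = [a + random.gauss(0, 1) for a in ability]
posttest = [a + random.gauss(0, 1) for a in ability]

# Median-split into "small gain" and "large gain" groups, analogous to
# the study's split by training improvement
pairs = sorted(zip(gain, posttest))
small_gain = pairs[: n // 2]
large_gain = pairs[n // 2 :]

mean = lambda xs: sum(xs) / len(xs)
small_post = mean([p for _, p in small_gain])
large_post = mean([p for _, p in large_gain])

# The large-gain group scores higher on the post-test even though
# training did nothing: the split selects for ability.
print(round(small_post, 2), round(large_post, 2))
assert large_post > small_post
```

In other words, a gain-based split will show "transfer" in the high-gain group even in a world where training does nothing, because the split itself selects the more able subjects.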

Anyway, I am seriously not considering this study as a valid indicator
of anything beyond: smart people excel more than not-as-smart people.
Old news in new, seductive clothing. More intelligent people have
always been more motivated, and there's no vague question regarding
"understanding what may or may not work in relation to task parameters
and the motivation of subjects in respect to potential training gains"
as you like to state, which is a red herring if there ever was one.
Funnily enough, though, Jaeggi et al. say, "Our findings show that
transfer to Gf is critically dependent on the amount of the
participants’ improvement on the WM task." That is what I call
marketing, or at a bare minimum _overstating results_.

argumzio

Pontus Granström

unread,
Jun 29, 2011, 3:17:22 PM6/29/11
to brain-t...@googlegroups.com
To flip your argument: would it mean that not engaging in the task would yield the same results as actively trying to get better? I say this: more research is welcome. Neither explanation is more probable than the other, except perhaps that n-backing does work.


ao

unread,
Jun 29, 2011, 4:26:16 PM6/29/11
to Dual N-Back, Brain Training & Intelligence
That doesn't flip my argument, and that "n-backing does work" is not
more probable.

argumzio

ao

unread,
Jun 29, 2011, 12:28:30 PM6/29/11
to Dual N-Back, Brain Training & Intelligence
Yes, I know it's hard when someone doesn't agree with something you
find so agreeable.

argumzio

Pontus Granström

unread,
Jun 29, 2011, 4:32:18 PM6/29/11
to brain-t...@googlegroups.com
Well, maybe strictly speaking it's not a flip, I agree (I am not really aware of the definition). But as a consequence of your reasoning, doesn't it suggest that the quality of the training is unimportant for the outcome?


Zaraki

unread,
Jun 29, 2011, 12:07:12 PM6/29/11
to Dual N-Back, Brain Training & Intelligence
I agree with Jonathan Graehl. No large-training-gain group was created
within the control group, thereby failing to eliminate 'natural'
improvement as a cause for the gains in fluid intelligence.

Also, referencing Jaeggi's previous studies in defence of this one
does not improve anything, since children weren't used in those.

This is the third time I am trying to send this message; what is going
on?


ao

unread,
Jun 29, 2011, 4:33:57 PM6/29/11
to Dual N-Back, Brain Training & Intelligence
Define "quality of the training". The only difference indicated by the
study is that highly motivated subjects _tend to do much better_ than
less motivated subjects and hence are more intelligent.

In fact, your "flip" of my argument is a mischaracterization that
grossly misses the point.

argumzio

ao

unread,
Jun 29, 2011, 4:36:15 PM6/29/11
to Dual N-Back, Brain Training & Intelligence
Zaraki, Google Groups servers seem to be on the fritz. Just now one of
my messages showed up even though I submitted it 4 hours ago:
http://groups.google.com/group/brain-training/msg/82011d7f25dc2cc8

Jonathan is right. A passive control group was definitely needed in
order to show that such gains really are positive and not a
statistical fluke of highly able children (my premiss).

argumzio

whoisbambam

unread,
Jun 29, 2011, 8:05:35 PM6/29/11
to Dual N-Back, Brain Training & Intelligence
It would be nice if they would study a large adult group.

Perhaps people in their 20s who dropped out of high school, or perhaps
blue-collar workers who graduated from high school and maybe went to a
non-collegiate (non-degree-granting) technical school.

I think that would be more objective.