"Putting brain training to the test" - Nature paper

Jonathan Toomim

unread,
Apr 21, 2010, 4:01:13 AM4/21/10
to brain-t...@googlegroups.com
http://www.nature.com/nature/journal/vnfv/ncurrent/pdf/nature09042.pdf

Abstract:
> ‘Brain training’, or the goal of improved cognitive function through
> the regular use of computerized tests, is a multimillion-pound
> industry, yet in our view scientific evidence to support its
> efficacy is lacking. Modest effects have been reported in some
> studies of older individuals and preschool children, and video-game
> players outperform non-players on some tests of visual attention.
> However, the widely held belief that commercially available
> computerized brain-training programs improve general cognitive
> function in the wider population in our opinion lacks empirical
> support. The central question is not whether performance on
> cognitive tests can be improved by training, but rather, whether
> those benefits transfer to other untrained tasks or lead to any
> general improvement in the level of cognitive functioning. Here we
> report the results of a six-week online study in which 11,430
> participants trained several times each week on cognitive tasks
> designed to improve reasoning, memory, planning, visuospatial skills
> and attention. Although improvements were observed in every one of
> the cognitive tasks that were trained, no evidence was found for
> transfer effects to untrained tasks, even when those tasks were
> cognitively closely related.


Pontus Granström

unread,
Apr 21, 2010, 7:01:17 AM4/21/10
to brain-t...@googlegroups.com
This is nothing new, but it does not state in any way that the brain is immune to improvement, nor does it discuss dual n-back or other similar studies. It also fails to explain the Flynn effect.

jttoto

unread,
Apr 21, 2010, 7:54:38 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
Actually, this is more important than some will likely give it credit
for. Since the newest evidence shows that single n-back improves
intelligence just as well as dual, we might expect that training on
any WM task would produce gains in intelligence. This study shows the
opposite. The game used in the study likely functions similarly to
n-back, since the WM games escalate in difficulty as the user
improves. Yet there was no improvement.

jttoto

unread,
Apr 21, 2010, 7:59:18 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
And in fact, this is more compelling evidence that, while one can
improve in a small number of cognitive domains given training, one
won't see an improvement everywhere. It is very likely that Jaeggi's
measure of "general fluid intelligence" was in fact very
training-specific.

Pontus Granström

unread,
Apr 21, 2010, 7:59:04 AM4/21/10
to brain-t...@googlegroups.com
WM tasks, although labeled as "equal", are in fact not equal from a neural-activity point of view. N-back stresses the central executive, while just "remembering stuff" does not.

Pontus Granström

unread,
Apr 21, 2010, 8:00:09 AM4/21/10
to brain-t...@googlegroups.com
How can IQ tests then measure something that should be common to all mental activity? If there is no overlap, then IQ tests themselves are just as task-specific.

Pontus Granström

unread,
Apr 21, 2010, 8:12:32 AM4/21/10
to brain-t...@googlegroups.com
I think you use the word WM-task a bit sloppily. Not all "WM tasks" are equal from an executive-function point of view. N-back activates roughly three times more areas than just "maintaining information". The reason dual n-back increases IQ is that working memory and attentional control are a big part of IQ tests; this is widely accepted and almost certainly the reason behind the Flynn effect. In fact, only 0.25 of the variation is explained by "procedural knowledge", or what some of the dnb critics want to call intelligence. This is true, but IQ tests still demand more than this.


jttoto

unread,
Apr 21, 2010, 8:15:25 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
Let's keep this in focus. In the end, Jaeggi used a very specific
measure of visuospatial reasoning. Yes, it has a strong correlation
with other functions, but we still are not 100% sure why these
correlations exist. Now let's look at the evidence this study offers in
comparison to Jaeggi.

The sample size was large, much larger than Jaeggi's.

The study used games that train and improve WM.

The study measured a broad range of cognitive functions, like one
would measure for g. Therefore, it is more comprehensive than Jaeggi's
study.

No result.

Pontus Granström

unread,
Apr 21, 2010, 8:16:56 AM4/21/10
to brain-t...@googlegroups.com
You cannot lump all WM tasks into one, I am afraid.

1. Working memory was often measured using a small, unsystematic set of tasks. The
resulting representation of the construct is therefore contaminated by task-specific
variance and does not necessarily reflect all relevant aspects of working memory.
2. In most studies on individual differences, working-memory capacity was operationalized
as an undifferentiated construct. This contrasts sharply with numerous experimental findings
that suggest a multicomponential view of working memory (e.g., Baddeley, 1986).
Different components of working memory, i.e., distinguishable cognitive resources, could
be expected to contribute to different extents to different intellectual abilities.
3. Similarly, studies relating working memory to intelligence constructs usually focused on
a single mental ability, such as reasoning or reading ability, as the criterion. This does
not provide a clear picture of how working memory relates to the structure of
intelligence, in other words, which abilities depend to what degree on working memory.
4. The tasks used to measure working memory were very similar to and sometimes
indistinguishable from common reasoning tasks. This problem is apparent, e.g., in the
work of Kyllonen and Christal (1990), where the same type of task served as a
working-memory task in one study and as a reasoning task in another. If tasks used to
measure working-memory capacity have many common features with those used to measure
reasoning, it is not clear which features are responsible for the high correlation obtained
between them.

Pontus Granström

unread,
Apr 21, 2010, 8:20:18 AM4/21/10
to brain-t...@googlegroups.com
The present study yielded three main results. (1) Working-memory capacity is highly
related to intelligence. The strongest relationship found was to reasoning ability, thereby
replicating results found by Kyllonen (1994a) and Kyllonen and Christal (1990). The
working-memory tasks in our test pool were selected so that overlap in cognitive
processes and strategies with the reasoning tasks was minimal.

jttoto

unread,
Apr 21, 2010, 8:21:22 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
Present the evidence then. As far as I know, there is no direct
study comparing the results of standard WM training with n-back or
dual n-back. A direct comparison must be made, within the same
parameters, to draw such conclusions.

Pontus Granström

unread,
Apr 21, 2010, 8:20:45 AM4/21/10
to brain-t...@googlegroups.com
I rest my case........

jttoto

unread,
Apr 21, 2010, 8:29:47 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
How does this wall of text disprove anything I said? It only
theorizes that WM may be multicomponential and more complex than we
believe. #3 in fact supports what I've been saying. It says nothing
about comparing training exercises to each other. So again, show me a
study directly comparing n-back to other WM training games, where one
is the active control.

Pontus Granström

unread,
Apr 21, 2010, 8:36:53 AM4/21/10
to brain-t...@googlegroups.com
Well, you said "The study used games that train and improve WM." WM tasks are, as you point out, complex, and as stated in the article it is not necessarily the ones that are important to intelligence that are examined/trained. N-back, however, is one of those WMC processes that is.

jttoto

unread,
Apr 21, 2010, 8:36:09 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
Ahh, I see, the two quotes are part of the same study. My mistake.

Still, that doesn't explain why a task that trains WM did not transfer
to most of the cognitive domains. We are talking about WM training,
not about what WM is. I am confused as to what you are trying to prove
with that quote.

Pontus Granström

unread,
Apr 21, 2010, 8:37:55 AM4/21/10
to brain-t...@googlegroups.com
Not all WM training is equal. You cannot arbitrarily call things WM.

jttoto

unread,
Apr 21, 2010, 8:45:27 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
Yes, but how does the game in the Nature study break the standard
definition of WM? Is it comparable to, say, the CogMed training used for
ADHD? This is something I wish the authors were more specific about, but
it seems reasonable, given the huge number of participants and their
obvious knowledge of cognition, that they properly researched WM and
chose a game that would measure and improve it.

Pontus Granström

unread,
Apr 21, 2010, 8:48:11 AM4/21/10
to brain-t...@googlegroups.com
No it just shows that those WM-tasks did not lead to any improvements.

Pontus Granström

unread,
Apr 21, 2010, 8:53:48 AM4/21/10
to brain-t...@googlegroups.com
Because they do not tax the brain in the same way dnb does, and are not important underlying functions for solving Gf-problems.

jttoto

unread,
Apr 21, 2010, 8:56:45 AM4/21/10
to Dual N-Back, Brain Training & Intelligence


The researchers specifically said that the games used were standard
measures of WM, modified to get more difficult as the user improves.

Then please explain how the game used is so different from n-back. It
requires the standard memorizing and updating of information, does it
not?

jttoto

unread,
Apr 21, 2010, 8:59:43 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
In other words, the games are simply standard measures of WM used
extensively in scientific literature. Are standard measures of WM not
taxing enough on the brain?

Pontus Granström

unread,
Apr 21, 2010, 9:02:54 AM4/21/10
to brain-t...@googlegroups.com
Once again, WM is not a SINGLE thing. They used tasks that required something we might label as WM, but not something that is equal to dnb from a neurological point of view; if it were, they would naturally have gotten the same results.

Pontus Granström

unread,
Apr 21, 2010, 9:04:50 AM4/21/10
to brain-t...@googlegroups.com
Remembering information uses only about a third of the brain areas that dnb does. Brain activity and oxygen supply to the brain are proportional to n-level.

jttoto

unread,
Apr 21, 2010, 9:10:43 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
At one time, n-back was just another measure of WM, like these games
were. And again, the games were based on scientifically recognized
measures of WM used extensively in scientific journals. Are you
telling me that you have indisputable proof that training on n-back
goes above and beyond other measures of WM? You don't, because no
study exists directly comparing the two training methods.

"if it were, they would naturally have gotten the same results"

That is under the assumption that the Jaeggi study is irrefutable proof
that WM can be improved. You must be able to replicate results
consistently before you can say something has strong empirical
support. In fact, wasn't there a study that didn't replicate Jaeggi's
findings using the exact same method? Not very encouraging.

Pontus Granström

unread,
Apr 21, 2010, 9:20:09 AM4/21/10
to brain-t...@googlegroups.com
I do not know which tasks were used. Just because they are standard WM tasks does not imply that they are the ones that Gf tests actually require, given that 0.6-0.8 of the variation is explained by variation in WMC.
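(A quick arithmetic aside: figures like this read very differently depending on whether they are correlations or proportions of variance explained. The conversion below is just that arithmetic, not a claim about which reading the cited work intended.)

# Convert between a correlation (r) and a proportion of variance explained (r^2).
# Purely illustrative; whether the quoted 0.6-0.8 figure is r or r^2 is unclear.
for value in (0.6, 0.8):
    as_r_squared = value ** 2     # if the figure is a correlation
    implied_r = value ** 0.5      # if the figure is a proportion of variance
    print(f"as r: {value:.2f} -> r^2 = {as_r_squared:.2f}; "
          f"as r^2: {value:.2f} -> r = {implied_r:.2f}")
# as r: 0.60 -> r^2 = 0.36; as r^2: 0.60 -> r = 0.77
# as r: 0.80 -> r^2 = 0.64; as r^2: 0.80 -> r = 0.89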

jttoto

unread,
Apr 21, 2010, 9:32:16 AM4/21/10
to Dual N-Back, Brain Training & Intelligence
I have to leave for work soon, so I'm going to end with this. When
presented with conflicting evidence, one must sit back and objectively
look at the results with scrutiny. Quite frankly, you are being very
close-minded about the null results. I personally believe that
intelligence can be improved, but one must look at the zero effects
closely to properly deduce what is behind the improvements. One must
also face the fact that maybe the improvements could be partially due to
researcher bias. So again, since you never properly rebutted it:

- Dual n-back, and n-back in general, does not have strong empirical
support. Even some studies using the same experimental methods as
Jaeggi have not replicated the same results. The fact that the
participants are young is moot, since in theory children can improve
as well, based on other studies.

- You still present no hard evidence that one method of training is
superior to another. Saying that one is inferior because it doesn't
produce results is a weak argument. One must directly compare them
within the same study.

Pontus Granström

unread,
Apr 21, 2010, 9:46:09 AM4/21/10
to brain-t...@googlegroups.com
Besides, they trained on 6 tasks during 10 minutes, three days a week, meaning they spent in total about 1.7 minutes on each task per day. No wonder they didn't improve. The WMC training came to about 5 minutes a week, roughly 15 minutes for the whole period, corresponding to not even ONE training session with dnb. No wonder they didn't see any results.
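(A minimal check of the per-task arithmetic above, using only the figures stated in this post rather than anything verified against the paper itself:)

# Per-task training time implied by the figures in the post above.
# All parameters are taken from the post, not from the Nature paper itself.
tasks_per_session = 6
minutes_per_session = 10
sessions_per_week = 3

per_task_per_session = minutes_per_session / tasks_per_session
per_task_per_week = per_task_per_session * sessions_per_week

print(f"{per_task_per_session:.1f} min per task per session")  # about 1.7
print(f"{per_task_per_week:.1f} min per task per week")        # about 5.0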

Pontus Granström

unread,
Apr 21, 2010, 10:41:29 AM4/21/10
to brain-t...@googlegroups.com
I am sorry, one group spent 2.5 minutes on short-term memory tasks. I guess this study is a joke.

jttoto

unread,
Apr 21, 2010, 8:06:01 PM4/21/10
to Dual N-Back, Brain Training & Intelligence
The study addressed that issue. Some spent more time. There was no
correlation between time spent and improvement. And again, you did not
address the points I brought up. Do I need to copy and paste them?


jttoto

unread,
Apr 21, 2010, 8:12:36 PM4/21/10
to Dual N-Back, Brain Training & Intelligence
Let me make this clear:

There is no hard evidence that dual n-back, a training method whose
results have not been consistently replicated, is superior to
the games described in this study. There is only speculation.

Pontus Granström

unread,
Apr 22, 2010, 3:13:16 AM4/22/10
to brain-t...@googlegroups.com
Well, show me any evidence that a group spent 200 minutes on a short-term memory task. That all STM tasks are equal is itself only speculation. N-back has a very special place due to its load on the executive functions. The study is not equivalent to the Jaeggi study in that sense. N-back activates areas linked to IQ as seen in fMRI scans, it increases oxygen supply to the brain (known to increase IQ), and n-back requires extensive use of the updating executive function, which has a 0.6 correlation with IQ and all other measures of intelligence. I would like to know which STM tasks were used; before that, we can't say anything.

1. First, no evidence that anyone spent 200 minutes on an STM task.
2. The STM-task is probably inferior to dnb. (not just speculation)

polar

unread,
Apr 22, 2010, 3:53:26 AM4/22/10
to Dual N-Back, Brain Training & Intelligence
Interesting study, I don't have time to go through it thoroughly now,
but I will. I just get the feeling they did not use n-back. And if
they didn't, that can make a huge difference. Not because I'm a fan of
it (I'm even more a fan of "truth"), but because n-back is subjectively
way more demanding than anything else in the cognitive area. IMHO
you can't compare backward digit span training with n-back. And
anything that transfers to Raven's (Klingberg; Jaeggi 2008, 2010) is
pretty much FAR transfer.


Michael Campbell

unread,
Apr 22, 2010, 6:57:06 AM4/22/10
to brain-t...@googlegroups.com
Pontus Granström wrote:
> 2. The STM-task is probably inferior to dnb. (not just speculation)

"probably" is pretty close to speculation.

Pontus Granström

unread,
Apr 22, 2010, 7:24:59 AM4/22/10
to brain-t...@googlegroups.com
Well, true, but I do not think it's a coincidence that n-back is used in so much neurological research on brain activity and so on.

Pontus Granström

unread,
Apr 22, 2010, 8:01:56 AM4/22/10
to brain-t...@googlegroups.com
Well, I have found definite proof now; take a look in the files section at "A comparison of laboratory and clinical working memory tests and their prediction of fluid intelligence". This proves that n-back is superior to other WM tasks when it comes to Gf. I rest my case.

Consistent
with previous research (Friedman et al., 2006; Gray et al.,
2003), n-back performance was also significantly correlated
to measures of fluid intelligence (rs range from .37–.40), and
tended to correlate more strongly across the gF measures
relative to the other WM tests used in the study. Overall, the
recall version of the n-back task used in the present study
proved to be a valid measure of WM function.

jttoto

unread,
Apr 22, 2010, 8:22:04 AM4/22/10
to Dual N-Back, Brain Training & Intelligence
- Strawman. It doesn't prove or disprove anything and doesn't directly
counter my argument. The STM games were used in neurological research
as well. Even if n-back is used more extensively (which you don't cite),
it doesn't matter. The idea that n-back is a superior method is not
conclusive, because its results can't be replicated consistently and
because there is no direct comparison with other games within similar
parameters; it is thus still just speculation.

:::"Well show me a evidence that a group spent 200 minutes on a Short
Term
Memory task."

Not relevant. That is the thing, we don't know. I'm not saying that
n-back does nothing, I'm saying we don't have enough evidence due to
conflicting results. In this case, it is up to you to provide hard
evidence, because a lack thereof only proves my point.

And the study shows that there is no correlation with time spent.
Don't you think that is significant at all?!

:::"That all STM tasks are equal is a thing that is only
speculation. "

Exactly! You are just proving my point; we don't know either way. So
why construct canned theories around a lack of evidence?

:::"N-back has a very special place due to it's load on the
executive functions."

Tall order since some studies show n-back does nothing.

::: "N-back activates areas linked to IQ as seen by fmri scans, it
increases oxygen to the brain (known to increase IQ), "

Changes in the brain don't automatically mean increases in IQ. One,
there aren't enough studies to show that it can be replicated on a
wider scale. Two, changes in the brain don't automatically correlate
with higher IQ. Do taxi drivers, jugglers, and meditators have higher
IQ as well? (There have been many studies showing transcendental
meditation doesn't increase IQ; many times only privately funded
research shows a positive effect.)

I'm done here. You epitomize using personal biases over scientific
evidence. When a method such as n-back can't be replicated, it is up
to us to look at the reasons why, and not make excuses for it. It is
up to us to be skeptical of its efficacy and not become a zealot. It
is becoming increasingly clear that there is no reasoning with you, so
I'm ending it here before I waste more time.


Pontus Granström

unread,
Apr 22, 2010, 8:27:15 AM4/22/10
to brain-t...@googlegroups.com
I guess you haven't read my last post, which provides real evidence for my earlier claims.

Pontus Granström

unread,
Apr 22, 2010, 8:30:40 AM4/22/10
to brain-t...@googlegroups.com
I have, as you know by now, uploaded evidence that n-back is indeed a superior task when it comes to Gf correlation, so who is the one who "epitomize[s] using personal biases over scientific evidence"?

jttoto

unread,
Apr 22, 2010, 8:34:21 AM4/22/10
to Dual N-Back, Brain Training & Intelligence
I'm only posting this because you brought this study up beforehand and
I forgot to address it. But after this I am not posting any
more.

The study looked at operation span, listening span, and n-back, not
every WM measure ever made. The first two are barely considered
standard measures of WM, so of course n-back is superior. And again, we
are talking about training, not measuring! It is clear that you spent
about as much time reading this study as you did my posts.



Pontus Granström

unread,
Apr 22, 2010, 8:35:45 AM4/22/10
to brain-t...@googlegroups.com
It proves that not all WM tasks are equal, and you have no idea which WM tasks were used, so why are you so sure that you are right? There just isn't any data that supports your claim.

Pontus Granström

unread,
Apr 22, 2010, 8:42:53 AM4/22/10
to brain-t...@googlegroups.com
The other WM tasks in this study were from the WAIS test. You claimed that increasing the difficulty of any WM task makes it equal to dnb, then you accused me of not being scientific, when it's clearly you who are not. You even said that they also required updating etc.; you just ASSUMED that all WM tasks are equal and that spending 25% of the time on such a task, compared to the lowest significant result by Jaeggi, is proof that training doesn't work. Why do you still claim this when you have been so overwhelmingly proved wrong?

Pontus Granström

unread,
Apr 22, 2010, 8:46:25 AM4/22/10
to brain-t...@googlegroups.com
Clinical measures (each row appears to list M, SD, correlations with the preceding measures, and finally the test's own value on the diagonal; asterisks mark significance)
1. Arithmetic 13.97 3.05 .75
2. Spatial span 17.75 2.99 .22 ⁎⁎ .74
3. Digit span 19.55 3.76 .31 ⁎⁎ .33 ⁎⁎ .81
4. Letter number 12.56 2.67 .45 ⁎⁎ .43 ⁎⁎ .60 ⁎⁎ .74
Lab measures
5. Ospan score 44.15 15.54 .23 ⁎⁎ .35 ⁎⁎ .54 ⁎⁎ .41 ⁎⁎ .73
6. Lspan score 29.06 10.93 .34 ⁎⁎ .29 ⁎⁎ .43 ⁎⁎ .45 ⁎⁎ .55 ⁎⁎ .74
7. Lag score 54.23 16.36 .41 ⁎⁎ .33 ⁎⁎ .48 ⁎⁎ .44 ⁎⁎ .38 ⁎⁎ .45 ⁎⁎ .79
gF measures
8. RAPM 25.50 4.04 .34 ⁎⁎ .25 ⁎⁎ .16 ⁎ .29 ⁎⁎ .29 ⁎⁎ .30 ⁎⁎ .40 ⁎⁎ .75
9. Block design 45.79 10.94 .41 ⁎⁎ .43 ⁎⁎ .27 ⁎⁎ .37 ⁎⁎ .29 ⁎⁎ .32 ⁎⁎ .38 ⁎⁎ .41 ⁎⁎ .75
10. Matrix Reasoning 20.17 2.51 .19 ⁎ .30 ⁎⁎ .18 ⁎ .21 ⁎⁎ .12 .23 ⁎⁎ .36 ⁎⁎ .37 ⁎⁎ .33 ⁎⁎ .65

So 7 different tests.

Pontus Granström

unread,
Apr 22, 2010, 8:50:45 AM4/22/10
to brain-t...@googlegroups.com
Furthermore, the best prediction of individual differences in
fluid intelligence was accomplished using a hybrid model that
depicted a latent construct comprising scores from the LNS and
laboratory WM tests. Taken together, these findings suggested
that while the laboratory and psychometric indices of WM may
be measuring similar cognitive processes, there were subtle
differences that should be considered, including their predictive
utility.


Something I have been claiming all the time.

Pontus Granström

unread,
Apr 22, 2010, 9:08:04 AM4/22/10
to brain-t...@googlegroups.com
These findings provided support for their claim that individual
differences in WM will be good predictors of fluid abilities to the
extent that the individual memory tests emphasize a controlled search
of SM.

The n-back task also emphasizes the need for a controlled search
process by forcing participants to retrieve an item that fell in a
specific position in the list. The Dspan and Sspan tasks, on the other
hand, involve the simple storage of information without an additional
processing demand.

This does not imply that performance on these tasks is void of
attention, but the extent to which controlled attention is emphasized
in the task may be different. Performance on the longer list lengths
of these tests would presumably yield better predictive power, but the
way in which the data from the WAIS-III and WMS-III were collected
(absolute accuracy scores for each list rather than item-level
accuracy) does not allow for a direct examination of this possibility.
The problems associated with the Arithmetic subtest could also reflect
the fact that performance is less reliant on general attention factors
and more reliant on a specific skill set.

Although the present data cannot fully speak to the nature of the
predictive relationship between WM and higher-order cognitive
function, it still offers insight into the specific WM tests that hold
the most predictive power. This is particularly important for clinical
evaluation where performance on cognitive tests could be used to
predict how a patient will function in other areas. It is clear that
certain memory tests are more sensitive to variation in other
cognitive abilities, and simple evaluation of the correlational
relationship between these tests will not necessarily speak to these
subtle, but important, differences.

Letter/Number-sequencing subtest of the WAIS-III and WMS-III
represent the purest battery of tests for the WM construct.
Furthermore, these four tests offer the best predictive capability of
higher-order cognitive abilities in this college-student sample. When
making the decision to use a particular set of memory tests one should
consider several factors, including how well these tests predict
performance in other areas and the other skills or cognitive processes
that are being represented in the tasks.


N-back is a superior predictor of Gf, and there are subtle differences between what we call "WM tasks". This study proves or disproves nothing, and the training time is so small that even with a good training regime there wouldn't have been any results.

Case closed.

Pontus Granström

unread,
Apr 23, 2010, 6:50:33 AM4/23/10
to brain-t...@googlegroups.com
"The first two are barely considered standard measures of WM, so of course n-back is superior."


Furthermore, these tasks (Ospan and reading span) have
been repeatedly shown to be reliable measures of WM that
demonstrate excellent construct and criterion-related validity
(for a list of the many higher order cognitive tasks that
correlate with WM, see Conway et al., 2005, p.777).

Barely a valid measure of WM?

jttoto

unread,
Apr 23, 2010, 7:53:19 AM4/23/10
to Dual N-Back, Brain Training & Intelligence
Still arguing with yourself, Pontus? I came to that conclusion from
your own cited study. If n-back correlates with Gf at .4 (which is a
weak-to-moderate correlation at best), and n-back is better than the
other two measures, then your own citation clearly states that the
other measures are weak correlations. I'm sorry, but I clearly don't
have the free time you do to cherry-pick every study that supports
my claim.
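(For scale, the shared variance implied by correlations of that size is simple arithmetic, not a claim about either study:)

# Shared variance implied by the correlations quoted from the paper (r -> r^2).
for r in (0.37, 0.40):
    print(f"r = {r:.2f} -> r^2 = {r * r:.2f} ({r * r:.0%} of variance shared)")
# r = 0.37 -> about 14% shared variance; r = 0.40 -> 16%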

I could pick apart the egregious logical fallacies in your last post,
but I already said I won't discuss this further. If you can actually
find a proper study which counters my arguments, then I will
continue. Let's look at what you have yet to disprove:

-- I said to source a study comparing WM training methods. There is a
clear difference between comparing training methods and comparing
measures. My little sister could figure this out. Let me spell it out:
just because something is a more accurate measure doesn't mean
practicing it will improve what it measures. This is logic 101. Let's
put this in perspective. RPM is an accurate measure of Gf. If I
practice the test, I get better at RPM. Does this mean my Gf improves
as well? No. I just get more efficient at the test, albeit an accurate
one. Read the entire paragraph; I've been repeating this for three
posts.

-- Phantom arguments. It seems you can't argue with me directly, so
you create arguments I allegedly made for your rebuttal. For the last
time, and read it this time: I never said that every WM training is
equal. I said that we should compare two training methods within the
same study. Find one, otherwise you have no argument.

-- I never said WM can't be improved. My only main argument is that
there is conflicting evidence on whether n-back can indeed improve
intelligence at its core. Jaeggi cited other studies showing null
results in her own paper. Other people have posted null effects on
n-back. And yet you still say I have no data? It is already sourced
on this forum. Well, I guess I have no data when you make up
arguments for me. Your barrage of experimental studies and
correlations doesn't change that.




jttoto

unread,
Apr 23, 2010, 8:01:07 AM4/23/10
to Dual N-Back, Brain Training & Intelligence
And it is hard to take you seriously when you make outlandish
conclusions such as this: "This proves that n-back is superior to
other WM tasks when it comes to Gf. I rest my case." So one weak
correlation is superior to two other weak correlations. Big whoop.
This still doesn't change the fact that we are talking about comparing
training methods, not measures. I don't see how this is a hard
concept to grasp. Is this your definition of a rested case?

Pontus Granström

unread,
Apr 23, 2010, 8:28:53 AM4/23/10
to brain-t...@googlegroups.com
First of all, are you still claiming that the tests used, for example Ospan, are barely valid measures of WM?

jttoto

unread,
Apr 23, 2010, 8:33:27 AM4/23/10
to Dual N-Back, Brain Training & Intelligence
Anywho, this debate is so scattered that I need bullet points to
figure out where I'm at. Let's see what we have accomplished so far:

- I said that this paper is evidence that WM games may not increase
overall intelligence, and only improve what they train

- You said that this isn't the case because n-back is a superior
training method.

- I said that in order to prove that, you need a study directly
comparing the effect sizes of n-back training and another WM
training method.

Instead of citing a proper study (you know, one that actually involves
training and an effect on Gf), we now have pages of haphazard mess
because you can't follow simple instructions. Perhaps I'm partially
to blame, because I should have copied and pasted "please cite a
proper study" as a response repeatedly. So again, do you have a study
or not?

jttoto

unread,
Apr 23, 2010, 8:34:54 AM4/23/10
to Dual N-Back, Brain Training & Intelligence
Not relevant. And no, I'm not. It was in fact not even a valid point,
just a tangent I decided to make based on results from your
study. It serves no purpose whatsoever, so stay on topic this time,
and I'll do the same.

Pontus Granström

unread,
Apr 23, 2010, 8:53:03 AM4/23/10
to brain-t...@googlegroups.com
First of all, there is dispute over whether n-back really is a WM task, as you can read in the study. Still, it is a superior predictor of Gf compared to the excellent WM tasks used in this study. Training on tasks that are not equal to n-back does not mean that n-back training can't increase intelligence. You are absolutely right that training and measurement do not have to be the same.

More specifically,
the n-back task may not provide a clear indication of the
capacity of WM, rather a person's ability to efficiently update
the contents of WM to better maintain current task goals.


I claim that this study in no way contradicts the results of Jaeggi, since they are not equal in either training time or training task. What's your problem? To spell it out for you: WM is a label and does not necessarily imply that all tasks are equal or use the same processes/areas in the brain; perhaps it's even wrong to call n-back a WM task.

Recent studies have offered conflicting results on the
utility of the n-back task as a valid measure of WM (Kane,
Conway, Miura, & Colflesh, 2007; Shelton, Metzger, & Elliott,
2007).


This study had no task similar to n-back, so comparing the two is simply wrong. It's like saying that training pull-ups doesn't increase muscle strength in the arms because training squats did not.

jttoto

unread,
Apr 23, 2010, 9:07:46 AM4/23/10
to Dual N-Back, Brain Training & Intelligence
I never disputed most of what you are typing above. But this does not
change the fact that we don't have irrefutable proof that n-back can
be trained to the point that we are not simply getting better at the
measure. If that weren't the case, how do you explain the studies with
no effect, even ones that used n-back?

Do you have a study comparing n-back training to another training
condition as a control, or not?


jttoto

unread,
Apr 23, 2010, 9:10:28 AM4/23/10
to Dual N-Back, Brain Training & Intelligence
"This study had no similar task to n-back so comparing the two are
absolutely
wrong. It's like saying that training on pull ups doesn't increase
muscle
strength in the arms since training squats did not. "

You can compare effect sizes of two seperate WM tasks. You just stick
one as the control and find out which is more effective. You can't
just say one is more effective without trying it out.
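To make the suggested comparison concrete, here is a minimal sketch of that kind of head-to-head design: two groups train on different WM tasks and their gains on the same untrained Gf measure are summarized with a standardized effect size. The group names and scores below are hypothetical, purely for illustration.

from statistics import mean, stdev

def cohens_d(gains_a, gains_b):
    """Standardized difference between two groups' gain scores (pooled SD)."""
    na, nb = len(gains_a), len(gains_b)
    pooled_var = ((na - 1) * stdev(gains_a) ** 2 +
                  (nb - 1) * stdev(gains_b) ** 2) / (na + nb - 2)
    return (mean(gains_a) - mean(gains_b)) / pooled_var ** 0.5

# Gain = post-test minus pre-test on the same untrained Gf measure (made-up numbers).
nback_gains = [3, 2, 4, 1, 3, 2]   # hypothetical group trained on n-back
other_gains = [1, 0, 2, 1, 0, 1]   # hypothetical group trained on another WM task

print(f"Cohen's d = {cohens_d(nback_gains, other_gains):.2f}")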


jttoto

unread,
Apr 23, 2010, 9:14:38 AM4/23/10
to Dual N-Back, Brain Training & Intelligence
"To
spell it out to you: WM is a label and does not necessarily imply that
all
tasks are equal or use the same processes/areas in the brain, perhaps
it's
even wrong to call n-back a WM-task. "

Where have I denied this? I already said two posts ago that I never
made that assertion.

Pontus Granström

unread,
Apr 23, 2010, 9:22:34 AM4/23/10
to brain-t...@googlegroups.com
I am confused; this study used short-term memory tasks, not n-back tasks. If n-back is not even a WM task, how can this prove that training on n-back doesn't increase IQ? I do not understand that at all.

It is also interesting to note that in "my" article they also used a speeded version with a 5-minute time limit on each block of 12 problems; a bit off topic, but still interesting, since this was something that Moody criticised Jaeggi for.


This version of the task included three blocks of 12 items
each (Raven, Raven, & Court,1998). For each item, a portion of a
geometric pattern was missing and participants were instructed
to choose the response that correctly completed the
pattern. Six response options were given for items in Set 1, and 8
response options were given for items in Set 2. The items
increased in difficulty across each block and 5min were allotted
to solve each block. This task was computer administered and
participants advanced by making responses with the mouse.
Individual scores on RAPM represented the total number of
items they responded to correctly across the three blocks.

jttoto

unread,
Apr 23, 2010, 9:29:07 AM4/23/10
to Dual N-Back, Brain Training & Intelligence
"I am confused; this study used short-term memory tasks, not n-back tasks. If n-back is not even a WM task, how can this prove that training on n-back doesn't increase IQ? I do not understand that at all."

Not surprising. Back to where we started:

N-back is considered by most to be a WM measure.
The Nature study used tasks that also measure WM.
In theory, training on measures of WM will improve intelligence.

I am saying that we should find a study comparing training on two
measures of WM within the same study (preferably one of them n-back)
to find out if one or both produce gains in intelligence, and which
method produces greater gains. I fail to see how this perplexes you.

A direct comparison is a more accurate way to find out which is more
effective. Not this barrage of speculation and correlations.


Pontus Granström

unread,
Apr 23, 2010, 9:34:55 AM4/23/10
to brain-t...@googlegroups.com
There is a lot of dispute around WM tasks, so assuming that they are all equally well suited for IQ training is pretty much bogus. She described training on a WM task; this task is n-back, which might not even be a WM task, and that could explain why training on n-back seems to increase IQ while spending a couple of minutes on an STM task does not.

jttoto

unread,
Apr 23, 2010, 9:39:55 AM4/23/10
to Dual N-Back, Brain Training & Intelligence
That is what she claims. Didn't you just cite a study where other
PhDs were claiming that n-back is a measure of WM, and an accurate one?
Clearly there is no consensus on these claims.
"Might" is the key word.

Pontus Granström

unread,
Apr 23, 2010, 9:44:23 AM4/23/10
to brain-t...@googlegroups.com
Well, n-back is in that case an updating working memory task. So, more specifically, training on an updating memory task leads to increased intelligence while training on short-term memory tasks does not.

Updating tasks measure WM equally well as CSTs. These results indicate that reasoning, CSTs, and updating tasks share common processing mechanisms. Building, maintaining, and updating arbitrary bindings may constitute these mechanisms, but further research including additional tasks designed to directly assess bindings are needed to elucidate this assumption.
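To make the "updating versus simple storage" distinction concrete, here is a minimal illustrative sketch (not any study's actual task code): a span-style task only stores a short list and reports it back once, while an n-back task forces the last n items to be re-checked on every trial.

def simple_span_correct(presented, recalled):
    """Span-style storage task: hold a short list, then report it back once."""
    return presented == recalled

def nback_targets(stream, n):
    """N-back updating task: on every trial, decide whether the current item
    matches the one shown n trials earlier."""
    return [i >= n and stream[i] == stream[i - n] for i in range(len(stream))]

print(simple_span_correct(["A", "C", "F"], ["A", "C", "F"]))   # True
print(nback_targets(["A", "B", "A", "B", "B"], n=2))
# [False, False, True, True, False]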

Pontus Granström

unread,
Apr 23, 2010, 9:51:16 AM4/23/10
to brain-t...@googlegroups.com
How to best measure working memory capacity is an issue of ongoing debate. Besides established
complex span tasks, which combine short-term memory demands with generally unrelated secondary
tasks, there exists a set of paradigms characterized by continuous and simultaneous updating of several
items in working memory, such as the n-back, memory updating, or alpha span tasks. With a latent
variable analysis (N = 96) based on content-heterogeneous operationalizations of both task families, the
authors found a latent correlation between a complex span factor and an updating factor that was not
statistically different from unity (r = .96). Moreover, both factors predicted fluid intelligence (reasoning)
equally well. The authors conclude that updating tasks measure working memory equally well as
complex span tasks. Processes involved in building, maintaining, and updating arbitrary bindings may
constitute the common working memory ability underlying performance on reasoning, complex span, and
updating tasks.

jttoto

unread,
Apr 23, 2010, 7:57:39 PM4/23/10
to Dual N-Back, Brain Training & Intelligence
Let's start with where I agree with you.

I do agree that the average training time in the Nature article is
ridiculously small. It was almost as if they were trying to prove
there was no effect. Perhaps the limitation was due to the large
sample size. That being said, the monkey on my back is that there was
no correlation between time spent and improvement. That is a very
large and angry monkey.

I also agree that the training regime is not identical to n-back.
Therefore, the transferability of this study's results to n-back
training is weak. Still, since the mechanisms are close, one would
expect to see some effect. In addition, people did see improvement on
the tasks they trained, just not on tasks they didn't train. That is
bothersome, because to my knowledge, this is the first study to
compare cognitive functions on such a broad scale.

Yes, n-back studies typically rely on a measure of Gf, but think about
this (which I already mentioned): retest effects prove that one can
get better at measures of Gf just with practice. Meaning, I can
practice the Raven's all day and get better over time. But does this
mean my Gf has improved? Probably not. Now, can we conceive that we
can improve on a game that trains similar functions that help me with
the Raven's, while not improving overall cognitive function? Yes, it
is possible. No study has convinced me otherwise, including
Jaeggi's. This is the crux of Moody's criticism, and it still stands
firm.

And this is why some studies probably don't show a positive result
from training with n-back. Does it really improve Gf, or does it, like
constantly practicing the Raven's, just help me improve at taking the
Raven's? Now I'm the one guilty of speculating, but these are
questions that the data has yet to answer.


jttoto

unread,
Apr 23, 2010, 8:33:18 PM4/23/10
to Dual N-Back, Brain Training & Intelligence
I'd also like to point out a mistake I made. The two working memory
tests, STM and VTM, were in fact the benchmarking tests used to assess
how the participants did before and after they played the games. When
reading over the description of the games, I made the assumption that
they were describing the two memory games used for training. This is
the actual description of the two games:

"After a short period the
conveyor belt stopped and the participant had to respond with how many
bags
were left in the X-ray machine. In the sixth task (memory 2), the
participant was
shown a set of cards and asked to remember the picture on each. The
cards were
then flipped over and the user had to identify pairs of cards with
identical objects
on them. For all of these tasks, exceptmemory 1, each training session
consisted of
two ‘runs’ of 90 s each and the main outcome measure was the total
number of
correct trials across the two runs. For memory 1, the main outcome
measure was
the number of problems completed in 3 min."

As you can see, there is no evidence that the training increases in
difficulty. It is very possible that the improvements were due to
crystallized strategies as opposed to constantly taxing WM. This
study is nothing like n-back, and should not be taken as the final
word on cognitive improvement.

aerodm

unread,
Apr 24, 2010, 12:04:30 AM4/24/10
to Dual N-Back, Brain Training & Intelligence
On what basis should we be assuming the games used in the study
function similar to dual or even single n-back?

The study cited claims to have used 'brain training' games as a method
for testing whether or not the games sold by companies actually result
in increased intelligence. Because the study sought to test the
efficacy of commercial games, it most probably employed games similar
to those sold or advertised commercially. It seems to me, then, that we
do not need to make assumptions about whether the games function
similarly to n-back. We need only visit a commercial website that lets
us play the kinds of games the researchers chose to employ: math games,
problem-solving games, and so on. A website such as
http://www.lumosity.com may provide us with examples.

I won't post links, but a bit of searching around on the internet
shows that most commercial brain-training websites sell or let people
use very similar products with very similar functions: basic addition,
subtraction, multiplication and division for the math games, while the
problem-solving puzzles may involve finding the smallest number of
moves required to get from point A to point B under some type of
constraining condition.

Because I am in no way an expert on working memory or the ways in
which the mind functions, I must ask some questions. In what way does
an addition, subtraction, or multiplication game relate to n-back
tasks? Do multiplication and subtraction have implications for working
memory? Do critical-thinking tasks designed to hone problem-solving
skills bear any resemblance to dual or single n-back? If n-back is a
test of working memory, and if the tests employed in the study are
indeed similar to those found online, then do these games tax working
memory or not? If they do, then perhaps Jaeggi's results have been
contradicted; even so, I believe it would be premature to claim that
Jaeggi's results are invalidated. If these games do not tax working
memory, then this discussion has gone on far too long, because by
asking the above questions and answering them carefully and correctly
one should be able to reach a conclusive result.

Perhaps my analysis is wrong, and if so, feel free to point out where,
why, and how.

-aerodm

On Apr 21, 7:54 am, jttoto <jtdem...@uncc.edu> wrote:
> Actually, this is more important then some will likely give credit
> for.  Since the newest evidence shows that single n-back improves
> intelligence just as well as dual, then we can imagine that training
> in any WM task will cause results in intelligence
>
> .  This study shows the opposite. The game used in the study likely
> functions similar to n-back, since the WM games escalate in difficulty
> as the user improves.  Yet, no improvement.

Pontus Granström

unread,
Apr 24, 2010, 3:37:47 AM4/24/10
to brain-t...@googlegroups.com
I am a bit curious about the fact that they used a 5 min time limit to solve 12 problems in the study where they examined working memory's prediction of fluid intelligence. This time limit is even tighter than Jaeggi's.

jttoto

unread,
Apr 24, 2010, 9:15:41 AM4/24/10
to Dual N-Back, Brain Training & Intelligence
Thank you for that thoughtful post. Unfortunately, I have to head to
work, but I will leave you with a couple of points, and when I come
back I will give a more detailed response.


>On what basis should we be assuming the games used in the study
> function similar to dual or even single n-back?

I have, in fact, corrected this mistake. A scant description of the
games is posted above. Memory 2 is in fact nothing like n-back. If
memory 1 does indeed get more difficult the more questions the user
answers correctly, one could assume it does tax WM, since it requires
the user to hold and update information at increasing difficulty (not
unlike n-back). However, we aren't given a description beyond what is
quoted above.

Correct me if I typed otherwise in this heated debate, but the core of
my argument is not that Jaeggi's findings are invalidated, but that
they are inconclusive. There is a difference.

Moody's argument is that n-back is indeed task-specific, meaning you
are training a specific cognitive function without improving overall
intelligence. Take a look at Klingberg's findings; Klingberg has been
unable to replicate his own findings. He describes a training regimen
similar but not identical to single n-back:

"(1) a visual span task where
circles appeared one at a time in different locations of a four by
four grid. Participants were instructed to indicate the positions of
the circles in the correct order; (2) a backwards digit-span task
where participants were required to repeat a spoken series of digits
in the reverse order;"

He also states that the training increases in difficulty.

The first one is almost identical to n-back. The main difference is
that you can argue that, unlike n-back, participants are not updating
information second by second, but only updating during the memorization
phase and taking a break while they answer (the sketch below
illustrates what continuous updating means in n-back). But if this is
so functionally different from n-back, why the gains in Gf? When a
study measured IQ gains using the Raven's, IQ did improve (like
Jaeggi's findings). This shows that the regimen does work, for certain
processes. However, when a different study looked at the Wechsler, no
IQ gain was reported, despite improvement on other untrained tasks.
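
To make the continuous-updating point concrete, here is a minimal
sketch of a single n-back trial loop in Python. It is my own
illustration, not code from Klingberg's or Jaeggi's training software,
and it only counts the n-back targets in a stimulus stream rather than
modelling a player's responses:

from collections import deque
import random

def count_n_back_targets(stimuli, n):
    # Hold exactly the last n stimuli. Every presentation pushes a new
    # item in and lets the oldest fall out, so the contents of memory
    # change on every single trial -- that is the continuous updating.
    buffer = deque(maxlen=n)
    targets = 0
    for s in stimuli:
        if len(buffer) == n and s == buffer[0]:
            targets += 1  # current stimulus matches the one n trials back
        buffer.append(s)
    return targets

# Example: a stream of letters with n = 2
letters = [random.choice("ABC") for _ in range(20)]
print(count_n_back_targets(letters, n=2))

Contrast this with a span task, where a whole list is held and then
reported once: there is no trial-by-trial replacement of old items by
new ones.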

Perhaps Jaeggi should compare the results of n-back training against a
more comprehensive IQ test such as the Wechsler, and we could put this
argument to rest.



Pontus Granström

unread,
Apr 24, 2010, 9:45:37 AM4/24/10
to brain-t...@googlegroups.com
I believe there was someone who failed the Raven's yet scored 150 IQ on the WAIS, as reported in the FAQ. At the time it was speculated that this was due to the WAIS's extensive use of culturally loaded tests, which might have favoured that test taker, and that as such it did not measure increases in Gf. Now the argument runs the other way around: there seem to be transfer effects to Gf tests, but we should use culturally loaded tests instead because they are more "accurate". If g represents "raw" biological processes associated with solving problems and learning, we can settle for measuring that; we do not need to study the extension of such processes into a social context, since it takes a while to increase Gc, which is learned knowledge.

Does anyone have a comment on the 5 minute limit by the way? 

jttoto

unread,
Apr 24, 2010, 9:48:43 PM4/24/10
to Dual N-Back, Brain Training & Intelligence
Perhaps we should ask the user which section he/she scored highest in.
There are sections of the WAIS that measure fluid intelligence rather
accurately as well.

Off topic a little. It is interesting to note that people with bipolar
disorder score lower on tests almost identical to the RAPM, and have WM
deficits, while being above average in arithmetic reasoning.
http://ajp.psychiatryonline.org/cgi/content/abstract/162/10/1904 They
are also more likely to be high achievers. http://www.physorg.com/news184573059.html
(and low achievers, but researchers speculate that these have a bipolar
subtype with more in common with schizophrenia.) They are also vastly
overrepresented in the creative fields, particularly writing, music,
and poetry, but also art, despite having a clear spatial deficit.
http://www.pendulum.org/articles/articles_bipolar_troubled.html

Now, the latent inhibition experiment showed a clear correlation
between cognition and creative achievement when LI was low, but keep in
mind that they only looked at Harvard students, so it is most likely
not representative of the entire creative population. Come to think of
it, if you look at the biographies of famous people in the arts, how
many came from Harvard (or went to college, for that matter)? Very
few.

Is it possible that the high bipolar achievers represent a different
cognitive profile than the average bipolar sufferer? That is
statistically very unlikely, considering the deficits and standard
deviations associated with the average sufferer, and the fact that the
average bipolar sufferer scores almost twice as high as controls on
creativity tests (in spite of cognitive deficits).
http://www.webmd.com/bipolar-disorder/news/20051114/study-bipolar-kids-often-more-creative
I have no personal stake in mentioning this, since I have said multiple
times on this forum that I tend to score very high on tests like the
RAPM. But the point of all this is this...

How much stock should we be putting in the RAPM and other visuospatial
reasoning tests, when a large and significant portion of the creative
and high-achieving population may in fact score horribly on them?

jttoto

unread,
Apr 24, 2010, 10:38:16 PM4/24/10
to Dual N-Back, Brain Training & Intelligence
And before we go into theories about the possibility that high-
achieving bipolars may have a different profile than the average, the
study cited above found the Raven's deficits regardless of
socioeconomic status (or status had barely any effect). In other words,
high-achieving bipolars still had deficits on the Raven's and similar
tests.

I am hardly ignorant of the RAPM and its correlation with success. I
am, however, saying that we should re-evaluate its place as one of the
gold standards of intellectual reasoning and achievement, considering
it is always used as a proxy for Gf. Clearly there is more going on
here.


Thales

unread,
Apr 25, 2010, 2:50:40 AM4/25/10
to Dual N-Back, Brain Training & Intelligence
Does the finding that increased training time does not correlate with
greater cognitive improvement take into account what tasks were being
trained?

This seems like an important point, because a finding that a great deal
of time spent on problem-solving tasks does not lead to any kind of far
transfer would clearly not contradict the dual n-back studies. (A
sketch of the within-group check I have in mind follows below.)
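
If the per-participant data were available, this would be
straightforward to check. The sketch below is only an illustration,
with made-up stand-in numbers and hypothetical column names, since the
study's individual-level data are not public as far as I know:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Stand-in numbers only; real values would come from the study's records.
df = pd.DataFrame({
    "arm": rng.choice(["reasoning", "general"], size=200),  # training group
    "sessions": rng.integers(1, 40, size=200),              # time spent training
    "change": rng.normal(0.0, 1.0, size=200),               # benchmark post minus pre
})

# Correlation between time spent and improvement, pooled across arms...
overall = df["sessions"].corr(df["change"])
# ...and the same correlation computed separately within each training arm.
within_arm = df.groupby("arm").apply(lambda g: g["sessions"].corr(g["change"]))

print(overall)
print(within_arm)

If the near-zero correlation held within every training arm, and not
just in the pooled data, that would be the stronger version of the
finding.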

Pontus Granström

unread,
Apr 25, 2010, 5:52:38 AM4/25/10
to brain-t...@googlegroups.com
> Off topic a little. It is interesting to note that people with bipolar
> disorder score lower on tests almost identical to the RAPM, and have WM
> deficits, while being above average in arithmetic reasoning.

Yes, and people with Asperger's score high on the RAPM/Gf while obviously having problems with other types of cognition. It just shows how narrow a mental skill can be.

jttoto

unread,
Apr 25, 2010, 8:04:11 AM4/25/10
to Dual N-Back, Brain Training & Intelligence
Yes, I will try to make this my last post on the matter since I am
delving off-topic.

Despite scoring very low on the RAPM regardless of social status, there
are some interesting facts about people with bipolar disorder (all
facts are cited in my post above):

- If you are a straight-A student, you are 4 times more likely to be
afflicted with the disorder. Getting As in the arts means you are very
likely to have the disorder, and to a lesser extent the same holds for
the sciences.
- Having the disorder means that you will, on average, score twice as
high as normal controls on the Barron-Welsh Art Scale, a measure of
creativity. (However, Arguzmio pointed out that the accuracy of
creativity metrics has not been firmly established.)
- There is vast overrepresentation in the creative arts. Among the top
creative achievers, despite being 1% of the population, they make up
over half of playwrights, almost half of other creative writers, 20% of
biographers, and, despite clear visuospatial deficits, 17% of artists.
A 1949 German study shows they are also overrepresented among
architects. This is congruent with the bullet point above.

Keep in mind that most researchers agree that visuospatial deficits are
so widespread among sufferers that they are more likely a symptom of
bipolar disorder than a chance co-occurrence. Combine that with the
fact that there is little variance in RAPM scores across socioeconomic
status among the afflicted, and it is highly unlikely that top bipolar
achievers represent an exception to the rule.

It is possible that those with bipolar have sacrificed visuospatial
skills for creativity, arithmetical reasoning, and a strong desire to
succeed. Oddly enough, verbal IQ is on par with the general
population. It is possible that strong arithmetic reasoning skills
could be a proxy for a strong skill in overall logical reasoning,
which allows them to arrange words in reasonable yet inventive ways.

Additional sources:
http://www.independent.co.uk/news/science/you-dont-have-to-be-bipolar-to-be-a-genius-ndash-but-it-helps-1887646.html




Pontus Granström

unread,
Apr 25, 2010, 8:26:44 AM4/25/10
to brain-t...@googlegroups.com
Arithmetic reasoning has a very high g-load, actually among the highest in, for example, the ASVAB battery.
http://subjectpool.com/ed_papers/2007/Deary2007Intelligence451-456_Brother_sister_sex_differences.pdf

It perhaps shows that the RAPM is indeed a highly visual test and that this ability is a limiting factor.

jttoto

unread,
Apr 25, 2010, 9:36:42 AM4/25/10
to Dual N-Back, Brain Training & Intelligence
Yes, it is interesting that word knowledge is also strongly correlated
with g, while science knowledge is one of the least correlated. Anyone
who has studied for a vocabulary test knows that word knowledge is
something you can improve with time and diligence.

Also keep in mind that vocabulary knowledge is necessarily limited: no
one has time to memorize the entire dictionary. Note too that
vocabulary tests tend to be multiple choice. Because of this,
vocabulary tests aren't really a measure of knowledge per se but rather
a measure of inductive reasoning and process of elimination, given the
limited information available from the set of choices and the user's
memory. That would in fact be very g-loaded.

IMO, a verbal fluency test is a more accurate measure of word
knowledge.


Pontus Granström

unread,
Apr 25, 2010, 10:05:04 AM4/25/10
to brain-t...@googlegroups.com
Well, science knowledge in this case refers to specific knowledge of certain phenomena in everyday life, or specific knowledge about, for example, the wavelengths of light and the like, rather than words of scientific value, so it's not so strange that the g-load is rather low.

milestones

unread,
Apr 25, 2010, 10:36:39 AM4/25/10
to Dual N-Back, Brain Training & Intelligence
Vocabulary has the highest g loading of all the sub-tests on the
Wechsler, but it's unfair to certain groups. Also, there is not much
question that high socioeconomic status brings with it a greater
acquaintance with more words than other groups have. For this reason
(and others) we have the Raven's as the culture-fair gold standard.

My personal opinion is that a test like the Raven's should be in more
use than it is. I mean, why not use it (along with a vocab and an
arithmetic test) as a substitute for the SAT? This way colleges get to
see a complete profile of intellectual ability, one that measures both
fluid and crystallized abilities. I am not sure of the use of the SAT;
even Charles Murray has proposed banishing the test. Certainly some
people do better on some tests than others, but I think what is good
about the Raven's is that it would help evaluate a student from an
exclusive prep school against a student who comes from an inner-city
school. Yet I realize that the difficulty with this sort of solution is
that certain groups (and obviously individuals) might fare better on
one type of test than the other. The test would also be quite a bit
shorter, and Princeton Review and Kaplan, et al., could not help people
prep for at least part of the exam, so maybe the prep companies would
then go out of business? I'd love to see it.

But, given that g presupposes roughly equivalent ability in all areas
for most people, a large discrepancy between verbal and non-verbal
scores might signal something that the college might want to know:
this person is perhaps more educated than they are bright? (or
bipolar)? Or just more verbal than visual? I tend to think, though,
that the Raven's is one of the fairest tests we have, could be a useful
admissions tool, and would help more people than it hurts, especially
if it is weighed only as a piece of the puzzle. Also, high performance
on an arithmetic section might offset weakness in one or the other
areas, especially if a disparate and not a composite scale is used. I'm
not the first person to think of this, but maybe the first person to
air it in this place. Using the Raven's (or a matrix test like it) as a
basis for college admissions is perhaps a controversial idea...I would
like to know why, specifically, though.

jttoto

unread,
Apr 25, 2010, 11:03:37 AM4/25/10
to Dual N-Back, Brain Training & Intelligence
"this person is perhaps more educated than they
are bright? (or bipolar)? "

I would, in fact, love to go through with your proposal, since I tend
to score much higher on visual reasoning tests compared to other mental
functions. That being said, I think it is human nature to elevate our
strengths and downplay the importance of our weaknesses, and I think we
are overplaying the importance of the RAPM. If bipolar disorder does
indeed predispose someone to scrupulously search for knowledge, thus
elevating one's score in arithmetic, then one would expect verbal IQ to
be elevated as well. That isn't the case.

No, the dichotomous discrepancy between arithmetic and the RAPM could
only mean that bipolar sufferers' skill in this area represents an
innate skill not caused by practice, especially considering that the
group seems to be drawn to verbal artistic interests with no direct
relation to math. It also clearly shows that one can be good at one and
not the other, and with that said:

"Also, high performance on an arithmetic section might offset
weakness
in one or the other areas, especially if a disparate and not a
composite scale is used."

Could I just as easily say that high performance on the RAPM could
offset weaknesses in other areas? I will find the study, but I remember
that recently there was a tribe in South America with almost identical
arithmetic skills to Americans' despite no form of mathematical
education. This shows that there is a strong hereditary component that
can't be chalked up to "education".

Despite what Murray may say, the facts speak for themselves. Vocabulary
tests and arithmetic are very highly g-loaded, perhaps more so than the
Raven's. Downplaying their relevance because they aren't culturally
fair is fine, but I think that within an English-speaking population
with a similar educational background, they are very reliable
indicators of g.

jttoto

unread,
Apr 25, 2010, 11:06:35 AM4/25/10
to Dual N-Back, Brain Training & Intelligence
But I do agree with you that we should use the Raven's in conjunction
with arithmetic and vocab. I don't think we should put more emphasis on
the Raven's and give it undue weight over vocab and especially
arithmetic, unless the person is from another country.


Pontus Granström

unread,
Apr 26, 2010, 4:19:48 PM4/26/10
to brain-t...@googlegroups.com
I uploaded yet another article, "Working Memory Capacity and Fluid Intelligence Are Strongly Related.pdf".
Here they review numerous studies and show a correlation averaging around 0.7 between g and WMC, hence around 50% of the variance is shared between the two (quick arithmetic below). N-back is, by the way, a measure of "executive memory" rather than WMC.
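
For anyone wondering where the 50% figure comes from, it is just the
usual squared-correlation arithmetic (my own back-of-the-envelope
reading, not a number quoted from the paper):

r = 0.7           # average correlation between WMC and Gf reported in the review
shared = r ** 2   # squared correlation = proportion of shared variance
print(round(shared, 2))   # 0.49, i.e. roughly half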

Pontus Granström

unread,
Apr 30, 2010, 11:51:52 AM4/30/10
to brain-t...@googlegroups.com
Got this email from Jaeggi today; I guess Moody is debunked now.

hi pontus, thank you for your interest in our work.
the reason why we used the 10min limit was that most of our earlier (pilot) participants completed all of the RAPM items correctly if no time limit was given.
thus, they can't get better after training since they already scored at ceiling before the training.
further, there is quite some research showing that timed versions of the RAPM are well predictive for scores in untimed versions.
in sum - i don't see any problems there.
however, we are giving untimed versions of various intelligence tests to our current participants, with similar (preliminary) results as in our earlier studies (but again, with the ceiling problem in the RAPM we encountered before).

hope this helps! have fun with training!

susanne

Gwern Branwen

unread,
Apr 30, 2010, 11:59:00 AM4/30/10
to brain-training
On Fri, Apr 30, 2010 at 11:51 AM, Pontus Granström <lepo...@gmail.com> wrote:
> Got this email from jaeggi today, I guess Moody is debunked now.
>
> hi pontus, thank you for your interest in our work.
> the reason why we used the 10min limit was that most of our earlier (pilot)
> participants completed all of the RAPM items correctly if no time limit was
> given.
> thus, they can't get better after training since they already scored at
> ceiling before the training.
> further, there is quite some research showing that timed versions of the
> RAPM are well predictive for scores in untimed versions.
> in sum - i don't see any problems there.
> however, we are giving untimed versions of various intelligence tests to our
> current participants, with similar (preliminary) results as in our earlier
> studies (but again, with the ceiling problem in the RAPM we encountered
> before).
>
> hope this helps! have fun with training!
>
> susanne

I don't think that settles it. If the participants were all at the
RAPM ceiling, then isn't using it (or the BOMAT) at all a bad idea? If
anything, them all getting ceiling scores and then the test being
shortened only shows exactly what Moody suggested - a *speed*
increase, not a _Gf_ increase.

And I'll believe the 'quite some research' when I see it. The email
leaves the justification for speed where it was: a footnote referencing
a footnote referencing an unpublished study.

--
gwern

Pontus Granström

unread,
Apr 30, 2010, 12:03:36 PM4/30/10
to brain-t...@googlegroups.com
They are seeing the same results with untimed tests as well so I guess it's pretty clear that n-back increases your score on Gf-tests.

Gwern Branwen

unread,
Apr 30, 2010, 12:15:36 PM4/30/10
to brain-training
On Fri, Apr 30, 2010 at 12:03 PM, Pontus Granström <lepo...@gmail.com> wrote:
> They are seeing the same results with untimed tests as well so I guess it's
> pretty clear that n-back increases your score on Gf-tests.

That's one bit I don't understand. If they're running into ceiling
effects *before* they start, then how can the researchers be seeing
any effect after DNB training 'like before'?

Perhaps she means that the scores remain the same before/after, but
the time it takes them to do the testing is decreased.

Or perhaps they aren't using smart above-average college students like
before (?) who *all* are at the ceiling, and this time only have a few
participants who hit the ceiling, and they are observing score
improvements in the non-ceiling participants.

Pontus Granström

unread,
Apr 30, 2010, 1:18:37 PM4/30/10
to brain-t...@googlegroups.com
Yes, I guess so, since they see the same results on untimed tests. If everyone hit the ceiling on the untimed test, then it would be hard to notice any increases :-)

Pontus Granström

unread,
Apr 30, 2010, 3:58:16 PM4/30/10
to brain-t...@googlegroups.com
I guess that timing the RAPM raises the ceiling rather than just measuring speed. You are awarded a higher score if you complete the problems in less time, something Moody obviously never reflected on.

polar

unread,
May 1, 2010, 1:52:58 PM5/1/10
to Dual N-Back, Brain Training & Intelligence
For me, this information is interesting because any claimed gains are
apparently present in probands who were well above average
(intuitively, it should be easier to gain something when you're at the
average of some skill). By the way, I'm always a bit surprised that
nobody from the Jaeggi team seems to be following these discussions.


Pontus Granström

unread,
May 2, 2010, 3:37:23 AM5/2/10
to brain-t...@googlegroups.com
Maybe they do! However, much of the criticism, which basically claims that the whole study is a well-calculated "hoax", seems to be false. I can't find even a single valid argument in his criticism.

"During the war H. J. Eysenck and J. C. Raven carried out many unpublished studies for the U.K. army on different
methods of administering the APM test, using 10 min, 40 min and untimed procedures. All correlations were well into
the 0.90s, averaging around 0.95, suggesting that time limitations have little influence on relative standing as far as
the APM test is concerned."

That, by the way, is the research on the RAPM time limit, carried out by Raven himself.

Pontus Granström

unread,
May 2, 2010, 4:00:53 AM5/2/10
to brain-t...@googlegroups.com
So I do not know why she would have to "carefully defend" this, since there is indeed quite a lot of research on the subject. It's Moody who should read the footnotes carefully.