1) Implement mental rotation into BW. I remember Argumzio said many
things about how great MRT is. Well, he's right - mental rotation can
actually correlate with Gf more than updating tasks do. So why not
merge these two tasks into one? There are surely several ways this
could be done. A) Work with the 2D objects we already have, and every
time the same object appears rotated another 90 degrees, it's a hit.
B) Work with 3D objects (probably generated like the ones at
http://psych.hanover.edu/JavaTest/CLE/Cognition/Cognition/mentalrotation_instructions.html
) and check whether the object is rotated (hit) or mirrored. C) Slowly
rotate the whole grid - I like this idea very much (cyberiad
apparently does too :). I looked into the BW source code (it's not
that easy, but not that hard either), but I don't have experience
doing graphics in Python. I asked shamanu (he already has some
rotations implemented) about his version, and he was curious what the
proposals were - so here they are. And what are your proposals now? ;)
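To make proposal A concrete, here's a minimal sketch of the matching rule in plain Python (all names are mine, nothing here is from the actual BW codebase): a trial counts as a hit when the shape from n trials back reappears rotated a further 90 degrees.

```python
import random

SHAPES = ["L", "T", "Z"]          # placeholder names for the 2D objects
ROTATIONS = [0, 90, 180, 270]

def make_trials(length):
    """Generate a random stream of (shape, rotation) stimuli."""
    return [(random.choice(SHAPES), random.choice(ROTATIONS))
            for _ in range(length)]

def is_rotation_hit(trials, i, n):
    """Proposal A: hit when the shape from n trials back reappears
    rotated a further 90 degrees (mod 360, so 270 -> 0 also matches)."""
    if i < n:
        return False
    shape, rot = trials[i]
    prev_shape, prev_rot = trials[i - n]
    return shape == prev_shape and rot == (prev_rot + 90) % 360
```

For example, with n = 2 the stream `[("L", 0), ("T", 90), ("L", 90)]` makes the third trial a hit, while `("L", 180)` in that slot would not be one.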
2) Implement semantics into BW. I don't want to go into the details of
the debate about whether intelligence / WM is domain specific, i.e.
whether visuospatial abilities are substantially different from verbal
ones or not. But if they are (which is the prevalent view), we should
ignore neither side. This is not a question of whether you like words
or images more (I know that for some people squares are more fun than
words ;). It's just a reminder that we have a special area in the
brain for language, that the maybe 50 000 words in our brain are all
related to each other like neurons, and that vocabulary and word
relationships have the biggest g loading (when you combine Gf+Gc).
Now, the good news is we don't need much graphics to implement it. The
bad news (or "bad" for some :) is that I can't figure out the concept.
I just know we need to work with words and the relationships between
them, in a form usable within a fast n-back framework. In Friedman's
2006 article ("Not all executive functions are related to
intelligence" - http://www.iapsych.com/articles/friedman2006.pdf ),
there's one kind of updating task that correlates with RAPM more than
8 other tasks - it's called "keep track" (3 categories, belongs /
doesn't belong, which is the last item in each?...). Words and
categories are not the problem; we can take them from Wikipedia lists
of sports / living things etc. The problem is that you need a
hierarchy, and you need to give a response AND a stimulus at the same
time, so you might need to present two words (one audio, one
visual?)... maybe. Or do you have any semantics ideas?
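One way to squeeze semantics into the fast n-back framework might be a category match: a hit when the current word belongs to the same category as the word n trials back. A minimal sketch (the tiny word lists are just illustrations; real ones could come from the Wikipedia lists mentioned above):

```python
import random

# Tiny illustrative word lists; real ones could be scraped from
# Wikipedia category pages (sports, living things, etc.).
CATEGORIES = {
    "sport":  ["tennis", "soccer", "boxing", "rowing"],
    "animal": ["otter", "falcon", "badger", "salmon"],
    "metal":  ["iron", "copper", "zinc", "nickel"],
}
WORD_TO_CAT = {w: c for c, words in CATEGORIES.items() for w in words}

def make_stream(length):
    """Random word stream drawn from all categories."""
    pool = [w for words in CATEGORIES.values() for w in words]
    return [random.choice(pool) for _ in range(length)]

def is_category_hit(stream, i, n):
    """Hit when the current word shares a category with the word n back;
    the words themselves usually differ, forcing a semantic judgment."""
    return i >= n and WORD_TO_CAT[stream[i]] == WORD_TO_CAT[stream[i - n]]
```

This only gives a flat two-level hierarchy, not the full "keep track" task, but it keeps the one-stimulus-one-response rhythm of the existing modes.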
And maybe one more thing - the "real" triple n-back, I mean a tactile
stimulus. Many people have mentioned it here. I proposed using a
joypad with different vibrations, but one article inspired me toward
an easier solution, in which you need just one kind of impulse, and it
doesn't matter what it is. BUT, you need to deliver it to different
fingers (of one hand, while the other plays with the keyboard). I
think there could be USB gadgets to do this, with not-that-bad Python
accessibility. I thought about a CD-ROM drive too, but how do you
separate that across fingers? :)
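Whatever the gadget turns out to be, the task logic on top of it is simple: the stimulus identity is just which finger got the pulse. A sketch, with the hardware call left as a stub (I'm not assuming any particular device API here):

```python
import random

FINGERS = ["index", "middle", "ring", "little"]   # one hand on the device

def send_pulse(finger_idx):
    """Stub: a real version would drive one actuator of the USB gadget;
    the hardware API is device-specific and not assumed here."""
    print("pulse ->", FINGERS[finger_idx])

def tactile_stream(length):
    """Random sequence of finger indices, one pulse per trial."""
    return [random.randrange(len(FINGERS)) for _ in range(length)]

def is_tactile_hit(stream, i, n):
    """Hit when the same finger was stimulated n trials back."""
    return i >= n and stream[i] == stream[i - n]
```

With four fingers this gives the same chance level as the four corners of a 3x3 grid edge, so it should slot into the existing scoring without changes.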
OK, these are my very open thoughts; I look forward to your
contributions of any kind. I'm continuing to read (only about 400
articles to go ;) and trying to do my thesis research / publishing
(anybody need a foreign EU research partner to get funding? :). And
the very last thing: I made a list of all the n-back articles I've
found so far (42 of them), enjoy:
polar wrote:
On 23. Mar., 19:49 h., Shamanu999 <Shamanu...@gmx.at> wrote:
> Thanks, my "proposals"? I will experiment with additional tasks like
> n-back on the left side and something different on the right.
> Another experiment I have planned should allow what you described in the
> second point.
>
> polar wrote:
>
>
>
> > After following this group, doing some theoretical research (and
> > yesterday cyberiad's post :), I would like to propose two ways of
> > improving n-back / BW, in terms of better training of Gf. Both are
> > just hypotheses, but built upon published articles. Btw, in the last
> > decade alone there are hundreds of articles related to working memory,
> > even more related to intelligence, and tens of them related to n-back
> > (although mostly not the dual version). So I won't be citing all of
> > this now, but the ideas are quite self-explanatory and supported in
> > the articles.