Questions about debrief codes in quiat9.js


Daniel Cheong

Nov 15, 2022, 23:08:42
to Minno.js
Hello,

I have a few questions about the debrief page in this script: https://cdn.jsdelivr.net/gh/baranan/minno-tasks@0.*/IAT/qualtrics/quiat9.js

1. The code below for the debrief page only shows a preference for categoryB over categoryA.
a. What about a preference for categoryA over categoryB? In other words, does the result always show a one-way preference?
b. Also, what ranges of difference in reaction time make the result come out as a strong, moderate, or slight preference?
            fb_strong_Att1WithCatA_Att2WithCatB : 'Your responses suggested a strong automatic preference for categoryB over categoryA.',

            fb_moderate_Att1WithCatA_Att2WithCatB : 'Your responses suggested a moderate automatic preference for categoryB over categoryA.',

            fb_slight_Att1WithCatA_Att2WithCatB : 'Your responses suggested a slight automatic preference for categoryB over categoryA.',

            fb_equal_CatAvsCatB : 'Your responses suggested no automatic preference between categoryA and categoryB.',



2. What is the basis for the error messages below?
a. How many errors will result in the manyErrors message?
b. What is the RT that is considered to be too fast?
c. How many trials are considered enough to determine a result?
//Error messages in the feedback
            manyErrors: 'There were too many errors made to determine a result.',
            tooFast: 'There were too many fast trials to determine a result.',
            notEnough: 'There were not enough trials to determine a result.'
        };


I'm new to using Minno and JavaScript, so I would appreciate any help. Thank you!

Yoav Bar-Anan

Nov 17, 2022, 02:34:33
to Daniel Cheong, Minno.js

Hi Daniel,

I’ll try to answer all of your questions below:

On Wed, Nov 16, 2022 at 6:08 AM Daniel Cheong <dannyc...@gmail.com> wrote:

Hello,

I have a few questions about the debrief page in this script: https://cdn.jsdelivr.net/gh/baranan/minno-tasks@0.*/IAT/qualtrics/quiat9.js

1. The code below for the debrief page only shows a preference for categoryB over categoryA.
a. What about a preference for categoryA over categoryB? In other words, does the result always show a one-way preference?

Yoav: According to the comment in that file “categoryA is the name of the category that is found to be associated with attribute1, and categoryB is the name of the category that is found to be associated with attribute2.”
You can see in the file that by default attribute1 is “Bad words” and attribute2 is “Good words”.
That is why the feedback messages that you see below refer to a preference for categoryB (whichever category was more strongly associated with attribute2) over categoryA (whichever category was more strongly associated with attribute1). In other words, there cannot be a preference for categoryA because categoryA is defined as the one more associated with attribute1, which is, by default, “Bad words”. If you change attribute1 and attribute2, you might need to change the text (e.g., “Your responses suggested a stronger association of categoryA with attribute1 and categoryB with attribute2 than the opposite”).
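
Since you mentioned being new to JavaScript, here is a minimal sketch of the substitution idea. The getFB helper below is only my guess at what the real function in quiat9.js does (simple placeholder replacement), so treat it as an illustration, not the extension's actual code:

function getFB(template, preferredName, otherName) {
    // Assumed behavior: fill the categoryB/categoryA placeholders in the
    // feedback template with the observed category names.
    return template
        .replace(/categoryB/g, preferredName)   // the category found to be associated with attribute2
        .replace(/categoryA/g, otherName);      // the category found to be associated with attribute1
}

getFB('Your responses suggested a strong automatic preference for categoryB over categoryA.', 'Flowers', 'Insects');
// -> 'Your responses suggested a strong automatic preference for Flowers over Insects.'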

b. Also, what ranges of difference in reaction time make the result come out as a strong, moderate, or slight preference?
            fb_strong_Att1WithCatA_Att2WithCatB : 'Your responses suggested a strong automatic preference for categoryB over categoryA.',
            fb_moderate_Att1WithCatA_Att2WithCatB : 'Your responses suggested a moderate automatic preference for categoryB over categoryA.',
            fb_slight_Att1WithCatA_Att2WithCatB : 'Your responses suggested a slight automatic preference for categoryB over categoryA.',
            fb_equal_CatAvsCatB : 'Your responses suggested no automatic preference between categoryA and categoryB.',

YBA: To find out, I searched for the text “moderate” in the code, and this is the relevant code:

var messageDef = [
        { cut:'-0.65', message : getFB(piCurrent.fb_strong_Att1WithCatA_Att2WithCatB, cat1.name, cat2.name) },
        { cut:'-0.35', message : getFB(piCurrent.fb_moderate_Att1WithCatA_Att2WithCatB, cat1.name, cat2.name) },
        { cut:'-0.15', message : getFB(piCurrent.fb_slight_Att1WithCatA_Att2WithCatB, cat1.name, cat2.name) },
        { cut:'0.15', message : getFB(piCurrent.fb_equal_CatAvsCatB, cat1.name, cat2.name) },
        { cut:'0.35', message : getFB(piCurrent.fb_slight_Att1WithCatA_Att2WithCatB, cat2.name, cat1.name) },
        { cut:'0.65', message : getFB(piCurrent.fb_moderate_Att1WithCatA_Att2WithCatB, cat2.name, cat1.name) },
        { cut:'5', message : getFB(piCurrent.fb_strong_Att1WithCatA_Att2WithCatB, cat2.name, cat1.name) }
];

This refers to the D score computed by the extension. So, above 0.65 is strong, between 0.35 and 0.65 is moderate, and so on.
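
To make the mapping concrete, here is a minimal sketch (not the extension's scorer) of how an ordered list of cuts like the one above turns a D score into one feedback message. The selection rule, taking the first entry whose cut is at or above the score, is my assumption about how the extension walks this list:

// Sketch only: simplified cutoff table and lookup. The real messageDef above
// builds each message with getFB and the two category names.
var cutoffs = [
    { cut: -0.65, label: 'strong preference (first direction)' },
    { cut: -0.35, label: 'moderate preference (first direction)' },
    { cut: -0.15, label: 'slight preference (first direction)' },
    { cut:  0.15, label: 'no preference' },
    { cut:  0.35, label: 'slight preference (other direction)' },
    { cut:  0.65, label: 'moderate preference (other direction)' },
    { cut:  5.00, label: 'strong preference (other direction)' }  // catch-all upper bound
];

function feedbackFor(dScore) {
    for (var i = 0; i < cutoffs.length; i++) {
        if (dScore <= cutoffs[i].cut) { return cutoffs[i].label; }
    }
    return cutoffs[cutoffs.length - 1].label;  // not reached with the 5.00 catch-all
}

feedbackFor(0.5);   // 'moderate preference (other direction)'
feedbackFor(-0.2);  // 'slight preference (first direction)'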

This would be a good place to paste one of the comments in the extension about the debriefing:

We do not recommend showing participants their results. The IAT is a typical psychological measure, so it is not very accurate.
On Project Implicit's website, you can see that we added a lot of text to explain that there is still much unknown about the meaning of these results.
We strongly recommend that you provide all these details in the debriefing of the experiment.
It would also be a good time to mention that we do not recommend using the IAT D score provided by the program in data analysis. We strongly recommend computing the IAT D score on your own.

2. What is the basis for the error messages below?
Yoav:  The cutoffs explained below are legacy, probably determined by Brian Nosek a couple of decades ago. 
If you want to dig deeper, then documentation relevant to the scorer we use in the extension can be found here, and the code here.

a. How many errors will result in the manyErrors message?
Yoav: > 40% [based on computeD.js, line 21].
b. What is the RT that is considered to be too fast?
Yoav: < 150ms [based on the extension file, fastRT=150]
c. How many trials are considered enough to determine a result?
//Error messages in the feedback
            manyErrors: 'There were too many errors made to determine a result.',
            tooFast: 'There were too many fast trials to determine a result.',
            notEnough: 'There were not enough trials to determine a result.'
        };


Yoav: 2 from each pairing condition. [based on parcelMg.js, line 282]
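
Putting those three answers together, here is a minimal sketch of the validity checks (not the extension's code). The 40% error cutoff and the 150 ms fast-response cutoff come from the answers above; the 10% cap on the proportion of fast trials and the simplified minimum-trials check are assumptions for illustration (the real rule requires at least 2 trials in each pairing condition, per parcelMg.js):

// Sketch only: decide which feedback error message (if any) applies.
// trials: array of { rt: <milliseconds>, error: <boolean> } for one participant.
// messages: the manyErrors / tooFast / notEnough strings quoted above.
function validityCheck(trials, messages) {
    if (trials.length < 2) {
        // Simplified; the real check is at least 2 trials per pairing condition.
        return messages.notEnough;
    }
    var errorRate = trials.filter(function (t) { return t.error; }).length / trials.length;
    var fastRate  = trials.filter(function (t) { return t.rt < 150; }).length / trials.length;

    if (errorRate > 0.40) { return messages.manyErrors; }  // > 40% errors
    if (fastRate > 0.10)  { return messages.tooFast; }     // assumed 10% cap on <150 ms trials
    return null;  // data look usable; go on to compute the D score
}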

I hope this helps, 
Yoav 

Daniel Cheong

Nov 17, 2022, 12:03:18
to Minno.js
Hi Yoav,

Awesome, thank you very much for answering my questions in great detail! Now I have a much better understanding of how the results are computed and the basis for the error messages.

Much appreciated!

Best regards,
Daniel

Daniel Cheong

Nov 23, 2022, 05:21:25
to Minno.js
Hi Yoav,

Thank you for your help previously! I managed to run my custom IAT on Qualtrics. I have a couple of questions about the results:
- How is the D score (0.94 in the example below) computed in the dataset that I downloaded and formatted from Qualtrics?
[attached image: image.png]

- Is it possible to change the code in the script so that the D score is computed from specific blocks that we choose? For example, omitting or including Blocks 3 and 6 in the computation of the D score. If so, in which part of the script can I find this?

Thank you!

Best regards,
Daniel

Yoav Bar-Anan

Nov 29, 2022, 02:55:54
to Daniel Cheong, Minno.js
Hi Daniel, 

Sorry for the late reply. Please see below.

On Wed, Nov 23, 2022 at 12:21 PM Daniel Cheong <dannyc...@gmail.com> wrote:
Hi Yoav,

Thank you for your help previously! I managed to run my custom IAT on Qualtrics. I have a couple of questions about the results:
- How is the D score (0.94 in the example below) computed in the dataset that I downloaded and formatted from Qualtrics?
Yoav: It is almost identical to the D2 measure from Greenwald et al. (2003; see their Table 2). The only difference is that instead of computing D1 from Blocks 3 and 6 and D2 from Blocks 4 and 7, we compute one D directly from Blocks 3&4 vs. Blocks 6&7. We did that for technical reasons (the scoring code is awful and I could not make it compute a score from separate parcels), but it turns out that this is also recommended (Richetin et al., 2015). However, see my next comment for a mistake that the current code for the Qualtrics extension seems to have.
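
If you ever want to compute the score yourself (which we recommend anyway), here is a minimal sketch of that idea: one D computed from the pooled trials of Blocks 3&4 versus Blocks 6&7, i.e., the mean latency difference divided by the pooled standard deviation. It omits the trial exclusions and error penalties of the full Greenwald et al. (2003) algorithm, so treat it as an illustration rather than a drop-in replacement for the scorer. Choosing which blocks to include then becomes just a matter of which block numbers you pass in, which also relates to your next question:

// Sketch only: a single D from two sets of blocks. Each trial is { block, rt }.
function mean(xs) {
    return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}

function sd(xs) {
    var m = mean(xs);
    var ss = xs.reduce(function (a, x) { return a + (x - m) * (x - m); }, 0);
    return Math.sqrt(ss / (xs.length - 1));
}

function computeD(trials, blocksA, blocksB) {
    var rtA = trials.filter(function (t) { return blocksA.indexOf(t.block) > -1; })
                    .map(function (t) { return t.rt; });
    var rtB = trials.filter(function (t) { return blocksB.indexOf(t.block) > -1; })
                    .map(function (t) { return t.rt; });
    var pooledSD = sd(rtA.concat(rtB));  // SD over all included trials from both pairings
    return (mean(rtB) - mean(rtA)) / pooledSD;
}

// D from Blocks 3&4 vs. 6&7 (as described above), or from the test blocks only:
// computeD(trials, [3, 4], [6, 7]);
// computeD(trials, [4], [7]);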
 
[attached image: image.png]

- Is it possible to change the code in the script so that the D score is computed from specific blocks that we choose? For example, omitting or including Blocks 3 and 6 in the computation of the D score. If so, in which part of the script can I find this?
Yoav: You will need to modify the extension code (duplicate the extension and change it on your own). It can probably be done by changing the parcel variables (lines 1174, 1199, 1297, 1395, 1396 here). However, as I said, the scorer code is not great, and from my reading of the code right now it seems that the score for the most recent qiat versions is actually computed only from blocks 3 and 6, and not 4 and 7, which is not the correct method at all. We will fix it in a couple of weeks, but as I said, we strongly recommend not showing participants feedback messages and not using the program's D score in your data analysis. 

Yoav

Daniel Cheong

Nov 30, 2022, 12:51:07
to Minno.js
Hi Yoav,

Thank you very much for clarifying the computation of the D score and the scorer code. Appreciate it :)

Best regards,
Daniel