error: JATOS isn't allowed to run a study with the study result ID


Becky Gilbert

Oct 16, 2017, 12:21 PM
to JATOS
Hi JATOS users,

I'm working on a study in my local JATOS 3.1.6. I've run through the whole experiment and a number of individual components as a JATOS user (i.e. via the 'run' button in the GUI rather than a link). When I run one specific component, I'm getting the following errors in the console when it finishes:

Failed to load resource: the server responded with a status of 403 (Forbidden) http://127.0.0.1:9000/publix/1/5/resultData?srid=27

Bad Request: JATOS isn't allowed to run a study with the study result ID 27. Maybe it was closed automatically by JATOS because you started other studies in the same browser in the meantime.

Could it be that the result ID numbers are out of sync, i.e. the data is sent back with result ID = 27, but maybe the study has already ended for this ID, or the results already exist, or there's no record of it starting? And could this be happening because this component was started without finishing (via either jatos.endStudy or jatos.startNextComponent)?

I'm not sure, but I think it's only an issue when the results are sent with this line:

  jatos.submitResultData(allData, jatos.endStudyAjax(false, 'HT_FAIL'));

(The above is called after a 'study ended' message is displayed because the participant didn't meet the criteria to continue.)

I'd appreciate any thoughts you might have on the source of this kind of error, and how to prevent and/or resolve it.

Thanks,

Becky

Kristian Lange

Oct 16, 2017, 3:47 PM
to Becky Gilbert, JATOS
Hi Becky,

This error message usually appears if a study run has already finished and one tries to access some resource from it, like you do when sending result data.

I can think of three ways the study could have finished: 1) via jatos.js with one of the abortStudy or endStudy calls, 2) via JATOS itself, because more than 10 studies were running in parallel in the same browser, or 3) because a worker reloaded a component that is not allowed to be reloaded.

>Could it be that the result ID numbers are out of sync, i.e. the data is sent back with result ID = 27, but maybe the study has already ended for this ID, or the results already exist, or there's no record of it starting?
No, JATOS always keeps the study IDs (and component IDs, result IDs, group IDs, batch IDs) - they are fixed. The only one who can delete a study and its results is a JATOS user via the GUI.

>And could this be happening because this component was started without finishing (via either jatos.endStudy or jatos.startNextComponent)?
I don't quite understand. If you finish a study run, you can't start another component: the study run is finished, and trying to do so would actually lead to your error message. You would have to start a new study run first.

Actually, I'm not sure I understood your question. Maybe you could clarify it a bit more for me.

Best
Kristian




Becky Gilbert

Oct 17, 2017, 9:15 AM
to JATOS
Hi Kristian,

Sorry for my poorly-formed question, and thanks for your response! 

You did answer my question, which was: how exactly could I have ended up with these console errors? I think I understand your explanation about accessing a resource from a study that's already finished. In my case, the study couldn't have ended via your options 2 or 3. The task has one call to jatos.startNextComponent, which runs if the performance criterion is met, and one call to jatos.endStudyAjax(), which only runs if the criterion is not met. Maybe at some point I added an 'on success' or 'on error' callback function to jatos.endStudyAjax, and maybe that callback tried to access a resource from the same study run (e.g. send the result data) after the endStudy call finished. I think that would produce the same console error, right? (Sorry, I haven't had time to try this yet.)

As an aside, I'd be interested in hearing about the sorts of things the endStudyAjax 'on success' and 'on error' callbacks are useful for.

Thanks again!
Becky 

Kristian Lange

Oct 17, 2017, 2:41 PM
to Becky Gilbert, JATOS
Hi Becky,

I'm glad I could be of some help; I wasn't sure I understood your questions.

>Maybe at some point I added an 'on success' or 'on error' callback function to jatos.endStudyAjax, and maybe that callback tried to access a resource from the same study run (e.g. send the result data) after the endStudy call finished. I think that would produce the same console error, right? (Sorry, I haven't had time to try this yet.)

Yes, it should produce the same error.

>As an aside, I'd be interested in hearing about the sorts of things the endStudyAjax 'on success' and 'on error' callbacks are useful for.
Internally jatos.js does an Ajax call to JATOS to end the study. Since Ajax calls might fail (e.g. due to network problems), I wanted to provide a way to react to the success of the call or to problems during the call. Maybe there is something you want to do in your component where you have to be 100% sure that the study run was really finished. Accordingly, onSuccess is called if the Ajax call succeeds and onError is called if it fails. You can have a look at the source code of jatos.js (https://github.com/JATOS/JATOS/blob/5f457c092632d200100721968a329ca72ed56e8b/modules/publix/app/assets/javascripts/jatos.js); in the current version (3.1.9) the Ajax call via jQuery begins in line 1810.

Additionally, the endStudyAjax function takes 2 more parameters, but they are not callback functions: the first one is a boolean and defines whether JATOS should internally store this study run (study result) as a success (true) or a failure (false). The second parameter is an error message that JATOS stores in the study result. This error message is just for you and your later understanding; jatos.js and JATOS won't do anything else with it.
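Here's a rough sketch of that call pattern, assuming JATOS 3.1.x. The jatos object normally comes from jatos.js inside a component; the minimal stub below only imitates it so the snippet can run anywhere:

```javascript
// Stub standing in for the real jatos.js object, for illustration only.
const jatos = {
  endStudyAjax: function (successful, errorMsg, onSuccess, onError) {
    // The real function sends an Ajax request to JATOS; this stub just
    // pretends that request succeeded.
    const requestOk = true;
    if (requestOk && onSuccess) { onSuccess(); }
    else if (!requestOk && onError) { onError("network problem"); }
  }
};

let status = "pending";
// 1st param: store the run as a failure; 2nd: message kept in the study result
jatos.endStudyAjax(false, "HT_FAIL",
  function () { status = "ended"; },           // safe: run is really finished
  function (err) { status = "error: " + err; } // e.g. warn the worker
);
console.log(status);
```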

Was this helpful? Do you think I should write more about this on the jatos.js reference page?

Best,
Kristian



Kristian Lange

Oct 17, 2017, 2:48 PM
to Becky Gilbert, JATOS
Hi again!
I just wanted to add that you don't need to use the jatos.endStudyAjax and jatos.abortStudyAjax functions. Using the jatos.endStudy, jatos.abortStudy or jatos.startNextComponent functions is enough in most cases. Actually, I can't remember any case where it was necessary for me to use the Ajax variants in a real study.
Best
Kristian

Becky Gilbert

Oct 19, 2017, 8:48 AM
to JATOS
Hi Kristian,

Thanks for explaining this! Sorry again that my question wasn't very clear: I'm wondering if you have any examples of useful onSuccess and onError callback functions that you might use with endStudyAjax. Are there any good ways of dealing with an Ajax error in particular (which, as you said, can happen due to network problems)? Maybe just try the same call again? The same goes for the submitResultData function: is it worth trying to send the data again if it fails the first time? I can't think of anything else that might be useful in case of an error, but thought I'd ask in case there are clever ways of dealing with these issues.

I actually find the jatos.endStudyAjax function very useful! I have experiments where we do pre-screening tasks, and if the participant doesn't meet the criteria then the task ends early. I could just use the endStudy function in this case, but that function automatically redirects the user to the default end-study page. With the Ajax function, I can show my own 'end study' message that stays in the window indefinitely. This also has the advantage of ending the study from JATOS's point of view, with a message sent back about why it ended. That is, I could just leave the person on the 'end study' message without the endStudyAjax call, but then it would be harder to differentiate between people who didn't finish because they failed the pre-screening and those who didn't finish for some other reason.
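The flow I'm describing is roughly this (function names and the message text are illustrative, and the jatos stub only imitates jatos.js so it runs outside a component):

```javascript
// Stub standing in for the real jatos.js object, for illustration only.
const log = [];
const jatos = {
  startNextComponent: function () { log.push("next component"); },
  endStudyAjax: function (successful, errorMsg) {
    log.push("run ended (" + successful + "): " + errorMsg);
  }
};

function finishPrescreening(criteriaMet) {
  if (criteriaMet) {
    jatos.startNextComponent();
  } else {
    // End the run in JATOS but stay on the page with our own message
    jatos.endStudyAjax(false, "failed pre-screening");
    log.push("Sorry, you did not meet the criteria to continue.");
  }
}

finishPrescreening(false);
console.log(log.join(" | "));
```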

I guess another way to do this would be to have two 'end study' components - one for people who actually finish the study and one for those who fail the prescreening. Then I could direct people to the appropriate 'end study' component using startComponentByPos. (This has only just occurred to me!)
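Something like this, I suppose (the component positions are made up, and jatos is stubbed so the routing logic runs anywhere):

```javascript
// Stub standing in for the real jatos.js object, for illustration only.
const jatos = {
  startComponentByPos: function (pos) { this.startedPos = pos; }
};

const PASS_END_POS = 5; // hypothetical 'thank you' component for finishers
const FAIL_END_POS = 6; // hypothetical 'sorry' component for failed pre-screening

function routeToEndComponent(criteriaMet) {
  // Send the worker to whichever 'end study' component applies
  jatos.startComponentByPos(criteriaMet ? PASS_END_POS : FAIL_END_POS);
}

routeToEndComponent(false);
console.log(jatos.startedPos); // 6
```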

Becky

Kristian Lange

Oct 19, 2017, 1:58 PM
to Becky Gilbert, JATOS
Hi Becky,

It's nice that you find the Ajax functions like jatos.endStudyAjax useful. We actually discussed whether they should be removed during the update from JATOS 2 to 3. I thought it might be too confusing for people to have both jatos.endStudy and jatos.endStudyAjax, and that they wouldn't see the use of the Ajax variant, but we decided against removing them. I can still see the use case that someone wants to redirect to a different webpage after the JATOS study run finished, a webpage outside of JATOS, to do some subsequent tasks. Another idea I just had: one could start another JATOS study at this point and this way have something like a chain of JATOS study runs.

>I'm wondering if you have any examples of useful onSuccess and onError callback functions that you might use with endStudyAjax. Are there any good ways of dealing with an Ajax error in particular (which, as you said, can happen due to network problems)? Maybe just try the same call again? The same goes for the submitResultData function: is it worth trying to send the data again if it fails the first time? I can't think of anything else that might be useful in case of an error, but thought I'd ask in case there are clever ways of dealing with these issues.
jatos.js already retries unsuccessful Ajax requests. There are actually two jatos.js variables you can change to adapt the retries to your needs. I didn't document them in the reference, again in order not to confuse people, but they are documented in the source code (https://github.com/JATOS/JATOS/blob/5f457c092632d200100721968a329ca72ed56e8b/modules/publix/app/assets/javascripts/jatos.js):
/**
 * How many times should jatos.js retry to send a failed HTTP call.
 */
jatos.httpRetry = 5;
/**
 * How long in ms should jatos.js wait between a failed HTTP call and a retry.
 */
jatos.httpRetryWait = 1000;
As you can see, the defaults are 5 retries with a wait of 1 s in between, so repeating the Ajax request from your end is unnecessary. What I could imagine is showing a warning message to the worker that the study couldn't complete and that they should call a phone number or the like.
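If you did want to adapt them, it would just be the two assignments below (the empty jatos object is only a stand-in so the snippet is self-contained; in a component the jatos global already exists):

```javascript
// Illustration only: in a real component the jatos global comes from jatos.js.
const jatos = {};

// Be more patient on a flaky connection: 10 retries, 2 s apart
// (the documented defaults are 5 retries and 1000 ms).
jatos.httpRetry = 10;
jatos.httpRetryWait = 2000;

console.log(jatos.httpRetry, jatos.httpRetryWait);
```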

>I guess another way to do this would be to have two 'end study' components - one for people who actually finish the study and one for those who fail the prescreening. Then I could direct people to the appropriate 'end study' component using startComponentByPos. (This has only just occurred to me!)
Yes. :) 

Best
Kristian

  


Becky Gilbert

Oct 20, 2017, 12:56 PM
to JATOS
Hi Kristian,


> I can still see the use case that someone wants to redirect to a different webpage after the JATOS study run finished, a webpage outside of JATOS, to do some subsequent tasks.
I forgot that I use endStudyAjax for this too! In my case I redirect participants to a 'completion page' on Prolific Academic (like MTurk). But in the future I might also use it to redirect people back to a static page (hosted elsewhere) containing a set of links to tasks that can be done in any order.
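Roughly what I do looks like this (the URL is a placeholder, and the jatos stub only mimics jatos.js calling onSuccess once the server has confirmed the run is ended):

```javascript
// Stub standing in for the real jatos.js object, for illustration only.
let redirectedTo = null;
const jatos = {
  endStudyAjax: function (successful, errorMsg, onSuccess) {
    if (onSuccess) { onSuccess(); } // stub: pretend the Ajax call succeeded
  }
};

jatos.endStudyAjax(true, "finished", function () {
  // Redirect only after JATOS confirms the study run has really ended
  redirectedTo = "https://example.com/completion-page";
  // In a real browser: window.location.href = redirectedTo;
});
console.log(redirectedTo);
```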

> Another idea I just had: one could start another JATOS study at this point and this way have something like a chain of JATOS study runs.
That hadn't occurred to me. Cool! 

Thanks for explaining how JATOS handles failed AJAX requests - this is really good to know. Sorry, I really should've looked through the jatos.js code before asking. And thanks for the suggestion about presenting the participant with an error message via the onError callback. This is exactly what I was wondering about.

Thanks again for all your help :)

Becky