Hello Lucas, dear LIONESS team,
The bug unfortunately persists.
It looks like I never replied to your last message. I am sorry about that, and thank you for your answers; I was not working on this experiment for a while.
The following is a longer message than I expected when I started writing. I also realize my experiment is heavy with customized code, which may take a while to understand.
So I would like to offer a live chat (via a video call), if you prefer that.
If you do, please email me to arrange a convenient time for us to chat next week (I'm only available from Wednesday, 3 July, but then very flexibly for most of the next two weeks).
I have two bugs to report that I think need to be fixed. I believe these are LIONESS-related bugs, but they could also have something to do with individual users' system settings. I am only conjecturing this because the bugs described below happen for some participants but not for others, and different bugs seem to affect different users.
The first problem (already reported before) happens on stages 2 and 3 (where participants count letters or numbers).
Sometimes, the code is executed correctly, and all data is recorded correctly. Other times, it is not.
Whenever I test the experiment (ID 34856) myself or ask others personally to test it, we never have any issues. When Prolific participants do it, it seems like about 50% of them have problems.
I asked the affected participants which browser they used, and they said Google Chrome. So the browser alone does not seem to explain it, because I am using Chrome, too.
The problem is that the system sometimes records values that simply make no sense.
I am incrementing a score based on entered values in a task where participants count numbers (or letters).
This incremented score can only range from 0 to 20, because there are only 20 values that can be entered. But for some participants I see score values of 30, 50, ... This makes no sense.
The code I am referring to below is in stage 3.
In the code, it looks like the system correctly reads and records the participants' entered values, with, for example
count_0a = parseInt(document.getElementById("0a").value);
setValue('count0a', count_0a);
(in the JS code on lines 205-206).
But the system seems to add up the score incorrectly sometimes. For example,
if(count_0a==8 && skipped_0a===0){
    scoreNumbersP1 += 1;
}
(in the JS code on lines 212-214)
should increment scoreNumbersP1 by at most one, and only if count_0a equals 8 and skipped_0a equals zero.
A similar increment is done for each number in paragraph 1, for a total of 10 different numbers (see lines 204-319 for paragraph 1).
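To make the overall pattern easier to see without reading all of lines 204-319, here is a condensed sketch of what I believe that block amounts to (the item ids, correct counts, and the loop are just for illustration; my actual code writes each check out separately):

var scoreNumbersP1 = 0;
// one entry per number in paragraph 1; the ids and correct counts below are placeholders
var items = [ { id: "0a", correct: 8 }, { id: "0b", correct: 5 } /* ... 8 more items ... */ ];
items.forEach(function (item) {
    var count = parseInt(document.getElementById(item.id).value, 10);
    // my real code also checks the corresponding skipped_* variable
    if (count === item.correct) {
        scoreNumbersP1 += 1;
    }
});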
The resulting score is then saved to the database with
setValue('scoreNumbersP1', scoreNumbersP1);
(in the JS code on line 315)
Therefore, scoreNumbersP1 should range from 0 to 10. But for some participants, the system records a value of, for example, 24 or 30.
The weirdest thing is that this problem does not always happen, and it does not happen for every participant.
A similar thing seems to have happened on stage 4, where I increment attention checks in the final LIONESS button on that stage, with, for example:
if(attentionSurvey1==6) {attentionSurvey += 1;}
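Again purely as an illustration (everything beyond attentionSurvey1 and the value 6 is hypothetical), the counting is of this kind:

var attentionSurvey = 0;
if(attentionSurvey1==6) {attentionSurvey += 1;}
if(attentionSurvey2==3) {attentionSurvey += 1;}   // hypothetical second check
// ... one such line per attention-check item
setValue('attentionSurvey', attentionSurvey);   // should never exceed the number of checks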
Could it be that incrementing like this causes problems in some browsers, or for users with specific system settings, perhaps browser add-ons or the like?
There is yet another bug.
In the experiment, on stage 4 (the survey), I allow participants to continue only after they have responded to all survey items (Likert scales).
The system automatically makes a button appear (conditional display) once participants have selected a value for all items. This does not always work. For some participants (and these are not the same participants who had problems with the score increment), the button simply never appears, even though they replied to all currently shown questions in the survey.
For some participants it worked and they were able to continue and finish the study; for others it did not.
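To show the kind of behaviour I mean, here is a generic sketch of the check that should make the button appear (this is not the actual LIONESS code; the radio-group names and button id are hypothetical):

function allItemsAnswered() {
    var itemNames = ['likert1', 'likert2', 'likert3'];   // hypothetical Likert radio-group names
    return itemNames.every(function (name) {
        return document.querySelector('input[name="' + name + '"]:checked') !== null;
    });
}
if (allItemsAnswered()) {
    document.getElementById('nextButton').style.display = 'block';   // hypothetical button id
}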
What is happening here? It seems like something is making the LIONESS JS code unreliable for some participants. What could it be?
I hope you have the time to look into this and suggest any fixes. Many thanks for your efforts!
Best regards
Thomas