AWS & GDPR
I had a quick chat with a Dutch data steward. The situation seems less dire than I expected: AWS would require a data processing agreement with Amazon and a mention of the Amazon server in the informed consent. Quite similar to what Becky thought, actually.
Below is an extensive reflection; for a TL;DR, skip to the heading on end-to-end testing :)
PsychoJS, jsPsych, lab.js, OSWeb, Labvanced on the client side | Pavlovia, Pushkin, JATOS, Gorilla on the server side
There are quite a lot of systems out there with similar functionality: client-side web apps for cognitive tasks, plus some questionnaire functionality. I call them cognitive task libraries. Those listed above are all open source, so they could, in principle, be plugged into a server-side app (such as Pavlovia, Pushkin, JATOS, or Gorilla). Most are affiliated with a particular server-side app, but jsPsych is unique in that it's not affiliated with any one system. Pavlovia, in turn, stands out by supporting many different cognitive task libraries (PsychoJS, jsPsych, lab.js, and OSWeb).

Pavlovia's revenue model also differs from Pushkin's: while Pushkin is financed via sub-contracting, Pavlovia is Software-as-a-Service (clients pay per participant), and part of the profit is donated to whichever task library was used for administering the task. This way Pavlovia can give back to the open source community. The same revenue model finances part of the PsychoPy and PsychoJS development.
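To make that client/server split concrete, here is a minimal sketch of a client-side task library plugged into a server-side host, assuming jsPsych 7 running inside JATOS. The `jatos.onLoad` and `jatos.submitResultData` calls are part of the standard JATOS study API; the trial itself is a hypothetical placeholder, not code from any of the systems above.

```js
// Minimal sketch: a client-side task library (jsPsych 7) hosted by a
// server-side app (JATOS). `jatos` is a global provided by jatos.js,
// which the JATOS-served page loads.
import { initJsPsych } from 'jspsych';
import htmlKeyboardResponse from '@jspsych/plugin-html-keyboard-response';

const jsPsych = initJsPsych({
  // When the timeline finishes, hand the collected data to the server-side app.
  on_finish: () => jatos.submitResultData(jsPsych.data.get().json()),
});

// Hypothetical placeholder trial.
const timeline = [
  {
    type: htmlKeyboardResponse,
    stimulus: 'Press any key to respond.',
  },
];

// JATOS signals when the hosting page is ready; only then start the task.
jatos.onLoad(() => jsPsych.run(timeline));
```

The division of labor is the point here: the task library handles presentation and response collection in the browser, while the server-side app takes care of hosting, participant management, and storing the submitted data.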
Challenges to collaboration
I see three challenges:
- Developers don't want to give up their revenue models. For example, if PsychoJS were made compatible with Pushkin, it would lose revenue via Pavlovia (presently the only system that supports PsychoJS).
- Developers don't want to give up their systems. Even if someone else builds something superior and you don't have a revenue model, it's still hard to let go of your baby, so to speak.
- Researchers don't want to switch unless they have to. We developers like to adopt new technologies, but most researchers aren't that tech-savvy, nor do they have the incentive. My own experience underlines this: I used to maintain my own cognitive task library, but I actively discontinued it because other systems had become so much better (and I wanted to support those). It took quite a long time (and me reiterating that support had ended) before researchers stopped using my system.
Where could we collaborate?
I'd like to sidestep the challenges above by exploring ways of collaborating that are external to the software itself: for instance, on output formats, and perhaps, in time, on particular modules that we all need. In the short term, end-to-end (e2e) testing could be a nice one, because I'm already working on that.
End-to-end testing
I'm financed by a grant from the Chan Zuckerberg Initiative to develop open source software and improve the stability of PsychoJS and Pavlovia. To this end, I'm setting up a system for automated end-to-end testing. It's based on the WebDriver/Appium standards, written with WebdriverIO, and runs via BrowserStack. Software for online experiments has quite a few idiosyncrasies compared to a standard website, and these are reflected in the software I've built so far. It's quite nicely documented and free to use by any open source group that's involved in online experiments. I shared it with Josh (jsPsych) and Felix (lab.js). I'll share it with you in a moment.
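To give a flavor of what such a test looks like, below is a minimal WebdriverIO sketch, not the actual suite: the URL, selector, and completion text are hypothetical placeholders. It also illustrates one of those idiosyncrasies: experiments built with a library like PsychoJS render into a canvas only once all resources have loaded, so the test waits for the canvas rather than for mere page readiness.

```js
// Minimal WebdriverIO spec sketch for an online experiment.
// The URL, key press, and completion text are hypothetical placeholders.
describe('experiment smoke test', () => {
  it('loads the experiment and completes a trial', async () => {
    // Open the (hypothetical) experiment page.
    await browser.url('https://example.org/my-experiment/index.html');

    // Online experiments typically draw into a canvas after all resources
    // have loaded, so wait for the canvas instead of document readiness.
    const canvas = await $('canvas');
    await canvas.waitForDisplayed({ timeout: 30000 });

    // Respond to the first trial with a key press.
    await browser.keys(' ');

    // A completion message signals that the run finished.
    const done = await $('*=Thank you');
    await done.waitForDisplayed({ timeout: 30000 });
  });
});
```

With WebdriverIO, running the same spec across many browser/OS combinations is then mostly a matter of listing BrowserStack capabilities in the configuration file, rather than changing the test itself.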