I read the paper a bit quickly, so I could have missed something. But my takeaway is that they are essentially advocating the view that LMS data are pointless and not worth the effort to gather. Or, put less charitably, my interpretation of their work is: "We think the effort of collecting LMS data is not worthwhile, because we couldn't improve our pointless models used for a pointless use case."
Seriously, what would you do with the result of such a study, regardless of the outcome? Consider:
* We've known for decades that we can predict student performance "pretty well" from simple indicators like past grades.
* Given how well those simple indicators already work, it's a tall order to do better with any other method.
* For decades, educators have tried to act on that information with some form of intervention for the "high-risk" individuals the models identify.
* Obviously that hasn't worked... as evidenced by the fact that the models give pretty much the same results even with the interventions in place! (After all, this group couldn't have collected "clean" data untouched by ongoing interventions of this sort; whatever effect those interventions have is already baked into the outcomes being predicted.)
* I found it hard to suss out, from their data, the probability of correctly predicting a given case. A lot of their measures are more abstract, like "c-statistics" (which measure ranking, not percent-correct; see the first sketch after this list). But greatly oversimplifying, it looks to me as though one can correctly predict, about 85% of the time, whether someone will "struggle" (meaning not succeed in the next class).
* OK, so now what do you do with that? Well, one example is to impose a policy like requiring a certain grade to continue, which academic programs generally do already. One might look at the data and decide to tweak the threshold grade. Perhaps an 85% failure rate for people who got a C is too high, and we should require a C+? (Not an arbitrary example -- we require a grade better than C- to continue past a prereq course.) Keep in mind that anything you do here is probabilistic: we only know "mostly" what will happen, not perfectly (the second sketch after this list works through exactly this trade-off).
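For anyone else puzzled by the metric: a c-statistic (equivalently, the AUC) is the probability that a randomly chosen struggling student is scored as higher-risk than a randomly chosen non-struggling one. It measures ranking, not "percent predicted correctly". A minimal sketch of the difference, with entirely made-up scores:

```python
# Made-up risk scores, purely illustrative: the c-statistic measures how well
# the model *ranks* cases; accuracy only exists once you pick a threshold.

def c_statistic(scores, labels):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(scores, labels, threshold):
    """Percent predicted correctly at one particular decision threshold."""
    return sum((s >= threshold) == y for s, y in zip(scores, labels)) / len(labels)

labels = [1, 1, 1, 0, 0, 0, 0, 0]                   # 1 = struggled in the next class
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.2, 0.1]   # hypothetical model risk scores

print(c_statistic(scores, labels))     # ~0.93: the model ranks cases well
print(accuracy(scores, labels, 0.5))   # 0.75  at threshold 0.5
print(accuracy(scores, labels, 0.35))  # 0.875 at threshold 0.35: same model, same c-statistic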
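And here is the threshold tweak as a toy calculation. Every number below is invented (I reused the hypothetical 85% failure rate for C students from above); the point is only that moving the cutoff trades prevented failures against blocked students who would have passed:

```python
# Invented numbers, purely to show the shape of the trade-off: per prereq
# grade, a hypothetical failure rate in the next course and a head count.
cohort = {
    "A":  (0.05, 120),
    "B":  (0.15, 150),
    "C+": (0.45,  60),
    "C":  (0.85,  50),
    "C-": (0.95,  30),
}
GRADE_ORDER = ["C-", "C", "C+", "B", "A"]  # worst to best

def evaluate_cutoff(min_grade):
    """Block everyone below min_grade; report the expected cost on both sides."""
    allowed = set(GRADE_ORDER[GRADE_ORDER.index(min_grade):])
    blocked_ok = sum((1 - r) * n for g, (r, n) in cohort.items() if g not in allowed)
    expected_failures = sum(r * n for g, (r, n) in cohort.items() if g in allowed)
    return blocked_ok, expected_failures

for cutoff in ["C", "C+"]:
    blocked_ok, fails = evaluate_cutoff(cutoff)
    print(f"require {cutoff}: ~{fails:.0f} allowed-through students still fail, "
          f"~{blocked_ok:.0f} blocked students would have passed")
```

With these made-up numbers, raising the bar from C to C+ prevents roughly forty failures but also blocks several students who would have succeeded. That's the probabilistic bind: every cutoff is wrong about somebody.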
None of this strikes me as being on the right track. Ultimately, static predictive models don't get us anywhere. If we really want to effect change, then we have to dynamically, when-it-happens, identify that someone IS JUST STARTING to struggle, and we have to come up with an effective intervention RIGHT THEN that changes their trajectory.
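For what it's worth, here is a toy sketch (data and thresholds entirely invented, not from the paper) of what "catch someone as they start to struggle" might look like operationally: compare each student's weekly LMS activity to their own recent baseline and flag a sharp drop the week it happens.

```python
from statistics import mean

def weekly_flags(weekly_events, baseline_weeks=3, drop_ratio=0.4):
    """Yield (week_index, flagged) pairs.

    A week is flagged when activity falls below `drop_ratio` times the mean
    of that same student's previous `baseline_weeks` weeks.
    """
    for week in range(baseline_weeks, len(weekly_events)):
        baseline = mean(weekly_events[week - baseline_weeks:week])
        flagged = baseline > 0 and weekly_events[week] < drop_ratio * baseline
        yield week, flagged

# Hypothetical: counts of LMS events (logins, page views, submissions) per week.
student = [42, 38, 45, 40, 12, 8]  # activity collapses in week 4

for week, flagged in weekly_flags(student):
    print(f"week {week}: {'FLAG - check in now' if flagged else 'ok'}")
```

The point isn't this particular heuristic; it's that the signal is evaluated continuously, while there is still time to intervene, instead of producing one static prediction at the start of term.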
I am not necessarily saying that LMS data are the solution to that (though I have difficulty imagining, in a large class, any sort of monitoring succeeding without LMS data). But I am saying that the work in this paper doesn't appear to me to have anything to do with that.