New SRS algorithm based on optimal stochastic control theory: Tabibian et al 2019


Gwern Branwen

Feb 11, 2019, 3:34:57 PM2/11/19
to Mnemosyne mailing list
"Enhancing human learning via spaced repetition optimization",
Tabibian et al 2019:
https://www.pnas.org/content/early/2019/01/18/1815156116

> Understanding human memory has been a long-standing problem in various scientific disciplines. Early works focused on characterizing human memory using small-scale controlled experiments and these empirical studies later motivated the design of spaced repetition algorithms for efficient memorization. However, current spaced repetition algorithms are rule-based heuristics with hard-coded parameters, which do not leverage the automated fine-grained monitoring and greater degree of control offered by modern online learning platforms. In this work, we develop a computational framework to derive optimal spaced repetition algorithms, specially designed to adapt to the learners’ performance. A large-scale natural experiment using data from a popular language-learning online platform provides empirical evidence that the spaced repetition algorithms derived using our framework are significantly superior to alternatives.

More popularized overview: http://learning.mpi-sws.org/memorize/

Dataset/code: https://github.com/duolingo/halflife-regression
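
For anyone who wants to poke at that repo, the underlying recall model
(half-life regression) is simple enough to sketch in a few lines of
Python; this is just my paraphrase of the 2^(-delta/h) form, not the
repo's training code, and the weights/features below are made up for
illustration:

    import math

    def predicted_half_life(weights, features):
        # Half-life regression models the memory half-life (in days)
        # as 2^(theta . x) for a feature vector x.
        return 2.0 ** sum(w * x for w, x in zip(weights, features))

    def recall_probability(delta_days, half_life_days):
        # Exponential forgetting curve: p = 2^(-delta / h).
        return 2.0 ** (-delta_days / half_life_days)

    # Toy example; the real model uses features like a bias term,
    # sqrt(#correct), sqrt(#incorrect), and per-lexeme tags.
    h = predicted_half_life([2.0, 0.5, -0.3], [1.0, math.sqrt(4), math.sqrt(1)])
    p = recall_probability(3.0, h)
    print(h, p)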

---------

It's unclear to me whether this is superior in practice to any of
SuperMemo/Anki/Mnemosyne's current algorithms, since they don't
compare against them directly (only against a uniform strawman
baseline and a 'threshold' heuristic of unclear origin) or test it on
real-world users. They are proud of their optimality guarantee, but of
course that's only optimality under a specific set of assumptions and
within a specific class of algorithms, such as being limited to
stochastic scheduling. (There may be other limits too, like being
efficient asymptotically rather than at all time-scales.)

Nevertheless, it's cool that the result is *so* simple, and control
theory is a very rich mathematical area, so more realistic optimal
algorithms can probably be devised. (And the topic of 'point
processes' is relevant to my interest in various kinds of 'anti'
spaced repetition, for note-reviewing or movie-watching, which
I've mentioned before.)
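
To give a sense of how simple the result is: as I read the paper, the
optimal reviewing intensity is proportional to the current probability
of having forgotten the item, so the next review time can be sampled
by thinning a Poisson process. A rough Python sketch of that sampling
rule as I understand it (my paraphrase, not their released code; 'q'
is their review-cost trade-off parameter and 'n' the item's current
forgetting rate):

    import math, random

    def sample_next_review(n, q, t_max):
        # Thinning: the intensity u(t) = (1/sqrt(q)) * (1 - exp(-n*t))
        # is bounded by 1/sqrt(q), so propose candidate times from that
        # rate and accept each with probability u(t) / (1/sqrt(q)).
        max_intensity = 1.0 / math.sqrt(q)
        t = 0.0
        while True:
            t += random.expovariate(max_intensity)
            if t > t_max:
                return None                    # no review before t_max
            recall_prob = math.exp(-n * t)     # exponential forgetting curve
            if random.random() < (1.0 - recall_prob):
                return t

    print(sample_next_review(n=0.1, q=0.01, t_max=30.0))

Intuitively, right after a review the recall probability is ~1, so the
review intensity starts near 0 and ramps up toward 1/sqrt(q) as the
item is forgotten.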

--
gwern
https://www.gwern.net

Peter Bienstman

Feb 12, 2019, 12:03:50 AM2/12/19
to mnemosyne-...@googlegroups.com
Cool, thanks for unearthing this!

Peter




maugha...@gmail.com

May 22, 2019, 11:02:28 AM5/22/19
to mnemosyne-proj-users
Hey all! I've not posted here before, but I've been into SuperMemo and other spaced repetition algorithms for quite some time.

This study is great. So the gist looks like it's using an optimizing function with a feedback loop, where future guesses at the "rate of forgetting" are adjusted based on the success/failure of previous guesses?
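
Something like this toy update is how I picture the feedback loop
(just my reading of the paper's memory model, with made-up alpha/beta
values, not their actual code):

    def update_forgetting_rate(n, recalled, alpha=0.05, beta=0.2):
        # Shrink the forgetting rate after a successful recall, grow it
        # after a failure, so the estimated half-life adapts over reviews.
        return n * (1 - alpha) if recalled else n * (1 + beta)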

I'm with you in asking whether this approach could be compared to something like the Mnemosyne/SuperMemo family of algorithms rather than a uniform distribution, which really does seem like a straw man.

Am I correct in thinking that Mnemosyne does the same sort of adaptation, but relying on the user's "judgment of learning" (to use a Brainscape term) in place of an optimization function?

Peter Bienstman

May 23, 2019, 3:01:46 AM5/23/19
to mnemosyne-...@googlegroups.com
Hi,

I wouldn't call SM2 (Mnemosyne's algorithm) an optimisation algorithm; it's more of a heuristic...
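
For example, the core of SM2 is just a hard-coded interval/easiness
update, roughly like this (a simplified sketch, not Mnemosyne's actual
source):

    def sm2_review(quality, repetitions, interval_days, easiness):
        # quality: 0-5 grade; the constants below are the fixed SM-2 values.
        if quality < 3:
            return 0, 1, easiness          # lapse: restart the repetition cycle
        easiness = max(1.3, easiness + 0.1
                       - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        if repetitions == 0:
            interval_days = 1
        elif repetitions == 1:
            interval_days = 6
        else:
            interval_days = round(interval_days * easiness)
        return repetitions + 1, interval_days, easiness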

Cheers,

Peter