# Research resources

Jarrett Ye edited this page Feb 21, 2024 · 13 revisions

## Datasets

### MaiMemo

- https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/VAGUL0

### Duolingo

- https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/N8XJME

### Anki

- https://github.com/open-spaced-repetition/fsrs-benchmark

### SuperMemo

- https://github.com/open-spaced-repetition/fsrs-vs-sm15
- https://github.com/open-spaced-repetition/fsrs-vs-sm17

### Mnemosyne

- https://www.dropbox.com/sh/epx7hzezh1ok6qe/AAAh6rUIVvyt7TRmlyxuaOUMa/data

## Code

### FSRS

- https://github.com/open-spaced-repetition/short-term-memory-research
- https://github.com/open-spaced-repetition/heterogeneous-memory-research

### Ebisu

- https://github.com/fasiha/ebisu

### MaiMemo

- https://github.com/maimemo/SSP-MMC
- https://github.com/maimemo/SSP-MMC-Plus

### Duolingo

- https://github.com/duolingo/halflife-regression

### Memorize

- https://github.com/Networks-Learning/memorize

### Deep Tutor

- https://github.com/rddy/deeptutor

### Leitner Queue Network

- https://github.com/rddy/leitnerq

## Papers

Su, J., Ye, J., Nie, L., Cao, Y., & Chen, Y. (2023). Optimizing Spaced Repetition Schedule by Capturing the Dynamics of Memory. IEEE Transactions on Knowledge and Data Engineering, 1–13. https://doi.org/10.1109/TKDE.2023.3251721

Ye, J., Su, J., & Cao, Y. (2022). A Stochastic Shortest Path Algorithm for Optimizing Spaced Repetition Scheduling. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4381–4390. https://doi.org/10.1145/3534678.3539081

Upadhyay, U., Lancashire, G., Moser, C., & Gomez-Rodriguez, M. (2021). Large-scale randomized experiments reveals that machine learning-based instruction helps people memorize more effectively. Npj Science of Learning, 6(1), Article 1. https://doi.org/10.1038/s41539-021-00105-8

Nioche, A., Murena, P.-A., de la Torre-Ortiz, C., & Oulasvirta, A. (2021). Improving Artificial Teachers by Considering How People Learn and Forget. 26th International Conference on Intelligent User Interfaces, 445–453. https://doi.org/10.1145/3397481.3450696

Randazzo, G. (2020-21). Memory Models for Spaced Repetition Systems (Master's thesis in Mathematical Engineering, Politecnico di Milano). Advisor: Marco D. Santambrogio. Retrieved from https://hdl.handle.net/10589/186407

Zaidi, A., Caines, A., Moore, R., Buttery, P., & Rice, A. (2020). Adaptive Forgetting Curves for Spaced Repetition Language Learning. In I. I. Bittencourt, M. Cukurova, K. Muldner, R. Luckin, & E. Millán (Eds.), Artificial Intelligence in Education (pp. 358–363). Springer International Publishing. https://doi.org/10.1007/978-3-030-52240-7_65

Yang, Z., Shen, J., Liu, Y., Yang, Y., Zhang, W., & Yu, Y. (2020). TADS: Learning time-aware scheduling policy with dyna-style planning for spaced repetition. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 1917–1920. https://doi.org/10.1145/3397271.3401316

Aydin, R., Klein, L., Miribel, A., & West, R. (2020). Broccoli: Sprinkling Lightweight Vocabulary Learning into Everyday Information Diets. Proceedings of The Web Conference 2020, 1344–1354. https://doi.org/10.1145/3366423.3380209

Tabibian, B., Upadhyay, U., De, A., Zarezade, A., Schölkopf, B., & Gomez-Rodriguez, M. (2019). Enhancing human learning via spaced repetition optimization. Proceedings of the National Academy of Sciences, 116(10), 3988–3993. https://doi.org/10.1073/pnas.1815156116

Sinha, S. (2019). Using deep reinforcement learning for personalizing review sessions on e-learning platforms with spaced repetition (2019:217; p. 82). KTH, School of Electrical Engineering and Computer Science (EECS). https://api.semanticscholar.org/CorpusID:196203887

Hunziker, A., Chen, Y., Aodha, O. M., Rodriguez, M. G., Krause, A., Perona, P., Yue, Y., & Singla, A. (2019). Teaching multiple concepts to a forgetful learner. In H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. B. Fox, & R. Garnett (Eds.), Advances in neural information processing systems 32: Annual conference on neural information processing systems 2019, NeurIPS 2019, december 8-14, 2019, vancouver, BC, canada (pp. 4050–4060). https://proceedings.neurips.cc/paper/2019/hash/2952351097998ac1240cb2ab7333a3d2-Abstract.html

Choffin, B., Popineau, F., Bourda, Y., & Vie, J.-J. (2019). DAS3H: Modeling Student Learning and Forgetting for Optimally Scheduling Distributed Practice of Skills. ArXiv:1905.06873 [Cs, Stat]. http://arxiv.org/abs/1905.06873

Upadhyay, U., De, A., & Gomez Rodriguez, M. (2018). Deep Reinforcement Learning of Marked Temporal Point Processes. Advances in Neural Information Processing Systems, 31. https://papers.nips.cc/paper/2018/hash/71a58e8cb75904f24cde464161c3e766-Abstract.html

Reddy, S., Levine, S., & Dragan, A. (2017). Accelerating Human Learning with Deep Reinforcement Learning. University of California, Berkeley, 9. https://api.semanticscholar.org/CorpusID:44144165

Settles, B., & Meeder, B. (2016). A Trainable Spaced Repetition Model for Language Learning. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1848–1858. https://doi.org/10.18653/v1/P16-1174

Reddy, S., Labutov, I., Banerjee, S., & Joachims, T. (2016). Unbounded human learning: Optimal scheduling for spaced repetition. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1815–1824. https://doi.org/10.1145/2939672.2939850

Jones, M. N. (Ed.). (2016). Predicting and Improving Memory Retention: Psychological Theory Matters in the Big Data Era. In Big Data in Cognitive Science (0 ed., pp. 43–73). Psychology Press. https://doi.org/10.4324/9781315413570-8

Lindsey, R. (2014). Probabilistic models of student learning and forgetting (Doctoral dissertation, University of Colorado at Boulder).

Pashler, H., Cepeda, N., Lindsey, R. V., Vul, E., & Mozer, M. C. (2009). Predicting the optimal spacing of study: A multiscale context model of memory. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, & A. Culotta (Eds.), Advances in neural information processing systems (Vol. 22). Curran Associates, Inc. https://proceedings.neurips.cc/paper/2009/file/6bc24fc1ab650b25b4114e93a98f1eba-Paper.pdf

Pavlik, P. I., & Anderson, J. R. (2008). Using a model to compute the optimal schedule of practice. Journal of Experimental Psychology: Applied, 14(2), 101–117. https://doi.org/10.1037/1076-898X.14.2.101

Pavlik, P. I., & Anderson, J. R. (2005). Practice and Forgetting Effects on Vocabulary Memory: An Activation-Based Model of the Spacing Effect. Cognitive Science, 29(4), 559–586. https://doi.org/10.1207/s15516709cog0000_14

Woźniak, P. A., Gorzelańczyk, E. J., & Murakowski, J. A. (1995). Two components of long-term memory. Acta neurobiologiae experimentalis, 55(4), 301–305. https://pubmed.ncbi.nlm.nih.gov/8713361/

Woźniak, P. A., & Gorzelańczyk, E. J. (1994). Optimization of repetition spacing in the practice of learning. Acta neurobiologiae experimentalis, 54(1), 59–62. https://pubmed.ncbi.nlm.nih.gov/8023714/