
Adds a hook for specifying an alternative to learnr:::evaluate_exercise() #386

Closed
wants to merge 2 commits

Conversation

dtkaplan

Motivation

Per issue #356, this proposed change adds a way to replace the built-in exercise code evaluator (learnr:::evaluate_exercise) with one of the tutorial author's choosing. This is accomplished by adding a global option, learnr.alt.evaluator, that can be set to a string naming the evaluator to be used. By default (that is, when learnr.alt.evaluator is NULL), the built-in code evaluator is used, so there is no change to the learnr code evaluation process.
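For illustration, registering the hook might look like this. The option name comes from this PR; my_evaluator is a hypothetical name for a function the tutorial author has defined and made visible to the tutorial (e.g., exported from a package loaded in the setup chunk):

```r
# Hypothetical usage: point the option at the author's evaluator by name.
options(learnr.alt.evaluator = "my_evaluator")

# Setting the option back to NULL restores the built-in
# learnr:::evaluate_exercise().
options(learnr.alt.evaluator = NULL)
```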

Being able to specify an alternative evaluator allows more seamless parse-time checking and makes it possible to evaluate the exercise code as part of the checking itself, rather than following learnr's approach of evaluating the code first and then handing the results off to the checker. See the sketch below for the general shape of such an evaluator.
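As a rough sketch of the idea (not part of this PR's diff): an alternative evaluator could run its own parse-time check and then delegate to the built-in evaluator. The argument list and return contract of learnr:::evaluate_exercise() are internal to the package, so both are assumptions here, as is the $code field on the exercise object:

```r
# Sketch of a wrapper evaluator (assumed signature; the real internal
# signature of learnr:::evaluate_exercise() may differ). It runs a
# parse-time check on the submitted code, then delegates to the
# built-in evaluator so the normal result object is still produced.
my_evaluator <- function(exercise, ...) {
  parsed <- tryCatch(
    parse(text = exercise$code),  # assumes the exercise object carries $code
    error = identity
  )
  if (inherits(parsed, "error")) {
    # A real checker could return tailored feedback here instead of
    # letting the raw parse error surface to the student.
    message("Parse-time check: ", conditionMessage(parsed))
  }
  learnr:::evaluate_exercise(exercise, ...)
}
```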

Structure of change

I attempted to minimize the extent of changes to the code base by using options() rather than tutorial_options() to set the name of the alternative evaluator. It would perhaps be more natural for the user to set the alternative via tutorial_options(), but that requires additional changes to the code base. (And, since tutorial_options() involves knitr hooks, I'm not confident that I would do it properly.)

I do not know enough about the new remote evaluators to determine whether this change affects them.

Impact of change

I am already using an alternative evaluator for a large set of tutorials I am developing. The evaluator is published as a package at github.com/dtkaplan/learnrAlt. The evaluator is compatible with the facilities of the gradethis package. I cannot predict whether there will be many people who want to write an alternative evaluator, but mine can serve as a framework for others to work on parse-time checkers or alternative graders. @garrettgman may have some opinion about this.

Pull Request

  • Add an entry to NEWS concisely describing what you changed. DONE.

  • Add unit tests in the tests/testthat directory. NOT DONE, but see the minimal reproducible example below.

  • Run Build -> Check Package in the RStudio IDE, or devtools::check(), to make sure your change did not add any messages, warnings, or errors. DONE.

Minimal reproducible example

See the attached learnr Rmd file, which contains a trivial alternative evaluator. For a complete working evaluator, see the github.com/dtkaplan/learnrAlt repository.
alt-eval.Rmd.zip
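For orientation, a setup chunk along these lines (a hypothetical sketch, not necessarily what the attached file contains) is all that is needed to activate the hook inside a tutorial:

```r
# In the tutorial's setup chunk (hypothetical sketch):
library(learnr)

# my_evaluator as sketched above; the option takes the function's
# name as a string.
options(learnr.alt.evaluator = "my_evaluator")
```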

@cpsievert
Contributor

Thanks for submitting this PR @dtkaplan. Instead of providing an alternative evaluator, we'll likely provide a custom checking function that runs on errors in #403.

cpsievert closed this Jul 21, 2020