Not all exercises have exactly one right answer: there could be a few different ways to arrive at the same result, or the exercise prompt could be open-ended and ask the student to pick from a selection of possibilities. In either case, and with minimal changes to your learnr tutorial, gradethis can accommodate exercises with multiple solutions!
To signal that your exercise has multiple solutions, write all of the solutions in the exercise's `*-solution` chunk, separating the solutions with code headers: a comment followed by at least four dashes.

```r
# code header ----
```
You can think of the code headers as labels for each solution variant. Here's an example `*-solution` chunk for an exercise with two possible solutions (the full exercise is presented below):
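```{r average-solution}
# mean ----
mean(fibonacci)

# median ----
median(fibonacci)
```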
There are two types of exercises with multiple solutions:

1. **Multiple solutions, same result.** When the solutions describe multiple ways of reaching the same final result, gradethis provides code feedback hints based on the solution closest to the user's code. In most cases, you'll only need to write out the solutions in the `*-solution` chunk, and gradethis will take care of the rest. Learn more in Multiple solutions, same result below.

2. **Multiple solutions, different results.** Other exercises have more than one acceptable result. In these exercises, multiple solutions can be used both for grading students' results and for providing code feedback. In addition to writing out the solutions in the `*-solution` chunk, you may also need to modify your grading code slightly. Learn more in Multiple solutions, different results below.
In both cases, multiple solutions are written in the same way in the `*-solution` chunk, and minimal changes are needed in your grading code. For advanced grading requirements, gradethis adds two objects to the grading environment: `.solution_code_all` and `.solution_all`. We'll cover both straightforward and advanced uses in the examples that follow.
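As a quick preview, here's a minimal sketch of a `*-check` chunk that relies on both objects (the full worked examples follow below; the passing message is a placeholder):

```r
grade_this({
  # .solution_all holds each solution's evaluated result (an environment)
  pass_if_equal(.solution_all, "Correct!")

  # .solution_code_all holds each solution's code (a named list, keyed by
  # code header label), which powers code feedback hints like fail(hint = TRUE)
  fail(hint = TRUE)
})
```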
## Multiple solutions, same result

If all your solutions have the same result, you might not need to do anything special in your grading code, as we'll see in the next example.
In this example, students are asked to convert a numeric value to a whole number. The most likely solution is to use the `round()` function, but a creative student might call `floor()` or `as.integer()`.

Notice that we include all three variations in the `round-solution` chunk below, but we've placed the answer we consider "most correct" as the last solution in the chunk. The last solution in the chunk will be the default solution used by gradethis when the student's code is not a good match for any of the solutions.
```{r setup}
library(learnr)
library(gradethis)
```
Convert 5.2 to a reasonable whole number.
```{r round, exercise = TRUE}
____(5.2)
```
```{r round-solution}
# floor ----
floor(5.2)

# as.integer ----
as.integer(5.2)

# round ----
round(5.2)
```
```{r round-check}
grade_this({
  # provide code hints if the submission is wrong
  fail_if_code_feedback()

  # pass if the value is correct
  pass_if_equal()

  # or fall back to a failing grade
  fail(hint = TRUE)
})
```
If a student enters any of the correct answers to arrive at the value `5`, they receive positive feedback and encouragement.

> Fantastic! Correct!
If a student gives an incorrect answer using `round()`, say `round(5.2, digits = 1)`, the code feedback hint provided by `fail(hint = TRUE)` will nudge them towards the correct `round()` solution:

> I did not expect your call to `round()` to include `digits = 1`. You may have included an unnecessary argument, or you may have left out or misspelled an important argument name. Give it another try.
And if a student gives an incorrect answer but their code is closer to one of the other solutions, the code feedback will nudge the student toward that solution.

> I expected you to call `floor()` where you called `flor()`. Try it again. I have a good feeling about this.

> I expected you to call `as.integer()` where you called `as.numeric()`. Don't give up now, try it one more time.
Grading functions that use solution code, like `fail_if_code_feedback()`, `code_feedback()`, or `fail(hint = TRUE)`, will automatically give feedback based on the solution closest to the student's input.

## Giving feedback for a specific solution

In some cases, you may want to accept multiple solutions but give different feedback depending on the solution the student chose.
To enable advanced custom grading, gradethis makes all of the solutions to an exercise available to grading code as `.solution_code_all`, a named list containing the code for each solution.
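For instance, in the rounding exercise above, `.solution_code_all` contains one entry per labeled solution. Here's a sketch of its contents, written out as the equivalent named list:

```r
list(
  floor      = "floor(5.2)",
  as.integer = "as.integer(5.2)",
  round      = "round(5.2)"
)
```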
You can use code like the following template to give a special message (`<message>`) for user input that matches a particular solution, identified by the label used in the code header above that solution (`<solution_label>`):

```r
correctly_used_solution <-
  is.null(code_feedback(.user_code, .solution_code_all[["<solution_label>"]]))

pass_if(
  correctly_used_solution,
  message = "<message>"
)
```
Some things of note in this template:

- You can obtain the code for a specific solution using the label in its code header: e.g. `.solution_code_all[["floor"]]` in our example would return `"floor(5.2)"`.

- You need to explicitly ask for `code_feedback()` for the solution of interest. Otherwise, by default, `code_feedback()` will compare the user's code against the closest match among the set of solutions.

- `code_feedback()` returns `NULL` when `user_code` matches `solution_code` (see the sketch after this list). Testing that the feedback `is.null()` is a robust way to check that the student submitted code matching a solution, because it ignores minor differences like whitespace and pipe usage.
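As a rough illustration of that last point, calling `code_feedback()` directly shows the `NULL`-on-match behavior. This is a sketch, with the mismatch feedback paraphrased from the message shown earlier:

```r
library(gradethis)

# Matching code: there is no feedback to give, so the result is NULL
code_feedback("floor(5.2)", "floor(5.2)")
#> NULL

# Mismatched code: returns a feedback message along the lines of
# "I did not expect your call to round() to include digits = 1. ..."
code_feedback("round(5.2, digits = 1)", "round(5.2)")
```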
Using the template above, we can provide feedback specifically to students who use the `floor()` solution.
```{r round-check}
grade_this({
  correctly_used_floor <-
    is.null(code_feedback(.user_code, .solution_code_all[["floor"]]))

  pass_if(
    correctly_used_floor,
    "Correct, but remember that `floor()` always rounds down.
     If you want to round to the nearest whole number, `round()` is usually safer."
  )

  pass_if_equal()
  fail(hint = TRUE)
})
```
> Impressive work! Correct, but remember that `floor()` always rounds down. If you want to round to the nearest whole number, `round()` is usually safer.
## Multiple solutions, different results

If your exercise has multiple acceptable results, you'll need to slightly adapt your grading code to ensure students' inputs are tested against all acceptable results.
To compare the result of the student's input to each solution, use `.solution_all` as the first argument of `pass_if_equal()`. The results of all the solutions are saved in an environment called `.solution_all`. (Because `.solution_all` is an environment, it requires some special handling; see Working with `.solution_all` below for more details.)
Within `pass_if_equal()` messages, you can use the object `.solution_label`, which corresponds to the code header of the solution that the student's code matched.
Functions that grade students’ code can still be used without any changes.
```{r setup}
library(learnr)
library(gradethis)
fibonacci <- c(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)
```
Use `mean()` or `median()` to find the average of `fibonacci`, a vector containing the first 10 numbers in the Fibonacci sequence.
```{r average, exercise = TRUE}
____(fibonacci)
```
```{r average-solution}
# mean ----
mean(fibonacci)

# median ----
median(fibonacci)
```
```{r average-check}
grade_this({
  pass_if_equal(.solution_all, "You solved it with `{.solution_label}()`!")
  fail(hint = TRUE)
})
```
If a student gives either correct solution, `pass_if_equal()` will give them a passing grade:

> Super job! You solved it with `mean()`!

> Bravo! You solved it with `median()`!
If the student's input doesn't match any of the solutions, grading will proceed as normal:

> I expected you to call `mean()` where you called `sum()`. Don't give up now, try it one more time.
## Working with `.solution_all`

`.solution_all` is an environment, not a list. This carries certain advantages: exercises can be graded faster, because each solution can be evaluated as needed rather than all up front.

But it also comes with drawbacks, the most important being that if you need to access the results of specific solutions, you must give a unique label to every solution.
To reference a specific solution result in grading code, use the solution label to access the result in `.solution_all`:

```r
grade_this({
  pass_if_equal(
    .solution_all[["mean"]],
    "That's right! Do you think the mean is skewed?"
  )

  pass_if_equal(.solution_all)
  fail_if_code_feedback()
  fail()
})
```
If you don't give your solutions unique names, gradethis will generate unique names for you. For example, these solutions

```r
# mean ----
mean(fibonacci)

# mean ----
mean(fibonacci, na.rm = TRUE)

# median ----
median(fibonacci)

# median ----
median(fibonacci, na.rm = TRUE)
```

have these names:

```r
#> [1] "mean"     "mean_1"   "median"   "median_1"
```
This means that if you need to access specific solution results in your grading code, it's almost always better to give your solutions unique labels. On the other hand, if your passing message makes use of `.solution_label`, you may want to duplicate the solution labels.
Recall that the `*-check` chunk for the `average` exercise was:
```{r average-check}
grade_this({
  pass_if_equal(.solution_all, "You solved it with `{.solution_label}()`!")
  fail(hint = TRUE)
})
```
By including the `mean()` and `median()` solution variants with `na.rm = TRUE`, we get an appropriate passing message even though the solution labels are duplicated:

> Delightful! You solved it with `median()`!