Course infra for use with Autolab. This README contains documentation for creating, testing, and maintaining your course's infrastructure.
You should fork this repo if you wish to use it as your own course's infrastructure. The rest of the instructions are written assuming you have already forked this repo.
- Install
- Usage
- Testing & Linting
- Lab Release Checklist
- Guide for Lab Authors
- Contribute
- Troubleshooting
- License
## Install

There is minimal setup required before you can develop with these labs.

First, do a deep clone (because we're using submodules):

```
git clone --recursive <your fork here>
```

Next, install the bask CLI, which is a simple task runner for Bash that we use. This can be as simple as

```
brew install jez/formulae/bask
```
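If you forgot `--recursive` when cloning, the submodules can still be fetched after the fact. A self-contained sketch (the throwaway repo below exists only so the snippet runs anywhere; in practice, run the last command inside your clone):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo in a throwaway repo; in practice run the last command inside your clone.
tmp="$(mktemp -d)"
git init -q "$tmp"
cd "$tmp"

# Fetch any submodules declared in .gitmodules (a no-op in this empty repo):
git submodule update --init --recursive
```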
## Usage

This section contains tips for how to do things TAs frequently need to do. You are encouraged to add to this section if you realize you are doing something over and over again that's not documented here.
- Each lab shares common infrastructure. Commands that work in one lab should work exactly the same in another lab.
- The `build/` folder in any lab can be deleted and rebuilt at any time.
- We're using a build tool called Bask; you may want to skim the usage documentation.
You can run `bask` to see the list of available tasks:

```
❯ bask
[00:00:00.000] Starting 'default'...
[00:00:00.000] Available tasks:
... snipped ...
```
You can play around in a sandbox containing the files as a student would see them (after unzipping) with the following command:

```
❯ bask sandbox
...
[00:00:00.000] Done staging file(s).
[00:00:00.000] See them at './build/sandbox'
...
```
You can get the same sandbox as above but pre-populated with the reference solution like this:

```
❯ bask sandbox_refsol
...
[00:00:00.000] Done staging file(s).
[00:00:00.000] See them at './build/sandbox-refsol'
...
```
Before you do anything, you should use the "Assessment Builder" on Autolab to create an "assessment" for your lab.
Once you've created it, use "Edit Assessment" to set some sensible defaults. (TODO: expand this section)
Next, you need to prepare all the required materials:
```
❯ bask autograder
...
[00:00:00.000] Done staging file(s).
[00:00:00.000] See them at './build/autolab'
...
```
Once you have all the staged files, you'll want to `scp` them to the lab directory on AFS. This is a folder named like

```
/afs/cs/academic/class/15131-SEMESTER/autolab/LABNAME
```

so,

```
❯ scp build/autolab/* andrew:/afs/cs/academic/class/15131-SEMESTER/autolab/LABNAME
```

where `SEMESTER` and `LABNAME` are replaced appropriately.
While `bask autograder` automatically handles creating the zipfile, if you want to create it yourself for some reason, you can:

```
❯ bask handout
...
[00:00:00.000] Done staging file(s).
[00:00:00.000] See them at './build/LABNAME-handout.zip'
...
```
Oftentimes you will need to debug a student solution, the driver, or the autograding environment. For this, see Manual Testing.
## Testing & Linting

We should strive to write and maintain quality labs. One of the best tools for accomplishing this is automated and manual testing.
The most basic form of automated checks we do are checks on correct Bash usage. For this we use a tool called shellcheck. Shellcheck tells you if you are using Bash incorrectly or in a dangerous way.
Shellcheck is automatically run every time you push a branch to GitHub. You will be able to see if Shellcheck passes or fails within each pull request that you make.
Shellcheck can also be run locally by invoking the CI build manually:

```
❯ ./support/ci-build.sh
...
Linting files with Shellcheck...
...
[OK] Lint checks passed.
...
```
Finally, you are encouraged to configure your editor to show errors from shellcheck inline. If you use Vim, you can do this by installing shellcheck on your laptop and adding the Syntastic Vim plugin.
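As an example of the kind of mistake Shellcheck catches, its SC2086 warning flags unquoted variable expansions, which silently word-split on whitespace:

```shell
#!/usr/bin/env bash
# SC2086: unquoted expansions are word-split on whitespace.
path="my file.txt"

count_args() { echo "$#"; }

count_args $path    # unquoted: word-splits into two arguments
count_args "$path"  # quoted: stays one argument
```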
In addition to checking Bash usage, we run automated functional tests† for all the labs.
These tests are not comprehensive; they only test a few submission types, and they only check that the autograder runs, not that it yields the correct score. This is enough for the time being, though we may want to consider expanding these checks in the future.
† "Functional tests" are different from "unit tests". Unit tests verify that code within a module works correctly. Functional tests verify that a piece of software meets its "functional requirements," which for us means that all the labs run correctly on Autolab.
### Manual Testing

Since our automated tests aren't comprehensive, we have a number of ways of manually testing our labs to ensure they work as well as to aid development.
The first way you can manually test a lab is to craft a `handin.zip` for that lab and simulate submitting it to Autolab.

- Use the sandbox (or the refsol sandbox) and its `Makefile` to create a `handin.zip` file
- Move that file to `build/handin.zip`
- Run `bask test_one`

This will capture the output from simulating a run in the autograder. You can then manually verify things like whether it got the right score. This is very useful as a debugging tool when writing a new lab.
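The steps above can be sketched as a small helper. This is only a sketch, not part of the repo: it assumes the sandbox `Makefile` exposes a `handin.zip` target and that you run it from a lab's root directory.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of the manual-testing steps above; run from a lab's root directory.
# Assumes the sandbox Makefile has a `handin.zip` target.
test_refsol_handin() {
  bask sandbox_refsol                           # stage the refsol sandbox
  (cd build/sandbox-refsol && make handin.zip)  # craft a handin
  mv build/sandbox-refsol/handin.zip build/handin.zip
  bask test_one                                 # simulate an autograder run
}
```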
Sometimes you suspect that the autograder is broken, and you need to debug what's up. For this, you can follow this recipe:

- Run `bask autograder` to get the autograder files
- Manually create a `handin.zip` file
- Move the `handin.zip` file to `build/autolab`
- Within `build/autolab/`, run `make -f autograde-Makefile`

These are the only steps required to run the autograder manually. This is the same thing that `bask test_one` does, but you have a little more control over the process, because the directory isn't deleted at the end.
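As a sketch, the recipe reads like this (the helper function is hypothetical; it assumes you've already crafted a `handin.zip` in the current directory):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper for the debugging recipe above. Unlike `bask test_one`,
# build/autolab/ is left intact afterwards for inspection.
run_autograder_manually() {
  bask autograder                                   # stage the autograder files
  mv handin.zip build/autolab/                      # hand-crafted handin
  (cd build/autolab && make -f autograde-Makefile)  # run the grader
}
```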
## Lab Release Checklist

Before releasing any lab, you should complete the following checklist.
- Install the lab to Autolab
- Verify that you can download the lab handout through Autolab
- Test that the autograder works on Autolab:
    - Use the sandbox to create an incorrect `handin.zip` and submit it
        - Verify that the autograder runs and reports a non-perfect score
    - Use the refsol sandbox to create a correct `handin.zip` and submit it
        - Make sure it gets a perfect score
## Guide for Lab Authors

Understand the contents of this section before starting to write your own lab.
First off, here's a breakdown of the folder structure of a lab:

```
myexamplelab
├── build/
│   └── ...
└── src/
    ├── dist/
    │   ├── driver/
    │   └── ...
    ├── driver-private/
    ├── driver-public/
    ├── refsol/
    ├── Baskfile -> ../shared/Baskfile
    ├── README.md
    └── config
```
At the very top level, there are two files:

- `config` - Declare lab-specific config here (like required handin files).
- `Baskfile` - A symlink to the shared Baskfile (see above for some useful bask targets).
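What goes in `config` depends on what the shared Baskfile reads; purely as a hypothetical sketch (every key below is invented), it might look like:

```shell
# Hypothetical config sketch -- the real keys are defined by the shared Baskfile.
LABNAME="myexamplelab"
REQUIRED_FILES=(solution.sh)  # handin files students are required to submit
```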
Next up, all build assets are placed into the `build/` folder. It's included here for visualization--you should never have to create it yourself, and you can safely delete and recreate it at any time.
Most of the important folders are in `src/`. Here's a list of the top-level `src/` files and folders and what they're used for.
- `dist/`
    - Everything in this folder will be seen by students in the top level of their handout folder, as well as inside the autograder as `src/dist`. This is the place for scaffold code, instructions, helper files for the lab, etc.
    - `driver/`
        - Inside `dist/` is the driver code, whose purpose is to check the student's work and give them feedback. It should be able to work both on Autolab as well as in the student's local environment.
        - There should always be an executable called `driver` in this folder which checks the student's work.
- `driver-public/`
    - Everything in here is copied into `dist/driver/` when creating both the student handout folder and the autograder.
- `driver-private/`
    - Related to the above, the contents here end up in `dist/driver`, but only inside the autograder, not in the student handout. This folder is useful for making "private" tests, i.e., tests that students can't see.
    - Files are copied over top of those in `driver-public`, so you can overwrite `driver-public` files in the generated output if you want to.
- `refsol/`
    - The staff solutions. There should be one file for each declared required file in `config`.
    - Files in here are copied over top of everything in `dist/` when collecting the lab files, overwriting on name clashes.
- If you're looking to copy and modify an existing lab, `pipelab` is a good example.
- When writing the driver, make sure that students can't log any output to the console. If they could, we could leak information about our autograder or private test cases.
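To make that concrete, here is a minimal, hypothetical shape for a `driver`. The file names and score format below are invented for illustration; Autolab generally parses the final line of autograder output as a JSON scores object, but check your course's setup before relying on that.

```shell
#!/usr/bin/env bash
# Hypothetical driver sketch: run the student's script, suppress its output
# (so nothing private leaks to the console), and diff against the answer.
# File names and the score format are assumptions, not this repo's contract.

expected="hello"
if actual="$(bash solution.sh 2>/dev/null)" && [ "$actual" = "$expected" ]; then
  score=100
  printf 'check output: PASS\n'
else
  score=0
  printf 'check output: FAIL\n'
fi

# Autolab parses the last line of autograder output as JSON scores.
echo "{\"scores\": {\"correctness\": $score}}"
```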
## Contribute

This is how you should act when developing assignments.
You should never push to the `master` branch directly. Instead, create a pull request, and merge it after it's been reviewed. This ensures that at least two people understand what a given piece of code does.
For more about why code review is important, watch this video.
Don't fork this repo. Instead, use feature branches within this repo. This makes it easier for other TAs to collaborate on your work if necessary. Prefix your feature branch names with your initials or username for clarity (e.g., `jez-finish-pipelab` rather than `finish-pipelab`).
Whenever you expect a response or changes from someone, assign the issue or pull request to them with a note about what you'd like from them. Similarly, self-assign a PR to indicate that it's a work in progress.
If someone asked you to review a PR, and you both agree that the changes are good to go, add the "Approved" label, and let the author of the PR merge it into `master` out of courtesy (i.e., try not to merge it for them).
Please use 2 spaces for indentation levels.
When necessary, prefer naming files using `kebab-case` (i.e., hyphen-separated names) instead of `snake_case` (i.e., underscore-separated names).
Choosing good names is important. Your lab's name must meet these criteria.
Assessments on Autolab should be named using UpperCamelCase and should always end in "Lab". For example: "PipeLab", "SportsLab", etc.
The autograder is the first way students will get feedback about their progress on a lab. It's important to make this feedback as useful as possible.
The following does not apply to all autograders (TrainerLab stands out as an example where these tips don't apply). Nonetheless, these are some good principles to have in mind when writing an autograder.
- Feedback should be as useful as possible without revealing too much about the solution.
- Make one title per problem.
    - Indent all feedback related to that problem by 4 spaces.
    - Indent all diff or auto-generated output for that problem by 8 spaces.
- Leave one empty line between each problem.
- Color success output in green.
- Color failure output in red.
- Color neutral information in bright white.
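A sketch of emitting feedback that follows these conventions, using raw ANSI color codes (the problem names and pass/fail lines are made up):

```shell
#!/usr/bin/env bash
# Sketch of feedback formatted per the conventions above (invented problems).
green=$'\033[32m'; red=$'\033[31m'; white=$'\033[1;37m'; reset=$'\033[0m'

print_feedback() {
  printf '%sProblem 1: redirection%s\n' "$white" "$reset"    # one title per problem
  printf '    %sPASS%s: output matched\n' "$green" "$reset"  # feedback at 4 spaces
  printf '\n'                                                # blank line between problems
  printf '%sProblem 2: pipes%s\n' "$white" "$reset"
  printf '    %sFAIL%s: diff of expected vs. actual:\n' "$red" "$reset"
  printf '        -hello\n        +helo\n'                   # diff output at 8 spaces
}

print_feedback
```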
Here's an example of the output from ForceLab:
## Troubleshooting

If `bask` seems to be misbehaving in some way, try updating your `bash`.
## License

MIT License. See LICENSE.