N of M authoring edge cases #191

Open
coleshaw opened this issue Jan 11, 2018 · 3 comments

coleshaw commented Jan 11, 2018

Currently, when authoring an N of M question, it is possible to get into a non-intuitive state. For example, if you author 10 questions and set the N of M selector to "5 of 10", but then delete 6 questions from the assessment, the server still has the "5" flag set (even though there are fewer than 5 questions left in the assessment). In the authoring UI, the selector drops back to its default option ("Answer all questions"), so the author may think the assessment is no longer N of M, but that setting is never sent to the server. This leads to counter-intuitive states in the student-facing player: it still treats the assessment as N of M, and buttons are missing from the UI.

Should there be logic in the authoring tool to remove the N of M flag (i.e., set it back to "all questions") on the server when the number of questions in the assessment drops below the N of M flag?
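A minimal sketch of what that could look like in the authoring tool, assuming a hypothetical `numQuestionsToAnswer` setting where `null` means "answer all questions" (all names here are illustrative, not the actual API):

```ts
// Hypothetical authoring-tool shapes; names are illustrative only.
interface AssessmentSettings {
  // null means "answer all questions"; a number means "N of M".
  numQuestionsToAnswer: number | null;
}

interface Assessment {
  questions: unknown[];
  settings: AssessmentSettings;
}

// Stands in for whatever save call the authoring tool actually uses.
declare function saveSettings(settings: AssessmentSettings): Promise<void>;

// When a deletion drops the question count below N, explicitly clear the
// N of M flag on the server instead of only resetting the selector in the UI.
async function onQuestionDeleted(assessment: Assessment): Promise<void> {
  const n = assessment.settings.numQuestionsToAnswer;
  if (n !== null && assessment.questions.length < n) {
    assessment.settings.numQuestionsToAnswer = null; // back to "answer all"
    await saveSettings(assessment.settings); // keep the server in sync
  }
}
```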

coleshaw added the bug label Jan 11, 2018

bmuramatsu commented Jan 11, 2018 via email

coleshaw (Author) commented

Thanks for the alternative idea. I agree there are multiple ways to approach this -- I think the overall issue is more of a workflow/UX concern to be aware of than a purely technical bug.

But I want to note that there is a small technical "bug" in the sense that the N of M selector currently changes state to show "All questions required" when the number of questions drops too low, but that change isn't reflected server-side, so the UI and the stored data disagree.

Fixing this issue would take both the known technical bug fix and UX improvements that are still to be defined, I think.
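As a belt-and-braces sketch, the server could also normalize the stored value whenever the question list changes, so the stored N can never exceed the question count (the function and parameter names are hypothetical):

```ts
// Hypothetical server-side normalization: clamp a stored N that no longer
// fits the question count, so stored data can't disagree with the UI.
function normalizeNofM(
  numQuestionsToAnswer: number | null,
  questionCount: number
): number | null {
  if (numQuestionsToAnswer !== null && numQuestionsToAnswer > questionCount) {
    return null; // fall back to "answer all questions"
  }
  return numQuestionsToAnswer;
}
```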

resource11 (Collaborator) commented

These are both valid points. I like @bmuramatsu's point about the UX needs here when we get into a non-intuitive state (i.e., a friction point in the UX).

This is a good opportunity to use that friction constructively: surface a warning that the assessment no longer meets the N of M criteria, and guide the user to choose between adding a question to meet the criteria or changing the assessment type.

Also, when should this warning show up? Immediately when the question count drops below N, or at some other point?
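To make the "immediately when the count drops" option concrete, here is a sketch of a check that could run after each deletion and drive that warning (names and message text are hypothetical):

```ts
// Hypothetical validation that could run after every question deletion to
// decide whether to surface the warning described above.
interface NofMWarning {
  message: string;
  actions: string[]; // recovery paths offered to the author
}

function checkNofM(
  numQuestionsToAnswer: number | null,
  questionCount: number
): NofMWarning | null {
  if (numQuestionsToAnswer === null || questionCount >= numQuestionsToAnswer) {
    return null; // criteria still met, no warning needed
  }
  return {
    message:
      `This assessment asks students to answer ${numQuestionsToAnswer} ` +
      `questions but only contains ${questionCount}.`,
    actions: ["Add a question", "Require all questions"],
  };
}
```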
