
Group randomization can result in response overlap #205

Open
everitt-andrew opened this issue Mar 4, 2019 · 2 comments
everitt-andrew commented Mar 4, 2019

Description of the Issue

While working on the testing for group_rrobin, Lancaster and I found a potential bug via the test_rrobin_responses method: groups could end up "unbalanced" based on their members' responses. This issue arose out of the now-closed issue #137 and the currently open issue #184.

Add Detailed Comment with More Information

To better understand how the program groups members of the class, we changed some of the example responses in test_group_method.py. When we ran the program, we found that it tries to create groups with differing responses among group members for diversity and balancing purposes. However, during some of our tests, a group would occasionally be formed that is unbalanced. Specifically, we saw one group formed with two members whose responses were all "True" while another group had none of these all-"True" members. Previously, these sample students would each have been placed in their own group alongside students with varying responses.

Steps to Reproduce Issue

To reproduce this issue, add a short segment of code to test_group_method.py that prints the members of each group to the terminal. Then run the test and examine the responses in each group. The test may need to be run several times to see the issue (it occurred in approximately one of the ten tests that we ran).
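A minimal sketch of such a debug print might look like the following. The helper name format_groups is hypothetical, and the return shape of the grouping call (a list of groups, each a list of [name, bool, ...] rows) is assumed from the test data below; the sample grouping is hand-built to illustrate the output format.

```python
def format_groups(groups):
    """Build printable lines showing each group's members and responses."""
    lines = []
    for i, group in enumerate(groups):
        lines.append(f"Group {i}:")
        for member in group:
            # Each row is [name, response1, response2, ...]
            name, *responses = member
            lines.append(f"  {name}: {responses}")
    return lines

# Hand-built example grouping matching the test data's row shape:
sample_groups = [
    [["Dan", True, True, True], ["Jesse", True, True, True],
     ["Nick", False, True, False]],
    [["Austin", True, True, True], ["Nikki", False, True, False],
     ["Jeff", True, False, False]],
]
print("\n".join(format_groups(sample_groups)))
```

Pasting a print like this after the group_rrobin_num_group call in the test makes the response overlap visible in the terminal.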

        """Testing the grouping function according to responses"""
        lst = [
            ["Dan", True, True, True],
            ["Jesse", True, True, True],
            ["Austin", True, True, True],
            ["Nick", False, True, False],
            ["Nikki", False, True, False],
            ["Maria", False, True, False],
            ["Jeff", True, False, False],
            ["Simon", True, False, False],
            ["Jon", True, False, False],
            ["Angie", False, False, True],
            ["Izaak", False, False, True],
            ["Jacob", False, False, True],
        ]
        numgrps = 4
        response_output = group_rrobin.group_rrobin_num_group(lst, numgrps)
        assert len(response_output[0]) == 3
        assert len(response_output) == numgrps
        assert ["Dan", True, True, True] not in response_output[2]
>       assert ["Jesse", True, True, True] not in response_output[0]
E       AssertionError: assert left not in right failed.
E         Showing split diff:
E         
E         left:  ['Jesse', True, True, True]
E         right: [['Dan', True, True, True],
E          ['Jesse', True, True, True],
E          ['Nick', False, True, False]]

The test output above is from a failure encountered while working on further refactoring/enhancement of group_rrobin.py, but it shows that an unbalanced group was created.

Assigned Developers

@Lancasterwu @everitt-andrew @shafferz

@everitt-andrew

@Lancasterwu also mentioned that we might want to look at the impact that the number of Booleans has on the randomization of groups. Currently there are three Booleans in the test cases and four Boolean values in the student input CSV file.
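For context on that point: with n Boolean questions there are 2**n possible response profiles, so fewer questions mean more students sharing a profile and a higher chance of within-group overlap. A quick way to inspect the profile space of the test data above (this is an illustration, not code from the repository):

```python
from collections import Counter

# The twelve sample students from test_group_method.py (three Booleans each).
lst = [
    ["Dan", True, True, True], ["Jesse", True, True, True],
    ["Austin", True, True, True], ["Nick", False, True, False],
    ["Nikki", False, True, False], ["Maria", False, True, False],
    ["Jeff", True, False, False], ["Simon", True, False, False],
    ["Jon", True, False, False], ["Angie", False, False, True],
    ["Izaak", False, False, True], ["Jacob", False, False, True],
]

# Count how many students share each response profile.
profiles = Counter(tuple(row[1:]) for row in lst)
print(len(profiles), "distinct profiles out of", 2 ** 3, "possible")
```

Each of the 4 profiles here appears exactly 3 times, so a perfect split into 4 groups of 3 with one member per profile exists; the overlap shows the randomization sometimes missing it.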

@everitt-andrew

@shafferz Could you link to the CSV file/table that is currently being used?
