
Problem: it is unclear when the tests are running NLP-related tests #17

Open

jrwdunham opened this issue Feb 14, 2018 · 0 comments
The functional tests are written in such a way that if foma and/or MITLM are not installed on the host, certain tests pass vacuously: they return early without exercising any NLP functionality. On a TravisCI host these OS dependencies are not installed; on a local docker-compose deploy they should be. This makes it hard to know whether all of these tests are genuinely passing.
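A minimal sketch of how such a vacuous pass can arise (the test and helper names here are assumptions for illustration, not copied from the actual suite):

```python
# Hypothetical illustration of the current pattern: the test returns early
# when the OS-level dependency is absent, so it "passes" while asserting
# nothing.
import shutil
import unittest


def foma_installed():
    """Return True if the foma binary is on the PATH."""
    return shutil.which('foma') is not None


class TestMorphologyCompile(unittest.TestCase):

    def test_fst_compilation(self):
        if not foma_installed():
            return  # vacuous pass: no assertions were executed
        # ... real assertions against the compiled FST would go here ...
```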

Proposed solution: add a strict flag to the test runner so that the foma/MITLM-dependent tests fail, rather than pass vacuously, when those dependencies are missing.
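A minimal sketch of what strict mode might look like (the flag name, environment variable, and helper are assumptions, not an existing API):

```python
# Sketch of the proposed strict mode: when strict mode is on, a missing
# dependency becomes a hard failure instead of a silent pass or skip.
import os
import shutil
import unittest

# Hypothetical switch; could equally be a command-line option on the runner.
STRICT = os.environ.get('OLD_TESTS_STRICT', '0') == '1'


def require_binary(test, binary):
    """Fail (strict) or skip (non-strict) when `binary` is not installed."""
    if shutil.which(binary) is None:
        if STRICT:
            test.fail('%s is required for this test but is not installed'
                      % binary)
        test.skipTest('%s not installed; skipping' % binary)


class TestLanguageModel(unittest.TestCase):

    def test_mitlm_training(self):
        require_binary(self, 'estimate-ngram')  # estimate-ngram is MITLM's CLI
        # ... real assertions against the trained language model go here ...
```

Skipping (rather than silently returning) in the non-strict case would also make the test report show which NLP tests were not actually run, which addresses the "unclear when the tests are running NLP-related tests" part of this issue's title.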

@jrwdunham jrwdunham self-assigned this Feb 14, 2018