[BUG] Benchmark tests are broken #1059
Closed
shonfeder added the DevOps label (as practice, not role; subsumes CI/CD, project organization, operationalization, etc.) on Oct 24, 2021
shonfeder pushed a commit that referenced this issue on Oct 28, 2021:
* Fix documentation to indicate the correct config file
* Expose output configuration via CLI arguments
  - Followup to #1025
  - Required for #1059

  This plumbs configuration of the OutputManager's settings for the out-dir and write-intermediate fields through to the CLI. It is needed for us to control the location and kind of output for the benchmark tests, and was planned in #1025.
* Rework docker wrapper scripts to create files using the same user and group ID as the user who invoked the script, fixing incorrect file permissions on files generated within the docker container.
shonfeder pushed a commit that referenced this issue on Oct 28, 2021:
- Followup to #1025
- Required for #1059

This flag is proposed in the ADR for the OutputManager but wasn't implemented in #1025, which left us without a way to enable the generation of the profile-rules.txt file that used to always be created (other than using the global config). The benchmarks depend on this file being created.
shonfeder pushed a commit that referenced this issue on Oct 29, 2021:
* Add --profiling flag
  - Followup to #1025
  - Required for #1059

  This flag is proposed in the ADR for the OutputManager but wasn't implemented in #1025, which left us without a way to enable the generation of the profile-rules.txt file that used to always be created (other than using the global config). The benchmarks depend on this file being created.
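Taken together, the commits above expose three output controls on the CLI: the output directory, intermediate-file generation, and profiling. A minimal sketch of how the benchmark harness might combine them when invoking the model checker is below; the flag spellings are assumptions inferred from the commit messages and the OutputManager ADR, so consult `apalache-mc check --help` for the actual interface.

```shell
# Hypothetical flag set, assembled from the features the commits describe:
#   --out-dir            where all generated output should land
#   --write-intermediate whether to emit intermediate files
#   --profiling          whether to produce profile-rules.txt
ARGS="--out-dir=_apalache-out --write-intermediate=true --profiling=true"

# The benchmark tests would then run something along these lines
# (echoed here rather than executed, since apalache-mc is not assumed
# to be installed):
echo "apalache-mc check $ARGS Spec.tla"
```

The point of plumbing these through the CLI, rather than relying on the global config file, is that the benchmark tests can then control both the location and the kind of output per invocation.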
The benchmark tests that run in https://github.com/informalsystems/apalache-tests are broken, as an (obvious and expected) result of #1036. This also indicates that #1036 is a breaking change, so we should be sure to increment the minor version in the next release. To fix this, I'll need to update where we look for the various outputs. See https://github.com/informalsystems/apalache-tests/runs/3981443971?check_suite_focus=true for the failure.