Parallelise tests by default #119
Next steps:
kegsay added a commit that referenced this issue on Jul 20, 2021:
This is the beginning of the work for #119.

- Add the concept of `PackageNamespace` to `config.Complement`. Set to `""` for now.
- Update all filters to look for a matching package namespace when creating blueprints and containers.
- Clean up `docker.Builder` to read from the config more and from copies of config fields less.

This change should be invisible to any existing Complement users.

Background: Previously, Complement assumed that each container could be uniquely identified by the combination of `deployment_namespace + blueprint_name + hs_name`. For example, if you were running `BlueprintAlice`, a unique string might look like `5_alice_hs1`, where `5` is an atomic, monotonically increasing integer incremented when `Deploy()` is called (see #113 for more info). In a parallel-by-default world this is no longer true, because the `deployment_namespace` is not shared between different test processes, so we cannot co-ordinate non-clashing namespaces as before. Instead, we bring in another namespace for the test process (which in #119 will be on a per-package basis, hence the name `PackageNamespace`).

As of this PR, everything Complement makes (images, containers, networks, etc.) is prefixed with this package namespace, which allows multiple Complement instances to share the same underlying Docker daemon, with caveats:

- Creating CA certificates will race, so a lockfile is needed to prevent two processes trying to create the certificate at the same time.
- Complement federation servers cannot run together, because they both try to bind to `:8448` at the same time.

That being said, this PR should enable the parallelisation of a number of CS API-only tests, which will come in another PR.
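As a rough illustration of the naming scheme described above, here is a minimal sketch of how a unique container name could be composed once a package namespace is added in front of the existing `deployment_namespace + blueprint_name + hs_name` triple. The function name, field layout, and the example namespaces `csapi`/`federation` are hypothetical and not taken from Complement's actual builder code; only the idea of prefixing the old `5_alice_hs1`-style name is implied by the commit message.

```go
package main

import "fmt"

// containerName sketches how a container name might be built once a package
// namespace is introduced. The exact format used by Complement may differ;
// this only shows why the extra prefix removes cross-process collisions.
func containerName(pkgNamespace string, deployNamespace int, blueprint, hsName string) string {
	// Previously: "<deployment>_<blueprint>_<hs>", e.g. "5_alice_hs1".
	// With a package namespace prefix, two test processes that happen to use
	// the same deployment counter no longer clash on the shared daemon.
	return fmt.Sprintf("%s_%d_%s_%s", pkgNamespace, deployNamespace, blueprint, hsName)
}

func main() {
	// Two packages deploying the same blueprint at the same time now get
	// distinct names on the same Docker daemon.
	fmt.Println(containerName("csapi", 5, "alice", "hs1"))      // csapi_5_alice_hs1
	fmt.Println(containerName("federation", 5, "alice", "hs1")) // federation_5_alice_hs1
}
```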
kegsay added a commit that referenced this issue on Jul 21, 2021:
* Introduce the concept of a package namespace

This is the beginning of the work for #119.

- Add the concept of `PackageNamespace` to `config.Complement`. Set to `""` for now.
- Update all filters to look for a matching package namespace when creating blueprints and containers.
- Clean up `docker.Builder` to read from the config more and from copies of config fields less.

This change should be invisible to any existing Complement users.

Background: Previously, Complement assumed that each container could be uniquely identified by the combination of `deployment_namespace + blueprint_name + hs_name`. For example, if you were running `BlueprintAlice`, a unique string might look like `5_alice_hs1`, where `5` is an atomic, monotonically increasing integer incremented when `Deploy()` is called (see #113 for more info). In a parallel-by-default world this is no longer true, because the `deployment_namespace` is not shared between different test processes, so we cannot co-ordinate non-clashing namespaces as before. Instead, we bring in another namespace for the test process (which in #119 will be on a per-package basis, hence the name `PackageNamespace`).

As of this PR, everything Complement makes (images, containers, networks, etc.) is prefixed with this package namespace, which allows multiple Complement instances to share the same underlying Docker daemon, with caveats:

- Creating CA certificates will race, so a lockfile is needed to prevent two processes trying to create the certificate at the same time.
- Complement federation servers cannot run together, because they both try to bind to `:8448` at the same time.

That being said, this PR should enable the parallelisation of a number of CS API-only tests, which will come in another PR.

* Set to pkg to keep the ref format of committed containers valid
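For the CA certificate caveat above, the commit message only says a lockfile is needed; the sketch below is one way such cross-process serialisation could look, and is not Complement's actual implementation. The lock path, retry interval, and `withFileLock` helper are all assumptions made purely for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// withFileLock is a minimal sketch of a cross-process lockfile: only one
// process can create the lock file at a time, so the guarded work (here,
// CA certificate generation) happens without racing another `go test` run.
func withFileLock(lockPath string, fn func() error) error {
	for {
		// O_CREATE|O_EXCL fails if the file already exists, so at most one
		// process holds the lock at any moment.
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			defer os.Remove(lockPath) // release the lock on return
			defer f.Close()
			return fn()
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		time.Sleep(100 * time.Millisecond) // another process holds the lock; retry
	}
}

func main() {
	err := withFileLock("/tmp/complement-ca.lock", func() error {
		fmt.Println("generating CA certificate exactly once across processes")
		return nil
	})
	if err != nil {
		fmt.Println("lock error:", err)
	}
}
```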
So this was split based on protocol in the end, due to practical limitations. Global function pollution is still an issue, albeit somewhat reduced now.
Tests are currently only run sequentially, unless `t.Parallel()` is called in a subtest. This will not scale as we increase the number of tests. The `go test` framework will parallelise tests in different packages by default, but those environments are completely isolated from each other, so test suites cannot co-ordinate who is making which blueprint, or when to clean up Docker containers. It would be good to figure out a way of doing this so that we get at least some parallelisation by default, and runtime does not scale quite so badly with the number of tests.

We may be able to do the per-package solution, provided we have some co-ordination mechanism for cleanup, even if it's just a naive "okay, every package gets its own namespace to work with". From a sheer complexity point of view this may actually be the best solution, as it's easier to reason about (you don't need to care about who set up what, at the cost of making O(n) blueprints where `n` is the number of packages).

Then the problem is how to slice and dice the existing tests. I tried this briefly, and we cannot split based entirely on feature, due to how the build tags work. If a directory's test files are all behind build tags (e.g. MSCs), then `go test` complains that there is nothing testable there when the tag is not specified. We could add a dummy test file, but that is icky given we expect new MSCs to add new directories, forcing the writer of the test to know about this. We could split based on protocol (CS API, SS API, ST API), but that reduces the amount of helper function sharing that can be done, which is typically per-feature (think registration, knocking). Currently all non-test functions share a global namespace, as they are all in the same package, which is a bit gross in terms of having a mess of global functions lying around.

Thoughts welcome.
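For context on the `t.Parallel()` point above, here is a minimal, self-contained example (not taken from the Complement test suite; the subtest names are illustrative) of opting subtests into parallel execution. Without these calls, `go test` runs tests within a package one after another; only tests in different packages are parallelised automatically.

```go
package example_test

import (
	"testing"
	"time"
)

// TestFeatures runs its subtests in parallel only because each subtest calls
// t.Parallel(). Within a single package, go test is sequential by default.
func TestFeatures(t *testing.T) {
	for _, name := range []string{"registration", "knocking", "sync"} {
		name := name // capture range variable (needed before Go 1.22)
		t.Run(name, func(t *testing.T) {
			t.Parallel()                       // without this, subtests run one at a time
			time.Sleep(100 * time.Millisecond) // stand-in for real test work
		})
	}
}
```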