Test: Integration tests of the PolarisHubClient #31
Thinking about this some more, I think we should add a protected
ping @mercuryseries @jstlaurent - Does this make sense?
Well, there are maybe a few ways to go about this. But we're talking end-to-end tests here, which tend to be heavier and longer to run. Maybe some integration tests with a mock Hub API might be a good stopgap measure in the meantime. So, end-to-end testing with an actual Hub and an actual DB. Option 1: Run locally. You'll need some scaffolding to do the following:
It's a pain to run on a laptop, but we could set that up in GitHub Actions CI and have it run there for PRs on the Polaris repo. Option 2: Leverage Vercel. If we want to leverage Vercel, to get as close to prod as possible, then it would probably have to go down this way:
Maybe slightly less of a pain to do locally than option 1. GitHub Actions could also help wire all that up.
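The mock-Hub stopgap mentioned above could look like the sketch below. `HubClient` is a stand-in for the real `PolarisHubClient`, and the endpoint path and payload shape are assumptions, not the actual Hub contract:

```python
# Sketch: integration test against a mocked Hub API, with no network traffic.
# `HubClient`, the /v1/datasets route, and the response shape are hypothetical.
import json
from unittest import mock
from urllib import request


class HubClient:
    """Toy stand-in for PolarisHubClient: fetches dataset names from the Hub."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def list_datasets(self) -> list[str]:
        with request.urlopen(f"{self.base_url}/v1/datasets") as resp:
            payload = json.loads(resp.read())
        return [d["name"] for d in payload["data"]]


def test_list_datasets_with_mocked_hub():
    body = json.dumps({"data": [{"name": "demo"}]}).encode()
    fake_resp = mock.MagicMock()
    fake_resp.read.return_value = body
    fake_resp.__enter__.return_value = fake_resp
    # Patch urlopen so the client call never leaves the process.
    with mock.patch("urllib.request.urlopen", return_value=fake_resp):
        client = HubClient("https://hub.example.test")
        assert client.list_datasets() == ["demo"]
```

The same pattern would apply to the real client by patching whatever HTTP layer it uses internally.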
I thought about the two (three, if you count the mock Hub API) options you described as well, but I ultimately landed on what I described above because I question whether we need to completely set up and tear down a new deployment every time we run a suite of tests. I suspect that having a deployment per release will suffice. If that's not frequent enough and the DB builds up too much, we could use the GitHub CI/CD to regularly reset the test deployment, e.g. every night. Having a dedicated test environment would also allow external collaborators without access to the Hub code base to run the testing suite, right?
Well, Vercel creates a new deployment for every commit that's pushed to any branch, and their infrastructure is designed to enable that. The capability is there; I would leverage it. Besides, that's actually fairly fundamental to Vercel's model. We don't "maintain" deployments, not even prod. There's a new one every time we make a change. Besides, setting up a new environment is typically what we do for integration testing, only we do it locally by spinning up a containerized DB. In this case, having a DB branch from prod adds a level of randomness, but it's probably worth it to get the most realistic environment. I mean, we could do that too. If it's easier to containerize the Hub application and let you run it in GitHub Actions as a service when you launch a workflow (or locally in Docker), we can probably make it happen.
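Gating the client's end-to-end suite on such a containerized Hub could be as simple as the sketch below. The `POLARIS_HUB_URL` variable name is an assumption, not an agreed convention:

```python
# Sketch: end-to-end tests run only when a test Hub is configured, e.g. one
# started as a GitHub Actions service container or via Docker locally.
# The POLARIS_HUB_URL environment variable is a hypothetical convention.
import os
import unittest

HUB_URL = os.environ.get("POLARIS_HUB_URL")


@unittest.skipUnless(HUB_URL, "set POLARIS_HUB_URL to run against a test Hub")
class HubEndToEndTests(unittest.TestCase):
    def test_hub_url_looks_valid(self):
        # Real tests would exercise PolarisHubClient against the live service.
        self.assertTrue(HUB_URL.startswith("http"))
```

With no Hub configured, the whole class is skipped, so the rest of the suite still runs cleanly on laptops and in CI.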
Per Polaris client release, you mean? To me, "resetting the test deployment" would mean destroying the existing one and creating a new one.
You mean, if we open up the REST API to third parties? There are many ways to go about enabling third-party testing, when we get there, with various degrees of load on the team. An OpenAPI spec they can use to auto-generate a mock Hub would be my first step. Client-facing sandboxed environments with semi-ephemeral data would be... many steps further, let's say.
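A minimal version of that auto-generated mock Hub might look like the following; here the single route is hard-coded rather than generated from the spec, and the path and payload are assumptions:

```python
# Sketch: a tiny local mock Hub that third parties could run for testing.
# A real version would derive its routes from the Hub's OpenAPI spec;
# the /v1/datasets route and response body here are hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockHubHandler(BaseHTTPRequestHandler):
    ROUTES = {"/v1/datasets": {"data": [{"name": "demo"}]}}

    def do_GET(self):
        payload = self.ROUTES.get(self.path)
        if payload is None:
            self.send_error(404)
            return
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass


def start_mock_hub() -> HTTPServer:
    # Port 0 lets the OS pick a free port; read it back via server_port.
    server = HTTPServer(("127.0.0.1", 0), MockHubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A test suite could then point the client's base URL at `http://127.0.0.1:{server.server_port}` and shut the server down on teardown.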
Thank you for the elaborate answers, @jstlaurent! I think we're saying almost the same thing. I'll ping you on Slack for some final questions. I think it will be easier to discuss in person! 🙂
Context
We currently do not test the `PolarisHubClient` automatically. There are two factors that limit us from writing tests right now:

Description
Once the above two limitations have been addressed in #30 and https://github.com/valence-labs/polaris-hub/issues/64, start writing test cases for the Polaris client. Currently, `polaris/hub/client.py` and `polaris/hub/settings.py` are omitted from the coverage. These will have to be added back.

Acceptance Criteria
`polaris.hub` module.

Links