[COR-37] Integration / functional tests #23
Conversation
sjmiller609 commented Dec 20, 2022 (edited)
- add integration testing setup
  - starts Kind cluster(s)
  - installs CRD
  - starts operator
  - runs integration tests
- integration tests:
  - apply a configuration of coreDB
  - check pod becomes running
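The steps above might be driven by something like the following (a hypothetical sketch, not the PR's actual CI script; it assumes the operator installs its CRD on startup, though `cargo test -- --ignored` is the standard way to run `#[ignore]`d tests):

```shell
# create a local cluster to test against (hypothetical invocation)
kind create cluster --image 'kindest/node:v1.25.3'

# start the operator in the background; assumed to install the CRD on startup
cargo run &

# run the #[ignore]d integration tests against the live cluster
cargo test -- --ignored
```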
#[tokio::test]
#[ignore]
async fn functional_test_basic_create() {
This test deploys a CoreDB into the cluster we are connected to, then checks that a pod starts.
- 'kindest/node:v1.25.3'
- 'kindest/node:v1.22.15'
We can test against multiple k8s versions.
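Each entry in that version matrix could be fed to kind via its cluster config, one cluster per node image (a sketch using kind's v1alpha4 config format; the cluster name here is made up):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: coredb-test-v1-25
nodes:
  - role: control-plane
    image: kindest/node:v1.25.3
```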
if let Err(_) = tokio::time::timeout(
    std::time::Duration::from_secs(timeout_seconds),
    await_condition(pods, &pod_name, conditions::is_pod_running()),
)
.await
{
    panic!(
        "\n\nERROR: Did not find the pod {} to be running after waiting for {} seconds\n\n",
        pod_name, timeout_seconds
    )
}
I found this pattern useful for waiting on a condition with a timeout.
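The same "wait until a condition holds, or fail after a timeout" idea can be sketched without kube or tokio, for illustration only (the real test uses tokio's `timeout` around kube's `await_condition`; `wait_for` and its arguments below are hypothetical names):

```rust
use std::thread;
use std::time::{Duration, Instant};

// Poll `condition` until it returns true, or give up after `timeout`.
// A dependency-free sketch of the await_condition + timeout pattern.
fn wait_for(
    timeout: Duration,
    poll_every: Duration,
    mut condition: impl FnMut() -> bool,
) -> Result<(), String> {
    let deadline = Instant::now() + timeout;
    loop {
        if condition() {
            return Ok(());
        }
        if Instant::now() >= deadline {
            return Err(format!("condition not met within {:?}", timeout));
        }
        thread::sleep(poll_every);
    }
}

fn main() {
    // Satisfied on the third poll: Ok.
    let mut polls = 0;
    assert!(wait_for(Duration::from_secs(1), Duration::from_millis(1), || {
        polls += 1;
        polls >= 3
    })
    .is_ok());

    // Never satisfied: Err once the deadline passes.
    assert!(wait_for(Duration::from_millis(20), Duration::from_millis(1), || false).is_err());
    println!("ok");
}
```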
If you are just panicking, you could do tokio::time::timeout(...).await.expect("CRD was not installed in time") for a one-liner, which also passes the error info into the panic.
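The reviewer's point about `.expect()` carrying the error into the panic can be shown with plain std (the error string below is made up; in the real test the `Err` would be tokio's elapsed-timeout error):

```rust
use std::panic;

fn main() {
    // Silence the default panic printout so the demo output stays clean.
    panic::set_hook(Box::new(|_| {}));

    // .expect() on an Err panics with both the given message and the
    // Debug form of the error, unlike `if let Err(_) = .. { panic!(..) }`,
    // which discards the error.
    let caught = panic::catch_unwind(|| {
        let timed_out: Result<(), &str> = Err("deadline has elapsed");
        timed_out.expect("CRD was not installed in time")
    })
    .unwrap_err();

    let message = caught.downcast_ref::<String>().unwrap();
    assert!(message.contains("CRD was not installed in time"));
    assert!(message.contains("deadline has elapsed"));
    println!("ok");
}
```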
let crds: Api<CustomResourceDefinition> = Api::all(client.clone());
if let Err(_) = tokio::time::timeout(
    std::time::Duration::from_secs(2),
    await_condition(crds, "coredbs.kube.rs", conditions::is_crd_established()),
)
.await
{
    panic!("\n\nERROR: Did not find the CRD to be installed.\n\n")
}
For example, the same pattern here waits on the CRD. kube-rs also offers custom conditions, so you can write a function and wait until it is satisfied.
https://docs.rs/kube/latest/kube/runtime/wait/trait.Condition.html
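A custom condition is just a predicate over the object's latest observed state. The sketch below mirrors the linked `Condition` trait's `matches_object` signature locally so it compiles without the kube crate; the `CoreDb` struct and its `ready_replicas` field are made-up stand-ins, not the real CRD:

```rust
// Local mirror of kube's Condition trait (same matches_object signature),
// reimplemented here so the sketch is self-contained.
trait Condition<K> {
    fn matches_object(&self, obj: Option<&K>) -> bool;
}

// Blanket impl letting plain closures act as conditions, as kube-rs does.
impl<K, F: Fn(Option<&K>) -> bool> Condition<K> for F {
    fn matches_object(&self, obj: Option<&K>) -> bool {
        (*self)(obj)
    }
}

// Toy stand-in for a CoreDB resource's status (hypothetical field).
struct CoreDb {
    ready_replicas: i32,
}

// A custom condition: the CoreDB reports at least one ready replica.
fn is_coredb_ready() -> impl Condition<CoreDb> {
    |obj: Option<&CoreDb>| obj.map_or(false, |c| c.ready_replicas > 0)
}

fn main() {
    let cond = is_coredb_ready();
    assert!(!cond.matches_object(None)); // object not observed yet
    assert!(!cond.matches_object(Some(&CoreDb { ready_replicas: 0 })));
    assert!(cond.matches_object(Some(&CoreDb { ready_replicas: 1 })));
    println!("ok");
}
```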
let client_future = Client::try_default();
let client = match client_future.await {
    Ok(wrapped_client) => wrapped_client,
    Err(_error) => panic!("Please configure your Kubernetes Context"),
};
// Get the name of the currently selected namespace
let selected_namespace = Config::infer().await.unwrap().default_namespace;
NB: you are technically doing Config::infer twice in this block (Client::try_default infers the config internally as well). You could use the result of the config inference to create the Client.
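The suggested refactor might look like this (a sketch against the kube crate, not runnable without cluster access; Client implements TryFrom<Config>):

```rust
use kube::{Client, Config};

// Infer the config once, then reuse it for both the namespace and the client.
let config = Config::infer()
    .await
    .expect("Please configure your Kubernetes context");
let selected_namespace = config.default_namespace.clone();
let client = Client::try_from(config).expect("failed to build client from config");
```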
I will check on this. Thank you!
* hard code image in all containers
* Updates test so it cleans up after itself and doesn't leave things in the config file