Add workload services #732
Conversation
In general this looks fine.

To fix the compilation errors you will need to update the `ConvertSpec` function in `/pkg/resources/opnicluster/util.go` and replace

```go
services.Drain = v1beta2.DrainServiceSpec(input.Services.Drain)
```

with

```go
services.Drain = v1beta2.DrainServiceSpec{
	ImageSpec:    input.Services.Drain.ImageSpec,
	Enabled:      input.Services.Drain.Enabled,
	NodeSelector: input.Services.Drain.NodeSelector,
	Tolerations:  input.Services.Drain.Tolerations,
	Replicas:     input.Services.Drain.Replicas,
}
```
You will also need to remove the commented lines from `/controllers/ai_opnicluster_controller_test.go`. Also, please look into adding tests for the new services you have added.
You also need to remove the section in `/pkg/resources/opnicluster/opnicluster.go` that logs that GPU learning is not supported. It starts on line 166.
LGTM
This PR adds the specs for the workload and pretrained Drain services, as well as the specs for the CPU and GPU inferencing services and the training controller.