Add API framework decouple proposal #17
Conversation
Signed-off-by: leonwanghui <wanghui71leon@gmail.com>
I really like the idea. I mentioned some spelling nits inline, but otherwise this proposal looks great to me.
Signed-off-by: leonwanghui <wanghui71leon@gmail.com>
```proto
}

message GenericResponse {}
```
We should have Get and List methods for each resource to check the status of an operation.
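A minimal sketch of what such Get/List methods might look like, assuming a hypothetical volume resource; the service and message names below are illustrative, not taken from the proposal:

```proto
syntax = "proto3";

// Illustrative sketch of the suggested Get/List methods for one resource
// type (volume); the service and message names are assumptions, not the
// proposal's actual model.
service Controller {
  // Fetch the current status of a single volume.
  rpc GetVolume (GetVolumeOpts) returns (GenericResponse) {}
  // List all volumes so the status of outstanding operations can be checked.
  rpc ListVolumes (ListVolumeOpts) returns (GenericResponse) {}
}

message GetVolumeOpts {
  string id = 1;
}

message ListVolumeOpts {}

message GenericResponse {}
```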
One question about these methods: since we can check the status of an operation through etcd, is there any need to double-check the status by calling the backend?
How about adding a QoS parameter to the design, like some messaging implementations do?
Level 1: no check
Level 2: check the backend.
@noelmcloughlin Good suggestion, but in the long term we actually want to redesign a unified model for both the controller and dock modules.
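For reference, a hedged sketch of how the two suggested check levels could be expressed in the protobuf model; the enum name and values are invented here for illustration, and a request message such as the hypothetical GetVolumeOpts above could carry it as an additional field:

```proto
syntax = "proto3";

// Illustrative only: maps the two suggested QoS levels onto an enum.
enum CheckLevel {
  CHECK_NONE = 0;     // Level 1: trust the status recorded in etcd
  CHECK_BACKEND = 1;  // Level 2: also verify against the storage backend
}
```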
##### Pros

Decoupling `api-server` from `controller` would make `api-server` much easier to scale out, and therefore improve performance when handling heavy workloads.
It does allow api-server to scale out, but I don't think this improves performance as it adds another layer of indirection.
What I mean here is that under heavy workloads a single api-server would delay requests, whereas with multiple api-server instances the requests would be split among them and the overall time would be reduced. Does that make sense?
Can you add a diagram to show this change?
```diff
@@ -82,7 +82,7 @@ separate PR after the items above are resolved.

 ### Data model impact

-Because grpc model requires generated from protobuf, so some protobuf files should be added:
+Because grpc model requires generated codee from protobuf, so some protobuf files should be added:
```
s/codee/code
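For context, the line being fixed concerns the new protobuf files from which the gRPC code would be generated. A minimal sketch of what one of the added files might contain; the package name and message fields are placeholders, not taken from the proposal:

```proto
syntax = "proto3";

// Hypothetical example of one of the added files; the package name and
// message fields are placeholders, not the proposal's actual model.
package opensds.proposal;

// protoc, together with the gRPC plugin for the target language, would
// generate the client and server code from declarations like these.
message CreateVolumeOpts {
  string name = 1;
  int64 size = 2;
}
```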
Signed-off-by: leonwanghui <wanghui71leon@gmail.com>
@leonwanghui can you address the comments? After that I'll merge this.
lgtm