Create initial structure #14
Conversation
@llucax is there a trick to make pylint happy or do I need to add exceptions for most of those?

It is a known bug; we need to put an ignore in the import line:
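For context, the kind of ignore this usually means is something like the following (the module path is a guess for illustration, and the exact pylint message id may differ):

```python
# pylint can't introspect compiled protobuf modules, so the false positive is
# typically silenced directly on the import line (module path is illustrative):
from frequenz.api.dispatch.v1 import dispatch_pb2  # pylint: disable=no-name-in-module
```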
Hi, some early feedback. About the SDK thing, if it is only a temporary hack to get things going, I'm good with it!
```python
if interval_filter:
    time_interval = dispatch_pb2.TimeIntervalFilter(
        start_from=to_timestamp(interval_filter.start_from),
        start_to=to_timestamp(interval_filter.start_to),
        end_from=to_timestamp(interval_filter.end_from),
        end_to=to_timestamp(interval_filter.end_to),
    )
```
The idea is that your `TimeIntervalFilter` wrapper class has `from_proto()` and `to_proto()` methods. For now in the SDK we only have `from_proto()` (example), but I guess it could make sense to have the other too.
That said, to be honest, I don't like this approach. The idea is to hide the underlying protocol we are using in the client, so the client user doesn't need to know about protobuf. Since conversion only needs to happen internally in the client, I would prefer an approach where we provide conversion functions separately from the wrapper class, similar to what marshmallow does, or even what dataclasses do when converting to `dict` or `tuple`.

Maybe this is something to discuss in the next API meeting (or even the SDK meeting, as this is a Python thing more than an API thing).
FYI @frequenz-floss/python-sdk-team
I am not sure it's worth having all that for this particular type, given that it's only used in this function and nowhere else, and the conversion is extremely simple.
Right, maybe I picked the wrong example.
Mostly minor things, except for the gRPC mock server which caused us a lot of headaches in the SDK.
```python
pb_dispatch.selector.CopyFrom(_component_selector_to_protobuf(self.selector))
pb_dispatch.is_active = self.is_active
pb_dispatch.is_dry_run = self.is_dry_run
# pb_dispatch.payload = self.payload
```
Why is this commented out?
It caused trouble because the attribute is read-only. Haven't investigated it further.
Oh, weird. Maybe add a comment or create an issue about it?
tests/test_dispatch_client.py (outdated)
```python
@asynccontextmanager
async def grpc_servicer() -> AsyncGenerator[tuple[aio.Server, str], None]:
    """Async context manager to setup and teardown a gRPC server for testing."""
    server = aio.server()
    dispatch_servicer = DispatchServicer()
    dispatch_pb2_grpc.add_MicrogridDispatchServiceServicer_to_server(
        dispatch_servicer, server
    )
    port = server.add_insecure_port("localhost:0")
    await server.start()
    try:
        yield server, f"localhost:{port}"
    finally:
        await server.stop(None)
```
We should not use this mock service if it is really listening on a port; we did that in the SDK and it caused a lot of issues, among them that multiple tests can't run at the same time. We should instead mock the server at the Python level, patching the functions we are calling in our code.

All you need is to mock/patch (in the Python sense) this function used by the client: `self._stub.ListMicrogridDispatches(request)`, making it return whatever mock data you want. We don't need to test the whole gRPC stack, only our client code.

We could also do a test of the whole thing, but that should be an integration test, not a unit test.
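A sketch of what that stub-level mocking could look like with `unittest.mock.AsyncMock`; the `Client` class and its `list()` method here are stand-ins inferred from the snippets in this thread, not the actual client:

```python
import asyncio
from unittest.mock import AsyncMock


class Client:
    """Stand-in for the dispatch client under test."""

    def __init__(self, stub) -> None:
        self._stub = stub

    async def list(self, request):
        """The code under test: the only place that talks to gRPC."""
        response = await self._stub.ListMicrogridDispatches(request)
        return list(response)


async def main() -> list[str]:
    # Patch the single gRPC call at the Python level: no server, no port.
    stub = AsyncMock()
    stub.ListMicrogridDispatches.return_value = ["dispatch-1", "dispatch-2"]

    client = Client(stub)
    result = await client.list(request=None)

    # We can also verify how our code called the stub.
    stub.ListMicrogridDispatches.assert_awaited_once_with(None)
    return result


print(asyncio.run(main()))
```

Because the stub is an in-process mock, tests stay fast, can run in parallel, and never collide on ports.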
tests/test_dispatch_types.py (outdated)
```python
for selector in (
    [1, 2, 3],
    [10, 20, 30],
    ComponentCategory.BATTERY,
    ComponentCategory.GRID,
    ComponentCategory.METER,
):
    protobuf = _component_selector_to_protobuf(selector)  # type: ignore
    assert _component_selector_from_protobuf(protobuf) == selector
```
Any reason not to use `pytest.mark.parametrize` for this? It will move some clutter out of the test itself.

Maybe you can borrow some ideas from here if you have more complex tests and you want, for example, to assign names to them, etc.:
Tbh, I didn't feel the need to do more complex testing with these methods; they seem simple enough.
Sure, you don't have to make it more complicated, but using `parametrize()` is actually simpler: you don't need to create classes for test cases, that was just an FYI in case it was useful. Just using parametrize is as simple as:

```python
import pytest

from frequenz.client.common.microgrid.components.components import ComponentCategory
from frequenz.client.dispatch._types import (
    ComponentSelector,
    _component_selector_from_protobuf,
    _component_selector_to_protobuf,
)


@pytest.mark.parametrize(
    "selector",
    [
        [1, 2, 3],
        [10, 20, 30],
        ComponentCategory.BATTERY,
        ComponentCategory.GRID,
        ComponentCategory.METER,
    ],
)
def test_component_selector(selector: list[int] | ComponentCategory) -> None:
    """Test the component selector."""
    protobuf = _component_selector_to_protobuf(selector)  # type: ignore
    assert _component_selector_from_protobuf(protobuf) == selector
```
Another advantage of using parametrize is that it creates one test per parameter, so you see exactly which parameter failed, and you can run the test for one particular parameter only. Also, without it the test is aborted after the first failure, while with parametrize the remaining parameters are still tested (unless the fail-early option is used).

If you think updating these tests now is not worth it, I'm OKish with that, but I strongly suggest using `parametrize()` in future tests.
tests/test_dispatch_types.py (outdated)
```python
from frequenz.client.common.microgrid.components.components import ComponentCategory

from frequenz.client.dispatch._types import (
    ComponentSelector,
    _component_selector_from_protobuf,
    _component_selector_to_protobuf,
)
```
Any reason for the imports inside the test?
I guess keeping things near where they are used. No deeper reason
If there is no good reason, I would put them at the top level, which is the most popular style and doesn't require an exception for `pylint`.
Signed-off-by: Mathias L. Baumann <mathias.baumann@frequenz.com>
@llucax this is almost ready; there are some mypy/pylint/flake8 complaints, but everything else is working already. Feel free to review.
Besides the minor (optional) comments, I'm mostly OK to merge this as is, but there are a few more major points that I think we should change in the future:

- Make `list` async iterable
- Make `update` type safe
- Separate public from private classes more clearly (I would put them in different files)
- Make sure public classes don't inherit from private classes

PS: I haven't had a look at the tests yet.
```python
if start_from or start_to or end_from or end_to:
    time_interval = PBTimeIntervalFilter(
        start_from=to_timestamp(start_from),
        start_to=to_timestamp(start_to),
        end_from=to_timestamp(end_from),
        end_to=to_timestamp(end_to),
    )
```
Just out of curiosity, this is some sort of optimization to send a message as small as possible over the wire, right? But it should also be possible to always create the PB object with all `None`s and it would have the same effect?

If so, maybe you can add a comment to clarify it, so nobody thinks the `if` is redundant in the future and removes it.
> But it should be also possible to always create the PB object with all `None`s and it will have the same effect?

I'd need to research that to be sure; right now, no idea.
```python
async def update(
    self,
    dispatch_id: int,
    new_fields: dict[str, Any],
```
Suggested change:

```diff
-    new_fields: dict[str, Any],
+    **new_fields: Any,
```
But I would actually list explicitly everything that can be updated here; otherwise we lose type safety and we also allow introducing bugs through typos (`update(start_tim=...)` won't be caught by `mypy`), and the code is quite convoluted with the huge `match` statement. Actually, now that I see it, the `**new_fields: Any` suggestion won't work because you use `.` in keys.

For this PR I'm fine with this, but I think we should definitely make it more static and type-checked in the future.
> the code is quite convoluted with the huge match statement

It would look similar with any other solution, as I have to take care of / convert every (type of) member individually. If it's not a `match` statement, it would be an `if param: ...` for each parameter instead.
Yeah, it's true that you will still check one by one which ones are set to be updated and which are not, but you could at least group `RecurrenceRuleUpdate` in a wrapper, so it can be omitted as a group if nothing has to be changed, for example. But the main point here for me is still type safety and being able to get errors at `mypy` time :D
I thought about other options, using `TypedDict` for example, to avoid the bloat, but they all have their issues. I guess this is something we might need to think more about, so I would leave it as in this PR and create an issue about it. I think how we do updates should also be consistent among all API clients, so it's something we might want to discuss with the other API teams.
A data class with every field optional could be a way.
Yeah, we would need to use a sentinel to indicate whether a value was set or not (because you might need to update something from a value to `None`, so `None` can't mean "this attribute is not updated"), but that still requires checking every attribute one by one to see if it was overridden or not. `dict`s are the only ones that natively encode the absence of a field, so you can just iterate over the existing keys to set the attributes in the message.
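The sentinel-dataclass idea could be sketched like this (names are illustrative; note that `None` remains a legal update value, distinct from "not set"):

```python
from dataclasses import dataclass, fields


class _Unset:
    """Sentinel type marking fields the caller did not touch."""


_UNSET = _Unset()


@dataclass
class DispatchUpdate:
    """All fields optional; the sentinel (not None) means "leave unchanged"."""

    start_time: object = _UNSET
    payload: object = _UNSET


def changed_fields(update: DispatchUpdate) -> dict[str, object]:
    """Check every attribute one by one, as discussed above."""
    return {
        field.name: getattr(update, field.name)
        for field in fields(update)
        if getattr(update, field.name) is not _UNSET
    }
```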
I could also use the top suggestion and use `_` instead of `.` as the path separator.
For what exactly?
For making `**new_fields: Any` possible.
Signed-off-by: Mathias L. Baumann <mathias.baumann@frequenz.com>
Signed-off-by: Mathias L. Baumann <mathias.baumann@frequenz.com>
pylint still reports some problems; apparently it can't find the generated files?