Updating the status of TuyaDevice using the response of the set_dps() call #55
Conversation
I'm doing a re-write/clean up of pytuya. Anyway, I'm not sure what the "cache" is needed for? When are we ever retrieving the cached value? It could only ever happen when doing the regular update (as the platforms themselves are never allowed to do any I/O anymore). So I don't really see a use case. Are you trying to optimize the case when the user changes something (e.g. turns off a light) and then update the state without having to ask the device?
Yes, exactly. I found out that when set_dps() is called, the device responds with a list of the DPs whose status has changed. So, it would be correct to update the device status immediately and forward the status update by invoking status_updated() on the involved entities, rather than re-asking the device for a status update. What do you think?
I agree, let's go that route for now. We might be able to go another way later, but this should be solid for now. As you have probably figured out, we need to send this update to all entities, as we don't know which entities depend on the changed DPs.
OK, the fact is that it is not very clear to me when the UI is updated. Anyway, I am reading that even when using asyncio some caching has to be done: https://developers.home-assistant.io/docs/asyncio_working_with_async , so I guess the caching should be kept in some way...
# NOW WE SHOULD TRIGGER status_updated FOR ALL ENTITIES
# INVOLVED IN result["dps"] :
# for dp in result["dps"]:
#     have status_updated() called....
I think you should do something like this:
from homeassistant.helpers.dispatcher import async_dispatcher_send
...
signal = f"localtuya_{self._interface.id}"
async_dispatcher_send(hass, signal, self._cached_status["status"])
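On the receiving side, each entity would then subscribe to that signal. A minimal sketch (assuming a LocalTuyaEntity-style class; the helper and attribute names here are illustrative, not the actual localtuya code):

from homeassistant.core import callback
from homeassistant.helpers.dispatcher import async_dispatcher_connect
from homeassistant.helpers.entity import Entity

class LocalTuyaEntity(Entity):
    # only the dispatcher-related parts are sketched here

    async def async_added_to_hass(self):
        """Subscribe to status updates pushed by the device wrapper."""
        signal = f"localtuya_{self._device_id}"
        self.async_on_remove(
            async_dispatcher_connect(self.hass, signal, self._handle_status_update)
        )

    @callback
    def _handle_status_update(self, status):
        """Cache the pushed status and refresh the HA state without extra I/O."""
        self._status = status
        self.status_updated()
        self.async_write_ha_state()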
Yeah I thought of this, but the fact is that we don't have access to the hass object here... or am I wrong??
Nah, not right now. You should pass it to the constructor and save it.
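Something along these lines (a sketch only; the exact constructor signature is an assumption, not the actual TuyaDevice code):

class TuyaDevice:
    """Device wrapper; hass is now passed in and saved for later dispatching."""

    def __init__(self, hass, interface):
        self._hass = hass            # kept so async_dispatcher_send(self._hass, ...) can be used later
        self._interface = interface
        self._cached_status = {}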
Yeah, that's what I was thinking. I'll try this...
OK it seems to be working now!
I'll push this PR, then we'll wait for #58 before merging everything.
@@ -216,6 +216,61 @@ def _send_receive(self, payload):
        s.close()
        return data

    def _decode_received_data(self, data, is_status):
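For context, one way such a helper can work is sketched below. This is an assumption-laden sketch of the general idea for Tuya protocol 3.1, where encrypted replies carry a "3.1" prefix, a 16-character signature, and base64-encoded AES-ECB data; it is not necessarily what this PR implements:

import base64
import json
from Crypto.Cipher import AES  # pycryptodome

def decode_payload(payload, local_key):
    """Return the JSON body of a device reply, decrypting it when needed."""
    if payload.startswith(b"3.1"):
        payload = payload[len(b"3.1") + 16:]               # drop version prefix and signature
        cipher = AES.new(local_key, AES.MODE_ECB)
        payload = cipher.decrypt(base64.b64decode(payload))
        payload = payload[: -payload[-1]]                  # strip PKCS#7 padding
    return json.loads(payload.decode())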
As I mentioned, I'm cleaning up pytuya quite a bit and would greatly appreciate it if we could wait with these changes until I'm done. It will be a lot easier to solve this conflict here than to do it on my end.
OK no problem, let's wait for your part to be ready.
For a polling platform, the update() method is called periodically by Home Assistant; it's important to remember that it is the only method allowed to do I/O. So yes, caching is in some sense always needed. But the cache you have in ...
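To illustrate the point with a minimal sketch (hypothetical names, not the actual localtuya code): update() is the only place that touches the network, and the state properties read the cached copy.

from homeassistant.components.switch import SwitchEntity

class CachedSwitch(SwitchEntity):
    """Polling entity: update() does the I/O, properties read the cache."""

    def __init__(self, device):
        self._device = device
        self._status = {}            # cache, refreshed by update()

    def update(self):
        # The only method that is allowed to block on network I/O.
        self._status = self._device.status()

    @property
    def is_on(self):
        # Properties must be fast and I/O-free, so they only read the cache.
        return self._status.get("dps", {}).get("1")   # DP 1 used as an example index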
OK now I am beginning to get it... just one curiosity: in ..., instead of using ...?
The receiving code expects the status value and not a function returning it. As we are in a sync context here, we need to run it via an executor thread, which is exactly what ... does.
OK, I was thinking that the ...
In the end, yes. Sounds nice, the persistent connection will hopefully solve a bunch of problems!
@postlund, GitHub tells me that this branch cannot be rebased, how am I supposed to merge this?
@rospogrigio While on this branch, try:
When you run into a conflict, just fix it and do ...
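A typical flow for that kind of rebase looks roughly like this (the remote and branch names are assumptions, not taken from the thread):

git fetch origin
git rebase origin/master
# fix the conflicting files, then
git add <fixed files>
git rebase --continue
git push --force-with-lease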
Merged #55 into master.
* change var `gateway` to `fake_gateway`
* Fix: `set dp service` and support set list of dps (rospogrigio#55)
  Give the user the ability to change multiple values with one command, e.g.
  value:
    - 1: true
    - 2: false
Optimizing the set_dps calls.

With the previous implementation, TuyaDevice.set_dps() just clears the cached status, resulting in a delay before the status is updated. Actually, the call returns a JSON with the new status of the changed DPs, such as:

{'devId': 'xxxxxx', 'dps': {'7': False}}

This PR introduces (and fixes) the decryption of the response of the TuyaInterface.set_dps() call, and uses the returned JSON to update the TuyaDevice cached status.

TODO (@postlund please help): once the cached status is updated, we probably need to call the status_updated() method for each entity involved, right? Which is the best way to do this? I don't know whether it's better to dispatch a signal, to call the methods directly, or whatever. BTW, I think the involved entity should be just the one that triggered the set_dps (at least, that's what happens for switches and covers), but I also think we'd better keep it general.
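For what it's worth, one possible shape of this (a sketch with assumed names and signatures, not this PR's final code) is to merge the returned DPs into the cache and fan the update out via the dispatcher:

from homeassistant.helpers.dispatcher import dispatcher_send

class TuyaDevice:
    # only the relevant part is sketched

    def set_dps(self, state, dp_index):
        """Set a DP and push the device's reply into the cached status."""
        result = self._interface.set_dps(state, dp_index)
        if result and "dps" in result:
            # merge only the DPs the device reported as changed
            self._cached_status["dps"].update(result["dps"])
            # notify all entities of this device (dispatcher_send is the sync
            # counterpart of async_dispatcher_send)
            dispatcher_send(self._hass, f"localtuya_{self._interface.id}", self._cached_status)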
PS, I've tried to do this by defining a set_dps() method in the LocalTuyaEntity class and having it called by the platform instead of the interface's one, but it doesn't work, or at least it doesn't solve the bug mentioned in #51.