
Updating the status of TuyaDevice using the response of the set_dps() call #55

Merged
rospogrigio merged 8 commits into master from setdps_updates_status on Sep 30, 2020

Conversation

rospogrigio
Owner

@rospogrigio rospogrigio commented Sep 28, 2020

Optimizing the set_dps calls.
With the previous implementation, TuyaDevice.set_dps() just cleared the cached status, resulting in a delay before the status was updated.

In fact, the call returns a JSON payload with the new status of the changed DPs, such as:
{'devId': 'xxxxxx', 'dps': {'7': False}}
This PR introduces (and fixes) the decryption of the response of the TuyaInterface.set_dps() call, and uses the returned JSON to update the TuyaDevice cached status.
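
As a minimal sketch of the idea (the attribute and argument names here are illustrative, not the exact code):

def set_dps(self, state, dps_index):
    """Set a DP and refresh the cached status from the device's reply."""
    result = self._interface.set_dps(state, dps_index)
    # result looks like {'devId': 'xxxxxx', 'dps': {'7': False}}
    if result and "dps" in result:
        self._cached_status["dps"].update(result["dps"])
    return result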

TODO ( @postlund please help):
once the cached status is updated, we probably need to call the status_updated() method for each entity involved, right? What is the best way to do this? I don't know whether it's better to dispatch a signal, to call the methods directly, or something else.
BTW, I think the involved entity should be just the one that triggered the set_dps (at least, that's what happens for switches and covers), but I also think we'd better keep it general.
PS: I've tried to do this by defining a set_dps() method in the LocalTuyaEntity class and having it called by the platform instead of the interface's one, but it doesn't work, or at least it doesn't solve the bug mentioned in #51.

@postlund
Collaborator

I'm doing a rewrite/cleanup of pytuya that I hope to be able to push later tonight. The idea is to make the code more general, implement some missing stuff (like sequence numbers) and prepare for the migration to asyncio. Maybe we can have a look at what changes after that.

Anyway, I'm not sure what the "cache" is needed for? When are we ever retrieving the cached value? It could only ever happen when doing the regular update (as the platforms themselves are never allowed to do any I/O anymore). So I don't really see a use case. Are you trying to optimize the case where the user changes something (e.g. turns off a light) and we then update the state without having to ask the device?

@rospogrigio
Owner Author

Yes, exactly: I found out that when set_dps() is called, the device responds with a list of the DPs whose status has changed. So, it would be correct to update the device status immediately and forward the status update by invoking status_updated() on the involved entities, rather than re-asking the device for a status update. What do you think?

@postlund
Collaborator

I agree, let's go that route for now. We might be able to go another way later, but this should be solid now. As you probably have figured out, we need to send this update to all entities, as we don't know which entities depend on the changed DP. A sensor could be dependent on it as well, for instance, so we need to make sure it's propagated everywhere.

@rospogrigio
Owner Author

OK, the fact is that it is not very clear to me when the UI is updated. Anyway, I am reading that even with asyncio some caching has to be done: https://developers.home-assistant.io/docs/asyncio_working_with_async , so I guess caching shall be kept in some way...

# NOW WE SHOULD TRIGGER status_updated FOR ALL ENTITIES
# INVOLVED IN result["dps"] :
# for dp in result["dps"]:
# have status_updated() called....
Collaborator

I think you should do something like this:

from homeassistant.helpers.dispatcher import async_dispatcher_send
...
signal = f"localtuya_{self._interface.id}"
async_dispatcher_send(hass, signal, self._cached_status["status"])
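
On the receiving side, each entity would subscribe to that signal, roughly like this (a sketch; _device_id and _status_updated are illustrative names):

from homeassistant.helpers.dispatcher import async_dispatcher_connect

async def async_added_to_hass(self):
    """Subscribe to status updates for this device."""
    signal = f"localtuya_{self._device_id}"
    self.async_on_remove(
        async_dispatcher_connect(self.hass, signal, self._status_updated)
    )

def _status_updated(self, status):
    """Cache the new status and tell Home Assistant the state changed."""
    self._status = status
    self.schedule_update_ha_state()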

Owner Author

Yeah I thought of this, but the fact is that we don't have access to the hass object here... or am I wrong??

Collaborator

Nah, not right now. You should pass it to the constructor and save it.
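
Something like this, just as a sketch (the constructor arguments are illustrative):

class TuyaDevice:
    def __init__(self, hass, interface):
        # Keep a reference to hass so we can dispatch signals later
        self._hass = hass
        self._interface = interface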

Owner Author

Yeah, that's what I was thinking. I'll try this...

Owner Author

OK it seems to be working now!
I'll push this PR, then we'll wait for #58 before merging everything.

@@ -216,6 +216,61 @@ def _send_receive(self, payload):
s.close()
return data

def _decode_received_data(self, data, is_status):
Collaborator

As I mentioned, I'm cleaning up pytuya quite a bit and would greatly appreciate it if we wait with these changes until I'm done. It will be a lot easier to solve the conflict here than doing it on my end.

Owner Author

OK no problem, let's wait for your part to be ready.

@postlund
Collaborator

> OK, the fact is that it is not very clear to me when the UI is updated. Anyway, I am reading that even with asyncio some caching has to be done: https://developers.home-assistant.io/docs/asyncio_working_with_async , so I guess caching shall be kept in some way...

For a polling platform, the `update` method is called every `scan_interval`, after which the entity state is updated (which is then propagated to everyone listening, e.g. the UI). When not polling, something else updates the information stored in the entity and the entity must then tell Home Assistant that the state has changed. This is done with `schedule_update_ha_state`:

https://github.com/rospogrigio/localtuya-homeassistant/blob/e673340229a1148996075b6f3c493c24ffaceb9c/custom_components/localtuya/common.py#L151

For polling platforms it's important to remember that the only method allowed to do I/O is update (or async_update). No other method or property can do that, as it would block the event loop. So update must store the state inside the entity class and make sure all the methods and properties return data from there. It doesn't matter if the code is sync or async, the same rule applies. This is extremely reasonable if you think about it: it would soon become really slow (and spammy) if you were to, let's say, iterate all entities to find the ones in a specific state, and retrieving the state from each entity triggered an external call somewhere.
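
As an illustration, a hypothetical polling switch (the _device attribute and DP index "1" are just examples):

from homeassistant.components.switch import SwitchEntity

class ExampleSwitch(SwitchEntity):
    def update(self):
        """The only place I/O is allowed: fetch and cache the status."""
        self._status = self._device.status()

    @property
    def is_on(self):
        """Properties must only return cached data, never do I/O."""
        return self._status["dps"]["1"]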

So yes, caching is in some sense always needed. But the cache you have in TuyaDevice is for the call that is only allowed to be performed in update. Since we always want update to retrieve the latest status, the cache isn't really useful. That's kinda what I'm getting at. With the changes made here we can make use of it, though.

@rospogrigio
Owner Author

rospogrigio commented Sep 28, 2020

OK now I am beginning to get it... just one curiosity: in

async def update_state(now):
    status = None
    try:
        status = await hass.async_add_executor_job(device.status)
    except Exception:
        _LOGGER.debug("update failed")

    signal = f"localtuya_{entry.data[CONF_DEVICE_ID]}"
    async_dispatcher_send(hass, signal, status)

unsub_track = async_track_time_interval(
    hass, update_state, timedelta(seconds=POLL_INTERVAL)
)

Instead of using async_add_executor_job, would it have been possible to pass device.status directly to async_dispatcher_send? Or did you need to do it this way in order to catch the Exception?

@postlund
Collaborator

The receiving code expects the status value, not a function returning it. And since device.status is a blocking (sync) call, we need to run it in an executor thread, which is exactly what async_add_executor_job allows us to do.

@rospogrigio
Owner Author

> The receiving code expects the status value, not a function returning it. And since device.status is a blocking (sync) call, we need to run it in an executor thread, which is exactly what async_add_executor_job allows us to do.

OK, I was thinking that the async_dispatcher_send could then be made from within device.status(), but maybe there is some issue in doing so? I'm not asking to do it like this; I'm just trying to better understand the async mechanism.
Another question: shouldn't set_dps() be async as well? It's doing I/O, right? Or am I missing something?
Anyway, after I introduced the async_dispatcher_send() within the set_dps() call, the status update is super fast now. I just keep getting "104 connection reset by peer" errors from my plug, but I believe that the persistent connection will fix them.

@postlund
Collaborator

The async_dispatcher_send just sends some data to a specific signal, and anyone listening to that signal will receive the data. So it's technically possible to pass on a reference to the update method, but that would mean that all entities would receive it (since they listen to that signal) and would need to call it to get the status. This of course means that we gain nothing, as all platforms would request status just like before, defeating the goal of doing just one update. So it has to be done here.

In the end, yes, set_dps should also be async. But since Home Assistant provides mechanisms for using either a sync or an async library, we should stay with the sync version for now. Once pytuya is converted to asyncio we can change a few methods to be async. In general, sync methods can call other potentially blocking sync methods, but async methods are never allowed to do that. Using executors to run sync code is the way to solve that, but we should shove as much of that into the library as possible. Since asyncio has native support for sockets (e.g. for TCP), we won't need executors at all in the end.
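
Something along these lines, just as a sketch (not the actual pytuya code; 6668 is the usual local Tuya port):

import asyncio

async def send_receive(host, payload):
    """Native asyncio TCP exchange, no executor thread needed."""
    reader, writer = await asyncio.open_connection(host, 6668)
    writer.write(payload)
    await writer.drain()
    data = await reader.read(1024)
    writer.close()
    await writer.wait_closed()
    return data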

Sounds nice, the persistent connection will hopefully solve a bunch of problems!

@rospogrigio
Owner Author

rospogrigio commented Sep 30, 2020

@postlund Looks like #58 already had everything sorted out pretty well... this is working super OK in my environment: status is updated super fast when giving commands, without the need to ask for a status update. Please review; after this I'll move on to merging #10.

@rospogrigio
Owner Author

rospogrigio commented Sep 30, 2020

@postlund , GitHub tells me that this branch cannot be rebased, how am I supposed to merge this?
Edit: shall I use Squash and merge?

@postlund
Collaborator

@rospogrigio While being on this branch, try:

$ git fetch origin
$ git rebase origin/master

When you run into a conflict, just fix it and do git add on the files (don't commit!), then continue with git rebase --continue. Repeat until done. You can then force push the update again.
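
So the loop is roughly:

$ git add <files with resolved conflicts>
$ git rebase --continue
# repeat until the rebase finishes, then:
$ git push --force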

@rospogrigio rospogrigio merged commit 2e46d65 into master Sep 30, 2020
@rospogrigio rospogrigio deleted the setdps_updates_status branch September 30, 2020 09:51
@rospogrigio
Owner Author

Merged #55 into master.

PaulCavill pushed a commit to PaulCavill/localtuya that referenced this pull request May 9, 2024
* change var `gateway` to `fake_gateway`

* Fix: `set dp service` and support set list of dps (rospogrigio#55)

Give the user the ability to change multi values with 1 command e.g.
value:
  - 1: true
  - 2: false