
Get status once per device instead of per platform #51

Merged
merged 2 commits into master from single_connection
Sep 27, 2020

Conversation

postlund
Collaborator

This is the one that needs some serious testing.

@rospogrigio
Owner

I am getting (sorry, too late to debug for me, going to bed...):
2020-09-26 22:53:28 ERROR (MainThread) [homeassistant.components.cover] Error while setting up localtuya platform for cover
Traceback (most recent call last):
  File "/root/home-assistant/homeassistant/lib/python3.7/site-packages/homeassistant/helpers/entity_platform.py", line 201, in _async_setup_platform
    await asyncio.gather(*pending)
  File "/root/home-assistant/homeassistant/lib/python3.7/site-packages/homeassistant/helpers/entity_platform.py", line 310, in async_add_entities
    await asyncio.gather(*tasks)
  File "/root/home-assistant/homeassistant/lib/python3.7/site-packages/homeassistant/helpers/entity_platform.py", line 481, in _async_add_entity
    await entity.add_to_platform_finish()
  File "/root/home-assistant/homeassistant/lib/python3.7/site-packages/homeassistant/helpers/entity.py", line 522, in add_to_platform_finish
    self.async_write_ha_state()
  File "/root/home-assistant/homeassistant/lib/python3.7/site-packages/homeassistant/helpers/entity.py", line 296, in async_write_ha_state
    self._async_write_ha_state()
  File "/root/home-assistant/homeassistant/lib/python3.7/site-packages/homeassistant/helpers/entity.py", line 317, in _async_write_ha_state
    if not self.available:
  File "/root/etc2/custom_components/localtuya/cover.py", line 84, in available
    return self._available
AttributeError: 'LocaltuyaCover' object has no attribute '_available'

@rospogrigio
Owner

And I still occasionally get these errors:
Failed to receive data from 192.168.1.26. Raising Exception.
Failed to update status of device [192.168.1.26]

@postlund
Collaborator Author

That was a relic override of `available`. I removed it and pushed again, so you can try again now.

Maybe the absence of connection problems was just a lucky shot on my end.
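For readers following along, the crash in the traceback above can be reduced to a minimal sketch (class names simplified; this is not the actual localtuya code): an `available` property override reads an attribute that `__init__` never sets, so the first state write raises `AttributeError`. Removing the relic override, or initializing the attribute up front, fixes it:

```python
class Entity:
    """Minimal stand-in for Home Assistant's Entity base class."""

    @property
    def available(self):
        return True


class BrokenCover(Entity):
    """Relic override: reads self._available, which is never initialized,
    so the first call to `available` raises AttributeError."""

    @property
    def available(self):
        return self._available


class FixedCover(Entity):
    """Initialize the attribute before Home Assistant can read `available`."""

    def __init__(self):
        self._available = False

    @property
    def available(self):
        return self._available
```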

@rospogrigio
Owner

I've done some research, and it seems this behavior might be caused by opening and closing the socket for each request: codetheweb/tuyapi#84 .
So I'd merge this PR and implement the persistent socket immediately to check whether it solves the problem.
Do you want me to do it or do you want to do it yourself?
Let me know!
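The idea referenced in codetheweb/tuyapi#84 can be sketched as a small wrapper that opens the socket once and reuses it for every request, reconnecting only after a failure. This is an illustrative sketch, not localtuya's actual protocol code; the `connect` parameter is injected so the behavior can be exercised without a real device:

```python
import socket


class PersistentConnection:
    """Keep one socket open and reuse it for every request, instead of
    opening and closing a connection per request (illustrative only)."""

    def __init__(self, address, connect=socket.create_connection):
        self._address = address
        self._connect = connect
        self._sock = None

    def _ensure_connected(self):
        # Connect lazily on first use; afterwards reuse the same socket.
        if self._sock is None:
            self._sock = self._connect(self._address)
        return self._sock

    def request(self, payload):
        sock = self._ensure_connected()
        try:
            sock.sendall(payload)
            return sock.recv(4096)
        except OSError:
            # Drop the broken socket so the next request reconnects.
            self._sock = None
            raise
```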

@rospogrigio rospogrigio merged commit caaf884 into master Sep 27, 2020
@rospogrigio rospogrigio deleted the single_connection branch September 27, 2020 12:19
@rospogrigio
Owner

PR #51 merged into master.

@rospogrigio
Owner

> So I'd merge this PR, and implement the persistent socket immediately to check if it solves.
> Do you want me to do it or do you want to do it yourself?

I've sketched something, and it does seem to keep the connection alive more robustly...

@postlund
Collaborator Author

I agree, we should definitely move towards a persistent connection. I think I included that in my "future" issue (#15), along with cleaning up and converting to asyncio. If you have something up and running worth looking at, open a PR and we can look at it together 😊

@rospogrigio
Owner

OK @postlund , but before we move to the persistent connection, I have something else to submit to your attention. In detail:

  1. there is a serious bug in this PR. What I am experiencing is that if I toggle a switch (or send a cover command), in the web interface the switch bounces back to its original status until the status of the device is polled, and only then is the switch status updated correctly. I verified that this is not present in the previous commit. Please confirm that you are experiencing the same issue.
  2. I created a new PR (see Updating the status of TuyaDevice using the response of the set_dps() call #55) that uses the response of the set_dps() call to update the TuyaDevice status. It needs your review and support; please let me know what you think.

After we've fixed these two, we can move on to the persistent connection.
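The approach described in point 2 can be sketched roughly like this (hypothetical names; this is not the actual PR #55 code): merge the `set_dps()` response into the cached status, so entities reflect the change immediately instead of bouncing back until the next poll:

```python
class TuyaDeviceSketch:
    """Sketch: update the cached status from the set_dps() response
    instead of waiting for the next scheduled poll (hypothetical API)."""

    def __init__(self, api):
        self._api = api    # assumed to expose set_dps(value, dp_index)
        self._status = {}  # cached DPS values, e.g. {"1": False}

    def status(self):
        return self._status

    def set_dp(self, value, dp_index):
        response = self._api.set_dps(value, dp_index)
        # The device echoes the updated DPS; merging it keeps entities
        # (switches, covers, ...) in sync without an extra status fetch.
        if response and "dps" in response:
            self._status.update(response["dps"])
```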

@rospogrigio
Owner

rospogrigio commented Sep 28, 2020

Jeez, I've struggled really hard to comply with the lint and pydocstyle remarks; don't you think some rules are a bit too strict?
Also, how can I run these tools locally so I can pre-test before pushing garbage?
Thank you

Edit: never mind, I figured it out and now I have set up my environment for local tests, but I still believe that some rules are really cumbersome....

@postlund
Collaborator Author

As long as you don't exceed 88 characters per line, running `black .` should sort everything out for you. Most other things are common sense or PEP8. Anything in particular you feel is too strict? I added a short text to the wiki; maybe we can let that grow over time.
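For pre-testing locally, a small helper can run whichever of the CI linters are installed and skip the rest. This is a sketch, assuming the standard CLI entry points of `black`, `flake8` and `pydocstyle` (installable with `pip install black flake8 pydocstyle`); the target path is illustrative:

```python
import importlib.util
import subprocess

# Tool name (importable module) paired with the command to run it.
CHECKS = [
    ("black", ["black", "--check", "custom_components/localtuya"]),
    ("flake8", ["flake8", "custom_components/localtuya"]),
    ("pydocstyle", ["pydocstyle", "custom_components/localtuya"]),
]


def run_local_checks(checks=CHECKS):
    """Run each lint tool that is installed; return {name: returncode},
    with None for tools that are not installed locally."""
    results = {}
    for name, cmd in checks:
        if importlib.util.find_spec(name) is None:
            results[name] = None  # tool not installed, skipped
            continue
        results[name] = subprocess.run(cmd).returncode
    return results
```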

@rospogrigio
Owner

Well, things like "Words must be imperative, not in 3rd person", or "missing trailing period"... I found them a bit too much 😆

@rospogrigio
Owner

PS, what about the bug I reported?

@postlund
Collaborator Author

Yeah, ok, some of the pydoc stuff can be a bit much.... 😉

Yeah, you are right regarding the bug. Since we now have an external loop that updates everything, service calls can't trigger additional state changes. The trivial way would be to trigger an update after changing a DP. This has a small cost of course, since we have to fetch all DPS again, but it only happens when changing a DP, which is very seldom when amortized over time (a drop in the ocean). But since you have already made a more optimized version, we can probably go that route as well. We just need to send out the updated status (full cache) to all entities. I can add some comments in your other PR.
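The "trivial way" described above can be sketched like this (hypothetical names; not the actual entity code): after a service call changes a DP, immediately fetch the full status again so the entity state reflects the change before the next scheduled poll:

```python
class EntitySketch:
    """Sketch of the trivial fix: re-fetch the full status right after
    a service call changes a DP (hypothetical device interface)."""

    def __init__(self, device):
        self._device = device  # assumed: set_dp(value, dp), fetch_status()
        self._status = {}

    def turn_on(self):
        self._device.set_dp(True, 1)
        # Small extra cost: one full status fetch per state change,
        # instead of waiting for the external update loop.
        self._status = self._device.fetch_status()

    @property
    def is_on(self):
        return self._status.get("1", False)
```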

PaulCavill pushed a commit to PaulCavill/localtuya that referenced this pull request May 9, 2024
* Rework SubDevices Connection

* Fix some stuff in payload_dict for sub devices `3.3`

* Don't send a disconnect warning when the disconnect is intended

* Hotfix for `3.3` devices `detect_available_dps`

* Add sensors for code `wsdcg`

* Attempt Fix: some dps aren't detected.

* Adjust `detect_available_dps` Function when gateway existed

* int `t` since sub_devices payload `t` is int

* Revert last commit: force int `"t"` always, needed for some cmds

* test: disable removing the [ gwID, devId and uid ]

* Disconnect the sub devices if gateway dc'd

* ensure that the gateway is ready before setup subdevices

* Fix: Block status_updated if gateway is sub_device

* Add gateway_gwId to discovered sub devices. and prevent non local subdevices

* Store Gateway ID If found on mergeDevicesList

* minor changes of connect on initialization

* config_entry_by_device_id search for gateways

* Minor fix: Rename climates.py in tuya devices data

* Tuya Devices Data rename binary_sensor

* Adjust Auto Configure category `wkf`

* Fix automatic update `ip` and includes sub_devices

* Except errors from pytuya.

* Except error on config flow and set timeout for connect.

* Fix: Entity_Category and convert get_gateway to async

* Fix Migrate: Force Int for config_flow values and fix reverse always off

* Fix: Fake Gateway fails due to no DPS found on parent DPS.

* reformat unload function.

* Fix Reconfigure cloud step

* Fix error if "Auto Configure" used without cloud.

* Add platform support for humidifiers and Rename DPS, and DPS_CONF Functions (rospogrigio#47)

* Add support for humidifier platform

* Add Humidifier to Auto Configure feature.

* Improve English translations to be more user friendly rospogrigio#45 by @codyc1515

* Adjust `en` translation for humidifier

* mark set_humiditiy as optional

* Log the disconnect reason if it exists

* Adjust get_gateway function

* adjust error msgs in auto configure

* Refactor: entities initialization msg and remove dp_value_conf (rospogrigio#49)

* Initialized msg to `common.py` and del: `dp_value_conf`

* def should_pull to _attr_should_pull and refactor restore_on_reconnect

* refactor entity category

* Refactor: `device_info` and properties annotation

* Improve entry initialization (rospogrigio#50)

* Connecting `CloudAPI` runs on background without interrupt initialization.

* Unload: pulls platforms from `TuyaDevice` data Explain: rospogrigio#50

* Improves config flow (rospogrigio#51)

* DPS Data now will be pulled with devices data, data stored in `device_id` -> in `devices_list` with `dps_data` key.
* Adding new device DPS Data will be pulled from stored data `except the value!`
* Devices list in config flow now will be sorted: Starting from known devices first.
* Refresh cloud devices data when configure opened. `previously reload was needed in order to pull new devices data.`
* Handle the errors if refreshing token failed.

* Enable manually enter template filename (rospogrigio#52)

* The templates list field is now insertable and searchable, so if you add a new template into the `templates` directory you can manually enter its name without needing to load the templates at HA boot.
* Template names are now shown the same as the filename, with extension.

* update pytuya version

* Revert author name

* Sort discovered devices by `ip`

* Handle sorting discovered devices better

* refactor: devices list sorting

* adjust dc reason log

* refactor: mergeDevicesList

* new helper `get_gateway_by_id` refactor mergeDeivceList

* typo

* * Refactor codes a little bit to make it easier to maintain. (rospogrigio#53)

* * Refactor codes a little bit to make it easier to maintain.
* cloud_api now has async_connect function.
* Localtuya HASS Data is now stored as namedtuple `HassLocalTuyaData`
* Removed `TUYA_DEVICES`, `DATA_CLOUD` and `UNSUB LISTENERS`
* Stored all unsub_callbacks in tuya_devices.unsub_listeners
* Reconnect, if disconnected, will be called after 2 seconds.

* Fix reconnect after 2 seconds.

* Hide reconfigured device if no devices are set up (rospogrigio#54)