Integration of 4.1.0 #197
Conversation
Remove pydantic v1 workaround. We will fully migrate from Cerberus to Pydantic for 4.1.0 anyway, so we can remove this hack
migrate persistent state
add fixes for current validation models for v2
Move persistent state load/dump logic to type class
Move state and partial state in one file, pydantify state
Implement an elegant, fully typed way of doing partial updates without having TypedDicts everywhere
migrate PLC state to pydantic
migrate PLC specification
use tuples instead of lists in PLC specification
Rename `State` to `PyraCoreState` and `PersistentState` to `PyraCoreStatePersistent`
rename upload meta
Upgrade pydantic version
Implement beautiful new config
Refactor pyra core for new config
Refactor tests for new config
Add function to read the last line efficiently
Use function
Parse the exact Julian datetime, not only the date, from the last log line
Add config parameter to specify whether to ignore log lines that are too old
Issue a restart if specified and the last log line is too old
Improve function to read last line
Make output of "cli config get" command colorful and indented by default. Add a path-checking flag to this command
Add more explanation about the threshold to the UI.
Do not send out config updates for security reasons. Do not send more than 50 lines of logs.
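The "read the last line efficiently" commits above can be sketched roughly like this (a minimal sketch only; the function name and chunking strategy are illustrative, not Pyra's actual implementation):

```python
import os


def read_last_line(path: str) -> str:
    """Read the last non-empty line of a file without loading the whole file.

    Seeks backwards from the end of the file in growing chunks until the tail
    chunk contains a newline (or the whole file has been read).
    """
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        file_size = f.tell()
        if file_size == 0:
            return ""
        chunk_size = 256
        data = b""
        while True:
            read_size = min(chunk_size, file_size)
            f.seek(file_size - read_size)
            data = f.read(read_size)
            # strip trailing newlines so we find the last *non-empty* line
            stripped = data.rstrip(b"\n")
            if b"\n" in stripped or read_size == file_size:
                break
            chunk_size *= 2
        return data.rstrip(b"\n").split(b"\n")[-1].decode("utf-8")
```

This keeps I/O proportional to the length of the last line rather than the file size, which matters for large log files polled every core interval.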
Force-pushed from 9e00269 to 6f63586
Merge system state and tum plc state
Move "system is measuring" bar to the top of the overview tab
Simplify and reposition measurement mode
Add `TUMPLCLogger` class
Write out data right after PLC read
```python
def perform_camera_power_cycle(self) -> None:
    """Performs a power cycle of the camera every day at midnight: Turn off
    the camera power at midnight system time. Turn on the camera power two
    minutes later.

    Exact logic: Whenever the time is between 00:00 and 00:15, and the last
    power down was on a different day or never logged (core has been stopped
    since last power cycle), power down the camera. Two minutes later, power
    up the camera. This works because this function should be run at least
    every 10 minutes (`config.general.seconds_per_core_interval` is <= 600).

    It leads to multiple power cycles if the core is restarted between 00:00
    and 00:15, but this doesn't break any hardware. The only way to circumvent
    this is adding it to the persistent state, which is not worth it right now."""

    current_time = datetime.datetime.now()

    if (current_time.hour == 0) and (current_time.minute < 15):
        if (self.last_camera_power_down_time is None) or (
            self.last_camera_power_down_time.date() < current_time.date()
        ):
            self.last_camera_power_down_time = current_time
            self.last_camera_power_up_time = None
            self.plc_interface.set_power_camera(False)
            logger.info("Powering down the camera.")

    if (self.last_camera_power_down_time is not None) and (
        self.last_camera_power_up_time is None
    ):
        if (current_time - self.last_camera_power_down_time).total_seconds() > 120:
            self.last_camera_power_up_time = current_time
            self.plc_interface.set_power_camera(True)
            logger.info("Powering up the camera.")
```
This seems a bit complicated.
You should be able to switch the power of the camera to False on a certain trigger.
In a future main-loop call you can read the PLC state for the camera power == False and turn the camera power back on.
Do you have a proposal?
I would go like this:

```
init: next_camera_shutdown = "next day 00:01"

during main loop:
    if read_camera_power == off:
        set_camera_power = on
    if next_camera_shutdown reached:
        set_camera_power = off
        next_camera_shutdown = "next day 00:01"
```
And we also need to start the script to show the camera after a power cycle.
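The trigger-based approach proposed in this thread could look roughly like the sketch below. This is an illustration only: `read_camera_power`/`set_camera_power` and the `CameraPowerCycler` class are assumed names, not Pyra's actual PLC interface.

```python
import datetime


def next_shutdown_time(now: datetime.datetime) -> datetime.datetime:
    """Next day at 00:01, as proposed in the comment above."""
    return datetime.datetime.combine(
        now.date() + datetime.timedelta(days=1),
        datetime.time(hour=0, minute=1),
    )


class CameraPowerCycler:
    """Sketch of the proposed main-loop logic: if the camera is found powered
    off, power it back on; if the scheduled shutdown time has passed, power
    it off and schedule the next shutdown for the following day."""

    def __init__(self, plc) -> None:
        # `plc` is assumed to offer read_camera_power() / set_camera_power()
        self.plc = plc
        self.next_camera_shutdown = next_shutdown_time(datetime.datetime.now())

    def run(self) -> None:
        now = datetime.datetime.now()
        if not self.plc.read_camera_power():
            # camera was powered off by a previous loop iteration: turn it on
            self.plc.set_camera_power(True)
        elif now >= self.next_camera_shutdown:
            # scheduled power-down is due: turn it off, reschedule
            self.plc.set_camera_power(False)
            self.next_camera_shutdown = next_shutdown_time(now)
```

Because the power-on happens in a later main-loop call, the "two minutes later" delay from the original function becomes implicit in the loop interval, and no `last_camera_power_down_time` bookkeeping is needed.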
```python
assert sys.platform == "win32"

if self.config["opus"]["experiment_path"] != self.current_experiment:
    if interfaces.StateInterface.read_persistent()["active_opus_macro_id"] == None:
```
I don't remember why this was in there. But as far as I can tell, the experiment is loaded into OPUS during the OPUS startup in the morning. After that, it should indeed only matter to start the macro during measurements and handle the experiment only if it changes.
Let's keep an eye on this during the integration, but it seems fine, AFAIK.
It took me a while to find it. We have the problem that OPUS answers in cryptic characters (#124), so when starting OPUS, we do not get back a macro ID. Hence, this condition was always true. I think that until now, changing the experiment path while a macro was running did not work.
I added comments for the Core. Overall, the typing changes improve readability a lot - good job.
Integration:
What is the current status of local integration tests?
UI:
We already discussed the UI. Thanks for implementing the changes.
Let's remember to do a final grammar check before releasing.
Smaller things:
We could think about mentioning on the website or in the README that Pyra was added to Open Sustainable Technology (https://github.com/protontypes/open-sustainable-technology).
We could also consider adding an active user list (+ feedback) to the website.
```python
if motor_position_state == "valid":
    logger.debug("CamTracker motor position is valid.")

if restart_camtracker:
```
We might need to add a timer here. CamTracker shutdown and init take some time, and if the loop revisits this part too quickly it might end up in an infinite restart loop.
Same as in the comment below: this validation only starts after CamTracker has been running for at least 5 minutes.
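The timer suggested in this thread could be a simple cooldown guard around the restart, for example like the sketch below (an illustration only; `RestartGuard` and its attribute names are hypothetical, not part of Pyra's code):

```python
import datetime
from typing import Optional


class RestartGuard:
    """Only allow a restart if enough time has passed since the last one, so
    a slow CamTracker shutdown/init cannot cause an infinite restart loop."""

    def __init__(self, cooldown_seconds: float = 300) -> None:
        self.cooldown_seconds = cooldown_seconds
        self.last_restart_time: Optional[datetime.datetime] = None

    def restart_allowed(self) -> bool:
        now = datetime.datetime.now()
        if self.last_restart_time is not None:
            elapsed = (now - self.last_restart_time).total_seconds()
            if elapsed < self.cooldown_seconds:
                # still within the cooldown window: suppress the restart
                return False
        self.last_restart_time = now
        return True
```

A 5-minute cooldown would line up with the "running for at least 5 minutes" condition the validation already uses.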
```python
    logger.debug("TUM enclosure cover position is unknown.")
if cover_position_state == "invalid":
    logger.info("TUM enclosure cover is closed.")
    if self.config.camtracker.restart_if_cover_remains_closed:
```
The cover is controlled by the PLC reading the CamTracker motor position. As a shutdown and startup can take some time, and the cover is synced accordingly, this might lead to issues.
Good catch! If you look at lines 54 to 80, that validation is only used when the measurements should be running and CamTracker has already been running for 5 minutes.
That is true. In 7b9a965 I added a state in which it does nothing but wait for the shutdown. I actually expect this to solve a few other issues as well.
```python
if restart_camtracker:
    logger.info("Stopping CamTracker. Preparing for reinitialization.")
    self.stop_sun_tracking_automation()

def ct_application_running(self) -> bool:
```
It might be a good idea to harmonize it with the `opus_is_running()` function.
Good idea! I replaced it in ef2d9b7.
Clean up log statements and code
When starting CamTracker, wait until it has properly shut down
Use new technique from Opus to find a running instance in CamTracker
Update API reference
Code Quality Improvements:
- Use `toast.promise` from `react-hot-toast` (https://react-hot-toast.com/docs/toast#advanced)

Core Improvements:
- `DEBUG` logs in email too (#183)

Opus Improvements:

Helios Improvements:

Upload Improvements:

CamTracker Improvements:

UI Improvements:
- `config.json`: Add buttons to open and copy `config.json` (#182)