Improve screen api performance #748
Conversation
Codecov Report
Patch coverage:

Additional details and impacted files

@@            Coverage Diff             @@
##             main     #748      +/-   ##
==========================================
+ Coverage   72.77%   72.79%   +0.01%
==========================================
  Files         470      470
  Lines       30522    30592      +70
  Branches      868      868
==========================================
+ Hits        22213    22268      +55
- Misses       8220     8235      +15
  Partials       89       89

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information.
Looks like this adds a really quick in-memory cache for simultaneous telemetry lookups resulting from screens calling get_tlm_values. Nice!
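The "really quick in-memory cache" described above can be sketched as a small time-bounded memoizer. This is a hypothetical illustration (the class and method names here are invented, not the actual OpenC3 implementation): cache a value per key, and only recompute once the entry is older than the timeout.

```ruby
# Hypothetical sketch of a short-lived in-memory cache for telemetry
# lookups; TlmCache and fetch are illustrative names, not OpenC3 API.
class TlmCache
  def initialize(timeout)
    @timeout = timeout # seconds a cached value stays fresh
    @store = {}        # key => [value, time_cached]
  end

  # Return the cached value if it is younger than @timeout; otherwise
  # recompute it with the supplied block and cache the result.
  def fetch(key)
    value, cached_at = @store[key]
    now = Time.now.to_f
    return value if cached_at && (now - cached_at) < @timeout
    value = yield
    @store[key] = [value, now]
    value
  end
end
```

A screen issuing many simultaneous lookups would then hit the backend once per cache window: `cache.fetch('INST__HEALTH__TEMP1') { expensive_redis_call }`.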
@@ -28,6 +28,12 @@ class AuthModel
    PRIMARY_KEY = 'OPENC3__TOKEN'
    SERVICE_KEY = 'OPENC3__SERVICE__TOKEN'

+   TOKEN_CACHE_TIMEOUT = 5
I forget all the timeout values ... access token is valid for 5min? This is 5s right? Why this value?
This is how long an old password will still work. No effect on Enterprise, only open source.
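The "old password still works" behavior can be sketched like this. It is an illustrative toy (the `verify` method and its logic are invented stand-ins, not the actual AuthModel code): a successful verification is cached, so a password change is not noticed until that cache entry ages past the timeout.

```ruby
# Illustrative sketch, NOT the real AuthModel: caching a successful
# verification means an old password keeps working for up to
# TOKEN_CACHE_TIMEOUT seconds after a password change.
TOKEN_CACHE_TIMEOUT = 5 # seconds a verified token stays cached

def verify(token, current_password)
  @token_cache ||= {} # token => time of last successful verification
  cached_at = @token_cache[token]
  # Cache hit: skip the real check entirely, even if the password changed
  return true if cached_at && (Time.now.to_f - cached_at) < TOKEN_CACHE_TIMEOUT
  if token == current_password # stand-in for the real credential check
    @token_cache[token] = Time.now.to_f
    true
  else
    @token_cache.delete(token)
    false
  end
end
```

After a password change, `verify(old_password, new_password)` still returns true until the cached entry expires, which matches the behavior described above.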
@@ -66,30 +66,30 @@ module Api
    # @param args [String|Array<String>] See the description for calling style
    # @param type [Symbol] Telemetry type, :RAW, :CONVERTED (default), :FORMATTED, or :WITH_UNITS
    # @return [Object] The telemetry value formatted as requested
-   def tlm(*args, type: :CONVERTED, scope: $openc3_scope, token: $openc3_token)
+   def tlm(*args, type: :CONVERTED, cache_timeout: 0.1, scope: $openc3_scope, token: $openc3_token)
      target_name, packet_name, item_name = tlm_process_args(args, 'tlm', scope: scope)
0.1 is basically just to cache multiple requests in the same cycle but still be responsive to data changing at 10Hz?
That's why I chose that as default. It still supports 10Hz, but any faster queries will use the cache.
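The trade-off described above can be checked with a rough simulation (illustrative code, not OpenC3's): query at various rates against a 0.1 s cache window and count how many queries actually reach the backend. Integer milliseconds are used to avoid floating-point edge cases at the window boundaries.

```ruby
# Simulate queries at a fixed period against a time-bounded cache and
# count backend hits (cache misses). Illustrative, not OpenC3 code.
def backend_calls_for(query_period_ms, cache_timeout_ms, duration_ms)
  calls = 0
  cached_at = nil
  (0...duration_ms).step(query_period_ms) do |t|
    if cached_at.nil? || (t - cached_at) >= cache_timeout_ms
      calls += 1 # cache miss: hit the backend and refresh the cache
      cached_at = t
    end
  end
  calls
end
```

With a 100 ms cache, 10 Hz queries all pass through (10 backend calls per second), while 20 Hz or 100 Hz queries are absorbed down to the same 10 backend calls, matching the "still supports 10Hz" rationale.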
    now = now.to_f
unused?
It's used at line 137.
      if hash.empty?
        Store.hdel("#{scope}__override__#{target_name}", packet_name)
      else
        Store.hset("#{scope}__override__#{target_name}", packet_name, JSON.generate(hash.as_json(:allow_nan => true)))
      end
    end

+   def self.determine_latest_packet_for_item(target_name, item_name, cache_timeout: 0.1, scope: $openc3_scope)
Not sure if you've characterized this speed up but do many people use LATEST? Seems like a bunch of work to speed up an infrequently used use case. Looks good anyway.
It's way faster. LATEST was not optimized at all and could require 100s of Redis queries. Now it only queries the exact packets that contain the item.
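The LATEST idea described above can be sketched as follows. This is a hypothetical simplification (the method name and arguments are invented, not the real `determine_latest_packet_for_item`): instead of querying every packet, first narrow to the packets that actually define the item, then pick whichever of those was received most recently.

```ruby
# Illustrative sketch of resolving LATEST for an item: only the packets
# that define the item are considered, then the most recently received
# one wins. Names and argument shapes are invented for this example.
def latest_packet_for_item(item_name, packet_items, packet_times)
  # packet_items: packet_name => [item names], packet_times: packet_name => rx time
  candidates = packet_items.select { |_pkt, items| items.include?(item_name) }.keys
  candidates.max_by { |pkt| packet_times[pkt] }
end
```

If an item appears in 2 packets out of 100, this touches 2 entries rather than all 100, which is the source of the speedup described above.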
-     cvt_items = items.map { |item| "#{target_name}__#{packet_name}__#{item}__#{type}" }
-     current_values = CvtModel.get_tlm_values(cvt_items, stale_time: stale_time, scope: scope)
+     cvt_items = items.map { |item| [target_name, packet_name, item, type] }
+     current_values = CvtModel.get_tlm_values(cvt_items, stale_time: stale_time, cache_timeout: cache_timeout, scope: scope)
Are you changing this method to take an array of arrays of strings? It used to be an array of strings with the underscores.
Yes, this was just an optimization to prevent splitting twice.
Won't that break existing code?
If anyone is using CvtModel.get_tlm_values, this is a breaking change. That's an internal API, though.
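The before/after argument shapes discussed in this thread look roughly like this (the item names are made-up examples). Old callers joined the four parts with `__` and the model had to split them back apart; the new shape passes the parts pre-split, so no re-splitting happens inside the model.

```ruby
# Hypothetical before/after of the get_tlm_values argument shape.
def split_item(joined)
  joined.split('__') # old style: one split per item, per call
end

# Old shape: joined strings.    New shape: pre-split arrays.
OLD_STYLE = ['INST__HEALTH_STATUS__TEMP1__CONVERTED']
NEW_STYLE = [['INST', 'HEALTH_STATUS', 'TEMP1', 'CONVERTED']]
```

The new shape is exactly what the old shape split into, which is why external callers of the internal API would break: they must now do (or skip) the split themselves.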
No description provided.