
tools/rerun: streaming to one Viewer from multiple processes #32595

Merged
3 commits merged into commaai:master from the rerun branch on Jun 3, 2024

Conversation

bongbui321 (Contributor) commented Jun 2, 2024

There can be multiple `init` calls, but the Viewer should only be spawned once; the other processes can then log their data to the same Viewer through the designated port.
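A minimal sketch of the intended pattern with the rerun Python SDK; this is not the PR's code, and the application id, worker names, and entity paths below are hypothetical:

```python
# Sketch only: assumes rerun-sdk's rr.init / rr.spawn / rr.connect.
# "rerun_tool", worker names, and entity paths are hypothetical.
import rerun as rr

PORT = 9876  # default SDK server port, matching the log output below

def main_process():
    rr.init("rerun_tool")   # register an application id for this process
    rr.spawn(port=PORT)     # spawn the single Viewer and connect to it

def worker_process(name: str):
    rr.init("rerun_tool")                # same application id, no new Viewer
    rr.connect(f"127.0.0.1:{PORT}")      # stream into the already-running Viewer
    rr.log(f"{name}/status", rr.TextLog("worker connected"))
```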

After the change, there is no more bloated log output, and the processes just stream their logged data to the port:

[2024-06-02T14:05:45Z INFO  re_sdk_comms::server] Hosting a SDK server over TCP at 0.0.0.0:9876. Connect with the Rerun logging SDK.
[2024-06-02T14:05:45Z INFO  winit::platform_impl::platform::x11::window] Guessed window scale factor: 1
[2024-06-02T14:05:45Z WARN  wgpu_hal::vulkan::instance] Unable to find extension: VK_EXT_swapchain_colorspace
Getting route log paths
[2024-06-02T14:05:45Z INFO  re_sdk_comms::server] New SDK client connected: 127.0.0.1:45872
MESA-INTEL: warning: Performance support disabled, consider sysctl dev.i915.perf_stream_paranoid=0

[2024-06-02T14:05:46Z INFO  egui_wgpu] There were 3 available wgpu adapters: {backend: Vulkan, device_type: DiscreteGpu, name: "NVIDIA GeForce GTX 1650", driver: "NVIDIA", driver_info: "535.171.04", vendor: 0x10DE, device: 0x1F9D}, {backend: Vulkan, device_type: Cpu, name: "llvmpipe (LLVM 12.0.0, 256 bits)", driver: "llvmpipe", driver_info: "Mesa 21.2.6 (LLVM 12.0.0)", vendor: 0x10005}, {backend: Vulkan, device_type: IntegratedGpu, name: "Intel(R) Xe Graphics (TGL GT2)", driver: "Intel open-source Mesa driver", driver_info: "Mesa 21.2.6", vendor: 0x8086, device: 0x9A49}
  0%|                                                                                                                                                                                  | 0/13 [00:00<?, ?it/s]
[2024-06-02T14:05:54Z INFO  re_sdk_comms::server] New SDK client connected: 127.0.0.1:57324
[2024-06-02T14:05:54Z INFO  re_sdk_comms::server] New SDK client connected: 127.0.0.1:57338
[2024-06-02T14:05:54Z INFO  re_sdk_comms::server] New SDK client connected: 127.0.0.1:57340
[2024-06-02T14:05:54Z INFO  re_sdk_comms::server] New SDK client connected: 127.0.0.1:57346
[2024-06-02T14:05:55Z INFO  re_sdk_comms::server] New SDK client connected: 127.0.0.1:57356
[2024-06-02T14:05:55Z INFO  re_sdk_comms::server] New SDK client connected: 127.0.0.1:57362
[2024-06-02T14:05:56Z INFO  re_sdk_comms::server] New SDK client connected: 127.0.0.1:57366
[2024-06-02T14:05:57Z INFO  re_sdk_comms::server] New SDK client connected: 127.0.0.1:57368

Will do further improvements

github-actions bot added the tools label on Jun 2, 2024
github-actions bot commented Jun 2, 2024

Thanks for contributing to openpilot! In order for us to review your PR as quickly as possible, check the following:

  • Convert your PR to a draft unless it's ready to review
  • Read the contributing docs
  • Before marking as "ready for review", ensure:
    • the goal is clearly stated in the description
    • all the tests are passing
    • the change is something we merge
  • include a route or your device's dongle ID if relevant

adeebshihadeh (Contributor) commented:

What does this improve? It's still allocating tons of memory.

bongbui321 changed the title from "tools/rerun: improve rerun" to "tools/rerun: streaming to one Viewer from multiple processes" on Jun 2, 2024
bongbui321 (Contributor, Author) commented Jun 2, 2024

This is more a correctness fix for how we spawn and stream data to a Viewer than a memory-usage optimization. Previously, each process created a new Viewer and blueprint, which tried to overwrite the Viewer and blueprint created before it. Now we spawn only one Viewer, and the other processes just stream data to it.

The spawn() API looks like this: `def spawn(*, port=9876, connect=True, memory_limit='75%', hide_welcome_screen=False, default_blueprint=None, recording=None)`, so the default memory limit is 75%. The large memory usage is probably due to how we log data; I'll need to investigate further to see whether lowering the memory limit hurts performance.
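If lowering the limit helps, the change would be a one-liner; a sketch, assuming spawn() accepts the same memory-limit strings as the Viewer ('75%', '2GB', ...):

```python
import rerun as rr

rr.init("rerun_tool")                    # hypothetical application id
rr.spawn(port=9876, memory_limit="2GB")  # cap the Viewer at 2GB instead of 75% of RAM
```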

We could also just save the log to a temp file and load it into rerun from there, which is what PlotJuggler does.
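A sketch of that alternative, assuming rr.save() for writing an .rrd file (the path below is hypothetical); the Viewer is then pointed at the file instead of a live TCP stream:

```python
import rerun as rr

rr.init("rerun_tool")          # hypothetical application id
rr.save("/tmp/route_log.rrd")  # hypothetical path: route all logged data into a file

# ... rr.log(...) calls as usual ...

# Later, open the file in the Viewer:
#   rerun /tmp/route_log.rrd
```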

adeebshihadeh merged commit f717e1e into commaai:master on Jun 3, 2024
18 checks passed
bongbui321 deleted the rerun branch on June 3, 2024 at 13:33
Edison-CBS pushed a commit to Edison-CBS/openpilot that referenced this pull request Sep 15, 2024
…#32595)

* one spawn only

* one blueprint

* comment
old-commit-hash: f717e1e