Add zoo_helper CI/CD workflows for linux-x86_64 and linux-arm64 #1216
base: v3_develop
Conversation
Thanks!
Left a few questions/suggestions, but if it's too much work we can merge mostly as is.
CMakeLists.txt (outdated)
Maybe better to separate this out into its own folder & CMakeLists.txt and depend directly on DAI to avoid duplication.
I mean doing target_link_libraries(zoo_helper depthai_core).
We should be able to get core pretty lean in size if everything is disabled.
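For illustration, a minimal sketch of the suggested layout, assuming a dedicated zoo_helper/ subdirectory added via add_subdirectory and that depthai_core is the library target name (both assumptions, not the actual project layout):

```cmake
# Hypothetical zoo_helper/CMakeLists.txt, pulled in from the top-level
# CMakeLists.txt with add_subdirectory(zoo_helper).
add_executable(zoo_helper zoo_helper.cpp)

# Reuse the already-built core library instead of recompiling its sources.
target_link_libraries(zoo_helper PRIVATE depthai_core)
```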
I would love to keep this the way it is; it was a deliberate decision not to have the depthai-core library as a direct dependency. The reasoning behind this decision was that zoo_helper is made up of just a handful of source files, whereas depthai-core contains hundreds of files. If depthai-core were a direct dependency, every time we wanted to build zoo_helper we would have to rebuild the entirety of core. And even if we opt for a minimal build, it certainly takes longer than building just the handful of files zoo_helper directly depends on.
What is your take on this? The goal of the above reasoning is to minimize compile times in case only the zoo_helper binary is to be produced.
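For context, a minimal sketch of what this standalone setup might look like; the source file names and the cpr::cpr target are assumptions, not the actual file list in this PR:

```cmake
# Hypothetical standalone build for zoo_helper (file names are assumptions).
cmake_minimum_required(VERSION 3.20)
project(zoo_helper_standalone CXX)

find_package(cpr CONFIG REQUIRED)  # pulls in curl/openssl when built statically

# Compile only the handful of sources zoo_helper needs, instead of all of depthai-core.
add_executable(zoo_helper
    src/zoo_helper.cpp          # assumed entry point
    src/modelzoo/Zoo.cpp        # assumed source shared with depthai-core
)

target_compile_features(zoo_helper PRIVATE cxx_std_17)
target_link_libraries(zoo_helper PRIVATE cpr::cpr)
```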
I generally think the size of the binary should be smaller; the current ~9MB is too much for what the tool does.
Ideally the size of the binary should be less than a few KB, but given we want to link statically, which is necessary for ease of use, the size will be a bit bigger.
The bulk of the size likely comes from openssl, cpr, and curl, which are all statically compiled.
Thanks!
This PR adds two workflows for compiling zoo_helper binaries for linux-x86_64 and linux-arm64 systems. The resulting binary is stored in its own folder on Artifactory, with its identifier being the current commit's hash. It has minimal dependencies, so it should be possible to scp the appropriate binary (based on the target architecture) to the target machine and start using it.
The workflow itself uses almalinux8 containers because of their relatively old glibc library. As glibc is backward compatible, this ensures that newer systems can use the produced binary as well. In our case, we should be able to support, among others, any Ubuntu operating system with version >= 20.04. almalinux8 was chosen because its EOL is relatively far in the future: https://wiki.almalinux.org/release-notes/#almalinux-os-8
Action example: https://github.com/luxonis/depthai-core/actions/runs/12896599874
glibc versions on different Linux distributions: https://repology.org/project/glibc/versions
Clickup task: https://app.clickup.com/t/86c0kymut
Let me squash the commits later, this is a mess 😄
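For readers unfamiliar with the setup, a rough sketch of what one such workflow could look like, assuming a CMake-based build and a placeholder upload step; the actual workflow files in this PR will differ in names, steps, and the Artifactory upload mechanism.

```yaml
# Hypothetical sketch of one of the two workflows (not the actual file in this PR);
# job name, build commands, and the upload step are assumptions.
name: zoo_helper-linux-x86_64
on:
  workflow_dispatch:
  push:
    branches: [v3_develop]

jobs:
  build:
    runs-on: ubuntu-latest
    container: almalinux:8   # relatively old glibc, so the binary also runs on newer distros
    steps:
      - uses: actions/checkout@v4
      - name: Build zoo_helper
        run: |
          cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
          cmake --build build --target zoo_helper --parallel
      - name: Upload to Artifactory keyed by commit hash
        run: echo "would upload build/zoo_helper to <artifactory>/zoo_helper/${GITHUB_SHA}/"  # placeholder
```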