Describe the bug
After updating an OpenShift cluster from a much older version of the kubectl-build-deploy-dind image to v2.5.0, we noticed that a specific project with several hundred routes now takes multiple hours to build, where it previously took ~20 minutes. The overwhelming majority of this time is spent with the build pod sitting at ~100% CPU usage (1 core) churning through the exec-routes-generation.sh script.
To Reproduce
Steps to reproduce the behavior:
1. Create an example project with several hundred routes in its `.lagoon.yml`
2. Deploy the project
3. See the build take multiple hours
4. ????
5. PROFIT
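To reproduce step 1 without hand-writing hundreds of entries, a throwaway script can generate the routes block. This is a minimal sketch: the `.lagoon.yml` layout shown (one `nginx` service under a `main` environment, `siteN.example.com` domains) is a simplified assumption for illustration, not the exact Lagoon schema.

```python
def routes_yaml(count: int) -> str:
    """Generate a simplified .lagoon.yml routes section with `count` domains.

    The structure here is an illustrative assumption; adapt it to the
    project's real environments and service names.
    """
    lines = [
        "environments:",
        "  main:",
        "    routes:",
        "      - nginx:",
    ]
    lines += [f'          - "site{i}.example.com"' for i in range(count)]
    return "\n".join(lines)

# Write a few hundred routes to paste into the test project's .lagoon.yml.
print(routes_yaml(300))
```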
Expected behavior
Deployments should not take multiple hours, regardless of the number of domains present in a project/environment.
Screenshots
N/A
Additional context
Given the massive number of domains in this project, it seems that what we're seeing here is the same substantial amount of data being read and processed many times over. It would be better to store this data once it has been read, so that we don't need to read the `.lagoon.yml` file, pipe it to `shyaml`, and filter for the specific bit we need, over and over. Maybe this route-generation activity is better suited to a higher-level language like Python?
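To illustrate the parse-once idea, here is a minimal Python sketch: instead of shelling out to `shyaml` once per route, the whole document is parsed a single time and the routes are read from the in-memory tree. It assumes PyYAML, and the `load_routes` helper and the simplified `.lagoon.yml` structure below are hypothetical, not part of Lagoon.

```python
import yaml  # PyYAML; assumed available

# Simplified, illustrative .lagoon.yml fragment (not the exact Lagoon schema).
LAGOON_YML = """
environments:
  main:
    routes:
      - nginx:
          - example.com
          - www.example.com:
              tls-acme: true
"""

def load_routes(doc: str, environment: str) -> list:
    """Parse the YAML once and return a flat list of route domains."""
    data = yaml.safe_load(doc)
    routes = []
    for service_block in data["environments"][environment]["routes"]:
        for service, entries in service_block.items():
            for entry in entries:
                if isinstance(entry, str):
                    # Bare domain entry.
                    routes.append(entry)
                else:
                    # Mapping entry: {domain: {options...}}.
                    routes.extend(entry.keys())
    return routes

print(load_routes(LAGOON_YML, "main"))
```

With the parsed tree held in memory, each of the hundreds of routes costs a dictionary lookup rather than a fresh file read, subprocess spawn, and full YAML parse.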
This would have been resolved by #3133, but that functionality was reverted as part of the work done in #3181 to resolve issues we saw with badly formatted docker-compose.yml files. For now, this issue remains unresolved as of Lagoon release 2.8.4.