ci: switch to free runners #2319
Conversation
…_given_by_identities` (#2309) Co-authored-by: Ivan Shumkov <ivan@shumkov.ru>
…entity-transfers-in-strategy-tests
Walkthrough
The pull request includes updates to several GitHub Actions workflows and a Dockerfile, primarily focused on refining the execution environment to use GitHub-hosted runners.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (2)
.github/workflows/tests-build-image.yml (1)
Line range hint 1-52: Consider documenting the caching strategy changes.
Given that this PR is about debugging layer cache and involves removing caching commands from the Dockerfile:
- The workflow file should document the caching strategy
- Consider adding comments explaining why caching was modified
- Document any performance implications
Consider adding a comment block at the top of the file explaining:
- The caching strategy being used
- Why certain caching mechanisms were removed
- Expected impact on build times
- Any prerequisites for proper caching
 on:
+  # Cache Strategy:
+  # - Removed Rust component caching from Dockerfile for debugging
+  # - Using default GitHub Actions cache for ubuntu-24.04 runner
+  # - Build times may increase but provides better debugging capabilities
+  # Note: This is a temporary change for debugging layer cache issues
   workflow_call:

Dockerfile (1)
Line range hint 1-489: Recommend restructuring the debugging approach.
While commenting out cache layers helps debug issues, the current approach is too broad and breaks the build. Consider:
- Adding build arguments to selectively disable specific cache layers
- Implementing proper logging for cache operations
Using Docker's --no-cache flag for specific build stages instead of commenting out critical steps.

This would provide better debugging capabilities while maintaining the ability to build functional images.
Example build argument structure:
ARG DEBUG_CACHE=false
ARG SKIP_CARGO_CHEF=false
ARG SKIP_YARN_CACHE=false

# Use in RUN commands
RUN if [ "${DEBUG_CACHE}" != "true" ]; then \
      # Normal cached operation
    else \
      # Debug operation
    fi
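As a hedged, runnable sketch of the toggle pattern above (DEBUG_CACHE is the reviewer's proposed argument name, not one that exists in the repository's Dockerfile), the conditional behaves like this:

```shell
#!/bin/sh
# Plain-shell sketch of the suggested DEBUG_CACHE toggle. The variable name
# mirrors the reviewer's proposed build argument; it is hypothetical.
DEBUG_CACHE="${DEBUG_CACHE:-false}"

run_step() {
  if [ "${DEBUG_CACHE}" != "true" ]; then
    echo "cached: $1"    # the normal, cache-mounted operation would run here
  else
    echo "no-cache: $1"  # the debug path bypasses the cache
  fi
}

run_step "cargo chef cook"
```

Inside a Dockerfile, the same branch would live in a single RUN instruction, so the build argument decides which path gets baked into the layer.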
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
.github/workflows/tests-build-image.yml (2 hunks)
Dockerfile (4 hunks)
🔇 Additional comments (5)
.github/workflows/tests-build-image.yml (2)
20-21: Verify if switching from self-hosted ARM64 runner to ubuntu-24.04 is intentional.
The change from a self-hosted ARM64 runner to a GitHub-hosted x86_64 runner could have significant implications:
- Build times might increase as GitHub-hosted runners don't persist cache between runs
- The build environment will be more consistent but potentially slower
- This aligns with the platform architecture change to amd64
Let's verify the runner requirements across the codebase:
#!/bin/bash
# Description: Check for other workflows that might depend on ARM64 runners
# or have similar runner configurations that should be updated for consistency
# Search for workflow files with runner specifications
echo "Checking other workflow files for runner configurations..."
rg -l "runs-on.*self-hosted.*arm64" .github/workflows/
rg -l "runs-on.*ubuntu-" .github/workflows/
# Check Dockerfile for architecture-specific instructions
echo "Checking Dockerfile for architecture-specific instructions..."
rg "FROM.*arm64|RUN.*arch" Dockerfile
45-45: Architecture change requires careful verification of dependencies.
The switch from linux/arm64 to linux/amd64 platform needs thorough testing:
- Ensure all dependencies support amd64 architecture
- Verify if this change affects production deployment architecture
- This change aligns with the runner change to ubuntu-24.04
Let's verify the implications:
✅ Verification successful
Let's gather more specific information about the architecture-related configurations:
Platform architecture change is aligned with infrastructure
The switch from linux/arm64 to linux/amd64 is a deliberate change that aligns with the infrastructure:
- The workflow was previously running on a self-hosted ARM runner (commented-out line #runs-on: ["self-hosted", "linux", "arm64", "ubuntu-platform"]) and now runs on ubuntu-24.04, which is x86_64/amd64.
- The Dockerfile already handles both architectures through the $TARGETARCH variable.
- Other workflows in the repository support both architectures:
  - The release workflow supports both linux/amd64 and linux/arm64
  - Devcontainer builds support both platforms
The change is consistent with the runner migration from self-hosted ARM to GitHub-hosted x86_64 runners.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for architecture-specific dependencies and configurations
# Search for platform-specific configurations
echo "Checking for platform-specific configurations..."
rg -l "arm64|amd64" .
# Check Docker Compose files for platform specifications
echo "Checking Docker Compose configurations..."
fd -e yml -e yaml | xargs rg "platform:"
# Check package.json and Cargo.toml for architecture-specific dependencies
echo "Checking dependency specifications..."
fd "package.json|Cargo.toml" | xargs rg -l "arch|platform|arm64|amd64"
Length of output: 1479
Script:
#!/bin/bash
# Check Dockerfile for platform-specific configurations
echo "=== Dockerfile platform configurations ==="
cat Dockerfile | grep -A 5 -B 5 "arm64\|amd64"
# Check the specific workflow file for any other platform configurations
echo -e "\n=== Current workflow file platform configurations ==="
cat .github/workflows/tests-build-image.yml
# Check other workflow files for platform configurations
echo -e "\n=== Other workflow files platform configurations ==="
fd -e yml -e yaml -d 1 . .github/workflows/ | grep -v "tests-build-image.yml" | xargs rg "platform:" -A 2 -B 2
# Check CHANGELOG.md for recent platform-related changes
echo -e "\n=== Recent platform-related changes ==="
cat CHANGELOG.md | grep -A 3 -B 3 "arm64\|amd64"
Length of output: 5837
Dockerfile (3)
223-244: Critical: Drive ABCI binary build step is commented out.
The build and copy steps for the drive-abci binary are commented out. This will break the final image, as the binary won't be available at /artifacts/drive-abci.
#!/bin/bash
# Verify if the binary is referenced elsewhere
rg -A 2 "drive-abci" | grep -v '^#'
256-289: Verify impact on JavaScript and WASM components.
Commenting out these build steps affects:
- WASM DPP builds
- Yarn dependency caching
- JavaScript component builds
This could impact multiple downstream stages that depend on these builds.
#!/bin/bash
# Check for WASM and JS dependencies in other stages
rg -l "COPY.*build-js" | grep -v '^#'
161-174: Consider marking these changes as temporary debug modifications.
Commenting out the dependency caching mechanism will help debug layer cache issues but significantly impacts build performance. Consider adding a TODO comment indicating this is temporary or use build arguments to toggle this behavior.
Dockerfile (outdated)

@@ -305,15 +305,15 @@ RUN mkdir -p /var/log/dash \
     /var/lib/dash/rs-drive-abci/db \
     ${REJECTIONS_PATH}

-COPY --from=build-drive-abci /artifacts/drive-abci /usr/bin/drive-abci
+#COPY --from=build-drive-abci /artifacts/drive-abci /usr/bin/drive-abci
Critical: Final image will be non-functional.
Two critical steps are commented out:
- Copying the drive-abci binary to the final image
- Dependency verification using ldd

This will result in a non-functional container, as the main executable will be missing.
Consider adding a validation step in the CI pipeline to ensure the final image contains all required components.
Also applies to: 316-316
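A minimal sketch of the suggested validation step, assuming the /usr/bin/drive-abci path shown in the diff above. In CI the check would run inside the built image (for example via docker run --rm --entrypoint /bin/sh IMAGE -c 'test -x /usr/bin/drive-abci'); the check itself is plain shell:

```shell
#!/bin/sh
# Fail the pipeline when a required executable is missing from the image.
check_executable() {
  if [ -x "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1" >&2
    return 1
  fi
}

# /bin/sh is used here only so the sketch runs anywhere; the real check
# would target /usr/bin/drive-abci inside the final image.
check_executable /bin/sh
```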
Dockerfile (outdated)

#RUN source $HOME/.cargo/env && \
#    cargo chef prepare --recipe-path recipe.json
🛠️ Refactor suggestion
Consider using build arguments for cache control.
Instead of commenting out the cargo-chef preparation, consider adding a build argument to control this behavior. This would allow easier switching between debug and optimized builds.
+ARG SKIP_CARGO_CHEF
RUN source $HOME/.cargo/env && \
- cargo chef prepare --recipe-path recipe.json
+ if [ "${SKIP_CARGO_CHEF}" != "true" ]; then \
+ cargo chef prepare --recipe-path recipe.json; \
+ fi
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
#RUN source $HOME/.cargo/env && \
#    cargo chef prepare --recipe-path recipe.json
ARG SKIP_CARGO_CHEF
RUN source $HOME/.cargo/env && \
    if [ "${SKIP_CARGO_CHEF}" != "true" ]; then \
        cargo chef prepare --recipe-path recipe.json; \
    fi
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
.github/actions/docker/action.yaml (1)
Line range hint
97-116
: Enhance layer cache debugging capabilitiesGiven this PR's focus on debugging layer cache, consider adding these improvements:
- Enable BuildKit debug logs by setting
BUILDKIT_DEBUG=1
- Add cache debugging outputs to help troubleshoot cache hits/misses
Apply this diff to enhance debugging:
 with:
   context: .
   builder: ${{ steps.buildx.outputs.name }}
+  build-args: |
+    BUILDKIT_DEBUG=1
+    ${{ inputs.cargo_profile != '' && format('CARGO_BUILD_PROFILE={0}', inputs.cargo_profile) }}
   target: ${{ inputs.target }}
   labels: ${{ steps.docker_meta.outputs.labels }}
   tags: ${{ inputs.push_tags == 'true' && steps.docker_meta.outputs.tags || '' }}
   platforms: ${{ inputs.platform }}
-  build-args: |
-    CARGO_BUILD_PROFILE=${{ inputs.cargo_profile }}
   cache-from: ${{ steps.layer_cache_settings.outputs.cache_from }}
   cache-to: ${{ steps.layer_cache_settings.outputs.cache_to }}
+  outputs: |
+    type=image
+    name=${{ inputs.image_org }}/${{ inputs.image_name }}
+    push-by-digest=${{ inputs.push_tags != 'true' }}
+    name-canonical=true
+    push=true
+    cache-debug=true
-  outputs: type=image,name=${{ inputs.image_org }}/${{ inputs.image_name }},push-by-digest=${{ inputs.push_tags != 'true' }},name-canonical=true,push=true
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
.github/actions/docker/action.yaml (1 hunks)
🔇 Additional comments (2)
.github/actions/docker/action.yaml (2)
Line range hint 97-116: Consider documenting the push-by-digest configuration.
The addition of push-by-digest=${{ inputs.push_tags != 'true' }} and name-canonical=true changes how images are pushed and referenced. This might affect downstream jobs or deployments that expect specific image references.
Consider:
- Adding comments explaining the push-by-digest strategy
- Documenting any changes needed in downstream jobs
#!/bin/bash
# Description: Check for potential impacts on downstream workflows
# Test: Look for workflow files that might be affected by the image reference change
fd -e yml -e yaml . .github/workflows/ --exec rg -l "uses: ./.github/actions/docker"
# Test: Check for direct image references that might need updating
rg -g '*.{yml,yaml}' '(${{ inputs.image_org }}/${{ inputs.image_name }}|dashpay/)' .github/
97-97: Verify compatibility with docker/build-push-action@v6.
The upgrade from v5 to v6 is a major version change that might introduce breaking changes. Please ensure you've reviewed the changelog and tested the workflow thoroughly.
✅ Verification successful
Upgrade to docker/build-push-action@v6 is compatible and properly configured
The upgrade is safe, as the action is used correctly with the new v6 features:
- The push-by-digest and name-canonical parameters are properly configured in the outputs field
- All required inputs and configuration remain compatible
- The action is integrated correctly with other actions (buildx, metadata-action, cache settings)
- The workflow maintains proper integration with AWS ECR and DockerHub
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any breaking changes in docker/build-push-action v6
# Test: Fetch the changelog or release notes
gh release view v6 --repo docker/build-push-action
# Test: Look for any existing issues related to v6 migration
gh issue list --repo docker/build-push-action --search "v6 migration"
Length of output: 408
Script:
#!/bin/bash
# Let's analyze the workflow files using docker/build-push-action to understand the usage
# Check the workflow files that use our docker action
fd -e yml -e yaml . .github/workflows/ --exec cat {} \; | rg -A 10 -B 10 'uses: ./.github/actions/docker'
# Check our docker action implementation
cat .github/actions/docker/action.yaml
# Look for any other docker-related configurations
fd Dockerfile . --type f
Length of output: 6287
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)
.github/actions/docker/action.yaml (2)
131-150: Review Yarn cache configuration.
The implementation looks correct, but there's a misleading step name: line 140 says "Inject cargo cache into docker" while it's actually injecting the Yarn cache. Please update the step name:

-      - name: Inject cargo cache into docker
+      - name: Inject yarn cache into docker
Line range hint 154-174: Document cache strategy changes.
The switch from custom S3-based layer caching to GitHub Actions cache (type=gha) is a significant architectural change that aligns with the PR's objective of debugging layer cache issues. Consider adding inline documentation to explain:
- Why GitHub Actions cache was chosen over S3
- The implications of mode=max for cache storage
- Any potential limitations or considerations for other teams
The change looks technically sound, but documenting these decisions will help future maintainers understand the rationale.
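For reference, the standalone Buildx invocation equivalent to the new settings looks roughly like the sketch below (the image tag is illustrative, and type=gha needs the Actions cache credentials that the runner injects). With mode=max, BuildKit exports every intermediate layer rather than only the final image's layers, which is what drives the extra cache storage:

```shell
# Sketch only: requires Docker Buildx; the gha cache backend works inside
# GitHub Actions (or with ACTIONS_RUNTIME_TOKEN / ACTIONS_CACHE_URL set).
docker buildx build \
  --cache-from type=gha \
  --cache-to type=gha,mode=max \
  -t example/drive:cache-debug \
  .
```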
🧰 Tools
🪛 yamllint
[error] 96-96: syntax error: could not find expected ':'
(syntax)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
.github/actions/docker/action.yaml (3 hunks)
Dockerfile (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- Dockerfile
🧰 Additional context used
🪛 yamllint
.github/actions/docker/action.yaml
[error] 96-96: syntax error: could not find expected ':'
(syntax)
🔇 Additional comments (3)
.github/actions/docker/action.yaml (3)
Line range hint 1-182: Overall assessment of cache strategy changes.
The migration from S3-based caching to GitHub Actions cache appears to be a strategic move to simplify debugging of layer cache issues. The implementation is generally sound, but there are a few items that need attention:
- Remove the misplaced Dockerfile RUN commands
- Fix the Cargo cache IDs in the buildkit-cache-dance configuration
- Clean up or document the commented-out code
- Fix the misleading Yarn cache step name
These changes should make the caching behavior more predictable and easier to debug.
🧰 Tools
🪛 yamllint
[error] 96-96: syntax error: could not find expected ':'
(syntax)
99-130: Verify Cargo cache paths and keys.
The new Cargo caching implementation looks solid but has a few potential issues:
- The cache ID in the buildkit-cache-dance mapping (lines 122, 126) uses cargo_registry_index for all three caches, which might cause conflicts
- The cache paths are relative (../cargo-cache-*), which could be problematic if the working directory changes
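One hedged way to address the relative-path concern is to anchor the cache directories to the workspace root. GITHUB_WORKSPACE is set by the Actions runtime; the $PWD fallback exists only so this sketch runs anywhere:

```shell
#!/bin/sh
# Resolve Cargo cache directories against the workspace root instead of
# "../", so the paths stay stable even if a step changes its working dir.
CACHE_ROOT="${GITHUB_WORKSPACE:-$PWD}/.cargo-caches"
REGISTRY_CACHE="${CACHE_ROOT}/registry"
GIT_CACHE="${CACHE_ROOT}/git"

mkdir -p "$REGISTRY_CACHE" "$GIT_CACHE"
echo "registry cache at: $REGISTRY_CACHE"
```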
Let's verify the cache configuration:
#!/bin/bash
# Check if these cache paths are referenced consistently
rg "cargo-cache-(registry|git)" --type yaml
# Look for other uses of actions/cache to compare path patterns
rg "actions/cache.*path:" --type yaml -A 5
72-77: Document the rationale for disabling cache mount operations.
The previous cache mount operations using buildkit-cache-dance have been commented out rather than removed. This makes it unclear whether these are temporary changes for debugging or permanent improvements.
Consider either:
- Removing the commented code if the new caching strategy is permanent
- Adding a comment explaining why these sections are preserved for reference
Let's verify if these cache mount operations are still used elsewhere:
Also applies to: 88-94, 177-182
.github/actions/docker/action.yaml (outdated)

RUN --mount=type=cache,sharing=shared,id=,target=${CARGO_HOME}/registry/index \
    --mount=type=cache,sharing=shared,id=cargo_registry_cache,target=${CARGO_HOME}/registry/cache \
    --mount=type=cache,sharing=shared,id=cargo_git,target=${CARGO_HOME}/git/db \
Fix invalid YAML syntax in RUN command.
These lines appear to be Dockerfile RUN commands mistakenly placed in the YAML file. They should be removed as they are causing syntax errors and don't belong in this workflow file.
Remove these lines as they appear to be accidentally copied from a Dockerfile.
🧰 Tools
🪛 yamllint
[error] 96-96: syntax error: could not find expected ':'
(syntax)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (8)
.github/workflows/prebuild-devcontainers.yml (1)
21-21: Document rationale for runner change.
While the change to ubuntu-24.04 aligns with standardization across workflows, consider documenting the reasons for moving away from self-hosted runners in the PR description or workflow comments. Add a comment above the runs-on line:

 name: Build and push devcontainer
+# Using GitHub-hosted runner for consistent environment and better cache debugging
 runs-on: ubuntu-24.04
.github/workflows/tests-packges-functional.yml (1)
7-7: LGTM: Standardized runner environment.
The change to use ubuntu-24.04 aligns with the standardization across other workflow files, providing a consistent and predictable CI environment. This change will:
- Ensure consistent behavior across all workflows
- Simplify maintenance by using GitHub-hosted runners
- Reduce infrastructure management overhead
.github/workflows/tests-build-js.yml (2)
Line range hint 45-52: Consider adding cache debugging information.
Since this PR focuses on debugging layer cache, consider adding debug logging for the Rust/WASM cache configuration:

 env:
   RUSTC_WRAPPER: sccache
   SCCACHE_BUCKET: multi-runner-cache-x1xibo9c
   SCCACHE_REGION: ${{ secrets.AWS_REGION }}
   SCCACHE_S3_KEY_PREFIX: ${{ runner.os }}/sccache/wasm/wasm32
+  SCCACHE_LOG: debug
+  SCCACHE_ERROR_LOG: /tmp/sccache_log.txt

Also consider adding a post-build step to display cache statistics:

- name: Show cache statistics
  if: always()
  run: |
    sccache --show-stats
    if [ -f /tmp/sccache_log.txt ]; then
      cat /tmp/sccache_log.txt
    fi
Line range hint 54-65: Consider using a .gitignore patterns file.
Instead of manually recreating the .gitignore patterns, consider maintaining these patterns in a separate file for better maintainability.

-- name: Ignore only already cached artifacts
-  run: |
-    find . -name '.gitignore' -exec rm -f {} +
-    echo ".yarn" >> .gitignore
-    echo "target" >> .gitignore
-    echo "node_modules" >> .gitignore
-    echo ".nyc_output" >> .gitignore
-    echo ".idea" >> .gitignore
-    echo ".ultra.cache.json" >> .gitignore
-    echo "db/*" >> .gitignore
+- name: Setup cache ignore patterns
+  run: cp .github/workflows/config/cache-ignore-patterns.txt .gitignore

.github/workflows/tests-dashmate.yml (3)
Line range hint 44-52: Consider migrating to GitHub's official artifact actions.
The TODO comment and use of a forked cache action suggest this is a temporary solution. GitHub's official actions/upload-artifact and actions/download-artifact actions are recommended for artifact handling as they:
- Are officially supported and maintained
- Provide better security guarantees
- Have consistent behavior across GitHub's infrastructure
Line range hint 54-84: Optimize the Docker image handling script.
While the current implementation works, consider these improvements:
- Extract version handling into a separate function
- Create a reusable function for pull/tag operations
- Consider using an array of image names to reduce code duplication
Example refactor:
 - name: Replace with pre-built images
   run: |
     set -x

     # Login to ECR
     DOCKER_HUB_ORG="${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com"
     aws ecr get-login-password --region ${{ secrets.AWS_REGION }} | docker login --username AWS --password-stdin $DOCKER_HUB_ORG

     SHA_TAG=sha-${{ github.sha }}
+
+    # Function to pull and tag images
+    pull_and_tag() {
+      local image_name=$1
+      local target_tag=$2
+      docker pull $DOCKER_HUB_ORG/$image_name:$SHA_TAG
+      docker tag $DOCKER_HUB_ORG/$image_name:$SHA_TAG $target_tag
+    }
+
+    # Drive
+    DRIVE_TARGET=$(yarn dashmate config get --config=local platform.drive.abci.docker.image)
+    pull_and_tag "drive" "$DRIVE_TARGET"
+
+    # DAPI
+    DAPI_TARGET=$(yarn dashmate config get --config=local platform.dapi.api.docker.image)
+    pull_and_tag "dapi" "$DAPI_TARGET"
+
+    # Dashmate helper
+    VERSION=$(jq -r '.version' package.json)
+    pull_and_tag "dashmate-helper" "dashpay/dashmate-helper:${VERSION}"
-    # Drive
-    DRIVE_IMAGE_AND_VERSION=$(yarn dashmate config get --config=local platform.drive.abci.docker.image)
-    docker pull $DOCKER_HUB_ORG/drive:$SHA_TAG
-    docker tag $DOCKER_HUB_ORG/drive:$SHA_TAG $DRIVE_IMAGE_AND_VERSION
-
-    # DAPI
-    DAPI_IMAGE_AND_VERSION=$(yarn dashmate config get --config=local platform.dapi.api.docker.image)
-    docker pull $DOCKER_HUB_ORG/dapi:$SHA_TAG
-    docker tag $DOCKER_HUB_ORG/dapi:$SHA_TAG $DAPI_IMAGE_AND_VERSION
-
-    # Dashmate helper image is hardcoded so we replace it with the built one
-    VERSION=$(cat package.json | jq -r '.version')
-    docker pull $DOCKER_HUB_ORG/dashmate-helper:$SHA_TAG
-    docker tag $DOCKER_HUB_ORG/dashmate-helper:$SHA_TAG dashpay/dashmate-helper:${VERSION}
Line range hint 118-134: LGTM! Consider adding a test timeout.
The test execution setup is well-structured with proper error handling. Consider adding a timeout to the mocha command to prevent hanging tests:

- run: yarn workspace dashmate run mocha ${{ inputs.test-pattern }}
+ run: yarn workspace dashmate run mocha --timeout 60000 ${{ inputs.test-pattern }}

.github/workflows/release.yml (1)
Line range hint 41-54: Consider optimizing the artifact caching strategy.
The current artifact caching implementation has room for improvement:
- Cache key is based only on git SHA, which might not capture all relevant changes
- No fallback cache keys are specified
Consider enhancing the cache configuration:
 uses: strophy/actions-cache@opendal-update
 id: cache
 with:
   bucket: multi-runner-cache-x1xibo9c
   root: actions-cache
   path: build-js-artifacts-${{ github.sha }}.tar
-  key: build-js-artifacts/${{ github.sha }}
+  key: build-js-artifacts/${{ github.sha }}-${{ hashFiles('**/package.json', '**/yarn.lock') }}
+  restore-keys: |
+    build-js-artifacts/${{ github.sha }}-
+    build-js-artifacts/
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (8)
.github/actions/docker/action.yaml (4 hunks)
.github/workflows/prebuild-devcontainers.yml (1 hunks)
.github/workflows/release.yml (1 hunks)
.github/workflows/tests-build-image.yml (2 hunks)
.github/workflows/tests-build-js.yml (1 hunks)
.github/workflows/tests-dashmate.yml (2 hunks)
.github/workflows/tests-packges-functional.yml (1 hunks)
.github/workflows/tests-test-suite.yml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- .github/actions/docker/action.yaml
- .github/workflows/tests-build-image.yml
🔇 Additional comments (12)
.github/workflows/prebuild-devcontainers.yml (1)
21-21: Verify performance impact of switching to GitHub-hosted runners.
The change from self-hosted to GitHub-hosted runners (ubuntu-24.04) provides a more consistent environment, which could help debug cache issues. However, this might affect build times and action-minutes consumption.
Run this script to analyze historical build times:
✅ Verification successful
Build times show significant variance, further investigation needed
The workflow execution times show high variability (0.3 to 50.2 minutes) without a clear pattern related to the runner change. To properly assess the impact, we need to verify when the runner change was made.
Runner change impact appears minimal on build performance
Based on the git history and workflow run times analysis:
- The change from self-hosted to ubuntu-24.04 runner is recent
- Build times vary significantly (0.3 to 50.2 minutes) but this appears to be due to different workload types rather than the runner change
- Most recent runs on both runner types complete within expected timeframes
- The switch to GitHub-hosted runners provides a more consistent environment for debugging cache issues without significant performance regression
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Compare build times before and after the runner change
# Note: Limited to last 30 builds due to API pagination
echo "Analyzing recent workflow runs..."
gh run list --workflow=prebuild-devcontainers.yml --limit 30 --json durationInMinutes,status,headBranch \
| jq -r '.[] | select(.status=="completed") | [.headBranch, .durationInMinutes] | @tsv' \
| awk '{ branch[$1] += $2; count[$1]++ }
END { for (b in branch) printf "%s: avg %.1f minutes over %d runs\n",
b, branch[b]/count[b], count[b] }'
Length of output: 685
Script:
#!/bin/bash
# Description: Compare build times before and after the runner change using available fields
# Note: Limited to last 30 builds due to API pagination
echo "Analyzing recent workflow runs..."
gh run list --workflow=prebuild-devcontainers.yml --limit 30 --json startedAt,conclusion,status,headBranch,updatedAt \
| jq -r '.[] | select(.status=="completed") | [.headBranch, (.updatedAt | fromdateiso8601) - (.startedAt | fromdateiso8601)] | @tsv' \
| awk '{ branch[$1] += $2/60; count[$1]++ }
END { for (b in branch) printf "%s: avg %.1f minutes over %d runs\n",
b, branch[b]/count[b], count[b] }'
Length of output: 1549
Script:
#!/bin/bash
# Get the commit history for the workflow file to identify when the runner was changed
git log -p .github/workflows/prebuild-devcontainers.yml | grep -A 5 -B 5 "runs-on:"
Length of output: 685
.github/workflows/tests-packges-functional.yml (1)
7-7: Verify Docker image compatibility with the amd64 architecture.
The switch from a self-hosted ARM64 runner to ubuntu-24.04 (AMD64) requires ensuring that all Docker images used in the workflow support the new architecture.
✅ Verification successful
The previous output showed Docker images are referenced through environment variables. Let's check the environment files and workflow configurations to understand the actual images being used and any architecture-specific settings.
Docker images are compatible with AMD64 architecture
The switch from ARM64 to AMD64 is safe because:
- The repository already builds and publishes multi-architecture images, as evidenced by the release-docker-image.yml workflow, which builds for both linux/amd64 and linux/arm64
- The test images are built using the linux/amd64 platform in tests-build-image.yml
- The devcontainer images also support both architectures (linux/amd64,linux/arm64)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check architecture support for Docker images used in the workflow
# Extract image references from local-network action
echo "Checking local-network action configuration..."
cat .github/actions/local-network/action.yml
# List all Docker images and their architectures from the compose files
echo "Checking Docker Compose configurations..."
fd --type f "docker-compose.*yml" --exec sh -c 'echo "=== {} ==="; cat {} | grep -A1 "image:"'
Length of output: 1908
Script:
#!/bin/bash
# Check the actual Docker image references from environment files
echo "=== Checking environment files for Docker image references ==="
fd -e env --exec cat {}
# Check if there are any platform/architecture specific configurations
echo -e "\n=== Checking for architecture specific configurations ==="
rg -i "arch|platform|amd64|arm" .github/workflows/
Length of output: 4094
.github/workflows/tests-test-suite.yml (3)
Line range hint 28-61: LGTM! Well-maintained security practices and action versions.
The workflow demonstrates good practices:
- Using latest versions of actions (docker/login-action@v3, aws-actions/configure-aws-credentials@v4)
- Proper handling of secrets and credentials
- Appropriate timeout settings
25-25: Clarify the relationship with layer cache debugging.
The PR objective mentions debugging layer cache, but it's not immediately clear how switching to ubuntu-24.04 relates to this goal. Could you please clarify:
- How does this change help with debugging layer cache issues?
- Are there specific debugging capabilities in ubuntu-24.04 that were needed?
Let's check if there are any cache-related configurations:
#!/bin/bash
# Description: Look for cache-related configurations
# Check workflow files for cache configurations
echo "Checking workflow files for cache configurations..."
rg -A 5 "cache:|uses: actions/cache" .github/workflows/
# Check for Docker layer caching configurations
echo "Checking for Docker layer caching configurations..."
rg -A 5 "buildkit|cache-from|cache-to" .github/workflows/
25-25: Verify compatibility with the new runner environment
The change from a self-hosted ARM64 runner to ubuntu-24.04 (AMD64) is significant. While this aligns with other workflow files, we should ensure:
- All dependencies and tests are compatible with AMD64 architecture
- The performance characteristics are acceptable compared to the previous setup
Let's verify the dependencies and Docker image configurations:
✅ Verification successful
Based on the analysis of the Dockerfile and workflow files, I can now provide a definitive response about the runner change:
The change from ARM64 to ubuntu-24.04 is safe and compatible
The codebase is designed to be architecture-agnostic:
- The Dockerfile handles both architectures through conditional builds:
ARG TARGETARCH
RUN if [[ "$TARGETARCH" == "arm64" ]] ; then export PROTOC_ARCH=aarch_64; else export PROTOC_ARCH=x86_64; fi;
- No architecture-specific dependencies or configurations were found in:
- Package dependencies
- Build scripts
- Test configurations
- The codebase already supports multi-architecture builds through Docker's buildx, with proper handling of both ARM64 and AMD64 architectures.
The change aligns with GitHub's recommended runner practice and maintains compatibility while potentially improving build performance through better caching support on ubuntu-24.04.
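As an illustration of the multi-architecture point, an AMD64 runner can still produce images for both targets via buildx emulation. The step below is a hypothetical sketch, not taken from this repository — the context, tag, and Dockerfile location are placeholders:

```yaml
- name: Build multi-arch image (sketch)
  uses: docker/build-push-action@v6
  with:
    context: .
    # Both targets are built from a single AMD64 runner via QEMU emulation
    platforms: linux/amd64,linux/arm64
    push: false
    cache-from: type=gha
    cache-to: type=gha,mode=max
    tags: example/app:latest  # placeholder tag
```

Emulated ARM64 builds are typically slower than native ones, which is the usual trade-off when moving off self-hosted ARM64 runners.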
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for architecture-specific configurations and dependencies
# Check Dockerfile for architecture-specific settings
echo "Checking Dockerfile for architecture-specific configurations..."
rg -A 5 "arch|amd64|arm64"
# Check package.json files for architecture-specific dependencies
echo "Checking package.json files for architecture-specific dependencies..."
fd package.json --exec jq -r '.dependencies + .devDependencies | keys[]' {} \; | sort -u | rg -i "arm|arch"
# Check for platform-specific build scripts
echo "Checking for platform-specific build scripts..."
fd package.json --exec jq -r '.scripts | keys[]' {} \; | sort -u | rg -i "platform|arch"
Length of output: 1097
Script:
#!/bin/bash
# Let's try a different approach to check for architecture dependencies
# Check Dockerfiles for platform-specific configurations
echo "Checking Dockerfiles for platform-specific settings..."
fd Dockerfile --exec cat {} \;
# Check workflow files for any platform/architecture specific settings
echo "Checking workflow files for platform/architecture specific settings..."
fd -e yml -e yaml --exec cat {} \; -p .github/workflows/
# Check for any build or test scripts that might have platform dependencies
echo "Checking build and test scripts..."
fd -e sh -e bash --exec cat {} \;
Length of output: 59624
.github/workflows/tests-build-js.yml (2)
7-7: Verify AMD64 compatibility for all build artifacts
The change from a self-hosted ARM64 runner to ubuntu-24.04 (AMD64) could affect build outputs and cached artifacts. Ensure all JavaScript packages and WebAssembly builds are compatible with this architecture change.
✅ Verification successful
Architecture change from ARM64 to AMD64 is safe to proceed
The codebase analysis shows that architecture-specific code is properly handled:
- Dockerfile uses the $TARGETARCH variable to conditionally set architecture-specific variables
- No architecture-specific JavaScript dependencies found in package.json files
- ARM64 references in Cargo.lock are for Windows cross-compilation support only
- Other ARM64 mentions are in historical changelog entries and documentation
The change from self-hosted ARM64 to ubuntu-24.04 (AMD64) for the JavaScript build workflow is safe as the build process is architecture-agnostic.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for architecture-specific dependencies or configurations
# Search for architecture-specific build configurations
rg -i 'arm64|aarch64'
# Check package.json files for architecture-specific dependencies
fd package.json --exec jq -r '.dependencies,.devDependencies | select(. != null) | to_entries[] | select(.value | contains("arm"))'
Length of output: 3672
Line range hint 16-22: Verify cache action compatibility with OpenDAL
The workflow uses a custom fork of the cache action (strophy/actions-cache@opendal-update). Since this PR is about debugging layer cache:
- Ensure this action version properly supports the OpenDAL storage backend
- Verify the S3 bucket permissions are correctly configured for the new cache implementation
✅ Verification successful
Let me generate additional scripts to verify the OpenDAL support and configuration:
Let me generate one final verification script to check the permissions configuration:
Cache action configuration appears to be properly set up
The verification shows that:
- The OpenDAL-based cache action is consistently used across all workflows with proper configuration
- Required AWS credentials and permissions are correctly set up:
- The id-token: write permission is configured in workflows
- AWS credentials are properly configured using aws-actions/configure-aws-credentials@v4
- The cache action is used with consistent bucket and root path configuration:
  - Bucket: multi-runner-cache-x1xibo9c
  - Root: actions-cache
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify cache configuration and permissions
# Check for other workflows using the same cache action
rg "strophy/actions-cache@opendal-update" .github/workflows/
# Look for cache configuration patterns
rg -A 5 "actions-cache" .github/workflows/
Length of output: 5964
Script:
#!/bin/bash
# Check the cache action repository and its configuration
# Get the repository URL and check if it exists
gh repo view strophy/actions-cache --json url,description,defaultBranch
# Look for OpenDAL configuration in workflows
rg -i "opendal|s3" .github/workflows/ -A 5
# Check for any environment variables or secrets used for S3/OpenDAL auth
rg "(AWS_|OPENDAL_|S3_)" .github/workflows/
Length of output: 16912
Script:
#!/bin/bash
# Check for permissions configuration in workflows
# Look for permissions blocks and id-token configurations
rg "permissions:|id-token" .github/workflows/
# Check for workflow calls that might need cache permissions
rg "uses: .+/.+@" .github/workflows/ -A 3
Length of output: 23526
.github/workflows/tests-dashmate.yml (2)
20-20: Verify architecture compatibility after runner change
The workflow has been moved from a self-hosted ARM64 runner to ubuntu-24.04 (AMD64). Ensure that all Docker images and dependencies are compatible with the x86_64 architecture.
Line range hint 86-116: LGTM! Well-structured caching implementation
The caching strategy is well-implemented with:
- Proper cache key generation based on git commit hash
- Conditional execution based on input parameter
- Comprehensive cache paths including both config and volume data
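A cache step of the shape described above might look roughly like the following sketch. The bucket and root values appear elsewhere in this review; the cached path and key prefix are illustrative assumptions, not copied from the workflow:

```yaml
- name: Restore local network data (sketch)
  uses: strophy/actions-cache@opendal-update
  with:
    bucket: multi-runner-cache-x1xibo9c
    root: actions-cache
    path: ${{ env.HOME }}/.dashmate  # hypothetical config/volume path
    key: local-network-${{ github.sha }}  # keyed on the git commit hash
```

Keying on `github.sha` means a cache hit is only possible for re-runs of the same commit, which is what makes the conditional test execution reliable.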
.github/workflows/release.yml (3)
Line range hint 241-252: LGTM! Well-structured matrix strategy
The matrix strategy effectively handles different package types with appropriate OS selections:
- Linux packages (tarballs, win, deb) on ubuntu-24.04
- MacOS packages on macos-14
Line range hint 156-158: LGTM! Effective conditional execution strategy
The only_drive flag implementation provides good control over the release process:
- Drive-related jobs always run
- Other components (DAPI, Test Suite, Dashmate Helper) can be skipped when needed
This helps optimize CI resources and supports focused releases.
Also applies to: 169-171, 182-184
24-24: Verify x86_64 compatibility for all build steps
The change from a self-hosted ARM64 runner to GitHub-hosted x86_64 (ubuntu-24.04) could impact:
- Build performance and resource availability
- Binary compatibility of built artifacts
- Dependencies that might be architecture-specific
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (4)
.github/actions/docker/action.yaml (4)
25-30: Remove commented-out cache mount configurations.
Since we're transitioning to a new caching strategy using actions/cache@v4, these commented-out configurations should be removed to maintain code cleanliness.
127-153: LGTM: Effective platform-aware Yarn caching.
The Yarn caching implementation correctly handles platform-specific caching with proper cache keys and architecture-aware cache IDs.
Consider adding a comment explaining why the unplugged cache needs to be platform-specific, as this might not be immediately obvious to future maintainers.
174-177: Clean up commented cache configurations.
Remove the commented-out cache configurations since we've transitioned to GitHub Actions cache:
-      #cache-from: ${{ steps.layer_cache_settings.outputs.cache_from }}
+      cache-from: type=gha
-      #cache-to: ${{ steps.layer_cache_settings.outputs.cache_to }}
+      cache-to: type=gha,mode=max
180-185: Remove obsolete cache save configuration.
This commented-out section is no longer needed as we've moved to GitHub Actions cache.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
.github/actions/docker/action.yaml
(4 hunks)
🔇 Additional comments (2)
.github/actions/docker/action.yaml (2)
95-126: LGTM: Well-structured Cargo caching implementation.
The Cargo caching strategy is well-implemented with:
- Proper cache key based on OS and Cargo.lock
- Separate caching for registry index, cache, and git database
- Correct usage of buildkit-cache-dance for Docker integration
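The strategy outlined above could be expressed roughly as follows; the exact paths and key pattern are assumptions for illustration, not copied from the action:

```yaml
- name: Cache Cargo directories (sketch)
  uses: actions/cache@v4
  with:
    # Registry index, downloaded crates, and git checkouts are cached
    # separately from target/ so a lockfile change invalidates only them
    path: |
      ${{ env.HOME }}/.cargo/registry/index
      ${{ env.HOME }}/.cargo/registry/cache
      ${{ env.HOME }}/.cargo/git/db
    key: ${{ runner.os }}-cargo-${{ hashFiles('Cargo.lock') }}
    restore-keys: ${{ runner.os }}-cargo-
```

With buildkit-cache-dance, these host-side directories are then injected into the Docker build's cache mounts, so the same Cargo cache serves both native and containerized builds.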
157-163: LGTM: Good upgrade to latest build-push-action.
The upgrade to docker/build-push-action@v6 and the addition of the explicit push parameter improve clarity and maintainability.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
.github/actions/local-network/action.yaml (1)
38-38: Consider implementing the TODO comment
The comment suggests using the upload artifacts action instead of the current caching mechanism. This change could improve the debugging capabilities of the CI process, which aligns with the PR's objectives.
Would you like me to help implement this change or create a GitHub issue to track this improvement?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
.github/actions/local-network/action.yaml
(3 hunks)
🔇 Additional comments (2)
.github/actions/local-network/action.yaml (2)
39-39: LGTM: Improved path portability
The change from a hardcoded path to the ${{ env.HOME }} environment variable makes the workflow more portable and maintainable.
76-76: LGTM: Consistent path handling
The path modification maintains consistency with the restore step, ensuring symmetrical cache operations.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
.github/workflows/tests-dashmate.yml (1)
Custom actions-cache fork is not actively maintained
Using an unmaintained custom fork can lead to security and maintenance risks. It's advisable to switch to the official actions/cache@v3.
- File: .github/workflows/tests-dashmate.yml (lines 41-47)
🔗 Analysis chain
Line range hint 44-52: Address TODO and consider using official actions/cache
The current implementation has two potential concerns:
- There's a TODO comment indicating this is a temporary solution
- Using a custom fork of actions-cache could pose maintenance risks
Consider:
- Creating a tracking issue for replacing this temporary solution
- Evaluating if the official actions/cache@v3 could meet your needs
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the custom action fork is actively maintained
gh api repos/strophy/actions-cache/commits --jq '.[0].commit.author.date'
Length of output: 96
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
.github/actions/rust/action.yaml (1 hunks)
.github/workflows/tests-dashmate.yml (4 hunks)
🧰 Additional context used
🪛 actionlint
.github/workflows/tests-dashmate.yml
89-89: shellcheck reported issue in this script: SC2086:info:1:22: Double quote to prevent globbing and word splitting
(shellcheck)
🔇 Additional comments (4)
.github/actions/rust/action.yaml (2)
98-100: LGTM! Good practice for runner portability.
Setting the HOME variable explicitly ensures consistent behavior across different runner environments, which is essential for the transition to free runners.
109-111: LGTM! Verify cache paths across different runner environments.
The updated cache paths using ${{ env.HOME }} correctly remove the hardcoded paths, making the workflow more portable across different runner environments.
✅ Verification successful
Verification successful: All cache paths correctly use ${{ env.HOME }}, ensuring portability across different runner environments.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify no hardcoded /home/ubuntu paths remain in workflow files
# Search for any remaining hardcoded home paths in workflow files
echo "Checking for hardcoded home paths in workflows..."
rg -l "/home/ubuntu" .github/
# Search for inconsistent HOME variable usage in cache paths
echo "Checking for inconsistent cache path patterns..."
rg "\.cargo/(registry|git)" .github/
Length of output: 516
.github/workflows/tests-dashmate.yml (2)
20-20: LGTM: Runner configuration aligns with PR objectives
The switch to ubuntu-24.04 successfully implements the transition to free runners as intended.
Line range hint 98-112: LGTM: Well-structured caching implementation
The caching implementation is well-designed with:
- Efficient cache key based on dashmate fingerprint
- Proper path handling with environment variables
- Smart conditional test execution based on cache availability
🧰 Tools
🪛 actionlint
84-84: shellcheck reported issue in this script: SC2086:info:1:64: Double quote to prevent globbing and word splitting
(shellcheck)
89-89: shellcheck reported issue in this script: SC2086:info:1:22: Double quote to prevent globbing and word splitting
(shellcheck)
- name: Set HOME variable to github context
  shell: bash
  run: echo "HOME=$HOME" >> $GITHUB_ENV
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix potential path handling issue
The current HOME variable setting could have issues with paths containing spaces.
Apply this fix to prevent word splitting:
- run: echo "HOME=$HOME" >> $GITHUB_ENV
+ run: echo "HOME=\"$HOME\"" >> $GITHUB_ENV
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:
- name: Set HOME variable to github context
  shell: bash
  run: echo "HOME=$HOME" >> $GITHUB_ENV
Suggested:
- name: Set HOME variable to github context
  shell: bash
  run: echo "HOME=\"$HOME\"" >> $GITHUB_ENV
🧰 Tools
🪛 actionlint
89-89: shellcheck reported issue in this script: SC2086:info:1:22: Double quote to prevent globbing and word splitting
(shellcheck)
Issue being fixed or feature implemented
Running self-hosted runners on the current infrastructure (on-demand instances) is very expensive. To save money, we are going to try switching to free runners.
What was done?
CHROME_BIN
How Has This Been Tested?
Running CI
Breaking Changes
None
Checklist:
For repository code-owners and collaborators only
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores