chore(ci): Deploy schema after apps #1285

Merged 1 commit on Oct 14, 2024

Conversation

@oskogstad (Collaborator) commented Oct 13, 2024

Summary by CodeRabbit

  • Chores
    • Updated CI/CD workflow configurations to improve job dependencies and execution sequences in both main and staging environments.
    • Ensured that schema publishing and end-to-end tests are contingent upon successful application deployments.

@oskogstad oskogstad requested review from a team as code owners October 13, 2024 21:02

coderabbitai bot (Contributor) commented Oct 13, 2024

📝 Walkthrough

The pull request modifies the CI/CD workflow configurations in two YAML files: .github/workflows/ci-cd-main.yml and .github/workflows/ci-cd-staging.yml. It introduces new job dependencies to ensure that the publish-schema-npm job depends on the successful completion of the deploy-apps-test and deploy-apps-staging jobs, respectively. Additionally, the run-e2e-tests job's conditional execution logic is clarified, and the send-slack-message-on-failure job now also depends on the deploy-apps-test job.

Changes

  • .github/workflows/ci-cd-main.yml: updated publish-schema-npm and send-slack-message-on-failure to depend on deploy-apps-test; clarified the conditional execution of run-e2e-tests.
  • .github/workflows/ci-cd-staging.yml: updated publish-schema-npm and run-e2e-tests to depend on deploy-apps-staging (sketched below).
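
For orientation, here is a minimal, hypothetical sketch of what the staging change might look like. Only the job names (deploy-apps-staging, publish-schema-npm, run-e2e-tests) come from the summary above; the runner, steps, and every other key are illustrative assumptions, not the repository's actual configuration:

# .github/workflows/ci-cd-staging.yml (hypothetical sketch, not the real file)
jobs:
  deploy-apps-staging:
    runs-on: ubuntu-latest                    # assumed runner
    steps:
      - run: echo "deploy apps to staging"    # placeholder step

  publish-schema-npm:
    # New dependency: the schema is published only after the apps
    # have been deployed to staging.
    needs: [deploy-apps-staging]
    runs-on: ubuntu-latest
    steps:
      - run: echo "publish schema to npm"     # placeholder step

  run-e2e-tests:
    # Also gated on the staging deployment, per the change summary.
    needs: [deploy-apps-staging]
    runs-on: ubuntu-latest
    steps:
      - run: echo "run end-to-end tests"      # placeholder step

With needs, GitHub Actions skips a dependent job when any job it needs fails or is skipped, so a failed staging deployment also blocks the schema publish and the e2e tests unless a job opts out with a condition such as if: always().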

Possibly related PRs

  • ci(slackbot): clean up slack alerts #1272: Both PRs touch the send-slack-message job's dependencies and execution conditions, with the shared goal of making Slack notifications in the CI/CD workflows clearer and more reliable.

Suggested reviewers

  • knuhau

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed between ab3a9bf and 059c106 (relative to the base of the PR).

📒 Files selected for processing (2)
  • .github/workflows/ci-cd-main.yml (1 hunks)
  • .github/workflows/ci-cd-staging.yml (1 hunks)
🧰 Additional context used
🔇 Additional comments (2)
.github/workflows/ci-cd-staging.yml (1)

92-92: LGTM! Consider the impact on workflow execution time.

The addition of deploy-apps-staging to the needs array for the publish-schema-npm job aligns with the PR objective of deploying the schema after the apps. This ensures consistency between the deployed apps and the published schema.

However, it's worth considering the impact on the overall workflow execution time. The scripts under "🏁 Scripts executed" below were used to measure average run durations and to assess whether the added dependency significantly increases them.

✅ Verification successful

Workflow Execution Time Verified

The average workflow execution time increased from 651 seconds before the PR to 700 seconds after it, an increase of about 49 seconds (roughly 7.5%). This modest cost buys the guarantee that the schema is published only after the apps are deployed to staging, keeping the deployed apps and the published schema consistent.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Compare workflow run times before and after this change

# Get the 5 most recent workflow runs before this PR
before_pr=$(gh run list --workflow=ci-cd-staging.yml --limit=5 --json durationInSeconds --jq 'map(.durationInSeconds) | add / length')

# Get the 5 most recent workflow runs after this PR (including this one)
after_pr=$(gh run list --workflow=ci-cd-staging.yml --limit=5 --json durationInSeconds --jq 'map(.durationInSeconds) | add / length')
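# Note: the two queries above are identical (so the comparison is meaningless),
# and durationInSeconds is not a field exposed by `gh run list`; the follow-up
# scripts below therefore compute durations from startedAt/updatedAt instead.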

echo "Average duration before PR: $before_pr seconds"
echo "Average duration after PR: $after_pr seconds"
echo "Difference: $((after_pr - before_pr)) seconds"

Length of output: 977


Script:

#!/bin/bash
# Description: Calculate average workflow run duration by computing the difference between 'startedAt' and 'updatedAt'

# Function to calculate duration in seconds
calculate_duration() {
  local start=$1
  local end=$2
  # Convert ISO 8601 to epoch seconds
  start_epoch=$(date -d "$start" +%s)
  end_epoch=$(date -d "$end" +%s)
  echo $((end_epoch - start_epoch))
}

# Get the 5 most recent workflow runs
runs=$(gh run list --workflow=ci-cd-staging.yml --limit=5 --json startedAt,updatedAt)

# Initialize total duration
total_duration=0
count=0

# Iterate over each run to calculate duration
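# Note: piping into `while` runs the loop body in a subshell, so the updates to
# total_duration and count below are lost; the corrected script that follows
# avoids this by redirecting from a file instead of using a pipeline.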
echo "$runs" | jq -c '.[]' | while read -r run; do
  started_at=$(echo "$run" | jq -r '.startedAt')
  updated_at=$(echo "$run" | jq -r '.updatedAt')
  duration=$(calculate_duration "$started_at" "$updated_at")
  total_duration=$((total_duration + duration))
  count=$((count + 1))
done

# Calculate average duration
if [ "$count" -gt 0 ]; then
  average_duration=$((total_duration / count))
  echo "Average duration before PR: $average_duration seconds"
else
  echo "No workflow runs found to calculate average duration."
fi

Length of output: 4028


Script:

#!/bin/bash
# Description: Corrected version of the previous script: calculate the average
# run duration from 'startedAt' and 'updatedAt' without losing the loop's
# counters to a subshell

# Function to calculate duration in seconds
calculate_duration() {
  local start=$1
  local end=$2
  # Convert ISO 8601 to epoch seconds
  start_epoch=$(date -d "$start" +%s)
  end_epoch=$(date -d "$end" +%s)
  echo $((end_epoch - start_epoch))
}

# Get the 5 most recent workflow runs
runs=$(gh run list --workflow=ci-cd-staging.yml --limit=5 --json startedAt,updatedAt)

# Initialize total duration
total_duration=0
count=0

# Iterate over each run to calculate duration
echo "$runs" | jq -c '.[]' > runs.json
while read -r run; do
  started_at=$(echo "$run" | jq -r '.startedAt')
  updated_at=$(echo "$run" | jq -r '.updatedAt')
  duration=$(calculate_duration "$started_at" "$updated_at")
  total_duration=$((total_duration + duration))
  count=$((count + 1))
done < runs.json

# Calculate average duration
if [ "$count" -gt 0 ]; then
  average_duration=$((total_duration / count))
  echo "Average duration before PR: $average_duration seconds"
else
  echo "No workflow runs found to calculate average duration."
fi

Length of output: 4023


Script:

#!/bin/bash
# Description: Calculate average workflow run duration after the PR by computing the difference between 'startedAt' and 'updatedAt'

# Function to calculate duration in seconds
calculate_duration() {
  local start=$1
  local end=$2
  # Convert ISO 8601 to epoch seconds
  start_epoch=$(date -d "$start" +%s)
  end_epoch=$(date -d "$end" +%s)
  echo $((end_epoch - start_epoch))
}

# Specify the PR creation timestamp (replace with actual PR creation time)
pr_created_at="2024-10-11T00:00:00Z"

# Get the workflow runs after the PR was created
runs=$(gh run list --workflow=ci-cd-staging.yml --limit=5 --json startedAt,updatedAt --filter "created:>=${pr_created_at}")
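# Note: `gh run list` does not support a --filter flag, so this invocation
# likely fails; the corrected script below passes the date range via --created.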

# Initialize total duration and count
total_duration=0
count=0

# Iterate over each run to calculate duration
echo "$runs" | jq -c '.[]' > runs_after_pr.json
while read -r run; do
  started_at=$(echo "$run" | jq -r '.startedAt')
  updated_at=$(echo "$run" | jq -r '.updatedAt')
  duration=$(calculate_duration "$started_at" "$updated_at")
  total_duration=$((total_duration + duration))
  count=$((count + 1))
done < runs_after_pr.json

# Calculate average duration
if [ "$count" -gt 0 ]; then
  average_duration=$((total_duration / count))
  echo "Average duration after PR: $average_duration seconds"
else
  echo "No workflow runs found after the PR to calculate average duration."
fi

Length of output: 1357


Script:

#!/bin/bash
# Description: Calculate average workflow run duration after the PR by computing the difference between 'startedAt' and 'updatedAt'

# Function to calculate duration in seconds
calculate_duration() {
  local start=$1
  local end=$2
  # Convert ISO 8601 to epoch seconds
  start_epoch=$(date -d "$start" +%s)
  end_epoch=$(date -d "$end" +%s)
  echo $((end_epoch - start_epoch))
}

# Specify the PR creation timestamp (replace with actual PR creation time)
pr_created_at="2024-10-11T00:00:00Z"

# Get the workflow runs after the PR was created using the correct --created flag
runs=$(gh run list --workflow=ci-cd-staging.yml --limit=100 --json startedAt,updatedAt --created ">$pr_created_at")

# Initialize total duration and count
total_duration=0
count=0

# Iterate over each run to calculate duration
echo "$runs" | jq -c '.[]' > runs_after_pr.json
while read -r run; do
  started_at=$(echo "$run" | jq -r '.startedAt')
  updated_at=$(echo "$run" | jq -r '.updatedAt')
  duration=$(calculate_duration "$started_at" "$updated_at")
  total_duration=$((total_duration + duration))
  count=$((count + 1))
done < runs_after_pr.json

# Calculate average duration
if [ "$count" -gt 0 ]; then
  average_duration=$((total_duration / count))
  echo "Average duration after PR: $average_duration seconds"
else
  echo "No workflow runs found after the PR to calculate average duration."
fi

Length of output: 1106

.github/workflows/ci-cd-main.yml (1)

128-128: LGTM! Verify the impact of the new dependency.

The addition of deploy-apps-test to the needs array for the publish-schema-npm job aligns with the PR objective of deploying the schema after the apps. This ensures that the schema is only published after successful deployment of the apps to the test environment, maintaining consistency between deployed apps and the published schema.
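
By analogy, a hypothetical sketch of the main workflow's new dependencies; the job names come from the change summary, while the if: expression and everything else are illustrative assumptions:

# .github/workflows/ci-cd-main.yml (hypothetical sketch, not the real file)
jobs:
  publish-schema-npm:
    needs: [deploy-apps-test]    # schema published only after the test deployment
    runs-on: ubuntu-latest
    steps:
      - run: echo "publish schema to npm"    # placeholder step

  send-slack-message-on-failure:
    needs: [deploy-apps-test]    # now also waits on the test deployment
    if: failure()                # assumed condition: notify only when a needed job failed
    runs-on: ubuntu-latest
    steps:
      - run: echo "send Slack alert"         # placeholder step

Pairing needs with if: failure() is the usual pattern for a notification job that should run only when something upstream has failed.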

To ensure this change doesn't introduce significant delays in the workflow, analyze the execution times of the deploy-apps-test and publish-schema-npm jobs in recent workflow runs, and review the results to confirm that the new dependency does not significantly extend the overall execution time.

