
Add instance to vulcan metrics #2265

Merged
roy-dydx merged 1 commit into main from roy/vulcanins on Sep 16, 2024

Conversation


@roy-dydx roy-dydx commented Sep 16, 2024

Changelist

[Describe or list the changes made in this PR]

Test Plan

[Describe how this PR was tested (if applicable)]

Author/Reviewer Checklist

  • If this PR has changes that result in a different app state given the same prior state and transaction list, manually add the state-breaking label.
  • If the PR has breaking postgres changes to the indexer add the indexer-postgres-breaking label.
  • If this PR isn't state-breaking but has changes that modify behavior in PrepareProposal or ProcessProposal, manually add the label proposal-breaking.
  • If this PR is one of many that implement a specific feature, manually label them all feature:[feature-name].
  • If you wish for mergify-bot to automatically create a PR to backport your change to a release branch, manually add the label backport/[branch-name].
  • Manually add any of the following labels: refactor, chore, bug.

Summary by CodeRabbit

  • New Features

    • Enhanced logging and metrics tracking across various order handling processes by including instance IDs.
    • Improved service operational transparency with instance ID logging during service startup.
  • Bug Fixes

    • None reported.
  • Documentation

    • Added functions for retrieving instance IDs to improve clarity in metrics tracking.
  • Refactor

    • Updated metrics increment calls to include instance identifiers for better observability.

@roy-dydx roy-dydx requested a review from a team as a code owner September 16, 2024 20:01

coderabbitai bot commented Sep 16, 2024

Walkthrough

The changes across several files in the Vulcan indexer enhance the logging and metrics tracking functionality by incorporating an instance identifier into various statistics increment calls. This allows for better monitoring and debugging of service performance in a distributed environment. Key modifications include the addition of the getInstanceId() function and the inclusion of an instance parameter in metrics tracking across order handling classes and message processing functions.
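
For readers skimming this walkthrough, here is a rough, purely illustrative sketch of what a pair of helpers with these names might look like. The function names and the @dydxprotocol-indexer/base import mentioned later in this review come from the PR; the UUID-based body and module layout below are assumptions, not the package's actual implementation.

```typescript
// instance-id.ts (hypothetical module): cache one identifier per process so
// every metric emitted by this process carries the same "instance" tag.
import { randomUUID } from 'crypto';

let instanceId: string = '';

export function setInstanceId(): void {
  // Generate the ID once; subsequent calls keep the original value.
  if (instanceId === '') {
    instanceId = randomUUID();
  }
}

export function getInstanceId(): string {
  return instanceId;
}
```

Downstream code can then tag metrics with getInstanceId() without caring how the identifier was generated.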

Changes

  • indexer/services/vulcan/src/handlers/order-*.ts: Enhanced logging by adding the instance ID to stats.increment calls in OrderPlaceHandler, OrderRemoveHandler, and OrderUpdateHandler.
  • indexer/services/vulcan/src/index.ts: Introduced logging of instance ID retrieval in the startService function and added setInstanceId() and getInstanceId() functions.
  • indexer/services/vulcan/src/lib/on-*.ts: Added the instance ID to metric tags in on-batch.ts and on-message.ts, and improved message tracking in send-message-helper.ts.

Possibly related PRs

Suggested labels

indexer

Poem

🐇 In the land of code where metrics grow,
We track each instance, now we know!
With logs that sparkle, clear and bright,
Our service dances, a joyful sight.
So hop along, let’s celebrate,
For better tracking, oh, isn’t it great! 🎉


Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 2882271 and 56bc5cb.

Files selected for processing (7)
  • indexer/services/vulcan/src/handlers/order-place-handler.ts (2 hunks)
  • indexer/services/vulcan/src/handlers/order-remove-handler.ts (5 hunks)
  • indexer/services/vulcan/src/handlers/order-update-handler.ts (5 hunks)
  • indexer/services/vulcan/src/index.ts (2 hunks)
  • indexer/services/vulcan/src/lib/on-batch.ts (2 hunks)
  • indexer/services/vulcan/src/lib/on-message.ts (6 hunks)
  • indexer/services/vulcan/src/lib/send-message-helper.ts (3 hunks)
Additional comments not posted (24)
indexer/services/vulcan/src/lib/on-batch.ts (2)

1-1: LGTM!

The import statement for getInstanceId is correctly added.


16-16: Looks good!

The addition of the instance key to the metricTags object using getInstanceId() is a valuable enhancement for monitoring and debugging purposes. It will provide better context when tracking metrics across different instances in a distributed environment.
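
As a minimal sketch of the shape this comment describes, assuming the import source named later in this review; the topic tag and everything besides the instance key are illustrative assumptions rather than the actual on-batch.ts code:

```typescript
import { getInstanceId } from '@dydxprotocol-indexer/base';

// Tags attached to batch-level metrics; the instance key is the new addition.
const metricTags: Record<string, string> = {
  topic: 'to-vulcan',          // hypothetical pre-existing tag
  instance: getInstanceId(),   // identifies which vulcan process emitted the metric
};
```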

indexer/services/vulcan/src/index.ts (2)

1-3: LGTM!

The imports are necessary for the added instance management functionality. The usage of the imported functions is correct.


23-33: Looks good!

The added code segment enhances the functionality of the startService function by setting and logging the instance ID. The logging statements provide visibility into the process of obtaining the instance ID, which improves the service's operational transparency and aids in debugging.

The code changes do not alter the existing logic of the service initialization and are a valuable addition to the codebase.
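
A hedged sketch of how such a startService addition might read; the logger export, its structured-call shape, and the synchronous setInstanceId() call are assumptions, not the actual diff:

```typescript
import { getInstanceId, logger, setInstanceId } from '@dydxprotocol-indexer/base'; // exports assumed

async function startService(): Promise<void> {
  logger.info({ at: 'index#start', message: 'Getting instance id...' });
  setInstanceId();
  logger.info({ at: 'index#start', message: `Got instance id ${getInstanceId()}.` });

  // ...the rest of the existing (awaited) initialization continues unchanged...
}
```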

indexer/services/vulcan/src/lib/send-message-helper.ts (3)

2-2: LGTM!

The import statement for getInstanceId is syntactically correct and aligns with the list of alterations.


80-80: Enhance statistics with instance identifier.

Including the instance identifier obtained from getInstanceId() in the statistics allows for better differentiation of metrics based on the instance of the service. This change is particularly useful in a distributed system where multiple instances may be running concurrently, as it provides more granular data for monitoring and analysis.


110-110: Enhance statistics tags with instance identifier.

Adding the instance tag with the value obtained from getInstanceId() to the tags object allows for better differentiation of metrics based on the instance of the service. This change is particularly useful in a distributed system where multiple instances may be running concurrently, as it provides more granular data for monitoring and analysis when the tags object is used in the subsequent stats.histogram and stats.timing calls.
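
To make the shared-tags point concrete, a hedged sketch using hot-shots (a public StatsD client) as a stand-in for the repo's stats wrapper; the metric names and the recordSend wrapper are made up for illustration:

```typescript
import { StatsD } from 'hot-shots';
import { getInstanceId } from '@dydxprotocol-indexer/base';

const stats = new StatsD({ prefix: 'vulcan.' });

function recordSend(topic: string, sizeBytes: number, elapsedMs: number): void {
  // One tags object feeds both calls, so every datapoint carries the instance ID.
  const tags = { topic, instance: getInstanceId() };
  stats.histogram('send_message.size_bytes', sizeBytes, tags);
  stats.timing('send_message.duration_ms', elapsedMs, tags);
}
```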

indexer/services/vulcan/src/lib/on-message.ts (7)

2-2: LGTM!

The import statement for getInstanceId is correct and aligns with the goal of incorporating instance identification into metrics tracking.


46-50: LGTM!

Including the instance ID as a tag in the stats.increment call for received Kafka messages enables better monitoring and debugging of service performance across different instances. This change aligns with the overall goal of enhancing metrics tracking.


52-56: LGTM!

Including the instance ID as a tag in the stats.increment call for empty Kafka messages enables better monitoring and debugging of service performance across different instances. This change aligns with the overall goal of enhancing metrics tracking.


71-71: LGTM!

Including the instance ID as a tag in the stats.timing call for message queue time enables better monitoring and debugging of service performance across different instances. This change aligns with the overall goal of enhancing metrics tracking.


84-84: LGTM!

Including the instance ID as a tag in the stats.timing call for message processing time since the message was received enables better monitoring and debugging of service performance across different instances. This change aligns with the overall goal of enhancing metrics tracking.


144-144: LGTM!

Including the instance ID as a tag in the stats.timing call for message processing time since the message was received, after post-processing, enables better monitoring and debugging of service performance across different instances. This change aligns with the overall goal of enhancing metrics tracking.


179-179: LGTM!

Including the instance ID as a tag in the stats.timing call for the time taken to process an update enables better monitoring and debugging of service performance across different instances. This change aligns with the overall goal of enhancing metrics tracking.
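
Pulling the timing comments above together, a hedged sketch of how such measurements might be tagged; the metric names, timestamp parameters, and wrapper function are assumptions rather than the actual on-message.ts code:

```typescript
import { StatsD } from 'hot-shots';
import { getInstanceId } from '@dydxprotocol-indexer/base';

const stats = new StatsD({ prefix: 'vulcan.' });

function recordMessageTimings(producedAtMs: number, receivedAtMs: number): void {
  const tags = { instance: getInstanceId() };

  // Time the message spent queued in Kafka before this instance received it.
  stats.timing('message.queue_time_ms', receivedAtMs - producedAtMs, tags);
  // Time this instance took to finish processing after receiving the message.
  stats.timing('message.processing_time_ms', Date.now() - receivedAtMs, tags);
}
```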

indexer/services/vulcan/src/handlers/order-place-handler.ts (1)

94-98: LGTM!

The addition of the instance parameter to the stats.increment call is a valuable enhancement for monitoring and debugging purposes. By associating the replaced_order metric with the specific instance ID, it enables more granular tracking of order replacement events across different instances of the service in a distributed environment.

This change improves the observability of the system without altering the core logic of the OrderPlaceHandler. It can help identify performance issues or anomalies specific to certain instances, facilitating better troubleshooting and optimization efforts.
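
A hedged sketch of the kind of call this comment describes; the replaced_order metric name is taken from the review text, while the hot-shots stand-in, extra tag, and wrapper function are assumptions:

```typescript
import { StatsD } from 'hot-shots';
import { getInstanceId } from '@dydxprotocol-indexer/base';

const stats = new StatsD({ prefix: 'vulcan.' });

// Called when an incoming placement replaces an order already held in the cache.
function trackReplacedOrder(clobPairId: string): void {
  stats.increment('order_place_handler.replaced_order', 1, {
    clobPairId,                // hypothetical pre-existing tag
    instance: getInstanceId(), // ties the counter to this vulcan process
  });
}
```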

indexer/services/vulcan/src/handlers/order-update-handler.ts (5)

3-3: LGTM!

The import of getInstanceId from @dydxprotocol-indexer/base is consistent with the goal of incorporating an instance identifier into various statistics increment calls.


106-108: LGTM!

The addition of the instance parameter to the stats.increment call for the order_update_order_does_not_exist metric enhances the tracking of order updates by associating the metric with a specific instance of the service. This change is consistent with the overall goal of the PR.


115-119: LGTM!

The addition of the instance parameter to the stats.increment call for the order_update_with_zero_delta.count metric enhances the tracking of order updates by associating the metric with a specific instance of the service. This change is consistent with the overall goal of the PR.


201-205: LGTM!

The addition of the instance parameter to the stats.increment call for the order_update_old_total_filled_exceeds_size metric enhances the tracking of order updates by associating the metric with a specific instance of the service. This change is consistent with the overall goal of the PR.


232-236: LGTM!

The addition of the instance parameter to the stats.increment call for the order_update_total_filled_exceeds_size metric enhances the tracking of order updates by associating the metric with a specific instance of the service. This change is consistent with the overall goal of the PR.
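
For the guard-style counters listed above, one hedged sketch; the order_update_order_does_not_exist name is from the review, while the cache lookup and handler shape are illustrative only:

```typescript
import { StatsD } from 'hot-shots';
import { getInstanceId } from '@dydxprotocol-indexer/base';

const stats = new StatsD({ prefix: 'vulcan.' });

function handleOrderUpdate(orderId: string, cachedOrders: Map<string, object>): void {
  if (!cachedOrders.has(orderId)) {
    // Count the skipped update, attribute it to this instance, and stop.
    stats.increment('order_update_order_does_not_exist', 1, { instance: getInstanceId() });
    return;
  }
  // ...apply the update to the cached order...
}
```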

indexer/services/vulcan/src/handlers/order-remove-handler.ts (4)

95-99: Enhance metrics tracking by including the instance ID.

Including the instance ID in the stats.increment call allows for better identification of the source of the metrics in a distributed system. This provides more granular insights into the service's performance and behavior.


125-129: Enhance metrics tracking by including the instance ID.

Including the instance ID in the stats.increment call allows for better identification of the source of the metrics in a distributed system. This provides more granular insights into the service's performance and behavior.


462-466: Enhance metrics tracking by including the instance ID.

Including the instance ID in the stats.increment call allows for better identification of the source of the metrics in a distributed system. This provides more granular insights into the service's performance and behavior.


490-494: Enhance metrics tracking by including the instance ID.

Including the instance ID in the stats.increment call allows for better identification of the source of the metrics in a distributed system. This provides more granular insights into the service's performance and behavior.



@roy-dydx roy-dydx merged commit 46a1c88 into main Sep 16, 2024
15 of 16 checks passed
@roy-dydx roy-dydx deleted the roy/vulcanins branch September 16, 2024 20:21
@roy-dydx
Contributor Author

@Mergifyio backport release/indexer/v5.x


mergify bot commented Sep 16, 2024

backport release/indexer/v5.x

✅ Backports have been created

@roy-dydx
Contributor Author

https://github.com/Mergifyio backport release/indexer/v6.x

mergify bot pushed a commit that referenced this pull request Sep 16, 2024
(cherry picked from commit 46a1c88)

# Conflicts:
#	indexer/services/vulcan/src/handlers/order-place-handler.ts
#	indexer/services/vulcan/src/lib/on-message.ts

mergify bot commented Sep 16, 2024

backport release/indexer/v6.x

✅ Backports have been created

mergify bot pushed a commit that referenced this pull request Sep 16, 2024
@coderabbitai coderabbitai bot mentioned this pull request Sep 16, 2024
roy-dydx added a commit that referenced this pull request Sep 16, 2024
Co-authored-by: roy-dydx <133032749+roy-dydx@users.noreply.github.com>
roy-dydx added a commit that referenced this pull request Sep 16, 2024
Co-authored-by: roy-dydx <133032749+roy-dydx@users.noreply.github.com>
Co-authored-by: Roy Li <roy@dydx.exchange>