
parallelize mercury requests in pipeline #9744

Merged: 6 commits merged into develop on Jul 12, 2023

Conversation

FelixFan1992 (Contributor)

No description provided.

github-actions bot (Contributor) commented Jul 5, 2023

I see that you haven't updated any README files. Would it make sense to do so?

@FelixFan1992 FelixFan1992 changed the base branch from develop to AUTO-3421 July 6, 2023 14:55
@FelixFan1992 FelixFan1992 changed the base branch from AUTO-3421 to develop July 6, 2023 14:55
@FelixFan1992 FelixFan1992 changed the base branch from develop to AUTO-3421 July 6, 2023 14:56
@FelixFan1992 FelixFan1992 changed the base branch from AUTO-3421 to develop July 6, 2023 15:09
@FelixFan1992 FelixFan1992 marked this pull request as ready for review July 6, 2023 15:33
@FelixFan1992 FelixFan1992 requested a review from a team as a code owner July 6, 2023 15:33
var wg sync.WaitGroup
for i, lookup := range lookups {
	wg.Add(1)
	r.doLookup(ctx, &wg, lookup, i, upkeepResults)
Collaborator

goroutine?

infiloop2 (Contributor) commented Jul 6, 2023

Will we be writing to upkeepResults in parallel? Not sure if we need to add a mutex on top of it?

Contributor

Two questions:

  • It appears that r.doLookup() is blocking, so we don't actually do the lookups in parallel. A goroutine should solve that, as Amir mentioned.
  • We only parallelize the feed lookups on a per-upkeep basis. Would it be possible to parallelize all feed lookups across all upkeeps?

FelixFan1992 (Contributor, Author)

Fixed with a goroutine.

re @infiloop2:
I think we are OK with concurrency, because each goroutine only accesses upkeepResults at its own index, so no two goroutines ever read or write the same memory location.

re @RyanRHall:
What this PR does is parallelize all feed lookups within one pipeline run.
If the plugin calls this pipeline with several upkeeps (say 10), it finds every upkeep that needs a feed lookup and spawns a goroutine for each of them.
Then, within each feed-lookup goroutine, the mercury data for each asset pair requested by the user's revert data is also fetched in parallel, but that part was already done in a previous PR.
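
For readers following the thread, here is a minimal, self-contained sketch of the pattern being discussed: one goroutine per lookup, synchronized with a sync.WaitGroup, where each goroutine writes only to its own slice index, so no mutex is needed. The lookupTask/lookupResult types and the doLookup body are hypothetical stand-ins for the registry's real types; only the loop shape mirrors the excerpt above.

package main

import (
	"context"
	"fmt"
	"sync"
)

// lookupTask and lookupResult are illustrative stand-ins for the PR's
// FeedLookup and upkeep result types.
type lookupTask struct{ upkeepID string }
type lookupResult struct{ performData string }

// doLookup mirrors the shape of r.doLookup above: each call writes only to
// upkeepResults[i], so no two goroutines ever touch the same slice element.
func doLookup(ctx context.Context, wg *sync.WaitGroup, task lookupTask, i int, upkeepResults []lookupResult) {
	defer wg.Done()
	// In the real pipeline this is where mercury data for each asset pair
	// would be fetched (itself in parallel, per the earlier PR).
	upkeepResults[i] = lookupResult{performData: "data for " + task.upkeepID}
}

func main() {
	ctx := context.Background()
	lookups := []lookupTask{{"upkeep-1"}, {"upkeep-2"}, {"upkeep-3"}}
	upkeepResults := make([]lookupResult, len(lookups))

	var wg sync.WaitGroup
	for i, task := range lookups {
		wg.Add(1)
		go doLookup(ctx, &wg, task, i, upkeepResults) // the `go` keyword is what the first review comment asked for
	}
	wg.Wait() // block until every lookup goroutine has filled its slot

	for i, res := range upkeepResults {
		fmt.Println(i, res.performData)
	}
}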

@@ -96,52 +98,71 @@ func (r *EvmRegistry) feedLookup(ctx context.Context, upkeepResults []EVMAutomat
continue
}

feedLookup, err := r.decodeFeedLookup(upkeepResults[i].PerformData)
r.lggr.Infof("[FeedLookup] upkeep %s block %d decodeFeedLookup performData=%s", upkeepId, block, hexutil.Encode(upkeepResults[i].PerformData))
Contributor

Nit: will this always be 'block'? As I understand it, it can also be a timestamp, based on the user's revert data?

FelixFan1992 (Contributor, Author)

This block comes from block := upkeepResults[i].Block, so it is the block that OCR3 has agreed upon.
The block/timestamp from the revert data is stored as time in the FeedLookup struct.

Contributor

Ah, then it might be better to rename it to checkBlock to clarify.
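
A rough sketch of the suggested rename, assuming the log statement is otherwise unchanged (only the Block field and the logger call come from the excerpt above; the comment text is illustrative):

// checkBlock is the OCR3-agreed block from the check pipeline, as opposed to
// the block-or-timestamp value decoded from the user's revert data, which the
// FeedLookup struct carries as time.
checkBlock := upkeepResults[i].Block
r.lggr.Infof("[FeedLookup] upkeep %s checkBlock %d decodeFeedLookup performData=%s",
	upkeepId, checkBlock, hexutil.Encode(upkeepResults[i].PerformData))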

@cl-sonarqube-production

SonarQube Quality Gate

Quality Gate failed

Failed condition: 65.5% Coverage on New Code (is less than 80%)

See analysis details on SonarQube

@FelixFan1992 FelixFan1992 added this pull request to the merge queue Jul 11, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jul 11, 2023
@FelixFan1992 FelixFan1992 added this pull request to the merge queue Jul 12, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jul 12, 2023
@FelixFan1992 FelixFan1992 added this pull request to the merge queue Jul 12, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jul 12, 2023
@FelixFan1992 FelixFan1992 added this pull request to the merge queue Jul 12, 2023
Merged via the queue into develop with commit 42e3dcc Jul 12, 2023
@FelixFan1992 FelixFan1992 deleted the AUTO-2862 branch July 12, 2023 15:21