
Scale LH Prover #4281

Merged: 13 commits from scale_prover into main, May 22, 2023

Conversation

preethamr
Collaborator

Description

Type of change

  • Docs change / dependency upgrade
  • Configuration / tooling changes
  • Refactoring
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Requires changes in customer code

High-level change(s) description - from the user's perspective

Related Issue(s)

Fixes

Related pull request(s)

Rahul Sethuram and others added 9 commits May 17, 2023 21:55
* feat: do not cache the sdks to make it possible to have different client configs in one runtime

* put back yarn release and yarnrc

* v2.0.3: stable version release

* v2.0.3-alpha.1 alpha release

* v2.0.4-alpha.1 alpha release
fix: release version

* v2.0.4-alpha.2 alpha release

* Add ERD for reference

* WIP: sdk-server

* fix: handle empty chainData cases.

* Update fastpath.ts

* rmv: mnemonic (#4230)

* remove mnemonic

* increase timeouts (#4226)

* Sequencer cache persists (#4245)

* feat: add expiry time for executor cache

* fix test

* Batch pulling from the rabbitmq using amqplib  (#4216)

* feat: pull data from the queue in batch

* revert

* feat: move executor params to config

* fix: integration config

* rm

* fix: test

* Lint

* ci(test): no need to include it

* fix: queue limit setup

---------

Co-authored-by: kristohberg <kristoffer.hovland.berg@gmail.com>
Co-authored-by: sanchaymittal <sanchaymittal@gmail.com>
Co-authored-by: just-a-node <eye1717@gmail.com>
Co-authored-by: carlomazzaferro <carlo.mazzaferro@gmail.com>
Co-authored-by: wanglonghong <wanglonghong4467@gmail.com>
@wanglonghong (Contributor) left a comment:

I am still skeptical about how this solution could improve the messaging-layer processing, since Node.js is natively single-threaded.
One approach I am thinking of is to make the LH prover pub/sub, as we do in the sequencer, but I need others' input here.

Comment on lines 165 to 166
concurrency:
process.env.NXTP_PROVER_CONCURRENCY || configJson.concurrency || configFile.concurrency || DEFAULT_CONCURRENCY,
Contributor:
In general, environment variables are strings but concurrency is a number, so this needs a fix.
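The coercion fix under discussion can be sketched as follows. This is a minimal illustration, not the PR's actual code: the helper name `parseConcurrency` and the default value are hypothetical, and only the precedence order (env var, then JSON config, then config file, then default) mirrors the snippet above.

```typescript
// Env vars always arrive as strings; coerce to a positive integer or
// fall back. parseConcurrency and DEFAULT_CONCURRENCY are hypothetical.
const DEFAULT_CONCURRENCY = 10;

function parseConcurrency(raw: string | number | undefined, fallback: number): number {
  if (raw === undefined || raw === "") return fallback;
  const n = typeof raw === "number" ? raw : Number.parseInt(raw, 10);
  return Number.isInteger(n) && n > 0 ? n : fallback;
}

// Mirrors the precedence in the config snippet above.
const concurrency = parseConcurrency(
  process.env.NXTP_PROVER_CONCURRENCY,
  DEFAULT_CONCURRENCY,
);
```

Without the coercion, `"8" || …` short-circuits to the string `"8"`, which can silently misbehave anywhere the value is used arithmetically.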

@rhlsthrm (Collaborator), May 21, 2023:
I think Wang is correct.

@preethamr (Collaborator, author), May 21, 2023:

True. I pushed a check-and-convert for this; perhaps there is a more elegant way to do it.

Interestingly, we have this pattern in a few other places, e.g. NXTP_RELAYER_WAIT_TIME and NXTP_CARTOGRAPHER_POLL_INTERVAL.
Should we update it in all of those places, @rhlsthrm @wanglonghong?

Contributor:

Yes, right. We have similar things in other places.

Collaborator:

Correct. I think it currently works because we are using the defaults, so we need this fix for when we override them.

@rhlsthrm (Collaborator), replying to the comment above:

> I am still skeptical about how this solution could improve the messaging-layer processing, since Node.js is natively single-threaded.

This would still work: even though Node is single-threaded, it interleaves all in-flight tasks. While it is waiting on I/O for one, it processes the others.
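The point about interleaving can be made concrete with a small sketch. A bounded pool of async "lanes" pulls work from a shared index; while one lane awaits I/O, the event loop runs the others, so up to `limit` tasks are in flight on a single thread. The helper `runWithConcurrency` is hypothetical, not code from this PR — it just illustrates why a concurrency setting helps an I/O-bound prover despite Node's single thread.

```typescript
// Hypothetical helper: run `worker` over `items` with at most `limit`
// tasks in flight at once, preserving result order.
async function runWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each lane repeatedly claims the next unclaimed index and awaits it.
  async function lane(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, lane),
  );
  return results;
}
```

This only pays off when the work is I/O-bound (RPC calls, queue round-trips); CPU-heavy proof computation on one thread would still serialize, which is what the pub/sub / multi-instance suggestion above addresses.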

@wanglonghong wanglonghong merged commit 81836e4 into main May 22, 2023
@wanglonghong wanglonghong deleted the scale_prover branch May 22, 2023 13:38
Labels: none yet
Projects: none yet
4 participants