Scale LH Prover #4281
Conversation
* feat: do not cache the sdks to make it possible to have different client configs in one runtime
* put back yarn release and yarnrc
* v2.0.3: stable version release
* v2.0.3-alpha.1 alpha release
* v2.0.4-alpha.1 alpha release (fix: release version)
* v2.0.4-alpha.2 alpha release
* Add ERD for reference
* WIP: sdk-server
* fix: handle empty chainData cases
* Update fastpath.ts
* rmv: mnemonic (#4230)
* remove mnemonic
* increase timeouts (#4226)
* Sequencer cache persists (#4245)
* feat: add expiry time for executor cache
* fix test
* Batch pulling from the rabbitmq using amqplib (#4216)
* feat: pull data from the queue in batch
* revert
* feat: move executor params to config
* fix: integration config
* rm
* fix: test
* Lint
* ci(test): no need to include it
* fix: queue limit setup

Co-authored-by: kristohberg <kristoffer.hovland.berg@gmail.com>
Co-authored-by: sanchaymittal <sanchaymittal@gmail.com>
Co-authored-by: just-a-node <eye1717@gmail.com>
Co-authored-by: carlomazzaferro <carlo.mazzaferro@gmail.com>
Co-authored-by: wanglonghong <wanglonghong4467@gmail.com>
Testnet Sync
Testnet Prod Sync
I am still skeptical that this solution improves the messaging-layer processing, because Node.js is natively a single-threaded runtime.
One option I am considering is to make the LH prover pub/sub, as we do in the sequencer, but I need input from others here.
concurrency:
  process.env.NXTP_PROVER_CONCURRENCY || configJson.concurrency || configFile.concurrency || DEFAULT_CONCURRENCY,
In general, environment variables are strings, but concurrency is a number, so this needs a fix.
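A minimal sketch of the fix being discussed, assuming TypeScript: coerce a possibly-string config value to a positive number, falling back to a default when parsing fails. The helper name `toConcurrency` and the default value are illustrative, not taken from the actual codebase.

```typescript
const DEFAULT_CONCURRENCY = 1;

// process.env values are always strings (or undefined), so a value like "100"
// must be converted before being used as a number.
export function toConcurrency(
  value: string | number | undefined,
  fallback: number = DEFAULT_CONCURRENCY,
): number {
  if (value === undefined) return fallback;
  const parsed = typeof value === "number" ? value : parseInt(value, 10);
  // Reject NaN, Infinity, and non-positive values.
  return Number.isFinite(parsed) && parsed > 0 ? parsed : fallback;
}
```

With this shape, the config line becomes something like `toConcurrency(process.env.NXTP_PROVER_CONCURRENCY ?? configJson.concurrency ?? configFile.concurrency)`, so every source is normalized to a number.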
I think Wang is correct.
True. I pushed a one-way check-and-convert for this; perhaps there is a more elegant way to do it.
Interestingly, we have this pattern in a few different places, e.g. for NXTP_RELAYER_WAIT_TIME and NXTP_CARTOGRAPHER_POLL_INTERVAL.
We should update this in all those places, right @rhlsthrm @wanglonghong?
Yes, right. We have similar issues in other places.
Correct. I think it currently works because we are using the defaults, so we need this fix whenever the value is overridden.
This would still work because, even though Node is single-threaded, it interleaves all the pending tasks: while it is waiting on one, it processes the others.
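The event-loop behavior described above can be sketched as follows. This is an illustrative example, not code from the repo: three async tasks, each awaiting a 100 ms timer, finish in roughly 100 ms total on a single thread, because the event loop processes the others while each one is waiting.

```typescript
// Simulate an I/O-bound task (e.g. an RPC call) that takes `ms` milliseconds.
async function fakeTask(id: number, ms: number): Promise<number> {
  await new Promise<void>((resolve) => setTimeout(resolve, ms));
  return id;
}

export async function runConcurrently(): Promise<{ results: number[]; elapsedMs: number }> {
  const start = Date.now();
  // All three waits overlap on one thread; total time is ~100 ms, not ~300 ms.
  const results = await Promise.all([fakeTask(1, 100), fakeTask(2, 100), fakeTask(3, 100)]);
  return { results, elapsedMs: Date.now() - start };
}
```

Running the same tasks sequentially (awaiting each in turn) would take roughly 300 ms, which is the difference concurrency makes here even without multiple threads.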
Description
Type of change
High-level change(s) description - from the user's perspective
Related Issue(s)
Fixes
Related pull request(s)