Prod Release 06/06/2024 #768
Merged
Rejected promises must be handled via `await` or `.catch()`; without either of these, Node will exit due to the "unhandled rejection". Currently, Lake S3 requests are created in advance and then handled later, when the block is ready to be executed. Usually this is not an issue, as the delay between promise creation and handling is not too large. But for slow indexers (e.g. `nearpavel_near/bitmap_v2`), this delay is long enough for Node to consider the rejection "unhandled". This PR attaches a rejection handler to block requests so that failures can be handled gracefully. This does not affect the "pre-fetch" behaviour: these requests will still be executed ahead of time, but failed requests will now be handled within our code. Explicit handling binds the error to our execution context, providing a meaningful call stack, as opposed to a lone error which seemingly comes from nowhere. To mitigate the problem itself, i.e. failed S3 requests, I have bumped `maxAttempts` and changed the `retryMode` in the hope that the transient errors can be overcome.
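A minimal sketch of the pattern described above (function names are illustrative, not the actual Runner code): the rejection handler is attached at pre-fetch time, so a transient S3 failure never becomes a process-killing unhandled rejection, and the error resurfaces with our own context when the block is consumed.

```javascript
// Pre-fetch a block, but attach .catch() immediately. This marks any
// rejection as "handled" from Node's perspective while preserving the
// error so the eventual consumer can deal with it.
function prefetchBlock(fetchFn) {
  return fetchFn().catch((err) => ({ error: err }));
}

// Later, when the block is ready to be executed, surface the failure
// from within our own execution context, with a meaningful call stack.
async function processBlock(prefetched) {
  const result = await prefetched;
  if (result && result.error) {
    throw new Error(`Failed to fetch block: ${result.error.message}`);
  }
  return result;
}
```

The key point is that `.catch()` is attached at creation time, not at consumption time, so the gap between the two can be arbitrarily long without tripping Node's unhandled-rejection exit.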
Tracking which Indexer another Indexer was forked from is useful for various usage statistics. I've updated the frontend to record which indexer was forked, and to store that information in the contract after publishing. We specifically store the indexer which was directly forked. With this information, we can construct a graph using "forked_from" as edge information, and calculate other statistics, such as how many indexers used some other indexer as a base, by doing path counts. This can be done by creating an indexer which indexes the QueryApi contract itself. There are some bugs in the workflow currently. Specifically, refreshing while on the forked page will cause the loss of whatever code was written, requiring you to re-fork the Indexer. While this helps ensure the forked-from field is correct, if we end up fixing this behavior, we will need to ensure the forked-from value is stored persistently. As part of updating the contract, I've removed various old functions which are no longer necessary.
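To illustrate the path-count idea, here is a hypothetical sketch (field and function names are assumptions, not the contract's actual schema): given registry entries carrying a `forked_from` edge, count how many indexers directly or transitively derive from each one.

```javascript
// Count direct and transitive forks of each indexer, using the
// "forked_from" field as parent -> child edge information.
function countDescendants(indexers) {
  // Build a child adjacency list from the forked_from edges.
  const children = new Map();
  for (const { name, forked_from } of indexers) {
    if (forked_from) {
      if (!children.has(forked_from)) children.set(forked_from, []);
      children.get(forked_from).push(name);
    }
  }
  // Walk each subtree, summing the paths that lead back to the root.
  const walk = (name) => {
    let total = 0;
    for (const child of children.get(name) ?? []) {
      total += 1 + walk(child);
    }
    return total;
  };
  const counts = {};
  for (const { name } of indexers) counts[name] = walk(name);
  return counts;
}
```

An indexer over the QueryApi contract could maintain these edges incrementally rather than recomputing the whole graph.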
This PR centralises persistent Indexer state within the `IndexerStateManager` struct. Currently, we only persist the "stream version", but this will soon grow to include "enabled/disabled", which will be implemented in my next PR. This is just a tidy-up to make the next step a bit easier. Indexer state will be stored as stringified JSON under `{account_id}/{function_name}:state`; currently this only includes when the block stream was last synced. I've included a migration step to move from the old key/structure to the new one.
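The real manager lives in Rust inside Coordinator; the sketch below only illustrates the key scheme and migration shape described above, against a generic key/value store. The old key name and the state field are assumptions.

```javascript
// New key scheme: one JSON blob per indexer function.
function stateKey(accountId, functionName) {
  return `${accountId}/${functionName}:state`;
}

// Migrate from a hypothetical old per-field key to the new JSON state.
async function migrate(store, accountId, functionName) {
  const oldKey = `${accountId}/${functionName}:block_stream_synced_at`; // assumed old key
  const synced = await store.get(oldKey);
  if (synced !== undefined) {
    await store.set(
      stateKey(accountId, functionName),
      JSON.stringify({ block_stream_synced_at: Number(synced) })
    );
    await store.del(oldKey);
  }
}
```

Keeping all state in one JSON value means new fields like "enabled/disabled" can be added without another key migration.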
This PR exposes a new gRPC endpoint from Coordinator to "manage" indexers. Currently, this only allows for enabling/disabling, but will probably be expanded over time. There isn't any intention to use this from another service; it's more of a manual and internal tool that we can use. The endpoint is essentially just a wrapper over the persistent Redis state. The exposed methods mutate this state, which in turn is used to govern how Indexers should be synchronised. Within the `coordinator/` directory, the endpoint can be used with `grpcurl` like so:

- enable: `grpcurl -plaintext -proto proto/indexer_manager.proto -d '{"account_id": "morgs.near", "function_name": "test"}' 0.0.0.0:8002 indexer.IndexerManager.Enable`
- disable: `grpcurl -plaintext -proto proto/indexer_manager.proto -d '{"account_id": "morgs.near", "function_name": "test"}' 0.0.0.0:8002 indexer.IndexerManager.Disable`
- list: `grpcurl -plaintext -proto proto/indexer_manager.proto 0.0.0.0:8002 indexer.IndexerManager.List`
Hasura permissions were previously set without "backend only" enabled on mutations. We no longer want to allow users to make mutations to indexer data, as the data should stay consistent with the results of indexer code execution. For non-select permissions, Runner now sets backend-only mutations.
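For context, here is a hedged sketch of the shape of a Hasura metadata API payload that creates a non-select permission with `backend_only` enabled (the payload structure follows Hasura's metadata API for insert permissions; the table, role, and source names are made up, and this is not Runner's actual code):

```javascript
// Build a Hasura metadata API request that creates an insert permission
// which is only usable by backend (admin-secret / backend-only header)
// requests, i.e. Runner itself, not end users.
function backendOnlyInsertPermission(schema, table, role) {
  return {
    type: 'pg_create_insert_permission',
    args: {
      source: 'default',
      table: { schema, name: table },
      role,
      permission: {
        check: {},
        columns: '*',
        // With backend_only set, Hasura rejects this mutation unless the
        // request is flagged as coming from the backend.
        backend_only: true,
      },
    },
  };
}
```

The same flag applies to update and delete permissions, which is how all non-select access can be locked down.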
In Coordinator, iterating over all Indexers requires two loops: one over the accounts, and another over each account's functions. As iteration is quite common, I've added a custom `Iterator` implementation which achieves the same with a single loop.
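The actual implementation is a Rust `Iterator`; a JavaScript analogue using a generator conveys the same flattening (the registry shape here is an assumption for illustration):

```javascript
// Flatten the account -> functions nesting into a single iterable,
// so callers can use one loop instead of two.
function* iterIndexers(registry) {
  for (const [accountId, functions] of Object.entries(registry)) {
    for (const [functionName, config] of Object.entries(functions)) {
      yield { accountId, functionName, config };
    }
  }
}
```

Callers then write `for (const indexer of iterIndexers(registry))` and never repeat the nested loop.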
The error given to Stream Handler is occasionally a plain JSON object in disguise, not an `Error`. As a result, calling `error.toString()` returns `[object Object]` rather than the error contents. I've added a check: if the result of `toString()` is that value, return the `JSON.stringify` result instead. Note that calling `JSON.stringify` on a proper `Error` results in `{}`, so the two cases must be handled separately. To test this, I created test indexers which called one of the two pieces of code below, both of which throw unawaited async errors, and verified that both wrote the error message and stack trace into the log table.

```
const timeoutPromise = new Promise((_, reject) => {
  setTimeout(() => {
    reject(new Error('Error thrown after 100ms'));
  }, 100);
});
```

```
context.db.IndexerStorage.upsert({}, [], []);
```
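The check itself can be sketched as follows (the function name is illustrative, not the actual Stream Handler code):

```javascript
// Plain objects stringify to "[object Object]" via toString(), while
// Error instances stringify to "{}" via JSON.stringify — so each case
// needs its own serialisation path.
function errorToString(error) {
  const asString = error.toString();
  if (asString === '[object Object]') {
    return JSON.stringify(error); // a JSON object in disguise
  }
  return asString; // a proper Error: "name: message"
}
```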
Block Streamer will be querying block match data from an Indexer's Postgres tables through Hasura, so it needs to be able to query and parse data returned by Hasura's GraphQL API. This PR introduces a crate which can generate the code necessary to parse the data returned by a GraphQL query. I've also created a struct which encapsulates the code for making GraphQL calls, for ease of use when integrating or mocking this feature in the future.
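The actual client is generated Rust; the sketch below only shows the shape of the request Block Streamer makes against a Hasura GraphQL endpoint, with the transport injected so it can be mocked (the endpoint, header, and query are assumptions for illustration):

```javascript
// POST a GraphQL query to Hasura and unwrap the standard
// { data, errors } response envelope. fetchFn is injected so tests
// can substitute a mock transport, mirroring the mocking goal above.
async function queryHasura(fetchFn, endpoint, role, query, variables) {
  const response = await fetchFn(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Hasura-Role': role, // scope the query to the indexer's role
    },
    body: JSON.stringify({ query, variables }),
  });
  const { data, errors } = await response.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data;
}
```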
Introduce an improved logs table with new fields and the ability to search and filter based on radio inputs.
The new website for [near.org](http://near.org/) is now simply a marketing website served by this repo, and will no longer serve as the gateway: https://github.com/near/nearorg The gateway now lives at [dev.near.org](http://dev.near.org/) and remains the same: https://github.com/near/near-discovery I looked for dead links in the QueryApi repo; a few that used alpha.near were left unmodified. Changed most near.org references to dev.near.org.
There is a version mismatch between the @typescript-eslint/eslint-plugin and @typescript-eslint/parser versions required by our project and the versions required by ESLint. Specifically, ESLint requires version ^8.56.0 of @typescript-eslint/parser, while our project requires version ^7.9.0. https://console.cloud.google.com/cloud-build/builds/1825a649-e414-4a72-8c68-5c50625d128f;step=0?project=pagoda-data-stack-dev
The bitmap indexer returns a list of bitmaps, in the form of base64 strings, with associated start block heights. We need a way to combine all that data into a single bitmap with a single start block height. This PR introduces a new `BitmapOperator` class which holds all the operations necessary to return a combined binary bitmap whose index 0 corresponds to the lowest start block height.
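The merge described above can be sketched as follows. This is an illustrative re-implementation, not the actual `BitmapOperator` API: it assumes bit `i` of a bitmap (big-endian bit order within each byte) marks block `startBlockHeight + i`, and ORs every bitmap into one buffer offset against the lowest start height.

```javascript
// entries: [{ bitmap: base64 string, startBlockHeight: number }]
// Returns a single bitmap whose bit 0 is the lowest start height.
function mergeBitmaps(entries) {
  const decoded = entries.map((e) => ({
    bytes: Buffer.from(e.bitmap, 'base64'),
    start: e.startBlockHeight,
  }));
  const lowest = Math.min(...decoded.map((d) => d.start));
  const highestBit = Math.max(
    ...decoded.map((d) => d.start - lowest + d.bytes.length * 8 - 1)
  );
  const merged = Buffer.alloc(Math.ceil((highestBit + 1) / 8));
  for (const { bytes, start } of decoded) {
    const offset = start - lowest;
    for (let bit = 0; bit < bytes.length * 8; bit++) {
      // Read bit `bit` of the source (big-endian within each byte)...
      const set = (bytes[bit >> 3] >> (7 - (bit % 8))) & 1;
      if (set) {
        // ...and OR it into the merged bitmap at its shifted position.
        const target = offset + bit;
        merged[target >> 3] |= 1 << (7 - (target % 8));
      }
    }
  }
  return { startBlockHeight: lowest, bitmap: merged };
}
```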
Disabled some lint rules that were causing the build to fail. Ran a local build to verify it succeeds.
Improved the indexingLogic components as follows:
1. Separated components into view/container
2. Broke down larger components into more reusable and meaningful counterparts
3. Added TypeScript to files
4. Redesigned the IndexingLogic visuals to match the Logs (GCP) theme
5. Removed the unnecessary "Forms" folder structure, which was tightly coupled with the Modals
6. Removed dead code and refactored repeated logic
Runner's Hasura Client test container was exceeding its 120s timeout waiting for the "starting API Server" log message. It turns out the start message now uses a capital "S", which was the cause of the failure. Fixing the casing allowed the integration tests to pass again.
…Minor Bugs (#775) Enhanced the logs table by adding contextual information, such as timestamps and the number of blocks behind NEAR's latest block, for developers using the Logs Table.