feat(torii): index whitelisted erc20/erc721 #2494
Conversation
commit-id:5231a946
commit-id:d618e874
commit-id:10465a00
commit-id:3bfe5955
commit-id:923dbba6
- cleanup plus other refactoring for the PR stack
- `start_block` configuration removed, since it is no longer necessary now that we use `get_events` for `sync_range`
- use debug log level for transfer events

commit-id:4acad5f3
commit-id:c6195721
…untime (#2483)

* refactor(torii): handle normal and legacy erc contract type at runtime
* refactor(torii): use in-memory cache to speed up erc indexing
* fix lints
* add script to verify balances in torii db by querying rpc
* fix(torii/graphql): return only actual token_id in graphql response
* add debug statement for a state that shouldn't be reached
* add tests for I256 type
* update torii toml file
Walkthrough

Ohayo, sensei! This pull request introduces several changes across multiple files in the project.
Codecov Report

Attention: patch coverage is low — of the 1,491 added lines, only 479 are hit (roughly 32%; see the diff below).
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #2494      +/-   ##
==========================================
- Coverage   68.82%   67.74%   -1.09%
==========================================
  Files         377      388      +11
  Lines       48930    50421    +1491
==========================================
+ Hits        33677    34156     +479
- Misses      15253    16265    +1012

☔ View full report in Codecov by Sentry.
Actionable comments posted: 55
🧹 Outside diff range and nitpick comments (48)
crates/torii/core/src/lib.rs (1)
Ohayo sensei,

The `cache` module is still present and referenced in multiple files across the codebase. Please verify whether its removal was intentional and ensure that all related references are appropriately handled.

🔗 Analysis chain

Line range hint 1-11: Sensei, let's double-check on the `cache` module!

The AI summary mentioned the removal of the `cache` module, but I can't see this change in the provided code. Could we verify whether this removal was intentional and applied correctly? Let's run a quick check to confirm:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the removal of the cache module
# Test: Search for "pub mod cache" in the file
rg --type rust "pub mod cache" crates/torii/core/src/lib.rs
# If the above command doesn't find anything, the cache module has indeed been removed.
# If it finds something, the cache module is still present, contrary to the AI summary.

Length of output: 61
Script:
#!/bin/bash
# Description: Comprehensive verification of the cache module removal

# Search for any pub mod declarations related to cache
ast-grep --lang rust --pattern $'pub mod cache { $$$ }' .

# Search for any imports or uses of the cache module
ast-grep --lang rust --pattern $'use cache::$_' .

# Perform a broader search for the term "cache" in the codebase
rg --type rust "cache" .

Length of output: 14116
scripts/send_erc20_transfer.sh (1)
1-12: Ohayo, sensei! Let's add a brief explanation to our script.

The script is concise and focused, which is great! However, it would be helpful to add a brief comment at the beginning explaining its purpose and usage. This will improve maintainability and help other developers understand the script quickly.

Consider adding a comment like this at the beginning of the script:

#!/bin/bash
# This script sends an ERC20 token transfer using starkli.
# It requires a contract address as an argument and uses environment variables
# for configuration. If not set, it uses default values.
#
# Usage: ./send_erc20_transfer.sh <contract_address>
#
# Environment variables:
#   RPC_URL: The RPC endpoint (default: http://localhost:5050)
#   RECIPIENT: The recipient address (default: 0x1234)
#   AMOUNT: The amount to transfer (default: u256:1)
#   ACCOUNT_FILE: Path to the account file (default: $HOME/.starkli/account.json)
#   KEYSTORE_FILE: Path to the keystore file (default: $HOME/.starkli/signer.json)

# ... rest of the script ...

This comment provides a quick overview of the script's purpose and how to use it, making it more user-friendly and easier to maintain.
crates/torii/graphql/src/error.rs (1)
12-13: Ohayo, sensei! This addition looks great!

The new `NotFelt(String)` variant is a solid addition to the `ExtractError` enum. It follows the existing pattern and expands the error handling capabilities for the "felt" type.

As a tiny nitpick, consider adjusting the error message to match the capitalization of the other variants:

- #[error("Not a felt: {0}")]
+ #[error("Not a Felt: {0}")]

This would maintain consistency with variants like "Not a String" and "Not a Number".
crates/torii/graphql/src/object/erc/erc_token.rs (3)
6-7: Ohayo! Nice struct definition, sensei!

The `ErcTokenObject` struct is well-defined with the `Debug` derive attribute, which is great for debugging. However, I noticed it doesn't have any fields.

Consider adding a comment explaining why this struct has no fields, or whether it's intended to be a marker struct. This would help future developers understand its purpose more easily.
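A minimal sketch of such a comment, assuming the struct is intentionally field-less because `BasicObject` only needs the constants it returns:

/// GraphQL object for the ERC token type.
///
/// Intentionally a field-less marker struct: `BasicObject` only requires
/// the type name and type mapping, both of which come from constants.
#[derive(Debug)]
pub struct ErcTokenObject;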
9-21: Ohayo! Excellent trait implementation, sensei!

The `BasicObject` trait implementation for `ErcTokenObject` looks solid. I appreciate the use of constants, which promotes maintainability and reduces the chance of typos. The `type_mapping` method efficiently returns a reference to a static `TypeMapping`.

To improve code readability, consider adding a brief comment above each method explaining its purpose or any specific behavior. This would make it easier for other developers to understand the intent behind each method quickly.
1-21: Ohayo! Overall, great work on this new file, sensei!

The implementation of `ErcTokenObject` with the `BasicObject` trait is concise and effective. The use of constants and the efficient implementation of methods are commendable.

To further enhance this code:

- Consider adding comments to explain the purpose of the `ErcTokenObject` struct.
- Add brief comments above each method in the trait implementation to clarify their specific roles.

These small additions would greatly improve the maintainability and readability of the code.
If you'd like assistance in drafting these comments or have any questions about the suggestions, please let me know. I'm here to help, sensei!
crates/torii/migrations/20240913104418_add_erc.sql (2)
1-10: Ohayo, sensei! The `balances` table structure looks solid!

The table design effectively captures the necessary information for token balances. However, I have a small suggestion: consider using a numeric type (like DECIMAL or BIGINT) for the `balance` column instead of TEXT. This would ensure type safety and potentially improve query performance for numerical operations.
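To make the concern concrete: lexicographic TEXT ordering disagrees with numeric ordering unless values are zero-padded. (U256 balances don't fit SQLite's 64-bit INTEGER, which may be why TEXT was chosen here — an assumption on my part.) A small Rust illustration:

fn main() {
    assert!("9" > "10"); // string comparison is numerically wrong
    assert!(9 < 10);     // integer comparison is right

    // If the column stays TEXT, fixed-width zero-padded hex restores
    // the correct ordering for string comparisons.
    let (a, b) = (9u128, 10u128);
    let (pa, pb) = (format!("{a:064x}"), format!("{b:064x}"));
    assert!(pa < pb);
}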
25-35: Konnichiwa, sensei! The `erc_transfers` table is looking mighty fine!

The structure effectively captures all necessary transfer details. However, as with `balances`, consider using a numeric type (like DECIMAL or BIGINT) for the `amount` column instead of TEXT. Also, great choice using DATETIME for `executed_at`!

bin/torii/Cargo.toml (1)
49-49: Ohayo, sensei! The new dependency looks good!

The addition of `tempfile` as a workspace dependency is a solid choice. It's a useful crate for handling temporary files and directories, which can be handy in various scenarios.

If you need any help implementing temporary file handling in your code using this new dependency, just let this humble student know, sensei!
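For reference, a minimal sketch of typical `tempfile` usage (the TOML line written here is purely illustrative):

use std::io::Write;

use tempfile::NamedTempFile;

fn main() -> std::io::Result<()> {
    // The file lives until `file` is dropped, then is deleted automatically.
    let mut file = NamedTempFile::new()?;
    writeln!(file, "world_address = \"0x1234\"")?;
    println!("temporary config at {}", file.path().display());
    Ok(())
}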
crates/torii/graphql/src/utils.rs (1)
34-44: Ohayo once more, sensei! This implementation is looking sharp!

The new `ExtractFromIndexMap` implementation for `Felt` is like a well-executed kata. It handles errors gracefully and follows the established pattern. Excellent work!

A small suggestion to make it even more efficient: consider using `and_then` to chain the `Felt::from_str` operation:

impl ExtractFromIndexMap for Felt {
    fn extract(indexmap: &ValueMapping, input: &str) -> Result<Self, ExtractError> {
        indexmap
            .get(input)
            .ok_or_else(|| ExtractError::NotFound(input.to_string()))
            .and_then(|value| match value {
                Value::String(s) => {
                    Felt::from_str(s).map_err(|_| ExtractError::NotFelt(input.to_string()))
                }
                _ => Err(ExtractError::NotString(input.to_string())),
            })
    }
}

This approach reduces nesting and makes the code flow more linear. What do you think, sensei?
crates/torii/graphql/src/constants.rs (1)
36-38: Ohayo, sensei! These new ERC constants look great!

The new ERC-related type name constants are well-defined and follow the existing naming conventions. They'll be super helpful for working with ERC20 and ERC721 tokens.

Just a tiny suggestion to keep things consistent: consider adding a comment above these new constants, similar to the one at line 22, to group them together. Something like:

// ERC-related type names
pub const ERC_BALANCE_TYPE_NAME: &str = "ERC__Balance";
pub const ERC_TRANSFER_TYPE_NAME: &str = "ERC__Transfer";
pub const ERC_TOKEN_TYPE_NAME: &str = "ERC__Token";

This will help maintain the organization of the file as it grows. What do you think, sensei?
scripts/compare-torii-data.py (4)
7-13: Ohayo, sensei! This function looks solid, but let's make it even better!

The `fetch_table_data` function is well-implemented. However, we can improve it by ensuring the connection is closed even if an exception occurs. Note that `sqlite3`'s connection context manager only manages transactions, so wrap the connection in `contextlib.closing` to actually close it:

from contextlib import closing

def fetch_table_data(db_path, table_name, columns):
    with closing(sqlite3.connect(db_path)) as conn:
        cursor = conn.cursor()
        cursor.execute(f"SELECT {', '.join(columns)} FROM {table_name}")
        data = cursor.fetchall()
    return {row[0]: row[1:] for row in data}
23-42: Ohayo, sensei! This comparison function is looking sharp!

The `compare_data` function does a great job of comparing and reporting differences between the two datasets. It's clear and easy to understand.

For a small efficiency boost, consider using `set` operations for finding missing IDs:

set1 = set(data1.keys())
set2 = set(data2.keys())
missing_in_2 = set1 - set2
missing_in_1 = set2 - set1

for id in missing_in_2:
    print(f"ID {id} found in {table_name} of Database 1 but not in Database 2")
for id in missing_in_1:
    print(f"ID {id} found in {table_name} of Database 2 but not in Database 1")

This approach can be faster for large datasets.
52-81: Ohayo, sensei! This function is the heart of our script, and it's looking good!

The `compare_databases` function is well-structured and covers all the necessary steps for a comprehensive comparison. Great job on checking for table existence before comparison!

To make it even more robust, consider adding some basic error handling:

def compare_databases(db_path1, db_path2):
    # ... (existing code) ...
    for table_name, columns in table_columns.items():
        try:
            if table_exists(db_path1, table_name) and table_exists(db_path2, table_name):
                # ... (existing comparison code) ...
            else:
                print(f"\nSkipping {table_name} table as it doesn't exist in one or both databases.")
        except sqlite3.Error as e:
            print(f"An error occurred while comparing {table_name}: {e}")
    # ... (rest of the function) ...

This will help catch and report any SQLite-related errors that might occur during the comparison process.
83-89: Ohayo, sensei! The main script is looking clean and efficient!

The main script structure is well-organized, using `argparse` for handling command-line arguments. This makes the script user-friendly and easy to use.

To make it even more robust, consider adding some basic error handling (note that `sys.exit` requires an `import sys` at the top of the script):

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Compare data in events, entities, transactions, balances, tokens, and erc_transfers tables between two SQLite databases.")
    parser.add_argument("db_path1", help="Path to the first SQLite database")
    parser.add_argument("db_path2", help="Path to the second SQLite database")
    args = parser.parse_args()

    try:
        compare_databases(args.db_path1, args.db_path2)
    except Exception as e:
        print(f"An error occurred: {e}")
        sys.exit(1)

This will catch and report any unexpected errors that might occur during the execution of `compare_databases`.

crates/torii/core/src/sql/cache.rs (1)
128-162: Ohayo, sensei! Impressive new `LocalCache` struct!

The new `LocalCache` struct and its implementation look well-designed for caching ERC token-related data. The methods provided offer a good set of basic functionality for managing the cache.

One suggestion for improvement: the `Clone` implementation creates new empty collections instead of cloning the existing ones. This might not be the expected behavior. Consider updating it to actually clone the existing data:

impl Clone for LocalCache {
    fn clone(&self) -> Self {
        Self {
            erc_cache: self.erc_cache.clone(),
            token_id_registry: self.token_id_registry.clone(),
        }
    }
}

This ensures that the cloned instance contains the same data as the original.

Overall, great addition to the caching mechanism, sensei!
crates/torii/graphql/src/schema.rs (1)
Line range hint 1-150: Ohayo once more, sensei! Let's wrap this up!

The changes to add ERC-related functionality to the schema are well-integrated and don't disrupt the existing structure. Great job on maintaining consistency!
To further enhance the implementation, consider the following suggestions:
- Add error handling or validation specific to the new ERC objects.
- Update any relevant documentation or comments to reflect these new additions.
- Ensure that appropriate tests are added or updated to cover the new ERC functionality.
Would you like assistance in implementing any of these suggestions, sensei?
crates/torii/graphql/src/object/mod.rs (1)
Line range hint 1-24: Ohayo again, sensei! Let's consider the integration of our new `erc` module.

While the addition of the `erc` module is a good start, it's important to consider how it will integrate with the existing GraphQL object structure. Here are a few points to ponder:

- Will the `erc` module define new `BasicObject` or `ResolvableObject` implementations?
- Are there any new resolvers, subscriptions, or input objects that need to be added for ERC20/ERC721 functionality?
- How will the new ERC-related types interact with the existing connection and edge objects?

It might be beneficial to outline these considerations in the PR description or in comments within the `erc` module itself.

crates/torii/core/src/sql/query_queue.rs (2)
8-8: Ohayo, sensei! Nice import addition!

The new import of `felt_to_sql_string` from `super::utils` looks good. It's a great way to keep the code organized and promote reusability.

Consider grouping related imports together; you could move this import next to other utility imports, if there are any.
85-85: Ohayo again, sensei! Excellent refactoring!

The replacement of inline formatting with `felt_to_sql_string(felt)` is a great improvement. It enhances code readability and ensures consistent formatting of `FieldElement` throughout the codebase.

Consider adding error handling for the `felt_to_sql_string` function call. While it's likely a simple conversion, it's always good to be prepared for unexpected issues:

Argument::FieldElement(felt) => query.bind(
    felt_to_sql_string(felt)
        .map_err(|e| anyhow::anyhow!("Failed to convert Felt to SQL string: {}", e))?,
),

This assumes `felt_to_sql_string` returns a `Result`. If it doesn't, you can ignore this suggestion.

crates/torii/libp2p/src/tests.rs (1)
Line range hint 527-538: Ohayo, import-sensei! Let's talk about these changes.

I see you've made some adjustments to the imports. It looks like we're shifting towards more database-focused operations and async programming. Nice move!
Consider grouping related imports together for better readability. For example, you could group all database-related imports and all async-related imports.
examples/spawn-and-move/manifests/dev/deployment/manifest.json (2)
Line range hint 1258-2359: Sugoi changes to the models, sensei!

New models have been added or modified, expanding the game's data structures:
- PlayerItem: Includes item_id, quantity, and score.
- PlayerConfig: Contains player address, name, items array, and favorite_item.
- ServerProfile: Stores player address, server_id, and name.
These changes suggest new features or refinements to the gameplay. For example:
- The PlayerItem model might indicate an inventory system.
- PlayerConfig could be used for player customization and preferences.
- ServerProfile might be related to multi-server support or player profiles per server.
Please ensure that these new models are properly integrated into your game logic and that any existing code interacting with these data structures is updated accordingly.
Would you like assistance in generating example usage code for these new models, sensei?
Ohayo sensei!
I’ve detected that the old contract addresses are still present in the following files:
examples/spawn-and-move/manifests/release/base/models/dojo_examples-ServerProfile-4caad1e6.toml
examples/spawn-and-move/manifests/dev/base/models/dojo_examples-ServerProfile-4caad1e6.toml
examples/spawn-and-move/manifests/dev/deployment/manifest.toml
examples/spawn-and-move/manifests/dev/base/models/dojo_examples_foes-RiverSkale-39535c12.toml
Please update these references to ensure consistency across the project.
🔗 Analysis chain
Line range hint 1242-1245: Contract redeployments detected, sensei!

Several contracts have been redeployed with new addresses:

- `dojo_examples-actions`: `0x47dcd4e30618d32fd27951e977df1d819d6edff2506e52c653f658134062036`
- `dojo_examples-mock_token`: `0x71c327d19b2d0c6a0f65cced59f60fe69776af6885e8f42d8f6775af00f5d5b`
- `dojo_examples-others`: `0x4f3cbb247febb63bf5ab34d87504fd85e7a3b4ab6ff16fa2bf23597bf3309c7`

The class hashes and ABIs remain unchanged, indicating that the contract logic is the same. Please update any references to these contract addresses in your project.
Also applies to: 1246-1249, 1250-1253, 1254-1257
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify that the new contract addresses are used consistently across the project

echo "Checking for old contract addresses..."
rg --type-add 'code:*.{rs,toml}' --type code '0x7fed9d6acaa45011152ea992784d41df3ae74eedbc199feb91b3fcbdb78abad|0x6a116545acc77daea729bf5f14a71f634ddddedb7e7e77f280d8d984054edaf|0x2fa72f20995710bef20ac3c36e2f43ec210517a787927ea3407e2b29c21bb0b'

echo "Checking for new contract addresses..."
rg --type-add 'code:*.{rs,toml}' --type code '0x47dcd4e30618d32fd27951e977df1d819d6edff2506e52c653f658134062036|0x71c327d19b2d0c6a0f65cced59f60fe69776af6885e8f42d8f6775af00f5d5b|0x4f3cbb247febb63bf5ab34d87504fd85e7a3b4ab6ff16fa2bf23597bf3309c7'

Length of output: 3269
crates/torii/core/src/processors/erc20_transfer.rs (1)
53-55: Optimize database interaction by passing the provider directly

Ohayo sensei,

In the `process` method, `world.provider()` is passed to `db.handle_erc20_transfer`. If the provider is needed only for database operations, consider passing the provider directly to reduce coupling and enhance modularity.

Apply this diff to pass the provider directly:

 async fn process(
     &self,
     world: &WorldContractReader<P>,
     db: &mut Sql,
     _block_number: u64,
     block_timestamp: u64,
     _event_id: &str,
     event: &Event,
 ) -> Result<(), Error> {
     let token_address = event.from_address;
     let from = event.keys[1];
     let to = event.keys[2];

     let value = U256Cainome::cairo_deserialize(&event.data, 0)?;
     let value = U256::from_words(value.low, value.high);

-    db.handle_erc20_transfer(token_address, from, to, value, world.provider(), block_timestamp)
+    let provider = world.provider();
+    db.handle_erc20_transfer(token_address, from, to, value, provider, block_timestamp)
         .await?;

     debug!(target: LOG_TARGET, from = ?from, to = ?to, value = ?value, "ERC20 Transfer");

     Ok(())
 }

crates/torii/core/src/processors/erc20_legacy_transfer.rs (2)
26-35
: Simplify thevalidate
method for clarity, senseiYou can streamline the
validate
method by returning the condition directly:-fn validate(&self, event: &Event) -> bool { - if event.keys.len() == 1 && event.data.len() == 4 { - return true; - } - false -} +fn validate(&self, event: &Event) -> bool { + event.keys.len() == 1 && event.data.len() == 4 +}
55-55
: Includetoken_address
in the debug log for better traceability, senseiAdding
token_address
to the debug statement enhances logging and aids in debugging.Suggested change:
-debug!(target: LOG_TARGET,from = ?from, to = ?to, value = ?value, "Legacy ERC20 Transfer"); +debug!(target: LOG_TARGET, token_address = ?token_address, from = ?from, to = ?to, value = ?value, "Legacy ERC20 Transfer");crates/torii/core/src/processors/erc721_transfer.rs (1)
14-65
: Consider adding unit tests for `Erc721TransferProcessor`

Ohayo sensei! Adding unit tests for `Erc721TransferProcessor` would help ensure its correct functionality and prevent future regressions.

Would you like assistance in creating unit tests for this processor, or should I open a new GitHub issue to track this task?
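A sketch of what such a test could look like. The key layout below (selector, from, to, token_id.low, token_id.high — with empty data) is an assumption about the new-style ERC721 Transfer event, and the free `validate` function is a stand-in mirroring the processor's check; wire the real `Erc721TransferProcessor` in instead:

use starknet::core::types::Event;
use starknet::core::utils::get_selector_from_name;
use starknet_crypto::Felt;

// Stand-in for the processor's validation rule (an assumption).
fn validate(event: &Event) -> bool {
    event.keys.len() == 5 && event.data.is_empty()
}

#[test]
fn accepts_well_formed_erc721_transfer() {
    let event = Event {
        from_address: Felt::ONE,
        keys: vec![
            get_selector_from_name("Transfer").unwrap(),
            Felt::ONE,   // from
            Felt::TWO,   // to
            Felt::THREE, // token_id.low
            Felt::ZERO,  // token_id.high
        ],
        data: vec![],
    };
    assert!(validate(&event));
}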
crates/torii/core/src/processors/erc721_legacy_transfer.rs (1)
62-62
: Consider adjusting log level for transfer events

Ohayo, sensei! The transfer details are being logged at the `debug` level. If these events are important for monitoring or auditing, consider using the `info` level to make them more visible in the logs.

crates/torii/core/src/types.rs (2)
91-91: Ohayo, sensei! Should `ToriiConfig` also derive `Serialize`?

Since you're deriving `Deserialize` for `ToriiConfig`, consider deriving `Serialize` as well if you plan to serialize the configuration elsewhere.

Apply this diff:

 #[derive(Default, Deserialize, Debug, Clone)]
+#[derive(Serialize)]
 pub struct ToriiConfig
126-126: Ohayo, sensei! Update the error message for clarity in `from_str`.

Since `ContractType` includes `WORLD`, which isn't an ERC type, consider changing the error message to `Invalid contract type`.

Apply this diff:

- _ => Err(anyhow::anyhow!("Invalid ERC type: {}", input)),
+ _ => Err(anyhow::anyhow!("Invalid contract type: {}", input)),

crates/torii/graphql/src/tests/metadata_test.rs (1)
87-87: Ohayo sensei! Ensure consistent error handling with `unwrap()`

In lines 87 and 129, you're using `unwrap()` after `db.set_metadata`. While `unwrap()` is acceptable in tests, consider handling potential errors explicitly to improve test robustness and clarity. If you expect these operations to always succeed in your tests, you might leave a comment explaining this assumption.
Also applies to: 129-129
scripts/verify_db_balances/src/main.rs (3)
71-71: Ohayo, sensei! Consider making the database path configurable.

The database path is currently hardcoded; making it configurable enhances flexibility and reusability. You might introduce a command-line argument or environment variable for the database URL.
91-93: Ohayo, sensei! Suggest making the RPC endpoint configurable.

Hardcoding the RPC URL limits flexibility. Allowing it to be set via command-line arguments or environment variables would be beneficial. Consider using `std::env::var` or a command-line parsing crate like `structopt` or `clap`.
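A minimal sketch of the environment-variable approach for both of the settings above — the variable names are illustrative, not an existing convention:

use std::env;

fn main() {
    // Fall back to the current hardcoded values when the variables are unset.
    let db_path = env::var("TORII_DB_PATH").unwrap_or_else(|_| "torii.db".to_string());
    let rpc_url =
        env::var("STARKNET_RPC_URL").unwrap_or_else(|_| "http://localhost:5050".to_string());
    println!("db: {db_path}, rpc: {rpc_url}");
}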
116-121: Ohayo, sensei! Enhance mismatch logging for better clarity.

Including the `token_id` and `contract_type` in the mismatch log can provide more context during debugging.

Apply this diff:

-error!(
-    "Mismatch for account {} and contract {}: DB balance = {}, Starknet balance = {}",
-    account_address, contract_address, db_balance, starknet_balance
-);
+error!(
+    "Mismatch for account {}, contract {} ({}), token ID {}: DB balance = {}, Starknet balance = {}",
+    account_address, contract_address, contract_type, token_id, db_balance, starknet_balance
+);

crates/torii/grpc/src/server/tests/entities_test.rs (2)
41-43: Ohayo, sensei! Consider specifying connection timeouts to prevent resource leaks.

Setting `idle_timeout` and `max_lifetime` to `None` in the database connection options may cause connections to remain open indefinitely. While this might be acceptable in a test environment, explicitly setting reasonable timeouts is good practice to ensure proper resource management.
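A sketch of bounded pool settings with sqlx — the durations are arbitrary placeholders, not recommended values:

use std::time::Duration;

use sqlx::sqlite::SqlitePoolOptions;

async fn connect(url: &str) -> sqlx::Result<sqlx::SqlitePool> {
    SqlitePoolOptions::new()
        .max_connections(5)
        // Reclaim idle connections instead of letting them live forever.
        .idle_timeout(Some(Duration::from_secs(60)))
        .max_lifetime(Some(Duration::from_secs(30 * 60)))
        .connect(url)
        .await
}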
110-111: Ohayo, sensei! Avoid duplicating the `world_contracts` HashMap.

The `HashMap` containing `strat.world_address` is created twice, once when initializing `db` and again for `engine`. To adhere to the DRY (Don't Repeat Yourself) principle, consider defining it once and reusing it.

Apply this change to reduce duplication:

+let world_contracts = Arc::new(HashMap::from([(strat.world_address, ContractType::WORLD)]));

 // Initialize db
 let db = Sql::new(
     pool.clone(),
     sender,
-    &HashMap::from([(strat.world_address, ContractType::WORLD)]),
+    &world_contracts,
 )
 .await
 .unwrap();

 // Initialize engine
 let mut engine = Engine::new(
     world_reader,
     db.clone(),
     Arc::clone(&provider),
     Processors { ..Processors::default() },
     EngineConfig::default(),
     shutdown_tx,
     None,
-    Arc::new(HashMap::from([(strat.world_address, ContractType::WORLD)])),
+    world_contracts.clone(),
 );

Also applies to: 124-124
crates/torii/graphql/src/mapping.rs (1)
7-7: Ohayo sensei! Ensure consistent ordering of imports.

The added import of `ERC_TOKEN_TYPE_NAME` is great. For better readability, please ensure that import statements are consistently ordered, perhaps alphabetically or by relevance.

crates/torii/core/src/sql/utils.rs (2)
33-36: Sensei, consider using an existing big-integer library for signed 256-bit integers.

Implementing signed big-integer arithmetic can be complex and error-prone. Crates like `num` (arbitrary-precision `BigInt`) or `ethnum` (fixed-width `I256`) provide robust implementations that handle edge cases and overflows, and offer additional functionality.
114-267: Sensei, consider adding tests for overflow scenarios.

While the current tests cover various arithmetic cases, they don't include scenarios where `U256` values are at or near their maximum. Testing with large values can help ensure that your implementation handles overflows correctly.

Would you like assistance in generating these additional tests?
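A sketch of the kind of boundary test meant here. The sign-magnitude layout below is an assumption about how the PR's `I256` works (a `U256` magnitude plus a sign flag), using `primitive_types::U256` as a stand-in:

use primitive_types::U256;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct I256 {
    value: U256,
    is_negative: bool,
}

impl I256 {
    fn add(self, other: I256) -> I256 {
        if self.is_negative == other.is_negative {
            // Same sign: magnitudes add and can overflow at U256::MAX.
            let (value, overflow) = self.value.overflowing_add(other.value);
            assert!(!overflow, "magnitude overflow");
            I256 { value, is_negative: self.is_negative }
        } else if self.value >= other.value {
            // Opposite signs: subtract the smaller magnitude from the larger.
            I256 { value: self.value - other.value, is_negative: self.is_negative }
        } else {
            I256 { value: other.value - self.value, is_negative: other.is_negative }
        }
    }
}

#[test]
fn cancelling_at_the_boundary() {
    let max = I256 { value: U256::MAX, is_negative: false };
    let neg_max = I256 { value: U256::MAX, is_negative: true };
    assert_eq!(max.add(neg_max).value, U256::zero());
}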
crates/torii/graphql/src/tests/mod.rs (1)
334-334: Consistent use of providers

Ohayo, sensei! You've updated `TransactionWaiter` to use `account.provider()`. Ensure consistent use of `account.provider()` throughout the code to maintain clarity and prevent potential mismatches.

crates/torii/core/src/sql/erc.rs (1)
89-89: Add context to error handling for consistency

Ohayo sensei! When calling `self.execute().await` in `handle_erc721_transfer`, consider using `.with_context()` to provide meaningful error messages, just like in `handle_erc20_transfer`. This will enhance error traceability and maintain consistency across the codebase.

crates/torii/core/src/sql/test.rs (2)
127-131: Ohayo sensei! Handle potential errors instead of using `unwrap()`

Using `.unwrap()` can cause the test to panic if an error occurs during database setup. Consider handling errors gracefully with proper error messages to improve test robustness and reliability.
112-123: Ohayo sensei! Ensure proper handling of transaction failures

In the `move` transaction execution, errors are unwrapped using `.unwrap()`. If the transaction fails, it will cause a panic. Consider adding error handling to gracefully manage potential transaction failures.

crates/torii/core/src/engine.rs (3)
295-300: Variable shadowing detected

At lines 295~ and 299~, the variable `from` is re-assigned:

let from = cursors.head.unwrap_or(0);
// ...
let from = if from == 0 { from } else { from + 1 };

This shadowing might be confusing. Consider renaming the second `from` to `adjusted_from` or similar to improve clarity:

let adjusted_from = if from == 0 { from } else { from + 1 };
Line range hint 748-767: Graceful handling when no processors are found

At line 749~, if `processors.get(&event_key)` returns `None`, the code attempts to use the `catch_all_event` processor or logs unprocessed events. Ensure that this fallback mechanism adequately handles unexpected events without causing errors.
Ohayo, sensei!

I've verified the `cursor_map` updates in `process_pending` and found that `cursor_map.insert` is not called within lines 475-513 of `crates/torii/core/src/engine.rs`. This means `cursor_map` may not be updated correctly for each processed event, potentially leading to re-processing in future iterations.

🔗 Analysis chain
Line range hint 475-496: Ensure correct cursor updates for pending transactions

In `process_pending` (starting at line 475~), updates to `cursor_map` are crucial for accurately tracking processed events. Verify that `cursor_map` is updated for each processed event to prevent re-processing in future iterations.

To check where and how `cursor_map` is updated, use the following script:
Ohayo, sensei!

The previous script had a syntax error. Here's the corrected version to verify `cursor_map` updates:

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify cursor_map updates in process_pending
# Expected: Find all instances where cursor_map is modified.
ast-grep --lang rust --pattern $'cursor_map.insert($_, $_);'

Length of output: 3431
Script:
#!/bin/bash
# Description: Verify cursor_map updates in process_pending
# Expected: Find all instances where cursor_map is modified via insert
ast-grep --lang rust --pattern $'cursor_map.insert($_, $_);'

Length of output: 171
crates/torii/core/src/sql/mod.rs (1)
Line range hint 627-647: Ohayo sensei! Rework the mutable references to temporaries in the `build_register_queries_recursive` call

In the `build_member` closure, mutable references are taken to temporary values (e.g. `&mut (*model_idx + 1)`) when calling `build_register_queries_recursive`. This compiles, but the increments are applied to temporaries, so the caller's counters are never updated — which is almost certainly not the intent.

Here's a suggested fix to adjust the variables before the recursive call:

 // Before the recursive call
+*model_idx += 1;
+let mut new_array_idx = *array_idx + if let Ty::Array(_) = member { 1 } else { 0 };
+let mut new_parent_array_idx = *parent_array_idx + if let Ty::Array(_) = model { 1 } else { 0 };
 self.build_register_queries_recursive(
     selector,
     member,
     path_clone,
-    &mut (*model_idx + 1),
     block_timestamp,
-    &mut (*array_idx + if let Ty::Array(_) = member { 1 } else { 0 }),
-    &mut (*parent_array_idx + if let Ty::Array(_) = model { 1 } else { 0 }),
+    model_idx,
+    &mut new_array_idx,
+    &mut new_parent_array_idx,
 )?;

This change ensures the updated counters are passed as mutable references to real variables rather than discarded temporaries.
crates/torii/graphql/src/tests/subscription_test.rs (1)
Line range hint 90-93: Avoid using arbitrary sleep durations in asynchronous tests

Ohayo, Sensei! In the asynchronous tasks starting at lines 90-93, 240-243, 314-317, 388-391, and 459-462, `tokio::time::sleep(Duration::from_secs(1)).await` is used to wait before proceeding. Relying on fixed sleep durations can lead to flaky tests due to timing variations. Consider using synchronization primitives like channels or barriers to coordinate between tasks more reliably.

Also applies to: 240-243, 314-317, 388-391, 459-462
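A sketch of one such primitive, `tokio::sync::Notify`, replacing the fixed sleep with an explicit signal plus a bounded timeout (the surrounding task structure is assumed):

use std::sync::Arc;
use std::time::Duration;

use tokio::sync::Notify;

#[tokio::main]
async fn main() {
    let ready = Arc::new(Notify::new());
    let signal = ready.clone();

    tokio::spawn(async move {
        // ... publish the entity update here ...
        signal.notify_one();
    });

    // Wait for the signal instead of sleeping a fixed second.
    tokio::time::timeout(Duration::from_secs(5), ready.notified())
        .await
        .expect("subscription setup timed out");
}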
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (3)
- `Cargo.lock` is excluded by `!**/*.lock`
- `spawn-and-move-db.tar.gz` is excluded by `!**/*.gz`
- `types-test-db.tar.gz` is excluded by `!**/*.gz`
📒 Files selected for processing (55)
- .gitignore (1 hunks)
- Cargo.toml (1 hunks)
- bin/torii/Cargo.toml (1 hunks)
- bin/torii/src/main.rs (9 hunks)
- bin/torii/torii.toml (1 hunks)
- crates/torii/core/Cargo.toml (2 hunks)
- crates/torii/core/src/engine.rs (24 hunks)
- crates/torii/core/src/executor.rs (1 hunks)
- crates/torii/core/src/lib.rs (1 hunks)
- crates/torii/core/src/processors/erc20_legacy_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc20_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc721_legacy_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc721_transfer.rs (1 hunks)
- crates/torii/core/src/processors/metadata_update.rs (2 hunks)
- crates/torii/core/src/processors/mod.rs (1 hunks)
- crates/torii/core/src/processors/store_set_record.rs (2 hunks)
- crates/torii/core/src/processors/store_transaction.rs (1 hunks)
- crates/torii/core/src/sql/cache.rs (5 hunks)
- crates/torii/core/src/sql/erc.rs (1 hunks)
- crates/torii/core/src/sql/mod.rs (35 hunks)
- crates/torii/core/src/sql/query_queue.rs (2 hunks)
- crates/torii/core/src/sql/test.rs (7 hunks)
- crates/torii/core/src/sql/utils.rs (1 hunks)
- crates/torii/core/src/types.rs (2 hunks)
- crates/torii/graphql/Cargo.toml (2 hunks)
- crates/torii/graphql/src/constants.rs (2 hunks)
- crates/torii/graphql/src/error.rs (1 hunks)
- crates/torii/graphql/src/mapping.rs (2 hunks)
- crates/torii/graphql/src/object/erc/erc_balance.rs (1 hunks)
- crates/torii/graphql/src/object/erc/erc_token.rs (1 hunks)
- crates/torii/graphql/src/object/erc/erc_transfer.rs (1 hunks)
- crates/torii/graphql/src/object/erc/mod.rs (1 hunks)
- crates/torii/graphql/src/object/mod.rs (1 hunks)
- crates/torii/graphql/src/schema.rs (3 hunks)
- crates/torii/graphql/src/tests/entities_test.rs (2 hunks)
- crates/torii/graphql/src/tests/metadata_test.rs (4 hunks)
- crates/torii/graphql/src/tests/mod.rs (5 hunks)
- crates/torii/graphql/src/tests/models_ordering_test.rs (2 hunks)
- crates/torii/graphql/src/tests/models_test.rs (2 hunks)
- crates/torii/graphql/src/tests/subscription_test.rs (9 hunks)
- crates/torii/graphql/src/utils.rs (2 hunks)
- crates/torii/grpc/Cargo.toml (1 hunks)
- crates/torii/grpc/src/server/mod.rs (1 hunks)
- crates/torii/grpc/src/server/tests/entities_test.rs (4 hunks)
- crates/torii/libp2p/src/server/mod.rs (3 hunks)
- crates/torii/libp2p/src/tests.rs (4 hunks)
- crates/torii/migrations/20240913104418_add_erc.sql (1 hunks)
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql (1 hunks)
- examples/spawn-and-move/manifests/dev/deployment/manifest.json (1 hunks)
- examples/spawn-and-move/manifests/dev/deployment/manifest.toml (1 hunks)
- scripts/compare-torii-data.py (1 hunks)
- scripts/deploy_erc20_katana.sh (1 hunks)
- scripts/send_erc20_transfer.sh (1 hunks)
- scripts/verify_db_balances/Cargo.toml (1 hunks)
- scripts/verify_db_balances/src/main.rs (1 hunks)
✅ Files skipped from review due to trivial changes (5)
- bin/torii/torii.toml
- crates/torii/graphql/src/object/erc/mod.rs
- crates/torii/grpc/src/server/mod.rs
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql
- scripts/verify_db_balances/Cargo.toml
🔇 Additional comments (87)
crates/torii/core/src/lib.rs (1)
5-5: Ohayo, sensei! The new `executor` module looks promising!

The addition of the `executor` module aligns well with our mission to implement features for indexing whitelisted ERC20 and ERC721 tokens. It's a step in the right direction for our Torii module.

scripts/deploy_erc20_katana.sh (1)
1-1
: Ohayo, sensei! The shebang looks good!

The shebang line is correct for a bash script. Nice work!
.gitignore (1)
24-24: Ohayo, sensei! This change looks great!

The addition of `**/*.log` to the `.gitignore` file is a smart move. It will ensure that all those pesky log files, no matter where they're hiding in your project, won't sneak into your Git repository. This keeps your repo clean and prevents accidental commits of potentially sensitive or unnecessary information.

Keep up the excellent work, sensei!
scripts/send_erc20_transfer.sh (1)
1-7
: Ohayo, script header looks great, sensei!

The shebang line and argument check are implemented correctly. The error message and usage instructions are clear and helpful.
crates/torii/graphql/src/object/erc/erc_token.rs (1)
1-4
: Ohayo! Imports look good, sensei!

The imports are well-organized and relevant to the functionality implemented in this file. No unused imports are present.
crates/torii/core/src/processors/store_transaction.rs (1)
24-24: Ohayo, error handling sensei! This change looks sugoi!

The addition of the `?` operator to `db.store_transaction` is a great improvement. It ensures that any errors during the transaction storage process are properly propagated, making our code more robust and easier to debug.

crates/torii/migrations/20240913104418_add_erc.sql (3)
12-13: Excellent indexing strategy, sensei!

The indexes on `account_address` and `contract_address` are well-chosen. They will significantly boost query performance when filtering or joining on these columns, which are likely to be common operations.
15-23
: Ohayo! Thetokens
table is looking sharp, sensei!The structure captures all essential token information efficiently. The use of INTEGER for
decimals
is spot-on, and the foreign key relationship with thecontracts
table ensures data integrity. Great job!
1-35
: Arigato gozaimasu for this excellent migration file, sensei!Overall, the structure of this migration is well-thought-out and robust. The tables (
balances
,tokens
, anderc_transfers
) are designed with clear relationships and appropriate constraints. The indexes on thebalances
table will help optimize query performance.The only minor suggestions are to consider using numeric types for the
balance
column in thebalances
table and theamount
column in theerc_transfers
table. These changes could potentially improve type safety and query performance for numerical operations.Great work on implementing this feature for indexing whitelisted ERC20 and ERC721 tokens!
crates/torii/core/Cargo.toml (3)
32-32
: Ohayo, sensei! Thestarknet
dependency is back in action!Great job reintroducing the
starknet
dependency with workspace configuration. This aligns perfectly with our ERC20 and ERC721 token indexing objectives.
46-46
: Ohayo!tempfile
joins the dev-dependencies party, sensei!Nice addition of the
tempfile
dev-dependency with workspace configuration. This will be handy for our testing scenarios. Just make sure we're putting it to good use in our test suite!#!/bin/bash # Description: Check for tempfile usage in tests # Expected: Find test files using the tempfile crate rg --type rust 'use.*tempfile' -g 'tests/**'
37-37
: Ohayo! Newtoml
dependency spotted, sensei!I see you've added the
toml
dependency with workspace configuration. While it fits nicely with our other dependencies, could you enlighten us on its specific purpose in this context? Is it related to some configuration parsing for our token indexing feature?✅ Verification successful
Ohayo, sensei! 🐰 It looks like there was an issue with the previous search. Let me run a corrected check for the
toml
usage.
Ohayo, sensei! 🐰 The
toml
dependency is indeed utilized for configuration parsing in several modules, includingconfig/profile_config.rs
andmanifest/mod.rs
, which supports our token indexing feature.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for TOML usage in the codebase # Expected: Find TOML-related code that justifies this new dependency rg --type rust 'use.*toml' -g '!Cargo.toml'Length of output: 289
Script:
#!/bin/bash # Description: Corrected check for TOML usage in the codebase # Expected: Find TOML-related code that justifies this new dependency rg --type rust 'use\s+toml' -g '!Cargo.toml'Length of output: 318
crates/torii/graphql/Cargo.toml (4)
34-34
: Ohayo again, sensei! Welcome back, tokio!The return of the tokio dependency is a great move! It's like welcoming back an old friend to help with our asynchronous tasks.
Let's make sure tokio is being used properly:
#!/bin/bash # Description: Verify the usage of tokio in the codebase. # Test: Search for tokio usage rg --type rust 'use tokio|#\[tokio::' crates/torii
51-51
: Ohayo once more, sensei! StarkNet is back in action!Great to see the starknet dependency make a comeback in our dev-dependencies! It's like inviting a powerful ally to our testing dojo.
Let's check how we're wielding this StarkNet power in our tests:
#!/bin/bash # Description: Verify the usage of starknet in tests. # Test: Search for starknet usage in test files rg --type rust 'use.*starknet' crates/torii/**/*test.rs
52-52
: Ohayo for the last time, sensei! Welcome, tempfile-san!Adding tempfile to our dev-dependencies is a smart move! It's like having a helpful ninja to manage our temporary test files.
Let's see how we're utilizing this new tempfile power in our tests:
#!/bin/bash # Description: Verify the usage of tempfile in tests. # Test: Search for tempfile usage in test files rg --type rust 'use.*tempfile' crates/torii/**/*test.rs
28-28
: Ohayo, sensei! The starknet-crypto addition looks good!The addition of the starknet-crypto dependency is a solid choice for enhancing StarkNet-related cryptographic capabilities. Using the workspace version is a wise move for maintaining consistency.
Let's make sure it's being put to good use, shall we? Here's a quick check:
✅ Verification successful
Ohayo, sensei! It looks like our last attempt didn't yield the results we were hoping for. Let's try a more comprehensive search for
starknet_crypto
to ensure it's being utilized properly.
Ohayo, sensei! The
starknet-crypto
dependency is actively used across multiple modules in thecrates/torii
package, ensuring that our cryptographic functionalities are well-integrated and consistent throughout the project.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the usage of starknet-crypto in the codebase. # Test: Search for starknet-crypto usage rg --type rust 'use.*starknet_crypto' crates/toriiLength of output: 197
Script:
#!/bin/bash # Description: Re-verify the usage of starknet-crypto in the codebase. # Test: Search for starknet-crypto usage in all Rust files within crates/torii rg --type rust 'starknet_crypto' crates/toriiLength of output: 1279
crates/torii/grpc/Cargo.toml (1)
39-39
: Ohayo, sensei! This change looks great!The addition of
tempfile
as a development dependency is a solid choice. It's perfect for creating temporary files and directories during testing, which can be super helpful for our torii-grpc package. Keep up the awesome work!crates/torii/core/src/processors/store_set_record.rs (2)
11-11
: Ohayo, sensei! Nice import addition!The new import brings in the
felts_to_sql_string
function from theutils
module. This is a good move towards centralizing utility functions and promoting code reuse.
Line range hint
1-85
: Ohayo, sensei! Let's wrap up this review!The changes in this file are minimal but positive. By replacing a local function with a centralized utility function, we're improving code reuse and maintainability. The overall functionality of the
StoreSetRecordProcessor
remains unchanged, which is good for stability.Great job on this refactoring, sensei! It's a small step towards a more organized and maintainable codebase.
crates/torii/graphql/src/utils.rs (2)
1-1
: Ohayo, sensei! LGTM on this import!The addition of
use std::str::FromStr;
is spot-on for the newFelt
implementation. It's like finding the perfect ingredient for our code ramen!
5-5
: Ohayo again, sensei! This import is dojo-approved!Adding
use starknet_crypto::Felt;
is like equipping our code with a legendary sword. It's essential for our newFelt
implementation. Excellent work!crates/torii/graphql/src/constants.rs (1)
51-53
: Ohayo again, sensei! These ERC name constants are looking sharp!The new ERC-related name constants are well-defined and follow the existing patterns. They'll be super useful for maintaining consistency in naming throughout the project.
I noticed something interesting, though:
All three new constants have empty strings for their plural forms. This is similar to PAGE_INFO_NAMES but different from most other constants in this file. Is this intentional? If so, could you help me understand the reasoning behind it?
To double-check if this is consistent with other recent changes, I'd like to run a quick verification:
This will help us ensure we're maintaining consistency across the codebase. What do you think, sensei?
✅ Verification successful
Ohayo again, sensei!
I've reviewed the other constants, and it turns out the empty plural forms for the ERC name constants are consistent with
PAGE_INFO_NAMES
. This consistency suggests that the implementation is intentional.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for other recent changes to constant tuples with empty plural forms # Test: Search for constant tuples with empty second elements rg --type rust -e 'pub const \w+: \(&str, &str\) = \("[^"]+", ""\);' crates/torii/graphql/src/Length of output: 495
crates/torii/graphql/src/tests/models_ordering_test.rs (3)
6-6
: Ohayo, sensei! Nice addition of the tempfile dependency.The use of
tempfile::NamedTempFile
is a great choice for creating isolated test environments. It ensures that temporary files are automatically cleaned up after the test, preventing leftover artifacts. Good job on following best practices!
Line range hint
1-85
: Ohayo one last time, sensei! Overall, these changes look great.The modifications to the test setup, particularly the introduction of a temporary file, significantly enhance the test's isolation and reproducibility. These changes align well with best practices in testing.
Just a few minor suggestions were made to further improve error handling and code clarity. Great work on continuously improving the test infrastructure!
48-50
: Ohayo again, sensei! Great improvements to the test setup.The introduction of a temporary file for the test setup is a solid improvement. It enhances test isolation and potentially makes the tests more reproducible. Well done!
A small suggestion to make the code even more robust:
Consider using the
?
operator for error propagation when creating the temporary file. This would handle potential file creation errors more gracefully:-let tempfile = NamedTempFile::new().unwrap(); +let tempfile = NamedTempFile::new()?;Also, it might be worth adding a comment explaining why we're using a temporary file here, to help future maintainers understand the intent.
Let's make sure the
spinup_types_test
function has been updated to accept a file path:This will help us confirm that the change is consistent across the codebase.
✅ Verification successful
Ohayo again, sensei! Verified the update to
spinup_types_test
.The
spinup_types_test
function now correctly accepts a file path parameter, confirming that the test setup changes are consistent across the codebase. Excellent work!🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify that spinup_types_test function accepts a file path parameter rg --type rust 'fn spinup_types_test.*&str' crates/toriiLength of output: 163
scripts/compare-torii-data.py (2)
4-5
: Ohayo, sensei! These imports look great!The
sqlite3
andargparse
modules are perfect choices for this script's database operations and command-line argument parsing.
1-89
: Ohayo, sensei! Overall, this script is a masterpiece in the making!The
compare-torii-data.py
script is well-structured, efficient, and accomplishes its intended purpose of comparing data across multiple tables between two SQLite databases. The code is clean, easy to understand, and follows good programming practices.Here's a summary of the suggested improvements:
- Use context managers for database connections in all functions to ensure proper resource management.
- Implement error handling in the
compare_databases
function and the main script to catch and report any unexpected errors.- Consider using
set
operations in thecompare_data
function for more efficient ID comparison in large datasets.These minor enhancements will make your already solid script even more robust and efficient. Great work, sensei!
crates/torii/core/src/sql/cache.rs (6)
1-1
: Ohayo, import changes look good, sensei!The new imports are well-aligned with the introduced
LocalCache
struct and its implementation. Nice job keeping the imports clean and relevant!Also applies to: 5-5, 11-12
40-40
: Ohayo! Consistent update in the constructor, sensei!The
new
method has been correctly updated to initializemodel_cache
. This change aligns perfectly with the field renaming in the struct. Well done!
54-54
: Ohayo, sensei! Spot-on update in themodel
method!The
model
method has been correctly updated to usemodel_cache
. This change maintains consistency with the field renaming. Excellent attention to detail!
119-119
: Ohayo! Consistent updates inset
andclear
methods, sensei!Both the
set
andclear
methods have been correctly updated to usemodel_cache
. These changes maintain consistency with the field renaming throughout the struct. Great job ensuring all references were updated!Also applies to: 124-125
Line range hint
1-162
: Ohayo, sensei! Excellent work on enhancing the caching mechanism!Your changes to this file have significantly improved and expanded the caching functionality. Here's a summary of the key improvements:
- The
ModelCache
struct has been refined with clearer naming.- A new
LocalCache
struct has been introduced for ERC token-related caching.- The changes are consistent throughout the file, maintaining code quality.
These enhancements should lead to more efficient and clearer cache management in the Torii module. Great job on implementing these features, sensei!
35-35
: Ohayo, sensei! Nice rename for clarity!The renaming of
cache
tomodel_cache
improves the code's readability. Great job!Let's make sure this change is consistent throughout the file:
✅ Verification successful
Ohayo, sensei! 🎉
I've verified that the renaming of
cache
tomodel_cache
is consistent throughout the file. Great work!🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Verify the renaming of 'cache' to 'model_cache' is consistent rg '\bcache\b' crates/torii/core/src/sql/cache.rsLength of output: 440
crates/torii/graphql/src/schema.rs (2)
13-15
: Ohayo, sensei! These imports look great!The new import statements for ERC-related objects are well-placed and follow the existing import style. They're essential for the new ERC functionality being added to the schema.
119-120
: Ohayo again, sensei! These new objects look promising!The additions of ErcBalanceObject and ErcTransferObject as Resolvable objects are consistent with the existing pattern. Nice work!
However, I'm curious about the classification of ErcTokenObject as a Basic object. Could you verify if this is the intended behavior? If it needs to be queryable, it might need to be a Resolvable object instead.
Also applies to: 124-124
crates/torii/graphql/src/object/mod.rs (1)
3-3
: Ohayo, sensei! New module added for ERC tokens.The addition of the
erc
module aligns well with the PR objectives of implementing features for indexing whitelisted ERC20 and ERC721 tokens. This is a good start to organizing the new functionality.Let's make sure the
erc
module is properly implemented:✅ Verification successful
Ohayo, sensei! It seems we couldn't find the
erc.rs
file in the expected location. Let's investigate further to ensure everything is set up correctly:
Ohayo, sensei! 🎉 The
erc
module is correctly located incrates/torii/graphql/src/object/erc/
with all necessary components in place. This setup aligns perfectly with the PR goals for indexing ERC20 and ERC721 tokens. Great work!🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the existence and basic structure of the erc module # Test: Check if the erc.rs file exists fd -t f "erc.rs" crates/torii/graphql/src # Test: If it exists, check its content fd -t f "erc.rs" crates/torii/graphql/src -x catLength of output: 90
Script:
#!/bin/bash # Description: Search for any files or directories named 'erc' within the graphql src directory # Test: Find any files or directories named 'erc' fd "erc" crates/torii/graphql/src # Test: Search for mentions of 'erc' in the graphql src directory rg "erc" crates/torii/graphql/srcLength of output: 3210
Cargo.toml (1)
47-47
: Ohayo, sensei! New script added to the workspace. LGTM!The addition of "scripts/verify_db_balances" to the workspace members looks good. This script seems to align with the PR objectives for indexing whitelisted tokens.
To ensure smooth integration, please run this script to verify the new member's setup:
examples/spawn-and-move/manifests/dev/deployment/manifest.toml (2)
6-8
: Ohayo, sensei! Please verify the updated world contract details.The world contract details have been updated, which likely reflects a new deployment or update. The changes are:
- New address:
0x5fedbace16902d9ca4cdc1522f9fe156cd8c69a5d25e1436ee4b7b9933ad997
- New transaction hash:
0x4c8e0d28e32c21f29f33ff68e245b65fcc91763abf53f284cce8c2274ff6115
- Block number increased from 3 to 6
Could you please confirm that these new values are correct and correspond to the intended world contract deployment?
Line range hint
1-359
: Ohayo again, sensei! The manifest structure looks solid.I've scanned through the rest of the file, and everything appears to be in order. The
[base]
,[[contracts]]
, and[[models]]
sections are well-structured and consistent with Dojo project conventions. No issues or improvements to suggest for these sections.Keep up the great work, sensei!
crates/torii/graphql/src/tests/entities_test.rs (2)
8-8
: Ohayo, sensei! Nice import you've got there!The addition of
NamedTempFile
from thetempfile
crate is a solid choice for handling temporary files in tests. It's well-placed with the other imports and will serve you well in your testing adventures.
94-96
: Ohayo again, sensei! Your temporary file game is strong!The use of
NamedTempFile
is a great choice for test isolation and cleanup. The conversion of the path to a string usingto_string_lossy()
is spot-on for handling potential non-UTF-8 characters.One small suggestion to make your code even more kawaii:
Consider adding a comment explaining why we're using a temporary file here. It might help future you or your fellow code ninjas understand the purpose at a glance. Something like:
// Create a temporary file for test isolation let tempfile = NamedTempFile::new().unwrap();Also, don't forget to verify that
spinup_types_test
is expecting this new path parameter in its signature!✅ Verification successful
Ohayo again, sensei! The
spinup_types_test
function is correctly updated to accept the newpath
parameter. Everything looks great!🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Verify the signature of spinup_types_test rg --type rust "fn spinup_types_test" -A 3Length of output: 452
crates/torii/graphql/src/tests/models_test.rs (3)
9-9
: Ohayo, sensei! Nice addition of thetempfile
crate.The inclusion of
tempfile::NamedTempFile
is a good choice for managing temporary files in tests. It ensures clean-up after test execution and provides a safe way to create unique temporary files.
170-171
: Sugoi work on creating a temporary file, sensei!The creation of a
NamedTempFile
and extraction of its path is a clever way to ensure each test run has its own isolated file. This approach prevents potential conflicts between test executions and helps maintain test independence.
172-172
: Ohayo! Excellent modification tospinup_types_test
, sensei!Passing the temporary file path to
spinup_types_test
is a great way to ensure that the test uses the newly created temporary file. This change completes the implementation of using isolated temporary files for each test run.crates/torii/libp2p/src/server/mod.rs (3)
28-30
: Ohayo, sensei! Nice import updates!The changes in imports reflect the shift in how we handle SQL queries and generate SQL strings from felts. The addition of
QueryMessage
andfelts_to_sql_string
while removingfelts_sql_string
aligns well with the modifications in theset_entity
function.
534-534
: Ohayo once more, sensei! Interesting change inset_entity
!The shift from
db.execute().await?
todb.executor.send(QueryMessage::execute())?
suggests a move towards a message-based execution model for database operations. It's a cool update, but I noticed we removed the.await
. Let's make sure this was intentional and doesn't introduce any synchronization issues.Could you run this script to check for similar patterns in the codebase?
#!/bin/bash # Description: Check for similar database execution patterns # Test: Search for database execution patterns rg --type rust -C 5 'db\.execute\(\)\.await|db\.executor\.send\(QueryMessage::execute\(\)\)'
250-250
: Ohayo again, sensei! Good catch on updatingkeys_str
!The change from
felts_sql_string
tofelts_to_sql_string
is consistent with our import updates. It's a good move, but let's make sure the new function behaves identically to the old one.Could you run this script to compare the output of both functions?
✅ Verification successful
Ohayo, sensei!
It looks like the previous script had a syntax error. Let's try running a corrected version to search for both
felts_sql_string
andfelts_to_sql_string
in the Rust codebase.
Ohayo, sensei!
It seems the previous script had a syntax issue, likely due to mismatched quotes. Let's try a corrected version to search for both
felts_sql_string
andfelts_to_sql_string
in the Rust codebase.
Ohayo, sensei! Great job updating
keys_str
to usefelts_to_sql_string
consistently across the codebase. Everything looks good.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Compare the output of felts_sql_string and felts_to_sql_string # Test: Search for both function usages and their surrounding context rg --type rust -C 5 'felts_sql_string|felts_to_sql_string'Length of output: 12569
Script:
#!/bin/bash # Description: Compare the usage of felts_sql_string and felts_to_sql_string # Search for both function usages and their surrounding context rg --type rust -C 5 'felts_sql_string|felts_to_sql_string'Length of output: 14456
Script:
#!/bin/bash # Description: Compare the usage of felts_sql_string and felts_to_sql_string # Search for both function usages and their surrounding context rg --type rust -C 5 "felts_sql_string|felts_to_sql_string"Length of output: 14456
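Circling back to the `set_entity` change above: for readers unfamiliar with the pattern, here is a minimal self-contained sketch of send-based query execution (a toy `Message` type, not torii's actual `QueryMessage`), showing why dropping the inline `.await` can still preserve ordering:

```rust
use tokio::sync::mpsc;

// Toy stand-in for QueryMessage.
enum Message {
    Execute,
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<Message>();

    // Executor task: drains the queue and runs queries strictly in order.
    let executor = tokio::spawn(async move {
        while let Some(Message::Execute) = rx.recv().await {
            // ... run the batched SQL here ...
        }
    });

    // Callers enqueue without awaiting query completion; the channel
    // serializes the work, which is why the inline `.await` could be dropped.
    tx.send(Message::Execute).unwrap();

    drop(tx); // close the channel so the executor loop ends
    executor.await.unwrap();
}
```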
crates/torii/libp2p/src/tests.rs (3)
Line range hint `1-780`: Ohayo, test-master-sensei! Let's wrap this up.

Overall, the changes to the `test_client_messaging` function are solid improvements. You've enhanced the database handling, added graceful shutdown capabilities, and implemented async operations more effectively. These modifications should make the test more robust and efficient.

Here's a quick summary of the key changes:
- Switched to a temporary SQLite file for better test persistence.
- Adjusted database connection pool settings.
- Implemented an executor for async database operations.
- Added a shutdown channel for graceful test termination.
While the changes are generally good, consider the following suggestions for further improvement:
- Group related imports for better readability.
- Add comments explaining the role of the new components (like the executor).
- Ensure proper shutdown of all components at the end of the test.
Great work on enhancing this test, sensei! Your changes will definitely contribute to a more reliable testing environment.
`575-584`: Ohayo, architecture-sensei! These changes are quite substantial.

You've made some significant architectural improvements here:
- Adding a shutdown channel for graceful test termination.
- Implementing an executor for async database operations.
- Adjusting the database model registration process.
These changes should improve the test's robustness and performance. Nicely done!
Consider adding a comment explaining the role of the executor and how it improves the test architecture. This will help future developers understand the design decisions.
Let's ensure the executor is properly shut down at the end of the test:
#!/bin/bash # Check if there's a corresponding shutdown for the executor rg "executor.run\(\)" -A 20
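On the shutdown point, a minimal self-contained sketch of the broadcast-channel pattern the test appears to use (a toy task stands in for the real executor):

```rust
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    let (shutdown_tx, mut shutdown_rx) = broadcast::channel::<()>(1);

    // Stand-in for the executor task: runs until a shutdown signal arrives.
    let executor = tokio::spawn(async move {
        loop {
            tokio::select! {
                _ = shutdown_rx.recv() => break, // graceful exit
                _ = tokio::time::sleep(std::time::Duration::from_millis(10)) => {
                    // ... drain queued DB work here ...
                }
            }
        }
    });

    // End of test: signal shutdown and wait for the task to finish.
    shutdown_tx.send(()).unwrap();
    executor.await.unwrap();
}
```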
`555-566`: Ohayo, database-sensei! These changes look interesting.

I see you've switched to using a temporary SQLite file instead of an in-memory database. This could be great for test persistence! Also, you've tweaked the connection pool settings.
Could you confirm if removing the idle timeout and max lifetime settings was intentional? It might affect connection management in long-running tests.
examples/spawn-and-move/manifests/dev/deployment/manifest.json (1)
`1237-1239`: Ohayo, sensei! The world contract has been updated.

The world contract has been redeployed or updated:

- New address: `0x5fedbace16902d9ca4cdc1522f9fe156cd8c69a5d25e1436ee4b7b9933ad997`
- New transaction hash: `0x4c8e0d28e32c21f29f33ff68e245b65fcc91763abf53f284cce8c2274ff6115`
- Block number increased from 3 to 6

These changes suggest a new deployment state for the world contract. Please ensure that all references to the world contract are updated accordingly in your project.
crates/torii/core/src/processors/mod.rs (2)
`9-12`: ⚠️ Potential issue

Ohayo sensei! Confirm the removal impact of `generate_event_processors_map`.

The `generate_event_processors_map` function has been removed. This function previously mapped event keys to their corresponding event processors using a `HashMap`. Ensure that the event processing system has been updated accordingly and that no parts of the codebase rely on this function.

Please run the following script to locate any remaining references to `generate_event_processors_map`:

```bash
# Description: Search for any usages of the `generate_event_processors_map` function.
# Test: Find references to the function in the codebase. Expect: No occurrences.
rg --type rust 'generate_event_processors_map'
```
: Ohayo sensei! Verify integration of new modules.The addition of the modules
erc20_legacy_transfer
,erc20_transfer
,erc721_legacy_transfer
, anderc721_transfer
aligns with the goal to index whitelisted ERC20 and ERC721 tokens. Please ensure that these modules are correctly implemented and integrated throughout the codebase to prevent any potential issues with module resolution or functionality.You can run the following script to check for references to these new modules:
crates/torii/core/src/processors/erc20_transfer.rs (1)
`26-35`: Ensure the `process` Method is Invoked Only After Successful Validation

Ohayo sensei,

While the `validate` method checks the event structure, please verify that the `process` method is called only when `validate` returns `true`. This ensures safety when accessing `event.keys[1]` and `event.keys[2]`, preventing potential runtime panics from index out-of-bounds errors.
22-24
: Ohayo sensei! Theevent_key
method is correctly implemented.The method returns the expected event key
"Transfer"
.
37-59
: Great job on theprocess
method, sensei!The
process
method accurately handles the ERC20 legacy transfer event and updates the database as expected.crates/torii/core/src/processors/erc721_legacy_transfer.rs (4)
14-15
: Struct definition is appropriateOhayo, sensei! The
Erc721LegacyTransferProcessor
struct is correctly defined with the necessary traits derived.
22-24
: Event key method correctly identifies the eventOhayo, sensei! The
event_key
method correctly returns "Transfer" as the event identifier.
26-35
: Validation logic is correctOhayo, sensei! The
validate
method accurately checks that the event has one key and four data elements, ensuring it conforms to the expected ERC721 Transfer event structure.
37-65
: Ensurevalidate
is called beforeprocess
Ohayo, sensei! To prevent potential panics from out-of-bounds indexing, please ensure that
validate
is always called beforeprocess
to guarantee that the event data meets the expected structure.You can run the following script to verify that
validate
is called beforeprocess
in all instances:crates/torii/core/src/processors/metadata_update.rs (1)
67-67
: Ohayo, sensei! Good use of the?
operator for error propagationUsing the
?
operator withdb.set_metadata
ensures that any errors are properly propagated, improving the error handling in theprocess
method.crates/torii/graphql/src/tests/metadata_test.rs (1)
3-3
: Ohayo sensei! Validate the necessity of new importsThe added imports on lines 3 (
HashMap
), 9 (broadcast
), 10 (Executor
), and 12 (ContractType
) are essential for the new functionality introduced. Just a reminder to ensure that all these imports are necessary and there are no unused imports, keeping the code clean.Also applies to: 9-10, 12-12
scripts/verify_db_balances/src/main.rs (2)
`139-139`: Ohayo, sensei! Job well done!

All balances have been checked successfully.

`39-40`: Ohayo, sensei! Verify the correct use of `U256::from_words`.

Ensure that `balance_low` and `balance_high` are provided in the correct order to `U256::from_words`. An incorrect order might lead to incorrect balance calculations.
crates/torii/grpc/src/server/tests/entities_test.rs (1)
128-128
: Ohayo, sensei! Verify that passing an emptyHashMap
tofetch_range
works as intended.By passing an empty
HashMap
toengine.fetch_range(0, to, &HashMap::new())
, you may not fetch any data if the method relies on parameters to determine what to fetch. Ensure that this aligns with your test's expectations and that necessary data isn't inadvertently skipped.crates/torii/core/src/sql/utils.rs (3)
14-16
: Looks great, sensei!The function correctly formats a
Felt
into a hexadecimal string.
18-21
: Great work, sensei!The function properly combines a
Felt
and aU256
into a formatted string with the expected delimiter.
22-24
: Good job, sensei!Formatting
U256
with padding ensures consistent string length, which is excellent for SQL storage and retrieval.crates/torii/graphql/src/tests/mod.rs (1)
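To make the padding rationale concrete, a self-contained sketch at `u128` scale (the real code pads `U256` to 64 hex chars; the principle is identical): fixed-width, zero-padded hex makes lexicographic order agree with numeric order.

```rust
// Zero-padded hex: lexicographic order == numeric order.
fn to_padded_hex(v: u128) -> String {
    format!("{v:032x}") // 32 hex chars for 128 bits; 64 chars for a real U256
}

fn main() {
    assert!(to_padded_hex(9) < to_padded_hex(10));
    // Counterexample without padding: "f" (15) sorts after "10" (16).
    assert!(format!("{:x}", 15u128) > format!("{:x}", 16u128));
    println!("padded ordering matches numeric ordering");
}
```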
31-31
: Good addition of necessary importsOhayo, sensei! Including
Executor
andContractType
imports is appropriate for the new functionality introduced below.Also applies to: 33-33
crates/torii/core/src/sql/erc.rs (1)
`337-349`: Ensure cache is properly cleared after applying diffs

Ohayo sensei! In `apply_cache_diff`, we're replacing the `erc_cache` with a new `HashMap`. While this effectively clears the cache, consider verifying that no references to the old cache remain, to prevent potential memory leaks or unintended behavior.

Would you like to run a script to check for remaining references?
bin/torii/src/main.rs (8)
`56-56`: Ohayo, sensei! Ensure proper handling of optional `world_address`

By changing `world_address` to `Option<Felt>`, please verify that all dependent components gracefully handle the absence of this value. This includes providing meaningful error messages or setting appropriate defaults.

`64-64`: Ohayo, sensei! Clarify the default behavior when `database` is not specified

With `database` now defaulting to an empty string, ensure that the application correctly creates and uses a temporary database file when this argument is omitted. Please confirm that this behavior is documented so users are aware of where their data is stored.

`136-144`: Ohayo, sensei! Ensure mutual exclusivity of `contracts` and `config` arguments is properly enforced

The `contracts` and `config` arguments are set to conflict using `#[arg(conflicts_with = "config")]`. Please verify that this conflict is enforced at runtime and that clear error messages are provided when both are supplied by the user.

`150-161`: Ohayo, sensei! Validate configuration loading logic

The logic for loading the configuration file or parsing contracts from command-line arguments has been updated. Ensure that when a configuration file is provided, it overrides the command-line arguments as intended, and that all configurations are correctly loaded into the application.

`163-164`: Ohayo, sensei! Review the use of `verify_single_world_address`

The `world_address` is now obtained via the `verify_single_world_address` function. Please confirm that this function correctly resolves the world address from either the command-line arguments or the configuration file, and handles conflicts appropriately.

`183-192`: Ohayo, sensei! Ensure proper handling and cleanup of temporary database files

When no database path is provided, a temporary file is created using `NamedTempFile`. Please verify that the temporary file remains accessible for the duration of the application's runtime and that it is properly cleaned up upon exit to prevent any resource leaks.

`325-343`: Ohayo, sensei! Review the logic in the `verify_single_world_address` function

The newly added `verify_single_world_address` function ensures that the world address is specified only once. Please double-check that all possible cases are handled correctly and that helpful error messages guide the user in case of conflicts or missing values.

`369-393`: Ohayo, sensei! Check error handling in the `parse_erc_contracts` function

The `parse_erc_contracts` function parses ERC contract inputs. Make sure it robustly handles unexpected input formats and provides informative error messages to aid in troubleshooting.
51-51
: Ohayo sensei! Verify the use of an emptyHashMap
inengine.fetch_range
An empty
HashMap
is passed toengine.fetch_range(0, to, &HashMap::new())
. Please verify if this is intentional or if it should include the world contract addresses, similar to theEngine
initialization.
199-199
: Ohayo sensei! Confirm the expected count ofevent_messages
The assertion
assert_eq!(count_table("event_messages", &pool).await, 2);
expects exactly 2 event messages. Please verify that this count is correct, considering the events emitted in the test. Adjust the expected value if necessary.
139-145
: Ohayo sensei! Double-check the initialization ofSql
with contract typesWhen initializing
Sql
, theHashMap
includes theworld_reader.address
withContractType::WORLD
. Ensure that all necessary contract addresses and their types are included as required for the test.crates/torii/core/src/engine.rs (3)
Line range hint
788-809
: Ensure safe access to event data when computing task identifiersIn
lines 788~
to800~
,event.data[0]
andevent.data[1]
are used:event.data[0].hash(&mut hasher); event.data[1].hash(&mut hasher);If
event.data
has fewer than two elements, this will cause an index out-of-bounds panic. Please verify that all events processed here have the expected data length.Here's a script to identify all accesses to
event.data
indices:
365-370
:⚠️ Potential issuePossible logic error when filtering events
In the loop starting at
line 360~
, the logic for skipping events based onlast_contract_tx_tmp
may inadvertently skip necessary events:if let Some(last_contract_tx) = last_contract_tx_tmp { if event.transaction_hash != last_contract_tx { continue; } last_contract_tx_tmp = None; }This condition might cause the loop to continue skipping events even after the desired transaction hash is found. Please verify whether
last_contract_tx_tmp
is being reset appropriately.Here's a script to trace the updates to
last_contract_tx_tmp
:
64-110
: Ohayo sensei! There might be an issue with the initialization of event processors.In the
initialize_event_processors
function, the commented-out line atline 99~
suggests a previous approach was replaced:// event_processors_map.entry(contract_type).or_default().insert(key, processor);
From
line 100~
toline 106~
, a new insertion logic is implemented. Please ensure that this change correctly handles multiple processors for the same event key and contract type. The nestedHashMap
structure should map eachContractType
to anEventProcessorMap
, which in turn maps each event key to aVec
of processors.To confirm that events are being processed as intended, you might want to check how processors are retrieved and used. Here's a script to verify the usage:
crates/torii/core/src/sql/mod.rs (1)
`61-72`: Ohayo sensei! Verify if `id` and `contract_address` should be the same

In the `new` method, when inserting contracts into the database, both `id` and `contract_address` are set to `*contract.0`. Please confirm if setting these two fields to the same value is intentional.

To ensure consistency, you might want to verify how `id` and `contract_address` are used elsewhere in the codebase to determine if this is the desired behavior.
120-120
: Verify correct handling of key serialization withfelts_to_sql_string
Ohayo, Sensei! At lines 120 and 247,
felts_to_sql_string
is used to serialize keys for SQL queries. Please verify that this function correctly handles all possible key values, and that it properly formats the keys to prevent any SQL-related issues.Also applies to: 247-247
3-3
: Imports are appropriately updatedOhayo, Sensei! The added imports at lines 3 and 16-20 are necessary for the new executor pattern and database interactions. Everything looks good here.
Also applies to: 16-20
@@ -0,0 +1,3 @@
#!/bin/bash

starkli deploy --account ../account.json --keystore ../signer.json --keystore-password "" 0x02a8846878b6ad1f54f6ba46f5f40e11cee755c677f130b2c4b60566c9003f1f 0x626c6f62 0x424c42 0x8 u256:10000000000 0xb3ff441a68610b30fd5e2abbf3a1548eb6ba6f3559f2862bf2dc757e5828ca --rpc http://localhost:5050
🛠️ Refactor suggestion
Ohayo, sensei! Let's make this script more secure and maintainable!
While the command works, there are several areas we can improve:

1. Security:
   - Using an empty keystore password (`--keystore-password ""`) is not secure. Consider using environment variables or prompting for the password.
   - Hardcoded paths to account and keystore files may expose sensitive information.
2. Maintainability and Flexibility:
   - Hardcoded values make the script less flexible. Consider using environment variables or command-line arguments for token details and addresses.
3. Readability:
   - The long command line is difficult to read. Let's break it into multiple lines with explanatory comments.
Here's a suggested refactor to address these points:
#!/bin/bash
# Default values, overridable via environment variables
ACCOUNT_FILE="${ACCOUNT_FILE:-../account.json}"
KEYSTORE_FILE="${KEYSTORE_FILE:-../signer.json}"
RPC_URL="${RPC_URL:-http://localhost:5050}"
TOKEN_CONTRACT="0x02a8846878b6ad1f54f6ba46f5f40e11cee755c677f130b2c4b60566c9003f1f"
TOKEN_NAME="0x626c6f62"
TOKEN_SYMBOL="0x424c42"
TOKEN_DECIMALS="0x8"
INITIAL_SUPPLY="u256:10000000000"
OWNER_ADDRESS="0xb3ff441a68610b30fd5e2abbf3a1548eb6ba6f3559f2862bf2dc757e5828ca"
# Prompt for keystore password
read -s -p "Enter keystore password: " KEYSTORE_PASSWORD
echo
# Deploy the token
starkli deploy \
--account "$ACCOUNT_FILE" \
--keystore "$KEYSTORE_FILE" \
--keystore-password "$KEYSTORE_PASSWORD" \
"$TOKEN_CONTRACT" \
"$TOKEN_NAME" \
"$TOKEN_SYMBOL" \
"$TOKEN_DECIMALS" \
"$INITIAL_SUPPLY" \
"$OWNER_ADDRESS" \
--rpc "$RPC_URL"
This refactored version:
- Uses variables for better readability and maintainability.
- Allows overriding defaults with environment variables.
- Prompts for the keystore password instead of using an empty string.
- Breaks the command into multiple lines for better readability.
What do you think, sensei? Shall we implement these improvements?
contract_address=$1
rpc="http://localhost:5050"
🛠️ Refactor suggestion
Ohayo, sensei! Consider making the RPC URL configurable.
The contract address assignment looks good. However, hardcoding the RPC URL might limit the script's flexibility. Consider making it configurable through an environment variable or command-line argument.
Here's a suggestion to make the RPC URL configurable:
-rpc="http://localhost:5050"
+rpc=${RPC_URL:-"http://localhost:5050"}
This change allows users to set the `RPC_URL` environment variable, falling back to the default if not set.
📝 Committable suggestion
contract_address=$1
rpc=${RPC_URL:-"http://localhost:5050"}
contract_address=$1
rpc="http://localhost:5050"

starkli invoke $contract_address transfer 0x1234 u256:1 --account ../account.json --keystore ../signer.json --keystore-password "" --rpc $rpc
🛠️ Refactor suggestion
Ohayo, sensei! Let's enhance the starkli command for better flexibility and security.
The starkli command works, but there are several areas for improvement:
- The recipient address and transfer amount are hardcoded, limiting the script's flexibility.
- Relative paths for account and keystore files might cause issues if the script is run from a different directory.
- Using an empty keystore password could be a security risk.
Here's a suggested improvement:
-starkli invoke $contract_address transfer 0x1234 u256:1 --account ../account.json --keystore ../signer.json --keystore-password "" --rpc $rpc
+recipient=${RECIPIENT:-"0x1234"}
+amount=${AMOUNT:-"u256:1"}
+account_file=${ACCOUNT_FILE:-"$HOME/.starkli/account.json"}
+keystore_file=${KEYSTORE_FILE:-"$HOME/.starkli/signer.json"}
+
+starkli invoke $contract_address transfer $recipient $amount \
+ --account $account_file \
+ --keystore $keystore_file \
+ --rpc $rpc
This change:
- Makes recipient and amount configurable through environment variables.
- Uses absolute paths for account and keystore files, defaulting to the standard Starkli directory.
- Removes the `--keystore-password` option, prompting for the password interactively (more secure).
Don't forget to update the usage instructions at the beginning of the script to reflect these changes, sensei!
@@ -60,7 +61,7 @@ where
     let keys_end: usize =
         keys_start + event.data[NUM_KEYS_INDEX].to_usize().context("invalid usize")?;
     let keys = event.data[keys_start..keys_end].to_vec();
-    let keys_str = felts_sql_string(&keys);
+    let keys_str = felts_to_sql_string(&keys);
💡 Codebase verification
Ohayo, sensei!
While replacing `felts_sql_string` with `felts_to_sql_string` is a great step towards centralizing our utility functions, it looks like there are no existing tests for `felts_to_sql_string`. To ensure this new function works correctly across all its usages, please add appropriate unit tests.

- Add unit tests for `felts_to_sql_string` in `crates/torii/core/src/sql/utils.rs`.
Sugoi change, sensei!
The replacement of felts_sql_string
with felts_to_sql_string
aligns well with the new import. This change promotes code reuse and centralization of utility functions.
To ensure the new function behaves identically to the old one, please run the following verification script:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the behavior of felts_to_sql_string function
# Test: Search for any tests related to felts_to_sql_string
rg --type rust -g '!target/' 'fn test.*felts_to_sql_string'
# Test: Check if there are any other usages of felts_to_sql_string in the codebase
rg --type rust -g '!target/' 'felts_to_sql_string'
Length of output: 2022
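If it helps, a starting point for those tests might look like this (a sketch; the expected delimiter and exact format are assumptions to be checked against the real `felts_to_sql_string`):

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn felts_to_sql_string_joins_hex_values() {
        let felts = vec![Felt::ONE, Felt::TWO];
        let s = felts_to_sql_string(&felts);
        // Assumed format: 0x-prefixed hex values joined by the felt delimiter.
        assert!(s.starts_with("0x"));
        assert!(s.contains('/'));
    }
}
```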
def get_table_row_count(db_path, table_name):
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
    count = cursor.fetchone()[0]
    conn.close()
    return count
🛠️ Refactor suggestion
Ohayo again, sensei! Let's polish this function too!
The `get_table_row_count` function works well, but we can enhance it by using a context manager for the database connection, just like we suggested for the previous function. Here's how you can update it:
def get_table_row_count(db_path, table_name):
    with sqlite3.connect(db_path) as conn:
        cursor = conn.cursor()
        cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
        count = cursor.fetchone()[0]
    return count
This ensures proper connection handling and resource cleanup.
self.event_processors.get(&contract_type).unwrap()
}
Handle missing contract types without panicking
At line 116~, `get_event_processor` uses `unwrap()`:

```rust
self.event_processors.get(&contract_type).unwrap()
```
If `contract_type` is not present, this will cause a panic. To prevent potential runtime errors, consider returning an `Option` and handling the `None` case gracefully:
- self.event_processors.get(&contract_type).unwrap()
+ self.event_processors.get(&contract_type)
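A possible caller-side shape once the accessor returns `Option` (a hypothetical call site, sketched for illustration):

```rust
// Skip events whose contract type has no registered processors
// instead of panicking.
let Some(processors) = self.get_event_processor(contract_type) else {
    warn!(target: LOG_TARGET, ?contract_type, "no processors registered");
    return Ok(());
};
```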
let Some(&contract_type) = self.contracts.get(&event.from_address) else {
    continue;
};
🛠️ Refactor suggestion
Optimize by avoiding unnecessary cloning
At line 608~, cloning is used to create a new `Event`:
let event = Event {
    from_address: event.from_address,
    keys: event.keys.clone(),
    data: event.data.clone(),
};
If possible, consider borrowing the data instead of cloning to improve performance (note this requires a borrowed variant of the struct, i.e. lifetime-annotated `keys` and `data` fields):

let event = Event {
    from_address: event.from_address,
    keys: &event.keys,
    data: &event.data,
};
let processor = processors
    .iter()
    .find(|p| p.validate(event))
    .expect("Must find atleast one processor for the event");
Avoid potential panic from expect
At line 787~, using `expect` may lead to a panic if no processor validates the event:
.expect("Must find at least one processor for the event");
To enhance robustness, consider handling this scenario without panicking. For example, log a warning if no suitable processor is found:
if let Some(processor) = processors.iter().find(|p| p.validate(event)) {
    // Process event
} else {
    warn!(target: LOG_TARGET, "No processor validated the event.");
}
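To illustrate the invariant end to end, a tiny self-contained sketch (toy types, not the torii ones) of validate-gated dispatch:

```rust
// Toy event/processor pair: `process` indexes into `keys`, so it must only
// run after `validate` has confirmed the expected shape.
struct Event {
    keys: Vec<u64>,
    data: Vec<u64>,
}

trait EventProcessor {
    fn validate(&self, e: &Event) -> bool;
    fn process(&self, e: &Event);
}

struct TransferProcessor;

impl EventProcessor for TransferProcessor {
    // Expect keys = [selector, from, to] and data = [amount.low, amount.high].
    fn validate(&self, e: &Event) -> bool {
        e.keys.len() == 3 && e.data.len() == 2
    }
    fn process(&self, e: &Event) {
        // Safe: validate() has guaranteed keys[1] and keys[2] exist.
        println!("transfer from {:#x} to {:#x}", e.keys[1], e.keys[2]);
    }
}

fn main() {
    let processors: Vec<Box<dyn EventProcessor>> = vec![Box::new(TransferProcessor)];
    let event = Event { keys: vec![0x99, 0xa1, 0xb2], data: vec![1, 0] };

    // No panic either way: unmatched events are logged and skipped.
    match processors.iter().find(|p| p.validate(&event)) {
        Some(p) => p.process(&event),
        None => eprintln!("no processor validated the event"),
    }
}
```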
crates/torii/core/src/sql/mod.rs
Outdated
columns.push("full_array_id".to_string()); | ||
arguments.push(Argument::String( | ||
std::iter::once(entity_id.to_string()) | ||
.chain(indexes.iter().map(|i| i.to_string())) | ||
.collect::<Vec<String>>() | ||
.join(FELT_DELIMITER), | ||
)); | ||
} | ||
|
||
for (column_idx, idx) in indexes.iter().enumerate() { | ||
columns.push(format!("idx_{}", column_idx)); | ||
arguments.push(Argument::Int(*idx)); | ||
} | ||
for (column_idx, idx) in indexes.iter().enumerate() { | ||
columns.push(format!("idx_{}", column_idx)); | ||
arguments.push(Argument::Int(*idx)); | ||
} | ||
|
||
for member in members.iter() { | ||
match &member.ty { | ||
Ty::Primitive(ty) => { | ||
columns.push(format!("external_{}", &member.name)); | ||
arguments.push(Argument::String(ty.to_sql_value().unwrap())); | ||
} | ||
Ty::Enum(e) => { | ||
columns.push(format!("external_{}", &member.name)); | ||
arguments.push(Argument::String(e.to_sql_value().unwrap())); | ||
} | ||
Ty::ByteArray(b) => { | ||
columns.push(format!("external_{}", &member.name)); | ||
arguments.push(Argument::String(b.clone())); | ||
} | ||
_ => {} | ||
for member in members.iter() { | ||
match &member.ty { | ||
Ty::Primitive(ty) => { | ||
columns.push(format!("external_{}", &member.name)); | ||
arguments.push(Argument::String(ty.to_sql_value().unwrap())); | ||
} | ||
Ty::Enum(e) => { | ||
columns.push(format!("external_{}", &member.name)); | ||
arguments.push(Argument::String(e.to_sql_value().unwrap())); | ||
} | ||
Ty::ByteArray(b) => { | ||
columns.push(format!("external_{}", &member.name)); | ||
arguments.push(Argument::String(b.clone())); | ||
} | ||
_ => {} | ||
} | ||
} | ||
|
||
let placeholders: Vec<&str> = arguments.iter().map(|_| "?").collect(); | ||
let statement = if is_store_update_member && indexes.is_empty() { | ||
arguments.push(Argument::String(if is_event_message { | ||
"event:".to_string() + entity_id | ||
} else { | ||
entity_id.to_string() | ||
})); | ||
|
||
// row has to exist. update it directly | ||
format!( | ||
"UPDATE [{table_id}] SET {updates} WHERE id = ?", | ||
table_id = table_id, | ||
updates = columns | ||
.iter() | ||
.zip(placeholders.iter()) | ||
.map(|(column, placeholder)| format!("{} = {}", column, placeholder)) | ||
.collect::<Vec<String>>() | ||
.join(", ") | ||
) | ||
let placeholders: Vec<&str> = arguments.iter().map(|_| "?").collect(); | ||
let statement = if is_store_update_member && indexes.is_empty() { | ||
arguments.push(Argument::String(if is_event_message { | ||
"event:".to_string() + entity_id | ||
} else { | ||
format!( | ||
"INSERT OR REPLACE INTO [{table_id}] ({}) VALUES ({})", | ||
columns.join(","), | ||
placeholders.join(",") | ||
) | ||
}; | ||
|
||
query_queue.enqueue(statement, arguments, QueryType::Other); | ||
entity_id.to_string() | ||
})); | ||
|
||
// row has to exist. update it directly | ||
format!( | ||
"UPDATE [{table_id}] SET {updates} WHERE id = ?", | ||
table_id = table_id, | ||
updates = columns | ||
.iter() | ||
.zip(placeholders.iter()) | ||
.map(|(column, placeholder)| format!("{} = {}", column, placeholder)) | ||
.collect::<Vec<String>>() | ||
.join(", ") | ||
) | ||
} else { | ||
format!( | ||
"INSERT OR REPLACE INTO [{table_id}] ({}) VALUES ({})", |
Ohayo sensei! Potential SQL injection risk with dynamic table names
When constructing SQL statements, table names are generated dynamically using `path.join("$")`, which may include untrusted input. This could lead to SQL injection vulnerabilities if any part of `path` contains malicious data.

Consider sanitizing or validating the components of `path` before using them in SQL queries. Alternatively, maintain a whitelist of allowed table names or use parameterized queries if supported for table names to mitigate this risk.
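A possible shape for such a guard (a sketch; the allowed character set is an assumption, to be adjusted to whatever model names torii actually permits):

```rust
/// Accept only identifier-like path segments before they become table names.
fn sanitize_segment(segment: &str) -> Result<&str, String> {
    let ok = !segment.is_empty()
        && segment.chars().all(|c| c.is_ascii_alphanumeric() || c == '_');
    if ok {
        Ok(segment)
    } else {
        Err(format!("invalid table name segment: {segment:?}"))
    }
}

fn main() {
    assert!(sanitize_segment("Position").is_ok());
    assert!(sanitize_segment("x; DROP TABLE entities;--").is_err());
}
```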
let (shutdown_tx, _) = broadcast::channel(1);
let (mut executor, sender) =
    Executor::new(pool.clone(), shutdown_tx.clone()).await.unwrap();
tokio::spawn(async move {
    executor.run().await.unwrap();
});
let mut db =
    Sql::new(pool.clone(), sender, &HashMap::from([(Felt::ZERO, ContractType::WORLD)]))
        .await
        .unwrap();
🛠️ Refactor suggestion
Consider refactoring repeated setup code to enhance maintainability
Ohayo, Sensei! The executor and database setup code is repeated across multiple test functions in lines 28-37, 172-181, 296-305, 370-379, and 445-454. To adhere to the DRY (Don't Repeat Yourself) principle and improve maintainability, consider extracting this code into a shared helper function or fixture.
Also applies to: 172-181, 296-305, 370-379, 445-454
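A rough shape for such a helper, sketched from the snippet above (the signatures are assumptions based on the quoted test code, not a definitive implementation):

```rust
// Hypothetical test fixture consolidating the repeated executor/db setup.
// Executor, Sql, and ContractType are the types used in the quoted tests.
async fn setup_sql(pool: SqlitePool) -> Sql {
    let (shutdown_tx, _) = broadcast::channel(1);
    let (mut executor, sender) =
        Executor::new(pool.clone(), shutdown_tx).await.unwrap();
    tokio::spawn(async move { executor.run().await.unwrap() });
    Sql::new(pool, sender, &HashMap::from([(Felt::ZERO, ContractType::WORLD)]))
        .await
        .unwrap()
}
```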
Caution
Inline review comments failed to post
Actionable comments posted: 13
🧹 Outside diff range and nitpick comments (13)
bin/torii/src/main.rs (4)
`56-56`: Ohayo, sensei! LGTM with a small suggestion

The changes to the `Args` struct look good. Making `world_address` optional and adding `contracts` and `config` fields provide more flexibility in configuration.

However, for better clarity, consider adding a comment explaining the behavior when an empty string is provided for the `database` field.

Also applies to: 64-64, 138-144
function look good. The new configuration loading logic and the use of a temporary file for the database when no path is provided are excellent improvements.Consider adding more detailed error messages when loading the configuration fails, to help users troubleshoot configuration issues more easily.
Also applies to: 183-186, 207-207
241-241
: Ohayo, sensei! Consider makingstart_block
configurableThe addition of the
contracts
parameter to theEngine::new
call is a great improvement, allowing for more precise control over which contracts are indexed.However, hardcoding
start_block
to 0 might limit flexibility. Consider makingstart_block
a configurable parameter, allowing users to specify a starting block when needed.Also applies to: 249-249
`362-386`: Ohayo, sensei! Great addition with a small suggestion

The `parse_erc_contracts` function is a valuable addition, providing a flexible way to specify ERC contracts from command-line arguments. The error handling is robust, with context-specific error messages that will help users correct any input errors.

However, there's a small inconsistency: the comment mentions `start_block`, but it's not used in the actual parsing. Consider either removing `start_block` from the comment or implementing it in the parsing logic for consistency.
from the comment or implementing it in the parsing logic for consistency.crates/torii/core/src/sql/test.rs (2)
`127-145`: Ohayo sensei! Excellent update to the test setup, but let's consider a small improvement

The changes to use a temporary SQLite file and run the `Executor` in a separate task are great improvements. They provide a more realistic testing environment and better isolation between tests.

However, this setup is repeated in multiple test functions. To improve code reusability and maintainability, consider extracting this setup into a helper function:

```rust
async fn setup_test_database() -> (sqlx::Pool<sqlx::Sqlite>, tokio::sync::broadcast::Sender<()>, Sql) {
    let tempfile = NamedTempFile::new().unwrap();
    let path = tempfile.path().to_string_lossy();
    let options = SqliteConnectOptions::from_str(&path).unwrap().create_if_missing(true);
    let pool = SqlitePoolOptions::new().connect_with(options).await.unwrap();
    sqlx::migrate!("../migrations").run(&pool).await.unwrap();

    let (shutdown_tx, _) = broadcast::channel(1);
    let (mut executor, sender) = Executor::new(pool.clone(), shutdown_tx.clone()).await.unwrap();
    tokio::spawn(async move {
        executor.run().await.unwrap();
    });

    let db = Sql::new(
        pool.clone(),
        sender.clone(),
        &HashMap::new(), // You can pass the actual HashMap here if needed
    )
    .await
    .unwrap();

    (pool, sender, db)
}
```

You can then use this helper function in your tests:

```rust
let (pool, sender, db) = setup_test_database().await;
```

This will reduce duplication and make your tests more maintainable.

`199-199`: Ohayo sensei! Nice addition to the test assertions

The new assertion checking the count of rows in the `event_messages` table is a great addition. It enhances the test coverage by verifying the side effects of the operations.

Consider also adding assertions for other relevant tables that might be affected by the operations in this test. This could provide a more comprehensive verification of the system's state after the test operations.
crates/torii/graphql/src/tests/subscription_test.rs (1)
`475-476`: Consider more robust error handling

Ohayo, Sensei! The addition of `.unwrap()` to the `store_event` call could potentially cause panics if the operation fails. While this might be acceptable in a test environment, it's worth considering a more robust approach.

Consider using `expect` with a meaningful error message instead:

```rust
.expect("Failed to store event in test environment")
```

This will provide more context if the test fails due to this operation.
crates/torii/libp2p/src/tests.rs (3)
`555-566`: Ohayo, database setup changes are sugoi, sensei!

The switch to a temporary file for the database is a good move, potentially providing more realistic testing conditions. The updated connection options and pool configuration align well with this change.

However, I have a small suggestion: consider adding a cleanup step to remove the temporary file after the test completes. This can be done using the `Drop` trait on `NamedTempFile`. For example:

```rust
let tempfile = NamedTempFile::new().unwrap();
let _cleanup_guard = tempfile.keep().unwrap();
let path = tempfile.path().to_string_lossy();
// ... rest of the code
```

This ensures that the temporary file is always cleaned up, even if the test panics.
`575-584`: Ohayo, executor setup is kakkoii, sensei!

The introduction of the executor and shutdown mechanism is a great improvement. It enhances the test's ability to manage asynchronous operations and allows for graceful shutdown.

One small suggestion: consider using `tokio::spawn` with a `JoinHandle` for better error handling:

```rust
let executor_handle = tokio::spawn(async move {
    if let Err(e) = executor.run().await {
        eprintln!("Executor error: {}", e);
    }
});
```

This way, you can `.await` the `executor_handle` at the end of the test to catch any panics or errors that occurred in the executor task.
Line range hint `586-706`: Ohayo, test logic is mostly daijoubu, but could use some enhancements, sensei!

The core test logic remains solid, effectively testing the client messaging functionality. However, I have a few suggestions to make the test more robust.

Add assertions to verify the executor and database setup:

```rust
assert!(executor.is_running());
assert!(db.is_connected());
```

Consider adding a timeout to the final loop to prevent infinite waiting:

```rust
let timeout = tokio::time::sleep(Duration::from_secs(10));
tokio::select! {
    _ = timeout => return Err("Test timed out".into()),
    result = async {
        loop {
            // ... existing select! block
        }
    } => result,
}
```

Add more granular assertions on the received entity:

```rust
let entity = sqlx::query("SELECT * FROM entities").fetch_one(&pool).await?;
assert_eq!(entity.get::<String, _>("message"), "mimi");
assert_eq!(entity.get::<String, _>("identity"), account.address.to_string());
```

These changes will make the test more comprehensive and resilient.
crates/torii/core/src/engine.rs (2)
Line range hint
293-316
: Sensei, let's make thefetch_data
method even more sugoi!The refactoring of the
fetch_data
method looks comprehensive, but we could potentially optimize it further. Consider usingmatch
instead ofif-else
for better readability and potentially faster compilation:- let result = if from < latest_block_number { - let from = if from == 0 { from } else { from + 1 }; - let data = self.fetch_range(from, latest_block_number, &cursors.cursor_map).await?; - debug!(target: LOG_TARGET, duration = ?instant.elapsed(), from = %from, to = %latest_block_number, "Fetched data for range."); - FetchDataResult::Range(data) - } else if self.config.index_pending { - let data = - self.fetch_pending(latest_block_number + 1, cursors.last_pending_block_tx).await?; - debug!(target: LOG_TARGET, duration = ?instant.elapsed(), latest_block_number = %latest_block_number, "Fetched pending data."); - if let Some(data) = data { - FetchDataResult::Pending(data) - } else { - FetchDataResult::None - } - } else { - FetchDataResult::None - }; + let result = match (from < latest_block_number, self.config.index_pending) { + (true, _) => { + let from = if from == 0 { from } else { from + 1 }; + let data = self.fetch_range(from, latest_block_number, &cursors.cursor_map).await?; + debug!(target: LOG_TARGET, duration = ?instant.elapsed(), from = %from, to = %latest_block_number, "Fetched data for range."); + FetchDataResult::Range(data) + }, + (false, true) => { + let data = self.fetch_pending(latest_block_number + 1, cursors.last_pending_block_tx).await?; + debug!(target: LOG_TARGET, duration = ?instant.elapsed(), latest_block_number = %latest_block_number, "Fetched pending data."); + data.map_or(FetchDataResult::None, FetchDataResult::Pending) + }, + (false, false) => FetchDataResult::None, + };This change could make the code more idiomatic and potentially easier to maintain.
Line range hint
465-504
: Sensei, let's make theprocess_pending
method even more robust!The
process_pending
method looks good overall, but we could potentially improve its error handling. Consider using afor
loop withcontinue
instead ofif-else
for better readability and to avoid deep nesting:- for t in data.pending_block.transactions { - let transaction_hash = t.transaction.transaction_hash(); - if let Some(tx) = last_pending_block_tx_cursor { - if transaction_hash != &tx { - continue; - } - - last_pending_block_tx_cursor = None; - continue; - } - - if let Err(e) = self - .process_transaction_with_receipt(&t, data.block_number, timestamp, &mut cursor_map) - .await - { - error!(target: LOG_TARGET, error = %e, transaction_hash = %format!("{:#x}", transaction_hash), "Processing pending transaction."); - return Err(e); - } - - last_pending_block_tx = Some(*transaction_hash); - debug!(target: LOG_TARGET, transaction_hash = %format!("{:#x}", transaction_hash), "Processed pending transaction."); - } + for t in data.pending_block.transactions { + let transaction_hash = t.transaction.transaction_hash(); + if let Some(tx) = last_pending_block_tx_cursor { + if transaction_hash != &tx { + continue; + } + last_pending_block_tx_cursor = None; + } + + match self.process_transaction_with_receipt(&t, data.block_number, timestamp, &mut cursor_map).await { + Ok(_) => { + last_pending_block_tx = Some(*transaction_hash); + debug!(target: LOG_TARGET, transaction_hash = %format!("{:#x}", transaction_hash), "Processed pending transaction."); + }, + Err(e) => { + error!(target: LOG_TARGET, error = %e, transaction_hash = %format!("{:#x}", transaction_hash), "Processing pending transaction."); + return Err(e); + } + } + }This change could make the code more readable and easier to maintain.
crates/torii/core/src/executor.rs (1)
`181-293`: Ohayo sensei! Consider adding documentation to `handle_query_type`

Adding comments to describe how each `QueryType` is handled within `handle_query_type` would improve code readability and aid future contributors in understanding the processing logic.
🛑 Comments failed to post (13)
crates/torii/graphql/src/tests/mod.rs (1)
276-279:
⚠️ Potential issueImprove error handling for database connection setup
Ohayo, sensei! While the use of the
path
parameter for creating SQLite connection options is good, the use ofunwrap()
could lead to panics if there are issues with the database connection. Consider using proper error handling to make the function more robust.Apply this diff to improve error handling:
- let options = - SqliteConnectOptions::from_str(path).unwrap().create_if_missing(true).with_regexp(); - let pool = SqlitePoolOptions::new().connect_with(options).await.unwrap(); + let options = SqliteConnectOptions::from_str(path) + .map_err(|e| anyhow::anyhow!("Failed to create SQLite options: {}", e))? + .create_if_missing(true) + .with_regexp(); + let pool = SqlitePoolOptions::new().connect_with(options).await + .map_err(|e| anyhow::anyhow!("Failed to connect to SQLite database: {}", e))?;📝 Committable suggestion
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.pub async fn spinup_types_test(path: &str) -> Result<SqlitePool> { let options = SqliteConnectOptions::from_str(path) .map_err(|e| anyhow::anyhow!("Failed to create SQLite options: {}", e))? .create_if_missing(true) .with_regexp(); let pool = SqlitePoolOptions::new().connect_with(options).await .map_err(|e| anyhow::anyhow!("Failed to connect to SQLite database: {}", e))?;
crates/torii/core/src/sql/erc.rs (3)
74-129: 🛠️ Refactor suggestion
Ohayo sensei! Enhance error handling for ERC721 token registration
The
handle_erc721_transfer
function looks solid, but we can improve its error handling, similar to what we suggested for the ERC20 function.Let's update the error handling for the token registration:
if !token_exists { - self.register_erc721_token_metadata(contract_address, &token_id, provider).await?; - self.execute().await?; + self.register_erc721_token_metadata(contract_address, &token_id, provider) + .await + .with_context(|| format!("Failed to register ERC721 token metadata for {}", token_id))?; + self.execute() + .await + .with_context(|| format!("Failed to execute after registering ERC721 token {}", token_id))?; }This change will provide more context about potential errors, making it easier to debug issues related to ERC721 token registration.
📝 Committable suggestion
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.pub async fn handle_erc721_transfer<P: Provider + Sync>( &mut self, contract_address: Felt, from_address: Felt, to_address: Felt, token_id: U256, provider: &P, block_timestamp: u64, ) -> Result<()> { // contract_address:id let token_id = felt_and_u256_to_sql_string(&contract_address, &token_id); let token_exists: bool = self.local_cache.contains_token_id(&token_id); if !token_exists { self.register_erc721_token_metadata(contract_address, &token_id, provider) .await .with_context(|| format!("Failed to register ERC721 token metadata for {}", token_id))?; self.execute() .await .with_context(|| format!("Failed to execute after registering ERC721 token {}", token_id))?; } self.store_erc_transfer_event( contract_address, from_address, to_address, U256::from(1u8), &token_id, block_timestamp, )?; // from_address/contract_address:id if from_address != Felt::ZERO { let from_balance_id = format!("{}{FELT_DELIMITER}{}", felt_to_sql_string(&from_address), &token_id); let from_balance = self .local_cache .erc_cache .entry((ContractType::ERC721, from_balance_id)) .or_default(); *from_balance -= I256::from(1u8); } if to_address != Felt::ZERO { let to_balance_id = format!("{}{FELT_DELIMITER}{}", felt_to_sql_string(&to_address), &token_id); let to_balance = self .local_cache .erc_cache .entry((ContractType::ERC721, to_balance_id)) .or_default(); *to_balance += I256::from(1u8); } if self.local_cache.erc_cache.len() >= 100000 { self.apply_cache_diff().await?; } Ok(()) }
131-209: 🛠️ Refactor suggestion
Ohayo sensei! Consider optimizing token metadata retrieval
The
register_erc20_token_metadata
function looks good overall, but we can optimize it to reduce unnecessary blockchain calls.Before fetching token metadata from the blockchain, consider checking if it already exists in the database:
async fn register_erc20_token_metadata<P: Provider + Sync>( &mut self, contract_address: Felt, token_id: &str, provider: &P, ) -> Result<()> { + // Check if token metadata already exists in the database + let existing_metadata = sqlx::query_as::<_, (String, String, u8)>( + "SELECT name, symbol, decimals FROM tokens WHERE contract_address = ?", + ) + .bind(felt_to_sql_string(&contract_address)) + .fetch_optional(&self.pool) + .await?; + + if let Some((name, symbol, decimals)) = existing_metadata { + debug!( + contract_address = %felt_to_sql_string(&contract_address), + "Token already registered for contract_address, so reusing fetched data", + ); + self.executor.send(QueryMessage::other( + "INSERT INTO tokens (id, contract_address, name, symbol, decimals) VALUES (?, ?, ?, ?, ?)" + .to_string(), + vec![ + Argument::String(token_id.to_string()), + Argument::FieldElement(contract_address), + Argument::String(name), + Argument::String(symbol), + Argument::Int(decimals.into()), + ], + ))?; + self.local_cache.register_token_id(token_id.to_string()); + return Ok(()); + } // Existing code for fetching token information from the chain // ... }This optimization will help reduce unnecessary blockchain calls and improve performance, especially for frequently used tokens.
20-71: 🛠️ Refactor suggestion
Ohayo sensei! Consider adding explicit error handling for token registration
The
handle_erc20_transfer
function looks good overall, but there's room for improvement in error handling. When registering a new token, you might want to handle potential errors more explicitly.Consider updating the error handling for the token registration:
if !token_exists { - self.register_erc20_token_metadata(contract_address, &token_id, provider).await?; - self.execute().await.with_context(|| "Failed to execute in handle_erc20_transfer")?; + self.register_erc20_token_metadata(contract_address, &token_id, provider) + .await + .with_context(|| format!("Failed to register ERC20 token metadata for {}", token_id))?; + self.execute() + .await + .with_context(|| format!("Failed to execute after registering ERC20 token {}", token_id))?; }This change provides more context about where and why an error might occur, making debugging easier.
📝 Committable suggestion
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.pub async fn handle_erc20_transfer<P: Provider + Sync>( &mut self, contract_address: Felt, from_address: Felt, to_address: Felt, amount: U256, provider: &P, block_timestamp: u64, ) -> Result<()> { // contract_address let token_id = felt_to_sql_string(&contract_address); let token_exists: bool = self.local_cache.contains_token_id(&token_id); if !token_exists { self.register_erc20_token_metadata(contract_address, &token_id, provider) .await .with_context(|| format!("Failed to register ERC20 token metadata for {}", token_id))?; self.execute() .await .with_context(|| format!("Failed to execute after registering ERC20 token {}", token_id))?; } self.store_erc_transfer_event( contract_address, from_address, to_address, amount, &token_id, block_timestamp, )?; if from_address != Felt::ZERO { // from_address/contract_address/ let from_balance_id = felts_to_sql_string(&[from_address, contract_address]); let from_balance = self .local_cache .erc_cache .entry((ContractType::ERC20, from_balance_id)) .or_default(); *from_balance -= I256::from(amount); } if to_address != Felt::ZERO { let to_balance_id = felts_to_sql_string(&[to_address, contract_address]); let to_balance = self.local_cache.erc_cache.entry((ContractType::ERC20, to_balance_id)).or_default(); *to_balance += I256::from(amount); } if self.local_cache.erc_cache.len() >= 100000 { self.apply_cache_diff().await?; } Ok(()) }
crates/torii/core/src/engine.rs (3)
112-118:
⚠️ Potential issueSensei, let's make the
get_event_processor
method more robust!The current implementation uses
unwrap()
, which could cause a panic if the contract type is not found. Consider returning anOption
orResult
instead to handle this case more gracefully:- pub fn get_event_processor( - &self, - contract_type: ContractType, - ) -> &HashMap<Felt, Vec<Box<dyn EventProcessor<P>>>> { - self.event_processors.get(&contract_type).unwrap() - } + pub fn get_event_processor( + &self, + contract_type: ContractType, + ) -> Option<&HashMap<Felt, Vec<Box<dyn EventProcessor<P>>>>> { + self.event_processors.get(&contract_type) + }This change would allow the caller to handle the case where a contract type is not found without risking a panic.
📝 Committable suggestion
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.pub fn get_event_processor( &self, contract_type: ContractType, ) -> Option<&HashMap<Felt, Vec<Box<dyn EventProcessor<P>>>>> { self.event_processors.get(&contract_type) } }
812-828: 🛠️ Refactor suggestion
Sensei, let's make the error handling in
process_event
even more robust!The current error handling in the non-parallelized event processing could be improved. Consider using a
match
statement for better error handling and logging:- if processor.validate(event) { - if let Err(e) = processor - .process( - &self.world, - &mut self.db, - block_number, - block_timestamp, - event_id, - event, - ) - .await - { - error!(target: LOG_TARGET, event_name = processor.event_key(), error = ?e, "Processing event."); - } - } else { - warn!(target: LOG_TARGET, event_name = processor.event_key(), "Event not validated."); - } + match processor.validate(event) { + true => { + match processor + .process( + &self.world, + &mut self.db, + block_number, + block_timestamp, + event_id, + event, + ) + .await + { + Ok(_) => debug!(target: LOG_TARGET, event_name = processor.event_key(), "Event processed successfully."), + Err(e) => error!(target: LOG_TARGET, event_name = processor.event_key(), error = ?e, "Processing event failed."), + } + }, + false => warn!(target: LOG_TARGET, event_name = processor.event_key(), "Event not validated."), + }This change provides more detailed logging and makes the code more idiomatic Rust.
📝 Committable suggestion
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.match processor.validate(event) { true => { match processor .process( &self.world, &mut self.db, block_number, block_timestamp, event_id, event, ) .await { Ok(_) => debug!(target: LOG_TARGET, event_name = processor.event_key(), "Event processed successfully."), Err(e) => error!(target: LOG_TARGET, event_name = processor.event_key(), error = ?e, "Processing event failed."), } }, false => warn!(target: LOG_TARGET, event_name = processor.event_key(), "Event not validated."), }
835-864: 🛠️ Refactor suggestion
Sensei, let's make the
get_all_events
function even more robust!The
get_all_events
function looks good overall, but we could potentially improve its error handling and make it more idiomatic. Consider using?
for error propagation and awhile let
loop for better readability:async fn get_all_events<P>( provider: &P, events_filter: EventFilter, events_chunk_size: u64, ) -> Result<(Option<Felt>, Vec<EventsPage>)> where P: Provider + Sync, { let mut events_pages = Vec::new(); let mut continuation_token = None; - loop { + while let Some(token) = continuation_token.take() { debug!( "Fetching events page with continuation token: {:?}, for contract: {:?}", - continuation_token, events_filter.address + token, events_filter.address ); - let events_page = provider - .get_events(events_filter.clone(), continuation_token.clone(), events_chunk_size) - .await?; + let events_page = provider + .get_events(events_filter.clone(), Some(token), events_chunk_size) + .await?; continuation_token = events_page.continuation_token.clone(); events_pages.push(events_page); - - if continuation_token.is_none() { - break; - } } Ok((events_filter.address, events_pages)) }This change makes the function more idiomatic and potentially easier to reason about.
📝 Committable suggestion
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.async fn get_all_events<P>( provider: &P, events_filter: EventFilter, events_chunk_size: u64, ) -> Result<(Option<Felt>, Vec<EventsPage>)> where P: Provider + Sync, { let mut events_pages = Vec::new(); let mut continuation_token = None; while let Some(token) = continuation_token.take() { debug!( "Fetching events page with continuation token: {:?}, for contract: {:?}", token, events_filter.address ); let events_page = provider .get_events(events_filter.clone(), Some(token), events_chunk_size) .await?; continuation_token = events_page.continuation_token.clone(); events_pages.push(events_page); } Ok((events_filter.address, events_pages)) }
bin/sozo/src/commands/options/account/controller.rs (3)
116-131: 🛠️ Refactor suggestion
Ohayo sensei! Avoid variable shadowing to improve code clarity
The variable
new_policies
is being shadowed, which might lead to confusion. Consider renaming variables to enhance readability.

Suggested refactor:

-    let new_policies = new_policies
+    let policies = new_policies
         .iter()
         .map(|p| Policy::new(p.target, get_selector_from_name(&p.method).unwrap()))
         .collect::<Vec<Policy>>();

     // Copied from Session::new
-    let hashes = new_policies.iter().map(Policy::as_merkle_leaf).collect::<Vec<Felt>>();
+    let hashes = policies.iter().map(Policy::as_merkle_leaf).collect::<Vec<Felt>>();

-    let new_policies = new_policies
+    let proved_policies = policies
         .into_iter()
         .enumerate()
         .map(|(i, policy)| ProvedPolicy {
             policy,
             proof: MerkleTree::compute_proof(hashes.clone(), i),
         })
         .collect::<Vec<ProvedPolicy>>();

-    let new_policies_root = MerkleTree::compute_root(hashes[0], new_policies[0].proof.clone());
+    let new_policies_root = MerkleTree::compute_root(hashes[0], proved_policies[0].proof.clone());

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

let policies = new_policies
    .iter()
    .map(|p| Policy::new(p.target, get_selector_from_name(&p.method).unwrap()))
    .collect::<Vec<Policy>>();

// Copied from Session::new
let hashes = policies.iter().map(Policy::as_merkle_leaf).collect::<Vec<Felt>>();

let proved_policies = policies
    .into_iter()
    .enumerate()
    .map(|(i, policy)| ProvedPolicy {
        policy,
        proof: MerkleTree::compute_proof(hashes.clone(), i),
    })
    .collect::<Vec<ProvedPolicy>>();

let new_policies_root = MerkleTree::compute_root(hashes[0], proved_policies[0].proof.clone());
133-134:
⚠️ Potential issue

Ohayo sensei! Check for empty vectors to prevent panics

Accessing
hashes[0]
and new_policies[0]
without checking can lead to a panic if they are empty. Ensure the vectors are not empty before indexing.

Suggested fix:

+    if hashes.is_empty() || new_policies.is_empty() {
+        return false;
+    }
     let new_policies_root = MerkleTree::compute_root(hashes[0], new_policies[0].proof.clone());

Committable suggestion was skipped due to low confidence.
118-118:
⚠️ Potential issue

Ohayo sensei! Handle potential errors from
get_selector_from_name

Using
unwrap()
could cause a panic if get_selector_from_name(&p.method)
fails. It's safer to handle the error to prevent potential crashes.

Possible solution:

     .map(|p| {
-        Policy::new(p.target, get_selector_from_name(&p.method).unwrap())
+        match get_selector_from_name(&p.method) {
+            Ok(selector) => Policy::new(p.target, selector),
+            Err(e) => {
+                // Handle the error, possibly return an error or log it
+                return Err(e.into());
+            }
+        }
     })

This may require adjusting the function to return a Result.

Committable suggestion was skipped due to low confidence.
crates/torii/core/src/executor.rs (2)
314-315:
⚠️ Potential issue

Ohayo sensei! Replace
unreachable!
with proper error handling

Using
unreachable!
may cause the application to panic if ContractType::WORLD
is encountered unexpectedly. It's safer to handle this case explicitly with an error to ensure the application remains robust.

Apply this diff:

-            ContractType::WORLD => unreachable!(),
+            ContractType::WORLD => {
+                return Err(anyhow::anyhow!("Unexpected contract type: {:?}", contract_type));
+            },

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

ContractType::WORLD => {
    return Err(anyhow::anyhow!("Unexpected contract type: {:?}", contract_type));
},
ContractType::ERC721 => {
181-293:
⚠️ Potential issue

Ohayo sensei! Consider enhancing error handling in
handle_query_type

Currently, when
handle_query_type
encounters an error, it is logged but not propagated, which might lead to unnoticed failures in message processing. Consider modifying the error handling to properly manage or propagate errors to maintain system reliability.

Apply this diff to propagate errors:

 match self.handle_query_type(query, query_type.clone(), &statement, &arguments, tx).await {
     Ok(()) => {},
     Err(e) => {
         error!(target: LOG_TARGET, r#type = ?query_type, error = %e, "Failed to execute query.");
+        return Err(e);
     }
 }

Committable suggestion was skipped due to low confidence.
crates/torii/core/src/sql/mod.rs (1)
172-174:
⚠️ Potential issue

Ohayo sensei! Please handle integer conversions without
expect
to prevent panics

Using
expect
when converting integers could cause panics if the value doesn't fit in the target type. To ensure robustness, consider handling conversion errors gracefully.

You can modify the code as follows:

For line 172:

-    let head = head.map(|h| h.try_into().expect("doesn't fit in u64"));
+    let head = head
+        .map(|h| h.try_into().map_err(|_| anyhow!("Head value {} doesn't fit in u64", h)))
+        .transpose()?;

For line 196:

-    let head = Argument::Int(head.try_into().expect("doesn't fit in u64"));
+    let head = Argument::Int(
+        head.try_into().map_err(|_| anyhow!("Head value {} doesn't fit in i64", head))?
+    );

For line 224:

-    let head = Argument::Int(head.try_into().expect("doesn't fit in u64"));
+    let head = Argument::Int(
+        head.try_into().map_err(|_| anyhow!("Head value {} doesn't fit in i64", head))?
+    );

Also applies to: 196-198, 224-225
Actionable comments posted: 13
🧹 Outside diff range and nitpick comments (20)
crates/torii/grpc/src/server/tests/entities_test.rs (2)
Line range hint
34-45
: Ohayo, sensei! Great job on improving the database setup!

The use of a temporary SQLite file is a smart move for testing. It provides persistence during the test while allowing for easy cleanup. The connection options look good too!
However, I have a small suggestion to enhance the cleanup process:
Consider explicitly deleting the temporary file after the test to ensure proper cleanup:
 let tempfile = NamedTempFile::new().unwrap();
 let path = tempfile.path().to_string_lossy();
+let _guard = tempfile; // This will ensure the file is deleted when it goes out of scope
This way, we can be certain that no temporary files are left behind after the test completes.
Line range hint
103-124
: Ohayo, sensei! Nice work on enhancing the Executor and Engine setup!

The addition of a shutdown channel for the Executor is a great practice for graceful shutdown. The Engine initialization looks good with the simplified Processors and the new ContractType mapping.
However, I noticed that we're not explicitly shutting down the Executor at the end of the test.
To ensure clean test execution, let's add an explicit shutdown:
 tokio::spawn(async move {
     executor.run().await.unwrap();
 });
+
+// At the end of the test function, before it returns:
+shutdown_tx.send(()).unwrap();

This will ensure that the Executor is properly shut down after the test completes.
crates/torii/core/src/sql/erc.rs (1)
337-350
: Ohayo sensei! The apply_cache_diff
function is well implemented

The
apply_cache_diff
function efficiently applies the cached balance differences to the database. The use of mem::replace
is a smart way to clear the cache while maintaining its capacity for future use.

The implementation looks good and should perform well.
Consider adding a comment explaining the purpose of the
mem::replace
call, as it might not be immediately obvious to all readers why we're using this approach. For example:

// Efficiently clear the cache while preserving its capacity
let erc_cache = mem::replace(
    &mut self.local_cache.erc_cache,
    HashMap::with_capacity(64),
);

This small addition would improve the code's readability and maintainability.
bin/torii/src/main.rs (3)
151-163
: Ohayo, sensei! LGTM with a small suggestion: Improved configuration handling

The new configuration loading logic is a great addition:
- It allows loading from a file, which is excellent for complex setups.
- It falls back to default configuration with command-line overrides when no file is specified.
- The
verify_single_world_address
function ensures consistency in world address specification.

These changes significantly improve the application's flexibility and robustness.
Consider adding a log message to indicate whether the configuration was loaded from a file or created with defaults. This would help users understand the source of the current configuration.
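One possible shape for that log line, as a rough sketch (the tracing macros match the style used elsewhere in torii, but this helper and its config_path parameter are hypothetical, not the binary's actual API):

use std::path::Path;
use tracing::info;

// Report where the effective configuration came from.
fn log_config_source(config_path: Option<&Path>) {
    match config_path {
        Some(path) => info!(path = %path.display(), "Loaded configuration from file."),
        None => info!("No config file given; using defaults with command-line overrides."),
    }
}

fn main() {
    log_config_source(None);
}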
Line range hint
186-187
: Ohayo, sensei! LGTM with a small suggestion: Flexible database handling

The change to use a temporary file when no database path is specified is a good improvement:
- It allows the application to run without requiring a specific database file.
- This is particularly useful for testing or ephemeral usage scenarios.
Consider logging the path of the temporary database file when it's created. This would be helpful for debugging purposes, especially if users need to inspect the database contents after the application has run.
362-386
: Ohayo, sensei! LGTM with a small suggestion: Flexible ERC contract parsing

The
parse_erc_contracts
function is a valuable addition:
- It supports multiple formats for specifying contracts, enhancing flexibility.
- The function includes robust error handling for invalid input formats.
- It defaults to
ContractType::WORLD
when only an address is provided, which is a sensible default.Consider adding support for specifying a custom start block for each contract. This could be useful for contracts that were deployed after the world contract. The format could be extended to
erc_type:address:start_block
oraddress:start_block
.crates/torii/libp2p/src/tests.rs (2)
Line range hint
527-544
: Ohayo, database setup sensei! LGTM with a small suggestion.The changes to use a temporary SQLite file and update the connection pool configuration are good improvements. They make the test environment more similar to a real-world scenario.
Consider adding a comment explaining why we're using a file-based SQLite database for testing instead of an in-memory one. This could help future developers understand the reasoning behind this choice.
581-584
: Ohayo, executor sensei! Great improvements, but let's add a small touch.The addition of the shutdown channel and spawning the executor in a separate task are excellent improvements to the test structure.
Consider adding error handling for the spawned executor task. You could use a
tokio::spawn
with aResult
return type and log any errors:- tokio::spawn(async move { - executor.run().await.unwrap(); - }); + tokio::spawn(async move { + if let Err(e) = executor.run().await { + eprintln!("Executor error: {}", e); + } + });crates/torii/core/src/engine.rs (7)
64-110
: Sugoi implementation, sensei!

The
initialize_event_processors
method provides a clear and organized way to set up event processors for different contract types. It greatly improves code readability and maintainability.Consider using
HashMap::with_capacity(3)
when initializingevent_processors_map
to avoid potential reallocations, as we know the exact number of contract types beforehand.
323-423
: Ohayo, sensei! These changes are truly impressive!

The modifications to the
fetch_range
method greatly enhance its efficiency:
- Handling multiple contracts and their events concurrently
- Using
join_all
for parallel event fetching- Implementing a semaphore to control concurrent block timestamp fetches
These improvements will significantly boost the performance of data retrieval.
Consider adding a comment explaining the reasoning behind processing world events first in the
fetch_all_events_tasks
queue. This will help future maintainers understand the priority given to world events.
Line range hint
466-545
: Sugoi implementation, sensei!

The changes in
process_pending
andprocess_range
methods are excellent:
- Using
cursor_map
to track the last processed transaction for each contract ensures no events are missed or processed twice.- Calling
process_tasks()
for parallelized event processing enhances performance.- Updating cursors after processing maintains consistency.
These improvements significantly enhance the robustness and efficiency of event processing.
In the
process_pending
method, consider wrapping the entire processing logic in atry
block and using?
for error propagation instead of returning early. This would ensure thatapply_cache_diff
is always called, even if an error occurs during processing.
Line range hint
547-586
: Ohayo, sensei! This parallelization is truly impressive!

The
process_tasks
method is an excellent addition for handling parallelized event processing:
- Using a semaphore to limit concurrent tasks prevents system overload.
- Spawning tasks for each group of events allows for efficient utilization of resources.
However, there's a potential issue with error handling in the spawned tasks. Errors are currently logged but not propagated. Consider collecting and returning these errors:
let mut errors = Vec::new();
for result in try_join_all(handles).await? {
    if let Err(e) = result {
        errors.push(e);
    }
}

if !errors.is_empty() {
    return Err(anyhow::anyhow!("Errors occurred during task processing: {:?}", errors));
}

This way, the caller can be aware of and handle any errors that occurred during parallel processing.
Line range hint
724-831
: Sugoi improvements, sensei!

The changes in the
process_event
method greatly enhance its flexibility and efficiency:
- Handling different contract types allows for more specialized processing.
- Using
get_event_processor
to retrieve appropriate processors improves modularity.- The parallelization strategy based on task identifiers is a clever approach to balance immediate and deferred processing.
However, there's a potential issue with the use of
expect
when finding a processor:.expect("Must find atleast one processor for the event");This could lead to a panic if no matching processor is found. Consider handling this case more gracefully:
.ok_or_else(|| anyhow::anyhow!("No matching processor found for the event"))?;

This way, you can propagate the error and handle it at a higher level without causing a panic.
835-864
: Ohayo, sensei! Nice implementation of event fetching!

The
get_all_events
function effectively handles the pagination of event fetching. It's a clean and straightforward implementation.

Consider these minor improvements:
- Add a maximum retry count to prevent infinite loops in case of persistent errors.
- Implement exponential backoff for retries in case of temporary network issues.
- Add more detailed logging, including the number of events fetched in each page.
Here's a sketch of how you might implement these improvements:
use tokio::time::sleep;

async fn get_all_events<P>(
    provider: &P,
    events_filter: EventFilter,
    events_chunk_size: u64,
) -> Result<(Option<Felt>, Vec<EventsPage>)>
where
    P: Provider + Sync,
{
    let mut events_pages = Vec::new();
    let mut continuation_token = None;
    let mut retry_count = 0;
    let max_retries = 3;
    let mut backoff = Duration::from_millis(100);

    loop {
        match provider
            .get_events(events_filter.clone(), continuation_token.clone(), events_chunk_size)
            .await
        {
            Ok(events_page) => {
                debug!(
                    "Fetched {} events for contract {:?}",
                    events_page.events.len(),
                    events_filter.address
                );
                continuation_token = events_page.continuation_token.clone();
                events_pages.push(events_page);
                retry_count = 0;
                backoff = Duration::from_millis(100);

                if continuation_token.is_none() {
                    break;
                }
            }
            Err(e) if retry_count < max_retries => {
                warn!("Error fetching events, retrying: {}", e);
                retry_count += 1;
                sleep(backoff).await;
                backoff *= 2;
            }
            Err(e) => return Err(e.into()),
        }
    }

    Ok((events_filter.address, events_pages))
}

These changes will make the function more robust and informative.
866-874
: Sugoi implementation, sensei!

The
get_block_timestamp
function is a neat and concise way to fetch block timestamps. It correctly handles both regular and pending blocks.

Consider adding more specific error handling to provide clearer context if the block fetch fails:
async fn get_block_timestamp<P>(provider: &P, block_number: u64) -> Result<u64>
where
    P: Provider + Sync,
{
    let block = provider
        .get_block_with_tx_hashes(BlockId::Number(block_number))
        .await
        .map_err(|e| anyhow::anyhow!("Failed to fetch block {}: {}", block_number, e))?;
    Ok(block.timestamp())
}

This change will make debugging easier by providing more context about which block caused an error.
crates/torii/core/src/executor.rs (4)
51-55
: Ohayo sensei! Consider documenting theApplyBalanceDiffQuery
struct.The newly introduced
ApplyBalanceDiffQuery
struct lacks documentation. Adding a doc comment will improve code readability and help other developers understand its purpose.You can add documentation like this:
/// Query to apply balance differences for ERC contracts. pub struct ApplyBalanceDiffQuery { pub erc_cache: HashMap<(ContractType, String), I256>, }
61-61
: Ohayo sensei! Update theQueryType
enum variant order for consistency.It's a good practice to group related enum variants together. Consider moving
ApplyBalanceDiff
next to other query types that modify state, for better organization.Alternatively, ensure variants are ordered logically to enhance maintainability.
371-375
: Ohayo sensei! Simplify balance initialization usingunwrap_or
.You can streamline the balance initialization by using
map
andunwrap_or
methods.Refactor the code like this:
let balance = balance .map(|(b,)| sql_string_to_u256(&b)) .unwrap_or(U256::from(0u8));
387-397
: Ohayo sensei! Ensure consistency in SQL statements formatting.

The SQL statement spans multiple lines, which might affect readability. Consider formatting it for clarity.
For example:
sqlx::query(
    "INSERT OR REPLACE INTO balances (
        id, contract_address, account_address, token_id, balance
    ) VALUES (?, ?, ?, ?, ?)"
)
.bind(id)
.bind(contract_address)
.bind(account_address)
.bind(token_id)
.bind(u256_to_sql_string(&balance))
.execute(&mut **tx)
.await?;
190-195
: Consider passingcursor_map
by reference to avoid unnecessary cloningThe
update_cursors
method receivescursor_map: HashMap<Felt, Felt>
by value, which may lead to unnecessary cloning of the entire map. Consider passing it by reference (&HashMap<Felt, Felt>
) to improve performance and reduce memory overhead.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
Cargo.lock
is excluded by !**/*.lock
📒 Files selected for processing (17)
- Cargo.toml (1 hunks)
- bin/torii/src/main.rs (8 hunks)
- crates/torii/core/Cargo.toml (1 hunks)
- crates/torii/core/src/engine.rs (21 hunks)
- crates/torii/core/src/executor.rs (5 hunks)
- crates/torii/core/src/lib.rs (0 hunks)
- crates/torii/core/src/sql/cache.rs (5 hunks)
- crates/torii/core/src/sql/erc.rs (1 hunks)
- crates/torii/core/src/sql/mod.rs (8 hunks)
- crates/torii/core/src/sql/test.rs (6 hunks)
- crates/torii/graphql/Cargo.toml (2 hunks)
- crates/torii/graphql/src/tests/metadata_test.rs (3 hunks)
- crates/torii/graphql/src/tests/mod.rs (4 hunks)
- crates/torii/graphql/src/tests/subscription_test.rs (9 hunks)
- crates/torii/grpc/src/server/tests/entities_test.rs (3 hunks)
- crates/torii/libp2p/src/server/mod.rs (2 hunks)
- crates/torii/libp2p/src/tests.rs (3 hunks)
💤 Files with no reviewable changes (1)
- crates/torii/core/src/lib.rs
🚧 Files skipped from review as they are similar to previous changes (7)
- Cargo.toml
- crates/torii/core/Cargo.toml
- crates/torii/core/src/sql/cache.rs
- crates/torii/graphql/Cargo.toml
- crates/torii/graphql/src/tests/metadata_test.rs
- crates/torii/graphql/src/tests/subscription_test.rs
- crates/torii/libp2p/src/server/mod.rs
🔇 Additional comments (22)
crates/torii/grpc/src/server/tests/entities_test.rs (3)
Line range hint
1-27
: Ohayo, sensei! LGTM for the import changes!

The new imports align well with the modifications in the test function. Good job on keeping the imports organized and relevant.
128-131
: Ohayo, sensei! Excellent improvements in data processing!

The use of an empty HashMap in
fetch_range
is more explicit and type-safe. Great job! Also, addingdb.execute()
ensures all database operations are completed before querying. These changes significantly enhance the reliability and correctness of the test.
Line range hint
133-156
: Ohayo, sensei! The test structure and assertions are on point!

The assertions effectively verify the expected outcomes, and the overall test structure follows a clear and logical flow: setup, execution, and assertion. This adheres to best practices for test design. Great job on maintaining a well-structured and comprehensive test!
crates/torii/graphql/src/tests/mod.rs (4)
1-1
: LGTM! New imports added.

Ohayo, sensei! The new imports for
HashMap
andContractType
are necessary for the changes in thespinup_types_test
function. Good job on keeping the imports organized and relevant.Also applies to: 33-33
334-334
: Nice improvement on function flexibility!

Ohayo, sensei! The addition of the
path
parameter tospinup_types_test
is a great improvement. It allows for dynamic configuration of the database path, enhancing the function's flexibility and making it easier to use in different test scenarios.
356-362
: Great job addressing the previous suggestion!

Ohayo, sensei! I'm pleased to see that you've addressed the previous comment about generalizing contract type mappings. The
Sql::new
method now uses aHashMap
to map the world address to its contract type, which is a more flexible approach. This change allows for easier addition of new contract types in the future if needed.
368-373
: Good simplification, but clarification needed

Ohayo, sensei! The simplification of the
Engine::new
call is a nice improvement for readability. However, I have a question about thefetch_range
method call:
- Why are we passing an empty
HashMap
tofetch_range
? Is this a placeholder for future use, or does the method require aHashMap
parameter even when it's not needed?Could you please provide some context on the purpose of this empty
HashMap
? If it's not needed, consider updating the method signature to make theHashMap
parameter optional.Also applies to: 377-377
crates/torii/core/src/sql/erc.rs (1)
310-335
: Ohayo sensei! Thestore_erc_transfer_event
function looks good!The implementation of
store_erc_transfer_event
is clean and straightforward. It correctly stores the ERC transfer events in the database using the executor to send a query message.The function handles all necessary fields for the transfer event, including contract address, from and to addresses, amount, token ID, and timestamp. The use of
utc_dt_string_from_timestamp
ensures that the timestamp is stored in a consistent format.bin/torii/src/main.rs (3)
56-56
: Ohayo, sensei! LGTM: Enhanced flexibility in configurationThe changes to the
Args
struct improve the flexibility of the application:
- Making
world_address
optional allows for more versatile input handling.- The new
contracts
field enables specifying ERC contract addresses directly from the command line.- The
config
field allows loading configuration from a file, which is a good practice for complex applications.These changes provide users with more options to configure the application, which is a positive improvement.
Also applies to: 137-144
318-336
: Ohayo, sensei! LGTM: Robust world address verificationThe
verify_single_world_address
function is a great addition:
- It ensures that the world address is defined only once, preventing inconsistencies.
- It handles all possible cases of world address specification (command-line, config file, or neither).
- The function updates the configuration when necessary, maintaining consistency.
This function significantly improves the robustness of the application's configuration handling.
Line range hint
337-360
: Ohayo, sensei! LGTM: Improved GraphQL server managementThe changes to the
spawn_rebuilding_graphql_server
function are well-thought-out:
- Taking
shutdown_tx
as a parameter allows for more controlled shutdown of the GraphQL server.- Creating a new shutdown receiver for each iteration ensures that each server instance can be properly shut down.
- The loop structure allows for rebuilding the server when necessary, which can be useful for handling schema changes or other updates.
These changes improve the robustness and flexibility of the GraphQL server management.
crates/torii/libp2p/src/tests.rs (1)
Line range hint
586-611
: Ohayo, model registration sensei! Excellent work, let's add a small verification.The updated model registration process with detailed type information is a great improvement. It enhances type safety and makes the tests more robust.
Let's add a verification step to ensure the model was registered correctly. Run the following script:
This script will help us confirm that the model registration was successful and the data is correct in the database.
crates/torii/core/src/engine.rs (2)
42-42
: Ohayo, sensei! This change looks great!

The introduction of
EventProcessorMap
and theevent_processors
field in theProcessors
struct enhances the modularity and scalability of event processing. It allows for more efficient handling of events based on contract types.Also applies to: 49-49
195-201
: Sugoi changes, sensei!

The additions to the
Engine
struct, including thetasks
field for parallelized event processing and thecontracts
field for contract-specific handling, are excellent improvements. The use ofArc
for theprovider
allows for safe sharing across threads, which is crucial for concurrent processing.crates/torii/core/src/executor.rs (2)
334-339
:⚠️ Potential issueOhayo sensei! Correct the index for
token_id
in the ERC20 handling.Currently,
token_id
is assigned asid[1]
, butid[1]
refers tocontract_address
. Sinceid
has three elements (account_address
,contract_address
,token_id
),token_id
should beid[2]
.This issue was previously flagged. Ensure that
token_id
correctly usesid[2]
:let account_address = id[0]; let contract_address = id[1]; - let token_id = id[1]; + let token_id = id[2];
377-381
:⚠️ Potential issueOhayo sensei! Prevent negative balances when applying balance differences.
If
balance < balance_diff.value
, subtracting them will cause an underflow. Instead of usingdbg!
for logging, handle this case properly.This concern was previously raised. Update the code to return an error when the balance would become negative:
if balance < balance_diff.value { - dbg!(&balance_diff, balance, id); + return Err(anyhow::anyhow!( + "Resulting balance would be negative for id: {}", + id + )); } balance -= balance_diff.value;crates/torii/core/src/sql/test.rs (3)
27-27
: Ohayo sensei! Import ofContractType
is appropriateThe addition of
use crate::types::ContractType;
is necessary for usingContractType
in the code below.
39-48
: Ohayo sensei! Correctly initializingEngine
with contract typesPassing
Arc::new(HashMap::from([(world_address, ContractType::WORLD)]))
toEngine::new
ensures that theEngine
is aware of the contract types. This change is appropriate and necessary.
51-51
:⚠️ Potential issueOhayo sensei! Verify the use of empty
HashMap
infetch_range
Passing an empty
HashMap
toengine.fetch_range(0, to, &HashMap::new()).await.unwrap()
might lead to missing contract types during data fetching. Consider using the sameHashMap
of contract types passed duringEngine
initialization to maintain consistency.Run the following script to verify how
fetch_range
is used elsewhere:✅ Verification successful
Ohayo sensei! 🌸
To ensure consistent usage of
HashMap
infetch_range
across the codebase, let's check all instances outside the test directories.
Ohayo sensei! 🌸
Verified the usage of
HashMap
infetch_range
across the codebase. Non-test environments correctly use a populatedHashMap
, and test cases intentionally use an emptyHashMap
for isolation.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all calls to `fetch_range` and check the parameters passed.

# Search for `fetch_range` calls and display them with context.
rg 'fetch_range\(' -A 2

Length of output: 1343
Script:
#!/bin/bash
# Description: Find all calls to `fetch_range` outside test directories and check the parameters passed.

# Search for `fetch_range` calls excluding common test directories
rg 'fetch_range\(' --glob '!**/tests/**' -A 2

Length of output: 813
crates/torii/core/src/sql/mod.rs (3)
48-52
: The newCursors
struct is well-definedOhayo sensei! The introduction of the
Cursors
struct enhances clarity by encapsulating cursor-related fields. The use ofHashMap<Felt, Felt>
forcursor_map
, along with optionallast_pending_block_tx
andhead
, is appropriate and aligns with the intended functionality.
150-152
: Confirm the use ofWHERE 1=1
in SQL update statementOhayo sensei! In the
set_last_pending_block_tx
method, the SQL query usesWHERE 1=1
, which will update thelast_pending_block_tx
for all contracts. Please confirm if updating all contracts is the intended behavior. If not, consider modifying theWHERE
clause to target a specific contract.
Line range hint
558-631
: Potential SQL injection risk with dynamic table names usingpath.join("$")
Ohayo sensei! As previously noted, constructing SQL queries with dynamic table names using
path.join("$")
can introduce SQL injection vulnerabilities if any component ofpath
contains untrusted input. Please ensure that all elements ofpath
are sanitized or validated. Alternatively, consider maintaining a whitelist of allowed table names to mitigate this risk.
// Execute `delete` and delete Record with id 20
let InvokeTransactionResult { transaction_hash } = account
Consider improving error handling for database connection
Ohayo, sensei! While the dynamic path configuration is great, we're still using unwrap()
when creating the database connection. This could lead to panics if there are issues with the connection.
Consider using ?
or expect()
with informative error messages to handle potential errors more gracefully. For example:
let options = SqliteConnectOptions::from_str(path)
.map_err(|e| anyhow::anyhow!("Failed to create SQLite options: {}", e))?
.create_if_missing(true)
.with_regexp();
let pool = SqlitePoolOptions::new()
.connect_with(options)
.await
.map_err(|e| anyhow::anyhow!("Failed to connect to SQLite database: {}", e))?;
This approach will provide more informative error messages and allow for better error propagation.
@@ -229,25 +238,21 @@ async fn main() -> anyhow::Result<()> {
         processors,
         EngineConfig {
             max_concurrent_tasks: args.max_concurrent_tasks,
-            start_block: args.start_block,
+            start_block: 0,
Ohayo, sensei! LGTM with a suggestion: Engine initialization updates
The changes to the Engine
initialization look good overall:
- Adding
Arc::new(contracts)
to theEngine::new
call is a great addition, allowing for contract-specific processing.
However, hardcoding start_block
to 0 might not be ideal:
- It removes the flexibility to start indexing from a specific block.
- For large chains, this could lead to unnecessarily long initial indexing times.
Consider making start_block
a configurable parameter, either through command-line arguments or the configuration file. This would allow users to choose between full historical indexing and starting from a more recent block.
Also applies to: 249-249
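A rough sketch of how start_block could be re-exposed, assuming the binary keeps using clap's derive API for its Args struct (the field placement and default are illustrative):

use clap::Parser;

#[derive(Parser, Debug)]
struct Args {
    /// Block number to start indexing from; 0 keeps today's full historical indexing.
    #[arg(long, default_value_t = 0)]
    start_block: u64,
}

fn main() {
    let args = Args::parse();
    println!("indexing from block {}", args.start_block);
}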
pub fn get_event_processor(
    &self,
    contract_type: ContractType,
) -> &HashMap<Felt, Vec<Box<dyn EventProcessor<P>>>> {
    self.event_processors.get(&contract_type).unwrap()
}
Ohayo, sensei! A small suggestion for improvement.
The get_event_processor
method is a useful addition, but it could be made safer. Currently, it uses unwrap()
, which might cause a panic if the contract type is not found.
Consider returning an Option
or Result
instead:
pub fn get_event_processor(
&self,
contract_type: ContractType,
) -> Option<&HashMap<Felt, Vec<Box<dyn EventProcessor<P>>>>> {
self.event_processors.get(&contract_type)
}
This way, the caller can handle the case where the contract type is not found more gracefully.
QueryType::ApplyBalanceDiff(apply_balance_diff) => {
    debug!(target: LOG_TARGET, "Applying balance diff.");
    let instant = Instant::now();
    self.apply_balance_diff(apply_balance_diff).await?;
    debug!(target: LOG_TARGET, duration = ?instant.elapsed(), "Applied balance diff.");
}
Ohayo sensei! Handle possible errors in apply_balance_diff
.
While applying balance differences, any error in self.apply_balance_diff(apply_balance_diff).await?
will cause the entire operation to fail. Consider adding error handling to ensure that individual failures do not halt the entire process.
You might modify the code to log errors for individual entries while continuing with others:
for ((contract_type, id_str), balance) in erc_cache.iter() {
if let Err(e) = self.process_balance_diff(contract_type, id_str, balance).await {
error!(target: LOG_TARGET, error = %e, "Failed to apply balance diff for id: {}", id_str);
// Optionally, collect errors to report after the loop
}
}
And refactor the matching logic into process_balance_diff
for clarity.
async fn apply_balance_diff(
    &mut self,
    apply_balance_diff: ApplyBalanceDiffQuery,
) -> Result<()> {
    let erc_cache = apply_balance_diff.erc_cache;
    for ((contract_type, id_str), balance) in erc_cache.iter() {
        let id = id_str.split(FELT_DELIMITER).collect::<Vec<&str>>();
        match contract_type {
            ContractType::WORLD => unreachable!(),
            ContractType::ERC721 => {
                // account_address/contract_address:id => ERC721
                assert!(id.len() == 2);
                let account_address = id[0];
                let token_id = id[1];
                let mid = token_id.split(":").collect::<Vec<&str>>();
                let contract_address = mid[0];

                self.apply_balance_diff_helper(
                    id_str,
                    account_address,
                    contract_address,
                    token_id,
                    balance,
                )
                .await
                .with_context(|| "Failed to apply balance diff in apply_cache_diff")?;
            }
            ContractType::ERC20 => {
                // account_address/contract_address/ => ERC20
                assert!(id.len() == 3);
                let account_address = id[0];
                let contract_address = id[1];
                let token_id = id[1];

                self.apply_balance_diff_helper(
                    id_str,
                    account_address,
                    contract_address,
                    token_id,
                    balance,
                )
                .await
                .with_context(|| "Failed to apply balance diff in apply_cache_diff")?;
            }
        }
    }

    Ok(())
}
🛠️ Refactor suggestion
Ohayo sensei! Consider refactoring apply_balance_diff
for clarity and efficiency.
The apply_balance_diff
method contains nested logic that can be broken down for better readability. Extracting match arms into helper functions improves code maintainability.
You might refactor like this:
async fn apply_balance_diff(&mut self, apply_balance_diff: ApplyBalanceDiffQuery) -> Result<()> {
let erc_cache = apply_balance_diff.erc_cache;
for ((contract_type, id_str), balance) in erc_cache.iter() {
self.process_balance_diff(contract_type, id_str, balance).await?;
}
Ok(())
}
async fn process_balance_diff(
&mut self,
contract_type: &ContractType,
id_str: &str,
balance: &I256,
) -> Result<()> {
// existing matching and processing logic
}
let account_address = id[0];
let token_id = id[1];
let mid = token_id.split(":").collect::<Vec<&str>>();
let contract_address = mid[0];
Ohayo sensei! Handle potential index out-of-bounds in mid[0]
.
When splitting token_id
with ":"
, there's an assumption that mid
has at least one element. If token_id
doesn't contain ":"
, accessing mid[0]
could panic.
Consider adding a check:
let mid = token_id.split(":").collect::<Vec<&str>>();
let contract_address = if !mid.is_empty() {
mid[0]
} else {
return Err(anyhow::anyhow!("Invalid token_id format: {}", token_id));
};
ContractType::WORLD => unreachable!(),
ContractType::ERC721 => {
    // account_address/contract_address:id => ERC721
    assert!(id.len() == 2);
    let account_address = id[0];
    let token_id = id[1];
    let mid = token_id.split(":").collect::<Vec<&str>>();
    let contract_address = mid[0];

    self.apply_balance_diff_helper(
        id_str,
        account_address,
        contract_address,
        token_id,
        balance,
    )
    .await
    .with_context(|| "Failed to apply balance diff in apply_cache_diff")?;
}
ContractType::ERC20 => {
    // account_address/contract_address/ => ERC20
    assert!(id.len() == 3);
    let account_address = id[0];
    let contract_address = id[1];
    let token_id = id[1];

    self.apply_balance_diff_helper(
        id_str,
        account_address,
        contract_address,
        token_id,
        balance,
    )
    .await
    .with_context(|| "Failed to apply balance diff in apply_cache_diff")?;
}
Ohayo sensei! Avoid using assert!
in production code for input validation.
Using assert!
for input validation can cause panics in production. It's better to handle these cases gracefully with proper error handling.
Apply these changes:
For ERC721
:
// account_address/contract_address:id => ERC721
- assert!(id.len() == 2);
+ if id.len() != 2 {
+ return Err(anyhow::anyhow!("Invalid ID format for ERC721: {}", id_str));
+ }
For ERC20
:
// account_address/contract_address/ => ERC20
- assert!(id.len() == 3);
+ if id.len() != 3 {
+ return Err(anyhow::anyhow!("Invalid ID format for ERC20: {}", id_str));
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
ContractType::WORLD => unreachable!(),
ContractType::ERC721 => {
    // account_address/contract_address:id => ERC721
    if id.len() != 2 {
        return Err(anyhow::anyhow!("Invalid ID format for ERC721: {}", id_str));
    }
    let account_address = id[0];
    let token_id = id[1];
    let mid = token_id.split(":").collect::<Vec<&str>>();
    let contract_address = mid[0];

    self.apply_balance_diff_helper(
        id_str,
        account_address,
        contract_address,
        token_id,
        balance,
    )
    .await
    .with_context(|| "Failed to apply balance diff in apply_cache_diff")?;
}
ContractType::ERC20 => {
    // account_address/contract_address/ => ERC20
    if id.len() != 3 {
        return Err(anyhow::anyhow!("Invalid ID format for ERC20: {}", id_str));
    }
    let account_address = id[0];
    let contract_address = id[1];
    let token_id = id[1];

    self.apply_balance_diff_helper(
        id_str,
        account_address,
        contract_address,
        token_id,
        balance,
    )
    .await
    .with_context(|| "Failed to apply balance diff in apply_cache_diff")?;
}
// when SQL struct is cloned a empty local_cache is created
local_cache: LocalCache,
}
🛠️ Refactor suggestion
Consider the implications of cloning LocalCache
within Sql
struct
Ohayo sensei! The local_cache
field has been added to the Sql
struct. Since Sql
implements Clone
, cloning it will clone local_cache
as well. Please ensure that cloning local_cache
is appropriate and that any internal state maintained by LocalCache
is safe to be cloned. If local_cache
holds mutable state or resources that should not be duplicated, consider wrapping it with Arc
to share ownership across clones.
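A minimal sketch of the Arc-based alternative; LocalCache is reduced here to a single mutex-guarded map purely for illustration:

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

#[derive(Default)]
struct LocalCache {
    erc_cache: Mutex<HashMap<String, u64>>,
}

#[derive(Clone)]
struct Sql {
    // Cloning `Sql` now clones only the pointer; all clones share one cache.
    local_cache: Arc<LocalCache>,
}

fn main() {
    let sql = Sql { local_cache: Arc::new(LocalCache::default()) };
    let clone = sql.clone();
    sql.local_cache.erc_cache.lock().unwrap().insert("balance".into(), 1);
    // The clone observes the same entry, since the cache is shared.
    assert_eq!(clone.local_cache.erc_cache.lock().unwrap().len(), 1);
}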
to verify that this branch doesn't change world indexing.
contracts = [
# flippyflop
{ type = "WORLD", address = "0x043cba5bc94f3ec38aee9841c8c45edf899e77ac944a88eb732b36ef3a4c2ca2" },
# LORDS
{ type = "ERC20", address = "0x044e6bcc627e6201ce09f781d1aae44ea4c21c2fdef299e34fce55bef2d02210" },
# LSVR
{ type = "ERC721", address = "0x0046506cccd3ffb110bc04445d25d94f1d26d645a30e6ac8a17c24e084bed08b" },
]

And also ran
Actionable comments posted: 2
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- crates/torii/core/src/sql/utils.rs (1 hunks)
🧰 Additional context used
🔇 Additional comments (1)
crates/torii/core/src/sql/utils.rs (1)
110-333
: Ohayo, sensei! Excellent work on the comprehensive test suite.

The test coverage for the I256 struct and its operations is thorough and well-structured. It includes various edge cases and scenarios, which is crucial for ensuring the correctness of arithmetic operations, especially with signed integers.
Great job on covering different combinations of positive, negative, and zero values, as well as edge cases like negative zero. This comprehensive approach significantly increases confidence in the implementation's correctness.
impl Add for I256 {
    type Output = I256;

    fn add(self, other: I256) -> I256 {
        // Special case: if both are negative zero, return positive zero
        if self.value == U256::from(0u8)
            && other.value == U256::from(0u8)
            && self.is_negative
            && other.is_negative
        {
            return I256 { value: U256::from(0u8), is_negative: false };
        }

        if self.is_negative == other.is_negative {
            // Same sign: add the values and keep the sign
            I256 { value: self.value + other.value, is_negative: self.is_negative }
        } else {
            // Different signs: subtract the smaller value from the larger one
            match self.value.cmp(&other.value) {
                Ordering::Greater => {
                    I256 { value: self.value - other.value, is_negative: self.is_negative }
                }
                Ordering::Less => {
                    I256 { value: other.value - self.value, is_negative: other.is_negative }
                }
                // If both values are equal, the result is zero and not negative
                Ordering::Equal => I256 { value: U256::from(0u8), is_negative: false },
            }
        }
    }
}

impl Sub for I256 {
    type Output = I256;

    fn sub(self, other: I256) -> I256 {
        let new_sign = if other.value == U256::from(0u8) { false } else { !other.is_negative };
        let negated_other = I256 { value: other.value, is_negative: new_sign };
        self.add(negated_other)
    }
}
🛠️ Refactor suggestion
Ohayo, sensei! Consider refactoring to reduce code duplication in arithmetic operations.
The Add
and Sub
implementations share similar logic, particularly in handling value comparisons and sign management. Refactoring common patterns into helper methods can improve maintainability and readability.
Consider extracting the comparison logic into a helper method:
fn compare_and_construct(self, other: I256) -> I256 {
    match self.value.cmp(&other.value) {
        Ordering::Greater => {
            I256 { value: self.value - other.value, is_negative: self.is_negative }
        }
        Ordering::Less => {
            I256 { value: other.value - self.value, is_negative: other.is_negative }
        }
        Ordering::Equal => I256 { value: U256::from(0u8), is_negative: false },
    }
}

Then use this helper in the differing-sign branch of add. Since sub already delegates to add by negating its argument, the helper covers both operations, and no separate is_addition flag is needed.
pub fn sql_string_to_u256(sql_string: &str) -> U256 {
    let sql_string = sql_string.strip_prefix("0x").unwrap_or(sql_string);
    U256::from(crypto_bigint::U256::from_be_hex(sql_string))
}
Ohayo, sensei! Consider enhancing error handling in sql_string_to_u256.
The current implementation might panic if the input string contains invalid hexadecimal characters. It's safer to handle potential errors gracefully.
Apply this diff to improve error handling:
-pub fn sql_string_to_u256(sql_string: &str) -> U256 {
+pub fn sql_string_to_u256(sql_string: &str) -> Result<U256, crypto_bigint::ParseError> {
let sql_string = sql_string.strip_prefix("0x").unwrap_or(sql_string);
- U256::from(crypto_bigint::U256::from_be_hex(sql_string))
+ let bigint = crypto_bigint::U256::from_be_hex(sql_string)?;
+ Ok(U256::from(bigint))
}
This change ensures that any parsing errors are propagated, allowing the caller to handle them appropriately.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
pub fn sql_string_to_u256(sql_string: &str) -> Result<U256, crypto_bigint::ParseError> {
    let sql_string = sql_string.strip_prefix("0x").unwrap_or(sql_string);
    let bigint = crypto_bigint::U256::from_be_hex(sql_string)?;
    Ok(U256::from(bigint))
}
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (13)
crates/torii/grpc/src/server/subscriptions/indexer.rs (1)
Line range hint
1-156
: Ohayo one last time, sensei! Let's wrap this up.

The switch from
Contract
toContractCursor
(aliased asContractUpdated
) in this file is part of a larger refactoring effort to improve contract update handling. This change aligns well with the PR objectives of implementing features related to indexing whitelisted ERC20 and ERC721 tokens within the Torii module.The modification has been consistently applied throughout the file, affecting the
IndexerManager
andService
structs, as well as their associated methods. The logic within these methods remains unchanged, which is a positive sign for the refactoring process.To ensure the smooth integration of this change, consider the following:
- Verify that all related files have been updated to use
ContractCursor
where necessary.- Update any documentation or comments that might still reference the old
Contract
type.- Run comprehensive tests to ensure that the change doesn't introduce any unexpected behavior in the indexing process.
As this change might affect the overall architecture of contract handling, it would be beneficial to:
- Document the rationale behind switching to a cursor-based approach for contract updates.
- Update any relevant architecture diagrams or documentation to reflect this change.
- Consider the performance implications of this change, especially for large-scale contract indexing.
Great job on this refactoring, sensei! It's a step towards more efficient and specific contract handling.
crates/torii/core/src/engine.rs (6)
44-60
: Excellent addition to the Processors struct, sensei!The new
event_processors
field allows for more granular control over event processing based on contract types. This is a significant improvement in the structure of the code.However, consider adding a brief comment explaining the purpose of the
event_processors
field for better documentation.

 pub struct Processors<P: Provider + Send + Sync + std::fmt::Debug + 'static> {
     pub block: Vec<Box<dyn BlockProcessor<P>>>,
     pub transaction: Vec<Box<dyn TransactionProcessor<P>>>,
     pub catch_all_event: Box<dyn EventProcessor<P>>,
+    /// Mapping of contract types to their respective event processors
     pub event_processors: HashMap<ContractType, EventProcessorMap<P>>,
 }
63-110
: Sugoi implementation of initialize_event_processors, sensei!This method provides a clear and organized way to initialize event processors for different contract types. It significantly improves code readability and maintainability.
A small suggestion to further enhance readability:
Consider extracting the initialization of each contract type's processors into separate private methods. This would make the
initialize_event_processors
method even more concise and easier to maintain. For example:

impl<P: Provider + Send + Sync + std::fmt::Debug + 'static> Processors<P> {
    pub fn initialize_event_processors() -> HashMap<ContractType, EventProcessorMap<P>> {
        let mut event_processors_map = HashMap::new();
        Self::initialize_world_processors(&mut event_processors_map);
        Self::initialize_erc20_processors(&mut event_processors_map);
        Self::initialize_erc721_processors(&mut event_processors_map);
        event_processors_map
    }

    fn initialize_world_processors(map: &mut HashMap<ContractType, EventProcessorMap<P>>) {
        // Initialize WORLD processors
    }

    // Similar methods for ERC20 and ERC721
}
Line range hint
185-289
: Impressive changes to the Engine struct and implementation, sensei!The addition of
tasks
andcontracts
fields, along with the refactoring of thefetch_data
method, suggests a more sophisticated approach to handling different contract types and parallelizing event processing. This is likely to improve performance and flexibility.A small suggestion to enhance clarity:
Consider adding a brief comment explaining the purpose of the
tasks
andcontracts
fields in theEngine
struct. This would help other developers understand their roles more quickly. For example:pub struct Engine<P: Provider + Send + Sync + std::fmt::Debug + 'static> { // ... other fields ... + /// Tasks for parallelized event processing tasks: HashMap<u64, Vec<(ContractType, ParallelizedEvent)>>, + /// Mapping of contract addresses to their types contracts: Arc<HashMap<Felt, ContractType>>, }
316-420
: Sugoi improvements to the fetch_range method, sensei!The changes significantly enhance the method's capability to handle multiple contracts and events efficiently. The use of concurrency for fetching block timestamps is a great optimization.
A small suggestion to further improve readability:
Consider extracting the block timestamp fetching logic into a separate method. This would make the
fetch_range
method more concise and easier to understand. For example:async fn fetch_block_timestamps(&self, block_numbers: HashSet<u64>) -> Result<BTreeMap<u64, u64>> { let semaphore = Arc::new(Semaphore::new(self.config.max_concurrent_tasks)); let mut set: JoinSet<Result<(u64, u64), anyhow::Error>> = JoinSet::new(); for block_number in block_numbers { let semaphore = semaphore.clone(); let provider = self.provider.clone(); set.spawn(async move { let _permit = semaphore.acquire().await.unwrap(); let block_timestamp = get_block_timestamp(&provider, block_number).await?; Ok((block_number, block_timestamp)) }); } let mut blocks = BTreeMap::new(); while let Some(result) = set.join_next().await { let (block_number, block_timestamp) = result??; blocks.insert(block_number, block_timestamp); } Ok(blocks) }Then, in the
fetch_range
method, you can simply call:let blocks = self.fetch_block_timestamps(block_set).await?;This refactoring would make the
fetch_range
method more focused on its primary responsibility.
Line range hint
551-590
: Sugoi implementation of process_tasks, sensei!The method effectively implements parallel processing of events, which should significantly improve performance. The use of a semaphore to limit concurrency is a great practice to prevent overwhelming the system.
A small suggestion to enhance error handling:
Consider adding more context to the error messages when processing events fails. This could help with debugging in production environments. For example:
if let Err(e) = processor .process(&world, &mut local_db, block_number, block_timestamp, &event_id, &event) .await { - error!(target: LOG_TARGET, event_name = processor.event_key(), error = %e, task_id = %task_id, "Processing parallelized event."); + error!( + target: LOG_TARGET, + event_name = processor.event_key(), + error = %e, + task_id = %task_id, + block_number = %block_number, + event_id = %event_id, + "Failed to process parallelized event." + ); }This additional context could be invaluable when troubleshooting issues in a production environment.
Line range hint
741-851
: Ohayo, sensei! Impressive refactoring of the process_event method!The changes to handle different contract types and implement a task-based approach for certain events should significantly improve the flexibility and efficiency of event processing.
A small suggestion to improve code clarity:
Consider extracting the task identifier logic into a separate method. This would make the
process_event
method more focused and easier to read. For example:fn get_task_identifier(event: &Event, processor_key: &str) -> u64 { match processor_key { "StoreSetRecord" | "StoreUpdateRecord" | "StoreUpdateMember" | "StoreDelRecord" => { let mut hasher = DefaultHasher::new(); event.data[0].hash(&mut hasher); event.data[1].hash(&mut hasher); hasher.finish() } _ => 0, } }Then in the
process_event
method, you can simply call:let task_identifier = get_task_identifier(event, processor.event_key().as_str());This refactoring would make the
process_event
method more concise and easier to understand.crates/torii/core/src/sql/mod.rs (3)
41-49
: Ohayo sensei! Consider the implications of cloningLocalCache
The addition of
local_cache
to theSql
struct is a good improvement, likely enhancing performance through caching. However, sinceSql
now derives theClone
trait, be mindful of the potential implications of cloning thelocal_cache
. Ensure that cloning the cache aligns with your intended behavior and doesn't lead to unexpected side effects or performance issues.Consider wrapping
local_cache
in anArc
if you want to share the cache across clones instead of creating independent copies.
179-210
: Ohayo sensei! Good addition, but let's make it more robustThe new
cursors
method is a valuable addition, providing a comprehensive view of the current state across all contracts. However, there are a couple of areas where we can improve:
Error handling: The use of
expect
could lead to panics in production. Consider using?
ormap_err
to propagate errors more gracefully.Query efficiency: The method uses two separate queries. Consider combining them into a single query to reduce database round-trips and potentially improve performance.
Here's a suggestion to address these points:
pub(crate) async fn cursors(&self) -> Result<Cursors> { let mut conn: PoolConnection<Sqlite> = self.pool.acquire().await?; let result = sqlx::query_as::<_, (String, String, Option<i64>, Option<String>)>( "SELECT c1.contract_address, c1.last_pending_block_contract_tx, c2.head, c2.last_pending_block_tx FROM contracts c1 LEFT JOIN (SELECT head, last_pending_block_tx FROM contracts LIMIT 1) c2 WHERE c1.last_pending_block_contract_tx IS NOT NULL" ) .fetch_all(&mut *conn) .await?; let (cursor_map, head, last_pending_block_tx) = result.into_iter().fold( (HashMap::new(), None, None), |(mut map, _, _), (contract, tx, head, last_tx)| { map.insert( Felt::from_str(&contract).map_err(|e| anyhow!("Invalid Felt: {}", e))?, Felt::from_str(&tx).map_err(|e| anyhow!("Invalid Felt: {}", e))?, ); (map, head.map(|h| h.try_into().map_err(|_| anyhow!("Invalid head"))?), last_tx.map(|t| Felt::from_str(&t).map_err(|e| anyhow!("Invalid Felt: {}", e)))?) }, )?; Ok(Cursors { cursor_map, last_pending_block_tx, head, }) }This version combines the queries, handles errors more gracefully, and avoids the use of
expect
.
233-251: Ohayo sensei! Another great addition with the `update_cursors` method

The `update_cursors` method is a valuable complement to the `reset_cursors` method we just reviewed. Here's what makes it shine:

- Comprehensive updates: It allows updating multiple aspects of the cursors (head, last pending block, timestamps) in a single operation.
- Flexibility: Like `reset_cursors`, it uses a `HashMap` for specifying which contracts to update and their new values.
- Efficiency: By bundling multiple updates into a single operation, it potentially reduces the number of database transactions.

The method aligns well with the multi-contract architecture and provides a more nuanced way to update the system state compared to `reset_cursors`.

Consider adding a comment explaining the difference between `reset_cursors` and `update_cursors` to help other developers choose the appropriate method for their use case.
to help other developers choose the appropriate method for their use case.scripts/verify_db_balances/src/main.rs (3)
50-51: Ohayo, sensei! Address the HACK by properly handling `token_id.high`.

The comment `// HACK: assumes token_id.high == 0` suggests that the current implementation makes an assumption that may not always hold true. To improve robustness, consider handling cases where `token_id.high` is not zero.

Would you like assistance in implementing a solution that correctly handles `token_id.high` values? I can help refactor the code to ensure it works for all possible `token_id` values.
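For illustration, a hedged sketch of building `ownerOf` calldata from both halves of the token id instead of hard-coding the high word to zero (the `U256::low`/`U256::high` accessors are assumed here; verify them against the starknet-rs version in use):

```rust
use starknet::core::types::{Felt, U256};

// Split a u256 token id into the two felt words Starknet calldata expects,
// rather than assuming token_id.high == 0.
fn token_id_calldata(token_id: U256) -> Vec<Felt> {
    vec![Felt::from(token_id.low()), Felt::from(token_id.high())]
}
```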
82-82: Initialize the logger before use.

Ensure that the logger is initialized before any logging statements are executed, so that all log outputs are captured.
132-137: Ohayo, sensei! Enhance error logging with more context.

When logging mismatches, consider including the `token_id` and `contract_type` to provide more context for debugging.

Apply this diff:

```diff
 error!(
     "Mismatch for account {} and contract {}: DB balance = {}, Starknet balance = \
-     {}",
-    account_address, contract_address, db_balance, starknet_balance
+     {}, token_id = {}, contract_type = {}",
+    account_address, contract_address, db_balance, starknet_balance, token_id, contract_type
 );
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
- Cargo.toml (1 hunks)
- crates/torii/core/src/engine.rs (22 hunks)
- crates/torii/core/src/executor.rs (8 hunks)
- crates/torii/core/src/sql/mod.rs (9 hunks)
- crates/torii/core/src/types.rs (2 hunks)
- crates/torii/grpc/src/server/mod.rs (1 hunks)
- crates/torii/grpc/src/server/subscriptions/indexer.rs (1 hunks)
- scripts/verify_db_balances/Cargo.toml (1 hunks)
- scripts/verify_db_balances/src/main.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- Cargo.toml
- crates/torii/grpc/src/server/mod.rs
- scripts/verify_db_balances/Cargo.toml
🧰 Additional context used
🔇 Additional comments (25)
crates/torii/grpc/src/server/subscriptions/indexer.rs (2)
Line range hint [1-156]: Ohayo again, sensei! The new type is consistently applied.

The change from `Contract` to `ContractCursor` (aliased as `ContractUpdated`) has been consistently applied throughout the file. The logic in methods like `add_subscriber` and `publish_updates` remains unchanged, which is good. This consistent usage suggests a well-executed refactoring.

However, to ensure we haven't missed anything, let's verify the usage of `ContractUpdated` in this file:

```shell
#!/bin/bash
# Description: Verify the usage of ContractUpdated in the current file

# Test: Count occurrences of ContractUpdated
echo "Counting occurrences of ContractUpdated:"
rg --type rust "ContractUpdated" crates/torii/grpc/src/server/subscriptions/indexer.rs | wc -l

# Test: List all lines containing ContractUpdated for manual verification
echo "Lines containing ContractUpdated:"
rg --type rust --line-number "ContractUpdated" crates/torii/grpc/src/server/subscriptions/indexer.rs
```
16-16: Ohayo, sensei! LGTM, but let's verify the impact.

The change from `Contract` to `ContractCursor` looks good and aligns with the PR objectives. This modification suggests a move towards a more specific cursor-based approach for contract updates.

To ensure consistency across the codebase, let's verify the usage of `ContractCursor`:

✅ Verification successful

Ohayo, sensei! The verification confirms that all usages have been updated to `ContractUpdated`, with no remaining instances of the old `Contract` type. Everything looks good and aligns with the PR objectives.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Description: Verify the usage of ContractCursor across the codebase

# Test: Search for ContractCursor usage
echo "Searching for ContractCursor usage:"
rg --type rust "ContractCursor"

# Test: Check for any remaining usage of the old Contract type
echo "Checking for any remaining usage of the old Contract type:"
rg --type rust "use.*Contract(?!Cursor)"
```

Length of output: 727
crates/torii/core/src/types.rs (5)
2-3: Ohayo, sensei! LGTM for the new imports.

The new imports for `VecDeque` and `PathBuf` are necessary for the upcoming changes. Good job on keeping the imports organized!
: Ohayo, sensei! Skipping comment onVecDeque
usage.A previous review has already addressed the potential use of
Vec
instead ofVecDeque
for thecontracts
field. Please refer to that comment for consideration.
129-135
: Ohayo, sensei! Skipping comment on&PathBuf
usage.A previous review has already suggested using
&Path
instead of&PathBuf
for better flexibility. Please refer to that comment for consideration.
137-141
: Ohayo, sensei! Skipping comment onr#type
usage.A previous review has already suggested renaming the
r#type
field to improve readability. Please refer to that comment for consideration.
143-171
: Ohayo, sensei! Excellent work on theContractType
enum and its implementations!The
ContractType
enum and its implementations forFromStr
andDisplay
traits are well-structured and robust. The case-insensitive parsing inFromStr
is a nice touch for better user experience. Keep up the great work!crates/torii/core/src/executor.rs (4)
55-57
: Ohayo sensei! New struct looks good for balance diff queries.The
ApplyBalanceDiffQuery
struct is well-designed for efficiently storing and retrieving balance differences across various contract types. The use ofHashMap
andI256
allows for quick lookups and support for both positive and negative balance changes.
92-92
: Ohayo sensei! Enum update aligns well with new functionality.The addition of the
ApplyBalanceDiff
variant to theQueryType
enum is consistent with the new balance difference application functionality. This update ensures that the executor can properly handle and route balance difference queries.
Line range hint
1-603
: Ohayo sensei! Overall, the changes look promising with room for improvement.The new functionality for applying balance differences is a valuable addition to the executor. The structure and logic are generally sound, but there are a few areas where we can enhance robustness and maintainability:
- Replace
assert!
statements with proper error handling.- Refactor the
apply_balance_diff
method to reduce code duplication.- Fix the token_id index for ERC20 contracts.
- Remove or replace the debug statement in
apply_balance_diff_helper
.- Add overflow checking when adding to balances.
By addressing these points, we can significantly improve the code quality and reliability. Great work on implementing this new feature, sensei!
582-582: ⚠️ Potential issue

Ohayo sensei! Let's remove the debug statement for production.

The `dbg!` macro at line 582 should be removed or replaced with proper logging for production code. Consider using a logging framework like `log` or `tracing` instead.

```diff
- dbg!(&balance_diff, balance, id);
+ log::warn!("Balance underflow: diff={:?}, balance={}, id={}", balance_diff, balance, id);
```

Likely invalid or redundant comment.
crates/torii/core/src/engine.rs (4)
Line range hint [1-42]: Ohayo, sensei! These imports look great!

The new imports and type definition for `EventProcessorMap<P>` suggest a more organized approach to handling different event types and contracts. This structure is likely to improve code maintainability and readability.
Line range hint [461-549]: Ohayo, sensei! Excellent updates to the process_pending and process_range methods!

The changes in these methods align well with the overall improvements in the event processing structure. The use of `cursor_map` allows for more granular tracking of event processing across different contracts, which should improve the reliability and efficiency of the indexing process.

The consistent approach between `process_pending` and `process_range` is commendable, as it makes the code more maintainable and easier to understand.
855-894: Sugoi addition of helper functions, sensei!

The new `get_all_events` and `get_block_timestamp` functions are excellent additions that encapsulate common operations. This should improve code reusability and readability throughout the codebase.

The `get_all_events` function effectively handles pagination of events, which is crucial for dealing with large numbers of events efficiently. The `get_block_timestamp` function provides a clean way to fetch block timestamps, which is used in several places in the code.

These additions demonstrate good software engineering practices by promoting code reuse and abstraction of common operations.
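For readers unfamiliar with the pattern, here is a hedged sketch of event pagination against the starknet-rs `Provider` trait; the PR's actual `get_all_events` may differ in shape:

```rust
use starknet::core::types::{EmittedEvent, EventFilter, EventsPage};
use starknet::providers::{Provider, ProviderError};

// Keep requesting pages until the node stops returning a continuation token.
async fn fetch_all_events<P: Provider + Sync>(
    provider: &P,
    filter: EventFilter,
    chunk_size: u64,
) -> Result<Vec<EmittedEvent>, ProviderError> {
    let mut events = Vec::new();
    let mut continuation_token: Option<String> = None;
    loop {
        let page: EventsPage =
            provider.get_events(filter.clone(), continuation_token, chunk_size).await?;
        events.extend(page.events);
        continuation_token = page.continuation_token;
        if continuation_token.is_none() {
            break;
        }
    }
    Ok(events)
}
```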
Line range hint [1-894]: Ohayo, sensei! Overall, these changes are truly impressive!

The refactoring of the `Engine` and related structures represents a significant improvement in the system's capability to handle different contract types and process events efficiently. The introduction of the `EventProcessorMap` and the new methods for initializing and managing event processors provide a more flexible and maintainable structure.

Key improvements include:

- Better organization of event processors by contract type.
- Implementation of parallel processing for certain event types.
- More granular tracking of event processing with the `cursor_map`.
- Improved error handling and logging throughout the code.

These changes should result in a more robust, efficient, and maintainable indexing system. Great work, sensei!
crates/torii/core/src/sql/mod.rs (5)
Line range hint [1-40]: Ohayo sensei! Nice improvements to imports and type definitions

The changes in this section enhance the code organization and readability. The introduction of new imports, especially the `cache` module, suggests improved functionality and potentially better performance through caching. The type aliases `IsEventMessage` and `IsStoreUpdate` make the code more self-explanatory.
: Ohayo sensei! Excellent improvements to theSql::new
methodThe changes to the
Sql::new
method significantly enhance its flexibility and robustness:
- Support for multiple contracts: The method now accepts a
HashMap
of contracts, allowing for more versatile initialization.- Atomic initialization: The use of a transaction ensures that all contracts are initialized atomically, maintaining data consistency.
- Local cache initialization: The
local_cache
is properly set up, preparing theSql
struct for efficient operation.These changes improve the overall design and should lead to better performance and reliability.
Line range hint [114-142]: Ohayo sensei! Nicely enhanced `set_head` method

The modifications to the `set_head` method are well-thought-out and align perfectly with the multi-contract support:

1. Addition of the `contract_address` parameter: This allows setting the head for specific contracts, increasing flexibility.
2. Updated SQL query: The query now correctly targets the specific contract.
3. Use of `QueryType::SetHead`: This structured approach to query execution likely improves maintainability and potentially allows for optimizations in query processing.

These changes enhance the method's functionality and integrate well with the overall multi-contract architecture.
214-232: Ohayo sensei! Excellent addition of the `reset_cursors` method

The new `reset_cursors` method is a valuable addition to the `Sql` struct. Here's why it's great:

- Batch operation: It allows resetting cursors for multiple contracts in a single operation, which can be more efficient than individual updates.
- Flexibility: The use of a `HashMap` provides flexibility in specifying which contracts to reset and their new values.
- Separation of concerns: By using `QueryType::ResetCursors`, the method delegates the actual SQL operations to a specialized handler, potentially allowing for optimizations and better maintainability.

This method aligns well with the multi-contract architecture and provides a clean interface for resetting the system state.
Line range hint [1-1054]: Ohayo sensei! Overall, impressive improvements to the SQL module

After reviewing the changes in this file, I can say that the modifications significantly enhance the functionality and flexibility of the `Sql` struct and its methods. Here's a summary of the key improvements:

1. Multi-contract support: The changes consistently implement support for multiple contracts across various methods, allowing for more versatile use of the module.
2. Caching introduction: The addition of `local_cache` suggests potential performance improvements through caching mechanisms.
3. New methods: `cursors`, `reset_cursors`, and `update_cursors` provide powerful tools for managing the state of multiple contracts.
4. Query structuring: The use of `QueryType` enums for different operations suggests a more organized approach to query execution, potentially improving maintainability and performance.

While these changes are generally positive, there are a few areas that could benefit from further attention:

1. SQL injection prevention: Some methods could be vulnerable to SQL injection. Consider using parameterized queries consistently.
2. Error handling: Some areas, particularly in the `cursors` method, could benefit from more robust error handling to prevent potential panics in production.
3. Query optimization: There might be opportunities to combine multiple queries into single operations for improved efficiency.

Overall, these changes represent a significant step forward in the capabilities of the SQL module, particularly in supporting a multi-contract architecture.
scripts/verify_db_balances/src/main.rs (5)
1-13: Ohayo, sensei! All necessary imports are correctly included.

The imported modules and libraries are appropriate for the functionality implemented in the code.
21-22: Handle potential parsing errors for `account_address` and `contract_address`.

Using `unwrap()` on `Felt::from_str(account_address).unwrap()` and `Felt::from_str(contract_address).unwrap()` can cause a panic if the input strings are invalid. It's safer to handle potential errors gracefully.
111-111: Ohayo, sensei! Opportunity to batch RPC calls for improved performance.

The comment `// IMPROVEMENT: batch multiple balanceOf calls in same rpc call` indicates that batching RPC calls could enhance performance.
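One hedged way to approach this is to at least fire the calls concurrently (shown below with `try_join_all`; this is still one HTTP request per call rather than a true JSON-RPC batch, and the helper is an illustration, not the script's current behavior):

```rust
use futures::future::try_join_all;
use starknet::core::types::{BlockId, BlockTag, Felt, FunctionCall};
use starknet::macros::selector;
use starknet::providers::Provider;

// Issue every balanceOf call concurrently instead of awaiting them one by one.
async fn fetch_balances<P: Provider + Sync>(
    provider: &P,
    queries: &[(Felt, Felt)], // (contract_address, account_address) pairs
) -> anyhow::Result<Vec<Vec<Felt>>> {
    let futures = queries.iter().map(|(contract, account)| {
        provider.call(
            FunctionCall {
                contract_address: *contract,
                entry_point_selector: selector!("balanceOf"),
                calldata: vec![*account],
            },
            BlockId::Tag(BlockTag::Pending),
        )
    });
    Ok(try_join_all(futures).await?)
}
```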
122-122: Handle potential errors when acquiring the semaphore permit.

Using `unwrap()` on `semaphore_clone.acquire().await.unwrap()` can cause a panic if the semaphore is closed. It's advisable to handle the error properly.
155-156: Ohayo, sensei! All balances have been checked successfully.

The program completes by logging that all balances have been checked, indicating the end of the verification process.
```rust
pub struct ContractCursor {
    pub head: i64,
    pub tps: i64,
    pub last_block_timestamp: i64,
    pub contract_address: String,
    pub last_pending_block_tx: Option<String>,
    pub last_pending_block_contract_tx: Option<String>,
```
💡 Codebase verification

Ohayo, sensei! 🐰

Great progress on renaming `Contract` to `ContractCursor` and adding the new fields! However, we found some remaining references to the old `Contract` struct that need your attention:

- crates/torii/grpc/src/server/subscriptions/indexer.rs
- crates/torii/core/src/executor.rs

Please update these references to ensure the renaming is complete and to avoid potential issues.

🔗 Analysis chain

Ohayo, sensei! Great job on updating the `ContractCursor` struct!

The renaming from `Contract` to `ContractCursor` and the addition of the new fields `last_pending_block_tx` and `last_pending_block_contract_tx` look good. These changes align well with the PR objectives.

To ensure these changes don't introduce any regressions, please run the following verification script:

This script will help identify any places where the old struct name is still in use and ensure that all uses of `ContractCursor` are updated to include the new fields.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Description: Verify the impact of renaming Contract to ContractCursor and the addition of new fields

# Test: Search for any remaining references to the old struct name
echo "Checking for remaining references to 'Contract' struct:"
rg --type rust -g '!types.rs' '\bContract\b' crates/torii

# Test: Verify that all uses of ContractCursor include the new fields
echo "Checking usage of ContractCursor with new fields:"
rg --type rust -A 10 'ContractCursor' crates/torii
```

Length of output: 5753
```rust
// account_address/contract_address:id => ERC721
assert!(id.len() == 2);
```
Ohayo sensei! Let's handle input validation more gracefully.

Replace the `assert!` statements with proper error handling to avoid panics in production:

```diff
- assert!(id.len() == 2);
+ if id.len() != 2 {
+     return Err(anyhow::anyhow!("Invalid ID format for ERC721: {}", id_str));
+ }

- assert!(id.len() == 3);
+ if id.len() != 3 {
+     return Err(anyhow::anyhow!("Invalid ID format for ERC20: {}", id_str));
+ }
```

Also applies to: 537-538
```rust
ContractType::ERC721 => {
    // account_address/contract_address:id => ERC721
    assert!(id.len() == 2);
    let account_address = id[0];
    let token_id = id[1];
    let mid = token_id.split(":").collect::<Vec<&str>>();
    let contract_address = mid[0];

    self.apply_balance_diff_helper(
        id_str,
        account_address,
        contract_address,
        token_id,
        balance,
    )
    .await
    .with_context(|| "Failed to apply balance diff in apply_cache_diff")?;
}
ContractType::ERC20 => {
    // account_address/contract_address/ => ERC20
    assert!(id.len() == 3);
    let account_address = id[0];
    let contract_address = id[1];
    let token_id = id[1];

    self.apply_balance_diff_helper(
        id_str,
        account_address,
        contract_address,
        token_id,
        balance,
    )
    .await
    .with_context(|| "Failed to apply balance diff in apply_cache_diff")?;
}
```
🛠️ Refactor suggestion
Ohayo sensei! Let's refactor to reduce code duplication.

Consider extracting the common logic for ERC721 and ERC20 into a separate async function to improve maintainability and reduce duplication. For example:

```rust
async fn process_balance_diff(
    &mut self,
    contract_type: &ContractType,
    id: Vec<&str>,
    balance: &I256,
) -> Result<()> {
    let (account_address, contract_address, token_id) = match contract_type {
        ContractType::ERC721 => (id[0], id[1].split(':').next().unwrap(), id[1]),
        ContractType::ERC20 => (id[0], id[1], id[2]),
        _ => return Err(anyhow::anyhow!("Unsupported contract type")),
    };

    self.apply_balance_diff_helper(
        &id.join(FELT_DELIMITER),
        account_address,
        contract_address,
        token_id,
        balance,
    )
    .await
}
```

Then use this function in the `match` statement.
```rust
assert!(id.len() == 3);
let account_address = id[0];
let contract_address = id[1];
let token_id = id[1];
```
Ohayo sensei! Let's fix the token_id index for ERC20.

There's an incorrect index for `token_id` in the ERC20 branch. It should be:

```diff
- let token_id = id[1];
+ let token_id = id[2];
```

This ensures that the correct token_id is used for ERC20 contracts.
```rust
    }
    balance -= balance_diff.value;
} else {
    balance += balance_diff.value;
```
Ohayo sensei! Let's handle potential overflow when adding to the balance.

When adding to the balance, there's a risk of overflow. Consider using checked addition to handle this case gracefully:

```diff
- balance += balance_diff.value;
+ balance = balance.checked_add(balance_diff.value)
+     .ok_or_else(|| anyhow::anyhow!("Balance overflow"))?;
```

This change will return an error if an overflow occurs, allowing the caller to handle it appropriately.
```diff
 pub async fn head(&self, contract: Felt) -> Result<(u64, Option<Felt>, Option<Felt>)> {
     let indexer_query =
         sqlx::query_as::<_, (Option<i64>, Option<String>, Option<String>, String)>(
-            "SELECT head, last_pending_block_world_tx, last_pending_block_tx, contract_type \
-             FROM contracts WHERE id = ?",
+            "SELECT head, last_pending_block_contract_tx, last_pending_block_tx, \
+             contract_type FROM contracts WHERE id = ?",
         )
-        .bind(format!("{:#x}", self.world_address));
-
-    let indexer: (Option<i64>, Option<String>, Option<String>, String) =
-        indexer_query.fetch_one(&self.pool).await?;
+        .bind(format!("{:#x}", contract));
+
+    let indexer: (Option<i64>, Option<String>, Option<String>, String) = indexer_query
+        .fetch_one(&self.pool)
+        .await
+        .with_context(|| format!("Failed to fetch head for contract: {:#x}", contract))?;
```
Ohayo sensei! Good update, but watch out for potential SQL injection

The modification to the `head` method to support multiple contracts is a nice improvement, aligning well with the changes in the `new` method. However, there's a potential security concern:

The `contract` parameter is directly interpolated into the SQL query string. This could lead to SQL injection if the `contract` value is not properly sanitized or validated.

Consider using parameterized queries or prepared statements to mitigate the risk of SQL injection. For example:

```rust
let indexer_query = sqlx::query_as::<_, (Option<i64>, Option<String>, Option<String>, String)>(
    "SELECT head, last_pending_block_contract_tx, last_pending_block_tx, contract_type FROM contracts WHERE id = ?"
).bind(contract);
```

This approach ensures that the `contract` value is properly escaped and treated as data, not part of the SQL command.
```rust
info!("Checking {} balances", rows.len());

let provider =
    Arc::new(JsonRpcClient::new(HttpTransport::new(Url::parse(&args.rpc_url).unwrap())));
```
Handle potential errors when parsing the RPC URL.

Using `unwrap()` on `Url::parse(&args.rpc_url).unwrap()` can cause a panic if the provided RPC URL is invalid. It's safer to handle the error gracefully to prevent runtime crashes.

Apply this diff:

```diff
-let provider =
-    Arc::new(JsonRpcClient::new(HttpTransport::new(Url::parse(&args.rpc_url).unwrap())));
+let rpc_url = Url::parse(&args.rpc_url)?;
+let provider = Arc::new(JsonRpcClient::new(HttpTransport::new(rpc_url)));
```
```rust
for handle in handles {
    handle.await??;
}
```
🛠️ Refactor suggestion
Ohayo, sensei! Consider handling task join errors separately.

When awaiting the spawned tasks, if one fails, it might be helpful to log which task failed to aid in debugging.

Apply this diff:

```diff
 for handle in handles {
-    handle.await??;
+    match handle.await {
+        Ok(Ok(_)) => {}
+        Ok(Err(e)) => error!("Task failed with error: {}", e),
+        Err(e) => error!("Failed to join task: {}", e),
+    }
 }
```
```rust
let balance = match contract_type {
    "ERC20" => {
        let balance = provider
            .call(
                FunctionCall {
                    contract_address,
                    entry_point_selector: selector!("balanceOf"),
                    calldata: vec![account_address],
                },
                BlockId::Tag(starknet::core::types::BlockTag::Pending),
            )
            .await?;

        let balance_low = balance[0].to_u128().unwrap();
        let balance_high = balance[1].to_u128().unwrap();

        let balance = U256::from_words(balance_low, balance_high);
        format!("{:#064x}", balance)
    }
    "ERC721" => {
        let token_id = Felt::from_str(token_id.split(":").nth(1).unwrap()).unwrap();
        let balance = provider
            .call(
                FunctionCall {
                    contract_address,
                    entry_point_selector: selector!("ownerOf"),
                    // HACK: assumes token_id.high == 0
                    calldata: vec![token_id, Felt::ZERO],
                },
                BlockId::Tag(starknet::core::types::BlockTag::Pending),
            )
            .await?;
        if account_address != balance[0] {
            format!("{:#064x}", U256::from(0u8))
        } else {
            format!("{:#064x}", U256::from(1u8))
        }
    }
    _ => unreachable!(),
};
Ok(balance)
}
```
Consider refactoring `get_balance_from_starknet` for better error handling and readability.

- Issue with using `unwrap()`: There are multiple instances where `unwrap()` is used, which can lead to panics, specifically at lines 37, 38, and 44.
- Suggestion: Replace `unwrap()` with proper error propagation using the `?` operator, or handle the errors explicitly.

Apply this diff to handle potential errors:

```diff
-let balance_low = balance[0].to_u128().unwrap();
-let balance_high = balance[1].to_u128().unwrap();
+let balance_low = balance.get(0).and_then(|v| v.to_u128()).ok_or("Invalid balance_low value")?;
+let balance_high = balance.get(1).and_then(|v| v.to_u128()).ok_or("Invalid balance_high value")?;

-let token_id = Felt::from_str(token_id.split(":").nth(1).unwrap()).unwrap();
+let token_id_part = token_id
+    .split(':')
+    .nth(1)
+    .ok_or("Invalid token_id format")?;
+let token_id = Felt::from_str(token_id_part)?;
```
```rust
let args = Args::parse();

// Use the provided database path
let pool = SqlitePool::connect(&format!("sqlite:{}", args.db_path)).await?;
```
💡 Codebase verification

Ohayo, sensei! The verification confirms that `args.db_path` is directly concatenated with the `sqlite:` prefix without any checks. This could lead to potential misconfigurations if `db_path` already includes the prefix.

- File: scripts/verify_db_balances/src/main.rs, Line: 88

🔗 Analysis chain

Ohayo, sensei! Handle potential errors when formatting the database path.

Using `format!("sqlite:{}", args.db_path)` assumes that the `db_path` does not already include the `sqlite:` prefix. Consider checking or sanitizing the input to prevent potential misconfigurations.

Here's a script to check if `db_path` starts with `sqlite:`:

Ohayo, sensei! Let's get a closer look at how `args.db_path` is handled in your codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Description: Verify that db_path does not start with 'sqlite:'

# Test: Echo the db_path variable. Expect: It should not start with 'sqlite:'
echo "${args.db_path}" | grep '^sqlite:' && echo "db_path should not start with 'sqlite:'"
```

Length of output: 99

Script:

```shell
#!/bin/bash
# Description: Search for usage of `args.db_path` in main.rs to check if it is sanitized or prefixed with 'sqlite:'

# Use ripgrep to find lines where `args.db_path` is used
rg 'args\.db_path' scripts/verify_db_balances/src/main.rs
```

Length of output: 139
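A hedged sketch of normalizing the flag before connecting (the `strip_prefix` behavior is an assumption about the desired fix, not what the script does today):

```rust
// Accept both "path/to.db" and "sqlite:path/to.db" without double-prefixing.
fn sqlite_url(db_path: &str) -> String {
    let path = db_path.strip_prefix("sqlite:").unwrap_or(db_path);
    format!("sqlite:{path}")
}

fn main() {
    assert_eq!(sqlite_url("torii.db"), "sqlite:torii.db");
    assert_eq!(sqlite_url("sqlite:torii.db"), "sqlite:torii.db");
}
```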
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (3)
crates/torii/core/src/sql/erc.rs (3)
35-37: Ohayo sensei! Consider improving error handling here.

The `execute` call uses `with_context`, which is good for providing additional context. However, we could make the error message more specific to help with debugging.

Consider modifying the error message to include more details:

```diff
- self.execute().await.with_context(|| "Failed to execute in handle_erc20_transfer")?;
+ self.execute().await.with_context(|| format!("Failed to execute ERC20 token registration for contract address: {}", token_id))?;
```
337-351: Ohayo sensei! Let's add a bit more robustness here.

The `apply_cache_diff` method looks good overall, but we can improve its error handling slightly.

Consider adding error context to the `send` operation:

```diff
 pub async fn apply_cache_diff(&mut self) -> Result<()> {
     if !self.local_cache.erc_cache.is_empty() {
-        self.executor.send(QueryMessage::new(
+        self.executor.send(QueryMessage::new(
             "".to_string(),
             vec![],
             QueryType::ApplyBalanceDiff(ApplyBalanceDiffQuery {
                 erc_cache: mem::replace(
                     &mut self.local_cache.erc_cache,
                     HashMap::with_capacity(64),
                 ),
             }),
-        ))?;
+        )).context("Failed to send ApplyBalanceDiff query")?;
     }
     Ok(())
 }
```

This change will provide more context if an error occurs during the send operation, making it easier to debug issues.
1-352: Ohayo sensei! Here's a summary of our review.

Overall, the implementation of ERC20 and ERC721 token handling in this file is solid. The code effectively manages token transfers, metadata registration, and balance updates. However, there are a few areas where we can improve:

1. Error handling: Replace `unwrap()` and `expect()` calls with proper error propagation to make the code more robust.
2. Performance optimization: Consider implementing batched token registration to reduce database operations.
3. Code duplication: Extract common functionality between ERC20 and ERC721 handling into shared methods.
4. Consistency: Align error handling and optimization strategies between the ERC20 and ERC721 methods.

By addressing these points, we can enhance the reliability and efficiency of the token handling system. Great work so far, and I'm excited to see these improvements implemented!
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- crates/torii/core/src/sql/erc.rs (1 hunks)
🧰 Additional context used
🔇 Additional comments (1)
crates/torii/core/src/sql/erc.rs (1)
310-335: Ohayo sensei! This method looks good to me.

The `store_erc_transfer_event` method is well-implemented. It correctly stores the ERC transfer event details in the database, using the executor to send a query message.

The implementation is clean and straightforward. Good job!
```rust
pub async fn handle_erc20_transfer<P: Provider + Sync>(
    &mut self,
    contract_address: Felt,
    from_address: Felt,
    to_address: Felt,
    amount: U256,
    provider: &P,
    block_timestamp: u64,
) -> Result<()> {
    // contract_address
    let token_id = felt_to_sql_string(&contract_address);

    let token_exists: bool = self.local_cache.contains_token_id(&token_id);

    if !token_exists {
        self.register_erc20_token_metadata(contract_address, &token_id, provider).await?;
        self.execute().await.with_context(|| "Failed to execute in handle_erc20_transfer")?;
    }

    self.store_erc_transfer_event(
        contract_address,
        from_address,
        to_address,
        amount,
        &token_id,
        block_timestamp,
    )?;

    if from_address != Felt::ZERO {
        // from_address/contract_address/
        let from_balance_id = felts_to_sql_string(&[from_address, contract_address]);
        let from_balance = self
            .local_cache
            .erc_cache
            .entry((ContractType::ERC20, from_balance_id))
            .or_default();
        *from_balance -= I256::from(amount);
    }

    if to_address != Felt::ZERO {
        let to_balance_id = felts_to_sql_string(&[to_address, contract_address]);
        let to_balance =
            self.local_cache.erc_cache.entry((ContractType::ERC20, to_balance_id)).or_default();
        *to_balance += I256::from(amount);
    }

    if self.local_cache.erc_cache.len() >= 100000 {
        self.apply_cache_diff().await?;
    }

    Ok(())
}
```
🛠️ Refactor suggestion
Ohayo sensei! Consider optimizing token registration process.
The current implementation registers tokens and executes the operation immediately. This could be optimized by batching token registrations.
Consider implementing a queue for token registrations and executing them in batches. This could reduce the number of database operations and improve overall performance. Here's a high-level suggestion:

```rust
const REGISTRATION_BATCH_SIZE: usize = 100; // Adjust based on your needs

pub async fn handle_erc20_transfer<P: Provider + Sync>(
    &mut self,
    // ... other parameters ...
) -> Result<()> {
    // ... existing code ...

    if !token_exists {
        self.queue_erc20_token_metadata_registration(contract_address, &token_id, provider).await?;
        if self.registration_queue.len() >= REGISTRATION_BATCH_SIZE {
            self.execute_registration_queue().await?;
        }
    }

    // ... rest of the existing code ...
}
```

This change would require implementing a queue for token registrations and a method to execute the queued registrations. The `REGISTRATION_BATCH_SIZE` constant should be tuned based on your specific use case and performance requirements.
```rust
pub async fn handle_erc721_transfer<P: Provider + Sync>(
    &mut self,
    contract_address: Felt,
    from_address: Felt,
    to_address: Felt,
    token_id: U256,
    provider: &P,
    block_timestamp: u64,
) -> Result<()> {
    // contract_address:id
    let token_id = felt_and_u256_to_sql_string(&contract_address, &token_id);
    let token_exists: bool = self.local_cache.contains_token_id(&token_id);

    if !token_exists {
        self.register_erc721_token_metadata(contract_address, &token_id, provider).await?;
        self.execute().await?;
    }

    self.store_erc_transfer_event(
        contract_address,
        from_address,
        to_address,
        U256::from(1u8),
        &token_id,
        block_timestamp,
    )?;

    // from_address/contract_address:id
    if from_address != Felt::ZERO {
        let from_balance_id =
            format!("{}{FELT_DELIMITER}{}", felt_to_sql_string(&from_address), &token_id);
        let from_balance = self
            .local_cache
            .erc_cache
            .entry((ContractType::ERC721, from_balance_id))
            .or_default();
        *from_balance -= I256::from(1u8);
    }

    if to_address != Felt::ZERO {
        let to_balance_id =
            format!("{}{FELT_DELIMITER}{}", felt_to_sql_string(&to_address), &token_id);
        let to_balance = self
            .local_cache
            .erc_cache
            .entry((ContractType::ERC721, to_balance_id))
            .or_default();
        *to_balance += I256::from(1u8);
    }

    if self.local_cache.erc_cache.len() >= 100000 {
        self.apply_cache_diff().await?;
    }

    Ok(())
}
```
🛠️ Refactor suggestion
Ohayo sensei! Let's align this method with `handle_erc20_transfer`.

The `handle_erc721_transfer` method is quite similar to `handle_erc20_transfer`, but it could benefit from the same optimizations we suggested earlier.

Consider applying the following changes:

1. Improve error handling for the `execute` call:

```diff
- self.execute().await?;
+ self.execute().await.with_context(|| format!("Failed to execute ERC721 token registration for contract address: {}", token_id))?;
```

2. Implement batched token registration:

```rust
if !token_exists {
    self.queue_erc721_token_metadata_registration(contract_address, &token_id, provider).await?;
    if self.registration_queue.len() >= REGISTRATION_BATCH_SIZE {
        self.execute_registration_queue().await?;
    }
}
```

These changes will improve consistency between ERC20 and ERC721 handling and potentially enhance performance.
```rust
async fn register_erc20_token_metadata<P: Provider + Sync>(
    &mut self,
    contract_address: Felt,
    token_id: &str,
    provider: &P,
) -> Result<()> {
    // Fetch token information from the chain
    let name = provider
        .call(
            FunctionCall {
                contract_address,
                entry_point_selector: get_selector_from_name("name").unwrap(),
                calldata: vec![],
            },
            BlockId::Tag(BlockTag::Pending),
        )
        .await?;

    // len = 1 => return value felt (i.e. legacy erc20 token)
    // len > 1 => return value ByteArray (i.e. new erc20 token)
    let name = if name.len() == 1 {
        parse_cairo_short_string(&name[0]).unwrap()
    } else {
        ByteArray::cairo_deserialize(&name, 0)
            .expect("Return value not ByteArray")
            .to_string()
            .expect("Return value not String")
    };

    let symbol = provider
        .call(
            FunctionCall {
                contract_address,
                entry_point_selector: get_selector_from_name("symbol").unwrap(),
                calldata: vec![],
            },
            BlockId::Tag(BlockTag::Pending),
        )
        .await?;

    let symbol = if symbol.len() == 1 {
        parse_cairo_short_string(&symbol[0]).unwrap()
    } else {
        ByteArray::cairo_deserialize(&symbol, 0)
            .expect("Return value not ByteArray")
            .to_string()
            .expect("Return value not String")
    };

    let decimals = provider
        .call(
            FunctionCall {
                contract_address,
                entry_point_selector: get_selector_from_name("decimals").unwrap(),
                calldata: vec![],
            },
            BlockId::Tag(BlockTag::Pending),
        )
        .await?;
    let decimals = u8::cairo_deserialize(&decimals, 0).expect("Return value not u8");

    // Insert the token into the tokens table
    self.executor.send(QueryMessage::other(
        "INSERT INTO tokens (id, contract_address, name, symbol, decimals) VALUES (?, ?, ?, \
         ?, ?)"
            .to_string(),
        vec![
            Argument::String(token_id.to_string()),
            Argument::FieldElement(contract_address),
            Argument::String(name),
            Argument::String(symbol),
            Argument::Int(decimals.into()),
        ],
    ))?;

    self.local_cache.register_token_id(token_id.to_string());

    Ok(())
}
```
Ohayo sensei! Let's make this method more robust.

The `register_erc20_token_metadata` method looks good functionally, but we can improve its error handling to make it more resilient.

Replace the `unwrap()` and `expect()` calls with proper error handling. Here's a suggested refactor:

```rust
async fn register_erc20_token_metadata<P: Provider + Sync>(
    &mut self,
    contract_address: Felt,
    token_id: &str,
    provider: &P,
) -> Result<()> {
    // Fetch token information from the chain
    let name = provider
        .call(
            FunctionCall {
                contract_address,
                entry_point_selector: get_selector_from_name("name")?,
                calldata: vec![],
            },
            BlockId::Tag(BlockTag::Pending),
        )
        .await?;

    let name = if name.len() == 1 {
        parse_cairo_short_string(&name[0]).context("Failed to parse name")?
    } else {
        ByteArray::cairo_deserialize(&name, 0)
            .context("Failed to deserialize name")?
            .to_string()
            .context("Failed to convert name to string")?
    };

    // Similar changes for symbol and decimals...

    // Insert the token into the tokens table
    self.executor.send(QueryMessage::other(
        "INSERT INTO tokens (id, contract_address, name, symbol, decimals) VALUES (?, ?, ?, ?, ?)"
            .to_string(),
        vec![
            Argument::String(token_id.to_string()),
            Argument::FieldElement(contract_address),
            Argument::String(name),
            Argument::String(symbol),
            Argument::Int(decimals.into()),
        ],
    ))?;

    self.local_cache.register_token_id(token_id.to_string());

    Ok(())
}
```

This refactored version properly propagates errors and provides more context when errors occur.
```rust
async fn register_erc721_token_metadata<P: Provider + Sync>(
    &mut self,
    contract_address: Felt,
    token_id: &str,
    provider: &P,
) -> Result<()> {
    let res = sqlx::query_as::<_, (String, String, u8)>(
        "SELECT name, symbol, decimals FROM tokens WHERE contract_address = ?",
    )
    .bind(felt_to_sql_string(&contract_address))
    .fetch_one(&self.pool)
    .await;

    // If we find a token already registered for this contract_address we dont need to refetch
    // the data since its same for all ERC721 tokens
    if let Ok((name, symbol, decimals)) = res {
        debug!(
            contract_address = %felt_to_sql_string(&contract_address),
            "Token already registered for contract_address, so reusing fetched data",
        );
        self.executor.send(QueryMessage::other(
            "INSERT INTO tokens (id, contract_address, name, symbol, decimals) VALUES (?, ?, \
             ?, ?, ?)"
                .to_string(),
            vec![
                Argument::String(token_id.to_string()),
                Argument::FieldElement(contract_address),
                Argument::String(name),
                Argument::String(symbol),
                Argument::Int(decimals.into()),
            ],
        ))?;
        self.local_cache.register_token_id(token_id.to_string());
        return Ok(());
    }

    // Fetch token information from the chain
    let name = provider
        .call(
            FunctionCall {
                contract_address,
                entry_point_selector: get_selector_from_name("name").unwrap(),
                calldata: vec![],
            },
            BlockId::Tag(BlockTag::Pending),
        )
        .await?;

    // len = 1 => return value felt (i.e. legacy erc721 token)
    // len > 1 => return value ByteArray (i.e. new erc721 token)
    let name = if name.len() == 1 {
        parse_cairo_short_string(&name[0]).unwrap()
    } else {
        ByteArray::cairo_deserialize(&name, 0)
            .expect("Return value not ByteArray")
            .to_string()
            .expect("Return value not String")
    };

    let symbol = provider
        .call(
            FunctionCall {
                contract_address,
                entry_point_selector: get_selector_from_name("symbol").unwrap(),
                calldata: vec![],
            },
            BlockId::Tag(BlockTag::Pending),
        )
        .await?;
    let symbol = if symbol.len() == 1 {
        parse_cairo_short_string(&symbol[0]).unwrap()
    } else {
        ByteArray::cairo_deserialize(&symbol, 0)
            .expect("Return value not ByteArray")
            .to_string()
            .expect("Return value not String")
    };

    let decimals = 0;

    // Insert the token into the tokens table
    self.executor.send(QueryMessage::other(
        "INSERT INTO tokens (id, contract_address, name, symbol, decimals) VALUES (?, ?, ?, \
         ?, ?)"
            .to_string(),
        vec![
            Argument::String(token_id.to_string()),
            Argument::FieldElement(contract_address),
            Argument::String(name),
            Argument::String(symbol),
            Argument::Int(decimals.into()),
        ],
    ))?;

    self.local_cache.register_token_id(token_id.to_string());

    Ok(())
}
```
🛠️ Refactor suggestion
Ohayo sensei! Let's refactor this method for better robustness and DRYness.

The `register_erc721_token_metadata` method is similar to `register_erc20_token_metadata`, and we can apply similar improvements.

Consider the following refactoring:

1. Improve error handling by replacing `unwrap()` and `expect()` with proper error propagation.
2. Extract the common code for fetching and parsing token metadata into a separate method.

Here's a suggested refactor:

```rust
async fn fetch_token_metadata<P: Provider + Sync>(
    &self,
    provider: &P,
    contract_address: Felt,
    field: &str,
) -> Result<String> {
    let data = provider
        .call(
            FunctionCall {
                contract_address,
                entry_point_selector: get_selector_from_name(field)?,
                calldata: vec![],
            },
            BlockId::Tag(BlockTag::Pending),
        )
        .await?;

    if data.len() == 1 {
        parse_cairo_short_string(&data[0]).context(format!("Failed to parse {}", field))
    } else {
        ByteArray::cairo_deserialize(&data, 0)
            .context(format!("Failed to deserialize {}", field))?
            .to_string()
            .context(format!("Failed to convert {} to string", field))
    }
}

async fn register_erc721_token_metadata<P: Provider + Sync>(
    &mut self,
    contract_address: Felt,
    token_id: &str,
    provider: &P,
) -> Result<()> {
    if let Ok((name, symbol, decimals)) = self.get_existing_token_metadata(contract_address).await {
        self.insert_token_metadata(token_id, contract_address, &name, &symbol, decimals)?;
        return Ok(());
    }

    let name = self.fetch_token_metadata(provider, contract_address, "name").await?;
    let symbol = self.fetch_token_metadata(provider, contract_address, "symbol").await?;
    let decimals = 0;

    self.insert_token_metadata(token_id, contract_address, &name, &symbol, decimals)?;

    Ok(())
}
```

This refactored version improves error handling, reduces code duplication, and separates concerns more clearly.
Great work @lambda-0x! Let's keep monitoring how Torii is doing with this addition, and ensure the `start-block` removal is propagated across the stack and communicated.
Combination of: #2442 #2483 #2491

To be merged after: #2460

NOTE: the `--start-block` configuration is removed since it was no longer required.

Summary by CodeRabbit

New Features

Bug Fixes

Documentation
- Updated the `.gitignore` file to exclude additional log files from version control.
- Added a `Cargo.toml` for the `verify_db_balances` package, specifying dependencies and metadata.