Add calldata located variables. #1499
Conversation
@charles-cooper could you give this a look through? :)
Sure! I will have to get to it next week, though.
Sure! I reckon it's mostly OK, I checked it over when it wasn't 4 AM ... and it still looked good hehe
name=arg.name,
pos=default_arg_pos,
typ=arg.typ,
mutable=False,
for clarity, could set location explicitly here
@jacqueswww this looks pretty good, I didn't see the code that copies dynamic data to memory though? Could you help point it out to me?
@@ -138,12 +135,29 @@ def parse_public_function(code: ast.FunctionDef,
mem_pos, _ = context.memory_allocator.increase_memory(32 * get_size_of_type(arg.typ))
context.vars[arg.name] = VariableRecord(arg.name, mem_pos, arg.typ, False)
@charles-cooper The byte-arrays are copied into memory here.
What I did
Fixes #1381.
This is an optimisation I am glad I finally got to make! Basically, we were copying variables to memory unnecessarily; by using the calldata directly, one not only gets better speed but also ensures at the EVM level that the variables are read-only (yes, we had a test that was actually altering a parameter in a tuple assign).
This change affects these function parameter types.
Note: it turned out ByteArrays were too difficult to optimise, especially since they tend to have to be packed, which would mean allocating them again anyway. Also, one loses out by not always being able to re-use the identity (memory copy) precompile contract.
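As a rough sketch of the idea (simplified, and not the actual Vyper compiler classes — the `location` field and `load_op` helper here are hypothetical illustrations): a variable record that tracks where its data lives lets codegen read arguments via `CALLDATALOAD` instead of first copying them into memory, and calldata-located variables are inherently read-only.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-in for the compiler's VariableRecord:
# it records where the variable's data lives so code generation can pick
# the right load opcode.
@dataclass
class VariableRecord:
    name: str
    pos: int
    typ: str
    mutable: bool
    location: str = "memory"  # "memory" or "calldata"

def load_op(var: VariableRecord) -> str:
    # calldata-located variables are read in place with CALLDATALOAD;
    # there is no opcode to write calldata, so they are read-only by construction.
    return "calldataload" if var.location == "calldata" else "mload"

arg = VariableRecord("amount", 4, "uint256", mutable=False, location="calldata")
```

The point of making the location explicit (as the review comment above also suggests) is that every later codegen decision can branch on it instead of assuming everything was copied to memory first.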
Taking some files from the examples directory:
- Blind Auction -> 73% less bytecode (!!!)
- ERC20 -> 4.9% less bytecode
- ERC721 -> 6.3% less bytecode
Additionally, we save runtime costs because we skip a full set of N mstore operations (and the associated memory growth) on each call.
Also, private calls tend to be much cheaper on average, because the in-memory context is smaller.
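To give a feel for the runtime saving described above, here is a small sketch using the EVM's memory expansion cost formula from the Yellow Paper (3·a + ⌊a²/512⌋ gas for a highest-touched memory word index of a); the `copy_cost` helper is a hypothetical simplification that assumes memory was empty before the copy and ignores the stack setup around each MSTORE.

```python
def memory_expansion_cost(words: int) -> int:
    # EVM memory expansion cost (Yellow Paper, Appendix G):
    # 3 * a + floor(a**2 / 512), where a is the number of 32-byte
    # memory words touched.
    return 3 * words + words ** 2 // 512

def copy_cost(params: int) -> int:
    # Simplified: copying k word-sized parameters into fresh memory costs
    # k MSTOREs (3 gas each) plus the expansion of the memory they occupy.
    GAS_MSTORE = 3
    return params * GAS_MSTORE + memory_expansion_cost(params)

# Reading a parameter directly with CALLDATALOAD avoids this entire
# per-call cost, since calldata needs no copy and no memory growth.
```

For small parameter counts the quadratic term is negligible, but the linear cost is paid on every external call, which is why skipping the copy shows up in runtime gas as well as bytecode size.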
How I did it
Used a lot of coffee ☕
How to verify it
Check commits, play with your own contracts.
Description for the changelog
Bytecode & Runtime optimisation for function parameters.
Cute Animal Picture