
Progress on Invariant Tests #3277

Closed
nine-december opened this issue Sep 19, 2022 · 5 comments · Fixed by #7914
Labels
C-forge Command: forge Cmd-forge-test Command: forge test T-feature Type: feature

Comments

@nine-december

Component

Forge

Describe the feature you would like

Currently, invariant testing runs blind: there is no feedback while it executes.

It would be nice to have a dynamic progress indicator during the run, like Echidna's (which shows a counter of current runs / total runs).
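For illustration, here is a minimal sketch of the kind of counter Echidna prints. It is in Python with made-up names (`format_progress`, `run_campaign`), purely to show the desired UX, not a proposal for how forge (which is written in Rust) would implement it:

```python
import sys

def format_progress(current: int, total: int) -> str:
    """Render an Echidna-style 'runs: current/total' counter line."""
    percent = 100 * current // total
    return f"[{percent:3d}%] runs: {current}/{total}"

def run_campaign(total_runs: int) -> None:
    # Hypothetical driver loop: each iteration stands in for one fuzz run.
    for i in range(1, total_runs + 1):
        # '\r' rewrites the same terminal line instead of scrolling.
        sys.stderr.write("\r" + format_progress(i, total_runs))
    sys.stderr.write("\n")

print(format_progress(128, 256))  # → [ 50%] runs: 128/256
```

The key point is simply that the runner knows both the current run index and the configured total, so a live counter costs almost nothing to render.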

Additional context

No response

@nine-december nine-december added the T-feature Type: feature label Sep 19, 2022
@gakonst gakonst added this to Foundry Sep 19, 2022
@gakonst gakonst moved this to Todo in Foundry Sep 19, 2022
@rkrasiuk rkrasiuk added Cmd-forge-test Command: forge test C-forge Command: forge labels Sep 20, 2022
@nine-december
Author

I am currently testing the invariant testing feature by solving Damn Vulnerable Defi using only invariants.

Another thing that could be improved is how the call sequence is logged.

For example, with the Naive-Receiver level (spoiler alert):

Ten consecutive calls are needed to crack this level and drain the receiver.

Getting 10 calls within the same run proved nearly impossible (I had to loosen the invariant up to 7 ether). When the logs are shown, the whole stack of failed txs is printed.

It may be complicated, but it would be great to show only the calls that differentially modify the invariant (think of it as the slope of a function, i.e. its derivative). That idea could also serve as feedback to the invariant engine, as another path to check whether the invariant can be broken (for example, when a recently performed call brings the invariant closer to breaking, try it again). This could be a config parameter.

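As a toy sketch of the "only show calls that differentially modify the invariant" idea (Python; `Call` and its `metric_after` field are hypothetical stand-ins for the invariant's numeric value after each call in the failing sequence):

```python
from dataclasses import dataclass

@dataclass
class Call:
    name: str
    metric_after: float  # hypothetical: invariant metric after this call ran

def differential_calls(initial: float, seq: list[Call], eps: float = 0.0) -> list[Call]:
    """Keep only the calls whose execution actually moved the invariant metric."""
    kept, prev = [], initial
    for call in seq:
        if abs(call.metric_after - prev) > eps:
            kept.append(call)
        prev = call.metric_after
    return kept

# Toy failing sequence: only the flashLoan calls move the metric.
seq = [Call("deposit", 10.0), Call("noop", 10.0), Call("flashLoan", 9.0),
       Call("noop", 9.0), Call("flashLoan", 8.0)]
print([c.name for c in differential_calls(10.0, seq)])  # → ['flashLoan', 'flashLoan']
```

Filtering this way would shrink the printed stack to the handful of calls that actually drive the invariant toward failure.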

@gakonst
Member

gakonst commented Sep 20, 2022

It may be complicated, but it would be great to show only the calls that differentially modify the invariant (think of it as the slope of a function, i.e. its derivative). That idea could also serve as feedback to the invariant engine, as another path to check whether the invariant can be broken (for example, when a recently performed call brings the invariant closer to breaking, try it again). This could be a config parameter.

cc @mds1 @transmissions11 have we thought about this before?

@mds1
Collaborator

mds1 commented Sep 23, 2022

I haven't thought of that idea in this context before. It's interesting; it sounds like converting the process of breaking invariants into an optimization problem solved with gradient descent, where the function you're "optimizing" is the invariant. My initial guess is that this may not be practical to implement, because (1) invariant expressions are not necessarily mathematical functions or differentiable curves, but boolean expressions, and (2) a single test may check more than one related invariant, so which do we analyze? However, I don't want to dismiss this too quickly, and would love for someone to tell me it is feasible, because I believe Harvey does convert code into cost functions that it tries to minimize (see figure 1 here), so maybe this is more doable than I think.

In general, Harvey has a couple of nice ideas (all described in that same paper) that we should consider implementing, though I haven't looked at that paper recently so am hazy on the details.

And I do agree progress bars or counters for fuzz/invariant runs would be nice, see also #585
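To make the "boolean invariant as a cost function" point concrete, here is a toy sketch in the Harvey style: a branch-distance cost for an invariant `balance >= threshold` (echoing the 7-ether threshold mentioned earlier), minimized by a greedy search. The state transition, threshold, and search loop are all illustrative assumptions, not Harvey's actual algorithm:

```python
THRESHOLD = 7.0  # toy invariant: receiver balance stays >= 7 (ether)

def step_balance(balance: float, action: int) -> float:
    # Toy state transition standing in for one fuzzed call
    # (action 1 drains one unit, e.g. a flash-loan fee; action 0 is a no-op).
    return balance - 1.0 if action == 1 else balance

def cost(balance: float) -> float:
    # Branch-distance style cost for "balance >= THRESHOLD":
    # positive while the invariant holds, zero once it is violated.
    return (balance - THRESHOLD + 1.0) if balance >= THRESHOLD else 0.0

def search(max_steps: int = 50) -> int:
    """Greedy search guided by the cost; returns the step at which the invariant breaks."""
    balance = 10.0
    for step in range(1, max_steps + 1):
        # Evaluate each candidate action and keep the one with the lowest cost.
        balance = min((step_balance(balance, a) for a in (0, 1)), key=cost)
        if balance < THRESHOLD:
            return step
    return -1  # budget exhausted without breaking the invariant

print(search())  # → 4 (balance 10 → 9 → 8 → 7 → 6 breaks balance >= 7)
```

This sidesteps objection (1): the cost is defined from the comparison operators in the boolean expression, not from differentiating anything, which is exactly the trick that makes Harvey-style guidance applicable to non-smooth predicates.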

@nine-december
Author

nine-december commented Sep 27, 2022

I will dig into that paper today so I can brainstorm here with the same theory in mind.

I come from a more scientific field (chemical process engineering). It may sound odd, and it may turn out to be impossible to implement, but how compounds are formed, and the thermodynamics behind the transformation of matter, could inform an approach.

The most feasible and common reaction within a system is the one that proceeds along the "minimum energy path" (MEP). It is modeled as a vector field, because each variable changes whenever another one does. Finding the MEP therefore amounts to optimizing over a vector field using partial derivatives, yielding the best compromise between feasibility in time (compared to the other feasible paths of the same system) and energy minimization. A quick overview can be found in the following paper.

The idea of a vector field of solutions in which the objectives must converge could also be applied to what we are discussing here, in order to build a game-changing coverage/corpus feedback engine.

@nine-december
Author

I read the paper about Harvey. It combines a typical grey-box fuzzer with cost optimization: minimizing a cost function (the same way a material system reduces its energy toward a more stable state) while assigning an energy factor to each instruction, as a strategy for weighting the alternatives so as to maximize the number of paths explored while minimizing effort. IMO it could be a nice approach.
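A toy sketch of that energy-weighting idea (Python; the corpus, the path labels, and the 1/frequency schedule are illustrative assumptions, not Harvey's exact formula):

```python
import random
from collections import Counter

def assign_energy(seed_paths: dict, path_counts: Counter) -> dict:
    """Give seeds exercising rare paths more energy (a simple power schedule)."""
    # energy ~ 1 / frequency of the coverage path the seed triggered
    return {seed: 1.0 / path_counts[path] for seed, path in seed_paths.items()}

def pick_seed(energies: dict, rng: random.Random) -> str:
    # Sample the next seed to mutate, proportionally to its energy.
    seeds = list(energies)
    weights = [energies[s] for s in seeds]
    return rng.choices(seeds, weights=weights, k=1)[0]

# Hypothetical corpus: seed -> coverage path it triggered.
seed_paths = {"s1": "pathA", "s2": "pathA", "s3": "pathB"}
path_counts = Counter(seed_paths.values())  # pathA seen twice, pathB once
energies = assign_energy(seed_paths, path_counts)
print(energies["s3"] > energies["s1"])  # → True: the rare-path seed gets more energy
```

The effect is the one described above: effort flows toward the less-explored paths, maximizing coverage per unit of fuzzing work.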

4 participants