
Judging #135

Open
Caglankaan opened this issue Nov 30, 2023 · 6 comments
@Caglankaan

Hi, I don't know if I'm misjudging this or not, but I don't think the bot race judging is fair.

[N-01] 2**<n> - 1 should be re-written as type(uint<n>).max:
    `384:              s_AddrToPoolIdData[univ3pool] = uint256(poolId) + 2 ** 255;`
    There is no such thing as uint255, and `2 ** 255` is not of the form `2**<n> - 1`.
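To make the distinction concrete, here is a quick Python sketch (illustrative only, not from the report): `2 ** 255` sets only the top bit of a 256-bit word, whereas `type(uint256).max` equals `2**256 - 1`, with every bit set.

```python
# 2**255 is a single flag bit (the MSB of a 256-bit word),
# not a max-value constant like type(uint256).max.
TOP_BIT = 2 ** 255
UINT256_MAX = 2 ** 256 - 1  # this is what type(uint256).max equals

assert TOP_BIT != UINT256_MAX
assert bin(TOP_BIT).count("1") == 1        # one bit set: a flag
assert bin(UINT256_MAX).count("1") == 256  # all 256 bits set: a max value
```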
[N-07] Add inline comments for unnamed variables -> no output is shown for this finding.
[N-08] Adding a return statement when the function defines a named return variable, is redundant:
    Since the flagged return is inside an if block, the named value has to be returned explicitly; you can't write just `return;` there. This issue should only be accepted for a return at the very end of the function.
[N-17] Consider moving msg.sender checks to a common authorization modifier:
    The flagged checks are not msg.sender checks.
[N-18] Consider returning a struct rather than having multiple return values:
    Defining a struct for 3 parameters is not a good option. I think there should be at least 4-5 return values to justify that.
[N-19] Consider splitting long calculations:
    What counts as "long"? Splitting those calculations will increase complexity.
[N-21] Consider using a struct rather than having many function input parameters:
     a lot of false negatives.
[N-22] Consider using descriptive constants when passing zero as a function argument:
    `return self.isLong(0)` is fine as-is; 0 is meaningful here on its own, like `address(0)`.
[N-23] Constants in comparisons should appear on the left side:
    You can't make the `=` vs `==` typo with `!=`, so the rule should only flag `==` comparisons.
[N-26] Contracts should have full test coverage:
    The contracts already have test coverage.
[N-32] Events that mark critical parameter changes should contain both the old and the new value:
    This is approve, so the old value is simply whatever was approved before; there is no need to include it.
[N-34] High cyclomatic complexity:
    https://github.com/code-423n4/2023-11-panoptic/blob/main/bot-report.md#n-34-high-cyclomatic-complexity
[N-35] Inconsistent method of specifying a floating pragma:
    The given examples do not use >= or <=.

[N-38] Large numeric literals should use underscores for readability:
    /// @audit 4096
    162:             return int24(int256((self >> (64 + legIndex * 48 + 36)) % 4096));
    4096 is not a large numeric literal.
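For context, the 4096 in that line is a bit-extraction idiom rather than a plain magic number: 4096 is `2**12`, so `% 4096` keeps the low 12 bits of the shifted value. A small Python sketch of the arithmetic (illustrative only; the example value is arbitrary):

```python
# 4096 == 2**12, so "% 4096" extracts the low 12 bits of a value,
# equivalent to masking with 0xFFF.
assert 4096 == 2 ** 12 == 1 << 12

value = 0xDEAD_BEEF  # arbitrary example value
assert value % 4096 == value & 0xFFF  # modulo by a power of two == bit mask
```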
[N-40] Libraries should be defined in separate files from their usage:
    All of them are already in separate files.
[N-43] Multiple address/ID mappings can be combined into a single mapping of an address/ID to a struct, for readability:
    Can't be combined with `    mapping(bytes32 positionKey => int256 baseFees0And1) internal s_accountFeesBase;`
    since its value type is int256 instead of uint256.
[N-46] NatSpec: Error declarations should have descriptions:
    Duplicate of [N-28] Custom error has no error details.
[N-48] NatSpec: File is missing NatSpec
    The file has NatSpec:

    // SPDX-License-Identifier: GPL-2.0-or-later
    pragma solidity ^0.8.0;

    /// @title Library of Constants used in Panoptic.
    /// @author Axicon Labs Limi

[N-53] Not using the named return variables anywhere in the function is confusing:
    Duplicate of [N-08].

[N-61] Style guide: Non-external/public function names should begin with an underscore:
    Misses a lot of cases; far too many false negatives.

[N-65] Unsafe conversion from unsigned to signed values
    Duplicate of [L-12].

[N-69] Use bit shifts in an immutable variable rather than long bit masks of a single bit, for readability:
    8:        uint256 internal constant FP96 = 0x1000000000000000000000000;
    It's not a bit mask; it's a fixed-point scale factor (2**96).
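Indeed, the constant can be checked numerically: `0x1000000000000000000000000` is `2**96`, a single set bit used as a fixed-point scaling constant, not a mask spanning a range of bits. A quick Python check (illustrative only):

```python
# FP96 is 2**96: exactly one set bit, i.e. a fixed-point scaling
# constant, not a multi-bit mask.
FP96 = 0x1000000000000000000000000
assert FP96 == 2 ** 96 == 1 << 96
assert bin(FP96).count("1") == 1  # exactly one bit set
```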

This is from the last bot race's winning bot: https://github.com/code-423n4/2023-11-panoptic/blob/main/bot-report.md
I didn't spend too much time on it, so it's possible I have mistakes here, but long story short, this report has roughly 33% false positives, and honestly a 33% false-positive rate looks like they brute-forced everything and pasted the results. There are also several issues I still don't understand:

[N-71] Variables need not be initialized to false
[N-72] Variables need not be initialized to zero

Are these really two different findings that should get 2 points? And are findings like:
[N-26] Contracts should have full test coverage
[N-13] Consider adding formal verification proofs
[G-10] Enable IR-based code generation
[N-39] Large or complicated code bases should implement invariant tests

really valid? I am not doing these cheap tricks with my bot, and I end up getting a low score because the judge thinks I don't find enough issues. One last thing: I am not sure whether the disputed section is counted or not, but if it is counted, I think that is also a huge mistake. A disputed finding is a statement like "other bots may say this is an issue, but actually it's not." Without even taking a look, I could write 250 different disputed issues and brute-force them into my report, like:
D-01: Other bots may think there is a signature replay attack in the contract; however, that is not the case.

Who can tell me this D-01 is not valid? Other bots may say yes, and 99.9% of the time it is not valid.

Can I get some answers, please? Thanks!

@IllIllI000


IllIllI000 commented Nov 30, 2023: I agree that there are quite a few false positives, but some of the ones you've listed are opinions. For the opinion ones, other bots report them too, so I have to as well, or else, like you, I miss out on points. For the false positives, I try to minimize them as much as possible, but I still make mistakes in the short time we have to review things. Sometimes I have to tweak a rule that I've marked as "never wrong", and that ends up introducing a bug into the rule, but I try to review even those the day after, and I always try not to make the same mistake twice (assuming I catch it). I've actually been working on most of these mistakes all day. I'm not sure if that helps, but that's what I see from my side. The more you point out these sorts of mistakes, the fewer of them you'll see in future races.

From the judging side, what would you like to see happen differently? My hope is bots-judging-bots is implemented soon, and that'll free up the judge to look more carefully at the ones other bots complain about.

edit: after going through them all, none of the duplicates mentioned are duplicates

@Caglankaan

I suggest there should be at least a list of rules. For example, "missing zero validation" is one issue; it doesn't matter if you write it as "missing zero check in constructor / in initializer / in setter" etc. as 15 different findings — it is still one issue and should be judged as one. As I mentioned, I can write 15 different titles from one detector. Disputes shouldn't count toward the points (if they do), and I also think there should be no points for "global issues" like invariant tests. I don't know if you guys agree.

@Caglankaan

Also, I am 100% sure every bot has quite a lot of false positives, but I think the judges are not penalizing them, so it ends up as: if a report has x gas issues, then it gets x points, which makes everything worse. To be honest, I think 40 issues with 0 false positives is much, much better than 80 issues with 20 false positives.

@IllIllI000

Who's going to spend the time and effort to maintain that list? Who has the power to decide which ones are split and which aren't? I think leaving it up to the bots, and then having the judges decide how to group them, is the least time-intensive way, and that's what's happening right now. The downside is that not all judges spend the time to group them, so it's hit-or-miss, and you just have to list all combinations to make sure you don't miss out.

I agree with you that there should be large penalties for mistakes, and I've mentioned it many times in the primary bot racing org issue and in the bot racing Discord channel, but a vocal group opposes it, and I'm essentially the only one asking for it. If more people than just me were regularly asking for it, maybe it would happen.

@Caglankaan

So you think that if I can split an issue into x pieces, I should do it? If the judges also agree with this, then yeah, I can triple my detector count tonight without any problem :D "`x += 1` is better", "`struct.x += 1` is better", "`array[index] += 1` is better", "`struct.array[index] += 1` is better" — let's go.

@IllIllI000

some of those save gas and some don't, so be careful =)
