
Duplicates labeled as partial (partial-25/50/75) decrease the duplicates' weight, but also reduce the primary finding's weight in the overall award calculation #145

Open
dontonka opened this issue Feb 2, 2024 · 70 comments


@dontonka

dontonka commented Feb 2, 2024

Recently, after performing well in the Zetachain contest, I was disappointed with my awards. I had done the calculation somewhat quickly at the end of Post-QA and was hoping for around 10k, but got only 7k. That got me thinking: how is this possible? So I did the math again, accurately this time, in the following spreadsheet --> zetachain calculation sheet, and confirmed that there was a problem, or at least a misunderstanding on my part that needed to be clarified, as my math followed the documentation.

After a long discussion with C4 staff in a private thread, we identified the source of the misunderstanding. C4 updated their documentation very recently, as follows: awards documentation.

Unfortunately, C4 staff and I didn't come to an agreement/understanding, hence I'm creating this issue so that it can be discussed with the whole C4 community.

Here is what doesn't make sense to me

We can see here that the logic behind the partial- labels only impacts the awards for partial findings; even though the pies vary, the awards stay the same.

Conclusion:
Only the award amounts for "partial" findings have been reduced, in line with expectations. The aim of this adjustment is to recalibrate the rewards allocated for these specific findings. Meanwhile, the awards for full-credit findings remain unchanged.

Short version

One image is worth a thousand words, so here is what the proper values should be. This is exactly the case I had in Zetachain: I had a High with 2 duplicates which were both at partial-25. What this means at a high level is that this is "almost" a unique High; the 2 other duplicates are classified partial-25, so the judge agrees that while they identify the same issue, they find very little of it, which is why they count for only 25% of a full duplicate. So they should be penalized (and they are at the moment, as expected), BUT the problem is that the primary is not getting those rewards back!! Instead the finding's pie is reduced, so it's like saying this High is worth less than another High with the same number of duplicates (without partials). In fact, this High almost becomes a sort of boosted unique Medium, with its pie at 4.86 (a unique Med pie is 3.9). Does everyone see how this doesn't make any sense, or is it just me?

The finding's pie CANNOT decrease: it's a High, and it is not worth less than another High with the same number of duplicates. By reducing the pie, the rewards the primary should have received are simply diluted among all the other wardens.
[image]
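As a concrete check, here is a minimal Python sketch of the current calculation as described in this thread (slice = severity * 0.9^(split-1) / split, a 1.3x multiplier for the report-selected submission, partial slices scaled by their factor). The function name and structure are mine; this is illustrative only, not the actual awardcalc script:

```python
# Minimal sketch of the CURRENT award calculation, as described in this
# thread. Assumptions: High severity weight 10, Med 3, a 1.3x multiplier
# for the submission selected for the report, and partial duplicates paid
# their fraction of a full slice. Not the real awardcalc script.

HIGH, MED = 10, 3

def current_slices(severity, credits, report_index=0):
    """credits: one partial factor (1.0, 0.75, 0.5 or 0.25) per duplicate."""
    split = len(credits)
    base = severity * 0.9 ** (split - 1) / split
    slices = [base * c for c in credits]
    slices[report_index] *= 1.3  # selected-for-report bonus
    return slices

# The Zetachain case: a High with two partial-25 duplicates.
slices = current_slices(HIGH, [1.0, 0.25, 0.25])
print(round(sum(slices), 2))  # 4.86 -- the whole "pie" for this High
print(round(MED * 1.3, 2))    # 3.9  -- a unique Medium's pie, for comparison
```

Under these assumptions, the pie shrinks from 13 (a unique High with the report bonus) to 4.86, which is the comparison made above.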

Long version

Naaa, the short version is enough and crystal clear.

@aliX40

aliX40 commented Feb 2, 2024

I completely agree, and I think that Code Arena's systems should be designed to encourage the discovery of unique findings and to motivate wardens to produce well-constructed reports. When a warden submits the only valid High or Medium vulnerability (with "valid" in this context meaning a 100%-rated issue) along with a high-quality report, I believe that, from Code Arena's perspective, such findings should be exceptionally rewarded. Moreover, wardens should be given additional incentives to craft exceptionally thorough reports and to evaluate the full scope of the vulnerability comprehensively.

@J4X-98

J4X-98 commented Feb 2, 2024

I totally agree with the author. In my opinion, these kinds of issues should maybe stay the same for the report but be split for rewards. I also just encountered a case where there was a very simple bug due to non-EIP-712 compliance, which was recognized by ~90 people who provided very low quality reports of it (a Med at most, more like Low severity). Three others and I were able to leverage that bug to steal funds and make it a High severity. Now all those 90 people got duplicated at 25%, making our High pretty much worthless in the process.

This functionality incentivizes people, as soon as they find a simple bug, to write a low quality report and leave it at that, since someone else will anyway spend the significant time to leverage the bug into a High severity, to which they will get duplicated.

To get back to the original idea, my recommendation would be to leave it as one issue for the report but split it into 2 for rewarding. This can easily be done when there are duplicate submissions where some are only Med/QA and some are High. Only the Highs would get bundled into one and duplicated (resulting in higher rewards for people who put in significant time to leverage a bug), and all the Meds would get bundled into a Med, not eating up the rewards of the Highs.

This would also remove the meta of just writing low-effort submissions for every small bug without ever describing a valid attack path.

@Minh-Trng

Minh-Trng commented Feb 2, 2024

To get back to the original idea, my recommendation would be to leave it as one issue for the report but split it into 2 for rewarding. [...]

This idea only works for this specific case. Now imagine 100 people put in the effort to report this as a High, and one guy is lazy and only reports the most obvious impact as a Med. He would get a solo Med for putting in less work.

I think the way it works now does make sense, even if it indeed does not reward putting effort into simple bugs. If all people find the same root cause of an issue, then each of their reports would have made the sponsor aware of its existence. And that's why the 100% finding only gets the credit of a finding with 3 duplicates: because 3 people found the root of the problem, even if 2 didn't manage to see the full impact.

However, I do see dontonka's point. It could also make sense to redistribute the reduction for partial findings to all duplicates that scored 100%.

@0xEVom

0xEVom commented Feb 2, 2024

Instead the finding's pie is reduced, so it's like saying this High is worth less then another High with the same numbers of duplicates (without partials)

Agree that this doesn't make a lot of sense; intuitively, a High with only partial-credit duplicates should be worth something between a solo High and a High with full-credit duplicates.

I was playing around with the reward formula and came up with a small change that I think could work:

slice = severityWeight * (0.9 ^ (totalCredit - 1)) * (sliceCredit / totalCredit)

Instead of counting the number of findings, we take the sum of the total credit awarded to all findings to calculate the pie. This means that a high with one duplicate is worth the same as one with two duplicates with 50% credit. That strikes me as reasonable.

And instead of dividing by the number of findings, we multiply by the finding's credit and divide by the total credit. This gives us the share of the credit for a given submission.

This is what the scoring would look like for the example above:

pie     sliceCredit   totalCredit   slice
11.38   1.3 (1)       1.5           8.22
11.38   0.25          1.5           1.58
11.38   0.25          1.5           1.58

The pie and first slice end up being quite large, but that seems reasonable considering there are only 1.5 "findings". The effect is less extreme if the other 2 findings have 50% credit:

pie     sliceCredit   totalCredit   slice
10.35   1.3 (1)       2             5.85
10.35   0.5           2             2.25
10.35   0.5           2             2.25

In this case, the first finding gets the same number of shares as if there were only one other duplicate, and the other two split the shares that would have gone to that duplicate.
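For anyone who wants to play with these numbers, here is a minimal Python sketch of the proposed formula (the 1.3 sliceCredit for the report-selected submission follows the tables above; the function name is mine):

```python
# Sketch of the credit-based formula proposed above:
#   slice = severityWeight * (0.9 ^ (totalCredit - 1)) * (sliceCredit / totalCredit)
# totalCredit counts the report-selected primary as 1; its sliceCredit is 1.3.

def proposed_slice(severity, slice_credit, total_credit):
    return severity * 0.9 ** (total_credit - 1) * slice_credit / total_credit

HIGH = 10

# High with two partial-25 duplicates (the first table):
total = 1 + 0.25 + 0.25
print(round(proposed_slice(HIGH, 1.3, total), 2))   # 8.22
print(round(proposed_slice(HIGH, 0.25, total), 2))  # 1.58

# Same High with two partial-50 duplicates (the second table):
total = 1 + 0.5 + 0.5
print(round(proposed_slice(HIGH, 1.3, total), 2))   # 5.85
print(round(proposed_slice(HIGH, 0.5, total), 2))   # 2.25
```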

I couldn't think of any bad behavior this would incentivize but I may be missing something, it would be great to hear what others think.

Because 3 people found the root of the problem, even if 2 didnt manage to see the full impact.

@Minh-Trng I can see what you're saying, but at the same time, the sponsor may have decided the finding is a non-issue, or the findings could have been rejected if it weren't for the high-quality report that correctly identified the highest impact. So putting in that extra effort is important, and it should be incentivized.

@GalloDaSballo

The suggestion effectively would allow for exploitation:

  • Create ideal submission
  • Create a group of partial submissions

Partial submissions would steal total rewards from other findings
And would credit them to the ideal submission

That's ultimately more gameable than what's available now.

The feedback of disappointment at having a partial award reduce the rewards of your finding is legitimate, but the suggested fix offers more downside than the current situation.

@0xEVom

0xEVom commented Feb 2, 2024

Could you walk me through that? I wouldn't think it does, since combined partial credit is treated the same as full credit.

I just tried out an example:

pie     sliceCredit   totalCredit   slice
10.35   1.3 (1)       2             5.85
10.35   1             2             4.5

And with partial findings:

pie     sliceCredit   totalCredit   slice
9.56    1.3 (1)       2.5           4.44
9.56    1             2.5           3.42
9.56    0.5           2.5           1.7

Shares to first + partial credit = 6.14
Shares to second = 3.42

I thought you were right, but in fact this is also currently exploitable with a normal duplicate, and I think it's just an edge case when findingCount == 2:

pie     weight   split   slice
9.56    1.3      3       3.51
9.56    1        3       2.7
9.56    1        3       2.7

Shares to first + fake dupe = 6.21
Shares to second = 2.7

Otherwise, any number of partial-credit findings amounts to their total combined weight (4 duplicates with 25% credit are paid the same as one normal duplicate), so it's no more (un-)profitable than submitting duplicates.
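To make the self-duplication check above reproducible, here is a small Python sketch comparing both formulas (helper names are mine; the 1.3 factor is the report bonus used in the tables):

```python
# Does submitting a fake duplicate of your own finding raise your combined
# payout? Compares the current split-based formula with the proposed
# credit-based one. Sketch only; names and structure are mine.

def proposed(severity, credits, report_index=0):
    """Proposed: pot diluted and split by total credit (the 1.3x report
    bonus is excluded from totalCredit and applied to one slice only)."""
    total = sum(credits)
    slices = [severity * 0.9 ** (total - 1) * c / total for c in credits]
    slices[report_index] *= 1.3
    return slices

def current(severity, credits, report_index=0):
    """Current: pot diluted and split by the raw number of findings."""
    split = len(credits)
    base = severity * 0.9 ** (split - 1) / split
    slices = [base * c for c in credits]
    slices[report_index] *= 1.3
    return slices

HIGH = 10

honest = proposed(HIGH, [1.0, 1.0])[0]                 # primary plus one real dupe
gamed = proposed(HIGH, [1.0, 1.0, 0.5])                # plus a fake partial-50
print(round(honest, 2), round(gamed[0] + gamed[2], 2)) # 5.85 vs 6.15

gamed_now = current(HIGH, [1.0, 1.0, 1.0])             # fake FULL dupe, today
print(round(gamed_now[0] + gamed_now[2], 2))           # 6.21: the edge case already exists
```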

@0xean

0xean commented Feb 2, 2024

tldr: We should reward unique information presented to a sponsor.

If we all agree (which I think we do) that a solo unique H has the most sponsor value, then it stands to reason that more duplicates, if they are in fact 100% duplicates, degrade the value of that report.

However, partial findings are typically marked as such because they didn't add the same value (or contain the same unique information) as a true (100%) duplicate and therefore the original H should be worth more to the sponsor because it represented unique information not present in other reports.

I understand concerns about the attack vector this opens up, but I do think there is probably a solution in the middle that might make this fairer.

@dontonka
Author

dontonka commented Feb 2, 2024

tldr: We should reward unique information presented to a sponsor. [...]
@0xean
The feedback of disappointment in having a partial award reduce the rewards of your finding is legitimate, the suggested fix offers more downside than the current situation
@GalloDaSballo

Well said, and exactly rephrasing my thoughts. Here are the properties/requirements the award calculation should aim to achieve:

  • A finding is worth the same weight (aka pie) as any other finding of the same severity with the same number of duplicates. If some dups are partial, that SHOULD NOT reduce its weight; it doesn't make sense, as the finding has the same value for the sponsor.
  • The primary is worth more to the sponsor than the partial dups, meaning the weight lost by the partial dups is transferred to the primary.

So my actual solution would be to adjust the math to reflect those requirements. What @0xEVom is proposing seems a bit fancier, and I would need to give it more thought; I was thinking of a simpler solution which simply transfers the pie lost by the partials to the primary.

Keep in mind that today the current formula already penalizes the primary with dups (on top of the issue I'm raising), as it reduces the finding's pie (pie * (0.9 ^ (split - 1)) / split) as if each partial dup were a full duplicate, which is already a negative impact for the primary. One could argue that a partial dup should count for less than 1 in the split of the above formula; maybe this is where @0xEVom is going.

So, to put this into a requirement, this would be the additional point:

  • A finding's weight is based not only on severity and the number of duplicates, BUT also on the value of those duplicates. So a High with 2 partial-25 dups should be worth more than a High with 2 full duplicates.

This additional requirement would make sense to me, as it reflects reality for the sponsor.

EDIT:

OK, I did the math to include the additional requirement, and it seems to work perfectly (I modified the Google sheet from my original post).

The finding's pie would be calculated as follows, severity being 10 for High and 3 for Medium:
(0.3*(severity*(0.9 ^ ((<#-split-partial-100> - 1)*1 + <#-split-partial-75>*0.75 + <#-split-partial-50>*0.5 + <#-split-partial-25>*0.25)) / (<#-split-partial-100>*1 + <#-split-partial-75>*0.75 + <#-split-partial-50>*0.5 + <#-split-partial-25>*0.25))) + (severity*(0.9 ^ ((<#-split-partial-100> - 1)*1 + <#-split-partial-75>*0.75 + <#-split-partial-50>*0.5 + <#-split-partial-25>*0.25)))

The warden's pie would be as follows if selected for report:
(0.3*(severity*(0.9 ^ ((<#-split-partial-100> - 1)*1 + <#-split-partial-75>*0.75 + <#-split-partial-50>*0.5 + <#-split-partial-25>*0.25)) / (<#-split-partial-100>*1 + <#-split-partial-75>*0.75 + <#-split-partial-50>*0.5 + <#-split-partial-25>*0.25))) + (severity*(0.9 ^ ((<#-split-partial-100> - 1)*1 + <#-split-partial-75>*0.75 + <#-split-partial-50>*0.5 + <#-split-partial-25>*0.25)) / (<#-split-partial-100>*1 + <#-split-partial-75>*0.75 + <#-split-partial-50>*0.5 + <#-split-partial-25>*0.25))

Which gives the following formula in my current sheet for High-191:

Finding's pie:
=(0.3*(10*(0.9 ^ ((D18-1)*1+E18*0.75+F18*0.5+G18*0.25)) / (D18*1+E18*0.75+F18*0.5+G18*0.25)))+(10*(0.9 ^ (E18*0.75+F18*0.5+G18*0.25)))

Warden's pie (mine, selected for report):
=(0.3*(10*(0.9 ^ ((D18-1)*1+E18*0.75+F18*0.5+G18*0.25)) / (D18*1+E18*0.75+F18*0.5+G18*0.25)))+(10*(0.9 ^ ((D18-1)*1+E18*0.75+F18*0.5+G18*0.25)) / (D18*1+E18*0.75+F18*0.5+G18*0.25))

[image]

So this reflects the 3 requirements indicated. Let's try to rationalize the numbers:

Finding's pie:

  • A High with 2 full dups would be worth: 10.35
  • A High (the current case) with 3 findings (1 primary, so full, and 2 at partial-25): 11.38
  • A unique High (no changes): 13

Warden's pie (for the warden selected for report):

  • A High with 2 full dups would be worth: 5.35
  • A High (the current case) with 3 findings (1 primary, so full, and 2 at partial-25): 8.22
  • A unique High (no changes): 13

Do those numbers make sense? I think so. I think this represents exactly how it should be.

So the formula in the doc would change to the following:

Med Risk Slice: 3 * (0.9 ^ ( ((split_100 - 1)*1) + (split_75*0.75) + (split_50*0.5) + (split_25*0.25) )) / ( (split_100*1) + (split_75*0.75) + (split_50*0.5) + (split_25*0.25) )
High Risk Slice: 10 * (0.9 ^ ( ((split_100 - 1)*1) + (split_75*0.75) + (split_50*0.5) + (split_25*0.25) )) / ( (split_100*1) + (split_75*0.75) + (split_50*0.5) + (split_25*0.25) )
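A minimal Python sketch of these proposed slice formulas (the 1.3x report-selection bonus is applied on top, as in the examples above; the function name is mine):

```python
# Sketch of the proposed partial-weighted slice. Both the 0.9 decrementer
# exponent and the divisor count partials at their partial factor instead
# of as full findings. Illustrative only, not the awardcalc script.

def partial_weighted_slice(severity, split_100, split_75=0, split_50=0, split_25=0):
    weight = split_100 * 1 + split_75 * 0.75 + split_50 * 0.5 + split_25 * 0.25
    return severity * 0.9 ** (weight - 1) / weight

HIGH, MED = 10, 3

# The Zetachain High-191: one primary at 100%, two duplicates at partial-25.
primary = partial_weighted_slice(HIGH, split_100=1, split_25=2) * 1.3
print(round(primary, 2))  # 8.22 -- primary, with the 30% report bonus

# A solo High is unchanged:
print(round(partial_weighted_slice(HIGH, split_100=1) * 1.3, 2))  # 13.0
```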

@0xA5DF

0xA5DF commented Feb 2, 2024

I agree this should be changed; I brought this up about a year ago and, if I'm not mistaken, the response to my help request was that this was the intended design.

To put it simply, I think the rationale behind C4's formula (when no partials are involved) is to first dilute the pot of the bug by 0.9^(n-1) and then split that pot between all wardens who found the bug.
The problem with partials is that a partial dupe reduces the pot of the bug by more than 10%.

I agree that the fix should be the formula suggested above by @dontonka. There are 2 points here:

  • The split parameter should account for partials (i.e. if we have 1 finding with full credit and 2 with 25% credit, then the split should be 1.5 rather than 3).
  • The exponent of 0.9 should also count partials at their partial weight, meaning in the case above it'd be 0.9^0.5 rather than 0.9^2. I think this one is less significant (it has less impact on full-credit reports, and there might be arguments for not changing it), but I think it should be changed as well.
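The "more than 10%" point is easy to verify with a quick sketch of the current formula (report bonus ignored for simplicity; the helper name is mine):

```python
# Under the current formula, adding one FULL dupe shrinks the bug's total
# pot by exactly 10%, but adding one partial-25 dupe shrinks it by ~44%,
# because the split divisor counts the partial as a whole finding while
# its slice is paid out at only 25%. Sketch; report bonus ignored.

def pot(severity, credits):
    split = len(credits)
    base = severity * 0.9 ** (split - 1) / split
    return sum(base * c for c in credits)

HIGH = 10
print(pot(HIGH, [1.0]))        # 10.0  -- solo High
print(pot(HIGH, [1.0, 1.0]))   # 9.0   -- one full dupe: the expected -10%
print(pot(HIGH, [1.0, 0.25]))  # 5.625 -- one partial-25 dupe: -43.75%
```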

@dontonka
Author

dontonka commented Feb 2, 2024

I agree this should be changed; I brought this up about a year ago and, if I'm not mistaken, the response to my help request was that this was the intended design.

Interesting. What was the resolution of the ticket exactly, @0xA5DF?

@0xA5DF

0xA5DF commented Feb 2, 2024

I can't find the ticket in my email for some reason, but as I said, the help team said this is not a bug. I should have opened an org issue back then, but I failed to do so.

PS
I've updated my estimator script to reflect this change after I received that response.

@dontonka
Author

dontonka commented Feb 2, 2024

Yes, this is really annoying: so many wardens didn't receive their proper share of the awards. But hey, things are not perfect; we need to learn from mistakes, but first recognize our mistakes/errors, fix them, and improve as an organisation.

@0xEVom

0xEVom commented Feb 2, 2024

Yup that's exactly what I meant! Then the slices just need to be scaled by the partial credit to get the final number of shares for a given finding.

@0xA5DF note that there's also a partial-75 label now

@0xA5DF

0xA5DF commented Feb 2, 2024

so many wardens didn't receive their proper share of award

I don't think there's any wrongdoing on C4's end here; an org issue is the right way to change this kind of stuff.

@0xA5DF note that there's also a partial-75 label now

True, I need to update the script 🙂

@dontonka
Author

dontonka commented Feb 2, 2024

I don't think there's any wrongdoing on C4's end here; an org issue is the right way to change this kind of stuff.

With all respect, I have to partially disagree. This is actually a bug in the current formula: we all agree that the current formula reducing the finding's pie doesn't make any sense, should not have been there in the first place, and is not aligned with the interpretation wardens would take from the documentation. Either it was not audited, or it was not understood by the C4 developer in charge of the implementation. Since this tool is a black box, I personally do think there is wrongdoing on C4's part (not tested properly, not audited, etc.) for the negative impact this has generated for the wardens affected, since this has been present ever since partials were implemented.

I do agree that the right way to change it is by involving the community with this post, which is in progress.

@sockdrawermoney
Member

sockdrawermoney commented Feb 2, 2024

Since this conversation is now happening in two places, I'm going to bring it all here.

dontonka:

As I just posted on the issue, I do believe C4 should be accountable in some way for this. You guys provide the infra for the service, but unfortunately this was clearly a bug in the award tool when accounting for partials; it's not a documentation bug. You cannot just hide behind the fact that it has been there for a while and nobody complained, so it's fine.

Why would we ask you to bring this discussion here if we wanted to hide something? There are simply tradeoffs. At the point of implementing the award calc for partials, there was a choice of how the awardcalc was written.

The title of this issue is exactly how it was written intentionally: to decrease the value of the identified bug based on others spotting the root cause, without compensating the duplicates equally, based on their not having fully rationalized the severity/impact.

I get that there are folks who don't like how it was written and that's a completely fair and reasonable viewpoint. And my viewpoint may be a dated one that is good to revisit which is why the team asked you to open this issue and why we thanked you for doing so.

But "pie" and "slice" are anachronisms anyway, which remain from the way the award calc script was originally written. There are other areas which equally do not fit within the analogy of pies and slices: what does a "30% bonus" mean in the pie-and-slice context, for example?

The awardcalc script used to be open source. It was completely refactored a year or so ago and was intended to be made open. It will be again.

dontonka:

In a bounty this would be a High severity. Granted, moving forward this can be changed. Don't get me wrong, C4 brings a huge net positive, but I don't think this should simply be a quick change in the award tool after the next supreme-court kind of thing, and then we move on. Personally, since I raised the alarm before the ZetaChain awards were sent, but C4 staff continued the process as usual,

In fact, multiple members of staff looked deeply into this and engaged with you on the matter before moving ahead. I took it seriously when you raised it with me, pulled in multiple staff members, and this is where the analysis landed, along with the conclusion that if it makes sense to revisit the tradeoff, we will do so.

I think I should receive some retribution, and not from the wardens that have been paid, but from the C4 DAO revenue, period; at least the amount I should have received from the ZetaChain contest (not accounting for anything I might have deserved for bringing this to light, and that's fine, we don't do it all for money; at least I will have contributed a bit to C4's success).

I'm not sure how you can argue that you should receive retribution without arguing that everyone else impacted by this tradeoff should as well.

However if there were to be sufficient consensus among @code-423n4/judges that this is a bug and not an intentional tradeoff, then I think characterizing it as a vulnerability is perfectly reasonable and in such case the original identifier (@0xA5DF) should be awarded.

@dontonka
Author

dontonka commented Feb 2, 2024

However if there were to be sufficient consensus among @code-423n4/judges that this is a bug and not an intentional tradeoff, then I think characterizing it as a vulnerability is perfectly reasonable and in such case the original identifier (@0xA5DF) should be awarded.

Well, that would be a duplicate finding; mine would actually be the primary, and his a dup (probably partial-50), as his attempt didn't result in any changes.

Anyway, thanks for replying. I've said my part; I have nothing more to add.

I'm not sure how you can argue that you should receive retribution without arguing that everyone else impacted by this tradeoff should as well.

That would be ideal, but unfortunately unrealistic, hence why I haven't proposed it. That being said, giving retribution to both @0xA5DF and myself would be adequate too, definitely. Now, if you would disqualify me for being too rude / honest / straight to the point, that's fine.

@sockdrawermoney
Member

I don't feel like you've been rude at all. I expected you to bring your position here (it's what was asked, and what you've been thanked for multiple times), and of course it's rational to defend your opinion.

@dontonka
Author

dontonka commented Feb 2, 2024

However if there were to be sufficient consensus among @code-423n4/judges that this is a bug and not an intentional tradeoff, then I think characterizing it as a vulnerability is perfectly reasonable and in such case the original identifier (@0xA5DF) should be awarded.

Personally, I don't understand the intentionality in the current behavior, and no judge in his right mind would either, nor would any warden. And this is confirmed in all the posts above: everyone agrees with my point that reducing the finding's pie IS wrong.

I challenge any judge/warden who thinks the contrary to explain to me, with clear arguments, where the intentionality of the current behavior comes from and why it exists: for what purpose would it have been implemented that way, and to achieve which goals? It is clearly unfair for the primary warden (though it does do its job for the partial dups, penalizing them, at least XD), that is crystal clear, and the sponsor ends up paying less to the specific warden who deserved it, instead diluting it among all the other wardens, which doesn't seem fair for the sponsor either; of course that's a minimal impact, as they pay the same pot in the end.

@Minh-Trng

Personally, I don't understand the intentionality in the current behavior, and no judge in his right mind would either, nor would any warden. [...]

I did provide reasoning above for why the current solution is not illogical, which has been reiterated by sockdrawer. Even though I do agree that your suggestion is better, you shouldn't speak from a point of absolute truth about subjective matters.

I also find it a bit audacious to argue for a retrospective reimbursement just because you made a suggestion for improving the current solution.

@sockdrawermoney
Member

sockdrawermoney commented Feb 2, 2024

Recognize that before this implementation of awardcalc, what we now call partials would get full credit if a judge assessed them as a functional duplicate!

You can likely find many cases from 18 months ago where a warden was awarded a full High or Med as a duplicate without rationalizing the severity or describing the impact.

So the current implementation has to be understood as a transition away from that, toward something that did not overly award partials.

The assumption was always that partial duplicates were indeed duplicates per the decrementer (otherwise they presumably wouldn't be called duplicates at all?)

I think it's very productive to have a conversation about

  1. whether that assumption is correct (there are plenty of good arguments here that it should not be the assumption), and
  2. what the algo should be going forward, given we have much more experience and precedent with awarding duplicates than when we first added them.

@dontonka
Author

dontonka commented Feb 2, 2024

@Minh-Trng

I did provide reasoning above for why the current solution is not illogical, which has been reiterated by sockdrawer. Even though I do agree that your suggestion is better, you shouldn't speak from a point of absolute truth about subjective matters.

I read it, but it's not clear, as you were mainly replying to another warden. My proposal is not to dedup and separate into different issues, which is what Sherlock does, I believe; I much prefer the partial approach, but the correct one XD.

In software, a product owner writes requirement A, which is given to the engineer, who implements the code to reflect the requirement; afterward, a demo is done to prove to the product owner that the feature really does A, and then the feature is deemed accepted and rolled out to production.
Explain to me like I'm 5 years old: what requirement is the current behavior of reducing the finding's pie achieving? And if there was an intentional trade-off, what was the constraint that introduced such a trade-off?

I also find it a bit audacious to argue for a retrospective reimbursement just because you made a suggestion for improving the current solution.

Definitely, let's not focus on that thought; that's another topic, I am sorry. The root problem is that I don't feel I'm suggesting something, I'm fixing something broken: a bug.

@dontonka
Author

dontonka commented Feb 2, 2024

The assumption was always that partial duplicates were indeed duplicates per the decrementer (otherwise they presumably wouldn't be called duplicates at all?)

Yes, agreed; this is where the suggestion comes in (see point 2). So there are 2 items in play here:

  1. The bug: the finding's weight (aka pie, or whatever you call it) being reduced when some of the finding's dups are partials. This is what this post is mostly about, and I consider it a bug, not a suggestion. The solution here would be to transfer that reduction evenly among those at 100%. This definitely complicates the calculation a bit, but brings overall much fairer awards for wardens.

  2. The suggestion: consider the partial dups' weight when establishing the finding's weight. This makes sense to me, but is more of a suggestion that needs to be discussed, and would end up with the following. The high-level idea is to challenge the assumption you stated (a partial dup being a full decrementer): they are duplicates, yes, but partial duplicates, so consequently they should weigh less; or, said differently (and to echo what @0xean stated), such a finding would be more valuable to the sponsor compared to another finding with the same number of dups but without partials.

```
Med Risk Slice:  3 * (0.9 ^ ( ((split_100 - 1)*1) + (split_75*0.75) + (split_50*0.5) + (split_25*0.25) )) / ( (split_100*1) + (split_75*0.75) + (split_50*0.5) + (split_25*0.25) )
High Risk Slice: 10 * (0.9 ^ ( ((split_100 - 1)*1) + (split_75*0.75) + (split_50*0.5) + (split_25*0.25) )) / ( (split_100*1) + (split_75*0.75) + (split_50*0.5) + (split_25*0.25) )
```
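As a sketch of how this suggested formula could be computed (the function and parameter names here are mine, not C4's; this illustrates the proposal, not an official implementation):

```python
def suggested_slice(weight, split_100, split_75=0, split_50=0, split_25=0):
    """Per-full-credit slice under the suggested formula: partial
    duplicates count fractionally both in the 0.9 decrementer exponent
    and in the divisor that splits the pie."""
    fractional = 0.75 * split_75 + 0.5 * split_50 + 0.25 * split_25
    decrementer = (split_100 - 1) + fractional  # exponent of the 0.9 decay
    divisor = split_100 + fractional            # effective share count
    return weight * (0.9 ** decrementer) / divisor

# High with 3 full-credit duplicates: 10 * 0.9^2 / 3 ≈ 2.7
print(suggested_slice(10, 3))
# High with 1 full-credit primary and 2 partial-25 dups: 10 * 0.9^0.5 / 1.5 ≈ 6.32
print(suggested_slice(10, 1, split_25=2))
```

Under this variant, a High whose only duplicates are two partial-25s keeps most of its value for the primary (≈6.32 per full-credit share) instead of being decremented as if it had two full duplicates (≈2.7).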


dontonka commented Feb 2, 2024

Recognize that before this implementation of awardcalc, what we now call partials would get full credit if a judge assessed them as a functional duplicate!

Yes, that's clear; there was a before and after, and it's fine, it's an evolution. Now, once we evolved with partials, it seems the behavior was understood one way but worked a different way for an edge case (the primary warden being negatively impacted as the pie is reduced),
and since the tool is a black box and wardens fully trusted C4, and since the results mostly look good, this went unnoticed for this long.

The part that you might not understand @sockdrawermoney is that right now partials are only doing their job for the dups carrying the partial label, penalizing them; but for the primary (those at 100%), their slice remains the same as before the partial world was introduced! So for them partials don't bring any added value, which is the bug, but you guys say it's an intentional trade-off.

@dontonka
Copy link
Author

dontonka commented Feb 2, 2024

to decrease the value of the identified bug based on others spotting the root cause, without compensating the duplicate equally based on not having fully rationalized the severity/impact

Yes, and it does it well for the partial dup. While it's true that they compensate the primary more then the partial dup, where is the intentionality that a finding's weight is also decreased because some dup are partial? Yes it is decreased as it has more duplicates, but having partial duplicates should not reduce the finding's weight further (which is what happen today, the bug), but it should at least remains constant, or even better (the suggestion) even increase.

@Minh-Trng

In software, the product owner writes requirement A, which is given to the engineer, who implements the code to reflect the requirement; afterward a demo is done to prove to the product owner that the feature really does A, then the feature is deemed accepted and rolled out to production.

Well, that reference to software engineering really doesn't help your argument. You have the actual product owner here with you, telling you that the implementation does what it was supposed to do (and even how the requirement came to be) and that it is in fact not a bug.

Explain to me like I'm five years old: what requirement is the current behavior of reducing the finding's pie achieving?

all explained clearly by sockdrawer above


dontonka commented Feb 3, 2024

all explained clearly by sockdrawer above

The only thing I see is the following, which is not clear to me, unfortunately. It doesn't explain why there is a reduction in the finding's pie (at least I don't see it), what the trade-offs are, or what the choices were. That is what I'm asking:

  1. Requirement: what was the initial requirement, and was it intentional to reduce the finding's pie as it behaves today? That is a yes-or-no question.
  2. If it was intentional, why, and what is the reasoning behind it? Such behavior brings only half of the work of partials to the table (penalizing the partial dups, but missing the other half, which would be to redistribute this reduction evenly among the full dups).

There are simply tradeoffs. At the point of implementation of the award calc for partials, there was the choice of how the awardcalc was written.
The title of this issue is exactly how it was written intentionally—to decrease the value of the identified bug based on others spotting the root cause, without compensating the duplicate equally based on not having fully rationalized the severity/impact.


Minh-Trng commented Feb 3, 2024

The title of this issue is exactly how it was written intentionally—to decrease the value of the identified bug based on others spotting the root cause, without compensating the duplicate equally based on not having fully rationalized the severity/impact.

The assumption always was that partial duplicates were indeed duplicates per the decrementer (otherwise they presumably wouldn't be called duplicates at all?)

This answers the question regarding the initial requirement very well.

Such behavior brings only half of the work of partials to the table (penalizing the partial dups, but missing the other half, which would be to redistribute this reduction evenly among the full dups)

That's just you designing different requirements according to what you would like the behavior to be. It was not part of what was originally needed; can you please stop pretending as if it were? The whole concept of the finding's pie needing to remain equally large for the same number of dups is just something you thought of to argue for a change. It never was part of the original specification.

Most people (including me) already agree that your suggestion would be an improvement to what is currently done; isn't that enough? For some reason you seem very stubbornly focused on declaring it a bug, even if the product owner (who knows best) tells you it isn't.


dontonka commented Feb 3, 2024

Alright, I'm definitely stubborn, sorry. This will be my last post regarding this argument with you @Minh-Trng, as proving whether it's a bug is not the end goal; the goal is to improve the award situation for the sake of all C4 wardens, and I think we all agree on that.

The way I understand this is the following. Let's break it down.
Scenario: High with 3 duplicates (two partial-25) as shown in the documentation and as I had in my contest.

Requirement:
to decrease the value of the identified bug based on others spotting the root cause, without compensating the duplicate equally based on not having fully rationalized the severity/impact. The assumption always was that partial duplicates were indeed duplicates per the decrementer (otherwise they presumably wouldn't be called duplicates at all?)

What I understand from this is that partials are duplicates like any other, so they should act as full decrementers. That's not what they are doing! They act as full decrementers on steroids, as they decrease the finding's weight more than normal duplicates (from 8.91 to 4.86 in the current scenario, which is almost a 50% cut of the finding, so your High becomes almost a Medium!). Expected? Why? That's what I want to clarify with this argumentation.

The current calculation otherwise achieves the remaining part of the requirement, without compensating the duplicate equally based on not having fully rationalized the severity/impact, as you can see: the primary gets a higher slice than the partial-25 dups.

I don't have anything more to add; I've argued and spent enough time explaining my position on this issue as a whole.

I wish all of the participants of this sensitive discussion a wonderful weekend!


0xA5DF commented Feb 3, 2024

They act as full decrementers on steroids, as they decrease the finding's weight more than normal duplicates

They indeed decrease the finding's weight, but at the end of the day the warden with full credit gets the same share as if the partial dupes were dupes with full credit.
So it makes sense to say that the partial label is just a penalty on the partial submission for not fully identifying the severity, and the funds from that penalty go back to the HMs pool rather than to this specific bug's pool.

It indeed makes more sense for the funds to go back to the bug's pool, but it's not as if the current state doesn't make sense at all. I don't think you can argue it's so unreasonable that we should make retroactive changes.

@GalloDaSballo

I'd like to see the math for a unique finding in the same example, what would be the Pie and Slice if the finding is unique?


dontonka commented Feb 6, 2024

what would be the Pie and Slice if the finding is unique?

For a unique High, both the pie and the slice will be the same, which is 13: the full award (10) plus the 30% bonus for being selected for report.


dontonka commented Feb 6, 2024

You see how the current behavior is kind of frustrating: having a High with 2 partial-25s means it's almost a unique High in a way (depending on why the partial-25s were assigned; I understand this can be contextual and it's debatable whether those duplicates identified the root cause, etc.), but this gives the primary warden a resulting slice of 3.51 instead of the 13 it would have received had it been truly unique. There should probably be a middle ground.

But the most fundamental property I can't reason about is why this decreases the finding's pie; the pie is simply not a property that partials should affect at all, only the slices within the finding.


H4X0R1NG commented Feb 6, 2024

I totally agree with the concerns @dontonka raised here and I totally understand how it feels to lose a chunk of the rewards you should get after sleepless nights of work as a result of this bug, because I myself got affected by this same exact issue.

I strongly believe this is a bug. I am yet to see anybody providing a reasonable justification for this bug being treated as anything but a bug that needs to be fixed.

If I report a high bug with two duplicates that are both 25-partial, and @dontonka reports another high with two duplicates that have no partials, why in the world should @dontonka get the same reward share as me? That makes absolutely zero sense.

@Minh-Trng

The problem is that you are so tunnelvisioned and frustrated about your Zetachain results, that all you can think about is this particular example and how you "should have gotten more money"

What he is raising is a legit issue that needs to be fixed. That's a fact. Him being frustrated about this is 1. his right and 2. it doesn't change the fact that this needs to be fixed. I'm not sure what you're arguing about here. I'm also pretty sure if you were directly affected by this you wouldn't have said something like this.

The goal of the current solution is to make sure worse reports don't get as much money as you do.

The proposed solution will make sure that this is the outcome as well.

If a high severity bug is reported with 2 non-partial duplicates, the finding pot would be 8.1, each finding would get 2.7 shares. That's 8.1 shares total (2.7 x 3) (excluding best report bonus)
If a high severity bug is reported with 2x 25-partial duplicates, the finding pot should remain 8.1, not get downgraded to 4.86. The main finding should get 6.75, and the two 25-partial duplicates should get 0.675 each. (6.75 + 0.675 x 2) = 8.1. That should still be 8.1 shares total.

So worse reports will surely not get as much money as the primary report, and the primary report gets the treatment it deserves. Issue fixed.
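A minimal sketch of that redistribution (function and variable names are mine; it assumes the portion slashed from each partial's equal slice flows back evenly to the full-credit duplicates):

```python
def shares_keeping_pie(weight, credits):
    """credits holds one partial weight per duplicate (1.0 = full credit).
    The pie depends only on the duplicate count; each partial keeps
    credit * equal_slice, and the slashed remainder is split evenly
    among the full-credit duplicates."""
    n = len(credits)
    pie = weight * 0.9 ** (n - 1)          # unchanged by partials
    equal = pie / n
    slices = [c * equal for c in credits]
    slashed = pie - sum(slices)            # what the partials gave up
    fulls = [i for i, c in enumerate(credits) if c == 1.0]
    for i in fulls:
        slices[i] += slashed / len(fulls)
    return pie, slices

# High with a full-credit primary and two partial-25 duplicates
pie, slices = shares_keeping_pie(10, [1.0, 0.25, 0.25])
print(round(pie, 2), [round(s, 3) for s in slices])  # 8.1 [6.75, 0.675, 0.675]
```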


dontonka commented Feb 6, 2024

I totally agree with the concerns @dontonka raised here and I totally understand how it feels to lose a chunk of the rewards you should get after sleepless nights of work as a result of this bug, because I myself got affected by this same exact issue.

I strongly believe this is a bug. I am yet to see anybody providing a reasonable justification for this bug being treated as anything but a bug that needs to be fixed.

If I report a high bug with two duplicates that are both 25-partial, and @dontonka reports another high with two duplicates that have no partials, why in the world should @dontonka get the same reward share as me? That makes absolutely zero sense.

Thanks, that means a lot @H4X0R1NG. Chainlight lost 18k USD in zkSync if my calculations are accurate. Personally, the only way I can ensure this doesn't happen to me again is simple: not participating in any more C4 contests (I understand that the wardens here couldn't care less and might actually be happy about that) until this is changed, period. The C4 organization can do whatever they think is best, and I hope it will be aligned with my concerns (for their good and mine), but if not, too bad.


Minh-Trng commented Feb 6, 2024

I'm not sure what you're arguing about here

Since you are new to this conversation, I think you may not have caught up on all the context necessary to understand.

The proposed solution will make sure that this is the outcome as well.

That reaffirms what I just said about missing context; the line you quoted is not even remotely used as an argument against the proposed solution.

I'm also pretty sure if you were directly affected by this you wouldn't have said something like this.

I am sure I would have, because it's based on reason, not personal bias. If you look further up, you can see that I have already confirmed multiple times (again, context...) that I believe a change would be an improvement. So if anything, I should argue for it, no?

In fact, there is a very good chance that I have been affected by this before


dontonka commented Feb 6, 2024

OK, I think wardens' participation has been enough already to raise these concerns to the C4 DAO, so let's not waste any more warden energy arguing with each other and having ego battles, etc.; it's not constructive.

Let's all grab a 🍿 and see what the C4 DAO does with all this.

Have a wonderful day ❤️ !

@Simon-Busch

@dontonka

I'm not really following tbh. Those partials are indeed penalized as they should be right now. Redistributing their lost share to other findings (which could benefit them, granted) is not that much the point; if their initial issue was a Medium and got upgraded to High because of the main warden, we can argue they get an increase there too. I think the main question is the following:

Should the lost finding's shares be redistributed evenly to the warden(s) on that specific finding (those with no partial), OR
be redistributed to all other findings (as it is today)?

"Redistributing their lost share to other findings [...] is not that much the point." In reality, this seems to be a central aspect for you.

Given that partial findings already undergo an award reduction through the partial label attribution system (25%, 50%, 75%) based on the existing formula, theoretically redistributing the reward further inflates the value for others. This results in an even more pronounced disparity between a standard satisfactory finding and a partial finding. In essence, a partial finding's value is diminished twice through:

  1. The application of the partial percentage.
  2. The theoretical award redistribution to other findings (which proportionally re-decreases the value of the partial finding).

This potential double penalty means that the requirements of the documentation would no longer be met, specifically the guideline that states, "25%, 50%, or 75% of the shares of a satisfactory submission in the duplicate set."

It's important to clarify a possible misconception that a partial finding is nearly inconsequential and of minimal quality.
On the contrary, were a finding of such low quality, it would be classified as "unsatisfactory." Thus, a partial finding is, in fact, a valid contribution with certain limitations; it still identifies a vulnerability and is still counted as 1 valid finding in a group of duplicates.

@GalloDaSballo

I'd like to see the math for a unique finding in the same example, what would be the Pie and Slice if the finding is unique?

In the case of a solo finding, the slice is equivalent to the pie, as it's shared between 1 finding.

```
High Risk Slice: 10 * (0.9 ^ (split - 1)) / split
=> 10 * (0.9 ^ (1 - 1)) / 1 = 10
```

To which we add the bonus for selected for report (30%) = 13

@H4X0R1NG

why in the world should @dontonka get the same reward share as me? That makes absolutely zero sense.

This issue has been addressed and explained in detail here: #145 (comment)


0xEVom commented Feb 7, 2024

The theoretical award redistribution to other findings (which proportionally re-decreases the value of the partial finding)

This isn't actually the case. The proposed solution scales the shares of every finding, partial or not, such that the "pie" remains proportional to that of findings with no partial-credit duplicates. The size of the pie for a finding with two partial-25 duplicates would be between that of a finding with no duplicates, and that of a finding with one full-credit duplicate.

Here's again how the shares are currently distributed in that scenario:

| pie | weight | split | slice |
| --- | --- | --- | --- |
| 4.86 | 1.3 | 3 | 3.51 |
| 4.86 | 0.25 | 3 | 0.675 |
| 4.86 | 0.25 | 3 | 0.675 |

3.51 / 0.675 = 5.2

And here's what it would look like with the proposed approach:

| pie | sliceCredit | totalCredit | slice |
| --- | --- | --- | --- |
| 11.38 | 1.3 (1) | 1.5 | 8.22 |
| 11.38 | 0.25 | 1.5 | 1.58 |
| 11.38 | 0.25 | 1.5 | 1.58 |

8.22 / 1.58 = 5.2

The ratio remains the same and partial duplicates even stand to gain since they get more than twice the number of total shares.

To me, this seems an overall more sensible approach and a natural extension of the current reward calculation.
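One way to reproduce the numbers above (my inference from the table, not an official formula): treat totalCredit, the sum of the partial weights with the selected report counted as 1, as the effective duplicate count in the decrementer, so partial duplicates shrink the pie less than full ones:

```python
def scaled_slice(weight, credit, total_credit, selected=False):
    """Effective duplicate count = total_credit; the 1.3 factor is the
    selected-for-report bonus applied on top of a credit of 1."""
    base = weight * 0.9 ** (total_credit - 1)
    return base * credit * (1.3 if selected else 1.0) / total_credit

# High, primary + two partial-25 dups: total_credit = 1 + 0.25 + 0.25 = 1.5
primary = scaled_slice(10, 1.0, 1.5, selected=True)
partial = scaled_slice(10, 0.25, 1.5)
print(round(primary, 2), round(partial, 2))  # 8.22 1.58, as in the table
```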


sockdrawermoney commented Feb 7, 2024

I believe a better approach than the current implementation is desirable.

However I actually do not think this is a desirable outcome:

partial duplicates even stand to gain since they get more than twice the number of total shares

Nor do I think a single issue identified by multiple people but only argued by one person for sufficient impact is the same as a true solo finding.

A desirable outcome, in my view is:

  1. partial findings do NOT get the upside of preserving the value of the pie by treating it as if the issue was only found by the primary findings (e.g. all duplicates, partial or otherwise, should be considered full duplicates of the partial for purposes of assigning the decrementer count)
  2. the value of primary finding(s) is reduced by occurrences of partial duplicates -- but less than it is now, and ideally in appropriate ratio to the partials


dontonka commented Feb 7, 2024

Let's be clear on one item here; it might not be well received, but I'll say it anyway. The people most impacted by the award system are the wardens, lookouts and judges, not the CEO or the engineer who builds the tool itself (of course they are indirectly affected if the business goes down should something really bad be implemented), so please keep that in mind: those stakeholders can sometimes be disconnected from reality, as they are not the ones participating and spending time auditing the code, fighting in PostQA and finally getting the rewards. I've seen it multiple times in my career: higher management wants to push their ideas onto the engineers down the line, while the real experts are the engineers doing the day-to-day tasks, who know better how to design the appropriate solution, and that usually doesn't end well. Higher management needs to delegate and trust their experts.

@Simon-Busch

Thus, a partial finding is, in fact, a valid contribution, with certain limitations, it still identifies a vulnerability and it is still counted as 1 valid finding in a group of duplicates.

Agreed. That is not the main culprit for me; the resulting decrease in the finding's value is.

This potential double penalty means that the requirements of the documentation would no longer be met, specifically the guideline that states

Let's not tunnel-vision ourselves on the current system, which we want to improve. Instead, let's build basic requirements that everyone is happy with, and once the new engine is built, let's compare it with the previous one and see if we are all happy with the new engine.

@sockdrawermoney

  1. partial findings do NOT get the upside of preserving the value of the pie by treating it as if the issue was only found by the primary findings (e.g. all duplicates, partial or otherwise, should be considered full duplicates of the partial for purposes of assigning the decrementer count)

I'm not a native English speaker, so if you could please explain your thoughts in the simplest terms, that would be very appreciated, maybe also with a concrete example. I don't understand this point, unfortunately.

  1. the value of primary finding(s) is reduced by occurrences of partial duplicates -- but less than it is now, and ideally in appropriate ratio to the partials

This is the main culprit for me. Walk me through your reasoning: why would a High with 3 duplicates (2 with partials) be worth less than another High with 3 duplicates (but with no partials)? Here we are not considering the wardens' shares at all yet, just the value of a finding relative to others. This is what I can't understand, and it is the most basic requirement regarding partials that we need to agree upon. The fact that you acknowledge you want to improve it (but less than they are now, and ideally in appropriate ratio to the partials) is already a good start 😃.


dontonka commented Feb 11, 2024

I just listened to the first part of the C4 office hours with @bytes032 and @GalloDaSballo, and I would like to respond to their question. Btw, thanks @bytes032 for aligning with me that this needs to be addressed ASAP.

Scenario
High with 25 duplicates: 1 primary + 24 dup at partial-25
High pie (or value, or whatever you want to call it relative to other findings): 10 * (0.9 ^ (25 - 1)) ≈ 0.8; with the 30% bonus: 0.8 * 1.3 = 1.04

TODAY
Each of the 24 duplicates would get: 0.008
Primary (also selected for report): 0.0416

The issue, again: this reduces the value of the finding (which doesn't make any sense): (0.008 * 24) + 0.0416 = 0.2336

That's a 78% pie reduction. More partials, more pie reduction.

FUTURE (🙏 )
Each of the 24 duplicates would get: 0.008
Primary (also selected for report): 0.848

That represents the proper value of the finding: (0.008 * 24) + 0.848 = 1.04

This is simple math. I'm not sure, @GalloDaSballo, how you think someone could game this; there is nothing to game 😄, you guys seem to overcomplicate the current issue for no good reason. The finding's value is not even impacted by whether its duplicates are partial or not; it's the same calculation as today. The change that needs to happen is in the wardens' slice calculation.

The only way this can be gamed already exists today: spamming duplicates for a finding, which decreases its value. Partial duplicates (even at partial-25) already reduce the pie as if they were full duplicates, and there is a reason for that, as discussed earlier (a partial is still a duplicate and acts as a full decrementer); I agree with that, and it's really not the main issue here.

But tell me, who in their right mind would create partials or even duplicates on purpose? The motivation for backstage wardens to dedup invalid duplicates already exists today; pushing for some duplicates to be partial instead of full would follow the same motivation, nothing new here.

ALL the wardens would intuitively understand the award as I have described it, NOT how it works today, which is another proof that this is harmless: just calculate the wardens' slices properly and we are done here.

@Simon-Busch please feel free to validate the numbers I provided up there, so the information is not only assessed by me.
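The scenario above can be checked with a few lines of Python (a sketch; 1.3 is the selected-for-report bonus, and the small differences from the quoted numbers come from rounding the pie to 0.8):

```python
weight, n, partials = 10, 25, 24
pie = weight * 0.9 ** (n - 1)       # ≈ 0.798, the finding's base value
finding_value = pie * 1.3           # ≈ 1.04 with the 30% report bonus

# TODAY: every slice is pie * credit / n, so the partials' penalty
# simply evaporates from the finding's pie
today_partial = pie * 0.25 / n      # ≈ 0.008
today_primary = pie * 1.3 / n       # ≈ 0.0415
today_total = partials * today_partial + today_primary
reduction = 1 - today_total / finding_value
print(round(today_total, 4), round(reduction * 100))  # ≈ 0.2329, a 78% reduction

# PROPOSED: partials keep their slices and the slashed remainder
# returns to the primary, preserving the finding's value
future_primary = finding_value - partials * today_partial   # ≈ 0.846
print(round(future_primary + partials * today_partial, 2))  # 1.04, pie preserved
```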


0xEVom commented Feb 12, 2024

@sockdrawermoney

Gotcha. It's true that partial duplicates reducing the pie by less than a total duplicate (a byproduct of the proposed solution) doesn't make as much sense intuitively as the pie being distributed in its entirety (what we're trying to address).

Maybe the following is closer to what we're looking for:

slice = severityWeight * (0.9 ^ (findingCount - 1)) * (sliceCredit / totalCredit)

Where slice is the amount of shares awarded to a given submission (without taking into account the 30% bonus for selected report), findingCount the number of duplicates, sliceCredit the partial weighting and totalCredit the sum of all partial weights.

This ensures that all duplicates, partial or not, decrease the pie by the same amount, and distributes all shares in the pie among duplicates in the appropriate ratio.

This is what the same example would look like again:

| pie | sliceCredit | totalCredit | slice |
| --- | --- | --- | --- |
| 9.72 | 1.3 (1) | 1.5 | 7.02 |
| 9.72 | 0.25 | 1.5 | 1.35 |
| 9.72 | 0.25 | 1.5 | 1.35 |

However I actually do not think this is a desirable outcome:

partial duplicates even stand to gain since they get more than twice the number of total shares

I can see what you're saying. Even with this new formula the partial-25 duplicates still get 1.35 / 2.70 = 50% of the original reward, which is more than the intended 25%.

This seems undesirable if the goal of the partial labels is to penalize submissions that provide less value, but I also see another way of looking at this.

If we shift from an intention of imposing penalties to one of more appropriately distributing rewards for a given finding among all its duplicates, this starts to look more logical. They're still getting significantly less than if they were full credit duplicates, and the main issue gets the lion's share.

All this does is redistribute the "slashed shares" among all duplicates, according to their partial weighting. Allowing some of these to flow back to the slashed dups seems fairer than distributing them only among full-weight findings, and also fairer than the present scenario where they are simply lost.

@GalloDaSballo

As discussed here:
https://twitter.com/code4rena/status/1756604948345942125

This change may allow stealing other findings' rewards and funneling them to the main report

I'd like to see the math of a finding with [0, N] partials as a means to see if that's the case or if the formula prevents it


0xEVom commented Feb 12, 2024

@GalloDaSballo it would be helpful if you could provide a specific example of the scenario you have in mind.

The suggestion is to take the pie for a given number of duplicates and distribute it among all findings according to their partial weighting (25%–100%).

Here's an example for a high severity finding with 3 other duplicates and [0, 3] 25% partials, again using the formula

slice = severityWeight * (0.9 ^ (findingCount - 1)) * (sliceCredit / totalCredit)

| pie | findingCount | sliceCredit | totalCredit | slice |
| --- | --- | --- | --- | --- |
| 7.84 | 4 | 1.3 (1) | 4 | 2.37 |
| 7.84 | 4 | 1 | 4 | 1.82 |
| 7.84 | 4 | 1 | 4 | 1.82 |
| 7.84 | 4 | 1 | 4 | 1.82 |

| pie | findingCount | sliceCredit | totalCredit | slice |
| --- | --- | --- | --- | --- |
| 7.02 | 5 | 1.3 (1) | 4.25 | 2.01 |
| 7.02 | 5 | 1 | 4.25 | 1.54 |
| 7.02 | 5 | 1 | 4.25 | 1.54 |
| 7.02 | 5 | 1 | 4.25 | 1.54 |
| 7.02 | 5 | 0.25 | 4.25 | 0.39 |

| pie | findingCount | sliceCredit | totalCredit | slice |
| --- | --- | --- | --- | --- |
| 6.30 | 6 | 1.3 (1) | 4.5 | 1.71 |
| 6.30 | 6 | 1 | 4.5 | 1.31 |
| 6.30 | 6 | 1 | 4.5 | 1.31 |
| 6.30 | 6 | 1 | 4.5 | 1.31 |
| 6.30 | 6 | 0.25 | 4.5 | 0.33 |
| 6.30 | 6 | 0.25 | 4.5 | 0.33 |

| pie | findingCount | sliceCredit | totalCredit | slice |
| --- | --- | --- | --- | --- |
| 5.65 | 7 | 1.3 (1) | 4.75 | 1.45 |
| 5.65 | 7 | 1 | 4.75 | 1.12 |
| 5.65 | 7 | 1 | 4.75 | 1.12 |
| 5.65 | 7 | 1 | 4.75 | 1.12 |
| 5.65 | 7 | 0.25 | 4.75 | 0.28 |
| 5.65 | 7 | 0.25 | 4.75 | 0.28 |
| 5.65 | 7 | 0.25 | 4.75 | 0.28 |
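These tables can be regenerated with a short script (a sketch; the function name and argument layout are mine). Note that totalCredit counts the selected report as 1, with the 1.3 bonus applied only to its own slice:

```python
def slices(weight, fulls, partials, partial_credit=0.25):
    """fulls includes the selected report (listed first); returns the
    rounded slices: selected, then full-credit dups, then partials."""
    n = fulls + partials                     # findingCount
    total_credit = fulls + partials * partial_credit
    base = weight * 0.9 ** (n - 1)           # pie before the report bonus
    out = []
    for i in range(n):
        credit = 1.0 if i < fulls else partial_credit
        bonus = 1.3 if i == 0 else 1.0       # selected-for-report bonus
        out.append(round(base * credit * bonus / total_credit, 2))
    return out

for p in range(4):                           # 0 to 3 partial-25 duplicates
    print(slices(10, 4, p))
# first row printed: [2.37, 1.82, 1.82, 1.82], matching the table above
```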


dontonka commented Feb 12, 2024

Here you go, @0xEVom is proving this with facts, ty. So it ends up as follows: duplicates (even partials) continue acting as full decrementers, but the pie's value is no longer reduced and awards are distributed properly among the finding's wardens.

```
pie         = severityWeight * (0.9 ^ (findingCount - 1))
sliceCredit = 1 for a full dup, 0.75 for partial-75, 0.5 for partial-50, 0.25 for partial-25
totalCredit = sum of all sliceCredits on the finding
bonusSlice  = (pie / totalCredit) * 0.3
slice       = (pie * sliceCredit / totalCredit) + (warden selected for report ? bonusSlice : 0)
```

@dontonka

Hello, anyone here? Knock knock. C4 staff really care about their wardens; it's amazing. Keep it up!


0xA5DF commented Feb 16, 2024

Just some personal advice: I don't think this bitter tone will do you much of a service in this space.
You can say the same thing in a much more respectful manner, e.g.:

Now that we've been through this whole discussion can C4 staff please comment on what steps will be taken to address this issue?
This issue is very important to me and I think it should be important for C4 as well, as having a formula which I consider unfair deeply impacts my motivation to participate in contests and I'm afraid that would impact other wardens as well.

@sockdrawermoney

@dontonka I'm sorry to hear you ran out of popcorn before the previews were over.

I'm awaiting a solution that sees clear consensus support from a few wardens AND at least one judge (ideally several) before it's considered a candidate for implementation.

We're moving in the right direction and I respect the sense of urgency to improve something we all agree can and should be improved, but I'm not in a rush to force a decision here.

@dontonka
Copy link
Author

dontonka commented Feb 16, 2024

@dontonka I'm sorry to hear you ran out of popcorn before the previews were over.

the value of primary finding(s) is reduced by occurrences of partial duplicates -- but less than it is now, and ideally in appropriate ratio to the partials
This is the main culprit for me. Walk me through your reasoning: why would a High with 3 duplicates (2 with partials) be worth less than another High with 3 duplicates (but with no partials)? Here we are not considering the wardens' shares at all yet, just the value of a finding relative to others.

Oh my man, I thought Wendy's had accepted your application 😄. Yes, indeed I ran out, I was too hungry, but I just went to the counter to grab another one 🍿 so I can complete the entire movie. I'm still waiting for you to walk me through the thought process behind the requirement you mention above; this is just for my own curiosity and will help me understand the intentionality of the current behavior.

I'm awaiting a solution that sees clear consensus support from a few wardens AND at least one judge (ideally several) before it's considered a candidate for implementation.

Yes, that makes total sense. And since only 2 judges have been involved in this matter, let's not involve more, as they seem pretty busy already. So let's do something simple: I'm going to call out ALL the wardens who posted a comment here, @Minh-Trng, @0xEVom, @0xA5DF, @H4X0R1NG, @aliX40, @J4X-98, and the 2 judges @0xean and @GalloDaSballo, and the action I'm requesting is simple: go over my post above (four posts above this one) and put an ❤️ if you agree or 👎 if you disagree with the new formula, which addresses all the concerns raised in this discussion. If we get a majority of wardens and both judges on the ❤️ side, we are good to go; otherwise it's back to square one, and the judges/wardens in disagreement need to explain their reasoning.

@dontonka

@GalloDaSballo good morning. I can see you are in disagreement with the proposal; can you walk us through your counter-argument please? I thought the fact that you ❤️'d the @0xEVom post indicated the contrary, but I'm happy to understand your perspective further.

@dontonka

Closing this ticket; while there was hope, this is clearly not going anywhere. The judges involved in this discussion don't care to follow up; C4 staff don't care much either, as they don't really intervene to push their judges to give an active opinion and follow up so this can be concluded; the CEO hasn't followed up on my question asked 3 weeks ago; and finally, not even the wardens participating here took the effort to give a ❤️ or 👎 on the post with the final formula, which means they also don't care much in the end. BTW, we are all very busy, but that doesn't mean you don't have the time to spend a few minutes in a window of 2-3 weeks.

@sockdrawermoney

@dontonka No need to close the issue. As much as there is consensus agreement that an improvement is needed, no recommendation has yet been made without concerns being voiced.

If others feel this is worth addressing sooner, it'll emerge. I think people are quite busy with a few audits though :)

@Simon-Busch

Hello everyone,

After thorough analysis, we have compared the existing formula with a new formula to handle partials, guided by @0xEVom's suggestion.

As a conclusion, we are confident that this revised formula is viable and addresses @GalloDaSballo's concerns effectively.

We deeply appreciate everyone's contribution and the diverse viewpoints shared during this discussion. Your input has been really valuable.

We plan to release an updated version of our algorithm in the upcoming weeks.

See the analysis below:

Case study with dummy data

Scenarios

  • scenario_1:
    • Solo High
    • Group of 4 high dupes
  • scenario_2:
    • Solo High
    • Group of 5 high dupes
    • 1 partial 25
    • 1 selected
    • 3 satisfactory
  • scenario_3:
    • Solo High
    • Group of 6 high dupes
    • 2 partial 25
    • 1 selected
    • 3 satisfactory
  • scenario_4:
    • Solo High
    • Group of 7 high dupes
    • 3 partial 25
    • 1 selected
    • 3 satisfactory
  • scenario_5:
    • Solo High
    • Group of 8 high dupes
    • 4 partial 25
    • 1 selected
    • 3 satisfactory
  • scenario_6:
    • Solo High
    • Group of 9 high dupes
    • 5 partial 25
    • 1 selected
    • 3 satisfactory
  • scenario_7:
    • Solo High
    • Group of 14 high dupes
    • 5 partial 25
    • 5 partial 50
    • 1 selected
    • 3 satisfactory
  • scenario_8:
    • Solo High
    • Group of 19 high dupes
    • 5 partial 25
    • 5 partial 50
    • 5 partial 75
    • 1 selected
    • 3 satisfactory

Formulas

Using the current formula

Using the new formula

What changed?

  • The pie is no longer affected by partials; it only shrinks with the size of the duplicate group.
  • The slices are defined as follows: severityWeight * (0.9 ^ (findingCount - 1)) * (sliceCredit / totalCredit)
  • Slice credit =
    • 1.3 for the selected report
    • 1 for a satisfactory finding
    • 0.25, 0.50, 0.75 based on the partial label
  • Total credit = sum of all slice credits
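
The slice rule above can be sketched in a few lines. This is an illustrative reconstruction that reproduces the "new formula" scenario tables (note the pie keeps the 1.3x selected bonus in its total, i.e. pie = severityWeight * 0.9^(n-1) * (n + 0.3) / n); names are mine, not C4's actual implementation.

```python
# Hypothetical sketch of the new award split; not C4's production code.
CREDIT = {"selected": 1.3, "satisfactory": 1.0,
          "partial-25": 0.25, "partial-50": 0.5, "partial-75": 0.75}

def pie(severity_weight, labels):
    # The pie shrinks only with the duplicate count, never with partials.
    n = len(labels)
    return severity_weight * 0.9 ** (n - 1) * (n + 0.3) / n

def slices(severity_weight, labels):
    # Each duplicate gets a pro-rata share of the pie based on its credit.
    p = pie(severity_weight, labels)
    total_credit = sum(CREDIT[l] for l in labels)
    return [p * CREDIT[l] / total_credit for l in labels]

# scenario_2's H-01 group: 1 selected, 3 satisfactory, 1 partial-25 (High, weight 10).
group = ["selected"] + ["satisfactory"] * 3 + ["partial-25"]
print(round(pie(10, group), 2))                  # 6.95, as in the table below
print([round(s, 2) for s in slices(10, group)])  # [1.99, 1.53, 1.53, 1.53, 0.38]
```

The partial slice is reduced, but since the slices always sum to the full pie, the forfeited credit stays inside the group instead of shrinking the finding's weight.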

Previous formula breakdown

test_warden_data_scenario_1

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3119.49 USDC
warden_c H-01 7.84 4 2.37 2 568.53 USDC
warden_a H-01 7.84 4 1.82 1 437.33 USDC
warden_b H-01 7.84 4 1.82 1 437.33 USDC
warden_d H-01 7.84 4 1.82 1 437.33 USDC

test_warden_data_scenario_2

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3426.37 USDC
warden_c H-01 5.97 5 1.71 2 449.61 USDC
warden_a H-01 5.97 5 1.31 1 345.85 USDC
warden_b H-01 5.97 5 1.31 1 345.85 USDC
warden_d H-01 5.97 5 1.31 1 345.85 USDC
warden_e H-01 5.97 5 0.33 0.25 86.46 USDC

test_warden_data_scenario_3

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3667.36 USDC
warden_c H-01 4.72 6 1.28 2 360.92 USDC
warden_a H-01 4.72 6 0.98 1 277.63 USDC
warden_b H-01 4.72 6 0.98 1 277.63 USDC
warden_d H-01 4.72 6 0.98 1 277.63 USDC
warden_e H-01 4.72 6 0.25 0.25 69.41 USDC
warden_f H-01 4.72 6 0.25 0.25 69.41 USDC

test_warden_data_scenario_4

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3861.24 USDC
warden_c H-01 3.83 7 0.99 2 293.15 USDC
warden_a H-01 3.83 7 0.76 1 225.50 USDC
warden_b H-01 3.83 7 0.76 1 225.50 USDC
warden_d H-01 3.83 7 0.76 1 225.50 USDC
warden_e H-01 3.83 7 0.19 0.25 56.37 USDC
warden_f H-01 3.83 7 0.19 0.25 56.37 USDC
warden_g H-01 3.83 7 0.19 0.25 56.37 USDC

test_warden_data_scenario_5

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 4020.11 USDC
warden_c H-01 3.17 8 0.78 2 240.35 USDC
warden_a H-01 3.17 8 0.60 1 184.89 USDC
warden_b H-01 3.17 8 0.60 1 184.89 USDC
warden_d H-01 3.17 8 0.60 1 184.89 USDC
warden_e H-01 3.17 8 0.15 0.25 46.22 USDC
warden_f H-01 3.17 8 0.15 0.25 46.22 USDC
warden_g H-01 3.17 8 0.15 0.25 46.22 USDC
warden_h H-01 3.17 8 0.15 0.25 46.22 USDC

test_warden_data_scenario_6

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 4152.15 USDC
warden_c H-01 2.65 9 0.62 2 198.60 USDC
warden_a H-01 2.65 9 0.48 1 152.77 USDC
warden_b H-01 2.65 9 0.48 1 152.77 USDC
warden_d H-01 2.65 9 0.48 1 152.77 USDC
warden_e H-01 2.65 9 0.12 0.25 38.19 USDC
warden_f H-01 2.65 9 0.12 0.25 38.19 USDC
warden_g H-01 2.65 9 0.12 0.25 38.19 USDC
warden_h H-01 2.65 9 0.12 0.25 38.19 USDC
warden_i H-01 2.65 9 0.12 0.25 38.19 USDC

test_warden_data_scenario_7

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 4494.67 USDC
warden_c H-01 1.46 14 0.24 2 81.61 USDC
warden_a H-01 1.46 14 0.18 1 62.77 USDC
warden_b H-01 1.46 14 0.18 1 62.77 USDC
warden_d H-01 1.46 14 0.18 1 62.77 USDC
warden_j H-01 1.46 14 0.09 0.5 31.39 USDC
warden_k H-01 1.46 14 0.09 0.5 31.39 USDC
warden_l H-01 1.46 14 0.09 0.5 31.39 USDC
warden_m H-01 1.46 14 0.09 0.5 31.39 USDC
warden_m H-01 1.46 14 0.09 0.5 31.39 USDC
warden_e H-01 1.46 14 0.05 0.25 15.69 USDC
warden_f H-01 1.46 14 0.05 0.25 15.69 USDC
warden_g H-01 1.46 14 0.05 0.25 15.69 USDC
warden_h H-01 1.46 14 0.05 0.25 15.69 USDC
warden_i H-01 1.46 14 0.05 0.25 15.69 USDC

test_warden_data_scenario_8

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 4665.46 USDC
warden_c H-01 0.93 19 0.10 2 36.86 USDC
warden_a H-01 0.93 19 0.08 1 28.35 USDC
warden_b H-01 0.93 19 0.08 1 28.35 USDC
warden_d H-01 0.93 19 0.08 1 28.35 USDC
warden_n H-01 0.93 19 0.06 0.75 21.26 USDC
warden_o H-01 0.93 19 0.06 0.75 21.26 USDC
warden_p H-01 0.93 19 0.06 0.75 21.26 USDC
warden_q H-01 0.93 19 0.06 0.75 21.26 USDC
warden_r H-01 0.93 19 0.06 0.75 21.26 USDC
warden_j H-01 0.93 19 0.04 0.5 14.18 USDC
warden_k H-01 0.93 19 0.04 0.5 14.18 USDC
warden_l H-01 0.93 19 0.04 0.5 14.18 USDC
warden_m H-01 0.93 19 0.04 0.5 14.18 USDC
warden_m H-01 0.93 19 0.04 0.5 14.18 USDC
warden_e H-01 0.93 19 0.02 0.25 7.09 USDC
warden_f H-01 0.93 19 0.02 0.25 7.09 USDC
warden_g H-01 0.93 19 0.02 0.25 7.09 USDC
warden_h H-01 0.93 19 0.02 0.25 7.09 USDC
warden_i H-01 0.93 19 0.02 0.25 7.09 USDC

New formula breakdown

test_warden_data_scenario_1

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3119.49 USDC
warden_c H-01 7.84 4 2.37 2 568.53 USDC
warden_a H-01 7.84 4 1.82 1 437.33 USDC
warden_b H-01 7.84 4 1.82 1 437.33 USDC
warden_d H-01 7.84 4 1.82 1 437.33 USDC

test_warden_data_scenario_2

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3257.38 USDC
warden_c H-01 6.95 5 1.99 2 497.89 USDC
warden_a H-01 6.95 5 1.53 1 382.99 USDC
warden_b H-01 6.95 5 1.53 1 382.99 USDC
warden_d H-01 6.95 5 1.53 1 382.99 USDC
warden_e H-01 6.95 5 0.38 0.25 95.75 USDC

test_warden_data_scenario_3

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3385.39 USDC
warden_c H-01 6.20 6 1.68 2 437.29 USDC
warden_a H-01 6.20 6 1.29 1 336.38 USDC
warden_b H-01 6.20 6 1.29 1 336.38 USDC
warden_d H-01 6.20 6 1.29 1 336.38 USDC
warden_e H-01 6.20 6 0.32 0.25 84.09 USDC
warden_f H-01 6.20 6 0.32 0.25 84.09 USDC

test_warden_data_scenario_4

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3505.52 USDC
warden_c H-01 5.54 7 1.43 2 384.72 USDC
warden_a H-01 5.54 7 1.10 1 295.94 USDC
warden_b H-01 5.54 7 1.10 1 295.94 USDC
warden_d H-01 5.54 7 1.10 1 295.94 USDC
warden_e H-01 5.54 7 0.27 0.25 73.98 USDC
warden_f H-01 5.54 7 0.27 0.25 73.98 USDC
warden_g H-01 5.54 7 0.27 0.25 73.98 USDC

test_warden_data_scenario_5

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3618.68 USDC
warden_c H-01 4.96 8 1.22 2 338.81 USDC
warden_a H-01 4.96 8 0.94 1 260.63 USDC
warden_b H-01 4.96 8 0.94 1 260.63 USDC
warden_d H-01 4.96 8 0.94 1 260.63 USDC
warden_e H-01 4.96 8 0.23 0.25 65.16 USDC
warden_f H-01 4.96 8 0.23 0.25 65.16 USDC
warden_g H-01 4.96 8 0.23 0.25 65.16 USDC
warden_h H-01 4.96 8 0.23 0.25 65.16 USDC

test_warden_data_scenario_6

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 3725.32 USDC
warden_c H-01 4.45 9 1.04 2 298.57 USDC
warden_a H-01 4.45 9 0.80 1 229.67 USDC
warden_b H-01 4.45 9 0.80 1 229.67 USDC
warden_d H-01 4.45 9 0.80 1 229.67 USDC
warden_e H-01 4.45 9 0.20 0.25 57.42 USDC
warden_f H-01 4.45 9 0.20 0.25 57.42 USDC
warden_g H-01 4.45 9 0.20 0.25 57.42 USDC
warden_h H-01 4.45 9 0.20 0.25 57.42 USDC
warden_i H-01 4.45 9 0.20 0.25 57.42 USDC

test_warden_data_scenario_7

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 4167.65 USDC
warden_c H-01 2.60 14 0.42 2 134.42 USDC
warden_a H-01 2.60 14 0.32 1 103.40 USDC
warden_b H-01 2.60 14 0.32 1 103.40 USDC
warden_d H-01 2.60 14 0.32 1 103.40 USDC
warden_j H-01 2.60 14 0.16 0.5 51.70 USDC
warden_k H-01 2.60 14 0.16 0.5 51.70 USDC
warden_l H-01 2.60 14 0.16 0.5 51.70 USDC
warden_m H-01 2.60 14 0.16 0.5 51.70 USDC
warden_m H-01 2.60 14 0.16 0.5 51.70 USDC
warden_e H-01 2.60 14 0.08 0.25 25.85 USDC
warden_f H-01 2.60 14 0.08 0.25 25.85 USDC
warden_g H-01 2.60 14 0.08 0.25 25.85 USDC
warden_h H-01 2.60 14 0.08 0.25 25.85 USDC
warden_i H-01 2.60 14 0.08 0.25 25.85 USDC

test_warden_data_scenario_8

Handle Finding Pie Split Slice Score Award
warden_2 H-02 13.00 1 13.00 2 4475.15 USDC
warden_c H-01 1.52 19 0.17 2 57.82 USDC
warden_a H-01 1.52 19 0.13 1 44.48 USDC
warden_b H-01 1.52 19 0.13 1 44.48 USDC
warden_d H-01 1.52 19 0.13 1 44.48 USDC
warden_n H-01 1.52 19 0.10 0.75 33.36 USDC
warden_o H-01 1.52 19 0.10 0.75 33.36 USDC
warden_p H-01 1.52 19 0.10 0.75 33.36 USDC
warden_q H-01 1.52 19 0.10 0.75 33.36 USDC
warden_r H-01 1.52 19 0.10 0.75 33.36 USDC
warden_j H-01 1.52 19 0.06 0.5 22.24 USDC
warden_k H-01 1.52 19 0.06 0.5 22.24 USDC
warden_l H-01 1.52 19 0.06 0.5 22.24 USDC
warden_m H-01 1.52 19 0.06 0.5 22.24 USDC
warden_m H-01 1.52 19 0.06 0.5 22.24 USDC
warden_e H-01 1.52 19 0.03 0.25 11.12 USDC
warden_f H-01 1.52 19 0.03 0.25 11.12 USDC
warden_g H-01 1.52 19 0.03 0.25 11.12 USDC
warden_h H-01 1.52 19 0.03 0.25 11.12 USDC
warden_i H-01 1.52 19 0.03 0.25 11.12 USDC

Case Study ZetaChain

Previous formula breakdown

Handle Finding Pie Split Slice Score Award
dontonka M-34 2.07 3 1.05 2 512.17 USDC
dontonka H-05 8.91 3 2.70 1 1313.25 USDC
dontonka H-13 4.86 3 3.51 2 1707.23 USDC
dontonka M-07 2.35 4 0.55 1 265.93 USDC
dontonka M-28 3.90 1 3.90 2 1896.92 USDC
dontonka M-14 3.11 2 1.35 1 656.63 USDC
dontonka M-31 2.35 4 0.55 1 265.93 USDC
dontonka Q-16 32.35 1 32.35 3 380.02 USDC
ChristiansWhoHack H-14 10.35 2 5.85 2 2845.38 USDC
ChristiansWhoHack H-01 8.91 3 2.70 1 1313.25 USDC
ChristiansWhoHack H-03 8.91 3 2.70 1 1313.25 USDC
ChristiansWhoHack M-27 3.90 1 3.90 2 1896.92 USDC
ChristiansWhoHack H-12 13.00 1 13.00 2 6323.07 USDC
ChristiansWhoHack M-21 3.11 2 1.35 1 656.63 USDC
ChristiansWhoHack H-05 8.91 3 2.70 1 1313.25 USDC
ChristiansWhoHack M-26 3.11 2 1.76 2 853.61 USDC
ChristiansWhoHack M-25 3.90 1 3.90 2 1896.92 USDC
ChristiansWhoHack H-11 13.00 1 13.00 2 6323.07 USDC
ChristiansWhoHack M-24 3.90 1 3.90 2 1896.92 USDC
jayjonah8 M-33 3.90 1 3.90 2 1896.92 USDC
p0wd3r M-32 3.11 2 1.76 2 853.61 USDC
p0wd3r M-31 2.35 4 0.71 2 345.71 USDC
p0wd3r M-06 2.09 5 0.39 1 191.47 USDC
p0wd3r M-08 2.67 3 0.81 1 393.98 USDC
p0wd3r M-22 2.67 3 0.81 1 393.98 USDC
p0wd3r M-30 3.90 1 3.90 2 1896.92 USDC
p0wd3r H-08 8.91 3 2.70 1 1313.25 USDC
p0wd3r M-13 3.11 2 1.35 1 656.63 USDC
p0wd3r Q-18 32.47 11 2.95 1 34.67 USDC
zhaojie M-34 2.07 3 0.20 0.25 98.49 USDC
zhaojie H-01 8.91 3 2.70 1 1313.25 USDC
zhaojie M-29 2.67 3 1.05 2 512.17 USDC
zhaojie M-07 2.35 4 0.55 1 265.93 USDC
zhaojie M-05 2.07 3 0.20 0.25 98.49 USDC
zhaojie M-16 3.11 2 1.35 1 656.63 USDC
zhaojie M-08 2.67 3 0.81 1 393.98 USDC
likeTheWind H-02 8.91 3 2.70 1 1313.25 USDC
kuprum H-08 8.91 3 2.70 1 1313.25 USDC
oakcobalt H-10 13.00 1 13.00 2 6323.07 USDC
oakcobalt M-06 2.09 5 0.51 2 248.91 USDC
oakcobalt M-07 2.35 4 0.55 1 265.93 USDC
oakcobalt M-31 2.35 4 0.55 1 265.93 USDC
oakcobalt M-03 3.90 1 3.90 2 1896.92 USDC
oakcobalt M-11 3.11 2 1.35 1 656.63 USDC
oakcobalt Q-06 32.47 11 2.95 1 34.67 USDC
berndartmueller M-23 3.90 1 3.90 2 1896.92 USDC
berndartmueller M-26 3.11 2 1.35 1 656.63 USDC
berndartmueller M-22 2.67 3 1.05 2 512.17 USDC
berndartmueller M-21 3.11 2 1.76 2 853.61 USDC
berndartmueller M-20 3.90 1 3.90 2 1896.92 USDC
berndartmueller M-19 3.90 1 3.90 2 1896.92 USDC
berndartmueller H-04 10.35 2 5.85 2 2845.38 USDC
berndartmueller M-18 3.90 1 3.90 2 1896.92 USDC
berndartmueller M-32 3.11 2 1.35 1 656.63 USDC
berndartmueller M-17 3.90 1 3.90 2 1896.92 USDC
berndartmueller M-16 3.11 2 1.76 2 853.61 USDC
berndartmueller H-09 10.35 2 5.85 2 2845.38 USDC
berndartmueller M-15 3.90 1 3.90 2 1896.92 USDC
berndartmueller M-31 2.35 4 0.55 1 265.93 USDC
berndartmueller M-14 3.11 2 1.76 2 853.61 USDC
berndartmueller M-13 3.11 2 1.76 2 853.61 USDC
berndartmueller M-12 3.90 1 3.90 2 1896.92 USDC
berndartmueller M-11 3.11 2 1.76 2 853.61 USDC
berndartmueller M-10 3.90 1 3.90 2 1896.92 USDC
berndartmueller H-08 8.91 3 3.51 2 1707.23 USDC
berndartmueller H-07 13.00 1 13.00 2 6323.07 USDC
berndartmueller H-06 10.35 2 5.85 2 2845.38 USDC
berndartmueller H-05 8.91 3 3.51 2 1707.23 USDC
berndartmueller M-29 2.67 3 0.81 1 393.98 USDC
berndartmueller H-03 8.91 3 3.51 2 1707.23 USDC
Al-Qa-qa H-02 8.91 3 3.51 2 1707.23 USDC
Al-Qa-qa Q-11 199.10 8 24.89 2 292.32 USDC
Josephdara_0xTiwa M-09 3.90 1 3.90 2 1896.92 USDC
lsaudit H-13 4.86 3 0.68 0.25 328.31 USDC
lsaudit Q-10 32.47 11 2.95 1 34.67 USDC
Udsen H-13 4.86 3 0.68 0.25 328.31 USDC
QiuhaoLi M-06 2.09 5 0.39 1 191.47 USDC
QiuhaoLi H-04 10.35 2 4.50 1 2188.75 USDC
QiuhaoLi Q-08 32.47 11 2.95 1 34.67 USDC
deliriusz M-06 2.09 5 0.39 1 191.47 USDC
deliriusz M-08 2.67 3 1.05 2 512.17 USDC
deliriusz M-22 2.67 3 0.81 1 393.98 USDC
deliriusz H-02 8.91 3 2.70 1 1313.25 USDC
deliriusz M-05 2.07 3 0.81 1 393.98 USDC
deliriusz M-07 2.35 4 0.71 2 345.71 USDC
csanuragjain M-34 2.07 3 0.81 1 393.98 USDC
ciphermarco M-05 2.07 3 1.05 2 512.17 USDC
ciphermarco M-04 3.90 1 3.90 2 1896.92 USDC
ciphermarco H-14 10.35 2 4.50 1 2188.75 USDC
ciphermarco H-01 8.91 3 3.51 2 1707.23 USDC
ciphermarco H-09 10.35 2 4.50 1 2188.75 USDC
ciphermarco Q-04 199.10 8 24.89 2 292.32 USDC
MevSec M-02 3.90 1 3.90 2 1896.92 USDC
MevSec H-06 10.35 2 4.50 1 2188.75 USDC
MevSec M-29 2.67 3 0.81 1 393.98 USDC
MevSec H-03 8.91 3 2.70 1 1313.25 USDC
MevSec M-06 2.09 5 0.39 1 191.47 USDC
MevSec M-01 3.90 1 3.90 2 1896.92 USDC
MevSec Q-02 199.10 8 24.89 2 292.32 USDC
Sathish9098 Q-20 32.47 11 2.95 1 34.67 USDC
cartlex_ Q-19 32.47 11 2.95 1 34.67 USDC
Topmark Q-17 32.47 11 2.95 1 34.67 USDC
hihen Q-15 199.10 8 24.89 2 292.32 USDC
ChaseTheLight Q-14 199.10 8 24.89 2 292.32 USDC
Kaysoft Q-13 32.47 11 2.95 1 34.67 USDC
Bauchibred Q-12 32.47 11 2.95 1 34.67 USDC
codeslide Q-09 32.47 11 2.95 1 34.67 USDC
SAQ Q-07 32.47 11 2.95 1 34.67 USDC
PNS Q-05 199.10 8 24.89 2 292.32 USDC
0x6980 Q-03 199.10 8 24.89 2 292.32 USDC
IllIllI Q-01 199.10 8 24.89 2 292.32 USDC

New formula breakdown

Handle Finding Pie Split Slice Score Award
dontonka M-34 2.67 3 1.36 2 649.53 USDC
dontonka H-05 8.91 3 2.70 1 1286.95 USDC
dontonka H-13 8.91 3 6.44 2 3067.23 USDC
dontonka M-07 2.35 4 0.55 1 260.61 USDC
dontonka M-28 3.90 1 3.90 2 1858.92 USDC
dontonka M-14 3.11 2 1.35 1 643.47 USDC
dontonka M-31 2.35 4 0.55 1 260.61 USDC
dontonka Q-16 32.35 1 32.35 3 380.02 USDC
ChristiansWhoHack H-14 10.35 2 5.85 2 2788.39 USDC
ChristiansWhoHack H-01 8.91 3 2.70 1 1286.95 USDC
ChristiansWhoHack H-03 8.91 3 2.70 1 1286.95 USDC
ChristiansWhoHack M-27 3.90 1 3.90 2 1858.92 USDC
ChristiansWhoHack H-12 13.00 1 13.00 2 6196.41 USDC
ChristiansWhoHack M-21 3.11 2 1.35 1 643.47 USDC
ChristiansWhoHack H-05 8.91 3 2.70 1 1286.95 USDC
ChristiansWhoHack M-26 3.11 2 1.76 2 836.52 USDC
ChristiansWhoHack M-25 3.90 1 3.90 2 1858.92 USDC
ChristiansWhoHack H-11 13.00 1 13.00 2 6196.41 USDC
ChristiansWhoHack M-24 3.90 1 3.90 2 1858.92 USDC
jayjonah8 M-33 3.90 1 3.90 2 1858.92 USDC
p0wd3r M-32 3.11 2 1.76 2 836.52 USDC
p0wd3r M-31 2.35 4 0.71 2 338.79 USDC
p0wd3r M-06 2.09 5 0.39 1 187.64 USDC
p0wd3r M-08 2.67 3 0.81 1 386.08 USDC
p0wd3r M-22 2.67 3 0.81 1 386.08 USDC
p0wd3r M-30 3.90 1 3.90 2 1858.92 USDC
p0wd3r H-08 8.91 3 2.70 1 1286.95 USDC
p0wd3r M-13 3.11 2 1.35 1 643.47 USDC
p0wd3r Q-18 32.47 11 2.95 1 34.67 USDC
zhaojie M-34 2.67 3 0.26 0.25 124.91 USDC
zhaojie H-01 8.91 3 2.70 1 1286.95 USDC
zhaojie M-29 2.67 3 1.05 2 501.91 USDC
zhaojie M-07 2.35 4 0.55 1 260.61 USDC
zhaojie M-05 2.67 3 0.26 0.25 124.91 USDC
zhaojie M-16 3.11 2 1.35 1 643.47 USDC
zhaojie M-08 2.67 3 0.81 1 386.08 USDC
likeTheWind H-02 8.91 3 2.70 1 1286.95 USDC
kuprum H-08 8.91 3 2.70 1 1286.95 USDC
oakcobalt H-10 13.00 1 13.00 2 6196.41 USDC
oakcobalt M-06 2.09 5 0.51 2 243.93 USDC
oakcobalt M-07 2.35 4 0.55 1 260.61 USDC
oakcobalt M-31 2.35 4 0.55 1 260.61 USDC
oakcobalt M-03 3.90 1 3.90 2 1858.92 USDC
oakcobalt M-11 3.11 2 1.35 1 643.47 USDC
oakcobalt Q-06 32.47 11 2.95 1 34.67 USDC
berndartmueller M-23 3.90 1 3.90 2 1858.92 USDC
berndartmueller M-26 3.11 2 1.35 1 643.47 USDC
berndartmueller M-22 2.67 3 1.05 2 501.91 USDC
berndartmueller M-21 3.11 2 1.76 2 836.52 USDC
berndartmueller M-20 3.90 1 3.90 2 1858.92 USDC
berndartmueller M-19 3.90 1 3.90 2 1858.92 USDC
berndartmueller H-04 10.35 2 5.85 2 2788.39 USDC
berndartmueller M-18 3.90 1 3.90 2 1858.92 USDC
berndartmueller M-32 3.11 2 1.35 1 643.47 USDC
berndartmueller M-17 3.90 1 3.90 2 1858.92 USDC
berndartmueller M-16 3.11 2 1.76 2 836.52 USDC
berndartmueller H-09 10.35 2 5.85 2 2788.39 USDC
berndartmueller M-15 3.90 1 3.90 2 1858.92 USDC
berndartmueller M-31 2.35 4 0.55 1 260.61 USDC
berndartmueller M-14 3.11 2 1.76 2 836.52 USDC
berndartmueller M-13 3.11 2 1.76 2 836.52 USDC
berndartmueller M-12 3.90 1 3.90 2 1858.92 USDC
berndartmueller M-11 3.11 2 1.76 2 836.52 USDC
berndartmueller M-10 3.90 1 3.90 2 1858.92 USDC
berndartmueller H-08 8.91 3 3.51 2 1673.03 USDC
berndartmueller H-07 13.00 1 13.00 2 6196.41 USDC
berndartmueller H-06 10.35 2 5.85 2 2788.39 USDC
berndartmueller H-05 8.91 3 3.51 2 1673.03 USDC
berndartmueller M-29 2.67 3 0.81 1 386.08 USDC
berndartmueller H-03 8.91 3 3.51 2 1673.03 USDC
Al-Qa-qa H-02 8.91 3 3.51 2 1673.03 USDC
Al-Qa-qa Q-11 199.10 8 24.89 2 292.32 USDC
Josephdara_0xTiwa M-09 3.90 1 3.90 2 1858.92 USDC
lsaudit H-13 8.91 3 1.24 0.25 589.85 USDC
lsaudit Q-10 32.47 11 2.95 1 34.67 USDC
Udsen H-13 8.91 3 1.24 0.25 589.85 USDC
QiuhaoLi M-06 2.09 5 0.39 1 187.64 USDC
QiuhaoLi H-04 10.35 2 4.50 1 2144.91 USDC
QiuhaoLi Q-08 32.47 11 2.95 1 34.67 USDC
deliriusz M-06 2.09 5 0.39 1 187.64 USDC
deliriusz M-08 2.67 3 1.05 2 501.91 USDC
deliriusz M-22 2.67 3 0.81 1 386.08 USDC
deliriusz H-02 8.91 3 2.70 1 1286.95 USDC
deliriusz M-05 2.67 3 1.05 1 499.64 USDC
deliriusz M-07 2.35 4 0.71 2 338.79 USDC
csanuragjain M-34 2.67 3 1.05 1 499.64 USDC
ciphermarco M-05 2.67 3 1.36 2 649.53 USDC
ciphermarco M-04 3.90 1 3.90 2 1858.92 USDC
ciphermarco H-14 10.35 2 4.50 1 2144.91 USDC
ciphermarco H-01 8.91 3 3.51 2 1673.03 USDC
ciphermarco H-09 10.35 2 4.50 1 2144.91 USDC
ciphermarco Q-04 199.10 8 24.89 2 292.32 USDC
MevSec M-02 3.90 1 3.90 2 1858.92 USDC
MevSec H-06 10.35 2 4.50 1 2144.91 USDC
MevSec M-29 2.67 3 0.81 1 386.08 USDC
MevSec H-03 8.91 3 2.70 1 1286.95 USDC
MevSec M-06 2.09 5 0.39 1 187.64 USDC
MevSec M-01 3.90 1 3.90 2 1858.92 USDC
MevSec Q-02 199.10 8 24.89 2 292.32 USDC
Sathish9098 Q-20 32.47 11 2.95 1 34.67 USDC
cartlex_ Q-19 32.47 11 2.95 1 34.67 USDC
Topmark Q-17 32.47 11 2.95 1 34.67 USDC
hihen Q-15 199.10 8 24.89 2 292.32 USDC
ChaseTheLight Q-14 199.10 8 24.89 2 292.32 USDC
Kaysoft Q-13 32.47 11 2.95 1 34.67 USDC
Bauchibred Q-12 32.47 11 2.95 1 34.67 USDC
codeslide Q-09 32.47 11 2.95 1 34.67 USDC
SAQ Q-07 32.47 11 2.95 1 34.67 USDC
PNS Q-05 199.10 8 24.89 2 292.32 USDC
0x6980 Q-03 199.10 8 24.89 2 292.32 USDC
IllIllI Q-01 199.10 8 24.89 2 292.32 USDC

Variation

Here's the combined table with "Award init", "Award updated", and "Variation" columns, based on the data above for each handle.

Handle Finding Score Award init Award updated Variation
dontonka M-34 2 512.17 USDC 649.53 USDC +137.36 USDC
dontonka H-05 1 1313.25 USDC 1286.95 USDC -26.3 USDC
dontonka H-13 2 1707.23 USDC 3067.23 USDC +1360.0 USDC
dontonka M-07 1 265.93 USDC 260.61 USDC -5.32 USDC
dontonka M-28 2 1896.92 USDC 1858.92 USDC -38.0 USDC
dontonka M-14 1 656.63 USDC 643.47 USDC -13.16 USDC
dontonka M-31 1 265.93 USDC 260.61 USDC -5.32 USDC
dontonka Q-16 3 380.02 USDC 380.02 USDC 0 USDC
ChristiansWhoHack H-14 2 2845.38 USDC 2788.39 USDC -56.99 USDC
ChristiansWhoHack H-01 1 1313.25 USDC 1286.95 USDC -26.3 USDC
ChristiansWhoHack H-03 1 1313.25 USDC 1286.95 USDC -26.3 USDC
ChristiansWhoHack M-27 2 1896.92 USDC 1858.92 USDC -38.0 USDC
ChristiansWhoHack H-12 2 6323.07 USDC 6196.41 USDC -126.66 USDC
ChristiansWhoHack M-21 1 656.63 USDC 643.47 USDC -13.16 USDC
ChristiansWhoHack H-05 1 1313.25 USDC 1286.95 USDC -26.3 USDC
ChristiansWhoHack M-26 2 853.61 USDC 836.52 USDC -17.09 USDC
ChristiansWhoHack M-25 2 1896.92 USDC 1858.92 USDC -38.0 USDC
ChristiansWhoHack H-11 2 6323.07 USDC 6196.41 USDC -126.66 USDC
ChristiansWhoHack M-24 2 1896.92 USDC 1858.92 USDC -38.0 USDC
jayjonah8 M-33 2 1896.92 USDC 1858.92 USDC -38.0 USDC
p0wd3r M-32 2 853.61 USDC 836.52 USDC -17.09 USDC
p0wd3r M-31 2 345.71 USDC 338.79 USDC -6.92 USDC
p0wd3r M-06 1 191.47 USDC 187.64 USDC -3.83 USDC
p0wd3r M-08 1 393.98 USDC 386.08 USDC -7.9 USDC
p0wd3r M-22 1 393.98 USDC 386.08 USDC -7.9 USDC
p0wd3r M-30 2 1896.92 USDC 1858.92 USDC -38.0 USDC
p0wd3r H-08 1 1313.25 USDC 1286.95 USDC -26.3 USDC
p0wd3r M-13 1 656.63 USDC 643.47 USDC -13.16 USDC
p0wd3r Q-18 1 34.67 USDC 34.67 USDC 0 USDC
zhaojie M-34 0.25 98.49 USDC 124.91 USDC +26.42 USDC
zhaojie H-01 1 1313.25 USDC 1286.95 USDC -26.3 USDC
zhaojie M-29 2 512.17 USDC 501.91 USDC -10.26 USDC
zhaojie M-07 1 265.93 USDC 260.61 USDC -5.32 USDC
zhaojie M-05 0.25 98.49 USDC 124.91 USDC +26.42 USDC
zhaojie M-16 1 656.63 USDC 643.47 USDC -13.16 USDC
zhaojie M-08 1 393.98 USDC  386.08 USDC  -7.9 USDC
likeTheWind H-02 1 1313.25 USDC 1286.95 USDC -26.3 USDC
kuprum H-08 1 1313.25 USDC 1286.95 USDC -26.3 USDC
oakcobalt H-10 2 6323.07 USDC 6196.41 USDC -126.66 USDC
oakcobalt M-06 2 248.91 USDC 243.93 USDC -4.98 USDC
oakcobalt M-07 1 265.93 USDC 260.61 USDC -5.32 USDC
oakcobalt M-31 1 265.93 USDC 260.61 USDC -5.32 USDC
oakcobalt M-03 2 1896.92 USDC 1858.92 USDC -38.0 USDC
oakcobalt M-11 1 656.63 USDC 643.47 USDC -13.16 USDC
oakcobalt Q-06 1 34.67 USDC 34.67 USDC 0 USDC
berndartmueller M-23 2 1896.92 USDC 1858.92 USDC -38.0 USDC
berndartmueller M-26 1 656.63 USDC 643.47 USDC -13.16 USDC
berndartmueller M-22 2 512.17 USDC 501.91 USDC -10.26 USDC
berndartmueller M-21 2 853.61 USDC 836.52 USDC -17.09 USDC
berndartmueller M-20 2 1896.92 USDC 1858.92 USDC -38.0 USDC
berndartmueller M-19 2 1896.92 USDC 1858.92 USDC -38.0 USDC
berndartmueller H-04 2 2845.38 USDC 2788.39 USDC -56.99 USDC
berndartmueller M-18 2 1896.92 USDC 1858.92 USDC -38.00 USDC
berndartmueller M-32 1 656.63 USDC 643.47 USDC -13.16 USDC
berndartmueller M-17 2 1896.92 USDC 1858.92 USDC -38.00 USDC
berndartmueller M-16 2 853.61 USDC 836.52 USDC -17.09 USDC
berndartmueller H-09 2 2845.38 USDC 2788.39 USDC -56.99 USDC
berndartmueller M-15 2 1896.92 USDC 1858.92 USDC -38.00 USDC
berndartmueller M-31 1 265.93 USDC 260.61 USDC -5.32 USDC
berndartmueller M-14 2 853.61 USDC 836.52 USDC -17.09 USDC
berndartmueller M-13 2 853.61 USDC 836.52 USDC -17.09 USDC
berndartmueller M-12 2 1896.92 USDC 1858.92 USDC -38.00 USDC
berndartmueller M-11 2 853.61 USDC 836.52 USDC -17.09 USDC
berndartmueller M-10 2 1896.92 USDC 1858.92 USDC -38.00 USDC
berndartmueller H-08 2 1707.23 USDC 1673.03 USDC -34.20 USDC
berndartmueller H-07 2 6323.07 USDC 6196.41 USDC -126.66 USDC
berndartmueller H-06 2 2845.38 USDC 2788.39 USDC -56.99 USDC
berndartmueller H-05 2 1707.23 USDC 1673.03 USDC -34.20 USDC
berndartmueller M-29 1 393.98 USDC 386.08 USDC -7.90 USDC
berndartmueller H-03 2 1707.23 USDC 1673.03 USDC -34.20 USDC
Al-Qa-qa H-02 2 1707.23 USDC 1673.03 USDC -34.20 USDC
Al-Qa-qa Q-11 2 292.32 USDC 292.32 USDC 0.00 USDC
Josephdara_0xTiwa M-09 2 1896.92 USDC 1858.92 USDC -38.00 USDC
lsaudit H-13 0.25 328.31 USDC 589.85 USDC +261.54 USDC
lsaudit Q-10 1 34.67 USDC 34.67 USDC 0.00 USDC
Udsen H-13 0.25 328.31 USDC 589.85 USDC +261.54 USDC
QiuhaoLi M-06 1 191.47 USDC 187.64 USDC -3.83 USDC
QiuhaoLi H-04 1 2188.75 USDC 2144.91 USDC -43.84 USDC
QiuhaoLi Q-08 1 34.67 USDC 34.67 USDC 0.00 USDC
deliriusz M-06 1 191.47 USDC 187.64 USDC -3.83 USDC
deliriusz M-08 2 512.17 USDC 501.91 USDC -10.26 USDC
deliriusz M-22 1 393.98 USDC 386.08 USDC -7.90 USDC
deliriusz H-02 1 1313.25 USDC 1286.95 USDC -26.30 USDC
deliriusz M-05 1 393.98 USDC 499.64 USDC +105.66 USDC
deliriusz M-07 2 345.71 USDC 338.79 USDC -6.92 USDC
csanuragjain M-34 1 393.98 USDC 499.64 USDC +105.66 USDC
ciphermarco M-05 2 512.17 USDC 649.53 USDC +137.36 USDC
ciphermarco M-04 2 1896.92 USDC 1858.92 USDC -38.00 USDC
ciphermarco H-14 1 2188.75 USDC 2144.91 USDC -43.84 USDC
ciphermarco H-01 2 1707.23 USDC 1673.03 USDC -34.20 USDC
ciphermarco H-09 1 2188.75 USDC 2144.91 USDC -43.84 USDC
ciphermarco Q-04 2 292.32 USDC 292.32 USDC 0.00 USDC
MevSec M-02 2 1896.92 USDC 1858.92 USDC -38.00 USDC
MevSec H-06 1 2188.75 USDC 2144.91 USDC -43.84 USDC
MevSec M-29 1 393.98 USDC 386.08 USDC -7.90 USDC
MevSec H-03 1 1313.25 USDC 1286.95 USDC -26.30 USDC
MevSec M-06 1 191.47 USDC 187.64 USDC -3.83 USDC
MevSec M-01 2 1896.92 USDC 1858.92 USDC -38.00 USDC
MevSec Q-02 2 292.32 USDC 292.32 USDC 0.00 USDC
Sathish9098 Q-20 1 34.67 USDC 34.67 USDC 0.00 USDC
cartlex_ Q-19 1 34.67 USDC 34.67 USDC 0.00 USDC
Topmark Q-17 1 34.67 USDC 34.67 USDC 0.00 USDC
hihen Q-15 2 292.32 USDC 292.32 USDC 0.00 USDC
ChaseTheLight Q-14 2 292.32 USDC 292.32 USDC 0.00 USDC
Kaysoft Q-13 1 34.67 USDC 34.67 USDC 0.00 USDC
Bauchibred Q-12 1 34.67 USDC 34.67 USDC 0.00 USDC
codeslide Q-09 1 34.67 USDC 34.67 USDC 0.00 USDC
SAQ Q-07 1 34.67 USDC 34.67 USDC 0.00 USDC
PNS Q-05 2 292.32 USDC 292.32 USDC 0.00 USDC
0x6980 Q-03 2 292.32 USDC 292.32 USDC 0.00 USDC
IllIllI Q-01 2 292.32 USDC 292.32 USDC 0.00 USDC

Conclusion

The new formula works as intended. The total variation sums to 0, which means the awards have been reallocated as expected.
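
The zero-sum claim is easy to sanity-check. The sketch below, assuming a 5,000 USDC pot and the scenario_2 dummy data (a solo High plus a 5-duplicate High group), pays out the full pot under both formulas; only the split moves, with the credit forfeited by the partial flowing back to the group's full-credit findings. Names, pot size, and formulas are illustrative reconstructions, not C4's production code.

```python
CREDIT = {"selected": 1.3, "satisfactory": 1.0,
          "partial-25": 0.25, "partial-50": 0.5, "partial-75": 0.75}

def old_slices(w, labels):
    # Previous behavior: each duplicate starts from an equal share, and
    # partial labels simply shrink their slices (and thus the whole pie).
    base = w * 0.9 ** (len(labels) - 1) / len(labels)
    return [base * CREDIT[l] for l in labels]

def new_slices(w, labels):
    # New behavior: the pie depends only on the duplicate count; the credit
    # lost by partials is redistributed pro rata inside the group.
    n = len(labels)
    pie = w * 0.9 ** (n - 1) * (n + 0.3) / n
    total = sum(CREDIT[l] for l in labels)
    return [pie * CREDIT[l] / total for l in labels]

def awards(groups, pot=5000.0):
    # Each slice earns pot * slice / (sum of all slices in the contest).
    flat = [s for g in groups for s in g]
    return [pot * s / sum(flat) for s in flat]

contest = [["selected"],                                          # solo High (H-02)
           ["selected"] + ["satisfactory"] * 3 + ["partial-25"]]  # H-01 group

old = awards([old_slices(10, g) for g in contest])
new = awards([new_slices(10, g) for g in contest])
print(round(sum(old), 2), round(sum(new), 2))  # 5000.0 5000.0 — both pay out the pot
print(round(old[1], 2), round(new[1], 2))      # 449.61 497.89 — group selected gains
```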

@GalloDaSballo

Happy to see a full example which we can all reason through

@dontonka
Author

dontonka commented Mar 21, 2024

We plan to release an updated version of our algorithm in the upcoming weeks.

@Simon-Busch Thank you for doing this thorough analysis; I think you were in the best position to execute it and influence the decision here.

That was a long movie, boys and girls 🍿, but I'm happy to see it come to fruition.

Now, let's address the elephant in the room, the killer question ("la question qui tue"), which is especially for @sockdrawermoney and/or @Simon-Busch:

Was the previous formula's behavior intentional, and if so, why was it implemented in such a way that it reduces the finding's weight (aka pie)? Why would you implement a formula that deliberately reduces a finding's weight because some duplicates have a partial label attached to them? Finally, even if you could hypothetically come up with a valid argument justifying such behavior, my next question is: why would this affect only duplicates with partial labels and not normal duplicates?

@0xEVom

0xEVom commented May 2, 2024

@Simon-Busch I just noticed that the C4 docs describe the old awarding method for partial credit duplicates. Has this change been reverted, or is the documentation yet to be updated?

@CloudEllie

I just noticed that the C4 docs describe the old awarding method for partial credit duplicates. Has this change been reverted, or is the documentation yet to be updated?

@0xEVom (and @dontonka) The docs have not yet been updated with the new awardcalc math; we'll add it as soon as we can. Rest assured the change is being applied to the calculation; we just haven't managed to update the docs with the math, yet.
