
Stm reduce transmute #675

Closed · wants to merge 7 commits

Conversation

@curiecrypt (Collaborator) commented Jan 4, 2023

Content

This PR aims to reduce or completely remove the use of transmute in multi_sig.rs.

Pre-submit checklist

  • Branch
    • Commit sequence broadly makes sense
    • Key commits have useful messages
  • PR
    • No clippy warnings in the CI
    • Self-reviewed the diff
    • Useful pull request description
    • Reviewer requested

Comments

The following benchmark results show the regression between the transmute-based and the serde-based versions. Performance has regressed drastically, so we should consider another way of avoiding the potential problems of transmute that is also efficient. As we discussed, a helper structure for the unsafe code could be a better option, @iquerejeta.

STM/Blake2b/Key registration/k: 25, m: 150, nr_parties: 300
                        time:   [502.24 ms 502.95 ms 503.73 ms]
                        change: [+26.772% +27.320% +27.940%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Play all lotteries/k: 25, m: 150, nr_parties: 300
                        time:   [908.94 µs 910.58 µs 912.56 µs]
                        change: [+31.221% +31.619% +32.056%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Aggregation/k: 25, m: 150, nr_parties: 300
                        time:   [24.401 ms 24.622 ms 24.895 ms]
                        change: [+26.608% +27.610% +29.137%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Key registration/k: 250, m: 1523, nr_parties: 2000
                        time:   [3.3481 s 3.3779 s 3.4273 s]
                        change: [+23.353% +27.009% +30.851%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Play all lotteries/k: 250, m: 1523, nr_parties: 2000
                        time:   [7.5255 ms 7.5313 ms 7.5375 ms]
                        change: [+30.692% +30.971% +31.228%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Aggregation/k: 250, m: 1523, nr_parties: 2000
                        time:   [238.58 ms 238.82 ms 239.12 ms]
                        change: [+24.642% +24.856% +25.077%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Batch Verification/k: 25, m: 150, nr_parties: 300/batch size: 1
                        time:   [2.9301 ms 2.9354 ms 2.9427 ms]
                        change: [+30.118% +30.369% +30.663%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Batch Verification/k: 25, m: 150, nr_parties: 300/batch size: 10
                        time:   [23.629 ms 23.650 ms 23.674 ms]
                        change: [+31.475% +31.664% +31.845%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Batch Verification/k: 25, m: 150, nr_parties: 300/batch size: 20
                        time:   [46.031 ms 46.192 ms 46.431 ms]
                        change: [+32.254% +32.681% +33.157%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Batch Verification/k: 25, m: 150, nr_parties: 300/batch size: 100
                        time:   [230.16 ms 230.38 ms 230.62 ms]
                        change: [+31.979% +32.165% +32.361%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Batch Verification/k: 250, m: 1523, nr_parties: 2000/batch size: 1
                        time:   [4.3937 ms 4.4235 ms 4.4583 ms]
                        change: [+18.011% +19.305% +20.655%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Batch Verification/k: 250, m: 1523, nr_parties: 2000/batch size: 10
                        time:   [45.043 ms 45.313 ms 45.601 ms]
                        change: [+22.106% +23.430% +24.636%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Batch Verification/k: 250, m: 1523, nr_parties: 2000/batch size: 20
                        time:   [97.238 ms 100.43 ms 104.30 ms]
                        change: [+44.286% +49.695% +54.132%] (p = 0.00 < 0.05)
                        Performance has regressed.

STM/Blake2b/Batch Verification/k: 250, m: 1523, nr_parties: 2000/batch size: 100
                        time:   [396.87 ms 397.68 ms 398.52 ms]
                        change: [+34.851% +35.222% +35.516%] (p = 0.00 < 0.05)
                        Performance has regressed.

Issue(s)

Closes #532

@curiecrypt force-pushed the stm-reduce-transmute branch from 1524a6c to 3c00e04 on January 4, 2023 18:44
@github-actions bot commented Jan 4, 2023

Test Results

    3 files  ±0    28 suites  ±0   6m 0s ⏱️ +12s
384 tests ±0  384 ✔️ ±0  0 💤 ±0  0 ±0 
445 runs  ±0  445 ✔️ ±0  0 💤 ±0  0 ±0 

Results for commit f8b3ade. Comparison against base commit 697b10e.

♻️ This comment has been updated with latest results.

@curiecrypt temporarily deployed to testing-preview January 5, 2023 10:25 — with GitHub Actions
Comment on lines +203 to +207
let ser_sk = sk.0.serialize().as_ptr();            // pointer to the serialised secret-key bytes
let mut sk_scalar = blst_scalar::default();
blst_scalar_from_bendian(&mut sk_scalar, ser_sk);  // rebuild the scalar from its big-endian bytes
let mut out = blst_p1::default();
blst_sk_to_pk_in_g1(&mut out, &sk_scalar);         // derive the public-key point in G1
Contributor commented:

In general I would avoid doing serialisation/deserialisation and use it only as a last resort. So if we can use different strategies for the other usages of transmute, that would be best.

If we were to continue with this technique (serialisation and deserialisation), I would be careful with secret material, i.e. I would overwrite ser_sk and sk_scalar with zeroes and blst_scalar::default respectively.
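
A minimal sketch of that clean-up, assuming blst's min_pk::SecretKey and the low-level bindings used in the diff above (the wrapper type in multi_sig.rs may differ); the serialised bytes and the intermediate scalar are wiped before the function returns:

use blst::min_pk::SecretKey;
use blst::{blst_p1, blst_scalar, blst_scalar_from_bendian, blst_sk_to_pk_in_g1};

// Hypothetical helper: derive the public-key point through serialisation and
// overwrite the intermediate secret material afterwards, as suggested above.
fn sk_to_pk_point(sk: &SecretKey) -> blst_p1 {
    // Keep the serialised bytes in a local so they can be zeroed later.
    let mut ser_sk = sk.serialize();
    let mut sk_scalar = blst_scalar::default();
    let mut out = blst_p1::default();
    unsafe {
        blst_scalar_from_bendian(&mut sk_scalar, ser_sk.as_ptr());
        blst_sk_to_pk_in_g1(&mut out, &sk_scalar);
    }
    // Wipe the secret bytes and the intermediate scalar with zeroes.
    ser_sk.fill(0);
    sk_scalar.b.fill(0);
    out
}

Note that plain overwrites like these can be optimised away by the compiler; a crate such as zeroize would make the wipe more robust.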

@curiecrypt temporarily deployed to testing-preview January 10, 2023 12:13 — with GitHub Actions
@curiecrypt temporarily deployed to testing-preview January 11, 2023 14:26 — with GitHub Actions
@curiecrypt temporarily deployed to testing-preview January 11, 2023 16:46 — with GitHub Actions
@curiecrypt requested a review from iquerejeta January 17, 2023 15:37
@iquerejeta (Contributor) commented:

As mentioned by @curiecrypt, the regression is considerable, so we won't proceed down that path. Doing serialisation/deserialisation, while safer, has a significant performance cost. To minimise the risks of transmute, we will instead create a helper function (to make auditing easier) and pin the blst library to a specific version to avoid changes under the hood. Closing this in favour of #690.
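
For reference, a rough sketch of that helper-function idea with hypothetical names (the actual helper is the subject of #690): the single transmute lives in one small, documented function, and the layout assumption it relies on is what pinning the blst version protects.

use blst::blst_scalar;
use blst::min_pk::SecretKey;

// Hypothetical helper concentrating the unsafe conversion in one auditable place.
// It assumes blst's SecretKey is a thin wrapper around a single blst_scalar,
// which is why the blst dependency must be pinned to a known version.
fn sk_as_scalar(sk: &SecretKey) -> &blst_scalar {
    unsafe { std::mem::transmute::<&SecretKey, &blst_scalar>(sk) }
}

Callers then go through this one function instead of scattering transmute calls across multi_sig.rs.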

Development

Successfully merging this pull request may close these issues: Reduce usage of transmute to the strictly necessary.