makes last erasure batch size >= 64 shreds #34330

Merged: 1 commit merged into solana-labs:master on Dec 13, 2023

Conversation

behzadnouri (Contributor)

Problem

We want a wider retransmitter set for the last erasure batch.

Summary of Changes

makes last erasure batch size >= 64 shreds
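
A rough sketch of the idea, with hypothetical names and constants rather than the PR's actual code: when shredding the last batch of a slot, generate enough coding shreds that the total batch size (data + coding) reaches at least 64 shreds, instead of leaving a small final batch.

```rust
// Minimal sketch, not the actual shredder code: choose the number of coding
// shreds for an erasure batch, padding the last batch of the slot with extra
// coding shreds so that data + coding >= 64.
const MIN_LAST_BATCH_SIZE: usize = 64; // assumed target from the PR title

fn num_coding_shreds(num_data: usize, is_last_in_slot: bool) -> usize {
    if is_last_in_slot {
        // Pad with coding shreds (cheaper than empty data shreds, see the
        // discussion below) until the batch reaches the minimum size.
        std::cmp::max(num_data, MIN_LAST_BATCH_SIZE.saturating_sub(num_data))
    } else {
        // Regular batches are roughly symmetric, e.g. 32 data + 32 coding.
        num_data
    }
}

fn main() {
    assert_eq!(num_coding_shreds(32, false), 32); // 32 + 32 = 64
    assert_eq!(num_coding_shreds(5, true), 59);   // 5 + 59 = 64
    assert_eq!(num_coding_shreds(40, true), 40);  // 40 + 40 = 80 >= 64
}
```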

@@ -2562,6 +2562,7 @@ fn run_test_load_program_accounts_partition(scan_commitment: CommitmentConfig) {

#[test]
#[serial]
#[ignore]
behzadnouri (Contributor Author)

The test is already flaky (#32863), but it fails a lot more often with this change.
I have to ignore it for now so that CI can pass, since this code is not the culprit anyway.

codecov bot commented Dec 6, 2023

Codecov Report

Merging #34330 (6043a51) into master (8c6239c) will decrease coverage by 0.1%.
The diff coverage is 88.4%.

Additional details and impacted files
@@            Coverage Diff            @@
##           master   #34330     +/-   ##
=========================================
- Coverage    82.0%    82.0%   -0.1%     
=========================================
  Files         819      819             
  Lines      220598   220625     +27     
=========================================
+ Hits       180898   180914     +16     
- Misses      39700    39711     +11     

AshwinSekar (Contributor)

How expensive would it be to pad with 32 - d data shreds instead?
Just curious, because it could make verification simpler (compare the FEC set of last_shred_in_slot and last_shred_in_slot - 31) versus counting coding shreds.
Also, we would then have an inherent tracker in max_tick_height; otherwise we'll have to add a tracker to revisit once enough coding shreds have been received.

Not saying it can't be done, but it introduces some more complexity on the verification side.
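
A rough illustration of the check suggested here, with hypothetical types and field names (this is not code from the PR or the repository): if the last batch were padded with data shreds up to a fixed 32, a receiver could verify it simply by checking that the data shred 31 indices before the last shred of the slot belongs to the same FEC set.

```rust
// Hypothetical sketch of the "compare FEC sets" check; field names are assumed.
#[derive(Clone, Copy)]
struct DataShred {
    index: u32,         // data shred index within the slot
    fec_set_index: u32, // index identifying the erasure batch it belongs to
}

// Returns true if the last data shred of the slot and the shred 31 indices
// before it fall in the same erasure batch, i.e. the last batch holds a full
// 32 data shreds.
fn last_batch_has_32_data_shreds(shreds: &[DataShred]) -> bool {
    let last = match shreds.last() {
        Some(shred) => *shred,
        None => return false,
    };
    if last.index < 31 {
        return false;
    }
    shreds
        .iter()
        .find(|shred| shred.index == last.index - 31)
        .map(|shred| shred.fec_set_index == last.fec_set_index)
        .unwrap_or(false)
}

fn main() {
    // Two batches of 32 data shreds each: indices 0..32 and 32..64.
    let shreds: Vec<DataShred> = (0..64)
        .map(|index| DataShred { index, fec_set_index: (index / 32) * 32 })
        .collect();
    assert!(last_batch_has_32_data_shreds(&shreds));
}
```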

behzadnouri (Contributor Author)

> How expensive would it be to pad with 32 - d data shreds instead?
> Just curious, because it could make verification simpler (compare the FEC set of last_shred_in_slot and last_shred_in_slot - 31) versus counting coding shreds.

From an erasure-coding perspective, it would be pretty wasteful to add more empty data shreds instead of coding shreds.
Let's see how the verification code looks and we can revise this if needed.
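
A back-of-the-envelope illustration of that point, assuming Reed-Solomon style erasure coding in which any d of the d + c shreds in a batch suffice to reconstruct the data (the numbers below are illustrative, not taken from the code):

```rust
// With Reed-Solomon(d data, c coding), the data survives as long as at least
// d of the d + c shreds arrive, so up to c shreds may be lost.
fn max_tolerable_losses(num_coding: usize) -> usize {
    num_coding
}

fn main() {
    // Suppose the last batch holds a single real data shred.
    // Option A: pad with 31 empty data shreds -> 32 data + 32 coding = 64 shreds sent.
    assert_eq!(max_tolerable_losses(32), 32);
    // Option B: pad with coding shreds instead -> 1 data + 63 coding = 64 shreds sent.
    assert_eq!(max_tolerable_losses(63), 63);
    // Same number of shreds transmitted, but option B tolerates nearly twice
    // as many losses, which is why padding with coding shreds is preferred.
}
```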

behzadnouri force-pushed the last-erasure-batch branch 2 times, most recently from 84bf708 to 25afd71, on December 7, 2023 14:41
behzadnouri merged commit 7500235 into solana-labs:master on Dec 13, 2023
34 checks passed
behzadnouri deleted the last-erasure-batch branch on December 13, 2023 06:48
behzadnouri added the v1.17 label (PRs that should be backported to v1.17) on Dec 13, 2023

mergify bot commented Dec 13, 2023

Backports to the beta branch are to be avoided unless absolutely necessary for fixing bugs, security issues, and perf regressions. Changes intended for backport should be structured such that a minimum effective diff can be committed separately from any refactoring, plumbing, cleanup, etc that are not strictly necessary to achieve the goal. Any of the latter should go only into master and ride the normal stabilization schedule. Exceptions include CI/metrics changes, CLI improvements and documentation updates on a case by case basis.

Labels: v1.17 (PRs that should be backported to v1.17)

3 participants