
deprioritize unreconcilable missingPvtData #1721

Conversation

@cendhu (Contributor) commented Aug 13, 2020

Type of change

  • New feature

Description

When private data associated with a transaction is missing (i.e., the peer could not fetch it from other members at commit time), the missing data info is recorded in the pvtdata store rather than stalling the block commit. A pvtdata reconciler task periodically tries to fetch this missing private data. When the reconciler cannot fetch a particular piece of private data, it is necessary to deprioritize that missing private data so that the reconciler can fetch other missing private data rather than stalling the whole process by continuously retrying the same item.

This PR enables differentiation of prioritized and deprioritized missing data.

Additional details

This feature is necessary for a peer that bootstraps a channel using a snapshot. As the private data is not dumped to a file, the new peer uses the reconciler to fetch the required private data. If a certain set of missing data cannot be fetched, the reconciler would, in a rare situation, get stuck in a loop and make no further progress.

@cendhu cendhu force-pushed the deprioritize-unreconcilable-missingPvtData branch 2 times, most recently from 41f7f41 to a5a2fd9 Compare August 21, 2020 12:17
@cendhu cendhu changed the title [WIP] deprioritize unreconcilable missingPvtData deprioritize unreconcilable missingPvtData Aug 21, 2020
@cendhu cendhu marked this pull request as ready for review August 21, 2020 12:25
@cendhu cendhu requested a review from a team as a code owner August 21, 2020 12:25
@cendhu cendhu force-pushed the deprioritize-unreconcilable-missingPvtData branch 2 times, most recently from 06f1025 to 2c78106 Compare August 24, 2020 06:10
Signed-off-by: senthil <cendhu@gmail.com>
@cendhu cendhu force-pushed the deprioritize-unreconcilable-missingPvtData branch from 2c78106 to 9d2d7e3 Compare August 24, 2020 06:12
@manish-sethi (Contributor) left a comment

I am yet to look at the test files. However, I have a few comments on the regular files.
In addition to the comments, I would like to mention that we should avoid combining major refactoring and new code in a single PR. We discussed this before. In this PR, the diffs hardly make any sense. Renaming, splitting into different functions, and moving code across files is easier to review in isolation... particularly when a backport is desired.

@@ -140,6 +143,7 @@ func (p *Provider) OpenStore(ledgerid string) (*Store, error) {
return nil, err
}
s.launchCollElgProc()
s.deprioritizedMissingDataPeriodicity = 1000
Contributor:

Nit: Better to define a constant, with a comment.

Contributor Author:

Sure, will do.

Comment on lines -155 to +167
&missingDataKey{nsCollBlk{ns, coll, expiryEntry.key.committingBlk}, true})
missingDataKeys = append(missingDataKeys,
&missingDataKey{nsCollBlk{ns, coll, expiryEntry.key.committingBlk}, false})
&missingDataKey{
nsCollBlk: nsCollBlk{
ns: ns,
coll: coll,
blkNum: expiryEntry.key.committingBlk,
Contributor Author:

I see. One way is to replace isEligible with the group and then pass the information from here. There would be three groups -- elgPrio, elgDepri, and inElg. The only problem is that removing isEligible might impact a moderate amount of existing code. I thought of doing it, but many tests and the normal commit path depend on it. Hence, I left it as-is for now and decided to do it in a separate PR. Let me know what you think.

Comment on lines 110 to 111
func encodeMissingDataKey(key *missingDataKey) []byte {
if key.isEligible {
// When missing pvtData reconciler asks for missing data info,
// it is necessary to pass the missing pvtdata info associated with
// the most recent block so that missing pvtdata in the state db can
// be fixed sooner to reduce the "private data matching public hash version
// is not available" error during endorserments. In order to give priority
// to missing pvtData in the most recent block, we use reverse order
// preserving encoding for the missing data key. This simplifies the
// implementation of GetMissingPvtDataInfoForMostRecentBlocks().
keyBytes := append(eligibleMissingDataKeyPrefix, encodeReverseOrderVarUint64(key.blkNum)...)
keyBytes = append(keyBytes, []byte(key.ns)...)
keyBytes = append(keyBytes, nilByte)
return append(keyBytes, []byte(key.coll)...)
}

keyBytes := append(ineligibleMissingDataKeyPrefix, []byte(key.ns)...)
keyBytes = append(keyBytes, nilByte)
keyBytes = append(keyBytes, []byte(key.coll)...)
keyBytes = append(keyBytes, nilByte)
return append(keyBytes, []byte(encodeReverseOrderVarUint64(key.blkNum))...)
func encodeElgMissingDataKey(group []byte, key *missingDataKey) []byte {
Contributor:

The pattern that was intentionally followed in this package is that encoding/decoding of the various types of keys is contained in one function, and everywhere else in the code we work with the high-level structs, in order to keep the binary representation separated from the high-level code - just like you deal with proto objects. Passing the group, which is a []byte, from outside seems like taking a step back: everywhere in the code we would need to take care of passing the right prefix, which is part of the encoding knowledge.

Further, in the current mixed form, it's very misleading (e.g., see the function decodeElgMissingDataKey below, where this is returned with a default value of false. I have highlighted one more usage below).

One option could be to remove the field isEligible entirely and instead have three explicit functions for key encoding/decoding. The functions that need to operate on either prioritized or deprioritized data can take an explicit bool flag instead of taking this []byte.

Contributor Author:

Agree. I was trying to use a single encoding/decoding method. Then I hit this linter https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#flag-parameter and it made sense to me.

If you prefer to use a single encoding/decoding function, we can replace isEligible, as I mentioned earlier, with the group []byte.

Comment on lines 193 to 198
func createRangeScanKeysForInelgMissingData(maxBlkNum uint64, ns, coll string) ([]byte, []byte) {
startKey := encodeInelgMissingDataKey(
&missingDataKey{
nsCollBlk: nsCollBlk{ns: ns, coll: coll, blkNum: maxBlkNum},
isEligible: false,
},
Contributor:

Just to highlight the redundancy at another place... calling the function encodeInelgMissingDataKey and still passing isEligible: false could be a source of confusion.

Contributor Author:

Agreed. I should have added a TODO for removing isEligible. With the introduction of three separate encoding/decoding functions, we really don't need this field in the struct anymore.

func TestEligibleMissingdataRange(t *testing.T) {
func TestEligibleMissingDataRange(t *testing.T) {
Contributor:

This change does not bother me much, but it was not a camel-casing mistake... I did it deliberately (here and in other places) to distinguish that it's Missingdata-Range and not Missing-DataRange. In other words, treating Missingdata as a noun.

Contributor Author:

Just for consistent usage; not related to camelcase. Everywhere else we use either missingData or MissingData. When I looked at this, it suddenly seemed a bit odd, so I made the change.

Comment on lines 202 to 207
if err != nil {
return nil, err
}
return expiryData, nil
if expData == nil {
return nil, errors.Wrap(err, "error while getting expiry data from the store")
}
Contributor:

either this..

	if err != nil || expData == nil{
		return nil, errors.Wrap(err, "error while getting expiry data from the store")
	}

or this...

	if err != nil {
		return nil, errors.Wrap(err, "error while getting expiry data from the store")
	}
	if expData == nil {
		return nil, nil
	}

Contributor Author:

Yeah, something got messed up while changing the code multiple times. First, I had the 1st option. Then I was weighing clarity against fewer LOC, chose the former, and went with the 2nd option but forgot to change the return to nil.

Comment on lines +106 to +107
p.entries.dataEntries[*dataEntry.key] = dataEntry.value
p.entries.expiryEntries[expKey] = expData
Contributor:

Typically, if you override an existing key in a map, the lookup and the existence check are kept close together in the code. See the related comment on the function getExpiryData.

Comment on lines 130 to 134
if deprioMissingData != nil && deprioMissingData.Test(txNum) {
p.entries.deprioritizedMissingDataEntries[key] = deprioMissingData.Clear(txNum)
}
// if the missing data entry is already purged by the purge scheduler, we would
// get nil missingData from both prioritized and deprioritized list
Contributor:

How does this happen? The data entries only contain data that has not yet expired. Given that, the condition at line 130 should always be true.

Contributor Author:

True. Will remove.

Comment on lines 117 to 126
prioMissingData, err := p.getMissingDataFromPrioritizedList(key)
if err != nil {
return err
}
if missingData == nil {
// data entry is already expired
// and purged (a rare scenario)
if prioMissingData != nil && prioMissingData.Test(txNum) {
p.entries.prioritizedMissingDataEntries[key] = prioMissingData.Clear(txNum)
continue
}
missingData.Clear(uint(txNum))

entries.add(dataEntry, expKey, expData, missingData)
deprioMissingData, err := p.getMissingDataFromDeprioritizedList(key)
Contributor:

Just to take a note: as we ask for the data exclusively from one group in a cycle, when we refactor the reconciliation code to make it more of a pull model than a push, this can be improved by checking only one list.

return nil
}

func (p *oldBlockDataProcessor) prepareMissingDataEntriesToReflectPriority(deprioritizedList ledger.MissingPvtDataInfo) error {
Contributor:

deprioritizedList -> toDeprioritize carries a better meaning.

Contributor Author:

Sure, will update.


@cendhu (Contributor Author) commented Aug 28, 2020

I am yet to look at the test files. However, I have a few comments on the regular files.
In addition to the comments, I would like to mention that we should avoid combining major refactoring and new code in a single PR. We discussed this before. In this PR, the diffs hardly make any sense. Renaming, splitting into different functions, and moving code across files is easier to review in isolation... particularly when a backport is desired.

Sorry about that. I went through multiple iterations, and that resulted in a lot of refactoring, though it was not intentional. I also couldn't push a refactoring-only PR because the names I had chosen made sense only with the new feature; otherwise, they didn't. In other words, the existing code looked right to me without the deprioritizedList and hence didn't require refactoring. Anyway, I should have added multiple commits for ease of review. I will keep this in mind next time.

Signed-off-by: senthil <cendhu@gmail.com>
@manish-sethi (Contributor) left a comment

A few comments... related to code cleanup.

Comment on lines 72 to 75
missingDataEntries = elgMissingDataEntries
if !nsColl.IsEligible {
missingDataEntries = inelgMissingDataEntries
}
Contributor:

Though I prefer the approach of assigning the map to a common var based on the type and then having a common flow (as I have suggested at another place), here I find it too involved because the assignment is not one-time. I believe that having a separate function getOrCreateBitSet would improve readability.

Contributor Author:

I will go with common var and switch case.

Contributor:

In that case, I am not really sure whether having two separate structures (after removing IsEligible) has improved readability. It has, in fact, increased the amount of if-else code.

Contributor Author:

I don't understand. Can you elaborate on your concern related to readability (how does the current code reduce readability)?

There is a set of missingDataInfo passed by the kvledger. This function splits the missingDataInfo into two buckets -- eligible and ineligible. This code looks readable and clear to me.

Contributor:

As I mentioned before, the exact part that hinders the flow is the temporary assignment within the loop... just for the purpose of reusing a small portion of code (map lookup and create-if-missing). As in other places, I don't mind duplication in the two switch statements if you prefer that... though I had suggested an alternate approach - i.e., having a function getOrCreateBitSet could allow reuse while still avoiding this reassignment.

Contributor Author:

Sorry, Manish. I still don't understand. This looks like a nit to me but you see some concern here.

Are you saying that using a switch statement to choose a bucket for the missingData is not natural/correct?

	elgMissingDataEntries := make(map[missingDataKey]*bitset.BitSet)
	inelgMissingDataEntries := make(map[missingDataKey]*bitset.BitSet)

	var missingDataEntries map[missingDataKey]*bitset.BitSet

	for txNum, missingData := range missingPvtData {
		for _, nsColl := range missingData {
			key := missingDataKey{
				nsCollBlk{
					ns:     nsColl.Namespace,
					coll:   nsColl.Collection,
					blkNum: committingBlk,
				},
			}

			switch nsColl.IsEligible {
			case true:
				missingDataEntries = elgMissingDataEntries
			default:
				missingDataEntries = inelgMissingDataEntries
			}

			if _, ok := missingDataEntries[key]; !ok {
				missingDataEntries[key] = &bitset.BitSet{}
			}
			bitmap := missingDataEntries[key]

			bitmap.Set(uint(txNum))
		}
	}

	return elgMissingDataEntries, inelgMissingDataEntries
  1. Are you suggesting to split this function into two? prepareEligibleEntries() and prepareIneligibleEntries()
  2. Are you suggesting to introduce getOrCreateBitmap(elg, inelg map[nsCollBlk]*bitset.Bitmap, key missingDataKey, isEligible bool)?
  3. Are you suggesting something very different?

@manish-sethi (Contributor) commented Sep 1, 2020:

Are you saying that using a switch statement to choose a bucket for the missingData is not natural/correct?

Yes, my objection is to the overuse of the variable missingDataEntries for a small reuse of code. In a function, you typically change the values of variables but not their semantic meaning. This variable sometimes points to elgMissingDataEntries and sometimes to inelgMissingDataEntries.

On the suggestion part... if code can convey it better... I was suggesting
either

	elgMissingDataEntries := make(map[missingDataKey]*bitset.BitSet)
	inelgMissingDataEntries := make(map[missingDataKey]*bitset.BitSet)
	
	getOrCreateBitSet := func(
		m map[missingDataKey]*bitset.BitSet,
		k missingDataKey,
	)*bitset.BitSet{
		if _, ok := m[k]; !ok {
			m[k] = &bitset.BitSet{}
		}
		return m[k]	
	}

	for txNum, missingData := range missingPvtData {
		for _, nsColl := range missingData {
			key := missingDataKey{
				nsCollBlk{
					ns:     nsColl.Namespace,
					coll:   nsColl.Collection,
					blkNum: committingBlk,
				},
			}

			switch nsColl.IsEligible {
			case true:
				getOrCreateBitSet(elgMissingDataEntries, key).Set(uint(txNum))

			default:
				getOrCreateBitSet(inelgMissingDataEntries, key).Set(uint(txNum))
			}
		}
	}

	return elgMissingDataEntries, inelgMissingDataEntries

Or repeating the code of the function getOrCreateBitSet inline for both cases.

Contributor Author:

I see. Okay. I will go with the latter (repeating the code), as it adds only 3 extra lines of code while avoiding the inline function.

func (p *oldBlockDataProcessor) getMissingDataFromEntriesOrStore(entries *entriesForPvtDataOfOldBlocks, nsCollBlk nsCollBlk) (*bitset.BitSet, error) {
if missingData, ok := entries.missingDataEntries[nsCollBlk]; ok {
return missingData, nil
func (p *oldBlockDataProcessor) getMissingDataFromEntriesOrStore(group elgMissingDataGroup, nsCollBlk nsCollBlk) (*bitset.BitSet, error) {
Contributor:

I feel that the code in this function can be made more readable with a single switch case that initializes variables; the rest of the flow is then common and simple.

	var entries map[nsCollBlk]*bitset.BitSet
	var dbKey []byte

	switch group {
	case prioritized:
		entries = p.entries.prioritizedMissingDataEntries
		dbKey = encodeElgPrioMissingDataKey(&missingDataKey{
			nsCollBlk: nsCollBlk,
		})

	case deprioritized:
		entries = p.entries.deprioritizedMissingDataEntries
		dbKey = encodeElgDeprioMissingDataKey(&missingDataKey{
			nsCollBlk: nsCollBlk,
		})
	}

Contributor Author:

I find the above a bit confusing because we need dbKey only if the entry is not present. I would prefer to keep the current code as-is.

Contributor:

Well, that is easy to solve by using an encodingFunc instead of dbKey, but two interleaving flows within a function is not a normal style, if avoidable, IMO.

Contributor Author:

encodingFunc also looks like a hack to me. I will split this function into two, as we are okay with a certain amount of code duplication.

Contributor:

Sure, I can live with this if you prefer duplication. I'll take that over the current form.

Comment on lines 314 to 329
entries := map[string]map[nsCollBlk]*bitset.BitSet{
"prioritized": e.prioritizedMissingDataEntries,
"deprioritized": e.deprioritizedMissingDataEntries,
}

if missingData.None() {
batch.Delete(key)
continue
}
for group, missingDataList := range entries {
for nsCollBlk, missingData := range missingDataList {
missingKey := &missingDataKey{
nsCollBlk: nsCollBlk,
}
switch group {
case "prioritized":
key = encodeElgPrioMissingDataKey(missingKey)
case "deprioritized":
key = encodeElgDeprioMissingDataKey(missingKey)
}
Contributor:

Just like at the other place, I think that trying to reuse the code in place has compromised readability. I believe that a separate function with parameters map[nsCollBlk]*bitset.BitSet and the encoding function may make it more readable... otherwise, if you prefer to duplicate the code, even that would be fine.

Contributor Author:

I will duplicate the code then.

Contributor:

OK, sure... all I wanted to avoid was the interleaving assignments or interleaving flows just to reuse code across the different structures.

},
}

deprioritizedMissingDataPeriodicity = 10
Contributor:

Since this is now a package-level variable, it is better to move it to TestMain. Otherwise, better to reset it in a defer statement.

Contributor Author:

Sure

Comment on lines 278 to 280
p := &oldBlockDataProcessor{
Store: store,
}
Contributor:

Unnecessarily created. You can get db directly from the store.

Contributor Author:

Forgot to clean up. Earlier, I used oldBlockDataProcessor to get the missing data entries.

ns1Coll2Blk1Tx2 = &dataKey{nsCollBlk: nsCollBlk{ns: "ns-1", coll: "coll-2", blkNum: 1}, txNum: 2}
ns3Coll2Blk1Tx2 = &dataKey{nsCollBlk: nsCollBlk{ns: "ns-3", coll: "coll-2", blkNum: 1}, txNum: 2}
for _, k := range keys {
encKey := encodeElgPrioMissingDataKey(&missingDataKey{k})
Contributor:

encodeElgDeprioMissingDataKey

Contributor Author:

good catch

require.False(t, testDataKeyExists(t, store, ns1Coll2Blk1Tx2)) // purged
require.True(t, testDataKeyExists(t, store, ns3Coll2Blk1Tx2)) // never expires
func TestCommitPvtDataOfOldBlocksWithDeprioritization(t *testing.T) {
sampleDataForTest := func() (map[uint64]*pvtDataForTest, ledger.MissingPvtDataInfo) {
Contributor:

What is the value of this function? It just returns static data.

Contributor Author:

True. I will move the code outside the function.

assertMissingDataInfo(t, store, missingDataSummary, 2)
}

func TestCommitPvtDataOfOldBlocksWithBTL(t *testing.T) {
Contributor:

To be honest, I haven't compared this test with its previous version because it changed significantly in this refactoring... so I had to look at it afresh, but I find it very hard to follow in places, primarily because one has to keep track of the flow and correlate it with the current state of the store while keeping in mind the different elements of the sample data.

For instance, I didn't find coverage of purging the deprioritized list when it is present before the expiry (I see only the one that is added after the expiry). Similarly, deletion of prioritized data is not covered. I feel that this test should be broken into different test cases, perhaps along the lines of what you have done in TestCommitPvtDataOfOldBlocksWithDeprioritization.

Contributor Author:

Purging is tested as part of TestExpiryDataNotIncluded() -- an existing test. That might require refactoring, but I am not going to do it in this PR.

Contributor:

In that case, I don't understand the value of the code from line 163 through 238. If I get you right, all you wanted to test is that the data supplied to the call store.CommitPvtDataOfOldBlocks(oldBlocksPvtData, deprioritizedList) at line 270 should be kind of a noop if the data arrived after its expiry. Did I miss something?

Contributor Author:

This is present in the existing test as well. All I did was split TestCommitPvtDataOfOldBlocks into TestCommitPvtDataOfOldBlocks and TestCommitPvtDataOfOldBlocksWithBTL and refactor, as the existing test was hard to read.

@manish-sethi (Contributor) commented Sep 1, 2020:

This is present in the existing test as well

Don't get me wrong, but IMO that does not justify the refactoring that has taken place. To me, this still looks unreadable, and it is hard to get what this test tries to achieve... It may look like an improved version to you because you just wrote it and it is fresh in your mind, but I found it as cryptic as before.

Contributor Author:

Agree. To me, it reads better than the previous version. I did multiple iterations and came up with this. I am out of ideas here. If you have a better way to organize this test, please feel free to push a PR.

Contributor:

Maybe you missed it in the previous comment; it does not require a drastic change... just remove the unnecessary code.

  • Setup part - just do fake commits for a few blocks (with no private data)
  • API call - a single call to store.CommitPvtDataOfOldBlocks(oldBlocksPvtData, deprioritizedList)
  • Verification - the API side-effect is a noop

Signed-off-by: senthil <cendhu@gmail.com>
Signed-off-by: senthil <cendhu@gmail.com>
@manish-sethi (Contributor) left a comment

Thanks @cendhu for your patience on this one. Adding the Jira for reference - https://jira.hyperledger.org/browse/FAB-15810

@manish-sethi manish-sethi merged commit 98e706c into hyperledger:master Sep 1, 2020
@cendhu (Contributor Author) commented Sep 1, 2020

Thanks @cendhu for your patience on this one. Adding the Jira for reference - https://jira.hyperledger.org/browse/FAB-15810

Thanks, Manish, for the merge. As long as the code looks good at the end, there is no issue with iterating on the code as many times as needed (even if we lose patience in between 😄).

cendhu added a commit to cendhu/fabric that referenced this pull request Sep 8, 2020
* deprioritize unreconcilable missingPvtData

Signed-off-by: senthil <cendhu@gmail.com>

* address review comment

Signed-off-by: senthil <cendhu@gmail.com>

* add review comments

Signed-off-by: senthil <cendhu@gmail.com>

* address review comments

Signed-off-by: senthil <cendhu@gmail.com>
cendhu added a commit to cendhu/fabric that referenced this pull request Sep 9, 2020
denyeart pushed a commit that referenced this pull request Sep 16, 2020