While building tools to create a DB from slog data, I discovered that we aren't incrementing the crankNum counter for the BOYD we do just before a heap-snapshot write. The first example of this on mainnet is in the bootstrap block, where we create the first heap-snapshot on v1-bootstrap:
The first delivery to v1 was a startVat on crankNum: 6. The second was the delivery of the bootstrap method on crankNum: 11. The "initial snapshot threshold" is 2, which means the bootstrap delivery triggers a heap snapshot sequence. That sequence starts by doing a BOYD delivery, and then writes out the snapshot.
The heap snapshot sequence code did not increment crankNum first, so the slog entries for the BOYD also get crankNum: 11.
This will happen every time we trigger the heap snapshot sequence, which we do both at the initial threshold after an incarnation is launched, and then every interval deliveries afterwards. Both the threshold and the interval are configurable as kernel options, but they default to 2 and 100 respectively.
The fix will probably be for code in processUpgradeVat to call kernelKeeper.incrementCrankNumber() just after it calls deliverAndLogToVat.
In the meantime, any tools we build to analyze slogfiles must be prepared to handle duplicate cranknums. I'm not sure how they should do that.
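One option (a minimal sketch, assuming slog delivery entries are JSON lines carrying type, crankNum, vatID, and deliveryNum fields; the exact entry shape may differ): key deliveries on the (crankNum, vatID, deliveryNum) triple instead of on crankNum alone, so the BOYD and the triggering delivery stay distinct even though they share a crankNum.

```js
// Sketch of a slogfile indexer that tolerates duplicate crankNums by keying
// each delivery on (crankNum, vatID, deliveryNum) rather than crankNum alone.
// The field names used here are assumptions about the slog entry shape.
import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';

async function indexDeliveries(slogPath) {
  const deliveries = new Map(); // "crankNum/vatID/deliveryNum" -> slog entry
  const lines = createInterface({ input: createReadStream(slogPath) });
  for await (const line of lines) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line);
    if (entry.type !== 'deliver') continue;
    const key = `${entry.crankNum}/${entry.vatID}/${entry.deliveryNum}`;
    if (deliveries.has(key)) {
      throw Error(`unexpectedly saw delivery ${key} twice`);
    }
    deliveries.set(key, entry);
  }
  return deliveries;
}
```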
Hm, my writeup said processUpgradeVat, but I think I meant something else. It does kinda depend upon our definition of a crank. I generally make a distinction between kernel cranks, and vat deliveries. Each vat delivery happens inside a single kernel crank, but not all kernel cranks involve vat deliveries.
During vat upgrades, we'll perform two deliveries: one BOYD to the old worker, then a startVat to the new worker. I think that's ok: we don't need separate crank numbers for them (and it would look weird to have a crank-finish + crank-start in the middle of the processUpgradeVat crank).
The issue I was concerned about is when we take a heap snapshot, which only happens outside of vat upgrades. We make a normal delivery that increments our snapshot counter beyond snapshotInterval (or, once #8980 is done, that increments reapDirt beyond one of the reapDirtThreshold limits), then we deliver a BOYD, then we write out a snapshot, then we load a new worker from that snapshot. The weirdness is that both the normal delivery and the BOYD appear in the same crank.
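Here's a toy model of that ordering (not the real kernel code; everything here is an illustrative placeholder except the 'bringOutYourDead' delivery type), just to show why the BOYD's slog entries share a crankNum with the delivery that crossed the threshold:

```js
// Toy model of the crank-numbering behavior described above.
let crankNum = 0;
let deliveriesSinceSnapshot = 0;
const snapshotInitial = 2; // the "initial snapshot threshold"
const slog = [];

function slogDelivery(vatID, what) {
  slog.push({ type: 'deliver', crankNum, vatID, what });
}

function runDeliveryCrank(vatID, what) {
  slogDelivery(vatID, what);
  deliveriesSinceSnapshot += 1;
  if (deliveriesSinceSnapshot >= snapshotInitial) {
    // heap-snapshot sequence: BOYD + snapshot write, with no crankNum bump,
    // so the BOYD is slogged under the same crankNum as the trigger above
    slogDelivery(vatID, 'bringOutYourDead');
    deliveriesSinceSnapshot = 0;
  }
  crankNum += 1; // the counter only advances here, after the whole sequence
}

runDeliveryCrank('v1', 'startVat');
runDeliveryCrank('v1', 'bootstrap');
console.log(slog.map(e => `${e.crankNum}: ${e.vatID} ${e.what}`).join('\n'));
// 0: v1 startVat
// 1: v1 bootstrap
// 1: v1 bringOutYourDead   <- shares crankNum with the bootstrap delivery
```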
I'm not positive I feel the need to change that. It's less justifiable than the processUpgradeVat case, since vat-upgrade is fairly indivisible, whereas doing an extra BOYD could conceivably be treated as an extra crank (e.g. if crossing reapDirtThreshold pushed the vatID onto a high-priority queue, so the very next crank performs a BOYD and a heap snapshot save+load cycle). That wouldn't be a bad way to implement this, and would split the deliveries up into separate cranks, but it's not how we're currently implementing it. (Currently, the kernel calls vatWarehouse.maybeSaveSnapshot() as part of processing the CrankResults, just before processing decrementReapCount.)
There are currently about 45 lines between maybeSaveSnapshot and the code that increments the crank number and slogs the crank-finish line. If we really wanted to treat this as two separate cranks, we'd need a second (conditional) incrementCrankNumber, along with an extra crank-finish and then a crank-start to match, and we'd want to cycle the activityhash too... all of which sounds messy.
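Roughly, the two-crank variant would have to look something like this sketch (only incrementCrankNumber, maybeSaveSnapshot, and the crank-start/crank-finish slog events are real names from the discussion above; the rest are placeholders, not the kernel's actual API):

```js
// Hedged sketch of treating the BOYD + snapshot save/load as its own crank.
// kernelKeeper, kernelSlog, and vatWarehouse stand in for the real kernel
// objects; crankFinish/crankStart/cycleActivityHash are invented placeholders.
async function maybeSnapshotAsSeparateCrank(kernelKeeper, kernelSlog, vatWarehouse) {
  // close out the crank that held the triggering delivery
  kernelSlog.crankFinish();
  kernelKeeper.incrementCrankNumber();
  kernelKeeper.cycleActivityHash();
  // open a second crank to hold the BOYD and the heap-snapshot save/load
  kernelSlog.crankStart();
  await vatWarehouse.maybeSaveSnapshot();
  // ...and then the normal end-of-crank path would close this one too
}
```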
So I'm going to put this on the back burner, until/unless we decide to change the way maybeSaveSnapshot works (perhaps as part of the snapshot-based-on-computrons part of #6786).