feat(async-flow): asyncFlow #9097
Conversation
OMG it's ready for review!
The biggest issue I have is with how host vow fulfillment is handled. I think there is a big problem with the current approach, but maybe I missed something. Let's chat.
Staged on #9097. closes: #XXXX refs: #9231 #9321 #9329 #9097

Description
Now that `watchPromise` is abstracted over zones and vows make use of that, we can simplify the test cases a lot. Should be a pure refactor with no externally observable effect.

Security Considerations
none

Scaling Considerations
none

Documentation Considerations
none

Testing Considerations
tests simpler. Otherwise, none

Upgrade Considerations
none
GitHub won't let me request changes of myself, but me, please fix this.
Audited the membrane for the `stopped` state check. I think we could add an extra check in the guest call handler to avoid relying on the bijection being reset.
In the process of reviewing the tests, I realized the `wakeAll` behavior was not quite what I expected. Given the change to the data model, I think this needs to be addressed before the first release, either now or in a closely following PR.

Furthermore, in the Upgrade Considerations, and maybe somewhere in the documentation, we should mention that because of metering, an activation that executed successfully in a previous incarnation might not replay correctly, even if it doesn't cause any explicit side effects. That's because metering is a hidden side effect of any execution.
The tests look mostly good. I'm a little concerned that the waking behavior on a new incarnation is not quite tested right, in particular whether the durable vow watcher kicks in when a flow is not failed and not eagerly waking.
Conditional approval on tweaking this if easily doable (I think it should be, now that we fixed the auto-waking behavior).
closes: #9302
refs: #9125, #9126, #9153, #9154, #9280
Description
Upgrade while suspended at `await` points! Uses a membrane to log and replay everything that happened before each upgrade.

In the first incarnation, somewhere, define the flow using a closed async function argument.
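A minimal sketch of that first-incarnation setup. It assumes an `asyncFlow` helper from this PR's new package and a durable `zone` are already in scope; the tag `'SettleDeal'`, the guest parameters, and the body are illustrative, not taken from the PR:

```js
// Hedged sketch: `asyncFlow` and the durable `zone` are assumed to be in
// scope; the tag and the guest function body are made up for illustration.
const wrapperFunc = asyncFlow(
  zone,
  'SettleDeal', // tag identifying this flow across incarnations
  // The guest async function should be "closed": it must not capture direct
  // access to mutable state or effect-causing abilities from its lexical scope.
  async (deal, payments) => {
    const proceeds = await deal.settle(payments);
    return proceeds;
  },
);
```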
Then elsewhere, as often as you'd like, invoke the returned wrapper.
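Each invocation creates a new activation; the host gets back a vow for the guest function's eventual outcome (argument names are again illustrative):

```js
// Start a new activation of the flow; `outcomeVow` settles with the guest
// function's result (or rejection) once the flow completes.
const outcomeVow = wrapperFunc(deal, payments);
```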
For all these `asyncFlow` calls that happened in the first incarnation, make the same calls again in the first crank of all later incarnations, with async functions that reproduce the original's logged behavior (a sketch follows this paragraph). In these later incarnations, you only need to capture the returned `wrapperFunc` if you want to create new activations. Regardless, the old activations continue.
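A hedged sketch of that replay-time call: in the first crank of a later incarnation, the same tag is re-associated with an async function that reproduces the logged behavior; the names reuse the illustrative ones above and are not from the PR.

```js
// Later incarnation, first crank: re-register the flow under the same tag
// with behavior equivalent to what the earlier incarnation logged.
// The return value may be ignored unless new activations are wanted.
asyncFlow(
  zone,
  'SettleDeal', // must match the tag used in the first incarnation
  async (deal, payments) => {
    const proceeds = await deal.settle(payments);
    return proceeds;
  },
);
```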
Future Growth
I designed this PR so it could grow in the following ways:
TODO: The membrane should use the `HandledPromise` API to make proper remote presences and handled promises, so that the guest function can use `E` on objects or promises it receives from the host as expected. I commented out the additional ops needed for these: `doSend` and `checkSend`.

TODO: Currently, I assume that the host side only presents vows, not promises. However, imported remote promises can be stored durably, and be durably watched, so the membrane should be extended to handle those.

TODO: We currently impose the restriction that the guest cannot export guest-created remotables or promises to the host. (It can pass back to the host remotables or promises it received from the host.) I commented out the additional ops needed for these: `doCall`, `checkReturn`, and `checkThrow`. I wrote the `bijection` and `equate` subsystems so that old durable host wrappers can be hooked back up on replay to the corresponding new guest remotables and promises.

Security Considerations
Nothing enforces that the argument async function is closed, i.e., does not capture (lexically "close over") any direct access to mutable state or ability to cause effects. If it does, it still cannot harm anything but itself. But it, or its replays, may be more confusable, and so more vulnerable to confusion attacks.
Since this entire framework itself is written as a helper (well, a huge helper) with no special privilege, it cannot be used to do anything that could not have otherwise been done. It is not a source of new authority.
See caveats in Upgrade Considerations about isolation of effects following a replay failure.
Scaling Considerations
We assume that the total number of async functions, i.e., calls to `asyncFlow`, is low cardinality. This is essential to the design, in exactly the same sense as our assumption that exoClasses are low cardinality. The RAM cost is proportional to the number of these.

The current implementation further assumes that the total number of activations of these replayable async functions is low cardinality. This will likely become a scaling problem at some point, but is unlikely to be painful for the initial expected use cases.
The current implementation imposes two simplifying restrictions that allow us to relieve some of this memory pressure: Currently, the async function argument cannot make and export new remotables or promises to the host. Thus, once its outcomeVow is settled, its job is done. There is very little more it can do that is observable. Thus, once this outcome is settled, the activation is shut down and most of the memory it held onto is dropped.
Activations that are not shut down must replay from the beginning in each incarnation. If a given activation has a long history of past activity, this can become expensive.
How do we verify in CI that, both while an asyncFlow is in use and after it has completed, resource usage in RAM and on disk meets our expectations?
The PR assumes low cardinality of asyncFlows. What scale is "low cardinality": is it 10^3, 10^5? What is the risk if the cardinality is too high?

Documentation Considerations
For the normal developer documentation, `asyncFlow` should make things simpler and more like "just JavaScript". The membrane translates between host vows and guest promises, so the async function can simply `await` promises without needing the `when` from `@agoric/vow`.
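A small hedged illustration of that contrast; the `ledger` facet and its `getBalance` method are invented for this example:

```js
// Outside the flow, host code consuming a vow would need `when` from
// '@agoric/vow'. Inside the guest async function, the membrane presents the
// same vow as an ordinary promise, so a plain `await` works.
const guestFunc = async ledger => {
  const balance = await ledger.getBalance(); // no `when` needed here
  return balance;
};
```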
Testing Considerations
This PR is structured as a tower of building blocks, each of which I unit tested as I went, in bottom-up order, to build with confidence. Currently, each of these building blocks is also very paranoid about internal consistency checking, so I'd get early indications if I made a mistake. Probably some of this internal consistency checking can be reduced over time, as we gain more static confidence.
This PR is currently using the fake upgrade testing framework from the `@agoric/zone` package. This caused bug #9126. Instead, we need to redo all these tests with a real upgrade testing framework, like the one in bootstrapTests. See https://github.com/Agoric/agoric-sdk/blob/master/packages/boot/test/bootstrapTests/test-zcf-upgrade.ts

Upgrade Considerations
The point.
In a reviving incarnation, if the async function argument of an `asyncFlow` call fails to recapitulate the logs of each of its activations, those activations will not have done any damage. They will simply be stuck, with a diagnostic available.
To unstick these, so those stuck activations can continue to make progress, upgrade again using an async function argument that does reproduce the logged behavior of each of its activations.
Caveat: Imperfect isolation of effects following a replay failure
Once a replay failure is detected, we attempt to isolate the replaying activation from its outside world, and also to shut down its further execution as soon as possible. But it may be in the midst of activity with a non-empty stack. Following a replay failure, we have no way to preemptively terminate it without any further execution of the activation. This further execution may therefore be arbitrarily confused. We simply try to isolate it as much as possible, immediately revoking all access it had through the membrane to all authority to cause observable effects. However, we do not consider `console` logging activity to be an observable effect. Such logs might be caused by diagnostics emitted by this framework in response to its "isolated" confused behavior.

Because we do not consider `console` logging to be an observable effect, we also allow it as an exception to our closed function rule. Messages the function sends directly to the console are not logged, and can differ without causing replay failure. During its post-replay-failure confused execution, it can still directly log to the console.