-
I've been trying to figure out the semantics of state machine event handlers: are they executed atomically once triggered? IOW: does P create new state sequences within the execution of an event handler, or are new state sequences only introduced when a new event is processed (received) or a message is sent? The wording of the docs at https://p-org.github.io/P/manual/statemachines/ does sound like handlers are atomic.
The example at P/Tutorial/1_ClientServer/PSrc/Server.p, line 26 in 8470c46, both updates the database and sends a response to the client. If that is executed atomically, including event delivery, then I understand how the intended guarantee can hold. However, that would actually require a transactional messaging protocol able to commit the database update and the message delivery to the client in one distributed transaction. Is this example simply not modeling at that level of detail? More importantly, how would I model a non-atomic database update and client response? If event handlers always run atomically, would I need to turn each of those two parts into its own state machine? So essentially, I am looking for the counterpart of "labels" in TLA+:
https://www.learntla.com/intro/conceptual-overview.html?highlight=label#specifications
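To make sure we're talking about the same thing, the shape of the handler I mean is roughly this (paraphrased with made-up event/type names, not the actual tutorial source):

```
// Hypothetical names; the point is that the "database" update and the reply
// to the client both live inside ONE event handler.
type tUpdateReq = (client: machine, key: int, value: int);
event eUpdateReq: tUpdateReq;
event eUpdateResp: bool;

machine Server {
  var store: map[int, int];

  start state Serve {
    on eUpdateReq do (req: tUpdateReq) {
      store[req.key] = req.value;           // (1) update the "database"
      send req.client, eUpdateResp, true;   // (2) reply to the client
    }
  }
}
```

If P executes that handler as one indivisible step, I don't see what could ever be observed between (1) and (2).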
-
fwiw, the way I look at this: if I want to model realistically (e.g. a backend DB transaction is atomic, but does not span a client message delivery), OR if I want to model an implementation in a "modern" programming language supporting asynchronous programming (e.g. with futures/deferreds/promises or coroutines), then each await/yield point must be modeled as potentially branching into a new state sequence. Having a separate state machine for each asynchronous synchronization/branching point seems "wrong" .. that's my question - is it wrong? Do I fundamentally misunderstand P?
-
Hi @oberstet, this is an excellent question. A way to think about it is that each state machine executes concurrently, and the only shared state in the system (for which there can be race conditions) is the inboxes, i.e. the message buffers. So, when any event handler in any state machine is executed, it runs to completion atomically; the checker only interleaves at the granularity of whole handler executions and the order in which queued events are dequeued. Does this help?
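A minimal sketch of where the nondeterminism actually lives (made-up names, test/module declarations omitted):

```
event eWork: int;

machine Worker {
  var total: int;
  start state Run {
    on eWork do (x: int) {
      // once started, this body runs to completion; no other handler interleaves
      total = total + x;
    }
  }
}

machine Producer {
  start state Init {
    entry (w: machine) {
      send w, eWork, 1;
    }
  }
}

machine Driver {
  start state Init {
    entry {
      var w: machine;
      w = new Worker();
      new Producer(w);
      new Producer(w);
      // the only race: the order in which the two eWork events land in
      // (and are dequeued from) Worker's inbox
    }
  }
}
```

The checker explores the possible arrival orders of the eWork events, but it never splits a handler body in the middle.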
-
yes, that clarifies the semantics for me, thanks! I can see the systematics/symmetry in this design. unfortunately, it's what I suspected :( the consequence is: in an asynchronous program, every yield/await is a scheduling point, an independently scheduled continuation. the entry point of such a task/coroutine is a scheduling point, as is the return point. while the latter two map nicely to state machine handler entry and send calls, the former is more general: asynchronous continuation contexts can be nested, and this would have to be mapped to one state machine per asynchronous continuation context plus message passing for each yield/await. e.g. in the "1_ClientServer" tutorial example, one would need additional state machines to separate the DB transaction from the sending of the client reply, and both from consuming a request from the inbox of the main server FSM. practically, with such an application, stuff like this can happen:
anyways. it'll depend, I guess. however, looking at it from this "async app programming" perspective, I am wondering: what I would really need to exhaustively explore all possible state sequences of an async single-threaded app implementation is to automatically treat every future/promise/deferred/coroutine as a scheduling point. those are exactly the branching points in the state sequence of such a program - wrt the event loop / reactor (the "green threads" scheduler), not the OS, of course. OS scheduling adds more branch points.
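e.g., one way I could imagine modeling the non-atomic variant: split the server across states and put the DB into its own machine (hypothetical names, untested, building on the event/type declarations from my sketch above):

```
// Builds on tUpdateReq / eUpdateReq / eUpdateResp from my sketch above.
type tDbWrite = (origin: machine, key: int, value: int);
event eDbWrite: tDbWrite;
event eDbAck;

machine Database {
  var store: map[int, int];
  start state Run {
    on eDbWrite do (w: tDbWrite) {
      store[w.key] = w.value;
      send w.origin, eDbAck;      // ack the write back to whoever asked
    }
  }
}

machine ServerSplit {
  var db: machine;
  var pending: machine;           // client waiting for a reply

  start state Init {
    entry {
      db = new Database();
      goto Serve;
    }
  }

  state Serve {
    on eUpdateReq do (req: tUpdateReq) {
      pending = req.client;
      send db, eDbWrite, (origin = this, key = req.key, value = req.value);
      goto WaitForDb;             // the reply happens in a later handler
    }
  }

  state WaitForDb {
    defer eUpdateReq;             // hold further requests until the ack arrives
    on eDbAck do {
      send pending, eUpdateResp, true;
      goto Serve;
    }
  }
}
```

now other things can happen between the DB write and the client reply, which is closer to the real system, but it is noticeably more machinery than a single await in application code.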
-
I should probably add: I am mostly socialized in the context of monolithic kernels with mostly blocking system calls, then using a green-threads user-space scheduler with async/await or the like, and then building a distributed app from instances of such a program. IOW: never used microkernels, and though I have used purely message-passing systems such as Erlang/OTP and purely functional (monadic IO) languages in the past, I have mostly moved to the async/await style, which "seems more natural". Yeah, I know, I am not believing hard enough ;) If I try very, very hard, I can think about programs as just sets of assertions. Well, it seems my patience for such brain puzzles gets smaller the older I get ;) Doesn't feel natural. For me. Anyways, just my 2cts as a regular programmer ..
-
just rethinking this: actually, one should be able to create a way to yield control in the middle of an event handler, inserting a branching point into the non-deterministic state sequence at that spot.
-
You can, via the receive statement: https://p-org.github.io/P/manual/statements/#receive. Not sure if this is what you are hinting at in your last message?
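Roughly (hypothetical event names, inside some event handler):

```
// send a request, then block this machine until the matching reply arrives
send someMachine, eFoo;
receive {
  case eBar: (resp: int) {
    // the handler continues here only once eBar has been dequeued
  }
}
```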
thanks for keeping this open and moving it to a discussion - yeah, absolutely, this isn't "an issue", but a discussion about P and its semantics.
yes, that was what I was hinting at: yielding control, and thus inserting a branching point into the non-deterministic state sequence. however, calling `receive` within a state machine event handler will block until there is a matching message, I guess. calling `send` would not, as message queues are of unlimited size. I don't fully understand the semantics yet, e.g. whether it is possible to yield control within an event handler, and since an event handler can have local variables, this requires saving the local ex…
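concretely, something like this is what I have in mind (hypothetical names, untested, reusing the Database sketch from above); a local variable declared before the `receive` would still be in scope after it:

```
// Reuses tUpdateReq / eUpdateReq / eUpdateResp / eDbWrite / eDbAck / Database
// from my earlier sketch.
machine ServerWithReceive {
  var db: machine;

  start state Init {
    entry {
      db = new Database();
      goto Serve;
    }
  }

  state Serve {
    on eUpdateReq do (req: tUpdateReq) {
      var who: machine;           // handler-local variable ...
      who = req.client;
      send db, eDbWrite, (origin = this, key = req.key, value = req.value);
      receive {                   // blocks this machine until eDbAck arrives
        case eDbAck: {
          send who, eUpdateResp, true;   // ... still in scope after the receive
        }
      }
    }
  }
}
```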