Releases: mkremins/winnow

AIIDE 2021 Artifact

09 Aug 23:59

This release marks the version of Winnow described in the accepted AIIDE 2021 paper "Winnow: A Domain-Specific Language for Incremental Story Sifting", by Max Kreminski, Melanie Dickinson, and Michael Mateas.

Winnow is a domain-specific language for story sifting, or automatically identifying interesting microstories that have emerged in large corpora of simulation or game data. To use Winnow, you first write one or more Winnow sifting patterns: small programs that describe the kinds of microstories you're interested in finding. You then execute these Winnow sifting patterns against a DataScript database containing your storyworld data.

An example Winnow sifting pattern can be found in this repository's README. More can be found on the tests page and in the Winnow paper.

All Winnow sifting patterns can be run in two different modes:

  1. The incremental mode. In this mode, your code maintains a pool of partial sifting pattern matches and uses the Winnow API to update this pool as new events arrive. This mode allows you to detect and reason about emerging microstories that haven't yet run all the way to completion, as well as to detect when one of these partial matches finally does turn into a complete match.
  2. The retrospective mode. In this mode, your code uses the Winnow API to compile a Winnow sifting pattern down to a simpler Felt sifting pattern. The resulting Felt pattern can then be used to identify emergent microstories that have already occurred.

The incremental mode is Winnow's flagship feature and its key contribution over previous approaches to story sifting, so we'll focus primarily on demonstrating how to evaluate the incremental mode here.
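The core idea behind the incremental mode, a pool of partial matches that fork into more advanced matches as events arrive, can be illustrated with a small Python sketch. Everything here (the `PartialMatch` class, `update_pool`, and the toy clause functions) is hypothetical and greatly simplified; it is not the actual Winnow API, only an illustration of the fork-on-match behavior described above.

```python
# Hypothetical sketch of incremental sifting (NOT the actual Winnow API).
# A pattern is modeled as a sequence of clause functions; a partial match
# records the clauses still unmatched and the variable bindings
# accumulated so far.

from dataclasses import dataclass, field

@dataclass
class PartialMatch:
    pattern: list                          # remaining clauses to satisfy
    bindings: dict = field(default_factory=dict)

    @property
    def complete(self):
        return not self.pattern

def update_pool(pool, event):
    """Fork a new, more advanced match whenever an event satisfies the
    next unmatched clause of an existing partial match. The parent match
    stays in the pool, so it can still match later events."""
    forked = []
    for match in pool:
        if match.complete:
            continue
        new_bindings = match.pattern[0](event, match.bindings)
        if new_bindings is not None:
            forked.append(PartialMatch(match.pattern[1:],
                                       {**match.bindings, **new_bindings}))
    return pool + forked

# Toy two-clause pattern: a guest enters town, then that guest is insulted.
def arrives(event, b):
    if event["type"] == "enterTown":
        return {"?e1": event["id"], "?guest": event["actor"]}
    return None

def insulted(event, b):
    if event["type"] == "insult" and event["target"] == b["?guest"]:
        return {"?e2": event["id"]}
    return None

pool = [PartialMatch([arrives, insulted])]
pool = update_pool(pool, {"id": 6, "type": "enterTown", "actor": 1})
pool = update_pool(pool, {"id": 7, "type": "insult", "actor": 2, "target": 1})
complete = [m for m in pool if m.complete]
print(complete[0].bindings)  # → {'?e1': 6, '?guest': 1, '?e2': 7}
```

Note that real Winnow patterns also handle match death (as in step 6 of the walkthrough below), which this sketch omits.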

Running the benchmark

You can use the benchmark page to replicate the incremental sifting benchmark results we report in our AIIDE 2021 paper. Load the page in your web browser, open the browser console, and press the "Run benchmark" button. The page may appear to hang as the benchmark runs, but the console should continue to display progress regardless. When the benchmark is complete, a table of results will be added to the page.

This benchmark repeatedly initializes a pool of N partial sifting pattern matches and pushes 100 random events onto the database, updating the partial match pool at each step. We track the per-event time taken to update the partial match pool and report the minimum, maximum and average per-event pool update times for each value of N. The following values of N are used: 10, 50, 100, 500, 1000.
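The benchmark's timing loop can be sketched roughly as follows. This is hypothetical Python with a trivial stand-in for the real pool update; the actual benchmark runs in the browser against the Winnow API.

```python
# Rough sketch of the benchmark's timing loop (hypothetical; the real
# benchmark measures Winnow's pool updates, not this stand-in).
import random
import time

def update_pool(pool, event):
    # Stand-in for the real (and much more expensive) pool update.
    return [m + 1 for m in pool]

def benchmark(n_matches, n_events=100):
    """Initialize a pool of n_matches partial matches, push n_events
    random events, and time each per-event pool update."""
    pool = [0] * n_matches
    times = []
    for _ in range(n_events):
        event = {"type": random.choice(["greet", "insult", "depart"])}
        start = time.perf_counter()
        pool = update_pool(pool, event)
        times.append(time.perf_counter() - start)
    return min(times), max(times), sum(times) / len(times)

for n in (10, 50, 100, 500, 1000):
    lo, hi, avg = benchmark(n)
    print(f"N={n}: min={lo:.2e}s max={hi:.2e}s avg={avg:.2e}s")
```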

Running the incremental execution test

At the top of the tests page (under the heading "Sifting Tests") is an interactive visual example of incremental sifting pattern execution. This example allows you to step through a predefined sequence of storyworld events at your own pace, and to observe how these events are incrementally matched against several Winnow sifting patterns.

On the left side of this example (under the "Events" subheading) is a list of story events, which will be pushed onto the database in order as you press the "Step" button. On the right side of this example is a visualization of a partial match pool, which initially contains four empty partial matches: one for each of the four sifting patterns (violationOfHospitality, twoImpulsiveBetrayals, romanticFailureThenSuccess, and criticismOfHypocrisy) defined lower on the tests page.

To run this example, first press the "Step" button once. This will push the first event in the Events list onto the database and update the partial match pool. Because this first event matches the first clause of our violationOfHospitality sifting pattern, a new partial match for this pattern will appear in the pool, next to the empty parent match from which it was forked. Additionally, this new partial match will contain appropriate bindings for the ?e1 and ?guest logic variables mentioned in the first clause of the violationOfHospitality sifting pattern: the ?e1 variable will be bound to the numerical ID of the initial enterTown event (i.e., 6), while the ?guest variable will be bound to the numerical ID of that event's actor (i.e., 1).

To continue running the example, repeatedly press the "Step" button to add each event to the database. At each step, you can observe how the partial match pool changes and compare this behavior to the sifting pattern definitions to verify that each event causes the changes you would expect:

  1. Creates a new partial match for the violationOfHospitality pattern, with bindings for the ?e1 and ?guest variables.
  2. Does nothing.
  3. Creates a new, more advanced partial match for the violationOfHospitality pattern, with bindings for the ?e2 and ?host variables on top of the earlier ?e1 and ?guest bindings.
  4. Does nothing.
  5. Creates a new, complete match for the violationOfHospitality pattern, with a binding for the ?e3 variable on top of all the earlier bindings.
  6. Kills the two remaining partial matches for the violationOfHospitality pattern, since the guest has now left town.
  7. Does nothing, since all the violationOfHospitality pattern matches involving these characters are now either dead or complete.

Note that in the partial match pool visualization, we use the following colors to indicate matches that have changed or updated in certain ways:

  • Blue partial matches have just been forked from a parent match, but are not yet complete.
  • Green matches are complete, so new events no longer have to be tested against them.
  • Red matches are dead, so new events no longer have to be tested against them.

Running the compilation tests

The compilation tests run automatically when you first load the tests page. When you scroll down to the "Compiler Tests" heading, you can see four examples of Winnow sifting patterns (on the left side of the page) and roughly equivalent Felt sifting patterns (on the right side of the page). Each of these Felt patterns is the result of parsing and compiling the Winnow pattern to its left via the Winnow-to-Felt compilation API.

Running the benchmark and tests locally

If you want to further inspect or modify the benchmark or tests, you can also run them locally. To do this, first download a copy of the Winnow repository. Then navigate to the root of the repository in your terminal and launch a local web server. For instance, with Python 3 installed (on any platform), you can launch a local server via the following command:

python3 -m http.server

From here, you can access a local copy of both the benchmark page and the tests page in your web browser by navigating to the following URLs: