feat(twap): prune records over multiple blocks #7427

Merged
merged 11 commits into main from adam/prune-twap-multiple-blocks on Feb 9, 2024

Conversation

@czarcas7ic (Member) commented Feb 8, 2024

Closes: #XXX

What is the purpose of the change

[pprof screenshot before the change: Screenshot 2024-02-07 at 6 50 59 PM]

As shown by the above pprof, we spend almost 30s of the epoch deleting entries. After adding print lines, we saw that the majority of these deletes come from the TWAP module. This pruning can safely be rate-limited and spread over multiple blocks instead of being done entirely at epoch time.

This PR hardcodes a limit of 1000 entries pruned per block. We also do the pruning in EndBlock instead of AfterEpochEnd.
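At a high level, the flow after this change is roughly the sketch below (simplified, not the literal diff; GetPruningState is assumed here as the read counterpart of the SetPruningState call discussed in the review comments):

package twap

import (
    sdk "github.com/cosmos/cosmos-sdk/types"
)

// NumRecordsToPrunePerBlock bounds how many TWAP state entries are deleted in a
// single block (hardcoded to 1000 in this PR).
var NumRecordsToPrunePerBlock uint16 = 1000

// EndBlock continues an in-flight pruning pass, deleting at most
// NumRecordsToPrunePerBlock entries and persisting where it left off so the
// next block can resume.
func (k Keeper) EndBlock(ctx sdk.Context) {
    state := k.GetPruningState(ctx) // assumed getter
    if !state.IsPruning {
        return
    }
    // Deletes up to the per-block limit, then either updates state.LastKeySeen
    // (more work remains) or flips IsPruning back to false.
    if err := k.PruneRecordsBeforeTimeButNewest(ctx, state); err != nil {
        ctx.Logger().Error("error pruning twap records at end block", "error", err)
    }
}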

Testing and Verifying

[pprof screenshot after the change: Screenshot 2024-02-07 at 8 54 04 PM]

The profile shows a reduction in delete time from 30s to 7s.

Documentation and Release Note

  • Does this pull request introduce a new feature or user-facing behavior changes?
  • Changelog entry added to Unreleased section of CHANGELOG.md?

Where is the change documented?

  • Specification (x/{module}/README.md)
  • Osmosis documentation site
  • Code comments?
  • N/A

@czarcas7ic czarcas7ic added the V:state/breaking State machine breaking PR label Feb 8, 2024
@github-actions github-actions bot added the C:x/twap Changes to the twap module label Feb 8, 2024
Comment on lines -198 to -208
// pruneRecords prunes twap records that happened earlier than recordHistoryKeepPeriod
// before current block time while preserving the most recent record before the threshold.
// Such record is preserved for each pool.
// See TWAP keeper's `pruneRecordsBeforeTimeButNewest(...)` for more details about the reasons for
// keeping this record.
func (k Keeper) pruneRecords(ctx sdk.Context) error {
    recordHistoryKeepPeriod := k.RecordHistoryKeepPeriod(ctx)

    lastKeptTime := ctx.BlockTime().Add(-recordHistoryKeepPeriod)
    return k.pruneRecordsBeforeTimeButNewest(ctx, lastKeptTime)
}
@czarcas7ic (Member, Author):

This helper func is no longer needed since we determine the lastKeptTime at epoch end and store in a state entry.
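Concretely, the epoch-end side now looks roughly like the sketch below (hook signature simplified; only the identifiers quoted elsewhere in this PR are real):

// AfterEpochEnd no longer deletes anything; it only records what to prune and
// from where, and EndBlock works through it incrementally.
func (k Keeper) AfterEpochEnd(ctx sdk.Context) {
    lastKeptTime := ctx.BlockTime().Add(-k.RecordHistoryKeepPeriod(ctx))
    k.SetPruningState(ctx, types.PruningState{
        IsPruning:    true,
        LastKeptTime: lastKeptTime,
        // Start iterating the time index from lastKeptTime.
        LastKeySeen: types.FormatHistoricalTimeIndexTWAPKey(lastKeptTime, 0, "", ""),
    })
}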

@ValarDragon (Member), Feb 8, 2024:

Seems like we should still keep this function so we can unit test it? (Just move the logic into here)

Member:

Or do you think the current test is good / simple enough?

@czarcas7ic (Member, Author):

I can reimplement the test. I thought the state.go test tests the same thing, but I can double check.

@czarcas7ic (Member, Author):

Yeah, this is tested in even more depth here:

// TestPruneRecordsBeforeTimeButNewest tests that all twap records earlier than
// current block time - given time are pruned from the store while
// the newest record for each pool before the time to keep is preserved.
func (s *TestSuite) TestPruneRecordsBeforeTimeButNewest() {
    // N.B.: the records follow the following naming convention:
    // <pool id><delta from base time in seconds><delta from base time in milliseconds>
    // These are manually created to be able to refer to them by name
    // for convenience.

    // Create 6 records of 4 pools from base time, each in different pool with the difference of 1 second between them. Pool 2 is a 3 asset pool.
    pool1Min2SBaseMs, pool2Min1SBaseMsAB, pool2Min1SBaseMsAC, pool2Min1SBaseMsBC, pool3BaseSecBaseMs, pool4Plus1SBaseMs := s.createTestRecordsFromTime(baseTime)

    // Create 6 records of 4 pools from base time - 1 ms, each in different pool with the difference of 1 second between them. Pool 2 is a 3 asset pool.
    pool1Min2SMin1Ms, pool2Min1SMin1MsAB, pool2Min1SMin1MsAC, pool2Min1SMin1MsBC, pool3BaseSecMin1Ms, pool4Plus1SMin1Ms := s.createTestRecordsFromTime(baseTime.Add(-time.Millisecond))

    // Create 6 records of 4 pools from base time - 2 ms, each in different pool with the difference of 1 second between them. Pool 2 is a 3 asset pool.
    pool1Min2SMin2Ms, pool2Min1SMin2MsAB, pool2Min1SMin2MsAC, pool2Min1SMin2MsBC, pool3BaseSecMin2Ms, pool4Plus1SMin2Ms := s.createTestRecordsFromTime(baseTime.Add(2 * -time.Millisecond))

    // Create 6 records of 4 pools from base time - 3 ms, each in different pool with the difference of 1 second between them. Pool 2 is a 3 asset pool.
    pool1Min2SMin3Ms, pool2Min1SMin3MsAB, pool2Min1SMin3MsAC, pool2Min1SMin3MsBC, pool3BaseSecMin3Ms, pool4Plus1SMin3Ms := s.createTestRecordsFromTime(baseTime.Add(3 * -time.Millisecond))

    // Create 12 records in the same pool from base time, each record with the difference of 1 second between them.
    pool5Min2SBaseMsAB, pool5Min2SBaseMsAC, pool5Min2SBaseMsBC,
        pool5Min1SBaseMsAB, pool5Min1SBaseMsAC, pool5Min1SBaseMsBC,
        pool5BaseSecBaseMsAB, pool5BaseSecBaseMsAC, pool5BaseSecBaseMsBC,
        pool5Plus1SBaseMsAB, pool5Plus1SBaseMsAC, pool5Plus1SBaseMsBC := s.CreateTestRecordsFromTimeInPool(baseTime, 5)

    // Create 12 records in the same pool from base time - 1 ms, each record with the difference of 1 second between them.
    pool5Min2SMin1MsAB, pool5Min2SMin1MsAC, pool5Min2SMin1MsBC,
        pool5Min1SMin1MsAB, pool5Min1SMin1MsAC, pool5Min1SMin1MsBC,
        pool5BaseSecMin1MsAB, pool5BaseSecMin1MsAC, pool5BaseSecMin1MsBC,
        pool5Plus1SMin1MsAB, pool5Plus1SMin1MsAC, pool5Plus1SMin1MsBC := s.CreateTestRecordsFromTimeInPool(baseTime.Add(-time.Millisecond), 5)

    tests := map[string]struct {
        // order does not follow any specific pattern
        // across many test cases on purpose.
        recordsToPreSet     []types.TwapRecord
        lastKeptTime        time.Time
        expectedKeptRecords []types.TwapRecord
        overwriteLimit      uint16
    }{
        "base time; across pool 3; 4 records; 3 before lastKeptTime; 2 deleted and newest kept": {
            recordsToPreSet: []types.TwapRecord{
                pool3BaseSecMin1Ms, // base time - 1ms; kept since newest before lastKeptTime
                pool3BaseSecBaseMs, // base time; kept since at lastKeptTime
                pool3BaseSecMin3Ms, // base time - 3ms; deleted
                pool3BaseSecMin2Ms, // base time - 2ms; deleted
            },
            lastKeptTime:        baseTime,
            expectedKeptRecords: []types.TwapRecord{pool3BaseSecMin1Ms, pool3BaseSecBaseMs},
        },
        "base time - 1s - 2 ms; across pool 2; 12 records; 3 before lastKeptTime; none pruned since newest kept": {
            recordsToPreSet: []types.TwapRecord{
                pool2Min1SMin2MsAB, pool2Min1SMin2MsAC, pool2Min1SMin2MsBC, // base time - 1s - 2ms; kept since at lastKeptTime
                pool2Min1SMin1MsAB, pool2Min1SMin1MsAC, pool2Min1SMin1MsBC, // base time - 1s - 1ms; kept since older than lastKeptTime
                pool2Min1SBaseMsAB, pool2Min1SBaseMsAC, pool2Min1SBaseMsBC, // base time - 1s; kept since older than lastKeptTime
                pool2Min1SMin3MsAB, pool2Min1SMin3MsAC, pool2Min1SMin3MsBC, // base time - 1s - 3ms; kept since newest before lastKeptTime
            },
            lastKeptTime: baseTime.Add(-time.Second).Add(2 * -time.Millisecond),
            expectedKeptRecords: []types.TwapRecord{
                pool2Min1SMin3MsAB, pool2Min1SMin3MsAC, pool2Min1SMin3MsBC,
                pool2Min1SMin2MsAB, pool2Min1SMin2MsAC, pool2Min1SMin2MsBC,
                pool2Min1SMin1MsAB, pool2Min1SMin1MsAC, pool2Min1SMin1MsBC,
                pool2Min1SBaseMsAB, pool2Min1SBaseMsAC, pool2Min1SBaseMsBC,
            },
        },
        "base time - 2s - 3 ms; across pool 1; 4 records; none before lastKeptTime; none pruned": {
            recordsToPreSet: []types.TwapRecord{
                pool1Min2SMin3Ms, // base time - 2s - 3ms; kept since older than lastKeptTime
                pool1Min2SMin1Ms, // base time - 2s - 1ms; kept since older than lastKeptTime
                pool1Min2SMin2Ms, // base time - 2s - 2ms; kept since older than lastKeptTime
                pool1Min2SBaseMs, // base time - 2s; kept since older than lastKeptTime
            },
            lastKeptTime:        baseTime.Add(2 * -time.Second).Add(3 * -time.Millisecond),
            expectedKeptRecords: []types.TwapRecord{pool1Min2SMin3Ms, pool1Min2SMin2Ms, pool1Min2SMin1Ms, pool1Min2SBaseMs},
        },
        "base time + 1s + 1ms; across pool 4; 4 records; all before lastKeptTime; 3 deleted and newest kept": {
            recordsToPreSet: []types.TwapRecord{
                pool4Plus1SBaseMs, // base time + 1s; kept since newest before lastKeptTime
                pool4Plus1SMin3Ms, // base time + 1s - 3ms; deleted
                pool4Plus1SMin1Ms, // base time + 1s - 1ms; deleted
                pool4Plus1SMin2Ms, // base time + 1s - 2ms; deleted
            },
            lastKeptTime:        baseTime.Add(time.Second).Add(time.Millisecond),
            expectedKeptRecords: []types.TwapRecord{pool4Plus1SBaseMs},
        },
        "base time; across pool 3 and pool 5; pool 3: 4 total records; 3 before lastKeptTime; 2 deleted and newest 2 kept. pool 5: 24 total records; 12 before lastKeptTime; 12 deleted and 12 kept": {
            recordsToPreSet: []types.TwapRecord{
                pool3BaseSecMin3Ms, // base time - 3ms; deleted
                pool3BaseSecMin2Ms, // base time - 2ms; deleted
                pool3BaseSecMin1Ms, // base time - 1ms; kept since newest before lastKeptTime
                pool3BaseSecBaseMs, // base time; kept since at lastKeptTime
                pool5Min2SBaseMsAB, pool5Min2SBaseMsAC, pool5Min2SBaseMsBC, // base time - 2s; deleted
                pool5Min1SBaseMsAB, pool5Min1SBaseMsAC, pool5Min1SBaseMsBC, // base time - 1s; deleted
                pool5BaseSecBaseMsAB, pool5BaseSecBaseMsAC, pool5BaseSecBaseMsBC, // base time; kept since at lastKeptTime
                pool5Plus1SBaseMsAB, pool5Plus1SBaseMsAC, pool5Plus1SBaseMsBC, // base time + 1s; kept since older than lastKeptTime
                pool5Min2SMin1MsAB, pool5Min2SMin1MsAC, pool5Min2SMin1MsBC, // base time - 2s - 1ms; deleted
                pool5Min1SMin1MsAB, pool5Min1SMin1MsAC, pool5Min1SMin1MsBC, // base time - 1s - 1ms; deleted
                pool5BaseSecMin1MsAB, pool5BaseSecMin1MsAC, pool5BaseSecMin1MsBC, // base time - 1ms; kept since newest before lastKeptTime
                pool5Plus1SMin1MsAB, pool5Plus1SMin1MsAC, pool5Plus1SMin1MsBC, // base time + 1s - 1ms; kept since older than lastKeptTime
            },
            lastKeptTime: baseTime,
            expectedKeptRecords: []types.TwapRecord{
                pool3BaseSecMin1Ms,
                pool5BaseSecMin1MsAB, pool5BaseSecMin1MsAC, pool5BaseSecMin1MsBC,
                pool3BaseSecBaseMs,
                pool5BaseSecBaseMsAB, pool5BaseSecBaseMsAC, pool5BaseSecBaseMsBC,
                pool5Plus1SMin1MsAB, pool5Plus1SMin1MsAC, pool5Plus1SMin1MsBC,
                pool5Plus1SBaseMsAB, pool5Plus1SBaseMsAC, pool5Plus1SBaseMsBC,
            },
        },
        "base time - 1s - 2 ms; all pools; all test records": {
            recordsToPreSet: []types.TwapRecord{
                pool3BaseSecMin3Ms, // base time - 3ms; kept since older
                pool3BaseSecMin2Ms, // base time - 2ms; kept since older
                pool3BaseSecMin1Ms, // base time - 1ms; kept since older
                pool3BaseSecBaseMs, // base time; kept since older
                pool2Min1SMin3MsAB, pool2Min1SMin3MsAC, pool2Min1SMin3MsBC, // base time - 1s - 3ms; kept since newest before lastKeptTime
                pool2Min1SMin2MsAB, pool2Min1SMin2MsAC, pool2Min1SMin2MsBC, // base time - 1s - 2ms; kept since at lastKeptTime
                pool2Min1SMin1MsAB, pool2Min1SMin1MsAC, pool2Min1SMin1MsBC, // base time - 1s - 1ms; kept since older
                pool2Min1SBaseMsAB, pool2Min1SBaseMsAC, pool2Min1SBaseMsBC, // base time - 1s; kept since older
                pool1Min2SMin3Ms, // base time - 2s - 3ms; deleted
                pool1Min2SMin2Ms, // base time - 2s - 2ms; deleted
                pool1Min2SMin1Ms, // base time - 2s - 1ms; deleted
                pool1Min2SBaseMs, // base time - 2s; kept since newest before lastKeptTime
                pool4Plus1SMin3Ms, // base time + 1s - 3ms; kept since older
                pool4Plus1SMin2Ms, // base time + 1s - 2ms; kept since older
                pool4Plus1SMin1Ms, // base time + 1s - 1ms; kept since older
                pool4Plus1SBaseMs, // base time + 1s; kept since older
                pool5Min2SBaseMsAB, pool5Min2SBaseMsAC, pool5Min2SBaseMsBC, // base time - 2s; kept since newest before lastKeptTime
                pool5Min1SBaseMsAB, pool5Min1SBaseMsAC, pool5Min1SBaseMsBC, // base time - 1s; kept since older
                pool5BaseSecBaseMsAB, pool5BaseSecBaseMsAC, pool5BaseSecBaseMsBC, // base time; kept since older
                pool5Plus1SBaseMsAB, pool5Plus1SBaseMsAC, pool5Plus1SBaseMsBC, // base time + 1s; kept since older
                pool5Min2SMin1MsAB, pool5Min2SMin1MsAC, pool5Min2SMin1MsBC, // base time - 2s - 1ms; deleted
                pool5Min1SMin1MsAB, pool5Min1SMin1MsAC, pool5Min1SMin1MsBC, // base time - 1s - 1ms; kept since older
                pool5BaseSecMin1MsAB, pool5BaseSecMin1MsAC, pool5BaseSecMin1MsBC, // base time - 1ms; kept since older
                pool5Plus1SMin1MsAB, pool5Plus1SMin1MsAC, pool5Plus1SMin1MsBC, // base time + 1s - 1ms; kept since older
            },
            lastKeptTime: baseTime.Add(-time.Second).Add(2 * -time.Millisecond),
            expectedKeptRecords: []types.TwapRecord{
                pool1Min2SBaseMs, // base time - 2s; kept since newest before lastKeptTime
                pool5Min2SBaseMsAB, pool5Min2SBaseMsAC, pool5Min2SBaseMsBC, // base time - 2s; kept since newest before lastKeptTime
                pool2Min1SMin3MsAB, pool2Min1SMin3MsAC, pool2Min1SMin3MsBC, // base time - 1s - 3ms; kept since newest before lastKeptTime
                pool2Min1SMin2MsAB, pool2Min1SMin2MsAC, pool2Min1SMin2MsBC, // base time - 1s - 2ms; kept since at lastKeptTime
                pool2Min1SMin1MsAB, pool2Min1SMin1MsAC, pool2Min1SMin1MsBC, // base time - 1s - 1ms; kept since older
                pool5Min1SMin1MsAB, pool5Min1SMin1MsAC, pool5Min1SMin1MsBC, // base time - 1s - 1ms; kept since older
                pool2Min1SBaseMsAB, pool2Min1SBaseMsAC, pool2Min1SBaseMsBC, // base time - 1s; kept since older
                pool5Min1SBaseMsAB, pool5Min1SBaseMsAC, pool5Min1SBaseMsBC, // base time - 1s; kept since older
                pool3BaseSecMin3Ms, // base time - 3ms; kept since older
                pool3BaseSecMin2Ms, // base time - 2ms; kept since older
                pool3BaseSecMin1Ms, // base time - 1ms; kept since older
                pool5BaseSecMin1MsAB, pool5BaseSecMin1MsAC, pool5BaseSecMin1MsBC, // base time - 1ms; kept since older
                pool3BaseSecBaseMs, // base time; kept since older
                pool5BaseSecBaseMsAB, pool5BaseSecBaseMsAC, pool5BaseSecBaseMsBC, // base time; kept since older
                pool4Plus1SMin3Ms, // base time + 1s - 3ms; kept since older
                pool4Plus1SMin2Ms, // base time + 1s - 2ms; kept since older
                pool4Plus1SMin1Ms, // base time + 1s - 1ms; kept since older
                pool5Plus1SMin1MsAB, pool5Plus1SMin1MsAC, pool5Plus1SMin1MsBC, // base time + 1s - 1ms; kept since older
                pool4Plus1SBaseMs, // base time + 1s; kept since older
                pool5Plus1SBaseMsAB, pool5Plus1SBaseMsAC, pool5Plus1SBaseMsBC, // base time + 1s; kept since older
            },
        },
        "no pre-set records - no error": {
            recordsToPreSet:     []types.TwapRecord{},
            lastKeptTime:        baseTime,
            expectedKeptRecords: []types.TwapRecord{},
        },
        "base time; across pool 3; 4 records; 3 before lastKeptTime; only 1 deleted due to limit set to 1": {
            recordsToPreSet: []types.TwapRecord{
                pool3BaseSecMin1Ms, // base time - 1ms; kept since newest before lastKeptTime
                pool3BaseSecBaseMs, // base time; kept since at lastKeptTime
                pool3BaseSecMin3Ms, // base time - 3ms; in queue for deletion
                pool3BaseSecMin2Ms, // base time - 2ms; deleted
            },
            lastKeptTime:        baseTime,
            expectedKeptRecords: []types.TwapRecord{pool3BaseSecMin3Ms, pool3BaseSecMin1Ms, pool3BaseSecBaseMs},
            overwriteLimit:      1,
        },
    }

    for name, tc := range tests {
        s.Run(name, func() {
            s.SetupTest()
            s.preSetRecords(tc.recordsToPreSet)
            ctx := s.Ctx
            twapKeeper := s.twapkeeper

            if tc.overwriteLimit != 0 {
                originalLimit := twap.NumRecordsToPrunePerBlock
                defer func() {
                    twap.NumRecordsToPrunePerBlock = originalLimit
                }()
                twap.NumRecordsToPrunePerBlock = tc.overwriteLimit
            }

            state := types.PruningState{
                IsPruning:    true,
                LastKeptTime: tc.lastKeptTime,
                LastKeySeen:  types.FormatHistoricalTimeIndexTWAPKey(tc.lastKeptTime, 0, "", ""),
            }

            err := twapKeeper.PruneRecordsBeforeTimeButNewest(ctx, state)
            s.Require().NoError(err)

            s.validateExpectedRecords(tc.expectedKeptRecords)
        })
    }
}

Deleting the func along with the test is fine imo

x/twap/store.go (outdated review comment, resolved)
@ValarDragon (Member) commented Feb 8, 2024

Cool! Looks like, as is, this is shaving 25 seconds off of epoch (including the TWAP CPU time). I'd guess that a lot of this 7 seconds is still from prunes in the next 2 blocks.

Per your data in Slack, epoch is deleting ~12k keys (assuming I counted correctly) that are not TWAP, so that's probably a lot of the 7 seconds then.

  • 212k deletes = 30 seconds
    • ≈ 7k deletes / second
  • 16k deletes = 7 seconds
    • ≈ 2.3k deletes / second

But half the TWAP deletes are very "data local", so they will have less branchiness. The other half should be fairly data local, but won't be if it's split up.

So right now we're deleting 2k/block in the above benchmark (applying my above comment makes this 1k/block).
Given that it's 200k deletes right now, and that takes 30 seconds, if we split this across blocks the net time here will increase due to log factors. I think it's worth doing anyway to split this up more. Let's keep it at 1k/block. I am a bit worried this will add 1 second to blocks for 200 blocks, but I think that's a safe risk for prod.
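For reference, the arithmetic behind the "200 blocks" figure (ignoring the log factors and data-locality effects mentioned above):

$$\frac{200{,}000 \text{ deletes}}{1{,}000 \text{ deletes/block}} = 200 \text{ blocks}, \qquad \frac{1{,}000 \text{ deletes/block}}{2.3\text{k to }7\text{k deletes/s}} \approx 0.14 \text{ to } 0.43 \text{ s of extra pruning per block}$$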

x/twap/store.go (outdated review comment, resolved)
@czarcas7ic czarcas7ic closed this Feb 8, 2024
@czarcas7ic czarcas7ic reopened this Feb 8, 2024
@czarcas7ic czarcas7ic marked this pull request as ready for review February 8, 2024 18:24
Comment on lines +151 to +153
// We have not pruned all records, so we update the last key seen.
state.LastKeySeen = iter.Key()
k.SetPruningState(ctx, state)
Member:

I may have missed it, but I don't think we have any logic testing this code path (or rather, that the keys from this are working as we want).

From reading the code, this looks right to me!

@czarcas7ic (Member, Author):

We don't, I will add one before merge.

@czarcas7ic (Member, Author):

This test shows pruning happening over multiple blocks: f84a716

It verifies:

  1. We see the extra record that we save due to the bug
  2. Pruning eventually gets set to false when we iterate over records
  3. The correct records get pruned
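For reviewers who want the shape of that test without opening the commit, a compressed sketch (AdvanceBlock, createManyOldRecords, expectedSurvivors, and GetPruningState are hypothetical names for illustration; the real assertions live in f84a716):

func (s *TestSuite) TestPruningOverMultipleBlocksSketch() {
    s.SetupTest()

    // Pre-set more prunable records than NumRecordsToPrunePerBlock so a single
    // block cannot finish the job.
    recs := s.createManyOldRecords(3 * int(twap.NumRecordsToPrunePerBlock)) // hypothetical helper
    s.preSetRecords(recs)

    // Epoch end only arms the pruning state; nothing is deleted yet.
    s.twapkeeper.SetPruningState(s.Ctx, types.PruningState{
        IsPruning:    true,
        LastKeptTime: baseTime,
        LastKeySeen:  types.FormatHistoricalTimeIndexTWAPKey(baseTime, 0, "", ""),
    })

    // Each EndBlock prunes at most NumRecordsToPrunePerBlock entries.
    for i := 0; i < 4; i++ {
        s.AdvanceBlock() // hypothetical helper that runs the module's EndBlock
    }

    // Eventually IsPruning flips back to false and only the expected records
    // (the newest one before lastKeptTime per pool) remain.
    s.Require().False(s.twapkeeper.GetPruningState(s.Ctx).IsPruning) // assumed getter
    s.validateExpectedRecords(s.expectedSurvivors(recs))             // hypothetical helper
}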

@ValarDragon (Member) left a review comment:

Awesome, let's send it

@czarcas7ic czarcas7ic added the A:backport/v23.x backport patches to v23.x branch label Feb 9, 2024
@czarcas7ic czarcas7ic merged commit 1c9e491 into main Feb 9, 2024
1 check passed
@czarcas7ic czarcas7ic deleted the adam/prune-twap-multiple-blocks branch February 9, 2024 04:36
mergify bot pushed a commit that referenced this pull request Feb 9, 2024
* twap prune over multiple blocks

* changelog entry

* fix after epoch end test

* track where we left off in iteration

* add comment to explain better

* Generated protofile changes

* inc by 2

* inc by 2

* add comments

* add twap pruning test over multiple blocks

---------

Co-authored-by: github-actions <github-actions@github.com>
(cherry picked from commit 1c9e491)

# Conflicts:
#	CHANGELOG.md
czarcas7ic added a commit that referenced this pull request Feb 9, 2024
* twap prune over multiple blocks

* changelog entry

* fix after epoch end test

* track where we left off in iteration

* add comment to explain better

* Generated protofile changes

* inc by 2

* inc by 2

* add comments

* add twap pruning test over multiple blocks

---------

Co-authored-by: github-actions <github-actions@github.com>
(cherry picked from commit 1c9e491)

# Conflicts:
#	CHANGELOG.md

Co-authored-by: Adam Tucker <adam@osmosis.team>
// Two records are deleted per incentive record:
// 1. by time index
// 2. by pool index
// Therefore, setting this to 1000 means 500 complete incentive records are deleted per block.
Member:

nit: twap records

// Two records are deleted per incentive record:
// 1. by time index
// 2. by pool index
// Therefore, setting this to 1000 means 500 complete incentive records are deleted per block.
@p0mvn (Member), Feb 9, 2024:

Can we elaborate on the choice here? Why 1000/500?
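For context on where the 1000/500 relationship comes from: each logical TWAP record is stored under both a time-index key and a pool-index key, so pruning one record costs two store deletes. A minimal sketch (the pool-index formatter name and argument order are assumptions; only the time-index formatter appears elsewhere in this PR):

// Deleting one logical TWAP record removes two store entries, which is why a
// budget of 1000 deletes per block corresponds to roughly 500 records.
func (k Keeper) deleteHistoricalRecord(ctx sdk.Context, r types.TwapRecord) {
    store := ctx.KVStore(k.storeKey)
    // 1. time-indexed entry (the one iterated during pruning)
    store.Delete(types.FormatHistoricalTimeIndexTWAPKey(r.Time, r.PoolId, r.Asset0Denom, r.Asset1Denom))
    // 2. pool-indexed entry (assumed formatter)
    store.Delete(types.FormatHistoricalPoolIndexTWAPKey(r.PoolId, r.Asset0Denom, r.Asset1Denom, r.Time))
}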

@p0mvn (Member) left a review comment:

High level logic LGTM, but it would be good to document in the comment why we chose 1000/500 records per block.

@github-actions github-actions bot mentioned this pull request Mar 1, 2024
@github-actions github-actions bot mentioned this pull request Mar 15, 2024
@github-actions github-actions bot mentioned this pull request May 15, 2024