[WIP][Testing] Mock Persistence Module in Utility Tests - An Example #290

Closed
wants to merge 7 commits into main from issues/261/utility_per_mock_example

Conversation

Olshansk
Member

@Olshansk Olshansk commented Oct 5, 2022

Description

This PR is a WIP and is meant to be a starting point with an example.

The goal is to enable the utility module to be developed while dependencies in the persistence module are not yet ready. This paradigm could extend to any other cross-module interaction.

The comments contain more detail, but in short:

  1. A real instance of the persistence context is instantiated
  2. A mocked instance of the persistence module is instantiated
  3. For some functions, we implement a passthrough from the mock -> real implementation
  4. For other functions, we mock the behaviour; sometimes it's stateless and sometimes it's stateful (see the sketch below)
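
To make the distinction concrete, here is a minimal gomock sketch of the three behaviours. The mock type name and the persistence methods shown (GetAccountAmount, GetHeight, SetAccountAmount), along with the newRealPersistenceContext helper, are illustrative assumptions rather than the exact interface:

    ctrl := gomock.NewController(t)
    realCtx := newRealPersistenceContext(t)                   // (1) hypothetical helper returning the real context
    mockCtx := mock_modules.NewMockPersistenceRWContext(ctrl) // (2) assumed generated mock

    // (3) Passthrough: delegate straight to the real implementation
    mockCtx.EXPECT().
        GetAccountAmount(gomock.Any(), gomock.Any()).
        DoAndReturn(realCtx.GetAccountAmount).
        AnyTimes()

    // (4a) Stateless mock: hardcoded return value
    mockCtx.EXPECT().GetHeight().Return(int64(0), nil).AnyTimes()

    // (4b) Stateful mock: a closure keeps state across calls
    balances := make(map[string]string)
    mockCtx.EXPECT().
        SetAccountAmount(gomock.Any(), gomock.Any()).
        DoAndReturn(func(addr []byte, amount string) error {
            balances[hex.EncodeToString(addr)] = amount
            return nil
        }).
        AnyTimes()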

Issue

Fixes #261

Type of change

Please mark the relevant option(s):

  • New feature, functionality or library
  • Bug fix
  • Code health or cleanup
  • Major breaking change
  • Documentation
  • Other

TODO: List of changes

Testing

  • go test -v -count=1 ./utility/... -run TestExperimental
  • make develop_test
  • LocalNet w/ all of the steps outlined in the README

Required Checklist

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have tested my changes using the available tooling
  • I have updated the corresponding CHANGELOG

If Applicable Checklist

  • I have updated the corresponding README(s); local and/or global
  • I have added tests that prove my fix is effective or that my feature works
  • I have added, or updated, mermaid.js diagrams in the corresponding README(s)
  • I have added, or updated, documentation and mermaid.js diagrams in shared/docs/* if I updated shared/*README(s)

@Olshansk Olshansk added the utility, persistence, code health, and tooling labels Oct 5, 2022
require.Equal(t, []byte(" 1 getHash 42 getHash "), appHash)
}

func newMockablePersistenceContextForAppHashTestWithMoreState(t *testing.T, height int64) modules.PersistenceRWContext {
Contributor

@andrewnguyen22 andrewnguyen22 Oct 6, 2022

So if I'm understanding correctly, every time we have a unit test we need a separate function to mock that functionality - like a 1:1 mapping of sorts?

Is there any overlapping Mock functionality we may genericize/share? If so, can you demonstrate that in an example?

EDIT:

(The above assumes we go with the full Mock approach and don't use the 'real' persistence module for testing)

Contributor

Conversation around 'full mocking' vs 'hybrid' here #290 (comment)

Member Author

like a 1:1 mapping of sorts?

Doesn't have to be a 1:1 mapping.

You can create helpers and then reuse them:

Imagine a common_mocks_test.go file

    func sharedAddAccountAmount(address []byte, amount string) error {
        ... // implementation
    }

    // define other common functions

    sharedPersistenceContextMock := initializedPersistenceMock()

    sharedPersistenceContextMock.EXPECT().
        AddAccountAmount(gomock.Any(), gomock.Any()).
        DoAndReturn(sharedAddAccountAmount).
        AnyTimes()

    // expect other common functions

some_test.go

    persistenceContextMock = copy(sharedPersistenceContextMock)
    
    // override some subset of mocks specific to this

some_other_test.go

    persistenceContextMock = copy(sharedPersistenceContextMock)
    
    // override some other subset of mocks specific to this

"github.com/stretchr/testify/require"
)

// This test just tries to add amounts to an account and check the app hash via the real implementation
Contributor

Overall this seems relatively elegant. I think with some proper common helper funcs we might be able to have a decent testing suite.

Member Author

@Olshansk Olshansk Oct 6, 2022

Yea, the way I've seen this done before is that you don't build it out all at once; you add the functions/components you need over time.

E.g., if you need to mock AddAccountAmount now, we add it alongside the new logic and keep appending in the future.

@andrewnguyen22
Contributor

A real instance of the persistence context is instantiated
A mocked instance of the persistence module is instantiated
For some functions, we implement a passthrough from the mock -> real implementation
For other functions, we mock the behaviour; sometimes it's stateless and sometimes it's stateful

This appears to not satisfy the encapsulation requirement. Why don't we just Mock everything instead of using the persistence module? It appears (at least for the unit tests) that we would be able to use the Expect statements to test unlimited scenarios.

@Olshansk
Member Author

Olshansk commented Oct 6, 2022

This appears to not satisfy the encapsulation requirement

Yea, I see it as a step in the right direction.

why don't we just Mock everything instead of using the persistence module?

I guess I have varying viewpoints here but don't feel strongly.

  1. The persistence module has dozens of functions, and I was thinking that we could leverage existing business logic in the persistence module as an interim measure to unblock other work dependent on business logic that hasn't been implemented yet.

  2. If we mock with placeholders, it makes sense, but if we mock with "real business logic", then we will pretty much need to maintain two complete versions of the persistence module. E.g., anytime we change something in the real persistence module, we'd need to change it here too. Could be the right path, but feels complex to me straight off the bat.

It appears (at least for the unit tests) that we would be able to use the Expect statements to test unlimited scenarios

Correct.


tl;dr I think what the "passthrough" approach does is unblock work that's dependent on functionality we don't have yet, without needing to pay the upfront cost of doing a complete second implementation of the persistence module. This may or may not be a good idea, but I think we will have visibility into it as we start mocking parts of it and get a clearer picture.

@andrewnguyen22
Contributor

@Olshansk seems like a decent amount of tech debt will be created if we go with the passthrough approach. I caution against the snowplow effect that @jessicadaugherty alluded to.

The current open M3 issues for this sprint are not blocked by this, but the future ones will be. Since this is an XL task, I'd encourage us to go all the way with the Mock.

@Olshansk
Member Author

Olshansk commented Oct 6, 2022

Since this is an XL task, I'd encourage us to go all the way with the Mock

To make sure I understand, are you suggesting to maintain two "feature complete" versions of the persistence module, or just mocking everything in the persistence module interface with hardcoded values?

Will ping to sync offline as well.

@andrewnguyen22
Contributor

just mocking everything in the persistence module interface with hardcoded values.

I'm suggesting we mock the entire persistence module in the Utility module to fully encapsulate it, rather than importing the persistence module and passing through.

@codecov-commenter

Codecov Report

Base: 0.00% // Head: 41.41% // Increases project coverage by +41.41% 🎉

Coverage data is based on head (2a64f5e) compared to base (87a6060).
Patch has no changes to coverable lines.

Additional details and impacted files
@@            Coverage Diff            @@
##           main     #290       +/-   ##
=========================================
+ Coverage      0   41.41%   +41.41%     
=========================================
  Files         0       28       +28     
  Lines         0     2113     +2113     
=========================================
+ Hits          0      875      +875     
- Misses        0     1184     +1184     
- Partials      0       54       +54     
Impacted Files Coverage Δ
persistence/types/util.go 22.22% <0.00%> (ø)
persistence/types/account.go 20.00% <0.00%> (ø)
utility/types/util.go 75.00% <0.00%> (ø)
utility/types/vote.go 100.00% <0.00%> (ø)
p2p/transport.go 14.11% <0.00%> (ø)
consensus/leader_election/sortition/sortition.go 85.71% <0.00%> (ø)
persistence/types/converters.go 0.00% <0.00%> (ø)
p2p/utils.go 65.38% <0.00%> (ø)
persistence/types/unstaking.go 0.00% <0.00%> (ø)
consensus/leader_election/vrf/errors.go 0.00% <0.00%> (ø)
... and 18 more


@deblasis
Contributor

I have been thinking about this and I have a rough idea that I wanted to share with you guys.

How about a function that constructs a mock and then multiple methods that in a fluent way allow us to configure expected behaviours?

Pseudo-code of a random scenario:

    persistenceContextMock := newPersistenceContextMock(ctrl).
        FromRuntimeMgr(runtimeMgr).
        ShouldFailWhenGetAllFishermenIsCalledAtHeight(2, fmt.Errorf("some expected failure here")).
        AppHashShouldReturnAtHeight(2, []byte("somehash"))

    persistenceMock := newPersistenceModuleMock(ctrl).
        WithPersistenceContextMock(persistenceContextMock).
        SomeOtherBehaviouralMethod(...some_arguments)

And these can also be wrapped in some sort of util if we call them all the time and change only a bunch of parameters.

So in Utility you would call:

    persistenceModuleMock := getHappyPathXXPersistenceModule()

It might be more complex than it sounds, especially because there's probably going to be some state to handle inside the mock itself 🤔 but we could leverage the built-in controller to keep track of calls instead. I need to read up more on how it all works.
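
For what it's worth, a minimal sketch of how such a fluent builder could sit on top of gomock; all names here (the builder type, mock_modules, GetAllFishermen) follow the pseudo-code above and are hypothetical, not an existing API:

    // Fluent builder wrapping an assumed gomock-generated persistence context mock.
    type persistenceContextMockBuilder struct {
        mock *mock_modules.MockPersistenceRWContext
    }

    func newPersistenceContextMock(ctrl *gomock.Controller) *persistenceContextMockBuilder {
        return &persistenceContextMockBuilder{
            mock: mock_modules.NewMockPersistenceRWContext(ctrl),
        }
    }

    // Each configuration method registers an expectation and returns the
    // builder so calls can be chained.
    func (b *persistenceContextMockBuilder) ShouldFailWhenGetAllFishermenIsCalledAtHeight(height int64, err error) *persistenceContextMockBuilder {
        b.mock.EXPECT().GetAllFishermen(gomock.Eq(height)).Return(nil, err).AnyTimes()
        return b
    }

    func (b *persistenceContextMockBuilder) Build() *mock_modules.MockPersistenceRWContext {
        return b.mock
    }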

@Olshansk
Member Author

Heads up: I added the typical Olshansky whitespaces in utility/test/account_test.go to help me understand the tests better.

@deblasis I believe the implementation I pushed is a starting point to what you described so PTAL.

@andrewnguyen22 I removed the code for my initial passthrough attempt and updated the unit tests in utility/test/account_test.go to use the new approach.

Note how in places where we have dependencies on accounts, I do:

	runTimeMgr := persistenceRuntimeMgr(t)
	ctx := NewTestUtilityContext(t, 0, withBaseAccountMock(t, runTimeMgr))
	acc := GetAllTestingAccounts(t, ctx)[0]

And in some places where we have dependencies on pools, I do:

	runTimeMgr := persistenceRuntimeMgr(t)
	ctx := NewTestUtilityContext(t, 0, withBasePoolMock(t, runTimeMgr))
	pool := GetAllTestingPools(t, ctx)[0]

What the current implementation achieves:

  1. Complete separation from the persistence module
  2. Composable "building blocks" for a stateful persistence module mock that can be used in utility. E.g. you could include both pools AND accounts by running NewTestUtilityContext(t, 0, withBasePoolMock(t, runTimeMgr), withBaseAccountMock(t, runTimeMgr)) if you needed to.
  3. withBasePoolMock and withBaseAccountMock are stateful mocks, BUT we can explicitly EXPECT other functions with specific (i.e. not gomock.Any()) values when we have more complex tests verifying specific flows (see the sketch below)

Please note that I do not believe this is "simple" or "low cognitive load", but it fits the requirement of isolation by mocking the shared modules everywhere and sets us up to have other building blocks we can add in the future.
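
For reference, a minimal sketch of the option pattern described above; the persistenceMockOption type, the accountsFromGenesis helper, and the function bodies are assumptions, while NewTestUtilityContext, withBaseAccountMock, and the loop over options come from the actual diff:

    // A behavioural option registers EXPECT()s on the shared persistence context mock.
    type persistenceMockOption func(*mock_modules.MockPersistenceRWContext)

    func withBaseAccountMock(t *testing.T, runtimeMgr modules.RuntimeMgr) persistenceMockOption {
        accounts := accountsFromGenesis(t, runtimeMgr) // hypothetical helper seeding the mock's state
        return func(mock *mock_modules.MockPersistenceRWContext) {
            mock.EXPECT().
                GetAllAccounts(gomock.Any()).
                DoAndReturn(func(height int64) ([]modules.Account, error) {
                    return accounts, nil // stateful: backed by the seeded slice
                }).
                AnyTimes()
        }
    }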

Let me know if jumping on a call would help instead.

@Olshansk
Member Author

@andrewnguyen22 Wanted to bump this for you to provide some feedback if this is what you had in mind.

Contributor

@andrewnguyen22 andrewnguyen22 left a comment

DUP COMMENT SEE BELOW

persistenceContextMock.EXPECT().Release().Return(nil).AnyTimes()

// Adding behavioural mocks based on the options provided
for _, o := range options {
Contributor

Without looking at the example, I'm not sure what options does here.

Guess:

  • You can add behaviors dynamically to the mock by sending wrapper functions through the parameter.

I think a detailed/clarifying comment above func NewTestUtilityContext(t *testing.T, height int64, options would be helpful
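
Something along these lines, perhaps (wording is just a sketch):

    // NewTestUtilityContext builds a utility context backed by a fully mocked
    // persistence context. Base expectations (e.g. Release) are registered first;
    // each option then layers behavioural mocks on top (accounts, pools, ...),
    // so a test only wires up the persistence behaviour it actually depends on.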

return
}

mock.EXPECT().GetAllAccounts(gomock.Any()).DoAndReturn(func(height int64) ([]modules.Account, error) {
Contributor

I think breaking these EXPECT()s out into separate functions, or leaving comments to divide them up, would help reduce the cognitive load; e.g. something like the sketch below.
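
A minimal sketch of what that split could look like (helper names and signatures are hypothetical):

    // One helper per concern, called from the shared mock setup.
    func expectAccountReads(mock *mock_modules.MockPersistenceRWContext, accounts []modules.Account) {
        mock.EXPECT().GetAllAccounts(gomock.Any()).Return(accounts, nil).AnyTimes()
    }

    func expectAccountWrites(mock *mock_modules.MockPersistenceRWContext, balances map[string]string) {
        mock.EXPECT().
            SetAccountAmount(gomock.Any(), gomock.Any()).
            DoAndReturn(func(addr []byte, amount string) error {
                balances[hex.EncodeToString(addr)] = amount
                return nil
            }).
            AnyTimes()
    }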

@@ -96,7 +96,7 @@ func CleanupPostgresDocker(_ *testing.M, pool *dockertest.Pool, resource *docker
os.Exit(0)
}

// TODO(drewsky): Remove this in favor of a golang specific solution
// TODO: Remove this in favor of a golang specific solution for after test teardown
Contributor

Wouldn't this be part of test teardown?

Contributor

@andrewnguyen22 andrewnguyen22 left a comment

I really like this pattern!

It fits great in modularizing our software.

I left a few comments to help clarify the code a bit more and help with cognitive load.

I think this PR can be improved by doing an actor-specific mock as well.

P.S.

I noticed you @'d me but didn't request a review. The best way to cut through the noise and get to me is to just request a review.

@Olshansk
Member Author

Closing out this PR for now, but we can use it as a reference for future work on #261.

@Olshansk Olshansk closed this Dec 27, 2022
@Olshansk Olshansk deleted the issues/261/utility_per_mock_example branch June 2, 2023 21:56