
Architecture proof of concept #6992

Merged · 6 commits · May 31, 2022
Conversation


@bvaughn bvaughn commented May 29, 2022

Related to the architecture proposal #6932

This branch explores a different architectural direction for Replay devtools. The branch attempts to illustrate the following:

  • What it would be like to write and maintain React views using this state management approach (see packages/bvaughn-architecture-demo/components/console/MessagesList.tsx).
  • The added UX benefits that React concurrent mode can provide if we go all-in on Suspense and concurrent compatibility.
  • How easy it can be to write tests for React components using this pattern.

Here is a walkthrough of the new design and implementation: https://www.loom.com/share/a8d5fa276cfb4f0181f07378bfa9636e

If we took this idea as far as I'd like to take it, our state architecture might look something like this:

[State architecture diagram]


For the purpose of this branch, I'm focusing on one data type: messages. Messages are an interesting case study because:

  1. Their data comes from both the backend (messages stored as part of the recording) and global state (user-defined log points added in memory).
  2. Despite being defined in memory, log points sometimes also require analysis to be run on the backend to evaluate complex expressions.
  3. The focus range affects which messages we show, both via remote filtering (findMessagesInRange) and via in-memory filtering (for log points, or when there was no overflow and we were able to fetch all of the recording's messages into memory).
  4. Additional client-side filtering logic may also be applied (e.g. show warnings or errors).

I think the above constraints make messages a good smoke test candidate for this new architecture. If we can make messages work efficiently and with a nice API, then we can probably do so for other things as well.


To run the demo app after checking out this branch:

npm install
cd packages/bvaughn-architecture-demo/
npm run dev

# View a public Replay using a URL like:
# http://localhost:3000/?recordingId=48ca3af8-8493-4297-922c-d0ddf6cf44e6

Note that I think facebook/react#24634 is probably a blocker for this issue since certain types of errors can cause hot cycles (in DEV with Next.js). Hopefully this issue will be resolved soon on the React side or on the Next side.

vercel bot commented May 29, 2022

devtools preview: ✅ Ready · Updated May 31, 2022 at 8:23PM (UTC)

Comment on lines +22 to +23
const { filterByText, levelFlags } = useContext(ConsoleFiltersContext);
const { range, isTransitionPending: isFocusTransitionPending } = useContext(FocusContext);
Contributor Author

Here's one part of the proposal: Global/shared state comes from React contexts.

};
}

const { countAfter, countBefore, didOverflow, messages } = getMessages(sessionId, focusMode);
Contributor Author

Here's another key part of the proposal: Values that come from the backend (like messages) use Suspense patterns to make them easy to consume from the view's perspective. (Behind the scenes, if getMessages needs to fetch/refetch data, it can just suspend to do that.)
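As an aside, the refetch-vs-filter decision the PR description alludes to could be sketched as a pure function. All names here (shouldRefetchMessages, FocusRange) are hypothetical, for illustration only, and this is a simplification of the actual logic in the branch:

```typescript
type FocusRange = { begin: number; end: number } | null;

// If the previous fetch did not overflow, every message in the recording is
// already in memory, so any focus range can be applied with local filtering.
// If it did overflow, we only hold a window of messages, so a changed focus
// range requires a remote refetch (findMessagesInRange).
function shouldRefetchMessages(
  didOverflow: boolean,
  cachedRange: FocusRange,
  newRange: FocusRange
): boolean {
  if (!didOverflow) {
    return false; // all messages are in memory; filter locally
  }
  return !rangesEqual(cachedRange, newRange);
}

function rangesEqual(a: FocusRange, b: FocusRange): boolean {
  if (a === null || b === null) {
    return a === b;
  }
  return a.begin === b.begin && a.end === b.end;
}
```

A Suspense cache keyed on (sessionId, focusRange) could call something like this before deciding whether to suspend on a network request.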

// Note that we are intentionally not storing derived values like this in state.
const filteredMessages = useFilteredMessages(messages, {
...levelFlags,
filterByText,
Contributor Author

Here's another key part of the proposal: Derived values (like filtered messages) are just memoized selectors. We don't need to store them in parallel "state" (or Redux).
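As a rough sketch of what such a selector computes (illustrative types and names only; the real useFilteredMessages is a memoized React hook operating on protocol Message objects, not this standalone function):

```typescript
type Message = { text: string; level: "info" | "warning" | "error" };
type LevelFlags = { showWarnings: boolean; showErrors: boolean };

// Derived data: computed from its inputs on demand, never stored in state.
function filterMessages(
  messages: Message[],
  filterByText: string,
  levelFlags: LevelFlags
): Message[] {
  const needle = filterByText.toLowerCase();
  return messages.filter(message => {
    if (message.level === "warning" && !levelFlags.showWarnings) {
      return false;
    }
    if (message.level === "error" && !levelFlags.showErrors) {
      return false;
    }
    return message.text.toLowerCase().includes(needle);
  });
}
```

In a component, this computation would be wrapped in useMemo (keyed on the same inputs) so it only reruns when the messages or filters actually change.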

Comment on lines +57 to +60
const isTransitionPending = isFocusTransitionPending;

return (
<div className={isTransitionPending ? styles.ContainerPending : styles.Container}>
Contributor Author

A nice side benefit of building our UI on top of Suspense and transitions is that it's super easy to tweak the UI slightly (like dimming it) to indicate that the data being shown may be a little stale and we're updating it.

Comment on lines +109 to +110
<FocusContext.Provider value={focusContext}>
<ConsoleFiltersContext.Provider value={consoleFiltersContext}>
Contributor Author

This component uses React primitives (like useState and context) to set up and share "global" state with other parts of the app. This gives us a nice point to integrate with other React APIs (like transitions).

// Should we force serialization?
// Should we cancel in-flight requests and start new ones?

export default class ReplayClient {
Contributor Author

@bvaughn bvaughn May 31, 2022

This was an earlier idea, and it's kind of redundant with the Suspense caches. Ignore it for now; I'm going to rewrite this class.

Comment on lines +12 to +29
export type ConsoleFiltersContextType = {
// Filter text to display in the UI.
// This value is updated at React's default, higher priority.
filterByDisplayText: string;

// Text to filter console messages by.
// This value is updated at a lower, transition priority.
filterByText: string;

// Filter by text is about to be updated as part of a transition;
// UI that consumes the focus for Suspense purposes may want to reflect the temporary pending state.
isTransitionPending: boolean;

// Types of console messages to include (or filter out).
levelFlags: ConsoleLevelFlags;

update: (filterByText: string, levelFlags: ConsoleLevelFlags) => void;
};
Contributor Author

These contexts are nice for sharing state between different parts of the app (without "prop drilling"). I guess they also give us a nice, super lightweight way to statically render our UI if we want to for visual testing (but I haven't thought about this use case much yet).

// or throws a Wakeable to be resolved once messages have been fetched.
//
// This method is Suspense-friendly; it is meant to be called from a React component during render.
export function getMessages(
Contributor Author

@bvaughn bvaughn May 31, 2022

This is where the business logic lives for when to fetch/refetch messages from the backend (related to "soft focus") vs when to filter in-memory.

This is probably a lot more complex than a typical suspense cache because of the added soft focus logic.


// Helper function to read from multiple Suspense caches in parallel.
// This method will re-throw any thrown value, but only after also calling subsequent caches.
export function suspendInParallel(callbacks: Function[]): void {
Contributor Author

We could use this as kind of a micro-optimization to fetch points in parallel. I didn't use it for now (because I didn't want to make "suspensey" React component code harder to read).

@bvaughn bvaughn force-pushed the bvaughn-architecture-proof-of-concept branch from 0a48abb to 8d81ddb Compare May 31, 2022 14:51
return getCacheForType(createMap);
}

export function getClosestPointForTime(time: number, sessionId: SessionId): ExecutionPoint {
Contributor Author

This is more of a typical Suspense cache implementation. No extra business logic like soft focus, just a map of key (time) to value (execution point) that we build up over time, suspending and loading information from the backend when we need to.
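A minimal sketch of that cache shape, with invented names (createKeyedSuspenseCache, CacheRecord); the real cache also integrates with React's getCacheForType and the Replay protocol, which this deliberately omits:

```typescript
type CacheRecord<T> =
  | { status: "pending"; wakeable: Promise<void> }
  | { status: "resolved"; value: T };

function createKeyedSuspenseCache<K, V>(load: (key: K) => Promise<V>) {
  const map = new Map<K, CacheRecord<V>>();

  // Suspense-style read: returns the cached value, or throws a thenable
  // that resolves once the value has loaded (React then retries the render).
  return function read(key: K): V {
    let record = map.get(key);
    if (record === undefined) {
      const wakeable = load(key).then(value => {
        map.set(key, { status: "resolved", value });
      });
      record = { status: "pending", wakeable };
      map.set(key, record);
    }
    if (record.status === "pending") {
      throw record.wakeable;
    }
    return record.value;
  };
}
```

The first read for a given key kicks off the fetch and suspends; once the promise resolves, subsequent reads return synchronously from the map.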

then(onFulfill: () => any, onReject: () => any): void | Thennable;
}

export interface Wakeable extends Thennable {
Contributor

So a Promise<void> is a valid thing to pass for a Wakeable, correct?

Contributor Author

Yes. You could use a native Promise instead of "wakeable". The type doesn't matter; React doesn't do anything with the value the thing resolves to. The resolution is just a signal to React to try rendering again.

Contributor

Cool. Yeah, as you mentioned this morning, I really like the idea of having this be the interface that the testable components require for simplicity. 👍

@bvaughn bvaughn marked this pull request as ready for review May 31, 2022 19:17
@@ -124,7 +124,7 @@
"@storybook/react": "^6.4.21",
"@tailwindcss/forms": "^0.5.0",
"@testing-library/jest-dom": "^5.16.3",
"@testing-library/react": "^12.1.4",
"@testing-library/react": "^13.3.0",
Contributor Author

Upgrading this fixed the createRoot vs render warning and our tests all still passed so...

Comment on lines +8 to +21
it("should render with no focus region", async () => {
await renderFocused(<Focuser />, {
focusContext: {
range: null,
rangeForDisplay: null,
},
sessionContext: {
duration: 60_000,
},
});

expect(await screen.queryByText("Focus off")).toBeInTheDocument();
expect(await screen.queryByText("0:00 – 1:00")).toBeInTheDocument();
});
Contributor Author

This is an example of the approach we might use to render a small part of our app to test in isolation. The renderFocused helper method could be shared between all tests. It accepts optional params which can be used to define a test scenario.

Comment on lines +10 to +22
await render(<HomePage />, {
replayClient: {
findMessages: async () =>
Promise.resolve({
messages: [
createConsoleMessage({
text: "This is a message",
})[1],
],
overflow: true,
}),
},
});
Contributor Author

This test demonstrates how we might test the entire app (or at least a larger subtree?) which is probably not something we'd do often?

In this case, I'm using one of the helper mock functions (createConsoleMessage) that I added a few weeks ago to stub a response from Replay.


export type MessageTuple = ["Console.newMessage", Message];

export function createConsoleMessage({
Contributor Author

I moved these methods into the shared package so the demo app tests can use them too.

Contributor Author

bvaughn commented May 31, 2022

After chatting with Jason, I'm going to go land this prototype for now. (It doesn't impact our devtools app any.) I'd love to get others playing with it and poking at it (and maybe coming up with their own POCs built on top of it).

@bvaughn bvaughn merged commit aba283b into main May 31, 2022
@bvaughn bvaughn deleted the bvaughn-architecture-proof-of-concept branch May 31, 2022 21:14
Comment on lines +29 to +30
const startPoint = getClosestPointForTime(replayClient, startTime);
const endPoint = getClosestPointForTime(replayClient, endTime);
Member

q: if I understand correctly - this is using a simple waterfall fetch on render, right? That is certainly fine, especially for demo purposes. I just wonder what's the idiomatic/canonical answer for removing the waterfall but I guess the answer is just to preload both of those and that can be done in parallel so the potential "render waterfall" here wouldn't quite matter. Is that correct understanding of those patterns?

Member

I guess that, partially, you have also mentioned the waterfall concern here - but based on that helper's API I'm not actually sure how to use it 😅

It has to be called from render as that is the only place from which one can suspend, right? But I'm surprised that it doesn't return any values cause I've imagined that it could be used like this:

const [startPoint, endPoint] = suspendInParallel([
  () => getClosestPointForTime(replayClient, startTime),
  () => getClosestPointForTime(replayClient, endTime)
])

It also seems to "leak" suspense to renders whereas usually it's promoted as something completely hidden from the call site (which is cool!). So perhaps I'm misunderstanding completely how this can be used.

However, I'm still curious if the preloading logic would remove the need for such a helper altogether cause it seems to me that it would but again, I feel like I might miss some details here.

Contributor Author

Yes, this would (in the worst case) be a waterfall. Couple of reasons why I think that's okay for now:

  1. This is still in the proof of concept stage.
  2. We can pre-populate this cache with other information as part of bootstrapping the recording. (I haven't done it yet but there is a TODO comment in the PointsCache to follow up with it.)
  3. Doing the time-to-point conversion is very fast, even when we do need to suspend.

We could also parallelize the requests using the helper utility you found, but I didn't actually do that for the first revision. The way you show it being used is how it was intended to be used when I started writing it (it's just only kind of half written at the moment).

Member

Thanks for the response! I also don't see a big problem in the used fetch on render approach here. I was just mainly curious about the given choices and possible alternatives from a theoretical PoV. Your answer has helped me to clarify the doubts that I had 👍

The way you show it being used is how it was intended to be used when I started to write it (it's just only kind of half written at the moment).

This makes sense then now 😅 .

Contributor Author

I have some small follow up changes I'm going to make this morning (actually woke up a few times thinking about them last night 😭) and I'll toss this one in as well 😁

Contributor Author

TBH the only real snag on suspendInParallel is that I don't know how to manage the return type (to be anything other than any[])

Member

Oh, I'm your guy then 😉

type AnyFunction = (...args: any[]) => any;

declare function suspendInParallel<T extends AnyFunction[]>(
  readers: [...T]
): { [K in keyof T]: ReturnType<Extract<T[K], AnyFunction>> };

Note that you might need to cast the final result within the implementation itself because TS won't be able to verify that it matches this annotation. For public consumers this should work great though.

Member

Perhaps I should explain this weirdo...

  1. TS doesn't really have a tuple constraint (which is super annoying), so we need to use an array constraint on the generic: T extends AnyFunction[].
  2. We need to make it actually narrow the inline arrays used as arguments here to a tuple type, and using [...T] is a neat trick to do it.
  3. Then we proceed to the return type, which is a mapped type - it allows us to iterate over the type per property. That is why we had to make it infer those array arguments as tuples: we want to "unwrap" types per position. So we just iterate over the keys of T and then do some type-level manipulation on a specific element (T[K]).
  4. It should be enough to just use ReturnType<T[K]> here, and in TS 4.7 it actually is enough... but that's a new improvement. That extra Extract just allows us to "cast" this at the generic level and force TS to accept T[K] as a type argument passed to ReturnType (which only allows functions as type arguments).
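For illustration, here is one possible body to go with that signature. This is a sketch only, not the PR's actual suspendInParallel; note the internal cast mentioned above, and the simplifying assumption that anything thrown by a callback is a thenable (a real implementation would also need to distinguish errors):

```typescript
type AnyFunction = (...args: any[]) => any;

// Call every callback even if an earlier one suspends, so that all of the
// underlying fetches start in parallel; re-throw afterward so React waits
// for all of them before retrying the render.
function suspendInParallel<T extends AnyFunction[]>(
  callbacks: [...T]
): { [K in keyof T]: ReturnType<Extract<T[K], AnyFunction>> } {
  const results: any[] = [];
  const thrown: unknown[] = [];
  for (const callback of callbacks) {
    try {
      results.push(callback());
    } catch (error) {
      thrown.push(error);
    }
  }
  if (thrown.length > 0) {
    // Assumes thrown values are thenables (wakeables); combine them so a
    // single resolution signal covers all pending fetches.
    throw Promise.all(thrown as Promise<unknown>[]);
  }
  // TS can't verify the mapped tuple type inside the body, hence the cast.
  return results as any;
}
```

With this shape, `const [startPoint, endPoint] = suspendInParallel([...])` from the earlier comment would type-check, with each element getting its own inferred type.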

Contributor Author

Oh hell yes! Thank you for the type info and the explanation :)

}

function getRecordMap(): TimeToRecordMap {
return getCacheForType(createMap);
Member

It would be very interesting to learn about the advantages of this React-owned cache. What can it do that a userland implementation can't? Or what does it automate that a userland implementation would have to craft on its own?

Contributor Author

I don't know if this will be the final API for React Suspense caches. It's the one React DevTools has been using for a year or so now, but it may change before public release.

React owning the underlying cache (map) allows it to manage invalidation (e.g. invalidate this bit of data and re-fetch it at a lower priority) without preventing it from re-rendering a high-priority update in the meantime. If we used a global cache, React would be unable to do the high-priority update because the global cache would now be missing data.

In Replay's case, this doesn't matter much because, for example, we don't need to invalidate this cache. It grows over time, but once data is added to it, it is never "wrong" or "outdated".

@bvaughn bvaughn mentioned this pull request Jun 1, 2022
2 tasks
bvaughn added a commit that referenced this pull request Jun 1, 2022
Relates to architectural proposal #6932; small follow up to #6992.

* Add unit tests for the Suspense utility and add a small performance optimization to the time-to-points fetching so that we now suspend in parallel for begin and end points.
* Separate the contexts and their React hooks integration into separate files with their own React component. This pattern simplifies usage of the contexts (as we add more) and provides greater flexibility when testing.