Architecture proof of concept #6992
Conversation
const { filterByText, levelFlags } = useContext(ConsoleFiltersContext);
const { range, isTransitionPending: isFocusTransitionPending } = useContext(FocusContext);
Here's one part of the proposal: Global/shared state comes from React contexts.
};
}

const { countAfter, countBefore, didOverflow, messages } = getMessages(sessionId, focusMode);
Here's another key part of the proposal: Values that come from the backend (like messages) use Suspense patterns to make them easy to consume from the view's perspective. (Behind the scenes, if getMessages needs to fetch/refetch data, it can just suspend to do that.)
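As a rough sketch of that pattern (all names and data shapes below are hypothetical illustrations, not the actual cache code), a Suspense-friendly read keeps a status record per key and throws the pending promise on a cache miss:

```typescript
// Minimal sketch of a Suspense-style cache record. Everything here is an
// illustrative assumption, not the real getMessages implementation.
type CacheRecord<T> =
  | { status: "pending"; value: Promise<void> }
  | { status: "resolved"; value: T };

const messageCache = new Map<string, CacheRecord<string[]>>();

// Stand-in for a backend request.
async function fetchMessagesFromBackend(sessionId: string): Promise<string[]> {
  return [`message for ${sessionId}`];
}

// Suspense-friendly read: returns cached messages, or throws a promise
// ("suspends") that resolves once the fetch completes.
function getMessages(sessionId: string): string[] {
  let record = messageCache.get(sessionId);
  if (record === undefined) {
    const pending: CacheRecord<string[]> = {
      status: "pending",
      value: fetchMessagesFromBackend(sessionId).then((messages) => {
        messageCache.set(sessionId, { status: "resolved", value: messages });
      }),
    };
    messageCache.set(sessionId, pending);
    record = pending;
  }
  if (record.status === "pending") {
    throw record.value; // React retries rendering after this resolves.
  }
  return record.value;
}
```

The view never sees the fetch; it just calls the read function during render, and a Suspense boundary handles the pending state.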
// Note that we are intentionally not storing derived values like this in state.
const filteredMessages = useFilteredMessages(messages, {
  ...levelFlags,
  filterByText,
Here's another key part of the proposal: Derived values (like filtered messages) are just memoized selectors. We don't need to store them in parallel "state" (or Redux).
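To sketch that idea outside of React (the message shape and filter semantics below are illustrative assumptions, not the real useFilteredMessages API), the selector is just a pure function over the raw messages; a component would wrap it in useMemo:

```typescript
// Hypothetical message shape and level flags for the sketch.
type Message = { level: "error" | "warning" | "log"; text: string };
type LevelFlags = { showErrors: boolean; showWarnings: boolean; showLogs: boolean };

// Derived data as a plain selector: computed on demand from the source
// messages plus the current filters, never stored in parallel state.
function filterMessages(
  messages: Message[],
  { showErrors, showWarnings, showLogs }: LevelFlags,
  filterByText: string
): Message[] {
  return messages.filter((message) => {
    switch (message.level) {
      case "error":
        if (!showErrors) return false;
        break;
      case "warning":
        if (!showWarnings) return false;
        break;
      case "log":
        if (!showLogs) return false;
        break;
    }
    return filterByText === "" || message.text.includes(filterByText);
  });
}
```

Because the selector is pure, memoizing it (e.g. with useMemo keyed on messages and filters) gives the same caching benefit that duplicated "filtered" state would, without the risk of the two copies drifting apart.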
const isTransitionPending = isFocusTransitionPending;

return (
  <div className={isTransitionPending ? styles.ContainerPending : styles.Container}>
A nice side benefit of building our UI on top of Suspense and transitions is that it's super easy to tweak the UI slightly (like dimming it) to indicate that the data being shown may be a little stale and we're updating it.
<FocusContext.Provider value={focusContext}>
  <ConsoleFiltersContext.Provider value={consoleFiltersContext}>
This component uses React primitives (like useState and context) to set up and share "global" state with other parts of the app. This gives us a nice point of integration with other React APIs (like transitions).
// Should we force serialization?
// Should we cancel in-flight requests and start new ones?

export default class ReplayClient {
This was an earlier idea and it's kind of redundant with the Suspense caches. Ignore it for now; I'm going to rewrite this class.
export type ConsoleFiltersContextType = {
  // Filter text to display in the UI.
  // This value is updated at React's default, higher priority.
  filterByDisplayText: string;

  // Text to filter console messages by.
  // This value is updated at a lower, transition priority.
  filterByText: string;

  // Filter-by-text is about to be updated as part of a transition;
  // UI that consumes the focus for Suspense purposes may want to reflect the temporary pending state.
  isTransitionPending: boolean;

  // Types of console messages to include (or filter out).
  levelFlags: ConsoleLevelFlags;

  update: (filterByText: string, levelFlags: ConsoleLevelFlags) => void;
};
These contexts are nice for sharing state between different parts of the app (without prop drilling). I guess they also give us a nice, super lightweight way to statically render our UI if we want to for visual testing (but I haven't thought about this use case much yet).
// or throws a Wakeable to be resolved once messages have been fetched.
//
// This method is Suspense-friendly; it is meant to be called from a React component during render.
export function getMessages(
This is where the business logic lives for when to fetch/refetch messages from the backend (related to "soft focus") vs when to filter in-memory.
This is probably a lot more complex than a typical suspense cache because of the added soft focus logic.
// Helper function to read from multiple Suspense caches in parallel.
// This method will re-throw any thrown value, but only after also calling subsequent caches.
export function suspendInParallel(callbacks: Function[]): void {
We could use this as kind of a micro-optimization to fetch points in parallel. I didn't use it for now (because I didn't want to make "suspensey" React component code harder to read).
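For reference, the behavior the comment describes (call every reader so all fetches start, then re-throw the first thrown value) might be sketched like this; the body below is my assumption about the intent, not the in-repo code, and it returns the collected values rather than void:

```typescript
// Sketch of a suspendInParallel-style helper: invoke every reader so that
// each cache can kick off its fetch, then re-throw the first thrown value
// (a promise, when a cache suspends) only after all readers have run.
function suspendInParallel<T>(callbacks: Array<() => T>): T[] {
  const results: T[] = [];
  let didThrow = false;
  let firstThrown: unknown;

  for (const callback of callbacks) {
    try {
      results.push(callback());
    } catch (thrown) {
      if (!didThrow) {
        didThrow = true;
        firstThrown = thrown;
      }
    }
  }

  if (didThrow) {
    throw firstThrown; // Suspend (or error) only after all fetches started.
  }
  return results;
}
```

The point of deferring the throw is that all of the underlying requests get started in parallel, instead of one suspension blocking the readers after it.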
return getCacheForType(createMap);
}

export function getClosestPointForTime(time: number, sessionId: SessionId): ExecutionPoint {
This is more of a typical Suspense cache implementation. No extra business logic like soft focus; just a map of key (time) to value (execution point) that we build over time, suspending and loading information from the backend when we need to.
then(onFulfill: () => any, onReject: () => any): void | Thennable;
}

export interface Wakeable extends Thennable {
So a Promise<void> is a valid thing to pass for a Wakeable, correct?
Yes. You could use a native Promise instead of "wakeable". The type doesn't matter; React doesn't do anything with the value the thing resolves to. The resolution is just a signal to React to try rendering again.
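A quick illustration of that point (the structural Thennable interface is re-declared here for the sketch, mirroring the one in this file):

```typescript
// Re-declared structural interface, mirroring the Thennable in this file.
interface Thennable {
  then(onFulfill: () => any, onReject: () => any): void | Thennable;
}

// A native Promise<void> is structurally compatible: a cache can throw it,
// and React only needs to attach a callback to know when to retry rendering.
const wakeable: Thennable = Promise.resolve();
```

This is why the exact type is irrelevant; anything with a compatible `then` works as the "try rendering again" signal.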
Cool. Yeah, as you mentioned this morning, I really like the idea of having this be the interface that the testable components require for simplicity. 👍
@@ -124,7 +124,7 @@
     "@storybook/react": "^6.4.21",
     "@tailwindcss/forms": "^0.5.0",
     "@testing-library/jest-dom": "^5.16.3",
-    "@testing-library/react": "^12.1.4",
+    "@testing-library/react": "^13.3.0",
Upgrading this fixed the createRoot vs render warning, and our tests all still passed, so...
it("should render with no focus region", async () => {
  await renderFocused(<Focuser />, {
    focusContext: {
      range: null,
      rangeForDisplay: null,
    },
    sessionContext: {
      duration: 60_000,
    },
  });

  expect(screen.queryByText("Focus off")).toBeInTheDocument();
  expect(screen.queryByText("0:00 – 1:00")).toBeInTheDocument();
});
This is an example of the approach we might use to render a small part of our app to test in isolation. The renderFocused helper method could be shared between all tests. It accepts optional params which can be used to define a test scenario.
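Outside of a real testing-library setup, the override-merging part of such a helper might be sketched like this (the context names and shapes here are hypothetical, loosely based on the test above):

```typescript
// Hypothetical context shapes for the sketch.
type FocusContextValue = {
  range: [number, number] | null;
  rangeForDisplay: [number, number] | null;
};
type SessionContextValue = { duration: number };

const defaultFocusContext: FocusContextValue = { range: null, rangeForDisplay: null };
const defaultSessionContext: SessionContextValue = { duration: 0 };

// A renderFocused-style helper would build its provider values by merging
// sensible defaults with the per-test overrides passed in.
function buildTestContexts(
  overrides: {
    focusContext?: Partial<FocusContextValue>;
    sessionContext?: Partial<SessionContextValue>;
  } = {}
) {
  return {
    focusContext: { ...defaultFocusContext, ...overrides.focusContext },
    sessionContext: { ...defaultSessionContext, ...overrides.sessionContext },
  };
}
```

Each test then only specifies the slice of state that matters for its scenario, and everything else falls back to the defaults.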
await render(<HomePage />, {
  replayClient: {
    findMessages: async () =>
      Promise.resolve({
        messages: [
          createConsoleMessage({
            text: "This is a message",
          })[1],
        ],
        overflow: true,
      }),
  },
});
This test demonstrates how we might test the entire app (or at least a larger subtree?), which is probably not something we'd do often.
In this case, I'm using one of the helper mock functions (createConsoleMessage) that I added a few weeks ago to stub a response from Replay.
export type MessageTuple = ["Console.newMessage", Message];

export function createConsoleMessage({
I moved these methods into the shared package so the demo app tests can use them too.
After chatting with Jason, I'm going to go land this prototype for now. (It doesn't impact our devtools app any.) I'd love to get others playing with it and poking at it (and maybe coming up with their own POCs built on top of it).
const startPoint = getClosestPointForTime(replayClient, startTime);
const endPoint = getClosestPointForTime(replayClient, endTime);
q: if I understand correctly, this is using a simple fetch-on-render waterfall, right? That is certainly fine, especially for demo purposes. I just wonder what the idiomatic/canonical answer is for removing the waterfall. I guess the answer is just to preload both of those points, and since that can be done in parallel, the potential "render waterfall" here wouldn't really matter. Is that a correct understanding of those patterns?
I guess that, partially, you have also mentioned the waterfall concern here, but based on that helper's API I'm not actually sure how to use it 😅
It has to be called from render, as that is the only place from which one can suspend, right? But I'm surprised that it doesn't return any values, because I'd imagined it could be used like this:
const [startPoint, endPoint] = suspendInParallel([
() => getClosestPointForTime(replayClient, startTime),
() => getClosestPointForTime(replayClient, endTime)
])
It also seems to "leak" Suspense into renders, whereas usually it's promoted as something completely hidden from the call site (which is cool!). So perhaps I'm completely misunderstanding how this can be used.
However, I'm still curious whether the preloading logic would remove the need for such a helper altogether, because it seems to me that it would; but again, I feel like I might be missing some details here.
Yes, this would (in the worst case) be a waterfall. A couple of reasons why I think that's okay for now:
- This is still in the proof-of-concept stage.
- We can pre-populate this cache with other information as part of bootstrapping the recording. (I haven't done it yet, but there is a TODO comment in the PointsCache to follow up on it.)
- Doing the time-to-point conversion is very fast when we do need to suspend.

We could also parallelize the requests using the helper utility you found, but I didn't actually do that for the first revision. The way you show it being used is how it was intended to be used when I started to write it (it's just only kind of half-written at the moment).
Thanks for the response! I also don't see a big problem with the fetch-on-render approach used here. I was mainly curious about the choices and possible alternatives from a theoretical PoV. Your answer has helped clarify the doubts I had 👍

> The way you show it being used is how it was intended to be used when I started to write it (it's just only kind of half written at the moment).

This makes sense now 😅.
I have some small follow up changes I'm going to make this morning (actually woke up a few times thinking about them last night 😭) and I'll toss this one in as well 😁
TBH the only real snag on suspendInParallel is that I don't know how to manage the return type (to be anything other than any[]).
Oh, I'm your guy then 😉
type AnyFunction = (...args: any[]) => any;
declare function suspendInParallel<T extends AnyFunction[]>(
readers: [...T]
): { [K in keyof T]: ReturnType<Extract<T[K], AnyFunction>> };
Note that you might need to cast the final result within the implementation itself because TS won't be able to verify that it matches this annotation. For public consumers this should work great though.
Perhaps I should explain this weirdo...
- TS doesn't really have a tuple constraint (which is super annoying), so we need to use an array constraint on the generic: T extends AnyFunction[].
- We need to make it actually narrow the inline arrays used as arguments here to a tuple type, and using [...T] is a neat trick to do it.
- Then we proceed to the return type, which is a mapped type; it allows us to iterate over the type per property. That is why we had to make it infer those array arguments as tuples: we want to "unwrap" types per position. So we just iterate over the keys of T and then do some type-level manipulation on a specific element (T[K]).
- It should be enough to just use ReturnType<T[K]> here, and in TS 4.7 it actually is enough... but that's a new improvement. That extra Extract just allows us to "cast" this at the generic level and force TS to accept T[K] as a type argument passed to ReturnType (as it only allows functions as type arguments).
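Putting those pieces together, here is a sketch of the fully-typed helper and the tuple inference it buys. The implementation body (and its internal cast) is my addition for illustration; it ignores the re-throw semantics and focuses only on the typing:

```typescript
type AnyFunction = (...args: any[]) => any;

// The mapped-type signature from above, attached to a sketch implementation.
// As noted, the body needs a cast; the annotation is what consumers see.
function suspendInParallel<T extends AnyFunction[]>(
  readers: [...T]
): { [K in keyof T]: ReturnType<Extract<T[K], AnyFunction>> } {
  return readers.map((read) => read()) as unknown as {
    [K in keyof T]: ReturnType<Extract<T[K], AnyFunction>>;
  };
}

// Inferred as [number, string] rather than any[]:
const [startPoint, label] = suspendInParallel([
  () => 123,
  () => "point-label",
]);
```

With this signature, each position in the destructured result gets the return type of the corresponding reader, which is exactly the any[] problem raised above.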
Oh hell yes! Thank you for the type info and the explanation :)
}

function getRecordMap(): TimeToRecordMap {
  return getCacheForType(createMap);
It would be very interesting to learn about the advantages of this React-owned cache. What can it do that a userland implementation can't? Or what does it automate that a userland implementation would have to craft on its own?
I don't know if this will be the final API for React Suspense caches. It's the one React DevTools has been using for a year or so now, but it may change before public release.
React owning the underlying cache (map) allows it to manage invalidation (e.g. invalidate this bit of data and re-fetch it at a lower priority) without preventing it from re-rendering a high-priority update in the meantime. If we used a global cache, React would be unable to do the high-priority update because the global cache would now be missing data.
In Replay's case this doesn't matter much because, for example, we don't need to invalidate this cache. It grows over time, but once data is added to it, it is never "wrong" or "outdated".
Relates to architectural proposal #6932; a small follow-up to #6992.
* Add unit tests for the Suspense utility and add a small performance optimization to the time-to-points fetching so that we now suspend in parallel for begin and end points.
* Separate the contexts and their React hooks integration into separate files with their own React component. This pattern simplifies usage of the contexts (as we add more) and provides greater flexibility when testing.
Related to the architecture proposal #6932
This branch explores a different architectural direction for Replay devtools. The branch attempts to illustrate the following:
(packages/bvaughn-architecture-demo/components/console/MessagesList.tsx).

Here is a walkthrough of the new design and implementation: https://www.loom.com/share/a8d5fa276cfb4f0181f07378bfa9636e
If we took this idea as far as I'd like to take it, our state architecture might look something like this:
For the purpose of this branch, I'm focusing on one data type: messages. Messages are an interesting case study because they involve both backend fetching (findMessagesInRange) and in-memory filtering (for long points, or in the case of no overflow, where we are able to fetch all recording messages in memory).

I think the above constraints make messages a good smoke-test candidate for this new architecture. If we can make messages work efficiently and with a nice API, then we can probably do so for other things as well.
To run the demo app after checking out this branch:
Note that I think facebook/react#24634 is probably a blocker for this issue since certain types of errors can cause hot cycles (in DEV with Next.js). Hopefully this issue will be resolved soon on the React side or on the Next side.