Hybrid term/type tests #4766
Comments
I might be missing the intent of this issue as a feature request for Vitest's typecheck mode, but as an alternative approach, I think it's perfectly reasonable to have both runtime and type assertions in the same test. I was looking at the tRPC code base the other day and I found this example:

```ts
test('decorate independently', async () => {
  const result = await ctx.proxy.getContext.mutate();
  expect(result).toEqual({
    user: mockUser,
    foo: 'bar',
  });
  expectTypeOf(result).toEqualTypeOf<{
    user: User | null;
    foo: 'bar';
  }>();
});
```

I'm not sure this approach would fit your use case, but I thought I'd share it, since I found this pattern has nice ergonomics and I've just started to employ it in my own project.
@hi-ogawa thanks, I have done something similar in the past, but what this issue is asking for is to bring support to the test runner output too, not just the IDE.
I believe I just ran into the same issue/desire. Is there a reason to prefer separated tests?
This is a technical limitation for now. |
What version will this be released in? I'm eager to take it for a test drive, because it's not clear to me how it all works together: @sheremet-va said test results are not merged, so I want to see how I can use them and how they're reported.
It will be released in the next beta, 2.1.0-beta.6, probably in a few days.
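For anyone wanting to try this out once it lands, a minimal config sketch is below. It assumes Vitest's documented `typecheck` options; the `include` glob is a hypothetical choice that would let the same `*.test.ts` files carry both runtime and type assertions, rather than only the default `*.test-d.ts` files:

```typescript
// vitest.config.ts — a sketch, not an official recommendation.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    typecheck: {
      // Run the type checker alongside the runtime test run.
      enabled: true,
      // By default only *.test-d.ts files are type-checked; widening the
      // glob (an assumption here) lets hybrid files be picked up too.
      include: ['**/*.test.ts', '**/*.test-d.ts'],
    },
  },
});
```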
Clear and concise description of the problem
Currently, type tests must live in a different module than term tests ("concrete" seems to be the term used by the docs).
This creates a separation of function rather than of concern.
That is, there are times when a test case concerns assertions about both the terms and the types.
Having to keep these in two separate modules is not ideal, since it creates distance between related concerns.
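To illustrate the split being described, here is a sketch of the two modules the current setup forces (file names and the `getUser` function are hypothetical; `.test-d.ts` is the suffix Vitest's typecheck mode picks up by default):

```typescript
// user.test.ts — runtime ("term"/"concrete") assertions live here.
import { test, expect } from 'vitest';
import { getUser } from './user';

test('returns the mock user', async () => {
  expect(await getUser()).toEqual({ name: 'Ada' });
});

// user.test-d.ts — the matching type assertions must live in a
// separate module, far from the runtime check they relate to.
import { test, expectTypeOf } from 'vitest';
import { getUser } from './user';

test('resolves to a nullable user', () => {
  expectTypeOf(getUser).returns.resolves.toEqualTypeOf<{ name: string } | null>();
});
```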
Suggested solution
Off the top of my head, I'm really not sure what design and/or implementation considerations are at play. I think it is desirable for the test runner's terminal/UI/etc. output to "merge" the failures from a test case into a unified view, but I don't think it's simply a matter of taking the current outputs and concatenating them. Some thought will probably be required about how best to present them: grouping vs. interleaving, etc.
Alternative
No response
Additional context
No response