Improve onError validation #1730
Conversation
Codecov Report

@@            Coverage Diff            @@
##              next    #1730    +/-  ##
==========================================
+ Coverage    88.93%   90.11%   +1.17%
==========================================
  Files           75       77       +2
  Lines         2259     2316      +57
  Branches       404      435      +31
==========================================
+ Hits          2009     2087      +78
+ Misses         201      186      -15
+ Partials        49       43       -6

Continue to review full report at Codecov.
);
};

const FormikExample = () => {
- const FormikExample = () => {
+ export default FormikOurValidation() {
I prefer export default to always be the last export. It is more readable.
We systematically use the other way around to 1. increase the code density of the demos, 2. reduce the duplication of the demo's name. I think that it's important we stick to a single convention.
Yeah, but for now the pickers are a standalone project. And if we are not planning to migrate the docs before v5, we could stick to our internal conventions for now (before we merge the docs).
I think that we can break down the discussion into 3 different topics:
- Standaloneness. The picker components are part of Material-UI, like any other official component. I think that we should treat the components here as if they were inside the mono repository. Where the code is hosted shouldn't influence the developer experience; it should be at the core team's discretion how it wants to work. Why? The consistency of the components is one of the important value propositions of the library, and I think that steps in this direction improve the DX.
- Progressiveness. I think that we should leverage all the opportunities to reduce the gap with the "standard way". Any step in the long-term direction is good to take. What's the advantage of delaying it? I find it interesting to handle it as we go, as it will be one less change to handle in the future.
- Convention evolution. I think that we should evolve the convention as we learn from our users and as the environment changes. An RFC sounds like the best framework to make changes.
For history, we used to follow the approach of this pull request; it was optimized for reducing the switching cost between class and functional components. With hooks, this requirement is no longer relevant.
<DesktopDatePicker {...stateProps} disablePast onError={onErrorMock} />
));

component.find('input').simulate('change', {
Is this a new test? What about forcing ourselves not to write a single new test with enzyme? Preferring testing-library as much as possible, and Cypress when testing-library isn't enough (10% of the cases?).
I am not using testing-library in the pickers yet.
What about introducing it then? So we stop investing time into a direction we know we want to go back from?
Probably you are right. I thought that we would be able to move 💯 to Cypress, but it will not be possible.
I had a look at the current state of the tests on the date picker components. It seems that we have:
- 63 with enzyme mount/shallow (shallow much less frequent) (counted with *.test.tsx + it(')
- 27 with Cypress (counted with *.spec.ts + it(')
- 16 with date management (counted with *.test.ts + it(')
What's the difference between the usage of "test(" and "it("? It seems to be a simple alias https://jestjs.io/docs/en/api#testname-fn-timeout, so what about updating the codebase to a single approach to ease the static analysis of the codebase? The usage of "it" is more frequent, so I'm in favor of using "it" everywhere.
For comparison, the Autocomplete component has 82 tests, and the core components have 1956 tests. So it seems that migrating the date picker components in any direction will be a much smaller effort (1/10) than the one we currently have in progress from enzyme to testing-library for the core (600+ tests still to be migrated).
I am not sure how you calculated the test count, but we have 77 Cypress tests (counted from the Cypress dashboard comment).
@dmtrKovalenko Interesting, it's the first time I see the Cypress dashboard 👌. I had ignored (1.) App.spec.ts and (2.) VisualRegression.spec.ts in the previous count. I believe we could remove 1. per the structure of the docs in the mono-repository, and we need to replace 2. with Argos-CI fixture screenshot tests. After these two subtractions, we get the correct amount :) 27-28.
Actually, it's interesting to look at the Cypress dashboard. Any idea why it takes 26s to run 28 tests? If we were to write all our tests with Cypress, would it mean 1956 / 28 * 26 = 1816s (30min) of test duration per browser target?
In the mono-repository test stack, we run them in about 15-60s per browser (we run them in 4 different browsers). I suspect that fixing #1674 can speed up the tests by a factor x3-5. But will it be enough? It seems that we are still talking about one order of magnitude of slowdown if we make Cypress a systematic choice.
This raises a second question, what's our time budget (the time it takes to get feedback and $/month)? Would we be below or above?
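The linear extrapolation above can be sketched as a quick calculation (the numbers come from this thread; the linear-scaling assumption is mine and ignores any fixed startup overhead that Cypress amortizes across a run):

```typescript
// Rough linear projection of total suite duration, assuming per-test cost
// stays constant as the suite grows (an assumption, not a measurement).
function estimateSuiteSeconds(
  totalTests: number,
  sampleTests: number,
  sampleSeconds: number,
): number {
  return (totalTests / sampleTests) * sampleSeconds;
}

// 1956 core tests, extrapolated from 28 Cypress tests taking 26s:
const projected = estimateSuiteSeconds(1956, 28, 26);
console.log(Math.round(projected)); // ≈ 1816s, i.e. roughly 30min per browser target
```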
We don't want to move all component tests to Cypress. That is overkill for 99% of our tests (we're talking about 1s per test instead of 50ms), especially considering they polyfill a lot of browser behavior (that we want to control).
There are probably some tests that might be more suited for e2e tests, but I'd rather use a lower-level library for that (like Puppeteer or Playwright). Cypress really shines once you have complete user stories, e.g. login or data fetching.
In this case, I would be in favor of migrating all the tests to testing-library, until we hit a roadblock, something that is compelling enough to introduce a new tool in the stack, that the engineers working on Material-UI need to learn and maintain.
We had one case like this in the past, to test the UMD release. We had multiple regressions, we couldn't use Karma, so we went with Puppeteer (the test is still present).
As far as I can envision the limits of testing-library + Karma, we have:
- No way to test drag and drop; it's going to be important for the DataGrid and TreeView. For the ClockView of the date picker, it might also be a concern, but I think a less important one: there are no underlying elements that could catch the events and mess around.
- No way to test the tab behavior. I have tested this diff, using Testing: Experiment with Puppeteer for E2E testing WordPress/gutenberg#5618 as inspiration; it seems to work with Puppeteer.
diff --git a/packages/material-ui/test/umd/run.js b/packages/material-ui/test/umd/run.js
index 367466663..8c34ecb2b 100644
--- a/packages/material-ui/test/umd/run.js
+++ b/packages/material-ui/test/umd/run.js
@@ -71,6 +71,8 @@ function App() {
return (
<React.Fragment>
+ <input type="text" id="foo" />
+ <input type="text" id="bar" />
<Button onClick={() => setOpen(true)}>Super Secret Password</Button>
<Dialog open={open}>
1-2-3-4-5
@@ -141,8 +143,12 @@ async function run() {
const page = await startBrowser();
await page.goto(`http://${host}:${port}`);
+ await page.click( '#foo' );
+ await page.keyboard.press( 'Tab' );
+ const activeElement = await page.evaluate(() => {
+ return document.activeElement.id;
+ });
+ console.log('activeElement', activeElement); // output bar
await expect(page).toClick('button', { text: 'Super Secret Password' });
await expect(page).toMatch('1-2-3-4-5');
} catch (err) {
log.fatal({
name: 'test',
- No reliable way to test the performance of the components (do we?). It's going to be important for the DataGrid. We might also need to track this metric for #1674.
- No built-in parallelism. We can probably still write x2 more tests without it harming our DX. After that, we might have an issue with the time it takes for the tests to run. We could split the tests into two chunks, one half going to BrowserStack, the other to SauceLabs.
- Anything else? Did I forget something? The debugging experience isn't great, but it doesn't seem to have been something we have struggled a lot with (so far).
No reliable way to test the performance of the components (do we?)
- I experimented a bit with React.Profiler in our test suite, which means that we wouldn't need to write separate tests and benchmarks but could reuse the test cases for benchmarking. That's a viable option if I remember correctly. It just needs some infra so that we get a good sample size and don't run it on every PR.
No way to test drag and drop, it's going to be important for the DataGrid and TreeView.
- That is something I wouldn't let automated testing touch anyway. Fixtures and manual testing would be way more reliable and would also ensure that drag'n'drop is actually usable. I think this should be tested manually on different devices.
We can probably still write x2 more tests without it harming our DX.
- Please consider that your workflow is the issue. You absolutely do not need to run the full test suite in watch mode when working on the codebase, for virtually every use case. --grep and .only give you all the tools you need.
No way to test the tab behavior.
- This is something I never understood. The only meaningful behavior you're missing out on is :focus-visible, and that is only visual anyway. For all the rest, .focus() should be enough, no? If you're interested in tab order, then you need to test it manually with an actual browser/assistive technology.
- Great. I have no knowledge of how this should be done correctly. My understanding of the problem is limited to: reproducibility is paramount. Without a stable, noise-free measure, we can't do anything.
- Agree, manual testing should probably be the main tool used when iterating on the logic. Maybe one automated test, as a rough safety belt, could be valuable.
- My concern is with the CI, the delay of the feedback loop on pull requests.
- Considering the TrapFocus component, do we have enough tools to test it correctly?
@@ -0,0 +1,108 @@
import React from 'react';
import Code from '../../_shared/Code';
Using Code breaks the isolation of the demo, please use something else.

import Code from '../../_shared/Code';
I think that not displaying the error at all would provide the best tradeoff, maybe relying on the TextField feedback only?
What about form state understanding? Now it is really easy to understand how this example works.
I think that it depends on the objective. Should we 1. expect the developer to use this demo or 2. is it only for illustration?
If it's 2., we will prevent developers from seeing the source, for instance like in https://material-ui.com/components/popover/#anchor-playground. If it's 1., we absolutely need to make it work in isolation.
const DatePickerField: React.FC<DatePickerFieldProps> = ({
  form,
  field: { value, name },
  maxDate = new Date('2100-01-01'),
  minDate = new Date('1900-01-01'),
  getShouldDisableDateError,
  ...other
}) => {
For consistency
- const DatePickerField: React.FC<DatePickerFieldProps> = ({
-   form,
-   field: { value, name },
-   maxDate = new Date('2100-01-01'),
-   minDate = new Date('1900-01-01'),
-   getShouldDisableDateError,
-   ...other
- }) => {
+ function DatePickerField(props: DatePickerFieldProps) {
+   const {
+     form,
+     field: { value, name },
+     maxDate = new Date('2100-01-01'),
+     minDate = new Date('1900-01-01'),
+     getShouldDisableDateError,
+     ...other
+   } = props;
What do you mean by consistency? function vs const?
I mean that we systematically write private components following this pattern:
function DatePickerField(props: DatePickerFieldProps) {
const {
form,
field: { value, name },
maxDate = new Date('2100-01-01'),
minDate = new Date('1900-01-01'),
getShouldDisableDateError,
...other
} = props;
and for public ones:
const DatePickerField = React.forwardRef(function DatePickerField(
props: DatePickerFieldProps,
ref: React.Ref<HTMLElement>,
) {
const {
form,
field: { value, name },
maxDate = new Date('2100-01-01'),
minDate = new Date('1900-01-01'),
getShouldDisableDateError,
...other
} = props;
value,
onChange,
mask = '__/__/____',
startText = 'Start',
endText = 'End',
inputFormat: passedInputFormat,
...restPropsForTextField
minDate: __minDate = defaultMinDate,
For consistency
- minDate: __minDate = defaultMinDate,
+ minDate: minDateProp = defaultMinDate,
Doesn't it look less clear? Accidentally typing minDateProp when you mean minDate is much, much easier than accidentally typing __minDate. And __ is some kind of convention for naming private variables?
No specific preference, we could make it the new convention, if we do, could you update all the other components (core, lab, etc.)?
Probably yes :)
I'll give you more context: here we had exactly the same naming, minDateProp, but it was problematic because it appeared in the autocomplete for minDate, so I decided to use another convention.
Co-authored-by: Olivier Tassinari <olivier.tassinari@gmail.com>
This PR closes #1675
Description
Reworked our internal validation to be more useful and easier to use in form integration.
Breaking changes
Removed:
- minDateMessage
- maxDateMessage
- invalidDateMessage
- strictCompareDates
- emptyText
Changed the onError callback to return a reason why the error must be displayed. We are not displaying English error messages by default; if the input is invalid, by default the picker will only show a red-colored TextField. So from now on, form validation sharing the internal pickers' logic will look like this (reason is a well-typed string union):
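For illustration only, here is a minimal sketch of what reason-based validation could look like; the DateValidationError union members and the validateDate helper below are assumptions made up for this example, not the pickers' actual API:

```typescript
// Hypothetical sketch: the message props are gone, and validation yields a
// well-typed reason (or null when the value is valid).
type DateValidationError = 'invalidDate' | 'minDate' | 'maxDate' | null;

interface ValidationProps {
  minDate?: Date;
  maxDate?: Date;
}

// Returns the reason the value is invalid, or null when it is valid.
function validateDate(value: Date, { minDate, maxDate }: ValidationProps): DateValidationError {
  if (Number.isNaN(value.getTime())) {
    return 'invalidDate'; // an Invalid Date has a NaN timestamp
  }
  if (minDate && value < minDate) {
    return 'minDate';
  }
  if (maxDate && value > maxDate) {
    return 'maxDate';
  }
  return null;
}

// The form integration maps the reason to its own copy, instead of the
// picker shipping default English messages:
const errorMessages: Record<Exclude<DateValidationError, null>, string> = {
  invalidDate: 'Invalid date format',
  minDate: 'Date should not be before the minimal date',
  maxDate: 'Date should not be after the maximal date',
};

const reason = validateDate(new Date('2020-01-01'), { minDate: new Date('2021-01-01') });
const helperText = reason === null ? '' : errorMessages[reason];
console.log(helperText);
```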