(🎁) Improve usability of --install-types
#10600
Another useful addition could be to make mypy emit the libraries it would install, so you can do something like the sketch below.
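Presumably something like the following, where `--print-type-requirements` is a made-up stand-in for whatever such a flag would be called (it does not exist in mypy today):

```
# hypothetical flag: print the needed stub packages instead of installing them
$ mypy --print-type-requirements src/ > stub-requirements.txt
$ pip install -r stub-requirements.txt
```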
Just to add to this, the current two-step flow requires running mypy once to fail on missing stubs and then again after `--install-types`. Which seems super gross and leads to a jumbled and confusing console output with errors followed by a clean run. IMO, if `--install-types` is passed, mypy should install the stubs and type check in a single invocation.
I wouldn't recommend running `--install-types` on every CI run. Right now this is possible (as a one-time thing -- commit the changes to `requirements.txt` afterwards).
This isn't very intuitive, however. I can also see how some projects would prefer not to maintain stub requirements explicitly and are happy to use the latest versions of all available/known stub packages. I'm trying to summarize the ideas above below. I can see several somewhat different but related use cases. The option names are open to bike-shedding.

**Use case 1: Use requirements.txt**

Infer the needed stub packages from `requirements.txt`. This could be used both as a one-off action and as part of every CI run. This would always install the latest stubs. Example:
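A sketch of how this might be invoked, using the spelling from the issue description (no such option exists yet):

```
# hypothetical: install the latest stubs for everything listed in requirements.txt
$ mypy --install-types requirements.txt
```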
**Use case 2: Install types non-interactively**

Unconditionally install type packages and don't ask for confirmation if type checking finds missing stubs.

**Use case 3: Generate requirements output**

Instead of installing stubs, produce output suitable for `pip install -r`. Here's how this could look (note: no error output about missing stubs):
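A sketch of what that output might look like, reusing the invented `--print-type-requirements` spelling for the hypothetical option:

```
$ mypy --print-type-requirements -c 'import click'
types-click
```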
I'm not sure whether this should look up the latest versions of stub packages and use exact `==` version pins in the output.
Is it possible to run `mypy --install-types` from a requirements file?
In the absence of a clear recommendation on which approach to use, it'd be great to have some guidance from the project on this.
@mikepurvis we have added this to the documentation.
My recommendation is to do one of these (each has different tradeoffs):
If/when 0.910 is out, you'd also have the option of running something like `mypy --install-types --non-interactive` (see the sketch below). Please let me know if none of the above options work for you. Generating type requirements from your main requirements file (use case 1 above) is more effort to implement, so we may not have it available soon, unless somebody would like to contribute it.
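For reference, the 0.910-style invocation being referred to; these flags are real (the same combination appears in the commit example later in this thread), while `src/` is an illustrative path:

```
$ mypy --install-types --non-interactive src/
```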
I agree. Non-interactive mode would return to something along the lines of the original behaviour, admittedly with the interface change in the form of the new flags, and would work for my team's workflows. We've currently had to pin back mypy on all of our repos because of the lack of a non-interactive mode. It's broken CI for us.
I'd like to have some confidence that the package matches the type definitions I'm downloading. Does typeshed not provide a mechanism for mapping package versions to type package versions? With version pinning, it seems inevitable that if it isn't automated, the versions will one day fall out of sync... crippling the value of type checking. So I'd say making it non-interactive isn't enough; it also needs to pin the versions of the type packages it is going to install.
It also doesn't show errors, making it useful in CI jobs. Example:

```
$ mypy --install-types --non-interactive -c 'import click'
Installing missing stub packages:
/Users/jukka/venv/mypy/bin/python3 -m pip install types-click
Collecting types-click
  Using cached types_click-7.1.0-py2.py3-none-any.whl (13 kB)
Installing collected packages: types-click
Successfully installed types-click-7.1.0
```

Work on #10600.
@chorner Typeshed supports defining the target library version that stubs support (in METADATA.toml). It's reflected in the version of the types package on PyPI. Since it hasn't been filled in for most stubs, it's not very useful yet. This is still better than what we used to have before, as previously there was no support for specifying the supported package versions at all. Once typeshed has more dependable version information, at least the proposed requirements-generating variant of `--install-types` could take it into account. For example, if we have versions 1.5.2 and 2.0.4 of a library, the stubs' metadata could tell us which of the two they actually target.
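For illustration, that version information lives in each stub distribution's `METADATA.toml` in the typeshed repo; the exact path and value here are illustrative:

```
$ cat stubs/click/METADATA.toml   # declares which library versions the stubs target
version = "7.1.*"
```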
Is the expectation that separate type stub packages are a long-term thing? I kind of assumed they were a 12-24 month bridge, and the hope in the end is that most popular dependencies would supply their own type information, at least at the API boundaries. I suppose the lesson of Python 3 is not to assume that any hack is a short-term thing.
I expect that type stub packages will be around for a long time, but hopefully they will be needed much less frequently in the future. It's not really something we can control, since the decision to bundle stubs or include inline annotations is up to individual project maintainers. Making the workflows not suck is thus pretty important.
It seems `--install-types` doesn't work when there's no cache directory:
Somewhat related, per https://github.com/pre-commit/mirrors-mypy/issues/50: is there merit in having `--install-types` do its work in a single invocation rather than via two steps, which is non-viable when used with pre-commit?
I kinda have the opposite hope -- take for example setting up a separate type checking environment: I'd rather install a handful of text files (on the order of KB) than the actual libraries (on the order of MB), especially for libraries with native extensions. Inline types also aren't possible for packages implemented as native extensions.
Same as @larroy: it does not work, asking for incremental mode, even when incremental mode is explicitly requested (isn't it the default anyway?):
@larroy @RouquinBlanc Did you actually have some files for mypy to type check? If you just use `mypy --install-types` on its own, without anything to type check or a cache from a previous run, mypy can't determine which stubs are missing. In any case, the error message is confusing.
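In other words, `--install-types` with no sources relies on the cache left behind by a previous run, so a two-step sequence like this works (`src/` is an illustrative path):

```
$ mypy src/               # first run: fails on missing stubs, but writes .mypy_cache
$ mypy --install-types    # reads the cache and installs the missing stub packages
$ mypy src/               # clean re-run with the stubs available
```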
Hi @JukkaL, the use case is running mypy with tox in a container. The mypy section was configured as follows:
After following this ticket I naively tried to modify it like this:
But as you say, because mypy has not yet run once, it fails... if I manually run mypy first and then again with `--install-types`, it works. For those tests we skip installation, and do not have a requirements.txt to work with, and do not require one at that place for various reasons (not saying we do not have one elsewhere, just that for those tests we rely on setup.cfg install_requires, and work with bleeding edge). A very short-term quick fix is to manually define the list of packages which require external types, as in the sketch below:
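A sketch of that quick fix in tox terms; the stub package names and paths are placeholders, not the actual project's:

```
$ cat tox.ini   # illustrative excerpt
[testenv:mypy]
deps =
    mypy
    types-requests
    types-PyYAML
commands = mypy src/
```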
But that's only a quick fix to me... What if tomorrow another dependency needs external types as well? In that sense, there are two proposals above which would make sense for our scenario:
Hmm, it looks like the current behavior still seems somewhat problematic. What if we changed `--install-types` to also type check the code, instead of requiring a separate run? Currently two runs are needed for the same results:
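Reconstructing the lost snippet, presumably something along these lines (paths are illustrative):

```
$ mypy --install-types --non-interactive src/   # installs stubs but (at the time) did not report errors
$ mypy src/                                     # second run for the actual type-check results
```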
Find a better way, this is ugly. More info and fix coming here: python/mypy#10600
We want PRs to have an explicit indicator for linter errors in the branch/PR. It will allow us to see the state of the branch without running linters locally if we don't want to. In the scope of this task we need to add a basic CI pipeline with linters and a status indicator on GitHub. Steps to do:

- add a `Dockerfile` for the application (we don't want to add an `ENTRYPOINT` in it for now, since we suppose that it is better to have container entrypoints inside the `docker-compose` file)
- add a `docker-compose` file. The compose file should have the following services:
  - `forum123-build` (used to build the application)
  - `forum123-mypy` (to run `mypy`)
  - `forum123-flake8` (to run `flake8`)
  - `forum123-pylint` (to run `pylint`)

  We want them separated into different services so we can run the build in one job and then run all linters in parallel in three different jobs (parallel execution will be implemented later; for now we only need separate services in the compose file).
- set up a CI pipeline using GitHub Actions

We also want to put this configuration in a separate folder like `envs/dev` to indicate that it is only for development purposes, and when we need to add some production infrastructure it will go into `envs/prod`. Using different folders seems more convenient than having a bunch of Dockerfiles and docker-compose files with suffixes like `.dev` and `.prod`.

We decided to put mypy stubs in `requirements-dev.txt` because we had trouble installing types on CI. In our case we had to run `mypy` twice: first to populate mypy's cache and determine which types are missing, and a second time to install types and check for errors. The cause of this issue is that GitHub always runs its jobs in new, clean containers, so each new `mypy` run will not have the cache from the previous run. Therefore we had to run `mypy` twice in every job to have type stubs installed. You can read more about related problems with missing type stubs on CI here: python/mypy#10600

Worth mentioning: we're not going to optimize this pipeline for performance now. We plan to add caching of intermediate Docker layers later in the scope of another task. For now we just need this pipeline to work, nothing more.
Related issue: #14663
Is there any way to suppress the "error" or "warning" messages when running `mypy --install-types`?
I have the same question. At the moment I use this:
If I add the dot at the end, it shows me all the errors in my code from mypy. I have no idea how to install the stub dependencies without also running the mypy checks.
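One way to split this, assuming you are happy to discard the install pass's output entirely:

```
# first pass only installs stubs; its output and exit code are thrown away
$ mypy --install-types --non-interactive . > /dev/null 2>&1 || true
# second pass is the real type check
$ mypy .
```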
I am having this same issue. Hard to know what the proper approach is.
The best I managed to do on my CI is to pre-generate the logs and install the requirements after that:
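A sketch of that two-pass approach (paths and file names are illustrative):

```
$ mypy src/ > mypy-first-pass.log || true   # record the failures and populate the cache
$ mypy --install-types --non-interactive    # install whatever the cache says is missing
$ mypy src/                                 # the run that actually gates CI
```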
This is slow because mypy needs to run twice. Is there any better way to integrate mypy into my CI?
As documented in https://mypy.readthedocs.io/en/stable/running_mypy.html#library-stubs-not-installed. Also, if you're using the latest mypy, I recommend `--install-types --non-interactive`.
Hi, maybe I'm missing something, but this looks very harsh for too little gain.
After digging into that a bit longer, I think that the best solution might be to simply run:

```
(.venv) ~/Desktop/mypy master ✗
» mypy YOUR_PROJECT
(.venv) ~/Desktop/mypy master ✗ 1 ⚠️
» cat .mypy_cache/missing_stubs
lxml-stubs
types-Pygments
types-colorama
(.venv) ~/Desktop/mypy master ✗
» pip install -r .mypy_cache/missing_stubs
```

This way we only need to document this semi-officially.
Feature

`mypy --install-types requirements.txt` will install type stubs for all dependencies in the file.

Pitch

I can't see a generic way to set up an environment ahead of time.