
Rollup of 8 pull requests #93287

Closed
wants to merge 35 commits into from

Conversation

matthiaskrgr
Member

Successful merges:

Failed merges:

r? @ghost
@rustbot modify labels: rollup

Create a similar rollup

sunfishcode and others added 30 commits September 9, 2021 14:16
As suggested in rust-lang#88564. This adds a `try_clone()` to `OwnedFd` by
refactoring the code out of the existing `File`/`Socket` code.
`WSADuplicateSocketW` returns 0 on success, which differs from
handle-oriented functions, which return 0 on error. Use `sys::net::cvt`
to handle its return value, which handles the socket convention of
returning 0 on success, rather than `sys::cvt`, which handles the
handle-oriented convention of returning 0 on failure.
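
For reference, a minimal usage sketch of the resulting API (assuming a Unix target and a toolchain where these I/O-safety types are available; at the time of this PR they were still behind the `io_safety` feature):

```rust
use std::fs::File;
use std::os::unix::io::OwnedFd;

fn main() -> std::io::Result<()> {
    // Any open file yields an OwnedFd via the existing From<File> impl.
    let file = File::open("/etc/hostname")?;
    let fd: OwnedFd = file.into();

    // The new method: duplicate the descriptor, returning a second OwnedFd
    // that refers to the same underlying file description.
    let cloned: OwnedFd = fd.try_clone()?;

    // Either descriptor can be turned back into a File.
    let reopened = File::from(cloned);
    println!("duplicated descriptor: {:?}", reopened);
    Ok(())
}
```
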
Fixes rust-lang#92987

During evaluation of an auto trait predicate, we may encounter a cycle.
This causes us to store the evaluation result in a special 'provisional
cache'. If we later end up determining that the type can legitimately
implement the auto trait despite the cycle, we remove the entry from
the provisional cache, and insert it into the evaluation cache.

Additionally, trait evaluation creates a special anonymous `DepNode`.
All queries invoked during the predicate evaluation are added as
outgoing dependency edges from the `DepNode`. This `DepNode` is then
stored in the evaluation cache - if a different query ends up reading
from the cache entry, it will also perform a read of the stored
`DepNode`. As a result, the cached evaluation will still end up
(transitively) incurring all of the same dependencies that it would
if it actually performed the uncached evaluation (e.g. a call to
`type_of` to determine constituent types).

Previously, we did not correctly handle the interaction between the
provisional cache and the created `DepNode`. Storing an evaluation
result in the provisional cache would cause us to lose the `DepNode`
created during the evaluation. If we later moved the entry from the
provisional cache to the evaluation cache, we would use the `DepNode`
associated with the evaluation that caused us to 'complete' the cycle,
not the evaluation where we first discovered the cycle. As a result,
future reads from the evaluation cache would miss some incremental
compilation dependencies that would have otherwise been added if the
evaluation was *not* cached.

Under the right circumstances, this could lead to us trying to force
a query with a no-longer-existing `DefPathHash`, since we were missing
the (red) dependency edge that would have caused us to bail out before
attempting forcing.

This commit makes the provisional cache store the `DepNode` created
during the provisional evaluation. When we move an entry from the
provisional cache to the evaluation cache, we create a *new* `DepNode`
that has dependencies going to *both* of the evaluation `DepNodes` we
have available. This ensures that cached reads will incur all of
the necessary dependency edges.
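
A minimal, self-contained sketch of the idea (these are stand-in types for illustration, not rustc's actual `DepNode` or cache structures): the provisional cache keeps the `DepNode` recorded during the cyclic evaluation, and promotion into the evaluation cache produces a node that depends on *both* evaluations, so no edges are lost.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for rustc's DepNode; real dep nodes carry edges in
// the dependency graph rather than an integer id.
#[derive(Clone, Copy, Debug)]
struct DepNode(u32);

#[derive(Default)]
struct Caches {
    provisional: HashMap<&'static str, (bool, DepNode)>,
    evaluation: HashMap<&'static str, (bool, DepNode)>,
}

impl Caches {
    // Placeholder for "create a new DepNode with dependency edges going to
    // both source nodes"; the real code records edges to `a` and `b`.
    fn combine(a: DepNode, b: DepNode) -> DepNode {
        DepNode(a.0.max(b.0) + 1)
    }

    // Move an entry out of the provisional cache, merging the DepNode stored
    // there with the DepNode of the evaluation that completed the cycle.
    fn promote(&mut self, key: &'static str, completing: DepNode) {
        if let Some((result, provisional_node)) = self.provisional.remove(key) {
            let merged = Self::combine(provisional_node, completing);
            self.evaluation.insert(key, (result, merged));
        }
    }
}

fn main() {
    let mut caches = Caches::default();
    caches.provisional.insert("Foo: Send", (true, DepNode(1)));
    caches.promote("Foo: Send", DepNode(2));
    println!("{:?}", caches.evaluation.get("Foo: Send"));
}
```
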
If we do not add code coverage instrumentation to the `Body` of a
function, then when we go to generate the function record for it, we
won't write any data and this later causes llvm-cov to fail when
processing data for the entire coverage report.

I've identified two main cases where we do not currently add code
coverage instrumentation to the `Body` of a function:

  1. If the function has a single `BasicBlock` and it ends with a
     `TerminatorKind::Unreachable`.

  2. If the function is created using a proc macro of some kind.

For case 1, this is typically not important, as it most often occurs as
the result of function definitions that take or return uninhabited
types. These kinds of functions, by definition, cannot even be called so
they logically should not be counted in code coverage statistics.
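
As a concrete (hypothetical) illustration of case 1: a function taking an uninhabited type compiles to a single basic block ending in `TerminatorKind::Unreachable` and can never actually be called.

```rust
use std::convert::Infallible;

// `Infallible` has no values, so this body is a single unreachable block and
// the function cannot be called at runtime.
fn never_called(x: Infallible) -> u32 {
    match x {}
}

fn main() {
    // We can name or take a pointer to the function, but never invoke it.
    let _f: fn(Infallible) -> u32 = never_called;
}
```
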

For case 2, I haven't looked into this very much but I've noticed while
testing this patch that (other than functions which are covered by case
1) the skipped function coverage debug message is occasionally triggered
in large crate graphs by functions generated from a proc macro. This may
have something to do with weird spans being generated by the proc macro
but this is just a guess.

I think it's reasonable to land this change since currently, we fail to
generate *any* results from llvm-cov when a function has no coverage
instrumentation applied to it. With this change, we get coverage data
for all functions other than the two cases discussed above.
This reduces the number of clicks required to change theme.

Also, simplify the UI a bit (remove setting grouping), and add a "Back"
link close to the settings icon.
This agrees with Clang, and avoids an error when using LTO with mixed
C/Rust. LLVM considers different behaviour flags to be a mismatch,
even when the flag value itself is the same.

This also makes the flag setting explicit for all uses of
LLVMRustAddModuleFlag.
…r=joshtriplett

Add a `try_clone()` function to `OwnedFd`.

As suggested in rust-lang#88564. This adds a `try_clone()` to `OwnedFd` by
refactoring the code out of the existing `File`/`Socket` code.

r? `@joshtriplett`
…ichaelwoerister

Properly track `DepNode`s in trait evaluation provisional cache

Fixes rust-lang#92987

During evaluation of an auto trait predicate, we may encounter a cycle.
This causes us to store the evaluation result in a special 'provisional
cache'. If we later end up determining that the type can legitimately
implement the auto trait despite the cycle, we remove the entry from
the provisional cache, and insert it into the evaluation cache.

Additionally, trait evaluation creates a special anonymous `DepNode`.
All queries invoked during the predicate evaluation are added as
outgoing dependency edges from the `DepNode`. This `DepNode` is then
stored in the evaluation cache - if a different query ends up reading
from the cache entry, it will also perform a read of the stored
`DepNode`. As a result, the cached evaluation will still end up
(transitively) incurring all of the same dependencies that it would
if it actually performed the uncached evaluation (e.g. a call to
`type_of` to determine constituent types).

Previously, we did not correctly handle the interaction between the
provisional cache and the created `DepNode`. Storing an evaluation
result in the provisional cache would cause us to lose the `DepNode`
created during the evaluation. If we later moved the entry from the
provisional cache to the evaluation cache, we would use the `DepNode`
associated with the evaluation that caused us to 'complete' the cycle,
not the evaluation where we first discovered the cycle. As a result,
future reads from the evaluation cache would miss some incremental
compilation dependencies that would have otherwise been added if the
evaluation was *not* cached.

Under the right circumstances, this could lead to us trying to force
a query with a no-longer-existing `DefPathHash`, since we were missing
the (red) dependency edge that would have caused us to bail out before
attempting forcing.

This commit makes the provisional cache store the `DepNode` created
during the provisional evaluation. When we move an entry from the
provisional cache to the evaluation cache, we create a *new* `DepNode`
that has dependencies going to *both* of the evaluation `DepNodes` we
have available. This ensures that cached reads will incur all of
the necessary dependency edges.
…bank

Move param count error emission to end of `check_argument_types`

The error emission here isn't exactly what is done in rust-lang#92364, but replicating that is hard. The general move should make for a smaller diff.

Also included the `(usize, Ty, Ty)` -> `Option<(Ty, Ty)>` commit.
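
For context, the diagnostics in question are the argument-count mismatch errors, e.g. (hypothetical snippet):

```rust
fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    // Uncommenting the next line triggers the param count error emitted by
    // `check_argument_types`:
    // error[E0061]: this function takes 2 arguments but 1 argument was supplied
    // let _ = add(1);

    let _ = add(1, 2); // correct arity compiles fine
}
```
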

r? `@estebank`
…ov2, r=tmandry

Work around missing code coverage data causing llvm-cov failures

If we do not add code coverage instrumentation to the `Body` of a
function, then when we go to generate the function record for it, we
won't write any data and this later causes llvm-cov to fail when
processing data for the entire coverage report.

I've identified two main cases where we do not currently add code
coverage instrumentation to the `Body` of a function:

  1. If the function has a single `BasicBlock` and it ends with a
     `TerminatorKind::Unreachable`.

  2. If the function is created using a proc macro of some kind.

For case 1, this is typically not important, as it most often occurs as
a result of function definitions that take or return uninhabited
types. These kinds of functions, by definition, cannot even be called so
they logically should not be counted in code coverage statistics.

For case 2, I haven't looked into this very much but I've noticed while
testing this patch that (other than functions which are covered by case
1) the skipped function coverage debug message is occasionally triggered
in large crate graphs by functions generated from a proc macro. This may
have something to do with weird spans being generated by the proc macro
but this is just a guess.

I think it's reasonable to land this change since currently, we fail to
generate *any* results from llvm-cov when a function has no coverage
instrumentation applied to it. With this change, we get coverage data
for all functions other than the two cases discussed above.

Fixes rust-lang#93054, which occurs because of uncallable functions that shouldn't
have code coverage anyway.

I will open an issue for missing code coverage of proc macro generated
functions and leave a link here once I have a more minimal repro.

r? `@tmandry`
cc `@richkadel`
…ency, r=GuillaumeGomez

Fix inconsistency of local blanket impls

When a blanket impl is local, go through HIR instead of middle. This fixes inconsistencies with data detected during JSON generation.

I expected this change to take longer. I also tried doing the whole item through the existing clean architecture, but it didn't work out trivially, and it felt like it would have added more complexity than it removed.

Properly fixes rust-lang#83718
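
For illustration, a "local" blanket impl is one like the following (hypothetical crate contents), where the blanket impl lives in the same crate as the trait; its rustdoc output now goes through HIR:

```rust
use std::fmt::Debug;

pub trait Describe {
    fn describe(&self) -> String;
}

// Local blanket impl: defined in the same crate as `Describe`.
impl<T: Debug> Describe for T {
    fn describe(&self) -> String {
        format!("{:?}", self)
    }
}

fn main() {
    println!("{}", 42_u32.describe());
}
```
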
…e-new, r=nikomatsakis

Implement stable overlap check considering negative traits

This PR implements the new disjointness rules for the overlap check described in https://rust-lang.github.io/negative-impls-initiative/explainer/coherence-check.html#new-disjointness-rules
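
A minimal sketch of the new rule on nightly (the feature-gate names and whether this exact snippet is accepted are assumptions on my part, based on the explainer): an explicit negative impl lets the overlap check treat two otherwise-overlapping impls as disjoint.

```rust
#![feature(negative_impls)]
#![feature(with_negative_coherence)]
#![allow(dead_code)]

trait A {}
trait B {}

struct Local;

// Explicit promise that `Local` will never implement `A`.
impl !A for Local {}

// Without negative coherence these would be rejected as potentially
// overlapping for `T = Local`; the negative impl proves they are disjoint.
impl<T: A> B for T {}
impl B for Local {}

fn main() {}
```
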

r? `@nikomatsakis`
rustdoc settings: use radio buttons for theme

This reduces the number of clicks required to change theme.

Also, simplify the UI a bit (remove setting grouping), and add a "Back" link close to the settings icon.

Demo: https://rustdoc.crud.net/jsha/theme-radio/settings.html

r? `@GuillaumeGomez`

New:

![image](https://user-images.githubusercontent.com/220205/150702647-4826d525-54fa-439a-b24c-6d5bca6f95bf.png)

Old:

![image](https://user-images.githubusercontent.com/220205/150702669-6a4214ed-1dab-4fee-b1aa-59acfce3dbca.png)
…petrochenkov

Use error-on-mismatch policy for PAuth module flags.

This agrees with Clang, and avoids an error when using LTO with mixed
C/Rust. LLVM considers different behaviour flags to be a mismatch,
even when the flag value itself is the same.

This also makes the flag setting explicit for all uses of
LLVMRustAddModuleFlag.

----

I believe that this fixes rust-lang#92885, but have only reproduced it locally on Linux hosts so cannot confirm that it fixes the issue as reported.

I have not included a test for this because it is covered by an existing test (`src/test/run-make-fulldeps/cross-lang-lto-clang`). It is not without its problems, though:
* The test requires Clang and `--run-clang-based-tests-with=...` to run, and this is not the case on the CI.
   * Any test I add would have a similar requirement.
* With this patch applied, the test gets further, but it still fails (for other reasons). I don't think that affects rust-lang#92885.
@rustbot added the T-compiler (Relevant to the compiler team, which will review and decide on the PR/issue), T-rustdoc (Relevant to the rustdoc team, which will review and decide on the PR/issue), and rollup (A PR which is a rollup) labels on Jan 25, 2022
@matthiaskrgr deleted the rollup-o1usniy branch on February 13, 2022 00:53