SourceGenerators in shared compilation can cause MissingMethodException #66104
Comments
I'm not able to reproduce this behavior. I've followed the steps outlined and built with a couple of different versions of …
Never mind, I can repro the problem now. The problem was that I failed to save before trying to build 🤦
@AArnott, @RikkiGibson, @jjonescz FYI I debugged into this a bit this morning and here is what is happening.
There are not a lot of great options for fixing this on the compiler side; after all, the reference was passed via … One approach we could take is ignoring dependencies that have simple names that match compiler dlls. That heuristic worries me a bit, though, because of the corner cases it could open up.
I'm a little confused here. I thought that when you build from VS, the compiler runs on .NET Framework. That means there are no load contexts, right? It sounds like there's an inconsistency between the System.Collections.Immutable referenced by an analyzer and the one referenced by the compiler. The analyzer's version says that ReadOnlySpan comes from assembly A, while the compiler's version says ReadOnlySpan comes from assembly B. Right? We already have a warning for when an analyzer references a newer compiler version than the compiler actually hosting it. Can/should we somehow extend this warning to the analyzer's dependencies?
Just double checking here: this is something that would be fixed either by Roslyn or by CsWinRT, but it's not something related to the MVVM Toolkit per se, right? I'm wrapping up the 8.1 release, so I just want to make sure there isn't anything wrong on our part that should be addressed. Am I right to consider this an external issue that just so happens to repro when using the MVVM Toolkit? Also, thanks everyone for investigating this! 😄
.NET Framework is the one with load contexts: the 'Default' context, the 'LoadFrom' context, and the 'No' context (not relevant here). Jared came to the same conclusion I did. Types aren't equivalent across load contexts.
This is essentially what happens in the .NET version though, yes (seen here)? If it's a compiler-provided dll, then it's loaded in the compiler's ALC. It's just that the .NET Framework version takes a different approach.
There are no `AssemblyLoadContext`s. Essentially, on .NET Framework there is an internal partitioning of assembly loads into different contexts within an `AppDomain`: https://learn.microsoft.com/en-us/archive/blogs/suzcook/choosing-a-binding-context
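The binding-context point above is easier to see in a tiny standalone snippet. The following is an illustrative sketch, not code from this thread: the assembly name and the analyzer-folder path are made up. It shows how, on .NET Framework, the Default context and the LoadFrom context can each end up holding their own copy of a dependency, so their types never unify.

```csharp
// Illustrative only: "System.Memory" and the path below are assumptions, not values from the issue.
using System;
using System.Reflection;

class BindingContextDemo
{
    static void Main()
    {
        // Default context: bound by identity via the normal probing path / GAC.
        Assembly defaultCopy = Assembly.Load("System.Memory");

        // LoadFrom context: bound from an explicit file path (hypothetical analyzer folder).
        Assembly loadFromCopy = Assembly.LoadFrom(@"C:\analyzers\SomeGenerator\System.Memory.dll");

        // If the two binds resolve to different files/identities, both copies stay loaded
        // side by side, and types resolved through one are not the same System.Type objects
        // as the "same" types resolved through the other.
        Console.WriteLine(defaultCopy.Location);
        Console.WriteLine(loadFromCopy.Location);
        Console.WriteLine(defaultCopy == loadFromCopy); // False when the copies differ
    }
}
```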
Not quite. The analyzer and compiler agree on … Now, when the analyzer goes to call …
That is what we do with .NET Core, but I'm hopeful we can delete it there. There are some corner-case issues with that approach that I think we could solve with a slightly different approach in …
So you're saying .NET Framework has a concept of load contexts… for loading assemblies… which is entirely unrelated to .NET Core's concept of assembly load contexts? 🙁 https://learn.microsoft.com/en-us/dotnet/framework/deployment/best-practices-for-assembly-loading
OK. So the metadata signature for …

I would agree that the solution is probably to share the set of "compiler assembly simple names" on the Framework side, and when such assemblies are found in …

I am also wondering, though, if it is essentially a bug that CsWin32 is shipping "compiler assemblies" when the compiler host is always going to provide the version that actually needs to be used in order for things to work. For example, should it be a warning at analyzer packaging time, or similar, if assemblies with certain simple names are found in the directory?
I wasn't aware of this; perhaps I should learn more about it in order to understand whether the warning I suggested above would be appropriate/useful.
The part I don't like about our .NET Core solution is the hard-coding of names. Yes, we have checks to make sure it's consistent, but it still feels a bit off to me in ways that are hard to quantify.

What I was thinking about for .NET Framework was along these lines. My first version had none of the checks for name or file location; I wanted it to be essentially "if the host owns this name, then always load from the host." That leads to lots of runtime exceptions, though, because we end up probing for every single assembly, and that felt off. The check-and-then-load approach feels a bit better.
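The snippet referenced by "along these lines" isn't preserved in this extraction. As a stand-in, here is a hedged sketch of the "check, then load from the host" idea on .NET Framework; it is not Roslyn's actual implementation, and the set of names is a made-up example. An `AssemblyResolve` handler answers only for simple names the host claims, and answers with an assembly the host has already loaded.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

static class HostFirstResolver
{
    // Hypothetical set of simple names the host considers "owned" by the compiler;
    // a real host would supply this list rather than hard-coding it here.
    private static readonly HashSet<string> s_compilerOwnedNames = new HashSet<string>(
        new[] { "System.Memory", "System.Collections.Immutable", "System.Text.Json" },
        StringComparer.OrdinalIgnoreCase);

    public static void Install()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);

            // The "check": only redirect names the host claims ownership of.
            if (!s_compilerOwnedNames.Contains(requested.Name))
            {
                return null; // fall back to normal probing / LoadFrom resolution
            }

            // The "load": prefer a copy the host has already loaded, so exchange
            // types unify to a single assembly instead of a LoadFrom duplicate.
            return AppDomain.CurrentDomain.GetAssemblies().FirstOrDefault(
                a => string.Equals(a.GetName().Name, requested.Name, StringComparison.OrdinalIgnoreCase));
        };
    }
}
```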
Strictly speaking, yes, I consider it a bug, but I'm also sympathetic to the pushback of "okay, so tell me how not to ship the assembly". The problem I see generators like CsWin32 facing is that they have some dependencies, and those transitively bring in dependencies that are shared with the compiler. To be compliant with the "don't ship compiler assemblies" idea, these generators would have to know what assemblies we ship, across all versions of the compiler, and prune them out. That's not realistic, I think.
Packing analyzers is performed by some kind of specialized task, right? It feels like it can be someone's job to decide how not to ship compiler assemblies in an analyzer package. 🙂 If the quality of the message ends up being "you're shipping these assemblies, you shouldn't do that, run our such-and-such task during Pack to fix this", then that seems like a decent place to land.
The set of compiler assemblies isn't static, though. What ends up mattering is what is present at runtime, not what the compiler depends on at compile time. What is present at runtime differs between .NET Framework and .NET Core and changes from release to release. The pack step can't actually know what the right answer is.
I suppose that's true. Once we handle the bug on the compiler host side, which we need to do regardless, the concern of not packing compiler assemblies is more of a "hygiene" concern than a correctness concern anyway. For the purposes of "hygiene", it seems like considering the compiler assemblies for the compiler version you are referencing is probably adequate.

But then, if a compiler assembly disappears in a future compiler version, you could imagine your analyzer failing to load, because suddenly the compiler needs to load the one from the analyzer directory, but that assembly was not shipped, because the analyzer thought the compiler would supply it 🙂. If a compiler assembly is added in a future compiler version, the analyzer behavior could also change, I guess, because we stop loading the one that came from the analyzer, which might have behaved differently.

Considering all that, maybe it would be best to do our fix on the host side, let people ship whatever they're shipping, and accept that if the set of compiler assemblies changes, weird things can happen.
I support any effort on the part of the compiler to resolve this, due to @jaredpar's apt recognition that analyzers must work with a variety of compiler versions that may or may not include a given assembly in their distributions. CsWin32 certainly has the opportunity to choose which assemblies to ship. My guess is that it includes System.Memory because at some point csc.exe either didn't ship it, or shipped a version that was too old for what CsWin32 included. I will revisit that to determine whether it is (still) true and drop the dll if we don't need something newer than csc ships.

I would argue, though, that it's not automatically a "bug" to ship a dll that the compiler also ships. It could be newer, or a custom build, etc. I'd like to see a pure-.NET world where ALCs can be used to isolate all analyzer assemblies except those that are required for types exchanged between compiler and analyzer. Those of course must be shared, and the compiler forcing use of its own assemblies instead of the ones included in the analyzer in that case is reasonable.
FYI, as a bit of history: CsWin32 started shipping all of its own dependencies in response to it failing within OmniSharp: microsoft/CsWin32#261.
I will say that, as a library author (in this case, for the MVVM Toolkit), the main issue here is that generator A can break another generator, with no way for consumers to understand why. This means they'll assume generator B has a bug and report issues there. It seems to me that, ideally, Roslyn should just not make it possible for such a scenario to happen. No matter what other generators are doing (be it correct or not), I would not expect them to be able to break me. If this is not something that can be fully fixed from Roslyn's side, it'd be nice if it could at least detect scenarios like these and inform users appropriately 😅
I'm curious to know more about the OmniSharp and Rider scenarios. It sounds like they are not using the same analyzer host as the VS IDE or the command line build. If CsWin32 does require a newer version of a "compiler assembly" such as System.Memory, this might only really be resolvable by asking Roslyn to start shipping a newer version of that dependency. It seems like we generally won't be able to honor the analyzer's intent for their in-package version of such an assembly to be loaded.

I agree that we should be looking at some way to give a diagnostic which leads users to the root of the problem, not to the component which happened to step on the land mine, so to speak.
I should've been clearer in my response about this. It intuitively feels wrong to me to do this, and I deeply wish I could proclaim it a bug and wash my hands of it. But the reality of the situation is complex, particularly when analyzers start depending on other libraries, and I don't see a way for customers to reliably avoid doing this. I think it's ultimately something that we need to handle gracefully. So yes, I want to say this is a bug, but I'm unable to credibly do so 😄
When csc.exe's only world is a .NET world (no .NET Framework), I believe you'll be able to do a much better job of getting this right. Or at least it will be a simpler problem. I just prepared a CsWin32 change to drop all dependencies that msbuild/csc ships. I tested the change in VS Code (OmniSharp) and Rider (which started the 30-day ticking timer on a trial to do it), and it works everywhere. I can't speak to older versions of all of these, but given that it works, and it sounds like it'll solve the MVVM Toolkit problem, I'm happy to merge this.
See, now this is the frustrating bit. I tested it, and it worked. But now a few days later, I happen to be using that build of CsWin32 inside VS for Windows and it's failing:
So evidently I have to add at least System.Text.Json.dll back as an Analyzer. Yet it worked before (maybe because it was still loaded into whatever .NET Framework process was running the compiler that had been using the older version of this analyzer). I had also verified that System.Text.Json.dll shipped with msbuild and/or the compiler. So frustrating.
I do have a fix for this behavior but it was more involved than I originally thought. Had to break it up. |
* Fix analyzer loading to prefer host

This changes the design of our analyzer loading policy to give hosts more control over dependencies. Whenever possible, analyzer dependencies will be unified with the compiler host environment, where previously we only tried to unify dependencies owned directly by the compiler. This is achieved by using standard assembly loading practices that give the host the first shot at determining an `Assembly` to load for a given `AssemblyName`.

For a concrete example of this problem, consider that some analyzers deploy copies of `System.Memory.dll` in their analyzer folder. That means there are two copies of `System.Memory.dll` in play: the one the compiler deploys and the analyzer copy. The compiler prefers its copy so that exchange types unify to a single place.

This won't change the designed behavior for the command line compilers. In those scenarios the compiler is the host, and the intent is for compiler dependencies to be unified (this PR does fix a bug in this area). It will make the implementation more rigorous though.

This will potentially change our behavior when loaded in hosts such as Visual Studio. That environment has a much larger set of DLLs that can be loaded into the primary load context. This is important though to ensure proper type / dll unification happens, whereas today it can result in duplicate types and assemblies.

This change also substantially updates our tests. Most of our tests were focused on the default loading strategy (load in place, no shadow copies). That is actually the least common loading strategy; the most common is shadow copy loading. As such, I revamped our test suite so that the same set of tests runs for all four of our main configurations:

- default + .NET Core
- default + .NET Framework
- shadow + .NET Core
- shadow + .NET Framework

The results are illuminating. There are more differences between these scenarios than most people likely suspect. Later changes may try to unify a few scenarios, but for now I wanted to mostly document the behavior so it's not accidentally changed.

closes #66104
closes #60763
https://dev.azure.com/devdiv/DevDiv/_workitems/edit/1692992

Co-authored-by: Charles Stoner <10732005+cston@users.noreply.github.com>
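As a rough illustration of the "host gets first shot" policy the commit message describes, here is a hedged sketch, not the actual Roslyn implementation; the class name and parameters are invented. A per-analyzer-directory `AssemblyLoadContext` asks the host context to satisfy each `AssemblyName` first and only falls back to the analyzer's own folder when the host can't supply it.

```csharp
using System.IO;
using System.Reflection;
using System.Runtime.Loader;

// Hypothetical type; the host would construct it with the analyzer's main assembly path.
sealed class AnalyzerDirectoryLoadContext : AssemblyLoadContext
{
    private readonly AssemblyDependencyResolver _resolver;
    private readonly AssemblyLoadContext _hostContext;

    public AnalyzerDirectoryLoadContext(string analyzerAssemblyPath, AssemblyLoadContext hostContext)
        : base(isCollectible: false)
    {
        _resolver = new AssemblyDependencyResolver(analyzerAssemblyPath);
        _hostContext = hostContext;
    }

    protected override Assembly Load(AssemblyName assemblyName)
    {
        // Host first: if the host can bind this name (e.g. its own System.Memory.dll),
        // use that copy so exchange types unify with the compiler's view of the world.
        try
        {
            return _hostContext.LoadFromAssemblyName(assemblyName);
        }
        catch (FileNotFoundException)
        {
            // The host can't supply it; fall back to the analyzer's own directory.
        }

        string path = _resolver.ResolveAssemblyToPath(assemblyName);
        return path != null ? LoadFromAssemblyPath(path) : null;
    }
}
```

Usage would look something like `new AnalyzerDirectoryLoadContext(pathToAnalyzerDll, AssemblyLoadContext.Default)`, with the host context swapped for whatever context the compiler itself runs in.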
Version Used:
- Visual Studio v17.5p1
- Roslyn in .NET 7.0.101 and in .NET Framework 4.7.2 (the one that ships with VS)

Steps to Reproduce:
1. … then add a new `.cs` file so the CommunityToolkit SourceGenerators will run.
2. Then you can either build in VS or use `msbuild -restore /t:Rebuild` to trigger the warning.

Expected Behavior:
The generated code should include the `ICommand` that is generated.

Actual Behavior:
Because the source generator encountered an error, the associated IL is missing entirely. If you build with `dotnet build`, it works as expected.

Investigation/Cause
A couple of things collide to cause this:
- `msbuild` or Visual Studio uses the full framework MSBuild (i.e. no ALCs) under a shared compilation context
- the CsWin32 package ships `System.Memory` in their analyzers directory
- `System.Memory` gets loaded in the `LoadFrom` context (even though System.Memory is in the probing path)

I hope this is enough information. Someone who is very familiar with the assembly loading process should get the gist of what's going on.
The .NET version of Roslyn has the names of the simple assemblies that will be provided by the compiler, but the .NET Framework version has a completely different loading process since there are no ALCs.
Is this a concern for the Roslyn team, or should the fix be that the CsWin32 package stops shipping Roslyn-provided dlls in its analyzer folder?
[Update(jcouv):] This issue is referenced by some skipped tests.