
Add a way to filter in/out modules to prevent metals from triggering their compilation #6782

Open
Baccata opened this issue Mar 1, 2022 · 21 comments
Labels
improvement Not a bug or a feature, but something general we can improve


@Baccata

Baccata commented Mar 1, 2022

Is your feature request related to a problem? Please describe.

I maintain a number of cross-compiled libraries and Metals seems to struggle with them (i.e. I need to restart it fairly often, etc.). But also, I don't necessarily want to cross-compile against all targets whilst I'm developing, because it uses a lot of resources (even with incremental compilation) and sometimes drags my attention away from what I'm actually trying to do.

Describe the solution you'd like

I'd like to be able to tell metals to filter-out (or filter-in) some modules, based on a sequence of glob-patterns.

I don't want to solve it at the build level, because I don't want to risk checking in this kind of configuration, nor do I want to trigger a bloop config dump every time I change it. I also like being able to use Bloop from the command line to compile/test things, without it necessarily being reflected in the editor.

When editing a file, before triggering compilation via BSP, metals would look at its configuration and inspect the filter to decide whether it should omit some targets.

Describe alternatives you've considered

I've considered playing with the build, but as stated previously, this poses a risk of inadvertently checking in configuration that is very much workflow-specific. It's also not very portable, given that I maintain both Mill and sbt projects. Finally, using Bloop from the CLI is nice, and discarding modules at the build level would prevent Bloop from compiling them in the terminal.

Additional context

My workflow when working on these heavily cross-compiled projects is usually:

  1. work on 2.13/jvm, make sure everything works
  2. move on to 2.12/jvm
  3. move on to 3/jvm
  4. move on to scalajs

I'd like to put *_212*, *_3* and *_JS* in a disallow-list in some configuration somewhere (following the naming pattern used in my projects) when working on step 1, then replace *_212* with *_213* when working on step 2, without having to recompute the bloop configuration, and for Metals to immediately understand that it should clear diagnostics and recompile against the newly allowed targets.
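To make the request concrete, the filtering could be sketched roughly like this. This is a hypothetical helper, not an existing Metals API; it just uses `java.nio.file`'s glob matching to decide whether a target name is disallowed:

```scala
import java.nio.file.{FileSystems, Paths}

// Hypothetical sketch of the requested feature (none of these names exist
// in Metals today): decide whether a build target should be compiled,
// based on user-supplied glob patterns like "*_212*".
final case class TargetFilter(excludePatterns: Seq[String]) {
  private val matchers =
    excludePatterns.map(p => FileSystems.getDefault.getPathMatcher(s"glob:$p"))

  // A target is allowed unless its name matches one of the exclude globs.
  def isAllowed(targetName: String): Boolean =
    !matchers.exists(_.matches(Paths.get(targetName)))
}

// While working on step 1 (2.13/JVM), disallow 2.12, Scala 3 and JS targets:
val step1 = TargetFilter(Seq("*_212*", "*_3*", "*_JS*"))
println(step1.isAllowed("core_213")) // true: compile this target
println(step1.isAllowed("core_212")) // false: skip it
```

Before triggering compilation via BSP, Metals would then drop any build target whose name fails `isAllowed`.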

Search terms

filter, config, cross

@tgodzik
Contributor

tgodzik commented Mar 2, 2022

Thanks for reporting! I think we could actually change the defaults, currently Metals runs:

https://github.com/scalameta/metals/blob/main/metals/src/main/scala/scala/meta/internal/metals/Compilations.scala#L92

when anything changes. Instead of doing that, we could only compile for the currently used target and do the full compilation when cascade compile is run.

@tgodzik
Contributor

tgodzik commented Mar 2, 2022

Actually, looking into that, we do only compile one build target per file. There is no way to choose (scalameta/metals-feature-requests#13), but it should not cause any issues aside from the full compilation at the start. Are you getting more targets compiled? Do you have any example projects where this is happening, and what is your workflow that triggers this behaviour?

@Baccata
Author

Baccata commented Mar 2, 2022

Hey Tomasz, sure, the main repos I can share are:

Edit: oops, I hit "send" by accident. Anyhow, I was saying: I think I'm getting more targets compiled, in the sense that I'm seeing compilation information in the bottom right of my screen for targets I'm not interested in.

Now that I'm thinking about it, in smithy4s in particular, there is bootstrapped code-generation involved (ie, the code generator is compiled in the project, but also shelled-out to in the build logic of other modules). It might eagerly trigger some cascade compile, leading to more targets being compiled than the ones I'm actually interested in.

I'll try to pay more attention and report back.

@Baccata
Author

Baccata commented Mar 2, 2022

PS: both projects I shared use https://github.com/sbt/sbt-projectmatrix

@tgodzik
Contributor

tgodzik commented Mar 11, 2022

I tried to reproduce your issue, but I think I might need a bit more clarification. When does more than one target get compiled? Usually we just invoke compile on save for a single target. We don't invoke it for every possible target the file belongs to.

Could you explain a bit more what happens currently, maybe on an example? (open this file, modify this, I see a lot of targets compiling etc.)

@Baccata
Author

Baccata commented Mar 16, 2022

Hey @tgodzik, sorry for not having replied earlier. I've tried reproducing, and it might be that the compilation isn't invoked on all possible targets, but rather that the target on which it is invoked seems to be inconsistent: i.e. sometimes a 2.12-specific target will be compiled, sometimes 2.13, sometimes JS, etc.

I've yet to find a properly reproducible scenario, I'm afraid. It seems to happen after a regeneration of the bloop config, but I'm not 100% sure.

Usually we just invoke compile on save for a single target.

Is there anything that dictates which specific target is supposed to be invoked? Is it deterministic?

@adpi2
Member

adpi2 commented Mar 16, 2022

The code that determines the order of build targets seems to be here:

val buildTargetsOrder: BuildTargetIdentifier => Int = {
  (t: BuildTargetIdentifier) =>
    var score = 1
    val isSupportedScalaVersion = scalaTarget(t).exists(t =>
      ScalaVersions.isSupportedAtReleaseMomentScalaVersion(
        t.scalaVersion
      )
    )
    if (isSupportedScalaVersion) score <<= 2
    val usesJavac = javaTarget(t).nonEmpty
    val isJVM = scalaTarget(t).exists(_.scalac.isJVM)
    if (usesJavac) score <<= 1
    else if (isJVM) score <<= 1
    // note(@tgodzik) once the support for Scala 3 is on par with Scala 2 this can be removed
    val isScala2 = scalaTarget(t).exists(info =>
      !ScalaVersions.isScala3Version(info.scalaVersion)
    )
    if (isScala2) score <<= 1
    score
}

So it seems JVM is preferred over JS, and Scala 2 over Scala 3. There can be some nondeterminism between Scala 2.12 and Scala 2.13, since both end up with the same score. This would surely happen after restarting Metals or re-importing the build.
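To see the ordering concretely, here is a self-contained sketch of the scoring logic above. The real code looks properties up via `scalaTarget`/`javaTarget`; here they are passed in directly as booleans:

```scala
// Simplified re-implementation of the buildTargetsOrder scoring shown above.
def score(
    supported: Boolean,
    usesJavac: Boolean,
    isJVM: Boolean,
    isScala2: Boolean
): Int = {
  var s = 1
  if (supported) s <<= 2 // supported Scala version: x4
  if (usesJavac) s <<= 1 // Java or JVM targets: x2
  else if (isJVM) s <<= 1
  if (isScala2) s <<= 1  // Scala 2 preferred over Scala 3: x2
  s
}

val jvm213 = score(supported = true, usesJavac = false, isJVM = true, isScala2 = true)
val jvm212 = score(supported = true, usesJavac = false, isJVM = true, isScala2 = true)
val js213  = score(supported = true, usesJavac = false, isJVM = false, isScala2 = true)
val jvm3   = score(supported = true, usesJavac = false, isJVM = true, isScala2 = false)

// jvm213 == jvm212 == 16, while js213 == jvm3 == 8: JVM Scala 2 wins,
// but 2.12 and 2.13 tie, which is where the nondeterminism comes from.
println((jvm213, jvm212, js213, jvm3))
```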

@tgodzik
Contributor

tgodzik commented Mar 16, 2022

We could for sure change it to make it deterministic and always pick the higher version. Should be a simple change. I will take a look!

@Baccata
Author

Baccata commented Mar 16, 2022

Any chance this could be configured? I think this would basically address my request.

In particular, though I cross-compile to 3, I'd rather 2.13 were the default on my projects for the time being (to get linting options that don't have an equivalent, like warn-unused etc.). Also, I'd like JVM to be selected over JS every time, and I'm not sure that's the case for everybody.

@tgodzik
Contributor

tgodzik commented Mar 16, 2022

Sure, there is actually a feature request for that: scalameta/metals-feature-requests#13

But in your case it seems we would actually use 2.13 just the way you prefer, no?

@Baccata
Author

Baccata commented Mar 16, 2022

Oh right ...

Regardless, I do see JS targets being compiled for apparently no reason. I thought it may have been a non-determinism thing, because I rarely ever touch JS-specific code :/

@tgodzik
Contributor

tgodzik commented Mar 16, 2022

That's what worries me, I don't currently see a situation where we would compile unrelated build targets, especially JS ones. We do sometimes invoke cascade compile, especially with rename/references etc., where we want to have the full picture of all the references, but it should only compile JVM projects if you are in a JVM one 🤔

Next time this happens, could you try to see what were the previous steps that led up to those compilations being done? Did you look for references, did you invoke a code action, etc.?

I will try to look further into this in the meantime.

@Baccata
Author

Baccata commented Mar 16, 2022

Next time this happens could you try and see what were the previous steps that led up to those compilations being done

Yup, I'll definitely try to capture a reproducible sequence of steps next time I see it happening. When using Bloop, is the computation of the cascade handled by Metals or Bloop? Could you point me to the code that decides to trigger it? Understanding how the code works could help me figure out a reproduction.

@tgodzik
Contributor

tgodzik commented Mar 16, 2022

It's handled by Metals in

def cascadeCompile(targets: Seq[BuildTargetIdentifier]): Future[Unit] = {

In most cases Bloop will just return a NOP result.

@armanbilge
Contributor

@tgodzik following from scalameta/metals-feature-requests#332 (comment) ...

What is the exact issue you are seeing, what happens within Metals that might be slowing things down?

I don't think there's necessarily anything wrong. It's just slow, because there are many cross-compiled projects sharing the same sources. Combine this with the sbt bsp, which blocks while it is compiling, and it gets to be rather annoying :)

In fact, I'd love to be able to simply import a build into metals without waiting forever. I just tried it now, and for Cats Effect it takes about 10 minutes from triggering the build import to being able to use the sbt console again.

Now that I've imported the project, playing around with it actually seems okay :) From the logs it looks like when I work on shared sources, it is only recompiling them for JVM.

But if I switch to a source that's only for JS or Native, there is quite a bit of latency while it catches up (?).

I'm also getting some messages like this (and others have reported it as well).

[warn] In the last 605 seconds, 7.306 (1.2%) were spent in GC. [Heap: 6.34GB free of 8.00GB, max 8.00GB] Consider increasing the JVM heap using `-Xmx` or try a different collector, e.g. `-XX:+UseG1GC`, for better performance.
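As an aside, if that GC warning is coming from the sbt server JVM, one common way to act on its advice (assuming the standard sbt runner, which picks up a `.jvmopts` file from the project root, one JVM flag per line) is:

```
-Xmx8G
-XX:+UseG1GC
```

This only affects sbt's own JVM; the Metals heap is configured separately in the editor's Metals settings.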

Here are some examples of large, cross-compiled projects I frequently work on:

@armanbilge
Contributor

armanbilge commented Feb 16, 2023

While writing the above post, we got a bug report in Cats Effect. It's an easy one to fix, I just need to switch branches, but now I'm stuck waiting several minutes for metals, with a blocked sbt console, before I can do anything. In that same time, I can open a new cloud workspace and fix the bug with sbt and no metals 😅

@tgodzik
Contributor

tgodzik commented Feb 17, 2023

Combine this with the sbt bsp, which blocks while it is compiling, and it gets to be rather annoying :)

This would be great to fix before making sbt the default, but compilation would still be taking a lot of resources here 🤔

In fact, I'd love to be able to simply import a build into metals without waiting forever. I just tried it now, and for Cats Effect it takes about 10 minutes from triggering the build import to being able to use the sbt console again.

We always do cascade compile at the start; we could exclude Native and JS targets either via an option or just make them compile on demand when the only target a file belongs to is Native/JS. What do you think? The only risk here is that when finding references/renaming we could lose some that belong only to targets other than JVM.
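The compile-on-demand idea could be sketched like this (hypothetical names; Metals' real build-target model is richer than a platform enum):

```scala
// Sketch: compile JVM targets eagerly, leave JS/Native for on-demand.
sealed trait Platform
case object JVM extends Platform
case object JS extends Platform
case object Native extends Platform

final case class Target(name: String, platform: Platform)

// Initial cascade: only the JVM targets.
def initialCascade(all: Seq[Target]): Seq[Target] =
  all.filter(_.platform == JVM)

// On-demand: when every target a file belongs to is JS/Native, compile
// those; otherwise stick to the JVM ones.
def targetsToCompile(forFile: Seq[Target]): Seq[Target] = {
  val jvmOnly = forFile.filter(_.platform == JVM)
  if (jvmOnly.nonEmpty) jvmOnly else forFile
}

val all = Seq(Target("core", JVM), Target("coreJS", JS), Target("coreNative", Native))
println(initialCascade(all).map(_.name))        // List(core)
println(targetsToCompile(all.tail).map(_.name)) // List(coreJS, coreNative)
```

The trade-off described above is visible here: `initialCascade` never touches JS/Native, so cross-target references would only be discovered once those targets are compiled on demand.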

But overall if we had an option to run concurrent tasks, we could queue them and make it all less problematic on the CPU/RAM while also making it possible to use the console.

But if a switch to a source that's only for JS or Native, there is quite a bit of latency while it catches up (?).

That's probably expected, since only then would we recompile the sources for them.

I'm also getting some messages like this (and others have reported it as well).

I think that's sbt, right? I've seen this issue when cross-compiling a lot, not sure how to fix it 🤔

I just checked, and for the cats repo it seems to work much faster with Bloop, which makes me think that sbt is surely not ready yet to be the default. @ckipp01 I would love to hear your input, any idea why sbt behaves weirdly here?

@tgodzik
Contributor

tgodzik commented Feb 17, 2023

Ok, so Native seems to add a lot of compilation time here, so the best bet is to remove it from the initial compilation. Also, did you try increasing Metals' max heap? It might also help in case Metals is having issues in a large codebase. I changed it to 3 GB and that seems more than enough (it was using up to 2.2 GB and down to 1.3 GB normally).

@tgodzik
Contributor

tgodzik commented Feb 17, 2023

Also, I'm wondering if it is normal that we link everything on compilation:

2023.02.17 10:04:11 INFO  Compiling to native code (136284 ms)
2023.02.17 10:04:15 INFO  Total (317422 ms)
2023.02.17 10:04:40 INFO  Linking (18584 ms)
2023.02.17 10:04:40 INFO  Discovered 17315 classes and 93533 methods
2023.02.17 10:05:25 INFO  Optimizing (debug mode) (44529 ms)
2023.02.17 10:06:04 INFO  Generating intermediate code (39154 ms)
2023.02.17 10:06:04 INFO  Produced 8 files

shouldn't that be done on publish?

@armanbilge
Contributor

Thanks so much for looking at this!


We always do cascade compile at the start, we could exclude Native and Js targets either in an option or just make them compile on demand when the only target a file belongs to is Native/Js. What do you think?

I mean, that's basically the whole point of this issue 😛 to let the user configure which targets to include/exclude. I don't think we should necessarily give JVM special status here. But I am very much in favor of exposing some option that lets the user exclude targets.

The only risk here is that when finding references/rename we could lose some that only belong to other targets than JVM.

True, but I think this is an edge case. The majority of the code in the repos I linked is actually shared, so doing the rename on one target is sufficient. Indeed, doing most of your work on one target is sufficient.

In cases where code is not shared and is platform-specific, it might have to be a pretty major rename (essentially affecting the entire codebase) if it needs to be changed for multiple platforms.

Ok, so Native seems to add a lot of compilation time here I think, so the best bet is to remove it from initial compilation.

Right, except on the day when I want to fix a bug in Native-specific code :) then I would prefer it compile that one initially, and I don't care about the other targets at all, actually.


I think that's sbt, right? I've seen this issue when crosscompiling a lot, not sure how to fix it

Yeah, the GC warning is from sbt. Good point about Metals heap, I will try adjusting that.

I just checked and for cats repo it seems to work much faster with Bloop, which makes me think that sbt is surely not ready yet for being the default.

Sad 😕 seems to be trade-offs either way. Ok, thanks, good to know.


Also, I wondering if it is normal that we link everything on compilation:

It's not even needed for publish; it's only needed for linking, which is only needed for running tests or a main method. /shrug

@satorg

satorg commented Feb 17, 2023

I just checked and for cats repo it seems to work much faster with Bloop, which makes me think that sbt is surely not ready yet for being the default.

Well, it is not that simple, actually. I personally find it usually much easier to work with both apps simultaneously: VSCode/Metals and sbt, since it provides more flexibility. I.e. I usually keep a project loaded in both tools. That means, in particular, that if Metals uses Bloop, the project has to be compiled twice for each change: once with Bloop from VSCode and once with sbt itself. That is substantially less performant compared to a setup where everything is compiled via sbt/BSP only (even though sbt is slower than Bloop by itself).

@tgodzik tgodzik self-assigned this Sep 17, 2024
@tgodzik tgodzik transferred this issue from scalameta/metals-feature-requests Sep 17, 2024