Scala ClassLoader breaks nio FileSystemProvider API #10247
Imported From: https://issues.scala-lang.org/browse/SI-10247?orig=1
Using …

Or, use the API where you can provide the class loader to find providers which are not "installed." This example shows loading a test provider from a build dir.

Or, specifying the loader:
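The loader-accepting overload being referred to is presumably `java.nio.file.FileSystems.newFileSystem(URI, Map, ClassLoader)`. A minimal sketch of that call shape, using the JDK's built-in zip/jar provider so it runs without any third-party jar (the temp-file path and object name are illustrative, not from the thread):

```scala
import java.net.URI
import java.nio.file.{FileSystems, Files}
import java.util.Collections

object LoaderWorkaround {
  def main(args: Array[String]): Unit = {
    // Scratch zip path; the jar provider creates the archive because we
    // pass create=true in the env map.
    val zip = Files.createTempFile("nio-demo", ".zip")
    Files.delete(zip)
    val env = Collections.singletonMap("create", "true")
    // The third argument is the class loader searched for providers that
    // are not "installed" (i.e. not visible to the system class loader).
    // The jar provider happens to be installed already; with something
    // like google-cloud-nio, this loader argument is what would make the
    // provider findable.
    val loader = Thread.currentThread().getContextClassLoader
    val fs = FileSystems.newFileSystem(URI.create("jar:" + zip.toUri), env, loader)
    println(fs.provider().getScheme)
    fs.close()
    Files.deleteIfExists(zip)
  }
}
```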
Thanks for these workarounds, @som-snytt! Unfortunately I'm not sure that either one works very well for me:
Can we also have some conversation about why Scala doesn't have this Just Work? I assume there is no Good Reason for it, but rather that it's an unfortunate side effect of something well under the hood of what most Scala users know, or care to know, about. Is there any hope of "fixing" this?
That's a good question. Maybe there's a good reason for Spark to update its script. You can "install" Scala in the …
Not sure I follow; is using …? I assume the latter, which seems like a non-starter to me, at least until we get a straight answer about why Scala is doing this and what fixing the problem at its source would entail.
Sorry, I'm missing what this is supposed to solve or why. Per the above, I'd like to discuss why Scala is doing this and what it would take to fix it, though I appreciate the thorough exploration of the space of possible workarounds 😎.
You can drop your …

I just spent some time refreshing my understanding: basically, you're saying that invoking the runner script with … Just guessing, but that might be because the same script is used for both …

Here's an issue: scala/scala3#44. In 2.13, they want more flexible module handling, and also to use the Java 9 modules, so now might be a good time to start or join a conversation on their discussion site, or mention it on gitter. In fact, I'll go mention it now.

To reiterate, this isn't an issue with …

The confusing options are …

You could also consider using …
@som-snytt should this issue be closed? The combination of "out of scope" yet remaining open is confusing.
@SethTisue I always confuse "out of scope" with "out of rope." I'll try to confirm that I said something correct in the thread and whether the request that it just work can be met. The class loader fix happened since then. |
As an update, I've been using a wrapper around … That got me unblocked, but I don't think a world where [everyone who wants to use JSR203 libraries from Scala] has to [use my library or roll their own similar library] is desirable.

I'm also a little confused that this hasn't come up more widely, and that there aren't others mentioning that they've run into this; I thought folks I work with in the ADAM universe (cf. linked issue above) would have, but perhaps they, and everyone else, primarily use the analogous HDFS FileSystem APIs (which JSR203 was meant to mimic/replace, IIUC)?

Anyway, I'll defer to y'all about what level of fixing, further documenting recommended workarounds, #wontfix'ing, etc. is the right outcome here. Thanks.
I think this is more of a Spark problem (and/or bad design in the JDK):
I suppose it is because people don't use the plain `java` launcher:

`java [-Xms, -Xmx, ...] -cp [~full classpath, including scala & your fs~] YourMainClass`

And that will make everything work just fine. I don't want the …

(As for the dev experience: in sbt you need to enable forking when running/testing, and then everything works the same. It is annoying that it doesn't Just Work™ in the REPL though.)

I have no idea how Spark does all this, or if they allow users to easily inject stuff onto the classpath of Spark itself, but that's what you would need.
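Concretely, the suggestion is to bypass the `scala` runner script so the application classpath ends up on the system class loader; a sketch with placeholder jar names and main class (none of these paths are from the thread):

```shell
# Everything, including scala-library and the jar bundling the custom
# FileSystemProvider, goes on the ordinary -cp, where
# ClassLoader.getSystemClassLoader() (and thus the JDK provider lookup)
# can see it.
java -Xms1g -Xmx4g \
  -cp "scala-library.jar:app-with-provider.jar" \
  com.example.YourMainClass
```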
I do use REPL scripting on my server, so it should just work. Right now it looks pretty broken.

With the need for Java 9 support, it's a good time to revisit … what does it think it's doing?

Here's the REPL class loader, which is more precise than …

It wasn't intended to put … I did a quick munge of the script that just puts the Scala jars on the user class path.

I don't know that there is any benefit in the current set-up, where a special class loader takes over. The runner code can still use a …

With `-nobootcp`: …

Modified: …
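For anyone wanting to see which loaders are actually in play under a given launcher, a small diagnostic sketch (nothing here is specific to the runner script; the object and method names are made up):

```scala
object LoaderChain {
  // Walk a class loader's parent chain, stopping at the bootstrap
  // loader, which is represented as null.
  def chain(cl: ClassLoader): List[ClassLoader] =
    Iterator.iterate(cl)(_.getParent).takeWhile(_ != null).toList

  def main(args: Array[String]): Unit = {
    val system  = ClassLoader.getSystemClassLoader
    val context = Thread.currentThread().getContextClassLoader
    // Under plain `java -cp` these normally coincide; under the Scala
    // runner script, user code may instead live in a child loader that
    // the JDK's system-loader-based provider lookup never consults.
    chain(system).foreach(println)
    println(s"context eq system: ${context eq system}")
  }
}
```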
I haven't looked at whether sbt supports forking when running the console. Hopefully any wrinkle could be ironed out. Similarly, folks did work for Spark to support adding jars to the compiler class path, so this is a natural use case. Maybe some future REPL will support that in a natural way. |
There ought to be an issue to make …
`java.nio.file.spi.FileSystemProvider` loads implementations using `ClassLoader.getSystemClassLoader()`. However, Scala uses a system ClassLoader that doesn't search among JARs on the classpath, making it impossible to use custom `FileSystemProvider` implementations.

As an example, try running the `google-cloud-nio` example; here is a gist showing shell cmds and output. When the same example JAR is run with `java -cp …` and `scala -cp …`, the former finds the custom `FileSystemProvider` (`gs` scheme) but the latter doesn't.

I'm currently planning to use this workaround: call `FileSystemProvider.loadInstalledProviders` while the system classloader is temporarily overwritten to `Thread.currentThread().getContextClassLoader`, which properly finds `FileSystemProvider` implementations in user-supplied JARs.

This SO answer provides basically the same analysis and diagnosis.
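The temporary system-loader override itself isn't shown here; a related, less invasive sketch is to ask `ServiceLoader` directly with an explicit loader and compare against the "installed" set (which is pinned to the system class loader). Names are illustrative; on a stock JDK only the built-in providers (e.g. `file`, `jar`) will appear:

```scala
import java.nio.file.spi.FileSystemProvider
import java.util.ServiceLoader
import scala.jdk.CollectionConverters._

object ListProviders {
  // Schemes visible through an explicit class loader, bypassing the
  // system-loader-only "installed" lookup.
  def viaLoader(cl: ClassLoader): List[String] =
    ServiceLoader.load(classOf[FileSystemProvider], cl)
      .asScala.map(_.getScheme).toList

  def main(args: Array[String]): Unit = {
    // The installed set is resolved once via the system class loader,
    // which is where the Scala runner's special loader causes trouble.
    val installed = FileSystemProvider.installedProviders.asScala.map(_.getScheme).toList
    println(s"installed: $installed")
    println(s"context:   ${viaLoader(Thread.currentThread().getContextClassLoader)}")
  }
}
```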