Support for concurrent recursive function memoization #2086
If you would like to take credit for it (and we would like you to), then it should be in the form of a pull request, which is pretty trivial in GH.
Mmmh, that is pretty cool. I have some questions.
```java
interface Function0<R> {
    static <R> Function0<Future<R>> memoized(Supplier<? extends Future<? extends R>> f) { ... }
}
interface Function1<T1, R> {
    static <T1, R> Function1<T1, Future<R>> memoized(Function<? super T1, ? extends Future<? extends R>> f) { ... }
}
interface Function2<T1, T2, R> {
    static <T1, T2, R> Function2<T1, T2, Future<R>> memoized(BiFunction<? super T1, ? super T2, ? extends Future<? extends R>> f) { ... }
}
interface Function3<T1, T2, T3, R> {
    static <T1, T2, T3, R> Function3<T1, T2, T3, Future<R>> memoized(Function3<? super T1, ? super T2, ? super T3, ? extends Future<? extends R>> f) { ... }
}
```
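To make the intent of these signatures concrete, here is a minimal, non-recursive sketch in plain JDK terms (`CompletableFuture` standing in for Vavr's `Future`; the class and method names are hypothetical, not part of any proposal above). The key idea is to cache the future itself rather than the result, so concurrent callers for the same argument share a single in-flight computation:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class Main {

    // Sketch: memoize a future-producing function by caching the future
    // itself, so concurrent callers for the same argument share one
    // computation instead of racing to compute it twice.
    static <T, R> Function<T, CompletableFuture<R>> memoize(
            Function<? super T, ? extends CompletableFuture<R>> f) {
        Map<T, CompletableFuture<R>> cache = new ConcurrentHashMap<>();
        return t -> cache.computeIfAbsent(t, k -> f.apply(k));
    }

    public static void main(String[] args) {
        Function<Integer, CompletableFuture<Integer>> square =
                memoize(n -> CompletableFuture.supplyAsync(() -> n * n));
        System.out.println(square.apply(5).join());             // prints 25
        System.out.println(square.apply(5) == square.apply(5)); // prints true (same cached future)
    }
}
```

Note that this naive `computeIfAbsent` version is not safe for recursive functions, because a recursive call would re-enter the map update on the same thread; avoiding that is exactly what the trampolining discussed in this thread is for.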
... That should still work, especially trampolining. @chb0github you are right, but in this case it is not trivial to create a PR. There is a code generator written in Scala that has to be modified (Generator.scala). Also we need to do this for Function* and CheckedFunction*...
One more thing: it is not clear where to put it. We plan for 1.0.0 to modularize Vavr (see this blog post). If we modify io.vavr.Function*, we have to pull the concurrent package into the upcoming vavr-core module. I planned to create a separate module for the concurrency stuff. Another solution would be to add this functionality to Future itself, e.g.

```java
interface Future<T> {
    // static factory methods for Function0..n
    static <T1, R> Function1<T1, Future<R>> memoized(Function<? super T1, ? extends Future<? extends R>> f) { ... }
    // ...
}
```

But I agree that it would be more intuitive to place these methods in Function... Any opinions?
This leads to (Function0..8 + CheckedFunction0..8) * 2 = 36 new static factory methods in Future (compared to just 2 new static factory methods in each of the (Checked)FunctionX interfaces). 😱
@smillies (Off-topic) I see that you already got your hands on Kotlin coroutines. I've read that they do not use native threads. Do you think it is possible to achieve the same in Java while preserving the functionality of Future/Promise? (I mean backing Future by something other than a native Thread.)
relaxing the type signatures to … But I do not see what you gain by introducing all those factory methods. In what way is writing … ? And you can keep a separate module for the concurrent stuff.
@danieldietrich (Off-topic) I haven't looked at Kotlin coroutines in depth. However, as Kotlin coroutines require a new keyword (…), … Apart from …
@smillies Thanks!
I think that having not too many entry points (i.e. fewer distinct 'words' in the Vavr 'language') makes it easier to find functionality. But maybe you are right. I will think about it...
I think you are right, it can be simplified in this case - it is similar to this one.

(Off-topic) Thank you for your thorough investigation. The idea of having 'lightweight' concurrency in Java without native threads is tempting, but I will not dig deeper for now...
Oh yes, and I believe you do not want a variant of …
What you do want is a variant of …
Thanks. Oh yeah, caching... Nice topic 🤔
Hi there, I've gone and created a pull request. Changes compared to the original proposal: MemoizedConcurrently has an additional method that accepts a cache implementation, and ConcurrentTrampoliningMemoizer is hidden (package private).
I have closed the pull request, because the code used Promise, which has been removed in the meantime. Without Promise, I currently see no good way to implement this functionality. It could be implemented in terms of CompletableFuture, as in the original version on my blog, but then there's no good way to go from a Function1 that produces an io.vavr.concurrent.Future to a Function that produces a CompletableFuture and back. (And Future.fromCompletableFuture() cannot be used, because it blocks.)
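The blocking problem can be illustrated in plain CompletableFuture terms (the adapter names here are hypothetical, not Vavr API): a blocking conversion waits on the source future on the calling thread, whereas an asynchronously completed wrapper is returned immediately and finished by a callback when the source completes:

```java
import java.util.concurrent.CompletableFuture;

public class Main {

    // Blocking conversion (the problem described above): the calling
    // thread waits for the source future before the wrapper even exists.
    static <T> CompletableFuture<T> fromBlocking(CompletableFuture<T> source) {
        return CompletableFuture.completedFuture(source.join()); // join() blocks
    }

    // Non-blocking conversion: the wrapper is returned at once and is
    // completed asynchronously when the source finishes.
    static <T> CompletableFuture<T> fromAsync(CompletableFuture<T> source) {
        CompletableFuture<T> wrapper = new CompletableFuture<>();
        source.whenComplete((value, error) -> {
            if (error != null) wrapper.completeExceptionally(error);
            else wrapper.complete(value);
        });
        return wrapper;
    }

    public static void main(String[] args) {
        CompletableFuture<String> source = new CompletableFuture<>();
        CompletableFuture<String> wrapper = fromAsync(source); // returns immediately
        source.complete("done");
        System.out.println(wrapper.join()); // prints done
    }
}
```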
This can be considered a bug. We need a version that asynchronously completes the wrapping future. I will create an issue. Update: see #2115
Vavr already supports non-concurrent function memoization. As I happen to have written a concurrent recursive function memoizer some time ago, I will attach it as a patch for you to decide if it's worth including something like this in Vavr. I have ported the code from ordinary Java (CompletableFuture) to (I hope idiomatic) use of Vavr constructs (Promise, Future):
concurrent-recursive-function-memoization-patch.zip
Usage example with the good old Fibonacci sequence (it doesn't actually show concurrent calls, but should be enough to illustrate the use).
My original blog post about this topic is here.
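The attached example code is not reproduced in this thread. Purely as an illustrative reconstruction of the pattern (plain JDK CompletableFuture instead of Vavr's Promise/Future; all names are hypothetical), trampolined memoization applied to Fibonacci looks roughly like this: a placeholder future is put into the cache before the computation starts, and the recursive calls run in an asynchronous task so they never nest inside the cache update:

```java
import java.math.BigInteger;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class Main {

    static final Map<Integer, CompletableFuture<BigInteger>> cache = new ConcurrentHashMap<>();

    // Concurrent, recursive, memoized Fibonacci (assumes n >= 0).
    static CompletableFuture<BigInteger> fib(int n) {
        CompletableFuture<BigInteger> cached = cache.get(n);
        if (cached != null) return cached;
        CompletableFuture<BigInteger> placeholder = new CompletableFuture<>();
        CompletableFuture<BigInteger> prior = cache.putIfAbsent(n, placeholder);
        if (prior != null) return prior; // another thread won the race
        // The "trampoline": compute asynchronously, so the recursive calls
        // below run outside the cache insertion and cannot deadlock or
        // re-enter a ConcurrentHashMap update on the same thread.
        CompletableFuture.runAsync(() -> {
            if (n <= 1) {
                placeholder.complete(BigInteger.valueOf(n));
            } else {
                fib(n - 1).thenCombine(fib(n - 2), BigInteger::add)
                          .whenComplete((value, error) -> {
                              if (error != null) placeholder.completeExceptionally(error);
                              else placeholder.complete(value);
                          });
            }
        });
        return placeholder;
    }

    public static void main(String[] args) {
        System.out.println(fib(50).join()); // prints 12586269025
    }
}
```

Because every fib(k) future is cached before its computation runs, the two recursive branches and any concurrent callers all share the same sub-results, so the naive exponential recursion collapses to linear work.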