Throttle list size in ListWrapper #2648
Conversation
Also, perhaps resolves #2319?
Codecov Report

```
@@           Coverage Diff           @@
##           master    #2648   +/-   ##
=======================================
  Coverage   95.12%   95.12%
=======================================
  Files         363      363
  Lines        6704     6704
  Branches      305      289   -16
=======================================
  Hits         6377     6377
  Misses        327      327
=======================================
```

Continue to review full report at Codecov.
hmmm, this might be a little surprising to users. Imagine they intentionally created a 10k item ListWrapper to test stack safety of some type class. I am curious how this solution compares to changing

```scala
implicit def listWrapperArbitrary[A: Arbitrary]: Arbitrary[ListWrapper[A]] =
  Arbitrary(arbitrary[List[A]].map(ListWrapper.apply))
```

to

```scala
implicit def listWrapperArbitrary[A: Arbitrary]: Arbitrary[ListWrapper[A]] =
  Arbitrary(arbitrary[List[A]].map(l => ListWrapper(l.take(100))))
```
It actually needs:

```scala
Arbitrary(arbitrary[List[A]].map((l: List[A]) => ListWrapper(l.take(100))))
```

But unfortunately, it does not work. Whilst re-testing, I actually got a new PB of a 29-million-element list... Test code, for those interested:

```scala
new Alternative[ListWrapper] {
  def pure[A](x: A): ListWrapper[A] = ListWrapper(M.pure(x))
  def ap[A, B](f: ListWrapper[A => B])(fa: ListWrapper[A]): ListWrapper[B] = {
    if (f.list.length > 1000 || fa.list.length > 1000)
      println(f.list.length + " " + fa.list.length)
    ListWrapper(M.ap(f.list)(fa.list))
  }
  ...
}
```

I'm totally with you that the current "fix" is not ideal and could give surprising results. The problem is, the current implementation does give surprising results, and does so regularly; it makes my laptop sound like an air raid siren: "oooooOOOOOOoooooooOOOOOOOOoooooo"
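The multiplicative behaviour of `ap` on lists is one way sizes can balloon: applying every function to every argument yields a cartesian product. A minimal plain-Scala sketch (no ScalaCheck; `ApSizeDemo` and its list-based `ap` are illustrative stand-ins for the `ListWrapper` instance above, not cats code):

```scala
// Sketch of why `ap` can balloon list sizes: the list Applicative applies
// every function to every argument, so the result length is the product
// of the two input lengths.
object ApSizeDemo {
  def ap[A, B](fs: List[A => B])(as: List[A]): List[B] =
    fs.flatMap(f => as.map(f))

  def main(args: Array[String]): Unit = {
    val fs: List[Int => Int] = List.fill(1000)((x: Int) => x + 1)
    val as: List[Int]        = (1 to 1000).toList
    // 1000 functions applied to 1000 values => 1,000,000 results.
    println(ap(fs)(as).length) // prints 1000000
  }
}
```

Two 1000-element inputs already yield a million-element result, so a couple of `ap` calls over unbounded generated lists could plausibly reach sizes like the 29 million reported above.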
Just thinking out loud... I wonder if the issue has anything to do with the fact that we aren't directly using a
It's interesting that the list size was not caused by
Yeah... this is worth nailing once and for all... I'll have a look later 👍
@Alistair-Johnson when you saw the ballooned sizes, was it for passing tests, or was it when ScalaCheck was trying to shrink a failing test case?
FWIW I had a go at doing something similar with

```scala
implicit def listWrapperArbitrary[A](implicit arbA: Arbitrary[A]): Arbitrary[ListWrapper[A]] =
  Arbitrary {
    for {
      size <- Gen.choose(0, 100000)
      list <- Gen.resize(size, Gen.listOf(arbA.arbitrary))
    } yield ListWrapper(list)
  }
```

CPU usage / GC activity was a little less spiky but still at the limit of the heap size, and Scala List cons cells still make up ~600MB of heap. Maybe I need to try

EDIT: Sadly that didn't seem to have the desired effect either :(
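For comparison, the effect of a hard upper bound on generated length (as opposed to resizing a generator) can be sketched without ScalaCheck; `boundedList` below is a hypothetical helper standing in for a `Gen.choose`/`Gen.listOfN`-style generator, not ScalaCheck API:

```scala
import scala.util.Random

// Sketch: generate lists whose length is drawn uniformly from [0, maxLen].
// Unlike resizing, an explicit bound makes oversized lists impossible.
object BoundedListGen {
  // boundedList is an illustrative helper, not part of any library.
  def boundedList[A](rnd: Random, maxLen: Int)(gen: Random => A): List[A] = {
    val n = rnd.nextInt(maxLen + 1) // length in [0, maxLen]
    List.fill(n)(gen(rnd))
  }

  def main(args: Array[String]): Unit = {
    val rnd = new Random(42)
    val lengths =
      (1 to 10000).map(_ => boundedList(rnd, 100)(_.nextInt()).length)
    // No generated list can exceed the bound, however many we draw.
    println(lengths.max <= 100) // prints true
  }
}
```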
I'm not convinced this is caused by ListWrapper, as it doesn't happen if you do
@DavidGregory084 we suspect that the list is ballooned after the arbitrary generation. Did you try throwing an exception in the ListWrapper constructor when the given list exceeds some very large size? That might tell us what is trying to create a huge ListWrapper.
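A sketch of that debugging idea, using a simplified stand-in for cats' `ListWrapper` (the `checked` constructor and the 1,000,000 threshold are illustrative, not the real cats-testkit code):

```scala
// Fail loudly when something constructs a ListWrapper from a huge list,
// so the stack trace reveals which operation is ballooning the sizes.
final case class ListWrapper[A](list: List[A])

object ListWrapper {
  // Guarded constructor for debugging; the threshold is arbitrary.
  def checked[A](list: List[A]): ListWrapper[A] = {
    require(list.lengthCompare(1000000) <= 0,
      s"ListWrapper created with ${list.length} elements")
    ListWrapper(list)
  }
}

object GuardDemo {
  def main(args: Array[String]): Unit = {
    println(ListWrapper.checked(List(1, 2, 3)).list.sum) // prints 6
    // ListWrapper.checked(List.fill(2000000)(0)) would throw
    // IllegalArgumentException, pinpointing the offending call site.
  }
}
```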
FYI just encountered another one of these on build (2.13)
@kailuowang I can reproduce the same behaviour just by running

I was looking at some heap dumps earlier and they are dominated by Scala.js ClearableLinkers. Is it possible to increase the heap size to the ~3.5-4GB we need for this to run? Alternatively, can we split the build up so that fewer tasks are running in parallel? SBT will happily run four fastOptJS tasks in parallel, which is very heavy in terms of memory usage. Perhaps we could use the tag system in SBT to cut down the number of fastOptJS tasks that can run at once.
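The tag idea might look roughly like this in `build.sbt` (a sketch only; `LinkTag` is a made-up name, and the exact way of attaching the tag to `fastOptJS` may differ from this build):

```scala
// build.sbt sketch: cap concurrent Scala.js linking via sbt's tag system.
// LinkTag is an illustrative custom tag, not an existing sbt/Scala.js key.
val LinkTag = Tags.Tag("scalajs-link")

// Allow at most one task carrying LinkTag to run at any time.
Global / concurrentRestrictions += Tags.limit(LinkTag, 1)

// Each Scala.js project would then tag its linking task, e.g.
// Compile / fastOptJS := (Compile / fastOptJS).tag(LinkTag).value
```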
@DavidGregory084 the VM running our build, according to the Travis documentation, has 6G of memory, so we should be able to increase it. I'd like to further break up the build task as well (as part of this proposal #2570), but that might be a heavier task.
I think now that I was totally wrong about this, as I have indeed seen the same thing in JVM tests while investigating #2319
Is this still an issue? If yes, feel free to re-open.
Running the tests locally, and in particular `ApplicativeSuite`, I managed to occasionally, but nonetheless repeatedly, experience the same test timeouts we see on Travis. Additionally, I observed some tests taking much longer than average, always coupled with the test JVM memory jumping up from about 1.2G to the max of 3G.

A few `println`'s later... the `List[A]`'s generated by the `Arbitrary` in the custom `ListWrapper` are generally in the sub-100-element range, but I have seen them balloon to over 8 million elements in the slow tests; this would, I believe, explain the OOMs.

The real fix is to make sure ScalaCheck creates reasonably sized lists for `List[A]`'s in the first place, but unfortunately I could not quickly achieve this. So the current fix is suboptimal, but works.

So if you are happy with the investigation, I might suggest that the current "fix" is better than what we currently have; a new issue can be raised for a more elegant solution, one that might also take into account other slow tests that might suffer from a similar issue.