Currently the "testBadModelParams" method does seem to pass regardless of whether bad
model parameters are used or not. My guess is that its supposed to be testing that an exception is
thrown if you provide invalid model parameters. From a conversation with @colings86 we think that
the padding parameter passed in ensure the alpha and beta values are invalid, but that seems to be
trappy because if the randomness selects simple or linear models the padding isn’t used and the model
will be valid. I will @AwaitsFix the test in the course of #31876
where I stumbled into it, but create this issue for further investigation.
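One way to make the test deterministic would be to stop relying on the random model choice and explicitly build a model that always consumes alpha and beta, so the deliberately invalid values are always validated. A rough sketch follows; the builder names, packages, and fixture fields (INTERVAL_FIELD, interval, windowSize) are written from memory of the 6.x MovAvgIT setup and are assumptions, not verified against master:

```java
// Rough sketch only: pin the model to Holt linear so alpha/beta are always read,
// instead of letting randomModel() pick simple/linear where padding is ignored.
// Package paths follow the 6.x layout and may need adjusting.
import org.elasticsearch.action.search.SearchPhaseExecutionException;
import org.elasticsearch.search.aggregations.pipeline.movavg.MovAvgPipelineAggregationBuilder;
import org.elasticsearch.search.aggregations.pipeline.movavg.models.HoltLinearModel;

import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;
import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.movingAvg;

public void testBadModelParams() {
    // Explicitly invalid alpha/beta: both must lie within [0, 1].
    MovAvgPipelineAggregationBuilder badMovAvg = movingAvg("movavg_counts", "_count")
        .window(windowSize)
        .modelBuilder(new HoltLinearModel.HoltLinearModelBuilder()
            .alpha(-1.0)
            .beta(2.0));

    // The invalid parameters should surface as a SearchPhaseExecutionException.
    expectThrows(SearchPhaseExecutionException.class, () ->
        client().prepareSearch("idx")
            .addAggregation(histogram("histo").field(INTERVAL_FIELD).interval(interval)
                .subAggregation(badMovAvg))
            .get());
}
```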
FAILURE 0.39s | MovAvgIT.testHoltWintersNotEnoughData <<< FAILURES!
> Throwable #1: junit.framework.AssertionFailedError: Expected exception SearchPhaseExecutionException but no exception was thrown
> at __randomizedtesting.SeedInfo.seed([CE15410BFF4D8B8A:A5C013FEF4A98B75]:0)
> at org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2687)
> at org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2672)
> at org.elasticsearch.search.aggregations.pipeline.moving.avg.MovAvgIT.testHoltWintersNotEnoughData(MovAvgIT.java:856)
> at java.lang.Thread.run(Thread.java:748)
I think this needs similar treatment to the issue mentioned above. The test is muted now as well; it would be good to clean this up in one go.
Currently the "testBadModelParams" method does seem to pass regardless of whether bad
model parameters are used or not. My guess is that its supposed to be testing that an exception is
thrown if you provide invalid model parameters. From a conversation with @colings86 we think that
the padding parameter passed in ensure the alpha and beta values are invalid, but that seems to be
trappy because if the randomness selects simple or linear models the padding isn’t used and the model
will be valid. I will @AwaitsFix the test in the course of #31876
where I stumbled into it, but create this issue for further investigation.
The text was updated successfully, but these errors were encountered: