
Addressing hyper-sensitivity for RCFs with homogeneous observations #400

zackrossman opened this issue Sep 18, 2023 · 8 comments

@zackrossman
I'm using multiple ThresholdedRandomCutForests to detect anomalies which we expect to occur very infrequently. Each RCF is configured to process three-dimensional observations (triple-vectors), where each dimension tracks some indicator of anomaly. Because anomalies are infrequent, most of the observations are exactly the same; it wouldn't be surprising if the observations for the majority of the models were identical over the (expected) multi-year lifetime of a model. This has resulted in false positives whenever any value in the triple-vector increases by even the smallest amount. These false positives would be more manageable if they were paired with a moderate threat score reflecting that there is only a mild anomaly, but we're seeing threatScore=100.

Here's a theoretical series of observations processed by a ThresholdedRandomCutForest which will output "anomaly" with threatScore=100 and confidenceScore=100:

obs_0: [5, 10, 2]
obs_1: [5, 10, 2]
obs_2: [5, 10, 2]
...
obs_<sampleSize>: [5, 10, 2]
obs_<sampleSize+1>: [5, 10, 3] // Different from all previous observations

I've tried raising the z-score threshold for the RCF from the default of 2.5 up to 7, but no luck. My theory is that the standard deviation of the observations from obs_0 through obs_<sampleSize> is 0, which means any observation that deviates from the previous ones will have an enormous (infinite?) z-score. If that is the case, raising the z-score threshold is futile. I tested this hypothesis by adding a small amount of random "jitter" to the observations themselves; this was the only way I was able to get 0 < threatScore < 100.
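
For reference, here's a minimal sketch of the setup that reproduces this (builder options simplified, and the exact accessor names are my assumptions about the parkservices API):

    import com.amazon.randomcutforest.parkservices.AnomalyDescriptor;
    import com.amazon.randomcutforest.parkservices.ThresholdedRandomCutForest;

    public class HomogeneousObservations {
        public static void main(String[] args) {
            int sampleSize = 256;
            ThresholdedRandomCutForest forest = ThresholdedRandomCutForest.builder()
                    .dimensions(3).shingleSize(1).sampleSize(sampleSize).build();

            // feed identical three-dimensional observations until the sampler is full
            for (int i = 0; i <= sampleSize; i++) {
                forest.process(new double[] { 5, 10, 2 }, i);
            }

            // the smallest change in one dimension is now flagged as a full-grade anomaly
            AnomalyDescriptor result = forest.process(new double[] { 5, 10, 3 }, sampleSize + 1);
            System.out.println(result.getAnomalyGrade() + " " + result.getRCFScore());
        }
    }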

I have two questions:

  1. Why are we seeing this behavior? Is my theory, above, on the right track?
  2. What can we do to address it? Adding jitter/noise to the observations themselves is counterintuitive. I've considered layering a heuristic on top of the inference result from the RCF model, but I'm wondering if there's a better way to approach this problem.

Any suggestions/feedback greatly appreciated!

@sudiptoguha
Contributor

Nice! You are on the right track -- a standard deviation of 0 is likely; it could also be that with only a few distinct 3-tuples (say 5) the trees are very short. So the theory is correct.

In terms of the second question -- the behavior is problematic when you know that [5,10,3] is not critically different from [5,10,2]. But that is more "business logic", and perhaps better addressed as a layer on top of RCF? The anomaly description contains information about the "expected value", so one could filter on that. If one is interested, one could also build an RLHF layer to crowdsource the filtering. But it may be better not to take this sensitivity away -- for some other use case, "we have never seen this before" is perhaps exactly what's worth flagging.

Adding a filter and adding jitter are both options for addressing this overfitting -- it depends which would be easier to update down the road.

Adding noise may also help in terms of privacy preservation, if that is of interest. Unless privacy is involved, I think it is better not to modify the input, because down the road the reason the noise was added would be forgotten, and some new use case would arise where one wants to detect [5,10,3]. At some level, filtering is post-processing and adding noise is pre-processing. The former has more information (think of bottom-up dynamic programming, which is post-processing) and is thus more powerful as a detector (which is why noise is added as a first step when we wish to reduce information). Hope that helps.
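
As an illustration, such a post-processing filter might look roughly like this (a sketch only -- the tolerance vector is your business logic, and I'm assuming the expected values can be read off the AnomalyDescriptor):

    // sketch: treat an anomaly as noise when every dimension of the observed
    // point is within a domain-specific tolerance of the forest's expected value
    static boolean withinTolerance(AnomalyDescriptor result, double[] observed, double[] tolerance) {
        if (result.getAnomalyGrade() == 0 || result.getExpectedValuesList() == null) {
            return false; // not an anomaly, or no expected value to compare against
        }
        double[] expected = result.getExpectedValuesList()[0]; // most likely expected point
        for (int i = 0; i < observed.length; i++) {
            if (Math.abs(observed[i] - expected[i]) > tolerance[i]) {
                return false; // genuinely far from expected in some dimension
            }
        }
        return true; // e.g. observed [5,10,3], expected [5,10,2], tolerance {0,0,1}
    }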

@zackrossman
Author

Thanks for the input, Sudipto. We do have some domain-specific knowledge which could be applied to a filtering/post-processing layer, as you suggested.

@sudiptoguha
Contributor

Sounds good -- you will likely need the domain-specific post-processing for more than just this issue. But I just remembered (what a good night's sleep won't do!) that this specific filtering has already been built in! See https://github.com/aws/random-cut-forest-by-aws/blob/main/Java/examples/src/main/java/com/amazon/randomcutforest/examples/parkservices/LowNoisePeriodic.java

You can call forest.setIgnoreNearExpectedFromAbove(new double[] {0, 0, 1}) to suppress the [5,10,3], because the expected value should be [5,10,2]. There is a corresponding forest.setIgnoreNearExpectedFromBelow(). So the specific narrow case of filtering within a pre-defined box of values is there already (do try it out). But in general you'd need post-processing at some point, so plan for that as well.
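
Roughly, for your three-dimensional case (a sketch -- treat the builder settings and the tolerance values as placeholders for your configuration):

    ThresholdedRandomCutForest forest = ThresholdedRandomCutForest.builder()
            .dimensions(3).shingleSize(1).build();

    // [5,10,3] is within +1 of the expected [5,10,2] in the third dimension,
    // so it is suppressed; any increase in the first two dimensions still flags
    forest.setIgnoreNearExpectedFromAbove(new double[] { 0, 0, 1 });

    // tolerate arbitrarily large drops below the expected value in every dimension
    forest.setIgnoreNearExpectedFromBelow(
            new double[] { Float.MAX_VALUE, Float.MAX_VALUE, Float.MAX_VALUE });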

@zackrossman
Author

Oh, great! I will definitely run some experiments with that. If nothing else, I think setIgnoreNearExpectedFromBelow is actually going to solve another, less pressing issue of mine: our business logic says we should not consider an observation anomalous when it falls below the historical data. Back to the example where [5,10,2] is representative of the historical observations -- we must not flag [0,0,0] as an anomaly, even though it is most definitely an outlier.

Can you confirm this is a good application of setIgnoreNearExpectedFromBelow?

@sudiptoguha
Contributor

:) Yes. Look at https://github.com/aws/random-cut-forest-by-aws/blob/main/Java/examples/src/main/java/com/amazon/randomcutforest/examples/parkservices/LowNoisePeriodic.java

and see lines 120-125:

    // the following suppresses all anomalies that shifted down compared to
    // predicted, for any sequence

    // forest.setIgnoreNearExpectedFromBelow(new double [] {Float.MAX_VALUE});
@sudiptoguha
Contributor

Since we separated Above and Below, both take in positive values, and the check is applied as a - b and b - a separately.
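
That is, per dimension, the semantics are roughly the following (a sketch, not the library's exact internals):

    // a: actual value, b: expected value; both thresholds are positive
    static boolean ignored(double a, double b, double fromAbove, double fromBelow) {
        return (a - b > 0 && a - b <= fromAbove)   // shifted up, within tolerance
            || (b - a > 0 && b - a <= fromBelow);  // shifted down, within tolerance
    }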

@zackrossman
Author

Nice, I've scheduled these improvements on our engineering roadmap for a couple of weeks from now.

Thanks for being so responsive. Not sure how you want to handle open-ended issues like this one, but feel free to close, and I can open another issue if I have follow-up questions at that point.

@sudiptoguha
Contributor

Open-ended issues are great -- if one person has questions, likely others have similar questions as well.
