---
title: "Outlier Analysis"
author: "Rob Wiederstein"
bibliography: Outlier_analysis.bib
csl: ieee.csl
format:
  revealjs:
    incremental: false
    theme: simple
    reference-location: document
    logo: ""
    footer: <a href="https://github.com/RobWiederstein/libby/blob/main/LICENSE.md">MIT License 2023</a>
    slide-number: true
    controls: true
    multiplex: false
    chalkboard: true
    auto-stretch: true
    include-in-header:
      text: |
        <style>
        .center-xy {
          margin: 0;
          position: absolute;
          top: 33%;
          left: 10%;
          -ms-transform: translateY(-25%) translateX(-25%);
          transform: translateY(-25%) translateX(-25%);
        }
        </style>
  html:
    embed-resources: true
    preload-iframes: true
---
```{r set-options, include=FALSE}
knitr::opts_chunk$set(
echo = FALSE,
warning = FALSE,
error = FALSE,
cache = TRUE
)
```
```{r load-libraries, message=FALSE}
library(ggplot2)
library(dplyr)
library(tidyr)
library(quarto)
library(psych)
library(citr)
library(kableExtra)
```
# Overview
## Illustration
::: {.center-xy}
| | |
|:------|:-------|
| **Princess Fiona:** | "What kind of knight are you?" |
| **Shrek:** | "One of a kind." |
:::
## Based Upon
```{r talagala-article, fig.cap="Article[@talagala2021]"}
knitr::include_graphics("./img/talagala_anomaly_detection.png")
```
:::{.notes}
“We applied our stray algorithm to a dataset obtained from an automated pedestrian counting system with 43 sensors in the city of Melbourne, Australia (City of Melbourne 2019; Wang 2018), to identify unusual pedestrian activities within the municipality.” The article uses the KNN algorithm and scagnostics to identify days of unusual activity.
:::
## Roadmap
- Basics
- Distributions
- Models (KNN)
- Part B Claims Data
- Scagnostics
- Interactive Display
## Also Known As
"outliers, novelty, faults, deviants, discordant observations, extreme values/cases, change points, rare events, intrusions, misuses, exceptions, aberrations, surprises, peculiarities, odd values and contaminants"[@talagala2021]
## Definitions
- **Kurtosis** -- a measure of the <span class="fragment highlight-red">tailedness</span> of a distribution, i.e., how often outliers occur.
- **Outlier** -- “An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.”[@hawkins1980identification]
- **Skewness** -- a measure of the <span class="fragment highlight-red">asymmetry</span> of the probability distribution of a real-valued random variable about its mean.
- **Standardize** -- scale all of the values in the dataset such that the mean is 0 and the standard deviation is 1.
## Symbols
| Symbol | Short | Meaning |
|:------:|:------:|:------:|
|$\mu$ | "mew" | mean |
|$\sigma$| "sigma" | std. dev. |
:::{.notes}
None.
:::
# Basics
## Outliers Classified
```{r outliers-classified, fig.cap="$c_1$ and $c_2$ are clusters; $x_1$ and $x_2$ are global anomalies; $x_3$ is a local anomaly; and $c_3$ is potentially ambiguous. [@goldstein2016]"}
knitr::include_graphics("./img/outliers_classified.png")
```
::: {.notes}
"Two anomalies can be easily identified by eye: x1 and x2 are very different from the dense areas with respect to their attributes and are therefore called global anomalies. When looking at the dataset globally, x3 can be seen as a normal record since it is not too far away from the cluster c2. However, when we focus only on the cluster c2 and compare it with x3 while neglecting all the other instances, it can be seen as an anomaly. Therefore, x3 is called a local anomaly, since it is only anomalous when compared with its close-by neighborhood."[@goldstein2016]
:::
## Continuum of Outlierness
```{r outlierness}
knitr::include_graphics("./img/outlierness.png")
```
## Univariate Outliers
> The detection of outliers in the observed distribution of a single variable spans the entire history of outlier detection. It spans this history not only because it is the simplest formulation of the problem, but also because it is deceptively simple.[@wilkinson2018]
$$
\{1, 2, 3, 4, 50, 97, 98, 99\}
$$
## Distance from the Center Rule
> “The word outlier implies lying at an extreme end of a set of ordered values – far away from the center of those values. The modern history of outlier detection emerged with methods that depend on a measure of centrality and a distance from that measure of centrality.” [@wilkinson2018]
$$
\{1, 47, 47, 49, 51, 52, 55, 100\}
$$
## Common Outlier Definitions
- 1.5 × the interquartile range (Tukey)
- 3.0 × the standard deviation
- <span class="fragment highlight-red">Percentile?</span>
:::{.notes}
- 1.5 × IQR: based on quartiles, so it is more robust for skewed distributions.
- 3 × std. dev.: uses the mean and the standard deviation, more appropriate for symmetric distributions.
- All rules for identifying outliers are arbitrary.
:::
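## Outlier Rules in Code
Both rules can be applied in a few lines of base R. A minimal sketch (not from the original deck), using the set from the "Distance from the Center Rule" slide, shows how the two rules can disagree:

```r
# Flag outliers under Tukey's 1.5 * IQR rule and the 3 * SD rule
x <- c(1, 47, 47, 49, 51, 52, 55, 100)

# Tukey's fences: beyond Q1 - 1.5 * IQR or Q3 + 1.5 * IQR
q <- quantile(x, c(.25, .75))
iqr_out <- x < q[1] - 1.5 * IQR(x) | x > q[2] + 1.5 * IQR(x)

# 3-sigma rule: more than 3 standard deviations from the mean
sd_out <- abs(x - mean(x)) / sd(x) > 3

x[iqr_out]  # 1 and 100 are flagged by Tukey's rule
x[sd_out]   # none flagged: the extremes inflate the SD itself
```

Note the disagreement: the standard deviation is computed from the same data that contains the outliers, so extreme values can mask themselves under the 3-sigma rule.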
## Four Methods to Identify Outliers
1. Extreme Value Analysis
2. Probabilistic and Statistic Models
3. Linear Models
4. Proximity-Based Models
   - Cluster
   - Density
   - <span class="fragment highlight-red">Distance</span> <== (We are here!)
:::{.notes}
EVA: “The most basic form of outlier detection is extreme-value analysis of 1-dimensional data. These are very specific types of outliers in which it is assumed that the values that are either too large or too small are outliers.” Singh and Upadhyaya 2012 “The key is to determine the statistical tails of the underlying distribution.”
PSA: “In probabilistic and statistical models, the data is modeled in the form of a closed-form probability distribution, and the parameters of this model are learned.”
LM: These methods model the data along lower-dimensional subspaces with the use of linear correlations.
PB: “Proximity-based methods are among the most popular class of methods used in outlier analysis. Proximity-based methods may be applied in one of three ways, which are clustering methods, density-based methods”
-Proximity based. "Proximity-based techniques define a data point as an outlier when its locality (or proximity) is sparsely populated."[@aggarwal2017] Cluster, Distance and Density based. "The distance of a data point to its k-nearest neighbor (or other variant) is used in order to define proximity."[@aggarwal2017]
:::
## Tools: Density Plot
```{r std-dev-diagram}
knitr::include_graphics("./img/Standard_deviation_diagram.svg")
```
::: {.notes}
About 68% of values drawn from a normal distribution are within one standard deviation σ of the mean; about 95% lie within two standard deviations; and about 99.7% are within three. This fact is known as the 68–95–99.7 (empirical) rule, or the 3-sigma rule.
:::
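## Checking the Empirical Rule
The 68–95–99.7 rule can be verified directly from the normal CDF with `pnorm()` (a quick sketch, not in the original deck):

```r
# Probability mass within k standard deviations of the mean
# of a standard normal distribution
within_k <- function(k) pnorm(k) - pnorm(-k)

round(within_k(1:3), 4)  # 0.6827 0.9545 0.9973
```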
## Tools: Histogram (Binning)
```{r histogram-example}
library(ggplot2)
mtcars |>
ggplot() +
aes(mpg) +
geom_histogram() +
labs(title = "Mtcars") +
theme_bw()
```
## Tools: Boxplots
```{r boxplot-explained}
knitr::include_graphics(path = "./img/boxplot_explained.png")
```
::: {.notes}
"In descriptive statistics, a box plot or boxplot is a method for graphically demonstrating the locality, spread and skewness groups of numerical data through their quartiles. In addition to the box on a box plot, there can be lines (which are called whiskers) extending from the box indicating variability outside the upper and lower quartiles; thus, the plot is also called the box-and-whisker plot and the box-and-whisker diagram."
"The range-bar method was first introduced by Mary Eleanor Spear in her book "Charting Statistics" in 1952 and again in her book "Practical Charting Techniques" in 1969. The box-and-whisker plot was first introduced in 1970 by John Tukey, who later published on the subject in his book "Exploratory Data Analysis" in 1977."
:::
# Distributions
## Normal
```{r normal-dist-plot}
set.seed(1)
nd <- data.frame(
  id = 1:100,
  y1 = rnorm(100, mean = 0, sd = 1),
  y2 = rnorm(100, mean = 0, sd = 2),
  y3 = rnorm(100, mean = 0, sd = 3)
)
nd |>
tidyr::pivot_longer(cols = c(y1, y2, y3)) |>
ggplot() +
aes(value, group = name, color = name) +
geom_density(lwd = 2) +
theme_minimal() +
labs(title = "Normal")
```
```{r normal-dist-table}
library(psych)
describe(nd[, c(2, 3, 4)]) |>
dplyr::select(!c(trimmed, mad, range)) |>
kableExtra::kable(digits = 2) |>
kableExtra::kable_styling(bootstrap_options = c("striped", "hover", "condensed"), font_size = 20)
```
## Zipf
```{r zipf-distribution}
library(sads)
set.seed(1)
zd <- data.frame(
  id = 1:100,
  y1 = rzipf(n = 100, 1000, .95),
  y2 = rzipf(n = 100, 1000, .75),
  y3 = rzipf(n = 100, 1000, .50)
)
library(ggplot2)
zd |>
tidyr::pivot_longer(cols = c(y1, y2, y3)) |>
ggplot() +
aes(value, group = name, color = name) +
geom_density(lwd = 2) +
theme_minimal() +
labs(title="Zipf")
```
```{r zipf-dist-table}
library(psych)
describe(zd[, c(2, 3, 4)]) |>
dplyr::select(!c(trimmed, mad, range)) |>
kableExtra::kable(digits = 2) |>
kableExtra::kable_styling(bootstrap_options = c("striped", "hover", "condensed"), font_size = 20)
```
::: {.notes}
Zipf's law is an empirical law that often holds, approximately, when a list of measured values is sorted in decreasing order. It states that the value of the *n*th entry is inversely proportional to *n*. The best-known instance applies to the frequency table of words in a text or corpus of natural language: $\text{word frequency} \propto \frac{1}{\text{word rank}}$.
:::
## Log
```{r log-distribution}
set.seed(1)
ld <- data.frame(
  id = 1:100,
  y1 = rlnorm(100, meanlog = 0, sdlog = 1),
  y2 = rlnorm(100, meanlog = 0, sdlog = 2),
  y3 = rlnorm(100, meanlog = 0, sdlog = 3)
)
ld |>
tidyr::pivot_longer(cols = c(y1, y2, y3)) |>
ggplot() +
aes(value, group = name, color = name) +
geom_density(lwd = 2) +
#scale_x_log10() +
theme_minimal() +
labs(title = "Log")
```
```{r log-dist-table}
library(psych)
describe(ld[, c(2, 3, 4)]) |>
dplyr::select(!c(trimmed, mad, range)) |>
kableExtra::kable(digits = 2) |>
kableExtra::kable_styling(bootstrap_options = c("striped", "hover", "condensed"), font_size = 20)
```
## Z-value Test
$$
Z_1 = \frac{|X_1 - \mu|}{\sigma}
$$
where $X_1$ = observation, $\mu$ = mean, and $\sigma$ = standard deviation
```{r z-value-r, eval=FALSE, include=TRUE, echo=TRUE}
# in R
df$z <- (df$points - mean(df$points)) / sd(df$points)
```
## Normalized/Standardized
```{r uniform-example}
set.seed(1)
ud <- data.frame(uniform = runif(100, 0, 50))
library(dplyr)
ud$transformed <- scale(ud$uniform)
library(tidyr)
library(ggplot2)
ud |>
pivot_longer(cols = c(uniform, transformed)) |>
mutate(name = factor(name)) |>
ggplot() +
aes(value, fill = name) +
geom_histogram(lwd = 2) +
theme_minimal() +
theme(legend.position = "none") +
facet_wrap(vars(name)) +
labs(title = "")
```
```{r uniform-dist-table}
library(psych)
describe(ud[, c(1, 2)]) |>
dplyr::select(!c(trimmed, mad, range)) |>
kableExtra::kable(digits = 2) |>
kableExtra::kable_styling(bootstrap_options = c("striped", "hover", "condensed"), font_size = 20)
```
# Boxplots
## Zipf Box Plots
```{r zipf-box-plots}
zd |>
tidyr::pivot_longer(cols = c(y1, y2, y3)) |>
ggplot() +
aes(name, value, group = name, color = name) +
geom_boxplot(notch = TRUE) +
geom_jitter(width = .2, color = "black") +
theme_minimal() +
labs(title="Zipf")
```
## Normal Box Plot
```{r nd-box-plots}
nd |>
tidyr::pivot_longer(cols = c(y1, y2, y3)) |>
ggplot() +
aes(name, value, group = name, color = name) +
geom_boxplot(notch = TRUE) +
geom_jitter(width = .2, color = "black") +
theme_minimal() +
labs(title = "Normal")
```
# Models
## KNN
```{r}
knitr::include_graphics("./img/ibm_knn.png")
```
## Best Unsupervised Learning Method
> “By using a diverse collection of datasets, several evaluation measures, and a broad range of parameter settings, we argue here that it is typically <span class="fragment highlight-red">pointless and unjustified</span> to state the superior behavior of any method for the general case.”[@campos2016]
## Evaluation of KNN
```{r knn-article-results, fig.cap="**(a)** is unnormalized data, aggregated over all datasets.<br> **(b)** is normalized data, aggregated over all datasets.[@campos2016]"}
knitr::include_graphics("./img/knn_unsupervised_results.png")
```
## What Works
> The gist of our findings is that, when considering the totality of results . . . the seminal methods kNN, kNNW, and LOF still remain the <span class="fragment highlight-red">state of the art</span> -- none of the more recent methods tested offer any comprehensive improvement over those classics, while two methods in particular (LDF and KDEOS) have been found to be noticeably less robust to parameter choices.[@campos2016]
## How many neighbors?
```{r k-nearest-neighbor}
knitr::include_graphics("./img/k_nearest_neighbor.png")
```
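## KNN Score Sketch
The kNN outlier score of a point is simply its distance to its k-th nearest neighbor: points in sparse neighborhoods score high. A minimal, illustrative base-R sketch (the article itself uses the stray package; this is not its implementation):

```r
# kNN outlier score: distance to the k-th nearest neighbor.
# Larger scores indicate sparser neighborhoods (likely outliers).
knn_score <- function(X, k = 5) {
  d <- as.matrix(dist(scale(X)))               # standardize, then pairwise distances
  apply(d, 1, function(row) sort(row)[k + 1])  # sort(row)[1] is the self-distance, 0
}

set.seed(1)
X <- rbind(matrix(rnorm(200), ncol = 2),  # dense cluster of 100 points
           c(8, 8))                       # one planted global outlier
scores <- knn_score(X, k = 5)
which.max(scores)  # row 101, the planted point
```

The full pairwise distance matrix is O(n²) in memory, so this sketch only suits small datasets; tree-based neighbor search is the usual remedy at scale.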
## Proof
> “A key question arises as to how the effectiveness of an outlier detection algorithm should be evaluated. Unfortunately, this is often a difficult task, because . . . ground-truth labeling of data points as outliers or non-outliers is often not available. Therefore, much of the research literature uses case studies to provide an <span class="fragment highlight-red">intuitive and qualitative evaluation</span> of the underlying outliers in unsupervised scenarios.”[@aggarwal2017]
# Data
## Medicare Claims
```{r}
knitr::include_graphics("./img/cms_mdcr_clms_dwnld.png")
```
:::{.notes}
- Part B claims
- NPI level
- 2021 3GB, 9.8m claims
:::
## Structure
```{r}
mdcr_clms_ex <- readRDS("./data/2021_mdcr_clm_ex.rds")
mdcr_clms_ex |>
t() |>
as.data.frame() |>
dplyr::rename(example = "1") |>
kableExtra::kable() |>
kableExtra::kable_styling(bootstrap_options = "striped", font_size = 30)
```
## RUCA
```{r ruca-codes, fig.cap="USDA Economic Research Service.[@zotero-3702]"}
knitr::include_graphics("./img/ruca_codes.png")
```
## Medicare Top 25 Metros & Specialties - Outliers
```{r mdcr-all_claims-outliers}
knitr::include_graphics("./plots/2021_top_25_w_outliers.jpg")
```
```{r mdcr-all_claims-outliers-tbl}
readRDS("./tables/2021_mdcr_all_clm_outliers_tbl.rds")
```
## Medicare Top 25 Metros & Specialties - No Outliers
```{r mdcr-all-claims-no-outliers}
knitr::include_graphics("./plots/2021_top_25_no_outliers.jpg")
```
```{r mdcr-all-claims-no-outlier-tbl}
readRDS("./tables/2021_mdcr_all_clm_no_outliers_tbl.rds")
```
## Medicare Orthopedic Claims
```{r mdcr-ortho-clms}
knitr::include_graphics("./plots/2021_mdcr_ortho.jpg")
```
```{r mdcr-tbl-ortho}
readRDS("./tables/2021_mdcr_orth_tbl.rds")
```
# Scagnostics
:::{.notes}
Scagnostics (a Tukey neologism for "scatterplot diagnostics") is a series of measures that characterize certain properties of a point cloud in a scatterplot.
They are a useful tool for discovering interesting or unusual scatterplots in a scatterplot matrix, without having to look at every individual plot.
:::
## Scag
```{r baseball-pair-plots}
knitr::include_graphics("./img/baseball_pairs_plot.png")
```
## Nostics
A statistical description of a bivariate plot using 9 metrics:
:::: {.columns}
::: {.column width="50%"}
1. outlying
2. skewed
3. clumpy
4. sparse
5. striated
:::
::: {.column width="50%"}
6. convex
7. skinny
8. stringy
9. monotonic
:::
::::
For more information, see Wilkinson.[@wilkinson2008scagnostics]
## Mtcars
<br>
```{r scag-mtcars, echo=TRUE}
scagnostics::scagnostics(mtcars[, c(1:3)])
```
# Interactive Dashboard
# Review
::: {.incremental}
- Do Medicare claims resemble a logarithmic or normal distribution? Why?
- In describing the tails of a distribution, would skew or kurtosis be most appropriate?
- Is the KNN algorithm a "seminal" approach that remains "state of the art" or an "avant-garde" approach?
- What diagnostics were created to describe the "point cloud" of a bivariate plot?
:::
# Thank You!
```{r shrek-gif}
knitr::include_graphics("./img/shrek.gif")
```
## Links
- [Outlier Analysis Presentation](https://robwiederstein.github.io/outliers_pres/#/title-slide)
- [Outlier Analysis Repository](https://github.com/RobWiederstein/outliers_pres)
- [Outlier Analysis Dashboard](https://rob-wiederstein.shinyapps.io/outliers_dash/)
- [Outlier Analysis Dashboard Repository](https://github.com/RobWiederstein/outliers_dash)
# Bibliography