# Tests and inferences {#testinference}
```{r message=FALSE}
library(knitr)
library(kableExtra)
library(tidyverse)
```
One of the first things to be familiar with when doing machine learning work is the basics of statistical inference.
In this chapter, we go over some of these important concepts and the "R way" of doing them.
## Assumption of normality {#normality}
Copied from [here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3693611/)
Many statistical procedures, including correlation, regression, t tests, and analysis of variance, namely the parametric tests, are based on the assumption that the data follow a normal or Gaussian distribution (after Johann Carl Friedrich Gauss, 1777–1855); that is, it is assumed that the populations from which the samples are taken are normally distributed. The assumption of normality is especially critical when constructing reference intervals for variables. Normality and other assumptions should be taken seriously, for when these assumptions do not hold, it is impossible to draw accurate and reliable conclusions about reality.
\index{Test of normality}
With large enough sample sizes (> 30 or 40), the violation of the normality assumption should not cause major problems; this implies that we can use parametric procedures even when the data are not normally distributed (8). If we have samples consisting of hundreds of observations, we can ignore the distribution of the data (3). According to the central limit theorem (a small simulation sketch follows the list below),
* if the sample data are approximately normal, then the sampling distribution too will be normal;
* in large samples (> 30 or 40), the sampling distribution tends to be normal, regardless of the shape of the data;
* means of random samples from any distribution will themselves have a normal distribution.
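As a rough illustration of the last point, here is a minimal simulation sketch (the exponential distribution and the sample size of 40 are arbitrary choices for illustration): the means of repeated samples are close to normally distributed even though the underlying data are strongly skewed.
```{r introCltSketch}
# Sketch: sampling distribution of the mean for a skewed (exponential) population
set.seed(42)
sample_means <- replicate(1000, mean(rexp(40, rate = 1)))
hist(sample_means, breaks = 30,
     main = "Sampling distribution of the mean (exponential data)")
qqnorm(sample_means); qqline(sample_means)  # points roughly on the line
```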
Although true normality is considered to be a myth, we can look for normality visually by using normal plots or by significance tests, that is, comparing the sample distribution to a normal one. It is important to ascertain whether data show a serious deviation from normality.
### Visual check of normality
Visual inspection of the distribution may be used for assessing normality, although this approach is usually unreliable and does not guarantee that the distribution is normal. However, when data are presented visually, readers of an article can judge the distribution assumption by themselves. The frequency distribution (histogram), stem-and-leaf plot, boxplot, P-P plot (probability-probability plot), and Q-Q plot (quantile-quantile plot) \index{Q-Q plot} are used for checking normality visually. The frequency distribution, which plots the observed values against their frequency, provides both a visual judgment about whether the distribution is bell shaped and insights about gaps in the data and outliers (outlying values). A Q-Q plot is very similar to the P-P plot except that it plots the quantiles (values that split a data set into equal portions) of the data set instead of every individual score in the data. Moreover, Q-Q plots are easier to interpret in the case of large sample sizes. The boxplot shows the median as a horizontal line inside the box and the interquartile range (the range between the 25th and 75th percentiles) as the length of the box. The whiskers (lines extending from the top and bottom of the box) represent the minimum and maximum values when they are within 1.5 times the interquartile range from either end of the box. Scores greater than 1.5 times the interquartile range fall outside the boxplot and are considered outliers, and those greater than 3 times the interquartile range are extreme outliers. A boxplot that is symmetric with the median line at approximately the center of the box and with symmetric whiskers that are slightly longer than the subsections of the center box suggests that the data may have come from a normal distribution.
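As a minimal sketch of these visual checks (using `mpg` from the `mtcars` dataset, which is also used later in this chapter), the histogram, Q-Q plot, and boxplot can be produced with base R:
```{r introVisualNormality}
# Histogram, Q-Q plot, and boxplot of mtcars$mpg as quick visual checks of normality
op <- par(mfrow = c(1, 3))
hist(mtcars$mpg, main = "Histogram", xlab = "mpg")
qqnorm(mtcars$mpg); qqline(mtcars$mpg)  # points close to the line suggest normality
boxplot(mtcars$mpg, main = "Boxplot")
par(op)
```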
### Normality tests
The various normality tests compare the scores in the sample to a normally distributed set of scores with the same mean and standard deviation; the null hypothesis is that “sample distribution is normal.” If the test is significant, the distribution is non-normal. For small sample sizes, normality tests have little power to reject the null hypothesis and therefore small samples most often pass normality tests. For large sample sizes, significant results would be derived even in the case of a small deviation from normality, although this small deviation will not affect the results of a parametric test. It has been reported that the K-S test has low power and it should not be seriously considered for testing normality (11). Moreover, it is not recommended when parameters are estimated from the data, regardless of sample size (12).
The Shapiro-Wilk test \index{Shapiro-Wilk test}, which is based on the correlation between the data and the corresponding normal scores, provides better power than the K-S test, even after the Lilliefors correction. Power is the most frequent measure of the value of a test for normality. Some researchers recommend the Shapiro-Wilk test as the best choice for testing the normality of data.
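As a minimal sketch, both tests can be run on a simulated sample; note that `ks.test()` here is given parameters estimated from the data, which is exactly the situation in which the K-S test is not recommended.
```{r introNormalityTests}
# Sketch: Shapiro-Wilk and Kolmogorov-Smirnov tests on a simulated normal sample
set.seed(123)
x <- rnorm(50, mean = 10, sd = 2)
shapiro.test(x)                       # H0: the sample comes from a normal distribution
ks.test(x, "pnorm", mean(x), sd(x))   # K-S against a normal with estimated parameters
```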
## T-tests {#ttest}
\index{T test}
The __independent t test__ is used to test if there is any statistically *significant difference between two means*. Use of an independent t test requires several assumptions to be satisfied.
1. The variables are continuous and independent
1. The variables are normally distributed
1. The variances in each group are equal
When these assumptions are satisfied, the results of the t test are valid. Otherwise, they are invalid and a non-parametric test should be used instead. When the data are not normally distributed, you can often apply a transformation to make them approximately normal, as sketched below.
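As a rough sketch of that last point (the simulated variable and the choice of transformation are purely illustrative), a right-skewed variable can often be brought closer to normality with a log transformation:
```{r introTransformSketch}
# Sketch: a log transformation pulls in the long right tail of a skewed variable
set.seed(1)
skewed <- rlnorm(100, meanlog = 0, sdlog = 1)  # log-normal, strongly right-skewed
shapiro.test(skewed)       # typically rejects normality
shapiro.test(log(skewed))  # the log-transformed values are normal by construction
```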
Using the `mtcars` dataset, we check whether there is any difference in miles per gallon (mpg) between the automatic and the manual transmission groups.
First things first, let's check the data.
\index{mtcars dataset}
```{r intro01}
glimpse(mtcars)
```
For this t-test, we focus on mpg and the type of gearbox. Once done, let's check what the data look like.
```{r intro01c}
df <- mtcars
df$am <- factor(df$am, labels = c("automatic", "manual"))
df2 <- df %>% select(mpg, am)
```
```{r echo=FALSE, message=FALSE, warning=FALSE}
kable(head(df2), format = "html") %>%
kable_styling(bootstrap_options = c("striped", "hover"),
full_width = F)
```
Next, we can generate descriptive statistics for each `am` group.
```{r intr02, echo=FALSE}
yo <- df2 %>% group_by(am) %>%
summarise(mean = mean(mpg), minimum = min(mpg),
maximum = max(mpg), n = n())
kable(yo, digits = 2, format = 'html') %>%
kable_styling(bootstrap_options = c("striped", "hover"),
full_width = F)
```
There is a difference between the mean mpg of the automatic and the manual cars. Now, is that difference significant?
Visualizing that difference with a boxplot for each group can help us understand it better.
\index{ggplot geom-boxplot}
```{r intro03boxplot}
ggplot(df2, aes(x = am, y = mpg)) +
geom_boxplot(fill = c("dodgerblue3", "goldenrod2")) +
labs(x = "Type of car", title = "Achieved milage for Automatic / Manual cars")
```
Before we go on to our t-test, we must test the normality of the data.
To do so, we can use the __Shapiro-Wilk normality test__.
\index{Shapiro-Wilk test}
```{r intro04summary}
test_shapiro_df2 <- df2 %>% group_by(am) %>%
  summarise(shapiro_test = shapiro.test(mpg)$p.value)
```
```{r intro04summaryb, echo=FALSE}
kable(test_shapiro_df2, digits = 2, format = 'html') %>%
kable_styling(bootstrap_options = c("striped", "hover"), full_width = FALSE)
```
There is no evidence of departure from normality.
To test for equal variance in each group, we use Levene's test for homogeneity of variance (with `center = mean`), provided by `leveneTest()` from the `car` package.
\index{Levene test}
```{r intro05leveneTest}
test_levene_df2 <- car::leveneTest(mpg ~ am, center = mean, data = df2)
```
```{r intro05leveneTestB, echo=FALSE}
kable(test_levene_df2, format = 'html', digits = 3) %>%
kable_styling(bootstrap_options = c("striped", "hover"),
full_width = FALSE)
```
Because the variance in the two groups is not equal, we have to transform the data.
We apply a log transformation to stabilize the variance.
```{r intro06logtransformed}
log_transformed_mpg <- log(df2$mpg)
```
Now we can finally apply the t test to our data.
\index{t-test}
```{r intro07ttest}
t.test(log_transformed_mpg ~ df2$am, var.equal = TRUE)
yo <- t.test(log_transformed_mpg ~ df2$am, var.equal = TRUE)
kable(broom::glance(yo), format = 'html') %>%
kable_styling()
```
Interpretation of the results:
* Manual cars have, on average, a higher mileage per gallon (24 mpg) than automatic cars (17 mpg).
* The boxplot did not reveal the presence of outliers.
* The Shapiro-Wilk normality test did not show any deviation from normality in the data.
* The Levene test showed a difference in variance between the two groups; we addressed that by log-transforming the data.
* The t test shows a significant difference in the mean miles per gallon between automatic and manual cars.
## ANOVA - Analysis of variance
ANOVA is an extension of the t-test. While the t-test checks for a difference between two means, ANOVA checks for differences among more than two means.
As with the t-test, the same three assumptions need to be verified.
1. The variables are continuous and independent
2. The variables are normally distributed
3. The variances in each group are equal
We'll do an ANOVA on another Kaggle dataset called `cereals`. In this dataset, we'll check whether the shelf placement (at the supermarket) of a cereal box is related to the amount of sugar in the box.
\index{group-by}
\index{summarize}
```{r intro8, message=FALSE}
df <- read_csv("dataset/cereal.csv")
df2 <- df %>% select(shelf, sugars) %>%
group_by(shelf) %>%
summarize(mean = mean(sugars),
sd = sd(sugars),
n = n()) %>%
ungroup()
```
```{r echo=FALSE}
kable(df2, caption = "Statistics on sugars based on shelving", format = 'html', digits = 2) %>%
kable_styling(bootstrap_options = c("striped", "hover"), full_width = FALSE)
```
Clearly there is a difference. Let's visualize that.
\index{ggplot geom-jitter}
\index{ggplot geom-boxplot}
\index{ggplot theme}
```{r intro9}
df$type <- factor(df$type, labels = c("cold", "hot"))
df$mfr <- factor(df$mfr, labels = c("American Home \n Food Products", "General Mills",
"Kelloggs", "Nabisco", "Post", "Quaker Oats",
"Ralston Purina"))
df$shelf <- factor(df$shelf)
ggplot(df, aes(x = shelf, y = sugars)) +
geom_boxplot() +
geom_jitter(aes(color = mfr)) +
labs(y = "Amount of sugars", x = "Shelf level",
title = "Amount of sugars based on the shelf level") +
theme(legend.position = "bottom")
```
We can see that shelf 2 tends to hold cereal boxes that contain more sugar. Can we show this statistically?
We are comparing three different means to see if there is a difference between them.
* Null hypothesis: mean of sugar on shelf 1 = mean of sugar on shelf 2 = mean of sugar on shelf 3
* Alternative hypothesis: at least one of these means differs from the others
\index{ANOVA}
```{r intro11}
model_aov_df <- aov(sugars ~ shelf, data = df)
summary(model_aov_df)
```
We get a high F-value on our test, with a p-value of 0.001. Hence we can reject the null hypothesis and say that there is a difference between the mean amounts of sugar on the different shelves.
\index{confidence intervals}
```{r intro12}
confint(model_aov_df)
```
All we could conclude is that there is a significant difference between one (or more) of the pairs. As the ANOVA is significant, further ‘post hoc’ tests have to be carried out to confirm where those differences are. The post hoc tests are mostly t-tests with an adjustment to account for the multiple testing. Tukey’s is the most commonly used post hoc test but check if your discipline uses something else.
\index{Tukey HSD}
```{r intro13}
TukeyHSD(model_aov_df)
```
The residuals-versus-fits plot can be used to check the homogeneity of variances.
In the plot below, there is no evident relationship between the residuals and the fitted values (the mean of each group), which is good. So, we can assume homogeneity of variances.
```{r intro14}
library(ggfortify)
autoplot(model_aov_df, label.size = 3)[[1]]
```
Statistically, we would use Levene's test to check the homogeneity of variance.
\index{Levene Test}
```{r intro15}
car::leveneTest(sugars ~ shelf, data = df)
```
From the output above we can see that the p-value is not less than the significance level of 0.05. This means that there is no evidence to suggest that the variance across groups is statistically significantly different. Therefore, we can assume the homogeneity of variances in the different treatment groups.
To check the normality assumption, we can use a Q-Q plot of the residuals.
In the plot below, the quantiles of the residuals are plotted against the quantiles of the normal distribution, together with a 45-degree reference line.
This normal probability plot of residuals is used to check the assumption that the residuals are normally distributed; the points should approximately follow a straight line.
```{r intro16}
autoplot(model_aov_df, label.size = 3)[[2]]
```
As all the points fall approximately along this reference line, we can assume normality.
The conclusion above can be supported by a Shapiro-Wilk test on the ANOVA residuals.
```{r intro17}
shapiro.test(residuals(model_aov_df))
```
Again, the p-value indicates no departure from normality.
## Covariance
The correlation coefficient between two variables can be calculated as
$r = \frac{Cov(x, y)}{\sigma_x \cdot \sigma_y}$
where the covariance is defined as $Cov(x, y) = \frac{\sum(x - \overline x) \cdot (y - \overline y)}{n-1}$
and the standard deviation as $\sigma_x = \sqrt{\frac{\sum(x - \overline x)^2}{n-1}}$
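As a minimal sketch, we can compute the covariance and the correlation coefficient by hand and check the result against the built-in `cor()` function (using `mpg` and `wt` from `mtcars` purely as an example):
```{r introCovarianceSketch}
# Sketch: correlation as covariance divided by the product of the standard deviations
x <- mtcars$mpg
y <- mtcars$wt
n <- length(x)
cov_xy <- sum((x - mean(x)) * (y - mean(y))) / (n - 1)
r_manual <- cov_xy / (sd(x) * sd(y))
c(manual = r_manual, built_in = cor(x, y))  # the two values should match
```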