
Revise the way we calculate % in ROPE (SGPV) for frequentist models? #849

Open · strengejacke opened this issue Feb 27, 2023 · 15 comments

Labels: Enhancement 💥 Implemented features can be improved or revised · Low priority 😴 This issue does not impact package functionality much

@strengejacke (Member)

This idea came up after a chat with Isabella on Twitter.

See following example at https://easystats.github.io/parameters/reference/equivalence_test.lm.html

library(parameters)
data(qol_cancer)
model <- lm(QoL ~ time + age + education, data = qol_cancer)
equivalence_test(model)
#> # TOST-test for Practical Equivalence
#> 
#>   ROPE: [-1.99 1.99]
#> 
#> Parameter        |         90% CI | % in ROPE |        H0 |      p
#> ------------------------------------------------------------------
#> (Intercept)      | [59.33, 68.41] |        0% |  Rejected | > .999
#> time             | [-0.76,  2.53] |    83.52% | Undecided | 0.137 
#> age              | [-0.26,  0.32] |      100% |  Accepted | < .001
#> education [mid]  | [ 5.13, 12.39] |        0% |  Rejected | 0.999 
#> education [high] | [10.14, 18.57] |        0% |  Rejected | > .999

If we look at the parameter time, we see a CI of (-0.76, 2.53) and 83.52% inside the ROPE. The latter is equivalent to the SGPV. Both the SGPV and the % in ROPE are calculated the same way (see Daniel Lakens' GitHub repo: https://github.com/Lakens/TOST_vs_SGPV/tree/master/functions, related paper https://open.lnu.se/index.php/metapsychology/article/view/933).

However, although we actually want the overlap of the interval's probability mass with the ROPE, it is calculated (for simplicity) from the overlap of the ranges.

What we are actually doing is:

# hypothetical min/max
est.hi <- 2.53
est.lo <- -0.76

x <- bayestestR::distribution_uniform(
  n = 500,
  min = est.lo,
  max = est.hi
)
# actual (simulated) min/max
min(x)
#> [1] -0.75671
max(x)
#> [1] 2.52671

bayestestR::equivalence_test(x, c(-2, 2), ci = 1)
#> # Test for Practical Equivalence
#> 
#>   ROPE: [-2.00 2.00]
#> 
#> H0        | inside ROPE |     100% HDI
#> --------------------------------------
#> Undecided |     83.80 % | [-0.76 2.53]
bayestestR::rope(x, c(-2, 2), ci = 1) |> plot()

But what we actually want, is:

# hypothetical min/max
est.hi <- 2.53
est.lo <- -0.76

x <- bayestestR::distribution_normal(
  n = 500,
  mean = est.hi - ((est.hi - est.lo) / 2),
  sd = (est.hi - est.lo) / 6.1
)
# actual (simulated) min/max
min(x)
#> [1] -0.7816991
max(x)
#> [1] 2.551699

bayestestR::equivalence_test(x, c(-2, 2), ci = 1)
#> # Test for Practical Equivalence
#> 
#>   ROPE: [-2.00 2.00]
#> 
#> H0       | inside ROPE |     100% HDI
#> -------------------------------------
#> Accepted |     98.00 % | [-0.78 2.55]
bayestestR::rope(x, c(-2, 2), ci = 1) |> plot()

So my question is: to compute the % in ROPE for frequentist models, should we switch to simulating the CI and calculate the % in ROPE based on the actual probability mass of that interval, rather than just the range? Wouldn't this be more precise? Or am I missing something here?
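For intuition, the two ways of computing the percentage can be written down in a few lines. This is a minimal sketch in Python (standard library only), using the hypothetical 90% CI and ROPE bounds from the example above; the percentages are only illustrative.

```python
from statistics import NormalDist

# hypothetical 90% CI and ROPE bounds from the example above
ci_lo, ci_hi = -0.76, 2.53
rope_lo, rope_hi = -2.0, 2.0

# current approach: treat the CI as a uniform distribution and divide
# the length of the overlap by the length of the CI
overlap = min(ci_hi, rope_hi) - max(ci_lo, rope_lo)
pct_uniform = 100 * overlap / (ci_hi - ci_lo)  # about 83.9

# proposed approach: treat the CI as (approximately) normal; for a 90% CI,
# the bounds sit at mean +/- 1.645 * sd
z = NormalDist().inv_cdf(0.95)
mean = (ci_lo + ci_hi) / 2
sd = (ci_hi - ci_lo) / (2 * z)
dist = NormalDist(mu=mean, sigma=sd)
pct_normal = 100 * (dist.cdf(rope_hi) - dist.cdf(rope_lo))  # about 86.6
```

Note that this closed-form version anchors the CI bounds at the 90% quantiles of the normal distribution. The simulation above instead divides the range by 6.1, treating the bounds as roughly the ±3 SD span of the draws, which is why it reports a larger value (98%).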

@easystats/core-team

@strengejacke strengejacke added the What's your opinion 🙉 Collectively discuss something label Feb 27, 2023
@strengejacke (Member Author)

Tagging @vincentarelbundock as well; maybe this is of interest to you, too?

@mattansb (Member)

However, although we actually want the overlap of the interval (probability mass) with the ROPE, it's calculated (for simplicity) based on the overlap of the range.

We've discussed this elsewhere I think, and concluded that giving "posterior" probability estimates from MLE models was a bad idea? Why can't we just have the p-value from the equivalence test?

@strengejacke (Member Author)

Not sure we discussed it in this context, though. We may decide to drop the % in ROPE completely, but I also see that it might be useful for the frequentist case.

@mattansb (Member)

If it's so useful, we might as well switch to a framework that can provide posterior probabilities 😉

@strengejacke (Member Author)

It's all one song... 😉

@bwiernik (Contributor)

I'm not following after a quick read but will circle back.

…perennial Brenton refrain about confidence distributions and frequentist posterior distributions 😜…

@vincentarelbundock (Contributor)

I don't have a strong view.

…perennial Brenton refrain about confidence distributions and frequentist posterior distributions 😜…

I read that book that you recommend and liked it a lot. Thanks for that!

Although this kind of interpretation feels very attractive, my impressionistic sense is that it seems a bit "exotic" in the stats community. For software defaults and labels, I'd lean toward more traditional/conservative language, or at least use labels which make it clear that the suggested interpretation departs somewhat from canon.

@strengejacke (Member Author)

I think labeling is more a side topic here. ;-)
Currently, we report the % in ROPE, which is equivalent to the SGPV (which, in turn, is proposed by strong proponents of the Neyman-Pearson philosophy). However, the % in ROPE usually refers to the probability mass of the interval distribution that is inside the ROPE. Currently, we just divide the ranges:

ci <- c(-0.76,  2.53)
# rope <- c(-2, 2)
overlap <- c(-0.76, 2)
100 * diff(overlap) / diff(ci)
#> [1] 83.89058

That is, we consider the CI as a uniform distribution with equal density for all values inside this interval. Usually, you would consider the CI as a roughly normal distribution, so the % in ROPE of that interval cannot be calculated as if it were uniformly distributed. Hence my suggestion: simulate a normal distribution (based on the lower/upper limits) and then calculate the proportion of that probability mass that is inside the ROPE range. This would no longer be in line with the proposed calculation of the SGPV, but it would give a more realistic picture of the "% in ROPE".
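As a cross-check, the simulated 98% from earlier in the thread also falls out in closed form. This is a sketch in Python (standard library only); the division by 6.1 is copied from the distribution_normal() call above, which treats the CI bounds as roughly the ±3 SD span of 500 draws:

```python
from statistics import NormalDist

# hypothetical CI bounds from the earlier example
ci_lo, ci_hi = -0.76, 2.53

# same parameterization as the distribution_normal() call above
mean = ci_hi - (ci_hi - ci_lo) / 2
sd = (ci_hi - ci_lo) / 6.1

# probability mass of that normal distribution inside the ROPE (-2, 2)
dist = NormalDist(mu=mean, sigma=sd)
pct = 100 * (dist.cdf(2) - dist.cdf(-2))  # about 98, matching the simulation
```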

We could, by default, just report the p-value of the equivalence test and add columns like SGPV and % in ROPE when requested by the user, if these measures are too avant-garde for some of you to be shown by default :-P

@vincentarelbundock (Contributor)

if these measures are too avantgarde for some of you to be shown by default :-P

Haha! again, don't listen to me much here. I don't really use this function and don't have a strong view.

@strengejacke (Member Author)

strengejacke commented Feb 27, 2023

I think this can work... I have implemented a % in ROPE based on a simulated normal distribution of the CI. The "new" results for % inside ROPE are very close to what we would get from a bootstrapped model or a model where we simulate estimates. We now have the SGPV, calculated in the "classical" way as proposed by Lakens, and the % inside ROPE, which assumes a normally distributed confidence interval and reports the proportion of that distribution's probability mass inside the ROPE. See how close the results from the simulated model and the normal approximation are.

library(parameters)
data(qol_cancer)
model <- lm(QoL ~ time + age + education + phq4 + hospital, data = qol_cancer)
equivalence_test(model, metric = "all")
#> # TOST-test for Practical Equivalence
#> 
#>   ROPE: [-1.99 1.99]
#> 
#> Parameter        |         90% CI |   SGPV | % in ROPE | Equivalence |      p
#> -----------------------------------------------------------------------------
#> (Intercept)      | [56.21, 69.05] | < .001 |        0% |    Rejected | > .999
#> time             | [-0.18,  2.52] | 0.802  |    85.68% |   Undecided | 0.160 
#> age              | [-0.40,  0.07] | > .999 |      100% |    Accepted | < .001
#> education [mid]  | [ 3.12,  9.12] | < .001 |        0% |    Rejected | 0.988 
#> education [high] | [ 3.92, 11.06] | < .001 |        0% |    Rejected | 0.994 
#> phq4             | [-5.75, -4.70] | < .001 |        0% |    Rejected | > .999
#> hospital         | [-1.52,  8.95] | 0.335  |    27.05% |   Undecided | 0.743

x <- simulate_model(model)
equivalence_test(x, ci = 0.9)
#> # Test for Practical Equivalence
#> 
#>   ROPE: [-1.99 1.99]
#> 
#> Parameter     |        H0 | inside ROPE |       90% HDI
#> -------------------------------------------------------
#> (Intercept)   |  Rejected |      0.00 % | [55.86 69.00]
#> time          | Undecided |     88.22 % | [-0.19  2.65]
#> age           |  Accepted |    100.00 % | [-0.38  0.09]
#> educationmid  |  Rejected |      0.00 % | [ 3.00  9.08]
#> educationhigh |  Rejected |      0.00 % | [ 3.96 11.05]
#> phq4          |  Rejected |      0.00 % | [-5.72 -4.74]
#> hospital      | Undecided |     25.78 % | [-1.37  9.19]

model <- glm(vs ~ wt + mpg + cyl, data = mtcars, family = "binomial")
equivalence_test(model, metric = "all")
#> # TOST-test for Practical Equivalence
#> 
#>   ROPE: [-0.18 0.18]
#> 
#> Parameter   |          90% CI |   SGPV | % in ROPE | Equivalence |     p
#> ------------------------------------------------------------------------
#> (Intercept) | [-14.74, 30.58] | 0.008  |     0.95% |   Undecided | 0.991
#> wt          | [ -0.59,  5.11] | 0.064  |     3.79% |   Undecided | 0.964
#> mpg         | [ -0.48,  0.61] | 0.332  |    42.74% |   Undecided | 0.593
#> cyl         | [ -5.25, -0.34] | < .001 |     1.68% |    Rejected | 0.983

x <- simulate_model(model)
equivalence_test(x, ci = 0.9)
#> # Test for Practical Equivalence
#> 
#>   ROPE: [-0.18 0.18]
#> 
#> Parameter   |        H0 | inside ROPE |        90% HDI
#> ------------------------------------------------------
#> (Intercept) | Undecided |      0.89 % | [-15.52 30.46]
#> wt          | Undecided |      4.22 % | [ -0.33  5.15]
#> mpg         | Undecided |     44.22 % | [ -0.48  0.63]
#> cyl         |  Rejected |      0.00 % | [ -5.46 -0.48]

# for predictions
model <- lm(QoL ~ time + age + education + phq4 + hospital, data = qol_cancer)
ggeffects::ggpredict(model, "education") |>
  equivalence_test(metric = "all")
#> # TOST-test for Practical Equivalence
#> 
#>   ROPE: [-1.99 1.99]
#> 
#> education | Contrast |          95% CI |   SGPV | % in ROPE | Equivalence |     p
#> ---------------------------------------------------------------------------------
#> low-mid   |    -6.12 | [ -9.69, -2.56] | < .001 |     0.42% |    Rejected | 0.988
#> low-high  |    -7.49 | [-11.74, -3.24] | < .001 |        0% |    Rejected | 0.994
#> mid-high  |    -1.37 | [ -4.59,  1.86] | 0.596  |    61.05% |   Undecided | 0.353

@strengejacke strengejacke added the Low priority 😴 This issue does not impact package functionality much label Mar 15, 2023
@strengejacke (Member Author)

I think the current implementation of SGPV is ok.

@bwiernik (Contributor)

I'm going to re-open and hope to get to it this year, sorry to mess up the tidy issue log Daniel :-)

@bwiernik bwiernik reopened this Oct 16, 2024
@bwiernik bwiernik self-assigned this Oct 16, 2024
@bwiernik bwiernik added Enhancement 💥 Implemented features can be improved or revised and removed What's your opinion 🙉 Collectively discuss something labels Oct 16, 2024
@strengejacke (Member Author)

I'm not sure there's much left to do? For the frequentist methods, we calculate the underlying normal/t-distribution based on the CI range, and then compute the ROPE coverage or SGPV. If you use this approach to calculate the pd from that distribution (p_direction() for a frequentist model), or convert the related p-value into a pd (p_to_pd()), you get almost identical results.
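The claimed agreement between the distribution-based pd and the p-value conversion can be sketched under a plain normal approximation (Python, standard library only; the estimate and standard error are hypothetical, and pd = 1 - p/2 is the usual two-sided conversion):

```python
from statistics import NormalDist

est, se = 1.645, 1.0  # hypothetical estimate and standard error
z = est / se

# two-sided p-value under the normal approximation
p = 2 * (1 - NormalDist().cdf(abs(z)))

# pd converted from the p-value: pd = 1 - p/2
pd_from_p = 1 - p / 2

# pd computed directly from the implied distribution: the probability
# mass on the dominant side of zero
lower_tail = NormalDist(mu=est, sigma=se).cdf(0)
pd_from_dist = max(lower_tail, 1 - lower_tail)
# both are about 0.95 and agree up to floating-point error
```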

@strengejacke (Member Author)

See

.generate_posterior_from_ci <- function(ci = 0.95, ci_range, dof = Inf, precision = 10000) {
  # this function creates an approximate normal distribution that covers the
  # CI-range, i.e. we "simulate" a posterior distribution from a frequentist CI


  # sanity check - dof argument
  if (is.null(dof)) {
    dof <- Inf
  }
  # first we need the range of the CI (in units), also to calculate the mean of
  # the normal distribution
  diff_ci <- abs(diff(ci_range))
  mean_dist <- ci_range[2] - (diff_ci / 2)
  # then we need the critical values of the quantiles from the CI range
  z_value <- stats::qt((1 + ci) / 2, df = dof)
  # the range of Z-scores (from lower to upper quantile) gives us the range of
  # the provided interval in terms of standard deviations. now we divide the
  # known range of the provided CI (in units) by the z-score-range, which will
  # give us the standard deviation of the distribution.
  sd_dist <- diff_ci / diff(c(-1 * z_value, z_value))
  # generate normal-distribution if we don't have t-distribution, or if
  # we don't have necessary packages installed
  if (is.infinite(dof) || !insight::check_if_installed("distributional", quietly = TRUE)) {
    # tell user to install "distributional"
    if (!is.infinite(dof)) {
      insight::format_alert("For models with only few degrees of freedom, install the {distributional} package to increase accuracy of `p_direction()`, `p_significance()` and `equivalence_test()`.") # nolint
    }
    # we now know all parameters (mean and sd) to simulate a normal distribution
    bayestestR::distribution_normal(n = precision, mean = mean_dist, sd = sd_dist)
  } else {
    insight::check_if_installed("distributional")
    out <- distributional::dist_student_t(df = dof, mu = mean_dist, sigma = sd_dist)
    sort(unlist(distributional::generate(out, times = precision), use.names = FALSE))
  }
}


@strengejacke (Member Author)

and

# this function simulates a normal distribution, which approximately has the
# same range / limits as the confidence interval, thus indeed representing a
# normally distributed confidence interval. We then calculate the probability
# mass of this interval that is inside the ROPE.
.rope_coverage <- function(ci = 0.95, range_rope, ci_range, dof = Inf) {
  out <- .generate_posterior_from_ci(ci, ci_range, dof = dof)
  # compare: ci_range and range(out)
  # The SGPV refers to the proportion of the confidence interval inside the
  # full ROPE - thus, we set ci = 1 here
  rc <- bayestestR::rope(out, range = range_rope, ci = 1)
  rc$ROPE_Percentage
}
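For the normal case (dof = Inf), the simulation in these helpers can also be done in closed form, using the CDF instead of drawing samples. A minimal Python sketch of that idea (the function name is hypothetical, not part of any package):

```python
from statistics import NormalDist

def rope_coverage_from_ci(ci_level, ci_lo, ci_hi, rope_lo, rope_hi):
    # reconstruct the normal distribution implied by the CI bounds: for a
    # given ci_level, the bounds sit at the (1 +/- ci_level)/2 quantiles
    z = NormalDist().inv_cdf((1 + ci_level) / 2)
    mean = ci_hi - (ci_hi - ci_lo) / 2
    sd = (ci_hi - ci_lo) / (2 * z)
    dist = NormalDist(mu=mean, sigma=sd)
    # proportion of the probability mass inside the ROPE
    return dist.cdf(rope_hi) - dist.cdf(rope_lo)

coverage = rope_coverage_from_ci(0.90, -0.76, 2.53, -2, 2)  # about 0.866
```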
