
Inverted Z-stats in testBinomial? #79

Open
mrkaye97 opened this issue Feb 16, 2023 · 1 comment
mrkaye97 commented Feb 16, 2023

Hi all,

Thanks so much for putting together this package! A question (not sure if it's a bug or not):

In the documentation of the x and n arguments in testBinomial, you say:

x1 — Number of “successes” in the control group
x2 — Number of “successes” in the experimental group
n1 — Number of observations in the control group
n2 — Number of observations in the experimental group

My understanding of testBinomial is that for a one-sided test, it should be testing if the experimental group is greater than the control group. In your examples, you give:

> testBinomial(x1 = 39, x2 = 13, n1 = 500, n2 = 500, adj = 1)
[1] 3.701266

which I'm reading as "the success rate for the control group is three times the success rate for the treatment group," since x1 corresponds to the control. That leads me to believe the Z score being returned might be inverted. What are the hypotheses being tested? I was expecting a test of treatment > control, in which case a 3x success rate for the control over the treatment should be very strong evidence in favor of the null (treatment <= control).
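For reference, here's a rough back-of-the-envelope check of the sign convention (a plain pooled-variance two-proportion Z statistic sketched in Python, not gsDesign's exact Miettinen–Nurminen computation, so the value only approximately matches the 3.701266 above). Taking the difference as control minus treatment reproduces a positive Z near 3.70:

```python
from math import sqrt

def pooled_z(x1, n1, x2, n2):
    """Simple two-proportion Z statistic with pooled variance,
    computed as (p1 - p2) / SE, i.e. control minus treatment."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled success rate
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

# control: 39/500, treatment: 13/500 -- same inputs as the example above
print(round(pooled_z(39, 500, 13, 500), 4))  # ≈ 3.7031
```

The sign only comes out positive because the numerator is control minus treatment; flipping the subtraction flips the Z statistic, which is the crux of the question.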

Is that not the correct way to set up the hypotheses? I see a line in the code where the opposite seems to be happening: the Z statistic appears to be computed as control - treatment rather than vice versa. That makes sense to me in the non-inferiority case (where the null is control - treatment > delta), but not in the superiority case, where (as far as I can tell) we're testing treatment - control > 0.

For instance, if I run two non-inferiority tests with different deltas, I get intuitive results: the test with the far more negative delta gives a much higher Z stat, suggesting stronger evidence that the treatment is not inferior to the control given the provided delta, which is in line with my intuition.

> testBinomial(50, 50, 100, 100, delta0 = -0.05)
[1] 0.7079923
> testBinomial(50, 50, 100, 100, delta0 = -0.25)
[1] 3.651484
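That ordering matches the standard non-inferiority statistic Z = (p1 - p2 - delta0) / SE: a more negative delta0 inflates the numerator. A minimal Python sketch using an unpooled (Wald) standard error — an approximation, not the Miettinen–Nurminen variance gsDesign actually uses, so the numbers differ somewhat from the R output above — shows the same pattern:

```python
from math import sqrt

def z_noninferiority(x1, n1, x2, n2, delta0):
    """Wald-style Z for a null of p1 - p2 <= delta0 (control minus treatment)."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # unpooled SE
    return (p1 - p2 - delta0) / se

print(z_noninferiority(50, 100, 50, 100, -0.05))  # ≈ 0.71 (gsDesign: 0.7079923)
print(z_noninferiority(50, 100, 50, 100, -0.25))  # ≈ 3.54 (gsDesign: 3.651484)
```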

Let me know if I'm misunderstanding something, and thanks so much for any pointers!

@mrkaye97 mrkaye97 changed the title Inverted t-stats in testBinomial? Inverted Z-stats in testBinomial? Feb 16, 2023
mrkaye97 (Author) commented:

FWIW, flipping lower.tail in the pnorm call to lower.tail = TRUE gives me back the results I was expecting. Is this the correct way of thinking?

library(gsDesign)

## Non-inferiority, delta of 0.10 should give a higher p-value than with
##  a more extreme delta value

## Relatively small delta --> higher p-val
pnorm(testBinomial(50, 45, 100, 100, delta0 = 0.100), lower.tail = TRUE)
#> [1] 0.2383717

## Relatively large delta --> lower p-val
pnorm(testBinomial(50, 45, 100, 100, delta0 = 0.25), lower.tail = TRUE)
#> [1] 0.001725765



## Superiority should give a higher p-value when the proportion
##  for the variant is relatively closer to that of the control

## Relatively low difference in proportions --> higher p-val
pnorm(testBinomial(50, 55, 100, 100, delta0 = 0), lower.tail = TRUE)
#> [1] 0.239475

## Relatively high difference in proportions --> lower p-val
pnorm(testBinomial(50, 75, 100, 100, delta0 = 0), lower.tail = TRUE)
#> [1] 0.0001303648

Created on 2023-02-16 with reprex v2.0.2
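The tail choice is exactly what flips the interpretation: for any Z, the lower-tail and upper-tail p-values are complements, Φ(z) and 1 − Φ(z), so reading the wrong tail turns overwhelming evidence into near-certainty of the null. A small stdlib-only Python illustration (normal CDF built from the error function), using a Z of roughly the 3.70 from the first example in this thread:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

z = 3.70  # roughly the Z statistic from the first example above
print(phi(z))      # lower-tail p-value, ≈ 0.9999
print(1 - phi(z))  # upper-tail p-value, ≈ 0.0001
```

So whether pnorm is called with lower.tail = TRUE or FALSE must agree with the direction in which the Z numerator was formed (control minus treatment vs. treatment minus control); mismatching the two gives the inverted-looking results described above.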
