Pool SE across RM factors: toggle is mis-calibrated #307
Comments

JohnnyDoorn commented:
Hi Richard, thanks for informing us about this. Something went wrong with how the pooled data are handled, and I have just fixed it. Kind regards
The issue author replied:

OK, thank you. However, I thought the feature was something *completely* different. My single factor had very different variances across the factor's two levels, thus violating the equal-variance assumption. Nevertheless, I wanted the error bars for the two levels to be different (reflecting the unequal variances). Checking the box produced unequal error bars (which I presumed to be the result of not pooling the standard errors), and unchecking the box produced equal error bars (which I took to reflect pooled variances). So was the option for unequal error bars (due to non-pooling of error variances) here by mistake, and going forward, will there be no option to have each error bar reflect the specific standard error of the particular mean it is attached to?
On Tue, Feb 19, 2019 at 5:49 PM JohnnyDoorn ***@***.***> wrote:

Hi Richard,

Thanks for informing us about this. Something went wrong with how the pooled data are handled, and I just fixed it.

However, the main issue here is that the text of the tick box is a little confusing: in the case of your data, there is no pooling across RM factors, because your data have only one factor. What is meant is that when plotting a factor, the mean is taken across the unused RM factors that are still in the model. So if I have two RM factors with two levels each in my model, A (1 & 2) and B (1 & 2), and I make a plot of the means of A, I first take the average of B across its levels (so when I plot the mean of A1, it is actually the average of A1B1 and A1B2). This procedure is discussed by Loftus and Masson in https://link.springer.com/article/10.3758/BF03210951

I will update the text accordingly, to hopefully make it clearer. Maybe something like "Average across unused RM factors."

Kind regards,
Johnny
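Johnny's two-factor example can be sketched numerically. The following is a hypothetical illustration with made-up numbers (not JASP's actual implementation) of collapsing over the unused factor B before plotting factor A:

```python
# Made-up scores for a 2x2 repeated-measures design with factors
# A (levels 1, 2) and B (levels 1, 2); one row per subject.
# Column order: A1B1, A1B2, A2B1, A2B2.
data = [
    [4.0, 6.0, 8.0, 10.0],  # subject 1
    [5.0, 7.0, 9.0, 11.0],  # subject 2
    [3.0, 5.0, 7.0, 9.0],   # subject 3
]

def mean(xs):
    return sum(xs) / len(xs)

# To plot factor A, first collapse over the unused factor B:
# each subject's "A1" score is the average of A1B1 and A1B2.
a1_scores = [mean(row[0:2]) for row in data]  # [5.0, 6.0, 4.0]
a2_scores = [mean(row[2:4]) for row in data]  # [9.0, 10.0, 8.0]

plotted_mean_a1 = mean(a1_scores)  # 5.0
plotted_mean_a2 = mean(a2_scores)  # 9.0
```

The plotted mean of A1 is thus the average of A1B1 and A1B2, exactly as described in the comment above.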
JohnnyDoorn commented:

Hi Richard,

In order to eliminate the participant effect, one can normalize the data. It seems that in your data example, the participant effects have a very low variance in one condition and a higher variance in the other. The normalization corrects for this, in order to show the variance of the condition effect only. If you want insight into the variance per cell of your design, you can look at the descriptives table under "Additional Options" in the RM ANOVA menu.

If you want a little more insight into this, you can look at it from a t-test perspective. The ANOVA you are doing is essentially a paired-samples t-test, since you only have one factor with two levels. The paired-samples t-test eliminates the participant effect by default, so if you were to conduct a paired-samples t-test and get a descriptives plot with confidence intervals around the group means, you would get exactly the same result as with the approach discussed above for the RM ANOVA.

We are currently working on communicating such information more clearly to JASP users by extending the help files, in order to make the software more transparent, especially for procedures such as this one that are not always straightforward.

Kind regards,
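The thread does not show the exact formula JASP applies, but a minimal sketch of the standard Loftus-and-Masson-style normalization (subtract each participant's own mean, add back the grand mean; numbers are made up) looks like this:

```python
# Made-up scores: one row per participant, one column per condition.
# Participants differ a lot in overall level (a large participant effect).
data = [
    [2.0, 4.0],    # participant 1: condition 1, condition 2
    [10.0, 12.0],  # participant 2
    [6.0, 8.0],    # participant 3
]

def mean(xs):
    return sum(xs) / len(xs)

grand_mean = mean([x for row in data for x in row])  # 7.0

# Remove the participant effect: subtract each participant's own mean
# from their scores, then add the grand mean back.
normalized = [
    [x - mean(row) + grand_mean for x in row]
    for row in data
]
# Every participant's row becomes [6.0, 8.0]: the participant effect is
# gone, while the 2-point condition effect is preserved.
```

After this normalization, the between-participant variance within each condition reflects only the condition effect, which is why confidence intervals based on it match those from a paired-samples t-test in the one-factor, two-level case.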
I consider this issue clarified. Please reopen if it's unclear. @MyrtheV, as you're working on the ANCOVA documentation, you might be interested in Johnny's post.
Original report, for JASP 9.2.0 for Windows (Windows 7):

In Repeated-Measures ANOVA / Descriptive Plots, checking the box labeled "Pool SE across RM factors" causes the SE to be un-pooled in the plot, and un-checking that box causes the SE to be pooled. The behavior should be the opposite.
Pooled un-pooled check box.zip