Generalized Omega - wrong SS? #8

Open
mattansb opened this issue Oct 18, 2020 · 10 comments

Comments

@mattansb

On your website you specify Omega2G (generalized omega squared) as:

[image: the Omega2G formula as shown on the website]

However, in the original paper by Olejnik and Algina, it seems like the denominator should be [SSA + SSOther], whereas you have [SSTotal + SSOther]:

[image: the corresponding formula table from Olejnik and Algina]
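
A rough sketch of the comparison (the numerator is shown only as a generic effect term, since I am reproducing only the denominators from the two screenshots):

$$\omega^2_G = \frac{\text{effect term}}{SS_A + SS_{Other}} \quad \text{(paper)} \qquad \text{vs.} \qquad \omega^2_G = \frac{\text{effect term}}{SS_{Total} + SS_{Other}} \quad \text{(website)}$$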

If this isn't a mistake, can you please help me understand what I'm missing here?
Thanks!

@doomlab
Owner

doomlab commented Nov 17, 2020

@mattansb - just finally dug this out of my email. We are using the formula on that same page (switching A and B in our explanation).

image

The table you've included is for three-way designs with various mixed factors - we note in our guide that we are only doing two-way designs (maybe this should be clearer, but you only enter information for two of them).

@mattansb
Author

mattansb commented Jan 7, 2021

Okay, I see now!

However, this raises other concerns. Looking at the paper (pages 441-442), it seems that this formula is only applicable for:

  • A fixed effects model
    • With only 2 IVs
    • With one IV manipulated and the other measured
  • The generalized Omega in the function is only applicable to the main effect of the measured IV.

Whereas the function's docs seem to claim it is applicable to both fixed and mixed models (and they suggest there is something called a partial generalized omega squared? "Remember if you have two or more IVs, these values are partial omega squared.")...

Unless again I am missing something? 😅

@mattansb
Author

mattansb commented Jan 7, 2021

Likewise, ges.partial.SS.mix() seems to be for a partial generalized eta squared, but I could not find a reference to such an effect size outside your package. And the formula also seems to only be applicable to manipulated IVs?

And I'm not sure where ges.partial.SS.rm() is taken from?

Sorry for the deep dive here...

@doomlab
Owner

doomlab commented Jan 12, 2021

@mattansb you are raising an excellent point that I have been thinking about. I know that the generalized eta/omega paper talks a lot about manipulated versus measured ... but I'm not 100% convinced this distinction matters at all for the estimation of the effect. Obviously, the data collection procedure does (i.e., between versus repeated). I just haven't connected all the dots on how to show this point exactly. Either way, some clarifying language is in order.

Everything generalized (eta/omega) is from Olejnik and Algina - I'm going to mark these notes to make sure we update the citation information online before I start working on the paper for this thing.

@mattansb
Author

The distinction between manipulated and observed is, to my understanding, the whole point of the generalized measures: the generalized effect size tries to estimate the effect size we would see for this "term" outside the lab, in the wild, so it matters whether an IV varies in the wild (e.g., birth order) or doesn't (e.g., treatment group).

Here is an example from the afex package showing how the generalized effect size is affected not only by whether the term of interest is labelled "observed", but also by how other terms are labelled.

library(afex)

data(obk.long, package = "afex")

# Mixed design: treatment and gender are between-subjects factors,
# phase and hour are within-subjects factors (repeated over id).
a <- aov_4(value ~ treatment * gender * phase * hour +
             (phase * hour | id),
           data = obk.long)

The ges column is the generalized eta squared. Let's focus on the gender effect.

When nothing is marked as observed, we get ges = 0.15.

anova(a)[2, ]
#> Anova Table (Type 3 tests)
#> 
#> Response: value
#>        num Df den Df    MSE      F    ges Pr(>F)  
#> gender      1     10 22.806 3.6591 0.1516 0.0848 .
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

When only gender is observed, we get ges = 0.11.

anova(a, observed = "gender")[2, ] # gender is observed
#> Anova Table (Type 3 tests)
#> 
#> Response: value
#>        num Df den Df    MSE      F     ges Pr(>F)  
#> gender      1     10 22.806 3.6591 0.11481 0.0848 .
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

When gender is not observed, but treatment is, we get ges = 0.08.

anova(a, observed = "treatment")[2, ] # something else (not gender) is observed
#> Anova Table (Type 3 tests)
#> 
#> Response: value
#>        num Df den Df    MSE      F      ges Pr(>F)  
#> gender      1     10 22.806 3.6591 0.085304 0.0848 .
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

This is just a quick example. A situation can also be manufactured where the differences are much larger.
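
For what it's worth, both between-subjects factors can also be labelled as observed at once, which pulls even more terms into the ges denominator. Just the call here (output omitted), assuming the same fitted object a from above:

anova(a, observed = c("gender", "treatment"))[2, ] # both gender and treatment marked as observed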

@doomlab
Owner

doomlab commented Jan 13, 2021

@mattansb I get that we can calculate them differently; I'm just not convinced that this distinction is necessary (or I'm not educated enough on why we think this difference is necessary). It seems to me that effect sizes (and their distributions) should be determined by the underlying effect ... why should the effect care whether it is measured or manipulated?

I could obviously be uninformed here, so feel free to point me to a reference if you have one (or I have just forgotten everything but the math from the main paper we are talking about).

@mattansb
Author

> why should the effect care whether it is measured or manipulated?

This is exactly the point of the generalized eta/omega. From the paper's abstract:

[image: excerpt from the abstract of Olejnik and Algina]

So what the GES or GOS does is "account" for the design, allowing you to compare the effect sizes of a term across different designs.
Otherwise you can just report the partial / non-partial eta/omega.

Here is a short comparison between the different eta/omegas (from Lakens, 2013):

[image: table comparing the eta/omega variants, from Lakens (2013)]
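
Roughly, and in my own notation (a sketch, not a verbatim copy of that table):

$$\eta^2 = \frac{SS_{effect}}{SS_{total}}, \qquad \eta^2_p = \frac{SS_{effect}}{SS_{effect} + SS_{error}}, \qquad \eta^2_G = \frac{SS_{effect}}{\delta \cdot SS_{effect} + \sum SS_{measured}}$$

where $\delta = 0$ if the effect itself involves a measured factor (its SS is already in the sum) and $\delta = 1$ otherwise, and the sum runs over all sources of variance involving measured factors, including subjects/error. Roughly speaking, the omega versions follow the same pattern but plug in variance-component estimates instead of raw sums of squares.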

@mattansb
Author

This is also why it doesn't make sense to have a partial-generalized eta/omega, as you need to account for the whole design (not just part of it).

Hope this is helping / making sense?

@doomlab
Owner

doomlab commented Jan 14, 2021

@mattansb alright, I took some time this morning to reread the original paper. I don't know if I misread the original, didn't comprehend it correctly, or what, but I get what is going on now. We were sort of talking about two different issues, but both are things that need to be addressed. Unfortunately, that's going to take some rewriting of the ges-related functions and some thinking on how best to explain this, because I don't think this measured/manipulated design distinction is necessarily clear.

@mattansb
Author

> because I don't think this measured/manipulated design distinction is necessarily clear.

Yes, I agree - I think it is (probably) better to frame this as "addressing bias in effect size due to study design".

Glad I brought this up (:

(On a side note, we've been trying to implement gen-Omega in effectsize for a while, but have found it very hard to do! The equations are hard to generalize (pun intended!))
