Hi,
Is there a way to set the number of questions for the word intrusion task, similar to the topic intrusion task? At the moment, if I am assessing a model with 40 topics, the wi test has 40 cases by default.
I would prefer to be able to set the number of questions for the word intrusion task as well. In Chang et al.'s (2008) experiment, they used 10 cases per task. This would be particularly helpful when assessing multiple models in which the number of topics is quite high and human resources are limited.
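For illustration, a minimal sketch of what the requested interface might look like, modelled loosely on how the topic intrusion test already lets you limit the number of cases. The `n_cases` argument and the `lda_model_40` object are hypothetical and not part of the current oolong API; this just illustrates the request:

```r
library(oolong)

# current behaviour: the word intrusion test generates one case per topic,
# so a 40-topic model yields 40 questions
wi_test <- wi(lda_model_40)

# requested behaviour (hypothetical argument): cap the number of cases at 10
wi_test <- wi(lda_model_40, n_cases = 10)
```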
Thanks!
@sweetmals Thanks for the suggestion. It is possible to implement this, but there is a crucial difference between Chang et al.'s (2008) approach of offering only 10 cases to crowdcoders and oolong offering fewer than k cases. Chang et al.'s crowdcoding approach still ultimately achieves complete coverage of all k topics, with at least 8 codings from their crowd for each topic. However, if we allow a wi test with fewer than k cases, complete coverage can no longer be guaranteed. I am hesitant to implement this in oolong because it could give users a false sense of validity. (Say a user has a topic model with k = 100, generates a wi test with only 1 case, and then reports in his or her paper that the model has been validated with oolong.) If this is to be implemented, there will at the very least be a warning message at every step.
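A minimal sketch (plain R, not oolong code) of the coverage concern: if only n < k word intrusion cases are sampled, some topics never receive a question at all, so a "passed" test says nothing about them:

```r
set.seed(42)
k <- 40   # number of topics in the model
n <- 10   # number of word intrusion cases requested

tested_topics   <- sample(seq_len(k), n)              # topics that receive a case
untested_topics <- setdiff(seq_len(k), tested_topics) # topics with no case at all
length(untested_topics)                               # 30 topics are never validated
```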