Malania 2007 benchmark #365
Conversation
…nchmark, packaging and a few tests.
…alsError on trying to access them
…emoval of pool_score
Hi @benlonnqvist, we're doing some spring cleaning on the PR backlog and noticed that this PR is active and passing tests! Is it ready to be reviewed and merged? Thanks for the contributions!
Hi @deirdre-k, thanks for messaging! Let me update the branch to double-check that #917 didn't cause any issues, add one more test, and if it passes after that, it should be good to go! Sorry about not having it tagged as a draft, I'll ping you later today/this week when it's good to go.
No worries at all, sounds great! And thanks for the quick reply 😀
Required change: remove the aggregation dimension from all scores.
Recommended change: update the naming convention to reserve "-" for benchmark-metric separation only (use "." for sub-data instead).
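For context on the required change, a minimal sketch of what a score without an aggregation dimension could look like, assuming Brain-Score's convention of storing the error estimate as an attribute; the import path, attribute name, and benchmark identifier below are assumptions, not taken from this PR:

```python
import numpy as np
from brainscore_core.metrics import Score  # assumed import path

# hypothetical per-subject values for the sketch
per_subject = np.array([0.61, 0.58, 0.64])

# old pattern: center/error packed into an 'aggregation' coordinate,
# later unpacked with score[score['aggregation'] == 'center']
# new pattern: a plain scalar Score, with the error attached as an attribute
score = Score(per_subject.mean())
score.attrs['error'] = per_subject.std()  # attribute name is an assumption

# recommended naming convention (identifier is hypothetical):
# 'Malania2007.vernier_only-threshold_elevation' -> '.' separates sub-data,
# '-' separates the benchmark from the metric
```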
        return ceiling

    @staticmethod
    def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> list:
Suggested change:
- def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> list:
+ def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> List:
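For the suggested annotation to resolve, List has to be imported from typing alongside the Dict already used in the signature; a minimal sketch, where the enclosing class name is hypothetical and PropertyAssembly's import path is an assumption:

```python
from typing import Dict, List

from brainio.assemblies import PropertyAssembly  # assumed import path


class Malania2007Ceiling:  # hypothetical enclosing class; the hunk only shows the method
    @staticmethod
    def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> List:
        # placeholder body; the real computation lives in the PR
        return []
```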
# independent_variable is not used since we compute from thresholds, and do not need to fit them
metric = load_metric('threshold', independent_variable='placeholder')
score = metric(float(assembly.sel(subject='A').values), assembly)
print(score)
Suggested change (drop the debug print):
- print(score)
baseline_condition='placeholder',
test_condition='placeholder')
score = metric(float(assembly.sel(subject='A').values), assembly)
print(score)
Suggested change (drop the debug print):
- print(score)
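Both hunks above appear to come from tests exercising the threshold metrics; a hedged sketch of the same call pattern with the prints removed. The metric identifier 'threshold_elevation', the import path, and the assertion are assumptions; only the keyword arguments and the call signature are taken from the hunks:

```python
from brainscore_vision import load_metric  # assumed import path


def test_threshold_elevation_score(assembly):
    # identifier 'threshold_elevation' is an assumption; the hunk only shows the kwargs
    metric = load_metric('threshold_elevation',
                         baseline_condition='placeholder',
                         test_condition='placeholder')
    # score one subject's value against the full assembly, as in the hunk
    score = metric(float(assembly.sel(subject='A').values), assembly)
    # hypothetical check in place of the removed print
    assert 0 <= score <= 1
```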
score = float(score[(score['aggregation'] == 'center')].values)
human_thresholds.append(random_human_score)
scores.append(score)
Suggested change:
- score = float(score[(score['aggregation'] == 'center')].values)
- human_thresholds.append(random_human_score)
- scores.append(score)
+ human_thresholds.append(random_human_score)
+ scores.append(score.values)
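With the suggestion applied, the collection loop no longer indexes an aggregation coordinate; a minimal sketch of the resulting pattern, where the stand-in data, loop structure, and Pearson-correlation ceiling are all assumptions:

```python
import numpy as np
import xarray as xr
from scipy.stats import pearsonr

# hypothetical stand-ins for the PR's data: human thresholds and per-subject
# scores (plain xarray DataArrays here, in place of Brain-Score Score objects)
human_scores = [1.2, 0.9, 1.5, 1.1]
metric_scores = [xr.DataArray(v) for v in [1.1, 1.0, 1.4, 1.2]]

human_thresholds, scores = [], []
for random_human_score, score in zip(human_scores, metric_scores):
    human_thresholds.append(random_human_score)
    scores.append(score.values)  # raw value, no 'aggregation' selection needed

# hypothetical ceiling estimate: correlate collected scores with human thresholds
ceiling, _ = pearsonr(human_thresholds, np.asarray(scores, dtype=float))
```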
score = float(score[(score['aggregation'] == 'center')].values)
human_threshold_elevations.append(random_human_score)
scores.append(score)
Suggested change:
- score = float(score[(score['aggregation'] == 'center')].values)
- human_threshold_elevations.append(random_human_score)
- scores.append(score)
+ human_threshold_elevations.append(random_human_score)
+ scores.append(score.values)
Co-authored-by: Martin Schrimpf <mschrimpf@users.noreply.github.com>
Thanks @mschrimpf for the review. I implemented both sets of changes and, pending the Jenkins plugin tests, it's all good to go from my side.
PR for Psychometric Threshold metric and the Malania2007 benchmarks.
Brief todo: