
Malania 2007 benchmark #365

Merged: 92 commits, Jul 6, 2024
Conversation

@benlonnqvist (Contributor) commented Apr 11, 2023

PR for Psychometric Threshold metric and the Malania2007 benchmarks.

Brief todo:

  • add SHAs and version IDs to DataAssemblies and StimulusSets
  • re-upload StimulusSets to conform to core Microsaccades
  • add visual acuity test benchmark with the vernier-only condition
  • add metric tests

@benlonnqvist marked this pull request as ready for review June 16, 2023 11:43
@deirdre-k (Contributor) commented Jul 1, 2024

Hi @benlonnqvist, we're doing some spring cleaning on the PR backlog and noticed that this PR is active and passing tests! Is it ready to be reviewed and merged? Thanks for the contributions!

@benlonnqvist (Contributor, Author) commented

Hi @deirdre-k, thanks for messaging! Let me update the branch to double check that #917 didn't cause any issues, add one more test, and if it passes after that, it should be good to go! Sorry about not having it tagged as a draft, I'll ping you later today/this week when it's good to go.

@deirdre-k (Contributor) commented

No worries at all, sounds great! And thanks for the quick reply 😀

@mschrimpf (Member) left a comment


Required change: remove the aggregation dimension from all scores.

Recommended change: update the naming convention to reserve `-` for benchmark-metric separation only (use `.` for sub-data instead).
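The aggregation-dimension change can be sketched as follows. This is a minimal `xarray.DataArray` illustration with made-up values, not Brain-Score's actual `Score` class: the assumption is that the old-style score carried an `'aggregation'` dimension (with `'center'` and `'error'` entries), while the new-style score is a scalar with auxiliary values attached as attributes.

```python
import xarray as xr

# Old style (hypothetical layout): the score carries an 'aggregation'
# dimension, so every consumer has to select the center value explicitly.
old_score = xr.DataArray([0.8, 0.05],
                         coords={'aggregation': ['center', 'error']},
                         dims=['aggregation'])
center = float(old_score.sel(aggregation='center').values)

# New style: the score itself is a scalar; auxiliary quantities such as
# the error travel as attributes rather than as an extra dimension.
new_score = xr.DataArray(center)
new_score.attrs['error'] = float(old_score.sel(aggregation='error').values)

print(float(new_score.values))  # 0.8
```

With the scalar layout, downstream code can append `score.values` directly instead of filtering on `score['aggregation'] == 'center'`.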

Review threads (outdated, resolved):

  • brainscore_vision/benchmarks/malania2007/__init__.py
  • brainscore_vision/benchmarks/malania2007/benchmark.py (4 threads)
```python
return ceiling

@staticmethod
def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> list:
```

Suggested change:

```diff
- def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> list:
+ def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> List:
```

```python
# independent_variable is not used since we compute from thresholds, and do not need to fit them
metric = load_metric('threshold', independent_variable='placeholder')
score = metric(float(assembly.sel(subject='A').values), assembly)
print(score)
```

Suggested change:

```diff
- print(score)
```
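The comment in the snippet above notes that thresholds are computed directly rather than fitted. As a rough illustration of the quantity behind `compute_threshold_elevations` (a hypothetical stdlib-only sketch with made-up numbers, not the repository's metric code), a threshold elevation can be expressed as the ratio of a subject's test-condition threshold to their vernier-only baseline threshold:

```python
from typing import Dict, List

def threshold_elevations(baseline: Dict[str, float],
                         test: Dict[str, float]) -> List[float]:
    """Per-subject elevation: test-condition threshold / baseline threshold."""
    return [test[subject] / baseline[subject] for subject in baseline]

# Made-up thresholds (arcsec): subject 'A' needs a 3x larger vernier
# offset when flankers are present than in the vernier-only condition.
baseline_thresholds = {'A': 20.0, 'B': 25.0}
test_thresholds = {'A': 60.0, 'B': 50.0}
print(threshold_elevations(baseline_thresholds, test_thresholds))  # [3.0, 2.0]
```

Since the elevations are ratios of already-measured thresholds, no psychometric function needs to be fitted at this stage, which is why `independent_variable` can be a placeholder.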

```python
baseline_condition='placeholder',
test_condition='placeholder')
score = metric(float(assembly.sel(subject='A').values), assembly)
print(score)
```

Suggested change:

```diff
- print(score)
```

Comment on lines 188 to 190:

```python
score = float(score[(score['aggregation'] == 'center')].values)
human_thresholds.append(random_human_score)
scores.append(score)
```

Suggested change:

```diff
- score = float(score[(score['aggregation'] == 'center')].values)
- human_thresholds.append(random_human_score)
- scores.append(score)
+ human_thresholds.append(random_human_score)
+ scores.append(score.values)
```

Comment on lines 492 to 494:

```python
score = float(score[(score['aggregation'] == 'center')].values)
human_threshold_elevations.append(random_human_score)
scores.append(score)
```

Suggested change:

```diff
- score = float(score[(score['aggregation'] == 'center')].values)
- human_threshold_elevations.append(random_human_score)
- scores.append(score)
+ human_threshold_elevations.append(random_human_score)
+ scores.append(score.values)
```

benlonnqvist and others added 3 commits July 5, 2024 10:15
Co-authored-by: Martin Schrimpf <mschrimpf@users.noreply.github.com>
@benlonnqvist (Contributor, Author) commented

> Required change: remove aggregation dimension from all scores.
>
> Recommended change: update naming convention to reserve `-` for benchmark-metric separation only (use `.` for sub-data instead)

Thanks @mschrimpf for the review. I implemented both sets of changes; pending the Jenkins plugin tests, all good to go from my side.

@mschrimpf mschrimpf merged commit 9d963f1 into brain-score:master Jul 6, 2024
10 of 11 checks passed