Code and data for:
Zhang^, Lengersdorff^, Mikus, Gläscher, & Lamm (2020). Frameworks, pitfalls, and suggestions of using reinforcement learning models in social neuroscience. *Social Cognitive and Affective Neuroscience*. DOI: 10.1093/scan/nsaa089. (^Equal contributions)
A 2-minute flash talk about the paper is available on YouTube.
This repository contains everything needed to reproduce the analyses and figures in the paper:

root
├─ code # Matlab & R code to run the analyses and produce figures
└─ data # behavioral & fMRI data
Note 1: to run all scripts properly, you may need to set the root of this repository as your working directory.
Note 2: to reproduce the Matlab figures, you may need the NaN Suite, the ColorBrewer toolbox, and the offsetAxes function.
- Figure 1A: rl_learning_curve.m --> calls simuRL_one_person.m
- Figure 1B: rl_outcome_weight.m
- Figure 1C: plot_softmax.m
- Figure 1D: rl_simulations.Rmd --> the full simulation is in rl-simulations-generate-data.Rmd
- Figure 2C: pe_time_series_plot.m*
- core function: ts_corr_basic.m --> relies on normalise.m
- permutation test: ts_perm_test.m
* See our empirical paper (Zhang & Gläscher, 2020) for the experiments and other findings.
- Figure 3A-C: plot_ppc.m
- model fitting: reinforcement_learning_HBA.R --> calls the stan model rl_ppc.stan
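For readers unfamiliar with the models behind the figures above, here is a minimal sketch of the Rescorla-Wagner update (Figure 1A) combined with a softmax choice rule (Figure 1C) on a two-armed bandit. This is an illustration only, written in Python for brevity: the repository's actual code is in Matlab/R (e.g. simuRL_one_person.m, plot_softmax.m), and all function names and parameter values below are hypothetical.

```python
import math
import random

def rw_update(value, reward, alpha):
    """One Rescorla-Wagner step: V <- V + alpha * (R - V)."""
    return value + alpha * (reward - value)

def softmax(values, beta):
    """Choice probabilities from action values, with inverse temperature beta."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def simulate(n_trials=100, alpha=0.3, beta=5.0, p_reward=(0.8, 0.2), seed=0):
    """Simulate one agent on a two-armed bandit; returns final values and choices."""
    rng = random.Random(seed)
    values = [0.0, 0.0]   # initial action values
    choices = []
    for _ in range(n_trials):
        probs = softmax(values, beta)
        choice = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        values[choice] = rw_update(values[choice], reward, alpha)
        choices.append(choice)
    return values, choices
```

Averaging the choices of many simulated agents over trials yields the kind of learning curve shown in Figure 1A; the hierarchical Bayesian fitting itself is done in Stan via reinforcement_learning_HBA.R.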
For bug reports, please contact Lei Zhang (lei.zhang@univie.ac.at, or @lei_zhang_lz).
Thanks to Markdown Cheatsheet and shields.io.
This license (CC BY-NC 4.0) gives you the right to re-use and adapt this material, as long as you note any changes you made and provide a link to the original source. Read here for more details.