SORTED

SORTED: A curated collection of interesting ideas, tools, and resources in neuroscience, data management, and data science, all in the spirit of Open Science. It also includes miscellaneous links related to AI, biohacking, and productivity.

Open Data Science

Data Repositories (suitable for neuroimaging datasets)

  • OpenNeuro: A platform for sharing BIDS-compliant MRI, PET, MEG, and EEG data.
  • OpenNeuro PET: A specialized repository for BIDS-compliant PET data.
  • NeuroVault: A repository for unthresholded statistical maps, parcellations, and atlases from MRI and PET studies.
  • BossDB: A volumetric database for 3D and 4D neuroscience data.
  • GigaDB: Hosts over 2201 discoverable, trackable, and citable datasets with DOIs for public download and use.
  • Harvard Dataverse: A repository for research data across all disciplines.
  • Dryad: An open data publishing platform for a wide range of research data.
  • GIN: A modern research data management platform for neuroscience.
  • Zenodo: A general-purpose open data repository.
  • NeMO: A data repository focused on multi-omic data from brain research projects.
  • DABI: A shared repository for invasive neurophysiology data from the NIH BRAIN Initiative.

Journals (for Data Notes)

Additional Resources:

(back to top)

Miscellaneous for Researchers

General tools

  • BIDS: Brain Imaging Data Structure; a simple and intuitive way to organize and describe neuroimaging and behavioral data (a minimal example of querying a BIDS dataset follows this list)
  • Protocols.io: science methods, assays, clinical trials, operational procedures, and checklists for keeping your protocols up to date, as recommended by Good Laboratory Practice (GLP) and Good Manufacturing Practice (GMP)
  • Sample Consent Forms for neuroimaging research (EN/DE) | The Open Brain Consent: an international initiative that aims to provide researchers in the brain imaging community with information about data sharing options and tools
  • DataLad: a free and open source distributed data management system
  • ADDI + AD Workbench: Alzheimer's Disease Data Initiative; "a free data sharing platform, data science tools, funding opportunities, and global collaborations, ADDI is advancing scientific breakthroughs and accelerating progress towards new treatments and cures for AD and related dementias"
  • The Human Protein Atlas: "The Human Protein Atlas is a Swedish-based program initiated in 2003 with the aim to map all the human proteins in cells, tissues, and organs using an integration of various omics technologies, including antibody-based imaging, mass spectrometry-based proteomics, transcriptomics, and systems biology."
  • Most Wiedzy / Bridge of Knowledge: a Polish open-access platform collecting publications, studies, projects, and many other types of resources from a wide range of subject areas
  • Journal Citation Reports: "the world's leading journals and publisher-neutral data"
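
The BIDS entry above describes a directory and naming convention rather than a tool, so a minimal sketch may help. Assuming the `pybids` package (not listed above) is installed and `/path/to/bids_dataset` is a placeholder for a valid BIDS directory, a dataset organised this way can be queried programmatically:

```python
# Minimal sketch: querying a BIDS-organised dataset with pybids.
# Assumes `pip install pybids`; the dataset path below is a placeholder.
from bids import BIDSLayout

layout = BIDSLayout("/path/to/bids_dataset")  # index the dataset

# All subject labels and all T1-weighted anatomical scans, as file paths.
subjects = layout.get_subjects()
t1w_files = layout.get(suffix="T1w", extension=".nii.gz", return_type="filename")

# Resting-state BOLD runs for a single subject.
bold_sub01 = layout.get(subject="01", task="rest", suffix="bold",
                        extension=".nii.gz", return_type="filename")

print(subjects, len(t1w_files), bold_sub01)
```

The same entity-based file names (sub-01, task-rest, suffixes such as T1w and bold) are what let tools like fMRIPrep consume a BIDS dataset without extra configuration.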

Coding/ Software

  • Dask: parallel computing with Python (a minimal sketch follows this list)
  • Docker: OS-level virtualization to deliver software in packages called containers
  • Statistics in R: guidelines for ANOVA in R
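
As a quick illustration of the Dask entry above, here is a minimal sketch of its chunked-array idiom; the array size and chunking are arbitrary example values:

```python
# Minimal sketch: lazy, chunked, parallel computation with Dask arrays.
import dask.array as da

# A large random array split into 1000x1000 chunks; nothing is computed yet --
# Dask only records a task graph.
x = da.random.random((20_000, 20_000), chunks=(1_000, 1_000))

# Define a reduction lazily, then trigger the parallel computation.
result = (x + x.T).mean(axis=0)
print(result.compute()[:5])
```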

Neuroscience

  • fMRIPrep: a preprocessing pipeline for task-based and resting-state fMRI data
  • Neurosynth: a platform for large-scale, automated synthesis of functional magnetic resonance imaging (fMRI) data
  • BrainMap: a database of published functional and structural neuroimaging experiments with coordinate-based results (x,y,z) in Talairach or MNI space
  • NEMAR: an open-access data, tools, and compute resource for assessing and processing human NeuroElectroMagnetic data shared by its authors through OpenNeuro
  • ReproNim: a framework and set of tools for reproducible neuroimaging analysis
  • NiMARE (Neuroimaging Meta-Analysis Research Environment): a Python package for neuroimaging meta-analyses (a usage sketch follows this list)
  • BrainIAK: advanced fMRI analyses in Python, optimized for speed under the hood with MPI, Cython, and C++
  • Brain-age models (brain-predicted age estimated from raw T1-weighted MRI scans): BrainAge, brainageR
  • SPAMRI: A MATLAB Toolbox for Surface-Based Processing and Analysis of Magnetic Resonance Imaging
  • ENIGMA Toolbox: Python/MATLAB-based; cortical and subcortical visualization tools, preprocessed micro- and macroscale data, multiscale analytical workflows, and 100+ ENIGMA-derived statistical maps
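
For the NiMARE entry above, a minimal sketch of a coordinate-based ALE meta-analysis. The dataset file name is hypothetical, and exact import paths may differ slightly between NiMARE versions:

```python
# Minimal sketch: coordinate-based ALE meta-analysis with NiMARE.
# Assumes `pip install nimare` and a hypothetical Dataset JSON file containing
# studies and their peak coordinates (x, y, z); adapt the path to your own data.
from nimare.dataset import Dataset
from nimare.meta.cbma import ALE

dset = Dataset("my_coordinates.json")

meta = ALE()               # activation likelihood estimation estimator
results = meta.fit(dset)   # returns a MetaResult

# Save the unthresholded z-map (e.g. for sharing on NeuroVault).
results.get_map("z").to_filename("ale_z.nii.gz")
```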

Fellowships & Grants

Learning

  • Neuroscience Tutorials: Tutorials on various neuroscience topics.
  • NeuroStars: A forum for discussing neuroscience research and tools.
  • ReproNim Statistics Module: Statistical basis for neuroimaging analyses: the basics / Effect size and variation of effect sizes in brain imaging / P-values and their issues / Statistical power in neuroimaging and statistical reproducibility / The positive Predictive Value / Cultural and psychological issues
  • DataCamp: an online learning platform for data science and programming
  • Seeing Theory: a simple introduction to statistics and probability through the use of interactive visualizations (Brown University)

Summer Schools

  • Neuromatch Academy: An online summer school for computational neuroscience.
  • Google Summer of Code: a global, online program focused on bringing new contributors into open source software development
  • Neurohackademy: a summer school in neuroimaging & data science, held at the University of Washington eScience Institute

Initiatives, research groups, associations, labs, companies

  • ENIGMA: The ENIGMA Consortium brings together researchers in imaging genomics to understand brain structure, function, and disease, based on brain imaging and genetic data
  • The EuroLaD-EEG consortium: towards a global EEG platform for dementia (*more information is not yet available)
  • BrainArt SIG: "The scope of the Brain–Art SIG is to promote the exchange between Art & Science by fostering the dialogue between artists and members of the OHBM community." | Paper
  • DeepMind: "We’re a team of scientists, engineers, ethicists and more, committed to solving intelligence, to advance science and benefit humanity"
  • The Center for Brains, Minds and Machines
  • NeuroDataScience - ORIGAMI lab
  • Opium: Polish National Institute for Machine Learning

Predatory journals/publishers etc.

  • Think-Check-Submit: this international, cross-sector initiative aims to educate researchers, promote integrity, and build trust in credible research and publications
  • Beall's List (expanded 2022): a list of predatory journals & trusted resources

(back to top)

Data Visualisation

  • BrainPainter: a free software for visualisation of brain structures, biomarkers and associated pathological processes
  • HSLuv: a human-friendly alternative to the HSL colour space (a small Python example follows this list)
  • Information is Beautiful: The Information is Beautiful Awards celebrate excellence & beauty in data visualization, infographics, interactives & information art
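
For the HSLuv entry above, a minimal Python sketch using the `hsluv` package (an assumption, not listed here); it builds a palette of ten hues at fixed saturation and lightness, which is the perceptual point of HSLuv:

```python
# Minimal sketch: a perceptually even palette with the hsluv package.
# Assumes `pip install hsluv`; hue is in degrees, saturation and lightness in 0-100.
import hsluv

# Ten hues with identical saturation/lightness -> roughly equal perceived lightness.
palette = [hsluv.hsluv_to_hex([h, 90, 60]) for h in range(0, 360, 36)]
print(palette)

# Round-trip a hex colour back into HSLuv coordinates.
print(hsluv.hex_to_hsluv("#ff0000"))
```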

(back to top)

AI tools

(back to top)

Productivity

(back to top)

Biohacking

(back to top)

Reading corner

Design

  • The Design of Everyday Things (Donald Norman): how people interact with everyday objects, and how to make designs more intuitive, standardized, and user-friendly.

Programming

  • The Art of Readable Code (Dustin Boswell & Trevor Foucher): basic principles and practical techniques you can apply to write more readable code.

Relaxing in a hammock under a tree...

Unreliable Science (*and how to try to overcome this issue)

  • Why Most Published Research Findings Are False (PLOS/ John Ioannidis) | Paper / Wiki: an essay by John Ioannidis (Stanford School of Medicine); the author argues that a large proportion of published medical research findings cannot be replicated and are in fact false positives (the worked sketch after this list shows the positive predictive value arithmetic behind this argument)
  • How scientists fool themselves – and how they can stop (Nature/ Regina Nuzzo) | Article / PDF: cognitive fallacies in research and debiasing techniques
  • Power failure: why small sample size undermines the reliability of neuroscience (Nature/ Katherine S. Button et al.) | Paper: low statistical power and its influence on true/false effects
  • Scanning the horizon: towards transparent and reproducible neuroimaging research (Nat Rev Neurosci/ Russell A. Poldrack et al.) | Paper: problems that should be acknowledged during neuroimaging data analysis (low statistical power, flexibility in data analysis, software errors, etc.)
  • Variability in the analysis of a single neuroimaging dataset by many teams (Nature/ Rotem Botvinik-Nezer et al.) | Paper: analytical flexibility can have substantial effects on scientific conclusions
  • How to get statistically significant effects in any ERP experiment (and why you shouldn't) (Psychophysiology/ Steven J. Luck & Nicholas Gaspelin) | Paper: the purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant-but-bogus effects
  • Slowed canonical progress in large fields of science (PNAS/ Johan S. G. Chu & James A. Evans) | Paper: "Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion."
  • False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant (Joseph P. Simmons & Leif D. Nelson) | Paper: "First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates."
  • Revised standards for statistical evidence (PNAS/ Valen E. Johnson) | Paper: "The lack of reproducibility of scientific research undermines public confidence in science and leads to the misuse of resources when researchers attempt to replicate and extend fallacious research findings. (...) Modifications of common standards of evidence are proposed to reduce the rate of nonreproducibility of scientific research by a factor of 5 or greater."
  • The file drawer problem and tolerance for null results (Robert Rosenthal) | Paper: "For any given research area, one cannot tell how many studies have been conducted but never reported. The extreme view of the "file drawer problem" is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results."
  • Is there a large sample size problem? (Richard A. Armstrong) | Paper & The paradox of large samples (S. Kunte & A. P. Gore) | Paper: statistical issues with large sample sizes
  • Do we really measure what we think we are measuring? (Dario Gordillo et al.) | Paper
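
Several of the papers above (Ioannidis; Button et al.) turn on the positive predictive value of a significant finding, PPV = power × prior / (power × prior + α × (1 − prior)). Here is a small worked sketch with illustrative numbers; the power and prior values are assumptions chosen for illustration, not figures taken from the papers:

```python
# Worked sketch: positive predictive value (PPV) of a significant result,
# i.e. P(effect is real | p < alpha), for illustrative power and prior values.
def ppv(power: float, prior: float, alpha: float = 0.05) -> float:
    """Probability that a significant finding reflects a true effect."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A well-powered test of a plausible hypothesis vs. an underpowered exploratory one.
print(ppv(power=0.80, prior=0.50))  # ~0.94
print(ppv(power=0.20, prior=0.10))  # ~0.31: most "findings" would be false positives
```

This is the arithmetic behind the claim that low statistical power and low pre-study odds, even before any flexibility in analysis, already make a large fraction of published positive results false.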

Other articles

  • A hitchhiker’s guide to working with large, open-source neuroimaging datasets (Corey Horien et al.) | Paper: "Here we offer practical tips for working with large datasets from the end-user’s perspective. We cover all aspects of the data lifecycle: from what to consider when downloading and storing the data to tips on how to become acquainted with a dataset one did not collect and what to share when communicating results"

(back to top)

Contributions

Note

If you'd like to add anything, please feel free to edit this document and create a pull request.

GitHub Repository