Merge pull request #22 from ryanharvey1/landing-page
Autogenerate README.md as landing page
kushaangupta authored Oct 13, 2024
2 parents d1e3e23 + 9930a22 commit b842bdf
Showing 12 changed files with 427 additions and 150 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -1,4 +1,4 @@
name: neuro_py CI
name: CI

on:
push:
2 changes: 1 addition & 1 deletion .github/workflows/deploy-docs.yml
@@ -1,4 +1,4 @@
name: Deploy docs
name: Docs
on: [push, pull_request]
jobs:
build:
24 changes: 12 additions & 12 deletions README.md
@@ -2,11 +2,19 @@

Analysis of neuroelectrophysiology data in Python.

| | |
|---------|--------------------------------------------------------------------|
| CI/CD | [![CI - Test](https://github.com/ryanharvey1/neuro_py/actions/workflows/ci.yml/badge.svg)](https://github.com/ryanharvey1/neuro_py/actions/workflows/ci.yml) [![Docs](https://github.com/ryanharvey1/neuro_py/actions/workflows/deploy-docs.yml/badge.svg)](https://github.com/ryanharvey1/neuro_py/actions/workflows/deploy-docs.yml) |
| Package | [![PyPI - Version](https://img.shields.io/pypi/v/neuro-analysis-py.svg?logo=pypi&label=PyPI&logoColor=gold)](https://pypi.org/project/neuro-analysis-py/) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/neuro-analysis-py.svg?logo=python&label=Python&logoColor=gold)](https://pypi.org/project/neuro-analysis-py/) [![PyPI - Downloads](https://img.shields.io/pypi/dm/neuro-analysis-py?color=blue&label=Installs&logo=pypi&logoColor=gold)](https://pypi.org/project/neuro-analysis-py/) |
| Repository | [![GitHub - Issues](https://img.shields.io/github/issues/ryanharvey1/neuro_py?logo=github&label=Issues&logoColor=gold)]() [![Commits](https://img.shields.io/github/last-commit/ryanharvey1/neuro_py)]() [![Contributors](https://img.shields.io/github/contributors/ryanharvey1/neuro_py)]() [![Downloads](https://pepy.tech/badge/neuro-analysis-py)](https://pepy.tech/project/neuro-analysis-py) |
| Metadata | [![GitHub - License](https://img.shields.io/github/license/ryanharvey1/neuro_py?logo=github&label=License&logoColor=gold)](LICENSE) [![code style - black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![docstring - numpydoc](https://img.shields.io/badge/docstring-numpydoc-blue)](https://numpydoc.readthedocs.io/en/latest/format.html) [![linting - Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff) |


Overview
========
neuro_py is a Python package for analysis of neuroelectrophysiology data. It is built on top of the [nelpy](https://github.com/nelpy/nelpy) package, which provides core data objects. neuro_py provides a set of functions for analysis of freely moving electrophysiology, including behavior tracking utilities, neural ensemble detection, peri-event analyses, robust batch analysis tools, and more.
`neuro_py` is a Python package for analysis of neuroelectrophysiology data. It is built on top of the [nelpy](https://github.com/nelpy/nelpy) package, which provides core data objects. `neuro_py` provides a set of functions for analysis of freely moving electrophysiology, including behavior tracking utilities, neural ensemble detection, peri-event analyses, robust batch analysis tools, and more.

Tutorials are [here](https://github.com/ryanharvey1/neuro_py/tree/master/tutorials) and more will be added.
Tutorials are [here](https://github.com/ryanharvey1/neuro_py/tree/main/tutorials) and more will be added.


## Installation
@@ -32,13 +40,13 @@ pip install -e . --force-reinstall --no-cache-dir
## Usage

```python
import neuro_py as neuro
import neuro_py as npy
```


## Dependencies

For ease of use, this package uses nelpy core data objects. See [nelpy](https://github.com/nelpy/nelpy)
For ease of use, this package uses `nelpy` core data objects. See [nelpy](https://github.com/nelpy/nelpy)

## Testing

@@ -57,11 +65,3 @@ Please make sure to update tests as appropriate.
- [@ryanharvey1](https://www.github.com/ryanharvey1)
- [@lolaBerkowitz](https://www.github.com/lolaBerkowitz)
- [@kushaangupta](https://github.com/kushaangupta)


## License

neuro_py is distributed under the MIT license. See the [LICENSE](https://github.com/neuro_py/neuro_py/blob/master/LICENSE) file for details.



45 changes: 45 additions & 0 deletions docs/copy_tutorials.py
@@ -0,0 +1,45 @@
"""Examples copying utility function."""

from pathlib import Path

import mkdocs_gen_files

EXAMPLES_DIRECTORY_PATH = Path("tutorials")


def write_file(file_path: Path) -> None:
    """
    Copy a file from the examples directory into mkdocs scope.

    Parameters
    ----------
    file_path : Path
        Current file path.
    """
    root_path = file_path.relative_to(".")
    print(f"Copying {root_path}")
    with root_path.open("rb") as src, mkdocs_gen_files.open(root_path, "wb") as dst:
        dst.write(src.read())


banned_directories = [
"cache",
"files",
"example_files",
"__pycache__",
"lightning_logs",
]
banned_extensions = [".pbf", ".parquet", ".json", ".geojson", ".pt"]
for i in EXAMPLES_DIRECTORY_PATH.glob("**/*"):
    if i.is_file():
        should_copy = True
        for banned_extension in banned_extensions:
            if banned_extension in i.suffixes:
                should_copy = False
                break

        for banned_directory in banned_directories:
            if banned_directory in i.parts:
                should_copy = False
                break

        if should_copy:
            write_file(i)
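The two ban checks in the loop above reduce to a single predicate, which is easier to test in isolation. A minimal standalone sketch (the `should_copy` helper and the set form of the ban lists are illustrative, not part of the committed script):

```python
from pathlib import Path

# Mirrors the ban lists in docs/copy_tutorials.py
BANNED_DIRECTORIES = {"cache", "files", "example_files", "__pycache__", "lightning_logs"}
BANNED_EXTENSIONS = {".pbf", ".parquet", ".json", ".geojson", ".pt"}

def should_copy(path: Path) -> bool:
    """Return True if a tutorial file should be copied into the docs build."""
    if any(ext in path.suffixes for ext in BANNED_EXTENSIONS):
        return False  # data/artifact file
    if any(part in BANNED_DIRECTORIES for part in path.parts):
        return False  # lives in a generated or cache directory
    return True
```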
9 changes: 8 additions & 1 deletion docs/index.md
@@ -1 +1,8 @@
# neuro-py
<!-- <style>
/* Hide sidebar on homepage */
.md-sidebar {
display: none;
}
</style> -->

{% include-markdown "../README.md" %}
19 changes: 19 additions & 0 deletions docs/stylesheets/extra.css
@@ -1,3 +1,22 @@
@import url('https://rsms.me/inter/inter.css');
* {
    font-family: 'Inter', sans-serif;
}
h1, h2, h3, h4, h5, h6 {
    font-weight: 700 !important;
}
pre * { font-family: monospace; }

p {
    text-align: justify;
}

.md-nav__link--active {
    background-color: var(--md-code-bg-color);
    border-radius: 0.2em;
    padding: 0.2em;
}

:root {
    --md-code-font: "Roboto Mono";
    --md-default-fg-color: #111;
24 changes: 20 additions & 4 deletions mkdocs.yml
@@ -8,9 +8,6 @@ extra_css:

theme:
  name: "material"
  font:
    text: Roboto
    code: Roboto Mono
  palette:
    - media: "(prefers-color-scheme: dark)"
      toggle:
@@ -58,6 +55,7 @@ plugins:
  - gen-files:
      scripts:
        - docs/gen_ref_pages.py
        - docs/copy_tutorials.py
  - literate-nav:
      nav_file: SUMMARY.md
  - section-index
@@ -66,8 +64,26 @@ plugins:
        python:
          options:
            docstring_style: numpy

            docstring_section_style: table
            filters: ["!__"]  # exclude all members starting with __
  - include-markdown: # https://github.com/mondeja/mkdocs-include-markdown-plugin
      opening_tag: "{%"
      closing_tag: "%}"
      rewrite_relative_urls: true
      heading_offset: 1
  - mkdocs-jupyter:
      include: ["*.ipynb"]
      include_source: true
      ignore_h1_titles: true
      execute: false
      allow_errors: true
      include_requirejs: true

nav:
  - Home: index.md
  - API Reference: reference/
  - Tutorials:
      - Batch Analysis: tutorials/batch_analysis.ipynb
      - Explained Variance: tutorials/explained_variance.ipynb
      - Reactivation: tutorials/reactivation.ipynb
      - Spatial Map: tutorials/spatial_map.ipynb
2 changes: 2 additions & 0 deletions pyproject.toml
@@ -48,6 +48,8 @@ dependencies = [
"mkdocs-gen-files>=0.5.0",
"mkdocs-literate-nav>=0.6.1",
"mkdocs-section-index>=0.3.9",
"mkdocs-include-markdown-plugin>=6.2.2",
"mkdocs-jupyter>=0.24.8",
]

[project.urls]
79 changes: 52 additions & 27 deletions tutorials/batch_analysis.ipynb
@@ -1,5 +1,21 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Batch Analysis\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -19,9 +35,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Define your analysis\n",
"## Section 1: Define the analysis\n",
"\n",
"Here, I'm defining the analysis in the notebook, but in a real project, you would define it in a separate Python file and import it here."
"Here, I'm defining the analysis in the notebook, but in a real project, you would define it in a separate `.py` file and import it here."
]
},
{
@@ -30,11 +46,11 @@
"metadata": {},
"outputs": [],
"source": [
"def toy_analysis(basepath, paramater_1=1, paramater_2=2):\n",
"def toy_analysis(basepath, parameter_1=1, parameter_2=2):\n",
" results = pd.DataFrame()\n",
" results[\"basepath\"] = [basepath]\n",
" results[\"paramater_1\"] = paramater_1\n",
" results[\"paramater_2\"] = paramater_2\n",
" results[\"parameter_1\"] = parameter_1\n",
" results[\"parameter_2\"] = parameter_2\n",
" results[\"random_number\"] = np.random.randint(0, 100)\n",
" return results"
]
@@ -43,9 +59,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Pandas dataframe of basepaths\n",
"\n",
"For your project, you will have a csv file with the basepaths you want to analyze. Here, I'm creating a dataframe with the basepaths for the purpose of this notebook."
"For your project, you will have a `.csv` file with the `basepaths` you want to analyze. Here, I'm creating a `DataFrame` with the `basepaths` for the purpose of this notebook."
]
},
{
@@ -54,23 +68,20 @@
"metadata": {},
"outputs": [],
"source": [
"sessions = pd.DataFrame()\n",
"sessions[\"basepath\"] = [\n",
"sessions = pd.DataFrame(dict(basepath=[\n",
" r\"U:\\data\\hpc_ctx_project\\HP01\\day_1_20240227\",\n",
" r\"U:\\data\\hpc_ctx_project\\HP01\\day_2_20240228\",\n",
" r\"U:\\data\\hpc_ctx_project\\HP01\\day_3_20240229\",\n",
"]"
"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Save path\n",
"\n",
"You will need to define the path where you want to save the results of your analysis.\n",
"\n",
"It's useful to nest the analysis version in a subfolder (toy_analysis\\toy_analysis_v1) to keep track of the different versions of your analysis. "
"It's useful to nest the analysis version in a subfolder (`toy_analysis\\toy_analysis_v1`) to keep track of the different versions of your analysis. "
]
},
{
@@ -86,15 +97,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Batch analysis\n",
"## Section 2: Run the analysis\n",
"\n",
"### Finally, you can run your analysis in batch mode. This will loop through the basepaths and save the results in the specified folder.\n",
"**Finally, you can run your analysis in batch mode. This will loop through the `basepaths` and save the results in the specified folder.**\n",
"\n",
"The batch_analysis function is a general function that you can use for any analysis. You just need to pass the function you want to run, the basepaths you want to analyze, and the save path.\n",
"The `batch_analysis` function is a general function that you can use for any analysis. You just need to pass the function you want to run, the `basepaths` you want to analyze, and the save path.\n",
"\n",
"If your analysis fails, running again will start from where it left off.\n",
"\n",
"There is a parallel option that you can set to True if you want to run the analysis in parallel. This will speed up the analysis if you have multiple cores."
"There is a `parallel` option that you can set to `True` if you want to run the analysis in parallel. This will speed up the analysis if you have multiple cores."
]
},
{
@@ -140,9 +151,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Load results\n",
"## Section 3: Load the results\n",
"\n",
"There is a built in loader that concatenates the results of the analysis into a single dataframe."
"There is a built in loader that concatenates the results of the analysis into a single `DataFrame`."
]
},
{
@@ -229,11 +240,18 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# More complicated results\n",
"## Bonus: More complicated results\n",
"\n",
"Your results won't always fit nicely into a single dataframe. Sometimes you will have multiple data types you need to save.\n",
"Your results won't always fit nicely into a single `DataFrame`. Sometimes you will have multiple data types you need to save.\n",
"\n",
"For example, you might have values for each cell in a dataframe and also psths for each cell. Your analysis will store both in a dictionary and you will construct a custom loader in your analysis."
"For example, you might have values for each cell in a `DataFrame` and also PSTHs for each cell. Your analysis will store both in a dictionary and you will construct a custom loader in your analysis."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define the analysis"
]
},
{
@@ -306,7 +324,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run the analysis"
"### Run the analysis"
]
},
{
@@ -354,7 +372,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load and display results"
"### Load the results"
]
},
{
@@ -555,11 +573,18 @@
"display(results_df)\n",
"display(psths)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "hpc_ctx",
"display_name": "44n",
"language": "python",
"name": "python3"
},
@@ -573,7 +598,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.9.19"
}
},
"nbformat": 4,
