Merge pull request #44 from Hjorthmedh/jofrony-patch-1
Update README.md
jofrony authored Jul 24, 2023
2 parents a3e603b + e0f3918 commit 9905533
Showing 25 changed files with 57,382 additions and 103 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1,6 +1,7 @@
# Extra transfer files for model prep
tools/test_data/
Planert2010/figures/
tmp/

# Byte-compiled / optimized / DLL files
__pycache__/
71 changes: 62 additions & 9 deletions README.md
@@ -1,5 +1,22 @@
# BasalGangliaData

## TODO in the next merge

* Update the dSPN and iSPN morphologies so that they no longer have the in-soma dendrite point problem
* Reoptimize the FS morphology key and parameter key pairs in filters/striatum/fs/
* Update the Cortex-Striatum synapses
* If there are other morphology-parameter key combinations to be filtered away, reoptimized or modified, place them within filters, similar to fs (filters/striatum/fs/), to inform the next round of merges
* To clean large files (data, models, etc.) from previous commits of both BasalGangliaData and Snudda and remove them from the git history, use: https://rtyley.github.io/bfg-repo-cleaner/

## Models within the Microcircuit group:

* The striatum - healthy:
  found in: data/neurons/striatum; data/synapses/striatum; meshes/Striatum-d-*; density/ and nest/
  description: the control/healthy mouse striatum model
* The striatum - Parkinson's:
  found in: Parkinson/20221213/PD0 (the striatum - healthy); Parkinson/20221213/PD1; Parkinson/20221213/PD2; Parkinson/20221213/PD3; Parkinson/20221213/PD_lesion
  description: Parkinsonian models - three stages plus PD_lesion, which is equivalent to PD2 but contains complete models; the other stages only contain morphological changes

## Versions of Basal Ganglia Data

To move between the different tags:
@@ -12,18 +29,18 @@ git checkout tags/tag_name

### Description of models

The multicompartmental models are described by a parameter set, a morphology file (.swc), and the mechanisms.json (which describes how the ion channels are distributed on the reconstructed morphology).

In each model folder, there is an additional file, meta.json.

Meta.json is a dictionary with two levels of hash keys: p* and m* (and nm* for neuromodulation). The * is calculated from the contents of the parameter set or morphology file (.swc) using hashlib.md5(contents_of_the_parameter_or_morphology).hexdigest(). This gives a hash specific to the contents of the parameter/morphology. The hash is prefixed with "p" or "m" for parameter and morphology, respectively.

The advantages of hash keys:
- They are calculated from the contents of the file, so if the model changes, the hash keys change as well - useful for testing
- Each model becomes identifiable by two keys, p* and m*, which can be used during the simulation
- We do not use lists of model files, which depend on the order of the models and are therefore sensitive to changes in the file structure

For further information and help with converting new models into the abovedescribed format, email Johanna Frost Nylen, johanna.frost.nylen@ki.se.
For further information and help with converting new models into the above-described format, email Johanna Frost Nylen, johanna.frost.nylen@ki.se.

### Access
First request access from Johannes; the data is currently for internal use within our group only. But if you see this text, you probably already have access.
@@ -41,12 +58,12 @@ If you look at [runSnuddaSmall.sh](https://github.com/Hjorthmedh/Snudda/blob/mas
export SNUDDA_DATA="../../BasalGangliaData/data"
```

This is the key: it tells Snudda where BasalGangliaData is. If you run from a folder other than Snudda/examples, or if you put BasalGangliaData somewhere else, this path might need to be different. So you need to set ```SNUDDA_DATA``` in your shell script.
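Before a long run it can be worth verifying from Python that the variable is actually set; a minimal sketch (the helper name `check_snudda_data` is invented, not part of Snudda):

```python
import os


def check_snudda_data(env=os.environ):
    """Return the SNUDDA_DATA path, raising a clear error if it is
    missing or does not point to an existing directory."""
    snudda_data = env.get("SNUDDA_DATA")
    if snudda_data is None or not os.path.isdir(snudda_data):
        raise RuntimeError("SNUDDA_DATA is not set or does not point to "
                           "an existing BasalGangliaData/data directory")
    return snudda_data
```

Failing fast here is cheaper than discovering halfway through network creation that Snudda fell back to its bundled data.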


### Morphologies - new morphologies have to be centered

Before committing new morphologies, please verify that they are centered at (0,0,0) using ```test_segmentid.py``` in ```Snudda/tests```.

```
export SNUDDA_DATA=/home/hjorth/HBP/BasalGangliaData/data/
@@ -60,10 +77,46 @@ When changes are made to the models, their hash names will change and this affe

New hash keys were introduced to work with the model tests.

If you have created a network prior to January 8, either regenerate the network and rerun (the models are the same, only the keys have changed) or revert to the old keys with:

```
git checkout 2768cd6
```

## Testing
BasalGangliaData uses unittest to test the code in tools/. To run all tests:

```
python -m unittest discover tests/
```
Individual tests can be run by:

```
python -m unittest tests/name_of_test_file.py
```
Test files must be named test_*, which is important to remember when adding new tests.
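A new test file would follow the usual unittest pattern; a minimal sketch (the function `double` and the file name, e.g. tests/test_double.py, are invented examples, not actual code from tools/):

```python
import unittest


def double(x):
    # hypothetical stand-in for a function from tools/ that you want to cover
    return 2 * x


class TestDouble(unittest.TestCase):
    # test method names must start with "test_" so unittest collects them
    def test_double_of_three(self):
        self.assertEqual(double(3), 6)
```

Saved under tests/ with a test_* file name, this is picked up automatically by `python -m unittest discover tests/`.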

To check if a piece of code has already been tested, generate a code coverage report:

```
python -m coverage run -m unittest discover tests/
```
and visualise it either in the terminal:

```
python -m coverage report
```
or as HTML, then open htmlcov/index.html:

```
python -m coverage html
```

This will tell you which lines have been covered and whether you have to write a new test or just modify an existing one. For an introduction, see this [unittest tutorial](https://www.digitalocean.com/community/tutorials/how-to-use-unittest-to-write-a-test-case-for-a-function-in-python).


## Funding

Horizon 2020 Framework Programme (785907, HBP SGA2); Horizon 2020 Framework Programme (945539, HBP SGA3); Vetenskapsrådet (VR-M-2017-02806, VR-M-2020-01652); Swedish e-science Research Center (SeRC); KTH Digital Futures. The computations are enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at PDC KTH partially funded by the Swedish Research Council through grant agreement no. 2018-05973. We acknowledge the use of Fenix Infrastructure resources, which are partially funded from the European Union's Horizon 2020 research and innovation programme through the ICEI project under the grant agreement No. 800858.
138 changes: 138 additions & 0 deletions examples/README.md
@@ -0,0 +1,138 @@
# How to transfer your model from BluePyOpt to BasalGangliaData (using Snudda format)

# BluePyOpt

To create single multi-compartmental models, the lab utilizes [BluePyOpt](https://github.com/BlueBrain/BluePyOpt). The optimization is started using a collection of code*, and the "set up" of the optimization is done with the following code:

```
optimiser = bpopt.optimisations.DEAPOptimisation(
    evaluator=evaluator,
    offspring_size=offspring_size,
    map_function=lview.map_sync,
    seed=1)

pop, hof, log, hist = optimiser.run(max_ngen=ngenerations)
```

see the method "run" in BluePyOpt [here](https://github.com/BlueBrain/BluePyOpt/blob/dfd202904c4f497c54574c7f321a95bb5183438b/bluepyopt/deapext/optimisations.py#L253)

The "pop", "hof", "log" and "hist" are returned from "run". The "hof" contains the "best" parameter sets of the optimization. The "hof" is saved into a json file by the following code:

```
import json

best_models = []
for record in hof:
    params = evaluator.param_dict(record)
    best_models.append(params)

with open('best_models.json', 'w') as fp:
    json.dump(best_models, fp, indent=4)
```
The file can be further filtered by different validation scripts* but the structure should be the same as in 'best_models.json' (a list of dictionaries). See examples/ for examples of each file required in the conversion from BluePyOpt to Snudda.

Some people in the lab use 'best_models.json', but further validation following the optimization should filter this list; the filtered models are saved in 'hall_of_fame.json'. This file has the same structure but might contain fewer models, since validation measures other features than the optimization does.

If the optimization is also tested on several different morphologies, a third file called 'val_models.json' is required (see examples/ for an example of 'val_models.json').


*For more information on the optimization, contact Alex Kozlov or Ilaria Carannante.
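The shared structure of these files - a JSON list of parameter dictionaries - can be sketched as follows (the parameter names below are invented examples, not the real model parameters):

```python
import json
import os
import tempfile

# hypothetical contents of best_models.json: one dictionary per optimized model
best_models = [
    {"gbar_naf_somatic": 0.9, "gbar_kas_somatic": 0.01},
    {"gbar_naf_somatic": 0.8, "gbar_kas_somatic": 0.02},
]

# write it the same way the snippet above does (to a temp dir here)
path = os.path.join(tempfile.mkdtemp(), "best_models.json")
with open(path, "w") as fp:
    json.dump(best_models, fp, indent=4)

# reading it back yields the same list of dictionaries
with open(path) as fp:
    loaded = json.load(fp)
```

Whichever of the three files you use, the transfer tools can rely on this list-of-dictionaries shape rather than on file order.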

## Option 1: Give the path to specific files

### Required files

Mandatory:

* parameters.json
* mechanisms.json

Version-dependent - either 1, or 2, or 2 + 3:

1. best_models.json (if you are using the direct output of the optimizer)
2. hall_of_fame.json (if you have filtered the parameter sets against more validations)
3. val_models.json (if you have varied the morphology used within the original optimization and hence have more morphology-parameter combinations)
The model optimisation could be a folder containing the following files and subdirectories:

```
model/
    config/
        parameters.json
        mechanisms.json
    morphology/        # contains one or several morphologies (.swc) used for the model
    hall_of_fame.json  # contains the parameter sets - the results of the optimisation
    val_models.json    # optional; if several morphologies are used, the parameter sets that match each morphology
```

For an example of the structure and contents of the files, see **BasalGangliaData/tests/test_data/example_variation_source**

*Contact Alex Kozlov for more information

# The steps

### Create your own notebook and copy-paste the code to perform each step individually or utilize the class TransferBluePyOptToSnudda.

The transfer has been divided into several steps.

Create a directory for the model.

Within BasalGangliaData, the models used in Snudda are saved under `BasalGangliaData/data/neurons/name_of_nucleus`. **If the nucleus does not exist, add a folder for the new nucleus.** Next, create (if it does not already exist) a folder for each cell type within the nucleus. Lastly, create the folder for each model of the cell type (this folder will be the **destination** used in the code below). For example:

```
BasalGanglia/data/neurons/newnucleus/new_celltype/new_model
```
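The nested folders can be created in one call; a sketch using the example path (created under a temporary base directory here so it is safe to run anywhere - in practice you would create it inside your BasalGangliaData checkout):

```python
import os
import tempfile

# temporary base directory stands in for your BasalGangliaData checkout
base = tempfile.mkdtemp()
destination = os.path.join(
    base, "BasalGanglia/data/neurons/newnucleus/new_celltype/new_model")

# os.makedirs creates all missing intermediate folders in one call;
# exist_ok=True makes the call idempotent if the folder already exists
os.makedirs(destination, exist_ok=True)
```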

### Add tools to your path

```
import sys
sys.path.append("../tools")

source = "path to the BluePyOpt optimisation, with the structure described above"
destination = "BasalGanglia/data/neurons/newnucleus/new_celltype/new_model"
```

### Transfer mechanisms

```
from transfer.mechanisms import transfer_mechanisms
transfer_mechanisms(source=source, destination=destination)
```

### Transfer parameters

```
from transfer.parameters import transfer_parameters
transfer_parameters(source=source,
destination=destination,
selected=True)
```

### Transfer selected models from val_models.json


```
from transfer.selected_models import transfer_selected_models
transfer_selected_models(source=source, destination=destination)
```

### Transfer morphologies

```
from transfer.morphology import transfer_morphologies
transfer_morphologies(source=source,
destination=destination,
selected=True)
```

### Create the meta.json which combines all information on the model

```
from meta.create_meta import write_meta
write_meta(directory=destination, selected=True)
```
