
Commit 5c3544a

Merge pull request #372 from VisLab/develop
Updated Jupyter notebooks and removed MATLAB examples
2 parents 5e9f9eb + 819926d commit 5c3544a


54 files changed: +378, −2013 lines

docs/source/FileRemodelingTools.md

Lines changed: 1 addition & 1 deletion
@@ -877,7 +877,7 @@ The resulting columns are called *stopped* and *stop_failed*, respectively.
 The results of executing this *factor_column* operation on the
 [**sample remodel event file**](sample-remodel-event-file-anchor) are:
 
-````{admonition} Results of the factor_column operation on the sample data.
+````{admonition} Results of the factor_column operation on the samplepip data.
 
 | onset | duration | trial_type | stop_signal_delay | response_time | response_accuracy | response_hand | sex | stopped | stop_failed |
 | ----- | -------- | ---------- | ----------------- | ------------- | ----------------- | ------------- | --- | ---------- | ---------- |
docs/source/HedSearchGuide.md

Lines changed: 72 additions & 68 deletions
Large diffs are not rendered by default.

docs/source/WhatsNew.md

Lines changed: 5 additions & 0 deletions
@@ -1,6 +1,11 @@
 (whats-new-anchor)=
 # What's new?
 
+**June 10, 2024**: **HEDTools 0.5.0 released on PyPI.**
+> Remodeling tool validation uses JSON schema.
+> Supports `.tsv` format and HED ontology generation for HED schemas.
+> Additional visualizations and summaries.
+
 **June 10, 2024**: **HED standard schema v8.3.0 released.**
 > The [**HED schema v8.3.0**](https://doi.org/10.5281/zenodo.7876037) has just
 been released. This release introduces `hedId` globally unique identifiers for every HED element and enables mapping into a HED ontology.

src/README.md

Lines changed: 1 addition & 1 deletion
@@ -46,4 +46,4 @@ To install directly from the
 pip install git+https://github.com/hed-standard/hed-python/@master
 ```
 
-HEDTools require python 3.7 or greater.
+HEDTools require python 3.8 or greater.

src/jupyter_notebooks/README.md

Lines changed: 1 addition & 1 deletion
@@ -14,4 +14,4 @@ To install directly from the
 pip install git+https://github.com/hed-standard/hed-python/@master
 ```
 
-HEDTools require python 3.7 or greater.
+HEDTools require python 3.8 or greater.

src/jupyter_notebooks/bids/README.md

Lines changed: 6 additions & 11 deletions
@@ -18,23 +18,18 @@ validating, summarizing, and analyzing your BIDS datasets.
 
 These notebooks require HEDTools, which can be installed using `pip` or directly.
 
-**NOTE: These notebooks have been updated to use the HEDTOOLS version on the develop branch of the HedTools.
-These tools must be installed directly from GitHub until the newest version of HEDTools is released.**
-
-To install directly from the
-[GitHub](https://github.com/hed-standard/hed-python) repository:
+To use `pip` to install `hedtools` from PyPI:
 
 ```
-pip install git+https://github.com/hed-standard/hed-python/@master
+pip install hedtools
 ```
 
-
-To use `pip` to install `hedtools` from PyPI:
+To install directly from the
+[GitHub](https://github.com/hed-standard/hed-python) repository:
 
 ```
-pip install hedtools
+pip install git+https://github.com/hed-standard/hed-python/@master
 ```
 
-
-HEDTools require python 3.7 or greater.
+HEDTools require python 3.8 or greater.
 
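Whichever installation route is used, a quick post-install check that the interpreter and package meet the stated requirement can be done with the standard library alone (the PyPI distribution name is `hedtools`):

```python
import sys
from importlib.metadata import version  # available from Python 3.8 on

# Confirm the Python version requirement stated in these READMEs.
assert sys.version_info >= (3, 8), "HEDTools requires Python 3.8 or greater"
print("hedtools", version("hedtools"))  # raises PackageNotFoundError if not installed
```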

src/jupyter_notebooks/bids/extract_json_template.ipynb

Lines changed: 32 additions & 32 deletions
@@ -34,7 +34,37 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"source": [
+"import json\n",
+"from hed.tools.analysis.tabular_summary import TabularSummary\n",
+"from hed.tools.util.io_util import get_file_list\n",
+"\n",
+"dataset_root = '../../../datasets/eeg_ds003645s_hed'\n",
+"exclude_dirs = ['stimuli', 'code', 'derivatives', 'sourcedata', 'phenotype']\n",
+"skip_columns = [\"onset\", \"duration\", \"sample\"]\n",
+"value_columns = [\"stim_file\", \"response_time\"]\n",
+"output_path = None\n",
+"\n",
+"# Construct the event file dictionary for the BIDS event files\n",
+"event_files = get_file_list(dataset_root, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n",
+"\n",
+"# Construct the event file value summary and generate a sidecar template representing dataset\n",
+"value_summary = TabularSummary(value_cols=value_columns, skip_cols=skip_columns, name=\"Wakeman-Hanson test data\")\n",
+"value_summary.update(event_files)\n",
+"sidecar_template = value_summary.extract_sidecar_template()\n",
+"if output_path:\n",
+"    with open(output_path, \"w\") as f:\n",
+"        json.dump(sidecar_template, f, indent=4)\n",
+"else:\n",
+"    print(json.dumps(sidecar_template, indent=4))"
+],
+"metadata": {
+"collapsed": false,
+"ExecuteTime": {
+"end_time": "2024-06-15T15:54:13.163193Z",
+"start_time": "2024-06-15T15:53:40.611422Z"
+}
+},
 "outputs": [
 {
 "name": "stdout",
@@ -297,37 +327,7 @@
 ]
 }
 ],
-"source": [
-"import json\n",
-"from hed.tools.analysis.tabular_summary import TabularSummary\n",
-"from hed.tools.util.io_util import get_file_list\n",
-"\n",
-"dataset_root = '../../../datasets/eeg_ds003645s_hed'\n",
-"exclude_dirs = ['stimuli', 'code', 'derivatives', 'sourcedata', 'phenotype']\n",
-"skip_columns = [\"onset\", \"duration\", \"sample\"]\n",
-"value_columns = [\"stim_file\", \"response_time\"]\n",
-"output_path = None\n",
-"\n",
-"# Construct the event file dictionary for the BIDS event files\n",
-"event_files = get_file_list(dataset_root, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n",
-"\n",
-"# Construct the event file value summary and generate a sidecar template representing dataset\n",
-"value_summary = TabularSummary(value_cols=value_columns, skip_cols=skip_columns, name=\"Wakeman-Hanson test data\")\n",
-"value_summary.update(event_files)\n",
-"sidecar_template = value_summary.extract_sidecar_template()\n",
-"if output_path:\n",
-"    with open(output_path, \"w\") as f:\n",
-"        json.dump(sidecar_template, f, indent=4)\n",
-"else:\n",
-"    print(json.dumps(sidecar_template, indent=4))"
-],
-"metadata": {
-"collapsed": false,
-"ExecuteTime": {
-"end_time": "2024-01-09T22:02:52.047144900Z",
-"start_time": "2024-01-09T22:02:51.951144900Z"
-}
-}
+"execution_count": 1
 }
 ],
 "metadata": {

src/jupyter_notebooks/bids/find_event_combinations.ipynb

Lines changed: 40 additions & 50 deletions
@@ -26,66 +26,24 @@
 },
 {
 "cell_type": "code",
-"execution_count": 3,
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"sub-002_task-FaceRecognition_events.tsv\n",
-"sub-003_task-FaceRecognition_events.tsv\n",
-"sub-004_task-FaceRecognition_events.tsv\n",
-"sub-005_task-FaceRecognition_events.tsv\n",
-"sub-006_task-FaceRecognition_events.tsv\n",
-"sub-007_task-FaceRecognition_events.tsv\n",
-"sub-008_task-FaceRecognition_events.tsv\n",
-"sub-009_task-FaceRecognition_events.tsv\n",
-"sub-010_task-FaceRecognition_events.tsv\n",
-"sub-011_task-FaceRecognition_events.tsv\n",
-"sub-012_task-FaceRecognition_events.tsv\n",
-"sub-013_task-FaceRecognition_events.tsv\n",
-"sub-014_task-FaceRecognition_events.tsv\n",
-"sub-015_task-FaceRecognition_events.tsv\n",
-"sub-016_task-FaceRecognition_events.tsv\n",
-"sub-017_task-FaceRecognition_events.tsv\n",
-"sub-018_task-FaceRecognition_events.tsv\n",
-"sub-019_task-FaceRecognition_events.tsv\n",
-"The total count of the keys is:31448\n",
-" key_counts trial_type value\n",
-"0 90 boundary 0\n",
-"1 2700 famous_new 5\n",
-"2 1313 famous_second_early 6\n",
-"3 1291 famous_second_late 7\n",
-"4 3532 left_nonsym 256\n",
-"5 3381 left_sym 256\n",
-"6 3616 right_nonsym 4096\n",
-"7 4900 right_sym 4096\n",
-"8 2700 scrambled_new 17\n",
-"9 1271 scrambled_second_early 18\n",
-"10 1334 scrambled_second_late 19\n",
-"11 2700 unfamiliar_new 13\n",
-"12 1304 unfamiliar_second_early 14\n",
-"13 1316 unfamiliar_second_late 15\n"
-]
-}
-],
 "source": [
 "import os\n",
 "from hed.tools.analysis.key_map import KeyMap\n",
 "from hed.tools.util.data_util import get_new_dataframe\n",
 "from hed.tools.util.io_util import get_file_list\n",
 "\n",
 "# Variables to set for the specific dataset\n",
-"data_root = 'T:/summaryTests/ds002718-download'\n",
+"dataset_root = '../../../datasets/eeg_ds002893s_hed_attention_shift'\n",
+"exclude_dirs = ['stimuli', 'code', 'derivatives', 'sourcedata', 'phenotype']\n",
 "output_path = ''\n",
-"exclude_dirs = ['stimuli', 'derivatives', 'code', 'sourcedata']\n",
+"exclude_dirs = ['trial', 'derivatives', 'code', 'sourcedata']\n",
 "\n",
 "# Construct the key map\n",
-"key_columns = [ \"trial_type\", \"value\"]\n",
+"key_columns = [\"focus_modality\", \"event_type\", \"attention_status\"]\n",
 "key_map = KeyMap(key_columns)\n",
 "\n",
 "# Construct the unique combinations\n",
-"event_files = get_file_list(data_root, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n",
+"event_files = get_file_list(dataset_root, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n",
 "for event_file in event_files:\n",
 "    print(f\"{os.path.basename(event_file)}\")\n",
 "    df = get_new_dataframe(event_file)\n",
@@ -103,10 +61,42 @@
 "metadata": {
 "collapsed": false,
 "ExecuteTime": {
-"end_time": "2023-10-24T20:08:40.958637400Z",
-"start_time": "2023-10-24T20:08:24.603887900Z"
+"end_time": "2024-06-15T16:02:17.144301Z",
+"start_time": "2024-06-15T16:02:14.364188Z"
 }
-}
+},
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"sub-001_task-AuditoryVisualShift_run-01_events.tsv\n",
+"sub-002_task-AuditoryVisualShift_run-01_events.tsv\n",
+"The total count of the keys is:11730\n",
+" key_counts focus_modality event_type attention_status\n",
+"0 2298 auditory low_tone attended\n",
+"1 2292 visual dark_bar attended\n",
+"2 1540 auditory dark_bar unattended\n",
+"3 1538 visual low_tone unattended\n",
+"4 585 auditory button_press nan\n",
+"5 577 auditory high_tone attended\n",
+"6 576 visual light_bar attended\n",
+"7 572 visual button_press nan\n",
+"8 384 auditory light_bar unattended\n",
+"9 383 visual high_tone unattended\n",
+"10 288 auditory hear_word attended\n",
+"11 287 visual look_word attended\n",
+"12 96 visual look_word unattended\n",
+"13 96 auditory hear_word unattended\n",
+"14 96 auditory look_word unattended\n",
+"15 96 visual hear_word unattended\n",
+"16 14 visual pause_recording nan\n",
+"17 11 auditory pause_recording nan\n",
+"18 1 nan pause_recording nan\n"
+]
+}
+],
+"execution_count": 3
 }
 ],
 "metadata": {

src/jupyter_notebooks/bids/merge_spreadsheet_into_sidecar.ipynb

Lines changed: 32 additions & 32 deletions
@@ -30,7 +30,37 @@
 },
 {
 "cell_type": "code",
-"execution_count": 1,
+"source": [
+"import os\n",
+"import json\n",
+"from hed.models import SpreadsheetInput\n",
+"from hed.tools import df_to_hed, merge_hed_dict\n",
+"\n",
+"# Spreadsheet input\n",
+"spreadsheet_path = os.path.realpath('../../../docs/source/_static/data/task-WorkingMemory_example_spreadsheet.tsv')\n",
+"filename = os.path.basename(spreadsheet_path)\n",
+"worksheet_name = None\n",
+"spreadsheet = SpreadsheetInput(file=spreadsheet_path, worksheet_name=worksheet_name,\n",
+"                               tag_columns=['HED'], has_column_names=True, name=filename)\n",
+"\n",
+"# Must convert the spreadsheet to a sidecar before merging\n",
+"spreadsheet_sidecar = df_to_hed(spreadsheet.dataframe, description_tag=False)\n",
+"\n",
+"# Use an empty dict to merge into, but any valid dict read from JSON will work\n",
+"target_sidecar_dict = {}\n",
+"\n",
+"# Do the merge\n",
+"merge_hed_dict(target_sidecar_dict, spreadsheet_sidecar)\n",
+"merged_json = json.dumps(target_sidecar_dict, indent=4)\n",
+"print(merged_json)"
+],
+"metadata": {
+"collapsed": false,
+"ExecuteTime": {
+"end_time": "2024-06-15T16:03:32.787320Z",
+"start_time": "2024-06-15T16:03:32.760819Z"
+}
+},
 "outputs": [
 {
 "name": "stdout",
@@ -107,37 +137,7 @@
 ]
 }
 ],
-"source": [
-"import os\n",
-"import json\n",
-"from hed.models import SpreadsheetInput\n",
-"from hed.tools import df_to_hed, merge_hed_dict\n",
-"\n",
-"# Spreadsheet input\n",
-"spreadsheet_path = os.path.realpath('../../../docs/source/_static/data/task-WorkingMemory_example_spreadsheet.tsv')\n",
-"filename = os.path.basename(spreadsheet_path)\n",
-"worksheet_name = None\n",
-"spreadsheet = SpreadsheetInput(file=spreadsheet_path, worksheet_name=worksheet_name,\n",
-"                               tag_columns=['HED'], has_column_names=True, name=filename)\n",
-"\n",
-"# Must convert the spreadsheet to a sidecar before merging\n",
-"spreadsheet_sidecar = df_to_hed(spreadsheet.dataframe, description_tag=False)\n",
-"\n",
-"# Use an empty dict to merge into, but any valid dict read from JSON will work\n",
-"target_sidecar_dict = {}\n",
-"\n",
-"# Do the merge\n",
-"merge_hed_dict(target_sidecar_dict, spreadsheet_sidecar)\n",
-"merged_json = json.dumps(target_sidecar_dict, indent=4)\n",
-"print(merged_json)"
-],
-"metadata": {
-"collapsed": false,
-"ExecuteTime": {
-"end_time": "2024-01-10T12:44:41.634832500Z",
-"start_time": "2024-01-10T12:44:39.230433200Z"
-}
-}
+"execution_count": 2
 }
 ],
 "metadata": {
