---
jupyter:
  jupytext:
    notebook_metadata_filter: all,-language_info
    split_at_heading: true
    text_representation:
      extension: .Rmd
      format_name: rmarkdown
      format_version: '1.2'
    jupytext_version: 1.13.7
  kernelspec:
    display_name: Python 3
    language: python
    name: python3
---
# Convolving with the hemodynamic response function
Start with our usual imports:
```{python}
import numpy as np
import matplotlib.pyplot as plt
```
## Making a hemodynamic response function
We will start by making a *function* that returns an estimate of the blood
flow signal at any time $t$ after the onset of a neural impulse.
We could base this on measured data, such as the responses reported in [this
paper](https://www.sciencedirect.com/science/article/pii/S0960982208004429#fig3),
but measured data contain noise that we do not want to preserve.
Instead, the approach we follow here is to build the function from mathematical
curves whose values closely match the shape of the measured response over time.
## Using scipy
*Scipy* is a large library of scientific routines that builds on top of numpy.
You can think of numpy as being a subset of MATLAB, and numpy + scipy as being
roughly equivalent to MATLAB plus the MATLAB toolboxes.
Scipy has many sub-packages, for doing things like reading MATLAB `.mat`
files (`scipy.io`) or working with sparse matrices (`scipy.sparse`). We
are going to be using the functions and objects for working with statistical
distributions in `scipy.stats`:
```{python}
import scipy.stats
```
`scipy.stats` contains objects for working with many different
distributions. We are going to be working with `scipy.stats.gamma`, which
implements the [gamma distribution](https://en.wikipedia.org/wiki/Gamma_distribution).
```{python}
from scipy.stats import gamma
```
In particular we are interested in the [probability density function](https://en.wikipedia.org/wiki/Probability_density_function) (PDF) of the
gamma distribution.
Because this is a function, we need to pass it an array of values at which to
evaluate it.
We can also pass various parameters which change the shape, location and width
of the gamma PDF. The most important is the first parameter (after the input
array) known as the [shape parameter](https://en.wikipedia.org/wiki/Shape_parameter) ($k$ in the
[wikipedia page on gamma distributions](https://en.wikipedia.org/wiki/Gamma_distribution)).
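As a quick illustration (a sketch, not part of the original exercise; the
parameter values here are arbitrary), evaluating the PDF at a few sample points
shows how the shape, `loc` and `scale` arguments move and stretch the curve:
```{python}
# A few sample points at which to evaluate the PDF (arbitrary values).
sample_x = np.array([1, 2, 4, 8])
# Shape parameter k=2, with the default location (0) and scale (1).
print(gamma.pdf(sample_x, 2))
# The same shape, shifted right by 1 and stretched to twice the width.
print(gamma.pdf(sample_x, 2, loc=1, scale=2))
```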
First we choose some x values at which to sample from the gamma PDF:
```{python}
x = np.arange(0, 25, 0.1)
```
Next we plot the gamma PDF for shape values of 2, 4, 6, 8, 12.
```{python}
plt.plot(x, gamma.pdf(x, 2), label='k=2')
plt.plot(x, gamma.pdf(x, 4), label='k=4')
plt.plot(x, gamma.pdf(x, 6), label='k=6')
plt.plot(x, gamma.pdf(x, 8), label='k=8')
plt.plot(x, gamma.pdf(x, 12), label='k=12')
plt.legend()
```
## Constructing a hemodynamic response function
We can use these gamma functions to construct a continuous function that is
close to the hemodynamic response we observe for a single brief event in the
brain.
Our function will accept an array giving the times for which we want to
calculate the HRF, and return the values of the HRF for those times. We will
assume that the true HRF starts at zero, and gets to zero sometime before 35
seconds.
We’re going to try using the sum of two [gamma
distribution](https://en.wikipedia.org/wiki/Gamma_distribution) probability
density functions.
Here is one example of such a function:
```{python}
def hrf(times):
    """ Return values for HRF at given times """
    # Gamma pdf for the peak
    peak_values = gamma.pdf(times, 6)
    # Gamma pdf for the undershoot
    undershoot_values = gamma.pdf(times, 12)
    # Combine them
    values = peak_values - 0.35 * undershoot_values
    # Scale max to 0.6
    return values / np.max(values) * 0.6
```
```{python}
plt.plot(x, hrf(x))
plt.xlabel('time')
plt.ylabel('HRF model of signal')
```
We can evaluate the function at the times of our TRs, to get the HRF estimates at those times.
Remember, the TR is 2.5 for our example data, meaning the scans were 2.5
seconds apart.
```{python}
TR = 2.5
tr_times = np.arange(0, 30, TR)
hrf_at_trs = hrf(tr_times)
print(len(hrf_at_trs))
plt.plot(tr_times, hrf_at_trs)
plt.xlabel('time')
plt.ylabel('HRF sampled every 2.5 seconds')
```
We can use this to convolve our neural (on-off) prediction. This will give us
a hemodynamic prediction, under the linear-time-invariant assumptions of the
convolution.
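To see the idea on a small scale before using the real data, here is a toy
sketch (the input vector and kernel below are made up for illustration):
convolving a short on-off vector with a three-value kernel adds a scaled,
shifted copy of the kernel at the position of each nonzero input value:
```{python}
# Toy example: a brief "neural" on-off vector and a small made-up kernel.
toy_neural = np.array([0, 1, 0, 0, 1, 0])
toy_kernel = np.array([0.5, 1.0, 0.25])
# Each 1 in the input contributes a copy of the kernel, starting at that index.
print(np.convolve(toy_neural, toy_kernel))
```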
First we get the condition file:
```{python}
# Load the function to fetch the data file we need.
import nipraxis
# Fetch the data file.
cond_fname = nipraxis.fetch_file('ds114_sub009_t2r1_cond.txt')
# Show the file name of the fetched data.
cond_fname
```
Next we use a version of the `stimulus.py` module we worked on earlier, to
convert the onsets in the file into a neural time-course.
```{python}
from nipraxis.stimuli import events2neural
n_vols = 173
neural_prediction = events2neural(cond_fname, TR, n_vols)
all_tr_times = np.arange(173) * TR
plt.plot(all_tr_times, neural_prediction)
```
When we convolve, the output has length N + M - 1, where N is the number of values
in the vector we convolved, and M is the length of the convolution kernel
(`hrf_at_trs` in our case). For a reminder of why this is, see the [tutorial
on convolution](https://matthew-brett.github.io/teaching/on_convolution.html).
```{python}
convolved = np.convolve(neural_prediction, hrf_at_trs)
N = len(neural_prediction)  # N == n_vols == 173
M = len(hrf_at_trs) # M == 12
len(convolved) == N + M - 1
```
The extra M - 1 values arise because the HRF convolution kernel slides off the
end of the input vector. The value at index 172 in the new vector refers to time
172 \* 2.5 = 430.0 seconds, and the value at index 173 refers to time 432.5
seconds, which is just after the end of the scanning run. To retain only the
values in the new hemodynamic vector that refer to times up to (and including)
430 seconds, we can drop the last `len(hrf_at_trs) - 1 == M - 1` values:
```{python}
n_to_remove = len(hrf_at_trs) - 1
convolved = convolved[:-n_to_remove]
```
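Equivalently (a small check, not in the original), we could keep only the first
`n_vols` values of the full convolution output; this gives the same trimmed
vector:
```{python}
# Trimming the full output to the number of volumes gives the same result.
convolved_alt = np.convolve(neural_prediction, hrf_at_trs)[:n_vols]
print(np.allclose(convolved, convolved_alt))
```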
```{python}
plt.plot(all_tr_times, neural_prediction)
plt.plot(all_tr_times, convolved)
```
For future use, we might want to save our convolved time course to a numpy
text file:
```{python}
# Save the convolved time course to 6 decimal point precision
np.savetxt('ds114_sub009_t2r1_conv.txt', convolved, fmt='%2.6f')
back = np.loadtxt('ds114_sub009_t2r1_conv.txt')
# Check written data is within 0.000001 of original
np.allclose(convolved, back, atol=1e-6)
```
There is a copy of this saved file in the [Nipraxis data
collection](https://github.com/nipraxis/nipraxis-data), that we will use
later.
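If you want to fetch that stored copy rather than regenerate it, something like
the following should work, assuming the file is available under the same name
we used when saving above (an assumption, not confirmed here):
```{python}
# Assumed filename: the same name we used when saving (not confirmed above).
conv_fname = nipraxis.fetch_file('ds114_sub009_t2r1_conv.txt')
stored_conv = np.loadtxt(conv_fname)
print(np.allclose(convolved, stored_conv, atol=1e-6))
```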