*Note: this is a question rather than an issue. Please let me know if there's a better platform than this to post my question.
I am using mTRF-Toolbox with intracranial depth electrodes. It's been a huge help - great work.
I would like to look at certain channels' responses to natural speech, and specifically to phonemes. I have phonetic transcriptions (that is, the onset and offset of each phoneme) of the speech. What would be the best way to estimate a channel's response to each phoneme with mTRF-Toolbox? I was thinking of binarizing the phoneme onset timestamps and then convolving with a Gaussian filter to obtain a continuous stimulus. Does that sound reasonable? Thanks!
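A minimal sketch of the idea described above, using NumPy/SciPy for the preprocessing rather than the mTRF-Toolbox itself: place an impulse at each phoneme onset, then smooth with a Gaussian kernel to obtain a continuous stimulus vector at the neural sampling rate. The sampling rate, segment length, onset times, and kernel width below are illustrative placeholders, not values from this thread.

```python
# Sketch: binary impulse train at phoneme onsets, smoothed with a Gaussian kernel.
import numpy as np
from scipy.ndimage import gaussian_filter1d

fs = 100.0                      # sampling rate of the neural data (Hz) -- assumed
duration_s = 10.0               # length of the recording segment (s) -- assumed
onsets_s = [0.35, 0.52, 0.80]   # phoneme onset times (s) from the transcription -- placeholder values

n_samples = int(round(duration_s * fs))
stim = np.zeros(n_samples)
stim[np.round(np.array(onsets_s) * fs).astype(int)] = 1.0   # binary impulse train

# Smooth the impulses; sigma is given in samples (here ~20 ms at fs = 100 Hz)
sigma_samples = 0.020 * fs
stim_smooth = gaussian_filter1d(stim, sigma=sigma_samples)
```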
You could use impulses at the phoneme onsets, step functions spanning each phoneme, and your Gaussian-filter idea could be fine too. I personally used steps to indicate the occurrence and duration of each phoneme (see the sketch below). Your choice depends on your hypothesis or observation (e.g., the acoustic onset of a phoneme is not the same as the moment it is "perceived").
Note that these considerations apply to forward models.
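As a minimal sketch of the step-style regressor mentioned above (a vector that is 1 from each phoneme's onset to its offset, encoding both occurrence and duration), again using NumPy with illustrative onset/offset times and sampling rate rather than the toolbox's own API:

```python
# Sketch: "step" regressor that is 1 between each phoneme's onset and offset.
import numpy as np

fs = 100.0                               # neural sampling rate (Hz) -- assumed
duration_s = 10.0                        # segment length (s) -- assumed
phonemes = [(0.35, 0.41), (0.52, 0.60)]  # (onset, offset) pairs in seconds -- placeholder values

n_samples = int(round(duration_s * fs))
step_stim = np.zeros(n_samples)
for onset, offset in phonemes:
    step_stim[int(round(onset * fs)):int(round(offset * fs))] = 1.0
```

Whichever representation you choose (onset impulses, steps, or Gaussian-smoothed onsets), it can then be used as the stimulus input to the forward model, provided it is aligned with and sampled at the same rate as the neural data.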
(Please skip this part unless you have actually gone into depth by reading those papers. In relation to the aforementioned papers by Di Liberto et al. (2015, 2021), note that the measure rFS - rS (or simply FS-S) makes sense in the EEG domain, but it is less relevant if the spatial resolution is sufficient to separate acoustic vs. phonological responses. In that case, FS-S makes sense if you combine multiple electrodes.)