
Meta Issue: Image-space analysis #405

Open · 3 of 7 tasks
rob-luke opened this issue Nov 3, 2021 · 7 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

@rob-luke (Member) commented Nov 3, 2021

Describe the new feature or enhancement

With the release of MNE-Python 0.24 and MNE-NIRS v0.1.2, the core sensor-space functionality is complete (many improvements are still required, but the API and minimum functionality are present). Next, development will focus on implementing fNIRS-specific image-space analysis. This meta issue will track high-level progress toward this goal.

An fNIRS-specific solution is required, rather than simply applying the EEG/MEG techniques in MNE-Python. The analysis should be based on methods from these researchers:

Describe your proposed implementation

This will be expanded as a more concrete plan emerges, but some high level steps are:

Topics that need further thought

  • How to do statistics correctly in image space?
  • At what stage in the pipeline is it most appropriate to move to image space? E.g. some groups do the stats at the sensor level (GLM) and then project the result to image space; other groups transition to image space earlier and then run the stats in image space.
    • If we move to image space earlier in the pipeline then we can probably lean more heavily on nilearn, which has well-established analysis procedures we can leverage and modify to be fNIRS specific (this is the approach I took for the GLM analysis and am very happy with the choice; see the nilearn sketch after this list)
  • Plotting
  • DOT specific visualisations and quality checks
    • What extra plots etc. are required?
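
For concreteness, a minimal sketch of what leaning on nilearn could look like once data are in image space (the array shapes are hypothetical stand-ins; `run_glm` is the same nilearn routine the sensor-space GLM already wraps):

```python
import numpy as np
from nilearn.glm.first_level import run_glm

rng = np.random.default_rng(0)
Y = rng.standard_normal((300, 500))  # hypothetical (n_times, n_nodes) data
X = rng.standard_normal((300, 3))    # hypothetical design matrix

labels, glm_estimates = run_glm(Y, X, noise_model='ar1')
# glm_estimates maps each AR-noise label to a RegressionResults object
# holding the betas (.theta) for the nodes that share that label.
```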

Additional comments

Please add comments below with specific requirements that you may have for image-space analysis, or useful resources, relevant papers or source code, use-case examples, etc. I will spend quite some time reading and formulating a plan before diving into coding (code is the easy part).

Note: there are likely to be several API and possibly backend changes on this path (for context, the GLM API took about 3-4 iterations before I was happy). So please provide feedback at any stage, as I am always happy to improve all aspects of the project.

@rob-luke added the enhancement and help wanted labels Nov 3, 2021
@rob-luke self-assigned this Nov 3, 2021
@rob-luke pinned this issue Nov 3, 2021
@larsoner (Member) commented Nov 4, 2021

See DOT-HUB

This is GPL-3, so we'd have to ask permission to relicense under BSD 3-Clause. I didn't check other libs, but keep this in mind.

Export image space analysis in a standard format for analysis/visualisation etc in any other program

By "image space" do you mean "on the brain surface" and/or "in brain volumetric space"? Either of these are considered "source spaces" in MNE-Python, and what we're really talking about at this point is "source space analysis", and we already have lots tools for this sort of stuff (e.g., label extraction, spatio-temporal clustering, etc.).

In other words, to me the high-level view is that:

  • In MEG/EEG, you compute a forward and inverse to go from sensor to source space data
  • In sEEG/ECoG you should do the same thing, but a lot of times people just use some simple spatial proximity to go from sensor to source space data (e.g., stc_near_sensors)
  • In fNIRS you don't compute a forward and inverse (reasons unknown to me, haven't read anything about why), but instead use some photon migration / jacobian business to go from sensor to source space data

In all of these cases, at the end of this process you should end up with an STC object in MNE-Python. Then you can use all of our tools for visualization, processing, and statistics as you wish. With this in mind, I think many of the questions above are immediately answered:
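
To make that concrete, a minimal sketch of wrapping reconstructed image-space data in an STC (the vertex numbers and data here are made-up placeholders):

```python
import numpy as np
import mne

# Hypothetical reconstruction output: one value per surface vertex per
# time point; vertices gives the left/right hemisphere vertex numbers
# that the rows of `data` correspond to.
vertices = [np.arange(1000), np.arange(1000)]  # placeholder vertex ids
data = np.random.randn(2000, 50)               # (n_vertices, n_times)
stc = mne.SourceEstimate(data, vertices=vertices, tmin=0.0, tstep=0.1,
                         subject='fsaverage')
```

From there, everything that accepts an STC (plotting, label extraction, clustering) should just work.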

How to do statistics correctly in image space?

See any tutorial / function we have for this sort of thing already. Spatio-temporal clustering, FDR, label extraction then FDR, etc. are all (non-exhaustive) options. Basically you have all established fMRI and M/EEG statistical tools to choose from at this point, I think (subject to meeting the assumptions of those methods, which is likely, especially for something with very few assumptions like spatio-temporal clustering).
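
For instance, a sketch with hypothetical group data of shape (n_subjects, n_times, n_vertices); the clustering option assumes a SourceSpaces object `src` is available:

```python
import numpy as np
from scipy import stats
from mne.stats import fdr_correction

X = np.random.randn(12, 20, 2000)  # hypothetical (subjects, times, vertices)

# Option A: mass-univariate t-tests, then FDR correction
t_vals, p_vals = stats.ttest_1samp(X, 0, axis=0)
reject, p_fdr = fdr_correction(p_vals, alpha=0.05)

# Option B: spatio-temporal clustering (needs a source-space adjacency)
# from mne.stats import spatio_temporal_cluster_1samp_test
# adjacency = mne.spatial_src_adjacency(src)
# t_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_1samp_test(
#     X, adjacency=adjacency, n_permutations=1000)
```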

At what stage in the pipeline is it most appropriate to move to image space?

If the sensor-to-source space transformation is linear (fingers crossed?) then it doesn't matter. If it's nonlinear, then you probably need to do the sensor-to-source space transformation, then apply your GLM :(
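
A quick numpy check of why linearity makes the ordering a wash for an ordinary least-squares GLM (`M` here is a made-up linear sensor-to-source mapping):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))   # design matrix (times x regressors)
Y = rng.standard_normal((200, 64))  # sensor data   (times x channels)
M = rng.standard_normal((500, 64))  # linear sensor -> source mapping

glm_then_project = M @ (np.linalg.pinv(X) @ Y).T      # fit GLM, then project
project_then_glm = (np.linalg.pinv(X) @ (Y @ M.T)).T  # project, then fit GLM
assert np.allclose(glm_then_project, project_then_glm)
```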

Plotting

Look at our M/EEG source space examples and in particular things like:
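
e.g., a minimal sketch along these lines (reusing the `stc` from the sketch above; fetching fsaverage just ensures there are surfaces to draw on):

```python
import os.path as op
import mne

fs_dir = mne.datasets.fetch_fsaverage()  # downloads fsaverage if needed
brain = stc.plot(subject='fsaverage', hemi='both',
                 subjects_dir=op.dirname(fs_dir), initial_time=0.1)
```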

@larsoner (Member) commented Nov 4, 2021

It would also be great if the source space / image space data were in units of Am like you get from M/EEG inverse models. But even if this isn't what you get (e.g., you get some other "activation" measure) we can still consider using all the existing fMRI and M/EEG tools as above I think.

@RJCooperUCL commented

Hi,

Just to provide a brief response: I am happy for any code from our toolbox to be used if this is acknowledged in the source code somewhere. Happy to do what is needed on the license front.

The Jacobian @rob-luke describes is a linearised forward operator, which is then inverted, completely analogously to EEG. The Jacobian is calculated using a model of light transport in an FEM or voxel space.
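
(In symbols, notation mine: writing $\Delta y$ for the change in log-intensity data and $\Delta\mu_a$ for the change in absorption, the linearised model is

$$\Delta y \approx J\,\Delta\mu_a,$$

and image reconstruction amounts to a regularised inversion of $J$.)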

We talk about ‘image’ space rather than source space because there are not discrete sources that generate our measurements, but a distributed, continuous image of haemoglobin concentrations. Perhaps this is just nomenclature. Images are in molar concentration, so definitely not in Am units.

Statistical handling in the image/source space is somewhat complicated by the spatially varying sensitivity of fNIRS measurements. This means different locations in the image have different statistical properties. This is likely a solved problem; however, I am just not sure what the best solution is.

@larsoner (Member) commented Nov 4, 2021

We talk about ‘image’ space rather than source space because there are not discrete sources that generate our measurements, but a distributed, continuous image of haemoglobin concentrations. Perhaps this is just nomenclature. Images are in molar concentration, so definitely not in Am units.

At least on the viz front I don't think it will matter much. "(Time-varying) values defined on surfaces / volumetric grids" is what all of our 3D viz is geared toward, whether that be currents, noise-normalized estimates, statistical t-values, or any other arbitrary color-mappable thing.

the jacobian @rob-luke describes is a linearized forward operator, which is then inverted, completely analogously to EEG... different locations in the image have different statistical properties. This is likely a solved problem, however, I am just not sure what the best solution is.

In M/EEG, forward sensitivity varies as a function of space as well. To some extent the different inverse methods (depth weighting, noise normalization, etc.) help account for this in different ways. Maybe similar techniques could be used with fNIRS data. I assume people have thought about this sort of thing, though. It would certainly be cool if we could just pack this Jacobian into a mne.Forward object and have it work with our suite of inverse methods. TBD whether or not it's a valid thing to do :)

But in any case, even with the techniques to combat sensitivity differences I doubt for M/EEG we ever totally achieve statistical uniformity anyway, so we're at least in a somewhat similar boat!

@rob-luke (Member, Author) commented Nov 5, 2021

Hi @RJCooperUCL and @larsoner, thank you both for your comments (and I am quite pleased you two have now [at least virtually] met). I expect I will lean on both of you quite heavily during the next steps to implement fNIRS image/source-space analysis. There is a very nice complementary skill set involved here.

Just to provide a brief response- I am happy for any code from our toolbox to be used if this is acknowledged in the source code somewhere. Happy to do what is needed on the license front.

Thanks @RJCooperUCL! And we will definitely have this acknowledged. Before we merge any code I will highlight to you where the acknowledgment will be (in the code, the documentation, the website, etc.; details not figured out yet) and get your feedback/approval before moving forward.

We talk about ‘image’ space rather than source space

I have found that many similar concepts (of course with nuances) in different fields use domain-specific terminology. I will attempt to use the fNIRS-specific language where possible and also refer to the M/EEG analogues at first definition. Hopefully this will be correct for fNIRS researchers, but also facilitate findability for M/EEG users, Google searches, etc.

At least on the viz front I don't think it will matter much. "(Time-varying) values defined on surfaces / volumetric grids" is what all of our 3D viz is geared toward, whether that be currents, noise-normalized estimates, statistical t-values, or any other arbitrary color-mappable thing.

This is the news I wanted to hear! The plan is to utilise as much core MNE-Python code as possible. We may need to do some tweaking along the way for small domain specific details, but based on our previous fNIRS integration, I am certain we can do this with a minimum-touch approach.

It would certainly be cool if we could just pack this jacobian into a mne.Forward object and have it work with our suite of inverse methods

This is where I am planning to start once I have figured out how to generate the Jacobian from Toast++ or NIRFAST. I will ping you when I have made some progress (I go on leave today, so probably not much progress in the next few weeks).

@rob-luke (Member, Author) commented Dec 8, 2021

@dboas @sstucker @mayucel please see above for initial thoughts on this topic. Of particular interest may be the discussion between MNE and fMRI developers on a consistent surface-data API format: https://nipy.discourse.group/c/surface-api/10 and maybe the existing source visualisation examples: https://mne.tools/dev/auto_tutorials/inverse/60_visualize_stc.html

@samuelpowell commented

I've just had a conversation with @rob-luke on this topic and would like to contribute.

General

As you've discussed, the first question is where in the analysis you move from channel to image space. The options are:

  1. Undertake 'standard' fNIRS analysis as performed in MNE-NIRS, and then use a light-transport model to invert the channel-wise data back into image space. There are some variations here; for example, there are reasons for reconstructing absorption coefficients first, before performing spectroscopy in image space. This is the approach that @RJCooperUCL takes.

  2. Move to the image space earlier in the pipeline, performing filtering, GLM etc., in the image space.

There are some really good points raised above which impact how you want to approach this:

At what stage in the pipeline is it most appropriate to move to image space?

If the sensor-to-source space transformation is linear (fingers crossed?) then it doesn't matter. If it's nonlinear, then you probably need to do the sensor-to-source space transformation, then apply your GLM :(

As @RJCooperUCL noted, we're reconstructing a parameter of a model, rather than its source. Light transport is linear with respect to the sources, but strongly non-linear with respect to the parameters. Most practical approaches assume a linearisation of the problem around an assumed baseline. Linearisation ameliorates a lot of experimental problems (such as optical coupling), but prevents true quantitation.

Assuming a linearised approach (the alternative is another story entirely), the problem with moving the transform through the pipeline is that the inverse (the mapping of changes in the data to changes in the parameters of interest) is regularised. This is necessary owing to the ill-posedness of the inverse problem, which leads on to...
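
A minimal numerical sketch of such a regularised inverse (zeroth-order Tikhonov in its underdetermined form; the shapes and the scaling of the regularisation parameter are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_nodes = 64, 5000
J = rng.standard_normal((n_channels, n_nodes))  # stand-in Jacobian
dy = rng.standard_normal(n_channels)            # channel-wise data changes

lam = 0.01                                # regularisation hyper-parameter
JJt = J @ J.T
alpha = lam * np.trace(JJt) / n_channels  # scale alpha to the operator
dx = J.T @ np.linalg.solve(JJt + alpha * np.eye(n_channels), dy)
```

Every choice of `lam` (and of the regularising operator) yields a different, equally 'reasonable' mapping, which is the point made below.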

How to do statistics correctly in image space?

See any tutorial / function we have for this sort of thing already. Spatio-temporal clustering, FDR, label extraction then FDR, etc. are all (non-exhaustive) options. Basically you have all established fMRI and M/EEG statistical tools to choose from at this point I think (subject to meeting the assumptions of those methods, which is likely especially for something with very few assumptions like spatio-temporal clustering).

The physics of fNIRS / DOT (a lossy diffusion process) is such that the forward operator is smoothing. This is why inversion requires regularisation. Consequently, the number of degrees of freedom in the image space is different to what would be assumed for the same image in, e.g., MRI. This aspect of the statistics is not my area of expertise, but this problem has been explored. See, for example, NIRS-SPM: Statistical parametric mapping for near-infrared spectroscopy. I assume you have similar approaches for EEG.

So to link these two things together: yes, one can build a linear mapping that can be moved through the pipeline at one's discretion, but there are an infinite number of different mappings you can reasonably choose. In a hand-wavy sense, you're implicitly filtering the back-projection of your data into the image space before you even begin. The mapping you select depends upon the linearisation point, the prior knowledge you include in the regularisation (e.g. "I expect piecewise-constant changes"), and (say, from a Bayesian perspective) the covariance of the data.
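
(In that Bayesian reading, with prior covariance $R$ on the parameter changes and noise covariance $C$ on the data, the standard linear-Gaussian MAP mapping is, notation mine:

$$\widehat{\Delta x} = R\,J^{\top}\left(J R J^{\top} + C\right)^{-1}\Delta y,$$

so each choice of $R$ and $C$ is one of those infinitely many mappings.)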

Practical

All that technical nonsense to one side, it's still possible to build something 'reasonable'. I'd suggest that it would be prudent to start by exploring approach (1); anything more advanced will require the same tooling anyway.

Assuming the goal is a simple model in which one goes from (data in) -> (parameters out), the following will be required:

  1. a forward model and a smattering of linear algebra
  2. baseline optical properties
  3. a model of the geometry (e.g. a mesh)
  4. definition of the source and detector locations

Excuse my naivety with MNE(-NIRS) here... but if we assume that I've loaded a big SNIRF file, and it's all been magically registered to a generic head model, can we get (3) and (4) from MNE? If so:

  • would the model of (3) be available as a volumetric tetrahedral mesh?
  • is the above the same for all the models that are included in MNE, or do they vary?
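
On (4): a sketch of how MNE already exposes the optode positions. For fNIRS channels MNE stores the source position in `loc[3:6]` and the detector position in `loc[6:9]` (head coordinates, in metres); the filename below is hypothetical:

```python
import mne

raw = mne.io.read_raw_snirf('measurement.snirf')  # hypothetical file
for ch in raw.info['chs']:
    src_pos = ch['loc'][3:6]  # source (emitter) position
    det_pos = ch['loc'][6:9]  # detector position
```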

We can take (2) from the literature, and I can help with (1). To determine the appropriate solution to (1):

  • I assume you will want a Python wrapper to the implementation?
  • Considering the above, is there an existing API to which we can reasonably conform? @larsoner you mentioned an existing interface for the forward model, would it be desirable for architectural reasons to try and slot in there?
  • If a new API is required, do you have a feel for what this would look like?
