Commit

Merge pull request #8 from HealthBioscienceIDEAS/6-final-review-of-workshop-content

6 final review of workshop content
davecash75 authored Jul 8, 2024
2 parents 95c03fb + 61ca955 commit 4f0d69f
Showing 6 changed files with 293 additions and 163 deletions.
Binary file added episodes/fig/aic_smri_tissue_seg_check.png
85 changes: 66 additions & 19 deletions episodes/imaging-data-structure-and-formats.Rmd
aux_file OAS30015_MR_d2004

Let's look at the most important fields:

* **Data type** (`data_type`): Note that some images (`sub-OAS30015_T1w`) are
of _integer_ datatype, while others (`sub-OAS30015_T1w_brain_pve_0`) are of
_floating point_ datatype. Integer means that the intensity values can only
take on whole numbers (no fractions); raw image data is normally of this type.
Floating point means that intensity values can be fractional; applying most
statistical processing algorithms to image data produces images of floating
point type.
* **Image dimension** (`dim1`, `dim2`, `dim3`): the number of voxels in the
image in the x, y, z dimensions. This means that the file contains a cube of
imaging data with `dim1` columns, `dim2` rows, and `dim3` slices.
* **Image resolution (voxel size)** (`pixdim1`, `pixdim2`, `pixdim3`): the
size (in mm) that each voxel represents in the x, y, z dimensions.

_As an example to understand the difference between image dimension and image
resolution, an MRI of a fruit fly or an elephant could contain 256 slices
(same `dim3` value), but one image would have to represent a much larger size
in the real world than the other (different `pixdim3`)._
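The fruit fly/elephant example can be made concrete with a few lines of Python
(a sketch with invented voxel sizes, not values from a real scan):

```python
# Field of view along one axis = number of voxels (dim) x voxel size (pixdim).
def field_of_view(dim, pixdim):
    """Physical extent in mm covered by `dim` voxels of `pixdim` mm each."""
    return dim * pixdim

# Both hypothetical scans have the same dim3 (256 slices)...
fly_extent = field_of_view(256, 0.01)      # 0.01 mm slices -> 2.56 mm
elephant_extent = field_of_view(256, 2.0)  # 2 mm slices    -> 512 mm

# ...but they span very different real-world sizes (different pixdim3).
print(fly_extent, elephant_extent)
```

In general, multiplying `dim` by `pixdim` along each axis gives the image's
physical field of view.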

::::::::::::::::::::::::instructor
If the voxel dimension is the same in all directions (e.g. 1x1x1 mm) we talk
about _isotropic_ voxels. Having images with isotropic (or with very similar
voxel size in the 3 directions) is desirable to perform reliable quantitative
analyses.
::::::::::::::::::::::::

* **Affine transformation** (`qform`): this field encodes a transformation or
mapping that tells us **how to convert the voxel location (i,j,k) to the
real-world coordinates (x,y,z)** (i.e. the coordinate system of the MRI scanner
in which the image was acquired). The real-world coordinate system tends to be
defined according to the patient: the x-axis tends to run from patient left to
patient right, the y-axis from anterior to posterior, and the z-axis from the
top to the bottom of the patient.
This mapping is very important, as this information will be needed to
correctly visualize images and also to align them later.
![](fig/coordinates_affine.png){alt="Coordinate systems"}
Figure from [Slicer](https://slicer.readthedocs.io/en/latest/user_guide/coordinate_systems.html)
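To make this mapping concrete, here is a minimal NumPy sketch of applying a
`qform`-style affine. The matrix below is invented for illustration; a real
affine comes from the image's NIfTI header (as shown by `fslinfo`):

```python
import numpy as np

# Hypothetical affine: 1 mm isotropic voxels, with the real-world origin
# shifted so that voxel (90, 126, 72) maps to (0, 0, 0) in scanner space.
affine = np.array([
    [1.0, 0.0, 0.0,  -90.0],
    [0.0, 1.0, 0.0, -126.0],
    [0.0, 0.0, 1.0,  -72.0],
    [0.0, 0.0, 0.0,    1.0],
])

def voxel_to_world(affine, ijk):
    """Map a voxel index (i, j, k) to real-world coordinates (x, y, z) in mm."""
    i, j, k = ijk
    x, y, z, _ = affine @ np.array([i, j, k, 1.0])
    return (x, y, z)

print(voxel_to_world(affine, (90, 126, 72)))   # maps to the origin
print(voxel_to_world(affine, (100, 126, 72)))  # 10 voxels along i -> 10 mm along x
```

This is the same kind of 4x4 matrix that NIfTI stores in its qform/sform
fields.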

An alternative command to `fslinfo` is `fslhd`, which prints the full NIfTI
header, including many more fields than the data type, dimensions, and
resolution summarised by `fslinfo`.

## Neuroimaging data analysis

### Generic blueprint of a neuroimaging study
The steps to conduct a neuroimaging study are very similar to any other scientific experiment. As we go through the workshop today, think about where a certain analysis or tool falls in this generic pipeline:
The steps to conduct a neuroimaging study are very similar to any other
scientific experiment. As we go through the workshop today, think about where a
certain analysis or tool falls in this generic pipeline:

| Step | Aim | Challenges and considerations |
|---|------|---------|
For a full overview of what FSLeyes can do, take a look at the FSLeyes user guide.
Assuming you are still in the `~/data/ImageDataVisualization` directory,
start FSLeyes by typing in the terminal:
```bash
fsleyes &
```

Expand Down Expand Up @@ -584,7 +611,8 @@ Here are the screenshots you should see:
::::::::::::::::::::::
:::::::::::::::::::::::::::::::::

For more information about the atlases available please refer to the
[FSL Wiki](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Atlases).

Quit FSLeyes when you have finished looking at the atlases.

Now let's have a look at them in FSLeyes:
```bash
fsleyes sub-OAS30003_T1w.nii.gz sub-OAS30003_FLAIR.nii.gz
```

Change the intensity range for both images to be between 0 and 1000.
Show/hide images with the eye button
( ![](fig/eye_icon.png){alt="eye icon" height='24px'} ),
the overlay list.
::::::::::::::::::::::::::

:::::::::::::::::::::::: solution
*Do the T1 and the FLAIR have the same dimension?*

**No.** Using `fslhd`, we can see that the dimensions (`dim1`, `dim2`, and
`dim3`) of the T1 are 176 x 240 x 161, while the dimensions of the FLAIR
image are 256 x 256 x 35.

*Do the T1 and the FLAIR have the same resolution?*

**No.** From the same `fslhd` output, the resolution can be found in the
fields `pixdim1`, `pixdim2`, and `pixdim3`. For the T1 the resolution is
1.20 x 1.05 x 1.05 mm; for the FLAIR it is 0.859 x 0.859 x 5.00 mm.

*Do the T1 and the FLAIR have the same orientation?*

**No.** In the bottom-right panel you should see the warning:
“Images have different orientations/fields of view”.

*What brain characteristics are more visible in the T1w and which are more
visible on FLAIR?*

On T1w, grey and white matter are more easily distinguishable. On FLAIR,
brain lesions (white matter hyperintensities) are more clearly visible.

:::::::::::::::::::::::::::::::::
:::::::::::::::::::::::::::::::::

FSLeyes manual: https://open.win.ox.ac.uk/pages/fsl/fsleyes/fsleyes/userdoc/inde
::::::::::::::::::::::::::::::::::::: keypoints

- Images are sheets or cubes of numbers.
- Medical image data is typically stored in DICOM or NIfTI format; both include
a header that contains information on the patient and/or the image characteristics.
- An affine transformation maps the voxel location to real-world coordinates.
- Medical image viewers allow you to navigate an image, adjust contrast, and
localise brain regions with respect to an atlas.

::::::::::::::::::::::::::::::::::::::::::::::::

115 changes: 63 additions & 52 deletions episodes/pet-imaging.Rmd
the Source Image and Other Image! As such, we will first create a safe copy of our
images before running the Coregistration module).**

1. Create copies of the smoothed late-frame SUM image and the 4D pib image.
a. In the terminal, create a new directory called “safe” in your working directory.

```bash
mkdir safe
```
a. Copy the `ssub001_pib_SUM50-70min.nii` and `sub001_pib.nii` images to the safe
directory using the cp command in the terminal.

```bash
specific tracer binding to beta-amyloid plaques.
- **Which regions have the highest density of amyloid plaques?**
::::::::::::::::::::::::::::

## Stretch Exercises
If you have time, try the following challenges to test your knowledge.

:::::::::::::::::::: challenge
### SUVR versus DVR images
We have pre-processed the PiB image using a different image
pipeline that outputs distribution volume ratio (DVR) images instead of SUVR.
These are located in the folder `~/data/PETImaging/ProcessedPiBDVR` in the file
`cghrsub001_pib_DVRlga.nii`.
Compare the DVR image with the SUVR image you created in the tutorial.

*How are they similar and how are they different?*

:::::::::::::::::::: hint
Pay close attention to the display settings for the window and colormap.
::::::::::::::::::::
::::::::::::::::::::

:::::::::::::::::::: challenge
### Create a Tau PET SUVR image
You have created a SUVR image for PiB, which used a dynamic acquisition wherein
the scan started at the same time as tracer injection. Now see if you can
repeat the relevant steps above to create a SUVR image for the MK-6240 scan.


::::::::::::::::::::::::: solution
You’ll need to look at the .json file for the tau PET NIfTI file; this
contains the key timing and framing information needed to determine which
frames to SUM to generate the SUVR image. The most commonly used MK-6240 SUVR
windows are 70-90 min or 90-110 min post-injection. For most tau tracers, the
inferior cerebellum is a valid reference region. If you run out of time and
would like to view an MK-6240 SUVR image, you can view the pre-processed
images in `~/data/PETImaging/ProcessedTutorial`.
::::::::::::::::::::::::::::::::

::::::::::::::::::::::::::::::::

## Additional steps
Expand All @@ -427,71 +438,71 @@ the scope of this tutorial. We will use the 4D PiB data and SPM12 to perform int
will modify our approach to account for differences in PET frame duration and noise.

1. View the problem
a. In the previous tutorial, we created SUM images of the first and last 20 minutes of the
PiB acquisition. Load these images in FSLeyes. Recall that you’ll need to use the 50-70
SUM image in the /safe directory that did not have the coregistration transformation
matrix written to the NIfTI header. If you have not completed the tutorial, you can load
the following images that have been previously processed:
* `/home/as2-streaming-user/data/PET_Imaging/ProcessedTutorial/ssub001_pib_SUM0-20min.nii`
* `/home/as2-streaming-user/data/PET_Imaging/ProcessedTutorial/safe/ssub001_pib_SUM50-70min.nii`
a. Set the threshold for the min and max window to 0 to 35,000 for the 0-20 min SUM
image and 0 to 20,000 for the 50-70 min SUM image.
a. Toggle the top image on and off using the eye icon in the Overlay list. Notice the slight
rotation of the head in the sagittal plane between the early and late frames. This is due
to participant motion during the scan acquisition and what we are going to attempt to
correct using interframe realignment.
a. Close FSLeyes.
2. Launch SPM if not already opened
3. Smooth all frames of the 4D data – smoothing prior to realignment will improve the registration
by reducing voxel-level noise.
a. Select the Smooth module from SPM
a. Add all frames for the 4D PiB image `sub001_pib.nii` to the Images to smooth
a. Set the FWHM to an isotropic 4 mm kernel (4 4 4).
a. Set the datatype to FLOAT32
a. Press the green play button to execute the smoothing operation
a. Close the smooth module in SPM
a. View the smoothed 4D PiB image in FSLeyes.
4. SUM PET frames across the 4D acquisition
a. For interframe realignment, we typically create an average image of the entire 4D time
series to use as a reference image to align each frame. Because the PiB framing
sequence has different frame durations, we cannot simply average the frames as we
would in fMRI, but instead need to create a SUM image of the entire 70-minute
acquisition using a weighted average.
a. Open the `ImCalc` module in SPM.
a. Specify all frames of the smoothed 4D PiB image (ssub001_pib.nii) as Input Images. Be
sure to maintain the frame order on the file input.
a. Name the output file `ssub001_pib_SUM0-70min.nii`
a. For the expression, specify an equation for a frame duration-weighted average of all
frames. Recall that the frame durations are stored in the .json file.

```matlab
(i1*2 + i2*2 + i3*2 + i4*2 + i5*2 + i6*5 + i7*5 + i8*5 + i9*5 + i10*5 + i11*5 + i12*5 + i13*5 + i14*5 + i15*5 + i16*5 + i17*5)/70
```
a. Use FLOAT32 for the Data Type
a. Run the module using the green play arrow.
a. Close the SPM `ImCalc` module.
5. Perform Interframe alignment using SPM12 realign
a. Open the Realign: Estimate and Reslice module in SPM12
a. Select data and click Specify
a. Select Session and click Specify
i. Here we will use the SUM 0-70 min image as the reference for realignment. This is done by selecting this file first in the session file input list.
i. Select the SUM 0-70 min PiB image, and then specify the entire smoothed 4D time series by inputting each of the 17 frames.
i. Use default settings for all parameters except the following
- `Estimation Options-Smoothing (FWHM)`: 7
- `Estimation Options-Interpolation`: Trilinear
- `Reslice Options-Resliced Images`: Images 2..n
- `Reslice Options-Interpolation`: Trilinear
i. Run the module by clicking the green play icon
a. Once the process has completed, the SPM graphics window will output the translation
and rotation parameters used to correct for motion in each frame. Note these are small
changes, typically <1-2 mm translation and <2 degrees rotation.
a. Close the SPM realign module
a. View the resultant 4D image in FSLeyes (`rssub001_pib.nii`) using a display min and max
of 0 to 30,000. Navigate in the viewer to view the sagittal plane just off mid-sagittal.
Place your crosshairs at the most inferior part of the orbitofrontal cortex and advance
through the PET frames. How did the realignment perform? Are you still seeing rotation
in the sagittal plane between early and late frames?
a. Now change the max window to 100 to saturate the image and view the outline of the
head. Scroll through the frames to look for any residual head motion. To see
the difference before and after realignment, load the smoothed (unrealigned) 4D
image and saturate it in the same way to compare the head motion between frames.
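The translation and rotation parameters reported by Realign describe
rigid-body transforms. As a rough sketch of what such a transform looks like
(plain NumPy, not SPM's actual implementation; the parameter values are
invented):

```python
import numpy as np

def rigid_body(theta_deg, translation_mm):
    """4x4 homogeneous matrix: rotation about the x-axis by theta_deg
    (a pitch in the sagittal plane) followed by a translation in mm."""
    t = np.radians(theta_deg)
    rot = np.array([
        [1.0, 0.0, 0.0],
        [0.0, np.cos(t), -np.sin(t)],
        [0.0, np.sin(t), np.cos(t)],
    ])
    m = np.eye(4)
    m[:3, :3] = rot
    m[:3, 3] = translation_mm
    return m

# Motion on the scale Realign typically reports: ~1.5 degrees of pitch
# and ~1 mm of translation.
m = rigid_body(1.5, [0.5, -0.8, 1.0])

# Applying the transform to a point 50 mm anterior of the origin shows how
# even a small rotation moves voxels far from the rotation axis by a few mm.
point = np.array([0.0, 50.0, 0.0, 1.0])
print(m @ point)
```

Realignment estimates the six parameters of such a transform for each frame
and then resamples the frames accordingly.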