- Original research
- Open access
Validation of a data-driven motion-compensated PET brain image reconstruction algorithm in clinical patients using four radiotracers
EJNMMI Physics volume 12, Article number: 11 (2025)
Abstract
Purpose
Patients with dementia symptoms often struggle to limit movements during PET examinations, necessitating motion compensation in brain PET imaging to ensure the high image quality needed for diagnostic accuracy. This study validates a data-driven motion-compensated (MoCo) PET brain image reconstruction algorithm that corrects head motion by integrating the detected motion frames and their associated rigid body transformations into the iterative image reconstruction. Validation was conducted with phantom scans, healthy volunteers, and clinical patients, using four radiotracers with distinct activity distributions.
Methods
We conducted technical validation experiments of the algorithm using Hoffman brain phantom scans during a series of controlled movements, followed by two blinded reader studies assessing image quality of standard images versus MoCo images in 38 clinical patients receiving dementia scans with [18F]Fluorodeoxyglucose, [18F]N-(3-iodoprop-2E-enyl)-2beta-carbomethoxy-3beta-(4'-methylphenyl)-nortropane, or [18F]flutemetamol, and in a research group comprising 25 elderly subjects scanned with [18F]fluoroethoxybenzovesamicol.
Results
The Hoffman brain phantom study demonstrated the algorithm's capability to detect and correct even minimal movements, 1-mm translations and 1° rotations, applied to the phantom. Within the clinical cohort, where standard images were deemed suboptimal or non-diagnostic, all MoCo images were classified as having acceptable diagnostic quality. In the research cohort, MoCo images consistently matched or surpassed the standard image quality even in cases with minimal head movement, and the MoCo algorithm never led to degraded image quality.
Conclusion
The PET brain MoCo reconstruction algorithm was robust and worked well for four different tracers with markedly different uptake patterns. MoCo images markedly improved the image quality for patients who were unable to lie still during a PET examination and obviated the need for any repeat scans. Thus, the method is clinically feasible and has the potential to improve diagnostic accuracy.
Background
Positron emission tomography (PET) is clinically well-established in neurodegenerative disorders using a variety of radiotracers. Quantifying cerebral glucose metabolism using [18F]Fluorodeoxyglucose (FDG) is crucial for early and differential diagnosis of dementia. Moreover, specific PET tracers aid in a deeper understanding of the neuropathology and neurotransmitter alterations underlying neurodegenerative dementias [1, 2].
PET brain imaging often involves long acquisitions in elderly subjects, many of whom exhibit dementia symptoms and have difficulty limiting voluntary and involuntary head motion during the PET examination. In particular, movement disorders such as Lewy body dementia or Parkinsonism are associated with motor symptoms, which can make it difficult for patients to maintain a steady head position throughout a PET examination. Often, the patient's head motion can exceed the spatial resolution of high-resolution brain images acquired on contemporary PET/CT scanners, which is around 2 mm full-width half-maximum (FWHM). Even smaller involuntary head movements degrade image quality by introducing artifacts that often lead to a loss of effective spatial resolution. In severe cases, the image artifacts compromise clinical image interpretation, and the study needs to be repeated. This is particularly challenging for brain PET examinations that rely on the visualization of tracer uptake in cortical regions, which are typically around 2.5 mm in diameter [3]. Thus, motion correction in brain PET imaging is fundamental for ensuring high image quality, accurate quantitative analysis, and diagnostic reliability, regardless of patient scenario and severity of disease.
In clinical routine, careful patient instructions and immobilization devices such as head-holders with mild physical head restraints are used to reduce head motion. To compensate for the residual head motion, the use of external motion tracking devices attached to the head has been suggested to detect head movements during the PET scan [4], and the motion estimates were used for motion-corrected image reconstruction [5]. However, the use of external devices is a complex setup in clinical practice that has mainly been used for research projects. More recently, data-driven methods have been suggested to detect head movements directly from PET data [6,7,8,9,10,11].
In this work, we present a fully automated PET brain image reconstruction algorithm with data-driven motion compensation. We perform technical validations using FDG Hoffman brain phantom scans and a custom-made phantom movement system. We then present reader studies encompassing 38 clinical patients scanned for dementia evaluation with three different tracers, and 25 elderly subjects from a research study investigating dementia with Lewy bodies.
Methods
Patient Data
The study comprised a retrospective analysis of both clinical and research data from the Department of Nuclear Medicine and PET at Aarhus University Hospital. The clinical cohort consisted of 38 selected patients with inadequate image quality (Table 1). These patients underwent PET imaging for dementia and movement disorders between January 2022 and May 2023. Nuclear medicine physicians assessed the standard PET images as compromised, grading them as suboptimal or even non-diagnostic, primarily due to head motion artifacts. Within the clinical cohort, 11 subjects underwent FDG imaging, 12 were subjected to [18F]N-(3-iodoprop-2E-enyl)-2beta-carbomethoxy-3beta-(4'-methylphenyl)-nortropane (PE2I) imaging [12], and 15 had [18F]flutemetamol (FMM) imaging [13]. The three tracers are used to uncover different characteristics of dementia and have markedly different uptake patterns: FDG is used to evaluate reduced cortical glucose metabolism pertinent to dementia evaluation, PE2I is used for dopamine transporter visualization with a focus on striatal regions, and FMM is used for amyloid imaging. The FMM image is notably sensitive to motion because it is essentially a white matter (WM) image that is assessed for grey matter (GM) uptake. The institutional review board at Aarhus University Hospital granted access to patient files. Individual patients' consent was waived by the institutional review board due to the retrospective nature of the study.
The research cohort comprised 25 subjects (Table 1), selected from a previously published study [14]. This cohort included 15 patients diagnosed with Lewy body dementia and 10 age-matched cognitively intact elderly controls with Montreal Cognitive Assessment of 26 or above. All subjects underwent PET after injection of [18F]fluoroethoxybenzovesamicol (FEOBV), a radiotracer utilized for vesicular acetylcholine transporter imaging [14, 15]. The standard PET images for all participants were deemed of adequate research quality based on visual inspection, and the selection was not influenced by factors like head motion. The research study was conducted according to the Declaration of Helsinki and approved by the Regional Ethics Committee. All participants provided written informed consent.
Data acquisition and standard Brain PET Image Reconstruction
All subjects were scanned on a Biograph Vision 600 PET/CT (Siemens Healthineers, Knoxville, TN, USA) scanner with a CT (Ref mAs 150; 120 kV) for attenuation correction followed by a PET brain scan (see Table 1 for injected activities and scan times). The patients were positioned in the scanner using a head holder and instructed not to move during the PET/CT. PET data were acquired in list-mode. Brain PET data were reconstructed with attenuation and scatter correction using resolution modeling (PSF) and time-of-flight (TOF), 8 iterations, 5 subsets, 440 matrix, zoom 2, no post-filter, with a final voxel size of 0.83 × 0.83 × 1.65 mm3, and a spatial resolution around 2 mm FWHM. These images will be denoted Standard Images.
Motion-compensated Brain PET Image Reconstruction
The data-driven motion correction algorithm is based on the assumption that motion is not a continuous process. Specifically, the method assumes that head studies consist of alternating periods of quiescence and motion. The approach is to identify quiescent periods, which lend themselves to the piecewise reconstruction and re-assembly of motion-free images. The procedure is performed in three steps:
A. Subdivide the list-mode data into a series of 1.0-s frames. These are used to identify subsets of consecutive frames in which head motion is not detectable ("motion frames"). Each motion frame must have adequate count statistics for deriving a motion-compensating transform to a target frame.

B. Estimate transforms between motion frames.

C. Use the motion frames and the corresponding transforms in the reconstruction of the PET data with the appropriate correction factors.
Each step (A, B, C) is expanded below and in Fig. 1.
Illustration of the three steps (A, B, C) in the data-driven motion-compensated image reconstruction algorithm. List-mode data (A1) are searched to identify motion events based on 1-s center-of-distributions (A2) and used to define a series of motion frames (A3), with gaps where low-count motion frames are discarded. A non-attenuation-corrected (NAC) image reconstruction (B1) is performed for each motion frame. The summing tree algorithm and mutual information (B2) are used to estimate the rigid body transformation (B3) between the initial frame and each motion frame. Sinograms and transformations for each motion frame (C1) are used in the OSEM algorithm for reconstructing the MoCo Image (C2)
A. Deriving time bins between motion events
Motion events are identified in the list-mode file and used to define subsets of consecutive frames in which motion is considered not detectable or negligible, using the criteria below. The method is similar to the previously described Merging Adjacent Clusters method [16].
1. For each time sampling interval d ≥ 1.0 s, a center-of-distribution (COD) is calculated by finding the most likely location \({p}_{n}({x}_{n},{y}_{n},{z}_{n})\) in image space of each line-of-response event \({LOR}_{n}(i,j,\Delta t)\), where i and j are detector pairs and \(\Delta t\) is time-of-flight information, if available. The averaging of these positions is referred to as histo-binning:

$$COD=\frac{\sum_{n}{p}_{n}}{N}$$
2. If the COD is sufficiently different from that of the accumulated prior sampling intervals, the start of a new motion frame is declared. "Sufficiently different" is determined by comparing the change in COD with the positional uncertainty related to the noise level characteristic of the scan. For example, in a scan with fewer counts, the noise level is higher, and the COD must move a greater distance to trigger a new motion frame. See [16] for further details.
3. If the COD is considered stable, the current motion frame is continued.
4. In periods where the COD changes by more than 0.5 mm per 1-s minimal time interval, the frames are discarded as lying within a motion event.
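As an illustration, the per-second COD segmentation logic of steps 1-4 can be sketched as follows. This is a simplified sketch with illustrative thresholds: the 3-sigma significance test stands in for the noise-derived criterion detailed in [16], and `detect_motion_frames` is a hypothetical helper, not the vendor implementation.

```python
import numpy as np

def detect_motion_frames(cod, noise_sigma, motion_mm=0.5):
    """Segment a series of 1-s center-of-distribution (COD) samples into
    motion frames.

    cod         : (N, 3) array of per-second COD positions in mm
    noise_sigma : expected COD jitter (mm) from count statistics alone; a
                  noisier (lower-count) scan needs a larger COD shift to
                  trigger a new frame
    motion_mm   : per-second COD change above which a 1-s bin is treated
                  as lying inside a motion event and discarded

    Returns a list of (start_s, end_s) motion frames (end exclusive) and
    a list of discarded 1-s bins.
    """
    frames, discarded = [], []
    start, ref, n_ref = None, None, 0
    prev = None
    for t in range(len(cod)):
        if prev is not None and np.linalg.norm(cod[t] - prev) > motion_mm:
            # COD moved > 0.5 mm within 1 s: this bin lies inside a motion
            # event; close any open frame and discard the bin (step 4)
            if start is not None and t > start:
                frames.append((start, t))
            discarded.append(t)
            start, ref, n_ref = None, None, 0
        elif start is None:
            # first stable bin after a motion event starts a new frame
            start, ref, n_ref = t, cod[t].astype(float), 1
        elif np.linalg.norm(cod[t] - ref) > 3.0 * noise_sigma:
            # COD significantly different from the accumulated frame COD:
            # declare the start of a new motion frame (step 2)
            frames.append((start, t))
            start, ref, n_ref = t, cod[t].astype(float), 1
        else:
            # COD stable: extend the current frame (step 3)
            ref = (ref * n_ref + cod[t]) / (n_ref + 1)
            n_ref += 1
        prev = cod[t]
    if start is not None:
        frames.append((start, len(cod)))
    return frames, discarded
```

For a scan with a single abrupt 5-mm displacement after 60 s, this sketch returns two motion frames separated by one discarded 1-s bin.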
B. Estimating transforms between motion frames
The algorithm uses a 3D-to-2D projection approach, which improves the noise characteristics of each projection and enables registration even in noisy data. This is performed using the Summing Structural Tree approach [17].
1. For each motion frame, a non-attenuation-corrected (NAC) reconstruction is performed.
2. 2D projections are calculated along the x, y, and z directions to optimize the counts used for registration.
3. The rigid body transformations are calculated for all x, y, and z directions iteratively, since moving in one direction affects the other directions.
4. The registration works by first comparing and correcting the motion frames that are most similarly positioned. The algorithm then adds the counts of the registered frames and iteratively compares the most similar frames again, using the Summing Tree Structural Motion Correction algorithm [17].
5. The final target image is the first frame, which is assumed to be well registered to the CT.
6. The objective function for the registration is the Mutual Information criterion [18].
C. Iterative Reconstruction with motion correction
Once the transforms are derived for all motion frames, this information can be incorporated into the reconstruction of the image, taking into account correction factors such as attenuation and scatter correction, which depend on the µ-map at the positions where the PET events actually occurred. All motion frames and corresponding transforms are built into the iterative reconstruction as follows [19].
Where \(b\): image voxel; \(l\): LOR bin; \(t\): time-frame; \(O\): (Randoms × Norm + Scatter) × AFC; \(P\): prompts; \(A\): AFC; \(F()\): forward projection; \(B()\): back projection; \(M(b)\): motion correction in image space; \(M^{-1}(b)\): inverse motion correction in image space; \(M(l)\): motion correction in sinogram space.
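The update equation itself did not survive formatting. Using the symbol legend, a plausible general form, consistent with standard multi-frame motion-compensated OSEM (the exact expression is given in [19]), is:

$$f_{b}^{(k+1)}=\frac{f_{b}^{(k)}}{\sum_{t}M_{t}^{-1}\left(B\left(\frac{1}{A}\right)\right)}\sum_{t}M_{t}^{-1}\left(B\left(\frac{P}{F\left(M_{t}\left(f^{(k)}\right)\right)+O}\right)\right)$$

Here the sums run over motion frames \(t\); each frame's prompts \(P\) are compared with the forward projection of the motion-transformed image estimate, the resulting correction is transformed back via \(M_{t}^{-1}\), and the sensitivity term carries the attenuation factors. Depending on implementation, the motion operators may act in image space (\(M(b)\)) or sinogram space (\(M(l)\)).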
There are two principal assumptions: First, there is no motion between the initial CT scan and the first motion frame. We make no attempt to correct for motion in this interval. Consequently, a potential attenuation correction mismatch may be observed in the final reconstructed image in some cases. This could be addressed by combining the MoCo method with a technique to align CT and PET images or by using deep-learning based CT-free approaches for attenuation and scatter correction [20,21,22]. Second, motion events are relatively brief, with extended periods of quiescence between them: if a patient moves their head continuously, it appears as a series of short motion frames, some of which may be discarded. The remaining data are scaled by a decay-corrected factor to compensate for the missing events. This scaling is done regardless of the reason the motion frame was discarded.
In short, the method automatically detects motion during the PET data acquisition and transforms all data back to a first quiescent part of the PET scan [16, 17]. Thus, assuming no motion between the CT and the start of the PET data acquisition, the PET data will be fully motion-corrected and aligned to the CT to achieve accurate correction for attenuation and scatter. These images will be denoted MoCo Images, and they were reconstructed using investigational prototype software (e7tools; Siemens Healthineers) using the same reconstruction parameters as the Standard Images.
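The way per-frame transforms enter the iterative update can be sketched with a deliberately simplified 1D MLEM loop. Assumptions, for illustration only: the system matrix is the identity (so forward and back projection are trivial), the rigid transforms are integer circular shifts, and attenuation, scatter, and randoms terms are omitted.

```python
import numpy as np

def mlem_moco(frames, shifts, n_iter=20):
    """Toy 1D motion-compensated MLEM, illustrating how per-frame rigid
    transforms enter the update.

    frames : list of 1D count arrays, one per motion frame
    shifts : integer displacement of the object in each frame, relative
             to the first (reference) frame
    """
    n_frames = len(frames)
    f = np.ones(len(frames[0]))                # initial image estimate
    for _ in range(n_iter):
        update = np.zeros_like(f)
        for data, s in zip(frames, shifts):
            moved = np.roll(f, s)              # M_t: estimate moved into frame position
            ratio = data / np.maximum(moved, 1e-12)
            update += np.roll(ratio, -s)       # M_t^{-1}: correction mapped back
        f = f * update / n_frames              # sensitivity = n_frames (identity model)
    return f
```

With two frames of the same object in different positions, supplying the correct shifts recovers the object exactly, whereas ignoring the motion (all shifts zero) produces a blurred, doubled reconstruction.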
Phantom Study
A Hoffman phantom [23] was filled with an FDG solution simulating a 4:1 uptake ratio between GM and WM. The phantom was positioned within a custom-made Phantom Movement System (Supplemental Fig. S1), which facilitated precise translations (x, z) and rotations (x, z) to within accuracies of less than 1 mm and 1 degree, respectively.
The phantom was scanned on a Biograph Vision 600 (Siemens Healthineers) during nine scenarios (S). See Supplemental Table S1 for detailed information about each scenario. Each scenario involved a CT scan followed by a dynamic PET scan.
• Scenario REF/S0 served as a motion-free reference, on which the algorithm was tested to demonstrate "do no harm" when no motion is present. This allows the method to be used universally both in the absence and presence of motion, making technical as well as clinical implementation easier.
• In the subsequent scenarios (S1-S7), the phantom was displaced in a series of 118-second stationary phases separated by 2-second movements.
• S8 was set up to stress-test the algorithm with long continuous rotations throughout the entire scan.
The initial phantom filling consisted of a 65 MBq FDG solution. Due to radioactive decay, activity concentrations varied across scenarios; this was compensated for through randomized decimation of the list-mode files from scenarios S0-S7 using an investigational software prototype, LMChopper (e7tools, Siemens Healthineers, Knoxville, TN, USA), before PET image reconstruction into Standard and MoCo Images. In each scenario, we checked that the MoCo reconstruction algorithm correctly detected the phantom motion by comparing against the time points when the phantom was moved, i.e. every 120 s.
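The count-matching principle can be sketched as follows. LMChopper itself is an investigational prototype; the hypothetical `decimate_listmode` below only illustrates the idea of random (Bernoulli) thinning with a decay-derived keep fraction, assuming the 18F half-life of about 109.77 min.

```python
import numpy as np

def decay_matched_fraction(delta_t_s, half_life_s=6586.2):
    """Fraction of counts to keep so an early scenario matches one
    acquired delta_t_s later (18F half-life ~109.77 min = 6586.2 s)."""
    return 0.5 ** (delta_t_s / half_life_s)

def decimate_listmode(event_times, keep_fraction, seed=0):
    """Randomly thin list-mode events to a target fraction of the
    original counts by keeping each event independently.

    event_times   : 1D array of event time stamps (s)
    keep_fraction : probability of retaining each event
    """
    rng = np.random.default_rng(seed)
    keep = rng.random(len(event_times)) < keep_fraction
    return event_times[keep]
```

For example, a scenario acquired one half-life earlier than another would be thinned with a keep fraction of 0.5 to match its count level.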
Clinical study
The clinical cohort comprised data from 38 clinical patients who underwent brain PET scans using three distinct tracers. This subset specifically comprises patients for whom an expert nuclear medicine physician identified the Standard Image as suboptimal or of non-diagnostic quality. Our analysis concentrated on determining whether MoCo Images could enhance image quality, potentially obviating the necessity for rescans. For each tracer, the Standard Image and MoCo Image were blinded and randomized for the clinical read.
Two experienced nuclear medicine physicians (JA, PB) independently conducted a blinded evaluation of the image quality, employing a 5-point Likert scale for sharpness and quality, defined as:

1) Unacceptable image quality: extremely blurry / obscured by artifacts. PET examination needs to be repeated.

2) Poor image quality: blurry with artifacts. PET examination needs to be repeated.

3) Acceptable image quality: the minimum acceptable image quality; still some blurriness/artifacts.

4) Good image quality: minor blurriness/artifacts.

5) Excellent image quality: no sign of blurring or artifacts.
Grades 1–2 indicate a PET image that is of insufficient quality for clinical diagnostics, and grades 3–5 indicate a PET image that can be used for clinical reporting or be included in data for a scientific paper. Finally, each physician was prompted to select the 'superior' image for every patient. The evaluations were performed independently, without mutual consultation.
Research Study
The research cohort comprised data obtained from 25 elderly subjects. This subset specifically incorporates individuals anticipated to remain stationary with minimal head movement during the PET/CT examination. For this cohort, we focused on the question of whether motion correction could further enhance image quality, or whether motion correction could lead to degradation of image quality, for patients with minimal or no head movement.
In addition to the Standard Images and MoCo Images, we also created a manual image-based motion-compensated image, denoted the IB-MoCo Image. The 30-min PET data were binned into six 5-min frames that were individually reconstructed. The six images were visually inspected in PMOD 4.0 (PMOD Technologies Ltd, Zürich, Switzerland); images degraded by in-frame motion would have been discarded (this was not needed), and the remaining images were registered to the patient's T1 MRI image and averaged. IB-MoCo, sometimes used in research projects, improves image quality, but it does not account for in-frame motion or for the mismatch between CT and PET, which leads to suboptimal attenuation and scatter correction.
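The IB-MoCo pipeline (align individually reconstructed frames to a reference, then average) can be sketched as follows. The real procedure used full rigid registration to the subject's T1 MRI in PMOD; this simplified sketch assumes pure translations and uses `scipy.ndimage.shift`, so `ib_moco_average` is illustrative only.

```python
import numpy as np
from scipy import ndimage

def ib_moco_average(frames, translations):
    """Sketch of image-based motion compensation: individually
    reconstructed frames are rigidly aligned to a reference and averaged.

    frames       : list of 2D or 3D image arrays
    translations : per-frame shift (voxels) mapping it onto the reference
    """
    aligned = [ndimage.shift(img, t, order=1)   # linear interp, zero fill
               for img, t in zip(frames, translations)]
    return np.mean(aligned, axis=0)
```

With a test object and a copy displaced by two pixels, supplying the inverse displacement recovers the reference to within interpolation accuracy.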
For each subject, the Standard Image, IB-MoCo Image, and MoCo Image were masked and randomized. An experienced nuclear medicine physician (JH) did a blinded evaluation of the image quality employing the previously defined 5-point Likert scale and was prompted to select the ‘superior’ image for every subject.
Results
Phantom study
Figure 2 shows images from the reference phantom scan and the eight scenarios. For the reference scan, the algorithm did not detect motion, thereby confirming that the algorithm has "do no harm" characteristics for the phantom study and can be used universally both in the presence and absence of head motion. For S1-S4 with increasing translations and rotations, the algorithm detected all movements, including the smallest 1-mm translations and 1° rotations. Some additional movements were detected around the time of the movements, probably caused by a hand touching the phantom or small vibrations related to the movement. No movements were detected in the period when the phantom was left untouched. For S5-S7 with realistic translations and movements, the algorithm detected all movements. In general, all translations were accurately detected to within < 1 mm of the intended translation (except in 4 cases: 1–2 mm), and all rotations to within < 1° of the intended rotation (all cases). Consequently, MoCo Images of S1-S7 showed no signs of motion artifacts (Fig. 2). The MoCo Images are visually comparable to the reference scan and are quantitatively similar to it (Supplemental Fig. S4). In addition, we conducted exploratory tests by decimating the FDG phantom PET raw data to 10% of its original size, without encountering any failures or undetected movements (data not shown).
The Hoffman phantom underwent scanning across eight distinct scenarios (see Supplemental Fig. S1), each characterized by predetermined motion patterns. REF: Baseline scan devoid of motion. S1: X-translations; S2: Z-rotations; S3: X-rotations; S4: Z-translations; S5: Realistic X-translations; S6: Realistic Z-rotations; S7: Realistic X-translations; S8: Continuous rotations. See supplementary material for details about the eight motion scenarios. Notably, in all the scenarios, the Standard Images exhibited pronounced motion-induced artifacts. In contrast, the MoCo Images presented a visual quality seemingly unaffected by motion artifacts
For S8 with continuous rotations, the algorithm only detected four positions during the continuous z-rotation and two positions during the continuous x-rotation. This stress-test without any motionless periods led to a suboptimal MoCo Image (Fig. 2).
Patient study (FDG, PE2I, and FMM)
Figure 3 shows three examples of image quality of Standard Images and MoCo Images for three different tracers: FDG, PE2I, and FMM. Supplemental Fig. S2 shows images for all 38 patients. Figure 4 shows the results of the visual assessments of clinical images by the two readers, performed individually and without communication. MoCo Images always had visual scores that were equal to or better than those of Standard Images. Importantly, all scans that were initially deemed to be of insufficient quality for clinical diagnostics (visual score < 3) based on standard image reconstruction achieved diagnostic quality (visual score ≥ 3) after MoCo image reconstruction. Thus, MoCo reconstruction obviated the need for rescans in all cases where the Standard Image was deemed to have insufficient image quality (17 cases for reader 1, and 16 cases for reader 2). The two image readers selected the MoCo Image as the 'superior' image in 37 of 38 cases (97%). Thus, each reader had a single case (not the same one) where the Standard Image was preferred, and in each of these cases, both images received the same visual score from both readers.
Illustrations of the image quality across four tracers with markedly different uptake patterns. For the clinical patient scans (FDG, PE2I, FMM), the Standard Images exhibit progressively better image quality from left to right. The mean visual scores (Standard Image, MoCo Image) are FDG: (1.0, 5.0), (2.0, 5.0), (2.5, 4.0); PE2I: (1.0, 4.5), (2.5, 5.0), (3.5, 4.0); FMM: (1.5, 5.0), (2.5, 4.0), (4.0, 5.0). It is noteworthy that all MoCo Images attained a diagnostic quality, characterized by a visual score of ≥ 3, and consistently had better visual scores than those of the Standard Images. For the research scans (FEOBV), minimal or no motion could be observed on Standard Images, and MoCo images all had equal or slightly better image quality
Blinded visual assessment of the clinical PET images using three different tracers: FDG, PE2I, and FMM. Images receiving scores below 3 were considered non-diagnostic (light grey area). Consistently, MoCo Images (black circle) attained diagnostic quality, registering visual scores that either matched or exceeded those of the Standard Images (blue circle). Evaluations from the two nuclear medicine physicians are shown in grey and dark grey lines, respectively. Multicolored circles are subjects where Standard Images and MoCo Images received the same visual score
Research study (FEOBV)
Figure 3 shows three examples of the typical image quality of Standard Images and MoCo Images. In the research cohort, all Standard Images had sufficient image quality (visual score ≥ 3), but in all cases, the MoCo Images were equally good or better. Thus, MoCo image reconstruction never degraded image quality even for brain PET of the highest quality with minimal motion.
Figure 5 shows the results of the visual assessments of research images by an experienced reader. In two cases, motion correction improved image quality in healthy volunteers. Supplemental Fig. S3 shows an example of the image quality of a Standard Image, IB-MoCo Image, and MoCo Image where image quality was improved by motion correction. In this cohort, the number of images that were top graded (visual score 5) was: 11 Standard Images, 14 IB-MoCo Images, and 15 MoCo Images. The general trend was Standard Images < IB-MoCo Images < MoCo Images. The image reader selected the MoCo Image as the superior image in all 25 cases (100%). The IB motion correction method can compensate for some head motion, but the IB-MoCo Image is still affected by in-frame motion and by artifacts caused by imperfect attenuation and scatter correction.
Blinded visual image evaluation of images from a research project using FEOBV. Standard Images (blue circle) were consistently evaluated to have adequate image quality to be included in the research study: patient data (PT 1–15) had a score of at least 3, and age-matched healthy controls (HC 1–10) had a score of at least 4. Both IB-MoCo Images (dark blue circle) and MoCo Images (black circle) exhibited superior image quality, with scores of at least 4. There was an instance where the MoCo Image outperformed the IB-MoCo Image, with a visual score of 5 versus 4, respectively. This case is shown in Supplemental Fig. S3. Multicolored circles are subjects where Standard, IB-MoCo, and/or MoCo Images received the same visual score
Discussion
We validated a fully automated algorithm for PET brain image reconstruction with data-driven motion compensation that corrects head motion with a 1-s temporal resolution. It integrates detected motion frames and their associated rigid body transformations into the iterative reconstruction using all PET raw data. A technical validation using an FDG-filled Hoffman brain phantom and a custom-made Phantom Movement System showed that the algorithm detected and corrected minimal movements, 1-mm translations and 1° rotations, applied to the phantom. The image reconstruction time for MoCo Images, including all processing steps, was contingent on the number of motion frames identified but was approximately twice as long as for Standard Images.
Reader studies were conducted to explore the impact of a PET motion compensation algorithm across a diverse spectrum of clinical scenarios and neurodegenerative disorders, imaged with a variety of tracer activity distributions, as well as different image quality criteria necessary for sufficient diagnostic evaluation. This study focused on the application of the MoCo reconstruction for late brain PET imaging using four 18F tracers, but the algorithm should also work for tracers and isotopes other than those used in this study. The masked clinical reader studies confirmed that the method worked well for three different radiotracers in patients with different types of neurodegenerative disease, demonstrating its robustness irrespective of the tracer distribution within the brain, in agreement with related brain motion correction studies [6,7,8,9,10,11, 24, 25]. In the cohort of 38 patients whose Standard Images were deemed to be of suboptimal or non-diagnostic quality, all MoCo Images were classified as having acceptable diagnostic quality. MoCo reconstruction markedly enhanced the PET image quality for patients who were unable to lie still during a PET examination and obviated the necessity for repeat scans. Thus, the MoCo reconstruction algorithm is clinically feasible to use and has a clear clinical impact.
The reader study in 25 elderly subjects confirmed that MoCo Images always matched or surpassed the image quality of Standard Images. In all cases, the MoCo Image was selected as the 'superior' image. In subjects with minimal head movement, only a few motion frames were detected, and both Standard Images and MoCo Images received the highest visual score. Importantly, MoCo reconstruction was consistently successful and did not result in any degradation of image quality. For research studies, the use of MoCo reconstruction has the potential to reduce the variability of PET quantification and thereby decrease the required sample sizes in studies of patients with dementia or movement disorders.
This study, along with the MoCo reconstruction algorithm, has some limitations. The algorithm assumes no motion between the initial CT scan and the first PET motion frame. Thus, if the patient moves during this short period, this could lead to suboptimal attenuation and scatter correction, which could have affected some of the patient data in this study. Future versions of the algorithm could include registration of CT to PET or AI-based CT [20,21,22] to compensate for this effect. The MoCo reconstruction algorithm detects motion by comparing changes in COD in 1-s intervals and assumes brief motion events with extended periods of quiescence between them. Our phantom study showed that the algorithm detected small abrupt movements even with low-count data. However, the eight scenarios were restricted to motion types that could be controlled using the phantom motion system (Supplemental Fig. S1). Simultaneous translations and rotations, which may occur in patients during neck bending, could not be realistically tested. Furthermore, the current implementation struggles with identifying continuous rotations, as illustrated by Scenario S8, which represents an extreme case. Despite this limitation, we did not observe any instances of algorithm failure within our patient cohorts. In the phantom study, we verified that the MoCo algorithm successfully detected all displacements imposed on the phantom. However, for the patient studies, we did not have a 'gold standard' for validation, as an external motion detection device was not used. Other studies have compared the performance of data-driven COD-based motion correction approaches to corrections based on external motion detection devices [7,8,9]. Finally, the use of the MoCo algorithm is limited to dedicated brain PET scans, as it only performs rigid motion compensation, i.e., it cannot properly compensate for motion that also involves bending of the neck, movement of the shoulders, or elastic motion in long-axial field-of-view PET scans that include the entire body.
This study focused on the application of the MoCo reconstruction for late brain PET imaging, but the algorithm also works for in-frame motion correction of the late images of dynamic PET scans, where list-mode data is reconstructed into a time series of image frames. Dynamic PET data are used for kinetic modeling, enabling the quantification of flow, uptake rates, distribution volumes, and binding potentials, and the application of motion correction is likely to influence these kinetic parameters significantly [26]. However, the algorithm cannot be expected to work well for the early part of dynamic studies as the COD-based algorithm cannot separate the effects of head movement and fast-changing tracer uptake. Traditional motion compensation methods that employ external tracking systems are capable of monitoring head movement without being affected by variations in tracer distribution and image noise. However, there is a need for more studies to explore the potential of data-driven approaches for this application.
Conclusions
We propose an automated, data-driven brain PET MoCo reconstruction algorithm capable of correcting even small head movements. We validated the method using four tracers with markedly different uptake patterns. When applied to clinical patient data, the algorithm improved image quality in every case without causing any deterioration. Patient images previously deemed non-diagnostic were elevated to diagnostic quality, effectively eliminating the need for repeat scans. These results indicate the algorithm's potential for straightforward implementation in clinical routine.
Data availability
All data are available from the corresponding author upon reasonable request.
Abbreviations
- COD: Center-of-distribution
- CT: Computed tomography
- FDG: [18F]Fluorodeoxyglucose
- FEOBV: [18F]fluoroethoxybenzovesamicol
- FMM: [18F]flutemetamol
- FWHM: Full-width half-maximum
- GM: Grey matter
- MoCo: Motion-compensated
- NAC: Non-attenuation-corrected
- OSEM: Ordered subset expectation maximization
- PET: Positron emission tomography
- PE2I: [18F]N-(3-iodopro-2E-enyl)-2beta-carbomethoxy-3beta-(4'-methylphenyl)-nortropane
- PI: Post injection
- PSF: Point spread function or resolution modeling
- S: Scenario (used for phantom experiment)
- TOF: Time-of-flight
- WM: White matter
References
Raji CA, Benzinger TLS. The Value of Neuroimaging in Dementia diagnosis. Continuum (Minneap Minn). 2022;28(3):800–21.
Burkett BJ, Babcock JC, Lowe VJ, Graff-Radford J, Subramaniam RM, Johnson DR. PET imaging of Dementia: Update 2022. Clin Nucl Med. 2022;47(9):763–73.
Pakkenberg B, Gundersen HJ. Neocortical neuron number in humans: effect of sex and age. J Comp Neurol. 1997;384(2):312–20.
Keller SH, Sibomana M, Olesen OV, Svarer C, Holm S, Andersen FL, Højgaard L. Methods for motion correction evaluation using 18F-FDG human brain scans on a high-resolution PET scanner. J Nucl Med. 2012;53(3):495–504.
Tumpa TR, Acuff SN, Gregor J, Bradley Y, Fu Y, Osborne DR. Data-driven head motion correction for PET using time-of-flight and positron emission particle tracking techniques. PLoS ONE. 2022;17(8):e0272768.
Lu Y, Gallezot JD, Naganawa M, Ren S, Fontaine K, Wu J, Onofrey JA, Toyonaga T, Boutagy N, Mulnix T, Panin VY, Casey ME, Carson RE, Liu C. Data-driven voluntary body motion detection and non-rigid event-by-event correction for static and dynamic PET. Phys Med Biol. 2019;64(6):065002.
Lu Y, Naganawa M, Toyonaga T, Gallezot JD, Fontaine K, Ren S, Revilla EM, Mulnix T, Carson RE. Data-Driven motion detection and event-by-event correction for Brain PET: comparison with Vicra. J Nucl Med. 2020;61(9):1397–403.
Revilla EM, Gallezot JD, Naganawa M, Toyonaga T, Fontaine K, Mulnix T, Onofrey JA, Carson RE, Lu Y. Adaptive data-driven motion detection and optimized correction for brain PET. NeuroImage. 2022;252:119031. https://doi.org/10.1016/j.neuroimage.2022.119031.
Zeng T, Lu Y, Jiang W, Zheng J, Zhang J, Gravel P, Wan Q, Fontaine K, Mulnix T, Jiang Y, Yang Z, Revilla EM, Naganawa M, Toyonaga T, Henry S, Zhang X, Cao T, Hu L, Carson RE. Markerless head motion tracking and event-by-event correction in brain PET. Phys Med Biol. 2023;68(24):245019.
Spangler-Bickell MG, Hurley SA, Pirasteh A, Perlman SB, Deller T, McMillan AB. Evaluation of data-driven rigid motion correction in clinical brain PET imaging. J Nucl Med. 2022;63(10):1604–10.
Tiss A, Marin T, Chemli Y, Spangler-Bickell M, Gong K, Lois C, Petibon Y, Landes V, Grogg K, Normandin M, Becker A, Thibault E, Johnson K, El Fakhri G, Ouyang J. Impact of motion correction on [18F]-MK6240 tau PET imaging. Phys Med Biol. 2023;68(10). https://doi.org/10.1088/1361-6560/acd161.
Schou M, Steiger C, Varrone A, Guilloteau D, Halldin C. Synthesis, radiolabeling and preliminary in vivo evaluation of [18F]FE-PE2I, a new probe for the dopamine transporter. Bioorg Med Chem Lett. 2009;19(16):4843–5.
Wolk DA, Grachev ID, Buckley C, Kazi H, Grady MS, Trojanowski JQ, Hamilton RH, Sherwin P, McLain R, Arnold SE. Association between in vivo fluorine 18-labeled flutemetamol amyloid positron emission tomography imaging and in vivo cerebral cortical histopathology. Arch Neurol. 2011;68(11):1398–403.
Okkels N, Horsager J, Labrador-Espinosa M, Kjeldsen PL, Damholdt MF, Mortensen J, Vestergård K, Knudsen K, Andersen KB, Fedorova TD, Skjærbæk C, Gottrup H, Hansen AK, Grothe MJ, Borghammer P. Severe cholinergic terminal loss in newly diagnosed dementia with Lewy bodies. Brain. 2023;146(9):3690–704.
Petrou M, Frey KA, Kilbourn MR, Scott PJ, Raffel DM, Bohnen NI, Müller ML, Albin RL, Koeppe RA. In vivo imaging of human cholinergic nerve terminals with (-)-5-(18)F-fluoroethoxybenzovesamicol: biodistribution, dosimetry, and tracer kinetic analyses. J Nucl Med. 2014;55(3):396–404.
Hong I, Burbar Z, Schleyer P. A method to estimate motion frames from PET listmode by merging adjacent clusters. 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), Manchester, UK, 2019, pp. 1–2. https://doi.org/10.1109/NSS/MIC42101.2019.9059870
Hong I, Burbar Z, Schleyer P. A Summing Tree Structural motion correction algorithm for brain PET images using 3D to 2D projection. 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), Manchester, UK, 2019, pp. 1–3. https://doi.org/10.1109/NSS/MIC42101.2019.9060017
Wells WM 3rd, Viola P, Atsumi H, Nakajima S, Kikinis R. Multi-modal volume registration by maximization of mutual information. Med Image Anal. 1996;1(1):35–51.
Hong I, Burbar Z, Michel C. Comparison of motion correction methods for PET studies. 2012 IEEE Nuclear Science Symposium and Medical Imaging Conference Record (NSS/MIC), Anaheim, CA, USA, 2012, pp. 3293–3294. https://doi.org/10.1109/NSSMIC.2012.6551750
Partin L, Spottiswoode B, Hayden C, Armstrong I, Fahmi R. Deep learning-based CT-less attenuation correction of brain FDG PET. J Nucl Med. 2024;65(Suppl2):242223.
Muller F, Daube-Witherspoon M, Parma M, Vanhove C, Vandenberghe S, Noël P, Karp J. Deep learning enabled CT-less attenuation and scatter correction for Multi-tracer Whole-Body PET imaging. J Nucl Med. 2024;65(Suppl2):241351.
Montgomery ME, Andersen FL, d’Este SH, Overbeck N, Cramon PK, Law I, Fischer BM, Ladefoged CN. Attenuation correction of long Axial Field-of-view Positron Emission Tomography using Synthetic computed Tomography Derived from the Emission Data: application to low-count studies and multiple Tracers. Diagnostics (Basel). 2023;13(24):3661.
Hoffman EJ, Cutler PD, Digby WM, Mazziotta JC. 3-D phantom to simulate cerebral blood flow and metabolic images for PET. IEEE Trans Nucl Sci. 1990;37:616–20.
Park HL, Park SY, Kim M, Paeng S, Min EJ, Hong I, Jones J, Han EJ. Improving diagnostic precision in amyloid brain PET imaging through data-driven motion correction. EJNMMI Phys. 2024;11(1):49.
Kemp B, Hong I, Schumacher M, Burkett B, Johnson D, Lowe V. Evaluation of a prototype motion correction algorithm for PET brain imaging. J Nucl Med. 2024;65(Suppl2):242242.
Wardak M, Wong KP, Shao W, Dahlbom M, Kepe V, Satyamurthy N, Small GW, Barrio JR, Huang SC. Movement correction method for human brain PET images: application to quantitative analysis of dynamic 18F-FDDNP scans. J Nucl Med. 2010;51(2):210–8.
Funding
Aase og Ejnar Danielsens fond (36456) (JH); Lundbeck foundation (R-359-2020-2533) (PB); Michael J Fox Foundation (MJFF-022856) (PB).
Author information
Authors and Affiliations
Contributions
OLM: Prototype testing, phantom validation, clinical validation, drafting manuscript. ABR: Phantom validation study, drafting manuscript. PBD, JRM, MTS: Prototype testing, Phantom Motion System, Phantom validation study (Bachelor’s project). NO, JH, KBA, JA, PB: Patient study and clinical validations. IH and JJ developed and implemented the reconstruction software prototype. All authors read and approved the final version of the manuscript.
Corresponding author
Ethics declarations
Ethics approval
The research study was conducted according to the Declaration of Helsinki and approved by the Regional Ethics Committee (project number 1-10-72-270-19).
Consent to participate
Informed consent was obtained from all individual participants included in the research study.
Consent to publish
Publication consent was obtained from all research study participants.
Competing interests
The authors JJ, IH and SZ are full-time employees of Siemens Medical Solutions USA, Inc. ABR is a full-time employee of Siemens Healthcare A/S Denmark. No other potential conflicts of interest relevant to this article have been reported.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Munk, O.L., Rodell, A.B., Danielsen, P.B. et al. Validation of a data-driven motion-compensated PET brain image reconstruction algorithm in clinical patients using four radiotracers. EJNMMI Phys 12, 11 (2025). https://doi.org/10.1186/s40658-025-00723-w