3D Motion Estimation of Left Ventricular Dynamics Using MRI and Track-to-Track Fusion
Objective: This study investigates the estimation of three-dimensional (3D) left ventricular (LV) motion by fusing different two-dimensional (2D) cine magnetic resonance (CMR) sequences acquired during routine imaging sessions. Although standard clinical cine CMR data are inherently 2D, the underlying LV dynamics lie in 3D space and cannot be captured entirely by a single 2D CMR image sequence. By utilizing the image information from multiple short-axis and long-axis image sequences, the proposed method aims to estimate dynamic state vectors consisting of the position and velocity of the myocardial borders in 3D space. Method: The proposed method comprises two main components: tracking myocardial points in 2D CMR sequences, and fusion of the multiple trajectories corresponding to the tracked points. The tracking, which yields sets of temporally corresponding points representing the myocardium, is performed using a diffeomorphic nonrigid image registration approach. The trajectories obtained from each cine CMR sequence are then fused with the corresponding trajectories from the other CMR views using an unscented Kalman smoother (UKS) and a track-to-track fusion algorithm. Results: We evaluated the proposed method by comparing the results against CMR imaging with myocardial tagging. We report a quantitative performance analysis by projecting the estimated state vectors onto 2D tagged CMR images acquired from the same subjects and comparing them against harmonic phase estimates. The proposed algorithm yielded competitive performance, with a mean root mean square error of 1.3±0.5 pixels (1.8±0.6 mm) evaluated over 118 image sequences acquired from 30 subjects. Conclusion: This study demonstrates that fusing the information from short- and long-axis CMR views improves the accuracy of cardiac tissue motion estimation. 
Clinical Impact: The proposed method demonstrates that fusing tissue tracking information from long- and short-axis views improves binary classification performance in automated regional function assessment.
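The track-to-track fusion step mentioned above can be illustrated with a minimal sketch. The snippet below combines two independent state estimates (e.g., a 6-dimensional 3D position–velocity vector from two CMR views) in information form, weighting each track by the inverse of its covariance. This is a simplified illustration assuming uncorrelated track errors; the paper's actual algorithm operates on UKS-smoothed trajectories and may also account for cross-covariance between tracks. The function name `fuse_tracks` is ours, not from the paper.

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Information-form track-to-track fusion of two state estimates.

    x1, x2 : state vectors (e.g., 3D position + velocity, shape (6,))
    P1, P2 : corresponding error covariance matrices
    Assumes the two track errors are independent; a full track-to-track
    fusion scheme would additionally model their cross-covariance.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    # Fused covariance: inverse of summed information matrices
    P = np.linalg.inv(I1 + I2)
    # Fused state: covariance-weighted combination of the two estimates
    x = P @ (I1 @ x1 + I2 @ x2)
    return x, P

# Example: fuse a short-axis and a long-axis estimate of the same point
x_sa = np.array([10.0, 5.0, 2.0, 0.1, -0.2, 0.05])   # position + velocity
x_la = np.array([10.4, 4.8, 2.2, 0.12, -0.18, 0.04])
P_sa = np.diag([1.0, 1.0, 4.0, 0.1, 0.1, 0.4])       # poor through-plane info
P_la = np.diag([4.0, 4.0, 1.0, 0.4, 0.4, 0.1])       # complementary geometry
x_f, P_f = fuse_tracks(x_sa, P_sa, x_la, P_la)
```

Note how the complementary view geometries play out: each view constrains the in-plane components well, so the fused covariance is tighter than either input along every axis.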