Modern medical image analysis often involves large-scale, multi-center data, which requires deformable registration methods to accommodate the data diversity found in clinical applications. We propose a novel deformable registration method, based on a cue-aware deep regression network, that handles multiple databases with minimal parameter tuning. Our method learns and predicts the deformation field from a to-be-registered image pair, i.e., a reference image and a subject image. Specifically, given a set of training images, our method learns the displacement vector associated with a pair of reference-subject patches. To achieve this, we first introduce a key-point truncated-balanced sampling strategy to generate an informative and well-distributed training set, which facilitates accurate learning from an image database of limited size. We then design a cue-aware deep regression network for the registration task, in which an auxiliary contextual cue, given by a scale-adaptive local similarity map, explicitly guides the learning process. The auxiliary contextual cue is generated via the proposed data-driven convolution and cross-channel pooling operations. Finally, a deep convolutional neural network exploits the contextual cue for accurate prediction of local deformation. Our experiments show that the proposed method can tackle various registration tasks on different image databases, consistently providing accurate registration results without manual parameter tuning, which gives it potentially wide clinical applicability.
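To make the auxiliary contextual cue concrete, the following is a minimal illustrative sketch of a scale-adaptive local similarity map with cross-channel pooling. It is not the authors' implementation: the choice of normalized cross-correlation as the local similarity measure, the window sizes, and the use of max-pooling across scale channels are all assumptions made for illustration.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    # Zero-mean normalized cross-correlation of two equal-sized windows.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def similarity_channel(ref, sub, win):
    # Slide a (win x win) window over both images and record the local NCC
    # at each valid position, yielding one similarity "channel" per scale.
    h, w = ref.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = ncc(ref[i:i + win, j:j + win],
                            sub[i:i + win, j:j + win])
    return out

def scale_adaptive_map(ref, sub, wins=(3, 5, 7)):
    # Stack similarity channels computed at several window sizes (scales),
    # crop them to a common spatial extent, then max-pool across channels
    # so the best-matching scale dominates at every location.
    chans = [similarity_channel(ref, sub, w) for w in wins]
    hmin = min(c.shape[0] for c in chans)
    wmin = min(c.shape[1] for c in chans)
    stack = np.stack([c[:hmin, :wmin] for c in chans])
    return stack.max(axis=0)  # cross-channel pooling
```

For identical reference and subject images, every local window correlates perfectly with itself, so the pooled map is close to 1 everywhere; misaligned structures lower the local similarity, which is the signal the regression network can exploit.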
Website of the IDEA Research Group at UNC: https://www.med.unc.edu/bric/ideagroup/