RegC2P: A registration-enhanced GAN for 3D CT-to-PET translation
Abstract
The fusion of Positron Emission Tomography (PET) and Computed Tomography (CT) has significantly advanced cancer imaging by combining metabolic and anatomical information, improving diagnosis, staging, and treatment monitoring. However, the widespread use of PET/CT systems is limited by the scarcity of PET scanners and the reliance on radioactive tracers. To address this, 3D image-to-image translation has emerged as a promising solution for generating synthetic PET images from CT scans. Existing generative methods based on Generative Adversarial Networks (GANs) face challenges such as training instability and stochastic outputs that lack the precision required for reliable 3D CT-to-PET translation. We propose RegC2P, a novel approach that integrates a registration module into a GAN-based framework to generate accurately aligned 3D PET images from CT scans. RegC2P transforms the problem into a slice-by-slice 2D image-to-image translation task, in which individual 2D CT slices are translated into PET slices and then stacked into a 3D PET volume. Misaligned 2D PET slices are treated as noisy labels, and the generator is trained jointly with an additional registration network that adaptively corrects the misalignment, optimizing the translation and registration tasks simultaneously. To ensure smoothness and consistency across generated PET slices throughout the entire volume, we introduce a 3D U-Net refinement network. Extensive experiments on large datasets demonstrate that RegC2P outperforms state-of-the-art methods, achieving a 10.16% reduction in MAE, a 0.96% improvement in SSIM, and a 3.6% increase in PSNR, setting a new benchmark for the quality of synthesized 3D PET images.
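The three-stage pipeline described above (slice-wise translation, per-slice label registration, 3D refinement) can be sketched in a few lines of Python. This is only an illustrative control-flow sketch under loose assumptions: the function names are hypothetical, and each network (2D GAN generator, registration network, 3D U-Net refiner) is stubbed as a trivial callable rather than a trained model.

```python
# Hypothetical sketch of the RegC2P forward pass. All names are
# illustrative; the real networks are learned models, stubbed here.

def translate_slice(ct_slice):
    # Stand-in for the 2D GAN generator: CT slice -> synthetic PET slice.
    return [v * 0.5 for v in ct_slice]  # placeholder transform

def register(generated_slice, noisy_pet_slice):
    # Stand-in for the registration network: warps the (possibly
    # misaligned) ground-truth PET slice toward the generated slice,
    # so the translation loss is computed against an aligned label.
    return noisy_pet_slice  # identity warp in this sketch

def refine_volume(volume):
    # Stand-in for the 3D U-Net refiner that enforces smoothness and
    # consistency across the stacked slices.
    return volume

def regc2p_forward(ct_volume, noisy_pet_volume):
    """Translate each CT slice, align its noisy PET label, stack the
    generated slices into a volume, then refine the whole volume."""
    synth_slices, aligned_labels = [], []
    for ct_slice, pet_slice in zip(ct_volume, noisy_pet_volume):
        gen = translate_slice(ct_slice)
        synth_slices.append(gen)
        aligned_labels.append(register(gen, pet_slice))
    return refine_volume(synth_slices), aligned_labels
```

During training, the translation loss would be taken between each generated slice and its registered label, so the generator and registration network optimize jointly, as the abstract describes.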