Abstract
Accurate segmentation of aortic valve cusps supports surgical assessment and computational modeling. This study evaluates the feasibility of a MobileNetV3 + DeepLabV3+ architecture for aortic valve segmentation from RGB images and examines the impact of synthetic data augmentation and unsupervised pretraining. A dataset of porcine aortic valves was used for training and evaluation under a Leave-One-Heart-Out Cross-Validation (LOHO-CV) strategy. Synthetic images generated by a conditional Denoising Diffusion Probabilistic Model (cDDPM) were added to the training set, and unsupervised pretraining with a deep convolutional autoencoder (DCAE) was tested. Performance was assessed using mean Intersection over Union (mIoU) and accuracy. The model achieved an average mIoU exceeding 0.93 across LOHO-CV splits, demonstrating accurate segmentation at minimal computational cost. Synthetic data improved segmentation accuracy, whereas unsupervised pretraining accelerated convergence but had no significant effect on final performance. The low standard deviation of mIoU across heart specimens indicates high robustness. Our findings confirm that small, efficient deep learning models are sufficient for aortic valve segmentation, reducing the need for larger architectures. Synthetic data augmentation enhances performance, and unsupervised pretraining may help reduce annotation effort. Future work will focus on dataset expansion and instance segmentation to eliminate preprocessing steps.
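As an illustration of the pipeline summarized above, the sketch below builds a lightweight segmentation model with a MobileNetV3-Large backbone and evaluates a prediction with mIoU. It is only an approximation of the described setup: torchvision ships DeepLabV3 (not the V3+ decoder) with this backbone, and the two-class (background vs. cusp) labeling, the 512x512 input resolution, and the helper function are assumptions for illustration, not details taken from the study.

```python
import torch
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

# Lightweight segmentation network: MobileNetV3-Large backbone with a
# DeepLabV3 head (torchvision does not offer the V3+ decoder for this
# backbone). Two classes (background, cusp) are assumed for illustration.
model = deeplabv3_mobilenet_v3_large(weights=None, num_classes=2).eval()

x = torch.randn(1, 3, 512, 512)            # one RGB image, assumed 512x512
with torch.no_grad():
    logits = model(x)["out"]               # (1, 2, 512, 512) per-pixel class scores
pred = logits.argmax(dim=1)                # hard segmentation mask, (1, 512, 512)


def mean_iou(pred: torch.Tensor, target: torch.Tensor, num_classes: int = 2) -> float:
    """Mean Intersection over Union, skipping classes absent from both masks."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = (p | t).sum()
        if union > 0:
            ious.append(((p & t).sum().float() / union.float()).item())
    return sum(ious) / len(ious)


target = torch.randint(0, 2, (1, 512, 512))    # placeholder ground-truth mask
print(f"mIoU: {mean_iou(pred, target):.3f}")
```

In a LOHO-CV setting, a loop like this would be run once per held-out heart, training on images from all other specimens and averaging the resulting mIoU values across splits.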
Keywords
Image segmentation techniques
Machine learning and AI-enhanced imaging
Performance evaluation and benchmarking