Abstract
Mitochondria are vital organelles responsible for energy production and numerous other cellular processes. Accurate segmentation of mitochondria from microscopy images is crucial for understanding their structure and function. In this study, we propose a novel approach that combines the U-Net and VGG19 architectures to improve the accuracy of mitochondria segmentation. The proposed method pairs U-Net's ability to capture detailed spatial information with VGG19's strong feature extraction, leveraging the complementary strengths of the two models. Data augmentation techniques, including random flips, zooms, and rotations, are employed to increase the diversity of the training dataset and help the model generalize to unseen variations in mitochondrial shape, size, and orientation. We also adopt the Jaccard loss function, which measures the overlap between predicted and ground-truth masks and provides a more informative training signal for segmentation than conventional losses such as cross-entropy. Visualizations of training and validation metrics, including Intersection over Union (IoU) and loss, are used to monitor the model's performance and verify proper convergence. Evaluated on two datasets, the proposed method achieves an accuracy of 91%, exceeding existing methods.
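The Jaccard loss mentioned above can be sketched as follows; this is a minimal NumPy illustration of the general soft-Jaccard (IoU) formulation, not the paper's exact implementation, and the `smooth` parameter is an assumed convention for avoiding division by zero on empty masks:

```python
import numpy as np

def jaccard_loss(y_true, y_pred, smooth=1.0):
    """Soft Jaccard (IoU) loss: 1 - |A ∩ B| / |A ∪ B|.

    y_true, y_pred: arrays of mask probabilities in [0, 1].
    smooth: small constant to keep the ratio defined for empty masks
    (an assumed convention, not specified in the abstract).
    """
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)
```

Because the loss directly penalizes the mismatch in region overlap, a perfect prediction gives a loss near 0 while a fully disjoint prediction approaches 1, which is why it tends to give a more informative gradient for segmentation than per-pixel cross-entropy.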