Abstract
Image segmentation is a cornerstone of computer vision, with critical applications in medicine, autonomous vehicles, and agriculture. Despite advances in deep learning-based segmentation, existing evaluation metrics (e.g., pixel accuracy, the Dice similarity coefficient, and Intersection over Union (IoU)) fail to comprehensively assess boundary fidelity, topological errors, and class-imbalanced scenarios. To address these limitations, this study proposes three novel metrics: the under-segmentation index (US), the over-segmentation index (OS), and a combined US-OS index, which quantify segmentation errors at both the regional and boundary levels. These metrics were validated on diverse real-world datasets (BraTS for brain tumors, Cityscapes for urban scenes, and Pineapple-Slice for agricultural imagery) using state-of-the-art architectures (3D U-Net, U-Net, and YOLOv8l). Results demonstrate that the proposed metrics outperform traditional measures in sensitivity to boundary errors, robustness to class imbalance, and clinical/application relevance. For instance, in BraTS tumor segmentation, the US-OS index revealed a 300–493% higher relative error rate than IoU, reflecting its greater sensitivity to segmentation errors. This work provides a framework for more nuanced segmentation evaluation, enabling improved algorithm development for safety-critical applications such as medical diagnostics and autonomous driving.
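The abstract does not give the formulas for the proposed indices. As a point of orientation only, the following is a minimal sketch of one common way under- and over-segmentation fractions are computed from binary masks — US as the missed ground-truth pixels (false negatives) and OS as the spurious predicted pixels (false positives), each normalized by the ground-truth area. These definitions are an assumption for illustration, not necessarily the ones proposed in this work.

```python
import numpy as np

def under_over_segmentation(gt, pred):
    """Illustrative under-/over-segmentation fractions for binary masks.

    Assumed definitions (not necessarily the paper's):
      US = |GT \\ Pred| / |GT|   (fraction of ground truth missed)
      OS = |Pred \\ GT| / |GT|   (spurious prediction relative to GT size)
    """
    gt = np.asarray(gt, dtype=bool)
    pred = np.asarray(pred, dtype=bool)
    n_gt = gt.sum()
    if n_gt == 0:
        raise ValueError("ground-truth mask is empty")
    fn = np.logical_and(gt, ~pred).sum()   # missed ground-truth pixels
    fp = np.logical_and(~gt, pred).sum()   # spurious predicted pixels
    us = fn / n_gt
    os_ = fp / n_gt
    return us, os_, us + os_               # combined index as a simple sum

# Toy example: a 2x2 mask where one GT pixel is missed and one extra pixel is predicted.
gt = [[1, 1],
      [0, 0]]
pred = [[1, 0],
        [1, 0]]
us, os_, combined = under_over_segmentation(gt, pred)
```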