Fateme Daneshfar

Academic rank: Assistant Professor
Education: PhD.
ScopusId: 35078447100
Faculty: Faculty of Engineering
Address: Department of Computer Engineering, Faculty of Engineering, University of Kurdistan

Research

Title
Semi-Supervised Dust-to-Clean Image Translation Using Regression Minimization and Consistency Regularization
Type
Thesis
Keywords
Dusty Image Enhancement, Semi-Supervised Learning, Image-to-Image Translation, Adversarial Consistency Regularization
Year
2024
Researchers
Mohammed Shamsaddin Qader (Student), Fateme Daneshfar (Primary Advisor), Marwan Aziz Mohammed (Advisor), Ako Bartani (Advisor)

Abstract

The efficacy of outdoor vision systems is often compromised by atmospheric elements such as dust, which complicates subsequent processing. Dusty images commonly suffer from reduced contrast, decreased visibility, and color distortion. These issues significantly degrade the quality of the captured images, making them less useful for applications that rely on clear and accurate visual data. Consequently, dust removal, known as dedusting, is a crucial pre-processing step in many computer vision applications. However, effective dedusting is not straightforward: a significant challenge for learning-based dedusting methods is the lack of paired training data. Paired training data, consisting of corresponding dusty and clean image pairs, is essential for supervised learning algorithms to learn the mapping from dusty to clean images, yet acquiring such data is often impractical or impossible in real-world scenarios, which can severely limit model performance. To address this challenge, we propose DR-Net, a novel semi-supervised approach for dust-to-clean image translation that combines regression minimization and consistency regularization to improve dusty image quality. The regression-minimization branch preserves the structural integrity of dedusted images by training the model on a limited set of synthetic dusty images in a supervised framework. In addition, consistency regularization, applied in an unsupervised framework, encourages the model to produce dust-free images whose distributions match those of real-world clean images and that adhere to the statistical characteristics of the dark channel of clean images. Experimental results demonstrate the effectiveness of our method in producing high-quality results. Our approach surpasses existing methods and improves downstream computer vision tasks such as object detection, recognition, and tracking in dusty environments. The improved image quality not only facilitates human interpretation but also significantly boosts the performance of automated systems that rely on clear visual data.
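
To make the two training branches concrete, below is a minimal PyTorch-style sketch of how the combined objective described above might be assembled. The names (gen, disc, lam_adv, lam_dc), the choice of L1 as the regression loss, and the exact form of the adversarial and dark-channel terms are illustrative assumptions for exposition, not the implementation from the thesis.

import torch
import torch.nn.functional as F

def dark_channel(img, patch=15):
    # img: (B, 3, H, W) in [0, 1]. Per-pixel minimum over color channels,
    # followed by a local minimum filter (max-pool on the negation).
    min_c = img.min(dim=1, keepdim=True).values
    return -F.max_pool2d(-min_c, kernel_size=patch, stride=1, padding=patch // 2)

def dr_net_loss(gen, disc, dusty_syn, clean_syn, dusty_real, clean_real,
                lam_adv=0.1, lam_dc=0.05):
    # Supervised branch (regression minimization): L1 regression on
    # synthetic dusty/clean pairs to preserve structural integrity.
    loss_reg = F.l1_loss(gen(dusty_syn), clean_syn)

    # Unsupervised branch (adversarial consistency regularization):
    # push dedusted real images toward the clean-image distribution.
    dedusted = gen(dusty_real)
    logits = disc(dedusted)
    loss_adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Dark-channel term: dedusted outputs should match the dark-channel
    # statistics of real clean images.
    loss_dc = F.l1_loss(dark_channel(dedusted).mean(),
                        dark_channel(clean_real).mean())

    return loss_reg + lam_adv * loss_adv + lam_dc * loss_dc

In a full training loop the discriminator would be updated alternately with its own real/fake loss, and the weights lam_adv and lam_dc would be tuned on a validation set.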