2025/12/5
Hossein Bevrani

Academic rank: Professor
ORCID: 0000-0003-4658-9095
Education: Ph.D.
Faculty: Faculty of Science
E-mail: hossein.Bevrani [at] uok.ac.ir

Research

Title
Artificial Neural Network Analysis Using Statistical Approaches
Type
Thesis
Keywords
statistical machine learning, interpretable deep neural networks, artificial neural networks, statistical tuning, statistical modeling, explainable artificial intelligence
Year
2024
Researchers
Mohamad Yamen Almohamad (Student), Hossein Bevrani (Primary Advisor), Ali Akbar Heydari (Advisor)

Abstract

Neural networks, often viewed as black boxes because they compose many functions and parameters, pose significant challenges for interpretability. This study addresses these challenges by exploring methods for interpreting neural networks from both theoretical and practical angles. First, we show that the neural network estimator \( f_n \) can be interpreted as a nonparametric regression model constructed as a sieved M-estimator. This view guarantees the weak convergence of \( f_n \) within the metric space \( (\Theta, d) \), providing a solid theoretical foundation for understanding neural networks.

Building on these theoretical insights, the study introduces statistical tests for assessing the importance of input variables, offering a clearer picture of their contributions to the model. Dimensionality reduction algorithms are also explored, highlighting their role in simplifying the model and enhancing both interpretability and accuracy.

Furthermore, we show that statistical confidence intervals improve model reliability by providing more robust estimates. Statistical tests are also employed to evaluate and interpret the performance of individual neurons, identifying their contribution to classification tasks and providing insight into the network's functioning.

To validate these theoretical findings, simulations were conducted and applied to the IDC and Iris datasets. These experiments illustrate the practical utility of the proposed methods and confirm the effectiveness of the neural network estimator in real-world applications. This study contributes to the emerging field of Explainable Artificial Intelligence by presenting methodologies for interpreting traditional deep artificial neural networks through statistical frameworks, thereby facilitating a better understanding of the relationship between inputs and outputs and of the performance of individual network components.
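The abstract does not state the exact form of the input-importance tests, so the following is only an illustrative sketch, not the thesis's method: a permutation-based importance check on a tiny hand-written network, where shuffling an informative input column should raise the loss far more than shuffling a noise column. All names, the network size, and the synthetic data here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y depends on feature 0 only; feature 1 is pure noise.
X = rng.normal(size=(400, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=400)

# Tiny one-hidden-layer tanh network trained by full-batch gradient descent.
H = 8
W1 = rng.normal(scale=0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H);      b2 = 0.0

def forward(X):
    return np.tanh(X @ W1 + b1) @ W2 + b2

lr = 0.05
for _ in range(2000):
    Z = np.tanh(X @ W1 + b1)
    err = Z @ W2 + b2 - y                      # d(0.5*MSE)/d(pred), up to 1/n
    gW2 = Z.T @ err / len(y); gb2 = err.mean()
    dZ = np.outer(err, W2) * (1 - Z**2)        # backprop through tanh
    gW1 = X.T @ dZ / len(y); gb1 = dZ.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def mse(pred):
    return np.mean((pred - y) ** 2)

# Permutation importance: shuffle one column, measure the increase in loss.
base = mse(forward(X))
importance = []
for j in range(2):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])       # break feature j's link to y
    importance.append(mse(forward(Xp)) - base)

print("loss increase per permuted feature:", [round(v, 4) for v in importance])
```

The informative feature's score dominates the noise feature's, which is the qualitative pattern an input-importance test is meant to detect; a formal version would attach a null distribution and p-value to these score differences.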