This research focuses on developing a question-answering system for Sorani Kurdish, a low-resource language, using transformer-based deep-learning models: BERT, GPT, and T5. The main objective of the system is to provide accurate and relevant answers to user queries despite the limited resources available for processing the language. A dataset of 1,000 Sorani Kurdish question-answer pairs was loaded and preprocessed for training and evaluating the transformer models. Model performance was assessed with common metrics: accuracy, precision, recall, and F1 score. The evaluation results indicate that BERT achieved the best performance, with an accuracy of 0.98 and high precision and recall scores. T5 ranked second with an accuracy of 0.86 and an F1 score of 0.83, while GPT performed considerably worse and would require further optimization. These findings suggest that transformer models, especially BERT and T5, are well suited to question answering in low-resource languages.
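The metrics reported above can be computed in a standard way once model predictions and reference answers are aligned. The following is a minimal sketch, assuming predictions and gold answers are available as parallel lists of label indices (for example, exact-match answer IDs); the function name, the use of scikit-learn, and the macro averaging are illustrative assumptions, not the paper's actual evaluation code.

```python
# Hypothetical evaluation sketch (not the paper's implementation).
# Assumes gold and predicted answers are encoded as parallel label lists.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_qa(gold_labels, predicted_labels):
    """Compute accuracy and macro-averaged precision, recall, and F1."""
    accuracy = accuracy_score(gold_labels, predicted_labels)
    precision, recall, f1, _ = precision_recall_fscore_support(
        gold_labels, predicted_labels, average="macro", zero_division=0
    )
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy usage example (values are illustrative, not the reported results):
print(evaluate_qa([0, 1, 2, 1], [0, 1, 2, 2]))
```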