Nonnegative Matrix Factorization (NMF), as a representation learning model, produces part-based representations with interpretable features and can be applied to a variety of problems, such as text clustering. Prior findings indicate that the NMF model with Kullback-Leibler divergence (NMFk) performs promisingly on text clustering tasks. However, existing NMF-based text clustering methods are formulated as decoder-only latent models and lack a verification mechanism. Recently, self-representation techniques have been applied to a wide range of tasks, enabling models to learn and verify representations of their input data autonomously. This paper proposes a self-representation factorization model for text clustering that incorporates semantic information into its learning process. The Semantic-aware Encoder-Decoder NMF model based on Kullback-Leibler divergence (SEDNMFk) integrates encoder and decoder factorizations into a single Kullback-Leibler cost function, so that the two factorizations mutually verify and refine each other, resulting in more distinct clusters. To further strengthen the semantic properties of the method, we add a tailored semantic regularization term to the model. Owing to its autoencoder-like architecture and its use of contextual information, SEDNMFk produces more informative word embeddings whose generalization ability extends to out-of-sample data. We present an efficient and effective optimization algorithm based on multiplicative update rules to solve the proposed unified model. Experimental results on seven well-known datasets show that the proposed SEDNMFk model outperforms state-of-the-art text clustering methods in both fully observed and out-of-sample settings.
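For context on the optimization style the abstract refers to, the sketch below shows the standard Lee-Seung multiplicative updates for KL-divergence NMF, the building block that NMFk-style methods extend. This is a minimal illustrative sketch, not the SEDNMFk model itself: the function name `kl_nmf` and all parameters are assumptions, and the encoder factorization and semantic regularization terms of the proposed unified objective are not reproduced here.

```python
import numpy as np

def kl_nmf(X, k, n_iter=200, eps=1e-10, seed=0):
    """Minimal KL-divergence NMF via Lee-Seung multiplicative updates.

    Factorizes a nonnegative term-document matrix X (m words x n docs)
    as X ~ W @ H with W >= 0 (word-topic basis) and H >= 0
    (topic-document coefficients). Illustrative only; the full SEDNMFk
    objective adds an encoder factorization and a semantic regularizer.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps  # basis factor
    H = rng.random((k, n)) + eps  # coefficient factor
    for _ in range(n_iter):
        WH = W @ H + eps
        # H <- H * (W^T (X / WH)) / (W^T 1)
        H *= (W.T @ (X / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        # W <- W * ((X / WH) H^T) / (1 H^T)
        W *= ((X / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H
```

Under these updates the generalized KL divergence between X and WH is nonincreasing; for clustering, each document is typically assigned to the topic with the largest coefficient in its column of H.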