In many real-world networks, nodes are associated with multiple labels, making multilabel classification a critical yet challenging task in graph representation learning. Existing methods often overlook label interdependencies or fail to integrate label information effectively into node embeddings. In this study, we propose a novel approach that leverages SimHash-based encoding and image generation from label contexts to improve multilabel node classification. Specifically, node labels are encoded using SimHash to capture semantic correlations, and the resulting binary codes are transformed into grayscale images that preserve label proximity information. A convolutional neural network (CNN) is then applied to these images to learn discriminative node representations. This design enables our method to uncover latent relationships among labels while maintaining robustness and scalability. Experiments on benchmark datasets, including BlogCatalog, Flickr, CiteSeer, and Cora, demonstrate that our approach consistently outperforms state-of-the-art baselines in terms of Micro-F1, Hamming Loss, and accuracy.
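The pipeline described above — SimHash encoding of a node's label set, followed by reshaping the binary code into a grayscale image suitable for a CNN — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random-hyperplane SimHash variant, the 64-bit code length, and the 8x8 image size are all assumptions chosen for the example.

```python
import numpy as np

def simhash(label_vector, num_bits=64, seed=0):
    """SimHash a binary label-membership vector via random hyperplanes.

    Each output bit is the sign of a random projection of the label
    vector, so nodes with overlapping label sets tend to receive
    codes with small Hamming distance (locality sensitivity).
    This random-projection variant is an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((num_bits, len(label_vector)))
    return (planes @ label_vector > 0).astype(np.uint8)

def code_to_grayscale(code, side=8):
    """Reshape a flat binary code into a side x side image in {0, 255}."""
    assert len(code) == side * side
    return (code.reshape(side, side) * 255).astype(np.uint8)

# Toy label-membership vectors over 8 possible labels.
a = np.array([1, 1, 0, 1, 0, 0, 0, 0], dtype=float)
b = np.array([1, 1, 0, 1, 1, 0, 0, 0], dtype=float)  # shares 3 labels with a
c = np.array([0, 0, 1, 0, 0, 1, 1, 1], dtype=float)  # disjoint from a

ha, hb, hc = simhash(a), simhash(b), simhash(c)
print("Hamming(a, b):", int(np.sum(ha != hb)))  # expected small
print("Hamming(a, c):", int(np.sum(ha != hc)))  # expected larger
img = code_to_grayscale(ha)
print("image shape:", img.shape)  # this image would feed the CNN
```

Because similar label sets map to nearby codes, the resulting grayscale images preserve label proximity, which is what lets a CNN learn discriminative representations from them.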