AIARec: Adaptive intent-aware augmentation for graph contrastive learning recommendation method
Fiumara, Giacomo; De Meo, Pasquale
2025-01-01
Abstract
Graph Contrastive Learning (GCL) has recently emerged as a powerful paradigm for recommender systems. However, its practical adoption is hindered by two critical challenges: (a) the limited representational capacity and interpretability of recommendation models; and (b) the inherent flaws of data augmentation strategies in intent-disentanglement learning, which often introduce misleading self-supervised signals due to noise. To address these issues, we propose AIARec, a novel framework that integrates adaptive augmentation with intent-aware modeling, targeting challenge (a). Our approach mitigates the sparsity of implicit feedback in bipartite graphs by introducing a Gaussian-distribution-based graph generation strategy for robust node feature encoding. Furthermore, we design an adaptive feature-level noise perturbation mechanism, governed by an embedding table, which judiciously guides the reconstruction of the bipartite graph using latent intent information. This mechanism not only prevents excessive noise perturbation but also accentuates the intrinsic intent features of users and items, thereby addressing challenge (b). To further refine the learned representations, we develop a two-domain-aware graph contrastive learning framework that optimizes the consistency and uniformity of node embeddings across domains. Extensive experiments on real-world datasets (e.g., Yelp and Amazon-Book) show that AIARec outperforms 16 state-of-the-art baselines (e.g., BIGCF and LightGCN) on Recall and NDCG. By explicitly modeling user-item interactions through interpretable intent factors, AIARec advances both the performance and the explainability of GCL-based recommender systems, offering a principled solution to the challenges of noisy augmentation and sparse interactions.

| File | Size | Format | |
|---|---|---|---|
| 2025-kbs-aiarec.pdf (authorized users only) · Type: Publisher's version (PDF) · License: All rights reserved | 4.16 MB | Adobe PDF | View/Open · Request a copy |
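The abstract's adaptive feature-level noise perturbation and two-view contrastive objective can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not AIARec's actual method: the function names (`perturb_embeddings`, `info_nce`), the sign-aligned unit-norm noise (a SimGCL-style choice), and the InfoNCE formulation are all stand-ins for the paper's adaptive, embedding-table-governed mechanism.

```python
import numpy as np

def perturb_embeddings(emb, eps=0.1, rng=None):
    """Sketch of feature-level noise perturbation (hypothetical).

    Each node embedding gets a random perturbation of norm `eps`,
    sign-aligned with the embedding so the noise stays in the same
    orthant (a SimGCL-style choice, not AIARec's exact mechanism).
    """
    rng = rng or np.random.default_rng(0)
    noise = np.sign(emb) * np.abs(rng.normal(size=emb.shape))
    noise /= np.linalg.norm(noise, axis=1, keepdims=True)  # unit rows
    return emb + eps * noise

def info_nce(view1, view2, tau=0.2):
    """InfoNCE contrastive loss between two perturbed views."""
    v1 = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    v2 = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    logits = v1 @ v2.T / tau          # pairwise cosine similarities
    pos = np.diag(logits)             # same node in both views = positive pair
    return -np.mean(pos - np.log(np.sum(np.exp(logits), axis=1)))

# Two noisy views of the same embeddings pulled together by the loss.
emb = np.random.default_rng(1).normal(size=(8, 16))
loss = info_nce(perturb_embeddings(emb, rng=np.random.default_rng(2)),
                perturb_embeddings(emb, rng=np.random.default_rng(3)))
```

In this sketch `eps` plays the role that the paper assigns to its adaptive, intent-governed noise scale: a smaller `eps` yields views closer to the original graph signal, while a larger one risks the excessive perturbation the abstract warns against.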
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.


