Bao Feilong


2024

Knowledge Graph-Enhanced Recommendation with Box Embeddings
Liang Qiuyu | Wang Weihua | Lv Lei | Bao Feilong
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

“Knowledge graphs are used to alleviate the problems of data sparsity and cold starts in recommendation systems. However, most existing approaches ignore the hierarchical structure of the knowledge graph. In this paper, we propose a box embedding method for knowledge graph-enhanced recommendation systems. Specifically, the box embedding represents not only the interaction between the user and the item, but also the head entity, the tail entity and the relation between them in the knowledge graph. Then the interaction between the item and the corresponding entity is calculated by a multi-task attention unit. Experimental results show that our method provides a large improvement over previous models in terms of Area Under Curve (AUC) and accuracy on publicly available recommendation datasets from three different domains.”
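The box-embedding idea in this abstract can be illustrated with a small sketch: users, items, or entities are represented as axis-aligned boxes, and a pair is scored by the (soft) volume of the overlap between their boxes. The code below is an assumption-laden illustration in PyTorch, not the paper's model; the names (BoxEmbedding, intersection_log_volume) and the center/offset parameterization are hypothetical stand-ins, and the multi-task attention unit is omitted.

```python
# Minimal sketch of scoring with box embeddings (axis-aligned hyperrectangles).
# Illustrative only; not the authors' model. All names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BoxEmbedding(nn.Module):
    """Represents each id as a box: a center vector plus a positive offset."""

    def __init__(self, num_ids: int, dim: int):
        super().__init__()
        self.center = nn.Embedding(num_ids, dim)
        self.offset = nn.Embedding(num_ids, dim)  # softplus keeps widths positive

    def forward(self, ids: torch.Tensor):
        c = self.center(ids)
        o = F.softplus(self.offset(ids))
        return c - o, c + o  # lower and upper corners of the box


def intersection_log_volume(box_a, box_b, eps: float = 1e-6) -> torch.Tensor:
    """Soft log-volume of the intersection of two boxes; larger = stronger match."""
    (a_lo, a_hi), (b_lo, b_hi) = box_a, box_b
    lo = torch.max(a_lo, b_lo)
    hi = torch.min(a_hi, b_hi)
    side = F.softplus(hi - lo)  # soft clamp so empty overlaps still give gradients
    return torch.log(side + eps).sum(dim=-1)


# Usage: score a user-item pair by the overlap of their boxes.
users, items = BoxEmbedding(1000, 32), BoxEmbedding(5000, 32)
score = intersection_log_volume(users(torch.tensor([3])), items(torch.tensor([42])))
```

In this toy setup, a larger overlap log-volume stands in for a stronger user-item (or item-entity) interaction.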

2022

Interactive Mongolian Question Answer Matching Model Based on Attention Mechanism in the Law Domain
Peng Yutao | Wang Weihua | Bao Feilong
Proceedings of the 21st Chinese National Conference on Computational Linguistics

“The Mongolian question answer matching task is challenging, since Mongolian is a low-resource language and its complex morphological structures lead to data sparsity. In this work, we propose an Interactive Mongolian Question Answer Matching Model (IMQAMM) based on an attention mechanism for Mongolian question answering systems. The key parts of the model are interactive information enhancement and max-mean pooling matching. Interactive information enhancement contains sequence enhancement and multi-cast attention. Sequence enhancement aims to provide the subsequent encoder with an enhanced sequence representation, and multi-cast attention is designed to generate scalar features through multiple attention mechanisms. Max-mean pooling matching is used to obtain the matching vectors for aggregation. Moreover, we introduce a Mongolian morpheme representation to better learn the semantic features. The model was evaluated on a Mongolian corpus that contains question-answer pairs of various categories in the law domain. Experimental results demonstrate that our proposed Mongolian question answer matching model significantly outperforms baseline models.”
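As a rough illustration of the max-mean pooling matching described above (not the paper's IMQAMM implementation), the sketch below compares question and answer token representations by cosine similarity and aggregates the per-token matches with max and mean pooling; the function name and the four-value aggregation are hypothetical choices made for the example.

```python
# Hedged sketch of max-mean pooling matching: compare each question token
# against all answer tokens, then aggregate with max and mean pooling.
import torch
import torch.nn.functional as F


def max_mean_pooling_match(question: torch.Tensor, answer: torch.Tensor) -> torch.Tensor:
    """question: (Lq, d), answer: (La, d) token representations.

    Returns a fixed-size matching vector built from cosine similarities.
    """
    q = F.normalize(question, dim=-1)
    a = F.normalize(answer, dim=-1)
    sim = q @ a.t()                      # (Lq, La) token-to-token cosine similarity
    max_match = sim.max(dim=1).values    # best-matching answer token per question token
    mean_match = sim.mean(dim=1)         # average match per question token
    # Pool over question tokens to obtain a compact matching vector.
    return torch.stack([
        max_match.max(), max_match.mean(),
        mean_match.max(), mean_match.mean(),
    ])


# Usage with random encoder outputs standing in for real Mongolian encodings.
vec = max_mean_pooling_match(torch.randn(12, 64), torch.randn(20, 64))
```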