Abstract
[Objective] This paper explores the explainability mechanisms of explainable recommendation models from two perspectives: embedded and post-hoc. [Literature scope] Google Scholar and CNKI were searched with the keywords "explainable recommendation", "interpretable recommendation", and "explainable AI"; combined with topic screening, intensive reading, and citation tracing, a total of 61 representative papers on explainability methods were obtained. [Method] Embedded explainable recommendation methods are discussed and analyzed from four perspectives: knowledge graphs, deep learning, attention mechanisms, and multi-task learning. Post-hoc explainable recommendation methods are discussed and analyzed from five perspectives: predefined templates, comments or sentences, natural language generation, reinforcement learning, and knowledge graphs. The explainability methods are compared in detail with respect to their underlying logic, performance characteristics, and limitations, and the pressing open problems in explainability research are then surveyed. [Results] Explainability can effectively improve the persuasiveness of a recommender system and the user experience, and is an important step toward transparent and trustworthy recommendation. [Limitations] Evaluation metrics for explainability algorithms are not analyzed in depth. [Conclusion] Although existing explainability methods can meet the explanation needs of many applications to a certain extent, many challenges remain in research on conversational interactive explanation and causal explanation.