%A Zhu Lu
%A Tian Xiaomeng
%A Cao Sainan
%A Liu Yuanyuan
%T Subspace Cross-modal Retrieval Based on High-Order Semantic Correlation
%0 Journal Article
%D 2020
%J Data Analysis and Knowledge Discovery
%R 10.11925/infotech.2096-3467.2019.0912
%P 84-91
%V 4
%N 5
%U https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/abstract/article_4844.shtml
%8 2020-05-25
%X [Objective] This paper transforms heterogeneous multi-modal data into an isomorphic representation, aiming to bridge the semantic gap and improve the accuracy of cross-modal retrieval. [Methods] First, we determined the high-order semantic correlations among the multi-modal data. Then, we combined the annotation and structural information of the multi-modal data. Finally, we transformed the data of different modalities into an isomorphic form for direct retrieval. [Results] We evaluated our method on three public datasets: WIKI, NUS-WIDE and XMedia. The average MAP values obtained by our method were 0.111 3, 0.091 0 and 0.185 0 higher than the best results of CCA, JGRHML, SCM and JFSSL. [Limitations] Our method is not applicable to semi-supervised or unsupervised data. [Conclusions] The proposed method effectively improves the accuracy of cross-modal retrieval.
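For readers unfamiliar with the common-subspace setup the abstract describes, the minimal sketch below projects paired image and text features into a shared subspace and scores retrieval with MAP. It uses the CCA baseline named in the abstract, not the paper's high-order semantic correlation method; the synthetic features, dimensions, and the mean_average_precision helper are assumptions introduced only for illustration.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, d_img, d_txt, n_classes = 200, 128, 64, 5

# Synthetic paired image/text features that share a latent class structure.
labels = rng.integers(0, n_classes, size=n)
centers = rng.normal(size=(n_classes, 16))
latent = centers[labels] + 0.1 * rng.normal(size=(n, 16))
X_img = latent @ rng.normal(size=(16, d_img)) + 0.05 * rng.normal(size=(n, d_img))
X_txt = latent @ rng.normal(size=(16, d_txt)) + 0.05 * rng.normal(size=(n, d_txt))

# Project both modalities into a shared (isomorphic) subspace via CCA.
cca = CCA(n_components=10)
Z_img, Z_txt = cca.fit(X_img, X_txt).transform(X_img, X_txt)

def mean_average_precision(queries, gallery, q_labels, g_labels):
    # Rank gallery items by cosine similarity; an item counts as relevant if it
    # shares the query's semantic label, as in standard cross-modal MAP evaluation.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = q @ g.T
    aps = []
    for i in range(len(q)):
        order = np.argsort(-sims[i])
        relevant = (g_labels[order] == q_labels[i]).astype(float)
        if relevant.sum() == 0:
            continue
        precision_at_k = np.cumsum(relevant) / (np.arange(len(relevant)) + 1)
        aps.append((precision_at_k * relevant).sum() / relevant.sum())
    return float(np.mean(aps))

# Image-to-text and text-to-image retrieval in the shared subspace.
print("MAP img->txt:", round(mean_average_precision(Z_img, Z_txt, labels, labels), 4))
print("MAP txt->img:", round(mean_average_precision(Z_txt, Z_img, labels, labels), 4))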