Abstract: Traditional context-aware movie recommendation models use only text data; the information obtained from a single modality is limited and cannot fully resolve the problems caused by data sparsity. To address this, a multimodal movie recommendation model that fuses text and image data (VLPMF) is proposed. First, VLPMF integrates a Long Short-Term Memory network (LSTM) with Probabilistic Matrix Factorization (PMF). Second, image features extracted by VGG16 are incorporated into PMF from a probabilistic perspective and a fusion layer is constructed, in which the fused text and image features are used to produce the predicted rating. Finally, comparative experiments on the Movielens-1M, Movielens-10M, and Amazon AIV datasets show that the Root Mean Square Error (RMSE) of the VLPMF model is 1.26, 1.51, and 4.30 percentage points lower, respectively, than that of the best model in the comparison experiments.
Keywords: recommendation system; image content; deep convolutional neural network; probabilistic matrix factorization model
CLC number: TP391
Document code: A

Funding: National Natural Science Foundation of China (12371508); National Natural Science Foundation of China (11701370); Shanghai "Systems Science" Peak Discipline Construction Project

Multimodal Movie Recommendation Model Integrating Context and Visual Information
ZHU Kun, LIU Jiang, NI Feng, ZHU Jiayi

(Business School, University of Shanghai for Science and Technology, Shanghai 200093, China)
zhukun812@163.com; jliu113@126.com; nifeng@usst.edu.cn; 274648524@qq.com

Abstract: Traditional context-based movie recommendation models use only text data, so the information obtained from a single modality is limited and cannot fully alleviate the problems caused by data sparsity. To address these problems, this paper proposes a multimodal movie recommendation model (VLPMF) that integrates text and image data. First, VLPMF integrates Long Short-Term Memory (LSTM) and Probabilistic Matrix Factorization (PMF). Second, image features extracted by VGG16 are incorporated into PMF from a probabilistic perspective to construct a fusion layer, where the fused text and image features are combined to predict ratings. Finally, comparative experiments conducted on the Movielens-1M, Movielens-10M, and Amazon AIV datasets demonstrate that the Root Mean Square Error (RMSE) of the VLPMF model is reduced by 1.26 percentage points, 1.51 percentage points, and 4.30 percentage points, respectively, compared with the best-performing model in the experiments.
Keywords: recommendation system; image content; deep convolutional neural network; probabilistic matrix factorization model
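To make the fusion idea summarized in the abstract concrete, the following is a minimal sketch, not the authors' implementation: the feature dimensions, the linear projections W_t and W_v, and the noise scale are hypothetical choices. It illustrates how an LSTM text feature and a VGG16 image feature could be projected into the PMF latent space, combined with Gaussian noise in the spirit of PMF's item prior, and used to predict a rating as the inner product of user and item latent vectors.

```python
import numpy as np

# Minimal sketch (not the paper's code): fusing text and image features into a
# PMF-style item latent vector. All dimensions and weights below are assumed.

rng = np.random.default_rng(0)

K = 50          # latent dimension of PMF (assumed)
D_TEXT = 200    # dimension of an LSTM text feature (assumed)
D_IMG = 4096    # dimension of a VGG16 fully connected image feature (assumed)

# Stand-ins for features produced by the text and image encoders for one movie.
text_feat = rng.normal(size=D_TEXT)   # placeholder for LSTM output
img_feat = rng.normal(size=D_IMG)     # placeholder for VGG16 output

# Fusion layer: project both modalities into the latent space and combine them.
W_t = rng.normal(scale=0.01, size=(K, D_TEXT))  # hypothetical text projection
W_v = rng.normal(scale=0.01, size=(K, D_IMG))   # hypothetical image projection
sigma_v = 0.1                                   # item-side noise, as in PMF's Gaussian prior

item_latent = W_t @ text_feat + W_v @ img_feat + rng.normal(scale=sigma_v, size=K)

# User latent factors drawn from a zero-mean Gaussian, as in standard PMF.
user_latent = rng.normal(scale=0.1, size=K)

# The predicted rating is the inner product of user and item latent vectors.
predicted_rating = float(user_latent @ item_latent)
print(f"predicted rating (unscaled): {predicted_rating:.3f}")
```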