Instructing and Prompting Large Language Models for Explainable Cross-domain Recommendations

Oct 8, 2024·
Alessandro Petruzzelli
,
Cataldo Musto
,
Lucrezia Laraspata
,
Ivan Rinaldi
,
Marco De Gemmis
,
Pasquale Lops
,
Giovanni Semeraro
· 1 min read
Abstract
This paper presents a strategy for explainable cross-domain recommendations (CDR) using large language models (LLMs). CDR is challenging due to data sparsity, as it requires extensive labeled data across both source and target domains, which is hard to collect. Our approach leverages the knowledge in LLMs to bridge these domains and provide personalized recommendations. We developed a pipeline to (a) instruct an LLM for CDR tasks, (b) design a personalized prompt based on user preferences and target items, and (c) generate recommendations and explanations using zero-shot and one-shot settings. Experimental results show our method outperforms state-of-the-art baselines.
Type
Publication
In Proceedings of the 18th ACM Conference on Recommender Systems

This study explores how to improve cross-domain recommendation systems using large language models (LLMs). Cross-domain recommendation systems (CDRs) help users receive personalized recommendations across different areas, such as suggesting books based on a user's movie preferences. However, these systems often suffer from data sparsity, making it difficult to gather enough labeled data from both domains (source and target) to train the models effectively.

To address this, we propose a strategy that uses the knowledge encoded in LLMs to bridge the gap between different domains. The key idea is to prompt the LLM with user preferences from one domain (e.g., movie ratings) and let it transfer that knowledge to recommend items in another domain (e.g., books). The paper outlines a workflow that involves instructing the LLM for CDR tasks through instruction-based fine-tuning, together with carefully designed personalized prompts that elicit recommendations along with natural language explanations.
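The prompt-design step described above can be sketched as follows. Note that the template wording, the rating format, and the function name are illustrative assumptions for exposition, not the exact prompt used in the paper:

```python
# Illustrative sketch of a cross-domain prompt builder: source-domain
# preferences are serialized into the prompt, and the LLM is asked for a
# target-domain recommendation plus a natural language explanation.
# All template text and field names here are assumptions, not the
# paper's actual prompt.

def build_cdr_prompt(source_domain, target_domain, user_ratings, candidate_items):
    """Compose a prompt that passes source-domain preferences to the LLM
    and asks for an explained recommendation in the target domain."""
    prefs = "\n".join(
        f"- {title}: rated {rating}/5" for title, rating in user_ratings
    )
    candidates = "\n".join(f"- {item}" for item in candidate_items)
    return (
        f"A user has rated the following {source_domain}:\n{prefs}\n\n"
        f"Based on these preferences, recommend one of the following "
        f"{target_domain} and explain your choice in natural language:\n"
        f"{candidates}\n"
    )

# Example usage with hypothetical items:
prompt = build_cdr_prompt(
    source_domain="movies",
    target_domain="books",
    user_ratings=[("Blade Runner", 5), ("The Matrix", 4)],
    candidate_items=["Neuromancer", "Pride and Prejudice"],
)
print(prompt)
```

The resulting string would then be sent to the instructed LLM; how the model's free-text answer is parsed back into a ranked list is a separate step not shown here.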

The experimental results show that this approach outperforms other state-of-the-art models, both in zero-shot (no prior domain-specific training) and one-shot settings (limited training), making it a promising direction for more explainable AI-driven recommendation systems.
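The difference between the two settings can be illustrated as a prompting pattern: in the one-shot case, a single worked example precedes the actual task. The exemplar text below is hypothetical and only shows the mechanics, not the paper's actual prompts:

```python
# Sketch of zero-shot vs. one-shot prompting.
# Zero-shot: the model receives only the task description.
ZERO_SHOT_TASK = (
    "A user rated the movie 'Blade Runner' 5/5.\n"
    "Recommend a book and explain why."
)

# One-shot: a single worked example (hypothetical format) is prepended,
# demonstrating the expected recommendation-plus-explanation output.
ONE_SHOT_EXEMPLAR = (
    "Example:\n"
    "A user rated the movie 'The Matrix' 5/5.\n"
    "Recommendation: 'Neuromancer' -- the user enjoys cyberpunk themes.\n\n"
)

one_shot_prompt = ONE_SHOT_EXEMPLAR + ZERO_SHOT_TASK
print(one_shot_prompt)
```

Because the exemplar is supplied in the prompt rather than through gradient updates, the one-shot setting still requires no additional model training.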