Project R-16212

Title

Interpretable rule-based recommender systems (Research)

Abstract

Recommender systems help users identify the most relevant items in a large catalogue. In recent independent evaluation studies of recommender systems, baseline association rule models have proven competitive with more complex state-of-the-art methods. Moreover, rule-based recommender algorithms have several attractive properties, such as interpretability, the ability to identify local patterns, and support for context-aware predictions. First, we survey various existing recommendation algorithms with different biases and prediction strategies and evaluate them independently. Besides accuracy, we evaluate coverage and diversity and analyse the structure of the resulting rule models, an analysis that is essential for understanding interpretability. Second, we propose to bridge the gap between recommender systems and recent multi-label classification methods that learn an optimal set of rules with respect to a custom loss function. We study whether a decision-theoretic framework can guarantee the identification of optimal rules for recommender systems under a loss function combining accuracy, complexity and diversity. We account for characteristics unique to recommender datasets, such as skewed distributions, implicit feedback and large scale. Finally, we develop new rule-based algorithms that are both interpretable and more accurate, and apply them to healthcare recommendations to improve intensive care unit monitoring and to online bandit learning for large-scale e-commerce and news websites.
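
To illustrate the kind of rule-based recommendation referred to above (a minimal sketch, not the project's own method), the following Python example mines single-antecedent association rules from hypothetical implicit-feedback sessions and ranks unseen items by rule confidence; the data, thresholds and function names are illustrative assumptions only.

from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical implicit-feedback sessions: the set of items each user interacted with.
sessions = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c", "d"}, {"b", "d"}]
min_support, min_confidence = 2, 0.5  # illustrative thresholds

item_count = Counter(i for s in sessions for i in s)
pair_count = Counter(frozenset(p) for s in sessions for p in combinations(sorted(s), 2))

# Keep rules "antecedent -> consequent" that meet the support and confidence thresholds.
rules = defaultdict(dict)
for pair, support in pair_count.items():
    x, y = tuple(pair)
    for antecedent, consequent in ((x, y), (y, x)):
        confidence = support / item_count[antecedent]
        if support >= min_support and confidence >= min_confidence:
            rules[antecedent][consequent] = confidence

def recommend(profile, k=3):
    """Rank unseen items by the most confident rule fired by the user's profile."""
    scores = {}
    for seen in profile:
        for item, confidence in rules.get(seen, {}).items():
            if item not in profile:
                scores[item] = max(scores.get(item, 0.0), confidence)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend({"a"}))  # e.g. ['b', 'c'] for the toy sessions above

Because every recommendation is backed by an explicit rule and its confidence, the prediction can be traced back to the local pattern that produced it, which is the interpretability property the project investigates.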

Period of project

01 November 2025 - 31 October 2026