
Collaborative filtering

Collaborative filtering is a core technique in recommender systems that predicts a user's preferences for items by analyzing patterns in the collective behaviors and ratings of many users, under the assumption that users who have exhibited similar tastes in the past are likely to exhibit similar preferences in the future. This method relies solely on user-item interaction data, such as explicit ratings or implicit feedback like clicks and purchases, without incorporating item attributes or user demographics. By aggregating opinions from like-minded users or similar items, collaborative filtering generates personalized recommendations, making it particularly effective for discovering unexpected items that align with a user's interests.

The origins of collaborative filtering trace back to the early 1990s, with foundational work on systems like Tapestry and GroupLens, which applied it to automated news filtering and recommendations. Over the decades, it has evolved into two primary categories: memory-based approaches, which use similarity computations on the raw user-item interaction matrix, and model-based approaches, which learn latent factors through techniques like matrix factorization. Memory-based methods include user-based filtering, where recommendations derive from the preferences of similar users, and item-based filtering, which predicts ratings based on similarities between items rated by the user; the latter offers superior scalability and prediction quality on large datasets. Model-based methods, such as singular value decomposition (SVD) or probabilistic latent semantic analysis (PLSA), compress the interaction matrix into lower-dimensional representations to handle sparsity and improve accuracy.

Collaborative filtering powers diverse applications, including e-commerce platforms such as Amazon for product suggestions, streaming services such as Netflix for movie and show recommendations, and social media platforms for personalized feeds, where it has demonstrated significant improvements in user engagement and revenue. Despite its successes, the technique faces key challenges, including data sparsity (most user-item interactions are absent), cold-start problems for new users or items lacking history, and scalability issues in processing massive datasets. Recent advancements integrate deep neural networks (DNNs), such as neural collaborative filtering (NCF) and graph neural networks (GNNs), to model nonlinear relationships and high-order connections, addressing these limitations while enhancing performance on implicit feedback data. Hybrid systems combining collaborative filtering with content-based or knowledge-based methods further mitigate these drawbacks, providing more robust recommendations across domains.

Fundamentals

Definition and Principles

Collaborative filtering (CF) is a technique used in recommender systems to predict a user's interest in an item by leveraging the preferences and behaviors of multiple users, rather than relying on the item's inherent attributes. Unlike content-based filtering, which recommends items similar to those a user has previously liked based on item features such as genres or keywords, CF aggregates collective user feedback to identify patterns of similarity among users or items. This approach assumes that users who have agreed on certain items in the past are likely to share tastes in the future, enabling personalized predictions even for items without explicit content descriptions.

At its core, CF operates on a user-item matrix, typically represented as a matrix R where rows correspond to users, columns to items, and entries denote interactions such as explicit ratings, clicks, or purchases; missing values indicate unobserved interactions. A fundamental principle is the similarity assumption: users exhibiting similar interaction patterns with items will likely prefer comparable items, allowing the system to impute missing entries by drawing on collective behavior. Understanding CF requires familiarity with basic matrix notation, where the interaction matrix forms a high-dimensional space and vector representations of users or items facilitate similarity computations.

In user-based CF, predictions for a target user u on item i are generated using a neighborhood of similar users N(u). The predicted rating \hat{y}_{u,i} is computed as the user's average rating adjusted by the weighted deviations of neighbors' ratings:

\hat{y}_{u,i} = \mu_u + \frac{\sum_{v \in N(u)} \operatorname{sim}(u,v) \cdot (r_{v,i} - \mu_v)}{\sum_{v \in N(u)} |\operatorname{sim}(u,v)|}

where \mu_u and \mu_v are the average ratings of users u and v, r_{v,i} is user v's rating for item i, and \operatorname{sim}(u,v) measures similarity between users.

Similarity measures are essential for identifying relevant neighbors. For explicit ratings, Pearson correlation is commonly used, defined as:

\operatorname{sim}(u,v) = \frac{\sum_{i \in I}(r_{u,i} - \mu_u)(r_{v,i} - \mu_v)}{\sqrt{\sum_{i \in I}(r_{u,i} - \mu_u)^2} \sqrt{\sum_{i \in I}(r_{v,i} - \mu_v)^2}}

where I is the set of items rated by both users; this normalizes for individual rating biases. For binary or implicit interactions (e.g., the presence or absence of clicks), cosine similarity is preferred, treating user profiles as vectors and computing:

\operatorname{sim}(u,v) = \frac{\sum_{i} r_{u,i} r_{v,i}}{\sqrt{\sum_{i} r_{u,i}^2} \sqrt{\sum_{i} r_{v,i}^2}}

which emphasizes overlap in interacted items without bias adjustment.
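To ground the formulas above, here is a small, self-contained Python sketch of user-based prediction with Pearson similarity on a toy rating matrix; the matrix values, the neighborhood size k, and the helper names are illustrative assumptions, not a reference implementation.

```python
import numpy as np

# Toy user-item rating matrix: rows = users, columns = items, 0 = unrated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

def pearson_sim(u, v, R):
    """Pearson correlation over the items co-rated by users u and v."""
    both = (R[u] > 0) & (R[v] > 0)
    if both.sum() < 2:
        return 0.0
    ru, rv = R[u, both], R[v, both]
    ru_c, rv_c = ru - ru.mean(), rv - rv.mean()
    denom = np.linalg.norm(ru_c) * np.linalg.norm(rv_c)
    return float(ru_c @ rv_c / denom) if denom > 0 else 0.0

def predict(u, i, R, k=2):
    """Mean-centered, similarity-weighted average over the k most similar users who rated item i."""
    mu_u = R[u][R[u] > 0].mean()
    raters = [v for v in range(R.shape[0]) if v != u and R[v, i] > 0]
    neighbors = sorted(((pearson_sim(u, v, R), v) for v in raters), reverse=True)[:k]
    num = sum(s * (R[v, i] - R[v][R[v] > 0].mean()) for s, v in neighbors)
    den = sum(abs(s) for s, _ in neighbors)
    return mu_u + num / den if den > 0 else mu_u

print(predict(0, 2, R))  # predicted rating of user 0 for item 2
```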

Historical Development

The term "collaborative filtering" was coined in 1992 by David Goldberg and colleagues in their work on the Tapestry system, an experimental prototype developed at Xerox PARC to manage and filter streams of electronic documents such as Usenet news articles and email. Tapestry relied on human collaborators to manually annotate documents with keywords or relevance judgments, enabling the system to route information to users based on shared interests and behaviors rather than content analysis alone. This approach marked the initial conceptualization of collaborative filtering as a mechanism for leveraging collective user input to improve information discovery in overwhelming digital environments. One of the earliest practical implementations came in 1994 with the GroupLens system, developed by Paul Resnick and colleagues at the , which applied collaborative filtering to automate recommendations for netnews articles. GroupLens used statistical predictions based on user ratings to suggest articles, demonstrating the feasibility of automated collaborative filtering in a distributed, high-volume setting and addressing challenges like scalability in early online communities. The project later expanded to other domains, including the MovieLens system launched in 1997, which focused on movie recommendations using explicit user ratings to predict preferences. This evolution highlighted collaborative filtering's adaptability from news to entertainment content. In the early 2000s, the field advanced with the introduction of model-based techniques, particularly (SVD) for matrix factorization, which addressed limitations in memory-based methods by reducing dimensionality and uncovering latent factors in user-item interactions. For instance, Sarwar et al. explored incremental SVD algorithms to enable scalable collaborative filtering on large datasets, improving accuracy and efficiency over pure neighbor-based approaches. Concurrently, to enhance further, researchers shifted from user-based to item-based collaborative filtering, as detailed by Sarwar et al. in 2001, where recommendations were generated by computing similarities between items rather than users, reducing computational overhead in dense datasets. These developments established a foundation for handling growing data volumes in recommender systems. A pivotal milestone occurred with the competition from 2006 to 2009, which challenged participants to improve movie rating predictions using a of over 100 million anonymized ratings and spurred innovations in and model-based collaborative filtering. The competition emphasized blending memory-based neighborhood methods with advanced matrix factorization, achieving up to a 10% improvement in prediction accuracy over Netflix's baseline Cinematch system and accelerating the adoption of latent factor models. Until the mid-2000s, memory-based methods had dominated due to their simplicity and interpretability, but the Netflix efforts solidified the transition toward more sophisticated model-based techniques for real-world scalability.

Core Methods

Memory-based Collaborative Filtering

Memory-based collaborative filtering encompasses non-parametric techniques that generate recommendations directly from the user-item interaction data, such as rating matrices, without training underlying models. These methods compute similarities between users or items in real time to form neighborhoods and predict preferences based on aggregated ratings from those neighborhoods. Introduced in early recommender systems like GroupLens for filtering news, this approach leverages the collective wisdom encoded in user feedback to infer tastes.

The two primary subtypes are user-based and item-based collaborative filtering. In user-based methods, the system identifies users whose rating profiles are similar to the active user and uses their ratings to predict scores for unrated items. This subtype relies on the assumption that users with comparable past behaviors will exhibit similar preferences in the future. Item-based methods, conversely, focus on similarities between items, aggregating the active user's ratings on similar items to predict scores for target items; this shifts the computation from user-user to item-item comparisons, often yielding better scalability for datasets with many users but fewer items per user.

Central to these methods is the k-nearest neighbors (k-NN) algorithm for selecting relevant neighborhoods from the interaction data. Similarity between pairs, whether users or items, is quantified using measures like Pearson correlation or cosine similarity. The Pearson correlation coefficient accounts for users' rating biases by centering ratings around their means:

\text{sim}(u,v) = \frac{\text{cov}(r_u, r_v)}{\sigma_u \sigma_v}

where r_u and r_v are rating vectors for users u and v, \text{cov} is the covariance over co-rated items, and \sigma denotes standard deviation. Cosine similarity treats ratings as vectors in a high-dimensional space:

\text{sim}(u,v) = \frac{r_u \cdot r_v}{\|r_u\| \|r_v\|}

with the dot product and Euclidean norms computed over co-rated items. These metrics enable identification of the top-k most similar entities, typically with k ranging from 5 to 50 depending on data density.

Predictions are formed as weighted averages of neighbors' ratings, emphasizing closer matches via similarity weights. For a user-based prediction \hat{r}_{u,i} on item i:

\hat{r}_{u,i} = \bar{r}_u + \frac{\sum_{v \in N^k(u)} \text{sim}(u,v) (r_{v,i} - \bar{r}_v)}{\sum_{v \in N^k(u)} |\text{sim}(u,v)|}

where N^k(u) is the k-nearest user neighborhood and \bar{r} is the mean rating; item-based variants adjust the formula to weight by item similarities instead. To mitigate sparsity, where users rate few items, thresholds are applied, such as including only neighbors with positive similarity or sufficient co-ratings (e.g., at least 3 shared items), ensuring reliable aggregations.

These methods offer advantages in simplicity, requiring no offline training, and in interpretability, as predictions trace back to specific similar users or items for explanation. However, they depend heavily on dense data; in sparse matrices, neighborhoods may be empty or unreliable, limiting accuracy and scalability for large-scale systems. Empirical evaluations on datasets like MovieLens show user-based approaches achieving mean absolute errors around 0.9 for movie ratings when using Pearson similarity with k=30, though performance degrades with increasing sparsity.
A practical example is movie recommendation: for an active user who rated films like The Matrix highly, the system identifies the top-5 similar users via k-NN with Pearson similarity, then predicts ratings for unseen movies like Inception as the weighted average of those users' scores, potentially surfacing it if the aggregate exceeds 4 stars. This direct use of raw interactions exemplifies the method's reliance on observed data for personalized suggestions.
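The item-based variant described above can be sketched the same way: item-item cosine similarities are precomputed (offline, in practice) and a prediction is a similarity-weighted average of the user's own ratings. The toy matrix and function names below are assumptions for illustration.

```python
import numpy as np

# Toy rating matrix: rows = users, columns = items, 0 = unrated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

def item_similarities(R):
    """Cosine similarity between item columns, treating missing ratings as zeros."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0            # avoid division by zero for never-rated items
    S = (R.T @ R) / np.outer(norms, norms)
    np.fill_diagonal(S, 0.0)           # an item should not be its own neighbor
    return S

def predict_item_based(u, i, R, S, k=2):
    """Similarity-weighted average of user u's ratings on the k items most similar to i."""
    rated = np.flatnonzero(R[u])
    top = rated[np.argsort(S[i, rated])[-k:]]
    den = np.abs(S[i, top]).sum()
    return float(S[i, top] @ R[u, top] / den) if den > 0 else 0.0

S = item_similarities(R)               # computed offline in production systems
print(predict_item_based(0, 2, R, S))  # predicted rating of user 0 for item 2
```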

Model-based Collaborative Filtering

Model-based collaborative filtering employs models trained on user-item interaction data to uncover underlying patterns, offering advantages over memory-based approaches in handling sparse datasets by compressing interactions into latent factors. The core idea is to extract latent factors that represent user preferences and item characteristics in a lower-dimensional space, enabling efficient prediction of missing ratings. A prominent method is matrix factorization, which approximates the user-item interaction matrix R (where R_{ui} denotes the rating by user u for item i) as R \approx U V^T, with U as the user factor matrix and V as the item factor matrix, both learned through optimization to minimize reconstruction error.

Key variants include adaptations of singular value decomposition (SVD), which decomposes R into low-rank matrices to capture principal components of user-item affinities, as introduced in scalable collaborative filtering contexts. Non-negative matrix factorization (NMF) extends this by enforcing non-negativity constraints on the factors, ensuring interpretable representations where components align with intuitive features like genres or tastes, and has been shown to improve accuracy in rating prediction tasks. Probabilistic Matrix Factorization (PMF) introduces a Bayesian perspective, modeling ratings as draws from a Gaussian distribution centered at the inner product of user and item factors:

p(R \mid U, V) \propto \prod_{u,i: R_{ui} \text{ observed}} \mathcal{N}(R_{ui} \mid u_u^T v_i, \alpha^{-1})

with Gaussian priors on U and V to regularize the model and handle uncertainty in sparse data.

Training typically optimizes the factorization objective, such as minimizing the squared error \| R - U V^T \|^2 plus regularization terms to prevent overfitting, using methods like Alternating Least Squares (ALS), which iteratively fixes one matrix while solving for the other via least squares, or Stochastic Gradient Descent (SGD), which updates factors incrementally based on individual observations for faster convergence on large datasets. ALS excels in distributed settings due to its parallelizable nature, while SGD suits online updates but requires careful tuning of learning rates. Bayesian extensions, such as Bayesian PMF, incorporate full posterior inference via Markov chain Monte Carlo sampling to quantify uncertainty in latent factors, enabling robust predictions even with noisy or limited data by automatically adjusting model capacity. For instance, in the Netflix Prize challenge, matrix factorization decomposed millions of ratings into user taste vectors and item genre vectors, achieving significant improvements in root mean squared error (RMSE) over baselines.
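A minimal sketch of the SGD training loop just described, minimizing squared error with L2 regularization over a handful of toy observations; the hyperparameters and data are illustrative assumptions, not tuned values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 5, 4, 2

# Observed (user, item, rating) triples standing in for the sparse matrix R.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 3, 5.0), (3, 3, 4.0), (4, 2, 5.0)]

U = 0.1 * rng.standard_normal((n_users, n_factors))  # user factor matrix
V = 0.1 * rng.standard_normal((n_items, n_factors))  # item factor matrix
lr, reg = 0.05, 0.02                                 # learning rate, L2 strength

for epoch in range(200):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]                  # residual on a single observation
        u_old = U[u].copy()                    # cache before the in-place update
        U[u] += lr * (err * V[i] - reg * U[u]) # gradient step on user factors
        V[i] += lr * (err * u_old - reg * V[i])# gradient step on item factors

print(U[0] @ V[1])  # reconstruction of the observed rating (user 0, item 1, value 3.0)
```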

Hybrid Collaborative Filtering

Hybrid collaborative filtering integrates multiple collaborative filtering techniques, or combines collaborative filtering with other methods such as content-based approaches, to capitalize on their respective strengths while addressing individual limitations like data sparsity and limited coverage. This integration aims to produce more robust recommendations by leveraging diverse data sources and algorithmic paradigms, resulting in higher accuracy and coverage compared to standalone methods.

Common combination strategies include weighted hybrids, which blend prediction scores from different models using a linear combination; switching hybrids, which select the most appropriate method based on contextual criteria; mixed hybrids, which aggregate recommendations from multiple techniques into a unified list; and cascade hybrids, which apply one method sequentially to refine the outputs of another. For instance, in a weighted hybrid, predictions can be computed as \hat{y} = \alpha \cdot s_{\text{memory}} + (1 - \alpha) \cdot s_{\text{model}}, where \alpha is a tunable parameter balancing memory-based and model-based scores. These strategies were systematically categorized in early surveys of recommender systems.

Early examples of hybrid approaches include content-boosted collaborative filtering, which augments sparse ratings with content-based predictions to generate pseudo-ratings, thereby improving predictions on datasets like MovieLens. In modern implementations, weighting parameters like \alpha are often learned through optimization techniques, such as gradient descent or validation-set search, to adaptively balance components based on data characteristics (see the sketch below). Notable applications include the Netflix Prize competition, where winning ensembles blended latent factor models with neighborhood-based methods, achieving a root mean squared error (RMSE) of 0.8567 on the test set, a significant improvement over pure baselines. Hybrids particularly excel in sparse data environments by incorporating auxiliary information, reducing cold-start issues and enhancing prediction accuracy; for example, content-boosted variants have demonstrated RMSE reductions of up to 10-20% over pure collaborative filtering on benchmark datasets. Evaluation typically employs metrics like RMSE to quantify prediction error, with hybrids consistently outperforming individual methods in cross-validation experiments across domains such as e-commerce and media recommendation.
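The following sketch illustrates the weighted-hybrid formula above: blending two component scores and choosing \alpha by grid search on held-out data. The scores and ratings are toy values, and grid search is just one simple way \alpha might be learned.

```python
import numpy as np

def blend(s_memory, s_model, alpha):
    """Weighted hybrid: y_hat = alpha * s_memory + (1 - alpha) * s_model."""
    return alpha * s_memory + (1 - alpha) * s_model

# Toy validation data: scores from two component recommenders plus true ratings.
s_memory = np.array([4.2, 3.1, 2.0, 4.8])
s_model = np.array([3.8, 3.5, 2.6, 4.4])
y_true = np.array([4.0, 3.4, 2.5, 4.6])

# Learn alpha by grid search minimizing validation RMSE.
alphas = np.linspace(0.0, 1.0, 101)
rmse = [np.sqrt(np.mean((blend(s_memory, s_model, a) - y_true) ** 2)) for a in alphas]
best_alpha = alphas[int(np.argmin(rmse))]
print(f"best alpha = {best_alpha:.2f}, validation RMSE = {min(rmse):.3f}")
```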

Advanced Variants

Deep Learning-based Approaches

Deep learning-based approaches to collaborative filtering have gained prominence since the mid-2010s, enabling the modeling of complex, non-linear user-item interactions that traditional linear methods often overlook. These methods leverage neural networks to learn dense representations of users and items from sparse interaction data, improving recommendation accuracy in scenarios with implicit feedback. A foundational contribution is the Neural Collaborative Filtering (NCF) framework introduced by He et al. in 2017, which embeds user and item IDs into low-dimensional dense vectors and processes them through multi-layer perceptrons (MLPs) to predict interactions, outperforming matrix factorization on datasets like MovieLens by capturing higher-order feature interactions (see the sketch below).

Key architectures in this domain extend matrix factorization using autoencoders, which learn latent representations by reconstructing input rating matrices. For instance, the AutoRec model by Sedhain et al. (2015) employs a simple autoencoder to directly reconstruct user or item rating vectors from interaction data, demonstrating superior performance over biased matrix factorization on sparse datasets through its ability to capture non-linearity via non-linear activation functions. Building on this, variational autoencoders (VAEs) introduce probabilistic modeling for generative recommendations, particularly suited to implicit feedback. In VAE-based collaborative filtering, as proposed by Liang et al. (2018), an encoder approximates the posterior distribution q(\mathbf{z} \mid \mathbf{x}) to match the true posterior p(\mathbf{z} \mid \mathbf{x}), optimized via the evidence lower bound (ELBO) loss to balance reconstruction and regularization, enabling the generation of diverse item suggestions while addressing data sparsity.

For sequence-aware recommendations, recurrent neural networks (RNNs) such as LSTMs and GRUs capture temporal dependencies in user behavior, making them well suited to session-based collaborative filtering. The GRU4Rec model by Hidasi et al. (2016) uses gated recurrent units (GRUs) to process item sequences within sessions and predict the next item, achieving significant improvements (up to 35% in recall) over non-sequential baselines on session-based datasets by modeling short-term user intent through sequential transitions.

Multi-modal extensions integrate auxiliary data like text descriptions or images to enrich embeddings, addressing limitations of interaction-only models. Convolutional neural networks (CNNs) extract spatial features from images, while Transformers handle sequential text, fusing these with collaborative signals; for example, the Self-supervised Multimodal Graph Convolutional Network (SMGCN) by Kim et al. (2024) combines CNNs for visual and textual modalities with self-supervision to learn cross-modal preferences, outperforming state-of-the-art multimodal CF models on real-world datasets.

Recent advancements up to 2025 emphasize Transformer-based models for scalable sequential modeling. The Self-Attentive Sequential Recommendation model (SASRec) by Kang and McAuley (2018) applies self-attention mechanisms to user action histories, identifying relevant past items for next-item prediction without recurrence and outperforming RNNs on long sequences in benchmarks like Amazon and MovieLens. Extensions, such as the MetaBERTTransformer4Rec (MBT4R) architecture by Al-Ghezi et al. (2025), incorporate BERT-like pre-training with Transformers for collaborative filtering, achieving state-of-the-art results in personalized recommendation by modeling both sequential and static features.
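As an illustration of the NCF idea referenced above (ID embeddings concatenated and passed through an MLP to score a user-item pair), here is a minimal, hypothetical PyTorch sketch; the class name TinyNCF, the layer sizes, and the toy data are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TinyNCF(nn.Module):
    """Minimal NCF-style model: user/item ID embeddings fed through an MLP."""
    def __init__(self, n_users, n_items, dim=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, users, items):
        x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # interaction probability

model = TinyNCF(n_users=100, n_items=500)
users = torch.tensor([0, 1, 2])
items = torch.tensor([10, 20, 30])
labels = torch.tensor([1.0, 0.0, 1.0])        # implicit feedback: interacted or not
probs = model(users, items)
loss = nn.functional.binary_cross_entropy(probs, labels)
loss.backward()                               # gradients for one training step
print(float(loss))
```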

Context-aware Collaborative Filtering

Context-aware collaborative filtering extends traditional collaborative filtering by incorporating contextual information to generate more relevant recommendations tailored to specific user situations. Context is defined as any information that can be used to characterize the situation of an entity, such as a person, place, or object relevant to the interaction between a user and an application, including the user and the application themselves. In recommender systems, this includes factors like time, location, weather, or device type that influence user preferences. A foundational taxonomy views context-aware recommendation as a three-dimensional (3D) model involving users, items, and context, extending the traditional two-dimensional user-item matrix. More advanced multidimensional taxonomies treat context as multiple dimensions, allowing richer representations such as user-location-time-item interactions.

Methods for injecting context into collaborative filtering include contextual bandits and tensor factorization. Contextual bandits adapt recommendations by balancing exploration and exploitation based on situational contexts, enabling dynamic adjustments to user feedback in online environments. Tensor factorization models the ratings as a three-dimensional tensor approximated as R \approx U \times I \times C, where U, I, and C represent latent factor matrices for users, items, and contexts, respectively, capturing interactions across dimensions via multilinear products. Key techniques for integration are pre-filtering, which selects only context-relevant data subsets before applying standard collaborative filtering; post-filtering, which generates baseline predictions and then adjusts them based on contextual similarity; and model-based approaches, which learn joint embeddings of users, items, and contexts during training.

Examples illustrate these techniques in practice. For time-aware movie recommendations, recent ratings receive higher weights in neighborhood-based collaborative filtering, reflecting evolving user tastes over time, as implemented in extensions of matrix factorization models on datasets like MovieLens (see the sketch below). In location-based music or app recommendations, the Frappe framework uses contextual factors such as geographic position and weather to filter or model preferences, recommending mobile applications suited to the user's current situation from a dataset of over 96,000 interactions. These approaches address challenges like temporal drift in user preferences, where standard collaborative filtering fails to account for how tastes change with seasons or daily routines, leading to improved accuracy in dynamic scenarios.
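A minimal sketch of the time-weighting idea mentioned above: a neighbor's contribution is multiplied by an exponential decay on the rating's age. The 90-day half-life is an arbitrary assumption for illustration.

```python
import time

def time_decayed_weight(sim, rating_timestamp, now=None, half_life_days=90.0):
    """Scale a neighbor's similarity by exponential decay on the rating's age."""
    now = time.time() if now is None else now
    age_days = (now - rating_timestamp) / 86400.0
    return sim * 0.5 ** (age_days / half_life_days)

# A rating from ~180 days ago counts about a quarter as much as one from today.
print(time_decayed_weight(0.8, time.time() - 180 * 86400))
```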

Graph-based and Knowledge-enhanced Methods

Graph-based methods in collaborative filtering represent user-item interactions as bipartite graphs, where users and items are nodes connected by edges indicating interactions such as ratings or purchases. This structure captures relational dependencies beyond simple similarity computations, enabling the propagation of information across connected components to infer preferences from sparse data. Graph Neural Networks (GNNs) extend this by learning node embeddings through iterative message passing, aggregating features from neighboring nodes to refine representations.

A seminal approach is LightGCN, which simplifies traditional GCNs by focusing solely on neighborhood aggregation without nonlinear activations or feature transformations, achieving superior performance on benchmarks like MovieLens and Amazon datasets. In LightGCN, embeddings are propagated layer-wise using symmetric normalization to balance the degrees of users and items, formulated as:

\mathbf{e}_u^{(k+1)} = \sum_{i \in \mathcal{N}_u} \frac{1}{\sqrt{|\mathcal{N}_u| |\mathcal{N}_i|}} \mathbf{e}_i^{(k)}

where \mathbf{e}_u^{(k)} is the embedding of user u at layer k, and \mathcal{N}_u denotes its neighbors. The final recommendation score is computed by combining the embeddings from all layers (a weighted average in the original formulation) and applying an inner product. This method outperforms prior GCN-based models by up to 16% in recall and NDCG metrics, demonstrating efficiency on large-scale graphs.

Knowledge-enhanced methods augment graph-based collaborative filtering by incorporating external knowledge graphs (KGs), such as ontologies or entity relation triples (head, relation, tail), to enrich item and user representations with semantic context. For instance, KGAT constructs a collaborative knowledge graph by fusing user-item interactions with KG triples, then employs attentive propagation to weigh high-order connectivities, enabling reasoning over multi-hop paths like "user likes movie directed by actor." This integration improves recommendation accuracy and explainability, particularly on datasets like Amazon-Book and Yelp2018, where it surpasses baselines by modeling preferences through semantic links.

Recent advances from 2023 onward include diffusion-based models, such as DiffKG, which generate augmented KG triples via a generative diffusion model to denoise and enhance relational data for recommendation, addressing noise in sparse knowledge graphs while preserving structural integrity. Additionally, federated GNN frameworks like GNN4FR enable privacy-preserving training across distributed devices by synchronizing embeddings and gradients without sharing raw interactions, achieving performance comparable to centralized LightGCN on datasets like Gowalla. These approaches handle cold-start problems effectively; for example, KG links provide side information for new users or items lacking interaction history, as evidenced by KGAT's gains on sparse user groups in Yelp2018.

In practice, Pinterest's PinSage applies GNNs to a massive pin-board graph with 3 billion nodes and 18 billion edges, using random walks for efficient neighborhood sampling and graph convolutions for embeddings, resulting in 30-100% engagement lifts in A/B tests for related-pin recommendations. Unlike pure collaborative filtering, which relies implicitly on co-occurrence patterns, graph-based and knowledge-enhanced methods explicitly model relational structures, capturing transitive preferences and external semantics for more robust predictions across diverse domains.
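A compact NumPy sketch of the LightGCN propagation rule above, run on a toy random bipartite graph; the symmetric normalization and the layer-averaged final embeddings follow the formulation in the text, while the sizes and random data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, n_layers = 4, 5, 8, 3

# Bipartite interactions: A[u, i] = 1 if user u interacted with item i.
A = rng.integers(0, 2, size=(n_users, n_items)).astype(float)

# Symmetric normalization 1 / sqrt(|N_u| |N_i|), guarding against empty rows/columns.
d_user = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
d_item = np.maximum(A.sum(axis=0, keepdims=True), 1.0)
A_hat = A / np.sqrt(d_user) / np.sqrt(d_item)

E_user = rng.standard_normal((n_users, dim))
E_item = rng.standard_normal((n_items, dim))
layers_user, layers_item = [E_user], [E_item]

for _ in range(n_layers):
    # Pure neighborhood aggregation: no weight matrices, no nonlinearities.
    E_user, E_item = A_hat @ E_item, A_hat.T @ E_user
    layers_user.append(E_user)
    layers_item.append(E_item)

# Combine layer-wise embeddings (uniform average here) and score by inner product.
final_user = np.mean(layers_user, axis=0)
final_item = np.mean(layers_item, axis=0)
scores = final_user @ final_item.T
print(scores.shape)  # (n_users, n_items) recommendation scores
```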

Applications

Recommender Systems in E-commerce and Media

Collaborative filtering has been pivotal in e-commerce platforms, where it enables personalized product suggestions based on user behavior. A landmark implementation is Amazon's item-to-item collaborative filtering system, introduced in 2003, which recommends products by identifying similarities between items purchased or viewed by users, powering features like "customers who bought this also bought." This approach supports personalization by computing item similarities offline and applying them dynamically during user sessions, scaling efficiently to millions of products without relying on user-user comparisons.

In the media domain, collaborative filtering drives content discovery for streaming services. Netflix employs latent factor models, a form of model-based collaborative filtering, to predict user preferences for movies and TV shows by factoring the user-item interaction matrix into lower-dimensional representations of users and items. Similarly, Spotify integrates collaborative filtering into its music recommendation engine, combining user-item interactions with audio features to generate personalized playlists such as Discover Weekly, enhancing listener retention through tailored song sequences.

Beyond these pioneers, case studies illustrate broader adoption. Alibaba leverages session-based collaborative filtering to recommend products during short user sessions on its platforms, embedding sequential behaviors to predict next-item clicks without long-term user histories, as demonstrated in its deep interest network models. YouTube's video recommendation system blends collaborative filtering with deep learning in its candidate generation phase, using deep neural networks to rank videos based on user watch similarities, which accounts for a significant portion of viewer engagement.

Practitioners evaluate these systems using metrics tailored to ranking quality and business outcomes. Precision@K measures the proportion of relevant items in the top-K recommendations, while Normalized Discounted Cumulative Gain (NDCG) assesses ranking accuracy by discounting relevant items that appear lower in the list. A/B testing complements these offline metrics by comparing live user engagement and conversion rates between recommendation variants.

The business impact of collaborative filtering in these domains is substantial, with McKinsey attributing approximately 35% of Amazon's sales to recommendation-driven purchases as of 2013. Such systems boost user engagement and revenue by surfacing relevant content, transforming passive browsing into targeted interactions.
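A small sketch of the two offline ranking metrics just described, Precision@K and NDCG@K with binary relevance; the recommendation list and held-out items are toy data.

```python
import numpy as np

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    return len(set(recommended[:k]) & set(relevant)) / k

def ndcg_at_k(recommended, relevant, k):
    """DCG with binary relevance, normalized by the ideal ordering (IDCG)."""
    gains = [1.0 if item in relevant else 0.0 for item in recommended[:k]]
    dcg = sum(g / np.log2(rank + 2) for rank, g in enumerate(gains))
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0

recommended = ["A", "B", "C", "D", "E"]   # system's ranked list
held_out = {"B", "E", "F"}                # items the user actually engaged with
print(precision_at_k(recommended, held_out, 5))
print(ndcg_at_k(recommended, held_out, 5))
```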

Social Web and Personalized Services

Collaborative filtering (CF) plays a pivotal role in the social web by leveraging user interactions to recommend connections and content, enhancing network effects through personalized suggestions based on collective behaviors. On platforms like Twitter (now X) and Facebook, CF algorithms analyze follows, likes, and shares to suggest potential friends, improving user engagement by connecting individuals with similar interests or interaction patterns. For instance, early implementations on Twitter used hybrid content and CF approaches to recommend users to follow, achieving higher precision in suggestions compared to purely content-based methods. Similarly, on Facebook, CF frameworks incorporate interaction intensity, such as mutual likes and comments, to generate friend recommendations that adapt to evolving user similarities. These systems handle viral trends by propagating implicit signals from rapid shares and retweets, amplifying content reach within social graphs.

In personalized services, CF extends to professional and relational domains, utilizing network structures to match users with opportunities or partners. LinkedIn employs item-based CF through its Browsemaps infrastructure to recommend jobs, drawing on users' connections, views, and endorsements to infer preferences and suggest roles aligned with career trajectories. This approach has been integral to LinkedIn's recommendation engine since the mid-2010s, enabling scalable matching of users to positions based on collective behaviors. In dating apps like Tinder, CF facilitates swipe-based matching by treating user profiles and interactions as implicit ratings, recommending potential matches based on similarity in likes and swipes from comparable users; a user trial demonstrated that CF outperformed baseline methods in predicting mutual interest.

Unique to these applications is the use of implicit feedback from actions like shares, comments, and swipes, which provides richer signals than explicit ratings and addresses sparsity in social data. Additionally, social trust propagation enhances CF by weighting recommendations according to trusted connections, mitigating noise from unverified interactions and improving accuracy in trust-sensitive domains like social networking.

Prominent examples illustrate CF's integration with social dynamics. TikTok's For You Page employs CF augmented with graph-based methods to curate video feeds, analyzing user watches, likes, and shares alongside social connections to personalize content and foster viral trends through network effects. Reddit uses CF for subreddit suggestions, processing user subscriptions and upvotes to recommend communities, with early models showing effectiveness on sparse interaction data in guiding users to niche discussions. The evolution of CF in the social web traces back to foundational work like GroupLens in the 1990s, which applied CF to Usenet news for group-based filtering, evolving into sophisticated social graph integrations by the 2020s that incorporate implicit feedback and trust for dynamic, user-centric services.

Challenges and Limitations

Data Sparsity and Cold Start Problems

In collaborative filtering, data sparsity refers to the phenomenon where the user-item matrix contains a disproportionately large number of missing values, often exceeding 99% zeros, as observed in real-world datasets like the Netflix Prize data with approximately 100 million ratings across more than 8.5 billion possible entries. This sparsity arises because users typically interact with only a small fraction of available items due to limited time, awareness, or interest. Consequently, it hampers the computation of reliable user or item similarities, as neighborhood-based methods like k-nearest neighbors rely on overlapping ratings that are rarely sufficient, leading to degraded recommendation accuracy and coverage.

The cold-start problem exacerbates sparsity issues by preventing effective recommendations for entities lacking historical data. It manifests in three primary forms: user cold start, where new users have no or few interactions; item cold start, where new items receive minimal ratings; and system-wide cold start, occurring in nascent platforms with overall insufficient data. These scenarios disrupt collaborative filtering's core assumption of leveraging collective user behavior, often resulting in fallback to non-personalized strategies and reduced system utility, particularly for emerging products or content.

To quantify these challenges, key metrics include the sparsity ratio, calculated as 1 - \frac{\text{number of observed interactions}}{\text{number of users} \times \text{number of items}}, which directly measures matrix density (e.g., 0.9883 for the Netflix Prize data), and recommendation coverage, defined as the proportion of the item catalog that receives at least one recommendation across users, which often drops below 50% in sparse conditions due to reliance on popular items.

Basic mitigations for sparsity and cold start involve simple heuristics without advanced modeling. Default predictors, such as global or user-specific average ratings, provide baseline estimates for missing values to stabilize similarity calculations. Popularity-based fallbacks recommend top-rated or most-interacted items to new users, ensuring some utility despite lacking personalization. Incorporating demographic data, like age or location, offers initial profiling for new users to approximate preferences, though this shifts toward content-based approaches. For instance, in new product recommendations, content-based hybrids temporarily integrate item features (e.g., genre metadata) to bootstrap collaborative signals until ratings accumulate.
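The two metrics above are straightforward to compute; here is a toy NumPy sketch of the sparsity ratio and catalog coverage, with made-up interaction data and top-1 recommendation lists.

```python
import numpy as np

R = np.array([   # toy interaction matrix, 0 = no interaction
    [5, 0, 0, 0, 1],
    [0, 0, 4, 0, 0],
    [0, 2, 0, 0, 0],
])

n_users, n_items = R.shape
observed = np.count_nonzero(R)
sparsity = 1 - observed / (n_users * n_items)
print(f"sparsity ratio: {sparsity:.4f}")    # ~0.9883 for the Netflix Prize data

# Catalog coverage: share of items recommended to at least one user,
# here using made-up top-1 recommendation lists per user.
recommendations = {0: [2], 1: [0], 2: [2]}  # user id -> recommended item ids
covered = {item for recs in recommendations.values() for item in recs}
print(f"coverage: {len(covered) / n_items:.2f}")
```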

Scalability and Computational Issues

Collaborative filtering systems, particularly memory-based approaches like user-based methods, face significant scalability bottlenecks on large datasets. Computing pairwise similarities between users requires examining all user-item interactions, leading to a time complexity of O(n^2) where n is the number of users, which becomes prohibitive for millions of users because it demands excessive memory and computation for storing dense similarity matrices. Similarly, maintaining the full user-item matrix consumes substantial memory, exacerbating issues in the high-dimensional sparse spaces typical of recommender systems.

To address these challenges, several solutions have been developed. Dimensionality reduction techniques, such as singular value decomposition (SVD), compress the user-item matrix into lower-dimensional latent factors, reducing both storage requirements and computation time while preserving predictive accuracy; for instance, SVD can approximate the original matrix with far fewer parameters, enabling efficient similarity computations. Approximate nearest neighbor methods, including Locality-Sensitive Hashing (LSH), accelerate similarity searches by hashing similar items or users into the same buckets with high probability, avoiding exhaustive pairwise comparisons and achieving sublinear query times in high dimensions (see the sketch below). Distributed computing frameworks such as Hadoop and Spark parallelize the process across clusters, partitioning data for simultaneous similarity calculations and matrix operations, thus scaling to billions of ratings.

Real-time recommendation in collaborative filtering often contrasts incremental updates with batch retraining. Batch methods recompute models periodically on the full dataset, offering high accuracy but incurring long training times unsuitable for dynamic environments; incremental approaches, by contrast, update models only with new data, enabling near-real-time adaptation at the cost of slightly reduced accuracy due to partial optimization. This trade-off balances speed and accuracy, as incremental methods can process updates in seconds versus hours for batch retraining at large scales.

Practical examples illustrate these solutions' impact. For the Netflix Prize, parallel collaborative filtering with Alternating Least Squares (ALS), run on a compute cluster in a MapReduce-like fashion, trained on the 100 million ratings dataset and achieved an RMSE of 0.8985 while scaling across multiple machines. Shifting to item-based collaborative filtering, as opposed to user-based, precomputes item similarities offline (feasible because there are typically far fewer items than users), enabling faster online queries by aggregating a small number of item neighbors per recommendation.

Key metrics for evaluating scalability include training time and query latency. For example, on the Netflix dataset with 480,000 users and 17,000 items, standard user-based methods exhibit training times exceeding hours on single machines, while item-based variants with precomputed similarities reduce this to minutes; query latency for recommendations ranges from milliseconds in small-scale tests to sub-second levels in distributed setups, ensuring responsiveness for millions of daily users.
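To make the LSH idea above concrete, here is a random-hyperplane sketch in NumPy that buckets users by bit signatures, so candidate neighbors come from one bucket rather than the whole user base; the data and the 8-plane signature length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_planes = 1000, 200, 8

# Sparse implicit-feedback matrix: ~5% of entries are interactions.
R = (rng.random((n_users, n_items)) < 0.05).astype(float)

# Random-hyperplane LSH: users whose interaction vectors point in similar
# directions get identical bit signatures with high probability, which
# approximates cosine similarity without exhaustive pairwise comparison.
planes = rng.standard_normal((n_items, n_planes))
signatures = R @ planes > 0               # (n_users, n_planes) boolean sketches

buckets = {}
for u in range(n_users):
    buckets.setdefault(signatures[u].tobytes(), []).append(u)

# Candidate neighbors for user 0 come from one bucket, not all n_users profiles.
candidates = buckets[signatures[0].tobytes()]
print(f"{len(candidates)} candidates instead of {n_users}")
```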

Security, Privacy, and Bias Concerns

Collaborative filtering systems are vulnerable to shilling attacks, also known as profile injection attacks, where malicious users create fake profiles to manipulate recommendations by boosting or demoting specific items. These attacks often involve injecting profiles that mimic legitimate user behavior but systematically alter predicted ratings for target items, such as assigning high ratings to promote a product or low ratings to sabotage competitors. A seminal study demonstrated the effectiveness of such attacks on user-based collaborative filtering, showing that even a small number of fake profiles can significantly shift recommendation rankings. Detection methods typically rely on statistical anomalies in rating patterns, such as unusual rating variance, filler item distributions, or degree of similarity to average user profiles, which can identify suspicious injections with high accuracy in controlled experiments. Data sparsity can exacerbate these attacks by making it easier for fake profiles to influence sparse neighborhoods without detection.

Privacy concerns in collaborative filtering arise primarily from inference attacks, where adversaries reconstruct sensitive user profiles from observed recommendations or auxiliary data. For instance, attackers can infer a user's ratings or transaction history by analyzing temporal changes in recommendation lists, exploiting the system's reliance on user-item interactions to reveal undisclosed preferences. To mitigate these risks, differential privacy techniques have been integrated into collaborative filtering, such as adding Laplace noise to rating matrices or using exponential mechanisms for item selection, which bound the privacy leakage while preserving recommendation utility; studies show moderate privacy budgets (ε ≈ 1-5) maintain accuracy comparable to non-private baselines. These approaches ensure that the output of the filtering process reveals little about any individual user's data, addressing both membership and attribute inference threats in real-world deployments.

Bias in collaborative filtering manifests in several forms, including popularity bias, where the system disproportionately recommends mainstream items, neglecting the long tail of less popular ones due to stronger interaction signals from majority preferences. This creates a feedback loop amplifying exposure for popular content, as evidenced in multimedia recommenders, reducing visibility for niche products. Gray sheep users (those whose tastes diverge significantly from group norms) also suffer, as their sparse or atypical profiles result in poor recommendations, since collaborative methods rely on neighborhood formation that excludes atypical users. Additionally, the synonymy problem arises when conceptually similar items receive inconsistent ratings due to semantic ambiguities, causing the system to undervalue related content and perpetuate fragmented user experiences.

Diversity issues further compound these biases, as over-recommendation of popular items diminishes serendipity (the discovery of unexpected yet relevant content), leading to homogenized lists that reinforce echo chambers. Standard collaborative filtering tends to prioritize accuracy over variety, and diversity measures like intra-list dissimilarity reveal limited exposure to novel items in top-k recommendations. This reduces user satisfaction in long-term use, as repeated exposure to familiar popular fare stifles exploration.

Recent ethical concerns (2023-2025) emphasize fairness in collaborative filtering, with a growing focus on metrics like demographic parity to ensure equitable recommendation distributions across protected groups, such as gender or age. Demographic parity measures the difference in positive recommendation rates between groups, revealing disparities in biased datasets. Studies advocate debiasing techniques, like reweighting interactions or adversarial training, to achieve parity while maintaining overall performance, highlighting the need for fairness-aware evaluations in production systems.
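A minimal sketch of the Laplace-noise idea mentioned above: a user's rating vector is perturbed with noise scaled to the rating range divided by the privacy budget ε. The sensitivity choice and ε value are illustrative assumptions, not a production-ready mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_perturb(ratings, epsilon, r_min=1.0, r_max=5.0):
    """Add Laplace noise scaled to the rating range (global sensitivity) over epsilon."""
    sensitivity = r_max - r_min
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=ratings.shape)
    return np.clip(ratings + noise, r_min, r_max)

ratings = np.array([4.0, 5.0, 2.0, 3.5])
print(laplace_perturb(ratings, epsilon=1.0))  # noisy, privacy-preserving copy
```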

Future Directions

In recent years, self-supervised learning has emerged as a prominent trend in collaborative filtering (CF), particularly through contrastive learning techniques that generate embeddings from unlabeled data to enhance representation quality without relying on explicit labels. For instance, contrastive methods create positive and negative pairs from user-item graphs to learn robust embeddings, improving performance in sparse settings. A key example is the Self-supervised Contrastive Learning for Implicit Collaborative Filtering model, which leverages inherent structures for better implicit feedback modeling. Similarly, Disentangled Contrastive Collaborative Filtering disentangles user preferences to address supervision shortages, achieving superior performance on benchmarks like MovieLens.

Parallel to this, multimodal CF has gained traction by integrating diverse data types such as text, images, audio, and video to enrich user-item representations beyond traditional interaction matrices. These approaches fuse modalities via encoders and collaborative signals, enabling more nuanced recommendations in domains like e-commerce and media. The MM-GEF framework, for example, combines multimodal features through graph-based early fusion to refine item embeddings, demonstrating improved accuracy on multimodal datasets. A comprehensive survey highlights how such systems, including those using large language models for feature extraction, outperform unimodal baselines by capturing cross-modal semantics.

Innovations in privacy-preserving CF have advanced through federated learning paradigms, allowing decentralized training across devices while mitigating data leakage risks, with notable developments from 2023 onward. The Personalized Federated Collaborative Filtering approach uses variational autoencoders to maintain user-specific models without centralizing raw data, enhancing privacy in on-device scenarios. Complementing this, explainable CF has incorporated attention mechanisms to provide interpretable insights into recommendation rationales, such as highlighting influential user-item interactions. The XRec framework employs attention in large language models to generate explanations for CF outputs, bridging opacity gaps in neural recommenders.

From 2023 to 2025, diffusion models have introduced generative processes to CF, modeling user preferences as diffusion trajectories for more dynamic predictions. DiffRec, a pioneering diffusion-based recommender, applies denoising steps to interaction histories, outperforming GANs and VAEs on sparse data by simulating realistic preference evolutions. Graph-based extensions like the Graph Signal Diffusion Model further leverage spectral graph convolutions for collaborative signals, achieving state-of-the-art results on benchmark datasets. Concurrently, sentiment-aware CF has integrated natural language processing to extract review sentiments, refining ratings and preference estimates. Review-based systems, as surveyed in recent literature, incorporate aspect-level sentiment from text to augment CF, boosting accuracy on review-rich platforms.

Evaluation practices have evolved with refined offline and online metrics to better align with real-world performance, alongside extensions to benchmarks like MovieLens for diverse objectives. Advances include time-dependent metrics that penalize popularity bias in offline setups, improving agreement with online A/B tests. The MovieLens-32M extension provides data for evaluating recommendations against user watchlists, enabling assessments of real-world interest and helping mitigate bias under varied conditions. These developments, combined with zero-shot learning techniques, have notably mitigated cold-start issues by transferring knowledge from seen to unseen users or items via attribute-based embeddings. For example, model-agnostic zero-shot interest learning generalizes preferences to new users and items, showing improvements in real-world deployments.

Integration with Emerging Technologies

Collaborative filtering (CF) has increasingly intersected with large language models (LLMs) to enhance recommendation capabilities, particularly through prompt-based approaches that enable zero-shot recommendations without extensive training data. In these methods, LLMs like GPT variants are prompted with user interaction histories or item descriptions to generate personalized suggestions directly, leveraging the models' natural language understanding to infer preferences in novel scenarios. For instance, a 2024 framework utilizes LLMs as zero-shot recommenders by crafting prompts that incorporate collaborative signals, such as user-item interaction patterns, to rank items effectively on datasets like point-of-interest recommendations. This integration addresses cold-start problems in traditional CF by drawing on the LLM's pre-trained knowledge, achieving competitive performance with fewer parameters than conventional matrix factorization techniques.

Building on this, conversational recommenders powered by LLMs, akin to chatbot interfaces, facilitate interactive preference elicitation by engaging users in multi-turn dialogues to refine recommendations dynamically. These systems combine LLM-generated responses with underlying CF models to retrieve and rank items based on evolving user feedback, improving relevance through iterative prompting. A 2025 study demonstrates that integrating collaborative retrieval with LLMs in conversational settings boosts recommendation accuracy by 15-20% on benchmark datasets, as the LLM interprets nuanced queries while CF aggregates community preferences. Such hybrids enable more natural, context-aware interactions, extending beyond static predictions to dialogue-driven recommendation.

In edge computing and Internet of Things (IoT) environments, CF is adapted for on-device processing to prioritize mobile privacy and low-latency recommendations. Decentralized algorithms perform CF computations locally on user devices, minimizing data transmission to central servers and thus reducing the risks associated with sharing ratings. For example, a recent edge-cloud collaborative system decomposes CF models into components that run on resource-constrained devices, preserving user data locally while leveraging cloud aggregation for global updates, resulting in up to 30% lower communication overhead. Complementing this, graph neural networks (GNNs) facilitate real-time CF in IoT networks by simplifying propagation layers to focus on essential user-item connections, enabling efficient inference on edge hardware without sacrificing accuracy. These GNN variants, such as pruned models, support dynamic IoT applications like smart home recommendations by processing interaction streams with minimal computational footprint.

Quantum-inspired techniques offer promising optimizations for CF, particularly in solving complex similarity computations and matrix factorizations at scale. These approaches mimic quantum annealing to tackle quadratic unconstrained binary optimization problems inherent in neighborhood-based CF, accelerating nearest-neighbor searches over large datasets. A 2024 quantum nearest-neighbor algorithm for CF demonstrates superior scalability on sparse matrices, reducing convergence time by factors of 10 compared to classical methods while maintaining recommendation quality. Similarly, variational quantum Hopfield networks integrate quantum principles into associative memory for CF, enhancing pattern retrieval in user preference modeling and showing potential for handling high-dimensional data on emerging hardware.

Blockchain technology further bolsters CF security by enabling tamper-proof rating aggregation, where distributed ledgers record user interactions immutably to prevent manipulation. In blockchain-based CF, ratings are aggregated via consensus mechanisms across nodes, ensuring privacy through encryption and verifiable fairness in e-commerce recommendations, as validated in systems that improve trust without central vulnerabilities.

Looking ahead, ethical AI prospects in CF emphasize bias mitigation using reinforcement learning from human feedback (RLHF), where models fine-tuned on diverse inputs align recommendations with fairness criteria. RLHF adapts CF by rewarding debiased outputs, reducing popularity and demographic biases in suggestions, as seen in frameworks that incorporate evaluation prompts to assess and adjust model decisions iteratively. Sustainable AI addresses the environmental impact of large-scale CF models through model compression and efficient training, cutting energy consumption by 40-50% in recommender deployments without accuracy loss. However, research gaps persist in scalable multi-agent CF systems, where LLM-based agents collaborate across distributed environments; current limitations include coordination overhead and inconsistent signal propagation, hindering applications in dynamic, large-scale networks like IoT ecosystems.

References

  1. [1]
    [PDF] Item-Based Collaborative Filtering Recommendation Algorithms
    Recommender systems apply knowledge discovery techniques to the problem of making personalized recommendations for information, products or services during ...
  2. [2]
    [PDF] Collaborative Filtering Recommender Systems - Michael Ekstrand
    This survey aims to provide a broad overview of the current state of collaborative filtering research. In the next two sections, we discuss the core algorithms ...
  3. [3]
    None
    ### Summary of https://arxiv.org/pdf/2412.01378
  4. [4]
    None
    ### Summary of Collaborative Filtering Recommender System: Overview and Challenges
  5. [5]
    GroupLens - DOI
    No information is available for this page. · Learn whyMissing: PDF | Show results with:PDF
  6. [6]
    [PDF] Empirical Analysis of Predictive Algorithms for Collaborative Filtering
    In this paper we describe several algorithms designed for this task, in- cluding techniques based on correlation coef- ficients, vector-based similarity ...
  7. [7]
    Using collaborative filtering to weave an information tapestry
    Using collaborative filtering to weave an information tapestry. Authors: David Goldberg ... Published: 01 December 1992 Publication History. 2,798citation ...Missing: origins | Show results with:origins
  8. [8]
    GroupLens: an open architecture for collaborative filtering of netnews
    GroupLens is a system for collaborative filtering of netnews, to help people find articles they will like in the huge stream of available articles.
  9. [9]
    [PDF] XXXX The MovieLens Datasets: History and Context - GroupLens
    The MovieLens datasets, first released in 1998, describe movie preferences as <user, item, rating, timestamp> tuples from the MovieLens recommender system.
  10. [10]
    [PDF] Incremental Singular Value Decomposition Algorithms for Highly ...
    The dimensionality reduction approach in SVD can be very useful for the collaborative filtering process. SVD produces a set of uncorrelated eigenvectors.
  11. [11]
    Item-based collaborative filtering recommendation algorithms
    Index Terms. Item-based collaborative filtering recommendation algorithms ... View or Download as a PDF file. PDF. eReader. View online with eReader . eReader ...
  12. [12]
    [PDF] The Netflix Prize - Computer Science
    In October, 2006 Netflix released a dataset containing 100 million anonymous movie ratings and challenged the data mining, machine learning and computer science ...
  13. [13]
    [PDF] Empirical Analysis of Predictive Algorithms for Collaborative Filtering
    Breese David Heckerman Carl Kadie. May, 1998 revised October, 1998. Technical Report. MSR-TR-98-12. Microsoft Research. Microsoft Corporation. One Microsoft Way.
  14. [14]
    [PDF] MATRIX FACTORIZATION TECHNIQUES FOR RECOMMENDER ...
    The two primary areas of collaborative filtering are the neighborhood methods and latent factor models. Neighbor- hood methods are centered on computing the ...
  15. [15]
  16. [16]
    [PDF] Probabilistic Matrix Factorization - NIPS papers
    In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, ...
  17. [17]
    [PDF] Bayesian Probabilistic Matrix Factorization using Markov Chain ...
    In this paper we present a fully Bayesian treatment of the Probabilistic. Matrix Factorization (PMF) model in which model capacity is controlled automatically ...
  18. [18]
    [PDF] Hybrid Recommender Systems: Survey and Experiments
    Nov 13, 2014 · This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines ...
  19. [19]
    [PDF] Content-Boosted Collaborative Filtering for Improved ...
    In this paper, we present an elegant and effective frame- work for combining content and collaboration. Our approach uses a content-based predictor to enhance ...
  20. [20]
    [PDF] The BellKor Solution to the Netflix Grand Prize - GW Engineering
    Collaborative filtering models try to capture the interactions between users and items that produce the different rating values. However, many of the observed ...
  21. [21]
    [1708.05031] Neural Collaborative Filtering - arXiv
    Aug 16, 2017 · In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis ...
  22. [22]
    [1802.05814] Variational Autoencoders for Collaborative Filtering
    Feb 16, 2018 · Abstract:We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model ...
  23. [23]
    Session-based Recommendations with Recurrent Neural Networks
    Nov 21, 2015 · We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based ...
  24. [24]
    Self-supervised Multimodal Graph Convolutional Network for ...
    We develop a Self-supervised Multimodal Graph Convolutional Network (SMGCN), which aims to learn the cross-modal user preferences over multiple modalities.
  25. [25]
    [1808.09781] Self-Attentive Sequential Recommendation - arXiv
    Aug 20, 2018 · At each time step, SASRec seeks to identify which items are `relevant' from a user's action history, and use them to predict the next item.
  26. [26]
    A transformer-based architecture for collaborative filtering modeling ...
    Jul 8, 2025 · This study proposes a novel transformer-based architecture, MetaBERTTransformer4Rec(MBT4R), designed to outperform state of the art existing methods in the ...
  27. [27]
    Context-Aware Recommender Systems | AI Magazine
    Authors ; Gediminas Adomavicius University of Minnesota ; Bamshad Mobasher DePaul University ; Francesco Ricci Free University of Bozen-Bolzano ; Alexander Tuzhilin ...
  28. [28]
    [1502.03473] Collaborative Filtering Bandits - arXiv
    Feb 11, 2015 · In this work, we investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed ...
  29. [29]
    LightGCN: Simplifying and Powering Graph Convolution Network for ...
    Feb 6, 2020 · We propose a new model named LightGCN, including only the most essential component in GCN -- neighborhood aggregation -- for collaborative filtering.
  30. [30]
    KGAT: Knowledge Graph Attention Network for Recommendation
    May 20, 2019 · We propose a new method named Knowledge Graph Attention Network (KGAT) which explicitly models the high-order connectivities in KG in an end-to-end fashion.
  31. [31]
    DiffKG: Knowledge Graph Diffusion Model for Recommendation - arXiv
    Dec 28, 2023 · We propose a novel knowledge graph diffusion model for recommendation, referred to as DiffKG. Our framework integrates a generative diffusion model with a data ...
  32. [32]
    A Lossless GNN-based Federated Recommendation Framework
    Jul 25, 2023 · In this paper, we are the first to design a novel lossless federated recommendation framework based on GNN, which achieves full-graph training with complete ...
  33. [33]
    Graph Convolutional Neural Networks for Web-Scale Recommender ...
    Jun 6, 2018 · We develop a data-efficient Graph Convolutional Network (GCN) algorithm PinSage, which combines efficient random walks and graph convolutions to generate ...
  34. [34]
    Amazon.com recommendations: item-to-item collaborative filtering
    Amazon uses item-to-item collaborative filtering, which scales independently of the number of customers and items, unlike traditional collaborative filtering.
  35. [35]
    [PDF] Deep Neural Networks for YouTube Recommendations
    Sep 15, 2016 · The candidate generation network only provides broad personalization via collaborative filtering. The similarity between users is expressed in ...
  36. [36]
    Evaluating collaborative filtering recommender systems
    In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis ...
  37. [37]
    How retailers can keep up with consumers | McKinsey
    Oct 1, 2013 · Already, 35 percent of what consumers purchase on Amazon and 75 percent of what they watch on Netflix come from product recommendations ...
  38. [38]
    [PDF] Revisiting the Netflix Prize the Decade After - CS229
    The Netflix dataset contains more than 100 million observations, yet is sparse, since the matrix could hold more than 9 billion entries. Additionally, we are ...
  39. [39]
    [PDF] Resolving Data Sparsity and Cold Start in Recommender Systems
    Data sparsity arises from the phenomenon that users in general rate only a limited number of items; cold start refers to the difficulty in bootstrapping ...
  40. [40]
    Data Sparsity Issues in the Collaborative Filtering Framework
    Aug 7, 2025 · We conclude that the quality of collaborative filtering recommendations is highly dependent on the sparsity of available data. Furthermore, we ...
  41. [41]
    Addressing the Cold-Start Problem in Recommender Systems ...
    Mar 27, 2023 · In this work, we address the cold-start problem in recommender systems based on frequent patterns which are highly frequent in one set of users.
  42. [42]
    BinRec: addressing data sparsity and cold-start challenges in ...
    Jul 1, 2025 · Introduction of BinRec, a novel collaborative filtering approach that leverages Biclustering techniques to group users with similar rating ...
  43. [43]
    Resolving data sparsity and cold start problem in collaborative ...
    Jul 1, 2020 · Matrix factorization (MF) model with LOD is introduced to handle the data sparsity problem in collaborative filtering.
  44. [44]
    [PDF] Scalable Collaborative Filtering Approaches for Large ...
    The collaborative filtering (CF) using known user ratings of items has proved to be effective for predicting user preferences in item selection.
  45. [45]
    Scalable collaborative filtering using incremental update and local ...
    To address these problems, we present a novel scalable item-based collaborative filtering method by using incremental update and local link prediction. By ...
  46. [46]
    [PDF] Scalable Recommender System over MapReduce - Stat@Duke
    We implement the distributed item-based collaborative filtering in 3 MapReduce phases: Phase 1. Preprocessing: transform each line of the raw data into the ...
  47. [47]
    [PDF] Incremental Collaborative Filtering for Highly- Scalable ...
    Most recommendation systems employ variations of Collaborative Filtering (CF) for formulating suggestions of items relevant to users' interests.
  48. [48]
    Large-Scale Parallel Collaborative Filtering for the Netflix Prize
    We applied the ALS-WR algorithm on a large-scale CF problem, the Netflix Challenge, with 1000 hidden features and obtained a RMSE score of 0.8985.
  49. [49]
    Shilling recommender systems for fun and profit - ACM Digital Library
    This paper explores four open questions that may affect the effectiveness of such shilling attacks: which recommender algorithm is being used.
  50. [50]
    Preventing shilling attacks in online recommender systems
    Shilling attacks have been a significant vulnerability to collaborative filtering based recommender systems recently. There are various studies focusing on ...
  51. [51]
    (PDF) Understanding Shilling Attacks and Their Detection Traits
    This paper aims to be a comprehensive survey of the shilling attack models, detection attributes, and detection algorithms. Additionally, we unravel and ...
  52. [52]
    "You Might Also Like: " Privacy Risks of Collaborative Filtering
    In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions.
  53. [53]
    Differential privacy in collaborative filtering recommender systems
    Oct 12, 2023 · In this work, we first overview threats to user privacy in recommender systems, followed by a brief introduction to the differential privacy framework that can ...
  54. [54]
    A Survey of Collaborative Filtering Techniques - Wiley Online Library
    Oct 27, 2009 · As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group ...
  55. [55]
    Popularity Bias in Collaborative Filtering-Based Multimedia ... - arXiv
    Mar 1, 2022 · In this paper, we investigate a potential issue of such collaborative-filtering based multimedia recommender systems, namely popularity bias.
  56. [56]
    Beyond-accuracy: a review on diversity, serendipity, and fairness in ...
    There are various methods employed to implement recommender systems, among which collaborative filtering (CF) has proven to be particularly effective due to ...
  57. [57]
    Diversity and Serendipity in Recommender Systems
    In this paper, we present and explore a recommendation technique that ensures that diversity, accuracy and serendipity are all factored in the recommendations.
  58. [58]
    Fairness and Diversity in Recommender Systems: A Survey
    While most existing studies explore fairness and diversity independently, we identify strong connections between these two domains.
  59. [59]
    Consumer-side fairness in recommender systems: a systematic ...
    Mar 29, 2024 · This survey serves as a systematic overview and discussion of the current research on consumer-side fairness in recommender systems.
  60. [60]
    [2305.02759] Disentangled Contrastive Collaborative Filtering - arXiv
    May 4, 2023 · Towards this research line, graph contrastive learning (GCL) has exhibited powerful performance in addressing the supervision label shortage ...
  61. [61]
    MM-GEF: Multi-modal representation meet collaborative filtering
    Aug 14, 2023 · MM-GEF learns refined item representations by injecting structural information obtained from both multi-modal and collaborative signals. Through ...
  62. [62]
    Personalized Federated Collaborative Filtering: A Variational ... - arXiv
    Aug 16, 2024 · This paper proposes a novel personalized FedCF method by preserving users' personalized information into a latent variable and a neural model simultaneously.
  63. [63]
    [2304.04971] Diffusion Recommender Model - arXiv
    Apr 11, 2023 · DiffRec is a Diffusion Recommender Model that learns the generative process in a denoising manner, addressing limitations of GANs and VAEs.
  64. [64]
    [PDF] An Efficient All-round LLM-based Recommender System - arXiv
    Jun 1, 2024 · [12] assigns the role of a recommender expert to rank items that meet users' needs through prompting and conducts zero-shot recommendations.
  65. [65]
    Large language models are zero-shot point-of-interest recommenders
    Sep 6, 2025 · (2024) introduce various prompts that incorporate a user's previous interactions with an LLM to rank recommended items. Some works have focused ...
  66. [66]
    Collaborative Retrieval for Large Language Model-based...
    Jan 29, 2025 · This paper presents a novel approach that integrates large language models with collaborative filtering techniques to enhance conversational recommender ...
  67. [67]
    ChatGPT as a Conversational Recommender System
    Jun 22, 2024 · In this work, we study the use of ChatGPT as a movie recommender system. To this purpose, we conducted an online user study involving N=190 participants.
  68. [68]
    A Collaborative User Driven Recommendation System for Edge ...
    Nov 7, 2024 · We present Duet, a novel collaborative edge-cloud recommendation system that intelligently decomposes the recommendation model into two smaller models.
  69. [69]
    Decentralized Collaborative Filtering Algorithm with Privacy ...
    Aug 7, 2025 · Mobile edge computing (MEC) deploys network services closer to the user's wireless access network side and provides IT service environment ...
  70. [70]
    Quantum Nearest Neighbor Collaborative Filtering Algorithm for ...
    Jul 31, 2024 · The core of this method is to utilize the quantum annealing paradigm to solve the quadratic unconstrained binary optimization problem, thereby ...
  71. [71]
    Blockchain-based recommender systems: Applications, challenges ...
    Blockchain-based recommender systems use blockchain to improve security and privacy in recommender systems, which are used in many applications.
  72. [72]
    (PDF) Towards Sustainability of Large Language Models for ...
    Sep 18, 2024 · This study systematically assesses the sustainability challenges, including environmental, economic, and societal aspects, of integrating ...