Graph neural network
A graph neural network (GNN) is a type of deep learning model designed to process and learn from data structured as graphs, in which nodes represent entities (such as atoms in molecules or users in social networks) and edges represent relationships between them, enabling the capture of complex dependencies through iterative message passing between neighboring nodes.[1] Unlike traditional neural networks that operate on Euclidean data like grids or sequences, GNNs generalize convolutional and recurrent architectures to non-Euclidean domains, allowing them to encode both node features and graph topology for tasks such as prediction and classification.
The concept of GNNs originated in the mid-2000s, with foundational work proposing a neural model that updates node states based on their local neighborhoods, extending recursive neural networks to handle arbitrary graph structures beyond directed acyclic graphs.[1] This early formulation laid the groundwork for processing graph-structured inputs directly, as demonstrated in applications like web page ranking and molecular property prediction. The field gained significant momentum in the mid-2010s, driven by advances in deep learning, particularly with the introduction of graph convolutional networks (GCNs) in 2017, which approximated spectral graph convolutions for scalable semi-supervised node classification on large graphs.[2]
Key mechanisms in GNNs typically involve message passing, where nodes aggregate information from their neighbors across multiple layers to refine representations, often incorporating attention mechanisms to weigh important connections dynamically, as in graph attention networks (GATs).[3] Variants like GCNs focus on spectral or spatial convolutions for smooth signal propagation, while others, such as GraphSAGE, emphasize inductive learning for unseen nodes through sampling and aggregation.[4] These architectures address challenges like over-smoothing in deep layers and scalability, with recent developments exploring expressiveness limits and robustness to adversarial perturbations.[1]
GNNs have broad applications across domains, including social network analysis for community detection and link prediction, chemistry for molecular property forecasting via graph representations of compounds, biology for protein interaction modeling, and recommendation systems that leverage user-item graphs to personalize suggestions.[1] In physics, they simulate particle interactions, while in traffic forecasting, they predict flows using spatiotemporal graph structures. Ongoing research emphasizes interpretability, efficiency on massive graphs, and integration with large language models for enhanced reasoning over structured data.[5]
Introduction
Definition and Motivation
Graph neural networks (GNNs) are a class of neural networks designed to process and learn from graph-structured data, where the primary objective is to generate representations for nodes by iteratively aggregating information from their local neighborhoods.[6] This approach extends traditional neural networks to handle non-Euclidean domains, enabling the model to capture relational dependencies inherent in graphs through a process known as message passing.[7] Unlike fixed-input architectures, GNNs enforce permutation invariance, ensuring that the output remains consistent regardless of the ordering of nodes or edges in the input graph.[8]
The motivation for developing GNNs stems from the limitations of conventional deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which are optimized for Euclidean data like images or sequences but falter on irregular, relational structures.[7] CNNs rely on grid-like arrangements and predefined filters, making them ill-suited for graphs where connections are arbitrary and variable; similarly, RNNs struggle with the non-sequential nature of graph dependencies, often requiring fixed-length inputs that do not scale to diverse topologies.[7] GNNs address these shortcomings by incorporating inductive biases tailored to graphs, such as locality and smoothness, which allow them to model complex interactions in domains like social networks—where users form dynamic connections—or molecular structures, where atoms are linked in non-grid formations.[7] This enables scalable processing of variable-sized graphs without hand-engineered features, improving generalization across heterogeneous data.[8]
In contrast to Euclidean data processing, which assumes regular lattices, GNNs emphasize the graph's topology to propagate information, achieving efficiency on sparse, irregular inputs while maintaining permutation invariance for robust representations.[9] The basic workflow begins with an input graph G = (V, E), where V denotes the set of nodes and E the set of edges, accompanied by initial node features X; through layers of neighborhood aggregation, the model produces learned embeddings H that encode both local and global structural information.[7] This core message passing mechanism underpins the iterative refinement of node states, forming the foundation for downstream tasks like prediction or classification.[6]
Historical Overview
The concept of graph neural networks (GNNs) originated in the mid-2000s with foundational work aimed at extending neural networks to handle structured, non-Euclidean data like graphs. In 2005, Gori et al. introduced a neural model capable of directly processing graphs by extending recursive neural networks to capture dependencies in graph domains, enabling learning on arbitrary graph structures without requiring sequentialization.[10] This approach laid the groundwork for processing graphs as holistic entities rather than flattened sequences. Building on this, Scarselli et al. formalized the GNN model in 2009, proposing a framework that incorporates backpropagation directly on graphs, including cyclic, directed, and undirected types, while unifying recursive neural networks and random walk-based methods for tasks like node labeling and graph classification.
The field advanced significantly in the spectral domain during the early 2010s, drawing from graph signal processing to adapt convolutional operations to irregular graph structures. Bruna et al. pioneered this direction in 2014 by developing spectral graph convolutions, which leverage the graph Fourier transform and Laplacian eigenvectors to define localized filters on graph signals, enabling the application of deep learning principles to non-Euclidean data domains.[11] This spectral perspective influenced subsequent efforts to generalize convolutional neural networks (CNNs) to graphs, though it faced challenges in scalability due to the computational cost of eigendecompositions.
A pivotal shift toward spatial-domain methods occurred in 2017, emphasizing efficient, localized operations that avoided full spectral computations. Kipf and Welling proposed Graph Convolutional Networks (GCN), which approximate spectral filters using first-order Chebyshev polynomials and normalized adjacency matrices, facilitating scalable semi-supervised learning on citation networks and achieving state-of-the-art node classification performance.[2] Concurrently, Hamilton et al. introduced GraphSAGE, an inductive framework that samples and aggregates features from a node's local neighborhood, enabling generalization to unseen nodes and large-scale graphs without retraining, as demonstrated on applications like link prediction in social networks.[4] These spatial innovations established the message-passing paradigm as a unifying framework for GNNs, allowing flexible aggregation of neighbor information across layers.[2]
Subsequent developments focused on enhancing expressivity and incorporating advanced mechanisms like attention. Veličković et al. advanced this in 2018 with Graph Attention Networks (GAT), which introduce self-attention layers to dynamically weigh neighbor contributions, improving performance on tasks such as multi-label classification while maintaining permutation equivariance.[3] Addressing limitations in discriminative power, Xu et al. analyzed GNN expressivity in 2019 relative to the Weisfeiler-Lehman graph isomorphism test and proposed Graph Isomorphism Networks (GIN), a model that achieves maximal expressivity within the message-passing framework by combining sum aggregation with multi-layer perceptrons to realize injective aggregation functions, outperforming prior methods on graph-level tasks like molecular property prediction.[12]
Recent milestones have extended GNNs to specialized domains, including temporal and transformer-based architectures. In 2020, Xu et al. developed Temporal Graph Attention (TGAT) networks, which incorporate time encoding and attention to model evolving graph dynamics, enabling inductive learning on temporal interaction graphs for applications like user-product recommendations.[13] By 2021, Ying et al. introduced Graphormer, a transformer-based model that encodes graph structure via centrality encodings, shortest-path-based spatial encodings, and edge encodings, challenging the notion that transformers underperform on graphs and achieving superior results on large-scale quantum chemistry benchmarks.[14] From 2022 onward, advancements have emphasized scalable and general-purpose architectures, such as GraphGPS by Rampášek et al., which combines message passing with transformer layers to achieve state-of-the-art performance across diverse benchmarks with linear complexity.[15] Additionally, since 2023, there has been substantial progress in integrating GNNs with large language models (LLMs) to enhance reasoning over structured and textual data, enabling applications in knowledge graphs and multimodal tasks, as explored in comprehensive surveys.[16] These contributions reflect the field's continued evolution toward more expressive, scalable, and integrative models for dynamic and large-scale graph data as of 2025.
Preliminaries
Graph Representations
Graphs are fundamental data structures consisting of nodes (vertices) and edges representing relationships between them, denoted as G = (V, E), where V is the set of nodes with |V| = N and E is the set of edges with |E| = M.[1] Basic graph types include undirected graphs, where edges have no direction and imply symmetric connections, and directed graphs, where edges possess a specific direction indicating asymmetry in relationships.[1] Graphs can also be unweighted, with edges lacking numerical values, or weighted, where edges carry scalar weights reflecting strength or distance.[17] Furthermore, homogeneous graphs feature a single type of node and edge, suitable for uniform domains like citation networks, whereas heterogeneous graphs incorporate multiple node and edge types, common in diverse applications such as social networks or knowledge bases.[18]
Common matrix-based representations of graphs include the adjacency matrix A \in \mathbb{R}^{N \times N}, a square matrix where A_{ij} = 1 (or the weight) if an edge exists from node i to j, and 0 otherwise; this is efficient for dense graphs but memory-intensive for sparse ones.[1] The incidence matrix B \in \mathbb{R}^{N \times M} captures node-edge incidences, with entries typically +1 or -1 for directed edges incident to nodes (indicating source or target), and 0 elsewhere, useful for analyzing edge-oriented properties in geometric deep learning. For efficiency in sparse graphs, where M \ll N^2, non-matrix formats are preferred: edge lists store pairs of connected nodes, while the Compressed Sparse Row (CSR) format uses three arrays—values, column indices, and row pointers—to enable fast access and traversal with O(M) space complexity.[17][19]
In attributed graphs, nodes are augmented with feature vectors forming an initial embedding matrix X \in \mathbb{R}^{N \times d}, where each row represents a node's d-dimensional attributes, such as textual or numerical descriptors, serving as input to neural models.[1] Edges may similarly carry attributes, like types or weights, enhancing relational expressiveness in heterogeneous settings.[18] These initial embeddings provide a starting point for graph neural network processing, with learned representations detailed further in subsequent sections.
For spectral analysis, the normalized graph Laplacian L = I_N - D^{-1/2} A D^{-1/2} is key, where D is the diagonal degree matrix with D_{ii} = \sum_j A_{ij}, enabling eigenvalue decomposition to reveal graph connectivity and clustering properties.[1] Scalability poses significant challenges for large graphs with |V| > 10^6, as full adjacency matrices demand O(N^2) storage, often mitigated by sampling or approximation techniques.[1] Handling multi-relational edges in knowledge graphs, which involve diverse relation types across heterogeneous nodes, further complicates representation due to the need for type-aware encoding to preserve semantic richness.[18]
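To make these representations concrete, the following minimal sketch (assuming NumPy and SciPy are available; the small edge list is purely illustrative) builds a dense adjacency matrix, its CSR counterpart, and the symmetrically normalized Laplacian for a toy undirected graph.
```python
import numpy as np
import scipy.sparse as sp

# Edge list for a small undirected, unweighted graph with N = 4 nodes.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N = 4

# Dense adjacency matrix A (symmetric because the graph is undirected).
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Compressed Sparse Row (CSR) storage: only nonzero entries are kept.
A_sparse = sp.csr_matrix(A)

# Symmetrically normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
deg = A.sum(axis=1)                       # diagonal of the degree matrix D
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # assumes no isolated nodes
L = np.eye(N) - D_inv_sqrt @ A @ D_inv_sqrt

print(A_sparse.indptr, A_sparse.indices)  # CSR row pointers and column indices
print(np.linalg.eigvalsh(L))              # Laplacian spectrum lies in [0, 2]
```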
Embeddings in Graph Data
Embeddings in graph data refer to low-dimensional vector representations that encode the structural and relational information of nodes or entire graphs, enabling the application of machine learning techniques to non-Euclidean data. These representations transform irregular graph structures into continuous spaces where similarities between nodes or graphs can be measured via distances or dot products. Node embeddings, in particular, capture the local neighborhood and global roles of vertices, facilitating tasks such as classification and recommendation.[20]
Early unsupervised methods for node embeddings laid the foundation for graph representation learning prior to the widespread adoption of graph neural networks. DeepWalk, introduced in 2014, generates embeddings by performing random walks on the graph to produce sequences of nodes, analogous to sentences in natural language processing, and applies the Skip-gram model from word2vec to learn latent representations that preserve network topology.[20] This approach effectively captures community structures and homophily in social networks. Building on this, LINE (2015) optimizes embeddings to preserve both first-order proximity (direct connections between nodes) and second-order proximity (similarity in neighborhood structures), making it scalable for large-scale networks like those in recommendation systems.[21] Node2Vec (2016) extends these ideas by introducing biased random walks that balance breadth-first and depth-first search, allowing embeddings to flexibly capture structural equivalences or homophily, which has proven effective in multi-label node classification on social and biological networks, with Node2Vec achieving up to 27% improvements over baselines like DeepWalk on datasets such as BlogCatalog.[22]
Graph-level embeddings aggregate node representations to represent entire graphs, commonly used in classification or similarity search tasks. A straightforward method is mean pooling, where the graph embedding is the average of all node embeddings, providing a permutation-invariant summary that preserves global information without requiring hierarchical structures. Other simple pooling strategies, such as sum or max pooling, emphasize different aspects like total signal or prominent features, respectively, and are often applied in early graph kernel or embedding pipelines.[23]
Graph embedding methods operate in either transductive or inductive settings. Transductive learning generates embeddings for a fixed graph during training, leveraging the full structure but limiting generalization to unseen nodes or graphs, as seen in the original DeepWalk, LINE, and Node2Vec formulations.[21] In contrast, inductive approaches such as GraphSAGE enable embedding generation for novel nodes by relying on local neighborhoods and sampling, supporting dynamic graphs in applications like evolving social networks.
Embeddings are evaluated primarily through downstream tasks that measure their utility in capturing graph semantics. For link prediction, metrics like area under the ROC curve (AUC) assess the ability to rank potential edges, a setting in which Node2Vec reports gains over heuristic and embedding baselines on social and biological networks. In node classification, accuracy or micro-F1 scores quantify performance when embeddings serve as features for classifiers, as demonstrated by DeepWalk's gains in multi-label settings on social networks such as BlogCatalog and Flickr.[20] These metrics highlight embeddings' role in enabling scalable, structure-aware machine learning on graphs.[24]
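The random-walk stage shared by DeepWalk-style methods can be sketched as follows (plain Python; the adjacency-list dictionary is illustrative, and in practice the generated walks would be fed to a skip-gram model such as word2vec to produce the embeddings):
```python
import random

def random_walks(adj, walk_length=10, walks_per_node=5, seed=0):
    """Generate uniform random walks (DeepWalk-style) over an adjacency-list graph."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:          # dead end: stop the walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy graph as an adjacency list (illustrative only).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(adj)
# Each walk is treated as a "sentence" of node IDs and passed to a
# skip-gram model (e.g., word2vec) to learn the node embeddings.
```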
Core Mechanisms
Message Passing Paradigm
The message passing paradigm serves as the foundational framework for most contemporary graph neural networks (GNNs), enabling the processing of graph-structured data through an iterative exchange of information between nodes. Introduced in early neural models for graphs, this approach allows each node to refine its representation by aggregating messages from its neighbors across multiple layers, typically denoted as K iterations. This process captures local graph topology and node features progressively, extending the receptive field to higher-order neighborhoods with each layer. The paradigm unifies diverse GNN variants by emphasizing neighbor communication, making it versatile for tasks such as node classification and link prediction.[6][25] Formally, the update rule for a node's hidden state h_v^{(k)} at layer k is given by h_v^{(k)} = \text{UPDATE}^{(k)} \left( h_v^{(k-1)}, m_v^{(k)} \right), where the aggregated message m_v^{(k)} is computed as m_v^{(k)} = \text{AGGREGATE}^{(k)} \left( \left\{ \text{MSG}^{(k)} \left( h_u^{(k-1)}, h_v^{(k-1)}, e_{uv} \right) \mid u \in \mathcal{N}(v) \right\} \right). Here, \mathcal{N}(v) denotes the neighborhood of node v, e_{uv} represents edge features (if present), and the functions MSG, AGGREGATE, and UPDATE are learnable components parameterized by the model. Initial states h_v^{(0)} are typically set to input node features. This formulation ensures that information propagates spatially across the graph, with each layer building upon the previous to encode multi-hop relationships.[25] The message function (MSG) generates representations from pairs of neighboring node states and edge attributes, often by simple concatenation followed by a linear transformation, such as \text{MSG}(h_u, h_v, e_{uv}) = W \cdot [h_u ; h_v ; e_{uv}], where W is a weight matrix and ; denotes concatenation. The aggregation function (AGGREGATE) then combines these messages in a permutation-invariant manner, with common choices including sum (\sum), mean (\frac{1}{|\mathcal{N}(v)|} \sum), or max pooling to capture different neighborhood statistics. For instance, mean aggregation has been shown effective for inductive learning on large graphs by normalizing for varying degrees. The update function (UPDATE) integrates the aggregated message with the node's prior state, frequently implemented as a multi-layer perceptron (MLP) or gated recurrent unit (GRU) to introduce nonlinearity and memory. These components allow flexibility while maintaining the core iterative structure.[25][4] A key limitation of the message passing paradigm arises from over-smoothing, where repeated iterations cause node representations to converge toward a stationary distribution, reducing discriminative power as depth increases. This phenomenon limits the effective depth of GNNs, as embeddings from distant nodes become indistinguishable after sufficient layers, constraining expressivity for tasks requiring fine-grained distinctions. Theoretical analysis demonstrates that this convergence occurs exponentially fast in certain models, akin to repeated application of a graph diffusion operator.[26] The paradigm inherently possesses permutation equivariance, meaning that relabeling nodes in the input graph results in correspondingly relabeled outputs, preserving structural consistency without dependence on arbitrary node ordering. 
This property arises from the symmetric aggregation over neighbors and ensures that the model's outputs transform consistently under graph isomorphisms, a desirable trait for graph learning.[25]
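As an illustration of the MSG/AGGREGATE/UPDATE decomposition above, the following minimal NumPy sketch performs one round of mean-aggregation message passing; the weight shapes and the toy graph are illustrative placeholders, not any particular published model.
```python
import numpy as np

def message_passing_layer(H, adj_list, W_msg, W_upd):
    """One message-passing layer: MSG -> mean AGGREGATE -> UPDATE (ReLU)."""
    N, d = H.shape
    H_new = np.zeros((N, W_upd.shape[1]))
    for v in range(N):
        neighbors = adj_list[v]
        if neighbors:
            # MSG: linear transform of each neighbor state; AGGREGATE: mean.
            messages = np.stack([H[u] @ W_msg for u in neighbors])
            m_v = messages.mean(axis=0)
        else:
            m_v = np.zeros(W_msg.shape[1])
        # UPDATE: combine previous state and aggregated message, then ReLU.
        H_new[v] = np.maximum(0.0, np.concatenate([H[v], m_v]) @ W_upd)
    return H_new

# Toy example: 4 nodes with 3-dimensional features.
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 3))
adj_list = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
W_msg = rng.normal(size=(3, 3))
W_upd = rng.normal(size=(6, 3))   # input is [h_v ; m_v], hence 3 + 3 = 6 rows
H1 = message_passing_layer(H0, adj_list, W_msg, W_upd)
```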
Propagation and Aggregation
In graph neural networks (GNNs), propagation refers to the process of diffusing information across the graph structure, typically through repeated applications of a normalized adjacency matrix that approximates spectral diffusion while maintaining computational efficiency. A common propagation rule employs a symmetrically normalized adjacency matrix with added self-loops, defined as \tilde{A} = A + I, where A is the original adjacency matrix and I is the identity matrix, followed by renormalization \hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}, with \tilde{D}_{ii} = \sum_j \tilde{A}_{ij} being the degree matrix of \tilde{A}. This renormalization stabilizes the propagation: the unrenormalized operator I + D^{-1/2} A D^{-1/2} has eigenvalues in [0, 2], and its repeated application can cause numerical instabilities and exploding or vanishing gradients, whereas the spectrum of \hat{A} remains bounded within [-1, 1].[2]
The core update rule for linear propagation in a GNN layer integrates this propagation matrix with node features and learnable weights, expressed as H^{(k)} = \sigma\left( \hat{A} H^{(k-1)} W^{(k)} \right), where H^{(k)} denotes the node feature matrix at layer k, \sigma is a nonlinear activation function (e.g., ReLU), H^{(0)} = X is the initial input features, and W^{(k)} is the weight matrix for layer k. This formulation enables information from neighboring nodes to propagate linearly before the nonlinearity, approximating localized spectral filters on the graph. Through multi-hop propagation, each additional layer expands the receptive field to encompass larger neighborhoods; specifically, a k-layer GNN aggregates information from up to the k-hop neighborhood of each node, allowing the model to capture increasingly global structural patterns while remaining computationally tractable at O(|E|) per layer, where |E| is the number of edges.[2]
Aggregation functions play a crucial role in combining messages from a node's neighbors during propagation, with common operations including sum, mean, max-pooling, and more expressive variants like LSTM-based aggregation to handle sequential or set-like neighbor representations. The sum aggregator directly accumulates neighbor features, which can introduce sparsity awareness but risks exploding gradients in graphs with high degree variability unless paired with normalization; in contrast, the mean aggregator averages features for degree invariance, promoting stability akin to convolutional normalization, while max-pooling emphasizes dominant signals for robustness to noise and sparsity. LSTM aggregation, applied to permuted neighbor sets, captures sequential dependencies but is not permutation-invariant, enhancing expressiveness for complex neighborhood structures at the cost of higher computation.[4][2]
To address stability challenges such as over-smoothing—where deep propagation causes node representations to converge indistinguishably—layer normalization and residual connections are employed. Techniques like PairNorm (2019) mitigate this by enforcing a constant total pairwise squared distance (TPSD) across node features after each layer, via centering \tilde{x}_i^c = \tilde{x}_i - \frac{1}{n} \sum_{i=1}^n \tilde{x}_i and scaling \dot{x}_i = s \cdot \frac{\tilde{x}_i^c}{\sqrt{\frac{1}{n} \sum_{i=1}^n \|\tilde{x}_i^c\|_2^2}}, where s is a scaling hyperparameter; this preserves inter-cluster distinctions while allowing intra-cluster similarity, enabling deeper GNNs without performance degradation.
Residual connections, analogous to those in ResNets, further stabilize training by adding skip links H^{(k)} = H^{(k-1)} + f(H^{(k-1)}), reducing gradient flow issues in multi-hop setups.[27][2]
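A compact sketch of the normalized propagation rule above, with an optional residual skip link, is shown below (NumPy only; the weights, features, and toy adjacency matrix are random placeholders):
```python
import numpy as np

def normalized_adjacency(A):
    """Compute A_hat = D~^{-1/2} (A + I) D~^{-1/2} with self-loops added."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W, residual=True):
    """One propagation step H' = ReLU(A_hat H W), optionally with a skip link."""
    out = np.maximum(0.0, A_hat @ H @ W)
    # The residual connection requires matching dimensions (here W is square).
    return out + H if residual else out

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))          # initial node features X
W = rng.normal(size=(8, 8))          # square weight matrix so the skip link fits
H1 = gcn_layer(normalized_adjacency(A), H, W)   # 1-hop receptive field
H2 = gcn_layer(normalized_adjacency(A), H1, W)  # 2-hop receptive field
```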
Architectures
Spectral Graph Neural Networks
Spectral graph neural networks (GNNs) draw from spectral graph theory to perform convolutions in the frequency domain of graphs, leveraging the graph Fourier transform for signal processing on non-Euclidean structures. This approach defines graph convolutions by multiplying signals with filters parameterized in the spectral domain, enabling the adaptation of convolutional neural networks (CNNs) to irregular graph topologies. Unlike spatial methods, spectral convolutions operate globally via the graph's Laplacian eigenvectors, capturing frequency-based patterns in node features.[28]
The core operation of spectral convolution is formulated as g_\theta \star x = U g_\theta(\Lambda) U^T x, where x is the input signal on the graph, L = U \Lambda U^T is the spectral decomposition of the normalized graph Laplacian, U contains the eigenvectors (Fourier basis), \Lambda is the diagonal matrix of eigenvalues, and g_\theta(\Lambda) is a diagonal filter matrix parameterized by \theta. This definition allows filters to be learned directly in the frequency domain, analogous to multiplying with frequency responses in classical signal processing. However, computing U requires eigendecomposition of the Laplacian, which has a computational cost of O(|V|^3) for a graph with |V| vertices, limiting scalability to large graphs.[28]
To address the eigendecomposition bottleneck and achieve localization, Defferrard et al. (2016) approximate the spectral filter g_\theta(\Lambda) using Chebyshev polynomials up to order K: g_\theta(\Lambda) \approx \sum_{k=0}^K T_k(\tilde{\Lambda}) \theta_k, where T_k are the Chebyshev polynomials evaluated on the scaled Laplacian \tilde{L} = 2L/\lambda_{\max} - I_N (with \tilde{\Lambda} its eigenvalues), and \theta_k are the filter coefficients. This polynomial expansion enables fast, localized filtering without explicit eigendecomposition, as the Chebyshev recursion propagates signals only within a K-hop neighborhood, reducing computational complexity to O(|E| K) per layer for a graph with |E| edges. The approach parameterizes multi-layer spectral GNNs as a hierarchy of such filtered layers, demonstrating effectiveness on tasks like semi-supervised node classification.[28]
A prominent simplification of this framework is the Graph Convolutional Network (GCN) introduced by Kipf and Welling (2017), which assumes a first-order Chebyshev approximation (K=1) and a renormalized adjacency matrix \hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}, with \tilde{A} = A + I_N and degree matrix \tilde{D}_{ii} = \sum_j \tilde{A}_{ij}. The layer-wise propagation rule becomes H^{(l+1)} = \sigma(\hat{A} H^{(l)} W^{(l)}), where H^{(l)} are the node representations at layer l, W^{(l)} is a learnable weight matrix, and \sigma is a nonlinear activation like ReLU. This yields a simple, scalable architecture that approximates spectral convolutions through repeated neighborhood aggregation, achieving state-of-the-art results on citation network benchmarks such as Cora and Citeseer (for example, 81.5% node classification accuracy on Cora). Although derived from the spectral framework, GCNs instantiate the message passing paradigm by propagating features along edges, bridging the spectral and spatial viewpoints.[2]
Despite these advances, spectral GNNs exhibit key limitations, including their transductive nature, where models trained on a specific graph structure cannot easily generalize to unseen nodes or graphs without retraining, as the propagation relies on the full Laplacian.
Additionally, even with approximations like Chebyshev polynomials, residual dependencies on graph-scale computations hinder deployment on massive or dynamic graphs.[2]
To enhance spectral expressivity, variants such as CayleyNets (Levie et al., 2017) employ rational approximations based on Cayley polynomials for the filter g_\theta(z), defined over the complex domain of the Laplacian eigenvalues: g_{c,h}(\lambda) = c_0 + 2 \Re \left\{ \sum_{j=1}^{r} c_j (h\lambda - i)^j (h\lambda + i)^{-j} \right\}, where c = (c_0, \dots, c_r) includes one real coefficient c_0 and r complex coefficients c_j, and h > 0 is a spectral zoom parameter. This parameterization allows CayleyNets to approximate sharper filters than polynomials, improving performance on tasks requiring high-frequency capture, such as graph signal denoising, while maintaining efficient recursion without eigendecomposition. CayleyNets have been shown to outperform polynomial-based spectral GNNs of comparable order, notably on synthetic community detection tasks that require resolving narrow frequency bands.[29]
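To illustrate the Chebyshev approximation described above, the sketch below (NumPy; filter coefficients and the toy graph are placeholders, and \lambda_{\max} is approximated by 2 as is common for the normalized Laplacian) applies the recursion T_0(\tilde{L})x = x, T_1(\tilde{L})x = \tilde{L}x, T_k = 2\tilde{L}T_{k-1} - T_{k-2}, so filtering stays within a K-hop neighborhood and avoids eigendecomposition.
```python
import numpy as np

def scaled_laplacian(A, lambda_max=2.0):
    """L~ = 2L/lambda_max - I, where L is the symmetrically normalized Laplacian."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt
    return 2.0 * L / lambda_max - np.eye(A.shape[0])

def chebyshev_filter(L_tilde, x, theta):
    """Apply g_theta(L~) x = sum_k theta_k T_k(L~) x via the Chebyshev recursion."""
    T_prev, T_curr = x, L_tilde @ x                  # T_0(L~)x and T_1(L~)x
    out = theta[0] * T_prev + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_next = 2.0 * (L_tilde @ T_curr) - T_prev   # T_k = 2 L~ T_{k-1} - T_{k-2}
        out += theta[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return out

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 1))       # one graph signal
theta = rng.normal(size=4)        # order K = 3 filter (4 coefficients)
y = chebyshev_filter(scaled_laplacian(A), x, theta)
```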
Spatial Graph Neural Networks
Spatial graph neural networks (SGNNs) operate directly on the graph's spatial topology, performing convolutions by aggregating information from a node's local neighborhood without relying on spectral decomposition of the graph Laplacian. This approach enables efficient message passing between connected nodes, where each node's representation is updated based on its own features and those of its neighbors. Unlike spectral methods, SGNNs emphasize localized operations that can be applied layer-wise, making them suitable for dynamic and large-scale graphs.[4]
A key mechanism in SGNNs is spatial convolution through neighborhood sampling and aggregation, as exemplified by GraphSAGE, which samples a fixed-size set of neighbors for each node to compute embeddings. In GraphSAGE, aggregation functions such as mean, LSTM, or pooling are used to combine neighbor features before concatenation with the central node's representation and transformation via a linear layer. For the mean aggregator, the node embedding update is given by \mathbf{z}_v = \sigma \left( \mathbf{W} \cdot \text{CONCAT} \left( \mathbf{h}_v, \text{AGG} \left( \{ \mathbf{h}_u \mid u \in \mathcal{N}_s(v) \} \right) \right) \right), where \mathcal{N}_s(v) denotes the sampled neighbor set of node v, \mathbf{h}_v is the input embedding, \mathbf{W} is a learnable weight matrix, \sigma is a nonlinear activation (e.g., ReLU), and AGG computes the mean over the sampled neighbors.[4]
SGNNs exhibit inductive capability, allowing them to generate embeddings for unseen nodes during inference by relying on local sampling rather than the full graph structure, in contrast to the transductive limitations of spectral GNNs that require recomputation on the entire graph.[4] Variants of SGNNs include Graph Attention Networks (GAT), which extend aggregation with attention mechanisms, and Jumping Knowledge Networks (JK-Net), which incorporate multi-scale information through jumping connections across layers to capture diverse neighborhood ranges.[30] The primary advantages of SGNNs lie in their scalability to large graphs, achieved through mini-batching and importance sampling of neighbors, as demonstrated by FastGCN, which reduces computational overhead while maintaining performance on inductive tasks.[31] This makes SGNNs particularly effective for applications involving evolving graphs where full retraining is impractical.
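The sampled mean-aggregator update can be sketched as follows (NumPy; the sample size, weights, and L2 normalization step are illustrative choices in the spirit of GraphSAGE rather than a reference implementation):
```python
import numpy as np

def sage_mean_layer(H, adj_list, W, num_samples=2, seed=0):
    """GraphSAGE-style update: sample neighbors, mean-aggregate, concat, transform."""
    rng = np.random.default_rng(seed)
    N, d = H.shape
    out = np.zeros((N, W.shape[1]))
    for v in range(N):
        nbrs = adj_list[v]
        # Sample a fixed-size neighborhood (with replacement if too few neighbors).
        sampled = rng.choice(nbrs, size=num_samples, replace=len(nbrs) < num_samples)
        agg = H[sampled].mean(axis=0)
        z = np.concatenate([H[v], agg]) @ W        # CONCAT then linear transform
        out[v] = np.maximum(0.0, z)                # ReLU nonlinearity
    # L2-normalize the embeddings, as in the original GraphSAGE formulation.
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-12)

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))
adj_list = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
W = rng.normal(size=(6, 4))       # input is [h_v ; agg], hence 3 + 3 = 6 rows
Z = sage_mean_layer(H, adj_list, W)
```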
Attention-Based Graph Neural Networks
Attention-based graph neural networks extend the message passing paradigm by incorporating attention mechanisms that dynamically assign weights to neighboring nodes based on their relevance to the central node, enabling more adaptive aggregation compared to uniform or fixed-weight schemes. This approach draws inspiration from self-attention in sequence models but adapts it to irregular graph structures, where attention coefficients are computed pairwise for edges in the node's neighborhood. By learning these weights end-to-end, such networks can prioritize informative neighbors, enhancing expressiveness for tasks involving heterogeneous relationships.[3]
The foundational model, Graph Attention Network (GAT), proposed by Veličković et al. (2018), computes attention coefficients \alpha_{vu} for an edge between central node v and neighbor u as follows: \alpha_{vu} = \mathrm{softmax}_u\left( \mathrm{LeakyReLU}\left( \mathbf{a}^T [\mathbf{W} \mathbf{h}_v \| \mathbf{W} \mathbf{h}_u] \right) \right), where \mathbf{h}_v and \mathbf{h}_u are the input representations, \mathbf{W} is a shared linear transformation matrix, \mathbf{a} is a learnable attention vector, \| denotes concatenation, and the softmax is applied over the neighbors \mathcal{N}(v). These coefficients are then used to aggregate neighbor features into the updated representation of v. To improve stability and capture diverse aspects of the neighborhood, GAT employs multi-head attention, running K parallel attention mechanisms and combining their outputs via concatenation (for intermediate layers) or averaging (for the final prediction layer). With concatenation, the multi-head aggregation yields \mathbf{h}_v' = \big\Vert_{i=1}^{K} \sigma\left( \sum_{u \in \mathcal{N}(v)} \alpha_{vu}^i \mathbf{W}^i \mathbf{h}_u \right), where \sigma is a nonlinearity such as ELU, and each head i has its own \mathbf{W}^i and \mathbf{a}^i. This formulation allows GAT to handle varying degrees of neighbor relevance without relying strictly on the homophily assumption prevalent in earlier graph convolutions, leading to improved performance on node classification benchmarks like Cora, where GAT achieves 83.0% accuracy versus 81.5% for prior methods.[3]
Subsequent variants address limitations in the original GAT. GATv2, introduced by Brody et al. (2021), resolves the "static attention" issue—where attention rankings are independent of the query node—by reordering the operations to enable dynamic, query-dependent attention: the learnable vector \mathbf{a} is applied after the LeakyReLU nonlinearity, i.e., \mathbf{a}^T \mathrm{LeakyReLU}(\mathbf{W} [\mathbf{h}_v \| \mathbf{h}_u]). This makes GATv2 strictly more expressive than GAT while maintaining computational efficiency, outperforming it by up to 2% on transductive node classification tasks. Another notable extension is Graphormer by Ying et al. (2021), which integrates Transformer-style attention with graph-specific encodings like shortest path distances and centrality measures to better capture global structure, achieving state-of-the-art results on graph-level tasks such as molecular property prediction with a mean absolute error of 0.1234 on the PCQM4M-LSC dataset, a reduction of approximately 0.016 compared to prior GNNs.[32][14]
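A single-head sketch of the GAT attention computation for one node is shown below (NumPy; the LeakyReLU slope of 0.2 follows the original paper, while the dimensions and random values are placeholders):
```python
import numpy as np

def gat_attention(h_v, h_neighbors, W, a, slope=0.2):
    """Compute attention weights over neighbors and the attended representation."""
    Wh_v = W @ h_v
    Wh_u = h_neighbors @ W.T                     # transform every neighbor
    # e_vu = LeakyReLU(a^T [W h_v || W h_u]) for each neighbor u.
    e = np.array([np.concatenate([Wh_v, wh_u]) @ a for wh_u in Wh_u])
    e = np.where(e > 0, e, slope * e)            # LeakyReLU
    alpha = np.exp(e - e.max())
    alpha = alpha / alpha.sum()                  # softmax over the neighborhood
    return alpha, alpha @ Wh_u                   # weights and aggregated message

rng = np.random.default_rng(0)
h_v = rng.normal(size=3)               # central node features
h_neighbors = rng.normal(size=(4, 3))  # four neighbors
W = rng.normal(size=(5, 3))            # shared linear transform to 5 dimensions
a = rng.normal(size=10)                # attention vector over [Wh_v || Wh_u]
alpha, h_agg = gat_attention(h_v, h_neighbors, W, a)
```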
Pooling Techniques
Local Pooling Methods
Local pooling methods in graph neural networks involve techniques that perform coarsening operations within local neighborhoods or subgraphs, typically in a single layer, to downsample the structure and derive compact representations for downstream tasks like graph classification. These methods leverage node embeddings to identify important local structures, reducing graph size while retaining discriminative topology. Many foundational approaches blur the line with global methods, however: examples like TopKPool and SortPool are often applied globally but can inform local strategies, and they are discussed in the following subsection. These methods aim to mitigate computational demands in GNNs, particularly for dense graphs where adjacency computations scale poorly; coarsening reduces complexity, improving scalability for larger inputs.[33] Evaluations on graph classification benchmarks demonstrate their impact, with methods like SortPool achieving accuracy improvements over kernel baselines.[34]
Hierarchical and Global Pooling
Hierarchical pooling methods in graph neural networks enable the progressive coarsening of graphs across multiple layers, capturing multi-scale structural representations by iteratively reducing the graph size while preserving essential topological information. These approaches allow the model to learn hierarchical embeddings suitable for tasks requiring graph-level understanding.
Seminal methods include selection-based approaches like TopKPool, introduced by Gao and Ji. In TopKPool, node scores are computed as g(h_v) = h_v^T s, where s is a learnable vector, and the top-k nodes are selected globally to form the coarsened graph, facilitating hierarchical reduction in architectures like Graph U-Nets.[35] SortPool, proposed by Zhang et al., provides a global sorting mechanism that can be integrated hierarchically: nodes are scored using their embeddings and sorted in descending order, with the top-k nodes extracted as a fixed-length sequence of features fed into a 1D convolutional neural network to learn local patterns invariant to permutations.[34]
A foundational clustering-based approach is DiffPool from Ying et al., which learns soft assignments of nodes to a fixed number of clusters via a dedicated GNN layer: S = \mathrm{softmax}(\mathrm{GNN}(Z, A)). The coarsened features are Z' = S^T Z and the coarsened adjacency is A' = S^T A S, enabling differentiable hierarchical coarsening optimized end-to-end. This allows flexible grouping that adapts to graph structures during training, with reported improvements on benchmarks like ENZYMES (accuracy gain of approximately 8% over a GraphSAGE baseline).[33] Another influential hierarchical technique is SAGPool, which employs structure-aware self-attention to select top-k nodes at each layer. SAGPool computes scores via graph convolution: Z = \tanh \left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} X^{(l)} \Theta_{att} \right), selecting nodes based on these GCN-derived scores to update features and adjacency. This incorporates both features and topology for node importance, improving performance on graph classification compared to non-hierarchical methods.[36]
Global pooling, often the final readout after message passing or hierarchical coarsening, aggregates node representations into a fixed-size graph embedding over all nodes. Common operations include sum (\sum_i h_i), mean (\frac{1}{N} \sum_i h_i), and max (\max_i h_i) pooling, which are permutation-invariant and were used early in GNNs for tasks like molecular fingerprinting. For more expressive aggregation, Set2Set uses an iterative LSTM-based attention mechanism over the node set to produce an adaptive embedding.[37] Self-attention pooling, as in SAGPool, weighs node importance globally for classification. These strategies support end-to-end learning for graph-level tasks, such as molecular property prediction, where hierarchical methods capture local motifs and global topology effectively.[33][36] Recent advances (as of 2024) include comprehensive benchmarks evaluating over 17 pooling methods on 28 datasets, highlighting improvements in expressivity and scalability, with new techniques like explicit graph pooling addressing limitations in implicit methods.[38][23]
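The two basic ingredients described above, a permutation-invariant global readout and TopKPool-style node selection, can be sketched as follows (NumPy; the score gating with tanh mirrors the Graph U-Nets formulation, and all shapes are illustrative):
```python
import numpy as np

def global_readout(H, mode="mean"):
    """Aggregate node embeddings H (N x d) into a single graph-level vector."""
    if mode == "sum":
        return H.sum(axis=0)
    if mode == "mean":
        return H.mean(axis=0)
    if mode == "max":
        return H.max(axis=0)
    raise ValueError(f"unknown readout: {mode}")

def topk_pool(H, s, k):
    """TopKPool-style selection: score nodes by projection onto s, keep the top k."""
    scores = H @ s / np.linalg.norm(s)             # g(h_v) = h_v^T s (normalized)
    idx = np.argsort(scores)[-k:]                  # indices of the k highest scores
    return H[idx] * np.tanh(scores[idx])[:, None]  # gate kept features by score

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))      # six nodes with 4-dimensional embeddings
s = rng.normal(size=4)
graph_vec = global_readout(H, mode="mean")
H_coarse = topk_pool(H, s, k=3)  # coarsened node set for the next layer
```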
Advanced Topics
Heterophily in Graphs
In graph theory and machine learning, homophily refers to the tendency of connected nodes to share similar attributes or labels, while heterophily describes the opposite scenario where linked nodes exhibit dissimilar features or classes. The homophily ratio h is commonly quantified as the average fraction of same-label neighbors across all nodes:
h = \frac{1}{|V|} \sum_{v \in V} \frac{|\{u \in \mathcal{N}(v) : y_u = y_v\}|}{|\mathcal{N}(v)|},
where V is the set of nodes, \mathcal{N}(v) is the neighborhood of node v, and y_v is the label of v. Graphs with high h (close to 1) exhibit strong homophily, enabling effective signal smoothing via neighborhood aggregation, whereas low h (close to 0) indicates heterophily, as seen in web page networks where pages link across categories. For instance, the Texas WebKB dataset, a collection of university web pages classified into five categories (e.g., course, faculty), has h \approx 0.06, highlighting pronounced heterophily.[39] Standard graph neural networks (GNNs), such as GCN and GAT, rely heavily on one-hop neighbor aggregation under the implicit homophily assumption, which mixes a node's ego embedding with its neighbors' features. In heterophilic graphs, this aggregation over-smooths dissimilar signals, reducing node distinguishability and leading to suboptimal representations. Analysis shows that such models perform worse than simple multi-layer perceptrons (MLPs) on low-homophily benchmarks, with accuracy drops of up to 40% compared to homophilic settings; for example, GCN achieves only 52.16% accuracy on Texas, versus approximately 81.5% on homophilic datasets like Cora.[40][2] This limitation stems from the low-pass filtering effect of standard propagation, which fails to capture high-frequency variations in heterophilic structures.[40][39] To address these issues, specialized GNNs incorporate designs like separate ego and neighbor embeddings, as in H2GCN, which concatenates self-features with aggregated 1-hop and 2-hop neighbor representations to preserve ego information and leverage higher-order contexts where homophily may emerge. Similarly, GPR-GNN uses learnable Generalized PageRank weights in propagation steps, enabling adaptive distance-based weighting that alternates positive and negative coefficients to emphasize dissimilar neighbors in heterophilic graphs, improving accuracy by up to 10% on benchmarks like Texas. Other techniques include signed edges to differentiate positive (homophilous) and negative (heterophilous) connections, and explicit higher-order neighborhood sampling to bypass immediate dissimilarities.[40][41] Recent advances (2024–2025) focus on adaptive aggregation schemes that dynamically mix homophily- and heterophily-oriented terms during message passing. For example, LG-GNN employs local-global adaptation, using local feature similarities (e.g., cosine) and global structural similarities (e.g., SimRank) to weight intra- and inter-class propagations based on learned heterophily indicators, enhancing robustness across varying h values without predefined assumptions and achieving improvements of up to 28% over baselines like GCN on heterophilic datasets. Ongoing research also explores integration of heterophily-aware GNNs with large language models for better reasoning on structured data with mixed homophily.[42][43]
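The node homophily ratio defined above can be computed with a short sketch (plain Python; the toy labels and edges are illustrative and chosen to be fully heterophilic):
```python
def homophily_ratio(adj, labels):
    """Average fraction of neighbors sharing a node's label (node homophily)."""
    fractions = []
    for v, nbrs in adj.items():
        if not nbrs:                      # skip isolated nodes
            continue
        same = sum(1 for u in nbrs if labels[u] == labels[v])
        fractions.append(same / len(nbrs))
    return sum(fractions) / len(fractions)

# Toy heterophilic graph: every edge connects nodes with different labels.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
labels = {0: "A", 1: "B", 2: "B", 3: "A"}
print(homophily_ratio(adj, labels))       # 0.0 here: fully heterophilic
```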