
Graph neural network

A graph neural network (GNN) is a class of deep learning models designed to process and learn from data structured as graphs, where nodes represent entities (such as atoms in molecules or users in social networks) and edges represent relationships between them, enabling the capture of complex dependencies through iterative message passing between neighboring nodes. Unlike traditional neural networks that operate on Euclidean data like grids or sequences, GNNs generalize convolutional and recurrent architectures to non-Euclidean domains, allowing them to encode both node features and graph topology for tasks such as prediction and classification.

The concept of GNNs originated in the mid-2000s, with foundational work proposing a neural model that updates node states based on their local neighborhoods, extending recursive neural networks to handle arbitrary structures beyond directed acyclic graphs. This early formulation laid the groundwork for processing graph-structured inputs directly, as demonstrated in applications like web page ranking and molecular prediction. The field gained significant momentum in the mid-2010s, driven by advances in deep learning, particularly with the introduction of graph convolutional networks (GCNs) in 2017, which approximated spectral convolutions for scalable semi-supervised classification on large graphs. Key mechanisms in GNNs typically involve message passing, where nodes aggregate information from their neighbors across multiple layers to refine representations, often incorporating attention mechanisms to weigh important connections dynamically, as in graph attention networks (GATs). Variants like GCNs focus on spectral or spatial convolutions for smooth signal propagation, while others, such as GraphSAGE, emphasize inductive learning for unseen nodes through sampling and aggregation. These architectures address challenges like over-smoothing in deep layers and scalability, with recent developments exploring expressiveness limits and robustness to adversarial perturbations.

GNNs have broad applications across domains, including social network analysis for community detection and link prediction, chemistry for molecular property prediction via graph representations of compounds, bioinformatics for protein modeling, and recommendation systems that leverage user-item interaction graphs to personalize suggestions. In physics, they simulate particle interactions, while in traffic forecasting, they predict flows using spatiotemporal graph structures. Ongoing research emphasizes interpretability, efficiency on massive graphs, and integration with large language models for enhanced reasoning over structured data.

Introduction

Definition and Motivation

Graph neural networks (GNNs) are a class of neural networks designed to process and learn from graph-structured data, where the primary objective is to generate representations for nodes by iteratively aggregating information from their local neighborhoods. This approach extends traditional neural networks to handle non-Euclidean domains, enabling the model to capture relational dependencies inherent in graphs through a process known as message passing. Unlike fixed-input architectures, GNNs enforce permutation invariance, ensuring that the output remains consistent regardless of the ordering of nodes or edges in the input graph.

The motivation for developing GNNs stems from the limitations of conventional models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which are optimized for regular data like images or sequences but falter on irregular, relational structures. CNNs rely on grid-like arrangements and predefined filters, making them ill-suited for graphs where connections are arbitrary and variable; similarly, RNNs struggle with the non-sequential nature of graph dependencies, often requiring fixed-length inputs that do not scale to diverse topologies. GNNs address these shortcomings by incorporating inductive biases tailored to graphs, such as locality and smoothness, which allow them to model complex interactions in domains like social networks, where users form dynamic connections, or molecular structures, where atoms are linked in non-grid formations. This enables scalable processing of variable-sized graphs without hand-engineered features, improving generalization across heterogeneous data. In contrast to grid-based data processing, which assumes regular lattices, GNNs exploit the graph's topology to propagate information, achieving efficiency on sparse, irregular inputs while maintaining invariance for robust representations.

The basic workflow begins with an input graph G = (V, E), where V denotes the set of nodes and E the set of edges, accompanied by initial node features X; through layers of neighborhood aggregation, the model produces learned embeddings H that encode both local and global structural information. This core mechanism underpins the iterative refinement of node states, forming the foundation for downstream tasks like node classification, link prediction, or graph classification.

Historical Overview

The concept of graph neural networks (GNNs) originated in the mid-2000s with foundational work aimed at extending neural networks to handle structured, non-Euclidean data like graphs. In 2005, Gori et al. introduced a neural model capable of directly processing graphs by extending recursive neural networks to capture dependencies in graph domains, enabling learning on arbitrary structures without requiring sequentialization. This approach laid the groundwork for processing graphs as holistic entities rather than flattened sequences. Building on this, Scarselli et al. formalized the GNN model in 2009, proposing a framework that operates directly on graphs, including cyclic, directed, and undirected types, while unifying recursive neural networks and random walk-based methods for tasks like node labeling and graph classification.

The field advanced significantly in the spectral domain during the early 2010s, drawing from graph signal processing to adapt convolutional operations to irregular graph structures. Bruna et al. pioneered this direction in 2014 by developing spectral graph convolutions, which leverage the graph Fourier transform and Laplacian eigenvectors to define localized filters on graph signals, enabling the application of convolutional principles to non-Euclidean data domains. This spectral perspective influenced subsequent efforts to generalize convolutional neural networks (CNNs) to graphs, though it faced challenges in scalability due to the computational cost of eigendecompositions.

A pivotal shift toward spatial-domain methods occurred in 2017, emphasizing efficient, localized operations that avoided full spectral computations. Kipf and Welling proposed Graph Convolutional Networks (GCN), which approximate spectral filters using first-order Chebyshev polynomials and normalized adjacency matrices, facilitating scalable semi-supervised learning on citation networks and achieving state-of-the-art node classification performance. Concurrently, Hamilton et al. introduced GraphSAGE, an inductive framework that samples and aggregates features from a node's local neighborhood, enabling generalization to unseen nodes and large-scale graphs without retraining, as demonstrated on applications like node classification in social networks. These spatial innovations established the message-passing paradigm as a unifying framework for GNNs, allowing flexible aggregation of neighbor information across layers.

Subsequent developments focused on enhancing expressivity and incorporating advanced mechanisms like attention. Veličković et al. advanced this in 2018 with Graph Attention Networks (GAT), which introduce self-attention layers to dynamically weigh neighbor contributions, improving performance on tasks such as node classification while maintaining permutation equivariance. Addressing limitations in discriminative power, Xu et al. analyzed GNN expressivity in 2019 relative to the Weisfeiler-Lehman graph isomorphism test and proposed Graph Isomorphism Networks (GIN), a model that achieves maximal expressivity within the message-passing framework by using multi-layer perceptrons and injective sum aggregation, outperforming prior methods on graph-level tasks like molecular property prediction.

Recent milestones have extended GNNs to specialized domains, including temporal and transformer-based architectures. In 2020, Xu et al. developed Temporal Graph Attention (TGAT) networks, which incorporate time encoding and attention to model evolving graph dynamics, enabling inductive learning on temporal interaction graphs for applications like user-product recommendations. By 2021, Ying et al. introduced Graphormer, a transformer-based model that encodes graph structure via centrality, shortest path distances, and spatial encodings, challenging the notion that transformers underperform on graphs and achieving superior results on quantum chemistry and molecular property prediction benchmarks. From 2022 onward, advancements have emphasized scalable and general-purpose architectures, such as GraphGPS by Rampášek et al., which combines message passing with transformer layers to achieve state-of-the-art performance across diverse benchmarks with linear complexity. Additionally, since 2023, there has been substantial progress in integrating GNNs with large language models (LLMs) to enhance reasoning over structured and textual data, enabling applications in knowledge graphs and multimodal tasks, as explored in comprehensive surveys. These contributions reflect the field's continued evolution toward more expressive, scalable, and integrative models for dynamic and large-scale graph data as of 2025.

Preliminaries

Graph Representations

Graphs are fundamental data structures consisting of nodes (vertices) and edges representing relationships between them, denoted as G = (V, E), where V is the set of nodes with |V| = N and E is the set of edges with |E| = M. Basic graph types include undirected graphs, where edges have no orientation and imply symmetric connections, and directed graphs, where edges possess a specific direction indicating asymmetry in relationships. Graphs can also be unweighted, with edges lacking numerical values, or weighted, where edges carry scalar weights reflecting strength or distance. Furthermore, homogeneous graphs feature a single type of node and edge, suitable for uniform domains like citation networks, whereas heterogeneous graphs incorporate multiple node and edge types, common in diverse applications such as social networks or knowledge bases.

Common matrix-based representations of graphs include the adjacency matrix A \in \mathbb{R}^{N \times N}, a square matrix where A_{ij} = 1 (or the edge weight) if an edge exists from node i to j, and 0 otherwise; this is efficient for dense graphs but memory-intensive for sparse ones. The incidence matrix B \in \mathbb{R}^{N \times M} captures node-edge incidences, with entries typically +1 or -1 for directed edges incident to nodes (indicating source or target), and 0 elsewhere, useful for analyzing edge-oriented properties in geometric deep learning. For efficiency in sparse graphs, where M \ll N^2, non-matrix formats are preferred: edge lists store pairs of connected nodes, while the Compressed Sparse Row (CSR) format uses three arrays (values, column indices, and row pointers) to enable fast access and traversal with O(M) storage.

In attributed graphs, nodes are augmented with feature vectors forming an initial embedding matrix X \in \mathbb{R}^{N \times d}, where each row represents a node's d-dimensional attributes, such as textual or numerical descriptors, serving as input to neural models. Edges may similarly carry attributes, like relation types or weights, enhancing relational expressiveness in heterogeneous settings. These initial embeddings provide a starting point for processing, with learned representations detailed further in subsequent sections.

For spectral methods, the normalized graph Laplacian L = I_N - D^{-1/2} A D^{-1/2} is key, where D is the diagonal degree matrix with D_{ii} = \sum_j A_{ij}, enabling eigendecomposition to reveal connectivity and clustering properties. Scalability poses significant challenges for large graphs with |V| > 10^6, as full adjacency matrices demand O(N^2) storage, often mitigated by sampling or sparse-storage techniques. Handling multi-relational edges in knowledge graphs, which involve diverse relation types across heterogeneous nodes, further complicates representation due to the need for type-aware encoding to preserve semantic richness.
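As an illustration of the dense and sparse formats described above, the following sketch builds an adjacency matrix and a CSR representation from an edge list with NumPy; the toy graph, variable names, and unit edge weights are illustrative assumptions, not drawn from any particular library's API.

```python
import numpy as np

# Toy directed graph with N = 4 nodes given as an edge list (source, target).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 0)]
N = 4

# Dense adjacency matrix: O(N^2) memory, simple but wasteful for sparse graphs.
A = np.zeros((N, N), dtype=np.float32)
for src, dst in edges:
    A[src, dst] = 1.0

# Compressed Sparse Row (CSR): three arrays storing only the M nonzeros.
order = sorted(range(len(edges)), key=lambda k: edges[k])   # group edges by source row
col_indices = np.array([edges[k][1] for k in order])        # column index of each nonzero
values = np.ones(len(edges), dtype=np.float32)              # edge weights (all 1 here)
row_ptr = np.zeros(N + 1, dtype=np.int64)                   # row_ptr[i]:row_ptr[i+1] spans row i
for src, _ in edges:
    row_ptr[src + 1] += 1
row_ptr = np.cumsum(row_ptr)

# Neighbors of node 2 can now be read in O(deg(2)) time.
print(col_indices[row_ptr[2]:row_ptr[3]])   # -> [3]
```

In practice, libraries such as scipy.sparse provide equivalent CSR structures; the point of the sketch is only to show why storage drops from O(N^2) to O(M).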

Embeddings in Graph Data

Embeddings in graph data refer to low-dimensional vector representations that encode the structural and relational information of nodes or entire graphs, enabling the application of machine learning techniques to non-Euclidean data. These representations transform irregular graph structures into continuous spaces where similarities between nodes or graphs can be measured via distances or dot products. Node embeddings, in particular, capture the local neighborhood and global roles of vertices, facilitating tasks such as node classification, link prediction, and recommendation.

Early unsupervised methods for node embeddings laid the foundation for graph representation learning prior to the widespread adoption of graph neural networks. DeepWalk, introduced in 2014, generates embeddings by performing truncated random walks on the graph to produce sequences of nodes, analogous to sentences in natural language, and applies the Skip-gram model from word2vec to learn latent representations that preserve neighborhood proximity. This approach effectively captures community structures and social relations in social networks. Building on this, LINE (2015) optimizes embeddings to preserve both first-order proximity (direct connections between nodes) and second-order proximity (similarity in neighborhood structures), making it scalable for large-scale networks like those in recommendation systems. Node2Vec (2016) extends these ideas by introducing biased random walks that balance breadth-first and depth-first exploration, allowing embeddings to flexibly capture structural equivalences or homophily, which has proven effective in tasks like multi-label node classification, with Node2Vec achieving up to 27% improvements over baselines like DeepWalk on datasets such as BlogCatalog.

Graph-level embeddings aggregate node representations to represent entire graphs, commonly used in graph classification or similarity search tasks. A straightforward method is mean pooling, where the graph embedding is the average of all node embeddings, providing a permutation-invariant summary that preserves global information without requiring hierarchical structures. Other simple pooling strategies, such as sum or max pooling, emphasize different aspects like total signal or prominent features, respectively, and are often applied in early graph kernel or embedding pipelines.

Graph embedding methods operate in either transductive or inductive settings. Transductive learning generates embeddings for a fixed graph during training, leveraging the full graph structure but limiting generalization to unseen nodes or graphs, as seen in the original DeepWalk and LINE formulations. In contrast, inductive approaches, such as adaptations of Node2Vec or later frameworks like GraphSAGE, enable embedding generation for novel nodes by relying on local neighborhoods and sampling, supporting dynamic graphs in applications like evolving social networks.

Embeddings are evaluated primarily through downstream tasks that measure their utility in capturing graph semantics. For link prediction, metrics like area under the ROC curve (AUC) assess the ability to rank potential edges, with Node2Vec achieving up to 10-20% improvements over baselines on datasets like BlogCatalog. In node classification, accuracy or micro-F1 scores quantify performance when embeddings serve as features for classifiers, as demonstrated by DeepWalk's gains in multi-label settings on protein-protein interaction networks. These metrics highlight embeddings' role in enabling scalable, structure-aware learning on graphs.
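A minimal sketch of the random-walk step underlying DeepWalk-style embeddings, assuming the graph is given as an adjacency-list dictionary (a hypothetical toy input, not the original implementation); the generated walks would then be fed to a Skip-gram model such as gensim's Word2Vec to learn node vectors.

```python
import random

def random_walks(adj, walk_length=5, walks_per_node=2, seed=0):
    """Generate truncated random walks, one list of node ids per walk."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:          # dead end: stop the walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy undirected graph as an adjacency list.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
corpus = random_walks(adj)
# Each walk plays the role of a "sentence" whose "words" are node ids,
# which a Skip-gram model turns into neighborhood-preserving embeddings.
print(corpus[:2])
```

Node2Vec differs only in how the next node is chosen: the uniform rng.choice would be replaced by a biased transition that trades off breadth-first and depth-first exploration.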

Core Mechanisms

Message Passing Paradigm

The message passing paradigm serves as the foundational framework for most contemporary graph neural networks (GNNs), enabling the processing of graph-structured data through an iterative exchange of information between nodes. Introduced in early neural models for graphs, this approach allows each node to refine its representation by aggregating messages from its neighbors across multiple layers, typically denoted as K iterations. This process captures local graph topology and features progressively, extending the receptive field to higher-order neighborhoods with each layer. The paradigm unifies diverse GNN variants by emphasizing neighbor communication, making it versatile for tasks such as node classification and link prediction.

Formally, the update rule for a node's hidden state h_v^{(k)} at layer k is given by h_v^{(k)} = \text{UPDATE}^{(k)} \left( h_v^{(k-1)}, m_v^{(k)} \right), where the aggregated message m_v^{(k)} is computed as m_v^{(k)} = \text{AGGREGATE}^{(k)} \left( \left\{ \text{MSG}^{(k)} \left( h_u^{(k-1)}, h_v^{(k-1)}, e_{uv} \right) \mid u \in \mathcal{N}(v) \right\} \right). Here, \mathcal{N}(v) denotes the neighborhood of node v, e_{uv} represents edge features (if present), and the functions MSG, AGGREGATE, and UPDATE are learnable components parameterized by the model. Initial states h_v^{(0)} are typically set to input node features. This formulation ensures that information propagates spatially across the graph, with each layer building upon the previous to encode multi-hop relationships.

The message function (MSG) generates messages from pairs of neighboring node states and edge attributes, often by simple concatenation followed by a linear transformation, such as \text{MSG}(h_u, h_v, e_{uv}) = W \cdot [h_u ; h_v ; e_{uv}], where W is a weight matrix and ; denotes concatenation. The aggregation function (AGGREGATE) then combines these messages in a permutation-invariant manner, with choices including sum (\sum), mean (\frac{1}{|\mathcal{N}(v)|} \sum), or max pooling to capture different neighborhood statistics. For instance, mean aggregation has been shown effective for inductive learning on large graphs by normalizing for varying degrees. The update function (UPDATE) integrates the aggregated message with the node's prior state, frequently implemented as a multi-layer perceptron (MLP) or gated recurrent unit (GRU) to introduce nonlinearity and memory. These components allow flexibility while maintaining the core iterative structure.

A key limitation of the message passing paradigm arises from over-smoothing, where repeated iterations cause node representations to converge toward a common value, reducing discriminative power as depth increases. This phenomenon limits the effective depth of GNNs, as embeddings from distant nodes become indistinguishable after sufficient layers, constraining expressivity for tasks requiring fine-grained distinctions. Theoretical analysis demonstrates that this occurs exponentially fast in certain models, akin to repeated application of a diffusion operator. The paradigm inherently possesses permutation equivariance, meaning that relabeling nodes in the input graph results in correspondingly relabeled outputs, preserving structural consistency without dependence on arbitrary node ordering. This property arises from the symmetric aggregation over neighbors and ensures the model's outputs are invariant to isomorphic transformations, a desirable trait for graph learning.
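The MSG/AGGREGATE/UPDATE decomposition can be written compactly. The sketch below is illustrative only, with arbitrary dimensions and randomly initialized weights standing in for learned parameters; it implements one layer of sum-aggregated message passing in NumPy.

```python
import numpy as np

def message_passing_layer(H, edges, W_msg, W_upd):
    """One layer: MSG = linear map of [h_u ; h_v], AGGREGATE = sum, UPDATE = ReLU of [h_v ; m_v]."""
    N, d = H.shape
    M = np.zeros((N, W_msg.shape[1]))                 # aggregated messages m_v
    for u, v in edges:                                # directed edge u -> v
        msg = np.concatenate([H[u], H[v]]) @ W_msg    # MSG(h_u, h_v)
        M[v] += msg                                   # permutation-invariant sum aggregation
    return np.maximum(0.0, np.concatenate([H, M], axis=1) @ W_upd)  # UPDATE(h_v, m_v)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                  # initial node features h_v^{(0)}
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
W_msg = rng.normal(size=(16, 8))             # maps [h_u ; h_v] (16-dim) to an 8-dim message
W_upd = rng.normal(size=(16, 8))             # maps [h_v ; m_v] (16-dim) to the new 8-dim state
H1 = message_passing_layer(H, edges, W_msg, W_upd)   # h_v^{(1)} for all nodes
print(H1.shape)                              # (4, 8)
```

Stacking K such layers realizes the K-iteration scheme above; swapping the sum for a mean or max inside the loop changes only the AGGREGATE choice.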

Propagation and Aggregation

In graph neural networks (GNNs), propagation refers to the process of diffusing information across the graph structure, typically through repeated applications of a normalized adjacency matrix that approximates spectral diffusion while maintaining computational efficiency. A common propagation rule employs a symmetrically normalized adjacency matrix with added self-loops, defined as \tilde{A} = A + I, where A is the original adjacency matrix and I is the identity matrix, followed by renormalization \hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}, with \tilde{D}_{ii} = \sum_j \tilde{A}_{ij} being the degree matrix of \tilde{A}. This renormalization keeps the spectrum of the propagation operator bounded (the eigenvalues of I_N + D^{-1/2} A D^{-1/2} would otherwise range over [0, 2]), thereby preventing numerical instabilities and exploding or vanishing gradients that could arise from repeated unnormalized diffusion.

The core update rule for linear propagation in a GNN layer integrates this operator with node features and learnable weights, expressed as H^{(k)} = \sigma\left( \hat{A} H^{(k-1)} W^{(k)} \right), where H^{(k)} denotes the matrix of node representations at layer k, \sigma is a nonlinear activation (e.g., ReLU), H^{(0)} = X is the initial input feature matrix, and W^{(k)} is the weight matrix for layer k. This formulation enables information from neighboring nodes to propagate linearly before the nonlinearity, approximating localized filters on the graph. Through multi-hop propagation, each additional layer expands the receptive field to encompass larger neighborhoods; specifically, a k-layer GNN aggregates information from up to the k-hop neighborhood of each node, allowing the model to capture increasingly global structural patterns while remaining computationally tractable at O(|E|) per layer, where |E| is the number of edges.

Aggregation functions play a crucial role in combining messages from a node's neighbors during propagation, with common operations including sum, mean, max-pooling, and more expressive variants like LSTM-based aggregation to handle sequential or set-like neighbor representations. The sum aggregator directly accumulates neighbor features, which can introduce sparsity awareness but risks exploding gradients in graphs with high degree variability unless paired with normalization; in contrast, the mean aggregator averages features for degree invariance, promoting stability akin to the averaging in convolutional GNN layers, while max-pooling emphasizes dominant signals for robustness to noise and sparsity. LSTM aggregation, applied to permuted neighbor sets, captures sequential dependencies non-permutation-invariantly, enhancing expressiveness for complex neighborhood structures at the cost of higher computation.

To address stability challenges such as over-smoothing, where deep propagation causes node representations to converge indistinguishably, layer normalization and residual connections are employed. Techniques like PairNorm (2019) mitigate this by enforcing a constant total pairwise squared distance (TPSD) across node features after each layer, via centering \tilde{x}_i^c = \tilde{x}_i - \frac{1}{n} \sum_{i=1}^n \tilde{x}_i and scaling \dot{x}_i = s \cdot \frac{\tilde{x}_i^c}{\sqrt{\frac{1}{n} \sum_{i=1}^n \|\tilde{x}_i^c\|_2^2}}, where s is a scaling hyperparameter; this preserves inter-cluster distinctions while allowing intra-cluster similarity, enabling deeper GNNs without performance degradation. Residual connections, analogous to those in ResNets, further stabilize training by adding skip links H^{(k)} = H^{(k-1)} + f(H^{(k-1)}), reducing gradient flow issues in multi-hop setups.
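The symmetric renormalization and linear propagation rule above can be reproduced in a few lines; the sketch below uses a toy graph and random weights purely for illustration, computing \hat{A} and applying one layer of the H^{(k)} update.

```python
import numpy as np

def normalized_adjacency(A):
    """Compute A_hat = D_tilde^{-1/2} (A + I) D_tilde^{-1/2} with self-loops added."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)                     # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)      # undirected 4-node graph
X = rng.normal(size=(4, 3))                    # input features H^{(0)} = X
W = rng.normal(size=(3, 2))                    # layer weights W^{(1)} (would be learned)

A_hat = normalized_adjacency(A)
H1 = np.maximum(0.0, A_hat @ X @ W)            # H^{(1)} = ReLU(A_hat X W)
print(H1.shape)                                # (4, 2)
```

Applying the same operator k times (with fresh weight matrices) realizes the k-hop receptive field described above.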

Architectures

Spectral Graph Neural Networks

Spectral graph neural networks (GNNs) draw from spectral graph theory to perform convolutions in the spectral domain of graphs, leveraging the graph Fourier transform for filtering on non-Euclidean structures. This approach defines graph convolutions by multiplying signals with filters parameterized in the spectral domain, enabling the adaptation of convolutional neural networks (CNNs) to irregular graph topologies. Unlike spatial methods, spectral convolutions operate globally via the graph's Laplacian eigenvectors, capturing frequency-based patterns in node features.

The core operation of spectral convolution is formulated as g_\theta \star x = U g_\theta(\Lambda) U^T x, where x is the input signal on the graph nodes, L = U \Lambda U^T is the eigendecomposition of the normalized Laplacian L, U contains the eigenvectors (the graph Fourier basis), \Lambda is the diagonal matrix of eigenvalues, and g_\theta(\Lambda) is a spectral filter parameterized by \theta. This definition allows filters to be learned directly in the spectral domain, analogous to multiplying with frequency responses in classical signal processing. However, computing U requires eigendecomposition of the Laplacian, which has a computational cost of O(|V|^3) for a graph with |V| vertices, limiting scalability to large graphs.

To address the eigendecomposition bottleneck and achieve localization, Defferrard et al. (2016) approximate the filter g_\theta(\Lambda) using Chebyshev polynomials up to order K: g_\theta(\Lambda) \approx \sum_{k=0}^K \theta_k T_k(\tilde{\Lambda}), where T_k are the Chebyshev polynomials evaluated on the rescaled Laplacian \tilde{L} = 2L/\lambda_{\max} - I_N (with \tilde{\Lambda} its eigenvalues), and \theta_k are the filter coefficients. This polynomial expansion enables fast, localized filtering without explicit eigendecomposition, as the recursion for Chebyshev polynomials propagates signals only within a K-hop neighborhood, reducing complexity to O(|E| K) per layer for a graph with |E| edges. The approach parameterizes multi-layer GNNs as a hierarchy of such filtered layers, demonstrating effectiveness on tasks like semi-supervised node classification.

A prominent simplification of this framework is the Graph Convolutional Network (GCN) introduced by Kipf and Welling (2017), which assumes a first-order Chebyshev approximation (K=1) and a renormalized adjacency matrix \hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} (with \tilde{A} = A + I and degree matrix \tilde{D}). The layer-wise propagation rule becomes H^{(l+1)} = \sigma(\hat{A} H^{(l)} W^{(l)}), where H^{(l)} are the node representations at layer l, W^{(l)} is a learnable weight matrix, and \sigma is a nonlinear activation like ReLU. This yields a simple, scalable architecture that approximates spectral convolutions through repeated neighborhood aggregation, achieving state-of-the-art results on citation network benchmarks such as Cora and Citeseer, with accuracy exceeding 80% on Cora. GCNs instantiate the message passing paradigm by propagating features along edges in the spectral basis.

Despite these advances, spectral GNNs exhibit key limitations, including their transductive nature, where models trained on a specific graph structure cannot easily generalize to unseen nodes or graphs without retraining, as the propagation relies on the full Laplacian. Additionally, even with approximations like Chebyshev polynomials, residual dependencies on graph-scale computations hinder deployment on massive or dynamic graphs.
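As a rough illustration of the Chebyshev filtering described above (a sketch, not the reference implementation), the following applies a K-th order polynomial filter to a graph signal using the standard recursion T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x), avoiding any eigendecomposition; the filter coefficients and the bound lambda_max = 2 are illustrative assumptions.

```python
import numpy as np

def chebyshev_filter(L, x, theta, lambda_max=2.0):
    """Apply g_theta(L) x ~ sum_k theta_k T_k(L_scaled) x without eigendecomposition."""
    N = L.shape[0]
    L_scaled = (2.0 / lambda_max) * L - np.eye(N)    # rescale spectrum to [-1, 1]
    T_prev, T_curr = x, L_scaled @ x                 # T_0(L)x = x, T_1(L)x = L_scaled x
    out = theta[0] * T_prev + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_next = 2.0 * (L_scaled @ T_curr) - T_prev  # Chebyshev recursion
        out += theta[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return out

# Normalized Laplacian of a 3-node path graph and a toy signal.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt
x = np.array([1.0, -1.0, 0.5])
print(chebyshev_filter(L, x, theta=[0.5, 0.3, 0.2]))   # K = 2 localized filter
```

Each matrix-vector product touches only existing edges, which is where the O(|E| K) cost per layer comes from.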
To enhance spectral expressivity, variants such as CayleyNets (Levie et al., 2017) employ rational approximations based on Cayley polynomials for the filter g_\theta(z), defined over the domain of the Laplacian eigenvalues: g_{c,h}(\lambda) = c_0 + 2 \Re \left\{ \sum_{j=1}^{r} c_j (h\lambda - i)^j (h\lambda + i)^{-j} \right\}, where c = (c_0, \dots, c_r) includes one real coefficient c_0 and r complex coefficients c_j, and h > 0 is a spectral zoom parameter. This parameterization allows CayleyNets to approximate sharper filters than polynomials, improving performance on tasks requiring high-frequency capture, such as signal denoising, while maintaining efficient recursion without eigendecomposition. CayleyNets have been shown to outperform polynomial-based spectral GNNs, with reported accuracies around 80% on synthetic community detection tasks.

Spatial Graph Neural Networks

Spatial graph neural networks (SGNNs) operate directly on the graph's spatial structure, performing convolutions by aggregating information from a node's local neighborhood without relying on eigendecomposition of the graph Laplacian. This approach enables efficient information exchange between connected nodes, where each node's representation is updated based on its own features and those of its neighbors. Unlike spectral methods, SGNNs emphasize localized operations that can be applied layer-wise, making them suitable for dynamic and large-scale graphs.

A key mechanism in SGNNs is spatial convolution through neighborhood sampling and aggregation, as exemplified by GraphSAGE, which samples a fixed-size set of neighbors for each node to compute embeddings. In GraphSAGE, aggregation functions such as mean, LSTM, or pooling are used to combine neighbor features before concatenation with the central node's representation and transformation via a linear layer. For the mean aggregator, the node embedding update is given by: \mathbf{z}_v = \sigma \left( \mathbf{W} \cdot \text{CONCAT} \left( \mathbf{h}_v, \text{AGG} \left( \{ \mathbf{h}_u \mid u \in \mathcal{N}_s(v) \} \right) \right) \right), where \mathcal{N}_s(v) denotes the sampled neighbor set of v, \mathbf{h}_v is the input representation of v, \mathbf{W} is a learnable weight matrix, \sigma is a nonlinear activation (e.g., ReLU), and AGG computes the mean over the sampled neighbors.

SGNNs exhibit inductive capability, allowing them to generate embeddings for unseen nodes during inference by relying on local sampling rather than the full graph structure, in contrast to the transductive limitations of spectral GNNs that require recomputation on the entire graph. Variants of SGNNs include Graph Attention Networks (GAT), which extend aggregation with attention mechanisms, and Jumping Knowledge Networks (JK-Net), which incorporate multi-scale information through jumping connections across layers to capture diverse neighborhood ranges. The primary advantages of SGNNs lie in their scalability to large graphs, achieved through mini-batching and sampling of neighbors, as demonstrated by FastGCN, which reduces computational overhead while maintaining performance on inductive tasks. This makes SGNNs particularly effective for applications involving evolving graphs where full retraining is impractical.
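A minimal sketch of the mean-aggregator update above, with a hypothetical sampling size and random weights in place of learned parameters; it is an illustration of the formula, not GraphSAGE's reference code.

```python
import numpy as np

def sage_mean_update(v, H, neighbors, W, num_samples=2, rng=None):
    """GraphSAGE-style update: sample neighbors, mean-aggregate, concatenate, transform."""
    rng = rng or np.random.default_rng(0)
    nbrs = neighbors[v]
    sampled = rng.choice(nbrs, size=min(num_samples, len(nbrs)), replace=False)
    agg = H[sampled].mean(axis=0)                    # AGG: mean over sampled neighborhood
    z = np.concatenate([H[v], agg]) @ W              # CONCAT then linear transform
    return np.maximum(0.0, z)                        # ReLU nonlinearity

rng = np.random.default_rng(1)
H = rng.normal(size=(5, 4))                          # current node representations
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2]}
W = rng.normal(size=(8, 4))                          # maps concatenated 8-dim input to 4 dims
print(sage_mean_update(1, H, neighbors, W).shape)    # (4,)
```

Because the update only touches a sampled local neighborhood, the same function can embed a node that was not present at training time, which is the inductive property discussed above.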

Attention-Based Graph Neural Networks

Attention-based graph neural networks extend the message passing paradigm by incorporating mechanisms that dynamically assign weights to neighboring nodes based on their relevance to the central node, enabling more adaptive aggregation compared to uniform or fixed-weight schemes. This approach draws inspiration from self-attention in sequence models but adapts it to irregular graph structures, where attention coefficients are computed pairwise for edges in the node's neighborhood. By learning these weights end-to-end, such networks can prioritize informative neighbors, enhancing expressiveness for tasks involving heterogeneous relationships.

The foundational model, the Graph Attention Network (GAT), proposed by Veličković et al. (2018), computes attention coefficients \alpha_{vu} for an edge between a central node v and a neighbor u as follows: \alpha_{vu} = \mathrm{softmax}_u\left( \mathrm{LeakyReLU}\left( \mathbf{a}^T [\mathbf{W} \mathbf{h}_v \| \mathbf{W} \mathbf{h}_u] \right) \right), where \mathbf{h}_v and \mathbf{h}_u are the input node representations, \mathbf{W} is a shared linear transformation, \mathbf{a} is a learnable attention vector, \| denotes concatenation, and the softmax is applied over the neighborhood \mathcal{N}(v). These coefficients are then used to aggregate neighbor features into the updated representation of v. To improve stability and capture diverse aspects of the neighborhood, GAT employs multi-head attention, running K parallel attention mechanisms and combining their outputs via concatenation (for intermediate layers) or averaging (for the final layer). The multi-head aggregation with concatenation yields: \mathbf{h}_v' = \big\Vert_{i=1}^K \sigma\left( \sum_{u \in \mathcal{N}(v)} \alpha_{vu}^i \mathbf{W}^i \mathbf{h}_u \right), where \sigma is a nonlinearity such as ELU, and each head i has its own \mathbf{W}^i and \mathbf{a}^i. This formulation allows GAT to handle varying degrees of neighbor relevance without relying strictly on the homophily assumption prevalent in earlier graph convolutions, leading to improved performance on node classification benchmarks like Cora, where GAT achieves 83.0% accuracy versus 81.5% for prior methods.

Subsequent variants address limitations in the original GAT. GATv2, introduced by Brody et al. (2021), resolves the "static attention" issue, where attention rankings are independent of the query node, by reordering the operations so that the attention vector is applied after the nonlinearity rather than before it, enabling dynamic, query-dependent attention. This makes GATv2 strictly more expressive than GAT while maintaining comparable computational cost, outperforming it by up to 2% on transductive tasks. Another notable extension is Graphormer by Ying et al. (2021), which integrates Transformer-style self-attention with graph-specific encodings like shortest path distances and centrality measures to better capture global structure, achieving state-of-the-art results on graph-level tasks such as molecular property prediction with a mean absolute error of 0.1234 on the PCQM4M-LSC dataset, a reduction of approximately 0.016 compared to prior GNNs. These advancements underscore attention's role in scaling GNNs to complex, non-homophilous graphs.
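The attention coefficients \alpha_{vu} can be computed directly from the GAT formula above; the sketch below covers a single head with random parameters for one node's neighborhood and is only a hand-worked illustration of the equation.

```python
import numpy as np

def gat_attention(v, neighbors, H, W, a, negative_slope=0.2):
    """Return softmax-normalized attention coefficients alpha_{vu} for u in N(v)."""
    Wh_v = W @ H[v]
    scores = []
    for u in neighbors[v]:
        pair = np.concatenate([Wh_v, W @ H[u]])              # [W h_v || W h_u]
        e = a @ pair                                         # a^T [.]
        scores.append(np.where(e > 0, e, negative_slope * e))  # LeakyReLU
    scores = np.array(scores, dtype=float)
    exp = np.exp(scores - scores.max())                      # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 5))                                  # input node features
W = rng.normal(size=(3, 5))                                  # shared linear transform (5 -> 3)
a = rng.normal(size=(6,))                                    # attention vector over the 6-dim pair
neighbors = {0: [1, 2, 3]}
alpha = gat_attention(0, neighbors, H, W, a)
print(alpha, alpha.sum())                                    # coefficients over N(0), summing to 1
```

Running K independent copies of this computation and concatenating the resulting weighted sums reproduces the multi-head aggregation described above.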

Pooling Techniques

Local Pooling Methods

Local pooling methods in graph neural networks involve techniques that perform coarsening operations within local neighborhoods or subgraphs, typically in a single layer, to downsample the graph structure and derive compact representations for downstream tasks like graph classification. These methods leverage node embeddings to identify important local structures, reducing graph size while retaining discriminative information. In practice, many foundational approaches blur the line between local and global pooling: methods such as TopKPool and SortPool score and select nodes over the whole graph, so they are discussed alongside hierarchical and global methods below. Local pooling aims to mitigate computational demands in GNNs, particularly for dense graphs where adjacency computations scale poorly; coarsening reduces complexity and improves scalability for larger inputs. Evaluations on graph classification benchmarks demonstrate their impact, with methods like SortPool achieving accuracy improvements over kernel-based baselines.

Hierarchical and Global Pooling

Hierarchical pooling methods in graph neural networks enable the progressive coarsening of graphs across multiple layers, capturing multi-scale structural representations by iteratively reducing the graph size while preserving essential topological information. These approaches allow the model to learn hierarchical embeddings suitable for tasks requiring graph-level understanding.

Seminal methods include selection-based approaches like TopKPool, introduced by Gao and Ji. In TopKPool, node scores are computed as g(h_v) = h_v^T s, where s is a learnable projection vector, and the top-k nodes are selected globally to form the coarsened graph, facilitating hierarchical reduction in architectures like Graph U-Nets. SortPool, proposed by Zhang et al., provides a global sorting mechanism that can be integrated hierarchically. Nodes are scored using their embeddings and sorted in descending order, with the top-k nodes extracted as a fixed-length sequence of features fed into a 1D convolutional neural network to learn local patterns invariant to permutations.

A foundational clustering-based approach is DiffPool from Ying et al., which learns soft assignments of nodes to a fixed number of clusters via a dedicated GNN layer: S = \mathrm{softmax}(\mathrm{GNN}(Z, A)). The coarsened features are Z' = S^T Z and the coarsened adjacency is A' = S^T A S, enabling differentiable hierarchical coarsening optimized end-to-end. This allows flexible grouping that adapts to graph structures during training, with reported improvements on benchmarks like ENZYMES (an accuracy gain of approximately 8% over a GraphSAGE baseline). Another influential hierarchical technique is SAGPool, which employs structure-aware self-attention to select top-k nodes at each layer. SAGPool computes scores via graph convolution: Z = \tanh \left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} X^{(l)} \Theta_{att} \right), selecting nodes based on these GCN-derived scores to update features and adjacency. This incorporates both features and topology for node importance, improving performance on graph classification benchmarks compared to non-hierarchical methods.

Global pooling, often the final readout after flat or hierarchical coarsening, aggregates node representations into a fixed-size vector over all nodes. Common operations include sum (\sum h_i), mean (\frac{1}{N} \sum h_i), and max (\max h_i) pooling, which are permutation-invariant and were used early in GNNs for tasks like molecular fingerprinting. For more expressive aggregation, Set2Set uses an iterative LSTM-based attention mechanism over the node set to produce an adaptive embedding. Self-attention pooling, as in SAGPool, weighs node importance for the readout. These strategies support end-to-end learning for graph-level tasks, such as molecular property prediction, where hierarchical methods capture local motifs and global structure effectively. Recent advances (as of 2024) include comprehensive benchmarks evaluating over 17 pooling methods on 28 datasets, highlighting improvements in expressivity and efficiency, with new techniques like explicit graph pooling addressing limitations in implicit methods.
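The basic global readouts mentioned above are one-liners; the sketch below (random node embeddings standing in for GNN outputs) shows sum, mean, and max pooling into a fixed-size graph vector and checks their permutation invariance.

```python
import numpy as np

def global_readout(H, mode="mean"):
    """Permutation-invariant graph-level readout over node embeddings H (N x d)."""
    if mode == "sum":
        return H.sum(axis=0)
    if mode == "mean":
        return H.mean(axis=0)
    if mode == "max":
        return H.max(axis=0)
    raise ValueError(f"unknown readout: {mode}")

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))                 # embeddings for a 6-node graph
g_sum, g_mean, g_max = (global_readout(H, m) for m in ("sum", "mean", "max"))

# Shuffling the node order leaves every readout unchanged (permutation invariance).
H_perm = H[rng.permutation(6)]
assert np.allclose(global_readout(H_perm, "mean"), g_mean)
print(g_sum.shape, g_mean.shape, g_max.shape)   # (4,) each
```

Hierarchical methods such as DiffPool or SAGPool interleave learned coarsening steps before a final readout of this kind.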

Advanced Topics

Heterophily in Graphs

In network science and graph representation learning, homophily refers to the tendency of connected nodes to share similar attributes or labels, while heterophily describes the opposite scenario where linked nodes exhibit dissimilar features or classes. The homophily ratio h is commonly quantified as the average fraction of same-label neighbors across all nodes:
h = \frac{1}{|V|} \sum_{v \in V} \frac{|\{u \in \mathcal{N}(v) : y_u = y_v\}|}{|\mathcal{N}(v)|},
where V is the set of nodes, \mathcal{N}(v) is the neighborhood of node v, and y_v is the label of v. Graphs with high h (close to 1) exhibit strong homophily, enabling effective signal smoothing via neighborhood aggregation, whereas low h (close to 0) indicates heterophily, as seen in web page networks where pages link across categories. For instance, the WebKB dataset, a collection of university web pages classified into five categories (such as student, faculty, and course pages), has h \approx 0.06, highlighting pronounced heterophily.
Standard graph neural networks (GNNs), such as GCN and GAT, rely heavily on one-hop neighbor aggregation under an implicit homophily assumption, which mixes a node's embedding with its neighbors' features. In heterophilic graphs, this aggregation over-smooths dissimilar signals, reducing node distinguishability and leading to suboptimal representations. Analysis shows that such models can perform worse than simple multi-layer perceptrons (MLPs) on low-homophily benchmarks, with accuracy drops of up to 40% compared to homophilic settings; for example, GCN achieves only about 52% accuracy on some heterophilic benchmarks, versus approximately 81.5% on homophilic datasets like Cora. This limitation stems from the low-pass filtering effect of standard propagation, which fails to capture high-frequency variations in heterophilic structures.

To address these issues, specialized GNNs incorporate designs like separate ego and neighbor embeddings, as in H2GCN, which concatenates self-features with aggregated 1-hop and 2-hop neighbor representations to preserve ego information and leverage higher-order contexts where homophily may emerge. Similarly, GPR-GNN uses learnable Generalized PageRank (GPR) weights in propagation steps, enabling adaptive distance-based weighting that alternates positive and negative coefficients to emphasize dissimilar neighbors in heterophilic graphs, improving accuracy by up to 10% on heterophilic benchmarks. Other techniques include signed edges to differentiate positive (homophilous) and negative (heterophilous) connections, and explicit higher-order neighborhood sampling to bypass immediate dissimilarities.

Recent advances (2024-2025) focus on adaptive aggregation schemes that dynamically mix homophily- and heterophily-oriented terms during propagation. For example, LG-GNN employs local-global adaptation, using local feature similarities (e.g., cosine similarity) and global structural similarities (e.g., SimRank) to weight intra- and inter-class propagations based on learned heterophily indicators, enhancing robustness across varying h values without predefined assumptions and achieving improvements of up to 28% over baselines like GCN on heterophilic datasets. Ongoing research also explores integration of heterophily-aware GNNs with large language models for better reasoning on structured data with mixed connectivity patterns.
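The node homophily ratio h defined above can be computed directly; this sketch assumes the graph is given as an adjacency-list dictionary with per-node labels (a hypothetical toy input).

```python
def homophily_ratio(neighbors, labels):
    """Average fraction of same-label neighbors over all nodes with at least one neighbor."""
    fractions = []
    for v, nbrs in neighbors.items():
        if not nbrs:
            continue                        # isolated nodes contribute no neighborhood term
        same = sum(1 for u in nbrs if labels[u] == labels[v])
        fractions.append(same / len(nbrs))
    return sum(fractions) / len(fractions)

# Toy heterophilic graph: every edge connects nodes with different labels.
neighbors = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
labels = {0: "A", 1: "B", 2: "B", 3: "A"}
print(homophily_ratio(neighbors, labels))   # 0.0 here, the fully heterophilic extreme
```

Measuring h this way before choosing an architecture helps decide whether a standard homophily-oriented GNN or a heterophily-aware variant is more appropriate.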

Scalability Challenges

Graph neural networks (GNNs) face significant scalability challenges when applied to large-scale graphs, primarily due to the high computational and memory requirements of message passing across the entire graph structure. In full-graph training, where all nodes are processed simultaneously, the memory complexity is O(|V| d K), with |V| denoting the number of nodes, d the feature dimension, and K the number of layers, as intermediate activations for each layer must be retained for backpropagation. This becomes prohibitive for graphs with millions or billions of nodes and edges, such as social networks or web graphs, where even sparse representations exceed GPU memory limits. Additionally, in graphs with high-degree nodes, the "neighbor explosion" problem arises during multi-layer propagation, as the receptive field of each node grows exponentially with depth, leading to redundant computations and further straining resources.

To address these issues, sampling techniques have been developed to approximate full-graph propagation by processing only subsets of neighbors or subgraphs during training. GraphSAGE employs uniform neighbor sampling to select a fixed number of neighbors per layer, enabling inductive learning on large, evolving graphs without full propagation and mitigating neighbor explosion by controlling the sampling size independently of node degrees. Cluster-GCN, introduced in 2019, partitions the graph into densely connected clusters using algorithms like METIS and forms mini-batches from these clusters, allowing training on graphs too large for single-device memory while preserving local structure for accurate approximations. These methods reduce memory usage from full-graph requirements to subgraph scales, often achieving near-linear scaling with graph size.

Approximation methods further enhance efficiency by precomputing or probabilistically selecting propagation paths. SIGN, proposed in 2020, sidesteps online sampling by precomputing powers of the adjacency matrix up to K hops offline, enabling constant-time multi-layer propagation during training with linear memory overhead relative to input size, suitable for billion-edge graphs. FastGCN, from 2018, introduces layer-wise importance sampling to prioritize informative nodes, reducing the variance of the sampled estimates and avoiding the recursive expansion of neighborhoods across layers, which is particularly effective in graphs with skewed degree distributions.

Distributed training frameworks extend these approaches to ultra-large graphs by parallelizing computation across multiple machines. For instance, PinSage scales GNN training to a graph with 3 billion nodes and 18 billion edges using random walk-based sampling and distributed aggregation, handling web-scale recommender systems through coordinated feature fetching and model updates across clusters. Such systems rely on careful partitioning and load balancing of parallel computation, ensuring efficient handling of billion-node graphs without excessive communication overhead. Evaluations on large-scale benchmarks demonstrate substantial speedups; for example, sampling-based methods yield up to 10x faster training compared to full-graph baselines, with Cluster-GCN achieving 3-5x improvements on citation networks while maintaining comparable accuracy.
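A sketch of the uniform neighbor sampling that GraphSAGE-style mini-batching relies on; the fan-out values and the adjacency-list input are hypothetical, and the function simply returns the multi-hop node set whose features would need to be loaded for a batch.

```python
import random

def sample_k_hop(batch, neighbors, fanouts, seed=0):
    """Uniformly sample a fixed number of neighbors per node, layer by layer."""
    rng = random.Random(seed)
    frontier, visited = set(batch), set(batch)
    for fanout in fanouts:                      # one fan-out per GNN layer
        next_frontier = set()
        for v in frontier:
            nbrs = neighbors.get(v, [])
            sampled = rng.sample(nbrs, min(fanout, len(nbrs)))
            next_frontier.update(sampled)
        visited |= next_frontier
        frontier = next_frontier
    return visited                              # nodes whose features must be fetched

neighbors = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0, 4], 4: [1, 3]}
print(sample_k_hop(batch=[0], neighbors=neighbors, fanouts=[2, 2]))
```

Capping the fan-out per layer bounds the sampled subgraph size by a product of the fan-outs, instead of the exponential growth that full multi-hop expansion would incur.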

Applications

Node and Edge Prediction

Graph neural networks (GNNs) are widely applied to node classification tasks, where the goal is to predict labels for individual nodes in a graph based on their features and structural context. This is often framed as a semi-supervised learning problem, leveraging a small set of labeled nodes to propagate information across the graph via message passing. A seminal example is the Graph Convolutional Network (GCN), which aggregates neighborhood information to produce node embeddings for classification; on the Cora citation network dataset, a graph consisting of 2,708 scientific publications classified into seven categories, GCN achieves approximately 81.5% accuracy in semi-supervised node classification, outperforming traditional methods like label propagation by capturing both node attributes and graph topology.

In multi-label node classification scenarios, GNNs extend this capability to assign multiple labels per node, which is particularly useful in biological networks. For instance, in protein-protein interaction graphs, GNNs predict protein functions by learning from node features like sequence data and edge relations representing interactions; specialized variants of Graph Attention Networks (GAT), such as GAT-GO, have demonstrated Fmax scores around 0.50 on challenging test sets with low sequence identity, enabling scalable annotation of thousands of proteins. Another application involves actor co-occurrence graphs derived from movie databases, where node classification identifies actor attributes; here, spatial GNN variants achieve high micro-F1 scores by exploiting homophilic connections in collaboration networks.

Link prediction, or edge prediction, uses GNNs to infer the existence or type of edges between node pairs, typically by computing similarity scores from learned embeddings. A common approach involves taking the dot product of node representations from the final GNN layers to score potential edges, with thresholds applied via sigmoid activation for binary prediction. The SEAL framework advances this by focusing on neighborhood subgraphs around potential edges, treating link prediction as a classification task on these substructures; on datasets like the PPI network, SEAL improves ROC-AUC scores to approximately 0.94, surpassing earlier methods like DeepWalk by incorporating expressive subgraph patterns. GNNs also handle edge type prediction in multi-relational graphs, such as knowledge graphs where edges represent diverse relations like "author-of" or "located-in." By encoding edge types during message passing, models predict missing relations; for example, on the WN18RR benchmark, relational GCN variants achieve mean reciprocal rank (MRR) values around 0.45, facilitating knowledge graph completion for applications in recommendation systems. In social networks, GNN-based user profiling for edge prediction, such as inferring friendship ties, has shown strong performance on homophilic datasets like ego networks, where similar users are more likely connected, highlighting GNNs' effectiveness in capturing relational patterns.

Standard evaluation metrics for these tasks include micro-F1 for node classification, emphasizing balanced performance across labels, and ROC-AUC for link prediction, which measures discrimination ability under varying thresholds.
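A sketch of the dot-product edge scoring described above, with random embeddings standing in for the final-layer outputs of a trained GNN; the candidate edges and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def link_score(H, u, v):
    """Probability that edge (u, v) exists: sigmoid of the embedding dot product."""
    return 1.0 / (1.0 + np.exp(-(H[u] @ H[v])))

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))                           # final-layer node embeddings from a GNN
candidate_edges = [(0, 1), (2, 4), (1, 3)]
scores = {e: link_score(H, *e) for e in candidate_edges}
predicted = {e: s > 0.5 for e, s in scores.items()}   # threshold for binary prediction
print(scores, predicted)
```

Ranking held-out positive edges against sampled negative pairs by these scores is what the ROC-AUC metric mentioned above evaluates.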

Graph Classification Tasks

Graph classification tasks in graph neural networks (GNNs) involve predicting a single label for an entire graph, leveraging its structure and node or edge features to capture holistic properties. Unlike node classification, which focuses on individual vertices, graph classification requires aggregating node-level representations into a fixed-size graph representation, typically through global or hierarchical pooling mechanisms, followed by a classifier such as a multi-layer perceptron (MLP). This process is crucial for applications such as molecular property prediction in chemistry, where graphs represent chemical compounds, and social network analysis, where graphs depict communities or interactions. Recent advancements as of 2025 include integrating GNNs with large language models for enhanced graph reasoning in complex domains such as recommendation systems.

Seminal methods emphasize effective aggregation and pooling to preserve graph structure. The Graph Isomorphism Network (GIN), introduced in 2019, provides a theoretically grounded approach by using an injective sum aggregator with MLPs, proven to match the expressive power of the Weisfeiler-Lehman graph isomorphism test. Its node update rule is given by: h_v^{(k)} = \mathrm{MLP}^{(k)} \left( (1 + \epsilon^{(k)}) \cdot h_v^{(k-1)} + \sum_{u \in \mathcal{N}(v)} h_u^{(k-1)} \right), followed by a readout that aggregates node representations across layers. GIN achieves state-of-the-art results on bioinformatics benchmarks, such as 89.4% ± 5.6% accuracy on MUTAG (mutagenicity prediction) and 82.7% ± 1.7% on NCI1 (anticancer activity screening), outperforming prior GNNs like GraphSAGE and GCN that rely on less expressive mean or max pooling.

Pooling techniques are central to scalable graph classification. DiffPool (2018) pioneered differentiable hierarchical pooling, learning soft cluster assignments via a GNN to coarsen graphs layer-wise, enabling deeper architectures without losing structural information; it improves accuracy by 5-10% over flat pooling, reaching 76.25% on PROTEINS (enzyme classification). Complementing this, Self-Attention Graph Pooling (SAGPool, 2019) integrates self-attention with graph convolutions for node selection, balancing feature and topology awareness; it attains 76.45% on DD (protein fold identification) and 71.86% on PROTEINS in hierarchical setups, surpassing DiffPool and global baselines like mean pooling.

Benchmarking reveals nuanced performance across domains. A comprehensive 2020 comparison of GNNs (including GIN, DiffPool, DGCNN, and MoNet) on nine datasets, four chemical (e.g., NCI1, PROTEINS) and five social (e.g., REDDIT-BINARY, IMDB-BINARY), shows GIN as the top performer on social networks (e.g., 89.9% on REDDIT-BINARY with degree features), while structure-agnostic MLP baselines compete closely on some chemical tasks, with accuracies around 78%, highlighting that node degrees and simple features often suffice over complex aggregation in some cases. These findings underscore the need for domain-specific adaptations, with GIN's simplicity and expressive power making it widely adopted for real-world graph classification.
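A rough sketch of the GIN update above combined with a sum readout for graph classification; the two-layer MLP, random weights, and toy adjacency matrix are assumptions made for illustration, not the reference implementation.

```python
import numpy as np

def gin_layer(H, A, eps, W1, W2):
    """GIN update: MLP((1 + eps) * h_v + sum of neighbor features)."""
    agg = (1.0 + eps) * H + A @ H                 # injective sum aggregation with self term
    hidden = np.maximum(0.0, agg @ W1)            # two-layer MLP with ReLU
    return np.maximum(0.0, hidden @ W2)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)        # adjacency of a 4-node graph
H = rng.normal(size=(4, 6))                      # input node features
W1, W2 = rng.normal(size=(6, 16)), rng.normal(size=(16, 6))

H1 = gin_layer(H, A, eps=0.0, W1=W1, W2=W2)
graph_embedding = H1.sum(axis=0)                 # sum readout over nodes
print(graph_embedding.shape)                     # (6,), fed to an MLP classifier
```

The sum (rather than mean or max) readout preserves multiset information about node representations, which is central to GIN's expressivity argument.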

References

  1. Graph Neural Networks: A Review of Methods and Applications. arXiv, 2018.
  2. Semi-Supervised Classification with Graph Convolutional Networks. arXiv, 2016.
  3. Graph Attention Networks. arXiv:1710.10903, 2017.
  4. The Graph Neural Network Model. IEEE Transactions on Neural Networks, 2008.
  5. Graph Neural Networks: A Review of Methods and Applications. arXiv (PDF).
  6. Graph Representation Learning. McGill School of Computer Science (PDF).
  7. A Gentle Introduction to Graph Neural Networks. Distill, 2021.
  8. A New Model for Learning in Graph Domains. IEEE Xplore, 2005.
  9. Spectral Networks and Locally Connected Networks on Graphs. arXiv, 2013.
  10. Inductive Representation Learning on Large Graphs. arXiv, 2017.
  11. How Powerful are Graph Neural Networks? arXiv:1810.00826.
  12. Inductive Representation Learning on Temporal Graphs. arXiv:2002.07962, 2020.
  13. Do Transformers Really Perform Bad for Graph Representation? arXiv:2106.05234, 2021.
  14. Compressed Sparse Row Format for Representing Graphs. USENIX (PDF).
  15. A Survey on Heterogeneous Graph Embedding. arXiv:2011.14867, 2020.
  16. Compressed Sparse Graph Routines (scipy.sparse.csgraph). SciPy documentation.
  17. DeepWalk: Online Learning of Social Representations. arXiv, 2014.
  18. LINE: Large-scale Information Network Embedding. arXiv:1503.03578, 2015.
  19. Graph Pooling for Graph-Level Representation Learning: A Survey. 2024.
  20. Graph Embedding on Biomedical Networks: Methods, Applications and Evaluations.
  21. Neural Message Passing for Quantum Chemistry. arXiv:1704.01212, 2017.
  22. Graph Neural Networks Exponentially Lose Expressive Power for Node Classification. arXiv, 2019.
  23. PairNorm: Tackling Oversmoothing in GNNs. arXiv:1909.12223, 2019.
  24. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. arXiv:1606.09375, 2016.
  25. CayleyNets: Graph Convolutional Neural Networks with Complex Rational Spectral Filters. arXiv:1705.07664, 2017.
  26. Representation Learning on Graphs with Jumping Knowledge Networks. 2018.
  27. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. 2018.
  28. How Attentive are Graph Attention Networks? arXiv:2105.14491, 2021.
  29. Graph U-Nets. arXiv:1905.05178, 2019.
  30. An End-to-End Deep Learning Architecture for Graph Classification. 2018.
  31. Hierarchical Graph Representation Learning with Differentiable Pooling. 2018.
  32. Self-Attention Graph Pooling. arXiv:1904.08082, 2019.
  33. Order Matters: Sequence to Sequence for Sets. arXiv:1511.06391, 2015.
  34. Geom-GCN: Geometric Graph Convolutional Networks. arXiv:2002.05287.
  35. MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing. arXiv, 2019.
  36. Beyond Homophily in Graph Neural Networks. arXiv:2006.11468, 2020.
  37. Adaptive Universal Generalized PageRank Graph Neural Network. 2020.
  38. LG-GNN: Local-Global Adaptive Graph Neural Network for Modeling ... (PDF).
  39. SIGN: Scalable Inception Graph Neural Networks. arXiv:2004.11198, 2020.
  40. Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks. arXiv:1905.07953, 2019.
  41. Graph Convolutional Neural Networks for Web-Scale Recommender Systems. 2018.
  42. LeapGNN: Accelerating Distributed GNN Training Leveraging ... (PDF), 2025.
  43. DistGNN-MB: Distributed Large-Scale Graph Neural ... arXiv:2211.06385, 2022.
  44. How Powerful Are Graph Neural Networks? arXiv (PDF), 2019.
  45. A Fair Comparison of Graph Neural Networks for Graph Classification. arXiv (PDF).
  46. Hierarchical Graph Representation Learning with Differentiable Pooling. arXiv (PDF), 2019.
  47. Self-Attention Graph Pooling. arXiv (PDF).