diff --git a/.github/citation/citation.json b/.github/citation/citation.json
index 5f6eeda..cfddac8 100644
--- a/.github/citation/citation.json
+++ b/.github/citation/citation.json
@@ -1 +1 @@
-{"Binary Graph Neural Networks": {"citation": 57, "last update": "2024-11-18"}, "Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis": {"citation": 45, "last update": "2024-11-18"}, "GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms": {"citation": 153, "last update": "2024-11-18"}, "Seastar: Vertex-Centric Programming for Graph Neural Networks": {"citation": 56, "last update": "2024-11-18"}, "PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm": {"citation": 52, "last update": "2024-11-18"}, "AGL: A Scalable System for Industrial-purpose Graph Machine Learning": {"citation": 128, "last update": "2024-11-18"}, "GE-SpMM: General-purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks": {"citation": 125, "last update": "2024-11-18"}, "Graphiler: Optimizing Graph Neural Networks with Message Passing Data Flow Graph": {"citation": 24, "last update": "2024-11-18"}, "DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks": {"citation": 126, "last update": "2024-11-18"}, "DIG: A Turnkey Library for Diving into Graph Deep Learning Research": {"citation": 98, "last update": "2024-11-18"}, "Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures": {"citation": 42, "last update": "2024-11-18"}, "Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators": {"citation": 9, "last update": "2024-11-18"}, "Understanding and Bridging the Gaps in Current GNN Performance Optimizations": {"citation": 83, "last update": "2024-11-18"}, "MGG: Accelerating Graph Neural Networks with Fine-Grained Intra-Kernel Communication-Computation Pipelining on Multi-GPU Platforms": {"citation": 22, "last update": "2024-11-18"}, "GNNPipe: Accelerating Distributed Full-Graph GNN Training with Pipelined Model Parallelism": {"citation": 1, "last update": "2024-11-18"}, "GNN-PIM: A Processing-in-Memory Architecture for Graph Neural Networks": {"citation": 23, "last update": "2024-11-18"}, "Sancus: Staleness-Aware Communication-Avoiding Full-Graph Decentralized Training in Large-Scale Graph Neural Networks": {"citation": 66, "last update": "2024-11-18"}, "Distributed Graph Neural Network Training: A Survey": {"citation": 41, "last update": "2024-11-18"}, "Bi-GCN: Binary Graph Convolutional Network": {"citation": 57, "last update": "2024-11-18"}, "Sequential Aggregation and Rematerialization: Distributed Full-batch Training of Graph Neural Networks on Large Graphs": {"citation": 24, "last update": "2024-11-18"}, "PCGCN: Partition-Centric Processing for Accelerating Graph Convolutional Network": {"citation": 47, "last update": "2024-11-19"}, "Hardware Acceleration of Graph Neural Networks": {"citation": 141, "last update": "2024-11-19"}, "Distributed Hybrid CPU and GPU training for Graph Neural Networks on Billion-Scale Graphs": {"citation": 36, "last update": "2024-11-19"}, "DGCL: An Efficient Communication Library for Distributed GNN Training": {"citation": 91, "last update": "2024-11-19"}, "TLPGNN: A Lightweight Two-Level Parallelism Paradigm for Graph Neural Network Computation on GPU": {"citation": 21, "last update": "2024-11-19"}, "GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings": {"citation": 163, "last update": "2024-11-19"}, "I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization": {"citation": 113, "last update": "2024-11-19"}, "ByteGNN: Efficient Graph Neural Network Training at Large Scale": {"citation": 76, "last update": "2024-11-19"}, "AWB-GCN: A Graph Convolutional Network Accelerator with Runtime Workload Rebalancing": {"citation": 289, "last update": "2024-11-19"}, "GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs": {"citation": 160, "last update": "2024-11-19"}, "DyGNN: Algorithm and Architecture Support of vertex Dynamic Pruning for Graph Neural Networks": {"citation": 31, "last update": "2024-11-19"}, "BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing": {"citation": 70, "last update": "2024-11-19"}, "EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks": {"citation": 191, "last update": "2024-11-19"}, "Reducing Communication in Graph Neural Network Training": {"citation": 117, "last update": "2024-11-19"}, "fuseGNN: Accelerating Graph Convolutional Neural Network Training on GPGPU": {"citation": 23, "last update": "2024-11-19"}, "A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware": {"citation": 22, "last update": "2024-11-19"}, "A Comprehensive Survey on Distributed Training of Graph Neural Networks": {"citation": 29, "last update": "2024-11-19"}, "QGTC: Accelerating Quantized Graph Neural Networks via GPU Tensor Core": {"citation": 44, "last update": "2024-11-19"}, "BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices": {"citation": 36, "last update": "2024-11-19"}, "Learned Low Precision Graph Neural Networks": {"citation": 39, "last update": "2024-11-19"}, "2PGraph: Accelerating GNN Training over Large Graphs on GPU Clusters": {"citation": 17, "last update": "2024-11-19"}, "Communication-Free Distributed GNN Training with Vertex Cut": {"citation": 1, "last update": "2024-11-19"}, "MariusGNN: Resource-Efficient Out-of-Core Training of Graph Neural Networks": {"citation": 35, "last update": "2024-11-19"}, "Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc": {"citation": 251, "last update": "2024-11-19"}, "Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture": {"citation": 67, "last update": "2024-11-19"}, "GraphFM: Improving Large-Scale GNN Training via Feature Momentum": {"citation": 29, "last update": "2024-11-19"}, "Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads": {"citation": 148, "last update": "2024-11-20"}, "PaGraph: Scaling GNN Training on Large Graphs via Computation-aware Caching": {"citation": 169, "last update": "2024-11-20"}, "NeuGraph: Parallel Deep Neural Network Computation on Large Graphs": {"citation": 281, "last update": "2024-11-20"}, "Computing Graph Neural Networks: A Survey from Algorithms to Accelerators": {"citation": 258, "last update": "2024-11-20"}, "ZIPPER: Exploiting Tile- and Operator-level Parallelism for General and Scalable Graph Neural Network Acceleration": {"citation": 5, "last update": "2024-11-20"}, "GCNAX: A Flexible and Energy-efficient Accelerator for Graph Convolutional Neural Networks": {"citation": 136, "last update": "2024-11-20"}, "DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs": {"citation": 136, "last update": "2024-11-20"}, "SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization": {"citation": 49, "last update": "2024-11-20"}, "CogDL: A Toolkit for Deep Learning on Graphs": {"citation": 2, "last update": "2024-11-20"}, "AliGraph: A Comprehensive Graph Neural Network Platform": {"citation": 316, "last update": "2024-11-20"}, "BeaconGNN: Large-Scale GNN Acceleration with Out-of-Order Streaming In-Storage Computing": {"citation": 3, "last update": "2024-11-20"}, "In situ neighborhood sampling for large-scale GNN training": {"citation": 0, "last update": "2024-11-20"}, "Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching": {"citation": 27, "last update": "2024-11-20"}, "SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures": {"citation": 43, "last update": "2024-11-20"}, "Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks": {"citation": 1291, "last update": "2024-11-20"}, "Fast Graph Representation Learning with PyTorch Geometric": {"citation": 4880, "last update": "2024-11-20"}, "Relational Inductive Biases, Deep Learning, and Graph Networks": {"citation": 3893, "last update": "2024-11-20"}, "FlexGraph: A Flexible and Efficient Distributed Framework for GNN Training": {"citation": 64, "last update": "2024-11-20"}, "TigerGraph: A Native MPP Graph Database": {"citation": 81, "last update": "2024-11-20"}, "Degree-Quant: Quantization-Aware Training for Graph Neural Networks": {"citation": 192, "last update": "2024-11-20"}, "BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling": {"citation": 76, "last update": "2024-11-20"}, "Binarized Graph Neural Network": {"citation": 33, "last update": "2024-11-20"}, "GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration": {"citation": 17, "last update": "2024-11-20"}, "GNNLab: A Factored System for Sample-based GNN Training over GPUs": {"citation": 80, "last update": "2024-11-20"}, "StreamGCN: Accelerating Graph Convolutional Networks with Streaming Processing": {"citation": 5, "last update": "2024-11-20"}, "FusedMM: A Unified SDDMM-SpMM Kernel for Graph Embedding and Graph Neural Networks": {"citation": 47, "last update": "2024-11-20"}, "$P^3$: Distributed Deep Graph Learning at Scale": {"citation": 157, "last update": "2024-11-20"}, "G$^3$: When Graph Neural Networks Meet Parallel Graph Processing Systems on GPUs": {"citation": 52, "last update": "2024-11-20"}, "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication": {"citation": 69, "last update": "2024-11-20"}, "EPQuant: A Graph Neural Network Compression Approach Based on Product Quantization": {"citation": 11, "last update": "2024-11-21"}, "Graph Neural Networks in TensorFlow and Keras with Spektral": {"citation": 315, "last update": "2024-11-21"}, "Efficient Scaling of Dynamic Graph Neural Networks": {"citation": 26, "last update": "2024-11-21"}, "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining": {"citation": 57, "last update": "2024-11-21"}, "Hardware Acceleration of Large Scale GCN Inference": {"citation": 84, "last update": "2024-11-21"}, "Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks": {"citation": 34, "last update": "2024-11-21"}, "TARe: Task-Adaptive in-situ ReRAM Computing for Graph Learning": {"citation": 14, "last update": "2024-11-21"}, "GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design": {"citation": 53, "last update": "2024-11-21"}, "Hyperscale FPGA-as-a-service architecture for large-scale distributed graph neural network": {"citation": 23, "last update": "2024-11-21"}, "Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective": {"citation": 50, "last update": "2024-11-21"}, "DRGN: a dynamically reconfigurable accelerator for graph neural networks": {"citation": 3, "last update": "2024-11-21"}, "Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs": {"citation": 37, "last update": "2024-11-21"}, "Efficient Data Loader for Fast Sampling-Based GNN Training on Large Graphs": {"citation": 36, "last update": "2024-11-21"}, "Graphite: Optimizing Graph Neural Networks on CPUs Through Cooperative Software-Hardware Techniques": {"citation": 32, "last update": "2024-11-21"}, "Rubik: A Hierarchical Architecture for Efficient Graph Learning": {"citation": 14, "last update": "2024-11-21"}, "HyGCN: A GCN Accelerator with Hybrid Architecture": {"citation": 347, "last update": "2024-11-21"}, "GNNIE: GNN Inference Engine with Load-balancing and Graph-specific Caching": {"citation": 17, "last update": "2024-11-21"}, "FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems": {"citation": 95, "last update": "2024-11-21"}, "EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression": {"citation": 60, "last update": "2024-11-21"}, "G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency": {"citation": 32, "last update": "2024-11-21"}, "GRIP: A Graph Neural Network Accelerator Architecture": {"citation": 99, "last update": "2024-11-21"}, "GCNear: A Hybrid Architecture for Efficient GCN Training with Near-Memory Processing": {"citation": 10, "last update": "2024-11-21"}, "FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming": {"citation": 12, "last update": "2024-11-21"}, "ReGraphX: NoC-enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks": {"citation": 35, "last update": "2024-11-21"}, "GIST: Distributed Training for Large-Scale Graph Convolutional Networks": {"citation": 14, "last update": "2024-11-21"}}
\ No newline at end of file
+{"AWB-GCN: A Graph Convolutional Network Accelerator with Runtime Workload Rebalancing": {"citation": 289, "last update": "2024-11-19"}, "GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs": {"citation": 160, "last update": "2024-11-19"}, "DyGNN: Algorithm and Architecture Support of vertex Dynamic Pruning for Graph Neural Networks": {"citation": 31, "last update": "2024-11-19"}, "BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing": {"citation": 70, "last update": "2024-11-19"}, "EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks": {"citation": 191, "last update": "2024-11-19"}, "Reducing Communication in Graph Neural Network Training": {"citation": 117, "last update": "2024-11-19"}, "fuseGNN: Accelerating Graph Convolutional Neural Network Training on GPGPU": {"citation": 23, "last update": "2024-11-19"}, "A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware": {"citation": 22, "last update": "2024-11-19"}, "A Comprehensive Survey on Distributed Training of Graph Neural Networks": {"citation": 29, "last update": "2024-11-19"}, "QGTC: Accelerating Quantized Graph Neural Networks via GPU Tensor Core": {"citation": 44, "last update": "2024-11-19"}, "BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices": {"citation": 36, "last update": "2024-11-19"}, "Learned Low Precision Graph Neural Networks": {"citation": 39, "last update": "2024-11-19"}, "2PGraph: Accelerating GNN Training over Large Graphs on GPU Clusters": {"citation": 17, "last update": "2024-11-19"}, "Communication-Free Distributed GNN Training with Vertex Cut": {"citation": 1, "last update": "2024-11-19"}, "MariusGNN: Resource-Efficient Out-of-Core Training of Graph Neural Networks": {"citation": 35, "last update": "2024-11-19"}, "Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc": {"citation": 251, "last update": "2024-11-19"}, "Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture": {"citation": 67, "last update": "2024-11-19"}, "GraphFM: Improving Large-Scale GNN Training via Feature Momentum": {"citation": 29, "last update": "2024-11-19"}, "Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads": {"citation": 148, "last update": "2024-11-20"}, "PaGraph: Scaling GNN Training on Large Graphs via Computation-aware Caching": {"citation": 169, "last update": "2024-11-20"}, "NeuGraph: Parallel Deep Neural Network Computation on Large Graphs": {"citation": 281, "last update": "2024-11-20"}, "Computing Graph Neural Networks: A Survey from Algorithms to Accelerators": {"citation": 258, "last update": "2024-11-20"}, "ZIPPER: Exploiting Tile- and Operator-level Parallelism for General and Scalable Graph Neural Network Acceleration": {"citation": 5, "last update": "2024-11-20"}, "GCNAX: A Flexible and Energy-efficient Accelerator for Graph Convolutional Neural Networks": {"citation": 136, "last update": "2024-11-20"}, "DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs": {"citation": 136, "last update": "2024-11-20"}, "SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization": {"citation": 49, "last update": "2024-11-20"}, "CogDL: A Toolkit for Deep Learning on Graphs": {"citation": 2, "last update": "2024-11-20"}, "AliGraph: A Comprehensive Graph Neural Network Platform": {"citation": 316, "last update": "2024-11-20"}, "BeaconGNN: Large-Scale GNN Acceleration with Out-of-Order Streaming In-Storage Computing": {"citation": 3, "last update": "2024-11-20"}, "In situ neighborhood sampling for large-scale GNN training": {"citation": 0, "last update": "2024-11-20"}, "Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching": {"citation": 27, "last update": "2024-11-20"}, "SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures": {"citation": 43, "last update": "2024-11-20"}, "Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks": {"citation": 1291, "last update": "2024-11-20"}, "Fast Graph Representation Learning with PyTorch Geometric": {"citation": 4880, "last update": "2024-11-20"}, "Relational Inductive Biases, Deep Learning, and Graph Networks": {"citation": 3893, "last update": "2024-11-20"}, "FlexGraph: A Flexible and Efficient Distributed Framework for GNN Training": {"citation": 64, "last update": "2024-11-20"}, "TigerGraph: A Native MPP Graph Database": {"citation": 81, "last update": "2024-11-20"}, "Degree-Quant: Quantization-Aware Training for Graph Neural Networks": {"citation": 192, "last update": "2024-11-20"}, "BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling": {"citation": 76, "last update": "2024-11-20"}, "Binarized Graph Neural Network": {"citation": 33, "last update": "2024-11-20"}, "GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration": {"citation": 17, "last update": "2024-11-20"}, "GNNLab: A Factored System for Sample-based GNN Training over GPUs": {"citation": 80, "last update": "2024-11-20"}, "StreamGCN: Accelerating Graph Convolutional Networks with Streaming Processing": {"citation": 5, "last update": "2024-11-20"}, "FusedMM: A Unified SDDMM-SpMM Kernel for Graph Embedding and Graph Neural Networks": {"citation": 47, "last update": "2024-11-20"}, "$P^3$: Distributed Deep Graph Learning at Scale": {"citation": 157, "last update": "2024-11-20"}, "G$^3$: When Graph Neural Networks Meet Parallel Graph Processing Systems on GPUs": {"citation": 52, "last update": "2024-11-20"}, "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication": {"citation": 69, "last update": "2024-11-20"}, "EPQuant: A Graph Neural Network Compression Approach Based on Product Quantization": {"citation": 11, "last update": "2024-11-21"}, "Graph Neural Networks in TensorFlow and Keras with Spektral": {"citation": 315, "last update": "2024-11-21"}, "Efficient Scaling of Dynamic Graph Neural Networks": {"citation": 26, "last update": "2024-11-21"}, "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining": {"citation": 57, "last update": "2024-11-21"}, "Hardware Acceleration of Large Scale GCN Inference": {"citation": 84, "last update": "2024-11-21"}, "Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks": {"citation": 34, "last update": "2024-11-21"}, "TARe: Task-Adaptive in-situ ReRAM Computing for Graph Learning": {"citation": 14, "last update": "2024-11-21"}, "GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design": {"citation": 53, "last update": "2024-11-21"}, "Hyperscale FPGA-as-a-service architecture for large-scale distributed graph neural network": {"citation": 23, "last update": "2024-11-21"}, "Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective": {"citation": 50, "last update": "2024-11-21"}, "DRGN: a dynamically reconfigurable accelerator for graph neural networks": {"citation": 3, "last update": "2024-11-21"}, "Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs": {"citation": 37, "last update": "2024-11-21"}, "Efficient Data Loader for Fast Sampling-Based GNN Training on Large Graphs": {"citation": 36, "last update": "2024-11-21"}, "Graphite: Optimizing Graph Neural Networks on CPUs Through Cooperative Software-Hardware Techniques": {"citation": 32, "last update": "2024-11-21"}, "Rubik: A Hierarchical Architecture for Efficient Graph Learning": {"citation": 14, "last update": "2024-11-21"}, "HyGCN: A GCN Accelerator with Hybrid Architecture": {"citation": 347, "last update": "2024-11-21"}, "GNNIE: GNN Inference Engine with Load-balancing and Graph-specific Caching": {"citation": 17, "last update": "2024-11-21"}, "FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems": {"citation": 95, "last update": "2024-11-21"}, "EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression": {"citation": 60, "last update": "2024-11-21"}, "G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency": {"citation": 32, "last update": "2024-11-21"}, "GRIP: A Graph Neural Network Accelerator Architecture": {"citation": 99, "last update": "2024-11-21"}, "GCNear: A Hybrid Architecture for Efficient GCN Training with Near-Memory Processing": {"citation": 10, "last update": "2024-11-21"}, "FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming": {"citation": 12, "last update": "2024-11-21"}, "ReGraphX: NoC-enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks": {"citation": 35, "last update": "2024-11-21"}, "GIST: Distributed Training for Large-Scale Graph Convolutional Networks": {"citation": 14, "last update": "2024-11-21"}, "Binary Graph Neural Networks": {"citation": 57, "last update": "2024-11-22"}, "Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis": {"citation": 45, "last update": "2024-11-22"}, "GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms": {"citation": 155, "last update": "2024-11-22"}, "Seastar: Vertex-Centric Programming for Graph Neural Networks": {"citation": 56, "last update": "2024-11-22"}, "PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm": {"citation": 52, "last update": "2024-11-22"}, "AGL: A Scalable System for Industrial-purpose Graph Machine Learning": {"citation": 128, "last update": "2024-11-22"}, "GE-SpMM: General-purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks": {"citation": 125, "last update": "2024-11-22"}, "Graphiler: Optimizing Graph Neural Networks with Message Passing Data Flow Graph": {"citation": 24, "last update": "2024-11-22"}, "DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks": {"citation": 126, "last update": "2024-11-22"}, "DIG: A Turnkey Library for Diving into Graph Deep Learning Research": {"citation": 98, "last update": "2024-11-22"}, "Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures": {"citation": 44, "last update": "2024-11-22"}, "Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators": {"citation": 9, "last update": "2024-11-22"}, "Understanding and Bridging the Gaps in Current GNN Performance Optimizations": {"citation": 83, "last update": "2024-11-22"}, "MGG: Accelerating Graph Neural Networks with Fine-Grained Intra-Kernel Communication-Computation Pipelining on Multi-GPU Platforms": {"citation": 22, "last update": "2024-11-22"}, "GNNPipe: Accelerating Distributed Full-Graph GNN Training with Pipelined Model Parallelism": {"citation": 1, "last update": "2024-11-22"}, "GNN-PIM: A Processing-in-Memory Architecture for Graph Neural Networks": {"citation": 23, "last update": "2024-11-22"}, "Sancus: Staleness-Aware Communication-Avoiding Full-Graph Decentralized Training in Large-Scale Graph Neural Networks": {"citation": 66, "last update": "2024-11-22"}, "Distributed Graph Neural Network Training: A Survey": {"citation": 41, "last update": "2024-11-22"}, "Bi-GCN: Binary Graph Convolutional Network": {"citation": 57, "last update": "2024-11-22"}, "Sequential Aggregation and Rematerialization: Distributed Full-batch Training of Graph Neural Networks on Large Graphs": {"citation": 24, "last update": "2024-11-22"}, "PCGCN: Partition-Centric Processing for Accelerating Graph Convolutional Network": {"citation": 47, "last update": "2024-11-22"}, "Hardware Acceleration of Graph Neural Networks": {"citation": 142, "last update": "2024-11-22"}, "Distributed Hybrid CPU and GPU training for Graph Neural Networks on Billion-Scale Graphs": {"citation": 36, "last update": "2024-11-22"}, "DGCL: An Efficient Communication Library for Distributed GNN Training": {"citation": 91, "last update": "2024-11-22"}, "TLPGNN: A Lightweight Two-Level Parallelism Paradigm for Graph Neural Network Computation on GPU": {"citation": 21, "last update": "2024-11-22"}, "GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings": {"citation": 165, "last update": "2024-11-22"}, "I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization": {"citation": 114, "last update": "2024-11-22"}, "ByteGNN: Efficient Graph Neural Network Training at Large Scale": {"citation": 76, "last update": "2024-11-22"}}
\ No newline at end of file
diff --git a/README.md b/README.md
index f07a10a..8f7cfae 100644
--- a/README.md
+++ b/README.md
@@ -99,7 +99,7 @@ A list of awesome systems for graph neural network (GNN). If you have any commen
 |VLDB 2022|Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching|Seoul National University| [[paper]](https://dl.acm.org/doi/10.14778/3551793.3551819)![Scholar citations](https://img.shields.io/badge/Citations-27-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)| [[code]](https://github.com/SNU-ARC/Ginex)![GitHub stars](https://img.shields.io/github/stars/SNU-ARC/Ginex.svg?logo=github&label=Stars)|
 |ISCA 2022|SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures|KAIST| [[paper]](https://dl.acm.org/doi/10.1145/3470496.3527391)![Scholar citations](https://img.shields.io/badge/Citations-43-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |ICML 2022|GraphFM: Improving Large-Scale GNN Training via Feature Momentum|TAMU| [[paper]](https://arxiv.org/abs/2206.07161)![Scholar citations](https://img.shields.io/badge/Citations-29-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)| [[code]](https://github.com/divelab/DIG/tree/dig-stable/dig/lsgraph)|
-|ICML 2021|GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings|TU Dortmund University| [[paper]](https://arxiv.org/abs/2106.05609)![Scholar citations](https://img.shields.io/badge/Citations-163-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)| [[code]](https://github.com/rusty1s/pyg_autoscale)![GitHub stars](https://img.shields.io/github/stars/rusty1s/pyg_autoscale.svg?logo=github&label=Stars)|
+|ICML 2021|GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings|TU Dortmund University| [[paper]](https://arxiv.org/abs/2106.05609)![Scholar citations](https://img.shields.io/badge/Citations-165-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)| [[code]](https://github.com/rusty1s/pyg_autoscale)![GitHub stars](https://img.shields.io/github/stars/rusty1s/pyg_autoscale.svg?logo=github&label=Stars)|
 |OSDI 2021|GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs|UCSB| [[paper]](https://www.usenix.org/system/files/osdi21-wang-yuke.pdf)![Scholar citations](https://img.shields.io/badge/Citations-160-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)| [[code]](https://github.com/YukeWang96/OSDI21_AE)![GitHub stars](https://img.shields.io/github/stars/YukeWang96/OSDI21_AE.svg?logo=github&label=Stars)|
 ### Quantized GNNs
 | Venue | Title | Affiliation |       Link       |   Source   |
@@ -133,7 +133,7 @@ A list of awesome systems for graph neural network (GNN). If you have any commen
 |arXiv 2021|GCNear: A Hybrid Architecture for Efficient GCN Training with Near-Memory Processing|PKU| [[paper]](https://arxiv.org/abs/2111.00680)![Scholar citations](https://img.shields.io/badge/Citations-10-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |DATE 2021|ReGraphX: NoC-enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks|WSU| [[paper]](https://arxiv.org/abs/2102.07959)![Scholar citations](https://img.shields.io/badge/Citations-35-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |TCAD 2021|Rubik: A Hierarchical Architecture for Efficient Graph Learning|Chinese Academy of Sciences| [[paper]](https://arxiv.org/pdf/2009.12495.pdf)![Scholar citations](https://img.shields.io/badge/Citations-14-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
-|FPGA 2020|GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms|USC| [[paper]](https://dl.acm.org/doi/pdf/10.1145/3373087.3375312)![Scholar citations](https://img.shields.io/badge/Citations-153-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)| [[code]](https://github.com/GraphSAINT/GraphACT)![GitHub stars](https://img.shields.io/github/stars/GraphSAINT/GraphACT.svg?logo=github&label=Stars)|
+|FPGA 2020|GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms|USC| [[paper]](https://dl.acm.org/doi/pdf/10.1145/3373087.3375312)![Scholar citations](https://img.shields.io/badge/Citations-155-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)| [[code]](https://github.com/GraphSAINT/GraphACT)![GitHub stars](https://img.shields.io/github/stars/GraphSAINT/GraphACT.svg?logo=github&label=Stars)|
 ### GNN Inference Accelerators
 | Venue | Title | Affiliation |       Link       |   Source   |
 | :---: | :---: | :---------: | :---: | :----: |
@@ -142,20 +142,20 @@ A list of awesome systems for graph neural network (GNN). If you have any commen
 |IPDPS 2022|Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators|GaTech| [[paper]](https://arxiv.org/abs/2103.07977)![Scholar citations](https://img.shields.io/badge/Citations-9-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)| [[code]](https://github.com/stonne-simulator/omega)![GitHub stars](https://img.shields.io/github/stars/stonne-simulator/omega.svg?logo=github&label=Stars)|
 |arXiv 2022|FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming|GaTech| [[paper]](https://arxiv.org/abs/2204.13103)![Scholar citations](https://img.shields.io/badge/Citations-12-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |CICC 2022|StreamGCN: Accelerating Graph Convolutional Networks with Streaming Processing|UCLA| [[paper]](https://web.cs.ucla.edu/~atefehsz/publication/StreamGCN-CICC22.pdf)![Scholar citations](https://img.shields.io/badge/Citations-5-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
-|HPCA 2022|Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures|HUST| [[paper]](https://ieeexplore.ieee.org/document/9773267)![Scholar citations](https://img.shields.io/badge/Citations-42-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
+|HPCA 2022|Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures|HUST| [[paper]](https://ieeexplore.ieee.org/document/9773267)![Scholar citations](https://img.shields.io/badge/Citations-44-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |HPCA 2022|GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design|Rice, PNNL| [[paper]](https://arxiv.org/abs/2112.11594)![Scholar citations](https://img.shields.io/badge/Citations-53-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)| [[code]](https://github.com/RICE-EIC/GCoD)![GitHub stars](https://img.shields.io/github/stars/RICE-EIC/GCoD.svg?logo=github&label=Stars)|
 |arXiv 2022|GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration|GaTech| [[paper]](https://arxiv.org/abs/2201.08475)![Scholar citations](https://img.shields.io/badge/Citations-17-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |DAC 2021|DyGNN: Algorithm and Architecture Support of vertex Dynamic Pruning for Graph Neural Networks|Hunan University| [[paper]](https://ieeexplore.ieee.org/document/9586298)![Scholar citations](https://img.shields.io/badge/Citations-31-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |DAC 2021|BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices|PKU| [[paper]](https://arxiv.org/abs/2104.06214)![Scholar citations](https://img.shields.io/badge/Citations-36-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |DAC 2021|TARe: Task-Adaptive in-situ ReRAM Computing for Graph Learning|Chinese Academy of Sciences| [[paper]](https://ieeexplore.ieee.org/abstract/document/9586193)![Scholar citations](https://img.shields.io/badge/Citations-14-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |ICCAD 2021|G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency|Rice| [[paper]](https://arxiv.org/abs/2109.08983)![Scholar citations](https://img.shields.io/badge/Citations-32-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
-|MICRO 2021|I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization|PNNL| [[paper]](https://dl.acm.org/doi/pdf/10.1145/3466752.3480113)![Scholar citations](https://img.shields.io/badge/Citations-113-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
+|MICRO 2021|I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization|PNNL| [[paper]](https://dl.acm.org/doi/pdf/10.1145/3466752.3480113)![Scholar citations](https://img.shields.io/badge/Citations-114-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |arXiv 2021|ZIPPER: Exploiting Tile- and Operator-level Parallelism for General and Scalable Graph Neural Network Acceleration|SJTU| [[paper]](https://arxiv.org/abs/2107.08709)![Scholar citations](https://img.shields.io/badge/Citations-5-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |TComp 2021|EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks|Chinese Academy of Sciences| [[paper]](https://arxiv.org/abs/1909.00155)![Scholar citations](https://img.shields.io/badge/Citations-191-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |HPCA 2021|GCNAX: A Flexible and Energy-efficient Accelerator for Graph Convolutional Neural Networks|GWU| [[paper]](https://ieeexplore.ieee.org/abstract/document/9407104)![Scholar citations](https://img.shields.io/badge/Citations-136-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |APA 2020|GNN-PIM: A Processing-in-Memory Architecture for Graph Neural Networks|PKU| [[paper]](http://115.27.240.201/docs/20200915165942122459.pdf)![Scholar citations](https://img.shields.io/badge/Citations-23-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |ASAP 2020|Hardware Acceleration of Large Scale GCN Inference|USC| [[paper]](https://ieeexplore.ieee.org/document/9153263)![Scholar citations](https://img.shields.io/badge/Citations-84-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
-|DAC 2020|Hardware Acceleration of Graph Neural Networks|UIUC| [[paper]](http://rakeshk.web.engr.illinois.edu/dac20.pdf)![Scholar citations](https://img.shields.io/badge/Citations-141-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
+|DAC 2020|Hardware Acceleration of Graph Neural Networks|UIUC| [[paper]](http://rakeshk.web.engr.illinois.edu/dac20.pdf)![Scholar citations](https://img.shields.io/badge/Citations-142-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |MICRO 2020|AWB-GCN: A Graph Convolutional Network Accelerator with Runtime Workload Rebalancing|PNNL| [[paper]](https://ieeexplore.ieee.org/abstract/document/9252000)![Scholar citations](https://img.shields.io/badge/Citations-289-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |arXiv 2020|GRIP: A Graph Neural Network Accelerator Architecture|Stanford| [[paper]](https://arxiv.org/pdf/2007.13828.pdf)![Scholar citations](https://img.shields.io/badge/Citations-99-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
 |HPCA 2020|HyGCN: A GCN Accelerator with Hybrid Architecture|UCSB| [[paper]](https://arxiv.org/pdf/2001.02514.pdf)![Scholar citations](https://img.shields.io/badge/Citations-347-_.svg?logo=google-scholar&labelColor=4f4f4f&color=3388ee)||
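
The two files in this commit move in lockstep: `.github/citation/citation.json` stores each paper's Google Scholar citation count, and each README table row renders that count as a shields.io `Citations-<n>` badge. The sketch below shows one way such a sync could be scripted; it assumes only the JSON schema and badge URL format visible in this diff, and `sync_badges` is a hypothetical name, not the repository's actual updater.

```python
import json
import re
from pathlib import Path

def sync_badges(citation_json: str = ".github/citation/citation.json",
                readme: str = "README.md") -> None:
    """Rewrite each README Citations-<n> badge to match citation.json.

    Assumed citation.json schema (as shown in the diff above):
        {"<paper title>": {"citation": <int>, "last update": "YYYY-MM-DD"}, ...}
    """
    counts = json.loads(Path(citation_json).read_text(encoding="utf-8"))
    # The count sits between "badge/Citations-" and the next "-" in the badge URL.
    badge = re.compile(r"(badge/Citations-)\d+(-)")
    lines = Path(readme).read_text(encoding="utf-8").splitlines(keepends=True)
    for i, line in enumerate(lines):
        for title, meta in counts.items():
            # Paper titles appear verbatim as a table cell: |<title>|
            if f"|{title}|" in line:
                lines[i] = badge.sub(rf"\g<1>{meta['citation']}\g<2>", line)
                break
    Path(readme).write_text("".join(lines), encoding="utf-8")

if __name__ == "__main__":
    sync_badges()
```

Run after refreshing the JSON counts and commit both files together, which matches the shape of this commit: the JSON values and the README badge numbers change in the same revision.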