
Commit b425b48

Updated README to include mlsys note

1 parent 7273f45 commit b425b48

2 files changed: 6 additions & 0 deletions


.gitignore

Lines changed: 4 additions & 0 deletions
@@ -19,3 +19,7 @@ examples/cagnet_outputs/
 *.out
 examples/*.nsys-rep
 examples/*.ncu-rep
+cagnet/
+examples/
+slurm_outputs/
+tests/

README.md

Lines changed: 2 additions & 0 deletions
@@ -1,5 +1,7 @@
 # CAGNET: Communication-Avoiding Graph Neural nETworks
 
+This branch contains implementations for CAGNET's full-batch training pipeline (SC'20). For CAGNET's minibatch training pipeline (MLSys'24), please refer to the `distributed-sampling` branch.
+
 ## Description
 
 CAGNET is a family of parallel algorithms for training GNNs that can asymptotically reduce communication compared to previous parallel GNN training methods. CAGNET algorithms are based on 1D, 1.5D, 2D, and 3D sparse-dense matrix multiplication, and are implemented with `torch.distributed` on GPU-equipped clusters. We also implement these parallel algorithms on a 2-layer GCN.
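To illustrate the idea behind the 1D variant described above, here is a minimal single-machine NumPy sketch (not CAGNET's actual `torch.distributed` code): each "process" owns a row block of the adjacency matrix and of the feature matrix, and the feature blocks are gathered before the local multiply, mirroring an all-gather communication step. All names here are illustrative assumptions, and the toy adjacency matrix is stored densely for simplicity.

```python
import numpy as np

def spmm_1d(A_blocks, H_blocks):
    """Compute the row blocks of A @ H from per-'process' blocks.

    A_blocks: list of row blocks of the adjacency matrix A.
    H_blocks: list of matching row blocks of the feature matrix H.
    """
    # Communication step: gather all feature blocks into the full H
    # (stands in for a distributed all-gather).
    H_full = np.vstack(H_blocks)
    # Local computation: each process multiplies its row block of A by H.
    return [A_p @ H_full for A_p in A_blocks]

rng = np.random.default_rng(0)
n, f, P = 8, 4, 2                               # nodes, features, "processes"
A = (rng.random((n, n)) < 0.3).astype(float)    # toy adjacency (dense stand-in)
H = rng.random((n, f))

A_blocks = np.array_split(A, P)                 # 1D row partition of A
H_blocks = np.array_split(H, P)                 # matching partition of H
out = np.vstack(spmm_1d(A_blocks, H_blocks))
assert np.allclose(out, A @ H)                  # matches the serial product
```

In this 1D layout every process communicates all of H each layer; the 1.5D, 2D, and 3D variants partition the matrices along more dimensions to reduce that communication volume.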
