Boost Graph Neural Network (GNN) Training Speed on Intel CPUs
Graph Neural Network (GNN) Training Accelerated on Intel CPUs with Hybrid Partitioning and Fused Sampling
A novel graph sampling technique dubbed "fused sampling," developed by Intel Labs and AIA, can speed up the training of Graph Neural Networks (GNNs) on CPUs by up to two times.
To speed up sampling-based training, the graph is often divided among many machines; each machine is then responsible for producing its own graph samples and using them to train the GNN model.
Popular GNN libraries such as DGL implement a typical sampling pipeline consisting of several phases, each of which produces intermediate tensors that must be written to and subsequently read from memory.
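For illustration, below is a minimal sketch of such a multi-phase sampling loop, assuming DGL 0.8+ with its NeighborSampler and DataLoader APIs. The toy graph, feature name, fan-outs, and batch size are placeholders, not values from the article.

    import dgl
    import torch

    # Placeholder graph and node features standing in for a real dataset.
    g = dgl.rand_graph(10_000, 100_000)
    g.ndata["feat"] = torch.randn(g.num_nodes(), 64)
    train_nids = torch.arange(1_000)

    # Each iteration runs several phases (neighbor selection, subgraph
    # construction, feature slicing), each producing intermediate tensors.
    sampler = dgl.dataloading.NeighborSampler([15, 10])  # fan-out per GNN layer
    loader = dgl.dataloading.DataLoader(
        g, train_nids, sampler, batch_size=1024, shuffle=True)

    for input_nodes, output_nodes, blocks in loader:
        batch_feats = g.ndata["feat"][input_nodes]  # gather input features
        # the GNN forward/backward pass on `blocks` would go here

Each yielded mini-batch materializes the intermediate tensors mentioned above, which is the memory traffic that fused sampling aims to reduce.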
Hybrid partitioning
When a graph becomes too large to fit in the memory of a single training machine, it is often divided among many machines.
The graph data each machine needs to train the GNN model is then requested and delivered via inter-machine communication.
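As a rough sketch of this setup, DGL provides dgl.distributed.partition_graph to split a graph into per-machine partitions ahead of training; the graph, partition count, and output path below are illustrative assumptions, not details from the article.

    import dgl
    import torch

    # Placeholder graph standing in for a dataset too large for one machine.
    g = dgl.rand_graph(100_000, 1_000_000)
    g.ndata["feat"] = torch.randn(g.num_nodes(), 64)

    # Split the graph into 4 partitions, one per training machine. At training
    # time each machine loads its own partition and fetches any remote node or
    # edge data it needs over the network.
    dgl.distributed.partition_graph(
        g, graph_name="toy_graph", num_parts=4, out_path="partitions/")

    # In a running cluster, each trainer would then call
    # dgl.distributed.initialize(...) and open the partitioned graph with
    # dgl.distributed.DistGraph("toy_graph", ...), which handles the
    # inter-machine communication transparently.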