Multi-view Graph Representation Learning
Easy to digest breakdown of "Contrastive Multi-View Representation Learning on Graphs" by Kaveh Hassani and Amir Hosein Khasahmadi
Created on February 4|Last edited on February 5
NOTE: This report is part of a series on Graph Representation Learning. For a brief overview and survey, please refer to the following articles as well:
- A Brief Introduction to Graph Contrastive Learning — An overview of "Deep Graph Contrastive Representation Learning" that introduces a general formulation for contrastive representation learning on graphs, with interactive W&B visualizations and code samples to follow along.
- GraphCL: Graph Contrastive Learning Framework with Augmentations — The graph contrastive learning framework outlined in "Graph Contrastive Learning with Augmentations" by You et al.
- Multi-view Graph Representation Learning — This article: an easy-to-digest breakdown of "Contrastive Multi-View Representation Learning on Graphs" by Kaveh Hassani and Amir Hosein Khasahmadi.
- Multi-Task Self-Supervised Graph Representation Learning — A brief breakdown of "Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization" [ICLR 2023] by Mingxuan Ju, Tong Zhao, Qianlong Wen, Wenhao Yu, Neil Shah, Yanfang Ye, and Chuxu Zhang.
We know from self-supervised learning in other modalities that contrasting more than two views can boost pre-training performance and lead to better-quality representations. But does the same hold for graphs?
The authors of "Contrastive Multi-View Representation Learning on Graphs" set out to answer this question and find that, unlike in other domains, increasing the number of augmented views does not improve performance. Instead, the best results come from contrasting encodings of two structural views: one based on first-order neighbors (the adjacency matrix) and one based on graph diffusion.
This technique builds on work by the authors of "Deep Graph Contrastive Representation Learning" and "Graph Contrastive Learning with Augmentations". We assume a basic understanding of Graph Neural Networks; if you'd like a quick refresher, the following article provides links to other great resources to read and learn more!
👨‍🏫 Method

Figure 1: The MVGRL Framework
The MVGRL framework builds on the then state of the art and contrasts encodings from different augmented views of the input graph. Unlike earlier papers discussed in this series, such as GraphCL and GRACE, which contrast encodings of two views at the same scale, MVGRL learns node and graph representations by maximizing the mutual information (MI) between the node representations of one view and the graph representation of another view, and vice versa. The authors show that this cross-scale contrast achieves better results on both node and graph classification tasks than contrasting global or multi-scale encodings.
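Concretely, the paper's objective (notation lightly paraphrased here) maximizes, averaged over graphs $g \in \mathcal{G}$ and for a pair of views $\alpha, \beta$:

$$\max_{\theta, \omega, \phi, \psi} \; \frac{1}{|\mathcal{G}|} \sum_{g \in \mathcal{G}} \frac{1}{|g|} \sum_{i=1}^{|g|} \Big[ \mathrm{MI}\big(\vec{h}_i^{\,\alpha}, \vec{h}_g^{\,\beta}\big) + \mathrm{MI}\big(\vec{h}_i^{\,\beta}, \vec{h}_g^{\,\alpha}\big) \Big]$$

where $\vec{h}_i^{\,\alpha}$ is the representation of node $i$ under view $\alpha$, $\vec{h}_g^{\,\beta}$ is the graph-level representation under view $\beta$, and $\theta, \omega, \phi, \psi$ are the parameters of the two encoders and two projection heads.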
This framework can be summarised as follows:
- Given a graph, we generate multiple correlated augmented views of it. A differentiating factor from GraphCL is that MVGRL applies augmentations only to the structure of the graph, not to the initial node features.
- These views are then passed through separate graph encoders. Another differentiating factor from GraphCL is that in MVGRL the graph encoders are not shared across views.
- The encoded node representations are passed through a readout function (a variant of a graph pooling layer) to obtain graph-level representations, and both node- and graph-level representations are then passed through a shared projection head.
- A contrastive objective is applied via a discriminator, which contrasts node representations from one view with graph representations from the other view, and vice versa.
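The discriminator in the last step can be sketched as a bilinear scorer in the style of Deep InfoMax / DGI; the class and variable names below are illustrative rather than taken from the paper's code, and the random tensors stand in for real encoder outputs:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, n_h):
        super().__init__()
        # Bilinear form scoring (node embedding, graph embedding) pairs
        self.bilinear = nn.Bilinear(n_h, n_h, 1)

    def forward(self, node_emb, graph_emb):
        # Broadcast the graph summary over all nodes and score each pair
        graph_emb = graph_emb.expand_as(node_emb)
        return self.bilinear(node_emb, graph_emb).squeeze(-1)

# Cross-scale, cross-view scoring: nodes of view 1 vs. graph of view 2, and vice versa
disc = Discriminator(64)
lv1, lv2 = torch.randn(10, 64), torch.randn(10, 64)  # node embeddings per view
gv1, gv2 = torch.randn(1, 64), torch.randn(1, 64)    # graph embeddings per view
pos_scores = torch.cat([disc(lv1, gv2), disc(lv2, gv1)])
```

In practice these scores for positive pairs are contrasted against scores for negative pairs (e.g., node embeddings from a corrupted graph) under a noise-contrastive or Jensen-Shannon MI estimator.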
The authors consider two families of augmentations: feature-space augmentations, such as masking node features or adding Gaussian noise, and structure-space augmentations, such as adding or removing edges, sub-sampling, and generating global views using shortest distances or diffusion matrices. They find that transforming the adjacency matrix into a diffusion matrix works best.
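One way to build the diffusion-based view is with Personalized PageRank (PPR) diffusion, which has a closed form. Here is a minimal NumPy sketch, assuming the common convention of self-loops plus symmetric normalization; `alpha` is the teleport probability and the helper name is our own:

```python
import numpy as np

def ppr_diffusion(adj: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    # Add self-loops and symmetrically normalize: A_norm = D^{-1/2} (A + I) D^{-1/2}
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_norm = d_inv_sqrt @ a @ d_inv_sqrt
    # Closed-form PPR diffusion: S = alpha * (I - (1 - alpha) * A_norm)^{-1}
    return alpha * np.linalg.inv(np.eye(adj.shape[0]) - (1 - alpha) * a_norm)

# A 3-node path graph: the sparse adjacency becomes a dense "global" view
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
diff = ppr_diffusion(adj)
```

Note how nodes 0 and 2, which are not directly connected, still receive a nonzero diffusion score: the diffusion matrix encodes multi-hop (global) structure that the adjacency view lacks.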
👨‍💻 Code
The authors released the codebase along with the paper, enabling easy reproducibility. The codebase must be commended for its scalability and readability! The framework is simple to implement; let's look at the official implementation.
```python
class MVGRL(nn.Module):
    def __init__(self, n_in, n_h, num_layers):
        super(MVGRL, self).__init__()
        self.mlp1 = MLP(1 * n_h, n_h)           # projection head for node (local) embeddings
        self.mlp2 = MLP(num_layers * n_h, n_h)  # projection head for graph (global) embeddings
        self.gnn1 = GCN(n_in, n_h, num_layers)  # encoder for the adjacency view
        self.gnn2 = GCN(n_in, n_h, num_layers)  # encoder for the diffusion view

    def forward(self, adj, diff, feat, mask):
        # Encode each view with its own (non-shared) GCN
        lv1, gv1 = self.gnn1(feat, adj, mask)
        lv2, gv2 = self.gnn2(feat, diff, mask)

        # Shared projection heads: mlp1 for node embeddings, mlp2 for graph embeddings
        lv1 = self.mlp1(lv1)
        lv2 = self.mlp1(lv2)
        gv1 = self.mlp2(gv1)
        gv2 = self.mlp2(gv2)

        return lv1, gv1, lv2, gv2

    def embed(self, feat, adj, diff, mask):
        # The final graph representation is the sum of the two views' embeddings
        __, gv1, __, gv2 = self.forward(adj, diff, feat, mask)
        return (gv1 + gv2).detach()
```
📊 Results
The following chart compares the training loss for downstream graph classification, with models pretrained using the MVGRL framework, across three latent dimensions.
🔗 Summary
In this article, we briefly went over the MVGRL (Multi-view Graph Representation Learning) framework as proposed in the paper "Contrastive Multi-View Representation Learning on Graphs" by Kaveh Hassani and Amir Hosein Khasahmadi. The key result from the paper is that adding more perturbed views does not help contrastive methods for graph representation learning; in fact, the best performance is achieved by contrasting encodings from first-order neighbors (the adjacency matrix) and a graph diffusion-based view.
To see the full suite of W&B features, please check out this short 5-minute guide. If you want more reports covering the math and "from-scratch" code implementations, let us know in the comments down below or on our forum ✨!
Check out these other reports on Fully Connected covering other Geometric Deep Learning topics such as Graph Attention Networks.
Introduction to Graph Neural Networks
Interested in Graph Neural Networks and want a roadmap on how to get started? In this article, we'll give a brief outline of the field and share blogs and resources!
A Brief Introduction to Graph Attention Networks
This article provides a brief overview of the Graph Attention Networks architecture, complete with code examples in PyTorch Geometric and interactive visualizations using W&B.
A Brief Introduction to Residual Gated Graph Convolutional Networks
This article provides a brief overview of the Residual Gated Graph Convolutional Network architecture, complete with code examples in PyTorch Geometric and interactive visualizations using W&B.