Graph Neural Networks (GNNs) with Learnable Structural and Positional Representations
An in-depth breakdown of "Graph Neural Networks with Learnable Structural and Positional Representations" by Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio and Xavier Bresson.
Read the Paper | Official Implementation | W&B Implementation | W&B Dashboard
Table of Contents
- 🧐 Motivation
- 🙇♂️ Method
  - Standard Message Passing Graph Neural Networks (MP-GNNs)
  - Positional Encoding
  - ⭐️ ⭐️ Decoupling Position and Structure in MP-GNNs ⭐️ ⭐️
  - Positional Loss
- 📈 Experiments
- ✌️ Conclusion
🧐 Motivation
Most GNNs are designed with a message-passing mechanism that builds node representations by aggregating local neighborhood information. This class of GNNs is fundamentally structural, i.e. a node's representation depends only on the local structure of the graph.
As a consequence, most nodes in a graph do not carry any canonical positional information. This lack of positional information is a major limitation and results in models with low representation power, since they cannot differentiate between simple graph symmetries.

Figure 1: The typical way a Graph Neural Network (GNN) is structured. Taking a molecule as an example, the node features can represent the atom type (e.g. Hydrogen or Oxygen) and the edge features can represent the bond type (e.g. covalent or ionic).
For example, two atoms in a molecule with the same neighbourhood are expected to have similar representations. However, giving these two atoms identical representations can be limiting, since their positions in the molecule are distinct and their roles may differ. As a consequence, popular MP-GNNs (Message Passing Graph Neural Networks) fail to differentiate two nodes with the same 1-hop local structure; this is now well understood in the context of the equivalence of MP-GNNs with the Weisfeiler-Lehman (WL) test for graph isomorphism.
To mitigate these issues, the authors propose a new framework called LSPE (Learnable Structural and Positional Embeddings) that can be used with any MP-GNN to learn both properties at the same time.
These limitations can also be alleviated to some extent by:
- Stacking multiple layers: this helps propagate information from a node across multiple hops, but remains deficient for distant nodes because of "over-squashing".
- Applying higher-order GNNs: computing higher-order node-tuple aggregations, as in WL-based GNNs, though these models are computationally expensive.
- Considering PE for nodes/edges: this gives nodes some notion of global position in the graph, helping the model tell apart nodes with identical local structure and better capture sub-structures (a minimal sketch of computing such encodings follows below).
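As a concrete example of the third option, a common choice of PE is the first few non-trivial eigenvectors of the graph Laplacian. The snippet below is a minimal NumPy sketch of that idea (not code from the paper or the official repo); the function name `laplacian_pe` and the choice of the symmetric normalized Laplacian are assumptions for this illustration.

```python
# Minimal NumPy sketch (not code from the paper): Laplacian-eigenvector PEs
# computed from a dense adjacency matrix using the symmetric normalized Laplacian.
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int = 8) -> np.ndarray:
    """Return a k-dimensional positional vector per node, taken from the
    k smallest non-trivial eigenvectors of L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)      # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                  # drop the trivial constant eigenvector

# Example: a 4-node cycle; every node has the same 1-hop structure,
# but each node still receives its own positional vector.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(laplacian_pe(A, k=2))                     # shape (4, 2)
```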
🙇♂️ Method
Standard Message Passing Graph Neural Networks (MP-GNNs)
The typical way a standard Message Passing Graph Neural Network is structured can be represented as follows:

$$h_i^{\ell+1} = f_h\left(h_i^{\ell},\, \{h_j^{\ell}\}_{j \in \mathcal{N}(i)},\, e_{ij}^{\ell}\right), \qquad e_{ij}^{\ell+1} = f_e\left(h_i^{\ell},\, h_j^{\ell},\, e_{ij}^{\ell}\right)$$

where $h_i^{\ell}$ are the node structural embeddings at layer $\ell$, $\mathcal{N}(i)$ represents the neighbourhood of node $i$, $e_{ij}^{\ell}$ are the edge features, and $f_h$, $f_e$ are functions with learnable parameters.
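To make the update rule concrete, here is a minimal PyTorch sketch of one such layer on a dense adjacency matrix. It is an illustration rather than the authors' implementation: $f_h$, $f_e$ and the message function are single linear layers with ReLU, and aggregation is a sum over the 1-hop neighbourhood.

```python
# Minimal PyTorch sketch of one message-passing layer (illustration only,
# not the authors' implementation).
import torch
import torch.nn as nn

class MPLayer(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.f_msg = nn.Linear(3 * d, d)   # message from (h_i, h_j, e_ij)
        self.f_h = nn.Linear(2 * d, d)     # node update from (h_i, aggregated messages)
        self.f_e = nn.Linear(3 * d, d)     # edge update from (h_i, h_j, e_ij)

    def forward(self, h, e, adj):
        # h: (n, d) node embeddings, e: (n, n, d) edge embeddings, adj: (n, n) in {0, 1}
        n, d = h.shape
        hi = h.unsqueeze(1).expand(n, n, d)            # h_i broadcast over j
        hj = h.unsqueeze(0).expand(n, n, d)            # h_j broadcast over i
        pair = torch.cat([hi, hj, e], dim=-1)
        msg = torch.relu(self.f_msg(pair))             # m_ij
        agg = (adj.unsqueeze(-1) * msg).sum(dim=1)     # sum over j in N(i)
        h_new = torch.relu(self.f_h(torch.cat([h, agg], dim=-1)))
        e_new = torch.relu(self.f_e(pair))             # updated edge embeddings
        return h_new, e_new
```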
Input Features and Initialization
These are produced by a linear embedding of the available input node and edge features:

$$h_i^{\ell=0} = A^0 \alpha_i + a^0 \in \mathbb{R}^{d}, \qquad e_{ij}^{\ell=0} = B^0 \beta_{ij} + b^0 \in \mathbb{R}^{d}$$

Where,
- $\alpha_i \in \mathbb{R}^{d_v}$ and $\beta_{ij} \in \mathbb{R}^{d_e}$ are the available input node and edge features, and
- $A^0 \in \mathbb{R}^{d \times d_v}$, $B^0 \in \mathbb{R}^{d \times d_e}$, $a^0, b^0 \in \mathbb{R}^{d}$ are learnable.
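In code, these initial embeddings amount to one learnable embedding (or linear) layer per feature type. The sketch below assumes categorical atom and bond types, as in molecular datasets such as ZINC; the vocabulary sizes are illustrative placeholders.

```python
# Sketch of the input embeddings: categorical atom / bond types mapped to
# d-dimensional vectors by learnable embedding layers (h_i^0 and e_ij^0 above).
# The vocabulary sizes below are illustrative placeholders.
import torch
import torch.nn as nn

d, num_atom_types, num_bond_types = 64, 28, 4
embed_node = nn.Embedding(num_atom_types, d)   # plays the role of A^0, a^0
embed_edge = nn.Embedding(num_bond_types, d)   # plays the role of B^0, b^0

atom_type = torch.tensor([0, 5, 5, 7])         # one integer label per node
bond_type = torch.tensor([[0, 1, 0, 0],
                          [1, 0, 2, 0],
                          [0, 2, 0, 1],
                          [0, 0, 1, 0]])       # one integer label per (i, j) slot
h0 = embed_node(atom_type)                     # (4, d)
e0 = embed_edge(bond_type)                     # (4, 4, d)
```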
Positional Encoding
Existing MP-GNNs integrate positional embeddings into the input node features by concatenation:

$$h_i^{\ell+1} = f_h\left(h_i^{\ell},\, \{h_j^{\ell}\}_{j \in \mathcal{N}(i)},\, e_{ij}^{\ell}\right)$$

with initial

$$h_i^{\ell=0} = \big[\, h_i^{0} \,\|\, p_i^{0} \,\big], \qquad p_i^{0} = C^0 p_i + c^0$$

where $p_i \in \mathbb{R}^{k}$ is a pre-computed positional encoding for node $i$ (e.g. Laplacian eigenvectors), $[\cdot \,\|\, \cdot]$ denotes concatenation, and $C^0 \in \mathbb{R}^{d \times k}$, $c^0 \in \mathbb{R}^{d}$ are learnable.
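A minimal sketch of this "PE at input" scheme: the pre-computed PE is linearly embedded and concatenated with the node features once, before the first message-passing layer. The dimensions and the projection back to width $d$ are assumptions for illustration.

```python
# Sketch of the standard "PE at input" scheme: a pre-computed PE (e.g. the
# Laplacian eigenvectors from the earlier snippet) is linearly embedded and
# concatenated with the node features before the first MP layer.
import torch
import torch.nn as nn

d, k = 64, 8
embed_pe = nn.Linear(k, d)                      # plays the role of C^0, c^0
project = nn.Linear(2 * d, d)                   # optional projection back to width d

h0 = torch.randn(10, d)                         # h_i^0 from the input embedding
p = torch.randn(10, k)                          # placeholder pre-computed PE per node
h0_pe = torch.cat([h0, embed_pe(p)], dim=-1)    # [h_i^0 || p_i^0]
h0_pe = project(h0_pe)                          # keep the hidden width at d
```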
⭐️ ⭐️ Decoupling Position and Structure in MP-GNNs ⭐️ ⭐️
Instead of injecting the PE only at the input, the proposed MP-GNNs-LSPE decouple the positional representations from the structural ones: the positional embeddings $p_i^{\ell}$ get their own message-passing update, and the structural update consumes the concatenation of both streams:

$$h_i^{\ell+1} = f_h\left(\big[h_i^{\ell} \,\|\, p_i^{\ell}\big],\, \{\big[h_j^{\ell} \,\|\, p_j^{\ell}\big]\}_{j \in \mathcal{N}(i)},\, e_{ij}^{\ell}\right)$$
$$e_{ij}^{\ell+1} = f_e\left(h_i^{\ell},\, h_j^{\ell},\, e_{ij}^{\ell}\right)$$
$$p_i^{\ell+1} = f_p\left(p_i^{\ell},\, \{p_j^{\ell}\}_{j \in \mathcal{N}(i)},\, e_{ij}^{\ell}\right)$$

with initial $p_i^{\ell=0} = C^0 p_i + c^0$, a linear embedding of the pre-computed PE.
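The sketch below illustrates one such decoupled layer in PyTorch, keeping separate $h$ and $p$ streams and feeding the concatenation $[h \,\|\, p]$ only into the structural update. It is a simplified illustration with linear update functions, not the official implementation.

```python
# Simplified PyTorch sketch of one decoupled LSPE-style layer (illustration,
# not the official implementation): h and p are kept as separate streams,
# each with its own message passing, and only the structural update sees [h || p].
import torch
import torch.nn as nn

class LSPELayer(nn.Module):
    def __init__(self, d: int, dp: int):
        super().__init__()
        self.f_h = nn.Linear(2 * (d + dp), d)   # update from ([h_i || p_i], sum_j [h_j || p_j])
        self.f_p = nn.Linear(2 * dp, dp)        # update from (p_i, sum_j p_j)

    def forward(self, h, p, adj):
        # h: (n, d) structural, p: (n, dp) positional, adj: (n, n) dense adjacency
        hp = torch.cat([h, p], dim=-1)
        agg_hp = adj @ hp                        # sum over neighbours of [h_j || p_j]
        agg_p = adj @ p                          # sum over neighbours of p_j
        h_new = torch.relu(self.f_h(torch.cat([hp, agg_hp], dim=-1)))
        p_new = torch.tanh(self.f_p(torch.cat([p, agg_p], dim=-1)))
        return h_new, p_new
```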
Positional Loss
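The paper keeps the learned positional embeddings meaningful by adding a positional loss to the task loss, encouraging the matrix of final-layer PEs to behave like Laplacian eigenvectors: a Rayleigh-quotient term that favours low-frequency directions plus a penalty pushing the embedding columns towards orthonormality. The snippet below is a rough sketch of a loss of that form, not the paper's exact formulation; `P` and `lap` are placeholders for the stacked learned PEs and the graph Laplacian.

```python
# Rough sketch of a Laplacian-eigenvector style positional loss (the exact
# normalization in the paper may differ). P: (n, k) learned PEs, lap: (n, n) Laplacian.
import torch

def positional_loss(P: torch.Tensor, lap: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    n, k = P.shape
    rayleigh = torch.trace(P.T @ lap @ P) / k                       # small when P spans low-frequency modes
    ortho = torch.norm(P.T @ P - torch.eye(k), p="fro") ** 2 / k    # encourage orthonormal columns
    return rayleigh + lam * ortho

# The total objective is then: loss = task_loss + alpha * positional_loss(P, lap)
```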

Figure 2: The proposed method, which incorporates Positional Embeddings (PE) alongside the standard structural representation to learn canonical positional information.
📈 Experiments
For the purposes of this report, we use the ZINC dataset for graph regression, comparing various architectures and positional embedding types. As we can see from the plots, the actual and learned eigenvectors for the trained models look quite similar, suggesting that this methodology does learn some canonical positional information.
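For reference, logging such a learned-vs-actual eigenvector comparison to W&B takes only a few lines. The excerpt below is a hypothetical sketch, not code from the runs behind this report; the arrays are random placeholders standing in for the real quantities.

```python
# Hypothetical W&B logging sketch (not from the official code): log a side-by-side
# image of actual Laplacian eigenvectors vs. learned PEs. The arrays below are
# random placeholders standing in for the real quantities.
import numpy as np
import matplotlib.pyplot as plt
import wandb

wandb.init(project="gnn-lspe", config={"dataset": "ZINC", "pe": "LapPE"})

true_eigvecs = np.random.randn(32, 8)    # placeholder: k eigenvectors per node
learned_pe = np.random.randn(32, 8)      # placeholder: learned PEs after training

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].imshow(true_eigvecs, aspect="auto"); axes[0].set_title("actual eigenvectors")
axes[1].imshow(learned_pe, aspect="auto"); axes[1].set_title("learned PEs")
wandb.log({"eigenvector_comparison": wandb.Image(fig)})
wandb.finish()
```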
✌️ Conclusion
In this report we explored Graph Neural Networks with Learnable Structural and Positional Representations by Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio and Xavier Bresson, in which the authors propose a new method to incorporate canonical positional information into the model training paradigm, through Laplacian eigenvectors.
This report will be followed by a full Graph Neural Network series (from scratch to SOTA) in the coming months, so stay tuned to Fully Connected for that! In the meantime, check out these other reports on Fully Connected covering other hot topics in Graph Neural Networks:
If you want to cite the paper in your own research, please use the following BibTeX:
@article{DBLP:journals/corr/abs-2110-07875,
  author     = {Vijay Prakash Dwivedi and Anh Tuan Luu and Thomas Laurent and Yoshua Bengio and Xavier Bresson},
  title      = {Graph Neural Networks with Learnable Structural and Positional Representations},
  journal    = {CoRR},
  volume     = {abs/2110.07875},
  year       = {2021},
  url        = {https://arxiv.org/abs/2110.07875},
  eprinttype = {arXiv},
  eprint     = {2110.07875},
  timestamp  = {Fri, 22 Oct 2021 13:33:09 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2110-07875.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
- Using W&B with DeepChem: Molecular Graph Convolutional Networks – a quick tutorial on using W&B to track DeepChem molecular deep learning experiments.
- Machine Learning With Graphs – class notes for CS224W (http://web.stanford.edu/class/cs224w/).
- Part 1 – Introduction to Graph Neural Networks With GatedGCN – summarizes the need for Graph Neural Networks and analyzes one particular architecture, the Gated Graph Convolutional Network.
- De Novo Molecule Generation with GCPNs using TorchDrug – how reinforcement learning, specifically graph convolutional policy networks, can help create brand new molecules to treat real-world diseases.