GSoC 23 Progress
Developed an efficient Ego-QGNN (ego-graph quantum graph neural network) architecture in JAX and TensorCircuit.

Node 0:
1-hop neighbours: 1, 2, 5, 3
2-hop neighbours: 4, 6
3-hop neighbours: 7
Each ego-graph has to be padded to a fixed size (see the sketch below).
The dataset then has shape (N, MaxNodes, NumHops, MaxNodes in an ego graph).
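A minimal sketch of how the per-graph tensor can be built, assuming networkx graphs with integer node labels; PAD, MaxNodes and MaxEgo are illustrative choices, not the exact values used in the project. Stacking these per-graph tensors over N graphs gives the dataset shape above.

```python
import networkx as nx
import numpy as np

PAD = -1  # hypothetical padding value for "no neighbour"

def khop_neighbours(G: nx.Graph, node, num_hops: int):
    """Return one sorted neighbour list per hop distance 1..num_hops."""
    dists = nx.single_source_shortest_path_length(G, node, cutoff=num_hops)
    return [sorted(v for v, d in dists.items() if d == k)
            for k in range(1, num_hops + 1)]

def ego_tensor(G: nx.Graph, max_nodes: int, num_hops: int, max_ego: int):
    """Pad each node's hop-wise neighbour lists into a dense (MaxNodes, NumHops, MaxEgo) array."""
    out = np.full((max_nodes, num_hops, max_ego), PAD, dtype=np.int32)
    for node in G.nodes:  # nodes assumed to be labelled 0..max_nodes-1
        for k, neigh in enumerate(khop_neighbours(G, node, num_hops)):
            neigh = neigh[:max_ego]
            out[node, k, :len(neigh)] = neigh
    return out

# Toy graph reproducing the Node 0 example above.
G = nx.Graph([(0, 1), (0, 2), (0, 5), (0, 3), (2, 4), (5, 6), (6, 7)])
print(khop_neighbours(G, 0, 3))                                  # [[1, 2, 3, 5], [4, 6], [7]]
print(ego_tensor(G, max_nodes=8, num_hops=3, max_ego=4).shape)   # (8, 3, 4)
```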
Training on MUTAG; the model currently overfits.

Next steps:
Explore distributed training with JAX for training on the quark-gluon dataset (a rough data-parallel sketch follows).
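A hedged sketch of what the data-parallel setup could look like with jax.pmap: gradients are averaged across devices with lax.pmean, and the batch carries one shard per device. The loss and parameter pytree are placeholders, not the actual Ego-QGNN forward pass.

```python
import functools
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    preds = jnp.tanh(x @ params["w"])  # stand-in for the Ego-QGNN forward pass
    return jnp.mean((preds - y) ** 2)

@functools.partial(jax.pmap, axis_name="devices")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    grads = jax.lax.pmean(grads, axis_name="devices")  # all-reduce gradients across devices
    return jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)

n_dev = jax.local_device_count()
params = jax.device_put_replicated({"w": jnp.zeros((8, 1))}, jax.local_devices())
x = jnp.ones((n_dev, 32, 8))   # leading axis: one batch shard per device
y = jnp.zeros((n_dev, 32, 1))
params = train_step(params, x, y)
```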
Ego graphs with Quark Gluon Dataset
The number of qubits quickly scales up with the number of k-hop neighbours.
Tested with only 1000 samples of the dataset:

+---+----------+
| k | # qubits |
+---+----------+
| 1 |        6 |
| 2 |       23 |
| 3 |       41 |
| 4 |       47 |
| 5 |       54 |
+---+----------+
Training becomes too slow for higher values of k.
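A sketch of how the qubit counts above can be estimated: for a given k, the circuit width is set by the largest k-hop ego graph in the sample. The toy graphs below are only a stand-in for the 1000-sample quark-gluon subset, so the printed numbers are illustrative.

```python
import networkx as nx

def qubits_for_k(graphs, k):
    """Circuit width = node count of the largest k-hop ego graph in the sample."""
    return max(len(nx.ego_graph(G, v, radius=k))
               for G in graphs for v in G.nodes)

# Toy stand-in for the 1000-sample quark-gluon subset (illustrative only).
sample_graphs = [nx.gnm_random_graph(30, 45, seed=s) for s in range(10)]
for k in range(1, 6):
    print(k, qubits_for_k(sample_graphs, k))
```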

Hyperparameter-free and Explainable Whole Graph Embedding
Based on degree, H-index, coreness, and Shannon entropy.
The resulting features can be fed into a data-reuploading circuit, which leads to a single-qubit QNN (a sketch follows).
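A hedged sketch of the idea: build a whole-graph feature vector from degree, H-index and coreness statistics (here, Shannon entropy plus the mean of each feature, an assumption about the exact aggregation), then re-upload it into a single-qubit TensorCircuit circuit. The layer structure and aggregation are illustrative, not the project's final construction.

```python
import networkx as nx
import numpy as np
import tensorcircuit as tc

def h_index(G, node):
    """Largest h such that the node has at least h neighbours with degree >= h."""
    degs = sorted((G.degree(v) for v in G.neighbors(node)), reverse=True)
    h = 0
    for i, d in enumerate(degs, start=1):
        if d >= i:
            h = i
    return h

def shannon_entropy(values):
    """Shannon entropy of the empirical distribution of a node-level feature."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def graph_embedding(G):
    """Hyperparameter-free embedding: entropy and mean of degree, H-index, coreness."""
    deg = [d for _, d in G.degree()]
    hidx = [h_index(G, v) for v in G.nodes]
    core = list(nx.core_number(G).values())
    feats = []
    for f in (deg, hidx, core):
        feats += [shannon_entropy(f), float(np.mean(f))]
    return np.array(feats)

def single_qubit_qnn(x, weights):
    """Data-reuploading single-qubit circuit: interleave feature rotations with
    trainable rotations, then measure <Z>."""
    c = tc.Circuit(1)
    for w in weights:          # one trainable angle per layer (illustrative layout)
        for xi in x:           # re-upload every feature in each layer
            c.ry(0, theta=float(xi))
        c.rz(0, theta=float(w))
    return tc.backend.real(c.expectation_ps(z=[0]))

# Toy usage on a random graph.
G = nx.gnm_random_graph(10, 20, seed=0)
emb = graph_embedding(G)
print(single_qubit_qnn(emb, weights=np.random.default_rng(0).normal(size=4)))
```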