
CGCNN and QGCNN

Week 3

Model Architectures

Classical GCNN

Two message-passing steps with a global mean pool at the end.
Latent size of the MLP is 128.
Number of trainable parameters: 67,842
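A minimal sketch of this architecture, assuming PyTorch Geometric; the GCNConv layer type, the input feature dimension, and the two-class output head are my assumptions, while the two message-passing steps, the 128-dim MLP, and the global mean pool follow the notes above.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class ClassicalGCNN(nn.Module):
    def __init__(self, in_dim: int = 4, latent: int = 128, out_dim: int = 2):
        super().__init__()
        # Two message-passing steps
        self.conv1 = GCNConv(in_dim, latent)
        self.conv2 = GCNConv(latent, latent)
        # Readout MLP with latent size 128
        self.mlp = nn.Sequential(
            nn.Linear(latent, latent),
            nn.ReLU(),
            nn.Linear(latent, out_dim),  # out_dim = 2 is an assumed binary head
        )

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # global mean pool at the end
        return self.mlp(x)
```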



Quantum GCNN

Two message-passing steps with a global mean pool at the end.
Latent size of the MLP is 64.
Number of trainable parameters: 5,688
Hyperparams:
  • 4 qubits
  • single layer
  • num_features: 6
Note: I am using the same node layer in all message-passing steps, which reduces the number of trainable parameters as well as the training time; see the sketch below.
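A minimal sketch of the weight sharing, assuming PyTorch Geometric's MessagePassing base class; the sum aggregation and the classical stand-in node layer are assumptions, and the stand-in would be replaced by the hybrid quantum layer described in the next section.

```python
import torch.nn as nn
from torch_geometric.nn import MessagePassing

class SharedNodeMP(MessagePassing):
    """Message-passing step whose node-update network is injected,
    so the same weights can be reused by every step."""
    def __init__(self, node_layer: nn.Module):
        super().__init__(aggr="add")  # assumed aggregation
        self.node_layer = node_layer

    def forward(self, x, edge_index):
        return self.propagate(edge_index, x=x)

    def message(self, x_j):
        return x_j  # pass neighbour features through unchanged

    def update(self, aggr_out):
        return self.node_layer(aggr_out)

# One node-layer instance shared by both steps: its parameters are
# counted (and trained) only once. A classical stand-in is used here.
node_layer = nn.Sequential(nn.Linear(6, 6), nn.Tanh())
step1 = SharedNodeMP(node_layer)
step2 = SharedNodeMP(node_layer)  # same object -> same weights
```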

Data Re-uploading Quantum Circuit


  • Output size of Neural Network before Circuit: (batch_size, num_features)
  • Output size of Quantum Circuit: (batch_size, num_qubits)
  • Pauli-Z measurement on all qubits
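A minimal PennyLane sketch consistent with the shapes above; the 4 qubits, 6 input features, single layer, and Pauli-Z readout on all qubits come from the notes, while the concrete encoding split and the StronglyEntanglingLayers trainable block are my assumptions.

```python
import pennylane as qml

num_qubits, num_layers, num_features = 4, 1, 6
dev = qml.device("default.qubit", wires=num_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    for layer in range(num_layers):
        # Re-upload the classical features in every layer:
        # first 4 features as RY angles, the remaining 2 as RZ angles
        qml.AngleEmbedding(inputs[..., :num_qubits], wires=range(num_qubits), rotation="Y")
        qml.AngleEmbedding(inputs[..., num_qubits:], wires=range(num_features - num_qubits), rotation="Z")
        # One block of trainable rotations plus ring entanglement
        qml.StronglyEntanglingLayers(weights[layer : layer + 1], wires=range(num_qubits))
    # Pauli-Z expectation on every qubit -> num_qubits outputs per sample
    return [qml.expval(qml.PauliZ(w)) for w in range(num_qubits)]

# Wrapped as a Torch layer: (batch_size, num_features) -> (batch_size, num_qubits)
weight_shapes = {"weights": (num_layers, num_qubits, 3)}
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes)
```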


Metrics


[Metric plots for run: glowing-spaceship-6]


Status

Done

  1. Implemented a classical and a quantum GCNN (the latter with a data re-uploading circuit).
  2. Obtained preliminary results.

Goals for next week

  1. Try different model architectures for both the classical and quantum models.
  2. Understand the data preprocessing of boosted jets and apply it to the Quark-Gluon dataset.