CGCNN and QGCNN
Week 3
Created on June 14 | Last edited on June 14
Model Architectures
Classical GCNN
2 message-passing steps with a global mean pool at the end.
Latent size of the MLP is 128.
Number of trainable parameters: 67,842
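The classical architecture above can be sketched as follows. This is a minimal NumPy illustration, not the actual implementation: the aggregation rule (mean over neighbours with self-loops), MLP depth, and the toy graph are all assumptions; only the two message-passing steps, the latent size of 128, and the final global mean pool come from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    # one (weight, bias) pair per layer; sizes and init are illustrative
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU between hidden layers
    return x

def message_pass(adj, h, params):
    # mean-aggregate neighbour features, then transform with the MLP
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return mlp(params, (adj @ h) / deg)

# toy graph: 5 nodes, 3 input features (both assumed)
adj = (rng.random((5, 5)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T)   # undirected
np.fill_diagonal(adj, 1.0)     # self-loops
h = rng.standard_normal((5, 3))

# two message-passing steps, MLP latent size 128, as in the notes
layer1 = mlp_params([3, 128, 128])
layer2 = mlp_params([128, 128, 128])
h = message_pass(adj, layer1 and h, layer1) if False else message_pass(adj, h, layer1)
h = message_pass(adj, h, layer2)

graph_emb = h.mean(axis=0)     # global mean pool over nodes
print(graph_emb.shape)         # (128,)
```

A real run would use a GNN library (e.g. PyTorch Geometric's `MessagePassing` and `global_mean_pool`), but the data flow is the same: node features are aggregated and transformed twice, then pooled into one 128-dimensional graph embedding.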

Quantum GCNN
2 message-passing steps with a global mean pool at the end.
Latent size of the MLP is 64.
Number of trainable parameters: 5,688
Hyperparams:
- 4 qubits
- single layer
- num_features: 6
Note: I am using the same node layer in all message-passing steps, thereby reducing the number of trainable parameters as well as the training time.
Data Re-uploading Quantum Circuit
- Output size of Neural Network before Circuit: (batch_size, num_features)
- Output size of Quantum Circuit: (batch_size, num_qubits)
- Pauli-Z measurement on all qubits
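The data re-uploading circuit above can be sketched with a small statevector simulation. This is an assumed gate layout, not the circuit from the runs: the choice of RY gates for encoding and training, the ring of CNOT entanglers, and the number of re-uploading repetitions are all illustrative. Only the 4 qubits, the 6 input features, and the Pauli-Z readout on every qubit come from the notes.

```python
import numpy as np

n_qubits, n_features = 4, 6   # from the notes
dim = 2 ** n_qubits

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def single_qubit(gate, wire):
    # embed a 1-qubit gate into the full 4-qubit space (qubit 0 = MSB)
    ops = [np.eye(2)] * n_qubits
    ops[wire] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full

def cnot(control, target):
    # permutation matrix flipping `target` when `control` is 1
    op = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n_qubits - 1 - q) for q, b in enumerate(bits))
        op[j, i] = 1.0
    return op

def circuit(features, weights):
    # data re-uploading: the same features are encoded again before
    # every trainable block (n_reps is an assumption)
    n_reps = weights.shape[0]
    state = np.zeros(dim)
    state[0] = 1.0
    for rep in range(n_reps):
        for f_idx, f in enumerate(features):
            wire = f_idx % n_qubits
            state = single_qubit(ry(f), wire) @ state                 # (re-)encode
            state = single_qubit(ry(weights[rep, f_idx]), wire) @ state  # trainable
        for q in range(n_qubits):  # ring of entanglers
            state = cnot(q, (q + 1) % n_qubits) @ state
    # Pauli-Z expectation on each qubit: +1 for |0>, -1 for |1>
    probs = np.abs(state) ** 2
    expz = []
    for q in range(n_qubits):
        signs = np.array([1.0 if ((i >> (n_qubits - 1 - q)) & 1) == 0 else -1.0
                          for i in range(dim)])
        expz.append(float(probs @ signs))
    return np.array(expz)

rng = np.random.default_rng(1)
out = circuit(rng.standard_normal(n_features), rng.standard_normal((2, n_features)))
print(out.shape)  # (4,) — one <Z> per qubit, matching the stated output size
```

Per sample, the network in front of the circuit emits `num_features` values, and the circuit returns `num_qubits` expectation values, which matches the `(batch_size, num_features) -> (batch_size, num_qubits)` shapes in the notes; in practice this would be written against a quantum ML library (e.g. PennyLane) rather than raw matrices.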

Metrics
Run: glowing-spaceship-6
Status
Done
- Implemented a Classical and Quantum GCNN (with data re-uploading circuit).
- Obtained preliminary results.
Goals for next week
- Try different model architectures for both the classical and quantum GCNNs
- Papers to read: