Image2Text Dashboard
Dashboard for Experiments
Created on March 2 | Last edited on April 12
Training
Data
The data input for training is
- an image, whose features are extracted via InceptionV3, and
- a caption, which is converted into a word vector with index = <position in sentence> and value = <id of word as defined by the constructed vocabulary>.
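The caption-to-vector step above can be sketched as follows. This is a minimal illustration, not the repo's actual code; the helper names (`build_vocab`, `caption_to_vector`) and the padding convention are assumptions.

```python
# Illustrative sketch: each caption becomes a fixed-length vector where
# index = word's position in the sentence and value = word's id in the
# constructed vocabulary. Helper names and padding scheme are assumptions.

def build_vocab(captions):
    """Assign an integer id to every word, reserving 0 for padding."""
    vocab = {"<pad>": 0}
    for caption in captions:
        for word in caption.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def caption_to_vector(caption, vocab, max_len):
    """Map a caption to a fixed-length vector of word ids, padded with 0."""
    ids = [vocab[w] for w in caption.lower().split()]
    return ids[:max_len] + [0] * (max_len - len(ids))

captions = ["a dog on a beach", "a cat on a sofa"]
vocab = build_vocab(captions)
vec = caption_to_vector("a cat on a beach", vocab, max_len=8)
# vec is [1, 5, 3, 1, 4, 0, 0, 0] under this vocabulary
```

The image side (InceptionV3 feature extraction) is not shown; in a Keras-style pipeline it would typically use the pretrained network with its classification head removed.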
We show here the original images alongside the caption and the associated relative file name, as defined in our artifact structure.
For ✨flair✨ I added an interactive way to view the images based on the word vectors of the caption (it may not make the most sense, but it's fun!)
Losses
[Loss curves for run: train-coco2014-attention-model]
Gradients
Evaluation
Predictions
For each prediction, we also include the attention plot, which helps us interpret why the model makes the predictions it does. Take a look!
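An attention plot like the ones above can be rendered by overlaying the model's per-word attention weights (one weight per image feature location, e.g. an 8×8 grid from InceptionV3's feature map) on the image. This is a hedged sketch, not the repo's plotting code; the function name, grid size, and styling are assumptions.

```python
# Illustrative sketch: one subplot per predicted word, showing where the
# model attended when emitting that word. Grid size and names are assumptions.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen so this works headless
import matplotlib.pyplot as plt

def plot_attention(image, words, attention, grid=8, path="attention.png"):
    """image: HxWx3 array; attention: (len(words), grid*grid) weights."""
    fig = plt.figure(figsize=(10, 10))
    for i, word in enumerate(words):
        attn_map = attention[i].reshape(grid, grid)
        ax = fig.add_subplot((len(words) + 1) // 2, 2, i + 1)
        ax.set_title(word)
        ax.imshow(image)
        # Stretch the coarse attention grid over the full image extent.
        ax.imshow(attn_map, cmap="gray", alpha=0.6,
                  extent=(0, image.shape[1], image.shape[0], 0))
    plt.tight_layout()
    fig.savefig(path)
    plt.close(fig)
```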
💡 Logging this as a table makes it very easy for the team to pull and run their inference on!
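A sketch of how the predictions table can be logged so another team can pull it. This assumes the public wandb API (`wandb.Table`, `wandb.Image`, `wandb.log`); the column names and helper names are illustrative, not taken from the repo.

```python
# Hedged sketch: log one table row per prediction (image, predicted
# caption, attention plot). Column and helper names are assumptions.

def build_rows(predictions):
    """predictions: dicts with 'image', 'caption', and 'attention' paths."""
    return [(p["image"], p["caption"], p["attention"]) for p in predictions]

def log_prediction_table(rows):
    """Log the rows as a W&B Table under the key 'predictions'."""
    import wandb  # deferred so build_rows stays usable without wandb installed
    table = wandb.Table(columns=["image", "predicted_caption", "attention_plot"])
    for image_path, caption, attn_path in rows:
        table.add_data(wandb.Image(image_path), caption, wandb.Image(attn_path))
    wandb.log({"predictions": table})
```

Because the table carries the image, caption, and attention plot together, a downstream consumer can fetch it from the run and iterate over rows directly.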