
Cassava Leaf Disease Classification using fastai

Created on August 3 | Last edited on August 3
This discussion thread is a place to share validation results on Kaggle's Cassava competition using fastai.
Abhishek Yadav
I am getting 0.8867 accuracy on the validation data with augmentation, TTA, MixUp, and early stopping. I used a progressive resizing approach, which helped a bit, but after a few epochs the model started to overfit, so I used early stopping to counter that. My notebook can be found at https://github.com/abhishekaiem/deeplearning_using_kaggle/blob/Main/cassava-using-fast-ai.ipynb
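A minimal fastai sketch of how MixUp, early stopping, and TTA can be combined (not the notebook linked above); `dls` is assumed to be a DataLoaders already built for the cassava images, and the epoch count and patience are placeholders:

```python
from fastai.vision.all import *

# `dls` is assumed: a DataLoaders built for the cassava images.
learn = cnn_learner(
    dls, resnet34, metrics=accuracy,
    cbs=[
        MixUp(),                                    # mix pairs of images and labels during training
        EarlyStoppingCallback(monitor='accuracy',   # stop once validation accuracy stops improving
                              patience=3),
    ],
)
learn.fine_tune(20)

# Test-time augmentation: average predictions over several augmented
# versions of each validation image, then score them.
preds, targs = learn.tta()
print(accuracy(preds, targs))
```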
Ravi Mashru
I've managed to hit 88.5% validation accuracy: https://ravimashru.dev/blog/2021-08-11-fastbook-ch7/ I've used normalization, TTA, progressive resizing and mixup. Thank you so much @Feras for sharing your approach. It helped me understand how to use all these techniques together much better. :) I'm planning to try label smoothing next.
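For the planned label-smoothing experiment, here is a minimal sketch of swapping fastai's `LabelSmoothingCrossEntropy` in as the loss function; `dls` and the epoch count are assumptions, not taken from the linked post:

```python
from fastai.vision.all import *

# `dls` is assumed to already exist.
learn = cnn_learner(
    dls, resnet34,
    loss_func=LabelSmoothingCrossEntropy(eps=0.1),  # soften the one-hot targets
    metrics=accuracy,
)
learn.fine_tune(10)
```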
1 reply
Feras Oughali
Trained using MixUp on top of my previous model for another 25 epochs. This showed a slight improvement with 88.78% accuracy on the validation set. You can find the notebook here: https://colab.research.google.com/drive/13SWIqWWecK03TCyPcYuIEYWxjxq_RQuu?usp=sharing
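A rough sketch of continuing training from a saved model with MixUp enabled, along the lines described above; the checkpoint name 'stage-1' and the learning rates are placeholders, and `dls` is assumed to exist:

```python
from fastai.vision.all import *

learn = cnn_learner(dls, resnet34, metrics=accuracy)  # `dls` assumed to exist
learn.load('stage-1')      # placeholder name for the previously trained weights
learn.add_cb(MixUp())      # enable MixUp for the extra epochs
learn.unfreeze()
learn.fit_one_cycle(25, slice(1e-5, 1e-3))
```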
2 replies
Feras Oughali
Hey everyone! I managed to obtain 88.46% accuracy on the validation set using progressive resizing, TTA, and default augmentations. Will do further experiments with Mixup and LabelSmoothing. Here is a link to the notebook: https://colab.research.google.com/drive/1oDFXMdujg29Y_J9-7W9-_n0caqwadv_L?usp=sharing
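A hedged sketch of the progressive-resizing recipe with fastai's default augmentations and TTA (not the linked notebook); it assumes the images sit in per-class folders under `path`, so `get_items`/`get_y` would need adapting to the actual competition layout:

```python
from fastai.vision.all import *

# `path` is assumed to point at images organized in per-class folders.
def get_dls(size, bs):
    dblock = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=get_image_files,
        get_y=parent_label,
        splitter=RandomSplitter(valid_pct=0.2, seed=42),
        item_tfms=Resize(460),
        batch_tfms=aug_transforms(size=size),  # fastai's default augmentations, applied per batch
    )
    return dblock.dataloaders(path, bs=bs)

# Start with small images, then continue with larger ones on the same learner.
learn = cnn_learner(get_dls(128, 64), resnet34, metrics=accuracy)
learn.fine_tune(5)

learn.dls = get_dls(224, 32)
learn.fine_tune(5)

# Evaluate with test-time augmentation.
preds, targs = learn.tta()
print(accuracy(preds, targs))
```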
2 replies
Matteo
Hey everybody, I tried some of the solutions we saw in the previous weeks, such as the random resized crop and the learning rate finder, but neither of them allowed me to achieve more than 86% accuracy consistently. The few times I got more than that, I suspect it was due to the random initialization of the weights rather than an actual improvement of the model. I also tried VGG16 instead of the popular ResNet-34, but I got similar results. If you want to check my notebook, here's the link: https://colab.research.google.com/drive/19pQ9Gcq8GDR46kE2Se3dp87i9IcpkTkY?usp=sharing See you all soon!
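A small sketch of the experiments mentioned here: `RandomResizedCrop` as the item transform, the learning-rate finder, and swapping ResNet-34 for VGG16. `path`, the batch size, and the epoch counts are assumptions, not taken from the linked notebook:

```python
from fastai.vision.all import *
from torchvision.models import vgg16_bn

# `path` is assumed to point at images organized in per-class folders.
dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=RandomResizedCrop(224, min_scale=0.5),  # random crops instead of a plain resize
)
dls = dblock.dataloaders(path, bs=32)

learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.lr_find()                    # inspect the loss-vs-lr plot to pick a learning rate
learn.fine_tune(8, base_lr=3e-3)

# Same data, different backbone.
learn_vgg = cnn_learner(dls, vgg16_bn, metrics=accuracy)
learn_vgg.fine_tune(8)
```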
Vinayak Nayak
https://imgur.com/xLTbI4y I have also obtained 87.4% on the validation dataset. I have presented my observations in this post: https://elisonsherton.github.io//fastbook/deep%20learning/2021/08/04/fastbook-week8-competition.html
Vrinda Prabhu
I tried a stacked model, dividing the data into two parts: Model-1 classifies the majority class vs. everything else, and Model-2 classifies the other four classes, stacked as Model-1 > Model-2. The result: still not breached 87! Haha! I have not done any additional augmentations and went with Ravi's notebooks. Details here: https://github.com/vrindaprabhu/FastBookAssignments/tree/main/Homework/cassava The Kaggle kernels were suggesting augmentations, but I wanted to check stacking out.
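A very rough sketch of the two-stage inference described here (not the linked notebook): Model-1 separates the majority class from everything else, and Model-2 distinguishes the remaining four classes. The class label is a placeholder, and `learn1`/`learn2` are assumed to be already-trained fastai learners:

```python
from fastai.vision.all import *

MAJORITY = 'class_3'   # placeholder label for the majority class

def predict_stacked(learn1, learn2, img_path):
    """Two-stage prediction: learn1 = majority class vs. the rest,
    learn2 = the remaining four classes."""
    label1, _, _ = learn1.predict(img_path)   # stage 1
    if label1 == MAJORITY:
        return MAJORITY
    label2, _, _ = learn2.predict(img_path)   # stage 2
    return label2
```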
2 replies
Abhishek Yadav
I am getting ~0.87 accuracy with 10 epochs and a batch size of 16 (CUDA was running out of memory) using the updated library, but only around ~0.45 with the library version Kaggle is using. I did almost the same thing Aman asked us to do, but I was getting an error when I used only the splitter function and kept everything else the same, so I had to go back to chapter six and redo things according to that. I am also going back through the documentation to figure out why I get close to 2x the accuracy with the new library, so I should have something this week. I am planning to try augmentation and play with image resizing to observe how the accuracy changes.
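One possible way to set up the DataBlock with an explicit splitter and a batch size of 16, sketched under the assumption that the competition's train.csv (image_id, label) and train_images folder live under `path`; this is not the notebook being described:

```python
from fastai.vision.all import *
import pandas as pd

# `path` is assumed to contain the competition's train.csv and train_images/.
df = pd.read_csv(path/'train.csv')

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_x=ColReader('image_id', pref=path/'train_images'),
    get_y=ColReader('label'),
    splitter=RandomSplitter(valid_pct=0.2, seed=42),   # the explicit splitter
    item_tfms=Resize(224),
)
dls = dblock.dataloaders(df, bs=16)    # bs=16 to stay within GPU memory

learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(10)
```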
2 replies
Kevin Bird
I was able to get a validation accuracy of 87.24% so far. Still working on it. I'm currently testing something that involves making some changes to the input images, and I'm hopeful it will give good results. Will report back if it works well.
3 replies
Ravi Mashru
I obtained a validation accuracy of 87.9%: https://colab.research.google.com/drive/1T1wM77DFRLARu6YPvWxo6LnjFMv2WNHN?usp=sharing I used pretty much the same approach as in Fastbook for the pet breeds classifier, with a few modifications:
- I did not use any augmentations. I tried some augmentations but ended up with worse accuracy (approx. 83.5%), which was a little counterintuitive. I just resized the images to 224x224 because I used a pre-trained ResNet-34.
- I reduced my batch size to 16 because the GPU I got on Colab was running out of memory for any batch size above that. That meant really long training times (approx. 11 minutes per epoch).
- Instead of the standard "fit one cycle + unfreeze + fit one cycle" or "fine_tune", I got better results by just using "fit one cycle" for a larger number of epochs (12, to be exact) and not unfreezing the layers at all.
I tried to plot the confusion matrix and top losses to understand how to improve the model, but Colab ran out of memory and crashed :( I will train the model again, export and download it, and then do this analysis on my laptop.
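A sketch of the "fit one cycle only, no unfreezing" recipe plus the interpretation and export steps mentioned above; `dls` (224x224 images, bs=16) is assumed to exist and the export filename is a placeholder:

```python
from fastai.vision.all import *

# `dls` is assumed: 224x224 images with bs=16.
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fit_one_cycle(12)    # train only the head; cnn_learner leaves the pretrained body frozen

# Error analysis: confusion matrix and the worst-predicted images.
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(8, 8))
interp.plot_top_losses(9, nrows=3)

# Export so the analysis can also be run elsewhere (e.g. on a laptop).
learn.export('cassava-resnet34.pkl')
```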
4 replies