Small Is Beautiful
Edge AI for Image Aesthetic Assessment
Edge AI
High level.
There are a huge number of examples, including the Industrial Internet of Things (IoT), ML deployed on mobile phones, urban sensing, cartography, and research.
Models run at the edge of a network and are generally pre-trained on GPUs (but not always).
Edge AI involves both specialist and non-specialist hardware.
Within computer vision, models designed for Edge AI are optimized to have the smallest possible number of FLOPs, a small model size (e.g. 5 MB), low latency (fast inference), and low power consumption.
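As a rough illustration of those constraints, here is a minimal sketch (assuming PyTorch and timm are installed; mobilenetv2_100 is an arbitrary example backbone, not necessarily the report's chosen model) that reports parameter count, checkpoint size on disk, and average per-image CPU latency:
```python
import os
import time
import torch
import timm

# Sketch: measure the edge-relevant properties listed above for one backbone.
# "mobilenetv2_100" is an illustrative choice only.
model = timm.create_model("mobilenetv2_100", pretrained=False).eval()

n_params = sum(p.numel() for p in model.parameters())
torch.save(model.state_dict(), "model.pth")
size_mb = os.path.getsize("model.pth") / 1e6

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(x)                          # warm-up pass
    start = time.perf_counter()
    for _ in range(10):
        model(x)
latency_ms = (time.perf_counter() - start) / 10 * 1000

print(f"{n_params / 1e6:.1f}M params | {size_mb:.1f} MB | {latency_ms:.1f} ms/image on CPU")
```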

Hal Burch's graph of the Internet.
Hardware:

Nvidia Jetson Nano
Best Image Selection on a Mobile Device
This example uses transfer learning on mobile/edge-optimized CNNs and CvTs.
This example uses:
- Messy real-world data.
- Training small models rather than large ones.
- Weights & Biases for tracking training and for monitoring the network deployed at the edge (a minimal tracking sketch follows this list).
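A minimal sketch of that Weights & Biases tracking; the project name, config, and metric values are illustrative placeholders rather than the report's actual run settings:
```python
import wandb

# Minimal sketch of experiment tracking with Weights & Biases.
# Project name, config values, and metric values are placeholders.
run = wandb.init(project="edge-iaqa",
                 config={"model": "mobilenetv2_100", "lr": 3e-4, "epochs": 3})

for epoch in range(run.config["epochs"]):
    # In the real pipeline these values come from the training/validation loops.
    train_loss = 1.0 / (epoch + 1)   # placeholder
    val_acc = 0.60 + 0.05 * epoch    # placeholder
    wandb.log({"epoch": epoch, "train/loss": train_loss, "val/accuracy": val_acc})

run.finish()
```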
Image Aesthetic Quality Assessment (IAQA):
Method: A Binary Classifier in a Regressor's Shoes
Using transfer learning of pre-trained mobile/edge-optimized networks via Ross Wightman's timm repository and the PyTorch framework.
- The predicted probability distribution over scores is quantized and thresholded into a binary class (sketched below);
- The network is trained to classify images as good or bad;
- Models are compared against each other.
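The regressor-to-classifier step can be sketched as below; the 10-bin vote histogram and the 5.0 cut-off are assumptions in the style of AVA-like IAQA datasets, not values stated in this report:
```python
import numpy as np

# Sketch: collapse an IAQA score distribution into a binary "good"/"bad" label.
# The 1-10 vote histogram and the 5.0 threshold are illustrative assumptions.
def to_binary_label(score_histogram, threshold=5.0):
    """score_histogram: counts of votes for scores 1..10."""
    votes = np.asarray(score_histogram, dtype=float)
    scores = np.arange(1, len(votes) + 1)
    mean_score = (votes * scores).sum() / votes.sum()  # expected aesthetic score
    return int(mean_score >= threshold)                # 1 = good, 0 = bad

print(to_binary_label([0, 1, 2, 5, 9, 12, 8, 3, 1, 0]))  # -> 1
```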
Selecting the Best Network: ConViTs, CvTs, and CNNs
Exploring reproducibility and domain adaptation using transfer learning, and getting SOTA metrics for the network(s) pre-trained on ImageNet-1k using Ross Wightman's timm package; the docs for this can be found here.
The latest bleeding-edge release is available to check out here; at the time of writing it is needed if you want to use the latest models:
pip install git+https://github.com/rwightman/pytorch-image-models.git
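With that installed, candidate backbones can be listed and instantiated with a fresh two-class head; the wildcard patterns and the mobilenetv2_100 choice below are illustrative:
```python
import timm
import torch

# List some of the edge-oriented families compared in the next section
# ("mobilevit*" requires the bleeding-edge install above).
print(timm.list_models("mobilevit*"))
print(timm.list_models("efficientnet_lite*", pretrained=True))

# Create a pre-trained backbone with a fresh 2-class head for the good/bad classifier.
model = timm.create_model("mobilenetv2_100", pretrained=True, num_classes=2).eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```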
Related Work and Networks
Paper | URL | Model Name |
---|---|---|
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer | here | MobileViT |
Self-training with Noisy Student | here | EfficientNet NoisyStudent (B0-B7, L2) |
Adversarial Examples Improve Image Recognition | here | EfficientNet AdvProp (B0-B8) |
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | here | EfficientNet (B0-B7) |
EfficientNet-EdgeTPU: Creating Accelerator-Optimized Neural Networks with AutoML | here | EfficientNet-EdgeTPU (S, M, L) |
EfficientNetV2: Smaller Models and Faster Training | here | EfficientNet V2 |
MobileNetV2: Inverted Residuals and Linear Bottlenecks | here | MobileNet-V2 |
Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets | here | TinyNet |
FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search | here | FBNet-C |
MnasNet: Platform-Aware Neural Architecture Search for Mobile | here | MNASNet B1, A1 (Squeeze-Excite), and Small |
Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours | here | Single-Path NAS |
Pipeline and ML Ops
Pre-processing and Metadata
Training and Augmentation
Training and Tuning
Testing CPU inference on Raspberry Pi 4 (4 GB)
Deployment at the Edge (Monitoring Inference)
Code & Reproducibility