
Small Is Beautiful

Edge AI for Image Aesthetic Assessment


Edge AI




At a high level:

There are a huge number of examples, including the Industrial Internet of Things (IIoT), ML deployed on mobile phones, urban sensing, cartography, and research.
Models run at the edge of a network and are generally pre-trained on GPUs (but not always).
Edge AI involves both specialist and non-specialist hardware.
Computer vision models designed for Edge AI are optimized to have the smallest possible number of FLOPs, a small size (e.g. ~5 MB), low latency (fast inference), and low power consumption.
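As a rough illustration of these constraints (not from the original post), the sketch below uses timm and PyTorch to check the parameter count and approximate checkpoint size of a small edge-oriented backbone; MobileNet-V2 is one of the models listed later in this report.

```python
# A rough, illustrative check (not from the original post) of the edge-model
# constraints above: parameter count and approximate checkpoint size.
import os
import tempfile

import timm
import torch

# MobileNet-V2 is one of the edge-oriented backbones listed later in this report.
model = timm.create_model("mobilenetv2_100", pretrained=False)

# Parameter count is a quick proxy for memory footprint.
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.2f} M")

# Approximate on-disk size by serialising the weights.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "model.pt")
    torch.save(model.state_dict(), path)
    print(f"checkpoint size: {os.path.getsize(path) / 1e6:.1f} MB")
```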
Hal Burch's graph of the Internet.

Hardware:

Nvidia Jetson Nano


Best-image selection on a mobile device

This example uses transfer learning on mobile/edge-optimized CNNs and CvTs. It involves:
  1. Messy, real-world data.
  2. Training small models rather than large ones.
  3. Weights & Biases for training and for deployment of the network to an edge device (see the tracking sketch below).
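A minimal sketch of what the Weights & Biases tracking side of this might look like; the project name, config values, and logged metrics are illustrative placeholders, not the report's actual setup.

```python
# A minimal tracking sketch; project name, config values, and metrics are
# illustrative placeholders rather than the report's actual setup.
import wandb

run = wandb.init(
    project="edge-iaqa",              # hypothetical project name
    config={
        "backbone": "mobilevit_xxs",  # one of the candidate edge models
        "epochs": 10,
        "lr": 3e-4,
    },
)

for epoch in range(run.config.epochs):
    # ... training and validation for one epoch would go here ...
    val_loss, val_acc = 0.0, 0.0      # placeholder values
    wandb.log({"epoch": epoch, "val/loss": val_loss, "val/accuracy": val_acc})

run.finish()
```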

Image Aesthetic Quality Assessment (IAQA):

Method: A Binary Classifier in a Regressor's Shoes
The dataset (AVA) was first web-scraped by N. Murray et al. in 2012 from https://www.dpchallenge.com/.
Transfer learning is applied to pre-trained, mobile/edge-optimized models using Ross Wightman's timm repository and the PyTorch framework:
  1. The score distribution is quantized and thresholded into a binary class (see the sketch after this list);
  2. Models are trained to classify images as good or bad;
  3. Models are compared.
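A minimal sketch of step 1, assuming AVA-style 10-bin vote histograms (counts of votes for scores 1 through 10) and the common mean-score cut-off of 5; the report's exact threshold may differ.

```python
# A minimal sketch of step 1, assuming AVA-style 10-bin vote histograms
# (counts of votes for scores 1..10) and the common mean-score cut-off of 5.
import numpy as np

def mean_score(vote_counts: np.ndarray) -> float:
    """Mean aesthetic score implied by a 10-bin vote histogram."""
    scores = np.arange(1, 11)
    probs = vote_counts / vote_counts.sum()
    return float((scores * probs).sum())

def binary_label(vote_counts: np.ndarray, threshold: float = 5.0) -> int:
    """1 = 'good' image, 0 = 'bad' image."""
    return int(mean_score(vote_counts) > threshold)

votes = np.array([0, 1, 2, 5, 10, 20, 15, 5, 1, 1])  # illustrative histogram
print(mean_score(votes), binary_label(votes))
```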

Selecting the best network: ConViT, CvTs, and CNNs

This section explores reproducibility and domain adaptation using transfer learning, and reports SOTA metrics for the network(s) pre-trained on ImageNet-1k using Ross Wightman's timm package; the docs can be found here.
The latest bleeding-edge release is available to check out here. At the time of writing, it is needed if you want to use the latest MobileViT variants, MobileViT $\in \{xxs, xs, s\}$:
pip install git+https://github.com/rwightman/pytorch-image-models.git
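Once that release is installed, a MobileViT backbone can be pulled from timm and given a two-class head. The sketch below assumes the variant names in timm's registry at the time of writing ('mobilevit_xxs', 'mobilevit_xs', 'mobilevit_s') and the 256x256 default input size.

```python
# A minimal sketch: load an ImageNet-pretrained MobileViT from timm and give it
# a two-class (good/bad) head. Variant names assume timm's registry at the time
# of writing: 'mobilevit_xxs', 'mobilevit_xs', 'mobilevit_s'.
import timm
import torch

model = timm.create_model("mobilevit_xxs", pretrained=True, num_classes=2)
model.eval()

x = torch.randn(1, 3, 256, 256)  # MobileViT variants default to 256x256 inputs
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 2])
```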


| Paper | URL | Model Name |
| --- | --- | --- |
| MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer | here | MobileViT |
| Self-training with Noisy Student | here | EfficientNet NoisyStudent (B0-B7, L2) |
| Adversarial Examples Improve Image Recognition | here | EfficientNet AdvProp (B0-B8) |
| EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | here | EfficientNet (B0-B7) |
| EfficientNet-EdgeTPU: Creating Accelerator-Optimized Neural Networks with AutoML | here | EfficientNet-EdgeTPU (S, M, L) |
| EfficientNetV2: Smaller Models and Faster Training | here | EfficientNet V2 |
| MobileNetV2: Inverted Residuals and Linear Bottlenecks | here | MobileNet-V2 |
| Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets | here | TinyNet |
| FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search | here | FBNet-C |
| MnasNet: Platform-Aware Neural Architecture Search for Mobile | here | MNASNet B1, A1 (Squeeze-Excite), and Small |
| Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours | here | Single-Path NAS |


Pipeline and MLOps

Pre-processing and Metadata

Training and Augmentation

Training and Tuning

Testing CPU inference on a Raspberry Pi 4 (4 GB)
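A minimal timing sketch (not the report's benchmarking code) for estimating single-image CPU latency, e.g. on a Raspberry Pi 4, with warm-up iterations and a single thread.

```python
# A minimal timing sketch (not the report's benchmarking code): single-image
# CPU latency with warm-up iterations and a single thread.
import time

import timm
import torch

torch.set_num_threads(1)  # mimic a constrained edge CPU budget
model = timm.create_model("mobilevit_xxs", pretrained=False, num_classes=2).eval()
x = torch.randn(1, 3, 256, 256)

with torch.no_grad():
    for _ in range(5):    # warm-up
        model(x)
    n = 30
    t0 = time.perf_counter()
    for _ in range(n):
        model(x)
latency_ms = (time.perf_counter() - t0) / n * 1000
print(f"mean CPU latency: {latency_ms:.1f} ms / image")
```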

Deployment at the Edge (Monitoring Inference)
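A hypothetical sketch of monitoring inference at the edge by logging per-image latency and predictions back to Weights & Biases; the project and job names are assumptions, not the report's.

```python
# A hypothetical monitoring sketch: log per-image latency and the predicted
# class from the edge device back to Weights & Biases. Project and job names
# are assumptions, not the report's.
import time

import torch
import wandb

run = wandb.init(project="edge-iaqa", job_type="edge-inference")

def monitored_predict(model: torch.nn.Module, image: torch.Tensor) -> int:
    """Run one inference and log its latency and prediction to W&B."""
    start = time.perf_counter()
    with torch.no_grad():
        pred = int(model(image).argmax(dim=1).item())
    wandb.log({"latency_ms": (time.perf_counter() - start) * 1000, "prediction": pred})
    return pred
```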




Code & Reproducibility

| Type | Source |
| --- | --- |
| colab .ipynb | Open In Colab |
| repo | here |
| data | here |