
Label Propagation vs. LinearSVM

Created on August 19 | Last edited on August 24

A global glance at the different methods




[Charts: valid_num vs. cost and accuracy (all/top1) vs. cost, for the runs bayes+lp, bayes+svm, and bayes.]
Run set: 25 runs
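The report does not include the training code, so here is a minimal sklearn sketch of one plausible reading of the three configurations (the toy data, model choices, and the way the SVM consumes propagated labels are all assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.semi_supervised import LabelPropagation
from sklearn.svm import LinearSVC

# Toy stand-in for the image features used in the actual runs.
X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)

# Mark everything unlabeled (-1) except 5 "prototypical" examples per class.
y_semi = np.full_like(y, -1)
for c in np.unique(y):
    y_semi[np.where(y == c)[0][:5]] = c
labeled = y_semi != -1

# bayes: a Bayes learner trained on the labeled prototypes only.
bayes = GaussianNB().fit(X[labeled], y[labeled])

# bayes+lp: propagate labels to the unlabeled pool first, then retrain.
lp = LabelPropagation(kernel="rbf", gamma=1.0).fit(X, y_semi)
bayes_lp = GaussianNB().fit(X, lp.transduction_)

# bayes+svm: same idea, but a linear SVM consumes the propagated labels.
bayes_svm = LinearSVC().fit(X, lp.transduction_)

for name, model in [("bayes", bayes), ("bayes+lp", bayes_lp),
                    ("bayes+svm", bayes_svm)]:
    print(name, "top-1 accuracy:", round(model.score(X, y), 3))
```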


A deeper look into each run




Run set: 24 runs


LabelPropagation with different learner_thres




Run groups: Gamma ≥ 1 (12 runs), Gamma < 1 (13 runs)
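To make the Gamma ≥ 1 vs. Gamma < 1 grouping concrete: assuming the runs use LabelPropagation's RBF kernel, gamma controls how sharply affinity decays with distance, and therefore how many neighbours effectively contribute. A small numeric sketch (the distances 0.5 and 2.0 are arbitrary illustrative values):

```python
import numpy as np

# RBF affinity used by LabelPropagation: w(i, j) = exp(-gamma * ||xi - xj||^2).
# A larger gamma makes the weight decay faster with distance, so propagation
# effectively listens to fewer, closer (more confident) neighbours.
def rbf_weight(dist, gamma):
    return np.exp(-gamma * dist ** 2)

for gamma in (0.1, 1.0, 10.0):
    ratio = rbf_weight(0.5, gamma) / rbf_weight(2.0, gamma)
    print(f"gamma={gamma}: weight(near) / weight(far) = {ratio:.3g}")
```

With gamma = 0.1 the near and far points get comparable weights; with gamma = 10 the far point is negligible, which matches the intuition behind grouping runs by gamma.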


More data points

100 images per class are used, still with 5 prototypical (labeled) images per class.
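As a sketch of that split (the class count of 10 is an assumption; -1 marks unlabeled examples, following the sklearn semi-supervised convention):

```python
import numpy as np

# Hypothetical split matching the text: 100 images per class, of which
# 5 prototypical images per class keep their labels.
n_classes, per_class, n_prototypes = 10, 100, 5   # n_classes is assumed
rng = np.random.default_rng(0)

y_true = np.repeat(np.arange(n_classes), per_class)
y_semi = np.full_like(y_true, -1)
for c in range(n_classes):
    chosen = rng.choice(np.where(y_true == c)[0],
                        size=n_prototypes, replace=False)
    y_semi[chosen] = c

print("labeled:", (y_semi != -1).sum(), "of", y_true.size)  # labeled: 50 of 1000
```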




Run set: 22 runs





Infinite budget




Run set: 22 runs






Observation

  • LabelPropagation with a larger gamma (considering fewer but more confident neighbouring examples) helps

Next step

  • Consider using all data to propagate
  • Consider using soft labels (with their known uncertainty)

Concern: if using soft labels definitely helps, the same should apply to SVM.
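On the soft-label next step: assuming the runs use sklearn's LabelPropagation, the per-class distributions are already available alongside the hard labels, so no extra propagation pass is needed. A tiny illustrative example (the 1-D data is made up):

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Tiny 1-D example: one labeled point per class, three unlabeled (-1).
X = np.array([[0.0], [1.0], [0.45], [0.55], [3.0]])
y = np.array([0, 1, -1, -1, -1])

lp = LabelPropagation(kernel="rbf", gamma=5.0).fit(X, y)

# Hard labels (argmax over classes) throw the uncertainty away...
print("hard:", lp.transduction_)
# ...while the per-class distributions keep it, so ambiguous points
# (e.g. x = 0.45, 0.55) can be down-weighted when retraining the learner.
print("soft:", lp.label_distributions_.round(2))
```

A downstream learner could use these distributions directly, e.g. as sample weights proportional to the max class probability.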



