2. Motivation
• Two Settings:
• Anytime classification
• Budgeted batch classification
• A test image
• “easy” vs. “hard”
• shallow model vs. deep model
• Training multiple classifiers with varying resource demands, which we adaptively apply at test time
3. Motivation
• Develop CNNs
• “slice” the computation and process these slices one-by-one, stopping the
evaluation once the CPU budget is depleted or the classification is sufficiently
certain (through “early exits”)
(Figure: early-exit classifiers attached to intermediate features)
4. Motivation
• Problems
• Early-exit classifiers lack coarse-level features
• Early classifiers interfere with later classifiers
8. Multi-Scale DenseNet (MSDNet)
• Two Settings:
• Anytime classification – output the most recent prediction when the budget is exhausted
• Budgeted batch classification – exit after classifier fk if its prediction confidence exceeds a pre-determined threshold
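The two exit policies above can be sketched in a few lines. This is a minimal illustration, not MSDNet's actual implementation: the classifiers `f_k` are hypothetical stand-ins for the network's intermediate exits, each returning a (label, confidence) pair, and the "budget" is simplified to a count of evaluated exits.

```python
# Hedged sketch of MSDNet-style early-exit inference.
# Assumption: each classifier f_k is a callable x -> (label, confidence).

def budgeted_batch_predict(x, classifiers, threshold=0.9):
    """Budgeted batch setting: exit after the first classifier f_k whose
    prediction confidence exceeds a pre-determined threshold; otherwise
    fall through to the final (deepest) classifier."""
    label = None
    for f_k in classifiers:
        label, conf = f_k(x)
        if conf >= threshold:
            break  # "easy" input: stop early and save computation
    return label

def anytime_predict(x, classifiers, budget):
    """Anytime setting: evaluate exits one-by-one and output the most
    recent prediction once the budget (here, number of exits) is exhausted."""
    latest = None
    for cost, f_k in enumerate(classifiers, start=1):
        if cost > budget:
            break
        latest, _ = f_k(x)
    return latest
```

With three stub classifiers of increasing confidence, an "easy" image exits at the first classifier that crosses the threshold, while the anytime variant simply returns whatever the last affordable exit predicted.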