## Purpose
This experiment compares the performance of publicly available pretrained models against models trained from scratch, and analyzes the effect of pretraining on training performance and generalization.
## Models & Dataset
- Models: MobileNetV2, ResNet18
- Dataset: CIFAR-10
- Evaluation Metrics: Accuracy, Loss, Confidence Score Distribution
## Curriculum Learning Strategy
- Initial training uses high-confidence samples
- Gradually introduces lower-confidence (more difficult) samples
- The goal is to simulate a "learning progression" similar to human education
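The scheduling idea above can be sketched as a simple pacing function. This is a minimal, hypothetical NumPy example: the name `curriculum_subset`, the linear pacing schedule, and the 30% starting pool are illustrative assumptions, not the issue's final design.

```python
import numpy as np

def curriculum_subset(confidences, epoch, total_epochs, start_frac=0.3):
    """Indices of the samples visible to the model at a given epoch.

    Training begins with the top `start_frac` most-confident samples and
    linearly grows the pool each epoch until every sample is included.
    """
    n = len(confidences)
    frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / max(1, total_epochs - 1))
    k = max(1, int(round(frac * n)))
    order = np.argsort(confidences)[::-1]  # most confident first
    return order[:k]

# Toy run: 10 samples, 5 epochs -- the visible pool grows from 3 samples to all 10
rng = np.random.default_rng(0)
conf = rng.random(10)
pool_sizes = [len(curriculum_subset(conf, e, total_epochs=5)) for e in range(5)]
```

In a real training loop, `curriculum_subset` would feed a sampler (e.g. a PyTorch `SubsetRandomSampler`) so each epoch only draws from the currently revealed pool.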
## Expected Outcomes
- Determine whether a scratch-trained model with curriculum learning can match or outperform the pretrained model
- Evaluate the impact of confidence-based sample scheduling on training stability and generalization
- Analyze the trade-offs between transfer learning and confidence-driven curriculum learning
## Checklist
- [ ] Evaluate pretrained MobileNetV2 and ResNet18 performance on the chosen dataset
- [ ] Train MobileNetV2 from scratch and evaluate its performance
- [ ] Implement confidence score calculation for each sample
- [ ] Design the curriculum schedule based on confidence levels
- [ ] Train the pretrained model with curriculum learning
- [ ] Train the scratch-trained model with curriculum learning
- [ ] Visualize training/test accuracy and loss curves
- [ ] Summarize results in a performance comparison table
- [ ] Write a detailed analysis and discussion of the results
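For the confidence-score item, one common definition is the maximum softmax probability a model assigns to its predicted class. A minimal sketch under that assumption (the function names and toy logits are illustrative; a real run would use MobileNetV2/ResNet18 logits over CIFAR-10):

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability before exponentiating
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confidence_scores(logits):
    """Per-sample confidence = probability assigned to the predicted class."""
    return softmax(logits).max(axis=1)

# Toy logits for 3 samples over the 10 CIFAR-10 classes
logits = np.array([
    [5.0] + [0.0] * 9,       # one dominant class -> high confidence
    [1.0, 0.9] + [0.0] * 8,  # two competing classes -> lower confidence
    [0.0] * 10,              # uniform logits -> confidence exactly 1/10
])
scores = confidence_scores(logits)
```

The resulting per-sample scores are what the curriculum schedule would sort on, and their histogram gives the "Confidence Score Distribution" metric listed above.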