
ResAttNet: Residual Attention Network for Image Classification

Designed and implemented a CNN-based architecture with stacked attention modules for image classification on the CIFAR-10 and CIFAR-100 datasets (see the module sketch after the results below). Optimized the training pipeline to evaluate GPU performance trade-offs, balancing accuracy against computational efficiency, and demonstrated that the attention modules improve classification accuracy on both datasets. Benchmarked model performance on various GPUs, achieving:
- CIFAR-100: 93.34% accuracy on a T4, with a training time of 136.84 minutes.
- CIFAR-10: 91.07% accuracy on a T4, with a training time of 132.22 minutes.
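
For illustration, here is a minimal PyTorch sketch of one stacked attention module in the trunk-plus-soft-mask style of the Residual Attention Network. The block depths, channel widths, and mask-branch structure are assumptions for clarity, not the project's actual implementation:

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Pre-activation residual block (illustrative depth/width)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        )

    def forward(self, x):
        return x + self.body(x)


class AttentionModule(nn.Module):
    """One attention module: a trunk branch for features and a
    bottom-up/top-down soft mask branch that gates the trunk output
    as (1 + M(x)) * T(x), so the mask can only amplify, never erase."""
    def __init__(self, channels):
        super().__init__()
        self.trunk = nn.Sequential(
            ResidualBlock(channels), ResidualBlock(channels)
        )
        # Mask branch: downsample to gather context, process,
        # upsample back, then squash to (0, 1) attention weights.
        self.mask = nn.Sequential(
            nn.MaxPool2d(2),
            ResidualBlock(channels),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        t = self.trunk(x)
        m = self.mask(x)
        return (1 + m) * t  # residual attention over trunk features


# Usage: one module on a CIFAR-sized feature map (hypothetical shapes).
if __name__ == "__main__":
    net = AttentionModule(channels=64)
    y = net(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

The `(1 + M(x)) * T(x)` residual formulation is what lets these modules stack: even a near-zero mask passes the trunk features through unchanged, so deeper stacks stay trainable.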