YOLO-BAM: Integrating CBAM to the YOLOv3 Model for Pedestrian Detection in Images

Document Type

Conference Proceeding

Publication Date



This study investigates the impact of integrating the Convolutional Block Attention Module (CBAM) into the YOLOv3 model for pedestrian detection. The modified model, named YOLO-BAM, was trained for 50 epochs on the COCO 2017 dataset and its performance was evaluated against the baseline YOLOv3 model. The results showed that YOLO-BAM demonstrated a modest 2.6% improvement in accuracy over the baseline model. YOLO-BAM achieved a mean Average Precision (mAP) of 55.020%, while the baseline model attained an mAP of 56.011%. These findings suggest that factors such as the dataset, the CBAM implementation, the inherent effectiveness of the YOLOv3 model, and the evaluation metrics employed may explain why more significant improvements were not observed in the modified model. Further analysis and exploration are necessary to uncover the full potential of integrating CBAM into YOLOv3 for pedestrian detection.
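To make the technique concrete, the sketch below shows the two stages of CBAM as described in the original module design: a channel-attention step (a shared MLP over global average- and max-pooled descriptors) followed by a spatial-attention step (a convolution over channel-wise average and max maps). This is a minimal NumPy illustration, not the paper's implementation; the weight shapes, the reduction ratio, and the use of a simple box filter in place of the learned 7x7 convolution are all simplifying assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel attention: shared MLP over avg- and max-pooled descriptors.

    x:  feature map of shape (C, H, W)
    w1: (C/r, C) hidden-layer weights (r = reduction ratio, an assumption here)
    w2: (C, C/r) output-layer weights
    """
    avg = x.mean(axis=(1, 2))                        # (C,) global average pool
    mx = x.max(axis=(1, 2))                          # (C,) global max pool
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)     # shared MLP with ReLU
    scale = sigmoid(mlp(avg) + mlp(mx))              # (C,) per-channel weights
    return x * scale[:, None, None]

def spatial_attention(x, k=7):
    """Spatial attention: channel-wise avg/max maps pooled over a k x k window.

    A mean over the window stands in for the learned k x k convolution.
    """
    avg = x.mean(axis=0)                             # (H, W)
    mx = x.max(axis=0)                               # (H, W)
    stacked = np.stack([avg, mx])                    # (2, H, W)
    pad = k // 2
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    H, W = avg.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[:, i:i + k, j:j + k].mean()
    return x * sigmoid(out)[None, :, :]              # per-position weights

def cbam(x, w1, w2):
    # CBAM applies channel attention first, then spatial attention,
    # refining the feature map without changing its shape.
    return spatial_attention(channel_attention(x, w1, w2))
```

In YOLO-BAM, a block like this would be inserted after selected convolutional layers of the YOLOv3 backbone so that the refined features feed the detection heads; because the output shape matches the input shape, the surrounding layers need no modification.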