Abstract:
Convolutional neural networks (CNNs) have outperformed traditional machine learning methods in image classification. Their brain-inspired layered structure enables them to learn increasingly sophisticated features as images pass through their layers. However, their lack of explainability has created a demand for interpretations that justify their predictions. Explainable AI (XAI) promotes collaboration between humans and technology to provide deeper insight into CNNs. This study presents a novel explainability model, Augmented Score-CAM, built on the existing Score-CAM method and established image augmentation techniques. The model applies image augmentation to produce multiple augmented class activation maps and merges them into a single activation map. In addition, we introduce a novel taxonomy of XAI models that interpret CNNs, categorizing them into architecture modification, architecture simplification, feature relevance, and visual interpretation. We then review XAI evaluation metrics, application areas, and tasks. Finally, we discuss the challenges facing XAI, address open concerns, and offer suggestions for improving performance. This study improves the interpretation of AI systems through the visual explanations of Augmented Score-CAM, and highlights the importance of incorporating visual explanations into AI systems to strengthen user trust in decision-making.
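To make the idea concrete, the sketch below illustrates the general approach the abstract describes: compute a Score-CAM-style saliency map for each augmented view of the input, map each result back to the original image frame, and merge the maps by averaging. This is a minimal illustration under assumed details, not the paper's implementation; the function names (score_cam, augmented_score_cam), the choice of flips as augmentations, and mean-merging are our assumptions for illustration only.

```python
# Hypothetical sketch of the Augmented Score-CAM idea (not the paper's code).
# Assumptions: flip augmentations, mean-merging of maps, a torchvision model.
import torch
import torch.nn.functional as F
import torchvision.models as models


def score_cam(model, activations, image, class_idx):
    """Basic Score-CAM: weight each activation channel by the softmax score
    the model assigns to the target class when the input is masked by that
    (upsampled, normalized) channel."""
    _, k, _, _ = activations.shape
    weights = []
    for i in range(k):  # real implementations batch these masked inputs
        mask = activations[:, i:i + 1]                      # (1,1,h,w)
        mask = F.interpolate(mask, size=image.shape[-2:],
                             mode="bilinear", align_corners=False)
        lo, hi = mask.min(), mask.max()
        mask = (mask - lo) / (hi - lo + 1e-8)               # scale to [0,1]
        with torch.no_grad():
            logits = model(image * mask)
        weights.append(F.softmax(logits, dim=1)[0, class_idx])
    weights = torch.stack(weights)                          # (k,)
    cam = F.relu((weights.view(1, k, 1, 1) * activations).sum(dim=1))
    return cam / (cam.max() + 1e-8)                         # (1,h,w)


def augmented_score_cam(model, layer, image, class_idx):
    """Merge Score-CAM maps over a few invertible augmentations."""
    # Each entry pairs an augmentation with the inverse used to map the
    # resulting saliency map back to the original image frame.
    augmentations = [
        (lambda x: x,                   lambda m: m),                   # identity
        (lambda x: torch.flip(x, [-1]), lambda m: torch.flip(m, [-1])), # h-flip
        (lambda x: torch.flip(x, [-2]), lambda m: torch.flip(m, [-2])), # v-flip
    ]
    feats = {}
    hook = layer.register_forward_hook(
        lambda mod, inp, out: feats.update(a=out.detach()))
    maps = []
    for augment, invert in augmentations:
        view = augment(image)
        with torch.no_grad():
            model(view)                                     # fills feats["a"]
        cam = score_cam(model, feats["a"], view, class_idx)
        maps.append(invert(cam))                            # back to original frame
    hook.remove()
    return torch.stack(maps).mean(dim=0)                    # merged activation map


if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()
    image = torch.rand(1, 3, 224, 224)
    cam = augmented_score_cam(model, model.layer4, image, class_idx=0)
    print(cam.shape)  # torch.Size([1, 7, 7]), at the conv layer's resolution
```

In this sketch the merged map inherits Score-CAM's gradient-free weighting while the augmented views smooth out artifacts tied to any single orientation of the input; other merge rules (e.g., element-wise maximum) are equally plausible and are not specified by the abstract.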