Effective Explainable Artificial Intelligence using Visual Explanations in Images



Ibrahim, Rami F.




Convolutional neural networks (CNNs) have outperformed traditional machine learning methods in image classification. Their brain-inspired, layered structure enables them to learn sophisticated features as images pass through successive layers. However, their lack of explainability has created demand for interpretations that justify their predictions. Explainable AI (XAI) proposes collaboration between technology and humans to provide more insight into CNNs. This study presents a novel explainable model called Augmented Score-CAM, built on top of the existing Score-CAM method and standard image augmentation techniques. The model produces class activation maps for augmented copies of an input image and merges them into a single activation map. In addition, we introduce a novel taxonomy for XAI models that interpret CNNs, categorizing them into architecture modification, architecture simplification, feature relevance, and visual interpretation. We then review XAI evaluation metrics, application areas, and tasks. Finally, we discuss XAI challenges, address some open concerns, and provide suggestions for improving performance. This study improves the interpretation of AI systems by adding Augmented Score-CAM visual explanations, and we highlight the importance of incorporating visual explanations in AI systems to improve user trust in decision-making.
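The merging step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the thesis's implementation: `cam_fn` stands in for an existing CAM method such as Score-CAM, and each augmentation is paired with its inverse so the resulting maps can be aligned before they are merged by averaging.

```python
import numpy as np

def augmented_cam(image, cam_fn, augmentations):
    """Merge class activation maps computed over augmented copies of an image.

    Hypothetical sketch: `cam_fn` maps an image to an activation map
    (e.g. a Score-CAM call); `augmentations` is a list of
    (augment, inverse) function pairs so each map can be transformed
    back into the original image's coordinate frame.
    """
    maps = []
    for augment, inverse in augmentations:
        cam = cam_fn(augment(image))   # activation map for the augmented view
        maps.append(inverse(cam))      # align the map with the original image
    merged = np.mean(maps, axis=0)     # merge the aligned maps by averaging
    # Normalize to [0, 1] for visualization
    return (merged - merged.min()) / (merged.max() - merged.min() + 1e-8)

# Example augmentation set: identity plus a horizontal flip,
# which is its own inverse.
flip_augs = [
    (lambda x: x, lambda m: m),
    (np.fliplr, np.fliplr),
]
```

Averaging is only one possible merge rule; a maximum or a weighted combination over augmentations would fit the same interface.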


Computer Science




Carleton University

Thesis Degree Name: Doctor of Philosophy

Thesis Degree Level:

Thesis Degree Discipline: Information Technology

Parent Collection: Theses and Dissertations

Items in CURVE are protected by copyright, with all rights reserved, unless otherwise indicated. They are made available with permission from the author(s).