Paper Title
HAET: A Hybridized Methodology to Augment Explainability in Brain Tumor Detection Using Deep Learning Technique

Abstract
This paper introduces HAET, a hybridized methodology designed to enhance explainability in brain tumor detection using deep learning techniques. The proposed approach aims to make deep learning models more interpretable, providing insights into the decision-making process in medical image analysis. The effectiveness of HAET is demonstrated through experiments on two brain tumor datasets, one for binary classification and one for multiclass tumor analysis, showcasing its potential for reliable and interpretable tumor detection. For the multiclass dataset, a custom CNN architecture tailored to the multiclass problem is utilised, with dropout employed for regularization and a dynamic learning rate scheduler for improved training dynamics. HAET combines heatmap and gradient information to highlight the regions that drive a prediction, using a weighted average of Custom Guided Backpropagation (CGBP) and Smooth Grad-CAM++ (SGC++) maps. The primary goal of this study is to improve patient care and healthcare outcomes by empowering clinicians with greater confidence in AI-driven medical image analysis. This effort promotes transparency in AI-driven medical solutions, fostering trust and potentially accelerating their adoption within clinical practice.

Keywords - Explainability, Hybridized Methodology, Deep Learning, Brain Tumor Detection.
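The weighted fusion of the two attribution maps can be illustrated with a minimal sketch. The snippet below assumes the CGBP gradient map and the SGC++ heatmap have already been computed and resized to the same spatial dimensions; the function names (haet_fusion, normalize_map) and the weight alpha are illustrative placeholders and do not represent the paper's actual implementation details.

    import numpy as np

    def normalize_map(m):
        """Scale a saliency map to [0, 1] so the two maps are comparable."""
        m = m.astype(np.float32)
        return (m - m.min()) / (m.max() - m.min() + 1e-8)

    def haet_fusion(cgbp_map, sgcpp_map, alpha=0.5):
        """Weighted average of a CGBP gradient map and an SGC++ heatmap.

        alpha is an illustrative weight; the paper's actual weighting scheme
        may differ. Both inputs are 2-D arrays of the same spatial size.
        """
        g = normalize_map(cgbp_map)
        h = normalize_map(sgcpp_map)
        return alpha * g + (1.0 - alpha) * h

    # Toy usage with random arrays standing in for real attribution outputs.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        cgbp = rng.random((224, 224))   # stand-in for a CGBP gradient map
        sgcpp = rng.random((224, 224))  # stand-in for an SGC++ heatmap
        fused = haet_fusion(cgbp, sgcpp, alpha=0.6)
        print(fused.shape, fused.min(), fused.max())

In this sketch the fused map stays in [0, 1], so it can be overlaid directly on the input MRI slice as a visual explanation of the model's prediction.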