HARNESSING CHAOS AND CAUSALITY IN NEURAL NETWORKS: A PRUNING STRATEGY FOR ENHANCED PERFORMANCE AND EXPLAINABILITY
Abstract
Neural network pruning is essential for optimizing deep learning models, yet traditional methods such as magnitude-based pruning often overlook the temporal dependencies and causal relationships within networks. To address these limitations, we introduce a novel Granger causality-based pruning methodology that identifies and retains causally significant weights, ensuring that the pruning process accounts for both temporal and causal dynamics. This approach not only subsumes the advantages of magnitude pruning but also serves as a more comprehensive and adaptive strategy for neural network optimization. Additionally, we employ paired t-tests and Intersection over Union (IoU) analysis to rigorously evaluate the stability and effectiveness of our pruning method. Experimental results across various datasets and architectures demonstrate that our Granger causality-based pruning achieves superior accuracy with fewer parameters and faster convergence, making it a robust, architecture-agnostic solution for enhancing model efficiency and interpretability in resource-constrained environments. The project is available in our GitHub repository.
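To make the core idea concrete, the following is a minimal sketch, not the paper's exact algorithm: it scores each weight by whether its training-time trajectory Granger-causes the loss trajectory, using the standard `grangercausalitytests` routine from statsmodels. The names `granger_prune_mask`, `weight_history`, and `loss_history` are hypothetical, and the choice of the SSR F-test p-value as the significance criterion is an assumption for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_prune_mask(weight_history, loss_history, maxlag=2, alpha=0.05):
    """Sketch of Granger causality-based weight selection.

    weight_history: (T, W) array, the value of each of W weights over T epochs.
    loss_history:   (T,) array, the training loss over the same T epochs.
    Returns a boolean keep-mask of length W (True = retain, False = prune).
    The series must be long enough for the chosen maxlag (T >> maxlag).
    """
    num_weights = weight_history.shape[1]
    keep = np.zeros(num_weights, dtype=bool)
    for j in range(num_weights):
        # Column order matters: grangercausalitytests checks whether the
        # second column (the weight trajectory) Granger-causes the first
        # column (the loss trajectory).
        pair = np.column_stack([loss_history, weight_history[:, j]])
        try:
            results = grangercausalitytests(pair, maxlag=maxlag, verbose=False)
            # Take the smallest SSR F-test p-value across the tested lags.
            p_value = min(results[lag][0]["ssr_ftest"][1] for lag in results)
            keep[j] = p_value < alpha  # retain causally significant weights
        except Exception:
            keep[j] = False  # constant or degenerate trajectory: prune
    return keep
```

In this sketch, weights whose histories carry no predictive information about the loss are pruned, which is one way the abstract's claim of retaining "causally significant" weights could be operationalized; the paper's actual formulation may differ.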