Explainability vs. Performance: Bridging the Trade-Off in Deep Learning Models
DOI: https://doi.org/10.15662/IJARCST.2024.0705006

Keywords: Explainable AI, deep learning, autonomous driving, medical diagnostics, model transparency, performance trade-off

Abstract
This work examines the explainability-performance trade-off in deep learning models, particularly in life-and-death settings such as autonomous driving and medical diagnostics. As AI models grow more complex and become more deeply integrated into safety-critical systems, ensuring their transparency becomes essential for preserving trust and accountability. The paper surveys approaches to balancing model performance and explainability, ranging from inherently interpretable models to post-hoc explanation techniques such as LIME and SHAP. Case studies of Tesla's Autopilot and IBM Watson Health demonstrate where this trade-off becomes contentious and what its consequences are. The analysis shows that the safe and ethical use of AI systems demands both high performance and transparent decision-making. The study contributes to the existing body of knowledge by underscoring the need for greater transparency in AI and by recommending ways to improve model interpretability without an unacceptable loss of accuracy. The paper concludes by highlighting the urgent need for further advances in AI explainability, especially in safety-critical domains.
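To make the post-hoc idea concrete, the sketch below applies SHAP's model-agnostic KernelExplainer to a stand-in classifier. This is a minimal illustration only: the random-forest model, synthetic data, and all names here are assumptions for demonstration, not the systems or methods evaluated in the study.

```python
# Minimal sketch of post-hoc explanation with SHAP's model-agnostic
# KernelExplainer. The classifier and synthetic data are placeholders,
# not the autonomous-driving or medical systems discussed in the paper.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in "black box": any fitted model exposing predict_proba works.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# KernelExplainer approximates Shapley values by perturbing inputs and
# observing output changes, so it never inspects the model's internals;
# this is what makes it "post hoc" and model-agnostic.
background = X[:50]  # small reference sample used for perturbations
explainer = shap.KernelExplainer(model.predict_proba, background)

# Explain one prediction: per-feature contributions to each class score.
shap_values = explainer.shap_values(X[:1])
print(shap_values)
```

The same pattern applies to LIME, which fits a simple local surrogate model around the instance being explained; in both cases the explanation is an approximation of the black-box model's behavior, which is precisely the source of the trade-off the paper analyzes.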


