Learning Modules for Explainable AI with Python
Here, we outline a learning structure for you, including some GitHub repositories to follow. This curated list of resources, presented in a suggested order and spanning video lectures, websites, and research papers, should help students complete the lab exercises. Together, these resources provide a solid foundation for understanding explainability, ethical considerations, and potential biases in AI systems. As AI continues to advance, addressing these issues is essential to ensure responsible development and deployment.
Introduction to Explainable AI
- Video: Opening the Black Box With Explainable A.I. by UC Berkeley’s Trevor Darrell, Krishna Gade of Fiddler Labs, and Karen Myers from SRI International.
- Video: Explainable AI: From Prediction to Understanding, a presentation by George at ODSC 2018.
- Video Playlist: Explainable AI Explained by DeepFindr
- Website: Explainable AI: A Guide for Making Black Box Models Explainable by George Anadiotis.
- Reading: Introduction to Interpretability
Feature Importance and Local Interpretable Model-agnostic Explanations (LIME)
- Video Presentation: "Why Should I Trust You?": Explaining the Predictions of Any Classifier (KDD 2016)
- Video: Explainable AI explained! | #3 LIME by DeepFindr
- GitHub: LIME (Local Interpretable Model-agnostic Explanations)
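To make the LIME resources above concrete, here is a minimal sketch, assuming the `lime` and `scikit-learn` packages are installed: it fits a random forest on the Iris data and asks LIME's tabular explainer for a local explanation of one prediction. The dataset and model are stand-ins, not part of the linked repository.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

iris = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)
# LIME perturbs the instance, queries predict_proba on the perturbations,
# and fits a locally weighted linear surrogate; its coefficients are the explanation.
exp = explainer.explain_instance(iris.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```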
Layer-wise Relevance Propagation (LRP)
Class Activation Mapping (CAM)
- Paper: Learning Deep Features for Discriminative Localization by Zhou et al. (CVPR 2016)
- Video: Deep Learning: Class Activation Maps Theory (Lazy Programmer)
- Video: Explaining CNNs: Class Attribution Map Methods (NPTEL-NOC IITM)
- GitHub: CAM
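As a companion to the CAM resources above, here is a minimal sketch, assuming PyTorch and torchvision (0.13 or newer for the `weights` argument) and using a random tensor as a stand-in for a preprocessed image. ResNet-18 ends in global average pooling followed by a single linear layer, which is the architecture CAM requires.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()  # older torchvision: pretrained=True

feature_maps = {}
model.layer4.register_forward_hook(lambda m, i, o: feature_maps.update(conv=o))

x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed image
cls = model(x).argmax(dim=1).item()       # class to explain (here: the prediction)

# CAM: weight each final conv feature map A_k by the classifier weight w_k for the class.
w = model.fc.weight[cls]                                   # (512,)
cam = torch.einsum("c,chw->hw", w, feature_maps["conv"][0])
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize for display
cam = F.interpolate(cam[None, None], size=x.shape[-2:], mode="bilinear")[0, 0]
print(cam.shape)                          # heat map aligned with the 224x224 input
```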
Grad-CAM: Visual Explanations from Deep Networks
- Paper: Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization by Selvaraju et al. (ICCV 2017)
- Video: Grad-CAM | Lecture 28 (Part 2) by Maziar Raissi
- GitHub: Grad-CAM (Gradient-weighted Class Activation Mapping)
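Following the Grad-CAM resources above, here is a minimal sketch under the same assumptions as the CAM example (PyTorch, torchvision, a random tensor standing in for an image). Instead of the classifier weights, the feature maps are weighted by the spatially averaged gradients of the class score.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed image
logits = model(x)
cls = logits.argmax(dim=1).item()
logits[0, cls].backward()                 # gradients of the class score

# Grad-CAM: alpha_k = spatial average of the gradients; map = ReLU(sum_k alpha_k * A_k)
alpha = grads["v"].mean(dim=(2, 3))[0]                      # (512,)
cam = F.relu(torch.einsum("c,chw->hw", alpha, acts["v"][0]))
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalize for display
print(cam.shape)   # (7, 7); upsample to the input size to overlay on the image
```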
Image Saliency
- Paper: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps by Simonyan et al. (arXiv 2013)
- Video: Lecture 12 | Visualizing and Understanding by Stanford University School of Engineering.
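In the spirit of Simonyan et al.'s vanilla gradient saliency, here is a minimal sketch (same PyTorch/torchvision assumptions as above, random tensor in place of a real image): the saliency map is the absolute gradient of the class score with respect to the input pixels.

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed image
logits = model(x)
cls = logits.argmax(dim=1).item()
logits[0, cls].backward()                 # d(class score) / d(pixels)

# Saliency = per-pixel gradient magnitude, max over the colour channels.
saliency = x.grad.abs().max(dim=1).values[0]          # (224, 224)
print(saliency.shape)
```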
Integrated Gradients
- Video: Feature Attribution | Stanford CS224U Natural Language Understanding | Spring 2021 by Stanford University School of Engineering.
- GitHub: Integrated Gradients
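To accompany the Integrated Gradients resources above, here is a from-scratch Riemann-sum approximation (not the linked repository's code), again assuming PyTorch/torchvision and a random tensor standing in for an image: attributions are the input-minus-baseline difference times the averaged gradients along the straight path from baseline to input.

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed image
baseline = torch.zeros_like(x)            # the common all-black baseline
target = model(x).argmax(dim=1).item()    # class to attribute

# IG_i(x) ~= (x_i - x'_i) * (1/m) * sum_k dF/dx_i evaluated at x' + (k/m)(x - x')
steps = 32
grad_sum = torch.zeros_like(x)
for k in range(1, steps + 1):
    point = (baseline + (k / steps) * (x - baseline)).requires_grad_(True)
    model(point)[0, target].backward()
    grad_sum += point.grad

attributions = (x - baseline) * grad_sum / steps
print(attributions.shape)                 # per-pixel attributions, same shape as the input
```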
Network Dissection
- Paper: Network Dissection: Quantifying Interpretability of Deep Visual Representations by Bau et al. (CVPR 2017)
- Video: Network Dissection: Visualizing and Understanding Deep Visual Representations by David Bau (CVF Videos)
- GitHub: Network Dissection
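As a toy illustration of Network Dissection's scoring step only, here is a NumPy sketch with synthetic data: one unit's activation map and a hypothetical concept mask stand in for real activations and Broden segmentation annotations. The unit map is thresholded at a high quantile and compared to the concept mask with intersection over union.

```python
import numpy as np

rng = np.random.default_rng(0)
activation = rng.random((112, 112))          # synthetic stand-in for one unit's upsampled activation map
concept_mask = rng.random((112, 112)) > 0.7  # hypothetical binary concept segmentation

threshold = np.quantile(activation, 0.995)   # keep roughly the top 0.5% of activations, as in the paper
unit_mask = activation > threshold

intersection = np.logical_and(unit_mask, concept_mask).sum()
union = np.logical_or(unit_mask, concept_mask).sum()
print(f"IoU between unit and concept: {intersection / union:.3f}")  # high IoU => the unit "detects" the concept
```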
Counterfactual Explanations
- Video: Counterfactual Explanations: The Future of Explainable AI by Aviv Ben Arie
- Video: Counterfactual Explanations Can Be Manipulated by UCI NLP (NeurIPS 2021)
- GitHub: Alibi: Algorithms for Monitoring and Explaining Machine Learning Models
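To give the counterfactual idea a concrete form, here is a minimal from-scratch sketch (scikit-learn only, not the Alibi API): it searches along a single feature of an Iris sample for the smallest change that flips the classifier's prediction. The choice of feature and step size are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
clf = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)

x = iris.data[0].copy()
original = clf.predict([x])[0]

# Greedy search: nudge petal length (feature 2) until the predicted class changes.
cf = x.copy()
for _ in range(200):
    cf[2] += 0.05
    if clf.predict([cf])[0] != original:
        break

print("original:     ", x, "->", iris.target_names[original])
print("counterfactual:", cf, "->", iris.target_names[clf.predict([cf])[0]])
```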
Ethics in AI
- Paper: Moral dilemmas for moral machines by Travis LaCroix (Springer 2022)
- Paper: The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation by Brundage et al. (arXiv 2018).
- Video: The three big ethical concerns with artificial intelligence by Frank Rudzicz (MaRS Discovery District)
Data Bias and Model Understanding
- Paper: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings by Bolukbasi et al. (NeurIPS 2016)
- Paper: Datasheets for Datasets by Gebru et al. (ACM 2021)
- Paper: Fairness Definitions Explained by S. Verma and J. Rubin (ACM 2018)
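To make two of the definitions surveyed by Verma and Rubin tangible, here is a toy NumPy sketch on synthetic predictions with a hypothetical binary protected attribute: demographic parity compares positive-prediction rates across groups, equal opportunity compares true-positive rates.

```python
import numpy as np

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical protected attribute
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # hypothetical ground-truth labels
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # hypothetical model predictions

for g in (0, 1):
    in_g = group == g
    positive_rate = y_pred[in_g].mean()                    # P(pred=1 | group=g), demographic parity
    tpr = y_pred[in_g & (y_true == 1)].mean()              # P(pred=1 | y=1, group=g), equal opportunity
    print(f"group {g}: positive rate={positive_rate:.2f}, true positive rate={tpr:.2f}")
```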
Additional Resources & Case Studies
- Video: Machine Learning Explainability Workshop | Stanford by Professor Hima Lakkaraju
- Video: Explainable AI Cheat Sheet - Five Key Categories by Jay Alammar
- Website: Explainable AI (XAI) - DARPA by Dr. Matt Turek
- Book: Interpretability in Deep Learning by Somani et al. (Springer 2023)
- Book: Interpretable Machine Learning by Christoph Molnar