Focuses on explainability for neural networks. You will explore techniques designed for deep models and develop the skills to interpret complex architectures.
This session addresses the unique challenges of explaining neural networks, which are often viewed as “black boxes” because of their layered, nonlinear structure. You will explore a range of techniques designed specifically for deep models, including saliency maps and integrated gradients. We will discuss how these methods trace predictions back through the network to highlight influential input features, and how they differ in interpretability, faithfulness, and computational cost.
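The two attribution methods mentioned above can be sketched without any deep-learning framework. The snippet below is a minimal illustration, not a production implementation: it uses a tiny hand-rolled two-layer ReLU network with hypothetical random weights, computes the plain input gradient (a vanilla saliency map), and then averages gradients along the straight-line path from a baseline to the input to obtain integrated gradients.

```python
import numpy as np

# Tiny hypothetical network: 3 inputs -> 4 ReLU hidden units -> scalar output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first-layer weights
W2 = rng.normal(size=(4,))     # second-layer weights

def forward(x):
    h = np.maximum(W1 @ x, 0.0)       # ReLU hidden layer
    return W2 @ h                     # scalar prediction

def grad(x):
    # Analytic gradient of the output w.r.t. the input.
    mask = (W1 @ x > 0).astype(float) # ReLU derivative
    return (W2 * mask) @ W1

def integrated_gradients(x, baseline, steps=200):
    # Average the gradient along the path from baseline to x,
    # then scale by the input difference (the IG formula).
    alphas = (np.arange(steps) + 0.5) / steps   # midpoint rule
    avg_grad = np.mean(
        [grad(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

x = np.array([1.0, -2.0, 0.5])
baseline = np.zeros(3)

saliency = grad(x)                            # vanilla saliency map
ig = integrated_gradients(x, baseline)        # per-feature attributions

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(ig.sum(), forward(x) - forward(baseline))
```

Note the trade-off the session highlights: the saliency map needs one gradient evaluation, while integrated gradients needs one per interpolation step, but in exchange IG satisfies the completeness axiom checked in the last line.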
Courses in ML applied to finance; network theory; eXplainable AI in finance