AImpact Seminar #1: Towards Interpretable AI: Visualization of Learned Neural Models

Speaker: Dr. Harish Guruprasad, IIT Madras

Abstract:
The trade-off between interpretability and accuracy is a long-standing challenge in machine learning. The rise of deep neural networks and their ubiquitous use has brought this problem to the forefront once again. The power of neural networks comes mainly from their multi-layer feature representations, built through the use of “hidden nodes”. By their very nature, hidden nodes are not interpretable; at best they can be conjectured to represent some latent factor that aids the final task, e.g., a hidden node may detect eyes in an image, and later layers can use this information to predict the presence of a cat. Hence, an entire class of approaches has arisen that tries to “understand” a learned network through various “visualizations”. This understanding can be used by the programmer for debugging the learned network, or conveyed to the end user as a reason for a particular prediction. In this talk, we will discuss some standard visualization approaches used for understanding the predictions of deep networks on image, speech, and audio tasks.
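
For illustration (not taken from the talk itself): one widely used visualization approach of the kind surveyed here is a gradient-based saliency map, which highlights the input pixels that most influence a prediction. The sketch below assumes a PyTorch/torchvision setup; the untrained ResNet-18 and random input are placeholders standing in for a learned network and a real image.

    import torch
    import torchvision.models as models

    # Placeholder network; in practice this would be a trained model.
    model = models.resnet18(weights=None)
    model.eval()

    # Placeholder input: one 3x224x224 image tensor with gradients enabled.
    image = torch.randn(1, 3, 224, 224, requires_grad=True)

    # Forward pass, then backpropagate the top class score to the pixels.
    scores = model(image)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()

    # Saliency: per-pixel gradient magnitude, maximized over color channels.
    # Large values mark pixels whose change most affects the prediction.
    saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # 224x224 map

Visualizing the resulting map alongside the input image gives the programmer a rough picture of which regions the network relied on, which is the kind of debugging and end-user explanation the abstract describes.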
