Speaker: Dr. Harish Guruprasad, IIT Madras
The trade-off between interpretability and accuracy is a long-standing challenge in machine learning. The rise of deep neural networks and their ubiquitous use has brought this problem to the forefront once again. The power of neural networks comes mainly from their multi-layer feature representations, built through the use of "hidden nodes". By their very nature, hidden nodes are not interpretable; at best, they can be conjectured to represent some latent factor that aids the final task. For example, a hidden node may detect eyes in an image, and later layers may use this information to predict the presence of a cat. Hence an entire class of approaches has arisen that tries to "understand" a learned network through various "visualizations". This understanding can be used by the programmer to debug the learned network, or conveyed to the end user as a reason for a particular prediction. In this talk, we will discuss some standard visualization approaches used for understanding the predictions of deep networks on image, speech, and audio tasks.
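One of the simplest visualization approaches of this kind is a gradient-based saliency map: the magnitude of the gradient of the prediction with respect to each input feature indicates how sensitive the output is to that feature. A minimal sketch on a toy linear classifier (the weights and inputs here are hypothetical, chosen only to illustrate the idea; real saliency maps are computed on deep networks via backpropagation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    """Score for the positive class: sigmoid of a linear model w . x."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return sigmoid(z)

def saliency(w, x):
    """Absolute gradient of the score with respect to each input feature.

    For s = sigmoid(w . x), the derivative d s / d x_i = w_i * s * (1 - s),
    so features with large |w_i| receive large saliency.
    """
    s = predict(w, x)
    return [abs(wi * s * (1.0 - s)) for wi in w]

# Toy example: four input features, one of which dominates the prediction.
w = [2.0, -0.5, 0.0, 1.0]
x = [1.0, 1.0, 1.0, 1.0]
sal = saliency(w, x)
# Feature 0 gets the highest saliency (largest |weight|); feature 2,
# which the model ignores, gets saliency exactly zero.
```

For a deep network the same recipe applies, with the gradient obtained by backpropagation and typically rendered as a heatmap over the input (e.g. over image pixels).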