
 INTERPRETABILITY IN DEEP LEARNING

Artificial intelligence and machine learning approaches are often considered black boxes, i.e. algorithms that accomplish learning tasks but cannot explain their knowledge. However, as artificial intelligence is increasingly adopted to accomplish cognitive tasks on behalf of human beings, it is becoming important that artificial intelligence models are understandable by humans, so that artificial and human intelligence can co-exist and collaborate. In critical tasks, such as deriving a correct medical diagnosis and prognosis from given data, collaboration between artificial and human intelligence is imperative, so that the suggestions or decisions from artificial intelligence are both more accurate and more trustworthy.

 

This intensive course will cover different topics of importance regarding explainable artificial intelligence, equipping students with knowledge of approaches that can be used to explain artificial intelligence, and of artificial intelligence approaches that are more explainable than others. In addition, students will gain practical experience in applying selected approaches for explaining artificial intelligence, which will help them adapt to the rapid pace of technology development in the field of explainable artificial intelligence.

Events


DLN Research School 2023

This course is offered through the 7th open call for course proposals of the DLN Research School (RS) 2023. The course will cover different topics of importance regarding interpretable deep learning, equipping students with knowledge of approaches that can be used to explain deep learning, and of deep learning approaches that are more explainable than others. Extensive lab work, self-exercises, and group work for competence development are also included.

The course is supported by the Digital Life Norway Research School, and a limited number of travel grants are available for its members.

Recommended prerequisites: programming skills in Python and hands-on knowledge of deep learning.

NORA Summer School 2023 - Track 1

The summer school is scheduled for 12-17 June 2023 (Monday to Saturday) and carries 5 ECTS credits. The first digital introductory lecture will be held on 8 May, 10:15 am to 12:00 (noon). This lecture will give a course overview and a description of the projects, along with what is expected in the mandatory coursework, the home exam, and the project report.

Venue: TEKNOBYGGET 1.022AUD

The deadline to register is 30 April 2023.


Reference Book

This book is a comprehensive curation, exposition, and illustrative discussion of recent research tools for interpretability of deep learning models, with a focus on neural network architectures. In addition, it includes several case studies from application-oriented articles in the fields of computer vision, optics, and machine learning-related topics.

Publisher: Springer; 1st ed. 2023 edition (May 01, 2023)
Language: English
Hardcover: 466 pages


YouTube Videos

Introduction to Interpretability in Deep Learning 2023
