Hello, iDL Community

The Interpretability in Deep Learning initiative is a growing team-wide effort to improve the transparency, accuracy, and fairness of AI models. AI systems and computer vision now inform decisions ranging from everyday choices, such as food and clothing recommendations, to life-altering judgments such as disease diagnosis, financial fraud detection, and hiring. We believe it is crucial to ensure that these AI models can be interpreted by their end users.

Our initiative aims to provide researchers, developers, and users with resources and tools that enhance the interpretability of AI models. We work to make AI models more transparent and comprehensible, so that users can trust the judgments these systems make. We also stress the importance of ethical and moral considerations in the design and deployment of AI models.

Get in Touch

We are always looking for new and exciting opportunities. Let's connect.

Bio-AI Lab, Dept. of Computer Science

UiT The Arctic University of Norway
