Explainable Machine Learning and Embedded Deep Learning

Join us at our first meetup of 2019, where you will learn how to interpret your machine learning models in a detailed introduction and hands-on session by Ioannis Mollas, and how to accelerate your deep learning models using Edge TPUs in a quick-and-dirty coding session by Nikolaos Passalis.

Agenda:
19:00: Warm Up Session: ‘Embedded Deep Learning using Google’s Edge TPU’, Nikolaos Passalis
19:45: ‘Explainable Machine Learning: A Theoretical and Practical Overview’, Ioannis Mollas
21:30: Networking and socializing

‘Explainable Machine Learning: A Theoretical and Practical Overview’, Ioannis Mollas

Abstract: Explainable machine learning refers to methods and models that make the behavior and predictions of machine learning systems understandable to humans. With the rapid development of technological fields such as self-driving cars, digital healthcare, robotic assistants and recommendation systems, combined with radical legal changes empowering ethics and human rights, the field of explainable machine learning is blooming at an impressive rate. A large number of methods for interpreting (mainly supervised) machine learning models have already been proposed. Each method suggests its own way of interpreting a machine learning model and representing the explanations, whether in visual, graphical, textual or dialectical form. The first part of the session presents the theoretical background of explainable machine learning, including definitions and methods. The second part is a practical demonstration of training and explaining transparent and black-box models, with a variety of methods implemented in Python using well-known libraries such as scikit-learn, Orange, Eli5, and SHAP.
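To give a flavor of what explaining a black-box model can look like in practice, here is a minimal sketch using permutation importance, a simple model-agnostic interpretation technique available in scikit-learn. This is an illustration only, not material from the talk; the dataset and model choices are assumptions for the example.

```python
# Illustrative sketch (not from the talk): interpreting a black-box
# model with permutation importance, using only scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train a black-box model on a toy dataset (choices are illustrative).
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Shuffle each feature in turn and measure the drop in accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(
    model, data.data, data.target, n_repeats=10, random_state=0
)
for name, imp in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Libraries covered in the session, such as Eli5 and SHAP, go further by producing per-prediction explanations rather than a single global ranking of features.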

Bio: Ioannis Mollas is a Ph.D. candidate and M.Sc. student at the School of Informatics of the Aristotle University of Thessaloniki (AUTH) in Greece. He obtained his B.Sc. in Informatics from AUTH in 2018. His research interests include machine learning, argumentation and explainable machine learning. His doctoral studies are supported by the AI4EU Project (https://www.ai4eu.eu/).

‘Embedded Deep Learning using Google’s Edge TPU’, Nikolaos Passalis

Abstract: Deep Learning (DL) is currently among the most prominent candidates for providing intelligence in many embedded applications. However, DL suffers from a significant drawback: it requires powerful, expensive and energy-intensive hardware for deploying the developed models. This has led to the development of specialized co-processors, e.g., NVIDIA Jetson, HiSilicon NPU and Google's Edge TPU, specifically designed to accelerate DL inference while meeting the requirements of embedded applications. In this hands-on tutorial, we will briefly review the most useful tools for developing embedded applications powered by DL and discuss the most important challenges to keep in mind when designing such models. Then, we will get our hands dirty by developing and testing, live, a state-of-the-art embedded face detection system on Google's just-released purpose-built ASIC, the Edge TPU, which can run at 100+ FPS while drawing less than 3 watts of power.

Bio: Nikolaos Passalis is a postdoctoral researcher at the Faculty of Information Technology and Communication Sciences, Tampere University, Finland. He received his B.Sc. in Informatics (2013), M.Sc. in Information Systems (2015) and Ph.D. in Informatics (2018) from the Aristotle University of Thessaloniki, Greece. He has (co-)authored more than 40 papers published in international journals and conference proceedings. His research interests include deep learning, computational intelligence and information retrieval.