Mathematical Foundations of Deep Learning

Lecturer: Prof. Hadi Ghauch
Lecturer's institute: Telecom ParisTech
Date: 17.12.2019 09:00 to 19.12.2019 15:00
Place: TS128

Tue 17.12. 9:00-11:00 and 13:00-15:00

Wed 18.12. 9:00-11:00 and 13:00-15:00

Thu 19.12. 9:00-11:00 and 13:00-15:00


Abstract

This course offers an in-depth study of the mathematical foundations of deep neural networks (DNNs), which are at the heart of the AI revolution. The first part of the course covers the fundamental aspects of statistical learning and of large-scale convex and non-convex optimization methods for modern machine learning tasks. We then focus on learning with DNNs, pose the resulting learning problem as an empirical risk minimization, and discuss in detail state-of-the-art methods for large-scale training, e.g., backpropagation, stochastic gradient descent, AdaGrad, RMSProp, and ADAM. We then turn to common DNN architectures such as deep convolutional and recurrent neural networks, and to practical issues in training DNNs. Finally, we close the course by reviewing current and future research trends on DNNs and (potentially) interesting applications in signal processing and wireless communications.

Short Biography

Prof. Hadi Ghauch is an assistant professor at Telecom ParisTech, in the digital communications group. He did his postdoc jointly with Prof. Carlo Fischione (Dept. of Network and Systems Engineering, KTH) and Prof. Mikael Skoglund (Dept. of Information Science and Engineering, KTH). He received his MS in Information Networking from Carnegie Mellon University (CMU), USA, in 2011, and his PhD in Electrical Engineering from KTH in 2016, under the supervision of Prof. Mikael Skoglund, Prof. Mats Bengtsson, and Dr. Taejoon Kim. His research interests include optimization for large-scale learning, interpretable models for machine learning, machine learning for resource allocation, optimization for millimeter-wave communication, and distributed optimization of wireless networks.


Last updated: 19.11.2019