Lectures:
- Tuesday December 10, lecture hall AT115A, at 12:00-16:00
- Wednesday December 11, lecture hall TS101, at 8:00-12:00
Oral exam: Wednesday December 11, lecture hall TS127, at 13:00-16:00
Passing the course (3 credits, PASS/FAIL) requires:
- Attending the lectures: Tuesday December 10, lecture hall AT115A, at 12:00-16:00; Wednesday December 11, lecture hall TS101, at 8:00-12:00
- Writing a short report on how you envision applying audio analysis and the acquired knowledge, or on how you worked with the tools taught in the course, and sending it to Prof. Guoying Zhao (firstname.lastname@example.org) by the deadline of January 15
- Passing the oral exam on Wednesday December 11, lecture hall TS127, at 13:00-16:00. The oral exam will be held in groups of two students, for ~15 minutes per group. Please choose a group with a suitable time slot and mark your name and student number for the oral exam in this online shared document: https://docs.google.com/document/d/17xnwYCuVCY0Lg885sEPAFiaJ9Ga2ERwdf5nenZXAZPs/edit#heading=h.v08jo35c2c52
- Please wait outside TS127 starting 15 minutes before your time slot.
This course consists of 8 hours introducing the analysis of speech and audio, with an application focus on healthcare and wellbeing. From a methods point of view, it briefly introduces traditional speech and audio features and then moves on to a range of suitable deep learning techniques, such as end-to-end learning from raw audio data. These methods comprise Convolutional Neural Networks, Recurrent Neural Networks with Long Short-Term Memory, and Generative Adversarial Networks. A further emphasis is placed on strategies for speech and audio data collection, such as Active and Semi-Supervised Learning. From an application point of view, the course focuses on health-related monitoring and diagnosis. The course is accompanied by practical exercises and examples based on recent toolkits and data in the corresponding sub-fields.
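To give a flavour of the end-to-end idea mentioned above — applying learned filters directly to raw waveform samples rather than to hand-crafted features — here is a minimal toy sketch of the basic building block of such networks, a 1D convolution followed by a ReLU non-linearity. All names, the example frame, and the kernel values are purely illustrative and not taken from the course material.

```python
# Toy sketch (illustrative only): one 1D convolution layer plus ReLU applied
# to a raw audio frame, the basic building block of end-to-end CNNs on
# waveforms. In a real network the kernel weights would be learned.

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation) over raw samples."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

def relu(xs):
    """Rectified linear unit applied element-wise."""
    return [max(0.0, x) for x in xs]

# A tiny hypothetical "raw audio" frame (normalised samples) and a
# hand-picked difference filter standing in for learned weights.
frame = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
kernel = [1.0, -1.0]

features = relu(conv1d(frame, kernel))
print(features)  # [0.0, 0.0, 0.5, 0.5, 0.5, 0.5, 0.0]
```

In practice such layers are stacked and trained with toolkits like those used in the course exercises; this sketch only shows the mechanics of sliding a filter over raw samples.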
Contact person: Guoying Zhao
Björn W. Schuller received his diploma in 1999, his doctoral degree for his study on Automatic Speech and Emotion Recognition in 2006, and his habilitation (facultas docendi) and the title of Adjunct Teaching Professor (venia legendi) in the subject area of Signal Processing and Machine Intelligence for his work on Intelligent Audio Analysis in 2012, all in electrical engineering and information technology from TUM in Munich, Germany (ranked #41 as a university and #20 in engineering in the THE World University Ranking 2018).
Since 2017, he has been Full Professor and Centre Digitisation.Bavaria (ZD.B) Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany (ranked #801-900 as a university in the ARWU Ranking 2019), in the Faculty of Applied Informatics and the Faculty of Medicine. Since 2018, he has also been Professor of Artificial Intelligence in the Department of Computing at Imperial College London, UK (ranked #8 in the THE World University Ranking 2018), where he heads the Group on Language Audio & Music (GLAM), having previously been Reader in Machine Learning (from 2015) and Senior Lecturer (from 2013). Further, he is the co-founding CEO and current CSO of audEERING GmbH, a TUM start-up on intelligent audio engineering, since its launch in 2012. In 2019, he was appointed Honorary Dean of the Centre for Affective Intelligence at Tianjin Normal University, Tianjin, P.R. China.
More information: http://www.schuller.one/
Last updated: 9.12.2019