Infotech Oulu Doctoral Program
Lecturer: Dr. Anil Fernando, University of Surrey, UK
Date: June 6-8, 2016
Time: 09.15 - 16.00
Venue: TS 128
Registration: On site
The potential beneficiaries of this course include media communications students and researchers, multimedia researchers, equipment manufacturers, broadcasters, multimedia content and service providers, cloud providers, and other industry segments working in video compression, multimedia communications and systems. We will discuss the importance of new video coding solutions and video systems, including HDR, for future highly interactive multimedia content.
The range of multimedia services used for everyday activities, such as teleconferencing, mobile video streaming and peer-to-peer video sharing, is undergoing unprecedented growth. With this growth, the quality of received video is of prime importance to users as well as service providers, irrespective of where the users are connected from. A recent report by Cisco forecasts that in 2017 almost 90% of global data traffic will be video. This is mainly fuelled by the wealth of consumer communication devices being introduced to the market and users' rising expectations of video consumption wherever they are. Since most users are mobile, providing the capacity to handle this ever-increasing video traffic poses a huge challenge for the wireless communications infrastructure, as spectrum capacity is limited. Even though the latest standards such as Long Term Evolution-Advanced (LTE-A) increase downlink data rates to as high as 1.5 Gbps, this will not be sufficient once additional error-resilience data must be transmitted alongside the actual content for bandwidth-hungry future video applications such as UHD, SHD, 3D and HDR video.
Meanwhile, new video coding standards, such as HEVC, have been introduced to ease this situation by compressing video data significantly more than their predecessors and by incorporating greater parallelism, which aids real-time implementation. Although HEVC improves parallelism and compression efficiency, it disregards the transmission aspects of video data. When data is heavily compressed, recovering it at the receiver after transmission through an error-prone channel is extremely challenging. A single error can render the whole video sequence undecodable due to interdependencies between coding units, slices, frames, etc. Consequently, bit errors caused by noisy channels and multipath propagation play a crucial role in mobile wireless transmission environments. These errors create artefacts in the reconstructed video frames that propagate in both the spatial and temporal domains due to the hierarchical prediction scheme employed in the video compression stages. Therefore, it is essential to take the necessary precautions to mitigate these adverse effects when video is transmitted through error-prone channels. To address this problem, error resilience, error concealment, and redundant data transmission are considered, and several non-normative tools have been proposed in the literature. However, the utilisation of error-resilience tools and redundant data transmission is restricted by the channel bandwidth of transmission networks. The complexity of the codec and the loss of compression efficiency also restrict the use of error-resilience/concealment techniques in some application scenarios. On the other hand, to minimise the problem of video transmission over error-prone channels, some approaches from the lower layers have also been considered.
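As a minimal illustration (not HEVC itself, and all names and values here are invented for this sketch), the temporal error propagation described above can be shown with a toy one-value-per-frame predictive coder: each frame is coded as a residual against the previous reconstruction, so a single corrupted residual distorts every subsequent frame until an intra refresh resets the prediction chain.

```python
# Toy sketch of temporal error propagation in predictive (inter-frame) coding.
# Each "frame" is a single pixel value; the encoder sends only the residual
# against the previous reconstructed frame, as real inter coding does per block.

frames = [10, 12, 15, 15, 18, 20]  # original "pixel" values, one per frame

# Encoder: first frame is sent as-is (intra); the rest as residuals.
residuals = [frames[0]] + [frames[i] - frames[i - 1]
                           for i in range(1, len(frames))]

# Channel: a single transmission error corrupts the residual of frame 2.
received = residuals[:]
received[2] += 4

# Decoder: reconstruct by accumulating residuals onto the previous frame.
decoded = []
prev = 0
for r in received:
    prev += r
    decoded.append(prev)

# The error appears in frame 2 and persists in every later frame,
# because each reconstruction is predicted from the corrupted one.
errors = [d - f for d, f in zip(decoded, frames)]
print(errors)  # -> [0, 0, 4, 4, 4, 4]
```

This is why error-resilience tools such as periodic intra refresh or slice resynchronisation matter: they break the prediction chain so that the accumulated error stops propagating.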
In this course, the basics of video compression and communications, and the challenges of mobile video communications, are considered. State-of-the-art video coding solutions with the HEVC codec will be discussed in detail. Emerging concepts such as Quality of Experience (QoE) and Quality of Business (QoB) in media communications, and their importance in mobile communications, are also considered. State-of-the-art and emerging media systems such as UHD, SHD, 3D and HDR will be discussed. Finally, the impact of emerging technologies such as QoE, cloud communications and UHD/SHD/HDR video on media interaction will also be discussed.
Tentative topics to be covered in the course
1. Introduction to Video Coding - Key terminology and fundamental concepts of digital video coding
2. Principles of digital signal compression
3. Basic coding techniques for still images and video sequences
4. Image coding standards
5. Video coding standards
6. Other video coding techniques – MPEG-4, distributed coding
7. Introduction to Video Communication
8. Aspects of error resilience in video coders
9. Error concealment strategies
10. Packet based video transmissions
11. Robustness of video coders
12. Error resilience schemes in video coders
13. Joint source and channel coding for video communications
14. Unequal Error Protection for video transmissions
15. Unequal Power Allocation for video transmissions
16. QoE and QoB in media transmissions
17. Emerging and Future Video Coding Technologies
18. Future Media Systems: 3D, Multi-view video, UHD, SHD, HDR
19. Challenges in video transmission in state-of-the-art wireless systems and future 5G systems
20. Current research in video communications and possible future research in video communications
All lectures will involve some simulations/emulated work to enhance interactions.
Expected duration is 18-20 hours, with assignments if needed.
Anil Fernando (SMIEEE) leads the Video Communications group at the University of Surrey, UK. He has been working in video coding and communications since 1998 and has published more than 350 international refereed journal and conference papers in this area, including a book on video communications. Furthermore, he has published more than 175 international refereed journal and conference papers in multimedia communications. He has contributed to several international projects (ACTION-TV, ROMEO, MUSCADE, DIOMEDES, VISNET, etc.) and is currently leading an interactive video communications project (ACTION-TV) on media communications funded by the European Union. Recently he won the IEEE Chester Sall Award, sponsored by the IEEE Consumer Electronics Society, for his work on 3D video communications. He has contributed more than 10 IEEE tutorials at leading IEEE conferences (IEEE ICCE, IEEE ICASSP, IEEE ICME, IEEE VCIP and IEEE ICIP), which are considered very high-ranking conferences in video communications/signal processing. Moreover, he has a career track record of more than 13 years of university teaching in the UK, covering 10 different subjects in Electronics, Computer and Communications Engineering.
More information: Nandana Rajatheva
Last updated: 26.5.2016