Distributed Computing

The amount of data generated by digital devices such as mobile phones, home automation gadgets and industrial sensors is surging. This flood of locally generated data, together with growing concerns about security and privacy, means the traditional cloud-native paradigm can no longer satisfy end-user requirements and regulations. In response, edge and fog computing move computation from centralised cloud services closer to end users and applications. However, edge-native computing requires distributed artificial intelligence and data analytics solutions that span the network to meet the stringent requirements of future data-intensive applications. Augmenting human decision processes has significant potential, and in the long run, developing the basis for autonomous decision-making will be crucial for optimizing network performance and marshalling the billions of devices expected to interact in the 6G era.

Read more in the 6G White Paper on Edge Intelligence.


The true challenge facing Edge AI is how to distribute data, AI models, their training, and inference across multiple devices, locations, and domains, while meeting heterogeneous service and latency requirements and improving resource efficiency.


Our goal in 6G Flagship research is to develop novel distributed AI methods that run on edge computing architectures of dynamically connected nodes able to opportunistically share their computing resources. We develop mechanisms to optimize the distribution of AI methods among heterogeneous nodes, including cloud, edge servers and user devices. Furthermore, new application-level protocols, machine learning methods, data management practices and security solutions are required to realize Edge AI.
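To illustrate the kind of placement decision involved, the sketch below chooses where to run an inference task among a device, an edge server and the cloud by comparing rough latency estimates. It is a minimal, hypothetical example: the node parameters, the estimated_latency_ms helper and the greedy policy are assumptions for illustration, not 6G Flagship implementations.

```python
# Illustrative sketch (not 6G Flagship code): choosing where to run an
# inference task among heterogeneous nodes based on estimated latency.
from dataclasses import dataclass


@dataclass
class Node:
    name: str               # e.g. "device", "edge", "cloud"
    compute_gflops: float   # compute available to the task
    uplink_mbps: float      # bandwidth from the data source to this node


def estimated_latency_ms(node: Node, input_mb: float, workload_gflop: float) -> float:
    """Transfer time plus compute time, ignoring queuing for simplicity."""
    transfer = (input_mb * 8.0) / node.uplink_mbps * 1000.0   # ms
    compute = workload_gflop / node.compute_gflops * 1000.0   # ms
    return transfer + compute


def place_task(nodes: list[Node], input_mb: float, workload_gflop: float) -> Node:
    """Greedy placement: pick the node with the lowest estimated latency."""
    return min(nodes, key=lambda n: estimated_latency_ms(n, input_mb, workload_gflop))


if __name__ == "__main__":
    candidates = [
        # Huge "uplink" approximates zero transfer cost for local execution.
        Node("device", compute_gflops=5, uplink_mbps=1e9),
        Node("edge", compute_gflops=200, uplink_mbps=100),
        Node("cloud", compute_gflops=2000, uplink_mbps=20),
    ]
    best = place_task(candidates, input_mb=4.0, workload_gflop=50.0)
    print(f"Run the task on: {best.name}")
```

With these example numbers the edge server wins: the device is too slow to compute the task and the cloud pays too much in transfer time, which is exactly the trade-off that distribution mechanisms must balance.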


Our practical solutions consider microservices-based edge architectures that reduce the delays of data-intensive applications while providing security and privacy to users. These architectures are particularly well suited to environments where data change dynamically and latency requirements for data exchange are very stringent.
Read more in the White Paper on 6G Drivers and the United Nations SDGs.
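To make the microservices idea concrete, here is a minimal sketch of a single edge-resident inference microservice. It is only an assumption-laden illustration: the Flask framework, the /infer and /healthz endpoints and the toy infer function are chosen for brevity and are not part of the architectures described above.

```python
# Minimal sketch of one edge microservice (illustrative only; Flask is an
# arbitrary choice made here for brevity).
from flask import Flask, jsonify, request

app = Flask(__name__)


def infer(payload: dict) -> dict:
    """Placeholder 'model': a real deployment would serve a compact model
    close to the data source to keep round-trip latency low."""
    values = payload.get("values", [])
    return {"mean": sum(values) / len(values)} if values else {"mean": None}


@app.route("/healthz", methods=["GET"])
def health():
    # Lets an orchestrator at the edge restart failed instances.
    return jsonify(status="ok")


@app.route("/infer", methods=["POST"])
def infer_endpoint():
    # Data is processed at the edge node, so raw samples need not leave the
    # local network, which supports the privacy goals described above.
    return jsonify(infer(request.get_json(force=True)))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```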


We will also develop new distributed learning mechanisms that allow algorithms to run on edge servers, user terminals and other devices with limited data, while providing strong robustness against device and link failures.
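A common pattern for such mechanisms is federated-style training, in which devices compute local updates and an aggregator averages whatever arrives. The sketch below is a toy, hypothetical version: the least-squares objective, the failure_prob parameter and the unweighted averaging are assumptions used only to show how training can proceed when some devices or links drop out.

```python
# Illustrative sketch of federated averaging that tolerates device/link
# failures; parameters and function names are assumptions, not project code.
import random

import numpy as np


def local_update(global_weights: np.ndarray, local_data: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One gradient step on a toy least-squares objective."""
    # Gradient of 0.5 * ||w - mean(local_data)||^2
    grad = global_weights - local_data.mean(axis=0)
    return global_weights - lr * grad


def federated_round(global_weights, client_datasets, failure_prob=0.3):
    """Aggregate updates only from the clients that respond this round."""
    updates = []
    for data in client_datasets:
        if random.random() < failure_prob:
            continue  # device offline or link failed; skip it this round
        updates.append(local_update(global_weights, data))
    if not updates:
        return global_weights  # no reachable clients; keep the previous model
    return np.mean(updates, axis=0)  # simple FedAvg-style aggregation


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = [rng.normal(loc=1.0, scale=0.5, size=(20, 3)) for _ in range(10)]
    w = np.zeros(3)
    for _ in range(50):
        w = federated_round(w, clients)
    print("Learned weights:", w)  # should approach roughly [1, 1, 1]
```

Because each round simply averages the updates that arrive, a failed device delays convergence slightly but never blocks training, which is the robustness property the text above refers to.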

Key Publications

Deep Learning for Generic Object Detection: A Survey

Li Liu; Wanli Ouyang; Xiaogang Wang; Paul Fieguth; Jie Chen; Xinwang Liu; Matti Pietikäinen. 3/2020. International Journal of Computer Vision.

3D Skeletal Gesture Recognition via Hidden States Exploration

Xin Liu; Henglin Shi; Xiaopeng Hong; Haoyu Chen; Dacheng Tao; Guoying Zhao. 2/2020. IEEE Transactions on Image Processing.

Are we together or not? The temporal interplay of monitoring, physiological arousal and physiological synchrony during a collaborative exam

Jonna Malmberg; Eetu Haataja; Tapio Seppänen; Sanna Järvelä. 11/2019. International Journal of Computer-Supported Collaborative Learning.
Key Researchers

Olli Silven, Researcher

Madhusanka Liyanage, Researcher