HI Assistant Professor Haoyu Chen: “My vision is to enable AI that can cooperate with people naturally”

Assistant Professor Haoyu Chen studies how subtle human behaviours, such as micro‑gestures and spontaneous body cues, can help shape the next generation of human‑centred artificial intelligence. His work at the University of Oulu bridges cognition and computation, bringing together machine learning, cognitive science, and behavioural research to explore how humans and AI can grow smarter together.
Portrait of Haoyu Chen

Haoyu Chen, originally from China, is currently a tenure-track Assistant Professor in the Hybrid Intelligence programme at the University of Oulu, working at the Center for Machine Vision and Signal Analysis (CMVS). His academic background spans machine learning, computer vision, human behaviour understanding, and multimodal cognition modelling.

“Over the years, my work has increasingly focused on human-centred AI, particularly on decoding subtle body movements and micro-gestures as windows into cognition and emotion.”

Understanding micro‑gestures with Sentient AI

At the heart of Chen’s work is Sentient AI—systems capable of perceiving, interpreting and responding to subtle human signals. His research includes building new datasets, designing multimodal large language models, and developing frameworks for interpretable, human-aligned prediction and reasoning.

“Specifically, I study micro-gestures, spontaneous body cues, and multimodal behavioural data to understand the underlying cognitive and affective dynamics.”

The research aims toward AI that doesn’t just react but understands the context around human behaviour and is sensitive to nuance, something humans do naturally but machines still struggle with.

The key challenge is capturing high‑quality behavioural data in natural, real-world environments

Among the challenges in Chen’s work are capturing high‑quality behavioural data in natural, real-world environments, linking behaviour to cognition, and designing models that remain interpretable without sacrificing performance. All of this requires interdisciplinary methods.

“We collaborate closely with experts in psychology and neuroscience while developing new multimodal datasets and model architectures that embed cognitive structure.”

The goal is to achieve the best possible interpretability, as humans need to understand why an AI system makes certain predictions. To support this, researchers aim to incorporate cognitive structure into AI models rather than relying solely on brute‑force learning—which would involve exhaustively testing all potential solutions and is unsuitable when interacting with humans.

One of the most intriguing questions in the field right now, according to Chen, is something that might put a smile on one’s face. The question is: how to make AI as intelligent as cats and dogs?

"This question arises because we humans, like your friends cats and dogs, are ultimately animals evolved in nature for billions of years. We excel at reading subtle, low-level emotional and bodily cues—posture, micro-movements, and changes in tension—without explicit language."

In Chen’s research, micro-gestures and fine-grained expressions represent this missing layer in AI: grounding intelligence in embodied, intuitive emotion understanding rather than abstract symbol manipulation alone.

Hybrid Intelligence creates a space to bridge human cognition and computational intelligence

To put the term hybrid intelligence simply, for Chen it is about creating systems where humans and AI amplify each other’s strengths. “Humans are great at intuition, empathy, and creativity; AI is great at processing huge amounts of information. Hybrid Intelligence combines the two, human qualities and AI, so that both can learn from one another and work together seamlessly.”

Hybrid Intelligence aligns very closely with Chen’s long-term vision: to bridge human cognition and computational intelligence. He joined the programme because it brings together experts from psychology, cognitive science, computer science, and design—exactly the ecosystem required for understanding how humans and AI can complement each other.

“Working with the team at Uni Oulu has been inspiring; the openness to new ideas and cross-disciplinary collaboration is exceptional.”

Multidisciplinarity is already deeply embedded in Chen’s current projects: his team integrates cognitive theories, behavioural studies, and AI modelling to understand how humans express emotions and intentions through micro-behaviours.

Over the coming years, Chen hopes the HI programme will help position Oulu as a major hub for human‑centred AI research. His goals include building foundational datasets and theoretical frameworks that support understanding of human intentions and emotions.

“My long-term vision is to enable AI that can cooperate with people naturally—whether in healthcare, education, well-being, or creative applications—and to deepen our scientific understanding of how humans think, feel, and interact.”

Outside research, Chen enjoys activities that strengthen his creative and reflective side as well as group sports: writing, basketball, dancing, and reading.
