James and I
James' new hand
James' head (Philips iCat)

Short CV

Dr. Manuel Giuliani is a senior scientist at the Center for Human-Computer Interaction, Department of Computer Sciences, University of Salzburg, where he leads a group for human-robot interaction. He received a Master of Arts in computational linguistics from Ludwig-Maximilians-Universität München, a Master of Science in computer science from Technische Universität München, and a PhD in computer science from Technische Universität München. He worked on the European projects JAST (Joint Action Science and Technology) and JAMES (Joint Action for Multimodal Embodied Social Systems). Currently, Manuel is involved in the European project ReMeDi (Remote Medical Diagnostician) and the Christian Doppler Laboratory "Contextual Interfaces". His research interests include human-robot interaction, social robotics, natural language processing, multimodal fusion, multimodal output generation, and robot architectures.

Research interests

The two general topics of my research are human-robot interaction and social robotics. Within these, I am mainly interested in multimodal fusion: every robot that is built to interact with humans needs to be able to combine information from several input channels, for example speech and gesture recognition, into a single interpretation.
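To make the idea concrete, here is a minimal, purely illustrative sketch of such a fusion step; the data structures and names are invented for this example and do not correspond to the actual JAST or JAMES implementations.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SpeechHypothesis:
        """Result of speech recognition, e.g. for "please take this"."""
        action: str                # requested action, e.g. "take"
        referent: Optional[str]    # object named explicitly, or None for "this"
        confidence: float

    @dataclass
    class GestureHypothesis:
        """Result of gesture recognition, e.g. a pointing gesture."""
        pointed_object: Optional[str]  # object selected by the deictic gesture
        confidence: float

    @dataclass
    class FusedCommand:
        action: str
        target: str
        confidence: float

    def fuse(speech: SpeechHypothesis, gesture: GestureHypothesis) -> Optional[FusedCommand]:
        """Combine both channels: the gesture resolves an underspecified referent."""
        target = speech.referent or gesture.pointed_object
        if target is None:
            return None  # neither channel identified an object
        # Naive confidence combination; real systems weight the channels differently.
        return FusedCommand(speech.action, target, min(speech.confidence, gesture.confidence))

    # "Please take this" + pointing at the green cube -> take(green_cube)
    command = fuse(SpeechHypothesis("take", None, 0.9), GestureHypothesis("green_cube", 0.8))
    print(command)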

Manuel with JAST robot

In this picture you can see me together with the JAST robot. I am pointing at a green cube while saying "please take this". For this simple interaction the robot needs to be able to understand language and gestures. Furthermore, it has to recognise the object I am pointing at, and it needs further information about that object, for example whether it can pick it up with its grippers.

But it does not stop there. For a meaningful interaction, the robot needs even more abilities: it needs to know whether picking up the green cube is useful in the given situation. Maybe the cube does not fit into the current assembly plan that the robot and I are following. How should the robot react then? Should it just pick up the green cube, even though it knows that this is wrong? Or should it risk annoying me and simply tell me that the green cube is not needed for the current building step?
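Continuing the illustration, the following toy sketch shows what such a plan check could look like; the plan format and function names are again invented for this example, not taken from an actual project system.

    from collections import namedtuple

    Command = namedtuple("Command", ["action", "target"])

    def next_needed_parts(assembly_plan, completed_steps):
        """Parts required by the first step of the plan that is not finished yet."""
        for step, parts in assembly_plan:
            if step not in completed_steps:
                return set(parts)
        return set()

    def decide(command, assembly_plan, completed_steps):
        """Either execute the requested action or explain why it does not fit the plan."""
        needed = next_needed_parts(assembly_plan, completed_steps)
        if command.target in needed:
            return ("execute", command)
        return ("explain", f"The {command.target} is not needed for the current building step.")

    plan = [("base", ["red_bolt", "blue_slat"]),   # the first step does not use the cube
            ("tower", ["green_cube"])]             # the cube is only needed later
    print(decide(Command("take", "green_cube"), plan, completed_steps=set()))
    # -> ('explain', 'The green_cube is not needed for the current building step.')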

This simple example illustrates the different sub-topics I am working on in human-robot interaction. They are also reflected in my publications and in the projects I am involved in:

  • Social robotics. How should the robot react to a human's words and gestures? How do humans perceive the movements and actions of the robot? Together with my colleagues, I did research on how the words the robot says and the role it takes in the interaction affect how humans perceive the robot.
  • Knowledge representation. How can we represent the robot's knowledge about its human interaction partners and about its environment? In my PhD thesis as well as in my publications, I studied different ways to represent knowledge about human utterances as well as the robot's own actions (see the sketch after this list).
  • Robot architectures. I am also interested in research about the architectures and methods that have to be implemented to realise multimodal fusion in a human-robot interaction system.
  • Safety issues. Since robots are typically heavy machines that could harm humans, research about safety principles is indispensable for human-robot interaction.
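To give a flavour of the knowledge representation topic mentioned above, here is a minimal, purely illustrative frame-style sketch; the slot names are invented for this example and are not the representations used in my thesis or publications.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class Frame:
        """A generic frame: a typed bundle of named slots."""
        frame_type: str
        slots: Dict[str, str] = field(default_factory=dict)

    # Knowledge about a human utterance: who said what, with which intention.
    utterance = Frame("utterance", {
        "speaker": "human_1",
        "text": "please take this",
        "intention": "request",
        "referent": "green_cube",   # resolved via the accompanying pointing gesture
    })

    # Knowledge about a robot action: what the robot did, to which object, and why.
    robot_action = Frame("robot_action", {
        "action": "pick_up",
        "object": "green_cube",
        "motivation": "requested_by:human_1",
        "outcome": "success",
    })

    for frame in (utterance, robot_action):
        print(frame.frame_type, frame.slots)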

Current projects

ReMeDi logo

ReMeDi
Remote Medical Diagnostician

The ReMeDi project designs and implements a robot system for medical tele-examination of patients. Successful medical treatment depends on a timely and correct diagnosis, but the availability of doctors of various specializations is limited, especially in provincial hospitals or after regular working hours. The target use cases in ReMeDi are two of the most widely used techniques for physical examination: palpation and ultrasonography.

ReMeDi is funded under the EU FP7 IST Cognitive Systems programme from 2013 to 2016.
In ReMeDi, I work together with

Past projects

JAMES logo

JAMES
Joint Action for Multimodal Embodied Social Systems

The main goal of the JAMES project was to develop a robot that is able to work together with multiple humans while respecting social conventions and supporting multimodal input processing and output generation. To realise this goal, we used the bartender robot James, which you can see in the pictures above. The bar scenario enabled us to study situations in which socially appropriate behaviour is as important as task efficiency.

JAMES was funded under the EU FP7 IST Cognitive Systems programme from 2011 to 2014.
In JAMES, I worked together with

JAST logo

AudiComm
Audio for Communication and Environment Perception

AudiComm focused on high- and low-level sound and speech processing. The main goals of the project were to implement methods that find out who is speaking, where the speaker is located, and what the speaker is saying. The approaches for sound localization, speaker identification, and speech processing that were developed in AudiComm enabled robots to communicate with humans in more natural ways.

AudiComm was funded from 2009 to 2011 by the cluster of excellence CoTeSys.
In AudiComm, I worked together with

JAST logo

JAST
Joint Action Science and Technology

The goal of the JAST project was to build intelligent, autonomous agents that cooperate and communicate with their peers and with humans while working on a mutual task. For that, we built a robot which had a pair of manipulator arms with grippers, mounted in a position to resemble human arms, and an animatronic talking head capable of producing facial expressions, rigid head motion, and lip-synchronised synthesised speech. The robot was able to recognise and manipulate pieces of a wooden toy construction set called Baufix, which were placed on a table in front of the robot. A human and the robot worked together to assemble target objects from Baufix pieces, coordinating their actions through speech (English or German), gestures, and facial expressions.

JAST was funded under the EU FP6 IST Cognitive Systems programme from 2004 to 2009.
In JAST, I worked together with