Informatics events
Rutherford Fellow seminar: Multi-modal interaction for robotics and cognitive systems
Friday 11 January 11:00 until 12:00
Room 230, ARUNDEL
Speaker: Dr Junpei Zhong
Abstract: In this talk, I extend the concept of multi-modal learning to interaction in socially assistive robotics. I argue that we should exploit different sensor modalities, the execution of robotic behaviours, and contextual information within hierarchical learning architectures to train robots. This mirrors the development of the human brain, which works as a predictive sensorimotor machine using perception-action integration and prior knowledge. Only on this basis can we build a system that thinks similarly to a human being and can maintain long-term, harmonious relationships with its users. I will introduce three research topics I have been working on in the socially assistive robotics domain: multi-modal language learning, multi-modal interaction, and multi-modal predictive models.
About the speaker:
Junpei “Joni” Zhong is a Rutherford Fellow visiting Sussex. He is currently a research scientist at the National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan. He received his BEng from the South China University of Technology in 2006, his MPhil from the Hong Kong Polytechnic University in 2010, and his PhD ("with great distinction") from the University of Hamburg in 2015. Since 2014, he has participated in several projects in Germany, the UK, and Japan. His research interests are socially assistive robotics, machine learning, and cognitive robotics. He was awarded an EU Marie Curie Fellowship from 2010 to 2013.
The seminar will be followed by a free lunch. No registration is required. All are welcome!
By: Yanan Li
Last updated: Tuesday, 18 December 2018