Important Notice:

- Jun Tani’s keynote (KN5) will be held on Aug 26th at 17:00 Beijing time (11:00 CEST) on ZOOM.

KN1 - Toward Emotionally Intelligent and Self-Evolving Robots.

Zhengyou Zhang

Prof. Zhengyou Zhang is a Distinguished Scientist and the Director of Tencent AI Lab and Robotics X, Tencent, China. His research interests include artificial intelligence, robotics, computer vision, speech recognition, multimedia systems, signal processing, and developmental learning.

Talk: Toward Emotionally Intelligent and Self-Evolving Robots.

Abstract: With the rapid progress in computing and sensory technologies, we will enter the era of human-robot coexistence in the not-too-distant future. To survive in an ever dynamic and uncertain world, a robot needs to be emotionally intelligent and able to evolve over time through interacting with the environment, human beings, and other robots. Toward that goal, I propose an A-to-G theory describing seven research directions we need to pursue. A for AI: a robot needs vision, speech, and NLP, and must make decisions. B for Body: different embodiments enable different mechanical skills, and ideally the embodiment can transform. C for Control: a robot needs to control its body accurately. A, B, and C form the basic capabilities of a robot. D for Developmental Learning: a robot needs to develop skills through interaction with the environment, human beings, and other robots. E for Emotional Intelligence: in order to coexist with humans, a robot should be able to understand human emotions, express emotions appropriately, and carry on deep conversations. F for Flexible Manipulation: a robot needs to manipulate objects with flexibility and accuracy in order to accomplish physical tasks. G for Guardian Angel: a robot should be able to protect you whether it is nearby or not, through connection with the environment and the cloud.


KN2 - What does it mean to share attention and knowledge?

Malinda Carpenter

Prof. Malinda Carpenter is with the School of Psychology and Neuroscience, University of St Andrews, UK. Her research areas include infants' and young children's participation in shared activities, their prosocial and affiliative behavior, and their understanding of others' mental states. She also investigates the differences between ape and human social cognition.

Talk: What does it mean to share attention and knowledge?

Abstract: The abilities to share attention and knowledge with others are among the most important and fundamental social skills that human children develop. The vast majority of developmental research in this area focuses on when children develop joint attention, common ground, and common knowledge. Here I take a step back and discuss what it actually means to share attention and knowledge with others. First, I present a recent theoretical framework (Siposova & Carpenter, 2019) that distinguishes different levels of social attention and knowledge and highlights the key role of second-personal interactions and communication. Then I present a series of studies showing some of the many benefits of shared attention, common ground, common knowledge, and shared experiences for effective communication, increasing prosocial behavior, promoting joint commitments, and (depending on the results of an ongoing study) improving online interactions. Creating robots that can truly share attention and knowledge (and other mental states, like goals) would revolutionize social robotics and make both human-robot and robot-robot interaction much more ‘human.’


KN3 - From Intelligence to Creativity.

Takayuki Nagai

Prof. Takayuki Nagai is with Osaka University, Japan. His research interests include developmental robotics, robot learning, cognitive architectures, machine learning, and language and robotics.

Talk: From Intelligence to Creativity.

Abstract: Building robots with human-like flexible intelligence is one of the major goals of cognitive robotics. Our group has been working toward this goal by self-organizing the multimodal experience of robots using probabilistic generative models. In the first half of this talk, these attempts are briefly introduced. Such attempts gradually converge on the idea of “world models”, and the new question that arose in these studies is: can robots eventually become entities that create new value based on their world models? In other words, can robots be creative? In the second half of this talk, I discuss the possibility of extending the world-model-based intelligence we have pursued so far to creative intelligence.


KN4 - Learning from Human-Robot Interaction.

Dana Kulić

Prof. Dana Kulić conducts research in robotics and human-robot interaction (HRI), developing autonomous systems that operate in concert with humans, using natural and intuitive interaction strategies while learning from user feedback to improve and individualize operation over long-term use. Dana Kulić received the combined B.A.Sc. and M.Eng. degree in electro-mechanical engineering and the Ph.D. degree in mechanical engineering from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2006 to 2009, Dr. Kulić was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan. In 2009, Dr. Kulić established the Adaptive Systems Laboratory at the University of Waterloo, Canada, conducting research in human-robot interaction, human motion analysis for rehabilitation, and humanoid robotics. Since 2019, Dr. Kulić has been a Professor and Director of Monash Robotics at Monash University, Australia. Dr. Kulić holds an Australian Research Council Future Fellowship. Her research interests include robot learning and human-robot interaction.

Talk: Learning from Human-Robot Interaction.

Abstract: Robots working in human environments need to learn from and adapt to their users. In this talk, I will describe the challenges of robot learning during human-robot interaction: What should be learned? How can a user effectively provide feedback and input? I will illustrate these challenges with examples of robots in different roles and applications, including rehabilitation, collaboration in industrial and field settings, and education and entertainment.


KN5 - Exploring Robotic Minds Using the Concepts of Predictive Coding and Active Inference.

Jun Tani

Jun Tani received the D.Eng. degree from Sophia University, Tokyo, in 1995. He started his research career with Sony Computer Science Lab in 1993. He became a Team Leader of the Laboratory for Behavior and Dynamic Cognition, RIKEN Brain Science Institute, Saitama, Japan, in 2001, and a Full Professor with the Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 2012. He is currently a Full Professor with the Okinawa Institute of Science and Technology, Okinawa, Japan. His current research interests include cognitive neuroscience, developmental psychology, phenomenology, complex adaptive systems, and robotics. He is the author of “Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena,” published by Oxford University Press in 2016.

Talk: Exploring robotic minds using the concepts of predictive coding and active inference.

Abstract: The focus of my research has been to investigate how cognitive agents can acquire structural representation via iterative interaction with the world, exercising agency and learning from the resultant perceptual experience. For this purpose, my team has investigated various models analogous to the predictive coding and active inference frameworks. For the last two decades, we have applied these frameworks to develop cognitive constructs for robots. I will explain how analysis of emergent phenomena observed in robotic experiments informs cognitive mechanisms for the development of compositionality and hierarchy in generating goal-directed behaviors, primary intersubjectivity in social cognition, and phenomenological consciousness.