MC4 Adaptive language grounding in robots: processing, learning and evolution


Natural language interaction between humans and robots (or, more broadly, autonomous intelligent systems such as self-driving cars) remains one of the biggest challenges of AI, mainly because it requires the integration of sophisticated components for vision and motor control, speech, parsing and production of language, interaction through dialog, and grounded semantics. All these components should ideally acquire content through machine learning and must remain adaptive to changing contexts, goals and interlocutors. This course focuses on how to achieve adaptive grounded Natural language processing.

The course examines representational languages for interaction scripts and for the semantics and grammar of Natural language that enable robots not only to interact with humans and one another, but also to learn and adapt language to their own needs. In particular, we examine machine learning algorithms for Natural language, grounded procedural semantics and grammar induction.
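To give a flavor of the learning problem, the following toy sketch shows cross-situational word learning under referential uncertainty (the names, scenes and property symbols are invented for illustration; the course covers the actual algorithms): a learner hears a word while observing a scene of candidate referents, cannot know which property the word denotes, and so accumulates word/property co-occurrence counts across situations.

```python
# Toy cross-situational word-learning sketch (illustrative only).
# Objects are represented as sets of symbolic properties; the learner's
# hypothesis for a word is the property it co-occurred with most often.
from collections import defaultdict

class WordLearner:
    def __init__(self):
        # counts[word][prop] = number of co-occurrences across situations
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, word, scene):
        """scene: list of objects, each a set of property symbols."""
        for obj in scene:
            for prop in obj:
                self.counts[word][prop] += 1

    def meaning(self, word):
        """Current best hypothesis: the most frequent co-occurring property."""
        hypotheses = self.counts[word]
        return max(hypotheses, key=hypotheses.get) if hypotheses else None

learner = WordLearner()
# Three situations; only "red" is present every time "rosso" is heard.
learner.observe("rosso", [{"red", "ball"}, {"blue", "cube"}])
learner.observe("rosso", [{"red", "cube"}])
learner.observe("rosso", [{"red", "ball"}])
print(learner.meaning("rosso"))  # prints "red"
```

Real robots face the harder version of this problem: properties are not symbolic but must first be carved out of high-dimensional, continuous perceptual channels.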

To illustrate the main points at a technical level, the course uses case studies from a number of different domains, in particular reference to objects based on their properties, producing and understanding action commands, and spatial and temporal descriptions of situations.
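The object-reference domain also illustrates the idea behind grounded procedural semantics. In a deliberately simplified sketch (the operation names and scene representation are invented for illustration), an utterance like "the red ball" maps to a small program of cognitive operations that is executed against the robot's perceived scene to find the referent:

```python
# Minimal sketch of grounded procedural semantics (simplified illustration).
def filter_by(objects, prop):
    """Cognitive operation: keep objects exposing the given property."""
    return [o for o in objects if prop in o["properties"]]

def select_unique(objects):
    """Cognitive operation: succeed only if exactly one candidate remains."""
    return objects[0] if len(objects) == 1 else None

# Perceived scene: objects built from (simulated) vision, with symbolic
# properties assumed to be extracted from continuous sensory channels.
scene = [
    {"id": "obj-1", "properties": {"red", "ball"}},
    {"id": "obj-2", "properties": {"green", "ball"}},
    {"id": "obj-3", "properties": {"red", "cube"}},
]

# Program for "the red ball": filter by "red", then by "ball", then
# require a unique referent. Executing the program grounds the meaning
# of the utterance in the current scene.
referent = select_unique(filter_by(filter_by(scene, "red"), "ball"))
print(referent["id"])  # prints "obj-1"
```

The key point is that meaning is procedural: the same program yields different referents in different scenes, and fails gracefully (returns no referent) when the description is ambiguous or does not apply.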

The course consists mainly of lectures, but interested participants will receive ample pointers to accompanying software (source code) and exercises. We will also bring a robot head to experiment with.


Course objectives

To understand the state of the art of language processing and language learning on robots, i.e., physical systems interacting with the real world.

Methodological: to be able to model Natural language processing and learning, and to implement systems capable of interacting with humans and other robots using Natural language in the real world.


Spranger, M. and Beuls, K. (2016). Referential uncertainty and word learning in high-dimensional, continuous meaning spaces. In Proceedings of the 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). IEEE.
Spranger, M. (2015). Procedural semantics for autonomous robots - a case study in locative spatial language. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE.
Spranger, M. (2013). Evolving grounded spatial language strategies. KI - Künstliche Intelligenz, 27(2):97–106.
Spranger, M., Pauw, S., Loetzsch, M., and Steels, L. (2012). Open-ended Procedural Semantics. In Steels, L. and Hild, M., editors, Language Grounding in Robots, pages 153–172. Springer.

Course location


Course requirements


Instructor information

Instructor's name

Michael Spranger


cf. website


Michael Spranger received a PhD in Computer Science from the Vrije Universiteit Brussel (Belgium) in 2011. During his PhD he was a researcher at Sony CSL Paris (France). He then worked in the R&D department of Sony Corporation in Tokyo (Japan) for almost two years. He is currently a researcher at Sony Computer Science Laboratories, Inc. (Tokyo, Japan). Michael is a roboticist by training with extensive experience in research on and construction of autonomous systems, including robot perception, world modeling and behavior control. After his undergraduate degree he fell in love with the study of language and has since worked on different language domains, from action language and posture verbs to time, tense, determination and spatial language. His work focuses on artificial language evolution, machine learning for NLP (and applications), developmental language learning, computational cognitive semantics and construction grammar.