This task aims to extend the "acoustic packaging" hypothesis of Hirsh-Pasek and Golinkoff (1996) to language learning experiments with children and robots. The hypothesis states that a concurrent speech or sound signal may help infants attend to particular units within the action stream. We extend existing analysis methods by applying automatic tracking and classification tools, such as gesture tracking and intonation classification, to multi-modal interaction data. These new methods allow us to analyse quantitatively the synchrony between verbal utterances and action, which is the basis of acoustic packaging. From these analyses we will derive computational models of acoustic packaging that automatically analyse multi-modal interaction data and yield a semantic interpretation of the observed action in terms of goals, means and constraints. These models will be integrated into an interactive robotic system based on iCub in order to analyse how the robot’s reactions affect the tutor’s tutoring behaviour. Based on the evaluation of these models we will formulate a theoretical framework describing the effect of acoustic packaging on the learning of action and language.
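To illustrate the core idea of binding action units to co-occurring speech, the following minimal sketch groups motion segments with the utterances they temporally overlap. It is not the project's actual model: the interval representation, the `min_overlap` threshold, and all example values are assumptions made for illustration; in practice the speech and motion intervals would come from the utterance detection and gesture tracking components mentioned above.

```python
# Minimal sketch of acoustic packaging as temporal-overlap grouping.
# All interval data and the threshold below are hypothetical.

from typing import List, Tuple

Interval = Tuple[float, float]  # (start, end) in seconds


def overlap(a: Interval, b: Interval) -> float:
    """Length of the temporal overlap between two intervals, in seconds."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))


def acoustic_packages(
    speech: List[Interval],
    motion: List[Interval],
    min_overlap: float = 0.2,  # assumed binding threshold in seconds
) -> List[Tuple[Interval, List[Interval]]]:
    """For each utterance, collect the motion segments that overlap it
    by at least `min_overlap` seconds -- one candidate 'acoustic
    package' per utterance."""
    packages = []
    for utterance in speech:
        bound = [m for m in motion if overlap(utterance, m) >= min_overlap]
        if bound:
            packages.append((utterance, bound))
    return packages


if __name__ == "__main__":
    # Hypothetical example: two utterances, three motion segments.
    speech = [(0.0, 1.8), (2.5, 4.0)]
    motion = [(0.2, 1.5), (1.9, 2.4), (2.6, 3.9)]
    for utterance, segments in acoustic_packages(speech, motion):
        print(f"utterance {utterance} packages motion segments {segments}")
```

Under these assumptions, the first utterance binds the first motion segment and the second utterance binds the third, while the motion segment occurring in the pause between utterances remains unpackaged.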
For further information contact Dipl.-Inform. Lars Schillingmann.