The 56th 創成塾 (Sousei-juku)

[2012.11.22]


The 56th 創成塾 will be held on Tuesday, November 27, 2012, from 18:00 to 20:00 in Room U2-213, Graduate School of Engineering, Suita Campus.

Program:
18:00-18:40 Lars Schillingmann (Bielefeld University, PhD student)
18:40-18:50 Q&A
18:50-19:30 Hidenobu Sumioka (ATR, Researcher)
19:30-19:40 Q&A
19:40-20:00 Panel discussion

Access Map to U2-211

Talk 1: Lars Schillingmann (Bielefeld University, PhD student)
「A Computational Model of Acoustic Packaging」

Action and language learning in robotics requires flexible methods, since it is not possible to predetermine all possible tasks a robot will be involved in. Future systems need to be able to acquire this knowledge through communication with humans. Children are able to learn new actions although they have limited experience with the events they observe. More specifically, they seem able to identify which parts of an action are relevant and to adapt this newly acquired knowledge to new situations. Typically, this does not happen in isolation but in interaction with an adult. In these interactions, multiple modalities are used concurrently and redundantly. Research on child development has shown that the temporal relations between events in the acoustic and visual modalities have a significant impact on how this information is processed. Specifically, synchrony between action and language is assumed to be beneficial for finding relevant parts of an action demonstration and extracting initial knowledge from it. This idea was proposed by Hirsh-Pasek and Golinkoff (1996) as acoustic packaging. They suggest that acoustic information, typically in the form of narration, overlaps with action sequences and provides infants with a bottom-up guide for attending to relevant parts and finding structure within them.

My talk covers the conception, further development, and implementation of a model inspired by this general idea. The resulting model of acoustic packaging is able to segment action demonstrations into multimodal units called acoustic packages. I will present evaluation results on a corpus of adult-adult and adult-child interactions within a cup-stacking scenario. The analyses focus on differences between the structure of child-directed and adult-directed interactions as well as on developmental trends. Furthermore, the presentation includes results on a corpus of adults interacting with a simulated robot, which are compared to the child-directed and adult-directed interactions. Additionally, I will report on tests with the iCub robot and on further analysis of adult-child interactions aimed at extracting semantic information about color terms.
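To make the segmentation idea concrete, here is a minimal Python sketch (not the model presented in the talk, which operates on audio and video signals): motion intervals are bundled with any narration interval they temporally overlap, forming toy acoustic packages. The Segment class and the interval values are purely illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    start: float  # seconds
    end: float

def overlaps(a: Segment, b: Segment) -> bool:
    # Two time intervals overlap if each starts before the other ends.
    return a.start < b.end and b.start < a.end

def acoustic_packages(speech: List[Segment], motion: List[Segment]) -> List[dict]:
    # Bundle each narration interval with the motion intervals it overlaps.
    packages = []
    for s in speech:
        bundled = [m for m in motion if overlaps(s, m)]
        if bundled:
            packages.append({"speech": s, "motion": bundled})
    return packages

# Hypothetical example: two narration intervals over a cup-stacking demonstration.
speech = [Segment(0.0, 2.0), Segment(3.5, 5.0)]
motion = [Segment(0.5, 1.8), Segment(1.9, 2.4), Segment(4.0, 4.8)]
print(acoustic_packages(speech, motion))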

Talk 2: Hidenobu Sumioka (ATR, Researcher)
「Body as computational resource」

Although the brain contributes much of the computation underlying our cognition and behavior, part of this computation can be offloaded to the structure of the body. In this talk, I introduce a simple model of the human musculoskeletal system to identify the computation that a compliant physical body can achieve. A one-joint system driven by the actuation of springs around the joint is used as a computational device that computes temporal integrations and nonlinear combinations of an input signal. Only a linear, static readout unit is needed to extract the output of the computation. The results of computer simulations indicate that a network of mechanically coupled springs can emulate several nonlinear combinations that require temporal integration. A simulation with a two-joint system also shows that, thanks to the mechanical connection between the joints, a distal part of a compliant body can serve as a computational device driven indirectly by the input. Finally, the computational capability of antagonistic muscles and information transfer through mechanical couplings are discussed.
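As a rough illustration of the pipeline described above (drive a compliant body with an input signal, then train only a linear, static readout on the body state), here is a minimal Python sketch. It replaces the mechanically coupled musculoskeletal model of the talk with a bank of independent nonlinear spring-damper units, so the parameters, the target function, and the resulting fit are illustrative assumptions rather than results from the talk.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical compliant "body": M independent nonlinear spring-damper units
# (a simplified stand-in for the mechanically coupled spring network in the talk).
M, T, dt = 20, 5000, 0.01
k1 = rng.uniform(5.0, 50.0, M)    # linear stiffness per unit
k3 = rng.uniform(10.0, 200.0, M)  # cubic (nonlinear) stiffness per unit
d = rng.uniform(0.5, 3.0, M)      # damping per unit

# Piecewise-constant random input: the commanded rest position of every spring.
u = np.repeat(rng.uniform(-1.0, 1.0, T // 25 + 1), 25)[:T]

x = np.zeros((T, M))              # recorded positions: the "body state" the readout sees
pos, vel = np.zeros(M), np.zeros(M)
for t in range(1, T):
    stretch = pos - u[t - 1]
    acc = -k1 * stretch - k3 * stretch**3 - d * vel  # nonlinear spring + damper, unit mass
    vel = vel + dt * acc                             # semi-implicit Euler step
    pos = pos + dt * vel
    x[t] = pos

# Linear, static readout: the only trained part of the system.
target = np.r_[np.zeros(50), u[:-50] * u[25:-25]]    # product of two delayed copies of the input
X = np.column_stack([x, np.ones(T)])
w, *_ = np.linalg.lstsq(X[:4000], target[:4000], rcond=None)
mse = np.mean((X[4000:] @ w - target[4000:]) ** 2)
print(f"held-out MSE: {mse:.4f} (target variance: {target.var():.4f})")

The fit obtained from such a small, uncoupled toy body is necessarily rough; the point of the sketch is only the division of labor, with all nonlinearity and memory supplied by the body dynamics and only a linear map being learned.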
