The 55th 創成塾 Seminar

[2012.11.07]


The 55th 創成塾 seminar will be held on Tuesday, November 13, 2012, from 18:00 to 20:00 at Sigma Hall, School of Engineering Science, Toyonaka Campus.

Program:
18:00-19:00 Takashi Ikegami (Professor, Graduate School of Arts and Sciences, University of Tokyo)
19:00-19:15 Q&A
19:15-19:45 Vladimir Lumelsky (Professor, University of Wisconsin-Madison)
19:45-20:00 Q&A

Venue: Σ Hall, International Building, Graduate School of Engineering Science

Lecture 1: Takashi Ikegami (Professor, Graduate School of Arts and Sciences, University of Tokyo)
「MDF and Messy Minds」

I have recently translated “Being There” by Andy Clark into Japanese. The book was originally published in 1993, and I should say that some new concepts have been developed since then, notably massive data flow (MDF).
First, I will discuss what Andy wrote in his book and how I responded to it. Then, based on my recent studies on the web default mode network and other examples, I would like to discuss what I mean by massive data flow (MDF) and present a new approach to understanding real minds in artificial “brains”.
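
As quantitative background for the web studies mentioned above: the first reference below characterizes autonomy in the web via transfer entropy. In Schreiber's standard formulation, simplified here to a single step of history for readability (the paper's exact estimator may differ), the transfer entropy from a source process Y to a target process X is

T_{Y \to X} = \sum_{x_{t+1}, x_t, y_t} p(x_{t+1}, x_t, y_t) \log \frac{p(x_{t+1} \mid x_t, y_t)}{p(x_{t+1} \mid x_t)},

that is, the reduction in uncertainty about the next state of X obtained by also knowing the present state of Y. The measure is asymmetric in X and Y, so it can, in principle, separate the directed flow from users to a web service from the flow in the opposite direction, the kind of high-dimensional data flow between people and their environment that the abstract refers to as MDF.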

References:
- Mizuki Oka and Takashi Ikegami: Characterizing Autonomy in the Web via Transfer Entropy, in Proceedings of ALIFE 13, Michigan, USA, 2012.
- Takashi Ikegami, Mizuki Oka, Norihiro Maruyama, Akihiko Matsumoto, and Yu Watanabe: Sensing the Sound Web, Art Gallery at the 5th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2012.
- Takashi Ikegami and Mizuki Oka: Toward a Science of Massive Data Flow: On the Generation and Analysis of High-Dimensional Data Flows between Humans and Their Environment, Editorial, Journal of the Japanese Society for Artificial Intelligence, Vol. 27, No. 4, pp. 389-395, 2012 (in Japanese).

Lecture 2: Vladimir Lumelsky (Professor, University of Wisconsin-Madison)
「Human-Robot Interaction and Whole-Body Robot Sensing」

Applications that need robots operating in uncertain environments, or that require close human-robot interaction, are in great demand. Examples include robots preparing the Mars surface for human arrival; robots for assembly of large space telescopes; robot helpers for the elderly; and robotic search and disposal of war mines. Advances in this area, while impressive, have been slow to appear. The difficulties are multiple, on both the robotic and the human side: robots have a hard time adjusting to unstructured tasks, while human cognition has serious limits in manipulating 3D motion. As a result, applications where robots operate near humans, or far away from them, are exceedingly rare. The way out of this impasse is to supply the robot with whole-body sensing, plus the related intelligence: an ability to sense surrounding objects over the robot's whole body and to utilize this data in real time. This calls for large-area flexible sensor arrays, a sensitive skin covering the whole robot body. Whole-body sensing brings interesting, even unexpected, properties: robots become inherently safe; human operators can move them fast, with “natural” speeds; the resulting robot motion strategies exceed human spatial reasoning skills; natural synergy of human-robot teams becomes realistic; and a mix of supervised and unsupervised operation becomes possible. We will review the algorithmic, cognitive science, hardware (materials, electronics, computing), and control issues involved in realizing such systems.
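
To make "utilize this data in real time" concrete, here is a minimal sketch, not taken from the talk: a toy Python controller for a planar 3-link arm whose "sensitive skin" is simulated by sampling points along each link. Every name and number in it (skin_readings, SAFE_DIST, the 0.3 m links, the single obstacle) is an illustrative assumption, not the speaker's actual system.

# Illustrative sketch only (assumptions, not the speaker's system): a planar
# 3-link arm whose "sensitive skin" is simulated by sampling points along each
# link and measuring their distance to known obstacle positions.
import numpy as np

N_JOINTS = 3          # planar 3-link arm (assumed)
N_SKIN_PER_LINK = 8   # proximity cells per link (assumed)
LINK_LEN = 0.3        # link length in metres (assumed)
SAFE_DIST = 0.10      # distance at which slowdown begins, metres (assumed)

def skin_readings(joint_angles, obstacles):
    """Simulated whole-body sensing: distance from sample points spread along
    each link to the nearest obstacle. A real sensitive skin would report such
    distances directly from large-area flexible sensor arrays."""
    points = []
    base = np.zeros(2)
    heading = 0.0
    for q in joint_angles:
        heading += q
        direction = np.array([np.cos(heading), np.sin(heading)])
        for s in np.linspace(0.0, 1.0, N_SKIN_PER_LINK):
            points.append(base + LINK_LEN * s * direction)
        base = base + LINK_LEN * direction
    points = np.array(points)                              # (N_JOINTS*N_SKIN_PER_LINK, 2)
    dists = np.linalg.norm(points[:, None, :] - obstacles[None, :, :], axis=2)
    return dists.min(axis=1)                               # closest obstacle per skin cell

def blended_command(joint_angles, dq_operator, obstacles):
    """Scale the operator's joint-velocity command by how close each link's
    skin cells are to an obstacle: full speed when clear, zero at contact."""
    d = skin_readings(joint_angles, obstacles)
    d_per_link = d.reshape(N_JOINTS, N_SKIN_PER_LINK).min(axis=1)
    scale = np.clip(d_per_link / SAFE_DIST, 0.0, 1.0)
    return dq_operator * scale

if __name__ == "__main__":
    q = np.array([0.2, -0.3, 0.1])                 # current joint angles (rad)
    dq_cmd = np.array([0.5, 0.5, 0.5])             # operator command (rad/s)
    obstacles = np.array([[0.45, 0.15]])           # one sensed obstacle (m)
    print(blended_command(q, dq_cmd, obstacles))   # joints near the obstacle slow down

A real sensitive skin would feed such readings into full motion planning rather than a simple per-link velocity scaling, but even this crude blending hints at why whole-body sensing makes a robot inherently safe to drive at "natural" operator speeds.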
