Telluride Neuromorphic Cognition Engineering Workshop

Transcription
Towards natural and efficient
Human-Robot collaboration:
from the neurocognitive basis of joint action in humans to robotics
Estela Bicho et al
University of Minho, Portugal
[email protected]
http://www.dei.uminho.pt/pessoas/estela/
Telluride Neuromorphic Cognition Engineering Workshop 2014
People involved at UMinho:
Estela Bicho
Wolfram Erlhagen
Luís Louro
Rui Silva
Eliana Silva
Nzoji Hipólito
Toni Machado
Anwar Hussein
Albert Mukovskiy
Emanuel Sousa
Günter Westphal
Sergio Monteiro
Flora Ferreira
Fabian Chersi
Collaborations and acknowledgments:
Radboud University Nijmegen
Harold Bekkering
Ruud Meulenbroek
Hein Van Schie
Ellen de Bruin
Raymond Cuijpers
Roger Newman-Norlund
Institut für Neuroinformatik, Bochum
Gregor Schöner, Axel Steinhage
University of Parma
Leo Fogassi, Giacomo Rizzolatti
Technical University of Munich
Alois Knoll, Ellen Foster, Manuel Giuliani
University of Minho
Towards natural and efficient Human-Robot collaboration
Tentative outline
• Motivation?
• What makes a robot a socially intelligent assistant?
• How to achieve the goal of synthesizing such robots?
• Multidisciplinary projects and results obtained so far
• Scientific and technical relevance
Efficient Human-Human
Interaction/Collaboration
Video
(One of the six finalists for the IEEE IROS 2012 Jubilee video award)
Requirements for the robot:
Human-like social and cognitive capacities
• Action understanding
• Goal inference
• Anticipation
• Action/Error Monitoring
• Decision making in joint action:
to select an appropriate complementary behaviour that takes
into account the actions and intentions of the human partner
Our approach to natural and efficient
Human-Robot Collaboration?
"Hmm, what is he trying to build?"
"Give me the short slat, please."
• Neurodynamics and embodied view of (social) cognition
• Multidisciplinary approach:
• Cognitive psychology
• Neuroscience
• Mathematical modelling (of neural and behavioural data)
• Robotics
JAST – Joint Action Science and Technology
funded by the EC (ref. IST-2-003747-IP)

Cognitive basis of joint action (reaction time studies, …)
1. What are the perceptual, reasoning and action processes in humans that support joint action?
(Nijmegen Inst. for Cognition and Information; Max Planck Institute for Biological Cybernetics)

Neural bases of joint action (brain scanning: EEG, fMRI, ERN)
2. Which are the underlying brain structures involved in joint action?
(F.C. Donders Centre for Cognitive Neuroimaging, Nijmegen)

Mathematical modelling/theory (Dynamic Field Theory, nonlinear dynamics, optimization, information theory, …)
3. Can we build robot control architectures based on the neuro-cognitive mechanisms supporting joint action in humans?
(University of Minho; Technical University of Munich)

Joint action in autonomous robots – synthesis of socially intelligent robots: action understanding/intention reading, learning, goal-directed imitation, anticipation, error handling, coordination of decisions and actions
4. Will this improve the quality of human-robot interaction?
Joint Construction Task #1:
Toy vehicle scenario
Symmetric task
Assumptions:
• Human and robot know the construction plan
• Human and robot can perform assembly actions
• The spatial distribution of parts in the
workspace obliges each agent to hand over
pieces
• The logical order to assemble the object is not
unique
• No direct verbal communication/Instructions
Challenge:
Coordination of decisions and actions in time?
Joint Construction Task #2:
Asymmetric task
Baufix Scenario
Assumptions:
• Several Target Objects
• Which may be initially different for human
and robot
• Robot is not performing assembly actions
• Direct verbal communication / Instructions
Challenges:
1. Inference of immediate goal & conflict monitoring
2. Inference of final goal & conflict monitoring
3. Conflict handling
4. Integration of verbal communication
Efficient and fluent joint action performance
Requires that each team member:
• monitors the actions of the partner,
• interprets actions in terms of their outcomes/goals,
• detects/anticipates errors,
• uses the predictions to select adequate complementary behaviors,
• acts in anticipation of user needs
Interpretation of others’ actions: basic mechanisms
Bekkering, et al (2009); Newman-Norlund et al (2007 a,b); ...
• Action understanding (at different levels of the goal hierarchy) through motor
simulation/resonance is a possible mechanism: perceived actions are
automatically mapped onto corresponding motor representations of the observer
to predict the action effect.
• We usually care little about the surface behavior but interpret observed
actions in terms of their goals: what object is he/she going to grasp? And what for?
• Contextual information matters: a given action
can be organized by very different intentions.
• Motor resonance mechanisms also operate in joint action tasks
• Decision making in joint action: the inferred goal of the partner biases the
selection of a complementary action sequence
• Highly context sensitive mapping of observed actions onto selected actions
Action understanding through
motor simulation
First neurophysiological evidence:
Mirror neurons in the premotor cortex (F5)
• Actions able to trigger mirror neurons must be goal-directed.
• Mirror neurons encode the purpose of the movement and not the movement
details, e.g., independent of hand orientation or even the effector used.
• Degree of congruency may vary (e.g., precision vs. full grip).
• Motor vocabulary: “grasping”, “reaching”, “placing” etc.
(adapted from Rizzolatti et al, 2001)
Mirror Neurons in area PF/PFG:
code the (ultimate) goal of an observed action sequence
Goal / intention of the
‘reaching-grasping-placing’ sequence?
Visual responses (Task 1: action sequence observation task; goal: eating vs. put in container)
Motor responses (Task 2: motor sequence task)
(adapted from Fogassi et al, Science, 2005)
Action organization in the parietal cortex
• Neurons of the inferior parietal cortex appear to be organized in chains of
individual motor acts, each of which is aimed at a final action goal.
Goal-directed action sequences are represented by chains of mirror neurons
coding subsequent acts
• triggered by multi-modal input
(observed motor act, contextual
cues like object properties, verbal
utterances) (Fogassi et al, Science, 2005)
• Action-related speech activates the mirror system (e.g., Buccino et al., 2005)
(grasping for placing)
Dynamic Model of Joint Action
Implements a flexible mapping from observed actions (AOL) onto
complementary actions (AEL) as a dynamic process that integrates:
- shared task knowledge (CSGL)
- the action goal of the partner (IL), inferred via motor
simulation (layer ASL) (Erlhagen et al., 2006)
- contextual information (OML)
Action observation layer (AOL): observed motor primitives
ASL: goal-directed chains of motor primitives
(Bicho et al., 2010, 2011a,b)
8
Example of two goal-directed chains

AOL: visual description of observed motor primitives (reach, grasp, …)
ASL: goal-directed actions, each a sequence of motor primitives, e.g. two chains for the wheel:
reach - grasp wheel - plug
reach - grasp wheel - handover
IL: inferred goal
OML: objects
CSGL: shared task knowledge
AEL: selected complementary action
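To make the chain idea above concrete, here is a minimal, purely illustrative Python sketch of goal-directed chains and prefix-based goal inference; the chain names, the scoring rule and the context bias are assumptions for illustration, not the actual ARoS implementation:

```python
# Hypothetical sketch (not the ARoS code): goal-directed chains as
# sequences of motor primitives, with prefix matching plus a context
# bias (OML) used to infer the goal (IL) from observed primitives (AOL).

CHAINS = {
    "plug wheel":     ["reach", "grasp wheel", "plug"],
    "handover wheel": ["reach", "grasp wheel", "handover"],
}

def infer_goal(observed, context_bias):
    """Pick the goal whose chain best matches the observed prefix."""
    def score(goal):
        chain = CHAINS[goal]
        # count matching primitives at the start of the chain
        match = sum(o == c for o, c in zip(observed, chain))
        return match + context_bias.get(goal, 0.0)
    return max(CHAINS, key=score)

# 'reach, grasp wheel' alone is ambiguous between the two chains;
# context (e.g. the partner cannot reach the mounting slot himself)
# biases the inference towards the handover chain.
print(infer_goal(["reach", "grasp wheel"], {"handover wheel": 0.5}))
```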
Dynamic Neural Field Implementation

• The neural activation u_i(x,t) in each layer evolves continuously in time
according to the field dynamics (Amari, 1977):

∂u_i(x,t)/∂t = −u_i(x,t) + ∫ w_i(x − x′) F_i(u_i(x′,t)) dx′ − h + S_i(x,t)

with i = AOL, OML, ASL, IL, AEL, …

• Neuronal activation patterns encode task-relevant information (e.g., in AOL).
• Working memory: activation patterns are self-sustained, persistent inner states.
• This accounts for the important temporal dimension of joint action: cognitive
processes unfold continuously in time under the influence of multiple sources of
information represented in connected layers ('synaptic links').
• Decision making through lateral inhibition.

(e.g., Erlhagen & Schöner, Psychological Review, 2002; Bicho, Mallet, Schöner,
Int. Journal of Robotics Research, 2000; Erlhagen & Bicho, Journal of Neural
Engineering, 2006)
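As a rough numerical illustration of the field dynamics above, the following Python sketch integrates a single 1-D field with a local-excitation/global-inhibition kernel. All parameters (field size, kernel widths, sigmoid steepness, Euler time step) are assumptions chosen only to show a self-sustained working-memory bump, not values from the ARoS architecture:

```python
import numpy as np

# Minimal 1-D Amari field sketch: du/dt = -u + w*F(u) - h + S.
N, dt = 100, 0.05
h = 0.5                                    # resting level
x = np.arange(N, dtype=float)

def F(u):
    """Steep sigmoidal firing-rate function."""
    return 1.0 / (1.0 + np.exp(-10.0 * u))

# Interaction kernel w(x - x'): local excitation, global inhibition
d = x[:, None] - x[None, :]
W = 2.0 * np.exp(-d**2 / (2 * 3.0**2)) - 0.5

def step(u, S):
    """One explicit Euler step of the field dynamics."""
    return u + dt * (-u + W @ F(u) - h + S)

# Localized transient input, e.g. an observed motor primitive
S = 3.0 * np.exp(-(x - 50.0)**2 / (2 * 2.0**2))
u = -h * np.ones(N)
for t in range(600):
    u = step(u, S if t < 200 else 0.0)     # input switched off at t=200

# A self-sustained bump remains: working memory of the input location
print("peak position:", int(x[np.argmax(u)]),
      "| peak activation:", round(float(u.max()), 2))
```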
9
Example: Activation in the Action Execution Layer

Competition among two goal-directed actions:
E = Reach-Grasp-Nut-Handover
D = Reach-Grasp-Wheel-Insert
(Plots: total input to the Action Execution Layer; activation in the Action Execution Layer.)
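The competition on this slide can be caricatured with two mutually inhibiting nodes standing in for the two action representations; the weights and inputs below are illustrative assumptions, not ARoS values:

```python
import numpy as np

# Two nodes with self-excitation and mutual inhibition; the node with
# slightly stronger input wins the competition (lateral inhibition).
def f(u):
    return 1.0 / (1.0 + np.exp(-10.0 * u))   # steep firing-rate function

h, dt = 0.5, 0.05
W = np.array([[ 1.8, -3.0],                  # self-excitation / inhibition
              [-3.0,  1.8]])
S = np.array([1.2, 1.5])                     # D gets slightly stronger support

u = np.array([-h, -h])                       # E and D start at resting level
for _ in range(600):
    u = u + dt * (-u + W @ f(u) - h + S)

labels = ["E = Reach-Grasp-Nut-Handover", "D = Reach-Grasp-Wheel-Insert"]
print("activations:", np.round(u, 2), "| winner:", labels[int(np.argmax(u))])
```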
Extension to Error Monitoring
Necessary to cope with unexpected events and errors.
• Guideline: motor representations for goal inference become activated
irrespective of the correctness of the observed movement!
(van Schie et al.; de Bruijn et al.)
Error monitoring layer (EML):
• different populations in EML are sensitive to a mismatch between expected
and observed consequences;
• integrate activity from:
• IL and CSGL (Error in intention)
• ASL and OML (Error in the ‘means’)
• AEL and proprioceptive/visual feedback (Error in execution)
• inhibition of “prepotent” complementary actions
• neurophysiological evidence in areas of PFC and ACC
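A toy sketch of the mismatch principle behind the EML follows; the names and gains are hypothetical, and in the actual architecture these comparisons are implemented with coupled dynamic fields rather than conditionals:

```python
# Toy sketch of the error-monitoring layer (EML) mismatch principle.

def eml_drive(a, b, gain=1.0):
    """An EML population fires only when its two inputs disagree."""
    return gain if a != b else 0.0

# The three mismatch types described on the slide:
intention_error = eml_drive("attach wheel",  "attach nut")     # IL vs CSGL
means_error     = eml_drive("grasp-plug",    "grasp-plug")     # ASL vs OML
execution_error = eml_drive("wheel grasped", "wheel dropped")  # AEL vs feedback

# Any active EML population inhibits the prepotent complementary action
inhibition_to_AEL = 2.0 * (intention_error + means_error + execution_error)
print("errors (intention, means, execution):",
      intention_error, means_error, execution_error,
      "| inhibition to AEL:", inhibition_to_AEL)
```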
“Think aloud”: speech as output

Speech production was added to verbalize the meaning of the activity in the DNFs:
- feedback to the user about the robot's reasoning
- explanation of errors
Helps the human to coordinate with the robot
Bicho et al (2010)
Verbal communication
Action related speech as input to fields:
• changes time course of field dynamics
• may change decisions or may help to disambiguate
• verbal instructions (“Give me the wheel” activates the motor representation of a
pointing/request gesture)
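How a verbal instruction can enter as an additional input term S(x,t) and change a decision can be sketched with the same two-node caricature used above; the boost value and node labels are illustrative assumptions:

```python
import numpy as np

# A verbal instruction as an extra input term to the field dynamics:
# "Give me the wheel" boosts the handover representation and flips
# the decision that context alone would have produced.
def f(u):
    return 1.0 / (1.0 + np.exp(-10.0 * u))

h, dt = 0.5, 0.05
W = np.array([[ 1.8, -3.0],
              [-3.0,  1.8]])

def settle(S, steps=600):
    u = np.array([-h, -h])
    for _ in range(steps):
        u = u + dt * (-u + W @ f(u) - h + S)
    return u

context = np.array([1.2, 1.5])   # context alone favors node 1 (plug)
speech  = np.array([1.0, 0.0])   # instruction boosts node 0 (handover)
print("context only:    ", np.round(settle(context), 2))
print("context + speech:", np.round(settle(context + speech), 2))
```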
Human & Robot in Action:
Results
I. Construction task #1: Toy vehicle
II. Construction task #2: Baufix
III. “Drinking” scenario
Vision system

• Object recognition and state of the construction (input to OML and CSGL)
• Gesture recognition (input to AOL)

Recognition through combination of feature- and correspondence-based pattern
recognizers (Günter Westphal, 2006)
Videos:
Video_Fig3_Anticipatory_AS_April09.mpg
video_Fig3_Anticipatory_AS_April09_DNF(slow)-1.avi

A look inside: anticipatory behavior
(Snapshots of the field activations in OML, CSGL, ASL, IL, EML and AEL,
including an error in intention and an error in the ‘means’.)
Understanding Partially Occluded Actions
ARoS knows that there is a wheel
behind the occluding object
Action_Toy_Vehicle_with_Occluder_19052010.mpg
Results: Time matters
(Bicho et al., 2011a)

The human first inserts the wheel and then the nut on his side (snapshots t1–t7).

Left: after inserting the nut, the human immediately hands over the wheel to the robot.
(Video_Teste 1_3A.mpg, Video_DF_Teste_1_3A.avi)

Right: the human is slower and the robot requests the wheel.
(Video_Teste 1_3B.mpg, Video_DF_Teste_1_3B.avi)
Nominated as one of the six finalists for the IEEE IROS 2012 Jubilee video award:
"videos illustrating the history and/or milestones in intelligent robotics
and/or intelligent systems in the last 25 years."
With the HRI work:
“The Powers of Prediction: Robots that can read Intentions”
Bicho et al, 2012
Video
http://www.youtube.com/watch?v=JisAUhyXzus&feature=youtu.be
http://spectrum.ieee.org/automaton/robotics/robotics-hardware/iros-2012-video-friday
European ICT research success stories
ICT Results ... Results that Lead the way...
http://staging.esn.eu/projects/ICT-Results/Success_stories.html
Summary of main achievements

• We have developed a DNF-based robot control architecture for joint action,
taking into account neuro-cognitive principles underlying joint action in humans:
• Action understanding and goal inference at different levels.
• Fluency: selection of complementary actions based on an anticipatory model of
action observation.
• Flexibility: context-dependent selection of mappings.
• Different types of error detection (intention, means, execution): different
reactions possible, ranging from speech to communicative gestures and repair actions.
• The dynamic field architecture reflects the importance of the timing of actions
and decisions for efficient team performance.
• Integration of verbal and non-verbal communication.
• Changes of inter-field connections allow adapting the personality of the robot:
more social or more selfish robot behavior;
anticipatory vs. non-anticipatory action selection (some users prefer to control
the robot by giving orders).
Towards (more) socially intelligent assistive robots:
when Action Meets Emotions

• The same action, with a different facial expression, may have an underlying
different goal/intention.
• The same facial expression may reflect a different underlying emotional state,
depending on the context.
(Rui Silva)

Embodied view of emotions:
Kelly & Barsade, 2001; Rizzolatti & Sinigaglia, 2008, 2010; Ferri et al., 2010;
Wiswede et al., 2009

• Shared emotions and the role of emotions in joint tasks?
Current inspiration: John Michael, 2011.
References

W. Erlhagen, E. Bicho, "The dynamic neural field approach to cognitive robotics", Journal of Neural Engineering, 3 (2006), R36-R54.

E. Bicho, W. Erlhagen, L. Louro, E. Costa e Silva, R. Silva, N. Hipolito, "A dynamic field approach to goal inference, error detection and anticipatory action selection in human-robot collaboration", in "New Frontiers in Human-Robot Interaction", edited by Kerstin Dautenhahn & Joe Saunders, pp. 135-164, Advances in Interaction Studies, ISSN 1879-873X, John Benjamins Publishing Company, 2011.

E. Bicho, W. Erlhagen, L. Louro, E. Costa e Silva, "Neuro-cognitive mechanisms of decision making in joint action: a Human-Robot interaction study", Human Movement Science, 30 (2011), 846-868. http://dx.doi.org/10.1016/j.humov.2010.08.012

E. Bicho, L. Louro, W. Erlhagen, "Integrating verbal and non-verbal communication in a dynamic neural field architecture for human-robot interaction", Frontiers in Neurorobotics, May 2010, Vol. 4, Article 5, doi: 10.3389/fnbot.2010.00005. (videos: http://dei-s1.dei.uminho.pt/pessoas/estela/JASTVideosFneurorobotics.htm)

W. Erlhagen, E. Bicho, "A Dynamic Neural Field Approach to Natural and Efficient Human-Robot Collaboration", chapter in "Neural Fields: Theory and Applications", Eds. Stephen Coombes, Peter beim Graben, Roland Potthast, and James J. Wright, Springer, 31 May 2014, ISBN 978-3-642-54592-4.

W. Erlhagen, et al., "Action-understanding and Imitation Learning in a Robot-Human Task", Artificial Neural Networks: Biological Inspirations, pp. 261-268, Lecture Notes in Computer Science, Springer Verlag, 2005.

W. Erlhagen, A. Mukovskiy, E. Bicho, "A dynamic model for action understanding and goal-directed imitation", Brain Research, 1083 (2006), 174-188.

W. Erlhagen, A. Mukovskiy, E. Bicho, G. Panin, C. Kiss, A. Knoll, H. van Schie, H. Bekkering, "Goal-directed Imitation for Robots: a bio-inspired approach to action understanding and skill learning", Robotics and Autonomous Systems, 54 (2006), 353-360.
Thank you!
Estela Bicho Erlhagen
([email protected])
http://www.dei.uminho.pt/pessoas/estela/
