Procedia Manufacturing 3 (2015) 402-408

6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, AHFE 2015

Mirror neurons and human-robot interaction in assembly cells

Sinem Kuz a, Henning Petruck a, Miriam Heisterüber b, Harshal Patel b, Beate Schumann b, Christopher M. Schlick a, Ferdinand Binkofski b

a Institute of Industrial Engineering and Ergonomics, RWTH Aachen University, Aachen, Germany
b Section for Clinical Cognition Sciences, Department of Neurology, RWTH Aachen University Hospital, Aachen, Germany

Abstract

When interacting with a robot, the levels of mental workload, comfort and trust during the interaction are decisive factors for an effective interaction. Current research therefore focuses on the question whether endowing a gantry robot in assembly with anthropomorphic movements can lead to a better anticipation of its behavior by the human operator. To this end, an empirical study compared the effect of different degrees of anthropomorphism. The approach builds on neuroscientific research concerning the neural activity of the human brain when watching someone perform an action. Within the study, videos of a virtual gantry robot and a digital human model performing placing movements were designed for use in functional magnetic resonance imaging (fMRI). The aim is to investigate the underlying brain mechanisms during observation of the movements of the two models.

© 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of AHFE Conference. doi: 10.1016/j.promfg.2015.07.187

Keywords: Human-robot interaction; Mirror neurons; Action perception; Human motion tracking

1. Introduction

When considering the cooperation between humans and robots in assembly cells, it is necessary to ensure occupational safety. Beyond this aspect, when working with a robotic co-worker, the levels of stress, strain, comfort and trust during the interaction are also decisive factors for an effective interaction. To take all these variables into consideration, the field of human-robot interaction should focus on the concept of anthropomorphism, i.e. the simulation of human characteristics by non-human agents such as robots. By using anthropomorphism, a higher level of safety and user acceptance can be achieved [1]. In industrial robotics, however, transferring anthropomorphism in appearance can be difficult. Hence, current investigations focus on the question whether an industrial gantry robot with anthropomorphic movements can lead to a better anticipation of its behavior by the human operator. This idea is mainly based on results from neuroscientific research on the effects of action observation on the behavior and the neural activity of the observer.
The results demonstrate that the observation of actions which belong to our own motor repertoire activates specialized brain areas and makes our own movements faster. These brain areas belong to the so-called mirror neuron system (MNS) and are activated both by action performance and by the observation of other humans' actions [2]. There is evidence that the MNS is crucial for recognizing the content and the intention of actions. Hence, anthropomorphic robot movements may be perceived more effectively, and their intention followed more readily, than conventional robotic movements, owing to a stronger activation of the MNS. For this reason, the work presented in this paper focuses on the effect of different degrees of anthropomorphism on human motion perception, in particular in the motion behavior of a gantry robot in comparison to a digital human model. To measure the brain activity, videos of a virtual gantry robot and a digital human model were designed for use in functional magnetic resonance imaging (fMRI). To generate the anthropomorphic movement data, a preliminary experiment was conducted using an infrared optical tracking system, in which human motion trajectories during the placing of an object were recorded. The motion data were analyzed to compute the joint angles of the human arm during the placing movement, and this information was used to drive the models of the virtual gantry robot and the digital human model. This paper includes a detailed description of the methodology, i.e. the production of the videos with different degrees of anthropomorphism, and the first results of a pilot fMRI scan to validate the stimuli.

2. Mirror neurons and human-robot interaction

Mirror neurons are a class of neurons that become active both when individuals perform a specific action and when they observe a similar action performed by others. They were originally discovered in specific brain areas of the monkey [3] and were later also confirmed in the human brain by several neuroimaging experiments [4]. The main function of the MNS is to understand what another person is doing. Several studies have investigated human mirror neuron activation during the observation of robotic actions, with conflicting conclusions about whether the human MNS responds to robots. In a positron emission tomography (PET) study, Tai et al. [5] presented videos in which either a human or a robot arm grasped an object, with the movement sequences repeated identically. Grasping actions performed by the human elicited a significant neural response in the MNS, while observation of the same action performed by the industrial robot did not show the same activation. The conclusion was that the human MNS is only activated when watching humans perform grasping actions. In another PET study, Perani et al. [6] investigated whether the observation of real or virtual hands engages the same activations in the human brain. Their results showed that only the observation of biological agents' actions activates the areas associated with the MNS. In contrast, Gazzola et al. [7] investigated the neural activation elicited by the observation of human and robotic actions using fMRI. They compared videos of simple and complex movements performed either by a human or by an industrial robot with only one degree of freedom and a constant velocity curve.
The authors found the same activations for human and robotic actions. Gazzola and colleagues explained the contrast with the earlier results by showing that presenting exactly the same robotic action several times does not activate the MNS. An EEG study by Oberman and colleagues [8] produced a similar result. They investigated whether observing a robot hand performing either a grasping action or a pantomimed grasp without an object would activate the MNS, the aim being to determine the characteristics of the visual stimuli that evoke MNS activation. The results revealed no difference in MNS activation between the two stimuli. Chaminade et al. [9] likewise found no differences in the activation of these brain areas when watching robotic agents and humans. Accordingly, robotic agents can evoke a similar MNS activation as humans do. The need to produce anthropomorphic robots able to establish a natural interaction with their human counterparts in areas such as industry is becoming more and more relevant. Furthermore, in contrast to traditional approaches in industrial robotics, there is an increasing number of new automation concepts that combine appearance characteristics of an industrial robot and a humanoid, such as YuMi from ABB [10] and Baxter or Sawyer from Rethink Robotics [11,12]. This development implies the necessity of designing new controls for robotic actions, especially in industrial environments. Therefore, we investigate whether anthropomorphic movement control of a virtual gantry robot leads to increased brain activation in the MNS in comparison to conventional robotic point-to-point (PTP) movement control.

3. Method

3.1. Trajectory generation for the experiment

The human motion data for the virtual simulation were acquired using an infrared optical tracking system based on recording the infrared signatures of reflective markers. The system consists of four cameras that record, at a rate of 60 Hz, the 3D positions of four markers placed on the joints of the participant's right arm. To track natural human placing movements, a participant was positioned in front of a table with black plastic discs mounted at regular intervals for placing a cylindrical object (see Fig. 1). The task was to place the object on four predefined discs on the table, always starting from the same position and with the same posture. For the analysis, both the position of the markers in three-dimensional space and their rotation are recorded. Marker M1 is mounted on the upper arm and marker M2 exactly at the elbow joint, so that the line connecting M1 and M2 runs parallel to the upper arm; likewise, the line connecting M2 and marker M3, which is placed on the forearm, runs parallel to the forearm (see Fig. 1). Marker M4 is fixed to the cylinder held in the hand of the participant. The coordinate system is set up so that the arm moves entirely in the x-y plane, with the y-axis directed upward. Due to the restrictions of the kinematics of the virtual gantry robot with six degrees of freedom, the movement of the arm is considered only within the x-y plane. Only the rotation of marker M4 is considered in three-dimensional space and transmitted to the hand of the human model and to the end effector of the gantry robot.

Fig. 1. The experimental setting to track human placement movements.

To adapt the acquired placing movements to the digital human model and the virtual gantry robot, the joint angle of the shoulder, the angle of the elbow joint and the rotation of the hand were calculated from the 3D coordinates of the markers on the right arm of the participant. For the calculation of the shoulder joint angle, the direction vector between M1 and M2 was determined first; the joint angle then corresponds to the angle between this direction vector and the negative y-axis. Analogously, for the angle of the elbow joint, the direction vectors between M2 and M1 and between M2 and M3 are examined; the angle between these two direction vectors corresponds to the angle of the elbow joint. The remaining angles could be specified directly in the form of the recorded rotation of marker M4 (see Fig. 1).
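To illustrate this angle extraction, the following minimal Python sketch computes the shoulder and elbow angles from the planar marker coordinates exactly as described above. It is a sketch under the paper's x-y plane assumption; the function and variable names are our own illustration, not part of the study's software.

import numpy as np

def joint_angles(m1, m2, m3):
    """Shoulder and elbow angles (radians) from planar marker positions.

    m1, m2, m3: (N, 2) arrays with the x-y positions of the upper-arm,
    elbow and forearm markers over N frames (sampled at 60 Hz here).
    """
    def angle_between(u, v):
        # numerically safe angle between two stacks of 2D vectors
        cos = np.sum(u * v, axis=-1) / (
            np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    neg_y = np.tile([0.0, -1.0], (len(m1), 1))   # negative y-axis
    shoulder = angle_between(m2 - m1, neg_y)     # upper arm vs. -y axis
    elbow = angle_between(m1 - m2, m3 - m2)      # angle at the elbow joint
    return shoulder, elbow

The recorded rotation of marker M4 would simply be passed through unchanged to the hand of the human model and to the end effector of the robot.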
In the end, we tracked human joint angles for four different placing movements in the x-y plane. Afterwards, PTP trajectories for the same start and target positions of the tracked placing movements were computed. Overall, eight motion data sets were created: four PTP trajectories and four trajectories with the human joint angle positions. Figure 2 shows the acquired human motion trajectory for a placing movement, from the initial position until reaching the target position, in comparison to a conventional PTP movement for the same action. The human trajectory follows a degressive curve, while the classic robot movement shows a slight tendency toward a progressive course.

Fig. 2. The human motion trajectory and the PTP trajectory compared in a Cartesian coordinate system.

3.2. Stimuli and design

To produce the stimuli, we used a C++ simulation environment that had already been developed and used within different works of the Cluster of Excellence [13] for the virtual presentation of a gantry robot, and the Editor for Manual Work Activities (EMA) [14] for simulating the human. Using these simulation environments, four different types of videos, each about three seconds long, were produced for the experiment. Two of the video types featured a digital human model (male, 95th percentile) in a natural standing position with a cylinder in the hand (Fig. 3, left). The other two showed a virtual gantry robot with six degrees of freedom in the same position, also holding the cylinder in the tool center point (Fig. 3, right). The generated motion data (four anthropomorphic and four PTP) were used to drive the virtual models of the human and the gantry robot. Overall, we had eight different videos of the human model and eight of the virtual gantry robot. The general design of one trial consisted of three steps. First, a video showing the specific virtual model and the corresponding placing movement was presented (3 sec.). Afterwards, the participant was presented with a scale on which the perceived anthropomorphism of the movement could be rated on a 5-point scale (10 sec.). After the rating phase, the scene on the screen was turned into a fixed image with a grey background (15 sec.) as a control for the baseline activation (see Fig. 4). Each participant completed eight blocks of 12 trials each.

Fig. 3. The digital human model (left) and the virtual gantry robot (right).

Fig. 4. General steps of the experiment: video (3 sec.), response (10 sec.), fixed image/background (15 sec.); one trial = 28 sec., 12 trials per block (= 5.6 min.), 8 blocks (= 45 min.).
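For concreteness, the block design can be written out as an event list. The sketch below reconstructs the onset times from the durations given above; the variable names are illustrative, and the real design additionally varies which of the sixteen videos is shown in each trial.

# Reconstruction of the trial timing described above (illustrative only).
VIDEO_S, RATING_S, FIXATION_S = 3.0, 10.0, 15.0
TRIAL_S = VIDEO_S + RATING_S + FIXATION_S        # 28 s per trial
TRIALS_PER_BLOCK, N_BLOCKS = 12, 8

events = []
for i in range(N_BLOCKS * TRIALS_PER_BLOCK):
    t0 = i * TRIAL_S
    events.append({"onset": t0, "duration": VIDEO_S, "trial_type": "video"})
    events.append({"onset": t0 + VIDEO_S, "duration": RATING_S, "trial_type": "rating"})
    # the trailing 15 s grey screen serves as the implicit baseline

print(TRIALS_PER_BLOCK * TRIAL_S / 60.0)              # 5.6 min per block
print(N_BLOCKS * TRIALS_PER_BLOCK * TRIAL_S / 60.0)   # 44.8 min, i.e. ~45 min in total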
3.3. Apparatus

The measurements were carried out using a 3T magnetic resonance imaging (MRI) scanner by Siemens. The system is a whole-body MR scanner; scanning used the following parameters: echo time TE = 30 ms, flip angle = 90°, repetition time TR = 2000 ms. The head position of the participants in the scanner was stabilized using a vacuum pillow in order to minimize head motion artifacts. The videos were presented in full color at a resolution of 900 x 563 pixels using a back-projection system, in which an LCD projector projected onto a screen placed behind the magnet. The screen was viewed via a mirror installed above the eyes of the participants. Additionally, an fMRI-compatible response pad with three buttons was used to enable ratings of the perceived anthropomorphism of each movement on a 5-point scale.

3.4. Task

Across the experiment, the participants received identical instructions. First, personal data such as age and profession were collected. The task during the main part was to watch each video closely and to rate the perceived anthropomorphism of each movement on the 5-point scale using the response pad. The rating scale was a continuum between very "robotic" (score 1) and very "humanlike" (score 5).

3.5. Participants

Within this paper, the results of one participant are presented as an example. The participant's age was 22. All participants were naive to the purpose of the experiment, free of any neurological or psychiatric disorder, and not on medication at the time of the experiment. All participants were monetarily compensated for taking part in the experiment. The local ethics committee approved the experimental procedure.

3.6. Variables

The analysis of variance was performed on the preprocessed fMRI data using the general linear model as implemented in the software package Statistical Parametric Mapping (SPM12). The main factors for the analysis were the movement type (anthropomorphic vs. PTP) and the virtual model (human vs. gantry robot). The first data inspection was performed at a statistical significance level of p < .05 (uncorrected).
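The analysis itself was carried out in SPM12. Purely as a hedged illustration of how such a two-factor first-level contrast can be specified, the following sketch uses the open-source nilearn package instead; the file name, condition labels and onsets are placeholders, not the study's actual data or pipeline.

import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Placeholder events: 2 models x 2 movement types, 3 s video epochs
events = pd.DataFrame({
    "onset":      [0.0, 28.0, 56.0, 84.0],
    "duration":   [3.0, 3.0, 3.0, 3.0],
    "trial_type": ["human_anthro", "human_ptp",
                   "robot_anthro", "robot_ptp"],
})

model = FirstLevelModel(t_r=2.0, hrf_model="spm")          # TR = 2000 ms
model = model.fit("sub-01_task-observe_bold.nii.gz",       # placeholder file
                  events=events)

# Main effect of movement type: anthropomorphic vs. PTP, pooled over models
z_map = model.compute_contrast(
    "human_anthro + robot_anthro - human_ptp - robot_ptp",
    output_type="z_score",
)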
4. Results

The first analysis of the fMRI data of the exemplary subject, who was asked to distinguish between anthropomorphic and robotic movements, demonstrates that both types of movement strongly activate the fronto-parietal movement recognition areas. Additionally, activation of other motor areas, such as the supplementary motor cortex and the cerebellum, could be observed. Regarding the underlying model, the robot activated the right temporo-occipital areas more strongly, whereas the observation of the digital human model was reflected in stronger activation of the left temporo-occipital and mesial-parietal areas (see Fig. 5 (A)). Most importantly, the observation of humanlike movements activated the fronto-parietal MNS more strongly than the observation of robotic movements, whereas the observation of robotic movements activated the left inferior-parietal cortex (see Fig. 5 (B)). In summary, the imaging data show that the observation of both anthropomorphic and robotic movements is processed in very similar networks of brain areas. Nevertheless, anthropomorphic and robotic movements are additionally coded in more specialized brain regions. The observation of anthropomorphic movements activated the MNS more strongly than the PTP movements, independent of the underlying virtual model.

Fig. 5. Brain activation during observation of anthropomorphic (A) and of robotic movements (B).

5. Summary and outlook

In this paper, a study measuring the effect of different degrees of anthropomorphism, in particular in the motion of a virtual gantry robot with six degrees of freedom in comparison to a digital human model, was presented. To conduct the study, videos of a gantry robot and a digital human model were designed. Anthropomorphism was varied using different degrees of shape, velocity and trajectory. The neural activation of the human brain was measured using fMRI. Across the experiment, the task was to watch each stimulus closely. After each presented video, participants in the scanner were asked to rate on a 5-point Likert scale whether the movement in the presented video was humanlike (5) or non-humanlike (1). The implementation of the designed videos of the gantry robot and the digital human model in the fMRI environment was successful. From the behavioral responses of the subject it can clearly be seen that the distinction between the two types of movement could be made for both models. The preliminary imaging data show the expected stronger activation in the MNS when observing anthropomorphic movements. We have applied the same experimental procedure to 24 further subjects. The detailed analysis of these data is still in progress, but already we can infer that the recognition of robotic and humanlike movement is mirrored in the brain activity of the participants. This approach is therefore very promising for providing an observer-independent assessment of anthropomorphism in the field of cognitive ergonomics.

Acknowledgements

The presented project Anthrobot (Investigation of cognitive compatibility of anthropomorphically modeled trajectories in human-robot interaction using functional magnetic resonance imaging measurements) is supported by financial resources of the Federal Ministry of Education and Research (BMBF) within the frame concept "Human-system interaction in demographic change".

References

[1] B. R. Duffy. Anthropomorphism and the social robot. Robotics and Autonomous Systems, 2003, 42(3), 177-190.
[2] G. Rizzolatti et al. Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 1996, 3(2), 131-141.
[3] G. di Pellegrino et al. Understanding motor events: a neurophysiological study. Experimental Brain Research, 1992, 91(1), 176-180.
[4] L. Fadiga et al. Motor facilitation during action observation: a magnetic stimulation study. Journal of Neurophysiology, 1995, 73(6), 2608-2611.
[5] Y. F. Tai et al. The human premotor cortex is 'mirror' only for biological actions. Current Biology, 2004, 14, 117-120.
[6] D. Perani et al. Different brain correlates for watching real and virtual hand actions. NeuroImage, 2001, 14, 749-758.
[7] V. Gazzola et al. The anthropomorphic brain: The mirror neuron system responds to human and robotic actions. NeuroImage, 2007, 35(4), 1674-1684.
[8] L. M. Oberman et al. EEG evidence for mirror neuron dysfunction in autism spectrum disorders. Cognitive Brain Research, 2005, 24, 190-198.
[9] T. Chaminade et al. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS ONE, 2010, 5(7), e11577.
[10] E. Guizzo. ABB's FRIDA Offers Glimpse of Future Factory Robots. IEEE Spectrum, 2011.
[11] E. Guizzo, E. Ackerman. The Rise of the Robot Worker. IEEE Spectrum, 2012.
[12] E. Guizzo. Sawyer: Rethink Robotics Unveils New Robot. IEEE Spectrum, 2015.
[13] B. Odenthal et al. Investigation of Error Detection in Assembled Workpieces Using an Augmented Vision System. In: Proceedings of the IEA 17th World Congress on Ergonomics (CD-ROM), Beijing, China, 2011, 1-9.
[14] L. Fritzsche et al. Introducing ema (Editor for Manual Work Activities) – A New Tool for Enhancing Accuracy and Efficiency of Human Simulations in Digital Production Planning, 2011.