Intuitive work assistance by reciprocal human-robot interaction in the subject area of direct human-robot collaboration

Procedia CIRP 44 (2016) 275-280, doi: 10.1016/j.procir.2016.02.098
6th CIRP Conference on Assembly Technologies and Systems (CATS)

C. Thomas a,*, L. Stankiewicz b, A. Grötsch c, S. Wischniewski c, J. Deuse b, B. Kuhlenkötter a

a Chair of Production Systems, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum, Germany
b Institute of Production Systems, TU Dortmund University, Leonhard-Euler-Str. 5, 44227 Dortmund, Germany
c Unit Human Factors, Ergonomics, Federal Institute for Occupational Safety and Health, Friedrich-Henkel-Weg 1-25, 44149 Dortmund, Germany

* Corresponding author. Tel.: +49-234-32-27760; fax: +49-234-32-07760. E-mail address: thomas@lps.rub.de

© 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the organizing committee of the 6th CIRP Conference on Assembly Technologies and Systems (CATS).

Abstract

The paper focuses on the interaction in human-robot collaboration. On the one hand, the robot assistance system individually aligns itself to the employee; on the other hand, the employee is given an interface that enables him to influence certain robot positions. The aim is to support the employee in assembly tasks. The employee's personal anthropometric data as well as age-related and temporary restrictions in movement are recorded individually via motion capturing before the workplace is built in a virtual and real environment. Based on these data, task-specific movements of the employee are simulated using digital human models as a virtual representation of the employee, combined with an ergonomic analysis within the work environment. The employee's influence on the assistance robot system is provided through the design of intuitive user interfaces. The positioning of the components in the assembly is carried out user-specifically by the robot. In addition, the employee is given a graphical user interface and can further adjust the position or rotate the components. In this paper, preliminary results of this ongoing research project are presented as well as two reference processes from the field of assembly technologies as application examples.

Keywords: Human-Robot Collaboration, Digital Human Models, Individual Assistance Systems, Human-Centred Design of Workplaces

1. Motivation

The individual design of workplaces constitutes a prerequisite for their human-centred, ergonomic and health-preserving planning within industrial production. In this context, flexible and adaptive assistive workplace devices, especially in the emerging field of human-robot collaboration, should be taken into consideration. One major motivational aspect is the demographic development throughout European countries and the corresponding increase of the mean age of the workforce. According to the ageing report of the European Union, nearly one third of the population will be 65 years or older by 2060.
With respect to the working-age population, defined by the range between 15 and 64 years, a decline from 67% to 56% is predicted [1]. The Federal Statistical Office of Germany reports that by 2060, a substantial percentage of the workforce will be at the age of 50 or above [2]. Considering the different diagnostic subgroups that account for the expenses and loss of production due to work inability, 23.4% can be assigned to musculoskeletal disorders (12.4 billion Euros) [3]. This sociodemographic development gives rise to both opportunities and challenges that need to be met within industrial engineering processes and future developments as well as assembly technologies and systems [4].

When addressing workplace design related issues, the virtual planning of such assembly processes becomes increasingly important. One aim in this context is the virtual planning of joint workplaces of humans and robots and its subsequent implementation and realization in actual industrial work environments. A major motivational aspect for an interaction between humans and robots is the intention to combine the flexibility of manual processes performed by humans with the high efficiency and repeatability of automated processes in manufacturing and assembly systems. These systems benefit from the synergistic effects of a collaborative scenario between humans and robots [5,6,7].

For virtual workplace planning and product development processes, digital human models (DHM) are used in order to simulate characteristics and capabilities of future users or employees respectively [8,9]. Due to the aforementioned demographic development, these characteristics of employees are subject to an increasing intra- and inter-individual variance in physical work capacity as well as in capabilities and skills [10,11,12]. Whereas the rising importance of the virtual planning of joint workplaces of humans and robots [6,13,14] has been stressed in the literature, the intuitive and individual character of these workplaces still needs thorough investigation. Hence, this conference contribution fosters the investigation of an issue that needs further consideration within the field of industrial engineering, assembly technologies and direct human-robot collaboration, foremost with respect to its virtual planning process. It describes an overall concept for the individual and virtual planning and design of human-robot collaborations and the realisation of reference processes including the interfaces for the employees.

2. Method to design individual work assistance

In the field of assembly, there are different approaches to make tasks feasible. For this purpose, different technologies are used, such as machines, handling or lifting devices, which include automated devices, telemanipulators and balancers [4]. However, the application of these devices is limited to strictly defined tasks. To adapt an assistance system to the employee's needs, a highly flexible system is required.
The realization of a human-robot collaboration in assembly tasks offers the possibility to implement an assistance system that can react to both given and varying constraints resulting from the employee or from environmental influences. There are many approaches to human-robot collaboration for industrial applications. Research projects address the interaction of humans and robots in workspaces without separating safety devices, using simulations as a planning method [15]. In the field of assembly tasks, there are also approaches to direct human-robot collaboration [16] with planning of appropriate assignments of tasks, supplemented by evaluations of ergonomics and feasibility of the assignments in a simulation [17]. Although there is a division of tasks between humans and robots and a predetermined sequence of execution of working tasks in these applications of current research projects, there is no consideration of the employee's individual performance parameters for designing the workplace oriented towards personal capabilities and ergonomics. However, using robots as a supporting element in the work system provides the opportunity to adjust the degree of assistance based on a variable automation level and to respond to human restrictions individually.

The human-robot collaboration system affects the workflow and the distribution of work contents through the intervention of robots in work tasks. Thus, the robot influences the execution of the employee's work tasks. To account for this interaction, data need to be generated in advance in order to optimise the collaboration between humans and robots. The robot-based assistance system aligns itself automatically to the employee by an analysis of the data. Moreover, the employee has an impact on the type and the extent of the robot's influence through the possibility of making subsequent fine adjustments.

Adapting workplaces to the employee's individual demands requires a highly changeable assistance system that allows a quick and self-influenced set-up according to the employee's needs. These needs are strongly affected by the ratio between the prescribed physical minimum requirements of the work tasks and the employee's performance parameters. The physical minimum requirements include, for example, the forces to be delivered or the body postures required to fulfil the work tasks. If these requirements exceed the employee's individual performance parameters, the execution of the tasks is not bearable or even feasible and can thus lead to or aggravate physical harm. For an optimised workplace design and to avoid health-critical work contents, the robot-based assistance system is supposed to support the employee based on his individual prerequisites and capabilities.

To ensure a human-robot collaboration with interaction opportunities, including the possibility for both the employee and the system itself to affect the assistance system, the implementation is divided into two sections (see Fig. 1). In the first section, the system requirements are determined in order to ensure a targeted optimization of the process (see Fig. 1: blue box).

Fig. 1: Method for designing an individualised, robot-based assistance system
The optimization of the human-robot collaboration includes the capturing and processing of individual performance parameters (see Fig. 1: 1a & 1b). Whereas the necessary requirements are determined by the given tasks and the workplace design, the individual performance parameters are usually not quantified. Therefore, it is necessary to determine these parameters for the purpose of an individually adapted design of the assistance system. The individual performance parameters include, in particular, the employee's possible restrictions of joint motions, which may be caused by age, diseases or injury, as well as anthropometric data such as the lengths of body segments, for instance upper arms and forearms, shoulder width and body height. Body segment lengths and motion impairments are measured via motion capturing (see Fig. 1 and chapter 3). These data can be supplemented by other performance parameters such as endurance or visual acuity to create an accurate image of the employee.

Transferring these data into a DHM with adequately modelled physical impairments suitable to the individual demands allows a proper simulation of the employee's motions when performing the required work tasks (see Fig. 1: 2). These simulations provide information on the manner in which the robot-based assistance system has to intervene in the work process to enable an individually adjusted operation in accordance with the stated physical performance parameters. Thus, robot paths can be programmed precisely to the employee's demands before a physical system has to be implemented, whereby cost savings compared to non-digital prototyping can be achieved. By using a simulation environment, the paths of the assistive robot can be coordinated with the predicted motions of the employee based on the a priori recorded parameters. Therefore, the exact positions and orientations of work pieces can be calculated and programmed to ensure an ergonomically convenient serving of the work pieces. By implementing existing ergonomic screening methods into the DHM environment, critical postures can be detected and consequently compensated by using the assistive robot. Furthermore, by the simultaneous integration of robots and humans in the simulation environment, collisions between robots and employees can be detected prospectively. Accordingly, measures can be taken to avoid such collisions by adapting robot paths or work methods (see Fig. 1: 3).

The simulation environment simultaneously represents a tool to implement the programming of robot paths. In the direct interaction with the employees, the robots constitute the basis for an assistance system. Fed with the recorded data, which are further processed in the simulation, an independent adjustment of the robot paths to the employee's needs takes place (see Fig. 1: 4). Yet it is important to implement a human-machine interface that allows a discrete customization of given robot paths to readjust positions and orientations of work pieces handled by the robot under safe conditions. In this way, the employee can simultaneously improve the direct collaboration between human and robot at one work piece in a common workspace. Due to their kinematic structure and flexible handling options, robots are not tied to a single working system and can perform various tasks. This flexibility is particularly suitable for establishing individual settings in a workplace. Thus, the use of robots allows the optimization of pre-existing work processes as well as the prospective planning of compliant work systems.
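As a simple illustration of how recorded individual parameters could translate into an ergonomically convenient serving position, the following minimal sketch derives a presentation height and reach distance for a work piece from an employee's anthropometric data. The class name, the parameter values and the heuristics are assumptions made for this example only and do not represent the project's implementation.

```csharp
using System;

// Minimal sketch: derive a work-piece presentation pose from recorded
// anthropometric parameters. All values, names and heuristics here are
// illustrative assumptions, not the implementation used in the project.
public static class ServingPoseSketch
{
    public static void Main()
    {
        // Example parameters as they might be recorded via motion capturing (in metres).
        double shoulderHeight = 1.42;
        double upperArmLength = 0.32;
        double forearmLength  = 0.26;

        // Assumed heuristic: present the part slightly below shoulder height and
        // within roughly two thirds of the maximum reach so that the elbow stays bent.
        double maxReach        = upperArmLength + forearmLength;
        double servingHeight   = shoulderHeight - 0.15;
        double servingDistance = 0.66 * maxReach;

        Console.WriteLine(
            $"Serve the work piece at {servingHeight:F2} m height, " +
            $"{servingDistance:F2} m in front of the shoulder.");
    }
}
```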
With the help of the simulation, the planner can check the movements of the robots and configure the safety settings. This may be necessary in order to reduce the workspace, to avoid clamping situations or to reduce the robot's speed. As depicted in Fig. 1, the proposed approach to direct human-robot collaboration combines several subsequent steps in order to meet the relevant requirements for the virtual planning and the actual realisation of the human-robot collaboration scenario. In this way, it is ensured that the direct human-machine collaboration can be used as target-oriented support for the employee without causing risks of injury.

3. Recording of Individual Physical/Musculoskeletal Parameters

In the presented approach, anthropometric and biomechanical parameters are used for the individualization of digital human models and to account for individual musculoskeletal parameters of the employee. Apart from the use of digital human models for the prospective planning of future workplaces, markerless motion capturing techniques are another emerging technology that can be incorporated into the workflow of generating an intuitive work assistance by reciprocal human-robot interaction. For the quantitative assessment of the individual anthropometric and biomechanical parameters, a markerless motion capturing technique is used that is based on the time-of-flight (ToF) principle (Microsoft Kinect v2.0) [18]. The use and suitability of markerless motion capturing for the estimation of human posture [19,20,21], the analysis of human movements [22,23] and musculoskeletal modelling [24] have been reported in the literature. The choice of a markerless approach was made due to the portability, easy handling and usability as well as the low costs that make it accessible even for small and medium-sized enterprises (SMEs). Another advantage, especially for use in an industrial setting, is that no additional markers are required, which would hinder the applicability of the proposed workflow and might also interfere with the natural motions during the recording procedure. The use of a markerless motion capturing system is also motivated by the aim to take measurements of the employee in normal clothes.

The segment lengths of the human skeleton and the joint angles of the employee, i.e. the range of motion of the human joints, can be calculated by using the depth information coming from the time-of-flight sensor. The distance from the sensor to the target object is calculated by registering the phase difference between the emitted and reflected infrared wave signal [25]. The depth information is further processed via an extended programming code of the internal "Body Tracking" algorithm of the Software Development Kit (SDK v2.0) of the Kinect sensor [18], combined with additional self-programmed code (C#) that allows the direct calculation of segment lengths and joint angles of the upper and lower extremities as well as the head and the torso. For the realization of an easy-to-use procedure for recording the static anthropometric and kinematic biomechanical parameters of the employee, a specific graphical user interface was programmed in order to allow data recording, data processing and data export.
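For background, the standard continuous-wave ToF relation links the measured distance d to the registered phase difference Δφ and the modulation frequency f_mod via d = c·Δφ/(4π·f_mod); this textbook relation is stated here for orientation only and is not taken from [25]. The following C# fragment gives a minimal sketch of the segment-length and joint-angle calculation on top of the Kinect SDK v2.0 body-tracking data (single tracked body, right arm only); it is an illustrative simplification and not the recording software developed in the project.

```csharp
using System;
using Microsoft.Kinect;   // Kinect for Windows SDK v2.0

// Sketch of the segment-length and joint-angle calculation for one tracked
// body (right arm only). Simplified single-shot example, not the project's
// recording software; the csv layout at the end is an assumption.
public static class ArmMeasurementSketch
{
    static double Distance(CameraSpacePoint a, CameraSpacePoint b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Angle at joint b formed by the segments b->a and b->c, in degrees.
    static double JointAngle(CameraSpacePoint a, CameraSpacePoint b, CameraSpacePoint c)
    {
        double[] u = { a.X - b.X, a.Y - b.Y, a.Z - b.Z };
        double[] v = { c.X - b.X, c.Y - b.Y, c.Z - b.Z };
        double dot = u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
        double lu = Math.Sqrt(u[0] * u[0] + u[1] * u[1] + u[2] * u[2]);
        double lv = Math.Sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return Math.Acos(dot / (lu * lv)) * 180.0 / Math.PI;
    }

    // Expects a tracked Body instance delivered by a BodyFrameReader of the SDK.
    public static string MeasureRightArm(Body body)
    {
        CameraSpacePoint shoulder = body.Joints[JointType.ShoulderRight].Position;
        CameraSpacePoint elbow    = body.Joints[JointType.ElbowRight].Position;
        CameraSpacePoint wrist    = body.Joints[JointType.WristRight].Position;

        double upperArm   = Distance(shoulder, elbow);   // segment length in metres
        double forearm    = Distance(elbow, wrist);
        double elbowAngle = JointAngle(shoulder, elbow, wrist);

        // One csv line as it might be exported for the digital human model.
        return $"upper_arm;{upperArm:F3};forearm;{forearm:F3};elbow_angle_deg;{elbowAngle:F1}";
    }
}
```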
The data export is primarily important for the individual design of the generic digital human model by providing a suitable csv file for the subsequent import into the virtual environment and the CAD-based process planning software. In this research context, the parameters are integrated into the virtual planning of direct human-robot collaboration scenarios. The concept of recording these parameters is designed in such a way that the required measurement volume is limited to a minimum: only the time-of-flight sensor is needed for the data collection, a PC or tablet for the data processing and the data export, and a monitor that serves as a support during the measurement and provides visual feedback for the employee (see Fig. 2). The latter point is also beneficial for the user acceptance of this technological concept and the aim of easy data access for the simulation and virtual planning of a joint workplace between the human and the robot.

Fig. 2: Schematic outline of the measurement setup

Based on the schematic measurement setup in Fig. 2, the recording of the physical performance parameters is outlined in the following. The authors want to point out that the actual recording procedure with the ToF sensor is carried out in a separate environment isolated from any security-critical situations. Furthermore, the motion capturing procedure is standardized in terms of a measurement protocol to ensure an efficient workflow as well as usability for non-expert users. After the recording of the anthropometric parameters, the kinematic data are captured during subsequent movements of the joints of the lower and upper extremities as well as the head and the torso. These data are used to scale the digital human model with respect to its anthropometry as well as to parameterize it in terms of the ranges of motion of the aforementioned joints of the human body. The authors would like to mention that data privacy regulations need to be taken into account when using employee-specific parameters.

4. Individualization of Digital Human Models

The individual performance parameters gained as described in chapter 3 are transferred into a virtual environment in order to draw conclusions on real work systems yet to be implemented. The virtual environment used is FAMOS robotic [26], which was originally designed for offline programming of robot paths and therefore supports the simulation of robots and other CAD-based operating material. For this purpose, an existing digital human model from the entertainment industry is adapted for use in a scientific work context (see Fig. 3).

Fig. 3: Digital human model for human-robot collaborations with its skeleton

The bare digital human model used here solely consists of a shell with a human shape and therefore needs to be extended with motion and analysis functions. For the depiction of individual human movements in interaction with the environment, it is important that the movements of the animated digital human model accurately match the employee's real movements at the workplace. In order to enable the prospective planning of a workplace, it is therefore necessary that the employee's body movements can be predicted instead of being captured, for example by a motion capturing method. In this way, workplaces can be modelled in a virtual environment in advance in order to draw conclusions on a real work system yet to be built. This includes the consideration of an ergonomic design of the workplace, the guarantee of safety for the employee and the equipment, and the consideration of time management.
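To give an idea of how the exported individual parameters could be applied to the digital human model, the following minimal sketch stores a measured range of motion per joint and clamps simulated joint angles to it; a completely blocked joint is represented by an empty range. The class names and the csv layout are assumptions made for this illustration and do not reflect the FAMOS robotic implementation.

```csharp
using System;
using System.Collections.Generic;

// Sketch: individual joint limits for a digital human model. Illustrative
// only; names and csv layout are assumptions, not the project's data format.
public class JointLimit
{
    public double MinDeg { get; set; }
    public double MaxDeg { get; set; }
}

public class IndividualDhmParameters
{
    readonly Dictionary<string, JointLimit> limits = new Dictionary<string, JointLimit>();

    // Assumed csv line format: jointName;minDeg;maxDeg
    public void LoadCsvLine(string line)
    {
        var parts = line.Split(';');
        limits[parts[0]] = new JointLimit
        {
            MinDeg = double.Parse(parts[1]),
            MaxDeg = double.Parse(parts[2])
        };
    }

    // Clamp a simulated joint angle to the individually measured range of motion.
    // A joint with MinDeg == MaxDeg is effectively blocked.
    public double Constrain(string joint, double requestedDeg)
    {
        if (!limits.TryGetValue(joint, out var l)) return requestedDeg;
        return Math.Max(l.MinDeg, Math.Min(l.MaxDeg, requestedDeg));
    }
}
```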
For the prediction of motion sequences, a motion structure based on a given skeleton is implemented in the digital human model and inverse kinematics is integrated for all joint elements of the skeleton. Since all body segments of a human are involved in finding the most suitable motion, it is important to apply the inverse kinematics to the entire body in order to simulate natural and realistic movements. By transferring the individual physical performance parameters, movement impairments can be included in the DHM. There is also the possibility to map complete restrictions of the range of motion of single joints by blocking individual joints in the digital human model. The accurate representation of the employee's body movements by means of the inverse kinematics offers the opportunity to perform ergonomic analyses without having to observe the employees at an already set-up work system.

The ergonomic screening method is based on the Rapid Entire Body Assessment (REBA), which enables the assessment of postures considering the neck, torso, leg, upper arm, forearm and hand positions. To this end, the joint positions of the individual segments of the digital human model are read and evaluated in accordance with the REBA assessment procedure. In addition, applied forces, the handling of heavy loads as well as static postures and repetitive short-cycled activities are considered in the assessment. Moreover, the simulation environment detects collisions of moving bodies with both the environment and other moving objects, so that robot paths and the calculated motions of the employee can be coordinated to avoid collisions.

5. Individualized Human-Robot Collaboration in Production

Based on reference processes identified at industrial partners, the suitability of the depicted method and the implementation of the planning system for an intuitive work assistance are tested. The first process described is the implementation of a lightweight robot in an assembly process. With the assisting robot, the employee will be relieved of the monotonous and repetitive task of setting stud screws in housing parts of pumps. In a second reference process, an employee will be assisted by an industrial robot instead of a balancing system. With the help of the robot, a heavy and bulky assembly will be transferred into a car body. The use of robots allows the employee to perform additional assembly or checking tasks under optimized ergonomic conditions. In the descriptions of the reference processes, the collaboration principles as defined in ISO 10218-1 [27] and ISO 10218-2 [28] are referred to.

5.1. Assisted setting of stud screws in housing parts of pumps

During the assembly of industrial pumps, the employee has to insert a large number of stud screws into housing parts. Most of the pumps are built in small batches or are customer-specific. The number and positions of the tapped holes vary and lie on different pitch circles. First, the employee takes the stud screws out of a box and screws them into the threaded holes. Afterwards, a pneumatic screwdriver is used to fix them. This process is very uncomfortable because of the different screw levels and the high forces that have to be applied to handle the screwdriver.
Additionally, for many parts the task has to be carried out on both sides of a symmetrical part, so the employee has to work from both sides. With the help of the robot-based assistance system, the employee can directly guide the robot to the screw positions and teach them. After three positions have been taught, the robot calculates all screw positions on the pitch circles from the computed centre and the distances between the three taught positions (a geometric sketch of this calculation is given at the end of chapter 5). Afterwards, the robot automatically takes a stud, screws it into the first position and repeats this process for the remaining positions until all studs are assembled. Because of the symmetry of the pitch circles, only one further position needs to be taught in order to carry out the same process on the opposite side (see Fig. 4).

Fig. 4: Actual assembly task (a) and the setting of stud screws with human-robot collaboration (b)

Based on the individualized human model and its implementation in the simulation, the positions and the movements of the robot can be planned offline. The demonstrator will be built with a KUKA LBR iiwa [29]. This robot, with integrated torque sensors, allows hand guiding and can be implemented in applications with direct human-robot collaboration after a positive risk assessment. During the teaching of the screw positions, the collaboration principle is hand guiding. Afterwards, the collaboration is realised using power and force limiting.

5.2. Handling of assembly groups in the automotive industry

For the manufacturing of car bodies, a large number of robots are used; the final assembly, however, is strongly influenced by manual operations. Due to the variety and the challenging nature of the tasks, the main reason for the high proportion of manual assembly tasks is the high complexity of implementing automation as a result of technical or economic restrictions. However, especially in the final production stage in the automotive industry, assistance robots could help to reduce the physical stress of the employee and establish ergonomic work conditions. For the handling of an assembly part with a weight of up to 80 kg, a standard industrial robot with a safety controller will be enabled to assist the employee. Besides assisting with the handling, the employee will be able to move the assembly into different positions for final assembly tasks or visual inspection. The employee is provided with an intuitive graphical user interface via the robot teach pendant. The assembly can be moved and rotated in every direction within safe ranges. The translational and rotational movements can be performed in a speed mode or in a slow mode. If the employee uses the speed mode, he has to keep a safe distance, which is checked by the implemented sensors (collaboration principle: speed and separation monitoring). During the slow mode, the speed of the robot is controlled and Cartesian limitations avoid clamping or crushing of the employee (collaboration principles: combination of safety-rated monitored stop and speed and separation monitoring). Additional advantages for the employee are the reduced physical stress compared to moving the balancing system and the reduction of sources of defects, because the trajectory is given by the system. The whole concept enables the employee to stay in control of the assisted assembly task and to stop the system in the case of unexpected disturbances.
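The pitch-circle calculation described in section 5.1 can be illustrated with a short geometric sketch: from the three taught positions, the circle centre and radius are determined, and the hole positions are distributed evenly on the circle. The code below is an illustrative 2D example (in the plane of the flange) with assumed method names; it is not the controller software of the demonstrator.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the pitch-circle calculation: circumcentre of three taught
// positions, then an even distribution of all hole positions on the circle.
// Illustrative 2D geometry only, not the robot controller implementation.
public static class PitchCircleSketch
{
    public static List<(double X, double Y)> ScrewPositions(
        (double X, double Y) p1, (double X, double Y) p2, (double X, double Y) p3,
        int holeCount)
    {
        // Circumcentre of the three taught positions.
        double d = 2 * (p1.X * (p2.Y - p3.Y) + p2.X * (p3.Y - p1.Y) + p3.X * (p1.Y - p2.Y));
        double cx = (Sq(p1) * (p2.Y - p3.Y) + Sq(p2) * (p3.Y - p1.Y) + Sq(p3) * (p1.Y - p2.Y)) / d;
        double cy = (Sq(p1) * (p3.X - p2.X) + Sq(p2) * (p1.X - p3.X) + Sq(p3) * (p2.X - p1.X)) / d;

        double radius = Math.Sqrt((p1.X - cx) * (p1.X - cx) + (p1.Y - cy) * (p1.Y - cy));
        double startAngle = Math.Atan2(p1.Y - cy, p1.X - cx);

        // Distribute the hole positions evenly on the pitch circle,
        // starting at the first taught position.
        var positions = new List<(double X, double Y)>();
        for (int i = 0; i < holeCount; i++)
        {
            double a = startAngle + i * 2 * Math.PI / holeCount;
            positions.Add((cx + radius * Math.Cos(a), cy + radius * Math.Sin(a)));
        }
        return positions;
    }

    static double Sq((double X, double Y) p) => p.X * p.X + p.Y * p.Y;
}
```

Once a single additional position has been taught on the opposite side of the symmetrical part, the same centre and hole distribution can be reused there, which is why only one further teaching step is required.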
6. Conclusion

This paper presents a concept for implementing human-machine collaborations in the field of assembly tasks that adjust to the individual employee's needs. The adaptation of the collaboration between humans and robots is carried out by simulating the work system in consideration of the employee's physical constraints. The concept provides a comprehensive view of assembly tasks, beginning with the planning of the collaborative system, via adjustments to the work system by programming robot paths and the supply of operating material, up to the realization. By integrating a digital human model from the entertainment industry, which is highly customized to the needs of planning a hybrid workplace, into a robot simulation software, this planning tool is affordable for SMEs and can be used to decide whether it is worth implementing a hybrid assembly system. In the further course of the project, two demonstrators will be built in laboratory areas to verify the ergonomic advantages predicted by the simulation and to acquire the required technology to put a collaborative system between humans and robots into practice.

Acknowledgements

The presented results are part of the research project INDIVA. The research and development project is funded by the German Federal Ministry of Education and Research. Thanks go to the project team, which supports and drives the development of the results presented here.

References

[1] European Commission: The 2012 Ageing Report. European Union, Brussels, 2012.
[2] Statistisches Bundesamt (DeStatis): Datenreport 2013, Bonn, 2013.
[3] Bundesanstalt für Arbeitsschutz und Arbeitsmedizin (BAuA): Sicherheit und Gesundheit bei der Arbeit 2012, Dortmund, 2014.
[4] Weidner, R., Wulfsberg, J.P.: Concept and Exemplary Realization of Human Hybrid Robot for Supporting Manual Assembly Tasks. CIRP Conference on Assembly Technologies and Systems, 2014, 53-58.
[5] Tan, J.T.C., Duan, F., Zhang, Y., Watanabe, R., Arai, T.: Human-robot collaboration in cellular manufacturing: Design and development. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009, 29-34.
[6] Busch, F., Wischniewski, S., Deuse, J.: Application of a character animation SDK to design ergonomic human-robot-collaboration. In: Proceedings of the 2nd International Symposium on Digital Human Modeling, 2013.
[7] Ore, F., Hanson, L., Delfs, N., Wiktorsson, M.: Virtual evaluation of industrial human-robot cooperation: An automotive case study. In: Proceedings of the 3rd International Digital Human Modeling Symposium, 2014.
[8] Duffy, V.G. (ed.): Handbook of digital human modeling: research for applied ergonomics and human factors engineering. CRC Press Taylor & Francis Group, New York, 2009.
[9] Paul, G., Wischniewski, S.: Standardisation of digital human models. Ergonomics 55(9), 2012, 1115-1118.
[10] Kumashiro, M.: Ergonomic strategies and actions for achieving productive use of an aging work-force. Ergonomics 43, 2000, 1007-1018.
[11] Hollmann, W., Strüder, H.K.: Sportmedizin. Schattauer Verlag, Stuttgart, 2000.
[12] Okunribido, O.O., Wynn, T., Lewis, D.: Are older workers at greater risk of musculoskeletal disorders in the workplace than young workers? - A literature review. Occupational Ergonomics 10, 2011, 53-68.
[13] Thomas, C., Busch, F., Kuhlenkötter, B., Deuse, J.: Ensuring Human Safety with Offline Simulation and Real-time Workspace Surveillance to Develop a Hybrid Robot Assistance System for Welding of Assemblies.
In: Proceedings of the 4th International Conference on Changeable, Agile, Reconfigurable and Virtual Production, 2011, 465-470.
[14] Bohlin, R., Delfs, N., Maerdberg, P., Carlson, J.S.: A Framework for Combining Digital Human Simulations with Robots and Other Objects. In: Proceedings of the 3rd International Digital Human Modeling Symposium, 2014.
[15] Busch, F., Thomas, C., Deuse, J., Kuhlenkötter, B.: A Hybrid Human-Robot Assistance System for Welding Operations. Methods to Ensure Process Quality and Forecast Ergonomic Conditions. 4th CIRP Conference on Assembly Technologies and Systems, 2012, 151-154.
[16] Cherubini, A., Passama, R., Crosnier, A., Lasnier, A., Fraisse, P.: Collaborative manufacturing with physical human-robot interaction. Robotics and Computer-Integrated Manufacturing 40, 2016, 1-13.
[17] Michalos, G., Makris, S., Spiliotopoulos, J., Misios, I., Tsarouchi, P., Chryssolouris, G.: ROBO-PARTNER: Seamless Human-Robot Cooperation for Intelligent, Flexible and Safe Operations in the Assembly Factories of the Future. CIRP Conference on Assembly Technologies and Systems, 2014, 71-76.
[18] Microsoft Kinect for Windows v2.0 - Technical Documentation, 2015.
[19] Taylor, J., Shotton, J., Sharp, T., Fitzgibbon, A.: The Vitruvian manifold: Inferring dense correspondences for one-shot human pose estimation. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, 103-110.
[20] Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., Kipman, A., Blake, A.: Real-Time Human Pose Recognition in Parts from Single Depth Images. In: Cipolla, R., Battiato, S., Farinella, G.M. (eds.): Machine Learning for Computer Vision. Studies in Computational Intelligence, Vol. 411, Springer, 2013, 119-135.
[21] Buys, K., Cagniart, C., Baksheev, A., De Laet, T., De Schutter, J., Pantofaru, C.: An adaptable system for RGB-D based human body detection and pose estimation. Journal of Visual Communication and Image Representation 25(1), 2014, 39-52.
[22] Schmitz, A., Ye, M., Boggess, G., Shapiro, R., Yang, R., Noehren, B.: The measurement of in vivo joint angles during a squat using a single camera markerless motion capture system as compared to a marker based system. Gait & Posture 41(2), 2015, 694-698.
[23] Graf, E., Kuster, R., Wirz, M., Heinlein, B.: Validity of the new Kinect-Sensor in measuring upper body kinematics. In: Proceedings of the 25th Congress of the International Society of Biomechanics, 2015, 738-739.
[24] Andersen, M.S., Yang, J., de Zee, M., Zhou, L., Bai, S., Rasmussen, J.: Full-body musculoskeletal modeling using dual Microsoft Kinect sensors and the AnyBody Modeling System. In: Proceedings of the 14th International Symposium on Computer Simulation in Biomechanics, 2013, 23-24.
[25] Ganapathi, V., Plagemann, C., Koller, D., Thrun, S.: Real Time Motion Capture Using a Single Time-Of-Flight Camera. In: IEEE Conference on Computer Vision and Pattern Recognition, 2010, 755-762.
[26] http://famos-robotic.de/index.php?id=homepos&L=1
[27] ISO 10218-1:2011, "Robots and robotic devices - Safety requirements for industrial robots - Part 1: Robots".
[28] ISO 10218-2:2011, "Robots and robotic devices - Safety requirements for industrial robots - Part 2: Robot systems and integration".
[29] http://www.kuka-robotics.com/germany/de/products/industrial_robots/sensitiv/start.html