key: cord-0927384-4zfx8wqh authors: Penčić, Marko; Čavić, Maja; Oros, Dragana; Vrgović, Petar; Babković, Kalman; Orošnjak, Marko; Čavić, Dijana title: Anthropomorphic Robotic Eyes: Structural Design and Non-Verbal Communication Effectiveness date: 2022-04-15 journal: Sensors (Basel) DOI: 10.3390/s22083060 sha: 4ae5f384923571f7e9d3fae22cc609e493fb8522 doc_id: 927384 cord_uid: 4zfx8wqh This paper shows the structure of a mechanical system with 9 DOFs for driving robot eyes, as well as the system's ability to produce facial expressions. It consists of three subsystems which enable the motion of the eyeballs, eyelids, and eyebrows independently of the rest of the face. Due to its structure, the mechanical system of the eyeballs is able to reproduce all of the motions human eyes are capable of, which is an important condition for the realization of the binocular function of the artificial robot eyes, as well as stereovision. From a kinematic standpoint, the mechanical systems of the eyeballs, eyelids, and eyebrows are highly capable of generating the movements of the human eye. The structure of a control system is proposed with the goal of realizing the desired motion of the output links of the mechanical systems. The success of the mechanical system is also rated by how well it enables the robot to generate non-verbal emotional content, which is why an experiment was conducted using the face of the human-like robot MARKO, covered with a face mask to focus the participants on the eye region. The participants evaluated the efficiency of the robot's non-verbal communication, with certain emotions achieving a high rate of recognition. In spite of the global fight against the coronavirus disease (COVID-19) [1], with over 3 million people contracting the virus daily [2], the experiences of healthcare workers in the most developed countries in the world [3][4][5][6][7][8][9][10] have shown that the healthcare system is fundamentally unprepared for a long-term or intense pandemic [11][12][13][14][15][16]. The system becomes overwhelmed quickly, risking the possibility of many people receiving substandard care [17][18][19][20][21][22], with the rate of diagnosis of the diseases with the highest mortality rates, such as malignant [23][24][25][26] and cardiovascular [27][28][29][30] diseases, dropping substantially. Both those fallen ill in the pandemic and those with chronic conditions require medical care based on interpersonal interaction, which is neither easy nor safe to provide under pandemic conditions [31][32][33][34][35]. Keeping in mind the numerous mutations of the virus and its new, more contagious variants [36][37][38][39], in spite of social distancing [40][41][42][43], preventative measures [44][45][46][47], and vaccination efforts [48][49][50][51], it is assumed that the use of disruptive technologies such as Industry 4.0 [52][53][54][55][56], the Internet of Things (IoT) [57][58][59][60][61], the Internet of Medical Things (IoMT) [62][63][64][65][66], and others [67][68][69][70][71], together with robotic technologies [72][73][74][75][76], could play a key role in the fight against the pandemic and in relieving the healthcare system, as well as in preventing its collapse on a global scale. According to Ref.
[77], notable examples of the use of disruptive and robotic technologies in the fight against the pandemic and the preservation of public health are seen in: (i) diagnosis robots for fast scanning and mass testing of people by measuring body temperature and taking oropharyngeal swabs, (ii) logistics robots for the safe transport of infective waste, sterilized medical material, swab samples, blood, and urine, and (iii) healthcare robots. The eyeball, thanks to the structure of its media, the dioptric apparatus, and the presence of neuroepithelial elements in the retina, enables the reception of visual impressions. The visual pathways connect the neural membrane of the eyeball, the retina, with the visual centers of the brain. Therefore, a visual stimulus formed on the retina is transported to the relevant centers in the brain for further interpretation of the signal. The auxiliary elements of the eye are the eyebrows, eyelids and eyelashes, the lacrimal apparatus, the ocular muscles, the orbital cavity, and others. Their primary function is both to protect the eyeball and to enable all the complex processes the eye performs daily. The eyeball is akin to a sphere and consists of 3 mantles and a gelatinous filling that makes up 4/5 of the eyeball. The front-facing part of the outside mantle is the cornea, an integral part of the dioptric apparatus due to its transparency and slight curvature, while the back-facing part, the sclera, is opaque, significantly thicker, and white in color. The middle mantle, whose main function is to feed the eyeball, encompasses the iris (the diaphragm regulating the amount of light intake), the ciliary body (which produces and secretes the aqueous humor), and the choroid (a key element in the feeding of the optical part of the retina). The inner mantle, the retina, in an embryonic sense represents an extension of the brain matter and, thanks to the presence of neuroepithelial cells, enables the reception of visual impressions.
The inside of the eyeball encompasses the aqueous humor, a clear and completely transparent liquid which is the main factor determining intraocular pressure, and the lens. According to Ref. [109], the ranges of motion of adduction and abduction are nearly the same, 44.9 ± 7.2° and 44.2 ± 6.8°, respectively, so the total yaw range of motion of the eyeball equals approximately 90°. On the other hand, the ranges of motion of elevation and depression differ, 27.9 ± 7.6° and 47.1 ± 8.0°, respectively, making the total pitch range of motion approximately 75°. The smallest ranges of motion are afforded to incyclotorsion and excyclotorsion, only a few degrees each [110], which is why the roll motion of the eyeball is disregarded in this paper. The speed of the eyeball depends on the type and nature of the motion, and is determined by observing both eyes simultaneously. According to Refs. [111,112], the principal types of eye motion are: (i) saccades, (ii) smooth pursuit movements, (iii) vergence movements, and (iv) vestibulo-ocular movements. Horizontal and vertical saccades are rapid movements of the eyes between fixed points that abruptly shift the direction of the gaze, for example, when reading a newspaper or scouring the objects in a room; in this case, the angular speed of the eyeball reaches values of 400-800°/s. On the other hand, smooth pursuit movements are gentle and very slow movements of the eyes that enable the tracking of objects in motion at great distances; in this case, the angular speed does not exceed 30°/s. Differing from these types of movement, where both eyes rotate in the same direction, vergence movements rotate the eyeballs in different directions, allowing them to focus on specific objects, for example, when moving a finger to and from the nose; in this case, the angular speed reaches values of 30-150°/s. Vestibulo-ocular movements are reflexive eye movements that compensate for sudden and abrupt head movements to stabilize the image seen by the eyes; in this case, the angular speed reaches values of 800°/s.
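For later reference in controller design, the speed envelopes quoted above can be collected as plain constants. The following is a minimal sketch; the names and structure are ours, not the paper's:

```python
# Angular-speed envelopes of the principal eye movements (deg/s), as quoted
# above from Refs. [111,112]; names and structure are illustrative only.
EYE_MOVEMENT_SPEEDS_DEG_S = {
    "saccade":          (400.0, 800.0),  # rapid jumps between fixation points
    "smooth_pursuit":   (0.0,   30.0),   # slow tracking of distant moving objects
    "vergence":         (30.0,  150.0),  # eyes rotate in opposite directions
    "vestibulo_ocular": (0.0,   800.0),  # reflexive compensation of head motion
}

def classify_by_speed(speed_deg_s):
    """Return the movement types whose envelope contains a measured speed."""
    return [name for name, (lo, hi) in EYE_MOVEMENT_SPEEDS_DEG_S.items()
            if lo <= speed_deg_s <= hi]
```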
Blinking is a complex, short, and nearly periodic physiological action during which the eyelids fully close and fully open, while the duration depends on the type of motion. According to Refs. [113,114], the principal types of eyelid movements are: (i) reflex blinking, involuntary, abrupt, and rapid movements caused by stimulation of the retina, a touch, or any other peripheral stimulus; the duration of this type of blink is the shortest and equals 205 ± 18 ms, (ii) voluntary blinking, movements which the subject performs willingly due to internal or external commands; the duration of these movements is longer and equals 275 ± 37 ms, and (iii) spontaneous blinking, unconscious and continuous movements with the longest duration of any type, 334 ± 67 ms. It should be noted that the closing phase lasts 2.5 times less than the opening phase, meaning that the speeds differ as well [114]. The spontaneous blink rate is on average 10-20 blinks/min [115], depending on age, gender, time of day, as well as the fatigue and concentration of the subject; women blink twice as much as men in the same time period [116]. According to Ref. [117], the range of motion of the upper eyelid depends on the type and phase of the motion, with the highest value, 41.3 ± 5.3°, being reached during the closing phase of reflex blinking, when the angular speed of the upper eyelid reaches values of 1108.0 ± 157.0°/s. The range of motion of the lower eyelid has only been discussed in the available literature as a consequence of vertical saccades, when the eyelids move together with the eyeball in an up-and-down motion [118]. Visually, in the normal eyelid position, the upper eyelid is 2 mm below the periphery of the iris, while the lower eyelid is exactly on the periphery of the iris [119]. On the other hand, some authors measure the distance between the eyelids and the center of the pupil [120,121]. Observation has shown that the line of contact between the eyelids when the eyes are closed is between the center of the pupil and its periphery, which is when the lower eyelid achieves the maximum possible rotation angle. According to Ref. [122], the ideal position of the eyebrow is defined by a right-angle triangle (see Figure 1) formed by the medial and lateral canthus (points C and D, respectively) and the outside part of the nose, the ala (point E). There are 7 principal types of eyebrow movement [123], with their amplitudes directly depending on which part of the eyebrow is being actuated (medial, above the pupil, or lateral) and in which direction (raising or lowering). According to Ref.
[124], eyebrow raising ability decreases with age: for the age groups 20-39 and ≥40, the amplitude equals 13.0 ± 2.9 mm and 9.8 ± 2.0 mm at the medial canthus, and 15.7 ± 2.6 mm and 12.7 ± 1.7 mm at the midpupillary line, respectively. On the other hand, for markers placed on the midpupillary line and for voluntary movements, the maximum raising amplitude for the left and right eyebrow equals 9.75 mm and 10.14 mm, with the speeds reaching 24.11 mm/s and 25.87 mm/s, respectively [125]. However, during abrupt movements caused by fear, the eyebrows, along with the upper eyelids, raise reflexively, and in this case, the speed of the eyebrows is much higher. Based on the structure and kinematics of the eye and eyebrows, the following is concluded: (i) although the eyeball has 3 DOFs, only the pitch and yaw movements are relevant here, with their ranges of motion equaling approximately 90° (adduction 45° + abduction 45°) and 75° (elevation 25° + depression 50°), respectively; the roll motion of the eyeball has a very small range of motion and is thus disregarded; the angular speed of the eyeball reaches its highest values during saccadic and vestibulo-ocular movements, nearly 800°/s, and its lowest during smooth pursuit movements, not exceeding 30°/s; (ii) the kinematic parameters of the upper and lower eyelids are different: the upper eyelid is nearly two times wider than the lower eyelid, so its range of motion is also twice as big, and accordingly, so is its angular speed; the maximum ranges of motion of the upper and lower eyelids equal 45° and 20°, respectively, with the angular speed of the upper eyelid during the closing phase of reflexive movements reaching approximately 1100°/s; it should be noted that the closing phase lasts around 2.5 times less than the opening phase, with the total duration of a blink being 0.2-0.4 s; (iii) the kinematics of the eyebrows are complex and depend on the part of the eyebrow being actuated as well as the direction; the amplitude when raising the eyebrows is approximately 10-15 mm, with the speed during voluntary movements reaching 25 mm/s; however, during reflexive movements of the eyelids and eyebrows caused by fear, higher speeds should be expected. The literature review should provide information on realized robots that are able to intuitively and transparently express human-like emotions by moving characteristic parts of the face, such as the eyes, eyebrows, and mouth, independently of the rest of the face. Accordingly, there are two approaches in the design and realization of socially interactive robot faces. The first refers to rigid faces with moving mechanical parts (eyeballs, eyelids, eyebrows, and mouth), while the second approach involves a rigid face on which the eyes, eyebrows, and mouth are displayed using light-emitting diodes (LEDs). However, it is possible to combine these two approaches. Accordingly, the literature review will cover two basic groups of problems: (i) robots that have rigid faces and moving mechanical parts such as eyeballs, eyelids, and eyebrows, and (ii) robots that also have rigid faces, where the eyeballs and eyelids are actuated mechanically, while the eyebrows and/or mouth are displayed using LEDs.
We will additionally analyze: (i) the number of DOFs of the eyes and eyebrows, because a larger number of DOFs allows a wider range of movements and, consequently, a wider range of non-verbal facial expressions (emotions), (ii) how motion is transmitted from the actuators to the eyeballs, eyelids, and eyebrows, the output links of the driving mechanisms, (iii) the types of actuators and sensors used, and (iv) the ability of the robots to produce facial expressions. A humanoid robot head called HYDROïD with minimal emotion capabilities is shown in [126]; the robot has 4 DOFs eyeballs, 3 DOFs eyebrows, and a 5 DOFs mouth mechanism; pitch and yaw movements of the eyeball are enabled by gear and pulley systems, respectively, while the eyeballs and eyebrows are actuated by Athlonix 12G88 motors and GWS Naro servomotors, respectively; the robot is capable of producing 2 facial expressions (happiness and sadness). A robotic head called EMYS (EMotive headY System) with emotion expression capabilities is shown in [127]; the robot has 2 DOFs eyeballs and 4 DOFs eyelids actuated by micro Hitec HS-65HB servomotors, while a Logitech Sphere AF color complementary metal-oxide semiconductor (CMOS) camera located in the nose, a Kinect motion sensor, and a microphone for sound reception and speech recognition are used to perceive the environment; the robot is capable of producing 6 facial expressions (anger, disgust, fear, joy, sadness, and surprise). A multi-sensor robotic head called Muecas for affective HRI is shown in [128]; the robot has 3 DOFs eyeballs, 4 DOFs eyebrows, and a 1 DOF mouth; pitch eyeball movements are enabled by a Faulhaber LM-2070 linear direct current (DC) servomotor via a linear guide mechanism, while two Faulhaber LM-1247 linear DC servomotors directly provide independent yaw eyeball movements; the eyebrows are directly actuated by Hitec HS-45HB servomotors; the robot has a stereo audio system (speakers and microphones), a vision system consisting of stereo Point Gray Dragonfly2 IEEE-1394 cameras with a custom control sensor (CCS) and controller, as well as a red green blue-depth (RGB-D) sensor; the robot is capable of producing 4 facial expressions (sadness, happiness, fear, and anger). A mobile humanoid robotic platform called Robovie designed for HRI is shown in [129]; the robot has 4 DOFs eyeballs for gaze control actuated by direct-drive motors; it also has obstacle detection sensors, tactile sensors, an omnidirectional vision sensor, and microphones for receiving and recognizing voice commands. The social robot SyPEHUL (System of Physics, Electronics, HUmanoid robot and machine Learning) is shown in [130]; the robot has 2 DOFs eyeballs, 2 DOFs eyebrows, a 4 DOFs mouth, and 2 DOFs ears; all joints are actuated by servomotors, while a facial expression recognition camera is located between the eyes; the robot is able to produce 4 facial expressions (happiness, sadness, anger, and surprise). A huggable social robot called Probo designed for HRI research with a focus on nonverbal communication is shown in [131]; the robot has 3 DOFs eyeballs, 2 DOFs eyelids, 4 DOFs eyebrows, 2 DOFs lips, 2 DOFs ears, and a 1 DOF jaw, where the eyeballs, eyelids, and eyebrows are actuated by compliant Bowden cable-driven actuators (CBCDAs); it also has a charge-coupled device (CCD) vision camera located between the eyes, sound processing microphones, and force sensing resistors for touch; the robot is capable of producing 6 facial expressions (anger, disgust, fear, happiness, sadness, and surprise).
A humanoid research platform called CB (Computational Brain) for exploring neuroscience is presented in [132]; the robot has 4 DOFs eyeballs and two cameras in each eye, an Elmo MN42H 17 mm OD (peripheral) and an Elmo QN42H 7 mm OD (foveal), for visual processing and ocular-motor responses using sensors and vision software; in addition, stereo microphones enable the robot's sense of hearing after perceptual signal processing. A humanoid head called Amir-II with emotion expression capabilities is shown in [133]; the robot has 2 DOFs eyelids, 2 DOFs eyebrows, and a 3 DOFs mouth; all joints are actuated by Dynamixel AX-12 servomotors, while a universal serial bus (USB) webcam mounted on the robot's head is used for vision; the robot is capable of producing 4 facial expressions (happiness, anger, sadness, and disgust). A humanoid robotic torso called James designed to operate in an unstructured environment is shown in [134,135]; the robot has 4 DOFs eyeballs with two digital CCD Point Gray Dragonfly cameras in them, actuated by Faulhaber motors via tendon-driven mechanisms; it also has an Intersense iCube2 3-axis orientation tracker (vestibular system) mounted on the head, while pressure sensors are used for tactile information. An infant-like robot called Infanoid, designed to investigate the underlying mechanisms of social intelligence, is presented in [136]; the robot has 3 DOFs eyeballs, 2 DOFs eyebrows, and 2 DOFs lips; 2 different color CCD cameras, with a wide-angle and a telephoto lens for object recognition and focusing, respectively, are located in each eyeball. A mobile humanoid robot called Robotinho, a tour guide with multimodal interaction capabilities, is shown in [137]; the robot has 4 DOFs eyes represented by 2 USB cameras, 4 DOFs eyebrows, and upper eyelids with 1 DOF (the lower eyelids move together with the eyeballs), as well as a jaw and mouth with a total of 6 DOFs; all joints are actuated by small digital Dynamixel servos; it also has an attitude sensor (a dual-axis accelerometer and two gyroscopes), 8 Devantech SRF02 ultrasonic distance sensors, and a laser range finder (LRF); the robot is capable of producing 6 facial expressions (surprise, fear, joy, sadness, anger, and disgust). An emotion-display robot called EDDIE (Emotion Display with Dynamic Intuitive Expressions) is shown in [138]; the robot has 3 DOFs eyeballs, 4 DOFs eyelids, and 4 DOFs eyebrows that are actuated by miniature Atom Mini servomotors via levers and rings, while FireWire cameras are located in the eyeballs; in addition to the vision sensor, two microphones for sound identification and speech recognition are located on the head; the robot is able to produce 6 facial expressions (joy, surprise, anger, disgust, sadness, and fear). An active vision humanoid head robot called MERTZ is shown in [139]; the robot has 3 DOFs eyeballs, 2 DOFs eyebrows, and 1 DOF for the upper eyelids; Point Gray OEM Dragonfly cameras are located in the eyes, allowing visual input, while the GN Netcom VA-2000 voice array desk microphone allows interaction with multiple people simultaneously; in addition, the robot has force sensors and motor encoders.
A mobile-dexterous-social robot called MDS Nexi with a highly articulate face for HRI research is shown in [140,141]; the robot has 3 DOFs eyeballs, 2 DOFs eyelids, 2 DOFs eyebrows, and a 3 DOFs jaw; FireWire color cameras with a 6 mm microlens are located in the eyeballs, while a three-dimensional infrared (3D IR) depth-sensing camera for facial and object recognition is placed on the robot's forehead, along with 4 microphones to localize sound; all joints are equipped with current sensors and high-resolution encoders; the robot is capable of producing several facial expressions, such as anger, confusion, excitement, boredom, etc. The Karlsruhe humanoid head, an experimental platform for the realization of interactive service tasks and cognitive vision research, is presented in [142]; the robot has 4 DOFs eyeballs actuated by Harmonic Drive motors with backlash-free gears and Faulhaber DC motors with backlash-free gears, enabling pitch and yaw movements, respectively; two Point Gray Dragonfly2 IEEE-1394 cameras (a wide-angle lens for peripheral vision and a narrow-angle lens for foveal vision) are located in each eyeball, and the robot has an acoustic sensor (a six-channel microphone system) and an inertial system (encoders, gyroscope). An interactive robotic cat called iCat with both object and facial recognition capabilities is shown in [143,144]; the robot has 3 DOFs eyeballs, 2 DOFs eyelids, 2 DOFs eyebrows, and a 4 DOFs mouth; all joints are actuated by radio control (RC) servomotors, while the camera for recognizing objects and faces is located in the nose; it also has an audio system (microphones for receiving and recording sound signals, recognizing speech and its direction, as well as a speaker for generating speech), tactile sensors, and multi-color LEDs in the ears and legs for more efficient emotion expression; the robot is capable of producing 6 facial expressions (happiness, surprise, fear, sadness, disgust, and anger). The Bielefeld anthropomorphic robot head called Flobi with a human-like appearance is shown in [145]; the robot has 3 DOFs eyeballs with a Point Gray Dragonfly2 camera in each eye, 4 DOFs eyelids, 2 DOFs eyebrows, and a 6 DOFs mouth; the eyeballs and eyebrows are actuated by Maxon motors via levers and tendon-driven mechanisms, respectively; it also has a red green blue (RGB) sensor with M12 micro lenses, a high-sensitivity microphone, two different gyroscopes, and LEDs in the cheeks that change color in accordance with the expressed emotion; the maximum angular speed of saccadic movements is 500°/s; the robot is capable of producing 5 facial expressions (happiness, sadness, anger, surprise, and fear). An interactive robot called Golden Horn with emotion expression capabilities and face detection is shown in [146]; the robot has 4 DOFs eyeballs and 4 DOFs upper eyelids actuated by artificial intelligence (AI) motors; in addition, it has LEDs in the cheeks to generate certain emotions, while a webcam and a microphone are encapsulated in the eyeballs, allowing face detection and voice recognition, respectively; the robot is capable of producing 6 basic and several additional facial expressions (happiness, anger, sadness, surprise, disgust, and fear, as well as sleepiness, innocence, disregard, nervousness, dizziness, and doubtfulness).
A bipedal humanoid robot called Romeo with gaze-shifting capabilities is shown in [147]; the robot has 4 DOFs eyeballs actuated by brushed Maxon DC motors via proximal links; the maximum controllable and non-controllable angular speeds of the eyeball are 450°/s and 1000°/s, respectively; the robot has two Aptina Imaging MT9M114 cameras located in the eyeballs, LEDs for displaying the mouth, microphones and speakers, as well as tactile sensors and a depth sensor for navigation and perception. An open humanoid platform called Epi designed for experiments in developmental robotics is presented in [148]. What sets Epi apart from other robots are its eyes with controllable pupils and iris color; the robot has 4 DOFs eyeballs actuated by Dynamixel servomotors, enabling yaw eyeball movements and animated pupil movements (the inner body of the eye contains an LED ring and 12 independently controlled RGB diodes), while pitch eyeball movements are not possible; the maximum angular speed of the horizontal saccades is 475°/s; the robot has cameras located in both eyes for stereo vision, contact and bend sensors in the hands, and LEDs for generating the lips. An expressive bear-like robot called eBear for the exploration of HRI, including verbal and non-verbal communication, is shown in [149]. The robot has 2 DOFs eyeballs, 2 DOFs eyelids, 2 DOFs eyebrows, and 2 DOFs ears; all joints are actuated by Hitec PWM servomotors; it also has a camera to recognize facial expressions with an appropriate visual recognition system, and LEDs to display the mouth and generate different emotions; the robot is capable of producing 6 facial expressions (joy, anger, sadness, disgust, surprise, and fear). An open source humanoid robotic platform called iCub, designed explicitly to support research in embodied cognition, is shown in [150]. The robot has 3 DOFs eyeballs with cameras located in them, actuated by brushed Faulhaber DC motors via toothed belts, while the eyebrows and mouth are displayed using LEDs, allowing basic facial expressions; in addition, the robot has vestibular, auditory, and haptic sensory capabilities. The Twente humanoid head, designed as a research platform for human-machine interaction (HMI), is presented in [151]; the robot has 3 DOFs eyeballs in which CCD cameras are located to track objects and perceive human facial expressions, while the eyebrows and mouth are displayed using LEDs, enabling human-like facial expressions. A multifunctional emotional biped humanoid robot called KIBO with facial expression capabilities and various human-interactive devices is shown in [152]; the robot has 4 DOFs eyeballs, 4 DOFs eyelids, 2 DOFs eyebrows, and 5 DOFs lips; stereo cameras are located in the eyeballs, while the actuation of the joints is performed by small Hitec RC servomotors; in addition, it has a camera for position assessment, microphones for voice recognition, an ultrasonic sensor for front obstacle detection and distance measurement, as well as a lower ground camera for floor obstacle detection; using LEDs, the robot changes color depending on the situational context and the expressed emotion.
The emotion expression humanoid robot called WE-4RII (Waseda Eye No.4 Refined II) is shown in [153]; the robot has 3 DOFs eyeballs in which CCD cameras are located, 6 DOFs eyelids, 8 DOFs eyebrows, 4 DOFs lips, and a 1 DOF jaw; pitch eyeball movements are enabled by a DC motor and harmonic drive system via a belt-driven mechanism, while independent yaw eyeball movements are enabled by DC motors and torsion springs via tendon-driven mechanisms; the eyelids are actuated in a similar way; the maximum angular speed of the eyeball is 600°/s, while one blink lasts 0.3 s and achieves a speed of 900°/s, which is similar to a human; the robot has microphones, temperature sensors, tactile sensors, gas sensors, and force sensors, while the cheeks change color in accordance with the expressed emotion using electroluminescence (EL); in addition to speech recognition, the robot is capable of producing 6 facial expressions (happiness, surprise, anger, disgust, fear, and sadness). Based on a review of the available literature and analysis of the results, we conclude: (i) robots typically have 3 or 4 DOFs eyeballs allowing common pitch and independent yaw movements, or independent pitch and yaw movements of each eye, respectively; (ii) robots generally have 2 or 4 DOFs eyelids allowing the upper eyelids to rotate independently (while the lower eyelids are stationary or move in accordance with the vertical saccades of the eye), or each eyelid to move independently, respectively; (iii) robots typically have 2 or 4 DOFs eyebrows allowing independent rotation or translation of the eyebrows, or independent rotation and translation of each eyebrow, respectively; (iv) the transmission of motion from the actuators to the eyeballs, eyelids, and eyebrows is typically realized using gears, levers and rings, tendon-driven mechanisms, belt-driven mechanisms, cable-driven mechanisms, linear-guide mechanisms, or direct-drive actuators; (v) joint actuation is most commonly performed by servomotors, while Maxon and Faulhaber DC motors are less commonly used; (vi) cameras can be located in the eyeballs, one or two in each eye, allowing perception of the environment and recognition of faces and objects using vision and image processing systems; however, most robots have cameras located on the head, forehead, nose, or chest; (vii) robots generally have one or more microphones for receiving and processing audio signals, as well as a speaker for transmitting verbal messages; (viii) in the end, only one robot has eyes and eyebrows developed in accordance with the biological and kinematic principles of the human eye. During the previous two decades, robotics has developed a presence within the field of healthcare, and its technologies are generally accepted by doctors, nurses, and patients [154][155][156]. The use of SARs in therapy for people with autism spectrum disorder (ASD) [157][158][159], cerebral palsy (CP) [160][161][162], and dementia [163][164][165] has had positive effects. Additionally, the use of robots as emotional and social support for persons with mild cognitive impairment or older people who are alone and/or lonely has been the subject of many studies [166][167][168][169]. Figure 3a shows the human-like robot MARKO, which is used as a motivational tool in physical therapy for children with CP [170].
Due to the nature of CP, and since no two children will have identical clinical manifestations, it is key to detect the illness within the first few years of life, determine the diagnosis, and begin physical therapy, which is the cornerstone of CP treatment [171][172][173]. One of the goals of physical therapy is to strengthen the musculature and improve the fine motor skills of the child, with success being directly dependent on how willing the child is to do the exercises, thus preventing contractures. However, although the success of the therapy is directly proportional to its duration, executing these exercises is problematic due to the brain damage: the movements are often strenuous, painful, and tiring, so the child very quickly loses interest in working with the therapist. According to the clinical study shown in [170], it was determined that the robot MARKO raises the interest of the children in completing the exercises, and motivates and encourages them to exercise longer when compared to the conventional approach, thus making the therapy more successful. The robot first engages the child verbally, after which it begins demonstrating the exercise. The child then must repeat the exercise as many times as they can. After every completed exercise, the robot rewards the child with praise. It was noted that every child needs a script tailored to them specifically, and that children, in general, perceive the robots as human beings. Due to this, the robot should be able to express emotions in a human-like way, in line with the kinematic principles of the eye and eyebrows. The robot has 4 DOFs eyeballs, 4 DOFs eyelids, and 3 DOFs eyebrows, as well as LEDs for the mouth and ears. CCD Fire-i board cameras are located within the eyeballs, while all the joints are actuated using Modelcraft servos. Additionally, it has a microphone, a speaker, and a system for speech recognition and synthesis.
The mechanical systems of the eyeballs, eyelids, and eyebrows are reconstructed in this paper for a number of reasons: (i) the eyes and eyebrows of the robot are not capable of producing the types of motion, the ranges of motion, or the speeds of their human counterparts, and having these capabilities would, on a functional level, enable the spectrum of movements necessary for the simulation of emotional expressions, which is a key feature; (ii) the existing actuators are not capable of producing speeds appropriate for human-like motion, and the output link has high values of arc backlash, which negatively impacts the positioning accuracy and the repeatability of the output link motion, especially since the cameras are located in the eyeballs; this problem also causes jerks when movements are initiated, which negatively impacts the stability of the picture; (iii) the dimensions and shape of the actuators directly influenced the structure of the mechanical system and the mechanism dimensions: the structure is not optimized, and the dimensions are too large, making the eye modules take up most of the head's volume (see Figure 3b); a consequence of this are potential problems during motion, as unfavorable transmission angles and low mechanical advantage would cause most of the power to be wasted on overcoming internal friction in the mechanism joints; (iv) the driving mechanisms of the eyes are linkage mechanisms, but the links are imprecisely bent, which caused issues with the kinematics; (v) the eye module dimensions directly impacted the structure and dimensions of the eyebrow rotation and translation mechanism; due to this, the eyebrows were positioned outside of the eye region (see Figure 3b), which is not in line with the anthropometrics of the face; in effect, the eyebrows are not functional, the driving mechanism being inadequate because of the lack of space in the head; (vi) each eyeball has 2 DOFs,
and due to everything stated so far, there are inconsistencies when realizing the saccades, which manifest as strabismus; an additional problem is the realization of vergence movements for focusing on objects in the line of sight; (vii) the base platforms of the eyes and eyebrows were made using 3D binder jetting technology and 3D printing technology, respectively; the consequences of this are manufacturing errors due to deformation during the hardening and cooling of the material, with negative effects on the positioning accuracy and part assembly; all of this caused further problems, such as backlash in the joints and high values of friction. The goal of this paper is the structural design of a new mechanical and control system of robot eyes, which will functionally enable an assortment of movements that human eyes and eyebrows are capable of, to simulate the emotional state of the robot. The mechanical system must represent an adequate hardware platform for the development and implementation of robotic vision and algorithms with different purposes, such as face and object detection, emotion recognition, semantic segmentation of scenes, etc. By using a vision system, supported by a sophisticated mechanical and control system, robots could lower the burden carried by the healthcare system, contributing to the quality of care of ill and threatened persons, as well as to the safety of healthcare workers. The mechanical system shown in this paper consists of three independent subassemblies: (i) the mechanical system of the eyeballs, (ii) the mechanical system of the eyelids, and (iii) the mechanical system of the eyebrows. Due to their independence, each of them will be considered with regard to their structure and preliminary dimensions. The following text presents the structure of the systems driving the eyeballs, eyelids, and eyebrows, as well as the basic equations describing their kinematic behavior. Figure 4 shows the structure of the eyeball mechanical system with a total of 3 DOFs, allowing the pitch and yaw motions of the eyeball, angles φ_L/R and ψ_L/R, respectively. The mobile platforms marked as L_L/R, J′_L/R, and J″_L/R are the eyeballs, realized as spheres with center points in O_L/R. The base platforms are integrated with the robot head frame and are defined by points K_0(L/R), H_0(L/R), and G_0(L/R). The motion of the eyeball is defined by one RSU(L/R) leg (R, S, and U stand for revolute, spherical, and universal joints, respectively) and two identical PML1(L/R) and PML2(L/R) legs, which form planar four-bar linkages with the parallelogram configurations G_0(L/R) G′_L/R H′_L/R H_0(L/R) and G_0(L/R) G″_L/R H″_L/R H_0(L/R), respectively. The RSU(L/R) leg provides the pitch rotation, angle φ_L/R, while the PML1(L/R) and PML2(L/R) legs provide the yaw rotation of the eyeball, angle ψ_L/R. The motion is achieved with 3 actuators placed in joints K_0L, G_0L, and G_0R. However, the PML1(L/R) and PML2(L/R) legs are driven by the same actuator, since the levers G_0(L/R) G′_L/R and G_0(L/R) G″_L/R are fixed to one another. The four-bar linkage marked as LEV transmits the motion from the actuator in joint K_0L to the passive joint K_0R, therefore making α_L = α_R. The unit vectors of the axes of the R joints are marked as n_α(L/R) and n_φ(L/R). Due to the joint structure, the eyeball can complete pitch and yaw motions, either independently or simultaneously.
The eyeball center does not move during either motion, making the motion of the eyeball spherical with regard to its center. The local coordinate system O_L/R x_e(L/R) y_e(L/R) z_e(L/R) is fixed to the eyeball and, in the initial position, the directions of its axes coincide with the axes of the fixed global coordinate system Oxyz. Since the mechanisms of the left and right eyeballs are structurally identical, the indexes denoting left (L) and right (R) will be omitted in the following text. From the input parameters of the eyeball driving system, the lever lengths and the position angles of the mechanism input links (angles α and β), the following output kinematic parameters are determined: the position of the eyeball (angles φ and ψ) and its angular velocities. Firstly, the pitch motion of the eyeball, defined by the rotation angle φ, is considered. In the corresponding relations, k and l are the position vectors of points K and L, k_0 and l_0 are the position vectors of the immobile points K_0 and L_0, and k_s and l_s are the position vectors of points K and L in the initial position. The rotation matrix $[R_{\alpha,n_\alpha}]$, a rotation α around the axis $n_\alpha = (n_{\alpha x}, n_{\alpha y}, n_{\alpha z})$, is determined according to the axis-angle form

$$[R_{\alpha,n_\alpha}] = \cos\alpha\,[I] + \sin\alpha\,[P_{n_\alpha}] + (1-\cos\alpha)\,[Q_{n_\alpha}] \quad (7)$$

and the rotation matrix $[R_{\varphi,n_\varphi}]$, a rotation φ around the axis $n_\varphi = (n_{\varphi x}, n_{\varphi y}, n_{\varphi z})$, is determined analogously (Equation (8)). The matrices $[P_{n_\alpha}]$ and $[Q_{n_\alpha}]$ are determined according to:

$$[P_{n_\alpha}] = \begin{bmatrix} 0 & -n_{\alpha z} & n_{\alpha y} \\ n_{\alpha z} & 0 & -n_{\alpha x} \\ -n_{\alpha y} & n_{\alpha x} & 0 \end{bmatrix} \quad (9) \qquad [Q_{n_\alpha}] = \begin{bmatrix} n_{\alpha x}^2 & n_{\alpha x}n_{\alpha y} & n_{\alpha x}n_{\alpha z} \\ n_{\alpha x}n_{\alpha y} & n_{\alpha y}^2 & n_{\alpha y}n_{\alpha z} \\ n_{\alpha x}n_{\alpha z} & n_{\alpha y}n_{\alpha z} & n_{\alpha z}^2 \end{bmatrix} \quad (10)$$

From these, the angular speed of the output link LL_0, the velocity of point K (known from the actuator input), and the velocity of point L on the eyeball are determined. The yaw motion of the eyeball is considered next. The positions of points J′ and J″ are defined by the vectors j′ and j″, respectively (x_O and y_O are the coordinates of the eyeball center). Since G_0G′ = G_0G″, the eyeball rotates about the z-axis when the angle β changes. Due to this, the position of the eyeball, angle ψ, is equal to the position of the input link, angle β. Figure 5 shows the structure of the eyelid mechanical system with a total of 4 DOFs, which enables the rotation of the upper and lower eyelids, angles θ_U(L/R) and θ_L(L/R), respectively. The upper and lower eyelids, UEL_L/R and LEL_L/R, are spherical shells with centers in points O_L/R (the eyeball center points). The mechanical system consists of four spatial mechanisms with RSSR configurations, driven by actuators placed in joints U_0(L/R) and R_0(L/R). The unit vectors of the R joint axes are n_θU(L/R) and n_ρ(L/R) for the upper eyelid, and n_θL(L/R) and n_σ(L/R) for the lower eyelid. The local coordinate systems are fixed to the corresponding eyelids and, in the initial position, the directions of their axes coincide with the fixed global coordinate system Oxyz. The eyelids are open in the initial position. When they close, the plane where they make contact lies along the y-axis and is at a 10° angle relative to the horizontal plane.
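The axis-angle construction of Equations (7)-(10) is reused by the eyelid mechanisms below as well, so it is worth writing out directly. The following is a minimal sketch of that construction; the initial lever position k_s is a made-up value for illustration (only the fixed point K_0 is a coordinate quoted in the design requirements later in the text):

```python
import numpy as np

def skew(n):
    """[P_n]: the cross-product (skew-symmetric) matrix of a unit axis n, Eq. (9)."""
    nx, ny, nz = n
    return np.array([[0.0, -nz,  ny],
                     [ nz, 0.0, -nx],
                     [-ny,  nx, 0.0]])

def outer(n):
    """[Q_n] = n n^T: the outer-product matrix of Eq. (10)."""
    n = np.asarray(n, dtype=float)
    return np.outer(n, n)

def rot(angle, n):
    """Rotation by `angle` (rad) about the unit axis n, Eqs. (7)/(8):
    R = cos(a) I + sin(a) [P_n] + (1 - cos(a)) [Q_n]."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return (np.cos(angle) * np.eye(3)
            + np.sin(angle) * skew(n)
            + (1.0 - np.cos(angle)) * outer(n))

# Example: point K rotated about the input-link axis n_alpha = (0, 0, 1),
# i.e. k = k0 + R(alpha, n_alpha) (k_s - k0); k_s is assumed, k0 is from the text.
k0 = np.array([-80.0, 10.0, 10.0])      # fixed joint K0 (design requirements)
k_s = k0 + np.array([10.0, 0.0, 0.0])   # assumed initial position of K
alpha = np.deg2rad(20.0)
k = k0 + rot(alpha, (0, 0, 1)) @ (k_s - k0)
```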
Since the mechanisms of the left and right eyelids are structurally identical, the indexes denoting left (L) and right (R) will be omitted in the following text. Based on the input kinematic parameters of the eyelid driving system, the lever lengths and the positions of the input links (angles ρ and σ), the output kinematic parameters are defined: the positions of the eyelids (angles θ_U and θ_L) and their angular velocities. Firstly, the position of the upper eyelid is determined. In the corresponding relations, u and v are the position vectors of points U and V, u_0 and v_0 are the position vectors of the immobile points U_0 and V_0, u_s and v_s are the position vectors of points U and V in the initial position, [R_ρ,nρ] and [R_θU,nθU] are the rotation matrices (see Equations (7) and (8), respectively), and [P_nθU] and [Q_nθU] are the corresponding matrices (see Equations (9) and (10), respectively). From these, the angular speed of the output link VV_0, the velocity of point U (known from the actuator input), and the velocity of point V on the upper eyelid are determined. The position of the lower eyelid is determined analogously: r and t are the position vectors of points R and T, r_0 and t_0 are the position vectors of the immobile points R_0 and T_0, r_s and t_s are the position vectors of points R and T in the initial position, [R_σ,nσ] and [R_θL,nθL] are the rotation matrices (see Equations (7) and (8), respectively), and [P_nθL] and [Q_nθL] are the corresponding matrices (see Equations (9) and (10), respectively). From these, the angular speed of the link TT_0, the velocity of point R (known from the actuator input), and the velocity of point T on the lower eyelid follow.
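For a given input angle ρ, the upper-eyelid angle θ_U of the RSSR mechanism can be obtained numerically from the loop-closure condition that the floating link keeps its length. The sketch below uses illustrative geometry and an assumed axis layout, not the paper's actual dimensions:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.spatial.transform import Rotation

def rot(angle, axis):
    """Rotation matrix about a unit axis (axis-angle form, cf. Eqs. (7)-(10))."""
    return Rotation.from_rotvec(angle * np.asarray(axis, dtype=float)).as_matrix()

# Illustrative geometry only; not the paper's dimensions.
O   = np.array([0.0, 0.0, 0.0])           # eyeball/eyelid center
U0  = np.array([-60.0, 15.0, 20.0])       # fixed input joint U0
u_s = U0 + np.array([8.0, 0.0, 0.0])      # point U in the initial position
v_s = np.array([-15.0, 0.0, 20.0])        # point V (on the eyelid) initially
n_rho    = np.array([0.0, 1.0, 0.0])      # assumed input-link axis n_rho
n_thetaU = np.array([0.0, 1.0, 0.0])      # assumed eyelid axis n_thetaU
L3 = np.linalg.norm(u_s - v_s)            # floating-link length from the initial pose

def u(rho):       # u = u0 + [R_rho,n_rho] (u_s - u0)
    return U0 + rot(rho, n_rho) @ (u_s - U0)

def v(theta_u):   # v = [R_thetaU,n_thetaU] (v_s - O) + O: V swings about O
    return O + rot(theta_u, n_thetaU) @ (v_s - O)

def closure(theta_u, rho):
    """Loop-closure residual: the floating link UV must keep its length."""
    return np.linalg.norm(u(rho) - v(theta_u)) - L3

rho = np.deg2rad(15.0)                    # given input angle
theta_u = brentq(closure, np.deg2rad(-60.0), np.deg2rad(60.0), args=(rho,))
```

The same closure function, with points R, T and angle σ substituted, would give the lower-eyelid angle θ_L.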
Figure 6a shows the mechanical system of the eyebrows with a total of 2 DOFs, enabling rotational and translational motion of the eyebrows: angle φ_2 and displacement z_5 along the vertical axis, respectively. The eyebrows' rotation mechanism consists of two levers, 2_L and 2_R, which are fixed to each other, becoming input link 2; levers 3_L and 3_R, the floating links; and levers 4_L and 4_R, the output links, which are fixed to the left and right eyebrow, respectively. The eyebrows are raised by link 5, which performs translational motion in relation to the immobile link 1. As shown in Figure 6b, link 5 is fixed to a screw nut which moves along the threaded shaft of a spindle drive mechanism, enabling the transformation of rotational into translational motion. The actuator is positioned parallel to the x-axis, between the left and right eye modules, and drives the spindle by way of bevel gears (i = 1).

Figure 6. (a) Eyebrows' mechanical system with a total of 2 DOFs; (b) Spindle drive mechanism.

Figure 7 shows the eyebrow rotation mechanism in its initial (horizontal) and rotated positions. During eyebrow rotation, link 5 does not move, so the whole mechanism can be regarded as two independent four-bar linkages. The lengths of links 2, 3, and 4 for the left and right mechanism are r_2(L/R), r_3(L/R), and r_4(L/R), respectively. The relationship between the eyebrow rotation angle φ_4(L/R) and the input link angle φ_2(L/R) is obtained from the position analysis of the corresponding four-bar linkage, where y_C(L/R) and z_C(L/R) are the coordinates of point C for the left/right mechanism. The coordinates of point A for the left/right mechanism follow from the position angle of link 2, φ_2(L/R) = φ_2(L0/R0) + α, where φ_2(L0/R0) is the input link angle in the initial position, in which the left/right eyebrow is horizontal, and α is the rotation angle of link 2 with regard to the initial position. The main function of the mechanism is to transmit motion from the input link to the output link. For this, the driving force must be efficiently transmitted to the output link; the measure of this efficiency is the transmission index (TI) [174], whose value depends on the dimensions and current position of the mechanism. As the mechanism moves, the TI value changes within the interval from 0 to 1, with values closer to 1 indicating higher efficiency. Due to this, the dimensional synthesis will be conducted so that the eyeball, eyelid, and eyebrow mechanisms achieve their prescribed ranges of motion while keeping the TI as high as possible.
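Since each half of the eyebrow mechanism reduces to a planar four-bar linkage, its output angle can be sketched with the classic circle-intersection position solution. All dimensions below are made up for illustration; the coordinate pair follows the paper's (y, z) convention:

```python
import numpy as np

def fourbar_output_angle(phi2, A0, C, r2, r3, r4, branch=+1):
    """Planar four-bar position solution: input link 2 pivots at A0 with
    length r2, the coupler (link 3) has length r3, and the output link 4
    pivots at C with length r4. Returns the output-link angle phi4 (rad),
    measured as the direction of C -> B. Coordinates are (y, z)."""
    A = A0 + r2 * np.array([np.cos(phi2), np.sin(phi2)])  # moving pivot on link 2
    d = C - A
    L = np.linalg.norm(d)                                 # distance A..C
    # joint B lies on the intersection of circles (A, r3) and (C, r4)
    cos_gamma = (r4**2 + L**2 - r3**2) / (2.0 * r4 * L)
    if abs(cos_gamma) > 1.0:
        raise ValueError("mechanism cannot be assembled at this phi2")
    gamma = np.arccos(cos_gamma)        # angle at C in the triangle A-B-C
    base = np.arctan2(-d[1], -d[0])     # direction from C toward A
    return base + branch * gamma        # two assembly branches: +1 / -1

# Example with made-up dimensions (mm): input crank at 100 deg
A0 = np.array([0.0, 0.0]); C = np.array([30.0, 0.0])
phi4 = fourbar_output_angle(np.deg2rad(100.0), A0, C, r2=8.0, r3=30.0, r4=8.0)
```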
Figure 8 shows the vertical and horizontal saccadic movements of the eyeball, angle φ and angle ψ, respectively. For the vertical saccadic movements, in the initial position, the eyeball is rotated by φ_start = −30° around the y-axis, and it then rotates through the angle Φ = 75° to the end position φ_end = 45°. For the horizontal saccadic movements, in the initial position, the eyeball is rotated by ψ_start = −45° around the z-axis, and it then rotates through the angle Ψ = 90° to the end position ψ_end = 45°. The duration of both movements has been adopted to equal no more than 0.2 s. In the case of pitch rotation, the eyeball mechanism TI is defined as the cosine of the angle between the direction of the floating link KL and the direction of the velocity of point L [175]. Aside from the prescribed eyeball range of motion and keeping the TI as close to 1 as possible, an additional requirement is the minimization of the mechanism dimensions. Since some of the requirements oppose each other, the dimensional synthesis problem is defined as an optimization problem: minimization of the objective function F(x), x ∈ D, subject to the set constraints, where x = (x_1, x_2, ..., x_n) is the vector of variables, D = {x ∈ R^n | g(x) ≤ 0 ∧ h(x) = 0} is the set of solutions that fulfils the defined constraints, and g(x) ≤ 0 and h(x) = 0 are the vectors of constraints. The optimization variables are the geometric parameters of the mechanism: the length of the input link K_0K, the length of the output link OL, the initial position angle of the input link α_start, and the range of motion of the input link defined by the angle A = |α_end − α_start|. The objective function is therefore formed as:

$$f(x) = \frac{1}{\left|\operatorname{mean}(TI_{EBi})\right|} \quad (41)$$

where TI_EBi, i = 1, ..., n, is an array of TI values during the eyeball movement.
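The TI definition just given translates directly to code. The sketch below assumes the eyeball center O at the origin and the pitch axis n_φ parallel to the y-axis, in line with the design requirements stated below; point L then moves on a sphere about O, so its velocity direction is n_φ × l:

```python
import numpy as np

def transmission_index(k, l, n_phi=(0.0, 1.0, 0.0)):
    """TI for the pitch drive: cosine of the angle between the floating-link
    direction K -> L and the velocity direction of point L. With the eyeball
    center O at the origin, L moves on a sphere about O, so its velocity
    direction is n_phi x l (n_phi being the pitch axis)."""
    k, l, n_phi = (np.asarray(a, dtype=float) for a in (k, l, n_phi))
    link_dir = (l - k) / np.linalg.norm(l - k)
    vel_dir = np.cross(n_phi, l)
    vel_dir = vel_dir / np.linalg.norm(vel_dir)
    return abs(link_dir @ vel_dir)
```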
The desired interval of motion for the eyeball is prescribed, so the following equality constraint is given: h1 = |ϕend − ϕstart| − 75° = 0. The dimensions of the mechanism must be as small as possible due to the limited space inside the head of the robot, which is why inequality constraints are introduced (see Table 1). The following variables are prescribed according to the design requirements: the eyeball center is adopted as the coordinate system origin O(0,0,0); the rotation axis of the input link is parallel to the z-axis, making nα = (0,0,1); the rotation axis of the output link is parallel to the y-axis, making nϕ = (0,1,0); the position of the fixed point K0 (the position of the actuator) is K0(−80,10,10); and, in the initial position, point L coincides with the vertical xOz plane, while line OL is at an angle of 120° relative to the x-axis, making ls = (OL·cos 120°, 0, OL·sin 120°).
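As an illustration of this formulation, the following minimal Python sketch sets up the same kind of constrained minimization with SciPy's SLSQP solver. The function eyeball_motion() is a hypothetical stand-in for the RSU position analysis (not reproduced here), and the variable bounds are placeholders rather than the actual Table 1 values.

```python
import numpy as np
from scipy.optimize import minimize

def eyeball_motion(x, n=50):
    """Stand-in for the RSU position analysis. Given the design vector
    x = (K0K, OL, alpha_start, A), it should return the eyeball angles
    phi_i and the TI values sampled over the input-link sweep. The
    placeholder below only keeps the script runnable; substitute the
    actual kinematics."""
    K0K, OL, a_start, A = x
    alphas = np.linspace(a_start, a_start + A, n)
    phi = -30.0 + (alphas - a_start) * 75.0 / max(A, 1e-9)  # placeholder map
    ti = 0.8 - 0.2 * np.cos(np.radians(alphas))             # placeholder TI
    return phi, ti

def objective(x):
    _, ti = eyeball_motion(x)
    return 1.0 / abs(ti.mean())           # Eq. (41): maximize the mean |TI|

def h1(x):
    phi, _ = eyeball_motion(x)
    return abs(phi[-1] - phi[0]) - 75.0   # prescribed 75-degree eyeball range

bounds = [(5, 15), (20, 50), (0, 180), (10, 90)]   # assumed, not Table 1 data
x0 = np.array([10.0, 30.0, 90.0, 40.0])
res = minimize(objective, x0, method="SLSQP", bounds=bounds,
               constraints=[{"type": "eq", "fun": h1}])
print(res.x, res.fun)
```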
According to the previous statements, optimal dimensional synthesis of the RSU leg was conducted, yielding the values shown in Table 2. The length of the floating link KL is unambiguously defined by the prescribed and optimized geometric parameters. Figure 9 shows the results of a motion simulation conducted according to the data from Table 2. It should be noted that Δα and Δϕ represent the motion of the input and output links relative to their initial positions, defined as αstart and ϕstart, respectively. According to Figure 9, to achieve an eyeball range of motion of 75°, the actuator needs to rotate by 76.2°. During this motion, the maximum angular speed of the eyeball equals 769.1°/s, while the required angular speed of the actuator equals 770.4°/s. The TI value ranges from 0.62 to 0.98, which is satisfactory.

In the case of the yaw motion, the motion is achieved by planar four-bar linkages with parallelogram configurations, meaning the motion of the eyeball is identical to the motion of the actuator (ψ = β) and does not depend on the dimensions of the mechanism. For planar mechanisms, the TI is equivalent to the transmission angle γ, the angle between the link directions OJ', H'J' and OJ", H"J", respectively. According to Ref. [176], for lever mechanisms, the recommended bounds for the value of the transmission angle are γmin ≥ 45° and γmax ≤ 135°. In this case, the transmission angle depends solely on the position angle of the input links G0G' and G0G". According to everything stated above, a motion simulation was conducted, yielding the results shown in Figure 10. It should be noted that the angles Δβ and Δψ represent the motion of the input and output links relative to their initial positions, defined as βstart and ψstart, respectively. According to Figure 10, the ranges of motion and the angular speeds of the eyeball and actuator are the same and equal 90° and 769.1°/s, respectively, while the transmission angle ranges from 45° to 145°, which is satisfactory.

In the initial position, the eyelids are open at the angles θUopen and θLopen (see Figure 11). Then the upper eyelid is rotated through the angle ΘU = 50° to the upper closed position θUclosed, and the lower eyelid through the angle ΘL = 25° to the lower closed position θLclosed. In the closed position, the eyelids make contact in a plane angled so that θL0/U0 = θL/U(closed) = 10°. The eyelids then return to their initial positions. The duration of a single blink was adopted to be no more than 0.25 s. The dimensional synthesis will be conducted so that the eyelid mechanisms achieve the prescribed ranges of motion ΘU/L while keeping the force transmission as favorable as possible.
For the upper eyelid mechanism, the TI is defined as the cosine of the angle between the direction of the floating link 3 and the direction of the velocity of joint V; the TI of the lower eyelid is defined analogously. The objective function is formed in the same way as Eq. (41): f(x) = 1/|mean value of (TI_U/Li)|, where TI_U/Li, i = 1, ..., n, is an array of transmission index values during the eyelid motion. For constraints, the desired intervals of motion for the upper and lower eyelid are h1 = |θUopen − θUclosed| − 50° = 0 and h2 = |θLclosed − θLopen| − 25° = 0, respectively. Additionally, TI_U/L should not fall below an acceptable value (set to 0.5), i.e., c1 = 0.5 − min value of (TI_U/Li) ≤ 0. The optimization variables of the upper eyelid are the length of the input lever U0U, the length of the output lever OV, the angle of the input link in the initial position ρstart, and the interval of motion of the input link, i.e., the angle P = |ρend − ρstart|; the optimization variables of the lower eyelid are the length of the input link R0R, the length of the output link OT, the angle of the input link in the initial position σstart, and the interval of motion of the input link, i.e., Σ = |σend − σstart|. The eyes of the robot must fit in the space available in the head, so bounds on the mechanism dimensions are given in Tables 3 and 4 for the upper and lower eyelid, respectively. The following variables are prescribed according to the design requirements: the eyeball center is adopted as the coordinate system origin O(0,0,0), and the axes of rotation of the input and output links are parallel to the y-axis, meaning nρ = nσ = (0,1,0) and nθL = nθU = (0,1,0).
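The TI definition used for both eyelid mechanisms reduces to a dot product between unit vectors. A minimal sketch, with made-up vectors standing in for the floating-link direction and the velocity of joint V obtained from the RSSR position analysis:

```python
import numpy as np

def transmission_index(link_dir, joint_vel):
    """TI as the cosine of the angle between the floating-link direction
    and the velocity direction of the driven joint (V or T); the absolute
    value keeps the index in [0, 1] regardless of vector orientation."""
    u = link_dir / np.linalg.norm(link_dir)
    v = joint_vel / np.linalg.norm(joint_vel)
    return abs(float(np.dot(u, v)))

# Illustrative vectors only (not taken from the paper's geometry):
print(transmission_index(np.array([1.0, 0.2, 0.0]),
                         np.array([0.9, 0.4, 0.1])))
```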
According to the statements above, optimal dimensional synthesis of the RSSR mechanism was conducted for the upper and lower eyelid, yielding the values shown in Tables 5 and 6, respectively. Figure 12 shows the results of the upper eyelid mechanism simulation conducted according to the values from Table 5. It should be noted that Δρ and ΔθU represent the motion of the input and output links relative to their initial positions, defined as ρstart and θUopen, respectively. According to Figure 12, for the upper eyelid range of motion to be 50°, the actuator needs to rotate by 75.3°. The maximum angular speed of the upper eyelid equals 727.9°/s, with the required angular speed of the actuator being 1034.6°/s. The TI value ranges from 0.62 to 0.98, which is satisfactory.

Figure 13 shows the results obtained by simulating the lower eyelid mechanism according to the data from Table 6. It should be noted that Δσ and ΔθL represent the motion of the input and output links relative to their initial positions, defined as σstart and θLopen, respectively. According to Figure 13, for the lower eyelid to achieve a range of motion of 25°, the actuator must rotate by 38.9°. The maximum angular speed of the lower eyelid equals 353.4°/s, while the required angular speed of the actuator equals 535.9°/s. The TI value ranges from 0.62 to 0.98, which is satisfactory.

Figure 14 shows the simplest solution of the left mechanism: a parallelogram four-bar linkage with opposite links of equal lengths. This means that the position angles of the input and output links are equal, so ϕ2L = ϕ4L. The length of link 3L is also defined, being equal to the distance between the fixed points O1 and CL. The lengths of levers 2L and 4L must be equal to each other; however, their length is not unambiguously defined, as there is an infinite number of possible solutions. Since link 2 consists of two levers, 2L and 2R, which are fixed to one another and therefore rotate together, if the left lever rotates through the angle α, the right one does as well (see Figure 7). Considering that the left mechanism has a parallelogram configuration, the left eyebrow rotates through that same angle α. Keeping in mind that the eyebrows should move symmetrically with regard to the vertical axis, it is evident that the right eyebrow must rotate through the angle −α.
According to this, the design of the right four-bar linkage is considered to be the synthesis of a function generator, the solving of which requires the use of optimization methods. The objective function is defined as the sum of the squares of the differences between the rotation angles of the input and output links with regard to the initial, horizontal position, evaluated at αi = −20°, −19°, ..., 0°, ..., +19°, +20°, meaning the eyebrows rotate ±20° with regard to the horizontal position. The adopted dimensions of the mechanism are as follows: the eyeball diameter is 60 mm, the PD is 96 mm, and the points around which the eyebrows rotate are CR(−30,44) mm and CL(30,44) mm, with the actuator being placed at the point O1(0,10). It should be noted that the dimensions of the eyeball and the PD were adopted from the MARKO robot, whose eyes and eyebrows are the subject being reconstructed in this paper. The optimization variables are the lengths of the links r2R, r3R, and r4R, and the initial (neutral) position angle of the input link ϕ2R0. The i-th position of the input link is then expressed as ϕ2Ri = ϕ2R0 + αi.

Additionally, the mechanism must stay assembled and be efficient in all positions. The dynamic efficiency of the mechanism is characterized by the transmission angle. As the transmission angle grows, a larger part of the supplied power is spent on overcoming the working load and less on internal loads, making the mechanism more efficient; small transmission angle values can cause the mechanism to jam. Due to this, the minimum value of the transmission angle is prescribed as γRmin = 45°. Keeping in mind the available space between the eyes (see Figure 6a), the minimum and maximum values of the input link angle ϕ2R0 are prescribed. Table 7 presents the minimum and maximum values of the optimization variables. According to the statements above, the dimensional optimization synthesis of the right eyebrow mechanism was conducted, and the obtained values are shown in Table 8.
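As an illustration of this function-generation synthesis, the sketch below solves the right four-bar numerically: the output angle is obtained by circle intersection rather than a closed-form position analysis, the desired output rotation is −αi, and the transmission-angle constraint is omitted for brevity. The starting point x0 and the branch choice are assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import minimize

O1 = np.array([0.0, 10.0])    # actuator pivot (from the paper)
CR = np.array([-30.0, 44.0])  # right-eyebrow pivot (from the paper)

def output_angle(r2, r3, r4, phi2):
    """Angle of output link 4R for input angle phi2, via intersection of the
    coupler circle (radius r3 about joint A) with the output-link circle
    (radius r4 about CR). Returns None if the linkage cannot assemble."""
    A = O1 + r2 * np.array([np.cos(phi2), np.sin(phi2)])
    d = np.linalg.norm(CR - A)
    if d > r3 + r4 or d < abs(r3 - r4):
        return None
    a = (r3**2 - r4**2 + d**2) / (2 * d)      # chord-midpoint distance from A
    h = np.sqrt(max(r3**2 - a**2, 0.0))
    e = (CR - A) / d
    B = A + a * e + h * np.array([-e[1], e[0]])   # one of the two branches
    return np.arctan2(*(B - CR)[::-1])            # angle of link CR->B

def cost(x):
    r2, r3, r4, phi20 = x
    base = output_angle(r2, r3, r4, phi20)
    if base is None:
        return 1e6                                # penalize infeasible designs
    err = 0.0
    for ai in np.radians(np.arange(-20, 21)):
        phi4 = output_angle(r2, r3, r4, phi20 + ai)
        if phi4 is None:
            return 1e6
        diff = (phi4 - base + np.pi) % (2 * np.pi) - np.pi
        err += (diff - (-ai))**2                  # right eyebrow target: -alpha
    return err

x0 = [10.0, 45.0, 10.0, np.radians(90.0)]         # illustrative start only
res = minimize(cost, x0, method="Nelder-Mead")
print(res.x, res.fun)
```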
Since the left eyebrow mechanism has a parallelogram configuration, and keeping in mind the dimensions of the right eyebrow mechanism, the lengths of links 2L and 4L are adopted as r2L = r4L = 10 mm, with the floating link length being equal to the distance between O1(0,10) and CL(30,44), i.e., r3L = √(30² + 34²) ≈ 45.3 mm.

Figure 15 shows the results of the eyebrow rotation mechanism simulation. It should be noted that |Δϕ4(L/R)| represents the absolute value of the movement of the output links 4L and 4R relative to their initial, horizontal position. According to Figure 15, the ranges of motion and the angular speeds of the eyebrows and the actuator are the same and equal 20° and 320.0°/s, respectively. Since the transmission angle value depends on the side of the mechanism (left/right) and the direction of the eyebrow rotation (±), Figure 15c shows one of four cases of the transmission angle change. The values of the transmission angle in all four cases stay within the prescribed bounds, i.e., from 67° to 110°.

Figure 16 shows the results obtained from a motion simulation of the output link of the eyebrow raising/lowering mechanism. It should be noted that |Δz5| represents the absolute value of the displacement of the output link relative to its initial position. According to Figure 16a, the total vertical stroke of the eyebrow equals 20 mm, of which 12.5 mm is the raising and 7.5 mm the lowering. Figure 16b shows the maximum speeds of the output link of the mechanism during reflexive movement of the eyebrow in a fear response: the raising speed is 200.0 mm/s, while the lowering speed is lower and equals 120 mm/s, which is comparable to [177].
Table 9 summarizes the results of the structural and dimensional synthesis of the eyeball, eyelid, and eyebrow driving systems. It should be noted that the angular speed of the input link of the eyebrow raising/lowering mechanism depends directly on the parameters of the spindle drive mechanism, such as the diameter of the threaded shaft, the type and pitch of the thread, the thread angle, and the number of thread starts (see Figure 6b), making it easy to calculate.

Following Table 9, the relationship between the changes in position of the input/output links of the eyeball and eyelid mechanisms was determined to ascertain its effect on the control system. Due to the structure of the mechanism rotating the eyeball in the horizontal plane, as well as of the mechanisms for the rotation and translation of the eyebrows, the relationship between the relative movements of the output/input links is linear in all three cases, meaning that Δψ = Δβ, Δϕ4(L/R) = Δϕ2(L/R), and Δz5 = c·(actuator displacement), where c = const. Figure 17a shows the relative change in position of the eyeball during rotation in the vertical plane, Δϕ, with regard to the relative change in position of the mechanism input link, Δα. The relationship is very nearly linear: the nonlinearity, defined as the largest deviation from a straight line connecting the first and last points on the graph, equals only 2.38%. Figure 17b shows the relative change in position of the upper eyelid, ΔθU, with regard to the relative change in position of the mechanism input link, Δρ, while Figure 17c shows the relative change in position of the lower eyelid, ΔθL, with regard to the relative change in position of the mechanism input link, Δσ. The nonlinearity was determined to be 5.86% for the upper eyelid and 4.94% for the lower eyelid, making the relationship in both cases close to linear. It is therefore concluded that the determined relationships are very close to linear, which is very favorable for control system purposes.
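The nonlinearity figure quoted here (largest deviation from the chord through the first and last points, relative to the total output range) is straightforward to compute from the simulated input/output data. A small sketch with a synthetic, slightly bowed curve in place of the actual Figure 17 data:

```python
import numpy as np

def nonlinearity_percent(x, y):
    """Largest deviation of y(x) from the chord through its first and last
    points, expressed as a percentage of the total output range."""
    chord = y[0] + (y[-1] - y[0]) * (x - x[0]) / (x[-1] - x[0])
    return 100.0 * np.max(np.abs(y - chord)) / abs(y[-1] - y[0])

# Synthetic check: a 75-degree output sweep with a small sinusoidal bow.
dalpha = np.linspace(0.0, 76.2, 200)
dphi = 75.0 * (dalpha / 76.2) + 1.8 * np.sin(np.pi * dalpha / 76.2)
print(nonlinearity_percent(dalpha, dphi))   # ~2.4% for this synthetic curve
```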
In the following sections, the structure of the control system is explored, and possible components are suggested for the eyeball, eyelid, and eyebrow mechanism control system. The structure of a servo controller meant to control a single actuator within the suggested control system is also discussed.

Figure 18 shows the hierarchy of the robot eye control system. The movement of the eyeballs, eyelids, and eyebrows is enabled by the joint action of 9 actuators, of which 3 are for the eyeballs, 4 for the eyelids, and 2 for the eyebrows. Miniature DC motors are relatively simple and efficient actuator implementations. In order to achieve the desired kinematic parameters of the eye output links, all DC motors require precise and sophisticated control. An embedded personal computer (PC), a single-board computer, or a high-performance microcontroller is at the top of the hierarchical structure and synchronizes the entire system by sending commands to all subordinate control units. This component also directly controls the audio output (sound signals, speech), and digitized audio input for speech recognition can be assigned to it. Additionally, the images captured by the cameras placed inside the eyeballs are processed by the computer at the top of the hierarchical structure.
According to Figure 18, compact drive systems for actuating the mechanisms of the eyes, eyebrows, and eyelids are proposed. The eyeballs are actuated by three actuators: actuator 1 is common to both eyeballs, allowing simultaneous pitch movements (vertical saccades), while actuators 2 and 3 allow independent yaw movements of the eyeballs in the same or opposite directions (horizontal saccades and focusing on objects, i.e., stereovision). The movement of the upper and lower eyelids is completely independent and is enabled by four actuators, of which actuators 4 and 6 drive the upper eyelids, while actuators 5 and 7 drive the lower eyelids. The remaining actuators enable independent rotation and translation of the eyebrows: actuator 8 allows both eyebrows to rotate simultaneously but in opposite directions, while actuator 9 allows both eyebrows to be raised simultaneously. By combining different movements and positions of the eyeballs, eyelids, and eyebrows, it is possible to generate a wide range of non-verbal facial expressions.

A reasonably simple and efficient solution for actuating the moving parts of the eye is the use of DC motors with a built-in planetary gearhead (with one or more stages) and an integrated incremental encoder. For position detection, an absolute position sensor can be used in addition to the incremental encoder; the advantage of the absolute position sensor is that it eliminates the zero position sensor otherwise needed to establish the initial position of the system.

Figure 19 shows the structure of a slave servo controller. It controls a single actuator which directly affects 1 DOF within the system, assuming the actuator is a DC motor. Via a digital interface, for example a controller area network (CAN), the master controller sets the required target positions or position change profiles that the controlled element should achieve during a given time. The assigned value is set as the reference input of the control algorithm. It should be noted that the control algorithm is implemented on a microcontroller or digital signal processor of appropriate performance, performing its function based on monitoring the current position of the DC motor shaft via the incremental encoder. Power is delivered to the motor by an amplifier implemented as a bridge driver, which is directly controlled by the control algorithm. When initializing the servo controller, the zero position sensor (a switch or an optical sensor) allows the system to be brought to a known initial position.
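As a sketch of the position loop that could run on such a slave controller, the following Python fragment implements a clamped PID step; the encoder and bridge-driver interfaces are abstracted away, the gains and the first-order motor model are illustrative, and the integral gain is left at zero since anti-windup is omitted. The paper does not prescribe a specific control law, so this is one plausible choice, not the authors' implementation.

```python
class ServoController:
    """Minimal sketch of a slave position loop; gains, sampling time, and
    the motor model below are illustrative only."""

    def __init__(self, kp=8.0, ki=0.0, kd=0.05, dt=0.001):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.target = 0.0           # reference set by the master over CAN

    def step(self, position):
        """One control cycle: encoder position in, bridge duty cycle out."""
        err = self.target - position
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-1.0, min(1.0, u))   # duty cycle clamped to +/-100%

# Toy closed-loop check against a crude first-order motor model:
ctrl = ServoController()
ctrl.target = 75.0                  # degrees, e.g., a vertical saccade
pos, vel = 0.0, 0.0
for _ in range(2000):               # 2 s at a 1 kHz control rate
    duty = ctrl.step(pos)
    vel += (1500.0 * duty - 10.0 * vel) * ctrl.dt
    pos += vel * ctrl.dt
print(f"position after 2 s: {pos:.1f} deg")
```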
It should be noted that the motor and planetary gearhead must be selected in such a way that, at the available voltage, the motor can achieve an angular speed slightly higher than the one sufficient to achieve the fastest required movement of the mechanical part it drives. In addition, communication between individual controllers within the system can be achieved using a robust communication network, such as a CAN bus.

In order to realize the desired motion of the mechanical systems of the eyeballs, eyelids, and eyebrows, the structure of the control system is given. For the actuation of the mechanisms, compact drive systems which include an actuator (motor), planetary gearhead, sensor, and motor controller are proposed. The optimal variant, from a control perspective, would be a DC motor with a suitable planetary gearhead and an absolute position sensor or incremental encoder. To control each individual actuator, one servo controller is provided according to the proposed structure. The proposed control system structure could be further improved, e.g., using wearable blindness-assistive devices (sensors, a global positioning system (GPS), light detection and ranging (LIDAR), and an RGB-D camera) and simultaneous localization and mapping (SLAM) technology [178].

Humans as social beings strive for interaction with other subjects, and may interpret the absence of emotional expression as indifference; it is thus desirable that robotic characters express emotional states when communicating with humans [179]. It was established that humans are able to perceive and understand emotional states expressed by a robot even from a relatively small number of moving points on its face [180], which further suggests that feelings are something that a human eye looks for on another subject's (even a robot's) face. To determine the level at which the suggested eye and eyebrow design enables the robot to convey non-verbal emotions, an experiment was designed and conducted. Its purpose was to measure the extent to which this set of eyes and eyebrows was capable of successfully expressing emotions to a set of human subjects.
Six basic emotions (surprise, fear, disgust, anger, happiness, and sadness) were chosen as relevant for the experiment; these basic emotions were shown to be universally identifiable through Ekman's work on measuring facial movement during expressions of emotions [181]. In order to break down every facial manifestation of an emotion, Ekman designed a comprehensive facial action coding system (FACS). This system was designed for interpreting common emotional expressions by identifying the specific muscular activity that produces momentary changes in facial appearance; these specific movements are called action units (AUs) and may be coded as, for example, "upper eyelid raiser" or "inner eyebrow movement" [182]. Previous research into emotion expression by robot faces utilizes these AUs, while acknowledging that the facial features of any robot are extremely sparse, with highly constrained motion compared to a human face [183]. In the upper part of a robot's face there are typically only a few DOFs, but previous research has nevertheless shown that there is a set of minimal features for human-like facial expressions that are effective in communicating emotions [180]. Based on Ekman's seminal work [184], as well as on research related to robot faces [185,186], this study started by defining AU sets for the six basic emotions, focusing only on the eye and eyebrow movements (see Table 10).

Based on the existing robot MARKO [187], a 2D image of his face was designed, altered from the original model by covering the robot's mouth and nose area with a face mask. This alteration was made for two reasons. First, to focus the participants' attention on the upper half of the face, in line with the research goal. Second, to mask the part of the face which was not movable and thus not effective in expressing an emotion: since the mouth and the nose are generally a significant part of non-verbal expression [181,184], presenting them as static while other parts of the face are moving would lead to incongruent expressions and potentially confusing stimuli. Additionally, the global COVID-19 pandemic experienced in 2020/2021 and the resulting usage of face masks in everyday communication inspired the research team to present the robot with a mask covering its nose and mouth. According to the data from Table 10, Figure 20 presents the six basic facial expressions of the human-like robot MARKO: anger, disgust, surprise, happiness, fear, and sadness. In order to increase reliability by taking multiple measurements of the same stimulus, it was decided that every emotion should be expressed by the robot's face to a participant three times, mixed randomly with expressions of other emotions [186].
However, it was decided not to present identical stimuli three times for every emotion; rather, different intensities of the emotion in question were presented by expressing 60%, 80%, and 100% of every AU, which also allowed checking whether the intensity of an expression plays any role in emotion recognition effectiveness [179]. Figure 21 shows the six facial expressions of the robot at varying intensities.
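Generating the 60%/80%/100% stimuli amounts to scaling every AU target toward the neutral pose. A minimal sketch; the AU names and values below are hypothetical placeholders, not the actual Table 10 entries:

```python
# Hypothetical AU targets (fractions of each DOF's full range) for one
# expression; the paper's actual Table 10 values are not reproduced here.
SURPRISE = {"eyebrow_raise": 1.0, "upper_lid_raise": 1.0, "eyebrow_rotate": 0.0}

def scaled_expression(aus, intensity):
    """Scale every AU toward the neutral pose (0.0) by the given intensity,
    as done for the 60%/80%/100% stimuli."""
    return {name: value * intensity for name, value in aus.items()}

for level in (0.6, 0.8, 1.0):
    print(level, scaled_expression(SURPRISE, level))
```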
From the neutral face and the still images of robot MARKO expressing the six emotions, video clips were created. Similarly to previous relevant studies [180,186], each video clip consisted of 5 segments: (i) a starting segment, where the robot's face was in the neutral position (3 s), (ii) a transition period, where the robot progresses towards the articulation of an emotion (0.5 s), (iii) the facial expression of the emotion (3 s), (iv) a transition period, where the robot reverts back to the neutral position (0.5 s), and (v) an ending segment, where the robot's face is shown still again in the neutral position (3 s). The total duration of each video clip is therefore 10 s. The experiment was conducted in an improvised laboratory space at the university office, with controlled light and sound, and without any significant distractions. Each participant was seated in front of a 23″ computer screen, at a 2 m distance. The goal of the experiment was to determine the extent to which the structural design of the eyes and eyebrows is capable of emotional expression, in a way that conveys the intended emotion to human observers. Since the eye is an important non-verbal actor in the emotional exchange of interpersonal communication, it is relevant to measure the extent to which our model is effective in expressing basic emotions.

For this experiment, 51 participants were recruited, all of them university students at the bachelor level. The participants ranged from 18 to 27 years of age (mean age 21.57); 29 were female and 22 male. The participants were not aware of the goal of the study, and reported no prior experience with similar models or research studies. After giving informed consent and a short introduction about what to expect during the experiment, each participant was shown all 18 video clips of the model expressing an emotion, presented in random order, with the constraint that the same intended emotion was never presented twice in a row. The participants were allowed to take as long as they wished to complete every task, but were not allowed to have the same video played again. After each video clip, the participants were given a short printed facial expression identification (FEI) instrument [180], consisting of three questions. Question #1 was a simple task of identifying the shown emotion by choosing one term from an alphabetized list of six basic emotion labels that they believed best suited what they had seen. Next, the participants were presented with Question #2: they were asked to rate the degree to which the emotion was present (the strength of expression) on a scale of 0 (not at all) to 6 (extremely high), similar to [138,180,183,188]. Question #3 then allowed (but did not require) the participants to select one or more "other expressions" they thought the model might be displaying beyond the primary one identified in the first question, similar to [180]. In subsequent sections, we refer to "main accuracy" based on the single answer from Question #1, and "other accuracy" when including answers from both Questions #1 and #3. The completed printed FEI questionnaires were fed into a data matrix and analyzed with IBM's SPSS software, version 23.

Globally, the study participants' first guess was effective in 45.8% of cases: main accuracy was achieved in 420 cases out of 918. The study participants made a second guess in 25.7% of the cases, and that second guess was effective in 20.8% of occurrences, i.e., in 49 cases out of 236. Combined, the participants were successful in recognizing the expressed emotions in 51.1% of cases, either on the first or on the second try.
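For reference, the main- and other-accuracy figures and a column-normalized confusion matrix of the kind cross-tabulated in the next paragraph can be reproduced from a long-format response table as sketched below; the four-row DataFrame is a hypothetical stand-in for the 918 recorded responses.

```python
import pandas as pd

# Hypothetical long-format responses: one row per participant x video clip.
df = pd.DataFrame({
    "shown":  ["anger", "anger", "fear", "sadness"],
    "guess1": ["anger", "anger", "surprise", "sadness"],
    "guess3": [None, None, "fear", None],    # optional Question #3 answers
})

main_acc = (df["shown"] == df["guess1"]).mean()
other_acc = ((df["shown"] == df["guess1"]) |
             (df["shown"] == df["guess3"])).mean()
# Column-normalized confusion matrix (percentages per expressed emotion):
confusion = pd.crosstab(df["guess1"], df["shown"], normalize="columns") * 100
print(f"main accuracy: {main_acc:.1%}, other accuracy: {other_acc:.1%}")
print(confusion.round(1))
```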
Table 11 presents a confusion matrix where the emotions identified by the study participants (in rows, counting only their first guess) and the emotions expressed by the robot (in columns) are cross-tabulated. The cells of this table contain the percentages of matches or mismatches between the two variables, where the diagonal from top left to bottom right presents the matches (grayed cells), and all other cells present mismatches. From this table, it is evident that the expressions of anger and sadness were successfully identified to a large extent (92.8% and 83.7%, respectively). The expression of surprise was correctly identified in half of the cases (51.6%), while it was frequently confused with fear (27.5%). The expression of disgust was correctly identified in one third of the cases (35.5%), being frequently confused with anger (28.9%). The expression of happiness was seldom correctly identified (6.7%), frequently being mistaken for disgust (34.2%) and surprise (31.5%). The expression of fear was also rarely correctly identified (4.6%), mostly being mistaken either for surprise (46.4%) or for happiness (38.6%).

A separate analysis of the first identified emotion for every video clip, for each of the three levels of expression intensity, does not indicate that the level of expression intensity plays any significant role in the identification of the emotion: participants were opting for the same emotions regardless of the intensity of the movement. Even more interestingly, the emotions of surprise and fear were better identified when shown at 60% and 80% intensity than when expressed at 100% intensity (see Table 12).

Although the level of expression intensity does not play a role in the kind of perceived emotion, that level is still noticed by the participants. A weak but highly significant positive correlation between the intensity of an emotion expressed by the robot and the emotion intensity perceived by the participants was observed, using Spearman's rho correlation coefficient due to the ordinal measurement levels of the FEI scales (Spearman's rho = 0.304, p = 0.000). This finding shows that the participants had some success in identifying the intensity of an expressed emotion: on the 0-6 scale, for an emotion expressed at 60%, participants identified intensity with a median of 3; for an emotion expressed at 80%, with a median of 4; and for an emotion expressed at 100%, with a median of 5, as shown in the Box and Whisker plot in Figure 22.
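The intensity-perception correlation reported here is a standard rank correlation; a minimal sketch with made-up paired ratings using SciPy:

```python
from scipy.stats import spearmanr

# Hypothetical paired ordinal data: expressed intensity (60/80/100%) versus
# the participant's 0-6 rating for the same clip.
expressed = [60, 80, 100, 60, 80, 100, 60, 80, 100]
perceived = [3, 4, 5, 2, 4, 5, 3, 3, 6]

rho, p = spearmanr(expressed, perceived)
print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}")
```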
This finding is even more interesting when observed separately for each expressed emotion: for the three emotions that were most successfully recognized on the first attempt (anger, sadness, and surprise), the correlation coefficients were even higher and are interpreted as "moderate" (Spearman's rho = 0.570, p = 0.000; rho = 0.452, p = 0.000; and rho = 0.462, p = 0.000, respectively). The participants' gender did not play a role in the effectiveness of emotion identification: although a slight difference was observed between the females' and males' main accuracy percentages (46.9% and 44.2%, respectively), this difference was not significant (Phi = 0.027, p = 0.409), nor was there any significant difference in other accuracy between the two genders (Phi = 0.023, p = 0.479). Running these analyses for every emotion separately yielded similar results.

Aiming to determine whether there was any effect of training on the accuracy of emotion identification, we split the dataset into thirds based on the order of the video clips shown to the participants, and observed the percentage of main accuracy in each third of the experiment. However, there were no significant differences to report.

The presented results show that, globally, the proposed structural design of the robot eyes is capable of effectively expressing the emotions of anger and sadness to a high extent, which is in line with previous studies, and partially the emotion of surprise; the expressions of disgust, happiness, and fear are poorly identified, being frequently misinterpreted as other emotions. The emotions of anger and sadness are the most specific in this setup, since the eyebrows take extreme positions regarding their vertical position and the position of their outer ends; this "uniqueness" of the expression of these two emotions is in line with the "overlap" rule from a previous study, which notes that "the fewer DOFs in a given emotion that overlap with other emotions, the better the recognition will be for that emotion" [186] (p. 4580). This rule is also evident in the case of surprise: although the vertical eyebrow movement reaches its full extent, the outer end does not move significantly and uniquely for this emotion, which explains the result that it was properly identified in only half of the cases. The limitations of the effective expression of other emotions are, similarly, coherent with the "coverage" rule defined by the same authors, which states that "the greater the proportion of required action units in a given emotion that can be mapped to DOF in the robot, the better the recognition will be for that emotion" [186] (p. 4580).
It is documented by Ekman [181] that some emotions need movements of other parts of the face for proper expression, which were not considered in this study: surprise requires the "jaw drop" movement; fear is accompanied by the "fear mouth" movement (and sometimes lacks any eyebrow movement at all); disgust is primarily expressed with a unique mouth movement and nose wrinkles; happiness is mostly shown through the lip movement and the nasolabial fold that runs down from the nose to the outer edge beyond the lip corners. Additionally, the expressions of fear, surprise, and happiness are not far apart from each other when only the eye and eyebrow are observed; movements in other parts of the face are crucial for a valid interpretation of these emotions [184]. In particular, the emotion of fear is one of the most complex expressions to produce in terms of the number and control of the muscles used, and its infrequency of use in daily life might also be a factor in the difficulty people have in identifying it [189].

This section summarizes the results and contains: (i) the comparison of the proposed mechanical system with the kinematics of the human eye, (ii) the advantages of the adopted mechanisms and their reconfigurability, and (iii) the ability of the proposed mechanism to generate facial expressions.

The mechanical system consists of three subsystems which enable the independent motion of the eyeballs, eyelids, and eyebrows. Due to its structure, the eyeball mechanical system is able to generate all of the motions of a human eye, which is the main condition for the realization of the binocular function of the artificial robot eyes, as well as for stereovision. Saccades are significant as rapid movements, while vergence movements allow the eyes to focus on objects. Aside from reflexive movements, it is also important to realize smooth pursuit movements, whose generation and quality directly depend on the structure of the adopted mechanisms and their joints: the friction and backlash in the joints should be as low as possible. Otherwise, initiating movement would cause a jerk which can negatively affect the stability of the visual image, especially since the cameras for face, object, and surrounding recognition would be located in the eyes of the robot. From a kinematic standpoint, the mechanical systems of the eyeballs, eyelids, and eyebrows are highly capable of mimicking the human eye. Table 13 shows the comparison between the kinematic parameters of the human eye and the parameters of the proposed mechanical system. It should be noted that eyebrow movements are complex and depend on which part of the eyebrow is being actuated, as well as in which direction; human eyebrows cannot be rotated, only raised and lowered. During reflexive eyebrow movement in a fear response, the eyebrows move together with the upper eyelids at a much higher speed; a value of 25 mm/s was found in the available literature. The amplitude of the eyebrow raising heavily depends on, and decreases with, age. The range of motion of the lower eyelid could not be found in the available literature, so it was estimated from the fact that the range of motion of the upper eyelids is approximately two times larger than that of the lower ones; of course, the range of motion of the upper eyelids directly depends on the type of blink (see Section 2). All of the kinematic parameters in Table 13 refer to extreme values.
Most of the adopted mechanisms driving the mechanical systems of the eyeballs, eyelids, and eyebrows are linkage mechanisms, with a spindle drive mechanism being adopted for raising the eyebrows. Linkage mechanisms allow for a wide range of working speeds, are highly reliable, have low backlash, and are simple to manufacture and assemble, while the spindle drive mechanism transforms rotational into translational motion, offers a wide range of possible pitches and speeds, also has low backlash and high reliability, and is simple to implement. Low backlash enables high positioning accuracy, which in turn enables high precision and repeatability of movements, which is key for consistent facial expressions. Linkage mechanisms can have different structures and link shapes, making them easy to optimize. The spherical motion of the eyeball is easiest to implement with spatial linkage mechanisms. Keeping in mind the limited space in the robot's head, due to the many electronic components placed there, the most favorable solution for the transmission of power and motion is spatial linkage mechanisms. The motion of the mechanism output link is defined by the axis around which it rotates; for example, the eyelids rotate around the y-axis. By using spatial linkage mechanisms, the designer can choose the axis of rotation of the input link, for example around the x-, y-, or z-axis, which allows the design of the mechanism to be adapted to the space available in the head. Another convenience is that the mechanism links can be made using 3D printing technology, which results in parts with very low mass. This significantly lowers the inertial loads present in the mechanism due to high accelerations, especially during reflexive movements.

Figure 23 shows the output link of the upper eyelid mechanism. Link OV must rotate around the y-axis, but it can be placed in different positions, so it is interesting to determine the possible range of its placement without changing the kinematics of the upper eyelid. The constructive parameter, angle δ, can vary from 40° to 80°: it cannot be less than 40° due to collisions with the side of the face, and it cannot exceed 80° due to collisions with the eyeball. From a design point of view this information is very significant, which is why the reconfigurability of the mechanism was examined. Based on the process shown in Section 5.2, dimensional synthesis of the upper eyelid was conducted for each possible value of angle δ, and the results can be seen in Figures 24 and 25.
This study aimed to determine if the proposed structural design of the robot eyes and eyebrows was capable of effectively expressing emotions to human subjects. This aim was pursued by exposing study participants to a series of short video clips in which the robot MARKO expressed the basic emotions identified by Ekman. Recognizability of Ekman's basic expressions is a common test used to gauge the abilities of an expressive robot face [190]. The recorded accuracies are seen as a good sign, especially since only video clips of the robot were shown: physically present robots are perceived as more persuasive and result in better user performance than their visually presented counterparts [191], and physical presence often seems crucial for a good perception of the emotional information conveyed by a robotic agent [192].

The emotion of disgust was inconsistently identified in this study; notably, this emotion is frequently omitted from these kinds of experiments due to its specific expression, which also includes a nose movement [189,192]. The level of intensity of the emotional expression was correctly identified to a significant degree, especially for the emotions of anger, sadness, and surprise, showing that, at least for these emotions, the level of movement can express the intensity of an emotion. The fact that the level of intensity of the emotional expression did not play a significant role in the accuracy of emotion identification is in line with previous research, which showed that human subjects were still able to identify robotic facial expressions even when an emotion was presented at 50% intensity in a robot's face [180]. The fact that females and males were equally successful in emotion identification is not in line with previous research, which showed that females were more accurate when identifying emotions [193,194]. It should be noted that in the last two years, interest has risen in the recognition of emotions on faces equipped with face masks [195-201].

Additionally, it should be stated that the study participants were all similar in age (university students) and reported no relevant health issues, which may limit the generalizability of the obtained results: it has been shown that children and the elderly may differ from young adults in their ability to recognize facial expressions [202-204], and that people with certain mental health issues experience facial emotion recognition deficits when compared to control groups [205-207]. This is especially important if the proposed solution is to be implemented in the context of healthcare, where specific cohorts are usually treated.
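For completeness, recognition rates of the kind reported above can be computed from per-trial responses as a confusion structure. The following Python sketch assumes a hypothetical data layout (pairs of shown and answered emotions); it does not use the study's actual data:

```python
from collections import Counter

# Hypothetical response records: (shown_emotion, participant_answer).
# The actual study data layout is not given here; this only illustrates
# how per-emotion recognition rates and the confusion structure would
# be computed from such records.
responses = [
    ("anger", "anger"), ("anger", "anger"), ("anger", "disgust"),
    ("sadness", "sadness"), ("fear", "surprise"), ("fear", "fear"),
]

counts = Counter(responses)                          # (shown, answered) -> n
shown_totals = Counter(shown for shown, _ in responses)

for (shown, answered), n in sorted(counts.items()):
    rate = n / shown_totals[shown]
    print(f"{shown:8s} answered as {answered:8s}: {rate:.0%}")
```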
This paper shows the structure of a mechanical system for robot eyes with a total of 9 DOFs, as well as its ability to allow the robot to generate non-verbal emotional content, which is a key characteristic of socially interactive robots. The mechanical system enables independent movement of the eyeballs, eyelids, and eyebrows, and consists of three subsystems: (i) the mechanical system of the eyeballs, (ii) the mechanical system of the eyelids, and (iii) the mechanical system of the eyebrows.

The mechanical system of the eyeballs has 3 DOFs, allowing for simultaneous pitch and independent yaw movements of the eyeballs. Due to its structure, which, among other things, allows for the placement of a camera within the eyeball, the mechanical system is able to reproduce all of the movements of a human eye, which is of great significance for the realization of the binocular function of artificial sight, as well as for stereovision. The mechanical system of the eyelids has 4 DOFs, enabling independent rotation of each eyelid, while the mechanical system of the eyebrows has 2 DOFs, enabling the simultaneous raising of both eyebrows, as well as the rotation of both eyebrows in opposite directions.

From a kinematic standpoint, the mechanical systems of the eyeballs, eyelids, and eyebrows are able to generate movements sufficiently similar to natural human ones in terms of the types of movements, the ranges of motion, and the angular speeds, which is of great significance for the generation of facial expressions and for the non-verbal communication of robots in a natural, intuitive, and transparent way. It should be noted that the relationships between the motions of the input and output links were examined for each mechanism to ascertain their influence on the control system; the obtained relationships were all very close to linear, which is very favorable from the standpoint of control (a minimal illustration is sketched at the end of this section). Due to the joint structure, all of the mechanisms ensure both low friction and low backlash, which is important for initiating movement without jerks, as well as for highly accurate positioning, which ensures high precision and repeatability.

The structure of a control system for the eyeballs, eyelids, and eyebrows was proposed with the goal of realizing the motion of the mechanisms' output links in accordance with the kinematic principles of the human eye. Compact drive systems, which encompass the actuator (motor), reducer, sensor, and motor controller, were proposed to drive the mechanisms. The most favorable solution for controlling the system is a combination of a DC motor with an appropriate reducer and an absolute position sensor or an incremental encoder. The structure of a servo controller for each specific motor was proposed as well.

Finally, the success of the mechanical system depended on how capable it was of enabling the robot to generate facial expressions, which is why an experiment was conducted. For this purpose, the 2D face of the existing robot MARKO was used, covered with a face mask to aid in focusing the participants on the eye region. The participants rated the efficiency of the robot's non-verbal communication after watching short video clips. The proposed structural design of the robot eyes was capable of effectively expressing the emotions of anger and sadness to a high extent, and only partially the emotion of surprise. The expressions of disgust, happiness, and fear were poorly identified and were frequently misinterpreted as other emotions. To make happiness and fear more recognizable, the face would need to be fully uncovered, thus necessitating the existence of lips and their precise positioning, while the emotion of disgust requires specific motion of the nose and forehead.
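As noted above, the input/output relationships of the mechanisms are very close to linear, which suggests a simple calibration-based position-command scheme. The following Python sketch illustrates this idea with assumed, illustrative calibration values (not measured data from the paper):

```python
import numpy as np

# Assumed calibration pairs (motor input angle -> eyelid output angle, deg).
# The near-linear input/output relationship reported for the mechanisms
# suggests a first-order fit is sufficient for commanding positions; the
# sample values below are illustrative, not measured data.
motor_deg  = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
output_deg = np.array([0.0,  7.4, 15.1, 22.3, 29.8])

# Least-squares linear fit: output ~= k * motor + b
k, b = np.polyfit(motor_deg, output_deg, 1)

def motor_command(target_output_deg):
    """Invert the fitted map to get the motor angle for a desired output."""
    return (target_output_deg - b) / k

print(f"k = {k:.3f}, b = {b:.3f}")
print(f"command for 20 deg eyelid opening: {motor_command(20.0):.1f} deg")
```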
Further research will encompass the physical realization of each of the described mechanical systems, their implementation, and experimental examinations meant to determine the kinematics and the efficiency of the non-verbal communication. Furthermore, the development and realization of eyes with a positive canthal tilt (CT), a feature of female eyes, is also planned. Further research should also encompass emotion expressions with other parts of the robot face, in order to determine which combination of facial movements produces the best results. The driving mechanisms of the mechanical systems of the eyeballs, eyelids, and eyebrows described in this paper are patent pending.

References

The COVID-19 epidemic
The Italian health system and the COVID-19 challenge
Subjective burden and perspectives of German healthcare workers during the COVID-19 pandemic
The resilience of the Spanish health system against the COVID-19 pandemic
Perceptions and experiences of healthcare workers during the COVID-19 pandemic in the UK
Challenges and issues about organizing a hospital to respond to the COVID-19 outbreak: Experience from a French reference centre
Attending to the Emotional Well-Being of the Health Care Workforce in a New York City Health System during the COVID-19 Pandemic
The Unlikely Saviour: Portugal's National Health System and the Initial Impact of the COVID-19 Pandemic? Development
COVID-19 pandemic: Challenges and opportunities for the Greek health care system
Impact of COVID-19 pandemic on health system & Sustainable Development Goal 3
Impact of the COVID-19 Pandemic on Healthcare Workers' Risk of Infection and Outcomes in a Large, Integrated Health System
Mental health care for medical staff and affiliated healthcare workers during the COVID-19 pandemic
The impact of the COVID-19 pandemic on the mental health of healthcare professionals. Cad. De Saúde Pública 2020, 36, e00063520
Impact of the COVID-19 Pandemic on the Mental Health of Healthcare Workers
The mental health of doctors during the COVID-19 pandemic
Collapse of the public health system and the emergence of new variants during the second wave of the COVID-19 pandemic in Brazil
COVID-19 and mucormycosis syndemic: Double health threat to a collapsing healthcare system in India
Cracks in the System: The Effects of the Coronavirus Pandemic on Public Health Systems
Health system collapse 45 days after the detection of COVID-19 in Ceará, Northeast Brazil: A preliminary analysis
Risk of the Brazilian health care system over 5572 municipalities to exceed health care capacity due to the 2019 novel coronavirus (COVID-19)
Pandemic and lockdown: A territorial approach to COVID-19 in China, Italy and the United States
Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries
Impact of the COVID-19 Pandemic on Cancer Diagnoses in General and Specialized Practices in Germany
Significant Decrease in Annual Cancer Diagnoses in Spain during the COVID-19 Pandemic: A Real-Data Study
The impact of the COVID-19 pandemic on cancer deaths due to delays in diagnosis in England, UK: A national, population-based, modelling study
International Impact of COVID-19 on the Diagnosis of Heart Disease
Reduction of Emergency Calls and Hospitalizations for Cardiac Causes: Effects of Covid-19 Pandemic and Lockdown in Tuscany Region. Front. Cardiovasc.
Impact of the COVID-19 pandemic on interventional cardiology activity in Spain
The impact of the COVID-19 pandemic on cardiology services
The Effects of the Health System Response to the COVID-19 Pandemic on Chronic Disease Management: A Narrative Review
Multiple Chronic Conditions, and COVID-19: A Literature Review
Population-Based Estimates of Chronic Conditions Affecting Risk for Complications from Coronavirus Disease, United States
Awareness, Attitudes, and Actions Related to COVID-19 Among Adults With Chronic Conditions at the Onset of the
Examining the impact of COVID-19 on stress and coping strategies in individuals with disabilities and chronic conditions
Omicron SARS-CoV-2 new variant: Global prevalence and biological and clinical characteristics
Covid-19: Omicron may be more transmissible than other variants and partly resistant to existing vaccines, scientists fear
Omicron SARS-CoV-2 variant: What we know and what we don't. Anaesth
Another year another variant: COVID 3.0-Omicron
Association of social distancing and face mask use with risk of COVID-19
COVID-19: The impact of social distancing policies, cross-country analysis
A Vision-Based Social Distancing and Critical Density Detection System for COVID-19
The effect of social distance measures on COVID-19 epidemics in Europe: An interrupted time series analysis
One Hundred Days of Coronavirus Disease 2019 Prevention and Control in China
Epidemiology, causes, clinical manifestation and diagnosis, prevention and control of coronavirus disease (COVID-19) during the early outbreak period: A scoping review
Taking the right measures to control COVID-19
COVID-19: Prevention and control measures in community
The world's largest COVID-19 vaccination campaign
The effect of COVID-19 vaccination in Italy and perspectives for living with the virus
COVID-19 vaccination intention in the UK: Results from the COVID-19 vaccination acceptability study (CoVAccS), a nationally representative cross-sectional survey. Hum. Vaccines Immunother
COVID-19 Vaccination Hesitancy in the United States: A Rapid National Assessment
Industry 4.0 technologies and their applications in fighting COVID-19 pandemic
Applications of industry 4.0 to overcome the COVID-19 operational challenges
Industry 4.0 at the service of public health against the COVID-19 pandemic
An intelligent framework using disruptive technologies for COVID-19 analysis
Industry 4.0 Technologies and Their Applications in Fighting COVID-19. In Sustainability Measures for COVID-19 Pandemic
IoT Platform for COVID-19 Prevention and Control: A Survey
IoT-based analysis for controlling & spreading prediction of COVID-19 in Saudi Arabia
CIoTVID: Towards an Open IoT-Platform for Infective Pandemic Diseases such as COVID-19
Internet of things (IoT) applications to fight against COVID-19 pandemic
Internet of Things for Current COVID-19 and Future Pandemics: An Exploratory Study
Application of cognitive Internet of Medical Things for COVID-19 pandemic
IoMT amid COVID-19 pandemic: Application, architecture, technology, and security
COVID-19 (Coronavirus Disease 2019): Opportunities and Challenges for Digital Health and the Internet of Medical Things in China
Internet of Medical Things (IoMT) for orthopaedic in COVID-19 pandemic: Roles, challenges, and applications
Combining Point-of-Care Diagnostics and Internet of Medical Things (IoMT) to Combat the COVID-19 Pandemic
Artificial Intelligence (AI) applications for COVID-19 pandemic
How Big Data and Artificial Intelligence Can Help Better Manage the COVID-19 Pandemic
Significant applications of virtual reality for COVID-19 pandemic
The role of 5G for digital healthcare against COVID-19 pandemic: Opportunities and challenges
Blockchain for COVID-19: Review, Opportunities, and a Trusted Tracking System. Arab
A literature survey of the robotic technologies during the COVID-19 pandemic
Robotics and artificial intelligence in healthcare during COVID-19 pandemic: A systematic review
Smart Wearable Technologies, and Autonomous Intelligent Systems for Healthcare during the COVID-19 Pandemic: An Analysis of the State of the Art and Future Vision
From high-touch to high-tech: COVID-19 drives robotics adoption. Tour
The Psychosocial Fuzziness of Fear in the Coronavirus (COVID-19) Era and the Role of Robots
Robotics Applications for Public Health and Safety during the COVID-19 Pandemic
The Emotional Path to Action: Empathy Promotes Physical Distancing and Wearing of Face Masks During the COVID-19 Pandemic
About the Acceptance of Wearing Face Masks in Times of a Pandemic. i-Perception
Why Do Japanese People Use Masks Against COVID-19, Even Though Masks Are Unlikely to Offer Protection from Infection?
A Survey on Socially Assistive Robotics: Clinicians' and Patients' Perception of a Social Robot within Gait Rehabilitation Therapies
Socially Assistive Robots Helping Older Adults through the Pandemic and Life after COVID-19
Influence of a Socially Assistive Robot on Physical Activity, Social Play Behavior, and Toy-Use Behaviors of Children in a Free Play Environment: A Within-Subjects Study. Front.
The role of a socially assistive robot in enabling older adults with mild cognitive impairment to cope with the measures of the COVID-19 lockdown: A qualitative study
The potential of socially assistive robots during infectious disease outbreaks
Promoting Interactions between Humans and Robots Using Robotic Emotional Behavior
How Robots Influence Humans: A Survey of Nonverbal Communication in Social Human-Robot Interaction
Survey of Social Touch Interaction between Humans and Robots
The Sage Handbook of Interpersonal Communication
Psychiatry of Intellectual Disability: A Practical Manual
Engineering color, pattern, and texture: From nature to materials
Patient Care: From Body to Mind
Habitual wearers of colored lenses adapt more rapidly to the color changes the lenses produce
Higher-Level Meta-Adaptation Mitigates Visual Distortions Produced by Lower-Level Adaptation
The Role of Eyebrows in Face Recognition
Face Recognition by Humans: Nineteen Results All Computer Vision Researchers Should Know About
Biometric Study of Eyelid Shape and Dimensions of Different Races with References to Beauty
Differences between Caucasian and Asian attractive faces
Is Medial Canthal Tilt a Powerful Cue for Facial Attractiveness?
Periorbital Aesthetic Surgery: A Simple Algorithm for the Optimal Youthful Appearance
Primary Transcutaneous Lower Blepharoplasty with Routine Lateral Canthal Support: A Comprehensive 10-Year Review
Resecting orbicularis oculi muscle in upper eyelid blepharoplasty-A review of the literature
Variations in Eyeball Diameters of the Healthy Adults
The Reliability, Validity, and Normative Data of Interpupillary Distance and Pupil Diameter Using Eye-Tracking Technology
Differences in eye movement range based on age and gaze direction
A comparison of three different corneal marking methods used to determine cyclotorsion in the horizontal meridian
The brainstem control of saccadic eye movements
Eyelid movements in health and disease. The supranuclear impairment of the palpebral motility
Eyelid Movements: Behavioral Studies of Blinking in Humans under Different Stimulus Conditions
Diurnal variation in spontaneous eye-blink rate
Spontaneous blinking in healthy persons: An optoelectronic study of eyelid motion
Modeling upper eyelid kinematics during spontaneous and reflex blinks
Upper and Lower Eyelid Saccades Describe a Harmonic Oscillator Function
Eyelid disorders: Diagnosis and management
Topographic analysis of eyelid position using digital image processing software
Aesthetic analysis of the ideal eyebrow shape and position
Are Eyebrow Movements Linked to Voice Variations and Turn-taking in Dialogue? An Experimental Investigation
Desired Position, Shape, and Dynamic Range of the Normal Adult Eyebrow
Measurement and analysis of associated mimic muscle movements
HYDROïD Humanoid Robot Head with Perception and Emotion Capabilities: Modeling, Design, and Experimental Results. Front.
Front EMYS-Emotive Head of a Social Robot Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation you'll have to go into the closet now': Children's social and moral relationships with a humanoid robot Design of Real Time Facial Tracking and Expression Recognition for Human-Robot Interaction Mechanical Design of The Huggable Robot Probo CB: A humanoid research platform for exploring neuroscience Graphical user interface for humanoid head Amir-II A humanoid robot acting over an unstructured world Accurate control of a human-like tendon-driven neck The humanoid museum tour guide Robotinho Design and evaluation of emotion-display EDDIE A quest for a robust and scalable active vision humanoid head robot Human social response toward humanoid robot's head and facial features Mobile, dexterous, social robots for mobile manipulation and human-robot interaction The Karlsruhe humanoid head Assessing the effects of building social intelligence in a robotic interface for the home Direct imitation of human facial expressions by a userinterface robot The Bielefeld anthropomorphic robot head "Flobi Development of an interactive robot with emotional expressions and face detection Head-eyes system and gaze analysis of the humanoid robot Romeo Epi: An open humanoid platform for developmental robotics An expressive bear-like robot The Design of the Icub Humanoid Robot Mechatronic design of the Twente humanoid head Development and Walking Control of Emotional Humanoid Robot Mechanical Design of Emotion Expression Humanoid Robot WE-4RII Robotics in Healthcare A Systematic Review of Research on Robot-Assisted Therapy for Children with Autism Robot KASPAR as Mediator in Making Contact with Children with Autism: A Pilot Study Children with Autism Spectrum Disorders Make a Fruit Salad with Probo, the Social Robot: An Interaction Study Socially Assistive Robots for Children with Cerebral Palsy: A Meta-Analysis A motor learning therapeutic intervention for a child with cerebral palsy through a social assistive robot Emergence of Socially Assistive Robotics in Rehabilitation for Children with Cerebral Palsy: A Review Exploring the perceptions of people with dementia about the social robot PARO in a hospital setting Effectiveness of Companion Robot Care for Dementia: A Systematic Review and Meta-Analysis Introducing the Social Robot MARIO to People Living with Dementia in Long Term Residential Care: Reflections Acceptance and Attitudes toward a Human-like Socially Assistive Robot by Older Adults Scoping review on the use of socially assistive robot technology in elderly care Personal Robot Assistants for Elderly Care: An Overview The effect of care-robots on improving anxiety/depression and drug compliance among the elderly in the community Assessing the Children's Receptivity to the Robot MARKO Effectiveness of physical therapy interventions for children with cerebral palsy: A systematic review Children with cerebral palsy participate: A review of the literature Effects of a Functional Therapy Program on Motor Abilities of Children With Cerebral Palsy A Transmission Index for Spatial Mechanisms Generalized transmission index and transmission quality for spatial linkages Transmission angle in mechanisms (Triangle in mech) Designing and Constructing an Animatronic Head Capable of Human Motion Programmed Using Face-Tracking Software A Wearable Navigation Device for Visually Impaired People Based on the Real-Time Semantic Visual SLAM System Your Face, Robot! 
Deriving Minimal Features for Human-Like Facial Expressions in Robotic Faces
Unmasking the Face. A Guide to Recognizing Emotions from Facial Clues
Facial Action Coding System. Manual and Investigator's Guide
Face to interface: Facial affect in (hu)man and machine
Unmasking the Face
I show you how I like you-can you read it in my face? Robotics
A design methodology for expressing emotion on robot faces
Robotics as Assistive Technology for Treatment of Children with Developmental Disorders-Example of Robot MARKO
Emotion and sociable humanoid robots
The effects of culture and context on perceptions of robotic facial expressions
Design and testing of a hybrid expressive face for a humanoid robot
The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents
The Perception of Emotion in Artificial Agents
Age and Gender Differences in Emotion Recognition
Expression intensity, gender and facial emotion recognition: Women recognize only subtle facial emotions better than men
Wearing Face Masks Strongly Confuses Counterparts in Reading Emotions
The impact of facemasks on emotion recognition, trust attribution and re-identification
(Un)mask yourself! Effects of face masks on facial mimicry and emotion perception during the COVID-19 pandemic
The Impact of Face Masks on the Emotional Reading Abilities of Children-A Lesson From a
Unmasking the psychology of recognizing emotions of people wearing masks: The role of empathizing, systemizing, and autistic traits. Pers
How does the presence of a surgical face mask impair the perceived intensity of facial emotions?
Reading Covered Faces
Facial Identity and Facial Emotions: Speed, Accuracy, and Processing Strategies in Children and Adults
Age-related differences in emotion recognition ability: A cross-sectional study
Age Differences in Emotion Recognition Skills and the Visual Scanning of Emotion Faces
Facial Emotion Labeling Deficits in Children and Adolescents at Risk for Bipolar Disorder
Emotion identification and tension in female patients with borderline personality disorder
Facial emotion recognition deficits in patients with bipolar disorder and their healthy parents

Funding: This research received no external funding.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Acknowledgments: This research is supported by a scientific and technical cooperation between the Republic of Serbia and the People's Republic of China through the project "The Development of a Socially Assistive Robot as a Key Technology in the Rehabilitation of Children with Cerebral Palsy", under contract 451-02-818/2021-09/19. We would like to thank Zhenli Lu, from the School of Electrical Engineering and Automation, Changshu Institute of Technology, People's Republic of China, for his assistance in forming this paper and for providing constructive feedback, which we gladly adopted.
Conflicts of Interest: The authors declare no conflict of interest.