title: A motion aftereffect from viewing other people's gaze
authors: Randall, William; Guterstam, Arvid
date: 2020-11-09
journal: bioRxiv
DOI: 10.1101/2020.11.08.373308

Recent work suggests that our brains may generate subtle, false motion signals streaming from other people to the objects of their attention, aiding social cognition. For instance, brief exposure to static images depicting other people gazing at objects made subjects slower at detecting subsequent motion in the direction of gaze, suggesting that looking at someone else's gaze caused a directional motion adaptation. Here we confirm, using a more stringent method, that viewing static images of another person gazing in a particular direction, at an object, produced motion aftereffects in the opposite direction. The aftereffect was manifested as a change in perceptual decision threshold for detecting left versus right motion. The effect disappeared when the person was looking away from the object. These findings suggest that the attentive gaze of others is encoded as an implied agent-to-object motion that is sufficiently robust to cause genuine motion aftereffects, though subtle enough to remain subthreshold.

The brain may model other people's attention using a subtle, implied motion signal (Guterstam and Graziano, 2020b). In this study, we made use of a visual phenomenon called the motion aftereffect to test a prediction of this proposed model: viewing static images depicting other people gazing in a particular direction, at an object, should lead to an illusory subsequent motion in the opposite direction.

The motion aftereffect is a classic phenomenon in which a false motion signal arises in the visual image after prior exposure to motion in the opposite direction (Anstis et al., 1998; Wohlgemuth, 1911).
It is typically assessed experimentally by first exposing subjects to a motion stimulus, including implied motion (e.g., a static image of a running animal) (Kourtzi and Kanwisher, 2000; Krekelberg et al., 2003; Winawer et al., 2008), and then measuring subjects' speed and accuracy at detecting subsequent random-dot motion test probes (Glasser et al., 2011; Levinson and Sekuler, 1974). A genuine motion aftereffect is associated with slower reaction times and decreased accuracy for motion test probes of the same directionality as the adapting stimulus, reflecting direction-specific neuronal fatigue that affects motion processing time and perceptual decision-making. In a series of seven behavioral experiments (Guterstam and Graziano, 2020a), we previously showed that brief exposure to static images depicting a person gazing in a particular direction, at an object, made subjects significantly slower at detecting subsequent motion in the direction of gaze, which is compatible with a motion aftereffect caused by gaze encoded as implied motion. The effect disappeared when the depicted person was blindfolded or looked away from the object, and control experiments excluded differences in eye movements or asymmetric allocation of covert attention as possible drivers of the effect. However, because the paradigm in (Guterstam and Graziano, 2020a) was primarily designed for analysis of reaction time rather than accuracy, the task was made easy and accuracy was close to ceiling (mean accuracy across experiments = 91%). Thus, that experiment showed only reaction time effects and failed to reveal any meaningful accuracy effects. The goal of the present study was to examine whether seeing someone else's gaze direction caused enough of a motion aftereffect to shift subjects' perceptual decisions about subsequent motion.
The present experiment is therefore a conceptual replication of the previous studies, using a different measure of the motion aftereffect to test whether the discovery is reliable and robust across methods.

To achieve this goal, we modified the motion adaptation paradigm described in (Guterstam and Graziano, 2020a), which was based on a random-dot motion direction discrimination task, to maximize the likelihood of detecting meaningful differences in accuracy. Subjects were tested using an online, remote platform (Prolific) (Palan and Schitter, 2018) due to restrictions on research imposed by the coronavirus epidemic (see Materials and Methods for details of sample sizes and exclusion criteria). Just as in (Guterstam and Graziano, 2020a), in each trial, subjects were first exposed to an image depicting a face on one side of the screen, gazing at a neutral object, a tree, on the other side (Fig 1A). After 1.5 s, the face-and-tree image disappeared, and subjects saw a random-dot motion stimulus in the space interposed between where the head and the tree had been. The stimulus was shown for 1.0 s. The proportion of dots moving coherently in one direction (dot coherence) varied across seven levels, ranging from 30% of the dots moving left (and 70% moving randomly) to 30% moving right, in increments of 10% (thus, the middle condition of 0% coherence had 100% of the dots moving randomly). After the dots disappeared, subjects made a forced-choice left-or-right judgement of the global direction of the moving-dot stimulus.

This approach allowed us to calculate, at each level of coherence and on a subject-by-subject basis, the frequency of responses that were spatially congruent with the gaze direction in the preceding face-and-tree image (i.e., the direction toward the location of the tree).
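The coherence manipulation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' stimulus code; the dot count, angle convention, and function name are assumptions.

```python
import numpy as np

def dot_directions(n_dots=100, coherence=0.3, rng=None):
    """Assign a motion direction (angle in radians) to each dot.

    A fraction |coherence| of the dots moves coherently (rightward at 0 rad
    for positive coherence, leftward at pi rad for negative coherence);
    the remaining dots move in uniformly random directions.
    """
    rng = np.random.default_rng(rng)
    n_coherent = int(round(abs(coherence) * n_dots))
    coherent_angle = 0.0 if coherence >= 0 else np.pi
    angles = rng.uniform(0, 2 * np.pi, size=n_dots)  # random-direction dots
    angles[:n_coherent] = coherent_angle             # coherent subset
    return angles

# 30% rightward coherence: 30 of 100 dots share the coherent direction
angles = dot_directions(n_dots=100, coherence=0.3, rng=0)
print(int(np.sum(angles == 0.0)))
```

At 0% coherence every dot draws a random direction, matching the paper's middle condition.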
By fitting these data (the proportion of gaze-congruent responses at each coherence level) to a sigmoid function and extracting the sigmoid's central point, we estimated the perceived null motion, that is, the amount of motion coherence at which subjects were equally likely to respond that the motion direction of the test probe was congruent or incongruent with the preceding gaze direction. We found that viewing another's gaze significantly shifted the perceived null motion, as if that gaze caused an illusory motion aftereffect in the opposite direction (experiment 1). The effect disappeared when the face in the display was looking away from the object (experiment 2; Fig 1B), suggesting that the perception of the other person actively gazing at the object was the key factor. These findings extend previous results by demonstrating that viewing other people's gaze is associated with a false motion signal, below the level of explicit detection but still capable of generating a motion aftereffect that influences not only perceptual processing time, but also perceptual decision thresholds about subsequent motion.

Results

In both experiment 1 (face looking toward the tree) and experiment 2 (face looking away from the tree), the appearance of the face on the left and tree on the right, or the face on the right and tree on the left, was balanced and presented in a random order. The subsequent dot stimulus could move either leftward or rightward with 10%, 20%, or 30% coherence, or be completely random (0% coherence). For analysis, the trial types were collapsed into seven conditions: -30%, -20%, -10%, 0%, +10%, +20%, and +30%, where motion toward the location of the (preceding) tree was arbitrarily coded as positive coherence, and motion away from the tree as negative coherence. Thus, the predicted motion aftereffect from viewing the face actively gazing in the direction toward the tree (in experiment 1) should produce a positive shift (>0%) of the perceived null motion.
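The collapsing step described above, which recodes each raw trial into a signed coherence condition, can be sketched like this; the trial fields and function name are assumptions for illustration, not the authors' code.

```python
def signed_coherence(coherence_pct, motion_dir, tree_side):
    """Collapse a raw trial into one of the seven signed conditions.

    coherence_pct: unsigned dot coherence (0, 10, 20, or 30)
    motion_dir:    'left' or 'right', the global dot-motion direction
    tree_side:     'left' or 'right', the side where the tree appeared
    Returns coherence coded positive when motion heads toward the tree
    and negative when it heads away, with 0% left as 0.
    """
    if coherence_pct == 0:
        return 0
    return coherence_pct if motion_dir == tree_side else -coherence_pct

print(signed_coherence(20, "right", "right"))  # 20 (toward the tree)
print(signed_coherence(30, "left", "right"))   # -30 (away from the tree)
```

Because face/tree sides were balanced and randomized, this recoding folds left and right trials into a single toward/away axis.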
Subjects performed 70 trials in seven blocks of 10 trials each, thus 10 trials per condition.

In experiment 1 (n=59), the central point of the sigmoid function was significantly shifted in the positive direction (M = 1.18%), consistent with the prediction that implied motion streaming from the eyes toward the tree causes a motion aftereffect in the opposite direction (Guterstam and Graziano, 2020a). In other words, immediately after subjects saw a face gazing in one direction, the amount of real motion needed to make subjects think a test stimulus was randomly balanced between left and right movement was 1.18% coherence in the direction that the face had been gazing.

In experiment 2 (n=64), where the face was looking away from the tree, the central point of the sigmoid function was not significantly different from 0 (M = -0.47%, S.E.M. = 0.47%; t(63) = -1.01, p = 0.3165; Fig 2B). In a between-groups comparison, we found that the central point was significantly larger in experiment 1 than in experiment 2.

After all trials were completed, subjects in experiments 1 and 2 were asked what they thought the purpose of the experiment might be, and whether they were explicitly aware of any influence of the head-and-tree stimulus on their ability to respond to the dot motion stimulus. Though subjects offered guesses about the purpose of the experiment, none indicated anything close to a correct understanding. All subjects also insisted that, as far as they were aware, the head-and-tree stimulus had no impact on their response to the second stimulus. These questionnaire results suggest that any motion aftereffects observed here probably occurred at an implicit level.

Discussion

These results strongly support the notion that when people view a face looking at an object, the brain treats that gaze as though a movement were present, passing from the face to the object. The motion test probes were more likely to be judged as moving in the direction opposite the gaze direction depicted in the previous adapting image than to be moving in the same direction, but only when the agent in the image was actively gazing at the object.
This work extends previous results that focused on reaction times (Guterstam and Graziano, 2020a). Here, perception of other people's gaze significantly biased perceptual decisions about subsequent motion, which is a hallmark of the motion aftereffect. We propose that this hidden motion signal, associated with gaze, is part of an implicit 'fluid-flow' model of other people's attention that assists in human social cognition.

The null result of experiment 2 suggests that spatial priming, i.e., subjects simply being more prone to choose the direction that the face was looking, is an unlikely explanation of the findings of experiment 1. Had spatial priming been the driving factor, a similar bias should have been present in experiment 2, where the face gazed away from the tree.

Together with previous findings, these results suggest that the brain encodes the attentive gaze of others as a subtle motion signal streaming from an agent to the attended object. This motion signal may be detected using sensitive behavioral motion adaptation paradigms, such as in the present study or in (Guterstam and Graziano, 2020a). It can also be quantified using a tube-tilting task, in which subjects' angular judgements of the tipping point of a paper tube were implicitly biased by the presence of the gazing face, as if beams of force-carrying energy emanated from the eyes, gently pushing on the paper tube (Guterstam et al., 2019). The motion signal is also detectable in brain activity patterns in the human motion-sensitive MT complex and in the temporo-parietal junction, which responded to the gaze of others, and to visual flow, in a similar manner (Guterstam et al., 2020a). Finally, by contaminating a subject's visual world with a subthreshold motion that streams from another person toward an object, we could manipulate the subject's perception of that other person's attention, suggesting that subthreshold motion plays a functional role in social cognition (Guterstam and Graziano, 2020b).

Together, these present and previous findings suggest that the visual motion system is used to facilitate social brain mechanisms for tracking the attention of others.
We speculate that this implicit social-cognitive model, borrowing low-level perceptual mechanisms that evolved to process physical events in the real world, may help to explain the extraordinary cultural persistence of the belief in extramission, the myth that vision is caused by something beaming out of the eyes (Gross, 1999; Piaget, 1979; Winer et al., 1996).

Materials and Methods

Participants

For each experiment, participants were recruited through the online behavioral testing platform Prolific (Palan and Schitter, 2018). Using the tools available on the Prolific platform, we restricted participation such that no subject could take part in more than one experiment; thus, all subjects were naïve to the paradigm when tested. All participants indicated normal or corrected-to-normal vision, English as a first language, and no history of mental illness or cognitive impairment. All experimental methods and procedures were approved by the Princeton University Institutional Review Board, and all participants confirmed that they had read and understood a consent form outlining their risks, benefits, compensation, and confidentiality, and that they agreed to participate in the experiment. Each subject completed a single experiment in a 6-8 min session in exchange for monetary compensation. As is standard for online experiments, because of greater expected variation than for in-lab experiments, relatively large numbers of subjects were tested. A target sample size of 100 subjects per experiment was chosen arbitrarily before data collection began.
Because of stringent criteria for eliminating subjects who did not follow all instructions or showed poor task performance (see below), initial total sample sizes were larger than 100, and final sample sizes for those included in the analysis varied between experiments (experiment 1, ntotal = 115, nincluded = 59; experiment 2, nincluded = 64). Because accuracy data are meaningless if a subject cannot detect motion direction even at the easiest (highest) coherence levels, we excluded all subjects whose accuracy was less than 80% when 30% of the dots moved either right or left, in accordance with the exclusion criterion used in (Guterstam and Graziano, 2020a). The relatively high rate of exclusion due to poor performance here (35% on average) was expected, given that the average exclusion rate was 19% in a previous study (Guterstam and Graziano, 2020a) using the same dot motion direction discrimination task but with a fixed 40% coherence level, which is easier to detect. Moreover, participants in (Guterstam and Graziano, 2020a) underwent up to four sets of 10 practice trials, with feedback, before commencing the main experiment, since reaction times (RTs), and not accuracy, were the outcome of interest in that study. In the present study, subjects did not undergo any practice sessions, because accuracy was our primary outcome. It therefore seems probable that the absence of practice trials and the lower dot coherence levels in the present study fully explain the higher exclusion rates reported here compared to (Guterstam and Graziano, 2020a).

No subjects were excluded for failure to carefully read the instructions, which was determined by an instructional manipulation check (IMC). The IMC, also used in (Guterstam and Graziano, 2020a), was adapted from (Oppenheimer et al., 2009) and consisted of the following sentence inserted at the end of the instructions page: "In order to demonstrate that you have read these instructions carefully, please ignore the 'Continue' button below, and click on the 'x' to start the practice session."
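The performance-based exclusion rule above (at least 80% accuracy at the easiest, 30% coherence level) could be applied per subject as sketched below; the trial data structure and function name are assumptions for illustration.

```python
def passes_performance_check(trials, threshold=0.80):
    """Keep a subject only if accuracy at the easiest coherence level
    (|coherence| == 30, in signed percent) is at least `threshold`.

    trials: list of dicts with keys 'coherence' (signed %, -30..30)
    and 'correct' (bool, whether the left/right judgement was right).
    """
    easiest = [t for t in trials if abs(t["coherence"]) == 30]
    if not easiest:
        return False  # no trials at the easiest level: cannot assess
    accuracy = sum(t["correct"] for t in easiest) / len(easiest)
    return accuracy >= threshold

# 9 of 10 easiest-level trials correct -> 90% accuracy -> included
trials = ([{"coherence": 30, "correct": True}] * 9
          + [{"coherence": -30, "correct": False}])
print(passes_performance_check(trials))  # True
```

Subjects failing this check would simply be dropped before the sigmoid fitting stage.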
Two buttons were presented at the bottom of the screen, "Continue" and "x", and clicking on "Continue" resulted in a failed IMC.

After agreeing to participate, subjects were redirected to a website where stimulus presentation and data collection were controlled by custom software based on HTML, CSS, JavaScript (using the jsPsych JavaScript library (de Leeuw, 2015)), and PHP. Subjects were required to complete the experiment in full-screen mode; exiting full screen resulted in the termination of the experiment and no payment. Because the visual stimuli were rendered on participants' own web browsers, viewing distance, screen size, and display resolution varied. The face-and-tree image encompassed 60% of the subject's total screen width.

For the statistical analysis, the trial types were collapsed into seven conditions: -30%, -20%, -10%, 0%, +10%, +20%, and +30%. Motion toward the location of the (preceding) tree was arbitrarily coded as positive coherence, and motion away from the tree as negative coherence. On a subject-by-subject basis, for each condition, we calculated the proportion of responses that was spatially congruent with the direction away from the face and toward the tree (which, in experiment 1, corresponded to the gaze direction of the face). We then fit the accuracy data to a sigmoidal function (Eq. 1) (Noel et al., 2020).
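The per-subject fit described above can be illustrated with a standard logistic function. The paper's exact parameterization (Eq. 1) is not reproduced in this extract, so the two-parameter form below, and the response proportions, are assumptions for illustration rather than the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    """Logistic psychometric function.

    x:  signed coherence (% of dots moving toward the tree)
    x0: central point, i.e., the perceived null motion
    k:  slope of the function
    """
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# The seven signed coherence conditions
coherence = np.array([-30, -20, -10, 0, 10, 20, 30], dtype=float)
# Hypothetical proportions of 'toward the tree' responses for one subject
p_congruent = np.array([0.05, 0.12, 0.30, 0.47, 0.68, 0.88, 0.96])

(x0, k), _ = curve_fit(sigmoid, coherence, p_congruent, p0=[0.0, 0.1])
print(f"perceived null motion: {x0:.2f}% coherence")
```

A positive x0 means extra real motion toward the tree was needed to look balanced, i.e., an aftereffect opposite the gaze direction. The resulting per-subject central points would then be compared against zero (one-sample t-test) and between experiments (two-sample t-test), for instance with scipy.stats.ttest_1samp and scipy.stats.ttest_ind.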
The design, procedures, and statistical analysis of experiment 2 were identical to those of experiment 1, with one exception: the face was turned away from the tree. This control condition should eliminate any gaze-induced effect on motion judgments (Guterstam and Graziano, 2020a). We therefore predicted that the mean central point in experiment 2 would not significantly differ from 0 (two-tailed one-sample t-test), and that it would be significantly smaller than the mean central point among participants in experiment 1 (two-tailed two-sample t-test).

Fig 2 legend (fragment): the central point corresponds to the coherence level at which subjects were equally likely to respond that the motion is "going toward the tree" as "going away from the tree". When the face was looking at the tree (experiment 1), the central point was shifted in the positive direction.

References

The motion aftereffect
Mindblindness: An Essay on Autism and Theory of Mind
Reading the mind from eye gaze
Perceptual and neural consequences of rapid motion adaptation
Human consciousness and its relationship to social neuroscience: A novel hypothesis
The fire that comes from the eye
Implied motion as a possible mechanism for encoding other people's attention
Visual motion assists in social cognition. Proc. Natl. Acad. Sci.
Implicit model of other people's visual attention as an invisible, force-carrying beam projecting from the eyes
Other people's gaze encoded as implied motion in the human brain
Temporo-Parietal Cortex Involved in Modeling One's Own and Others' Attention
Attributing awareness to oneself and to others
Unique morphology of the human eye
Activation in human MT/MST by static images with implied motion
Neural correlates of implied motion
jsPsych: A JavaScript library for creating behavioral experiments in a Web browser
Direction-specific adaptation in human vision: measurements using isotropic random dot patterns
Rapid Recalibration of Peri-Personal Space: Psychophysical, Electrophysiological, and Neural Network Modeling Evidence
Instructional manipulation checks: Detecting satisficing to increase statistical power
Prolific.ac: A subject pool for online experiments
Humans are sensitive to attention control when predicting others' actions
The Child's Conception of the World
A motion aftereffect from still photographs depicting motion
Images, Words, and Questions: Variables That Influence Beliefs about Vision in Children and Adults
On the after-effect of seen movement