key: cord-0793894-e1its2iv
authors: Lin, Rebecca Z.; Marsh, Elisabeth B.
title: Abnormal singing can identify patients with right hemisphere cortical strokes at risk for impaired prosody
date: 2021-06-11
journal: Medicine (Baltimore)
DOI: 10.1097/md.0000000000026280
sha: 57e0aa1ff5fba6b1b44c215ec6f21dd4ebbad725
doc_id: 793894
cord_uid: e1its2iv

Despite lacking the aphasia seen with left hemisphere (LH) infarcts involving the middle cerebral artery territory, right hemisphere (RH) strokes can result in significant difficulties with affective prosody. These impairments may be more difficult to identify but lead to significant communication problems. We determine whether evaluation of singing can accurately identify stroke patients with cortical RH infarcts at risk for prosodic impairment who may benefit from rehabilitation. A prospective cohort of 36 patients with acute ischemic stroke was recruited. Participants underwent an experimental battery evaluating their singing, prosody comprehension, and prosody production. Singing samples were rated by 2 independent reviewers as subjectively "normal" or "abnormal" and analyzed for properties of the fundamental frequency. Relationships between infarct location, singing, and prosody performance were evaluated using t tests and chi-squared analysis. Eighty percent of participants with LH cortical strokes were unable to successfully complete any of the tasks due to severe aphasia. For the remainder, singing ratings corresponded to stroke location for 68% of patients. Patients with RH cortical strokes demonstrated a lower mean fundamental frequency while singing than those with subcortical infarcts (130.4 vs 176.8, P = 0.02). They also made more errors on tasks of prosody comprehension (28.6 vs 16.0, P < 0.001) and production (40.4 vs 18.4, P < 0.001). Patients with RH cortical infarcts are more likely to exhibit impaired prosody comprehension and production and demonstrate poorer variation of tone when singing compared to patients with subcortical infarcts. A simple singing screen is able to successfully identify patients with cortical lesions and potential prosodic deficits.

Prosody is defined as the variations in pitch, rhythm, and emphasis in speech often used to interpret and express emotions. [1] Important prosodic features include measures of fundamental frequency and duration of speech. [2] While the prosodic features required to fully understand the meaning of any given sentence are complex, [3] we can begin by considering the importance of acoustic cues to convey emotional states. The acoustic cues used to convey happiness, as an example, consist of a high mean pitch and fast speech rate, while the acoustic cues to convey sadness include a low mean pitch and slow speech rate. These cues are similar across languages, [4] indicating that prosodic expression of emotion is universal. Prosodic function is broadly divided into 2 categories: affective, conveying the speaker's emotional state, [5] and linguistic, providing clues regarding syntax (eg, I did not say SHE stole my money, vs I did not say she STOLE my money). [5-7] As 1 of the main extralinguistic attributes of oral communication, prosody, particularly affective prosody, is critical for interpersonal interactions. The ability to use intonation to convey and understand spoken language is incredibly important in our day-to-day functioning; it allows us to respond appropriately to spouses, friends, and co-workers in situations that may require disparate responses such as sympathy or sarcasm.
The neural circuitry for both affective and linguistic prosody has been well described, with affective prosody lateralizing more to the right hemisphere (RH) and involving many of the same areas necessary for language processing in the left hemisphere (LH). [8-15] Damage to these areas, for example, due to a right middle cerebral artery (MCA) stroke, can result in impaired prosodic function for both production and comprehension of speech. [16] While LH lesions can also result in prosodic impairment, [17] unlike left MCA strokes, which commonly present with aphasia, RH lesions often leave speech production sounding relatively or entirely normal, making affected patients more difficult to identify as impaired compared to their LH counterparts. Despite this, their behavior may appear strange and inappropriate, alienating them from those around them. Unfortunately, many of the other deficits typically experienced with RH lesions (eg, poor attention and visual processing) [18] that could potentially serve as clues that prosody may be impaired can also be difficult to appreciate, particularly on basic functional screens. Therefore, without an increased suspicion, critical communication issues can be overlooked in patients with RH cortical strokes during the rehabilitation process. Failure to detect impairment is a missed opportunity, as these difficulties can significantly impact long-term function and the ability to successfully reintegrate into society. Given the complexity of the neural circuitry underlying prosody, patchy involvement of regions following stroke, and the inconsistent ability of the rehabilitation team to directly access neuroimaging at the time of assessment and treatment, easy and effective functional screening for individuals with lesions potentially at risk for prosodic impairment is needed.

Designing the ideal communication screen for aprosodia is challenging; however, prior studies have shown that patients with cortical strokes involving similar RH areas can also exhibit deficits in singing when compared to patients with subcortical infarcts. [19,20] Interestingly, prosody has also been referred to as the "melody of speech," [19-22] and additional studies have shown that sung melody can be used post stroke to enhance acquisition of verbal material, [23] likely because music processing localizes to areas similar to, yet distinct from, those for language. [24]

In this study, we further explore the relationship between singing and prosody. We evaluate the ability of patients with acute ischemic stroke to sing, and the ability of a simple "normal/abnormal" grading system of singing samples to correctly identify individuals with cortically based RH lesions at risk for aprosodia. Singing samples are further characterized to determine objective differences in fundamental frequency for cortical vs subcortical infarcts that may correspond to a rater's ability to detect abnormalities, and participants undergo formal tests of expressive and receptive prosody to gauge the extent of their impairment. Individuals with both right and LH strokes are included; however, we hypothesize that aphasia will significantly limit testing in patients with LH cortical strokes.
We further suspect that individuals with RH lesions involving cortex will have higher rates of prosodic impairment, with respect to both comprehension and production, compared to those with subcortical lesions, and that, due to detectable abnormalities in fundamental frequency, singing will correctly identify RH cortical lesions, proving it to be an easy, inexpensive, and efficient screen for those at higher risk of impaired prosody that may impact their long-term outcomes and ability to successfully rehabilitate. This population is often overlooked because its members do not exhibit aphasia, but it represents a group that would potentially benefit from enhanced rehabilitation. Detecting the presence of prosodic deficits early in the clinical course could significantly influence how providers approach rehabilitation and the counseling of ischemic stroke patients and their families during the recovery process.

This study was approved by the Johns Hopkins Medicine Institutional Review Board. Informed consent was obtained from all participants or their legal representatives. We recruited a prospectively collected cohort of consecutive patients presenting to the Johns Hopkins Bayview Medical Center, a large, urban Comprehensive Stroke Center, with acute ischemic stroke on neuroimaging and symptom onset within 24 hours of admission. Patients were screened, consented, and evaluated by the study team within 48 hours of symptom onset. A non-contrast head computed tomography (CT) was obtained on admission to rule out intracranial bleeding and repeated for patients unable to undergo magnetic resonance imaging (MRI) to confirm the presence and location of ischemia (n = 7). For the remaining 29 patients, an MRI was obtained, typically within the first 24 to 48 hours of admission, on a 3.0T Siemens (Munich, Germany) Trio scanner and used to classify stroke location and quantify volume. Diffusion-weighted imaging (DWI) sequences (40 slices with 2.0 mm³ voxel size) were evaluated to confirm the presence of acute stroke (bright on DWI maps). Patients were excluded if imaging demonstrated primary intracerebral hemorrhage or no evidence of ischemia. A board-certified neurologist who was blinded to the clinical findings reviewed the imaging to determine the vascular distribution affected based on the pattern of infarct and classified each stroke as cortical (involving cortical territories supplied by the MCA) or subcortical (lacking cortical involvement and supplied instead by a single small blood vessel branching off the MCA). Please see Figure 1 for representative examples of cortical and subcortical infarcts. Cortical strokes were further delineated by their vascular distribution as full or partial MCA syndromes, while subcortical infarcts were described by their anatomical location given the involvement of tiny, unnamed vessels (eg, basal ganglia, internal capsule, and thalamus). We determined stroke volume automatically using the patient's diffusion-weighted sequence and the Generic Lesion Segmentation tool in Carestream Vue PACS, version 12. [25] Other information regarding demographics, stroke characteristics, and vascular risk factors was obtained through chart review after the experimental battery was conducted.
To determine whether singing can be used to differentiate cortical from subcortical lesions and indicate potential impairment in prosody, patients admitted to the Johns Hopkins Bayview Inpatient Neurology Service were administered a short battery of tasks to evaluate their singing, receptive prosody, and productive prosody. Patients with ischemic stroke were identified, consented, and tested at the bedside by the study team within 48 hours of stroke onset. Team members were undergraduate students, trained to administer the battery by a speech-language pathologist and evaluated to ensure accuracy and consistency of administration. A standard script was followed when explaining the study and tasks to each participant. Tasks were administered in the following order, prioritizing the singing assessment as the primary outcome of interest. The order of stimuli presented remained consistent across subjects. There was no time limit associated with any of the tasks, although testing was stopped if participants expressed that they were too fatigued to continue. Hearing was not formally assessed prior to administration; however, none of the participants had a history of significant hearing impairment.

2.2.1. Singing. Participants were asked to sing "Happy Birthday." This song was chosen given its widespread familiarity and well-known tune and its utility in assessing recitation, melody, and rhythm of speech as part of the BDAE-III battery. [26] Though the use of words during song production was encouraged, both to better orient the participant to the task given its familiarity and because the majority of participants were not aphasic, words were not required (humming or other vocalization was acceptable), as the focus was on the appropriate modulation of pitch. An iPhone 8 was placed 2 to 3 inches from the mouth of the participant and its recording program used to record the song for further analysis (below). Participants were consented by the study team to have their singing recorded and stored for later evaluation.

2.2.2. Emotion recognition. The emotion recognition task was designed to evaluate receptive prosody. Stimuli have been previously published and successfully used to evaluate for prosodic impairment. [27,28] A female speaker recorded 25 sentences composed of nonwords with the phonological and morphological features of English intact (eg, I nestered the flegs). This prevented participants from relying on the semantic content of the phrases rather than prosodic cues when selecting the emotion that best corresponded to the recording. Each audio file was uploaded onto a PowerPoint slide along with instructions to pick the emotion best describing the speaker's tone of voice from the printed multiple-choice options: surprised, happy, sad, angry, and afraid. The 5 emotion words were presented in the same order on each slide. The PowerPoint was presented on a laptop (MacBook Pro) with volume and brightness set to 100%. Sentences were presented in the same order each time for consistency. Participants were asked to make their choice either by articulating an answer or pointing to their desired choice on the laptop screen. In cases where they were unsure, they were encouraged to select an emotion before moving to the next slide. Responses were marked correct if the participant appropriately identified the emotion.

2.2.3. Emotion production. The emotion production task was designed to evaluate productive prosody.
Participants were asked to read 24 semantically neutral sentences (eg, The man knocked on our front door) in a given emotional state (surprised, happy, sad, angry, afraid, or bored). [29] Sentences were displayed on PowerPoint slides below 1 of the emotions listed above. The stimuli were presented on a laptop set to 100% brightness in the same order for each participant. When reading each sentence aloud, participants were asked to emphasize the emotion they were attempting to convey. Successful production of emotion was evaluated at the time of testing by the study team, who had been previously evaluated on their ability to accurately determine correctness based on adherence to prosodic cues (eg, fast rate and high pitch for happy, slow rate and low pitch for sad).

2.3. Acoustic analysis

2.3.1. Praat analysis. The recordings of participants singing "Happy Birthday" were analyzed using Praat, version 6. [30] Recordings were edited to exclude any speech. A customized script was used to extract 3 parameters of the fundamental frequency (F0): its mean (F0mean), range (F0range), and coefficient of variation (F0CV). [31] Parameters of duration were not evaluated, as they tended to be highly variable among participants and were thought to be potentially confounded by their energy level at the time of evaluation.

2.3.2. Singing rating. The original unedited recordings were also graded as "normal" or "abnormal" by 2 independent raters to determine the ability of an impartial observer to correctly and quickly identify singing impairment in the clinical setting. The raters were other undergraduate members of the study team without significant training in language processing, given our desire to evaluate a screen that would be useful for those of varied backgrounds (physicians, nurses, and therapists with or without additional expertise). Raters were told simply to evaluate changes in pitch matching the familiar tune of "Happy Birthday," rather than the clarity or correctness of speech. A "normal" rating indicated the expected presence of variations in pitch, while an "abnormal" rating indicated the absence of such variation. The raters were blinded to the participant's identity and stroke location. A Cohen kappa was calculated for inter-rater reliability. When needed, disagreements over singing rating were resolved by consensus between the 2 raters; ties were broken by a third party with similar training.

All statistical analyses were carried out using Stata, version 14. [32] The significance threshold was set at P = 0.05. We first determined the percentage of patients able to successfully complete each task with respect to infarct location (RH vs LH, cortical vs subcortical). The presence of aphasia as a confounding factor was noted. Participants unable to complete a given task were removed from further analysis with respect to that task. We next evaluated our primary outcome of interest, the percentage of patients in whom an abnormal singing rating correctly identified a cortical lesion location. Chi-squared analysis was then used to formally examine the association between infarct location and singing rating. Univariate analysis using t tests (for continuous variables) and chi-squared tests (for categorical variables) was also performed to evaluate for potential confounding factors including age, sex, handedness, hemisphere, and depression.
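To make the acoustic measures from the Praat analysis above concrete, the following is a minimal sketch, written in Python with NumPy rather than the customized Praat script used in the study, of how the 3 singing parameters (F0mean, F0range, and F0CV) can be computed from a fundamental-frequency contour. The function name and example contour are hypothetical; the only assumption is that unvoiced frames are marked with 0 or NaN, a common convention for pitch trackers.

import numpy as np

def f0_parameters(f0_hz):
    # f0_hz: fundamental-frequency estimates in Hz, one per analysis frame,
    # with 0 (or NaN) marking unvoiced frames. The contour itself could come
    # from any pitch tracker (eg, a Praat pitch object exported to text).
    f0 = np.asarray(f0_hz, dtype=float)
    voiced = f0[np.isfinite(f0) & (f0 > 0)]        # keep voiced frames only
    if voiced.size == 0:
        raise ValueError("no voiced frames in the contour")
    f0_mean = voiced.mean()                        # F0mean, in Hz
    f0_range = voiced.max() - voiced.min()         # F0range, in Hz
    f0_cv = voiced.std(ddof=1) / f0_mean           # F0CV, unitless
    return f0_mean, f0_range, f0_cv

# Hypothetical contour from a short stretch of singing (values in Hz).
example_contour = [0, 0, 138, 142, 150, 161, 172, 180, 0, 147, 151, 0]
print(f0_parameters(example_contour))

A coefficient of variation near zero corresponds to near-monotone singing, the kind of reduced pitch variation an "abnormal" rating is intended to capture.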
To evaluate the effect of lesion location on parameters of the fundamental frequency, independent t tests were performed. The t tests were also used to determine the extent of prosodic impairment based on lesion location by evaluating the relationship between location and errors in emotion recognition and production. Multivariate linear regression models were then used to evaluate the interaction between lesion location, hemisphere, age, and sex.

Representative examples of cortical and subcortical infarcts are shown in Figure 1, and Table 2 details the lesion volumes and distribution of vascular territories within our population. Subcortical strokes were more heterogeneous, owing to the occlusion of a small, deep blood vessel rather than the MCA, but illustrative examples are also included for reference. The mean age of the participants was 68 years (SD = 14); 31% were black; and 42% were male. Of the 36 participants, 28 successfully participated in singing "Happy Birthday." Eight participants, all with LH cortical strokes, were aphasic and unable to participate in any of the tasks, including singing. Two additional participants with LH lesions (1 cortical, 1 subcortical) were able to sing but were too fatigued to complete any further assessments, and 3 additional participants (2 RH cortical, 1 LH subcortical) completed the emotion recognition task but not the emotion production task. Participants made an average of 21 emotion recognition errors (SD = 10) and 27 emotion production errors (SD = 15). During the singing of "Happy Birthday," mean values for F0mean, F0range, and F0CV were 157 Hz (SD = 54), 158 Hz (SD = 74), and 0.18 (SD = 0.08), respectively. Thirty-nine percent had abnormal singing ratings. For full results, please refer to Table 1.

We evaluated whether "normal" singing ratings were associated with subcortical stroke and "abnormal" singing ratings with cortical infarct (Table 3). The Cohen kappa for the independent singing ratings was k = 0.620. Though the association between lesion location and singing rating did not reach statistical significance (P = 0.074), singing rating correctly classified 68% of cortical and subcortical strokes for individuals able to participate. Abnormal singing was not significantly associated with age (P = 0.515), sex (P = 0.934), handedness (P = 0.458), hemisphere (P = 0.453), or depression (P = 0.264). Compared to participants with subcortical strokes, participants with cortical infarcts demonstrated a significantly lower F0mean (mean cortical = 130.4 [SD = 37.9], mean subcortical = 176.8 [SD = 56.0]; P = 0.020). There was no significant association between infarct location and F0range or F0CV, although participants with cortical strokes tended to display a smaller F0range and lower F0CV than participants with subcortical strokes (see Table 1 for full details). There was also no significant association between hemisphere and singing characteristics, though participants with RH strokes did tend to have a lower F0mean (mean = 153.6 [SD = 56.4] vs 162.9 [SD = 50.5]; P = 0.668), smaller F0range (mean = 148.8 [SD = 67.1] vs 174.1 [SD = 86.7]; P = 0.397), and lower F0CV (mean = 0.17 [SD = 0.08] vs 0.19 [SD = 0.08]; P = 0.545) than participants with LH strokes who were able to participate. Individuals with "abnormal" singing tended to have lower F0mean, F0range, or F0CV (Table 3), but results did not reach statistical significance. After controlling for age and sex in multivariable regression models, there remained a significant association between F0mean and lesion location (P = 0.011).
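For readers who want to reproduce this style of analysis, the sketch below mirrors the main comparisons described in the statistical methods and summarized above (inter-rater agreement, the chi-squared test of singing rating against lesion location, the independent t test on F0mean, and a multivariable linear model adjusting for age and sex), but uses Python with pandas, SciPy, scikit-learn, and statsmodels rather than Stata. The DataFrame layout and column names are assumptions for illustration, not the study's actual dataset or code.

import pandas as pd
from scipy import stats
from sklearn.metrics import cohen_kappa_score
import statsmodels.formula.api as smf

def analyze(df):
    # df: one row per participant, with assumed columns 'rater1' and 'rater2'
    # (the independent normal/abnormal singing ratings), 'rating' (consensus
    # rating), 'location' (cortical/subcortical), 'f0_mean', 'age', and 'sex'.

    # Inter-rater reliability of the normal/abnormal singing ratings.
    kappa = cohen_kappa_score(df["rater1"], df["rater2"])

    # Association between lesion location and consensus singing rating.
    contingency = pd.crosstab(df["location"], df["rating"])
    chi2, p_rating, _, _ = stats.chi2_contingency(contingency)

    # Independent t test: F0 mean by lesion location.
    cortical = df.loc[df["location"] == "cortical", "f0_mean"]
    subcortical = df.loc[df["location"] == "subcortical", "f0_mean"]
    t_stat, p_f0 = stats.ttest_ind(cortical, subcortical)

    # Multivariable linear regression: F0 mean ~ location + age + sex.
    model = smf.ols("f0_mean ~ C(location) + age + C(sex)", data=df).fit()

    return {"kappa": kappa, "p_rating": p_rating, "p_f0": p_f0, "model": model}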
Please see Table 4 for the full results of the multivariable regression. Given the lack of patients with LH cortical lesions participating in the battery, hemisphere was not included in the multivariable regression. Participants with cortical strokes made significantly more errors on the emotion recognition task than participants with subcortical strokes (mean cortical = 28.6, mean subcortical = 16.0; P < 0.001). There was no significant association between performance and affected hemisphere for those who participated, though participants with RH strokes tended to make more errors than participants with LH strokes (mean = 22.9 [SD = 9.4] vs 17.8 [SD = 10.7]; P = 0.225). Participants with cortical strokes also made significantly more errors on the emotion production task compared to participants with subcortical strokes (mean cortical = 40.4 [SD = 8.3], mean subcortical = 18.4 [SD = 11.7]; P < 0.001). There was no significant association between performance and hemisphere for those who were able to participate, though participants with RH strokes tended to make more errors than participants with LH strokes (mean = 28.5 [SD = 13.9] vs 23.7 [SD = 18.1]; P = 0.496).

Table 3. Singing rating correctly identified infarct location for 68% of patients. Columns: normal rating (n = 17), abnormal rating (n = 11), unable to sing (n = 8). LH cortical, n (%): 1 (10%), 1 (10%), 8 (80%).

This study characterizes prosodic deficits in patients presenting with acute cortical or subcortical ischemic strokes within the MCA territory or small perforating vessels, and evaluates the utility of a brief singing screen to differentiate lesion location and identify individuals at increased risk for prosodic impairment. Results indicate that when singing, patients with RH cortical strokes in the MCA distribution demonstrate a significantly lower F0mean (and, to a lesser degree, a smaller F0range and lower F0CV) than patients with subcortical infarcts and that they are more likely to receive an abnormal singing rating. Additionally, these patients make significantly more errors when completing tasks of prosody comprehension and production than patients with subcortical strokes. This relationship between impaired singing and prosodic deficits is consistent with the literature, as many studies have shown that patients with congenital amusia or tone-deafness associated with impaired RH cortical connectivity [33,34] also exhibit deficits in pitch perception of spoken language. [35-37]

It is important to note that this study was unintentionally biased toward RH cortical strokes, given that most patients with LH cortical infarcts were aphasic and unable to complete any of the experimental tasks (n = 8). While this was a hypothesized outcome, it likely explains why no significant associations were found between hemisphere and our variables of interest. Importantly, it does not take away from the potential clinical utility of the singing screen. Patients with LH cortical strokes demonstrate deficits in language production and comprehension that are easily recognizable in the acute setting (eg, aphasia). Conversely, individuals with RH (non-dominant) cortical lesions may appear to have fewer clinical deficits, but are no less impaired. In our study, patients with RH cortical strokes were typically able to fully participate, but they performed poorly compared to patients with LH and RH subcortical strokes. These prosodic deficits are largely underrecognized [2,29] and may present significant challenges for stroke patients as they navigate returning to their prior home and workplace environments.
We did not include the 8 patients with LH cortical strokes who were unable to sing in our analysis of singing rating; however, if we had included them within the abnormal group, correct classification of location would have improved to 75% (27/36) and the association between singing rating and lesion location would have become significant (P = 0.003), indicating that the screen is useful in identifying individuals in need of additional assessment and rehabilitation. Our data support that objective measures of abnormal singing (F0mean, F0range, and F0CV) are associated with cortical lesions and may underlie raters' ability to correctly classify singing as abnormal. They also show that individuals with RH cortical lesions have higher rates of abnormal productive and receptive prosody, making them an important group to target. Interestingly, these measures were not significantly associated with a subjective abnormal singing rating determined by an impartial observer. The lack of association may have been due to the small sample size, as F0mean, F0range, and F0CV did tend to be lower in recordings classified as "abnormal," or simply due to the subjectivity of classification. The differences in the acoustic analysis may be relatively small; however, they highlight that there are objective differences that may be audible to raters when evaluating singing. Confounders such as age and sex are likely also important contributors, but lesion location remained significant even in multivariable analysis. It is important to point out that, to reflect clinical utility, raters were not extensively trained on these parameters; additional training may also have made a difference. There was moderate agreement between reviewers, with a kappa statistic of 0.62. However, rather than provide additional training, we felt it important to demonstrate the utility of the screen in a group of relatively inexperienced individuals. Importantly, categorizing singing as normal or abnormal, though imperfect, was able to correctly identify lesion location in the majority of patients without the need for more advanced analysis. This suggests that a simple singing screen may be useful in the clinical setting as a quick and easy initial evaluation of speech melody, though it would not necessarily take the place of more objective measures.

The aim of this study was to test the ability of an impartial observer to correctly and quickly assess a stroke patient's variation in pitch when singing. A simple singing screen could be used by emergency physicians, neurologists, and speech therapists alike. As an example, during the neurological examination of a patient presenting to the rehabilitation service or clinic with symptoms localizing to the RH, requesting that the patient sing a well-known song and noting the absence of variations in pitch may aid in identifying cortical involvement and help to identify patients with damage to prosodic speech areas who would benefit from a more detailed evaluation, targeted therapy, and proper education of their therapy teams, nursing staff, and families regarding the importance of individualized communication strategies. Alternatively, in the more acute setting, identifying RH cortical involvement in the Emergency Department may aid in the decision to order more advanced neuroimaging such as hyperacute MRI or CT perfusion studies to evaluate for large vessel occlusion, as larger strokes may benefit from further treatment interventions including mechanical thrombectomy.
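The reclassification figures quoted above follow from simple arithmetic on the reported counts: 68% of the 28 participants able to sing corresponds to 19 correct classifications, and counting the 8 aphasic patients who could not sing as "abnormal" (and therefore cortical) gives 27 of 36, or 75%. A quick sketch of that calculation, using only counts reported in this paper (the value 19 is derived from the reported 68%):

# Counts reported above: 28 participants could sing, 68% correctly classified;
# 8 LH cortical patients could not sing at all because of aphasia.
singers = 28
correct_among_singers = round(0.68 * singers)   # 19 participants
unable_to_sing = 8                              # counted as "abnormal" (cortical)

print(correct_among_singers / singers)                                         # ~0.68
print((correct_among_singers + unable_to_sing) / (singers + unable_to_sing))   # 0.75 (27/36)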
Further studies are needed to determine the efficacy of the singing screen in an Emergency Room setting. We are not suggesting that this screen should take the place of more advanced neuroimaging when available, or that it is as accurate as MRI; however, access to these studies is not always possible. We chose to evaluate cortical vs noncortical strokes within the vascular distribution of the MCA rather than precise lesion locations. While this may be seen as a limiting factor, raising the possibility that heterogeneity contributed to inconsistent performance of the screen, we felt that it was representative of common stroke patterns and that the practicality of the clinical application was most important, given that the precise neural circuitry has been previously described and involves significant portions of the MCA territory.

The primary limitation of this study was its relatively small sample size, which may be reflected in the lack of significance for some results. For example, while no significant association was found between infarct location and F0range or F0CV, cortical stroke patients did have a lower F0range and F0CV than subcortical stroke patients. In addition, each patient's baseline ability to sing prior to their stroke was unknown, making it challenging to conclusively weigh their singing rating as normal or abnormal. At baseline, men typically demonstrate a lower fundamental frequency than women, [38] and differences have been reported with age [38-40] and depression. [38-42] However, none of these factors were significant when evaluated separately, and lesion location remained independently associated with fundamental frequency even when adjusting for sex in multivariable regression. Furthermore, despite these potential confounders, when singing was evaluated by 2 independent reviewers, identification of singing patterns that seemed "abnormal" correctly identified lesion location and potential prosodic impairment in two-thirds of cases. While imperfect, we believe that this simple screen could prove useful in identifying patients who may demonstrate greater deficits on more extensive tests of prosody. Similarly, each person's baseline prosodic function was unknown; however, we would argue that being able to identify individuals with poor prosody (baseline or otherwise) who may face communication difficulties during rehabilitation would be beneficial, and it is unlikely that all of the cortical strokes just happened to have more impairment. Finally, the performance of patients with LH cortical strokes may have been significantly influenced by their aphasia. However, the number of individuals with aphasia able to participate was small, as the majority were severely affected and unable to participate in any of the tasks. When evaluating singing, only 2 patients with LH cortical lesions were able to vocalize some words along with humming, but this allowed for formal acoustic analysis, and credit was given for production of the proper melody. Only 1 was subsequently able to participate in additional prosody testing. The other participants lacked any evidence of aphasia. Despite these limitations, we believe that our study demonstrates the importance of considering prosodic deficits in patients with acute ischemic stroke affecting cortical areas.
Impairments in prosody after cortical stroke make it difficult for speakers to communicate their emotions and intentions, which can then disrupt their daily interactions and interpersonal relationships. To best support cortical stroke survivors during the recovery process, it is important that providers start to acknowledge and address these challenges in consultations with patients and their families.

References

1. Toward the simulation of emotion in synthetic speech: a review of the literature on human vocal emotion.
2. Right hemisphere regions critical for expression of emotion through prosody.
3. How prosody influences sentence comprehension.
4. Factors in the recognition of vocally expressed emotions: a comparison of four languages.
5. Dysprosody or altered "melody of language."
6. The interface between language and attention: prosodic focus marking recruits a general attention network in spoken language comprehension.
7. Effects of prosody on the cognitive and neural resources supporting sentence comprehension: a behavioral and lesion-symptom mapping study.
8. The neural correlates of emotional prosody comprehension: disentangling simple from complex emotion.
9. Dominant language functions of the right hemisphere?: Prosody and emotional gesturing.
10. Speech prosodies of different emotional categories activate different brain regions in adult cortex: an fNIRS study.
11. Perception of affective and linguistic prosody: an ALE meta-analysis of neuroimaging studies.
12. Lateralization of affective prosody in brain and the callosal integration of hemispheric language functions.
13. Neurology of affective prosody and its functional-anatomic organization in right hemisphere.
14. Prosodic stress: acoustic, aphasic, aprosodic and neuroanatomic interactions.
15. Recognition of emotional prosody and verbal components of spoken language: an fMRI study.
16. Acute ischemic lesions associated with impairments in expression and recognition of affective prosody.
17. Comprehension of affective and nonaffective prosody.
18. Functions of the Right Cerebral Hemisphere.
19. Words in melody: an H2(15)O PET study of brain activation during singing and speaking.
20. Shared and distinct neural correlates of singing and speaking.
21. A musical approach to speech melody.
22. Speech melody as articulatorily implemented communicative functions.
23. Cognitive and neural mechanisms underlying the mnemonic effect of songs after stroke.
24. Neural basis of acquired amusia and its recovery after stroke.
25. Carestream Vue PACS.
26. BDAE-3: Boston Diagnostic Aphasia Examination.
27. Is there an advantage for recognizing multi-modal emotional stimuli?
28. Right hemisphere ventral stream for emotional prosody identification: evidence from acute stroke.
29. Selective impairments in components of affective prosody in neurologically impaired individuals.
30. Praat: doing phonetics by computer.
31. Seven and up: individual differences in male voice fundamental frequency emerge before puberty and remain stable throughout adulthood.
32. Stata version 14.
33. Cortical thickness in congenital amusia: when less is better than more.
34. Tone deafness: a new disconnection syndrome?
35. Congenital amusia in speakers of a tone language: association with lexical tone agnosia.
36. Speech intonation perception deficits in musical tone deafness (congenital amusia).
37. Congenital amusia (or tone-deafness) interferes with pitch processing in tone languages.
38. Physiologic and acoustic differences between male and female voices.
39. Speaking fundamental frequency and chronologic age in males.
40. Changes in speaking fundamental frequency characteristics with aging.
41. Verbal indicators of depression.
42. Depression diagnoses and fundamental frequency-based acoustic cues in maternal infant-directed speech.

This work was supported in part through the generosity of the Iorizzo family. The authors have no conflicts of interest to disclose. The datasets generated during and/or analyzed during the present study are available from the corresponding author on reasonable request. The authors thank Shannon Sheppard, Argye Hillis, and Marc Pell for the stimuli used in the experimental battery. The authors also thank Sheena Khan, Dania Mallick, and Alexandria Soto for helping to screen and test patients.