Slomp, David; East, Martin. Volume 45 Editorial. Assessing Writing, 2020-07-04. DOI: 10.1016/j.asw.2020.100472

1. How have writing assessment practices and policies of the past and present either contributed to or challenged the systematic entrenchment of inequity?
2. How can research on writing assessment draw attention to the role of writing assessment in confronting inequity and promoting opportunity for all?

In their introduction to Race and Writing Assessment, Inoue and Poe (2012) place such questions at the forefront of a research agenda for our field, stating, "[o]ur job is to understand how unequal outcomes may reflect larger socially organized forces and suggest ways that we could account for the effects of those racial formations in our processes of validating assessments. It's our ethical responsibility" (p. 5). Inoue and Poe are not necessarily calling for a separate research agenda focused on race and writing assessment; rather, they challenge us to integrate a concern for the consequences of assessment (with explicit attention to racism and inequity) into the diverse programs of research in which we are already engaged.

Once again, we are impressed with the breadth of scholarship included in this volume. The studies published in this volume report on research conducted around the globe, in L1 and L2 contexts, and from primary to tertiary levels of education. These studies can be broadly grouped under two themes: the use of assessment to support teaching and learning, and challenges related to automated and human scoring. Five papers explore how the assessment of writing can support teaching and learning.
In L2 contexts, Written Corrective Feedback (WCF) is a popular method for helping students develop the capacity to write error-free prose. Research on various aspects of WCF, however, has yielded both positive and negative evidence of its effectiveness. To help us gain a better understanding of the efficacy of WCF, Mao and Lee reviewed 59 articles published between 1979 and 2018 that examined feedback scope in WCF. By examining quantitative, qualitative, and mixed-methods research, their study expands on previous reviews of research on WCF. In addition to elucidating findings regarding the benefits of WCF, their study points to a number of gaps in the current body of research. These include the need for: (a) clearer definitions of the core constructs associated with various modes of WCF; (b) more studies that look at both comprehensive and focused WCF; (c) greater attention to the individual and contextual factors that shape both the provision of and response to WCF; and (d) the diversification of research methods used to explore the effectiveness of WCF. Echoing themes from our review of 25 years of research published in Assessing Writing (Slomp, 2019), Mao and Lee also point to the need for greater attention to the ecological validity of research on WCF, including expanding the populations and contexts in which this research is conducted (currently concentrated at the tertiary level), focusing on the impact of these approaches on different populations in different contexts, and moving beyond a cognitive perspective to also examine these issues from a sociocultural perspective. In a study that questions the focus on error in writing assessment and formative feedback, Sandiford and Macken-Horarik examine methods for assessing development in narrative writing. Working with 27 primary and secondary level teachers in Australia, they collected 373 samples of student narrative writing completed in response to timed writing prompts.
Drawing on the lens of systemic functional grammatics, they orient assessment away from error and toward an appreciation of the "intimations of what is to come" exemplified by the choices students make and the struggles they work through. The paper presents several samples of student writing (always a welcome element of papers published in the journal) to persuasively demonstrate the insights into the development of writing ability that this lens provides. In a similar vein, and drawing on a sociocultural view of writing, Qin and Uccelli explored the flexibility with which adolescent and adult writers appropriately employ linguistic resources within academic and colloquial contexts. Participants were EFL learners with Chinese, French, and Spanish language backgrounds. Their study found complex associations between L1 background, linguistic complexity, register flexibility, and English proficiency. With respect to consequences, their study points to the importance of developing metalinguistic awareness in student writers, particularly with respect to the language choices they make across registers and genres to fulfil specific communicative purposes. Ghaffar, Khairallah, and Salloum report on a study conducted with middle school students in Lebanon that examined the impact of co-constructing and using rubrics for formative assessment on students' attitudes about writing and on their development as writers. They found that this approach enhanced student awareness of criteria, improved attitudes toward writing, and deepened engagement and student-directed learning. They report that this formative assessment focus led both teacher and students to reconsider "the meaning of writing": both why they write and how they write. This study highlights the importance of teacher voice and collaboration in assessment design.
Gomes and Ma report on the use of student evaluations of teaching to gain insights into the functioning of writing programs. They suggest that orienting these evaluations around the construct of helpfulness (the belief that a course has had positive outcomes for the student) could help to resolve historic inequities and biases perpetuated by these forms of assessment, providing students, instructors, and program administrators with a shared language about student success in local contexts and leading to more actionable data on student experience. Collectively, these five studies advance consideration of the consequential aspect of writing assessment by investigating and demonstrating uses of assessment that support instruction and the development of writing ability.

The final three papers in this volume focus on issues related to scoring, highlighting issues of construct representation in scoring procedures. Sevgi-Sole and Ünaldi examined how raters negotiate to resolve score discrepancies in both authentic and research scoring sessions by analyzing patterns in the verbal exchanges between raters as they negotiated discrepancies in papers they had scored. They found that negotiations within authentic scoring contexts were dramatically shorter than those within research contexts. Time pressures to complete the scoring process in authentic settings were found to have affected the duration, coherence, and completeness of raters' argumentation. These findings highlight the need for more research into the role of contextual factors, including cultural values and modes of argumentation, in shaping rater negotiations. Their finding that fewer than 2% of argumentative moves made during negotiation sessions referred to the rating scale also raises construct underrepresentation as an issue in need of further exploration.
Canz, Hoffmann, and Kania examine presentation mode effects on highly trained raters in the context of a large-scale writing assessment program for upper secondary students in Germany. Analyzing scores given to 430 essays that were assessed both in their original handwritten mode and in a transcribed, computer-typed mode of presentation, they found that computer-typed essays were scored higher than handwritten essays. They also found that this effect was stronger for informative genres than for narrative genres, and that it became stronger as essay quality decreased. Finally, Kyle examines approaches to expanding the construct coverage of automated scoring systems for integrated writing tasks. His study of 480 responses to a TOEFL iBT integrated writing task examined the impact of source use (aural versus written source material) on test-taker performance, demonstrating that test-takers' use of lecture-based source material resulted in higher integrated writing scores, while reliance on reading source material resulted in lower scores. With respect to automated scoring, the study found that e-rater, used to score the TOEFL iBT, does not appear to cover features associated with source text use. Features identified in the study as associated with source use point to overlap indices (word, n-gram, synonym, semantic) as a means of expanding the construct coverage of automated scoring systems for integrated writing tasks. Collectively, these studies expand our understanding of the complexities involved in scoring writing samples collected under testing conditions.

References

A theory of ethics for writing assessment.
Making our invisible racial agendas visible: Race talk in Assessing Writing. Assessing Writing.
Assessment, equity, and opportunity to learn.
Evidence of fairness: Twenty-five years of research in Assessing Writing. Assessing Writing.
Complexity, consequence, and frames: A quarter century of research in Assessing Writing. Assessing Writing.
An integrated design and appraisal framework for ethical writing assessment.
(Re)visiting twenty-five years of writing assessment. Assessing Writing.

The range of topics, methods, and contexts explored in Assessing Writing requires a diverse set of expertise from editors, editorial board members, and our reviewers. We appreciate that during this period of instability brought on by the global pandemic, our reviewers and editorial board members continue to offer their expertise in support of the journal. We also appreciate the patience of our authors, as the process of reviewing manuscripts has at times been lengthened by the challenges each of us is facing at this time. Submissions to the journal continue to climb, placing significant demands on our editorial board members and reviewers. We thank you for your commitment to the journal and to promoting excellence in the research we publish. Through your support, and through the work of our authors, the stature of Assessing Writing continues to grow. Recently released CiteScore (3.6) and Impact Factor (2.404) rankings place Assessing Writing in the top 5% of linguistics and literacy journals and in the top 11% of education journals.

We thank each of our new board members, alongside our existing board members, for their willingness to serve the writing assessment community in this capacity.

Wishing our readers, contributors, reviewers, and editorial board members health, wellness, and peace during these unprecedented times.