Assessing the Scope and Feasibility of First-Year Students' Research Paper Topics

Erin Rinto, Melissa Bowles-Terry, and Ariel J. Santos

Erin Rinto is a Teaching and Learning Librarian, Melissa Bowles-Terry is Head, Educational Initiatives, and Ariel J. Santos is a Graduate Assistant in the Department of English, all at the University of Nevada, Las Vegas; e-mail: erin.rinto@unlv.edu, melissa.bowles-terry@unlv.edu, santos46@unlv.nevada.edu. © 2016 Erin Rinto, Melissa Bowles-Terry, and Ariel J. Santos, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/3.0/) CC BY-NC.

This study applied a content analysis methodology in two ways to evaluate first-year students' research topics: a rubric to examine proposed topics in terms of scope, development, and the "researchability" of the topic, as well as textual analysis, using ATLAS.ti, to provide an overview of the types of subjects students select for a persuasive research essay. Results indicated that students struggle with defining an appropriate and feasible focus for their topics and that they often select topics related to education, health, and the environment. These findings were used to implement a new information literacy instruction model that better supports student topic development.

Librarians who work with students throughout the research process are well aware that the first step—and often the most crucial one—in creating a research project that is both successful and fulfilling is the selection and development of a research topic. Often, however, students choose a topic of inquiry long before they ever meet with a librarian in an instruction session or at the reference desk. Situations where students have settled on a topic that is vague or underdeveloped or does not meet the constraints of the assignment can lead to frustration for both librarians and students as a "triage" interaction takes place—librarians do the best they can to salvage a poorly defined research topic, or students give up an idea they initially found interesting so they can complete their assignment.

The challenges that undergraduate students face with developing and focusing a research topic are highlighted in the 2010 Project Information Literacy report "Truth Be Told: How College Students Evaluate and Use Information in the Digital Age," where 84 percent of students surveyed named the act of getting started as the most difficult part of the research process.1 How can librarians and instructors better support topic selection for undergraduates—especially first-year students, who have never conducted college-level research before?

Though the challenges students face when selecting an interesting and manageable research topic are well known to librarians and instructors, little information literacy research has been conducted that systematically analyzes student research topics. Two librarians and an English composition instructor at the University of Nevada, Las Vegas (UNLV) addressed this gap in the literature by conducting an in-depth analysis of first-year student research topics. The research topics examined were generated by students enrolled in English 102 (ENG 102), a required research-based writing course at UNLV.
The culminating assignment in ENG 102 is an 8- to 10-page researched argument essay, in which students are expected to develop a claim and support it with scholarly sources; students are generally free to choose any topic. Managing the selection of a topic that can be argued, sustained over 8–10 pages, and supported by scholarly evidence is understandably a daunting task for many novice researchers. Since the library already has a robust relationship with ENG 102, and a high percentage of our one-shot instruction sessions are dedicated to this course, the librarians who collaborate with the ENG 102 program wanted to better understand how well developed ENG 102 research topics are, uncover any trends or patterns in topic selection, and use this information to make improvements to the ENG 102 library instruction program.

Research Questions

We used two content analysis approaches to evaluate research topics: a rubric to examine proposed topics in terms of scope, development, and "researchability" of the topic, as well as textual analysis, using ATLAS.ti, to provide an overview of the types of subjects students select for a persuasive research essay assignment. The selected approaches were intended to answer two research questions: How can librarians and course instructors collaborate to help students select more feasible research topics? And how can information literacy instruction sessions be informed by the most common topical themes that first-year students select for their research projects? The purpose of this project is to offer a model of information literacy instruction that alleviates the pressures of research topic selection for students, course instructors, and librarians.

Literature Review

The literature on information literacy instruction has established that topic development is a critical phase in the research process, but that many students find it to be the most difficult step. While previous research thoroughly examines the challenges students encounter with topic selection and development, much of the evidence is reported through indirect assessment, such as surveys, discussions, and focus groups, as opposed to direct measures of student work. The present study fills a gap in this literature by offering an examination of actual first-year student research topics in order to study their content and scope, as well as to provide a model for information literacy instruction that, based on best practices from the literature, better supports student topic development.

In an early study on the student research process, Carol C. Kuhlthau examined the information-seeking behaviors of high school students, but she found that her model was also applicable to students at the undergraduate level. She identified six distinct aspects of the research process, the second of which was topic selection. After choosing a topic, students engage in step 3 (exploring information), which involves preliminary searching and the identification of subtopics, followed by step 4, forming a focus—focusing the selected topic based on findings from the initial sources and searching for specific information on a particular subtopic.2 Kuhlthau's model was reexamined by Wendy Holliday and Qin Li in 2004 to see if these research phases were still relevant to students in the 21st century. They found that students considered the exploration phase to be "doing research"—after selecting a topic and consulting initial sources, many millennials stopped looking for additional information.
Most students also skipped formulating a focus—the first sources they encountered determined the direction of their final papers. Students in this study indicated frustration with the research process—a notion that was echoed throughout the literature.3 Alison J. Head conducted a study of how students in humanities and social science fields conceptualize the research process and found that students struggled with "narrowing down a topic and keeping it interesting," and the "inevitable information overload" that made focusing a research topic difficult.4 The 2009 Project Information Literacy Progress Report also identified problems students face in selecting a research topic, especially with developing context—both the language context (terminology and discourse related to the topic) and big-picture context (multiple points of view around a topic).5 It is apparent that students find the prospect of getting started with research—selecting, focusing, and developing a topic—a stressful task. But why is topic development so difficult, and how can the problems students face be mediated through information literacy instruction?

It is possible that one contributing factor to the topic development dilemma is that faculty expectations of student research may not align with student educational experiences, especially in the case of early undergraduate students. This "novice vs. expert" paradox has implications for the topic development process. A 2004 study by Nancy Sommers and Laura Saltz identified the "novice-as-expert" paradox as a situation in which freshmen are asked "to become master builders while they are still apprentices"—that is, students are struggling to become proficient with content and process at the same time.6 This paradox is especially noticeable in the tendency of first-year students to approach their topics as if they are writing a descriptive report, instead of a persuasive argument essay. Saltz and Sommers claim that, for novice researchers, ideas need to be "ingested" before they can be "questioned"; to sustain an argument, students need to have a much greater sense of the issues and points of view around their topics.7 Two publications stemming from the Citation Project also found that first- and second-year students often do not have the subject mastery or skill level to engage with sources beyond the surface level; Sandra Jamieson noted that, while instructors may assign a research paper to foster critical thinking and reflection, students may see the end goal as the paper itself—and they therefore only read and engage with their sources in a very shallow way.8 In a study of source integration in sophomore-level research papers, Rebecca Moore Howard, Tanya K. Rodrigue, and Tricia C. Serviss found that none of the papers they examined included summaries of sources (which requires greater understanding of the source in question), while nearly all of the papers included patchwriting—that is, copying the source's text with basic modifications, such as deleting words or swapping in synonyms—which signifies incomplete or questionable comprehension.9 The authors posit that students at this level may not have enough knowledge around their topics to truly engage with or reflect upon their sources and make the move toward true synthesis.
If novice researchers are still struggling with comprehension and basic knowledge around a topic, one can imagine how difficult it would be to truly develop a sense of a topic and formulate an argument around it. While this is well understood by information literacy practitioners, the faculty and instructors who most closely work with these students may not always build into their assignments the necessary time and space for students to fully grapple with both a new topic and a new methodology.

Though narrowing or limiting a topic is the most commonly cited challenge students face, the process of actually getting started—of selecting an initial topic—is also difficult for many novice researchers. Head observed that students in focus groups discussed "pressures to be original and creative" when selecting a research topic,10 while Kuhlthau's model encourages students to select topics based on interest, assignment requirements, time allotted, and accessibility of available information.11 The importance of interest and ease was also evident in a study by Kacy Lundstrom and Flora Shrode, in which the authors conducted focus groups with undergraduates to investigate the factors that contribute to selecting a topic for an argument-based research paper.12 They found that the major criteria students used to select a research topic were perceived ease and personal interest, but also what students thought would please their instructors and best fulfill assignment requirements. Also of note, Lundstrom and Shrode found that, at the topic development phase, students relied mostly on their course instructors for help, not librarians. This, coupled with the dangers of the "novice vs. expert" paradox described above, means that it is all the more important for instruction librarians to support and collaborate with course instructors as they guide students through the topic selection phase.

While several studies have been devoted to identifying the problems related to student topic development, others have posited solutions to the issue. Sonia Bodi suggests asking students prompting questions about their topics.13 Some of the guiding questions provided include contextual elements—geographic and temporal prompts that help students find focus within their broader area of interest. Barbara Fister complicates this notion, however, by pointing out that the methods librarians tend to rely on to narrow a topic—guiding students to place finite restrictions on their topics such as time period, location, or population—do not constitute an authentic process of exploration and inquiry.14 Fister instead advocates for strategies that help students "see the gaps, the places where questions lurk."15 This is a time-consuming, yet important, process—a sentiment echoed by Sommers, who points out that students will not be able to write argumentatively about a topic until they have a baseline understanding of it.16 One process put forth for helping students through this topic exploration is Jeanne R. Davidson and Carole A. Crateau's "conversation model," which uses the concept of everyday conversation as a metaphor for rhetorical writing.17 Three phases of the scholarly conversation were identified: eavesdropping (becoming familiar with a discipline), entering (finding a focus), and engaging (forming ideas and persuading others).
This model was implemented by Anne-Marie Deitering and Sara Jameson in a first-year writing course to teach students critical thinking, research, and writing skills.18 They found that, when students were given a model for the research process, they were able to better focus on learning the content of their topics and reflecting on their own ideas. The conversation model seems to have potent implications for information literacy instruction, and further practical applications of this theory are needed.

Methods

Content Analysis

The methodology selected for this study was content analysis, which, according to Geoff Payne and Judy Payne, "seeks to demonstrate the meaning of written or visual sources…by systematically allocating their content to pre-determined, detailed categories, and then both quantifying them and interpreting the outcomes."19 Both of the tools used in this study (an evaluative rubric and qualitative analysis software) enable us to analyze the text of student research topics to establish meaning: in the case of the rubric, we can determine student research skill development; and, in the case of the qualitative software ATLAS.ti, we can discover thematic categories and subjects that students choose to write about.

Sample

In 2013, the UNLV Libraries launched an online Topic Narrowing Tutorial, an interactive digital worksheet in which students input their general idea for a topic and then focus it by answering "who," "what," "where," and "when" prompts. The tutorial ends with students entering their new research question or claim statement. The UNLV Libraries' instruction department is copied on all responses, which go directly to the department e-mail. While the tutorial is open to all students, it was created as a support tool for ENG 102, and completion of the tutorial is a course requirement. Over the course of the 2013–2014 academic year, 1,478 ENG 102 students completed the tutorial. The responses to the tutorial created the data set used in both the rubric analysis and the ATLAS.ti analysis of research topics. All identifying information was removed from the sample responses, and responses were given a unique identification number. Each response consisted of the following: the student's general topic; "who" (what population they are going to study); "what" (what aspect of their broader topic they are focusing on); "when" (what time period); "where" (international, U.S., state, etc.); and a final research question or topic statement. The prepared samples were placed in a spreadsheet to make them easily accessible for analysis (see figure 1). For the rubric analysis, a random 10 percent sample (n = 145) was taken from the total pool of tutorial responses; for the ATLAS.ti analysis, all 1,478 responses were used.
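To make the data preparation concrete, the listing below shows one way the anonymized responses might be loaded and sampled. It is a minimal sketch in Python, not the study's actual workflow: the file name, column names, and random seed are hypothetical, and the exact sample size will depend on the pool being sampled.

import csv
import random

# Hypothetical column headers expected in the CSV's first row,
# mirroring the tutorial prompts described above.
FIELDS = ["id", "topic", "who", "what", "when", "where", "question"]

def load_responses(path):
    """Read anonymized tutorial responses from a spreadsheet exported as CSV."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))  # assumes FIELDS appear as the header row

def random_sample(responses, fraction=0.10, seed=42):
    """Draw a reproducible simple random sample (about 10% here) for rubric scoring."""
    rng = random.Random(seed)
    return rng.sample(responses, round(len(responses) * fraction))

responses = load_responses("tutorial_responses.csv")  # e.g., 1,478 rows
rubric_sample = random_sample(responses)              # roughly a tenth of the pool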
Rubric Approach: Analysis of Research Skills

Instrument

Analytic rubrics have long been used as strategic tools that identify varying degrees of success and competency among student work. John and Patti Hafner assert that a rubric is generally described as an assessment tool that evaluates differing levels of performance within a particular assignment.20 According to Judith Arter and Jay McTighe, a quality analytic rubric should describe key components of a task and enable each component to be judged separately.21 As Megan Oakleaf has noted, these tools can benefit both students and instructors; students are able to use rubrics to understand an assignment's parameters as well as their instructor's expectations, and the process of creating a rubric allows librarians and instructors the opportunity to discuss shared learning outcomes.22

Before assessing student work, we first had to establish what the shared learning outcomes would look like for ENG 102. Since there were two research librarians as well as a composition instructor involved in the development of the rubric, a variety of expectations were addressed; outcomes related to information literacy as well as argumentative writing were inserted into the rubric to reflect the goals of both the library and the composition program. Since the working relationship between the two departments is imperative for student success, the collaboration in developing the rubric had to model the desired collaboration between the librarians and the instructors.

One key component that both parties were concerned with was whether or not a topic could reasonably be researched and argued within a set amount of time (less than half a semester), and for the required length of the assignment (8–10 pages). Further, we wanted to examine whether or not students had any familiarity with the "conversation" already going on about their topic. Finally, one of the more pragmatic points was whether or not the Topic Narrowing Tutorial left students with a topic that they could argue in their papers. After much deliberation, we decided on the categories of "researchability," "appropriate breadth," "language context," and "arguable topic."

FIGURE 1
Example of Topic Narrowing Tutorial Response Prepared for Analysis
  Topic: Narcotics
  Who: Doctors, patients
  What Aspect: New laws, side effects
  When: Past 5 years
  Where: United States
  Research Question: How have the laws on narcotic prescriptions changed over the last 5 years?

Based on the recommendations from the literature, the first task in developing the rubric was to decide what would constitute the highest and lowest levels of competency for each category and, from there, further delineate levels.23 The first draft of the rubric contained five levels of competency listed for every category. However, after testing the rubric against student research topics, it was quickly decided that there was not enough delineation—the middle tiers were not differentiated enough to make the scores useful. As Barbara M. Moskal points out, "It is better to have a few meaningful score categories [than] to have many score categories that are difficult or impossible to distinguish."24 Arter and McTighe echo this sentiment and recommend that a rubric contain a range of 3–6 points to properly assess student achievement in a particular class.25 Based on this realization, we removed "below average" and "above average" from the rubric. The final tiers were renamed "beginning," "developing," and "exemplary" (see figure 2).
The first concept we wanted to address was what a "researchable" topic looks like for a freshman-level writing course.26 Students, as novice researchers, must be able to engage with the topics they are choosing. The language within this portion of the rubric presented some challenges due to the fact that "researchability" is a somewhat abstract concept. We decided to define "researchability" as a topic that is able to be challenged, examined, and analyzed using a variety of evidence from both scholarly and popular sources. Further, it was also important that a successful topic be able to be explored in a feasible, and finite, amount of time. In general, the ENG 102 students have less than half the semester to research and write the final argumentative research paper. Therefore, it is important that the types of topics that these students choose have a variety of research materials readily available through the library.

Based on the findings of previous researchers, it was clear that establishing appropriate breadth for a topic is a part of the research process with which students often struggle; they commonly will choose topics that are either too broad or too narrow for an 8- to 10-page research paper.27 Using the tutorial prompts as a template, the rubric examined the detail with which the student identified and strategically narrowed who is affected by their topic, what specific aspect of the topic they wanted to investigate, and finally when and where their topic takes place, temporally and geographically. The degree to which students displayed both detail and strategy in their answers dictated where they landed on the three-tier scale. A "developing" topic in this category would lack specificity in two areas, while a "beginning" topic would require more extensive revision.

The third category, "language context," was inspired by the findings of Head and Eisenberg.28 We were concerned not only with whether or not the students were aware of the vocabulary and discourse around their topics, but also with the extent to which they were able to use that knowledge to formulate useful search terms that would help them in their research process. For this study, language context was established if the student was able to integrate appropriate disciplinary or topic-specific vocabulary into their tutorial response. A "beginning" topic in this category would use no specific vocabulary, a "developing" topic would use only one topic-specific vocabulary word, and an "exemplary" topic would employ a number of them.

The final category for evaluation was to judge whether or not the students' final topic statement was arguable. The purpose of this category was to differentiate between an argument and a report. This was a challenging category to define, since students in ENG 102 may complete the tutorial at different points in the research process and get different directions from their instructors. However, we agreed that in general an "exemplary" topic would include a thesis statement or topic statement in this section, a "developing" topic would use an open-ended research question, and a "beginning" topic would use a closed-ended question. We wanted to differentiate between topics that prompted investigation and those that simply answered rhetorical questions.
FIGURE 2
Rubric for Analyzing Student Research Topics

Researchability
  Exemplary (3 points): Final topic selection is able to be challenged, examined, or analyzed by a novice researcher with a variety of readily available resources (both scholarly and popular) in a feasible amount of time.
  Developing (2 points): Final topic selection is able to be challenged, examined, or analyzed by a novice researcher, but there are potential issues around feasibility and/or access of information resources. There may be too much or too little information available on this topic, only one kind of source that addresses this topic (i.e., only scholarly or only popular), or other issues with time and access.
  Beginning (1 point): Final topic selection is not researchable because the topic cannot be challenged, examined, or analyzed by resources readily available to a novice researcher in a feasible amount of time.

Appropriate Breadth (8–10 pages)
  Exemplary (3 points): Topic is manageable for an 8–10 page research paper. The student defines who is affected, what aspect of the issue they will deal with, what time frame they will be researching, and where their issue is present.
  Developing (2 points): Topic is too broad or narrow for an 8–10 page paper, but the student has defined some areas of their subject. The topic is somewhat manageable for this assignment but requires further specificity/development in two areas (who, what, when, where).
  Beginning (1 point): Topic is so broad or narrow that it is not manageable for an 8–10 page research paper; the student does not specifically define various aspects of their subject (who, what, when, where). Extensive revision is required.

Topic-Related Vocabulary and Language Context
  Exemplary (3 points): Topic-related vocabulary is used to provide language context for the topic. Useful search terms can be derived from the topic statement.
  Developing (2 points): Topic-related vocabulary is used to provide some language context. However, it is either not well-defined or not helpful for developing search terms from the topic statement.
  Beginning (1 point): Topic-related vocabulary is not used and, therefore, language context is not established. No search terms can be derived from the topic statement.

End Result as Arguable Topic
  Exemplary (3 points): Final topic statement is thesis-driven and contains an argument. Student can proceed to the research process but may have to reflect back on the scope of the assignment.
  Developing (2 points): Final topic statement is general. Asks a "how" or "why" question that could lead to analysis or the development of an argument. Revision and further definition required before proceeding to the research process.
  Beginning (1 point): Final topic statement is too general and/or not argument-driven. Asks a yes/no or "factual" question that does not facilitate analysis or argument. As it stands, the resulting paper would be solely information-based.

Procedures

Once the categories and competency levels were established, the rubric needed to be normed for multiple scorers; we followed the procedure recommended by Oakleaf.29 Fifty of the responses that were not part of the 10 percent sample were randomly selected and used in the rubric norming process. Along with an additional instruction librarian, we practiced using the rubric in a two-hour-long session. Using the practice responses, the group was able to discuss each category and come to an understanding about how the rubric ought to be applied.
After norming the rubric, the group was given additional samples from the tutorial responses to score on their own to see if interrater reliability could be established. Calculating interrater reliability can determine if raters are applying a rubric in the same way, meaning the ratings can statistically be considered equivalent to one another.30 Because this project had four scorers, Krippendorff's alpha was used to calculate interrater reliability, since it corrects for chance and can be used with more than two observers; we calculated alpha using the SPSS macro extension developed by Andrew F. Hayes and Klaus Krippendorff.31 Calculations revealed insufficient interrater reliability, with the minimum alpha of 0.8 not being reached on any rubric category.32

To address the interrater reliability issue, the authors used the "tertium quid" method, in which every response is scored by two different raters, with a third scorer examining and resolving any dissimilar ratings.33 For this study, the 10 percent sample of 145 responses was divided in half—samples 1–73 were given to two scorers and 74–145 to the other two; any differences in scores within one pair were reviewed and resolved by a rater from the other pair.

After the study was completed, the authors met to debrief about the process and the results. It was concluded that the initial lack of interrater reliability was due not to the rubric itself but rather to the participants' lack of experience with its use. While best practices recommend that two or three norming sessions occur for proper participant calibration, due to time constraints it was only possible to hold one norming session.34 If this study were to be replicated, we would plan to have at least one additional session for the participants to further solidify their agreement on rubric language and meaning.
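For readers who want to replicate the reliability check outside SPSS, the listing below computes Krippendorff's alpha for four raters and sketches a "tertium quid" resolution. This is a minimal illustration, not the study's code: it assumes the third-party Python package krippendorff (the study itself used Hayes and Krippendorff's SPSS macro), and the ratings and adjudication callback are hypothetical.

import numpy as np
import krippendorff  # third-party package, assumed installed: pip install krippendorff

# Hypothetical ordinal rubric scores (1-3) for one criterion.
# Rows are raters, columns are responses; np.nan marks responses a rater did not score.
ratings = np.array([
    [2.0, 3.0, 1.0, 2.0, np.nan, 2.0],
    [2.0, 3.0, 2.0, 2.0, 3.0, np.nan],
    [np.nan, 3.0, 1.0, 2.0, 3.0, 2.0],
    [2.0, np.nan, 1.0, 1.0, 3.0, 2.0],
])

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.3f}")  # 0.8 is the conventional minimum

def tertium_quid(score_a, score_b, adjudicate):
    """Keep matching paired scores; otherwise let a third rater decide."""
    return score_a if score_a == score_b else adjudicate(score_a, score_b)

# e.g., resolved = tertium_quid(2, 3, adjudicate=lambda a, b: 2)  # third rater's call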
Textual Analysis Approach: Analysis of Topic Themes

Instrument

We used qualitative data analysis software (ATLAS.ti) to analyze the textual data generated by the compilation of student responses to the tutorial. The software allowed for the analysis of all 1,478 topics rather than taking a sample. In a 2013 article, B. Jane Scales reviewed ATLAS.ti for use in the qualitative analysis of student assignments. She found that using ATLAS.ti on a content analysis of student work made authentic, student-centered assessment more feasible. She recommends that instruction librarians who are looking for ways to assess student learning explore the possibilities afforded by a qualitative software product like ATLAS.ti.35

Qualitative analysis takes time, but qualitative analysis software speeds the process by facilitating a word count, code creation, and data analysis. Scales outlines a seven-step process that the authors followed to analyze student work using ATLAS.ti. The researcher (1) begins by selecting a hermeneutic unit, which then dictates the organization and format for using data in ATLAS.ti. (2) The next step is running Word Cruncher, which generates a spreadsheet of words used in student responses. While reading through the text data, the researcher (3) may use the in vivo coding feature to mark words and phrases of interest. Then the qualitative researcher (4) creates a set of codes for the data, organizing by theme. (5) The researcher does the important and time-consuming work of thinking about what the data means and how to derive broader codes, and then (6) adds a second layer of observational codes to incorporate additional codes and check the previous work to make sure the codes applied hold true. Finally, (7) data analysis takes place to group the responses by code and determine what their frequency might mean.36

Procedures

We collected 1,478 responses from the tutorial for first-year students enrolled in a composition course in academic year 2013–2014. Students were required to enter a research topic, answer a series of questions about who was affected, what region was affected, and what time period was most relevant, and finally enter a topic statement or question. For the textual analysis, only the topic statement was examined, as it represented the students' starting point for approaching a research project and addressed who, what, where, and when all in one sentence.

At the beginning of the project, we prepared an Excel spreadsheet with all of the student responses. Common terms (a, the, and, or, will, has, and more) were compiled in a stop list and discarded before quantitative and qualitative strategies were applied to the resulting list of words. A quantitative analysis, a simple count, was carried out using the Word Cruncher tool to create a master list of all the words used, and then the list was sorted in order of frequency.

With an initial read of the list of frequently used words, the data began organizing itself into distinct themes. We created a set of corresponding codes to make the organization more apparent. Qualitative analysis was used to group words into concepts, combining plural and singular cases as well as coding very specific topics (such as "cellphones") into broader concepts (such as "technology"). In the qualitative analysis, a grounded theory approach was used to permit the categories to emerge from the data (the data being the text of the student research topic statements).
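The frequency count and concept roll-up described above can be approximated outside ATLAS.ti. The Python listing below is a minimal sketch under stated assumptions: the stop list is abridged, and the concept map contains only a few hand-picked terms of the kind grouped during coding; neither reproduces the study's actual lists.

from collections import Counter
import re

# Abridged stop list; the study's full list discarded common terms before counting.
STOP_WORDS = {"a", "the", "and", "or", "will", "has", "how", "in", "of", "to", "on", "do"}

# Illustrative concept map rolling specific words up into broader codes,
# mirroring the grouping of "cellphones" under "technology" described above.
CONCEPTS = {
    "cellphones": "technology", "cellphone": "technology",
    "obesity": "health", "drugs": "health", "alcohol": "health",
    "pollution": "environment", "farming": "environment",
    "school": "education", "schools": "education",
}

def word_frequencies(statements):
    """Count words across all topic statements, skipping stop words (cf. Word Cruncher)."""
    counts = Counter()
    for statement in statements:
        counts.update(word for word in re.findall(r"[a-z'-]+", statement.lower())
                      if word not in STOP_WORDS)
    return counts

def concept_frequencies(statements):
    """Fold the word counts into broader concept codes."""
    folded = Counter()
    for word, n in word_frequencies(statements).items():
        folded[CONCEPTS.get(word, word)] += n
    return folded

statements = ["How has pollution from farming affected obesity in children?",
              "How do cellphones affect learning in schools?"]
print(concept_frequencies(statements).most_common(3))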
Results

Rubric Approach: Analysis of Research Skills

Each criterion was scored on a three-point scale; results indicate that, for most of the criteria, students were scoring in the tier 2 ("developing") level. The rubric was designed so that even a novice researcher could reach a tier 3 ("exemplary"); we would have liked to see more students achieving that level (see table 1).

TABLE 1
Scores for Each Rubric Criterion (out of 3)
  Researchability: mean 2.19, mode 2.00, SD 0.65
  Appropriate Breadth: mean 2.09, mode 2.00, SD 0.74
  Topic-Related Vocabulary: mean 2.29, mode 2.00, SD 0.66
  Arguable Topic: mean 1.83, mode 2.00, SD 0.53

When looking at the "exemplary" level, 31.7 percent of responses received a level 3 score for "researchability" and "appropriate breadth," 40 percent for "topic-related vocabulary," and only 6.8 percent for "arguable topic" (see table 2). These results tell us that students are still having difficulty selecting topics that are manageable and arguable; students also struggle with bringing context to their topic. The area that had the smallest number of exemplary scores was choosing an arguable topic. Many of our ENG 102 students start with either a closed-ended question, or with a research question that would lead to an informative paper instead of a persuasive one (which is required by the assignment). While we were pleased that the majority of responses were at the "developing" level rather than the "beginning," these results indicate many areas where librarians and composition instructors can work together to improve student topic development.

TABLE 2
Total Number and Percent of Responses Receiving Each Score for Each Criterion (n = 145)
  Score Level 1 (Beginning): Researchability 19 (13.1%); Appropriate Breadth 33 (22.7%); Topic-Related Vocabulary 16 (11%); Arguable Topic 35 (24%)
  Score Level 2 (Developing): Researchability 80 (55.2%); Appropriate Breadth 66 (45.5%); Topic-Related Vocabulary 71 (48.9%); Arguable Topic 100 (68.9%)
  Score Level 3 (Exemplary): Researchability 46 (31.7%); Appropriate Breadth 46 (31.7%); Topic-Related Vocabulary 58 (40%); Arguable Topic 10 (6.8%)

ATLAS.ti Approach: Analysis of Topic Themes

Major Concepts

We found that, while student responses covered a wide range of topics and it was not possible to fit all topics into the categories established, five major concepts described more than half of the topics proposed by first-year students (see table 3). The broad topic "health" encompasses many student topics, including obesity, drugs, alcohol consumption, and healthcare. "Environment" includes topics such as global warming, farming, pollution, and water issues.

TABLE 3
Five Most Prevalent Topics for First-Year Students
  Health: 632 responses (43% of 1,478)
  Environment: 392 (27%)
  Education: 390 (26%)
  Media: 308 (21%)
  Technology: 301 (20%)
  Note: In the percentages above, more than one major concept may have been addressed in one student response (for example, "technology's impact on health"), so percentages will add up to well over 100%.

The researchers were not surprised at the top three concepts students chose to research. Even though most first-year students are between 18 and 20 years old, they have experience with their own health or the health of their families, concerns about the environment they live in now and will continue to experience in the future, and more than a decade of experience with different education systems. Students tend to write about what they know. In Lundstrom and Shrode's article about topic selection, focus groups revealed that students choose topics based on ease of doing the research and personal relatability, among a few other factors.37 The factor of personal relatability certainly seems to be at play in these results as well.

Where, When, and Who

Because of the focusing questions asked in the tutorial, many student topic statements included a geographic region, time period, and population that might be affected by the issue explored (see table 4). Nearly 50 percent of student responses listed the United States as the geographic region they would focus on in their research. When thinking about the relevant time period for their research, 33 percent of responses were focused on the past and 20 percent on the present. Just 7 percent said the future (the rest didn't specify). Responses were youth-focused in addressing who was affected by the research topics: 24 percent of responses named children, youth, or students as the population of concern. Again, it would seem that students are considering ease of research and personal interest and experience in their selection of topics.

TABLE 4
Most Common Populations, Time Periods, and Geographic Regions in First-Year Student Research Topics (percent of total research topics)
  Population (who?): children, young people, or students, 24%
  Time Period (when?): the past, 33%; the present, 20%
  Geographic Region (where?): United States, 50%
  Note: In the percentages above, more than one time period may have been addressed in one student response (for example, "in the past through present day"), so percentages will add up to well over 100%.

Discussion

Since it was apparent from the rubric results that students are only performing at a "developing," if not "beginning," level in all aspects of topic development, several findings from the literature were confirmed.
Students clearly struggled with the concepts of "researchability" and "appropriate breadth," with the majority of research topics proving to be unmanageable and unfeasible for a first-year student to grapple with; students appear to not be focusing beyond their original ideas.38 Examples of student research topics that scored at the "beginning" level for "researchability" and "appropriate breadth" are illustrative of these issues: "How have pharmaceutical drugs affected children's behavior worldwide?" and "How has overpopulation affected the Earth in the world currently?" both demonstrate a lack of definition in terms of scope, and it's clear that a novice researcher would have difficulty addressing these questions over the course of an 8- to 10-page paper. On the other end of the spectrum, "exemplary" scores for these same categories show more definition, establishing specific issues, places, and time periods. "How does the use of anabolic steroids affect athletic performance in high level athletes?" and "How has factory farming impacted the environment in the United States over the last 15 years?" are more manageable for a first-year student, and it would be feasible for students to begin to address these topics and meet the assignment requirements.

Head and Eisenberg's notion of "language context" was also a problem—with no background or prior knowledge to draw upon, many responses indicated only a vague sense of where students wanted to "go" with their topics.39 There were 58 responses that did score at an "exemplary" level, with responses such as "How has the increased use of social media affected the self-esteem of adolescents in the United States over the last 15 years?" indicating some context around the topic—a specific type of media's explicit impact on a particular population. Useful search terms can be derived from this statement, and this student seems to have some familiarity with this topic. However, the majority of responses scored at the "beginning" or "developing" levels, with responses such as "How has the media impacted society in the United States over the last 20 years?" including ambiguous vocabulary that would lead to an impractical and frustrating keyword search. It is clear that the student in this example has not thought about what kind of media or what aspect of society they are most interested in researching. Responses such as this indicate that students need more support with developing context around their topics and in using the information they find to refocus and refine their initial ideas.
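The contrast between these two statements can be made concrete by stripping filler words to see what would remain for a keyword search. The Python listing below is a rough illustration only; the stop list is hypothetical and far smaller than anything a librarian would actually use.

import re

# Hypothetical, abridged list of filler words to discard.
STOP_WORDS = {"how", "has", "the", "of", "in", "over", "last", "a", "an", "and",
              "to", "do", "does", "years", "affected", "impacted", "use", "increased"}

def candidate_search_terms(topic_statement):
    """Drop filler words from a topic statement to expose candidate keywords."""
    words = re.findall(r"[a-z0-9'-]+", topic_statement.lower())
    return [w for w in words if w not in STOP_WORDS]

vague = "How has the media impacted society in the United States over the last 20 years?"
focused = ("How has the increased use of social media affected the self-esteem "
           "of adolescents in the United States over the last 15 years?")

# The vague statement yields only generic terms; the focused one yields usable ones.
print(candidate_search_terms(vague))    # ['media', 'society', 'united', 'states', '20']
print(candidate_search_terms(focused))  # ['social', 'media', 'self-esteem', 'adolescents',
                                        #  'united', 'states', '15']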
Finally, the issue of arguability is an interesting one. This category saw the lowest number of "exemplary" scores, but we are actually satisfied with the vast majority of students scoring at the "developing" level. A "developing" score indicates that students are asking an exploratory question but not yet making a claim. As was indicated in the literature, students need a great deal of time to engage with a topic before they have the necessary knowledge to enter fully into a scholarly conversation and make a persuasive claim.40 So while some composition instructors encourage students to begin with a thesis statement already in mind ("Division I college athletes should be getting paid to play"), it is more useful for many students to start with an exploratory question that allows them to begin finding useful information that will later inform an argumentative stance ("Should college athletes be getting paid to play?"). The results of this category may have been acceptable to librarians, but they underscore the need to work with composition instructors to help them understand the research process of first-year students.

When examining the content and themes of student research topics, more interesting patterns are revealed. While we expected to see many students writing about hot-button political issues such as gun control, abortion, and the legal drinking age, those types of issues did not emerge as the most popular research topics. To our great relief and interest, research topics were more widely varied than our shared anecdotal evidence suggested. The wide range of topics, however, did fit into a few very broad categories that will help instructors and librarians suggest starting places and focusing questions for student researchers in the future.

This study hoped to answer two research questions regarding the implications of student research topic development for information literacy instruction: How can librarians and course instructors collaborate to help students select more feasible research topics? And how can information literacy instruction sessions be informed by the most common topical themes that first-year students select for their research projects? The results yielded interesting information that was used to change collaborations with composition instructors as well as the content and format of the one-shot information literacy instruction sessions.

In terms of collaborating with instructors, librarians implemented several changes based on this project. In the instructor orientation, a librarian presented the model of "research as a conversation," giving instructors and librarians a shared metaphor for talking with students about the research process.41 At future orientations, we will also share a simplified rubric for student research topics and examples of exemplary and developing research topics to clarify to new instructors what types of topics are most feasible for first-year students. Librarians also made a concerted effort to schedule one-on-one meetings with instructors they were working with to talk about student research topics and support instructors as they were teaching the research process.

There were also changes made to the actual content of the information literacy instruction sessions, the goal being to spend more time on topic development.
The revisions to the instruction program were heavily influenced by the "research as a conversation" model, which we thought would be accessible to students as well as composition instructors.42 The major change was the addition of a prelibrary session classroom visit, in which the instruction librarian assigned to a particular section of ENG 102 visits the students' classroom for a 15-minute introductory meeting prior to the scheduled library instruction session. In this previsit, the librarian introduces the concept of "research as a conversation" by showing the students a short video created by UNLV Libraries that compares the research process to a person showing up late to a party and having to catch up on, and contribute to, the conversations around her.43 After showing the video, the librarian leads a discussion around the research conversation model and how it connects to the ENG 102 assignment—"eavesdropping" is preresearch, "entering" is searching for scholarly information, and "engaging" is critically evaluating sources. The librarian then provides students with a worksheet that guides them through the preresearch stage by asking them guiding questions around their topics while prompting them to explore in CQ Researcher or Wikipedia to gain preliminary ideas and context, an idea similar to one of the miniassignments created by Deitering and Jameson.44 Students are instructed to complete the worksheet prior to the library session and bring it with them. This led to much more fruitful discussions during the library session, since students had a better starting point from which to launch their keyword searches as well as more developed ideas and questions to discuss with their instructor and librarian.

In addition, one of the activities for the library instruction session is a "reading for relevance" activity in which students work in groups to critically evaluate an article. The ATLAS.ti portion of this project, which enabled us to identify common themes of student papers, will shape our choices of articles to assign to students for this activity. We plan to collect a set of articles on each of the most popular themes (health, environment, education, media, and technology) for future library instruction sessions.

Limitations

One limitation of this study was the Topic Narrowing Tutorial that was used to capture student responses. When the tutorial was originally created, it was intended to walk students through constructing a topic statement or research question. While the tutorial certainly accomplishes that, this study made clear that students need less help with writing research questions than with the process of inquiry that goes into selecting a topic. The tutorial may direct students to consider various aspects of their topic ("who," "what," "where," and "when"), but as Fister noted, these inputs focus a topic in a cursory way and do very little to help the student deeply investigate the topic.45 The results of this study reveal that the tutorial is not effective at assisting students in the areas in which they need the most help, and this may be a contributing factor in the high number of level 1 and 2 scores on the rubric analysis. The preresearch worksheet introduced in the prelibrary classroom visit is more promising in terms of topic exploration and development, but it is not an interactive tool that students can revisit on their own.
We are currently discussing the most useful format for the preresearch worksheet and are considering replacing (or enhancing) the current tutorial with the content from the preresearch activity.

Another limitation is that ours is a single-institution study that looks at a specific subset of the undergraduate population—first-year writing students at a large, public, urban university. It would be interesting to examine topic development—especially the content of student topics—at other kinds of institutions to see if the trending topics differ at other universities.

Conclusion

Our study provides new insight into a problem that librarians and composition instructors have been grappling with for decades. While practitioners are aware of the issues students face with topic development, and some useful curriculum changes have been proposed, there have been no recent information literacy studies that systematically analyze student topics and provide practical solutions to these challenges. Many of the challenges that first-year students face with developing feasible research topics are due to the nature of the information landscape and are unlikely to be easily mediated; as technology continues to evolve and the amount of online information grows, students will continue to feel a sense of "information overload" that makes it difficult to narrow and develop a topic. The information gleaned from our study has enabled us to make evidence-based decisions in the structuring of our information literacy program, but there is a need for more research in this area. The updates to the ENG 102 library instruction program have shown positive initial results, but a longitudinal study of student research topics would provide insight into the efficacy of these changes.

In addition, further exploration of effective models for librarian-composition instructor collaborations is needed. Our results highlighted the importance of introducing topic development early in a course and reinforcing this process throughout the semester; we therefore agree with the literature in that the classroom instructor should be the primary resource for students during this process. It is imperative that librarians support composition instructors in their teaching by helping them embed sound learning outcomes and scalable activities into their courses.

Finally, we encourage other institutions to consider a direct assessment of student research topics to uncover unique characteristics for their specific populations. While the "research as a conversation" model worked well for our purposes, we would be interested to see, in light of the new ACRL Framework for Information Literacy for Higher Education's concept of "Scholarship Is a Conversation," if other institutions find this to be an effective way to discuss the topic development process. We hope that our study inspires other practitioners to move from anecdotal, indirect evaluations of student topic development to authentic assessments of student work. Our profession, our collaborations with instructors, and our students will all benefit from our shared dialogue on this subject.

Acknowledgements

Thanks to colleagues at University of Nevada, Las Vegas Libraries and instructors of English 102.

Notes

1. Alison J. Head and Michael B. Eisenberg, "Truth Be Told: How College Students Evaluate and Use Information in the Digital Age," Project Information Literacy Progress Report, last modified November 1, 2010.
2. Carol C. Kuhlthau, Teaching the Library Research Process (Lanham, Md.: The Scarecrow Press, 1994).
3. Wendy Holliday and Qin Li, "Understanding the Millennials: Updating Our Knowledge about Students," Reference Services Review 32, no. 4 (2004): 356–66.
4. Alison J. Head, "Information Literacy from the Trenches: How Do Humanities and Social Science Majors Conduct Academic Research?" College & Research Libraries 69, no. 5 (2008): 433.
5. Alison J. Head and Michael B. Eisenberg, "Finding Context: What Today's College Students Say about Conducting Research in the Digital Age," Project Information Literacy Progress Report, last modified February 4, 2009.
6. Nancy Sommers and Laura Saltz, "The Novice as Expert: Writing the Freshman Year," College Composition and Communication 56, no. 1 (2004): 132.
7. Ibid., 134.
8. "The Citation Project," available online at http://site.citationproject.net/ [accessed 15 July 2015]; Sandra Jamieson, "Reading and Engaging Sources: What Students' Use of Sources Reveals about Advanced Reading Skills," Across the Disciplines 10, no. 4 (2013).
9. Rebecca Moore Howard, Tricia Serviss, and Tanya K. Rodrigue, "Writing from Sources, Writing from Sentences," Writing and Pedagogy 2, no. 2 (2010): 177–92.
10. Head, "Information Literacy from the Trenches," 433.
11. Kuhlthau, Teaching the Library Research Process, 34.
12. Kacy Lundstrom and Flora Shrode, "Undergraduates and Topic Selection: A Librarian's Role," Journal of Library Innovation 4, no. 2 (2013): 23–41.
13. Sonia Bodi, "How Do We Bridge the Gap between What We Teach and What They Do? Some Thoughts on the Place of Questions in the Process of Research," Journal of Academic Librarianship 28, no. 3 (2002): 109–14.
14. Barbara Fister, "The Research Processes of Undergraduate Students," Journal of Academic Librarianship 18, no. 3 (1992): 163–69.
15. Ibid., 168.
16. Nancy Sommers, "The Call of Research: A Longitudinal View of Writing Development," College Composition and Communication 60, no. 1 (2008): 152–64.
17. Jeanne R. Davidson and Carole A. Crateau, "Intersections: Teaching Research through a Rhetorical Lens," Research Strategies 16, no. 4 (1998): 245–57.
18. Anne-Marie Deitering and Sara Jameson, "Step by Step through the Scholarly Conversation: A Collaborative Library/Writing Faculty Project to Embed Information Literacy and Promote Critical Thinking in First Year Composition at Oregon State University," College & Undergraduate Libraries 15, no. 1 (2008): 57–79.
19. Geoff Payne and Judy Payne, Key Concepts in Social Research (SAGE Key Concepts Series) (London: SAGE Publications, 2004): 51.
20. John Hafner and Patti Hafner, "Quantitative Analysis of the Rubric as an Assessment Tool: An Empirical Study of Student Peer-Group Rating," International Journal of Science Education 25, no. 12 (2003): 1509–28.
21. Judith Arter and Jay McTighe, Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance (Thousand Oaks, Calif.: Corwin Press, 2001).
22. Megan Oakleaf, "Using Rubrics to Assess Information Literacy: An Examination of Methodology and Interrater Reliability," Journal of the American Society for Information Science and Technology 60, no. 5 (2009): 969–83.
23. Barbara M. Moskal, "Scoring Rubrics: What, When, and How?" Practical Assessment, Research and Evaluation 7, no. 3 (2000).
24. Ibid.
25. Arter and McTighe, Scoring Rubrics in the Classroom.
26. Lundstrom and Shrode, "Undergraduates and Topic Selection," 23–41.
27. Head and Eisenberg, "Truth Be Told"; Head, "Information Literacy from the Trenches"; Bodi, "How Do We Bridge the Gap."
28. Head and Eisenberg, "Finding Context."
29. Oakleaf, "Using Rubrics to Assess Information Literacy."
30. Oakleaf, "Using Rubrics to Assess Information Literacy"; Moskal, "Scoring Rubrics"; Jacob Cohen, "A Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement 20, no. 1 (1960): 37–46.
31. Andrew F. Hayes and Klaus Krippendorff, "Answering the Call for a Standard Reliability Measure for Coding Data," Communication Methods & Measures 1, no. 1 (2007): 77–89; Klaus Krippendorff, Content Analysis: An Introduction to Its Methodology (Thousand Oaks, Calif.: Sage Publications, 2004).
32. Krippendorff, Content Analysis.
33. Boram Kim, "Resolving Discrepant Ratings in Writing Assessments: The Choice of Resolution Method and Its Application," English Teaching 66, no. 2 (2011): 211–31; Robert L. Johnson, James A. Penny, and Belita Gordon, Assessing Performance: Designing, Scoring, and Validating Performance Tasks (New York, N.Y.: The Guilford Press, 2008): 242.
34. Oakleaf, "Using Rubrics to Assess Information Literacy."
35. B. Jane Scales, "Qualitative Analysis of Student Assignments: A Practical Look at ATLAS.ti," Reference Services Review 41, no. 1 (2013): 134–47.
36. Ibid., 142–43.
37. Lundstrom and Shrode, "Undergraduates and Topic Selection."
38. Head, "Information Literacy from the Trenches"; Holliday and Li, "Understanding the Millennials"; Kuhlthau, Teaching the Library Research Process.
39. Head and Eisenberg, "Finding Context."
40. Deitering and Jameson, "Step by Step through the Scholarly Conversation"; Sommers, "The Call of Research"; Sommers and Saltz, "The Novice as Expert"; Davidson and Crateau, "Intersections."
41. Davidson and Crateau, "Intersections."
42. Deitering and Jameson, "Step by Step through the Scholarly Conversation"; Davidson and Crateau, "Intersections."
43. "Research Is a Conversation," YouTube video, 2:02, posted by "Lied Library," November 13, 2014, available online at www.youtube.com/embed/D1uFR64Kmbk [accessed 13 November 2014].
44. Head and Eisenberg, "Finding Context"; Deitering and Jameson, "Step by Step through the Scholarly Conversation"; Bodi, "How Do We Bridge the Gap."
45. Fister, "The Research Processes of Undergraduate Students."