Generative AI Use in a Business Ethics Course Assignment: A Descriptive Study on Student AI Choice and Perceptions

Amy M. Cedrone, M.A.
acedrone@harford.edu

From the Philosophy and Religion Department, Division of Arts and Humanities, Harford Community College, Bel Air, Maryland.

ABSTRACT

In this descriptive study I wanted to see how including an assignment that required students to use generative artificial intelligence (AI) would affect students’ perceptions of generative AI, including their own assessment and grading of generative AI-created content. I theorized that more than half the students would assess the generative AI’s answer as earning at least a C, i.e., a passing grade. In my business ethics course, I created a discussion assignment in which students were required to use the generative AI of their choice to answer a prompt, after having written their own answer. Students were then asked to compare the generative AI’s response to their own and give it a grade. In addition to these steps, many students shared their personal points of view about generative AI use, including but not limited to college coursework and professional use. This study lasted for 2 semesters, fall 2024 and spring 2025. The sections included 4 fully online courses and 2 traditional in-person courses. Most students concluded that the answers given by generative AI were very general, not well detailed, and vague. Most students did not think the generative AI would earn a high grade. I found that the data did support my hypothesis, in that more than half the students rated the generative AI’s answer at a C or better. That being said, student commentary generally expressed low confidence in the generative AI’s ability to give a high-quality answer, and most students believed the level of specific detail was poor. One implication is that students may not trust generative AI to produce high-quality answers, but they may trust it to produce work which earns at least a C grade. A second implication is that when students are required to use generative AI in class, they know the instructor is aware of the use; this raises the question of whether that awareness makes them less likely to use it.


INTRODUCTION

General artificial intelligence (AI) literacy of students is an important topic in education. More specifically, generative AI literacy of students is important because this particular form of AI is seeing greater use by students in college settings. A study released by researchers at the Massachusetts Institute of Technology’s Media Lab (Kosmyna et al., 2025) suggests that use of generative AI over time can impair a person’s ability to think critically. Writing for The Hill, Scully (2025) summarized the MIT researchers’ findings about ChatGPT use over time, including decreased brain engagement and users “consistently [underperforming] at neural, linguistic, and behavioral levels.” The full study begins with the observation that use of generative AI may have a “cognitive cost,” which the researchers observe in the setting of writing essays (Kosmyna et al., 2025). The authors believe that generative AI use is increasing at such a rate that they are justified in releasing their findings thus far as soon as possible, in order to show some of the risks associated with generative AI use before more widespread implementation among school children takes place.

Student use of generative AI, both in and out of the classroom, may be widespread. While assessing general student use, background, experience, and context for use of generative AI was not a goal of this assignment experiment, future iterations of the assignment may include the opportunity to gather data in that realm. Rismanchian et al. (2024) used a “GenAI literacy survey” which allowed them to gather information on what undergraduate college students know about generative AI use. Their study and conclusions highlight the prevalent use of generative AI and the corresponding need for generative AI literacy. It is worth noting that this study focused on gathering survey data regarding student generative AI literacy, and did not include an assignment component. Other studies, such as that published by Gattupalli (2024), focus on an assignment and gathering qualitative data on student views about generative AI. This division of goals indicates that it may be more fruitful and practical to pursue either data on generative AI literacy, or data on student use of AI technology in assignments, not both at once. Unless a course has an explicit, strong, and learning outcome-focused concentration on generative AI literacy, trying to accomplish both these goals in a single course may lead to fewer college instructors exploring these aspects of this technology with students. Campbell and Cox’s (2024) study involving graduate students in a department of education focuses on student perceptions and uses of generative AI, while taking the audience through some but not all features of various brands of generative AI. This focus, paired with the qualitative nature of the study itself and the data gathered, demonstrates how faculty outside areas like computer science can use generative AI in a course, foster student literacy in this area, and do both without needing to become experts themselves on generative AI.

Rismanchian et al. (2024) focused on student generative AI literacy in their study, using a survey to collect data on student experience with and impressions of the technology. This involved the use of a survey without a corresponding assignment, and shows how faculty can design and deploy a survey to learn more about student generative AI literacy, thereby laying the foundation for facilitating more learning about the technology and increasing that literacy. The more faculty know about student perceptions of generative AI, the more targeted and clear their approach to teaching greater literacy can be. While my own concerns are more focused on AI use by college-age students and less on any harm caused by generative AI use, I do agree that releasing findings related to generative AI use as soon as possible may be wise, because of the rapid spread of this technology. If there are definite or potential harms associated with its use, it’s better that people know quickly so their decisions for use can be fully informed.

Over the past year, higher education has been saturated with excitement, fear, and warnings about generative AI such as ChatGPT and the potential for student use and abuse. The primary worry is that students will use generative AI to “cheat” on assignments by having the generative AI write material instead of writing it themselves (Blackwell-Starnes, 2025). I wanted to see how students would react to using generative AI for an assignment involving writing, and in particular how they would react to comparing their own writing with the product of generative AI. Going beyond plain comparison, I wanted to find out how students would assess and grade the product of generative AI, in essence asking them to take my place and give a grade for the work produced. My expectations were modest: I anticipated narrative feedback from students based on their subjective experience using generative AI, not sophisticated analysis of the brand or type of generative AI in use.

The philosophy courses I teach all involve writing throughout. I had been considering tweaking assignments to make generative AI use more difficult, easier to detect, or less efficient for students. Before making sweeping changes across all courses, I thought a test assignment in 1 course over 1 year would be a helpful starting point. This would enable me to gather some information and observe how students use, react to, and assess generative AI products, provided I built the assignment to allow for those outcomes. I opted for a discussion assignment in a fully online, asynchronous course because discussions occur every week in the course, require students to write in order to complete them, and therefore might be assignments for which students do consider using generative AI. When comparing a discussion assignment to a written assignment as a testing ground for generative AI use, I believed, based on experience, that students were less likely to use generative AI on one of the short written, non-discussion assignments because those are subject to review by plagiarism detection software such as Turnitin or SafeAssign.

The qualitative goal of the assignment was primarily to give students the opportunity to use generative AI and compare their writing with what the “bot” produced. Secondarily, I was interested in data on recurring themes and student opinions of the generative AI product. Informal conversations with students, both in person and online, indicated that most college course instructors talk about generative AI and criticize it, but never have students use it in a formal setting. The same conversations also confirmed that students do use generative AI. In the in-person discussion, I found it necessary to assure students that they would not “get in trouble” if they used generative AI, in order to propel the conversation. I was transparent about my own intent, which was to know more about their experience with generative AI, leading into their completion of the assignment. I did assume that some students had used generative AI prior to this assignment, based on my own experience. I did not go into this assignment experiment believing I would draw deep conclusions about generative AI student literacy or use, because this was the first time I assigned such an item.

In my business ethics courses, most students are enrolled because they intend to work in banking, accounting, marketing, public relations, communications management, or social media management after graduation, with some students preparing for a combination of these roles. As these are professions where generative AI use seems to be increasing, the idea was to introduce responsible generative AI use to these students so they have a model for ethically sound use and citation of sources when they use generative AI at work. It would be doing the students a grave disservice to refuse to acknowledge generative AI use, so avoiding the topic did not seem useful. Instead, a direct hands-on approach was warranted. The intent was that students would gain experience using generative AI, have an opportunity to compare generative AI products with human products in writing, and practice assessing the usefulness, shortcomings, and other outcomes of generative AI use. Inherent in this is the hope that students will see how human products are in many ways superior to generative AI products, while acknowledging how generative AI could make work easier and more efficient for them. Overall, the goals were modest: exposure to generative AI use and an open acknowledgement that students may use it at work were primary. At this point, the scholarship on generative AI use is very thin. Most literature focuses on fearmongering and doom-and-gloom outlooks. This is also the chatter I have heard from coworkers, including faculty and administrators, at work and at conferences. Anecdotal feedback from students indicates that most faculty avoid generative AI altogether, neither discussing it nor acknowledging its use.

The literature on intentional use of generative AI by students in assignments is not currently robust. This is due in part to the fact that generative AI is still so new, and colleges and universities globally are still considering how to respond to or use generative AI in a college context. This project therefore proceeded on the idea that it may be the first of its kind in my immediate geographic and academic context. Two papers in particular offer insight into student perception of generative AI use and the use of generative AI in assignments. Shurden and Shurden (2024) conducted surveys of students in business classes to assess how they used generative AI such as ChatGPT. This was not an assignment for class, however; their research focused on students in the Business School at Lander University. A second paper from Firth et al. (2024) focused more deliberately on teaching students how to responsibly use generative AI in coursework. This study involved assessing if and when the use of generative AI can be justified, and focused on both undergraduate and graduate students in the MBA program at the University of Montana, Missoula. Noteworthy is the fact that both these studies operate with a population of business students. This is a serendipitous similarity to my own case, in which I thought business ethics would be the best course to test the use of generative AI.

METHODS

In all my sections of business ethics during fall 2024 and spring 2025, I included a discussion assignment in the unit on technology, which seemed the natural place for a generative AI discussion to occur. The classes included 2 in-person and 4 fully online, asynchronous sections. In the in-person courses, I openly mentioned concerns about plagiarism, so students would understand why I was concerned as an instructor. In the online sections, I mentioned this factor in 1 weekly introduction video for the module where this particular discussion using generative AI was assigned.

The assignment was given to students as follows:

This discussion is an opportunity to use current technology while answering questions. There are a few steps to this discussion assignment:

  1. Write your own answer to the discussion questions for this module. The questions are below the last numbered step given here.
  2. Choose a free version of a Generative AI chatbot (such as ChatGPT), and use it to answer the same discussion questions. Describe which chatbot you use, including full citation information.
  3. Compare the two answers. How well, or poorly, does the chatbot answer the questions? Would you trust this kind of tool to write answers which earn you a high grade, a middling grade, or a low passing grade on a discussion assignment?

Here are the questions to use: How does technology infringe on privacy? Are there any business practices that suffer from a use of technology rather than benefit from it?

All students were instructed to complete the readings for the week before choosing a brand of generative AI and completing the discussion post. In all sections of the course, the writing done by students in response to the prompt, the generative AIs’ responses to the prompts, and the students’ comparisons were all visible to all participants in the course. Students in the online sections were required to post a reply to at least one classmate, which was visible to all participants. Students in the in-person sections were not required to post a reply. Instead, we discussed the posts during class time.

The assignment design was strongly influenced by the goal of the assignment, my own experience designing and assessing discussion assignments, and the choice of business ethics as the course whose students are most likely to use generative AI in the workplace after graduation. My goal was to give students an opportunity to compare their own writing to the product of generative AI, and discuss their observations. I have 16 years of experience teaching fully online, asynchronous courses. I have credentials in online course design and facilitation. The overwhelming majority of students enrolled in business ethics are in business-related majors such as marketing, accounting, and advertising, in which they will be writing and will have the opportunity to use generative AI. Students in these majors tend to think of writing as a secondary skill they use to communicate a primary skill like market assessment. In that sense, these students are likely to use generative AI for writing because they may believe it makes communication easier.

For the online classes, all data was collected from the discussion posts after the assignment was completed. For the in-person classes, most data was collected from the discussion posts. The in-person class meeting following completion of the assignment was a live discussion about what students concluded and how they thought the generative AI product compared with their own. We also discussed how generative AI might be useful in their intended careers. Students with full-time employment, and a few students with their own start-ups, described their actual use of generative AI at work. Nothing in the in-person discussion contradicted or was inconsistent with the assignment feedback from students.

RESULTS

Completion numbers for the assignment are presented below in Table 1. The number of in-person students totaled 60, and the number who completed the assignment was 27 in the fall plus 29 in the spring, for a total of 56 students. The number of online students totaled 80, and the number who completed the assignment was 27 in the fall plus 29 in the spring, for a total of 56 students. The overall total was 140 students, with assignment completion by 112 students.

Table 1. Number of Responses by Modality and Semester.
Modality Fall 2024 Spring 2025
Online 27 29
In-person 27 29
Total 54 58

For the in-person classes, students were required to post their own answer to a discussion question with no reply requirement; 56 of the 60 in-person students completed the assignment across the two semesters. For the online classes, students were required to post their own answer to a discussion question with a single reply to one classmate during the week; 70% of online students completed the assignment, either partially or completely. Because students who drop during the semester are removed from the course site, exact overall enrollment numbers are not available; the enrollment for fully online sections represents the number of students still in the course with access to the course site on the last day of the semester. Overall, 80.00% of students completed the assignment, with a non-completion rate of 6.67% among in-person students and 30.00% among online students. Table 2 presents the response rates during this study.

Table 2. Rate of Responses by Modality.
Modality Completion Non-Completion
Online 56/80 = 70.00% 24/80 = 30.00%
In-person 56/60 = 93.33% 4/60 = 6.67%
Total 112/140 = 80.00% 28/140 = 20.00%
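The rates in Table 2 are simple proportions of completions to enrollments. As a quick arithmetic check (a sketch for the reader, not part of the study itself; the counts are taken from Tables 1 and 2), the following reproduces the reported percentages:

```python
# Completion counts by modality, taken from Tables 1 and 2:
# (completed, enrolled) per modality.
counts = {
    "Online": (56, 80),
    "In-person": (56, 60),
}

# Per-modality completion and non-completion rates.
for modality, (done, enrolled) in counts.items():
    rate = 100 * done / enrolled
    print(f"{modality}: {done}/{enrolled} = {rate:.2f}% completion, "
          f"{100 - rate:.2f}% non-completion")

# Overall rate across both modalities.
total_done = sum(done for done, _ in counts.values())
total_enrolled = sum(enrolled for _, enrolled in counts.values())
print(f"Total: {total_done}/{total_enrolled} = "
      f"{100 * total_done / total_enrolled:.2f}% completion")
```

Running this reproduces the 70.00%, 93.33%, 6.67%, and 80.00% figures reported in Table 2 and the surrounding text.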

Students’ Choices of Generative AI Programs

Students were asked to name and cite the generative AI program they used in the assignment. Although the prompt suggested a free version, I did not enforce that requirement, partially to see how students made their choice. A few students did use “pay-to-play” versions of generative AI, and most of them asked in advance whether that was acceptable. The choice of generative AI is presented in Table 3.

Table 3. Choice of Generative AI, in Order of Popularity.
Generative AI Brand Number of Students
ChatGPT 88
Microsoft Copilot 7
Google Gemini 5
Perplexity 2
DeepAI 1
DeepSeek 1
Snapchat 1
Not specified 7
Total 112

Students’ Impressions on the Quality of Generative AI Work

After completing the 2 responses in the assignment, their own answer and the generative AI product, students were asked 2 follow-up questions to understand how they evaluated the level of work produced by generative AI. This part of the assignment elicited qualitative feedback.

The first question for students to address was: How well, or poorly, does the chatbot answer the question? Several comments recurred across sections; more than half of the students described the generative AI’s answers as very general, vague, and lacking detail. A few students offered more specific critical feedback.

In both the online writing and the in-person discussion feedback, students said the generative AI was good at very general content, but poor at details. A few students pointed out that generative AI often included bullet points in replies. One of the students who pointed out false claims from the generative AI said she included a reference to the chapter, to see what would result. The generative AI made up “facts” from the chapter, which were categorically wrong. This led other students to point out that the vague nature of the generative AI content reduces the chance it will get a fact “wrong.” They believed the same would be true of a vague answer written by a person. One student put it very succinctly, saying that if a student doesn’t really do the reading and writes something vague for an assignment, it might earn a C or D. In their judgment, the AI was good at writing C or D content. They wondered how easy or difficult it would be to distinguish AI content from low-quality human content.

The second question students addressed about the posts was: Would you trust this kind of tool to write answers which earn you a high grade, a middling grade, or a low passing grade on a discussion assignment? I included this question because so often I hear from students that they rely on less-than-honest ways of completing assignments when they run out of time, and they trust that the alternative will produce content which earns a passing grade. The feedback on this point was very eye-opening, for the students and for me. While more than 50% of the students said they thought the generative AI’s product would earn them a passing grade (i.e., a C) on an assignment, they would definitely not trust it to produce A- or B-level content. Most students said they believed generative AI would earn a C to D grade, because of the lack of detail. Several students said they would trust generative AI to outline facts, but not to compose narrative. Several students said they thought others would easily detect or suspect that a generative AI had done the writing, if they used it for an assignment. There were a dedicated few who said they thought generative AI content could earn an A. What was interesting on that point was how other students pushed back on that claim. In the online discussion, at least 3 students commented on someone else’s post saying they disagreed that the generative AI produced good quality, higher-than-C content.

DISCUSSION AND CONCLUSION

In my hypothesis, I theorized that more than half the students would assess the generative AI’s answer as earning at least a C, i.e., a passing grade. The results of this descriptive study did support that hypothesis. Because I left the assignment open for individualized feedback and questions, other factors came to light. One pleasant addition was the amount and quality of critical assessment which students performed on the generative AI results. From what I read and heard, I infer that students felt confident and comfortable sharing their critical thoughts, and did not feel inordinately nervous or hesitant about disagreement. Disagreements were respectfully expressed and considerate of others’ right to disagree.

My sample size was limited to all students enrolled in PHIL 221: Business Ethics courses taught by me during the fall 2024 and spring 2025 semesters. While this may seem small, and one may ask why I did not include all my philosophy courses from those semesters in my sample size, I did choose that particular course for reasons related to student population, their intended career paths, and the likelihood those students would use generative AI in a professional or other setting after taking my course. I also believed that the urgency of concern with student generative AI literacy was strong enough to warrant assigning use of generative AI and collecting data on the outcomes as soon as possible. My sample number was 112 students who completed the assignment out of the 140 total enrollments.

Another study limitation is that I did not include a formal survey or other means to ascertain students’ prior experience with generative AI. In my in-person classes, there was a live discussion about prior experience, with input from some but not all students. While it is true that the choice of generative AI tool and brand may have had an effect on outcomes, and while I could have asked preliminary questions on that subject, I decided it was more important to get students using the technology immediately and gather their responses in as unfiltered a form as possible, in keeping with my goal of facilitating student comparison of generative AI product with their own writing. I also wanted to keep the form of the assignment uniform for one full academic year, so I could obtain a larger data set based on the same assignment details.

While there were some similarities with other studies (Blackwell-Starnes, 2025; Firth et al., 2024), the approaches taken were different enough from my own that a cogent comparison of outcomes may not be possible. It is encouraging to see that other academics see a responsibility to openly address generative AI with students in a way that shows them how to use it and encourages them to do so. A common purpose seems to be taking the fear out of generative AI use in college assignments, and open discussion of how it can be used responsibly. The typical fear and avoidance of generative AI which I have myself seen is not an approach which effectively models to students what to do with this new and growing technology. I hope that more people will consider the benefits of generative AI use, even as they are wary of the potential misuses and critical of the environmental impact.

I did not use any outside sources to design the methodology of the assignment. I relied on my own experience and judgment, and I trusted that this would be the first of many times I included this assignment in the course. My long-range plan for design was to keep the assignment the same for 1 full academic year (2024-2025). After evaluating that year’s data, I would decide whether further goals should be included, leading to changes in the assignment. As my primary goal was to give students the opportunity to use and discuss generative AI and compare its product with their own, the design was geared primarily towards those steps. I did not believe formally assessing prior use of or familiarity with generative AI would be materially significant, because even students with prior experience may never have compared the generative AI product with their own writing. I believed assessing whether students completed the assignment to earn a passing or higher grade was not important, because that goal may not depend on the course content but rather on the student’s own standards for what constitutes an acceptable grade.

As to the choice of generative AI, promotion of student agency led me to allow students to choose whichever brand they wanted to use. This provided an opportunity to see how different generative AI brands produced different outcomes. The few students who chose to use a paid version with my approval also provided an opportunity to see how that version’s product may differ from the free versions. Of note is the fact that no student used Meta AI for this assignment. There were 7 students who failed to cite which generative AI they used. It is likely they used ChatGPT, since that is the example named in the assignment; however, I did not record them as choosing ChatGPT.

Having students use generative AI in an assignment, specifically a discussion assignment where all students can see all posts, was a positive opportunity for critical assessment of the technology and issues of cheating and plagiarism with generative AI use. I found the student feedback encouraging, and I believe that more experienced college students can benefit from this guided approach to generative AI. Others could use a similar assignment as a critical thinking exercise, while also emphasizing use in the workplace and incorporating lessons in responsible citation.

For this data set, I left the discussion assignment the same for fall and spring. Now that the year is over, I am considering alterations to the assignment. The assignment will definitely stay in place for all business ethics sections for the 2025-2026 academic year. Several insightful questions from students may lead to additional steps in the assignment. One student said they like to re-ask questions of generative AI in stages, to improve the answer, and we discussed as an in-person group how that might work well or poorly. Another student said they would have liked to see a comparison of free versus pay-to-play versions, for instance using ChatGPT free and paid versions, with the same question fed by the same person on the same device. A third student said they were worried about their privacy during the assignment, so they deliberately used a campus computer to complete the generative AI portion. A few (3-5) students said they were worried about the environmental impact, and we discussed whether including a comment on that would be a worthwhile component of the assignment. Another way that particular detail could show up is in the discussion for the environmental issues unit in the same course. This would provide some continuity of focal point and multiple ways to consider generative AI use in the moral sense.

REFERENCES

  1. Blackwell-Starnes, K. (2025). “I prefer my own writing”: Engaging First-Year Writers’ Agency with Generative AI. Thresholds in Education, 48(1), 25–39. http://files.eric.ed.gov/fulltext/EJ1468038.pdf
  2. Campbell, L. O., & Cox, T. D. (2024). Utilizing AI Chatbots in higher education teaching and learning. Journal of the Scholarship of Teaching and Learning, 24(4). https://doi.org/10.14434/josotl.v24i4.36575
  3. Firth, D., Derendinger, M., & Triche, J. (2024). Cheating better with ChatGPT: A framework for teaching students when to use ChatGPT and other generative AI bots. Information Systems Education Journal, 22(3), 47–60. https://doi.org/10.62273/bzsu7160
  4. Gattupalli, S. (2024). Exploring undergraduate student perceptions of generative AI in college writing: An experience report. https://eric.ed.gov/?id=ED646454
  5. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025, June 10). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://arxiv.org/abs/2506.08872
  6. Rismanchian, S., Babar, E. T. R., & Doroudi, S. (2024, October 29). GenAI-101: What undergraduate students need to know and actually know about generative AI. EdWorkingPapers. https://edworkingpapers.com/ai25-1119
  7. Ryan, H., Abramov, D., Acker, S., & Elkins, S. (2025). Can AI Be a Co-Author?: How Generative AI Challenges the Boundaries of Authorship in a General Education Writing Class. Thresholds in Education, 48(1), 40–56. https://files.eric.ed.gov/fulltext/EJ1468037.pdf
  8. Scully, R. (2025, June 20). ChatGPT use linked to cognitive decline: MIT research. The Hill. https://thehill.com/policy/technology/5360220-chatgpt-use-linked-to-cognitive-decline-mit-research/
  9. Shurden, S., & Shurden, M. (2024). Business students’ perception of the use of artificial intelligence in higher education with a focus on ChatGPT. Journal of Instructional Pedagogies, 30, 1–12. https://files.eric.ed.gov/fulltext/EJ1459917.pdf