
How teachers make ethical judgments when using AI in the classroom

Date:
February 7, 2024
Source:
University of Southern California
Summary:
A teacher's gender and comfort with technology factor into whether artificial intelligence is adopted in the classroom, as shown in a new report.

A teacher's gender and comfort with technology factor into whether artificial intelligence is adopted in the classroom, as shown in a new report from the USC Center for Generative AI and Society.

The study, "AI in K-12 Classrooms: Ethical Considerations and Lessons Learned," explores how teachers make ethical judgments about using AI in their classrooms. The paper -- authored by Stephen Aguilar, associate director of the center and assistant professor of education at the USC Rossier School of Education -- details differences in ethical evaluations of generative AI, as well as rule-based and outcome-based views regarding AI.

"The way we teach critical thinking will change with AI," said Aguilar. "Students will need to judge when, how and for what purpose they will use generative AI. Their ethical perspectives will drive those decisions."

The study is part of a larger report from the USC Center for Generative AI and Society titled "Critical Thinking and Ethics in the Age of Generative AI in Education." In addition to the study, the report introduces the center's inaugural AI Fellows Program, which supports undergraduate students' critical thinking and writing in the age of AI, and looks ahead to building the next generation of generative AI tools. The center advances the USC Frontiers of Computing initiative, a $1 billion-plus investment to promote and expand advanced computing research and education across the university in a strategic, thoughtful way.

Ethical ramifications a key factor in adoption of AI in the classroom

As AI technologies become more prevalent in the classroom, it is essential for educators to consider the ethical implications and foster critical thinking skills among students. Taking a thoughtful approach, educators will need to guide students in evaluating AI-generated content and encourage them to weigh the ethical questions surrounding the use of AI.

The study's goal was to understand teachers' perspectives on the ethics of AI. Teachers were asked to rate their agreement with different ethical positions and their willingness to use generative AI tools, such as ChatGPT, in their classrooms.

The study included 248 K-12 educators from public, charter and private schools, with an average of 11 years of teaching experience. Of those who participated, 43% taught elementary school, 16% taught middle school and 40% taught high school. Over half of the participants identified as women, and educators from 41 U.S. states took part.

The published results suggest gender-based nuances. "What we found was that women teachers in our study were more likely to rate their deontological approaches higher," said Aguilar. "Male teachers cared more about the consequences of AI." In other words, female teachers endorsed rule-based (deontological) perspectives more strongly than male teachers did.

The sample also suggests that self-efficacy (confidence in using technology) and anxiety (worry about using technology) were important to both rule-based and outcome-based views of AI use. "Teachers who had more self-advocacy with using [AI] felt more confident using technologies or had less anxiety," said Aguilar. "Both of those were important in terms of the sorts of judgments that they're making."

Aguilar found the philosophical thought experiment known as the "trolley problem" applicable to his research. The trolley problem is a moral dilemma that asks whether it is acceptable to sacrifice one person to save a greater number. In education, the parallel question is whether a teacher is a rule-follower (a "deontological" perspective) or an outcome-seeker (a "consequentialist" perspective) when deciding when, where and how students can use generative AI in the classroom.

In the study, Aguilar concluded that teachers are "active participants, grappling with the moral challenges posed by AI." Educators are also asking deeper questions about the values embedded in AI systems and about fairness to students. While teachers hold differing views on AI, there is consensus on the need to adopt an ethical framework for AI in education.

Generative AI holds 'great promise' as educational tool

The report is the first from the USC Center for Generative AI and Society. Announced in March 2023, the center was created to explore the transformative impact of AI on culture, education, media and society. The center is led by co-directors William Swartout, chief science officer for the Institute for Creative Technologies at the USC Viterbi School of Engineering, who leads the education effort, and Holly Willis, professor and chair of the media arts + practice division at the USC School of Cinematic Arts, who is researching the intersection of AI with media and culture.

"Rather than banning generative AI from the classroom, we need to rethink the educational process and consider how generative AI might be used to improve education, much like we did years ago for mathematics education when cheap calculators became available," said Swartout, whose analysis "Generative AI and Education: Deny and Detect or Embrace and Enhance?" appears in the overall report. "For example, by asking students to look at texts produced by generative AI and consider whether the facts are right and the arguments make sense, we could help improve their critical thinking skills."

Swartout said generative AI could help a student brainstorm a topic before they begin writing, by posing questions such as "Are there alternative points of view on this topic?" or "What would be a counterargument to what I'm proposing?" Generative AI can also be used to critique an essay, pointing out ways it could be improved, he added. Fears about students using these tools to cheat could be alleviated with a process-based approach to evaluating their work.

"To reduce the risk of cheating, we need to record and evaluate the process that a student goes through in creating an essay, rather than just grading the artifact at the end," he said.

"Incorporating generative AI into the classroom -- if done right -- holds great promise as an educational tool."

The report also includes research from Gale Sinatra and Changzhao Wang of USC Rossier; undergraduate Eric Bui of the USC Dornsife College of Letters, Arts and Sciences; and Benjamin Nye of the USC Institute for Creative Technologies, who also serves as an associate director of the Center for Generative AI and Society.

"We must ensure that such technologies are employed to augment human capabilities, not to replace them, to preserve the inherently relational and emotional aspects of teaching and learning," said USC Rossier Dean Pedro Noguera. "The USC Center for Generative AI and Society's new report is an invitation to educators, policymakers, technologists and learners to examine how generative AI can contribute to the future of education."


Story Source:

Materials provided by University of Southern California. Original written by Ellen Evaristo and Paul McQuiston. Note: Content may be edited for style and length.


Cite This Page:

University of Southern California. "How teachers make ethical judgments when using AI in the classroom." ScienceDaily. ScienceDaily, 7 February 2024. <www.sciencedaily.com/releases/2024/02/240207120510.htm>.
