Insights and Impact

Q&A: AI in the Classroom 

Jennifer Steele, professor, School of Education; affiliate faculty member, Department of Public Administration and Policy, School of Public Affairs; and former English teacher 

[Illustration: robot hand holding a pencil]

In the 1970s, a new technology that was faster than the human brain and cheaper and more portable than a transistor radio encroached on classrooms, threatening to disrupt education as we knew it. That magical new tool? The handheld calculator. Today, generative artificial intelligence (AI) poses a new threat—albeit one that is even more portable, inexpensive, and powerful. 
 
Q. Is AI an obstacle or an opportunity? 

A. When OpenAI released ChatGPT, built on GPT-3.5, late last year, some school districts and universities quickly prohibited it. I understand the impulse to bury our heads in the sand, but banning AI is contrary to the mission of education and serves our own fears of obsolescence, not the needs of students. If we want young people to use AI rather than be manipulated by it, we must teach the evolving strengths and limitations of these tools.
 
Q. What challenges does ChatGPT pose?

A. Much like calculators, ChatGPT presents a measurement challenge. We cannot assess students’ writing skills when a chatbot pens their history essay. And AI detection tools can easily lead to false accusations of plagiarism, especially for weaker writers or students whose first language is not English. Thus, the rise of generative AI may require that some writing assessments be administered in class offline.  

ChatGPT also presents an information accuracy challenge. It generates content by mimicking the language patterns of the human-generated texts on which it was trained, composing answers by predicting which words and phrases are likely to come next rather than by drawing on any conceptual understanding or underlying “intelligence.” Thus, AI can write things that sound reasonable but have no grounding in fact or logic. It can also produce biased or discriminatory answers. As such, educators must teach students to treat AI-generated texts as a starting point, never an endpoint.

Finally, ChatGPT presents a skill-devaluation challenge. AI turbocharges writing tasks, but that does not mean we should stop teaching writing. Instead, we must continue teaching it while also cultivating higher-order communication skills, such as creativity, tone, and inclusivity, which lie more firmly in the human domain.

Q. How can students learn from and with ChatGPT?

A. ChatGPT does at least three things faster than people: summarize texts, aggregate information, and adhere to genre conventions. If students harness this power, they can take these skills to new levels of sophistication that AI’s mimicry cannot easily replicate. 

First, when teaching difficult texts, we might ask students to critique summaries produced by ChatGPT. I find, for instance, that ChatGPT cannot parse graphs and tables, a skill with which many students also struggle. Students can identify and reflect upon these limitations, fortifying their own comprehension skills in the process.

We can also teach students to use ChatGPT for a quick gloss of a new topic, just as they might use an article on Wikipedia (which educators also resisted when it launched in 2001). The key is to read with a critical eye to determine what needs to be fact-checked.

Finally, we can use ChatGPT to teach writing conventions in a way that conveys that they are social norms, not immutable laws of the universe. By prompting ChatGPT to write in a particular genre for a particular audience, then tweaking those specifications, we can help students reflect on how the writing changes, from the number of paragraphs to the length of sentences. In this way, students learn not just how to follow genre conventions but how to analyze, critique, and adapt them.