Magazine Feature

How Do You Do AI?

Notes from the ground on tackling AI in the classroom and beyond

By Luna Shyr

Fall 2025

On college and university campuses near and far, a vibrant dialogue is taking place: how to approach the complex issue of artificial intelligence (AI) in the classroom. At faculty retreats, online workshops, national and international forums, and in informal settings, conversations on the topic abound as faculty, administrators, and students alike try to find a foothold amid tectonic shifts in technology and its impact on higher education. Ask about AI, and words like anxiety and exhaustion crop up. Sentiments run from excitement about applying AI in areas such as English composition, foreign-language instruction, and scientific research to despondence about distinguishing AI-generated from genuine student work. Amid the debates and discussions, you’ll hear widespread agreement that the increasingly pervasive technology comes with an enormous tangle of promises and pitfalls.   

“This technology is not a fixed reference, because it’s moving all the time, and it’s changing extremely fast,” says Frederick Eberhardt, professor of philosophy and co-director of the Ronald and Maxine Linde Center for Science, Society, and Policy at the California Institute of Technology. “It’s an extremely complex topic, and a matter of ongoing debate among the faculty here at Caltech is: How much AI do you permit in the classroom? How do you permit it, and what does it mean for the evaluation of assignments, whether it’s programming [computer] code or writing a philosophy essay?”

A 2024 survey from the Digital Education Council (DEC) found that globally, 86 percent of students regularly use AI in their studies, with ChatGPT being the most commonly used AI tool. Students, however, expressed concerns about privacy and data security when using AI, as well as the “trustworthiness” of AI-generated content. In a separate DEC survey, 83 percent of international faculty cited the ability of students to critically evaluate AI-generated responses as a top concern.

Indeed, the AI debate goes beyond how students should use the technology in coursework and how it affects learning and pedagogical approaches. The discussion is also about how faculty and institutions can equip students with the necessary skills for an AI world while tackling a rise in cheating and ensuring that students can still develop crucial skills like critical thinking and information literacy.

“We have to get our students to be absolutely literate in what this technology can do,” Eberhardt says. “Responsible use and responsible development of AI are important as well. As a consumer or user, it’s a matter of developing a backbone to stand up to responses you get from AI and say, ‘This doesn’t look quite right,’ or ‘I’m going to double-check this.’ ” 

In this pioneering era, higher education stands at a unique juncture in terms of influencing the way generations of students learn about AI and how it is used, not just in their studies but in the world beyond campus. Moreover, liberal education learning outcomes such as critical thinking, communication, personal and social responsibility, and applied learning may prove ever more essential as AI continues to develop and permeate all levels of human society.

In the United States, 93 percent of higher education leaders expect significant or some changes to their institution’s teaching model over the next five years due to AI, according to a recent survey from the American Association of Colleges and Universities (AAC&U) and Elon University. At the same time, more than half (56 percent) believe their institutions aren’t ready to prepare students for AI-driven jobs.

“It’s a grand challenge,” says C. Edward Watson, AAC&U’s vice president for digital innovation and co-author of Teaching with AI: A Practical Guide to a New Era of Human Learning. “We have to understand how these tools work and function within the context of our classes before we can even think about how to do things differently in terms of academic integrity, assignment design, or our broader pedagogy.”

At a faculty retreat in 2023, Smith College provided resources about AI and had its faculty members review their syllabi and assignments with the understanding that students are likely using AI tools, says Adrie Rose, a professor who teaches a poetry publishing course at the women’s liberal arts college in Massachusetts. Rose was so struck by the discussions at the retreat that she now devotes an entire lesson to the topic of AI.

“We were encouraged to create an AI policy so students were clear,” says Rose, who doesn’t permit students to use AI in her course but advises them to delve deeper into the topic. “It really felt so important to me. We have a lot to do in class and not a lot of extra time, but I do take a day to talk explicitly about AI.” 

Rose and her students discuss articles about the ethical, economic, and environmental impacts of AI use, as well as copyright issues related to generative AI tools—such as ChatGPT, Gemini, and Claude—which “scrape” existing texts and images from the internet or other training datasets as a basis from which to produce new images or content. The practice has spurred legal action by book authors, visual artists, movie studios, and others against AI companies. AI-generated work was also a lightning rod in the 2023 Hollywood writers’ strike.

Rose’s spring class produces a cover design for a poetry chapbook published annually by Smith’s Nine Syllables Press, so understanding how AI works—and how copyrighted material can end up in creative work, whether intentionally or not—is essential. “We need the cover art to be free and clear of copyright in order to publish it,” says Rose, the founder and editor of Nine Syllables. She also prohibits AI use in her classes because “the whole point of the course is for students to learn how to design something, not to have a program design something for them.”


Providing context for AI rules, communicating AI policies clearly and regularly, and framing delicate subjects such as cheating in a positive way are some of the strategies that Watson recommends for tackling AI use in the classroom. “It creates a better climate and culture if you have a conversation around academic integrity that’s more affirming than the ‘I’m out to get you’ kind of thing,” says Watson, who also co-leads an online workshop series geared toward helping faculty navigate the shifting AI terrain.

Academic integrity remains a top concern and flash point around the use of generative AI, even if faculty and administrators don’t always agree on what constitutes cheating. A student’s use of AI to outline a writing assignment, for instance, might be legitimate in one person’s book but regarded as cheating in another’s. Handling cases of suspected cheating is one of the trickiest issues for faculty—students also report anxiety about being falsely accused of or confronted about AI use. Watson notes that, at this stage, the consequences of false positives from AI detection tools can be serious enough to outweigh the tools’ potential benefits.

Overreliance on generative AI tools is another top concern among higher education leaders: 92 percent of college and university leaders in the AAC&U/Elon survey consider it a negative impact of generative AI. A majority say it’s necessary to discuss ethical issues related to generative AI in the classroom, such as information bias, AI-generated inaccuracies known as hallucinations, deliberate misinformation and disinformation, and privacy issues related to personal data. At the same time, a majority of leaders anticipate positive impacts such as improved research skills, enhanced and customized learning, and even clearer, more persuasive writing. 

“It still feels like very mysterious territory, but I am really thankful when my professors bring it up, and I’ve had numerous lively class discussions about [AI] ethics,” says Sophia Jerome, a Smith College junior majoring in English language and literature and French studies.

Jerome recalls one intensive language course that required students to converse at length in French with ChatGPT about French movies and literature—even trying out a different generative AI tool to chat with historical figures. “This was honestly really useful for learning specific and relevant vocabulary words,” says Jerome, who asked ChatGPT for grammar feedback and sentence corrections. She also found that it misspelled words or didn’t follow instructions at times. “But overall,” she says, “it definitely helped me improve my written communication skills in a second language.” 

The “backbone” that Caltech’s Eberhardt describes for handling AI-generated responses is a common point made by faculty and some students: the belief that users need to apply critical thinking to evaluate and discern the quality and accuracy of the information or feedback that AI returns. The cognitive work and actual practice that an individual does when, say, writing an essay or working out possible solutions to a scientific problem also yield benefits that risk being lost when AI is used too much or unskillfully. These benefits may not be obvious, but they are akin to the excellence an athlete or musician achieves through regular practice. Liberal education learning outcomes such as critical thinking, inquiry and analysis, and a sense of social responsibility remain vital to the process of engaging with AI tools.

“I always tell my students: You’re the captain of this ship—you’re the one responsible,” says Jeanne Beatrix Law, an English professor and former director of composition at Kennesaw State University in Georgia, who has made AI a core part of all her courses, which range from first year to graduate level. “Responsible use is human-driven use with AI cooperation. AI can’t replace—it can only amplify.”

In her lessons, Law emphasizes the generation of “responsible output” when crafting questions and requests for AI tools, a practice known as prompt engineering. She has students rigorously check AI outputs for accuracy, relevance, usefulness, and harmlessness as part of a back-and-forth process of evaluating AI responses and refining their own work. For her first-year writing courses, Law brings in AI from the outset with an ice-breaker exercise. After students introduce themselves, she has them put their descriptions into Copilot—Microsoft’s generative AI tool, provided free at the university—and prompt it to remix for style and audience, such as for a social media post, a script for a video post, or a meme. The students then share the results with the class.

“It’s fun and low stakes. I’ve found that allowing first-year writers to tinker with no grading, no stakes, helps them engage,” says Law, who coordinates the university’s graduate certificate in AI and writing technologies. “It’s also an introductory example of how students can put their own work into generative AI and get an output that’s revised but still their own work. And they’re engaging in prompt engineering without knowing they are.” 
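For readers who want to picture the mechanics of that back-and-forth, here is a minimal, hypothetical sketch in Python of a generate-and-review loop along the lines Law describes. The call_llm helper, the function names, and the rubric wording are illustrative assumptions, not course materials or any real tool’s API.

    # Hypothetical sketch of a prompt-engineering exercise: remix a student's
    # own introduction for a new audience, then have the student (not the
    # model) review the output against a simple rubric.
    # call_llm is a placeholder for whatever generative AI tool a class uses
    # (Copilot, ChatGPT, etc.); it is not a real API call.

    RUBRIC = ["accuracy", "relevance", "usefulness", "harmlessness"]

    def call_llm(prompt: str) -> str:
        """Placeholder for a request to a generative AI tool."""
        raise NotImplementedError("Connect this to your institution's AI tool.")

    def remix_introduction(self_description: str, audience: str) -> str:
        """Ask the model to restyle the student's own words for a new audience."""
        prompt = (
            f"Rewrite the following self-introduction as {audience}. "
            "Keep every factual detail; change only the tone and format.\n\n"
            f"{self_description}"
        )
        return call_llm(prompt)

    def review_output(output: str) -> dict:
        """The human reviewer scores each rubric item; the model does not grade itself."""
        print(output)
        return {item: input(f"Does the output hold up on {item}? (yes/no/notes): ") for item in RUBRIC}

The point of the sketch is the division of labor: the model rewrites, and the student supplies the judgment.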

Focusing on data and information literacy is one way that Juan Burwell, an associate professor of astronomy and chair of the Department of Physical Sciences at Holyoke Community College in Massachusetts, addresses blind spots that can arise from AI use. By requiring students to provide sources for their answers to problem sets, he aims to impart the importance of understanding the origins of information and knowing how to discern what’s reliable and what’s not. “Students are often going to use random Internet sources,” says Burwell, who discusses data literacy on the first day of class. “They can cut and paste but [then] they’re not really thinking about the information and where it’s coming from.”

Cultivating a scientific mindset and the ability to understand concepts that underlie facts are other key course objectives, which factor into Burwell’s current policy not to allow AI use in his introductory astronomy course. “Many of my students aren’t going to go on in astronomy, but they do need to understand that processes in nature and the world have underlying forces that affect them,” he notes. “I’m encouraging a scientific reasoning that can be applied to many different things.” 

Problems still arise: Students use AI against class policy, a form of cheating Burwell says is growing more difficult to spot as AI technology advances, despite institutional efforts to implement AI detection tools. In one instance, Burwell found a reference citing the AI tool itself. The student, a nonnative English speaker, had turned to ChatGPT out of concern that their command of English wasn’t strong enough to describe a complex idea in the field.

“That gave me empathy for the student,” Burwell says, “but I had to spend a lot of time explaining the reasons I wanted them to do it this way and that I would even prefer their own bad grammar if it reflected their own thoughts.” 

Handwritten quizzes, he adds, generally work well to demonstrate a student’s grasp of class material. Burwell also periodically revisits his grading and assessment methods to consider areas where AI might be used to cheat. “I don’t feel that I’ve necessarily landed on the answers to the right approach, but it’s an iterative, ongoing process,” he says. “I’m still grappling, and I think we will be for a long time.”

The societal impacts of understanding and using AI skillfully go well beyond the classroom. At the American University in Bulgaria (AUBG), students have worked to track political disinformation using both a manual scorecard and AI tools that monitor online and social media in Bulgaria and nearby Albania. AUBG students, along with peers from Sofia University, have also worked as analysts at AUBG’s Center for Information, Democracy, and Citizenship (CIDC), creating reports on the insights they gleaned from the collected data.

Disinformation is a fundamental problem facing Balkan democracies, says Jacob Udo-Udo Jacob, who was the center’s executive director until August 2025. Founded in 2022 to study online and social media narratives in the Balkan region, the CIDC later partnered with media analytics firm Sensika to incorporate its AI-driven tools into the media-monitoring process. Research at the CIDC-Sensika Disinformation Observatory has included identifying “malign narratives,” which Jacob defines as “narratives that seek to erode trust in democratic institutions, sow confusion, or create support for anti-democratic forces.”

One such media narrative common in Bulgaria revolves around “Communism nostalgia,” says Jacob, which harks back to times before the country—once under Soviet influence—joined the European Union in 2007. The themes, Jacob says, can be as basic as stories about tomatoes tasting “so much better back in the day, but since we joined the EU, they’re not as good.” He notes that flooding information channels with content can be a way of influencing large language models (LLMs)—a type of generative AI that trains primarily on text datasets to produce responses that simulate natural language—as well as targeting individual readers. A recent CIDC report, for instance, found that the Kremlin-linked Pravda network published some 650,000 articles over five months in a range of countries. “The splintering of reality is the fundamental problem of disinformation,” Jacob says. That, in turn, can lead to distrust in information sources once considered reliable—such as researchers, scholars, and independent journalism—and in public institutions. 

Finding ways to improve algorithmic fairness of AI and to establish stronger technical foundations for “responsible AI” is a core focus of Sulekha Kishore’s doctoral research at the Massachusetts Institute of Technology. As a computer scientist and political scientist who graduated from Caltech in 2025, Kishore says her interest in the societal impacts of AI helps ground her technical work. “It helps you ask the question of how people are actually going to be interacting and impacted by this very vague, nebulous in-the-cloud, on-a-computer-screen thing that you’re making,” she says.

Using community notes on X (a method of providing fuller context on a social media post) as an example, she explains that improved “AI safety” and “AI fairness” would mean AI tools that can aggregate massive amounts of data in a way that’s more reflective of the scope of a conversation, rather than prioritizing the notes or opinions that get the most engagement or likes, which can skew AI responses. Another potential application, highlighted in a research paper published in Science, would be a digital form of democratic deliberation, whereby an AI mediator helps citizen groups find common ground on divisive issues. By iteratively incorporating individuals’ views, the AI mediator would help formulate a series of opinion statements, with the goal of moving the group toward greater consensus on a collective statement.

“This is very much the tip of the iceberg in terms of, What does it mean to present balanced information?” Kishore says. “Maybe lots of people are talking about something but [that something] is factually false—those kinds of questions are still very much where the open research questions are. How do you do that from a technical perspective in terms of providing balanced information?”
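To make the idea of AI-mediated deliberation a little more concrete, here is a deliberately simplified, hypothetical sketch in Python of such a loop. It is not the method from the published research or any deployed system; call_llm, draft_group_statement, and the number of rounds are all illustrative assumptions.

    # Highly simplified, hypothetical sketch of an AI-mediated deliberation
    # loop: draft a common-ground statement, gather participants' critiques,
    # and fold them back into the next draft.

    def call_llm(prompt: str) -> str:
        """Placeholder for a request to a generative AI tool."""
        raise NotImplementedError("Connect this to a real model to experiment.")

    def draft_group_statement(views: list) -> str:
        """Ask the model for one statement reflecting the group's common ground."""
        prompt = (
            "Write one short statement that fairly reflects the common ground "
            "in these views, noting any remaining disagreements:\n\n" + "\n".join(views)
        )
        return call_llm(prompt)

    def deliberate(opinions: list, rounds: int = 3) -> str:
        """Iteratively incorporate participants' feedback into a collective statement."""
        statement = draft_group_statement(opinions)
        for _ in range(rounds):
            critiques = [input(f"Feedback on the draft ({statement}): ") for _ in opinions]
            statement = draft_group_statement(opinions + critiques)
        return statement

Even in this toy form, the open questions Kishore raises are visible: how the drafting prompt weighs majority views, engagement, and factual accuracy determines whether the result is genuinely balanced.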

Probing the wider social implications of AI technology has drawn keen interest from undergraduate students at Caltech, Eberhardt says, both in his Ethics and AI course and at the Linde Center for Science, Society, and Policy. Launched in 2023, the center aims to build connections between Caltech’s cutting-edge research and policymakers through talks, panel discussions, and workshops that explore the broader impact of scientific research on society. With AI, Eberhardt says, “we are training the people who are building the stuff, so there’s an important role to not only develop the technical skills but to make these developers aware of the overall implications of the technology.”

Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences and co-leader of Caltech’s AI4Science (artificial intelligence for science) initiative, sees promise in AI technology as a way to accelerate scientific discoveries, from the atomic to planetary scales. She advocates for AI collaboration across the sciences and humanities, an interdisciplinary approach that Caltech encourages on its tight-knit Pasadena, California, campus. And though “consumer-facing” AI applications often draw the most attention, Anandkumar says, AI has the potential to go well beyond them.

“It’s what I call extrapolation—to not just be similar to the regime of training data but to ask: Can it go truly further ahead to help us [make] leaps and bounds from where we are today?” says Anandkumar, an expert in machine learning whose research concentrates on scientific applications such as weather forecasting and medical research. One of the first interdisciplinary students she advised, a 2023 doctoral graduate in chemistry, for instance, conducted award-winning research that used machine learning methods to create dynamic models of protein folding and molecular interactions, a biochemical process that carries potential implications for drug discovery.

With the power of AI technology comes the great responsibility for higher education to lead and shape new generations of students who will become its developers, purveyors, users, and policymakers. But while AI has started taking on human tasks and simulating human thought, one has to ask whether there are distinctly human capabilities that AI can’t replace, replicate, or teach. 

Watson points to human relationships and mentorship—the richness of learning from someone with longtime experience in a chosen field or career and the skill of engaging with people in real time. “A lot of the mechanical things can indeed be outsourced,” he says, “but when it comes to human relationships as an element of what a liberal education helps you develop, I don’t see that being replaced by AI.”

Then there are the more subtle qualities, such as those Watson describes as “authenticity” or what one recent graduate calls “this humanness that I profoundly believe in.” 

“AI has become mainstream, not only using it as a friend but also in lieu of one’s own critical thinking,” says Grace Ziegel, a class of 2025 English major at the University of Massachusetts, Amherst. “That’s probably the number one way I’ve seen it used and is the thing that maybe keeps me away from it.” Yet Ziegel and many others emphasize that their views and engagement with AI may shift as the technology and world around it change.

Burwell, the astronomy professor, points to historic examples of technology such as the calculator and spell-check that have altered teaching paradigms and more. While people may now struggle to calculate restaurant tips or remember correct spellings on their own, he says, offloading such tasks can also enable them to turn to other, perhaps more uniquely human, challenges. 

“In a perfect world, what I would be achieving in my course is a student making leaps of understanding from some concept and then applying it in a different area and being able to make that leap,” Burwell reflects. “Making the leap to new ideas is something I assume AI struggles with—I could be wrong. In fact, in the future maybe that is something AI will be capable of.”  

Illustrations by Mr.Nelson

Brainstorming Together

Cultivating mutual transparency and trust is one of the policy recommendations that emerged from AI Aware Universities, spearheaded by the American University in Bulgaria (AUBG). The project, which ran from April 2024 through April 2025, brought together students, faculty, and staff for collaborative discussions in small groups on the appropriate use of AI in education. In addition to AUBG, the participant institutions were Bard College Berlin, European Humanities University, LCC International University, Central European University, and Bratislava International School of Liberal Arts. The policy recommendations include:

  • Skill development and assessment: Prohibit AI use when a class assignment is intended to develop the skills that AI would replace.
  • Mutual trust and transparency: Students and faculty should disclose AI use in academic work. 
  • Clear expectations: Provide explicit guidelines on acceptable and unacceptable AI use for each course and assignment.
  • Alignment: Align AI use in classwork with student learning outcomes. 
  • Disclosure requirements: Ensure that work presented as original was created by the person claiming authorship.
  • Academic integrity and enforcement: Discuss suspected prohibited AI use before imposing any grade penalty.

Do’s and Don’ts

The AI Aware Universities project also developed recommendations for “ethical and pedagogically appropriate AI use” for students and faculty.

PERMITTED USES
For students

  • Brainstorming ideas and concepts
  • Assisting with vocabulary and word choice (like a thesaurus)
  • Expanding class notes with additional examples or explanations
  • Creating practice questions for exam preparation
  • Using AI as a study tutor with existing notes (for example, anticipating exam questions while reviewing)
  • Getting started with research

For faculty

  • Using AI as a tool to teach editing, writing, and correction skills
  • Demonstrating AI capabilities and limitations as part of course content
  • Creating case studies or other content with a clear purpose (AI-generated work should be cited as such)
  • Creating lecture outlines and other teaching materials

PROHIBITED USES
For students

  • Generating end-product assignments or portions thereof
  • Writing term papers, exams, or reports (including group work)
  • Completing coding or math assignments
  • Translating text in language courses
  • Recording and transcribing lectures without explicit permission
  • Direct cutting and pasting of AI-generated content
  • Presenting AI-generated answers without fact-checking

For faculty

  • Grading work or providing automated feedback without human review
  • Failing a student based solely on automated AI-detection programs

LEARN MORE
The 2025 Student Guide to Artificial Intelligence
Leading through Disruption: Higher Education Executives Assess AI’s Impacts on Teaching and Learning

Author

  • Luna Shyr

    Luna Shyr has written and edited for National Geographic, the Wall Street Journal, Brepols, and the Juilliard School. She is a contributing editor for Liberal Education.
