
Sometimes We Resist AI for Good Reasons

Why higher ed needs to listen to the contrarians in setting policies on using tools like ChatGPT in faculty work.

[Illustration by bubaone and The Chronicle; Getty Images: a pair of eyeglasses looking at the letters "AI" made from circuits; the image in the lenses is blurry and unreadable.]

It’s that time of the academic year again. No, not the beginning of classes, but the moment when campus after campus tries, for the third or fourth time at this point, to create clear AI policies for students and faculty members.

Since the explosive arrival of ChatGPT

About the Author
Kevin Gannon is a professor of history at Queens University of Charlotte, and director of its Center for the Advancement of Faculty Excellence. Find him on Bluesky: @TheTattooedProf.bsky.social.