Maggie Fernandes, University of Arkansas 1,2
Jennifer Sano-Franchini, West Virginia University
Megan McIntyre, University of Arkansas
In Refusing GenAI in Writing Studies: A Quickstart Guide, we wrote that we understand “refusal” as “the range of ways that individuals and/or groups consciously and intentionally choose to refuse Generative AI (GenAI)3 use, when and where we are able to do so.” The purpose of this post is to elaborate on and clarify what we mean when we say “refusal.” We do so because “refusal” and “refusers” are often mischaracterized as “doomers,” anti-technology, unrealistic, uninformed, impractical, ignorant, and/or pessimistic (Guglielmo; Kantrowitz; Marantz; Patel; Waite). Moreover, these misrepresentations are too often used to dismiss valid critiques of GenAI technologies for the benefit of Big Tech corporations.
Below, we outline four points that illustrate our understanding of—and approach to—GenAI refusal, as grounded in a writing studies perspective:
- Refusal is focused on the present implications of GenAI for perpetuating systems of oppression, as opposed to focusing on speculative risks or potential benefits of GenAI.
- Refusal includes multidimensional, plural, and dynamic responses to the changing realities of GenAI.
- Refusal is a pragmatic response that attends to specific contexts, positionalities, and circumstances.
- Refusal is hope in the face of uncertainty, with the understanding that what we do in writing studies—and beyond—matters.
Through these points, we aim to articulate a position that accounts for the present realities of GenAI; challenges reductive framings of GenAI refusal; recognizes that the capacity for refusal varies across institutions, contexts, and positionalities; and encourages others to consider the potentials and possibilities of understanding GenAI refusal as an act of hope.
Point 1. Refusal is focused on the present implications of GenAI for perpetuating systems of oppression, as opposed to focusing on speculative risks or potential benefits of GenAI.
Our approach to GenAI refusal focuses on the present conditions, operations, and harms of GenAI. Although our concern about the present has long-term implications (e.g., the worsening climate crisis), GenAI refusal is not a “doomer” orientation, which is to say we are not making uninformed or irrational fear-based decisions, and we are not focused exclusively or even primarily on how this technology will shape the future. Focusing on the present disrupts the doom-hype binary that pulls focus to speculative existential threats and the equally speculative benefits of GenAI. By shifting our attention to the present, we can examine the circumstances around GenAI as they are right now, and choose not to use GenAI because of how it is deepening inequality across intersectional lines (Bender, Gebru, McMillan-Major, and Shmitchell; Gebru; Owusu-Ansah; Tacheva and Ramasubramanian), accelerating climate crisis (Hogan and LePage-Richer), and undermining the value of human labor and creativity (MLA-CCCC Joint Task Force; Nyabola; Perrigo; Sano-Franchini et al.).
Often, these speculative positions rely on a doom-hype binary, where the two possible responses to GenAI are that one is either against—or for—GenAI use. Mapped onto those two possibilities are the related notions that one is either unrealistic or realistic, either uninformed or informed, either tech-averse or open to change, either unwilling to accept the inevitable or able to see the powerful possibilities of GenAI. This doom-hype binary mirrors longstanding tropes that frequently circulate with the rise of new writing technologies, and that have been described and critiqued as techno-utopianism/techno-dystopianism, and techno-optimism/techno-pessimism (Hawisher and Selfe; Feenberg, “Ten”; Feenberg, Transforming).
The doom-hype binary benefits Big Tech, as both positions overstate the capacities of the technology at hand and serve to diminish the sense that we can impact the future. As Timnit Gebru cautioned, “The same people cycle between selling AGI utopia and doom” (Marantz; see also Gebru and Torres). In other words, AI doom, which focuses on distant existential threats posed by artificial intelligence, and AI hype, which promises yet unrealized AI possibilities, both benefit Big Tech corporations and are used in Big Tech marketing (Stokel-Walker).
AI hype and AI doom both tacitly advance unsubstantiated and uncritical marketing claims that GenAI products are, or soon will be, powerful enough to significantly reshape the fate of humanity. In this way, both doom and hype perspectives uncritically buy into GenAI marketing by taking for granted the power of GenAI and by assuming that widespread adoption of GenAI is inevitable and certain. Such views are reflected in calls for curricular overhauls in higher education (Watkins; Virtu), and in what Ben Williamson referred to as “critical hype”—critiques of GenAI that “implicitly accept what the hype says AI can do, and inadvertently boost the credibility of those promoting it.”
Importantly, both doom and hype discourses distract from the present realities and documented harms of GenAI products. Meredith Whittaker, AI critic and president of the Signal Foundation, argued that focusing on the hypothetical existential threats of hyperintelligent AI shifts attention away from the immediate threats and harms facing the most marginalized communities:
My concern with some of the arguments that are so-called existential, the most existential, is that they are implicitly arguing that we need to wait until the people who are most privileged now, who are not threatened currently, are in fact threatened before we consider a risk big enough to care about. Right now, low-wage workers, people who are historically marginalized, Black people, women, disabled people, people in countries that are on the cusp of climate catastrophe—many, many folks are at risk. (O’Leary)
Dismissing refusal as “doomer” logic risks minimizing the severity of GenAI’s harms to students, instructors, writing programs, writers, language diversity, and the environment. We opt for refusal as a way to acknowledge that current threats and harms are significant enough that they should inform our decisions right now.
Point 2. Refusal includes multidimensional, plural, and dynamic responses to the changing realities of GenAI.
We understand that just as decisions to adopt are multiple and varied, decisions to refuse and otherwise resist GenAI products are complex and multiple; they vary in terms of method and approach, level of engagement, level of enthusiasm or resignation with which a given decision has been made, as well as purpose and rationale. Refusal stances may include—but are not limited to—one or more of the following positions:
- Refusal to use these technologies, avoiding integrations as much as possible.
- Refusal to require students to use these technologies for course assignments.
- Refusal to prioritize practical or technical instruction of these technologies, e.g., prompt engineering, in order to maintain course focus on existing learning outcomes.
- Critical engagement with expert perspectives on these technologies that take up the issues and concerns that come with GenAI without using it or requiring students to use it.
- Explicit prohibition of GenAI use for course assignments without corresponding punitive policies (see Nathaniel Rivers’ Generative AI Statement).
- Refusal to accept the inevitability rhetoric about GenAI pushed by Big Tech companies and in academic discourse, particularly in and around educational technologies.
- Refusal of writing program administrators to require instructors to teach with or otherwise interact with these technologies.
Each of these refusal stances reflects an orientation to GenAI in which critical importance is given to recognizing the documented harms caused by these technologies and to refusing to tacitly promote or approve of these technologies by enabling, encouraging, or requiring their use in our classrooms. In other words, we need to be careful not to water down refusal to include practices that actually advance GenAI development and profits while undercutting our disciplinary goals and values.
These refusal stances also help to create a culture around GenAI in which students, teacher-scholars, and program administrators consider “opting in” rather than one where individuals are forced to justify “opting out.” As a result, students, teacher-scholars, and program administrators are positioned to think critically about every potential instance of adoption as well as the changing conditions and contexts surrounding GenAI development, rather than treating adoption as a given while trying to find moments for refusal.
Point 3. Refusal is a pragmatic response that attends to specific contexts, positionalities, and circumstances.
As reflected in the above examples of refusal approaches in Point 2, for us, refusal is not static or rigid in its demands but is pragmatic at the most basic level. This is also reflected in Premise 10 of our Refusal Quickstart Guide, where we state, “it is a rational and principled choice to not use GenAI products unless and until we have determined that their benefits outweigh their costs.” As the MLA-CCCC Joint Task Force on AI and Writing pointed out in Working Paper 3, many instructors may be limited in their institutional capacities to refuse (11).
For this reason, how we refuse might differ based on the agency and academic freedom afforded to us by our positions and institutions. We are hearing from our colleagues that some are facing a great deal of pressure to adopt GenAI at their universities while managing already intense workloads, and we believe it is important that there is as much space for pushing back on GenAI as possible. For example, as universities revise academic integrity policies in response to GenAI, it might be more pragmatic to prohibit GenAI use in individual courses without attempting to police that use, since students are receiving mixed messages about the technology.
Refusing GenAI is also pragmatic in terms of its attention to labor demands. Although we feel that it is important for instructors to become informed about GenAI, we share concerns outlined in “Working Paper: Overview of the Issues, Statement of Principles, and Recommendations” of the MLA-CCCC Joint Task Force on Writing and AI that the labor demands to learn and teach prompt engineering will fall to the most precarious and overworked teacher-scholars.
Refusal offers another way for these teacher-scholars to engage in the conversation around GenAI without having to overhaul their teaching to accommodate the instruction of prompt engineering. Likewise, in the classroom, our position of refusal is aligned with critical digital literacies4 that examine how all corporatized technologies are embedded in—and contribute to—systems of oppression and, rather than imposing a blanket ban on technologies, equip students with the ability to think critically about how and when to engage with technologies like ChatGPT.
To characterize refusal as unpragmatic implicitly suggests that we do not have any meaningful choices about whether and when we use these technologies, and that is dangerous.
Point 4. Refusal is hope in the face of uncertainty, with the understanding that what we do in writing studies—and beyond—matters.
Ultimately, refusal is rooted in the understanding that our actions can impact the future and that we have a responsibility to work for the future we want to see. In Refusing GenAI in Writing Studies: A Quickstart Guide, we articulated refusal as an informed response to GenAI and its implications for linguistic homogenization, bias, labor, and environmental crisis. Refusal as hope suggests that individual and collective efforts can make a difference in these matters.
This is not to say that refusal is the only way to fight for a better future, but it is one way to assert that we consider the many shortcomings of GenAI as it currently exists unacceptable for us as writers, instructors, and program administrators, and to deny or at least limit the ability of Big Tech to freely monetize and capitalize on our time, labor, and intellectual, emotional, and creative work. By communicating this stance to our students, programs, and universities, we hope to influence the future of this technology in higher education, and to remind people that the future is neither as certain nor as fixed as it may seem.
As hope, refusal insists that the work we do as writing studies teacher-scholars and program administrators matters and remains valuable to students. The hope is not that we can implement refusal in such a way that every student makes the decision to refuse GenAI, too. Rather, refusing to adopt GenAI in our classes and programs allows us to model a critical orientation to GenAI and Big Tech, and an enactment of agency in the face of neoliberal techno-capitalism, that we believe will benefit all students.
Hope is not all about positivity and good feelings. We turn to refusal in part because we want to acknowledge not just the reality of GenAI’s harms but also how uncertain, anxious, and sometimes even angry we feel about the world prompted by GenAI. These bad feelings are reasonable responses to GenAI and its current and possible impacts on higher education, Writing Studies as a discipline, workers and writers all over the world, and our planet. Although panic in the face of technological change is often unproductive and can cause more harm, we worry that efforts to mitigate panic have made it difficult to share and act on other negative feelings about GenAI without risking dismissal.
Rather than reject all bad feelings, we need to find ways to honor our honest, human reactions and learn from these negative feelings so that we may respond to them strategically and productively. To remain honest about these feelings is not to give in to doom or despair or to turn away from reality. Rather, remaining honest about the negative feelings engendered by GenAI and its harms allows us to have productive conversations grounded in both those legitimate concerns and our values and goals as teachers and as human beings.
Refusal as hope creates moments for reflection about possible futures in Writing Studies and beyond. For example, GenAI is yet another moment in our disciplinary history when we can recommit ourselves to championing linguistic diversity, not just via GenAI refusal but through our course designs and assessments. Similarly, the overconsumption of resources linked to GenAI helps us to recognize the digital damage associated with other institutional technologies, including Zoom (Edwards). At a moment when it feels that there is so much social change to fight for and even more harms to counter, refusal as hope reminds us that small actions can be the beginning of big changes.
References
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021.
Darby, Flower. “Why You Should Rethink Your Resistance to ChatGPT.” The Chronicle of Higher Education. 13 Nov 2023.
Edwards, Dustin W. “Digital Rhetoric on a Damaged Planet: Storying Digital Damage as Inventive Response to the Anthropocene.” Rhetoric Review 39.1 (2020): 59–72.
Feenberg, Andrew. “Ten Paradoxes of Technology.” Techné: Research in Philosophy and Technology 14.1 (2010): 3–15.
Feenberg, Andrew. Transforming Technology: A Critical Theory Revisited. Oxford University Press, 2002.
Gebru, Timnit. “Race and Gender.” Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das. Oxford UP, 2020, pp. 252–269.
Gebru, Timnit, and Émile P. Torres. “The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence.” First Monday (2024).
Guglielmo, Connie. “Silicon Valley Fights ‘Doomer’ Bill, Taylor Swift Actually Didn’t Endorse Trump.” CNET. 26 Aug 2024.
Hawisher, Gail E., and Cynthia L. Selfe. “The Rhetoric of Technology and the Electronic Writing Class.” College Composition & Communication 42.1 (1991): 55–65.
Hogan, Mél and Théo LePage-Richer. “Extractive AI.” Climate Justice and Technology Essay Series. Center for Media, Technology, and Democracy. 2023.
Kantrowitz, Alex. “The Claims That ‘A.I. Will Kill Us All’ Are Sounding Awfully Convenient.” Slate. 14 Nov 2023.
Marantz, Andrew. “Among the A.I. Doomsayers.” The New Yorker. 11 Mar 2024.
MLA-CCCC Joint Task Force on Writing and AI. “Working Paper: Overview of the Issues, Statement of Principles, and Recommendations.” Jul 2023. Modern Language Association and Conference on College Composition and Communication.
MLA-CCCC Joint Task Force on Writing and AI and Critical AI Literacy for Reading, Writing, and Languages Workshop. “Working Paper 3: Building a Culture for Generative AI Literacy in College Language, Literature, and Writing.” Oct 2024. Modern Language Association and Conference on College Composition and Communication.
Nyabola, Nanjala. “ChatGPT and the Sweatshops Powering the Digital Age.” Al Jazeera. 23 Jan 2023.
O’Leary, Lizzie. “What the President of Signal Wishes You Knew About A.I. Panic.” Slate. 16 May 2023.
Owusu-Ansah, Alfred L. “Defining Moments, Definitive Programs, and the Continued Erasure of Missing People.” Composition Studies 51.1 (2023): 143–148.
Patel, Nilay. “Barack Obama on AI, free speech, and the future of the internet.” The Verge. 7 Nov 2023.
Perrigo, Billy. “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” Time. 18 Jan 2023.
Piper, Kelsey. “Four Different Ways of Understanding AI—and Its Risks.” Vox. 14 Jun 2023.
Rivers, Nathaniel. “Generative AI Statement.” 2024.
Sano-Franchini, Jennifer, Megan McIntyre, and Maggie Fernandes. “Refusing GenAI in Writing Studies: A Quickstart Guide.” Refusing Generative AI in Writing Studies. Nov 2024.
Selfe, Cynthia L. “Technology and Literacy: A Story About the Perils of Not Paying Attention.” College Composition & Communication 50.3 (1999): 411–436.
Stokel-Walker, Chris. “OpenAI’s Warnings about Risky AI Are Mostly Just Marketing.” New Scientist. 13 Sep 2024.
Tacheva, Jasmina, and Srividya Ramasubramanian. “AI Empire: Unraveling the Interlocking Systems of Oppression in Generative AI’s Global Order.” Big Data & Society 10.2 (2023): 1–13.
Waite, Thom. “Doomer vs Accelerationist: the two tribes fighting for the future of AI.” Dazed. 24 Nov 2023.
Virtu, Angela. “Wake Up, Academia: The AI Revolution Waits for No One.” Inside Higher Ed. 6 Sep 2024.
Watkins, Marc. “Make AI Part of the Assignment.” The Chronicle of Higher Education. 2 Oct 2024.
Endnotes
1. Recommended citation: Fernandes, Maggie, Jennifer Sano-Franchini, and Megan McIntyre. “What is GenAI Refusal?” Refusing Generative AI in Writing Studies. Dec. 2024. https://refusinggenai.wordpress.com/what-is-refusal/
2. Thank you to Tara Salvati for copyediting this post.
3. We capitalize “GenAI” to indicate that we are referring specifically to text, image, and other Generative AI technologies produced by Big Tech corporations and backed by venture capital, and whose primary purpose is to generate profit for shareholders, e.g., OpenAI’s ChatGPT, Google Gemini, Microsoft Copilot, and Adobe Firefly. As we stated in Premise 1 of our Refusal Quickstart Guide, “we understand the importance of recognizing the differences and similarities between AI, generative AI, and text-generative AI, and how these terms are at times conflated to bolster pro-adoption stances.”
4. We will elaborate on the idea of critical digital literacies in our forthcoming “Learning Objectives” and “Practicing Refusal” pages.