Refusing GenAI in Writing Studies: A Quickstart Guide

Jennifer Sano-Franchini, West Virginia University 1,2
Megan McIntyre, University of Arkansas
Maggie Fernandes, University of Arkansas

This guide positions refusal as a disciplinary and principled response to the emergence of Generative AI (GenAI)3 technologies in writing studies. We created this guide to add to ongoing efforts to think through approaches for responding to GenAI in writing studies, and in higher education more broadly. When we say GenAI “refusal,” we are talking about the range of ways that individuals and/or groups consciously and intentionally choose to refuse GenAI use, when and where they are able to do so. In other words, refusal is not monolithic. It does not imply a head-in-the-sand approach to these emergent and evolving technologies; we believe in the importance of being informed and thoughtful about GenAI and its long- and short-term effects, especially as these technologies have entered into our disciplinary conversations and the classes we teach. Additionally, refusal does not necessarily imply the implementation of prohibitive class policies that ban the use of GenAI among students. In short, we demonstrate that our call for GenAI refusal is not, as it is often framed, an uninformed moral outcry born of technological “aversion,” fear of change, or other so-called “doom and gloom” views of technological advancement; rather, it extends our disciplinary knowledge in rhetoric, writing, and composition studies; digital rhetorics and literacies; computers and writing; and technical communication.

To situate refusal as a disciplinary position, we offer ten premises that ground it as a principled response to GenAI technologies:

  1. Writing studies teacher-scholars understand the relationship between language, power, and persuasion. 
  2. Writing studies teacher-scholars understand the broad purposes and uses of writing in the world. 
  3. Writing studies as a discipline stands against linguistic homogenization, which is accelerated and advanced by GenAI. 
  4. Writing studies as a discipline rejects punitive approaches to plagiarism and plagiarism surveillance. 
  5. Writing studies scholarship has demonstrated that technologies, including GenAI, are never ideologically neutral. 
  6. Writing studies as a discipline has taken up histories of digital writing technologies that provide important context for understanding our current GenAI moment. 
  7. Writing teachers are well-poised to understand the range of labor issues that come with GenAI adoption. 
  8. Writing teachers must be critical of the rhetorical and economic contexts surrounding GenAI, including the various ways that GenAI is promoted and marketed.
  9. Writing studies must consider the environmental impacts of GenAI as well as other digital technologies that rely on massive datasets, such as Zoom and Facebook. 
  10. Refusal can be a principled and pragmatic response to the incursion of GenAI technologies in college writing courses.

Although these premises are often discussed separately, it is essential to consider them together for a full understanding of refusal as a disciplinary and principled response to GenAI. Each of the premises is elaborated upon below.


Premise 1. Writing studies teacher-scholars understand the relationship between language, power, and persuasion.

As a result, we recognize the need to consider the rhetorical and political implications of discursive choices. With this premise in mind, we understand the importance of recognizing the differences and similarities between AI, generative AI, and text-generative AI, and how these terms are at times conflated to bolster pro-adoption stances. The purposes, affordances, histories, and consequences of each of these technologies differ, and we must be aware of how defenses of AI—broadly conceived—are at times used to defend the use of text-generative AI in academic contexts, even when the purposes and outcomes differ.

Likewise, we understand the need to carefully consider the metaphors we use to describe GenAI. These metaphors often create associations between these technologies and human activities and capabilities—artificial intelligence, machine learning, chatbots, neural networks, hallucination, deep learning, write—that are designed to cultivate trust in corporate, exploitative, and extractive technologies. We must understand that the metaphors of “tool” or “collaborator” for GenAI platforms like ChatGPT are likewise limited, as they “[fail] to acknowledge the human contributions to the training corpus as well as the lack of human and social context in its output” (Anderson 8). As Salena Sampson Anderson explained, “ChatGPT is neither precisely a tool nor a collaborator […] Ultimately most metaphors we apply to our understanding of this new technology are limited—both helpful and potentially dangerous—in informing our understanding as we risk both underestimating and overestimating the constructive and destructive potential of this technology as well as the ethical dimensions of its use” (2). We must be critical of the ways that these metaphors and affective associations are used to exaggerate the abilities of these products in ways that strengthen the marketing efforts of Big Tech corporations like OpenAI.

Similarly, we must be careful to recognize how diversity, equity, inclusion, and accessibility discourses are sometimes used to justify the use of GenAI through the co-optation of terms like “advocate,” “advocacy,” “privilege,” “critical,” “access,” and “transparency,” often as a way to reframe it as an inherent good, despite its widely documented, material harms. Refusal involves choosing to use language that most accurately—and transparently—reflects the actual technology, and/or highlighting the discursive limitations of the language we commonly use to describe these products. 


Premise 2. Writing studies teacher-scholars understand the broad purposes and uses of writing in the world.

We know that writing is something that human beings do, not only to “write answers,” as text-generative LLM technologies like ChatGPT are primed to do (Vee), but also to build connections with others, cultivate relationships, learn and engage in inquiry, develop and grow as thinkers, participate in the embodied act of self-expression, experience the “pleasure of wrestling with difficult ideas” (Vee 180), and more (Alexander; Emig; Vee). As Vee writes, “Large language models such as ChatGPT will produce good writing. They will not produce challenging, thoughtful, innovative humans, such as good writing instruction helps to nurture now” (180). We will not be fooled into thinking that LLMs can take the place of human writers and writing teachers, and we must be able to understand—and help students recognize—the limits of new writing technologies that are placed in front of us by the corporate sector. Moreover, we must recognize the harms that will result when writing is primarily treated as a tool to transcribe answers, including its implications for critical thinking, democratic decision-making, and linguistic variation and expression. 


Premise 3. Writing studies as a discipline stands against linguistic homogenization, which is accelerated and advanced by GenAI.

This disciplinary stance is reflected in the 2020 Conference on College Composition and Communication (CCCC) Demand for Black Linguistic Justice, which built on the 1974 landmark CCCC resolution on Students’ Right to Their Own Language. The resolution on Students’ Right to Their Own Language proclaimed, “A nation proud of its diverse heritage and its cultural and racial variety will preserve its heritage of dialects. We affirm strongly that teachers must have the experiences and training that will enable them to respect diversity and uphold the right of students to their own language.” Following Carmen Kynard (“When Robots Come Home to Roost”), we recognize that writing studies has not lived up to the promises and commitments articulated in these statements, and that failure has made us as a discipline more vulnerable to GenAI products and technologies that erase language variation and difference.

Moreover, Alfred L. Owusu-Ansah has shown how ChatGPT advances linguistic homogeneity and white language supremacy by designating Ghanaian English as “non-standard” and by wrongly claiming that it is inappropriate for an academic paper, in addition to conflating it with Ghanaian Pidgin English. In short, “ChatGPT was echoing decades of imperialist framing that positioned English varieties of the Global South as being diametrically opposed to the English varieties of the North” (144). We must go beyond merely acknowledging that new writing technologies reinforce linguistic homogenization to actively resist contributing to this outcome, particularly when the known benefits do little to outweigh the established risks.


Premise 4. Writing studies as a discipline rejects punitive approaches to plagiarism and plagiarism surveillance.

We know from decades of work by scholars like Rebecca Moore Howard, Johndan Johnson-Eilola and Stuart Selber, Amy Robillard, and Chris Anson that plagiarism is often a moralistic distraction from the core goals of postsecondary writing instruction. As a result, plagiarism and academic integrity are not our primary concerns when it comes to GenAI, and detectors and process surveillance that take up more time and energy from both teachers and students are not appropriate responses to GenAI. Writing studies teachers continue to have many other more important concerns and outcomes in our efforts toward effective writing instruction (Banks; Hart-Davidson; MLA-CCCC Joint Task Force), and overemphasis on plagiarism often has negative effects on our efforts to create a learning environment grounded in trust, openness, and understanding, as opposed to punitive approaches designed to control student behavior.

As the MLA-CCCC Joint Task Force on Writing and AI explained in their first Working Paper, “the primary work of educators is to support students’ intellectual and social development and to foster exploration and creativity rather than to surveil, discipline, or punish students” (4). Moreover, “Students may experience an increased sense of alienation and mistrust if surveillance and detection approaches meant to ensure academic integrity are undertaken. Such approaches have been proven unreliable and biased; they can produce false positives that could lead to wrongful accusations, resulting in negative consequences for the students” (7). We also understand that commonplace pedagogical approaches in the discipline that focus on the writing process and the micro-level steps involved in creating a writing project—e.g., brainstorming, drafting, peer review, and revision—are often a deterrent to plagiarism.


Premise 5. Writing studies scholarship has demonstrated that technologies, including GenAI, are never ideologically neutral.

Digital rhetoricians understand that just as computer desktop interfaces, search engine algorithms, and email are not ideologically neutral technologies (Selfe and Selfe; Noble; Moses and Katz), nor are GenAI technologies, which are not only modeled on a corpus of text that contains biases and misinformation, but also on corporate capitalist and neoliberal values of expediency, individualism, so-called objectivity, and the upward redistribution of wealth and resources to the corporate class. Moreover, we understand the need to consider the material implications of these ideologies and to teach students how to think critically about how technologies, ideologies, and material realities intersect.


Premise 6. Writing studies as a discipline has taken up histories of digital writing technologies that provide important context for understanding our current GenAI moment.

In other words, we should understand GenAI in light of the histories of technologies like movable type, print capitalism, technical communication (Hart-Davidson et al. “The History”; Hart-Davidson et al. “Revisiting”), the internet, social media, and more. Such histories are useful as they contextualize the kinds of emotional and moral panic that often result from new writing technologies that seem to be moving toward wide adoption (Bolter). But we must also remember that the histories that we keep are often those of the technologies that have resulted in broad uptake across sectors and that not all writing technologies necessarily experience the same level of ubiquity. In this way, we must be careful of uncritically accepting the notion that GenAI in its current form will inevitably be widely taken up in the corporate sector and that we must therefore prepare students for that eventuality now. In addition, it is important to consider what exigencies have led to the development of these technologies, including the differences between technologies designed to solve specific problems and meet specific needs (however we may feel about those needs) versus those designed to meet the goals of corporate capitalism. 


Premise 7. Writing teachers are well-poised to understand the range of labor issues that come with GenAI adoption.

Rhetoric and composition teacher-scholars like Eileen Schell, Carmen Kynard, Seth Kahn, and Anicca Cox, as well as the Conference on College Composition and Communication Labor Caucus, have engaged with issues of labor exploitation in postsecondary writing instruction. As a discipline, we must think critically about how mandates to adopt GenAI technologies in writing programs will further exacerbate the already challenging and increasingly precarious labor conditions shouldered by many postsecondary writing instructors—including a large number of graduate instructors, part-time contingent faculty, and non-tenure-track lecturers. In addition, those in the field of technical and professional communication understand the need to consider how widespread adoption of these technologies may affect technical and professional writing labor markets that many of our students will enter into. Importantly, there is a need to understand these contexts in conjunction with assessments of the actual risks and benefits of GenAI technologies.


Premise 8. Writing teachers must be critical of the rhetorical and economic contexts surrounding GenAI, including the various ways that GenAI is promoted and marketed.

GenAI as it currently exists relies on an extractive economic model in which the content of writers, students, teachers, internet users, and content creators is taken and used, often without our express consent, to fuel LLMs for the profit of Big Tech and its investors, with little benefit to those whose works have been exploited for this purpose (Milmo). We have also learned how OpenAI relied on Kenyan workers who were paid less than $2 an hour to “make ChatGPT less toxic” (Perrigo; Nyabola). As rhetoricians, we understand the need to keep context in mind, including economic context—e.g., how billions of dollars in investment money can dramatically increase the pressure to adopt GenAI. We must bear in mind who ultimately benefits from the widespread uptake of GenAI in higher education (it’s not the students), and we need to be aware of how those involved in the EdTech industry have shaped and influenced the discourse. For instance, an alarming number of articles published in Inside Higher Ed and The Chronicle of Higher Education appear to be either authored by—or rely considerably on the word of—individuals who have a stake in EdTech and the neoliberal incursion of Big Tech in higher education (Coffey; Cyr; Darby; Mowreader; Schroeder).


Premise 9. Writing studies must consider the environmental impacts of GenAI as well as other digital technologies that rely on massive datasets, such as Zoom and Facebook.

We know that using AI technologies like ChatGPT requires a significant amount of natural resources, including clean fresh water that is used to generate electricity to power data centers and to cool servers (Crawford). Not long after its public release, researchers estimated that training GPT-3 consumed 700,000 liters (about 185,000 gallons) of fresh water, and that ChatGPT consumes roughly 500 ml of water for every 10–50 responses it generates. GPT-4 likely requires even more than this given its larger model size (Li et al.). As writing studies is sure to continue to attend to a variety of digital writing technologies, it is critical that we consider what Dustin Edwards has referred to as “digital damage,” which considers “how the material infrastructures of the internet and connected platforms and devices are tangled up with lands, waters, energies, and histories that are often unseen, unfelt, or unacknowledged in our everyday lives” (60). Importantly, Edwards’ work “amplifies how the digital is not felt the same by all human and more-than-human bodies, as the expansive material infrastructures that tangle around the earth in the form of fiber optic cable networks, cable stations, data centers, and so on have potent consequences for certain landscapes and communities more than others” (60). These impacts are not just plausible; they are documented, tangible, and real. 


Premise 10. Refusal can be a principled and pragmatic response to the incursion of GenAI technologies in college writing courses.

It is a reasonable and practical choice not to use GenAI products unless they align with our disciplinary goals and ethical values. Likewise, it is a rational and principled choice to not use GenAI products unless and until we have determined that their benefits outweigh their costs. Radically revising and restructuring our discipline around GenAI is premature at this time, given the uncertainty and instability of the future of these technologies. Instead of centering a technology that is misaligned with so many of our disciplinary values, we can choose to opt out of active use of GenAI technologies until these products are better aligned with our values in the following ways:

  1. Changes to the extractive economic model of current GenAI technologies, including addressing the labor issues related to the creation, maintenance, and deployment of these products and technologies as well as concerns related to intellectual property and citation justice.
  2. Changes in algorithmic outputs that reinforce white supremacy whether in terms of linguistic, visual, aural, and/or aesthetic homogeneity.
  3. Federal regulation of environmental impacts, in dialogue with climate scientists and activists.
  4. Clarity with regard to the actual and not imagined or hypothetical benefits of GenAI products, and confidence that the material benefits of these technologies outweigh their substantial costs.
  5. Slow and meaningful collaboration with individuals, including academic faculty, who research the cultural and political implications of technology capitalism, as well as those who hold strong, unwavering commitments to public education, and to the liberal arts and humanities.

This is not meant to be a comprehensive list, and we invite you to think with us about what conditions can and should look like before adoption becomes a viable disciplinary and principled choice that you are willing to make, with the above premises in mind. 

References

Alexander, Jonathan. “Students’ Right to Write.” Inside Higher Ed. 22 Nov 2023.

Anderson, Salena Sampson. “‘Places to Stand’: Multiple Metaphors for Framing ChatGPT’s Corpus.” Computers and Composition 68 (2023): 1–13.

Anson, Chris M. “Fraudulent Practices: Academic Misrepresentations of Plagiarism in the Name of Good Pedagogy.” Composition Studies 39.2 (2011): 29–43.

Baker-Bell, April. Linguistic Justice: Black Language, Literacy, Identity, and Pedagogy. Routledge, 2020.

Baker-Bell, April, Bonnie J. Williams-Farrier, Davena Jackson, Lamar Johnson, Carmen Kynard, and Teaira McMurtry. “This Ain’t Another Statement! This is a DEMAND for Black Linguistic Justice!” Conference on College Composition and Communication, July 2020.

Banks, Adam. “From the Bridge: Reaffirmation and Recommitment, even in times of Reinvention: Or, Why There’s Joy In Repetition.” TeachingWriting. Stanford University. 8 Sep 2023.  

Bolter, Jay D. Writing Space: Computers, Hypertext, and the Remediation of Print. Erlbaum, 1991.

Coffey, Lauren. “Majority of Grads Wish They’d Been Taught AI in College.” Inside Higher Ed. 23 Jul 2024.

Cox, Anicca. “Sad Math and the Weight of the Institution: Seeking Remedies for Faculty Long-Term Precarity.” College English 86.3 (2024): 219–243.

Cox, Anicca, et al. “The Indianapolis Resolution: Responding to Twenty-First-Century Exigencies/Political Economies of Composition Labor.” College Composition and Communication 68.1 (2016): 38–67.

Crawford, Kate. “Generative AI’s environmental costs are soaring — and mostly secret.” Nature. 20 Feb 2024.

Cyr, Matt. “Leading AI Adoption While Still Learning It Yourself.” Inside Higher Ed. 19 Sep 2024.

Darby, Flower. “4 Steps to Help You Plan for ChatGPT in Your Classroom.” The Chronicle of Higher Education. 23 Jun 2024. 

Edwards, Dustin W. “Digital Rhetoric on a Damaged Planet: Storying Digital Damage as Inventive Response to the Anthropocene.” Rhetoric Review 39.1 (2020): 59–72.

Emig, Janet. “Writing as a Mode of Learning.” College Composition and Communication 28.2 (1977): 122–128.

Hart-Davidson, Bill. “Have We Ever Done A Good Job Teaching Writing?” Medium. 6 Sep 2023.  

Hart-Davidson, Bill, et al. “Revisiting Four Conversations in Technical and Professional Writing Scholarship to Frame Conversations About Artificial Intelligence.” Journal of Business and Technical Communication (2024): 10506519241280642.

Hart-Davidson, Bill, et al. “The History of Technical Communication and the Future of Generative AI.” Proceedings of the 42nd ACM International Conference on Design of Communication. 2024.

Howard, Rebecca Moore. “Understanding ‘Internet Plagiarism.’” Computers and Composition 24.1 (2007): 3–15.

Howard, Rebecca Moore, and Missy Watson. “The Scholarship of Plagiarism: Where We’ve Been, Where We Are, What’s Needed Next.” Writing Program Administration 33.3 (2010): 116–125.

Johnson-Eilola, Johndan, and Stuart A. Selber. “Plagiarism, Originality, Assemblage.” Computers and Composition 24.4 (2007): 375–403.

Kahn, Seth, William B. Lalicker, and Amy Lynch-Biniek. Contingency, Exploitation, and Solidarity: Labor and Action in English Composition. WAC Clearinghouse, 2017.

Kynard, Carmen. “Fakers and Takers: Disrespect, Crisis, and Inherited Whiteness in Rhetoric-Composition Studies.” Composition Studies 50.3 (2022): 131–204.

Kynard, Carmen. “When Robots Come Home to Roost: The Differing Fates of Black Language, Hyper-Standardization, and White Robotic School Writing (Yes, ChatGPT and His AI Cousins).” Education, Liberation & Black Radical Traditions for the 21st Century. 11 Dec 2023. [Blog post].

Li, Pengfei, et al. “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models.” arXiv preprint arXiv:2304.03271 (2023).

Milmo, Dan. “‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says.” The Guardian. 8 Jan 2024.  

MLA-CCCC Joint Task Force on Writing and AI. “Working Paper: Overview of the Issues, Statement of Principles, and Recommendations.” Jul 2023. Modern Language Association and Conference on College Composition and Communication.

Moses, Myra G., and Steven B. Katz. “The Phantom Machine: The Invisible Ideology of Email (A Cultural Critique).” Critical Power Tools: Technical Communication and Cultural Studies (2006): 71–105.

Mowreader, Ashley. “Report: Generative AI Can Address Advising Challenges.” Inside Higher Ed. 5 Sep 2024.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.

Nyabola, Nanjala. “ChatGPT and the sweatshops powering the digital age.” Al Jazeera. 23 Jan 2023.

Owusu-Ansah, Alfred L. “Defining Moments, Definitive Programs, and the Continued Erasure of Missing People.” Composition Studies 51.1 (2023): 143–148.

Perrigo, Billy. “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” Time. 18 Jan 2023.

Robillard, Amy E. “We Won’t Get Fooled Again: On the Absence of Angry Responses to Plagiarism in Composition Studies.” College English 70.1 (2007): 10–31.

Schell, Eileen E. Gypsy Academics and Mother-Teachers: Gender, Contingent Labor, and Writing Instruction. Boynton/Cook, 1998.

Schroeder, Ray. “AI Is Already Advancing Higher Education.” Inside Higher Ed. 10 Sep 2024.

Selfe, Cynthia L., and Richard J. Selfe Jr. “The Politics of the Interface: Power and Its Exercise in Electronic Contact Zones.” College Composition and Communication 45.4 (1994): 480–504.

Smitherman, Geneva. “‘Students’ Right to Their Own Language’: A Retrospective.” English Journal 84 (1995): 21–28.

“Students’ Right to Their Own Language.” College Composition and Communication 25 (1974).

Vee, Annette. “Large Language Models Write Answers.” Composition Studies 51.1 (2023): 176–181.

Vie, Stephanie. “A Pedagogy of Resistance Toward Plagiarism Detection Technologies.” Computers and Composition 30.1 (2013): 3–15.

Endnotes

  1. Recommended citation: Sano-Franchini, Jennifer, Megan McIntyre, and Maggie Fernandes. “Refusing GenAI in Writing Studies: A Quickstart Guide.” Refusing Generative AI in Writing Studies. Nov. 2024. refusinggenai.wordpress.com
  2. Thanks to Tara Salvati for copyediting this guide.
  3. We capitalize “GenAI” to indicate that we are referring to Generative AI technologies produced by Big Tech corporations and backed by venture capital, and whose primary purpose is to generate profit for shareholders, e.g., OpenAI’s ChatGPT, Google Gemini, Microsoft Copilot, and Adobe Firefly.