The digital world is artificially constructed. Moderated by algorithmic tools, it contains more information than the world’s libraries combined—but much of this information comes from unvetted sources and lacks conventional indicators of trustworthiness. People scrolling through their social-media feeds are confronted with a deluge of updates and messages—an ad for a new device, a meme from a friend, news about the pandemic, and opinions on anything from climate change to the latest celebrity misstep—all in an endless stream produced and shared by human beings and promoted by algorithms designed to make people dwell on the platform so they can be exposed to more ads (Wu, 2016).
The challenges of dealing with overabundant and attention-grabbing information are amplified by the proliferation of false information and conspiracy theories, whose prevalence may lead people to doubt the very existence of “truth” or a shared reality. An entirely new vocabulary has become necessary to describe disinformation and online harassment tactics, such as flooding, trolling, JAQing, and sealioning.
These tactics generate an excess of contradictory and irrelevant information in order to instill doubt, undermine a shared perception of reality, or simply distract people’s attention (Kozyreva et al., 2020; Lewandowsky, 2020).
To counteract the challenges of false and misleading information and other attention-grabbing traps online, policy work has taken a multipronged approach, ranging from content moderation and fact-checking to the introduction of prompts that slow down the spread of false rumors (Lewandowsky et al., 2020). In addition, research has focused on preparing people to recognize and resist online manipulation and misinformation, through both preemptive (inoculation) and reactive (debunking) interventions (Ecker et al., 2022), and on improving people’s competencies for media and information literacy (e.g., Wineburg et al., 2022). Much effort has been invested in repurposing the notion of critical thinking—that is, “thinking that is purposeful, reasoned, and goal directed” (Halpern, 2013, p. 8)—from its origins in education to the online world. For example, Zucker (2019), addressing the National Science Teachers Association, wrote that because of the flood of misinformation “it is imperative that science teachers help students use critical thinking to examine claims they see, hear, or read that are not based on science” (p. 6).
As important as the ability to think critically continues to be, we argue that it is insufficient to borrow the tools developed for offline environments and apply them to the digital world. When the world comes to people filtered through digital devices, there is no longer a need to decide what information to seek. Instead, the relentless stream of information has turned human attention into a scarce resource to be seized and exploited by advertisers and content providers. Investing effortful and conscious critical thinking in sources that should have been ignored in the first place means that one’s attention has already been expropriated (Caulfield, 2018). Digital literacy and critical thinking should therefore include a focus on the competence of critical ignoring: choosing what to ignore, learning how to resist low-quality and misleading but cognitively attractive information, and deciding where to invest one’s limited attentional capacities.
Information Selection in the Attention Economy
Being selective about available information is at the heart of human cognition. Virtually any time people process a stimulus, they do so only because they are ignoring multiple competing stimuli. At the level of perceptual processing, the mind must ignore irrelevant sensory information in order to focus on important objects in a continually changing environment (Gaspar & McDonald, 2014). The general ability to perform cognitive tasks, drawing on working memory capacity, is related to the ability to suppress irrelevant distractors (Gaspar et al., 2016). Ignoring information is also a distinctive feature of decision making by a boundedly rational mind (i.e., a real-world mind that is limited in time, knowledge, foresight, and cognitive resources; Simon, 1990). A key class of decision-making strategies consists of heuristics, whose nature is to ignore “part of the information with the goal of making decisions more quickly, frugally, and/or accurately” than is possible with more complex strategies (Gigerenzer & Gaissmaier, 2011, p. 454).
Information selection is mediated by one’s physical and social environments and the cues within them that signal, among other things, danger, reward, or the emotional states of other people. Being attuned to these valuable signals (and ignoring what is essentially irrelevant) is crucial for the efficient functioning of any biological or artificial agent with limited resources (Simon, 1990). Ideally, humans’ cognitive tools for separating valuable from to-be-ignored information are adapted to the environments they operate in. However, humans’ long-standing evolved, learned, and taught tools for information selection may be inadequate in the digital world, where the power of information filtering and control over environmental signals rests mainly with platforms that curate content through a combination of algorithmic tools and choice architectures (i.e., designs for presenting choices to users).
For instance, in the world of small social groups in which humans evolved, paying attention to surprising or emotionally charged information is important, because it usually signals potential dangers or rewards. Online, however, the same cues and attention markers that indicate important information in a social group can be misused by content generators to attract attention to falsehoods and tempt people into spreading them. Indeed, Vosoughi et al. (2018) found that false stories that “successfully” went viral were likely to inspire fear, disgust, and surprise; true stories, in contrast, triggered anticipation, sadness, joy, and trust. The human proclivity to attend more to negative than to positive things (Soroka et al., 2019) may explain why messages featuring moral-emotional language (e.g., language expressing negative emotions, such as moral outrage) are more likely to be shared than messages with neutral language (Brady et al., 2017). Unscrupulous content generators can exploit this bias and continually refine their messages by monitoring the success (measured by engagement and sharing) of different versions—a facility, known as “A/B testing,” that is at the heart of Facebook’s advertising system (see Meta, 2022).
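The A/B-testing logic described above can be sketched in a few lines of code. This is a minimal illustration, not Facebook’s actual system; the variant names and all numbers are invented for the example:

```python
# Minimal sketch of A/B testing: two message variants are shown to
# separate user samples, and the variant with the higher engagement
# (share) rate is kept for further iteration. All figures are invented.

def share_rate(shares: int, impressions: int) -> float:
    """Fraction of impressions that led to a share."""
    return shares / impressions

def pick_winner(variants: dict) -> str:
    """Return the name of the variant with the highest share rate."""
    return max(variants, key=lambda name: share_rate(*variants[name]))

# (shares, impressions) for two hypothetical message variants
variants = {
    "neutral_wording": (120, 10_000),   # 1.2% share rate
    "outrage_wording": (340, 10_000),   # 3.4% share rate
}

print(pick_winner(variants))  # prints "outrage_wording"
```

Repeated over many rounds, this selection loop systematically favors whichever emotional framing draws the most engagement, which is how the bias toward moral-emotional content gets amplified.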
Misleading and low-quality information becomes an even more profound risk when it is part of a targeted campaign. The “infodemic” of misinformation and calculated disinformation about COVID-19 not only pollutes the Web with false and dubious information, but also undermines citizens’ health literacy, fosters vaccine hesitancy, and produces detrimental outcomes for individuals and society. This infodemic is nontrivial because exposure to misinformation has been shown to reduce people’s intention to be vaccinated against COVID-19 (Loomba et al., 2021).
Such harmful content, although it might be shared and embraced by a part of the public, often originates from malicious actors who are motivated by a variety of factors, including financial, ideological, and lobbying interests (e.g., climate denial is a concentrated effort; Oreskes & Conway, 2011). Malicious actors also use trolling and other harassment tactics to intimidate and silence opposing voices. Moreover, competition for attention creates an overabundance of content that, although not necessarily harmful in itself, can negatively affect important indicators of quality of life, such as the amount of leisure time and well-being.
In sum, digital environments present new challenges to people’s cognition and attention. People must therefore develop new mental habits, or retool those from other domains, to prevent merchants of low-quality information from hijacking their cognitive resources. One key such competence is the ability to deliberately and strategically ignore information.
Critical Ignoring for Information Management
Deliberate ignorance refers to the conscious choice to ignore information even when the costs of obtaining it are negligible (Hertwig & Engel, 2016). People deliberately ignore information for various reasons—for instance, to avoid anticipated negative emotions, to ensure fairness, or to maximize suspense and surprise. Deliberate ignorance can also be a tool for boosting information management, especially online (Kozyreva et al., 2020). Critical ignoring (Wineburg, 2021) is a type of deliberate ignorance that entails selectively filtering and blocking out information in order to control one’s information environment and reduce one’s exposure to false and low-quality information. This competence complements conventional critical-thinking and information-literacy skills, such as finding reliable information online, by specifying how to avoid information that is misleading, distracting, and potentially harmful. It is only by ignoring the torrent of low-quality information that people can focus on applying critical search skills to the remaining, now-manageable pool of potentially relevant information. As do all types of deliberate ignorance, critical ignoring requires cognitive and motivational resources (e.g., impulse control) and, somewhat ironically, knowledge: In order to know what to ignore, a person must first understand and detect the warning signs of low trustworthiness.
Critical Ignoring as a New Paradigm for Education
The digital world’s attention economy, the presence of malicious actors, and the ubiquity of alluring but false or misleading information present users with cognitive, emotional, and motivational challenges. Mastering these challenges will require new competencies. An indispensable component of navigating online information and preserving one’s autonomy on the Internet is the ability to ignore large amounts of information. Schools should therefore teach critical-ignoring strategies as part of a curriculum in information management. Traditionally, the search for knowledge has involved paying close attention to information—finding it and considering it from multiple angles. Reading a text from beginning to end to critically evaluate it is a sensible approach to vetted school texts approved by competent overseers. On the unvetted Internet, however, this approach often ends up being a colossal waste of time and energy. In an era in which attention is the new currency, the admonition to “pay careful attention” is precisely what attention merchants and malicious agents exploit. As long as students are led to believe that critical thinking requires above all the effortful processing of text, they will continue to fall prey to informational traps and manipulated signals of epistemic quality. It is therefore time to revisit and expand the concept of critical thinking, often seen as the bedrock of an informed citizenry. At the same time that students learn critical thinking, they should learn the core competence of thoughtfully and strategically allocating their attentional resources online. This will often entail selecting a few valuable pieces of information and deliberately ignoring others (Hertwig & Engel, 2016). This insight, although crucial in the digital age, is not new. As William James (1904) observed, “The art of being wise is the art of knowing what to overlook” (p. 369).
Recommended Reading
Hertwig, R., & Engel, C. (2016). (See References). A comprehensive overview of deliberate ignorance.
Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). (See References). An in-depth review of digital challenges and cognitive tools from psychological science that can be used to confront them.
Wineburg, S. (2021, May 14). (See References). An accessible recent article introducing critical ignoring.