
Abstract

Low-quality and misleading information online can hijack people’s attention, often by evoking curiosity, outrage, or anger. Resisting certain types of information and actors online requires people to adopt new mental habits that help them avoid being tempted by attention-grabbing and potentially harmful content. We argue that digital information literacy must include the competence of critical ignoring—choosing what to ignore and where to invest one’s limited attentional capacities. We review three types of cognitive strategies for implementing critical ignoring: self-nudging, in which one ignores temptations by removing them from one’s digital environments; lateral reading, in which one vets information by leaving the source and verifying its credibility elsewhere online; and the do-not-feed-the-trolls heuristic, which advises one to not reward malicious actors with attention. We argue that these strategies for implementing critical ignoring should be part of school curricula on digital information literacy. Teaching the competence of critical ignoring requires a paradigm shift in educators’ thinking, from a sole focus on the power and promise of paying close attention to an additional emphasis on the power of ignoring. Encouraging students and other online users to embrace critical ignoring can empower them to shield themselves from the excesses, traps, and information disorders of today’s attention economy.
A wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it. (Simon, 1971, pp. 40–41)
The function of ignoring, of inattention, is as vital a factor in mental progress as the function of attention itself. (James, 1904, p. 371)
The digital world is artificially constructed. Moderated by algorithmic tools, it contains more information than the world’s libraries combined—but much of this information comes from unvetted sources and lacks conventional indicators of trustworthiness. People scrolling through their social-media feeds are confronted with a deluge of updates and messages—an ad for a new device, a meme from a friend, news about the pandemic, and opinions on anything from climate change to the latest celebrity misstep—all in an endless stream produced and shared by human beings and promoted by algorithms designed to make people dwell on the platform so they can be exposed to more ads (Wu, 2016).
The challenges of dealing with overabundant and attention-grabbing information are amplified by the proliferation of false information and conspiracy theories, whose prevalence may lead people to doubt the very existence of “truth” or a shared reality. An entirely new vocabulary has become necessary to describe disinformation and online harassment tactics, such as flooding, trolling, JAQing, and sealioning.1 These tactics generate an excess of contradictory and irrelevant information in order to instill doubt, undermine a shared perception of reality, or simply distract people’s attention (Kozyreva et al., 2020; Lewandowsky, 2020).
To counteract the challenges of false and misleading information and other attention-grabbing traps online, policy work has taken a multipronged approach, ranging from content moderation to fact checking and introduction of prompts that slow down the spread of false rumors (Lewandowsky et al., 2020). In addition, research has focused on preparing people to recognize and resist online manipulation and misinformation, through both preemptive (inoculation) and reactive (debunking) interventions (Ecker et al., 2022), and on improving people’s competencies for media and information literacy (e.g., Wineburg et al., 2022). Much effort has been invested in repurposing the notion of critical thinking—that is, “thinking that is purposeful, reasoned, and goal directed” (Halpern, 2013, p. 8)—from its origins in education to the online world. For example, Zucker (2019), addressing the National Science Teachers Association, wrote that because of the flood of misinformation “it is imperative that science teachers help students use critical thinking to examine claims they see, hear, or read that are not based on science” (p. 6).
As important as the ability to think critically continues to be, we argue that it is insufficient to borrow the tools developed for offline environments and apply them to the digital world. When the world comes to people filtered through digital devices, there is no longer a need to decide what information to seek. Instead, the relentless stream of information has turned human attention into a scarce resource to be seized and exploited by advertisers and content providers. Investing effortful and conscious critical thinking in sources that should have been ignored in the first place means that one’s attention has already been expropriated (Caulfield, 2018). Digital literacy and critical thinking should therefore include a focus on the competence of critical ignoring: choosing what to ignore, learning how to resist low-quality and misleading but cognitively attractive information, and deciding where to invest one’s limited attentional capacities.

Information Selection in the Attention Economy

Being selective about available information is at the heart of human cognition. Virtually any time people process a stimulus, they do so only because they are ignoring multiple competing stimuli. At the level of perceptual processing, the mind must ignore irrelevant sensory information in order to focus on important objects in a continually changing environment (Gaspar & McDonald, 2014). The general ability to perform cognitive tasks, drawing on working memory capacity, is related to the ability to suppress irrelevant distractors (Gaspar et al., 2016). Ignoring information is also a distinctive feature of decision making by a boundedly rational mind (i.e., a real-world mind that is limited in time, knowledge, foresight, and cognitive resources; Simon, 1990). A key class of decision-making strategies consists of heuristics, whose nature is to ignore “part of the information with the goal of making decisions more quickly, frugally, and/or accurately” than is possible with more complex strategies (Gigerenzer & Gaissmaier, 2011, p. 454).2
Information selection is mediated through one’s physical and social environments and their cues that signal, among other things, danger, reward, or emotional states of other people. Being attuned to these valuable signals (and ignoring what is essentially irrelevant) is crucial for efficient functioning of any biological or artificial agent with limited resources (Simon, 1990). Ideally, humans’ cognitive tools for separating valuable from to-be-ignored information are adapted to the environments they operate in. However, humans’ long-standing evolved, learned, and taught tools for information selection may be inadequate in the digital world, where the power of information filtering and control over environmental signals rests mainly with platforms that curate content through a combination of algorithmic tools and choice architectures (i.e., designs for presenting choices to users).
For instance, in the world of small social groups in which humans evolved, paying attention to surprising or emotionally charged information is important, because it usually signals potential dangers or rewards. Online, however, the same cues that mark important information in a social group can be misused by content generators to attract attention to falsehoods and tempt people into spreading them. Indeed, Vosoughi et al. (2018) found that false stories that “successfully” went viral were likely to inspire fear, disgust, and surprise; true stories, in contrast, triggered anticipation, sadness, joy, and trust. The human proclivity to attend more to negative than to positive things (Soroka et al., 2019) may explain why messages featuring moral-emotional language (e.g., language expressing negative emotions, such as moral outrage) are more likely to be shared than messages with neutral language are (Brady et al., 2017). Unscrupulous content generators can exploit this bias and continually refine their messages by monitoring the success (measured by engagement and sharing) of different versions—a facility, known as “A/B testing,” that is at the heart of Facebook’s advertising system (see Meta, 2022).
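To make the logic of such engagement-driven optimization concrete, here is a minimal Python sketch that compares the sharing rates of two message variants with a two-proportion z test. The counts and variant descriptions are fabricated for illustration and are not drawn from the studies cited above.

```python
# A toy sketch of A/B testing on message engagement: compare the sharing
# rates of two variants and keep the more engaging one. All numbers are
# fabricated for illustration.
from math import sqrt

def two_proportion_z(shares_a, views_a, shares_b, views_b):
    """z statistic for the difference between two sharing rates."""
    p_a, p_b = shares_a / views_a, shares_b / views_b
    pooled = (shares_a + shares_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Variant A uses neutral language; variant B adds moral-emotional language.
z = two_proportion_z(shares_a=120, views_a=10_000, shares_b=180, views_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96: the difference is unlikely to be chance
```

Run at scale and repeated over many message versions, comparisons of this kind are what allow content generators to steer their output toward whatever maximizes engagement.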
Misleading and low-quality information becomes an even more profound risk when it is part of a targeted campaign. The “infodemic” of misinformation and calculated disinformation about COVID-19 not only pollutes the Web with false and dubious information, but also undermines citizens’ health literacy, fosters vaccine hesitancy, and cultivates detrimental outcomes for individuals and society. This infodemic is nontrivial because exposure to misinformation has been shown to reduce people’s intention to be vaccinated against COVID-19 (Loomba et al., 2021).
Such harmful content, although it might be shared and embraced by part of the public, often originates from malicious actors who are motivated by a variety of factors, including financial, ideological, and lobbying interests (e.g., climate denial is a concentrated effort; Oreskes & Conway, 2011). Malicious actors also use trolling and other harassment tactics to intimidate and silence opposing voices. Moreover, competition for attention creates an overabundance of content that, although not necessarily harmful in itself, can negatively affect important components of quality of life, such as leisure time and well-being.
In sum, digital environments present new challenges to people’s cognition and attention. People must therefore develop new mental habits, or retool those from other domains, to prevent merchants of low-quality information from hijacking their cognitive resources. One key such competence is the ability to deliberately and strategically ignore information.

Critical Ignoring for Information Management

Deliberate ignorance refers to the conscious choice to ignore information even when the costs of obtaining it are negligible (Hertwig & Engel, 2016). People deliberately ignore information for various reasons—for instance, to avoid anticipated negative emotions, to ensure fairness, or to maximize suspense and surprise. Deliberate ignorance can also be a tool for boosting information management, especially online (Kozyreva et al., 2020). Critical ignoring (Wineburg, 2021) is a type of deliberate ignorance that entails selectively filtering and blocking out information in order to control one’s information environment and reduce one’s exposure to false and low-quality information. This competence complements conventional critical-thinking and information-literacy skills, such as finding reliable information online, by specifying how to avoid information that is misleading, distracting, and potentially harmful. It is only by ignoring the torrent of low-quality information that people can focus on applying critical search skills to the remaining, now-manageable pool of potentially relevant information. Like all types of deliberate ignorance, critical ignoring requires cognitive and motivational resources (e.g., impulse control) and, somewhat ironically, knowledge: In order to know what to ignore, a person must first understand and detect the warning signs of low trustworthiness.

Critical Ignoring in the Digital World: Information Types and Tools

What are strategies for implementing critical ignoring? Different types of problematic information—such as distracting information, misinformation and disinformation, and interference by malicious actors—may require different mitigation strategies. We discuss three strategies—self-nudging, lateral reading, and the do-not-feed-the-trolls heuristic—and the circumstances in which they can be applied (see also Fig. 1).
Fig. 1. Critical ignoring online: types of information, corresponding strategies for critical ignoring, and targeted outcomes of those strategies.

Self-nudging: removing distracting and low-quality information

Clickbait stories (“Ebola in the Air? A Nightmare That Could Happen”), emotional and sensational content, “breaking news”—the various forms of low-quality information are as tempting to the attentional system as junk food is to the taste buds. The key to controlling addictive habits—whether cutting out online gossip or sugary treats—is not to exercise superhuman willpower but rather to employ situational control strategies (Duckworth et al., 2016).3 This involves making changes to one’s environment in order to manage exposure to temptation. For instance, someone who cannot resist sweets can make them less accessible—putting them at the back of the hardest-to-reach shelf—to help control the urge to eat them. The same rationale can be harnessed for an information diet.
Self-nudging is a cognitive boost (Hertwig & Grüne-Yanoff, 2017) that fosters people’s competencies to design their environment in a way that works best for them. Self-nudging has roots in research on a behavioral policy approach called nudging (Thaler & Sunstein, 2008) and in psychological research on situational self-control (Duckworth et al., 2016). Using extensively studied intervention mechanisms, such as positional effects (e.g., making healthy food options more accessible in a supermarket or a cafeteria), defaults (e.g., making data privacy a default setting), or social norms, the self-nudger redesigns choice architectures to prompt behavioral change. However, instead of requiring a public choice architect, self-nudging empowers people to change their own environments (Reijula & Hertwig, 2022), thus making them citizen choice architects whose autonomy and agency are preserved and fostered.
To deal with attention-grabbing information online, people can apply self-nudging principles to organize their information environment so as to reduce temptation. For instance, digital self-nudges, such as setting time limits on the use of social media (e.g., via the Screen Time app on iPhone) or converting one’s screen to a grayscale mode, have been demonstrated to help people reduce their screen time (Zimmermann & Sobolev, 2020). A more radical self-nudge consists of removing temptations by deactivating the most distracting social-media apps (at least for a period of time). In a study by Allcott et al. (2020), participants who were incentivized to deactivate their Facebook accounts for 1 month gained on average about 60 min per day for offline activities, a gain that was associated with small increases in subjective well-being. Reduced online activity also modestly decreased factual knowledge of political news (but not political participation), as well as political polarization (but not affective polarization). As this study shows, there are trade-offs between potential gains (e.g., time for offline activities) and losses (e.g., potentially becoming less informed) in such solutions. The key goal of self-nudging, however, is not to optimize information consumption, but rather to offer a range of measures that can help people regain control of their information environments and align those environments with their goals, including goals regarding how to distribute their time and attention among different competing sources (e.g., friends on social media and friends and family offline).
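As a concrete illustration of a self-nudge that removes temptations from one’s digital environment, the sketch below raises the cost of visiting distracting sites by redirecting them to localhost in the operating system’s hosts file. This is our illustrative example rather than a tool from the cited studies; the site list and marker comment are placeholders, and editing the real hosts file requires administrator rights.

```python
# A minimal sketch of a digital self-nudge: make distracting sites harder
# to reach by redirecting them to localhost in the hosts file. The site
# list is an example; HOSTS_PATH and MARKER are assumptions.
HOSTS_PATH = "/etc/hosts"  # C:\Windows\System32\drivers\etc\hosts on Windows
MARKER = "# self-nudge block"
DISTRACTING = ["facebook.com", "www.facebook.com", "twitter.com", "www.twitter.com"]

def nudge_lines(sites):
    """Hosts-file entries that send each distracting site to 127.0.0.1."""
    return [f"127.0.0.1 {site} {MARKER}" for site in sites]

def apply_nudge(path=HOSTS_PATH):
    """Append the block entries, leaving existing entries intact."""
    with open(path, "a") as hosts:
        hosts.write("\n" + "\n".join(nudge_lines(DISTRACTING)) + "\n")

def remove_nudge(path=HOSTS_PATH):
    """Undo the nudge by dropping only the marked entries."""
    with open(path) as hosts:
        kept = [line for line in hosts if MARKER not in line]
    with open(path, "w") as hosts:
        hosts.writelines(kept)

if __name__ == "__main__":
    print("\n".join(nudge_lines(DISTRACTING)))  # preview the entries only
```

Like putting sweets on the hardest-to-reach shelf, the nudge is deliberately reversible: it raises the friction of acting on an impulse without removing the choice altogether.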

Lateral reading: verifying credibility on the Web

Organized disinformation or misleading information that masquerades as legitimate is difficult to ignore, especially when it comes from political leaders and celebrities. Sources that disseminate such information adopt easily gamed indicators of epistemic quality, such as official-looking logos, scientific language, and top-level domains (e.g., .org; Wineburg & Ziv, 2019) in order to appear trustworthy. Other tricks include adding hyperlinks to reliable-looking sources that lend a claim an air of dependability even though the sources do not actually support it (Breakstone et al., 2022).
In a digital environment, looks can be deceiving. It is often impossible to know the real agenda behind a site or a post simply by examining it. The trick is to not waste time trying. Instead, a person can follow the strategy of professional fact-checkers known as lateral reading (Wineburg et al., 2022; Wineburg & McGrew, 2019). Lateral reading begins with a key insight: One cannot necessarily know how trustworthy a website or a social-media post is by engaging with and critically reflecting on its content. Without relevant background knowledge or reliable indicators of trustworthiness, the best strategy for deciding whether one can believe a source is to look up the author or organization and the claims elsewhere (e.g., using search engines or Wikipedia to get pointers to reliable sources). The strategy of lateral reading was identified by studying what makes professional fact-checkers more successful at verifying information on the Web than other competent adults (undergraduates at an elite university and Ph.D. historians from five different universities; Wineburg & McGrew, 2019). Instead of dwelling on an unfamiliar site (i.e., reading vertically), fact-checkers strategically and deliberately ignored it until they had first opened new tabs to search for information about the organization or individual behind it. If lateral reading indicates that a site is untrustworthy, examining it directly would only waste precious time and energy. Although this strategy requires motivation and time to learn and practice, it is a time-saver in the long run: In the study just mentioned, fact-checkers needed only a few seconds to determine the trustworthiness of a source.
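The lateral-reading routine can be summarized as a simple decision procedure. The runnable sketch below renders it as schematic Python; the checklist questions are our paraphrase of the fact-checkers’ practice, and the output is a list of steps for a human reader, not an automated credibility check.

```python
# A schematic sketch of lateral reading: before engaging with a site's
# content, step away and ask what independent sources say about the
# organization behind it. The example URL and questions are illustrative.
from urllib.parse import urlparse

LATERAL_QUESTIONS = [
    "Who is behind this site, according to sources other than the site itself?",
    "What does Wikipedia (or a search engine) say about the organization?",
    "Do independent, reliable sources corroborate the claim?",
]

def lateral_read(url):
    """Generate the lateral-reading steps for a given source."""
    org = urlparse(url).netloc or url
    steps = [f"Ignore {org} itself for now; open new tabs instead."]
    steps += [f"Search: {org} — {q}" for q in LATERAL_QUESTIONS]
    steps.append("Only if the source checks out: return and read it closely.")
    return steps

for step in lateral_read("https://example-health-institute.org/article"):
    print(step)
```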
Lateral reading is part of the Civic Online Reasoning curriculum, whose effectiveness has been demonstrated in multiple studies (Axelsson et al., 2021; Brodsky et al., 2021; McGrew et al., 2019; Wineburg et al., 2022). For instance, in a recent field experiment across an entire urban school district in the United States (Wineburg et al., 2022), students who completed six 50-min lessons focusing on lateral reading and related strategies (n = 271) were significantly better at judging the credibility of digital content than students in a control group (n = 228). Panizza et al. (2022), testing adults, likewise demonstrated that a quick pop-up with tips on how to check information’s credibility prompted lateral reading and improved participants’ ability to evaluate the accuracy of unfamiliar sources on a social-media website.

Do not feed the trolls: ignoring malicious actors

Sometimes it is not the information but the people who produce it who need to be actively ignored. Problematic online behavior, including the promulgation of disinformation and harassment, can usually be traced back to real people—more often than not to just a few extremely active individuals. Indeed, close to 65% of antivaccine content posted to Facebook and Twitter in February and March 2021 was attributable to just 12 individuals (Center for Countering Digital Hate, 2021).
Despite being a minority, conspiracy theorists and science denialists can be vocal enough to cause damage. Their strategy is to consume people’s attention by creating the appearance of a debate where none exists (e.g., Oreskes & Conway, 2011). One productive response is to resist engaging with these individuals or their claims by ignoring them. This approach can be implemented both on the individual and on the infrastructural level. For instance, reddit.com’s AskHistorians subreddit, one of the largest history forums online, removes questions that use the JAQing technique to deny the basic facts of the Holocaust (Breit, 2018).
Another category of bad actors online consists of those engaged in trolling, cyberbullying, and other forms of online harassment. Harassment—including physical threats, stalking, insults, and sexual harassment—is prevalent online; 41% of Americans say that they personally have experienced at least one form of such abuse (Vogels, 2021). Trolling, which includes interpersonal antisocial behaviors, such as deception, aggression, and disruption, is a particularly common and concerning type of online harassment (Craker & March, 2016).
Online harassment exacts an emotional toll on victims and erodes online civility. Crucially, as Craker and March (2016) demonstrated, individuals who engage in trolling are motivated by negative social power, and their trolling behavior is reinforced by the adverse impact their actions have (e.g., annoying and upsetting people). To fight back, as one of the authors of this study (March, 2016) argued, one needs to withdraw that negative social reward, thereby diminishing trolls’ motivation to engage in antisocial behavior. This strategy, which is also useful for dealing with other malicious actors, such as superspreaders of mis- and disinformation, is known as the do-not-feed-the-trolls heuristic. It consists of two rules: First, do not respond directly to trolls—do not correct them, engage in debate, retaliate, or troll in response. Second, block trolls and report them to the platform. Further support for this heuristic comes from expert advice on dealing with online harassment, which emphasizes two factors: (a) seeking help and support from one’s social group and/or professionals and (b) not engaging with malicious actors and instead blocking their messages. For example, UNICEF (n.d.) advises that, when bullying happens on a social-media platform, one should “consider blocking the bully and formally reporting their behaviour on the platform itself” (Question 4).
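The heuristic’s two rules lend themselves to a compact decision procedure. In the hypothetical sketch below, the trolling detector is a stand-in for the reader’s own judgment, not a real classifier; the message format and marker phrases are invented for illustration.

```python
# The do-not-feed-the-trolls heuristic as a small decision procedure.

def respond_to(message):
    """Apply the two rules: (1) no direct engagement, (2) block and report."""
    if is_trolling(message):
        return ["do NOT reply, correct, debate, or retaliate",  # rule 1: withhold attention
                "block the sender",                              # rule 2a
                "report the account to the platform"]            # rule 2b
    return ["engage normally"]

def is_trolling(message):
    # Stand-in heuristic: in practice this is the reader's own judgment
    # (is the message provocative, disruptive, insincere?).
    provocative_markers = ("just asking questions", "prove me wrong", "u mad?")
    return any(marker in message["text"].lower() for marker in provocative_markers)

print(respond_to({"text": "U mad? Just asking questions..."}))
```

The key design choice is that no branch of the procedure rewards the troll with a response: attention itself is the resource being withheld.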
Finally, it is important to note that no one can—or should—bear the burden of online abuse and disinformation alone. The do-not-feed-the-trolls heuristic must be complemented by users reporting bad actors to platforms and by platforms implementing consistent content-moderation policies. It is also crucial to ensure that trolling and flooding tactics of science denialists are not left without response on the platform level. Platforms’ content-moderation policies and design choices should be the first line of defense against harmful online behavior. Strategies and interventions aimed at fostering critical thinking and critical ignoring competencies in online users should not be regarded as a substitute for developing and implementing systemic and infrastructural solutions at the platform and regulator levels. Empowering individuals and fostering better digital competencies is part of the defense against online harm but must not be misused by regulators and platforms as an alibi for doing nothing.

Critical Ignoring as a New Paradigm for Education

The digital world’s attention economy, the presence of malicious actors, and the ubiquity of alluring but false or misleading information present users with cognitive, emotional, and motivational challenges. Mastering these challenges will require new competencies. An indispensable component of navigating online information and preserving one’s autonomy on the Internet is the ability to ignore large amounts of information. Critical-ignoring strategies should therefore be part of school curricula on information management. Traditionally, the search for knowledge has involved paying close attention to information—finding it and considering it from multiple angles. Reading a text from beginning to end to critically evaluate it is a sensible approach to vetted school texts approved by competent overseers. On the unvetted Internet, however, this approach often ends up being a colossal waste of time and energy. In an era in which attention is the new currency, the admonition to “pay careful attention” is precisely what attention merchants and malicious agents exploit. It is time to revisit and expand the concept of critical thinking, often seen as the bedrock of an informed citizenry. As long as students are led to believe that critical thinking requires above all the effortful processing of text, they will continue to fall prey to informational traps and manipulated signals of epistemic quality. At the same time that students learn critical thinking, they should learn the core competence of thoughtfully and strategically allocating their attentional resources online. This will often entail selecting a few valuable pieces of information and deliberately ignoring others (Hertwig & Engel, 2016). This insight, although crucial in the digital age, is not new. As William James (1904) observed, “The art of being wise is the art of knowing what to overlook” (p. 369).

Recommended Reading

Hertwig, R., & Engel, C. (2016). (See References). A comprehensive overview of deliberate ignorance.
Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). (See References). An in-depth review of digital challenges and cognitive tools from psychological science that can be used to confront them.
Wineburg, S. (2021, May 14). (See References). An accessible recent article introducing critical ignoring.

Acknowledgments

We thank Deb Ain for editing the manuscript.

Footnotes

Declaration of Conflicting Interests: The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.
Funding: The article was written as part of a Volkswagen Foundation grant to R. Hertwig and S. Lewandowsky (Initiative “Artificial Intelligence and the Society of the Future”). S. Lewandowsky also acknowledges financial support from the European Research Council (ERC Advanced Grant 101020961 PRODEMINFO) and the Humboldt Foundation through a research award.
1. We use the term misinformation to refer to any information that later turns out to have been false. We reserve the term disinformation to refer to messages that the communicator knows to be false but is disseminating for political or personal purposes. Flooding consists of inundating online spaces with a torrent of messages to dominate and disrupt conversation and drown out dissenting voices. Trolling is a form of online harassment that involves posting provocative and inflammatory messages in order to disrupt the conversation and upset other people. Sealioning is a type of trolling and a harassment tactic of pestering participants in online discussions with disingenuous questions and incessant requests for evidence under the guise of sincerity. Similarly, JAQing (“just asking questions”) is a tactic of disingenuously framing false or misleading statements as questions.
2. For example, the take-the-best heuristic models how people infer which of two alternatives has a higher value on a criterion, on the basis of binary cues and cue values retrieved from memory. It assumes that search proceeds through cues in order of their validity. Selection is implemented by the stopping rule: The heuristic stops when it reaches the first cue that discriminates between the alternatives. The heuristic thus uses the single most predictive and discriminative cue for a task (e.g., a friend’s recommendation for which of two restaurants has the best food) and ignores the rest (e.g., price, rating by food websites, cuisine type).
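A minimal Python sketch may make the search, stopping, and decision rules of take-the-best concrete. The cue names, their validity ordering, and the restaurant comparison below are illustrative assumptions in the spirit of the example in the footnote above.

```python
# A minimal sketch of the take-the-best heuristic: cues are checked in
# order of validity, and the first cue that discriminates decides.

def take_the_best(option_a, option_b, cues_by_validity):
    """Infer which option scores higher on the criterion.

    cues_by_validity: cue names ordered from most to least valid.
    option_a / option_b: dicts mapping cue name -> binary cue value (0 or 1).
    Returns "A", "B", or "guess" if no cue discriminates.
    """
    for cue in cues_by_validity:          # search rule: most valid cue first
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                        # stopping rule: first discriminating cue
            return "A" if a > b else "B"  # decision rule: that cue decides alone
    return "guess"                        # nothing discriminates; all else was ignored

# Which of two restaurants has the better food? (hypothetical cues)
cues = ["friend_recommends", "high_website_rating", "busy_at_dinnertime"]
bistro = {"friend_recommends": 1, "high_website_rating": 0}
diner = {"friend_recommends": 0, "high_website_rating": 1}
print(take_the_best(bistro, diner, cues))  # -> "A": the single best cue decides
```

Note that once the friend’s recommendation discriminates, the website rating is never consulted: the ignoring is built into the stopping rule, not added afterward.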
3. According to Duckworth et al. (2016), situational self-control strategies include situation-selection strategies, which “involve intentionally choosing to be in situations that favor goal-oriented valuation systems over temptation-oriented valuation systems” (p. 40), and situation-modification strategies, which “entail purposefully changing [one’s] circumstances to advantage” (p. 40). Duckworth et al. provided both a theoretical framework and an overview of the available evidence supporting the effectiveness of situational self-control strategies in the domains of substance use, eating and exercise, academic performance, and saving for retirement.

Transparency

Action Editor: Robert L. Goldstone
Editor: Robert L. Goldstone

References

Allcott H., Braghieri L., Eichmeyer S., Gentzkow M. (2020). The welfare effects of social media. American Economic Review, 110(3), 629–676. https://doi.org/10.1257/aer.20190658
Axelsson C.-A. W., Guath M., Nygren T. (2021). Learning how to separate fake from real news: Scalable digital tutorials promoting students’ civic online reasoning. Future Internet, 13(3), Article 60. https://doi.org/10.3390/fi13030060
Brady W. J., Wills J. A., Jost J. T., Tucker J. A., Van Bavel J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, USA, 114(28), 7313–7318. https://doi.org/10.1073/pnas.1618923114
Breakstone J., Smith M., Ziv N., Wineburg S. (2022). Civic preparation for the digital age: How college students evaluate online sources about social and political issues. Journal of Higher Education. Advance online publication. https://doi.org/10.1080/00221546.2022.2082783
Breit J. (2018, July 20). How one of the internet’s biggest history forums deals with Holocaust deniers. Slate. https://slate.com/technology/2018/07/the-askhistorians-subreddit-banned-holocaust-deniers-and-facebook-should-too.html
Brodsky J. E., Brooks P. J., Scimeca D., Todorova R., Galati P., Batson M., Grosso R., Matthews M., Miller V., Caulfield M. (2021). Improving college students’ fact-checking strategies through lateral reading instruction in a general education civics course. Cognitive Research: Principles and Implications, 6, Article 23. https://doi.org/10.1186/s41235-021-00291-4
Caulfield M. (2018, December 19). Recalibrating our approach to misinformation. EdSurge. https://www.edsurge.com/news/2018-12-19-recalibrating-our-approach-to-misinformation
Craker N., March E. (2016). The dark side of Facebook®: The Dark Tetrad, negative social potency, and trolling behaviours. Personality and Individual Differences, 102, 79–84. https://doi.org/10.1016/j.paid.2016.06.043
Duckworth A. L., Gendler T. S., Gross J. J. (2016). Situational strategies for self-control. Perspectives on Psychological Science, 11(1), 35–55. https://doi.org/10.1177/1745691615623247
Ecker U. K. H., Lewandowsky S., Cook J., Schmid P., Fazio L. K., Brashier N., Kendeou P., Vraga E. K., Amazeen M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y
Gaspar J. M., Christie G. J., Prime D. J., Jolicœur P., McDonald J. J. (2016). Inability to suppress salient distractors predicts low visual working memory capacity. Proceedings of the National Academy of Sciences, USA, 113(13), 3693–3698. https://doi.org/10.1073/pnas.1523471113
Gaspar J. M., McDonald J. J. (2014). Suppression of salient objects prevents distraction in visual search. The Journal of Neuroscience, 34(16), 5658–5666. https://doi.org/10.1523/JNEUROSCI.4161-13.2014
Gigerenzer G., Gaissmaier W. (2011). Heuristic decision making. Annual Review of Psychology, 62, 451–482. https://doi.org/10.1146/annurev-psych-120709-145346
Halpern D. F. (2013). Thought and knowledge: An introduction to critical thinking. Psychology Press.
Hertwig R., Engel C. (2016). Homo ignorans: Deliberately choosing not to know. Perspectives on Psychological Science, 11(3), 359–372. https://doi.org/10.1177/1745691616635594
Hertwig R., Grüne-Yanoff T. (2017). Nudging and boosting: Steering or empowering good decisions. Perspectives on Psychological Science, 12(6), 973–986. https://doi.org/10.1177/1745691617702496
James W. (1904). The principles of psychology (Vol. 2). Henry Holt & Company.
Kozyreva A., Lewandowsky S., Hertwig R. (2020). Citizens versus the Internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest, 21(3), 103–156. https://doi.org/10.1177/1529100620946707
Lewandowsky S. (2020). Willful construction of ignorance: A tale of two ontologies. In Hertwig R., Engel C. (Eds.), Deliberate ignorance: Choosing not to know (pp. 101–117). MIT Press.
Lewandowsky S., Smillie L., Garcia D., Hertwig R., Weatherall J., Egidy S., Robertson R. E., O’Connor C., Kozyreva A., Lorenz-Spreen P., Blaschke Y., Leiser M. (2020). Technology and democracy: Understanding the influence of online technologies on political behaviour and decision-making. Publications Office of the European Union. https://doi.org/10.2760/593478
Loomba S., de Figueiredo A., Piatek S. J., de Graaf K., Larson H. J. (2021). Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5(3), 337–348. https://doi.org/10.1038/s41562-021-01056-1
March E. (2016, October 6). ‘Don’t feed the trolls’ really is good advice – Here’s the evidence. The Conversation. https://theconversation.com/dont-feed-the-trolls-really-is-good-advice-heres-the-evidence-63657
McGrew S., Smith M., Breakstone J., Ortega T., Wineburg S. (2019). Improving university students’ web savvy: An intervention study. British Journal of Educational Psychology, 89(3), 485–500. https://doi.org/10.1111/bjep.12279
Oreskes N., Conway E. M. (2011). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. Bloomsbury Publishing.
Panizza F., Ronzani P., Martini C., Mattavelli S., Morisseau T., Motterlini M. (2022). Lateral reading and monetary incentives to spot disinformation about science. Scientific Reports, 12(1), 1–15. https://doi.org/10.1038/s41598-022-09168-y
Reijula S., Hertwig R. (2022). Self-nudging and the citizen choice architect. Behavioural Public Policy, 6(1), 119–149. https://doi.org/10.1017/bpp.2020.5
Simon H. A. (1971). Designing organizations for an information-rich world. In Greenberger M. (Ed.), Computers, communications, and the public interest (pp. 37–72). Johns Hopkins University Press.
Soroka S., Fournier P., Nir L. (2019). Cross-national evidence of a negativity bias in psychophysiological reactions to news. Proceedings of the National Academy of Sciences, USA, 116(38), 18888–18892. https://doi.org/10.1073/pnas.1908369116
Thaler R. H., Sunstein C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
UNICEF. (n.d.). Cyberbullying: What is it and how to stop it. https://www.unicef.org/end-violence/how-to-stop-cyberbullying
Vogels E. A. (2021). The state of online harassment. Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/
Vosoughi S., Roy D., Aral S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Wineburg S. (2021, May 14). To navigate the dangers of the web, you need critical thinking – But also critical ignoring. The Conversation. https://theconversation.com/to-navigate-the-dangers-of-the-web-you-need-critical-thinking-but-also-critical-ignoring-158617
Wineburg S., Breakstone J., McGrew S., Smith M. D., Ortega T. (2022). Lateral reading on the open Internet. A district-wide field study in high school government classes. Journal of Educational Psychology, 114(5), 893–909. https://doi.org/10.1037/edu0000740
Wineburg S., McGrew S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record, 121(11). https://doi.org/10.1177/016146811912101102
Wineburg S., Ziv N. (2019, December 5). The meaninglessness of the .Org domain. The New York Times. https://www.nytimes.com/2019/12/05/opinion/dot-org-domain.html
Wu T. (2016). The attention merchants: The epic scramble to get inside our heads. Alfred A. Knopf.
Zimmermann L., Sobolev M. (2020). Digital nudges for screen time reduction: A randomized control trial with performance and wellbeing outcomes. PsyArXiv. https://doi.org/10.31234/osf.io/nmgdz
Zucker A. (2019). Commentary: Using critical thinking skills to counter misinformation. Science Scope, 42(8), 6–9. https://doi.org/10.2505/4/ss19_042_08_6

Published In

Current Directions in Psychological Science
Article first published online: November 8, 2022
Issue published: February 2023

Keywords

critical ignoring, deliberate ignorance, lateral reading, online environments, digital information literacy, critical thinking, information management

Rights and permissions

© The Author(s) 2022.
This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Authors

Notes

Stephan Lewandowsky, School of Psychological Science, University of Bristol Email: stephan.lewandowsky@bristol.ac.uk

