Bias in Science: Natural and Social
Joshua May
Published in Synthese 199: 3345–3366 (2021).
Abstract (150 words): Moral, social, political, and other nonepistemic values can
lead to bias in science, from prioritizing certain topics over others to the rationalization
of questionable research practices. Such values might seem particularly common or
powerful in the social sciences, given their subject matter. However, I argue first that
the well-documented phenomenon of motivated reasoning provides a useful
framework for understanding when values guide scientific inquiry (in pernicious or
productive ways). Second, this analysis reveals a parity thesis: values influence the
social and natural sciences about equally, particularly because both are so prominently
affected by desires for social credit and status, including recognition and career
advancement. Ultimately, bias in natural and social science is both natural and social
that is, a part of human nature and considerably motivated by a concern for social status
(and its maintenance). Whether the pervasive influence of values is inimical to the
sciences is a separate question.
Word count: 9,314 excluding references and abstract (11,198 total)
Keywords: values in science; wishful thinking; conflicts of interest; replication crisis;
research integrity; motivated reasoning; rationalization
1. Introduction
Science has long been influenced by financial conflicts of interest, politics, and other
biases. The replication crisis and high-profile cases of misconduct, however, have renewed
concerns about the generation of biased data and conclusions, owing perhaps to the
outsized influence of apparently “nonepistemic values,” such as political ideology and
personal gain. Due to a number of factors—e.g. small sample sizes, small effect sizes, and
ideological influences—one prominent scientist famously estimated that most published
scientific findings are false (Ioannidis 2005). A key concern is that a researcher’s
preferences or values can contribute to the rationalization of experimental designs or
interpretations of data that will bring the researcher status, support their favored ideology,
or promote what they perceive to be social justice (see e.g. Wilholt 2009).
Social science has received a disproportionate amount of criticism and skepticism.
With headlines like “How Academia’s Liberal Bias is Killing Social Science” in The Week
(Gobry 2014) and “Social Sciences Suffer from Severe Publication Bias” in Nature
(Peplow 2014), there certainly appears to be a “crisis of confidence” about findings in these
fields (Pashler & Wagenmakers 2012: 528). Similar sentiments can be found in the popular
media, such as The Washington Post, which has dispassionately stated: “The social end of
the science spectrum is notorious for publishing questionable research, even in the most
well-respected journals” (Gebelhoff 2017). In Scientific American, the science writer
Michael Shermer maintains that ideological bias is much worse in the social sciences
(2016).
One might think the influence of values is more prevalent in social science because
such researchers will be most motivated by moral and political agendas. As a team of
personality and social psychologists themselves put it, theirs “is the subfield of psychology
that most directly examines ideologically controversial topics, and is thus most in need of
political diversity” (Duarte et al. 2015: 2). Similarly, Steven Pinker recently writes:
“Moralization is the original sin of the behavioral sciences. …it’s irresistible to read our
morals into reality and describe the world as if it strove to implement our values” (in his
foreword to Fiske & Rai 2014). Not only is the subject matter of social science replete with
values; the phenomena studied are also highly complex, and thus “the connection between
theories, hypotheses and empirical findings could be more flexible, negotiable and open to
interpretation” (Fanelli 2010: 6-7; see also Fanelli et al. 2017).
I argue, however, for a parity thesis: despite some differences, the influence of
values is not significantly more prevalent in the social compared to the natural sciences.
The argument turns chiefly on two mutually reinforcing claims. The susceptibility claim is
that a variety of values influence all sciences, including ideological motivations that might
seem particular to the social sciences. The minority claim is that ideological motivations
are less powerful and pervasive than other motives, such as profit and social credit, which
are present throughout science. The analysis develops motivated reasoning as a unifying
framework for how values influence science—whether in pernicious, benign, or productive
ways. Although the archetypes of motivated reasoning, such as wishful thinking and
confirmation bias, are often regarded as inimical to knowledge production in science (e.g.
Anderson 2004; Brown 2013), the parity thesis does not take a stance on whether and when
such influences are epistemically problematic, pushing science away from its primary aim
of acquiring knowledge (cf. Solomon 2001; Bright 2017). I argue only that bias in the
natural and social sciences is both natural and social—that is, a part of human nature and
considerably motivated by a concern for social status—which reveals just how inevitable
and inherent values are in all of science.
2. Values and Bias
The term “bias” is often used pejoratively to refer to unfairly or unwarrantedly favoring an
idea or individual, as when a coin is biased toward heads or a jury member’s bias against
women produces a tendency toward distrusting their testimony. In the context of scientific
investigation, a preference for a certain idea (e.g. a hypothesis, interpretation, or approach)
can deviate from truth or be unwarranted by the evidence. Importantly, however, the term
“bias” can be used even more broadly to include nobler tendencies toward accepting a
particular conclusion, such as a bias toward the truth. Let us broadly say that in human
psychology a bias is a tendency to favor a certain conclusion. Although in paradigmatic
cases the conclusion is favored in an unwarranted way, we’ll see that it isn’t inherently
objectionable to have one’s reasoning guided by one’s goals and values.
In science, an investigator’s values can readily serve as sources of bias. Since one’s
values generally give rise to corresponding motivations, they can influence various
decisions made during scientific investigation. For example, a researcher’s values and
goals can sway choices about how to test hypotheses, describe the results, and assess the
evidence (see e.g. Elliott 2017), and corresponding labels are often given, such as “design
bias” and “analysis bias” (Stegenga 2018: ch. 10; Fanelli et al. 2017). Even the decision to
publish or report a particular finding (or null result) can be influenced by a researcher’s
desire to construct a manuscript narrative that is more likely to survive peer review—a
form of publication bias (Franco et al. 2014). Such decisions are arrived at through
reasoning—sometimes deliberate, sometimes unconscious—which makes a framework of
“motivated reasoning” apt. Before analyzing bias in terms of motivated reasoning, though,
it will be useful to consider some examples of bias in science.
Discussions of values in science often focus on how industry-funded research
spawns financial and political conflicts of interest. In light of the recent replication crisis,
however, some discussions have focused on various “questionable research practices” that
make one’s studies more likely to produce a statistically significant result (see e.g. Nosek
et al. 2012; Peterson 2019). Many scientists have powerful personal, professional, and
ideological motivations to engage in such practices in order to rack up more publications,
especially in more prestigious journals, which prefer exciting findings that substantially
advance the cutting edge of research. One recent study collected anonymous responses
from over 2,000 psychological scientists about their own engagement in ten questionable
research practices (John et al. 2012), and the vast majority of respondents (91%) admitted
to engaging in at least one of them. Of the practices, three stand out as most common, given
that about half of respondents (45-65%) reported engaging in them:
- failing to report all of a study’s dependent measures
- selectively reporting studies that “worked” (excluding e.g. null results)
- deciding whether to collect additional data after checking to see whether the results
  were significant (a form of “p-hacking”)
Although the survey attempted to incentivize honesty, some respondents probably
remained reluctant to even reveal such misdeeds anonymously.
Another questionable practice on the rise is the reporting of and reliance on
“marginally significant” results. A p-value of less than 0.05 is the conventional threshold
for statistical significance, yet some researchers report slightly higher p-values as
significant or “marginally significant” to ultimately support a hypothesis. Over the past
few decades, this questionable practice has increased substantially in psychology (Pritschet
et al. 2016). Of course, the choice to rely on marginal significance can be motivated by the
desire to publish or to advance a desired conclusion. One potential example of both
motivations is the widely cited—and apparently only—empirical attempt to demonstrate
that blind auditions in orchestras increase the number of women who win auditions by
reducing discrimination or implicit bias (Goldin & Rouse 2000). However, the media and
the authors themselves tout the desired conclusion based largely on marginally significant
effects with large standard errors (for discussion, see Pallesen 2019).
Another practice influenced by personal goals is the failure to disclose aspects of
one’s methods or data that could impact conclusions. An example can be found in one of
the most famous studies in psychology, the so-called “Stanford Prison Experiment” led by
Philip Zimbardo in 1971. As the story goes, Zimbardo randomly assigned healthy male
students at Stanford to play the role of either guards or prisoners over the course of two
weeks in a basement on campus. Zimbardo shut the study down after only a week because
the situation had apparently devolved into guards mistreating prisoners so badly that some
begged to be released. In a recent exposé (Blum 2018), however, it appears Zimbardo
misrepresented the study’s design and observations. According to new interviews and
newly uncovered transcripts of discussions with participants and others present, Zimbardo
essentially encouraged the mistreatment, the prisoners were not in fact free to leave
at will, and the pleas to be released were likely faked just so the
students could get back to their lives (in one case to go study for an important exam).
Such questionable research practices also occur in the natural sciences. One recent
study asked over 800 scientists working in ecology and evolutionary biology about how
often they and their colleagues engage in questionable practices (Fraser et al. 2018). The
researchers also directly compared their data to surveys of psychologists and found
markedly similar results, leading to the conclusion that questionable research practices are
“broadly as common in ecology and evolution research as they are in psychology” (p. 9).
For example, about two thirds (64%) of respondents said they had cherry picked which
results they reported in articles by omitting null findings that were not statistically
significant. And over half (51%) admitted to claiming that unexpected findings were
predicted in advance. Another study mined articles in the PubMed database to estimate the
likelihood of p-hacking, defined as acts where “researchers collect or select data or
statistical analyses until nonsignificant results become significant” (Head et al. 2015: 1).
Studies in the database included many disciplines in the natural sciences—including
biology, chemistry, medicine, and geoscience—yet the authors conclude that “p-hacking
is widespread in the scientific literature” (11).
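The optional-stopping practice that Head and colleagues describe can be illustrated with a toy simulation (a sketch of my own, not drawn from any of the studies cited): even when the true effect is exactly zero, peeking at the data after every batch and stopping as soon as p < 0.05 inflates the false-positive rate well above the nominal 5%.

```python
import math
import random

def two_sided_p(sample):
    """Approximate two-sided p-value for H0: mean = 0 (normal approximation)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    return math.erfc(abs(z) / math.sqrt(2))

def run_null_study(rng, peek):
    """Simulate one study in which the true effect is zero (data ~ N(0, 1)).

    peek=False: collect all 50 observations, then test once.
    peek=True:  test after every batch of 10 and stop as soon as
                p < 0.05 -- the optional-stopping form of p-hacking.
    """
    data = []
    for _ in range(5):
        data.extend(rng.gauss(0, 1) for _ in range(10))
        if peek and two_sided_p(data) < 0.05:
            return True  # "significant" result found early
    return two_sided_p(data) < 0.05

rng = random.Random(0)
trials = 2000
rate_fixed = sum(run_null_study(rng, peek=False) for _ in range(trials)) / trials
rate_peek = sum(run_null_study(rng, peek=True) for _ in range(trials)) / trials
print(f"fixed-sample false positives: {rate_fixed:.3f}")  # near the nominal 5%
print(f"with peeking:                 {rate_peek:.3f}")   # substantially higher
```

Nothing in the simulation fabricates an effect; the inflation comes entirely from the stopping rule, which is why deciding whether to collect additional data after checking for significance counts as questionable even though each individual test is computed honestly.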
Money also exerts a particularly powerful influence in many areas of the natural
sciences, given that findings often have direct commercial applications, from the
development of prescription drugs to nanomaterials. Companies often have a vested
interest in finding certain effects, e.g. that a new drug reduces nausea in cancer patients.
Companies also have an interest in finding null results, e.g. that there is no link between a
certain plastic material and neurological disorders. Industry-funded research commonly
produces markedly different results from government-funded work on the
same topic, owing to the biased adoption of certain experimental protocols, interpretations of
data, and dissemination of results (Wilholt 2009). Biased research funded by the tobacco
industry in the mid-twentieth century infamously influenced the study of smoking’s
adverse health effects (Oreskes & Conway 2010). While some findings in social science
generate commercial applications (e.g. self-help books), the markets for such products are
often much smaller.
Of course, publicly funded research can exhibit bias too, such as the tendency to
generate effects and avoid null results. One striking example involves the analysis of
randomized controlled trials of cardiovascular interventions, funded primarily by the
National Institutes of Health. Kaplan and Irvin (2015) compared the rate of null results
reported before and after the year 2000, when the detailed plans of such studies had to be
pre-registered—i.e., publicly documented before acquiring data and reporting results.
Remarkably, while 57% of the pre-2000 trials reported an effect of the study’s intervention,
only 8% of those published afterward did. As the authors explain, “Prior to 2000,
investigators had a greater opportunity to measure a range of variables and to select the
most successful outcomes when reporting their results” (8).
3. Reasoning Motivated by Values
Various social and psychological factors can explain how values influence science. But we
will focus on how reasoning generally, including scientific reasoning, can be nudged
toward certain conclusions by values that are embodied in one’s motivations. This
framework applies to a wide range of cases, from the influence of industry-funded research
to personal desires to achieve recognition.
3.1 Motivated Reasoning
Reasoning or inference is the process of forming or changing beliefs on the basis of other
beliefs or credences (Boghossian 2012). For example, I conclude (form the belief) that
smoking causes cancer on the basis of my acceptance of (belief in) the scientific consensus,
and I conclude that I shouldn’t smoke on the same grounds. Such reasoning processes can,
of course, be influenced by one’s values and goals. Confirmation bias, for example, is the
notorious and ubiquitous tendency to search for and interpret new evidence as supporting
conclusions that one already accepts (Kahneman 2011: 81). A teenager’s desire to fit in
with his peers who smoke can lead him to doubt the severity of the health risks or inflate
the benefits so that they seem to outweigh the costs. Sometimes we form beliefs non-
inferentially, as when we take our perceptual experiences at face value and simply believe
what we see. But it’s controversial to what degree observations themselves can remain
independent of one’s goals and values (that is, whether there is “cognitive penetration” of
perception; see Firestone & Scholl 2016; Peterson 2019). However, even if we don’t
always just see what we want to see, we can certainly justify what we want to justify. And
much of the scientific enterprise involves reasoning or inference that is open to being so
motivated.
Indeed, cognitive biases often work through reasons. Instead of merely opting for
the conclusion one prefers, human beings curiously come up with reasons, even if dubious
ones, in order to justify their decisions to others and, importantly, to themselves. Coming
up with reasons for a specific conclusion is just what we colloquially call “rationalization,”
which is often used in a pejorative sense, but it has a non-pejorative use as well (see
Davidson 1963). Sometimes we make a choice or form a belief automatically or intuitively
and only afterward—post hoc—come up with a justification for why, and one that doesn’t
necessarily correspond with the reasons that actually drove one to the conclusion in the
first place. One might be certain that incest is immoral, but the rationale one gives that it’s
harmful won’t necessarily apply to a one-off instance of protected intercourse among adult
cousins (Haidt 2001). Sometimes this is called “confabulation” in psychiatry and
neurology, but it is common in ordinary life.
Reasoning and rationalization can also occur before a decision—ante hoc—in order
to justify it in the first place (May 2018). The most familiar ante hoc rationalization is a
form of motivated reasoning, which has been studied extensively (Kunda 1990; Ditto et al.
2009). You want a beer with lunch, and because you first justify it as deserved, given how
busy the morning has been, you imbibe. Or you want to believe that your favorite team will
win, so you first rationalize that the star player’s injury is but a flesh wound.
Construed broadly, however, motivated reasoning is just reasoning shaped by one’s
goals, desires, or preferences. This needn’t be irrational at least because one’s inferences
can be driven by the desire for truth or accuracy (Kunda 1990). In science, such a
“veritistic” motive could even incentivize questionable research practices in order to
promote a finding that one is already convinced is true (Bright 2017). Like biases,
motivated reasoning is thus neither virtuous nor vicious in itself, even though the term is
often used pejoratively and to only refer to reasoning that is guided by motives other than
truth. Moreover, whether motivated by truth or other values, reasoning can occur before or
after the relevant conclusion is drawn or decision made (ante hoc or post hoc). For example,
Beck may go into therapy already thinking he’s a loser, but his present attempts to
scrutinize that belief are motivated by a desire for self-knowledge, not wishful thinking.
Similarly, although a scientist may embark on a research project with the intuitive belief in
her pet theory, her attempts now to seek evidence for or against it can be motivated purely
by a desire to seek the truth.
Importantly, to accept the existence, even prevalence, of human biases is not to
accept the postmodernist doctrine that truth is always relative and objectivity impossible.
The point, rather, is a commonsense one: while truth and objectivity are possible, humans
are fallible and conflicts of interest can get in the way, due to various forms of
rationalization. Such biases can certainly conflict with the scientific enterprise, given its
fundamental commitment to truth and justification. But this epistemic commitment alone
can’t neutralize motivated reasoning or rationalization, given that they work by generating
justifications, even if spurious ones. Moreover, given the level of self-deception that
frequently co-occurs with rationalization, the influence of one’s goals and values often goes
unnoticed.
Many philosophers of science have argued that values in science are inevitable and
aren’t inherently problematic (e.g. Longino 1990; Kitcher 2001). For example, as Elizabeth
Anderson (2004) documents, prior to the 1990s many researchers studying divorce
consistently looked only for negative effects on children, which presumed “traditional
family values.” To even look for positive or neutral effects of divorce on children, it took
researchers with a different set of values that arose from a more feminist approach to the
issue. So, as Anderson notes, it’s not a problem for values to influence science, especially
when they open new avenues of neglected inquiry. The problem is when values become
self-fulfilling prophecies or “operate to drive inquiry to a predetermined conclusion”
(Anderson 2004: 11). This concern in philosophy of science has been called the problem
of wishful thinking or “claiming that something is the case because one wishes it were the
case” (Brown 2019: 227).
Wishful thinking is often a form of motivated reasoning, but they are distinct for at
least two reasons. First, “motivated reasoning” needn’t be a pejorative term, as when one’s
reasoning is influenced by a desire to be accurate, whether because accuracy is incentivized
or intrinsically valued. Second, “wishful thinking” often connotes the forming of a belief
that would promote one’s narrow self-interest, but motivated reasoning is not restricted to
a certain class of desires. Partisan citizens who interpret all of the president’s actions in a
positive light, even those detrimental to their own economic well-being, exhibit motivated
reasoning even if not wishful thinking. Similarly, although the hasty conviction of an
impatient jury can amount to wishful thinking, the protracted deliberations of a
conscientious judge do not, even if her verdict is driven by a powerful desire to avoid
injustice. In practice, many cases of motivated reasoning are appropriately described as
wishful thinking. Nevertheless, the former provides a broader and unified understanding
of when various values, in the form of motivations, guide scientific reasoning, whether in
problematic or acceptable ways (see Figure 1).
Figure 1: A Taxonomy of Motivated Reasoning

Motivated Reasoning/Rationalization
  - Motivated by Truth/Knowledge
      - Ante hoc (e.g. ordinary deliberation)
      - Post hoc (e.g. self-knowledge)
  - Motivated by Nonepistemic Values
      - Ante hoc (e.g. wishful thinking)
      - Post hoc (e.g. confabulation)
3.2 Biased Reasoning
Motivated reasoning is such a core part of the human condition that it naturally occurs in
both everyday life (Kunda 1990) and the scientific enterprise (Koehler 1993; Nosek et al.
2012; Stegenga 2018: 108). Whether post hoc or ante hoc, reasoning motivated by values
can influence scientific investigations. For example, if a researcher wants badly to publish
in a prestigious journal, ante hoc rationalization can help to justify engaging in questionable
research practices. Similarly, researchers motivated to detect a positive effect of a new drug
may inadvertently use experimental designs more likely to produce the desired outcome
and later (post hoc) rationalize their protocol as unbiased to peer reviewers. Such
rationalizations can allow values to meet the “criterion of illegitimate guidance” in science,
wherein they serve as self-fulfilling prophecies, driving an inquiry toward a predetermined
conclusion (Anderson 2004: 11).
The rationalizations in motivated reasoning are sometimes conscious. And this may
well represent a key function of conscious deliberation in human beings: to convince others
or ourselves of our intuitive verdicts rather than to uncover the truth (Mercier and Sperber
2017). Often this amounts to post hoc rather than ante hoc rationalization, however.
Importantly, motivated reasoning can be unconscious and nonetheless powerfully
influence the intuitive verdicts themselves. When rationalizations are ante hoc and
unconscious, they are particularly apt materials for motivated reasoning that produces the
kinds of wishful thinking and self-fulfilling prophecies that represent the apparently
problematic form of value-laden scientific inquiry.
A large body of research suggests that reasoning motivated by values is ubiquitous.
A meta-analysis of “moral licensing,” for example, suggests that people will implicitly
justify morally questionable, but personally advantageous, behavior to themselves when
they have recently engaged in virtuous acts or affirm their virtuous traits (Blanken et al.
2015). In one study, participants were more likely to cheat if they had recently supported
environmentally friendly products (Mazar & Zhong 2010). This is just one form of
motivated moral reasoning, wherein people will unconsciously rely on whatever moral
principles help to justify a desired verdict (Ditto et al. 2009). A similar phenomenon is
“motivated forgetting,” in which we rationalize morally dubious acts or a better self-image
by failing to recall relevant moral norms or past infractions (Stanley & De Brigard 2019).
The literature suggests that in many circumstances people aren’t willing to rationalize fully
breaking the rules, but they are happy to “bend” them. One series of studies examined
under what conditions people will be dishonest when motivated to earn extra cash in an
experiment (e.g. Mazar et al. 2008). Participants were told they would receive money for
each math problem solved within a limited amount of time. When payment was based
merely on self-reported success, most participants dishonestly reported solving more
problems than they did, but just a few more. Most people can rationalize to themselves
cheating a little but not a lot (Ariely 2012), due to their conflicting commitments to moral
truth and self-interest (May 2018: ch. 7).
Clearly motivated reasoning is not restricted to certain domains and is particularly
suited to rationalizing choices that are personally beneficial but otherwise questionable.
Some evidence speaks specifically to the social implications of research in the natural
sciences, which sparks motivated reasoning. In a large sample of Americans, climate
change was perceived to be slightly less threatening among more mathematically and
scientifically skilled respondents (Kahan et al. 2012). While the more scientifically savvy
liberals in the sample perceived climate change as more threatening to humanity, the more
savvy conservative respondents perceived less risk. Apparently, a greater familiarity with
science only made participants better able to rationalize their preferred stance on this now
politicized issue. A recent meta-analysis suggests that this tendency to evaluate information
more positively simply because it favors one’s own political views (“partisan bias”) is
equally present among both liberals and conservatives (Ditto et al. 2019).
The above studies focus on moral or social motivations, but they paint a picture of
motivated reasoning that is particularly relevant to how values influence science generally.
Research practices regarded as merely “questionable” are especially subject to motivated
reasoning, for there is enough of a fudge factor—enough wiggle room—to justify rule-
bending to oneself or one’s research group.
3.3 Is Motivated Reasoning Problematic?
Although many philosophers of science reject the idea that science can or should be entirely
value-free, many would regard motivated reasoning as generally problematic, at least given
the kind of wishful thinking it often engenders (cf. Elliott 2017; Brown 2013).
Nevertheless, some philosophers have argued that motivated reasoning isn’t always
epistemically problematic. Even post hoc rationalization and confabulation can serve a
valuable purpose in trying to make sense of one’s automatic and intuitive attitudes
(Gazzaniga 1983; Bortolotti 2010; Summers 2017; Cushman 2020). Therapy, for example,
might inaccurately identify the source of one’s marital problems as narcissism, but the
diagnosis might not be far off and promote greater self-understanding in other facets of
one’s life.
Even if motivated reasoning were always epistemically vicious for the individual,
it might often lead to knowledge in science at the aggregate level. The scientific enterprise
does have mechanisms for self-correction, including peer review and replication efforts.
Although such mechanisms don’t always function properly (Estes 2012; Nosek et al. 2012;
Stroebe et al. 2012), studies of collective deliberation have shown that individual
irrationalities can produce knowledge when conflicting perspectives are put into dialog
(Mercier & Sperber 2017). In this way, recent game-theoretic models suggest that the
motives of individual scientists, even if self-centered, can produce a greater good through
a competitive marketplace of data and ideas (e.g. Zollman 2018; Bright 2017). Much like
the invisible hand of the market, individual biases needn’t impugn science as a whole (see
Solomon 2001, although she rejects the analogy).
Fortunately, we needn’t adjudicate here whether and when values, biases, or
motivated reasoning are epistemically problematic for science. Rather, our aim is only to
assess whether values influence the social sciences significantly more than the natural
sciences. It could be that, although both natural and social scientists individually engage in
motivated reasoning, the ultimate result is unbiased knowledge. As our present concern is
only with the parity issue, we turn to the question of which motivations are likely to
influence research in various scientific fields.
4. The Parity of Natural and Social Science
4.1 Motives in Science
With motivated reasoning as our framework for understanding the influence of values on
science, we should consider what motivates scientists. We have already encountered
several common motives that can influence an investigator’s reasoning, including financial
gain, career advancement, ideology, and even truth. However, at any given time multiple
motivations can arise that serve quite different end goals.
It is thus imperative to distinguish two kinds of goals, motivations, or desires.
Instrumental (or extrinsic) desires are those one has as a means to achieving another goal,
such as the desire to take a pill in order to relieve a headache. Ultimate (or intrinsic) desires,
on the other hand, are those one has for their own sake, such as the desire to relieve a
headache. It is tempting to treat all desires as instrumental except for the desire to gain
pleasure or to avoid pain (as the theory of psychological egoism would have us believe).
But there is ample empirical evidence that humans ultimately desire more than their own
self-interest, including helping others and doing what’s right (Fiske & Rai 2014; Batson
2016; May 2018). Desires for power, fame, prestige, and knowledge are also plausibly
valued intrinsically. So it is not a stretch to believe that scientific reasoning is often guided
by the desire to produce knowledge for its own sake, which is commonly identified as the
ideal. However, scientists can also be motivated to produce novel and interesting results,
largely as a means to other ultimate goals, such as career advancement, which bring
recognition and social status, even if not financial gain. When such credit is the ultimate
goal, it can lead to questionable or otherwise poor research practices, which can frustrate
the ultimate aim of acquiring knowledge (Nosek et al. 2012; Tullett 2015). Poor practices
can also be rationalized as a means to achieve the ultimate goal of promoting or upholding
one’s favored ideology. Since one regards the ideology as correct, truth (or acceptance of
it) is typically the ultimate goal. However, landing by luck on the truth via the path of
wishful thinking does not amount to knowledge; sound evidence is required. Thus, rather
than a truth or veritistic motive (contrast e.g. Bright 2017), I prefer to speak of a
“knowledge motive,” which sharply distinguishes it from the ideology motive.
Scientists no doubt have many ultimate goals, whether held consciously or
unconsciously. But four distinct categories stand out: knowledge, ideology, credit, and
May | Bias in Science
Page 10 of 22
profit (see Table 1). Two of these—profit and credit—are ultimately self-interested, but
desires to produce knowledge or the acceptance of an ideology are not egoistic, provided
we appropriately understand these as ultimate goals desired for their own sakes. Whether
self-serving or not, our framework of motivated reasoning suggests that any of these four
ultimate goals can sway scientific investigation toward furthering them.
Table 1: Some Sources of Motivated Reasoning in Science

Ultimate goal: Knowledge (production or acquisition of it)
Means to the end: produce quality data; address underexplored questions ignored by rival values; etc.

Ultimate goal: Ideology (promoting the acceptance of it)
Means to the end: produce quality data; fabricate or misrepresent data; etc.

Ultimate goal: Credit (acquiring it)
Means to the end: produce quality data; fabricate or misrepresent data; produce novel findings; explore popular topics; follow disciplinary norms; etc.

Ultimate goal: Profit (acquiring it)
Means to the end: produce quality data; fabricate or misrepresent data; promote surprising, counter-intuitive findings; etc.
Of course, science is conducted not only by individuals but by communities.
Sometimes it is appropriate to ascribe motives to such groups of researchers, as when a
particular laboratory is motivated to achieve collective credit or to promote their favored
theory. However, the values within a community do not always reflect the motivations of
each individual within it, particularly when it comes to dominant assumptions, ideologies,
and stereotypes. Sometimes a community’s dominant framework will show up in the
motivations of the individuals within it—e.g. motivations to uphold (or at least not flout)
stereotypes about testosterone as a masculine hormone (Fine 2010) or about divorce as
inherently damaging to a family (Anderson 2004). But the individual and community can
diverge. For example, if environmentalism is the most widely accepted ideology within a
research community, some individual scientists might produce work that supports (or
avoids conflicting with) conservationist policies, not for the ultimate goal of promoting the
ideology or policies but as a necessary means to achieving profit or credit within the
community’s accepted framework. Thus, we can ultimately understand the effects of
community-level assumptions in terms of the motivations of individual scientists. But,
again, to understand the intrinsic values that influence scientific practice, it is essential to
distinguish between the ultimate and instrumental goals of individuals. The question now
is which ultimate motivations are most prevalent among scientists.
4.2 What Motivates Most Scientists?
It is often difficult to know for sure what ultimately motivates people, let alone most
scientists. A natural place to start is to ask them. Since some of the ultimate goals are self-
serving (profit and credit), some scientists won’t be fully truthful when self-reporting their
motivations. Nevertheless, interviews, anonymous surveys, and case studies provide some
relevant evidence.
In 2009, collaborating with the American Association for the Advancement of
Science, the Pew Research Center surveyed over 2,500 scientists about political issues and
the nature of the scientific enterprise (Pew 2009). Respondents were primarily in the
natural sciences (namely, biological/medical, chemistry, geosciences, physics/astronomy),
with only 19% in the “other” category. Most worked in academia (63%) with the rest in
government, industry, non-profits, or other sectors. Most of the scientists reported opting
for their careers in order to “solve intellectually challenging problems,” “work for the
public good,” or “make an important discovery.” Remarkably, though, a third admitted that
a “financially rewarding career” was very or somewhat important, and the number jumps
to about half (51%) for the scientists working in industry. Given the stigma attached to
doing science for the money, these self-reports likely underestimate the reality, particularly
among early-career researchers, who are often paid little for the
hours they work and the education level they have attained. These data suggest what is
fairly commonsense. Scientists are highly motivated to solve challenging problems and
produce knowledge that makes a difference in the world. But they also want to gain from
it, partly in the form of financial gain.
Other personal gains include social credit, such as recognition or career
advancement. Indeed, competition is fierce across all of the sciences. In a focus group
setting, over 50 researchers from the biomedical, clinical, biological, and behavioral
sciences reported no positive effects of competition among practitioners in their fields
(Anderson et al. 2007). Instead, even though the scientists were not explicitly asked
questions about competition, their responses regularly turned to competition and how it
often leads to secrecy, sabotage, soured relationships, scientific misconduct, and
interference with peer review. Multiple participants mentioned the “practice of taking
photographs of poster presentations in order then to publish the results first” (451). Some
participants reported that, since “ideas get stolen constantly,” sometimes fellow scientists
will omit certain details of their research protocols in presentations or publications. Many
researchers may go into science primarily with a desire to produce knowledge, but its
competitive structure can inculcate desires for recognition and career advancement.
Vivid examples of the competition for credit and status can be found in cases of
fraud, although they go well beyond mere bias in science. Consider, for instance, what
motivated Diederik Stapel, the infamous data-fabricating social psychologist. There is no
theme in his research that supports a particular moral or political ideology, such as
socialism or conservatism. There doesn’t even appear to be a particular theory of the human
mind that Stapel’s work supports. He wasn’t known for an overarching framework, such
as prospect theory, or even a famous mechanism, such as confirmation bias, moral
licensing, or the fundamental attribution error. Stapel also didn’t seem to accrue much
financial gain from, say, high-profile speaking engagements or a self-help book centered
on a key finding, such as “power posing.” His own rationalization is that he was on a “quest
for aesthetics, for beauty” in the data he reported, but a New York Times interviewer reports
that Stapel “didn’t deny that his deceit was driven by ambition” (Bhattacharjee 2013)—
that is, credit or social status.
Similar stories of course crop up in the natural sciences as well (Stroebe et al. 2012).
Across a diverse range of scientific fields, many questionable research practices are largely
explicable in terms of the desire for personal gain. Financial gains in science, particularly
academia, are often small, but the rewards of social credit are substantial. Indeed, one
needn’t be motivated by the desire to land a job or a more prestigious appointment. One of
the most powerful drives among deeply social creatures like us is to acquire and maintain
recognition, status, pride, or respect among peers. If, to achieve such social status and
approval, we will engage in violence (Fiske & Rai 2014), p-hacking is a breeze. Yet this
powerful motive is common among human beings generally, not just social scientists.
Concern with competition and social status is a natural feature of human life, grounded in
our having evolved to live in groups saturated with social hierarchies and norms (Henrich
2016).
Overall, the framework of motivated reasoning reveals an approximate parity
between the natural and social sciences primarily through two mutually reinforcing claims.
First, research in the natural sciences is also susceptible to various values, including moral,
political, and other ideological motivations that otherwise seem endemic to social science.
Second, ideological motives are generally minor compared to other motivations present in
both domains, particularly credit but also profit. Of course, in some cases research has been
influenced by ideology. Progressive values appear to have influenced psychological studies
of conservatism and prejudice (Duarte et al. 2015), and staunch ideological opposition to
government regulation has influenced geoscience independently of a desire for profit
(Oreskes & Conway 2010). However, our concern is not with particular instances but
general trends, which can serve as grounds for comparing broad scientific domains.
4.3 Similar Patterns of Bias
We’ve seen that the basic motivations throughout science are the same, particularly desires
for credit, profit, ideology, and knowledge. Key incentives are also similar (e.g. publish or
perish, acquire grant funding), as are the methods (experiments, interviews, meta-analyses,
theory building, case studies, etc.). Accordingly, we’ve seen that questionable research
practices arise equally in all scientific domains (Section 2). However, one might argue that
there must be different social arrangements or norms that give rise to greater bias in social
science, because that domain has particularly low replication rates, high publication bias,
and other patterns indicative of motivated reasoning.
Some systematic analyses of scientific literatures do purport to reveal significant
differences in such patterns. An analysis of thousands of papers across scientific disciplines
suggests that the bias in favor of publishing positive, as opposed to null, results is more
common in the social sciences (Fanelli 2010; see also Franco et al. 2014). This is just one
of many potential biases in science, and the parity thesis does not insist on symmetry for
each. However, a large random sample of meta-analyses in the physical, biological, and
social sciences provides a more comprehensive investigation of multiple biases (Fanelli,
Costas, & Ioannidis 2017). The authors found some evidence that the social science
literature more strongly exhibits some patterns that lead to an overestimation of effect sizes,
such as the tendency to publish larger effects for smaller studies and a decline in the
magnitude of particular effects over time as they are replicated.
However, such analyses reveal more commonalities than differences among
scientific domains. In their comprehensive examination of meta-analyses, Fanelli and co-
authors (2017) conclude that across all scientific domains one ought to “interpret with
caution results of small, highly cited, and early studies” (5), which tend to overestimate
effects due to a range of factors, including industry influence and pressures to publish
among early-career researchers. Although some statistically significant differences
emerged between the social sciences and the physical sciences (less commonly the
biological sciences), there were only a few differences out of the six key bias patterns.
Moreover, the authors clarify that the differences are “small in magnitude and not
consistently observed across robustness analyses” (4). Indeed, most of the patterns
measured across all domains were “relatively small,” having only “accounted for 1.2% or
less of the variance in reported effect sizes” of the sampled meta-analyses (5). Dwelling on
such minor differences between scientific domains misses the forest for the trees.
Examinations of meta-analyses provide one important aerial view, but we can also
zoom in to remind ourselves that many of these patterns of bias are quite visible outside of
the social science literature. Publication bias, for example, is on full view in various natural
sciences, from biology to epidemiology (Pautasso 2010). One analysis of nearly 600 trials
in the database ClinicalTrials.gov found that over a quarter had not been published five
years after completion in 2009, the vast majority of which had no results posted in the
database (Jones et al. 2013). Non-publication of results was more common among trials
funded by industry, although publicly funded trials can exhibit a bias against reporting null
results too (recall Kaplan & Irvin 2015). Consider also that a recent attempt to replicate 53
published findings in cancer research was only able to reproduce 11% of the effects (Begley
& Ellis 2012). Although this does not by any means reflect a precise estimate of the
replication rate in oncology, other attempts to confirm published findings also report low
success rates (e.g. 20-25% in Prinz et al. 2011). Indeed, replications can be even more
difficult when experiments involve a single crucial observation that is difficult to repeat in
rare circumstances or with inaccessible populations of people and other organisms. Studies
of rare brain disorders, for example, are sometimes published in top journals like Science
and shape the field with only a sample of two patients (e.g. Anderson et al. 1999). With
small samples and rare circumstances that are difficult to repeat in many natural sciences,
findings are more likely to be influenced by biases (Ioannidis 2005).
Of course, one field or literature does not represent the whole of either natural or
social science. Financial conflicts of interest are legion in medicine but comparatively
infrequent in cultural anthropology. Political bias, in the form of a motivation to promote
a preferred ideology, is probably more frequent in political science and economics than in
chemistry. Some such comparisons between individual disciplines and individual biases
might be fruitful (Fanelli et al. 2017). However, when it comes to the influence of values
generally, the various sciences are more alike than they are unalike. The general
explanation for why an approximate parity holds between natural and social science is
precisely that there is little that could unify a heterogeneous group like the natural sciences
while distinguishing it from the social sciences in terms of bias. Each shares the same
mechanism of motivated reasoning, which can operate on a diversity of motives among
researchers.
Consider an analogy with dogs. Although there may be interesting differences
between individual dog breeds, cleaving all dogs into those with spots and those without
will not yield many informative differences, except in terms of spots. Dogs in both groups
will generally be susceptible to training or disease, and any differences observed are likely
to be minor compared to commonalities. Similarly, natural and social science are defined
by their subject matters, not their norms or patterns that appear in some literatures. There
is little reason to expect a systematic connection between a wide range of topics and the
values that can influence their investigation. The ideology motive may seem to be an
exception, since moral, political, and social values are frequently the subject of
investigation across the social sciences. However, we’ve seen that moral and political
values play only one role, and a relatively minor one compared to others.
5. Defending the Parity Claim
The presumptive case in favor of the parity claim might be sufficient if there were no
powerful objections to it. Although extant analyses of scientific literatures do not upset
parity, there may be general reasons to believe that various values have a significantly
greater influence in the social sciences that isn’t easily detectable in meta-analyses,
replication efforts, and similar literature patterns. We’ll now consider general reasons for
rejecting parity and see that the rebuttals will continue to highlight our two mutually
reinforcing claims of susceptibility and minority.
5.1 Social Science as Politically Biased?
Even if tribalism is equally present among both liberals and conservatives (Ditto et al.
2019), ideological influences could be more prevalent in social science if it has
significantly less political diversity.
Several studies have mined voter registration data to determine how many
Democrats versus Republicans there are among the faculty at elite liberal arts colleges in
the United States (e.g. Klein & Stern 2005). One of the most recent voter registration
studies examined data on more than 5,000 professors and across many academic
disciplines, but only at rather elite colleges, namely, the top 51 liberal arts colleges in the
2017 U.S. News rankings (Langbert 2018). The mean Democratic-to-Republican ratio was
overall quite high at about 10:1 (rounded to the nearest whole number). Broken down by
domain, the mean Democratic-to-Republican ratio among this sample of social scientists
was 12:1 while it was 6:1 for those working in sciences regarded as “hard” (e.g. chemistry,
physics, and engineering). So, by this measure, there does appear to be more political
homogeneity among social scientists employed at elite liberal arts colleges like Oberlin
and Bryn Mawr.
Less extreme differences have been found by surveying nearly 1500 professors
from a wider range of American universities (Gross & Simmons 2007). Across the board,
slightly more professors in the social sciences identified as liberal (58% liberal, 37%
moderate, 5% conservative) compared to those in physics/biological sciences (45% liberal,
47% moderate, 8% conservative). Similarly small differences can be found in the 2009
Pew survey of scientists. Like social scientists, the natural scientists surveyed skewed
liberal: about half described themselves as outright liberal, and a large majority (81%)
identified with the Democratic Party or leaned that way. Remarkably, though, only a small
minority of the scientists (9%) described themselves as conservative (compared to 37% of
the general public).
So social scientists are only slightly more politically homogenous than natural
scientists, and there are few conservatives in either group. But presumably few physicists
do research that could be shaped by their political views. Perhaps even slightly less
ideological diversity could sway more social research because its findings are more often
relevant to social policy. One recent study of abstracts in political psychology reports that
conservatism is described more negatively and is more likely the target of explanation,
compared to liberalism (Eitan et al. 2018). And political ideologies seem to have led some
social scientists even to fabricate data out of whole cloth, as in the case of Michael LaCour, who
attempted to show that attitudes toward same-sex marriage are more likely to change after
talking with a gay canvasser (Konnikova 2015).
However, the extent of the ideology motive can easily be overblown and is fairly
consistent across domains of science. The study of political ideology itself is but a small
portion of psychological science, so we can’t generalize across other areas of study within
the social sciences. Moreover, a recent analysis of nearly 200 psychology articles found
that their political slant was not strongly related to their replication success, sample size,
or effect size (Reinero et al. 2020). Finally, although we tend to think of findings in natural
science as disconnected from moral or social issues, many natural sciences have long been
politicized, from the persecution of Galileo for heliocentrism to environmental regulations
of acid rain and development of the atomic bomb. Connections among political values and
natural sciences continue into the 21st century, of course. Geology, biology, biomedicine,
neuroscience, and physics, for instance, have direct implications for many hotly debated
issues of policy and ideology, including climate change, evolution, mandatory
vaccinations, free will (and punishment), sex/gender differences, and intelligent design of
the cosmos (see e.g. Solomon 2001; Oreskes & Conway 2010; Elliott 2017; Peterson
2019). Not only do preachers and politicians have a vested interest in certain empirical
findings or conclusions; natural scientists too are people with values and policy
preferences, which can influence the questions they ask, the methods used to test
hypotheses, and the portrayal of their results.
More importantly, as we’ve seen, ideological motives are but one of many that can
influence research practices and interpretation of data. The desire for credit (status,
recognition, prestige) and profit (financial gain) are at least equally present and plausibly
more prevalent. So, even if there is less political diversity in social science, and even if
that’s important to correct (Duarte et al. 2015), it’s not enough to demonstrate greater bias
within social science, compared to natural science. Consider an analogy. Suppose two cars
share the same major defects—faulty seatbelts, say—but one also has a small scratch on
the fender. In terms of defects, these two cars are more alike than they are different. Put in
terms of strengths instead of flaws, imagine two co-workers, A and B, share nearly all of
the same virtues—they’re both exceptionally productive and cooperative. But A is also
extremely punctual. It’s true in one sense that A is better than B, but the two are more alike
than different. We wouldn’t expect A to receive higher pay increases than B.
5.2 Moral Values as Especially Powerful?
A natural follow-up is that, while the profit and credit motives can be found in both natural
and social science, moral motives are more powerful or prevalent in social science. After
all, moral values often rise to the level of convictions or sacred values, which can function
as such rigid fixpoints that they cloud judgment in particularly powerful ways (Tetlock
2003). One line of research does suggest that strong moral convictions, compared to strong
non-moral attitudes, make people less tolerant of opposing viewpoints and less inclined to
work with opponents to resolve disagreements (Skitka et al. 2005).
There are two ways to alleviate this concern. First, moral convictions obviously
affect natural science as well. We’ve already seen how values influence the production and
interpretation of climate science and biomedicine (Kahan et al. 2012; Oreskes & Conway
2010). Similar fervor surrounds research on the genetics of intelligence (Kampourakis
2019) and sex/gender differences at the biological levels of hormones and
neurotransmitters (Fine 2010). Again, large portions of natural science have direct
implications for deeply held beliefs connected to moral values—e.g. areas of physics
(religion), geoscience (climate change), and neurobiology (sex essentialism).
Second, we should be wary anyway of focusing too intently on moral and political
values, especially when motives provided by financial gain and social credit can be equally,
if not more, powerful and prevalent in science. Moreover, not all moral beliefs are
convictions or sacred values, and not all convictions are insensitive to contrary evidence.
We have all likely witnessed this in our own lives, but it has also been documented
experimentally using rigorous methods and open science. For example, most people will
assent to the general utilitarian principle “In the context of life or death situations, always
take whatever means necessary to save the most lives.” But their credence in this belief
lowers when presented with a single counter-example from the ethics literature—the
famous Transplant scenario, in which a doctor is able to kill one patient in order to use his
organs to save five others (Horne, Powell, & Hummel 2015). Similar results can be found
with even more controversial and deeply held attitudes, such as opposition to vaccines.
Researchers have found that, compared to arguments that debunk anti-vaccination myths,
opponents of vaccinations weaken their views when presented with factual information
about the harms of communicable diseases (Horne, Powell, Hummel, & Holyoak 2015).
5.3 More Confirmation and Less Controversy in Natural Science?
Even if the natural sciences are just as susceptible to motives that can influence reasoning,
one might argue that there is much more subjectivity and controversy in social science,
which makes values more influential. As one commentator put it, “the intellectual
subjectivity inherent in the social sciences leaves more room for self-serving interpretation
of the data” than with hard variables, such as “physical objects” (cf. Estes 2012: 4).
Similarly, in their discussion of the replication crisis in social psychology, Earp and
Trafimow raise the worry that “human behavior is notoriously complex” and humans are
not “relatively simple objects or organisms” such as “billiard balls, or beavers, or planets,
or paramecia” (2015: 3; see also Fanelli 2010; Duarte et al. 2015: 2). Hypotheses in the
natural sciences, in contrast, are arguably tested against evidence from indisputable
observables, which might seem more protected against all forms of motivated reasoning.
However, evidence in many natural sciences shares these basic characteristics. At
the very least, there is always much dispute at the foundations and cutting edges of science,
from grand unifying theories in physics to the effectiveness of medical treatments (see e.g.
Ioannidis 2005; Stegenga 2018). Even if data under a certain description are indisputable,
the data don’t interpret themselves, and they can’t support or reject a hypothesis without
such interpretation. In physics, neurobiology, medicine, and nutrition, for example, it may
seem that the core data are indisputable, but each area deals with extremely complex
phenomena—as complex and mysterious as human behavior—which generates plentiful
disputes about which hypotheses are best supported by the accepted empirical evidence.
Ultimately, the relevant evidence in social science is just as observable and can be just as
indisputable. In psychology, for example, common data points include donation amounts,
reaction times, and boxes ticked on a questionnaire. Even qualitative data are often equally
concrete and objective phenomena, such as recorded testimonials and observed cultural
practices. The wiggle room is primarily in the operationalization and interpretation of data,
including quantitative data in the natural sciences, such as t-cells counted, weight lost,
neuronal excitation, and the distances objects have traveled.
Moreover, even if it were true that data and phenomena in the social sciences are
more complex and contested, this limits the financial incentives that generate more
powerful influences. While some researchers might profit from psychology books on self-
help or sociology books on racial disparities in society, there are many more and much
greater opportunities for wealth and recognition among scientists studying the next
breakthrough in pharmaceuticals, nutrition, green energy, or biological materials (Oreskes
& Conway 2010; Stegenga 2018). Gains from the development of gratitude journals or the
discovery of a new species of primate pale in comparison to the rewards of breakthroughs
in cancer or Alzheimer’s research. There is thus a positive relationship between the
practical applications commonly found in natural science and some of the most powerful
motivations, particularly credit and profit. So, even if the concepts and theories in social
science are more contested, greater certainties in natural science, whether real or imagined,
strengthen other, more powerful influences in a domain where conflicts of interest abound.
5.4 Can We Trust the Social Science Used?
Finally, taking a step back, let’s briefly address a worry about the very social science
research we’ve relied on to identify and analyze values in science. The research on
motivated reasoning, for example, comes directly from the sort of social science that some
regard with suspicion, for apparently being particularly biased. Even if social science
research is no more subject to motivated reasoning than natural science, one might
conclude from my argument that the former (and thus the latter) are so problematically
biased that we can’t trust their results.
It is true, so far as it goes, that I assume that some social science research is reliable
and reveals facts about human psychology, including the minds of scientists. But that, I
take it, is hardly contested in this debate. At any rate, some studies are better than others;
some should be regarded as preliminary, and no philosophical conclusion should be
staked on one experiment or a small series of studies conducted by one research group
(Machery & Doris 2017). However, motivated reasoning is a commonsense phenomenon
that is well-documented and replicated in diverse literatures by multiple labs and supported
by meta-analyses. We wouldn’t necessarily need the science to raise the worry, but it
certainly bolsters the case. Moreover, some of the empirical support for the parity thesis
comes from merely qualitative and descriptive statistics (e.g. rates of responses from
participants), not inferential statistics such as p-values which have received much scrutiny
in the wake of the replication crisis.
6. Conclusion
We have seen how many of the putative biases that affect science can be explained and
illuminated in terms of motivated reasoning, which yields a general understanding of how
a researcher’s goals and values can influence scientific practice (whether positively or
negatively). This general account helps to show that it is unwarranted to assume that such
influences are significantly more prominent in the social sciences. The defense of this
parity claim relies primarily on two key points. First, the natural sciences are also
susceptible to the same values found in social science, particularly given that findings in
many fields have social or political implications. Second, the ideological motivations that
might seem to arise only in social science are minor compared to others. In particular, one’s
reasoning is more often motivated by a desire to gain social credit (e.g. recognition among
peers) than a desire to promote a moral or political ideology. Although there may be
discernible differences in the quality of research across scientific domains, all are
influenced by researchers’ values, as manifested in their motivations.
We began with the notion that bias in science is a problem, and a particularly
pressing one given concerns about replicability and questionable research practices.
However, I have not attempted to adjudicate whether the influence of any values in natural
or social science is ultimately pernicious. My goal has only been to make the case that we
ought to treat like cases alike. When value influences are detrimental, we should regard
them as disconcerting in both areas of science; when values are innocuous or even
beneficial, we ought to treat them as such in both domains. Whether scientific domains are
companions in innocence or in guilt, we should recognize that motivated reasoning
influences a wide range of research, which makes vivid how integral values are to the
whole enterprise of science.
Acknowledgements
Versions of this paper were presented at the Philosophy of Science Association, a Philosophy and
Neuroscience Workshop organized by John Bickle and Antonella Tramacere, and the Values in
Medicine, Science, & Technology Conference organized by Matt Brown. In addition to the
organizers and audience members at these events, I thank the following for feedback on the
manuscript: Marshall Abrams, Rajesh Kana, Kevin McCain, and Alexa Tullett. Work on this article
was supported by an Academic Cross-Training Fellowship from the John Templeton Foundation.
The opinions expressed in this publication are those of the author and do not necessarily reflect the
views of the Foundation.