How Institutional Failures Undermine Trust in Science
The Case of a Landmark Study on Sustainability and Stock Returns
Andrew A. King
For a long time, I resisted the accumulating evidence that our institutions for curating trustworthy science were failing.
I believed our academic gatekeepers—editors, reviewers, and research‑integrity officers—were quietly doing their jobs: overstretched, but nevertheless curating a trustworthy scientific record and correcting it when problems appeared.
That belief ended when I attempted to replicate an extraordinarily influential article: “The Impact of Corporate Sustainability on Organizational Processes and Performance,” by Robert Eccles, Ioannis Ioannou, and George Serafeim. The paper has been cited more than 6,000 times. Wall Street executives, top government officials, and even a former U.S. Vice President have all referenced it.
It contains serious flaws and misrepresentations.
The article appeared in a prestigious journal, Management Science. The authors work at highly reputed institutions. As a result, I thought correcting the record would be straightforward.
I ran into barrier after barrier.
The authors ignored me, the journal refused to act, and the scholarly community looked the other way. Two universities disregarded evidence of research misconduct—even after the authors admitted publishing a misleading report.
The article remains largely uncorrected—misleading thousands of people each year.
I believe our systems for curating trustworthy science are broken and need reform.
The Authors
On September 11, 2023, I emailed Eccles, Ioannou, and Serafeim to explain that I was attempting to replicate their study and had encountered serious problems:
- The reported method did not work as described.
- A key result seemed to be mislabeled as statistically significant when it was not.
- Some measures could not be constructed as described.
- Critical statistical tests appeared to be missing.
- The sample was highly unusual.
I explicitly acknowledged uncertainty and asked for help. Over roughly half a dozen follow‑up emails, I shared progress updates and offered to collaborate.
I received no response.
My experience is not unusual. Bloomfield et al. (2018) show that requests from replicators are often ignored, delayed, or deflected. Because published articles frequently omit key details, authors can block replication simply by refusing to engage.
The Community of Scholars
I turned to colleagues and respected scholars for advice. I asked for help encouraging the authors to engage. I emphasized that mistakes happen—my own work is not unblemished—and that correcting errors strengthens, rather than diminishes, scholarly standing. I heard:
- “I can’t do anything—it would cause conflict.”
- “Your email is too long.”
- “I’m underwater for the next month.”
- “I’m too much of a coward.”
The last came from an internationally respected scholar with a chaired position at a top university. I appreciated the candor. It revealed an uncomfortable truth: much of social science operates on a culture of go-along, get-along.
“Once a paper is published… it is more harmful to one’s career to point out the fraud than to be the one committing it” (Bloomfield et al., 2018).
The Journal
Having received no response from the authors, I contacted Management Science. After getting advice, I submitted a comment.
It was rejected.
The reviewers did not address the substance of my comment; they objected to my "tone". They told me that published authors should be granted “discretion” in conducting their work and that replicators should tread very lightly. One reviewer was “inclined to turn down any invitation to review a revision” unless it was accompanied by a note from the original authors.
Knowing such a note would never come, I appealed. Rejected. I appealed again. Rejected.
The authors did admit to the editor that they had misreported a key finding — labeling it as statistically significant when it was not.
The authors claimed the error was a “typo.” They intended to type “not significant” but omitted the word “not.”
They did not address the implications of this "typo"—that it misrepresented the evidence for a central claim of the paper, that corporate sustainability increases stock returns.
I asked the journal to correct the record. Rejected.
My experience is not unusual. As one respondent told Bloomfield et al. (2018): “Replication studies don’t get cited, and journals don’t publish them. Nor do people get promoted for replication studies.”
Help from Outsiders: LinkedIn and an Upstart Replication Journal
I decided I needed to go outside the standard process and post publicly about the "typo" on LinkedIn.
Days later, I heard that the journal would publish a correction.
I was told the authors had submitted the correction before my post, but it had been misplaced and forgotten.
I believe the journal's new editor found this news to be as incredible as I did. He quickly published an erratum.
I also submitted my replication to the Journal of Management Scientific Reports (JOMSR). This upstart publication was started in 2022 by a small group of courageous scholars who wanted to provide an outlet for replication studies like mine. I was impressed by their thorough reviews and tough guidance.
In spring 2025, JOMSR published my replication study.
Research Integrity Offices (Part 1)
While revising my replication for publication, I became convinced of a more serious issue: the method reported in Eccles, Ioannou, and Serafeim (2014) was not the method actually used. Worse, the true method could not support their "findings".
I contacted the authors again. No response.
I decided a research integrity complaint was in order.
In July and August 2025, I submitted complaints to Harvard Business School and London Business School. I alleged that the reported method could not have been conducted as described—and that the results were therefore uninterpretable.
(A technical aside describing the study’s method may be useful here. Feel free to skip.)
- The empirical strategy in Eccles, Ioannou, and Serafeim (2014) rests on a demanding requirement: the “treated” and “control” firms must be so closely matched that which firm is treated is essentially random. The authors appear to recognize this, reporting that they used very strict matching criteria “to ensure that none of the matched pairs is materially different.”
- Despite their strict criteria, they also claim to have achieved remarkable success in finding precise matches, reporting that 98% of their “high sustainability” firms could be matched with a near-twin “low sustainability” firm. Yet when I attempted to replicate the study, I achieved a much lower match rate—fewer than 15%. To better understand the discrepancy, I conducted a probability analysis using a Monte Carlo simulation. I determined that the reported matching success was vanishingly improbable: far less likely than winning the lottery.
- Either their matching process was precise, in which case they would not have enough pairs to run their analysis, or it was loose, in which case their analysis could not be interpreted.
(End of aside.)
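The logic of that probability analysis can be sketched with a small Monte Carlo simulation. To be clear, everything below is illustrative: the sample size, candidate-pool size, and per-pair matching probability (`n_treated`, `n_controls`, `p_match`) are hypothetical stand-ins, not figures from the original paper or from my published replication.

```python
import random

def simulate_match_rate(n_treated=180, n_controls=1000, p_match=0.001,
                        trials=100_000, seed=0):
    """Estimate how often strict matching succeeds for at least 98% of
    treated firms, assuming each candidate pair independently satisfies
    all matching criteria with probability p_match."""
    rng = random.Random(seed)
    # Chance that at least one of n_controls candidate firms satisfies
    # every matching criterion for a given treated firm.
    p_firm = 1.0 - (1.0 - p_match) ** n_controls
    hits = 0
    for _ in range(trials):
        # Number of treated firms that find at least one acceptable match.
        matched = sum(rng.random() < p_firm for _ in range(n_treated))
        if matched / n_treated >= 0.98:
            hits += 1
    return hits / trials  # fraction of trials reaching a 98% match rate
```

Under a strict per-pair criterion (a small `p_match`), the probability of finding acceptable matches for 98% of treated firms collapses toward zero; loosening the criterion raises the match rate but weakens the comparability of the pairs. That is exactly the dilemma described in the aside.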
Shortly after I submitted my complaint, the authors acknowledged they had misreported their method.
But they did not ask Management Science to correct the text of their article.
Research Integrity Offices (Part 2)
Eccles, Ioannou, and Serafeim explained that the misreport was an unfortunate accident. There had been two studies, they said, and the false description belonged to an “exploratory” study that was later removed to satisfy length requirements; the sentences describing its matching process, however, were inadvertently left behind. As a result, those sentences now appeared to describe the “main” analysis, but that is not what they had intended. It might look like misrepresentation, but it was just an editing error.
They did not explain that this meant all of their results were uninterpretable.
The explanation also conflicts with the record.
- The incorrect claim appears in the earliest available draft of their article, marked "NEW!" on HBS's site.
- Over several later drafts, the false claim was retained and even edited, rather than removed.
- The “exploratory study” does not appear in any available draft.
In light of these inconsistencies, I submitted a revised complaint to Harvard Business School and London Business School.
Harvard Business School responded: “Whether or how the School does or does not move forward… will not be communicated to you.”
LBS was more open and responded quickly, concluding that the false claim was not an “intentional falsehood”. Why? Because the LBS professor (Ioannou) “did not have access to the raw data and did not conduct the analyses in question.” And in any case, the problem was of a “minor nature”, apparently because it pertained to some other study and thus did “not impact the main text, analyses, or findings.”
Sadly, LBS’s response is empty.
- Data access is immaterial. I did not allege data fabrication.
- The false claim is not minor. It is the difference between a usable and useless study.
- It does not address the central question: Did the exploratory study ever exist? If not, false statements were published twice—first in the article, and then in the offered explanation.
LBS did conclude that the author engaged in “poor practice”, which they planned to address through “education and training or another non-disciplinary approach.”
I suggest LBS begin by explaining an author’s duty to correct errors in published work.
Where This Leaves Us
Eccles, Ioannou, and Serafeim (2014) remains only partly corrected. Diligent readers may discover the erratum correcting the misreported significance finding, but nothing in the pages of Management Science will tell them of the misreported method. Thus, thousands of readers remain misled.
Our institutions for curating trustworthy social science are not working. They must be reformed and revitalized.
What you can do
- Stop citing single studies as definitive. They are not. Check if the ones you are reading or citing have been replicated.
- If you or someone else finds an error in your published work, publish a correction.
- If one of your colleagues is behaving unprofessionally, tell them to stop.
- Support replication. Encourage others to do so. Support the Journal of Management Scientific Reports.
- Find out about the research integrity policies at your institution. If they are weak, strengthen them.
- If you know Eccles, Ioannou, and Serafeim, ask them to retract their article, or at least publish another correction.
What else needs to change
For years, I studied industry self-regulation. The evidence is clear: it works only when it is transparent, independently monitored, and supported by graduated sanctions. Applied to the curation of science, that means:
- Journals should disclose comments, complaints, corrections, and retraction requests. Universities should report research integrity complaints and outcomes.
- An independent third party should audit the process.
- Penalties should reflect the severity of the violation, not be all-or-nothing.
- And to ensure the system works, we need what Andrew Gelman and I call Further Review.
More in future posts.