Creating TED talks from peer-reviewed growth mindset research papers with colored brain pictures

The TED talk fallacy – When you confuse what presenters say about a peer-reviewed article – the breathtaking, ‘breakthrough’ strength of findings demanded for a TED talk – with what a transparent, straightforward analysis and reporting of relevant findings would reveal. 

A reminder that consumers, policymakers, and other stakeholders should not rely on TED talks for their views of what constitutes solid “science” or “best evidence,” even when presenters are established scientists.

The authors of this modest but overhyped paper do not give TED talks. But this article became the basis for a number of TED and TED-related talks by a psychologist who integrated a story of its findings with stories about her own publications. She has a booking agent for expensive talks and a line of self-help products. This raises the question: should such information routinely be reported as a conflict of interest in publications?

We will contrast the message of the paper under discussion in this post, and of the TED talk built on it, with a new pair of comprehensive meta-analyses. The meta-analyses show that associations between growth mindset and academic achievement are weak and that interventions to improve mindset are ineffectual.

The study

 Moser JS, Schroder HS, Heeter C, Moran TP, Lee YH. Mind your errors: Evidence for a neural mechanism linking growth mind-set to adaptive posterror adjustments. Psychological Science. 2011 Dec;22(12):1484-9.

 Key issues with the study.

The abstract is uninformative as a guide to what was done and what was found in this study. It ends with a rousing promotion of growth mind set as a way of understanding and improving academic achievement.

A study with N = 25 is grossly underpowered for most purposes and should not be used to generate estimates of associations.

Key details of methods and results needed for independent evaluation are not available in the article.

The colored brain graphics in the article were labeled “for illustrative purposes only.”

Where would you find such images of the brain, not tied to the data, in a credible neuroscience journal? Articles in such journals are increasingly retracted when suspected pasted-in or altered brain graphics are discovered.

The discussion has a strong confirmation bias, ignoring relevant literature and overselling the use of event-related potentials for monitoring and evaluating the determinants of academic achievement.

The press release issued by the Association for Psychological Science.

How Your Brain Reacts To Mistakes Depends On Your Mindset

Concludes:

The research shows that these people are different on a fundamental level, Moser says. “This might help us understand why exactly the two types of individuals show different behaviors after mistakes.” People who think they can learn from their mistakes have brains that are tuned to pay more attention to mistakes, he says. This research could help in training people to believe that they can work harder and learn more, by showing how their brain is reacting to mistakes.

The abstract.

The abstract does not report basic details of methods and results, except what is consistent with the authors’ intended message. The crucial final sentence is quote-worthy and headed for clickbait. When we look at what was done and what was found in this study, this conclusion is grossly overstated.

How well people bounce back from mistakes depends on their beliefs about learning and intelligence. For individuals with a growth mind-set, who believe intelligence develops through effort, mistakes are seen as opportunities to learn and improve. For individuals with a fixed mind-set, who believe intelligence is a stable characteristic, mistakes indicate lack of ability. We examined performance-monitoring event-related potentials (ERPs) to probe the neural mechanisms underlying these different reactions to mistakes. Findings revealed that a growth mind-set was associated with enhancement of the error positivity component (Pe), which reflects awareness of and allocation of attention to mistakes. More growth-minded individuals also showed superior accuracy after mistakes compared with individuals endorsing a more fixed mind-set. It is critical to note that Pe amplitude mediated the relationship between mind-set and posterror accuracy. These results suggest that neural mechanisms indexing on-line awareness of and attention to mistakes are intimately involved in growth-minded individuals’ ability to rebound from mistakes.

The introduction.

The introduction opens with:

Decades of research by Dweck and her colleagues indicate that academic and occupational success depend not only on cognitive ability, but also on beliefs about learning and intelligence (e.g., Dweck, 2006).

This sentence echoes the Amazon blurb for the pop psychology book that is being cited:

After decades of research, world-renowned Stanford University psychologist Carol S. Dweck, Ph.D., discovered a simple but groundbreaking idea: the power of mindset. In this brilliant book, she shows how success in school, work, sports, the arts, and almost every area of human endeavor can be dramatically influenced by how we think about our talents and abilities.

Nowhere in the introduction are there balancing references to studies investigating Carol Dweck’s theory independently, from outside her group, nor any citation of inconsistent findings. This is a selective, strongly confirmation-driven review of the relevant literature. (Contrast this view with an independent assessment from a recent comprehensive meta-analysis at the end of this post.)

The method.

Twenty-five native-English-speaking undergraduates (20 female, 5 male; mean age = 20.25 years) participated for course credit.

There is no discussion of why a sample of only 25 participants was chosen or any mention of a power analysis.

If we stick to simple bivariate correlations with the full sample of N = 25, the smallest correlations that can reach statistical significance are:

r = .40 for p < .05 (exact p = 0.0475)

r = .51 for p < .01 (exact p = 0.0092)

N = 25 does not allow reliable detection of a small-to-moderate-sized relationship where one exists.

Any significant findings will of necessity be large: r > .40 for p < .05 and r > .51 for p < .01.
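These thresholds follow directly from the t distribution for a correlation with N = 25. As a quick check, here is a minimal sketch using scipy (my own verification, not code from the paper):

```python
# Minimal sketch: smallest correlation reaching two-tailed significance with N = 25
from scipy import stats

n = 25
df = n - 2                                      # degrees of freedom for a correlation
for alpha in (0.05, 0.01):
    t_crit = stats.t.ppf(1 - alpha / 2, df)     # two-tailed critical t
    r_crit = t_crit / (t_crit**2 + df) ** 0.5   # convert critical t back to r
    print(f"alpha = {alpha}: |r| must exceed {r_crit:.2f}")
# -> roughly .40 for p < .05 and .51 for p < .01, matching the values above
```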

As has been noted elsewhere:

In systematic studies of psychological and biomedical effect sizes (e.g., Meyer et al., 2001)  one rarely encounters correlations greater than .4.

How growth mindset scores were calculated is crucially important, but the information that is presented about the measure is inadequate. There is no reference to an established scale with psychometric data and cross validation. Rather:

Following the flanker task [a noise-letter version of the Eriksen flanker task (Eriksen & Eriksen, 1974)], participants completed a TOI scale that asked respondents to rate the extent to which they agreed with four fixed-mind-set statements on a 6-point Likert-type scale (1 = strongly disagree, 6 = strongly agree). These statements (e.g., “You have a certain amount of intelligence and you really cannot do much to change it”) were drawn from previous studies measuring TOI (e.g., Hong, Chiu, Dweck, Lin, & Wan, 1999). TOI items were reverse-scored so that higher scores indicated more endorsement of a growth mind-set, and lower scores indicated more of a fixed mind-set.
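Taken at face value, the quoted description implies nothing more than a reverse-scored average of four 6-point items. A minimal sketch of that reading (the function name and example ratings are mine, not the authors'):

```python
# Minimal sketch of TOI scoring as described: four fixed-mind-set items,
# rated 1-6, reverse-scored and averaged so that higher = more growth-minded.
def toi_score(ratings):
    """ratings: agreement (1 = strongly disagree, 6 = strongly agree) with four fixed-mind-set items."""
    assert len(ratings) == 4 and all(1 <= r <= 6 for r in ratings)
    reversed_items = [7 - r for r in ratings]   # 1 <-> 6, 2 <-> 5, 3 <-> 4
    return sum(reversed_items) / len(reversed_items)

print(toi_score([6, 5, 6, 5]))   # strong agreement with fixed statements -> 1.5 (fixed end)
print(toi_score([1, 2, 1, 2]))   # strong disagreement -> 5.5 (growth end)
```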

Details in the referenced Hong et al (1999) study are difficult to follow, but the paper lays out the following requirement:

Those participants who believe that intelligence is fixed (entity theorists) should consistently endorse responses at the lower (agree) end of the scale (yielding a mean score of 3.0 or lower), whereas participants who believe that intelligence is malleable (incremental theorists) should consistently endorse responses at the upper (disagree) end of the scale (yielding a mean score of 4.0 or above).

If this distribution occurred naturally, it would be an extraordinary set of questions. In the Hong et al (1999) study, this distribution was achieved by throwing away data in the middle of the distribution that didn’t fit the investigators’ preconceived notion.

Excluding the middle third of a distribution of scores with only N = 25 compounds the errors that this practice already involves with a larger sample. With the scores reduced to N = 17, the influence of a single outlier participant would be increased, and any generalization to the larger population would be even more problematic. We cannot readily evaluate whether scores in the present sample were neatly and naturally bimodal; we are not provided the basic data, not even means and standard deviations in text or a table. However, as we will see, one graphic representation leaves some doubts.
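A small simulation (entirely made-up data, not the study's) illustrates how much leverage a single extreme participant has once the sample is whittled down to about 17 cases:

```python
# Minimal sketch: one extreme case can manufacture a sizeable correlation at N = 17
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=16)               # 16 unrelated scores on one measure
y = rng.normal(size=16)               # 16 unrelated scores on another
r_without = np.corrcoef(x, y)[0, 1]

x_out = np.append(x, 3.0)             # add one participant extreme on both measures
y_out = np.append(y, 3.0)
r_with = np.corrcoef(x_out, y_out)[0, 1]

print(f"r without outlier: {r_without:.2f}; r with one outlier: {r_with:.2f}")
# Exact values depend on the random seed, but the jump is typically substantial.
```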

Overview of data analyses.

Repeated measures analyses of variance (ANOVAs) were first conducted on behavioral and ERP measures without regard to individual differences in TOIs in order to establish baseline experimental effects. ANOVAs conducted on behavioral measures and the ERN included one 2-level factor: accuracy (error vs. correct response). The Pe [error positivity component] was analyzed using a 2 (accuracy: error vs. correct response) × 2 (time window: 150–350 ms vs. 350–550 ms) ANOVA. Subsequently, TOI scores were entered into ANOVAs as covariates to assess the main and interactive effects of mind-set on behavioral and ERP measures. When significant effects of TOI score were detected, we conducted follow-up correlational analyses to aid in the interpretation of results.

Thus, multiple post hoc analyses examined the effects of growth mindset (TOI), conditional on whether significant main or interaction effects had been obtained in other analyses, and these in turn were followed up with correlational analyses.

Highlights of the results.

Only a few of the numerous analyses produced significant results for TOI. Given the sample size and the multiple tests without correction, we probably should not attach substantive interpretations to them.
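To make the multiplicity point concrete, here is a minimal sketch of how quickly a Bonferroni-corrected threshold shrinks; the test counts are illustrative assumptions, since the paper does not tally its tests of TOI:

```python
# Minimal sketch of a Bonferroni correction across several tests of TOI
alpha = 0.05
for n_tests in (4, 6, 8):                       # assumed counts of TOI analyses
    print(f"{n_tests} tests -> per-test threshold {alpha / n_tests:.4f}")
# A result reported only as p < .05 cannot be shown to clear any of these thresholds;
# one reported as p < .01 clears them only if its exact p value is small enough.
```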

Behavioral data.

Overall accuracy was not correlated with TOI (r = .06, p > .79).

[Speed on error vs. correct trials] When TOI was entered into the ANOVA as a covariate, there were no significant effects (Fs < 1.78, ps > .19, ηp²s < .08) [where ‘ps’ and ‘no significant effects’ refer to either main or interaction effects].

[Posterror adjustments] When TOI was entered into the ANOVA as a covariate, there were no significant effects (Fs < 1.15, ps > .29, ηp²s < .05).

When entered into the ANOVA as a covariate, however, TOI scores interacted with postresponse accuracy, F(1, 23) = 5.22, p < .05, ηp² = .19. Correlational analysis showed that as TOI scores increased, indicating a growth mind-set, so did accuracy on trials immediately following errors relative to accuracy on trials immediately following correct responses (i.e., posterror accuracy – postcorrect-response accuracy; r = .43, p < .05).

ERPs (event-related potentials).

As expected, the ANOVA confirmed greater ERP negativity on error trials (M = –3.43 μV, SD = 4.76 μV) relative to correct trials (M = –0.23 μV, SD = 4.20 μV), F(1, 24) = 24.05, p < .001, ηp² = .50, in the 0- to 100-ms postresponse time window. This result is consistent with the presence of an ERN. There were no significant effects involving TOI (Fs < 1.24, ps > .27, ηp²s < .06).

When entered as a covariate, TOI showed a significant interaction with accuracy, F(1, 23) = 8.64, p < .01, ηp² = .27. Correlational analysis demonstrated that as TOI scores increased so did positivity on error trials relative to correct trials averaged across both time windows (i.e., error activity – correct-response activity; r = .52, p < .01).

Mediation analysis.

As Figure 2 illustrates, controlling for Pe amplitude significantly attenuated the relationship between TOI scores and posterror accuracy. The 95% confidence intervals derived from the bootstrapping test did not include zero (.01–.04), and thus indicated significant mediation.

So, the a priori condition for testing for significant mediation was met because a statistical test barely excluded zero (.01–.04), with no correction for the many tests of TOI in the study. But what are we doing exploring mediation with N = 25?
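For readers unfamiliar with the procedure, here is a minimal sketch of the kind of percentile-bootstrap test of an indirect effect being reported, run on simulated data rather than the study's data (variable names and effect sizes are illustrative):

```python
# Minimal sketch of a percentile-bootstrap test of an indirect (mediation) effect
import numpy as np

rng = np.random.default_rng(0)
n = 25
toi = rng.normal(size=n)                        # predictor: mind-set score
pe = 0.5 * toi + rng.normal(size=n)             # mediator: Pe amplitude
posterror_acc = 0.5 * pe + rng.normal(size=n)   # outcome: posterror accuracy

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                  # a path: slope of mediator on predictor
    design = np.column_stack([x, m, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # b path: mediator slope, controlling for predictor
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                 # resample participants with replacement
    boot.append(indirect_effect(toi[idx], pe[idx], posterror_acc[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{ci_low:.2f}, {ci_high:.2f}]")
# Mediation is declared 'significant' only if this interval excludes zero;
# with N = 25 such intervals are typically wide and unstable across resamples.
```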

Distribution of TOI [growth mindset] scores.

Let’s look at the distribution of TOI scores, which is visible as the x-axis of Figure 1.

[Figure: graph of TOI scores with an outlier]

Any dichotomization of these continuous scores would be arbitrary. Close scores clustered on different sides of the median would be considered different, while diverging scores on the same side of the median would be treated as the same. Any association between TOI and ERPs (event-related potentials) could be due to one or a few participants’ interindividual differences in brains, or to intraindividual variability of ERPs over occasions. These are not the kind of data from which generalizable estimates of effects can be obtained.
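A toy example (made-up scores on the 1–6 scale, not the study's data) shows how arbitrary such a split is:

```python
# Minimal sketch: a median split treats near-identical scores as different groups
import numpy as np

scores = np.array([2.0, 3.4, 3.5, 3.6, 3.6, 3.7, 4.9, 5.8])   # hypothetical TOI scores
median = np.median(scores)                                      # 3.6
labels = np.where(scores > median, "growth", "fixed")
for s, g in zip(scores, labels):
    print(f"score {s}: classified as {g}")
# 3.6 and 3.7 land in different groups, while 3.7 and 5.8 are treated as the same.
```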

The depiction of brains with fixed versus growth mind sets.

The one picture of brains in the main body of this article supposedly contrasts fixed versus growth mindsets. The differences appear dramatic, in sharply contrasting colors. But in the article itself, no such dichotomization is discussed. Nor should it be. Furthermore, the simulated image is based on isolating one of the few significant effects of TOI. Readers are cautioned that the picture is “for illustrative purposes only.”

[Image: the article’s “fixed vs. growth mind-set” brain graphic]

The discussion.

Similar to the introduction, there is a selective citation of the literature with a strong confirmation bias. There is no reference to weak or null findings, or to any controversy concerning growth mindset that might have accumulated over a decade of research. There is no acknowledgment of the folly of making substantive interpretations of significant findings from such a small, underpowered study. Results of the mediation analysis are confidently presented, with no indication of doubt about whether they should even have been conducted, or any acknowledgment that, even under the best of circumstances, such mediational analyses remain correlational and provide only weak evidence of causal mechanisms. Event-related potentials are proposed as biomarkers and as surrogate outcomes in implementations of growth mindset interventions. A lot of misunderstanding and neurononsense are crammed into a few sentences. There is no mention of any limitations of the study.

The APS Observer press release revisited.

Why was this article recognized with a special press release by the APS? The press release is tied much more to the authors’ claims about their study than to their actual methods and results. The press release provides an opportunity to publicize the study with further exaggeration of what it accomplished.

This is an unfortunate message to authors about what they need to do to be promoted by APS. Your intended message can override your actual results if you strategically emphasize the message and downplay any discrepancy with the results. Don’t mention any limitations of your study.

The TED talks.

A number of TED and TED-related talks incorporate a discussion of the study, with its picture of fixed versus growth mindset brains. There is remarkable overlap among these talks. I have chosen TEDxNorrkoping, The power of believing that you can improve, because it had a handy transcript available.

[Screenshot: the same brain image as used in the TED talk]

On the left, you see the fixed-mindset students. There’s hardly any activity. They run from the error. They don’t engage with it. But on the right, you have the students with the growth mindset, the idea that abilities can be developed. They engage deeply. Their brain is on fire with yet. They engage deeply. They process the error. They learn from it and they correct it.

“On fire”? The presenter exploits the arbitrary red color chosen for the “for illustrative purposes only” picture.

The brain graphic is reduced to a cartoon in a comic-book-level account of action heroes engaging their errors deeply, learning from them, and correcting their next response while ordinary mortals run away like cowards.

The presenter soon introduces another cartoon for her comic book depiction of the effects of growth mindset on the brain. But first, here is an overview of how this talk fits the predictable structure of a TED talk.

The TED talk begins with a personal testimony concerning “a critical event early in my career, a real turning point.” It is recognizable to TED talk devotees as an epiphany (an “epiphimony,” if you like) through which the speaker shares a personal journey of insight and realisation, its triumphs and tribulations. In telling the story, the presenter introduces an epic struggle between the children of darkness (the “now” of a fixed mindset) and the children of light (the “yet” or “not yet” of a growth mindset).

There is much more of a sense of a televangelist than of an academic presenting an accurate summary of her research to a lay audience. Sure, the live audience and the millions of viewers of this and related talks were not seeking a colloquium or even a Cafe Scientifique. The audience came to be entertained with a good story. But how much license can be taken with the background science? After all, the information being discussed is relevant to viewers’ personal decisions as parents, and to the choices they make as citizens and communities about how to improve academic performance. The issue becomes more serious when the presenter gets to claims of dramatic transformations of impoverished students in economically deprived school settings.

The presenter cites one of her studies for an account of what students “gripped with the tyranny of now” did in difficult learning experiences:

So what do they do next? I’ll tell you what they do next. In one study, they told us they would probably cheat the next time instead of studying more if they failed a test. In another study, after a failure, they looked for someone who did worse than they did so they could feel really good about themselves.

We are encouraged to think ‘Students with a fixed mind set cheat instead of studying more. How horrible!’ But I looked up the study:

Blackwell LS, Trzesniewski KH, Dweck CS. Implicit Theories of Intelligence Predict Achievement Across an Adolescent Transition: A Longitudinal Study and an Intervention. Child Development. 2007 Jan 1;78(1):246-63.

I searched for “cheat” and found one mention:

Students rated how likely they would be to engage in positive, effort-based strategies (e.g., ‘‘I would work harder in this class from now on’’ ‘‘I would spend more time studying for tests’’) or negative, effort-avoidant strategies (e.g., ‘‘I would try not to take this subject ever again’’ ‘‘I would spend less time on this subject from now on’’ ‘‘I would try to cheat on the next test’’). Positive and negative items were combined to form a mean Positive Strategies score.

All subsequent reporting of results was in terms of this composite Positive Strategies score, so I was unable to evaluate how commonly “I would try to cheat…” was endorsed.

Three minutes into the talk, the speaker introduces an element of moral panic about a threat to Western civilization as we know it:

How are we raising our children? Are we raising them for now instead of yet? Are we raising kids who are obsessed with getting As? Are we raising kids who don’t know how to dream big dreams? Their biggest goal is getting the next A, or the next test score? And are they carrying this need for constant validation with them into their future lives? Maybe, because employers are coming to me and saying, “We have already raised a generation of young workers who can’t get through the day without an award.”

Less than a minute later, the presenter gets ready to roll out her solution.

So what can we do? How can we build that bridge to yet?

Praising performance in terms of fixed characteristics like IQ or ability is ridiculed. However, great promises are made for praising process, regardless of outcome.

Here are some things we can do. First of all, we can praise wisely, not praising intelligence or talent. That has failed. Don’t do that anymore. But praising the process that kids engage in, their effort, their strategies, their focus, their perseverance, their improvement. This process praise creates kids who are hardy and resilient.

“Yet” or “not yet” becomes a magical incantation. The presenter builds on her comic-book science of the effects of growth mindset by introducing a cartoon of a synapse (mislabeled as a neuron), linked to her own research only by some wild speculation.

Just the words “yet” or “not yet,” we’re finding, give kids greater confidence, give them a path into the future that creates greater persistence. And we can actually change students’ mindsets. In one study, we taught them that every time they push out of their comfort zone to learn something new and difficult, the neurons in their brain can form new, stronger connections, and over time, they can get smarter.

I found no relevant measurements of brain activity in Dweck’s studies, but let’s not ruin a good story.

Look what happened: In this study, students who were not taught this growth mindset continued to show declining grades over this difficult school transition, but those who were taught this lesson showed a sharp rebound in their grades. We have shown this now, this kind of improvement, with thousands and thousands of kids, especially struggling students.

Up until now, we have had disappointingly hyped and inaccurate accounts of how to foster academic achievement. But this soon turns into a cruel hoax when claims are made about improving the performance of underprivileged children in under-resourced settings.

So let’s talk about equality. In our country, there are groups of students who chronically underperform, for example, children in inner cities, or children on Native American reservations. And they’ve done so poorly for so long that many people think it’s inevitable. But when educators create growth mindset classrooms steeped in yet, equality happens. And here are just a few examples. In one year, a kindergarten class in Harlem, New York scored in the 95th percentile on the national achievement test. Many of those kids could not hold a pencil when they arrived at school. In one year, fourth-grade students in the South Bronx, way behind, became the number one fourth-grade class in the state of New York on the state math test. In a year, to a year and a half, Native American students in a school on a reservation went from the bottom of their district to the top, and that district included affluent sections of Seattle. So the Native kids outdid the Microsoft kids.

This happened because the meaning of effort and difficulty were transformed. Before, effort and difficulty made them feel dumb, made them feel like giving up, but now, effort and difficulty, that’s when their neurons are making new connections, stronger connections. That’s when they’re getting smarter.

“So the Native kids outdid the Microsoft kids.” There is some kind of poetic license being taken here in describing the results of an intervention. The message is that subjective mindset can trump entrenched structural inequalities and accumulated deficits in skills and knowledge, as well as limits on ability. All school staff and parents need to do is wave the magic wand and recite the incantation “not yet.” How reassuring to those in politics who control resources and don’t want to adequately fund these school settings: they just need to exhort anyone who wants to improve outcomes to recite the magic words.

And what do we say when we don’t witness dramatic improvements? Who is to blame when such failures need to be explained? The cruel irony is that school boards will blame principals, who will blame teachers, and parents will blame schools and their children. All will be held to unrealistic expectations.

But it gets worse. The presenter ends with a call to action arguing that not buying into her program would violate the human rights of vulnerable children.

Let’s not waste any more lives, because once we know that abilities are capable of such growth, it becomes a basic human right for children, all children, to live in places that create that growth, to live in places filled with “yet”.

Paradox: Do poor kids with a growth mindset suffer negative consequences?

Maybe so, suggests some recent research concerning the longer-term outcomes of disadvantaged African American children.

A newly published study in the peer-reviewed journal Child Development …finds traditionally marginalized youth who grew up believing in the American ideal that hard work and perseverance naturally lead to success show a decline in self-esteem and an increase in risky behaviors during their middle-school years. The research is considered the first evidence linking preteens’ emotional and behavioral outcomes to their belief in meritocracy, the widely held assertion that individual merit is always rewarded.

“If you’re in an advantaged position in society, believing the system is fair and that everyone could just get ahead if they just tried hard enough doesn’t create any conflict for you … [you] can feel good about how [you] made it,” said Erin Godfrey, the study’s lead author and an assistant professor of applied psychology at New York University’s Steinhardt School. But for those marginalized by the system—economically, racially, and ethnically—believing the system is fair puts them in conflict with themselves and can have negative consequences.

We know surprisingly little about the adverse events associated with growth mindset interventions or their negative unintended consequences for children and school systems. Cost-benefit analyses of mindset interventions should compare them with academic interventions known to be effective when delivered with equivalent resources, not with no treatment at all.

Overall associations of growth mind set with academic achievement are weak and interventions are not effective.

Sisk VF, Burgoyne AP, Sun J, Butler JL, Macnamara BN. To What Extent and Under Which Circumstances Are Growth Mind-Sets Important to Academic Achievement? Two Meta-Analyses. Psychological Science. 2018 Mar 1:0956797617739704.

This newly published article in Psychological Science starts by noting the influence of growth mindset:

These ideas have led to the establishment of nonprofit organizations (e.g., Project for Education Research that Scales [PERTS]), for-profit entities (e.g., Mindset Works, Inc.), schools purchasing mind-set intervention programs (e.g., Brainology), and millions of dollars in funding to individual researchers, nonprofit organizations, and for-profit companies (e.g., Bill and Melinda Gates Foundation,1 Department of Education,2 Institute of Educational Sciences3).

In our first meta-analysis (k = 273, N = 365,915), we examined the strength of the relationship between mind-set and academic achievement and potential moderating factors. In our second meta-analysis (k = 43, N = 57,155), we examined the effectiveness of mind-set interventions on academic achievement and potential moderating factors. Overall effects were weak for both meta-analyses.

The first meta-analysis integrated 273 effect sizes. The overall effect was very weak by conventional standards, and hardly consistent with the TED talks.

The meta-analytic average correlation (i.e., the average of various population effects) between growth mind-set and academic achievement is r̄ = .10, 95% confidence interval (CI) = [.08, .13], p < .001.
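To put that figure in context (my arithmetic, not the meta-analysts'), a correlation of .10 corresponds to roughly 1% of shared variance:

```python
# Shared variance implied by the meta-analytic correlation
r = 0.10
print(f"r = {r} -> r^2 = {r**2:.2f}, i.e. about {r**2:.0%} of variance in achievement")
```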

The second meta-analysis, of growth mindset interventions, integrated 43 effect sizes; 37 of the 43 (86%) were not significantly different from zero.

The authors conclude:

Some researchers have claimed that mind-set interventions can “lead to large gains in student achievement” and have “striking effects on educational achievement” (Yeager & Walton, 2011, pp. 267 and 268, respectively). Overall, our results do not support these claims. Mind-set interventions on academic achievement were nonsignificant for adolescents, typical students, and students facing situational challenges (transitioning to a new school, experiencing stereotype threat). However, our results support claims that academically high-risk students and economically disadvantaged students may benefit from growth-mind-set interventions (see Paunesku et al., 2015; Raizada & Kishiyama, 2010), although these results should be interpreted with caution because (a) few effect sizes contributed to these results, (b) high-risk students did not differ significantly from non-high-risk students, and (c) relatively small sample sizes contributed to the low-SES group.

Part of the reshaping effort has been to make funding mind-set research a “national education priority” (Rattan et al., 2015, p. 723) because mind-sets have “profound effects” on school achievement (Dweck, 2008, para. 2). Our meta-analyses do not support this claim.

And

From a practical perspective, resources might be better allocated elsewhere than mind-set interventions. Across a range of treatment types, Hattie, Biggs, and Purdie (1996) [https://www.teachertoolkit.co.uk/wp-content/uploads/2014/04/effect-of-learning-skills.pdf ] found that the meta-analytic average effect size for a typical educational intervention on academic performance is 0.57. All meta-analytic effects of mind-set interventions on academic performance were < 0.35, and most were null. The evidence suggests that the “mindset revolution” might not be the best avenue to reshape our education system.

The presenter’s speaker fees.

Presenters of TED talks are not paid, but a successful talk can lead to lucrative speaking engagements. It is informative to Google the speaking fees of the presenters of highly accessed TED talks. In the case of Carol Dweck, I found the booking agency All American Speakers.

[Screenshot: Carol Dweck’s speaker fee range listed by All American Speakers]

Mindsetonline provides products for sale as well as success stories about people and organizations adopting a growth mindset.

[Screenshots from the site: “buy the book,” “buy the software,” and “business and leadership” success stories]

There is even a 4-item measure of mindset you can complete online. Each of the items is some paraphrase of ‘you can’t change your intelligence very much,’ either stated straightforwardly or reversed (‘you can’).

Consumers beware! TED talks are not a reliable dissemination of best evidence.

TED talks are to best evidence as historical fiction is to history.

Even TED talks by eminent psychologists are often little more than infomercials for self-help products, lucrative speaking engagements, and workshops.

Academics are under increasing pressure to demonstrate that the impact of their work extends beyond citations of publications in prestigious journals. Social impact is increasingly used to balance journal impact factors.

It is also being recognized that outreach requires equipping lay audiences to grasp what are initially difficult or confusing concepts.

But pictures of colored brains can be used to dumb down consumers and to disarm their intuitive skepticism about behavioral science working magic and miracles. Even PhD psychologists are inclined to be overly impressed when references to neuroscience and pictures of colored brains are introduced into a discussion. The vulnerability of lay audiences to neurononsense or neurobollocks is even greater.

False and exaggerated claims about academic interventions harm school systems, teachers, and, ultimately, students. In communicating to lay audiences, psychologists need to be sensitive to the possible misunderstandings they are reinforcing. They have an ethical responsibility to do their best to strengthen the critical thinking skills of their audiences, not damage them.

TED talks and declarations of potential conflicts of interest.

Personally, I have found that calling out the pseudoscience behind claims for unproven medicine like acupuncture or homeopathy does not produce much blowback, except from proponents of these treatments. Similarly, campaigning for better disclosure of potential conflicts of interest does not meet much resistance when the focus is on pharmaceutical companies.

However, it’s a whole different matter to call out the pseudoscience behind self-help and the exaggerated, outright false claims about behavioral science being able to work miracles and magic. There seems to be a double standard in psychology by which it is inappropriate to exaggerate the strength of findings when communicating with other professionals, but perfectly okay when communicating with lay audiences.

We need to think about TED talks more like we think about talks by opinion leaders with ties to the pharmaceutical industry. Presenters should start with a standard slide disclosing financial interests that may influence the opinions offered about specific products mentioned in the talk. Given the pressure to get findings that will fit into the next TED talk, presenters should routinely disclose in their peer-reviewed articles that they give TED talks or have a booking agent.