What *is* metascience? Issues, inclusion and future public value

This week, the biennial Metascience Conference came to London’s “Knowledge Quarter” with around 800 participants from a wide range of roles, including researchers, librarians, research funders, publishers, media and corporations. For the uninitiated, metascience is a multidisciplinary research space concerned with “the analysis of research systems, cultures and decision-making”. This broad remit attracted a range of academic disciplines to the conference, including some policy-engaged scholars from science and technology studies (STS). That was notable given a recurring critique from some in STS and other fields that metascience “exclusively exists to depoliticise analysis of science outside the confines of the critical disciplines”. Touching on this, I was fortunate to chair an excellent Day 2 panel on whether metascience is reinventing the wheel, but it turned out that an even more meaningful intervention arrived later that afternoon.

AI’s politics comes to the surface

Day 2’s final panel, “AI in Science: Accelerating Discovery”, centred on a keynote talk by Anna Koivuniemi of Google DeepMind which showcased the company’s AlphaFold protein structure database. Following some more technical talks on the role of AI in understanding scientific structures, Sabina Leonelli provided a critical edge to proceedings, highlighting some of the pitfalls of AI, including the disjunct between what is good for AI and what is good for science, convenience AI, and the persistent undervaluing of the human labour needed to validate and calibrate models.

This critical edge stepped up a gear with the second question from the floor in the Q&A. The contributor first acknowledged the undoubted breakthrough of AlphaFold, before then drawing attention to the data labellers whose labour is essential to the AI models of Google, as earlier highlighted by Leonelli. The questioner then highlighted the environmental costs of AI models, particularly the water needed to cool data centres in the world’s driest areas, before finally landing on perhaps the most fundamental question of all (which I paraphrase):

“Google is here before us but is not accountable to us or to wider democratic publics, only to its shareholders who seek to maximise profit. So the question is ‘what’s in it for you Anna, and what’s in it for Google?’”

From memory, the question lasted between 30 and 60 seconds and was punctuated by applause from the audience, the first time I had heard a question greeted this way at the conference. There was some emotion in the delivery, as one might expect given what’s at stake, but it was well within the bounds of robust academic questioning.

This was a timely intervention, not least given the preceding panel in the Logan Hall had been a pretty uncritical affair about AI in science where virtually no time was afforded to audience Q&A. I looked forward to seeing how Koivuniemi dealt with the line of questioning, if only to see how a professional with over a decade of experience at McKinsey prior to Google would handle issues of a political and ethical nature.

Sparing Google’s blushes?

But no! Our genial chair, Geraint Rees, instead interjected to claim that the question was phrased as a comment and so was not appropriate to be responded to at this panel. To their credit, the questioner remained in front of the mic and said they awaited a response from the panel. It was then that Rees unexpectedly got to the heart of the controversy over what metascience includes and excludes:

“A Metascience conference is not the right place to discuss this.” 

Welcome to the Knowledge Quarter!

This was a remarkable turn of events, for three reasons. First, the character of the conference up to that point had been admirably diverse, and had in fact included and confronted at least some of the political aspects of metascience which critics claimed were excluded from this space. Examples included Andy Stirling on the social purposes of science, the influence of defence spending and inconvenient knowledge in science policy, Cassidy Sugimoto’s analysis of the Trump administration’s defunding of science (an issue only skirted around on Day 1), and a fascinating-looking panel on feminist metascience (which I regrettably missed). Second, the environmental impacts of AI had been raised more than once elsewhere in the conference, and the issue of data labellers earlier in the same panel, so the topic was hardly verboten, and really should be publicly ‘explainable’ by any Google manager. Finally, and worst of all, the sight of a high-ranking university representative stepping in to save a big tech executive from answering a difficult question was deeply embarrassing (or at least should have been) for all concerned.

By this point, a reasonable number of people in the audience were loudly booing or chuntering, while the questioner remained steadfast behind the mic. Faced with an increasingly unruly audience, Rees found a way out by throwing the question to Leonelli (break glass for emergency philosopher!), who argued that the reason AlphaFold is so famous is that it is in fact a rare example of an AI-in-science success, albeit one dependent on decades of open research conducted in the area prior to Google entering the scene.

Can metascience maintain its critical diversity?

Does this episode confirm the critique that metascience depoliticises questions of science and technology? In one sense, the answer is ‘yes’, at least insofar as some people in this space would like to see it become, in the words of Bart Penders, an “operations research branch…stripped of its academic qualities”, a quote that presaged Rees’s intervention.

Yet the fact remains that this highly instructive, if unsatisfying, encounter would not have taken place at a regular academic conference; I have certainly never seen the big tech establishment taken to task in person at an STS conference before. The question’s focus on the extractive foundations of AI and the unaccountability of big tech was met with a response that crystallised both the politics of free speech and the role of positionality and power in research. This brief moment surfaced key issues in the contemporary politics of science and technology which are all too rarely raised in this company.

Where does this leave the future definition and direction of metascience? A new report presents metascience as a ‘discourse coalition’ rather than an emerging field. The conference’s unusual mix of participants certainly generated interesting and unusual conversations in the tricky space where STS is engaged with policy questions. However, it’s worth reflecting that throughout the first two days many plenary speakers raised the question of whether metascience could solve perceived problems around trust in science. Anyone concerned with that issue should consider how the concentration of technological and scientific power in companies such as Google contributes to these concerns, and how suppression of debate will impact on science’s ability to be trusted.

Overall, I am optimistic. This week’s conference provided plenty of evidence that metascience can meaningfully contribute to these very urgent debates, drawing on a broad range of voices. Yet we also saw a glimpse of an alternative future, where metascience is forcibly detached from these debates. This latter approach would be both a moral failing and a death knell for metascience’s broader public value. The trick is going to be maintaining the productive tension between a diverse range of participants and an agenda which includes the most important issues of the day, as the politics of science and technology becomes ever more public.