Piloting turbocharged fast AI policy experiments in education

Government plans for AI in education are locking schools into a trajectory of inevitable adoption of AI to address sectoral problems. Photo by Joshua Hoehne on Unsplash

The UK Labour government has announced plans to “turbocharge” the deployment of artificial intelligence in public services. Part of the AI Opportunities Action Plan drawn up by investor Matt Clifford, and agreed in full by the government, this “unleashing” of AI will impact on education specifically, it is claimed, by supporting teachers to plan lessons and mark students’ work. But we will not need to wait months or years to see what this means, as the Department for Education, supported by the Department for Science, Innovation and Technology, is already funding and piloting prototypes and promoting live experiments with AI in schools.

In the political discourse surrounding the plan, AI is already being “turbocharged” in education. The DfE announced a £4 million fund last summer to support the development of an “educational content store” of official educational materials – curriculum guidance, standardized lesson plans, and exemplars of student assignments – to enable edtech companies to train or finetune AI models for automated planning and marking.

Most of the fund – £3m – has been awarded to the company Faculty AI. A much-favoured government contractor on AI since its involvement in the Vote Leave campaign back in 2016, Faculty had already been tasked by the DfE with helping to run teacher hackathons on AI, and had also completed a proof-of-concept study using large language models to automatically mark primary school literacy tests.

The study’s findings were intended “for reference by the EdTech sector.” It also specified the need for the content store of training data for which Faculty has now been awarded the £3m contract. The remaining £1m has been allocated to 16 companies to build working prototypes based on the store.

The new AI Plan reaffirms the government’s commitment to automated lesson planning and marking to alleviate teachers’ workload. It is, in effect, constructing a lock-in mechanism whereby schools are given no option other than to embrace AI, adopt it into their practices, and integrate it into existing systems of pedagogy and administration, or risk being left behind.

The effects here are not just on schools: they signify a shift in policy practice to rapidly piloting and testing technologies in public sector settings.

Turbocharged techno-solutionism

Together, these current efforts are prototyping the future of AI-enabled automation of key teaching tasks. They are first steps towards the far grander vision of AI being integrated into the schooling system produced by the Tony Blair Institute, the political think tank of former prime minister Tony Blair that is backed by funding from Oracle’s Larry Ellison and is reported to have significant influence on the Labour government. The TBI’s manifesto for “governing in the age of AI” was co-produced with (again) Faculty AI.

Previously I’ve suggested that such efforts constitute a kind of technological solutionism, whereby “decisionmakers in positions of political authority may reach for technological solutions out of expedience — and the desire to be seen to be doing something — rather than addressing the causes of the problem they are trying to solve.”

The result of such solutionism may be “automated austerity schooling,” with the existing austerity conditions of schools – exemplified by persistent teacher shortages, retention problems, under-recruitment, and classroom overcrowding – left unresolved while funds flow to building AI solutions. Such solutions are said to address issues like teacher workload by promising to save teachers hours each week. As the AI Plan indicates, this approach to AI solutions for schools is now being turbocharged.

The current desire to turbocharge AI in schools needs to be properly understood as a particular policy approach. Regardless of whether AI is viewed as highly promising for education, or as a public problem, there can be no doubt now that it is a major education policy preoccupation and therefore needs to be examined through the lens of critical policy analysis.

Critical policy studies of AI in education

Contemporary critical education policy analyses often focus in particular on the contexts and conditions of policy development and diffusion, and on the ways that varied actors, materials, technologies and discourses have to be assembled together and sustained in order to make any policy actionable and sustainable. One enduring concept in such research is “fast policy.”

Fast policy, according to political geographers Jamie Peck and Nik Theodore, is how much of 21st century social and public policy is done. Policies, they argue, are made and diffused at speed, supported by sprawling webs of actors that include consultants, think tanks, entrepreneurial “gurus,” and other thought leaders and thinkers-and-doers with “ideas that work.” These figures have the political connections, industry contacts, multisectoral knowledge and social capital to shape policy priorities, diffuse discourses, and get things done. Peck and Theodore describe fast policy as a form of “experimental statecraft.”

In education, we can see recent developments like the rapid diffusion of policy ideas about “learning to code” into official school curricula as the result of fast policy processes and practices. It’s accelerated, experimental statecraft in the sense of mobilizing ideas about technologies as a route to modernizing education systems and upskilling students for assumed digital futures.

From a fast policy perspective, we can view efforts to test and prototype AI in the English schooling sector as the result of such socially connected networks of ideational visionaries. They are building prototypes, undertaking technical pilots, constructing visionary reports, and circulating discourses to naturalize AI as a taken-for-granted aspect of the future of teaching and learning.

This is now fast policy being turbocharged in order to rush AI into schools. It involves tightly interconnected networks such as the TBI, Faculty AI, DSIT, and DfE, along with other organizations including AI for Schools and multi-academy trusts involved in the hackathons and pilots, as well as the edtech companies now enrolled into this program of live experimentation.

The turbocharged approach to fast AI policy in education can also be seen as an example of what sociologists Marion Fourcade and Jeff Gordon have termed “digital statecraft,” and a process of public authorities “cyberdelegating” their social responsibilities to private technology firms. In what they describe as a “dataist state,”

when the state defines itself as a statistical authority, an open data portal, or a provider of digital services, it opens itself up to claims by private parties eager to piggy-back on its data-minting functions and to challenge its supremacy on the grounds that they, the private technologists, are better equipped technically, more trustworthy, or both. … We suggest that the private appropriation of public data, the downgrading of the state as the legitimate producer of informational truth, and the takeover of traditional state functions by a small corporate elite may all go hand in hand.

Turbocharged AI policy involving the cyberdelegation of authority to mobile networks of private technology expertise is a concrete instance of dataist, digital statecraft. It means the work of the state not only being enacted by new fast policy networks, but also being outsourced to private automated technologies to accomplish public service tasks.

Indeed, if the current DfE work with AI for schools is in fact influenced by the TBI, then it is significant that the TBI has explicitly positioned AI as a modernizing and transformative technology for the future of the British state at large. Current experiments in education may be understood as indicators of what is to come for state-run services across the public sector.

This reflects how UK politicians have begun to treat AI companies themselves. As critical computing scholar Dan McQuillan has put it, “the secretary of state for science, innovation and technology, has repeatedly stated that the UK should deal with Big Tech via “statecraft”; in other words, rather than treating AI companies like any other business that needs taxing and regulating, the government should treat the relationships as a matter of diplomatic liaison, as if these entities were on a par with the UK state.”

“Blitzscaling” AI in education

The kind of fast, cyberdelegated AI policy being developed in education is not just concerned with the production of policy texts and discourses. In line with the AI Plan’s emphasis on fast-paced piloting and scaling of AI in public services, it exemplifies a form of live experimentation, prototyping and beta-testing of new tools within the schools system itself.

Science and technology studies scholar Stephen Hilgartner has written recently that the release of large language models constitutes a “global real-world experiment” which “casts societies throughout the world as test beds for LLM technology.”

The new government’s plans around generative AI likewise cast the schooling sector as a test bed for large-scale experimentation, piloting and testing of new prototypes and products, helping support what Hilgartner calls “an experiment in ‘blitzscaling’” this technology into everyday practices.

The AI Plan’s emphasis on rapid piloting and scaling is extremely industry-friendly. It’s hard not to imagine technology companies, edtech firms and their investors being very excited about the market prospects of schools becoming test beds for their AI innovations, ripe for their blitzscaling aspirations.

The current political discourse of “unleashing” and “turbocharging” AI in public services such as education, then, resembles the blitzscaling strategies of the technology industry to rapidly roll out new technologies and treat users as live testing subjects. In other words, schools may become networks of AI testing labs, where the technology being live-tested is intended to actively intervene in professional processes and pedagogic practices like lesson planning and assessment.

Critics of both the government’s AI plan and the TBI vision underpinning it argue that public sector technology projects cannot and should not be rushed. For example, the director of the British Academy, Hetan Shah, has argued that “Tony Blair is wrong”:

Public services are complex systems, and rushing to bolt on unproved technology is unlikely to work. The UK does not have a good track record here and the danger is we will see the same kinds of IT transformation project failures that have been commonplace over the years in the public sector. In any nascent technology, and especially one as expensive as AI, the government will need to ask for much higher-quality evidence of costs and benefits. There is a lot of snake oil for sale.

The risks of rushing out AI snake oil into schools are very real. Yet in the English schools sector there is now a very powerful network of fast policy actors seeking cyberdelegated authority to turbocharge technology testing of AI solutions. They are already prototyping tools and publishing use cases, specifying the benefits of AI for teachers, and awarding funds to the edtech industry to build and test new products.

Whether you see AI as potentially beneficial for schools or not, it’s clear that AI in education is now a significant policy preoccupation – a preoccupation that largely prioritizes rapid innovation rather than foregrounding critical issues with the technology and its social, pedagogic and epistemic implications. It is locking in the English schooling sector to a trajectory of seemingly inevitable AI adoption and integration.

But viewing it as a policy process shows precisely that AI in schools is not inevitable, but a political choice that is now being supported and driven by highly influential fast policy networks. In fact, what we are observing with AI in English schools is not only pilot projects but a piloting of turbocharged fast education policy processes that may prove hard to slow down.
