The writer is a barrister and the author of ‘The Digital Republic: On Freedom and Democracy in the 21st Century’
The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.
In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week signed an open letter calling for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”.
On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”.
The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement that is rising to the top of the agenda.
But despite this divergence, there are four ways of thinking about AI that ought to be acceptable to both sides.
First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions general enough that they will not immediately be superseded by the next big thing. Look at the signal, not the noise.
Second, we need to be clear-eyed in the way we analyse new advances. This means avoiding the trap of simply asking whether new AI systems are like us, and then writing them off if they are not.
The truth is that machine learning systems are nothing like us in the way they are engineered, but they are no less significant for it. To take just one example: the fact that non-human AI systems, perhaps with faces and voices, will soon be able to participate in political debate in a sophisticated way is likely to be more important for the future of democracy than the fact that they do not “think” like humans. Indeed, asking whether a machine learning system can “think” like a human is often as useful as asking whether a car can gallop as fast as a horse.
Third, we should all by now recognise that the challenges thrown up by AI are political in nature. Systems that participate in, or moderate, the free-speech ecosystem will inevitably have some impact on the nature of our democracy. Algorithms that determine access to housing, credit, insurance or jobs will have real implications for social justice. And the rules that are coded into ubiquitous technologies will enlarge or diminish our liberty. Democracy, justice, liberty: when we talk about new technologies, we are often talking about politics, whether we realise it or not. The digital is political.
Finally, talk of regulation should be realistic. There was something naive about the implication in the open letter that the problems of AI governance might be substantially resolved during a six-month moratorium. The UK government probably won’t have reported its consultation results within six months, still less enacted meaningful legislation. At the same time, if we wait for the US, China, and the EU to agree rules for the governance of AI, we are going to be waiting forever. The problem of regulating AI is a generational challenge. The solutions will not be summoned forth in a short burst of legislative energy, like a coder pulling an all-nighter.
We will need a new approach: new laws, new public bodies and institutions, new rights and duties, new codes of conduct for the tech industry. Twentieth-century politics was defined by a debate about how much human activity should be determined by the state, and what should be left to market forces and civil society. In this century, a new question is coming to the fore: to what extent should our lives be directed by powerful digital systems, and on what terms? It will take time to find out whether our public discourse can rise to the level of sophistication needed.