AI watch: UK electoral warning and OpenAI’s move into London | Artificial intelligence (AI)

Artificial intelligence is either going to save humanity or finish it off, depending on who you speak to. Either way, every week there are new developments and breakthroughs. Here are just some of the AI stories that have emerged in recent days:

The US company behind the ChatGPT chatbot, OpenAI, has announced that its first international office will be in London. The move is a boost for the UK prime minister, Rishi Sunak, who has described the AI race as one of the “greatest opportunities” for the country’s tech industry. OpenAI said it chose the UK capital because of its “rich culture and exceptional talent pool”. This month Palantir, a $30bn US firm specialising in software programs that process huge amounts of data (customers range from the NHS to the US army), picked London as its European base for AI research and development.

Recent breakthroughs in AI have raised questions about the impact on jobs, given ChatGPT’s ability to mass-produce plausible text and usable computer code. A report last week estimated that 2.5% of all tasks within the UK economy would be affected by generative AI, although that proportion soars for creative professionals, with 43% of tasks performed by authors, writers and translators susceptible to automation. Computer programmers, software developers, public relations professionals and IT support technicians were also high on the list, according to the report by the accounting group KPMG.

Retail, hospitality, construction and manufacturing are among the sectors expected to experience “almost no impact”. Overall, generative AI should add 1.2% to the level of UK economic activity by improving productivity – the ability to produce more economic output with less work – which should, in theory, produce higher wages, although people currently employed as authors, writers and translators may find that a head-scratcher.

The Internet Watch Foundation, a UK-based online safety watchdog, said it was beginning to see AI-generated images of child sexual abuse being shared online. “What is of most concern is the quality of these images, and the realism the AI is now capable of achieving,” said Charles Hughes, the organisation’s hotline director. The BBC also reported that paedophiles were using image-generating tools to create and sell child sexual abuse material on content-sharing sites.

If the debate over whether AI poses a serious existential threat is divisive among experts, there is consensus that disinformation is a serious short-term problem. The fear is that generative AI – the term for tools that can produce convincing text, images, video and voice from a prompt – could wreak havoc at next year’s US presidential election and a likely general election in the UK. Brad Smith, the president of Microsoft, a powerful player in the field, said this week that governments and tech companies had until the beginning of next year to protect those elections from AI-generated interference, for instance by introducing a labelling scheme for AI-made content.

“We do need to sort this out, I would say by the beginning of the year, if we are going to protect our elections in 2024,” he said at an event hosted by the Chatham House thinktank in London.

It came as the Electoral Commission, the UK’s elections watchdog, warned that time was running out to introduce new rules on AI in time for the next general election, due to take place no later than January 2025.