The release of a new television series that uses groundbreaking ‘deepfake’ technology to mock celebrities and sports stars has sparked debate over the ethical implications of using AI in everyday life.
ITVX’s landmark comedy ‘Deep Fake Neighbour Wars’ boasts a stellar line-up of A-listers including Idris Elba, Adele and Harry Kane – with viewers able to watch on as Nicki Minaj and Tom Holland share a sofa in hot pink tracksuits and argue with their neighbour Stormzy.
The comedy, which launched on ITV’s streaming service on 26 January, presents AI-generated versions of these famous faces, each with a ‘spot-on’ likeness to their real counterparts, in the UK’s first ever deepfake comedy.
Content creators are now more widely using artificial intelligence to scrape images of a celebrity, which are then mapped onto a performer’s face, animated and presented as a ‘deepfake’.
The long-term implications of AI being rolled out across our television screens have divided opinion, with some raising ethical and legal concerns and others highlighting the convenience of using the technology in everyday life.
It comes as Equity, a leading performers’ union, warned actors could soon be out of a job as advancements in automated technology continue to impact the real world.
Deep Fake Neighbour Wars uses AI technology to replicate real famous people including Greta Thunberg (pictured). Despite concerns around the ethics of the premise, the show’s creators insist it’s just ‘silly’
In the show, deepfake AI technology is used to replicate famous people including Idris Elba and Kim Kardashian (pictured) to present them as ‘everyday’ bickering neighbours
Equity launched its ‘Stop AI stealing the show’ campaign as complaints over actors’ voice and likeness being used without their consent continue to mount.
Warning there would be ‘dystopian’ consequences unless copyright law was updated, the union said AI systems were ‘now replacing skilled professional performers’.
Although ITV’s show may look and sound realistic to the average viewer, each episode comes with a plethora of warnings and advisories telling the audience that the content has been created using deepfake technology.
Despite the advancements in technology, amateurish examples have emerged in recent years, with Ukrainian President Volodymyr Zelensky targeted in one clip that was laughed off by experts.
In the video, ‘Zelensky’ can be seen speaking from his lectern as he calls on his troops to lay down their weapons and give in to Putin’s invading forces.
The deepfake was widely circulated on Russian social media and was even planted by hackers on live TV in Ukraine and on a news site before it was taken down.
Internet users immediately flagged the discrepancies between the skin tone on Zelensky’s neck and face, the odd accent in the video, and the pixelation around his head.
Zoe Kleinman, the BBC’s technology editor, discussed the potential legal pitfalls for creators using AI and celebrity deepfakes.
She told Radio 4’s Front Row: ‘It probably won’t surprise you to know that AI and regulation have not been in step, regulation is very very slow.
‘It’s sort of covered by other laws if you wanted to go for it. I suppose someone like Kim Kardashian or Ariana Grande could go for privacy or defamation. There are various ways to attack it.’
Spencer Jones, the show’s co-creator, insists the comedy does not deal with serious subjects and makes it clear the figures are not real (pictured: ‘Stormzy’, ‘Harry Kane’)
Among the high-profile celebrities recreated in the comedy are Nicki Minaj and Spider-Man star Tom Holland (pictured)
Television critic Scott Bryan added: ‘I think we are now going to be hitting a lot of ethical dilemmas around this – for example what would stop somebody bringing back someone from the dead to be promoting their advert with their likeness entirely intact?
‘The wider point, I find, amid all the talk of AI technology, is that people love things that are real.
‘Look at Happy Valley, it doesn’t have any CGI, any special effects, it’s grounded in the characters, it’s a traditional BBC drama and brilliantly written. I find that there’s this excitement about AI that’s not always needed.’
The so-called ‘deepfake’ phenomenon uses AI technology to manipulate videos and audio in a way that replicates real life.
While in this context, the use of the technology is described as ‘silly’ by its creators, concerns have been raised in the past about how deepfakes have been used to generate child sexual abuse videos and revenge porn, as well as political hoaxes.
In November, an amendment was made to the government’s Online Safety Bill which stated using deepfake technology to make pornographic images and footage of people without their consent would be made illegal.
Despite potential concerns over the ethics of creating AI versions of very famous people, the creators of Deep Fake Neighbour Wars told the Guardian they were not concerned about the content of their programme.
Spencer Jones told the newspaper: ‘Everything is silly. If you turn us on halfway through, and think that the real Harry Kane has really had his patio tile cracked by Stormzy, you might need to have a little look at yourself.’
Describing the characters as ‘heroes’, he added that the purpose of the show was to ‘[reimagine] them with everyday problems’ – and argued that one of the most common and universal problems people have is irritating neighbours.
One actress on the show, Katia Kvinge, plays several different roles including environmental campaigner Greta Thunberg – and she argued there are benefits to not showing her face in her appearances.
OpenAI’s ChatGPT bot has also disrupted everyday life, with some schools banning homework due to the sophisticated AI’s ability to comb the internet and process complex and specific tasks in seconds
She said: ‘One morning before work, I was so tired. But then I was like: “Oh, it’s fine, my face isn’t on camera today.”’
To ensure viewers are not left confused by the deepfake technology, the end of each episode shows the false faces falling away to reveal each actor’s real identity, making clear that the episode has been a simulation.
However, when the actors are in their roles, they have to take steps to ensure the AI effect looks as real as possible.
It comes as schools are also being forced to grapple with the ethical dilemmas spawned by AI technology.
Since ChatGPT was released in November last year, fears have grown among school leaders that it will make cheating easier than doing the work.
Staff at Alleyn’s School in southeast London are also said to be rethinking their homework plans after a test English essay produced by OpenAI bot ChatGPT was awarded an A* grade.
Headteacher Jane Lunnon explained that the school’s new focus on ‘flipped learning’ was an inevitable sign of the times due to the ‘seismic and game changing’ nature of AI.
She told the Times: ‘I truly feel this is a paradigm-shifting moment. It’s incredibly usable and straightforward.
‘However at the moment, children are often assessed using homework essays, based on what they’ve learnt in the lesson.
‘Clearly if we’re in a world where children can access plausible responses … then the notion of saying simply do this for homework will have to go.
‘Homework will be good for practice but if you want reliable data on whether children are acquiring new skills and information, that will have to be done in lesson time, supervised.’
King’s College London’s Cyber Security Research Group director Dr Tim Stevens has also warned about the potential dangers of deepfakes being used to spread fake news and challenge national security.
Dr Stevens said the technology could be exploited by autocracies like Russia to undermine democracies, as well as to bolster legitimacy for foreign policy aims such as going to war.
He added: ‘What kind of society do we want? What do we want the use of AI to look like? Because at the moment the brakes are off and we’re heading into a space that’s pretty messy.
‘If it looks bad now, it’s going to be worse in future. We need a conversation about what these tools are for and what they could be for, as well as what our society will look like for the rest of the 21st century.
‘This isn’t going away. They’re very powerful tools and they can be used for good or for ill.’
So can you tell the real stars from the fakes?
Pop star Rihanna is featured in the ITV comedy. She is pictured, right, at the Golden Globe Awards on January 10
Adele is another target for the AI’s deepfake technology
Swedish climate activist Greta Thunberg also crops up in ITV’s Deep Fake Neighbour Wars
There are eerie similarities for fans of Idris Elba who are watching ITV’s landmark Deep Fake Neighbour Wars
When he’s not banging in the goals for Spurs or England, Harry Kane can be found lounging on his sofa after arguing with his A-list neighbours – or so ITV’s show would have you believe
Can YOU spot the deepfake from the real person? Cyber expert warns AI images pose national security risk – as 15 tell-tale signs to look out for are revealed
By Alexander Butler
MailOnline has put together a deepfake test as well as everything you need to know about deepfake technology.
What are they? How do they work? What risks do they pose?
And most importantly, can you tell the difference between the real thing and AI?
What is a deepfake and how are they made?
If you’ve seen Tom Cruise playing guitar on TikTok, Barack Obama calling Donald Trump a ‘total and complete dipshit’, or Mark Zuckerberg bragging about having control of ‘billions of people’s stolen data’, you have probably seen a deepfake before.
A ‘deepfake’ is a form of artificial intelligence which uses ‘deep learning’ to manipulate audio, images and video, creating hyper-realistic media content.
The term ‘deepfake’ was coined in 2017 when a Reddit user posted manipulated porn videos to the forum. The videos swapped the faces of celebrities like Gal Gadot, Taylor Swift and Scarlett Johansson onto porn stars.
A deepfake uses a subset of artificial intelligence (AI) called deep learning to construct the manipulated media. The most common method uses ‘deep neural networks’ and ‘encoder algorithms’, combining a base video of the person whose face will be replaced with a large collection of videos of the target whose face will be inserted.
The deep learning AI studies the footage in various conditions, finds common features between both subjects, and then maps the target’s face onto the person in the base video.
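The process described above can be loosely sketched in code. The example below is a toy illustration of the data flow only – a shared encoder that compresses any face into common features, plus one decoder per person – using random, untrained weights. A real deepfake system trains deep convolutional networks on thousands of frames of each subject.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "shared encoder, per-person decoder" set-up with random weights.
# All sizes are illustrative; real systems use deep convolutional nets.
FACE_DIM, LATENT_DIM = 64 * 64, 128

enc = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01    # shared encoder
dec_a = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01  # decoder for person A
dec_b = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01  # decoder for person B

def encode(face):
    # Compress a face into a small vector of shared features
    # (expression, pose, lighting).
    return np.tanh(face @ enc)

def decode(latent, decoder):
    # Rebuild a face from those features using one person's decoder.
    return latent @ decoder

# Training would teach encode->decode_a to reconstruct A's frames and
# encode->decode_b to reconstruct B's. The swap itself: encode a frame
# of A, but decode it with B's decoder, giving "B's face with A's
# expression and pose".
frame_of_a = rng.standard_normal(FACE_DIM)
swapped = decode(encode(frame_of_a), dec_b)
print(swapped.shape)  # same shape as the input face
```

Because the encoder is shared between both subjects, the features it learns (pose, expression) transfer across faces – which is exactly why the swapped output keeps the base video’s movements.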
Generative Adversarial Networks (GANs) are another way to make deepfakes. GANs employ two machine learning (ML) models with opposing roles: the first creates forgeries, and the second tries to detect them. The process completes when the second model can no longer find inconsistencies.
The accuracy of a GAN depends on the volume of training data. That’s why you see so many deepfakes of politicians, celebrities and adult film stars: there is a lot of media of those people available to train the machine learning algorithms.
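The adversarial loop can be shrunk to a few lines. In this toy sketch the ‘real’ data is just numbers centred on 4.0, the forger is a single shift parameter, and the detector is logistic regression – a stand-in for the deep image networks a real GAN would use. The two models take turns: the detector learns to tell real from fake, then the forger adjusts to fool it.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

theta = 0.0        # "generator": fake sample = noise + theta
w, b = 0.0, 0.0    # "discriminator": d(x) = sigmoid(w*x + b)
lr_d, lr_g, batch = 0.02, 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)          # real data centred on 4.0
    fake = rng.normal(0.0, 1.0, batch) + theta  # forgeries

    # Detector step: push d(real) toward 1 and d(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w, b = w - lr_d * grad_w, b - lr_d * grad_b

    # Forger step: shift theta so the detector scores fakes as real.
    d_fake = sigmoid(w * (rng.normal(0.0, 1.0, batch) + theta) + b)
    theta -= lr_g * (-np.mean(1 - d_fake) * w)

print(round(theta, 1))  # theta has been pulled toward the real mean of 4.0
```

The equilibrium is reached when the fakes are statistically indistinguishable from the real data, at which point the detector’s gradients vanish – the ‘can no longer find inconsistencies’ condition described above.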
Successes and failures of deepfakes
A notorious example of a deepfake or ‘cheapfake’ was a crude impersonation of Volodymyr Zelensky appearing to surrender to Russia in a video widely circulated on Russian social media last year.
The clip shows the Ukrainian president speaking from his lectern as he calls on his troops to lay down their weapons and acquiesce to Putin’s invading forces.
Savvy internet users immediately flagged the discrepancies between the colour of Zelensky’s neck and face, the strange accent, and the pixelation around his head.
Mounir Ibrahim, who works for Truepic, a company which roots out online deepfakes, told the Daily Beast: ‘The fact that it’s so poorly done is a bit of a head-scratcher.
‘You can clearly see the difference — this is not the best deepfake we’ve seen, not even close.’
One of the most convincing deepfakes on social media at the moment is TikTok parody account ‘deeptomcruise’.
The account was created in February 2021 and has more than 18.1 million likes and five million followers.
It posts hyper-realistic parody versions of the Hollywood star doing everything from magic tricks and playing golf to reminiscing about the time he met the former president of the Soviet Union and posing with model Paris Hilton.
In one clip, Cruise can be seen cuddling Paris Hilton as they pretend to be a couple.
He tells the model ‘You’re so absolutely beautiful’, to which Hilton blushes and thanks him.
While looking in the mirror, Hilton tells the actor: ‘Looking very smart Mr Cruise’.
Another video shared to the account shows deepfake Cruise wearing a festive Hawaiian shirt while kneeling in front of the camera.
He shows a coin and in an instant makes it disappear – like magic.
‘I want to show you some magic,’ the imposter says, holding the coin.
Do deepfakes pose a threat?
Despite the entertainment value of deepfakes, some experts have warned against the dangers they might pose.
King’s College London’s Cyber Security Research Group director Dr Tim Stevens has warned about the potential deepfakes have in being used to spread fake news and undermine national security.
Dr Stevens said the technology could be exploited by autocracies like Russia to undermine democracies, as well as to bolster legitimacy for foreign policy aims such as going to war.
He said the Zelensky deepfake was ‘very worrying’ because there were people who ‘did believe it’ as there are people who ‘want to believe it’.
Theresa Payton, CEO of cybersecurity company Fortalice, said deepfake AI also had potential to combine real data to create ‘franken-frauds’ which could infiltrate companies and steal information.
She said the ‘age of increased remote working’ was the perfect environment for these types of ‘AI people’ to flourish.
Miss Payton told the Sun: ‘As companies automate their resume scanning processes and conduct remote interviews, fraudsters and scammers will leverage cutting-edge deepfake AI technology to create ‘clone’ workers backed up with synthetic identities.
‘The digital walk into a natural person’s identity will be nearly impossible to deter, detect and recover.’
This article originally appeared on Dailymail.co.uk