Humans snapping photos of themselves with melting skin, blood-smeared faces and mutated bodies while standing in front of a burning world: that is what the DALL-E AI believes the last selfies taken at the end of the world will look like.
DALL-E, developed by OpenAI, is a new system that can produce complete images when fed natural-language descriptions, and TikToker Robot Overlords simply asked it to ‘show the last selfie ever taken.’
Each of the nightmarish results shows a human holding a phone while, behind them, bombs drop, colossal tornadoes rage and cities burn, with zombies standing in the middle of the destruction.
One of the selfies is an animated image of a man wearing what looks like riot gear. He slowly moves his head around, looking as if his life is flashing before his eyes, while bombs fall from the sky around him.
Each of the videos has been viewed hundreds of thousands of times, with users commenting on how horrifying the selfies are; one user said the images are so chilling they will keep them up at night.
A TikTok user asked DALL-E to generate images of what it thinks the last selfies ever taken will look like. One of the images shows a man in riot gear watching in horror as bombs drop behind him
Other users joked about taking a selfie at the end of the world, with one commenting: ‘But first, lemme take a selfie (if no one gets this reference I’m gonna cry).’
TikTok user Nessa shared: ‘and my boss would still ask if I’m coming into work.’
However, not everyone felt light-hearted about what the end of time would look like.
A user named Victeur shared: ‘Imagine hiding in the dark for the war, not having seen your face in years and seeing this when you take a last picture of yourself.’
The selfies were generated by a TikToker who told the AI to generate what it thinks the last selfies will look like
Most commenters saw the fun side of the images, but a darker side of DALL-E has also been uncovered: racial and gender bias.
The system is publicly available, and when OpenAI launched the second version of the AI it encouraged people to enter descriptions so the system could improve its image generation over time, NBC News reports.
However, people started to notice that the images were biased. For example, if a user typed in ‘CEO,’ DALL-E produced only images of white men, while ‘flight attendant’ returned only images of women.
OpenAI announced last week that it was launching new mitigation techniques to help DALL-E create more diverse images, claiming the update makes users 12 times more likely to see images featuring people from diverse backgrounds.
The nightmarish images, which show zombies standing in front of burning cities, were created by DALL-E
The images are so chilling, some TikTok users said they will now have nightmares after seeing them
The original version of DALL-E, named after Spanish surrealist artist Salvador Dalí and Pixar robot WALL-E, was released in January 2021 as a limited test of the ways AI could be used to represent concepts, from mundane descriptions to flights of fancy.
Some of the early artwork created by the AI included a mannequin in a flannel shirt, an illustration of a radish walking a dog, and a baby penguin emoji.
Examples of phrases used in the second release to produce realistic images include ‘an astronaut riding a horse in a photorealistic style’.
On the DALL-E 2 website, this prompt can be customized to produce images ‘on the fly’, for example by replacing the astronaut with a teddy bear, swapping riding a horse for playing basketball, and rendering the result as a pencil drawing or an Andy Warhol-style ‘pop-art’ painting.
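For readers curious how such a prompt might be submitted in code rather than through the website, below is a minimal, illustrative sketch assuming OpenAI’s pre-1.0 Python library and its Image.create endpoint; the prompt text and API key are placeholders, and the TikToker used the web interface, not code.

```python
# Illustrative sketch only: submits a text prompt to OpenAI's image API.
# Assumes the pre-1.0 openai Python library (openai.Image.create); the
# API key and prompt below are placeholders, not real values.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "a teddy bear playing basketball as a pencil drawing"

response = openai.Image.create(
    prompt=prompt,    # the natural-language description to render
    n=1,              # number of images to generate
    size="1024x1024"  # output resolution
)

# The API responds with a URL pointing to the generated image.
print(response["data"][0]["url"])
```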
‘DALL·E 2 has learned the relationship between images and the text used to describe them,’ OpenAI explained.
‘It uses a process called ‘diffusion,’ which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.’
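To make the ‘diffusion’ idea concrete, here is a toy sketch, not OpenAI’s actual code, that starts from a pattern of random dots and nudges it toward a target picture step by step; the target image, step count and update rule are invented purely for illustration.

```python
import numpy as np

# Toy illustration of the diffusion idea described above: begin with random
# noise and repeatedly move it a small step toward a target image.
# A real diffusion model uses a trained neural network to predict each step;
# the "target" and the update rule here are invented for illustration only.

rng = np.random.default_rng(0)

target = rng.random((64, 64, 3))  # stand-in for the image the model "recognizes"
image = rng.random((64, 64, 3))   # start from a pattern of random dots

for step in range(50):
    # Nudge the noisy pattern slightly closer to the target each iteration,
    # loosely analogous to one denoising step in a diffusion model.
    image += 0.1 * (target - image)

# After many steps the random dots have been altered into (nearly) the image.
print(np.abs(image - target).mean())
```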