Deepfakes are becoming so common that we may not even realise that some of the images and videos we encounter have been artificially created. We briefly discuss what a deepfake is and some of the ways it has been permeating our lives and the content we consume.
April Fool’s Day is perhaps the only day of the year when we have licence to share practical jokes and hoaxes, in the hope that some of the more gullible among us will believe them; ultimately, we can all have a good laugh about it and move on. But what happens when it is not April Fool’s Day and hoaxes abound? That is the situation that is increasingly emerging, and it is of particular concern.
A CNN article published late last week highlighted some of the recent hoaxes that went viral: “Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.” The article continued by noting that none of those incidents happened, but they were artificial intelligence (AI)-generated images that are becoming cheaper and easier to create. For example, the image of Pope Francis wearing a puffer coat was reportedly created by a 31-year-old construction worker from Chicago, in the United States, who was tripping on mushrooms (Source: BuzzFeed News).
As the technology to create such content becomes even more accessible, it is imperative that we become more aware, ask questions, and perhaps not be so quick to believe what we see.
What is a deepfake?
The term “deepfake” is a mash-up of “deep learning” and “fake”, and refers to AI-generated media in which visual and audio content is manipulated or generated outright, usually with the intent to deceive. Deepfakes have their roots in image manipulation, which is decades old, but the technique has improved and evolved as technology has developed.
For the ordinary consumer, who may have an appreciation of what AI-generated image platforms (such as MidJourney and Lensa AI) can do, a deepfake can be considered an extrapolation of that: convincing but fictional photos and videos created from scratch. In the case of video, the voices of real-life people can be cloned and manipulated to say anything, with the source audio collected from past recordings, and even WhatsApp messages.
Getting caught in the net of deepfakes
Suffice it to say, it is becoming increasingly difficult to spot deepfake content. Not only is the technology improving, but many of the subtle signs of fakes, such as eyes that do not blink, patchy skin tones, bad lip-synching and inconsistent lighting, have been addressed to varying degrees, and so at first glance – or even after several glances – you might not readily be able to distinguish between real and fake content.
The fact of the matter is that deepfakes have already begun to permeate the media we consume. For example,
- There are dozens of deepfake apps and websites, many of which can be integrated into popular social media platforms;
- Deepfakes were used during the 2020 United States presidential campaign, and in political campaigns around the world; and
- Deepfake images have been integrated into movies such as Solo: A Star Wars Story.
Further, thousands of deepfake videos are being created for the porn industry, and concern is growing that “[d]eepfake technology is being weaponised against women” (Source: The Guardian). For example, the no longer available DeepNude app used “…neural networks to remove clothing from the images of women, making them look realistically nude” (Source: Vice). In a time of growing revenge porn and cyberbullying, the power of deepfake to harass, demean and undermine others is indeed worrying.
We must also highlight the power and influence of traditional media and media platforms, which may unwittingly perpetuate and lend legitimacy to deepfakes in the rush to be the first to release breaking news. Since it is becoming increasingly difficult to differentiate real images and video from fake ones, we as consumers may end up not only believing, but also acting upon, information that is later revealed to be false, and thereafter may have to deal with the consequences.
The race is on to tell real from fake
Finally, although there is a definite need for concern about deepfakes, work is underway to give us, the users, information about the source of the content we are seeing, in order for us to decide whether what we are encountering is authentic or artificially created. Two of the companies leading the way are Microsoft and Adobe, who have joined forces to provide details on the history of a photograph or image, such as who took it, when and where it was taken, and the edits that were subsequently made (Source: CBS News).
This content credentials feature has been gaining wide support, and may become the norm in the not-too-distant future. Nevertheless, we, as users and consumers, need to become more vigilant and educated about ongoing threats, and recognise that we may no longer be able to take all that we see at face value. Increasingly, we will need to ask ourselves, “Is this real?”, and do a bit of sleuthing to come to an answer.
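To give a flavour of how such provenance schemes work, the sketch below uses a simple cryptographic hash to make edits to a file detectable. This is only an illustration of the tamper-evidence idea, not the actual Microsoft/Adobe content credentials system, which attaches cryptographically signed metadata (who took the image, when, and what edits were made) rather than a bare hash; the byte strings and names here are purely hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a short hex digest that identifies this exact content.

    Any change to the bytes, however small, produces a different digest.
    """
    return hashlib.sha256(data).hexdigest()[:16]

# Hypothetical files: an original photo and a subtly edited copy.
original = b"...raw bytes of the original photo..."
edited = b"...raw bytes after a subtle AI edit..."

# The creator publishes the fingerprint alongside the photo.
record = fingerprint(original)

# A consumer can later check whether the copy they received matches the record.
print(fingerprint(original) == record)  # True: untouched copy
print(fingerprint(edited) == record)    # False: the content was altered
```

Real content credentials go further by signing such records, so a forger cannot simply replace both the file and its fingerprint, but the underlying principle – binding the published record to the exact bytes of the content – is the same.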
Image credit: ApolitikNow (flickr)
Indeed, as users, we need to be more responsible, especially in not making ourselves instruments of the “sharing and redistribution” of these fakes.