Everybody is talking about: Deepfakes
Here, our Creative Production Lead in Dubai, Amr Younis; Senior Creative Technologist in London, Sam Cox; and Innovation Director for Singapore, Arnaud Robin put their heads together to help marketers understand whether brands should play in this space, looking at the moral, ethical and legal conundrums.
What are deepfakes and why is there so much buzz around them?
Deepfakes have become quite the talking point of late. They operate in the domain of manipulating audio and video footage of anyone, making them speak and act as you wish. By using AI algorithms to analyse existing footage and then applying predictive methods to generate new footage, we’ve created a monster with the ability to change the online world as we know it.
Like many AI technologies, the popularity of deepfakes has risen rapidly thanks to the democratisation of processing power. Previously, rendering out the visuals was the domain of supercomputers. Now anyone with a good graphics card and the correct code is able to produce deepfakes.
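To make that less abstract, here is a minimal, hypothetical sketch of the classic face-swap idea that underpins many deepfake tools: a single shared encoder learns a common representation of faces, while a separate decoder is trained for each person, so "swapping" is simply encoding person A and decoding with person B's decoder. This is an illustrative PyTorch example under those assumptions, not the code behind any particular app; a convincing result would need far more data, training time and careful face alignment.

```python
# Illustrative sketch (assumed, not production code) of the shared-encoder /
# per-person-decoder approach behind many face-swap deepfakes.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # trained only on faces of person A
decoder_b = Decoder()  # trained only on faces of person B

# Simplified training loop: each person's faces are reconstructed through the
# shared encoder and their own decoder.
optimiser = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-ins for real cropped, aligned face batches
faces_b = torch.rand(8, 3, 64, 64)

for step in range(10):  # a real model needs many thousands of steps and images
    optimiser.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimiser.step()

# The "swap": encode person A's face, but decode it with person B's decoder,
# producing B's face with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```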
Technology often spreads into interesting territory when democratised, and deepfakes have been no different. We’ve already seen high-profile scandals involving faked celebrity pornography, and the digital world is already flagging the dangers of deepfakes in spreading fake news.
However, some app developers have taken a more lighthearted approach to this technology. ZAO, for example, is a deepfake app that convincingly face-swaps you into scenes from hundreds of movies and TV shows from just a single uploaded photograph. It became an instant social media phenomenon and has helped drive deepfakes to the forefront of internet culture.
Is this something brands should be considering using and, if so, what are the opportunities?
Current options for brands are limited, since this area is fraught with ethical, legal and moral issues that are still being discovered and tested by fringe explorers. However, if you tread carefully, opportunities can be found. For example, one engagement technique on social media is called “gamebook”, where the consumer is the hero of the content created by the brand. We can easily imagine an iteration of this where anyone, simply by providing a few photos of their face, could become the hero of a commercial or the protagonist of a film or game (ideally, with moderation).
Another space that’s recently been tested is the ability for celebrities to plug a product or brand in their native tongue, only to have deepfakes convert that into endless other languages – and look convincing while doing it. This not only saves time and money but also gives greater reach to the content produced. This was put to the test by David Beckham and charity Malaria No More.
There’s also experimental work that can benefit from this technology. The Dalí Museum in St Petersburg, Florida has used a controversial artificial intelligence technique to “bring the master of surrealism”, Salvador Dalí, back to life. Getting Dalí to create new material is obviously impossible. But by feeding the deepfake software old footage of how he spoke, along with his facial mannerisms, they were able to create new video footage of him greeting visitors to the museum.
What are the potential red flags for brands?
Ultimately, deepfakes play a dangerous game with transparency, authenticity and trust. If the technology is overused or used for the wrong reasons, public perception of the brands behind it could suffer. Technology providers and social networks are already building tools to identify deepfakes, so making sure you’re on the right side of the line – and not deceiving your audience – is imperative.
However, the real heightened danger of deepfakes lies in spreading fake news. Stunts that use a celebrity or political figure to push a point of view are extremely vulnerable to major backlash. It should go without saying that consent is an absolute must.
Lastly, as deepfakes allow people to put themselves at the centre of content, will this devalue talent when people want to see themselves as the star?
How could the rise of deepfakes affect how brands work with talent?
There are several points to consider here.
- Virtual rights: Traditionally, any talent who signed over their rights to audio and visual content knew pretty much what the finished article would portray (bar the odd bit of retouching). With deepfakes, however, will talent need new methods of protecting their rights?
- Access to the dead: As previously noted, talent who have passed away can now, in theory, appear in new content. Just mind the legalities around that.
- Production: One common scenario after shooting a film, for example, is that the client wants the comedian to say something different from what they said on stage. Post-production techniques and deepfakes will allow more flexibility here, requiring less of the talent’s time.
- Talent banks: Easier access to talent might become the norm. Imagine a bank of celebrities, all pre-scanned. You contact the host and state which celebrity you would like to feature in your production. You select one and hand it straight over to post-production, where they enter the text that needs to be spoken. You would no longer pay for talent time, but only for talent rights.
All in all, deepfakes – the marketing world’s shiny new toy – provide a number of exciting opportunities for brands. However, the technology treads a very fine ethical line which needs to be respected. After all, the simulacrum has never been so real.
Those brands brave enough, and aware enough, to get that balance right could see great results. But those who charge in because it’s the latest must-have risk alienating audiences and, ultimately, damaging their reputations.