Hello everyone, and welcome back to the Cognixia podcast. We are back with another interesting episode today. Every week, we get together to discuss a new topic from the world of emerging digital technologies – from new developments to hands-on guides, from things you should know to what you can do to embrace new tools and best practices, and so much more.
In today’s episode, we talk about something that has become more than a mere menace and now threatens individuals and enterprises alike. At some point, we are sure you have heard about this technology, maybe in the newspapers, online portals, or your Google Discover feed. We are talking about ‘Deepfakes’. Deepfakes are becoming a huge threat to everyone. In this episode, we will cover what deepfakes are and how they pose a threat to enterprises.
So, first, let us understand what deepfakes are. Have you come across the viral video in which Barack Obama hurled expletives at Donald Trump, or the one where Mark Zuckerberg announced he had complete, total control of billions of people’s stolen data, or even the one where the popular Game of Thrones character Jon Snow delivers a moving apology for the dismal ending of the otherwise very famous show? If you have, then you have encountered a deepfake. In a way, deepfakes are the 21st-century version of Photoshop. Remember when pictures could easily be doctored using Photoshop, and anybody could be inserted into any backdrop, doing anything? Deepfakes are an unscrupulous menace along the same lines. Deepfakes are fake pictures, videos, or even audio generated using a form of artificial intelligence called deep learning, hence the name – deepfake. If you want to dance like a pro or just star in your favorite movie with all your favorite stars, you could make a deepfake and be a part of the next Christopher Nolan masterpiece!
In September 2019, the AI firm Deeptrace reported that it had found about 15,000 deepfake videos online – nearly double the number from just nine months earlier. About 96% of these videos were NSFW, and 99% of those NSFW videos used the images of female celebrities. Making deepfakes doesn’t require a lot of skill, so even unskilled individuals could produce them relatively easily and circulate them widely. They became an easy medium for doctored revenge content. Besides NSFW content, deepfakes also began being used to make tons of spoofs and satires, like the videos we mentioned before.
Not just videos – deepfakes are also used to create doctored audio and even fabricated profiles of fake individuals on LinkedIn and other online platforms. From fake Bloomberg journalists to fake international studies, everything has been attempted. Fake voice skins and voice clones can be made as well. You might have come across reels and YouTube videos urging everyone to stay vigilant and agree on a security question to ask loved ones to confirm it is really them talking and not a voice skin. People have reported receiving phone calls that seemed to come from someone they know, asking for money or other details, in the exact same voice and manner of speaking – but it turned out it was not them at all! Even Instagram and WhatsApp voice notes have been used for this purpose.
If you are wondering what the difference between deepfakes and generative AI is, allow us to help you out here. With a generative AI tool like a large language model, you enter a question as a prompt and probabilistically get a written answer. A deepfake, in contrast, leverages artificial intelligence to produce synthetic video or audio of a real person, rather than written responses.
Now, a recent report by Forrester Research is cautioning enterprises to be on the lookout for five major deepfake scams that can wreak havoc. In this podcast, we will tell you the five deepfake scams or attacks that the Forrester report details.
First on the list of deepfake scams is fraud – financial fraud, to be precise. Deepfakes of cloned faces and voices can be generated and then used to authenticate and authorize activity. This could lead to fraudulent financial transactions, victimizing both individuals and enterprises. For instance, a deepfake imitating the voice of senior management personnel, or even the CFO, could be used to have funds transferred out to unscrupulous elements.
The second deepfake scam on the list is stock price manipulation. Deepfakes could be used to generate fake announcements and spread fake news which could cause a company’s stock prices to rise or fall unnaturally, triggering an uncontrollable ripple effect in the economy. For instance, a deepfake could be generated announcing the exit of a key executive or declaring some major plans or even announcing a bankruptcy, which would be fake all the while, but powerful enough to snowball out of control.
Third on Forrester’s list of deepfake scams that enterprises need to be careful about is harm to an enterprise’s reputation and brand. Deepfakes can be generated to make offensive language appear to come from the brand, insult customers through seemingly official brand communication, blame business partners and employees, spread fake news and information about the company’s products, and so on. Not only could this be a huge PR nightmare for the company, it could also adversely affect the company’s brand value and its top and bottom lines, with wide-ranging ramifications – some of them irreversible, or at the very least very hard to come back from.
The fourth deepfake scam an enterprise should protect itself against and keep an eye out for is attacks on employee experience and human resources. Employees could create and circulate deepfakes featuring other employees, members of the management, or even other stakeholders, whether out of revenge or other malicious intentions. This could have a huge adverse impact on the mental health of the organization’s employees, threatening their careers as well as the organization’s reputation.
Last on the list of deepfake scams that Forrester Research warns against is amplification. Deepfakes can be used not just to generate fake content but to also spread other deepfake content. Deepfakes could be used to react to and spread other deepfake content, express opinions, share emotions, etc. This would also dent the company’s image, besides amplifying news/content that is already fake.
So, what can organizations do to protect themselves against the horrors and nightmares of deepfakes? Well, we can’t really stop them – there is no foolproof strategy here and almost no sure-shot way to prevent this. That is also part of why deepfakes are so dangerous. Companies need to constantly stay on their toes and monitor what is happening with their brand, which makes keeping watch over social media and other online platforms immensely important. There are also tools on the market that can help verify whether content is authentic, trustworthy, dependable – simply put, not a deepfake. However, it is highly unlikely that the audience at large would do such a deep dive into the content. Also, as the technology to detect deepfakes evolves, the deepfakes themselves evolve too; their makers figure out new loopholes and workarounds for the algorithms and programs used to detect fraudulent content.
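One simple building block that many content-authenticity tools rely on is cryptographic hashing: if a publisher releases a checksum of the original media file, anyone can check that the copy they received has not been altered. Here is a minimal sketch in Python – the function names and the stand-in byte strings are our own illustrative assumptions, not any specific tool’s API:

```python
import hashlib


def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()


def is_unaltered(data: bytes, published_digest: str) -> bool:
    """Compare a received copy against the digest the publisher released.

    Changing even a single byte (one frame, one audio sample) produces a
    completely different digest, so any tampering is flagged.
    """
    return sha256_of(data) == published_digest


# Hypothetical example: these byte strings stand in for video file contents.
original = b"frame-data-of-the-authentic-video"
published_digest = sha256_of(original)  # the publisher computes and posts this

tampered = b"frame-data-of-a-doctored--video"

print(is_unaltered(original, published_digest))  # True: bytes match
print(is_unaltered(tampered, published_digest))  # False: content was altered
```

Note the limits of this approach: a checksum can prove a specific file was tampered with, but it cannot flag a deepfake generated from scratch, which is exactly why the detection arms race described above matters.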
Deepfakes are not going away anytime soon; they will continue to threaten individuals and enterprises. Organizations need to pull up their socks and stay proactive to safeguard themselves against such deepfake attacks. Get your workforce upskilled in top-notch cybersecurity skills, and invest in building a strong, responsible, and resilient brand. The rest of the story is for time and fate to tell.
Do check out our website, www.cognixia.com to learn more about our live online instructor-led cybersecurity courses.
With that, we come to the end of this week’s episode of the Cognixia podcast. Until next week then.
Happy learning!