NEWS CLIP: Cyber expert warns AI images pose national security risk
Can YOU spot the deepfake from the real person? Cyber expert warns AI images pose national security risk - as 15 tell-tale signs to look out for are revealed
The Mail Online has listed 15 ways to spot a deepfake after Dr Tim Stevens, director of the Cyber Security Research Group at King's College London, said deepfake AI - which can create hyper-realistic images and videos of people - had the potential to undermine democratic institutions and national security. Dr Stevens said the widespread availability of these tools could be exploited by states such as Russia to 'troll' target populations in a bid to achieve foreign policy objectives and 'undermine' the national security of other countries.
A 'deepfake' is a form of artificial intelligence that uses deep learning to manipulate audio, images and video, creating hyper-realistic media content. The most common method uses deep neural networks and encoder algorithms, combining a base video - the footage into which the face will be inserted - with a large collection of videos of the target. A notorious example of a deepfake was a crude impersonation of Volodymyr Zelensky appearing to surrender to Russia in a video widely circulated on Russian social media last year. The clip shows the Ukrainian president speaking from his lectern as he calls on his troops to lay down their weapons and acquiesce to Putin's invading forces.
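The face-swap method described above is commonly built around a shared encoder paired with one decoder per identity: the encoder compresses any face into a latent code capturing expression and pose, and decoding that code with the other person's decoder renders their face with the source's expression. The toy sketch below illustrates only the data flow, with random matrices standing in for trained network weights; the dimensions and function names are illustrative assumptions, not any particular deepfake tool's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for trained weights: a shared encoder and one
# decoder per identity. Real systems use deep neural networks.
FACE_DIM, LATENT_DIM = 64, 8

encoder = rng.standard_normal((LATENT_DIM, FACE_DIM))    # shared across identities
decoder_a = rng.standard_normal((FACE_DIM, LATENT_DIM))  # reconstructs person A
decoder_b = rng.standard_normal((FACE_DIM, LATENT_DIM))  # reconstructs person B

def encode(face_vector):
    # Compress a face into a small latent code (expression/pose).
    return encoder @ face_vector

def swap_to_b(face_of_a):
    # Encode A's frame, then decode with B's decoder: the output
    # depicts B wearing A's expression - the core face-swap trick.
    return decoder_b @ encode(face_of_a)

frame_of_a = rng.standard_normal(FACE_DIM)  # one flattened video frame of A
fake_frame = swap_to_b(frame_of_a)
print(fake_frame.shape)
```

During training, each decoder only ever learns to reconstruct its own person from the shared latent space, which is why swapping decoders at inference time transfers identity while preserving expression.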
Adeniran Omotade, the technical support specialist for ESET Nigeria and Ghana, added that AI is moving at such a pace that we need to either regulate it more heavily or learn how to defend against it before it gets out of control. Deepfakes are without question one of the biggest and most advanced cybersecurity risks of the future, and their potential applications are endless. From cloning people for defamation and social engineering to security problems such as defeating facial recognition and enabling scams, we are potentially moving towards a new internet age where seeing is no longer believing.
More education is desperately required as cybercriminals cleverly adopt new techniques to attack people and businesses. At the same time, our own cybersecurity industry should admit that these types of emerging threats require machine-learning-based solutions. With deep learning autonomously generating convincing fakes, we will soon require more forms of multi-layered security to prove identities for even the most trivial of transactions.