Just when it seemed we were getting used to dodging fraudulent emails and distrusting fake news, new technologies are throwing up more powerful threats that are far harder to dodge.
After Spain’s last general election, held on 10 November, a humorous video began to circulate on social media showing the famous members of The A-Team with the faces of a handful of Spanish political leaders. Delving a little below the surface, it turns out that this video, called “Equipo E” (Team E), is not the brainchild of a fame-hungry geek with too much time on his or her hands. It seems rather to be a professional, marketing-driven job to boost the popularity of an open-source tool called DeepFaceLab, which has been circulating on the net for some time now.
This AI-based software enables any video to be tampered with, swapping one person’s face for another’s. Programs of this kind began to be used some years ago to edit adult videos and create humorous parodies. Although this already poses quite a threat to the face-swapped victims, a much bigger problem lurks in the wings as the technology acquires such uncanny realism that it becomes difficult to distinguish a manipulated video from a genuine one.
It now seems possible to produce a perfectly credible video in which any public figure can be shown acting inappropriately or coming out with some outrageous comment. A few weeks ago a putative Mark Zuckerberg video, in which he made questionable statements, turned out to be completely fabricated. Another attention-grabbing video concerned U.S. House Speaker Nancy Pelosi: the footage had simply been slowed down to make her appear drunk.
As well as the obvious threat of distorting public opinion, there is also the risk of personal scams committing some type of fraud. Only a few days ago news broke of a scam involving a tool capable of imitating another person’s voice during a conversation. It had been used to fool a top executive into believing he was being phoned directly by the company’s CEO, who asked him to make an urgent and sizeable money transfer.
On the other side of the coin, the very technology producing the threat might well be turned against it. There are now systems capable of detecting whether a photo, audio clip or video has been tampered with. The trouble is that, following the well-known arms-race principle, a better, less detectable version is always likely to appear, upping the stakes again. Furthermore, detection in itself does not undo the harm already done.
From the legal point of view there is also a need to raise protection levels. California has recently passed legislation banning the creation of adult-content deepfake videos without the consent of the persons involved, and forbidding the circulation of manipulated content that might harm the image of any politician. Similar legislation is almost bound to be taken up soon by other countries. For the time being, however, it is still too difficult to prosecute the perpetrators effectively, so such laws may to that extent prove unenforceable. The day may come when only encrypted and signed content can be viewed, ensuring perfect traceability, but this is unlikely to happen in the short term.
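The idea behind signed, traceable content can be sketched with a few lines of Python. This is only an illustration using a shared secret key; a real deployment would rely on public-key signatures issued by a trusted authority, and all names below (the key, the content bytes) are made up for the example:

```python
import hashlib
import hmac

# Hypothetical publisher key; in practice this would be an asymmetric
# key pair managed by a certification authority, not a shared secret.
SECRET_KEY = b"publisher-private-key"

def sign_content(content: bytes) -> str:
    """Produce a signature that travels alongside the published video."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Re-compute the signature; any change to the bytes invalidates it."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, signature)

original = b"frame data of the original video"
sig = sign_content(original)

print(verify_content(original, sig))          # the untouched content checks out
print(verify_content(b"doctored frames", sig))  # a manipulated copy fails
```

A viewer that refused to play any content failing such a check would give the traceability the paragraph above describes, though deploying it universally is the hard part.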
For the moment, the first step in staving off the adverse effects of this new technology is, as always, to become fully aware of the threats and their consequences. The next important step is to verify what we see and hear. We as individuals are duty bound to become more critical of the information we receive, especially when it touches on our own interests or opinions.
Author: Crescencio Lucas Herrera
The author’s views are entirely his own and may not reflect the views of GMV