Deep Fake Technology Said To Be Outpacing Security Measures To Control Or Temper It
“Deep fake” technology is a sub-branch of artificial intelligence that is poised to become incredibly important in the coming years. The technology, which allows one person, place, or thing to be convincingly substituted for another in video or audio, first became popular in the adult film industry, and from there quickly spread to mainstream movies and military applications.
While the popularity of “deep fakes” emerged from the lusts of man’s heart to project personal fantasies onto other people in a way that appears realistic (for example, giving the impression that a famous movie star appeared in an adult film), that is but the most superficial and useless application of the technology. Its real use, and simultaneous threat, is the ability to manufacture evidence to support a false conclusion. One often hears of “false flag” terrorist attacks, such as the proposed Operation Northwoods, in which a government would stage an attack in order to justify a certain policy. “Deep fake” technology adds a brand new dimension to this, because now “evidence” can be OPENLY presented as true when it is really fake.
Now, “deep fakes” are not exclusive to any one actor, as the technology can be used by anybody. For example, a zealous prosecutor could use it to create a video of a man he wants incarcerated taking part in a crime the man had no part in. A government could do the same, and not just the US against another country, but another country against the US, such as with fabricated allegations of bribery, corruption, or “plans” to attack a nation. The fundamental problem is that the technology breaks down one of the most well-established forms of proof: video evidence of a crime. This has raised concern at the highest levels of the military, where officials have said that advancements in the technology present a threat to national security:
In July, Sen. Marco Rubio appeared to be a lone cry in the dark when he declared, in remarks at the Heritage Foundation, that “Deep Fake” technology, “which manipulates audio and video of real people saying or doing things they never said or did,” poses a serious menace to national security.
Indeed. This is an industry that’s growing rapidly, far out-pacing its national security implications and the development of biometric de-facializing countermeasures, which some authorities told Biometric Update is “very likely” to become a new biometric industry technology off-shoot. According to a report published by Markets and Markets in 2017, the global facial recognition market was estimated at $3.37 billion in 2016, and is expected to grow to $7.76 billion by 2022, with an annual growth rate of 13.9 percent. But this could be stunted by the growth in the biometric deception technology market, as the existing biometric industry may be forced to work on developing de-facializing countermeasures, industry authorities said.
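As a rough arithmetic check on those quoted figures (my own back-of-the-envelope calculation, not part of the report): compounding the 2016 estimate at the stated annual rate over the six years to 2022 gives $3.37 billion × (1.139)^6 ≈ $3.37 billion × 2.18 ≈ $7.4 billion, in the neighborhood of the report’s $7.76 billion projection, with the small gap presumably owing to rounding in the published growth rate.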
Rubio warned, “I believe that this is the next wave of attacks against America and Western democracies … the ability to produce fake videos that … can only be determined to be fake after extensive analytical analysis.” That threat is all the more formidable given what the Pew Research Center found. As Bobby Chesney, the Charles I. Francis Professor in Law and Associate Dean for Academic Affairs at the University of Texas School of Law and Director of UT-Austin’s Robert S. Strauss Center for International Security and Law, and Danielle Citron, the Morton & Sophia Macht Professor of Law at the University of Maryland Carey School of Law and author of Hate Crimes in Cyberspace, put it: “As of August 2017, two-thirds of Americans (67 percent) reported … that they get their news at least in part from social media. This is fertile ground for circulating deep fake content. Indeed, the more salacious, the better.”
It’s a menace that has quietly sparked a gold rush in the biometrics industry to begin developing “de-facializing” technologies to thwart the already lucrative deep fake technology business.
According to intelligence and military officials, the inability to biometrically de-facialize deep fakes is rapidly becoming such a dangerous concern that Department of Defense (DOD) and Intelligence Community (IC) components are beginning to work vigorously on de-facializing biometric countermeasures. For example, the Office of the Director of National Intelligence’s (ODNI) Intelligence Advanced Research Projects Activity (IARPA) has been sponsoring proof-of-concept research programs targeting the development of facial biometric “de-identification” technologies.
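To make concrete what a facial “de-identification” tool does at the simplest level, here is a minimal sketch in Python using the open-source OpenCV library to find faces in an image and pixelate them beyond recognition. It is a generic illustration of the concept only, assuming the opencv-python package; the function name is mine, and this is not the IARPA-sponsored research itself.

```python
# Minimal facial de-identification sketch using OpenCV (a generic
# illustration of the concept, not the IARPA-sponsored research).
import cv2

def deidentify_faces(image_path: str, output_path: str, blocks: int = 10) -> None:
    """Detect faces in an image and pixelate them beyond recognition."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Haar-cascade face detector that ships with the opencv-python package.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
        face = image[y:y + h, x:x + w]
        # Shrink the face region to a few blocks, then blow it back up with
        # nearest-neighbor interpolation to produce a coarse pixelation.
        small = cv2.resize(face, (blocks, blocks))
        image[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                             interpolation=cv2.INTER_NEAREST)

    cv2.imwrite(output_path, image)
```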
Meanwhile, the Pentagon’s Defense Advanced Research Projects Agency (DARPA) has funded a Media Forensics project that’s been tasked to explore and develop technologies to automatically weed out deep fake videos and digital media.
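For a sense of what “automatically weeding out” fakes involves, below is a toy sketch in Python of the most basic approach: a small convolutional network trained to classify individual video frames as real or fake. It assumes the open-source PyTorch library and a labeled dataset of face crops, and is a generic illustration, not DARPA’s Media Forensics tooling.

```python
# Toy frame-level deep fake detector (a generic illustration of automated
# detection, not DARPA's Media Forensics tooling). Assumes a dataset of
# face crops labeled 0.0 = real, 1.0 = fake.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN that maps an RGB frame to a single 'is it fake?' logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> (N, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)  # logit > 0 means "fake"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FrameClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (N, 3, H, W) float tensor; labels: (N,) floats in {0, 1}."""
    optimizer.zero_grad()
    logits = model(frames).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, detectors of this kind hunt for statistical artifacts of synthesis, such as blending seams, inconsistent lighting, or unnatural blinking, and each advance in detection tends to be absorbed into the next generation of fakes.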
The 2017 threatcasting report, The New Dogs of War: The Future of Weaponized Artificial Intelligence, by the Army Cyber Institute at West Point and Arizona State University’s Threatcasting Lab, also warned that, “The clearest and most apparent threat that emerged from the workshop raw data was a unique way in which AI could be weaponized. Surveillance and coercion are not new threats, but when conducted with the speed, power, and reach of AI, the danger is newly amplified.”
Continuing, the report stated, “The goal of the adversary would depend on the nature of the threat actor (criminal, terrorist, state sponsored). Regardless, the weaponization of AI to surveil and coerce individuals is a powerful emerging threat. As a developing platform for psychological, physical, or systemic infiltration, AI is quickly becoming the realization of a modern dog of war, unleashing the worst of humanity and our technology onto ourselves,” adding, “Although clearly more research is needed, it is imperative to take immediate pragmatic steps to lessen the destabilizing impacts of nefarious AI actors. If we are better able to understand and articulate possible threats and their impacts to the American population, economy, and livelihood, then we can begin to guard against them while crafting a counter-narrative.”
The problem is sobering
“The West is ill-prepared for the wave of ‘deep fakes’ that AI could unleash,” and, “As long as tech research and counter-disinformation efforts run on parallel, disconnected tracks, little progress will be made in getting ahead of [these] emerging threats,” recently wrote the Brookings Institution’s Chris Meserole, Fellow, Foreign Policy, Center for Middle East Policy, and Alina Polyakova, David M. Rubenstein Fellow, Foreign Policy, Center on the United States and Europe.
“Thanks to bigger data, better algorithms, and custom hardware, in the coming years, individuals around the world will increasingly have access to cutting-edge artificial intelligence,” which when combined with “deep learning and generative adversarial networks, [will make] it possible to doctor images and video so well that it’s difficult to distinguish manipulated files from authentic ones. And thanks to apps like FakeApp and Lyrebird, these so-called ‘deep fakes’ can now be produced by anyone with a computer or smartphone.”
In October, in their paper, Disinformation on Steroids: The Threat of Deep Fakes, published by the Council on Foreign Relations, Chesney and Citron worrisomely presaged that, “Disinformation and distrust online are set to take a turn for the worse. Rapid advances in deep learning algorithms to synthesize video and audio content have made possible the production of ‘deep fakes’ — highly realistic and difficult-to-detect depictions of real people doing or saying things they never said or did. As this technology spreads, the ability to produce bogus, yet credible video and audio content will come within the reach of an ever larger array of governments, nonstate actors, and individuals,” and, “as a result, the ability to advance lies using hyperrealistic, fake evidence is poised for a great leap forward.”
Chesney and Citron forewarned that, “The array of potential harms that deep fakes could entail is stunning,” explaining that, “A well-timed and thoughtfully scripted deep fake or series of deep fakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society. The opportunities for the sabotage of rivals are legion — for example, sinking a trade deal by slipping to a foreign leader a deep fake purporting to reveal the insulting true beliefs or intentions of US officials.”
“Consider these terrifying possibilities,” they posited:
• Fake videos could feature public officials taking bribes, uttering racial epithets, or engaging in adultery;
• Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not;
• Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both;
• Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort;
• A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets;
• A fake audio clip might “reveal” criminal behavior by a candidate on the eve of an election;
• A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or even motivating a wave of violence;
• False audio might convincingly depict US officials privately “admitting” a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative; and,
• A fake video might depict emergency officials “announcing” an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse.
“Note that these examples all emphasize how a well-executed and well-timed deep fake might generate significant harm in a particular instance, whether the damage is to physical property and life in the wake of social unrest or panic or to the integrity of an election,” they wrote, ominously noting that, “The threat posed by deep fakes … has a long-term, systemic dimension.”
“The looming era of deep fakes will be different, however, because the capacity to create hyperrealistic, difficult-to-debunk fake video and audio content will spread far and wide,” they warn, pointing out that, “Advances in machine learning are driving this change. Most notably, academic researchers have developed generative adversarial networks that pit algorithms against one another to create synthetic data (i.e., the fake) that is nearly identical to its training data (i.e., real audio or video). Similar work is likely taking place in various classified settings, but the technology is developing at least partially in full public view with the involvement of commercial providers. Some degree of credible fakery is already within the reach of leading intelligence agencies, but in the coming age of deep fakes, anyone will be able to play the game at a dangerously high level. In such an environment, it would take little sophistication and resources to produce havoc. Not long from now, robust tools of this kind and for-hire services to implement them will be cheaply available to anyone.”
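The adversarial setup Chesney and Citron describe can be sketched in a few dozen lines. Below is a toy generative adversarial network in Python using the open-source PyTorch library, in which a generator learns to produce samples from a simple one-dimensional “training data” distribution while a discriminator learns to tell them apart. This is a minimal illustration of the technique, not any production deep fake system.

```python
# Minimal generative adversarial network (GAN) in PyTorch, illustrating the
# "pit two algorithms against each other" idea on toy 1-D data. Real deep
# fake systems apply the same contest to images and audio at far larger scale.
import torch
import torch.nn as nn

NOISE_DIM = 8
BATCH = 64

# Generator: random noise in -> a fake sample out.
generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: a sample in -> logit "is it real?" out.
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # "Real" training data: samples from a normal distribution N(3.0, 0.5).
    real = torch.randn(BATCH, 1) * 0.5 + 3.0
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator step: learn to score real as 1 and fake as 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to make the discriminator score fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The key dynamic is visible even at this scale: as the discriminator improves at spotting fakes, the generator is forced to produce samples ever closer to the real data, which is precisely the property that makes detection an arms race.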
…
“By blurring the line between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality,” and “could become a potent tool for hostile powers seeking to spread misinformation. The first step to help prepare the Intelligence Community, and the nation, to respond effectively, is to understand all we can about this emerging technology, and what steps we can take to protect ourselves,” Schiff said in a statement. “It’s my hope that the DNI will quickly work to get this information to Congress to ensure that we are able to make informed public policy decisions.”
“We need to know what countries have used it against US interests, what the US government is doing to address this national security threat, and what more the Intelligence Community needs to effectively counter the threat,” said Murphy, also a member of the House Committee on Armed Services.
Curbelo added that, “Deep fakes have the potential to disrupt every facet of our society and trigger dangerous international and domestic consequences. With implications for national security, human rights, and public safety, the technological capabilities to produce this kind of propaganda targeting the United States and Americans around the world is unprecedented.”
“Dindu nuffin” is a common phrase used online, oftentimes by persons with racist inclinations, to describe the attitude of many American blacks towards committing crimes and their assertions of innocence in the face of manifest evidence against them. While often used with malicious intentions, the phrase reflects a reality that many consider “taboo” in American culture, which is that such persons who commit crimes consistently do deny them in spite of clear evidence against them, such as video, which has long been considered a reliable source of information.
Now that “deep fake” technology is growing, what was a cry too often made by the easily proven guilty may throw the legal system itself into jeopardy, because if one can manufacture evidence of a crime that never happened, nobody is safe: there no longer exists an objective standard by which to accurately judge the events of a given time and reach a sound decision. This is not to say that the technology will allow for the exoneration of the guilty, but that innocent people of all backgrounds who find themselves in a legal situation face a much greater chance of being falsely convicted. This does not even explore the geopolitical consequences, such as starting major or regional wars in the name of profit using the same kinds of manufactured evidence.
The future is not 2019, but 1984.