Can the First Amendment Accommodate an AI Solution?

In the epic 1979 film Apocalypse Now, an audio transmission from the Colonel Kurtz character sheds light on the chaos of the Vietnam mess in which he found himself:

“I watched a snail crawl along the edge of a straight razor. That’s my dream. That’s my nightmare. Crawling, slithering, along the edge of a straight… razor… and surviving.”

A thought-provoking nugget emerges when viewing Kurtz’s insanity: how does one survive when facing a true dilemma? When up against a serious threat, how does one endure if there are no easy answers for how to do so?

While in nowhere near as precarious a position as the fictional Kurtz, we in the U.S. are wrestling with constitutional quandaries that have emerged after 9/11, Orlando, San Bernardino, New York City, New Orleans, and Las Vegas. The legal conundrum that challenges us is this: in this era of artificial intelligence’s (AI) advanced anti-terrorism capabilities, the cutting-edge powers AI brings to U.S. security agencies may bog down investigations and the deployment of assets because those tools misinterpret speech the First Amendment allows.

Now, almost everyone understands that falsely shouting “fire” in a crowded movie theater, when there is no imminent threat to the assembled patrons, likely falls outside the First Amendment’s protection of free speech because of the harm it may create. Speech is also not protected if it incites a mob to violence, as some believe Ray Epps did on January 6, 2021, when he exhorted protestors: “we’re going into the Capitol!”

One problem that has arisen is that, in a social media-based world, one may not actually be present to be harmed or incited to participate in violent acts. If consumers of speech are not in fact in a public space where they face an impending menace or can be provoked to commit violence immediately, how should those communications be interpreted?

A further complication surrounds the question of whether AI can distinguish incitement to immediate, intended malice so that it flags only the most serious threats. Can its programming perceive or discern context when someone makes a permitted political statement using innocuous, but perhaps poorly chosen, analogies in speech or writing? For example: “… this… would hand…. a dangerous new tool it could use to… target political opponents and punish disfavored groups.”
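To make the concern concrete, here is a minimal, purely illustrative sketch, assuming a hypothetical keyword-based flagger rather than any agency’s actual system, of how software that lacks context treats figurative political rhetoric like the example above the same as a literal threat:

```python
# Illustrative toy example only -- not any real screening system.
# A naive keyword matcher cannot tell figurative political rhetoric
# from a literal, imminent threat; both trip the same flags.

FLAG_TERMS = {"target", "punish", "attack", "weapon"}

def naive_flag(text: str) -> bool:
    """Flag any text containing a watch-list term, regardless of context."""
    words = {w.strip(".,!?\"'…").lower() for w in text.split()}
    return bool(words & FLAG_TERMS)

protected_speech = ("This bill would hand the agency a dangerous new tool "
                    "it could use to target political opponents and punish "
                    "disfavored groups.")
literal_threat = "We will attack the rally tonight with a weapon."

print(naive_flag(protected_speech))  # True -- figurative political speech is flagged anyway
print(naive_flag(literal_threat))    # True -- indistinguishable to the matcher
```

Real screening systems are far more sophisticated than this toy matcher, but the underlying question stands: without reliable judgment about context, automated flagging sweeps protected speech into the same queue as genuine threats.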
