Artificially Speaking: The Intersection of Free Speech and AI
By Khoury Johnson
A few days or weeks into the new year, the classic — sometimes grating — songs that pervade the holiday season often remain stuck in your head. Try this one: “The best Christmas present in the world is a blessing / I’ve always been there for the rest of our lives / A hundred and a half hour ago / I’m glad to meet you.”
Although it feels familiar, this gibberish is not a classic holiday tune at all. In fact, it was written by a neural network trained by University of Toronto researchers in August 2019 to write an original Christmas song.
This achievement, however trivial or bizarre, underscores the evolving nature of the discourse around Free Speech and artificial intelligence (AI), which asks whether the First Amendment “is (or should be) so broad that it protects human and nonhuman speakers alike,” as John Frank Weaver, an AI-focused attorney, wrote in a touchstone 2018 Slate article.
In dissecting this question, Weaver enumerates four possible paths for governing the intersection of Free Speech and AI. The first stipulates that the government can restrict any AI speech that would not be protected under the First Amendment for human speakers.
The second posits that AI speech is the product of human-generated code and is therefore just another form of human speech. Problems arise, however, when artificial intelligence becomes sophisticated enough to produce its own content: Unlike the innocuous Christmas song, more insidious cases have played out in recent years, including Tay, an AI Twitter account Microsoft created in 2016 that began tweeting racist and sexist remarks within a day of launching.
The third camp suggests protecting AI speech only when it reflects the speech of its human programmer; otherwise, AI speech should not be constitutionally safeguarded.
The fourth model, which Weaver backs, argues that AI speech should have the same constitutional protections as its human-produced counterpart. Nothing in the text of the First Amendment suggests that Free Speech should be limited to people, Weaver stresses, meaning that constitutional protections should be extended to “AI, robots, and Twitterbots” alike. Ultimately, Weaver contends, the onus rests on people to differentiate more effectively between AI- and human-generated content, not on corporations or governments to curb the proliferation of harmful bot accounts.
Furthermore, shaping all-encompassing policies around the most odious manifestations of AI speech would unduly muzzle more benign forms of artificial intelligence, including robots and programs with strictly artistic or practical ambitions, according to Weaver.
Most dramatically opposed to Weaver are those who cite recent history, especially the well-documented effects AI wrought on the 2016 U.S. presidential election in the form of Russian bots: online accounts that appear human-operated but are actually powered by artificial intelligence.
More than 36,000 Russian-linked bot accounts tweeted about the U.S. election between Sept. 1 and Nov. 15, 2016, generating roughly 288 million impressions, according to a 2018 congressional briefing.
Facebook and Twitter alone have played host to an estimated 100 million bot accounts that engaged in toxic behavior ranging from stoking election discord to attacking survivors of the 2018 Parkland, Fla. shooting to spreading racism online, according to Common Sense Media, a nonprofit focused on the nexus of media, technology, and children.
That figure has proven to be a conservative estimate: Facebook purged approximately 2.2 billion fake accounts from its platform in just the first three months of 2019.
In California, bot-borne fears led to legislation: The governor at the time, Jerry Brown (D), signed the Bolstering Online Transparency Act of 2018 (aptly shorthanded as the B.O.T. Act) into law in September 2018. The law, which went into effect July 1, 2019, made it illegal for a bot to conceal its artificial identity, obviating the need for online users to become more proficient at identifying bot accounts, as Weaver calls for.
The fears at the heart of the B.O.T. Act are grounded in real, well-documented, and disastrous consequences, and the legislation reflects a widespread desire for transparency, honesty, and safety.
But the legislation's gravest sin stems from its overreach. In forcing all bots to out themselves, the law applies too sweeping a mandate and unfairly penalizes scrupulous bots and the humans behind them. AI speech should be protected by the First Amendment, in congruence with the rights and protections afforded to human beings. The Electronic Frontier Foundation (EFF), a nonprofit committed to protecting civil liberties in the digital age, agrees that the law is overly broad.
Not all bots are created with the same objectives; rather, they can serve a broad array of interests, including those neither political nor ill-intentioned. Ideally, governments or businesses would enact regulations tailored to particular species of bot, especially those most likely to manipulate elections. Seen in this light, social media companies' elimination of millions, if not billions, of fake accounts is a necessary measure to eradicate coercive actors online.
But if the B.O.T. Act is any indication, it seems unlikely that all governing bodies will exercise this level of discernment. Assuredly, many will work to pass similar disclosure measures in attempts to stifle AI speech across the board, creating a problematic precedent, as the EFF has argued. In particular, this broad-brush style of legislation could infringe upon the protected right of online users to speak anonymously, a vital aspect of the internet's utility and appeal.
“Courts recognize that protecting anonymous speech, which has long been recognized as ‘a shield from the tyranny of the majority,’ is critical to a functioning democracy and subject laws that infringe on the right to anonymity in ‘core political speech’ to close judicial scrutiny,” the foundation wrote in its statement.
Protecting AI speech is also key to ensuring the consistent application of the First Amendment. The discussion over Free Speech for artificial intelligence closely mirrors the larger conversation about hate speech and whether it should be legally regulated to prevent constitutionally protected incendiary rhetoric from materializing into physical violence.
This debate, of course, centers on who gets to decide what constitutes hateful speech. Depending on the political climate or party in power, such interpretations could oscillate over time. “The First Amendment is among America’s greatest strengths, both in terms of substantive protections offered here and in advertising our values abroad,” Weaver keenly observed in his Slate piece. “Autonomous speech from AI like bots may make the freedom of speech a more difficult value to defend, but if we don’t stick to our values when they are tested, they aren’t values. They’re hobbies.”
Khoury Johnson is project manager and editor of the Free Speech Project. Before joining FSP, Khoury worked as a writer for the Urban Institute. A graduate of Georgetown’s Asian studies master’s program, he earned his bachelor’s in political science (minor in Chinese) from Temple University.