Anarchy on the Internet: Can AI be judge and jury for online content?


Artificial intelligence is here to save the day, we are told, tackling or nearly ready to tackle seemingly intractable problems.

Microsoft plans to cure cancer using AI. Vehicle congestion and its attendant pollution could soon become things of the past once fleets of self-driving cars take to the streets. And now Google, via its subsidiary Jigsaw, is going to use AI to clean up the internet and ensure humans interact politely, diminishing the hazard of dangerous conversation with an authoritarian-by-design machine learning tool that expels the trolls and terrorists who appear online.

It’s this latter use of AI that is the most thought-provoking: an algorithm that becomes the judge of morally acceptable behavior, an algorithm developed by humans, a corporation, that demonstrates universal ethics by flagging or removing the language of wrongdoers. Does this not seem a little puritanical, a company applying what seems like a divine right in condemning certain internet users for digital heresy?

Powerful tech juggernauts right now are up in arms against a sea of public sentiment telling them they are oppressive, authoritarian, conspiratorial. Earlier this month, Facebook Chief Executive Mark Zuckerberg was accused of “abusing his power” after the social media platform – which has become a major news distributor even if Zuckerberg denies this – decided the public should not see the iconic Pulitzer prize-winning photograph of a naked, highly distressed nine-year-old girl running away from a napalm attack during the Vietnam war. The photograph is far from being gratuitous… lest we forget.

As this report states, almost half of U.S. citizens now get their news on Facebook, so public opinion could be shaped by Facebook’s news-choosing algorithms and by how it decides to censor content. The public ought to be concerned about how tech companies leverage their might.

The difference between right and wrong

Twitter Inc. has also come under fire as a major free-speech oppressor – for those of a liberal disposition, anyway. It has also been seen as a threat to society and a safe haven for bullies and Tweet-frenzied psychopaths. The fact is, it’s probably a bit of both: sometimes oppressive regarding free speech, and also a good place to air hatred with impunity. Twitter, now also calling itself a major news distributor, was involved in a fiery free-speech debate after banning Milo Yiannopoulos, the self-confessed internet super-villain. Not long after this ban, though, it came to light that some of Twitter’s execs had fought tooth and nail over the years for the platform’s “radical” freedom. There was apparently mutiny on Twitter’s Bounty as it sailed into the future, a schism in the ranks over this touchy subject of free speech.

We may be within our rights to ask now how much radical freedom we have left, because the online walls of our house of free speech suddenly seem to be closing in. This year it emerged that the U.S. National Security Agency (NSA) had built an AI called SKYNET to thwart the actions of terrorists; the results were reportedly disastrous, its accuracy “completely bullshit,” according to the Human Rights Data Analysis Group.

Microsoft, Google and Twitter this year also vowed to crack down on online incitement of violence that could lead to acts of terrorism or might be perceived as a threat to a person or group. “The Internet has become the primary medium for sharing ideas and communicating with one another, and the events of the past few months are a strong reminder that the Internet can be used for the worst reasons imaginable,” said Microsoft about the disturbing but obvious fact that free speech, online or offline, can lead to violence and hatred.

Who is the rightful moral judge?

Microsoft wants to be, among other tech companies, the arbiter of good and bad speech. It’s hardly conceivable that grievous acts, such as bombing and killing innocents in the name of an extremist religious group, could universally be seen as anything but bad, but how will Microsoft et al. fare when revolutionary groups with a just cause say something inflammatory that could lead to large groups of people massing at places such as banks or government offices? Is there a place for anarchy on the Internet? Can hatred, or even violence, be justified? In an attempt to “rein in content that officials say incites violence,” Facebook met with the Israeli government last week to talk about censorship of the social network. One view doesn’t suit all, though. Depending on which side of the fence you stand on, you may have different ideas concerning good and bad content. Digital rights groups called this proposal to more stringently monitor content a “slippery slope to censorship.”

Algorithms are essentially authoritarian, and they don’t immediately justify their actions. In England in the year 1381, Wat Tyler led a Peasants’ Revolt. This “social upheaval” led to an end of many oppressive measures against the poor of England and was the catalyst to the end of feudalism. It was bloody and violent, starting with private chats, public sermons and later the forming of rebel groups and demonstrations, some apparently “monstrous.” But it led to more prosperity for the poor, the freedom of former serfs and a general easing of abject inequality in England.

We might ask, as modernity ensues and we are propelled into a future in which algorithms tell us what is good and bad, whether any modern iteration of a Peasants’ Revolt would get off the ground, or whether those who embrace it would swiftly find themselves marked as “dangers to society.” In employing algorithms to do our censoring, could we be denying ourselves a needed dialectic for future social progress? By not employing these algorithms, could we be neglecting a technology that could save lives or spare someone from debilitating torment?

Another brick in the Wall

Google’s Jigsaw is a subsidiary of the company that promotes anti-extremism and espouses world peace and safety from injustice. Jigsaw just released a tool called Conversation AI, which through machine learning will become very good at finding extremism – or whatever it deems harassment, or toxic or insulting language – online. In a test, this included the words “Donald Trump is a moron”: a removable comment, according to the parameters of the algorithm. At the same time, something far more disturbing was deemed relatively harmless.

According to an article in Wired, Jigsaw says this heat-seeking harassment technology is better than anything that’s already out there and has “more than 92 percent certainty and a 10 percent false-positive rate.”
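In practice, a tool like this reduces each comment to a numeric score and flags anything above a cutoff. As a rough illustration (the scorer below is a toy keyword heuristic, not Jigsaw’s actual model, and the threshold is a made-up example), the moderation step looks like a score-and-threshold decision, where lowering the threshold catches more abuse but drives up the false-positive rate:

```python
# Hypothetical sketch of score-and-threshold moderation.
# The scorer is a toy keyword heuristic, NOT Conversation AI's model;
# a real system would call a trained classifier returning a 0-100 score.

TOXIC_TERMS = {"moron", "idiot", "scum"}

def toxicity_score(comment: str) -> int:
    """Toy scorer: count flagged terms in the comment, scaled to 0-100."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    hits = len(words & TOXIC_TERMS)
    return min(100, hits * 50)

def should_flag(comment: str, threshold: int = 50) -> bool:
    """Flag for removal/review when the score meets the threshold.

    A lower threshold catches more abuse but raises the false-positive
    rate -- the trade-off behind figures like "92 percent certainty,
    10 percent false positives."
    """
    return toxicity_score(comment) >= threshold

if __name__ == "__main__":
    for text in ["Donald Trump is a moron.", "I disagree with this policy."]:
        print(text, "->", "FLAG" if should_flag(text) else "ok")
```

The point of the sketch is that the politically fraught judgment – what counts as toxic – is baked into the scorer and the threshold, both chosen by the company deploying it.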

“I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” Jigsaw founder and President Jared Cohen told Wired.

Jigsaw, a Google-created New York think tank, seems to believe it can “fix” the internet, or as one Google exec calls it, the “problems that bedevil humanity involving information.” Some of these challenges that Jigsaw is focused on also relate to “money laundering, organized crime, police brutality, human trafficking, and terrorism,” according to the article.

Wish you weren’t here

The other matter, the more controversial matter that falls within Jigsaw’s remit, is finding and censoring so-called internet trolls. This cause célèbre has become a fractious debate with no resolution in sight. Who invents the rules, who decides on the parameters of online propriety? Can we rely on algorithms to censor the internet? This is what some critics are asking right now. “After all, you don’t develop something like Conversation AI because you’re hoping to keep humans around in the long run,” said one commentator.

The same article alludes to the fact that political correctness can become its own form of oppression. Fighting Wrong and Doing Right is the stuff of Hollywood films. In reality, there is nuance, and current algorithms are not so good at reading nuance. Neither can one group of humans be trusted to dictate what is good and bad speech. That’s why we embrace open debate. Discussing the conundrum of creating the morally responsible robot, Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, tells us ethics are not consistent, and so programming ethics is virtually impossible. “The whole system may crash when it encounters paradoxes or unresolvable conflicts,” says Lin.

On (Authoritarian) Artificial Intelligence – AII, perhaps – diginomica poses the question: “All algorithms are political. All are designed to produce a set of predefined outcomes. But who defines those outcomes, and why? Computer algorithms always have socio-political and ethical dimensions. They reflect the values and beliefs of the societies or organisations in which they are written, not to mention the interests of corporate shareholders.”

Can Conversation AI really tell us what’s best for all? By, perhaps rightly, taking out one dangerous troll, could it also, wrongly, be denying us a needed voice?

Social contracts

Not surprisingly, Breitbart, the stomping ground of Milo Yiannopoulos, called this technology dangerous, saying that it could be used by “tyrannical regimes overseas to detect populist uprisings within its online borders.” The first world isn’t perfect either, and populist uprisings such as Occupy Wall Street, or something along the lines of Haters against Hillary, or the promulgators of The Terrible Trump, need a platform, and at times it won’t be pleasant. If we remove the unpleasantness, the sterility could be unpleasant in its own way.

Then again, there is a case for its use, just as we police our physical world to protect people from danger. Do we take the Hobbesian view that humans in the state of nature are essentially selfish, apt to brutality, and require stringent policing by an oppressive authority figure, or the Rousseauian view that people are generally good creatures when living in an equitable climate? To censor or not to censor?

The former philosopher would likely be a fan of Jigsaw; the latter would likely not. The Hobbesian view that our state of nature is brutish and selfish is reflected on almost any given comment thread on a site such as YouTube. But the question is, if Hobbes was right, then isn’t the power structure, and the police, also part of this state of nature, moving towards its own selfish ends? That’s why we have public opinion, and forums: to analyze power structures and occasionally condemn or even transform them. This is why we admire the Internet, and why we should be worried when a power structure wants to “clean it up.”

It’s important that this debate concerning online free speech is ongoing. We should think twice before supporting an authority that wants carte blanche oversight of online morality. Our move into the highly connected world is still in its baby steps, and we should treat ourselves as infants in this new world.

We salute cancer-curing and pollution-reducing algorithms, but perhaps morality and machine learning should not be in the same equation.

Photo credit: martinak15 via Flickr