During Mark Zuckerberg’s recent Congressional testimony, the founder and CEO of Facebook ruffled conservative feathers with a suggestion that the world’s largest social network would eventually rely on artificial intelligence to police “hate speech.”
Despite the obvious implications for free expression, perhaps conservatives should rejoice at the thought of an impartial computer determining what is acceptable rather than a Silicon Valley millennial with a preference for political correctness.
That’s precisely the case being made in Tama City, Japan, a suburb of Tokyo where voters are entertaining the idea of an AI-equipped robot serving as mayor. Per the UK’s Express:
“…in an attempt to offer ‘fair and balanced opportunities for everyone’, the AI mayor would analyse petitions put forward to the council, breaking down the pros and cons and statistically dictating what effect they would have.”
And why not? After all, artificial intelligence lacks the prejudices garnered via human experience and can therefore, in theory at least, make perfectly impartial decisions based solely on facts. All of the bluster and rhetoric that haunt the modern political process, and therefore make it unattractive to many, could be rendered obsolete by a purely dispassionate referee.
AI stands to infiltrate nearly every nook and cranny of society in the coming decades (perhaps years), and politics and policy are not immune. In fact, one could easily envision the replacement of judges, CEOs, and countless other decision-making professions with computer-based alternatives.
There’s just one problem: AI-based platforms are designed by humans and have the unfortunate tendency to harbor the biases of their creators.
And much like humans, an AI platform’s bias is primarily the result of its “education.” Be they neural networks or machine- or deep-learning algorithms, these platforms must be trained, a process that requires feeding them substantial amounts of data from which they “learn.” If the data are skewed, the network’s education is necessarily skewed, resulting in a biased understanding of the issues.
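To make that mechanism concrete, here is a minimal, hypothetical sketch. The toy “petition” texts, the labels, and the use of scikit-learn are illustrative assumptions, not a description of any real civic or vendor system: because one neighborhood only ever appears among the rejected examples, the model learns to reject anything that mentions it.

```python
# Illustrative sketch only: invented data showing how skewed training data
# yields a skewed model. Not any real municipal or vendor system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: every petition mentioning "downtown" happens
# to carry the label "reject" -- a sampling artifact, not a fact about downtown.
train_texts = [
    "repair downtown sidewalks",
    "fund downtown streetlights",
    "expand downtown parking",
    "repair suburban sidewalks",
    "fund suburban streetlights",
    "expand suburban parking",
]
train_labels = ["reject", "reject", "reject",
                "approve", "approve", "approve"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)          # bag-of-words features
model = LogisticRegression().fit(X, train_labels)  # "educated" on skewed data

# A new petition that merely mentions "downtown" inherits the skew.
new_petition = vectorizer.transform(["plant downtown trees"])
print(model.predict(new_petition))  # ['reject'] -- driven solely by "downtown"
```

Nothing in the code itself is prejudiced; the bias lives entirely in which examples the system was shown.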
As Google’s head of AI John Giannandrea told a tech conference last year: “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.”
It is precisely this phenomenon that has caused a slew of embarrassing incidents for tech companies: an early version of Google’s Photos app confused black people (and white women) with gorillas, and in China, there were claims that the iPhone X’s facial recognition technology couldn’t tell Asians apart.
In both cases the likely cause was biased data, i.e., the faces the companies used to train their respective AIs were either skewed or insufficient. It wasn’t that the platforms were racist, but rather that their creators simply hadn’t provided them with the data needed to make the correct determinations. Such a scenario could easily occur with data pertaining to political ideas, public opinion, or legal precedent.
Given that Silicon Valley leads AI development, and given that culture’s penchant for progressivism, conservatives have every right to be concerned about the future, particularly when it comes to the potential of AI to supplant human decision making in matters of policy.
After all, not so long ago it was alleged that Facebook employees regularly swept articles considered conservative under the rug; Google’s firing of James Damore, an employee who dared to question the company’s obsession with political correctness, cast an unseemly light on the tech giant’s tolerance for conservative thought; and Twitter’s shadowbanning of right-leaning voices came as little surprise to those who follow CEO Jack Dorsey’s editorial preferences.
Given AI’s rapid adoption across nearly every facet of society, it’s not a matter of if, but rather when, some form of this disruptive technology will be promoted as an impartial solution to partisan matters.
AI-based tech is already being used to predict crime and determine sentencing for the convicted, and while it remains to be seen exactly how AI will infiltrate politics, it’s easy to envision the aforementioned tech giants serving in advisory, or even developmental, roles for an eventual platform.
AI clearly harbors enormous potential for good, but leaders in the field have thus far shown a preference for progressive ideas. And they may be the only ones who truly know what’s going on.
While a dystopian future ruled by liberal robot overlords is paranoia at this point, conservatives would be wise to proceed with caution before conceding political ground to a technology that can be just as flawed as today’s politicians.