Artificial Intelligence: Moving Society to a Brighter or Darker Future?
With Facebook’s facial-recognition software, Netflix’s personalized recommendation algorithm, and the potential commercialization of self-driving cars on the horizon, artificial intelligence (AI) is rapidly changing life as we know it. Beyond letting us skip the arduous process of manually tagging friends or scavenging for the perfect movie, AI (the ability of a machine to perform cognitive functions such as learning, perceiving, and interacting with the environment) is also making daily life easier, our energy use more environmentally friendly, and our healthcare more effective. For example, AI-driven smart grids can manage energy resources more efficiently, and AI-assisted diagnostic tools can identify symptoms of disease with greater ease. Tech companies, according to a McKinsey report, spent between $20 billion and $30 billion on AI in 2016 alone. In a blog post addressed to 2017 college graduates, Bill Gates named AI as one of the fields in which a person can make the biggest impact on the world.
Although AI is seemingly catapulting people into more advanced, exalted lifestyles, several concerns taint the emphatic enthusiasm for the innovation. One prominent voice is Nick Bostrom, a professor of philosophy at Oxford University and author of Superintelligence: Paths, Dangers, Strategies. In the book, he admonishes the unrestrained development of powerful artificial intelligence. A device with superintelligence, he warns, or the ability to outperform human brains in every cognitive function, could eventually subvert humans’ role in society. Similarly, Tesla and SpaceX CEO Elon Musk believes that artificial intelligence is “the greatest risk we face as a civilization,” and he staunchly supports proactive government regulation to keep companies from deploying AI that furthers their competitive advantage at the expense of human welfare. Additionally, many want to use government regulation to limit the development of autonomous weapons, which would select and kill targets without human intervention. In an open letter signed by science and tech giants such as Stephen Hawking, Elon Musk, and Steve Wozniak, the Future of Life Institute advocated a ban on autonomous weapons, arguing that development by one nation would inevitably trigger a potentially catastrophic arms race.
Less apocalyptically but equally notable, many artificial intelligence devices have perceived, learned from, and further perpetuated racial or gender bias: a Google image recognition program frequently registered the faces of black people as gorillas; a LinkedIn advertising program showed a preference for male profiles in searches; and a Microsoft chatbot began posting anti-Semitic content after only a few hours of interacting with and learning from various Twitter users. Additionally, a program called PredPol, which police departments have used to predict hotspots for future crimes based on current crime and arrest statistics, flagged majority-black and majority-brown neighborhoods at almost twice the rate of white ones. This reveals a feedback loop of over-policing racial minorities that is impossible to ignore: heavier policing produces more recorded arrests, which in turn justify heavier policing, so the software, if left unchecked, could inadvertently reinforce the loop.
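The feedback-loop dynamic can be made concrete with a toy simulation. This is emphatically not PredPol’s actual (proprietary) model; it is a sketch under two labeled assumptions: patrols concentrate superlinearly on predicted hotspots (mimicking “send patrols to the top-ranked areas”), and arrests are recorded only where patrols actually look. Even with identical true crime rates, a small initial disparity in recorded arrests snowballs.

```python
# Toy sketch of a predictive-policing feedback loop (NOT PredPol's real
# algorithm). Assumptions: patrols follow recorded arrests superlinearly
# (hotspot_exponent > 1), and new arrests occur only where police patrol.

def simulate(initial_arrests, true_crime, total_patrols=100,
             hotspot_exponent=2.0, rounds=5):
    """Return each neighborhood's final share of recorded arrests."""
    arrests = list(initial_arrests)
    for _ in range(rounds):
        # Predict next hotspots from recorded arrests, concentrating
        # patrols on the highest-count areas.
        weights = [a ** hotspot_exponent for a in arrests]
        patrols = [total_patrols * w / sum(weights) for w in weights]
        # Recorded arrests depend on where police look, not only on how
        # much crime actually occurs.
        arrests = [p * c for p, c in zip(patrols, true_crime)]
    total = sum(arrests)
    return [a / total for a in arrests]

# Two neighborhoods with IDENTICAL true crime rates; neighborhood 0
# merely starts with slightly more recorded arrests (e.g. a history of
# heavier policing).
shares = simulate(initial_arrests=[12, 10], true_crime=[0.1, 0.1])
print(shares)  # neighborhood 0 ends up with nearly all recorded arrests
```

Under these assumptions the disparity compounds every round, which is the sense in which biased inputs do not merely persist but amplify.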
In a talk at Recode’s Code Conference in 2016, Musk emphasized the need for “benign AI,” or machines that would never maximize goals through immoral means. Similarly, extra checks need to be imposed to remove bias from the data that devices learn from; otherwise, as the examples above show, machines will learn from and further entrench the inequalities that already exist in society. Artificial intelligence systems lack the subjective judgment to recognize and discount these inequalities and vices on their own. Although artificial intelligence is advancing society in almost every industry, we need to make sure that optimization does not come at the cost of public welfare, and that the quality of life for all is elevated.
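One concrete form such an “extra check” can take is a fairness audit of a model’s outputs before deployment. The sketch below computes a demographic-parity gap, the difference in positive-prediction rates across groups; the data, group labels, and helper names are illustrative assumptions, not any vendor’s actual audit procedure.

```python
# Minimal sketch of a pre-deployment bias check: demographic parity.
# All data and names here are hypothetical, for illustration only.

def positive_rate(predictions, groups, group):
    """Fraction of positive predictions given to members of `group`."""
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = model recommends the candidate/ad/etc.,
# grouped by a demographic label.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(gap)  # 0.75 - 0.25 = 0.5, a large gap that should trigger review
```

A check like this does not fix biased training data by itself, but it makes the kind of disparity described above measurable rather than invisible.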