Bot-Omation

On sites like Twitter, Instagram, and Facebook, followers and friends can be seen as a form of social currency. The allure of feeling popular makes it tempting to accept “friend” requests from strangers without bothering to read their profiles. However, the number of automated accounts, or “bots,” has risen dramatically in just the past few years. On Twitter in 2011, for example, a quick skim through one’s list of followers made it easy to spot which accounts were not run by humans: the main giveaways at the time were handles consisting of random, nonsensical sequences of characters and the complete lack of a profile picture. Since then, bots have evolved not only to possess greater online capabilities but, more strikingly, to deceive humans by assuming legitimate identities.

While several types of bots exist on the Internet, this article will focus on the social media bots that populate sites like Twitter, Facebook, and Instagram. A social media bot is an account programmed to perform functions automatically, such as tracking hashtags, retweeting new posts immediately, posting original content based on algorithms, and even conversing with other users. To be clear, bots themselves are neither inherently good nor bad. There are plenty of “good” bots on Twitter that make scheduled posts consisting of self-care reminders or corny jokes, and the “bad” ones that exist alongside them are often merely irritating. The bottom line is that bots are neutral, and accounts like these are almost always transparent about their bot nature, so no deception is involved.
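To make this concrete, here is a minimal sketch of what such an account might look like under the hood, written in Python against the Tweepy library, a popular wrapper for the Twitter API. The credentials, hashtag, and message below are placeholders, and scheduling and error handling are omitted; consider it an illustration rather than a recipe.

```python
# A minimal Twitterbot sketch using Tweepy (v3-era API).
# All credentials below are placeholders.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Track a hashtag and retweet the newest matching posts.
for status in tweepy.Cursor(api.search, q="#examplehashtag").items(10):
    api.retweet(status.id)

# Post original content: a one-off "good bot" style reminder.
api.update_status("Reminder: drink some water and stretch!")
```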

The real danger lies not in what bots can do, but in what they can be used for. One of the most common uses for social media bots is to inflate an individual’s popularity or influence, and a black market for fake followers serves exactly that demand: according to one source, 5,000 Twitter followers can cost as little as $77, which works out to about 1.5 cents per follower. In the 2016 election campaigns, both Donald Trump and Hillary Clinton were criticized for the number of fake accounts among their Twitter followers. Even more unsettling is the fact that anyone can buy fake followers for any account, not just their own. This means that even someone whose follower base is 81% bots may not have acquired those followers intentionally; flooding a person’s follower list with bots to delegitimize their reputation on social media has become a sabotage tactic in itself.

In addition to distorting popularity levels, bots have been deployed maliciously, fulfilling political agendas on behalf of governments and other self-interested organizations. Samuel Woolley, Director of Research at the Computational Propaganda Project at the Oxford Internet Institute, describes “more sophisticated propaganda accounts that can engage with real users”; his team has studied the role of bots in the 2016 US presidential election as well as the Brexit vote. He notes that “during the US Presidential election we found the top 100 most automated accounts, those tweeting using election specific hashtags more than 500 times a day, account[ed] for over 500,000 tweets in the week leading up to the election,” and similarly that “during Brexit one percent of accounts tweeting about the referendum accounted for nearly a third of all tweets about the topic.” According to Woolley, the bots in these cases were programmed to create “strategic, political traffic” and follow a predetermined agenda.

Larger, more advanced networks of bots are capable of propagating fake news and manipulating public opinion. The phenomenon of “going viral” has become increasingly automated on Twitter, where bots beat humans at achieving the timing and volume of support needed to make a topic “trend.” An alarming number of bots are programmed to retweet posts without first verifying their legitimacy, and even worse, a multiplier effect takes over when human users react to the popularized tweets and unwittingly contribute to the spread of fake news. The consequences of this effect include stock market crashes; ironically, there are investor bots programmed to web-scrape, data-mine, and track news on social media in order to predict market trends algorithmically. More broadly speaking, real human lives are easily affected by the publicized influence of individuals who use bots as weapons in the age of technological warfare.
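To see why that multiplier matters, consider a toy cascade model: each wave of retweets exposes the retweeters’ followers, a small fraction of whom retweet in turn. Every parameter below is an invented assumption for illustration, not a measured value.

```python
def expected_human_retweets(bot_retweets, followers_per_account=200,
                            human_retweet_rate=0.005, waves=3):
    """Toy model of the multiplier effect: each wave of retweets
    exposes followers, a small fraction of whom retweet in turn.
    All parameters are illustrative assumptions."""
    exposed = bot_retweets * followers_per_account
    total = 0.0
    for _ in range(waves):
        new_retweets = exposed * human_retweet_rate
        total += new_retweets
        exposed = new_retweets * followers_per_account
    return round(total)

# Under these (arbitrary) numbers, 1,000 bot retweets seed about
# 3,000 expected human retweets across three waves.
print(expected_human_retweets(1000))
```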

So who exactly are the humans behind social media bots? The answer is unknown. Bot creation has become relatively simple, so anyone with a computer can make one. GitHub.com, a website that hosts repositories of source code, is cited as one of the most widely used resources among bot creators. Programmers have already published the code for Twitterbots with various capabilities, so all one needs to do is follow a set of directions to run it. This accessibility adds a new dimension of uncertainty to the already murky world of bots, and their anonymous nature conveniently protects their human owners from being exposed.

Fortunately, cybersecurity teams within Twitter, Facebook, and Instagram design and implement algorithms to spot bots, and the networks they belong to, with increasing accuracy. Studies of social media bot behavior have shown that bots often act in coordination, making it possible to identify entire bot communities: for example, coordinated bots tend to post similar content within a narrow time frame, and they often follow other bots instead of humans to avoid unwanted attention. Academic researchers have also been actively creating and testing new ways to determine whether an account is human or bot. There are even free websites, such as Bot or Not? and StatusPeople, that use algorithms to detect fake accounts.
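One of those coordination signals, identical content posted within a tight time window, is simple enough to sketch. The toy function below flags accounts that publish the same text within minutes of each other; real detectors combine many such features, so this is an illustration of the idea, not a working bot hunter.

```python
from collections import defaultdict
from datetime import timedelta

def flag_coordinated_accounts(posts, window=timedelta(minutes=10)):
    """posts: iterable of (account, text, timestamp) tuples.
    Flags accounts that published identical text within `window`
    of one another, one coordination signal among many."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((ts, account))

    suspicious = set()
    for entries in by_text.values():
        entries.sort()  # chronological order
        for (t1, a1), (t2, a2) in zip(entries, entries[1:]):
            if a1 != a2 and t2 - t1 <= window:
                suspicious.update((a1, a2))
    return suspicious
```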

When existing bots are removed from social media platforms, they often simply adapt and reappear in more sophisticated forms, much as bacteria evolve to resist the antibiotics that once killed them. Bots now not only assume realistic profiles but also better imitate human behavior in the timing and frequency of their posts: they retweet and post according to the circadian cycle and even seem to exhibit unique personalities. It is their tactic of targeting and interacting with human users, however, that seems to transform science fiction into reality.
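What “posting according to the circadian cycle” might mean in code is easy to imagine; the sketch below gates a bot’s activity to human waking hours with a bit of random jitter. The hours and probability are arbitrary assumptions, not taken from any observed bot.

```python
import random
from datetime import datetime, time

def should_post_now(now=None, wake=time(8, 0), sleep=time(23, 0)):
    """Crude 'circadian' gate: act only during typical waking hours,
    with random jitter so the activity is never perfectly regular.
    Hours and probability are arbitrary illustrative choices."""
    now = now or datetime.now()
    awake = wake <= now.time() <= sleep
    return awake and random.random() < 0.3
```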

Sophisticated bots can communicate with human users by tweeting at them or messaging them directly. These tactics promote their underlying agenda to a live audience and, more importantly, help them gain real followers so as to appear legitimate. While bots were once programmed to follow only other bots, the more evolved ones maintain a mix of human and bot followers, making them harder for algorithms to detect. One study found that humans themselves come out on top when it comes to distinguishing between real and automated accounts, with an accuracy rate of 90% or higher. Indeed, crowdsourcing has been proposed as a way to identify human-like bots, but its obvious drawbacks, such as slowness and cost, have kept it from being widely used.
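At its core, crowdsourced detection just aggregates many individual judgments. The sketch below uses a bare majority vote for illustration; a real system would presumably weight annotators by reliability, which is part of why the approach gets slow and expensive.

```python
from collections import Counter

def crowd_verdict(votes):
    """votes: list of 'human' / 'bot' labels from annotators.
    Bare majority vote; ties come back as 'unsure'."""
    counts = Counter(votes)
    if counts["human"] > counts["bot"]:
        return "human"
    if counts["bot"] > counts["human"]:
        return "bot"
    return "unsure"

print(crowd_verdict(["bot", "bot", "human"]))  # -> bot
```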

A general strategy that both Twitter and Facebook use to regulate bot activity is to worsen the risk-reward ratio of owning and controlling bots on their platforms. By making the consequences of illegal activity more severe and the potential profit less lucrative, they disincentivize the creation and maintenance of malicious bots. This tactic has proven effective when executed well; Google achieved some success with it in its anti-spam efforts in 2014 and 2015. However, a significant difference between spam and bots is that while there is no such thing as “good” spam, there is a substantial population of good and useful bots, even if they are outnumbered by malicious ones. It is just as important to avoid discouraging the growth of these good bots as it is to suppress the malicious ones.

According to Woolley, eliminating all bots is neither reasonable nor feasible. Instead, he notes, “policy makers need to make informed decisions – by talking to those that build and study algorithmic systems and new technology – about how to respond to things like automated disinformation and hate speech.” He also highlights the importance of transparency: automated accounts need to be clearly labeled as such.

Despite the steady growth of the bot population, humans still make up the majority of users on social media platforms, and new people continue to join sites like Twitter. Yet with bots now generating around 50% of social media traffic, those of us who manage personal accounts must learn to coexist with them. Our increasing reliance on social media for news, self-expression, and political activism makes a secure and healthy online ecosystem all the more essential. To interact effectively with this new internet demographic, we must first understand the human incentives behind its creation. As the bots evolve, it is only fitting that we humans heighten our consciousness in order to “follow” them.

Print, Spring 2017 · Gladys Teng