Fake tweets endanger integrity of U.S. presidential election
(Photo caption: This combination of pictures, created on November 7, 2016, shows Republican presidential candidate Donald Trump in Cleveland, Ohio, on October 22, 2016, and Democratic presidential nominee Hillary Clinton in Manchester, New Hampshire, on November 6, 2016.)
U.S. computer scientists say that the presence of software robots can negatively impact online democratic political discussion and endanger the integrity of the presidential election in the country.
According to a recent paper published in the journal First Monday, computer scientists using bot-detection algorithms found that a surprisingly high share of U.S. political discussion on Twitter was generated by software robots, or social bots, with the express purpose of distorting the online conversation about the election.
“Many recent papers have demonstrated how people’s opinions are swayed by what they read online, (and) bots can contribute to that effect,” Emilio Ferrara, the author of the study, wrote to Xinhua.
A “bot,” short for software robot, is a type of automated computer software. A social bot is programmed to control a social media account. By tweeting, retweeting, sharing contents, making comments, “liking” other users, and even engaging in conversations, social bots can appear as real people.
While many bots are designed to provide service to the online community, many others, like the ones found here, intentionally pose as humans, keeping their artificial identity undisclosed.
From Sept. 16 to Oct. 21, researchers collected 20 million election-related tweets by 2.8 million users. They singled out the top 50,000 accounts, which were responsible for about 60 percent of all tweets.
Using an open-source software tool called “Bot Or Not,” which Ferrara, a research assistant professor at the University of Southern California’s Information Sciences Institute, and his colleagues had developed earlier, they classified 7,183 of those accounts as bots and 2,654 as uncertain, with the rest judged to be humans.
They further estimated that about 19 percent of all election-related tweets were from bots, which accounted for around 15 percent of the user population.
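The figures above can be sanity-checked with simple arithmetic. The sketch below is illustrative only, not the researchers’ code; the raw counts come from this article, and the derived shares follow directly from them.

```python
# Back-of-envelope check of the figures cited in the article.
total_tweets = 20_000_000   # election-related tweets collected
top_accounts = 50_000       # most active accounts, examined by the researchers
bot_accounts = 7_183        # of those, classified as bots by "Bot Or Not"

# The top 50,000 accounts produced about 60 percent of all tweets.
tweets_from_top = 0.60 * total_tweets

# The bot share among the examined accounts:
bot_share_of_top = bot_accounts / top_accounts

print(f"Tweets from top accounts: {tweets_from_top:,.0f}")   # 12,000,000
print(f"Bots among top accounts:  {bot_share_of_top:.1%}")   # 14.4%
```

Note that the article’s 19-percent and 15-percent figures apply to the full population of tweets and users, which the researchers estimated separately; they are not derivable from the top-account counts alone.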
Social media has become increasingly important to the presidential contenders as a platform for reaching out to voters.
Scientists found that Twitter accounts identified as pro-Donald Trump bots were mainly tweeting positive messages, boosting the Republican candidate’s popularity, while only about half of the pro-Hillary Clinton bots spread positive messages, with the other half criticizing the Democratic candidate.
In the past, social bots have not only unintentionally spread unverified information but have also been deliberately used to inject false news into social media to smear political candidates. In both cases, Twitter’s ripple effect makes the damage caused by fake tweets hard to contain.
High-frequency stock-trading bots often have access to social media but lack fact-checking capabilities, so fake tweets, whether injected or amplified by social bots, can trigger crashes in the stock market. In April 2013, the hacked Twitter account of the Associated Press posted a false rumor about a terrorist attack on the White House, causing an immediate crash of the stock market.
Fake Twitter profiles are often created by stealing online pictures, with biographical information cloned from existing accounts.
“There is no clear law in the United States” against such behavior, Ferrara told Xinhua, though “bots are clearly against the terms of service of Twitter.”
Even though it would be hard for Twitter to eliminate all such accounts, the computer scientist said he believes the company should try to do better.
While the scientists have revealed the existence of fake tweets, they have not been able to determine who created the social bots.
In fact, tools for constructing social bots at various levels of sophistication are readily available online.
Political parties, local, national and foreign governments and “even single individuals with adequate resources could obtain the operational capabilities and technical tools to deploy armies of social bots and affect the directions of online political conversation,” wrote the scientists.
However, everyone deploying bots of their own would not improve the social media environment, Ferrara said, adding that social bots “could be used, but should be regulated.”