Rise of the Social Media Bot: Judgement Day?

 
A piece of software that generates unique content, captions photographs and shares its posts… how can we tell what's real?
Following Donald Trump's victory in the American Presidential election, there has been a great deal of introspection regarding 'fake news' and our supposedly 'post-truth' society. A related trend that received rather less coverage was the growing prevalence of computer-generated social media content. Created by automated publishing tools – or bots – such content is a worrying example of algorithms being tasked with acting as proxy people.
Do You Really Know Who Is Posting?
A social media bot (or socialbot) is an automated piece of software that behaves like a real person. It can generate unique content, and surveys suggest it can fool almost a third of users into accepting its authenticity. It can respond to replies, make recommendations it thinks we will like, and create captions for photos before sharing them with its followers. Realism levels are improving all the time, and legitimate adoption is evident everywhere from help functionality in apps (Slack and RBS) to personal styling tips (Sephora and H&M).
Earlier this year, Mark Zuckerberg announced that the Facebook Messenger Platform would enable companies to develop their own customized social media bots. By turning human language into structured data, these machine-learning bots use each interaction to refine and improve their future output. Current examples of Facebook bots include printing apps and interfaces offering healthcare advice.
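The "human language into structured data" step can be pictured with a deliberately simplified sketch. The intent names and keyword rules below are invented for illustration; real platform bots use trained language models rather than keyword matching, but the shape of the output – free text in, a structured record out – is the same.

```python
import re

# Hypothetical intents and trigger words, invented purely for illustration.
INTENT_KEYWORDS = {
    "order_status": {"order", "delivery", "shipped", "tracking"},
    "styling_tip": {"outfit", "style", "wear", "match"},
    "healthcare": {"symptom", "pain", "appointment"},
}

def extract_intent(message: str) -> dict:
    """Turn a free-text message into structured data: an intent plus the words that matched."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    best_intent, best_hits = "unknown", set()
    for intent, keywords in INTENT_KEYWORDS.items():
        hits = words & keywords
        if len(hits) > len(best_hits):
            best_intent, best_hits = intent, hits
    return {"intent": best_intent, "matched": sorted(best_hits)}
```

Each structured record like this is what lets a bot route the conversation and, over many interactions, gives its owners the data needed to refine future responses.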
The Dark Side
Inevitably, socialbots can also be used for negative purposes. The recent attempt at a mass bot-outing on Facebook shortly before the company’s IPO was intended to suggest the platform wasn’t worth investing in. Bots can distribute spam or bombard accounts with junk, generate fraudulent profiles, and potentially influence public opinion. The Arab Spring saw key hashtags and activists’ Twitter profiles being swamped with meaningless rubbish to drown out genuine messages. It’s been alleged that a fifth of tweets about Hillary Clinton and Donald Trump in the weeks running up to November’s Presidential election were created by bots to influence wavering voters, or to reinforce existing opinions and prejudices.
Spot the Bot: Can You Tell the Difference?
A bot's use of language can sometimes be a giveaway to observant audiences: even the well-known 'Spot the Bot' account recently tweeted 'Minus 3hrs and 9mins' as a countdown to a One Direction movie premiere. However, many people would consider this unremarkable compared to the illiteracy of real friends and contacts. Bots often steal photos and personal details from genuine social media accounts for added authenticity, and they are usually followed by thousands of equally fake accounts. This lends a veneer of credibility to follower statistics that few people look beyond, persuading us to trust what the bots are saying.
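The signals described above can be combined into a crude suspicion score. This is a hedged sketch only: the thresholds and weights are invented for illustration, and real bot-detection systems draw on far richer behavioural features than these three.

```python
def bot_suspicion_score(followers: int, following: int,
                        posts_per_day: float,
                        profile_photo_is_default: bool) -> float:
    """Toy heuristic: higher score = more bot-like. All cut-offs are illustrative guesses."""
    score = 0.0
    # A follower count wildly out of proportion to accounts followed can
    # indicate an audience padded with fake followers.
    if following > 0 and followers / following > 50:
        score += 1.0
    # Posting far faster than a person plausibly could.
    if posts_per_day > 50:
        score += 1.0
    # A missing or default profile photo is a weaker signal.
    if profile_photo_is_default:
        score += 0.5
    return score
```

Even a toy scorer like this makes the underlying point: no single signal is conclusive, which is exactly why convincingly dressed-up bots slip past casual scrutiny.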
There is a clear danger that bots could undermine – or even destroy – some social media channels. Twitter looks particularly vulnerable as it struggles with flatlining user numbers and endless controversy about trolling. If people lose faith in the authenticity of information carried on a social media platform, they will simply turn elsewhere for information and entertainment. After all, the internet is hardly short of rival attractions seeking our attention.
The risk of a dynamic modern communication platform being ruined by automated misuse is reminiscent of email’s historic battles with spam. Indeed, social media’s future may depend on a code of conduct governing when socialbots can be deployed, and how their posts are identified. It may otherwise become increasingly difficult to separate fact from fiction – which could be particularly harmful for anyone reliant on social media for their world view…