This week Facebook, Google and Twitter faced some pretty aggressive bipartisan congressional hearings in Washington DC. Hauled before the Senate Intelligence Committee, the House Intelligence Committee and the Senate Judiciary Committee, they came to argue they’re ready to fight back against the scourge of fake news and the Russian misinformation campaign that influenced the 2016 US presidential election. The senators were left unimpressed.
Democratic Senator Mark Warner from Virginia said “You showed a lack of resources, commitment and a lack of genuine effort.” The Senate Intelligence Committee has been studying this issue since January and, he said, the committee’s early concerns “were frankly blown off by your leadership.” Senator Warner’s views are particularly interesting because he made his own fortune investing in Silicon Valley businesses and might be regarded as sympathetic to the tech giants.
Twitter General Counsel Sean Edgett said the company is constantly improving its process of identifying propaganda on its platform. "That's not enough," said Democratic Senator Dianne Feinstein of California. The committee is, among other things, considering greater regulation to help prevent this kind of information manipulation. Senator Feinstein warned: "You created these platforms… you will have to be the ones to do something about it, or we will."
In his prepared statement Senator Warner said “we need to recognise that current law was not built to address these threats. We can all be assured that other adversaries, including foreign intelligence operatives and potentially terrorist organizations, are reading their playbook and already taking action.”
Facebook, Google and Twitter have got into this mess because, in their rush to monetise their platforms, they have made the buying of advertising slots fully automatic. This has been hugely successful in raising enormous revenues, but it has created a monster over which they exercise little control. As I pointed out in my blog Is Advertising at a Crossroads?,[i] advertising on social media is riven with fraud and requires independent verification. It is now clear, however, that it requires much more control than that.
Advertising matters because it is the only way these companies can make profits and so survive. But the problem isn’t just advertising. These companies also pretend that they have no editorial role, even though they are clearly publishing platforms. All over the world the spread of fake news on Facebook and other platforms is growing quickly and causing terrible problems.
We have been watching on our screens the horrible ethnic cleansing of Rohingya Muslims by the military in Myanmar. Last week the New York Times reported that violence against the Rohingya has been fuelled, in part, by misinformation and anti-Rohingya propaganda spread on Facebook, which is used as a primary news source by many people in the country. Doctored photos and unfounded rumours have gone viral on Facebook, many of them shared by official government and military accounts.
Facebook has successfully engineered a global network of 2 billion users of real-time communication and broadcasting tools, and then largely left an unprepared world to deal with the consequences. In Myanmar, Facebook drove wide adoption of its platform by partnering with MPT, the state-run telecom company, to give subscribers access to its Free Basics programme. Free Basics includes a limited suite of internet services, including Facebook, that can be used without counting towards a mobile phone data plan. As a result, the number of Facebook users in Myanmar has soared from 2 million in 2014 to over 30 million today.
In India, where internet use has also grown quickly in the past few years, WhatsApp, the Facebook-owned messaging app, has been inundated with rumours, hoaxes and false stories. In May, the Jharkhand region in eastern India was destabilised by a viral WhatsApp message falsely claiming that children in the area were being abducted by gangs. The message incited widespread panic and led to retaliatory lynchings in which at least seven people were beaten to death. Many inexperienced internet users assume that everything arriving on their phones is true. Fighting misinformation on WhatsApp is especially difficult because the service is end-to-end encrypted.
Facebook argues that the benefits of global connectivity will outweigh the negatives. Try telling that to the people of South Sudan. Despite being one of the poorest countries in the world, with only 20% of its citizens connected to the internet, it has become a snake pit of social media misinformation. Political operatives both inside and outside the country have used Facebook to spread rumours and incite anger between rival factions, fostering violence that threatens to escalate into a civil war. According to a 2016 United Nations report on South Sudan, “Social media has been used by partisans on all sides, including some senior government officials, to exaggerate incidents, spread falsehoods and veiled threats, or post outright messages of incitement.”
Another problem facing the tech giants is that some of their own former employees, including several who played a key role in developing the technology, now recognise the widespread harm these platforms are doing to society and are openly campaigning against them.
In 2007 Justin Rosenstein was one of a small group of Facebook employees who decided to create a path of least resistance – a single click – to “send little bits of positivity” across the platform. Facebook’s “like” feature was, Rosenstein says, “wildly successful”. Engagement skyrocketed as people enjoyed the short-term boost they got from giving or receiving social affirmation. At the same time Facebook gathered valuable data about user preferences that could be sold to advertisers.
But the cumulative effect of these “likes” has been to generate addiction, Rosenstein and others argue. Technology may be contributing towards so-called “continuous partial attention”, severely limiting our ability to focus, and possibly lowering our IQ. “Everyone is distracted,” Rosenstein says. “All of the time.”
Tristan Harris, a 33-year-old former Google employee, has also become a vocal critic of the industry. “All of us are jacked into the system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.” In 2013 Harris was working as a product manager at Google when he circulated a thought-provoking memo, A Call To Minimise Distraction & Respect Users’ Attention, to ten close colleagues. It quickly spread to 5,000 employees, including senior managers who promptly “promoted” Harris to be Google’s in-house design ethicist and product philosopher. He now sees that this was a deliberately marginal role. But in it he analysed how LinkedIn exploits a need for social reciprocity to widen its network;[ii] how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; and how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users. An internal Facebook report leaked this year revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”.
These companies are using the same techniques that gambling firms use to generate addiction and dependence. Just as the pull on the slot machine provides stimulus, so do swipes on the phone. Loren Brichter, aged 32, created the pull-to-refresh mechanism. He says he never intended it to be addictive but now fully accepts that it is. “I have two kids now and I regret every minute that I’m not paying attention to them because some smartphone has sucked me in.”
James Williams, now 35, is a former Google strategist who built the metrics system for the company’s global search advertising business. He left Google last year to take a DPhil at Oxford exploring the ethics of persuasive design. His moment of truth came when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on: technology, he realised, is supposed to be doing the complete opposite. He finds it hard (as I do) to understand why this issue is not on the front page of the newspapers every day.
“Eighty-seven percent of people wake up and go to sleep with their smartphones.” This means that most of the world has a new prism through which to understand politics. Just as tech firms designed tricks with which to hook users, they now depict the world in a way that makes for compulsive, irresistible viewing. Just as road users cannot resist looking at a car accident on the opposite carriageway, so users are fed the sensational over the nuanced: content that appeals to emotion, anger and outrage.
The attention economy is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage. Since the US election, Williams has explored another dimension to today’s brave new world. If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself? If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point at which democracy no longer functions? “Will we be able to recognise it, if and when it happens?” Williams asks. “And if we can’t, then how do we know it hasn’t happened already?”