One reason I write this newsletter about social networks is to cover the new and exotic methods that state actors employ to bend the public to their will. Much of the conversation over the past two years has been around “troll farms” or “troll armies” — essentially, remote workforces that attempt to wreak havoc from their laptops on targets around the world.
On Saturday we learned of a much more disturbing — and in-person — method of social media hacking. Katie Benner, Mark Mazzetti, Ben Hubbard and Mike Isaac had the tale of Ali Alzabarah, a Twitter engineer recruited by Saudi Arabia to use his position to identify government critics:
Twitter executives first became aware of a possible plot to infiltrate user accounts at the end of 2015, when Western intelligence officials told them that the Saudis were grooming an employee, Ali Alzabarah, to spy on the accounts of dissidents and others, according to five people briefed on the matter. They requested anonymity because they were not authorized to speak publicly.
Mr. Alzabarah had joined Twitter in 2013 and had risen through the ranks to an engineering position that gave him access to the personal information and account activity of Twitter’s users, including phone numbers and I.P. addresses, unique identifiers for devices connected to the internet.
Perhaps it had previously occurred to you that state actors would attempt to recruit engineers and other social-network employees as spies. I spent less time thinking about it than I probably should have! In any case, it’s chilling, and had real-world consequences. Alzabarah — who was fired, and now reportedly works for the Saudi government — accessed dozens of accounts, as part of a wide-ranging effort to identify the kingdom’s most influential critics and intimidate them into silence.
Another part of this effort involved the consulting company McKinsey, best known as the place where your college friends spend two lazy postgraduate years before business school. As the New York Times reported, McKinsey assembled a 9-page report on the Saudis’ behalf naming prominent Saudi dissidents. One of the men named was arrested, along with two of his brothers, and the account of an anonymous critic was shut down. (McKinsey denied everything, rather weakly.)
Facebook has spoken often in the past about the strict controls it places around user accounts in an effort to thwart the kind of attack that Alzabarah mounted. Every time a user’s data is accessed, Facebook logs which employee did so, and regularly audits the logs looking for suspicious behavior.
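The pattern Facebook describes (log every lookup, then audit the logs for anomalies) can be sketched generically. The function names and the 50-lookups-per-day threshold below are assumptions for illustration only, not Facebook's actual tooling:

```python
import datetime
from collections import Counter

# Illustrative sketch of employee-access audit logging. All names and
# thresholds here are assumptions, not any company's real system.

access_log = []  # each entry: (employee_id, account_id, timestamp)

def record_access(employee_id, account_id):
    """Append an audit entry every time an employee views account data."""
    access_log.append((employee_id, account_id, datetime.datetime.utcnow()))

def flag_suspicious(log, max_daily_lookups=50):
    """Return employees whose lookups on any single day exceed a threshold."""
    daily = Counter((emp, ts.date()) for emp, _acct, ts in log)
    return sorted({emp for (emp, _day), n in daily.items() if n > max_daily_lookups})
```

A real audit pipeline would look at more than raw volume (which accounts were touched, whether the employee had a support ticket justifying the lookup), but the log-then-audit structure is the same.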
At Twitter, things are much looser. Perhaps you have forgotten the time that a contract worker briefly deactivated President Trump’s account; I sure haven’t. Here is the seriousness with which Twitter takes account security, from my story last year:
In the wake of Trump’s account deactivation shortly before 10PM ET on Thursday, former employees gathered in a private Slack that they use to discuss the company’s travails. The rogue employee, who has not been identified, was an immediate source of fascination. “We’re now referring to this individual as ‘the legend,’” one former employee told The Verge. At the same time, the former employee was not surprised by the incident. “People have ‘dropped the mic’ in the past and deleted accounts, verified users, and otherwise abused their power on the last day,” the employee said. In each case, the employee said, the abuse was caught quickly and did not become public.
These “mic drops” were possible because of the broad availability of customer support tools inside Twitter. The company won’t say how many people have access to the tools necessary to deactivate an account like Trump’s — and after today, the number is likely much lower. But up until now, as many as hundreds of people have had access to the tools, which let employees see a broad range of information about the account. The access does not allow employees to send tweets from other users’ accounts, or to read a user’s direct messages.
The man was eventually revealed to be a German citizen named Bahtiyar Duysak. He said that he had made a mistake. Still, when considered in light of the Times’ story about spying, it ought to give pause to the large group of people who use Twitter as a tool for activism.
It ought to give pause to other social networks, as well. I asked around for other public cases in which a social network had caught a spy in its ranks, and came up empty. But it’s a safe bet that others have attempted the playbook that the Saudis have, and possibly succeeded — at Twitter and elsewhere. For activists who risk their freedom when they tweet, it’s a chilling reminder to take extra steps to protect their identities, lest they wind up in the next McKinsey report. And for Twitter, it’s another major embarrassment in a year that has had too many of them.
Adam Satariano investigates more Facebook dark money: a group pushing Britain to exit the European Union in much starker terms than Prime Minister Theresa May’s government has planned. Facebook says it will soon require British advertisers to confirm and disclose their real identities:
In the past 10 months, the organization spent more than 250,000 pounds on ads pushing for a more severe break from the European Union than Mrs. May has planned. The ads reached 10 million to 11 million people, according to a report published on Saturday by a House of Commons committee investigating the manipulation of social media in elections.
The ads, which disappeared suddenly this week, linked to websites for people to send prewritten emails to their local member of Parliament outlining their opposition to Mrs. May’s negotiations with the European Union.
The Digital Forensics Research Lab digs in on the October 19th indictment of a Russian national in connection with an effort to interfere in the US midterm elections. Key point: Russia is spending more on its campaign this year than it did in 2016. (Fake accounts are getting more expensive!)
The first financial detail included in the criminal complaint against Elena Khusyaynova showed that between January 2016 and June 2018, Project Lakhta’s proposed operating budget was more than two billion Russian rubles ($35 million USD). In the first half of 2018, the proposed operating budget was 650 million Russian rubles (over $10 million USD).
Put simply, the budget for the first half of 2018 nearly matched the total troll farm budgets from 2016 and 2017. The itemized budget requests, which Khusyaynova allegedly organized, increased every single month in 2018.
Sue Halpern surveys the political landscape post-Cambridge Analytica and finds any number of companies still invested in the same kind of psychographic targeting. And much of it looks much more invasive, on the surface, than anything Cambridge Analytica did:
Judging personalities, measuring voice stress, digging through everything someone has ever said—all of this suggests that future digital campaigns, irrespective of party, will have ever-sharper tools to burrow into the psyches of candidates and voters. Consider Avalanche Strategy, another startup supported by Higher Ground Labs. Its proprietary algorithm analyzes what people say and tries to determine what they really mean—whether they are perhaps shading the truth or not being completely comfortable about their views. According to Michiah Prull, one of the company’s founders, the data firm prompts survey takers to answer open-ended questions about a particular issue, and then analyzes the specific language in the responses to identify “psychographic clusters” within the larger population. This allows campaigns to target their messaging even more effectively than traditional polling can—because, as the 2016 election made clear, people often aren’t completely open and honest with pollsters.
“We are able to identify the positioning, framing, and messaging that will resonate across the clusters to create large, powerful coalitions, and within clusters to drive the strongest engagement with specific groups,” Prull said. Avalanche Strategy’s technology was used by six female first-time candidates in the 2017 Virginia election who took its insights and created digital ads based on its recommendations in the final weeks of the campaign. Five of the six women won.
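The clustering technique Prull describes, grouping respondents by the language of their open-ended answers, can be illustrated with a deliberately simplified sketch. Avalanche Strategy's algorithm is proprietary and not public; the word-overlap approach and threshold below are my own assumptions, chosen only to show the general shape of the idea:

```python
# Simplified illustration of clustering free-text survey answers by shared
# vocabulary. This does NOT reflect Avalanche Strategy's actual method.

def tokenize(text):
    """Lowercase a response and split it into a set of words."""
    return set(text.lower().split())

def jaccard(a, b):
    """Word-overlap similarity between two token sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_responses(responses, threshold=0.2):
    """Greedily group responses whose vocabulary overlap exceeds a threshold."""
    clusters = []  # each cluster: list of (index, token_set) pairs
    for i, text in enumerate(responses):
        tokens = tokenize(text)
        for cluster in clusters:
            if any(jaccard(tokens, t) >= threshold for _, t in cluster):
                cluster.append((i, tokens))
                break
        else:
            clusters.append([(i, tokens)])
    return [[i for i, _ in c] for c in clusters]
```

A production system would use embeddings or topic models rather than raw word overlap, but the output is the same kind of thing: groups of respondents whose language, and presumably whose underlying attitudes, cluster together, which campaigns can then message separately.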
Snapchat is a surprisingly popular place for college students to get news, according to new data from the Knight Foundation:
In a survey of 5,844 college students from 11 US institutions, 89 percent said they got at least some of their news from social media over the previous week. And Facebook was the most popular outlet, with 71 percent of respondents saying they got news from the platform during that time period. Interestingly, Snapchat came in second place, with 55 percent of the students saying they had gotten news from the app during the past week. And YouTube, Instagram and Twitter followed, pulling 54 percent, 51 percent and 42 percent of respondents, respectively.
Ryan Broderick looks at the political success that a group of YouTubers have had getting elected to Congress in Brazil:
Kim Kataguiri is known in Brazil for a lot of things. He’s been called a fascist. He’s been called a fake news kingpin. Is he a YouTuber? He definitely uses YouTube. He’s definitely a troll. A troll with a consistent message, though, he points out. Maybe he’s Brazil’s equivalent of Milo Yiannopoulos. His organization, Movimento Brasil Livre (MBL) — the Free Brazil Movement — is like the Brazilian Breitbart. Or maybe it’s like the American tea party. Maybe it’s both. Is it a news network? Kataguiri says it isn’t. But it’s not a political party, either. He says MBL is just a bunch of young people who love free market economics and memes.
One thing is very clear: His YouTube channel, the memes, the fake news, and MBL’s army of supporters have helped Kataguiri, 22, become the youngest person ever elected to Congress in Brazil. He’s also trying to become Brazil’s equivalent of speaker of the House.
YouTube’s head of product, Neal Mohan, tells YouTubers to oppose the European Union’s Article 13, which creates draconian new requirements on tech platforms to check for copyright infringement.
This legislation poses a threat to both your livelihood and your ability to share your voice with the world. And, if implemented as proposed, Article 13 threatens hundreds of thousands of jobs, European creators, businesses, artists and everyone they employ. The proposal could force platforms, like YouTube, to allow only content from a small number of large companies. It would be too risky for platforms to host content from smaller original content creators, because the platforms would now be directly liable for that content. We realize the importance of all rights holders being fairly compensated, which is why we built Content ID and a platform to pay out all types of content owners. But the unintended consequences of article 13 will put this ecosystem at risk. We are committed to working with the industry to find a better way. This language could be finalized by the end of the year, so it’s important to speak up now.