“I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever,” the Facebook CEO wrote, acknowledging that “we don’t currently have a strong reputation for building privacy-protective services.” He added: “This is the future I hope we can contribute to. We want to construct it the same way we built WhatsApp.”
End-to-end encryption, which turns all messages into an unreadable format that is only unlocked when they reach their intended destinations, was at the centre of Zuckerberg’s vision, which he said the company aimed to extend to Instagram and Facebook Messenger. He claims that WhatsApp messages are so secure that no one else — not even the company — can read them. “We don’t see any of the content in WhatsApp,” Zuckerberg said earlier this year in testimony to the US Senate.
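The core idea described above can be sketched in a few lines of code. This is a deliberately simplified teaching toy, assuming a shared one-time key held only by the two endpoints; it is not real cryptography and not WhatsApp’s actual protocol (which is based on the Signal protocol), but it shows why a relay server that handles only ciphertext cannot read the message.

```python
import os

# Toy illustration of the end-to-end principle: only the two endpoints
# hold the key, so a server relaying the message sees only ciphertext.
# NOTE: a XOR one-time pad is a teaching sketch, NOT production crypto;
# WhatsApp's real scheme is the Signal protocol.

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR each plaintext byte with the corresponding key byte
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = os.urandom(len(message))      # known only to sender and recipient

ciphertext = encrypt(key, message)  # this is all the relay ever sees
recovered = decrypt(key, ciphertext)

assert recovered == message         # only the key holder can read it
```

Without the key, the relay holds bytes that are statistically indistinguishable from random noise, which is the property Zuckerberg’s claim rests on.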
WhatsApp emphasises this point so often that before users send messages, a notice with a similar assurance appears on-screen: “No one outside of this chat, not even WhatsApp, can read or listen to them.”
Given those broad assurances, it may come as a surprise to hear that WhatsApp employs more than 1,000 contract workers throughout Austin, Dublin, and Singapore. These hourly workers, seated at laptops in pods organised by job assignments, filter through millions of private messages, photographs, and videos using special Facebook software.
In less than a minute, they pass judgment on whatever appears on their screen, from claims of fraud or spam to child pornography and possible terrorist plotting. Workers have access only to a subset of WhatsApp messages: those reported as potentially abusive by users and automatically forwarded to the firm.
The review is part of a larger monitoring operation in which the company examines unencrypted data, such as information about the sender and their account. Policing users while promising them that their privacy is protected makes for an awkward mission at WhatsApp.
ProPublica received a 49-slide internal business marketing presentation from December that emphasises WhatsApp’s “fierce” promotion of its “privacy narrative.”
Carl Woog, WhatsApp’s director of communications, admitted that teams of contractors in Austin and other locations scan WhatsApp conversations to identify and delete “the worst” abusers. But, according to Woog, the firm does not consider this activity to be content moderation, noting, “We actually don’t often use the phrase for WhatsApp.”
Executives at the company declined to be interviewed for this piece, but they did respond to inquiries in writing.
According to the business, “WhatsApp is a lifeline for millions of people throughout the world. The decisions we make about how we construct our app are centred on ensuring our customers’ privacy while maintaining a high level of reliability.”
Facebook has also played down the amount of data it obtains from WhatsApp users, what it does with it, and how much of it it shares with law enforcement.
WhatsApp’s denial that it moderates content contrasts sharply with Facebook’s position on WhatsApp’s corporate siblings, Instagram and Facebook. According to the firm, 15,000 moderators review content on Facebook and Instagram, neither of which is encrypted. It publishes quarterly transparency reports that show how many accounts Facebook and Instagram have taken action against for various types of harmful behaviour.
For WhatsApp, there is no such report.