Facebook’s latest “apology” reveals security and safety disarray

A person in a hazmat suit covers the Facebook logo with warning tape.

Facebook had it rough last week. Leaked documents—many leaked documents—formed the backbone of a string of reports published in The Wall Street Journal. Together, the stories paint the picture of a company barely in control of its own creation. The revelations run the gamut: Facebook had created special rules for VIPs that largely exempted 5.8 million users from moderation, forced troll farm content on 40 percent of America, created toxic conditions for teen girls, ignored cartels and human traffickers, and even undermined CEO Mark Zuckerberg’s own desire to promote vaccination against COVID.

Now, Facebook wants you to know it’s sorry and that it’s trying to do better.

“In the past, we didn’t address safety and security challenges early enough in the product development process,” the company said in an unsigned press release today. “Instead, we made improvements reactively in response to a specific abuse. But we have fundamentally changed that approach.”

The change, Facebook said, was the integration of safety and security into product development. The press release doesn’t say when the change was made, and a Facebook spokesperson couldn’t confirm for Ars when integrity work became embedded in the product teams. But the press release does say that the company’s Facebook Horizon VR efforts benefited from this process, and Horizon entered beta only last year.

The release would appear to confirm that, prior to the development of Horizon, safety and security were sideshows, considered only after features had been defined and code had been written. Or maybe problems weren’t addressed until even later, when users encountered them. Regardless of when it happened, it’s a stunning revelation for a multibillion-dollar company that counts 2 billion people as users.

Missed the memo

Facebook isn’t the first company to take a cavalier approach to security, which means it didn’t have to make the same mistakes as its predecessors. Early in Facebook’s history, all it had to do was look to one of its major shareholders, Microsoft, which had bought special stock in the startup in 2007.

In the late 1990s and early 2000s, Microsoft had its own issues with security, producing versions of Windows and Internet Information Server that were riddled with security holes. Microsoft began to fix things after Bill Gates made security the company’s top priority in his 2002 “Trustworthy Computing” memo. One result of that push was the Microsoft Security Development Lifecycle, which implores managers to “make security everyone’s business.” Microsoft began publishing books about its approach in the mid-2000s, and it’s hard to imagine that Facebook’s engineers were unaware of it.

But a security-first development program must have come with a cost that Facebook was unwilling to bear: slower growth. Time and again, the company has been confronted with the choice between addressing a safety or security problem and prioritizing growth. It ignored privacy concerns by allowing business partners to access users’ personal data. It killed a project to use artificial intelligence to tackle disinformation on the platform. Its focus on Groups a few years ago led to “super-inviters” who could recruit hundreds of people to the “Stop the Steal” group that ultimately helped foment the January 6 insurrection at the US Capitol. In each case, the company chose to pursue growth first and deal with the consequences later.

“Many different teams”

That mindset appears to have been baked into the company from the beginning, when Zuckerberg took an investment from Peter Thiel and copied the “blitzscaling” strategy that Thiel and others used at PayPal.

Today, Facebook is fracturing under the internal strife caused by growth at all costs. The leaks to the WSJ, said Alex Stamos, the company’s former chief security officer, are the result of frustrations the safety and security people experience when they’re overruled by growth and policy teams. (Policy teams have their own conflicts—the people who decide what flies on Facebook are the same ones talking with politicians and regulators.) 

“The big picture is that several mid-level VPs and Directors invested and built big quantitative social science teams on the belief that knowing what was wrong would lead to positive change. Those teams have run into the power of the Growth and unified Policy teams,” Stamos tweeted this week. “Turns out the knowledge isn’t helpful when the top execs haven’t changed the way products are measured and employees are compensated.”

Even today, there doesn’t appear to be one person who is responsible for safety and security at the company. “Our integrity work is made up of many different teams, so hard to say [if there is] one leader, but Guy Rosen is VP of Integrity,” a Facebook spokesperson told Ars. Perhaps it’s telling that Rosen doesn’t appear on Facebook’s list of top management.

For now, Facebook doesn’t seem to have much incentive to change. Its stock price is up more than 50 percent over the last year, and shareholders don’t have much leverage given the outsize power of Zuckerberg’s voting shares. Growth at all costs will probably continue. Until, of course, the safety and security problems become so large that they start harming growth and retention. Given Facebook’s statement today, it’s not clear whether the company is there yet. If that moment arrives—and if Microsoft’s transition is anything to go by—it will be years before an embrace of safety and security affects users in a meaningful way.
