How Mark Zuckerberg’s Meta Failed Children on Safety

In April 2019, David Ginsberg, a Meta executive, emailed his boss, Mark Zuckerberg, with a proposal to research and reduce loneliness and compulsive use on Instagram and Facebook.

In the email, Mr. Ginsberg noted that the company faced scrutiny for its products’ impacts “especially around areas of problematic use/addiction and teens.” He asked Mr. Zuckerberg for 24 engineers, researchers and other staff, saying Instagram had a “deficit” on such issues.

A week later, Susan Li, now the company’s chief financial officer, informed Mr. Ginsberg that the project was “not funded” because of staffing constraints. Adam Mosseri, Instagram’s head, ultimately declined to finance the project, too.

The email exchanges are just one slice of evidence cited in more than a dozen lawsuits filed since last year by the attorneys general of 45 states and the District of Columbia. The states accuse Meta of unfairly ensnaring teenagers and children on Instagram and Facebook while deceiving the public about the hazards. Using a coordinated legal approach reminiscent of the government’s pursuit of Big Tobacco in the 1990s, the attorneys general seek to compel Meta to bolster protections for minors.

A New York Times analysis of the states’ court filings — including roughly 1,400 pages of company documents and correspondence filed as evidence by the State of Tennessee — shows how Mr. Zuckerberg and other Meta leaders repeatedly promoted the safety of the company’s platforms, playing down risks to young people, even as they rejected employee pleas to bolster youth guardrails and hire additional staff.

In interviews, the attorneys general of several states suing Meta said Mr. Zuckerberg had led his company to drive user engagement at the expense of child welfare.

“A lot of these decisions ultimately landed on Mr. Zuckerberg’s desk,” said Raúl Torrez, the attorney general of New Mexico. “He needs to be asked explicitly, and held to account explicitly, for the decisions that he’s made.”

The state lawsuits against Meta reflect mounting concerns that teenagers and children on social media can be sexually solicited, harassed, bullied, body-shamed and algorithmically induced into compulsive online use. Last Monday, Dr. Vivek H. Murthy, the United States surgeon general, called for warning labels to be placed on social networks, saying the platforms present a public health risk to young people.

His warning could boost momentum in Congress to pass the Kids Online Safety Act, a bill that would require social media companies to turn off features for minors, like bombarding them with phone notifications, that could lead to “addiction-like” behaviors. (Critics say the bill could hinder minors’ access to important information. The News/Media Alliance, a trade group that includes The Times, helped win an exemption in the bill for news sites and apps that produce news videos.)

In May, New Mexico arrested three men who were accused of targeting children for sex after, Mr. Torrez said, they solicited state investigators who had posed as children on Instagram and Facebook. Mr. Torrez, a former child sex crimes prosecutor, said Meta’s algorithms enabled adult predators to identify children they would not have found on their own.

Meta disputed the states’ claims and has filed motions to dismiss their lawsuits.

In a statement, Liza Crenshaw, a spokeswoman for Meta, said the company was committed to youth well-being and had many teams and specialists devoted to youth experiences. She added that Meta had developed more than 50 youth safety tools and features, including limiting age-inappropriate content and restricting teenagers under 16 from receiving direct messages from people they didn’t follow.

“We want to reassure every parent that we have their interests at heart in the work we’re doing to help provide teens with safe experiences online,” Ms. Crenshaw said. The states’ legal complaints, she added, “mischaracterize our work using selective quotes and cherry-picked documents.”

But parents who say their children died as a result of online harms challenged Meta’s safety assurances.

“They preach that they have safety protections, but not the right ones,” said Mary Rodee, an elementary school teacher in Canton, N.Y., whose 15-year-old son, Riley Basford, was sexually extorted on Facebook in 2021 by a stranger posing as a teenage girl. Riley died by suicide several hours later.

Ms. Rodee, who sued the company in March, said Meta had never responded to the reports she submitted through automated channels on the site about her son’s death.

“It’s pretty unfathomable,” she said.

Meta has long wrestled with how to attract and retain teenagers, who are a core part of the company’s growth strategy, internal company documents show.

Teenagers became a major focus for Mr. Zuckerberg as early as 2016, according to the Tennessee complaint, when the company was still known as Facebook and owned apps including Instagram and WhatsApp. That spring, an annual survey of young people by the investment bank Piper Jaffray reported that Snapchat, a disappearing-message app, had surpassed Instagram in popularity.

Later that year, Instagram introduced a similar disappearing photo- and video-sharing feature, Instagram Stories. Mr. Zuckerberg directed executives to focus on getting teenagers to spend more time on the company’s platforms, according to the Tennessee complaint.

The “overall company goal is total teen time spent,” wrote one employee, whose name is redacted, in an email to executives in November 2016, according to internal correspondence among the exhibits in the Tennessee case. Participating teams should increase the number of employees dedicated to projects for teenagers by at least 50 percent, the email added, noting that Meta already had more than a dozen researchers analyzing the youth market.

In April 2017, Kevin Systrom, Instagram’s chief executive, emailed Mr. Zuckerberg asking for more staff to work on mitigating harms to users, according to the New Mexico complaint.

Mr. Zuckerberg replied that he would include Instagram in a plan to hire more staff, but he said Facebook faced “more extreme issues.” At the time, legislators were criticizing the company for having failed to hinder disinformation during the 2016 U.S. presidential campaign.

Mr. Systrom asked colleagues for examples to show the urgent need for more safeguards. He soon emailed Mr. Zuckerberg again, saying Instagram users were posting videos involving “imminent danger,” including a boy who shot himself on Instagram Live, the complaint said.

Two months later, the company announced that the Instagram Stories feature had hit 250 million daily users, dwarfing Snapchat. Mr. Systrom, who left the company in 2018, didn’t respond to a request for comment.

Meta said an Instagram team developed and introduced safety measures and experiences for young users. The company didn’t respond to a question about whether Mr. Zuckerberg had provided the additional staff.

In January 2018, Mr. Zuckerberg received a report estimating that four million children under the age of 13 were on Instagram, according to a lawsuit filed in federal court by 33 states.

Facebook’s and Instagram’s terms of use prohibit users under 13. But the company’s sign-up process for new accounts enabled children to easily lie about their age, according to the complaint. Meta’s practices violated a federal children’s online privacy law requiring certain online services to obtain parental consent before collecting personal data, like contact information, from children under 13, the states allege.

In March 2018, The Times reported that Cambridge Analytica, a voter profiling firm, had covertly harvested the personal data of millions of Facebook users. That set off more scrutiny of the company’s privacy practices, including those involving minors.

Mr. Zuckerberg testified the next month at a Senate hearing, “We don’t allow people under the age of 13 to use Facebook.”

Attorneys general from dozens of states disagree.

In late 2021, Frances Haugen, a former Facebook employee, disclosed thousands of pages of internal documents that she said showed the company valued “profit above safety.” Lawmakers held a hearing, grilling her on why so many children had accounts.

Meanwhile, company executives knew that Instagram use by children under 13 was “the status quo,” according to the joint federal complaint filed by the states. In an internal chat in November 2021, Mr. Mosseri acknowledged those underage users and said the company’s plan to “cater the experience to their age” was on hold, the complaint said.

In its statement, Meta said Instagram had measures in place to remove underage accounts when the company identified them. Meta has said it has regularly removed hundreds of thousands of accounts that could not prove they met the company’s age requirements.

A company debate over beauty filters on Instagram encapsulated the internal tensions over teenage mental health — and ultimately the desire to engage more young people prevailed.

It began in 2017 after Instagram introduced camera effects that enabled users to alter their facial features to make them look funny or “cute/pretty,” according to internal emails and documents filed as evidence in the Tennessee case. The move was made to boost engagement among young people. Snapchat already had popular face filters, the emails said.

But a backlash ensued in the fall of 2019 after Instagram introduced an appearance-altering filter, Fix Me, which mimicked the nip/tuck lines that cosmetic surgeons draw on patients’ faces. Some mental health experts warned that the surgery-like camera effects could normalize unrealistic beauty standards for young women, exacerbating body-image disorders.

As a result, Instagram in October 2019 temporarily disallowed camera effects that made dramatic, surgical-looking facial alterations — while still permitting obviously fantastical filters, like goofy animal faces. The next month, concerned executives proposed a permanent ban, according to Tennessee court filings.

Other executives argued that a ban would hurt the company’s ability to compete. One senior executive sent an email saying Mr. Zuckerberg was concerned whether data showed real harm.

In early 2020, ahead of an April meeting with Mr. Zuckerberg to discuss the issue, employees prepared a briefing document on the ban, according to the Tennessee court filings. One internal email noted that employees had spoken with 18 mental health experts, each of whom raised concerns that cosmetic surgery filters could “cause lasting harm, especially to young people.”

But the meeting with Mr. Zuckerberg was canceled. Instead, the chief executive told company leaders that he was in favor of lifting the ban on beauty filters, according to an email he sent that was included in the court filings.

Several weeks later, Margaret Gould Stewart, then Facebook’s vice president for product design and responsible innovation, reached out to Mr. Zuckerberg, according to an email included among the exhibits. In the email, she noted that as a mother of teenage daughters, she knew social media put “intense” pressure on girls “with respect to body image.”

Ms. Stewart, who subsequently left Meta, did not respond to an email seeking comment.

In the end, Meta said it barred filters “that directly promote cosmetic surgery, changes in skin color or extreme weight loss” and clearly indicated when one was being used.

In 2021, Meta began planning for a new social app. It was to be aimed specifically at children and called Instagram Kids. In response, 44 attorneys general wrote a letter that May urging Mr. Zuckerberg to “abandon these plans.”

“Facebook has historically failed to protect the welfare of children on its platforms,” the letter said.

Meta subsequently paused plans for an Instagram Kids app.

By August, company efforts to protect users’ well-being had become “increasingly urgent” for Meta, according to another email to Mr. Zuckerberg filed as an exhibit in the Tennessee case. Nick Clegg, now Meta’s head of global affairs, warned his boss of mounting concerns from regulators about the company’s impact on teenage mental health, including “potential legal action from state A.G.s.”

Describing Meta’s youth well-being efforts as “understaffed and fragmented,” Mr. Clegg requested funding for 45 employees, including 20 engineers.

In September 2021, The Wall Street Journal published an article saying Instagram knew it was “toxic for teen girls,” escalating public concerns.

An article in The Times that same month mentioned a video that Mr. Zuckerberg had posted of himself riding across a lake on an “electric surfboard.” Internally, Mr. Zuckerberg objected to that description, saying he was actually riding a hydrofoil he pumped with his legs and wanted to post a correction on Facebook, according to employee messages filed in court.

Mr. Clegg found the idea of a hydrofoil post “pretty tone deaf given the gravity” of recent accusations that Meta’s products caused teenage mental health harms, he said in a text message with communications executives included in court filings.

Mr. Zuckerberg went ahead with the correction.

In November 2021, Mr. Clegg, who had not heard back from Mr. Zuckerberg about his request for more staff, sent a follow-up email with a scaled-down proposal, according to Tennessee court filings. He asked for 32 employees, none of them engineers.

Ms. Li, the finance executive, responded a few days later, saying she would defer to Mr. Zuckerberg and suggested that the funding was unlikely, according to an internal email filed in the Tennessee case. Meta didn’t respond to a question about whether the request had been granted.

A few months later, Meta said that although its revenue for 2021 had increased 37 percent to nearly $118 billion from a year earlier, fourth-quarter profit plummeted because of a $10 billion investment in developing virtual reality products for immersive realms, known as the metaverse.

Last fall, the Match Group, which owns dating apps like Tinder and OKCupid, found that ads the company had placed on Meta’s platforms were running adjacent to “highly disturbing” violent and sexualized content, some of it involving children, according to the New Mexico complaint. Meta removed some of the posts flagged by Match, telling the dating giant that “violating content may not get caught a small percentage of the time,” the complaint said.

Dissatisfied with Meta’s response, Bernard Kim, the chief executive of the Match Group, reached out to Mr. Zuckerberg by email with a warning, saying his company could not “turn a blind eye,” the complaint said.

Mr. Zuckerberg didn’t respond to Mr. Kim, according to the complaint.

Meta said the company had spent years building technology to combat child exploitation.

Last month, a judge denied Meta’s motion to dismiss the New Mexico lawsuit. But the court granted a request to drop Mr. Zuckerberg, who had been named as a defendant, from the case.
