Arrest Mark Zuckerberg for Child Endangerment
The plaintiffs’ brief alleges that Meta was aware that its platforms were endangering young users, including by exacerbating adolescents’ mental health issues. According to the plaintiffs, Meta frequently detected content related to eating disorders, child sexual abuse, and suicide but refused to remove it. For example, one 2021 internal company survey found that more than 8 percent of respondents aged 13 to 15 had seen someone harm themself or threaten to harm themself on Instagram during the past week. The brief also makes clear that Meta fully understood the addictive nature of its products, with plaintiffs citing a message from one user-experience researcher at the company that Instagram “is a drug” and that “We’re basically pushers.”
Perhaps most relevant to state child endangerment laws, the plaintiffs have alleged that Meta knew that millions of adults were using its platforms to inappropriately contact minors. According to their filing, an internal company audit found that Instagram had recommended 1.4 million potentially inappropriate adults to teenagers in a single day in 2022. The brief also details how Instagram’s policy was to not take action against sexual solicitation until a user had been caught engaging in the “trafficking of humans for sex” a whopping 17 times. As Instagram’s former head of safety and well-being, Vaishnavi Jayakumar, reportedly testified, “You could incur 16 violations for prostitution and sexual solicitation, and upon the seventeenth violation, your account would be suspended.”
The decision to expose adolescents to these threats was, according to the brief, an entirely knowing one. As plaintiffs allege, by 2019 Meta researchers were recommending that Instagram shield its young users from unwanted adult contact by making all teenage accounts private by default. Meta’s policy, legal, and well-being teams all echoed this recommendation, stressing that the policy would “increase teen safety.” But the primary response of Meta’s corporate leadership was to question how the policy would affect its profits. The company directed its growth team to analyze what a default private setting would do to engagement. They found it would have a negative effect—according to one employee quoted in the court filing, limiting “unwanted interactions” would likely cause a “potentially untenable problem with engagement and growth.” As a result, Meta failed to implement this safety recommendation until 2024, allowing billions of nonconsensual interactions between teenagers and adult strangers in the intervening years. So many of these encounters were inappropriate, according to plaintiffs, that Meta had its own acronym for them: “IIC,” short for “inappropriate interactions with children.”