Second Meta whistleblower testifies about failure to protect teens

Arturo Béjar voiced his frustration with the lack of action taken by Meta to address the harm experienced by teenagers on its platforms.

Arturo Béjar, a former engineering director at Meta, testified before the US Congress on Tuesday, sharing his personal experience of witnessing his own daughter suffer harassment on Instagram.

Béjar's testimony comes after previous whistleblower Frances Haugen revealed internal documents to news organisations and the US Senate, shedding light on Meta's safety issues and their impact on young users.

Béjar had previously worked as an engineering director at Facebook from 2009 to 2015, gaining recognition for his efforts to combat cyberbullying. During his initial tenure at Meta, he believed that progress was being made in addressing online harassment. However, when he returned to the company as a contractor in 2019, he was shocked to learn about the distressing experiences that his own daughter and her friends were facing on Instagram.

Speaking before the Senate subcommittee on Tuesday, Béjar voiced his frustration with the lack of action taken by Meta to address the harm experienced by teenagers on its platforms.

Béjar claimed that Meta's leadership was aware of the extensive harm experienced by its youngest users, but did not take adequate measures to address these issues.

In an interview with the Wall Street Journal, he shared allegations against the company, which he reiterated in his congressional testimony.

During the subcommittee hearing, Béjar revealed the troubling incidents his daughter had encountered on the platform.

"She and her friends began having awful experiences, including repeated unwanted sexual advances, harassment," he testified. "She reported these incidents to the company, and it did nothing," he said.

According to Béjar, a conversation with chief product officer Chris Cox brought to light Meta's knowledge of the harm teenagers were experiencing. He recounted how Cox acknowledged that he was already aware of the statistics related to the harm done to teens.

This revelation left Béjar disheartened, as it indicated that the company had knowledge of the issues but was not acting upon them.

On the same day that Frances Haugen testified before the Senate on 5th October 2021, Béjar emailed top Meta executives, including CEO Mark Zuckerberg, COO Sheryl Sandberg and Instagram CEO Adam Mosseri.

In the email, Béjar raised concerns he had already discussed with Sandberg, Mosseri and Cox. He presented a survey of 13-to-15-year-olds on Instagram, which revealed that 13% had received unwanted sexual advances on the platform in the last seven days, 26% had witnessed discrimination based on various identities and 21% felt worse about themselves due to others' posts on Instagram.

Béjar advocated for increased funding and prioritisation of efforts to understand the content that fuelled negative experiences for users, what percentage of that content violated policies, and what product changes could enhance the platform's user experience.

However, Béjar claimed that he never received a response from Zuckerberg or Sandberg.

In response to Béjar's claims, Meta released a statement expressing its commitment to safeguarding young people online. The company cited its support of user surveys, similar to the ones Béjar mentioned in his testimony, and the creation of tools such as anonymous notifications for reporting potentially hurtful content.

It added that the company's teams work daily to ensure the safety of young people online.

The Senate subcommittee's hearing on Tuesday emphasised the need for legislation to protect children online, particularly the Kids Online Safety Act (KOSA).

KOSA aims to hold tech companies responsible for the safe design of their products for children, addressing algorithm-driven toxic content.

The debate on how to address these challenges and protect children online is ongoing, and lawmakers continue to push for legislative solutions.