
Social Media Platforms

Meta announces new guidelines for teens on Instagram, Facebook

Implementation of the new policies means teens will see their accounts placed on the most restrictive settings on the platforms


Entrance sign to Meta headquarters campus in Menlo Park, California. (Los Angeles Blade file photo)

MENLO PARK, Calif. – Social media giant Meta announced Tuesday new content policies for teens restricting access to inappropriate content, including posts about suicide, self-harm and eating disorders, on both of its largest platforms, Instagram and Facebook.

In a post on the company blog, Meta wrote:

Take the example of someone posting about their ongoing struggle with thoughts of self-harm. This is an important story, and can help destigmatize these issues, but it's a complex topic and isn't necessarily suitable for all young people. Now, we'll start to remove this type of content from teens' experiences on Instagram and Facebook, as well as other types of age-inappropriate content. We already aim not to recommend this type of content to teens in places like Reels and Explore, and with these changes, we'll no longer show it to teens in Feed and Stories, even if it's shared by someone they follow.

“We want teens to have safe, age-appropriate experiences on our apps,” Meta said.

Implementation of the new policies means teens will see their accounts placed on the most restrictive settings on the platforms, provided the teen did not misrepresent their age when setting up the account.

Other changes the company announced include:

To help make sure teens are regularly checking their safety and privacy settings on Instagram, and are aware of the more private settings available, we're sending new notifications encouraging them to update their settings to a more private experience with a single tap. If teens choose to "Turn on recommended settings", we will automatically change their settings to restrict who can repost their content, tag or mention them, or include their content in Reels Remixes. We'll also ensure only their followers can message them and help hide offensive comments.


In November, California Attorney General Rob Bonta announced the public release of a largely unredacted copy of the federal complaint filed by a bipartisan coalition of 33 attorneys general against Meta Platforms, Inc. and affiliates (Meta) on October 24, 2023.

Co-led by Attorney General Bonta, the coalition is alleging that Meta designed and deployed harmful features on Instagram and Facebook that addict children and teens to their mental and physical detriment.

Highlights from the newly revealed portions of the complaint include the following:

  • Mark Zuckerberg personally vetoed Meta's proposed policy to ban image filters that simulated the effects of plastic surgery, despite internal pushback and an expert consensus that such filters harm users' mental health, especially for women and girls. Complaint ¶¶ 333-68.
  • Despite public statements that Meta does not prioritize the amount of time users spend on its social media platforms, internal documents show that Meta set explicit goals of increasing "time spent" and meticulously tracked engagement metrics, including among teen users. Complaint ¶¶ 134-150.
  • Meta continuously misrepresented that its social media platforms were safe, while internal data revealed that users experienced harms on its platforms at far higher rates. Complaint ¶¶ 458-507.
  • Meta knows that its social media platforms are used by millions of children under 13, including, at one point, around 30% of all 10–12-year-olds, and unlawfully collects their personal information. Meta does this despite Mark Zuckerberg testifying before Congress in 2021 that Meta "kicks off" children under 13. Complaint ¶¶ 642-811.

The Associated Press reported that critics charge Meta’s moves don’t go far enough.

“Today’s announcement by Meta is yet another desperate attempt to avoid regulation and an incredible slap in the face to parents who have lost their kids to online harms on Instagram,” said Josh Golin, executive director of the children’s online advocacy group Fairplay. “If the company is capable of hiding pro-suicide and eating disorder content, why have they waited until 2024 to announce these changes?”


Social Media Platforms

Instagram battles financial sextortion scams, blurs DM nudity

When sending or receiving these images, people will be directed to safety tips, developed with guidance from experts, about potential risks


Instagram app start up screen on iPhone/Los Angeles Blade graphic

Editor’s note: The following article is provided as a public service for readers regarding actions taken by Instagram, a social media platform, dealing with a subject of general interest and concern. The Los Angeles Blade has not verified the information contained herein.

By Meta Public & Media Relations | MENLO PARK, Calif. – Financial sextortion is a horrific crime. We've spent years working closely with experts, including those experienced in fighting these crimes, to understand the tactics scammers use to find and extort victims online, so we can develop effective ways to help stop them.

Today, we're sharing an overview of our latest work to tackle these crimes. This includes new tools we're testing to help protect people from sextortion and other forms of intimate image abuse, and to make it as hard as possible for scammers to find potential targets – on Meta's apps and across the internet. We're also testing new measures to support young people in recognizing and protecting themselves from sextortion scams.

These updates build on our longstanding work to help protect young people from unwanted or potentially harmful contact. We default teens into stricter message settings so they can't be messaged by anyone they're not already connected to, show Safety Notices to teens who are already in contact with potential scam accounts, and offer a dedicated option for people to report DMs that are threatening to share private images. We also supported the National Center for Missing and Exploited Children (NCMEC) in developing Take It Down, a platform that lets young people take back control of their intimate images and helps prevent them being shared online – taking power away from scammers.

Takeaways:

  • We're testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens.
  • We're also testing new ways to help people spot potential sextortion scams, encourage them to report and empower them to say no to anything that makes them feel uncomfortable.
  • We've started sharing more signals about sextortion accounts to other tech companies through Lantern, helping disrupt this criminal activity across the internet.

Introducing Nudity Protection in DMs

While people overwhelmingly use DMs to share what they love with their friends, family or favorite creators, sextortion scammers may also use private messages to share or ask for intimate images. To help address this, we'll soon start testing our new nudity protection feature in Instagram DMs, which blurs images detected as containing nudity and encourages people to think twice before sending nude images. This feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return.

Nudity protection will be turned on by default for teens under 18 globally, and we'll show a notification to adults encouraging them to turn it on.

When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they've changed their mind.

Screenshots showing a message reminding user to be cautious when sending sensitive photos.

Anyone who tries to forward a nude image they've received will see a message encouraging them to reconsider.

Screenshots showing a message encouraging them to reconsider when a nude image is forwarded.

When someone receives an image containing nudity, it will be automatically blurred under a warning screen, meaning the recipient isn't confronted with a nude image and they can choose whether or not to view it. We'll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat.

Screenshots showing an image automatically blurred under a warning screen when someone receives an image containing nudity.

When sending or receiving these images, people will be directed to safety tips, developed with guidance from experts, about the potential risks involved. These tips include reminders that people may screenshot or forward images without your knowledge, that your relationship to the person may change in the future, and that you should review profiles carefully in case they're not who they say they are. They also link to a range of resources, including Meta's Safety Center, support helplines, StopNCII.org for those over 18, and Take It Down for those under 18.

Screenshots showing safety tips about the potential risks involved when sending or receiving these images.

Nudity protection uses on-device machine learning to analyze whether an image sent in a DM on Instagram contains nudity. Because the images are analyzed on the device itself, nudity protection will work in end-to-end encrypted chats, where Meta won't have access to these images – unless someone chooses to report them to us.
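To illustrate the design described here, below is a minimal, hypothetical Python sketch of how an on-device classifier could gate the display of an incoming DM image before anything leaves the device. The classify_nudity_on_device function, the threshold value, and the data classes are illustrative assumptions based only on the description above; they are not Meta's actual API or implementation.

```python
from dataclasses import dataclass

NUDITY_THRESHOLD = 0.8  # assumed confidence cutoff; not a documented value


@dataclass
class RenderDecision:
    blur: bool                 # show the image behind a warning screen?
    show_safety_notice: bool   # offer "no pressure to respond", block and report options


def classify_nudity_on_device(image_bytes: bytes) -> float:
    """Stand-in for an on-device ML model returning a nudity score in [0, 1].

    Running inference locally means the plaintext image never leaves the
    device, which is why such a feature can work inside end-to-end
    encrypted chats without the service ever seeing the image.
    """
    return 0.0  # placeholder: a real deployment would run a local model here


def decide_rendering(image_bytes: bytes, nudity_protection_enabled: bool) -> RenderDecision:
    """Gate how an incoming DM image is displayed, entirely on the device."""
    if not nudity_protection_enabled:
        return RenderDecision(blur=False, show_safety_notice=False)
    flagged = classify_nudity_on_device(image_bytes) >= NUDITY_THRESHOLD
    # Flagged images stay viewable behind a warning screen; the recipient
    # chooses whether to reveal them and can block or report the sender.
    return RenderDecision(blur=flagged, show_safety_notice=flagged)
```

Keeping the decision entirely client-side is the key design choice: the classifier's input and output never need to be transmitted, so the feature and end-to-end encryption can coexist.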

"Companies have a responsibility to ensure the protection of minors who use their platforms. Meta's proposed device-side safety measures within its encrypted environment is encouraging. We are hopeful these new measures will increase reporting by minors and curb the circulation of online child exploitation." – John Shehan, Senior Vice President, National Center for Missing & Exploited Children.

"As an educator, parent, and researcher on adolescent online behavior, I applaud Meta's new feature that handles the exchange of personal nude content in a thoughtful, nuanced, and appropriate way. It reduces unwanted exposure to potentially traumatic images, gently introduces cognitive dissonance to those who may be open to sharing nudes, and educates people about the potential downsides involved. Each of these should help decrease the incidence of sextortion and related harms, helping to keep young people safe online." – Dr. Sameer Hinduja, Co-Director of the Cyberbullying Research Center and Faculty Associate at the Berkman Klein Center at Harvard University.

Preventing Potential Scammers from Connecting with Teens

We take severe action when we become aware of people engaging in sextortion: we remove their account, take steps to prevent them from creating new ones and, where appropriate, report them to the NCMEC and law enforcement. Our expert teams also work to investigate and disrupt networks of these criminals, disable their accounts and report them to NCMEC and law enforcement – including several networks in the last year alone.

Now, we're also developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior. While these signals aren't necessarily evidence that an account has broken our rules, we're taking precautionary steps to help prevent these accounts from finding and interacting with teen accounts. This builds on the work we already do to prevent other potentially suspicious accounts from finding and interacting with teens.

One way we're doing this is by making it even harder for potential sextortion accounts to message or interact with people. Now, any message requests potential sextortion accounts try to send will go straight to the recipient's hidden requests folder, meaning they won't be notified of the message and never have to see it. For those who are already chatting to potential scam or sextortion accounts, we show Safety Notices encouraging them to report any threats to share their private images, and reminding them that they can say no to anything that makes them feel uncomfortable.

For teens, we're going even further. We already restrict adults from starting DM chats with teens they're not connected to, and in January we announced stricter messaging defaults for teens under 16 (under 18 in certain countries), meaning they can only be messaged by people they're already connected to – no matter how old the sender is. Now, we won't show the "Message" button on a teen's profile to potential sextortion accounts, even if they're already connected. We're also testing hiding teens from these accounts in people's follower, following and like lists, and making it harder for them to find teen accounts in Search results.
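As a rough illustration of the precautionary routing described in the preceding paragraphs, the hypothetical sketch below shows how message requests and the "Message" button might be gated once an account is flagged as potentially engaging in sextortion. The risk flag, folder names, and rules here are assumptions drawn only from this announcement, not Meta's actual systems.

```python
from dataclasses import dataclass


@dataclass
class Account:
    account_id: str
    is_teen: bool
    flagged_potential_sextortion: bool  # assumed flag, set elsewhere from behavioral signals


def route_message_request(sender: Account, recipient: Account, already_connected: bool) -> str:
    """Decide where a new message request lands for the recipient."""
    if sender.flagged_potential_sextortion:
        # Requests from flagged accounts go straight to the hidden requests
        # folder: the recipient is not notified and never has to see them.
        return "hidden_requests"
    if recipient.is_teen and not already_connected:
        # Teens can only be messaged by people they are already connected to.
        return "blocked"
    return "inbox"


def show_message_button(viewer: Account, profile_owner: Account) -> bool:
    """Whether the 'Message' button appears on a teen's profile for this viewer."""
    if profile_owner.is_teen and viewer.flagged_potential_sextortion:
        return False  # hidden even if the two accounts are already connected
    return True
```

The point of treating the flag as precautionary rather than as a rule violation is that borderline accounts lose reach toward teens without any content judgment being required first.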

New Resources for People Who May Have Been Approached by Scammers

We're testing new pop-up messages for people who may have interacted with an account we've removed for sextortion. The message will direct them to our expert-backed resources, including our Stop Sextortion Hub, support helplines, the option to reach out to a friend, StopNCII.org for those over 18, and Take It Down for those under 18.

We're also adding new child safety helplines from around the world into our in-app reporting flows. This means when teens report relevant issues – such as nudity, threats to share private images or sexual exploitation or solicitation – we'll direct them to local child safety helplines where available.

Fighting Sextortion Scams Across the Internet

In November, we announced we were founding members of Lantern, a program run by the Tech Coalition that enables technology companies to share signals about accounts and behaviors that violate their child safety policies.

This industry cooperation is critical, because predators don't limit themselves to just one platform – and the same is true of sextortion scammers. These criminals target victims across the different apps they use, often moving their conversations from one app to another. That's why we've started to share more sextortion-specific signals to Lantern, to build on this important cooperation and try to stop sextortion scams not just on individual platforms, but across the whole internet.

*****************************************************************************************

The preceding article was previously published by Instagram here: (Link)


Social Media Platforms

Social Media platforms still lagging on critical LGBTQ+ protections

All Social Media platforms should have policy prohibitions against harmful so-called "Conversion Therapy" content

Published

on

Photo by Dan Balinovic

By Leanna Garfield & Jenni Olson | NEW YORK – GLAAD, the world's largest lesbian, gay, bisexual, transgender, and queer (LGBTQ) media advocacy organization, released new reports documenting the current state of two important LGBTQ safety policy protections on social media platforms.

The reports show how numerous platforms and apps (including, most recently, Snapchat) are increasingly adopting two LGBTQ safety protections that GLAAD's Social Media Safety Program advocates as best practices for the industry: firstly, expressly stated policies prohibiting targeted misgendering and deadnaming of transgender and nonbinary people (i.e. intentionally using the wrong pronouns or using a former name to express contempt); and secondly, expressly stated policies prohibiting the promotion and advertising of harmful so-called "conversion therapy" (a widely condemned practice attempting to change an LGBTQ person's sexual orientation or gender identity, which has been banned or restricted in dozens of countries and US states).


Major companies that have such LGBTQ policy safeguards include: TikTok, Twitch, Pinterest, NextDoor, and now Snapchat.

Companies lagging behind and failing to provide such protections include: YouTube, BlueSky, LinkedIn, Reddit, and Mastodon. X/Twitter and Meta's Instagram, Facebook, and Threads have received partial credit due to "self-reporting" requirements.


"Now is the time for all social media platforms and tech companies to step up and prioritize LGBTQ safety," said GLAAD President and CEO Sarah Kate Ellis. "We urge all social media platforms to adopt, and enforce, these policies and to protect LGBTQ people – and everyone."

Companies will have another opportunity to be acknowledged for updating their policies later this year. To be released this summer, GLAAD's annual Social Media Safety Index report will feature an updated version of the charts.

Conversion Therapy


The widely debunked and harmful practice of so-called "conversion therapy" falsely claims to change an LGBTQ person's sexual orientation, gender identity, or gender expression, and has been condemned by all major medical, psychiatric, and psychological organizations including the American Medical Association and American Psychological Association. Globally, there has been a growing movement to ban "conversion therapy" at the national level. As of February 2024, 14 countries have such bans, including Canada, France, Germany, Malta, Ecuador, Brazil, Taiwan, and New Zealand. In the United States, 22 states and the District of Columbia have restrictions in place.

Expressing concurrence with GLAAD's Social Media Safety Program guidance, IFTAS (the non-profit supporting the Fediverse moderator community) stated in a February 2024 announcement: "Due to the widespread and insidious nature of expressing anti-transgender sentiments in bad faith, it's imperative to have specific policy addressing this issue." Further explaining the rationale behind such policies, the IFTAS announcement continues: "This approach is considered a best practice for two key reasons: it offers clear guidance to users, and it assists moderators in recognizing and understanding the intent behind such statements. It's important to reiterate that the focus is not about accidentally getting someone's pronouns wrong. Rather, our concern centers on deliberate and targeted acts of hate and harassment."

Conveying appreciation to companies but also highlighting the need for policy enforcement, GLAAD's new reporting notes that while the policies mark significant progress: "These new policy additions do not solve the extremely significant other related issue of policy enforcement (a realm in which many platforms are known to be doing a woefully inadequate job)."

There is broad consensus and building momentum toward protecting LGBTQ people, and especially LGBTQ youth, from this dangerous practice. However, "conversion therapy" disinformation, extremist scare-tactic narratives, and the profit-driven promotion of such services continue to be widespread on social media platforms, via both organic content and advertising. And, as a December 2023 Trevor Project report reveals, "conversion therapy" continues to happen in nearly every US state.

Thankfully, more tech companies and social media platforms are taking leadership to address the spread of content that promotes and advertises "conversion therapy." In December 2023, the social platform Post added an express prohibition of such content to their policies, and in January 2024 Spoutible did the same. That same month, in response to key stakeholder guidance from GLAAD, IFTAS (the non-profit supporting the Fediverse moderator community) crafted sample policy language and implemented an "IFTAS LGBTQ+ Safety Server Pledge" system for the Fediverse, in which servers can sign on confirming they have incorporated a policy prohibiting both the promotion of "conversion therapy" content and targeted misgendering and deadnaming. In February, Snapchat also added both prohibitions into their Hateful Content and Harmful False or Deceptive Information community guidelines policies.

GLAAD President and CEO Sarah Kate Ellis acknowledged this recent progress, saying to The Advocate: "Adopting new policies prohibiting so-called 'conversion therapy' content puts these companies ahead of so many others. GLAAD urges all social media platforms to adopt, and enforce, this policy and protect their LGBTQ users."

A January 2024 report from the Global Project Against Hate and Extremism (GPAHE) illuminates how many social media companies and search engines are failing to mitigate harmful content and ads promoting "conversion therapy." The report outlines the enormous amount of work that needs to be done, and offers many examples of simple solutions that platforms can and should urgently implement. Recommendations from the report are listed below.

In February 2022, GLAAD worked with TikTok to have the platform add an explicit prohibition of content promoting "conversion therapy." TikTok updated its community guidelines to include the following: "Adding clarity on the types of hateful ideologies prohibited on our platform. This includes … content that supports or promotes conversion therapy programs. Though these ideologies have long been prohibited on TikTok, we've heard from creators and civil society organizations that it's important to be explicit in our Community Guidelines."

In 2022, GLAAD also urged both YouTube and Twitter (now X) to add an express prohibition of "conversion therapy" into their content and ad guidelines. While X does not currently have such a policy, YouTube, with the assistance of its AI systems, does mitigate "conversion therapy" content by showing an information panel from the Trevor Project that reads: "Conversion therapy, sometimes referred to as 'reparative therapy,' is any of several dangerous and discredited practices aimed at changing an individual's sexual orientation or gender identity." However, unlike TikTok and Meta, YouTube does not include an explicit prohibition in its Hate Speech Policy.

Meta's Facebook and Instagram (and by extension Threads, which is guided by Instagram's policies) currently do have such a prohibition (against: "Content explicitly providing or offering to provide products or services that aim to change people's sexual orientation or gender identity."). However, it is listed separately from the company's standard three tiers of content moderation consideration as requiring "additional information and/or context to enforce." GLAAD has recommended that it be elevated to a higher priority tier. In addition to this content policy, Meta's Unrealistic Outcomes ad standards policy also prohibits: "Conversion therapy products or services. This includes but is not limited to: Products aimed at offering or facilitating conversion therapy such as books, apps or audiobooks; Services aimed at offering or facilitating conversion therapy such as talk therapy, conversion ministries or clinical therapy; Testimonials of conversion therapy, specifically when posted or boosted by organizations that arrange and provide such services."

Among other platforms, it is notable that the community guidelines of both Pinterest and NextDoor include a prohibition against content promoting or supporting "conversion therapy and related programs," while Twitch's community guidelines expressly state that "regardless of your intent, you may not: Encourage the use of or generally endorsing sexual orientation conversion therapy." As mentioned above, Post and Spoutible also have amended their policies, with Spoutible's new guidelines being the most extensive:

Prohibited Content: Any content that promotes, endorses, or provides resources for 'conversion therapy.' Content that claims sexual orientation or gender identity can be changed or 'cured.' Advertising or soliciting services for 'conversion therapy.' Testimonials supporting or promoting the effectiveness of 'conversion therapy.'

Spoutible's policy also thoughtfully outlines these exceptions:

Content that discusses 'conversion therapy' in a historical or educational context may be allowed, provided it does not advocate for or glorify the practice. Personal stories shared by survivors of 'conversion therapy,' which do not promote the practice, may be permissible.

In addition to GLAADā€™s advocacy efforts advising platforms to add prohibitions against content promoting ā€œconversion therapyā€ to their community guidelines, we also urge these companies to effectively enforce these policies.

To clarify even further, all platforms should add express public-facing language prohibiting the promotion of "conversion therapy" to both their community guidelines and advertising services policies. While some platforms have described off the record that "conversion therapy" material is prohibited under the umbrella of other policies (policies prohibiting hateful ideologies, for instance), the prohibition of "conversion therapy" promotion should be explicitly stated publicly in their community guidelines and other policies.

When such content is reported, it's also important for moderators to make judgments about the content in context, and distinguish between harmful content promoting "conversion therapy" versus content that mentions or discusses "conversion therapy" (i.e. counter-speech). As a 2020 Reuters story details, social media platforms can provide a space for "conversion therapy" survivors to share their experiences and find community.

GLAAD also urges all platforms to review and follow the below recommendations from the Global Project Against Hate and Extremism (GPAHE):

To protect their users, tech companies must:

  • Use common sense when evaluating whether content violates rules on conversion therapy and remember that it is dangerous, and sometimes deadly, to allow pro-conversion therapy material to surface. It is quintessential medical disinformation.
  • Invest in non-English, non-American cultural and language resources. The disparity in the findings for non-English users is stark.
  • Elevate authoritative resources in the language being used for the terms found in the appendix.
  • Incorporate "same-sex attraction" and "unwanted same-sex attraction" into their algorithm that moderates conversion therapy content and elevate authoritative content.
  • Create or expand the use of authoritative information boxes about conversion therapy, preferably in the language being used.
  • All online systems must keep up with the constant rebranding and use of new terms, in all languages, that the conversion therapy industry uses.
  • Refrain from defaulting to English content in non-English speaking countries where possible, and if this is the only content available it must be authoritative and translatable.
  • All companies must avail themselves of civil society and subject matter experts to keep their systems current.
  • Additional recommendations from previous GPAHE research.

Source: Conversion Therapy Online: The Ecosystem in 2023 (Global Project Against Hate & Extremism, Jan 2024)

An earlier version of this overview first appeared in Tech Policy Press and was adapted from the 2023 GLAAD Social Media Safety Index report. The next report is forthcoming in the summer of 2024.

The preceding article was previously published by GLAAD and is republished by permission.
