Social Media Platforms
Meta announces new guidelines for teens on Instagram, Facebook
Implementation of the new policies means teens will see their accounts placed on the most restrictive settings on the platforms
MENLO PARK, Calif. – Social media giant Meta announced Tuesday new content policies for teens restricting access to inappropriate content, including posts about suicide, self-harm and eating disorders, on both of its largest platforms, Instagram and Facebook.
In a post on the company blog, Meta wrote:
Take the example of someone posting about their ongoing struggle with thoughts of self-harm. This is an important story, and can help destigmatize these issues, but it's a complex topic and isn't necessarily suitable for all young people. Now, we'll start to remove this type of content from teens' experiences on Instagram and Facebook, as well as other types of age-inappropriate content. We already aim not to recommend this type of content to teens in places like Reels and Explore, and with these changes, we'll no longer show it to teens in Feed and Stories, even if it's shared by someone they follow.
“We want teens to have safe, age-appropriate experiences on our apps,” Meta said.
Implementation of the new policies means teens will see their accounts placed on the most restrictive settings on the platforms, provided the teen did not lie about their age when setting up the account.
Other changes the company announced include:
To help make sure teens are regularly checking their safety and privacy settings on Instagram, and are aware of the more private settings available, we're sending new notifications encouraging them to update their settings to a more private experience with a single tap. If teens choose to "Turn on recommended settings", we will automatically change their settings to restrict who can repost their content, tag or mention them, or include their content in Reels Remixes. We'll also ensure only their followers can message them and help hide offensive comments.
In November, California Attorney General Rob Bonta announced the public release of a largely unredacted copy of the federal complaint filed by a bipartisan coalition of 33 attorneys general against Meta Platforms, Inc. and affiliates (Meta) on October 24, 2023.
Co-led by Attorney General Bonta, the coalition is alleging that Meta designed and deployed harmful features on Instagram and Facebook that addict children and teens to their mental and physical detriment.
Highlights from the newly revealed portions of the complaint include the following:
- Mark Zuckerberg personally vetoed Meta's proposed policy to ban image filters that simulated the effects of plastic surgery, despite internal pushback and an expert consensus that such filters harm users' mental health, especially for women and girls. Complaint ¶¶ 333-68.
- Despite public statements that Meta does not prioritize the amount of time users spend on its social media platforms, internal documents show that Meta set explicit goals of increasing "time spent" and meticulously tracked engagement metrics, including among teen users. Complaint ¶¶ 134-150.
- Meta continuously misrepresented that its social media platforms were safe, while internal data revealed that users experienced harms on its platforms at far higher rates. Complaint ¶¶ 458-507.
- Meta knows that its social media platforms are used by millions of children under 13, including, at one point, around 30% of all 10- to 12-year-olds, and unlawfully collects their personal information. Meta does this despite Mark Zuckerberg testifying before Congress in 2021 that Meta "kicks off" children under 13. Complaint ¶¶ 642-811.
The Associated Press reported that critics charge Meta’s moves don’t go far enough.
“Today’s announcement by Meta is yet another desperate attempt to avoid regulation and an incredible slap in the face to parents who have lost their kids to online harms on Instagram,” said Josh Golin, executive director of the children’s online advocacy group Fairplay. “If the company is capable of hiding pro-suicide and eating disorder content, why have they waited until 2024 to announce these changes?”
Social Media Platforms
Instagram battles financial sextortion scams, blurs DM nudity
When sending or receiving these images, people will be directed to safety tips, developed with guidance from experts, about potential risks
Editor’s note: The following article is provided as a public service for readers regarding actions taken by Instagram, a social media platform, dealing with a subject of general interest and concern. The Los Angeles Blade has not verified the information contained herein.
By Meta Public & Media Relations | MENLO PARK, Calif. – Financial sextortion is a horrific crime. We've spent years working closely with experts, including those experienced in fighting these crimes, to understand the tactics scammers use to find and extort victims online, so we can develop effective ways to help stop them.
Today, we're sharing an overview of our latest work to tackle these crimes. This includes new tools we're testing to help protect people from sextortion and other forms of intimate image abuse, and to make it as hard as possible for scammers to find potential targets, on Meta's apps and across the internet. We're also testing new measures to support young people in recognizing and protecting themselves from sextortion scams.
These updates build on our longstanding work to help protect young people from unwanted or potentially harmful contact. We default teens into stricter message settings so they can't be messaged by anyone they're not already connected to, show Safety Notices to teens who are already in contact with potential scam accounts, and offer a dedicated option for people to report DMs that are threatening to share private images. We also supported the National Center for Missing and Exploited Children (NCMEC) in developing Take It Down, a platform that lets young people take back control of their intimate images and helps prevent them being shared online, taking power away from scammers.
Takeaways:
- We're testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens.
- We're also testing new ways to help people spot potential sextortion scams, encourage them to report and empower them to say no to anything that makes them feel uncomfortable.
- We've started sharing more signals about sextortion accounts to other tech companies through Lantern, helping disrupt this criminal activity across the internet.
Introducing Nudity Protection in DMs
While people overwhelmingly use DMs to share what they love with their friends, family or favorite creators, sextortion scammers may also use private messages to share or ask for intimate images. To help address this, we'll soon start testing our new nudity protection feature in Instagram DMs, which blurs images detected as containing nudity and encourages people to think twice before sending nude images. This feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return.
Nudity protection will be turned on by default for teens under 18 globally, and we'll show a notification to adults encouraging them to turn it on.
When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they've changed their mind.
Anyone who tries to forward a nude image they've received will see a message encouraging them to reconsider.
When someone receives an image containing nudity, it will be automatically blurred under a warning screen, meaning the recipient isn't confronted with a nude image and they can choose whether or not to view it. We'll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat.
When sending or receiving these images, people will be directed to safety tips, developed with guidance from experts, about the potential risks involved. These tips include reminders that people may screenshot or forward images without your knowledge, that your relationship to the person may change in the future, and that you should review profiles carefully in case they're not who they say they are. They also link to a range of resources, including Meta's Safety Center, support helplines, StopNCII.org for those over 18, and Take It Down for those under 18.
Nudity protection uses on-device machine learning to analyze whether an image sent in a DM on Instagram contains nudity. Because the images are analyzed on the device itself, nudity protection will work in end-to-end encrypted chats, where Meta won't have access to these images unless someone chooses to report them to us.
"Companies have a responsibility to ensure the protection of minors who use their platforms. Meta's proposed device-side safety measures within its encrypted environment is encouraging. We are hopeful these new measures will increase reporting by minors and curb the circulation of online child exploitation." – John Shehan, Senior Vice President, National Center for Missing & Exploited Children.
"As an educator, parent, and researcher on adolescent online behavior, I applaud Meta's new feature that handles the exchange of personal nude content in a thoughtful, nuanced, and appropriate way. It reduces unwanted exposure to potentially traumatic images, gently introduces cognitive dissonance to those who may be open to sharing nudes, and educates people about the potential downsides involved. Each of these should help decrease the incidence of sextortion and related harms, helping to keep young people safe online." – Dr. Sameer Hinduja, Co-Director of the Cyberbullying Research Center and Faculty Associate at the Berkman Klein Center at Harvard University.
Preventing Potential Scammers from Connecting with Teens
We take severe action when we become aware of people engaging in sextortion: we remove their account, take steps to prevent them from creating new ones and, where appropriate, report them to the NCMEC and law enforcement. Our expert teams also work to investigate and disrupt networks of these criminals, disable their accounts and report them to NCMEC and law enforcement, including several networks in the last year alone.
Now, we're also developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior. While these signals aren't necessarily evidence that an account has broken our rules, we're taking precautionary steps to help prevent these accounts from finding and interacting with teen accounts. This builds on the work we already do to prevent other potentially suspicious accounts from finding and interacting with teens.
One way we're doing this is by making it even harder for potential sextortion accounts to message or interact with people. Now, any message requests potential sextortion accounts try to send will go straight to the recipient's hidden requests folder, meaning they won't be notified of the message and never have to see it. For those who are already chatting with potential scam or sextortion accounts, we show Safety Notices encouraging them to report any threats to share their private images, and reminding them that they can say no to anything that makes them feel uncomfortable.
For teens, we're going even further. We already restrict adults from starting DM chats with teens they're not connected to, and in January we announced stricter messaging defaults for teens under 16 (under 18 in certain countries), meaning they can only be messaged by people they're already connected to, no matter how old the sender is. Now, we won't show the "Message" button on a teen's profile to potential sextortion accounts, even if they're already connected. We're also testing hiding teens from these accounts in people's follower, following and like lists, and making it harder for them to find teen accounts in Search results.
New Resources for People Who May Have Been Approached by Scammers
We're testing new pop-up messages for people who may have interacted with an account we've removed for sextortion. The message will direct them to our expert-backed resources, including our Stop Sextortion Hub, support helplines, the option to reach out to a friend, StopNCII.org for those over 18, and Take It Down for those under 18.
We're also adding new child safety helplines from around the world into our in-app reporting flows. This means when teens report relevant issues, such as nudity, threats to share private images, or sexual exploitation or solicitation, we'll direct them to local child safety helplines where available.
Fighting Sextortion Scams Across the Internet
In November, we announced we were founding members of Lantern, a program run by the Tech Coalition that enables technology companies to share signals about accounts and behaviors that violate their child safety policies.
This industry cooperation is critical, because predators don't limit themselves to just one platform, and the same is true of sextortion scammers. These criminals target victims across the different apps they use, often moving their conversations from one app to another. That's why we've started to share more sextortion-specific signals to Lantern, to build on this important cooperation and try to stop sextortion scams not just on individual platforms, but across the whole internet.
*****************************************************************************************
The preceding article was previously published by Instagram here: (Link)
Social Media Platforms
Social media platforms still lagging on critical LGBTQ+ protections
All social media platforms should have policy prohibitions against harmful so-called "conversion therapy" content
By Leanna Garfield & Jenni Olson | NEW YORK – GLAAD, the world's largest lesbian, gay, bisexual, transgender, and queer (LGBTQ) media advocacy organization, released new reports documenting the current state of two important LGBTQ safety policy protections on social media platforms.
The reports show how numerous platforms and apps (including, most recently, Snapchat) are increasingly adopting two LGBTQ safety protections that GLAAD's Social Media Safety Program advocates as best practices for the industry: first, expressly stated policies prohibiting targeted misgendering and deadnaming of transgender and nonbinary people (i.e. intentionally using the wrong pronouns or using a former name to express contempt); and second, expressly stated policies prohibiting the promotion and advertising of harmful so-called "conversion therapy" (a widely condemned practice attempting to change an LGBTQ person's sexual orientation or gender identity, which has been banned or restricted in dozens of countries and US states).
Major companies that have such LGBTQ policy safeguards include: TikTok, Twitch, Pinterest, NextDoor, and now Snapchat.
Companies lagging behind and failing to provide such protections include: YouTube, BlueSky, LinkedIn, Reddit, and Mastodon. X/Twitter and Meta's Instagram, Facebook, and Threads have received partial credit due to "self-reporting" requirements.
"Now is the time for all social media platforms and tech companies to step up and prioritize LGBTQ safety," said GLAAD President and CEO Sarah Kate Ellis. "We urge all social media platforms to adopt, and enforce, these policies and to protect LGBTQ people – and everyone."
Companies will have another opportunity to be acknowledged for updating their policies later this year. To be released this summer, GLAAD's annual Social Media Safety Index report will feature an updated version of the charts.
Conversion Therapy
The widely debunked and harmful practice of so-called "conversion therapy" falsely claims to change an LGBTQ person's sexual orientation, gender identity, or gender expression, and has been condemned by all major medical, psychiatric, and psychological organizations, including the American Medical Association and American Psychological Association. Globally, there has been a growing movement to ban "conversion therapy" at the national level. As of February 2024, 14 countries have such bans, including Canada, France, Germany, Malta, Ecuador, Brazil, Taiwan, and New Zealand. In the United States, 22 states and the District of Columbia have restrictions in place.
Expressing concurrence with GLAAD's Social Media Safety Program guidance, IFTAS (the non-profit supporting the Fediverse moderator community) stated in a February 2024 announcement: "Due to the widespread and insidious nature of expressing anti-transgender sentiments in bad faith, it's imperative to have specific policy addressing this issue." Further explaining the rationale behind such policies, the IFTAS announcement continues: "This approach is considered a best practice for two key reasons: it offers clear guidance to users, and it assists moderators in recognizing and understanding the intent behind such statements. It's important to reiterate that the focus is not about accidentally getting someone's pronouns wrong. Rather, our concern centers on deliberate and targeted acts of hate and harassment."
Conveying appreciation to companies but also highlighting the need for policy enforcement, GLAAD's new reporting notes that while the policies mark significant progress: "These new policy additions do not solve the extremely significant other related issue of policy enforcement (a realm in which many platforms are known to be doing a woefully inadequate job)."
There is broad consensus and building momentum toward protecting LGBTQ people, and especially LGBTQ youth, from this dangerous practice. However, "conversion therapy" disinformation, extremist scare-tactic narratives, and the profit-driven promotion of such services continue to be widespread on social media platforms, via both organic content and advertising. And, as a December 2023 Trevor Project report reveals, "conversion therapy" continues to happen in nearly every US state.
Thankfully, more tech companies and social media platforms are taking leadership to address the spread of content that promotes and advertises "conversion therapy." In December 2023, the social platform Post added an express prohibition of such content to its policies, and in January 2024 Spoutible did the same. That same month, in response to key stakeholder guidance from GLAAD, IFTAS crafted sample policy language and implemented an "IFTAS LGBTQ+ Safety Server Pledge" system for the Fediverse, in which servers can sign on to confirm they have incorporated a policy prohibiting both the promotion of "conversion therapy" content and targeted misgendering and deadnaming. In February, Snapchat also added both prohibitions into its Hateful Content and Harmful False or Deceptive Information community guidelines policies.
GLAAD President and CEO Sarah Kate Ellis acknowledged this recent progress, saying to The Advocate: "Adopting new policies prohibiting so-called 'conversion therapy' content puts these companies ahead of so many others. GLAAD urges all social media platforms to adopt, and enforce, this policy and protect their LGBTQ users."
A January 2024 report from the Global Project Against Hate and Extremism (GPAHE) illuminates how many social media companies and search engines are failing to mitigate harmful content and ads promoting "conversion therapy." The report outlines the enormous amount of work that needs to be done, and offers many examples of simple solutions that platforms can and should urgently implement. Recommendations from the report are listed below.
In February 2022, GLAAD worked with TikTok to have the platform add an explicit prohibition of content promoting "conversion therapy." TikTok updated its community guidelines to include the following: "Adding clarity on the types of hateful ideologies prohibited on our platform. This includes ... content that supports or promotes conversion therapy programs. Though these ideologies have long been prohibited on TikTok, we've heard from creators and civil society organizations that it's important to be explicit in our Community Guidelines."
In 2022, GLAAD also urged both YouTube and Twitter (now X) to add an express prohibition of "conversion therapy" into their content and ad guidelines. While X does not currently have such a policy, YouTube, with the assistance of its AI systems, does mitigate "conversion therapy" content by showing an information panel from the Trevor Project that reads: "Conversion therapy, sometimes referred to as 'reparative therapy,' is any of several dangerous and discredited practices aimed at changing an individual's sexual orientation or gender identity." However, unlike TikTok and Meta, YouTube does not include an explicit prohibition in its Hate Speech Policy.
Meta's Facebook and Instagram (and by extension Threads, which is guided by Instagram's policies) currently do have such a prohibition (against "Content explicitly providing or offering to provide products or services that aim to change people's sexual orientation or gender identity."). However, it is listed separately from the company's standard three tiers of content moderation consideration as requiring "additional information and/or context to enforce." GLAAD has recommended that it be elevated to a higher priority tier. In addition to this content policy, Meta's Unrealistic Outcomes ad standards policy also prohibits: "Conversion therapy products or services. This includes but is not limited to: Products aimed at offering or facilitating conversion therapy such as books, apps or audiobooks; Services aimed at offering or facilitating conversion therapy such as talk therapy, conversion ministries or clinical therapy; Testimonials of conversion therapy, specifically when posted or boosted by organizations that arrange and provide such services."
Among other platforms, it is notable that the community guidelines of both Pinterest and NextDoor include a prohibition against content promoting or supporting "conversion therapy and related programs." Twitch's community guidelines expressly state that "regardless of your intent, you may not: Encourage the use of or generally endorsing sexual orientation conversion therapy." As mentioned above, Post and Spoutible have also amended their policies, with Spoutible's new guidelines being the most extensive:
Prohibited Content: Any content that promotes, endorses, or provides resources for "conversion therapy." Content that claims sexual orientation or gender identity can be changed or "cured." Advertising or soliciting services for "conversion therapy." Testimonials supporting or promoting the effectiveness of "conversion therapy."
Spoutible's policy also thoughtfully outlines these exceptions:
Content that discusses "conversion therapy" in a historical or educational context may be allowed, provided it does not advocate for or glorify the practice. Personal stories shared by survivors of "conversion therapy," which do not promote the practice, may be permissible.
In addition to GLAAD's advocacy efforts advising platforms to add prohibitions against content promoting "conversion therapy" to their community guidelines, we also urge these companies to effectively enforce these policies.
To clarify even further, all platforms should add express public-facing language prohibiting the promotion of "conversion therapy" to both their community guidelines and advertising services policies. While some platforms have described off the record that "conversion therapy" material is prohibited under the umbrella of other policies (policies prohibiting hateful ideologies, for instance), the prohibition of "conversion therapy" promotion should be explicitly stated publicly in their community guidelines and other policies.
When such content is reported, it's also important for moderators to make judgments about the content in context, and distinguish between harmful content promoting "conversion therapy" and content that merely mentions or discusses "conversion therapy" (i.e. counter-speech). As a 2020 Reuters story details, social media platforms can provide a space for "conversion therapy" survivors to share their experiences and find community.
GLAAD also urges all platforms to review and follow the below recommendations from the Global Project on Hate & Extremism (GPAHE):
To protect their users, tech companies must:
- Use common sense when evaluating whether content violates rules on conversion therapy and remember that it is dangerous, and sometimes deadly, to allow pro-conversion therapy material to surface. It is quintessential medical disinformation.
- Invest in non-English, non-American cultural and language resources. The disparity in the findings for non-English users is stark.
- Elevate authoritative resources in the language being used for the terms found in the appendix.
- Incorporate "same-sex attraction" and "unwanted same-sex attraction" into their algorithm that moderates conversion therapy content and elevate authoritative content.
- Create or expand the use of authoritative information boxes about conversion therapy, preferably in the language being used.
- All online systems must keep up with the constant rebranding and use of new terms, in all languages, that the conversion therapy industry uses.
- Refrain from defaulting to English content in non-English speaking countries where possible, and if this is the only content available it must be authoritative and translatable.
- All companies must avail themselves of civil society and subject matter experts to keep their systems current.
- Additional recommendations from previous GPAHE research.
Source: Conversion Therapy Online: The Ecosystem in 2023 (Global Project Against Hate & Extremism, Jan 2024)
An earlier version of this overview first appeared in Tech Policy Press and was adapted from the 2023 GLAAD Social Media Safety Index report. The next report is forthcoming in the summer of 2024.
The preceding article was previously published by GLAAD and is republished by permission.