Meta's flawed system for "community safety" has deeply affected influencers, brands and other legitimate creators. These actions have harmed the reputation, marketing, brand position, emotional well-being and income of scores of individuals.
We pour enormous amounts of time (often time away from our families), effort and care into cultivating a place on social media. As creators, artists, performers and brands we do this not just to increase our reach and influence, but to make friends and build connections with people we would never have the privilege of meeting otherwise. We do this to capture and record memories, growth, progress and achievement along our respective journeys.
For a company whose stated mission is to bring people closer to the people and things they love, Meta's decisions and recent actions to wrongfully suspend and remove the accounts of legitimate people and brands are a direct contradiction of its stated purpose as an organization.
We certainly support efforts to make a safer online community. We are loud advocates for protecting minors online, and we appreciate the complexity of monitoring a network of billions of users. But the results of the last few weeks have been abundantly clear: the current system is highly flawed, the 'margin of acceptable error' is far too large, and there is no reliable recourse for incorrectly flagged accounts. To have all our hard work, memories, friends, connections and, for some, the primary source of income wrongfully ripped away with no real means of recovering it is not only shameful, it is a violation of consumer trust and of the Terms of Service made with subscribers, especially while the truly problematic accounts seem to remain unaffected.
It's become clear that we've handed far too much control over our individual success to a mega-corporation that has no interest in safeguarding our hard work, the investment of our time, or the connections we've built.
We implore Meta to take a long, honest look at the systems deployed on Instagram's network that are terminating accounts as a result of miscalculated, misguided or falsely evaluated risk.
Why Post-Only Moderation Leaves Kids Exposed
We’re all for protecting minors online, full stop. But the way most social-platform policies go about it is fundamentally flawed: they focus almost entirely on what gets posted instead of who is lurking.
The core problem
- Moderation is triggered by visible content. If you never upload a photo or Reel, the AI scanners have nothing to review and no rules to enforce.
- Bad actors know this. Predators, harassers and content thieves create “ghost” accounts—no profile picture, zero posts, following thousands of kids or dance hashtags, and followed by almost no one.
- Result: surveillance without accountability. While legitimate creators sometimes lose their accounts to over-zealous filters, predators stay online, sliding into DMs or scraping images to share in private forums.
What a typical “ghost” account looks like
| Signal | Legit creator | Lurker / predator |
|---|---|---|
| Posts | Dozens of photos & videos | Zero—or a single stock image |
| Followers / Following | Balanced ratio | Follows 2-10K accounts, <25 followers |
| Profile info | Bio, links, contact | Empty or generic emoji |
| Activity style | Likes & comments on friends’ posts | Mass-likes young users; frequent DM requests |
These profiles contribute nothing positive to the community, yet current rules let them sit untouched because they technically “haven’t broken content guidelines.”
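To make the contrast concrete, here is a minimal, hypothetical sketch of how the signals in the table could be combined into a simple "ghost account" flag. The `AccountSnapshot` structure, field names and thresholds are assumptions for illustration only; they do not describe anything Meta actually runs.

```python
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    """Hypothetical, simplified view of the public signals listed in the table above."""
    post_count: int
    followers: int
    following: int
    has_bio: bool
    has_profile_photo: bool
    dm_requests_to_minors_last_30d: int

def looks_like_ghost(acct: AccountSnapshot) -> bool:
    """Heuristic flag combining the table's signals. All thresholds are illustrative."""
    # No visible presence: nothing posted, no bio, no profile picture.
    no_visible_presence = (acct.post_count == 0
                           and not acct.has_bio
                           and not acct.has_profile_photo)
    # Lopsided follow ratio: follows thousands, followed by almost no one.
    lopsided_ratio = acct.following >= 2000 and acct.followers < 25
    # Heavy outbound contact with minors despite having no public footprint.
    aggressive_outreach = acct.dm_requests_to_minors_last_30d > 50
    return no_visible_presence and (lopsided_ratio or aggressive_outreach)

if __name__ == "__main__":
    lurker = AccountSnapshot(post_count=0, followers=12, following=4300,
                             has_bio=False, has_profile_photo=False,
                             dm_requests_to_minors_last_30d=80)
    creator = AccountSnapshot(post_count=240, followers=5200, following=900,
                              has_bio=True, has_profile_photo=True,
                              dm_requests_to_minors_last_30d=0)
    print(looks_like_ghost(lurker))   # True
    print(looks_like_ghost(creator))  # False
```

The point is that every input here is visible in the account's shape and behaviour, not in any posted content, which is exactly the information post-only moderation ignores.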
Why post-based safety backfires
- High false-positive rate – AI removes borderline dance or cheer content that isn’t sexual, alienating real users.
- Low true-positive rate for lurkers – The worst offenders fly under the radar precisely because they never post.
- Resource drain – Review teams spend time on harmless photos while predators keep harvesting new material.
A smarter way forward
- Account-behaviour scoring: Flag profiles that follow thousands of minors, send mass DMs, or have a lopsided follow ratio—whether or not they post (a rough sketch of this kind of scoring follows this list).
- Minimum-activity requirement: If an account has no posts, no profile details and no legitimate followers after 30 days, auto-mute its ability to message or view private content until it verifies.
- Graduated verification:
  • Basic photo ID for adult accounts interacting with minors.
  • Extra hurdles (e.g., phone verification, two-factor auth) before they can DM under-18 users.
- Transparent appeals: Make it easy for genuine new users to unlock restrictions—but force bad actors into the light.
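Below is a rough sketch of how the first two ideas could fit together: a point-based behaviour score plus a graduated restriction decision. The `AccountBehaviour` fields, point weights and cutoffs are all invented for illustration, not a real platform policy.

```python
from dataclasses import dataclass
from enum import Enum

class Restriction(Enum):
    NONE = "none"
    MUTE_DMS = "mute_dms"            # cannot message or view private content until verified
    REQUIRE_VERIFICATION = "verify"  # photo ID / phone / 2FA before DMing under-18 users

@dataclass
class AccountBehaviour:
    """Hypothetical behaviour signals; these do not reflect Meta's real data model."""
    account_age_days: int
    post_count: int
    profile_complete: bool
    followers: int
    minors_followed: int
    dms_sent_last_7d: int
    is_adult: bool

def behaviour_score(acct: AccountBehaviour) -> int:
    """Illustrative point-based risk score; higher means more lurker-like."""
    score = 0
    if acct.minors_followed > 1000:
        score += 3
    if acct.dms_sent_last_7d > 100:
        score += 2
    if acct.followers < 25 and acct.minors_followed > 100:
        score += 2   # lopsided follow ratio aimed at young users
    if acct.post_count == 0 and not acct.profile_complete:
        score += 1   # no visible presence at all
    return score

def restriction_for(acct: AccountBehaviour) -> Restriction:
    """Graduated response: mute empty 30-day-old accounts, verify high-risk adults."""
    if (acct.account_age_days >= 30 and acct.post_count == 0
            and not acct.profile_complete and acct.followers == 0):
        return Restriction.MUTE_DMS
    if acct.is_adult and behaviour_score(acct) >= 4:
        return Restriction.REQUIRE_VERIFICATION
    return Restriction.NONE

if __name__ == "__main__":
    empty_lurker = AccountBehaviour(account_age_days=45, post_count=0,
                                    profile_complete=False, followers=0,
                                    minors_followed=3000, dms_sent_last_7d=200,
                                    is_adult=True)
    print(restriction_for(empty_lurker))  # Restriction.MUTE_DMS
```

Because every input comes from behaviour and profile shape rather than posted content, restrictions like these would apply equally to accounts that never upload anything, which is exactly where post-based moderation goes blind.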
Bottom line
Post-based moderation punishes visible, creative communities while giving silent predators an open lane. Until platforms tackle who is watching as aggressively as what is posted, minors will remain exposed—and the wrong accounts will keep getting shut down.