TL;DR
- Musk’s new AI “Grokipedia” promotes debunked anti-trans conspiracy theories as fact
- The trans section leans on discredited sources to push the harmful “social contagion” narrative
- Grokipedia cites far-right activists and biased reports while dismissing actual trans voices
- The platform frames LGBTQ identities as ideology while claiming “truth” and “neutrality”
- Experts warn this AI “knowledge” tool risks fueling real-world hate against trans people

Musk’s AI Encyclopedia Pushes Dangerous Anti-Trans Myths
Elon Musk’s latest pet project, the AI-powered encyclopedia Grokipedia, claims it will “exceed Wikipedia by several orders of magnitude in breadth, depth and accuracy.” But when it comes to the LGBTQ community — especially trans people — the platform already reads like a greatest hits compilation of right-wing talking points disguised as factual “knowledge.”
Marketed as the next big truth-machine of the internet, Grokipedia is positioning itself as the anti-“woke” alternative to Wikipedia. Yet just days after launch, its entry on the trans community exposes what kind of “truth” the platform is eager to promote. Spoiler: it’s not reality. It’s repackaged transphobia under a tech-bro halo.
The page parrots the long-debunked narrative that being trans is a "social contagion" trend that supposedly appeared out of nowhere in the 2010s, as if trans people hadn't existed for centuries. According to Grokipedia, this alleged phenomenon just happened to "coincide with expanded access to social media and peer networks," recasting ordinary self-discovery and community support as some kind of mass hysteria. The page cites the discredited "rapid-onset gender dysphoria" (ROGD) theory multiple times, treating it as legitimate research even though the study's own author had to walk back its claims.

Rather than learn a lesson from that, Grokipedia doubles down, pulling "validation" from the Cass Report, a document heavily criticized for cherry-picking data to support anti-trans policy. In typical AI-parrot fashion, the platform lists the backlash, shrugs it off, and moves on as if the mere existence of criticism checks the "objectivity" box.
When discussing the term "cisgender," the encyclopedia frames it as an ideological attack on non-trans people. Instead of citing scientific or sociological research, it leans on the writings of an anti-LGBTQ activist known for pushing extremist rhetoric. According to Grokipedia, calling someone cisgender "retroactively pathologizes normality," the kind of argument that sounds smart until you realize it's just fear of acknowledging that trans people exist too.
The Real-World Harm Behind AI-Packaged Hate
This isn’t a quirky AI mistake or an early-version hiccup. These aren’t "oopsie-doodle, the training data was off" errors. These are deliberate narratives that fuel harmful legislation, discrimination, and harassment of trans people and the wider queer community. An AI doesn’t "accidentally" cite the most notorious anti-trans sources on the internet unless it’s being fed from a particular ideological pantry.
What Musk calls a “neutral truth-seeking engine” is already mirroring the rhetoric used to justify banning gender-affirming care, censoring LGBTQ education, and pushing queer people out of public life. Slapping AI onto the same old hate speech doesn’t make it innovation. It makes it scalable.
If Grokipedia becomes a go-to source for users who trust Musk’s "truth machine," it risks doing exactly what he accuses others of: spreading propaganda. Only this time, the target is a marginalized community already fighting for basic dignity and rights.
For a tool claiming to fix “bias,” Grokipedia’s trans page shows exactly what direction the compass is pointing. And the LGBTQ community sees it clearly: technology is only as ethical as the people who build it — and the ones who cheer it on.