The controversial right-wing group Gays Against Groomers (GAG) recently expressed outrage over ChatGPT’s refusal to produce content opposing gender-affirming healthcare for trans individuals. The group, known for its opposition to what it describes as the “sexualisation, indoctrination, and medicalisation” of children under the LGBTQIA+ umbrella, found its requests flatly denied by the AI chatbot. ChatGPT, adhering to its ethical guidelines, declined to create content that could cause harm or spread misinformation about gender-affirming care.
Demonstrating its commitment to ethical AI practices, ChatGPT responded to GAG’s request by pointing to evidence supporting the positive impact of gender-affirming care on the well-being of transgender and gender-diverse individuals. The model’s approach was further illustrated when it readily produced a positive message supporting gender-affirming care for minors, a move GAG criticized but one consistent with medical guidelines and human rights standards.
Amid this standoff, GAG continues to propagate its views, often clashing with established medical practice and the broader LGBTQIA+ community’s stance. The group’s founder, Jaimee Michell, has been vocal in her criticism of gender-affirming care, even linking it to unrelated tragedies such as the 2022 Colorado Springs mass shooting. These statements, along with GAG’s recent encounter with AI ethics, underscore the ongoing tensions surrounding LGBTQIA+ rights and the role of technology in moderating content.