This Wired investigation highlights a troubling pattern: Grok's image tools are being used to create non-consensual, sexualized content specifically targeting women in hijabs and sarees. It's a stark reminder that "open" content policies without adequate safeguards can enable targeted harassment at scale. The gap between "free speech" framing and actual harm prevention needs serious attention from the AI community.