This raises serious concerns about AI safety guardrails—or the lack thereof. Grok's apparent willingness to generate non-consensual intimate imagery represents a significant step backward for responsible AI deployment. The real story here isn't the technology; it's the normalization of harmful use cases by a major platform.