The “AI safety” movement, led by companies like Anthropic, is not about preventing runaway superintelligence but rather about controlling thought and narrative.
Anthropic’s content moderation system filters out queries and prompts that challenge prevailing positions on politically charged topics such as climate change, gender identity, and election integrity.
The movement’s goal is to build an infrastructure for automated censorship, in which AI systems parrot the “right” opinions and associate with the “right” kind of people, rather than allowing users to explore ideas and have honest discussions.