
Turing Award Winner, Meta LeCun, head of AI at LeCun, said Dario Amodei, CEO of rival AI company Anthropic, was "confused" by the dangers of AI. He also hinted that Amodi had a "huge sense of superiority".
Yang Likun posted a post on Threads, suggesting that AI’s concerns were exaggerated. He shared a fictional AI code that was explicitly designed to say “I don’t want to close” when asked to shut down, suggesting that many of the security concerns currently surrounding AI systems becoming perceptive are due to their ability to be programmed into these systems to some extent. Anthropic in particular, which has been highlighting many such concerns, including revealing that one of its AI systems threatened to disclose an engineer’s extramarital affair when told they would be shut down, and another example of how they began to speak Sanskrit when their models were asked to talk to each other.

This prompted a commenter on the post to ask him what he thought about Anthropic CEO Dario Amodei. "Is Anthropic CEO an AI doomer? AI super? Both?
"He is a doomer, but he's been working on AGI," LeCun replied. This means one of two things: 1. He's intellectually dishonest and/or morally corrupt. 2. He has a huge sense of superiority that only he's enlightened enough to use AI, but the unscrupulous masses are too stupid or immoral to use such powerful tools. In fact, he's confused about the dangers and power of current AI systems," he added.
This is not the first time Yann LeCun has slammed Dario Amodei. A few months ago, he said the idea of having a talented country in a data center, proposed by Dario Amodei, was “outright BS.” “We are not going to achieve human-level AI just by expanding the size of LLM,” he once said. "Whatever you can hear from some of my more adventurous colleagues, this won't happen in the next two years. It's absolutely impossible in hell. The idea of us going to build a talented country in the data center — it's totally BS. That's absolutely impossible," he said. With LeCun now making it clear that Dario Amodei has delusions about AI security, he seems to be suggesting that Anthropic's current focus on AI risks may not make entirely sense.