While OpenAI’s mission has focused on scaling up artificial intelligence, its rival Anthropic has focused on safety and security. Its co-founders, siblings and former OpenAI employees Daniela and Dario Amodei, say their concerns about AI’s potential for harm stem from watching how earlier technologies evolved.
“We felt very strongly that there were some lessons to be learned from earlier waves of technology and innovation,” Daniela Amodei told Bloomberg’s Brad Stone during an interview Thursday at the Bloomberg Technology Summit. “We’ve just had a few decades to look back at social media and say, ‘You know, there was a lot of good that came out of it, but there was also a lot of harm that was caused by this technology.
“Whether you could have predicted it in advance or not, you know, [it’s] hard to say, but I think we try to be as responsible as possible as actors, and to think about the potential externalities this technology could inadvertently cause,” she said.
The threats associated with generative AI have come into sharper focus over the past year as more chatbots from more companies have come online. Microsoft’s Copilot suggested that a user self-harm, OpenAI’s ChatGPT fabricated legal research, and Google’s Gemini generated historically inaccurate images, to name just a few examples. Meanwhile, AI deepfakes of celebrities, politicians and journalists are being used to smear victims and spread disinformation.
The Amodeis left OpenAI in 2020 over concerns that generative AI technologies were scaling too quickly without appropriate safeguards, and they wanted to build a more trusted model. They set out to do just that with Anthropic’s chatbot, Claude, which is billed as a “helpful, harmless, and honest” tool. Anthropic’s latest release, Claude 3, has been called one of the most powerful AI models, if not the most powerful, with the Amodeis claiming it can outperform GPT-4. The startup is backed by Amazon and Google.
Anthropic’s “Constitutional AI,” a set of principles for training artificial intelligence systems, helps ensure that models are trained to avoid toxic and discriminatory responses. The Amodeis say such efforts put pressure on the entire industry to create safer and more responsible models.
“We pioneered this concept of a responsible scaling plan,” a framework designed to help technology companies avoid the risks of developing increasingly capable AI systems, Dario Amodei said. “We released it last September, and within a few months OpenAI released something similar, and we really felt like publishing ours galvanized the efforts of others. Now we hear that Google is doing the same.”