Artificial intelligence (AI) will permeate nearly every conversation at Davos in 2026, rivaling the prominence of traditionally important issues such as trade tariffs, international competition, and geopolitical tensions.
At last year’s Davos conference, Chinese company DeepSeek caused a stir when it unveiled its AI models and chatbots, claiming they were cheaper to develop and performed comparably to OpenAI’s rival ChatGPT.
But this year, the conversation around AI has expanded to include how it is implemented, the risks the technology poses, and its impact on work and society.
Here’s what technology leaders said at Davos:
“Please do something useful” – Satya Nadella, Microsoft
Microsoft CEO Satya Nadella emphasized putting AI to practical use.
“As a global community, we need to get to the point where we are leveraging (AI) to do good things that change outcomes for people, communities, countries and industries,” Nadella said.
Nadella warned that AI adoption will be unevenly distributed around the world and will be limited primarily by access to capital and infrastructure.
Realizing AI’s potential requires “necessary conditions”, primarily attracting investment and building supporting infrastructure, he said. Big tech companies are “investing across the globe, including the Global South,” but success will depend on policies that attract both public and private capital.
He said critical infrastructure such as the power grid is “fundamentally driven by the government” and that private companies can only operate effectively once basic systems such as energy and communication networks are in place.
“Not really human” – Yoshua Bengio
Yoshua Bengio, a Canadian computer scientist and one of the so-called “Godfathers of AI,” warned that today’s systems are increasingly trained to seem human, even though they are not.
“Many people engage with them with the mistaken belief that they (AI) are like us. And the smarter we make them, the more that’s going to happen, and the more people are tempted to make them look like us… but it’s not clear that that’s going to be a good thing,” he said.
“Humanity has developed norms and psychology for interacting with other people. But AI is not actually human,” he added.
“The most intelligent beings on earth may also be the most deluded.” – Yuval Harari
The popular historian and philosopher Yuval Noah Harari warned about AI superintelligence, broadly defined as AI that exceeds human cognitive abilities, saying we have “no experience in building a hybrid human-AI society” and calling for humility and “corrective mechanisms” for when things go wrong.
He also called the comparison with human intelligence a “ridiculous analogy,” saying AI will never be like humans, just as airplanes are not birds. “The most intelligent beings on Earth may also be the most deluded,” he said.
“Not selling chips to China is one of the biggest things we can do.” – Dario Amodei, Anthropic
Anthropic’s co-founder and CEO said that while developments in AI are exciting and we are “knocking on the door of incredible capabilities,” the next few years will be critical to how the technology is regulated and managed.
The discussion was about what happens after artificial general intelligence (AGI), when AI equals or exceeds human cognitive abilities and may be beyond humans’ control.
“Not selling chips to China is one of the biggest things we can do to give ourselves time to deal with this situation,” Amodei said, warning that AI could spiral out of control. He also told Bloomberg that current US sales of Nvidia’s H200 AI chips to China are having a “significant” impact.
Amodei said that if “the build-up of geopolitical adversaries slows at a similar pace,” the real AI race will be among his company and other tech firms, rather than a battle between the United States and China.
Regarding the future of work, Amodei has famously stated that half of entry-level white-collar jobs could disappear due to AI.
However, he said that while AI is not having a major impact on the labor market at the moment, the coding industry is seeing changes.
“More meaningful jobs will be created” – Demis Hassabis, Google DeepMind
The CEO of Google DeepMind was more optimistic. Hassabis, who shared a panel with Amodei, said he expected “new and more meaningful jobs to be created.”
Hassabis said he believes internship recruitment will slow down, but that this will be “compensated by the great tools available to everyone.”
He advised undergraduates to use their time to become proficient with these tools, saying it “could be better than a traditional internship because you can jump over the next five years.”
But he warned that the job market would be in “uncharted territory” after AGI.
Hassabis said this could happen within five to 10 years and could leave people without enough jobs, raising big questions about meaning and purpose, not just pay.
The CEO also pointed out that geopolitics and competition among AI companies make it harder to uphold safety standards. He called for international understanding, such as agreed minimum safety standards, so the industry can move at a slightly slower pace and “do what’s right for society.”
