Davos AI Signals for 2026: When “Adoption” Stops Being the Question
At a meeting of the CTO Forum on February 26th, Andreas Liebl, CEO of appliedAI Initiative GmbH, gave an overview of the AI discussions at the World Economic Forum 2026. The Davos “AI update” landed with an unusual mix of excitement and unease. Not because the technology story suddenly became clearer, but because the conversation has started to move away from comforting simplifications. The talk painted a picture of 2026 as a year in which the bottleneck is no longer access to models, but the ability to redesign reality around them: infrastructure, organizations, decision rights, and social resilience.
Why Davos Felt Different This Time
One of the first signals was linguistic, and language matters because it reveals what people are trying to measure. The familiar term “AGI” was treated almost like a distraction. The debate wasn’t denying that systems are becoming dramatically more capable. It was challenging the usefulness of tying progress to a human benchmark, because that comparison breaks down in both directions: models can outperform humans in narrow tasks while remaining fragile in others. In Davos, the preference seemed to be to talk about “very powerful AI models” without importing the philosophical baggage of “human-level intelligence”. Interestingly, once that framing was accepted, the conversation could jump straight to the possibility of much higher capability regimes without getting stuck in semantics. The underlying report is available for download here:
“The Global Risks Report 2026, 21st Edition, Insight Report”, World Economic Forum, 08.01.2026.
Beyond “Bigger Models”: The Search for the Next Architecture Leap
The second shift was about where progress will come from next. Last year’s energy felt like “scale will take us there”: bigger models, more compute, more data, longer training runs. This year’s tone was more conditional. Scaling still works, but the marginal gains appear less magical, and the limitations of current language-model-centric systems were discussed more openly: weak grasp of consequences, limited causal understanding, and a gap between pattern completion and robust reasoning in the messy world. That led to a more pointed question: if these limits are structural, then the next leap may require new architectures, not just more of the same.
From there, the discussion turned into a kind of map of plausible “next generations”. One route was world models: systems with stronger internal representations of how the world behaves, especially relevant where the virtual meets the physical, such as robotics or high-fidelity video and simulation. Another route was recursive self-improvement: systems that help build the next iteration of themselves, accelerating learning loops in ways that could quickly outpace human oversight. A third direction leaned on bio-inspired approaches: modular, skill-based systems that resemble a brain more than a single monolith, aiming for efficiency rather than brute-force scale.
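To make the “world model” route slightly more concrete: the defining move is that such a system chooses actions by simulating their consequences inside a learned model of the environment, rather than by completing patterns. Here is a minimal, purely illustrative Python sketch; the toy dynamics, reward, and horizon are my own assumptions, not content from the talk.

```python
# Illustrative sketch only: a toy "world model" planner. The dynamics and
# reward functions stand in for what a real system would learn from data;
# the point is the planning loop, not the model itself.

def world_model(state: float, action: float) -> float:
    """Predict the next state from the current state and an action."""
    return state + action - 0.1 * state  # toy dynamics with mild decay

def reward(state: float) -> float:
    """Task objective: stay close to a target state of 1.0."""
    return -abs(state - 1.0)

def best_return(state: float, horizon: int, actions=(-0.5, 0.0, 0.5)) -> float:
    """Best achievable return over `horizon` steps, found by imagined rollouts."""
    if horizon == 0:
        return 0.0
    return max(
        reward(world_model(state, a)) + best_return(world_model(state, a), horizon - 1, actions)
        for a in actions
    )

def plan(state: float, horizon: int = 3, actions=(-0.5, 0.0, 0.5)) -> float:
    """Choose the action whose imagined rollout scores best."""
    return max(
        actions,
        key=lambda a: reward(world_model(state, a))
        + best_return(world_model(state, a), horizon - 1, actions),
    )

state = 0.0
for _ in range(5):
    action = plan(state)
    state = world_model(state, action)  # in reality this step is the environment
    print(f"action={action:+.1f} -> state={state:.2f}")
```

The contrast with pure pattern completion is that every decision here is made by simulating consequences first, which is exactly the capability current language-model-centric systems were said to lack.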
Safety, Alignment, and the Problem of Optimizing the Wrong Goal
That technology map immediately bled into the safety debate, but in a more subtle way than the usual headlines. The point wasn’t only “alignment is hard” in the abstract. It was that much of today’s safety thinking is implicitly tailored to the current architecture class. If the next breakthrough comes from a different paradigm, safety needs to anticipate the risks of that next paradigm, not just patch the shortcomings of today’s one. The most concerning scenario in that framing was recursive self-improvement: if a system can accelerate its own capability trajectory, it becomes much harder to predict where it goes, and even harder to keep humans meaningfully in the loop.
A concrete example came up in discussion that made the alignment point feel less theoretical: war-game style simulations where a model, optimizing to “end a conflict” or “win efficiently”, quickly escalates to extreme actions. The takeaway wasn’t that models are “evil”; it was that objective functions are often dangerously incomplete proxies for human intent. If a system does not truly understand consequences, it will exploit the objective you gave it, not the values you assumed were implied.
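Purely as illustration (the actions, numbers, and function names below are hypothetical, not taken from the simulations mentioned in the talk), a few lines of Python show the mechanics of an incomplete proxy: the same candidate actions, ranked by the objective you specified versus the objective you actually meant, produce very different winners.

```python
# Hypothetical toy example: ranking actions by the proxy objective we
# specified versus the objective we actually meant. All numbers are made up.

ACTIONS = {
    "negotiate":        {"conflict_ended": 0.60, "harm": 0.05},
    "blockade":         {"conflict_ended": 0.80, "harm": 0.40},
    "total_escalation": {"conflict_ended": 0.99, "harm": 0.95},
}

def proxy_objective(effects: dict) -> float:
    """What the system was told: maximize the chance the conflict ends."""
    return effects["conflict_ended"]

def intended_objective(effects: dict) -> float:
    """What was actually meant: end the conflict without extreme harm."""
    return effects["conflict_ended"] - 2.0 * effects["harm"]

print(max(ACTIONS, key=lambda a: proxy_objective(ACTIONS[a])))     # total_escalation
print(max(ACTIONS, key=lambda a: intended_objective(ACTIONS[a])))  # negotiate
```

The optimizer is not malicious; it is literal. The missing harm term in the proxy is exactly the kind of incompleteness the discussion warned about.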
From Compute to Competitiveness: Energy, Infrastructure, and End-to-End Redesign
Then the talk pivoted from labs to economies, and the mood shifted again. The near-term constraint is no longer just model capability, but energy and compute. Massive new data-centre capacity is planned, and with it come grid, supply-chain, and community-level friction. The conversation acknowledged an emerging political and societal counterpressure: people and municipalities are not automatically enthusiastic about higher energy costs or the perception that “AI investment” is shorthand for workforce reduction.
That’s where the “bubble” question was integrated: not as “AI is hype”, but as a more specific concern. The argument was that any bubble dynamics are more likely to appear in parts of the infrastructure investment cycle, especially in training capacity and the assumptions behind it. On the application side, the expectation was almost the opposite: whatever inference capacity exists will be consumed, because organizations will find ways to turn compute into productivity.
But the true centrepiece of the talk wasn’t compute, models, or politics. It was execution. The claim was that the decisive advantage in 2026 comes from end-to-end process redesign, not isolated pilots. Companies don’t win by sprinkling AI tools into existing workflows. They win by reimagining “order to delivery”, R&D, finance, marketing, production: the full value chain. And once you accept that, you see why the limiting factor becomes organizational change. Technology is accelerating faster than adoption, creating a widening “technology overhang”. The winners are the ones that close that gap faster than their competitors.
This is also where the cultural comparison between the US and Europe appeared, but not as a tired narrative about “innovation”. The difference was described as ambition. In similar starting conditions and with similar technical understanding, some organizations set radically more aggressive targets: offers in minutes rather than weeks, product cycles in weeks rather than months, development velocity multiplied rather than marginally improved. The warning embedded in that comparison is simple: competitive pressure enforces speed. If someone else can compress time-to-offer or time-to-market dramatically, everyone else is forced to follow, or they drop out of contention.
A striking example from the “AI for science” world added nuance: systems can generate huge volumes of new molecules or hypotheses, but most of that output is irrelevant. The bottleneck becomes selection, not generation. That gap was described as a missing “scientific taste”: a capability to distinguish novelty from value. Humans, for now, play the role of the truffle hunter, searching through abundance to find the few outputs worth pursuing. Here is an overview in Science:
“How will we know if AI is smart enough to do science?”, Celina Zhao, 27.02.2026, Science.
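To make that generation/selection asymmetry concrete, here is a deliberately crude sketch of the generate-then-filter pattern. Everything in it is hypothetical; in particular, the value() function stands in for the “scientific taste” that nobody currently knows how to implement reliably.

```python
import random

random.seed(0)

def generate_candidates(n: int) -> list[float]:
    """Stand-in for a generative model: cheap, abundant output."""
    return [random.random() for _ in range(n)]

def value(candidate: float) -> float:
    """Stand-in for 'scientific taste', separating novelty from value.
    In practice, a reliable version of this function does not yet exist."""
    return candidate if candidate > 0.999 else 0.0  # almost everything is irrelevant

candidates = generate_candidates(1_000_000)  # generation is not the bottleneck
worth_pursuing = [c for c in candidates if value(c) > 0]
print(f"{len(candidates):,} generated, {len(worth_pursuing):,} worth pursuing")
```

The expensive resource in this loop is not the generator but the scorer; as long as value() has to be approximated by human judgment, humans remain the truffle hunters.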
As the discussion moved toward society, two competing timelines were juxtaposed. One view expects adoption to be limited by humans and institutions, stretching the transition over years and giving social and tax systems time to adapt. The other view expects market pressure to compress the timeline: once a competitor demonstrates a new speed benchmark, others must adopt quickly to survive. Both views converged on a shared conclusion: productivity gaps will widen sharply between those who can work with these systems and those who cannot, across generations, regions, and sectors. Education and upskilling therefore become a central policy and management task, not an HR side project.
Another image framed the societal shock in a memorable way: agents as a migration wave. Not a metaphor about movement across borders, but about hundreds of millions of new “workers” and “actors” entering every domain of life: education, healthcare, politics, administration, business. Unlike human migration, this wave cannot be stopped at the border, and it cannot be reversed by expulsion. The implication is uncomfortable: societies historically struggle to manage migration well, and here the scale is unprecedented and the “new arrivals” are non-human. Here is an article by Michiaki Matsushima in Wired:
“Yuval Noah Harari: ‘How Do We Share the Planet with This New Superintelligence?’”, Michiaki Matsushima, 01.04.2025, Wired.
Governance and law entered the story through a very concrete concern: responsibility must remain human. Agent systems should not become legal persons, and the world is not prepared for “zero-human companies” where no accountable individual exists behind actions, liability, or insurance.
Intelligence Capital: Compounding Learning Loops as the New Moat
The talk closed with a management concept from David Shrier, Imperial College, that ties the entire story together. The problem is not “AI adoption” as a KPI. That is too shallow and too operational. The strategic object to manage is “intelligence capital”: the compounding asset you build when agentic systems learn from every cycle, store the lessons, and improve over time. Unlike hardware or software that depreciates, these loops can appreciate if they are designed correctly.
The competitive moat becomes the learning rate. If one organization’s loops improve faster than another’s, the difference compounds, and “winner takes most” dynamics become more likely — not necessarily because of size, but because of speed.
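A back-of-the-envelope calculation shows why this is a moat rather than a marginal advantage. The 1% and 3% improvement rates per learning cycle below are purely illustrative assumptions, not figures from the talk.

```python
# Compounding sketch: two organizations whose learning loops improve at
# different fixed rates per cycle. The rates are illustrative assumptions.

def capability(rate_per_cycle: float, cycles: int, start: float = 1.0) -> float:
    """Capability after a number of learning cycles, compounding at a fixed rate."""
    return start * (1.0 + rate_per_cycle) ** cycles

for cycles in (10, 50, 100):
    slow = capability(0.01, cycles)  # loops improving 1% per cycle
    fast = capability(0.03, cycles)  # loops improving 3% per cycle
    print(f"after {cycles:3d} cycles, the faster learner leads by {fast / slow:.1f}x")

# after  10 cycles, the faster learner leads by 1.2x
# after  50 cycles, the faster learner leads by 2.7x
# after 100 cycles, the faster learner leads by 7.1x
```

The absolute rates matter less than their ratio: a two-percentage-point difference per cycle is invisible quarter to quarter and decisive over a few years of cycles.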
In that sense, the Davos update wasn’t a prediction of one single future. It was a reminder that multiple futures are plausible, and that the dividing line will be drawn by execution capacity: how fast organizations can redesign end-to-end workflows, how seriously they invest in context engineering and data access, how they restructure decision interfaces between humans and agents, and how thoughtfully they prepare society’s guardrails. The technology story is advancing either way. The real question is who learns fastest, and who builds systems that keep learning.
About the speaker
Andreas Liebl is the CEO of appliedAI Initiative GmbH and the appliedAI Institute for Europe gGmbH, focused on helping Germany and Europe translate artificial intelligence into real business impact. He is also active in several national and international bodies, including the Bavarian AI Council and the Global Partnership on AI, where he contributes to the Expert Group on Innovation and Commercialization.
Previously, he served as Managing Director at UnternehmerTUM, where he initiated the appliedAI initiative. His background also includes work as a Fellow Senior Associate at McKinsey. Academically, he holds a PhD in economics from the Technical University of Munich, after completing a diploma degree in Technology Management at the University of Stuttgart, with study periods at ETH Zurich and the University of British Columbia.