DLD Munich 2026: The GLOBALS Recap

Munich in January has a way of stripping language down to essentials. The air is too sharp for ornament. Your breath becomes visible, then disappears. That is what the best conversations do, too. They make the invisible visible, and then they force you to decide what to do with it.

This year’s gathering at the House of Communication had the usual ingredients: founders hunting signal, investors hunting asymmetry, policymakers hunting legitimacy, artists hunting meaning. Nearly 2,000 people and 250+ speakers, compressed into three days and two stages.

But the story was not the crowd or the stagecraft. The story was a shared, uneasy recognition that the ground has shifted under our feet.

Not because “AI is coming.” Because AI is here, and it has begun to behave less like a product category and more like a force of nature. It changes the weather of markets, politics, attention, and identity. It puts stress on institutions the way rising seas stress a coastline.

Over and over, across sessions that looked unrelated on paper, the same three pressures kept surfacing:

  • Legitimacy: who do we trust when machines speak fluently?

  • Power: who owns the infrastructure of intelligence and the value chains beneath it?

  • Meaning: what remains distinctly human when generation becomes cheap?

You can call those themes philosophical, if you want. In practice, they are operational. They decide which companies scale, which societies hold, and which regions remain sovereign rather than subsidized.

The mood shift: from hype to accountability

The conference’s signature isn’t its guest list. It’s the friction it creates by placing incompatible conversations side by side.

This year, the friction hardened into a theme: we are building systems faster than we can govern them, and we are consuming power faster than we can justify it. Europe’s questions, in particular, felt sharper: who owns the infrastructure of intelligence, who benefits from the spending, and what happens when trust collapses before code does?

Burda’s own recap framed DLD26 as a program built for a “wild, wary, world-shaping” future, with AI, digital sovereignty, and humanity’s role in accelerated time as central pillars.


1) The Trust Problem Is Now a Systems Problem

Maria Ressa’s appearance worked like a siren in fog. Her session, framed as “Journalism Against the Autocratic Playbook,” was not a nostalgic defense of newspapers. It was a warning about what happens when information systems are optimized for arousal rather than truth.

In founder terms, she was describing an ugly product truth: engagement is not the same as trust, and you can scale one while destroying the other. The damage shows up later, but it shows up everywhere. In elections. In health decisions. In social cohesion. In the ability of a society to agree on what happened yesterday.

The useful takeaway for builders is uncomfortable because it is so concrete:

If your product mediates public reality, you are no longer “just shipping.” You are building civic infrastructure. Your moderation policy, provenance system, and incentive design are not side quests. They are the business model, whether you admit it or not.

Europe, especially, has a choice here. It can continue to play defense with regulation that arrives late, or it can compete on something harder and more durable: trustworthy systems by design.

2) The Control Problem Is Not Sci-Fi. It’s Governance Debt.

Later, the conversation moved from public reality to machine reality. Stuart Russell’s talk with Kenneth Cukier carried a title that sounded like provocation but functioned as a boundary condition: “How Not to Destroy the World With AI.”

Russell’s core point, echoed by many hallway conversations afterward, is a paradox that has become the signature of this era: we are building systems that impress us while remaining difficult to interpret and hard to control. When such systems move from “toy” to “infrastructure,” you inherit a new kind of liability.

Not the liability of bugs. The liability of misalignment at scale.

The founders who seemed most awake to this did not talk about “ethics” as a virtue. They talked about controllability as a competitive advantage. Auditability as a sales feature. Traceability as a moat. If AI becomes a layer in enterprise and government decision-making, then governance isn’t paperwork. It becomes product architecture.

This is one of the strangest business inversions of the decade: the most serious companies will be those that make their systems less magical and more accountable.

3) The Ownership Question: Are We Building Citizens or Tenants?

One session cut through the polite techno-optimism with a blunt economic lens. Raffi Krikorian (Mozilla) and Nicholas Thompson (The Atlantic) discussed the way intelligence is being packaged: not as something people own, but as something they rent.

That framing matters because a rental model doesn’t just concentrate profit. It concentrates power: over distribution, over norms, over the boundary between what is possible and what is permitted. If the default relationship between people and intelligence becomes tenancy, then agency becomes a premium feature.

Mozilla’s public framing of the talk emphasized open models, data transparency, and user agency.
Translate that into strategy and it becomes a European opportunity hiding in plain sight:

Europe is unlikely to dominate the “foundation model arms race” on raw spend alone. It can, however, lead in systems that preserve agency: verifiable provenance, interoperability, privacy-preserving architectures, and governance that is legible to non-experts.

In other words, Europe can build AI that behaves like civic infrastructure rather than a casino.

4) Intelligence Has an Electric Bill, and the Bill Is Strategic

A strange thing happened across panels that were supposedly about different topics. Energy kept appearing like a ghost.

The reason is simple: intelligence is physical. Models run on chips, grids, cooling, water, and land. The cloud is not a cloud. It is a power plant with a brand.

This is why the “20 watts” comparison keeps resurfacing in serious rooms. The human brain, as one NIH-hosted review puts it, produces art, science, and “poetry” on an energy budget of roughly 20 watts.
Even a public explainer from the Human Brain Project uses the same estimate and makes the point in a way any operator understands: roughly the energy draw of a monitor in low-power mode.

The comparison is not a party trick. It’s a design mandate.

If Europe wants sovereignty in an age where computation is power, then efficiency stops being a virtue and becomes survival. Whoever learns to deliver capability per watt will shape the next decade the way whoever learned to deliver steel per unit cost shaped the last century.
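To make "capability per watt" concrete, here is a back-of-envelope sketch. The GPU wattage and serving throughput below are illustrative assumptions, not measurements; only the 20-watt brain figure comes from the estimate cited above:

```python
# Back-of-envelope energy arithmetic for "capability per watt".
# All hardware figures are illustrative assumptions.

BRAIN_WATTS = 20          # the ~20 W estimate cited above
GPU_WATTS = 700           # assumed draw of one high-end accelerator
TOKENS_PER_SECOND = 50    # assumed serving throughput per GPU

# Joules spent per generated token (power / throughput).
joules_per_token = GPU_WATTS / TOKENS_PER_SECOND

# A full day of operation, in watt-hours, for each "system".
brain_day_wh = BRAIN_WATTS * 24
gpu_day_wh = GPU_WATTS * 24

print(f"Energy per generated token: {joules_per_token:.1f} J")
print(f"24h budget: brain {brain_day_wh/1000:.2f} kWh vs GPU {gpu_day_wh/1000:.2f} kWh")
```

Whatever the exact numbers in production, the ratio is the point: closing even part of that gap is what "capability per watt" competition means.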

Honorary Members: Mario & Nico’s salute to Steffi Czerny and Yossi Vardi

We made it official in the most GLOBALS way possible: not with a plaque, but with a moment of personal recognition.

On-site, Mario and Nico named Steffi Czerny and Yossi Vardi honorary GLOBALS Members, a gesture to the two people who’ve done something rare in Europe: they’ve built an institution that consistently attracts builders, artists, policymakers, and contrarians into the same room and makes them talk like grown-ups. They are a beacon, and it’s truly an honour to have them with us.

Mario & Nico w/ honorary member Yossi Vardi
Mario & Nico w/ honorary member Steffi Czerny

5) Culture Was Not a Side Stage. It Was a Warning Label.

It would have been easy for the arts programming to feel like garnish. It didn’t.

The “Co-Creating with Machines” session with Sougwen Chung and Carol Reiley explored what happens when authorship becomes shared between a human nervous system and a machine system.

Here’s why it mattered to founders and investors, not just artists: as generation becomes cheap, meaning becomes scarce. The market will drown in competent output. Differentiation rises into higher ground: taste, intent, cultural credibility, emotional resonance.

This is not sentimental. It’s economic.

If AI makes “good enough” abundant, the competitive edge moves to what cannot be mass-produced: depth, trust, and genuine aesthetic or moral signal. The companies that win will not only ship faster. They will craft better, and they will understand the cultural layer of their products.

6) When Machines Shape Belief, “User Experience” Becomes Theology

The most quietly radical session on the program asked a question that most technology conferences avoid because it is too intimate: what happens when people go to machines for meaning?

“When Humanity Looks to AI for Meaning,” with Mustafa Y. Ali, Father Paolo Benanti, and Joanna Shields, treated AI not as a tool but as a persuasive presence in the moral and emotional lives of users.

This is a frontier issue, and it’s arriving faster than many leaders want to admit. Systems that provide comfort, counsel, and certainty will become spiritually adjacent for some users, especially the young. That does not mean people will “worship AI.” It means AI will compete with human communities for authority.

For product leaders, that implies a new responsibility:

If your system is designed to be emotionally sticky, you are shaping moral formation, not just retention curves. You can’t outsource that to a policy PDF.

7) Europe’s Implementation Gap, Spoken Out Loud

Then came the session that turned all these abstract pressures into one painfully concrete European question: can this continent still convert its intelligence into power?

The TechFor panel moderated by John Thornhill brought together Ann Mettler, Carl Benedikt Frey, and Infineon’s Andreas Urschitz.
It began as a polite discussion about “opportunities.” It ended as something closer to a diagnostic scan.

Europe’s underreported industrial leverage

Urschitz began by puncturing the simplistic semiconductor narrative. The public conversation is dominated by CPUs and GPUs, he said, but the market is broader.

The other three are microcontrollers, power electronics, and sensors. So in these three areas, Europe is the one who has a play.

Then the quiet flex: in these segments, Europe holds massive share. His point wasn’t pride. It was leverage. If Europe wants to remain consequential, microelectronics is not merely a sector. It’s the base layer under energy systems, manufacturing automation, mobility, defense, and applied AI.

The scale gap, described in one humiliating comparison

Urschitz then described Germany’s high-tech agenda: “18 billion euros… meant to be spent in the next four years, split into seven different sectors…”

Then he contrasted it with the spending planned by “Google, Meta, Microsoft, and also Amazon,” which “in 2026 intend to spend $400 billion in the AI.”

His verdict wasn’t ideological. It was managerial:

We put our money into many, many, many baskets. A little bit of everything, but nothing really at scale.

Europe’s problem, in this telling, isn’t the absence of money. It is the refusal to choose, and the fear of concentrating resources in a way that might offend someone.

Ann Mettler’s line that broke the spell

Mettler’s contribution wasn’t a think-tank sermon. It was the voice of someone who has watched Europe perform brilliance early and surrender later.

That basically turns us into an incubator for the world. Why? Because we don’t scale in Europe.

Then she laid down the statistic that has become a kind of European shibboleth:

Europe has not produced a single deep tech startup that is listed in Europe and has a market cap above 100 billion euros in more than 50 years.

She listed Chinese giants, not to praise China’s politics, but to kill Europe’s favorite excuse: “it can’t be done.”

So someone tell me this isn’t possible? Why can others do what we can’t do if we are so good?

Then came the implementation critique that every operator in energy, mobility, or hardware recognized in their bones.

No startup can manage this complexity.

And the prescription, delivered like a demand rather than a proposal:

Europe needs a Complexity Reduction Act. Urgently. This is not a joke.

One of her lines needs its context: “six years into what?”

She was referring to the Green Deal timeline. The European Green Deal was unveiled at the end of 2019, and the Commission’s investment plan to mobilize major sustainable investment followed in early 2020. That is the six-year arc she is pointing at from 2026.

In that frame, her outburst lands with brutal clarity:

We’re now six years into it. We have nothing to show for… What the hell are we doing?

And then the strategic escalation that made the room sit up: if Europe repeats this pattern in defense tech, it won’t just be inefficient. It will be fatal.

The single market problem, quantified

Carl Benedikt Frey supplied the structural mechanism behind the scale failure:

We don’t have a single market for services.

Then the estimate designed to shame policymakers into action:

the IMF estimates… [EU internal barriers] amount to something like 110% tariffs… self-imposed within the European Union.

For any founder trying to scale across borders, this isn’t theory. It’s Tuesday.

The defense mirror: spending that leaks is not security

Mettler warned against spending “hundreds of billions” without building European capacity.
Urschitz turned that warning into arithmetic, noting the dominance of U.S. firms in the top defense companies and how little value-add remains in Germany.

And then the line, crude and memorable by design:

the guys who are benefiting from that is Mr. Trump in the morning and Mr. Putin in the afternoon.

Beneath the provocation sits the hard policy truth: procurement is industrial policy. If Europe spends on defense and the value chain lives elsewhere, Europe is buying security while exporting prosperity.

The closing prescription: coalitions, focus, courage

Mettler offered a political tactic: “Coalitions of the willing… Empower them. Let them go… rather than the… lowest common denominator.”

Urschitz offered a strategic focus: “three to five” technologies, including “microelectronics… applied AI… autonomous systems,” and then the financing principle: “factor 10x, not double down.”

Finally, the cultural requirement: “We need to dare. Dare to dare to decide.”

If you want the European “survival kit” in one line, it’s this:

Europe must stop confusing discussion with execution.

GLOBALS LIVE: “Why is it gonna be wild?”

We went LIVE on the floor and asked a simple question: why is it going to be wild?

Ralph Simon FRSA, Frank Seehaus, Paolo Benanti, Sascha Karstaedt, Klas Roggenkamp, Hasseb Iqbal, Britta Weddeling, Yervand Sarkisyan, Frank-Jürgen Richter, Catherine Carlton, Jernej Pintar (PhD), Diane Brady, Dirk Hoke, Sabine Klauke, Tilo Bonow, Marianne Dennler, Alexander Zumdieck, and Helmut Sussbauer all had an answer.

Curious what they said? Watch the video below!

8) The Human Moat, Embodied

If Mettler’s session was Europe’s industrial audit, the Aenne Burda Award ceremony was its cultural one.

FKA twigs received the award not because she is “tech-friendly,” but because she represents something algorithms cannot cheaply manufacture: radically human expression and creative ownership.

German press coverage added a detail that fits the week’s larger paradox: twigs has used a digital twin, “AI Twigs,” to handle social, email, and press tasks so she can preserve creative focus.

That detail matters because it reframes “AI productivity” in a human way. The point isn’t to become more machine-like. It’s to use machines to defend the part of you that remains irreducible.

In a week full of arguments about intelligence, twigs quietly demonstrated the principle: the endgame is not output. It’s authorship.

What’s the actual takeaway?

Most conference reports are name-check lists. The useful output here is simpler: five constraints that will decide who scales, who governs, and who stays sovereign in the AI decade.

1) Treat legitimacy as infrastructure

If your product shapes what people believe, buy, vote for, or fear, you’re not building “an app.” You’re building public reality infrastructure.

So what (operator version):

  • Make provenance (where outputs come from), accountability, and human override explicit product features.

  • If trust is bolted on later, it becomes a lawsuit, a scandal, or a ban.
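One hedged sketch of what “provenance as a product feature” can look like in code: every generated output carries a signed record of the model, the input, and the time, so it can be verified later. All names and fields here are hypothetical illustrations built only on Python’s standard library, not any particular provenance standard:

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key; real systems would use managed key storage.
SIGNING_KEY = b"replace-with-a-managed-secret"

def provenance_record(model_id: str, prompt: str, output: str) -> dict:
    """Build a signed provenance record for one generated output."""
    record = {
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Recompute the signature over the unsigned fields and compare."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

rec = provenance_record("model-x", "What happened yesterday?", "a draft answer")
```

The design point is that provenance lives in the product’s data model, where it can be audited, rather than in a policy document.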

2) Treat energy as strategy

Here’s the concept that sits underneath the “per watt” line:

AI is not just software. It’s industrial-scale electricity.
Every model output is paid for in energy, chips, cooling, and grid capacity. When AI becomes infrastructure, the biggest bottleneck stops being “talent” and becomes power availability + cost.

That’s why the brain comparison matters. The human brain runs astonishing cognition on roughly 20 watts, a tiny energy budget for what it can do. The point isn’t biology worship. The point is a design lesson: efficiency is the next frontier.

So what (operator version):

  • Winners won’t be the teams with the flashiest model. They’ll be the teams with the best capability-per-euro and capability-per-watt.

  • In practice: push distillation, caching, smaller models, edge/hybrid deployment, and outcome-based inference.

  • If you can’t tell a CFO your “€ per task saved,” you’re stuck in pilot purgatory.
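A minimal “€ per task saved” calculation a CFO can audit, as a sketch. Every price and volume below is a made-up assumption for illustration; substitute your own measured numbers:

```python
# Hypothetical unit economics for one automated task.
TOKENS_PER_TASK = 3_000      # assumed prompt + completion size
EUR_PER_1K_TOKENS = 0.002    # assumed blended inference price
HUMAN_MINUTES_SAVED = 6      # assumed time a person no longer spends
EUR_PER_HUMAN_HOUR = 40.0    # assumed loaded labour cost

# Cost of running the model for one task.
ai_cost = TOKENS_PER_TASK / 1_000 * EUR_PER_1K_TOKENS

# Value of the human time freed up by that task.
human_cost = HUMAN_MINUTES_SAVED / 60 * EUR_PER_HUMAN_HOUR

eur_saved_per_task = human_cost - ai_cost

print(f"AI cost per task: €{ai_cost:.4f}")
print(f"€ saved per task: €{eur_saved_per_task:.2f}")
```

If that last number, multiplied by realistic task volume, doesn’t clear your deployment and energy costs, the pilot should not leave purgatory.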

3) Treat scale as sovereignty

Europe keeps producing good ideas and exporting the compounding. Why? Because cross-border scaling is still too hard.

You heard it in plain language: Europe doesn’t have a true single market for services, and internal barriers behave like a self-imposed tariff.

So what (operator version):

  • Build an “EU expansion kit” (contracts, VAT/payments, data posture, support, compliance) as a repeatable product, not a scramble.

  • For policymakers: harmonizing services is not a Brussels hobby. It’s the fastest growth lever Europe has.

4) Stop funding everything; start winning something

Europe’s default is to spread money thin across many “strategic” baskets. The critique from stage was blunt: nothing reaches escape velocity.

So what (operator version):

  • Choose 3–5 domains where Europe can plausibly lead in the physical economy (microelectronics, applied AI in industry, autonomous systems, energy hardware, dual-use).

  • Then fund + procure at real scale. Incrementalism is how you lose slowly.

5) Defend the human moat

When generation becomes cheap, meaning becomes expensive. The creative sessions weren’t decoration; they were a warning about commoditization.

So what (operator version):

  • Build “taste” and “trust” into products: curation, restraint, editorial intelligence, community.

  • Culture isn’t vibes. It’s loyalty, brand gravity, and pricing power.


The next decade rewards builders who can ship trustworthy systems, scale across borders, and deliver AI that is efficient enough to be deployed everywhere, not just demoed.

A short, ruthless Monday-morning checklist

If you’re a founder:

  • Build for cross-border scaling from day one, or accept you’re building an acquisition target.
  • Engineer provenance, transparency, and controllability as product features.
  • Compete where Europe has industrial pull: applied AI in manufacturing, energy systems, mobility, automation.

If you’re an investor:

  • Underwrite teams that can navigate fragmentation without dying in it.
  • Look for compounding advantage in hardware-software stacks: chips + applied AI + energy efficiency.

If you’re a policymaker:

  • Harmonize services like it’s an emergency.
  • Make permitting deadlines real, and simplify subsidy stacks.
  • Use procurement to keep value-add inside Europe. Stop exporting your future by default.

Want to turn this into deals and distribution? Our next activations:

  • Join us for GSTF26 (Barcelona, March 1, 2026)

  • Meet us around the world at GLOBALS On Tour 2026

  • Become a GLOBALS Member for year-round access to the network, On Tour, and priority visibility.
