brianletort.ai

From AI-Ready Infrastructure to AI Economics Platform

Space, power, and cooling was the right product for the last era. It is not the right product for this one. A first-person argument — from inside Digital Realty — about where infrastructure platforms are actually going.

April 23, 2026 · 15 min read

TL;DR

  • Space, power, and cooling was the right product for the last era of data-center infrastructure. It is not the right product for the AI era. The next product surface is AI economics.
  • Digital Realty already owns the primitives that would matter: 300+ data centers across 55+ metros, an interconnect-dense fabric, AI-ready high-density environments, and a Data Gravity research posture that named the 93% problem before the rest of the industry did.
  • The missing layer is FOCUS-aligned AI economics reporting — what a data-center platform can uniquely see about a customer's AI workload that no hyperscaler and no AI vendor can. This is the product that turns infrastructure telemetry into a governance surface.
  • The moat is not any single primitive. It is the combination — interconnect density, metro coverage, sovereignty readiness, and control-plane partnership capability — and the infrastructure platforms that do not make this move will get disintermediated by the ones that do.
  • This is a first-person view from inside Digital Realty. It is public analysis, not a company position, written by someone who thinks the industry is closer to this shift than the industry is acting like.

A disclaimer that belongs at the top, not the bottom.

I work at Digital Realty. This essay is my own analysis, not a company position. It argues — sometimes pointedly — that infrastructure platforms including Digital Realty need to make a product-surface move that the industry has been circling but not yet committed to. I write it because I think the argument is right, and because writing publicly about where I think my own industry is going is the posture I want to take as an operator rather than as a spectator. Read it as thinking-in-public from inside the machine, not as a roadmap.

With that out of the way.

For twenty years, data-center platforms sold space, power, cooling, and cross-connects. That was the product. That is still the product. It is not going to stop being the product. It is also not going to keep being enough of the product, because the enterprises buying infrastructure from platforms like Digital Realty have a new question, and the new question has a different answer than the old one.

The old question was: "do you have the capacity we need, where we need it, with the power and cooling profile that AI requires?"

The new question is: "can you help us govern the economics of the AI we run on that capacity?"

That is a different product. It sits on top of the same physical substrate, which is good, because the physical substrate is the hardest thing in the world to build and we are one of the few companies that has. But the layer above it is an economics layer, and right now no one — including us — has made it a first-class product.

This essay is about what that looks like, why it matters, and why I think the first infrastructure platform that commits to it wins a decade of enterprise relationships the others are going to have to chase.

Where the value has already shifted in every adjacent cycle

Infrastructure platforms do not get disintermediated by lower-cost alternatives. They get disintermediated by higher-value layers that sit on top of the old product and redefine what the customer is buying.

Cloud did this to data centers once already. The hyperscalers did not win because their racks were cheaper; they won because the API surface above the racks — compute-on-demand, storage-as-a-service, managed databases — turned infrastructure from a capacity purchase into a capability purchase. Plenty of enterprises still run infrastructure in dedicated environments. But the conversation about where enterprise infrastructure is going gets had at the API layer, not the rack layer.

Data platforms did the same thing to enterprise databases a few years later. Snowflake did not win because it was the fastest database; it won because the consumption model, the governance layer, and the cross-data-source reach turned a database purchase into a data-platform purchase. Plenty of enterprises still run their own databases. The strategic conversation moved one layer up.

AI is now in the same position, and the people paying attention at the infrastructure layer know it. The four hyperscalers are racing to build control-plane-grade AI products on top of their existing IaaS substrate. The AI-native clouds — the neoclouds — are competing on per-GPU economics and on specialized hardware access. The model vendors are trying to verticalize into infrastructure. Everyone is trying to own the layer above the physical substrate.

Data-center platforms have a choice to make. Either they build the layer above the substrate themselves, or they let it get built by somebody else who sits on top of them and extracts the margin.

The primitives Digital Realty already has

I want to be specific about what DLR has today, because the argument I am about to make is that we are closer to the economics-platform move than the company is acting like — and if that is going to read as a take, it should be grounded in what the company actually has on the ground today.

Metro coverage. 300+ data centers across 55+ metros on six continents. The physical footprint that lets an enterprise AI workload run close to where its data actually lives — which the data-gravity essay argued is not an architectural nicety anymore; it is the primary economic constraint.

Interconnect density. The private-fabric connectivity between tenants, cloud on-ramps, and network providers that turns a metro into a fabric rather than a collection of boxes. This is the thing the hyperscalers cannot easily replicate, because it is built into the commercial relationships with thousands of other tenants over two decades.

AI-ready high-density environments. Dedicated high-density, liquid-cooled, AI-optimized capacity — the physical substrate that AI workloads actually require. The work the industry has been doing here for the last five years is what makes the economics-platform move possible, because you cannot sell governance of a workload that cannot physically run.

The Data Gravity research posture. The thesis that 93% of enterprise data will be created outside the public cloud did not come out of nowhere; DLR commissioned and published the research that named it. That intellectual posture matters, because it means the company is on the record about the structural point that the economics-platform argument depends on. The research has already identified the destination; the product work is how we get there.

Innovation labs and collaborations. An active innovation-lab footprint where real AI workloads are tested in real colocation environments. Collaborations with NVIDIA and other partners around AI factories. These are not just marketing surfaces — they are the places where the economics-platform product would be prototyped before it becomes a general offering.

The pieces are there. What is missing is the stitching — the decision to treat them as a product platform rather than as adjacent initiatives.

What the product actually looks like

The product stack, then and next

Data-center platforms sold space, power, and cooling for two decades. The next product surface is AI economics — and it sits on top of the same physical substrate.

  • Space · power · cooling (physical substrate): Still the foundation. Still sold by the rack.
  • Private interconnect (placement substrate): Low-latency, governed fabric between data and inference.
  • Capacity lanes (capacity product, new): Public / reserved / regional / sovereign lanes exposed as distinct, priced SKUs.
  • Placement policy (policy product, new): Residency, sovereignty, and data-gravity rules enforced at the platform layer.
  • FOCUS-compliant economics (economics product, new): Unified cost records spanning tokens, placement, and infrastructure — legible to every enterprise ERP.
  • AI control plane partnership (governance product, new): Deep integration with the customer's control plane so policy, routing, and ledger flow through without a bespoke integration.

Every layer above the physical substrate is a new product the infrastructure platform gets to charge for — and, more importantly, a layer that makes the substrate underneath it more defensible.


The legacy stack is honest. It is what data-center platforms have sold for twenty years, and it still works. It just does not tell the enterprise AI buyer anything about the thing that buyer is increasingly trying to understand — what is our AI actually costing us, where is it running, and is it running in the right places for the right reasons?

The AI economics stack keeps everything from the legacy stack — the physical substrate does not go away — and adds four new product layers above it. Capacity lanes as distinct SKUs. Placement policy as an enforced product. FOCUS-compliant economics reporting as a shipped deliverable. AI control-plane partnership as an integration surface with the customer's own platform.

Each of these is a product that the infrastructure platform already has the raw material to build, that the customer already needs, and that the hyperscalers cannot easily replicate without acquiring a data-center platform. The moat is not the rack. It is the combination of the rack and the four layers above it.
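To make "placement policy as an enforced product" concrete, here is a minimal sketch of a residency rule being evaluated at the platform layer. The lane names, metro names, and the `PlacementPolicy` shape are all hypothetical illustrations, not a real platform API:

```python
from dataclasses import dataclass

# Hypothetical capacity lanes, for illustration only.
LANES = ["public_api", "reserved_ptu", "regional", "colocated", "sovereign"]

@dataclass
class PlacementPolicy:
    """A residency/sovereignty rule enforced at the platform layer (sketch)."""
    data_classification: str   # e.g. "regulated-pii"
    allowed_metros: set        # where the data may be processed
    allowed_lanes: set         # which capacity lanes may serve it

def permitted_routes(policy, candidate_routes):
    """Filter candidate (metro, lane) routes down to policy-compliant ones."""
    return [
        (metro, lane)
        for metro, lane in candidate_routes
        if metro in policy.allowed_metros and lane in policy.allowed_lanes
    ]

policy = PlacementPolicy(
    data_classification="regulated-pii",
    allowed_metros={"Frankfurt", "London"},
    allowed_lanes={"colocated", "sovereign"},
)

routes = [
    ("Northern Virginia", "public_api"),
    ("Frankfurt", "sovereign"),
    ("Frankfurt", "public_api"),
    ("London", "colocated"),
]

print(permitted_routes(policy, routes))
# [('Frankfurt', 'sovereign'), ('London', 'colocated')]
```

The point of the sketch is the enforcement posture: the platform does not advise on placement after the fact, it filters the routes a workload is even allowed to take.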

FOCUS-aligned economics as the missing layer

Of the four new layers, the one that changes the relationship most is economics reporting.

A FOCUS-compliant cost record, for one AI run

What a single AI inference run looks like to the enterprise ERP when the infrastructure platform treats economics as a first-class product. Sixteen FOCUS-aligned fields, every one answerable.

focus.ai-inference · record #acme-04182138 · FOCUS v1.1 compliant

EffectiveCost: $0.024 — what the enterprise actually paid, after reservation coverage and cache savings.

Fields grouped into five views: Identity · Workload · Cost · Placement · Accounting.

Sixteen fields. FOCUS-compliant, which means a FinOps team can drop the record into the existing FOCUS pipeline without translation. Broken out in a way that lets finance, compliance, and the CIO each see the parts they care about.

This is the artifact that no hyperscaler and no AI vendor can produce for the enterprise — and I want to spend a minute on why.

The hyperscaler sees its own infrastructure. It does not see the enterprise's private colocation footprint, the physical fiber runs to the regulated data store, the cross-connects to the model provider, the latency between the metro and the application, or the egress economics between its own regions and the customer's other regions. The hyperscaler has the most legible view of its own slice of the AI economics — and a structurally incomplete view of the rest.

The AI vendor — OpenAI, Anthropic, the model providers — sees its own API usage. It does not see where the data came from, where the data is allowed to travel, what the latency profile looked like from the application's perspective, or what the egress economics were to get the data to its API. The AI vendor has the most legible view of token consumption and zero view of everything that surrounded the token purchase.

The data-center platform — by virtue of sitting in the interconnect fabric between the data, the application, and the model — has visibility into all of it. Not perfectly, not automatically, not today. But uniquely. The infrastructure platform is the one party that can see the end-to-end economics of an enterprise AI workload, because it operates the fabric where all of the parts meet.

That visibility is the product. FOCUS-compliant cost records are the artifact. And the customer is already paying for five partial versions of this artifact today from five partial vendors, which is precisely why the customer will pay for a complete version from the single party that is structurally positioned to produce it.
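To make the artifact concrete, here is a sketch of what one such record could look like. The column names loosely follow the FOCUS convention, with `x_`-prefixed fields standing in for platform extension columns; the exact field set and every value are invented for illustration:

```python
# A sketch of one AI inference run as a FOCUS-style cost record.
# Sixteen illustrative fields; x_-prefixed columns are hypothetical
# platform extensions, and all values are invented.
record = {
    # Identity
    "ProviderName": "example-platform",
    "BillingAccountId": "acme-enterprise",
    "ResourceId": "inference-run/acme-04182138",
    # Workload
    "ServiceCategory": "AI and Machine Learning",
    "ChargeDescription": "LLM inference, cached-context run",
    "ConsumedQuantity": 1_840_000,      # tokens consumed in the run
    "ConsumedUnit": "tokens",
    # Cost
    "ListCost": 0.031,                  # before discounts
    "EffectiveCost": 0.024,             # after reservation + cache savings
    "BillingCurrency": "USD",
    # Placement (hypothetical extension columns)
    "x_Metro": "Frankfurt",
    "x_CapacityLane": "sovereign",
    "x_ResidencyPolicy": "eu-regulated",
    # Accounting
    "ChargePeriodStart": "2026-04-18T21:38:00Z",
    "ChargePeriodEnd": "2026-04-18T21:39:00Z",
    "ChargeCategory": "Usage",
}
```

Because the record is plain FOCUS-shaped rows, the customer's existing FinOps tooling can ingest it next to cloud bills with no translation layer — which is the whole point of aligning to the spec rather than inventing a proprietary schema.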

The customer dashboard that does not exist yet


A mockup. An enterprise-AI customer of a data-center platform, looking at their AI economics the way they look at cloud spend today — except finer-grained, placement-aware, and FOCUS-native.

Mockup · not a shipped product · /ai-economics · acme-enterprise · Q2 2026

Headline metrics:

  • Cost per verified outcome: $1.42 (-22% QoQ)
  • Sovereignty hit rate: 94% (+2 pp)
  • Cache leverage: 38% (+6 pp)
  • Route win rate: 71% (+9 pp)
  • Direct allocation: 82% (+18 pp)
  • Egress avoided: $284K (+41% QoQ)

Lane mix (by spend): Public API 22% · Reserved PTU 18% · Regional 14% · Colocated 32% · Sovereign 14%

Spend by metro: Northern Virginia, Frankfurt, London, Singapore, Tokyo, Sydney


This is the view I want to be able to open as a DLR customer on a Monday morning. It does not exist. No infrastructure platform ships it.

The metrics across the top are the ones from the scorecard post — cost per verified outcome, sovereignty hit rate, cache leverage, route win rate, direct allocation, egress avoided. These are the numbers the CFO and the CIO want to see every quarter. An enterprise AI buyer currently has to stitch them together across seven systems, some of which do not exist yet and some of which their vendors deliberately obfuscate. Packaging them into a single view is not a small product; it is the product.

The lane mix is the portfolio view. How much of the enterprise's AI spend is running on public, reserved, regional, colocated, and sovereign lanes? The Compounder archetype from the previous post is one distribution; the Buyer is another. The customer should be able to see their shape, compare it to their peers, and make a placement call every quarter based on the actual numbers instead of on anecdote.

The metro view is the placement view. Where is the money actually landing geographically? Which metros are dominating spend, and does that match where the data lives? If it does not, there is a placement decision to revisit.

Every one of these views is assembleable from telemetry the data-center platform already has access to — ingress, egress, cross-connect, fabric-level flow data, tenant metadata, colocation density — plus light instrumentation in the customer's AI control plane. The dashboard is not a moon-shot product; it is a product that a dedicated team could have in private beta in two quarters.
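As a sketch of how light that assembly could be once the records exist: the lane-mix view is a one-pass rollup over FOCUS-style records, assuming each record carries an `EffectiveCost` and a hypothetical `x_CapacityLane` extension field (the numbers below are invented to match the mockup):

```python
from collections import defaultdict

def lane_mix(records):
    """Share of total effective spend per capacity lane, as percentages."""
    spend = defaultdict(float)
    for r in records:
        spend[r["x_CapacityLane"]] += r["EffectiveCost"]
    total = sum(spend.values())
    return {lane: round(100 * cost / total, 1) for lane, cost in spend.items()}

# Invented spend figures, chosen to reproduce the mockup's lane mix.
records = [
    {"x_CapacityLane": "colocated", "EffectiveCost": 32.0},
    {"x_CapacityLane": "public_api", "EffectiveCost": 22.0},
    {"x_CapacityLane": "reserved_ptu", "EffectiveCost": 18.0},
    {"x_CapacityLane": "regional", "EffectiveCost": 14.0},
    {"x_CapacityLane": "sovereign", "EffectiveCost": 14.0},
]

print(lane_mix(records))
# {'colocated': 32.0, 'public_api': 22.0, 'reserved_ptu': 18.0,
#  'regional': 14.0, 'sovereign': 14.0}
```

The hard part is not the rollup; it is getting the records emitted with placement fields attached, which is exactly the telemetry the platform already sits on.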

The moat is the stack, not any layer

The usual strategic question at this point is "why won't a hyperscaler just do this?" The answer is the combination.

A hyperscaler can ship FOCUS-aligned reporting for its own services. It can even ship placement-policy products for its own regions. But it cannot ship those same products against the enterprise's full AI footprint, because that footprint includes its competitors' regions, its customer's private colocation, its partners' model APIs, and a data-gravity distribution that mostly does not live in any single hyperscaler's cloud. A hyperscaler's view is structurally partial.

A neocloud or GPU specialist can ship great per-GPU economics. It cannot ship metro coverage, it cannot ship interconnect density, it cannot ship cross-cloud visibility, and it cannot ship sovereignty-readiness across 55 metros.

An AI vendor sees tokens. It does not see infrastructure.

A pure AI-FinOps tool can aggregate the bills. It cannot enforce placement, it cannot ship capacity lanes, and it does not own the substrate its recommendations run on.

The combination of metro coverage, interconnect density, AI-ready capacity, sovereignty readiness, and AI-control-plane partnership capability is only available from a handful of data-center platforms in the world. Digital Realty is one of them. That is not a marketing statement; it is a structural one. And structural advantages turn into products only when a company commits to building them.

What the commercial packaging could look like

If the product exists, the commercial question is "how do we sell it?" Three formats, in increasing order of ambition.

Format 1: AI economics reporting as a deliverable. The lowest-friction step. Offer FOCUS-compliant AI cost records as an add-on to existing colocation and interconnect contracts. Priced as a reporting SKU. Doesn't change the core commercial model; does change the conversation, because the customer's FinOps team now has data they could not previously get.

Format 2: Capacity lanes as priced SKUs. One step up. Offer public, reserved, regional, colocated, and sovereign capacity lanes as distinct, commercially packaged SKUs — with explicit residency guarantees, pricing structure, and control-plane integration. Lets the customer reserve, switch between, and portfolio across lanes the same way they reserve cloud capacity today.

Format 3: AI economics platform as a managed product. The most ambitious. Offer an integrated AI-economics platform that unifies placement policy, capacity lanes, cost records, and control-plane partnership into a single managed offering. Priced per-workload, per-outcome, or as an annual platform fee. The customer consumes AI economics the way they consume cloud compute — as a governed managed service, not as a collection of parts.

Format 1 is product-market-fit territory today. Format 2 is a two-quarter build. Format 3 is a multi-year strategic commitment. A serious platform move probably starts at Format 1, earns the right to Format 2 inside a year, and lands Format 3 by 2028.

The risk of not making the move

The uncomfortable part of this argument.

If infrastructure platforms do not build the economics layer themselves, somebody else will build it on top of them. The most likely candidate is a purpose-built AI-FinOps vendor that aggregates bills across clouds, adds a thin placement-advisory layer, and sells the whole thing to enterprise CFOs. The product will be weaker than what an infrastructure platform could build — because it will not own the substrate, will not have fabric-level telemetry, and will not be able to enforce placement decisions it recommends — but it will exist, and it will sit in front of the infrastructure-platform relationship.

Once the customer is looking at AI economics through a third party's dashboard, the infrastructure platform is one click farther from the CFO and the CIO. The substrate still runs fine. The margin still clears. But the strategic conversation about enterprise AI economics happens in a room the infrastructure platform is not in.

That is disintermediation. And it is the thing infrastructure platforms did not stop in the previous cycle — the one where the hyperscalers ate the control-plane layer while colocation kept shipping space and power. That cycle is a warning, not a precedent to repeat.

The leadership move

The leadership move is to treat AI economics as a product.

Not a dashboard. Not an initiative. A product, with a team, a roadmap, commercial packaging, customer commitments, and an engineering investment sized to the ambition.

The infrastructure platform that makes this move first — and I continue to believe Digital Realty is structurally the best positioned to make it — buys something the previous cycle did not buy: a seat at the table where enterprise AI economics gets discussed, quarter after quarter, as long as the customer relationship lasts. That seat is the thing that compounds. The physical substrate is the asset that earns the seat.

The industry is closer to this shift than the industry is acting like. The primitives exist. The research posture is on the record. The customer pain is acute and measurable. The missing piece is the decision to commit.

From where I sit, that decision looks like the next one worth making.


This is the culminating essay in the executive token-economics thread. The arc starts with The CEO's Guide to Token Economics, moves through Data Gravity Meets Token Economics on placement, Designing the AI Control Plane on architecture, The Enterprise Token Scorecard on measurement, and the two-part Rent vs Own series (Part 1: The Rent-vs-Own Question, Part 2: What It Means to Own AI Assets) on the portfolio. This essay is the commercial destination of that arc. Read as thinking-in-public, not as a company position.