
From Renting Tokens to Owning AI Assets — Part 2: What It Means to Own AI Assets

The phrase 'own AI assets' is usually shorthand for 'host a model ourselves.' That is the thinnest version of the move. Six rungs, a balance-sheet shift, and the one asset class almost nobody is buying yet — but should.

April 22, 2026 · 12 min read

TL;DR

  • 'Owning AI assets' is usually shorthand for 'hosting a model.' It is the thinnest version of the move. The full ladder has six rungs — and most of them do not involve hosting a model at all.
  • The most overlooked asset in enterprise AI is compiled context: system prompts, policy blocks, retrieval indexes, tool definitions, eval suites, and few-shot libraries. They are the modern equivalent of the enterprise data warehouse — and most organizations have not noticed they are building one.
  • The balance-sheet implication is real. A mature compounder moves 30–50% of AI spend out of pure OpEx into reserved OpEx, CapEx, and intangible assets. The shift changes how the P&L reads for a decade.
  • Ownership is a ladder, not a step. A healthy enterprise sits on several rungs at once, each workload assigned to the rung its economics justify. The laddering decision is the CFO's; the execution is the platform's.
  • The enterprises that will compound in AI are the ones that stop treating every AI dollar as an expense and start treating some of those dollars as asset accumulation.


Every time an executive says "we want to own AI assets" — in a talk, on a podcast, on a panel, in a LinkedIn thread — the same follow-up question is worth asking out loud: which kind?

The answer is almost always the same. They want to host a model. That is one tier of ownership. It is a meaningful tier. It is also, in my experience, the fifth-most-important tier on a ladder of six — and the one most enterprises overshoot toward at the expense of the other five rungs they could already be harvesting.

Part 1 argued that the portfolio decision is real — that the four lanes have fundamentally different unit-economics shapes, that the crossovers are calculable, and that the question belongs to the CFO. This post — Part 2 — redefines what ownership actually means in this context.

It is deliberately short on jargon, because this is where most executives get lost. "Fine-tuning" and "RAG" and "PTUs" and "colocation" all sound like different things. They are. They also all belong on the same ladder, and the ladder itself is what the board should be reading.

The ladder

The ownership ladder — six rungs, from pure rental at the bottom to durable, compounding assets at the top. Rung 6, the most overlooked, is detailed below.

Rung 6 of 6 — Compiled context as capital

The most overlooked asset on the ladder. Compiled context — system prompts, policy blocks, retrieval indexes, tool definitions, few-shot libraries — is owned knowledge that survives model changes. It is the AI equivalent of the enterprise data warehouse, and most organizations have not noticed they are building one.

  • You own: compiled knowledge
  • Balance sheet: intangible asset (most durable)
  • Commitment: durable across models
  • Wins when: any enterprise with repeated workflows, proprietary process, and accumulated institutional knowledge
  • Risk: governance and lineage discipline required; under-instrumented by default

The ladder is not a migration plan. A mature enterprise sits on several rungs at once, with each workload at its own rung based on its own economics.

Six rungs. Each represents a different kind of ownership.

Rung 1: Rented tokens. The baseline. Public API, metered consumption, month-to-month. You own nothing. You are buying capability by the request.

Rung 2: Reserved capacity. The first step up. Provisioned throughput, committed-use discounts, annual contracts. You still own nothing technically — the silicon belongs to the vendor — but the economics of ownership begin here, because above the commit you have essentially fixed your unit cost.
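The crossover intuition behind that fixed unit cost is easy to sketch. Every rate and commit level below is an illustrative assumption, not vendor pricing:

```python
# Sketch: when does reserved capacity beat metered tokens?
# All figures are illustrative assumptions, not vendor quotes.

METERED_RATE = 8.00        # $ per 1M tokens, pay-as-you-go
RESERVED_MONTHLY = 40_000  # $ per month for committed throughput
RESERVED_CAPACITY = 8_000  # 1M-token units the reservation serves per month

def monthly_cost(volume_m_tokens: float) -> dict:
    """Compare metered vs reserved cost at a given monthly volume."""
    metered = volume_m_tokens * METERED_RATE
    # The reservation is a fixed cost up to capacity; overflow spills to metered.
    overflow = max(0.0, volume_m_tokens - RESERVED_CAPACITY)
    reserved = RESERVED_MONTHLY + overflow * METERED_RATE
    return {"metered": metered, "reserved": reserved}

# Volume at which the commit pays for itself.
break_even = RESERVED_MONTHLY / METERED_RATE
print(f"Break-even: {break_even:,.0f}M tokens/month")
for v in (2_000, 5_000, 8_000):
    c = monthly_cost(v)
    print(f"{v:>5}M tokens  metered ${c['metered']:>9,.0f}  reserved ${c['reserved']:>9,.0f}")
```

Below the break-even volume, the reservation is dead weight; above it, every marginal request is effectively free until capacity runs out — which is the "essentially fixed unit cost" in prose form.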

Rung 3: Data-zone deployment. A specialized form of rental. You pay a locality premium for the vendor to run in a specific region or boundary. Still no ownership, just purchased residency. Wins when soft locality pressure exists but volume does not yet justify committing capacity.

Rung 4: Colocated inference. The first rung where real ownership shows up. You own the lane — the physical placement of inference, in the metro, next to the regulated data, on private interconnect. The GPUs may still be leased or rented; the lane, and the control over where intelligence runs, is yours.

Rung 5: Distilled proprietary models. You own the weights. A smaller, faster, task-specific model — usually distilled from a frontier model onto your own task — that you run where you choose, at a cost that does not grow with a vendor's pricing page. This is where most executives think "owning AI" starts. It is actually the second-highest rung.

Rung 6: Compiled context as capital. The most durable asset in enterprise AI, and the one almost nobody is valuing. System prompts, policy blocks, retrieval indexes, tool definitions, eval suites, and few-shot libraries. Owned knowledge, versioned, cached, depreciable. This is the rung most organizations are already building on — usually without noticing, always without governance, nearly always without finance knowing.

A healthy enterprise does not pick a rung. It picks several rungs, for several workloads, simultaneously — and the portfolio committee introduced in Part 1 keeps that assignment fresh every quarter.

What changes on the balance sheet

This is the part that matters to the CFO.

From pure OpEx to a balance-sheet portfolio

The AI spend of an organization that owns nothing versus an organization that owns the right things. Same total dollars, different capital treatment, wildly different compounding.

  • Public API tokens — 22%. Still present, rationalized to unpredictable or low-volume workloads.
  • Reserved capacity — 18%. Committed OpEx. Unit economics drop materially above the commit threshold.
  • Colocated inference — 22%. Owned lane in the metro. CapEx depreciated against a multi-year plan.
  • Distilled models — 14%. Intangible asset. Cheaper per call every quarter the model is used.
  • Compiled context — 19%. The durable asset. Survives model changes; compounds with use.
  • Per-seat licenses — 5%. Trimmed as licenses without workloads get rationalized off the ledger.

Capital treatment of the Compounder's spend:

  • OpEx — 27%
  • Reserved OpEx — 18%
  • CapEx — 22%
  • Intangible asset — 33%

The Buyer is 100% OpEx and zero assets on the books. The Compounder has moved a meaningful share of the same spend into CapEx and intangible assets — and the difference shows up on the P&L every quarter after that.


The Buyer's balance-sheet profile for AI spend is clean, simple, and inefficient: 100% OpEx. Every dollar spent is an expense, gone at the end of the period, with no asset on the books and nothing to depreciate. The accounting is trivial. The compounding is zero.

The Compounder's profile is more complex — and more economically powerful. A significant share of the same total spend has moved out of pure OpEx: some into committed reserved capacity (still OpEx, but with unit economics that improve above the commit), some into CapEx (depreciable against a multi-year plan), and some — this is the overlooked part — into intangible assets: distilled models and compiled context, both of which can be accounted for as owned intellectual property with a real useful life.

That shift does three things.

It spreads the cost over time. CapEx depreciates. Intangibles amortize. A dollar of AI spend that shows up on the income statement today as a full OpEx hit can, if structured as a capital asset, show up as a fraction of that cost for each of the next several years. The accounting posture alone is meaningful to the P&L.
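The P&L mechanics can be sketched in a few lines. The spend figure and useful life below are hypothetical, and real capitalization rules vary by jurisdiction and audit posture:

```python
# Sketch: P&L impact of expensing vs capitalizing the same AI spend.
# Spend and useful life are illustrative assumptions, not accounting advice.

SPEND = 1_200_000      # $ of AI spend in year 0
USEFUL_LIFE_YEARS = 4  # straight-line amortization period (assumed)

def pnl_hit(year: int, capitalized: bool) -> float:
    """Expense recognized in a given year (year 0 = year of spend)."""
    if not capitalized:
        # Pure OpEx: the whole dollar lands in the period it is spent.
        return SPEND if year == 0 else 0.0
    # Capitalized: the same dollar amortizes evenly over the useful life.
    return SPEND / USEFUL_LIFE_YEARS if year < USEFUL_LIFE_YEARS else 0.0

for y in range(5):
    print(f"Year {y}: expensed ${pnl_hit(y, False):>11,.0f}   capitalized ${pnl_hit(y, True):>9,.0f}")
```

Same total dollars either way; what changes is when the income statement feels them.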

It creates leverage. Above the commit threshold of a reservation, every additional request runs at a lower marginal cost. Above the break-even of a distilled model, every additional inference is a direct saving against the public-API rate the enterprise would have paid. Ownership, done right, means the unit economics improve every quarter — not hold steady, improve.

It builds the asset. Most importantly, the Compounder is accumulating things on its own books that the Buyer is not. Distilled models trained on proprietary data. Compiled context that encodes institutional knowledge. Retrieval indexes over decades of documents. Five years from now, the Buyer has renewed its vendor contracts several times. The Compounder has a shelf of assets that did not exist before AI and cannot be easily replicated.

The CFO of a Compounder is running an AI line on the balance sheet that looks structurally different — in shape, in treatment, in optionality — from the CFO of a Buyer running pure OpEx. That difference shows up on shareholder returns over a decade, in exactly the way that infrastructure ownership showed up on industrial returns in previous economic cycles.

The asset nobody is buying

Of the six rungs, the one I want to argue hardest for is the one at the top.

Context as capital

Six asset classes most enterprises are already accumulating without measuring. Collectively, they are the most durable form of AI ownership — because they survive every model change. One of them, retrieval indexes, is detailed below.

Retrieval indexes (intangible asset)

Governed, versioned indexes over enterprise knowledge — vector stores, semantic graphs, fine-tuned retrievers. Model-agnostic.

  • In the field: a rolling corpus of 4.2M documents with enforced access control, lineage, and freshness — usable by any future model the platform onboards.
  • Why it compounds: the hardest asset to build, the most durable once built. Outlasts every model generation. It is the modern data warehouse.

These six assets are the modern enterprise equivalent of the data warehouse. Every organization is already accumulating them. Very few have noticed, versioned, governed, or depreciated them — which is why the ones that do will compound faster than the ones that do not.

Six asset classes. All of them invisible in most enterprises. All of them accumulating anyway, because every AI workflow built in 2024–2026 is writing prompts, assembling policy text, designing tools, indexing documents, running evals, and curating examples — whether or not the enterprise has decided any of that is an asset.

Compiled context is the AI-era equivalent of the enterprise data warehouse. And most enterprises have not yet noticed they are building one.

Consider what a single production AI workload actually accumulates over two years.

A system prompt that starts as 400 tokens of instructions and ends, after six months of refinement, at 2,400 tokens of deeply tested behavioral guidance. A policy block that codifies the residency rules, escalation criteria, and sensitivity classes the workload has to respect — reused across seven adjacent agents. A tool schema that defines how the agent queries the enterprise data store, versioned like an internal API. A retrieval index over the last five years of regulatory filings, kept current, with lineage, with access control. A 312-case eval suite that tests every new model candidate against the workload's quality floor. A few-shot library of 48 canonical examples — how this company handles this kind of clause, this kind of customer, this kind of edge case.

Every single one of these is an asset. Every single one can be owned, versioned, governed, cached, depreciated, transferred across model generations. Together they are often the most durable intellectual property an enterprise will build during this AI cycle — because models will change, vendors will change, infrastructure will change, and the compiled context will still be there.

The reason most enterprises do not value this asset class is that it looks, at first glance, like a bunch of configuration files. It is not. It is the distillation of organizational craft into tokens. And the organizations that start valuing it — protecting it, governing it, reusing it, compounding it — are the ones that will look back in five years on an AI budget that produced something other than an ever-increasing vendor invoice.

When each rung wins: a short playbook

The ladder is not a checklist. Each rung wins under specific conditions, and the mature enterprise moves a workload to the appropriate rung and leaves it there until something changes.

Stay on Rung 1 (rented tokens) when: the workload is new, the volume is small or unpredictable, no locality pressure exists, and the enterprise does not yet have a credible eval against which to test alternatives. The cost of optionality is worth more than the savings.

Move to Rung 2 (reserved capacity) when: the workload has run consistently at high volume for at least two quarters, the quality floor is proven, and the commit threshold is less than the trailing-twelve-month spend. If reserving would have paid for itself over the last year, it will pay for itself over the next one too.

Move to Rung 3 (data-zone deployment) when: a customer contract or regulatory signal demands locality but the workload volume is not yet enough to reserve. The premium buys the residency story without the commitment.

Move to Rung 4 (colocated inference) when: regulation is non-negotiable, volume is high enough to amortize the fixed cost of private infrastructure, and the interconnect density between the data and the inference matters. Most large enterprises have a handful of workloads that qualify; very few have identified them.

Move to Rung 5 (distilled models) when: a repeated workload has a clear quality floor, a large volume of correctly labeled examples, and economics that would justify a fine-tune or distillation. The math is simpler than most executives expect. The ongoing maintenance cost is the part most skip.
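That math, maintenance included, looks roughly like this. Every figure below is a hypothetical input, not a benchmark:

```python
# Sketch: break-even for distilling a task-specific model.
# All costs are illustrative assumptions.

API_COST_PER_1K_CALLS = 120.0   # $ to serve 1k calls via the public API
OWN_COST_PER_1K_CALLS = 15.0    # $ to serve 1k calls on the distilled model
DISTILLATION_COST = 250_000     # one-time training + eval build-out
MAINTENANCE_PER_YEAR = 90_000   # refreshes, re-evals, drift monitoring

def annual_saving(calls_per_year: int) -> float:
    """Net yearly saving vs the public API, after maintenance."""
    per_call_saving = (API_COST_PER_1K_CALLS - OWN_COST_PER_1K_CALLS) / 1_000
    return calls_per_year * per_call_saving - MAINTENANCE_PER_YEAR

def payback_years(calls_per_year: int):
    """Years to recoup the distillation spend; None if it never pays back."""
    saving = annual_saving(calls_per_year)
    return DISTILLATION_COST / saving if saving > 0 else None

for volume in (500_000, 2_000_000, 10_000_000):
    p = payback_years(volume)
    label = f"{p:.1f} years to payback" if p else "never pays back"
    print(f"{volume:>10,} calls/year -> {label}")
```

The shape of the answer is the point: at low volume the maintenance line alone sinks the project, and at high volume the distillation spend disappears into the first year. The volume threshold, not the model, is the decision.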

Invest in Rung 6 (compiled context) from day one, regardless of other rungs. This is not a "move to when" — it is the asset that compounds under every other rung simultaneously. Every enterprise should be governing its system prompts, policy blocks, tools, retrieval, evals, and few-shot libraries from the moment it has any production AI workload at all. Most enterprises wait. That wait is the single largest accrued asset debt in enterprise AI right now.

The governance question

Two objections usually come up at this point, and both deserve brief treatment.

"Doesn't ownership tie us to specific technology?" Less than you might think. The whole point of compiled context as an asset class is that it is model-agnostic. System prompts, policy blocks, tool definitions, retrieval indexes, eval suites — none of them are locked to a particular model vendor. They migrate when models change. That is why they compound.

Distilled models are more technology-specific, and the answer there is the same answer enterprises have always given to similar questions: diversify, retain multi-source optionality, write contracts that protect portability. A mature AI ownership posture includes the ability to retrain, refresh, and migrate assets — because vendor lock-in is a governance problem, not an ownership problem.

"Isn't this what the vendor's platform already does?" No. The vendor's platform does what is valuable for the vendor to do, which is to make their services sticky. An enterprise AI ownership posture does what is valuable for the enterprise, which is to accumulate assets the enterprise controls — even when those assets are built on top of vendor services. The two are not adversarial; they are complementary. But they are not the same, and conflating them is how organizations arrive at five years of vendor payments and no asset base to show for it.

The leadership move

The shift is narrow and sharp.

Stop treating every AI dollar as an expense. Start treating some of those dollars as asset accumulation.

The Buyer is optimizing an expense line. The Compounder is building a balance sheet. Both of them are spending the same total dollars. Five years from now, one of them will have renewed vendor contracts. The other will have a shelf of assets — distilled models, colocated lanes, compiled context, governed retrieval, institutional evals — that did not exist before and cannot be bought off the shelf by anyone else.

The portfolio decision is not about cost. It is about which kind of dollar the enterprise is spending. Operating-expense dollars are flows. Asset dollars are accumulation. The strategic difference is the difference between consuming AI and owning the AI that your enterprise runs.

That is the whole argument. The ladder is the tool. The balance sheet is the receipt. The compiled-context layer is the asset nobody else is buying yet — and the one that will define the gap between the enterprises that compound and the enterprises that rent forever.


This concludes the two-part Rent vs Own series inside the broader executive token-economics thread. The frame is The CEO's Guide to Token Economics. The placement dimension is Data Gravity Meets Token Economics. The architecture is Designing the AI Control Plane. The metrics are The Enterprise Token Scorecard. The portfolio decision starts in Part 1: The Rent-vs-Own Question. The commercial horizon — where the platforms this all runs on are actually going — is in From AI-Ready Infrastructure to AI Economics Platform.
