
The AI-Native Computer Series

A 3-part exploration of how AI is reshaping enterprise architecture

AI Architecture · Enterprise AI · Enterprise Architecture

Architecting the AI-Native Enterprise: A BDAT Playbook

How should a leading organization design for an AI-native future? Using the BDAT lens—Business, Data, Application, Technology—we explore what's next.

December 22, 2025 · 9 min read

If Part 1 was "there's a new computer in town," and Part 2 was "software is becoming its toolbelt," then Part 3 is the uncomfortable but important question:

So what are we actually going to build?

This is where strategy stops being slides and starts being structure.

The good news: we already have a language for thinking about this—BDAT:

  • Business
  • Data
  • Application
  • Technology

In an AI-native world, BDAT doesn't go away. It becomes non-optional. Every layer still matters—but each layer gets a new flavor.

Let's walk through it from the top down.

B = Business: Start with Capabilities and Outcomes

If AI assistants and agents are the new front end, our starting point cannot be systems. It has to be business capabilities.

Instead of asking:

  • "What can our CRM do?"
  • "What features does this SaaS offer?"

We should be asking:

  • "What capabilities do we want agents to perform on behalf of the business?"
  • "What outcomes do we want to delegate to AI, under what policies?"

Examples:

  • "Qualify and route inbound opportunities within 2 hours."
  • "Keep this portfolio's capacity plan accurate within defined tolerances."
  • "Renew standard contracts automatically under agreed thresholds."
  • "Proactively surface operational risks for our top N customers."

For each capability, we need to clarify:

  • What "good" looks like (success criteria)
  • Which decisions can be automated vs which require human judgment
  • What guardrails and approvals are non-negotiable

In other words, we're designing the job description for our future AI agents.
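
To make that concrete: here's a minimal sketch of what such a job description could look like as structured data, using the "qualify and route inbound opportunities" example from the list above. Every field name and threshold here is an illustrative assumption, not a standard:

```python
from dataclasses import dataclass, field
from enum import Enum


class Autonomy(Enum):
    FULLY_AUTOMATED = "fully_automated"   # agent acts, humans audit after the fact
    HUMAN_APPROVAL = "human_approval"     # agent proposes, a human approves
    HUMAN_ONLY = "human_only"             # agent may assist, never decide


@dataclass
class CapabilitySpec:
    """A 'job description' for an AI agent, tied to one business capability."""
    name: str
    outcome: str                          # what "good" looks like
    success_criteria: list[str]           # how we measure it
    autonomy: Autonomy                    # automated vs. human judgment
    guardrails: list[str] = field(default_factory=list)  # non-negotiables


qualify_and_route = CapabilitySpec(
    name="qualify-and-route-inbound-opportunities",
    outcome="Every inbound opportunity is qualified and routed within 2 hours",
    success_criteria=["median time-to-route < 2h", "routing accuracy >= 95%"],
    autonomy=Autonomy.HUMAN_APPROVAL,
    guardrails=["never contact the customer directly", "log every decision"],
)
```

The point isn't the syntax. It's that success criteria, autonomy, and guardrails become explicit, reviewable artifacts rather than tribal knowledge.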

If we skip this and start with tools, we get AI toys—cute demos that don't move the needle. If we start here, we get AI that actually shows up in business metrics, not just adoption dashboards.

D = Data: Design the Memory Your Agents Live In

Once we know the capabilities we care about, the next question is:

"What data do agents need to see, and in what shape, to do this safely and intelligently?"

In the AI-native computer, data isn't just rows in a table. It's the memory the LLM draws from:

  • Operational facts – transactions, events, logs
  • Reference data – customers, contracts, locations, assets
  • Documents and policies – playbooks, standards, agreements
  • Historical context and patterns – trends, exceptions, outcomes

This is where:

  • Semantic layers
  • Common vocabularies
  • Ontologies and knowledge graphs

...move from "nice architecture slides" to must-have infrastructure.

For each business capability, we should understand:

  • What data is authoritative?
  • What data is sensitive and must be masked or abstracted?
  • What data needs to be fresh, and what's fine as a summary?
  • How do we represent it so RAG and tools can use it effectively?

Remember: if the LLM is the CPU and the context window is RAM, your data platforms are the disk and extended memory.

The better you organize that memory, the smarter and safer your agents will look. The worse you organize it, the more "AI problems" you'll have that are actually data problems with better branding.
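
To illustrate what "organizing that memory" can mean in code, here's a toy sketch of a governed retrieval step: each record carries its authoritative source, sensitivity, and freshness, and the agent only sees what policy allows. The record fields and function are invented for illustration, not any particular product's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class MemoryRecord:
    """One unit of enterprise 'memory' an agent may draw on."""
    content: str
    source: str            # which system is authoritative for this fact
    sensitivity: str       # e.g. "public", "internal", "restricted"
    as_of: datetime        # when the fact was last refreshed (tz-aware)


def retrieve_for_agent(candidates: list[MemoryRecord],
                       allowed_sensitivity: set[str],
                       max_staleness: timedelta) -> list[MemoryRecord]:
    """Pass raw retrieval hits through sensitivity and freshness policy."""
    now = datetime.now(timezone.utc)
    return [
        r for r in candidates
        if r.sensitivity in allowed_sensitivity
        and now - r.as_of <= max_staleness
    ]
```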

A = Application: From Monolithic Apps to Capability Platforms

At the Application layer, BDAT meets the new reality most directly.

We need to evolve from:

Monolithic "systems of record" with tightly coupled UIs, logic, and data

toward:

Capability platforms with:

  • Headless, well-defined actions
  • Clean APIs and events
  • Rich semantics and policy hooks

Applications become:

  • The place where core business logic lives
  • The guardians of key invariants and guarantees
  • The providers of capabilities to AI front ends and other systems

We intentionally design:

  • Capability catalogs – "here's what this domain can do"
  • Contracts – "here's how to safely ask us to do it"
  • Events – "here's how we tell the rest of the world what happened"

UIs still exist, but they're just one client of these capabilities (alongside agents), focused on configuration, exception handling, and deep dives.
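
As a sketch of what this might look like in practice, consider one hypothetical catalog entry for the "renew standard contracts" capability from earlier. The shape is an assumption for illustration; a real catalog might build on OpenAPI, JSON Schema, or an internal standard:

```python
# One hypothetical catalog entry for a domain action. All field names
# and policy expressions here are illustrative.
RENEW_STANDARD_CONTRACT = {
    "capability": "contracts.renew_standard_contract",
    "description": "Renew a standard contract under agreed thresholds",
    "contract": {  # "here's how to safely ask us to do it"
        "inputs": {"contract_id": "string", "term_months": "integer"},
        "preconditions": ["contract.status == 'active'"],
        "policy_hooks": ["require_human_approval_if: value > auto_renew_threshold"],
    },
    "events": [    # "here's how we tell the rest of the world what happened"
        "contracts.renewal.requested",
        "contracts.renewal.completed",
        "contracts.renewal.rejected",
    ],
}
```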

This is where "application rationalization" gets a new twist:

  • Redundant UIs matter less than redundant or conflicting capabilities
  • We care less about how many portals we have, and more about how many sources of truth and overlapping actions we've created

The question stops being "How many apps do we have?" and becomes "How many ways do we do the same thing, and which one should the agent trust?"

T = Technology: Platforms for Tokens, Context, and Orchestration

At the Technology layer, we're used to thinking about:

  • Infrastructure (cloud, on-prem)
  • Networks
  • Identity and access
  • Observability

All of that remains. But in an AI-native enterprise, we add some new first-class citizens:

  • LLM gateways – how we manage which models we use, with what policies and routing (see the sketch after this list)
  • Retrieval and memory infrastructure – vector stores, knowledge indexes, context services
  • Agent orchestration – planners, tool routers, workflow engines
  • Policy engines – reusable services for "am I allowed to do this?"
  • Token and cost management – dashboards and controls around usage
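
Here's a minimal sketch of how three of these pieces, the gateway, the policy engine, and token accounting, might fit together. Every name and number in it (check_policy, route_model, the cost table) is invented for illustration:

```python
import time

# Illustrative per-1K-token prices; real numbers vary by provider and model.
MODEL_COSTS_PER_1K = {"small-fast": 0.0005, "large-reasoning": 0.01}


def check_policy(user: str, capability: str) -> bool:
    """Reusable 'am I allowed to do this?' check (stubbed for the sketch)."""
    return capability != "restricted.capability"


def route_model(prompt: str) -> str:
    """Naive router: long or complex requests go to the bigger model."""
    return "large-reasoning" if len(prompt) > 2000 else "small-fast"


def gateway_call(user: str, capability: str, prompt: str) -> dict:
    """Policy check, then route, then account for tokens and cost."""
    if not check_policy(user, capability):
        raise PermissionError(f"{user} may not invoke {capability}")
    model = route_model(prompt)
    tokens = len(prompt) // 4  # rough token estimate for the sketch
    cost = tokens / 1000 * MODEL_COSTS_PER_1K[model]
    # A real gateway would call the model here; we just return the accounting.
    return {"model": model, "tokens": tokens, "usd": cost, "ts": time.time()}
```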

Done well, this becomes a shared AI platform:

  • Business teams define new capabilities and experiences on top
  • Data teams manage the semantic and retrieval layers
  • App teams plug their domains in as tools
  • Technology teams ensure it's secure, observable, and resilient

This is where architecture is either:

  • A force multiplier, making it easy to build new AI-native experiences safely, or
  • A bottleneck, forcing every team to reinvent prompts, retrieval, and guardrails from scratch

In an AI-native world, "we don't have a platform" isn't just a technical gap. It's a competitive disadvantage.

How a Leading Organization Might Look in a Few Years

If we project this out a bit, a leading organization might have:

  • A Business capability map explicitly designed with AI agents in mind
  • A Semantic data layer that agents can reliably reason over
  • A Capability catalog that describes what each domain system can do, in machine-readable form
  • A Unified AI platform providing models, retrieval, orchestration, and policy
  • A Portfolio of AI experiences: some embedded in existing apps, some as standalone assistants, some deeply integrated into day-to-day workflows

And crucially: BDAT is not just an architecture slide. It's how the organization actually plans and prioritizes work:

  • Business: Which capabilities do we want to automate or augment next?
  • Data: What do agents need to see to do that safely?
  • Application: Which systems need to expose which capabilities?
  • Technology: What platform changes are required underneath?

Organizations that get there will be able to:

  • Spin up new AI experiences quickly, with confidence
  • Keep humans in control without grinding everything to a halt
  • Treat architecture as a strategic asset, not a compliance cost

They won't just be "using AI." They'll be operating an AI-native enterprise.

What to Do Now: A Pragmatic Roadmap

You don't need to transform everything at once. But you do need to start moving in this direction on purpose.

0–6 Months: Understanding and First Experiments

Business: Identify 3–5 high-value capabilities that are good candidates for AI assistance (not just chat, but actual work). Clarify what "good" looks like and where humans must stay in the loop.

Data: Map the data required for those capabilities. Identify gaps, quality issues, and sensitive elements that need protection.

Application: For the systems involved, list the key actions an agent would need (even if they don't exist as APIs yet). Start building a lightweight capability catalog, even if it's just a structured document.

Technology: Stand up an initial AI platform: model access, basic retrieval, logging. Run one agent-first pilot end to end, with explicit guardrails and monitoring.
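
As a sketch of the smallest version of that pilot loop: the agent proposes, a human approves, and every step is logged for monitoring. The propose_action stub stands in for the real model call; all names here are illustrative:

```python
import json
import logging
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pilot-agent")


def propose_action(task: str, context: list[str]) -> dict:
    """Stub for the real model call: returns a proposed action to review."""
    return {"task": task, "action": "route_to_team", "team": "enterprise-sales"}


def run_pilot(task: str, context: list[str],
              approve: Callable[[str], str] = input) -> Optional[dict]:
    proposal = propose_action(task, context)
    log.info("proposal: %s", json.dumps(proposal))   # audit trail
    if approve(f"Approve {proposal['action']}? [y/N] ").strip().lower() != "y":
        log.info("rejected by human reviewer")       # guardrail: human in the loop
        return None
    log.info("executing: %s", proposal["action"])    # execution would happen here
    return proposal
```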

6–18 Months: From Pilots to Platform

Business: Expand the capability map: where else would AI agents meaningfully change outcomes? Start measuring business value from early experiments, not just usage.

Data: Invest in a shared semantic layer: common vocabularies, curated domains, standardized events. Implement a governed retrieval layer for AI use cases (RAG with controls).

Application: Refactor at least one critical system into a true capability provider: clean APIs with semantics, events, and policy hooks. Standardize how new capabilities are documented and published.

Technology: Evolve the AI platform: model routing, policy enforcement, token and cost management, observability and audit for agents.

18–36 Months: Operating as an AI-Native Enterprise

At this stage, the goal is:

  • New business ideas are expressed in terms of capabilities for agents and humans
  • Data is managed as the memory of the enterprise, not just storage
  • Applications are understood and evolved as capability platforms, not just portals
  • Technology is delivering a stable, secure, efficient AI platform others can build on

And architecture is doing what it should always have done: translating strategy into structures, making it easier to build the right things safely, and turning the messy reality of systems into a coherent, composable foundation.

Closing Thought

We're used to thinking of AI as something we add into our existing stack: a feature here, a chatbot there.

The more honest framing is this:

We're standing up a new computer on top of everything we've already built.

It has a different CPU, a different kind of memory, and different bottlenecks. It will reshape how people experience technology and how work gets done.

The question for each of us is not, "Will this happen?" It's:

"Are we going to let this new computer be designed for us, or are we going to design it deliberately—aligned with our business, our data, our applications, and our technology?"

The organizations that use BDAT to design for this future, instead of reacting to it, will be the ones defining what "normal" looks like in a few years.

Everyone else will be busy trying to retrofit yesterday's architecture into tomorrow's computer—and wondering why it never quite fits.