If Part 1 was about the new computer—LLM as CPU, context as memory, knowledge and tools as disk—then Part 2 is about the uncomfortable follow-up:
If that's the computer, what does "software" even mean now?
For the last couple of decades, we've treated software as:
- A place you go
- A UI you log into
- A stack of screens and workflows you click through
We measure adoption in logins, active users, and time in app. We celebrate beautiful dashboards. We build roadmaps around new pages and features.
The AI-native computer does not care about any of that.
From the perspective of this new CPU:
- Software is a collection of capabilities it can call
- UIs are optional visualizations
- The primary interface is language and intent, not menus and buttons
In other words: your software is no longer the destination. It's becoming the tooling your AI front end uses to get real work done.
From Screens to Intent: Experiences Start with "What I Want"
In the traditional model, work begins with a destination:
- "Open the CRM."
- "Log into the ticketing system."
- "Go into the billing portal."
We force humans to learn where things live and how to drive each app.
In an AI-first model, work begins with intent:
- "Help me grow this account over the next quarter."
- "Clean up anything overdue in my queue and tell me what you couldn't resolve."
- "Plan capacity for this customer across regions given these constraints."
The person doesn't start by choosing an app. They start by telling an assistant what they're trying to achieve.
The assistant:
- Interprets the goal
- Decomposes it into steps
- Calls capabilities across systems
- Comes back with a synthesized view, options and tradeoffs, and proposed actions
The user might still click into a UI for detail or override, but the default path is conversation + agent, not navigation + forms.
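To make that loop concrete, here is a minimal sketch in TypeScript. Every name in it (`planSteps`, `callCapability`, `synthesize`) is a hypothetical stand-in for illustration, not any particular framework's API:

```typescript
// A minimal sketch (not a real framework) of the intent-driven loop:
// interpret the goal, decompose it into steps, call capabilities, synthesize.

interface Step {
  capability: string;              // e.g. "crm.listAccountActivity"
  input: Record<string, unknown>;  // arguments the capability needs
}

// Planning is typically an LLM call; stubbed here for illustration.
async function planSteps(goal: string): Promise<Step[]> {
  return [{ capability: "crm.listAccountActivity", input: { goal } }];
}

// Each capability maps to a real action in some backend system.
async function callCapability(step: Step): Promise<unknown> {
  return { called: step.capability, ok: true };
}

// Turn raw results into a synthesized view: options, tradeoffs, proposed actions.
async function synthesize(goal: string, results: unknown[]): Promise<string> {
  return `For "${goal}": ${results.length} step(s) completed; options follow...`;
}

async function handleIntent(goal: string): Promise<string> {
  const steps = await planSteps(goal);          // interpret + decompose
  const results: unknown[] = [];
  for (const step of steps) {
    results.push(await callCapability(step));   // call across systems
  }
  return synthesize(goal, results);             // come back with a synthesis
}
```

The important part is the shape: the user supplies a goal, and navigation happens inside the loop rather than in the user's head.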
Yesterday: "Which app should I open?" Tomorrow: "What outcome do I want?"
SaaS Unbundled: From Portal to Headless Capability Graph
Most SaaS products today are bundles of:
- Data models
- Business logic
- Screens and workflows
- Integrations and APIs
In an AI-native world, the key asset stops being the portal and starts being the capability graph:
- "Create quote"
- "Adjust contract terms within policy"
- "Provision cross-connect in this market"
- "Open incident and apply runbook X"
- "Run forecast for these scenarios"
An AI agent doesn't care about your menu structure. It cares about:
- What actions are available
- What they do
- What data they require
- What assurances they come with (policies, invariants, SLAs)
From the agent's perspective, your product is:
- A set of clean, composable actions
- With well-defined inputs/outputs
- Embedded in a web of business rules and constraints
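In practice, each of those actions needs a machine-readable description. Here is a hedged sketch of what one node in the graph might look like; the field names are assumptions for this sketch, not an established schema:

```typescript
// Illustrative shape of one node in a capability graph.
// Field names here are assumptions, not a standard.

interface Capability {
  name: string;         // e.g. "quote.create"
  description: string;  // what this really does, in business terms
  inputSchema: object;  // what data it requires (e.g. a JSON Schema)
  outputSchema: object; // what callers get back
  policies: string[];   // assurances: approval rules, limits, invariants
  sla: string;          // e.g. "idempotent on retry; p99 latency < 2s"
}

const createQuote: Capability = {
  name: "quote.create",
  description: "Create a draft quote for an account within current pricing policy.",
  inputSchema: { required: ["accountId", "lineItems"] },
  outputSchema: { returns: ["quoteId", "status"] },
  policies: ["discounts over 15% require human approval"],
  sla: "idempotent on retry; p99 latency < 2s",
};
```

Notice that the assurances (policies, SLAs) live on the capability itself, so an agent can weigh them before deciding to call it.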
Humans still need UIs, but those UIs become:
- Ways to inspect, explain, and override
- Places to design and configure rules
- Visualization surfaces for complex outcomes
They're not the only way—or even the main way—work gets done.
For software teams and vendors, that's a big mental flip:
The measure of quality shifts from "How delightful is the UI?" to "How usable and trustworthy are our capabilities to an AI front end?"
If your system looks great to humans but is opaque to agents, you're optimizing for the wrong audience.
Software Development: From Page Wiring to Behavior Design
If software is no longer primarily "screens," what do software teams actually do?
1. Designing Capabilities, Not Just Features
We move from:
"Add a new screen for manual invoice adjustment."
to:
"Define a safe, auditable capability for adjusting invoices, with clear policies, limits, and events."
That means:
- Explicit inputs and outputs
- Clear business semantics ("what this really does")
- Idempotency and consistency guarantees
- Events emitted so the rest of the ecosystem can "see" what happened
Features stop being just UI surface area. They become nodes in a capability graph.
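As a sketch, the invoice-adjustment capability above might expose contracts like these. All field names and the approval threshold are hypothetical, chosen only to illustrate the pattern:

```typescript
// Sketch of the invoice-adjustment capability's contracts.
// All names and the approval threshold are hypothetical.

interface AdjustInvoiceInput {
  invoiceId: string;
  amountDelta: number;    // positive or negative adjustment
  reasonCode: string;     // clear business semantics: why this happened
  idempotencyKey: string; // same key + same input => applied at most once
}

interface AdjustInvoiceResult {
  invoiceId: string;
  newTotal: number;
  requiresApproval: boolean; // policy: large adjustments escalate to a human
}

// Event emitted so the rest of the ecosystem can "see" what happened.
interface InvoiceAdjustedEvent {
  type: "invoice.adjusted";
  invoiceId: string;
  amountDelta: number;
  actor: string; // human user or agent identity
  at: string;    // ISO timestamp
}

const APPROVAL_THRESHOLD_USD = 500; // hypothetical policy limit

function adjustInvoice(input: AdjustInvoiceInput): AdjustInvoiceResult {
  // In a real system: apply the change transactionally, keyed on idempotencyKey,
  // and emit an InvoiceAdjustedEvent. Stubbed here for illustration.
  const requiresApproval = Math.abs(input.amountDelta) > APPROVAL_THRESHOLD_USD;
  return { invoiceId: input.invoiceId, newTotal: 0, requiresApproval };
}
```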
2. Designing Behaviors and Guardrails
Agents introduce new questions:
- Under what conditions should an agent act autonomously?
- When must a human approve?
- How should the agent trade off speed vs risk vs cost?
- What invariants must never be violated?
Software teams start to look more like behavior designers:
- Encoding policies in executable form
- Defining escalation paths and exception handling
- Designing "I'm not sure, here are your options" patterns
This is less about HTML and CSS, and more about rules, flows, and outcomes.
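Here is a minimal sketch of a policy in executable form, deciding when an agent may act autonomously. The fields and thresholds are invented for illustration:

```typescript
// Hypothetical executable policy: decide whether an agent may act on its own,
// must ask a human, or must stop. Thresholds and fields are illustrative only.

type Decision = "autonomous" | "human_approval" | "refuse";

interface ProposedAction {
  capability: string;       // which capability the agent wants to call
  estimatedCostUsd: number; // expected financial impact
  reversible: boolean;      // can we cleanly undo this?
}

function evaluateGuardrail(action: ProposedAction): Decision {
  // Invariant that must never be violated: no irreversible, high-cost actions.
  if (!action.reversible && action.estimatedCostUsd > 10_000) return "refuse";

  // Escalation path: risky-but-allowed actions need a human in the loop.
  if (!action.reversible || action.estimatedCostUsd > 1_000) return "human_approval";

  // Fast path: low-risk, reversible actions can run autonomously (and be logged).
  return "autonomous";
}
```

The point is that the tradeoff between speed, risk, and cost becomes a reviewable, testable function instead of tribal knowledge.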
3. Testing Becomes Scenario Simulation
Click-through tests won't be enough. We'll need:
- Simulations of many different user goals
- Agents trying different paths through the capability graph
- Assertions that policies were respected, invariants held, and risk thresholds weren't crossed
This looks less like testing a UI wizard and more like simulating an ecosystem.
You're not just asking, "Does this button work?" You're asking, "When an agent tries to achieve this goal, do we still like what happens?"
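A scenario-style test might look like the sketch below. `runScenario` and its result shape are hypothetical stand-ins for whatever harness drives an agent against sandboxed systems:

```typescript
// Sketch of a scenario-style test: give an agent a goal, let it pick a path
// through the capability graph, then assert on outcomes rather than clicks.

interface ScenarioResult {
  capabilitiesCalled: string[]; // the path the agent actually took
  policyViolations: string[];   // any guardrails that were breached
  totalSpendUsd: number;        // accumulated cost/risk during the run
}

async function runScenario(_goal: string): Promise<ScenarioResult> {
  // Stub: a real harness would execute the agent end to end for this goal.
  return { capabilitiesCalled: ["crm.listAccountActivity"], policyViolations: [], totalSpendUsd: 0 };
}

async function testGrowAccountScenario(): Promise<void> {
  const result = await runScenario("Grow this account over the next quarter");

  // We don't pin the exact path; we assert that we still like what happened.
  console.assert(result.policyViolations.length === 0, "policies respected");
  console.assert(result.totalSpendUsd <= 5_000, "risk threshold not crossed");
  console.assert(
    result.capabilitiesCalled.every((c) => c.startsWith("crm.") || c.startsWith("quote.")),
    "agent stayed within allowed systems"
  );
}
```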
A New Support Model
If AI assistants are the front door, support changes shape too.
Users will ask different questions:
- "Why did it recommend that?"
- "What systems did it touch to do this?"
- "Why couldn't it complete this step?"
Support teams and SREs will need tools to:
- Trace agent decisions across systems
- Reconstruct context for a given session
- See which capabilities were called and in what order
- Understand which policies or data led to a particular choice
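One way to make those questions answerable is to emit a structured trace event for every capability call. This shape is illustrative, not a standard:

```typescript
// One possible shape for an agent trace event, emitted on every capability call,
// so a session can be reconstructed step by step. Field names are illustrative.

interface AgentTraceEvent {
  sessionId: string;     // groups every step of one conversation or task
  sequence: number;      // ordering of calls within the session
  capability: string;    // which capability was called
  inputSummary: string;  // redacted view of what was passed in
  policyChecks: { policy: string; outcome: "allowed" | "blocked" }[];
  contextRefs: string[]; // which records or documents informed this decision
  at: string;            // ISO timestamp
}

// "Why did it recommend that?" becomes: replay the session's events in order.
function reconstructSession(events: AgentTraceEvent[], sessionId: string): AgentTraceEvent[] {
  return events
    .filter((e) => e.sessionId === sessionId)
    .sort((a, b) => a.sequence - b.sequence);
}
```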
We'll still need help desk support for specific apps and devices, but we'll also need:
- AI ops / Agent ops – people and tools focused on the reliability, safety, and quality of agents as a whole
- Context debugging – is the agent seeing the right information at the right time?
- Policy debugging – did rules fire correctly? Did they block too much or too little?
The "support surface" becomes not just the app, but the experience—the conversation, the decisions, and the outcome.
Healthy Questions to Ask About Your Current Portfolio
You don't need perfect answers yet, but these questions are worth asking:
- If an AI assistant wanted to use this system, what could it do? Can it see a clean list of capabilities, or is everything buried in UI flows and ad hoc scripts?
- How much of our "secret sauce" is UI polish vs hard business logic and guarantees? Which pieces would still matter if the UI disappeared?
- Do our systems emit rich events about what they're doing, or are they black boxes? If an agent took actions across five systems, could we reconstruct the story cleanly?
- Are our policies—approvals, limits, exceptions—expressed in code the system can enforce? Or are they trapped in slide decks, SharePoint sites, and people's heads?
- If we measured adoption by agent calls instead of logins, which systems would be "most important" in three years? Are we investing accordingly?
The answers to those questions will determine how well your current portfolio adapts to the new computer.
In a world of "users with agents," the systems that win are the ones that are easiest for agents to understand, trust, and compose.
Where Architecture Comes In
All of this points to one thing:
Architecture—done well—becomes a strategic advantage in an AI-native world.
Not architecture as in "how many layers in the diagram," but architecture as in:
- How clearly you express business capabilities
- How well you structure and govern data
- How intentionally you define and expose applications as headless tools
- How thoughtfully you design technology platforms underneath
That lines up nicely with a view many of us already use: BDAT—Business, Data, Application, Technology—in that order.
Part 3 will lean into that: How does BDAT look in an AI-native architecture? How should a leading organization be thinking today to enable these future experiences?
Because once AI is the front end, software is no longer the stage. It's the backline gear your new computer depends on—and architecture decides whether that gear makes the band sound tight or completely out of tune.