In Part 1, I explained what Results-as-a-Service is and showed where it's already working. Now we go deeper than pricing. This is the architecture that makes RaaS real.
If you take nothing else from this post, take this: RaaS is the commercialization of an execution loop. To sell outcomes, you must reliably define the outcome, execute work toward it, verify it happened, measure cost and risk, and repeat at scale.
This is why RaaS is possible now. Agents can execute. It's also why RaaS is hard. Outcomes require governance.
What RaaS Architecture Actually Looks Like
A real RaaS system needs five things you can point to in a diagram:
A Result Contract: What counts, what doesn't, how it's measured. This is the machine-readable spec that defines success.
An Orchestration Loop: How agents and tools execute deterministically. If you've read my piece on the stochastic core and deterministic shell, this will be familiar. The LLM proposes, the shell decides.
A Verification Layer: How "done" is proven. Not just "did the agent complete?" but "does this meet the billable outcome definition?"
A Results Ledger: An auditable record of what happened. Every outcome has a trace. Every tool call is recorded. This is how you answer "why did you bill me?"
Guardrails: Policies, risk tiers, approvals, safe fallbacks. The provider is carrying financial risk, so they need mechanisms to bound that risk.
The Outcome Control Loop
Most production agent systems follow a control loop, and I've written about this pattern before. What RaaS does is tighten the requirements on that loop, particularly around verification and billing.
Here's the loop:
It starts with a trigger: an event arrives. A support ticket, a failed payment, a contract renewal. This is where the clock starts for time-bounded contracts.
Then context building: gather what matters. Customer history, relevant documents, previous interactions. Only what's needed. If you've read my work on context engineering, you know this is memory management for the AI-native computer.
Next, the LLM plans: proposing steps and selecting tools. This is the stochastic part. The model reasons probabilistically and suggests what to do.
Then a policy check validates proposed actions against the Result Contract. Is this tool allowed? What's the risk tier? Does it require approval? In RaaS, the policy check isn't just generic rules. It references the specific contract for this outcome type.
If it passes, the agent acts by calling tools via deterministic APIs. Side effects are explicit. Actions are logged with trace IDs. The provider is spending cost to pursue the outcome.
Then observe: record decisions, tool calls, outputs, and errors. This builds the evidence bundle that will justify the bill. The shell writes the logs, not the model.
Next, verify: execute the verification signals defined in the Result Contract. Did the ticket resolve without escalation? Did the customer confirm? Has the reopen window passed? This is where RaaS gets real. Verification becomes contract-driven.
Finally, decide: based on verification, mark the outcome billable and update the Results Ledger, or mark it non-billable and log why. Retry if recoverable. Escalate if needed. This is the billing gate.
The key insight is that this loop stays deterministic even when the planner is not. The LLM sits inside one step. Everything else enforces structure. And in RaaS, four of the eight steps (the policy check, observe, verify, and decide) become "RaaS-critical": the places where a standard agent loop must be tightened for outcome-based billing.
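Putting the loop together, here is a minimal sketch in Python. Every name here (`Attempt`, `policy_check`, the dict-shaped contract) is an illustrative assumption, not a real framework API; the point is the division of labor: the plan is whatever the LLM proposed, and everything around it is deterministic shell code.

```python
# Minimal sketch of the outcome control loop. All names and the dict-shaped
# contract are illustrative assumptions, not a real framework.
from dataclasses import dataclass, field

@dataclass
class Attempt:
    trace_id: str
    events: list = field(default_factory=list)  # evidence bundle for the bill
    billable: bool = False

def policy_check(step, contract):
    """Deterministic gate: only contract-allowed tools, at or below the max tier."""
    tool = contract["allowed_tools"].get(step["tool"])
    return tool is not None and tool["risk_tier"] <= contract["max_auto_tier"]

def run_outcome_loop(trigger, contract, plan, tools, verify):
    attempt = Attempt(trace_id=trigger["id"])
    for step in plan:                                   # LLM proposed; shell decides
        if not policy_check(step, contract):            # policy check vs. contract
            attempt.events.append({"step": step, "blocked": True})
            continue
        result = tools[step["tool"]](**step["args"])    # explicit side effect
        attempt.events.append({"step": step, "result": result})  # observe
    attempt.billable = verify(attempt, contract)        # contract-driven verify
    return attempt                                      # caller records to ledger

# Usage with stubbed-in pieces:
contract = {"allowed_tools": {"search_kb": {"risk_tier": 0}}, "max_auto_tier": 2}
plan = [{"tool": "search_kb", "args": {"q": "refund policy"}},
        {"tool": "issue_refund", "args": {"amount": 20}}]  # not allowed: blocked
tools = {"search_kb": lambda q: f"kb results for {q}"}
attempt = run_outcome_loop({"id": "t-1"}, contract, plan, tools,
                           lambda a, c: any("result" in e for e in a.events))
print(attempt.billable)  # True: one allowed action succeeded, the refund was blocked
```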
The Result Contract: Where Most RaaS Hype Dies
Most "outcome-based" pricing fails because "outcome" is vague. So define it like engineers.
A Result Contract is a machine-readable spec that answers: What is the outcome event? What signals prove it? What disqualifies it? What is the time window? What actions are allowed and at what risk tier? What evidence must be stored?
Here's what a Tier-1 Support Resolution contract might look like:
```yaml
result_contract:
  name: "Tier-1 Support Resolution"
  outcome:
    event: "ticket_resolved"
    definition: "Resolved without human escalation"
    window: "72h"
  verification:
    required_signals:
      - "no_human_assignee"
      - "customer_confirmed OR llm_verification_passed"
      - "no_reopen_within: 7d"
  price_model:
    unit: "verified_resolution"
    price: 0.99
  guardrails:
    allowed_tools:
      - name: "search_kb"
        risk_tier: 0
      - name: "draft_reply"
        risk_tier: 1
      - name: "issue_refund"
        risk_tier: 3
        requires_approval: true
  logging:
    retention: "365d"
    required_artifacts:
      - "trace_id"
      - "tool_calls"
      - "verifier_decision"
      - "human_override_events"
```
This is the architectural point: RaaS is outcome contracts plus runtime enforcement.
Notice the verification block. Multiple signals reduce false positives. The 7-day reopen window catches premature resolutions. "Customer confirmed OR llm_verification_passed" provides fallback verification when customers don't respond.
Notice the guardrails. Each tool has a risk tier. High-risk actions like refunds require approval. This is how providers manage financial risk while still delivering outcomes.
Notice the logging. Every outcome needs a trace. Every tool call is recorded. Human overrides are tracked. This is how you answer "why did you bill me?"
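The verification block can be evaluated mechanically. A hedged sketch, with signal names flattened into plain booleans (an assumption for brevity; a real system would evaluate time windows and confirmation events, not precomputed flags):

```python
# Illustrative contract-driven verification: each required signal maps to a
# boolean, and "OR" clauses pass if either side does. Signal names are
# flattened versions of the contract's, an assumption for this sketch.
def verify_outcome(signals: dict, required: list) -> bool:
    def check(expr: str) -> bool:
        if " OR " in expr:
            return any(check(part) for part in expr.split(" OR "))
        return bool(signals.get(expr))
    return all(check(r) for r in required)

required = [
    "no_human_assignee",
    "customer_confirmed OR llm_verification_passed",
    "no_reopen_within_7d",
]
# Customer never replied, but LLM verification passed and the reopen window elapsed:
signals = {"no_human_assignee": True,
           "llm_verification_passed": True,
           "no_reopen_within_7d": True}
print(verify_outcome(signals, required))  # True: billable
```

Note how the `OR` fallback is what lets the outcome count without customer confirmation, exactly the false-positive tradeoff the contract has to make explicit.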
Orchestration: Why Agentic Doesn't Mean Autonomous
Enterprises won't adopt RaaS at scale if the only control mechanism is "prompt harder."
Orchestration must be boring: typed tool contracts, deterministic workflows, retries and timeouts, explicit human-in-the-loop gates. The LLM can propose. The shell decides.
And orchestration is where providers manage financial risk: cap cost per attempt, route low-confidence cases to humans, stop execution when quality degrades.
If your RaaS system doesn't have a failure story, it doesn't have a business model.
The risk tiers matter here. Tier 0 is observe: summarizing, classifying, extracting, drafting. No side effects. Tier 1 is recommend: propose actions with evidence but don't execute. Tier 2 is act with bounds: low-risk actions with strict validation. Tier 3 is high-risk: requires approval, dual control, or rollback capability. Tier 4 is autonomous: rare, only appropriate when failure is cheap and reversible.
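As one illustration, the tiers can be encoded as a deterministic routing gate. The enum values and return strings here are assumptions for this sketch, not a standard:

```python
# Illustrative mapping of the five risk tiers to runtime behavior.
from enum import IntEnum

class RiskTier(IntEnum):
    OBSERVE = 0      # no side effects: summarize, classify, extract, draft
    RECOMMEND = 1    # propose actions with evidence, don't execute
    ACT_BOUNDED = 2  # low-risk actions with strict validation
    HIGH_RISK = 3    # requires approval, dual control, or rollback
    AUTONOMOUS = 4   # rare: only when failure is cheap and reversible

def route_action(tier: RiskTier, approved: bool = False) -> str:
    if tier == RiskTier.OBSERVE:
        return "execute"                  # read-only, always safe
    if tier == RiskTier.RECOMMEND:
        return "propose_only"             # surface it, never act
    if tier == RiskTier.ACT_BOUNDED:
        return "execute_with_validation"
    if tier == RiskTier.HIGH_RISK:
        return "execute" if approved else "await_approval"
    return "execute_if_reversible"        # AUTONOMOUS: contract must permit it

print(route_action(RiskTier.HIGH_RISK))                 # await_approval
print(route_action(RiskTier.HIGH_RISK, approved=True))  # execute
```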
Most enterprises should live in Tier 0–2 for a while. That's not slow. That's sane.
The Data Perspective: Outcomes Are Data Products
RaaS forces a data maturity jump because measurement becomes contractual.
To buy or sell RaaS, you need event-level instrumentation (what happened), verification and evidence (why it counted), and trust (governance, lineage, auditability).
If AI is the front end, your systems become capability graphs. If outcomes are the product, your data becomes the ledger.
Minimum Data Architecture for RaaS
If you're consuming RaaS: define "golden outcome events" in a shared taxonomy, instrument both human and agent work, store evidence artifacts (not just metrics), and create an outcome mart for finance and procurement visibility.
If you're providing RaaS: build a customer-queryable results ledger, expose "why you billed me" evidence bundles, and measure unit economics at the outcome level (cost per verified outcome, not just cost per API call).
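For a sense of what "customer-queryable" might mean in practice, here is one possible shape for a ledger entry and its "why you billed me" evidence bundle. Every field name and value is illustrative:

```python
# One possible shape for a results-ledger entry. All field names and values
# are made up for illustration; the structure mirrors the contract's
# required_artifacts (trace, tool calls, verifier decision, overrides).
import json

ledger_entry = {
    "outcome_id": "res-20260114-0042",
    "contract": "Tier-1 Support Resolution",
    "billable": True,
    "price": 0.99,
    "evidence": {
        "trace_id": "trc-8f31",
        "tool_calls": [{"tool": "search_kb", "risk_tier": 0},
                       {"tool": "draft_reply", "risk_tier": 1}],
        "verifier_decision": {
            "no_human_assignee": True,
            "llm_verification_passed": True,
            "no_reopen_within_7d": True,
        },
        "human_override_events": [],
    },
}

def evidence_bundle(entry: dict) -> str:
    """What a customer sees when they ask 'why did you bill me?'"""
    return json.dumps(entry["evidence"], indent=2)

print(evidence_bundle(ledger_entry))
```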
Provider vs Consumer: Using RaaS Externally and Internally
There are three ways to engage with RaaS, and most organizations will eventually do all three. Each requires different thinking.
If You're Consuming RaaS
Your job is to prevent "outcome theater," the appearance of outcome-based pricing without the substance.
Outcome theater looks like this: a vendor says "pay per resolution," but their definition of "resolved" is "agent responded." Or they guarantee "qualified meetings," but "qualified" means "showed up for 30 seconds." The pricing sounds outcome-based, but the risk never actually transferred.
Procurement questions that matter: What exactly counts as billable? How is it verified, and by whom? What's the dispute process and timeline? What audit artifacts do we get per outcome? What happens when the model drifts or accuracy degrades? Who owns the data generated during outcome delivery?
If a provider can't answer those questions with specifics, you're not buying RaaS. You're buying hope with a different invoice format.
The deeper question for consumers is whether you're ready to receive outcomes. RaaS works best when you can clearly define what "done" means, when you have instrumentation to verify it independently, and when you're willing to cede some control over how the work gets done. If you're going to micromanage the process, you're not buying outcomes. You're buying labor with extra steps.
If You're Providing RaaS
Start narrower than you think. The temptation is to promise broad outcomes because they sound more valuable. Resist it.
The best early RaaS offerings share a pattern: high volume (enough attempts to smooth out variance), clear end state (you know when it's done), reversible actions (mistakes can be fixed), and minimal ambiguity (edge cases are rare or well-defined).
Support resolution fits this pattern. So does fraud detection, appointment scheduling, document classification, and data validation. Notice what doesn't fit: strategy consulting, creative work, or anything where "good" is subjective.
Provider playbook: pick one outcome with a clean contract, build verification and ledger first, then scale coverage with agents. Don't build "an agent." Build an outcome factory with quality gates, cost controls, and a clear definition of done.
The economics matter more than you think. If your cost to deliver an outcome is unpredictable, you can't price it sustainably. You need to know your cost per attempt, your success rate, your retry rate, and your escalation rate before you can set a price that works for both sides. This is why verification and observability come before scale, not after.
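Those four numbers are enough for a back-of-envelope check on whether a price works. A deliberately simplified model (it treats retries as a flat multiplier and amortizes escalation cost per outcome, both assumptions, and all the input numbers are invented):

```python
# Back-of-envelope unit economics: cost to deliver one *verified* outcome.
# The model and every input number are illustrative assumptions.
def cost_per_verified_outcome(cost_per_attempt: float,
                              success_rate: float,
                              retry_rate: float,
                              escalation_rate: float,
                              human_escalation_cost: float) -> float:
    attempts_per_outcome = (1 + retry_rate) / success_rate  # expected attempts
    agent_cost = attempts_per_outcome * cost_per_attempt
    human_cost = escalation_rate * human_escalation_cost    # amortized per outcome
    return agent_cost + human_cost

unit_cost = cost_per_verified_outcome(
    cost_per_attempt=0.08,     # LLM + tool spend per attempt (made up)
    success_rate=0.70,         # attempts that verify as billable
    retry_rate=0.20,           # recoverable failures retried
    escalation_rate=0.10,      # routed to a human instead
    human_escalation_cost=4.00,
)
print(round(unit_cost, 3))  # 0.537 — below a $0.99 price, so margin exists
```

The useful part is the sensitivity: drop the success rate from 0.70 to 0.50 and the same $0.99 price stops working, which is why verification quality is a pricing input, not an afterthought.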
Internal RaaS: The Model Enterprises Forget They Can Use
Here's the insight most organizations miss: RaaS is not only a vendor model. It's an internal operating model. And it might be more transformative inside your organization than outside it.
Think about how internal services work today. IT is a cost center. Legal is perpetually overwhelmed with requests. Data teams are overhead. Shared services are... shared, which usually means nobody is accountable for specific outcomes.
What if that changed?
Imagine internal teams offering explicit outcome contracts to their business partners. Not "we'll work on your tickets" but "we'll resolve P2 incidents within 4 hours, 95% of the time, and here's the evidence bundle for each one." Not "we'll review your contracts" but "we'll return risk-scored contracts within 48 hours with clause-level annotations, and you can see exactly what we caught."
This isn't a fantasy. It's how the best internal operations already think, just without the formal contract structure. RaaS gives you the framework to make it explicit.
Internal RaaS for IT Operations
Consider incident management. Today, IT tracks MTTR as a metric. With internal RaaS, IT offers "incident containment" as a productized outcome. The contract specifies: P1 incidents contained within 30 minutes, P2 within 2 hours, with containment defined as "impact isolated, no further degradation, and remediation path identified." Each incident gets a resolution receipt showing timeline, actions taken, and verification signals.
Suddenly IT isn't a cost center. It's an outcome provider. The "cost" isn't headcount, it's cost per contained incident. And when business units complain about IT, there's an evidence trail to examine rather than finger-pointing.
Internal RaaS for Data Teams
Data quality is the classic "everyone owns it, nobody owns it" problem. Internal RaaS changes the framing. The data platform team offers "data quality assurance" as a service: critical data products will maintain 99.5% accuracy against defined validation rules, with daily verification and immediate alerting on drift.
The outcome isn't "we ran the pipeline." It's "the data met the quality contract." Each delivery includes a quality receipt showing which checks passed, which failed, and what was done about failures. Data consumers know exactly what they're getting. Data producers have a clear standard to meet.
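A quality receipt like that can be as small as a dict of check scores against the contracted threshold. The check names and the 99.5% threshold here are illustrative:

```python
# Sketch of a daily quality receipt against an accuracy contract.
# Check names and the threshold are illustrative assumptions.
THRESHOLD = 0.995

def quality_receipt(check_results: dict) -> dict:
    failures = {k: v for k, v in check_results.items() if v < THRESHOLD}
    return {
        "passed": not failures,
        "checks": check_results,
        "failed_checks": sorted(failures),
        "action": "none" if not failures else "remediate_and_alert",
    }

receipt = quality_receipt({"row_completeness": 0.999,
                           "referential_integrity": 1.0,
                           "freshness_within_sla": 0.991})
print(receipt["passed"], receipt["failed_checks"])  # False ['freshness_within_sla']
```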
Internal RaaS for Legal and Compliance
Legal teams often describe themselves as drowning in contract review requests. Internal RaaS reframes this as a capacity problem with a solvable structure. Legal offers "contract risk assessment" as a productized outcome: standard commercial contracts reviewed within 72 hours, with risk score, flagged clauses, and recommended modifications.
The key is that "reviewed" has a definition. It means specific things were checked. The output includes evidence of what was examined. And the turnaround time is a commitment, not an aspiration.
Internal RaaS for HR and People Operations
Even traditionally "soft" functions can adopt outcome thinking. HR offers "position-to-offer" as a service: from approved headcount to signed offer letter within 30 days for standard roles, with stage-by-stage tracking and bottleneck identification.
The outcome isn't "we're working on your req." It's "here's exactly where your req is, here's what's blocking it, and here's our commitment on delivery."
Why This Matters More Than External RaaS
Internal RaaS challenges a core assumption: that internal services are fundamentally different from external ones. They're not. They have customers (business units), they deliver outcomes (or should), and they consume resources (which should be tied to value delivered).
The resistance you'll encounter is predictable. "We can't commit to outcomes because every request is different." (Then define categories with different contracts.) "We can't measure our work that precisely." (You can if you instrument it.) "This will create perverse incentives." (So does the current model, where busyness is rewarded and outcomes are invisible.)
The organizations that figure out internal RaaS will have a structural advantage. Their internal services will be measurable, accountable, and continuously improving. Their cost allocation will be tied to value, not headcount. And when they do adopt external RaaS, they'll know exactly what questions to ask, because they'll have answered them internally first.
Start with one team. Pick a function that already thinks in outcomes, even informally. Help them define a contract, build a verification mechanism, and track results. Let the model prove itself before scaling.
The deliverable is not a dashboard. It's a measurable result with evidence. That's the shift.
Predictions: What Changes in 2026 and Beyond
2026: The Year RaaS Goes Mainstream
In 2026, expect RaaS to become normal in three areas where the outcome is already well-defined and measurable.
Customer Operations will see resolution-based pricing expand beyond chat to voice, email, and social channels. The $0.99 per resolution model that Intercom pioneered will become table stakes. Vendors who can't articulate their resolution rate and verification methodology will lose deals to those who can. Expect consolidation as point solutions struggle to compete with platforms that can offer outcome guarantees across multiple channels.
Revenue Operations will shift from activity metrics to outcome metrics. "Emails sent" and "calls made" will give way to "qualified meetings booked" and "pipeline generated." SDR-as-a-Service offerings will mature, with pricing tied to meetings that actually happen with decision-makers who match the ICP. The controversial question: does this accelerate or slow the replacement of human SDRs? My bet is it accelerates it, because outcome-based pricing makes the ROI comparison brutally clear.
Risk and Compliance will see the first wave of warranty-backed AI services. Fraud detection with chargeback guarantees (already here via Riskified) will expand to compliance monitoring with penalty coverage and security operations with breach response SLAs. The insurance industry will figure out how to underwrite AI outcomes, creating a new category of outcome guarantees.
2027: RaaS Moves Into Operations
The second wave hits functions where outcomes are measurable but historically haven't been productized.
IT Operations will see MTTR (Mean Time to Resolution) become a purchasable outcome. Managed service providers will offer incident remediation guarantees: "P1 incidents resolved within 2 hours or the incident is free." Automated remediation will mature enough that providers can take financial risk on resolution times. The internal IT teams that adopted internal RaaS in 2026 will have a head start.
FinOps and Cloud Optimization will shift from "we'll find savings" to "we guarantee savings with a share of the upside." Cloud cost management becomes outcome-based: providers take a percentage of documented savings rather than charging flat fees. This aligns incentives properly, finally. Expect aggressive competition and rapid consolidation.
Legal Operations will see contract review and clause extraction offered with turnaround guarantees and accuracy warranties. The early movers will focus on high-volume, standard contract types where AI accuracy is already high. Custom or novel contract work stays human-led, but the 80% of contracts that are routine become outcome-priced.
Data Operations will productize data quality and freshness. "Your critical data products will meet these quality thresholds, verified daily, or we remediate and credit." Data platform teams will compete on outcome guarantees, not just features.
2028 and Beyond: The Outcome Economy
By 2028, outcome-based models will have expanded to the point where the question flips. Instead of "can this be outcome-priced?" the question becomes "why isn't this outcome-priced yet?"
The laggards will be functions where outcomes are genuinely subjective (creative work, strategy) or where measurement is politically difficult (executive performance, organizational effectiveness). But even these will face pressure as adjacent functions adopt outcome models.
The organizational implications are significant. Budgeting shifts from headcount and tools to outcomes purchased. Procurement develops new competencies around outcome contract negotiation. Finance builds new models for outcome-based cost allocation. And the enterprises that figured out internal RaaS first will have a structural advantage in navigating this transition.
The long-term winners won't be the companies with the best AI. They'll be the companies with the best outcome definitions, the cleanest verification systems, and the most trusted results ledgers. The control plane becomes the competitive moat.
If the outcome can be instrumented and verified, it can be productized. The only question is when.
The Punchline
When pricing is tied to outcomes, architecture stops being an internal concern. It becomes the product.
Because RaaS requires verifiable measurement, deterministic orchestration, auditability and evidence, and guardrails that keep risk bounded.
In 2026, the companies that win won't be the ones with the flashiest demos. They'll be the ones with the cleanest result contracts and the strongest control planes.
What outcome could you sell, or buy, if you could prove it every time?
← Back to Part 1: Why 2026 Is the Year Outcomes Become the Product