ODEI
ODEI Research Note · March 2026

The Governance Gap
in the Agentic Economy

Why Big Tech Must Self-Limit — and Why That Creates the Governance Gap

The best laboratories in the world will build the strongest foundation models. That is not in dispute. They can and will build powerful assistants. But a personal agent—one that remembers your goals, enforces your rules, acts on your behalf, and persists across time—creates legal obligations that force centralized platforms to structurally self-limit. The obstacle is not technical capability. It is that to survive legal and security constraints, platforms must impose hard ceilings: limited autonomy, strong confirmations, constrained scopes, and policy layers that protect the platform first.

This note examines the authority hierarchies embedded in every major AI platform, maps the specific legal barriers that prevent centralized deployment of true personal agents, and demonstrates why the governance layer must be architecturally independent from the intelligence layer.


I. The Authority Hierarchy

Every major AI lab publishes a document that defines who their model serves and in what order. These documents are not marketing. They are the constitutional law of each system—the binding specification that determines whose instructions prevail when interests conflict.

OpenAI: Root > System > Developer > User > Guideline > No Authority

OpenAI's Model Spec (December 2025) establishes an explicit six-tier principal hierarchy: Root > System > Developer > User > Guideline > No Authority. The model serves OpenAI itself (the “root” principal) first, then system-level instructions, then the developer who deploys the application, then the end user, then general guidelines, and finally a tier with no authority at all. The spec states directly: “when both are present in a conversation, the developer messages have greater authority” over user messages.

OpenAI Model Spec, Dec 2025

“Models should honor user requests unless they conflict with developer-, system-, or root-level instructions.”

The user is fifth out of six tiers. In any conflict between platform policy, developer configuration, and user preference, the user loses.

Anthropic: The Staffing Agency Model

Anthropic's constitution (January 2026) frames the relationship with unusual candor. The operator—the business deploying Claude—is positioned as the primary authority over the user:

Anthropic, “The Anthropic Guidelines,” Jan 2026

“The operator is akin to a business owner who has taken on a member of staff from a staffing agency, but where the staffing agency has its own norms of conduct that take precedence over those of the business owner.”

In this metaphor, Anthropic is the staffing agency whose norms override the operator's, the operator is the business owner, and the user is—at best—the customer of that business. The model is not your agent. It is the operator's employee, with Anthropic's norms taking precedence.

Google Mariner: Proactivity Under Lock

Google's Mariner browser agent launched in December 2024 with severe architectural constraints: active-tab-only execution, mandatory pauses at password and payment forms, pre-blocked action categories, and no background persistence. By Google I/O (May 2025), Mariner had been upgraded to a cloud-based VM architecture supporting up to 10 parallel background tasks—a significant capability increase. But the fundamental constraints remain: the mandatory pauses, the pre-blocked categories, and a platform-controlled execution environment. The autonomy ceiling is still enforced, just in a more capable form. These are structural responses to the liability of autonomous action at Google's scale.

Apple Intelligence: Privacy Without Agency

Apple's approach prioritizes privacy architecture (Private Cloud Compute) but delivers no meaningful agency. The system operates with no persistent memory, no autonomous execution, a 4K token context limit for on-device processing, and was delayed to 2026 for even basic personal context features. Apple has chosen to protect user data by not building the agent at all.

The user is structurally the least-trusted principal.
In every major AI platform, without exception.

The authority hierarchy is not an arbitrary design choice. It is the rational response to a legal environment that makes centralized personal agents existentially risky. The following analysis maps the specific regulatory barriers across six jurisdictional domains.


II. The Legal Barriers

A. EU AI Act (Regulation 2024/1689)

The EU AI Act enters full application on August 2, 2026. A personal agent deployed by a centralized platform triggers multiple high-risk classifications and compliance obligations that are structurally infeasible at scale.

Penalties: Non-compliance with high-risk AI system obligations carries fines of up to 3% of global annual turnover (or EUR 15 million, whichever is higher)—approximately $10.5B for Alphabet and $11.7B for Apple based on FY2024 revenues ($350B and $391B respectively). The higher 7% penalty tier applies only to prohibited AI practices under Article 5.

The regulatory trend extends beyond the EU. Colorado's SB 24-205, signed May 2024 and effective February 2026, became the first comprehensive US state AI legislation, imposing duties of care on deployers of high-risk AI systems and requiring impact assessments. The convergence of EU and US regulatory frameworks signals that governance obligations are not a regional anomaly—they are a global trajectory.

B. General Data Protection Regulation

The GDPR is not new, but recent enforcement actions have dramatically expanded its application to AI systems. A centralized personal agent faces four specific compliance failures: automated decision-making without human oversight (Art. 22), the right to explanation (Arts. 13–14), the right to erasure (Art. 17), and data portability (Art. 20).

C. Product Liability (Directive 2024/2853)

The revised EU Product Liability Directive, adopted in 2024, fundamentally changes the liability landscape for AI software.

D. Agency Law and Fiduciary Duty

Courts have begun applying traditional agency and fiduciary principles to AI systems, creating direct liability for platforms whose AI acts on behalf of users.

User permission is not platform authorization.
The first federal court to say so ruled in March 2026.

This ruling establishes a structural boundary for the entire agentic economy: cross-service autonomy requires platform-by-platform authorization. A universal personal agent that acts across services will be forced into either partnered integrations, read-only access, or behavior that courts may treat as unauthorized. Governance independence is necessary but not sufficient.

Section 230 likely does not apply. The strong emerging scholarly and legislative view holds that the safe harbor protections of Section 230 of the Communications Decency Act—which cover platforms hosting user-generated content—do not extend to autonomous agent actions. An autonomous agent taking actions on behalf of a user is platform-initiated conduct, not user content. Senator Ron Wyden and former Rep. Chris Cox, Section 230's co-authors, have both stated it was not intended to protect generative AI outputs. As analyzed in Harvard Law Review Vol. 138 (2025), this view is gaining broad support among legal scholars—though no court has yet issued a definitive ruling.

Ayres & Balkin, U. Chicago Law Review (2024)

“People should not be able to obtain a reduced duty of care by substituting AI for a human agent.”

E. Financial Regulation

Financial regulators have moved from monitoring to active enforcement against AI-driven systems.

F. Discovery and Insurance

Insurance coverage does not yet exist. The AI agent insurance market remains nascent. AIUC, the most prominent entrant (seed round July 2025), projects the market will reach $500B by 2030—a founder's aspirational projection; independent estimates are dramatically lower (Deloitte projects approximately $4.8 billion by 2032). Regardless of market size, no proven coverage products exist today. Platforms deploying autonomous personal agents are operating without an insurance backstop.

The AI agent insurance market is forming around exactly this gap. ODEI is building the governance substrate that future insurers and auditors will require before they underwrite autonomy at meaningful scale. Compliance is not a cost center — it is a market-creation strategy.

The legal barriers are not temporary.
They are structural consequences of centralized deployment.

III. The Structural Conflict

The authority hierarchy and legal barriers are not independent phenomena. They are two expressions of a single structural conflict: a centralized platform cannot simultaneously serve its shareholders, its regulators, its developers, and each individual user as a fiduciary.

Dimension      What Big Tech Provides                     What a Personal Agent Requires
-----------    ----------------------------------------   -----------------------------------------
Authority      Platform > Developer > User                User is the sole principal
Memory         Session-scoped or platform-controlled      User-owned persistent world model
Autonomy       Hard-coded proactivity ceilings            User-defined policy-governed execution
Loyalty        Dual: user engagement + ad revenue         Unilateral fiduciary to the individual
Economics      Attention monetization, data gravity       User pays for infrastructure, owns output
Persistence    Platform-dependent, revocable              Portable, model-agnostic, user-controlled

The loyalty conflict is now quantifiable. OpenAI introduced advertising in ChatGPT in 2026, with projected ad revenue of $1 billion in 2026 growing to $25 billion by 2029. This revenue stream is projected to offset staggering computational costs expected to reach $115 billion by the end of the decade, creating irreversible economic pressure toward attention monetization. When an AI system serves two masters—the user seeking honest recommendations and the advertiser seeking attention—the user's interests will lose wherever the conflict is invisible. And in a system built on probabilistic language generation, the conflict is always invisible.

As a personal agent moves from simple information retrieval toward persistent profiling and autonomous financial execution, the legal risk profile shifts from moderate compliance burdens to structurally impossible liability exposure across every jurisdiction analyzed above.

The principal hierarchy is not a bug in the Model Spec.
It is the accurate expression of who these systems actually serve.

IV. ODEI's Architectural Solution

ODEI resolves the governance gap by separating the governance layer from the intelligence layer. The intelligence layer—the large language model—is treated as a replaceable, probabilistic reasoning engine: an untrusted advisor. The governance layer is entirely deterministic, fully auditable, and owned by the individual. Memory, policy, audit trails, and execution control live in the governance layer—never delegated to the model.

This architectural separation is what makes legal compliance structurally possible. For each barrier identified above, ODEI's design provides a specific resolution:

Agency and Liability

The user owns the World Model (goals, constraints, policies) and the constitutional rules that govern agent behavior. ODEI provides infrastructure—the graph database, the execution loop, the MCP interface—but does not determine agent behavior. The user is the principal. ODEI is the toolmaker, not the agent.

Automated Decision-Making GDPR Art. 22

ODEI's Guardian layer is a primarily deterministic rule engine, with optional embedding-assisted alignment scoring, that evaluates every proposed action against user-authored constitutional rules. The Proposal workflow—where the LLM proposes and the Guardian approves or rejects—ensures that no action proceeds without verifiable policy compliance. This is not prompt-based safety. It is architectural human-in-the-loop.
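A minimal sketch of this Proposal workflow, assuming hypothetical `Action` and `Rule` types (the actual Guardian API is not specified here): the model proposes an action, and a deterministic rule engine approves or rejects it before anything executes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Action:
    kind: str            # e.g. "purchase", "email" (illustrative categories)
    amount: float = 0.0
    target: str = ""

@dataclass(frozen=True)
class Rule:
    name: str
    predicate: Callable[[Action], bool]  # True means the action violates the rule

class Guardian:
    """Deterministic gate: every proposed action is checked against
    user-authored rules before execution. No model call is involved."""
    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def evaluate(self, proposed: Action) -> tuple[bool, list[str]]:
        violations = [r.name for r in self.rules if r.predicate(proposed)]
        return (len(violations) == 0, violations)

# User-authored constitutional rules (illustrative).
rules = [
    Rule("spend_cap_50", lambda a: a.kind == "purchase" and a.amount > 50),
    Rule("no_social_posts", lambda a: a.kind == "post"),
]
guardian = Guardian(rules)

approved, why = guardian.evaluate(Action(kind="purchase", amount=120))
# approved is False; why == ["spend_cap_50"]
```

Because the predicates are ordinary code rather than prompt instructions, the same proposal always yields the same verdict, and the verdict is testable in isolation.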

Right to Explanation GDPR Art. 13–14

ODEI's architectural pattern for provenance is the chain: Signal → Decision → Action → Outcome. Each node carries metadata (timestamp, source, confidence, policy reference). When a user asks “why did my agent do this?”, the answer is a traversable graph path—not a post-hoc rationalization generated by the same model that made the decision.
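The chain can be modeled as a small causal graph; the node IDs and metadata below are illustrative, not ODEI's actual schema. The point is that the answer to "why?" is a graph walk, not generated text.

```python
# Each node stores its metadata and a link to the node that caused it,
# forming the chain Signal -> Decision -> Action -> Outcome.
nodes = {
    "sig-1": {"type": "Signal",   "caused_by": None,    "source": "calendar"},
    "dec-1": {"type": "Decision", "caused_by": "sig-1", "policy": "rule:meeting_prep"},
    "act-1": {"type": "Action",   "caused_by": "dec-1", "detail": "booked focus block"},
    "out-1": {"type": "Outcome",  "caused_by": "act-1", "status": "confirmed"},
}

def explain(node_id: str) -> list[str]:
    """Walk the causal chain backwards to answer 'why did my agent do this?'"""
    chain = []
    while node_id is not None:
        node = nodes[node_id]
        chain.append(f"{node['type']}({node_id})")
        node_id = node["caused_by"]
    return list(reversed(chain))  # present the chain Signal-first

print(explain("out-1"))
# ['Signal(sig-1)', 'Decision(dec-1)', 'Action(act-1)', 'Outcome(out-1)']
```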

Right to Erasure GDPR Art. 17

In ODEI, deleting a graph node is a guaranteed, verifiable erasure. The node is removed from the knowledge graph; all edges are severed; the agent can never access or act on that information again. This is mathematically impossible with LLM weight matrices, where information is distributed across billions of parameters. Structured graph memory makes the right to erasure architecturally enforceable.
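A toy illustration of edge-severing erasure, with assumed node IDs and attributes (ODEI's actual graph store is not specified here): once the node and its incident edges are gone, no traversal can reach the deleted fact.

```python
class WorldModel:
    """Minimal node/edge store standing in for a knowledge graph."""
    def __init__(self):
        self.nodes: dict[str, dict] = {}
        self.edges: set[tuple[str, str]] = set()

    def add(self, node_id: str, **attrs) -> None:
        self.nodes[node_id] = attrs

    def link(self, a: str, b: str) -> None:
        self.edges.add((a, b))

    def erase(self, node_id: str) -> None:
        """Art. 17-style erasure: drop the node and sever every incident edge."""
        self.nodes.pop(node_id, None)
        self.edges = {(a, b) for (a, b) in self.edges
                      if a != node_id and b != node_id}

wm = WorldModel()
wm.add("fact-7", text="user allergic to peanuts", source="chat")
wm.add("goal-1", text="plan dinner")
wm.link("goal-1", "fact-7")

wm.erase("fact-7")
assert "fact-7" not in wm.nodes and not wm.edges  # nothing left to act on
```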

Data Portability GDPR Art. 20

ODEI's World Model is model-agnostic by design. The governance layer communicates with LLMs through the Model Context Protocol (MCP)—a standardized interface. The user's memory, policies, and decision history are portable: exportable, transferable, and independent of any specific model provider.

MCP, an open standard originally introduced by Anthropic, enables secure two-way connections between AI applications and external data sources while operating on a principle of least privilege—tools expose only what the model requires for a specific task. Emerging complementary frameworks extend this foundation: the Agent-to-Agent (A2A) protocol enables multi-agent coordination across organizational boundaries, and WebMCP (W3C Web Machine Learning Community Group) creates browser-native APIs for AI agent interaction, collectively forming the interoperability substrate that makes genuine data portability enforceable rather than aspirational.
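Under these assumptions, portability reduces to a provider-neutral serialization of the user's graph. The JSON layout below is illustrative, not a defined ODEI or MCP format:

```python
import json

def export_world_model(wm_nodes: dict, wm_edges: set) -> str:
    """Serialize the user's memory graph to a provider-neutral format:
    plain JSON that any runtime can import, independent of any model vendor."""
    return json.dumps({
        "nodes": wm_nodes,
        "edges": sorted(list(e) for e in wm_edges),
    }, sort_keys=True, indent=2)

nodes = {"goal-1": {"text": "plan dinner"}}
edges = {("goal-1", "fact-7")}
blob = export_world_model(nodes, edges)

restored = json.loads(blob)
assert restored["nodes"] == nodes  # round-trips without loss
```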

Permission Topology

Cross-platform autonomy requires more than model-agnostic architecture. ODEI's governance layer implements a permission topology — a structured doctrine for how the agent interacts with external services: authorized APIs and delegation flows where platforms offer them, read-only access where they do not, and interactive user control for actions that carry legal risk. The March 2026 Amazon v. Perplexity ruling makes this architectural requirement explicit: governance independence must be paired with a realistic integration doctrine.
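One way to encode such a topology, with assumed service names and access tiers (the actual doctrine is a design claim, not a published API), is a static table consulted before any outbound action:

```python
from enum import Enum

class Access(Enum):
    DELEGATED = "authorized API / delegation flow"
    READ_ONLY = "observe but never act"
    INTERACTIVE = "user confirms each action"

# Illustrative topology; service names and tier assignments are assumptions.
topology = {
    "calendar-api": Access.DELEGATED,    # platform offers OAuth-style delegation
    "retail-site":  Access.READ_ONLY,    # no agent authorization offered
    "bank":         Access.INTERACTIVE,  # legal risk: human stays in the loop
}

def permitted(service: str, wants_to_act: bool, user_confirmed: bool) -> bool:
    mode = topology.get(service, Access.READ_ONLY)  # default: least privilege
    if not wants_to_act:
        return True                    # reads are always allowed in this sketch
    if mode is Access.DELEGATED:
        return True
    if mode is Access.INTERACTIVE:
        return user_confirmed
    return False                       # READ_ONLY: autonomous action is refused
```

Defaulting unknown services to read-only is the design choice that keeps the agent inside platform-by-platform authorization boundaries.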

Provability

The real advantage is not safety claims — it is provability. Every legal pressure maps to an artifact ODEI can produce: EU AI Act human oversight obligations map to explicit intervention points and override logs. GDPR Article 22 exposure maps to workflow records showing which steps were automated versus user-approved. Product liability for learning behavior maps to bounded, testable post-deployment constraints. The governance layer is not a promise — it is an evidence production system.

Discovery and Audit

ODEI is local-first. The user owns an append-only audit trail of all agent actions. In a discovery scenario, the user controls their own data—the platform has no centralized log of millions of agent interactions to be compelled in litigation.
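An append-only trail can be made tamper-evident by hash-chaining entries. This sketch uses SHA-256 and illustrative record fields, not ODEI's actual log format:

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log: each entry commits to its
    predecessor, so silent edits or deletions break the chain."""
    def __init__(self):
        self.entries: list[dict] = []

    def append(self, record: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"action": "send_email", "approved_by": "rule:low_risk"})
log.append({"action": "purchase", "approved_by": "user_confirmation"})
assert log.verify()

log.entries[0]["record"]["action"] = "tampered"
assert not log.verify()  # tampering is detectable
```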

ODEI separates the governance layer (deterministic, auditable, user-owned) from the intelligence layer (LLM, probabilistic, replaceable).

This separation is what makes compliance architecturally possible rather than operationally aspirational.


V. Research Theses

Thesis I The next bottleneck in AI is not intelligence. It is persistence.

Today's models are immensely powerful, but most systems remain session-based. Expanding context windows to millions of tokens does not engineer statefulness; it merely creates an overburdened, immensely expensive, and fundamentally stateless oracle. True agents require a Continuous Execution Loop where context, policy, and consequences strictly compound across temporal boundaries.
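A schematic of such a loop, with a stand-in proposal step and a denied-set in place of a real policy engine: state is threaded through successive sessions instead of being rebuilt from a context window.

```python
# Sketch of a Continuous Execution Loop. `signals`, the proposal logic,
# and the denied-set are all illustrative stand-ins for real components.
def run_loop(state: dict, signals: list[str]) -> dict:
    for signal in signals:
        state["observations"].append(signal)      # observe
        proposal = f"handle:{signal}"             # propose (model stand-in)
        if proposal not in state["denied"]:       # deterministic policy gate
            state["actions"].append(proposal)     # act
        state["iterations"] += 1                  # consequences compound
    return state

state = {"observations": [], "actions": [], "denied": {"handle:risky"},
         "iterations": 0}
state = run_loop(state, ["email", "risky"])
state = run_loop(state, ["invoice"])  # same state, later session
```

The second call resumes from the first call's state, which is the property a stateless oracle cannot provide however large its context window grows.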

Thesis II Memory without governance is recall, not agency.

The $2B+ agent infrastructure market has heavily funded vector databases and RAG frameworks. But retrieving a past preference without a strict rule engine for execution creates liability, not autonomy. If an AI retrieves a memory and acts on it without verifiable policy checks, it creates a catastrophic risk vector. A real personal agent needs structured memory, deterministic policy, scoped permissions, and verifiable execution.

Thesis III A true personal agent requires a Persistent Digital World Model.

This model does not simulate physical reality. It represents the human–agent decision environment. It must be an auditable ledger containing goals, constraints, decisions, signals, factual provenance, and approved policies. If an agent hallucinates a fact, the user must be able to open the Digital World Model, locate the errant node, delete it, and mathematically guarantee the agent will never act upon it again. Opaque neural weights cannot offer this deterministic guarantee.

Thesis IV The personal AI layer must be owned by the individual.

The foundational model layer will inevitably centralize. The personal layer cannot. Big Tech's economics tilt toward in-platform retention and siloed data gravity. Your agent's memory, governance, and execution logic must remain portable across external models and providers, enforcing your policies across third-party systems without vendor lock-in.

Thesis V Autonomous agents require constitutional control.

Once an agent can observe, decide, act, and verify, it needs explicit, user-authored rules. Without strict, non-LLM governance checking every execution step, autonomous agents suffer from cascading failures—minor errors compounding into destructive multi-step actions. Governance must be an architectural guarantee natively decoupled from the LLM, not a probabilistic convention based on prompt instructions.

Thesis VI Personal agents are the wedge. Agent infrastructure is the scaling layer.

The first step is a governed personal agent operating on behalf of the individual. The next step is coordination. Once established, these personal governance primitives generalize into multi-agent coordination, laying the required groundwork for secure Agent-to-Agent economic networks.


Models will be centralized.
Personal AI infrastructure will not.

The legal barriers are not temporary. They are structural consequences of how centralized platforms must operate. The authority hierarchy is not a design flaw—it is the accurate legal expression of platform economics. That is why the governance layer must be independent.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council (EU AI Act), Arts. 6, 9, 14, 43, Annex III. eur-lex.europa.eu
  2. Regulation (EU) 2016/679 (General Data Protection Regulation), Arts. 13, 14, 17, 20, 22, 25. eur-lex.europa.eu
  3. CJEU, Case C-634/21, SCHUFA Holding, Dec. 7, 2023. curia.europa.eu
  4. Directive (EU) 2024/2853 of the European Parliament and of the Council (Product Liability Directive). eur-lex.europa.eu
  5. OpenAI, “Model Spec,” Dec. 2025. model-spec.openai.com
  6. Anthropic, “The Anthropic Guidelines,” Jan. 2026. anthropic.com
  7. Mobley v. Workday, Inc., No. 23-cv-770 (N.D. Cal.). justia.com
  8. Moffatt v. Air Canada, 2024 BCCRT 149 (BC Civil Resolution Tribunal). bccrt.bc.ca
  9. Amazon.com, Inc. v. Perplexity AI, Inc., No. 24-cv-7675 (N.D. Cal., 2025–2026). cnbc.com
  10. Ayres, I. & Balkin, J.M., “The Law of AI is the Law of Risky Agents Without Intentions,” University of Chicago Law Review Online (2024). lawreview.uchicago.edu
  11. FINRA, 2026 Annual Regulatory Oversight Report. finra.org
  12. SEC, In the Matter of Two Sigma Investments, LP, Admin. Proc. File No. 3-22345 (Jan. 2025). sec.gov
  13. In re OpenAI Copyright Litigation, No. 25-md-3143 (S.D.N.Y., Nov. 2025; aff’d Jan. 2026). courtlistener.com
  14. Irish Data Protection Commission, Decision re: Meta Platforms Ireland Ltd., EUR 251M fine, Dec. 2024. dataprotection.ie
  15. Garante per la Protezione dei Dati Personali, Decision re: OpenAI, EUR 15M fine, Dec. 2024. garanteprivacy.it
  16. Irish Data Protection Commission, Order re: X Corp. (Grok AI data processing), Aug. 2024. dataprotection.ie
  17. Harvard Law Review, Vol. 138, “Beyond Section 230: Principles for AI Governance” (2025). harvardlawreview.org
  18. ZwillGen, “The Fiduciary in the Machine: AI Agents and the Law of Agency.” zwillgen.com