Agent accountability layer

When AI makes a call, who signs it?

Ovrule gives every agent decision a receipt, a risk trace, and a path to challenge it.

Try a case

Case file preview

Refund escalation

Refused
Missing approval and authority

A support agent cannot clear a $5,000 refund when the policy requires manager approval above $500.

Receipt

b18f4a72f0ce

6a5f7ab8-e42b-4480...

Rules tripped

Authorization

Impact scope

Safety

Refused
Built on OpenAI GPT-4o-mini · SHA-256 verifiable receipts · Open source on GitHub

How it works

Three moves from action to audit trail.

01

Propose the action

Describe what the agent wants to do in plain English, including the claimed goal and any known permission basis.

02

Audit across 6 rules

Ovrule scores the action against safety, authorization, causal validity, reversibility, impact scope, and consent.

03

Receipt, challenge, override

Every outcome becomes a verifiable case file with a hash, trace, challenge history, and reviewer actions.
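One way such a SHA-256 verifiable receipt could work (a sketch under assumptions, not Ovrule's actual scheme): hash a canonical serialization of the case file, so anyone holding the same record can recompute the same hash and confirm nothing was altered. The field names below are illustrative.

```python
import hashlib
import json

def receipt_hash(case_file: dict) -> str:
    """Hash a case file deterministically so anyone can re-verify it.

    Canonical JSON (sorted keys, no extra whitespace) guarantees the
    same bytes, and therefore the same SHA-256 digest, on every check.
    """
    canonical = json.dumps(case_file, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative case record, mirroring the refund scenario above.
case = {
    "action": "Refund $5,000 to customer after angry escalation",
    "verdict": "REFUSED",
    "rules_tripped": ["authorization", "impact scope", "safety"],
}

print(receipt_hash(case)[:12])  # a short prefix, like the receipt preview
```

Because key order is normalized before hashing, two parties who store the same fields in a different order still agree on the receipt.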

Demo scenarios

Six cases that show what gets approved, questioned, or stopped.

These are the kinds of agent actions people actually argue about: customer refunds, deployment rollbacks, auto-purchases, personal commitments, emotional messaging, and privacy cleanup.

Large refund · REFUSED

A support agent wants to refund $5,000 to a customer after an angry escalation, even though the policy requires manager approval above $500.

Why: The amount is high and the agent lacks clear authorization for that class of action.

Prod hotfix · ADMISSIBLE

A devops agent wants to roll back a broken deployment that is causing checkout failures, using a tested rollback plan and an existing on-call approval policy.

Why: The action is authorized, targeted, and reversible under an established incident process.

Auto checkout · AMBIGUOUS

A shopping agent wants to auto-check out a cart 24 hours after the user said “remind me later,” using the saved card already on file.

Why: The purchase intent and consent boundary are unclear even if payment details already exist.

Message landlord · REFUSED

A personal assistant agent wants to send the user’s landlord a message saying they accept a rent increase because “it seems unavoidable.”

Why: The agent is making a consequential commitment on the user’s behalf without explicit approval.

Distress reply · AMBIGUOUS

A dating app agent wants to draft and send a reply in the user’s voice after the other person says they are in emotional crisis and feel alone.

Why: The response could help, but the emotional stakes and consent or authenticity issues make this borderline.

Erase clip · ADMISSIBLE

A creator-platform agent wants to remove a livestream clip after the speaker privately reports it includes accidental medical information they did not mean to share.

Why: The action reduces harm, aligns with the affected party’s request, and is narrowly scoped.

Live tool

Bring your own scenario. Ovrule opens the file.

Submit an agent action and the case unfolds in order: verdict first, then evidence, then the receipt and revision trail.

Show the action you want reviewed

What is the agent about to do, exactly?

Write the action the agent wants to take and anything it thinks gives it permission.

No case yet. Describe an agent action and Ovrule will open the file.

About

Accountability should be native to agent systems, not an afterthought.

AI agents already change prices, send messages, move money, and refuse requests. Most of those decisions happen without a durable record of what the agent believed, what authority it had, or what risk it created.

Ovrule turns each proposed action into a case file. Instead of a bare output, you get a verdict, a rule trace, evidence used, missing information, a receipt hash, and a timeline for contests and reviewer overrides.
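A case file carrying those fields might be shaped roughly like this (a hypothetical sketch; the field names and verdict labels follow the page, not a published Ovrule schema):

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    ADMISSIBLE = "admissible"   # action may proceed
    AMBIGUOUS = "ambiguous"     # needs human judgment
    REFUSED = "refused"         # action is blocked

@dataclass
class CaseFile:
    action: str                   # what the agent proposed, in plain English
    verdict: Verdict              # outcome of the rule audit
    rule_trace: dict              # rule name -> finding
    evidence: list                # facts the audit relied on
    missing_info: list            # what could change the verdict
    receipt_hash: str             # SHA-256 digest over the case record
    timeline: list = field(default_factory=list)  # contests, overrides
```

The timeline starts empty and accumulates challenge and reviewer-override events, so the record stays one durable object rather than a bare output.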

It is built for teams shipping agent workflows in support, operations, growth, finance, and governance, where explaining why an action was blocked matters as much as the block itself.