Your AI agents already read source, run shells, call tools, and ship code to production. Provedit gives you one provable record of every action they take: which agent, on whose authority, whether it was allowed, and who approved it. Across every vendor, in one timeline.
Cursor, Claude Code, Copilot agent mode, OpenAI Assistants, LangChain workers, your own MCP-driven services: they open files, run shells, call tools, change data, and deploy to production. And they do it with credentials originally issued to a human developer or a service account, so the record of what happened points back at the person whose token was borrowed, not at the agent that used it.
Every vendor logs its own slice. None of them follows the action across another vendor's pipeline. None of them records the policy decision or the human approval as evidence bound to the action it authorised. What you are left with is a pile of partial logs that nobody can stitch back together when an auditor, a regulator, or an incident lands on your desk.
Three questions then become very hard to answer:
Provedit is built to answer all three, plus five more, for every single action your agents take.
Every action that lands in Provedit shows up with the same eight questions already answered, each with a status badge and the evidence behind it. Your analysts stop writing queries and start reading verdicts.
Identity model and session metadata, not just a credential.
Policy engine decision persisted in the entry hash.
Normalised target, payload hash, drill-down to blob.
~30 normalised action types across read, write, exec, network, secret, IAM, deploy.
Signed policy.approve bound to the original action by entry hash.
Per-agent rolling baseline plus anomaly flags.
Sensitive-target hints, exfiltration patterns, allowlisted-host check.
Hash chain plus periodic Merkle anchors plus signed root.
Same eight answers, every action. Here is how the platform produces them.
One schema feeds one recorder, which writes to one verifiable ledger. Three steps for every action.
Signed events arrive from MCP proxies, CI SDKs, IDE extensions, host sensors, and cloud-audit forwarders. They all speak the same schema, so adding a new agent vendor is a config change, not a rewrite.
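A collector event in that shared schema might look like the following minimal Python sketch. Every field name here is illustrative, chosen to mirror the eight answers above, not Provedit's published schema:

```python
import hashlib
import time
import uuid

def make_event(agent_id: str, session_id: str, action_type: str,
               target: str, payload: bytes) -> dict:
    """Build a collector event in the shared schema (illustrative fields)."""
    return {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "agent_id": agent_id,         # stable agent identity, not a borrowed token
        "session_id": session_id,     # groups actions into one agent session
        "action_type": action_type,   # one of the normalised types, e.g. "exec.shell"
        "target": target,             # normalised target: a path, host, or resource
        "payload_hash": hashlib.sha256(payload).hexdigest(),  # drill-down to blob
    }

event = make_event("agent-42", "sess-7", "exec.shell", "/bin/sh", b"ls -la")
```

Because every collector emits this one shape, a new agent vendor only needs a collector that fills in these fields.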
The recorder classifies the action, evaluates policy (allow, deny, or require approval), scores it against the agent's normal behaviour, hash-chains the entry, persists it, and returns the outcome. All in one atomic step.
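As a rough sketch of that atomic step, assuming the policy engine and baseline scorer are injected as callables (all names here are hypothetical, not the real implementation):

```python
import hashlib
import json

class Recorder:
    """Minimal sketch of the single recording step described above."""

    def __init__(self, policy, baseline):
        self.policy = policy      # callable: event -> "allow" | "deny" | "require_approval"
        self.baseline = baseline  # callable: event -> anomaly score for this agent
        self.chain = []           # the append-only ledger
        self.head = "0" * 64      # hash of the previous entry (genesis = all zeros)

    def record(self, event: dict) -> dict:
        decision = self.policy(event)    # evaluate policy in line
        score = self.baseline(event)     # score against the agent's normal behaviour
        entry = {
            "event": event,
            "decision": decision,        # the policy decision is persisted in the hash
            "anomaly_score": score,
            "prev_hash": self.head,      # hash-chain link to the previous entry
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(entry)         # persist
        self.head = entry["entry_hash"]
        return entry                     # return the outcome to the caller
```

Because each entry's hash covers the previous entry's hash, rewriting any historical record changes every hash after it.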
Periodic Merkle anchors and signed roots turn the chain into evidence. Auditors and incident responders can verify, weeks or years later, that nothing was edited after the fact.
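The verification an auditor runs can be illustrated in a few lines: re-walk the hash chain and recompute the Merkle root, so any edited entry breaks every later link. This is a simplified model for intuition, not Provedit's actual wire format:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Recompute an entry's hash over everything except the stored hash itself."""
    body = {k: v for k, v in entry.items() if k != "entry_hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def verify_chain(chain: list, genesis: str = "0" * 64) -> bool:
    """Re-walk the hash chain; one edited entry invalidates every later link."""
    prev = genesis
    for entry in chain:
        if entry["prev_hash"] != prev or entry_hash(entry) != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

def merkle_root(hashes: list) -> str:
    """Fold entry hashes into a single root that a periodic anchor commits to."""
    level = hashes or ["0" * 64]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate the odd leaf out
        level = [hashlib.sha256((a + b).encode()).hexdigest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]
```

Comparing a recomputed root against an anchored, signed root is what lets a third party confirm, years later, that the chain was never rewritten.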
You don't open Provedit on a stream of raw events. You open it on the agent: its sessions, its normal behaviour, the sensitive things it has touched recently, the approvals waiting on it. SOC analysts already work this way with EDR device pages, so the muscle memory transfers on day one.
A normal log says "X happened". Provedit says "X happened, this rule evaluated it, this person approved it, and the approval is cryptographically bound to that exact action." That binding is what survives an audit, a lawsuit, or a regulator.
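That binding can be sketched as follows: the approval record names the exact entry hash it covers, and the signature is computed over that hash. For brevity this uses a shared-secret HMAC where a real deployment would use per-approver asymmetric signatures; all names are illustrative:

```python
import hashlib
import hmac

APPROVER_KEY = b"demo-key"  # stand-in for a real per-approver signing key

def approve(entry_hash: str, approver: str) -> dict:
    """Issue a policy.approve record bound to one specific action by its entry hash."""
    message = f"policy.approve:{entry_hash}:{approver}".encode()
    return {
        "type": "policy.approve",
        "entry_hash": entry_hash,  # the binding: the approval names the exact entry
        "approver": approver,
        "signature": hmac.new(APPROVER_KEY, message, hashlib.sha256).hexdigest(),
    }

def check_approval(approval: dict, entry_hash: str) -> bool:
    """Verify the approval covers this exact action and the signature is intact."""
    if approval["entry_hash"] != entry_hash:
        return False  # an approval for a different action does not transfer
    message = f"policy.approve:{entry_hash}:{approval['approver']}".encode()
    expected = hmac.new(APPROVER_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(approval["signature"], expected)
```

The point of the construction: an approval cannot be replayed against a different action, because the action's own hash is inside the signed message.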
Each agent platform will keep improving its own logs inside its own walls. Provedit sits one layer above them all: Cursor, Claude Code, Copilot, OpenAI Assistants, LangChain, JetBrains, self-hosted MCP tools, CI bots, and the cloud APIs they touch. One timeline, one identity model, one chain of evidence, regardless of which vendor produced the action.
Not every team using AI, and not on day one. Provedit is built first for teams where agents are already touching regulated data, and where one platform group owns how those agents are deployed:
Inside those teams: platform security and AI platform engineering as the champion, CISO and GRC as the sponsor, the SOC as the day-to-day operator. If that sounds like you, the waitlist below is the right next step.
Each vendor is improving observability inside its own surface, and that work is welcome. None of them is incentivised, or positioned, to be the neutral system of record across every other vendor's agents. Provedit is the layer above them all: one identity model, one policy engine, one tamper-evident chain that spans Cursor, Claude Code, Copilot, OpenAI Assistants, LangChain, self-hosted MCP tools, and CI agents, and that survives outside any single vendor's retention window.
A SIEM stores events. Provedit treats the agent as a long-lived identity, evaluates policy in line, binds human approvals to specific actions cryptographically, and produces a tamper-evident chain that an auditor can verify on their own. Your SIEM is a downstream consumer of that chain, not the source of truth for it.
Both, dialled per action class. The same product runs in observe, alert, require-approval, and block modes, with the MCP proxy as the natural enforcement point. It defaults to observe, so a noisy approval queue never erodes trust on day one.
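A per-action-class mode table could be as simple as the following sketch; the action-type names and mode labels are assumptions for illustration, not Provedit's real configuration:

```python
# Illustrative per-action-class modes, dialled up from the observe default.
MODES = {
    "secret.read": "require_approval",
    "deploy.production": "block",
    "exec.shell": "alert",
}
DEFAULT_MODE = "observe"  # the safe default: record everything, interrupt nothing

def mode_for(action_type: str) -> str:
    """Look up the enforcement mode the proxy applies to an action class."""
    return MODES.get(action_type, DEFAULT_MODE)
```

Anything not explicitly escalated falls through to observe, which is why the approval queue stays quiet until a team chooses otherwise.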
Article 12 (record-keeping, in force 2 August 2026), Article 14 (human oversight), and Article 19 (log retention) require traceability, human oversight, and retained evidence as outcomes. They do not mandate a specific control. Provedit is one defensible implementation path to those outcomes, alongside ISO/IEC 42001, NIST AI RMF, and the GenAI profile.
No. The pilot footprint is an MCP proxy plus a CI SDK, with nothing installed on developer machines. IDE extensions, log tailers, and host sensors are opt-in collectors you can add later, once the value is obvious.
We are taking a small number of design partners through the pilot now. One week to instrument your highest-risk internal agents and MCP tools with the proxy and CI SDK. From that point on, every agent action arrives with an identity, a policy decision, an approval where one was required, and a tamper-evident timeline. Tell us a little about your environment and we'll be in touch directly.