Work Provenance Infrastructure for the AI Office

We’re not building an AI governance tool. We’re building work provenance infrastructure for the AI office. The difference matters.

“AI governance tool” describes what we do today. “Work provenance infrastructure” describes what the world needs tomorrow. The category is bigger than the current product, bigger than the current market, and bigger than any single company. That’s exactly why we’re building it.

The Category Problem

Every startup faces a category decision. You either fit into an existing category or create a new one. Fitting in is easier — buyers already have budget for the category, analysts already cover it, and comparison shopping is straightforward. Creating a category is harder, slower, and more expensive.

We’re creating a category because the existing categories don’t describe what we’re building.

GRC (Governance, Risk, Compliance). GRC tools manage policies, track risks, and generate compliance reports. They’re designed for human processes. A human reviews a change, files a report, updates a risk register. GRC tools automate the filing, not the governance itself. We automate the governance.

SAST/DAST (Static/Dynamic Application Security Testing). Security scanning tools find vulnerabilities in code. They produce findings. They don’t produce evidence of governance. A SAST scan that finds no vulnerabilities doesn’t prove that someone reviewed the code. It proves the scanner didn’t find anything. Different claim.

Code Review Tools. GitHub, GitLab, Gerrit. They facilitate review. They don’t enforce it, tier it by risk, or generate tamper-evident evidence. “Dave approved this PR” is a log entry, not evidence.

AI Code Review. Newer tools like ours that use AI models to review code. Most of them focus on finding bugs and suggesting improvements. We do that too. But bug-finding isn’t the product. Evidence-generating, risk-tiered, hash-chained governance is the product. The AI review is one input to the evidence bundle.

None of these categories capture what we’re building. We need a new one.

Work Provenance: The Category

Work provenance answers a simple question: for any artifact — code, document, spreadsheet, image, configuration — who changed it, what changed, when, why, and what evidence supports the change?

This is different from version control. Git tells you who committed a change and when. It doesn’t tell you whether the change was reviewed, what the reviewer found, what risk tier the change carries, or whether the change complies with regulatory requirements.

This is different from audit logging. Audit logs tell you what happened. They don’t tell you whether what happened was governed. A log entry that says “user deployed code at 3 AM” doesn’t tell you whether the code was reviewed, whether the deployment was authorized, or whether the change complied with SOX requirements.

Work provenance combines version history, review evidence, risk classification, compliance evaluation, and tamper-evident sealing into a single, verifiable record. For every artifact. Not just code.
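To make that combination concrete, here is a minimal sketch of what a single provenance record might carry. The field names and values are illustrative assumptions for this article, not the published evidence bundle spec:

```python
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    # Field names are illustrative, not the actual evidence bundle spec.
    artifact_path: str          # code file, spreadsheet, document, config, ...
    change_summary: str         # what changed, and why
    author: str                 # the human or AI agent that made the change
    reviewer: str               # who (or what) reviewed it
    risk_tier: str              # e.g. "low", "medium", "high"
    compliance_refs: list[str]  # controls the change was evaluated against
    timestamp: str              # ISO 8601, recorded contemporaneously

# A git commit answers only "who" and "when". A record like this also
# answers "was it reviewed, at what risk tier, against which controls".
record = ProvenanceRecord(
    artifact_path="models/revenue_forecast.xlsx",
    change_summary="Updated Q3 growth assumption from 4% to 6%",
    author="ai-agent:forecaster-v2",
    reviewer="human:j.doe",
    risk_tier="high",
    compliance_refs=["SOX-404"],
    timestamp="2025-06-01T14:03:22Z",
)
```

Note that the same structure works whether the author is a person or an agent, which is the point of the artifact-agnostic framing below.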

Why “Provenance” and Not “Governance”

Governance covers AI changes. Provenance covers ALL artifact changes. This distinction matters because the governance problem isn’t limited to AI.

A human developer who modifies a financial model spreadsheet without review creates the same governance gap as an AI that modifies it. A marketing team that updates a regulatory-facing document without tracking changes creates the same compliance risk whether a person or a model made the edit.

The AI explosion makes the problem urgent. But the problem exists without AI. Work provenance solves both: the existing governance gap for human changes and the accelerating governance gap for AI-assisted changes.

Calling it “AI governance” constrains the market to “organizations that use AI and worry about governing it.” Calling it “work provenance” expands the market to “organizations that produce artifacts and need to prove what happened to them.” The second market is every organization.

The Standard Play

Unicorns ship standards, not just products. The evidence bundle specification IS the standard.

Docker didn’t win because the container runtime was better than LXC. Docker won because it shipped a container image format that became the standard. Once the format was standard, everything built on it — registries, orchestrators, CI/CD pipelines — and Docker was at the center.

The evidence bundle spec plays the same role. If the evidence format becomes the standard way to represent work provenance, then everything built on it — GRC integrations, audit tools, compliance dashboards, marketplace connectors — builds on our foundation.

This is why the spec is open source. A proprietary spec can’t become a standard. An open spec can. The more tools that produce and consume evidence in our format, the more valuable the format becomes. Network effects compound.

The commercial product sits on top of the standard. The standard is free. The management, routing, analytics, and enterprise features are paid. The standard grows adoption. Adoption drives demand for management tools. Management tools are the business.

The Market Math

The GRC market was $49.2 billion in 2024. It’s projected to reach $127.7 billion by 2033. That’s the existing market for governance, risk, and compliance.

Work provenance isn’t a subset of GRC. It’s adjacent. GRC manages policies and risk registers. Work provenance generates the evidence that proves policies are followed. Every GRC tool needs evidence. Work provenance generates it.

The overlap: work provenance evidence feeds into GRC platforms. The evidence bundles become inputs to risk assessments, compliance reports, and audit findings. This isn’t competitive with GRC — it’s complementary.

The expansion: GRC today is primarily policy management. Work provenance adds the artifact layer — the proof that policies were implemented in practice, not just documented on paper. This is the gap that auditors consistently identify: “you have policies, but where’s the evidence they’re followed?”

The AI multiplier: as AI accelerates artifact creation, the volume of work that needs provenance grows exponentially. A development team that manually produces 100 PRs per month might produce 500 with AI assistance: a 5x increase in governance surface area. The work provenance system must scale with it.

Position: Governance + Auditability + Artifact Integrity

Three pillars. Each one alone exists in the market. Combined, they define the category.

Governance. Risk-tiered review of changes. Multi-model council evaluation. Human-in-the-loop approval for high-risk changes. Authority boundaries enforced at runtime. This is the review engine.

Auditability. Tamper-evident evidence bundles with hash chains. Offline verification. SARIF export for integration with existing security tooling. Contemporaneous evidence generation. This is the evidence engine.

Artifact Integrity. The evidence applies to any artifact type, not just code. Documents, spreadsheets, images, configurations. SheetGuard, PDFGuard, ImageGuard, CodeGuard — each guard lane handles a different artifact type with the same governance framework. This is the scope.
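The hash-chain mechanics behind the Auditability pillar can be sketched in a few lines. This is an illustrative simplification, not the actual implementation: function names and record fields are assumed, and a production system would use canonical serialization and signed anchors on top of this.

```python
import hashlib
import json

def seal(record: dict, prev_hash: str) -> str:
    """Seal one evidence record onto the chain. The hash covers both the
    record's JSON and the previous link, so editing any earlier record
    invalidates every hash after it."""
    payload = json.dumps(record, sort_keys=True)
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def verify_chain(records: list[dict], hashes: list[str]) -> bool:
    """Offline verification: recompute the chain from the records alone
    and compare against the stored hashes. No server round-trip needed."""
    prev = "0" * 64  # genesis value
    for record, stored in zip(records, hashes):
        if seal(record, prev) != stored:
            return False
        prev = stored
    return True

records = [
    {"artifact": "api/auth.py", "review": "approved", "tier": "high"},
    {"artifact": "docs/policy.md", "review": "approved", "tier": "low"},
]
hashes, prev = [], "0" * 64
for r in records:
    prev = seal(r, prev)
    hashes.append(prev)

assert verify_chain(records, hashes)      # intact chain verifies
records[0]["review"] = "rejected"         # tamper with history...
assert not verify_chain(records, hashes)  # ...and verification fails
```

The design choice that matters here is that verification requires only the records and hashes themselves, which is what makes audit-time checking possible without trusting the system that produced the evidence.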

Competitors that cover one pillar are tools. A product that covers all three is infrastructure. Infrastructure is what companies build on. Tools are what companies switch between.

The AI Office

The “AI office” isn’t a physical place. It’s the operating model where AI agents handle routine work alongside human workers. Every organization is moving toward this model at different speeds.

In the AI office, work provenance is as fundamental as version control. You wouldn’t accept code without git history. You shouldn’t accept any artifact without provenance — a verifiable record of what changed, who or what made the change, who approved it, and what evidence supports it.

Today, that statement sounds aspirational. In three years, it will sound obvious. The gap between aspirational and obvious is where category-defining companies get built.

We’re early. The market is forming. Most organizations haven’t articulated the need yet. They know something is missing — they feel the gap between “we use AI” and “we can prove our AI-assisted work is governed.” They just don’t have a name for what’s missing.

Work provenance is the name.

What We’re Betting On

Three bets:

Bet 1: Evidence will be required. Regulatory pressure on AI-assisted work will increase. The EU AI Act enforcement date is August 2, 2026 — not “someday,” not “eventually.” FDA guidance on AI in drug development, SEC scrutiny of AI in financial reporting — all point toward mandatory evidence of governance. Organizations that generate evidence now will be compliant when the requirements arrive. Organizations that wait will scramble.

Bet 2: Standards will consolidate. The current landscape has a dozen incompatible ways to represent governance evidence. JSON blobs, PDF reports, Jira tickets, Confluence pages. A standard format will win because interoperability always wins. Our bet is that the evidence bundle spec, being open and already in production, has a head start.

Bet 3: Artifacts, not just code. Code governance is the wedge. Every buyer we talk to starts with code because that’s where AI adoption is furthest along. But the conversation always expands: “can you do this for our regulatory documents? Our financial models? Our clinical data?” The answer is yes, because work provenance is artifact-agnostic. The guard lanes are specific. The evidence framework is universal.

These bets are falsifiable. If evidence requirements don’t materialize, bet 1 is wrong. If a better spec gains traction, bet 2 is wrong. If the market stays code-only, bet 3 is wrong. We’ll adapt if any of them fail. But the trend lines support all three.

Building the Category

Category creation is a communication exercise as much as a product exercise. You need to name the problem, define the category, and position yourself as the obvious solution.

The problem: artifacts are changing faster than organizations can prove those changes were governed.

The category: work provenance infrastructure.

The position: the open-source evidence standard, with commercial tools for managing provenance at scale.

We didn’t invent provenance. The concept is as old as supply chains. Physical goods have provenance — chain of custody from raw materials to finished product. We’re applying the same concept to digital work products: chain of custody from creation to deployment.

The “infrastructure” part is deliberate. We’re not building an app. We’re building the layer that other apps sit on. GRC platforms import our evidence. CI/CD systems generate it. Audit tools verify it. The infrastructure position means we don’t compete with the tools above us. We make them better.

The Honest Assessment

Category creation is risky. Most companies that try it fail. They spend too much on education and not enough on product. They define a category nobody cares about. They get outmaneuvered by a competitor who fits into an existing category and just works.

We mitigate these risks with the open-core model. The open-source tools provide immediate, tangible value without category education. “Install this GitHub Action and get governed PRs in 5 minutes” doesn’t require understanding work provenance as a concept. It requires five minutes.

The category story is for investors, analysts, and enterprise buyers who need to understand the strategic position. The product story is for developers who need to solve a problem today. Both stories are true. They operate at different altitudes.

The evidence bundle accumulates. Every installation generates evidence. Every evidence bundle strengthens the format as a de facto standard. By the time the market consciously recognizes “work provenance” as a category, thousands of organizations will already be generating evidence in our format.

Category creation from the ground up. Not by telling the market what to want. By building something useful and letting the pattern become undeniable.


Thinking about how work provenance fits into your organization’s governance strategy? Book a call and let’s map the evidence gaps in your current process.