Stop relying on screenshots. Execution Proof Infrastructure (.epi) is the open standard for Verifiable Execution. It captures code, context, and computation into a verifiable signed receipt that works everywhere.
Review AI decision case files in the browser, try a safe sample instantly, and open the new Decision Ops workspace at epilabs.org/viewer/.
From epi demo to a browser-openable `.epi` artifact in seconds.
Recommended first path: install once, run epi demo, and inspect the sample run in your browser.
If you prefer zero setup, open the Colab notebook instead.
epi view --extract now vendors and inlines jszip.min.js, so extracted viewers open cleanly in offline and air-gapped environments.
Extracted viewers no longer load scripts from JSDelivr. The full JSZip runtime is
bundled directly in the generated viewer.html.
Embedded artifact viewers and the browser policy editor now share the same bundled JSZip runtime path as extracted viewers.
Wheel auditing now fails if the vendored jszip.min.js runtime asset is
missing from the built package, preventing broken releases.
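A wheel is just a zip archive, so the release audit described above reduces to a membership test over the archive's file list. The sketch below uses only the standard library; the package path inside the demo archive is an assumption, not the real epi layout.

```python
import io
import zipfile

def wheel_contains(wheel, asset_suffix):
    """Return True if any member of the wheel (a zip archive) ends with asset_suffix."""
    with zipfile.ZipFile(wheel) as whl:
        return any(name.endswith(asset_suffix) for name in whl.namelist())

# Demo: an in-memory archive standing in for a built wheel (layout is illustrative)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("epi_recorder/assets/jszip.min.js", "/* vendored runtime */")

print(wheel_contains(buf, "jszip.min.js"))  # True: this release would pass the audit
print(wheel_contains(buf, "viewer.css"))    # False: a missing asset would fail it
```

Running the same check in CI before publishing is what keeps a broken wheel from ever reaching users.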
One-line callback for 100+ LLM providers. OpenAI, Anthropic, Cohere, Mistral, Azure, Bedrock - all recorded.
EPICallbackHandler captures LLM calls, tool invocations, chain steps, and agent
decisions.
Install on macOS/Linux:
curl -sSL https://raw.githubusercontent.com/mohdibrahimaiml/epi-recorder/main/scripts/install.sh | sh
Install on Windows (PowerShell):
iwr https://raw.githubusercontent.com/mohdibrahimaiml/epi-recorder/main/scripts/install.ps1 -useb | iex
If the epi command is not found on your PATH, invoke the module directly:
python -m epi_cli (works regardless of PATH configuration)
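When shelling out to the CLI from your own scripts, the same fallback can be automated. The helper below is not part of the epi API; it is a small convenience sketch.

```python
import shutil
import sys

def epi_command(*args):
    """Build an argv for the epi CLI, preferring the entry point and
    falling back to `python -m epi_cli` when it is not on PATH."""
    base = ["epi"] if shutil.which("epi") else [sys.executable, "-m", "epi_cli"]
    return base + list(args)

print(epi_command("verify", "run.epi"))
```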
Begin with a refund approval, then expand to your own workflow.
epi demo
Recommended first run. Opens the refund-review demo in the canonical browser review view.
epi view <file.epi>
Open a case file in the browser review view.
epi verify <file.epi>
Verify integrity and authenticity before you share a case file.
epi run <script.py>
Advanced path for recording your own script after you have seen the demo.
epi review <file.epi>
Add saved review notes to a case file.
epi verify <file.epi> --json
Output verification results as JSON for CI/CD integration.
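In a CI gate you typically run that command and parse the JSON it emits. The field name checked below (`verified`) is an assumption about the report shape; confirm it against your actual `--json` output before relying on it.

```python
import json
import shutil
import subprocess

def evidence_verified(report: dict) -> bool:
    """Gate on a parsed verification report ('verified' is an assumed field name)."""
    return bool(report.get("verified"))

if shutil.which("epi"):
    proc = subprocess.run(
        ["epi", "verify", "run.epi", "--json"],
        capture_output=True, text=True, check=False,
    )
    try:
        report = json.loads(proc.stdout)
    except json.JSONDecodeError:
        report = {"verified": False}
else:
    report = {"verified": True, "file": "run.epi"}  # sample shape for illustration

print(evidence_verified(report))
```

A CI job would fail the build when `evidence_verified` returns False.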
epi ls
List all recordings in ./epi-recordings/ directory.
epi doctor
Shipped in v2.1.1: Self-healing diagnostics. Auto-detects and repairs PATH issues.
epi keys list
List all Ed25519 keypairs in your keystore.
epi keys generate
Generate a new Ed25519 keypair (auto-generated on first use).
epi keys generate --name <keyname>
Generate a named keypair for team/project separation.
epi keys export --name <keyname>
Export public key for sharing with verifiers.
epi --help
Show all available commands and usage information.
epi help
Show extended quickstart guide with examples.
epi version
Show EPI version information (currently v3.0.2).
epi <command> --help
Get detailed help for any specific command.
@record
Decorator to record a function. @record(goal="test")
with record("file.epi", goal="...")
Context manager. Tip: Always provide an explicit filename when adding
metadata like goal or metrics.
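Both recording styles can be exercised together. The fallback class below is purely illustrative: it mimics the documented call shapes so the sketch runs even without epi_recorder installed, and nothing in it reflects the real implementation.

```python
import functools

try:
    from epi_recorder import record  # real API, as documented above
except ImportError:
    # Illustrative no-op stand-in so the sketch runs without the package.
    class _Recording:
        def __init__(self, filename=None, **metadata):
            self.filename, self.metadata = filename, metadata
        def __enter__(self):
            return self
        def __exit__(self, *exc):
            return False
        def __call__(self, fn):  # lets the same object act as a decorator
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                with self:
                    return fn(*args, **kwargs)
            return wrapper
    record = _Recording

# Decorator form: records every call to the function
@record(goal="test")
def approve_refund(amount):
    return amount < 100

# Context-manager form: explicit filename recommended when adding metadata
with record("refund.epi", goal="refund review"):
    decision = approve_refund(42)
```

With the real package installed, the context-manager form writes a signed refund.epi case file when the block exits.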
epi install --global
Auto-record all Python processes via sitecustomize.py. Idempotent and safe.
epi uninstall --global
Remove auto-recording cleanly. One command.
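The mechanism behind epi install --global is CPython's startup hook: the interpreter imports a module named sitecustomize from sys.path, if one exists, before running your code. The demonstration below shows that hook firing with a throwaway module; it is not epi's actual hook code.

```python
import os
import pathlib
import subprocess
import sys
import tempfile

# Any sitecustomize.py reachable on sys.path is imported automatically at
# interpreter startup; epi install --global relies on this to begin recording.
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "sitecustomize.py").write_text("print('hook active')\n")
    env = dict(os.environ, PYTHONPATH=d)
    out = subprocess.run(
        [sys.executable, "-c", "print('script ran')"],
        env=env, capture_output=True, text=True,
    )

print(out.stdout.splitlines())  # 'hook active' appears before 'script ran'
```

Because the hook lives outside your code, removing the file (as epi uninstall --global does) restores the interpreter to its default behavior.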
EPI evidence bundles are now first-class OS citizens. Double-click any .epi file on Windows, macOS, or Linux to launch the isolated viewer; no terminal required.
$ epi associate # Register .epi file extension natively
$ epi unassociate # Cleanly remove global registry bindings
import litellm
from epi_recorder.integrations.litellm import EPICallback
litellm.callbacks = [EPICallback()] # That's it
response = litellm.completion(model="gpt-4", messages=[...])
response = litellm.completion(model="claude-3-opus", messages=[...])
# Every call -> signed .epi evidence
from langchain_openai import ChatOpenAI
from epi_recorder.integrations.langchain import EPICallbackHandler
llm = ChatOpenAI(model="gpt-4", callbacks=[EPICallbackHandler()])
result = llm.invoke("Analyze this contract...")
# Captures: LLM, tools, chains, retrievers, agents
from openai import OpenAI
from epi_recorder import record, wrap_openai  # import locations assumed; check your install

client = wrap_openai(OpenAI())
with record("stream.epi"):
    stream = client.chat.completions.create(
        model="gpt-4", stream=True,
        messages=[{"role": "user", "content": "Write a poem"}]
    )
    for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")
# Assembled response + token usage logged automatically
$ pytest --epi --epi-dir=evidence
======================== EPI Evidence Summary ========================
OK test_auth_flow.epi (signed, 12 steps)
OK test_payment.epi (signed, 8 steps)
OK test_refund.epi (signed, 6 steps)
======================================================================
- name: Verify EPI evidence
uses: mohdibrahimaiml/epi-recorder/.github/actions/verify-epi@main
with:
path: ./evidence
fail-on-tampered: true
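For context, that step slots into a workflow like the following. The job layout, trigger, and step names are illustrative; only the verify step mirrors the documented action.

```yaml
name: verify-evidence
on: [push]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Verify EPI evidence
        uses: mohdibrahimaiml/epi-recorder/.github/actions/verify-epi@main
        with:
          path: ./evidence
          fail-on-tampered: true
```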
from epi_recorder.integrations.opentelemetry import setup_epi_tracing
setup_epi_tracing(service_name="my-agent")
# All OTel spans -> signed .epi files automatically