Receipt telemetry is an optional, structured object inside the signed receipt payload. Use it to attach runtime signals like run IDs, model usage, tool activity, timing, and error hints without flattening everything into free-form metadata.
Telemetry is part of the canonical receipt JSON. When present, VaultGraph hashes it, signs it, validates it during ingestion, stores it with the receipt, and includes it in exports.
## When to use telemetry

Use telemetry for content-free execution details that help operators and auditors understand how a run behaved:

- model provider and model name
- token counts
- start and completion timing
- run IDs and parent run IDs
- tool names
- finish reasons
- error names
- ordered execution events
Put identifiers and attributes that describe the run's business context in `receipt.metadata` when they are not part of the runtime trace itself.
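For example, a run might carry token usage in telemetry while a business order ID stays in metadata. The receipt shape and all values below are illustrative assumptions, not the exact SDK types:

```typescript
// Illustrative split between the two homes for run data. The receipt
// shape and every value here are assumptions for illustration only.
const receipt = {
  // Business context that is not part of the runtime trace:
  metadata: { order_id: "ord_42", environment: "production" },
  // Content-free runtime trace details:
  telemetry: {
    schema_version: "v1",
    source: "manual",
    run_kind: "generate",
    usage: { input_tokens: 512, output_tokens: 128, total_tokens: 640 },
  },
};
```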
## Top-level fields
| Field | Description |
|---|---|
| `schema_version` | Telemetry schema version. Current value: `v1`. |
| `source` | Telemetry producer: `manual`, `ai-sdk`, or `langchain`. |
| `run_kind` | High-level run type such as `generate`, `stream`, or a vendor-defined workflow label. |
| `capture_phase` | Integration capture stage, for example `response_ready` or `stream_start`. |
| `external_run_id` | Framework or vendor run identifier for the current execution. |
| `parent_run_id` | Parent execution identifier when the framework exposes one. |
| `tags` | Short labels emitted by the integration or caller. |
| `flags` | Boolean hints such as `has_inputs`, `has_output`, `has_error`, and `has_action`. |
| `model` | Optional model summary with `provider` and `name`. |
| `usage` | Optional token counts: `input_tokens`, `output_tokens`, `total_tokens`. |
| `timing` | Optional run timing summary with `started_at`, `completed_at`, and `latency_ms`. |
| `error` | Optional error summary. Current shape includes `name`. |
| `events` | Ordered list of structured execution events used to build the run timeline. |
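Assembled, a telemetry object using the fields above might look like the following. This is a hand-written sketch: only the field names come from the table; every value is made up:

```typescript
// Illustrative v1 telemetry object. Field names follow the documented
// schema; all values (IDs, model, counts, timestamps) are invented.
const telemetry = {
  schema_version: "v1",
  source: "ai-sdk",
  run_kind: "generate",
  capture_phase: "response_ready",
  external_run_id: "run_123",
  parent_run_id: null,
  tags: ["batch"],
  flags: { has_inputs: true, has_output: true, has_error: false, has_action: false },
  model: { provider: "openai", name: "gpt-4o" },
  usage: { input_tokens: 512, output_tokens: 128, total_tokens: 640 },
  timing: {
    started_at: "2024-01-01T00:00:00Z",
    completed_at: "2024-01-01T00:00:02Z",
    latency_ms: 2000,
  },
  events: [],
};
```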
## Event timeline

Telemetry events let VaultGraph render a structured execution timeline in the portal. The current event kinds are:

- `run_started`, `run_finished`, `run_failed`
- `llm_started`, `llm_finished`, `llm_failed`
- `tool_started`, `tool_finished`, `tool_failed`
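A minimal ordered event list might look like this. Only the `kind` values come from the documented list; the per-event shape (timestamps, tool names) is an assumption for illustration:

```typescript
// Sketch of an ordered event list. The `kind` values are the documented
// event kinds; the `at` and `tool` fields are assumed for illustration.
const events = [
  { kind: "run_started", at: "2024-01-01T00:00:00.000Z" },
  { kind: "llm_started", at: "2024-01-01T00:00:00.050Z" },
  { kind: "tool_started", at: "2024-01-01T00:00:00.900Z", tool: "search" },
  { kind: "tool_finished", at: "2024-01-01T00:00:01.200Z", tool: "search" },
  { kind: "llm_finished", at: "2024-01-01T00:00:01.950Z" },
  { kind: "run_finished", at: "2024-01-01T00:00:02.000Z" },
];

// Because the list is ordered, a timeline can be derived directly.
const kinds = events.map((e) => e.kind);
```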
## Portal run detail surface

In the portal, agent and deployment receipt tables can open a receipt run detail surface for a selected receipt. That view renders the signed telemetry alongside the canonical receipt proof material. When telemetry is present, operators can inspect:

- model name and provider
- total, input, and output token counts
- source, run kind, capture phase, and run IDs
- boolean flags for inputs, output, tools, and errors
- ordered execution timeline derived from `events`
- telemetry tags
- receipt JSON, signature, and context hash
## Safety guidelines

Telemetry should stay content-free. Good telemetry fields describe execution structure, not the underlying conversation or tool payload. Safe examples:

- model IDs
- token usage
- timestamps and latency
- finish reasons
- tool names
- run IDs
- high-level workflow labels

Unsafe examples:

- raw prompts
- raw model outputs
- tool arguments
- transcript bodies
- API keys, secrets, or access tokens
- customer PII that is not already intended to live in signed receipt metadata
If run content must be verifiable, represent it through the `context_hash` in the receipt rather than embedding it in telemetry.
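One way to keep telemetry content-free before signing is a small scrubbing pass. The `scrubTelemetry` helper below is hypothetical, not part of the VaultGraph SDK; its deny list mirrors the unsafe examples above:

```typescript
// Hypothetical guard (not an SDK API): drop keys that would carry
// content or secrets before the telemetry object is signed.
const DISALLOWED_KEYS = new Set([
  "prompt", "output", "arguments", "transcript", "api_key", "secret", "token",
]);

function scrubTelemetry(obj: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (DISALLOWED_KEYS.has(key)) continue; // strip content-bearing keys
    clean[key] =
      value !== null && typeof value === "object" && !Array.isArray(value)
        ? scrubTelemetry(value as Record<string, unknown>) // recurse into nested objects
        : value;
  }
  return clean;
}
```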
## Integration behavior

The built-in integrations populate telemetry automatically:

- Vercel AI SDK records source, run kind, capture phase, usage, and error hints when available.
- LangChain.js records callback-derived run type, run IDs, tags, and execution hints when available.
The SDK also exposes `createTelemetry(...)` so callers can normalize telemetry before signing. See SDK for the full helper API and API Reference for the ingestion contract.
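As a rough illustration of what such normalization does (the real `createTelemetry(...)` signature lives in the SDK reference; this `buildTelemetry` stand-in is a hypothetical sketch of the idea, not the actual helper):

```typescript
// Hypothetical stand-in for a normalization helper: fill schema
// defaults so every receipt carries a consistent v1 telemetry shape.
// Field names come from the schema table; the function is a sketch.
interface TelemetryInput {
  source: "manual" | "ai-sdk" | "langchain";
  run_kind?: string;
  tags?: string[];
}

function buildTelemetry(input: TelemetryInput) {
  return {
    schema_version: "v1",
    run_kind: "generate",
    tags: [] as string[],
    flags: { has_inputs: false, has_output: false, has_error: false, has_action: false },
    events: [] as unknown[],
    ...input, // caller-supplied fields override the defaults
  };
}
```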