Glossary

Definitions of key terms and concepts in execution-layer security for AI agents.

Execution-Layer Security

A runtime enforcement model that intercepts actual operating system calls — file I/O, network connections, process execution, signals — made by AI agents, rather than relying on prompt instructions or model alignment to constrain behavior. Policies are evaluated deterministically against real system operations at the kernel level. The agent never sees the enforcement; it simply cannot perform operations that policy does not permit.

Why it matters: Unlike prompt-based controls, execution-layer security cannot be bypassed through prompt injection and provides auditable evidence of enforcement for compliance frameworks like SOC 2 and NIST AI RMF.

See: Getting Started | What Is Execution-Layer Security?

Deterministic Enforcement

Policy decisions based on deterministic rules — file paths, network destinations, process names, operation types — that produce the same result every time for the same input. A deny rule on /etc/passwd will always deny access, regardless of how the AI model frames its request or what prompt engineering is applied.

Why it matters: Deterministic enforcement can be formally audited and verified. Auditors can review the policy, observe the behavior, and confirm the outcome is predictable — something impossible with probabilistic AI-based controls.

See: Policy Reference
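A toy sketch of what deterministic rule evaluation means in practice. The `Rule` shape, glob matching, and first-match-wins semantics here are illustrative assumptions, not the actual policy engine; the point is only that the same path always yields the same decision.

```python
from dataclasses import dataclass
from fnmatch import fnmatch

# Hypothetical rule shape for illustration; the real schema lives
# in the Policy Reference.
@dataclass
class Rule:
    decision: str   # "allow" or "deny"
    path_glob: str

def evaluate(rules: list[Rule], path: str, default: str = "deny") -> str:
    """First matching rule wins; identical input always yields an identical decision."""
    for rule in rules:
        if fnmatch(path, rule.path_glob):
            return rule.decision
    return default

rules = [
    Rule("deny", "/etc/passwd"),
    Rule("allow", "/workspace/*"),
]

print(evaluate(rules, "/etc/passwd"))       # deny, regardless of how the model framed the request
print(evaluate(rules, "/workspace/a.txt"))  # allow
```

Because the decision is a pure function of the rule set and the operation, an auditor can replay any input and confirm the outcome.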

Non-Deterministic Security Controls

Security mechanisms that depend on AI model behavior for enforcement — prompt engineering, system instructions, guardrail models, and output filtering. These controls are probabilistic: the same input may produce different outcomes depending on model state, context window contents, or adversarial inputs like prompt injection.

Why it matters: Non-deterministic controls cannot guarantee enforcement. They can be bypassed through prompt injection, model updates, or context manipulation, and they cannot satisfy compliance requirements that demand verifiable, repeatable security controls.

See: Getting Started — contrasts non-deterministic controls with execution-layer security

Policy

A declarative set of rules defining what an AI agent can and cannot do at the execution layer. Policies specify decisions (allow, deny, steer, approve) for file access, network connections, command execution, signals, and MCP tool calls. They are evaluated deterministically against actual system operations.

Why it matters: Policies are the foundation of deterministic enforcement — they define the security boundary in a format that is human-readable, version-controllable, and auditable.

See: Policy Reference
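For a sense of the declarative shape, here is a hypothetical policy fragment covering the four decision types. Field names are illustrative assumptions, not the actual schema (see the Policy Reference for that):

```yaml
# Illustrative sketch only; not the real policy format.
rules:
  - operation: file_read
    path: /etc/passwd
    decision: deny
  - operation: network_connect
    destination: "*.internal.example.com"
    decision: allow
  - operation: file_write
    path: /tmp/**
    decision: steer
    steer_to: /workspace/sandbox/
  - operation: exec
    command: rm
    decision: approve   # hold the operation for human approval
```

Because the format is plain text, it can be code-reviewed and version-controlled like any other configuration.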

Steering

Redirecting agent operations to approved alternatives rather than simply allowing or denying them. For example, rewriting file paths to a sandboxed directory, redirecting network connections to an approved proxy, or substituting a command with a safer equivalent. The agent continues working without interruption, but its operations are transparently redirected.

Why it matters: Steering enables security without blocking agent productivity. Instead of failing operations and requiring manual intervention, agents are guided to safe alternatives automatically.

See: Features
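The path-rewriting case can be sketched in a few lines. This is a minimal model, assuming a single sandbox root and absolute input paths; it is not the actual steering implementation.

```python
from pathlib import PurePosixPath

SANDBOX = PurePosixPath("/sandbox")  # hypothetical sandbox root

def steer_path(requested: str) -> str:
    """Rewrite an absolute path into the sandbox.

    The agent asked for `requested`; the operation proceeds normally,
    but against the redirected target.
    """
    p = PurePosixPath(requested)
    if not p.is_absolute():
        raise ValueError("expected an absolute path")
    return str(SANDBOX / p.relative_to("/"))

print(steer_path("/home/agent/output.txt"))  # /sandbox/home/agent/output.txt
```

From the agent's perspective the write succeeded; only the audit trail records that the target was redirected.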

Audit Trail

An immutable, tamper-evident log of every operation intercepted at the execution layer — file access, network connections, process execution, policy decisions, and their outcomes. Audit trails use HMAC-based integrity verification to detect tampering and can be exported via OpenTelemetry for integration with SIEM and observability platforms.

Why it matters: Audit trails provide the compliance evidence that security controls are working. They demonstrate to auditors exactly what an AI agent did, what was blocked, and what was allowed — with cryptographic proof of log integrity.

See: Observability
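An individual entry might look roughly like the following. Field names and values are illustrative assumptions, not the actual log format:

```json
{
  "seq": 1042,
  "timestamp": "2025-01-15T10:32:07Z",
  "operation": "file_open",
  "path": "/etc/passwd",
  "decision": "deny",
  "prev_hmac": "…",
  "hmac": "…"
}
```

The `prev_hmac`/`hmac` pair is what chains entries together; see HMAC Integrity Chain below.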

Ptrace Mode

An opt-in enforcement backend that uses the Linux ptrace API to intercept syscalls at the kernel level. The ptrace tracer attaches to agent processes via PTRACE_SEIZE and intercepts execve, file I/O, network, and signal syscalls — enforcing the same policy rules as other security modes but without requiring seccomp user-notify, eBPF, Landlock, or FUSE. A seccomp BPF pre-filter can be injected into traced processes to reduce overhead to only policy-relevant syscalls.

Why it matters: Ptrace mode enables full policy enforcement on restricted container runtimes — AWS Fargate, Modal (gVisor), and restricted Kubernetes — where the preferred kernel mechanisms are blocked. It provides ~95% of the protection of full mode as a fallback for environments that would otherwise be limited to minimal mode.

See: Setup → Ptrace Mode | Features → Ptrace Enforcement

Policy Signing

Cryptographic proof of policy authorship and integrity using Ed25519 detached signatures. Each policy YAML file has a companion .sig file containing a signature, key ID, signer label, and timestamp. Signatures are verified against a trust store of public keys, with configurable enforcement modes (enforce, warn, off).

Why it matters: Policy signing ensures agents only enforce policies from trusted authorities. Without it, anyone with filesystem access could modify policy files to weaken or bypass enforcement. In enforce mode, an unsigned or tampered policy is rejected before the server starts.

See: Features → Policy Signing | Policy Reference → Policy Signing
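To make the companion-file idea concrete, a hypothetical `.sig` file might carry the four pieces of metadata named above. Field names here are illustrative assumptions, not the actual format:

```yaml
# Illustrative policy.yaml.sig sketch; not the real signature format.
signature: "<base64 Ed25519 detached signature over policy.yaml>"
key_id: "<hex(SHA256(public_key_bytes))>"
signer: "platform-security-team"
signed_at: "2025-01-15T09:00:00Z"
```

At startup the signature is checked against the trust store; in enforce mode a missing or invalid signature stops the server from loading the policy at all.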

HMAC Integrity Chain

A tamper-evident audit logging mechanism where each log entry contains an HMAC hash that depends on the previous entry, forming a cryptographic chain. The hash covers the sequence number, previous hash, and canonical JSON payload. Supported algorithms are HMAC-SHA256 and HMAC-SHA512, with keys managed via local files, environment variables, or enterprise KMS providers (AWS KMS, Azure Key Vault, HashiCorp Vault, GCP Cloud KMS).

Why it matters: A single modified, inserted, or deleted log entry breaks the chain from that point forward. This provides the tamper-evident audit trail required by SOC 2, NIST AI RMF, and ISO 27001 — auditors can cryptographically verify that no log entries have been altered after the fact.

See: Observability → Audit Log Integrity | Setup → Audit Log Integrity
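The chaining mechanism can be sketched with Python's standard library. The exact message framing (how sequence number, previous hash, and payload are concatenated) is an assumption for illustration, and a real deployment would load the key from a file, environment variable, or KMS rather than inline:

```python
import hashlib
import hmac
import json

KEY = b"demo-key"  # stand-in; real keys come from files, env vars, or KMS

def entry_hmac(seq: int, prev: str, payload: dict) -> str:
    """HMAC-SHA256 over the sequence number, previous hash, and canonical JSON."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    msg = f"{seq}|{prev}|{canonical}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def append(log: list, payload: dict) -> None:
    seq = len(log)
    prev = log[-1]["hmac"] if log else "genesis"
    log.append({"seq": seq, "payload": payload, "hmac": entry_hmac(seq, prev, payload)})

def verify(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        if entry["hmac"] != entry_hmac(entry["seq"], prev, entry["payload"]):
            return False
        prev = entry["hmac"]
    return True

log = []
append(log, {"op": "file_open", "path": "/etc/passwd", "decision": "deny"})
append(log, {"op": "exec", "cmd": "ls", "decision": "allow"})
assert verify(log)

log[0]["payload"]["decision"] = "allow"  # tamper with the first entry
assert not verify(log)                   # the chain breaks from that point forward
```

Because each HMAC depends on its predecessor, deleting or reordering entries fails verification just as surely as editing one.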

Trust Store

A directory of Ed25519 public key files used to verify policy signatures. Each key file contains a key_id (derived as hex(SHA256(public_key_bytes))), a human-readable label, and an optional expires_at timestamp for time-based revocation. The trust store validates key identity, rejects duplicates, and enforces file permission requirements in enforce mode.

Why it matters: The trust store is the root of trust for policy verification. It determines which signing keys are authorized, enabling organizations to enforce that only designated teams or systems can produce valid policies.

See: Setup → Policy Signing
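The key-identity and expiry checks can be sketched with the standard library; the `key_id` derivation follows the definition above, while the key-file dict shape is an illustrative assumption. Actual Ed25519 signature verification needs a cryptography library and is omitted here.

```python
import hashlib
from datetime import datetime, timezone

def key_id(public_key_bytes: bytes) -> str:
    """key_id = hex(SHA256(public_key_bytes)), per the definition above."""
    return hashlib.sha256(public_key_bytes).hexdigest()

def is_usable(key: dict, now: datetime) -> bool:
    """Reject keys past their optional expires_at timestamp."""
    expires = key.get("expires_at")
    if expires is None:
        return True
    return now < datetime.fromisoformat(expires)

# 32-byte stand-in for a real Ed25519 public key.
pk = bytes(range(32))
key = {
    "key_id": key_id(pk),                    # claimed identity
    "label": "release-signing-key",
    "expires_at": "2030-01-01T00:00:00+00:00",
}

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
assert key["key_id"] == key_id(pk)  # identity check: key_id must match the key bytes
assert is_usable(key, now)          # not yet expired
```

A trust store built this way can revoke a key either by deleting its file or by letting `expires_at` lapse, without touching the policies it once signed.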