Glossary

Definitions of key terms and concepts in execution-layer security for AI agents.

Execution-Layer Security

A runtime enforcement model that intercepts actual operating system calls — file I/O, network connections, process execution, signals — made by AI agents, rather than relying on prompt instructions or model alignment to constrain behavior. Policies are evaluated deterministically against real system operations at the kernel level. The agent never sees the enforcement; it simply cannot perform operations that policy does not permit.

Why it matters: Unlike prompt-based controls, execution-layer security cannot be bypassed through prompt injection and provides auditable evidence of enforcement for compliance frameworks like SOC 2 and NIST AI RMF.

See: Getting Started | What Is Execution-Layer Security?

Deterministic Enforcement

Policy decisions based on deterministic rules — file paths, network destinations, process names, operation types — that produce the same result every time for the same input. A deny rule on /etc/passwd will always deny access, regardless of how the AI model frames its request or what prompt engineering is applied.
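Deterministic evaluation can be pictured as a pure function over an operation's attributes: the same input always yields the same decision. The sketch below is a minimal illustration in Python; the rule shapes, paths, and first-match semantics are assumptions for this example, not the actual policy engine.

```python
# Minimal sketch of deterministic policy evaluation (illustrative only).
from fnmatch import fnmatch

RULES = [
    {"action": "deny",  "path": "/etc/passwd"},
    {"action": "deny",  "path": "/etc/shadow"},
    {"action": "allow", "path": "/workspace/*"},
]

def evaluate(path: str) -> str:
    """Return the first matching rule's decision; default-deny otherwise."""
    for rule in RULES:
        if fnmatch(path, rule["path"]):
            return rule["action"]
    return "deny"  # no prompt or model state can change this outcome

print(evaluate("/etc/passwd"))       # deny
print(evaluate("/workspace/a.txt"))  # allow
```

Because the function consults only the operation's attributes and a fixed rule list, re-running it with the same input is guaranteed to reproduce the same decision, which is what makes the behavior auditable.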

Why it matters: Deterministic enforcement can be formally audited and verified. Auditors can review the policy, observe the behavior, and confirm the outcome is predictable — something impossible with probabilistic AI-based controls.

See: Policy Reference

Non-Deterministic Security Controls

Security mechanisms that depend on AI model behavior for enforcement — prompt engineering, system instructions, guardrail models, and output filtering. These controls are probabilistic: the same input may produce different outcomes depending on model state, context window contents, or adversarial inputs like prompt injection.

Why it matters: Non-deterministic controls cannot guarantee enforcement. They can be bypassed through prompt injection, model updates, or context manipulation, and they cannot satisfy compliance requirements that demand verifiable, repeatable security controls.

See: Getting Started — contrasts non-deterministic controls with execution-layer security

Policy

A declarative set of rules defining what an AI agent can and cannot do at the execution layer. Policies specify decisions (allow, deny, steer, approve) for file access, network connections, command execution, signals, and MCP tool calls. They are evaluated deterministically against actual system operations.
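The four decision types can be pictured in a declarative rule file. This is a hypothetical sketch: the field names and schema are assumptions for illustration, not the actual policy format.

```yaml
# Hypothetical policy sketch -- schema and field names are illustrative.
rules:
  - file: "/workspace/**"
    decision: allow
  - file: "/etc/shadow"
    decision: deny
  - network: "*.external.example.com"
    decision: steer
    redirect: "proxy.internal.example.com:8080"
  - exec: "rm -rf *"
    decision: approve   # hold the operation for human approval
```

Expressing the boundary as data rather than code is what makes it easy to diff, version-control, and review.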

Why it matters: Policies are the foundation of deterministic enforcement — they define the security boundary in a format that is human-readable, version-controllable, and auditable.

See: Policy Reference

Steering

Redirecting agent operations to approved alternatives rather than simply allowing or denying them. For example, rewriting file paths to a sandboxed directory, redirecting network connections to an approved proxy, or substituting a command with a safer equivalent. The agent continues working without interruption, but its operations are transparently redirected.
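Path rewriting, the first example above, can be sketched in a few lines. The sandbox and approved roots here are hypothetical names chosen for illustration.

```python
# Sketch of path steering: writes outside an approved root are
# transparently rewritten into a sandbox directory (illustrative only).
from pathlib import PurePosixPath

SANDBOX = PurePosixPath("/sandbox")
APPROVED = PurePosixPath("/workspace")

def steer_path(requested: str) -> str:
    p = PurePosixPath(requested)
    if p == APPROVED or APPROVED in p.parents:
        return str(p)  # already inside the approved root: pass through
    # Redirect: mirror the requested path under the sandbox root
    return str(SANDBOX / p.relative_to("/"))

print(steer_path("/workspace/notes.txt"))  # /workspace/notes.txt (unchanged)
print(steer_path("/etc/hosts"))            # /sandbox/etc/hosts
```

From the agent's perspective the write succeeds; only the enforcement layer knows the file landed in the sandbox.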

Why it matters: Steering enables security without blocking agent productivity. Instead of failing operations and requiring manual intervention, agents are guided to safe alternatives automatically.

See: Features

Audit Trail

An immutable, tamper-evident log of every operation intercepted at the execution layer — file access, network connections, process execution, policy decisions, and their outcomes. Audit trails use HMAC-based integrity verification to detect tampering and can be exported via OpenTelemetry for integration with SIEM and observability platforms.
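The tamper-evidence idea can be sketched with a hash chain: each entry's MAC covers the record plus the previous entry's MAC, so altering any record invalidates everything after it. The key handling and record format below are assumptions for illustration, not the product's actual log scheme.

```python
# Sketch of HMAC-chained, tamper-evident logging (illustrative only).
import hashlib
import hmac
import json

KEY = b"audit-signing-key"  # in practice: a protected secret, never a literal

def append(log: list, record: dict) -> None:
    prev = log[-1]["mac"] if log else ""
    payload = json.dumps(record, sort_keys=True) + prev
    mac = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"record": record, "mac": mac})

def verify(log: list) -> bool:
    prev = ""
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True) + prev
        expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

log = []
append(log, {"op": "open", "path": "/etc/passwd", "decision": "deny"})
append(log, {"op": "connect", "dest": "api.example.com", "decision": "allow"})
print(verify(log))                        # True
log[0]["record"]["decision"] = "allow"    # tamper with the first record
print(verify(log))                        # False: chain integrity broken
```

Anyone holding the key can re-derive the chain and detect edits, deletions, or reordering after the fact.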

Why it matters: Audit trails provide the compliance evidence that security controls are working. They demonstrate to auditors exactly what an AI agent did, what was blocked, and what was allowed — with cryptographic proof of log integrity.

See: Observability