Term 1
Behavioral Context Layer
Plain English
The data layer that shows how work is actually done across systems, screens, and decisions — capturing what system logs and screenshots miss.
Why It Matters
AI agents need context about exceptions, escalations, and judgment calls before they can safely automate. Event logs show what systems recorded. Screenshots show pixels. The behavioral context layer shows what people actually do — the decision points, the workarounds, the moments where rules bend under real operational pressure.
How TNDRL Implements It
Lightweight desktop and browser collectors (Fiber) observe real work execution continuously. Metadata only — no screenshots, no PII. TNDRL captures which applications were used, in what sequence, with how much time between steps, and which decision branches were taken. The signal includes timing anomalies, rework loops, and escalation triggers — the raw material for understanding whether a workflow is safe to automate.
See also: Workflow Twin
Term 2
Workflow Twin
Plain English
A live, continuously updated operational model of how a workflow actually runs — including every variant, exception path, escalation, and timing pattern.
Why It Matters
Static process maps are outdated before they're finished. A Workflow Twin reflects current reality and evolves as operations change. It becomes the single source of truth for how work actually flows — not how it's supposed to flow in documentation, but how it flows on Tuesday afternoon under deadline pressure.
How TNDRL Implements It
Weave (TNDRL's intelligence engine) assembles captured behavioral data into a living workflow graph with happy paths, variants, exceptions, decision points, and timing annotations. The graph updates continuously as new behavioral data arrives. Every edge represents a transition that real people took. Every node represents a stable decision point. Visualization shows not just the primary path but the full distribution of real behavior — where people deviate, why they deviate, and how often. Weave also performs dimensional automation readiness scoring — evaluating 6 weighted dimensions (complexity, variability, exception handling, rework potential, compliance risk, and interaction dependency) with interaction penalties that flag hidden complexity when dimensions conflict.
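The graph-assembly idea above can be sketched in a few lines: every observed step sequence contributes edges, and edge counts give the distribution of real behavior. The step names and traces below are hypothetical, and Weave's internal representation is not public.

```python
from collections import Counter

# Hypothetical observed step sequences from three real executions.
traces = [
    ["intake", "validate", "approve", "archive"],              # happy path
    ["intake", "validate", "approve", "archive"],
    ["intake", "validate", "escalate", "approve", "archive"],  # exception path
]

# Every edge is a transition real people took; its count is the weight.
edges = Counter()
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        edges[(a, b)] += 1

# The edge distribution shows not just the primary path but how often
# people deviate from it at each decision point.
total_out = sum(n for (src, _), n in edges.items() if src == "validate")
deviation_rate = edges[("validate", "escalate")] / total_out
```

With these traces, one in three executions deviates at `validate`, which is exactly the kind of variant a static process map would miss.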
See also: Automation Readiness Score
Term 3
Automation Readiness Score
Plain English
A composite 0–100 score that tells you whether a workflow is safe to delegate to AI agents — not binary, but dimensional.
Why It Matters
Without a readiness score, enterprises are guessing which workflows to automate. That's how automation projects fail — by automating exception-heavy, high-variance work that agents can't handle. A readiness score makes risk visible before you commit capital and brand trust to automation.
How TNDRL Implements It
Weave (TNDRL's intelligence engine) scores automation readiness across 6 weighted dimensions: complexity (decisioning branches and edge cases), variability (consistency of workflow structure across agents), exception handling (error recovery and escalation pathways), rework potential (likelihood of failure triggering repeated work), compliance risk (regulatory sensitivity of automated decision points), and interaction dependency (human-in-the-loop judgment calls). Cross-dimensional interaction penalties reduce the score when dimensions conflict — high determinism plus high exception rate flags hidden complexity. A workflow scoring 85+ is automation-ready. A workflow scoring 40 needs process redesign before agents can safely handle it. Trellis (TNDRL's governance engine) then enforces the readiness gate: only workflows that meet safety thresholds are promoted to governed status.
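A minimal sketch of dimensional scoring with an interaction penalty follows. The six dimension names come from the text; the weights, the 0-100 per-dimension scale, and the specific penalty rule are illustrative assumptions, not TNDRL's actual formula.

```python
# Illustrative weights -- not TNDRL's actual values.
WEIGHTS = {
    "complexity": 0.20,
    "variability": 0.20,
    "exception_handling": 0.20,
    "rework_potential": 0.15,
    "compliance_risk": 0.15,
    "interaction_dependency": 0.10,
}

def readiness_score(dims: dict[str, float]) -> float:
    """dims: per-dimension safety ratings on a 0-100 scale
    (100 = lowest risk on that dimension)."""
    base = sum(WEIGHTS[k] * dims[k] for k in WEIGHTS)
    # Interaction penalty (hypothetical rule): a workflow that looks
    # highly consistent but handles exceptions poorly is hiding
    # complexity, so the combined score is docked.
    if dims["variability"] > 80 and dims["exception_handling"] < 40:
        base -= 15
    return max(0.0, min(100.0, base))

# A consistent, low-exception workflow clears the automation-ready bar.
ready = readiness_score({k: 90 for k in WEIGHTS})
# A deterministic-looking but exception-heavy workflow does not:
# the penalty pulls the weighted average down.
hidden = readiness_score({**{k: 90 for k in WEIGHTS},
                          "exception_handling": 30})
```

The point of the penalty is that dimensions are not independent: averaging alone would let strong consistency mask weak exception handling.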
See also: Entropy
See also: Governed Blueprint
Term 4
Entropy
Plain English
How chaotic a workflow is — measuring branching, inconsistency, and unpredictability in execution paths.
Why It Matters
High entropy means agents face unpredictable decision points. It signals whether automation needs tighter guardrails or the process needs redesign first. Low-entropy workflows are candidates for rapid automation. High-entropy workflows need human judgment or deeper process understanding before agents can handle them.
How TNDRL Implements It
TNDRL measures entropy across the Workflow Twin's execution paths — quantifying the degree of branching and unpredictability. Low entropy means consistent, repeatable paths. High entropy means many different execution routes with unpredictable switching logic. The entropy metric appears in the Workflow Twin visualization and contributes to the Automation Readiness Score. High-entropy steps often correlate with judgment calls or exception handling.
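One common way to quantify this kind of branching is Shannon entropy over the distribution of observed execution routes. The sketch below assumes that formulation; TNDRL's exact entropy metric is not specified in the text.

```python
from collections import Counter
from math import log2

def path_entropy(paths: list[tuple[str, ...]]) -> float:
    """Shannon entropy (in bits) of the observed execution-route
    distribution. 0.0 means every execution took the same route."""
    counts = Counter(paths)
    total = len(paths)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Low entropy: everyone takes the same path -- repeatable, automatable.
consistent = [("intake", "validate", "approve")] * 10

# High entropy: five distinct routes with no dominant pattern --
# unpredictable switching logic, likely judgment calls.
chaotic = [("intake", "validate", "approve"),
           ("intake", "approve"),
           ("validate", "intake", "approve"),
           ("intake", "validate", "escalate"),
           ("approve",)] * 2
```

With ten executions split evenly across five routes, the entropy is log2(5) ≈ 2.32 bits, versus 0 bits for the fully consistent workflow.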
See also: Automation Readiness Score
Term 5
Flow Efficiency
Plain English
The ratio of value-adding work to total elapsed time — how much of the process is productive versus waiting, reworking, or unnecessary handoffs.
Why It Matters
Shows where the biggest efficiency gains are, whether through process redesign or automation. For example, a workflow with 40% flow efficiency means the other 60% of elapsed time is consumed by waiting, rework, or unnecessary handoffs. That's where agents, better tooling, or process simplification can create immediate ROI.
How TNDRL Implements It
Flow efficiency is measured from behavioral timing data across every execution. TNDRL tracks active work (typing, clicking, system response) versus idle time (waiting for approvals, waiting for another system to respond, manual data lookup), and automatically surfaces wait states, redundant steps, and rework loops. Flow efficiency contributes to the Automation Readiness Score and is prioritized in the bottleneck analysis dashboard.
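The ratio itself is simple: active work time divided by total elapsed time. The segment labels and the active/idle split below are illustrative, not TNDRL's actual classification.

```python
# Hypothetical timing segments for one execution, in seconds.
segments = [
    ("typing",         120),  # active: hands-on work
    ("await_approval", 600),  # idle: waiting on another person
    ("system_lookup",   30),  # active: working the system
    ("await_system",   150),  # idle: waiting on a slow system
]
ACTIVE = {"typing", "system_lookup"}

active = sum(s for label, s in segments if label in ACTIVE)
elapsed = sum(s for _, s in segments)

# 150 active seconds out of 900 elapsed: most of the cycle is waiting.
flow_efficiency = active / elapsed
```

Here flow efficiency is about 17%, and the breakdown immediately shows where the lost time sits: a single approval wait dominates the cycle.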
See also: Automation Readiness Score
Term 6
Governed Blueprint
Plain English
A machine-readable execution plan with approved paths, blocked paths, escalation rules, and runtime constraints — the guardrails that travel with your automation.
Why It Matters
Without governance, agents operate without boundaries. A governed blueprint defines what an agent can do, what it can't, and when to escalate to a human. It's the difference between safe automation and liability.
How TNDRL Implements It
Trellis generates the blueprint from the Workflow Twin and Automation Readiness Score. It includes safety thresholds (e.g., escalate if confidence drops below 70%), compliance rules (which steps require human review), and rollback triggers (conditions that halt agent execution and alert ops). The blueprint is versioned, audited, and travels with the agent as it executes — every decision point includes a reference back to the blueprint that authorized it. Policy decisions made in the web app compile into the active blueprint that governs live agent behavior.
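A blueprint of this kind can be sketched as a data structure plus a per-step authorization check. The structure and function below are hypothetical; only the 70% confidence floor comes from the example in the text, and the real Trellis schema is not public.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Blueprint:
    """Illustrative machine-readable execution plan."""
    version: str
    approved_steps: frozenset[str]
    blocked_steps: frozenset[str]
    confidence_floor: float = 0.70          # escalate below this
    human_review_steps: frozenset[str] = frozenset()

def authorize(bp: Blueprint, step: str, confidence: float) -> str:
    """Return the runtime decision for one agent step."""
    if step in bp.blocked_steps:
        return "halt"        # rollback trigger: stop and alert ops
    if step not in bp.approved_steps:
        return "escalate"    # off-blueprint path: human decides
    if confidence < bp.confidence_floor or step in bp.human_review_steps:
        return "escalate"    # safety threshold or compliance rule
    return "proceed"

bp = Blueprint("v3",
               approved_steps=frozenset({"validate", "approve"}),
               blocked_steps=frozenset({"delete_record"}),
               human_review_steps=frozenset({"approve"}))
```

Because every decision routes through `authorize`, each agent action carries an implicit reference to the blueprint version that permitted it, which is what makes the execution auditable.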
See also: Drift Monitoring
Term 7
Drift Monitoring
Plain English
Continuous detection of when real work starts diverging from the approved model or safe operating conditions.
Why It Matters
Workflows change. People find new workarounds. Systems get updated. Drift monitoring catches when your automation model is no longer accurate — before it causes problems. Without drift detection, agents continue executing against stale policy while real work has evolved elsewhere.
How TNDRL Implements It
Behavioral observation continues post-deployment. TNDRL compares live execution against the governed blueprint and alerts when divergence exceeds thresholds. Drift can be structural (new decision branches appearing in the Workflow Twin) or policy-based (approval rules changing, new compliance requirements). Alerts appear in the web app with severity (informational vs. critical) and recommended actions (update the blueprint, suspend automation, escalate to compliance review).
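Structural drift detection of the kind described can be sketched as a comparison between live transition counts and the edges the blueprint approved. The 5% divergence threshold and the report shape are illustrative assumptions.

```python
def drift_report(approved_edges: set[tuple[str, str]],
                 live_edges: dict[tuple[str, str], int],
                 threshold: float = 0.05) -> dict:
    """Compare live execution against the approved model and grade
    the divergence. Threshold and severity labels are hypothetical."""
    total = sum(live_edges.values())
    off_model = sum(n for e, n in live_edges.items()
                    if e not in approved_edges)
    rate = off_model / total if total else 0.0
    return {
        "new_branches": sorted(e for e in live_edges
                               if e not in approved_edges),
        "off_model_rate": rate,
        "severity": "critical" if rate > threshold else "informational",
    }

approved = {("intake", "validate"), ("validate", "approve")}
live = {("intake", "validate"): 90,
        ("validate", "approve"): 80,
        ("validate", "spreadsheet_workaround"): 10}  # new workaround

report = drift_report(approved, live)
```

Here a new workaround branch accounts for just over 5% of live transitions, so the report crosses the threshold and flags the divergence as critical rather than informational.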
See also: Governed Blueprint
See also: Collection Integrity
Term 8
Collection Integrity
Plain English
The guarantee that data shown in the product has a real, complete path from collection to display — with no gaps, stubs, or fabricated data.
Why It Matters
Enterprise buyers need to trust that what they see in TNDRL reflects reality. Collection integrity means every score, every metric, and every workflow visualization is backed by real observed behavior. No hardcoded examples. No demo data masquerading as live data. No scoring algorithms using synthetic inputs.
How TNDRL Implements It
TNDRL uses a tiered sync architecture: Tier 1 metadata always flows, Tier 2 sanitized data syncs on schedule, and Tier 3 raw data stays local. Classification and masking happen at the source before transmission. Every behavioral event carries source provenance metadata: when it was captured, by which collector, from which process, and how many validation passes it completed. The web app surfaces collection health (percentage of time collectors are active, percentage of machines enrolled, sync latency) so administrators can see gaps and trust the signal.
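The tier routing and provenance record can be sketched as follows. The tier semantics come from the text; the provenance field names and the routing function are illustrative, not TNDRL's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Provenance:
    """Illustrative source-provenance record attached to every event."""
    captured_at: datetime
    collector: str          # which collector produced it
    process: str            # which process it was observed from
    validation_passes: int  # how many validation passes it completed

def sync_decision(tier: int) -> str:
    """Decide what leaves the machine, per the tiered sync model."""
    if tier == 1:
        return "sync_always"       # Tier 1: metadata always flows
    if tier == 2:
        return "sync_on_schedule"  # Tier 2: sanitized data, batched
    return "keep_local"            # Tier 3: raw data never transmitted

event = {
    "tier": 1,
    "provenance": Provenance(datetime.now(timezone.utc),
                             collector="desktop-collector",
                             process="excel.exe",
                             validation_passes=2),
}
```

Because routing happens per event at the source, a gap in the provenance chain is detectable rather than silently papered over, which is the integrity guarantee the term describes.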
See also: Behavioral Context Layer