
Enterprise AI Operations · Workflow Intelligence and Governance

Which workflows can you safely automate with AI?
And how will you govern them in production?

Two questions every enterprise AI leader now has to answer — most without the data to answer either one. TNDRL observes how work actually runs across desktops and browsers, scores every workflow for automation readiness, and governs every agent action at runtime against guardrails learned from real execution.

console.tndrl.co / twins / claims-intake · Workflow Twin

Claims Intake — First Notice of Loss
Live · 142 executions

Steps: Open → Pull policy → Validate → Enter details → Submit (6s · 14s · 18s · 8s)
Variants: Manual lookup (15% of executions) · Request docs (8% of executions)
Exceptions: Coverage gap · Escalate to supervisor (12% of executions) · Override (5%, rejoins) · Special handling (7%, escalated out)
Happy path 75% · Variants 23% · Exceptions 12%
5 steps · 3 variants · 4 exceptions · 9 total paths

Automation Readiness: 74 / 100 · Ready with conditions
Consistency 82 · UI stability 68 · Repetition 81 · Complexity 76 · Data structure 89 · Exception rate 48 · Compliance risk 84

Two questions.
One platform.

First in pilots. Then in production. Then at the board. Every enterprise rolling out AI lands here — and TNDRL answers both from the same observed evidence.

Question 1 · The portfolio decision

Which workflows can we safely automate with AI?

Most teams are guessing — relying on process docs, SME interviews, or vendor claims. None of those see the exceptions, handoffs, and judgment calls where automation actually breaks.

Fiber observes real execution across desktops and browsers. Weave scores each workflow for Automation Readiness across seven weighted dimensions — and withholds the score when observation is insufficient.

Question 2 · The runtime decision

How do we govern those workflows once they are automated?

Deploying an agent is the easy part. Keeping it in line once it's running is harder — allowed paths, escalation rules, drift detection, and an audit trail for every decision.

Sprout generates a Living Blueprint with guardrails learned from observation already attached. Trellis evaluates every agent action at runtime — allow, escalate, or block, logged and auditable.

The Living Blueprint answers both questions at once — what's automatable, and the guardrails that govern it in production.

Market reality

Every platform is shipping AI agents.
None of them score the risk first.

PROCESS MINING

Reads system logs. Misses the exceptions, escalations, and workarounds that happen between systems.

Blind to where automation breaks
TASK MINING

Screenshots capture pixels, not meaning. Can't distinguish a routine step from a critical judgment call.

No context, no governance
MANUAL DOCUMENTATION

Months of interviews. Captures the official process, not the real one.

Outdated before it's done

The happy path is easy to automate.
The rest is where it breaks.

95%

of AI pilots deliver no measurable P&L impact. The missing ingredient: understanding how work actually runs.

MIT, The GenAI Divide, 2025
40%+

of agentic AI projects will be canceled by 2027 — due to unclear value, escalating costs, or inadequate risk controls.

Gartner, 2025
<10%

of enterprises have scaled AI agents to deliver tangible value. The rest are stuck in pilot limbo.

McKinsey, 2026

51% of firms report AI incidents. Only one-third of organizations with AI in core operations have mature governance controls in place. McKinsey, State of AI Trust, 2026

TNDRL sees what they miss — and answers both questions from the same observed evidence.

What TNDRL delivers

A Workflow Twin. A score. A Living Blueprint. Governed runtime.

A Workflow Twin of every process, built from observed execution — not documentation, not interviews, not system logs. An Automation Readiness Score across seven dimensions that tells you what's safe to delegate, withheld when observation is insufficient. A Living Blueprint — the deployable automation, with guardrails learned from observation already attached. Governed runtime — every agent action evaluated against the Blueprint's boundaries before it executes.

Observe. Score. Build. Govern. One continuous loop.

How it works

From observed work to governed runtime.
In weeks, not months.

Collectors deploy in days. First Automation Readiness scores land in weeks — not after a months-long process-mining engagement. Living Blueprints ship only when observation is sufficient. Trellis keeps evaluating every agent action once they do.

See the work
01 — DISCOVER

See the exceptions nobody documented

Lightweight collectors observe workflow execution across desktop apps, browsers, and cross-system handoffs. Within two weeks, TNDRL surfaces the real workflows — including the exception paths, escalation logic, workarounds, and shadow processes that no documentation captures. Metadata only. No screenshots. No PII.

APP_SWITCH · Guidewire → SAP · 12:04:32
DECISION · Deductible > threshold · 12:04:48
EXCEPTION · Shadow process detected · 12:05:01
FIELD_ENTRY · Policy # entered (masked) · 12:05:14
COPY_PASTE · Claim ID → SAP · 12:05:22
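
For illustration, a single observed event could be represented as a metadata-only record like the sketch below. The field names and types here are assumptions, not TNDRL's actual schema; the point is that field values never appear in the record.

```python
# Illustrative shape of a metadata-only capture event, consistent with the
# stream above. Names and types are assumptions, not TNDRL's wire format.
from dataclasses import dataclass

@dataclass
class CaptureEvent:
    kind: str        # APP_SWITCH, DECISION, EXCEPTION, FIELD_ENTRY, COPY_PASTE
    detail: str      # app names and field *names* only -- never field values
    timestamp: str

events = [
    CaptureEvent("APP_SWITCH", "Guidewire -> SAP", "12:04:32"),
    CaptureEvent("FIELD_ENTRY", "policy_number (value masked at source)", "12:05:14"),
]
```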
Score the risk
02 — PRIORITIZE

Score every workflow for Automation Readiness

TNDRL assembles a living model of each process and scores it across 7 dimensions — backed by observed execution data, not documentation. You see which workflows are safe now, which carry exception patterns that need controls first, and where the biggest risk lives.

74 · CONDITIONAL
Claims Intake: Consistency 82 · Repetition 81 · Exception rate 48 · Data structure 89
Build the blueprint
03 — BUILD

Generate a Living Blueprint with guardrails built in.

Sprout turns observed sessions into the deployable automation via causal variant analysis — approved paths, blocked paths, escalation rules. The boundaries learned from observation travel with the blueprint, not tacked on after.

LIVING BLUEPRINT v2.4
Workflow: Claims Intake
Steps: 14 mapped
Variants: 3 governed
Allow <$5K · Block no-ID · Escalate >$10K
Govern the runtime
04 — GOVERN

Evaluate every agent action at runtime.

Trellis evaluates every action against the Living Blueprint's guardrails — allow, escalate, or block, logged and auditable. Drift is detected as real work diverges from the model and fed back to Fiber, so the loop stays closed.

RUNTIME
94% Allowed
1 drift → Fiber
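
As a sketch of the decision shape: the allow / escalate / block categories and the audit requirement come from the text above, while the data structures and matching logic are illustrative assumptions.

```python
# Minimal sketch of runtime evaluation against Blueprint guardrails.
# Data shapes and matching are assumptions; only the verdict categories
# and the audit-everything requirement come from the text.
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate"
    BLOCK = "block"

def evaluate(action: str, blueprint: dict, audit_log: list) -> Verdict:
    if action in blueprint["block"]:
        verdict = Verdict.BLOCK
    elif action in blueprint["escalate"]:
        verdict = Verdict.ESCALATE
    elif action in blueprint["allow"]:
        verdict = Verdict.ALLOW
    else:
        verdict = Verdict.ESCALATE   # unknown action: fail toward a human
    audit_log.append({"action": action, "verdict": verdict.value})
    return verdict
```

Failing unknown actions toward escalation rather than silent allowance is the conservative reading of "allow, escalate, or block, logged and auditable."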

Platform

Four modules.
One continuous lifecycle.

Fiber sees the work. Weave scores it. Sprout generates the Living Blueprint. Trellis governs every action at runtime. Drift detected during governance feeds back to Fiber — the loop stays closed.

Fiber · See
Capture real work
Lightweight desktop and browser collectors observe every application switch, decision point, and cross-system handoff. Metadata only. No PII. No screenshots.
Weave · Score
Build the Workflow Twin
Assembles captured behavior into a continuously updated Workflow Twin and scores Automation Readiness across two co-equal sub-components — Process Stability and Execution Risk — with cross-cluster interaction penalties. Variants, exceptions, timing, decision logic — all mapped and evaluated automatically.
Sprout · Build
Generate the Living Blueprint
When a workflow is ready, Sprout generates a Living Blueprint — the deployable automation, built from observed evidence via causal variant analysis, with the boundaries learned from observation already attached as guardrails.
Trellis · Govern
Govern every action
Evaluates every agent action at runtime against the Living Blueprint's guardrails — the boundaries learned from observation. Allow, escalate, or block. Drift feeds back into Fiber, the loop closes.
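
A minimal sketch of what the drift checks in that feedback loop could look like, using the two thresholds quoted in the example Blueprint below (new variant appearing in > 3% of runs, Flow Efficiency drop > 10%). The function shape is an assumption.

```python
# Illustrative drift checks. Thresholds are from the example Blueprint in
# the next section; the structure is an assumption, not Trellis internals.
def drift_signals(variant_freqs, known_variants, flow_eff, baseline_flow_eff):
    signals = []
    for variant, freq in variant_freqs.items():
        if variant not in known_variants and freq > 0.03:
            signals.append(f"re-score: new variant {variant!r} at {freq:.0%}")
    if baseline_flow_eff - flow_eff > 0.10:
        signals.append(f"review: flow efficiency {flow_eff:.0%} "
                       f"(baseline {baseline_flow_eff:.0%})")
    return signals
```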

Fiber sees. Weave scores. Sprout builds. Trellis governs.

Sprout output · What a Living Blueprint is

Observed workflow in. Living Blueprint out.

The Blueprint isn't a recommendation or a diagram. It's an executable governance artifact — the guardrails an agent runs against at every action.

Observed Workflow
Claims Intake · First Notice of Loss
  • Runs observed: 142
  • Happy path coverage: 75%
  • Variants mapped: 9
  • Exception paths: 4
  • Systems crossed: 6
  • Escalation triggers: 3
Sprout
generates
Living Blueprint
Readiness 74 · Conditional
Executable guardrails · machine-readable
Allow: Happy path + 2 stable variants (Manual Lookup, Request Docs). Covers 98% of runs with sufficient evidence.
Escalate: Coverage-gap exception. Override path rejoining on supervisor approval. Missing docs > 48h.
Block: Policy violation, duplicate claim, special-handling dead end. Never auto-approved.
Drift: New variant appearing in > 3% of runs triggers re-score. Flow Efficiency drop > 10% triggers review.
Enforced by Trellis · every agent action evaluated at runtime
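
For illustration, the guardrails above might serialize to a structure like this; the shape and key names are assumptions, while the figures come from the example Blueprint.

```python
# Hypothetical serialization of the Claims Intake guardrails shown above.
# Keys and shape are assumptions; the values are from the example.
blueprint = {
    "workflow": "Claims Intake - First Notice of Loss",
    "version": "2.4",
    "readiness": {"score": 74, "band": "conditional"},
    "allow": ["happy_path", "manual_lookup", "request_docs"],   # ~98% of runs
    "escalate": [
        "coverage_gap_exception",
        "override_rejoin",            # requires supervisor approval
        "missing_docs_over_48h",
    ],
    "block": [
        "policy_violation",
        "duplicate_claim",
        "special_handling_dead_end",  # never auto-approved
    ],
    "drift": {
        "new_variant_share": 0.03,    # > 3% of runs -> re-score
        "flow_efficiency_drop": 0.10, # > 10% drop -> review
    },
}
```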

Differentiation

Why existing approaches aren't enough

TNDRL vs. Process Mining vs. Task Mining vs. Agent Orchestration

What it sees
TNDRL: Behavioral execution — decisions, exceptions, escalations, cross-app sequences.
Process Mining: System event logs — what software recorded, not what people did.
Task Mining: Screen pixels — actions without operational meaning or context.
Agent Orchestration: Agent tool calls only — no visibility into how the work actually runs.

Exception handling
TNDRL: Maps every exception path, escalation, and workaround automatically.
Process Mining: Only sees exceptions that generate system events.
Task Mining: Captures screenshots of exception screens without understanding why.
Agent Orchestration: Retries and re-prompts at the agent layer — no operational model.

Automation safety
TNDRL: Dimensional readiness scoring across seven weighted dimensions, plus an evidence-sufficiency gate — we withhold a score when observation is insufficient.
Process Mining: No pre-automation safety scoring.
Task Mining: No pre-automation safety scoring.
Agent Orchestration: Routing and policy checks — not evaluated against observed behavioral reality.

Runtime governance
TNDRL: Living Blueprints with approved paths, blocked paths, and escalation rules — learned from observation.
Process Mining: Post-hoc conformance checking against the modeled process.
Task Mining: No runtime governance capability.
Agent Orchestration: Static guardrails and tool allowlists — not grounded in workflow evidence.

Drift detection
TNDRL: Continuous behavioral monitoring. Drift feeds back into observation — the loop closes.
Process Mining: Detects drift in system logs only — misses behavioral drift.
Task Mining: No continuous monitoring.
Agent Orchestration: No drift signal against the human workflow — agent-side only.

Privacy model
TNDRL: Metadata-first. No screenshots. No PII captured. No screen data.
Process Mining: Relies on system logs, which may contain PII in event payloads.
Task Mining: Screenshots capture everything on screen — PII, PHI, credentials.
Agent Orchestration: Depends on upstream tool access — not a collection layer.

Harness-agnostic by design

Pick any agent harness. TNDRL decides which workflows are safe for a harness to run — and evaluates every action it takes against guardrails learned from observation.

Anthropic Managed Agents  ·  OpenAI Agents SDK  ·  Google Vertex Agent Engine  ·  Microsoft Foundry Agent Service

The harness decides how. TNDRL decides whether.

Patent pending — provisional filed April 14, 2026, covering six claim families across the closed loop, including behavioral observation, dimensional scoring, Living Blueprint generation, and runtime governance.

The math behind the score

Here's what's in the composite.

Two co-equal halves. Seven weighted dimensions. Interaction penalties where dimensions conflict. An evidence-sufficiency gate that withholds the score when observation is insufficient — because a confident-looking number you can't trust is worse than no number at all.

Automation Readiness

74 / 100
Automation Readiness 74 · the composite
Weighted combination of the seven dimensions, reduced by cross-cluster interaction penalties, admissibility-gated on observation sufficiency.
Derivation for Claims Intake
Process Stability 77 (Consistency 82, UI stability 68, Repetition 81)
Execution Risk 71 (Complexity 76, Data structure 89, Exception rate 48, Compliance risk 84)
Two penalties triggered · net composite 74.
Penalties applied
UI stability (68) × Exception rate (48) — instability meets deviation. Consistency (82) × Exception rate (48) — stable primary path but meaningful exception tail.
Verdict
70–85 band — safe with guardrails. Explicit escalation rules on the three exception categories; tighten the manual cross-app handoff before lifting guardrails.
Ready with conditions

Below 50: redesign first. 50–70: redesign or tight guardrails. 70–85: safe with guardrails. Above 85: safe to delegate.

Score withheld if observation insufficient

Composite = weighted blend of Process Stability (77) and Execution Risk (71), less cross-cluster penalties → 74
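
As a back-of-envelope sketch of the mechanics, not the filed formula: equal weights within each half, co-equal halves, multiplicative penalties on the two flagged dimension pairs, and a run-count gate. Every weight and threshold below is an illustrative assumption.

```python
# Illustrative composite with cross-cluster penalties and an
# evidence-sufficiency gate. Weights, penalty shapes, and thresholds are
# assumptions for the sketch -- not TNDRL's filed scoring formula.
from statistics import mean

process_stability = {"consistency": 82, "ui_stability": 68, "repetition": 81}
execution_risk = {"complexity": 76, "data_structure": 89,
                  "exception_rate": 48, "compliance_risk": 84}

def composite(ps, er, sessions, min_sessions=100):
    # Evidence-sufficiency gate: withhold the score on thin observation.
    if sessions < min_sessions:
        return None, "score withheld: insufficient observation"

    ps_score = mean(ps.values())     # co-equal halves, equal weights
    er_score = mean(er.values())     # within each half (assumed)
    base = (ps_score + er_score) / 2

    # Cross-cluster penalties: pair a stability dimension with a risk
    # dimension; penalize when both sides are weak (assumed shape).
    penalty = 0.0
    for s_dim, r_dim in [("ui_stability", "exception_rate"),
                         ("consistency", "exception_rate")]:
        shortfall = (100 - ps[s_dim]) * (100 - er[r_dim]) / 100
        penalty += 0.05 * shortfall  # illustrative penalty weight

    return round(base - penalty), "scored"

score, status = composite(process_stability, execution_risk, sessions=142)
print(score, status)  # -> 74 scored (under these assumed weights)
```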

Process Stability

77 / 100

Can an agent run this workflow predictably?

Consistency
82
Consistency · how deterministic is execution
Does the same input produce the same path every run?
Computed from
Variance of path signatures across observed sessions. Primary variant frequency vs. long-tail variance. Step-transition predictability.
Claims Intake · 30 days
142 sessions · 3 dominant variants · 87% follow primary path · 12 sessions deviate but cluster into 2 known exception branches.
What this means
Predictable enough for agent pattern-matching; three variants are governable with explicit branch rules.
UI stability
68
UI stability · interface durability
Do the applications this workflow touches change underneath the agent?
Computed from
Aggregated instability flag patterns: rework loops, repeated failed actions, loading stalls, navigation deviations, validation failures, non-transitioning steps.
Claims Intake · 30 days
2.1% failed-action rate · 12 rework loops observed · Guidewire DOM churn flagged on 2 views.
What this means
Moderate fragility — selector-based automation will age as the vendor ships UI updates. Favor DOM-resilient action strategies.
Repetition
81
Repetition · observation density and ROI
How frequently does this workflow actually run?
Computed from
Session volume over the observation window, frequency variance (coefficient of variation), persistence across the capture period.
Claims Intake · 30 days
142 executions · avg 4.7/day · σ = 1.2 · volume stable across weekdays.
What this means
Strong statistical basis for the composite score. Meaningful automation payback at this volume.

Execution Risk

71 / 100

If automation misbehaves, how bad is the damage?

Complexity
76
Complexity · cross-application integration
How many systems does this workflow span, and how clean are the handoffs?
Computed from
Distinct application count, handoff quality classification, step depth, context-passing method (structured API vs. copy-paste vs. manual re-entry).
Claims Intake · 30 days
3 applications (Guidewire, SAP, Outlook) · 14 steps · 2 cross-app handoffs — 1 structured, 1 manual copy-paste flagged.
What this means
One fragile handoff. Agents must carry Claim ID context across apps; the copy-paste step is a context-loss risk worth tightening.
Data structure
89
Data structure · input predictability
How well-formed is the data this workflow receives?
Computed from
Input field schema typing, validation presence, format consistency, unstructured-blob ratio.
Claims Intake · 30 days
86% of inputs typed and validated · 14% free-text (adjuster notes) · 0% unstructured binary attachments.
What this means
Clean enough for reliable agent reasoning. Free-text notes are a minor judgment surface worth routing to human review.
Exception rate
48
Exception rate · happy-path deviation
How often does execution deviate into escalation, retry, or judgment loops?
Computed from
Ratio of exception-path executions to total. Categorization by exception type. Frequency stability.
Claims Intake · 30 days
5.2% exception rate · 7.4 exceptions/week · 3 known categories (deductible threshold, prior-auth check, fraud flag).
What this means
Exceptions concentrate in three known categories — governable with explicit escalation rules in the Living Blueprint.
Compliance risk
84
Compliance risk · regulatory exposure
If automation misjudges the work, what's the regulatory blast radius?
Computed from
Regulated-field detection on captured field names (PII, PHI, PCI, financial instruments), cross-system regulated-data flow audit gaps, delegation of human-judgment regulatory decisions to agents.
Claims Intake · 30 days
2 PII fields (claimant name, SSN — masked at source) · 0 PHI (property insurance, no medical records) · 0 audit gaps · 1 judgment step (deductible approval) flagged for human escalation.
What this means
Low regulatory exposure. Keep deductible-approval escalation rule; the workflow is safe for agent execution outside that gate.
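
For a flavor of what regulated-field detection on captured field names could look like, here is a small sketch; the pattern lists and categories are illustrative assumptions, and only field names are ever inspected.

```python
# Illustrative regulated-field detection on captured field *names*
# (values are never captured). The pattern list is an assumption.
import re

REGULATED = {
    "PII": re.compile(r"ssn|social_security|claimant_name|dob", re.I),
    "PHI": re.compile(r"diagnosis|icd_|medical", re.I),
    "PCI": re.compile(r"card_number|cvv|pan\b", re.I),
}

def classify_fields(field_names):
    hits = {}
    for name in field_names:
        for category, pattern in REGULATED.items():
            if pattern.search(name):
                hits.setdefault(category, []).append(name)
    return hits

print(classify_fields(["claimant_name", "ssn", "claim_amount"]))
# -> {'PII': ['claimant_name', 'ssn']}
```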

Interaction penalties pair a Process Stability dimension with an Execution Risk dimension, catching the "looks stable but execution is risky" patterns a linear combination would miss. Each penalty is a filed claim, not a heuristic.

Diagnostic Lenses

0.6
Entropy

Shannon entropy of the variant frequency distribution. Explains why a workflow's AR is what it is — high entropy means agents face unpredictable branching.

Entropy 0.6 · behavioral variability
How diverse is execution across the observed session population?
Computed from
Shannon entropy of the variant frequency distribution across observed sessions. Low when a few variants dominate; high when execution fragments.
Claims Intake · 30 days
142 sessions · 15 distinct variants · H = 0.62 · top 3 variants cover 87% of sessions · long tail is 12 rare variants.
Why it matters
Diagnostic for AR: 0.6 is why AR is 74 rather than higher — the long-tail variants mean the agent will periodically face branches outside the dominant three.
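
A minimal sketch of that computation, assuming the entropy is normalized to [0, 1] by the maximum possible entropy for the observed variant count. The per-variant session split below is hypothetical but consistent with the reported aggregates (142 sessions, 15 variants, top 3 covering 87%).

```python
# Illustrative: normalized Shannon entropy of a variant frequency
# distribution. Normalizing by log2(variant count) so H falls in [0, 1]
# is an assumption made to match the 0-1 scale shown here.
import math

def variant_entropy(counts):
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(probs)) if len(probs) > 1 else 0.0

# Hypothetical split: 3 dominant variants (124 sessions, ~87%) plus a
# long tail of 12 rare variants.
counts = [50, 40, 34] + [2] * 6 + [1] * 6
print(round(variant_entropy(counts), 2))  # -> 0.6
```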
68%
Flow Efficiency

Ratio of value-adding work to total elapsed time. Surfaces where the time is going — wait states, handoffs, rework — and often drives redesign-vs-automate decisions.

Flow Efficiency 68% · productive time ratio
How much elapsed time is actually value-adding work?
Computed from
Active work time divided by total elapsed time per session. Active = typing, clicking, waiting for system response. Non-active = idle waits, handoffs, rework loops.
Claims Intake · 30 days
avg session 12:04 · active 8:13 (68%) · wait 2:45 · rework 1:06 · biggest single-step stall: prior-auth lookup (avg 1:38).
Why it matters
32% of elapsed time is wait and rework — a redesign opportunity orthogonal to automation. A workflow can have high AR and still carry meaningful Flow Efficiency gains from process simplification.
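
The computation itself is simple; the value is in the time bucketing. A sketch using the average-session figures above:

```python
# Flow efficiency from per-session time buckets (seconds). Bucket
# definitions follow the text above; the arithmetic is the whole trick.
def flow_efficiency(active_s, wait_s, rework_s):
    total = active_s + wait_s + rework_s
    return active_s / total if total else 0.0

# avg Claims Intake session: 8:13 active, 2:45 wait, 1:06 rework
print(f"{flow_efficiency(493, 165, 66):.0%}")  # -> 68%
```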

Honest scoring · Evidence-sufficiency gate

When we haven't seen enough executions to score a workflow responsibly, we tell you we don't know.

Every Automation Readiness Score carries an admissibility check. If the observational basis is insufficient — too few runs, too much variance, not enough exception coverage — the score is withheld and the reason is surfaced. We would rather defer a judgment than fabricate one.
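
A sketch of what such an admissibility check could look like; the specific thresholds are illustrative assumptions, not the published gate.

```python
# Illustrative admissibility (evidence-sufficiency) check. Thresholds are
# assumptions; the reasons-surfaced-on-withhold behavior is from the text.
def admissible(sessions, exception_paths_seen, exception_paths_est, cv):
    # cv = coefficient of variation of daily volume (sigma / mean)
    reasons = []
    if sessions < 100:
        reasons.append(f"too few runs ({sessions} < 100)")
    if cv > 0.5:
        reasons.append(f"volume too unstable (CV {cv:.2f} > 0.5)")
    if exception_paths_seen < exception_paths_est:
        reasons.append("exception coverage incomplete")
    return (not reasons), reasons

# Claims Intake: 142 sessions, sigma 1.2 on mean 4.7/day -> CV ~0.26
ok, why = admissible(sessions=142, exception_paths_seen=4,
                     exception_paths_est=4, cv=0.26)
print("score admitted" if ok else f"score withheld: {'; '.join(why)}")
```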

System architecture

How the pieces fit together.

Five deployment artifacts, two trust boundaries, one closed loop — metadata crosses, PII doesn't.

[Architecture diagram]

ENDPOINT · CUSTOMER PERIMETER
Fiber Desktop: desktop capture. Metadata only · no PII.
Fiber Extension: browser capture. DOM structure · not content.
Raw business data stays on the endpoint. Never transmitted.

TNDRL CLOUD · TENANT-ISOLATED (GCP CLOUD RUN)
Canopy: display, analysis, and governance UI · supervisors, analysts, admins.
Fiber Fleet (See): collector lifecycle · enrollment · auth · policy distribution · telemetry ingestion.
Weave (Score) · INTELLIGENCE: Workflow Twin · 7-dim scoring · evidence gate · drift detection.
Sprout (Build) · GENERATION: causal variant analysis · Living Blueprint emission.
Trellis (Govern) · ENFORCEMENT: allow · escalate · block · audit log.

Collectors connect over mTLS. DRIFT → RE-OBSERVE · THE LOOP CLOSES.
Trust boundary
Metadata crosses the boundary. PII, screen content, and raw business data stay on the endpoint.
Tenant isolation
Row-level security enforced via rls_transaction() on every query (sketched below). One tenant can run multiple programs, BPO accounts, or business units under a single boundary.
Closed loop
Drift detected at Trellis feeds back into observation. Scores update. The Living Blueprint adapts. Govern informs See, and the cycle continues.
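
For readers who want the mechanics: a hedged sketch of what an rls_transaction()-style wrapper could look like on Postgres row-level security. Apart from the function name, everything here (driver, setting name, policy) is an assumption.

```python
# Hypothetical sketch of an rls_transaction()-style wrapper over Postgres
# row-level security. Driver, setting name, and policy are assumptions.
import psycopg2
from contextlib import contextmanager

@contextmanager
def rls_transaction(conn, tenant_id: str):
    with conn:                       # commit on success, rollback on error
        with conn.cursor() as cur:
            # Scope the tenant for this transaction only (is_local=true),
            # so RLS policies keyed on app.tenant_id apply to every query.
            cur.execute("SELECT set_config('app.tenant_id', %s, true)",
                        (tenant_id,))
            yield cur

# conn = psycopg2.connect(...)  # tenant-isolated service connection
#
# Matching (assumed) policy on each tenant-scoped table:
#   CREATE POLICY tenant_isolation ON workflow_twins
#     USING (tenant_id = current_setting('app.tenant_id')::uuid);
```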

Where TNDRL runs

Built for structured operational work.

TNDRL is defined by operating model, not vertical. Wherever teams run multi-step, multi-system, exception-heavy work, the behavioral execution layer applies.

Financial Services
KYC review. Expense approval. Compliance reporting.
Multi-system review workflows with regulatory audit trails. TNDRL captures the judgment calls that never make it into the ticket.
Insurance
Claims triage. Policy renewal. Contract review.
Exception-heavy workflows with coverage-gap escalations. Living Blueprints turn the adjuster's judgment into auditable guardrails.
Healthcare Operations
Prior authorization. Credentialing. Utilization review.
HIPAA-sensitive work where metadata-only collection is a prerequisite. Compliance posture tracked end-to-end.
Back Office
Invoice reconciliation. Payment exceptions. Order fulfillment.
High-volume, queue-driven transactional work. TNDRL discovers the variants and workarounds that shadow the documented SOP.

If the work is structured, multi-step, and exception-heavy, TNDRL applies. Ask how.

Security & Privacy

Privacy by architecture

TNDRL captures the structure of work, not the underlying data. Process metadata, decision points, and timing patterns are analyzed. PII, financial data, and health records never leave your perimeter.

Metadata only

Application names, field names, timing, workflow variants. No PII. No screenshots. Field values never leave your environment by default.

Sanitized proxy

Values stripped, patterns preserved. Used only for validation in air-gapped environments.

Raw data

Stays on-premises only. Never transmitted. Available for high-security local processing.

Metadata-first architecture
On-premises available
Certification roadmap in progress
Security review packet available
Guidewire ClaimCenter (example screen)
Claimant Name · NOT CAPTURED (on-screen value: John M. Smith)
Social Security Number · NOT CAPTURED (on-screen value: 412-55-8832)
Claim Amount · NOT CAPTURED (on-screen value: $14,250.00)

WHAT TNDRL CAPTURES
APP · Guidewire ClaimCenter
FIELDS · claimant_name, ssn, claim_amount
ACTION · Form submit → next step
TIMING · 4.2s on this step

FIELD NAMES, NEVER FIELD VALUES

Trust posture · Precise, not aspirational

What we can provide today
  • Metadata-only architecture with source-side masking
  • Field-name-only capture — never field values
  • Configurable exclusion lists and collection policies
  • On-premises deployment option for regulated data
  • Security review packet with architecture and controls map
  • Full audit trail of what was collected and what was not
Designed for · Certifications pending

Responsibilities mapped. Architecture and controls designed to satisfy the requirements buyers ask about most:

  • SOC 2 Type II
  • HIPAA BAA
  • PCI DSS
  • Regional data residency
  • Customer-managed keys (CMK)

Learn more about our security architecture →

The 30-Day Evaluation

A concrete evaluation path. Not a multi-month implementation.

Lightweight collectors deploy on a target team's desktops and observe real work silently. You see a Workflow Twin in two weeks and a full Automation Readiness map in thirty days — on workflows that matter to your business, not a sandbox.

Week 0
Scoped deploy

Collectors installed on a target team. Collection policy and exclusions configured centrally. No workflow changes required.

Weeks 1–2
First Workflow Twin

Continuous behavioral observation assembles the first living Workflow Twin — variants, exceptions, timing, and decision logic mapped from reality.

Weeks 3–4
Readiness map

Automation Readiness scored across seven dimensions with interaction penalties and the evidence-sufficiency gate. Safe-to-delegate vs. needs-human-judgment, per workflow.

Day 30
Executive readout

Walkthrough of the readiness map, the top candidates, the workflows to leave alone, and the guardrails the Living Blueprint would carry if promoted.

What you get

A Workflow Twin per observed process, an Automation Readiness map, and a written recommendation on where automation is safe, where it is not, and why.

What we need

One target team, IT sign-off on collector deployment, and a short scoping call. No data migration, no workflow re-engineering, no change management.

What stays yours

Source-side masking, field-name-only capture by default, configurable exclusions, and a full audit trail of what was collected and what was not.

See the full evaluation methodology →

Get Started

See your Automation Readiness Score before you automate.

See TNDRL build a living process model from real behavioral data, surface the risk your agents would miss, and score every workflow before automation begins. No slides. Working product.

Or email us directly: hello@tndrl.co

Built by operators who've led enterprise automation programs.

FAQ

Common questions

What is TNDRL?
TNDRL is the workflow intelligence and control platform for enterprise AI — built on the behavioral execution layer. Four modules running as one closed loop: Fiber observes how work actually runs — including exceptions, escalations, workarounds, and judgment calls that happen between systems. Weave builds a living Workflow Twin of each process and scores it for Automation Readiness — with an evidence-sufficiency gate that withholds a score when observation is insufficient. Sprout generates a Living Blueprint — the deployable automation, with guardrails learned from observation already attached. Trellis governs every agent action at runtime against those guardrails. Patent pending.

What is a Workflow Twin?
A Workflow Twin is a continuously updated digital representation of how a process actually runs — built from real behavioral observation, not documentation or interviews. It captures every step, variant, exception, decision point, and timing pattern. Unlike static process maps, a Workflow Twin evolves as your operations change, so it always reflects current reality. Deep dive →

How does TNDRL relate to AI agents?
AI agents execute tasks, but they operate on a narrow view of the process — the documented happy path. TNDRL provides the missing context: how the work actually runs, where it breaks, and what the risk profile looks like before anything is automated. Before agents act, TNDRL scores the risk. After they act, TNDRL monitors for drift and enforces compliance.

Is TNDRL an agent harness?
No. TNDRL is the governance layer around your harness. An agent harness — Anthropic Managed Agents, OpenAI Agents SDK, Google Vertex Agent Engine, Microsoft Foundry Agent Service — provides the runtime infrastructure for an AI agent: model invocation, tool orchestration, session state, sandboxed execution. TNDRL does none of that. TNDRL tells you which workflows are safe for a harness to run before it runs them, and evaluates every action the harness takes against guardrails learned from observation. Harness-agnostic by design. The harness decides how; TNDRL decides whether.

How is TNDRL different from process mining?
Process mining reconstructs workflows from system event logs — necessary context, but not sufficient. It misses the work that happens between systems: decisions, workarounds, shadow processes, and cross-application sequences. Some process mining vendors now claim event logs are an adequate context layer for AI agents. TNDRL provides the behavioral context layer — the missing dimension that makes agent context complete.

What is a TNDRL automation blueprint?
A TNDRL automation blueprint is a machine-readable, governed spec built from real behavioral observation. It includes every step, variant, exception, and decision point — plus allowed paths, blocked paths, safety thresholds, and rollback triggers. It evolves as operations change, so your automation stays current.

Does TNDRL capture sensitive business data?
No. TNDRL uses a metadata-first architecture. It captures application names, field names, timing, and workflow patterns — never the underlying business data. PII, financial records, and health data never leave your premises.

How does TNDRL keep automation compliant at runtime?
Every automation blueprint includes governance rules: allowed variants, blocked paths, safety thresholds, and rollback triggers. TNDRL enforces these at runtime, continuously monitoring execution and flagging when reality drifts from what was approved. Think of it as a compliance layer that travels with your automation.

How does Weave score Automation Readiness?
Weave evaluates workflows across weighted behavioral dimensions organized into two co-equal sub-components — Process Stability (Consistency, UI stability, Repetition) and Execution Risk (Complexity, Data structure, Exception rate, Compliance risk). Interaction penalty functions pair a Process Stability dimension with an Execution Risk dimension, catching compounding risk a linear combination would miss. Rather than a binary yes/no, the Automation Readiness Score tells you which workflows are safe now, which need guardrails, which need redesign first, and which should stay human. You see the full risk picture — exactly what you're committing to before you commit. Deep dive →

How do Automation Readiness, Entropy, and Flow Efficiency relate?
Automation Readiness is THE score — the composite that answers whether a workflow is safe to delegate to an agent, computed across two co-equal sub-components: Process Stability (Consistency, UI stability, Repetition) and Execution Risk (Complexity, Data structure, Exception rate, Compliance risk). Entropy and Flow Efficiency are diagnostic companions — not peer scores. Entropy quantifies behavioral variability observed across the session population; Flow Efficiency measures the ratio of value-adding work to total elapsed time. Both explain why an AR score is what it is, but neither is a decision signal on its own.

How long does deployment take?
TNDRL deploys lightweight collectors on a target team's desktops. Within two weeks, you see your first Workflow Twin and Automation Readiness Scores for real workflows. In 30 days, you have a readiness map of your top automation candidates. No multi-month implementation. No change management required for the pilot. The collectors run silently alongside normal work. See the full evaluation process →