Architecture Comparison

Guardrails and control across three AI application architectures

Compare where AI actually has autonomy, what guardrails constrain it, and why Cherry is positioned between open-ended agents and cloud workflow builders.

This page turns the architecture comparison into a product-facing overview for sales, customer education, and internal positioning.

Where AI decides

The core difference is whether AI controls planning and tool use, or whether a fixed workflow engine keeps orchestration deterministic.

What guardrails exist

Cloud workflows and Cherry constrain AI to scoped calls with known inputs and outputs. Agentic systems push far more decision-making into the model.

Why Cherry exists

Cherry keeps the workflow model of a reliable automation engine, but runs it on your infrastructure with typed connectors and optional local LLMs.

01

OpenClaw / Devin / AutoGPT-style

Fully Agentic

The model is the orchestrator. It receives a high-level goal, plans its own steps, chooses tools, executes actions, and decides when it is done.

Full autonomy · Broad permissions · Non-deterministic

How it works

The AI agent receives a high-level goal and autonomously decides what to do. It plans its own steps, picks tools, executes actions, and iterates until the goal is met or abandoned.

There is no predefined workflow. The agent itself owns planning, branching, retries, and completion criteria at runtime.

User Goal
  -> [AI Agent] <- full autonomy
        -> decides what tools to call
        -> decides execution order
        -> decides when to retry or pivot
        -> reads and writes external systems directly
        -> decides when "done"
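The loop above can be sketched in a few lines. This is a minimal illustration, not any specific product's implementation: `plan_next_action` stands in for the LLM planner, and the tool registry is invented for the example. The point is that the model, not an engine, chooses the tool, the order, and the stopping condition at runtime.

```python
def plan_next_action(goal, history):
    """Stand-in for the LLM planner: picks the next tool call, or 'done'."""
    if not history:
        return {"tool": "search", "args": {"query": goal}}
    if history[-1]["tool"] == "search":
        return {"tool": "write_report", "args": {"facts": history[-1]["result"]}}
    return {"tool": "done", "args": {}}

# Illustrative tool registry; a real agent would hold API clients here.
TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "write_report": lambda facts: f"report based on {facts}",
}

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):            # step budget is the only hard stop
        action = plan_next_action(goal, history)
        if action["tool"] == "done":      # the model decides when it is finished
            return history
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"tool": action["tool"], "result": result})
    return history                        # budget exhausted

trace = run_agent("summarize Q3 incidents")
```

Note that the step sequence in `trace` exists only after the run: it is an output of the model, not an input to it, which is exactly what makes auditing and SLAs hard.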

Where AI has control

This table makes the blast radius explicit by separating model influence from deterministic orchestration.

Layer            | AI control | Notes
Task planning    | Full       | Agent decides what steps to take.
Tool selection   | Full       | Agent picks which APIs and tools to invoke.
Execution order  | Full       | No fixed sequence. The agent decides at runtime.
Data flow        | Full       | Agent decides what data moves between steps.
Error handling   | Full       | Agent decides how to recover from failures.
Termination      | Full       | Agent decides when the task is complete.
External actions | Full       | Sends emails, writes to DBs, and calls APIs directly.

Guardrails

What constrains the system

  • Minimal by design. The agent operates with broad permissions and self-directed logic.

  • Some implementations add tool-level permission scoping such as read-only filesystems or sandboxed execution.

  • Human-in-the-loop approvals can be inserted, but they weaken the autonomy model and are often optional.

  • Token budgets, cost limits, and timeouts are usually the main hard stops.
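Because those budgets and timeouts are usually the only hard stops, they tend to live in a small wrapper around the run. A minimal sketch, with illustrative names and limits:

```python
import time

class BudgetExceeded(Exception):
    pass

class RunBudget:
    """Hard stops for an agent run: token ceiling plus wall-clock timeout."""
    def __init__(self, max_tokens, max_seconds):
        self.max_tokens = max_tokens
        self.max_seconds = max_seconds
        self.tokens_used = 0
        self.started = time.monotonic()

    def charge(self, tokens):
        # Called after every model invocation; raising aborts the run.
        self.tokens_used += tokens
        if self.tokens_used > self.max_tokens:
            raise BudgetExceeded("token budget exhausted")
        if time.monotonic() - self.started > self.max_seconds:
            raise BudgetExceeded("wall-clock timeout")

budget = RunBudget(max_tokens=1000, max_seconds=60)
budget.charge(400)          # within budget
try:
    budget.charge(700)      # 1100 > 1000: hard stop
    stopped = None
except BudgetExceeded as e:
    stopped = str(e)
```

Note that this constrains cost, not behavior: the agent still chooses freely inside the budget.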

Business pros

  • Maximum flexibility for novel, unstructured tasks.

  • Low setup cost for one-off work because no workflow must be designed first.

  • Strong demo value and marketing appeal.

  • Can adapt to edge cases on the fly without code changes.

Business cons

  • Outcomes are unpredictable. The same input can produce different results across runs.

  • Auditing is difficult because there is no stable step sequence.

  • Token and compute costs are hard to forecast.

  • Hard to guarantee SLAs because execution time and quality are non-deterministic.

  • Usually depends on the most capable and expensive frontier models.

  • Creates a customer trust problem because behavior is hard to explain or promise.

Security pros

  • Very few inherent security advantages.

  • If sandboxed well, some actions can be contained.

Security cons

  • Largest attack surface because prompt injection can redirect the full workflow.

  • High credential exposure risk because the agent often needs broad access to many systems.

  • No predictable blast radius if the agent is compromised or hallucinating.

  • High data exfiltration risk because the agent can read from one system and write to another.

  • Forensics are harder because runs do not follow a stable sequence.

  • Least-privilege is difficult when autonomy requires wide permissions.

02

n8n / Make / Zapier with AI nodes

Cloud Workflow + LLM Calls

A human defines a fixed workflow and the platform executes it. AI is used inside specific nodes, while the workflow engine keeps control over branching, sequencing, and side effects.

Fixed DAG · AI as tool · Vendor hosted

How it works

A human designs a deterministic workflow in a visual builder. The workflow runs on the platform infrastructure and executes the same step sequence each time.

AI appears as a node in the flow for tasks such as classification, extraction, or drafting. The workflow engine still controls execution order, branching, and data routing.

Trigger (for example, a new email)
  -> [Workflow Engine] <- orchestrator, fixed DAG
        -> Step 1: Fetch data
        -> Step 2: Call LLM
        -> Step 3: Branch on result
        -> Step 4: Call LLM
        -> Step 5: Write to destination
        -> Step 6: Send notification
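The inversion relative to the agentic model can be shown in a few lines. This is a generic sketch of any fixed-DAG engine, not a specific platform's API: the step list is authored by a human, never changes at runtime, and the LLM appears only inside one step (`classify_llm` is a stub for the model call).

```python
def fetch_data(ctx):
    ctx["email"] = {"subject": "Invoice overdue", "body": "..."}

def classify_llm(ctx):
    # The AI node: structured input in, a label out. Stubbed here.
    ctx["label"] = "billing" if "invoice" in ctx["email"]["subject"].lower() else "other"

def route(ctx):
    # Branching consumes AI output but the branch logic itself is deterministic.
    ctx["queue"] = {"billing": "finance-inbox", "other": "triage"}[ctx["label"]]

def notify(ctx):
    ctx["notified"] = True

WORKFLOW = [fetch_data, classify_llm, route, notify]  # static sequence

def run(workflow):
    ctx = {}
    for step in workflow:   # the engine, not the model, owns the order
        step(ctx)
    return ctx

result = run(WORKFLOW)
```

The model can change *what* flows through a node, but never *which* nodes run or in what order.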

Where AI has control

This table makes the blast radius explicit by separating model influence from deterministic orchestration.

Layer            | AI control | Notes
Task planning    | None       | Workflow is predefined by a human designer.
Tool selection   | None       | Tools are fixed in the workflow definition.
Execution order  | None       | The DAG is static.
Data flow        | Partial    | AI outputs feed downstream steps, but routing remains fixed.
Error handling   | None       | The workflow engine handles retries and fallbacks.
Termination      | None       | The workflow ends when all steps complete.
External actions | Scoped     | AI only acts through the nodes where it is called.

Guardrails

What constrains the system

  • Workflow-level guardrails come from the fixed DAG, which makes runs predictable and auditable.

  • Each AI node has defined inputs and an expected output shape.

  • The platform manages authentication, rate limits, and sandboxing.

  • Branching logic remains deterministic even when it consumes AI output.

Business pros

  • Execution is predictable because every run follows the same workflow.

  • Visual builders let non-developers assemble automation quickly.

  • Large connector ecosystems accelerate integration work.

  • Fast time-to-value for common internal automation.

  • Pricing and execution volume are easier to model than agent loops.

  • Auditability is straightforward because each run follows the same path.

Business cons

  • Strong vendor dependency because workflows live on someone else's infrastructure.

  • Data leaves your perimeter because workflow data is processed on the cloud platform.

  • AI sophistication is limited because the model is treated as a boxed tool.

  • High-throughput automation can become expensive quickly.

  • Edge cases often require more branches or separate workflows.

  • Platform pricing, roadmap, or availability changes can affect all workflows.

  • Complex custom logic often ends up in workaround code nodes.

Security pros

  • Small AI attack surface because AI is sandboxed to specific nodes.

  • Deterministic audit trail because each run follows the same step order.

  • Managed infrastructure shifts patching, uptime, and isolation to the platform.

  • Credential injection is handled per node by the vendor platform.

Security cons

  • Data residency risk because processing happens on the vendor infrastructure.

  • You must trust the vendor with your credentials and workflow data.

  • Shared multi-tenant infrastructure increases dependency on vendor isolation.

  • Encryption and key-management controls are mostly opaque to you.

  • Compliance posture depends on the vendor's DPA and hosting regions.

  • You cannot independently verify how API keys and service accounts are stored.

03

Self-hosted, structured workflow with scoped AI

Cherry

Cherry keeps a fixed workflow engine, typed connectors, and auditable step execution, while running on your infrastructure and keeping AI limited to structured operations.

Self-hosted · Scoped AI · Deterministic

How it works

A self-hosted workflow engine executes predefined workflows using a queue and worker model. AI appears only at specific steps with structured inputs and expected output schemas.

Cherry follows the reliability model of workflow builders, but keeps execution on your server. When needed, the LLM can also run locally for full data sovereignty.

Trigger (gmail.newEmail polling)
  -> [Worker / Queue] <- orchestrator, fixed steps
        -> Step 1: gmail.fetchEmail
        -> Step 2: ai.classify
        -> Step 3: Branch
             -> [not application] -> ai.generate + gmail.sendReply
             -> [application] -> continue
        -> Step 4: ai.extract
        -> Step 5: pdf.extractText
        -> Step 6: ai.summarize
        -> Step 7: ai.score
        -> Step 8: google_sheets.append
        -> Step 9: ai.generate + gmail.send
        -> Step 10: telegram.send

Where AI has control

This table makes the blast radius explicit by separating model influence from deterministic orchestration.

Layer              | AI control | Notes
Task planning      | None       | Workflow steps are predefined in code or config.
Tool selection     | None       | Each step declares its connector and action explicitly.
Execution order    | None       | The workflow definition controls the sequence.
Data flow          | Partial    | AI returns structured output that can feed downstream steps.
Error handling     | None       | The worker handles retries, lease expiry, and fallbacks.
Termination        | None       | The workflow completes when the step sequence completes.
External actions   | Scoped     | Connectors execute side effects. AI does not call tools directly.
Content generation | Scoped     | AI can draft text or JSON, but the workflow decides when it is used.

Guardrails

What constrains the system

  • Workflow-level guardrails come from the fixed step sequence and auditable `step_logs` records.

  • Each AI call has a specific operation type, structured input, and expected JSON schema.

  • Connectors are typed modules. AI never directly touches external APIs.

  • Credentials are resolved by the worker at runtime and are never injected into AI prompts.

  • Infrastructure stays on your server. With a local LLM, data can remain entirely inside your perimeter.

  • Queue leasing and retry logic reduce duplicate execution and improve recovery behavior.

  • Cherry separates thinking from doing: AI produces structured outputs, connectors perform side effects.
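The "thinking vs doing" split can be sketched as follows. This is an illustration of the pattern, not Cherry's code: `validate` is a minimal stand-in for a real JSON Schema check, and the function names are invented. The AI call must return JSON matching an expected shape, and only after validation does a connector perform the side effect.

```python
import json

# Expected output shape for the AI step (illustrative).
EXPECTED = {"category": str, "confidence": float}

def validate(payload, expected):
    """Reject AI output that does not match the expected shape."""
    data = json.loads(payload)
    for key, typ in expected.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"AI output failed schema check on {key!r}")
    return data

def ai_classify(text):
    # Stand-in for the model call; returns raw JSON text.
    return '{"category": "application", "confidence": 0.93}'

def sheets_append(row):
    # Connector performs the side effect. Credentials would be resolved
    # by the worker here, never placed in the AI prompt.
    return {"appended": row}

result = validate(ai_classify("..."), EXPECTED)                           # thinking, validated
side_effect = sheets_append([result["category"], result["confidence"]])   # doing
```

A malformed or injected model response fails at `validate` and never reaches the connector, which is what bounds the blast radius to a single step's output.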

Business pros

  • Full data sovereignty, which is useful for Swiss and EU compliance requirements.

  • Predictable execution with step-by-step logs and deterministic workflow structure.

  • No workflow-platform execution fees. You pay for infrastructure and model usage.

  • Portable architecture with no lock-in to a workflow vendor.

  • Can swap hosted LLMs for local models in zero-data-egress environments.

  • Modular connector architecture keeps the core engine clean.

  • Custom retry logic, prompts, and branching are under your control.

  • Works as both product infrastructure and a technical sales artifact.

Business cons

  • Higher upfront build cost because you are building platform capability, not just configuring it.

  • Maintenance burden stays with your team.

  • No visual builder yet, so workflows are still defined in code or config.

  • Connector ecosystem is smaller than cloud workflow platforms.

  • Bus-factor risk is high if platform knowledge sits with only one or two people.

  • Local model quality can lag frontier hosted models on harder tasks.

Security pros

  • Data can remain inside your perimeter, especially with local models.

  • You control credential storage, access, and encryption strategy.

  • Deterministic audit trail with explicit step logs.

  • AI is sandboxed to scoped operations and cannot invoke connectors directly.

  • Least-privilege is easier because connectors declare credential requirements per step.

  • Single-tenant infrastructure gives you direct isolation control.

  • Compliance is easier to reason about because residency and retention are under your control.

  • The stack is transparent because it is your own code and infrastructure.

Security cons

  • You own patching, hardening, and operational security.

  • SQLite has no built-in encryption at rest, so an additional solution is required.

  • Credential encryption is not yet implemented in the current design.

  • A single server compromise exposes the DB, credentials, and queue.

  • Authentication is basic and does not yet include stronger controls such as MFA or SSO.

  • Network segmentation is limited when worker, API, and dashboard share the same host.

Summary Matrix

Fast read across the three models

If the longer sections are for evaluation, this matrix is for quick positioning. It condenses the main business, control, and security tradeoffs into one view.

Dimension                     | Agentic               | Cloud workflow          | Cherry
AI autonomy                   | Full                  | None (tool only)        | None (tool only)
Workflow predictability       | Low                   | High                    | High
Audit trail consistency       | Low                   | High                    | High
Data sovereignty              | Depends on deployment | Low (cloud vendor)      | Full
Prompt injection blast radius | Entire system         | Single node output      | Single step output
Credential exposure to AI     | High                  | Low (platform injects)  | None (worker resolves)
Setup effort                  | Low                   | Low-Medium              | High
Operational cost at scale     | Unpredictable         | High (per execution)    | Low (infrastructure only)
Flexibility for novel tasks   | High                  | Low-Medium              | Medium
Regulatory compliance (CH/EU) | Difficult             | Depends on vendor       | Straightforward
Vendor lock-in                | Model provider        | Platform + model        | None (own code)
Local LLM option              | Impractical           | No                      | Yes