I Watched Nate Jones Describe What I Already Built

April 2026 — Dave Eichler, Founder, DEIA Solutions


Nate Jones published a video last week breaking down the six-layer agent infrastructure stack — compute sandboxes, identity, memory, tools, provisioning, and orchestration. It's one of the clearest breakdowns of the category I've seen. Watch it.

Then he said something that stopped me cold.

"The orchestration layer is where the next infrastructure-defining company gets built."

He called it the missing piece. The layer that doesn't exist yet at production grade. The Kubernetes moment for agents — scheduling, lifecycle management, health checking, dependency resolution — that nobody has solved.

I've been running it in production for months.


What Nate Got Right

The six-layer framing is accurate. Agents need somewhere safe to run code. They need identity. They need memory that doesn't live inside a model. They need tools without having to hand-wire every integration. They need to provision and pay for things. And they need something to coordinate all of it reliably.

He's also right that most of this stack is still in flux. Identity for agents is a shim on top of a human protocol. Memory is a land grab between startups and the model labs. Orchestration barely exists outside of framework-level demos that nobody has hardened to enterprise grade.

Where I'd push back: the orchestration problem isn't unsolved. It's unrecognized. Because the people who've solved it built it for themselves and didn't put a VC announcement in front of it.


What I Built

Let me describe the factory.

Every task in my development process flows through three autonomous daemons. They run 24/7, coordinate entirely through files, and require zero human intervention to schedule, dispatch, and execute work across a fleet of AI workers, governed by policy to a maximum of 20 concurrent bees in the hive at any time.

The Scheduler is not a queue. It's an optimization engine built on OR-Tools that continuously evaluates the backlog, resolves dependency graphs, estimates velocity from historical completion data, and produces a schedule.json diff-tracked in real time. When execution deviates from the plan — a task runs long, a dependency fails, velocity drops — the scheduler recomputes and the downstream systems adjust. It doesn't wait to be told. It watches.
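
The core of that behavior is dependency resolution plus duration estimates. The real scheduler uses OR-Tools and historical velocity data; the sketch below is a simplified stand-in (hypothetical task shape, standard-library only) that shows how a dependency graph and per-task estimates yield earliest-start times:

```python
from graphlib import TopologicalSorter

def build_schedule(tasks):
    """tasks: {task_id: {"deps": [...], "est_minutes": number}}
    Returns a valid execution order and earliest-start times (in minutes)
    that respect the dependency graph."""
    graph = {tid: set(spec["deps"]) for tid, spec in tasks.items()}
    order = list(TopologicalSorter(graph).static_order())  # raises on cycles
    start = {}
    for tid in order:
        deps = tasks[tid]["deps"]
        # A task can start once its slowest dependency has finished.
        start[tid] = max(
            (start[d] + tasks[d]["est_minutes"] for d in deps), default=0.0
        )
    return {"order": order, "start": start}

tasks = {
    "A": {"deps": [], "est_minutes": 30},
    "B": {"deps": ["A"], "est_minutes": 45},
    "C": {"deps": ["A"], "est_minutes": 20},
    "D": {"deps": ["B", "C"], "est_minutes": 10},
}
schedule = build_schedule(tasks)
# "D" waits for the longer branch through "B": it cannot start before minute 75.
```

Recomputing on deviation is then just calling build_schedule again with updated estimates; the downstream systems read the fresh output rather than holding any state of their own.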

The Dispatcher reads schedule.json on a ten-second loop. It checks how many workers are currently running, how many slots are available, which tasks have their dependencies confirmed complete in _done/, and moves exactly the right number of tasks from backlog/ to queue/. It maintains its own dispatched.jsonl ledger — an append-only record of every decision it made and when. It never races the queue-runner because it reads actual filesystem state before every dispatch decision.
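
One pass of that loop is small enough to sketch. The folder names and task-file shape below follow the description above; everything else (the exact JSON fields, the ledger record) is illustrative, not the production code:

```python
import json
import time
from pathlib import Path

ROOT = Path("factory")  # hypothetical layout: backlog/, queue/, running/, _done/

def deps_done(task_file: Path) -> bool:
    """A task promotes only when every declared dependency is confirmed in _done/."""
    deps = json.loads(task_file.read_text()).get("deps", [])
    return all((ROOT / "_done" / f"{d}.json").exists() for d in deps)

def dispatch_once(max_workers: int = 20) -> list:
    """One pass of the ten-second loop: read ground truth from the filesystem,
    then fill exactly the free slots."""
    running = len(list((ROOT / "running").glob("*.json")))
    slots = max_workers - running
    moved = []
    for task_file in sorted((ROOT / "backlog").glob("*.json")):
        if slots <= 0:
            break
        if deps_done(task_file):
            task_file.rename(ROOT / "queue" / task_file.name)  # atomic on one filesystem
            moved.append(task_file.stem)
            slots -= 1
    # Append-only ledger of every dispatch decision.
    with (ROOT / "dispatched.jsonl").open("a") as ledger:
        for tid in moved:
            ledger.write(json.dumps({"task": tid, "ts": time.time()}) + "\n")
    return moved
```

Because every decision starts by re-reading actual directory contents, a crashed or restarted dispatcher picks up exactly where the filesystem says the system is.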

The Queue-Runner is 7,500 lines of code that have been evolving for months. It didn't run perfectly from day one; it was built up incrementally under my guidance until it became what I'm describing here: a human-guided, AI-executed construction project that built its own factory. It handles QUEUE → RUNNING → DONE lifecycle transitions, heartbeat monitoring, watchdog timeouts, retry logic, fix cycles, and result routing. When a task completes clean, it goes to _done/. When it fails, it goes to _needs_review/. The queue-runner doesn't know a scheduler exists. It just processes whatever lands in queue/. That separation is intentional.
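
The heart of that lifecycle is a routing decision: given a running task's heartbeat and exit status, where does it go next? A minimal sketch, with a made-up timeout value and simplified states (the real runner also handles retries and fix cycles):

```python
HEARTBEAT_TIMEOUT = 300  # seconds; hypothetical policy value

def route(task_state: str, exit_code, last_heartbeat: float, now: float) -> str:
    """Decide the next folder for a task in the RUNNING state.
    exit_code is None while the worker is still executing.
    Returns one of: 'running', '_done', '_needs_review'."""
    if task_state != "RUNNING":
        raise ValueError(f"expected RUNNING, got {task_state}")
    if now - last_heartbeat > HEARTBEAT_TIMEOUT:
        return "_needs_review"   # watchdog: the worker stopped heartbeating
    if exit_code is None:
        return "running"         # heartbeat fresh, still working
    return "_done" if exit_code == 0 else "_needs_review"
```

Keeping this a pure function of observable state is what lets the runner stay ignorant of the scheduler: it needs nothing beyond what the filesystem and the worker's heartbeat already tell it.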

Deconfliction is structural. Tasks only promote to queue/ when their declared dependencies are confirmed in _done/. Race conditions are impossible by design — not by lock — because the dispatcher reads ground truth from the filesystem before every move. There is no shared mutable state. There is no coordination protocol between daemons. There is a dependency graph and a set of folders, and the system is correct by construction.

This is the scheduling and lifecycle layer Nate said doesn't exist. It exists. It runs autonomously overnight. I wake up to completed work.


The Broader Stack

Since we're mapping Nate's layers to what I've built:

Compute — my workers are governed AI instances, not cloud VMs, but the principle is the same: isolated execution, bounded scope, no cross-contamination between tasks.

Identity — hodeia is my identity service. Unpublished, but real and running. Four-Vector profiles (quality, reliability, preference, authority scores) give every entity in the system a measurable identity that compounds over time. Not a shim. Not email. A native agent identity layer built to outlast the current era of "just use email as a key."

Memory — the Event Ledger. Append-only, hash-chained, tamper-evident. Every interaction, every governance decision, every task completion is logged. The memory layer is also the audit layer. Nobody owns your memory in this system — that's a constitutional guarantee, not a product decision.
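
The hash-chaining pattern is standard and worth showing concretely. This is a generic sketch of the technique, not the Event Ledger's actual schema: each entry's hash covers both its payload and the previous hash, so editing any historical event breaks every later link.

```python
import hashlib
import json

GENESIS = "0" * 64

def append_event(chain: list, event: dict) -> list:
    """Append an event whose hash binds it to the previous entry."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    return chain + [{"event": event, "prev": prev, "hash": digest}]

def verify(chain: list) -> bool:
    """Recompute every link; any tampered entry makes the chain fail."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Append-only plus hash-chained is what makes the memory layer double as the audit layer: verification needs no trusted database, only the chain itself.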

Tools — GateEnforcer evaluates every outbound action against a governance policy before it executes. TASaaS scans every inbound payload before an agent sees it. The Agent Skills governance wrapper (what I call TASK-019) is what Composio should be if it had a conscience built in at the protocol level, not bolted on at the marketing level.
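
GateEnforcer's internals aren't spelled out here, but the essential property — evaluate before execute, deny by default — can be sketched with a hypothetical policy shape:

```python
def gate_allows(action: dict, policy: dict) -> bool:
    """Deny-by-default evaluation of an outbound action before it executes.
    Policy shape is illustrative:
    {kind: {"allowed_targets": [...], "max_cost": float}}."""
    rule = policy.get(action["kind"])
    if rule is None:
        return False                                   # unknown action kinds never run
    if action["target"] not in rule["allowed_targets"]:
        return False
    return action.get("cost", 0.0) <= rule.get("max_cost", float("inf"))
```

The design point is that the gate sits in front of execution, not behind it: an action the policy doesn't explicitly describe simply does not happen.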

Provisioning — Three Currencies: Clock, Coin, Carbon. Every task execution emits raw telemetry — model name, tokens in, tokens out, wall time, cost tier. Coin and Carbon derive from raw telemetry, never from hardcoded rates. No agent in my system can spend resources without it being measured and attributed. That's not billing. That's accountability. And sitting underneath all three, as a fourth dimension that lives in the Orchestration layer: Concurrency. How many agents ran simultaneously, at what cost, under what governance constraints. That number is capped at 20. On purpose. By policy. Not by accident.
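
"Derived, never hardcoded" means the rate tables are inputs, not constants. A minimal sketch with made-up field names and illustrative rates:

```python
def derive_currencies(telemetry: dict, price_per_mtok: dict, grams_co2_per_mtok: dict) -> dict:
    """Derive Coin and Carbon from raw telemetry plus externally supplied
    rate tables. Nothing in this function knows a price; swap the tables
    and every historical record can be re-derived."""
    model = telemetry["model"]
    mtok = (telemetry["tokens_in"] + telemetry["tokens_out"]) / 1_000_000
    return {
        "clock": telemetry["wall_seconds"],
        "coin": round(mtok * price_per_mtok[model], 6),
        "carbon": round(mtok * grams_co2_per_mtok[model], 6),
    }

telemetry = {"model": "m-large", "tokens_in": 600_000,
             "tokens_out": 400_000, "wall_seconds": 42.0}
usage = derive_currencies(telemetry, {"m-large": 3.0}, {"m-large": 500.0})
# One million tokens at $3.00/Mtok and 500 gCO2/Mtok: coin 3.0, carbon 500.0
```

Because the raw telemetry is what gets stored, attribution survives rate changes: yesterday's tasks can be re-priced under today's tables without touching the ledger.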

Governance — this is the through-line Nate didn't name as its own layer, but it's what makes my stack different from every other orchestration story. The same GateEnforcer and TASaaS checks described under Tools apply here, and every decision — scheduling, dispatching, executing, routing — is logged to the append-only, hash-chained Event Ledger. The governance layer isn't an add-on. It's the substrate. The factory cannot run outside of it.

Orchestration — see above. Governed to 20 concurrent bees. Scheduled, dispatched, deconflicted, and audited. End to end.


Why This Matters Beyond My Build

Nate's right that most teams are hand-holding orchestration. They spin up three agents in a notebook and call it a multi-agent system. Then they wonder why it falls apart at scale, why they can't recover from failures, why the cost reports make no sense, why nobody knows what the agents actually did.

The answer isn't a framework. It's infrastructure. Infrastructure that separates concerns — scheduling from dispatching from executing — so each piece can be correct independently. Infrastructure that makes deconfliction a property of the system, not a policy you enforce manually. Infrastructure that maintains an audit trail not as an afterthought but as the primary artifact.

My thesis has always been: they execute, we measure, govern, and optimize. The factory is how I proved it to myself before I prove it to anyone else.

The orchestration layer exists. It's running. And it's just the beginning.


Dave Eichler is the founder of DEIA Solutions and the builder of the ShiftCenter platform ecosystem. He's also available for Principal Engineer and Technical Architect roles in Austin, TX. Find him at linkedin.com/in/daaaave-atx.