Build Log

I Built a 10-Agent
Content Engine
in One Session

No dev team. No SaaS platform. Just Claude Code, skill files and a terminal.

bionic content-engine
01 — The Problem

Good raw material.
No pipeline. No learning loop.

Four disconnected systems. Every piece of content required context-switching between six different tools, manually checking sources, formatting for the platform and remembering what worked last time. The overhead was killing output.

Repo skills: not talking to each other.

RSS harvester: nobody reading the output.

2,700 LinkedIn posts: sitting unused in a spreadsheet.

Autoresearch scorer: never calibrated.

02 — Architecture

Orchestrator plus ten agents.
All markdown. Zero framework.

Every agent is a markdown file. No code. No YAML config. Just structured instructions. The entire orchestration layer is a markdown file. The simplicity is the feature.
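For a sense of what that looks like in practice, here is a minimal sketch of one agent file. The agent name, headings and rules are illustrative, not the actual files:

```markdown
# Agent: Researcher

## Role
Given an approved angle, gather sources and summarise findings
into a research brief. Never write draft copy.

## Inputs
- Angle brief from the Orchestrator

## Outputs
- Research brief: key claims, source links, credibility notes

## Rules
- Flag every claim with its strongest available source
- Hand off to the Fact-Checker before the Creator sees anything
```

Changing an agent's behaviour means editing that file, nothing else.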

03 — Governance

Five gates. Non-negotiable.

Supervised autonomy — machines execute, humans govern. The checkpoints aren't safety theatre. They're structural.

1. Angle approval: the engine proposes the topic and angle. I approve, redirect or kill it.

2. Research review: I see what the researcher found before the creator writes anything.

3. Draft approval: after fact-checking and editing, I read every word before it moves to visual production.

4. Visual direction: I see the Art Director's brief before the Producer executes.

5. Publish approval: nothing goes live without my explicit sign-off.
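In a setup like this, the gates can live in the orchestrator file as plain instructions. A hedged sketch of how that might read (wording illustrative, not the actual file):

```markdown
## Checkpoints (non-negotiable)

Stop and wait for explicit human approval at each gate:

1. Angle approval: before any research starts
2. Research review: before the Creator drafts
3. Draft approval: before visual production
4. Visual direction: before the Producer executes
5. Publish approval: before anything goes live

If approval is not given, do not proceed.
Redirect or kill as instructed.
```

The model reads these as binding instructions, which is what makes the checkpoints structural rather than decorative.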

04 — Surprises

What surprised me

01 — Parallel research is transformative

Three research modes firing simultaneously cut what used to be an hour of manual source-hunting to minutes. The combination produces research briefs richer than anything I assembled manually.

02 — The fact-checker changed how I think about content

Having every claim automatically graded by source credibility — Green, Amber, Red — means I can see at a glance where my evidence is strong and where I'm leaning on weak sources.
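In practice the grading might render as a simple table in the fact-check report. The claims and grades below are invented for illustration:

```markdown
| Claim                               | Best source         | Grade |
|-------------------------------------|---------------------|-------|
| Usage grew year on year             | Peer-reviewed study | Green |
| "Most teams" have adopted the tool  | Industry survey     | Amber |
| A specific percentage from memory   | No source found     | Red   |
```

Red doesn't mean the claim is wrong; it means it can't go out unsupported.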

03 — Markdown agents are enough

I expected to need a proper orchestration framework. Some kind of state management, error handling, retry logic. The markdown skill files just work. I can modify any agent's behaviour in thirty seconds.

04 — The learning loop changes everything

The gap between a static content system and one that learns from its own output is enormous. After twenty posts the engine will know which hook patterns drive comments, which personas engage most, which posting times work. That's not automation. That's a system that gets smarter.
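The playbook that closes the loop could be as simple as another markdown file the engine appends to as each post's results come in. A sketch of the structure (headings illustrative, numbers deliberately omitted):

```markdown
# Playbook (self-updating)

## Hook patterns
- Patterns ranked by comment rate, updated after every post

## Personas
- Which personas engaged, broken down by topic

## Timing
- Posting windows ranked by first-hour engagement

## Last updated
- Appended automatically after each post's metrics land
```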

05 — The Stack

No SaaS. No framework.
Just markdown.

cat stack.yml
Orchestration: Markdown skill files
Agents: Markdown instruction files
Runtime: Claude Code context window
Knowledge base: Structured markdown in Git
Config: YAML + markdown
Learning system: Self-updating playbook
Infrastructure cost: $0/month
Framework: None
06 — What This Means

The content engine isn't the point.
The pattern is.

One person with the right AI architecture can run a production pipeline that would have required a content team of five or six people — strategist, researchers, writer, editor, designer, analyst.

Not by replacing those roles with worse AI versions. By decomposing the work into specialist agents that each do one thing well, orchestrated through a system with human judgement at every gate.

The skill files are the blueprint. The agents are the workforce. The checkpoints are the governance. The learning loop is the compounding advantage. And the whole thing runs from a terminal.

$ bionic --status

That's what makes me bionic.

Not replacing the human. Making the human unstoppable.