I Built a 10-Agent Content Engine in One Session
No dev team. No SaaS platform. Just Claude Code, skill files and a terminal.
Good raw material. No pipeline. No learning loop.
Four disconnected systems. Every piece of content required context-switching between six different tools, manually checking sources, formatting for the platform and remembering what worked last time. The overhead was killing output.
Repo skills: not talking to each other.
RSS harvester: nobody reading the output.
2,700 LinkedIn posts: sitting unused in a spreadsheet.
Autoresearch scorer: never calibrated.
Orchestrator plus ten agents.
All markdown. Zero framework.
Every agent is a markdown file: no code, no YAML config, just structured instructions. So is the orchestration layer. The simplicity is the feature.
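To make that concrete, here is a sketch of what one of those agent files can look like. The file name, headings and wording are illustrative, not the actual files from my repo:

```markdown
<!-- researcher.md (hypothetical agent skill file) -->
# Researcher

## Role
Gather and grade sources for an approved angle. Never draft content.

## Inputs
- The approved angle brief from the Strategist

## Process
1. Run the research modes in parallel.
2. Grade every claim's source: Green, Amber or Red.
3. Compile a research brief with links and grades.

## Output
A research brief, handed to the human for review before the Creator sees it.
```

That is the entire "implementation": a role, inputs, a process, an output. Changing the agent's behaviour means editing prose, not refactoring code.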
Five gates. Non-negotiable.
Supervised autonomy — machines execute, humans govern. The checkpoints aren't safety theatre. They're structural.
1. Angle approval (after the Strategist). The engine proposes the topic and angle. I approve, redirect or kill it.
2. Research review (after the Researcher). I see what the researcher found before the creator writes anything.
3. Draft approval (after the Editor). The draft arrives fact-checked and edited. I read every word before it moves to visual production.
4. Visual direction (after the Art Director). I see the Art Director's brief before the Producer executes.
5. Publish approval (after the Producer). Nothing goes live without my explicit sign-off.
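In the orchestrator file, a gate is nothing more exotic than an instruction to stop and wait. A hypothetical excerpt (the headings and keywords here are illustrative, not my actual orchestrator):

```markdown
<!-- orchestrator.md (hypothetical excerpt) -->
## Gate 3: Draft approval

When the Editor finishes:
1. STOP. Do not invoke the Art Director.
2. Present the full draft to the human.
3. Wait for one of: APPROVE, REDIRECT (with notes), or KILL.
4. Only on APPROVE, hand the draft to the Art Director.
```

The governance lives in the same markdown as everything else, which is why the checkpoints are structural rather than bolted on.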
What surprised me
Parallel research is transformative
Three research modes firing simultaneously cut what used to be an hour of manual source-hunting to minutes. The combination produces research briefs richer than anything I assembled manually.
The fact-checker changed how I think about content
Having every claim automatically graded by source credibility — Green, Amber, Red — means I can see at a glance where my evidence is strong and where I'm leaning on weak sources.
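The grading scheme is simple enough to fit in a brief. A mock-up of how a graded section can read (illustrative format, not the fact-checker's literal output):

```markdown
## Claim grades
- GREEN: claim backed by a primary or peer-reviewed source
- AMBER: claim resting on a single secondary source
- RED: claim with no verifiable source; rewrite or cut before drafting
```

Seeing every claim tagged this way turns "is my evidence strong?" from a gut feeling into a glance.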
Markdown agents are enough
I expected to need a proper orchestration framework. Some kind of state management, error handling, retry logic. The markdown skill files just work. I can modify any agent's behaviour in thirty seconds.
The learning loop changes everything
The gap between a static content system and one that learns from its own output is enormous. After twenty posts the engine will know which hook patterns drive comments, which personas engage most, which posting times work. That's not automation. That's a system that gets smarter.
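One way such a loop can be kept, consistent with the all-markdown approach, is a plain performance log the Strategist reads before proposing the next angle. This is an illustrative sketch with made-up rows, not my actual file:

```markdown
<!-- performance-log.md (hypothetical learning log; values are placeholders) -->
| Post | Hook pattern     | Persona   | Posted | Comments |
|------|------------------|-----------|--------|----------|
| 014  | contrarian claim | founders  | 08:00  | high     |
| 015  | personal story   | operators | 12:30  | low      |
```

The point is not the table format. It is that the system's memory is as inspectable and editable as its instructions.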
No SaaS. No framework.
Just markdown.
The content engine isn't the point.
The pattern is.
One person with the right AI architecture can run a production pipeline that would have required a content team of five or six people — strategist, researchers, writer, editor, designer, analyst.
Not by replacing those roles with worse AI versions. By decomposing the work into specialist agents that each do one thing well, orchestrated through a system with human judgement at every gate.
The skill files are the blueprint. The agents are the workforce. The checkpoints are the governance. The learning loop is the compounding advantage. And the whole thing runs from a terminal.
That's what makes me bionic.
Not replacing the human. Making the human unstoppable.
