
A Claude Code Watcher Pattern for Agentic Builds

Drop a markdown file in a folder, an agent picks it up, runs Claude Code, reports back. The pattern that runs everything I am building.

Stack
Bash · inotifywait · Claude Code · Docker

Most “AI agent” tutorials want you to wire up a queue, a worker pool, and a monitoring stack before anything actually runs. I did the opposite. The agent that builds my projects is a 40-line bash script and a folder.

The whole pattern

```bash
#!/usr/bin/env bash
# /workspace/agent-watcher.sh
QUEUE=/shared/queue
DONE=/shared/done
LOGS=/shared/logs

mkdir -p "$QUEUE" "$DONE" "$LOGS"

inotifywait -m -e create,moved_to "$QUEUE" --format '%f' | while read -r FILE; do
  case "$FILE" in *.md) ;; *) continue ;; esac  # ignore stray non-task files
  PATH_IN="$QUEUE/$FILE"
  RUN_ID=$(date +%Y%m%d-%H%M%S)-$(basename "$FILE" .md)
  LOG="$LOGS/$RUN_ID.log"

  echo "[$(date)] starting $RUN_ID" | tee -a "$LOG"
  claude --print --dangerously-skip-permissions "$(cat "$PATH_IN")" 2>&1 | tee -a "$LOG"
  EXIT=${PIPESTATUS[0]}  # exit code of claude itself, not of tee

  mv "$PATH_IN" "$DONE/$RUN_ID.md"
  echo "[$(date)] done $RUN_ID exit=$EXIT" | tee -a "$LOG"
done
```

That’s it. Drop a .md file in /shared/queue/, the watcher pipes it to claude --print, the file moves to /shared/done/, the full transcript lands in /shared/logs/. No Redis, no cron, no message broker.
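Submitting work is just writing a file. A minimal sketch, using the same queue path as the script above; the task wording is made up:

```shell
QUEUE=/shared/queue   # the folder the watcher monitors
mkdir -p "$QUEUE"

# write the task somewhere else first...
cat > /tmp/fix-footer.md <<'EOF'
Fix the footer links on every page so they point at /about/ instead of /about.html.
EOF

# ...then mv it in. mv appears atomically, so inotifywait's moved_to
# event fires once, with the file already complete.
mv /tmp/fix-footer.md "$QUEUE/fix-footer.md"
```

The write-then-mv order matters: redirecting straight into the queue folder can fire the `create` event before the file is fully written, and the watcher would read a half-finished prompt.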

Why it works

A few decisions that paid off:

Files as the queue. A real queue would be more correct. It would also require me to maintain it. A folder watched by inotifywait has zero failure modes I can’t fix with mv and cat.

One agent at a time. No concurrency. The watcher processes files serially. This means a long task blocks the queue, which sounds bad, but in practice the failure mode of two Claude Code processes editing the same file at once is much worse than waiting two extra minutes.
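Serial processing falls out of the single while-read loop, but nothing stops a second copy of the watcher from being started by accident. A hedged sketch of a guard against that, assuming util-linux's flock is available in the container; the lock path and function name are mine, not from the original script:

```shell
LOCK=/tmp/agent-watcher.lock   # hypothetical lock file

run_exclusive() {
  # flock -n fails immediately if another process holds the lock,
  # so a duplicate watcher skips the task instead of doubling up on it
  flock -n "$LOCK" -c "$1" || echo "another run in progress, skipping"
}

run_exclusive 'echo "task ran exclusively"'
```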

Self-healing supervisor. The container that runs the watcher is node:20-slim and the watcher script installs Claude Code on first boot if it’s missing. No custom Docker image to maintain — when the upstream Anthropic CLI updates, the next container restart picks it up.
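The first-boot install can be a one-line guard in the container's entrypoint. A sketch, generalized into a helper so the pattern is visible; `@anthropic-ai/claude-code` is the npm package name as of this writing, but check upstream before relying on it:

```shell
# run an install command only if the CLI is missing --
# idempotent across container restarts
ensure_cli() {
  local cmd=$1; shift
  command -v "$cmd" >/dev/null 2>&1 || "$@"
}

# in the real entrypoint this would be:
#   ensure_cli claude npm install -g @anthropic-ai/claude-code
ensure_cli bash echo "install skipped, bash already present"
```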

Per-project report folders. Every project gets its own /shared/projects/<name>/ directory with CLAUDE.md (project context), prompts/ (templated tasks), reports/ (build reports). Switching projects is a single command on the bot side.
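Scaffolding a new project is a couple of mkdirs. A minimal sketch of the layout described above; the project name is a placeholder:

```shell
PROJECT=my-site                      # placeholder name
BASE=/shared/projects/$PROJECT

mkdir -p "$BASE/prompts" "$BASE/reports"
touch "$BASE/CLAUDE.md"              # project context the agent reads first
```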

What this enables

The whole site you’re reading was built this way. Each prompt was a focused chunk of work — “scaffold the Jekyll directories”, “build the includes library”, “write component CSS for the includes”, “deploy to Docker”, and so on. Roughly a hundred prompts ran through this watcher over the build period. Each one produced a build report I could audit.

The thing nobody mentions when they pitch agentic AI: the agent only works as well as the prompt queue. Garbage in, garbage out, but with much higher token bills. The discipline this pattern enforces — every task is a markdown file you can read before it runs — is the part I’d be reluctant to give up.

What I’d add next

A simple web UI to inspect the queue and reports without SSH. A Telegram bot for sending prompts from my phone (working on this — half-built). A “rerun this report’s task” button for when a build needs to be redone after a fix.
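The rerun button is the easiest of the three, since completed tasks already sit in /shared/done/ under their run IDs. A sketch assuming the layout from the watcher script; the function name is mine:

```shell
# copy a completed task back into the queue under a fresh name,
# so the watcher treats it as brand-new work
rerun() {
  local done_file=$1
  local name
  name=$(basename "$done_file" .md)
  cp "$done_file" "/tmp/rerun-$name.md"
  mv "/tmp/rerun-$name.md" "/shared/queue/rerun-$name.md"  # mv = atomic appearance
}

# usage: rerun /shared/done/20240101-120000-fix-footer.md
```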

Everything else is fine where it is.