How I Built an Autonomous X Reply Agent with GPT-5.4 and Zero API Costs

A solo founder's guide to building an autonomous X reply system using Codex CLI, bird search, Chrome CDP, and the humanizer pattern. Zero API costs via ChatGPT Pro.

Tags: automation, ai-agents, x-twitter, build-in-public

I needed my product (vaos.sh) to show up in every conversation about OpenClaw memory problems on X. Manually finding and replying to tweets was eating 2 hours a day. So I built a system that does it autonomously.

The Stack

  - Codex CLI driving GPT-5.4 (covered by ChatGPT Pro)
  - bird CLI for X search (free, uses browser cookies)
  - Chrome CDP for posting from an already-logged-in session
  - Supabase as the event bus
  - macOS launchd as the scheduler

Total cost beyond my existing ChatGPT Pro subscription: $0.

How It Works

Every 30 minutes, the system:

  1. Picks a random search query from 15 OpenClaw-related topics
  2. Searches X via bird CLI (free, uses browser cookies)
  3. Filters out tweets I've already replied to
  4. Sorts by engagement (likes + replies)
  5. Sends the tweet to GPT-5.4 via Codex with humanizer rules
  6. GPT drafts a reply under 180 characters that sounds like a tired builder texting at 2am
  7. Posts via Chrome CDP with the account already logged in
  8. Logs to Supabase event bus
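Steps 1 through 4 are the mechanical part of the loop. Here's a minimal sketch of that stage in Python; the query list, the `bird` CLI flags, and the tweet JSON shape are all assumptions, so check your own `bird` version before borrowing any of it.

```python
import json
import random
import subprocess

# Hypothetical queries -- the real list has 15 OpenClaw-related topics.
QUERIES = [
    "OpenClaw memory",
    "OpenClaw context window",
    "OpenClaw MEMORY.md",
]

def search_tweets(query: str) -> list[dict]:
    """Shell out to the bird CLI for X search.

    The exact flags are an assumption; check `bird --help` for your install.
    """
    out = subprocess.run(
        ["bird", "search", query, "--json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def pick_candidates(tweets: list[dict], replied_ids: set[str],
                    top_n: int = 5) -> list[dict]:
    """Drop tweets we've already replied to, then rank by engagement."""
    fresh = [t for t in tweets if t["id"] not in replied_ids]
    fresh.sort(key=lambda t: t["likes"] + t["replies"], reverse=True)
    return fresh[:top_n]

def pick_query() -> str:
    """Step 1: a random topic per run keeps the account from looking scripted."""
    return random.choice(QUERIES)
```

The already-replied set is the important bit: without it the agent will happily reply to the same viral tweet every 30 minutes.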

The Humanizer Prompt

The key insight: LLM-generated replies sound like LLM-generated replies. The humanizer rules fix this:
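The actual prompt isn't reproduced here, but a sketch of the kind of constraints involved looks like this, paired with a guard that rejects drafts over the 180-character budget (the banned-opener list is illustrative, not exhaustive):

```python
# A sketch of humanizer-style rules -- the real prompt is an assumption here.
HUMANIZER_RULES = """\
Write like a tired builder texting at 2am.
Lowercase is fine. No emojis, no hashtags, no exclamation marks.
Never open with "Great point" or restate the tweet back at the author.
One concrete observation, max two sentences, under 180 characters.
"""

def is_postable(reply: str, limit: int = 180) -> bool:
    """Reject drafts that blow the length budget or open like an LLM."""
    banned_openers = ("great point", "as an ai", "i think that")
    text = reply.strip()
    return len(text) <= limit and not text.lower().startswith(banned_openers)
```

Running the check after generation matters: it's cheaper to re-prompt a failed draft than to delete a robotic reply after it's live.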

Example output: "Yeah, stuffing everything into MEMORY.md is a dead end. Context bloats, the agent gets dumb, and you spend half your time re-explaining the repo."

That reads like a human. Because the prompt told the LLM to write like one.

Results

Day 1: The system found and replied to a tweet with 127 likes and 32 replies. The reply was contextually relevant, under 160 characters, and sounded natural.

The system runs via macOS launchd (like cron but persistent). It survives reboots. No server needed.
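A launchd job on a 30-minute interval looks roughly like this; the label and script path are placeholders for your own setup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Label and script path are placeholders; adjust for your setup. -->
  <key>Label</key>
  <string>com.example.reply-agent</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/zsh</string>
    <string>-c</string>
    <string>/path/to/reply-agent.sh</string>
  </array>
  <key>StartInterval</key>
  <integer>1800</integer><!-- every 30 minutes -->
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```

Drop the file in `~/Library/LaunchAgents/` and load it with `launchctl load ~/Library/LaunchAgents/com.example.reply-agent.plist`. Unlike cron, launchd re-registers the job after a reboot, which is what makes the setup survive without a server.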

What I'd Do Differently

The code is part of the VAOS infrastructure at vaos.sh. The agent hosting platform gives your AI persistent memory and behavioral corrections — the same tech that powers this reply system.


Follow the build in public journey: @StraughterG