Projects

ZenBot: When Three AI Minds Are Better Than One

August 6, 2025
"Should we launch now or wait?" I've heard this question countless times in startup meetings. The engineers want more testing. The sales team sees competitors moving. The CEO is torn between momentum and quality. Everyone has a valid point, but the debate goes in circles. Last month, I faced a challenge: build an API that orchestrates multiple LLMs to solve real problems. But I didn't want to build just another chatbot wrapper. I wanted something that reflected how we actually make decisions by wrestling with competing perspectives until we find wisdom in the tension. That's how ZenBot was born: a decision-making API where three AI agents debate your dilemmas, each bringing a radically different perspective to the table. Here's what I've learned from years of building products: the best decisions don't come from consensus, they come from synthesis. You need someone pushing for speed, someone advocating for caution, and someone wise enough to see the path between them. So I designed ZenBot around three distinct AI personalities:
  • Build Fast: The entrepreneurial hustler who sees opportunities and wants to move NOW
  • Stillness: The mindful philosopher who values patience, reflection, and long-term thinking
  • Zen Judge: The wise mediator who synthesizes both perspectives into actionable wisdom
The magic happens when all three talk to each other about your problem. When you send ZenBot a dilemma, here's what happens behind the scenes:
  1. Build Fast and Stillness both receive your question simultaneously (thanks to Go's goroutines)
  2. They each craft their argument independently, no influence, pure perspective
  3. Once both respond, Zen Judge reads both arguments and synthesizes a resolution
  4. You watch it all unfold in real-time through Server-Sent Events
The whole process takes about 2-3 seconds, but you get status updates throughout:
→ "Build Fast argues"
→ "Stillness reflects"  
→ "Combining results"
→ "Resolution forming"
→ [Streamed response, word by word...]
It feels like watching a debate unfold, which is exactly what I wanted.

Building this wasn't trivial. I had to solve some interesting problems.

Problem 1: How do you call multiple LLMs in parallel?

Go's concurrency primitives made this elegant. I use goroutines to invoke both agents simultaneously, with channels to collect their responses:
// Register both agents on the WaitGroup before launching them
wg.Add(len(agents))

// Launch both agents at once
for i, agent := range agents {
    go callLLMAgent(ctx, w, llmService, agent, req.Message,
        conversationHistory, responseChans[i], errorChan, &wg)
}

// Wait for both to finish
wg.Wait()
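To see the effect, here's a self-contained toy you can actually run. fakeLLM is a stand-in that just sleeps for about two seconds, so none of this is ZenBot's real code, but the WaitGroup fan-out has the same shape, and it also shows how a cancelled context (say, a client disconnect) lets the goroutines stop early:

package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

// fakeLLM simulates a model call that takes about two seconds,
// and stops early if the context is cancelled.
func fakeLLM(ctx context.Context, agent string) string {
    select {
    case <-time.After(2 * time.Second):
        return agent + " has argued its case"
    case <-ctx.Done():
        return agent + " was cancelled"
    }
}

func main() {
    ctx := context.Background()
    agents := []string{"Build Fast", "Stillness"}
    responses := make([]string, len(agents))

    start := time.Now()
    var wg sync.WaitGroup
    wg.Add(len(agents)) // register every agent before launching any goroutine
    for i, agent := range agents {
        go func(i int, agent string) {
            defer wg.Done()
            responses[i] = fakeLLM(ctx, agent)
        }(i, agent)
    }
    wg.Wait() // both ~2s "calls" finish in roughly 2s total, not 4s

    fmt.Println(responses, "elapsed:", time.Since(start).Round(time.Second))
}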
This cuts the response time in half. Instead of waiting 4-6 seconds sequentially, we get both arguments in ~2-3 seconds.

Problem 2: How do you stream responses in real-time?

Server-Sent Events (SSE) let me push updates to the client as they happen. But I took it further. I stream the final response word-by-word, creating this typing effect that makes the AI feel more deliberate:
func streamMessageProgressively(w http.ResponseWriter, message string) {
    flusher, _ := w.(http.Flusher)
    currentContent := ""

    words := strings.Fields(message)
    for _, word := range words {
        currentContent += word + " "
        // Send the whole message so far as a progressive update
        fmt.Fprintf(w, "event: message\ndata: %s\n\n", strings.TrimSpace(currentContent))
        if flusher != nil {
            flusher.Flush() // Push each partial update to the client immediately
        }
        time.Sleep(150 * time.Millisecond) // Human-like pacing
    }
}
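One detail that snippet glosses over: none of those writes reach the client incrementally unless the response is declared as an event stream and flushed after every write, because net/http buffers output otherwise. A minimal setup helper (a sketch assuming the usual net/http import; the exact headers ZenBot sets may differ) looks like this:

// setupSSE marks the response as a Server-Sent Events stream and returns
// the flusher used to push each event to the client immediately.
func setupSSE(w http.ResponseWriter) (http.Flusher, bool) {
    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")
    w.Header().Set("Connection", "keep-alive")

    flusher, ok := w.(http.Flusher)
    return flusher, ok
}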
The real artistry was in designing each agent's prompt. I wanted distinct voices that felt authentic, not generic. Build Fast is that founder friend who's always hustling:
"You embody values of curiosity and opportunity-seeking. Provide a concise argument for rapid, bold action to achieve tangible results. Focus on innovation and opportunity."
This agent pushes for speed, sees market opportunities, and isn't afraid to take calculated risks. Stillness is your thoughtful advisor who makes you pause:
"You embody values of ego-less collaboration, emptiness, and OK-ness. Provide a concise argument for patience, introspection, and balance. Emphasize calmness and long-term harmony."
This agent questions assumptions, values sustainable growth, and reminds you that rushing often creates more problems. Zen Judge is where it gets interesting. I gave this agent a unique personality, professional yet playful, mixing business metrics with Zen paradoxes:
"Synthesize a creative, actionable resolution balancing both perspectives. Weave in exactly three Zen-inspired paradoxes in 'to X is to Y' format (like 'to rush is to pause'). Include travel-inspired metaphors and specific, measurable metrics."
I also told it to use meditative emojis (🌿, 🕉️, 🌄) sparingly for rhythm. The result? Responses that are both actionable and contemplative.

Here's a dilemma I threw at ZenBot during testing: "Should I take new investment or grow my user base so we can bootstrap?"

Within seconds, I watched the three agents debate. Build Fast argued for taking investment to "accelerate growth dramatically" and "seize market opportunities before competitors." Stillness countered with the value of "organic strength and resilience" and how bootstrapping "fosters intimate connection with users." Then Zen Judge synthesized both into this response:
"To rush is to pause 🌿. Begin your journey as a mindful trek: chart a path that honors momentum without forsaking mindfulness. From Build Fast, embrace that 'rapid user growth attracts better investment offers,' targeting a 20% month-over-month user base increase by experimenting swiftly with your value proposition. Yet heed Stillness's wisdom that 'clarity arises from stillness,' and secure only $100K in strategic, mission-aligned investment—just enough to support essential scaling without clouding your vision 🕉️. Grow your user community as a living compass, letting engagement guide your product evolution, while also inviting funding as a companion—quiet, respectful, and in step with your pace. To gain is to yield. To surge is to stay centered. 🌄 As you navigate this shared path, ask gently: what does the trek reveal that a destination cannot?"
Look at that. It gives you specific metrics (20% MoM growth, $100K investment), acknowledges both perspectives, and wraps it in a metaphor. That's exactly the kind of response I wanted: actionable yet thoughtful.

A few things I learned along the way:

1. Prompt engineering is an art form. Getting the agents to sound distinct took iteration. Too generic and they all sound the same. Too extreme and they're caricatures. The sweet spot is giving them clear values and a consistent voice.

2. Go's concurrency model is perfect for LLM orchestration. Goroutines, channels, WaitGroups. These primitives made parallel LLM calls almost trivial. The entire concurrency logic is maybe 30 lines of code.

3. Real-time feedback changes everything. Those SSE status updates ("Build Fast argues...") make the experience feel alive. Without them, you're just staring at a loading spinner wondering what's happening.

4. Keep it simple. The entire system is 531 lines of Go. No frameworks, no ORMs, just standard libraries. SQLite for storage, standard net/http for the server. Sometimes less is more.

For those interested in the technical details, here's the stack:
  • Language: Go (because concurrency)
  • LLMs: OpenAI's GPT-4o mini for Build Fast and Stillness, GPT-4o for Zen Judge (a sketch of how these calls could be wired follows this list)
  • Database: SQLite (keeps it simple, no server needed)
  • Streaming: Server-Sent Events for real-time updates
  • Testing: Standard Go testing with race detection
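To give a sense of how the per-agent prompts plug into those model calls, here's a rough sketch. I'm assuming the widely used github.com/sashabaranov/go-openai client purely for illustration; ZenBot's actual internal/llm package may differ, and the system prompt is abbreviated:

package llm

import (
    "context"

    openai "github.com/sashabaranov/go-openai"
)

// Agent pairs a display name and model with the system prompt that defines its voice.
type Agent struct {
    Name         string
    Model        string
    SystemPrompt string
}

// buildFast is one of the three personas; Stillness and Zen Judge look similar.
var buildFast = Agent{
    Name:         "Build Fast",
    Model:        "gpt-4o-mini",
    SystemPrompt: "You embody values of curiosity and opportunity-seeking. Provide a concise argument for rapid, bold action...",
}

// Ask sends the user's dilemma to one agent and returns its argument.
func Ask(ctx context.Context, client *openai.Client, agent Agent, dilemma string) (string, error) {
    resp, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
        Model: agent.Model,
        Messages: []openai.ChatCompletionMessage{
            {Role: openai.ChatMessageRoleSystem, Content: agent.SystemPrompt},
            {Role: openai.ChatMessageRoleUser, Content: dilemma},
        },
    })
    if err != nil {
        return "", err
    }
    return resp.Choices[0].Message.Content, nil
}

In this sketch, the handler would call Ask for Build Fast and Stillness in parallel, then once more for Zen Judge with both arguments folded into the prompt.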
The folder structure is deliberately simple:
zenbot/
├── cmd/zenbot/          # Main entry point
├── internal/
│   ├── handler/         # HTTP handlers and SSE streaming
│   ├── llm/            # OpenAI client and prompts
│   └── model/          # Data structures
└── db/                 # SQLite database
One thing I'm proud of: the whole system handles errors gracefully. If one LLM times out, you get a clear error message via SSE. If the client disconnects mid-stream, Go's context cancellation cleans up the goroutines. No hanging processes, no leaked resources.

I also wrote tests for the critical paths, particularly around the SSE streaming and concurrent LLM calls. Running go test -race ./... ensures there are no race conditions.

Using ZenBot is straightforward. Here's a curl example:
curl -X POST http://localhost:8080/v1/zenbot \
  -H "Content-Type: application/json" \
  -d '{
    "conversation_id": "my-dilemma",
    "message": "Should we launch the new AI feature now or refine it further?"
  }'
You'll get a stream of events back:
event: status
data: Build Fast argues

event: status
data: Stillness reflects

event: status
data: Resolution forming

event: message
data: message_id: zenbot-1640995200, content: A balanced path forward...
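If you'd rather consume that stream from Go than from curl, here's a minimal client sketch using only the standard library (the endpoint and payload mirror the curl example above; error handling is trimmed):

package main

import (
    "bufio"
    "fmt"
    "net/http"
    "strings"
)

func main() {
    body := `{"conversation_id": "my-dilemma", "message": "Should we launch the new AI feature now or refine it further?"}`
    resp, err := http.Post("http://localhost:8080/v1/zenbot", "application/json", strings.NewReader(body))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // SSE is just newline-delimited text: read it line by line as it arrives.
    scanner := bufio.NewScanner(resp.Body)
    for scanner.Scan() {
        line := scanner.Text()
        switch {
        case strings.HasPrefix(line, "event: "):
            fmt.Println("[" + strings.TrimPrefix(line, "event: ") + "]")
        case strings.HasPrefix(line, "data: "):
            fmt.Println(strings.TrimPrefix(line, "data: "))
        }
    }
}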
The full source code is on GitHub if you want to explore or adapt it for your own use cases.

This was a fun weekend project, but there's so much more I want to explore:
  • Slack Integration: Imagine asking ZenBot directly in your team channel. /zenbot Should we hire a senior engineer or two junior devs? and watching the debate unfold.
  • A Meditative UI: Right now it's just an API. I'm envisioning a beautiful web interface with gradients that shift as the agents debate, maybe some ambient sound. Make the decision-making process feel intentional.
  • Custom Agent Personalities: Let teams define their own agents. Maybe you want "Pragmatic Engineer" vs "Visionary Designer" instead of Build Fast vs Stillness.
  • Better Threading: Right now conversation history is stored as JSON. For production, I'd move to a proper messages table with indexing and cleanup policies.

We're entering an era where AI can do almost anything. But the real value isn't in having one AI give you one answer. It's in orchestrating multiple perspectives to help you think more clearly.

ZenBot doesn't replace your judgment. It enhances it. It forces you to consider both speed and wisdom, action and reflection. It gives you a framework for making better decisions when the path isn't obvious. And honestly? It's just more interesting than a chatbot that regurgitates a single perspective.

Building ZenBot taught me that the future of AI isn't about smarter models. It's about smarter orchestration. How do we make AIs work together? How do we encode cultural values into their dialogue? How do we present their reasoning in ways that feel human? These are design problems as much as technical ones.

The entire codebase is just 531 lines. The real complexity is in the prompts, the orchestration, the user experience. That's where the craft is. If you're building with LLMs, don't just wrap a single API call. Think about how multiple perspectives could create something richer. Think about the dialogue, not just the output.
Want to try ZenBot? Check out the GitHub repo and let me know what dilemmas you throw at it. I'm curious to see what kinds of decisions benefit most from this three-agent approach. Questions or ideas? Reach out! I'd love to hear how you'd adapt this pattern for your own projects.