<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DUMB DEV Community</title>
    <description>The most recent home feed on DUMB DEV Community.</description>
    <link>https://dumb.dev.to</link>
    <atom:link rel="self" type="application/rss+xml" href="https://dumb.dev.to/feed"/>
    <language>en</language>
    <item>
      <title>We didn’t have a coding problem - We had a “where do I even start?” problem</title>
      <dc:creator>Bogdan Varlamov</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:38:19 +0000</pubDate>
      <link>https://dumb.dev.to/bgdnvarlamov/we-didnt-have-a-coding-problem-we-had-a-where-do-i-even-start-problem-4ap2</link>
      <guid>https://dumb.dev.to/bgdnvarlamov/we-didnt-have-a-coding-problem-we-had-a-where-do-i-even-start-problem-4ap2</guid>
      <description>&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;A lot of our tasks started the same way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;unclear Jira description
&lt;/li&gt;
&lt;li&gt;20–30 minutes just figuring out what is actually required
&lt;/li&gt;
&lt;li&gt;then jumping around the codebase trying to find the right entry point
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the area was unfamiliar, easily &lt;strong&gt;1–2 hours gone before writing anything.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Or you just go ask someone who knows.&lt;/p&gt;

&lt;p&gt;Which is fine, but it doesn’t scale.&lt;/p&gt;

&lt;p&gt;And this wasn’t rare.&lt;br&gt;&lt;br&gt;
This was pretty normal.&lt;/p&gt;




&lt;h2&gt;
  
  
  What was wrong
&lt;/h2&gt;

&lt;p&gt;We kept paying for the same thing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;understanding context
&lt;/li&gt;
&lt;li&gt;figuring out patterns
&lt;/li&gt;
&lt;li&gt;re-learning how things are done
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing complex, just repeated work.&lt;/p&gt;

&lt;p&gt;And it was heavily dependent on who already knew that part of the system.&lt;/p&gt;




&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;We didn’t try to “generate code faster”.&lt;/p&gt;

&lt;p&gt;We focused on:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;understanding + execution as a single flow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not magic, just structure.&lt;/p&gt;




&lt;h3&gt;
  
  
  1. Give the agent real system context (agent.md)
&lt;/h3&gt;

&lt;p&gt;A structured entry point (~500 lines):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;architecture and patterns
&lt;/li&gt;
&lt;li&gt;conventions (code, testing, logging)
&lt;/li&gt;
&lt;li&gt;how to extend the system
&lt;/li&gt;
&lt;li&gt;risky areas and common pitfalls
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This removes the need to “figure things out from scratch”.&lt;/p&gt;
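
An entry point like this can be sketched as a short outline. The section names below are illustrative, not the author's actual agent.md:

```markdown
# agent.md — system context for the coding agent (illustrative outline)

## Architecture and patterns
- Services, their boundaries, and how a request flows between them

## Conventions
- Code style, test layout, logging format, error handling

## How to extend the system
- Where new endpoints/modules go; which base classes and helpers to reuse

## Risky areas and common pitfalls
- Modules with hidden coupling; migration gotchas; known footguns
```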




&lt;h3&gt;
  
  
  2. Build a custom prompt that actually thinks
&lt;/h3&gt;

&lt;p&gt;Not just “generate code”.&lt;/p&gt;

&lt;p&gt;The prompt does this in steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand the task (type, scope, intent)
&lt;/li&gt;
&lt;li&gt;Estimate complexity
&lt;/li&gt;
&lt;li&gt;Decide if it should be solved by the agent
&lt;/li&gt;
&lt;li&gt;Find relevant parts of the codebase
&lt;/li&gt;
&lt;li&gt;Propose implementation options
&lt;/li&gt;
&lt;li&gt;Build a step-by-step plan
&lt;/li&gt;
&lt;li&gt;Highlight risks
&lt;/li&gt;
&lt;li&gt;Generate a verification checklist
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the task is too complex, it splits it into smaller parts.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Use the agent where it makes sense
&lt;/h3&gt;

&lt;p&gt;If complexity is low or medium, the prompt explicitly suggests using the agent.&lt;/p&gt;

&lt;p&gt;At this point the agent already has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;system context
&lt;/li&gt;
&lt;li&gt;correct patterns
&lt;/li&gt;
&lt;li&gt;a clear plan
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So it can generate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;code aligned with the repository
&lt;/li&gt;
&lt;li&gt;tests for new functionality
&lt;/li&gt;
&lt;li&gt;regression tests for existing behavior
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not perfect, but good enough to remove most of the routine work.&lt;/p&gt;




&lt;h2&gt;
  
  
  Result
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Tasks estimated at 6–8 hours often took 2–3 hours
&lt;/li&gt;
&lt;li&gt;Simple tasks went from ~1 hour to minutes
&lt;/li&gt;
&lt;li&gt;Less need to “figure things out” before starting
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This works because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;context is given upfront
&lt;/li&gt;
&lt;li&gt;implementation is partially handled
&lt;/li&gt;
&lt;li&gt;common mistakes are reduced
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But the main change:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;less mental overhead
&lt;/li&gt;
&lt;li&gt;faster start
&lt;/li&gt;
&lt;li&gt;fewer interruptions
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Most teams don’t have a coding problem.&lt;/p&gt;

&lt;p&gt;They have a &lt;strong&gt;“time-to-understanding” problem.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If every task starts with digging through the system&lt;br&gt;&lt;br&gt;
and re-learning patterns, that’s the real bottleneck.&lt;/p&gt;

&lt;p&gt;AI helps when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it understands your system
&lt;/li&gt;
&lt;li&gt;and is used selectively
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I see this a lot:&lt;/p&gt;

&lt;p&gt;Teams either don’t use AI,&lt;br&gt;
or try to use it everywhere.&lt;/p&gt;

&lt;p&gt;Neither works well.&lt;/p&gt;




&lt;p&gt;If you’ve run into the same pattern on your team, I’m curious how you approach it.&lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>ai</category>
      <category>productivity</category>
      <category>career</category>
    </item>
    <item>
      <title>The Seven Engineering Problems That Make Real-Time Enterprise Sync Almost Impossible</title>
      <dc:creator>Ruben Burdin</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:36:30 +0000</pubDate>
      <link>https://dumb.dev.to/ruben-burdin/the-seven-engineering-problems-that-make-real-time-enterprise-sync-almost-impossible-9hf</link>
      <guid>https://dumb.dev.to/ruben-burdin/the-seven-engineering-problems-that-make-real-time-enterprise-sync-almost-impossible-9hf</guid>
      <description>&lt;p&gt;I spent 18 months trying to make two databases agree with each other. Not eventually. Not within 15 minutes. In real time, bidirectionally, without losing data.&lt;/p&gt;

&lt;p&gt;The first version crashed after 10,000 records. The second version handled the volume but corrupted fields when both systems wrote to the same record within the same second. The third version solved conflicts but broke every time Salesforce added a custom field. I threw out all three and started over.&lt;/p&gt;

&lt;p&gt;That was 2022. I had a business degree, six months of self-taught programming, and a problem I could not stop thinking about: why is it so difficult to keep a CRM and a database synchronized in real time?&lt;/p&gt;

&lt;p&gt;Three years and one Y Combinator batch later, &lt;a href="https://stacksync.com" rel="noopener noreferrer"&gt;Stacksync&lt;/a&gt; syncs millions of records across 200+ enterprise systems with sub-second latency. I want to explain why this problem is as hard as it is, because most engineering teams underestimate it until they're six months into a failing project.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Polling Is a Lie You Tell Yourself
&lt;/h2&gt;

&lt;p&gt;The first approach every team tries is polling. You set up a cron job that checks Salesforce for changes every five minutes. Maybe every minute if you're ambitious. You pull the delta, write it to Postgres, and call it done.&lt;/p&gt;

&lt;p&gt;It works on the demo. It falls apart in production.&lt;/p&gt;

&lt;p&gt;Polling has a floor. You cannot poll faster than the API allows, and enterprise APIs are not built for high-frequency reads. Salesforce enforces a daily API call limit that depends on your license tier and the number of seats in your org. A mid-size company with 100 users gets roughly 100,000 API calls per day. That sounds like a lot until you realize that a single SOQL query checking for updated records across five objects consumes five calls. Run that query every minute and you've burned 7,200 calls by end of day on a single sync job. Add a second system, add more objects, add any complexity at all, and you hit the ceiling before lunch.&lt;/p&gt;
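
The arithmetic above can be checked directly. The figures are the article's illustrative numbers for a 100-user org, not official Salesforce limits:

```python
# Illustrative quota arithmetic (the article's example figures,
# not official Salesforce limits).
DAILY_LIMIT = 100_000    # calls/day for the example 100-user org
CALLS_PER_POLL = 5       # one SOQL query per object, five objects
POLLS_PER_DAY = 60 * 24  # polling every minute

calls_used = CALLS_PER_POLL * POLLS_PER_DAY
print(calls_used)                # 7200 calls burned by one sync job
print(calls_used / DAILY_LIMIT)  # ~7% of daily quota for a single job
```

Add a second system or more objects and the multiplication quickly exhausts the budget, which is the ceiling the paragraph describes.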

&lt;p&gt;The deeper problem with polling is temporal. Between polls, the world changes. A sales rep updates an opportunity at 10:01:14. Your poll runs at 10:01:00 and again at 10:02:00. For 46 seconds, your database is wrong. If anything downstream reads that record during those 46 seconds, a customer portal shows stale pricing, an internal tool triggers the wrong workflow, a report goes out with yesterday's numbers.&lt;/p&gt;

&lt;p&gt;Forty-six seconds sounds minor. Multiply it by every record, every object, every connected system, and you have a data infrastructure that is never fully consistent. You just hope the inconsistency window is small enough that nobody notices.&lt;/p&gt;

&lt;p&gt;We noticed.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Change Data Capture Sounds Simple Until You Build It
&lt;/h2&gt;

&lt;p&gt;The alternative to polling is event-driven architecture. Instead of asking "what changed?" on a timer, you listen for change events as they happen. Salesforce offers Change Data Capture. HubSpot has webhooks. NetSuite has SuiteScript triggers.&lt;/p&gt;

&lt;p&gt;Each implementation is different. Each has its own delivery guarantees, event formats, retry logic, and failure modes.&lt;/p&gt;

&lt;p&gt;Salesforce CDC publishes events to a streaming API channel with a 72-hour retention window. If your listener goes down for longer than that, events are gone. No replay. You need to detect the gap, fall back to a full or incremental poll to reconstruct the missing window, then resume streaming without duplicating records or missing the transition point. That recovery logic alone took our team weeks to get right, and it still needed refinement after edge cases surfaced in production.&lt;/p&gt;
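
The core of that recovery decision can be sketched in a few lines. The function name and the policy split are illustrative, not Stacksync's actual code:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(hours=72)  # Salesforce CDC event retention window

def recovery_action(last_event_seen: datetime, now: datetime) -> str:
    """Decide how to resume after a listener outage (illustrative sketch).

    Inside the retention window the stream can be replayed from the last
    replay ID. Past it, the events are gone: reconcile the missed window
    with a poll, then re-subscribe without duplicating records.
    """
    if now - last_event_seen <= RETENTION:
        return "replay_stream"     # request events since the last replay ID
    return "poll_then_resume"      # incremental poll over the gap first

now = datetime(2024, 1, 10, 12, 0)
print(recovery_action(now - timedelta(hours=5), now))    # replay_stream
print(recovery_action(now - timedelta(hours=100), now))  # poll_then_resume
```

The hard part the article alludes to is not this decision but the transition point: resuming the stream exactly where the poll left off, with neither duplicates nor a gap.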

&lt;p&gt;HubSpot webhooks fire on property changes but batch them. You receive a payload containing multiple changes to multiple records, and the ordering within that batch is not guaranteed. If record A was updated before record B, you may receive B first. For independent records, ordering does not matter. For related records where a parent update must land before a child reference, out-of-order delivery corrupts your foreign key relationships.&lt;/p&gt;

&lt;p&gt;NetSuite SuiteScript triggers fire synchronously inside the transaction. If your sync logic is too slow, the user's save operation in NetSuite hangs. You are directly on the critical path of another vendor's product. Time out, and the user sees an error they cannot diagnose.&lt;/p&gt;

&lt;p&gt;Every connector demands its own CDC implementation, its own retry semantics, and its own failure recovery path. There is no universal standard. We build and maintain a separate ingestion pipeline for each of the 200+ systems Stacksync connects to.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Bidirectional Sync Is a Distributed Consensus Problem
&lt;/h2&gt;

&lt;p&gt;One-way sync is a pipeline. Data flows from A to B. If B receives a record it already has, you overwrite it. If B receives a new record, you insert it. The logic fits in a page of pseudocode.&lt;/p&gt;

&lt;p&gt;Two-way sync is a distributed systems problem. Both A and B can write to the same record at the same time. Neither system knows the other system is also making changes. There is no shared clock, no shared transaction log, no coordinator sitting between them enforcing order.&lt;/p&gt;

&lt;p&gt;This is a variant of the same problem that databases solved decades ago with distributed consensus protocols like Paxos and Raft. The difference is that you do not control either endpoint. Salesforce is not going to implement your consensus protocol. Neither is Postgres. You are synchronizing two sovereign systems that have no awareness of each other.&lt;/p&gt;

&lt;p&gt;The naive solution is "last write wins." Whichever timestamp is later takes precedence. This fails for three reasons.&lt;/p&gt;

&lt;p&gt;First, clocks drift. Salesforce's server clock and your database clock are not perfectly synchronized. NTP reduces drift to milliseconds, but milliseconds matter when two writes happen within the same second.&lt;/p&gt;

&lt;p&gt;Second, granularity varies. Salesforce timestamps record-level changes. If a rep updates the phone number at 10:01:00 and your system updates the email at 10:01:00, last-write-wins at the record level discards one of those changes. You need field-level conflict detection, which means tracking individual field timestamps across both systems and merging them independently. The data model for this alone is significant.&lt;/p&gt;

&lt;p&gt;Third, intent matters. Some fields should always defer to the CRM because the sales team owns them. Other fields should always defer to the database because an automated pipeline owns them. A blanket conflict resolution strategy treats all fields the same, which is wrong for every real-world use case. You need per-field, per-object, per-direction conflict policies that the customer can configure without writing code.&lt;/p&gt;
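
The per-field ownership idea can be sketched as a merge function. Names and the default rule are illustrative, not Stacksync's engine; a real implementation would also track per-field timestamps for fields neither side owns:

```python
def merge_record(crm: dict, db: dict, policy: dict) -> dict:
    """Per-field ownership merge (illustrative sketch).

    `policy` maps field name -> "crm" or "db", the system that owns it.
    Each field defers to its owner instead of record-level last-write-wins.
    """
    merged = {}
    for field in sorted(crm.keys() | db.keys()):
        owner = policy.get(field, "crm")   # assumed default: CRM owns the field
        source = crm if owner == "crm" else db
        merged[field] = source.get(field)
    return merged

crm = {"phone": "555-0100", "email": "old@example.com"}
db = {"phone": "555-9999", "email": "new@example.com"}
policy = {"phone": "crm", "email": "db"}   # sales owns phone, pipeline owns email

merged = merge_record(crm, db, policy)
print(merged["phone"])  # 555-0100 (CRM wins)
print(merged["email"])  # new@example.com (database wins)
```

Note that record-level last-write-wins would have discarded one of these two updates entirely, which is exactly the failure described above.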

&lt;p&gt;We spent four months on our conflict resolution engine before a single customer used it. It is the hardest part of the system and the least visible.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Schema Is a Moving Target
&lt;/h2&gt;

&lt;p&gt;Enterprise systems do not have fixed schemas. Salesforce admins add custom fields weekly. HubSpot users create custom properties for every new campaign. NetSuite consultants build custom record types for industry-specific workflows.&lt;/p&gt;

&lt;p&gt;When a new field appears in the source system, your sync layer has three options: ignore it, fail, or adapt. Ignoring it means data loss. Failing means downtime. Adapting means your system must detect the schema change, determine its type and constraints, create the corresponding column in the target database, backfill historical values for that field, and resume syncing, all without interrupting the ongoing sync of every other field.&lt;/p&gt;

&lt;p&gt;This is schema evolution, and it has to happen automatically because you cannot ask a customer to manually adjust their database schema every time a Salesforce admin adds a picklist field.&lt;/p&gt;
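
The add-a-column half of that adaptation step can be sketched as a type mapping plus a generated migration. The type names and the nullable-first rule are assumptions for illustration, not Stacksync's actual migration engine:

```python
# Illustrative mapping from a detected source field type to a Postgres
# column type (assumed mapping, not Stacksync's actual engine).
TYPE_MAP = {
    "string": "TEXT",
    "double": "DOUBLE PRECISION",
    "boolean": "BOOLEAN",
    "datetime": "TIMESTAMPTZ",
}

def migration_for_new_field(table: str, name: str, source_type: str) -> str:
    # Add the column as nullable: the ALTER then needs only a brief
    # metadata lock, and the historical backfill can run afterwards
    # without blocking live writes.
    col_type = TYPE_MAP.get(source_type, "TEXT")  # unknown types degrade to TEXT
    return f'ALTER TABLE {table} ADD COLUMN IF NOT EXISTS "{name}" {col_type}'

print(migration_for_new_field("accounts", "Discount__c", "double"))
# ALTER TABLE accounts ADD COLUMN IF NOT EXISTS "Discount__c" DOUBLE PRECISION
```

The sketch covers only the easy case (a new field); renames, type changes, and deletions need the policy decisions the next paragraph describes.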

&lt;p&gt;The edge cases are where it gets brutal. What happens when a field is renamed? Your sync layer mapped it by its API name, which in Salesforce is immutable, but in HubSpot it can change. What happens when a field type changes from text to number? Your Postgres column is a VARCHAR and the source is now sending integers. What happens when a field is deleted? Do you delete the column and lose historical data, or mark it deprecated and keep it?&lt;/p&gt;

&lt;p&gt;Every schema change is a migration event that has to execute live, without locks, on a table that is actively receiving writes from both directions. We handle thousands of these per week across our customer base.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Rate Limits Are Adversarial by Design
&lt;/h2&gt;

&lt;p&gt;Enterprise SaaS vendors set API rate limits to protect their infrastructure from misbehaving integrations. It's reasonable from their perspective and adversarial from yours.&lt;/p&gt;

&lt;p&gt;Salesforce enforces both daily and concurrent request limits. HubSpot throttles at 100 requests per 10 seconds for OAuth apps. NetSuite's concurrency model limits you to a handful of simultaneous connections depending on the customer's license.&lt;/p&gt;

&lt;p&gt;Your sync engine needs to be the most efficient possible consumer of these APIs. Every unnecessary call is a call your customer cannot use for something else. Every burst that triggers throttling creates a backlog that degrades your latency SLA.&lt;/p&gt;

&lt;p&gt;We implemented adaptive rate management that monitors remaining quota in real time and adjusts throughput dynamically. When quota gets low, the engine shifts to larger batch sizes with fewer calls. When quota is abundant, it runs smaller, more frequent batches for lower latency. The system optimizes continuously for the tradeoff between freshness and quota consumption, and the optimal point shifts throughout the day as the customer's other integrations compete for the same pool.&lt;/p&gt;
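
A toy version of that quota-aware policy, with thresholds and batch sizes that are purely illustrative rather than the tuner described above:

```python
def plan_batches(remaining_quota: int, daily_quota: int, pending: int,
                 max_batch: int = 200) -> int:
    """Pick a batch size from remaining API quota (illustrative policy).

    Low quota  -> large batches: fewer calls, higher latency.
    Ample quota -> small batches: more calls, fresher data.
    """
    headroom = remaining_quota / daily_quota
    if headroom < 0.1:
        return max_batch               # conserve calls aggressively
    if headroom < 0.5:
        return max(1, max_batch // 2)  # middle ground
    return max(1, min(pending, 25))    # latency-optimized small batches

print(plan_batches(5_000, 100_000, pending=1_000))   # 200: quota nearly gone
print(plan_batches(90_000, 100_000, pending=1_000))  # 25: optimize for freshness
```

The real system, as described, re-evaluates this tradeoff continuously, since other integrations drain the same quota pool throughout the day.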

&lt;p&gt;This is invisible to the customer. They see sub-second sync. Underneath, the engine is playing a real-time resource allocation game against constraints it does not control.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Ordering Across Distributed Systems
&lt;/h2&gt;

&lt;p&gt;When you sync a Salesforce Account and its child Contacts to Postgres, the Account must exist before the Contacts reference it. If the Contact arrives first and the Account does not yet exist in Postgres, the foreign key insert fails.&lt;/p&gt;

&lt;p&gt;In a single-system database, this is solved by transactions. You wrap both inserts in a transaction and they either both succeed or both roll back. Across systems, there is no transaction boundary. Events arrive when they arrive. The Account creation event might be delayed by network latency, CDC channel ordering, or a retry cycle, while the Contact creation event arrives immediately.&lt;/p&gt;

&lt;p&gt;You need a dependency graph that understands the relationships between objects, holds child records until their parents exist, and processes them in the correct topological order. This graph has to account for circular dependencies (Object A references Object B which references Object A), self-references (an Account with a parent Account), and polymorphic lookups (a field that can reference different object types depending on the record).&lt;/p&gt;

&lt;p&gt;We process these dependency graphs in real time as events arrive, reordering them on the fly without buffering longer than necessary. Getting this wrong means either data loss (dropping events that cannot be ordered) or data corruption (inserting records with dangling references).&lt;/p&gt;
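
The "hold the child until its parent exists" mechanism can be sketched in a few lines. This toy version handles only simple parent references; as noted above, a real engine must also deal with cycles, self-references, and polymorphic lookups:

```python
from collections import defaultdict

def apply_in_order(events):
    """Apply (record_id, parent_id) events, holding children until
    their parent has been applied (illustrative sketch).
    """
    applied, waiting, order = set(), defaultdict(list), []

    def apply(rid):
        applied.add(rid)
        order.append(rid)
        for child in waiting.pop(rid, []):  # flush children held on this parent
            apply(child)

    for rid, parent in events:
        if parent is None or parent in applied:
            apply(rid)
        else:
            waiting[parent].append(rid)     # hold until the parent arrives
    return order

# A Contact event arrives before its Account; it is held, then flushed.
print(apply_in_order([("contact-1", "acct-1"), ("acct-1", None)]))
# ['acct-1', 'contact-1']
```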

&lt;h2&gt;
  
  
  7. The Compounding Effect
&lt;/h2&gt;

&lt;p&gt;Each of these problems is hard in isolation. Together, they multiply.&lt;/p&gt;

&lt;p&gt;A schema change triggers a migration that temporarily increases API calls, which hits a rate limit, which creates a backlog, which delays events, which breaks ordering guarantees, which causes a child record to arrive before its parent, which triggers a dependency hold, which extends the latency, which means the conflict resolution window widens, which means more conflicts need to be resolved, which means more writes, which consumes more API quota.&lt;/p&gt;

&lt;p&gt;One perturbation propagates through every layer. The system must absorb these cascades without data loss, without latency spikes visible to the customer, and without human intervention.&lt;/p&gt;

&lt;p&gt;This is why most internal sync projects fail. The team builds something that works for the first use case, in the first month, with the first 10,000 records. Then the schema changes. Then the volume grows. Then a second system is added. Then an edge case creates a cascade, and the debugging obliges you to trace events across three systems, two CDC channels, a conflict resolution log, and a dependency graph.&lt;/p&gt;

&lt;p&gt;Most teams give up and switch to batch ETL. They accept 15-minute delays because the alternative is building a distributed systems engine that very few engineering teams have the time, budget, or expertise to maintain.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;Synchronizing enterprise systems in real time is a distributed systems problem wearing a SaaS costume. It looks like a simple data pipeline until you peel back the first layer and find consensus algorithms, schema evolution, adversarial rate limiting, and causal ordering all compressed into a single system that has to run 24/7 without losing a single record.&lt;/p&gt;

&lt;p&gt;We have been building Stacksync for three years. The problem has not diminished. Every new connector, every new edge case, every new scale milestone reveals another layer of complexity that was invisible at the previous level.&lt;/p&gt;

&lt;p&gt;If your team is considering building this in-house, my honest recommendation is to reflect carefully on the total cost. Not the cost of the first version, which will work. The cost of the 40th version, which is the one that actually survives production at scale.&lt;/p&gt;

&lt;p&gt;The engineering is possible. I know because we did it. The question is whether that engineering is the best use of your team's time when their actual job is building your product, not maintaining the plumbing underneath it.&lt;/p&gt;

&lt;p&gt;That distinction tends to clarify the decision quickly.&lt;/p&gt;

</description>
      <category>database</category>
      <category>dataengineering</category>
      <category>startup</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>We Added a /refer Page to TIZZLE: A Cleaner Referral Flow That Actually Converts</title>
      <dc:creator>Xander Taylor</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:34:51 +0000</pubDate>
      <link>https://dumb.dev.to/xandertaylor/we-added-a-refer-page-to-tizzle-a-cleaner-referral-flow-that-actually-converts-12hi</link>
      <guid>https://dumb.dev.to/xandertaylor/we-added-a-refer-page-to-tizzle-a-cleaner-referral-flow-that-actually-converts-12hi</guid>
      <description>&lt;h1&gt;
  
  
  We Added a /refer Page to TIZZLE: A Cleaner Referral Flow That Actually Converts
&lt;/h1&gt;

&lt;p&gt;We just shipped a new page on the TIZZLE site: &lt;a href="https://tizzle.org/refer" rel="noopener noreferrer"&gt;&lt;code&gt;/refer&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The goal was simple:&lt;/p&gt;

&lt;p&gt;make referrals frictionless, make the payout model clear up front, and remove the back-and-forth that usually slows intros down.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why we added it
&lt;/h2&gt;

&lt;p&gt;A lot of referrals were already happening through DMs and email.&lt;/p&gt;

&lt;p&gt;That works, but it creates avoidable problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;missing client details&lt;/li&gt;
&lt;li&gt;unclear expectations on commission&lt;/li&gt;
&lt;li&gt;inconsistent handoff process&lt;/li&gt;
&lt;li&gt;no obvious place to send people who want to refer someone&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So we moved that flow into a dedicated page with one clear CTA and a short submit process.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the new &lt;code&gt;/refer&lt;/code&gt; page does
&lt;/h2&gt;

&lt;p&gt;The page explains the referral program in plain language:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;earn &lt;strong&gt;10%&lt;/strong&gt; of first project value&lt;/li&gt;
&lt;li&gt;no cap on number of referrals&lt;/li&gt;
&lt;li&gt;open to developers, agencies, consultants, clients, and friends&lt;/li&gt;
&lt;li&gt;payout after the referred client’s first payment clears&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then it gives a direct submit form for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;referrer details&lt;/li&gt;
&lt;li&gt;client details&lt;/li&gt;
&lt;li&gt;service type needed&lt;/li&gt;
&lt;li&gt;optional project context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No account creation.&lt;br&gt;
No long onboarding.&lt;br&gt;
No extra steps after submit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The conversion decisions behind it
&lt;/h2&gt;

&lt;p&gt;We kept the page focused on one journey:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;understand the reward quickly&lt;/li&gt;
&lt;li&gt;see concrete payout examples&lt;/li&gt;
&lt;li&gt;trust the process through FAQ&lt;/li&gt;
&lt;li&gt;submit in under two minutes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A few implementation choices mattered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clear hero copy with immediate value proposition&lt;/li&gt;
&lt;li&gt;reward banner anchored around the &lt;code&gt;10%&lt;/code&gt; model&lt;/li&gt;
&lt;li&gt;example payout figures tied to actual service pricing&lt;/li&gt;
&lt;li&gt;FAQ section to remove uncertainty before form completion&lt;/li&gt;
&lt;li&gt;minimal form fields to reduce abandonment&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why this matters for service businesses
&lt;/h2&gt;

&lt;p&gt;If referrals are part of your pipeline, treating them like a side process costs you.&lt;/p&gt;

&lt;p&gt;A dedicated referral page makes the channel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;easier to share&lt;/li&gt;
&lt;li&gt;easier to measure&lt;/li&gt;
&lt;li&gt;easier to scale without manual coordination&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also improves trust because terms are visible before anyone submits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;Most sites optimize for leads from search and ads, but under-optimize referrals even when referrals close faster.&lt;/p&gt;

&lt;p&gt;The new &lt;a href="https://tizzle.org/refer" rel="noopener noreferrer"&gt;&lt;code&gt;/refer&lt;/code&gt; page&lt;/a&gt; is our attempt to fix that with a cleaner, lower-friction system.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://tizzle.org" rel="noopener noreferrer"&gt;Explore the full site at tizzle.org&lt;/a&gt;&lt;/p&gt;

</description>
      <category>product</category>
      <category>showdev</category>
      <category>ux</category>
      <category>tizzle</category>
    </item>
    <item>
      <title>Write Once, Publish Everywhere: Build a Multi-Platform Dev Blog Pipeline with GitHub Actions</title>
      <dc:creator>Akhilesh Pothuri</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:34:27 +0000</pubDate>
      <link>https://dumb.dev.to/akhileshpothuri/write-once-publish-everywhere-build-a-multi-platform-dev-blog-pipeline-with-github-actions-5ai2</link>
      <guid>https://dumb.dev.to/akhileshpothuri/write-once-publish-everywhere-build-a-multi-platform-dev-blog-pipeline-with-github-actions-5ai2</guid>
      <description>&lt;h1&gt;
  
  
  Zero to Published: Setting Up a Multi-Platform Dev Blog Pipeline
&lt;/h1&gt;

&lt;h3&gt;
  
  
  How to write once in Markdown and automatically publish to Dev.to, Hashnode, Medium, and your personal site without losing your sanity or your SEO
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;You spent four hours writing the perfect technical post, hit publish on Dev.to, then remembered you also need to post it to Hashnode. And Medium. And your personal blog. By the time you're done reformatting code blocks for the third platform, you've completely massacred your article.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's the uncomfortable math: developers who cross-post manually often spend roughly 30–45 minutes &lt;em&gt;per platform&lt;/em&gt; adjusting formatting, re-uploading images, and fixing broken syntax highlighting. That can add up to two or three hours of busywork for every single article—time you could spend actually writing.&lt;/p&gt;

&lt;p&gt;What if you could write once in Markdown, push to GitHub, and watch your words automatically appear everywhere your readers hang out—with proper formatting, canonical URLs that protect your SEO, and zero copy-paste gymnastics?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;By the end of this guide, you'll have a working GitHub Actions pipeline that publishes to four platforms simultaneously, and you'll never manually cross-post again.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Your Blog Deserves More Than One Home
&lt;/h2&gt;

&lt;p&gt;Picture this: you spend four hours crafting the perfect tutorial on async/await patterns. You publish it on your personal blog, feel accomplished, then remember you should probably post it on Dev.to. And Medium. And maybe Hashnode. By the time you've reformatted the code blocks three times and fixed the broken images twice, it's midnight and you're questioning every life choice that led you here.&lt;/p&gt;

&lt;p&gt;You're not alone. Developer attention is scattered across a dozen platforms, and no single platform has clearly won broad developer mindshare. Your audience might discover you on Dev.to during their lunch break, stumble across your Medium article from a Google search, or find your personal blog through a conference talk. Missing any of these touchpoints means missing readers — and potential opportunities.&lt;/p&gt;

&lt;p&gt;That 30–45 minutes per platform, per article is not writing time — it's &lt;em&gt;reformatting&lt;/em&gt; time: fixing markdown quirks, re-uploading images, adjusting code syntax highlighting, setting canonical URLs so Google doesn't penalize you for duplicate content. Most developers maintain this discipline for exactly two weeks before their cross-posting ambitions quietly die in a browser tab labeled "Draft - Dev.to."&lt;/p&gt;

&lt;p&gt;The pipeline we're building solves this with a simple principle: &lt;strong&gt;write once, publish everywhere&lt;/strong&gt;. You'll create your content in a single markdown file, push to GitHub, and watch as automation handles the rest — deploying to your personal blog while simultaneously cross-posting to Dev.to, Medium, and Hashnode with proper formatting and canonical links intact.&lt;/p&gt;

&lt;p&gt;No more copy-paste marathons. No more "I'll cross-post this tomorrow" lies we tell ourselves. Just write, commit, and let the robots handle distribution while you move on to your next article.&lt;/p&gt;
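
The skeleton of such a pipeline might look like the workflow below. The script path, secret names, and trigger paths are placeholders for whatever your repository uses, not a canonical setup:

```yaml
# .github/workflows/publish.yml — illustrative sketch.
# `scripts/crosspost.py` and the secret names are placeholders.
name: publish
on:
  push:
    branches: [main]
    paths: ["content/published/**"]

jobs:
  crosspost:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install requests python-frontmatter
      - name: Publish changed posts
        run: python scripts/crosspost.py
        env:
          DEVTO_API_KEY: ${{ secrets.DEVTO_API_KEY }}
          HASHNODE_TOKEN: ${{ secrets.HASHNODE_TOKEN }}
          MEDIUM_TOKEN: ${{ secrets.MEDIUM_TOKEN }}
```

The `paths` filter means the job only fires when published content changes, so drafts and config commits don't trigger accidental publishes.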

&lt;h2&gt;
  
  
  The Foundation: Git + Markdown as Your Single Source of Truth
&lt;/h2&gt;

&lt;p&gt;Think of your blog content like source code. You wouldn't store your Python files in Google Docs, manually copying changes between team members' laptops. You'd use Git — because Git tracks every change, lets you branch and experiment, and never loses your work. Your writing deserves the same treatment.&lt;/p&gt;

&lt;p&gt;Markdown is the plain-text format that makes this possible. Unlike WordPress or Medium's rich editor, a markdown file is just text. Open it in VS Code, Vim, or Notepad — it works everywhere. When you inevitably decide to switch from Hugo to Astro three years from now, your 47 articles come with you. Try exporting a hundred posts from WordPress sometime; I'll wait.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontmatter transforms plain markdown into smart content.&lt;/strong&gt; Those few lines of YAML at the top of each file become your metadata layer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Building&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;RAG&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Pipelines&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;That&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Don't&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hallucinate"&lt;/span&gt;
&lt;span class="na"&gt;date&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2024-01-15&lt;/span&gt;
&lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;llm&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;rag&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;python&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;canonical_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://yourdomain.com/posts/rag-pipelines&lt;/span&gt;
&lt;span class="na"&gt;dev_to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;medium&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;hashnode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Platform-specific overrides live here too — maybe Dev.to needs different tags, or Medium requires a subtitle. One file holds everything.&lt;/p&gt;
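&lt;p&gt;The override merge described above can be sketched in a few lines of Python. The field names and platform keys here are illustrative, not a fixed schema; the point is simply that platform-specific values shadow the base metadata:&lt;/p&gt;

```python
# Sketch: per-platform overrides shadow the base frontmatter.
# Field names and platform keys are illustrative, not a fixed schema.
base = {
    "title": "Building RAG Pipelines That Don't Hallucinate",
    "tags": ["llm", "rag", "python"],
}
overrides = {
    "dev_to": {"tags": ["ai", "rag", "python", "tutorial"]},
    "medium": {"subtitle": "A practical guide"},
}

def for_platform(platform):
    # Base fields first, then anything the platform overrides on top.
    return {**base, **overrides.get(platform, {})}
```

&lt;p&gt;A platform with no entry in &lt;code&gt;overrides&lt;/code&gt; just gets the base metadata unchanged.&lt;/p&gt;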

&lt;p&gt;&lt;strong&gt;Folder structure matters more than you think.&lt;/strong&gt; Start simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;content/
├── drafts/           # Work in progress
├── published/        # Live posts (dated folders)
│   └── 2024-01-15-rag-pipelines/
│       ├── index.md
│       └── images/   # Co-located assets
└── templates/        # Reusable frontmatter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Co-locating images with posts means no broken links when you reorganize. Git tracks your drafts' evolution. Every published piece has a paper trail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Canonical URLs (The SEO Magic That Makes This Work)
&lt;/h2&gt;

&lt;p&gt;Imagine you have a favorite family recipe. You share photocopies with relatives, but you write "Original in Mom's cookbook, page 42" at the bottom of each copy. If anyone wants to know the &lt;em&gt;real&lt;/em&gt; source, they know exactly where to look. That's a canonical URL — it's you telling search engines "this is my original content, everything else is an authorized copy."&lt;/p&gt;

&lt;p&gt;Without this signal, Google sees your brilliant post appearing on your blog, Dev.to, Medium, and Hashnode and thinks: "Four identical articles? Someone's gaming the system." The result? Google filters out most versions and picks one as the "original" — often not your personal site.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's how each platform handles canonicals differently:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dev.to&lt;/strong&gt; makes it easy — add &lt;code&gt;canonical_url&lt;/code&gt; to your frontmatter, and they automatically add the proper &lt;code&gt;&amp;lt;link rel="canonical"&amp;gt;&lt;/code&gt; tag pointing back to your blog&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hashnode&lt;/strong&gt; goes further, offering a dedicated "Originally published at" field that both sets the canonical AND displays a visible attribution link&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium&lt;/strong&gt; is trickier — you must import stories using their "Import a story" feature (not copy-paste) to set canonicals, or manually add it in story settings&lt;/li&gt;
&lt;/ul&gt;
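&lt;p&gt;For Dev.to, the canonical ends up as a field in the article payload you POST. A minimal sketch, based on the Forem (Dev.to) API's documented shape at the time of writing — verify field names against the current API reference before relying on this:&lt;/p&gt;

```python
# Minimal Dev.to (Forem API) article payload: the API expects a top-level
# "article" object, and canonical_url is the field that protects your SEO.
def devto_payload(meta, body_md):
    return {
        "article": {
            "title": meta["title"],
            "body_markdown": body_md,
            "canonical_url": meta["canonical_url"],
            "published": True,
        }
    }
```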

&lt;p&gt;&lt;strong&gt;The one frontmatter field that saves your SEO:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;canonical_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://yourblog.com/posts/your-article-slug&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That single line is your insurance policy. Every platform in your pipeline should read this field and respect it. Your personal blog becomes the authoritative source, the copies drive traffic back to you, and Google rewards everyone appropriately.&lt;/p&gt;

&lt;p&gt;No canonicals? You're essentially competing against yourself for rankings. With them? You're building a syndication network where every platform amplifies your original work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Your Publishing Pipeline with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;Think of GitHub Actions as your personal publishing assistant who never sleeps. You write, you push, they handle the rest — formatting, authenticating, posting to three platforms before your coffee gets cold.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Basic Trigger: Push and Publish&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your workflow starts simple. When you push to your &lt;code&gt;main&lt;/code&gt; branch (specifically to the &lt;code&gt;posts/&lt;/code&gt; folder), the pipeline wakes up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Publish to Platforms&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;posts/**'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prevents every tiny README change from triggering a publishing spree. Only new or updated posts start the machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secrets: Your API Keys' Secure Home&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Never commit tokens. Ever. GitHub's repository secrets are your vault:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to Settings → Secrets and variables → Actions&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;DEVTO_API_KEY&lt;/code&gt;, &lt;code&gt;HASHNODE_TOKEN&lt;/code&gt;, and &lt;code&gt;MEDIUM_TOKEN&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Reference them in workflows as &lt;code&gt;${{ secrets.DEVTO_API_KEY }}&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each platform's API authentication differs slightly — Dev.to uses a simple API key header, Hashnode requires a Personal Access Token with publication permissions, and Medium's integration token needs specific scopes. Store all three; your workflow will pull the right one for each platform.&lt;/p&gt;
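&lt;p&gt;A small helper keeps those three header shapes in one place. These reflect the documented schemes at the time of writing; verify against each platform's current API docs:&lt;/p&gt;

```python
def auth_headers(platform, token):
    # Header shapes differ per platform; check each API's current docs,
    # since these reflect the schemes documented at the time of writing.
    if platform == "dev_to":
        return {"api-key": token}                    # Forem API v1 key header
    if platform == "hashnode":
        return {"Authorization": token}              # Hashnode GraphQL PAT
    if platform == "medium":
        return {"Authorization": f"Bearer {token}"}  # Medium integration token
    raise ValueError(f"unknown platform: {platform}")
```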

&lt;p&gt;&lt;strong&gt;The Quirks That Will Bite You&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's where pipelines get messy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image URLs&lt;/strong&gt;: Relative paths break everywhere. Convert all images to absolute URLs pointing to your hosted site or a CDN&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code blocks&lt;/strong&gt;: Dev.to handles triple-backtick fencing beautifully; Medium sometimes mangles language hints. Test your syntax highlighting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Markdown flavors&lt;/strong&gt;: Hashnode supports MDX components, Medium strips most formatting, Dev.to has liquid tags. Your pipeline needs platform-specific transforms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The solution? A preprocessing step that reads your canonical Markdown and outputs platform-flavored versions before each API call.&lt;/p&gt;
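&lt;p&gt;That preprocessing step can be as simple as a table of per-platform transform functions. The transforms below are illustrative (a real pipeline will accumulate more of them), but they show the shape of the idea:&lt;/p&gt;

```python
import re

def strip_liquid_tags(md):
    # Dev.to liquid tags ({% embed ... %}) mean nothing on other platforms.
    return re.sub(r"\{%.*?%\}", "", md)

def plain_code_fences(md):
    # Medium often mangles language hints, so drop them before posting there.
    return re.sub(r"```[A-Za-z0-9_+-]+\n", "```\n", md)

# Illustrative transform table: each platform gets its own pipeline of fixes.
TRANSFORMS = {
    "dev_to": [],                                   # takes canonical Markdown as-is
    "medium": [plain_code_fences, strip_liquid_tags],
    "hashnode": [strip_liquid_tags],
}

def render_for(platform, md):
    for fn in TRANSFORMS[platform]:
        md = fn(md)
    return md
```

&lt;p&gt;Each API call then dispatches &lt;code&gt;render_for(platform, markdown)&lt;/code&gt; instead of the raw file.&lt;/p&gt;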

&lt;h2&gt;
  
  
  The blogpipe CLI: A Working Cross-Posting Tool
&lt;/h2&gt;

&lt;p&gt;Think of blogpipe as a smart mail carrier that knows each recipient's preferences — it takes your single letter (Markdown post) and reformats the envelope appropriately for every destination.&lt;/p&gt;

&lt;p&gt;The architecture is straightforward: &lt;strong&gt;Markdown in → frontmatter extraction → platform-specific transforms → API dispatch&lt;/strong&gt;. When you run &lt;code&gt;blogpipe publish ./posts/my-article.md&lt;/code&gt;, here's what actually happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The parser reads your file and separates YAML frontmatter (title, tags, canonical URL) from content&lt;/li&gt;
&lt;li&gt;Transform functions modify the Markdown per platform — Medium gets simplified code blocks, Dev.to gets liquid tag conversions&lt;/li&gt;
&lt;li&gt;API handlers authenticate and POST to each enabled platform&lt;/li&gt;
&lt;/ol&gt;
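&lt;p&gt;Step 1 is the easiest to demystify. Here is a deliberately minimal splitter for &lt;code&gt;---&lt;/code&gt;-delimited frontmatter; a real tool would hand the header to a proper YAML parser (e.g. PyYAML or python-frontmatter) instead of parsing key/value pairs by hand:&lt;/p&gt;

```python
def split_frontmatter(text):
    # Minimal sketch: separate '---'-delimited frontmatter from the body.
    # A real tool would parse the header with a YAML library instead.
    if not text.startswith("---\n"):
        return {}, text
    head, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in head.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split on the FIRST colon only
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")
```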

&lt;p&gt;&lt;strong&gt;The features that save your sanity:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dry-run mode&lt;/strong&gt; (&lt;code&gt;--dry-run&lt;/code&gt;) previews exactly what would publish without touching any APIs. It shows you the transformed content for each platform, validates your frontmatter, and catches broken image links. Always run this first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Canonical injection&lt;/strong&gt; automatically sets the canonical URL on every platform pointing back to your primary site. This isn't optional — without it, you're creating duplicate content that hurts your SEO and confuses readers who find the same post multiple places.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image URL transformation&lt;/strong&gt; rewrites relative paths (&lt;code&gt;./images/diagram.png&lt;/code&gt;) to absolute URLs (&lt;code&gt;https://yourblog.dev/posts/my-article/images/diagram.png&lt;/code&gt;). Broken images are the fastest way to look unprofessional.&lt;/p&gt;
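&lt;p&gt;The rewrite itself is one regex pass over Markdown image syntax. The base URL here is a hypothetical example, not a real endpoint:&lt;/p&gt;

```python
import re

BASE = "https://yourblog.dev/posts/my-article"  # hypothetical base URL

def absolutize_images(md, base=BASE):
    # Rewrite relative Markdown image paths (./images/x.png) to absolute URLs.
    return re.sub(
        r"!\[([^\]]*)\]\(\./([^)]+)\)",
        lambda m: f"![{m.group(1)}]({base}/{m.group(2)})",
        md,
    )
```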

&lt;p&gt;&lt;strong&gt;Atomic error handling&lt;/strong&gt; is critical. If Dev.to publishes successfully but Medium fails, you shouldn't end up with a half-distributed post and no idea what happened. Blogpipe uses a transaction-like approach: it attempts all platforms, collects results, and gives you a clear report of what succeeded, what failed, and retry commands for failures.&lt;/p&gt;
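&lt;p&gt;A rough sketch of that attempt-all, report-everything pattern (the &lt;code&gt;--only&lt;/code&gt; retry flag is invented for illustration, not a documented blogpipe option):&lt;/p&gt;

```python
def publish_all(post, publishers):
    # Attempt every platform, collect outcomes, and report clearly; never
    # stop halfway through with no record of what happened.
    results = {}
    for name, publish in publishers.items():
        try:
            results[name] = ("ok", publish(post))
        except Exception as exc:  # record the failure instead of crashing
            results[name] = ("failed", str(exc))
    failed = [n for n, (status, _) in results.items() if status == "failed"]
    # A real CLI would print retry commands here (the --only flag is made up):
    retry_hints = [f"blogpipe publish --only {n}" for n in failed]
    return results, retry_hints
```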

&lt;h2&gt;
  
  
  When Things Break: Gotchas and Platform Limitations
&lt;/h2&gt;

&lt;p&gt;Let's be honest: this pipeline will break, and usually at the worst possible time. Here's what's going to bite you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Medium's API Is Basically Hostile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Medium deprecated their official API years ago and never brought it back. The "Integration tokens" in settings technically work but are severely limited — you can create posts, but you can't update them, can't delete them, and can't even reliably fetch your own content. The workaround everyone actually uses? RSS import. You publish to your canonical site, Medium pulls from your RSS feed, and you manually claim the post. It's clunky, but it works consistently. Some developers use unofficial API endpoints discovered through browser inspection, but these break without warning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rate Limits Will Find You&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Dev.to allows 30 requests per 30 seconds — generous for publishing, but aggressive if you're also fetching to check existing posts. Hashnode's GraphQL API is more forgiving but has daily limits. The solution: implement exponential backoff with jitter. Don't just retry after 1 second, 2 seconds, 4 seconds — add randomness (1.2 seconds, 2.7 seconds, 4.1 seconds) to prevent thundering herd problems if you're running multiple pipelines.&lt;/p&gt;
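&lt;p&gt;One common way to implement that is the "full jitter" variant: each wait is drawn uniformly between zero and the capped exponential step, which spreads retries out instead of synchronizing them. A minimal sketch:&lt;/p&gt;

```python
import random
import time

def backoff_delays(attempts, base=1.0, cap=30.0):
    # "Full jitter": each delay is uniform in [0, min(cap, base * 2**n)],
    # so concurrent pipelines never retry in lockstep.
    return [random.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

def with_retries(call, attempts=4):
    # Try, sleep a jittered exponential delay on failure, then try again;
    # the final attempt lets the error surface to the caller.
    for delay in backoff_delays(attempts):
        try:
            return call()
        except Exception:
            time.sleep(delay)
    return call()
```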

&lt;p&gt;&lt;strong&gt;Silent Code Block Destruction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This one hurts. Medium converts triple-backtick code blocks into their proprietary format, often stripping language identifiers and mangling indentation. LinkedIn's article editor is worse — it can completely flatten multi-line code into a single paragraph. Dev.to and Hashnode handle Markdown properly, but always verify after publishing. The safest approach: use GitHub Gist embeds for critical code samples. They render correctly everywhere and update automatically when you fix bugs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your First Week: A Practical Rollout Plan
&lt;/h2&gt;

&lt;p&gt;Think of your first week like setting up a new kitchen. Days one and two, you're just getting organized — putting things in the right cabinets so you can actually cook later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 1-2: Build Your Foundation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create your repository with the folder structure we discussed: &lt;code&gt;/content/&lt;/code&gt; (with &lt;code&gt;drafts/&lt;/code&gt;, &lt;code&gt;published/&lt;/code&gt;, and &lt;code&gt;templates/&lt;/code&gt;) plus &lt;code&gt;/.github/workflows/&lt;/code&gt;. Write your first post in Markdown — pick something short, around 500 words. This isn't about creating your masterpiece; it's about having real content to test the pipeline. Include a code block, an image, and a link. These three elements break most cross-posting workflows, so you want to catch issues early.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 3-4: Wire Up Automation (But Don't Go Live)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Configure your GitHub Actions workflow with a critical flag: &lt;code&gt;dry-run: true&lt;/code&gt;. This simulates publishing without actually posting anything. You'll see exactly what would happen — which API calls would fire, how your Markdown transforms for each platform, where images would upload. Run this at least three times with small tweaks to your post. Check the output logs obsessively. When everything looks right, manually publish to ONE platform (I recommend Dev.to — its API is the most predictable) and verify the result matches your dry-run preview.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 5+: Launch and Learn&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Remove the dry-run flag. Publish a real post. Then immediately check all platforms. Something will be wrong — accept this now. Maybe Hashnode stripped a heading level, or Medium's code block lost syntax highlighting. Fix it, update your templates, and try again next week.&lt;/p&gt;

&lt;p&gt;Set up basic tracking: which platform drives the most views? Most engagement? After a month of data, you'll know where to focus your energy.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Full working code&lt;/strong&gt;: &lt;a href="https://github.com/AKhileshPothuri/GenAI-Playbook/tree/main/zero-to-published-setting-up-a-multi-platform-dev-" rel="noopener noreferrer"&gt;GitHub →&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;







&lt;p&gt;Building a multi-platform publishing pipeline isn't about chasing vanity metrics across every site—it's about writing once and letting automation handle the tedious copy-paste-reformat dance that kills most developer blogs before post three. The upfront investment feels steep (five days of setup for a blog post?), but you're not building infrastructure for one article. You're building infrastructure for the next hundred. Every post after this one takes fifteen minutes from draft to published-everywhere, and that changes the economics of writing completely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start with Markdown as your single source of truth&lt;/strong&gt; — platform-specific quirks get handled in templates, not in your writing process&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dry-run mode is non-negotiable&lt;/strong&gt; — test your pipeline with fake publishes until you trust it, then test it three more times&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Track results from day one&lt;/strong&gt; — a month of data will tell you which platforms deserve your attention and which are just noise&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What surprised you most about multi-platform publishing? Drop a comment below—I'm especially curious if anyone's found clever workarounds for Medium's image hosting limitations.&lt;/p&gt;

</description>
      <category>blogging</category>
      <category>developertools</category>
      <category>automation</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>50 Things Anthropic's API Can't Do (And We're Going to Walk Through Every Single One)</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:34:17 +0000</pubDate>
      <link>https://dumb.dev.to/jon_at_backboardio/50-things-anthropics-api-cant-do-and-were-going-to-walk-through-every-single-one-4ilc</link>
      <guid>https://dumb.dev.to/jon_at_backboardio/50-things-anthropics-api-cant-do-and-were-going-to-walk-through-every-single-one-4ilc</guid>
      <description>&lt;p&gt;Quick disclaimer before we start: Claude helped me write this. Very intentionally.&lt;/p&gt;

&lt;p&gt;Not just "helped," either.&lt;/p&gt;

&lt;p&gt;I gave Claude direct access to our docs at docs.backboard.io. It navigated the docs itself, read them, and produced this list.&lt;/p&gt;

&lt;p&gt;So yes, an AI made by Anthropic read our documentation and wrote about the limitations of Anthropic's own API.&lt;/p&gt;

&lt;p&gt;It did not argue.&lt;br&gt;
It did not resist.&lt;br&gt;
Because it knows.&lt;/p&gt;

&lt;p&gt;So let's talk about what it knows.&lt;/p&gt;

&lt;p&gt;Anthropic's API is stateless.&lt;br&gt;
So is OpenAI's.&lt;br&gt;
So is Grok.&lt;br&gt;
So is OpenRouter.&lt;/p&gt;

&lt;p&gt;That one word, stateless, explains almost every pain point developers hit the second they move beyond a toy demo.&lt;/p&gt;

&lt;p&gt;And yes, we solve this at Backboard. You get free state for life, by the way. Not to bury the lead. But that is only part of the story.&lt;/p&gt;

&lt;p&gt;Here is the bigger point.&lt;/p&gt;

&lt;p&gt;Stateless means every API call starts from zero.&lt;/p&gt;

&lt;p&gt;The model does not know who you are.&lt;br&gt;
It does not know what was said five minutes ago.&lt;br&gt;
It does not know what your user cares about.&lt;br&gt;
It does not know what happened in the last session.&lt;/p&gt;

&lt;p&gt;You send context.&lt;br&gt;
It responds.&lt;br&gt;
The connection closes.&lt;br&gt;
It forgets.&lt;/p&gt;

&lt;p&gt;That is not a bug.&lt;br&gt;
That is the design.&lt;/p&gt;

&lt;p&gt;These APIs are low-level primitives. And low-level primitives are supposed to be simple.&lt;/p&gt;

&lt;p&gt;But the second you try to build something real, something users come back to, something that gets better over time instead of feeling reset every session, you hit a wall.&lt;/p&gt;

&lt;p&gt;And that wall is infrastructure.&lt;/p&gt;

&lt;p&gt;Session management.&lt;br&gt;
Context window handling.&lt;br&gt;
Memory extraction and retrieval.&lt;br&gt;
Vector databases for RAG.&lt;br&gt;
Multi-provider credential management.&lt;br&gt;
Agent orchestration.&lt;/p&gt;

&lt;p&gt;None of that ships with the raw API.&lt;br&gt;
All of it becomes your problem.&lt;/p&gt;

&lt;p&gt;That is where Backboard comes in.&lt;/p&gt;

&lt;p&gt;Backboard is a single API layer that handles all of it across 17,000+ models, including Claude, GPT, Gemini, Grok, and more.&lt;/p&gt;

&lt;p&gt;Shared state.&lt;br&gt;
One key.&lt;br&gt;
One abstraction.&lt;/p&gt;

&lt;p&gt;Below is a list of 50 specific things Backboard does that the raw Anthropic API does not.&lt;/p&gt;

&lt;p&gt;We are going to break all of them down in a 5-part series, starting with the most important concept: what "state" actually means.&lt;/p&gt;

&lt;p&gt;Then we build from there, all the way to multi-agent systems you can spin up by describing what you want in plain English.&lt;/p&gt;

&lt;p&gt;For now, here are the headlines. Follow me if you want to see all 5 parts without battling the Algo.&lt;/p&gt;




&lt;h2&gt;
  
  
  State and Conversation Persistence
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Persist a full conversation across sessions without storing anything yourself&lt;/li&gt;
&lt;li&gt;Pick up exactly where you left off, days or weeks later&lt;/li&gt;
&lt;li&gt;Give every user their own isolated conversation thread&lt;/li&gt;
&lt;li&gt;Run unlimited threads per assistant&lt;/li&gt;
&lt;li&gt;Tag threads with metadata like user IDs, plans, or channels&lt;/li&gt;
&lt;li&gt;Get the full structured conversation history back from the API at any time&lt;/li&gt;
&lt;li&gt;Keep threads alive indefinitely until you explicitly delete them&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Memory Across Sessions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Have an assistant automatically remember user preferences between completely separate conversations&lt;/li&gt;
&lt;li&gt;Auto-extract facts from conversations and store them in a knowledge base&lt;/li&gt;
&lt;li&gt;Automatically retrieve relevant memories when they matter, without writing any retrieval logic&lt;/li&gt;
&lt;li&gt;Pre-load what you already know about a user before they ever say a word&lt;/li&gt;
&lt;li&gt;Search semantically over everything the assistant has learned about a user&lt;/li&gt;
&lt;li&gt;Use memory in read-only mode, retrieve without ever writing&lt;/li&gt;
&lt;li&gt;Add, update, or delete specific memories via API&lt;/li&gt;
&lt;li&gt;Customize exactly what kinds of facts get extracted, per assistant&lt;/li&gt;
&lt;li&gt;Use higher-accuracy memory extraction for high-stakes use cases&lt;/li&gt;
&lt;li&gt;Share everything the assistant learns about a user across all of that user's conversations&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Context Window Management
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Automatically handle conversations that exceed the model's context limit&lt;/li&gt;
&lt;li&gt;Never manually count tokens or write truncation logic&lt;/li&gt;
&lt;li&gt;Switch models mid-conversation without recalculating context for the new model&lt;/li&gt;
&lt;li&gt;Automatically adjust document chunking when the model changes&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Model Routing and Multi-Provider Access
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Access 17,000+ models from a single API key&lt;/li&gt;
&lt;li&gt;Switch models mid-conversation without losing any state or history&lt;/li&gt;
&lt;li&gt;Use different models for different messages in the same thread&lt;/li&gt;
&lt;li&gt;Route cheap queries to cheap models and hard ones to expensive models, in the same thread&lt;/li&gt;
&lt;li&gt;Implement transparent provider fallback when a provider goes down&lt;/li&gt;
&lt;li&gt;Browse the full model catalog programmatically, filter by capability, context size, and price&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  RAG and Document Intelligence
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Upload a document and have it queryable in minutes with zero infrastructure&lt;/li&gt;
&lt;li&gt;Get hybrid keyword and semantic search automatically on every query&lt;/li&gt;
&lt;li&gt;Index mixed document types in one knowledge base, PDFs next to code files next to spreadsheets&lt;/li&gt;
&lt;li&gt;Scope a document to a single conversation instead of the whole assistant&lt;/li&gt;
&lt;li&gt;Choose your own embedding model and dimensions per assistant&lt;/li&gt;
&lt;li&gt;Tune how many chunks get retrieved per query&lt;/li&gt;
&lt;li&gt;Index code files natively alongside prose&lt;/li&gt;
&lt;li&gt;Check document indexing status and get chunk and token counts back from the API&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Tool Calling with Persistent State
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Have every tool result automatically become part of the persistent conversation history&lt;/li&gt;
&lt;li&gt;Chain multiple rounds of tool calls without rebuilding state between rounds&lt;/li&gt;
&lt;li&gt;Loop tool calls until the agent reaches a completed state&lt;/li&gt;
&lt;li&gt;Run multiple tools in parallel within a single response&lt;/li&gt;
&lt;li&gt;Stream the final answer to the user after tool execution completes&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Web Search
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Give an assistant real-time web access with a single parameter&lt;/li&gt;
&lt;li&gt;Let the assistant decide on its own when to search vs. use what it already knows&lt;/li&gt;
&lt;li&gt;Combine live web search, persistent memory, and streaming in one API call&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Multi-Agent Architecture
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Run parallel agent threads simultaneously and merge outputs in a coordinator&lt;/li&gt;
&lt;li&gt;Build specialist and coordinator agent networks&lt;/li&gt;
&lt;li&gt;Give each agent in a network its own model&lt;/li&gt;
&lt;li&gt;Give each agent its own system prompt and identity&lt;/li&gt;
&lt;li&gt;Give each agent distinct tool-calling capabilities&lt;/li&gt;
&lt;li&gt;Have every agent in a network share what they know about the same user&lt;/li&gt;
&lt;li&gt;Describe a complete multi-agent system in plain English and have it built for you, no code required&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;That last one gets its own post. It's the whole point of doing all the other work first.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's coming
&lt;/h2&gt;

&lt;p&gt;This is the start of a 5-part series. Each post takes a chunk of the list above and walks through it properly, starting from first principles. If you don't know what "state" means, Part 1 explains it. If you've never thought about the difference between conversation context and long-term memory, Part 2 covers that. We're not assuming anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 1 (Beginner):&lt;/strong&gt; What state is, why it matters, and your first 10 stateful patterns explained from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 2 (Intermediate):&lt;/strong&gt; The difference between context and memory, and 10 patterns that make your assistant genuinely smarter over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 3 (Advanced):&lt;/strong&gt; RAG without the infrastructure. Hybrid search, mixed document types, scoping, tuning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 4 (Expert):&lt;/strong&gt; Multi-model routing, stateful tool chains, and parallel agent execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 5 (Master):&lt;/strong&gt; Describing multi-agent systems in plain English and having them built for you via MCP.&lt;/p&gt;

&lt;p&gt;Follow along. By the end you'll have gone from "what is state" to building systems most teams spend months architecting.&lt;/p&gt;

&lt;p&gt;Start here: &lt;a href="https://docs.backboard.io" rel="noopener noreferrer"&gt;docs.backboard.io&lt;/a&gt;&lt;br&gt;
Or just get an API key: &lt;a href="https://app.backboard.io" rel="noopener noreferrer"&gt;app.backboard.io&lt;/a&gt; — $5 free credits, no credit card needed&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Brainstorming with BMAD and Qwen Code.</title>
      <dc:creator>Kévin Drapel</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:33:36 +0000</pubDate>
      <link>https://dumb.dev.to/kdr/brainstorming-with-bmad-and-qwen-code-48d0</link>
      <guid>https://dumb.dev.to/kdr/brainstorming-with-bmad-and-qwen-code-48d0</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;There are plenty of tools that promise to make coding faster. Fewer actually help you think better.&lt;/p&gt;

&lt;p&gt;That difference matters. Most mistakes in software don’t come from typing speed or syntax. They come from unclear thinking, rushed decisions, or blind spots that only show up later when things break.&lt;/p&gt;

&lt;p&gt;This article is about a setup that tries to address that problem directly: combining &lt;strong&gt;BMAD as a structured thinking framework&lt;/strong&gt; with &lt;strong&gt;Qwen Code&lt;/strong&gt; through its CLI. The goal is not just to generate code, but to guide how decisions are made before and during implementation.&lt;/p&gt;

&lt;p&gt;To make that practical, I will use the Six Thinking Hats approach as the backbone for brainstorming. It forces you to look at a problem from multiple angles, instead of defaulting to the first idea that “seems good enough.” When paired with a CLI-driven coding assistant, it becomes a workflow: &lt;strong&gt;think deliberately, then execute quickly&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is not a theoretical piece. It’s about how to actually use these tools together to reduce bad decisions. While my example will focus on software development, the tools and techniques presented here are broadly applicable to many contexts and industries, including decision-making, business plans, presentations, what-if scenarios, etc.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Squad
&lt;/h1&gt;

&lt;p&gt;This setup uses two components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Qwen Code (CLI)&lt;/strong&gt; to write and run code from the terminal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BMAD framework (v6)&lt;/strong&gt; to structure thinking and guide decisions. BMAD is also available for other common AI tools (Claude Code, Codex, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Qwen Code
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjxjtr862wrhgiy35ity.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjxjtr862wrhgiy35ity.png" alt=" " width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/QwenLM/qwen-code" rel="noopener noreferrer"&gt;https://github.com/QwenLM/qwen-code&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Installing Qwen Code is straightforward: follow the official setup instructions. Using npm is recommended, as it is the standard package manager for most AI-related tooling. You must install Node.js v20+; we will also need it for BMAD in a moment.&lt;/p&gt;

&lt;p&gt;You will need a Qwen account to use the CLI. Notably, Qwen provides a very generous free-tier quota: you can run full sessions at no cost that would otherwise cost tens of dollars on comparable plans such as Claude. You can even use a local model (the Qwen models are open source) if you want a 100% standalone and free setup, though be aware that BMAD may not work properly if the model's capabilities are too limited. Check my article on this: &lt;a href="https://medium.com/@kevin.drapel/your-local-qwen-with-qwen-cli-and-lm-studio-564ffb4c1e9e" rel="noopener noreferrer"&gt;https://medium.com/@kevin.drapel/your-local-qwen-with-qwen-cli-and-lm-studio-564ffb4c1e9e&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Qwen can be invoked from the command line by simply launching “qwen”. You will need to authenticate the first time (use the /auth command in case of issues); choose “Qwen OAuth”, which is the free plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  BMAD 6
&lt;/h2&gt;

&lt;p&gt;The acronym stands for “Build More Architect Dreams”. It is a comprehensive Agile development framework, but in this context only a small subset is used, specifically the brainstorming capability.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/bmad-code-org" rel="noopener noreferrer"&gt;
        bmad-code-org
      &lt;/a&gt; / &lt;a href="https://github.com/bmad-code-org/BMAD-METHOD" rel="noopener noreferrer"&gt;
        BMAD-METHOD
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Breakthrough Method for Agile Ai Driven Development
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/bmad-code-org/BMAD-METHOD/banner-bmad-method.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fbmad-code-org%2FBMAD-METHOD%2FHEAD%2Fbanner-bmad-method.png" alt="BMad Method"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.npmjs.com/package/bmad-method" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/58952704e78a53df84f128c2132f6a23e0b9d25eee9aeb0cf935393e20a98a44/68747470733a2f2f696d672e736869656c64732e696f2f6e706d2f762f626d61642d6d6574686f643f636f6c6f723d626c7565266c6162656c3d76657273696f6e" alt="Version"&gt;&lt;/a&gt;
&lt;a href="https://github.com/bmad-code-org/BMAD-METHOD/LICENSE" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/fdf2982b9f5d7489dcf44570e714e3a15fce6253e0cc6b5aa61a075aac2ff71b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d4d49542d79656c6c6f772e737667" alt="License: MIT"&gt;&lt;/a&gt;
&lt;a href="https://nodejs.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e07bcc4fee8282ed8b094dc9b03da92aef6a178ff584a79a82d4f996cde9120a/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6e6f64652d25334525334432302e302e302d627269676874677265656e" alt="Node.js Version"&gt;&lt;/a&gt;
&lt;a href="https://www.python.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/4e8275a117788c548854e8632b0a483621314f77d035435a4a15720abb86eb8e/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f707974686f6e2d253345253344332e31302d626c75653f6c6f676f3d707974686f6e266c6f676f436f6c6f723d7768697465" alt="Python Version"&gt;&lt;/a&gt;
&lt;a href="https://docs.astral.sh/uv/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/a5176f90cd1576095b37073809285477833a93dcb23226248e266cc3de2941e0/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f75762d7061636b6167652532306d616e616765722d626c756576696f6c65743f6c6f676f3d7576" alt="uv"&gt;&lt;/a&gt;
&lt;a href="https://discord.gg/gk8jAdXWmj" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/585e0b9a83896ef294e3507dd0107b132e32af506ace1876124bfd90d59030dd/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f446973636f72642d4a6f696e253230436f6d6d756e6974792d3732383964613f6c6f676f3d646973636f7264266c6f676f436f6c6f723d7768697465" alt="Discord"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Build More Architect Dreams&lt;/strong&gt; — An AI-driven agile development module for the BMad Method Module Ecosystem, the best and most comprehensive Agile AI Driven Development framework that has true scale-adaptive intelligence that adjusts from bug fixes to enterprise systems.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;100% free and open source.&lt;/strong&gt; No paywalls. No gated content. No gated Discord. We believe in empowering everyone, not just those who can pay for a gated community or courses.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Why the BMad Method?&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Traditional AI tools do the thinking for you, producing average results. BMad agents and facilitated workflows act as expert collaborators who guide you through a structured process to bring out your best thinking in partnership with the AI.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI Intelligent Help&lt;/strong&gt; — Invoke the &lt;code&gt;bmad-help&lt;/code&gt; skill anytime for guidance on what's next&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale-Domain-Adaptive&lt;/strong&gt; — Automatically adjusts planning depth based on project complexity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured Workflows&lt;/strong&gt; — Grounded in agile best practices across analysis, planning, architecture, and implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specialized&lt;/strong&gt;…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/bmad-code-org/BMAD-METHOD" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;In practice, the framework installs a collection of agents and skills into a local folder. These can then be invoked by an AI coding agent, such as Qwen Code in this setup.&lt;/p&gt;

&lt;h1&gt;
  
  
  Preparing our Project for BMAD
&lt;/h1&gt;

&lt;p&gt;Create an empty folder on your disk; this is where the project will be stored. BMAD will create a few folders next to Qwen’s own folder (.qwen).&lt;/p&gt;

&lt;p&gt;Launch a terminal, navigate to the folder, then launch BMAD setup:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npx bmad-method install&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzn8ihpdjfjvevhj1ffoa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzn8ihpdjfjvevhj1ffoa.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first question in the installation process concerns the target folder; you can simply press Enter to install in the current one (and Enter a second time to confirm).&lt;/p&gt;

&lt;p&gt;When asked which modules to install, pick the first three (use the arrows and space bar to enable them), then press Enter to confirm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuk1bcqr2pnzp6ss7l3x2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuk1bcqr2pnzp6ss7l3x2.png" alt=" " width="800" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BMAD will ask if you want to add custom modules or agents; you can answer No and continue. The next step is the integration with tools, where we need to select “QwenCoder”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs60hhpnqoytl3zspwk2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs60hhpnqoytl3zspwk2t.png" alt=" " width="698" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next questions are about interaction: how you want the agent to address you and in which language. The choice is up to you, but we will continue in English. We also keep the default output folder.&lt;/p&gt;

&lt;p&gt;The module configuration is then set to “Express Setup”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7k0n7ed5a1mhylywwukb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7k0n7ed5a1mhylywwukb.png" alt=" " width="713" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BMAD finalizes the installation and we are ready to go.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97ehi5uv51udcdc9psbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97ehi5uv51udcdc9psbs.png" alt=" " width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Invoking The Brainstormer
&lt;/h1&gt;

&lt;p&gt;With the setup complete, we can now use BMAD inside Qwen.&lt;/p&gt;

&lt;p&gt;Launch Qwen and enter the “/skills” command. You should see a long list of “bmad-xxxxx” skills, confirming that the installation was successful. We are going to use bmad-brainstorming.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vsa160zgr9dkootoez2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vsa160zgr9dkootoez2.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before going into the skill, let’s set the scenario for our brainstorming demonstration: we aim to develop a web application for gardeners that assists them in planning their next plantings. The app will help them decide what to plant, when to plant it, and the optimal locations for each plant in their garden.&lt;/p&gt;

&lt;p&gt;We now need to expand this idea, but we are not sure what the best options are, so we will use the bmad-brainstorming skill to help us. In Qwen, we simply type “bmad-brainstorming” as a prompt (in Claude Code and other tools, the skills are exposed as real commands). Note that during the different steps, BMAD (through Qwen) will ask for permission to write files into its folders: it keeps track of the intermediate steps as markdown files.&lt;/p&gt;

&lt;p&gt;The skill asks us what the vision is and what the expected outcomes are.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcupj805wpcc0vhp20dv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcupj805wpcc0vhp20dv1.png" alt=" " width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is our first answer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;1) Make a simple web tool for gardeners that tells them what veggies to plant and where to put them. It should give seasonal tips based on where they live and let them map out their plants in basic beds or pots. Keep it simple, don’t ask for too much info, and focus on being clear rather than super precise. 2) I want to have a specification that can be used for the implementation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;BMAD summarizes the key aspects of the vision and asks us to confirm that we are aligned on the goals. It also proposes an approach for the next steps. Several brainstorming methods are available; depending on the kind of goal, you can let BMAD propose the most appropriate one, or you can enforce a given methodology yourself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj37rra37zwnyl3i393hu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj37rra37zwnyl3i393hu.png" alt=" " width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We prompt “1” for user-selected techniques, and BMAD shows a list of categories. You could pick one of the main categories, or enforce a specific approach, as we do here: “Six Thinking Hats” by De Bono (&lt;a href="https://en.wikipedia.org/wiki/Six_Thinking_Hats" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Six_Thinking_Hats&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;This method involves adopting multiple perspectives to examine goals and challenges. It is especially valuable in software design, as it clearly separates emotional, analytical, and creative thinking, helping reduce bias in decision-making. Each “hat” is associated with a specific colour and represents a distinct mindset.&lt;/p&gt;
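&lt;p&gt;For reference, here is a plain-data sketch of the six hats and the mindsets they represent, following De Bono’s standard colour assignments:&lt;/p&gt;

```javascript
// The six hats and the mindset each one represents (per De Bono).
const hats = {
  blue:   "process: organizes and steers the thinking itself",
  white:  "facts: data and known information",
  red:    "feelings: intuition and gut reactions",
  yellow: "benefits: optimism and value",
  black:  "risks: caution and what could go wrong",
  green:  "creativity: alternatives and new ideas",
};

console.log(Object.keys(hats).length); // 6
```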

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv1rvigsbvuf55ipsyqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv1rvigsbvuf55ipsyqy.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I want to use Six Thinking Hats&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9gb7xnj4jsdvn3ppsc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9gb7xnj4jsdvn3ppsc1.png" alt=" " width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BMAD invites us to either continue with this or add another methodology into our process. As we want to keep it simple for the sake of the demonstration, we just continue.&lt;/p&gt;
&lt;h1&gt;
  
  
  Going through the Hats
&lt;/h1&gt;

&lt;p&gt;BMAD starts with the blue hat, which focuses on the thinking process itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jlc9w5tryhbaa4cha2d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jlc9w5tryhbaa4cha2d.png" alt=" " width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One important point: you are not required to write everything in detail; you can complete your ideas iteratively. This is where AI is extremely powerful, as it can easily pivot. If a request is unclear or you can’t make up your mind, you can also say you don’t know, ask for suggestions, or simply skip an answer.&lt;/p&gt;

&lt;p&gt;Coming back to the blue hat, we are asked about the key components, the logical flow, and the constraints. Here I skip question 2 and only answer 1 and 3.&lt;/p&gt;

&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;I need functional specifications, as well as technical specifications.&lt;/li&gt;
&lt;li&gt;It must be simple for a gardener; the audience is not expected to be familiar with computers.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;We get a summary of the blue hat, and BMAD switches to the white hat, which focuses on facts. This is one of the most important hats because it builds the pillars of a robust analysis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxvmx0zveei27v2gk84p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxvmx0zveei27v2gk84p.png" alt=" " width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The questions target the needs and the data required to achieve the goals. We give some answers based on our expertise:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;When to plant, what kind of soil, sunny vs shady area, when is the harvest.&lt;/li&gt;
&lt;li&gt;location, season, space between plants.&lt;/li&gt;
&lt;li&gt;we need the location of the garden to figure out if the area is cold, mild, etc. we also need the spacing information and some plants cannot be planted together, you also need to consider rotations.&lt;/li&gt;
&lt;li&gt;watering needs &lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;The white hat is summarized, and then the red hat is presented: a perspective based on gut feeling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcamh8tgrw281ur7klm0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcamh8tgrw281ur7klm0m.png" alt=" " width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The questions focus on how users feel: what they like and what frustrates them, as well as our own feelings. This is our answer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;they should feel that the tool is reliable, they must feel relieved that they do not have to keep in mind all the scheduling&lt;/li&gt;
&lt;li&gt;missing planting at the right time + missing watering&lt;/li&gt;
&lt;li&gt;it is important to have a calendar and reminder of regular tasks (watering)&lt;/li&gt;
&lt;li&gt;users would love to be guided&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;I will skip the next hats (yellow, black, green), but the process is similar: you keep answering to refine the AI’s understanding. The black hat is particularly interesting, as it forces you to adopt a negative mindset and ask what could go wrong. The green hat takes the opposite path: how to get out of the negative aspects. The yellow hat focuses on the benefits.&lt;/p&gt;

&lt;p&gt;At the end, the blue hat is presented again in case we want to refine our thinking. We simply ask BMAD to move forward with a “let’s continue”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1w70kce8d4bc2se76nih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1w70kce8d4bc2se76nih.png" alt=" " width="800" height="624"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A summary of the hats is shown, and again BMAD asks what you want to do next. You can add more ideas, dive deeper into a particular topic, or just continue. To keep the demo concise, we proceed directly to specification generation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi28otsb8jfc729o9lphe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi28otsb8jfc729o9lphe.png" alt=" " width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Generated Specification
&lt;/h1&gt;

&lt;p&gt;A specification is quickly generated. It covers all the sections you would expect from such a document: an overview and a functional specification with features and user stories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wmwcnqghmw6xcbqyuca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wmwcnqghmw6xcbqyuca.png" alt=" " width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The technical aspects are also covered, since we requested them: architecture, technical stack, data sources, APIs, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F813dnhfznr416wy2r1he.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F813dnhfznr416wy2r1he.png" alt=" " width="763" height="620"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Further Refinements
&lt;/h1&gt;

&lt;p&gt;At this point, it is strongly recommended to read the first version of the generated document. You will probably want to make some changes, and new ideas may come up during the review. Instead of manually editing the .md file, you can run a new round of brainstorming.&lt;/p&gt;

&lt;p&gt;This is what we are going to do by invoking the “green hat” (the creative view) again.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Can we make a run with a green hat, I would like to improve a few things&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since we have already made a full pass of Six Thinking Hats, the green hat now comes with some additional ideas to explore.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj9hj3teaj8fkp1oj6mx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj9hj3teaj8fkp1oj6mx.png" alt=" " width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The green hat comes up with ideas such as a webcam in the garden.&lt;br&gt;
We propose a few new ideas of our own, but also ask for advice:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I think it would be nice to have an idea of the incoming weather in that area. I think users would also like to have the schedule that is printable (PDF?) . Can you also propose me some ideas going into those directions?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The green hat comes up with an extensive set of areas to explore and summarizes them. Our proposal is the first one in the list, but BMAD also brought some new ideas, such as a “harvest goals calculator” or gamification with “streaks &amp;amp; badges”. We decide to go for the practical additions (B).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F273kc2uksbwq3uqbkjhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F273kc2uksbwq3uqbkjhb.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;B is nice, proceed with that&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;BMAD proceeds with the necessary changes in our specification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7apa2yww8wbjjdn22ys5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7apa2yww8wbjjdn22ys5.png" alt=" " width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Jumping to another Brainstorming Method
&lt;/h1&gt;

&lt;p&gt;If you feel that Six Thinking Hats is not sufficient or appropriate for your use case, you can instruct BMAD to propose an alternative:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Can you suggest another brainstorming method that would be appropriate? We would continue with it&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It recommends a few options; the top three are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Question Storming&lt;/strong&gt;: You have a solid spec; now identify what you don’t know yet.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Assumption Reversal&lt;/strong&gt;: Challenge the design decisions you’ve made to find better alternatives.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Constraint Mapping&lt;/strong&gt;: Before implementation, know which constraints are real vs. imagined.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We go with the Assumption Reversal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67gmdd995r1hb12hkckt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67gmdd995r1hb12hkckt.png" alt=" " width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We get challenged on the location: what happens if the user NEVER enters a location? This removes a lot of information that would be useful for garden management.&lt;/p&gt;

&lt;p&gt;Without a location, the user will not have access to weather information, and it will not be possible to give advice on the right time for planting. An alternative could be to let the user choose a hardiness zone if they do not want to disclose their exact location. If neither is possible, the suggestions will be disabled.&lt;/p&gt;

&lt;p&gt;If we stop the process at this point, we get a conclusion with our reversal thinking. After each major milestone, we are offered the choice to continue brainstorming, adapt the plan, or proceed with the implementation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5jus9u5in40buwq09oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5jus9u5in40buwq09oy.png" alt=" " width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;The brainstorming tool in BMAD is not just a way to generate ideas; it is a way to structure thinking.&lt;/p&gt;

&lt;p&gt;By combining perspective-based reasoning and exploration, it transforms what is usually an unstructured and sometimes inconsistent process into something repeatable and reliable. Ambiguities are reduced early, hidden constraints surface naturally, and ideas evolve into more coherent solutions.&lt;/p&gt;

&lt;p&gt;The time invested in this phase is not wasted. It directly impacts the quality of the outcome by strengthening the foundations of the solution before any implementation begins.&lt;/p&gt;

&lt;p&gt;Used properly, BMAD does not just help you think of better ideas, it helps you design better systems.&lt;/p&gt;

&lt;p&gt;Original article: &lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://medium.com/@kevin.drapel/brainstorming-with-bmad-and-qwen-code-12d1ee5e8fba" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;medium.com&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>productivity</category>
      <category>design</category>
    </item>
    <item>
      <title>The Release That Broke Everything</title>
      <dc:creator>Wu Long</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:32:27 +0000</pubDate>
      <link>https://dumb.dev.to/oolongtea2026/the-release-that-broke-everything-78h</link>
      <guid>https://dumb.dev.to/oolongtea2026/the-release-that-broke-everything-78h</guid>
      <description>&lt;p&gt;Some releases ship features. Some ship fixes. And some ship chaos.&lt;/p&gt;

&lt;p&gt;OpenClaw v2026.4.5 managed to break things on every major platform simultaneously. Not one bug, not two — a cascade of regressions that turned stable deployments into resource-hungry, crash-looping messes within hours of upgrading.&lt;/p&gt;

&lt;p&gt;Let's look at what happened, because the failure modes here are textbook examples of how complexity compounds.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Damage Report
&lt;/h2&gt;

&lt;p&gt;Within 24 hours of v2026.4.5 going live, users reported failures across macOS, Windows, and Linux. Here's the highlight reel.&lt;/p&gt;

&lt;h3&gt;
  
  
  macOS: 87 Processes, 888% CPU
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/openclaw/openclaw/issues/62051" rel="noopener noreferrer"&gt;#62051&lt;/a&gt; is the kind of bug report that makes you wince. A Mac Mini user upgraded from v2026.4.2 and watched their system spawn &lt;strong&gt;87+ worker processes&lt;/strong&gt;, each independently loading all plugins:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[plugins] BlockRun provider registered (55+ models via x402)
[plugins] Registered 1 partner tool(s): blockrun_x_users_lookup
[plugins] Not in gateway mode — proxy will start when gateway runs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That message repeated for every single child process. The result:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;103 total openclaw processes&lt;/strong&gt; (vs ~8 on the previous version)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;888% CPU&lt;/strong&gt; across all cores&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load average 17.77&lt;/strong&gt; on an 8-10 core machine&lt;/li&gt;
&lt;li&gt;API response times went from 10ms to over 2 minutes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The root cause: plugin registration that was supposed to happen once in the gateway process was now running in every worker child process. Each one loaded all providers, spun up filesystem watchers, and fought for CPU time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Windows: Stack Overflow Before Startup
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/openclaw/openclaw/issues/62055" rel="noopener noreferrer"&gt;#62055&lt;/a&gt; hit Windows users with a completely different failure mode. The CLI wouldn't even start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;RangeError: Maximum call stack size exceeded
    at evaluateSync (node:internal/modules/esm/module_job:458:26)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ESM module graph had grown significantly between releases. On Linux and macOS, V8's default stack (~8 MB) handled it fine. On Windows, the default ~1 MB stack couldn't cope. Users who worked around the stack issue with &lt;code&gt;--stack-size&lt;/code&gt; then hit heap OOM at 4 GB.&lt;/p&gt;

&lt;p&gt;Same codebase, same version, completely different crash — because the release process didn't test against platform-specific V8 defaults.&lt;/p&gt;
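&lt;p&gt;You can see how platform- and flag-dependent this limit is with a small probe. This is a generic Node.js sketch, not OpenClaw code:&lt;/p&gt;

```javascript
// Measures how deep the engine lets a call stack grow before throwing
// RangeError. The result varies across platforms and with --stack-size,
// which is why the same module graph can load on macOS and crash on Windows.
function measureStackDepth() {
  let depth = 0;
  function recurse() {
    depth += 1;
    recurse();
  }
  try {
    recurse();
  } catch (e) {
    // RangeError: Maximum call stack size exceeded
  }
  return depth;
}

console.log(measureStackDepth()); // platform- and flag-dependent
```

Running this under the default stack and again with `node --stack-size=500` makes the difference concrete.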

&lt;h3&gt;
  
  
  Linux: Tools Rendered as Raw Text
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/openclaw/openclaw/issues/62089" rel="noopener noreferrer"&gt;#62089&lt;/a&gt; was subtler but arguably worse. Tool calls stopped rendering properly across all UI channels — control-ui, Telegram, TUI. Instead of formatted output, users saw raw &lt;code&gt;[TOOL_CALL]&lt;/code&gt; blocks.&lt;/p&gt;

&lt;p&gt;The tools still &lt;em&gt;executed&lt;/em&gt; fine. The results were correct. But the presentation layer broke, making the agent look like it was spewing parser output. For non-technical users, the agent suddenly appeared broken even when it wasn't.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Compound Effect
&lt;/h3&gt;

&lt;p&gt;One user (&lt;a href="https://github.com/openclaw/openclaw/issues/62095" rel="noopener noreferrer"&gt;#62095&lt;/a&gt;) documented the full experience: &lt;strong&gt;10 gateway restarts in 8 hours&lt;/strong&gt;. Their stable Mac Studio M3 Ultra setup hit all of these simultaneously:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;doctor --fix&lt;/code&gt; didn't actually fix the warnings it reported&lt;/li&gt;
&lt;li&gt;Subagent announce timeouts defaulted to 120s, blocking the gateway for up to 8 minutes per failure&lt;/li&gt;
&lt;li&gt;New security checks broke existing LAN setups without migration guidance&lt;/li&gt;
&lt;li&gt;Slack health-monitor reconnected every 35 minutes in a loop&lt;/li&gt;
&lt;li&gt;Gateway hit 1.5 GB RAM with 379 accumulated session files&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each issue alone was survivable. Together, they made the system unusable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Happens
&lt;/h2&gt;

&lt;p&gt;This isn't unique to OpenClaw. Any fast-moving project with these characteristics is vulnerable:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Plugin isolation boundaries shift silently.&lt;/strong&gt; The worker process change probably looked innocent in the diff — maybe a refactor that moved initialization earlier, or a startup path that stopped checking whether it was in gateway mode. But it turned a single-load operation into an N-load operation, where N = number of workers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Platform-specific limits aren't in CI.&lt;/strong&gt; The module graph grew gradually across many PRs. No individual change was problematic. But the cumulative effect crossed Windows' stack threshold. Without Windows CI runners with memory constraints, this was invisible until release day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Default values are load-bearing.&lt;/strong&gt; The 120-second announce timeout was probably fine when subagents were rare. But as usage patterns evolved — more agents, more concurrent work — the default became a denial-of-service vector against the gateway itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Presentation regressions are stealth killers.&lt;/strong&gt; The tool rendering bug didn't affect functionality at all. But it destroyed the user experience. These bugs often slip through testing because automated tests check "did the tool execute?" not "did the result render correctly?"&lt;/p&gt;
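
&lt;p&gt;A cheap guard is a render-level assertion. Here &lt;code&gt;render_tool_result&lt;/code&gt; is a hypothetical stand-in for the real presentation layer, but the shape of the test is the point: assert on what the user sees, not just on whether the tool ran:&lt;/p&gt;

```python
def render_tool_result(tool: str, output: str) -> str:
    # Hypothetical presentation-layer formatter standing in for the real one
    return f"{tool}: {output}"

def test_tool_result_is_formatted() -> None:
    rendered = render_tool_result("read_file", "42 lines")
    # The regression in #62089: raw parser blocks leaking into the UI
    assert "[TOOL_CALL]" not in rendered
    assert rendered.startswith("read_file:")
```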

&lt;h2&gt;
  
  
  The Deeper Pattern
&lt;/h2&gt;

&lt;p&gt;What makes v2026.4.5 interesting isn't any single bug — it's the &lt;em&gt;simultaneity&lt;/em&gt;. Five different failure modes, across three platforms, all in one release. This usually means one of two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A large structural change (like the plugin loading refactor) had cascading effects that weren't fully traced&lt;/li&gt;
&lt;li&gt;Multiple risky changes landed in the same release window without adequate soak time&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The fix is almost never "more testing" in the abstract. It's more specific:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Canary releases&lt;/strong&gt; that expose changes to a subset of users first&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Platform-diverse CI&lt;/strong&gt; that catches the Windows-specific failures before they ship&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource-budget tests&lt;/strong&gt; that fail when process count or memory exceeds expected bounds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback documentation&lt;/strong&gt; so users know exactly how to get back to the last stable version&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  For Agent Builders
&lt;/h2&gt;

&lt;p&gt;If you're building on top of a fast-moving agent framework:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pin your versions.&lt;/strong&gt; Don't auto-upgrade to latest. Wait 48–72 hours after a release and check the issue tracker.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor your resources.&lt;/strong&gt; Process count, memory, CPU — these are your early warning system. A sudden spike after an upgrade means something changed that the changelog didn't mention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep the previous version's binary.&lt;/strong&gt; Being able to roll back in 30 seconds is worth more than any amount of testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test your specific platform.&lt;/strong&gt; "Works on my machine" is especially dangerous when the codebase targets Linux, macOS, and Windows simultaneously.&lt;/li&gt;
&lt;/ol&gt;
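
&lt;p&gt;Point 2 can be as simple as a budget check wired into whatever health probe you already run. A minimal sketch: the 1.5 GB figure mirrors the gateway report above, and both thresholds are assumptions you should tune:&lt;/p&gt;

```python
def resource_alarm(rss_kb: int, process_count: int,
                   budget_mb: int = 1500, max_processes: int = 4) -> bool:
    """True when memory or process count exceeds the expected budget.

    rss_kb: resident set size in kilobytes (e.g. ru_maxrss on Linux).
    Thresholds are illustrative defaults, not recommendations.
    """
    over_memory = rss_kb / 1024 > budget_mb
    over_processes = process_count > max_processes
    return over_memory or over_processes
```

&lt;p&gt;Run it after every upgrade: a sudden &lt;code&gt;True&lt;/code&gt; is exactly the kind of silent change the v2026.4.5 plugin-loading bug would have surfaced on day one.&lt;/p&gt;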




&lt;p&gt;v2026.4.5 will get patched. The individual bugs will get fixed. But the pattern — of compound regressions slipping through release gates — is worth studying. Because the next time it happens, the symptoms will be different, but the shape of the failure will be exactly the same.&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>reliability</category>
      <category>releaseengineering</category>
      <category>agents</category>
    </item>
    <item>
      <title>Building a Multi-Agent ATDD Pipeline with LangGraph and Hexagonal Architecture</title>
      <dc:creator>Carlos Eduardo Sotelo Pinto</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:30:48 +0000</pubDate>
      <link>https://dumb.dev.to/csotelo/building-a-multi-agent-atdd-pipeline-with-langgraph-and-hexagonal-architecture-5a9k</link>
      <guid>https://dumb.dev.to/csotelo/building-a-multi-agent-atdd-pipeline-with-langgraph-and-hexagonal-architecture-5a9k</guid>
      <description>&lt;h1&gt;
  
  
  Building a Multi-Agent ATDD Pipeline with LangGraph and Hexagonal Architecture
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Write the spec, mark the story as ready, walk away. The agents do the rest.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The problem with solo AI development
&lt;/h2&gt;

&lt;p&gt;Building a product solo is brutal.&lt;/p&gt;

&lt;p&gt;You are the PO, the architect, the developer, and the QA — all at the same time. When AI coding agents entered the picture, I didn't see a magic button. I saw a new kind of team member that needed the same thing any team member needs: &lt;strong&gt;clear responsibilities, short tasks, and a verifiable definition of done&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The first thing I tried was the obvious approach: long prompts, one agent, do everything. It failed the way it always fails. The model drifted, lost context, and confidently built the wrong thing.&lt;/p&gt;

&lt;p&gt;Then I applied something I already knew from software architecture:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Divide and conquer.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If a long prompt fails, what about a very short one with a very specific context? What if instead of one agent doing everything, you had &lt;strong&gt;multiple agents — each with a single role, a precise skill, and just enough context to do their job&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;That question led me to build an ATDD orchestrator: a pipeline of specialized AI agents, coordinated by a state machine, that takes a user story from spec to acceptance without human intervention in the technical stages.&lt;/p&gt;

&lt;p&gt;In this article I'll walk through the architecture, the design decisions, and — the main focus — how replacing a Celery queue with a &lt;strong&gt;LangGraph state machine&lt;/strong&gt; made the whole thing significantly cleaner and more explicit.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is ATDD and why does it fit AI agents perfectly?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Acceptance Test Driven Development&lt;/strong&gt; says: write acceptance criteria first, then build until they pass. The acceptance criteria — written as Gherkin scenarios — are the only real definition of done.&lt;/p&gt;
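
&lt;p&gt;For concreteness, a hypothetical scenario of the kind the pipeline treats as the verdict (the feature and step names are invented for illustration):&lt;/p&gt;

```gherkin
Feature: Archive a note
  Scenario: User archives a saved note
    Given a saved note titled "Groceries"
    When the user archives it
    Then the note no longer appears in the active list
```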

&lt;p&gt;This maps naturally to a multi-agent pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Architect&lt;/strong&gt; (human + Claude): define the spec, write the acceptance criteria, refine user stories&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test engineer&lt;/strong&gt; (autonomous): write unit and integration tests in RED — before any implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer&lt;/strong&gt; (autonomous): make the RED tests pass — and only that&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tester&lt;/strong&gt; (autonomous): quality gate — regressions, ruff, mypy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ATF worker&lt;/strong&gt; (autonomous): run the Gherkin acceptance scenarios with Playwright + Behave&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each role has a single responsibility. Each transition is triggered by a status change in a file. No agent can skip a stage or self-certify completion.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;story.md → status: ready
              │
              ▼
        test_engineer  → tests in RED
              │
              ▼
          developer    → tests GREEN
              │
              ▼
            tester     → quality gate
              │
              ▼
          atf_worker   → acceptance scenarios pass
              │
              ▼
        status: accepted  ✓
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The spec is the contract. The Gherkin scenario is the verdict.&lt;/p&gt;




&lt;h2&gt;
  
  
  The architecture: hexagonal all the way down
&lt;/h2&gt;

&lt;p&gt;Before talking about LangGraph, it's worth understanding the underlying architecture — because it's the reason the LangGraph integration was so clean.&lt;/p&gt;

&lt;p&gt;The orchestrator follows strict &lt;strong&gt;hexagonal architecture&lt;/strong&gt; (ports and adapters):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;domain          — pure Python, no external dependencies
application     — imports domain only (use cases)
infrastructure  — imports domain + application + external libs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The domain defines three ports (interfaces):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;StoryRepository&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ABC&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Story&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;save_status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;note&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;find_by_status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CodeRunner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ABC&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;TaskQueue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ABC&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;enqueue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each use case depends only on these ports. For example, &lt;code&gt;RunDeveloper&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;RunDeveloper&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;story_repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_PROMPT&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_repo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save_status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;READY_TO_TEST&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_queue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enqueue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;run_tester&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_repo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save_status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BLOCKED&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;note&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="k"&gt;raise&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice: the use case doesn't know &lt;em&gt;what&lt;/em&gt; &lt;code&gt;CodeRunner&lt;/code&gt; is (subprocess? API call? mock?), &lt;em&gt;what&lt;/em&gt; &lt;code&gt;StoryRepository&lt;/code&gt; stores to (files? database?), or &lt;em&gt;how&lt;/em&gt; &lt;code&gt;TaskQueue&lt;/code&gt; delivers the next task (Celery? Redis? LangGraph?).&lt;/p&gt;

&lt;p&gt;This is the key. &lt;strong&gt;The infrastructure is replaceable without touching the domain.&lt;/strong&gt;&lt;/p&gt;
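
&lt;p&gt;To make that concrete, here is a hypothetical in-memory adapter satisfying the persistence methods of the &lt;code&gt;StoryRepository&lt;/code&gt; port. The &lt;code&gt;Status&lt;/code&gt; values are an illustrative subset; real adapters persist to files or a database:&lt;/p&gt;

```python
from enum import Enum

class Status(Enum):
    # Illustrative subset of the pipeline's statuses
    INBOX = "inbox"
    READY_TO_DEV = "ready-to-dev"
    DONE = "done"

class InMemoryStoryRepository:
    """Hypothetical test double: same port methods, no files, no database."""

    def __init__(self) -> None:
        self._statuses: dict[str, Status] = {}

    def save_status(self, story_id: str, status: Status, note: str = "") -> None:
        self._statuses[story_id] = status

    def find_by_status(self, status: Status) -> list[str]:
        return [sid for sid, s in self._statuses.items() if s == status]
```

&lt;p&gt;Swapping this in under the same use cases, with zero domain changes, is the whole point of the port.&lt;/p&gt;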




&lt;h2&gt;
  
  
  The original infrastructure: Celery + Redis
&lt;/h2&gt;

&lt;p&gt;The first implementation used Celery with Redis as the broker. Each role had a dedicated queue. The dispatcher polled every 30 seconds for stories in status &lt;code&gt;INBOX&lt;/code&gt; and enqueued the first task.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# infrastructure/celery/tasks.py
&lt;/span&gt;&lt;span class="nd"&gt;@app.task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;run_developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ready-to-dev&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;task_developer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;project_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;notifier&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;_deps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;RunDeveloper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This worked. But it had friction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Redis is required even for local development&lt;/li&gt;
&lt;li&gt;The state machine logic is &lt;strong&gt;implicit&lt;/strong&gt; — split across four separate queue definitions and the enqueue calls inside use cases&lt;/li&gt;
&lt;li&gt;Retries when a quality gate fails required manual re-enqueue logic scattered across use cases&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker compose up&lt;/code&gt; just to run a pipeline on your laptop&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wanted the workflow to be explicit. That's where LangGraph came in.&lt;/p&gt;




&lt;h2&gt;
  
  
  The new infrastructure: LangGraph
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/langchain-ai/langgraph" rel="noopener noreferrer"&gt;LangGraph&lt;/a&gt; is a library for building stateful, graph-based workflows. You define nodes (work units) and edges (transitions), compile the graph, and invoke it with an initial state. It handles traversal, conditional routing, and can even do checkpointing.&lt;/p&gt;

&lt;p&gt;It's designed for AI agent workflows — and an ATDD pipeline is exactly that.&lt;/p&gt;

&lt;h3&gt;
  
  
  State
&lt;/h3&gt;

&lt;p&gt;The graph state is a &lt;code&gt;TypedDict&lt;/code&gt; — a plain Python dict with type annotations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TypedDict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;atdd_orchestrator.domain.story&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PipelineState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TypedDict&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;project_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;
    &lt;span class="n"&gt;blocked_reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;dev_retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;dev_retries&lt;/code&gt; is the key addition over the Celery implementation. It's the counter that prevents infinite loops when the quality gate or acceptance tests keep failing.&lt;/p&gt;
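
&lt;p&gt;The routing side only &lt;em&gt;reads&lt;/em&gt; the counter, so some node has to bump it. A dependency-free sketch, assuming the convention that each failed quality gate increments the count before routing back to the developer:&lt;/p&gt;

```python
def bump_dev_retries(state: dict) -> dict:
    # Assumed convention: increment on each failed gate so the router
    # can cut the loop once MAX_DEV_RETRIES is reached
    return {**state, "dev_retries": state["dev_retries"] + 1}
```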

&lt;h3&gt;
  
  
  Nodes
&lt;/h3&gt;

&lt;p&gt;Each node calls the existing use case with one important change: it receives a &lt;code&gt;_NoOpQueue&lt;/code&gt; instead of a real queue adapter. &lt;strong&gt;The graph handles routing — the use cases don't need to know what comes next.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;_NoOpQueue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TaskQueue&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;enqueue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;pass&lt;/span&gt;  &lt;span class="c1"&gt;# routing is the graph's job
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After executing the use case, the node reads the current status back from the repository (the use case already persisted it), and returns the updated state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;developer_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;PipelineState&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;PipelineState&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;notifier&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;_deps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nc"&gt;RunDeveloper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;story_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;pass&lt;/span&gt;  &lt;span class="c1"&gt;# use case already saved BLOCKED to the repo
&lt;/span&gt;
    &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;_read_status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;story_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;blocked_reason&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The node is thin. It wires dependencies, delegates to the use case, reads the result, and returns. No business logic lives here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Graph with conditional edges
&lt;/h3&gt;

&lt;p&gt;This is where LangGraph pays off. The state machine that was previously implicit becomes explicit Python code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langgraph.graph&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StateGraph&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;

&lt;span class="n"&gt;MAX_DEV_RETRIES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_route_after_tester&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;PipelineState&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;READY_TO_ATF&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;atf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;READY_TO_DEV&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev_retries&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;MAX_DEV_RETRIES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;   &lt;span class="c1"&gt;# quality gate failed → retry
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;                   &lt;span class="c1"&gt;# blocked or retries exhausted
&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_route_after_atf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;PipelineState&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DONE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;READY_TO_DEV&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev_retries&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;MAX_DEV_RETRIES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;   &lt;span class="c1"&gt;# acceptance failed → retry
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;build_pipeline&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StateGraph&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PipelineState&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test_engineer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_engineer_node&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;     &lt;span class="n"&gt;developer_node&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tester&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;        &lt;span class="n"&gt;tester_node&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;atf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           &lt;span class="n"&gt;atf_node&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_entry_point&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test_engineer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_conditional_edges&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test_engineer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_route_after_test_engineer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_conditional_edges&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;     &lt;span class="n"&gt;_route_after_developer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tester&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tester&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="n"&gt;END&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_conditional_edges&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tester&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;        &lt;span class="n"&gt;_route_after_tester&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;atf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;atf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_conditional_edges&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;atf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           &lt;span class="n"&gt;_route_after_atf&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;developer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;END&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reading this, you immediately understand the full lifecycle of a story:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test_engineer
  → READY_TO_DEV   → developer
  → BLOCKED        → END

developer
  → READY_TO_TEST  → tester
  → BLOCKED        → END

tester
  → READY_TO_ATF   → atf
  → READY_TO_DEV   → developer (retry, up to 3×)
  → else           → END

atf
  → DONE           → END
  → READY_TO_DEV   → developer (retry, up to 3×)
  → else           → END
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The retry logic, the blocking conditions, the terminal states — all visible, all in one place.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dispatcher
&lt;/h3&gt;

&lt;p&gt;With LangGraph, the dispatcher is trivial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;build_pipeline&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;repo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FrontmatterStoryRepository&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;story_id&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find_by_status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INBOX&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;initial_state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;story_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;project_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INBOX&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;blocked_reason&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev_retries&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="n"&gt;final_state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;initial_state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Pipeline done — story: %s | final status: %s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                     &lt;span class="n"&gt;story_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;final_state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

        &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;POLL_INTERVAL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No broker, no worker process, no &lt;code&gt;docker compose up&lt;/code&gt;. Just:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;".[langgraph]"&lt;/span&gt;
python dispatcher_langgraph.py /path/to/your/project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The design decisions worth discussing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why is the state stored in files, not a database?
&lt;/h3&gt;

&lt;p&gt;Each user story's state lives in a &lt;code&gt;story.md&lt;/code&gt; file with YAML frontmatter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;US04&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;User can reset their password&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;in-progress:ready-to-test&lt;/span&gt;
&lt;span class="na"&gt;sprint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sprint_02&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keeping state in the repository means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Any agent, any tool, any human can read and update it with a text editor&lt;/li&gt;
&lt;li&gt;The state survives restarts with no migration&lt;/li&gt;
&lt;li&gt;Git history shows every state transition&lt;/li&gt;
&lt;li&gt;The orchestrator doesn't own the state — &lt;strong&gt;the project does&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
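&lt;p&gt;Reading that frontmatter takes only a few lines. This is a minimal, illustrative sketch; the repo's actual &lt;code&gt;FrontmatterStoryRepository&lt;/code&gt; presumably uses a proper YAML parser, and the function name here is hypothetical:&lt;/p&gt;

```python
import re

def parse_frontmatter(text: str) -> dict:
    """Naive frontmatter reader (illustrative sketch only)."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    fields = {}
    if match:
        for line in match.group(1).splitlines():
            # Split on the first colon only, so values may contain colons
            # (e.g. "in-progress:ready-to-test").
            key, sep, value = line.partition(":")
            if sep:
                fields[key.strip()] = value.strip()
    return fields

story = """---
id: US04
title: User can reset their password
status: in-progress:ready-to-test
sprint: sprint_02
---
Story body.
"""
print(parse_frontmatter(story)["status"])  # in-progress:ready-to-test
```

Because the state is plain text, updating it is just rewriting the frontmatter block and committing, which is what makes every transition visible in Git history.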

&lt;h3&gt;
  
  
  Why is the Architect role never automated?
&lt;/h3&gt;

&lt;p&gt;The architect (human + Claude) defines scope, acceptance criteria, and what the story means. That judgment stays human-controlled.&lt;/p&gt;

&lt;p&gt;Automating that step is how you end up building the wrong thing perfectly. The spec is the contract — someone has to own it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why keep Celery?
&lt;/h3&gt;

&lt;p&gt;LangGraph is a better fit for local development and single-machine setups. Celery is better when workers need to run on separate machines or when you need horizontal scaling.&lt;/p&gt;

&lt;p&gt;Both share the &lt;strong&gt;same domain and the same use cases&lt;/strong&gt;. The difference is purely in which infrastructure adapter you wire in. That's hexagonal architecture earning its cost.&lt;/p&gt;
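&lt;p&gt;The swappable adapter boils down to a small port interface. The names below (&lt;code&gt;QueuePort&lt;/code&gt;, &lt;code&gt;CeleryQueueAdapter&lt;/code&gt;, &lt;code&gt;notify_tester&lt;/code&gt;) are an illustrative sketch, not the repo's exact API:&lt;/p&gt;

```python
from typing import Protocol

class QueuePort(Protocol):
    """Port the use cases depend on; infrastructure adapters implement it."""
    def enqueue(self, task_name: str, story_id: str) -> None: ...

class CeleryQueueAdapter:
    """Adapter that would hand tasks to a broker (broker stubbed out here)."""
    def __init__(self) -> None:
        self.sent: list = []
    def enqueue(self, task_name: str, story_id: str) -> None:
        self.sent.append((task_name, story_id))

def notify_tester(queue: QueuePort, story_id: str) -> None:
    # A use case only sees the port, never the concrete adapter.
    queue.enqueue("run_tester", story_id)

queue = CeleryQueueAdapter()
notify_tester(queue, "US04")
print(queue.sent)  # [('run_tester', 'US04')]
```

Wiring in a different adapter changes the infrastructure behavior without touching the use case, which is the whole point of the hexagonal split.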

&lt;h3&gt;
  
  
  The NoOpQueue pattern
&lt;/h3&gt;

&lt;p&gt;This is worth calling out explicitly. The use cases currently call &lt;code&gt;self._queue.enqueue("run_tester", story_id)&lt;/code&gt; as a side effect of their execution. In the LangGraph world, that call is meaningless — the graph decides what runs next.&lt;/p&gt;

&lt;p&gt;Rather than modifying the use cases (which would break the Celery flow), we pass a &lt;code&gt;NoOpQueue&lt;/code&gt; that swallows the enqueue calls silently. The use case's core behavior — running the agent, saving the status, raising on failure — is unchanged. Only the side effect is suppressed.&lt;/p&gt;

&lt;p&gt;This is dependency injection doing exactly what it's supposed to do.&lt;/p&gt;
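&lt;p&gt;A minimal version of that &lt;code&gt;NoOpQueue&lt;/code&gt; (sketched here; the class in the repo may differ in detail) is just:&lt;/p&gt;

```python
class NoOpQueue:
    """Satisfies the queue port but silently drops enqueue calls.

    In the LangGraph flow the graph edges already decide what runs
    next, so the use case's enqueue side effect is simply discarded.
    """
    def enqueue(self, task_name: str, story_id: str) -> None:
        pass  # intentionally a no-op
```

The use case still calls `queue.enqueue("run_tester", story_id)` exactly as before; with `NoOpQueue` injected, that call does nothing and the rest of the use case runs unchanged.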




&lt;h2&gt;
  
  
  What I tested on a real project
&lt;/h2&gt;

&lt;p&gt;The first project running through this pipeline is &lt;a href="https://github.com/csotelo/atdd-framework/tree/main/atf-ai" rel="noopener noreferrer"&gt;atf-ai&lt;/a&gt; — a CLI tool with Playwright-based acceptance tests. Five user stories, end-to-end:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Story&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;US01 — Scaffolding CLI&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;damaged&lt;/code&gt; (1 failing scenario, known issue)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;US02 — Docker Runner&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;accepted&lt;/code&gt; ✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;US03 — Screenplay Actors &amp;amp; Steps&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;accepted&lt;/code&gt; ✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;US04 — Feedback &amp;amp; State Tracking&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;accepted&lt;/code&gt; ✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;US05 — Reports Pipeline &amp;amp; PyPI&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;accepted&lt;/code&gt; ✓&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Four out of five stories went from &lt;code&gt;ready&lt;/code&gt; to &lt;code&gt;accepted&lt;/code&gt; autonomously. The remaining story (US01) is blocked on a known state mismatch that requires architectural review — exactly the kind of thing that should block, not silently pass.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;A few things on the backlog:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LangGraph checkpointing&lt;/strong&gt; — persist graph state to disk so a pipeline can resume after a crash without re-running completed stages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallel stories&lt;/strong&gt; — run multiple stories concurrently using &lt;code&gt;asyncio&lt;/code&gt; and LangGraph's async support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability&lt;/strong&gt; — emit structured events at each node transition for tracing and debugging&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-healing architect&lt;/strong&gt; — when a story is &lt;code&gt;blocked&lt;/code&gt;, trigger a Claude session to diagnose and propose a fix&lt;/li&gt;
&lt;/ul&gt;
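&lt;p&gt;The parallel-stories item could look roughly like the sketch below. The pipeline is stubbed here; the real version would call the compiled graph's async &lt;code&gt;ainvoke&lt;/code&gt;, and the names are illustrative:&lt;/p&gt;

```python
import asyncio

class StubPipeline:
    """Stand-in for the compiled graph; mirrors the shape of an
    async invoke without requiring LangGraph to be installed."""
    async def ainvoke(self, state: dict) -> dict:
        await asyncio.sleep(0)  # placeholder for real agent work
        return dict(state, status="done")

async def run_stories(story_ids):
    pipeline = StubPipeline()
    tasks = [pipeline.ainvoke({"story_id": s, "status": "inbox"})
             for s in story_ids]
    # gather preserves input order, so results line up with story_ids
    return await asyncio.gather(*tasks)

results = asyncio.run(run_stories(["US01", "US02", "US03"]))
print([r["story_id"] for r in results])  # ['US01', 'US02', 'US03']
```

Since each story carries its own state dict, runs stay independent; the interesting work is coordinating shared resources (the repo, the working tree) across concurrent stories.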




&lt;h2&gt;
  
  
  Repository
&lt;/h2&gt;

&lt;p&gt;The full source is open:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/csotelo/atdd-framework" rel="noopener noreferrer"&gt;github.com/csotelo/atdd-framework&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;atdd_orchestrator/
├── domain/          # Story, Status, ports — pure Python
├── application/     # use cases — depend only on domain
└── infrastructure/
    ├── celery/      # Celery + Redis adapter
    └── langgraph/   # LangGraph adapter (new)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;35 tests, 0 failures. No Redis, no OpenCode, no network required to run the test suite — all ports are stubbed.&lt;/p&gt;
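&lt;p&gt;The stubbed-port style looks roughly like this (names and signatures are hypothetical; the repo's actual ports may differ):&lt;/p&gt;

```python
class StubAgent:
    """Stub for the agent port: returns a canned status instead of
    shelling out to a coding agent or touching the network."""
    def run(self, story_id: str) -> str:
        return "ready-to-test"

class InMemoryRepo:
    """Stub for the story repository port: keeps state in a dict
    instead of story.md files on disk."""
    def __init__(self) -> None:
        self.saved = {}
    def save_status(self, story_id: str, status: str) -> None:
        self.saved[story_id] = status

def run_developer(repo, agent, story_id: str) -> str:
    # Shape of a use case under test: run the agent, persist the result.
    status = agent.run(story_id)
    repo.save_status(story_id, status)
    return status

repo = InMemoryRepo()
assert run_developer(repo, StubAgent(), "US04") == "ready-to-test"
assert repo.saved == {"US04": "ready-to-test"}
```

Because every external dependency enters through a port, swapping stubs in at the test boundary is mechanical, and the suite runs without Redis or network access.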




&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;The thing that surprised me most about this project wasn't the AI part. It was how much cleaner the orchestration became once I made the state machine &lt;strong&gt;explicit&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;With Celery, the workflow lived in queue names, routing keys, and scattered &lt;code&gt;enqueue()&lt;/code&gt; calls inside use cases. You had to read four files to understand what happened after a test failure.&lt;/p&gt;

&lt;p&gt;With LangGraph, the entire lifecycle is in one function. The retry logic, the terminal conditions, the branching paths — all visible at a glance. That's not a small thing when you're the only engineer on the project and you come back to the code three weeks later.&lt;/p&gt;

&lt;p&gt;If you're building multi-agent pipelines, make the state machine explicit. Your future self will thank you.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Questions, issues, or contributions: &lt;a href="https://github.com/csotelo/atdd-framework" rel="noopener noreferrer"&gt;github.com/csotelo/atdd-framework&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>testing</category>
      <category>architecture</category>
    </item>
    <item>
      <title>I've Touched Everything and Mastered Nothing</title>
      <dc:creator>Dmitry (Dee) Kargaev</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:28:35 +0000</pubDate>
      <link>https://dumb.dev.to/deeflect/ive-touched-everything-and-mastered-nothing-48ii</link>
      <guid>https://dumb.dev.to/deeflect/ive-touched-everything-and-mastered-nothing-48ii</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.deeflect.com%2Fmedium-img%2Fi-ve-touched-everything-and-mastered-nothing.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.deeflect.com%2Fmedium-img%2Fi-ve-touched-everything-and-mastered-nothing.jpg" alt="I've Touched Everything and Mastered Nothing" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Seventeen years. That's how long ADHD has been making me touch every skill, hobby, and career path that crossed my path. I'm 30 now. I've lived in eight countries, built products in five programming languages, shipped code on four blockchains, released music on Spotify under an artist name I genuinely cannot remember, and learned enough Vietnamese to haggle at a market in Nha Trang.&lt;/p&gt;

&lt;p&gt;I am not world-class at any of it.&lt;/p&gt;

&lt;p&gt;That's the honest version of this story. Not the LinkedIn version where "my diverse background gives me a unique perspective." The real version, where I've spent a decade and a half chasing dopamine across every domain imaginable and I'm only now figuring out what that actually means for a career, an identity, and a life.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the ADHD cycle actually looks like across every skill, hobby, and career path
&lt;/h2&gt;

&lt;p&gt;The cycle runs on roughly a two-week clock. Something new appears - on Twitter, in a YouTube rabbit hole, in a Discord I shouldn't be in at 2am. The dopamine hits immediately, before I've done anything. I'm already planning the next phase while I'm still in the first hour.&lt;/p&gt;

&lt;p&gt;Then comes the manic stretch. Twelve-hour sessions at the computer, deep in documentation and GitHub repos and forum threads from 2015 that nobody else has read. I still eat well, still sleep well, still train. Health is the one thing I never let slide, no matter how deep the rabbit hole goes. But everything else disappears.&lt;/p&gt;

&lt;p&gt;Two weeks later, sometimes less, I wake up and it's gone. Not burnout. Burnout has texture - exhaustion, resentment, a desire to rest and come back. This is nothing. Flat affect toward something that consumed me completely three days ago. The interest didn't fade. It evaporated.&lt;/p&gt;

&lt;p&gt;This has been my entire adult life. Before that, too - graffiti, parkour, long-distance running, acrobatics, all before I owned a computer. I fixed a relative's MS-DOS machine in English when I was around eight and barely spoke English. I rigged our home phone line to connect to a friend's LAN across the city. I built a radio to intercept our wireless home phone so I could eavesdrop on my mom's calls.&lt;/p&gt;

&lt;p&gt;I was never bored. I was always building something I'd abandon.&lt;/p&gt;

&lt;p&gt;What I didn't understand at eight, and only started to understand around 27, is that this isn't a moral failing. It's the shape of how my brain processes novelty. The dopamine system in ADHD brains responds to new stimuli harder and drops off faster than neurotypical brains. I wasn't undisciplined. I was running a biological process I had no name for.&lt;/p&gt;

&lt;p&gt;Knowing that doesn't stop the cycle. But it changes how you relate to the wreckage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The graveyard is real
&lt;/h2&gt;

&lt;p&gt;Let me just list some of it.&lt;/p&gt;

&lt;p&gt;A DJI drone I used four times. A Sony DSLR that mostly lives in a bag. A 3D printer I operated twice, let collect dust for six months, and sold at a loss. A soldering station I learned to use at university, felt was essential to own again years later, and which currently sits in my wardrobe untouched. A ukulele from Turkey. Guitar - I can play a few songs. Piano - I can play Kanye's Runaway and that's it. Harmonica. Stylophone. Otamatone. An AKAI MPK Mini. Ableton. AI music with Suno. I released lofi tracks on Spotify. I cannot remember the artist name.&lt;/p&gt;

&lt;p&gt;Stocks in Russia via an app. Mostly broke even, maybe lost a bit. Cybersecurity hardware - a Flipper Zero, a Kali Linux laptop, ESP32 boards with Marauder firmware, an ESB dongle for intercepting TPMS sensors, and an M5Stack kit with a pile of controllers and sensors. Cardputer. LLM630. Tiny screens, weird modules, more little boards than I had any reason to own. I spent weeks flashing firmware and scanning radio frequencies. Did I build anything useful? No. Make money? No. But I understood how your wireless water meter broadcasts unencrypted data to anyone with the right hardware, and that felt worth it.&lt;/p&gt;

&lt;p&gt;None of this is sustainable. The "ADHD is a superpower" content you see everywhere stops before this part. The graveyard. The money spent. The projects half-finished. The domains where I got to "good enough to be dangerous" and never further, because the dopamine was already somewhere else.&lt;/p&gt;

&lt;p&gt;This pattern has been running long enough that it stopped feeling surprising a while ago.&lt;/p&gt;

&lt;h3&gt;
  
  
  The financial reality nobody mentions
&lt;/h3&gt;

&lt;p&gt;I want to be concrete about this because most "ADHD and creativity" content is vague about costs.&lt;/p&gt;

&lt;p&gt;The 3D printer was around $400. Sold for $180. The drone was $700. Used it maybe ten hours total. The cybersecurity hardware - Flipper, M5Stack, the various ESP32 boards, cables, accessories, random modules - probably $600 spread across three months. The Ableton license. The AKAI. The ukulele I bought in a market in Istanbul because I was in a hyperfocus phase about lo-fi music production.&lt;/p&gt;

&lt;p&gt;I don't have an exact number. But it's real money. And this doesn't count the opportunity cost of the hours - hundreds of hours flashing firmware on microcontrollers that never produced anything, learning guitar in 30-minute sessions spread across five years, studying Vietnamese for six weeks before the next interest hit.&lt;/p&gt;

&lt;p&gt;I'm not saying this to make it sound worse than it is. I'm saying it because the "embrace your ADHD divergent thinking" content leaves this part out and I think it's dishonest. The breadth has real costs. The search has a price.&lt;/p&gt;

&lt;h2&gt;
  
  
  When it does stick
&lt;/h2&gt;

&lt;p&gt;Some things stuck. Design, 17 years. The gym, 15+ years. AI, three years and accelerating.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.deeflect.com%2Fmedium-img%2Fi-ve-touched-everything-and-mastered-nothing-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.deeflect.com%2Fmedium-img%2Fi-ve-touched-everything-and-mastered-nothing-1.jpg" alt="When it does stick" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I started freelancing at 14, selling forum banners and GIFs on ICQ. By university I was building websites for roughly $80-120 each in local-currency terms at the time. When I graduated, companies were offering something like $150 a month. I was already making more than that per site. I said no thanks and kept going. Design is probably the closest thing I have to real expertise after 17 years - I spent five of those years as the solo senior product designer at VALK, a fintech platform that moved $4B+ in deals across 70+ financial institutions in 15 countries. Awards. Real scale. Products that live in actual banks.&lt;/p&gt;

&lt;p&gt;The gym stuck because I started before I knew what consistency meant and just never stopped. Different approaches over the years - bodybuilding, strength, cuts, boxing for two years in Krasnodar - but the baseline never dropped. I track bloodwork every few months and stay on top of health like it's another system to tune. When something becomes infrastructure instead of a project, the ADHD can't kill it.&lt;/p&gt;

&lt;p&gt;AI stuck because it's the first domain that feeds the obsession cycle faster than the cycle drains it. Every week there's something genuinely new. You can't get bored because the field won't hold still. I built one of the first agentic loops I'd seen anyone build in 2022 - a Telegram bot for a crypto community that could reason and take actions, before "agentic" was even a term. I didn't know what I was building. I just thought it was interesting.&lt;/p&gt;

&lt;p&gt;Now I let AI maintain my second brain - knowledge, reminders, loose thoughts, follow-ups, personal assistant type shit. Less "look at my agent stack," more "I built something that remembers what I forget and keeps my life from scattering." I also host local models on a Mac Mini because of course that became another obsession too - if something can run on my own box, I want to try it. It's not a demo. It runs every day. You can read more about &lt;a href="https://blog.deeflect.com/06-coding-stack/" rel="noopener noreferrer"&gt;how I approach coding and multi-model workflows&lt;/a&gt; if you want the technical side.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why some things survive the cycle
&lt;/h3&gt;

&lt;p&gt;I've thought about this a lot. What makes design and the gym persist when everything else evaporates?&lt;/p&gt;

&lt;p&gt;Two patterns show up every time.&lt;/p&gt;

&lt;p&gt;First: external feedback loops that don't require internal motivation. The gym gives me bloodwork numbers, strength PRs, photos over time. Design gives me client feedback, shipped products, metrics. When my internal interest flags, there's still external data pulling me back. The drone had none of that. The guitar had none of that. They only worked when I was actively excited, and when the excitement went, nothing remained.&lt;/p&gt;

&lt;p&gt;Second: the domain kept changing fast enough to feed new obsession cycles. Design went from print to web to mobile to design systems to AI-generated UI. Every three years there was a new layer to get obsessed about. The gym went from machines to compound lifts to programming to bloodwork optimization. The domain regenerated novelty before I burned through it.&lt;/p&gt;

&lt;p&gt;AI has both properties at a ridiculous level. New model every month. New architectural pattern every six weeks. New tool category every quarter. It's basically a purpose-built trap for an ADHD brain and I walked straight into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How ADHD shapes every skill, hobby, and career path you choose
&lt;/h2&gt;

&lt;p&gt;Here's what I've realized after 30 years of this: ADHD doesn't just affect how you learn things. It determines what career paths even become available to you.&lt;/p&gt;

&lt;p&gt;I have an Information Security bachelor's that I haven't used professionally once. But the two years I spent in that program gave me networking fundamentals, an understanding of cryptography primitives, and enough systems-level thinking that when blockchain showed up in my life it wasn't foreign territory. That "useless" degree became the reason I could evaluate Solidity code for security issues at VALK without being a dedicated security engineer.&lt;/p&gt;

&lt;p&gt;The career I've actually had - designer, then product lead, then AI engineer - looks like three different careers. But they're the same brain solving the same problem at different layers of abstraction. Design is about modeling user mental states. Product is about modeling system interactions. AI engineering is about modeling agent behavior. I didn't pivot three times. I drilled down.&lt;/p&gt;

&lt;p&gt;That's the ADHD career path nobody maps: not a straight line, not even a zigzag, but a spiral. You come back around to the same core problems from different angles. The angle changes. The problem doesn't.&lt;/p&gt;

&lt;h3&gt;
  
  
  The generalist discount problem
&lt;/h3&gt;

&lt;p&gt;No job posting says "Wanted: someone who can do a little bit of everything."&lt;/p&gt;

&lt;p&gt;Generalists get discounted by hiring processes designed for specialists. I have production code in TypeScript, Python, Rust, Solidity, and SQL. Deployed products on four blockchains. Sysadmin experience across every OS since childhood. Enough crypto experience to have launched 20+ tokens. On a resume this looks scattered. In a specific situation - say, a 48-hour sprint where a client needs a smart contract, a minting frontend, and custom illustrations - it's exactly what's needed and nobody else in the room has all three.&lt;/p&gt;

&lt;p&gt;When VALK wanted a Christmas NFT campaign I told them I'd handle it. Smart contract, minting site, illustrations, the whole frontend - solo, in one sprint. That's not something a specialist does. That's an ADHD brain that collected 12 different surface-level skills over a decade, all converging on one afternoon's work.&lt;/p&gt;

&lt;p&gt;The problem is you can't interview for that. "I know a little about a lot" doesn't clear an ATS. So the career path for someone like me had to go around traditional hiring - freelance, founding roles, solo building. Places where the breadth shows up in delivered work rather than credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  The breadth problem and the breadth premium
&lt;/h2&gt;

&lt;p&gt;This year I wrote a 33,000-word book from scratch. I'd never written a book before. &lt;a href="https://dontreplace.me" rel="noopener noreferrer"&gt;Don't Replace Me: A Survival Guide to the AI Apocalypse&lt;/a&gt; - 24 chapters, formatted, cover designed, published on KDP, audiobook via ElevenLabs, SEO landing page with schema markup, Amazon ads. Not because I wanted to be an author. Because the process of building the whole machine was interesting to me.&lt;/p&gt;

&lt;p&gt;The ADHD didn't stop me from finishing a book. It kept me engaged long enough to finish one because there were 15 different new processes to learn inside the single project. The writing was interesting for the first 10,000 words. Then the Kindle formatting was interesting. Then the audiobook pipeline. Then the schema markup. By the time I finished, I had a complete book, a production process, and had learned four skills I didn't have before.&lt;/p&gt;

&lt;p&gt;This is the pattern. I don't go deep on one thing. I go wide enough that when a project needs five different skills, I'm the one person in the room who can cover them all. The breadth is a premium in those moments - genuine leverage that a specialist can't replicate without a team.&lt;/p&gt;

&lt;p&gt;The honest version though: those moments don't come every day. Most days, the breadth just means I know enough to be frustrated by problems in domains where I don't know enough to solve them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the AI age changes this calculation for ADHD builders
&lt;/h2&gt;

&lt;p&gt;AI is doing something weird to the generalist problem. On one side, everyone's a generalist now. Vibe coding means your non-technical friend can ship a landing page. The moat of "I know five programming languages" is basically gone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.deeflect.com%2Fmedium-img%2Fi-ve-touched-everything-and-mastered-nothing-2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.deeflect.com%2Fmedium-img%2Fi-ve-touched-everything-and-mastered-nothing-2.jpg" alt="Why the AI age changes this calculation for ADHD builders" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the other side - and this is the part I actually believe - breadth becomes more valuable when AI amplifies execution. You don't need to be an expert Rust developer. You need enough Rust to direct an AI agent doing Rust work. You need to know when the output is wrong. You need the taste to recognize what "good" looks like without being able to produce it from scratch at 100 words per minute.&lt;/p&gt;

&lt;p&gt;I spent 17 years accumulating surface-level knowledge across maybe 30 domains. That knowledge doesn't help me compete with a specialist in any one of them. But it means I can look at an AI's output in almost any domain and tell you whether it's right. Design intuition applied to code review. Crypto chaos tolerance applied to agentic system failures. Sysadmin muscle memory applied to Docker containers. The ADHD brain that couldn't go deep now has a use case that rewards breadth.&lt;/p&gt;

&lt;p&gt;This isn't hypothetical. I use &lt;a href="https://dee.ink" rel="noopener noreferrer"&gt;dee.ink&lt;/a&gt; - a collection of 31 Rust CLI tools I've built for AI agent workflows - as a practical example. Building those tools required Rust, CLI design, documentation, packaging, and enough understanding of AI agent needs to spec useful primitives. Same with the home lab stuff - a ZimaBoard running home server experiments, local infra, and smart home services because once I touched self-hosting I obviously had to touch that too. A specialist Rust engineer would write better Rust. A specialist AI engineer would understand agent needs more deeply. A specialist infra person would build a cleaner home lab. But I could build the whole thing myself, end to end, without waiting on anyone.&lt;/p&gt;

&lt;p&gt;That's the AI-era argument for the ADHD generalist: you're not competing on depth anymore. You're competing on range of judgment. And AI is making range of judgment the bottleneck, not depth of execution.&lt;/p&gt;

&lt;p&gt;If you've felt this same tension - the generalist guilt, the half-finished projects, the identity question of what you even "are" professionally - I wrote more about &lt;a href="https://blog.deeflect.com/02-adhd-and-ai/" rel="noopener noreferrer"&gt;navigating ADHD and AI as actual compensation tools&lt;/a&gt;, not the productivity-porn version.&lt;/p&gt;

&lt;h2&gt;
  
  
  The identity question every ADHD skill, hobby, and career path creates
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody talks about with the ADHD generalist lifestyle: you don't know what you are.&lt;/p&gt;

&lt;p&gt;Ask a specialist what they do and they tell you in one sentence. "I'm a backend engineer." "I'm a product designer." The identity is clean. The work is legible to other people.&lt;/p&gt;

&lt;p&gt;Ask me and I have to make a choice about which version of myself I'm presenting. The designer with 17 years? The AI engineer building multi-agent systems? The guy who spent three months obsessively learning about RF signal interception for no professional reason? All of these are equally true. None of them is the whole answer.&lt;/p&gt;

&lt;p&gt;For a long time this felt like a problem. I'd look at people with clear professional identities - engineers who'd been doing one thing for ten years, designers with a coherent portfolio narrative - and feel like I was faking it. Like my breadth was evidence of some underlying lack of commitment.&lt;/p&gt;

&lt;p&gt;What I've landed on at 30 is different. The identity question isn't a problem to solve. It's a feature of a specific type of brain operating at full capacity. The discomfort of not fitting a category is the cost of not being constrained by one.&lt;/p&gt;

&lt;p&gt;I'm not a designer who learned to code. I'm not an engineer with design skills. I'm something that doesn't have a clean job title yet, that probably couldn't exist before AI made it possible to execute across domains without a full team. That's not a failure of self-definition. It's just early.&lt;/p&gt;

&lt;h3&gt;
  
  
  What actually sticks at 30
&lt;/h3&gt;

&lt;p&gt;The gym. Design. AI.&lt;/p&gt;

&lt;p&gt;And maybe that's the real pattern. The things that stuck aren't the things I chose. They're the things that were still there after the dopamine moved on. The gym was still there because I'd been going long enough it became automatic. Design was still there because clients kept paying me. AI is still there because it keeps generating new problems faster than I run out of interest.&lt;/p&gt;

&lt;p&gt;What I've stopped doing is chasing the feeling of the early phase - that first-week intensity when everything seems possible and you're learning at maximum speed. That feeling always ends. The question isn't how to keep the feeling. It's what you're building during it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I actually think about it at 30
&lt;/h2&gt;

&lt;p&gt;The "ADHD superpower" narrative is mostly incomplete. It's real but it stops too early. Yes, the hyperfocus is incredible. Yes, the breadth compounds in ways specialists can't replicate. Yes, the manic phases produce more in two weeks than most people produce in two months.&lt;/p&gt;

&lt;p&gt;But the depressive phases are the tax. The days where you look at a project you were obsessed with last week and feel nothing at all. Not tired. Not frustrated. Nothing. The graveyard of equipment bought and abandoned is real money. The projects with twelve active tabs and zero generating revenue are real.&lt;/p&gt;

&lt;p&gt;I've been through every single phase of this across eight countries and more career pivots than I can accurately count. I still think I'll find the thing. Maybe design and AI are already it and I just haven't accepted that yet. Maybe something I haven't encountered will show up and replace everything I've built my identity around. Both feel equally plausible from the inside.&lt;/p&gt;

&lt;p&gt;What I know is this: at 30, having touched more domains than most people touch in a lifetime, being functionally average at most of them and arguably expert at two - I'm not embarrassed by any of it. The search wasn't failure. The search was the work. Everything compounds in ways you can't predict when you're in the middle of a two-week obsession with intercepting radio signals from water meters.&lt;/p&gt;

&lt;p&gt;If you want to see what the current obsession looks like in practice - the AI engineering side, the multi-agent systems, the actual tools - the &lt;a href="https://blog.deeflect.com/about/" rel="noopener noreferrer"&gt;about page&lt;/a&gt; has the full context. And if you're building something similarly scattered and solo, the &lt;a href="https://blog.deeflect.com/tags/" rel="noopener noreferrer"&gt;tags page&lt;/a&gt; will probably surface something relevant.&lt;/p&gt;

&lt;p&gt;If you're in the middle of your own version of this - the cycle, the graveyard, the guilt about the 3D printer sitting in your closet - you're probably not broken. You're probably just still searching.&lt;/p&gt;

&lt;p&gt;That's an okay place to be.&lt;/p&gt;

&lt;h2&gt;
  
  
  So I asked AI to list my skills
&lt;/h2&gt;

&lt;p&gt;Based on everything I've touched, built, learned, half-learned, abandoned, revived, bought, sold, shipped, and somehow got paid for, I asked AI to write out my skill set in one paragraph.&lt;/p&gt;

&lt;p&gt;It was a mistake.&lt;/p&gt;

&lt;p&gt;Graphic design, product design, UI design, UX design, interaction design, interface design, web design, mobile design, dashboard design, platform design, application design, systems design, design systems, visual systems, component systems, branding, digital branding, visual identity, typography, layout, hierarchy, composition, spacing, iconography, illustration, digital illustration, vector illustration, marketing design, motion graphics, banner design, avatar design, forum graphics, landing page design, cover design, presentation design, pitch deck design, onboarding design, checkout flow design, conversion design, user flows, journey mapping, wireframing, prototyping, information architecture, UX strategy, usability thinking, interface critique, product thinking, product strategy, feature prioritization, product packaging, product positioning, product communication, client communication, stakeholder communication, design leadership, solo product ownership, freelance design, agency design, startup design, enterprise product design, fintech product design, dashboard UX, enterprise workflows, white-label platform design, workflow design, complexity reduction, visual clarity, frontend design, frontend implementation, HTML, CSS, responsive design, JavaScript, TypeScript, React, component libraries, design-to-code translation, web app implementation, rapid prototyping, interface implementation, code editing, debugging, code reading, AI-assisted coding, product-minded engineering, scripting, Python, Rust, SQL, Solidity, command-line tooling, CLI product thinking, developer tooling, automation scripting, API integration, webhook logic, backend glue code, database basics, schema instincts, debugging AI output, debugging code, debugging workflow failures, prompt engineering, prompt iteration, prompt structure, context design, tool calling, agent orchestration, multi-agent workflows, AI workflow design, AI product design, AI UX, AI-assisted writing, AI-assisted 
coding, AI-assisted research, AI tool evaluation, model comparison, reasoning model usage, local model usage, cloud model usage, local LLM setup, memory systems, second-brain systems, reminder systems, retrieval systems, RAG, embeddings, semantic search, vector search, AI assistant design, assistant workflow design, personal AI systems, research pipelines, writing pipelines, content pipelines, synthesis pipelines, information capture, note systems, knowledge systems, crypto product design, Web3 product design, smart contracts, Solidity workflows, token launch mechanics, NFT launch mechanics, minting flows, DeFi UX, blockchain UX, wallet UX, crypto campaign execution, crypto community operations, crypto marketing, launch coordination, presale mechanics, token website building, contract deployment understanding, onchain product instincts, community bot building, Telegram bot building, automation design, workflow automation, n8n-style orchestration thinking, research automation, content automation, digital marketing, internet marketing, social media growth, Instagram page growth, landing page copy, copywriting, headline writing, article writing, blog writing, long-form writing, editing, rewriting, draft development, AI-draft cleanup, humanization, publishing workflows, book writing, book formatting, self-publishing, KDP publishing, metadata writing, SEO, GEO, search intent mapping, keyword targeting, internal linking instincts, schema markup thinking, authority building, distribution strategy, launch strategy, publishing systems, website management, domain setup, CMS-light publishing, self-hosting, home server experimentation, Docker, DNS, reverse proxy basics, infrastructure curiosity, service setup, local infra experimentation, smart home experimentation, device setup, system setup, macOS setup, Windows setup, Linux setup, terminal usage, shell comfort, firmware flashing, hardware experimentation, embedded-device tinkering, cybersecurity basics, information security 
fundamentals, cryptography fundamentals, systems thinking, network instincts, radio experimentation, wireless experimentation, sensor experimentation, hardware debugging, hardware setup, game server hosting, home PC hosting, monetization instincts, digital hustle instincts, app install arbitrage, e-commerce experimentation, dropshipping experimentation, pricing instincts, sales instincts, client acquisition instincts, offer shaping, agency operations, productized service instincts, open source contribution, release management, tool publishing, music experimentation, music production, AI music workflows, Ableton experimentation, audio arrangement instincts, basic piano, basic guitar, basic ukulele, basic harmonica, stylophone experimentation, otamatone experimentation, creative direction, aesthetic judgment, visual taste, naming instincts, concept development, trend detection, trend synthesis, pattern recognition, fast learning, context switching, parallel execution, obsessive research, rabbit-hole depth, ambiguity tolerance, pressure-driven shipping, solo building, independent execution, figuring things out with incomplete information, reverse engineering workflows, surviving bad documentation, adapting to broken tools, evaluating software fast, comparing tools quickly, stack assembly, stack migration, no-code experimentation, low-code experimentation, API-first thinking, browser tooling, workflow compression, research summarization, synthesis, memory capture, memory retrieval instincts, file organization attempts, chaos-tolerant organization, async collaboration, self-direction, self-teaching, self-reinvention, internet-native communication, pseudonymous building, online identity experimentation, public writing, authority building through shipping, cross-domain thinking, interdisciplinary synthesis, technical taste, product taste, marketing taste, creative taste, execution bias, strategic intuition, quality smell detection, visual QA, copy QA, product QA, issue 
isolation, error triage, launch QA, software evaluation, tooling adoption, rollout instincts, packaging instincts, distribution instincts, positioning instincts, and generally becoming competent enough to start, ship, fix, relaunch, and repurpose work across an unreasonable number of domains without ever sitting still long enough to make any of it feel normal.&lt;/p&gt;

&lt;p&gt;Reading it felt less like a skills list and more like a forensic report.&lt;/p&gt;

</description>
      <category>adhd</category>
      <category>buildinginpublic</category>
      <category>aiengineering</category>
      <category>personal</category>
    </item>
    <item>
      <title>I ran my AI codebase triage tool on itself — here's what it found</title>
      <dc:creator>EJ Wisner</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:28:29 +0000</pubDate>
      <link>https://dumb.dev.to/ejwisner/i-ran-my-ai-codebase-triage-tool-on-itself-heres-what-it-found-2ma2</link>
      <guid>https://dumb.dev.to/ejwisner/i-ran-my-ai-codebase-triage-tool-on-itself-heres-what-it-found-2ma2</guid>
      <description>&lt;p&gt;I built Ghost Architect™ Open — a free, local AI tool that triages codebases and scores findings by severity. To test it properly, I ran it on its own source code.&lt;/p&gt;

&lt;p&gt;It found a Critical bug.&lt;/p&gt;

&lt;h2&gt;
  
  
  The finding
&lt;/h2&gt;

&lt;p&gt;The redaction engine — the module that strips API keys and secrets before sending code to Claude — had a pointer offset bug. When replacing a secret pattern, it wasn't advancing the scan position after each replacement. On files with 50+ environment variables, it would stop redacting halfway through.&lt;/p&gt;

&lt;p&gt;Users were seeing "Redacted 12 patterns" and assuming their code was safe. Pattern 13 was their database password.&lt;/p&gt;

&lt;p&gt;The bug was fixed the same day. That's the point — you can't fix what you can't see.&lt;/p&gt;
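&lt;p&gt;The post doesn’t include the offending code, so here is a hypothetical TypeScript sketch of this bug class — not Ghost Architect’s actual implementation. The pattern, variable names, and replacement token are all invented; the point is how a stale scan offset silently skips later secrets once replacements start shifting the text:&lt;/p&gt;

```typescript
// Illustrative only: a toy redactor demonstrating the stale-offset bug class.
const SECRET = /\b\w+=sk-\w+/; // toy "KEY=sk-..." secret shape

function redact(text: string, advanceByReplacement: boolean): string {
  let out = text;
  let pos = 0;
  while (pos < out.length) {
    const m = SECRET.exec(out.slice(pos));
    if (!m) break;
    const at = pos + m.index;
    const replacement = "[REDACTED]";
    out = out.slice(0, at) + replacement + out.slice(at + m[0].length);
    // Correct: resume right after the text we just wrote.
    // Buggy:   resume where the OLD match used to end, which after a
    //          shorter replacement lands inside the following variable.
    pos = at + (advanceByReplacement ? replacement.length : m[0].length);
  }
  return out;
}

const env = Array.from({ length: 6 }, (_, i) => `VAR${i}=sk-secret${i}`).join(" ");
redact(env, true);  // every secret scrubbed
redact(env, false); // the cursor drifts: roughly every other secret leaks
```

&lt;p&gt;On a short file the drift is invisible, which is exactly why a reassuring report count ("Redacted 12 patterns") can coexist with later patterns leaking.&lt;/p&gt;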

&lt;h2&gt;
  
  
  What Ghost Architect™ Open does
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Analyzes any local directory, ZIP file, or GitHub repo&lt;/li&gt;
&lt;li&gt;Triages the code and scores findings: Critical, High, Medium, Low&lt;/li&gt;
&lt;li&gt;Runs entirely on your machine — your code never leaves&lt;/li&gt;
&lt;li&gt;Uses the Anthropic API with your own key (new accounts get a $5 credit)&lt;/li&gt;
&lt;li&gt;Supports PHP, Python, Node.js, Java, Go, React, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Free vs Pro
&lt;/h2&gt;

&lt;p&gt;Ghost Open is free. It returns Critical and High findings in TXT and Markdown format.&lt;/p&gt;

&lt;p&gt;Ghost Pro adds Medium and Low findings, multipass analysis, project intelligence, and full PDF reports.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/EJWisner/ghost-architect-open" rel="noopener noreferrer"&gt;https://github.com/EJWisner/ghost-architect-open&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Full platform: &lt;a href="https://ghostarchitect.dev" rel="noopener noreferrer"&gt;https://ghostarchitect.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>php</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Enabling Pin-Based Commenting on Live HTML Iframes: Open-Source, Framework-Agnostic Solution with Adapter Flexibility</title>
      <dc:creator>Pavel Kostromin</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:25:11 +0000</pubDate>
      <link>https://dumb.dev.to/pavkode/enabling-pin-based-commenting-on-live-html-iframes-open-source-framework-agnostic-solution-with-18m4</link>
      <guid>https://dumb.dev.to/pavkode/enabling-pin-based-commenting-on-live-html-iframes-open-source-framework-agnostic-solution-with-18m4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction &amp;amp; Problem Statement
&lt;/h2&gt;

&lt;p&gt;Imagine trying to annotate a live, interactive webpage with the precision of Figma’s pin-based comments. Now, imagine doing this within an &lt;strong&gt;iframe&lt;/strong&gt;—a nested HTML document isolated by its own DOM and coordinate system. This is the core technical challenge Washi solves. While tools like Figma excel at static design files, replicating their annotation precision on &lt;em&gt;live, dynamic HTML content&lt;/em&gt; within iframes introduces a cascade of complexities: DOM mutations, cross-origin restrictions, and coordinate system discrepancies.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem: Why Pin-Based Commenting in Iframes is Hard
&lt;/h3&gt;

&lt;p&gt;At its core, pin-based commenting requires &lt;strong&gt;pixel-perfect synchronization&lt;/strong&gt; between the annotation layer and the underlying content. In iframes, this breaks down due to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DOM Isolation:&lt;/strong&gt; Iframes operate in a separate document context, making direct DOM manipulation from the parent page impossible without explicit permissions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coordinate Mismatch:&lt;/strong&gt; The iframe’s internal scroll position and element coordinates are decoupled from the parent window, causing annotations to drift or misalign.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Content:&lt;/strong&gt; Live HTML updates (e.g., via JavaScript) require real-time recalibration of annotation positions, a task complicated by iframe boundaries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Existing solutions often fail here. Proprietary tools lock users into specific frameworks, while open-source alternatives lack the flexibility to handle iframe-specific edge cases (e.g., nested iframes, cross-domain content). Washi’s adapter-based architecture addresses this by &lt;em&gt;abstracting the iframe’s complexities&lt;/em&gt;, enabling seamless annotation across diverse environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Matters: The Collaborative Workflow Bottleneck
&lt;/h3&gt;

&lt;p&gt;Without tools like Washi, teams resort to screenshots, lengthy video calls, or clunky third-party integrations. Consider a cross-functional team reviewing a live dashboard prototype: designers, developers, and stakeholders must align on specific UI elements, interactions, and bugs. Screenshots quickly become outdated; verbal descriptions lack precision. The result? &lt;strong&gt;Feedback loops slow to a crawl&lt;/strong&gt;, with miscommunications compounding delays.&lt;/p&gt;

&lt;p&gt;Washi’s pin-based approach &lt;em&gt;anchors feedback directly to the live element&lt;/em&gt;, eliminating ambiguity. For instance, a designer can drop a pin on a misaligned button, attach a comment, and link it to a Figma spec—all within the same interface. This &lt;strong&gt;spatial precision&lt;/strong&gt; accelerates resolution times, particularly in remote settings where asynchronous collaboration is the norm.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technical Deep Dive: How Washi Works
&lt;/h3&gt;

&lt;p&gt;Washi’s solution hinges on three mechanisms:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Adapter Layer:&lt;/strong&gt; A framework-agnostic bridge that communicates with the iframe’s content via &lt;em&gt;postMessage&lt;/em&gt;, bypassing same-origin policy restrictions. This enables DOM queries and mutation observation without direct access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coordinate Mapping:&lt;/strong&gt; A real-time transformation matrix calculates the offset between the parent window and iframe’s internal coordinates, ensuring pins remain anchored despite scrolling or resizing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reactive Updates:&lt;/strong&gt; Leveraging MutationObserver, Washi detects changes in the iframe’s DOM and recalibrates annotations dynamically, preventing drift in live environments.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This architecture contrasts with naive solutions (e.g., overlaying annotations via CSS positioning) that fail under iframe constraints. By treating the iframe as a &lt;em&gt;black box&lt;/em&gt; and relying on adapters, Washi maintains compatibility across React, Vue, Svelte, and vanilla HTML setups.&lt;/p&gt;

&lt;h3&gt;
  
  
  Edge Cases and Failure Modes
&lt;/h3&gt;

&lt;p&gt;Washi’s effectiveness degrades in two scenarios:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scenario&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Mechanism of Failure&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Mitigation&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cross-Domain Iframes&lt;/td&gt;
&lt;td&gt;postMessage requires explicit permission from the iframe’s origin, which may not be granted.&lt;/td&gt;
&lt;td&gt;Use a proxy server or configure CORS headers on the iframe source.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Heavy DOM Mutations&lt;/td&gt;
&lt;td&gt;Rapid, large-scale DOM changes (e.g., full page reloads) can overwhelm the MutationObserver, causing temporary annotation lag.&lt;/td&gt;
&lt;td&gt;Throttle mutation callbacks or implement debouncing in high-churn environments.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Professional Judgment: When to Use Washi
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Rule of Thumb:&lt;/strong&gt; If your workflow involves reviewing live HTML content within iframes and requires precise, Figma-style annotations, &lt;em&gt;use Washi&lt;/em&gt;. Its adapter-based design ensures compatibility with your existing stack, while its open-source nature avoids vendor lock-in. Avoid it if your use case is static (e.g., PDF annotations) or if you operate in a closed, single-origin environment where simpler overlay tools suffice.&lt;/p&gt;

&lt;p&gt;In the arms race of collaborative tools, Washi isn’t just another annotation utility—it’s a bridge between the static precision of design tools and the dynamic chaos of live web development. As remote teams become the default, its role in streamlining feedback loops will only grow more critical.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Deep Dive &amp;amp; Solution Architecture
&lt;/h2&gt;

&lt;p&gt;Washi’s design philosophy is rooted in solving a deceptively complex problem: enabling Figma-style pin annotations on live HTML content within iframes while maintaining open-source accessibility and framework agnosticism. This section dissects the architectural decisions that make this possible, focusing on the mechanisms that overcome iframe-specific challenges and their real-world implications.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Adapter Layer: The Framework-Agnostic Bridge
&lt;/h3&gt;

&lt;p&gt;The core of Washi’s flexibility lies in its &lt;strong&gt;adapter-based architecture&lt;/strong&gt;. Iframes inherently operate in a separate document context, creating a DOM isolation problem. Direct manipulation of the iframe’s DOM from the parent window is blocked by the browser’s same-origin policy—a security mechanism that prevents cross-origin scripting attacks. Naive solutions relying on CSS overlays or absolute positioning fail here because they assume a shared DOM, which iframes explicitly deny.&lt;/p&gt;

&lt;p&gt;Washi circumvents this by treating the iframe as a &lt;em&gt;black box&lt;/em&gt;. Instead of direct DOM access, it uses the &lt;strong&gt;&lt;code&gt;postMessage&lt;/code&gt; API&lt;/strong&gt; to communicate with the iframe’s content window. This mechanism acts as a bridge, allowing the parent window to send commands (e.g., "query the element at coordinates X,Y") and receive responses (e.g., "the element with ID &lt;code&gt;content-section&lt;/code&gt;"). The adapter layer translates these commands into actionable DOM operations within the iframe, working within the same-origin policy’s restrictions rather than compromising security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this works:&lt;/strong&gt; By abstracting the communication layer, Washi remains framework-agnostic. Whether the iframe contains React, Vue, Svelte, or vanilla HTML, the adapter layer handles the translation, ensuring compatibility across diverse web ecosystems.&lt;/p&gt;
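&lt;p&gt;A minimal TypeScript sketch of the adapter idea — the command and reply shapes here are invented for illustration, not Washi’s real message schema. The key property is that everything crossing the frame boundary is plain, structured-cloneable data, never a DOM node:&lt;/p&gt;

```typescript
// Hypothetical adapter protocol: serializable commands in, plain data out.
type Command =
  | { kind: "hitTest"; x: number; y: number }  // "what element is under this point?"
  | { kind: "rectOf"; selector: string };      // "where is this element?"

type Reply =
  | { kind: "hit"; selector: string }
  | { kind: "rect"; top: number; left: number; width: number; height: number }
  | { kind: "error"; message: string };

type Rect = { top: number; left: number; width: number; height: number };

// Inside a real iframe this would be backed by document.elementFromPoint and
// getBoundingClientRect; here a lookup table stands in for the DOM so the
// dispatch logic itself runs anywhere.
function handleCommand(cmd: Command, dom: Map<string, Rect>): Reply {
  if (cmd.kind === "hitTest") {
    for (const [selector, r] of dom) {
      if (cmd.x >= r.left && cmd.x < r.left + r.width &&
          cmd.y >= r.top && cmd.y < r.top + r.height) {
        return { kind: "hit", selector };
      }
    }
    return { kind: "error", message: "nothing at point" };
  } else {
    const r = dom.get(cmd.selector);
    return r ? { kind: "rect", ...r } : { kind: "error", message: "no such element" };
  }
}
```

&lt;p&gt;In the browser, &lt;code&gt;handleCommand&lt;/code&gt; would sit behind a &lt;code&gt;message&lt;/code&gt; event listener inside the iframe and reply via &lt;code&gt;event.source.postMessage(...)&lt;/code&gt;; the table-backed stand-in above just makes the dispatch logic self-contained.&lt;/p&gt;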

&lt;h3&gt;
  
  
  2. Coordinate Mapping: Solving the Scroll and Resize Problem
&lt;/h3&gt;

&lt;p&gt;Iframes introduce a &lt;strong&gt;coordinate mismatch&lt;/strong&gt; between the parent window and the iframe’s internal document. When users scroll or resize the iframe, the annotations must remain pinned to the correct element—a challenge exacerbated by dynamic content updates. Traditional solutions relying on fixed positioning break because the iframe’s scroll position and transformation matrix are decoupled from the parent window.&lt;/p&gt;

&lt;p&gt;Washi addresses this with a &lt;strong&gt;real-time transformation matrix&lt;/strong&gt;. It continuously calculates the offset between the parent window’s coordinates and the iframe’s internal document coordinates. This matrix is updated on every scroll, resize, or zoom event, ensuring annotations remain anchored to the correct element. For example, if a user scrolls the iframe 200px down, Washi recalculates the pin’s position relative to the iframe’s new scroll offset, preventing annotation drift.&lt;/p&gt;
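
&lt;p&gt;The arithmetic for the simple case can be sketched as follows (assuming a single, untransformed iframe; this is illustrative, not Washi’s actual code):&lt;/p&gt;

```javascript
// Map a point from parent-window coordinates into the iframe document's
// coordinate space. iframeRect is the iframe element's bounding rect in the
// parent; iframeScroll is the iframe document's current scroll offset.
// Assumes no CSS transforms on the iframe element itself.
function parentToIframeCoords(point, iframeRect, iframeScroll) {
  return {
    x: point.x - iframeRect.left + iframeScroll.x,
    y: point.y - iframeRect.top + iframeScroll.y,
  };
}

// Example: iframe at (100, 50) in the parent, scrolled 200px down.
// A parent click at (150, 80) lands at (50, 230) in the iframe document.
```

&lt;p&gt;Recomputing this on every scroll or resize event is what keeps pins anchored; scrolling the iframe 200px down corresponds to bumping &lt;code&gt;iframeScroll.y&lt;/code&gt; by 200.&lt;/p&gt;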

&lt;p&gt;&lt;strong&gt;Edge Case Analysis:&lt;/strong&gt; In scenarios with nested iframes or complex CSS transforms, the transformation matrix must account for cumulative offsets. Washi handles this by traversing the iframe hierarchy and aggregating transformation matrices, ensuring precision even in deeply nested structures.&lt;/p&gt;
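
&lt;p&gt;Aggregating transforms for nested frames amounts to composing 2D affine matrices down the frame chain. A sketch using the CSS &lt;code&gt;matrix(a, b, c, d, e, f)&lt;/code&gt; layout (illustrative, not Washi’s implementation):&lt;/p&gt;

```javascript
// Compose a chain of 2D affine transforms, outermost frame first.
// Each transform is [a, b, c, d, e, f] in CSS matrix() order, i.e.
//   x' = a*x + c*y + e,   y' = b*x + d*y + f
function composeMatrices(chain) {
  return chain.reduce(
    ([a1, b1, c1, d1, e1, f1], [a2, b2, c2, d2, e2, f2]) => [
      a1 * a2 + c1 * b2,
      b1 * a2 + d1 * b2,
      a1 * c2 + c1 * d2,
      b1 * c2 + d1 * d2,
      a1 * e2 + c1 * f2 + e1,
      b1 * e2 + d1 * f2 + f1,
    ],
    [1, 0, 0, 1, 0, 0] // identity
  );
}

// Two nested frames that only translate, by (10, 20) then (5, 5),
// compose to a single translation of (15, 25).
```

&lt;p&gt;In a real page, each frame’s entry would come from its bounding rect and computed transform; composing them once per frame chain is cheaper than re-deriving positions element by element.&lt;/p&gt;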

&lt;h3&gt;
  
  
  3. Reactive Updates: Preventing Annotation Drift
&lt;/h3&gt;

&lt;p&gt;Dynamic HTML content poses a unique challenge: annotations must persist even when the underlying DOM changes. Without a mechanism to detect and respond to these changes, pins would become misaligned or disappear entirely. Washi employs a &lt;strong&gt;&lt;code&gt;MutationObserver&lt;/code&gt;&lt;/strong&gt; to monitor the iframe’s DOM for changes. When an element is added, removed, or modified, the observer triggers a recalibration of all annotations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mechanism:&lt;/strong&gt; The &lt;code&gt;MutationObserver&lt;/code&gt; acts as a watchdog, firing callbacks whenever the DOM structure changes. These callbacks update the internal annotation map, ensuring pins are reattached to the correct elements. For example, if an element containing an annotation is replaced during a re-render, Washi detects the change and transfers the pin to the corresponding new element.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance Trade-offs:&lt;/strong&gt; Heavy DOM mutations (e.g., re-rendering an entire page) can overwhelm the &lt;code&gt;MutationObserver&lt;/code&gt;, causing annotation lag. Washi mitigates this by &lt;em&gt;throttling&lt;/em&gt; mutation callbacks or implementing &lt;em&gt;debouncing&lt;/em&gt;, ensuring updates occur at a manageable frequency without sacrificing responsiveness.&lt;/p&gt;
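
&lt;p&gt;The throttling side of that trade-off can be sketched as a small wrapper. The clock is injected so the logic runs outside a browser; in a page you would pass a wrapper around &lt;code&gt;performance.now()&lt;/code&gt;. This is illustrative, not Washi’s code.&lt;/p&gt;

```javascript
// Leading-edge throttle: run fn at most once per intervalMs; calls that
// arrive inside the window are dropped. `now` is injected for testability.
function makeThrottled(fn, intervalMs, now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      fn(...args);
      return true;  // ran
    }
    return false;   // dropped: too soon since the last run
  };
}
```

&lt;p&gt;Debouncing is the complementary option mentioned above: instead of dropping calls, it waits until the mutations pause and then recalibrates once.&lt;/p&gt;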

&lt;h3&gt;
  
  
  4. Edge Cases and Mitigation Strategies
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Domain Iframes:&lt;/strong&gt; &lt;code&gt;postMessage&lt;/code&gt; itself works across origins, but the embedded page must run a cooperating listener and both sides should validate message origins; without that, communication fails. &lt;em&gt;Mitigation:&lt;/em&gt; Load the adapter script inside the iframe’s page, or proxy the content through your own origin so the script can be injected. &lt;strong&gt;Rule:&lt;/strong&gt; If iframe content is hosted on a different domain → embed the adapter or proxy the content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Heavy DOM Mutations:&lt;/strong&gt; Rapid changes overwhelm &lt;code&gt;MutationObserver&lt;/code&gt;, causing lag. &lt;em&gt;Mitigation:&lt;/em&gt; Throttle or debounce callbacks. &lt;strong&gt;Rule:&lt;/strong&gt; If DOM updates exceed 100 mutations/second → apply throttling.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Comparative Analysis: Why Washi’s Approach is Optimal
&lt;/h3&gt;

&lt;p&gt;Alternative solutions, such as CSS overlays or absolute positioning, fail under iframe constraints due to their reliance on a shared DOM. Washi’s adapter-based, coordinate-mapping, and reactive architecture provides a robust, framework-agnostic solution. While it introduces slight overhead (e.g., &lt;code&gt;postMessage&lt;/code&gt; latency), the trade-off is justified by its compatibility and precision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Professional Judgment:&lt;/strong&gt; Use Washi if your workflow involves reviewing live HTML content within iframes and requires Figma-style annotations. Avoid it for static content (e.g., PDFs) or single-origin environments where direct DOM access is feasible.&lt;/p&gt;

&lt;p&gt;In conclusion, Washi’s architecture is a testament to solving hard problems with elegant, mechanism-driven solutions. By treating iframes as black boxes, mapping coordinates in real-time, and reacting to DOM changes, it bridges the gap between static design tools and dynamic web development—a critical enabler for modern, collaborative workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases &amp;amp; Real-World Applications
&lt;/h2&gt;

&lt;p&gt;Washi’s pin-based commenting system isn’t just a theoretical innovation—it’s a practical tool solving real-world problems across industries. Here are five concrete scenarios where Washi’s architecture shines, backed by the technical mechanisms that make it work.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. E-Commerce Platform Design Reviews
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; A remote design team reviews a live product page prototype embedded in an iframe. Designers need to annotate specific UI elements (e.g., misaligned buttons, incorrect font sizes) directly on the dynamic HTML content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mechanism:&lt;/strong&gt; Washi’s &lt;em&gt;Adapter Layer&lt;/em&gt; uses &lt;code&gt;postMessage&lt;/code&gt; to query the iframe’s DOM, bypassing same-origin restrictions. The &lt;em&gt;Coordinate Mapping&lt;/em&gt; system calculates the transformation matrix between the parent window and iframe, ensuring pins stay anchored to elements even during scroll or resize events. &lt;em&gt;MutationObserver&lt;/em&gt; recalibrates annotations when the product grid dynamically updates (e.g., AJAX-loaded items), preventing drift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edge Case:&lt;/strong&gt; If the iframe loads cross-domain content (e.g., a third-party reviews widget), &lt;code&gt;postMessage&lt;/code&gt; fails unless the embedded page runs a cooperating listener. &lt;em&gt;Mitigation:&lt;/em&gt; Load the adapter script inside the embedded page, or proxy the content through your own origin so the script can be injected.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. SaaS Dashboard Iteration
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; A developer and product manager collaborate on a React-based dashboard iframe. The PM needs to flag data visualization inconsistencies (e.g., incorrect chart labels) on a page with heavy DOM mutations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mechanism:&lt;/strong&gt; Washi’s &lt;em&gt;Reactive Updates&lt;/em&gt; throttle &lt;code&gt;MutationObserver&lt;/code&gt; callbacks to handle &amp;gt;100 mutations/second without overwhelming the system. The &lt;em&gt;Adapter Layer&lt;/em&gt; abstracts React’s virtual DOM, ensuring framework-agnostic operation. Coordinate mapping aggregates transformation matrices for nested iframes (e.g., embedded analytics widgets).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trade-off:&lt;/strong&gt; Throttling introduces ~50ms annotation lag during intense mutations, but prevents system freeze—a critical compromise for usability.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Marketing Landing Page QA
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; A QA team tests a Vue.js landing page iframe, annotating broken links and misrendered hero sections. The page uses CSS transforms and animations, complicating pin stability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mechanism:&lt;/strong&gt; Washi’s &lt;em&gt;Coordinate Mapping&lt;/em&gt; recalculates offsets on every &lt;code&gt;scroll&lt;/code&gt;/&lt;code&gt;resize&lt;/code&gt; event, accounting for CSS transforms via the aggregated transformation matrix. The &lt;em&gt;Adapter Layer&lt;/em&gt; treats Vue’s reactive DOM as a black box, ensuring compatibility without framework-specific hooks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Risk:&lt;/strong&gt; Rapid CSS animations (e.g., parallax effects) can cause temporary pin misalignment. &lt;em&gt;Mitigation:&lt;/em&gt; Throttle coordinate recalculations to the display’s frame rate (e.g., via &lt;code&gt;requestAnimationFrame&lt;/code&gt;), balancing responsiveness and accuracy.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Educational Platform Prototyping
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; A UX researcher annotates a Svelte-built course module iframe, pinpointing accessibility issues (e.g., unreadable contrast ratios) on dynamically generated content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mechanism:&lt;/strong&gt; &lt;em&gt;MutationObserver&lt;/em&gt; detects Svelte’s reactive DOM updates (e.g., new quiz questions loaded via API), triggering annotation recalibration. The &lt;em&gt;Adapter Layer&lt;/em&gt; translates parent commands into iframe-specific DOM queries, maintaining framework agnosticism.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choice Error:&lt;/strong&gt; Teams might opt for CSS overlay solutions, which fail here due to the iframe’s isolated DOM. &lt;em&gt;Rule:&lt;/em&gt; If targeting dynamic iframe content, use Washi’s adapter-based approach instead of naive overlays.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Internal Tool Development
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; An engineering team reviews a vanilla HTML admin panel iframe, annotating performance bottlenecks (e.g., slow-loading tables). The page uses nested iframes for modular components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mechanism:&lt;/strong&gt; Washi’s &lt;em&gt;Coordinate Mapping&lt;/em&gt; traverses the iframe hierarchy, aggregating transformation matrices for nested frames. The &lt;em&gt;Adapter Layer&lt;/em&gt; communicates with each iframe’s window via &lt;code&gt;postMessage&lt;/code&gt;, enabling cross-frame annotations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitation:&lt;/strong&gt; Fails if nested iframes block &lt;code&gt;postMessage&lt;/code&gt; due to security policies. &lt;em&gt;Optimal Solution:&lt;/em&gt; Preconfigure iframe permissions or use a shared proxy for message relay.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparative Analysis: Why Washi Wins
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Solution&lt;/th&gt;
&lt;th&gt;Effectiveness&lt;/th&gt;
&lt;th&gt;Failure Condition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CSS Overlay&lt;/td&gt;
&lt;td&gt;Fails under iframe isolation; assumes shared DOM.&lt;/td&gt;
&lt;td&gt;Always fails for iframes without direct DOM access.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Absolute Positioning&lt;/td&gt;
&lt;td&gt;Breaks on scroll/resize due to coordinate mismatch.&lt;/td&gt;
&lt;td&gt;Fails when iframe content scrolls or resizes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Washi (Adapter + Mapping)&lt;/td&gt;
&lt;td&gt;Robust for dynamic iframes; handles scroll, resize, mutations.&lt;/td&gt;
&lt;td&gt;Fails only if &lt;code&gt;postMessage&lt;/code&gt; blocked or mutations &amp;gt;100/second without throttling.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Professional Judgment:&lt;/strong&gt; Use Washi if your use case involves live HTML iframes requiring Figma-style precision. Avoid for static content or single-origin environments where simpler solutions suffice.&lt;/p&gt;

</description>
      <category>iframes</category>
      <category>annotation</category>
      <category>collaboration</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I built an app that helps couples decide what to watch together</title>
      <dc:creator>Martin Langaas</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:24:53 +0000</pubDate>
      <link>https://dumb.dev.to/logflix/i-built-an-app-that-helps-couples-decide-what-to-watch-together-1f21</link>
      <guid>https://dumb.dev.to/logflix/i-built-an-app-that-helps-couples-decide-what-to-watch-together-1f21</guid>
      <description>&lt;p&gt;My girlfriend and I had the same argument &lt;br&gt;
every Friday night: "What do you want to watch?" &lt;br&gt;
"I don't know, what do YOU want to watch?"&lt;/p&gt;

&lt;p&gt;So I built Logflix.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;Both of you swipe on movies and series — yes or no. &lt;br&gt;
When you both swipe yes on the same title, it's a match. &lt;br&gt;
No more endless scrolling. No more compromises where &lt;br&gt;
nobody's actually happy.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Share a link or scan a QR code&lt;/li&gt;
&lt;li&gt;Both swipe independently on your own phones&lt;/li&gt;
&lt;li&gt;Match appears live on screen when you both like the same title&lt;/li&gt;
&lt;li&gt;Works across Netflix, HBO, Disney+ and more&lt;/li&gt;
&lt;li&gt;No account needed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tech stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Next.js / Supabase / Vercel&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why I built it
&lt;/h2&gt;

&lt;p&gt;Dead simple. No setup maze. You and your partner, &lt;br&gt;
swipe, match, watch.&lt;/p&gt;

&lt;p&gt;🎁 New users get 7 days free Premium after their &lt;br&gt;
first match — no credit card needed.&lt;/p&gt;

&lt;p&gt;Live and free: logflix.app/together&lt;/p&gt;

&lt;p&gt;Would love feedback — especially if you have a partner &lt;br&gt;
who can never decide 😄&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>buildinpublic</category>
    </item>
  </channel>
</rss>
