How we built the publishing pipeline: queue, retries and the LinkedIn and X APIs
Three platforms, sixty seconds. Generating the content is the easy part; getting all three POSTs through without breaking anything is the hard part.
The launch post promised something concrete: three platforms, sixty seconds, one click. We promised it before we fully had it. When we sat down to build it, we found out every platform has its own way of breaking on you, and the difference between "works in a demo" and "works in production" is the five corner cases nobody tells you about.
This is the map of how it's wired today.
Architecture: DB queue, per-platform adapters
When the author hits "publish," the dashboard's API does three things:
- Creates the three content versions (blog, LinkedIn, X) if they don't already exist.
- Inserts three rows into a `publish_jobs` table with state `pending`, one per platform.
- Returns `200 OK` to the author immediately.
The user doesn't wait. Processing lives in a Cloudflare Worker that polls the table every five seconds, picks up pending jobs, and invokes the matching adapter for each. Each adapter is a module that knows how to speak to ONE platform — `linkedin.ts`, `x.ts`, `blog.ts` — and returns `{ ok: boolean, externalId?, error? }`.
"DB queue" instead of a dedicated broker (Cloudflare Queues, SQS) is deliberate: at current volume the DB is enough and it simplifies debugging. If a publication gets stuck, the author sees the state in the UI without a separate dashboard. And retries are an UPDATE with a counter bump.
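The poll-and-dispatch cycle can be sketched in TypeScript. This is an illustrative model, not the real worker: the `PublishJob` shape, the injected adapter map, and the retry cap of three attempts are assumptions; the actual code reads and updates the `publish_jobs` table directly.

```typescript
// Hypothetical model of one poll cycle: claim pending jobs, run the
// matching adapter, and record the new state. Adapters are injected so
// the logic stays testable without touching a real platform API.
type AdapterResult = { ok: boolean; externalId?: string; error?: string };
type Adapter = (job: PublishJob) => Promise<AdapterResult>;

interface PublishJob {
  id: string;
  platform: "blog" | "linkedin" | "x";
  state: "pending" | "complete" | "failed";
  attempts: number;
}

async function processPending(
  jobs: PublishJob[],
  adapters: Record<PublishJob["platform"], Adapter>,
): Promise<PublishJob[]> {
  const results: PublishJob[] = [];
  for (const job of jobs.filter((j) => j.state === "pending")) {
    const res = await adapters[job.platform](job);
    // A retry is just an UPDATE with a counter bump: a failed job stays
    // pending (attempts + 1) until it hits the assumed cap of 3 tries.
    results.push(
      res.ok
        ? { ...job, state: "complete" }
        : {
            ...job,
            state: job.attempts >= 2 ? "failed" : "pending",
            attempts: job.attempts + 1,
          },
    );
  }
  return results;
}
```

Because the state machine is plain data, "retries are an UPDATE with a counter bump" falls out naturally: the worker never needs broker-specific redelivery semantics.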
LinkedIn: the authorURN bug
LinkedIn's API works but has a few sharp edges that cost time to discover.
The first and most memorable: the `author` field in a request to `/v2/ugcPosts` isn't your user ID. It's a URN with an exact format:
urn:li:person:<LINKEDIN_USER_ID>
Send "author": "<USER_ID>" bare and you get a 400 whose message looks like a permissions issue. It isn't. It's a parsing error in disguise. You spend two hours checking OAuth scopes before reading the docs for the sixth time and seeing it.
The second: the payload format depends on the post type. Plain text uses `specificContent.com.linkedin.ugc.ShareContent.shareCommentary.text`. If you add a blog link (we always do), the URL goes in a different field and you need `shareMediaCategory: "ARTICLE"` with extra metadata. Different schema, two distinct branches in the adapter.
The full adapter fits in 80 lines. The first five times it failed, it was always one of those two things. Today an E2E test with a LinkedIn mock catches both as expected cases.
X: rate limits and thread magic
X (Twitter) has a cleaner API but aggressive rate limits. The free plan gives you 1,500 tweets / month / user. Plenty for an active author, ridiculous if you wanted to automate threads for a whole team.
For threads you chain manually. Each tweet is a POST to `/2/tweets` with `reply.in_reply_to_tweet_id` pointing to the previous one. If the second POST fails halfway — it happens once in a hundred — you end up with an orphan thread of one tweet. Adapter policy:
- POST the first tweet, save the ID.
- POST each subsequent tweet with `in_reply_to_tweet_id` set to the previous tweet's ID.
- If any POST fails, abort and mark the job as `partial_failure`.
- NEVER delete already-published tweets — let the author decide whether to finish manually or leave the first tweet alive.
Auto-deletion looks elegant until a viral tweet gets nuked because the network blinked on tweet 3 of 4. The rule: publication is irreversible from the adapter's point of view. What was published, stays published.
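The thread policy above can be sketched as a small state machine. `postTweet` here is an injected stand-in for the real `POST /2/tweets` call, and the function name is illustrative:

```typescript
// Sketch of the thread policy: post sequentially, chain each tweet to
// the previous one, and on failure abort WITHOUT deleting anything.
type PostTweet = (text: string, replyToId?: string) => Promise<string>;

async function publishThread(
  tweets: string[],
  postTweet: PostTweet,
): Promise<{ state: "complete" | "partial_failure"; postedIds: string[] }> {
  const postedIds: string[] = [];
  let previousId: string | undefined;
  for (const text of tweets) {
    try {
      // First iteration: no reply target. Later ones: chain to previous.
      previousId = await postTweet(text, previousId);
      postedIds.push(previousId);
    } catch {
      // Irreversibility rule: what was published, stays published.
      // Deleting here could nuke a tweet already collecting replies.
      return { state: "partial_failure", postedIds };
    }
  }
  return { state: "complete", postedIds };
}
```

Returning the IDs posted so far is what lets the UI offer "finish manually": the author knows exactly where the chain broke.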
The blog: simplest adapter, slowest one
Counter-intuitive surprise: the blog adapter (which INSERTs into our own DB) is the simplest in code but the slowest in latency. Reason: on top of the INSERT, two slow things run:
- schema.org JSON-LD generation with the full graph (author, tags, dates, image).
- IndexNow notifications to Bing, Yandex, Seznam, Naver, and Yep to trigger a re-crawl. The pings run in parallel, but the combined latency is still ~600ms.
This runs asynchronously AFTER returning 200 to the user, but before marking the job as complete. If IndexNow fails on one of the five engines, we don't flag failure — the cost of not notifying Yandex is low, and retries would fragment the logic.
Failure: 2 of 3 still counts as success
The political question of the pipeline: if blog confirms, LinkedIn confirms, and X fails — is the job success or failure?
Agentikas's answer: partial success. The partial_failure state exists on purpose. The UI shows "published on blog and LinkedIn, X failed — retry?". The author decides. Reason is practical: most X failures are temporary rate limits or transient 500s. Retrying manually in five minutes fixes 90% of cases. Auto-retrying from the adapter risks duplicate posts on the platforms that already worked.
This requires idempotency: if a job is reprocessed, the blog adapter checks for a post with that `publish_job_id` in the DB before inserting. The LinkedIn adapter checks the stored `externalId`; the X adapter does the same. No duplicate POST under any circumstance.
The real cost, in numbers
Closing with the question that matters when everything is open source: how much does this cost?
- Generation with Claude Haiku 4.5: ~3.5 cents per post (includes blog + LinkedIn + X versions, brand review, translation).
- LinkedIn API calls: free (free plan).
- X API calls: free under 1,500/month.
- Cloudflare Workers: the queue worker uses <1ms CPU per job. Easily fits the free plan.
- Postgres / Supabase: the `publish_jobs` table has 4 columns and gets purged after 30 days. Negligible cost.
Total: under five cents per full publication to all three platforms. Public price to the author: zero. Agentikas Labs absorbs it because the platform is non-profit.
The LinkedIn adapter, the X adapter, the queue worker, and the publish_jobs migration are separate files in github.com/agentikas/agentikas-blog. If any of the gotchas above sound off to you, open the code — the comment warning about the authorURN bug is still there.