
write.agentikas.ai — the content studio we built for the agentic web

An editor where AI is a panel, not a magic button. Three windows side-by-side: blog, LinkedIn and X. Your brand on top, always.

The world's most-used editor has been the same for twenty years. A text box. A toolbar with bold, italic, link. A "publish" button. The whole process assumes writing the post is what you do and publishing it on other platforms is what you do by hand afterward.

When we started designing write.agentikas.ai the first move was to reject that premise. If there's an AI that can write, a library of skills that knows how to adapt to LinkedIn and X, and a brand reviewer that validates in milliseconds, the editor isn't a text box — it's a studio. A place where several tools work in parallel on the same content and you direct.

The premise: three windows, one intention

The editor opens with three panels visible at once. The central one is Tiptap — the rich-text editor that renders the blog post as it'll be published. The two side panels are the LinkedIn (left) and X (right) previews, exactly as they'll appear when published.

When you generate the initial post, all three fill in at the same time. When you edit the central one, the side panels don't auto-update, because each platform has its own voice and small blog edits don't always need to carry over to the X thread. When you want to re-sync, each panel has a "regenerate version" button.

The visual consequence: you never publish blind. What you see in each panel is exactly what'll go out on LinkedIn and X when you hit publish. Zero surprises.
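One way to picture this panel behavior is as a small state model. This is a hypothetical sketch, not the actual implementation; the `Studio`, `PanelState`, and `regenerate` names are invented for illustration. The point it shows: editing the blog only marks the side panels stale, and nothing rewrites them until you ask.

```typescript
// Hypothetical sketch: each panel keeps its own content plus a stale flag,
// so blog edits never silently overwrite the LinkedIn or X versions.
type Channel = "blog" | "linkedin" | "x";

interface PanelState {
  channel: Channel;
  content: string;
  stale: boolean; // true once the blog has diverged from this version
}

class Studio {
  panels: Record<Channel, PanelState> = {
    blog: { channel: "blog", content: "", stale: false },
    linkedin: { channel: "linkedin", content: "", stale: false },
    x: { channel: "x", content: "", stale: false },
  };

  // Editing the central blog panel only marks the side panels stale.
  editBlog(content: string): void {
    this.panels.blog.content = content;
    this.panels.linkedin.stale = true;
    this.panels.x.stale = true;
  }

  // "Regenerate version" re-syncs one side panel on demand,
  // using a per-platform adapter to produce that channel's voice.
  regenerate(channel: "linkedin" | "x", adapt: (blog: string) => string): void {
    this.panels[channel].content = adapt(this.panels.blog.content);
    this.panels[channel].stale = false;
  }
}
```

The stale flag is what makes "you never publish blind" cheap to enforce: the UI can badge any panel whose content lags behind the blog.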

The AI panel: a conversation with your co-pilot

To the right of the editor, taking the fourth column when open, is the Compose panel. Not a magic button — it's a chat with Claude that has context of the current post.

You use it for specific things:

  • "Make this more concise, keep the authorURN example."
  • "Rewrite the second paragraph in first person."
  • "Generate three alternative titles."
  • "Translate to English, keeping WebMCP terminology accurate."

The panel sees the editor's HTML. Any instruction you give it produces a diff in propose mode — the editor highlights the changes in yellow and you accept or reject. Never overwrites without approval. You never lose the original.

Whatever the AI proposes passes through the brand reviewer before it lands in the editor. If your instruction would have introduced a banned term, the proposal arrives sanitized with a small message: "Removed 'partner agencies' for violating BRAND.md — substituted with 'partner labs'." Doesn't ask — it does it and tells you.
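That "does it and tells you" behavior can be sketched as a small pipeline stage. The function and type names here are hypothetical, and the real reviewer is surely richer than string substitution; the sketch only shows the contract: a proposal goes in, a sanitized proposal plus human-readable notes come out.

```typescript
// Hypothetical sketch: every AI proposal passes through the brand reviewer
// before it reaches the editor. Banned terms are replaced and each
// substitution is recorded as a note shown to the author.
interface BrandRule {
  banned: string;
  replacement: string;
}

interface ReviewedProposal {
  html: string;
  notes: string[];
}

function reviewProposal(html: string, rules: BrandRule[]): ReviewedProposal {
  const notes: string[] = [];
  let reviewed = html;
  for (const rule of rules) {
    if (reviewed.includes(rule.banned)) {
      // Replace every occurrence, then tell the author what happened.
      reviewed = reviewed.split(rule.banned).join(rule.replacement);
      notes.push(
        `Removed '${rule.banned}' for violating BRAND.md; substituted with '${rule.replacement}'.`
      );
    }
  }
  return { html: reviewed, notes };
}
```

Because the reviewer returns notes instead of asking questions, the chat flow never stalls: the proposal always arrives, already clean.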

Diff and approval: the pattern that protects voice

The most important thing in the flow isn't generation. It's the diff.

Originally the editor overwrote content directly. You said "make this more concise" and the new version simply appeared. That worked for people with absolute trust in the model and failed for everyone else. Over three versions, three different authors asked for "a way to compare before accepting," so we changed it.

Today, any AI action on content produces a diff visible in the editor itself. Changes appear as a colored overlay — green for additions, red strikethrough for removals — exactly like a GitHub PR. Accept all, reject all, or accept chunk by chunk.

The effect is psychological before it's technical: AI proposes, you dispose. You recover the authorship feeling that gets lost when the screen shifts under your fingers.
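A chunk-level diff like this reduces to a simple data structure. The `Chunk` and `applyDecisions` names below are invented for the sketch; the idea is the PR-style mechanic the text describes: each chunk is kept text, an addition, or a removal, and the author decides per chunk.

```typescript
// Hypothetical sketch: a proposal rendered as diff chunks, GitHub-PR style.
type Chunk =
  | { kind: "keep"; text: string }
  | { kind: "add"; text: string } // rendered as a green overlay
  | { kind: "remove"; text: string }; // rendered as red strikethrough

// Apply the author's per-chunk decisions. Accepted additions come in,
// accepted removals go out; rejected chunks keep the original text.
function applyDecisions(chunks: Chunk[], accepted: boolean[]): string {
  return chunks
    .map((chunk, i) => {
      if (chunk.kind === "keep") return chunk.text;
      if (chunk.kind === "add") return accepted[i] ? chunk.text : "";
      return accepted[i] ? "" : chunk.text; // kind === "remove"
    })
    .join("");
}
```

"Accept all" and "reject all" are just the two constant decision vectors; chunk-by-chunk is everything in between, and the original text is always recoverable from the chunks themselves.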

Images: Unsplash, R2, whatever fits

The image picker has three modes:

  1. Unsplash search. You type a keyword, see results, pick one. The Unsplash canonical URL is saved on the post — no download to your server, no coupling to your storage.
  2. Your upload. Upload an image, it goes to Cloudflare R2 (global CDN + image optimization). We auto-generate responsive variants.
  3. AI generation. An experimental endpoint uses an image model to create the cover from the post's tags. We use it sparingly — authors prefer real photos — but it's there.

Unsplash credit gets injected into the post's JSON-LD automatically to maintain attribution. We don't ask the author to write it manually — the system carries it.
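The injected attribution might look like the following. This is a sketch under assumptions: the `UnsplashPick` shape and `imageJsonLd` helper are hypothetical, though `ImageObject`, `creditText`, and `creator` are real schema.org vocabulary.

```typescript
// Hypothetical sketch: the picker records photographer and URL, and the
// post's JSON-LD image object carries the Unsplash attribution automatically.
interface UnsplashPick {
  url: string; // canonical Unsplash URL, not a re-hosted copy
  photographer: string;
  profileUrl: string;
}

function imageJsonLd(pick: UnsplashPick) {
  return {
    "@type": "ImageObject",
    contentUrl: pick.url,
    creditText: `Photo by ${pick.photographer} on Unsplash`,
    creator: {
      "@type": "Person",
      name: pick.photographer,
      url: pick.profileUrl,
    },
  };
}
```

Keeping the credit in structured data means crawlers and agents see the attribution even when a theme hides the visible caption.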

Publish: one decision, three channels

The "publish" button opens a modal with three switches: blog (always on), LinkedIn (default-on if you connected your account), X (default-on if you connected). Each has a final content preview.

Hit confirm. The system:

  1. Sends all three jobs to the publish queue.
  2. Shows you a notification updating in real time with the state of each platform.
  3. If one platform fails, offers "retry" without touching the others.

The author keeps working on the next thing. The studio publishes by itself. The difference from "WordPress + Buffer + Hootsuite" isn't just fewer products: here the three channels are first-class citizens, not afterthoughts bolted on through an Atom feed.

Why a studio and not an editor

Legitimate question: why all this layout? Isn't a simple editor enough? The answer is that a simple editor optimizes for writing a post. A studio optimizes for publishing content — which in 2026 means adapting to multiple channels, ensuring brand fit, optimizing SEO/GEO, and covering the long tail of agentic crawlers.

If your job is to write, a simple editor is enough. If your job is to grow audience with your content, you need a studio. That's what write.agentikas.ai is: a professional tool that's free and open source, because the agentic web shouldn't sit behind a paywall.


Try it at write.agentikas.ai. Create your blog, write your first post, publish on three platforms. Zero install, zero cost. Open source at github.com/agentikas/agentikas-blog.
