This series is really trying to convey one thing: in the long run, writing blogs with AI cannot rely on a single prompt. This first article discusses why blog-writer came into existence; the second, AI Writing Blogs: Later, It Still Needs to Be Engineered (Part II): How blog-style-suite Separates Style Learning and Token Costs, continues the story; and the third, AI Writing Blogs: Later, It Still Needs to Be Engineered (Part III): How Local Models, Online Models, and Minimax Finally Divide the Labor, concludes it.
What’s truly annoying isn’t writing the draft, but that sequence of mechanical actions.
The early workflow was essentially an outsourced assembly line. I would first list out the questions clearly, or build a rough outline; the model laid out the body text; then a human came back to finish the remaining publishing steps.
- Copy the draft to a local `md` file
- Fill in `title`, `date`, and `slug`
- Add tags and categories
- Insert `<!--more-->`
- Organize reference materials
- Finally, decide which directory it should go into

Looking at each step individually, none of this is difficult. But strung together, it becomes tedious. What's annoying isn't the technical difficulty; it's that these steps are all mechanical, yet none of them can be skipped. This is why I increasingly feel that shifts like command-line-based AI coding are not just about "changing the entry point." When AI can directly read and write files inside a repository, a blog workflow that still stops at "copy the body text into a local document" is simply outdated.
blog-writer's First Layer of Value: It's Not the Style, It's Locking Down the Contract
The very first node for blog-writer was at 17:00 on April 1, 2026, with the commit hash 991536a. Looking at the git commit history, this version included SKILL.md, write_post.py, and an initial set of style guidelines all together.
However, when I looked back later, the most valuable part of this draft wasn’t that “the AI learned my writing style,” but rather that it established a rigid contract for content creation.
What does “locking down the contract” mean?
- The input must include at least an outline and factual anchors.
- The output must be complete Markdown, not a work in progress.
- Frontmatter can no longer depend on manual filling.
- The article cannot just stay in the chat window; it must land directly in `content/post`.
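As a sketch, that contract might be enforced by something like the function below. The function name, signature, and paths are my assumptions for illustration, not the actual write_post.py:

```python
from datetime import datetime
from pathlib import Path

# Hypothetical sketch of a "locked contract" for drafting:
# the caller must supply an outline and factual anchors, and the
# output is a complete Markdown file landed under content/post.
def write_post(title: str, slug: str, outline: list[str],
               anchors: list[str], body: str,
               root: Path = Path("content/post")) -> Path:
    if not outline or not anchors:
        raise ValueError("input must include an outline and factual anchors")
    front = "\n".join([
        "---",
        f'title: "{title}"',
        f"date: {datetime.now().isoformat(timespec='seconds')}",
        f"slug: {slug}",
        "---",
    ])
    path = root / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    # A complete Markdown file, not a work in progress.
    path.write_text(front + "\n\n" + body + "\n", encoding="utf-8")
    return path
```

The point of such a function is not cleverness; it is that every rule in the bullet list becomes code that either runs or fails, instead of a prompt the model may reinterpret.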
This point is crucial because prompts themselves are inherently unstable. If you say, “Write it like before,” today, it might understand that only the tone should be similar; if you repeat it tomorrow, it might only learn superficial sentence structures. But once it’s written as a Skill, the rule shifts from “improvisation” to a “fixed workflow.”
The subsequent nodes were actually all about reinforcing this contract.
The commit at 22:54 on April 2, 2026, with hash 8eb735a, standardized elements like author fields, writing notes, and original prompts. By this stage, the blog draft was no longer considered “finished once the body text is done”; instead, metadata, traceability, and public notes were all standardized together.
Therefore, the first layer of value for blog-writer has never been about making the model seem better at writing; it’s about finally giving the act of drafting repeatable boundaries.
Series Mode: Actually a Step Forward in the Writing Workflow
After stabilizing writing a single article, the next problem quickly emerged.
Some topics are simply not suitable to be crammed into one piece. If you force it, the result often becomes a long article that is information-heavy, has a scattered main thread, and fails to fully explain every point.
This is why the commit 1a5604e on April 2, 2026, at 23:55 was so crucial. That commit added the series mode along with write_post_series.py. Articles in a series are linked using relref, and the link placeholders are replaced uniformly during batch writing.
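The relref replacement can be imagined roughly like this. The placeholder syntax `[[series:N]]` below is my assumption for illustration, not necessarily what write_post_series.py actually uses:

```python
import re

# Hypothetical placeholder form: [[series:2]] meaning "link to part 2".
# The batch-writing step knows every part's final filename, so it can
# rewrite placeholders into Hugo relref shortcodes in one pass.
def link_series(body: str, part_files: list[str]) -> str:
    def repl(m: re.Match) -> str:
        idx = int(m.group(1)) - 1
        return '{{< relref "%s" >}}' % part_files[idx]
    return re.sub(r"\[\[series:(\d+)\]\]", repl, body)
```

The key property is that the links are resolved only at batch-write time, when all of the series' filenames are finally known.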
This might look like a minor upgrade to a file-writing script, but it’s not.
It illustrates one thing: content engineering is no longer just about “how to generate this single article,” but rather starts considering “how to stably save this set of content, how to guarantee the order, and how to link between them on the site.”
The next day’s commit, 04dccb9 on April 3, 2026, at 09:29, pushed this process one step further. The timestamps for series articles now increment by minutes instead of sharing a single timestamp. This change is small but very “engineering-y” because it solves real problems like Hugo list pages, previous/next article navigation, and series ordering.
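The minute-increment idea can be sketched as follows (a sketch of the concept, not the actual 04dccb9 implementation): each part gets the base time plus its index in minutes, so Hugo's date-ordered list pages and prev/next navigation come out in series order.

```python
from datetime import datetime, timedelta

# Give each series part a distinct, strictly increasing timestamp so
# Hugo's date-based ordering matches the intended series order.
def series_dates(base: datetime, n_parts: int) -> list[str]:
    return [(base + timedelta(minutes=i)).strftime("%Y-%m-%dT%H:%M:%S")
            for i in range(n_parts)]
```

With a shared timestamp, the sort order between parts is undefined; with one minute of spacing, it is deterministic without being visibly artificial.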
Simply put, the series mode isn’t about looking advanced; it’s about eliminating the need for manual fixes when publishing multiple articles together.
But relying on just one skill will eventually hit a token wall.
The problem lies here.
Once you start seriously tinkering with style learning, the context of blog-writer quickly becomes bloated. You don't just want it to write; you want it to write the way you used to write. The most natural approach is to dump all your historical articles into it.
This works for a single run, of course.
But as soon as you’re not writing an occasional piece, but trying to make it a long-term workflow, the problems immediately arise:
- High token consumption
- Repeatedly feeding the same batch of old articles every time
- Model attention is diluted by old material
- Drafting and style maintenance are intertwined; neither is easy.
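A back-of-envelope calculation makes the wall concrete. The numbers below are assumptions for illustration (say 100 historical posts averaging 2,000 tokens each), not measurements from my actual archive:

```python
# Back-of-envelope: feeding the whole archive into every drafting run
# attaches a large fixed cost to each article, before the draft itself.
posts, tokens_per_post, drafts = 100, 2000, 30

archive_cost_per_draft = posts * tokens_per_post   # tokens re-fed per run
total_overhead = archive_cost_per_draft * drafts   # overhead across 30 drafts
```

Under these assumed numbers, every draft pays roughly 200k tokens of archive overhead, and the overhead grows with each new post added to the archive.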
It was from here that I slowly realized that blog-writer is better suited for the consumption side, rather than trying to feed it everything.
Drafting should stay as light as possible: direct, and reading only the effective versions. How style data is generated, filtered, or compressed is a matter for a separate production pipeline. This realization finally pushed me to the next step, which became AI Writing Blogs: Later, It Still Needs to Be Engineered (Part II): How blog-style-suite Separates Style Learning and Token Costs.
First, stabilize the process; only then can we talk about style and models.
Looking back now, blog-writer didn’t emerge because I suddenly wanted to build a blog writing assistant.
It was more because the original workflow started failing to keep up with new ways of working.
Once a tool like Codex can connect to the internet for supplementary material, read and write within repositories, and directly call scripts, the act of writing a blog shouldn’t stop at “copying the body text to a local document.” If you don’t automate this part, it will actually become the clumsiest link in the entire chain.
So, I’ll leave the conclusion for the first post here.
What blog-writer solved initially wasn’t writing style, but the repetitive labor of the publishing action. Without this layer of contract, any subsequent discussion about tokens, data structures, or local models is actually baseless.
References
- Repository Commit: `991536a237d04aba7c44dec501b3d98c644040c8`
- Repository Commit: `8eb735aa8448c97deb2af1ea46b86772008fa9e3`
- Repository Commit: `1a5604e7e6ce0a13f260fcbb8c2c1d964cdd0892`
- Repository Commit: `04dccb98c55a6ea3b81408012b33a6219cf8ab77`
- Repository File: `.agents/skills/blog-writer/SKILL.md`
- Repository File: `.agents/skills/blog-writer/scripts/write_post.py`
- Repository File: `.agents/skills/blog-writer/scripts/write_post_series.py`
Writing Notes
Original Prompt
$blog-writer This content is quite extensive, so I've split it into a series of articles: Last year, many drafts were written using large models. Back then, the process was to create an outline or a list of questions myself, and then have the AI generate the draft, copy the content into a local md document, fill in header information, tag information, and publish the article; recently, I used Codex a lot and found that its web search capability is very strong. So, could I write a skill to automate these tasks? This led to the first draft of the skill blog-writer. I also thought about having the AI learn my previous writing style, which caused blog-writer to consume a lot of tokens when running. Subsequently, I optimized blog-writer in several versions, splitting out the data module and the data generation module. The original data generation module was still an independent skill. As I continued writing, I realized that it would be better as a Python project, which led to blog-style-suite. Then, I found that training on style data also consumes a lot of tokens, so I wanted to use a local large model and connected to one locally. I then thought about comparing the differences between the local large model and the online version, so I integrated minimax. The evolution history of blog-style-suite and blog-writer can be analyzed from the git commit history. Additionally, based on the code for local blog-writer and blog-style-suite, I can discuss the design ideas, how token saving was achieved, and how the data structure was designed—the core design concepts. If tokens are abundant, it can consume entire historical articles; preprocessing can save a lot of tokens.
Writing Strategy Summary
- The first article should focus on the workflow trigger point, without rushing to detail the division of labor between tokens and models, to avoid having all three articles compete for the main narrative.
- It retains the key insight: “The body content is not difficult; what’s troublesome are the mechanical actions before and after publishing.”
- By using nodes like `991536a`, `8eb735a`, `1a5604e`, and `04dccb9`, we ground the concept of "process contractualization" in actual Git evolution.
- The series pattern is reserved for this article to illustrate that blog writing has moved from generating single pieces to managing entire sets of deliverables.
- The ending deliberately points toward the token wall, setting up the groundwork for data engineering and preprocessing in the second article.