Writing an AI blog post, in the end, still needs to be turned into engineering (Part 3)

How local models, online models, and Minimax ended up dividing the work

After going through all the configurations in the repository, I am even more certain about one thing: what matters in the end is not how strong any single model is, but rather who should bear the cost at each layer.

The most obvious signal is that the currently active published.runtime.json is still the one generated on April 2, 2026, for minimax-m2, yet the entry from April 3, 2026, at 16:38, labeled 5f17088, has switched the default provider for blog-style-suite to the local gemma-4-26b-a4b in LM Studio. This might look inconsistent, but it actually isn’t; it precisely illustrates that this pipeline has begun to specialize.

Of the articles in this series, the first two laid out the boundaries. The first discussed why blog-writer emerged, and the second discussed how blog-style-suite separates style learning from token costs. This final article settles the most practical question: where should local models, online models, and Minimax ultimately be placed?

Training Style Data: Not Worth Burning Online-Model Tokens at Every Step

Style data, once you start taking it seriously, quickly becomes a practical token problem. It's not about whether you want to save costs: if you don't divide the labor, the whole setup won't run for long. The most common past mistake was letting one online model handle everything:

  • Scraping historical articles
  • Performing filtering
  • Doing categorization
  • Scoring
  • Sampling
  • Enforcing style
  • Finally writing the draft

The biggest problem with doing it this way isn't that "the model isn't strong enough," but that every step burns cost at the same level. Looking back, the truly reasonable approach is to think in reverse: which steps must be online, which should be localized, and which shouldn't be given to a model at all. As long as that boundary is unclear, no matter how powerful the model is, it will just end up repeating a pile of tasks that preprocessing could have eliminated.

Local Models are Better Suited for Dirty, Heavy, and Iterative Tasks

I am increasingly inclined to define local models as the “physical layer” for production use. They might not be the strongest, nor perfect every time, but they are particularly suited for tasks such as:

  • Building through repeated runs/iterations
  • Multi-round compression experiments on style data
  • Re-scanning after configuration changes
  • Low-risk recalculation on existing structures

These types of tasks share a clear commonality: the value isn't in a single high-value output, but in the ability to run repeatedly, tolerate errors, and avoid paying a high cost every round.

Currently, scripts/blog-style-suite/config.json has switched to lm-studio-gemma4, which itself signals a shift in judgment. It isn't that local gemma is necessarily stronger than online models; rather, for the production pipeline, we are finally prioritizing runnability, frequency of use, and the ability to iterate repeatedly.

This aligns with the logic I laid out earlier in "Don't force strong tasks onto weak models." Local models may not be suited to writing complex, comprehensive articles from scratch, but they are excellent at dirty, heavy, batch-processing work. Preprocessing style data is inherently that category of task.
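As a purely hypothetical illustration of what such a provider switch might look like, a multi-provider config could be shaped roughly like this. Every field name, URL, and port below is an assumption for the sketch, not the repo's actual schema (though LM Studio's local server does default to an OpenAI-compatible endpoint on localhost):

```json
{
  "default_provider": "lm-studio-gemma4",
  "providers": {
    "lm-studio-gemma4": {
      "base_url": "http://localhost:1234/v1",
      "model": "gemma-4-26b-a4b"
    },
    "minimax-m2": {
      "base_url": "https://api.example.com/v1",
      "model": "minimax-m2"
    }
  }
}
```

Under a shape like this, switching the production side is a one-line change to default_provider, while every provider's connection details stay defined side by side.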

Online models are better suited for the final polish, not for doing everything from scratch

Just because local models are suitable for the production side doesn’t mean online models have no value. The real value of an online model lies precisely in that final polishing touch. For example:

  • Supplementing facts based on the latest information
  • Structuring arguments within a larger context
  • Handling time-sensitive information that requires internet verification
  • Transforming already prepared, structured style assets into a publishable article

These tasks place higher demands on expression quality, factual integration, and contextual understanding, which is where online models earn their cost. In other words, the powerful model is more like the last few stations on the assembly line. It isn't that it can't do the upfront work, but if you make it scan everything from beginning to end, the cost structure quickly becomes distorted. This is also why blog-writer is designed to read only from the published location, published.runtime.json, rather than switching providers or re-scanning the suite directory while drafting. The lighter the consumption side, the easier it is for a stronger model to focus on finalizing the article.
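The consumption-side idea can be sketched in a few lines. This is a hypothetical illustration, not blog-writer's actual code; only the file name published.runtime.json comes from the article, and the function name is mine:

```python
import json
from pathlib import Path

def load_published_style(suite_dir: str) -> dict:
    """Load whichever runtime a human promoted to the published location.

    The drafting step never scans the suite directory or cares which
    provider produced the file; it reads one contract and nothing else.
    """
    path = Path(suite_dir) / "published.runtime.json"
    return json.loads(path.read_text(encoding="utf-8"))
```

The point of keeping this side so thin is exactly the one above: the expensive model spends its tokens on drafting, not on rediscovering the style data.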

The Significance of Minimax: It’s Not Just Another Provider Connection

Many people who see Minimax might first think: “It’s just another model being connected.”

I don’t think so.

The truly valuable aspect of Minimax is that it has successfully paved the way for “multiple provider outputs consumed by a single publishing contract.”

The change on April 2, 2026, at 10:18 (9f15199) modified blog-style-suite to support multi-model configurations, with outputs isolated per provider. Subsequently, the README and runtime structure have consistently emphasized one thing: while the suite can generate many sets of results, only the manually selected published.runtime.json is actually effective.

This boundary is extremely important.

Because once this boundary is clear, the role of Minimax changes from being “something that must be bound within the drafting process” to becoming:

  • Something that can participate in production-side comparisons.
  • Something that can be used to generate a runtime version.
  • Something that can be compared horizontally with local model artifacts.
  • Finally, something whose publication is decided by human judgment.

This transforms the provider from a “system dependency” into a “replaceable component.”
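The "replaceable component" property can be made concrete with a small sketch of manual promotion. The directory layout and function below are assumptions for illustration; only published.runtime.json and the idea of per-provider isolated outputs come from the article:

```python
import shutil
from pathlib import Path

def promote(suite_dir: str, provider: str) -> Path:
    """Copy one provider's isolated output to the published location.

    Every provider can generate results, but only the one a human
    explicitly promotes becomes effective; nothing else changes.
    """
    suite = Path(suite_dir)
    src = suite / "outputs" / provider / "runtime.json"
    dst = suite / "published.runtime.json"
    shutil.copyfile(src, dst)
    return dst
```

Because the consumer only ever reads the published file, swapping Minimax for any other provider is a promotion decision, not a code change.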

I believe this is the most interesting significance of Minimax within this engineering setup. It isn’t here to dominate the entire pipeline; it’s here to validate whether this pipeline has successfully cleaned up its interfaces.

True specialization is not based on model strength, but on task type

I now favor a classification method that is quite rudimentary, but very effective.

Rules and Hard Constraints

Leave these to local scripts. If something can be solved with deterministic tools like scanner.py, write_post.py, or write_post_series.py, don't let a model get involved.

Style Data Generation

Prioritize local models or lower-cost providers. Because what is most important here is reproducibility, room for iteration/error, and cacheability, not necessarily the most dazzling single output.

Final Drafting and Fact Consolidation

Hand this off to a model better suited to long-context integration, polished expression, and fact-checking with web retrieval. This layer is where spending money on online models is most worthwhile.

When broken down like this, many previously confusing questions turn out not to be complex. You don't need to argue every day about which model is strongest; you just need to ask: which layer does this task belong to?
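The three-layer classification above can be written down as a routing table. Tier names and step names here are illustrative, not the repo's identifiers; the only design decision carried over from the text is that deterministic work never reaches a model:

```python
# Map each pipeline step to the cheapest layer that can handle it.
TIER_OF = {
    "scan_articles":  "script",        # deterministic: scanner.py territory
    "filter_posts":   "script",
    "compress_style": "local_model",   # cheap to rerun, errors tolerable
    "score_samples":  "local_model",
    "final_draft":    "online_model",  # long context + facts: worth paying
}

def assign(step: str) -> str:
    """Return the cost layer a step belongs to. Unknown steps default to
    the cheapest layer so they never silently burn online-model tokens."""
    return TIER_OF.get(step, "script")
```

Defaulting unknown steps to the script layer encodes the article's bias: escalating to an expensive layer should always be a deliberate choice.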

Ultimately, what is most valuable is not the model, but the clear boundaries.

This concludes my third article. As blog-writer and blog-style-suite have evolved, I feel that what is most valuable is not which provider we connected next, or who we replaced, or which one we tested. What is most valuable is that the boundaries are finally becoming clearer.

  • blog-writer handles the consumption side.
  • blog-style-suite handles the production side.
  • published.runtime.json is the publishing point.
  • Local models are better suited for dirty and heavy lifting that needs to be run repeatedly.
  • Online models are better suited for the final polish/wrap-up.
  • Online providers like Minimax feel more like replaceable components than the central hub of the system.

Once the boundaries are clear, the whole workflow runs smoothly. You stop expecting one model to conquer everything, and you stop stacking every step onto the most expensive layer. In the end, what looks like model selection is really assigning workstations to different types of tasks. A single strong point is certainly good, but in the long run, clear boundaries are more important, and more durable, than any single-point strength.

References

Writing Notes

Original Prompt

$blog-writer This content is quite extensive, so I've split it into a series of articles: Last year, many drafts were written using large models. Back then, the process was to create an outline or a list of questions myself, and then have the AI generate the draft, copy the content into a local md document, fill in header information, tag information, and publish the article; recently, I used Codex a lot and found that its web search capability is very strong. So, could I write a skill to automate these tasks? This led to the first draft of the skill blog-writer. I also thought about having the AI learn my previous writing style, which caused blog-writer to consume a lot of tokens when running. Subsequently, I optimized blog-writer in several versions, splitting out the data module and the data generation module. The original data generation module was still an independent skill. As I continued writing, I realized that it would be better as a Python project, which led to blog-style-suite. Then, I found that training on style data also consumes a lot of tokens, so I wanted to use a local large model and connected to a local LLM. I then thought about comparing the differences between the local LLM and the online version, so I integrated minimax; the evolution history of blog-style-suite and blog-writer can be analyzed from the git commit history. Additionally, based on the code for local blog-writer and blog-style-suite, I can discuss the design ideas, how token saving was achieved, and how the data structure was designed—the core design concepts. If tokens are abundant, it can consume entire historical articles; preprocessing can save a lot of tokens.

Writing Strategy Summary

  • The third article will no longer repeat the discussion on architecture, but instead focus solely on the practical issue of “model specialization/division of labor.”
  • Start directly by stating the current reality (the repository's published.runtime.json still points at minimax-m2, while the local config.json has switched to gemma4) to reduce filler content.
  • The focus should not be on proving which model is stronger, but rather on explaining why different tasks should be assigned to different cost layers.
  • Placing Minimax in the “replaceable provider” section aims to pull its significance back into the engineering boundary, rather than treating it as just another entry on a model leaderboard.
  • Conclude by returning to the overarching judgment: “Clear boundaries are more important than single points of strength,” serving as the closing statement for the entire series of articles.