Tags

AI

AI Writing a Blog: The Next Steps Towards Engineering (Part 2)

If you have enough tokens, the lowest-effort method is actually quite crude: just feed the model your historical articles and let it learn on its own. The problem is that this only suits occasional writing, not continuous work. If you treat blogging as a long-term workflow, relying solely on raw historical articles will quickly go from "simple and direct" to "expensive and messy."

AI Writing a Blog: The Next Steps Towards Engineering (Part 1)

I wrote quite a few AI articles last year. The most basic workflow back then was: organize an outline or a list of questions myself; have the large model produce the body text; then copy the output into a local Markdown file, add frontmatter, tags, categories, and a title; and finally publish. This process isn't unusable, but it's tedious. The part that really wastes time isn't the body text but the repetitive labor around it. After using Codex heavily recently, this friction has become even more obvious: it can read repositories, modify files, gather supporting material, and even write articles directly into the content directory. If I still have to copy and paste manually, it feels like I'm hobbling the tool.
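That repetitive wrapper work is exactly what a small script can absorb. Below is a minimal sketch, assuming a Hugo-style `content/posts` layout and a naive slug rule (both are illustrative assumptions, not a specific setup), that wraps generated body text in YAML frontmatter and writes it out as a Markdown file:

```python
from datetime import date
from pathlib import Path


def write_post(title: str, body: str, tags: list[str],
               out_dir: str = "content/posts") -> Path:
    """Wrap generated body text in YAML frontmatter and save it as Markdown."""
    # Naive slug: lowercase the title and join words with hyphens.
    slug = "-".join(title.lower().split())
    frontmatter = "\n".join([
        "---",
        f'title: "{title}"',
        f"date: {date.today().isoformat()}",
        "tags: [" + ", ".join(tags) + "]",
        "---",
    ])
    path = Path(out_dir) / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(frontmatter + "\n\n" + body, encoding="utf-8")
    return path
```

From here, the model only needs to hand back a title, tags, and body text; the file layout is no longer manual work.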

In the era of AI, just getting people into an app is no longer enough.

Watching domestic AI companies spend money this Lunar New Year, my first reaction wasn't excitement but familiarity. Tencent Yuanbao gave out 1 billion in cash red envelopes starting February 1st; Baidu Wenxin distributed red envelopes totaling 500 million from January 26th until mid-March; Alibaba's Qwen launched a 3-billion "treat plan" on February 6th; and Doubao leveraged Spring Festival Gala AI interactions to push its presence. My judgment is straightforward: this is inertia left over from the previous internet era: first pull people into the app, then build up usage frequency; everything else can wait. But AI isn't quite a traffic-driven business.

A Skill is not a new prompt; it is the job manual for the agent.

These past few days, as I read about AI programming, the conversation kept moving from MCP straight to Skills. Many people seeing the term for the first time instinctively treat it as yet another protocol or yet another advanced prompt.

My judgment is very straightforward: Skills aren't here to replace MCP; a Skill is more like a job manual for the agent. MCP solves "enabling the agent to connect to the external world," while Skills solve "how to reliably get the job done once connected." The two aren't substitutes; one builds on the other.

Simply put, MCP gives the agent hands and feet, and Skill tells the agent not to mess around.
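Concretely, in Anthropic's published format a Skill is essentially a folder containing a `SKILL.md` file: YAML frontmatter naming the skill and describing when it applies, followed by plain-language working instructions. The sketch below is illustrative only; the skill name, steps, and paths are invented for this example, not an exact spec:

```markdown
---
name: blog-publisher
description: Formats a finished draft into a publishable blog post with frontmatter and tags.
---

# Blog publisher

## When to use
Use this skill when the user asks to publish a finished draft to the blog repo.

## Steps
1. Read the draft and extract a title and 3-5 tags.
2. Add YAML frontmatter (title, date, tags, categories).
3. Write the file under the posts directory; never overwrite an existing file.
4. Report the final path back to the user.
```

Note how nothing here grants new capabilities: the file system access still comes from MCP or the agent runtime; the Skill only constrains how those capabilities get used.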

Don't force weak models onto hard tasks.

Recently, I've been migrating some peripheral tasks to MiniMax and local models. The more I use them, the more I feel we shouldn't always measure things by the standard of "the most powerful model."

My judgment is straightforward: don't force weak models into hard tasks. Models like MiniMax are genuinely limited: for complex coding, long-chain reasoning, or decomposing ambiguous requirements, they fall short. But ask them to do data cleaning, document drafting, or hunting down proposal materials, and they handle it perfectly well. The same logic applies to local models around the 12B size: translation, format rewriting, and batch cleaning are exactly where they are best suited.

To put it plainly, it’s not that the models lack value; it’s just that we shouldn’t place them in the wrong roles.

After reviewing AI articles from the past two years, I think these are the 8 topics I should write about next.

I recently went back through the AI-related articles on my blog from the past two years and found that the content is no longer just simple impressions like "is this model good or not." It has gradually formed a fairly clear through line: how AI actually entered my development workflow, and what efficiency gains, costs, and new constraints it brought.

The End of Low-Cost API Gateways: Large Model Experiences and the Impossible Triangle in March

Throughout March, I was constantly testing various large model API gateways. They are indeed cheap: for a small monthly sum you can try out foreign models like ChatGPT, Claude, and Gemini, which at first glance looks like an extremely cost-effective deal. But the more I actually used them, the more I felt this path has always been constrained by an impossible triangle: quality, stability, and affordability; you can't have all three at once. By last weekend, the situation had become quite clear. Over the two days from 2026-03-28 to 2026-03-29, I saw risk controls on ChatGPT channels tighten noticeably, and Claude was no different. Many low-cost relays that had previously worked suddenly became unstable or failed outright. For me, this basically marks the temporary end of the low-cost API gateway model.

Computing Power Hegemony and Valuation “Bubble”: We are entering a costly new era.

Recently, I’ve been observing discussions within the industry, and it seems there’s been a fundamental shift in the definition of “growth.”

Previously, when we discussed the internet, we talked about "moving a thousand pounds with four ounces": writing a few lines of code, renting a few cloud servers, and using excellent interaction and operations to unlock hundreds of millions of users. As of 2026, however, large models are shattering this asset-light illusion completely.