After reviewing AI articles from the past two years, I think these are the 8 topics I should write about next.

I recently went back and reviewed the AI-related articles on my blog from the past two years, and found that the content is no longer just simple impressions like “whether a certain model is good or not.” Instead, it has gradually formed a fairly clear main thread: how AI actually entered my development workflow, and what efficiency gains, costs, and new constraints it brought.

When I say “the past two years,” counting back from today, that roughly spans 2024-03-30 to 2026-03-30. On actually reviewing the articles, one pattern stands out: there were almost no pieces with a truly substantial AI theme in 2024; intensive output only began around 2025-01. That fact is itself interesting. It shows that, for me, AI didn’t enter a stable period of use right away; it slowly permeated my work and writing, and continuous documentation only began once I found suitable tools and task formats.

The AI articles of the past two years can broadly be divided into three stages.

Phase One: Tool Exploration, Testing Usability First

Representative articles from this phase include:

  • “Trying out the Cursor AI Coding IDE”
  • “Locally Deploying deepseek-R1 with ollama”
  • “Designing and Developing a Stock Selection Module Without Writing Code”
  • “Two Years of AI Development: A State Similar to Before Docker’s Release”

The core question in this phase was very simple: can AI actually help me get things done? The focus at that time was more on the tool level:
  • Whether the IDE format is convenient to use.
  • Whether local deployment can actually run.
  • Whether code generated by the model saves time.
  • Whether AI gets stuck when encountering complex requirements.

Looking back, the articles from this phase feel like paving the way for heavy usage later on. Many conclusions remain valid today, such as: AI can significantly boost basic development efficiency, but complex tasks still require manual decomposition; local models are good for experimentation, but stability and speed are key when integrating into high-frequency workflows.

Phase Two: Starting to Integrate into the Workflow, But Side Effects Appear Too

Representative articles from this phase include:

  • “Overusing AI, and Some Aftermath” (or “The Side Effects of Overusing AI”)
  • “Claude 4 Release: Attempting Development with Hugo Tags and Hyperlink Translation Assistants”
  • “Some Usage Experiences with Large Models Recently”
  • “ByteDance AI Coding New Paradigm SOLO”

By this point, AI is no longer just a “tool to occasionally try out,” but it has started to directly participate in:

  • Developing blog toolchains
  • Translation caching and batch processing pipelines
  • UI design and code iteration
  • Model specialization and scenario selection

At the same time, the problems were becoming more specific. The earlier question was “Can AI write code?”; the later questions became:

  • The code is written, but how do I validate it?
  • The article is generated, but does it lack a human touch/real-world flavor?
  • The documentation has been updated synchronously, but do I truly understand it myself?
  • The tools are getting stronger, but is the intensity of human thought decreasing?

This is what I find to be the most valuable part of this set of articles. Compared to empty talk about “AI being powerful,” these records that carry a slight sense of discomfort feel more authentic.

Phase Three: Moving from Single Tool Experience to Protocols, Workflows, Stability, and Cost

Representative articles for this phase include:

  • “AI Coding Interaction Based on Command Line”
  • “From Protocol Constraints to Intelligent Release: A Deep Comparison of MCP vs. Skill”
  • “A Period of Heavy AI Programming”
  • “The End Game of Low-Cost API Gateways: Large Model Experiences and the Impossible Triangle in March”

By this stage, the focus has clearly upgraded. It is no longer about “which model answers more intelligently,” but rather:
  • Which is better for continuous development: terminal interaction or IDE integration?
  • What are the boundary differences between capability extension methods like MCP and Skill?
  • Where should human intervention occur during heavy AI programming?
  • How to make realistic choices among cost, stability, and quality

These topics are no longer simple product reviews; they are closer to workflow design. This is also the most stable and recognizable thread in the AI topics on this blog right now.

The biggest strength of this batch of articles is that they don’t chase hot topics.

Looking back, what makes the existing AI articles on the blog truly distinctive is not that they wrote about a certain model earlier than others, nor that they cover parameters, leaderboards, and benchmark scores more completely, but rather these points:

1. All Originate from Real-World Scenarios

Whether it’s the stock selection module, the blog translation tool, command-line coding, or the records of API gateway tinkering, almost none of these topics were conceived out of thin air; they were written after hitting problems in actual use. The advantage of this type of content is that it is not easily superficial.

2. Focuses on “How to integrate AI into the workflow”

Many AI articles only write about model capabilities, but this set of existing articles is more concerned with:

  • How to decompose tasks
  • How to integrate into projects
  • How to maintain documentation
  • How to control context
  • How to divide labor between different models

The lifecycle of this type of content is usually longer than that of simple model evaluations.

3. Beginning to realize the side effects and costs

From “Overusing AI, and Some Aftermath” to “The End Game of Low-Cost API Gateways,” a fairly complete line of thought has actually formed:

  • AI can boost efficiency
  • But it changes people’s search and thinking habits
  • Cheap does not equal truly saving money
  • Quality, stability, and cost-effectiveness are hard to achieve simultaneously

These judgments come from accumulated personal usage, not just from repeating online opinions.

But there are also several obvious gaps in the content

Although the existing articles have a main thread, if I continue writing, there are still some noticeable gaps.

1. Lack of a Systematic “Acceptance Methodology”

Many articles have touched on the AI programming experience, and they have mentioned unit testing, performance testing, and documentation synchronization, but no single article has thoroughly explained the entire process of how to accept and validate what AI writes.

2. Lack of Team Perspective

Currently, most articles focus on individual development and personal workflows. That perspective is fine, but if I continue writing, it can be expanded to:

  • How to limit the scope of AI modifications during team collaboration
  • How to approach AI-generated code in code reviews
  • How documentation, commit logs, and test records should work together

3. Lack of Discussion on Security and Permission Boundaries

This trend has become increasingly obvious recently; AI is no longer just a chat box, but is taking over:

  • Terminal commands
  • Repository read/write access
  • Browser operations
  • Third-party tool calling

The stronger the capability, the more valuable it is to define the permission boundaries.

4. Lack of “Long-term Knowledge Base” Direction

The blog currently has basic capabilities like translation caching, slugs, and tags, but it hasn’t systematically addressed how, if the blog itself is a personal knowledge base, it can be organized into content assets that are more suitable for AI consumption, retrieval, and processing.

The 8 Topics I Think Are Best to Write About Next

I have ranked these 8 areas based on “best fit with current writing style” and “easiest to generate original content.”

1. How to Build an Acceptance System for AI Programming

This is the article I most recommend writing first. It could focus on the following points:

  • Which modifications must have unit tests added
  • Which modules must run regression tests
  • Which refactorings require performance comparison
  • Which documents should be maintained synchronously
  • What to focus on during manual review

Once this article is written, many subsequent AI programming articles can link back to it.
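As a first pass, a checklist like the one above could be sketched as a small rule table mapping changed files to required acceptance checks. The path patterns and check names below are hypothetical placeholders, not my actual setup:

```python
# Toy sketch: map AI-changed files to the acceptance checks they require.
# Patterns and check names are hypothetical, not a real project layout.
from fnmatch import fnmatch

# Hypothetical rules: first matching pattern wins for each file.
ACCEPTANCE_RULES = [
    ("src/core/*", ["unit tests", "regression tests"]),
    ("src/perf/*", ["unit tests", "performance comparison"]),
    ("docs/*",     ["manual review"]),
    ("*",          ["manual review"]),  # fallback: everything gets a human look
]

def required_checks(changed_paths):
    """Collect the acceptance checks implied by a set of changed files."""
    checks = set()
    for path in changed_paths:
        for pattern, needed in ACCEPTANCE_RULES:
            if fnmatch(path, pattern):
                checks.update(needed)
                break  # first matching rule wins
    return sorted(checks)

print(required_checks(["src/core/selector.py", "docs/readme.md"]))
# → ['manual review', 'regression tests', 'unit tests']
```

The point is not the specific rules but that the mapping is explicit: once it is written down, it can run in CI instead of living in my head.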

2. Where MCP Is Truly Useful: It’s Not About Connecting More Things

MCP is getting increasingly popular, but most discussions are still at the conceptual level. What is more worth writing about is:

  • Which tool integrations genuinely boost efficiency
  • Which ones just look cool but aren’t actually necessary
  • Among local files, documents, issues, monitoring dashboards, and design mockups, which should be prioritized for integration?
  • What is the actual difference between protocolized integration and “stuffing in a large prompt”?

3. Cross-Comparison of Claude Code, Codex, Gemini CLI, and Domestic CLI Models in Practice

Instead of simply stating “which one is better,” the goal is to conduct a comparative evaluation using a unified set of tasks. For example, fixed comparisons on:

  • Requirement decomposition ability
  • Instruction following accuracy
  • Scope control for code modification
  • Ability to supplement tests
  • Documentation synchronization capability
  • Cost and waiting time

This type of article is the easiest for readers to reference directly.
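The scoring side of such a comparison could be sketched as a fixed set of dimensions with weights, so every tool is aggregated the same way. The dimension names, weights, and example scores here are made-up illustrations, not real benchmark results:

```python
# Sketch of a unified scoring scheme for comparing coding-CLI tools.
# Dimensions and weights are illustrative placeholders, not measurements.

DIMENSIONS = {
    "requirement_decomposition": 0.20,
    "instruction_following":     0.20,
    "modification_scope":        0.20,
    "test_supplementation":      0.15,
    "doc_sync":                  0.15,
    "cost_and_latency":          0.10,
}

def weighted_score(scores):
    """Aggregate per-dimension scores (0-10) into one weighted number."""
    return round(sum(DIMENSIONS[d] * s for d, s in scores.items()), 2)

# Hypothetical scores for one tool on the shared task set.
example = {d: 8 for d in DIMENSIONS}
print(weighted_score(example))  # → 8.0
```

Fixing the dimensions and weights up front is what makes cross-tool numbers comparable at all; changing them per tool would reduce the article back to impressions.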

4. Context Management in AI Programming

Often, the issue isn’t that the model is weak, but rather that the context has become dirty, too long, or drifted. This article could specifically cover:

  • When to clear the context
  • When a new thread should be started
  • When tasks should be broken down into smaller pieces
  • When multi-agent parallelization is appropriate
  • In what situations manual re-summarization of the current state is necessary

This topic is very specific and easy to combine with real cases of my own.
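Rules like these could even be caricatured as a toy heuristic. The thresholds below are arbitrary illustrations of judgments I normally make by feel, not measured values:

```python
# Toy heuristic for context hygiene in AI-assisted coding sessions.
# All thresholds are arbitrary illustrations, not tuned numbers.

def context_action(tokens_used, context_limit, topic_switches, failed_attempts):
    """Suggest what to do with the current conversation context."""
    if failed_attempts >= 3:
        # Repeated failures usually mean the context is polluted:
        # summarize the state by hand and start over.
        return "re-summarize and restart"
    if topic_switches >= 2:
        # Mixing unrelated tasks in one thread causes drift.
        return "new thread"
    if tokens_used > 0.8 * context_limit:
        # Near the window limit, quality degrades before truncation.
        return "clear context"
    return "continue"

print(context_action(50_000, 200_000, 0, 0))  # → continue
```

The real article would replace each branch with concrete failure cases; the sketch only shows that the decisions can be made explicit rather than left to intuition.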

5. From IDE to Terminal, and then to Multi-Agent Collaboration

Over the past year, the focus of AI programming interaction has clearly shifted. Previously, it was more about in-IDE completion, chat, and local code modification; now, more and more tools are emphasizing:

  • Terminal interaction
  • Repository-level understanding
  • Multi-threaded context
  • Parallel agents
  • Worktree isolation for development

This topic is suitable for connecting past articles such as those on Cursor, Trae, Claude Code, and Codex.

6. The Security Surface of AI Programming is Expanding

This direction is well worth writing about, and it hasn’t been covered much yet. It could be approached from these angles:

  • Risks associated with automatically executing commands
  • Trust boundaries for third-party MCP services
  • Issues of private repository and sensitive information leakage
  • Prompt injection and malicious context contamination
  • The boundary between automated scripts and human confirmation

If an article only says “AI is very capable,” it becomes repetitive; bringing in the security boundaries makes the content significantly higher in quality.
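The last angle, the boundary between automation and human confirmation, could be sketched as a small command gate sitting between the agent and the shell. The allowlists here are hypothetical examples, not a vetted security policy:

```python
# Sketch of a command gate for agent-proposed shell commands:
# allowlisted commands run, dangerous ones require human confirmation.
# The command sets are hypothetical examples only.
import shlex

ALLOWED = {"ls", "cat", "git", "grep"}          # read-mostly commands
ALWAYS_CONFIRM = {"rm", "curl", "ssh", "sudo"}  # destructive or network-facing

def gate(command_line):
    """Return 'run', 'confirm', or 'block' for a proposed command."""
    try:
        argv = shlex.split(command_line)
    except ValueError:
        return "block"      # unparseable input is rejected outright
    if not argv:
        return "block"
    cmd = argv[0]
    if cmd in ALWAYS_CONFIRM:
        return "confirm"    # a human must approve before execution
    if cmd in ALLOWED:
        return "run"
    return "confirm"        # unknown commands default to confirmation

print(gate("git status"))     # → run
print(gate("rm -rf build/"))  # → confirm
```

Defaulting unknown commands to confirmation rather than execution is the key choice: the gate fails closed as the agent’s toolset grows.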

7. The True Place of Local Models

Previously, the focus was more on “Can the local model run?”, but now it is more valuable to ask:

  • What tasks is it suitable for?
  • What tasks is it not suitable for?
  • When do advantages like privacy, offline capability, and low cost truly materialize?
  • At what point does continuing to insist on a local solution actually become a waste of time?

This has more follow-up value than deployment tutorials alone.

8. How to Organize Blog Content into AI-Friendly Knowledge Assets

This direction integrates most closely with the existing blog system. It could cover:

  • How to uniformly design slugs, tags, abstracts, and categories
  • How to minimize link drift in multilingual content
  • How article metadata can aid subsequent retrieval
  • How to make historical articles more suitable for AI retrieval, summarization, and citation

If this piece gets written, it will be both an AI topic and one that serves the entire blog system in return.
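A first concrete step could be a metadata check over each post’s front matter. Apart from Hugo’s standard fields, the required field set below is my hypothetical choice, not an established convention:

```python
# Sketch: verify each post's front matter carries the fields that make
# it retrievable later. The required set is a hypothetical choice.

REQUIRED_FIELDS = {"title", "slug", "tags", "summary", "date"}

def missing_metadata(front_matter):
    """Return which required fields a post's front matter is missing."""
    return sorted(REQUIRED_FIELDS - set(front_matter))

# Hypothetical front matter, already parsed into a dict.
post = {
    "title": "AI Coding Interaction Based on Command Line",
    "slug": "ai-cli-coding",
    "date": "2025-06-01",
}
print(missing_metadata(post))  # → ['summary', 'tags']
```

Run across the whole content directory, a check like this turns “AI-friendly metadata” from a vague goal into a list of posts to fix.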

There have been several changes in the industry recently that will also influence future topic selection.

First, AI programming is increasingly moving from being a “completion tool” to an “agent workflow.” Products like Codex and Claude Code no longer emphasize single-turn answers; instead, they focus on task decomposition, tool calling, parallel processing, and continuous context maintenance.

Second, protocolized access methods, such as MCP, are transitioning from being “new concepts” to becoming infrastructure. In the future, truly valuable articles will not be those that re-explain protocol definitions, but rather those that clearly articulate: which integration scenarios are genuinely effective, and which ones just look advanced.

Third, the back-and-forth between design mockups, documents, code, and command lines is increasing. Previously, tools were siloed; now, AI is attempting to connect these links. This also means that “workflow design” will be more valuable for long-term writing than simply listing “model benchmarks.”

Fourth, stability, cost, and permission risks are not going away. On the contrary, the stronger the model’s capabilities become, the more critical these issues will be.

The Final Judgment

If I continue writing about AI, the one thing worth sticking to is not reviewing every new model release, but a more specific question: how exactly does AI integrate into real development and writing workflows, step by step? Where does it genuinely improve efficiency, and where does it push the problem back onto humans? This thread is actually already outlined; it just hasn’t fully materialized yet. The most appropriate next step is not to spread the topics wider, but to keep digging along four sub-themes: “Practical Tools,” “Process Design,” “Boundary Control,” and “Long-term Knowledge Assets.” Content written this way will, over time, more easily solidify into material of my own, rather than a collection of quickly outdated hot takes.
