
AI Inspiration Hub

AI Writing a Blog: The Next Steps Towards Engineering (Part 1)

I wrote quite a few AI articles last year. The most basic workflow back then was: first organize an outline or a list of questions myself; have the large model produce the body text; then copy the content into a local Markdown file, add the frontmatter, tags, categories, and title; and finally publish. The process works, but it is tedious. What really wastes time isn't the body text itself but the repetitive labor around it. After using Codex heavily in recent weeks, this awkwardness has only grown. It can read repositories, modify files, pull in supporting material, and even write articles straight into the directory. If I still have to copy and paste by hand, I'm hobbling the tool.

After reviewing AI articles from the past two years, I think these are the 8 topics I should write about next.

I recently went back through my blog's AI-related articles from the past two years and found that the content is no longer just simple impressions such as "is this model good or not." Instead, a fairly clear through-line has emerged: how AI actually entered my development workflow, and what efficiency gains, costs, and new constraints it brought.

The End of Low-Cost API Gateways: Large Model Experiences and the Impossible Triangle in March

Throughout March I kept testing various large-model API relays. They are indeed cheap: for a small monthly fee you can try foreign models such as ChatGPT, Claude, and Gemini, which at first glance looks like an extremely cost-effective deal. But the more I used them, the more I felt this path is constrained by an impossible triangle of quality, stability, and affordability: it is very hard to get all three at once. By last weekend the situation was clear. Over the two days from 2026-03-28 to 2026-03-29, I saw risk controls tighten noticeably on the ChatGPT channels, and Claude was no different. Many low-cost relays that had worked before suddenly became unstable or failed outright. For me, that effectively marked the end, at least for now, of the low-cost API relay model.

Computing Power Hegemony and Valuation “Bubble”: We are entering a costly new era.

Recently, I’ve been observing discussions within the industry, and it seems there’s been a fundamental shift in the definition of “growth.”

Previously, when we talked about the internet, we talked about "using four ounces to move a thousand pounds": write a few lines of code, rent a few cloud servers, and, with excellent interaction and operations, reach hundreds of millions of users. As of 2026, however, this asset-light illusion is being thoroughly shattered by large models.

Deep Dive: Memory Corruption and Cache Pollution in C++ with Static Lambdas

This article analyzes a bizarre phenomenon in C++ development: after a hit, unordered_map::find returns an object whose fields don't match. The root cause is a static lambda defined inside a function that captures local variables by reference; the reference dangles after the first call, triggering undefined behavior (UB) and polluting cached data on subsequent calls. The recommended fixes are to pass parameters explicitly instead of capturing implicitly, manage lifetimes carefully, and run sanitizer tools.

Xiaomi's Trade-In Push and Its Defensive Battle in the Electric Vehicle Sector

Holding mindset: stay in the position patiently, observe calmly, and watch whether the "ecosystem premium" is actually fulfilled.

I. Market Overview: From 2025’s “Mania” to 2026’s “Consolidation”

2025 was a strong year for the Hong Kong stock market: the Hang Seng Tech Index rose 23.45% over the year, its best performance since inception. Entering January 2026, however, the market shifted into a clear "two advances, one pullback" pattern.

The American "Kill Shot Line"

"Lao A" (牢A) is a recently trending internet meme in China (particularly on platforms such as Bilibili, Douyin, and Xiaohongshu). It primarily refers to an overseas blogger known as "Squish the King" (Squish) and his narrative system about American society.

Simply put, you can understand this term from the following dimensions:

1. Origin of the Name: Playful References and Nicknames

  • The "Lao" (牢) prefix: it comes from the internet meme culture around "Mamba Mentality" and Kobe Bryant nicknames ("Lao Da," "Lao K," and so on). In current Chinese internet subculture, "Lao" has become a playful, slightly mocking prefix for figures carrying specific controversial or widely recognized labels.
  • "A": stands for America (the United States).
  • The combined name: because this blogger has long covered the harsh realities of American life, netizens applied the "Lao" naming convention and playfully nicknamed him "Lao A."

2. Core Concept: “The American Kill Shot Line”

"Lao A" (牢A) went viral because he introduced a highly contagious concept: "The American Kill Shot Line."

wrk vs. JMeter deep benchmarking

In internet system stress testing, we frequently encounter two tools with vastly different styles: one is extremely lightweight, pursuing extreme throughput—wrk; the other is feature-rich and simulates real business flows—JMeter.

Prompt: outline the core ideas and write an explainer (popular-science) article: HTTP stress-testing tools, wrk vs. JMeter, what are the differences? What I know: wrk tends to use one thread driving multiple connections for testing, while JMeter primarily uses a short-connection mode, which can be switched to persistent (keep-alive) connections via configuration.
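As a concrete point of contrast for the article, the two tools are driven very differently from the command line (host, duration, and file names below are placeholders):

```shell
# wrk: a small number of threads, each multiplexing many persistent
# (keep-alive) connections for a fixed duration.
# Here: 4 threads sharing 100 connections for 30 seconds.
wrk -t4 -c100 -d30s http://localhost:8080/api/ping

# JMeter in non-GUI mode: replays a test plan of simulated users.
# Whether connections are short-lived or reused is configured inside
# the plan itself (the HTTP Request sampler's "Use KeepAlive" option),
# not on the command line.
jmeter -n -t ping_plan.jmx -l results.jtl
```

This difference in where concurrency and connection reuse are configured is itself a good illustration of the two tools' philosophies: wrk optimizes for raw throughput with minimal setup, while JMeter models business flows in a plan file.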