AI Inspiration Hub

Moutai's Net Profit Drops for the First Time, and it's Not Just Because Young People Aren't Drinking Baijiu

This matter deserves separate discussion because it directly affects how we view Moutai. Many people used to treat Moutai as an eternally rising consumption myth, with young people's drinking habits just a minor factor. That is no longer the case. It is true that young people naturally have little interest in baijiu (Chinese liquor), but that is a slow-moving variable. The sudden turnaround reported in the annual report looks like it was driven by the contraction of an entire old system: obsolete business demand, traditional wealth-distribution channels, and established status-driven consumption patterns.

Locking down the strongest model first: AI companies have started selling access control.

Over the past couple of days, I came across Anthropic's Project Glasswing, scheduled for release on April 7, 2026. My first reaction was a moment of shock: not because another model had scored higher, but because it locks top-tier capability inside a small circle, initially reserved for defensive players such as AWS, Apple, Google, Microsoft, and the Linux Foundation.

My own judgment is very direct: This matter is more important than another benchmark record. What frontier AI companies are selling now is no longer just the model itself, but an entire set of access controls—“who gets the capability first, how much capability they get, and what kind of auditing and constraints they have to endure after receiving it.” Models are becoming increasingly like dangerous tools, and the release rhythm is becoming more like issuing licenses.

What's truly terrifying isn't the layoffs, but the fact that they aren't hiring anymore.

Seeing Block cut 4,000 people from a workforce of just over 10,000 at the end of February 2026 really shook me. I've always worked in financial IT: trading pipelines, Hong Kong and US equities, system fundamentals. I'm used to buzzwords like "efficiency gains," "automation," and "doing more with less." But when a fintech company, one that sits so close to money, compliance, and risk control, publicly cites AI as the reason for layoffs, it still hits hard emotionally.

My current assessment is very direct: the scariest part of AI layoffs isn't a layoff list in the news one day; it's when companies start assuming that smaller teams can do more work. That means no backfilling departures, fewer entry-level positions, and much tighter headcount management. From April 2025 to April 2026, this trend has kept spreading in the US; China hasn't yet seen a high-profile public wave of mass layoffs, but the quiet squeeze has already begun.

After moving to Unicom, the US nodes aren't as good anymore.

I recently moved, and the broadband at my place switched from China Telecom to Unicom. Usually, when I watch dramas or play games, I don’t feel any noticeable difference. It wasn’t until a couple of days ago that I tried to download some materials and habitually switched to the US node, but no matter what, the speed wouldn’t go up. I was a bit dumbfounded at the time.

I figured it out later. I used to think the US nodes were naturally stronger because US data centers had ample bandwidth. Now I see that understanding is only half right. Having ample server resources in the US is one half; which domestic broadband provider you use, how the international exit path is routed, and whether the return path rides a better backbone is the other half. Problems that never surfaced while I was on Telecom were all exposed after switching to Unicom.

Different architectures: Hermes and OpenClaw don't consume tokens the same way.

After finishing the article "[Mistaking Hermes for an OpenClaw Alternative, Maybe I Was Biased From the Start](/en/p/hermes-openclaw-not-the-same-game/)", I went through a round of documentation on both sides. The more I read, the more I felt that to truly see the difference between these two things, just looking at the features isn't enough; looking at how tokens are consumed is actually more direct.

I’ll state my judgment first.

OpenClaw by default behaves more like a long-lived online workbench: identity, rules, workspace files, and message constraints naturally persist across conversation rounds, so its baseline token footprint is usually heavier. Hermes is noticeably more restrained: much of the context is discovered and injected on demand, and the system prompt deliberately keeps a stable prefix, so token usage is easier to control by default.

Of course, this doesn't mean Hermes is necessarily cheaper. Turn on the memory provider, skills, sub-agents, and long tool outputs all at once, and it can burn through tokens just as fast. But frankly speaking, these two architectures have not consumed tokens the same way since day one.
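To make the difference concrete, here is a back-of-the-envelope sketch. Every number is invented for illustration; nothing is measured from Hermes or OpenClaw. It only shows why a heavy persistent prefix resent every round dominates a lean stable prefix with occasional on-demand injection as the conversation grows.

```python
# Invented numbers, for illustration only.
ROUNDS = 20
per_round_chat = 800  # user message + model reply, per round

# Style A: a heavy persistent context (identity, rules, workspace files,
# message constraints) resent on every round.
persistent_prefix = 6000
style_a = sum(persistent_prefix + per_round_chat for _ in range(ROUNDS))

# Style B: a lean stable prefix, plus extra context injected on demand
# in one round out of four.
stable_prefix = 1200
on_demand_injection = 2000
style_b = sum(
    stable_prefix + per_round_chat + (on_demand_injection if r % 4 == 0 else 0)
    for r in range(ROUNDS)
)

print("persistent-prefix style:", style_a)
print("stable-prefix style:   ", style_b)
```

With these made-up figures the persistent style costs 136,000 tokens over 20 rounds versus 50,000 for the restrained style, and the gap widens linearly with each additional round.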

Treating Hermes as a replacement for OpenClaw might be biased from the start.

Over the past couple of days, I've been flipping back and forth through the Hermes and OpenClaw documentation, and the more I read, the more I feel that many people compare these two projects as if they sat at the same level. In reality, the comparison is skewed from the start.

On the surface, both are building "personal AI assistants." Both can receive messages, call models, run tools, and retain some context. Hermes even shipped a dedicated `hermes claw migrate` command, clearly expecting to take in a batch of OpenClaw users.

But frankly speaking, Hermes is not a reskinned OpenClaw, and OpenClaw is not just an agent framework with a few extra message entry points. One grows outward from the Gateway; the other grows outward from the AIAgent. If you don't grasp this difference first, later discussions of architecture, design philosophy, and ecosystem will only add to the confusion.

AI writes demos fast, and the rework comes just as fast.

Recently, I started a small C++ project using AI. The most frustrating moment isn't when it can't write the code; it's when it spits out a decent-looking directory structure in three minutes and casually pulls in a few third-party libraries so the demo actually runs. That's exactly the problem: you haven't yet figured out what the newly introduced library supports, how compilation and linking work, or where its boundaries are, so rework later is basically inevitable.

I increasingly feel that the biggest risk in AI programming isn't a dumb model; it's starting too greedily. This is especially true for a language like C++, which has little scaffolding to fall back on: skip one step up front and you pay for it several steps later, in compilation, linking, library versions, and directory structure.

VS Code for C++: don't forget CMake and GDB pretty printers

Previously, when I debugged C++ in VS Code, my configuration basically stopped at launch.json, maybe with an extra line for GDB: fill in the program, fill in the gdb path, set the breakpoints. And then what? Then, every time before debugging, I had to manually run `cmake --build` in the terminal. Even more annoying: after hitting a breakpoint on custom price, contract, or order types, the VS Code debug window often showed only a pile of internal fields. The data was correct, but it wasn't human-readable.

The ridiculous part is that some tutorials I had seen stopped right around launch.json. It wasn't until recently, when I had an AI set up a new project and it casually added a preLaunchTask and a gdb_printers.py, that I realized debugging C++ in VS Code isn't just about starting GDB. You can trigger a CMake build automatically before debugging, and after a breakpoint you can have GDB load a Python script that formats business types into something readable.

Honestly, none of this is black magic. But it neatly fills two annoying gaps in day-to-day C++ debugging: building before launch, and variable display after a breakpoint.
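As a sketch of what such a gdb_printers.py can look like, here is a minimal pretty printer for a hypothetical fixed-point `Price` struct with `raw` and `scale` fields. The struct and its field names are assumptions for illustration, not taken from any real project; the GDB registration API (`gdb.pretty_printers`) is the standard one.

```python
def format_price(raw: int, scale: int) -> str:
    """Turn a fixed-point (raw, scale) pair into a readable string,
    e.g. raw=123450, scale=4 -> Price(12.345)."""
    return f"Price({raw / 10 ** scale})"

try:
    import gdb  # only importable inside a GDB session

    class PricePrinter:
        """Render a hypothetical Price{raw, scale} struct readably."""
        def __init__(self, val):
            self.val = val

        def to_string(self):
            return format_price(int(self.val["raw"]), int(self.val["scale"]))

    def lookup(val):
        # Route only our business type to the custom printer.
        if val.type.strip_typedefs().name == "Price":
            return PricePrinter(val)
        return None

    gdb.pretty_printers.append(lookup)
except ImportError:
    pass  # running outside GDB; only the plain formatter is available
```

To load it at debug time, the cppdbg `setupCommands` array in launch.json can include a `source` command pointing at this file (alongside `-enable-pretty-printing`), while a `preLaunchTask` referencing a tasks.json entry that runs `cmake --build` covers the pre-debug build.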