
AI Inspiration Hub

When you buy 006327 on Alipay, which day's NAV is it priced at?

I bought 006327 on Alipay today. I assumed that placing the order before 3 PM meant I would get that day's price, but what I saw quoted was "today's Hang Seng Tech closing price." The profit shown on the page didn't update for ages, and share confirmation was delayed too. My first reaction was sheer confusion: how is this thing actually priced and settled? And why do I have to wait another two days?

To be honest, this misunderstanding is extremely common. Alipay has made buying and selling over-the-counter (OTC) funds look too much like placing stock orders, but there are really two pitfalls here. First, 006327 is not a Hang Seng Tech index fund at all. Second, when you buy a fund through OTC channels, the price you get is not an index's real-time closing level but the net asset value (NAV) the fund company calculates for that specific day. Furthermore, coupled with the
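The cutoff mechanics described above can be sketched in a few lines. This is a simplified illustration, not Alipay's actual logic: the 3 PM cutoff, the hard-coded trading-day list, and the `nav_date` helper are all assumptions made for the example; a real system would consult an exchange calendar.

```python
from datetime import datetime, time

CUTOFF = time(15, 0)  # 3:00 PM market close, the usual OTC fund order cutoff

def nav_date(order_time: datetime, trading_days: list) -> str:
    """Return the date whose NAV an OTC fund order is priced at.

    Simplified sketch: an order placed before the 3 PM cutoff on a trading
    day uses that day's NAV; anything later rolls to the next trading day.
    `trading_days` is a sorted list of 'YYYY-MM-DD' strings (an assumption
    for this example).
    """
    d = order_time.strftime("%Y-%m-%d")
    if d in trading_days and order_time.time() < CUTOFF:
        return d
    # Roll forward to the first trading day strictly after the order date
    return next(day for day in trading_days if day > d)

# Example: an order at 3:30 PM on Friday is priced at Monday's NAV
days = ["2026-04-17", "2026-04-20", "2026-04-21"]
print(nav_date(datetime(2026, 4, 17, 14, 30), days))  # 2026-04-17
print(nav_date(datetime(2026, 4, 17, 15, 30), days))  # 2026-04-20
```

Confirmation then typically takes one or two more business days on top of this, which is the wait the article opens with.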

Fanqie Novel is hugely popular, but I still prefer reading classic fantasy.

I saw that Fanqie Novel was embroiled in another controversy about AI-written content in the past couple of days. My initial reaction wasn’t surprise, but rather a sense that this issue will eventually come to the surface. Considering the combination of free platforms, the pressure for daily updates, and algorithmic distribution, it is almost inevitable that authors will turn to AI to supplement their content capacity.

But honestly, I have a consistent feeling about many books on Fanqie: they are readable, and the first hundred chapters are often quite good. The further along you get, though, the more they reduce to tropes and speed, lacking the inherent power that classic xuanhuan or cultivation novels have. That power is hard to explain; it's probably the feeling of knowing a story is a bit cringey ("zhong er") yet still wanting to follow the characters all the way through.

Worth noting: this isn't meant as criticism of the platform. Fanqie being completely free obviously attracts a lot of readers; there's no arguing that point. But for someone like me, whose taste was spoiled early on by Tian Can Tu Dou, Wo Chi Xi Hong Shi, Er Gen, and Chen Dong, AI can match the output volume, but it cannot replicate the flavor.

If AI can write code, how will newcomers level up?

In the last few months, writing code with tools like Claude or Codex, my most striking realization wasn't that "programmers are obsolete," but that many of the tasks once handed to newcomers for practice can now be drafted by the tools themselves. Scaffolding a project, adding a few tests, making small ad-hoc modifications: run these through the tools and the speed is genuinely fast, so fast it feels almost bittersweet.

For someone like me, ten years out of school, this frankly mostly means more efficiency, because I generally know where the output is reliable and where it isn't, and where something looks functional but hides pitfalls further down the line. For fresh graduates, though, it isn't so straightforward. AI isn't just taking over a few hours of manual labor; it feels more like it is compressing the traditional path by which a newcomer goes from zero to proficient. That is why I wanted to write about it separately.

Fewer tokens, so why is GPT-5.5 in Codex actually more expensive?

Dumbfounded.

The official ChatGPT side doesn't make tokens and costs easy to track directly, so I found a third-party platform and ran a round of similar tasks through GPT-5.4 and GPT-5.5 in Codex with thinking set to high. The result was clear-cut: on simple questions the gap was mild, with GPT-5.5 about 30% more expensive than GPT-5.4; but once complex tasks were involved, the cost shot up to 2.6 times, with both request count and token consumption rising at the same time.

My current assessment is equally straightforward: this can't be settled by the statement "5.5 has a higher unit price." On simple tasks the cost mainly comes from the unit price; on complex tasks, what is actually expensive is the entire calling chain. Looked at another way, though, 5.5 genuinely feels like it is absorbing your rework costs for you: the model is more willing to think through multiple steps, take more actions, and check things more thoroughly. In the end you are billed not for a single answer but for the complete set of actions, which also minimizes the back-and-forth required from the human user.
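The unit-price-versus-chain-length point can be made with back-of-the-envelope arithmetic. All numbers below are hypothetical, chosen only to reproduce the rough ratios reported above; they are not the real Codex prices or token counts.

```python
def task_cost(price_per_mtok: float, tokens_m: float) -> float:
    """Cost of one task: unit price (per million tokens) times tokens used."""
    return price_per_mtok * tokens_m

# Simple task: unit price dominates. Assume a 30% higher unit price and
# similar token usage, so the cost ratio stays near 1.3.
simple_old = task_cost(10.0, 0.2)
simple_new = task_cost(13.0, 0.2)
print(round(simple_new / simple_old, 2))  # 1.3

# Complex task: the longer calling chain dominates. Assume the newer model
# also burns twice the tokens across extra steps and checks, and the
# multipliers compound on top of the unit price.
complex_old = task_cost(10.0, 1.0)
complex_new = task_cost(13.0, 2.0)
print(round(complex_new / complex_old, 2))  # 2.6
```

The takeaway is that once a task triggers a long chain of requests, the per-token price becomes a second-order effect next to how many tokens the chain consumes.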

ChatGPT Images 2.0 is very powerful. Can we still trust a screenshot?

Initially, I didn’t actually plan on testing it. When I came across the news that OpenAI was releasing ChatGPT Images 2.0 on April 21, 2026, my first reaction was just “another image version update.” However, when I checked the Artificial Analysis leaderboard and saw that GPT Image 2 (high) ranked first for text-to-image generation with an Elo of 1332, I felt a bit compelled to test it anyway.

The results are quite impressive: the Chinese output is excellent, it can handle comics, and character and narrative consistency across a sequence of images has improved. But the further I tested, the more I felt that what is truly worth discussing this time isn't "it draws better," but that it makes things we previously took as default truths look unreliable. That subject is more complicated than a leaderboard ranking.

The current wave of model competition has escalated to pricing and chips.

Scrolling through the model updates tonight was genuinely mind-boggling.

My current judgment is straightforward: this round is no longer merely a wave of model releases. It involves three fronts working simultaneously—model capabilities, API pricing, and chip stack ownership. Anyone who focuses on only one of these aspects will likely have a biased view. And it is precisely because these three dimensions are intertwining that the large model sector appears so intensely competitive.

After seeing news of sudden deaths, I'm questioning what the purpose of working is.

Over the past few days I came across the news that Zhang Xuefeng died of sudden cardiac death, and it truly made my heart sink. Not because I usually pay much attention to him, but because when something like this happens to someone who looks so vigorous, still running, still working, people naturally project it onto themselves. I sometimes get chest tightness myself lately. After scrolling through too many such updates, a thought popped into my head, incredibly foolish yet profoundly real: could I be next?

To be honest, I increasingly feel that this chest tightness isn't necessarily a pure heart problem. Of course you have to take your body seriously, especially recurring chest discomfort that worsens with activity and eases slightly with rest; those signals cannot be ignored, and public health campaigns have warned about them clearly enough. But the other half is probably that people have been suspended in life's limbo for too long. Long-distance relationships, stagnant jobs, savings built up without knowing what purpose they serve. Even on the occasional day off you don't feel relaxed, only emptier. In that state, any news of a sudden death turns into a persistent question: what exactly are you striving for

Pre-adjustment for backtesting differs between domestic and international markets.

A few days ago, someone asked me about the pre-adjusted prices of Kweichow Moutai's earliest trading days. Honestly, I was taken aback at first glance: looking at the "pre-adjusted" values, Yahoo showed no negative numbers while East Money did. When I went back to my June article, Detailed Explanation of 'Adjustment' and Data Acquisition in Backtesting, I realized I had conflated several things. At the time I presented the ratio method as if it were the single standard, but pre-adjustment in the domestic A-share context and the adjusted close commonly used for Hong Kong and US stocks are fundamentally different metrics.

This article does only one thing: separate these two metrics. My judgment up front, so you don't get lost later: the pre-adjustment (forward adjustment) of leading domestic apps is closer to leveling the candlesticks against the exchange's ex-rights/ex-dividend reference price, while the adjusted close common internationally uses a cumulative multiplier to express the total return of reinvesting dividends. Both are called pre-adjustment, but they answer different questions.
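To make the contrast concrete, here is a minimal sketch of the two conventions. It assumes only pure cash dividends and uses deliberately oversized payouts so the effect is visible in four bars; real data involves rights issues, splits, and exchange-published reference prices that this toy ignores, and the function names are mine, not any vendor's.

```python
def forward_adjust_cn(closes, dividends):
    """Domestic-app style forward adjustment (simplified sketch).
    For a pure cash dividend the exchange reference price is
    prev_close - dividend, so every earlier bar is shifted DOWN by the
    dividend to keep the candlesticks continuous at the ex-date. Repeated
    shifting is why the earliest prices of a long-running high-dividend
    stock can end up negative. `dividends` maps bar index -> payout."""
    adj = list(closes)
    for i, div in sorted(dividends.items()):
        for j in range(i):
            adj[j] -= div
    return adj

def adjusted_close_intl(closes, dividends):
    """International-style adjusted close (Yahoo-like, simplified).
    Each ex-date contributes a multiplier (prev_close - div) / prev_close
    applied to every earlier price, i.e. a cumulative total-return factor
    for reinvesting the dividend. A product of positive factors can shrink
    old prices toward zero but never below it."""
    adj = list(closes)
    for i, div in sorted(dividends.items()):
        factor = (closes[i - 1] - div) / closes[i - 1]
        for j in range(i):
            adj[j] *= factor
    return adj

# Toy series: flat price of 10 with two oversized dividends of 6 per share
closes = [10.0, 10.0, 10.0, 10.0]
dividends = {1: 6.0, 3: 6.0}
print(forward_adjust_cn(closes, dividends))    # first bar ends up at -2.0
print(adjusted_close_intl(closes, dividends))  # first bar stays positive
```

The shift-versus-scale difference is exactly why two "pre-adjusted" series for the same long-history stock can disagree even on the sign of its earliest prices.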