<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Cybersecurity on Uncle Xiang&#39;s Notebook</title>
        <link>https://ttf248.life/en/tags/cybersecurity/</link>
        <description>Recent content in Cybersecurity on Uncle Xiang&#39;s Notebook</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en</language>
        <lastBuildDate>Thu, 16 Apr 2026 21:23:19 +0800</lastBuildDate><atom:link href="https://ttf248.life/en/tags/cybersecurity/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>Lock down the strongest model first: AI companies start selling access control</title>
        <link>https://ttf248.life/en/p/anthropic-mythos-preview-access-control/</link>
        <pubDate>Thu, 16 Apr 2026 20:12:14 +0800</pubDate>
        
        <guid>https://ttf248.life/en/p/anthropic-mythos-preview-access-control/</guid>
        <description>&lt;p&gt;These past couple of days, I came across Anthropic&amp;rsquo;s &lt;code&gt;Project Glasswing&lt;/code&gt;, which is scheduled for release on April 7, 2026. My first reaction was a bit stunned. It wasn&amp;rsquo;t because another model scored higher, but because it locked the top-tier capabilities into a small circle, initially reserved for defensive players like AWS, Apple, Google, Microsoft, and Linux Foundation.&lt;/p&gt;
&lt;p&gt;My own judgment is blunt: this matters more than another benchmark record. What frontier AI companies sell now is no longer just the model itself, but an entire access-control regime: who gets the capability first, how much of it they get, and what auditing and constraints they must accept once they have it. Models are starting to look like dangerous tools, and release cadence is starting to look like issuing licenses.&lt;/p&gt;
&lt;h2 id=&#34;this-is-not-hoarding-its-giving-equipment-to-the-defending-side-first&#34;&gt;This isn&amp;rsquo;t hoarding; it&amp;rsquo;s arming the defenders first
&lt;/h2&gt;&lt;p&gt;Many news outlets like to frame stories like this as &amp;ldquo;the model is too dangerous to release.&amp;rdquo; That isn&amp;rsquo;t entirely wrong, but it isn&amp;rsquo;t precise either. Anthropic put it more clearly: &lt;code&gt;Claude Mythos Preview&lt;/code&gt; is a general frontier model, but because its cybersecurity capabilities turned out to be unexpectedly strong, the company chose to launch &lt;code&gt;Project Glasswing&lt;/code&gt; first, letting a group of critical-infrastructure operators and open-source maintainers use it for defense before anyone else.&lt;/p&gt;
&lt;p&gt;What matters most here is not the &amp;ldquo;restriction&amp;rdquo; but the &amp;ldquo;ordering.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Anthropic&amp;rsquo;s public materials state some very hard facts: &lt;code&gt;Mythos Preview&lt;/code&gt; has found zero-day vulnerabilities in every major operating system and mainstream browser; it can chain multiple vulnerabilities into a complete exploit; and even Anthropic engineers without formal security backgrounds could set it on a task overnight and see a working exploit the next day. Honestly, my first reaction wasn&amp;rsquo;t &amp;ldquo;that&amp;rsquo;s impressive&amp;rdquo; but &amp;ldquo;the quiet days are probably over for a lot of old software.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;So Anthropic didn&amp;rsquo;t open the model up wholesale; it deployed the capability to the defensive side first. The initial partners are not only security companies but also cloud vendors, chip makers, banks, and infrastructure stewards like the Linux Foundation. That choice says a lot: AI companies are already anticipating the next stage of security problems, where the question is no longer whether a single team can patch its own code, but who can patch the layer the whole industry depends on first.&lt;/p&gt;
&lt;h2 id=&#34;model-starts-assigning-access-control-audit-and-price-lists&#34;&gt;The model now ships with access control, audits, and a price list
&lt;/h2&gt;&lt;p&gt;The more interesting part comes next.&lt;/p&gt;
&lt;p&gt;Anthropic is not merely running an internal test; it has turned this into a research preview with a budget, a partner list, and follow-on pricing. The official plan: give the initial participants up to $100 million in usage credits, then keep access open at $25 per million input/output tokens, reachable through the Claude API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry.&lt;/p&gt;
&lt;p&gt;This is no longer just research conducted in a lab; it already looks very much like a formal product, except that the first layer of the product is not self-service activation, but an access control system.&lt;/p&gt;
&lt;p&gt;I think the signal here is pretty clear. People used to understand model releases in two tiers: publicly available, or not yet released. A third, more realistic tier is emerging: the capability ships, but access is gated by identity, scenario, visibility, and responsibility. Anthropic updated its &lt;code&gt;Responsible Scaling Policy 3.0&lt;/code&gt; on February 24, 2026, making risk reporting and external review much more concrete; with Mythos&amp;rsquo;s limited release on April 7, 2026, that governance framework moved into the actual product cadence.&lt;/p&gt;
&lt;p&gt;And Anthropic is not alone. OpenAI launched &lt;code&gt;Trusted Access for Cyber&lt;/code&gt; on February 5, 2026, then expanded it on April 14, 2026, offering looser access tiers, and even a dedicated model tier like &lt;code&gt;GPT-5.4-Cyber&lt;/code&gt;, to defenders with stronger attestations, specifically for cybersecurity use cases. Its framing is very direct: cybersecurity capability is dual-use, so risk depends not only on the model itself but on who the user is, what verification signals they carry, and what level of permission they are granted.&lt;/p&gt;
&lt;p&gt;How to put it: after walking through all of this, I increasingly feel that the real value next will lie not in &amp;ldquo;whose model dares to write the exploit,&amp;rdquo; but in who can make the supporting infrastructure (identity verification, logging and traceability, purpose tiering, platform integration, external collaboration) the default configuration. Without that layer, the stronger the capability, the more awkward the release.&lt;/p&gt;
&lt;h2 id=&#34;what-does-this-mean-for-a-regular-developer&#34;&gt;What does this mean for a regular developer?
&lt;/h2&gt;&lt;p&gt;If you don&amp;rsquo;t work in security, this news might look like a game for big-company heavyweights with little to do with you. I actually think it touches you quite directly.&lt;/p&gt;
&lt;p&gt;First, when you see &amp;ldquo;the new model is not fully open,&amp;rdquo; don&amp;rsquo;t immediately assume the vendor is being coy. Often the more accurate read is that the model already works, but the company hasn&amp;rsquo;t figured out how to sequence the risk of releasing it. Can it go first to a small circle of trusted users? Can logging be enforced? Can calls with no visible trace be blocked? These questions now directly determine whether a model can go live.&lt;/p&gt;
&lt;p&gt;Second, the default assumptions of software development may be shifting. Many teams used to believe that finding critical vulnerabilities took deep human experience, long audit cycles, and a bit of luck. That bar is now being lowered by agentic coding and stronger reasoning. Good news for defenders; much less good for teams still running on the &amp;ldquo;ship first, patch security later&amp;rdquo; mentality.&lt;/p&gt;
&lt;p&gt;Third, the commercialization path for model capabilities will look more like a cloud permission system than a pure SaaS subscription. Ordinary users buy chat and code generation; enterprises may be buying finer-grained permissions, auditing, deployment locations, data visibility, and refusal boundaries that fail less often. The model itself still matters, but what ultimately drives the price gap may not be the benchmark score.&lt;/p&gt;
&lt;p&gt;That is why this article doesn&amp;rsquo;t dwell on Mythos&amp;rsquo;s benchmarks, or on the details of the vulnerabilities found in OpenBSD, FFmpeg, or the Linux kernel. Those things are exciting, but they are surface detail. What&amp;rsquo;s worth remembering is that from April 2026 on, releasing a frontier model is no longer just a question of &amp;ldquo;can it be built&amp;rdquo; but of &amp;ldquo;can it be controlled once released.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Honestly, the change is a bit bittersweet, because it means the scarcest thing in the future won&amp;rsquo;t just be smart models but trustworthy entry points. Once you see that, the recent wave of identity verification, tiered access, and industry partnerships from AI companies stops reading like official boilerplate; it is paving the road for the next, stronger generation of models.&lt;/p&gt;
&lt;h2 id=&#34;references&#34;&gt;References
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;Anthropic, “Project Glasswing,” 2026-04-07: &lt;a class=&#34;link&#34; href=&#34;https://www.anthropic.com/glasswing&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://www.anthropic.com/glasswing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anthropic Frontier Red Team, “Assessing Claude Mythos Preview’s cybersecurity capabilities,” 2026-04-07: &lt;a class=&#34;link&#34; href=&#34;https://red.anthropic.com/2026/mythos-preview/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://red.anthropic.com/2026/mythos-preview/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anthropic, “Anthropic’s Responsible Scaling Policy: Version 3.0,” 2026-02-24: &lt;a class=&#34;link&#34; href=&#34;https://www.anthropic.com/news/responsible-scaling-policy-v3&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://www.anthropic.com/news/responsible-scaling-policy-v3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;OpenAI, “Introducing Trusted Access for Cyber,” 2026-02-05: &lt;a class=&#34;link&#34; href=&#34;https://openai.com/index/trusted-access-for-cyber/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://openai.com/index/trusted-access-for-cyber/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;OpenAI, “Trusted access for the next era of cyber defense,” 2026-04-14: &lt;a class=&#34;link&#34; href=&#34;https://openai.com/index/scaling-trusted-access-for-cyber-defense/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://openai.com/index/scaling-trusted-access-for-cyber-defense/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;writing-notes&#34;&gt;Writing Notes
&lt;/h2&gt;&lt;h3 id=&#34;original-prompt&#34;&gt;Original Prompt
&lt;/h3&gt;&lt;blockquote&gt;
&lt;p&gt;$blog-writer I don&amp;rsquo;t know what to write, so search for hot news in the AI circle and just write something.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h3 id=&#34;writing-outline-summary&#34;&gt;Writing Outline Summary
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;This article focuses on the line regarding Anthropic&amp;rsquo;s &lt;code&gt;Project Glasswing&lt;/code&gt; release on April 7, 2026, because it is both a hot topic and allows for a clear judgment.&lt;/li&gt;
&lt;li&gt;The main thread of the body is not to reiterate how powerful Mythos is, but to emphasize that AI companies are starting to sell access control, auditing, and trust layering as part of their products.&lt;/li&gt;
&lt;li&gt;In the middle section, by contrasting Anthropic&amp;rsquo;s restricted release with OpenAI&amp;rsquo;s &lt;code&gt;Trusted Access for Cyber&lt;/code&gt;, it shows that this is not an action by a single company, but rather a shift in industry rhythm.&lt;/li&gt;
&lt;li&gt;The article deliberately avoids specific exploit details and benchmark numbers, focusing instead on release ordering, commercialization, and risk boundaries.&lt;/li&gt;
&lt;li&gt;The conclusion returns to the perspective of ordinary developers and enterprise purchasers, concluding with the judgment that &amp;ldquo;what will be scarcer in the future is trustworthy access points, not just stronger models.&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;</description>
        </item>
        
    </channel>
</rss>
