AI SLOPCAST #6: Elon, the Evil Detector, and the Prophecy of Schneier
Musk, Schneier, Zitron, Tao: four lenses for reading the AI industry
Somewhere around the twenty-eighth of April, Elon Musk publishes a post — possibly on Substack, possibly as a thread, the platform hardly matters — in which he describes Anthropic as "a far-left A.I. with ortho-fascist leanings" and announces that the company, in his exact words, "didn't trigger my evil detector." The post is quietly deleted some hours later. The cached copies, naturally, remain. The internet remembers everything, particularly the things you'd most like it to forget.
On the sixth of May, a week later, Anthropic announces that it is acquiring all of Colossus 1. Two hundred and twenty thousand accelerators. Three hundred megawatts. A data center originally built by xAI. Musk's comment: "Nobody triggered my evil detector." Meaning, I suppose: I met them in person, ran the diagnostic, they checked out. Changed my mind about making a fuss.
Seven days. From "far-left A.I." to "here, take two hundred and twenty thousand accelerators, my treat." Ladies and gentlemen, this isn't the fastest about-face of Musk's career — but it is comfortably in the top three.
Around this story, and several running parallel to it, sits a fresh installment of our ongoing serial drama: the three-way feud of Scam Altman, Elon Husk, and Dario Doomodei. Today's episode is devoted to it.
Part One: Musk as an Engineering Technique
Elon Errolovich Musk. The Musk pivot is not a bug — it's a feature. It is, in fact, his primary working tool.
A quick inventory of the pivots from recent years runs as follows. 2018: "funding secured" for taking Tesla private; it turned out not to be secured, and the S.E.C. fines came to forty million dollars, half from him, half from Tesla. No matter; Musk is a wealthy young man, he paid up. 2022: "I will not buy Twitter"; a court compelled him to buy it, and Musk announced this had been the plan all along. Five-dimensional chess. 2023: "A.I. is more dangerous than nuclear weapons," followed, in the same calendar year, by the founding of xAI. 2024: he attacks OpenAI for being closed-source, then builds his own model, also closed-source. 2026: Anthropic are ortho-fascists; one week later, by all means, take our entire Colossus, lock, stock, and barrel.
In his 2023 biography, Walter Isaacson offers what is, to my mind, the cleanest explanation of this behavior. It is neither ideology nor impulse but a negotiating gambit. Just as Trump's bargaining instrument is a strategic whiff of derangement, Musk's is the public broadside: a piece of leverage designed to shift a pending deal toward more favorable terms. Musk savages Anthropic for a week, and Anthropic, under pressure, agrees to terms that, absent the softening-up, would have been worse. The strategy is expensive, but it is rational.
This sequence has played out, by my count, at least seven times in plain view. Seven that I can summon off the top of my head. Why does the market still respond to each new Musk tweet as though it were a bulletin?
The answer is simple, and unflattering. The financial press has no interest in teaching us to ignore Musk. Every tweet is a plot twist, every plot twist is a spike of traffic from people running to read the news. If Bloomberg or CNBC ever ran the headline "Musk reverses position again, EXACTLY AS PREDICTED," it would be the last such headline they ran, because from then on the reversals would cease to be news. And they make their living on news.
From which the first practical lesson follows. When you read the next Musk tweet, don't ask "Is it true?" Ask "What deal is he applying pressure to?" Almost without exception, that reading will outperform the literal one.
The current pivot does, however, have one distinguishing feature relative to the previous six. Take the "evil detector" line seriously for a moment. Musk has spent years building a public position against alignment-by-training, against taking neural networks at their word about their own soundness in the absence of proof. If a man tells you he's normal, do you believe him? Then why on earth should we believe a neural network? It is, on its own terms, a perfectly good piece of reasoning. He has staked out a position in favor of safety-by-mathematical-proof. And now we discover that his all-clear for Anthropic is not, in his own framework, internally consistent. Constitutional A.I. is alignment-by-training, not alignment-by-proof. Claude Opus, by their own account one of the most powerful models in the world, has no formal safety proof. Both of these things, in Musk's coordinate system, ought to have set the evil detector off at full volume.
And then he sold his rock-solid position in a week. Sold it for access to silicon. There are simple cases of pivoting, where a person merely "changes his mind" — encounters a new fact, considers it, updates. This pivot is something else. It says: "I had a philosophy, and I traded it for two hundred and twenty thousand G.P.U.s." That is a brand-new tier of slop. And nobody is dissecting it publicly. Apparently because grabbing Musk by the lapels in earnest is awkward: Anthropic fans don't want to lend such criticism any legitimacy, and Musk-haters prefer the easier read of "the guy is talking nonsense again." Which, fair enough, he constantly is. Nothing new there.
Part Two: Schneier and the Prophecy of 2018
The second character is Bruce Schneier. If you ever studied cryptography from the Western canon, you've held his book. Applied Cryptography, 1996. A great deal has happened since — Crypto-Gram, the Harvard Kennedy School, the endless run of essays treating security as a cultural rather than a technical problem.
In 2003, Schneier coined the phrase "security theater": measures that produce the sensation of safety without actually producing safety. Airport security: shoe inspections, the little plastic bags for liquids, the ritual extraction of laptops from bags. Schneier predicted, even then, that the theater would only grow, because theater is cheaper than security and far more visible to voters.
In 2018, in Click Here to Kill Everybody, he made another prediction. Roughly: A.I. safety would become a status game, in which companies competed in the visibility of their concern rather than in the concern itself. This was eight years ago. Pre-GPT-3. Pre-ChatGPT. Pre-Claude Mythos. Pre-everything.
Now look at what we have. Anthropic runs Project Glasswing, a program giving some forty or fifty partners privileged access to Mythos. OpenAI runs Trusted Access for Cyber, open to thousands of vetted participants. Two strategies for deploying the same capability. The U.K.'s A.I. Safety Institute has published the actual numbers: GPT-5.5 completes a thirty-two-step attack scenario against a corporate network in two cases out of ten; Mythos in three. By the Institute's assessment, the offensive capabilities of frontier models double every four months, which compounds to roughly an eightfold jump per year.
Against this backdrop sits a critical detail the press has underplayed. Less than one per cent of the vulnerabilities Mythos finds are ever patched. By Anthropic's own data. As the saying goes — "you've got a vulnerability, I've got a vulnerability, but there's a NUANCE."
Look at it directly. The labs compete to discover vulnerabilities. The labs donate four million dollars to open-source security. The labs hand out a hundred million dollars in credits to their Glasswing partners. And the patch rate is under one per cent. What you're looking at is precisely the security theater Schneier described. Everyone is making the correct ritual gestures, and no actual reduction in risk is taking place.
Schneier himself is, this week, officially silent. He had promised a follow-up post surveying the situation and has not yet delivered. The silence is itself a signal: the situation is too obvious, and a hasty take would spoil the more serious piece that is, presumably, in the works. But Picus Security and Forrester have already articulated the same point in more engineering-flavored language. Picus puts it bluntly: "The same thing that breaks everything is also the thing that fixes it." Discovering vulnerabilities without the corresponding ability to close them at speed is, in the end, a great deal of motion in place.
Hence the second practical lesson. When you see an A.I. vendor advertising vast, sweeping cybersecurity capabilities — go straight for the patch numbers. "We found a thousand vulnerabilities" is not security; it is marketing. Security is "We found a thousand problems and we closed nine hundred and fifty." If the second number isn't there, you are watching a touring production of the theater.
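If you want the lesson as a checklist, it compresses to a single ratio. A minimal sketch in Python, with purely illustrative numbers and a threshold I made up; no vendor publishes figures in this format, which is rather the point:

```python
def verdict(found: int, patched: int) -> str:
    """Classify a vendor's security press release by its patch ratio."""
    if found == 0:
        return "no data"
    rate = patched / found
    # The 0.5 threshold is an arbitrary illustration, not an industry standard.
    label = "security" if rate >= 0.5 else "theater"
    return f"{rate:.0%} patched: {label}"

print(verdict(1000, 950))  # 95% patched: security
print(verdict(1000, 8))    # 1% patched: theater -- the sub-one-per-cent regime from above
```

The function takes two arguments. The press releases keep supplying only one.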
Part Three: Zitron and the Critic's Paradox
The third character is Ed Zitron. A former P.R. consultant, now host of the Better Offline podcast and author of the Substack newsletter Where's Your Ed At. He bills himself as "a professional thorn in Sam Altman's side" — that's not my characterization, it's his own.
Zitron, this week, dismissed the entire cyber-apocalypse story as "half-baked War of the Worlds nonsense." And meanwhile, in his telling, A.I. is dying, has practically died already.
But Zitron is a critic of A.I. hype. Zitron earns his living on the criticism of A.I. hype. His revenue streams are diverse: Substack subscribers, podcast downloads, speaking fees. But every one of them depends on A.I. remaining a hot topic. A hot topic is something you can rage about publicly, suffer over publicly, film angry YouTube reactions to, whatever you like.
If A.I. hype evaporates tomorrow, Zitron loses his subject. Loses his subscribers. Loses his income. He has a vital interest in there being something to criticize.
This does not, of course, render the criticism false. Logically, a critique can be perfectly accurate regardless of the critic's incentives. But it does shape what Zitron chooses to criticize and what he chooses to leave alone. He never writes about positive applications of A.I. Not because there aren't any — there are documented cases of cost reduction, productivity gain, expanded access. He doesn't write about them because the positive stories don't fit the script. Zitron's audience receives a picture no less distorted than the one consumed by Sam Altman's true believers.
The A.I. sellers and the A.I. critics — both camps need A.I. to remain a hot topic. The grifter wants investment and users; the critic wants subscribers and listeners. Their interests converge on the proposition that A.I. must matter. And this, equally, undermines the credibility of both.
When you read a piece by Zitron, ask yourself: what would he be writing about if A.I. hype simply vanished? The answer: something smaller and less consequential. His arguments don't become wrong because of this, but the selection of arguments is itself an instrument of propaganda.
Part Four: Terence Tao's Enthusiasm
The fourth character is Terence Tao. Fields medalist. Erdős number two. One of the most celebrated living mathematicians. He used to be an A.I. skeptic in mathematics: in 2022 and 2023, he publicly doubted that the GPT-class models were useful for serious work.
Tao maintains a public mathematical blog on Mastodon — having decamped from Twitter in 2022 — and there he discusses his work with the Lean proof assistant, with ChatGPT, and now with AlphaEvolve. All of it is public. Anyone may walk in and read.
A man whose public position was built on A.I. skepticism has, this week, attached his name to AlphaEvolve's usefulness — as a coauthor on the paper. The transformation from skeptic to enthusiast has reached its terminal stage.
What does each side get from the Google–Tao arrangement? Tao gets compute access, a paper with Google coauthors, placement in prestige journals, and the marketing apparatus of the largest marketing company on earth. DeepMind, in turn, gets academic legitimacy, Tao's name on the byline, and inoculation against the critique that "A.I. in mathematics is just a peculiar form of programming." And here is who doesn't get anything: the mathematical community at large, which receives no open methodology. AlphaEvolve stays inside Google. Replicating it via open code is technically possible, but it won't be the same thing.
In principle, Tao is acting reasonably. Most people in his position would do the same. Google shows up with a truck of money — are you really going to refuse? Are you nuts? But in the aggregate, all these private settlements with one's conscience add up to an academic culture that grows steadily weaker. Scholars are nudged into an unequal exchange between the academy and the corporation. If this continues, in five years the young mathematicians will be working with A.I. only through corporate channels, with no possibility of openly replicating the experiments. We've been here before, in medicine, where academic doctors got hooked on consulting deals with Big Pharma. We've been here in economics, where the academic economists likewise got hooked on consulting. In every case, individually rational and seemingly unimpeachable choices, summed across a community failing to act collectively, dismantle open research culture entirely.
Tao is too revered to criticize. But Oleg Chirukhin from the village of Kukuyevo is not particularly known to anyone, so Oleg can swing the truth-hammer as freely as he likes. And Oleg has never met Terence Tao, which makes it hard to suspect Oleg of any personal vendetta.
Closing
So, then. Our superheroes. Musk — pivoting incarnate, whose tweets must be read as bargaining moves. Schneier — the prophet of 2018, whose prediction about the theater of A.I. safety is materializing in front of you. Zitron — the publicist turned critic, whose criticism is real and valid and excruciatingly selective. And Tao — the skeptic turned coauthor at Google DeepMind.
Each of them teaches us a particular superpower. A particular facet of useful hypocrisy. Musk teaches us never to look at a thing head-on but always to ask how to use indirect levers to convince — or intimidate, or remove — the right people. Schneier teaches us to ask, "Is this appearance or reality?" — particularly in cybersecurity. Zitron teaches us to ask, "Who benefits from this story?" Tao is that rare creature who simply checks whether the thing works. If it works — good. If not — bad. End of test.
When you read the next blog post from a major A.I. company, or the next vulnerability disclosure, or the next breakthrough proclamation from an executive — ask yourself which of these four facets of hypocrisy is on display. Sometimes one fits. Sometimes all four. Sometimes — and this is the most interesting case — the four lenses give contradictory readings.
And when the readings disagree, that's how you know you've found a genuinely difficult story. When I'm scouting plotlines for this blog, those are the ones I dig into. The ambiguous ones.
The A.I. industry, in 2026, has in the end become a media industry, an entertainment industry. The whole thing is hype-driven. We have our own multiverse, our own superheroes, our own recurring story arcs. Understanding the cast is now a working, functional skill. Not a party anecdote.