Parameter Update: 2026-12
"claude code crackdown" edition
It feels like Anthropic can't go a week without getting caught up in something. At least this one isn't quite as bad as the DOD drama, I guess?
Anthropic
Claude Code Leak
Through the accidental inclusion of source maps, Anthropic shipped internal Claude Code source as part of the public npm package for Claude Code this week. This wouldn't be such a big deal if the CLI had been open-source to begin with (like most of its competitors, including Codex), but alas it is not, and so it was. The leak gave us some insight into features currently in development (like a new "always on" agent mode called KAIROS, the April Fools' /buddy command, and a new renderer hidden behind the hilarious CLAUDE_CODE_NO_FLICKER option), and into Anthropic's handling of the crisis, which landed somewhere between "surprisingly good failure culture" and "probably illegal DMCA takedowns".
Ban on OpenClaw Usage
Ever since Peter Steinberger joined OpenAI a few weeks ago, this was really only a matter of time: Anthropic is disallowing the use of Claude subscriptions with OpenClaw and other third-party tools. This sucks, of course, but at least they're handing out a one-time grant of usage credits valued at one month of your current subscription. Now, I'm all ears for other models to power OpenClaw with - the last time I tried it with GPT-5.4, the experience was very sobering.
Google: Gemma 4
The newest generation of Google's open models dropped. Gemma 4 matches much larger models in benchmarks (which makes me wonder why they chose to highlight Elo as a headline metric?). I assume there is a lot of benchmaxxing going on and actual performance will be more in line with other models its size (we're not getting "Claude Sonnet on an iPhone" just yet), but this is still a very good drop that should hopefully trickle down into large parts of Google's "local AI" product stack (think: powering parts of Android).
Gemma 4 is our most capable open model family yet:
— Google (@Google) April 2, 2026
🔵 Four versatile sizes
🔵 Up to 256K context window
🔵 Native function-calling for autonomous agents
🔵 Offline, high-quality code generation
🔵 Native multimodal support
🔵 Trained on 140+ languages
🔵 Commercially permissive… pic.twitter.com/9avH9dHDQP
Microsoft Model Launches
Believe it or not, Microsoft is technically still part of the AI race (even as their Superintelligence definition increasingly slips towards "meeting enterprise needs"). Three new models in this release:
- MAI-Transcribe-1, billed as the "most accurate transcription model in the world"
- MAI-Voice-1, a new TTS model that actually sounds surprisingly decent
- MAI-Image-2, a new image generation model
Out of all of these, MAI-Image-2 received the most hype on my timeline. It actually dropped a while ago, but no one appears to have noticed until now? Anyway - the more players the better, but there are few things in this release that are genuinely exciting.
We’re bringing our growing MAI model family to every developer in Foundry, including …
— Satya Nadella (@satyanadella) April 2, 2026
· MAI-Transcribe-1, most accurate transcription model in world across 25 languages
· MAI-Voice-1, natural, expressive speech generation
· MAI-Image-2, our most capable image model yet
Start… pic.twitter.com/p0DZZcAUZ4
Netflix: VOID Model
Something I didn't have on my bingo card this year: Netflix is apparently building AI models now? Their new VOID model takes a video and a prompt and then, based on the prompt, removes objects from the video. The twist: it doesn't just do inpainting, it also accounts for changes in physics and interaction (i.e., if you remove a person flipping a switch, the switch shouldn't get flipped in the output video).
Netflix just dropped VOID.
— Min Choi (@minchoi) April 5, 2026
This AI removes objects from video...
And even corrects the physics after objects/people are removed.
Demo in commets👇 pic.twitter.com/vxDdnl6l7B
In regular engineering news (not even really news, just something really impressive I completely missed): Netflix was able to massively reduce streaming bandwidth using synthesized film grain - instead of spending bits encoding the noisy grain itself, the grain is stripped out before encoding and an artificial approximation is re-added on the client. Some people are saying they're pushing bandwidth as low as 200 kbps. I wasn't able to verify that specific claim, but what I can say: the last time I was stuck on very slow Wi-Fi, Netflix was the only service that could play video reliably. So at least the recurring subscription price increases are going somewhere, I guess?
OpenAI: Raising new round
In case you missed it, OpenAI is currently still busy burning enormous amounts of money. This week, they announced a new cash refill, this time raising $122 billion at an $852 billion valuation. Yup, that puts it somewhere between the GDP of Belgium and Switzerland. Now, that's obviously not a fair comparison, but also not as far off as you may think, given the $122B might give them as little as 18 months of runway?
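For the curious, here's the back-of-the-envelope math behind that runway remark - note the burn rate is implied by the rumor, not an officially reported figure:

```python
# Back-of-the-envelope: what monthly burn rate would turn a $122B
# raise into just 18 months of runway? (Implied figure, not reported.)
raise_usd = 122e9
runway_months = 18

implied_monthly_burn = raise_usd / runway_months
implied_annual_burn = implied_monthly_burn * 12

print(f"implied burn: ${implied_monthly_burn / 1e9:.1f}B/month, "
      f"${implied_annual_burn / 1e9:.0f}B/year")
# → implied burn: $6.8B/month, $81B/year
```

Which is to say: if the 18-month figure is even roughly right, they'd be burning on the order of $80B a year. Make of that what you will.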
This comes the same week SpaceX IPO rumors ran wild, with it apparently targeting a valuation approaching $2 Trillion (lol).
I’m getting sick of this shit https://t.co/nF0ZZMJ8lV pic.twitter.com/wz0lt0rarI
— JT (@jiratickets) March 31, 2026