Parameter Update: 2025-37
"pulse" edition
Typed most of this on my phone on UK rail, so apologies for typos and no hard feelings if you skip it!
OpenAI
ChatGPT Pulse
This week, OpenAI launched the feature Altman had teased as “compute-intensive offerings, limited to Pro users for now” - ChatGPT Pulse. Pulse works in the background overnight, generating a small feed of personalized updates based on your past interactions. OpenAI has openly discussed wanting to make AI more proactive, and this is a big step in that direction. At the same time, it also looks like a big step in another direction they’ve been pursuing - ads. While they haven’t directly acknowledged it yet, launching a medium where (1) people are used to ads (an algorithmic feed) and (2) latency matters much less (could you imagine your ChatGPT response taking 20 seconds longer because it’s trying to incorporate a tasteful McDonald’s ad?) seems like a calculated move.
1/
— Nick Turley (@nickaturley) September 25, 2025
Today we’re releasing a preview of ChatGPT Pulse to Pro users on mobile.
Pulse is a new interaction paradigm: instead of waiting for your questions, ChatGPT proactively does stuff and brings you useful, personalized updates each day. pic.twitter.com/0s3GE5bzJ8
Today ChatGPT created the real estate they need for ads
— Danielle Morrill (@DanielleMorrill) September 26, 2025
"OpenAI for Germany"
A few weeks ago I talked about LongCat, the Chinese food delivery company training its own foundation model, seemingly as much out of spite as ambition. This week we finally got the European response: SAP announced a collaboration with OpenAI to host some of their tech. On Azure. With a total of 4,000 GPUs.
Qwen: Six launches in a day
Qwen3-Omni
Alibaba announced even more Qwen3 models this week: the “Qwen3-Omni” series, which aims to “unify text, image, audio and video” in a single model. In practice, this release consists of three models - “-instruct” (the standard one), “-thinking” (with explicit reasoning) and “-captioner” (built to auto-generate video captions).
I was genuinely worried the Qwen lab would start moving away from open source after the Qwen3 Max release, so the fact that all of these are fully open (and come with a nice tech report) makes me hope I was wrong about that.
🚀 Introducing Qwen3-Omni — the first natively end-to-end omni-modal AI unifying text, image, audio & video in one model — no modality trade-offs!
— Qwen (@Alibaba_Qwen) September 22, 2025
🏆 SOTA on 22/36 audio & AV benchmarks
🌍 119L text / 19L speech in / 10L speech out
⚡ 211ms latency | 🎧 30-min audio… pic.twitter.com/qGn34N7Xvd
Suno v5
Suno has launched its new music generation model, v5. While I’m not musically inclined enough to pinpoint the exact differences, I will note that it sounds good enough that, for the first time ever, I started listening to something on their landing page to check it out, promptly forgot about it, and let it play for over an hour instead of Spotify. Not sure if that says more about me or the model, though.
Everything changes with Suno v5. Launching today for Pro and Premier subscribers, the world's best music model delivers more immersive audio, authentic vocals, and unparalleled creative control that will transform how you make music.
— Suno (@SunoMusic) September 23, 2025
This breakthrough goes beyond making better… pic.twitter.com/QNrci69JW2
Apollo Research
Known for their safety-testing work on OpenAI models, Apollo Research published a paper this week exploring models “scheming” in their chain of thought (CoT). As far as I can tell, this is one of the best looks we’ve gotten yet at o3’s CoT, which is surprisingly incoherent and a bit creepy while also being a bit cute? Well worth a read either way!
OpenAI o-series use a lot of non-standard phrases in their CoT, like “disclaim vantage” and “craft illusions”. Sometimes these phrases have consistent meaning: e.g. models very often use “watchers” to refer to oversight, usually by humans. pic.twitter.com/0ZOtP7iKxx
— Apollo Research (@apolloaievals) September 22, 2025
Nvidia: OpenAI Investment
Rounding out last week’s finance announcements: Nvidia has announced plans to invest $100B into OpenAI at a total valuation of around $500B. If you need a refresher: OpenAI will spend most of this cash with Oracle, which will in turn spend it buying… Nvidia GPUs. Saw some raised eyebrows about the whole thing on my timeline.
Cloudflare: Vibe SDK
If you didn’t think we had enough vibe coding platforms out there already, Cloudflare has something special in store for you: with their open source “Vibe SDK”, you can now launch your own in minutes. Feels like a parody, a logical step, and an intentionally destructive action on my Twitter feed all at once. lol.