Parameter Update: 2025-36

"basilico" edition

Bit of a slow news week this time, but at least the Meta glasses are neat?

Meta Ray-Ban Display

Meta has finally announced a follow-up to their very successful Ray-Ban AI glasses, this time featuring a display! Despite the demo fails, I am very excited about these - not because I particularly want to stick Facebook-Goggles on my face, but because (1) at just $800, they appear subsidised to some degree, and (2) we are finally getting a real SDK for them, so broad app support might be forthcoming. Zuck's Metaverse dream is back from the dead at last!

OpenAI

gpt-5-codex

While most people seem to have come around to GPT-5 being a "good model, actually", it has consistently felt just slightly too slow to really keep me in the flow while working with it. At least for coding, that changed this week with GPT-5-codex (really, OpenAI? How many more things called Codex are you going to make?). The model continues the "dynamic reasoning" trend: it is marginally better at code while being much more efficient, spending fewer tokens on easy requests and more on complex ones. Feels underrated so far.
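
For reference, this is roughly what using it looks like - a minimal sketch, assuming the "gpt-5-codex" model name is (or becomes) reachable through OpenAI's standard Responses API; availability there is my assumption, not something from the announcement. The point is simply that there is no manual effort knob to tune - the model decides how long to think.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Minimal sketch: assumes "gpt-5-codex" is exposed via the Responses API.
# Note the absence of any reasoning-effort parameter - the "dynamic
# reasoning" idea is that the model scales its own thinking with the
# difficulty of the request.
response = client.responses.create(
    model="gpt-5-codex",
    input="Refactor this loop into a list comprehension: ...",
)

print(response.output_text)
```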

Parental Controls

I'm still not sure why everyone seems to have collectively lost their minds about child protection online recently (why would I need to show ID to look at Wikipedia, UK?), but it seems that OpenAI is up next. In a new blog post, they announced that ChatGPT will start estimating user age based on platform behaviour and adapting accordingly. While I can get behind this for things that should really just be settings (e.g., having the model flirt with me - I'd like to just be able to turn that off?), having the model contact authorities when it feels like I am exhibiting dangerous behaviour seems like a recipe for disaster. There needs to be a better way of doing this?

Grok-4-Fast

xAI has unveiled a "Fast" version of their Grok 4 model. This is especially interesting given that the Grok 4 series has so far been among the most limited, slow, and expensive models around. I haven't gotten around to trying it, but the cost/intelligence frontier being pushed forward is always good to see - we'll need somewhere to switch to once OpenAI inevitably cuts Codex rate limits.

NVIDIA x Intel

This week, Nvidia agreed to buy $5B worth of Intel stock as part of a strategic partnership to "develop AI infrastructure and personal computing products". It's no secret that Intel has been going through a bit of a rough patch recently, so this seems like a win-win play that finally gets Nvidia some skin in the game in x86 CPU development. Not a good day to be an AMD shareholder, though.

Anthropic Economic Index & OpenAI "How people are using ChatGPT"

Both Anthropic and OpenAI gave us some insight into how people are using their products this week - a very interesting look at the use cases people are actually realizing (vs. the ones that look nice in announcements). Surprising takeaways:

  • Germany seems to be doing pretty well on adoption overall (Top-25% for Claude usage)
  • There's a shift in usage from "Doing" (40% of usage) to "Asking" (49% of usage) in ChatGPT - I would have expected the opposite to be the case, given the focus on "agentic" behaviour? Anthropic also notes a shift from "Augmenting" to "Automating" in their usage data, so there's a bifurcation happening?
  • Just 30% of ChatGPT usage is work-related!

NeurIPS Acceptances

A bit of a smaller item, but last week also brought the NeurIPS acceptance notifications. As you'd expect with over 25,000 submissions, this wasn't entirely undramatic. On the one hand, there were issues around Russian authors being blanket-sanctioned; on the other, it seems that ~400 good papers may have been rejected due to capacity constraints (though the conference denies this).