By: Thomas Stahura
Whenever I talk to someone who doesn’t follow AI news every day, the reaction is usually some variation of the same sentiment: Impressive but scary! That feels automatic now, like it’s been rehearsed. Each week’s headlines blur into the last.
It makes AI feel like old news. People seem to be waiting for the really big announcement. But what would that even look like? And what does that say about where we are in the AI hype cycle?
I bring this up because last week, for me, really felt like one of those “holy shit!” weeks, and it came from a flurry of announcements you may have seen but already forgotten. To catch you up:
Anthropic released its Claude 4 family of models
OpenAI acquired Jony Ive’s io design firm for $6.5 billion, catapulting OpenAI’s ambitions into hardware
Microsoft debuted Windows computer use agents and open sourced GitHub Copilot in VS Code at MS Build
Google held its annual I/O developer conference, announcing Gemini updates, a new open-source Gemma model, the Mariner browser agent in Chrome, and Veo 3 with audio generation (an impressive release given that it’s notoriously hard to sync generated video with audio)
So, here’s my take on the week’s announcements:
Claude 4 is incredible at coding, but average everywhere else.
If the Sam Altman–Jony Ive collaboration isn’t some kind of BCI wearable, it’ll feel like a letdown.
Microsoft made a lot of noise but showed few real products.
Google stole the show. I/O was sharp, and Veo 3 outputs flooded X/Twitter feeds.
The big announcements soaked up most of the attention, overshadowing some equally promising — but less polished — developments elsewhere in the AI world.
For starters, ByteDance quietly dropped a new open-source model: BAGEL, a 7-billion-parameter Omni model capable of understanding and generating language (reasoning and non-reasoning) and images (generating, editing, and manipulating). The model outperforms Qwen2.5-VL and InternVL-2.5. It’s only missing audio to complete the Omni modality trifecta!
Alibaba updated its Wan2.1 video model, claiming SOTA at 14 billion parameters. It can run on a single GPU and produce impressive 720p videos and edits. Still no audio for the videos. I’m noticing a trend…
Google, during I/O, open sourced MedGemma, a variant of Gemma 3 fine-tuned on medical text and clinical image comprehension. The model is designed to answer your medical questions like a nurse and analyze your X-rays like a radiologist. It’s available for free in 4B and 27B sizes.
That was the news of the last few weeks. Plenty of flash, plenty worth watching.
But the hype cycle has a funny way of resetting itself. And I’ve been thinking more about what’s happening off to the side. The stuff that isn’t getting the spotlight, but might shape the next phase of this industry (and maybe future Token Talk topics).
Stuff like DeepMind’s AlphaEvolve paper, which introduces a Gemini-powered agent designed specifically for the discovery and optimization of algorithms. AlphaEvolve uses an evolutionary framework to propose, test, and refine entirely new algorithmic solutions. It’s a tangible step toward AI systems that can do the science of computer science: actively exploring the space of programs and uncovering novel solutions.
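The propose/test/refine loop at the heart of AlphaEvolve is easy to picture. Here’s a toy, self-contained sketch of the pattern (my own illustration, not DeepMind’s code): candidates are just coefficient pairs, a scoring function plays the automated evaluator, and random mutation stands in for Gemini proposing program variants.

```python
import random

random.seed(0)

# Target behavior the evolved "program" should reproduce: f(x) = 3x + 2.
def target(x):
    return 3 * x + 2

TEST_INPUTS = range(-5, 6)

def score(candidate):
    """Test step: negative total error against the target on sample inputs."""
    a, b = candidate
    return -sum(abs((a * x + b) - target(x)) for x in TEST_INPUTS)

def mutate(candidate):
    """Propose step: perturb one coefficient of a parent candidate."""
    a, b = candidate
    if random.random() < 0.5:
        a += random.choice([-1, 1])
    else:
        b += random.choice([-1, 1])
    return (a, b)

def evolve(generations=200, population=20):
    """Refine step: keep the fittest candidates and mutate them."""
    pool = [(random.randint(-10, 10), random.randint(-10, 10))
            for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=score, reverse=True)
        survivors = pool[: population // 4]  # elitist selection
        pool = survivors + [mutate(random.choice(survivors))
                            for _ in range(population - len(survivors))]
    return max(pool, key=score)

best = evolve()
```

AlphaEvolve does this over real code with an LLM proposing edits and real benchmarks as the evaluator, but the selection pressure works the same way: keep what scores well, vary it, repeat.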
A nonprofit out of San Francisco called Future House is pursuing a much broader goal: automating the entire process of scientific discovery. It recently unveiled Robin, a multi-agent system that achieved its first AI-generated discovery: identifying an existing glaucoma drug as a potential new treatment for dry macular degeneration. Robin orchestrated a team of specialized AI agents to handle everything from literature review to data analysis, showing that AI can drive the key intellectual steps of scientific research.
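To make the orchestration idea concrete, here’s a minimal, entirely hypothetical sketch of the pattern: specialist agents as functions that pass shared state along a pipeline. The agent names and logic are my own stand-ins, not Future House’s actual system, where each stage would be a full LLM agent with tools.

```python
# Toy multi-agent pipeline in the spirit of Robin (hypothetical structure).

def literature_agent(question):
    """Stand-in for an agent that surveys prior work."""
    return {"question": question,
            "findings": ["drug A modulates pathway X",
                         "pathway X is implicated in the disease"]}

def hypothesis_agent(state):
    """Stand-in for an agent that proposes a testable hypothesis."""
    return {**state,
            "hypothesis": "drug A may treat the disease via pathway X"}

def analysis_agent(state):
    """Stand-in for an agent that weighs the hypothesis against evidence."""
    return {**state, "supported": len(state["findings"]) >= 2}

def orchestrate(question):
    """Chain the specialists, handing shared state from one to the next."""
    state = literature_agent(question)
    state = hypothesis_agent(state)
    return analysis_agent(state)

result = orchestrate("Can an existing drug treat dry macular degeneration?")
```

The interesting engineering problem isn’t any single agent; it’s the orchestrator deciding what to delegate, in what order, and when the evidence is strong enough to stop.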
It’s easy to mistake noise for signal, hype for substance. And believe me, there is more noise than signal in the AI world right now. But that happens at some point in every tech cycle. I think it would be a huge mistake to completely dismiss today's AI ambitions of automated discovery or human-machine telepathy.
AI today feels like where 3D printing was in 2013. There’s still a lot of excitement, but noticeably less than a few years ago. Will there be another AI winter? Almost certainly. Will it be anytime soon? No.
Hype doesn’t die as much as it transitions from one idea to another, from one industry to another. Within AI, chatbots, agents, and now discovery and robots have all been hyped. In the broader tech industry, mobile was hyped, then cloud, crypto, and now AI.
What's next? What new tech breakthrough will catch the collective consciousness the way AI has? Maybe space, carbon nanotubes, CRISPR, room temperature superconductors, fusion, quantum, or something entirely new that comes out of left field… Time will tell, so stay tuned!