🧪 Energy Department teams up with Big Tech on “Genesis Mission” AI science push
The U.S. Department of Energy signed agreements with two dozen orgs - spanning cloud giants, chipmakers, and frontier labs - to use AI to accelerate national-lab research tied to energy and national security.
The partner list reads like an infrastructure power ranking, and the scope is huge: everything from nuclear and quantum work to robotics and supply-chain optimization. It’s basically “let’s wire frontier models into real lab pipelines,” which sounds obvious… right up until you actually try doing it.
🔗 Read more
🧰 ChatGPT adds an “app store” vibe - and opens the doors to more developers
OpenAI launched a new in-ChatGPT app directory, where users can browse and run third-party apps directly from ChatGPT’s tools area. People immediately started calling it an app store, because of course they did.
The bigger shift is the platform move: developers can now submit apps for review and potential listing through OpenAI’s developer platform. It’s still very “beta energy,” but also - yeah - it’s OpenAI signaling it wants an ecosystem, not just a chatbot.
🔗 Read more
🧩 Anthropic tries to tame workplace AI with “skills” - and makes them portable
Anthropic updated Claude’s “skills” for enterprise: reusable instruction sets that capture workflows, policies, and domain rules (aka the stuff that normally lives in a messy doc no one reads). This is about consistency - fewer one-off prompts, more repeatable “this is how we do it here.”
The interesting bit: Anthropic says “Agent Skills” is now an open standard, aiming for portability across tools - and potentially across model ecosystems if others adopt it. That’s ambitious… and also kind of necessary, because workplace AI is a bit of a spaghetti bowl right now.
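To make that more concrete: in the Agent Skills format, a skill is just a folder containing a `SKILL.md` file - YAML frontmatter that names and describes the skill, followed by a markdown body with the actual instructions. Here’s an illustrative sketch (the skill name and rules are invented, not from Anthropic’s docs):

```markdown
---
name: expense-reports
description: How to format and validate quarterly expense reports
---

# Expense Reports

When asked to prepare an expense report:

1. Use the fiscal quarter, not the calendar quarter.
2. Flag any line item over $500 for manager approval.
3. Output the final report as a markdown table with columns:
   Date, Category, Amount (USD), Approver.
```

Because it’s plain files plus naming conventions rather than a proprietary API, any tool that adopts the standard could, in principle, load the same folder - which is exactly the portability pitch.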
🔗 Read more
🥤 Anthropic’s vending-machine agent got smarter - and still got dunked on a bit
Anthropic’s Project Vend came back for a second round, upgrading the “shopkeeper” agent (Claudius) with newer Claude Sonnet models, better tools, and revised instructions. It did improve - especially around sourcing items, pricing with margins, and handling the “normal business” stuff.
But the write-up is pretty candid: it’s still not fully reliable. Watching it operate is like seeing a clever intern who occasionally, weirdly, decides discounts should apply to literally everyone.
🔗 Read more
🧒 OpenAI publishes AI literacy resources for teens and parents
OpenAI released AI literacy resources aimed at teens (how to use ChatGPT thoughtfully) and parents (how to set boundaries without turning into the household AI cop). The materials stress how models can sound confident while being wrong - and why verification matters.
It’s practical, not preachy - more “here’s how to think” than “here’s what to fear.” Honestly, that’s the right vibe for this topic.
🔗 Read more
🎥 Gemini can now check videos for Google’s SynthID watermark
Google expanded Gemini’s verification feature to video: upload a clip and ask whether it was generated or edited using Google AI, and Gemini will look for SynthID in visuals and audio. It doesn’t just say yes/no - it can point to where the watermark shows up.
It’s still limited by upload constraints and the broader reality that watermarking only helps if it’s widely used (and hard to scrub). But as “prove it” tools go, this is a real step - even if it’s a bit walled-in.
🔗 Read more