AI News Wrap-Up: 17th December 2025

💰 Amazon in talks to invest $10bn in ChatGPT developer OpenAI

Amazon is reportedly in discussions to put more than $10bn into OpenAI - and if it actually happens, it would push OpenAI’s valuation into the “wait, seriously?” zone above $500bn. It’s being positioned as a mix of funding and strategic alignment… plus the simplest motivator of all: compute hunger.

The reporting also suggests OpenAI could lean more heavily on AWS capacity and potentially start using Amazon’s Trainium chips, basically turning this into a supply line for the next wave of model scaling (or so it seems - these talks can wobble).
🔗 Read more

🧑‍💻 Developers can now submit apps to ChatGPT

OpenAI has opened app submissions for review and publication inside ChatGPT, alongside an in-product app directory where people can browse featured apps or search for anything that’s published. Apps can be triggered mid-conversation via @mentions or picked from the tools menu - very “apps, but chat-native.”

They’re also pushing an Apps SDK (beta) plus a bundle of dev resources (examples, UI library, quickstart). Monetization is cautious for now - mostly linking out to complete transactions - but it’s pretty clear OpenAI wants this to grow into a real ecosystem.
🔗 Read more

🗞️ Introducing OpenAI Academy for News Organizations

OpenAI launched a learning hub aimed at journalists, editors, and publishers, built with partners like the American Journalism Project and The Lenfest Institute. The pitch: practical training and playbooks that help newsrooms use AI without quietly eroding trust in the process.

The Academy’s launch slate includes “AI Essentials for Journalists,” plus use cases like investigative/background research, translation, data analysis, and production efficiency. There’s also a very noticeable emphasis on responsible use and internal governance - because, yeah, someone has to write the policy doc nobody wants to write.
🔗 Read more

⚡ Gemini 3 Flash: frontier intelligence built for speed

Google rolled out Gemini 3 Flash as a faster, more cost-efficient model - and made it the default in the Gemini app and AI Mode in Search. The pitch is basically “Pro-grade reasoning, Flash-level speed,” which sounds like a slogan… but also kind of describes the whole race right now.

It’s also being pushed across developer and enterprise surfaces (Gemini API, AI Studio, Vertex AI, and more). The weirdly big subtext: Google wants this model everywhere people already are, so switching costs start to feel like gravity.
🔗 Read more

🧩 OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems

NVIDIA is bundling simulation standards and safety workflows into a more coherent “physical AI” stack - robots and autonomous vehicles that have to survive real-world chaos. A key ingredient is OpenUSD Core Specification 1.0, meant to make 3D/simulation pipelines more predictable and interoperable across tools.

On the safety side, NVIDIA highlights the Halos AI Systems Inspection Lab (and certification program) for robotaxi fleets, AV stacks, sensors, and platforms. Early participants named include Bosch, Nuro, and Wayve, with Onsemi called out as the first to pass inspection - a nice little “badge unlocked” moment.
🔗 Read more

🧪 UC San Diego Lab Advances Generative AI Research With NVIDIA DGX B200 System

UC San Diego’s Hao AI Lab received an NVIDIA DGX B200 system to push research on low-latency LLM inference - the unglamorous plumbing that decides whether “AI feels instant” or “AI feels like waiting for toast.” NVIDIA also notes that production inference systems like Dynamo draw on concepts from the lab’s work, including DistServe.

The story leans hard into “goodput” vs throughput - basically, throughput that still hits latency targets. They also describe splitting prefill and decode across different GPUs to reduce resource interference, which is nerdy, yes, but it’s the kind of nerdy that changes how a product feels.
🔗 Read more
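To make the goodput idea concrete, here's a minimal sketch of the distinction. The SLO numbers and request values are illustrative assumptions, not figures from the article - the point is just that goodput only counts requests that hit their latency targets, while raw throughput counts everything.

```python
# Hypothetical sketch: "goodput" vs raw throughput for LLM serving.
# TTFT = time to first token, TPOT = time per output token.
# The SLO thresholds below are made-up examples, not real targets.
from dataclasses import dataclass

@dataclass
class Request:
    ttft_ms: float  # time to first token, in milliseconds
    tpot_ms: float  # average time per output token, in milliseconds

def throughput(requests: list[Request], window_s: float) -> float:
    """All completed requests per second, latency ignored."""
    return len(requests) / window_s

def goodput(requests: list[Request], window_s: float,
            ttft_slo_ms: float = 200.0, tpot_slo_ms: float = 50.0) -> float:
    """Only requests that meet BOTH latency SLOs count."""
    ok = [r for r in requests
          if r.ttft_ms <= ttft_slo_ms and r.tpot_ms <= tpot_slo_ms]
    return len(ok) / window_s

reqs = [
    Request(150, 40),   # meets both SLOs
    Request(180, 45),   # meets both SLOs
    Request(900, 42),   # slow first token (e.g. stuck behind a long prefill)
    Request(160, 120),  # slow decode (e.g. resource interference)
]
print(throughput(reqs, 1.0))  # 4.0 - looks great on a dashboard
print(goodput(reqs, 1.0))     # 2.0 - what users actually experience
```

This is also why splitting prefill and decode onto different GPUs matters: a long prefill no longer stalls everyone else's token stream, so more requests stay inside their SLOs and goodput rises even if raw throughput barely moves.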

🏗️ Hut 8 signs 245MW capacity deal with Fluidstack as part of multi-gigawatt partnership with Anthropic

Hut 8 signed a long-term deal for 245MW of capacity at its River Bend campus, leasing to AI cloud firm Fluidstack in a structure valued at $7bn (with options that could push it much higher). Anthropic is tied in as the end user via the broader partnership - this is crypto-mining infrastructure pivoting into AI muscle again, just… bigger.

There’s also a right of first offer for up to an additional 1GW at River Bend, plus financing involvement from major banks and a Google backstop. Honestly, the whole thing reads like “AI wants power and real estate - and it wants them yesterday.”
🔗 Read more
