💰 SoftBank has completed its $40 billion investment in OpenAI, source says ↗
SoftBank has reportedly finished funding its full $40B commitment to OpenAI - a figure with the kind of heft that makes your eyes refocus.
The report frames the commitment as fully funded, neatly threading into the broader “scale the compute” race that has become AI’s central plotline.
🧠 Nvidia in advanced talks to buy Israel's AI21 Labs for up to $3 billion, report says ↗
Nvidia is reportedly in advanced discussions to acquire AI21 Labs at a reported price of up to $3B. It reads as either audacious or entirely unsurprising, depending on how you view the GPU-to-everything pipeline.
It’s framed as part of Nvidia’s continuing push to bring more model talent and product capability closer to its core ecosystem.
🇪🇺 Poland urges Brussels to probe TikTok over AI-generated content ↗
Poland is urging the European Commission to look into TikTok after AI-generated videos carrying anti-EU messaging spread and were later removed. The episode carries a brisk “this is why platform rules exist” energy.
It leans on the idea that major platforms should respond faster and more consistently to AI-amplified disinformation risks - especially when the political temperature rises.
🏛️ Texas set to create a council to monitor certain uses of AI ↗
Texas is set to establish a council focused on monitoring certain government uses of AI. It’s governance by committee - slow, yes, but perhaps deliberately so when the tech outruns common sense.
The emphasis sits on oversight, guidance, and surfacing risky uses - including bias and other harmful outcomes that can slip in under the warm glow of “automation.”
🧯 The office block where AI 'doomers' gather to predict the apocalypse ↗
A reported look at a cluster of AI safety groups who spend their days mapping catastrophic failure modes - deception, misuse, cyber risk, and models doing things we didn’t explicitly ask for (which sits at the heart of the fear).
What stands out is how close-to-mainstream it all feels. Less tinfoil hat, more “risk lab” - like running fire drills while the building is still being assembled.
🕵️‍♀️ The latest AI news Google announced in December ↗
Google highlighted new Gemini-related updates, including tooling for identifying AI-generated or AI-edited media using SynthID signals. The subtext lands clearly: provenance is becoming a product feature.
It also reads like the start of a new normal, where every clip carries a quiet asterisk unless detection matures enough to soothe the room. No guarantees come packaged with that.