🧑⚖️ Authors drag xAI, OpenAI, Google and others into a fresh copyright fight
John Carreyrou (the Theranos guy) plus five other writers filed a lawsuit against a big stack of AI companies, alleging their books were used - without permission - for chatbot training.
It’s not a class action either, and that’s part of the design. The authors are saying class actions tend to end in settlements where individual creators get… crumbs, so they’re choosing a route that lets them push harder than that.
They also point to earlier book-related settlements in the AI world as a cautionary tale - a clear “this is why we’re doing it differently” signal.
🔗 Read more
⚡ Alphabet goes shopping for clean power - because AI eats electricity
Alphabet agreed to buy clean-energy developer Intersect in a deal valued around $4.75 billion (cash, plus assumed debt), slotting neatly into the scramble for power capacity as data centers multiply.
The subtext stays blunt: model training and always-on inference don’t just need chips - they demand a ridiculous amount of steady electricity. “AI strategy” keeps sliding into “energy strategy,” which feels very… industrial revolution, only with GPUs.
Some operating assets sit outside the structure, but Alphabet flagged projects like large storage systems positioned near Google data-center campuses - a plug-it-directly-into-the-beast kind of buildout.
🔗 Read more
🕵️ OpenAI says prompt-injection for agents might never fully die
OpenAI argues prompt injection for browser-style agents may be a forever-problem - closer to spam or phishing than something you “solve” once. That’s not soothing, but it’s strikingly candid.
They described shipping security updates to ChatGPT Atlas, including an adversarially trained agent model and tightened guardrails, after internal automated red-teaming surfaced a new class of attacks.
The implied playbook reads as “constant pressure testing, fast patches, repeat.” Like teaching your guard dog to recognise a new disguise every week… except the disguise is just text sitting on a webpage.
🔗 Read more
🎮 Indie Game Awards yanks trophies over generative AI use
The Indie Game Awards retracted Game of the Year and Debut Game awards from Clair Obscur: Expedition 33 after it emerged generative AI was used during development - despite an earlier agreement that no gen AI was involved. Ouch.
Even though the devs reportedly patched out the assets in question, the awards group held to a strict ineligibility policy and reassigned Game of the Year to Blue Prince (with the publisher publicly stressing “no AI”).
They also pulled an Indie Vanguard award tied to Chantey due to its association with ModRetro (and, indirectly, Anduril connections). So it’s not only “AI or not” - it’s “who are you aligned with,” too… or so it seems.
🔗 Read more
🏛️ Trump executive order aims to box out state-by-state AI rules
A new executive order sets out a national AI policy framework with a stated goal of strengthening US “AI dominance,” while keeping regulatory burden “minimal.” That wording carries a lot of weight.
One big signal is the preemption vibe: it’s framed as a way to avoid a patchwork of state-by-state AI rules, especially after federal preemption efforts elsewhere didn’t land.
If you ship AI products across the country, the idea probably feels elegant. If you’re a state trying to regulate faster than Washington, it probably feels like a clamp. Simple - but also not simple.
🔗 Read more