🦞 OpenClaw founder Steinberger joins OpenAI, open-source bot becomes foundation ↗
Peter Steinberger is heading to OpenAI to push “personal agents,” while OpenClaw itself gets parked in a foundation so it stays open source (and supported). That split is… kind of clever - hire the builder, keep the project public.
OpenClaw’s pitch is refreshingly practical: email triage, insurance paperwork, flight check-ins, the nagging life-admin tasks. It’s also blown up on GitHub, and that popularity has brought security worries along with it, especially around careless deployments.
🪖 Pentagon ‘fed up’ with Anthropic pushback over Claude AI model use by military, may cut ties, says report ↗
The core fight: the Pentagon wants broad, “all lawful purposes” access, and Anthropic is still trying to keep hard limits around fully autonomous weapons and mass surveillance. That’s the kind of disagreement that sounds philosophical until someone says, “we might replace you.”
One underrated wrinkle - officials don’t want the model to suddenly block workflows midstream, and they don’t want to negotiate edge cases forever (fair… but also yikes). There’s a real “who’s holding the keys” tension here, and it’s not subtle.
🧠 Startup building model to predict human behavior ↗
Simile pulled in a $100M round to build a “limited learning” model aimed at predicting what people might do - including, very specifically, anticipating likely questions in things like earnings calls. Narrow target, big ambition, slightly eerie combination.
The approach leans on interviews with real people plus behavioral research data, then runs simulations with AI agents meant to mirror real preferences. It’s like making a weather model for human decisions… which sounds impossible right up until it isn’t.
🧑‍⚖️ Scoop: White House pressures Utah lawmaker to kill AI transparency bill ↗
A state-level AI transparency push in Utah is getting direct heat from the White House, with officials urging the bill’s sponsor not to move it forward. The bill’s framing is all about transparency and kids’ safety - hard to argue with on pure optics.
But the larger fight is jurisdictional: who gets to set the rules - the states or the federal government? And yeah, it’s a snarl - like two people grabbing the same steering wheel and insisting they’re the calm one.
🎬 ByteDance pledges to prevent unauthorised IP use on AI video tool after Disney threat ↗
Disney sent a cease-and-desist over ByteDance’s AI video generator, and ByteDance says it’s strengthening safeguards to prevent unauthorized use of IP and likeness. The complaint - allegedly - is that the tool can spit out familiar franchise characters as if they’re just… public-domain stickers.
It’s the collision everyone saw coming: viral AI video tools move fast, studios move litigious, and “we’ll add safeguards” becomes the default apology language. In a twist, the tech looks like magic - and the legal side looks like gravity.
FAQ
What does it mean that OpenClaw’s founder joined OpenAI while OpenClaw moved to a foundation?
It signals a split between the person building “personal agents” and the project remaining publicly governed. Steinberger joining OpenAI suggests he’ll concentrate on advancing agent-style products there. Placing OpenClaw in a foundation is intended to keep it open source and sustainably supported. In practice, the move aims to preserve community trust while the builder goes where the resources are.
Why are OpenClaw-style AI agents focused on chores like email and paperwork?
Because “life-admin” work is repetitive, rules-based, and time-consuming, making it a practical target for automation. The examples here - email triage, insurance paperwork, and flight check-ins - are narrow tasks with clear success criteria. That focus can make agents feel valuable sooner than more open-ended assistants. It also underlines why careful access controls matter when agents touch personal accounts.
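As a toy illustration of why “clear success criteria” matter, here’s a rule-based triage sketch in Python - the rules and labels are invented for this example, not taken from OpenClaw:

```python
# Toy email-triage rule. The point: for 'life-admin' tasks, success is easy to
# check (did the right message get the right label?), which suits automation.
# The rules and labels below are invented examples.
def triage(subject: str, sender: str) -> str:
    subject_l, sender_l = subject.lower(), sender.lower()
    if "invoice" in subject_l or "billing" in sender_l:
        return "finance"
    if "check-in" in subject_l or "boarding" in subject_l:
        return "travel"
    if "claim" in subject_l and "insurance" in sender_l:
        return "insurance"
    return "inbox"

# The success criterion is a plain assertion, not a vibe.
assert triage("Your flight check-in is open", "airline@example.com") == "travel"
assert triage("Invoice #1234 attached", "billing@vendor.example") == "finance"
```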
How can you deploy an open-source AI agent like OpenClaw without creating security problems?
Treat it like software that can see sensitive data, not like a toy script. A common approach is to lock down credentials, limit permissions to the minimum required, and keep logs and audit trails. Run it in a constrained environment and separate it from high-value systems. Many security worries stem from careless deployment, especially when people expose endpoints or tokens without strong safeguards.
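For a concrete flavor of that posture, here’s a minimal Python sketch - it assumes a hypothetical `agent_cli` binary and a scoped `AGENT_API_TOKEN` environment variable, not OpenClaw’s real interface - that passes a single credential, caps runtime, and keeps an audit log:

```python
# Minimal locked-down launch sketch. `agent_cli` and AGENT_API_TOKEN are
# placeholders for whatever binary and credential you actually deploy.
import json
import logging
import os
import subprocess
import time

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

def run_agent_task(task: str) -> int:
    """Run one agent task with a minimal environment and an audit trail."""
    # Hand the process only what it needs, never your full environment.
    env = {
        "PATH": "/usr/bin",
        "AGENT_API_TOKEN": os.environ["AGENT_API_TOKEN"],  # least-privilege token
    }
    logging.info(json.dumps({"event": "task_start", "task": task, "ts": time.time()}))
    result = subprocess.run(
        ["agent_cli", "--task", task],  # placeholder command
        env=env,
        timeout=300,            # a stuck task should fail, not run forever
        capture_output=True,
        text=True,
    )
    logging.info(json.dumps({
        "event": "task_end",
        "task": task,
        "returncode": result.returncode,
        "ts": time.time(),
    }))
    return result.returncode
```

Running it inside a container or VM with no access to high-value systems is the same idea one layer down.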
Why is the Pentagon unhappy with Anthropic’s restrictions on Claude for military use?
The dispute centers on scope and control: the Pentagon wants broad “all lawful purposes” access, while Anthropic is described as keeping hard limits around fully autonomous weapons and mass surveillance. Officials also don’t want models to block workflows midstream or require endless edge-case negotiations. That tension is less abstract than it sounds - it’s about who decides what the model can do in real operations.
How are startups trying to predict human behavior with AI, and why does it feel controversial?
The example here, Simile, is pursuing a “limited learning” model aimed at forecasting what people might do, including anticipating likely questions in contexts like earnings calls. The approach described blends interviews with behavioral research data and simulations using AI agents meant to mirror real preferences. It feels eerie because it shifts AI from responding to people to forecasting them. The challenge is keeping claims bounded and avoiding overconfidence.
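To make the simulation idea concrete, here’s a toy Python sketch of persona-based aggregation - the personas, topics, and scoring rule are invented for illustration and say nothing about Simile’s actual model:

```python
# Toy persona simulation: each 'agent' mirrors a rough preference profile,
# and aggregating their simulated questions gives a crude forecast.
from collections import Counter

personas = [
    {"name": "growth investor", "concerns": ["revenue growth", "new markets"]},
    {"name": "value investor", "concerns": ["margins", "capital allocation"]},
    {"name": "sell-side analyst", "concerns": ["guidance", "margins"]},
]

def simulate_questions(persona: dict, call_topics: list[str]) -> list[str]:
    """Stand-in for an agent mirroring a persona: ask about concerns the call touches."""
    return [f"Can you expand on {topic}?" for topic in call_topics if topic in persona["concerns"]]

def likely_questions(call_topics: list[str], top_n: int = 3) -> list[str]:
    """Rank questions by how many simulated personas would ask them."""
    counts = Counter()
    for persona in personas:
        counts.update(simulate_questions(persona, call_topics))
    return [q for q, _ in counts.most_common(top_n)]

print(likely_questions(["margins", "guidance", "revenue growth"]))
```

A real system would swap the hand-written rule for a model grounded in interview and behavioral data; the aggregation step is the part this sketch is meant to show.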
What happens when AI video tools generate copyrighted characters, like in the ByteDance–Disney clash?
The reported pattern is familiar: a studio issues a cease-and-desist, and the platform responds by strengthening safeguards to prevent unauthorized IP or likeness use. In many tools, safeguards mean tighter content filters, improved detection of recognizable characters, and clearer user policy enforcement. The underlying conflict is speed versus liability - viral generation moves fast, and rights enforcement acts like gravity. Expect more of these collisions as video generators spread.
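As a rough illustration of what a prompt-level filter can look like, here’s a toy Python sketch - the blocklist and matching rule are made up and are not how ByteDance or anyone else actually enforces policy:

```python
# Toy character filter: block prompts that name a protected character outright.
# Real systems also inspect generated frames; this only checks the prompt text.
import re

PROTECTED_CHARACTERS = {"mickey mouse", "elsa", "darth vader"}  # example entries only

def violates_character_policy(prompt: str) -> bool:
    """Crude check: does the normalized prompt mention a blocklisted name?"""
    normalized = re.sub(r"[^a-z0-9 ]", " ", prompt.lower())
    return any(name in normalized for name in PROTECTED_CHARACTERS)

def handle_prompt(prompt: str) -> str:
    if violates_character_policy(prompt):
        return "Blocked: prompt appears to request a protected character."
    return "Accepted: prompt sent to the video generator."

print(handle_prompt("Elsa ice-skating on the moon"))        # Blocked
print(handle_prompt("A penguin ice-skating on the moon"))   # Accepted
```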