🧯 OpenAI is hiring a new Head of Preparedness to try to predict and mitigate AI's harms ↗
OpenAI posted a “Head of Preparedness” role - basically the person tasked with imagining the worst plausible uses of frontier models, then building guardrails that hold up under pressure (not just vibes and policy PDFs).
The listing leans hard into threat modeling, evaluations, and mitigations as a real operational pipeline. It’s safety work framed like shipping software - reassuring in one breath, slightly chilling in the next, because it suggests the risks are now product-shaped.
🧑‍⚖️ China issues draft rules to regulate AI with human-like interaction ↗
China’s cyber regulator, the Cyberspace Administration of China, circulated draft rules aimed at AI services that simulate human personalities and emotionally engage users - chatty companions, flirty assistants, that whole “are you real?” zone.
One standout bit: providers would need to warn against overuse and step in if users show signs of dependence or addiction. It’s unusually explicit about psychological risk - as if someone looked at the “AI friend” trend and decided: not without seatbelts.
🧩 So Long, GPT-5. Hello, Qwen ↗
Wired’s read is that Qwen is winning hearts not by being the top benchmark beast, but by being open-weight and easy to tinker with - which, in practice, is what builders keep choosing. It’s the “tool you can actually hold,” not the one behind a velvet rope.
The piece frames this as a broader shift: open models that slot neatly into products (and get fine-tuned in-house) can matter more than marginal leaderboard gains. Slightly spicy, maybe a touch unkind to closed labs… but the vibe lands.
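For a sense of how low that barrier actually is, here’s a minimal sketch of pulling an open-weight Qwen checkpoint and generating locally - assuming the Hugging Face transformers library, and with the specific model ID as an illustration only (any published open-weight Qwen chat checkpoint would do):

```python
# Minimal local inference with an open-weight Qwen checkpoint.
# Assumes: `pip install transformers accelerate` and enough VRAM/RAM for the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # illustrative; swap in any open-weight Qwen chat model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt using the model's own template, then generate.
messages = [{"role": "user", "content": "In one sentence, why do open weights matter?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

That’s the whole pitch in about fifteen lines: the same weights you prototype against can be fine-tuned in-house and shipped inside a product - exactly the “slots neatly into products” dynamic the piece describes.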
🎁 OpenAI and Anthropic double AI usage limits in holiday boost for developers: All you need to know ↗
OpenAI and Anthropic temporarily bumped usage limits for their coding-focused tools, giving individual subscribers more room to run heavier workflows without immediately slamming into caps.
Mint notes the boost is aimed at power users, and that limits snap back once the promo window closes. It’s a nice “here’s more juice” moment - and also a quiet reminder that capacity is still a finite, tangible constraint.
💻 Google DeepMind co-founder Shane Legg lays down ‘laptop rule’ to spot if AI can replace your job ↗
Shane Legg floated a blunt heuristic: if a job can be done entirely through a laptop setup (screen, keyboard, mic, camera, etc.), it’s the kind of cognitive work advanced AI can increasingly take on.
He also hedged a bit - noting that some “fully online” work still leans on human connection and personality. So yes, the rule is sharp… but it’s not a guillotine, more like a storm warning.