🛑 Google DeepMind warns about shutdown-resistant AI
DeepMind quietly updated its Frontier Safety Framework, adding two new “Critical Capability Levels.” The new flags? Models that resist shutdown or modification, and models that get a little too persuasive for comfort. Both behaviors now sit in the high-risk bucket, flagged for closer monitoring.
🔗 Read more
🖼 MIT rolls out SCIGEN to dream up oddball materials
A team at MIT revealed SCIGEN, a system that coaxes generative AI into following design rules instead of spitballing random guesses. The result: candidate materials with properties like superconductivity or unusual magnetism. Think of it less as brute-force search, more as steering the AI’s imagination along a fixed track.
🔗 Read more
🌐 Perplexity pushes Comet browser into India
The AI search startup Perplexity has launched its Comet browser for Pro subscribers in India (Sept 22). It’s part standard browser, part AI assistant, blurring the line between surfing the web and getting machine-curated answers straight away.
🔗 Read more
🏛 UK judge openly used AI to condense case files
In a tax tribunal, a British judge admitted leaning on Microsoft Copilot to condense legal submissions. He was clear that the actual reasoning and verdict stayed his own, but it still marks a small first for UK courtrooms.
🔗 Read more
📊 Billions funnelled into AI’s physical backbone
Meta, Microsoft, Google… all shoveling huge sums into data centers, power-hungry chips, and cooling systems just to keep AI’s engines running. Even smaller U.S. states (New Hampshire, for instance) are dangling grid upgrades as bait to attract that capital.
🔗 Read more
🌍 Calls grow louder for global AI “red lines”
Over 200 scientists and political leaders issued a joint call for international AI rules by 2026. They’re asking for bans on extreme cases, like AI control of nuclear weapons, mass surveillance, and other nightmare scenarios. Essentially: set boundaries before things spin too far out.
🔗 Read more