🧾 EU countries, lawmakers clinch provisional deal on watered-down AI rules ↗
Europe’s AI rulebook got a softer rewrite. Lawmakers and EU countries reached a provisional deal to simplify parts of the AI Act, with key rules for high-risk systems pushed back and certain machinery carved out of scope.
There’s still bite, though. The deal adds a ban on AI-made sexually explicit images without consent and keeps watermarking for AI-generated output. Somehow, it’s both a rollback and a crackdown - politics doing yoga, basically.
🎙️ Advancing voice intelligence with new models in the API ↗
OpenAI rolled out a new batch of realtime audio models for developers: GPT-Realtime-2, GPT-Realtime-Translate, and GPT-Realtime-Whisper.
The pitch is bigger than “talking chatbot.” These models are meant to reason while speaking, translate live across languages, transcribe as people talk, and handle more agent-like voice workflows. Voice AI is starting to look less like a phone menu and more like a slightly caffeinated assistant.
🛟 Introducing Trusted Contact in ChatGPT ↗
OpenAI introduced Trusted Contact, an optional ChatGPT safety feature that lets adults nominate someone who can be alerted if systems and trained reviewers detect a serious self-harm concern.
The feature is designed as an extra layer, not a replacement for crisis lines, emergency services, or professional care. The privacy angle is doing a lot of heavy lifting here too - alerts are meant to connect someone with support outside the app without dumping the whole conversation into another person’s lap.
💸 China’s Moonshot AI raises $2B at $20B valuation as demand for open source AI skyrockets ↗
Moonshot AI reportedly raised about $2 billion at a $20 billion valuation, with backing led by Meituan’s VC arm and other Chinese investors.
The bigger story is the appetite for open-weight models. Moonshot’s Kimi line has been gaining serious developer traction, especially among users willing to trade a bit of peak performance for cheaper inference. Not glamorous, perhaps, but extremely powerful.
🛡️ Financial Stability Risks Mount as Artificial Intelligence Fuels Cyberattacks ↗
The IMF warned that AI is making cyberattacks faster, cheaper, and more scalable - which is not exactly the kind of productivity boost anyone asked for.
Its concern is financial stability. If attackers can use AI tools to find and exploit weaknesses at industrial speed, banks, markets, and payment systems need stronger resilience, supervision, and international coordination. Dry phrase, serious problem.
⚖️ US judicial panel delays action on AI-generated evidence, deep fakes ↗
A U.S. judicial panel delayed action on proposed rules for AI-generated evidence and deepfakes in court, after judges and lawyers pushed back on the draft proposals.
The tension is obvious: courts don’t want fake audio or video slipping into trials, but some judges worried the system may be trying to regulate problems it hasn’t fully seen yet. Sensible caution, or foot-dragging? Bit of both, probably.
🏗️ Nvidia to invest up to $2.1 billion in IREN as part of AI data center deal ↗
Nvidia plans to invest up to $2.1 billion in data center operator IREN as part of a deal covering up to 5 gigawatts of AI infrastructure.
That is the AI boom in one sentence: chips are not enough anymore, everyone is chasing power, land, cooling, and giant compute boxes in the middle of somewhere. The factory metaphor is becoming almost literal.
FAQ
What changed in the EU AI Act provisional deal?
The provisional deal simplifies parts of Europe’s AI rulebook and delays some rules for high-risk AI systems. It also carves out certain machinery from the rules. At the same time, it adds tougher measures, including a ban on non-consensual AI-made sexually explicit images and watermarking requirements for AI-generated output.
Why does this AI news roundup describe the EU deal as both a rollback and a crackdown?
The deal softens some parts of the AI Act by delaying or simplifying requirements, especially around high-risk systems. But it also tightens enforcement in sensitive areas, including non-consensual explicit AI images and labeling AI-generated content. That combination makes it feel less like simple deregulation and more like a targeted recasting.
What are OpenAI’s new realtime voice models meant to do?
OpenAI’s new realtime audio models are designed to support more advanced, developer-facing voice workflows. The article mentions GPT-Realtime-2, GPT-Realtime-Translate, and GPT-Realtime-Whisper. These models are positioned for live speech reasoning, translation, transcription, and agent-like voice interactions rather than basic chatbot-style voice replies.
How does Trusted Contact in ChatGPT work?
Trusted Contact is described as an optional safety feature for adults using ChatGPT. It lets a user nominate someone who may be alerted if systems and trained reviewers detect a serious self-harm concern. The article stresses that it is an extra support layer, not a replacement for crisis lines, emergency services, or professional care.
Why is Moonshot AI’s funding important in AI news?
Moonshot AI reportedly raised about $2 billion at a $20 billion valuation, backed by Chinese investors including Meituan’s venture arm. The article frames this as part of rising demand for open-weight models. Moonshot’s Kimi line is described as gaining developer traction, especially where cheaper inference is attractive.
What AI cybersecurity risks did the IMF warn about?
The IMF warned that AI can make cyberattacks faster, cheaper, and easier to scale. Its concern is that this could threaten financial stability if attackers exploit weaknesses across banks, markets, or payment systems more efficiently. The article points to stronger resilience, supervision, and international coordination as important responses.