AI News Wrap-Up: 9th February 2026

🏗️ US pushes companies toward a new AI data-center “compact,” Politico reports

The US is reportedly trying to get major firms to sign onto a new “compact” for AI data centers - essentially a bundle of commitments about how these enormous compute builds should be handled.

Details are still a bit foggy (classic), but the direction seems familiar: standardising expectations around energy, security, and possibly reporting too - a polite policy version of "please don't let this turn into a shambles."

🎬 Google sued by Autodesk over AI-powered movie-making software

Autodesk is suing Google over the name “Flow,” saying it already used “Flow” for production and VFX management software - and Google’s AI film-making tool arrived with the same branding.

The sharper detail is the allegation that Google previously suggested it wouldn't commercialize the name… then went ahead and pursued trademarks anyway. It's a trademark fight, sure, but it also carries a familiar "big platform vs specialist toolmaker" energy.

🏥 AI no better than other methods for patients seeking medical advice, study shows

A new study found that using AI for patient medical advice didn’t outperform other approaches - which feels unsurprising and mildly reassuring, depending on how hard you’ve been side-eyeing symptom checkers.

It doesn’t mean AI is useless in healthcare - just that “ask a bot” isn’t automatically an upgrade over existing options, especially when accuracy and safety are the whole point.

🩺 AI-powered apps and bots are barging into medicine. Doctors have questions.

An investigation digs into how AI health apps and chatbots are spilling into clinical spaces - sometimes faster than guidance, oversight, or plain-old evidence can keep up.

Doctors are raising concerns about reliability, patient harm, and who’s accountable when a bot gives advice that sounds confident-but-wrong… like a satnav insisting you drive into a lake, except with medication.

📈 OpenAI CEO says ChatGPT back to over 10% monthly growth, CNBC reports

OpenAI's CEO reportedly said ChatGPT has returned to over 10% monthly growth - which is a big deal if you assumed growth had already peaked once everyone had tried it.

It suggests either new users are still pouring in, or existing users are finding more reasons to stick around - or both. Either way, the product is behaving less like a fad and more like infrastructure… or so it seems.

FAQ

What is the proposed AI data-center “compact” the US is pushing?

It’s described as a bundle of commitments that major firms would agree to when building or operating large AI data centers. The intent is to standardise expectations so these massive compute projects don’t become scattered or inconsistent across companies. While the specifics still sound unsettled, the emphasis seems to sit in practical territory: energy use, security, and possibly some form of reporting.

Why would the US want companies to sign an AI data-center compact?

A compact can establish shared baseline expectations without forcing lawmakers to draft a new rule for every edge case. With AI data centers expanding quickly, policymakers often worry about grid impact, security risks, and operational transparency. A common strategy is to align the biggest players early, so sound practices spread faster and accountability is easier to trace if problems arise.

What kinds of commitments could be included in an AI data-center compact?

Based on what’s been floated, commitments could cover energy planning (how power is sourced and managed), security measures (physical and cyber), and some form of reporting or disclosure. In many pipelines, reporting becomes the “enforcement-lite” layer that makes standards legible and measurable. If the compact is voluntary, those commitments may be framed as guidelines that later help shape regulation.

What is the lawsuit about Google’s AI movie-making tool called “Flow”?

Autodesk is suing Google over the name “Flow,” arguing Autodesk already used “Flow” for production and VFX management software. The dispute is framed as a trademark and branding conflict, alongside an allegation that Google previously suggested it wouldn’t commercialize the name but later pursued trademarks anyway. These cases often turn on brand priority and the likelihood of confusion.

What does it mean that AI wasn’t better than other methods for patient medical advice?

It suggests that “ask a bot” isn’t automatically more accurate or safer than existing ways patients seek guidance. That can feel reassuring if you’re concerned about overconfident answers from symptom-checkers or chatbots. It doesn’t rule out AI’s potential value in healthcare, but it does underline the need for evidence, oversight, and careful integration where mistakes can cause harm.

Why are doctors concerned about AI-powered health apps and chatbots?

Doctors worry about reliability, patient harm, and who is accountable when a tool delivers confident but incorrect advice. The concern isn’t only accuracy; it’s also how patients interpret outputs and whether the system nudges people toward unsafe self-management. In clinical settings, unclear responsibility can become a major risk: patients may trust the tool, clinicians may not control it, and guidance may lag behind adoption.

Yesterday's AI News: 8th February 2026
