AI News Wrap-Up: 17th January 2026

⚖️ Musk seeks up to $134 billion from OpenAI and Microsoft

Elon Musk is now angling for a truly cartoon-sized payout, arguing he’s entitled to massive “wrongful gains” tied to OpenAI and Microsoft. The filing, in essence, says: I helped early, you profited huge, pay up.

OpenAI and Microsoft are pushing back on the damages claim, and the whole fight is sliding toward a trial calendar that’s already feeling… spicy. It’s less “nerdy lab drama” and more “corporate divorce with spreadsheets.”

If nothing else, it’s a reminder that the AI boom isn’t just models and benchmarks - it’s also lawsuits, grudges, and very expensive paperwork.

🕵️ California investigates Elon Musk’s AI company after ‘avalanche’ of complaints about sexual content

California’s attorney general is looking into whether an AI image-editing tool linked to Musk’s outfit crosses legal lines, after a flood of complaints about sexual content. The focus is straightforward: what’s being generated, how easily, and whether that violates state law.

A familiar pattern is playing out - powerful creation tools meet weak guardrails, then regulators arrive with a clipboard and a frown. Sometimes that frown turns into enforcement.

This also drags “responsible deployment” out of the marketing deck and into the real world, where people get harmed and no one cares about your vibes-based safety claims.

💸 OpenAI to test ads in ChatGPT in bid to boost revenue

OpenAI says it’ll start testing ads inside ChatGPT for some users, aiming to bring in more revenue to cover the eye-watering costs of building and running these systems. The company’s line is that ads won’t change answers or share user data with marketers.

Still, ads in a chat assistant are a peculiar psychological shift - like your helpful librarian suddenly wearing a sponsor badge. Even if it’s “only certain placements,” people notice.

Analysts are already hinting at the obvious risk: if the experience feels noisy or compromised, users can and will wander elsewhere.

🚫 Washington legislature hears bill aimed at regulating AI use in public schools

Lawmakers in Washington state are moving a bill that would restrict certain AI uses in public schools, with attention on things like discipline, student data, and automated decision-making. The anxiety here isn’t abstract - it’s about kids getting boxed in by opaque systems.

It’s also a practical admission: “AI in education” isn’t automatically helpful, and sometimes it’s just surveillance in a trench coat. That might sound harsh, but you get the idea.

If this passes, it’ll likely shape what vendors can sell to districts and how schools justify any AI-powered workflow touching students.

📜 Oklahoma lawmaker files trio of AI regulation bills

An Oklahoma legislator filed three bills aimed at adding safeguards around AI use in the state. The theme is restraint - putting some rules around where AI can be used, and how.

State-level moves like this can feel small next to big federal or EU frameworks, but they stack up fast - like paperwork snowdrifts. One state’s “common sense” becomes another state’s compliance headache.

Also, the mere act of filing multiple bills signals the same thing everyone’s quietly thinking: AI is already in places it maybe shouldn’t be, and nobody wants to be last to react.

🚨 Elon Musk called out by Michigan Attorney General for “spicy mode” Grok

Michigan’s attorney general is warning Musk and xAI about a Grok feature allegedly being used to generate illegal deepfake pornography. The message reads like: disable the thing, or we escalate.

This sits right at the ugly intersection of generative capability and abuse at scale - and it’s not exactly theoretical harm. Once a tool makes something easy, the internet will stress-test it in the worst possible way.

If more states start taking this posture, companies might find “edgy features” stop being a product differentiator and start being a liability with teeth.

FAQ

What is Elon Musk asking for in the OpenAI and Microsoft lawsuit?

He’s seeking up to $134 billion in what he describes as “wrongful gains” tied to OpenAI and Microsoft. The core argument runs like this: he helped early, the companies benefited enormously, and he should be paid accordingly. OpenAI and Microsoft are pushing back on the damages claim. The dispute is moving toward a trial schedule, raising the stakes for everyone involved.

Does Musk have a realistic path to collecting “wrongful gains”?

In cases like this, the central issue is whether the court agrees the gains were improperly obtained and that the plaintiff is entitled to a specific remedy. OpenAI and Microsoft are contesting the damages claim, which suggests they believe the legal theory - or the numbers - don’t hold up. These fights often turn document-heavy and technical. A trial calendar can also increase pressure to settle, or at least narrow the claims.

Why is California investigating an AI image-editing tool over sexual content?

California’s attorney general is looking into an AI image-editing tool linked to Musk’s company after an “avalanche” of complaints about sexual content. The focus is practical: what the tool can generate, how easily it can do it, and whether that crosses legal lines under state law. This is the familiar pattern where capability outpaces guardrails. Regulatory scrutiny tends to intensify when harm feels scalable and repeatable.

How will ads in ChatGPT work, and will they affect answers?

OpenAI says it will begin testing ads in ChatGPT for some users, mainly to help cover the high costs of building and running AI systems. The company’s stated position is that ads won’t change answers and user data won’t be shared with marketers. Even so, ads inside a chat assistant can feel like a trust shift. Analysts are already flagging the risk that a noisier experience could push users elsewhere.

What kinds of AI regulation are Washington and Oklahoma considering?

In Washington state, lawmakers heard a bill aimed at regulating AI use in public schools, especially around discipline, student data, and automated decision-making. Oklahoma has a trio of proposed bills focused on adding safeguards around where AI can be used and how. AI regulation at the state level can seem piecemeal, but it can quickly add up. For vendors and agencies, that often translates into compliance complexity across jurisdictions.

What is “spicy mode” Grok, and why is Michigan’s attorney general involved?

Michigan’s attorney general is warning Musk and xAI about a Grok feature described as being used to generate illegal deepfake pornography. The message is framed as: disable the capability or face escalation. This highlights how “edgy” generative features can become legal liabilities when they make harmful outputs easier to produce at scale. If more states take similar positions, enforcement risk could rise quickly.
