AI News Wrap-Up: 30th March 2026

🤝 Microsoft unveils AI upgrades, rolls out Copilot Cowork to early-access customers

Microsoft pushed Copilot further into multi-model territory, which feels like the main theme now - not one model winning, but several being stitched together. Its new "Critique" flow has GPT draft a response while Claude checks it for accuracy and quality, with Microsoft saying it wants that review loop to become two-way later on.

It also launched "Council" for side-by-side model comparisons and widened access to Copilot Cowork through the Frontier early-access program. The pitch is simple enough: fewer hallucinations, faster work, better output - though by now that promise is standard industry language. (Reuters)

🇰🇷 South Korea's AI chip startup Rebellions raises $400 million in latest funding round

Rebellions pulled in $400 million at a roughly $2.34 billion valuation, a hefty round that shows investors are still quite willing to back AI infrastructure plays, especially outside the usual U.S. suspects. The company said the money will help it expand in the U.S., grow its Rebel100 platform, and move toward an eventual IPO. 

What stands out most is the political-industrial framing around it. Korea's growth fund made a direct investment under the country's "K-Nvidia" push, while Rebellions is betting that the real AI battleground has shifted from flashy chatbots to the cheaper, more efficient machinery underneath them. That feels less glamorous, more important. (Reuters)

🇫🇷 France's Mistral raises $830 million in debt for AI data centre build-up

Mistral secured $830 million in debt financing to build out a data centre near Paris and buy 13,800 Nvidia chips. That's a great deal of iron, a great deal of power, and a very blunt statement that Europe does not want to rent its AI future forever from American cloud giants. 

The site is due to go live in Q2 and is part of a broader plan to reach 200 megawatts of compute capacity across Europe by the end of 2027. Mistral is also serving customers including the French armed forces, so this is not just a startup growth story - it is infrastructure, sovereignty, and strategy all tangled together a bit like cables under a raised floor. (Reuters)

🍎 Apple Intelligence mistakenly launched in China

Apple accidentally exposed Apple Intelligence features to some iPhone users in China, then pulled them back after reports spread online. According to The Verge, the rollout happened in error, which is far from ideal given how tightly AI features are regulated in the Chinese market.

The awkward part is that Apple still needs a local partner to power AI tools there, with Alibaba among the companies previously mentioned in that context. So this was not just a buggy switch flip - it briefly exposed how unfinished Apple's China AI strategy still is. (The Verge)

⚖️ Majority of US federal judges are using AI, study finds

A new study found that 60% of U.S. federal judges use at least one AI tool in judicial work, though only 22% said they use one daily or weekly. Most are leaning on legal-specific systems rather than general chatbots, and the top use case is legal research, followed by document review.

That is a quietly huge shift. Courts are usually glacial, then suddenly not, and here one in three judges said they permit or encourage AI use in chambers while 20% formally prohibit it. At the same time, more than 45% said court administration had not provided AI training, which feels like a familiar modern-tech pattern - adoption first, guardrails later. (Reuters)

📉 As more Americans adopt AI tools, fewer say they can trust the results

New polling showed Americans are using AI more while trusting it less, which sounds contradictory until you remember that's basically how people treat half the internet. The survey found 70% think AI advances will reduce job opportunities, while just 7% think they'll create more jobs. 

The mood is sour beyond jobs, too. Two-thirds of respondents said companies are not being transparent enough about AI use, and the same share said government is not regulating it enough. So adoption is up, but enthusiasm is not - more grim coexistence than love affair. (TechCrunch)

FAQ

Why is Microsoft combining multiple AI models inside Copilot?

Microsoft appears to be steering Copilot toward a multi-model setup because one model can generate while another reviews. In this case, GPT drafts the response and Claude checks it for quality and accuracy. The practical aim is to reduce hallucinations and strengthen the output without asking a single system to do everything at once. That makes Copilot feel less like a standalone chatbot and more like a managed workflow.

What does Microsoft’s new Critique feature actually do?

Critique is described as a review loop in which one model writes an answer and another evaluates it. That matters because it adds a second pass before the result reaches the user. Teams often want this kind of setup to catch mistakes, weak reasoning, or unclear wording earlier in the process. Microsoft also signaled that it wants this feedback loop to become more interactive over time.
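As a rough illustration of that pattern - and not Microsoft's actual implementation, since the internals of Critique are not public - a draft-then-review loop can be sketched in a few lines. The `draft_model` and `review_model` functions here are hypothetical stand-in stubs for the two models described in the article:

```python
# Minimal sketch of a two-model draft-and-review loop.
# draft_model and review_model are hypothetical stubs, not real Copilot APIs.

def draft_model(prompt: str) -> str:
    # Stand-in for the drafting model (GPT, in Microsoft's description).
    return f"Draft answer to: {prompt}"

def review_model(prompt: str, draft: str) -> dict:
    # Stand-in for the reviewing model (Claude, in Microsoft's description).
    # Returns an approval flag, a list of issues, and a (possibly revised) draft.
    issues = [] if draft else ["empty draft"]
    return {"approved": not issues, "issues": issues, "revision": draft}

def critique_flow(prompt: str, max_rounds: int = 2) -> str:
    # One model drafts; the other reviews before the result reaches the user.
    answer = draft_model(prompt)
    for _ in range(max_rounds):
        review = review_model(prompt, answer)
        if review["approved"]:
            break
        answer = review["revision"]
    return answer

print(critique_flow("Summarise the Q3 report"))
```

The point of the structure is simply that a second pass sits between generation and delivery; making the loop "two-way", as Microsoft hints, would mean the drafter also responds to the reviewer's feedback rather than just being overwritten.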

Why are investors still backing AI chip companies like Rebellions?

The article suggests investors still see strong upside in the infrastructure layer of AI. Rebellions is positioning itself around cheaper, more efficient compute rather than consumer-facing chatbot claims. That pitch can be compelling because AI demand still depends on the hardware beneath the models. Its latest funding round also suggests that regional governments and funds want local champions in this part of the stack.

Why does Mistral’s data centre expansion matter for Europe’s AI strategy?

Mistral’s expansion matters because it is not only about company growth, but also about control over compute capacity. Buying thousands of Nvidia chips and building a site near Paris signals that Europe wants more of its AI infrastructure on home soil. Across many AI pipelines, access to data centres, power, and chips helps determine who can build and deploy models at scale.

Why was Apple Intelligence showing up in China such a big deal?

It became a major issue because AI features in China operate within a tightly regulated environment. Apple reportedly exposed the tools by mistake before pulling them back, which highlighted how unfinished its local AI rollout still appears. The article also notes that Apple needs a local partner there, so the incident was not just a software slip, but a strategy problem made visible.

Why are more people using AI tools even as trust in AI keeps falling?

The article points to a pattern in which adoption rises because AI is practical, even when confidence in the results remains weak. U.S. judges are already using AI for research and document review, while polling shows that many Americans still worry about jobs, transparency, and regulation. That combination suggests AI is becoming embedded in work before trust, training, and governance have fully caught up.
