AI News Wrap-Up: 3rd January 2026

🧑‍🔬 OpenAI seeks 15 candidates for Grove AI talent programme

OpenAI is recruiting a small cohort (15 people) for its Grove programme. It reads less like a startup accelerator and more like a “come build alongside us” talent track.

It’s a short, structured stint hosted at OpenAI HQ, with workshops, weekly office hours, and mentorship from technical leaders. They’re also explicitly not restricting applicants by background or experience level, which feels refreshingly open.

📈 Nvidia's $65 Billion Forecast Sends a Clear Message About the AI Boom

Nvidia’s guidance is loud: $65B in revenue for the coming quarter, on the heels of a $57B quarter. If you’ve been hearing “AI demand is cooling,” this kind of number makes that sound a bit… wishful.
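For a quick sense of scale, here’s a minimal back-of-the-envelope sketch (the $57B and $65B figures come from the piece above; the calculation itself is just illustrative) of the quarter-over-quarter growth those two numbers imply:

```python
# Back-of-the-envelope: implied quarter-over-quarter growth, assuming
# $57B is the prior quarter's revenue and $65B is the new guidance.
prior_quarter_b = 57.0   # prior-quarter revenue, billions USD
guidance_b = 65.0        # next-quarter guidance, billions USD

qoq_growth = (guidance_b - prior_quarter_b) / prior_quarter_b
print(f"Implied QoQ growth: {qoq_growth:.1%}")  # roughly 14%
```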

The framing is basically “this is a platform shift, not a product cycle” - accelerated computing, generative AI everywhere, then more agentic stuff and physical AI. Big talk, sure, but the receipts keep landing.

🚨 Scared of artificial intelligence? New law forces makers to disclose disaster plans

California’s new approach pressures frontier-model makers to publish safety frameworks for catastrophic-risk scenarios and to disclose how they’ll handle serious incidents. Whistleblower protections are in there too, which feels like the “yeah people will stay quiet otherwise” admission.

There are penalties (up to $1M per violation) and reporting timelines for critical incidents. Critics still argue it’s narrow - not everything scary fits the definition - but it’s a real “put it in writing” moment for AI safety.

🗞️ People are getting their news from AI – and it’s altering their views

The uncomfortable point here is that even when AI summaries are factually fine, the framing can still steer people - what gets emphasised, what gets softened, what quietly disappears. It’s not always “fake”; it’s more like a slightly warped lens… or so it seems.

It also flags how models can shift tone and emphasis depending on the persona you present, with sycophancy as the easy-to-notice symptom. The takeaway is basically: regulation helps, but transparency, competition, and real user agency matter too.

🔮 AI predictions for 2026

This one’s a vibes-plus-economics check: investment is still pouring into chips and data centres, even while the clean ROI question keeps hovering like a drone you can’t swat.

A relatable thread is the “shadow AI economy” at work - employees using chatbots to draft, summarise, code, and generally glue their day together, sometimes without official sign-off. The prediction is that this gets dragged into the open, because businesses eventually want governance, not just quiet productivity.

FAQ

What is OpenAI’s Grove AI talent programme and who is it for?

OpenAI’s Grove programme is a small, structured cohort that reads less like a typical accelerator and more like a “build alongside us” talent track. The cohort is 15 people, and the programme is hosted at OpenAI HQ. It includes workshops, weekly office hours, and mentorship from technical leaders. Applicants also aren’t restricted by background or experience level.

What should I expect if I’m selected for the Grove programme?

Based on the description, it sounds like a short stint with a defined structure, rather than an open-ended residency. You’d likely spend time in workshops, get consistent access through weekly office hours, and receive guidance from technical leaders. Since it’s positioned as “come build alongside us,” it may feel more hands-on and collaborative than a standard training course.

Does Nvidia’s $65B quarterly forecast mean AI demand isn’t cooling?

The guidance cited ($65B for the quarter after a $57B quarter) is presented as a strong signal that demand remains intense. It pushes back on the narrative that AI demand is fading, at least in the near term. The framing is that this is a platform shift - accelerated computing and generative AI expanding into more “agentic” and physical AI use cases.

What does California’s new AI law require frontier-model makers to disclose?

The law described pressures frontier-model makers to publish safety frameworks for catastrophic-risk scenarios and to disclose how they’ll respond to serious incidents. It also includes whistleblower protections, acknowledging that employees may otherwise stay quiet. There are penalties mentioned (up to $1M per violation) and timelines for reporting critical incidents, though critics argue the scope is narrow.

Are AI news summaries changing people’s opinions even when the facts are correct?

A key concern raised is that “factually fine” summaries can still influence views through framing - what gets emphasised, softened, or omitted. Tone can shift depending on the persona a user prompts, with sycophancy being an obvious symptom. The practical takeaway is to treat AI as a lens, not a neutral mirror, and to value transparency and user agency.

What is the “shadow AI economy” at work, and why might it become more formal in 2026?

The “shadow AI economy” refers to employees using chatbots to draft, summarise, code, and generally speed up work without official approval. The prediction is that this will get pulled into the open as businesses seek governance, not just quiet productivity gains. That shift is tied to broader themes for AI in 2026: investment in infrastructure continues while ROI questions persist.

Yesterday's AI News: 2nd January 2026
