🧩 Broadcom signs long-term deal to develop Google’s custom AI chips ↗
Broadcom secured a long-term agreement with Google to build future generations of custom AI chips and related rack components. It also signed a separate deal giving Anthropic access to about 3.5 gigawatts of AI compute built on Google processors - which is no minor side note.
The larger point is that custom silicon keeps gaining ground as companies search for something cheaper, or at least less dependent on Nvidia. Google’s TPUs are becoming central to its cloud pitch, while Anthropic said Claude’s revenue run rate has climbed sharply as demand keeps accelerating. (Reuters)
⚙️ Nvidia acquisition of SchedMD sparks worry among AI specialists about software access ↗
Nvidia’s move to acquire SchedMD is unsettling people who rely on Slurm, the open-source workload manager that helps run AI training jobs and major supercomputers. It can sound niche, yet Slurm is one of those quiet pipes in the wall - you notice it the moment someone buys the plumbing.
Researchers and infrastructure specialists worry that giving the dominant AI chip company control over such an important scheduler could tilt the field against rivals and independent data center operators. Slurm is used far beyond chatbots too, including in government supercomputing environments, which makes the fairness question feel larger than a routine software deal. (Reuters)
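For readers who have never touched Slurm directly, its everyday face is the batch script. The sketch below is illustrative only - the partition name, resource counts, and `train.py` are placeholders, not anything from the article - but it shows the layer SchedMD maintains and why control of it matters to anyone running a shared cluster:

```shell
#!/bin/bash
# Minimal Slurm batch script for a multi-GPU training job.
# Partition, resource counts, and the training command are placeholders.
#SBATCH --job-name=train-llm
#SBATCH --partition=gpu          # cluster-specific queue name
#SBATCH --nodes=2                # number of machines to allocate
#SBATCH --gres=gpu:4             # GPUs requested per node
#SBATCH --time=24:00:00          # wall-clock limit before the job is killed

# srun launches the command across every allocated node.
srun python train.py --config config.yaml
```

Submitted with `sbatch`, a script like this is how labs, companies, and government supercomputing centers queue work onto scarce accelerators - which is exactly why ownership of the scheduler feels consequential.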
💧 Investors press Amazon, Microsoft and Google on water, power use in US data centers ↗
More than a dozen investors are pushing major tech companies for clearer disclosures on how their data centers use water and power, after several large projects reportedly ran into community opposition. The pressure is moving away from broad sustainability language toward pointed local questions - how much water, where, and who bears the strain.
Reuters notes that North American data centers used nearly 1 trillion liters of water, while investors argued current reporting is patchy and often not site-specific. Alphabet is facing shareholder pressure over climate targets, Amazon says it is disclosing more local data, and Microsoft says sustainability remains a core value - but the underlying complaint is that AI expansion is moving faster than transparency. (Reuters)
🏛️ OpenAI’s vision for the AI economy: public wealth funds, robot taxes, and a four-day workweek ↗
OpenAI put out a policy vision that mixes public wealth funds, stronger safety nets, robot taxes, and even a shorter workweek. It reads as both ambitious and carefully pragmatic - a big-company manifesto trying not to sound like one.
TechCrunch frames it as OpenAI explaining how it thinks wealth and work could be reorganized in an AI-heavy economy, amid rising anxiety over job displacement, concentration of power, and giant data center buildouts. So yes, it is policy talk - but also reputation management, or something close to it. (TechCrunch)
🎙️ Google has launched a free, offline AI dictation app that will automatically polish your speech ↗
Google quietly launched AI Edge Eloquent, a dictation app that works offline, costs nothing, and comes with no usage caps. It transcribes live speech, then cleans it up by removing filler words and self-corrections - which lands as either helpful or slightly unsettling, depending on your mood.
Right now it is available on iOS, with Android and macOS planned later. The interesting part is not just the app itself, but the signal: Google is pushing more AI features onto devices directly, where privacy, latency, and cost all start to matter much more. (The Verge)
🛰️ Spain’s Xoople raises $130 million Series B to map the Earth for AI ↗
Xoople raised $130 million to build higher-quality Earth observation data for AI systems, pitching itself as a source of “ground truth” for enterprise use. The company is building a satellite constellation and says its sensors should deliver data far better than existing monitoring systems - a bold claim, plainly, but that is the lane.
The startup also announced a partnership with L3Harris to build sensors for its spacecraft, and said it wants to embed its data directly into enterprise platforms. The broader idea is to become something like an Earth system of record for AI models used in logistics, agriculture, infrastructure, and disaster monitoring. A bit grand, perhaps - but not small. (TechCrunch)
FAQ
Why is Broadcom’s long-term deal with Google for custom AI chips a big deal?
It matters because it shows how seriously major tech companies are investing in their own AI hardware. Google is not merely buying generic capacity; it is developing future chip generations and related rack systems with Broadcom. That can sharpen cost control, improve supply planning, and fine-tune performance. It also strengthens Google’s case for using its own processors as a core part of its cloud AI strategy.
Are custom AI chips becoming a real alternative to Nvidia?
They are becoming more significant, especially for companies seeking lower costs or less reliance on a single supplier. In this roundup, Google’s TPUs look increasingly central to its AI infrastructure and cloud position. That does not make Nvidia irrelevant. It suggests the market is widening, with custom silicon gaining traction where companies want tighter control over performance and economics.
Why are AI specialists worried about Nvidia buying SchedMD?
The concern is not limited to one software company. SchedMD is tied to Slurm, an open-source workload manager used to schedule AI training jobs and major supercomputers. Because Nvidia already holds so much power in AI chips, some researchers worry that owning an important software layer could create unfair advantages. The issue is, at heart, about neutrality in shared infrastructure.
How could control of Slurm affect researchers and independent data centers?
A scheduler like Slurm occupies a critical position because it helps determine how workloads are managed across compute resources. If a dominant chip company controls that layer, rivals and independent operators may worry about access, priorities, or future compatibility. The article does not claim any specific abuse. It shows why people see this as a broader fairness concern, especially in research and government computing environments.
Why are investors pushing for more transparency on AI data centers’ water and power use?
Investors seem to want more than broad sustainability promises. They are asking for local, site-specific disclosure because community opposition often turns on practical questions such as water use, electricity demand, and who bears the strain. The article suggests AI infrastructure is expanding faster than reporting standards. That gap makes it harder for shareholders and communities to judge the environmental trade-offs with any precision.
What is OpenAI proposing for the AI economy?
OpenAI’s policy vision includes ideas such as public wealth funds, stronger safety nets, robot taxes, and even a four-day workweek. The common thread is how to spread the gains from AI more broadly if automation reshapes jobs and wealth. The piece frames this as both policy thinking and image management. In other words, it is about economics, but also about how OpenAI wants to be perceived.
What does Google’s offline dictation app say about the direction of on-device AI?
It suggests Google sees growing value in putting AI directly on devices rather than routing everything through the cloud. The app works offline, has no usage caps, and automatically cleans up spoken text by removing filler words and self-corrections. That combination points to a focus on privacy, lower latency, and lower serving costs. It also shows how consumer AI tools are becoming more deeply woven into everyday workflows.
Why does Xoople’s funding matter for AI models that rely on physical-world data?
Xoople is betting that better Earth observation data can become a key input for enterprise AI systems. Its pitch is not merely satellites for imagery, but higher-quality “ground truth” for uses such as logistics, agriculture, infrastructure, and disaster monitoring. That matters because many production AI systems depend on reliable data from the physical world. Better inputs can be every bit as important as better models.