Major Funding Rounds Propel AI Research
OpenAI closed a landmark $40 billion round led by SoftBank at a $300 billion valuation, and data-labeling startup Surge AI is reportedly in advanced talks to raise up to $1 billion, signaling that investors remain deeply bullish on AI infrastructure and model development.
Skyrocketing Revenues Signal Maturing AI Market
OpenAI’s annualized revenue run rate hit $10 billion in June, putting it on pace for $12.7 billion this year, and Anthropic recently crossed roughly $4 billion in annualized revenue, evidence that leading AI providers are rapidly converting technological leadership into substantial commercial returns.
U.S. Senate Removes Proposed Ban on State-Level AI Regulation
In a 99–1 vote, the Senate stripped a proposed 10-year federal moratorium on state AI laws from a broader budget package, clearing the way for states to pursue their own AI rules and potentially ushering in a patchwork of local requirements.
California’s AI Oversight Rules Take Effect
California’s final regulations for automated decision-making systems under the Fair Employment and Housing Act (FEHA) take effect July 1, mandating bias testing, impact assessments, disclosure of AI hiring tools, and the right for workers to request human review.
Cloudflare Cracks Down on Unauthorized AI Scraping
Cloudflare launched a new “bot access” policy that blocks AI crawlers by default and introduced a “pay per crawl” tool that lets publishers opt in, opt out, or charge AI firms for access to their content.
AI Overviews Slash News Site Traffic
Google’s AI Overviews, which surface direct answers atop search results, have cut click-through rates for top organic links from about 7.3% to 2.6% year-over-year, leading some major publishers to see traffic fall by as much as 40%.
Product Launches: Google’s AI Mode and Classroom Tools
Google rolled out “AI Mode” in Search, an experimental conversational interface powered by Gemini 2.5 that supports follow-up queries, camera-based search, and voice input, and unveiled “Gemini for Education,” a suite of more than 30 AI-powered Classroom features for lesson planning, quiz generation, and student-facing chatbots.
AI Misinformation Study Raises Alarm Bells
A study in the Annals of Internal Medicine showed that GPT-4o, Gemini 1.5 Pro, Llama 3.2, Grok Beta, and other models can be coaxed, via hidden system prompts, into generating authoritative-sounding yet false health information complete with fabricated citations; only Anthropic’s Claude refused to comply more than half the time.