THE GOOD AI
The productivity numbers sceptics said would never arrive are arriving
For years, economists asked an uncomfortable question: if AI is so transformative, why isn't it showing up in the data? The sceptics had a point. Until recently, the macro numbers were uninspiring.
That's changing. US labour productivity grew approximately 2.7% in 2025, nearly double the 1.4% annual average of the prior decade. Real GDP growth held at 3.7% in Q4. Former White House economist Jason Furman, not exactly a tech cheerleader, has publicly acknowledged that the aggregate data reflects a real AI-driven boost. Coming from an independent voice, that acknowledgement marks a meaningful shift.
The US numbers don't stand alone. A major study of more than 12,000 European firms found that AI adoption increases labour productivity by 4% on average, with no evidence of reduced employment in the short run. The gains are real but uneven. Firms that invest in training alongside AI tools see the biggest lifts. The lesson isn't "AI always works." It's that AI works when organisations treat it as a change management challenge, not just a software purchase.
Economist Erik Brynjolfsson has long argued that transformative technologies follow a J-curve: a long, frustrating investment phase followed by a harvest phase where gains compound. We may be climbing out of the trough. The data is beginning to agree.
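For readers who like to see the shape, here is a toy sketch of that J-curve in Python. Every parameter (the size of the investment drag, the long-run payoff, the ramp length) is an illustrative assumption, not fitted to any real data; the point is only the shape: net-negative during the investment phase, net-positive once the harvest compounds.

```python
import math

def j_curve(year, invest_cost=0.8, payoff=0.25, ramp=4.0):
    """Net productivity effect `year` years after AI adoption (toy model).

    invest_cost: up-front drag from reorganisation, which decays over time.
    payoff: long-run annual gain once workflows are AI-native.
    ramp: rough number of years the transition takes.
    """
    drag = invest_cost * math.exp(-year / ramp)   # fading investment drag
    gain = payoff * (1 - math.exp(-year / ramp))  # gains as redesign lands
    return gain - drag

# Early years are net-negative (the "trough"); later years turn positive.
trajectory = [round(j_curve(y), 3) for y in range(0, 9)]
```

With these assumed parameters the curve stays below zero for roughly the first five years and then crosses into positive territory, which is the frustrating-investment-then-harvest pattern Brynjolfsson describes.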
Sources:
3 GOOD SIGNALS
New York just made AI safety a $30M financial risk
New York's Responsible AI Safety and Education Act targets high-capability AI developers with concrete requirements: documented safety policies, risk-mitigation frameworks, and a ban on deploying models above defined risk thresholds. Violations carry fines of up to $10M for a first offence and $30M for repeat violations. This is governance doing what it's supposed to do: creating real financial accountability for safety rather than relying on voluntary commitments. Combined with the EU AI Act's August 2026 compliance deadline, the regulatory window is closing faster than many developers expected.
Source: swept.ai
Two states, one week, bipartisan support: kids' AI safety is actually happening
Washington State signed HB 2225 into law on March 12. The law requires AI companion chatbots to disclose that they're artificial at the start of every session and every three hours thereafter, bans romantic mimicry with minors, and mandates crisis protocols when self-harm or suicidal ideation is detected. Oregon passed a near-identical law just days earlier. The fact that both passed with bipartisan backing makes this one of the cleaner signals we've seen that responsible AI governance can actually get through legislatures. This isn't the whole answer, but it's real, enforceable movement on a problem that needed one.
Source: KOMO News
OpenAI is asking the hard question in edtech: are students actually learning?
Most AI tutoring tools optimise for one thing: test scores. OpenAI's new Learning Outcomes Measurement Suite, unveiled on March 4, tracks something more important: whether AI tutoring builds lasting cognitive skills such as autonomous motivation, task persistence, and productive engagement over time. Early signals from ChatGPT's Study Mode suggest that when AI guides students rather than just answering for them, real learning gains do appear. What stands out is OpenAI's commitment to publishing the methodology publicly so schools can use and verify it independently.
Source: Axios
THE DEEPER DIVE
The J-curve is real, and the next 24 months will separate AI-mature organisations from everyone else
There's a version of this productivity story that's easy to dismiss. "AI hype," "correlation not causation," "wait for the next revision." We've made those arguments ourselves. So let us explain why we think this time the data deserves serious attention, and what the PM and Risk Manager lens says about what comes next.
Why the scepticism was always reasonable
The original sceptic case was solid: firms were buying AI tools, but reorganising around them takes time. You don't get productivity gains from giving everyone a chatbot. You get them from redesigning workflows, retraining teams, and changing how decisions get made. That takes years, not months. So the absence of macro data through 2023 and 2024 wasn't evidence AI didn't work. It was evidence that large-scale technology transitions are slow.
What's different now
The 2025 productivity data represents firms that started investing in AI-native workflows in 2022 and 2023, not firms that signed up for a subscription last quarter. The CEPR finding that training investment is the key differentiator is important here. It's not the AI that drives the gain. It's the combination of AI and the organisational capability to use it well. That's a harder thing to replicate quickly, which means the gap between early movers and late adopters is about to get harder to close.
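A back-of-the-envelope sketch of why that gap compounds. The growth rates below are borrowed loosely from the macro figures cited above and are assumptions for illustration, not firm-level estimates from the CEPR study:

```python
def productivity_gap(early_rate=0.027, late_rate=0.014, years=5):
    """Ratio of early-mover to late-adopter productivity after `years`,
    assuming each compounds at a constant annual growth rate."""
    early = (1 + early_rate) ** years
    late = (1 + late_rate) ** years
    return early / late

# Five years of compounding a ~1.3pp annual edge opens a cumulative lead
# of roughly 6-7%, and the lead keeps widening every additional year.
five_year_lead = productivity_gap(years=5)
```

The mechanism is simple: a modest annual differential, compounded, becomes a structural gap, which is why late adopters can't close it with a single catch-up investment.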
The PM lens
We're entering a phase where AI ROI can actually be measured and defended. That shifts the internal conversation from "should we invest?" to "how do we structure the investment to capture the gains?" The product teams and PMs who understand how to design AI-native workflows, not just integrate AI tools into existing ones, are going to have a significant edge. The job isn't "add AI to the process." The job is "redesign the process around what AI makes possible."
The risk lens
The CEPR data show no job losses, and the researchers are explicit that this is a short-run finding. The labour market implications of the full harvest phase remain genuinely uncertain, and we think honest optimism requires us to say that clearly. The scenario where AI creates large net productivity gains and large net workforce disruption simultaneously is not a contradiction. It's the historical pattern of most major technology transitions. Preparing for that complexity now, rather than waiting for the data to force the conversation, is what good risk management looks like.
The next 12 to 24 months
The firms that invested in training alongside AI adoption from 2023 to 2025 are about to look very different from those that didn't. For workers, the next IMF and CEPR reports will be worth watching closely. For anyone building AI products: the productivity evidence is your strongest tailwind yet, but only if the value reaches users, not just dashboards.
Sources: Brynjolfsson analysis · CEPR VoxEU study
TOOL OF THE WEEK
Amazon Health AI: your health questions answered, for free
Amazon Health AI, previously available only through its One Medical acquisition, is now open to all US users with no Prime subscription required. It lets anyone ask plain-language questions about symptoms, medications, and test results, manage prescription renewals, and book appointments. Prime members get up to five free direct-message consultations with a One Medical clinician for 30 common conditions. For people who don't have a regular GP or struggle to navigate the healthcare system, this is a meaningful accessibility step. The long-term commercial intent is real and worth watching as data use evolves, but the near-term utility is hard to dismiss.
Read more: TechCrunch
ONE QUESTION
If AI is finally delivering measurable productivity gains at the macro level, what's the one thing your organisation should be doing differently right now, and what's actually stopping it?
Hit reply. We read every response.
