🌿 THE GOOD AI
A 100x energy cut, with better results
One of the most credible criticisms of AI is one we take seriously: the technology consumes enormous amounts of energy. Training a single large model can emit as much carbon as five cars over their entire lifetimes. So when researchers claim a breakthrough on efficiency, we want to see the numbers.

The Tufts University numbers are real. Researchers there built a neuro-symbolic AI, a hybrid architecture that pairs neural networks with structured, human-style logical reasoning. On a complex robotics benchmark, it achieved a 95% success rate. The standard comparison system managed just 34%. Training the neuro-symbolic model required only 1% of the energy a conventional system needs. Inference, meaning the actual work the model does once deployed, used just 5%. That is not an incremental improvement. That is a different approach entirely.

The insight is straightforward: neural networks are excellent at recognizing patterns, while formal logic excels at reasoning through them. Combining both, rather than forcing a single neural network to handle everything, turns out to be more capable and vastly more efficient.

The research was published April 5 and will be presented at the International Conference on Robotics and Automation in Vienna this May.

Here is the honest caveat. This result comes from a robotics lab, not a deployed consumer product. Whether this architecture scales to the large language models powering most commercial AI applications remains an open research question. But it demonstrates something important: efficiency and capability do not have to trade off against each other. That idea matters more than any single benchmark.
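To make the division of labor concrete, here is a minimal sketch of the general neuro-symbolic pattern: a learned "perception" stage maps raw observations to symbolic facts, and a small rule engine reasons over those facts. Every name here is illustrative; this is not the Tufts system, just the shape of the hybrid approach.

```python
def perceive(observation):
    """Stand-in for a neural network: turns raw input into symbolic facts."""
    facts = set()
    if observation["gripper_distance"] < 0.05:
        facts.add(("near", "object"))
    if observation["object_visible"]:
        facts.add(("visible", "object"))
    return facts

# (preconditions, conclusion) pairs: classic forward-chaining rules.
RULES = [
    ({("visible", "object")}, ("reachable", "object")),
    ({("reachable", "object"), ("near", "object")}, ("graspable", "object")),
]

def reason(facts):
    """Apply rules until no new facts can be derived (forward chaining)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for preconditions, conclusion in RULES:
            if preconditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

obs = {"gripper_distance": 0.03, "object_visible": True}
assert ("graspable", "object") in reason(perceive(obs))
```

The efficiency claim becomes intuitive here: the rule engine is a few set operations, so only the perception stage needs expensive learned computation.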
⚡ 3 GOOD SIGNALS
⚡ The Stethoscope That Catches Heart Disease Years Early
Researchers at Florida International University built a machine-learning algorithm that detects cardiovascular disease from digital stethoscope sounds, before patients feel any symptoms. Lab results: 95% accuracy on healthy hearts, 85% on diseased ones. FIU has now partnered with Baptist Health to enroll real cardiovascular patients and begin real-world training, with a five-year goal of nationwide deployment. Heart disease kills more Americans than anything else, and catching it earlier changes the odds significantly.
🤖 MIT Says AI Is a Rising Tide, Not a Tidal Wave
The largest evaluation of AI work capability yet tested 41 models across more than 11,000 real Labor Department job tasks. Finding: AI handles about 65% of text-based work at a "minimally sufficient" level, but human experts rated those outputs as barely passing most of the time. The researchers describe the effect as a rising tide that shifts work gradually, not a crashing wave that eliminates sectors overnight. For anyone bracing for sudden mass displacement, this is the most data-rich reassurance available, with the honest caveat that "good enough" keeps improving.
Source: Axios
🌱 $30 Million to Build AI That Serves the Public
Google.org has committed $30 million, its largest AI-for-good challenge yet, to fund nonprofits and academic institutions building AI solutions for public services. Grants range from $1 million to $3 million per organization, and winners also receive placement in a Google.org Accelerator where Google engineers work alongside them directly. Focus areas are health, resilience, and the economy. What sets this apart from generic philanthropy: money paired with hands-on engineering support from the people who actually build these systems.
🔬 THE DEEPER DIVE
Six States Said No to AI-Only Claim Denials
The moment that crystallized this regulatory shift was not a dramatic Senate hearing. It was a pattern: patients receiving denial notices for medical care without a single clinician ever reviewing their case. AI systems trained to flag claims would issue denials at scale, and most patients had no way to know whether a human had been involved at all.
Six US states have now passed laws that change this directly. California went first, with the Physicians Make Decisions Act taking effect in January 2025. Arizona's HB 2175, signed by Governor Hobbs, bans AI from issuing medical necessity or prior authorization denials without human review, effective July 1, 2026. Texas, Maryland, Nebraska, and Connecticut have each enacted their own versions. The coalition is bipartisan, the language is specific, and the enforcement mechanisms have real consequences.
Why this problem has been so hard to fix
Health insurers have used automated systems to process claims for decades. What changed is the opacity and scale of modern AI. An older rules-based system at least followed legible logic that physicians could challenge. A machine learning model trained on aggregate population data makes statistical inferences that feel arbitrary when applied to an individual patient's situation. These laws are now catching up to that distinction.
Our PM + Risk Manager lens
From a product perspective, these laws are drawing the blueprint for compliant AI in high-stakes healthcare decisions. Insurers and health-tech companies now have to design human-in-the-loop workflows as a legal requirement in at least six markets. That constraint is clarifying: "AI recommends, human decides" is actually a better product architecture for these situations, and it creates the audit trail compliance and legal teams need. Companies building this now will not have to retrofit it later.
The open question is enforcement. A law that creates rights without mechanisms to detect violations can become a paper protection. The specific risks to watch: how "AI involvement" gets defined in practice, whether required human reviewers are genuinely empowered to override AI recommendations or effectively become rubber stamps, and whether federal preemption arguments complicate state-level enforcement. The spirit of these laws is strong. Whether the letter holds up under pressure requires watching.
The next 12–24 months
We expect the six-state footprint to grow to 15 or more by end of 2027, with federal discussion of a national standard likely by mid-2027. The pivotal moment will be a legal challenge from an insurer contesting a stateβs right to mandate human review of an AI recommendation. That case, whenever it arrives, will clarify the legal architecture for AI in high-stakes decisions well beyond insurance.
Source: Becker's Payer | Also covered by: Governing, Transparency Coalition
🛠 TOOL OF THE WEEK
AI Accessibility Auditing: The Deadline Changing Government Websites
On April 24, 2026, every US public entity serving 50,000 or more people must comply with WCAG 2.1 Level AA web accessibility standards or face DOJ enforcement, lawsuits, and potential loss of federal funding. Thousands of municipal websites and public school systems were not there yet. AI-powered tools are stepping in fast: automated image alt-text generation, document tagging, and real-time accessibility auditing are helping organizations close the gap at a scale manual review could never match. For the 61 million Americans with disabilities who rely on these systems, this is a concrete, measurable win for digital inclusion.
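To give a feel for what automated auditing does, here is a minimal sketch of one check of the kind these tools run constantly: scanning HTML for images with no alt text, a core WCAG 2.1 requirement. Real auditors cover hundreds of rules; this illustrative version uses only Python's standard library.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> elements that have no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            # Decorative images may legitimately use alt="", so we only
            # flag images where the alt attribute is entirely absent.
            if "alt" not in attr_dict:
                self.violations.append(attr_dict.get("src", "<unknown>"))

page = '<img src="chart.png"><img src="logo.png" alt="City logo">'
checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # prints ['chart.png']
```

An AI layer would then go a step further than this rule-based check, generating candidate alt text for each flagged image for a human to approve.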
→ Read more: Accessibility works
💬 ONE QUESTION
As AI begins making consequential decisions in medicine, insurance, and public services, what is the one domain in your own work or life where you most want a human to stay in the loop?
Hit reply. We read every response.

