🔍 What's in today's flow:

    🌊  EPA Water Reuse Action Plan 2.0 (released 16 April 2026) explicitly names AI data centre cooling as a new driver of strategic water reuse investment, for the first time in federal water policy.

📄  A new peer-reviewed study in Water Research (April 2026) quantifies AI infrastructure's global water footprint at 4.2 to 6.6 billion cubic metres annually by 2027, with two-thirds of post-2022 data centres already in water-stressed regions.

🛠️ A Nature npj Clean Water study demonstrates that machine learning applied to coagulant dosing at a real drinking water treatment plant reduced turbidity by 16.36% and cut dosing costs by 9.64%, with a Random Forest model accuracy of R² = 0.9922.

🔒  Anthropic restricted access to its most powerful model, Mythos, to just 12 US-headquartered companies. European cyber agencies were excluded, raising significant governance concerns for critical infrastructure operators worldwide.

⚖️  The EU AI Act's high-risk obligations take effect on 2 August 2026. Final trilogue negotiations are in their last stretch, with a political trilogue scheduled for 28 April.

📄  AI research spotlight: AI is thirsty and water utilities should know it

Source: The Water Footprint of Artificial Intelligence: Emerging Solutions and Governance Imperatives, Water Research, Elsevier, April 2026. DOI: 10.1016/j.watres.2026.125866

We talk a lot about AI solving water problems. This paper flips the lens: AI is becoming a water problem.

Published in Water Research, one of the sector's leading peer-reviewed journals, this study quantifies the freshwater cost of running AI at scale. AI's global water footprint could reach 4.2 to 6.6 billion cubic metres annually by 2027. Two-thirds of post-2022 data centres are already sited in water-stressed regions. The consumption spans three pathways: direct evaporative cooling, electricity generation, and semiconductor manufacturing.
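
A quick unit conversion puts the study's headline range into day-to-day utility terms. The 300,000 m³/day reference plant is borrowed from the case study later in this issue; the comparison itself is ours, not the paper's:

```python
# Convert the study's headline range (4.2-6.6 billion m³/yr) into average
# daily volumes. The reference-plant comparison is an illustration, not a
# figure from the paper.

ANNUAL_LOW_M3 = 4.2e9   # lower bound, m³ per year (from the study)
ANNUAL_HIGH_M3 = 6.6e9  # upper bound, m³ per year (from the study)

def daily_m3(annual_m3: float) -> float:
    """Convert an annual volume to an average daily volume."""
    return annual_m3 / 365

low_daily = daily_m3(ANNUAL_LOW_M3)    # ~11.5 million m³/day
high_daily = daily_m3(ANNUAL_HIGH_M3)  # ~18.1 million m³/day

# Express as multiples of a large conventional drinking water plant
# (300,000 m³/day, the scale of the case study below).
PLANT_M3_DAY = 300_000
print(f"{low_daily / PLANT_M3_DAY:.0f}-{high_daily / PLANT_M3_DAY:.0f} "
      "large plants' worth of daily output")
```

In other words, by 2027 AI's projected draw is the daily output of several dozen large urban treatment plants, every day.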

The authors introduce "digital water sobriety" as a governance framework, asking which AI applications actually justify their freshwater draw and calling for mandatory facility-level water use transparency. For water utilities, the implication is direct: you are not just an AI adopter. You are increasingly an AI supplier. Whether your organisation has a seat in data centre siting decisions is now a strategic question, not a theoretical one.  

🛠️ Case study: Machine learning cuts coagulant dosing costs by 9.64% at a 300,000 m³/day drinking water plant

This is the AI case study that should resonate most with process engineers: machine learning applied directly to coagulation, the foundational step in drinking water treatment.

Researchers developed a Random Forest model with knowledge embedding for coagulation control at a conventional drinking water treatment plant, integrating multi-stage water quality parameters alongside hyper-parametric constraints on turbidity thresholds, energy use, and economic cost. The model achieved R² = 0.9922, with a percentage error of only 2.53%.

Validated across 10 days at a real plant, the system reduced post-coagulation turbidity by 16.36% and cut dosing costs by 9.64% compared to conventional jar test-based dosing. Critically, the model was also interpretable, meaning operators could understand why the system was making each dosing recommendation, not just accept a black-box output. This addresses one of the most common reasons water utilities hesitate to adopt AI in treatment processes: the trust deficit.
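
The paper's full pipeline is not reproduced here, but the core idea, training a Random Forest to map multi-stage water quality parameters to a coagulant dose and reading interpretability off the trained model, can be sketched in a few lines. Everything below (feature names, synthetic data, coefficients) is illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)

# Illustrative multi-stage features: raw turbidity (NTU), pH,
# temperature (°C), inlet flow (m³/h). A real plant would train on
# historical SCADA/LIMS records, not synthetic data.
n = 2000
X = np.column_stack([
    rng.uniform(5, 200, n),     # raw water turbidity
    rng.uniform(6.5, 8.5, n),   # pH
    rng.uniform(5, 30, n),      # temperature
    rng.uniform(800, 1500, n),  # inlet flow
])
# Synthetic "true" dose: roughly log-linear in turbidity, pH-adjusted.
y = 5 + 8 * np.log1p(X[:, 0]) - 2 * (X[:, 1] - 7.5) + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

print(f"R² on held-out data: {r2_score(y_te, model.predict(X_te)):.3f}")
# Feature importances are one route to the interpretability operators
# need: which inputs actually drive each dosing recommendation.
print(dict(zip(["turbidity", "pH", "temp", "flow"],
               model.feature_importances_.round(3))))
```

On real plant data the hard work is in the data cleaning and the constraint handling, not the model call; but the modelling step itself is this accessible.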

The approach is directly applicable to any plant running conventional coagulation-flocculation-sedimentation. The data requirements are manageable. The results are quantifiable. This is practical AI adoption, not a research pilot.

⚖️ Regulation watch: Four months to the EU AI Act deadline and water utilities should be preparing now

Source: EU AI Act Newsletter #100, Future of Life Institute, April 2026  |  EPA WRAP 2.0 press release, 16 April 2026  |  DTA AI Policy, Australian Government

Three regulatory developments this week that water professionals cannot afford to miss.

EU AI Act: High-risk AI obligations take effect 2 August 2026, less than four months away. Final trilogue negotiations between the European Parliament and Council continue through April, with a political trilogue on 28 April. Water utilities in Europe deploying AI in network monitoring, process control, or demand forecasting need to assess now whether those systems qualify as high-risk. For most, the answer will be yes.

EPA WRAP 2.0: Released 16 April 2026, the plan explicitly states its goal is to make the US "the AI capital of the world" by securing recycled water for data centre cooling. This is a policy signal, not a mandate, but it will shape utility planning frameworks and procurement priorities across the sector in coming years.

Australia: The DTA AI Policy and NSW AI Assessment Framework already set expectations for government-affiliated water entities. If your utility operates under a government business enterprise structure, AI governance is compliance, not optional best practice.

Latest in AI: Google's Deep Research Agent and what it means for water professionals

Source: Google Deep Research announcement, Google Blog, April 2026

Google released Deep Research and Deep Research Max this week, two AI agents powered by Gemini 3.1 Pro that generate full research reports from the web, uploaded files, and connected data sources, with charts and infographics included. Early partnerships with providers like S&P and PitchBook indicate the intent to pipe proprietary datasets directly into research workflows.

For water utilities and engineers, the implications are practical. Research-heavy tasks, including regulatory benchmarking, treatment technology comparisons, effluent standard reviews, and permit literature surveys, consume significant time at most organisations. Tools like Deep Research compress that from days to hours.

The more significant near-term possibility is connecting organisational data via Model Context Protocol (MCP). Utilities could eventually query their own SCADA history, lab data, or asset records alongside open-source literature in a single workflow. That integration pathway is being built now by multiple vendors. Water organisations that have not started thinking about data architecture will find themselves behind when these tools arrive at scale.  
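
As a taste of what that unified workflow could look like once SCADA history and lab data are queryable side by side, here is a minimal sketch that aligns lab grab samples with the nearest preceding SCADA records. All table and column names are invented for the example:

```python
import pandas as pd

# Hypothetical internal datasets; a real deployment would expose these
# through MCP servers or a data warehouse rather than flat files.
scada = pd.DataFrame({
    "timestamp": pd.to_datetime(["2026-04-01 08:00", "2026-04-01 09:00",
                                 "2026-04-01 10:00"]),
    "inlet_turbidity_ntu": [42.0, 55.3, 48.1],
    "coagulant_dose_mg_l": [18.2, 21.0, 19.5],
})
lab = pd.DataFrame({
    "timestamp": pd.to_datetime(["2026-04-01 08:15", "2026-04-01 09:40"]),
    "settled_turbidity_ntu": [1.8, 2.4],
})

# Align each lab grab to the nearest preceding SCADA record
# (merge_asof requires both frames sorted by the key).
combined = pd.merge_asof(lab.sort_values("timestamp"),
                         scada.sort_values("timestamp"),
                         on="timestamp", direction="backward")
print(combined[["timestamp", "inlet_turbidity_ntu",
                "coagulant_dose_mg_l", "settled_turbidity_ntu"]])
```

The point is not the five lines of pandas; it is that this join is only possible if timestamps, units, and identifiers are consistent across systems, which is exactly the data architecture question utilities should be working on now.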

AI tool of the week: data-driven chemical optimisation for wastewater treatment

Provider: BlueGreen Services  |  Category: Process optimisation  |  Focus: Wastewater chemical dosing  |  Further reading: AI in chemical dosing review, ScienceDirect 2025

What it does: AI-assisted analytics applied to chemical dosing decisions in real time, covering phosphorus removal, odour control, solids settling, and final effluent polishing. The system correlates live process conditions with chemical performance to recommend dosing adjustments, replacing reliance on lab turnaround times and operator experience alone.
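
The vendor's algorithm is not public, but the underlying pattern, closing the loop between a live effluent reading and a chemical dose setpoint, can be sketched as a simple proportional correction. The target, gain, and limits below are illustrative assumptions, not vendor parameters:

```python
def recommend_dose(current_dose_mg_l: float,
                   effluent_p_mg_l: float,
                   target_p_mg_l: float = 0.5,
                   gain: float = 4.0,
                   dose_limits: tuple[float, float] = (2.0, 40.0)) -> float:
    """Proportional correction to a coagulant (e.g. ferric/alum) dose
    from the latest online phosphorus reading. All numbers illustrative."""
    error = effluent_p_mg_l - target_p_mg_l      # positive -> under-dosing
    new_dose = current_dose_mg_l + gain * error
    lo, hi = dose_limits
    return round(max(lo, min(hi, new_dose)), 2)  # clamp to a safe band

# Effluent P above target -> recommend a higher dose
print(recommend_dose(current_dose_mg_l=12.0, effluent_p_mg_l=0.9))
```

Production systems layer ML models, multi-objective constraints, and operator override on top of this loop, but the economics come from the same place: dosing to the measured condition rather than to a conservative fixed setpoint.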

Why it matters: Chemical dosing is one of the largest controllable cost lines in wastewater operations. Tightening permit limits for nitrogen and phosphorus make manual optimisation harder to sustain. This tool targets that gap without requiring new infrastructure or a major procurement cycle.

Research backing: A 2025 ScienceDirect systematic review of AI in drinking water chemical dosing confirms that ML models for coagulant and disinfectant control consistently outperform conventional jar-test methods on both accuracy and cost.

Best for: Operators managing nutrient-sensitive or high-compliance wastewater plants. Vendor engagement required; no self-serve trial available.

🔒 The shadow of AI: when private companies govern the most powerful AI models

Anthropic's new frontier model, Mythos, reportedly surpasses most humans at identifying and exploiting cyber vulnerabilities. Rather than a broad or regulated release, Anthropic gave access to just 12 US-headquartered tech companies, including Apple, Microsoft, and Amazon, while also granting the UK's AI Security Institute limited testing access.

Of eight European national cyber agencies contacted by POLITICO, only Germany had entered conversations with Anthropic, and none had been able to test the model. AI researcher Yoshua Bengio described it as "deeply concerning" that private companies, not regulators, are deciding who accesses capabilities of this magnitude.

For water utilities managing critical infrastructure, this is not an abstract debate. The most capable AI security tools are being concentrated in a small group of US tech companies, and the rest of the world's infrastructure operators are outside that circle. Governance matters. Vendor selection matters. Responsible AI is not a compliance checkbox. It is a security question.

Thanks for reading! I hope you’ve enjoyed this week’s edition and look forward to seeing you next week!

Dr. Andrea G.T
