Sensible AI #20: Your Responsible AI newsletter - Friday edition

From News to Action: what happened in Responsible AI and how to move from headlines to operations.

👋🏻 One last one before the weekend! It’s the Friday edition of Sensible AI, turning headlines into decisions in minutes.

A downpour of Responsible AI news: a safety ranking that leaves Big Tech looking shaky, new public policies, a LATAM regulatory wave, and enterprise “guardrails” with practical guides.

And as always, your “from the news to your action plan” section with concrete steps to implement.

Let’s begin 👇

Spanish version here: IA con Sentido – Español

TOP OF THE WEEK

🤖 Future of Life releases its 2025 AI Safety Index
→ The Future of Life Institute published an index in which experts score AI companies on 35 indicators; none of the major companies scores above a C+ for safety.

🇦🇺 Australia approves a national AI plan
→ The Australian government releases its 2025 plan to boost the AI industry with three goals: opportunities, shared benefits, and public safety.

🛡️ EU enables anonymous AI complaints channel
→ The European AI Office launched an anonymous tool to report AI Act violations from any EU country.

⚠️ UN warns of AI-driven inequality
→ A UNDP report warns AI could widen gaps between rich and poor countries without inclusive policies.

🇺🇦 Ukraine receives an AI ethics award
→ Earns international recognition for developing ethical, safe AI during the war and reconstruction.

WHAT HAPPENED WORLDWIDE?

🇬🇧 UK–Israel agree on AI cooperation
→ Both countries launched a joint AI cooperation framework with Abraham Accords partners to promote responsible use.

🇪🇸 Madrid opens consultation on a regional AI law
→ The Community of Madrid opened a public consultation through Dec 19, 2025 for a law to ethically regulate AI in public services.

⚖️ Global declaration sets AI standards
→ International bodies approved the Seoul Declaration guiding global principles for Responsible AI across sectors.

🇵🇾 Paraguay assesses AI readiness
→ With support from UNESCO and the EU, Paraguay released its first national assessment of institutional and social preparedness for an ethical AI strategy.

🛰️ US: Securus monitors calls with AI
→ Since 2023, Securus has used an AI model trained on years of US inmate calls to detect alleged crimes in real time.

🤝 Warner Music strikes deal with Suno
→ Reaches an agreement to allow AI-generated music with official licensing and artist compensation.

🏛️ Seattle approves a Responsible AI plan
→ The city launched a municipal Responsible AI plan alongside community hackathons to promote transparent civic applications.

🇵🇪 Peru explores AI cooperatives
→ Experts highlight the potential of AI cooperatives to share benefits and governance across economic actors.

🌐 India sets a national AI roadmap
→ The India Internet Governance Forum presented a strategy to improve digital infrastructure and ensure responsible AI deployments.

🇲🇽 State of Mexico prepares a state AI law
→ The State of Mexico’s Congress set up a commission to craft a pioneering AI law in the country.

🇧🇩 Bangladesh advances an ethical AI agenda
→ UNESCO reported progress on a national Responsible AI agenda via government–industry–academia collaboration in Bangladesh.

🌍 Ghana urges protection of vulnerable people
→ A minister called for AI governance frameworks that prioritize protecting vulnerable groups and fundamental rights.

🇲🇽 ECLAC calls for balance between AI and innovation
→ At Mexico Digital Summit 2025, ECLAC urged Latin America to balance AI innovation with regulation focused on inclusion.

🇪🇸 Spain proposes an AI-authors law
→ Minister Ernest Urtasun asked the EU for a specific rule to protect authors’ rights from AI use.

🌐 CAF and UNDP push digital governance
→ Latin American leaders are shaping a regional, people-centered digital governance roadmap in Montevideo.

🇨🇴 Colombia flags AI governance and talent gaps
→ Authorities estimate Colombia must invest in institutional regulation and talent development to sustain AI adoption.

🏢 RSM recommends data governance
→ RSM Global says strong data governance is key to scaling analytics and AI with control and quality.

AND AT ENTERPRISE LEVEL?

🤖 OpenAI launches an alignment blog
→ Releases an early-stage technical research blog on safety and alignment for advanced AI and AGI.

⚠️ Meta faces EU antitrust probe
→ Meta Platforms is under EU investigation for potential competition violations related to its AI deployment in WhatsApp.

🏦 BBVA implements Responsible AI
→ The bank announced a multi-country deployment of Google’s Gemini tools with risk and compliance controls.

🛡️ Cybersecurity Insiders warns about uncontrolled AI
→ The report shows 83% of organizations use AI, but only 13% have real visibility into sensitive data.

💳 Visa promotes Responsible AI in finance
→ Issued recommendations to integrate AI in digital payments while balancing innovation, customer protection, and risk control.

🔍 Analytics Insight warns of identity risks
→ Analysts note accelerated AI adoption is outpacing governance frameworks, creating digital identity security risks.

🛠️ O’Reilly calls for guardrails for AI agents
→ O’Reilly Media underscores the need for clear technical limits when enterprises use AI agents in complex environments.

🏦 WOCCU issues Responsible AI guidance for credit unions
→ Published a white paper guiding safe, responsible AI adoption for credit unions.

📈 EY advances Responsible GenAI in finance
→ Released a framework for financial institutions to use GenAI with robust governance and risk mitigation.

ORGANIZATIONS & STANDARDS

🌍 Diversity in AI worries the sector
→ Experts warn that low representation of women and minorities in AI teams reinforces bias in algorithms and automated decisions.

📌 Pulitzer funds AI accountability
→ The Pulitzer Center announced new global projects investigating responsibility and accountability in AI systems across countries.

🌐 WEF calls for clear governance of AI agents
→ The World Economic Forum stresses the need for human oversight and clear rules for AI agents in global organizations.

🌐 Raconteur highlights the real risk: ungoverned data
→ The analysis argues the main AI danger isn’t the algorithm but poor data management: access, flows, and weak controls.

⚖️ Lexpert urges clarity on AI legal liability
→ Legal outlet analyzes how to assign responsibility when AI-driven automated decisions cause harm in commercial contexts.

🎓 UNESCO trains officials in ethical AI
→ Launched a global AI literacy program for public officials to strengthen ethical and regulatory governance.

🧩 WEF warns about synthetic data risks
→ Alerts that using synthetic data demands strong governance to avoid distortions, loss of trust, or hidden risks.

📋 Center for Democracy & Technology (CDT) issues a checklist
→ Published a “Checklist for Governments” to guide state and local leaders on adopting Responsible AI in the public sector.

🌍 UNDP — Mongolia and Responsible AI
→ Describes how Mongolia uses AI to close rural–urban gaps, improve public services, and protect nomadic livelihoods.

FROM THE NEWS TO YOUR ACTION PLAN

I turn these headlines into clear actions you can start today to strengthen your AI governance. Pick just one to execute this week:

🟪 Treat AI as its own identity → Assign dedicated identities to AI agents with the minimum required permissions.
🟪 Classify and protect sensitive data → Label high-risk data and apply encryption or special controls before AI use.
🟪 Restrict AI agents’ scope → Limit which systems, data, or functions an AI agent can access based on its role.
🟪 Develop clear AI usage policies → Define when and how AI or autonomous agents may be used internally.
🟪 Prepare training for teams and leaders → Train technical and business owners on AI risks, governance, and best practices.
🟪 Run periodic security & compliance reviews → Assess risks, vulnerabilities, and regulatory compliance of AI systems at least every six months.
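For the first three actions above, the core idea is deny-by-default: each AI agent gets its own identity with an explicit, minimal set of permissions, and anything unlisted is refused. Here is a minimal sketch of that pattern; the names (`AGENT_SCOPES`, `is_allowed`, the example agents and permissions) are illustrative placeholders, not a real library or your actual systems.

```python
# Least-privilege permission check for AI agent identities (illustrative sketch).
# Each agent has its own identity and an explicit allowlist of permissions.

AGENT_SCOPES = {
    "support-bot": {"read:tickets", "write:replies"},
    "report-agent": {"read:sales_db"},
}

def is_allowed(agent_id: str, permission: str) -> bool:
    """Deny by default: unknown agents and unlisted permissions are refused."""
    return permission in AGENT_SCOPES.get(agent_id, set())

# The support bot can read tickets, but not the sales database:
print(is_allowed("support-bot", "read:tickets"))   # True
print(is_allowed("support-bot", "read:sales_db"))  # False
print(is_allowed("unknown-agent", "read:tickets")) # False
```

In practice you would back this with your identity provider (service accounts, scoped API keys) rather than a dictionary, but the principle is the same: no shared credentials, and every agent’s access is enumerable and auditable.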

🟣 AI Governance Services for Your Organization

100% written delivery: clear, practical, and ready to implement.

Vendor Evaluation · Vendor Risk Check
→ Full evaluation of AI vendors · go / hold / no-go traffic light · contractual recommendations.

Internal AI Use · BYOAI Express Policy
→ Custom BYOAI & GenAI policy · launch checklist · employee email · FAQ document.

AI Diagnostics · Mini-Audit Flash
→ Diagnostics + risk heatmap · 30–60–90 plan · executive summary.

EU AI Act · EU AI Act Readiness
→ AI system inventory · risk classification · minimum controls matrix.

Ongoing Support · Asynchronous Retainer
→ Monthly 100% written support · policy review · case-by-case guidance.

Team Training · “Use AI Responsibly” Workshop
→ Slides + policy template + 10-question checklist · 2h / 4h / full-day · online or onsite.

📩 Request a personalized proposal or reply to this email.

That’s it for today!

Thanks for reading to the end and for being part of a community pushing for more transparent, human-centered AI. What topic would you like me to explore on Tuesday? Your reply helps me keep each edition useful.

If this helped you, forward it to your team and reply with feedback; it helps a lot.

With gratitude and tea in hand ☕,

Karine

Mom, passionate about human-centered Responsible AI - AI Governance Consultant & WiAIG Mexico Leader
LinkedIn - [email protected] - www.karine.ai

All sources have been verified and dated. If you see any mistake, reply and I’ll fix it on the web version. - ENGLISH