Sensible AI #14: Your Responsible AI newsletter - Friday edition
From News to Action: what happened in Responsible AI and how to apply it, with headlines, risks, and free, trusted, practical resources.

👋🏻 Hello! Thanks for being here, reading “Sensible AI” to turn Responsible AI headlines into concrete actions.
In this edition, you’ll see new international regulations emerging, LATAM investing in capabilities, and companies adjusting their AI policies.
I’m sharing a concrete tool: a checklist to evaluate any vendor trying to sell you an AI system before you sign.
And as always, your “from the news to your action plan” section with concrete steps to implement.
Let’s begin 👇
Spanish version here: IA con Sentido – Español
TOP OF THE WEEK

⚖️ OpenAI sued over copyright
→ A judge orders the handover of 20 million anonymized chats in a New York copyright lawsuit.
🇪🇺 Europe — EDPS issues AI guidance
→ The European Data Protection Supervisor releases recommendations for comprehensive AI risk management in EU institutions.
🇸🇬 Singapore reinforces AI autonomy
→ The government will share cyber threat intelligence and support local firms to strengthen tech sovereignty and security.
📘 Capgemini releases its AI governance guide
→ A practical playbook to operationalize ethical governance and meet new regulatory demands in enterprises.
WHAT HAPPENED WORLDWIDE?

🇦🇺 Australia unveils APS AI 2025
→ The public-sector plan sets pillars of trust, people, and tools, with Chief AI Officers and a use-case register.
🏦 Singapore — MAS issues AI risk guidance
→ The Monetary Authority publishes AI risk-management guidelines for financial institutions.
🇰🇷 South Korea — Basic AI Law
→ Releases a draft Basic AI Law with transparency requirements, slated to take effect January 1.
🇳🇱 Netherlands drafts global strategy
→ Starts designing a national policy intended to shape international AI governance.
🇪🇸 Catalonia invests €1 billion
→ Pledges funding through 2030 to lead in ethical AI, focusing on business, training, and technological sovereignty.
🎓 U.S. — students build “AI for social good”
→ New England students developed responsible-AI projects aimed at public benefit across multiple institutions.
🏛️ European Parliament calls for oversight of algorithmic management
→ Urges rules for human supervision and data protection on labor platforms.
🌍 GPEN global children’s sweep
→ More than 30 authorities review children’s websites and apps.
🇲🇽 Mexico opens a public AI center
→ Claudia Sheinbaum inaugurated the first Public AI Training Center to develop national talent.
🌿 Green Alliance tackles climate AI
→ Analyzes how responsible AI can support climate-crisis solutions.
🇺🇸 Pennsylvania expands AI collaboration
→ Signs an agreement with Carnegie Mellon for state AI policy advice and responsible leadership.
🇺🇿 Uzbekistan–NVIDIA national centers
→ Will launch AI infrastructure and training with an emphasis on compliance and international standards.
🇵🇾 Paraguay reaffirms OECD commitment
→ Reiterates its commitment to responsible AI use before the OECD, backed by its new Data Protection Law.
AND AT ENTERPRISE LEVEL?

📊 Trends redefining AI ethics by 2026
→ Bernard Marr identifies eight trends that will reshape trust and accountability in AI through 2026.
🌱 Allianz outlines sustainable AI
→ Details a responsible-AI strategy emphasizing sustainability and environmental risk for investment.
🧩 ISG — frameworks for agentic AI
→ Reports that large enterprises are adopting data and talent frameworks to manage agentic AI under global standards.
ORGANIZATIONS & STANDARDS

🌐 WEF addresses public uncertainty about AI
→ The World Economic Forum warns about social distrust in AI and calls for greater global transparency.
🏛️ UNESCO trains Asia-Pacific judiciary
→ Launches regional training on AI, rule of law, and human rights for judicial officials.
FROM THE NEWS TO YOUR ACTION PLAN

I turn these headlines into clear actions you can start today to strengthen AI governance. Pick 1 to execute this week:
🟪 Shadow AI Amnesty → Run a confidential campaign to declare tools in use and migrate them to controlled environments within 30 days.
🟪 C2PA labels & watermarks → Embed C2PA in content and apply watermarks on outputs for traceability and dispute resolution.
🟪 Bounties & open auditing → Launch security/bias bounties with remediation SLAs and public reporting.
🟪 Cost & carbon governance → Set compute budgets, measure CO₂ per inference, and prioritize efficiency without sacrificing quality (see the sketch after this list).
And above all: 🟪 Responsible AI Committee → Form a cross-functional committee with executive mandate, use-case owners, and monthly risk reviews.
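On the cost & carbon item, here is a minimal sketch of the arithmetic behind "CO₂ per inference." Every figure below (power draw, grid intensity, latency) is an illustrative assumption; swap in the numbers from your own hardware and energy provider.

```python
# Minimal sketch: estimate grams of CO2 per inference from measured latency,
# an assumed accelerator power draw, and an assumed grid carbon intensity.
# All constants are illustrative assumptions, not vendor or provider figures.
GPU_POWER_WATTS = 300.0        # assumed average draw under load
GRID_KG_CO2_PER_KWH = 0.4      # assumed grid mix; use your provider's value

def co2_per_inference_grams(latency_seconds: float) -> float:
    """Energy (kWh) = power (kW) x time (h); CO2 = energy x grid intensity."""
    energy_kwh = (GPU_POWER_WATTS / 1000.0) * (latency_seconds / 3600.0)
    return energy_kwh * GRID_KG_CO2_PER_KWH * 1000.0  # kg -> grams

# Example: a 0.5-second inference under the assumed hardware and grid mix
print(f"~{co2_per_inference_grams(0.5):.4f} g CO2 per inference")
```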
TEMPLATE OF THE WEEK
✅ Assess Any AI Vendor in ~10 Minutes
Today’s gem: a one-pager with 10 questions to evaluate any AI vendor before you sign. ✍🏼 How I use it: I request a demo or fact sheet and check the traffic-light scoring of the answers; if the vendor passes, I run a KPI-based pilot and add the clauses my use case needs.
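If you prefer to see the idea in code, here is a minimal sketch of the traffic-light scoring logic. The questions and weights below are hypothetical placeholders, not the contents of the actual one-pager.

```python
# Minimal sketch of traffic-light vendor scoring; the questions are
# hypothetical placeholders, not the ones from the actual checklist.
from enum import Enum

class Light(Enum):
    GREEN = 2
    AMBER = 1
    RED = 0

# A hypothetical subset of the checklist, filled in from the vendor's answers
answers = {
    "Documented training-data provenance?": Light.GREEN,
    "Human review for high-risk decisions?": Light.AMBER,
    "Incident-response SLA in the contract?": Light.RED,
}

score = sum(light.value for light in answers.values())
max_score = 2 * len(answers)
blockers = [q for q, light in answers.items() if light is Light.RED]

print(f"Score: {score}/{max_score}")
if blockers:
    print("Resolve before signing:", *blockers, sep="\n- ")
else:
    print("Passes the screen: proceed to a KPI-based pilot.")
```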
Need help applying it to your organisation?
THE BOOK OF THE WEEK
📚 At the “Responsible AI Book Club”, I recommend “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference”.
Why does this book stand out?
It punctures false promises without doom-mongering and helps you tell what works, what doesn’t, and what likely never will. Read my review.
INTERNATIONAL JOB OFFERS
I’ve put together a shortlist of top roles in Responsible AI, AI governance, ethics, compliance, and AI-systems auditing. See the complete global list:
🇮🇹 Experienced - AI Governance - Deloitte
🇪🇺 Policy Officer (AI for Public Good) - European Commission
🇦🇺 AI Lead (Department of Treasury & Finance) - Government of South Australia
🇺🇸 Global Fellowship Program (AI Research) - Microsoft Research
🇺🇸 Chiefs of Staff / Program Officer (AI Governance & Policy) - Open Philanthropy
🇬🇧 SMEs in AI Ethics and Technical Governance - Sony Europe
🇺🇸 AI Initiatives Fellow - The New York Times
🌎 Researchers (AI in Education) - UNESCO
That’s it for today!
Thanks for reading to the end and for being part of a community pushing for more transparent, human-centered AI. What topic would you like me to explore on Tuesday? Your reply helps me keep each edition useful.
If this helped you, forward it to your team and reply with feedback; it helps a lot.
With gratitude and tea in hand ☕, Karine Mom, passionate about human-centered Responsible AI - AI Governance Consultant & WiAIG Mexico Leader
All sources have been verified and dated. If you see any mistake, reply and I’ll fix it on the web version.



