Sensible AI #11: Your Responsible AI newsletter - Tuesday edition

Responsible AI news you can use: concise headlines, risk of the week and ready-to-use resources to act now.


👋🏻 Hello! We’re now 2,110+ readers using Sensible AI to move from headlines to decisions, in minutes.

Today you’ll see clear developments: China aiming to lead in AI; platforms hardening safeguards for minors; Europe accelerating standards while several countries lag on the AI Act; and the Perplexity–Getty deal reopening the data-and-licensing debate.

We’ll close with your “from the news to your action plan” section, the risk of the week and a key resource to protect yourself when using external AI vendors.

Let’s begin 👇

Spanish version here: IA con Sentido – Español

TOP NEWS

🚫 Character.AI bans under-18s
→ After five lawsuits, it will bar minors starting November 25 and implement age verification, while senators propose banning chatbots for minors.

🇪🇺 EU - AI Act delays
→ Only 2 of 17 countries have laws in place; most won’t be ready before 2026 and the Commission has ruled out slowing down.

🤝 Perplexity and Getty Images: global deal
→ Signed a multi-year agreement to use Getty’s visual catalog with source attributions in search.

🇪🇺 EU AI Act framework as a global standard
→ The European Commission’s AI law is consolidating as a global blueprint for responsible AI regulation, and companies worldwide are beginning to adapt to the new regulatory environment.

🇨🇳 China proposes a World AI Organization
→ Xi Jinping pushed for creation of the World Artificial Intelligence Cooperation Organization (WAICO) as a multilateral mechanism for AI governance.

The Future of the Content Economy

beehiiv started with newsletters. Now, they’re reimagining the entire content economy.

On November 13, beehiiv’s biggest updates ever are dropping at the Winter Release Event.

For the people shaping the next generation of content, community, and media, this is an event you won’t want to miss.

WHAT HAPPENED WORLDWIDE?

🇪🇺 EU - DSA for ChatGPT Search
→ Considering classifying it as a very large search engine with new transparency and risk-mitigation obligations.

🧭 SiliconANGLE - “smart governance” for AI
→ Experts propose simplifying and automating controls to sustain adoption with responsibility and transparency.

🌐 EU & Taiwan - AI regulation
→ The EU has been applying its law since 2024, and Taiwan is designing balanced legislation while the U.S. explores its own approaches.

🇨🇳 China - rules for data transfers
→ Requires certification and audits to export personal data on more than 100,000 people starting January 1, 2026.

🇨🇱 Chile - companies resist AI regulation
→ Tech firms are pushing back against the government’s bill as data-center investments and public debate grow.

🇨🇳 China - Cybersecurity Law amended
→ Takes effect January 1, 2026, adding AI support and oversight and raising maximum fines.

🌏 APEC - AI initiative 2026–2030
→ Adopts a voluntary framework for its 21 member economies.

🇭🇺 Hungary - national responsible-AI framework
→ Defines rules and ethical control mechanisms for AI development and use.

🇳🇴 Norway - human-centered AI conference
→ ICHCAI brings together academia and public officials to integrate human rights, creativity and sustainability into new applications.

🇪🇺 EU - AI in Science Summit
→ The AI in Science Summit 2025 explores how AI is transforming research, with governance and funding implications.

🇹🇯 Tajikistan - government AI browser
→ Launches Comet AI Browser with Perplexity and Epsilon3.ai to digitize administration, with US$117M investment and 140 trained employees.

🇸🇬 Singapore - consultation on agentic AI
→ CSA publishes a security addendum and opens consultation through December 31, 2025 for autonomous systems.

🇦🇺 Australia - explanations required from AI chatbots
→ eSafety asks four providers to detail how they protect minors under the Online Safety Act.

🇮🇳 India - preparing an ethical-AI framework
→ The Prime Minister announces a human-centered framework and a global summit in February 2026.

🇲🇼 Malawi - climate platform using AI
→ Tracks emissions and environmental projects to report Paris Agreement compliance to the UN.

🌎 Brazil - 1st LATAM country in the global HealthAI regulatory network
→ Adoption of international standards aims to accelerate safe AI for health and foster regional cooperation with Europe and the U.S.

AND AT ENTERPRISE LEVEL?

🧠 Microsoft - only humans feel emotions
→ Mustafa Suleyman stated that only humans experience emotions, dismissing artificial consciousness in AI.

🧭 Info-Tech - responsible-AI framework
→ Publishes a four-step roadmap to balance innovation and governance in companies.

🛡️ ISACA Europe - AI risks and governance
→ Over 600 experts analyze rising cyber risks; 90% use AI and only 30% have internal policies.

⚠️ BSI Group - AI governance gap
→ Study warns of investments without adequate processes and cautions companies about crises due to lack of controls.

📊 EY - survey on agentic AI
→ 84% of employees are enthusiastic about agentic AI, but 50%+ fear job loss; only 52% of companies clearly communicate their AI strategy.

ORGANIZATIONS & STANDARDS

🤖 CEN-CENELEC accelerates AI standards
→ Adopted measures to launch AI standards required by the AI Act before Q4 2026, creating a specialized drafting group.

FROM THE NEWS TO YOUR ACTION PLAN

I turn these headlines into concrete steps you can start today to strengthen your AI governance. Pick one action to drive impact in 30 days:

🟪 External participation & alliances → Engage in regulatory consultations and technical forums (CEN-CENELEC, IEEE, APEC) and sign MOUs with universities for safe innovation.
🟪 Measure social & environmental impact → Assess productivity, inclusion and sustainability; align ESG metrics with WEF and HealthAI frameworks.
🟪 Security & incident response → Monitor for abuses like deepfakes or misuse involving minors, with reporting protocols and rapid mitigation.
🟪 AI committee & executive governance → Create a cross-functional committee reporting to the board to oversee risk, compliance and responsible AI use.
🟪 BYOAI policy & internal training → Regulate employees’ use of external tools with mandatory training on bias, privacy and security.

RISK OF THE WEEK

🔴 Using an external AI provider without proper controls.

Rite Aid deployed third-party facial recognition without requiring solid metrics or testing. Result: thousands of false positives (including an 11-year-old girl) and an FTC order banning the technology for 5 years. The absence of a contractual model card (accuracy by demographic group, thresholds, human review, data retention) prevented timely detection of failures.

👉🏼 What I’d do in your shoes: 

🟩 Require vendor model cards and test reports (FPR/FNR by subgroups, usage limits, data & retention).
🟩 Include audit rights, a controlled pilot and a kill switch if metrics aren’t met.
🟩 Activate your incident-response playbook and mandatory human review for sensitive cases.
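The subgroup metrics in the first step are easy to verify yourself before signing off on a vendor pilot. Here is a minimal sketch, assuming you have per-record ground-truth labels, model decisions and a demographic attribute for your test set (the record fields `group`, `label` and `pred` are my own illustrative names, not any vendor's schema):

```python
from collections import defaultdict

def subgroup_rates(records):
    """Compute false-positive rate (FPR) and false-negative rate (FNR)
    per demographic subgroup.

    Each record is a dict with 'group' (demographic label),
    'label' (true outcome, 0/1) and 'pred' (model decision, 0/1).
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in records:
        c = counts[r["group"]]
        if r["label"] == 1:
            c["pos"] += 1
            if r["pred"] == 0:
                c["fn"] += 1  # missed a true positive
        else:
            c["neg"] += 1
            if r["pred"] == 1:
                c["fp"] += 1  # flagged a true negative
    return {
        g: {
            "FPR": c["fp"] / c["neg"] if c["neg"] else None,
            "FNR": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

# Tiny illustrative test set (hypothetical data)
records = [
    {"group": "A", "label": 0, "pred": 1},  # false positive
    {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 0},  # false negative
    {"group": "B", "label": 1, "pred": 1},
]
print(subgroup_rates(records))
```

Comparing FPR/FNR across subgroups is exactly what a contractual model card should make possible; if a vendor can’t populate this table, that is itself a finding.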

Need help?
If you’re not sure how to do this, no problem: we’ll create a plan together.

USEFUL RESOURCES

🔴 Model Card Toolkit

A model card is like your AI’s “package insert”: what it’s for (and not for), what data it was trained on, how it performs (including by subgroups), the risks and the safeguards in place.

👉🏼 How I use it: I embed the toolkit across the lifecycle and also require vendor model cards as part of the contract.
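To make the “package insert” idea concrete, here is a minimal sketch of a model card serialized as JSON. The field names and numbers are my own illustrative shorthand, not the official Model Card Toolkit schema; adapt them to whatever your contract template requires:

```python
import json

# Illustrative model card; field names and values are a hypothetical
# shorthand sketch, not the Model Card Toolkit's official schema.
model_card = {
    "model": "vendor-face-match-v2",  # hypothetical vendor model name
    "intended_use": "Access verification of consenting adults",
    "out_of_scope": [
        "Surveillance of minors",
        "Sole basis for denial of service",
    ],
    "training_data": "Vendor-supplied; provenance documented in contract annex",
    "metrics_by_subgroup": {
        "overall": {"FPR": 0.01, "FNR": 0.03},
        "women_18_30": {"FPR": 0.02, "FNR": 0.04},
    },
    "decision_threshold": 0.85,
    "human_review_required": True,
    "data_retention_days": 30,
}

print(json.dumps(model_card, indent=2))
```

A document like this, attached to the contract, is what would have surfaced the Rite Aid-style failure modes (accuracy gaps by demographic group, missing human review) before deployment rather than after.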

RESPONSIBLE HUMOR

That’s all for today!

What topic would you like me to dig into on Friday? Your reply guides the next edition.

If this helps your team, forward it and tell me what worked: I improve the newsletter with your feedback 💌

With gratitude and tea in hand ☕,

Karine

Mom, passionate about human-centered Responsible AI - AI Governance Consultant & WiAIG Mexico Leader
LinkedIn - [email protected] - www.karine.ai

All sources have been verified and dated. If you see any mistake, reply and I’ll fix it on the web version. - ENGLISH