World Summit AI | Blog

Juliet MacDowell on Ethics, Impact, and the Future of Social AI

Written by World Summit AI | Oct 3, 2025 3:51:23 PM

What if AI could defend truth, dignity, and human connection at scale? Ahead of World Summit AI 2025 (October 08–09, Amsterdam), Juliet MacDowell, founder of Mission AI, shares how her work with PASSERELLE and CLEAR is turning AI into a tool for social impact—and issues a warning about the consequences if we fail to code AI for people rather than profit.

What’s your most compelling dream scenario for AI — a breakthrough that would fundamentally improve life on a global scale?

My most compelling dream: AI as universal public infrastructure that makes quality healthcare instantly accessible to anyone, anywhere through diagnostics, translation, and care coordination. Simultaneously, it eliminates repetitive work and redistributes productivity gains, enabling a three-day work week focused on creativity, care, and community. This isn't utopian; it's a political choice to code AI for people rather than profit.

What’s a recent project or breakthrough you're especially proud of — and what kind of impact do you hope it will have in the real world?

I'm especially proud of PASSERELLE. Its name means "bridge," which captures exactly what it does: connecting French administrative services with the migrants who depend on them. It's an AI system that augments human interpreters rather than replacing them, ensuring confidentiality, empathy, and cultural nuance in healthcare, legal processes, and social services. The impact is both personal and systemic. For individuals, it means fewer misunderstandings and more dignity in vulnerable moments. For institutions, it demonstrates how AI can strengthen human connection rather than erode it.
 
What’s a use case for AI that you think more people should know about — something positive that’s flying under the radar?

AI can do more than debunk lies. It can defend truth. Our CLEAR prototype shows how AI can track extremist rhetoric in real time and counter it with fact-based, emotionally resonant narratives. But the deeper opportunity is addressing the root causes: fear, resentment, isolation. If we use AI to detect those signals and respond with dignity and connection, we reduce the appeal of hate before it metastasizes. This is about building narrative infrastructure for democracy, not leaving the information space to those who weaponize it.

If you had to choose one nightmare scenario that keeps you up at night — whether realistic or speculative — what would it be, and what warning signs should we be watching for today?

That we ignore AI's environmental toll until it's too late. Training and running large models already consume staggering amounts of energy and water, with warning signs everywhere: ballooning data center demands, soaring emissions, and hidden costs on vulnerable communities. The nightmare is AI becoming indispensable while accelerating climate collapse. We need radical alternatives now: frugal architectures, decentralized micro-models, renewable-first compute, and nature-inspired designs. AI must become an engine of sustainability, not a driver of collapse.

Who or what do you think has the power to prevent your nightmare scenario above?

It cannot be left to tech giants. Big AI labs and cloud providers won't self-regulate when their business model depends on scale. Governments must impose environmental standards with the same seriousness as they do for transport and energy. Civil society must demand transparency and end data center greenwashing. As builders, we must reject "bigger is better" and design AI that is frugal, ethical, and renewable by default. Responsibility is collective, but accountability starts with those profiting most.

What are we not talking about enough in the AI conversation today — something you believe could be hugely important five years from now?

We obsess over models and benchmarks while ignoring narrative infrastructure: how AI shapes stories, beliefs, and culture. In five years, the greatest risks won't be technical failures but narrative control. Without safeguards and public alternatives, the same engines generating misinformation will dominate civic life. This is where the real power struggle will unfold.

If you look ahead 10 years, what do you think will be the biggest change in our daily lives?

AI will become invisible infrastructure shaping healthcare, justice, and education. The critical question isn't technical but political: will institutions use AI's gains for universal care, shorter workweeks, and stronger civic life, or will they concentrate wealth and deepen inequality? Our biggest challenge will be ensuring AI serves human flourishing rather than division.

Do you think AGI is near? When will we have AGI?

I came to issue a call to action: we must code for the future we want to live in. Mission AI demonstrates this with PASSERELLE defending dignity, fundraising intelligence giving NGOs more time for impact, and CLEAR defending truth at scale. My message is urgent: AI can strip away trust or reinforce solidarity. The window to choose is closing fast.
World Summit AI Global Summit Series

World Summit AI
08 - 09 October 2025
Taets Art & Event Park, Amsterdam
worldsummit.ai
 
World Summit AI Qatar
09 – 10 December 2025
Doha Exhibition & Convention Center
qatar.worldsummit.ai