AI CHALLENGED

Mini-Conference: AI AND CULTURE CHALLENGED

Start Date: 18. March 2026

Start Time: 15:00 CET

End Time: 18:00 CET

The global conversation around AI is no longer only about technology; it is about how our institutions, governance, and collective sense-making are being reshaped. This mini-conference of the Year of AI Challenged invites researchers, educators, civic leaders, designers, and community stewards into a high-signal inquiry: how culture and power shape "intelligence," and what ethical co-creation looks like when AI becomes a participant in the meaning-making field.

Key discussion pillars (the focus of the mini-conference)

  1. Content–context fracture: culture as “built into the stack”

How value hierarchies enter AI systems through data inclusion/exclusion, benchmarks, alignment/reward models, refusal styles, and "helpfulness" norms, and why many "bias" incidents are better understood as context failures.  

  2. Cultural translation and downstreaming of dominant AI contexts

What happens when content authored in one worldview is interpreted through default contexts shaped elsewhere, especially for smaller nations and minority cultures. Translation is treated as value-translation between worlds of meaning, not just language conversion.  

  3. From "should we use AI?" to "how do we use it without colonising or outsourcing responsibility?"

Prototyping "interaction hygiene": spiral interrogation, triangulation across models, explicit role allocation (human / AI / jointly produced), provenance tracking, and epistemic humility, so that responsibility stays with humans and communities.  

Who this is for

Researchers, educators, civic leaders, designers, and community stewards working at the intersection of technology and meaning.  

Expected outcomes

  • A clear map of where culture enters the AI pipeline (from data to interface norms and deployment).  
  • A shared diagnostic lens for cultural mismatch and extractive risks around protected/sacred/place-based knowledge.  
  • Draft protocols for ethical human–AI co-creation and early governance requirements for community-controlled, culturally protected AI approaches. 

Speakers:

Robbie Stamp: Chief Executive of Bioss International, Robbie Stamp works at the intersection of AI governance, institutional accountability, and cultural imagination. Contributing to UK and ISO AI governance work, he helps translate values and ethics into practical questions for boards and decision-makers. His background in storytelling and participatory thought-experiments, including work connected to The Hitchhiker's Guide to the Galaxy, Happened Here, and the AI Goosebumps Club, brings a rare ability to connect policy, culture, and lived human perception.

Dr. Ekaterina Matveeva (Ph.D. h.c.): Dr. Ekaterina Matveeva is a linguist, AI-learning designer, Founder of Amolingua and Lingo+, creator of the “Language Alter Ego” theory, and an advisory committee member of Language Connects Foundation. Her work on multilingualism and multiculturalism makes visible how meaning shifts across languages, value systems, and cultural norms. She brings a vital perspective on how AI mediates not only words, but also epistemic attitudes, politeness norms, refusals, and implicit hierarchies of meaning.

Dr. Alexander Laszlo: Professor of Sustainability Leadership at the School of Leadership Studies, Fielding Graduate University (USA), Dr. Alexander Laszlo is a leading thinker in evolutionary systems design and learning ecosystems. His work offers a rigorous framework for understanding how intelligence and knowledge emerge through relational systems, feedback, and learning loops. He brings a powerful perspective on designing pluriversal learning ecosystems and governance architectures oriented toward thrivability.

Sattie Persaud: Founder of the World Heritage Cultural Center (WHCC), Sattie Persaud is a leader in cultural heritage, inclusion, and intercultural understanding through the performing, visual, and culinary arts. She is also an active voice at the intersection of cultural heritage and AI, including through her Forbes Nonprofit Council article, AI And The Future Of Cultural Heritage: A Human-Centered Imperative. Her work foregrounds cultural integrity, ethical stewardship, and human-centred governance in an era when AI increasingly mediates identity, memory, and public discourse.

Paul van Schaik: Paul van Schaik is a founder of integralMENTORS and the pioneering force behind the Integral UrbanHub series, through which he has spent decades exploring the intersection of human development, systemic design, and urban thrivability. His work adds methodological rigor to questions of human–AI collaboration, helping translate cultural and ethical concerns into structured protocols, inquiry methods, and auditability. He contributes a disciplined approach to designing more reflective, accountable, and developmentally aware forms of interaction.

Co-organisers: Ecocivilisation and Living Cities Action Research Ecosystem (LCARE)

Facilitated by: Dr. Marina Demchenko (LCARE, Research Lead) and Violeta Bulc (Ecocivilisation, Founder).
