Morrama founder Jo Barnard was featured in the Indian Express weighing in on one of the most pressing questions in consumer tech — as AI companions for children flood the market across the US, Europe, and India, who is responsible for designing them responsibly?

Read the full article: Indian Express →

A Global Market. A Global Question.

Jo Barnard is the founder and Creative Director of Morrama, one of the UK's leading independent product design consultancies, and co-founder of Kibu, a circular headphone brand built specifically for children. With a decade of experience taking consumer tech products from brief to manufacture, and a track record of helping brands raise over $20M in investment, she sits at a rare intersection: someone who understands both the commercial pressures that shape product decisions and the human consequences of getting those decisions wrong. When the Indian Express sought expert commentary on the rise of AI-powered toys, they turned to Jo not as a regulator or a researcher, but as a practitioner who has navigated these choices in the real world and built children's products in the process.

The timing is significant. The global AI toy market is one of the fastest-growing segments in consumer electronics, and the debate around child safety is playing out differently across three of the world's most important markets.

United States

North America leads globally, accounting for over 34% of AI toy market revenue. The US smart toy segment reached $1.6 billion in 2024 and is forecast to grow at 14.7% CAGR through 2030. Mattel has announced an OpenAI collaboration to embed AI into Barbie and Uno; LEGO is exploring AI-assisted storytelling. Yet regulation has not kept pace. The primary framework governing data collection from children — COPPA — was written for the internet age, not the AI age, and consumer advocates have flagged cases of AI toys directing children toward dangerous content. Calls for updated federal regulation are growing, but enforceable guardrails remain limited.

Europe

The EU has moved furthest and fastest. The AI Act, adopted in 2024, explicitly bans voice-activated toys that encourage dangerous behaviour and classifies AI in educational settings as high-risk. A new Toy Safety Regulation published in December 2025 requires internet-connected toys with interactive social features to undergo third-party conformity assessment and carry a Digital Product Passport. Despite this, research shows a significant compliance gap — products on EU shelves still fall short of the protections the law demands. Legislation is only as effective as the design intent behind the products it governs.

India

India is the most significant emerging market in the conversation — and the least protected. The country's tech toy market was valued at $1.6 billion in 2024 and is forecast to reach $3.6 billion by 2030. India has the world's largest population of children under 14 — over 300 million — and AI learning companions are reaching them faster than regulation can follow. India's National Education Policy 2020 mandates play-based learning, accelerating adoption, while quick commerce platforms now deliver toys in under ten minutes. Specific AI regulation for toys is almost entirely absent. It is exactly this context — a country at the cusp of a major consumer AI wave — that made Jo's perspective so relevant to the Indian Express readership.

Across all three markets, the same gap persists: the products move faster than the rules. Jo's argument is that this gap cannot be closed by legislation alone. It starts at the design brief.

“The question isn’t whether AI can be made engaging for children — it’s whether we’re willing to slow down and design it with their safety genuinely at the centre.”

Jo Barnard, Founder, Morrama · Indian Express

Key Takeaways from the Article

01 — Guardrails Are Failing

Many AI toys on the market ship with insufficient moderation layers, meaning children can be exposed to inappropriate, inaccurate, or emotionally manipulative outputs from AI systems marketed as companions.

02 — Design Accountability Starts at the Brief

Jo argues that responsibility for safe AI experiences lies with product designers and brand teams — not just regulators or parents. Ethics must be embedded from the earliest stages of the design process.

03 — Children Form Genuine Emotional Bonds with AI

Research shows infants as young as six months respond to responsive technology in ways that mirror human connection — raising serious questions about the long-term developmental impact of AI companions designed without appropriate safeguards.

04 — Regulation Is Not Keeping Pace with Innovation

Legislators are beginning to act — the EU's AI Act and new Toy Safety Regulation are the most significant steps globally — but the gap between what is technologically possible and what is demonstrably safe remains wide in every major market.

Work With Morrama on Children's Technology

Morrama has deep experience designing children's technology — from circular hardware like Kibu to mindful AI tools for kids. We bring ethical design thinking into the brief from day one, helping brands build products that are safe, responsible, and built to last.

Start a Conversation → https://morrama.com/contact