
AI for kids, what we learned from CES 2026

As AI starts to creep into everyday kids' products, who is doing what, and what protections are being put in place?
Date
January 16, 2026

It’s January, so everyone in tech has had their eyes on CES. With robots that can now climb stairs and fold clothes, it seems as if we’ll all have an AI companion and a handy robot housemaid by the end of the year. 

However, there is a big question around safety, especially when it comes to kids. As AI starts to creep into everyday products, who is doing what, and what protections are being put in place?

With many of us being parents of young children and having researched this topic heavily during our Mindful AI project back in 2024, it’s been a particular focus for us this CES. 

Here are a few things that really stood out.

Kids + safety: when AI doesn’t understand risk

Debut AI toys included Sweekar's AI pocket pet, dubbed the "world's first emotionally intelligent pocket-sized pet". Designed by Chinese startup Takway, imagine a smarter, and slightly cuter, Furby without the fur. Another cute AI robot was revealed by Ludens AI, this one capable of following you around on its wheels. Both of these AI companions (toys? pets?) are supposed to evolve as they learn about their owners.

What is still not clear is how these devices can be made safe for kids, who can be both innocently curious and actively deviant. A news highlight of FoloToys' AI teddy bear, released in November last year, shows it encouraging behaviour it simply didn't recognise as dangerous, including how to light a match. It's a sharp reminder that intelligence without context, judgement, or real-world awareness can tip from helpful to harmful very quickly. And despite assurances that these responses shouldn't be possible, it's becoming increasingly difficult to trust companies when even their own programmers admit they don't fully understand how their models work.

Opportunity:
Brands in this space have a huge amount of work to do to prove that they can guarantee safety. But this presents an opportunity for those developing products where safety is designed in from the start, not bolted on later via parental controls. Physical cues, constrained behaviours, and calm guardrails feel essential.

This is closely tied to how we've been thinking about Mindful AI for kids at Morrama. Intelligence alone isn't enough. Behaviour needs to be shaped, limited, and grounded in the physical world. Simplifying AI capabilities, such as restricting output to colouring pages or dreamscapes, was one of the ways we proposed to mitigate the risks of AI.

AI + play: creativity, with boundaries

LEGO's smart play bricks were one of the more joyful products we saw launched last week. Whilst not intelligent per se, they add a digital layer to physical play without replacing hands-on creativity or imagination. The jury is still out within the Morrama team on whether these will really take off or whether a decade of hard work has gone into a fleeting fad. However, whether the restraint here is deliberate or simply the limitation of what can be squeezed into a 2x4 LEGO brick, this new level of interactivity unlocks possibilities in LEGO play without performing creativity on the child's behalf.

A big concern is that AI will slowly erode the imagination of children who grow up able to rely on it to do the legwork. The benefits of boredom in kids are well documented, but AI brings endless possibilities for conversation and interaction. This was a question we raised internally with our Mindful AI tools concepts in 2024, and it is why the Create Printer will print out a picture based on a verbal prompt from a child, but leave them to colour it in.

Opportunity:

Smart toys and objects that are restrained, utilising AI to extend play rather than replace it. Systems that respond to physical interaction, spark ideas, then get out of the way.

Rethinking how kids chat and stay safe

From play to communication, there has been a lot of conversation around kids' access to social media, owning smartphones and being constantly connected online over the past few years. With many urging the UK to follow Australia in curbing access to social media for children under 16, and many schools both here and in the U.S. putting phone bans in place, it’s no surprise that we are seeing a move to rethinking how kids connect digitally. 

CES saw the launch of Tin Can and Pinwheel, two kids-first landlines that let kids call only an approved list of contacts. And no sneaking off to the bathroom for secret conversations: these are old-school corded phones, so kids are stuck within a metre of the hallway table. Screenless and incredibly simplified, the hardware carries as much responsibility here as the software, bringing with it opportunities for playful industrial design and interactions.

Opportunity:

Unlike the surface-level group connection that social media and messaging platforms such as WhatsApp offer, there is a growing opportunity for products that help children foster deeper, more meaningful, one-on-one relationships with their friends and family, and encourage them to develop conversational skills.

This is also the intention of the Connect Flower, a product intended to aid connection between parent and child through prompted questions and discussion. An example of using AI in a mindful, controlled and constrained way. 

The bigger pattern

As we see leaps forward in AI-enabled technology on one hand, we are going to see an equal and opposite pull back towards lo-fi 'dumb' tech. Much of this is driven by growing concern about where AI technology is heading and how much control is in the hands of the big tech companies, or in some cases, just a single person. Elon Musk's decision to remove the Grok safeguards that prevented generative AI from being used to produce sexually explicit images of children has been met with outrage, both at the decision itself and at the fact that it was allowed to happen at all. With AI technology developing faster than any internationally agreed rules and regulations can be put in place, many parents in particular feel that it's safer to avoid it altogether.

This pull in two directions opens a chasmic gap in the kids' AI-enhanced tech market that, whilst a challenge to navigate, presents a huge opportunity for companies who can communicate their stance on AI safety with complete transparency and conviction. Limitations become the driving feature, rather than the idea of 'endless possibilities', and industrial design becomes vital in guiding a positive, playful and safe experience.

Curious what others spotted at CES 2026, especially in kids' tech, play, or communication? Email us at info@morrama.com or join the conversation on LinkedIn.

Author

Jo Barnard & Andy Trewin Hutt