At-home AI health care: accountability, privacy and the environmental impact.

TL;DR
• Regulatory: the line between medical device and lifestyle product is collapsing and so is accountability.
• Privacy: 99.98% of people can be re-identified from ‘anonymised’ data using just 15 attributes.
• Environmental: decentralised testing means hundreds of single-user devices replacing one shared clinical machine - with no second-hand market and no end-of-life plan.
From robots that will look after us when we need later-life care to at-home monitoring solutions that help keep us alive and well for longer, there has been a flood of AI-enabled wellbeing and medical devices over the past few years.
If you live in Australia and get annual skin cancer screenings, it isn't hard to imagine that before long you'll have a device at home that you can point at a suspicious mole and that will tell you whether it might be malignant. For those with female health issues or fertility challenges, real-time tracking via smart menstrual products isn't far off. And with an ageing population and a strained NHS, it certainly feels like the more that can be done from home the better. Particularly when it's convenient and often less stressful for those intimidated by clinical settings.
With the rapid developments of AI, the technology is here, but there are some questions that have yet to be answered.
The blurring line between medical device and lifestyle product
Whilst there has historically been a clearly defined line between a medical and a wellbeing device, that line is getting blurred. The Apple Watch can now detect irregular heart rhythms (AFib), take single-lead ECGs, and measure blood oxygen. You don't have to look far to find a 'My Apple Watch Saved My Life' headline with a story of how a person's watch detected and flagged a cardiac abnormality that allowed them to get treatment before it turned serious.
For now, an Apple Watch is not purchased as an ECG. It's a lifestyle product that, due to constant monitoring, is very good at spotting atypical symptoms. As these devices gain a reputation for doing what a dedicated medical instrument should be used for, there will be people using them, and trusting them, to perform diagnostic functions they are not approved for. Put AI in the mix and it becomes even more complex. On the one hand you have cautious AI models issuing false positives and causing worried but otherwise healthy people to take up NHS time. In the Apple Heart Study, with over 400,000 participants, 0.5% received irregular pulse notifications - and of those, only 34% had AFib confirmed on follow-up. On the other hand, you have the question of liability when the AI gets it wrong. For these reasons, we will continue to see most devices sold as 'wellness' products, with companies stating clearly that they are not diagnostic tools, even if they do work in remarkably similar ways.
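The back-of-envelope arithmetic behind those Apple Heart Study figures is worth spelling out, because it shows how modest the absolute numbers are once the percentages are unpacked:

```python
# Arithmetic from the Apple Heart Study figures cited above:
# ~400,000 participants, 0.5% notified, 34% of those confirmed with AFib.
participants = 400_000
notified = int(participants * 0.005)   # received an irregular pulse notification
confirmed = int(notified * 0.34)       # AFib confirmed on follow-up
false_alarms = notified - confirmed    # notified but never confirmed

print(notified)      # 2000 participants notified
print(confirmed)     # 680 with confirmed AFib
print(false_alarms)  # 1320 worried-but-unconfirmed follow-ups
```

In other words, roughly two in three notifications in that study did not lead to a confirmed diagnosis - exactly the 'worried well' load on clinical time described above.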

In 2021 we speculated on three future products designed to support mothers and their unborn babies. One product was a kick counter that tracks a baby's movements. The project was designed to raise the question of when data becomes a cause of anxiety rather than a relief from it, and what happens when the data isn't 100% reliable.
What your wearable data really reveals
Then there is the question of data. Health data reveals more than people realise - heart rate, sleep, cycle and weight together can be used to infer pregnancy, mental illness, substance use, neurodivergence, and sexuality. 'Anonymised' health data becomes easily re-identifiable when combined with other datasets, first demonstrated in 2000 by Latanya Sweeney, who showed that 87% of the US population could be identified using only their gender, ZIP code, and birthdate. A 2019 study published in Nature Communications found the problem has only got worse: 99.98% of people can be correctly re-identified in any dataset using just 15 demographic attributes. As well as health data being used to train AI models, it's also being sold to insurance providers and advertisers, often legally but buried in lengthy terms of service.
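The mechanics behind Sweeney-style re-identification are simple enough to show in a few lines. This is a minimal sketch on an invented toy dataset (every record below is made up for illustration): count how many records share each combination of quasi-identifiers; any record with a unique combination is re-identifiable by anyone holding a second dataset with names attached.

```python
from collections import Counter

# Toy 'anonymised' records: no names, but quasi-identifiers remain.
# All values here are invented purely for illustration.
records = [
    {"gender": "F", "zip": "02138", "dob": "1985-07-12"},
    {"gender": "M", "zip": "02138", "dob": "1990-01-03"},
    {"gender": "F", "zip": "02139", "dob": "1985-07-12"},
    {"gender": "M", "zip": "02138", "dob": "1990-01-03"},  # shares its combination
]

def unique_fraction(records, keys):
    """Fraction of records whose quasi-identifier combination is unique."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    unique = sum(1 for r in records if combos[tuple(r[k] for k in keys)] == 1)
    return unique / len(records)

print(unique_fraction(records, ["gender"]))                # 0.0 - gender alone singles out no one
print(unique_fraction(records, ["gender", "zip", "dob"]))  # 0.5 - half the records are unique
```

Each extra attribute shrinks the crowd a record can hide in; with 15 attributes, as the Nature Communications study found, almost no one has a crowd left.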
Most people would likely share their data if they were told there was a chance it could save their life, but trust will be built on transparency, something tech brands have historically been bad at.
The sustainability problem no one’s talking about
An underdiscussed consequence of decentralised sensing and testing is that hundreds of products replace a single device in a clinic. Add on top of that the intensive computing of the AI models themselves. And whilst these devices are used on multiple people within the clinical setting, a second-hand market for an at-home cervical screening or saliva testing device feels unlikely. Extending the lifespan of these types of products, or offering them under a service model, are considerations that belong at the very beginning of the design process. A service model also has the potential to address the inevitable two-tier health system created by solutions priced at £200–£500 per device, plus an ongoing subscription.
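A crude unit-count comparison makes the design stakes concrete. Every number here is a made-up assumption, not sourced data; the point is only the shape of the argument: one-device-per-home multiplies manufactured units by the patient count, while a hypothetical service model with refurbishment cycles divides it back down.

```python
import math

# Every figure below is a hypothetical assumption for illustration only.
patients = 1_000     # people needing a given screening in some catchment area

# Clinic model: one shared machine serves everyone.
clinic_units = 1

# One-device-per-home model: each patient owns a unit, no second-hand market.
home_units = patients

# Hypothetical service model: devices are returned, refurbished, redeployed.
redeployments_per_unit = 5   # assumed refurb cycles per device
service_units = math.ceil(patients / redeployments_per_unit)

print(clinic_units)   # 1
print(home_units)     # 1000
print(service_units)  # 200
```

Even under generous refurbishment assumptions, decentralised testing still manufactures orders of magnitude more hardware than the shared clinical machine - which is why lifecycle has to be a first-order design decision rather than an afterthought.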
The companies that win this next phase won’t be the ones with the slickest AI or the most sensors. They’ll be the ones that earn trust — through transparency about data, regulatory honesty about what their device actually does, and design choices that take the full lifecycle seriously, not just the unboxing. That’s a design problem before it’s a tech one. And it’s the work we’re most interested in doing right now.


