The United Nations agency that promotes international public health advised AI practitioners to pay closer attention to elders.
What’s new: A report by the World Health Organization (WHO) warns that elders may not receive the full benefit of AI in healthcare. It highlights sources of bias in such systems and offers recommendations for building them.
What it says: The report calls attention to four primary issues: datasets, technological literacy, diversity of development teams, and virtual care.
- The datasets used to train healthcare AI systems frequently underrepresent older people and may not account for broad variations in elders' health and lifestyles.
- Older people are less eager than younger people to adopt new technology, which may limit their access to AI-assisted care. Their reluctance may also lead developers to view them as less relevant to the market and focus on serving younger people.
- Development teams dominated by younger people may build products that are biased against elders or don’t address their needs, particularly those of elders in socially marginalized groups.
- AI applications that automate care or monitor a patient’s health remotely may reduce contact between caregivers and patients. This may deprive elders of human contact and deepen the disconnect between young developers and older patients.
Recommendations: The authors recommend that datasets be audited for age bias, that development teams include a range of ages, that elders be involved in product design, that elders have the right to consent to and contest the use of AI in their own care, and that AI products undergo rigorous ethics reviews.
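The report stops short of prescribing an auditing method, but one simple first check is to compare a dataset's age distribution against that of the population a system is meant to serve. Below is a minimal Python sketch under that assumption; the patients.csv file, the age column, the age brackets, and the reference shares are all hypothetical stand-ins, not figures from the WHO report.

```python
# Minimal sketch of an age-representation audit for a training dataset.
# Assumptions (not from the WHO report): the data lives in "patients.csv"
# with an "age" column, and REFERENCE_SHARES stands in for the age
# distribution of the population the system is meant to serve.
import pandas as pd

# Hypothetical share of each age bracket in the served population.
REFERENCE_SHARES = {
    "0-17": 0.20,
    "18-39": 0.28,
    "40-64": 0.32,
    "65+": 0.20,
}

def audit_age_representation(df: pd.DataFrame, tolerance: float = 0.5) -> pd.DataFrame:
    """Flag age brackets whose share of the dataset falls below
    `tolerance` times their share of the reference population."""
    bins = [0, 18, 40, 65, 200]          # bracket edges; 200 caps the open-ended "65+"
    labels = list(REFERENCE_SHARES)
    brackets = pd.cut(df["age"], bins=bins, labels=labels, right=False)
    shares = brackets.value_counts(normalize=True)
    report = pd.DataFrame(
        {
            "dataset_share": [float(shares.get(label, 0.0)) for label in labels],
            "reference_share": [REFERENCE_SHARES[label] for label in labels],
        },
        index=labels,
    )
    report["underrepresented"] = (
        report["dataset_share"] < tolerance * report["reference_share"]
    )
    return report

if __name__ == "__main__":
    patients = pd.read_csv("patients.csv")  # hypothetical file
    print(audit_age_representation(patients))
```

A real audit would go further, for instance checking whether label quality and outcome statistics also hold up within each bracket, but even a distribution check like this would surface the underrepresentation the report warns about.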
Behind the news: Relatively little research has examined age bias in AI systems. Nevertheless, elders themselves have complained about some existing systems.
- Users criticized QuietCare, a bracelet that uses AI-enhanced motion detection to recognize when a user has fallen or needs emergency medical help. They claimed that the system didn’t suit their routines and generated false alarms.
- Researchers found a dramatic gap between elders and their adult children in acceptance of an in-home computer vision system designed to detect falls. The children were far more enthusiastic than their parents and tended to underestimate their parents' competence.
Why it matters: Learning algorithms have a well-documented history of absorbing biases from their training data with respect to ethnicity, gender, sexual orientation, and religion. It stands to reason that they would also absorb biases against older people, a population that, like the very young, is at greater risk of illness and injury and generally needs more care than the population at large.
We’re thinking: Let’s do what we can to ensure that the benefits of AI are shared fairly among all people.