Healthcare AI often ignores the existence of biological sex and socio-cultural gender and their effects on individual health and disease states. Thus, the ‘thinking machines’ not only reproduce discriminatory biases and prejudices but also produce suboptimal or even outright wrong results. German experts discussed the gender problem in healthcare AI at the Econ Forum by TK at virtual.MEDICA 2020.
This was a women-only panel, underlining that it is mostly women who take an interest in the gender issue. Moreover, the audience reflected the panel’s composition, as Dr Christiane Gross, Chair of the Medical Committee on Telematics North Rhine Westphalia and President of the female physicians’ association Deutscher Ärztinnenbund, pointed out: ‘In training sessions on gender issues I often mainly see women. But the moment you call the session “men’s healthcare” the audience is split evenly: 50 percent women and 50 percent men.’ The word ‘gender’ is still widely considered a women’s niche topic, although gender-specific differences concern all humans. Men, however, have fewer reasons to feel excluded in the medical sciences because they are over-represented at all levels. After all, the majority of participants in medical research projects are men; thus the training data fed into AI systems is either focused on men or neglects the gender dimension altogether.
When machines think like men
‘The gender bias in medicine has been known for years but it took the research community quite a while to catch on,’ said Brigitte Strahwald of the Chair for Public Health and Health Services Research at Ludwig Maximilian University, Munich. During the development of AI technologies, legacy data was used that carries the gender-related distortions from the analogue into the digital world. ‘This is due, among other reasons, to the gender gap in AI research and development,’ Strahwald pointed out. Algorithms are not only the product of the data sets on which they are based, they are also the product of the humans who program them – the majority of whom are men. Since most healthcare AI applications do not differentiate between sexes and genders, it can be assumed that the IT experts in charge either consider this aspect irrelevant or are not even aware of it.
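The mechanism described above can be sketched in a few lines of code. The following toy example uses entirely hypothetical numbers: it assumes a symptom-score cutoff that differs between men and women, and a training set that – like the legacy data described here – is 90 percent male. A model that learns a single, sex-blind threshold then fits the male majority well and systematically misclassifies female patients.

```python
# Toy sketch (hypothetical numbers): a sex-blind classifier trained on
# male-dominated data performs worse for female patients.

def patients(n, cutoff):
    """Generate (score, has_disease) records where disease is present
    iff the symptom score reaches the sex-specific cutoff."""
    return [(score, score >= cutoff) for score in range(n)]

# Assumed (purely illustrative) diagnostic cutoffs: 60 for men, 40 for women.
men = patients(100, 60)
women = patients(100, 40)

# Training data mirrors the skew described in the article: 90% male records.
train = men * 9 + women

def accuracy(threshold, data):
    """Fraction of records where 'score >= threshold' matches the label."""
    return sum((score >= threshold) == label for score, label in data) / len(data)

# The model learns the single threshold that maximizes training accuracy.
best = max(range(101), key=lambda t: accuracy(t, train))

print(best)                    # settles near the male cutoff (60)
print(accuracy(best, men))     # 1.0 – perfect for the majority group
print(accuracy(best, women))   # 0.8 – women near their cutoff are missed
```

The learned threshold lands on the male cutoff, so every man is classified correctly while women whose scores fall between the two cutoffs are all missed – a minimal illustration of how a gender-blind model encodes the bias of its training data.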
Gendering as a success factor
Neglecting the gender dimension is not only a matter of discrimination against women – it undermines the usefulness of AI. Take the digital health applications that are being reimbursed by health insurers under the German Digital Care Act (DVG), which came into effect last October. Barbara Steffens, former health secretary of North Rhine Westphalia and current head of regional policy support at a large health insurer, Techniker Krankenkasse, points out that the ‘non-differentiating apps’ help neither men nor women, but only generate costs: ‘In the first certified apps we are aware of five health areas in which we know men and women are affected differently, not to mention different perceptions and courses of disease.’ More men than women, for example, suffer from chronic tinnitus, but women experience the disease more intensely. Thus, it can be expected that the success of a tinnitus app also hinges on whether its contents are prepared and communicated in a gender-sensitive manner. Currently, this is not the case.
The majority of top-rated fertility apps collect and even share intimate information without the users’ knowledge or permission, a collaborative study by Newcastle and Umeå Universities has found. The researchers are now calling for platforms to tighten the categorisation of these apps to protect women from having intimate and deeply personal information exploited and sold.
Gender was, and is, the big unknown, not only in digital healthcare but also in teaching, training and research. There appears to be an easy way to strengthen awareness of gendered healthcare, said Dr Christiane Gross: ‘If we manage to get men on board we will have reached a major milestone.’