Moral psychological exploration of the asymmetry effect in AI-assisted euthanasia decisions
Language: English; Country: Netherlands; Medium: print-electronic
Document type: journal articles, reviews
PubMed: 40367579
DOI: 10.1016/j.cognition.2025.106177
PII: S0010-0277(25)00117-9
- Keywords
- AI ethics, Moral judgment, Moral psychology of AI, Moral psychology of robotics, Passive euthanasia
- MeSH
- Adult MeSH
- Euthanasia * psychology MeSH
- Middle Aged MeSH
- Humans MeSH
- Attitude MeSH
- Young Adult MeSH
- Morals * MeSH
- Decision Making * MeSH
- Artificial Intelligence * MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Young Adult MeSH
- Male MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Review MeSH
A recurring discrepancy in attitudes toward decisions made by human versus artificial agents, termed the Human-Robot moral judgment asymmetry, has been documented in the moral psychology of AI. Across a wide range of contexts, AI agents face greater moral scrutiny than humans for the same actions and decisions. In eight experiments (total N = 5837), we investigated whether the asymmetry effect arises in end-of-life care contexts and explored the mechanisms underlying it. Our studies documented reduced approval of an AI doctor's decision to withdraw life support relative to a human doctor's (Studies 1a and 1b). This effect persisted regardless of whether the AI assumed a recommender role or made the final medical decision (Studies 2a, 2b, and 3), but, importantly, it disappeared under two conditions: when doctors maintained rather than withdrew life support (Studies 1a, 1b, and 3), and when they carried out active euthanasia (e.g., administering a lethal injection or removing a respirator at the patient's request) rather than passive euthanasia (Study 4). These findings highlight two contextual factors, the level of automation and the patient's autonomy, that influence the presence of the asymmetry effect, neither of which is predicted by existing theories. Finally, we found that the asymmetry effect was partly explained by perceptions of AI incompetence (Study 5) and limited explainability (Study 6). As the role of AI in medicine continues to expand, our findings help outline the conditions under which stakeholders disfavor AI relative to human doctors in clinical settings.
Department of Philosophy 1, Faculty of Psychology, University of Granada, Granada, Spain
Department of Psychology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
Department of Social Research, Faculty of Social Sciences, University of Turku, Turku, Finland
Health and Well-Being Promotion Unit, Finnish Institute for Health and Welfare, Helsinki, Finland
Institute of Philosophy, Czech Academy of Sciences, Prague, Czechia
School of Psychology, Faculty of Health and Medicine, University of Leeds, United Kingdom
The Karel Čapek Center for Values in Science and Technology, Czech Republic
Citation data provided by Crossref.org