Abstract
Artificial intelligence, an instrument potentially useful in assisting medicine with difficult ethical decisions, is advancing in its imitation of human thought and behavior. Machine intelligence may also come to mimic human foibles, including the psychosocial phenomenon of "groupthink," in which excessive conformity to the group dynamic inhibits appropriate critical reassessment of a group's policies and actions. This essay predicts a parallel phenomenon in which an artificial intelligence, bound to the assumptions and biases inherent in the programming shared by a cyber collective, suppresses data-processing pathways that might otherwise redirect its executive output. Locked into allied algorithms, an artificial intelligence exhibiting "loopthink" would tend toward utilitarian assessments while omitting or excluding abstract moral principles, such as human dignity, that are vital to bioethics yet defy translation into arrangeable lines of computer code.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 7-12 |
| Number of pages | 6 |
| Journal | Ethics and Medicine |
| Volume | 33 |
| Issue number | 1 |
| State | Published - 2017 |
ASJC Scopus subject areas
- Religious studies
- Philosophy
- Health Policy