Call for Chapters: Machine Medical Ethics, Edited Collection
You are warmly invited to submit your research chapter for possible inclusion in an edited collection entitled Machine Medical Ethics. Target publication date: 2014.
The new field of Artificial Intelligence called Machine Ethics is concerned with ensuring that the behaviour of machines towards human users and other machines is ethical. This unique edited collection aims to provide an interdisciplinary platform for researchers in this field to present new research and developments in Machine Medical Ethics. Areas of interest for this edited collection include, but are not limited to, the following topics:
What is medical ethics?
What is machine medical ethics?
What are the consequences of creating or not creating ethical medical machines?
Can medical machines be autonomous?
Ought medical machines to operate autonomously, or under (complete or partial) human physician control?
Theories of Machine Medical Ethics
What theories of machine medical ethics are most theoretically plausible and most empirically supported?
Ought machine medical ethics to be rule-based (top-down), case-based (bottom-up), or a hybrid of both?
Is an interdisciplinary approach suited to designing a theory of machine medical ethics? (e.g., collaboration among philosophy, psychology, AI, computational neuroscience…)
Medical Machine Training
What does ethical training for medical machines consist in: ethical principles, ethical theories, or ethical skills? Is a hybrid approach best?
Which training regimes currently being tested and/or used are most successful?
Can ethically trained medical machines become unethical?
Can a medical machine learn empathy (caring) and skills relevant to the patient-physician relationship?
Can a medical machine learn to give an apology for a medical error?
Ought medical machines to be trained to detect and respond to patient embarrassment and/or issues of patient privacy? What social norms are relevant for training?
Ought medical machines to be trained to show sensitivity to gender, cultural, and age differences?
Ought machines to teach medicine and medical ethics to human medical students?
What role ought imitation or mimicry to play in the patient-machine-physician relationship?
What role ought empathy or caring to play in the patient-machine-physician relationship?
What skills are necessary to maintain a good patient-machine-physician relationship?
Ought medical machines to be able to detect patient fakery and malingering?
Under what conditions ought medical machines to operate with a nurse?
In what circumstances should a machine physician consult with human or other machine physicians regarding patient assessment or diagnosis?
Medical Machine Physical Appearance
Is there a correlation between physical appearance and physician trustworthiness?
Ought medical machines to appear human or non-human?
Is a highly plastic human-like face essential to medical machines? Or, is a static face sufficient?
What specific morphological facial features ought medical machines to have?
Ought medical machines to be gendered or androgynous?
Ought medical machines to possess a human-like body with mobile limbs?
What vocal characteristics ought medical machines to have?
As this is a new field, the target audience is expected to include scientists, researchers, and practitioners working in machine ethics and medical ethics. It will also include other stakeholders, such as academics, research institutes, and individuals interested in the field, as well as a broad public-sector audience comprising health service providers, government agencies, ministries, educational institutions, social service providers, and other governmental, commercial, and not-for-profit organisations.
Please indicate your intention to submit by emailing the editor the title of your paper, the author names, and an abstract. The full manuscript, as a PDF file, should be emailed to the same editor by the deadline indicated below. Authoring guidelines will be sent to you after we receive your letter of intent.
Please feel free to contact the editors, Simon van Rysewyk or Dr. Matthijs Pontier, if you have any questions or concerns. Many thanks!
Intent to Submit: June 10, 2013
Full Version: October 20, 2013
Decision Date: November 10, 2013
Final Version: December 31, 2013
Simon van Rysewyk
School of Humanities
University of Tasmania
Private Bag 41
Dr. Matthijs Pontier
The Centre for Advanced Media Research (CAMeRA)
Vrije Universiteit Amsterdam
1081 HV Amsterdam
How do we think about reality in a way that improves upon the old ways?
There is good news here: it is not entirely up to you to improve reality. Your children, and their children, will do the job. So, sit back a little. Enjoy the ride!
Human beings have the unique capacity to play life’s ‘ratchet game’. Children learn the best society has to offer, and can improve upon it. Your children’s children can then start where your children left off. And so on.
My kids are already way ahead of me, since they started where I left off long, long ago, and they are vastly ahead of Cro-Magnon humans. By contrast, chimpanzees start where their ancestors left off, and stay there. They don’t move from this place (chimps are still very cute, though).
Thus, humans can produce science and technology, and pass it on to their descendants. This gives human beings the chance to deploy science and AI technology to create increasingly accurate representations of ‘mind’, ‘DNA’, ‘autism’, ‘pain’, ‘happiness’, and so on. The ratchet game takes us beyond the familiar into exciting new territories.
(I wonder: Can academic philosophy play life’s ‘ratchet game’? It seems to me that philosophy is not terribly good at reaching out to other disciplines, and learning from them in the way that children naturally learn from parents.)
One day, artificial thought will be achieved.
An artificially intelligent computer will say, “that makes me happy.”
Will it feel happy? Assume it will not.
Still: it will act as if it did. It will act like an intelligent human being. And then what?
My hunch is that adult human beings will view intelligent computers as simplified versions of themselves (child-like). Human children will view them as peers; ‘friendships’ will form between children and intelligent computers.
Why? I am reminded of Wittgenstein’s remark: ‘The human body is the best picture of the human soul’.
Look at this video of ASIMO.
How would you interact with ASIMO? What would your reactions be?
It is also remarkable that ASIMO does not possess any physiology.
John Donne was only partly right: a person is an island. But every island is surrounded by water.
I am half myself, half you.
I am surrounded by your facial expressions. I adopt them as my own.
I cannot predict your every thought and action for the simple reason that most of my own thoughts and actions are completely spontaneous.
I cannot predict what I will do in most instances. I cannot know myself, so I cannot know you. True enough? We are both in the dark, it seems.
That sounds a bit bleak.
Is there any good news?
Yes: a person is not a vacuum. Human thought and action are shared. Shared, copied, modified, suppressed, distilled: we live in each other’s facial expressions.
Academics sometimes lament that the number of scholars working in their chosen field is smaller than the population density of Antarctica per square kilometre. They may have forgotten that being considered interesting by the half-dozen other researchers in the field is already achievement enough.