
Abstract

The perceived weaknesses of philosophical normative theories as machine ethic candidates have led some philosophers to consider combining them into some kind of a hybrid theory. This chapter develops a philosophical machine ethic which integrates “top-down” normative theories (rule-utilitarianism and prima-facie deontological ethics) and “bottom-up” (case-based reasoning) computational structure. This hybrid ethic is tested in a medical machine whose input-output function is treated as a simulacrum of professional human ethical action in clinical medicine. In six clinical medical simulations run on the proposed hybrid ethic, the output of the machine matched the respective acts of human medical professionals. Thus, the proposed machine ethic emerges as a successful model of medical ethics, and a platform for further developments.

Here.
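The hybrid architecture the abstract describes can be caricatured in a few lines. The sketch below is illustrative only (all duty names, weights, and cases are hypothetical, not taken from the chapter): top-down prima-facie duties score each candidate action, while a bottom-up case base of previously settled dilemmas can override the rule-based ranking when a sufficiently similar precedent exists.

```python
# Illustrative sketch of a hybrid machine ethic: top-down prima-facie
# duty scoring with a bottom-up case-based override. Hypothetical values.

DUTIES = ["beneficence", "non_maleficence", "autonomy"]

def duty_score(action, weights):
    """Top-down: weighted sum of an action's satisfaction of each duty."""
    return sum(weights[d] * action["duties"][d] for d in DUTIES)

def similarity(a, b):
    """Bottom-up: crude (negative-distance) similarity of duty profiles."""
    return -sum(abs(a["duties"][d] - b["duties"][d]) for d in DUTIES)

def decide(actions, case_base, weights, threshold=-0.5):
    # Prefer precedent: if a stored, ethically settled case closely
    # matches a candidate action, reuse its verdict.
    for action in actions:
        for case in case_base:
            if similarity(action, case) >= threshold and case["permitted"]:
                return action["name"]
    # Otherwise fall back on the top-down duty ranking.
    return max(actions, key=lambda a: duty_score(a, weights))["name"]

weights = {"beneficence": 1.0, "non_maleficence": 2.0, "autonomy": 1.5}
actions = [
    {"name": "treat",
     "duties": {"beneficence": 1.0, "non_maleficence": -0.5, "autonomy": -1.0}},
    {"name": "respect_refusal",
     "duties": {"beneficence": -0.5, "non_maleficence": 0.0, "autonomy": 1.0}},
]
case_base = [
    {"duties": {"beneficence": -0.5, "non_maleficence": 0.0, "autonomy": 1.0},
     "permitted": True},
]

print(decide(actions, case_base, weights))  # prints "respect_refusal"
```

With an empty case base the same call falls through to the pure top-down ranking, which is the sense in which the precedent store "overrides" rather than replaces the rules.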

Pentti Haikonen

Abstract. Functionalism of robot pain claims that what is definitive of robot pain is functional role, defined as the causal relations pain has to noxious stimuli, behavior and other subjective states. Here, I propose that the only way to theorize role-functionalism of robot pain is in terms of type-identity theory. I argue that what makes a state pain for a neuro-robot at a time is the functional role it has in the robot at the time, and this state is type identical to a specific circuit state. Support comes from an experimental study showing that if the neural network that controls a robot includes a specific ‘emotion circuit’, physical damage to the robot will cause the disposition to avoid movement, thereby enhancing fitness compared to robots without the circuit. Thus, pain for a robot at a time is type identical to a specific circuit state.

Here.
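The experimental claim, that adding an 'emotion circuit' makes damage produce an avoidance disposition, can be rendered as a toy simulation. This is a caricature under assumptions of mine, not Haikonen's actual associative neural architecture: damage drives the circuit into a persistent 'pain' state, which suppresses movement and thereby limits further damage.

```python
import random

# Toy simulation: robots accrue damage when they move over hazardous ground.
# A robot with an "emotion circuit" enters a persistent pain state after
# damage, which disposes it to stop moving; one without the circuit keeps
# moving and keeps accruing damage. Illustrative only.

class Robot:
    def __init__(self, has_emotion_circuit):
        self.has_emotion_circuit = has_emotion_circuit
        self.pain = False   # circuit state standing in for "pain"
        self.damage = 0

    def step(self, hazardous):
        # The pain state realizes the avoidance disposition.
        moving = not (self.has_emotion_circuit and self.pain)
        if moving and hazardous:
            self.damage += 1
            if self.has_emotion_circuit:
                self.pain = True

def run(robot, steps=100, hazard_rate=0.3, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        robot.step(rng.random() < hazard_rate)
    return robot.damage

with_circuit = run(Robot(has_emotion_circuit=True))
without_circuit = run(Robot(has_emotion_circuit=False))
print(with_circuit, without_circuit)  # the pain state caps damage
```

The "fitness" comparison in the abstract corresponds here to the damage totals: the circuit-bearing robot stops after its first injury, while the control robot's damage grows with every hazardous step.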

Movie site, ‘Tears of Steel’.

Call for Chapters: Machine Medical Ethics, Edited Collection

You are warmly invited to submit your research chapter for possible inclusion in an edited collection entitled Machine Medical Ethics. Target publication date: 2014.

The new field of Artificial Intelligence called Machine Ethics is concerned with ensuring that the behaviour of machines towards human users and other machines is ethical. This unique edited collection aims to provide an interdisciplinary platform for researchers in this field to present new research and developments in Machine Medical Ethics. Areas of interest for this edited collection include, but are not limited to, the following topics:

Foundational Concepts

What is medical ethics?

What is machine medical ethics?

What are the consequences of creating or not creating ethical medical machines?

Can medical machines be autonomous?

Ought medical machines to operate autonomously, or under (complete or partial) human physician control?

Theories of Machine Medical Ethics

What theories of machine medical ethics are most theoretically plausible and most empirically supported?

Ought machine medical ethics to be rule-based (top-down), case-based (bottom-up), or a hybrid of both?

Is an interdisciplinary approach suited to designing a machine medical ethical theory? (e.g., collaboration between philosophy, psychology, AI, computational neuroscience…)

Medical Machine Training

What does ethical training for medical machines consist in: ethical principles, ethical theories, or ethical skills? Is a hybrid approach best?

What training regimes currently tested and/or used are most successful?

Can ethically trained medical machines become unethical?

Can a medical machine learn empathy (caring) and skills relevant to the patient-physician relationship?

Can a medical machine learn to give an apology for a medical error?

Ought medical machines to be trained to detect and respond to patient embarrassment and/or issues of patient privacy? What social norms are relevant for training?

Ought medical machines to be trained to show sensitivity to gender, cultural and age-differences?

Ought machines to teach medicine and medical ethics to human medical students?

Patient-Machine-Physician Relationship

What role ought imitation or mimicry to play in the patient-machine-physician relationship?

What role ought empathy or caring to play in the patient-machine-physician relationship?

What skills are necessary to maintain a good patient-machine-physician relationship?

Ought medical machines to be able to detect patient fakery and malingering?

Under what conditions ought medical machines to operate with a nurse?

In what circumstances should a machine physician consult with human or other machine physicians regarding patient assessment or diagnosis?

Medical Machine Physical Appearance

Is there a correlation between physical appearance and physician trustworthiness?

Ought medical machines to appear human or non-human?

Is a highly plastic human-like face essential to medical machines? Or, is a static face sufficient?

What specific morphological facial features ought medical machines to have?

Ought medical machines to be gendered or androgynous?

Ought medical machines to possess a human-like body with mobile limbs?

What vocal characteristics ought medical machines to have?

As this is a new field, the target audience is expected to comprise scientists, researchers, and practitioners working in machine ethics and medical ethics. It also includes stakeholders such as academics, research institutes, and interested individuals, as well as a broad public-sector audience of health service providers, government agencies, ministries, educational institutions, social service providers, and other government, commercial, and not-for-profit agencies.

Please indicate your intention to submit by emailing either editor the title of your paper, the author names, and an abstract. The full manuscript, as a PDF file, should be emailed to the same editor by the deadline indicated below. Authoring guidelines will be sent to you after we receive your letter of intent.

Please feel free to contact the editors, Simon van Rysewyk or Dr. Matthijs Pontier, if you have any questions or concerns. Many thanks!

IMPORTANT DATES:

Intent to Submit: June 10, 2013

Full Version: October 20, 2013

Decision Date: November 10, 2013

Final Version: December 31, 2013

Editors:

Simon van Rysewyk

School of Humanities
University of Tasmania
Private Bag 41
Hobart
Tasmania 7001
Australia

Email: simonvanrysewyk@utas.edu.au

Dr. Matthijs Pontier

Post-Doctoral Researcher
The Centre for Advanced Media Research (CAMeRA)
Vrije Universiteit Amsterdam
Buitenveldertselaan 3
1081 HV Amsterdam
The Netherlands

Email: matthijspon@gmail.com

One day, artificial thought will be achieved.

An artificially intelligent computer will say, “that makes me happy.”

Will it feel happy? Assume it will not.

Still: it will act as if it did. It will act like an intelligent human being. And then what?

My hunch is that adult human beings will view intelligent computers as simplified, child-like versions of themselves. Human children will view them as peers; ‘friendships’ will form between children and intelligent computers.

Why? I am reminded of Wittgenstein’s remark: ‘The human body is the best picture of the human soul’.

Look at this video of ASIMO.

How would you interact with ASIMO? What would your reactions be?

It is also remarkable that ASIMO does not possess any physiology.

 
