
Movie site, ‘Tears of Steel’.


Variation in response to pain is well documented in clinical settings (e.g., Bates et al. 1993; Cherkin et al. 1994; Jensen et al. 1986; Unruh, 1996; Wormslev et al. 1994). Patients with similar types and degrees of wounds range from showing no pain at all to showing severe, disabling pain. Many chronic pain patients show disabling chronic pain despite having no observable wound; other patients show severe wounds but no pain. Why do two persons with identical lesions not show the same pain, or no pain at all? Why is every pain patient unique?

I propose that mind-brain identity theory may offer an answer to this difficult question. There are two main versions of identity theory: type identity and token identity. A sample type identity identifies “being in pain” (X) with “being the operation of the nervous-endocrine-immune mechanism” (Y), i.e., X if and only if Y (Chapman et al. 2008; van Rysewyk, 2013). For any person in pain, the nervous-endocrine-immune mechanism (NEIM) must be active, and whenever NEIM is active in a person, he or she is in pain. Type identity theory thus strongly constrains the pattern of covariation across persons. According to token identity theory, for a person in mental state X at time t, X is identical to some neurophysiological state Y; however, in the same person at a later time t1, the same mental state X may be identical to a different neurophysiological state Y2. Token identity theory does not constrain the pattern of covariation across persons; it claims only that, at any given time, some mind-brain identity holds.
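One rough way to make the contrast explicit (the notation here is mine, not drawn from the cited sources) is to quantify over persons p and times t, letting Pain(p, t) mean that p is in pain at t:

Type identity:
\[ \forall p\, \forall t\; [\, \mathrm{Pain}(p,t) \leftrightarrow \mathrm{NEIM}(p,t) \,] \]

Token identity:
\[ \forall p\, \forall t\; [\, \mathrm{Pain}(p,t) \rightarrow \exists Y\, (\, Y(p,t) \,\wedge\, \mathrm{Pain}(p,t) = Y(p,t) \,) \,] \]

where Y ranges over neurophysiological state types, and nothing requires the same Y to recur across persons, or even across times within the same person.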

In response to the topic question, I propose a hybrid version of identity theory: ‘type-token mind-brain identity theory’. On this view, for every person there is a type identity between a mental state X and some neurophysiological state Y. So when I am in pain, I am in NEIM state Y (and vice versa), but this NEIM state Y may be quite different across persons. Type-token identity theory therefore proposes a type identity at the level of each individual person, while allowing that identity to vary across persons. It implies that group-level (type-type) identities cannot fully explain the pattern of covariation in pain responses across persons. Measuring changes in a pattern of psychological and neurophysiological indicators over time may then support a unidimensional model of chronic pain for each pain patient. Thus, being in chronic pain for me is identical with a specific pattern of NEIM activity (Chapman et al. 2008; van Rysewyk, 2013), but for a different patient, the same state of pain may be identical to a different pattern of NEIM activity. In preventing and alleviating chronic pain, it is therefore essential to fit the intervention to the type-token pain identity profile of the individual patient.
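Continuing the same informal notation as above, the hybrid view amounts to reversing the order of the quantifiers, so that the identified neurophysiological type is fixed within a person but free to vary between persons:

Type-token identity:
\[ \forall p\; \exists Y_p\; \forall t\; [\, \mathrm{Pain}(p,t) \leftrightarrow Y_p(p,t) \,] \]

For each person p there is some NEIM activity pattern Y_p that is type-identical with pain for that person; because Y_p is chosen after p, different patients may have different identity profiles, which is what the clinical variation described above would then reflect.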

References

Bates, M. S., Edwards, W. T., & Anderson, K. O. (1993). Ethnocultural influences on variation in chronic pain perception. Pain, 52(1), 101-112.

Chapman, C. R., Tuckett, R. P., & Song, C. W. (2008). Pain and stress in a systems perspective: reciprocal neural, endocrine, and immune interactions. Journal of Pain, 9, 122-145.

Cherkin, D. C., Deyo, R. A., Wheeler, K., & Ciol, M. A. (1994). Physician variation in diagnostic testing for low back pain. Who you see is what you get. Arthritis & Rheumatism, 37(1), 15-22.

Jensen, M. P., Karoly, P., & Braver, S. (1986). The measurement of clinical pain intensity: a comparison of six methods. Pain, 27(1), 117-126.

Unruh, A. M. (1996). Gender variations in clinical pain experience. Pain, 65(2), 123-167.

van Rysewyk, S. (2013). Pain is Mechanism. Unpublished PhD Thesis. University of Tasmania.

Wormslev, M., Juul, A. M., Marques, B., Minck, H., Bentzen, L., & Hansen, T. M. (1994). Clinical examination of pelvic insufficiency during pregnancy: an evaluation of the interobserver variation, the relation between clinical signs and pain and the relation between clinical signs and physical disability. Scandinavian Journal of Rheumatology, 23(2), 96-102.

Sound the Alarm: Fraud in Neuroscience – Dana Foundation

Here.

‘Proof of Heaven: A Neurosurgeon’s Journey into the Afterlife’ (2012), by neurosurgeon Eben Alexander, presents a narration and interpretation of the author’s near-death experience (NDE). Alexander developed bacterial meningitis and was hospitalized. During hospitalization, he became deeply comatose, a condition which lasted seven days. Alexander was fortunate to emerge from his coma and regain full wakeful consciousness. Upon waking, he reported remarkably clear visions, sensations and thoughts he claims to have had during his near-death coma. In his book, Alexander interprets this NDE as proof that life follows death, that death is not the end, that there exists an extremely pleasant and serene afterlife, and that consciousness is independent of the cortical brain. It is this last claim of Alexander’s that I will consider in this post. Specifically, is consciousness independent of cortex?

According to Alexander, his coma-induced NDE occurred when his cerebral cortex was ‘completely shut down’, ‘inactivated’, and ‘totally offline’. In the article he wrote for Newsweek, Alexander writes that the absence of cortical activity in his brain was ‘clear from the severity and duration of my meningitis, and from the global cortical involvement documented by CT scans and neurological examinations.’ The problem with Alexander’s view of coma is that it is not supported by evidence. First, ‘global’ (complete) cortical ‘shut down’ does not result in coma, as Alexander believes; complete cortical ‘shut down’ is fatal and results in brain death (e.g., Cavanna et al. 2010; Charland-Verville et al. 2012; Laureys et al. 2004a; Laureys et al. 2004b). Second, ‘flat’ EEG recordings concurrent with high alpha cortical activity are frequently observed in comatose patients, a phenomenon termed ‘event-related desynchronization’; there is a vast and well-established scientific literature on this topic (e.g., Pfurtscheller & Aranibar, 1979; Pfurtscheller, 1992; Pfurtscheller & Lopes da Silva, 1999). Thus, coma does not require complete cortical deactivation.

Alexander’s claim that NDEs require complete cortical shut down carries the implication that full (wakeful) sensory consciousness must involve only cortex. Alexander’s argument is in line with a trend in consciousness research to investigate cortical regions, pathways, and activity guided by the slogan ‘seeking the neural correlates of consciousness.’ Clinical studies of cortical lesions have motivated this approach, largely due to robust correlations such as fusiform lesions leading to prosopagnosia, or ventral stream lesions leading to an inability to visually perceive shapes. The convenience of neuroimaging cortical activity with MEG, EEG, PET and fMRI has likely also played a part in the focus on cortex.

However, viewing (wakeful) sensory consciousness as purely cortical neglects essential subcortical-cortical behavioural aspects (e.g., Churchland, 2002; Damasio, 1999; Guillery & Sherman, 2002; Llinas, 2001; van Rysewyk, 2013). Put very simply (and briefly), a basic function of mammalian and non-mammalian nervous systems is to enable and regulate the movements necessary to evolutionary goals such as feeding and reproducing. Peripheral axons that carry sensory information have collateral branches that project both to subcortical motor structures (primarily, thalamus) and to cortical motor structures (primary motor cortex, M1). According to Guillery and Sherman (2002), all peripheral sensory input communicates information about ongoing motor instructions to these subcortical-cortical motor structures, which implies that a sensory signal can become a prediction about what movement will happen next. Thus, as an organism learns the effects of a specific movement, it learns what in the world is likely to occur next (planning), and thus what it might do following that event (deciding, acting). Temporality emerges as central to the nature of consciousness. In order to keep the body alive, nervous systems face numerous complex challenges in learning, continuous effective prediction, attending to different sensorimotor events, and calling up stored (timing) information. Neuroanatomical loops between thalamocortical structures are a plausible physical substrate involved in (perhaps identical to) tracking the temporal and causal structure of the world, and of one’s own body (e.g., Damasio, 1999; Guillery & Sherman, 2002; Llinas, 2001). This leads to the empirical prediction that in a near-death event, the normal functioning of thalamocortical loops is compromised.

References

Cavanna, A. E., Cavanna, S. L., Servo, S., & Monaco, F. (2010). The neural correlates of impaired consciousness in coma and unresponsive states. Discovery Medicine, 9(48), 431.

Charland-Verville, V., Habbal, D., Laureys, S., & Gosseries, O. (2012). Coma and related disorders. Swiss Archives of Neurology and Psychiatry, 163(8), 265-272.

Churchland, P. M. (2007). Neurophilosophy at work. Cambridge, UK: Cambridge University Press.

Churchland, P. S. (1989). Neurophilosophy: Toward a unified science of the mind-brain. Cambridge, Mass.: The MIT Press.

Churchland, P. S. (2002). Brain-wise: Studies in neurophilosophy. Cambridge, Mass.: The MIT Press.

Churchland, P. S. (2011). Braintrust: What neuroscience tells us about morality. Princeton: Princeton University Press.

Damasio, A. R. (1999). The Feeling of What Happens. New York: Harcourt Brace.

Guillery, R. W., & Sherman, S. M. (2002). The thalamus as a monitor of motor outputs. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 357, 1809-1821.

Laureys, S., Owen, A. M., & Schiff, N. D. (2004a). Brain function in coma, vegetative state, and related disorders. The Lancet Neurology, 3(9), 537-546.

Laureys, S., Perrin, F., Faymonville, M. E., Schnakers, C., Boly, M., Bartsch, V., Majerus, S., Moonen, G., & Maquet, P. (2004b). Cerebral processing in the minimally conscious state. Neurology, 63(5), 916-918.

Llinas, R. R. (2001). I of the Vortex: From Neurons to Self. Cambridge, Mass.: MIT Press.

Pfurtscheller, G., & Aranibar, A. (1979). Evaluation of event-related desynchronization (ERD) preceding and following voluntary self-paced movement. Electroencephalography and Clinical Neurophysiology, 46(2), 138-146.

Pfurtscheller, G. (1992). Event-related synchronization (ERS): an electrophysiological correlate of cortical areas at rest. Electroencephalography and Clinical Neurophysiology, 83(1), 62-69.

Pfurtscheller, G., & Lopes da Silva, F. H. (1999). Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, 110(11), 1842-1857.

van Rysewyk, S. (2013). Pain is Mechanism. Unpublished PhD Thesis. University of Tasmania.

Call for Chapters: Machine Medical Ethics, Edited Collection

You are warmly invited to submit your research chapter for possible inclusion in an edited collection entitled Machine Medical Ethics. Target publication date: 2014.

The new field of Artificial Intelligence called Machine Ethics is concerned with ensuring that the behaviour of machines towards human users and other machines is ethical. This unique edited collection aims to provide an interdisciplinary platform for researchers in this field to present new research and developments in Machine Medical Ethics. Areas of interest for this edited collection include, but are not limited to, the following topics:

Foundational Concepts

What is medical ethics?

What is machine medical ethics?

What are the consequences of creating or not creating ethical medical machines?

Can medical machines be autonomous?

Ought medical machines to operate autonomously, or under (complete or partial) human physician control?

Theories of Machine Medical Ethics

What theories of machine medical ethics are most theoretically plausible and most empirically supported?

Ought machine medical ethics to be rule-based (top-down), case-based (bottom-up), or a hybrid of both?

Is an interdisciplinary approach suited to designing a theory of machine medical ethics (e.g., collaboration between philosophy, psychology, AI, and computational neuroscience)?

Medical Machine Training

What does ethical training for medical machines consist in: ethical principles, ethical theories, or ethical skills? Is a hybrid approach best?

What training regimes currently tested and/or used are most successful?

Can ethically trained medical machines become unethical?

Can a medical machine learn empathy (caring) and skills relevant to the patient-physician relationship?

Can a medical machine learn to give an apology for a medical error?

Ought medical machines to be trained to detect and respond to patient embarrassment and/or issues of patient privacy? What social norms are relevant for training?

Ought medical machines to be trained to show sensitivity to gender, cultural and age-differences?

Ought machines to teach medicine and medical ethics to human medical students?

Patient-Machine-Physician Relationship

What role ought imitation or mimicry to play in the patient-machine-physician relationship?

What role ought empathy or caring to play in the patient-machine-physician relationship?

What skills are necessary to maintain a good patient-machine-physician relationship?

Ought medical machines to be able to detect patient fakery and malingering?

Under what conditions ought medical machines to operate with a nurse?

In what circumstances should a machine physician consult with human or other machine physicians regarding patient assessment or diagnosis?

Medical Machine Physical Appearance

Is there a correlation between physical appearance and physician trustworthiness?

Ought medical machines to appear human or non-human?

Is a highly plastic human-like face essential to medical machines? Or, is a static face sufficient?

What specific morphological facial features ought medical machines to have?

Ought medical machines to be gendered or androgynous?

Ought medical machines to possess a human-like body with mobile limbs?

What vocal characteristics ought medical machines to have?

As this is a new field, the target audience is expected to include scientists, researchers, and practitioners working in machine ethics and medical ethics. The audience also includes other stakeholders, such as academics, research institutes, and interested individuals, as well as the wider public sector: health service providers, government agencies, ministries, education institutions, social service providers, and other government, commercial, and not-for-profit organizations.

Please indicate your intention to submit by emailing the editor with the title of the paper, the authors, and an abstract. The full manuscript, as a PDF file, should be emailed to the same editor by the deadline indicated below. Authoring guidelines will be mailed to you after we receive your letter of intent.

Please feel free to contact the editors, Simon van Rysewyk or Dr. Matthijs Pontier, if you have any questions or concerns. Many thanks!

IMPORTANT DATES:

Intent to Submit: June 10, 2013

Full Version: October 20, 2013

Decision Date: November 10, 2013

Final Version: December 31, 2013

Editors:

Simon van Rysewyk

School of Humanities
University of Tasmania
Private Bag 41
Hobart
Tasmania 7001
Australia

Email: simonvanrysewyk@utas.edu.au

Dr. Matthijs Pontier

Post-Doctoral Researcher
The Centre for Advanced Media Research (CAMeRA)
Vrije Universiteit Amsterdam
Buitenveldertselaan 3
1081 HV Amsterdam
The Netherlands

Email: matthijspon@gmail.com

Ali’s excellent and revealing article is here.
