Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Complexity. Show all posts

Saturday, January 13, 2024

Consciousness does not require a self

James Cook
iai.tv
Originally published 14 DEC 23

Here is an excerpt:

Beyond the neuroscientific study of consciousness, phenomenological analysis also reveals the self to not be the possessor of experience. In mystical experiences induced by meditation or psychedelics, individuals typically enter a mode of experience in which the psychological self is absent, yet consciousness remains. While this is not the default state of the mind, the presence of consciousness in the absence of a self shows that consciousness is not dependent on an experiencing subject. What is consciousness if not a capacity of an experiencing subject? Such an experience reveals consciousness to consist of a formless awareness at its core, an empty space in which experience arises, including the experience of being a self. The self does not possess consciousness; consciousness is the experiential space in which the image of a psychological self can appear. This mode of experience can be challenging to conceptualise but is very simple when experienced – it is a state of simple appearances arising without the extra add-on of a psychological self inspecting them.

We can think of a conscious system as a system that is capable of holding beliefs about the qualitative character of the world. We should not think of belief here as referring to complex conceptual beliefs, such as believing that Paris is the capital of France, but as the simple ability to hold that the world is a certain way. You do this when you visually perceive a red apple in front of you: the experience is one of believing the apple to exist with all of its qualities, such as roundness and redness. This way of thinking is in line with the work of Immanuel Kant, who argued that we never come to know reality as it is but instead only experience phenomenal representations of reality [9]. We are not conscious of the world as it is, but as we believe it to be.


Here is my take:

For centuries, we've assumed consciousness and the sense of self are one and the same. This article throws a wrench in that assumption, proposing that consciousness can exist without a self. Imagine experiencing sights, sounds, and sensations without the constant "me" narrating it all. That's what "selfless consciousness" means – raw awareness untouched by self-reflection.

The article then posits that our familiar sense of self, complete with its stories and memories, isn't some fundamental truth but rather a clever prediction concocted by our brains. This "predicted self" helps us navigate the world and interact with others, but it's not necessarily who we truly are.

Decoupling consciousness from the self opens a Pandora's box of possibilities. We might find consciousness in unexpected places, like animals or even artificial intelligence. Understanding brain function could shift dramatically, and our very notions of identity, free will, and reality might need a serious rethink. This is a bold new perspective on what it means to be conscious, and its implications are quite dramatic.

Tuesday, October 26, 2021

The Fragility of Moral Traits to Technological Interventions

J. Fabiano
Neuroethics 14, 269–281 (2021). 
https://doi.org/10.1007/s12152-020-09452-6

Abstract

I will argue that deep moral enhancement is relatively prone to unexpected consequences. I first argue that even an apparently straightforward example of moral enhancement such as increasing human co-operation could plausibly lead to unexpected harmful effects. Secondly, I generalise the example and argue that technological intervention on individual moral traits will often lead to paradoxical effects on the group level. Thirdly, I contend that insofar as deep moral enhancement targets higher-order desires (desires to desire something), it is prone to be self-reinforcing and irreversible. Fourthly, I argue that the complex causal history of moral traits, with its relatively high frequency of contingencies, indicates their fragility. Finally, I conclude that attempts at deep moral enhancement pose greater risks than other enhancement technologies. For example, one of the major problems that moral enhancement is hoped to address is lack of co-operation between groups. If humanity developed and distributed a drug that dramatically increased co-operation between individuals, we would likely see a paradoxical decrease in co-operation between groups and a self-reinforcing increase in the disposition to engage in further modifications – both of which are potential problems.

Conclusion: Fragility Leads to Increased Risks 

Any substantial technological modification of moral traits would be more likely to cause harm than benefit. Moral traits have a particularly high proclivity to unexpected disturbances, as exemplified by the co-operation case, amplified by its self-reinforcing and irreversible nature and finally as their complex aetiology would lead one to suspect. Even the most seemingly simple improvement, if only slightly mistaken, is likely to lead to significant negative outcomes. Unless we produce an almost perfectly calibrated deep moral enhancement, its implementation will carry large risks. Deep moral enhancement is likely to be hard to develop safely, but not necessarily be impossible or undesirable. Given that deep moral enhancement could prevent extreme risks for humanity, in particular decreasing the risk of human extinction, it might as well be the case that we still should attempt to develop it. I am not claiming that our current traits are well suited to dealing with global problems. On the contrary, there are certainly reasons to expect that there are better traits that could be brought about by enhancement technologies. However, I believe my arguments indicate there are also much worse, more socially disruptive, traits accessible through technological intervention.

Monday, March 29, 2021

The problem with prediction

Joseph Fridman
aeon.com
Originally published 25 Jan 21

Here is an excerpt:

Today, many neuroscientists exploring the predictive brain deploy contemporary economics as a similar sort of explanatory heuristic. Scientists have come a long way in understanding how ‘spending metabolic money to build complex brains pays dividends in the search for adaptive success’, remarks the philosopher Andy Clark, in a notable review of the predictive brain. The idea of the predictive brain makes sense because it is profitable, metabolically speaking. Similarly, the psychologist Lisa Feldman Barrett describes the primary role of the predictive brain as managing a ‘body budget’. In this view, she says, ‘your brain is kind of like the financial sector of a company’, predictively allocating resources, spending energy, speculating, and seeking returns on its investments. For Barrett and her colleagues, stress is like a ‘deficit’ or ‘withdrawal’ from the body budget, while depression is bankruptcy. In Blackmore’s day, the brain was made up of sentries and soldiers, whose collective melancholy became the sadness of the human being they inhabited. Today, instead of soldiers, we imagine the brain as composed of predictive statisticians, whose errors become our neuroses. As the neuroscientist Karl Friston said: ‘[I]f the brain is an inference machine, an organ of statistics, then when it goes wrong, it’ll make the same sorts of mistakes a statistician will make.’

The strength of this association between predictive economics and brain sciences matters, because – if we aren’t careful – it can encourage us to reduce our fellow humans to mere pieces of machinery. Our brains were never computer processors, as useful as it might have been to imagine them that way every now and then. Nor are they literally prediction engines now and, should it come to pass, they will not be quantum computers. Our bodies aren’t empires that shuttle around sentrymen, nor are they corporations that need to make good on their investments. We aren’t fundamentally consumers to be tricked, enemies to be tracked, or subjects to be predicted and controlled. Whether the arena be scientific research or corporate intelligence, it becomes all too easy for us to slip into adversarial and exploitative framings of the human; as Galison wrote, ‘the associations of cybernetics (and the cyborg) with weapons, oppositional tactics, and the black-box conception of human nature do not so simply melt away.’

Saturday, August 1, 2020

How to Fix Science's Diversity Problem

Benjamin Deen
Scientific American
Originally posted 11 July 20

Here is an excerpt:

As bothered as I was by my own behavior, I’m inspired by the simplicity of the fix: just flag the importance of representation when making decisions about who exists and is heard in academia. Scientists at all levels make these decisions. As trainees, we decide whom to cite in our manuscripts, whose research to read and engage with. Later, we begin choosing people to invite to talks, and students to mentor. Ultimately, as senior scientists, we have an even more direct gatekeeping role, deciding whom to hire and, thus, who constitutes the scientific enterprise. These choices are all levers that can be used to nudge the system away from its default, white male–heavy state.

We tend to think about racism as a personality trait: someone can be racist, nonracist or antiracist. But this simple model that we use to understand other people belies an incredibly complex underlying reality. We contain multitudes. We can be aware of the problems, read Baldwin and Coates, and still have patterns of thinking and behavior that perpetuate racial discrimination.

I’m not sure if we can convince the rest of the country to make concrete behavioral changes, to focus their effort on this issue and face the uncomfortable need to change. But I have more hope for science, which is largely composed of liberal and thoughtful people.


Thursday, May 7, 2020

What Is 'Decision Fatigue' and How Does It Affect You?

Rachel Fairbank
LifeHacker
Originally published 14 April 20

Here is an excerpt:

Too many decisions result in emotional and mental strain

“These are legitimately difficult decisions,” Fischhoff says, adding that people shouldn’t feel bad about struggling with them. “Feeling bad is adding insult to injury,” he says.

This added complexity to our decisions is leading to decision fatigue, which is the emotional and mental strain that comes when we are forced to make too many choices. Decision fatigue is the reason why thinking through a decision is harder when we are stressed or tired.

“These are difficult decisions because the stakes are often really high, while we are required to master unfamiliar information,” Fischhoff says.

But if all of this sounds like too much, there are actions we can take to reduce decision fatigue. For starters, it’s best to minimize the number of small decisions you make in a day, such as what to eat for dinner or what to wear. The fewer small decisions you have to make, the more bandwidth you’ll have for the bigger ones.

For this particular crisis, there are a few more steps you can take to reduce your decision fatigue.


Wednesday, January 29, 2020

Why morals matter in foreign policy

Joseph Nye
aspistrategist.org.au
Originally published 10 Jan 20

Here is the conclusion:

Good moral reasoning should be three-dimensional, weighing and balancing intentions, consequences and means. A foreign policy should be judged accordingly. Moreover, a moral foreign policy must consider consequences such as maintaining an institutional order that encourages moral interests, in addition to particular newsworthy actions such as helping a dissident or a persecuted group in another country. And it’s important to include the ethical consequences of ‘nonactions’, such as President Harry S. Truman’s willingness to accept stalemate and domestic political punishment during the Korean War rather than follow General Douglas MacArthur’s recommendation to use nuclear weapons. As Sherlock Holmes famously noted, much can be learned from a dog that doesn’t bark.

It’s pointless to argue that ethics will play no role in the foreign policy debates that await this year. We should acknowledge that we always use moral reasoning to judge foreign policy, and we should learn to do it better.


Tuesday, August 27, 2019

Engineering Ethics Isn't Always Black And White

Elizabeth Fernandez
Forbes.com
Originally posted August 6, 2019

Here is an excerpt:

Dr. Stephan has thought a lot about engineering ethics. He goes on to say that, while there are not many courses completely devoted to engineering ethics, many students now at least have some exposure to it before graduating.

Education may fall into one of several categories. Students may learn what constitutes a conflict of interest, or why it may be unethical to accept gifts as an engineer. Some cases are clear. For example, a toy may be found to have a defective part which could harm a child. Ethically, the toy should be pulled from the market, even if it causes the company a loss of revenue.

But other times, the ethical choice may be less clear. For example, how should a civil engineer make a decision about which intersection should receive funds for a safety upgrade, which may come down to weighing some lives against others? Or what ethical decisions are involved in creating a device that eliminates second-hand smoke from cigarettes, but might reinforce addiction or increase the incidence of children who smoke?

Now engineering ethics may even be more important. "The advances in artificial intelligence that have occurred over the last decade are raising serious questions about how this technology should be controlled with respect to privacy, politics, and even personal safety," says Dr. Stephan.


Friday, August 9, 2019

The Human Brain Project Hasn’t Lived Up to Its Promise

Ed Yong
www.theatlantic.com
Originally published July 22, 2019

Here is an excerpt:

Markram explained that, contra his TED Talk, he had never intended for the simulation to do much of anything. He wasn’t out to make an artificial intelligence, or beat a Turing test. Instead, he pitched it as an experimental test bed—a way for scientists to test their hypotheses without having to prod an animal’s head. “That would be incredibly valuable,” Lindsay says, but it’s based on circular logic. A simulation might well allow researchers to test ideas about the brain, but those ideas would already have to be very advanced to pull off the simulation in the first place. “Once neuroscience is ‘finished,’ we should be able to do it, but to have it as an intermediate step along the way seems difficult.”

“It’s not obvious to me what the very large-scale nature of the simulation would accomplish,” adds Anne Churchland from Cold Spring Harbor Laboratory. Her team, for example, simulates networks of neurons to study how brains combine visual and auditory information. “I could implement that with hundreds of thousands of neurons, and it’s not clear what it would buy me if I had 70 billion.”

In a recent paper titled “The Scientific Case for Brain Simulations,” several HBP scientists argued that big simulations “will likely be indispensable for bridging the scales between the neuron and system levels in the brain.” In other words: Scientists can look at the nuts and bolts of how neurons work, and they can study the behavior of entire organisms, but they need simulations to show how the former create the latter. The paper’s authors drew a comparison to weather forecasts, in which an understanding of physics and chemistry at the scale of neighborhoods allows us to accurately predict temperature, rainfall, and wind across the whole globe.


Saturday, December 22, 2018

Complexities for Psychiatry's Identity As a Medical Specialty

Mohammed Abouelleil Rashed
Kan Zaman Blog
Originally posted November 23, 2018

Here is an excerpt:

Doctors, researchers, governments, pharmaceutical companies, and patient groups each have their own interests and varying abilities to influence the construction of disease categories. This creates the possibility for disagreement over the legitimacy of certain conditions, something we can see playing out in the ongoing debates surrounding Chronic Fatigue Syndrome, a condition that “receives much more attention from its sufferers and their supporters than from the medical community” (Simon 2011: 91). And, in psychiatry, it has long been noted that some major pharmaceutical companies influence the construction of disorder in order to create a market for the psychotropic drugs they manufacture. From the perspective of medical anti-realism (in the constructivist form presented here), these influences are no longer seen as a hindrance to the supposedly objective, ‘natural kind’ status of disease categories, but as key factors involved in their construction. Thus, the lobbying power of the American Psychiatric Association, the vested interests of pharmaceutical companies, and the desire of psychiatrists as a group to maintain their prestige do not undermine the identity of psychiatry as a medical specialty; what they do is highlight the importance of emphasizing the interests of patient groups as well as utilitarian and economic criteria to counteract and respond to the other interests. Medical constructivism is not a uniquely psychiatric ontology, it is a medicine-wide ontology; it applies to schizophrenia as it does to hypertension, appendicitis, and heart disease. Owing to the normative complexity of psychiatry (outlined earlier) and to the fact that loss of freedom is often involved in psychiatric practice, the vested interests involved in psychiatry are more complex and harder to resolve than in many other medical specialties. But that in itself is not a hindrance to psychiatry’s identity as a medical speciality.


Monday, December 10, 2018

What makes a ‘good’ clinical ethicist?

Trevor Bibler
Baylor College of Medicine Blog
Originally posted October 12, 2018

Here is an excerpt:

Some hold that the complexity of clinical ethics consultations couldn’t be reduced to multiple-choice questions based on a few sources, arguing that creating multiple-choice questions that reflect the challenges of doing clinical ethics is nearly impossible. Most of the time, the HEC-C Program is careful to emphasize that they are testing knowledge of issues in clinical ethics, not the ethicist’s ability to apply this knowledge to the practice of clinical ethics.

This is a nuanced distinction that may be lost on those outside the field. For example, an administrator might view the HEC-C Program as separating a good ethicist from an inadequate ethicist simply because they have 400 hours of experience and can pass a multiple-choice exam.

Others disagree with the source material (called “core references”) that serves as the basis for exam questions. I believe the core references, if repetitious, are important works in the field. My concern is that these works do not pay sufficient attention to some of the most pressing and challenging issues in clinical ethics today: income inequality, care for non-citizens, drug abuse, race, religion, sex and gender, to name a few areas.

Also, it’s feasible that inadequate ethicists will become certified. I can imagine an ethicist might meet the requirements, but fall short of being a good ethicist because in practice they are poor communicators, lack empathy, are authoritarian when analyzing ethics issues, or have an off-putting presence.

On the other hand, I know some ethicists I would consider experts in the field who are not going to undergo the certification process because they disagree with it. Both of these scenarios show that HEC certification should not be the single requirement that separates a good ethicist from an inadequate ethicist.


Friday, June 15, 2018

The danger of absolute thinking is absolutely clear

Mohammed Al-Mosaiwi
aeon.co
Originally posted May 2, 2018

Here is an excerpt:

There are generally two forms of absolutism; ‘dichotomous thinking’ and ‘categorical imperatives’. Dichotomous thinking – also referred to as ‘black-and-white’ or ‘all-or-nothing’ thinking – describes a binary outlook, where things in life are either ‘this’ or ‘that’, and nothing in between. Categorical imperatives are completely rigid demands that people place on themselves and others. The term is borrowed from Immanuel Kant’s deontological moral philosophy, which is grounded in an obligation- and rules-based ethical code.

In our research – and in clinical psychology more broadly – absolutist thinking is viewed as an unhealthy thinking style that disrupts emotion-regulation and hinders people from achieving their goals. Yet we all, to varying extents, are disposed to it – why is this? Primarily, because it’s much easier than dealing with the true complexities of life. The term cognitive miser, first introduced by the American psychologists Susan Fiske and Shelley Taylor in 1984, describes how humans seek the simplest and least effortful ways of thinking. Nuance and complexity are expensive – they take up precious time and energy – so wherever possible we try to cut corners. This is why we have biases and prejudices, and form habits. It’s why the study of heuristics (intuitive ‘gut-feeling’ judgments) is so useful in behavioural economics and political science.

But there is no such thing as a free lunch; the time and energy saved through absolutist thinking has a cost. In order to successfully navigate through life, we need to appreciate nuance, understand complexity and embrace flexibility. When we succumb to absolutist thinking for the most important matters in our lives – such as our goals, relationships and self-esteem – the consequences are disastrous.


Monday, May 14, 2018

Computer Says No: Part 2 - Explainability

Jasmine Leonard
theRSA.org
Originally posted March 23, 2018

Here is an excerpt:

The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable.  But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions.  And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision.  It therefore doesn’t need to be explained itself; it merely needs to be justifiable.

This is a subtle but important distinction.  To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug.  She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too.  In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it.  And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases.  Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them.  Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.

If explanations of predictions are unnecessary to justify their use in decision-making then, what else can justify the use of a prediction made by an automated decision system?  The best answer, I believe, is that the system is shown to be sufficiently accurate.  What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human.  It also means that there are no other readily available systems that produce more accurate predictions.
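Leonard's justification criterion can be read as a simple decision rule. The sketch below is purely illustrative and not from the article: the function name and the accuracy figures are invented, and "accuracy" is left as a single number even though in practice it would have to be measured on representative cases.

```python
# Hypothetical sketch of the justifiability criterion described above:
# an automated system's predictions may be used in decision-making if they are
# (1) at least as accurate as a trained human's, and
# (2) at least as accurate as any readily available alternative system.
def justified_to_deploy(system_accuracy: float,
                        human_accuracy: float,
                        best_alternative_accuracy: float) -> bool:
    return (system_accuracy >= human_accuracy
            and system_accuracy >= best_alternative_accuracy)

# Invented figures for illustration only.
print(justified_to_deploy(0.91, 0.87, 0.89))  # True: beats both baselines
print(justified_to_deploy(0.85, 0.87, 0.89))  # False: a trained human does better
```

On this rule, the explanation for the prediction never enters the test; only comparative accuracy does, which is exactly the distinction the passage draws between explaining a prediction and justifying its use.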


Thursday, January 25, 2018

Minding matter

Adam Frank
aeon.com
Originally posted March 13, 2017

Here are two excerpts:

You can see how this throws a monkey wrench into a simple, physics-based view of an objective materialist world. How can there be one mathematical rule for the external objective world before a measurement is made, and another that jumps in after the measurement occurs? For a hundred years now, physicists and philosophers have been beating the crap out of each other (and themselves) trying to figure out how to interpret the wave function and its associated measurement problem. What exactly is quantum mechanics telling us about the world? What does the wave function describe? What really happens when a measurement occurs? Above all, what is matter?

(cut)

Some consciousness researchers see the hard problem as real but inherently unsolvable; others posit a range of options for its account. Those solutions include possibilities that overly project mind into matter. Consciousness might, for example, be an example of the emergence of a new entity in the Universe not contained in the laws of particles. There is also the more radical possibility that some rudimentary form of consciousness must be added to the list of things, such as mass or electric charge, that the world is built of. Regardless of the direction ‘more’ might take, the unresolved democracy of quantum interpretations means that our current understanding of matter alone is unlikely to explain the nature of mind. It seems just as likely that the opposite will be the case.


Sunday, October 29, 2017

Courage and Compassion: Virtues in Caring for So-Called “Difficult” Patients

Michael Hawking, Farr A. Curlin, and John D. Yoon
AMA Journal of Ethics. April 2017, Volume 19, Number 4: 357-363.

Abstract

What, if anything, can medical ethics offer to assist in the care of the “difficult” patient? We begin with a discussion of virtue theory and its application to medical ethics. We conceptualize the “difficult” patient as an example of a “moral stress test” that especially challenges the physician’s character, requiring the good physician to display the virtues of courage and compassion. We then consider two clinical vignettes to flesh out how these virtues might come into play in the care of “difficult” patients, and we conclude with a brief proposal for how medical educators might cultivate these essential character traits in physicians-in-training.

Here is an excerpt:

To give a concrete example of a virtue that will be familiar to anyone in medicine, consider the virtue of temperance. A temperate person exhibits appropriate self-control or restraint. Aristotle describes temperance as a mean between two extremes—in the case of eating, an extreme lack of temperance can lead to morbid obesity and its excess to anorexia. Intemperance is a hallmark of many of our patients, particularly among those with type 2 diabetes, alcoholism, or cigarette addiction. Clinicians know all too well the importance of temperance because they see the results for human beings who lack it—whether it be amputations and dialysis for the diabetic patient; cirrhosis, varices, and coagulopathy for the alcoholic patient; or chronic obstructive pulmonary disease and lung cancer for the lifelong smoker. In all of these cases, intemperance inhibits a person’s ability to flourish. These character traits do, of course, interact with social, cultural, and genetic factors in impacting an individual’s health, but a more thorough exploration of these factors is outside the scope of this paper.


Tuesday, July 25, 2017

Should a rapist get Viagra or a robber get a cataracts op?

Tom Douglas
Aeon Magazine
Originally published on July 7, 2017

Suppose a physician is about to treat a patient for diminished sex drive when she discovers that the patient – let’s call him Abe – has raped several women in the past. Fearing that boosting his sex drive might lead Abe to commit further sex offences, she declines to offer the treatment. Refusal to provide medical treatment in this case strikes many as reasonable. It might not be entirely unproblematic, since some will argue that he has a human right to medical treatment, but many of us would probably think the physician is within her rights – she’s not obliged to treat Abe. At least, not if her fears about further offending are well-founded.

But now consider a different case. Suppose an eye surgeon is about to book Bert in for a cataract operation when she discovers that he is a serial bank robber. Fearing that treating his developing blindness might help Bert to carry off further heists, she declines to offer the operation. In many ways, this case mirrors that of Abe. But morally, it seems different. In this case, refusing treatment does not seem reasonable, no matter how well-founded the surgeon’s fear. What’s puzzling is why. Why is Bert’s surgeon obliged to treat his blindness, while Abe’s physician has no similar obligation to boost his libido?

Here’s an initial suggestion: diminished libido, it might be said, is not a ‘real disease’. An inconvenience, certainly. A disability, perhaps. But a genuine pathology? No. By contrast, cataract disease clearly is a true pathology. So – the argument might go – Bert has a stronger claim to treatment than Abe. But even if reduced libido is not itself a disease – a view that could be contested – it could have pathological origins. Suppose Abe has a disease that suppresses testosterone production, and thus libido. And suppose that the physician’s treatment would restore his libido by correcting this disease. Still, it would seem reasonable for her to refuse the treatment, if she had good grounds to believe providing it could result in further sex offences.

Thursday, February 2, 2017

Will artificial intelligence help to crack biology?

The Economist
Originally published January 7, 2017

Here is an excerpt:

Another important biological hurdle that AI can help people surmount is complexity. Experimental science progresses by holding steady one variable at a time, an approach that is not always easy when dealing with networks of genes, proteins or other molecules. AI can handle this more easily than human beings.

At BERG Health, the firm’s AI system starts by analysing tissue samples, genomics and other clinical data relevant to a particular disease. It then tries to model from this information the network of protein interactions that underlie that disease. At that point human researchers intervene to test the model’s predictions in a real biological system. One of the potential drugs BERG Health has discovered this way—for topical squamous-cell carcinoma, a form of skin cancer—passed early trials for safety and efficacy, and now awaits full-scale testing. The company says it has others in development.

For all the grand aspirations of the AI folk, though, there are reasons for caution. Dr Mead warns: “I don’t think we are in a state to model even a single cell. The model we have is incomplete.” Actually, that incompleteness applies even to models of single proteins, meaning that science is not yet good at predicting whether a particular modification will make a molecule intended to interact with a given protein a better drug or not. Most known protein structures have been worked out from crystallised versions of the molecule, held tight by networks of chemical bonds. In reality, proteins are flexible, but that is much harder to deal with.

The article is here.

Tuesday, January 10, 2017

Why are doctors burned out? Our health care system is a complicated mess

By Steven Adelman and Harris A. Berman
STAT News
Originally posted December 15, 2016

Here is an excerpt:

Burnout and dissatisfaction with work-life balance are particularly acute for adult primary care physicians — the central figures in our unsystematic health care “system.” A system that was already teetering in 2011 has been stressed by the addition of 20 million covered lives by the Affordable Care Act. It’s little wonder that in Massachusetts, where near-universal coverage has filled up the offices of primary care physicians, malpractice claims against them are rising. Patients and physicians alike complain about the unsatisfying brevity of office visits, and many harbor intense feelings of antipathy towards cumbersome electronic health records and growing administrative burdens.

We believe that to alleviate the stress and burnout in the medical professions, we must pay attention to system factors that lead to what we call the “occupational health crisis in medicine.” We recently surveyed 425 practicing physicians and health care leaders and executives, seeking their opinions on the importance of eight approaches to transforming health care. We presented the results this fall at the International Conference on Physician Health.

The article is here.

Thursday, October 15, 2015

More Doubts Over The Oxytocin And Trust Theory

By Neuroskeptic
Originally published on September 16, 2015

The claim that the hormone oxytocin promotes trust in humans has drawn a lot of attention. But today, a group of researchers reported that they’ve been unable to reproduce their own findings concerning that effect.

The new paper, in PLoS ONE, is by Anthony Lane and colleagues from Louvain in Belgium. The same team have previously published evidence supporting the link between oxytocin and trust.

Back in 2010 they reported that “oxytocin increases trust when confidential information is in the balance”. An intranasal spray of oxytocin made volunteers more likely to leave a sensitive personal document lying around in an open envelope, rather than sealing it up, suggesting that they trusted people not to peek at it.

However, the authors now say that they failed to replicate the 2010 ‘envelope task’ result in two subsequent studies.

The entire blog post is here.

Friday, October 2, 2015

You're not irrational, you're just quantum probabilistic

Science Daily
Originally posted September 15, 2015

Here is an excerpt:

Their work suggests that thinking in a quantum-like way--essentially not following a conventional approach based on classical probability theory--enables humans to make important decisions in the face of uncertainty, and lets us confront complex questions despite our limited mental resources.

When researchers try to study human behavior using only classical mathematical models of rationality, some aspects of human behavior do not compute. From the classical point of view, those behaviors seem irrational, Wang explained.

For instance, scientists have long known that the order in which questions are asked on a survey can change how people respond--an effect previously thought to be due to vaguely labeled effects, such as "carry-over effects" and "anchoring and adjustment," or noise in the data. Survey organizations normally change the order of questions between respondents, hoping to cancel out this effect. But in the Proceedings of the National Academy of Sciences last year, Wang and collaborators demonstrated that the effect can be precisely predicted and explained by a quantum-like aspect of people's behavior.
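The quantum account sketched above can be made concrete in a few lines. In the quantum model, each survey question is a projector onto a subspace of a "belief state" vector; asking a question collapses the state, so the joint probability of two answers depends on the order in which the questions are posed. The same formalism also implies an exact, parameter-free constraint (the "QQ equality") that holds for any state and any pair of questions. The following is a minimal toy sketch of that idea, not the authors' actual code or data analysis; the state vector and question angles are arbitrary choices for illustration:

```python
import numpy as np

def projector(theta):
    """Projector onto the unit vector (cos theta, sin theta)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def seq_prob(psi, first, second):
    """Probability of the `first` outcome, then the `second` outcome."""
    return np.linalg.norm(second @ (first @ psi)) ** 2

psi = np.array([np.cos(0.3), np.sin(0.3)])  # an arbitrary belief state
A, B = projector(0.0), projector(0.7)       # 'yes' projectors for two questions
An, Bn = np.eye(2) - A, np.eye(2) - B       # the corresponding 'no' projectors

# Order effect: yes-to-A-then-yes-to-B differs from the reverse order.
p_ab = seq_prob(psi, A, B)
p_ba = seq_prob(psi, B, A)
print(abs(p_ab - p_ba) > 1e-6)  # True: question order changes the answer

# QQ equality: these two sums agree exactly, for any state and questions.
lhs = seq_prob(psi, A, Bn) + seq_prob(psi, An, B)
rhs = seq_prob(psi, B, An) + seq_prob(psi, Bn, A)
print(np.isclose(lhs, rhs))  # True
```

The order effect mirrors what survey researchers observe, while the QQ equality is the kind of precise, falsifiable prediction that distinguishes the quantum model from vague "carry-over" explanations.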

The entire article is here.

Tuesday, February 18, 2014

Ten Things I Learned About Me

And maybe about you, too, while writing a book about the self.

By Jennifer Ouellette
Slate
Originally published January 30, 2014

Here are some excerpts:

But while I might not have found the Ultimate Answer to the source of the self, it proved to be an exciting journey and I learned some fascinating things along the way.

1. Genes are deterministic but they are not destiny. Except for earwax consistency. My earwax is my destiny. We tend to think of our genome as following a “one gene for one trait” model, but the real story is far more complicated. 

(cut)

2. It’s nature and nurture, not one or the other, ....

(cut)

3. My brain scan—courtesy of neuroscientist David Eagleman’s lab—told me nothing about who I am, but it did confirm that I have very clear sinuses.