Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Awareness. Show all posts

Saturday, October 9, 2021

Nudgeability: Mapping Conditions of Susceptibility to Nudge Influence

de Ridder, D., Kroese, F., & van Gestel, L. (2021). 
Perspectives on Psychological Science
Advance online publication. 
https://doi.org/10.1177/1745691621995183

Abstract

Nudges are behavioral interventions to subtly steer citizens' choices toward "desirable" options. An important topic of debate concerns the legitimacy of nudging as a policy instrument, and there is a focus on issues relating to nudge transparency, the role of preexisting preferences people may have, and the premise that nudges primarily affect people when they are in "irrational" modes of thinking. Empirical insights into how these factors affect the extent to which people are susceptible to nudge influence (i.e., "nudgeable") are lacking in the debate. This article introduces the new concept of nudgeability and makes a first attempt to synthesize the evidence on when people are responsive to nudges. We find that nudge effects do not hinge on transparency or modes of thinking but that personal preferences moderate effects such that people cannot be nudged into something they do not want. We conclude that, in view of these findings, concerns about nudging legitimacy should be softened and that future research should attend to these and other conditions of nudgeability.

From the General Discussion

Finally, returning to the debates on nudging legitimacy that we addressed at the beginning of this article, it seems that concerns should be softened insofar as nudges do not impose choice without respecting basic ethical requirements for good public policy. More than a decade ago, philosopher Luc Bovens (2009) formulated the following four principles for nudging to be legitimate: A nudge should allow people to act in line with their overall preferences; a nudge should not induce a change in preferences that would not hold under nonnudge conditions; a nudge should not lead to “infantilization,” such that people are no longer capable of making autonomous decisions; and a nudge should be transparent so that people have control over being in a nudge situation. With the findings from our review in mind, it seems that these legitimacy requirements are fulfilled. Nudges do allow people to act in line with their overall preferences, nudges allow for making autonomous decisions insofar as nudge effects do not depend on being in a System 1 mode of thinking, and making the nudge transparent does not compromise nudge effects.

Thursday, October 1, 2020

Intentional Action Without Knowledge

Vekony, R., Mele, A. & Rose, D.
Synthese (2020).

Abstract

In order to be doing something intentionally, must one know that one is doing it? Some philosophers have answered yes. Our aim is to test a version of this knowledge thesis, what we call the Knowledge/Awareness Thesis, or KAT. KAT states that an agent is doing something intentionally only if he knows that he is doing it or is aware that he is doing it. Here, using vignettes featuring skilled action and vignettes featuring habitual action, we provide evidence that, in various scenarios, a majority of non-specialists regard agents as intentionally doing things that the agents do not know they are doing and are not aware of doing. This puts pressure on proponents of KAT and leaves it to them to find a way these results can coexist with KAT.

Conclusion

Our aim was to evaluate KAT empirically. We found that majority responses to our vignettes are at odds with KAT. Our results show that, on an ordinary view of matters, neither knowledge nor awareness of doing something is necessary for doing it intentionally. We tested cases of skilled action and habitual action, and we found that, for both, people ascribed intentionality to an action at an appreciably higher rate than knowledge and awareness.

The research is here.

Tuesday, November 5, 2019

Will Robots Wake Up?

Susan Schneider
orbitermag.com
Originally published September 30, 2019

Machine consciousness, if it ever exists, may not be found in the robots that tug at our heartstrings, like R2D2. It may instead reside in some unsexy server farm in the basement of a computer science building at MIT. Or perhaps it will exist in some top-secret military program and get snuffed out, because it is too dangerous or simply too inefficient.

AI consciousness likely depends on phenomena that we cannot, at this point, gauge—such as whether some microchip yet to be invented has the right configuration, or whether AI developers or the public want conscious AI. It may even depend on something as unpredictable as the whim of a single AI designer, like Anthony Hopkins’s character in Westworld. The uncertainty we face moves me to a middle-of-the-road position, one that stops short of either techno-optimism (believing that technology can solve our problems) or biological naturalism.

This approach I call, simply, the “Wait and See Approach.”

In keeping with my desire to look at real-world considerations that speak to whether AI consciousness is even compatible with the laws of nature—and, if so, whether it is technologically feasible or even interesting to build—my discussion draws from concrete scenarios in AI research and cognitive science.

The info is here.

Thursday, October 24, 2019

The consciousness illusion

Keith Frankish
aeon.co
Originally published September 26, 2019

Here is an excerpt:

The first concerns explanatory simplicity. If we observe something science can’t explain, then the simplest hypothesis is that it’s an illusion, especially if it can be observed only from one particular angle. This is exactly the case with phenomenal consciousness. Phenomenal properties cannot be explained in standard scientific ways and can be observed only from the first-person viewpoint (no one but me can experience my sensations). This does not show that they aren’t real. It could be that we need to radically rethink our science but, as Dennett says, the theory that they are illusory is the obvious default one.

A second argument concerns our awareness of phenomenal properties. We are aware of features of the natural world only if we have a sensory system that can detect them and generate representations of them for use by other mental systems. This applies equally to features of our own minds (which are parts of the natural world), and it would apply to phenomenal properties too, if they were real. We would need an introspective system that could detect them and produce representations of them. Without that, we would have no more awareness of our brains’ phenomenal properties than we do of their magnetic properties. In short, if we were aware of phenomenal properties, it would be by virtue of having mental representations of them. But then it would make no difference whether these representations were accurate. Illusory representations would have the same effects as veridical ones. If introspection misrepresents us as having phenomenal properties then, subjectively, that’s as good as actually having them. Since science indicates that our brains don’t have phenomenal properties, the obvious inference is that our introspective representations of them are illusory.

There is also a specific argument for preferring illusionism to property dualism. In general, if we can explain our beliefs about something without mentioning the thing itself, then we should discount the beliefs.

The info is here.

Thursday, October 10, 2019

Our illusory sense of agency has a deeply important social purpose

Chris Frith
aeon.co
Originally published September 22, 2019

Here are two excerpts:

We humans like to think of ourselves as mindful creatures. We have a vivid awareness of our subjective experience and a sense that we can choose how to act – in other words, that our conscious states are what cause our behaviour. Afterwards, if we want to, we might explain what we’ve done and why. But the way we justify our actions is fundamentally different from deciding what to do in the first place.

Or is it? Most of the time our perception of conscious control is an illusion. Many neuroscientific and psychological studies confirm that the brain’s ‘automatic pilot’ is usually in the driving seat, with little or no need for ‘us’ to be aware of what’s going on. Strangely, though, in these situations we retain an intense feeling that we’re in control of what we’re doing, what can be called a sense of agency. So where does this feeling come from?

It certainly doesn’t come from having access to the brain processes that underlie our actions. After all, I have no insight into the electrochemical particulars of how my nerves are firing or how neurotransmitters are coursing through my brain and bloodstream. Instead, our experience of agency seems to come from inferences we make about the causes of our actions, based on crude sensory data. And, as with any kind of perception based on inference, our experience can be tricked.

(cut)

These observations point to a fundamental paradox about consciousness. We have the strong impression that we choose when we do and don’t act and, as a consequence, we hold people responsible for their actions. Yet many of the ways we encounter the world don’t require any real conscious processing, and our feeling of agency can be deeply misleading.

If our experience of action doesn’t really affect what we do in the moment, then what is it for? Why have it? Contrary to what many people believe, I think agency is only relevant to what happens after we act – when we try to justify and explain ourselves to each other.

The info is here.

Monday, April 1, 2019

Neuroscience Readies for a Showdown Over Consciousness Ideas

Philip Ball
Quanta Magazine
Originally published March 6, 2019

Here is an excerpt:

Philosophers have debated the nature of consciousness and whether it can inhere in things other than humans for thousands of years, but in the modern era, pressing practical and moral implications make the need for answers more urgent. As artificial intelligence (AI) grows increasingly sophisticated, it might become impossible to tell whether one is dealing with a machine or a human merely by interacting with it — the classic Turing test. But would that mean AI deserves moral consideration?

Understanding consciousness also impinges on animal rights and welfare, and on a wide range of medical and legal questions about mental impairments. A group of more than 50 leading neuroscientists, psychologists, cognitive scientists and others recently called for greater recognition of the importance of research on this difficult subject. “Theories of consciousness need to be tested rigorously and revised repeatedly amid the long process of accumulation of empirical evidence,” the authors said, adding that “myths and speculative conjectures also need to be identified as such.”

You can hardly do experiments on consciousness without having first defined it. But that’s already difficult because we use the word in several ways. Humans are conscious beings, but we can lose consciousness, for example under anesthesia. We can say we are conscious of something — a strange noise coming out of our laptop, say. But in general, the quality of consciousness refers to a capacity to experience one’s existence rather than just recording it or responding to stimuli like an automaton. Philosophers of mind often refer to this as the principle that one can meaningfully speak about what it is to be “like” a conscious being — even if we can never actually have that experience beyond ourselves.

The info is here.

Wednesday, October 10, 2018

Urban Meyer, Ohio State Football, and How Leaders Ignore Unethical Behavior

David Mayer
Harvard Business Review
Originally posted September 4, 2018

Here is an excerpt:

A sizable literature in management and psychology helps us understand how people become susceptible to moral biases and make choices that are inconsistent with their values and the values of their organizations. Reading the report with that lens can help leaders better understand the biases that get in the way of ethical conduct and ethical organizations.

Performance over principles. One number may surpass all other details in this case: 90%. That’s the percentage of games the team has won under Meyer as head coach since he joined Ohio State in 2012. Psychological research shows that in almost every area of life, being moral is weighted as more important than being competent. However, in competitive environments such as work and sports, the classic findings flip: competence is prized over character. Although the report does not mention anything about the team’s performance or the resulting financial and reputational benefits of winning, the program’s success may have crowded out concerns over the allegations against Smith and about the many other problematic behaviors he showed.

Unspoken values. Another factor that can increase the likelihood of making unethical decisions is the absence of language around values. Classic research in organizations has found that leaders tend to be reluctant to use “moral language.” For example, leaders are more likely to talk about deadlines, objectives, and effectiveness than values such as integrity, respect, and compassion. Over time, this can license unethical conduct.

The info is here.

Friday, September 14, 2018

What Are “Ethics in Design”?

Victoria Sgarro
slate.com
Originally posted August 13, 2018

Here is an excerpt:

As a product designer, I know that no mandate exists to integrate these ethical checks and balances in our process. While I may hear a lot of these issues raised at speaking events and industry meetups, more “practical” considerations can overshadow these conversations in my day-to-day decision making. When they have to compete with the workaday pressures of budgets, roadmaps, and clients, these questions won’t emerge as priorities organically.

Most important, then, is action. Castillo worries that the conversation about “ethics in design” could become a cliché, like “empathy” or “diversity” in tech, where it’s more talk than walk. She says it’s not surprising that ethics in tech hasn’t been addressed in depth in the past, given the industry’s lack of diversity. Because most tech employees come from socially privileged backgrounds, they may not be as attuned to ethical concerns. A designer who identifies with society’s dominant culture may have less personal need to take another perspective. Indeed, identification with a society’s majority is shown to be correlated with less critical awareness of the world outside of yourself. Castillo says that, as a black woman in America, she’s a bit wary of this conversation’s effectiveness if it remains only a conversation.

“You know how someone says, ‘Why’d you become a nurse or doctor?’ And they say, ‘I want to help people’?” asks Castillo. “Wouldn’t it be cool if someone says, ‘Why’d you become an engineer or a product designer?’ And you say, ‘I want to help people.’ ”

The info is here.

Sunday, June 17, 2018

Does Non-Moral Ignorance Exculpate? Situational Awareness and Attributions of Blame and Forgiveness

Kissinger-Knox, A., Aragon, P. & Mizrahi, M.
Acta Anal (2018) 33: 161. https://doi.org/10.1007/s12136-017-0339-y

Abstract

In this paper, we set out to test empirically an idea that many philosophers find intuitive, namely that non-moral ignorance can exculpate. Many philosophers find it intuitive that moral agents are responsible only if they know the particular facts surrounding their action (or inaction). Our results show that whether moral agents are aware of the facts surrounding their (in)action does have an effect on people’s attributions of blame, regardless of the consequences or side effects of the agent’s actions. In general, it was more likely that a situationally aware agent will be blamed for failing to perform the obligatory action than a situationally unaware agent. We also tested attributions of forgiveness in addition to attributions of blame. In general, it was less likely that a situationally aware agent will be forgiven for failing to perform the obligatory action than a situationally unaware agent. When the agent is situationally unaware, it is more likely that the agent will be forgiven than blamed. We argue that these results provide some empirical support for the hypothesis that there is something intuitive about the idea that non-moral ignorance can exculpate.

The article is here.

Tuesday, February 27, 2018

After long battle, mental health will be part of New York's school curriculum

Bethany Bump
Times Union
Originally published January 27, 2018

Here is an excerpt:

The idea of teaching young people about mental health is not a new one.

The mental hygiene movement of the early 1900s introduced society to the concept that mental wellness could be just as important as physical wellness.

In 1928, a nationwide group of superintendents recommended that mental hygiene be included in the teaching of health education, but it was not.

"When you talk about mental health and mental illness, people are still, because of the stigma, in the closet about it," Liebman said. "People just don't talk about it like they talk about physical illness."

Social media has strengthened the movement to de-stigmatize mental illness, he said. "People are being more candid about their mental health issues and seeking support and using social media as kind of a fulcrum for gaining support, peers and friends in their recovery," Liebman said.

Making the case

Advocates of the law want people to know they are not pushing for students or schoolteachers to become diagnosticians. They say that is best left to professionals.

Adding mental health literacy to the curriculum will provide youth with knowledge of how to prevent mental disorders, how to recognize when a disorder is developing, how and where to seek help and treatment, strategies for dealing with milder issues, and strategies for supporting others who are struggling.

The information is here.

Tuesday, December 12, 2017

Can AI Be Taught to Explain Itself?

Cliff Kuang
The New York Times Magazine
Originally published November 21, 2017

Here are two excerpts:

In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad, but it fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.

(cut)

“Artificial intelligence” is a misnomer, an airy and evocative term that can be shaded with whatever notions we might have about what “intelligence” is in the first place. Researchers today prefer the term “machine learning,” which better describes what makes such algorithms powerful. Let’s say that a computer program is deciding whether to give you a loan. It might start by comparing the loan amount with your income; then it might look at your credit history, marital status or age; then it might consider any number of other data points. After exhausting this “decision tree” of possible variables, the computer will spit out a decision. If the program were built with only a few examples to reason from, it probably wouldn’t be very accurate. But given millions of cases to consider, along with their various outcomes, a machine-learning algorithm could tweak itself — figuring out when to, say, give more weight to age and less to income — until it is able to handle a range of novel situations and reliably predict how likely each loan is to default.
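The loan example above can be sketched as a toy rule cascade. This is only an illustrative sketch: the function, field names, and thresholds below are invented, not drawn from any real lending model, and a real machine-learning system would learn such splits and weights from millions of past outcomes rather than hard-coding them.

```python
# A toy, hand-written "decision tree" for a loan decision, illustrating the
# kind of variable-by-variable rule cascade the article describes.
# All thresholds here are invented for illustration.

def approve_loan(amount, income, years_credit_history, age):
    """Return True if the toy rules approve the loan, False otherwise."""
    # First split: how large is the loan relative to income?
    if amount > 0.5 * income:
        return False
    # Next split: is there enough credit history to judge the applicant?
    if years_credit_history < 2:
        return False
    # Final split: a crude age-based rule. A trained model would instead
    # learn how much weight age deserves relative to income from data.
    return age >= 21

print(approve_loan(amount=10_000, income=40_000, years_credit_history=5, age=30))  # True
print(approve_loan(amount=30_000, income=40_000, years_credit_history=5, age=30))  # False
```

The point of the sketch is the structure, not the rules: "tweaking itself" means the algorithm adjusts which questions are asked, in what order, and with what cutoffs, until its approvals track real-world repayment as closely as possible.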

The article is here.

Wednesday, October 4, 2017

Better Minds, Better Morals: A Procedural Guide to Better Judgment

G. Owen Schaefer and Julian Savulescu
Journal of Posthuman Studies
Vol. 1, No. 1 (2017), pp. 26-43

Abstract:

Making more moral decisions – an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we’re employing are sound and reliable. We identify four broad categories of judgment to be targeted – cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A copy of the article is here.

Friday, March 3, 2017

Doctors suffer from the same cognitive distortions as the rest of us

Michael Lewis
Nautilus
Originally posted February 9, 2017

Here are two excerpts:

What struck Redelmeier wasn’t the idea that people made mistakes. Of course people made mistakes! What was so compelling is that the mistakes were predictable and systematic. They seemed ingrained in human nature. One passage in particular stuck with him—about the role of the imagination in human error. “The risk involved in an adventurous expedition, for example, is evaluated by imagining contingencies with which the expedition is not equipped to cope,” the authors wrote. “If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, the risk involved in an undertaking may be grossly underestimated if some possible dangers are either difficult to conceive of, or simply do not come to mind.” This wasn’t just about how many words in the English language started with the letter K. This was about life and death.

(cut)

Toward the end of their article in Science, Daniel Kahneman and Amos Tversky had pointed out that, while statistically sophisticated people might avoid the simple mistakes made by less savvy people, even the most sophisticated minds were prone to error. As they put it, “their intuitive judgments are liable to similar fallacies in more intricate and less transparent problems.” That, the young Redelmeier realized, was a “fantastic rationale why brilliant physicians were not immune to these fallibilities.” Error wasn’t necessarily shameful; it was merely human. “They provided a language and a logic for articulating some of the pitfalls people encounter when they think. Now these mistakes could be communicated. It was the recognition of human error. Not its denial. Not its demonization. Just the understanding that they are part of human nature.”

The article is here.

Wednesday, November 2, 2016

A Day in the Life of the Brain by Susan Greenfield: Consciousness

Steven Rose
The Guardian
Originally posted October 12, 2016

Here is an excerpt:

Neuroscientists are rarely trained in philosophy, but a little modesty might not go amiss. Some committed reductionists among them maintain that consciousness is merely a “user illusion” – that you may think you are making conscious decisions but in “reality” all the hard work is being done by the interactions of nerve cells within the brain. Most, however, are haunted by what their philosophical sympathisers call the “hard problem” of the relationship between objective measures – say of light of a particular wavelength – and qualia, the subjective experience of seeing red.

Within their restricted definition there are two potentially productive questions that neuroscientists can ask about consciousness: first, how and when did it emerge along the evolutionary path that led to humans? And second, what and where in the brain are the structures and processes that enable conscious experience? The evolutionary question has been discussed extensively by the neurologist Antonio Damasio, who has mapped the transitions from reflex responses to external stimuli in primitive animals, through awareness, to fully developed self-consciousness, onto the emergence of increasingly complex, large brains.

Greenfield is concerned with the second question, the identification of the neural correlates of consciousness.

The article is here.

Tuesday, October 4, 2016

Whatever you think, you don’t necessarily know your own mind

Keith Frankish
aeon.co
Originally published May 27, 2016

Do you think racial stereotypes are false? Are you sure? I’m not asking if you’re sure whether or not the stereotypes are false, but if you’re sure whether or not you think that they are. That might seem like a strange question. We all know what we think, don’t we?

Most philosophers of mind would agree, holding that we have privileged access to our own thoughts, which is largely immune from error. Some argue that we have a faculty of ‘inner sense’, which monitors the mind just as the outer senses monitor the world. There have been exceptions, however. The mid-20th-century behaviourist philosopher Gilbert Ryle held that we learn about our own minds, not by inner sense, but by observing our own behaviour, and that friends might know our minds better than we do. (Hence the joke: two behaviourists have just had sex and one turns to the other and says: ‘That was great for you, darling. How was it for me?’) And the contemporary philosopher Peter Carruthers proposes a similar view (though for different reasons), arguing that our beliefs about our own thoughts and decisions are the product of self-interpretation and are often mistaken.

Wednesday, September 9, 2015

How can healthcare professionals better manage their unconscious racial bias?

By April Dembosky
MedCity News
Originally published August 21, 2015

Here is an excerpt:

Racial Disparity In Medical Treatment Persists

Even as the health of Americans has improved, the disparities in treatment and outcomes between white patients and black and Latino patients are almost as big as they were 50 years ago.

A growing body of research suggests that doctors’ unconscious behavior plays a role in these statistics, and the Institute of Medicine of the National Academy of Sciences has called for more studies looking at discrimination and prejudice in health care.

For example, several studies show that African-American patients are often prescribed less pain medication than white patients with the same complaints. Black patients with chest pain are referred for advanced cardiac care less often than white patients with identical symptoms.

Doctors, nurses and other health workers don’t mean to treat people differently, says Howard Ross, founder of management consulting firm Cook Ross, who has worked with many groups on diversity issues. But all these professionals harbor stereotypes that they’re not aware they have, he says. Everybody does.

The entire article is here.

Monday, July 20, 2015

Can you teach people to have empathy?

By Roman Krznaric
BBC News
Originally posted June 30, 2015

Empathy is a quality that is integral to most people's lives - and yet the modern world makes it easy to lose sight of the feelings of others. But almost everyone can learn to develop this crucial personality trait, says Roman Krznaric.

Open Harper Lee's classic novel To Kill A Mockingbird and one line will jump out at you: "You never really understand another person until you consider things from his point of view - until you climb inside of his skin and walk around in it."

Human beings are naturally primed to embrace this message. According to the latest neuroscience research, 98% of people (the exceptions include those with psychopathic tendencies) have the ability to empathise wired into their brains - an in-built capacity for stepping into the shoes of others and understanding their feelings and perspectives.

The problem is that most don't tap into their full empathic potential in everyday life.

The entire article is here.


Monday, July 13, 2015

Chimpanzees can tell right from wrong

By Richard Gray
Daily Mail Online
Originally published June 26, 2015

They are our closest relatives in the animal kingdom, capable of using tools and solving problems much like their human cousins, but it appears chimpanzees share our sense of morality too.

A new study of the apes reacting to an infant chimp being killed by another group has shown the animals have a strong sense of right and wrong.

The researchers found chimpanzees reacted to videos showing the violent scenes in a similar way to humans.

The entire article is here.

Saturday, November 1, 2014

Are We Really Conscious?

By Michael Graziano
The New York Times Sunday Review
Originally published October 10, 2014

Here is an excerpt:

The brain builds models (or complex bundles of information) about items in the world, and those models are often not accurate. From that realization, a new perspective on consciousness has emerged in the work of philosophers like Patricia S. Churchland and Daniel C. Dennett. Here’s my way of putting it:

How does the brain go beyond processing information to become subjectively aware of information? The answer is: It doesn’t. The brain has arrived at a conclusion that is not correct. When we introspect and seem to find that ghostly thing — awareness, consciousness, the way green looks or pain feels — our cognitive machinery is accessing internal models and those models are providing information that is wrong. The machinery is computing an elaborate story about a magical-seeming property. And there is no way for the brain to determine through introspection that the story is wrong, because introspection always accesses the same incorrect information.

The entire article is here.

Saturday, March 15, 2014

The moral pop-out effect: Enhanced perceptual awareness of morally relevant stimuli

Gantman, A. P., & Van Bavel, J. J. (in press). The moral pop-out effect: Enhanced perceptual awareness of morally relevant stimuli. Cognition.

Abstract 

Every day people perceive religious and moral iconography in ambiguous objects, ranging from grilled cheese to bird feces. In the current research, we examined whether moral concerns can shape awareness of perceptually ambiguous stimuli. In three experiments, we presented masked moral and non-moral words around the threshold for conscious awareness as part of a lexical decision task. Participants correctly identified moral words more frequently than non-moral words — a phenomenon we term the moral pop-out effect. The moral pop-out effect was only evident when stimuli were presented at durations that made them perceptually ambiguous, but not when the stimuli were presented too quickly to perceive or slowly enough to easily perceive. The moral pop-out effect was not moderated by exposure to harm and cannot be explained by differences in arousal, valence, or extremity. Although most models of moral psychology assume the initial perception of moral stimuli, our research suggests that moral beliefs and values may shape perceptual awareness.

The entire article is here.