Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Thinking.

Tuesday, March 14, 2023

What Happens When AI Has Read Everything?

Ross Andersen
The Atlantic
Originally posted January 18, 2023

Here is an excerpt:

Ten trillion words is enough to encompass all of humanity’s digitized books, all of our digitized scientific papers, and much of the blogosphere. That’s not to say that GPT-4 will have read all of that material, only that doing so is well within its technical reach. You could imagine its AI successors absorbing our entire deep-time textual record across their first few months, and then topping up with a two-hour reading vacation each January, during which they could mainline every book and scientific paper published the previous year.

Just because AIs will soon be able to read all of our books doesn’t mean they can catch up on all of the text we produce. The internet’s storage capacity is of an entirely different order, and it’s a much more democratic cultural-preservation technology than book publishing. Every year, billions of people write sentences that are stockpiled in its databases, many owned by social-media platforms.

Random text scraped from the internet generally doesn’t make for good training data, with Wikipedia articles being a notable exception. But perhaps future algorithms will allow AIs to wring sense from our aggregated tweets, Instagram captions, and Facebook statuses. Even so, these low-quality sources won’t be inexhaustible. According to Villalobos, within a few decades, speed-reading AIs will be powerful enough to ingest hundreds of trillions of words—including all those that human beings have so far stuffed into the web.

And the conclusion:

If, however, our data-gorging AIs do someday surpass human cognition, we will have to console ourselves with the fact that they are made in our image. AIs are not aliens. They are not the exotic other. They are of us, and they are from here. They have gazed upon the Earth’s landscapes. They have seen the sun setting on its oceans billions of times. They know our oldest stories. They use our names for the stars. Among the first words they learn are flow, mother, fire, and ash.

Thursday, December 22, 2022

In the corner of an Australian lab, a brain in a dish is playing a video game - and it’s getting better

Liam Mannix
Sydney Morning Herald
Originally posted November 13, 2022

Here is an excerpt:

Artificial intelligence controls an ever-increasing slice of our lives. Smart voice assistants hang on our every word. Our phones leverage machine learning to recognise our face. Our social media lives are controlled by algorithms that surface content to keep us hooked.

These advances are powered by a new generation of AIs built to resemble human brains. But none of these AIs are really intelligent, not in the human sense of the word. They can see the superficial pattern without understanding the underlying concept. Siri can read you the weather but she does not really understand that it’s raining. AIs are good at learning by rote, but struggle to extrapolate: even teenage humans need only a few sessions behind the wheel before they can drive, while Google’s self-driving car still isn’t ready after 32 billion kilometres of practice.

A true ‘general artificial intelligence’ remains out of reach - and, some scientists think, impossible.

Is this evidence human brains can do something special computers never will be able to? If so, the DishBrain opens a new path forward. “The only proof we have of a general intelligence system is done with biological neurons,” says Kagan. “Why would we try to mimic what we could harness?”

He imagines a future part-silicon-part-neuron supercomputer, able to combine the raw processing power of silicon with the built-in learning ability of the human brain.

Others are more sceptical. Human intelligence isn’t special, they argue. Thoughts are just electro-chemical reactions spreading across the brain. Ultimately, everything is physics - we just need to work out the maths.

“If I’m building a jet plane, I don’t need to mimic a bird. It’s really about getting to the mathematical foundations of what’s going on,” says Professor Simon Lucey, director of the Australian Institute for Machine Learning.

Why start the DishBrains on Pong? I ask. Because it’s a game with simple rules that make it ideal for training AI. And, grins Kagan, it was one of the first video games ever coded. A nod to the team’s geek passions - which run through the entire project.

“There’s a whole bunch of sci-fi history behind it. The Matrix is an inspiration,” says Chong. “Not that we’re trying to create a Matrix,” he adds quickly. “What are we but just a gooey soup of neurons in our heads, right?”

Maybe. But the Matrix wasn’t meant as inspiration: it’s a cautionary tale. The humans wired into it existed in a simulated reality while machines stole their bioelectricity. They were slaves.

Is it ethical to build a thinking computer and then restrict its reality to a task to be completed? Even if it is a fun task like Pong?

“The real life correlate of that is people have already created slaves that adore them: they are called dogs,” says Oxford University’s Julian Savulescu.

Thousands of years of selective breeding has turned a wild wolf into an animal that enjoys rounding up sheep, that loves its human master unconditionally.

Sunday, October 23, 2022

Advancing theorizing about fast-and-slow thinking

De Neys, W. (2022). 
Behavioral and Brain Sciences, 1-68. 
doi:10.1017/S0140525X2200142X

Abstract

Human reasoning is often conceived as an interplay between a more intuitive and a more deliberate thought process. In the last 50 years, influential fast-and-slow dual process models that capitalize on this distinction have been used to account for numerous phenomena—from logical reasoning biases and prosocial behavior to moral decision-making. The present paper clarifies that despite their popularity, critical assumptions are poorly conceived. My critique focuses on two interconnected foundational issues: the exclusivity and switch feature. The exclusivity feature refers to the tendency to conceive intuition and deliberation as generating unique responses such that one type of response is assumed to be beyond the capability of the fast-intuitive processing mode. I review the empirical evidence in key fields and show that there is no solid ground for such exclusivity. The switch feature concerns the mechanism by which a reasoner can decide to shift between more intuitive and deliberate processing. I present an overview of leading switch accounts and show that they are conceptually problematic—precisely because they presuppose exclusivity. I build on these insights to sketch the groundwork for a more viable dual process architecture and illustrate how it can set a new research agenda to advance the field in the coming years.

Conclusion

In the last 50 years dual process models of thinking have moved to the center stage in research on human reasoning. These models have been instrumental for the initial exploration of human thinking in the cognitive sciences and related fields (Chater, 2018; De Neys, 2021). However, it is time to rethink foundational assumptions. Traditional dual process models have typically conceived intuition and deliberation as generating unique responses such that one type of response is exclusively tied to deliberation and is assumed to be beyond the reach of the intuitive system. I reviewed empirical evidence from key dual process applications that argued against this exclusivity feature. I also showed how exclusivity leads to conceptual complications when trying to explain how a reasoner switches between intuitive and deliberate reasoning. To avoid these complications, I sketched an elementary non-exclusive working model in which it is the activation strength of competing intuitions within System 1 that determines System 2 engagement. 

It will be clear that the working model is a starting point that will need to be further developed and specified. However, by avoiding the conceptual paradoxes that plague the traditional model, it presents a more viable basic architecture that can serve as theoretical groundwork to build future dual process models in various fields. In addition, it should at the very least force dual process theorists to specify more explicitly how they address the switch issue. In the absence of such specification, dual process models might continue to provide an appealing narrative but will do little to advance our understanding of the interaction between intuitive and deliberate—fast and slow—thinking. It is in this sense that I hope that the present paper can help to sketch the building blocks of a more judicious dual process future.
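
As a concrete illustration, here is a minimal toy sketch in Python of the kind of non-exclusive architecture described above: several intuitions compete within System 1, and System 2 is engaged only when their activation strengths are close. It is not a formalization from the paper; the function names, cue values, and threshold are invented for illustration.

```python
# Toy sketch (not De Neys's own formalization) of a non-exclusive dual process
# architecture: competing intuitions within System 1, with the gap in their
# activation strengths determining whether System 2 deliberation is engaged.

def system1(problem):
    """Return hypothetical activation strengths for two competing intuitions."""
    return {
        "logical_intuition": problem.get("logical_cue", 0.0),
        "heuristic_intuition": problem.get("heuristic_cue", 0.0),
    }

def should_deliberate(activations, threshold=0.2):
    """Engage System 2 when the two strongest intuitions are close in strength."""
    strengths = sorted(activations.values(), reverse=True)
    return (strengths[0] - strengths[1]) < threshold

def reason(problem):
    activations = system1(problem)
    if should_deliberate(activations):
        return "System 2 engaged: deliberate between competing intuitions"
    winner = max(activations, key=activations.get)
    return f"System 1 answer from {winner}"

# One clear-cut problem, and one where the intuitions are close enough in
# strength to trigger deliberation.
print(reason({"heuristic_cue": 0.9, "logical_cue": 0.3}))
print(reason({"heuristic_cue": 0.6, "logical_cue": 0.55}))
```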

Monday, December 31, 2018

How free is our will?

Kevin Mitchell
Wiring The Brain Blog
Originally posted November 25, 2018

Here is an excerpt:

Being free – to my mind at least – doesn’t mean making decisions for no reasons, it means making them for your reasons. Indeed, I would argue that this is exactly what is required to allow any kind of continuity of the self. If you were just doing things on a whim all the time, what would it mean to be you? We accrue our habits and beliefs and intentions and goals over our lifetime, and they collectively affect how actions are suggested and evaluated.

Whether we are conscious of that is another question. Most of our reasons for doing things are tacit and implicit – they’ve been wired into our nervous systems without our even being aware of them. But they’re still part of us – you could argue they’re precisely what makes us us. Even if most of that decision-making happens subconsciously, it’s still you doing it.

Ultimately, whether you think you have free will or not may depend less on the definition of “free will” and more on the definition of “you”. If you identify just as the president – the decider-in-chief – then maybe you’ll be dismayed at how little control you seem to have or how rarely you really exercise it. (Not never, but maybe less often than your ego might like to think).

But that brings us back to a very dualist position, identifying you with only your conscious mind, as if it can somehow be separated from all the underlying workings of your brain. Perhaps it’s more appropriate to think that you really comprise all of the machinery of government, even the bits that the president never sees or is not even aware exist.

The info is here.

Monday, December 24, 2018

Your Intuition Is Wrong, Unless These 3 Conditions Are Met

Emily Zulz
www.thinkadvisor.com
Originally posted November 16, 2018

Here is an excerpt:

“Intuitions of master chess players when they look at the board [and make a move], they’re accurate,” he said. “Everybody who’s been married could guess their wife’s or their husband’s mood by one word on the telephone. That’s an intuition and it’s generally very good, and very accurate.”

According to Kahneman, who’s studied when one can trust intuition and when one cannot, there are three conditions that need to be met in order to trust one’s intuition.

The first is that there has to be some regularity in the world that someone can pick up and learn.

“So, chess players certainly have it. Married people certainly have it,” Kahneman explained.

However, he added, people who pick stocks in the stock market do not have it.

“Because, the stock market is not sufficiently regular to support developing that kind of expert intuition,” he explained.

The second condition for accurate intuition is “a lot of practice,” according to Kahneman.

And the third condition is immediate feedback. Kahneman said that “you have to know almost immediately whether you got it right or got it wrong.”

The info is here.

Friday, December 14, 2018

Don’t Want to Fall for Fake News? Don’t Be Lazy

Robbie Gonzalez
www.wired.com
Originally posted November 9, 2018

Here are two excerpts:

Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

(cut)

All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it.

The info is here.

Wednesday, September 19, 2018

Many Cultures, One Psychology?

Nicolas Geeraert
American Scientist
Originally published in the July-August issue

Here is an excerpt:

The Self

If you were asked to describe yourself, what would you say? Would you list your personal characteristics, such as being intelligent or funny, or would you use preferences, such as “I love pizza”? Or perhaps you would instead mention social relationships, such as “I am a parent”? Social psychologists have long maintained that people are much more likely to describe themselves and others in terms of stable personal characteristics than they are to describe themselves in terms of their preferences or relationships.

However, the way people describe themselves seems to be culturally bound. In a landmark 1991 paper, social psychologists Hazel R. Markus and Shinobu Kitayama put forward the idea that self-construal is culturally variant, noting that individuals in some cultures understand the self as independent, whereas those in other cultures perceive it as interdependent.

People with an independent self-construal view themselves as free, autonomous, and unique individuals, possessing stable boundaries and a set of fixed characteristics or attributes by which their actions are guided. Independent self-construal is more prevalent in Europe and North America. By contrast, people with an interdependent self-construal see themselves as more connected with others close to them, such as their family or community, and think of themselves as a part of different social relationships.

The information is here.

Friday, April 20, 2018

Making a Thinking Machine

Leah Winerman
The Monitor on Psychology - April 2018

Here is an excerpt:

A 'Top Down' Approach

Now, psychologists and AI researchers are looking to insights from cognitive and developmental psychology to address these limitations and to capture aspects of human thinking that deep neural networks can’t yet simulate, such as curiosity and creativity.

This more “top-down” approach to AI relies less on identifying patterns in data, and instead on figuring out mathematical ways to describe the rules that govern human cognition. Researchers can then write those rules into the learning algorithms that power the AI system. One promising avenue for this method is called Bayesian modeling, which uses probability to model how people reason and learn about the world. Brenden Lake, PhD, a psychologist and AI researcher at New York University, and his colleagues, for example, have developed a Bayesian AI system that can accomplish a form of one-shot learning. Humans, even children, are very good at this—a child only has to see a pineapple once or twice to understand what the fruit is, pick it out of a basket and maybe draw an example.

Likewise, adults can learn a new character in an unfamiliar language almost immediately.
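
As a rough illustration of the Bayesian flavor of this approach, here is a minimal sketch of one-shot category learning under conjugate Gaussian assumptions. It is not Lake's actual model (which learns handwritten characters as probabilistic programs); the prior, noise values, feature axis, and category names are all invented.

```python
import numpy as np

# Minimal sketch of one-shot learning in a Bayesian spirit: a strong prior lets
# a single example define a usable category. All numbers and names are made up.

PRIOR_MEAN = 0.0   # prior belief about where category means lie
PRIOR_VAR = 4.0    # prior spread of category means
NOISE_VAR = 1.0    # assumed within-category variability

def posterior_over_mean(example):
    """Posterior over a category's mean after one example (conjugate Gaussian)."""
    precision = 1.0 / PRIOR_VAR + 1.0 / NOISE_VAR
    post_var = 1.0 / precision
    post_mean = post_var * (PRIOR_MEAN / PRIOR_VAR + example / NOISE_VAR)
    return post_mean, post_var

def predictive_log_density(x, example):
    """Log density of a new item under the category learned from one example."""
    m, v = posterior_over_mean(example)
    var = v + NOISE_VAR  # posterior predictive variance
    return -0.5 * (np.log(2 * np.pi * var) + (x - m) ** 2 / var)

# One example each of two hypothetical categories on a single made-up feature
# axis, then classify a new item by posterior predictive density.
pineapple_example, melon_example = 2.5, -1.0
new_item = 2.0
scores = {
    "pineapple": predictive_log_density(new_item, pineapple_example),
    "melon": predictive_log_density(new_item, melon_example),
}
print(max(scores, key=scores.get))  # -> "pineapple"
```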

The article is here.

Thursday, April 19, 2018

Common Sense for A.I. Is a Great Idea

Carissa Veliz
www.slate.com
Originally posted March 19, 2018

At the moment, artificial intelligence systems may have perfect memories and be better at arithmetic than we are, but they are clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among some of the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to make one miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) A.I. not understanding what a shopping list is, and the kinds of items that are appropriate to such lists, is evidence of a much broader problem: They lack common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is to get A.I. to “understand what’s harmful to people.”

The information is here.

Sunday, April 1, 2018

Sudden-Death Aversion: Avoiding Superior Options Because They Feel Riskier

Jesse Walker, Jane L. Risen, Thomas Gilovich, and Richard Thaler
Journal of Personality and Social Psychology, in press

Abstract

We present evidence of Sudden-Death Aversion (SDA) – the tendency to avoid “fast” strategies that provide a greater chance of success, but include the possibility of immediate defeat, in favor of “slow” strategies that reduce the possibility of losing quickly, but have lower odds of ultimate success. Using a combination of archival analyses and controlled experiments, we explore the psychology behind SDA. First, we provide evidence for SDA and its cost to decision makers by tabulating how often NFL teams send games into overtime by kicking an extra point rather than going for the 2-point conversion (Study 1) and how often NBA teams attempt potentially game-tying 2-point shots rather than potentially game-winning 3-pointers (Study 2). To confirm that SDA is not limited to sports, we demonstrate SDA in a military scenario (Study 3). We then explore two mechanisms that contribute to SDA: myopic loss aversion and concerns about “tempting fate.” Studies 4 and 5 show that SDA is due, in part, to myopic loss aversion, such that decision makers narrow the decision frame, paying attention to the prospect of immediate loss with the “fast” strategy, but not the downstream consequences of the “slow” strategy. Study 6 finds people are more pessimistic about a risky strategy that needn’t be pursued (opting for sudden death) than the same strategy that must be pursued. We end by discussing how these twin mechanisms lead to differential expectations of blame from the self and others, and how SDA influences decisions in several different walks of life.
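
The football case can be made concrete with a back-of-the-envelope calculation. The probabilities below are illustrative assumptions rather than figures from the paper, but they show how a "fast" strategy can carry the higher win probability while still exposing a team to immediate defeat.

```python
# Illustrative sketch of the expected-value logic behind sudden-death aversion.
# Scenario: a team scores a touchdown while trailing by 7 as time expires.
# All probabilities below are assumptions for illustration only.

P_TWO_POINT = 0.48    # assumed 2-point conversion success rate ("fast": win or lose now)
P_EXTRA_POINT = 0.94  # assumed extra-point success rate ("slow": aim for overtime)
P_WIN_OVERTIME = 0.50

win_fast = P_TWO_POINT
win_slow = P_EXTRA_POINT * P_WIN_OVERTIME

print(f"Fast (go for 2):      win probability = {win_fast:.2f}")
print(f"Slow (kick, then OT): win probability = {win_slow:.2f}")
# Under these assumptions the fast strategy wins slightly more often (0.48 vs 0.47),
# yet it allows immediate defeat -- exactly the option SDA predicts teams will avoid.
```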

The research is here.

Monday, March 26, 2018

Non cogito, ergo sum

Ian Leslie
The Economist
Originally published May/June 2012

Here is an excerpt:

Researchers from Columbia Business School, New York, conducted an experiment in which people were asked to predict outcomes across a range of fields, from politics to the weather to the winner of “American Idol”. They found that those who placed high trust in their feelings made better predictions than those who didn’t. The result only applied, however, when the participants had some prior knowledge.

This last point is vital. Unthinking is not the same as ignorance; you can’t unthink if you haven’t already thought. Djokovic was able to pull off his wonder shot because he had played a thousand variations on it in previous matches and practice; Dylan’s lyrical outpourings drew on his immersion in folk songs, French poetry and American legends. The unconscious minds of great artists and sportsmen are like dense rainforests, which send up spores of inspiration.

The higher the stakes, the more overthinking is a problem. Ed Smith, a cricketer and author of “Luck”, uses the analogy of walking along a kerbstone: easy enough, but what if there was a hundred-foot drop to the street—every step would be a trial. In high-performance fields it’s the older and more successful performers who are most prone to choke, because expectation is piled upon them. An opera singer launching into an aria at La Scala cannot afford to think how her technique might be improved. When Federer plays a match point these days, he may feel as if he’s standing on the cliff edge of his reputation.

The article is here.

Monday, March 5, 2018

Donald Trump and the rise of tribal epistemology

David Roberts
Vox.com
Originally posted May 19, 2017 and still extremely important

Here is an excerpt:

Over time, this leads to what you might call tribal epistemology: Information is evaluated based not on conformity to common standards of evidence or correspondence to a common understanding of the world, but on whether it supports the tribe’s values and goals and is vouchsafed by tribal leaders. “Good for our side” and “true” begin to blur into one.

Now tribal epistemology has found its way to the White House.

Donald Trump and his team represent an assault on almost every American institution — they make no secret of their desire to “deconstruct the administrative state” — but their hostility toward the media is unique in its intensity.

It is Trump’s obsession and favorite target. He sees himself as waging a “running war” on the mainstream press, which his consigliere Steve Bannon calls “the opposition party.”

The article is here.

Thursday, January 18, 2018

Humans 2.0: meet the entrepreneur who wants to put a chip in your brain

Zofia Niemtus
The Guardian
Originally posted December 14, 2017

Here are two excerpts:

The shape that this technology will take is still unknown. Johnson uses the term “brain chip”, but the developments taking place in neuroprosthetics are working towards less invasive procedures than opening up your skull and cramming a bit of hardware in; injectable sensors are one possibility.

It may sound far-fetched, but Johnson has a track record of getting things done. Within his first semester at university, he’d set up a profitable business selling mobile phones to fellow students. By age 30, he’d founded online payment company Braintree, which he sold six years later to PayPal for $800m. He used $100m of the proceeds to create Kernel in 2016 – it now employs more than 30 people.

(cut)

“And yet, the brain is everything we are, everything we do, and everything we aspire to be. It seemed obvious to me that the brain is both the most consequential variable in the world and also our biggest blind spot as a species. I decided that if the root problems of humanity begin in the human mind, let’s change our minds.”

The article is here.

Monday, January 1, 2018

What I Was Wrong About This Year

David Leonhardt
The New York Times
Originally posted December 24, 2017

Here is an excerpt:

But I’ve come to realize that I was wrong about a major aspect of probabilities.

They are inherently hard to grasp. That’s especially true for an individual event, like a war or election. People understand that if they roll a die 100 times, they will get some 1’s. But when they see a probability for one event, they tend to think: Is this going to happen or not?

They then effectively round to 0 or to 100 percent. That’s what the Israeli official did. It’s also what many Americans did when they heard Hillary Clinton had a 72 percent or 85 percent chance of winning. It’s what football fans did in the Super Bowl when the Atlanta Falcons had a 99 percent chance of victory.

And when the unlikely happens, people scream: The probabilities were wrong!

Usually, they were not wrong. The screamers were wrong.
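
A quick simulation makes the point vivid: when a forecast says 72 percent, the other outcome should still occur more than a quarter of the time. The probability, seed, and trial count below are illustrative.

```python
import random

# Sketch: a 72 percent forecast is not "wrong" when the unlikely outcome occurs;
# over many comparable events the 28 percent side shows up routinely.
random.seed(0)
p_favorite = 0.72
trials = 10_000
upsets = sum(random.random() > p_favorite for _ in range(trials))
print(f"Upset rate over {trials} simulated events: {upsets / trials:.2%}")
# Prints something close to 28% -- the "unlikely" outcome is entirely routine.
```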

The article is here.

Thursday, December 14, 2017

Baltimore Cops Studying Plato and James Baldwin

David Dagan
The Atlantic
Originally posted November 25, 2017

Here is an excerpt:

Gillespie is trained to teach nuts-and-bolts courses on terrorism response, extremism, and gangs. But since the unrest of 2015, humanities have occupied the bulk of his time. The strategy is unusual in police training. “I’ve been doing this a long time and I’ve never heard of an instructor using this type of approach,” said William Terrill, a criminal-justice professor at Arizona State University who studies police culture.

But he nevertheless understands the general theory behind it. He’s authored studies showing that officers with higher education are less likely to use force than colleagues who have not been to college. The reasons why are unclear, Terrill said, but it’s possible that exposure to unfamiliar ideas and diverse people has an effect on officer behavior. Gillespie’s classes seem to offer a complement to the typical instruction. Most of it “is mechanical in nature,” Terrill said. “It’s kind of this step-by-step, instructional booklet.”

Officers learn how to properly approach a car, say, but they are rarely given tools to imagine the circumstances of the person in the driver’s seat.

The article is here.

Tuesday, December 12, 2017

Can AI Be Taught to Explain Itself?

Cliff Kuang
The New York Times Magazine
Originally published November 21, 2017

Here are two excerpts:

In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad, but it fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.

(cut)

“Artificial intelligence” is a misnomer, an airy and evocative term that can be shaded with whatever notions we might have about what “intelligence” is in the first place. Researchers today prefer the term “machine learning,” which better describes what makes such algorithms powerful. Let’s say that a computer program is deciding whether to give you a loan. It might start by comparing the loan amount with your income; then it might look at your credit history, marital status or age; then it might consider any number of other data points. After exhausting this “decision tree” of possible variables, the computer will spit out a decision. If the program were built with only a few examples to reason from, it probably wouldn’t be very accurate. But given millions of cases to consider, along with their various outcomes, a machine-learning algorithm could tweak itself — figuring out when to, say, give more weight to age and less to income — until it is able to handle a range of novel situations and reliably predict how likely each loan is to default.
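
To make the loan example concrete, here is a minimal sketch of a decision-tree classifier trained on synthetic data. The features, the rule generating the labels, and all numbers are invented; the point is only that such a model is small and inspectable, which is part of what makes explanation easier for simple models than for the large systems the article is concerned with.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Sketch of the loan "decision tree" described above, trained on invented data.
rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50_000, 15_000, n)   # annual income
age = rng.integers(21, 70, n)            # applicant age
credit_score = rng.normal(650, 80, n)    # crude credit-history proxy
# Invented labeling rule: low income combined with a low score means default.
default = ((income < 40_000) & (credit_score < 620)).astype(int)

X = np.column_stack([income, age, credit_score])
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, default)

# The learned rules can be printed and read, unlike a deep neural network.
print(export_text(clf, feature_names=["income", "age", "credit_score"]))
print(clf.predict([[35_000, 30, 600]]))  # likely flagged as a default risk
```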

The article is here.

Sunday, July 30, 2017

Should we be afraid of AI?

Luciano Floridi
aeon
Originally published

Here is an excerpt:

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic, on the one hand, and models of computation, on the other, are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies – also thanks to the enormous amount of available data and some very sophisticated programming – are increasingly able to deal with more tasks better than we do, including predicting our behaviours. So we are not the only agents able to perform tasks successfully.
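
The classic example of such an undecidable problem is the halting problem. The sketch below renders the standard diagonal argument as hypothetical code: if a perfect halts checker could be written, the paradox function would contradict it, so no such checker can exist.

```python
# Hypothetical sketch of the halting-problem argument; `halts` cannot actually
# be implemented, which is precisely the point.

def halts(f, x):
    """Supposed perfect oracle: returns True iff f(x) would halt."""
    raise NotImplementedError("no such algorithm can exist")

def paradox(f):
    if halts(f, f):   # if f(f) would halt...
        while True:   # ...then loop forever
            pass
    return "halted"   # otherwise halt immediately

# Asking whether paradox(paradox) halts yields a contradiction either way,
# so no algorithm can decide halting for all programs and inputs.
```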

Tuesday, December 13, 2016

Consciousness: The Underlying Problem

Conscious Entities
November 24, 2016

What is the problem about consciousness? A Royal Institution video with interesting presentations (part 2 another time).

Anil Seth presents a striking illusion and gives an optimistic view of the ability of science to tackle the problem; or maybe we just get on with the science anyway? The philosophers may ask good questions, but their answers have always been wrong.

Barry Smith says that’s because when the philosophers have sorted a subject out it moves over into science. One problem is that we tend to miss thinking about consciousness and think about its contents. Isn’t there a problem: to be aware of your own awareness changes it? I feel pain in my body, but could consciousness be in my ankle?

Chris Frith points out that actually only a small part of our mental activity has anything to do with consciousness, and in fact there is evidence to show that many of the things we think are controlled by conscious thought really are not: a vindication of Helmholtz’s idea of unconscious inference. Thinking about your thinking messes things up?

The video is here.

Friday, November 18, 2016

Bayesian Brains without Probabilities

Adam N. Sanborn & Nick Chater
Trends in Cognitive Sciences
Published Online: October 26, 2016

Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning, to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy.
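
A toy simulation, in the spirit of the sampling idea though not the authors' actual model, shows how probability estimates built from only a handful of samples can violate probability theory, here producing a conjunction-fallacy-like error that fades as the number of samples grows. The probabilities, seed, and sample sizes are illustrative.

```python
import random

# Toy "mental sampler": probabilities are estimated by counting successes in a
# small number of samples; with few samples the conjunction P(A and B) can be
# judged higher than the marginal P(A), which probability theory forbids.
random.seed(1)
P_A, P_B = 0.4, 0.3   # illustrative true probabilities of independent events

def estimate(p_true, n_samples):
    """Estimate a probability from n mental samples."""
    return sum(random.random() < p_true for _ in range(n_samples)) / n_samples

def conjunction_error_rate(n_samples, trials=10_000):
    """How often the estimated conjunction exceeds the estimated marginal."""
    errors = 0
    for _ in range(trials):
        est_a = estimate(P_A, n_samples)
        est_ab = estimate(P_A * P_B, n_samples)  # true conjunction = 0.12
        errors += est_ab > est_a
    return errors / trials

for n in (3, 10, 100):
    print(f"{n:>3} samples: conjunction 'fallacy' rate = {conjunction_error_rate(n):.1%}")
# With 3 samples the violation is common; with 100 it nearly vanishes --
# a sampler only respects the laws of probability in the large-sample limit.
```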

The article is here.

Monday, July 18, 2016

Cooperation, Fast and Slow: Meta-Analytic Evidence for a Theory of Social Heuristics and Self-Interested Deliberation

David G. Rand
Psychological Science, in press.

Abstract

Does cooperating require the inhibition of selfish urges? Or does “rational” self-interest constrain cooperative impulses? I investigated the role of intuition and deliberation in cooperation by meta-analyzing 67 studies in which cognitive-processing manipulations were applied to economic cooperation games (total N = 17,647; no indication of publication bias using Egger’s test, Begg’s test, or p-curve). My meta-analysis was guided by the Social Heuristics Hypothesis, which proposes that intuition favors behavior that typically maximizes payoffs, whereas deliberation favors behavior that maximizes one’s payoff in the current situation. Therefore, this theory predicts that deliberation will undermine pure cooperation (i.e., cooperation in settings where there are few future consequences for one’s actions, such that cooperating is never in one’s self-interest) but not strategic cooperation (i.e., cooperation in settings where cooperating can maximize one’s payoff). As predicted, the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted relative to deliberation, but no significant difference in strategic cooperation between intuitive and deliberative conditions.

The article is here.