Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Sunday, January 7, 2024

The power of social influence: A replication and extension of the Asch experiment

Franzen A, Mader S (2023)
PLoS ONE 18(11): e0294325.

Abstract

In this paper, we pursue four goals: First, we replicate the original Asch experiment with five confederates and one naïve subject in each group (N = 210). Second, in a randomized trial we incentivize the decisions in the line experiment and demonstrate that monetary incentives lower the error rate, but that social influence is still at work. Third, we confront subjects with different political statements and show that the power of social influence can be generalized to matters of political opinion. Finally, we investigate whether intelligence, self-esteem, the need for social approval, and the Big Five are related to the susceptibility to provide conforming answers. We find an error rate of 33% for the standard length-of-line experiment which replicates the original findings by Asch (1951, 1955, 1956). Furthermore, in the incentivized condition the error rate decreases to 25%. For political opinions we find a conformity rate of 38%. However, besides openness, none of the investigated personality traits are convincingly related to the susceptibility of group pressure.

My summary:

This research aimed to replicate and extend the classic Asch conformity experiment, investigating the extent to which individuals conform to group pressure in a line-judging task. The study involved 210 participants divided into groups, with one naive participant and five confederates who provided deliberately incorrect answers. Replicating the original findings, the researchers observed an average error rate of 33%, demonstrating the enduring power of social influence in shaping individual judgments.

Furthering the investigation, the study explored the impact of monetary incentives on conformity. The researchers found that offering rewards for independent judgments reduced the error rate, suggesting that individuals are more likely to resist social pressure when motivated by personal gain. However, the study still observed a significant level of conformity even with incentives, indicating that social influence remains a powerful force even when competing with personal interests.
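To give a rough sense of how such a drop in error rates might be compared statistically, here is a back-of-the-envelope two-proportion test in Python. The per-condition sample sizes are hypothetical (the abstract reports N = 210 overall, not the split), and the published analysis likely pools many trials per subject, so this is only an illustration of the arithmetic, not a re-analysis of the study.

```python
from math import erf, sqrt

# Hypothetical split of the sample across the two conditions (not from the paper).
n1, p1 = 105, 0.33   # standard length-of-line condition: 33% error rate
n2, p2 = 105, 0.25   # incentivized condition: 25% error rate

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)                # pooled error rate
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))    # standard error of the difference
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal p-value

print(f"difference = {p1 - p2:.2f}, z = {z:.2f}, two-sided p = {p_value:.3f}")
```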

Saturday, August 19, 2023

Reverse-engineering the self

Paul, L., Ullman, T. D., De Freitas, J., & Tenenbaum, J.
(2023, July 8). PsyArXiv
https://doi.org/10.31234/osf.io/vzwrn

Abstract

To think for yourself, you need to be able to solve new and unexpected problems. This requires you to identify the space of possible environments you could be in, locate yourself in the relevant one, and frame the new problem as it exists relative to your location in this new environment. Combining thought experiments with a series of self-orientation games, we explore the way that intelligent human agents perform this computational feat by “centering” themselves: orienting themselves perceptually and cognitively in an environment, while simultaneously holding a representation of themselves as an agent in that environment. When faced with an unexpected problem, human agents can shift their perceptual and cognitive center from one location in a space to another, or “re-center”, in order to reframe a problem, giving them a distinctive type of cognitive flexibility. We define the computational ability to center (and re-center) as “having a self,” and propose that implementing this type of computational ability in machines could be an important step towards building a truly intelligent artificial agent that could “think for itself”. We then develop a conceptually robust, empirically viable, engineering-friendly implementation of our proposal, drawing on well established frameworks in cognition, philosophy, and computer science for thinking, planning, and agency.


The authors argue that the computational structure of the self is a key component of human intelligence, and they propose a framework for reverse-engineering the self, drawing on work in cognition, philosophy, and computer science.

The authors argue that the self is a computational agent that is able to learn and think for itself. This agent has a number of key abilities, including:
  • The ability to represent the world and its own actions.
  • The ability to plan and make decisions.
  • The ability to learn from experience.
  • The ability to have a sense of self.
The authors argue that these abilities can be modeled as a partially observable Markov decision process (POMDP), a mathematical model of sequential decision-making in which the decision-maker has incomplete information about the environment (a toy sketch appears after this summary). They propose a number of methods for reverse-engineering the self, including:
  • Using data from brain imaging studies to identify the neural correlates of self-related processes.
  • Using computational models of human decision-making to test hypotheses about the computational structure of the self.
  • Using philosophical analysis to clarify the nature of self-related concepts.
The authors argue that reverse-engineering the self is a promising approach to understanding human intelligence and developing artificial intelligence systems that are capable of thinking for themselves.
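The POMDP framing can be made concrete with a toy example. The sketch below is not the authors' model; it is a minimal, self-contained belief update for a hypothetical agent in a four-cell corridor, meant only to show what "locating yourself in an environment under uncertainty" looks like computationally. All states, observations, and probabilities are invented for illustration.

```python
import random

# Toy POMDP: an agent in a 1-D corridor of 4 cells; the goal is cell 3.
# It only receives a noisy "near goal / far from goal" signal.
STATES = [0, 1, 2, 3]
ACTIONS = ["left", "right"]

def transition(state, action):
    """Deterministic movement, clipped to the corridor."""
    step = -1 if action == "left" else 1
    return max(0, min(3, state + step))

def observation_prob(obs, state):
    """Noisy sensor: 'near' is likelier the closer the agent is to the goal."""
    p_near = [0.1, 0.3, 0.7, 0.9][state]
    return p_near if obs == "near" else 1.0 - p_near

def update_belief(belief, action, obs):
    """Bayesian belief update: predict with the transition model,
    then reweight by the likelihood of the observation."""
    predicted = [0.0] * len(STATES)
    for s, p in enumerate(belief):
        predicted[transition(s, action)] += p
    unnormalized = [observation_prob(obs, s) * predicted[s] for s in STATES]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Start maximally uncertain about where we are ("centering" as belief formation).
belief = [0.25, 0.25, 0.25, 0.25]
true_state = 0
for _ in range(5):
    action = "right"                                    # naive policy: always move right
    true_state = transition(true_state, action)
    obs = "near" if random.random() < observation_prob("near", true_state) else "far"
    belief = update_belief(belief, action, obs)
    print(f"obs={obs:>4}  belief={[round(b, 2) for b in belief]}")
```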

Friday, July 28, 2023

Humans, Neanderthals, robots and rights

Mamak, K.
Ethics Inf Technol 24, 33 (2022).

Abstract

Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will never have all human rights, even if we accept that they are morally equal to humans. I focus on the role of embodiment in the content of the law. I claim that even relatively small differences in the ontologies of entities could lead to the need to create new sets of rights. I use the example of Neanderthals to illustrate that entities similar to us might have required different legal statuses. Then, I discuss the potential legal status of human-like robots.

Conclusions

The place of robots in the law universe depends on many things. One is our decision about their moral status, but even if we accept that some robots are equal to humans, this does not mean that they have the same legal status as humans. Law, as a human product, is tailored to a human being who has a body. Embodiment impacts the content of law, and entities with different ontologies are not suited to human law. As discussed here, Neanderthals, who are very close to us from a biological point of view, and human-like robots cannot be counted as humans by law. Doing so would be anthropocentric and harmful to such entities because it could ignore aspects of their lives that are important for them. It is certain that the current law is not ready for human-like robots.


Here is a summary: 

In terms of robot rights, one factor to consider is the nature of robots. Robots are becoming increasingly sophisticated, and some experts believe they may eventually become as intelligent as humans. If so, robots could come to deserve the same rights as humans.

Another factor is the relationship between humans and robots. Humans have a long history of using animals for their own purposes, and some people argue that robots, like animals, are simply resources for human use. On that view, robots would not deserve the same rights as humans.
  • The question of robot rights is a complex one, and there is no easy answer.
  • The nature of robots and the relationship between humans and robots are two important factors to consider when thinking about robot rights.
  • It is important to start thinking about robot rights now, before robots become too sophisticated.

Friday, February 11, 2022

Social Neuro AI: Social Interaction As the "Dark Matter" of AI

S. Bolotta & G. Dumas
arxiv.org
Originally published 4 JAN 22

Abstract

We are making the case that empirical results from social psychology and social neuroscience along with the framework of dynamics can be of inspiration to the development of more intelligent artificial agents. We specifically argue that the complex human cognitive architecture owes a large portion of its expressive power to its ability to engage in social and cultural learning. In the first section, we aim at demonstrating that social learning plays a key role in the development of intelligence. We do so by discussing social and cultural learning theories and investigating the abilities that various animals have at learning from others; we also explore findings from social neuroscience that examine human brains during social interaction and learning. Then, we discuss three proposed lines of research that fall under the umbrella of Social NeuroAI and can contribute to developing socially intelligent embodied agents in complex environments. First, neuroscientific theories of cognitive architecture, such as the global workspace theory and the attention schema theory, can enhance biological plausibility and help us understand how we could bridge individual and social theories of intelligence. Second, intelligence occurs in time as opposed to over time, and this is naturally incorporated by the powerful framework offered by dynamics. Third, social embodiment has been demonstrated to provide social interactions between virtual agents and humans with a more sophisticated array of communicative signals. To conclude, we provide a new perspective on the field of multiagent robot systems, exploring how it can advance by following the aforementioned three axes.

Conclusion

At the crossroads of robotics, computer science, and psychology, one of the main challenges for humans is to build autonomous agents capable of participating in cooperative social interactions. This is important not only because AI will play a crucial role in our daily life, but also because, as demonstrated by results in social neuroscience and evolutionary psychology, intrapersonal intelligence is tightly connected with interpersonal intelligence, especially in humans (Dumas et al., 2014a). In this opinion article, we have attempted to unify the lines of research that, at the moment, are separated from each other; in particular, we have proposed three research directions that are expected to enhance efficient exchange of information between agents and, as a consequence, individual intelligence (especially in out-of-distribution generalization: OOD). This would contribute to creating agents that not only have humanlike OOD skills, but are also able to exhibit such skills in extremely complex and realistic environments (Dennis et al., 2021), while interacting with other embodied agents and with humans.


Saturday, May 2, 2020

Decision-Making Competence: More Than Intelligence?

Bruine de Bruin, W., Parker, A. M., & Fischhoff, B.
(2020). Current Directions in Psychological Science.
https://doi.org/10.1177/0963721420901592

Abstract

Decision-making competence refers to the ability to make better decisions, as defined by decision-making principles posited by models of rational choice. Historically, psychological research on decision-making has examined how well people follow these principles under carefully manipulated experimental conditions. When individual differences received attention, researchers often assumed that individuals with higher fluid intelligence would perform better. Here, we describe the development and validation of individual-differences measures of decision-making competence. Emerging findings suggest that decision-making competence may tap not only into fluid intelligence but also into motivation, emotion regulation, and experience (or crystallized intelligence). Although fluid intelligence tends to decline with age, older adults may be able to maintain decision-making competence by leveraging age-related improvements in these other skills. We discuss implications for interventions and future research.

(cut)

Implications for Interventions

Better understanding of how fluid intelligence and other skills support decision-making competence should facilitate the design of interventions. Below, we briefly consider directions for future research into potential cognitive, motivational, emotional, and experiential interventions for promoting decision-making competence.

In one intervention that aimed to provide cognitive support, Zwilling and colleagues (2019) found that training in core cognitive abilities improved decision-making competence, compared to an active control group (in which participants practiced processing visual information faster). Effects of cognitive training can be enhanced by high-intensity cardioresistance fitness training, which improves connectivity in the brain (Zwilling et al., 2019). Rosi, Vecchi, & Cavallini (2019) found that prompting older people to ask ‘metacognitive’ questions (e.g., what is the main information?) was more effective than general memory training for improving performance on Applying Decision Rules. This finding is in line with suggestions that older adults perform better when they are asked to explain their choices (Kim, Goldstein, Hasher, & Zacks, 2005). Additional intervention approaches have aimed to reduce the need to rely on fluid intelligence. Using simple instead of complex decision rules may decrease cognitive demands and cause fewer errors (Payne et al., 1993). Reducing the number of options also reduces cognitive demands, and may help older adults in particular to improve their choices (Tanius, Wood, Hanoch, & Rice, 2009).
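To make the contrast between simple and complex decision rules concrete, here is a rough sketch in Python. The options, attribute scores, weights, and threshold are invented for illustration; "weighted additive" and "take-the-best" are standard examples from the decision-making literature, not procedures taken from Bruine de Bruin and colleagues' paper.

```python
# Two ways to choose between options described by attribute scores (all values hypothetical).
options = {
    "apartment_A": {"rent": 0.6, "commute": 0.2, "size": 0.2},
    "apartment_B": {"rent": 0.5, "commute": 0.9, "size": 0.9},
}
weights = {"rent": 0.5, "commute": 0.3, "size": 0.2}      # importance weights
validity_order = ["rent", "commute", "size"]              # attributes ranked by assumed validity

def weighted_additive(options, weights):
    """Complex rule: integrate every attribute, weighted by importance."""
    return max(options, key=lambda o: sum(weights[a] * v for a, v in options[o].items()))

def take_the_best(options, validity_order, threshold=0.1):
    """Simple lexicographic rule: decide on the single most valid attribute
    that discriminates between the options; ignore everything else."""
    names = list(options)
    for attr in validity_order:
        values = [options[n][attr] for n in names]
        if max(values) - min(values) >= threshold:
            return names[values.index(max(values))]
    return names[0]  # fall back if nothing discriminates

print("weighted additive picks:", weighted_additive(options, weights))
print("take-the-best picks:   ", take_the_best(options, validity_order))
```

With these made-up numbers the two rules disagree, which is the point: the simpler rule demands far less of fluid intelligence, at some cost in how much information it uses.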

Monday, March 9, 2020

The dangerous veneer of ‘science’

Regina Rini
Times Literary Supplement
Originally posted Feb 20

"Race science” seems to be the hideous new trend of 2020. Last month, an article in the journal Philosophical Psychology calling for greater investigation of purported genetic bases for racial IQ differences became the focus of intense academic controversy. Then came a new book, Human Diversity, from Charles Murray, prompting the New York Times columnist Jamelle Bouie to tweet: “i guess we’re gonna spend february arguing with phrenologists”. And then just this week, a twenty-seven-year-old consultant to the British government quickly resigned following media reports of his apparent musings on eugenics.

What’s going on? Why are we suddenly talking about this nonsense again? Donald Trump, and the winks he tosses to torch-wielding “blood and soil” marchers, may have something to do with it. Clearly there is a market for white coats who cater to white hoods. But the “race science” racket is growing, and we needn’t assume that all its practitioners have such transparently bigoted motives. Rather, I suspect that some are in it for the iconoclastic thrill of prodding at bien pensant pieties from behind the intellectual shield of capital-S Science.

There has always been a certain sort of mind that delights in saying whatever is verboten, from the Marquis de Sade to Christopher Hitchens. The writer George Packer worries that, in the high-stakes moral atmosphere of the Trump era, we no longer have cultural space for such fearless exploration of opinion. But I think this gets things exactly backwards. Trumpism is partly a result of the fact that it is now much easier to acquire an audience for heterodoxy. You don’t have to be a gifted essayist; you need only a Twitter account and lack of moral inhibition. Thoughtful iconoclasts aren’t silenced, they’re merely lost amid the din of so many icons being artlessly shattered.

The info is here.

Thursday, February 27, 2020

Liar, Liar, Liar

S. Vedantam, M. Penman, & T. Boyle
Hidden Brain - NPR.org
Originally posted 17 Feb 20

When we think about dishonesty, we mostly think about the big stuff.

We see big scandals, big lies, and we think to ourselves, I could never do that. We think we're fundamentally different from Bernie Madoff or Tiger Woods.

But behind big lies are a series of small deceptions. Dan Ariely, a professor of psychology and behavioral economics at Duke University, writes about this in his book The Honest Truth about Dishonesty.

"One of the frightening conclusions we have is that what separates honest people from not-honest people is not necessarily character, it's opportunity," he said.

These small lies are quite common. When we lie, it's not always a conscious or rational choice. We want to lie and we want to benefit from our lying, but we want to be able to look in the mirror and see ourselves as good, honest people. We might go a little too fast on the highway, or pocket extra change at a gas station, but we're still mostly honest ... right?

That's why Ariely describes honesty as something of a state of mind. He thinks the IRS should have people sign a pledge committing to be honest when they start working on their taxes, not when they're done. Setting the stage for honesty is more effective than asking someone after the fact whether or not they lied.

The info is here.

There is a 30-minute audio file worth listening to.

Friday, June 21, 2019

It's not biology bro: Torture and the Misuse of Science

Shane O'Mara and John Schiemann
PsyArXiv Preprints
Last edited on December 24, 2018

Abstract

Contrary to the (in)famous line in the film Zero Dark Thirty, the CIA's torture program was not based on biology or any other science. Instead, the Bush administration and the CIA decided to use coercion immediately after the 9/11 terrorist attacks and then veneered the program's justification with a patina of pseudoscience, ignoring the actual biology of torturing human brains. We reconstruct the Bush administration’s decision-making process from released government documents, independent investigations, journalistic accounts, and memoirs to establish that the policy decision to use torture took place in the immediate aftermath of the 9/11 attacks without any investigation into its efficacy. We then present the pseudo-scientific model of torture sold to the CIA based on a loose amalgamation of methods from the old KUBARK manual, reverse-engineering of SERE training techniques, and learned helplessness theory, show why this ad hoc model amounted to pseudoscience, and then catalog what the actual science of torturing human brains – available in 2001 – reveals about the practice. We conclude with a discussion of how the process of policy-making might incorporate countervailing evidence to ensure that policy problems are forestalled, via the concept of an evidence-based policy brake, which is deliberately instituted to prevent a policy going forward that is contrary to law, ethics and evidence.

The info is here.

Monday, June 3, 2019

IVF couples could be able to choose the ‘smartest’ embryo

Hannah Devlin
TheGuardian.com
Originally posted May 24, 2019

Couples undergoing IVF treatment could be given the option to pick the “smartest” embryo within the next 10 years, a leading US scientist has predicted.

Stephen Hsu, senior vice president for research at Michigan State University, said scientific advances mean it will soon be feasible to reliably rank embryos according to potential IQ, posing profound ethical questions for society about whether or not the technology should be adopted.

Hsu’s company, Genomic Prediction, already offers a test aimed at screening out embryos with abnormally low IQ to couples being treated at fertility clinics in the US.

“Accurate IQ predictors will be possible, if not the next five years, the next 10 years certainly,” Hsu told the Guardian. “I predict certain countries will adopt them.”

Genomic Prediction’s tests are not currently available in the UK, but the company is planning to submit an application to the Human Fertilisation and Embryology Authority by the end of the year, initially to offer a test for risk of type 1 diabetes.

The info is here.

Tuesday, December 18, 2018

Super-smart designer babies could be on offer soon. But is that ethical?

Philip Ball
The Guardian
Originally posted November 19, 2018

Here is an excerpt:


Before we start imagining a Gattaca-style future of genetic elites and underclasses, there’s some context needed. The company says it is only offering such testing to spot embryos with an IQ low enough to be classed as a disability, and won’t conduct analyses for high IQ. But the technology the company is using will permit that in principle, and co-founder Stephen Hsu, who has long advocated for the prediction of traits from genes, is quoted as saying: “If we don’t do it, some other company will.”

The development must be set, too, against what is already possible and permitted in IVF embryo screening. The procedure called pre-implantation genetic diagnosis (PGD) involves extracting cells from embryos at a very early stage and “reading” their genomes before choosing which to implant. It has been enabled by rapid advances in genome-sequencing technology, making the process fast and relatively cheap. In the UK, PGD is strictly regulated by the Human Fertilisation and Embryology Authority (HFEA), which permits its use to identify embryos with several hundred rare genetic diseases of which the parents are known to be carriers. PGD for other purposes is illegal.

The info is here.

Wednesday, August 1, 2018

65% of Americans believe they are above average in intelligence: Results of two nationally representative surveys

Patrick R. Heck, Daniel J. Simons, Christopher F. Chabris
PLoS One
Originally posted July 3, 2018

Abstract

Psychologists often note that most people think they are above average in intelligence. We sought robust, contemporary evidence for this “smarter than average” effect by asking Americans in two independent samples (total N = 2,821) whether they agreed with the statement, “I am more intelligent than the average person.” After weighting each sample to match the demographics of U.S. census data, we found that 65% of Americans believe they are smarter than average, with men more likely to agree than women. However, overconfident beliefs about one’s intelligence are not always unrealistic: more educated people were more likely to think their intelligence is above average. We suggest that a tendency to overrate one’s cognitive abilities may be a stable feature of human psychology.
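The phrase "after weighting each sample to match the demographics of U.S. census data" refers to standard survey weighting. The sketch below shows the basic idea with an invented six-person sample and made-up population shares; it is not the authors' weighting scheme or data.

```python
from collections import Counter

# Hypothetical raw sample that over-represents one education group relative to
# (made-up) census targets, plus each respondent's agreement with the statement.
sample = [
    ("college", True), ("college", True), ("college", False), ("college", True),
    ("no_college", True), ("no_college", False),
]
census_share = {"college": 0.35, "no_college": 0.65}   # hypothetical population shares

# Post-stratification weight = population share / sample share for each group.
group_counts = Counter(group for group, _ in sample)
weights = {g: census_share[g] / (group_counts[g] / len(sample)) for g in census_share}

weighted_agree = sum(weights[g] for g, agrees in sample if agrees)
weighted_total = sum(weights[g] for g, _ in sample)
print(f"unweighted agreement: {sum(a for _, a in sample) / len(sample):.0%}")
print(f"weighted agreement:   {weighted_agree / weighted_total:.0%}")
```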

The research is here.

Wednesday, June 20, 2018

How the Enlightenment Ends

Henry A. Kissinger
The Atlantic
Posted in the June 2018 Issue

Here are two excerpts:

Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?

(cut)

Other AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions (“What is the temperature outside?”), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

The article is here.

Saturday, April 14, 2018

The AI Cargo Cult: The Myth of a Superhuman AI

Kevin Kelly
www.wired.com
Originally published April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The information is here.

Thursday, January 11, 2018

Is Blended Intelligence the Next Stage of Human Evolution?

Richard Yonck
TED Talk
Published December 8, 2017

What is the future of intelligence? Humanity is still an extremely young species and yet our prodigious intellects have allowed us to achieve all manner of amazing accomplishments in our relatively short time on this planet, most especially during the past couple of centuries or so. Yet, it would be short-sighted of us to assume our species has reached the end of our journey, having become as intelligent as we will ever be. On the contrary, it seems far more likely that if we should survive our “infancy," there is probably much more time ahead of us than there is looking back. If that’s the case, then our descendants of only a few thousand years from now will probably be very, very different from you and me.


Thursday, August 24, 2017

Brain Augmentation: How Scientists are Working to Create Cyborg Humans with Super Intelligence

Hannah Osborne
Newsweek
Originally published June 14, 2017

For most people, the idea of brain augmentation remains in the realms of science fiction. However, for scientists across the globe, it is fast becoming reality—with the possibility of humans with “super-intelligence” edging ever closer.

In laboratory experiments on rats, researchers have already been able to transfer memories from one brain to another. Future projects include the development of telepathic communication and the creation of “cyborgs,” where humans have advanced abilities thanks to technological interventions.

Scientists Mikhail Lebedev, Ioan Opris and Manuel Casanova have now published a comprehensive collection of research into brain augmentation, and their efforts have won a major European science research prize—the Frontiers Spotlight Award. This $100,000 prize is for the winners to set up a conference that highlights emerging research in their field.

Project leader Lebedev, a senior researcher at Duke University, North Carolina, said the reality of brain augmentation—where intelligence is enhanced by brain implants—will be part of everyday life by 2030, and that “people will have to deal with the reality of this new paradigm.”

Their collection, Augmentation of brain function: facts, fiction and controversy, was published by Frontiers and includes almost 150 research articles by more than 600 contributing authors. It focuses on current brain augmentation, future proposals and the ethical and legal implications the topic raises.

The article is here.

Wednesday, August 9, 2017

Career of the Future: Robot Psychologist

Christopher Mims
The Wall Street Journal
Originally published July 9, 2017

Artificial-intelligence engineers have a problem: They often don’t know what their creations are thinking.

As artificial intelligence grows in complexity and prevalence, it also grows more powerful. AI already has factored into decisions about who goes to jail and who receives a loan. There are suggestions AI should determine who gets the best chance to live when a self-driving car faces an unavoidable crash.

Defining AI is slippery and growing more so, as startups slather the buzzword over whatever they are doing. It is generally accepted as any attempt to ape human intelligence and abilities.

One subset that has taken off is neural networks, systems that “learn” as humans do through training, turning experience into networks of simulated neurons. The result isn’t code, but an unreadable, tangled mass of millions—in some cases billions—of artificial neurons, which explains why those who create modern AIs can be befuddled as to how they solve tasks.
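To see why a trained network is an "unreadable, tangled mass" of parameters rather than inspectable code, here is a small illustrative sketch (assuming NumPy is available): it trains a tiny network on the XOR problem and then prints the raw weights, which say nothing legible about the rule the network has learned.

```python
import numpy as np

# A tiny network trained on XOR with plain gradient descent. Even at this scale,
# the learned parameters are an opaque mass of numbers: nothing in W1 or W2
# states the rule "output 1 only when the inputs differ".
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))      # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))      # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)                  # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;   b2 -= d_out.sum(0, keepdims=True)
    W1 -= X.T @ d_h;     b1 -= d_h.sum(0, keepdims=True)

print("predictions:", out.round(2).ravel())              # typically close to 0, 1, 1, 0
print("W1:\n", W1)                                       # raw weights: no readable rule
print("W2:\n", W2.ravel())
```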

Most researchers agree the challenge of understanding AI is pressing. If we don’t know how an artificial mind works, how can we ascertain its biases or predict its mistakes?

We won’t know in advance if an AI is racist, or what unexpected thought patterns it might have that would make it crash an autonomous vehicle. We might not know about an AI’s biases until long after it has made countless decisions. It’s important to know when an AI will fail or behave unexpectedly—when it might tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”

“A big problem is people treat AI or machine learning as being very neutral,” said Tracy Chou, a software engineer who worked with machine learning at Pinterest Inc. “And a lot of that is people not understanding that it’s humans who design these models and humans who choose the data they are trained on.”

The article is here.

Sunday, July 23, 2017

Stop Obsessing Over Race and IQ

John McWhorter
The National Review
Originally published July 5, 2017

Here are three excerpts:

Suppose that, at the end of the day, people of African descent have lower IQs on average than do other groups of humans, and that this gap is caused, at least in part, by genetic differences.

(cut)

There is, however, a question that those claiming black people are genetically predisposed to have lower IQs than others fail to answer: What, precisely, would we gain from discussing this particular issue?

(cut)

A second purpose of being “honest” about a racial IQ gap would be the opposite of the first: We might take the gap as a reason for giving not less but more attention to redressing race-based inequities. That is, could we imagine an America in which it was accepted that black people labored — on average, of course — under an intellectual handicap, and an enlightened, compassionate society responded with a Great Society–style commitment to the uplift of the people thus burdened?

I am unaware of any scholar or thinker who has made this argument, perhaps because it, too, is an obvious fantasy. Officially designating black people as a “special needs” race perpetually requiring compensatory assistance on the basis of their intellectual inferiority would run up against the same implacable resistance as condemning them to menial roles for the same reason. The impulse that rejects the very notion of IQ differences between races will thrive despite any beneficent intentions founded on belief in such differences.

The article is here.

Monday, July 3, 2017

How Scientists are Working to Create Cyborg Humans with Super Intelligence

Hannah Osborne
Newsweek
Originally posted on June 14, 2017

Here is an excerpt:

There are three main approaches to doing this. The first involves recording information from the brain, decoding it via a computer or machine interface, and then utilizing the information for a purpose.

The second is to influence the brain by stimulating it pharmacologically or electrically: “So you can stimulate the brain to produce artificial sensations, like the sensation of touch, or vision for the blind,” he says. “Or you could stimulate certain areas to improve their functions—like improved memory, attention. You can even connect two brains together—one brain will stimulate the other—like where scientists transferred memories of one rat to another.”

The final approach is defined as “futuristic.” This would include humans becoming cyborgs, for example, and would raise the ethical and philosophical questions that will need to be addressed before scientists merge man and machine.
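As a rough illustration of the first approach (record, decode, then use), the sketch below simulates a single neural feature under two imagined movements and decodes new recordings with a nearest-centroid rule. All of the numbers and the "left/right intent" framing are invented; real brain–computer interfaces work with far richer signals and decoders.

```python
import random

# Simulate "recording" a neural feature (e.g., a firing rate) under two imagined
# movements, then "decode" fresh recordings by nearest training centroid.
random.seed(1)

def record(intent, n=50):
    base = 10.0 if intent == "left" else 20.0           # hypothetical mean firing rates
    return [random.gauss(base, 3.0) for _ in range(n)]

training = {intent: record(intent) for intent in ("left", "right")}
centroids = {intent: sum(v) / len(v) for intent, v in training.items()}

def decode(sample_rate):
    """Assign the recording to the intent whose training centroid is closest."""
    return min(centroids, key=lambda intent: abs(centroids[intent] - sample_rate))

for true_intent in ("left", "right", "left"):
    new_recording = sum(record(true_intent, n=5)) / 5    # a fresh, noisy recording
    print(true_intent, "decoded as", decode(new_recording))
```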

Lebedev said these ethical concerns could become real in the next 10 years, but the current technology poses no serious threat.

The article is here.

Thursday, June 8, 2017

The AI Cargo Cult: The Myth of Superhuman AI

Kevin Kelly
Backchannel.com
Originally posted April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The article is here.

Sunday, April 16, 2017

Yuval Harari on why humans won’t dominate Earth in 300 years

Interview by Ezra Klein
Vox.com
Originally posted March 27, 2017

Here are two excerpts:

I totally agree that for success, cooperation is usually more important than just raw intelligence. But the thing is that AI will be far more cooperative, at least potentially, than humans. To take a famous example, everybody is now talking about self-driving cars. The huge advantage of a self-driving car over a human driver is not just that, as an individual vehicle, the self-driving car is likely to be safer, cheaper, and more efficient than a human-driven car. The really big advantage is that self-driving cars can all be connected to one another to form a single network in a way you cannot do with human drivers.

It's the same with many other fields. If you think about medicine, today you have millions of human doctors and very often you have miscommunication between different doctors, but if you switch to AI doctors, you don't really have millions of different doctors. You have a single medical network that monitors the health of everybody in the world.

(cut)

I think the other problem with AI taking over is not the economic problem, but really the problem of meaning — if you don't have a job anymore and, say, the government provides you with universal basic income or something, the big problem is how do you find meaning in life? What do you do all day?

Here, the best answers so far we've got is drugs and computer games. People will regulate more and more their moods with all kinds of biochemicals, and they will engage more and more with three-dimensional virtual realities.

The entire interview is here.