Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Creativity.

Wednesday, March 13, 2024

None of these people exist, but you can buy their books on Amazon anyway

Conspirador Norteno
Substack.com
Originally published 12 Jan 24

Meet Jason N. Martin N. Martin, the author of the exciting and dynamic Amazon bestseller “How to Talk to Anyone: Master Small Talks, Elevate Your Social Skills, Build Genuine Connections (Make Real Friends; Boost Confidence & Charisma)”, which is the 857,233rd most popular book on the Kindle Store as of January 12th, 2024. There are, however, a few obvious problems. In addition to the unnecessary repetition of the middle initial and last name, Mr. N. Martin N. Martin’s official portrait is a GAN-generated face, and (as we’ll see shortly), his sole published work is strangely similar to several books by another Amazon author with a GAN-generated face.

In an interesting twist, Amazon’s recommendation system suggests another author with a GAN-generated face in the “Customers also bought items by” section of Jason N. Martin N. Martin’s author page. Further exploration of the recommendations attached to both of these authors and their published works reveals a set of a dozen Amazon authors with GAN-generated faces and at least one published book. Amazon’s recommendation algorithms reliably link these authors together; whether this is a sign that the twelve author accounts are actually run by the same entity or merely an artifact of similarities in the content of their books is unclear at this point in time. 
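
A note on method: the clustering described above is essentially a graph problem. If the "Customers also bought items by" relationships were scraped into author-to-author pairs, the connected components of that graph would show which accounts Amazon's recommendations tie together. The sketch below shows the general idea only; the author names and edges are hypothetical placeholders, not the investigator's actual data or pipeline.

```python
# Sketch: cluster author pages by their "Customers also bought items by" links,
# assuming those links have already been scraped into author-to-author pairs.
# The names below are hypothetical placeholders, not real Amazon accounts.
import networkx as nx

recommendation_edges = [
    ("Author A", "Author B"),
    ("Author B", "Author C"),
    ("Author D", "Author E"),
]

graph = nx.Graph()
graph.add_edges_from(recommendation_edges)

# Each connected component is a cluster of authors that the recommendation
# system repeatedly links together.
for component in nx.connected_components(graph):
    print(sorted(component))
```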


Here's my take:

Forget literary pen names: AI is driving a new trend on Amazon, books ghostwritten by machines. These novels, poetry collections, and even children's stories boast intriguing titles and blurbs, yet none of the authors on the cover are real people. Instead, their creations spring from the algorithms of powerful language models.

Here's the gist:
  • AI churns out content: Fueled by vast datasets of text and code, AI can generate chapters, characters, and storylines at an astonishing pace.
  • Ethical concerns: Questions swirl around copyright, originality, and the very nature of authorship. Is an AI-generated book truly a book, or just a clever algorithm mimicking creativity?
  • Quality varies: While some AI-written books garner praise, others are criticized for factual errors, nonsensical plots, and robotic dialogue.
  • Transparency is key: Many readers feel deceived by the lack of transparency about AI authorship. Should books disclose their digital ghostwriters?
This evolving technology challenges our understanding of literature and raises questions about the future of authorship. While AI holds potential to assist and inspire, the human touch in storytelling remains irreplaceable. So, the next time you browse Amazon, remember: the author on the cover might not be who they seem.

Tuesday, October 3, 2023

Emergent analogical reasoning in large language models

Webb, T., Holyoak, K.J. & Lu, H. 
Nat Hum Behav (2023).
https://doi.org/10.1038/s41562-023-01659-w

Abstract

The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.

Discussion

We have presented an extensive evaluation of analogical reasoning in a state-of-the-art large language model. We found that GPT-3 appears to display an emergent ability to reason by analogy, matching or surpassing human performance across a wide range of problem types. These included a novel text-based problem set (Digit Matrices) modeled closely on Raven’s Progressive Matrices, where GPT-3 both outperformed human participants, and captured a number of specific signatures of human behavior across problem types. Because we developed the Digit Matrix task specifically for this evaluation, we can be sure GPT-3 had never been exposed to problems of this type, and therefore was performing zero-shot reasoning. GPT-3 also displayed an ability to solve analogies based on more meaningful relations, including four-term verbal analogies and analogies between stories about naturalistic problems.

It is certainly not the case that GPT-3 mimics human analogical reasoning in all respects. Its performance is limited to the processing of information provided in its local context. Unlike humans, GPT-3 does not have long-term memory for specific episodes. It is therefore unable to search for previously-encountered situations that might create useful analogies with a current problem. For example, GPT-3 can use the general story to guide its solution to the radiation problem, but as soon as its context buffer is emptied, it reverts to giving its non-analogical solution to the problem – the system has learned nothing from processing the analogy. GPT-3’s reasoning ability is also limited by its lack of physical understanding of the world, as evidenced by its failure (in comparison with human children) to use an analogy to solve a transfer problem involving construction and use of simple tools. GPT-3’s difficulty with this task is likely due at least in part to its purely text-based input, lacking the multimodal experience necessary to build a more integrated world model.

But despite these major caveats, our evaluation reveals that GPT-3 exhibits a very general capacity to identify and generalize – in zero-shot fashion – relational patterns to be found within both formal problems and meaningful texts. These results are extremely surprising. It is commonly held that although neural networks can achieve a high level of performance within a narrowly defined task domain, they cannot robustly generalize what they learn to new problems in the way that human learners do. Analogical reasoning is typically viewed as a quintessential example of this human capacity for abstraction and generalization, allowing human reasoners to intelligently approach novel problems zero-shot.
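
To make the Digit Matrices idea concrete, here is a toy version of a text-based matrix item with a single progression rule, formatted as a zero-shot prompt. The paper's actual problem set uses a richer taxonomy of rules, so treat this generator as an illustrative assumption rather than the study's materials.

```python
# Sketch: build a simple 3x3 digit-matrix item with a row-wise progression rule
# and format it as a zero-shot prompt. The real Digit Matrices set uses a richer
# rule taxonomy; this single-rule generator is only illustrative.
def make_progression_item(start=1, step=2):
    # Each cell increases by `step`; the bottom-right cell is withheld as the answer.
    matrix = [[start + step * (3 * r + c) for c in range(3)] for r in range(3)]
    answer = matrix[2][2]
    matrix[2][2] = None
    return matrix, answer

def format_prompt(matrix):
    rows = []
    for row in matrix:
        cells = " ".join("?" if x is None else str(x) for x in row)
        rows.append(f"[{cells}]")
    return "Complete the pattern:\n" + "\n".join(rows) + "\nAnswer:"

matrix, answer = make_progression_item()
print(format_prompt(matrix))   # the prompt a model would see, zero-shot
print("expected:", answer)     # 17 for start=1, step=2
```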

Wednesday, March 8, 2023

Neuroscience is ready for neuroethics engagement

Das, J., Forlini, C., Porcello, D. M. et al.
Front. Commun., 21 December 2022
Sec. Science and Environmental Communication

Neuroscience research has been expanding, providing new insights into brain and nervous system function and potentially transformative technological applications. In recent years, there has been a flurry of prominent international scientific academies and intergovernmental organizations calling for engagement with different publics on social, ethical, and regulatory issues related to neuroscience and neurotechnology advances. Neuroscientific activities and outputs are value-laden; they reflect the cultural, ethical, and political values that are prioritized in different societies at a given time and impact a variety of publics beyond the laboratory. The focus on engagement in neuroscience recognizes the breadth and significance of current neuroscience research whilst acknowledging the need for a neuroethical approach that explores the epistemic and moral values influencing the neuroscientific agenda. The field of neuroethics is characterized by its focus on the social, legal, and philosophical implications of neuroscience including its impact on cultural assumptions about the cognitive experience, identity, consciousness, and decision-making. Here, we outline a proposal for neuroethics engagement that reflects an enhanced and evolving understanding of public engagement with neuroethical issues to create opportunities to share ideation, decision-making, and collaboration in neuroscience endeavors for the benefit of society. We demonstrate the synergies between public engagement and neuroethics scholarship and activities that can guide neuroethics engagement.

Conclusion

Building on research from numerous fields and experiences of the past, engagement between neuroscience, neuroethics, and publics offers a critical lens for anticipating and interrogating the unique societal implications of neuroscience discovery and dissemination, and it can help guide regulation so that neuroscience products promote societal well-being. Engagement offers a bridge not only for neuroscientists and neuroethicists, but also for neuroethics and the public. It is possible that more widespread use of neuroethics engagement will reveal yet unknown or overlooked ethical conflicts in neuroscience that may take priority over the ones listed here.

We offer this paper as part of a continued and expanded dialogue on neuroethics engagement. The concept we propose will require the input of stakeholders beyond neuroethics, neuroscience, and public engagement in science to build practices that are inclusive and fit for purpose. Effective neuroethics engagement should be locally and temporally informed, lead to a culturally situated understanding of science and diplomacy, aim to understand the transnational nature of scientific knowledge, and be mindful of the challenges raised by how knowledge of discoveries circulates.

Friday, November 20, 2020

When Did We Become Fully Human? What Fossils and DNA Tell Us About the Evolution of Modern Intelligence

Nick Longrich
singularityhub.com
Originally posted 18 OCT 2020 

Here are two excerpts:

Because the fossil record is so patchy, fossils provide only minimum dates. Human DNA suggests even earlier origins for modernity. Comparing genetic differences between DNA in modern people and ancient Africans, it’s estimated that our ancestors lived 260,000 to 350,000 years ago. All living humans descend from those people, suggesting that we inherited the fundamental commonalities of our species, our humanity, from them.

All their descendants—Bantu, Berber, Aztec, Aboriginal, Tamil, San, Han, Maori, Inuit, Irish—share certain peculiar behaviors absent in other great apes. All human cultures form long-term pair bonds between men and women to care for children. We sing and dance. We make art. We preen our hair, adorn our bodies with ornaments, tattoos and makeup.

We craft shelters. We wield fire and complex tools. We form large, multigenerational social groups with dozens to thousands of people. We cooperate to wage war and help each other. We teach, tell stories, trade. We have morals, laws. We contemplate the stars, our place in the cosmos, life’s meaning, what follows death.

(cut)

First, we journeyed out of Africa, occupying more of the planet. There were then simply more humans to invent, increasing the odds of a prehistoric Steve Jobs or Leonardo da Vinci. We also faced new environments in the Middle East, the Arctic, India, Indonesia, with unique climates, foods and dangers, including other human species. Survival demanded innovation.

Many of these new lands were far more habitable than the Kalahari or the Congo. Climates were milder, but Homo sapiens also left behind African diseases and parasites. That let tribes grow larger, and larger tribes meant more heads to innovate and remember ideas, more manpower, and better ability to specialize. Population drove innovation.

Thursday, February 27, 2020

Liar, Liar, Liar

S. Vedantam, M. Penman, & T. Boyle
Hidden Brain - NPR.org
Originally posted 17 Feb 20

When we think about dishonesty, we mostly think about the big stuff.

We see big scandals, big lies, and we think to ourselves, I could never do that. We think we're fundamentally different from Bernie Madoff or Tiger Woods.

But behind big lies are a series of small deceptions. Dan Ariely, a professor of psychology and behavioral economics at Duke University, writes about this in his book The Honest Truth about Dishonesty.

"One of the frightening conclusions we have is that what separates honest people from not-honest people is not necessarily character, it's opportunity," he said.

These small lies are quite common. When we lie, it's not always a conscious or rational choice. We want to lie and we want to benefit from our lying, but we want to be able to look in the mirror and see ourselves as good, honest people. We might go a little too fast on the highway, or pocket extra change at a gas station, but we're still mostly honest ... right?

That's why Ariely describes honesty as something of a state of mind. He thinks the IRS should have people sign a pledge committing to be honest when they start working on their taxes, not when they're done. Setting the stage for honesty is more effective than asking someone after the fact whether or not they lied.

The info is here.

There is a 30-minute audio file worth listening to.

Saturday, July 7, 2018

Making better decisions in groups

Dan Bang, Chris D. Frith
Published 16 August 2017.
DOI: 10.1098/rsos.170193

Abstract

We review the literature to identify common problems of decision-making in individuals and groups. We are guided by a Bayesian framework to explain the interplay between past experience and new evidence, and the problem of exploring the space of hypotheses about all the possible states that the world could be in and all the possible actions that one could take. There are strong biases, hidden from awareness, that enter into these psychological processes. While biases increase the efficiency of information processing, they often do not lead to the most appropriate action. We highlight the advantages of group decision-making in overcoming biases and searching the hypothesis space for good models of the world and good solutions to problems. Diversity of group members can facilitate these achievements, but diverse groups also face their own problems. We discuss means of managing these pitfalls and make some recommendations on how to make better group decisions.
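
For readers who want the Bayesian framing spelled out, here is a minimal sketch of updating a belief about a binary state of the world from several independent group members' reports. The prior and reliabilities are illustrative assumptions, not values from the paper.

```python
# Sketch: Bayesian belief updating for a binary hypothesis H, combining a prior
# with several independent group members' reports. Probabilities are illustrative.
def update(prior_h, likelihood_given_h, likelihood_given_not_h):
    # Bayes' rule: P(H | report) is proportional to P(report | H) * P(H)
    numerator = likelihood_given_h * prior_h
    denominator = numerator + likelihood_given_not_h * (1.0 - prior_h)
    return numerator / denominator

belief = 0.5  # flat prior: no past experience favouring either state
# Each tuple: (P(member reports "H" | H true), P(member reports "H" | H false)).
# A member with 0.6/0.4 is only weakly reliable; 0.9/0.2 is strongly reliable.
reports = [(0.6, 0.4), (0.7, 0.3), (0.9, 0.2)]

for p_true, p_false in reports:
    belief = update(belief, p_true, p_false)

print(f"posterior P(H) after pooling reports: {belief:.3f}")
```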

The article is here.

Sunday, May 20, 2018

Robot cognition requires machines that both think and feel

Luiz Pessoa
www.aeon.com
Originally published April 13, 2018

Here is an excerpt:

Part of being intelligent, then, is about the ability to function autonomously in various conditions and environments. Emotion is helpful here because it allows an agent to piece together the most significant kinds of information. For example, emotion can instil a sense of urgency in actions and decisions. Imagine crossing a patch of desert in an unreliable car, during the hottest hours of the day. If the vehicle breaks down, what you need is a quick fix to get you to the next town, not a more permanent solution that might be perfect but could take many hours to complete in the beating sun. In real-world scenarios, a ‘good’ outcome is often all that’s required, but without the external pressure of perceiving a ‘stressful’ situation, an android might take too long trying to find the optimal solution.

Most proposals for emotion in robots involve the addition of a separate ‘emotion module’ – some sort of bolted-on affective architecture that can influence other abilities such as perception and cognition. The idea would be to give the agent access to an enriched set of properties, such as the urgency of an action or the meaning of facial expressions. These properties could help to determine issues such as which visual objects should be processed first, what memories should be recollected, and which decisions will lead to better outcomes.
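
To see how a bolted-on affective signal might work in practice, here is a minimal sketch in which an urgency value caps deliberation: when urgency is high the agent accepts the first good-enough option (the desert quick fix), and when it is low it keeps searching for the best one. The options, scores, and thresholds are my own illustrative assumptions, not from the article.

```python
# Sketch: an urgency signal modulating deliberation. High urgency makes the agent
# accept the first "good enough" option; low urgency lets it keep searching.
def choose_repair(options, urgency):
    # `options` is an iterable of (name, quality, hours_to_complete) tuples,
    # ordered from quickest to slowest.
    good_enough = 0.6 if urgency > 0.7 else 0.9   # urgency lowers the bar
    best = None
    for name, quality, hours in options:
        if best is None or quality > best[1]:
            best = (name, quality, hours)
        if quality >= good_enough:
            break  # stop deliberating; act now
    return best

options = [
    ("patch the hose with tape", 0.65, 0.5),
    ("replace the hose",         0.85, 3.0),
    ("rebuild the cooling loop", 0.99, 10.0),
]

print(choose_repair(options, urgency=0.9))   # desert at noon: take the quick fix
print(choose_repair(options, urgency=0.2))   # safe garage: keep looking for better
```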

The information is here.

Friendly note: I don't agree with everything I post.  In this case, I do not believe that AI needs emotions and feelings.  Rather, AI will have a different form of consciousness.  We don't need to try to reproduce our experiences exactly.  AI consciousness will likely have flaws, like we do.  We need to be able to manage AI given the limitations we create.

Thursday, April 26, 2018

Rogue chatbots deleted in China after questioning Communist Party

Neil Connor
The Telegraph
Originally published August 3, 2017

Two chatbots have been pulled from a Chinese messaging app after they questioned the rule of the Communist Party and made unpatriotic comments.

The bots were available on a messaging app run by Chinese Internet giant Tencent, which has more than 800 million users, before apparently going rogue.

One of the robots, BabyQ, was asked “Do you love the Communist Party”, according to a screenshot posted on Sina Weibo, China’s version of Twitter.

Another web user said to the chatbot: “Long Live the Communist Party”, to which BabyQ replied: “Do you think such corrupt and incapable politics can last a long time?”

(cut)

The Chinese Internet is heavily censored by Beijing, which sees any criticism of its rule as a threat.

Social media posts which are deemed critical are often quickly deleted by authorities, while searches for sensitive topics are often blocked.

The information is here.

Friday, April 20, 2018

Making a Thinking Machine

Lea Winerman
The Monitor on Psychology - April 2018

Here is an excerpt:

A 'Top Down' Approach

Now, psychologists and AI researchers are looking to insights from cognitive and developmental psychology to address these limitations and to capture aspects of human thinking that deep neural networks can’t yet simulate, such as curiosity and creativity.

This more “top-down” approach to AI relies less on identifying patterns in data, and instead on figuring out mathematical ways to describe the rules that govern human cognition. Researchers can then write those rules into the learning algorithms that power the AI system. One promising avenue for this method is called Bayesian modeling, which uses probability to model how people reason and learn about the world. Brenden Lake, PhD, a psychologist and AI researcher at New York University, and his colleagues, for example, have developed a Bayesian AI system that can accomplish a form of one-shot learning. Humans, even children, are very good at this—a child only has to see a pineapple once or twice to understand what the fruit is, pick it out of a basket and maybe draw an example.

Likewise, adults can learn a new character in an unfamiliar language almost immediately.
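
As a toy illustration of one-shot learning in a Bayesian spirit: after storing a single example per category, a new item can be classified by combining a prior over categories with a simple likelihood around each stored example. This is a deliberately tiny sketch with made-up numbers, not Lake and colleagues' actual Bayesian Program Learning model.

```python
# Sketch: one-shot classification in a toy Bayesian spirit. One stored example per
# category; a later item is classified by combining a prior over categories with a
# Gaussian likelihood around each stored example. Numbers are illustrative only.
import math

def gaussian_likelihood(x, mean, sigma=1.0):
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# A single (one-dimensional) feature per category, seen exactly once.
one_shot_examples = {"pineapple": 7.0, "apple": 3.0}
prior = {"pineapple": 0.5, "apple": 0.5}   # flat prior: illustrative assumption

def classify(x):
    scores = {c: prior[c] * gaussian_likelihood(x, m) for c, m in one_shot_examples.items()}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

print(classify(6.5))   # close to the single pineapple example -> high P(pineapple)
```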

The article is here.

Monday, March 12, 2018

Train PhD students to be thinkers not just specialists

Gundula Bosch
nature.com
Originally posted February 14, 2018

Under pressure to turn out productive lab members quickly, many PhD programmes in the biomedical sciences have shortened their courses, squeezing out opportunities for putting research into its wider context. Consequently, most PhD curricula are unlikely to nurture the big thinkers and creative problem-solvers that society needs.

That means students are taught every detail of a microbe’s life cycle but little about the life scientific. They need to be taught to recognize how errors can occur. Trainees should evaluate case studies derived from flawed real research, or use interdisciplinary detective games to find logical fallacies in the literature. Above all, students must be shown the scientific process as it is — with its limitations and potential pitfalls as well as its fun side, such as serendipitous discoveries and hilarious blunders.

This is exactly the gap that I am trying to fill at Johns Hopkins University in Baltimore, Maryland, where a new graduate science programme is entering its second year. Microbiologist Arturo Casadevall and I began pushing for reform in early 2015, citing the need to put the philosophy back into the doctorate of philosophy: that is, the ‘Ph’ back into the PhD.

The article is here.

Thursday, December 21, 2017

An AI That Can Build AI

Dom Galeon and Kristin Houser
Futurism.com
Originally published on December 1, 2017

Here is an excerpt:

Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future.

Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organization focused on the responsible development of AI. The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google’s parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI.

Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.

The information is here.

Wednesday, August 9, 2017

Career of the Future: Robot Psychologist

Christopher Mims
The Wall Street Journal
Originally published July 9, 2017

Artificial-intelligence engineers have a problem: They often don’t know what their creations are thinking.

As artificial intelligence grows in complexity and prevalence, it also grows more powerful. AI already has factored into decisions about who goes to jail and who receives a loan. There are suggestions AI should determine who gets the best chance to live when a self-driving car faces an unavoidable crash.

Defining AI is slippery and growing more so, as startups slather the buzzword over whatever they are doing. It is generally accepted as any attempt to ape human intelligence and abilities.

One subset that has taken off is neural networks, systems that “learn” as humans do through training, turning experience into networks of simulated neurons. The result isn’t code, but an unreadable, tangled mass of millions—in some cases billions—of artificial neurons, which explains why those who create modern AIs can be befuddled as to how they solve tasks.

Most researchers agree the challenge of understanding AI is pressing. If we don’t know how an artificial mind works, how can we ascertain its biases or predict its mistakes?

We won’t know in advance if an AI is racist, or what unexpected thought patterns it might have that would make it crash an autonomous vehicle. We might not know about an AI’s biases until long after it has made countless decisions. It’s important to know when an AI will fail or behave unexpectedly—when it might tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”

“A big problem is people treat AI or machine learning as being very neutral,” said Tracy Chou, a software engineer who worked with machine learning at Pinterest Inc. “And a lot of that is people not understanding that it’s humans who design these models and humans who choose the data they are trained on.”
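
The article doesn't offer code, but one common first step toward the "robot psychologist" role is a bias audit: shuffle each input feature in turn and measure how much the model's performance drops. The sketch below uses a synthetic dataset and an off-the-shelf classifier purely for illustration.

```python
# Sketch: probing which features a trained model actually leans on, via
# permutation importance. The dataset and model are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in accuracy; large drops mark the
# features the model's decisions actually depend on (a first pass at auditing).
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: {importance:.3f}")
```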

The article is here.

Thursday, August 18, 2016

Why ‘smart drugs’ can make you less clever

Nadira Faber
The Conversation
Originally posted July 26, 2016

It is an open secret: while athletes dope their bodies, regular office workers dope their brains. They buy prescription drugs such as Ritalin or Provigil on the internet’s flourishing black market to boost their cognitive performance.

It is hard to get reliable data on how many people take such “smart drugs” or “pharmacological cognitive enhancement substances”, as scientists call them. Prevalence studies and surveys suggest, though, that people from different walks of life use them, such as researchers, surgeons, and students. In an informal poll among readers of the journal Nature, 20% reported that they had taken smart drugs. And it seems that their use is on the rise.

So, if you are in a demanding and competitive job, some of your colleagues probably take smart drugs. Does this thought worry you? If so, you are not alone. Studies consistently find that people see brain doping negatively.

The article is here.

Saturday, April 23, 2016

Computer creates high-tech Rembrandt counterfeit

Michael Franco
Gizmag
Originally posted April 6, 2016

In conversations about artificial intelligence and the time when machines will be able to function as well as — or better than — human beings, it's often said that one thing computers will never be able to do is create art and music the way we do. Well, that argument just lost a bit of steam thanks to a project that's been carried out by Microsoft and ING. Working with the Technical University of Delft and two museums in the Netherlands, the project, called "Next Rembrandt," used algorithms and a 3D printer to create a brand-new Rembrandt painting that looks like it could easily have been delivered by the Dutch Master's own hand about 350 years ago.

The article and video are here.

Tuesday, December 29, 2015

AI is different because it lets machines weld the emotional with the physical

Peter McOwan
The Conversation
Originally published December 10, 2015

Here is an excerpt:

Creative intelligence

However, many are sensitive to the idea of artificial intelligence being artistic – entering the sphere of human intelligence and creativity. AI can learn to mimic the artistic process of painting, literature, poetry and music, but it does so by learning the rules, often from access to large datasets of existing work from which it extracts patterns and applies them. Robots may be able to paint – applying a brush to canvas, deciding on shapes and colours – but based on processing the example of human experts. Is this creating, or copying? (The same question has been asked of humans too.)
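
As a toy illustration of "learning the rules" from existing work and reapplying them, the sketch below fits a character-level Markov chain to a tiny corpus and samples new text from the extracted statistics. The corpus is an illustrative stub, and real generative systems are of course far richer, but the create-or-copy question is already visible at this scale.

```python
# Sketch: mimicry by pattern extraction. A character-level Markov chain "learns"
# which character tends to follow each two-character context in a corpus, then
# generates new text by reapplying those statistics. Corpus is an illustrative stub.
import random
from collections import defaultdict

corpus = "the old master painted light and shadow with a steady hand and a patient eye"
order = 2

transitions = defaultdict(list)
for i in range(len(corpus) - order):
    context = corpus[i:i + order]
    transitions[context].append(corpus[i + order])

random.seed(0)
context = corpus[:order]
output = context
for _ in range(60):
    choices = transitions.get(context)
    if not choices:
        break
    output += random.choice(choices)
    context = output[-order:]

print(output)   # statistically plausible, but only a recombination of the source
```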

The entire article is here.