Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Computational Modeling.

Thursday, February 8, 2024

People's thinking plans adapt to the problem they're trying to solve

Ongchoco, J. D., Knobe, J., & Jara-Ettinger, J. (2024).
Cognition, 243, 105669.

Abstract

Much of our thinking focuses on deciding what to do in situations where the space of possible options is too large to evaluate exhaustively. Previous work has found that people do this by learning the general value of different behaviors, and prioritizing thinking about high-value options in new situations. Is this good-action bias always the best strategy, or can thinking about low-value options sometimes become more beneficial? Can people adapt their thinking accordingly based on the situation? And how do we know what to think about in novel events? Here, we developed a block-puzzle paradigm that enabled us to measure people's thinking plans and compare them to a computational model of rational thought. We used two distinct response methods to explore what people think about—a self-report method, in which we asked people explicitly to report what they thought about, and an implicit response time method, in which we used people's decision-making times to reveal what they thought about. Our results suggest that people can quickly estimate the apparent value of different options and use this to decide what to think about. Critically, we find that people can flexibly prioritize whether to think about high-value options (Experiments 1 and 2) or low-value options (Experiments 3, 4, and 5), depending on the problem. Through computational modeling, we show that these thinking strategies are broadly rational, enabling people to maximize the value of long-term decisions. Our results suggest that thinking plans are flexible: What we think about depends on the structure of the problems we are trying to solve.


Some thoughts:

The study is based on the idea that people have "thinking plans" which are essentially roadmaps that guide our thoughts and actions when we are trying to solve a problem. These thinking plans are not static, but rather can change and adapt depending on the specific problem we are facing.

For example, if we are trying to solve a math problem, our thinking plan might involve breaking the problem down into smaller steps, identifying the relevant information, and applying the appropriate formulas. However, if we are trying to solve a social problem, our thinking plan might involve considering the different perspectives of the people involved, identifying potential solutions, and evaluating the consequences of each solution.

The study used computational modeling to compare how people actually plan their thinking with how an ideally rational thinker would. The comparison showed that people's thinking plans were flexible and adapted to the specific problem at hand. It also showed that these thinking plans were broadly rational, meaning that they helped people maximize the long-term value of their decisions.

The findings of the study have important implications for education and other fields that are concerned with human decision-making. The study suggests that it is important to teach people how to think flexibly and adapt their thinking plans to different situations. It also suggests that we should not expect people to always make the "right" decision, as the best course of action will often depend on the specific circumstances.
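A minimal sketch in Python may help make "thinking plans" concrete. Everything below is hypothetical: the payoffs, the two toy tasks, and function names like plan_thinking are my own invention, not the authors' block-puzzle model. The agent forms quick, noisy impressions of each option, spends a limited thinking budget scrutinizing either the apparently best or the apparently worst options, and then acts on the resulting estimates.

import random

def plan_thinking(apparent, budget, prioritize):
    """Pick which options to think hard about, given a limited thinking budget."""
    order = sorted(range(len(apparent)), key=lambda i: apparent[i],
                   reverse=(prioritize == "high"))
    return set(order[:budget])

def refine(true_values, noise, budget, prioritize):
    """Form quick noisy impressions, then think carefully about a chosen subset."""
    apparent = [v + random.gauss(0, noise) for v in true_values]
    considered = plan_thinking(apparent, budget, prioritize)
    # Careful thought reveals the true value of the considered options;
    # everything else is judged by first impressions alone.
    return [true_values[i] if i in considered else apparent[i]
            for i in range(len(true_values))]

def choose_best(true_values, noise, budget, prioritize):
    """Task A: keep one option; the payoff is that option's true value."""
    est = refine(true_values, noise, budget, prioritize)
    pick = max(range(len(est)), key=lambda i: est[i])
    return true_values[pick]

def discard_worst(true_values, noise, budget, prioritize):
    """Task B: throw one option away; the payoff is the average of what remains."""
    est = refine(true_values, noise, budget, prioritize)
    drop = min(range(len(est)), key=lambda i: est[i])
    kept = [v for i, v in enumerate(true_values) if i != drop]
    return sum(kept) / len(kept)

random.seed(1)
values = [6, 5, 4, 3, -2]   # hypothetical payoffs, one of which is genuinely bad
for task_name, task in (("choose best", choose_best), ("discard worst", discard_worst)):
    for strategy in ("high", "low"):
        runs = [task(values, noise=3.0, budget=2, prioritize=strategy)
                for _ in range(5000)]
        print(f"{task_name:13s} | think about {strategy}-value options: "
              f"{sum(runs) / len(runs):.2f}")

In this toy setup, thinking about apparently high-value options tends to pay off when the task is to keep the best option, while thinking about apparently low-value options tends to pay off when the task is to discard the worst one, which is the flavor of problem-dependent flexibility the paper reports.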

Friday, January 5, 2024

Mathematical and Computational Modeling of Suicide as a Complex Dynamical System

Wang, S. B., Robinaugh, D., et al.
(2023, September 24). 

Abstract

Background:

Despite decades of research, the current suicide rate is nearly identical to what it was 100 years ago. This slow progress is due, at least in part, to a lack of formal theories of suicide. Existing suicide theories are instantiated verbally, omitting details required for precise explanation and prediction, rendering them difficult to effectively evaluate and difficult to improve.  By contrast, formal theories are instantiated mathematically and computationally, allowing researchers to precisely deduce theory predictions, rigorously evaluate what the theory can and cannot explain, and thereby, inform how the theory can be improved.  This paper takes the first step toward addressing the need for formal theories in suicide research by formalizing an initial, general theory of suicide and evaluating its ability to explain suicide-related phenomena.

Methods:

First, we formalized a General Escape Theory of Suicide as a system of stochastic and ordinary differential equations. Second, we used these equations to simulate behavior of the system over time. Third, we evaluated if the formal theory produced robust suicide-related phenomena including rapid onset and brief duration of suicidal thoughts, and zero-inflation of suicidal thinking in time series data.

Results:

Simulations successfully produced the proposed suicidal phenomena (i.e., rapid onset, short duration, and high zero-inflation of suicidal thoughts in time series data). Notably, these simulations also produced theorized phenomena following from the General Escape Theory of Suicide: that suicidal thoughts emerge when alternative escape behaviors fail to effectively regulate aversive internal states, and that effective use of long-term strategies may prevent the emergence of suicidal thoughts.

Conclusions:

To our knowledge, the model developed here is the first formal theory of suicide, which was able to produce, and thus explain, well-established phenomena documented in the suicide literature. We discuss the next steps in a research program dedicated to studying suicide as a complex dynamical system, and describe how the integration of formal theories and empirical research may advance our understanding, prediction, and prevention of suicide.

My take:

In essence, the paper demonstrates the potential value of using computational modeling and formal theorizing to improve understanding and prediction of suicidal behaviors, breaking from a reliance on narrative theories that have failed to significantly reduce suicide rates over the past century. The formal modeling approach allows more rigorous evaluation and refinement of theories over time.
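To make "formalizing a theory as a system of stochastic and ordinary differential equations" concrete, here is a toy simulation in roughly that spirit. It is emphatically not the authors' model: every variable, parameter, and equation below is invented for illustration. An aversive internal state is buffeted by occasional random stressors, an escape urge builds only while that state stays above a threshold, and the urge feeds back to push the state down, so the urge sits at zero most of the time and spikes briefly, loosely echoing the rapid-onset, short-duration, zero-inflated pattern described in the Results.

import random

# Toy escape-theory-style dynamics (illustrative only, not the paper's equations).
# A(t): aversive internal state, pushed up by occasional random stressors.
# U(t): urge to escape, which builds only while A exceeds a threshold and
#       feeds back by suppressing A (escape behavior regulating the state).

def simulate(hours=200.0, dt=0.1, threshold=2.0, shock_rate=0.1,
             escape_strength=0.8, seed=0):
    rng = random.Random(seed)
    A, U = 0.0, 0.0
    trace = []
    for step in range(int(hours / dt)):
        if rng.random() < shock_rate * dt:          # a stressful event arrives
            A += rng.uniform(1.0, 3.0)
        dA = (-0.3 * A - escape_strength * U) * dt  # natural decay plus regulation
        dU = (1.5 * max(0.0, A - threshold) - U) * dt
        A = max(0.0, A + dA)                        # crude Euler step
        U = max(0.0, U + dU)
        trace.append((step * dt, A, U))
    return trace

trace = simulate()
urges = [u for _, _, u in trace]
elevated = sum(1 for u in urges if u > 0.2)
onsets = sum(1 for i in range(1, len(urges)) if urges[i] > 0.2 and urges[i - 1] <= 0.2)
print(f"fraction of time with an elevated urge: {elevated / len(urges):.1%}")
print(f"distinct brief episodes: {onsets}")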

Saturday, August 27, 2022

Counterfactuals and the logic of causal selection

Quillien, T., & Lucas, C. G. (2022, June 13)
https://doi.org/10.31234/osf.io/ts76y

Abstract

Everything that happens has a multitude of causes, but people make causal judgments effortlessly. How do people select one particular cause (e.g. the lightning bolt that set the forest ablaze) out of the set of factors that contributed to the event (the oxygen in the air, the dry weather. . . )? Cognitive scientists have suggested that people make causal judgments about an event by simulating alternative ways things could have happened. We argue that this counterfactual theory explains many features of human causal intuitions, given two simple assumptions. First, people tend to imagine counterfactual possibilities that are both a priori likely and similar to what actually happened. Second, people judge that a factor C caused effect E if C and E are highly correlated across these counterfactual possibilities. In a reanalysis of existing empirical data, and a set of new experiments, we find that this theory uniquely accounts for people’s causal intuitions.

From the General Discussion

Judgments of causation are closely related to assignments of blame, praise, and moral responsibility.  For instance, when two cars crash at an intersection, we say that the accident was caused by the driver who went through a red light (not by the driver who went through a green light; Knobe and Fraser, 2008; Icard et al., 2017; Hitchcock and Knobe, 2009; Roxborough and Cumby, 2009; Alicke, 1992; Willemsen and Kirfel, 2019); and we also blame that driver for the accident. According to some theorists, the fact that we judge the norm-violator to be blameworthy or morally responsible explains why we judge that he was the cause of the accident. This might be because our motivation to blame distorts our causal judgment (Alicke et al., 2011), because our intuitive concept of causation is inherently normative (Sytsma, 2021), or because of pragmatics confounds in the experimental tasks that probe the effect of moral violations on causal judgment (Samland & Waldmann, 2016).

Under these accounts, the explanation for why moral considerations affect causal judgment should be completely different than the explanation for why other factors (e.g., prior probabilities, what happened in the actual world, the causal structure of the situation) affect causal judgment. We favor a more parsimonious account: the counterfactual approach to causal judgment (of which our theory is one instantiation) provides a unifying explanation for the influence of both moral and non-moral considerations on causal judgment (Hitchcock & Knobe, 2009).

Finally, many formal theories of causal reasoning aim to model how people make causal inferences (e.g. Cheng, 1997; Griffiths & Tenenbaum, 2005; Lucas & Griffiths, 2010; Bramley et al., 2017; Jenkins & Ward, 1965). These theories are not concerned with the problem of causal selection, the focus of the present paper. It is in principle possible that people use the same algorithms they use for causal inference when they engage in causal selection, but in practice models of causal inference have not been able to predict how people select causes (see Quillien and Barlev, 2022; Morris et al., 2019).
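The two assumptions in the abstract (sample counterfactual possibilities that are a priori likely and close to what actually happened; score a candidate cause by its correlation with the effect across those samples) are simple enough to sketch in code. The sampling scheme and numbers below are my own simplification for the classic forest-fire case, not the authors' exact model.

import random

# A toy version of counterfactual causal selection for the forest-fire case.
# Step 1: sample counterfactual worlds that stay close to what actually happened,
#         with each factor otherwise resampled from its prior probability.
# Step 2: score each candidate cause by its correlation with the effect
#         across those counterfactual worlds.

def sample_worlds(actual, priors, stability=0.8, n=10000, seed=0):
    rng = random.Random(seed)
    worlds = []
    for _ in range(n):
        world = {f: actual[f] if rng.random() < stability else (rng.random() < priors[f])
                 for f in actual}
        world["fire"] = world["lightning"] and world["oxygen"]  # simple causal structure
        worlds.append(world)
    return worlds

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

actual = {"lightning": True, "oxygen": True}   # what actually happened
priors = {"lightning": 0.1, "oxygen": 0.99}    # oxygen is almost always present
worlds = sample_worlds(actual, priors)
fire = [w["fire"] for w in worlds]
for factor in ("lightning", "oxygen"):
    print(factor, round(correlation([w[factor] for w in worlds], fire), 2))

Across the sampled counterfactuals, lightning is strongly correlated with fire while oxygen barely is, reproducing the intuition that we cite the lightning, not the oxygen, as the cause.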

Wednesday, February 16, 2022

AI ethics in computational psychiatry: From the neuroscience of consciousness to the ethics of consciousness

Wiese, W. and Friston, K.J.
Behavioural Brain Research
Volume 420, 26 February 2022, 113704

Abstract

Methods used in artificial intelligence (AI) overlap with methods used in computational psychiatry (CP). Hence, considerations from AI ethics are also relevant to ethical discussions of CP. Ethical issues include, among others, fairness and data ownership and protection. Apart from this, morally relevant issues also include potential transformative effects of applications of AI—for instance, with respect to how we conceive of autonomy and privacy. Similarly, successful applications of CP may have transformative effects on how we categorise and classify mental disorders and mental health. Since many mental disorders go along with disturbed conscious experiences, it is desirable that successful applications of CP improve our understanding of disorders involving disruptions in conscious experience. Here, we discuss prospects and pitfalls of transformative effects that CP may have on our understanding of mental disorders. In particular, we examine the concern that even successful applications of CP may fail to take all aspects of disordered conscious experiences into account.


Highlights

•  Considerations from AI ethics are also relevant to the ethics of computational psychiatry.

•  Ethical issues include, among others, fairness and data ownership and protection.

•  They also include potential transformative effects.

•  Computational psychiatry may transform conceptions of mental disorders and health.

•  Disordered conscious experiences may pose a particular challenge.

From the Discussion

At present, we are far from having a formal account of conscious experience. As mentioned in the introduction, many empirical theories of consciousness make competing claims, and there is still much uncertainty about the neural mechanisms that underwrite ordinary conscious processes (let alone psychopathology). Hence, the suggestion to foster research on the computational correlates of disordered conscious experiences should not be regarded as an invitation to ignore subjective reports. The patient’s perspective will continue to be central for normatively assessing their experienced condition. Computational models offer constructs to better describe and understand elusive aspects of a disordered conscious experience, but the patient will remain the primary authority on whether they are suffering from their condition. 

Wednesday, November 24, 2021

Moral masters or moral apprentices? A connectionist account of sociomoral evaluation in preverbal infants

Benton, D. T., & Lapan, C. 
(2021, February 21). 
https://doi.org/10.31234/osf.io/mnh35

Abstract

Numerous studies suggest that preverbal infants possess the ability to make sociomoral judgements and demonstrate a preference for prosocial agents. Some theorists argue that infants possess an “innate moral core” that guides their sociomoral reasoning. However, we propose that infants’ capacity for putative sociomoral evaluation and reasoning can just as likely be driven by a domain-general associative-learning mechanism that is sensitive to agent action. We implement this theoretical account in a connectionist computational model and show that it can account for the pattern of results in Hamlin et al. (2007), Hamlin and Wynn (2011), Hamlin (2013), and Hamlin, Wynn, Bloom, and Mahajan (2011). These are pioneering studies in this area and were among the first to examine sociomoral evaluation in preverbal infants. Based on the results of 5 computer simulations, we suggest that an associative-learning mechanism—instantiated as a computational (connectionist) model—can account for previous findings on preverbal infants’ capacity for sociomoral evaluation. These results suggest that an innate moral core may not be necessary to account for sociomoral evaluation in infants.

From the General Discussion

The simulations suggest that the preverbal infants' reliable choice of helpers over hinderers in Hamlin et al. (2007), Hamlin and Wynn (2011), Hamlin (2013), and Hamlin et al. (2011) could have been based on extensive real-world experience with various kinds of actions (e.g., concordant action and discordant action) and an expectation—based on a learned second-order correlation—that agents that engage in certain kinds of actions (e.g., concordant action) have the capacity for interaction, whereas agents that engage in certain kinds of other actions (e.g., discordant action) either do not have the capacity for interaction or have less of a capacity for it.

Broadly, these results are consistent with work by Powell and Spelke (2018). They found that 4- to 5½-month-old infants looked longer at characters that engaged in concordant (i.e., imitative) action with other characters than at characters that engaged in discordant (i.e., non-imitative) action with other characters (Exps. 1 and 2). Specifically, infants looked longer at characters that engaged in the same jumping motion and made the same sound as a target character than at characters that engaged in the same jumping motion but made a different sound than the target character. Our results are also consistent with their finding—which was based on a conceptual replication of Hamlin et al. (2007)—that 12-month-olds reliably reached for a character that engaged in concordant (i.e., imitative) action with the climber rather than a character that engaged in discordant (i.e., non-imitative) action with it (Exp. 4), even when those actions were non-social.
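For readers curious what an associative-learning account can look like in practice, here is a bare-bones sketch: a single layer of weights trained with the delta rule on made-up experience. It is far simpler than Benton and Lapan's actual connectionist model, and every feature, label, and number below is hypothetical.

import random

# A bare-bones associative-learning sketch in the spirit of a connectionist account:
# a single layer of weights trained with the delta rule to associate features of an
# observed action with whether the actor turned out to be a good interaction partner.

def train(experiences, lr=0.1, epochs=50, seed=0):
    rng = random.Random(seed)
    n = len(experiences[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        rng.shuffle(experiences)
        for features, interacted in experiences:
            pred = sum(wi * xi for wi, xi in zip(w, features))
            err = interacted - pred                       # delta-rule error signal
            w = [wi + lr * err * xi for wi, xi in zip(w, features)]
    return w

def evaluate(w, features):
    return sum(wi * xi for wi, xi in zip(w, features))

# Hypothetical training diet: features are [acts-toward-agent, concordant-action,
# discordant-action]; the label is whether the actor later proved to be a rewarding
# interaction partner.  Concordant actors usually do, discordant actors usually don't.
experiences = []
rng = random.Random(1)
for _ in range(200):
    concordant = rng.random() < 0.5
    feats = [1.0, 1.0 if concordant else 0.0, 0.0 if concordant else 1.0]
    label = 1.0 if rng.random() < (0.9 if concordant else 0.2) else 0.0
    experiences.append((feats, label))

w = train(experiences)
helper   = [1.0, 1.0, 0.0]   # pushes the climber toward its goal (concordant)
hinderer = [1.0, 0.0, 1.0]   # pushes the climber away from its goal (discordant)
print("helper score:  ", round(evaluate(w, helper), 2))
print("hinderer score:", round(evaluate(w, hinderer), 2))

The learned weights end up favoring the helper over the hinderer purely because of statistical structure in the (made-up) training experience, with no built-in moral knowledge, which is the general shape of the authors' argument.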