Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, June 19, 2023

On the origin of laws by natural selection

DeScioli, P.
Evolution and Human Behavior
Volume 44, Issue 3, May 2023, Pages 195-209

Abstract

Humans are lawmakers like we are toolmakers. Why do humans make so many laws? Here we examine the structure of laws to look for clues about how humans use them in evolutionary competition. We will see that laws are messages with a distinct combination of ideas. Laws are similar to threats but critical differences show that they have a different function. Instead, the structure of laws matches moral rules, revealing that laws derive from moral judgment. Moral judgment evolved as a strategy for choosing sides in conflicts by impartial rules of action—rather than by hierarchy or faction. For this purpose, humans can create endless laws to govern nearly any action. However, as prolific lawmakers, humans produce a confusion of contradictory laws, giving rise to a perpetual battle to control the laws. To illustrate, we visit some of the major conflicts over laws of violence, property, sex, faction, and power.

(cut)

Moral rules are not for cooperation

We have briefly summarized the major divisions and operations of moral judgment. Why then did humans evolve such elaborate powers of the mind devoted to moral rules? What is all this rule making for?

One common opinion is that moral rules are for cooperation. That is, we make and enforce a moral code in order to cooperate more effectively with other people. Indeed, traditional theories beginning with Darwin assume that morality is the same as cooperation. These theories successfully explain many forms of cooperation, such as why humans and other animals care for offspring, trade favors, respect property, communicate honestly, and work together in groups. For instance, theories of reciprocity explain why humans keep records of other people’s deeds in the form of reputation, why we seek partners who are nice, kind, and generous, why we praise these virtues, and why we aspire to attain them.

However, if we look closely, these theories explain cooperation, not moral judgment. Cooperation pertains to our decisions to benefit or harm someone, whereas moral judgment pertains to our judgments of someone’s action as right or wrong. The difference is crucial because these mental faculties operate independently and they evolved separately. For instance, people can use moral judgment to cooperate but also to cheat, such as a thief who hides the theft because they judge it to be wrong, or a corrupt leader who invents a moral rule that forbids criticism of the leader. Likewise, people use moral judgment to benefit others but also to harm them, such as falsely accusing an enemy of murder to imprison them.

Regarding their evolutionary history, moral judgment is a recent adaptation while cooperation is ancient and widespread, some forms as old as the origins of life and multicellular organisms. Recalling our previous examples, social animals like gorillas, baboons, lions, and hyenas cooperate in numerous ways. They care for offspring, share food, respect property, work together in teams, form reputations, and judge others’ characters as nice or nasty. But these species do not communicate rules of action, nor do they learn, invent, and debate the rules. Like language, moral judgment most likely evolved recently in the human lineage, long after complex forms of cooperation.

From the Conclusion

Having anchored ourselves to concrete laws, we next asked, What are laws for? This is the central question for any mental power because it persists only by aiding an animal in evolutionary competition. In this search, we should not be deterred by the magnificent creativity and variety of laws. Some people suppose that natural selection could impart no more than a few fixed laws in the human mind, but there are no grounds for this supposition. Natural selection designed all life on Earth and its creativity exceeds our own. The mental adaptations of animals outperform our best computer programs on routine tasks such as locomotion and vision. Why suppose that human laws must be far simpler than, for instance, the flight controllers in the brain of a hummingbird? And there are obvious counterexamples. Language is a complex adaptation but this does not mean that humans speak just a few sentences. Tool use comes from mental adaptations including an intuitive theory of physics, and again these abilities do not limit but enable the enormous variety of tools.

Sunday, June 18, 2023

Gender-Affirming Care for Trans Youth Is Neither New nor Experimental: A Timeline and Compilation of Studies

Julia Serano
Medium.com
Originally posted 16 May 23

Trans and gender-diverse people are a pancultural and transhistorical phenomenon. It is widely understood that we, like LGBTQ+ people more generally, arise from natural variation rather than as the result of pathology, modernity, or the latest conspiracy theory.

Gender-affirming healthcare has a long history. The first trans-related surgeries were carried out in the 1910s–1930s (Meyerowitz, 2002, pp. 16–21). While some doctors were supportive early on, most were wary. Throughout the mid-twentieth century, these skeptical doctors subjected trans people to all sorts of alternate treatments — from perpetual psychoanalysis, to aversion and electroshock therapies, to administering assigned-sex-consistent hormones (e.g., testosterone for trans female/feminine people), and so on — but none of them worked. The only treatment that reliably allowed trans people to live happy and healthy lives was allowing them to transition. While doctors were initially worried that many would eventually come to regret that decision, study after study has shown that gender-affirming care has a far lower regret rate (typically around 1 or 2 percent) than virtually any other medical procedure. Given all this, plus the fact that there is no test for being trans (medical, psychological, or otherwise), around the turn of the century, doctors began moving away from strict gatekeeping and toward an informed consent model for trans adults to attain gender-affirming care.

Trans children have always existed — indeed most trans adults can tell you about their trans childhoods. During the twentieth century, while some trans kids did socially transition (Gill-Peterson, 2018), most had their gender identities disaffirmed, either by parents who disbelieved them or by doctors who subjected them to “gender reparative” or “conversion” therapies. The rationale behind the latter was a belief at that time that gender identity was flexible and subject to change during early childhood, but we now know that this is not true (see e.g., Diamond & Sigmundson, 1997; Reiner & Gearhart, 2004). Over the years, it became clear that these conversion efforts were not only ineffective, but they caused real harm — this is why most health professional organizations oppose them today.

Given the harm caused by gender-disaffirming approaches, around the turn of the century, doctors and gender clinics began moving toward what has come to be known as the gender affirmative model — here’s how I briefly described this approach in my 2016 essay Detransition, Desistance, and Disinformation: A Guide for Understanding Transgender Children Debates:

Rather than being shamed by their families and coerced into gender conformity, these children are given the space to explore their genders. If they consistently, persistently, and insistently identify as a gender other than the one they were assigned at birth, then their identity is respected, and they are given the opportunity to live as a member of that gender. If they remain happy in their identified gender, then they may later be placed on puberty blockers to stave off unwanted bodily changes until they are old enough (often at age sixteen) to make an informed decision about whether or not to hormonally transition. If they change their minds at any point along the way, then they are free to make the appropriate life changes and/or seek out other identities.

Saturday, June 17, 2023

Debt Collectors Want To Use AI Chatbots To Hustle People For Money

Corin Faife
vice.com
Originally posted 18 MAY 23

Here are two excerpts:

The prospect of automated AI systems making phone calls to distressed people adds another dystopian element to an industry that has long targeted poor and marginalized people. Debt collection and enforcement is far more likely to occur in Black communities than white ones, and research has shown that predatory debt and interest rates exacerbate poverty by keeping people trapped in a never-ending cycle. 

In recent years, borrowers in the US have been piling on debt. In the fourth quarter of 2022, household debt rose to a record $16.9 trillion according to the New York Federal Reserve, accompanied by an increase in delinquency rates on larger debt obligations like mortgages and auto loans. Outstanding credit card balances are at record levels, too. The pandemic generated a huge boom in online spending, and besides traditional credit cards, younger spenders were also hooked by fintech startups pushing new finance products, like the extremely popular “buy now, pay later” model of Klarna, Sezzle, Quadpay and the like.

So debt is mounting, and with interest rates up, more and more people are missing payments. That means more outstanding debts being passed on to collection, giving the industry a chance to sprinkle some AI onto the age-old process of prodding, coaxing, and pressuring people to pay up.

For an insight into how this works, we need look no further than the sales copy of companies that make debt collection software. Here, products are described in a mix of generic corp-speak and dystopian portent: SmartAction, another conversational AI product like Skit, has a debt collection offering that claims to help with “alleviating the negative feelings customers might experience with a human during an uncomfortable process”—because they’ll surely be more comfortable trying to negotiate payments with a robot instead. 

(cut)

“Striking the right balance between assertiveness and empathy is a significant challenge in debt collection,” the company writes in the blog post, which claims GPT-4 has the ability to be “firm and compassionate” with customers.

When algorithmic, dynamically optimized systems are applied to sensitive areas like credit and finance, there’s a real possibility that bias is being unknowingly introduced. A McKinsey report into digital collections strategies plainly suggests that AI can be used to identify and segment customers by risk profile—i.e. credit score plus whatever other data points the lender can factor in—and fine-tune contact techniques accordingly. 

Friday, June 16, 2023

ChatGPT Is a Plagiarism Machine

Joseph Keegin
The Chronicle
Originally posted 23 MAY 23

Here is an excerpt:

A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

Thursday, June 15, 2023

Moralization and extremism robustly amplify myside sharing

Marie, A., Altay, S., et al.
PNAS Nexus, Volume 2, Issue 4, April 2023.

Abstract

We explored whether moralization and attitude extremity may amplify a preference to share politically congruent (“myside”) partisan news and what types of targeted interventions may reduce this tendency. Across 12 online experiments (N = 6,989), we examined decisions to share news touching on the divisive issues of gun control, abortion, gender and racial equality, and immigration. Myside sharing was systematically observed and was consistently amplified when participants (i) moralized and (ii) were attitudinally extreme on the issue. The amplification of myside sharing by moralization also frequently occurred above and beyond that of attitude extremity. These effects generalized to both true and fake partisan news. We then examined a number of interventions meant to curb myside sharing by manipulating (i) the audience to which people imagined sharing partisan news (political friends vs. foes), (ii) the anonymity of the account used (anonymous vs. personal), (iii) a message warning against the myside bias, and (iv) a message warning against the reputational costs of sharing “mysided” fake news coupled with an interactive rating task. While some of those manipulations slightly decreased sharing in general and/or the size of myside sharing, the amplification of myside sharing by moral attitudes was consistently robust to these interventions. Our findings regarding the robust exaggeration of selective communication by morality and extremism offer important insights into belief polarization and the spread of partisan and false information online.

General discussion

Across 12 experiments (N = 6,989), we explored US participants’ intentions to share true and fake partisan news on 5 controversial issues—gun control, abortion, racial equality, sex equality, and immigration—in social media contexts. Our experiments consistently show that people have a strong sharing preference for politically congruent news—Democrats even more so than Republicans. They also demonstrate that this “myside” sharing is magnified when respondents see the issue as being of “absolute moral importance”, and when they have an extreme attitude on the issue. Moreover, issue moralization was found to amplify myside sharing above and beyond attitude extremity in the majority of the studies. Expanding prior research on selective communication, our work provides a clear demonstration that citizens’ myside communicational preference is powerfully amplified by their moral and political ideology (18, 19, 39–43).

By examining this phenomenon across multiple experiments varying numerous parameters, we demonstrated the robustness of myside sharing and of its amplification by participants’ issue moralization and attitude extremity. First, those effects were consistently observed on both true (Experiments 1, 2, 3, 5a, 6a, 7, and 10) and fake (Experiments 4, 5b, 6b, 8, 9, and 10) news stories and across distinct operationalizations of our outcome variable. Moreover, myside sharing and its amplification by issue moralization and attitude extremity were systematically observed despite multiple manipulations of the sharing context. Namely, those effects were observed whether sharing was done from one's personal or an anonymous social media account (Experiments 5a and 5b), whether the audience was made of political friends or foes (Experiments 6a and 6b), and whether participants first saw intervention messages warning against the myside bias (Experiments 7 and 8), or an interactive intervention warning against the reputational costs of sharing mysided falsehoods (Experiments 9 and 10).

Wednesday, June 14, 2023

Can Robotic AI Systems Be Virtuous and Why Does This Matter?

Constantinescu, M., Crisp, R. 
Int J of Soc Robotics 14, 
1547–1557 (2022).

Abstract

The growing use of social robots in times of isolation refocuses ethical concerns for Human–Robot Interaction and its implications for social, emotional, and moral life. In this article we raise a virtue-ethics-based concern regarding deployment of social robots relying on deep learning AI and ask whether they may be endowed with ethical virtue, enabling us to speak of “virtuous robotic AI systems”. In answering this question, we argue that AI systems cannot genuinely be virtuous but can only behave in a virtuous way. To that end, we start from the philosophical understanding of the nature of virtue in the Aristotelian virtue ethics tradition, which we take to imply the ability to perform (1) the right actions (2) with the right feelings and (3) in the right way. We discuss each of the three requirements and conclude that AI is unable to satisfy any of them. Furthermore, we relate our claims to current research in machine ethics, technology ethics, and Human–Robot Interaction, discussing various implications, such as the possibility to develop Autonomous Artificial Moral Agents in a virtue ethics framework.

Conclusion

AI systems are neither moody nor dissatisfied, and they do not want revenge, which seems to be an important advantage over humans when it comes to making various decisions, including ethical ones. However, from a virtue ethics point of view, this advantage becomes a major drawback. For this also means that they cannot act out of a virtuous character, either. Despite their ability to mimic human virtuous actions and even to function behaviourally in ways equivalent to human beings, robotic AI systems cannot perform virtuous actions in accordance with virtues, that is, rightly or virtuously; nor for the right reasons and motivations; nor through phronesis take into account the right circumstances. And this has the consequence that AI cannot genuinely be virtuous, at least not with the current technological advances supporting their functional development. Nonetheless, it might well be that the more we come to know about AI, the less we know about its future. We therefore leave open the possibility of AI systems being virtuous in some distant future. This might, however, require some disruptive, non-linear evolution that includes, for instance, the possibility that robotic AI systems fully deliberate over their own versus others' goals and happiness and make their own choices and priorities accordingly. Indeed, to be a virtuous agent one needs to have the possibility to make mistakes, to reason over virtuous and vicious lines of action. But then this raises a different question: are we prepared to experience interaction with vicious robotic AI systems?

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L., McKee, K. R., et al. (2023).
PNAS, 120(18), e2213709120

Abstract

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Significance

The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.

Monday, June 12, 2023

Why some mental health professionals avoid self-care

Dattilio, F. M. (2023).
Journal of Consulting and Clinical Psychology, 
91(5), 251–253.
https://doi.org/10.1037/ccp0000818

Abstract

This article briefly discusses reasons why some mental health professionals are resistant to self-care. These reasons include the savior complex, avoidance, and lack of collegial assiduity. Several proposed solutions are offered.

Here is an excerpt:

Savior Complex

One hypothesis used to explain professionals’ resistance is what some refer to as a “savior complex.” Certain MHPs may be engaging in the cognitive distortion that it is their duty to save as many people from suffering and demise as they can, and that in turn they need to sacrifice their own psychological welfare for those facing distress. MHPs may also be skewed in their thinking that they are invulnerable to psychological and other stressors. Inherent in this distortion is their fear of being viewed as weak or ineffective, and as a result, they overcompensate by attempting to be stronger than others. This type of thinking may also involve a defense mechanism that develops early in their professional lives and emerges during the course of their work in the field. This may stem from preexisting components of their personality dynamics.

Another reason may be that the extreme rewards that professionals experience from helping others in such a desperate state of need serve as a euphoric experience for them that can be addictive. In essence, the “high” that they obtain from helping others often spurs them on.

Avoidance

Another less complicated explanation for MHPs’ blindness to their own vulnerabilities may be their strong desire to avoid admitting to their own weaknesses and sense of vulnerability. The defense mechanism of rationalization that they are stronger and healthier than everyone else may embolden them to push on even when there are visible signs to others of the stress in their lives that is compromising their functioning. 

Avoidance is also a way of sidestepping the obvious and putting it off until later. This may be coupled with the increased need for services, particularly since the recent pandemic, which has intensified the demand for mental health care.

Denial

The dismissal of MHPs’ own needs, or what some may term “denial,” is a deeper aspect that goes hand-in-hand with the cognitive distortions that develop in MHPs, but it involves a more complex level of blindness to the obvious (Bearse et al., 2013). It may also serve as a way for professionals to devalue their own emotional and psychological challenges.

Denial may also stem from an underlying fear of being deemed incapacitated or not up to the challenge by their colleagues and thus being prohibited from returning to their work or having to face limitations or restrictions. It can sometimes emanate from the fear of being reported as having engaged in unethical behavior by not seeking assistance sooner. This is particularly so in cases of MHPs who have become involved with illicit drug or alcohol abuse or addiction.

Most ethical principles mandate that MHPs strive to remain cognizant of the potential effects that their work has on their own physical and mental health status while they are in the process of treating others and to recognize when their ability to be effective has been compromised. 

Last, in some cases, MHPs’ denial can even be a response to genuine and accurately perceived expectations in a variety of work contexts where they do not have control over their schedules. This may occur more commonly with facilities or institutions that do not support the disclosure of vulnerability and stress. It is for the aforementioned reasons that the American and Canadian Psychological Associations as well as other mental health organizations have mandated special education on this topic in graduate training programs (American Psychiatric Association, 2013; Maranzan et al., 2018).

Lack of Collegial Assiduity

A final reason may involve a lack of collegial assiduity, where fellow MHPs observe their colleagues enduring signs of stress but fail to confront the individual of concern and alert them to the obvious. It is often very awkward and uncomfortable for a colleague to address this issue and risk rebuke or a negative outcome. As a result, they simply avoid it altogether, thus leaving the issue of concern unaddressed.

The article is paywalled here, which is a complete shame.  We need more access to self-care resources.

Sunday, June 11, 2023

Podcast: Ethics Education and the Impact of AI on Mental Health

Hi All-

I recently had the privilege of being interviewed on the Psyched To Practice podcast. During this wide-ranging and unscripted interview, Ray Christner, Paul Wagner, and I engage in an insightful discussion about ethics, ethical decision-making, morality, and the potential impact of artificial intelligence on the practice of psychotherapy.

After sharing a limited biographical account of my personal journey towards becoming a clinical psychologist, we delve into various topics including ethical codes, decision science, and the significant role that morality plays in shaping the practice of clinical psychology.

Although the interview runs approximately one hour and 17 minutes, I recommend taking the time to listen to it, particularly if you are an early- or mid-career mental health professional. The conversation offers valuable insights and perspectives that can greatly contribute to your professional growth and development.

Even though the podcast was not scripted, I provide below a reference list of the ideas I addressed during the interview, in alphabetical order rather than the order in which I discussed them.

References

Baxter, R. (2023, June 8). Lawyer’s AI Blunder Shows Perils of ChatGPT in ‘Early Days.’ Bloomberg Law News. Retrieved from https://news.bloomberglaw.com/business-and-practice/lawyers-ai-blunder-shows-perils-of-chatgpt-in-early-days


Chen, J., Zhang, Y., Wang, Y., Zhang, Z., Zhang, X., & Li, J. (2023). Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nature Biotechnology, 31(6), 631-636. doi:10.1038/s41587-023-00949-7


Dillon, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences, in press.


Fowler, A. (2023, June 7). Artificial intelligence could help predict breast cancer risk. USA Today. Retrieved from https://www.usatoday.com/story/news/health/2023/06/07/artificial-intelligence-breast-cancer-risk-prediction/70297531007/


Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York, NY: Pantheon Books.


Handelsman, M. M., Gottlieb, M. C., & Knapp, S. (2005). Training ethical psychologists: An acculturation model. Professional Psychology: Research and Practice, 36(1), 59–65. https://doi.org/10.1037/0735-7028.36.1.59


Heinlein, R. A. (1961). Stranger in a strange land. New York, NY: Putnam.


Knapp, S. J., & VandeCreek, L. D. (2006). Practical ethics for psychologists: A positive approach. Washington, DC: American Psychological Association.


MacIver, M. B. (2022). Consciousness and inward electromagnetic field interactions. Frontiers in Human Neuroscience, 16, 1032339. https://doi.org/10.3389/fnhum.2022.1032339


Persson, G., Restori, K. H., Emdrup, J. H., Schussek, S., Klausen, M. S., Nicol, M. J., Katkere, B., Rønø, B., Kirimanjeswara, G., & Sørensen, A. B. (2023). DNA immunization with in silico predicted T-cell epitopes protects against lethal SARS-CoV-2 infection in K18-hACE2 mice. Frontiers in Immunology, 14, 1166546. doi:10.3389/fimmu.2023.1166546


Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 25, pp. 1–65). New York, NY: Academic Press.