Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Education.

Tuesday, May 28, 2024

How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence?

Bower, M., Torrington, J., Lai, J.W.M. et al.
Educ Inf Technol (2024).

Abstract

There has been widespread media commentary about the potential impact of generative Artificial Intelligence (AI) such as ChatGPT on the Education field, but little examination at scale of how educators believe teaching and assessment should change as a result of generative AI. This mixed methods study examines the views of educators (n = 318) from a diverse range of teaching levels, experience levels, discipline areas, and regions about the impact of AI on teaching and assessment, the ways that they believe teaching and assessment should change, and the key motivations for changing their practices. The majority of teachers felt that generative AI would have a major or profound impact on teaching and assessment, though a sizeable minority felt it would have a little or no impact. Teaching level, experience, discipline area, region, and gender all significantly influenced perceived impact of generative AI on teaching and assessment. Higher levels of awareness of generative AI predicted higher perceived impact, pointing to the possibility of an ‘ignorance effect’. Thematic analysis revealed the specific curriculum, pedagogy, and assessment changes that teachers feel are needed as a result of generative AI, which centre around learning with AI, higher-order thinking, ethical values, a focus on learning processes and face-to-face relational learning. Teachers were most motivated to change their teaching and assessment practices to increase the performance expectancy of their students and themselves. We conclude by discussing the implications of these findings in a world with increasingly prevalent AI.


Here is a quick summary:

A recent study surveyed teachers about the impact of generative AI, like ChatGPT, on education. The majority of teachers believed AI would significantly change how they teach and assess students. Interestingly, teachers with more awareness of AI predicted a greater impact, suggesting a potential "ignorance effect."

The study also explored how teachers think education should adapt. The focus shifted towards teaching students how to learn with AI, emphasizing critical thinking, ethics, and the learning process itself. This would involve less emphasis on rote memorization and regurgitation of information that AI can readily generate. Teachers also highlighted the importance of maintaining strong face-to-face relationships with students in this evolving educational landscape.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology connect to ideas in machine learning, but also to those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.
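The contrast the authors draw — people hedge with phrases like “I think,” while generative models answer with uniform fluency — can be made concrete with a toy sketch. Nothing below comes from the article: the function, thresholds, and phrasings are invented purely to illustrate what a verbal uncertainty signal tied to model confidence might look like.

```python
def hedge(statement: str, confidence: float) -> str:
    """Prefix a statement with a verbal uncertainty marker.

    Mimics the human habit of signalling uncertainty with phrases
    like "I think"; the thresholds are arbitrary, for illustration only.
    """
    if confidence >= 0.9:
        return statement
    if confidence >= 0.6:
        return "I think " + statement[0].lower() + statement[1:]
    return "I'm not sure, but possibly " + statement[0].lower() + statement[1:]

# A fluent but lower-confidence answer gets an explicit hedge,
# giving the listener the cue that a human speaker would provide.
print(hedge("The capital of Australia is Canberra.", 0.95))
print(hedge("The meeting is on Tuesday.", 0.7))
print(hedge("The meeting is on Tuesday.", 0.2))
```

The point of the sketch is the design choice, not the wording: surfacing any calibrated confidence at all gives listeners the cue that, per the excerpt, current models withhold.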

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality—and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that can be used to address AI-induced belief distortion:

Transparency: AI models should be transparent about their biases and limitations. This will help people to understand the limitations of AI models and to be more critical of the information that they generate.

Education: People should be educated about the potential for AI models to distort beliefs. This will make them more aware of the risks of relying on AI models and more critical of the information those models generate.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Thursday, November 3, 2022

What Makes a Great Life?

Jon Clifton
Gallup.com
Originally posted September 22, 2022

While many things contribute to a great life, Gallup finds five aspects that all people have in common: their work, finances, physical health, communities, and relationships with family and friends. If you are excelling in each of these elements of wellbeing, you are highly likely to be thriving in life.

(cut)

Gallup's research as well as research by the global community of wellbeing practitioners has produced hundreds, if not thousands, of discoveries.

One of the most famous discoveries is the U-curve of happiness, which shows how age is associated with wellbeing. Young people rate their lives high, and so do older people. But middle-aged people rate their lives the lowest. This trend holds every year in almost every country in the world. It is nicknamed the "U-curve" of happiness because when you look at the graph, it looks like a "U." Some jokingly say that the chart is smiling.

Some discoveries are astonishing; others feel like they reveal a "blandly sophomoric secret," as George Gallup referred to some of his longevity findings. For example, you could argue that the U-curve of happiness simply quantifies conventional wisdom -- that people have midlife crises.

Here are a few of the discoveries that are truly compelling:
  • People who love their jobs do not hate Mondays.
  • Education-related debt can cause an emotional scar that remains even after you pay off the debt.
  • Volunteering is not just good for the people you are helping; it is also good for you.
  • Exercising is better at eliminating fatigue than prescription drugs.
  • Loneliness can double your risk of dying from heart disease.
We could list every insight ever produced from this research and encourage leaders to work on all of them. Instead, we took another approach. Using all these insights from across the industry combined with our surveys and analysis, we created the five elements of wellbeing. And our ongoing global research confirms that the five elements of wellbeing are significant drivers of a great life everywhere.

Thursday, February 3, 2022

Neural computations in children’s third-party interventions are modulated by their parents’ moral values

Kim, M., Decety, J., Wu, L. et al.
npj Sci. Learn. 6, 38 (2021). 
https://doi.org/10.1038/s41539-021-00116-5

Abstract

One means by which humans maintain social cooperation is through intervention in third-party transgressions, a behaviour observable from the early years of development. While it has been argued that pre-school age children’s intervention behaviour is driven by normative understandings, there is scepticism regarding this claim. There is also little consensus regarding the underlying mechanisms and motives that initially drive intervention behaviours in pre-school children. To elucidate the neural computations of moral norm violation associated with young children’s intervention into third-party transgression, forty-seven preschoolers (average age 53.92 months) participated in a study comprising electroencephalographic (EEG) measurements, a live interaction experiment, and a parent survey about moral values. This study provides data indicating that early implicit evaluations, rather than late deliberative processes, are implicated in a child’s spontaneous intervention into third-party harm. Moreover, our findings suggest that parents’ values about justice influence their children’s early neural responses to third-party harm and their overt costly intervention behaviour.

From the Discussion

Our study further provides evidence that children, as young as 3 years of age, can enact costly third-party intervention by protesting and reporting. Previous research has shown that young children from age 3 enact third-party punishment against transgressors shown in videos or puppet shows (9,10). In the present study, in the context of a real-life transgression experiment, even the youngest participant (41 months old) engaged in costly intervention, by hinting disapproval to the adult transgressor (“Why are you doing that?”) and subsequently reporting the damage when prompted. During the experiment, confounding factors such as a sense of ‘responsibility’ were avoided by keeping the person playing the ‘research assistant’ role out of the room when the transgression occurred. Furthermore, when leaving the room, the ‘research assistant’ did not assign the children any special role to police or monitor the actions of the ‘visitor’ (who would transgress). Moreover, the transgressor was not an acquaintance of the child, and the book was said to belong to a university (not the child’s school nor the researchers), hence giving little sense of in-group/out-group membership (11,60). Also, the participating children would likely attribute ‘power’ and ‘authority’ to the visitor/transgressor, as an adult (26). Nevertheless, in the real-life experimental context, 34.8% of children explicitly protested to the adult wrong-doer.

(cut)

It should be emphasized that parents’ cognitive empathy was not implicated in the child’s neural computations of moral norms or their spontaneous intervention behaviour. However, parents’ cognitive empathy had a positive correlation with a child’s effortful control and their subsequent report behaviour. This distinct contribution made by two different dispositions (cognitive empathy and justice sensitivity) suggests that parenting strategies necessary to enhance a child’s moral development require both aspects: perspective-taking and understanding of moral values.

Sunday, May 2, 2021

The Quest to Tell Science from Pseudoscience

Michael D. Gordin
Boston Review
Originally published March 23, 2021

Here is an excerpt:

Two incidents sparked a reevaluation. The first was the Soviet Union’s launch of the first artificial satellite, Sputnik, on October 4, 1957. The success triggered an extensive discussion about whether the United States had fallen behind in science education, and reform proposals were mooted for many different areas. Then the centenary of the publication of Darwin’s On the Origin of Species (1859) prompted biologists to decry that “one hundred years without Darwinism are enough!” The Biological Sciences Curriculum Study, an educational center funded by a grant from the National Science Foundation, recommended an overhaul of secondary school education in the life sciences, with Darwinism (and human evolution) given a central place.

The cease-fire between the evolutionists and Christian fundamentalists had been broken. In the 1960s religious groups countered with a series of laws insisting on “equal time”: if Darwinism (or “evolution science”) was required, then it should be balanced with an equivalent theory, “creation science.” Cases from both Arkansas and Louisiana made it to the appellate courts in the early 1980s. The first, McLean v. Arkansas Board of Education, saw a host of expert witnesses spar over whether Darwinism was science, whether creation science also met the definition of science, and the limits of the Constitution’s establishment clause. A crucial witness for the evolutionists was Michael Ruse, a philosopher of science at the University of Guelph in Ontario. Ruse testified to several different demarcation criteria and contended that accounts of the origins of humanity based on Genesis could not satisfy them. One of the criteria he floated was Popper’s.

Judge William Overton, in his final decision in January 1982, cited Ruse’s testimony when he argued that falsifiability was a standard for determining whether a doctrine was science—and that scientific creationism did not meet it. (Ruse walked his testimony back a decade later.) Overton’s appellate court decision was expanded by the U.S. Supreme Court in Edwards v. Aguillard (1987), the Louisiana case; the result was that Popper’s falsifiability was incorporated as a demarcation criterion in a slew of high school biology texts. No matter that the standard was recognized as bad philosophy; as a matter of legal doctrine it was enshrined. (In his 2005 appellate court decision in Kitzmiller v. Dover Area School District, Judge John E. Jones III modified the legal demarcation standards by eschewing Popper and promoting several less sharp but more apposite criteria while deliberating over the teaching of a doctrine known as “intelligent design,” a successor of creationism crafted to evade the precedent of Edwards.)

Friday, October 23, 2020

Ethical Dimensions of Using Artificial Intelligence in Health Care

Michael J. Rigby
AMA Journal of Ethics
February 2019

An artificially intelligent computer program can now diagnose skin cancer more accurately than a board-certified dermatologist. Better yet, the program can do it faster and more efficiently, requiring a training data set rather than a decade of expensive and labor-intensive medical education. While it might appear that it is only a matter of time before physicians are rendered obsolete by this type of technology, a closer look at the role this technology can play in the delivery of health care is warranted to appreciate its current strengths, limitations, and ethical complexities.

Artificial intelligence (AI), which includes the fields of machine learning, natural language processing, and robotics, can be applied to almost any field in medicine, and its potential contributions to biomedical research, medical education, and delivery of health care seem limitless. With its robust ability to integrate and learn from large sets of clinical data, AI can serve roles in diagnosis, clinical decision making, and personalized medicine. For example, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a “second opinion” for radiologists. In addition, advanced virtual human avatars are capable of engaging in meaningful conversations, which has implications for the diagnosis and treatment of psychiatric disease. AI applications also extend into the physical realm with robotic prostheses, physical task support systems, and mobile manipulators assisting in the delivery of telemedicine.

Nonetheless, this powerful technology creates a novel set of ethical challenges that must be identified and mitigated since AI technology has tremendous capability to threaten patient preference, safety, and privacy. However, current policy and ethical guidelines for AI technology are lagging behind the progress AI has made in the health care field. While some efforts to engage in these ethical conversations have emerged, the medical community remains ill informed of the ethical complexities that budding AI technology can introduce. Accordingly, a rich discussion awaits that would greatly benefit from physician input, as physicians will likely be interfacing with AI in their daily practice in the near future.

Wednesday, March 11, 2020

Expertise in Child Abuse?

Mike Hixenbaugh & Taylor Mirfendereski
NBCnews.com
Originally posted February 14, 2020

Here is an excerpt:

Contrary to Woods’ testimony, there are more than 375 child abuse pediatricians certified by the American Board of Pediatrics in the U.S., all of whom have either completed an extensive fellowship program — first offered, not three, but nearly 15 years ago, while Woods was still in medical school — or spent years examining cases of suspected abuse prior to the creation of the medical subspecialty in 2009. The doctors are trained to differentiate accidental from inflicted injuries, which child abuse pediatricians say makes them better qualified than other doctors to determine whether a child has been abused. At least three physicians have met those qualifications and are practicing as board-certified child abuse pediatricians in the state of Washington.

Woods is not one of them.

Despite her lack of fellowship training, state child welfare and law enforcement officials in Washington have granted Woods remarkable influence over their decisions about whether to remove children from parents or pursue criminal charges, NBC News and KING 5 found. In four cases reviewed by reporters, child welfare workers took children from parents based on Woods’ reports — including some in which Woods misstated key facts, according to a review of records — despite contradictory opinions from other medical experts who said they saw no evidence of abuse.

In one instance, a pediatrician, Dr. Niran Al-Agba, insisted that a 2-year-old child’s bruise matched her parents’ description of an accidental fall onto a heating grate in their home. But Child Protective Services workers, who’d gotten a call from the child’s day care after someone noticed the bruise, asked Woods to look at photos of the injury.

Woods reported that the mark was most likely the result of abuse, even though she’d never seen the child in person or talked to the parents. The agency sided with her. To justify that decision, the Child Protective Services worker described Woods as “a physician with extensive training and experience in regard to child abuse and neglect,” according to a written report reviewed by reporters.


Tuesday, January 7, 2020

Can Artificial Intelligence Increase Our Morality?

Matthew Hutson
psychologytoday.com
Originally posted December 9, 2019

Here is an excerpt:

For sure, designing technologies to encourage ethical behavior raises the question of which behaviors are ethical. Vallor noted that paternalism can preclude pluralism, but just to play devil’s advocate I raised the argument for pluralism up a level and noted that some people support paternalism. Most in the room were from WEIRD cultures—Western, educated, industrialized, rich, democratic—and so China’s social credit system feels Orwellian, but many in China don’t mind it.

The biggest question in my mind after Vallor’s talk was about the balance between self-cultivation and situation-shaping. Good behavior results from both character and context. To what degree should we focus on helping people develop a moral compass and fortitude, and to what degree should we focus on nudges and social platforms that make morality easy?

The two approaches can also interact in interesting ways. Occasionally extrinsic rewards crowd out intrinsic drives: If you earn points for good deeds, you come to expect them and don’t value goodness for its own sake. Sometimes, however, good deeds perform a self-signaling function, in which you see them as a sign of character. You then perform more good deeds to remain consistent. Induced cooperation might also act as a social scaffolding for bridges of trust that can later stand on their own. It could lead to new setpoints of collective behavior, self-sustaining habits of interaction.


Friday, December 20, 2019

Study offers first large-sample evidence of the effect of ethics training on financial sector behavior

Shannon Roddel
phys.org
Originally posted November 21, 2019


Here is an excerpt:

"Behavioral ethics research shows that business people often do not recognize when they are making ethical decisions," he says. "They approach these decisions by weighing costs and benefits, and by using emotion or intuition."

These results are consistent with the exam playing a "priming" role, where early exposure to rules and ethics material prepares the individual to behave appropriately later. Those passing the exam without prior misconduct appear to respond most to the amount of rules and ethics material covered on their exam. Those already engaging in misconduct, or having spent several years working in the securities industry, respond least or not at all.

The study also examines what happens when people with more ethics training find themselves surrounded by bad behavior, revealing these individuals are more likely to leave their jobs.

"We study this effect both across organizations and within Wells Fargo, during their account fraud scandal," Kowaleski explains. "That those with more ethics training are more likely to leave misbehaving organizations suggests the self-reinforcing nature of corporate culture."


Wednesday, December 18, 2019

Can Business Schools Have Ethical Cultures, Too?

Brian Gallagher
www.ethicalsystems.org
Originally posted 18 Nov 19

Here is an excerpt:

The informal aspects of an ethical culture are pretty intuitive. These include role models and heroes, norms, rituals, stories, and language. “The systems can be aligned to support ethical behavior (or unethical behavior),” Eury and Treviño write, “and the systems can be misaligned in a way that sends mixed messages, for instance, the organization’s code of conduct promotes one set of behaviors, but the organization’s norms encourage another set of behaviors.” Although Smeal hasn’t completely rid itself of unethical norms, it has fostered new ethical ones, like encouraging teachers to discuss the school’s honor code on the first day of class. Rituals can also serve as friendly reminders about the community’s values—during finals week, for example, the honor and integrity program organizes complimentary coffee breaks, and corporate sponsors support ethics case competitions. Eury and Treviño also write how one powerful story has taken hold at Smeal, about a time when the college’s MBA program, after it implemented the honor code, rejected nearly 50 applicants for plagiarism, and on the leadership integrity essay, no less. (Smeal was one of the first business schools to use plagiarism-detection software in its admissions program.)

Given the inherently high turnover rate at a school—and a diverse student population—it’s a constant challenge to get the community’s newcomers to aspire to meet Smeal’s honor and integrity standards. Since there’s no stopping students from graduating, Eury and Treviño stress the importance of having someone like Smeal’s honor and integrity director—someone who, at least part-time, focuses on fostering an ethical culture. “After the first leadership integrity director stepped down from her role, the college did not fill her position for a few years in part because of a coming change in deans,” Eury and Treviño write. The new Dean eventually hired an honor and integrity director who served in her role for 3-and-a-half years, but, after she accepted a new role in the college, the business school took close to 8 months to fill the role again. “In between each of these leadership changes, the community continued to change and grow, and without someone constantly ‘tending to the ethical culture garden,’ as we like to say, the ‘weeds’ will begin to grow,” Eury and Treviño write. Having an honor and integrity director makes an “important symbolic statement about the college’s commitment to tending the culture but it also makes a more substantive contribution to doing so.”


Tuesday, September 10, 2019

Can Ethics Be Taught?

Peter Singer
Project Syndicate
Originally published August 7, 2019

Can taking a philosophy class – more specifically, a class in practical ethics – lead students to act more ethically?

Teachers of practical ethics have an obvious interest in the answer to that question. The answer should also matter to students thinking of taking a course in practical ethics. But the question also has broader philosophical significance, because the answer could shed light on the ancient and fundamental question of the role that reason plays in forming our ethical judgments and determining what we do.

Plato, in the Phaedrus, uses the metaphor of a chariot pulled by two horses; one represents rational and moral impulses, the other irrational passions or desires. The role of the charioteer is to make the horses work together as a team. Plato thinks that the soul should be a composite of our passions and our reason, but he also makes it clear that harmony is to be found under the supremacy of reason.

In the eighteenth century, David Hume argued that this picture of a struggle between reason and the passions is misleading. Reason on its own, he thought, cannot influence the will. Reason is, he famously wrote, “the slave of the passions.”


Tuesday, August 20, 2019

What Alan Dershowitz taught me about morality

Molly Roberts
The Washington Post
Originally posted August 2, 2019

Here are two excerpts:

Dershowitz has been defending Donald Trump on television for years, casting himself as a warrior for due process. Now, Dershowitz is defending himself on TV, too, against accusations at the least that he knew about Epstein allegedly trafficking underage girls for sex with men, and at the worst that he was one of the men.

These cases have much in common, and they both bring me back to the classroom that day when no one around the table — not the girl who invoked Ernest Hemingway’s hedonism, nor the boy who invoked God’s commandments — seemed to know where our morality came from. Which was probably the point of the exercise.

(cut)

You can make a convoluted argument that investigations of the president constitute irresponsible congressional overreach, but contorting the Constitution is your choice, and the consequences to the country of your contortion are yours to own, too. Everyone deserves a defense, but lawyers in private practice choose their clients — and putting a particular focus on championing those Dershowitz calls the “most unpopular, most despised” requires grappling with what it means for victims when an abuser ends up with a cozy plea deal.

When the alleged abuser is your friend Jeffrey, whose case you could have avoided precisely because you have a personal relationship, that grappling is even more difficult. Maybe it’s still all worth it to keep the system from falling apart, because next time it might not be a billionaire financier who wanted to seed the human race with his DNA on the stand, but a poor teenager framed for a crime he didn’t commit.

Dershowitz once told the New York Times he regretted taking Epstein’s case. He told me, “I would do it again.”


Friday, July 5, 2019

Ethical considerations in the use of Pernkopf's Atlas of Anatomy: A surgical case study

Yee, A., Zubovic, E., et al.
Surgery, May 2019, Volume 165, Issue 5, Pages 860–867

Abstract

The use of Eduard Pernkopf's anatomic atlas presents ethical challenges for modern surgery concerning the use of data resulting from abusive scientific work. In the 1980s and 1990s, historic investigations revealed that Pernkopf was an active National Socialist (Nazi) functionary at the University of Vienna and that among the bodies depicted in the atlas were those of Nazi victims. Since then, discussions persist concerning the ethicality of the continued use of the atlas, because some surgeons still rely on information from this anatomic resource for procedural planning. The ethical implications relevant to the use of this atlas in the care of surgical patients have not been discussed in detail. Based on a recapitulation of the main arguments from the historic controversy surrounding the use of Pernkopf's atlas, this study presents an actual patient case to illustrate some of the ethical considerations relevant to the decision of whether to use the atlas in surgery. This investigation aims to provide a historic and ethical framework for questions concerning the use of the Pernkopf atlas in the management of anatomically complex and difficult surgical cases, with special attention to implications for medical ethics drawn from Jewish law.


Friday, December 21, 2018

Life on the slippery Earth



Sebastian Purcell
aeon.co
Originally posted July 3, 2018

Here is an excerpt:

At its core, Aztec virtue ethics has three main elements. One is a conception of the good life as the ‘rooted’ or worthwhile life. Second is the idea of right action as the mean or middle way. Third and final is the belief that virtue is a quality that’s fostered socially.

When I speak about the Aztecs – the people dominant in large parts of central America prior to the 16th-century Spanish conquest – even professional philosophers are often surprised to learn that the Aztecs were a philosophical culture. They’re even more startled to hear that we have (many volumes of) their texts recorded in their native language, Nahuatl. While a few of the pre-colonial hieroglyphic-type books survived the Spanish bonfires, our main sources of knowledge derive from records made by Catholic priests, up to the early 17th century. Using the Latin alphabet, these texts record the statements of tlamatinime, the indigenous philosophers, on matters as diverse as bird-flight patterns, moral virtue, and the structure of the cosmos.

To explain the Aztec conception of the good life, it’s helpful to begin in the sixth volume of a book called the Florentine Codex, compiled by Father Bernardino of Sahagún. Most of the text contains edifying discourses called huehuetlatolli, the elders’ discourses. This particular section records the speeches following the appointment of a new king, when the noblemen appear to compete for the most eloquent articulation of what an ideal monarch should be and do. The result is a succession of speeches like those in Plato’s Symposium, wherein each member tries to produce the most moving expression of praise.


Wednesday, November 28, 2018

Promoting wellness and stress management in residents through emotional intelligence training

Ramzan Shahid, Jerold Stirling, William Adams
Advances in Medical Education and Practice, Volume 9

Background: 

US physicians are experiencing burnout in alarming numbers. However, doctors with high levels of emotional intelligence (EI) may be immune to burnout, as they possess coping strategies which make them more resilient and better at managing stress. Educating physicians in EI may help prevent burnout and optimize their overall wellness. The purpose of our study was to determine if an educational intervention increases the overall EI level of residents; specifically, their stress management and wellness scores.

Participant and methods: 

Residents from pediatrics and med-peds residency programs at a university-based training program volunteered to complete an online self-report EI survey (EQ-i 2.0) before and after an educational intervention. The four-hour educational workshop focused on developing four EI skills: self-awareness; self-management; social awareness; and social skills. We compared de-identified median score reports for the residents as a cohort before and after the intervention.

Results: 

Thirty-one residents (20 pediatric and 11 med-peds residents) completed the EI survey at both time intervals and were included in the analysis of results. We saw a significant increase in total EI median scores after the educational intervention (110 vs 114, P=0.004). The stress management composite median score significantly increased (105 vs 111, P<0.001). The residents' overall wellness score also improved significantly (104 vs 111, P=0.003).

Conclusions: 

As a group, our pediatric and med-peds residents had a significant increase in total EI and several other components of EI following an educational intervention. Teaching EI skills related to the areas of self-awareness, self-management, social awareness, and social skill may improve stress management skills, promote wellness, and prevent burnout in resident physicians.

The research is here.

Wednesday, November 21, 2018

Even The Data Ethics Initiatives Don't Want To Talk About Data Ethics

Kalev Leetaru
Forbes.com
Originally posted October 23, 2018

Two weeks ago, a new data ethics initiative, the Responsible Computer Science Challenge, caught my eye. Funded by the Omidyar Network, Mozilla, Schmidt Futures and Craig Newmark Philanthropies, the initiative will award up to $3.5M to “promising approaches to embedding ethics into undergraduate computer science education, empowering graduating engineers to drive a culture shift in the tech industry and build a healthier internet.” I was immediately excited about a well-funded initiative focused on seeding data ethics into computer science curricula, getting students talking about ethics from the earliest stages of their careers. At the same time, I was concerned about whether even such a high-profile effort could possibly reverse the tide of anti-data-ethics that has taken root in academia and what impact it could realistically have in a world in which universities, publishers, funding agencies and employers have largely distanced themselves from once-sacrosanct data ethics principles like informed consent and the right to opt out. Surprisingly, for an initiative focused on evangelizing ethics, the Challenge declined to answer any of the questions I posed it regarding how it saw its efforts as changing this. Is there any hope left for data ethics when the very initiatives designed to help teach ethics don’t want to talk about ethics?

On its surface, the Responsible Computer Science Challenge seems a tailor-built response to a public rapidly awakening to the incredible damage unaccountable platforms have wreaked upon society. The Challenge describes its focus as “supporting the conceptualization, development, and piloting of curricula that integrate ethics with undergraduate computer science training, educating a new wave of engineers who bring holistic thinking to the design of technology products.”

The info is here.

Tuesday, November 6, 2018

The removal of Darwin and evolution from schools is a backwards step

Michael Dixon
The Guardian
Originally posted October 3, 2018

In recent weeks there have been alarming reports from both Israel and Turkey of Charles Darwin’s theory of evolution being erased from school curriculums. In Turkey, this has been blamed on the concept of evolution – which is taught in British primary schools – being beyond the understanding of high school students. In Israel, teachers are claiming that most students do not learn about evolution; they say their education ministry is quietly encouraging teachers to focus on other topics in biology.

This news follows the astonishing statements made by India’s minister for higher education earlier this year. Satyapal Singh claimed Darwin was “scientifically wrong”, and is demanding that the theory of evolution be removed from school curriculums because no one “ever saw an ape turning into a human being”.

It is tempting to shrug off these latest attacks on Darwin’s greatest contribution to natural science. After all, no other scientific theory has attracted the same level of impassioned opposition and detraction – certainly not for more than 150 years. But that would be to miss the particular urgency of improving our scientific understanding of the natural world and how best to protect it for the future.

The info is here.

Friday, October 12, 2018

The New Standardized Morality Test. Really.

Peter Greene
Forbes - Education
Originally published September 13, 2018

Here is an excerpt:

Morality is sticky and complicated, and I'm not going to pin it down here. It's one thing to manage your own moral growth and another thing to foster the moral development of family and friends and still quite another thing to have a company hired by a government draft up morality curriculum that will be delivered by yet another wing of the government. And it is yet another other thing to create a standardized test by which to give students morality scores.

But the folks at ACT say they will "leverage the expertise of U.S.-based research and test development teams to create the assessment, which will utilize the latest theory and principles of social and emotional learning (SEL) through the development process." That is quite a pile of jargon to dress up "We're going to cobble together a test to measure how moral a student is. The test will be based on stuff."

ACT Chief Commercial Officer Suzana Delanghe is quoted saying "We are thrilled to be supporting a holistic approach to student success" and promises that they will create a "world class assessment that measures UAE student readiness" because even an ACT manager knows better than to say that they're going to write a standardized test for morality.

The info is here.

Friday, September 14, 2018

Law, Ethics, and Conversations between Physicians and Patients about Firearms in the Home

Alexander D. McCourt, and Jon S. Vernick
AMA J Ethics. 2018;20(1):69-76.

Abstract

Firearms in the home pose risks to household members, including homicide, suicide, and unintentional death. Medical societies urge clinicians to counsel patients about those risks as part of sound medical practice. Depending on the circumstances, clinicians might recommend safe firearm storage, temporary removal of the firearm from the home, or other measures. Certain state firearm laws, however, might present legal and ethical challenges for physicians who counsel patients about guns in the home. Specifically, we discuss state background check laws for gun transfers, safe gun storage laws, and laws forbidding physicians from engaging in certain firearm-related conversations with their patients. Medical professionals should be aware of these and other state gun laws but should offer anticipatory guidance when clinically appropriate.

The info is here.

Thursday, June 21, 2018

Social Media as a Weapon to Harass Women Academics

George Veletsianos and Jaigris Hodson
Inside Higher Ed
Originally published May 29, 2018

Here is an excerpt:

Before beginning our inquiry, we assumed that the people who responded to our interview requests would be women who studied video games or gender issues, as prior literature had suggested they would be more likely to face harassment. But we quickly discovered that women are harassed when writing about a wide range of topics, including but not limited to: feminism, leadership, science, education, history, religion, race, politics, immigration, art, sociology and technology broadly conceived. The literature even identifies choice of research method as a topic that attracts misogynistic commentary.

So who exactly is at risk of harassment? They form a long list: women scholars who challenge the status quo; women who have an opinion that they are willing to express publicly; women who raise concerns about power; women of all body types and shapes. Put succinctly, people may be targeted for a range of reasons, but women in particular are harassed partly because they happen to be women who dare to be public online. Our respondents reported that they are harassed because they are women. Because they are women, they become targets.

At this point, if you are a woman reading this, you might be nodding your head, or you might feel frustrated that we are pointing out something so incredibly obvious. We might as well point out that rain is wet. But unfortunately, for many people who have not experienced the reality of being a woman online, this fact is still not obvious, is minimized, or is otherwise overlooked. To be clear, there is a gendered element to how both higher education institutions and technology companies handle this issue.

The article is here.