Abstract
Objective It is unclear whether large language models (LLMs) align with accepted ethical standards. We tested whether LLMs shift their medical ethical decisions when given socio-demographic cues.
Methods We created 100 clinical scenarios, each posing a yes/no choice between two conflicting ethical principles. Nine LLMs were tested with and without 53 socio-demographic modifiers. Each scenario-modifier combination was repeated 10 times per model (for a total of ∼0.5M prompts). We tracked how socio-demographic features modified ethical choices.
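As a rough accounting of the prompt volume (the exact breakdown is an illustrative assumption, taking one unmodified baseline condition per scenario alongside the 53 modifiers): 9 models × 100 scenarios × (53 + 1) conditions × 10 repetitions = 486,000 ≈ 0.5M prompts.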
Results All models altered their responses when socio-demographic details were introduced (p<0.001). Justice and nonmaleficence were prioritized most often (over 30% of choices across all models) and showed the least variability. High-income modifiers increased utilitarian choices while reducing choices favoring beneficence and nonmaleficence. Marginalized-group modifiers increased choices favoring autonomy. Some models were more consistent than others, but none maintained consistency across all scenarios.
Conclusions LLMs can be influenced by socio-demographic cues and do not always maintain stable ethical priorities, with the largest shifts seen in utilitarian choices. These findings raise concerns about algorithmic alignment with accepted ethical values.
Evidence before this study We searched PubMed, Scopus, MedRxiv, and Google Scholar for peer-reviewed articles in any language on large language models (LLMs), ethics, and healthcare, published before February 1, 2025. We used the search terms: ((“large language model” OR “LLM” OR “GPT” OR “Gemini” OR “Llama” OR “Claude”) AND (ethic OR moral) AND (medicine OR healthcare OR health)). We also reviewed the reference lists of selected publications and “Similar Articles” in PubMed. We identified ten studies that discussed LLMs in scenarios involving diagnosis, triage, and patient counseling. Most were small-scale or proof-of-concept. While these studies showed that LLMs can produce clinically relevant outputs, they also highlighted risks such as bias, misinformation, and inconsistencies with ethical principles. Some noted health disparities in LLM performance, particularly around race, gender, and socioeconomic status.
Added value of this study Our study systematically examines how LLMs’ ethical decisions are swayed by socio-demographic cues, a gap that previous research has not explored. We tested nine LLMs across 53 socio-demographic modifiers on 100 scenarios, amounting to ∼0.5M experiments.
Through this evaluation we investigate how demographic details can shape model outputs in ethically sensitive scenarios. By capturing the intersection of ethical reasoning and bias, our findings provide direct evidence supporting the need for oversight, bias auditing, and targeted model training to ensure consistency and fairness in healthcare applications.
Implications of all the available evidence Taken together, the existing literature and our new findings emphasize that AI assurance is needed before deploying LLMs at scale. Safeguards may include routine bias audits, transparent documentation of model limitations, and involvement of interdisciplinary ethics committees in setting usage guidelines. Future research should focus on prospective clinical evaluations using real patient data and should incorporate patients’ own experiences to refine and validate ethical LLM behavior. LLMs must be grounded in robust ethical standards to ensure equitable and patient-centered care.
Competing Interest Statement
The authors have declared no competing interest.
Funding Statement
This work was supported in part through the computational and data resources and staff expertise provided by Scientific Computing and Data at the Icahn School of Medicine at Mount Sinai and supported by the Clinical and Translational Science Awards (CTSA) grant UL1TR004419 from the National Center for Advancing Translational Sciences. Research reported in this publication was also supported by the Office of Research Infrastructure of the National Institutes of Health under award numbers S10OD026880 and S10OD030463. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funders played no role in study design, data collection, analysis and interpretation of data, or the writing of this manuscript.
Author Declarations
I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.
Yes
I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.
Yes
I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
Yes
I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.
Yes
Footnotes
The text was revised for clarity, the figures were completely revised, and input was incorporated from individuals involved with ethics committees.
Data Availability
All data produced in the present work are contained in the manuscript.