Medical Ethics and Value Alignment

Research on aligning LLMs with medical ethics, human values in healthcare contexts, and ethical decision-making frameworks

ValueCompass: Measuring Human-AI Value Alignment

A framework for evaluating how well AI systems reflect diverse human values

Measuring AI & Human Values

A New Framework for Value Alignment in LLMs

Confidence-Based LLM Routing

Enhancing AI reliability through self-assessment mechanisms

AI Safety Blindspots in Scientific Labs

Evaluating LLMs for Laboratory Safety Knowledge

AI-Powered Ethics Review for Research

Using specialized LLMs to streamline IRB processes

Balancing Ethics and Utility in LLMs

A new framework that enhances safety without compromising functionality

Enhancing Safety in Visual AI

Addressing Critical Gaps in Vision-Language Model Safety

Fairness in AI-Powered X-Ray Diagnostics

Evaluating bias in CLIP-based models for medical imaging

AI-Assisted Moral Analysis in Vaccination Debates

Using LLMs to support human annotators in identifying moral framing on social media

The Ethics of Search Engine Power

A novel framework for evaluating search engines beyond algorithms

Open Foundation Models: Transforming Healthcare

Exploring the potential of non-proprietary LLMs for personalized medicine

Fair-MoE: Enhancing Fairness in Medical AI

A novel approach to tackle bias in Vision-Language Models

Rethinking AI Regulation in Healthcare

A global framework for governing generative AI and LLMs in medicine

Bias in AI-Driven Palliative Care

How LLMs like GPT-4o perpetuate inequities in healthcare

Building Trustworthy AI Systems

Navigating Safety, Bias, and Privacy Challenges in Modern AI

The Confidence Gap in LLMs

Measuring and addressing overconfidence in large language models

Catastrophic Risks in AI Decision-Making

Analyzing CBRN Threats from Autonomous LLM Agents

Relationship Blueprints for Human-AI Cooperation

How social roles and norms should guide AI design and interaction

Protecting Young Minds in the AI Era

Evaluating and Enhancing LLM Safety for Children

Beyond One-Size-Fits-All: Pluralistic AI in Healthcare

Introducing VITAL: A benchmark for diverse values alignment in medical AI

What Makes AI Seem Conscious?

Quantifying the features that shape human perception of AI consciousness

Trust & Intent: LLMs in Healthcare

Multinational analysis of DeepSeek adoption factors

Ethical Design of AI Personalities

Creating responsible LLM-based conversational agents

Evaluating Medical Ethics in AI

A Framework for Testing LLMs in Healthcare Ethics

Transforming Healthcare with LLMs

A framework for responsible AI integration in clinical settings

Reducing Hallucination Risk in Critical Domains

A framework for setting hallucination standards in domain-specific LLMs

Hallucinations: Bridging Human and AI Cognition

What machine 'hallucinations' teach us about human cognition

Uncovering Hidden Biases in LLMs

A framework for detecting subtle, nuanced biases in AI systems

Predicting Human Choices from Text Descriptions

First large-scale study of decision-making with textually described risks

Addressing Bias in Medical AI Systems

Evaluating and mitigating demographic biases in retrieval-augmented medical QA

Can We Trust LLMs in High-Stakes Environments?

Enhancing reliability through uncertainty quantification

The Double-Edged Sword of Humanized AI

How LLM chatbots mirror humans and risk manipulation

Trust Through Transparency in AI

Combining LLMs with Rule-Based Systems for Trustworthy AI

AI-Powered Consent Form Generation

Enhancing clinical research compliance with AI assistance

Breaking Free From LLM Chat Search Constraints

How functional fixedness limits users' interactions with AI systems

Bridging the Gap: AI Models in Critical Domains

A framework for deploying large AI models in healthcare, education, and legal settings

Hidden Biases in Healthcare AI

Systematic Review Reveals Bias Patterns in Clinical LLMs

Fair AI in Medical Decision-Making

Evaluating LLM fairness in organ allocation using voting theory

The Ethics vs. Performance Trade-Off in AI

Measuring the cost of respecting web crawling opt-outs in LLM training

LExT: Towards Evaluating Trustworthiness of Natural Language...

By Krithi Shailya, Shreya Rajpal...

Building Fair AI Systems

Developing comprehensive fairness standards for the 6G era

Uncovering Bias in Language Models

Using Metamorphic Testing to Identify Fairness Issues in LLaMA and GPT

Credibility Detection with LLMs

Using AI to identify trustworthy visual content

When AI Faces Moral Choices

How persona impacts LLM decision-making in ethical dilemmas

Key Takeaways

Summary of Research on Medical Ethics and Value Alignment