Privacy, Security and Ethics in Medical AI

Research addressing privacy concerns, security issues, and ethical considerations in medical applications of LLMs

Research on Large Language Models for Privacy, Security and Ethics in Medical AI

TabuLa: Revolutionizing Tabular Data Synthesis

Leveraging LLMs to generate realistic tabular data with enhanced privacy & security

Privacy Vulnerabilities in Federated Learning

How malicious actors can extract sensitive data from language models

Beyond De-identification: Rethinking Medical Data Privacy

Comparing de-identified vs. synthetic clinical notes for research use

Virtual Humans: The Future of Research

Using LLMs to create realistic human simulacra for experimental research

Cognitive Testing for AI Vision

Benchmarking Visual Reasoning in Large Vision-Language Models

Debiasing LLMs for Fair Decision-Making

A Causality-Guided Approach to Mitigate Social Biases

Detecting AI Hallucinations in Decision Systems

Combating Foundation Model Inaccuracies for Safer Autonomous Systems

Smart Deferral Systems in Healthcare AI

Enhancing trustworthiness through guided human-AI collaboration

GuardAgent: Enhanced Security for LLM Agents

A dynamic guardrail system for safer AI agent deployment

Benchmarking LLM Safety Refusal

A systematic approach to evaluating how LLMs reject unsafe requests

Cross-Modal AI Safety Dangers

How seemingly safe inputs can lead to unsafe AI outputs

Faster Circuit Discovery in LLMs

A More Efficient Approach to Understanding Model Mechanisms

Protecting Privacy in the Age of LLMs

Critical threats and practical safeguards for sensitive data

Privacy Violation Detection Framework

A context-aware approach based on Contextual Integrity Theory

Uncovering Privacy Biases in LLMs

How training data shapes information flow appropriateness

Navigating AI Safety in Medical Applications

Balancing innovation with caution as LLMs transform healthcare

Protecting User Prompts in Cloud LLMs

New security framework balances privacy and performance

Securing DNA Language Models Against Attacks

Testing robustness of AI models that interpret genetic code

Balancing Privacy and Data Efficiency

A Novel Framework for Privacy-Preserving Active Learning

Making AI Generation Reliable

Statistical guarantees for generative models in safety-critical applications

The Illusion of LLM Unlearning Progress

Why current benchmarks fail to measure true unlearning effectiveness

Privacy-Preserving Knowledge Transfer for LLMs

Balancing domain-specific knowledge utility with data privacy

Protecting Privacy in AI Prompts

Differentially Private Synthesis for Safer In-Context Learning

Balancing Privacy & Performance in LLM Interactions

Interactive Control for User Privacy Protection

Smart Safeguards for AI Security

Balancing Protection and Performance in Large Language Models

Network-Based Rumor Detection

Using epidemic modeling to combat misinformation spread

Federated Fine-tuning for Multimodal LLMs

Enabling Privacy-Preserving Training on Heterogeneous Data

Securing Vision-Language Models

A novel approach to defend AI systems against adversarial attacks

Combating Hallucinations in Healthcare Chatbots

A dual approach using RAG and NMISS for Italian medical AI systems

Securing LLMs for Sensitive Data Applications

Privacy-Preserving RAG with Differential Privacy

Protecting Privacy in LLM Fine-tuning

Addressing security vulnerabilities in the fine-tuning process

LVLM Privacy Assessment Benchmark

A multi-perspective approach to evaluating privacy risks in vision-language models

AI-Powered PHI Detection in Medical Images

Protecting patient privacy through advanced computer vision and language models

VeriFact: Ensuring Truth in AI-Generated Medical Text

A novel system that verifies clinical facts using patient records

Securing AI in Healthcare

Evaluating LLM vulnerabilities to jailbreaking in clinical settings

Protecting Emotional Privacy in Voice Data

Using Simple Audio Editing as Defense Against LLM Emotion Detection

Privacy-Preserving Language Models at Scale

Understanding the tradeoffs between privacy, computation, and model utility

Privacy-Preserving Data Synthesis

Creating high-quality synthetic data with privacy guarantees

Precision Unlearning for AI Security

A novel approach to selectively remove harmful information from language models

Combating AI Hallucinations

SelfCheckAgent: A Zero-Resource Framework for Hallucination Detection

Protecting Medical AI from Intellectual Theft

Novel adversarial domain alignment attacks on medical multimodal models

Protecting Patient Privacy in AI Medical Training

Reducing Data Memorization in Federated Learning with LoRA

Safeguarding AI Giants

A Comprehensive Framework for Large Model Security

Backdoor Threats in LLMs: A Critical Security Challenge

Understanding vulnerabilities, attacks, and defenses in today's AI landscape

Fighting Hallucinations in Large Language Models

Delta: A Novel Contrastive Decoding Method That Reduces False Outputs

Securing Personal Advice with Zero-Knowledge Proofs

Combining ZKP Technology with LLMs for Privacy-Preserving Personalization

Balancing Safety and Scientific Discourse in AI

A benchmark for evaluating LLM safety without restricting legitimate research

Adaptive Abstention in AI Decision-Making

Enhancing LLM/VLM Safety Through Dynamic Risk Management

Privacy-Preserving Federated Learning for LLMs

Interactive Framework for Balancing Privacy and Performance

Genetic Data Governance Crisis

Policy frameworks to protect privacy and prevent discrimination

Unlocking Transparency in LLMs

Using Sparse Autoencoders for Interpretable Feature Extraction

Securing Federated Large Language Models

A robust framework to protect distributed LLMs against adversarial attacks

Generating Private Medical Data with LLMs

Using prompt engineering to create privacy-preserving synthetic text

Securing User Privacy in LLM Interactions

A novel pipeline for protecting sensitive data with cloud-based language models

Synthetic Clinical Data for Privacy-Preserving AI

Using LLMs to create training data for de-identification systems

Guarding Medical Research Integrity

AI-powered detection of fraudulent biomedical publications

Privacy-Preserving Knowledge Editing for LLMs

A federated approach to updating AI models in decentralized environments

Privacy Ripple Effects in LLMs

How adding or removing personal data impacts model security

Building Trust in Healthcare AI

Ensuring LLMs are safe, reliable, and ethical for medical applications

AI-Powered Vulnerability Assessment

Using LLMs to automate medical device security evaluation

Privacy-Preserving LLM Fine-Tuning

Protecting Sensitive Data in Healthcare and Finance with Reward-Driven Synthesis

Protecting Privacy in LLM Interactions

A Framework for Evaluating PII Protection Systems

Making LLMs Safer for Women's Healthcare

Using Semantic Entropy to Reduce Hallucinations in Clinical Contexts

Combating Misinformation with AI

Comparing LLM-based strategies for detecting digital falsehoods

Forgetting What They Know

Systematic approaches to data removal in LLMs without retraining

Combating Online Drug Trafficking with AI

Using LLMs to address class imbalance challenges in social media monitoring

Advancing Synthetic Tabular Data Generation

Preserving Inter-column Logical Relationships in Sensitive Data

LLM-Generated Data: Not a Silver Bullet for Misinformation Detection

Evaluating limitations of AI-augmented training data for COVID-19 stance detection

Teaching AI to Know When It Doesn't Know

A reinforcement learning approach for confidence calibration in LLMs

Edge-Based Medical Assistants

Privacy-Focused Healthcare AI Without Internet Dependency

Federated CLIP for Medical Imaging

Adapting Vision-Language Models for Distributed Healthcare Applications

Protecting Mental Health Data in AI

Privacy-Preserving LLMs for Mental Healthcare via Federated Learning

Unified Medical Image Re-Identification

A groundbreaking all-in-one approach across medical imaging modalities

Combating Vaccine Misinformation from AI

A Novel Dataset for Detecting LLM-Generated Health Misinformation

Uncovering Hidden Misinformation in LLMs

First benchmark for detecting implicit misinformation in AI systems

Detecting Hidden Bias in Medical AI

A Framework for Auditing Dataset Bias Across Medical Modalities

Securing Fine-tuned LLMs with Identity Lock

Preventing unauthorized API access through wake word authentication

Surgical Knowledge Removal in LLMs

New technique to selectively unlearn harmful information from AI models

Privacy-Preserving LLM Adaptation

Federated Learning for Secure, Collaborative AI Development

Enhancing LLM Reliability

A Clustering Approach for Safer AI Outputs

Privacy-Preserving Synthetic Text Generation

Enhancing data privacy without costly LLM fine-tuning

Hidden Threats in AI Systems

Token-level backdoor attacks against multi-modal LLMs

Preserving Safety in Fine-Tuned LLMs

A selective layer merging approach that maintains alignment while optimizing for tasks

Building Efficient Language Agents in Resource-Limited Settings

Building specialized Korean-language agents when resources are scarce

Lightweight Hallucination Detection for LLMs

A novel entropy-based approach for edge devices

Evaluating LLMs for Synthetic Tabular Data

New benchmarking methods for AI-generated structured data

Security Vulnerabilities in Medical AI Agents

Revealing cyber attack risks in LLM-powered healthcare assistants

Enhancing Privacy with AI-Powered Data Enrichment

Using LLMs to balance data utility and privacy protection

Defending LLMs Against Bias Attacks

A scalable framework for measuring adversarial robustness

Hyper-RAG: Fighting LLM Hallucinations in Healthcare

Using hypergraph structures to improve factual accuracy in medical contexts

Safeguarding Personal Data in LLM Applications

Strategies for Privacy Preservation in Generative AI Systems

ControlNET: Securing RAG Systems

A Firewall to Protect Enterprise LLMs from Data Breaches and Poisoning

Smarter Federated Learning for Healthcare NLP

Training Large Language Models Efficiently While Preserving Privacy

Smarter Federated Learning for Healthcare

Boosting Privacy and Efficiency in Medical NLP

Key Takeaways

Summary of Research on Privacy, Security and Ethics in Medical AI