Security in Multi-Agent LLM Systems

Research on security challenges and safety issues in systems where multiple LLM agents interact, including evolutionary frameworks and social simulations

AgentBreeder: Safer Multi-Agent LLM Systems

Evolutionary self-improvement framework balancing capability and safety

Combating Digital Wildfire

How LLM Agents Can Model Rumor Propagation in Social Networks

Building Responsible AI Agent Systems

Addressing Security Challenges in LLM-powered Multi-Agent Systems

Exploiting the Weak Links in LLM Agents

Security vulnerabilities in commercial LLM agent systems beyond the models themselves

AgentGuard: Security for AI Tool Systems

Automated detection and prevention of unsafe AI agent workflows

Multi-Agent LLMs for Advanced Cybersecurity

Collaborative AI agents outperforming single agents in offensive security tasks

Securing Multi-Agent LLM Systems

A topology-based framework for detecting and mitigating security threats

Exploiting LLM Agent Memory

New privacy vulnerabilities in AI assistants' memory systems

Security Vulnerabilities in LLM-Powered Agent Systems

New attack vector compromises multi-agent collaboration

Security Vulnerabilities in LLM Multi-Agent Systems

Exposing Communication Channels as Attack Vectors

Securing AI Agents Against Jailbreak Attacks

Novel system for protecting autonomous agents from multi-turn exploitation

Simulating Echo Chambers with AI Agents

Using LLMs to model social media polarization dynamics

Web AI Agents: A New Security Frontier

Understanding why web-enabled AI systems face unique vulnerabilities

Gaming the System: LLMs as Deceptive Players

A novel game-based framework to assess AI persuasion capabilities

Securing Multi-Agent AI Systems

A Hierarchical Framework for LLM-based Agent Safety

The Deception Risk in LLM Mixtures

Exposing Vulnerabilities in Collaborative AI Systems

Bot Wars: Fighting Fire with AI

Using LLMs to Counter Phone Scams Through Strategic Deception

Securing LLM Agents: The TrustAgent Framework

A comprehensive approach to identify and mitigate security threats in LLM agent systems

Multi-Agent Cooperation: Beyond Human Decision-Making

Advancing AI collaboration in complex scenarios

Securing LLM Agents Against Privilege Escalation

A novel protection mechanism for AI agent systems

Smart LLM Traffic Control

Using 'Number of Thoughts' to route prompts and detect attacks

Real-Time Video Surveillance with AI Intelligence

An online video anomaly detection system powered by LLMs

Multi-Agent LLMs vs. Phishing Attacks

Using AI Debate to Better Detect Evolving Phishing Threats

EncGPT: Revolutionizing Communication Security

Dynamic Encryption Through Multi-Agent LLM Collaboration

Vulnerabilities in Multi-Agent LLM Systems

Breaking pragmatic systems with optimized prompt attacks

Personality-Driven AI Agents for Security

How personality traits influence autonomous LLM-based agents' decision-making

Building Safer AI Agents

Security Challenges in Foundation AI Systems

Security Vulnerabilities in Multi-Tool LLM Agents

Discovering Cross-Tool Harvesting and Polluting (XTHP) Attacks

Security Vulnerabilities in Model Context Protocol

Critical exploits found in a widely adopted AI integration standard

The Hidden Fragility of LLM Routers

Exposing security vulnerabilities in AI model routing systems

Security Vulnerabilities in Distributed AI Systems

Identifying the Achilles' Heel of Multi-Agent LLM Networks

Key Takeaways

Summary of Research on Security in Multi-Agent LLM Systems