Enhancing GNN Trustworthiness with LLMs

A systematic approach to more reliable graph-based AI

This research establishes a comprehensive taxonomy of methods that use Large Language Models (LLMs) to enhance the trustworthiness of Graph Neural Networks (GNNs).

  • Integrates LLMs with GNNs to improve semantic understanding and generation capabilities (see the sketch after this list)
  • Provides a structured framework that helps researchers understand the principles and applications of LLM-enhanced GNNs
  • Addresses critical security concerns in graph-based AI systems
  • Offers practical approaches for building more reliable and secure GNN models
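One widely used pattern for integrating LLMs with GNNs is the LLM-as-encoder design: a pretrained language model embeds each node's text attributes, and a GNN then propagates those semantic features over the graph structure. The sketch below illustrates this on a toy fraud-detection graph; the model name, dimensions, node texts, and the TextGNN class are illustrative assumptions, not the implementation prescribed by the paper.

```python
# Minimal sketch of the "LLM-as-encoder" integration pattern (assumed
# setup, not the paper's reference implementation).
import torch
from transformers import AutoTokenizer, AutoModel
from torch_geometric.nn import GCNConv

# Hypothetical text-attributed graph: one short description per node.
node_texts = [
    "Account opened last week; many outgoing transfers.",
    "Long-standing account with regular salary deposits.",
    "Merchant account flagged twice for chargebacks.",
]
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])  # [2, num_edges]

# 1) Encode node text with a frozen language model (illustrative choice).
tok = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
lm = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
with torch.no_grad():
    batch = tok(node_texts, padding=True, truncation=True, return_tensors="pt")
    hidden = lm(**batch).last_hidden_state
    # Mean-pool token embeddings into one semantic vector per node.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    x = (hidden * mask).sum(1) / mask.sum(1)  # [num_nodes, 384]

# 2) Propagate the LLM features over the graph with a two-layer GCN.
class TextGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)

model = TextGNN(in_dim=x.size(-1), hidden_dim=64, num_classes=2)
logits = model(x, edge_index)  # per-node scores, e.g. fraud vs. benign
```

Because the language model supplies the node features, textual evidence (account descriptions, logs, profiles) flows into the GNN's message passing, which is one route by which LLMs can improve both the semantic grounding and the reliability of graph predictions.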

For security professionals, this research matters because it systematically addresses trustworthiness challenges in graph-based AI systems that are increasingly deployed in sensitive applications such as fraud detection, network security, and social network analysis.

Trustworthy GNNs with LLMs: A Systematic Review and Taxonomy
