
Trust Through Transparency in AI
Combining LLMs with Rule-Based Systems for Trustworthy AI
MoRE-LLM integrates LLM-extracted domain-knowledge rules with black-box models to produce more interpretable and trustworthy AI systems, bridging the gap between data scientists and domain experts.
- Combines black-box models with transparent rule-based systems
- Uses LLMs to automate the extraction of domain knowledge
- Creates a mixture-of-experts architecture guided by language models
- Enhances transparency and accountability for security-critical applications
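To make the mixture-of-experts idea above concrete, here is a minimal sketch of how transparent rule experts can be blended with a black-box model. All names (`RuleExpert`, `mixture_predict`), the fixed gate weight, and the toy risk-scoring rule are illustrative assumptions, not the paper's actual architecture, in which the LLM guides rule extraction and routing.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Hypothetical illustration of a mixture of rule experts:
# each expert is a human-readable if-then rule; a gate blends
# the rules that fire with an opaque model's score.

@dataclass
class RuleExpert:
    name: str
    condition: Callable[[Dict[str, float]], bool]  # does the rule apply to x?
    predict: Callable[[Dict[str, float]], float]   # rule's score when it applies

def mixture_predict(
    x: Dict[str, float],
    rule_experts: List[RuleExpert],
    black_box: Callable[[Dict[str, float]], float],
    gate_weight: float = 0.5,
) -> Tuple[float, List[str]]:
    """Blend transparent rule predictions with a black-box score.

    Returns the mixed score and the names of the rules that fired,
    so every prediction carries an auditable explanation.
    """
    fired = [r for r in rule_experts if r.condition(x)]
    if not fired:
        # No rule covers this input: fall back to the black box alone.
        return black_box(x), []
    rule_score = sum(r.predict(x) for r in fired) / len(fired)
    mixed = gate_weight * rule_score + (1 - gate_weight) * black_box(x)
    return mixed, [r.name for r in fired]

# Toy usage: one hand-written credit-risk style rule (assumed example).
rules = [
    RuleExpert(
        name="high_debt_ratio",
        condition=lambda x: x["debt_ratio"] > 0.6,
        predict=lambda x: 0.9,  # rule verdict: high risk
    ),
]
black_box = lambda x: 0.3  # stand-in for an opaque model's risk score

score, fired_rules = mixture_predict({"debt_ratio": 0.8}, rules, black_box)
print(score, fired_rules)  # 0.6 ['high_debt_ratio']
```

Because the output names the rules that fired, an auditor can trace each prediction back to explicit domain knowledge instead of an opaque score alone.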
For security professionals, this approach offers a critical advantage: AI systems whose predictions can be audited, verified, and aligned with regulatory requirements, which is essential for deployment in high-risk environments.
Original Paper: MoRE-LLM: Mixture of Rule Experts Guided by a Large Language Model