Generative AI (GenAI) has seen a remarkable surge in popularity, transforming productivity across a wide range of sectors and everyday tasks. However, this rapid adoption has also introduced significant security challenges. What new risks and attack vectors have emerged? How severe are they? And can traditional security solutions effectively safeguard the use of AI?

We recently assessed mainstream large language models (LLMs) against prompt-based attacks, which revealed significant vulnerabilities. Three attack vectors—guardrail bypass, information leakage, and goal hijacking—demonstrated consistently high success rates across various models. In particular, some attack techniques achieved success rates exceeding 50% across models of different scales, from several-billion parameter models to trillion-parameter models, with certain cases reaching up to 88%.

Download Securing GenAI Whitepaper

Securing GenAI