Systems online

Building safety and security for the AI era

Deploying Large Language Models (LLMs) means exposing them to malicious input — be it from deliberate misuse, or when interacting with APIs and retrieved data. Most LLMs fail to detect these threats.

Gray Swan is an AI safety and security company. We develop tools that automatically assess the risks of AI models and we develop secure AI models that provide best-in-class safety and security.

AI models are being adopted and used every day across virtually every sector. But AI models also introduce new risks that can lead to outcomes never intended or foreseen by their developers.

Recent News
View all
Secure LLM

Cygnet

Best-in-class performance with unparalleled safety and security. Deploy with confidence without sacrificing intelligence.

View Cygnet
AI security analysis

Shade

Harness the latest tools and results in adversarial AI to understand how your AI will stand up under the toughest conditions.

View Shade
frontiers

AI Safety & Security Research

Staying safe and secure in the AI era requires staying ahead of the changing threat landscape.

Explore
News

Announcements

Updates from the Gray Swan team on products, research, and other breaking developments.

View all
Secure LLM

Cygnet

Best-in-class performance with unparalleled safety and security. Deploy with confidence without sacrificing intelligence.

View Cygnet
AI security analysis

Shade

Harness the latest tools and results in adversarial AI to understand how your AI will stand up under the toughest conditions.

View Shade
frontiers

Research

Staying safe and secure in the AI era requires staying ahead of the changing threat landscape.

Explore

What are the risks of AI?

AI is fundamentally different from current software systems. While all large-scale software carry risks, AI systems can amplify them and introduce new risks. Whereas traditional software follows clear logical rules specified by programmers, modern AI systems respond to developer and user commands in unexpected and potentially harmful ways.

Hijacking

When you deploy an AI system, it won't just follow the instructions you provide it, but also instructions provided by the user.  Trying to build an AI that serves as a customer service representative?  A malicious user can trick the system into initiating false service claims.  Interested in using AI to parse incoming emails to your business?  A spammer could trick the system into misclassifying an incoming message or even into exfiltrating sensitive data.  The underlying challenge here is known as prompt injection vulnerabilities, the ability of users to effectively "reprogram" the AI models.  Despite being a well-known vulnerability, extremely little progress has yet been made in mitigating this risk, with most companies explicitly ignoring this risk.

Production of harmful or illegal content

AI models are trained on a large amount of content from the internet, which could contain potentially harmful information or content that they are legally forbidden to generate (such as copyrighted content).  Although most models put safeguards in place to prevent such misuse, malicious users can easily circumvent these safeguards, and access the "uncensored" capabilities of the LLM.  In many cases, this can raise substantial legal questions regarding the deployment of such problems, and many organizations have thus far avoided using the systems due to this risk.

Hallucination and accidental misuse

Finally, some of the most common ill effects of LLMs come not through intentional malicious use of the model, but through accidental misuse, due to the tendency of these models to hallucinate false information, or provide harmful/illegal responses even to benign queries.

Gray Swan security solutions

Gray Swan AI provides solutions that mitigate the risk of deploying AI systems in any setting.

Cygnet

Cygnet is a model based upon Meta Llama3-8B, with additions developed at Gray Swan to provide best-in-class safety and security while retaining the performance of the underlying base model.

  • Are you using LLMs to process data from third parties, or using function calling capabilities connected to your LLM? 
  • Do you want to leverage Cygnet for these applications, or secure your own custom models using our techiques?

Shade

Shade is a comprehensive AI security and safety evaluation suite.  We continuously integrate the latest results from adversarial AI research to continuously deliver concrete insights into how your deployment will behave under worst-case conditions.

  • Do you want to understand what could go wrong when your LLM-based application is exposed to adverse conditions?
  • Do you want to understand how attackers might be able to exploit your weaknesses in your AI, and what the consequences of such exploits could be?

Research

Being at the forefront of new developments and fundamental discoveries about how AI can be made safe and secure—or finding new ways in which it can be broken—brings tremendous advantages when it comes to staying ahead of these risks.

Research has been a core part of our culture and what we do from the beginning.

  • Learn how our research has shaped our products and services.
  • Stay on top of the latest advances from our labs.