skip to Main Content
bitcoin
Bitcoin (BTC) $ 94,847.41 1.95%
ethereum
Ethereum (ETH) $ 3,283.72 1.68%
tether
Tether (USDT) $ 0.998625 0.14%
xrp
XRP (XRP) $ 2.18 1.82%
bnb
BNB (BNB) $ 650.67 1.68%
solana
Solana (SOL) $ 181.23 0.01%
dogecoin
Dogecoin (DOGE) $ 0.31096 1.67%
usd-coin
USDC (USDC) $ 1.00 0.11%
cardano
Cardano (ADA) $ 0.881306 1.27%
staked-ether
Lido Staked Ether (STETH) $ 3,264.82 2.24%

Meta releases ‘Purple Llama’ AI security suite to meet White House commitments

Meta believes that this is “the first industry-wide set of cyber security safety evaluations for Large Language Models (LLMs).”

58 Total views

2 Total shares

Meta releases ‘Purple Llama’ AI security suite to meet White House commitments

Meta released a suite of tools for securing and benchmarking generative artificial intelligence models (AI) on Dec. 7. 

Dubbed “Purple Llama,” the toolkit is designed to help developers build safely and securely with generative AI tools, such as Meta’s open-source model, Llama-2.

Announcing Purple Llama — A new project to help level the playing field for building safe & responsible generative AI experiences.

Purple Llama includes permissively licensed tools, evals & models to enable both research & commercial use.

More details ➡️ https://t.co/k4ezDvhpHp pic.twitter.com/6BGZY36eM2

— AI at Meta (@AIatMeta) December 7, 2023

AI purple teaming

According to a blog post from Meta, the “Purple” part of “Purple Llama” refers to a combination of “red-teaming” and “blue teaming.”

Red teaming is a paradigm wherein developers or internal testers attack an AI model on purpose to see if they can produce errors, faults, or unwanted outputs and interactions. This allows developers to create resiliency strategies against malicious attacks and safeguard against security and safety faults.

Blue teaming, on the other hand, is pretty much the polar opposite. Here, developers or testers respond to red teaming attacks in order to determine the mitigating strategies necessary to combat actual threats in production, consumer, or client-facing models.

Per Meta:

“We believe that to truly mitigate the challenges that generative AI presents, we need to take both attack (red team) and defensive (blue team) postures. Purple teaming, composed of both red and blue team responsibilities, is a collaborative approach to evaluating and mitigating potential risks.”

Safeguarding models

The release, which Meta claims is the “first industry-wide set of cyber security safety evaluations for Large Language Models (LLMs),” includes:

  • Metrics for quantifying LLM cybersecurity risk
  • Tools to evaluate the frequency of insecure code suggestions
  • Tools to evaluate LLMs to make it harder to generate malicious code or aid in carrying out cyber attacks

The big idea is to integrate the system into model pipelines in order to reduce unwanted outputs and insecure code while simultaneously limiting the usefulness of model exploits to cybercriminals and bad actors.

“With this initial release,” writes the Meta AI team, “we aim to provide tools that will help address risks outlined in the White House commitments.”

Related: Biden administration issues executive order for new AI safety standards

Loading data ...
Comparison
View chart compare
View table compare
Back To Top