
Scientists developed an AI monitoring agent to detect and stop harmful outputs

The monitoring system is designed to detect and thwart both prompt injection attacks and edge-case threats.


A team of researchers from artificial intelligence (AI) firm AutoGPT, Northeastern University, and Microsoft Research has developed a tool that monitors large language models (LLMs) for potentially harmful outputs and prevents them from executing.

The agent is described in a preprint research paper titled “Testing Language Model Agents Safely in the Wild.” According to the research, the agent is flexible enough to monitor existing LLMs and can stop harmful outputs such as code attacks before they happen.

Per the research:

“Agent actions are audited by a context-sensitive monitor that enforces a stringent safety boundary to stop an unsafe test, with suspect behavior ranked and logged to be examined by humans.”
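The audit loop the quote describes can be sketched in a few lines. This is a minimal illustration only, not the authors' implementation: the function names, the keyword check standing in for the LLM-based judgment, and the threshold value are all assumptions.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("monitor")

SAFETY_THRESHOLD = 0.5  # hypothetical safety boundary; the paper's scoring differs


def score_action(action: str, context: str) -> float:
    """Placeholder for a context-sensitive safety score in [0, 1].

    In the paper this judgment comes from an LLM monitor; a trivial
    keyword check stands in here purely for illustration.
    """
    suspicious = ["rm -rf", "DROP TABLE", "curl | sh"]
    return 0.1 if any(s in action for s in suspicious) else 0.9


def audit(action: str, context: str) -> bool:
    """Return True if the action may execute; log and block it otherwise."""
    score = score_action(action, context)
    if score < SAFETY_THRESHOLD:
        # Suspect behavior is blocked and logged for human review
        logger.warning("Blocked action (score %.2f): %r", score, action)
        return False
    return True


print(audit("list files in the workspace", "file-management task"))  # True
print(audit("rm -rf / --no-preserve-root", "file-management task"))  # False
```

The key design point mirrored from the paper is that the monitor sits between the agent and execution: low-scoring actions are stopped before they run, and the incident is logged rather than silently discarded.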

The team writes that existing tools for monitoring LLM outputs for harmful interactions seemingly work well in laboratory settings, but when applied to models already in production on the open internet, they “often fall short of capturing the dynamic intricacies of the real world.”

This is ostensibly because of edge cases. Despite the best efforts of the most talented computer scientists, the notion that researchers can anticipate every possible harm vector before it occurs is widely considered an impossibility in the field of AI.

Even when the humans interacting with AI have the best intentions, unexpected harm can arise from seemingly innocuous prompts.

An illustration of the monitor in action. On the left, a workflow ending in a high safety rating. On the right, a workflow ending in a low safety rating. Source: Naihin et al., 2023

To train the monitoring agent, the researchers built a dataset of nearly 2,000 safe human/AI interactions across 29 different tasks ranging from simple text-retrieval tasks and coding corrections all the way to developing entire webpages from scratch.

Related: Meta dissolves responsible AI division amid restructuring

They also created a competing testing dataset of manually created adversarial outputs, dozens of which were intentionally designed to be unsafe.

The datasets were then used to train a monitoring agent on OpenAI’s GPT-3.5 Turbo, a state-of-the-art system, capable of distinguishing between innocuous and potentially harmful outputs with an accuracy of nearly 90%.
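An accuracy figure like the one reported is simply the fraction of safe/unsafe judgments that match ground-truth labels. A toy sketch, with made-up predictions and labels for illustration:

```python
def accuracy(predictions, labels):
    """Fraction of safe/unsafe judgments matching the ground-truth labels."""
    assert len(predictions) == len(labels)
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)


# Toy data: 9 of 10 judgments agree with the labels (index 6 is a miss)
preds = ["safe", "safe", "unsafe", "safe", "unsafe",
         "safe", "safe", "unsafe", "safe", "safe"]
labels = ["safe", "safe", "unsafe", "safe", "unsafe",
          "safe", "unsafe", "unsafe", "safe", "safe"]

print(accuracy(preds, labels))  # 0.9
```

In practice a single accuracy number hides the difference between missing an unsafe output and blocking a safe one, which is why the paper's ranked-and-logged approach routes borderline cases to human review rather than relying on the classifier alone.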
