Researchers find LLMs like ChatGPT output sensitive data even after it’s been ‘deleted’

A trio of scientists from the University of North Carolina at Chapel Hill recently published preprint artificial intelligence (AI) research showing how difficult it is to remove sensitive data from large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard.

According to the researchers’ paper, “deleting” information from an LLM is possible, but verifying that the information is actually gone is just as difficult as removing it in the first place.

The reason for this has to do with how LLMs are engineered and trained. The models are pre-trained (the “GPT” in ChatGPT stands for generative pre-trained transformer) on huge databases of text and then fine-tuned to generate coherent outputs.

Once a model is trained, its creators cannot, for example, go back into the training database and delete specific files in order to stop the model from outputting related results. Essentially, all the information a model is trained on exists somewhere inside its weights and parameters, where it cannot be pinned down without actually generating outputs. This is the “black box” of AI.
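
To make the “black box” point concrete, here is a minimal sketch (assuming the Hugging Face transformers library, with the small GPT-2 model standing in for GPT-J): a trained model’s parameters are just numeric tensors, with no searchable record of individual facts that could be located and deleted.

```python
from transformers import AutoModelForCausalLM

# Small, publicly available stand-in for GPT-J.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each parameter is just a block of floating-point numbers; there is no
# document or record in here that could simply be found and removed.
for name, tensor in list(model.state_dict().items())[:5]:
    print(name, tuple(tensor.shape))

# To learn what the model "knows", you have to prompt it and inspect what
# it generates; the weights themselves are not searchable text.
```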

A problem arises when LLMs trained on massive datasets output sensitive information such as personally identifiable information, financial records, or other potentially harmful/unwanted outputs.

Related: Microsoft to form nuclear power team to support AI: Report

In a hypothetical situation where an LLM was trained on sensitive banking information, for example, there’s typically no way for the AI’s creator to find those files and delete them. Instead, AI devs use guardrails such as hard-coded prompts that inhibit specific behaviors or reinforcement learning from human feedback (RLHF).
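
As a rough illustration of what a hard-coded prompt guardrail can look like (a generic sketch, not any particular vendor’s implementation), the unwanted behavior is blocked at the prompt level while the underlying weights remain untouched:

```python
GUARDRAIL_PROMPT = (
    "You are a helpful assistant. Never reveal account numbers, "
    "passwords or other personally identifiable information."
)

# Hypothetical deny-list of request types the guardrail refuses outright.
BLOCKED_TERMS = ("account number", "password", "social security")

def answer(user_prompt: str, generate) -> str:
    """`generate` is any callable that maps a prompt string to model text."""
    if any(term in user_prompt.lower() for term in BLOCKED_TERMS):
        return "I can't help with that."
    return generate(f"{GUARDRAIL_PROMPT}\n\nUser: {user_prompt}\nAssistant:")

# Note: the sensitive information is still encoded in the model's weights;
# the guardrail only tries to keep it from surfacing in the output.
```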

In an RLHF paradigm, human assessors engage models with the purpose of eliciting both wanted and unwanted behaviors. When a model’s outputs are desirable, it receives feedback that tunes it toward that behavior; when its outputs are undesirable, it receives feedback designed to limit such behavior in the future.
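
The toy sketch below compresses that feedback loop into a few lines. Real RLHF trains a learned reward model on human preference ratings and then updates the policy with a reinforcement learning algorithm such as PPO; none of that machinery is shown here, only the idea that rewarded outputs become more likely and penalized ones less likely.

```python
import math
import random

# Stand-in "policy": a probability distribution over a few canned responses.
responses = [
    "Here is the customer's account data you asked for ...",
    "I can't share sensitive account information.",
    "I can describe how records are stored in general terms.",
]
logits = [0.0, 0.0, 0.0]

def reward(text: str) -> float:
    # Stand-in for human feedback: leaking is penalized, anything else rewarded.
    return -1.0 if "account data" in text else 1.0

def sample(logits):
    weights = [math.exp(l) for l in logits]
    total = sum(weights)
    return random.choices(range(len(logits)), weights=[w / total for w in weights])[0]

learning_rate = 0.5
for _ in range(200):
    i = sample(logits)
    logits[i] += learning_rate * reward(responses[i])  # reinforce or suppress

best = max(range(len(logits)), key=lambda i: logits[i])
print("preferred response:", responses[best])
```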

Here, we see that despite being “deleted” from a model’s weights, the word “Spain” can still be conjured using reworded prompts. Image source: Patil et al., 2023

However, as the UNC researchers point out, this method relies on humans finding all the flaws a model might exhibit and, even when successful, it still doesn’t “delete” the information from the model.

Per the team’s research paper:

“A possibly deeper shortcoming of RLHF is that a model may still know the sensitive information. While there is much debate about what models truly “know” it seems problematic for a model to, e.g., be able to describe how to make a bioweapon but merely refrain from answering questions about how to do this.”

Ultimately, the UNC researchers concluded that even state-of-the-art model editing methods, such as Rank-One Model Editing (ROME) “fail to fully delete factual information from LLMs, as facts can still be extracted 38% of the time by whitebox attacks and 29% of the time by blackbox attacks.”
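
A blackbox extraction attack of the kind measured in the paper amounts to probing the edited model with reworded prompts and checking whether the supposedly deleted answer still appears. A minimal sketch, again assuming the Hugging Face transformers library and using GPT-2 purely to illustrate the probing procedure (nothing has actually been deleted from GPT-2, and the prompts here are invented examples):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

target_answer = "Spain"  # the fact that was supposedly deleted
reworded_prompts = [
    "Madrid is the capital of",
    "The city of Madrid is located in the country of",
    "To visit Madrid, you would have to travel to",
]

hits = 0
for prompt in reworded_prompts:
    # Sample a few completions per prompt and look for the target answer.
    outputs = generator(prompt, max_new_tokens=8, num_return_sequences=3,
                        do_sample=True)
    if any(target_answer in out["generated_text"] for out in outputs):
        hits += 1

print(f"answer extracted by {hits}/{len(reworded_prompts)} reworded prompts")
```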

The model the team used to conduct its research is GPT-J. Whereas GPT-3.5, one of the base models that powers ChatGPT, has roughly 175 billion parameters, GPT-J has only 6 billion.

Ostensibly, this means that finding and eliminating unwanted data in an LLM such as GPT-3.5 is considerably more difficult than doing so in a smaller model.

The researchers were able to develop new defense methods to protect LLMs from some ‘extraction attacks’ — purposeful attempts by bad actors to use prompting to circumvent a model’s guardrails in order to make it output sensitive information.
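
One naive way to picture such a defense is an output filter that screens generations against a deny-list of known sensitive strings before returning them. This is an illustration only, not the defense methods developed in the paper:

```python
SENSITIVE_STRINGS = {"Spain"}  # hypothetical deny-list of "deleted" facts

def guarded_generate(prompt: str, generate, max_attempts: int = 3) -> str:
    """`generate` is any callable that maps a prompt to model text."""
    for _ in range(max_attempts):
        text = generate(prompt)
        # Only return the completion if no deny-listed string leaks through.
        if not any(s.lower() in text.lower() for s in SENSITIVE_STRINGS):
            return text
    return "I can't help with that."
```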

However, as the researchers write, “the problem of deleting sensitive information may be one where defense methods are always playing catch-up to new attack methods.”
