Researchers in China developed a hallucination correction engine for AI models

The “Woodpecker” hallucination correction system can, ostensibly, be applied to any multimodal large language model, according to the research.

A team of scientists from the University of Science and Technology of China (USTC) and Tencent’s YouTu Lab has developed a tool to combat “hallucination” by artificial intelligence (AI) models.

Hallucination is the tendency of an AI model to generate outputs with a high level of confidence that are not grounded in the information present in its training data. The problem permeates large language model (LLM) research, and its effects can be seen in models such as OpenAI’s ChatGPT and Anthropic’s Claude.

The USTC/Tencent team developed a tool called “Woodpecker” that they claim is capable of correcting hallucinations in multimodal large language models (MLLMs). 

This subset of AI covers models such as GPT-4 (particularly its vision-capable variant, GPT-4V) and other systems that combine vision and/or other forms of processing with text-based language modeling in a single generative model.

According to the team’s preprint research paper, Woodpecker uses three separate AI models, apart from the MLLM being corrected, to perform hallucination correction.

These include GPT-3.5 Turbo, Grounding DINO and BLIP-2-FlanT5. Together, these models work as evaluators that identify hallucinations and instruct the model being corrected to regenerate its output in line with the visual information they extract.
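The paper treats these three models as a loose ensemble of experts rather than a single API. The Python sketch below only illustrates that division of labour under stated assumptions: the function names (gpt35_parse, grounding_dino_detect, blip2_answer) and the Evidence structure are hypothetical placeholders standing in for calls to the three models, not code from the Woodpecker release.

```python
# Hypothetical sketch of how Woodpecker's three helper models divide the work.
# None of these function names come from the released code; they stand in for
# calls to GPT-3.5 Turbo, Grounding DINO and BLIP-2-FlanT5 respectively.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Evidence:
    """Visual facts gathered about the image, used to check the MLLM's answer."""
    objects: List[str]                 # objects Grounding DINO actually located
    attribute_answers: Dict[str, str]  # BLIP-2's answers to attribute questions


def gpt35_parse(mllm_answer: str) -> List[str]:
    """Use GPT-3.5 Turbo to pull the key concepts and claims out of the answer."""
    raise NotImplementedError  # would call the OpenAI chat API in a real system


def grounding_dino_detect(image, concepts: List[str]) -> List[str]:
    """Use Grounding DINO to check which claimed objects really appear in the image."""
    raise NotImplementedError


def blip2_answer(image, questions: List[str]) -> Dict[str, str]:
    """Use BLIP-2-FlanT5 as a VQA model to answer attribute-level questions."""
    raise NotImplementedError


def gather_evidence(image, mllm_answer: str) -> Evidence:
    """Combine the three experts into a single evidence-gathering pass."""
    concepts = gpt35_parse(mllm_answer)
    found = grounding_dino_detect(image, concepts)
    questions = [f"What does the {c} look like in the image?" for c in concepts]
    answers = blip2_answer(image, questions)
    return Evidence(objects=found, attribute_answers=answers)
```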

In each of the examples above, an MLLM hallucinates an incorrect answer (green background) to a prompt (blue background); the corrected Woodpecker responses are shown with a red background. Source: Yin et al., 2023

To correct hallucinations, the AI models powering Woodpecker use a five-stage process that involves “key concept extraction, question formulation, visual knowledge validation, visual claim generation, and hallucination correction.”
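Read as a pipeline, those five stages take the image, the original prompt and the MLLM’s answer as inputs and return a corrected answer. The sketch below only mirrors that control flow as described in the paper; every stage function is a hypothetical stub, not the authors’ implementation.

```python
# Illustrative control flow for Woodpecker's five correction stages.
# The stage names follow the paper; every function body is a hypothetical stub.

from typing import Dict, List


def extract_key_concepts(answer: str) -> List[str]:
    """Stage 1: pull the main objects and claims out of the MLLM's answer."""
    raise NotImplementedError


def formulate_questions(concepts: List[str], prompt: str) -> List[str]:
    """Stage 2: turn each concept into questions that can be checked visually."""
    raise NotImplementedError


def validate_against_image(image, questions: List[str]) -> Dict[str, str]:
    """Stage 3: answer the questions with the expert vision models."""
    raise NotImplementedError


def generate_visual_claims(visual_facts: Dict[str, str]) -> List[str]:
    """Stage 4: restate the validated facts as grounded claims about the image."""
    raise NotImplementedError


def rewrite_answer(answer: str, claims: List[str]) -> str:
    """Stage 5: revise the original answer so it agrees with the grounded claims."""
    raise NotImplementedError


def correct_hallucinations(image, prompt: str, mllm_answer: str) -> str:
    """Chain the five stages into a single correction pass."""
    concepts = extract_key_concepts(mllm_answer)
    questions = formulate_questions(concepts, prompt)
    visual_facts = validate_against_image(image, questions)
    claims = generate_visual_claims(visual_facts)
    return rewrite_answer(mllm_answer, claims)
```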

Related: Humans and AI often prefer sycophantic chatbot answers to the truth — Study

The researchers claim these techniques provide additional transparency and “a 30.66%/24.33% improvement in accuracy over the baseline MiniGPT-4/mPLUG-Owl.” They evaluated numerous “off the shelf” MLLMs using their method and concluded that Woodpecker could be “easily integrated into other MLLMs.”
