OpenAI launches ‘Preparedness Team’ for AI safety, gives board final say

OpenAI said its new “Preparedness Framework” aims to help protect against “catastrophic risks” when developing high-level AI systems.

The artificial intelligence (AI) developer OpenAI has announced it will implement its "Preparedness Framework," which includes creating a dedicated team to evaluate and forecast risks.

In a blog post released on Dec. 18, the company said its new "Preparedness Team" will serve as a bridge between the safety and policy teams working across OpenAI.

It said these teams, functioning almost as a checks-and-balances system, will help protect against "catastrophic risks" that could be posed by increasingly powerful models. OpenAI said it would only deploy its technology if it were deemed safe.

Under the new plan, the advisory team will review safety reports, which will then be sent to company executives and the OpenAI board.

While the executives are technically in charge of making the final decisions, the new plan gives the board the power to reverse safety decisions.

This comes after OpenAI experienced a whirlwind of changes in November, with the abrupt firing and subsequent reinstatement of Sam Altman as CEO. After Altman rejoined the company, it released a statement naming its new board, which now includes Bret Taylor (chair), Larry Summers and Adam D'Angelo.

Related: Is OpenAI about to drop a new ChatGPT upgrade? Sam Altman says ‘nah’

OpenAI launched ChatGPT to the public in November 2022, and since then, there has been a rush of interest in AI, but there are also concerns over the dangers it may pose to society.

In July, leading AI developers, including OpenAI, Microsoft, Google and Anthropic, established the Frontier Model Forum, an industry body intended to oversee the self-regulation of responsible AI development.

In October, the Biden administration issued an executive order laying out new AI safety standards for companies developing and deploying high-level models.

Before the executive order was issued, prominent AI developers were invited to the White House to commit to developing safe and transparent AI models; OpenAI was among the companies in attendance.

Magazine: Deepfake K-Pop porn, woke Grok, ‘OpenAI has a problem,’ Fetch.AI: AI Eye
