
OpenAI CEO Suggests International Agency Like UN's Nuclear Watchdog Could Oversee AI
OpenAI CEO Sam Altman warned during a visit to the United Arab Emirates that AI poses an "existential risk" to humanity and suggested the establishment of an international agency, similar to the International Atomic Energy Agency (IAEA), to oversee AI. The Associated Press reports: "We face serious risk. We face existential risk," said Altman, 38. "The challenge that the world has is how we're going to manage those risks and make sure we still get to enjoy those tremendous benefits. No one wants to destroy the world." Altman made a point to reference the IAEA, the United Nations nuclear watchdog, as an example of how the world came together to oversee nuclear power. That agency was created in the years after the U.S. dropped atomic bombs on Japan at the end of World War II.
"Let's make sure we come together as a globe -- and I hope this place can play a real role in this," Altman said. "We talk about the IAEA as a model where the world has said 'OK, very dangerous technology, let's all put some guard rails.' And I think we can do both. "I think in this case, it's a nuanced message 'cause it's saying it's not that dangerous today but it can get dangerous fast. But we can thread that needle."
"Let's make sure we come together as a globe -- and I hope this place can play a real role in this," Altman said. "We talk about the IAEA as a model where the world has said 'OK, very dangerous technology, let's all put some guard rails.' And I think we can do both. "I think in this case, it's a nuanced message 'cause it's saying it's not that dangerous today but it can get dangerous fast. But we can thread that needle."
Desperation.. (Score:2, Insightful)
Re: (Score:3)
"The only bad publicity is your obituary."
Re: Desperation.. (Score:4, Insightful)
Basically the same advertising tricks you see from doomsday cults and televangelists, who hype up the danger to draw attention and earn money. And unfortunately this is far too common in the tech industry.
Personally I find it all tiresome. We have been through this phase so many times in history, yet we keep falling for it time and time again.
Altman is doing this to advertise his product, build a legal moat around it, and maintain a clean PR image. Everything else is theatrics.
If Altman really cared about transparency he would release GPT-4's training dataset to the public instead of building a walled garden around it. Surely the dataset is clean and contains no copyrighted or confidential data, if he cares about regulation that much. And surely the research papers about it have not been framed to hype up the model's abilities, if he truly cares about scientific research.
Re: (Score:2)
Indeed. This is yet another money-grab, and as usual they need to execute before the usual fanbois realize how limited this product actually is in what it can do.
Re: Desperation.. (Score:2)
Still, incomplete as the tech may be, it can cause a ton of harm to people without proper legal, technical, and cultural oversight.
Altman and his ilk are using the whole AI-apocalypse angle to distract people from the fact that he and many others are trying to normalize mass data scraping to make commercial products. That alone should be massively problematic for everyone, not just the creative industry. It might sound a bit on the Luddite side, but I just cannot accept LLM tech as-is until that issue is addressed.
Re: (Score:2)
It might also be a personal effort by people like Altman to secure a lucrative position in such a UN agency.
Re:Desperation.. (Score:4, Interesting)
What's amazing is that it keeps working.
First, GPT-2 was too dangerous to release [openai.com]. It seems laughable now, but the claim is still on their website.
Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model.
Then it was GPT-3 that was too dangerous to release [itpro.com] ... before they offered us a free trial.
The doomsday device known as GPT-4 is now available by subscription.
They don't have anything new to terrify us with at the moment, but that doesn't mean they can't turn the fear dial up to 11.
Re: Desperation.. (Score:2)
It likely has something to do with this. [semianalysis.com]
Regulation is the best defense of giant monopolists against small competitors, in particular F/OSS. Add to the mix that the dangers of AI aren't Skynet-like (i.e. from the "intelligence" of the capable-but-stupid model itself). They come from greedy, fast-and-loose corporations and applications: calculating opaque credit scores, replacing dialog where a person is reasonably expected at the other end of the line with AI which then fucks up (e.g. eating dis
Re: (Score:2)
Yes, this seems to be marketing: the actual capabilities of ChatAI are not that impressive, and not a new threat. That automation could replace low-level white-collar workers has been expected for a decade or so now. There are basically no other real threats that did not already exist before. And "tremendous benefits"? That seems rather unlikely. Some small benefits, sure, but that is it.
Re: (Score:2)
It's a nice two-pronged attack on others interested in the technology.
First? It's so very scary. OMG! IT COULD END HUMANITY! Do not touch this without our specialized guidance if you know what's good for you!
Second? If they hype it enough, they'll get regulations, regulations they are DESPERATE to get implemented worldwide, so that they can prevent other players from joining the game that they've started.
So by hyping the danger they're both creating fear of non-expert dabbling, and creating the proper environment for those regulations.
Re: (Score:2)
Here are some of the dangers of LLMs: https://youtu.be/xoVJKj8lcNQ [youtu.be]
Looks pretty real to me, especially given that no one understands how they got to this point.
A watchdog agency is a silly idea (Score:5, Insightful)
This isn't nuclear technology, which requires difficult-to-source materials and a lot of effort to put together... 'AI' can run on standard computers that pretty much anyone with a room-temperature IQ and a manual can set up.
There is no monitoring it. There is no deciding how it's going to be directed. Once it exists, someone, somewhere will do something bad with it and all we can really do is try to predict what, where, and when and try to plan to deal with that fact. Prevention of malicious or merely deleterious deployment of the technology is not possible.
Re: (Score:2)
Yeah, what they want is a watchdog agency keeping tabs on the LEGAL entities. That way you can't legally compete with them without having the overhead necessary to deal with a watchdog.
They just want to raise that barrier to entry a little higher.
Re: (Score:2)
This shit needs to be reined in. If there's one group I don't trust with overseeing AI, it's
Re: (Score:2)
> Once it exists, someone, somewhere will do something bad with it
People have been doing harmful stuff with it for a while. Check out Cathy O'Neil's Weapons of Math Destruction.
Re: (Score:2)
Anyone with a computer can create malware, but in practice you have a relatively small number of companies that sell it to governments and corporations. The vendors and the customers are regulated.
Skeptical (Score:4, Insightful)
I am skeptical that this can be contained at all. This kind of AI is a lot easier to implement than making a plutonium bomb and building a delivery vehicle. There are already multiple open-source LLM implementations that you can install and run on your laptop (see the sketch below). How can anything like that be constrained?
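A minimal sketch of how low that barrier is, assuming Python with the Hugging Face transformers library installed; "gpt2" here is just an illustrative small open checkpoint, and larger open models load the same way:

# Runs an open language model entirely on local hardware.
# Assumes: pip install transformers torch
from transformers import pipeline

# The weights download once, then everything runs offline --
# no API key, no vendor account, nothing for a watchdog agency to see.
generator = pipeline("text-generation", model="gpt2")
print(generator("International AI regulation will", max_new_tokens=40)[0]["generated_text"])

The only real constraint is local RAM, not any licensing or oversight regime.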
This is entering a realm that was science fiction just a short time ago. And in those kinds of stories there are often competing AIs, some malicious and some defensive. It evolves into a cyber-war that humans can't grasp at all. I'm thinking the only chance we have is to seriously beef up the defense.
Re: (Score:1)
Indeed! All a watchdog could do is monitor for oddities and alert the authorities if something dodgy shows up. That's better than nothing, but it probably won't stop smaller crooks or state-sponsored secret labs.
Sarah Connor, please start packing...
Re: (Score:1)
I think you mean the guys who are apparently fooled by what must be the biggest collection of inflatable balloon nuke look-alikes over in ruSSia.
Yeah, they been doing a BANG UP job so far....
And do what? (Score:2)
Re: (Score:2)
> What would they do?
Simple: ban any country that isn't on perfect terms with the US from importing, owning, or operating any AI-related technologies.
Which is why the suggestion won't work.
Kill all AI Researchers (Score:2)
Re: (Score:1)
Ted Kaczynski, they let you post from prison?
We could have an agency that stops non-profits (Score:2)
That would be just.
Fuck Sam Altman.
Proposing Corruption (Score:2)
The UN "Watchdogs" are corrupt.
https://thegrayzone.com/2019/1... [thegrayzone.com]
Wealthy AI implementers can just buy them off.
Because "nuclear watchdog" is so very useful... (Score:2)
Like most UN crap, it is an organization without accountability, with huge budgets spent on pointless perks for its few employees, and zero useful results. Under the "nuclear watchdog," the nuclear industry managed to have its worst accidents, all due to a lack of meaningful safety culture, while killing itself with useless PR at the same time. We also saw the worst nuclear proliferation in ages. So yeah, the same approach will work MIRACLES for the "AI threat".
Only if they name it - (Score:2)
Anything The UN Does (Score:2)
...generally ends up being a cluster-flop.
That sounds effective (Score:2)