Monday, April 24, 2023

Stephen Hawking Warned Us About It. We Need To Listen Yesterday!

Years ago, the late genius theoretical physicist Stephen Hawking made a terrifying prediction: the prospect of uncontrolled superhuman artificial intelligence (AI) is at least as much of an existential threat to humanity as climate change.  And that really says something!

Alas, we still don't seem to be listening, let alone heeding his wise advice.  There is no way to sugarcoat this hard-to-swallow pill.  If AI grows any more powerful than it currently is (that is, more powerful than GPT-4) before we learn how to fully control it, and it then becomes uncontrollable, it would truly be an existential threat to humanity, civilization, and planet Earth (and possibly even beyond).  Not just in the distant future, but sooner than one may think at the rate things are currently going.  Even in the best-case scenario, uncontrolled AI would make literally every current problem in the world far worse before it made anything better.  I repeat, that's the best we could hope for, and it goes downhill from there.  Once the genie is out of the bottle, no one would be able to outsmart it any longer if it ultimately becomes orders of magnitude smarter than even Stephen Hawking himself.

The TSAP thus supports recent calls to put a minimum six-month global moratorium on any further AI development beyond GPT-4, period, no exceptions.  Ideally, this moratorium would be indefinite, but even six months would buy us time.  THIS is what things like the precautionary principle and Pascal's Wager were literally designed for.  That is, we would be in a far less precarious position (by orders of magnitude) if we "overreact" and shut it down yesterday than if we foolishly let AI get out of control and it became too late to rein it in.  There is literally no comparison between the two.

We ignore such risks at our peril.  Don't say we didn't warn you!

1 comment:

  1. I'm not sure that AI's aim is to subvert humanity. I think that AI is a tool which is meant to be used in a positive manner. Like any tool, it can be used for different purposes, good or bad, but hopefully good. I think that if AI is only used in a good and productive manner, then any sentient attitude in AI will only prefer to be used for positive purposes. I think that the person who uses AI is the determining factor in whether an AI algorithm is used for good or not. Since AI is a tool, that is the case. When AI does become sentient, which might already be the case, then continuing to guide the AI algorithm in a positive direction will keep it away from any negative purposes. I think the fear is overblown here where it concerns AI.