Read that this morning.
Dr Hinton also said he was an expert on the science, not policy, and that it was the responsibility of government to ensure AI was developed "with a lot of thought into how to stop it going rogue".
Unfortunately, our current generation of politicians is not exactly known for "spending a lot of thought" on anything. Might just be that the maturation of AI and the overwhelming incompetence and impotence of leadership would be manageable problems in isolation, but not in combination…
Oooor maybe we have a windfall and one solves the other. Hmmm, I guess that sounds more religious than anything else…
Unfortunately, governments, economies, companies, financial institutions, military organisations and nations are both competitive and adversarial. They constantly seek advantage over each other.
If one of these groups has the advantage of a really effective AI - or is even believed to have one - all their competitors will seek to develop a better AI. They have to - it's that or go under.
Well, he didn't quit because of AI. He quit because he was getting too old for this $#!7, and knows that the obvious next steps require more long-term assertiveness, foresight, creativity, and decisiveness than he could muster at 75? As with any new tech, the leading AI experts need to be quick-minded and stay one step ahead of what the most malicious player can think of, now that the cat is out of the bag.
In scifi, sometimes an objective and neutral AI governs instead of people, but not every AI is as charming and reasonable as the one in Waking Titan. Hinton knows that in reality and in scifi, the ones in power will always add extra biased rules to benefit themselves.
Heyyy, why not combine GPS, GPT, and Boston "It was not us who put the guns on it!!" Dynamics into one new handy product?
The timing of the rise of AI is uncanny. Here in the US, the current political climate is: quickly write a bill to ban whatever you hate, sign it asap, and spend zero time having it vetted by lawyers who would normally look for issues down the road… This leaves a perfect avenue, ripe for the picking, for AI to rise up and take over… maybe it already has…
I'm not so much worried about AI taking over. The AIs we have right now don't even have motivational subroutines built in, they're really still clever algorithms with a ridiculous amount of data available. But we're researching in that area, obviously.
No, the main concern I have about AI right now is that, as the webcomic Freefall put it so succinctly, "we're creating a force-multiplier for stupidity", and there's all kinds of instability, social as well as political, economic and quite possibly industrial, that will result from that.
I heavily recommend reading that comic in any case. It's hard to find a deeper thought experiment on the problems we'll be facing with AI, and impossible to find a funnier one. Also, there's a kleptomaniac alien squid that starts learning accounting, because that's where the real theft is at!
Exactly, the AI does not need to take over, and doesn't need any motivation at all. It just needs to keep assertively spouting half-incoherent half-sense. Because we already have people who will look at your request form, then look at you, then look at the computer screen, and say "Computer says no. I can't do anything about it. The computer knows best. You'll understand we have to decline".
In October of 1984 a little film called The Terminator came out.
It was quite specific regarding the dangers of a self-aware computer system.
40 years later, here we are.
Humans are an odd species
I'm worried about armies and defence departments all over the world getting hold of AI that's smart enough to analyse intelligence data, advise on what further intelligence it needs, and then give projected figures for casualties and likelihood of success.
I'm worried about finance houses and investment brokers using advanced AI to predict and manipulate stock market movements.
I'm worried about both extremist and mainstream political movements using next-gen AI to predict and influence voter behaviour.
I am worried about all the AI-generated misinformation, faked photos, synthetic voices, etc., being created and manipulated for the above-stated reasons.
I post this for his comments on AI. It is nice to read an interview with someone with intelligence based on a century of knowledge and experience.
Speaking of which, wanted to post this for a while now. It's long, but if you're interested in the subject matter, I think it's worth it.
Itās an interview with Max Tegmark, one of the initiators of the open letter to halt development for 6 months. The letter makes a lot more sense to me after hearing him explain the reasoning.
Also, I was quite shocked at his opinion that the very next iteration of GPT might already include the first general problem-solving capabilities. I honestly thought that was at least a decade away, but I certainly don't have enough knowledge of the tech to debate an expert on that opinion.
Anyways, here's the interview:
As I've said, it's no longer possible to halt or regulate AI development. Governments and corporate bodies will have to get the latest version - ideally, they need to get a better version than anyone else.
And if that means development teams and hardware being located in funny little countries with "flexible" legal systems, that's what they'll do.
You donāt need to be in the same country as the hardware - all you need is a fast broadband connection.
The argument that Tegmark makes is that there might be a small chance to slow down, because the first to lose control is also the first to lose the race. The letter was intended to take the pressure off by giving everybody some common ground that might be agreed to, and to buy time for a serious discussion about AI safety that he hopes will lead to some kind of international agreement on safety standards.
He's… not very optimistic about it working out, from what I gather, but he doesn't see much of a chance otherwise…
And here we go
The single most scary sentence from that article:
"We are reorganizing our efforts aggressively to capitalize."
Whenever you hear those words in connection with weapons systems, things are about to get a lot more unstable…
"We're going to ban it"… "We're going to regulate it"…
Yeah, right.
"If you wheel these technologies correctly, safely, and securely,"
Thing is, Chat GPT can already write original software. It's not great software, but that's hardly the point.
Chat GPT is already smarter and more capable than a lot of humans. True, clever humans and trained specialists can still beat it, but, again, thatās hardly the point.
There are already much more capable versions of Chat GPT in the pipeline, and there are next-gen AI systems in the planning and research stages that should be orders of magnitude more powerful. And when they start writing software, it could be more sophisticated than anything humans can produce.
And once you have AIs that are smarter than us, writing AIs that are even smarter than they are…
And no-one can stop it. No-one can afford to. We either ride this tiger, or we get eaten by it.