Latest Space Missions (& Other Science Stuff)

Read that this morning. :scream:

2 Likes

Dr Hinton also said he was an expert on the science, not policy, and that it was the responsibility of government to ensure AI was developed “with a lot of thought into how to stop it going rogue”.

Unfortunately, our current generation of politicians is not exactly known for “spending a lot of thought” on anything. It might just be that the maturation of AI and the overwhelming incompetence and impotence of leadership would be manageable problems in isolation, but not in combination…

Oooor maybe we have a windfall and one solves the other. Hmmm, I guess that sounds more religious than anything else… :rofl:

6 Likes

Unfortunately, governments, economies, companies, financial institutions, military organisations and nations are both competitive and adversarial. They constantly seek advantage over each other.

If one of these groups has the advantage of a really effective AI - or is even believed to have one - all their competitors will seek to develop a better AI. They have to - it’s that or go under.

3 Likes

Well, he didn’t quit because of AI. He quit because he was getting too old for this $#!7, and knows that the obvious next steps require more long-term assertiveness, foresight, creativity, and decisiveness than he could muster at 75. As with any new tech, the leading AI experts need to be quick-minded and one step ahead of whatever the most malicious player can think of, now that the cat is out of the bag.

In scifi, sometimes an objective and neutral AI governs instead of people, but not every AI is as charming and reasonable as the one in Waking Titan. :wink: Hinton knows that in reality and in scifi, the ones in power will always add extra biased rules to benefit themselves.

Heyyy, why not combine GPS, GPT, and Boston “It was not us who put the guns on it!!” Dynamics into one new handy product? :fearful:

2 Likes

The timing of the rise of AI is uncanny. Here in the US, the current political climate is: quickly write a bill to ban whatever you hate, sign it asap, and spend zero time having it vetted by lawyers who would normally look for issues down the road… This leaves a perfect avenue, ripe for the picking, for AI to rise up and take over… maybe it already has…

3 Likes

I’m not so much worried about AI taking over. The AIs we have right now don’t even have motivational subroutines built in; they’re really still clever algorithms with a ridiculous amount of data available. But we’re researching in that area, obviously.
No, the main concern I have about AI right now is that, as the webcomic Freefall put it so succinctly, “we’re creating a force-multiplier for stupidity”, and there’s all kinds of instability, social as well as political, economic and quite possibly industrial, that will result from that.

I heavily recommend reading that comic in any case. It’s hard to find a deeper thought experiment on the problems we’ll be facing with AI, and impossible to find a funnier one. Also, there’s a kleptomaniac alien squid that starts learning accounting, because that’s where the real theft is at!

4 Likes

Exactly, the AI does not need to take over, and doesn’t need any motivation at all. It just needs to keep assertively spouting half-incoherent half-sense. Because we already have people who will look at your request form, then look at you, then look at the computer screen, and say “Computer says no. I can’t do anything about it. The computer knows best. You’ll understand we have to decline”.

4 Likes

In October of 1984 a little film called The Terminator came out.
It was quite specific regarding the dangers of a self-aware computer system.
40 years later, here we are.
Humans are an odd species :crazy_face:

5 Likes

I’m worried about armies and defence departments all over the world getting hold of AI that’s smart enough to analyse intelligence data, advise on what further intelligence it needs, and then give projected figures for casualties and likelihood of success.

I’m worried about finance houses and investment brokers using advanced AI to predict and manipulate stock market movements.

I’m worried about both extremist and mainstream political movements using next-gen AI to predict and influence voter behaviour.

4 Likes

I am worried about all the AI-generated misinformation, faked photos, generated voices, etc., being created and manipulated for the above-stated reasons.

4 Likes

NASA: Black Holes

3 Likes

I post this for his comments on AI. It is nice to read an interview with someone with intelligence based on a century of knowledge and experience.

3 Likes

Speaking of which, I’ve wanted to post this for a while now. It’s long, but if you’re interested in the subject matter, I think it’s worth it.
It’s an interview with Max Tegmark, one of the initiators of the open letter to halt development for 6 months. The letter makes a lot more sense to me after hearing him explain the reasoning.
Also, I was quite shocked at his opinion that already the next iteration of GPT might include the first general problem-solving capabilities. I honestly thought that was at least a decade away, but I certainly don’t have enough knowledge of the tech to debate an expert on that opinion.

Anyways, here’s the interview:

4 Likes

As I’ve said, it’s no longer possible to halt or regulate AI development. Governments and corporate bodies will have to get the latest version - ideally, they need to get a better version than anyone else.

And if that means development teams and hardware being located in funny little countries with “flexible” legal systems, that’s what they’ll do.

You donā€™t need to be in the same country as the hardware - all you need is a fast broadband connection.

2 Likes

The argument that Tegmark makes is that there might be a small chance to slow down, because the first to lose control is also the first to lose the race. The letter was intended to take the pressure off by giving everybody some common ground that might be agreed on, and to buy time for a serious discussion about AI safety that he hopes will lead to some kind of international agreement on safety standards.
He’s… not very optimistic about it working out, from what I gather, but he doesn’t see much of a chance otherwise…

3 Likes

And here we go :scream:

4 Likes

The single most scary sentence from that article:

“We are reorganizing our efforts aggressively to capitalize.”

Whenever you hear those words in connection with weapons systems, things are about to get a lot more unstable…

4 Likes

“We’re going to ban it”… “We’re going to regulate it”…

Yeah, right.

2 Likes

“If you wheel these technologies correctly, safely, and securely,”

2 Likes

Thing is, ChatGPT can already write original software. It’s not great software, but that’s hardly the point.

ChatGPT is already smarter and more capable than a lot of humans. True, clever humans and trained specialists can still beat it, but, again, that’s hardly the point.

There are already much more capable versions of ChatGPT in the pipeline, and there are next-gen AI systems in the planning and research stages that should be orders of magnitude more powerful. And when they start writing software, it could be more sophisticated than anything humans can produce.

And once you have AIs that are smarter than us, writing AIs that are even smarter than they are…

And no-one can stop it. No-one can afford to. We either ride this tiger, or we get eaten by it.

2 Likes