Elon Musk warns U.S. Governors of the A.I. threat

Nowhere have I said that AI should not be regulated, and I don’t oppose Musk’s views because of my career interests, as @ekult suggested (instead of trying to refute my point).

Quite the contrary! I, as many others currently involved in AI, call for regulation. I simply call for the regulation of existing and prospective technologies, not arbitrary hypotheticals.

1 Like

The problem with claiming AI is not an issue is that it lets clandestine military-type government outfits develop it in secret, and by then it’s definitely too late, which is already happening. Bringing the issue out into public debate is the best way to counter that. Bashing Musk just because you are worried about how his comments affect your career is blatantly unhelpful.

2 Likes
  • The use of AI for speculation on goods of first need, or the ability to influence any market… Really, any life-threatening decision you can think of could soon be taken or supervised by an AI with direct influence over your life’s outcome. The goals that an AI with direct power over humans will relentlessly pursue at the speed of light should be regulated and open to scrutiny.

  • The use of AI for illegal profiling, which could sway every employer’s decisions and erode your free will on the basis of statistics…

  • Weaponized AI, made specifically to cause harm to a country’s, or potentially the world’s, vital infrastructure…

I could keep going…

5 Likes

Potentially rebellious AI that poses an existential threat to human civilization does not exist right now.

This is a perfectly valid concern, but it is not what Musk is advocating for. It’s perfectly fine, and actually necessary, to raise awareness of these issues and try to tackle them. It’s the kind of thing AI experts are frequently denouncing, too. They are specific problems that are taking place right now, so we can actually start thinking how to regulate them. Robots killing people in the streets? Not so much. Fake news bots are most likely powered by machine learning, by the way.

I get so upset because Musk is actually saying nothing. A fundamental existential threat to human civilization? What does that mean? How exactly does AI pose that threat? Until he provides a precise description of his concerns, there’s no point in paying attention to him, let alone writing legislation.

I don’t know if you’re just skimming my posts or deliberately launching these ad hominem attacks to win over the readers. I don’t expect it to work.

1 Like

Once again, Musk is not only talking about potentially rebellious AI; in fact, he gives an example of AI just doing what it is told, like maximizing profits, without taking all aspects of human safety into account along the way. Honestly, why would it? These potential issues need to be addressed if you are going to take AI and human life seriously. There is absolutely no harm in addressing these topics, so I really don’t see your argument at this point, other than that you just don’t like Elon.

Notice that it is only you and the people with your career interest who attack Musk. It reinforces my point that Musk is not limited to machine learning; he has a broader picture of concern.

These are valid points, but I insist, this is miles away from what Musk suggests. He gives vague statements about how AI might end humanity as we know it. What we need is to analyze each case individually, assess the potential risks and try to push for adequate legislation. And for that we need level-headed, knowledgeable discussions of the technologies involved, how they function and where they might lead us.

Also, the methods powering each of the cases you mentioned might be vastly different. Things like “The use of AI for speculation on goods of first need” don’t go far beyond the use of statistical and computational methods to maximize profits. That would be more effectively controlled by regulating trade. It’s not so much a matter of AI as it is of curtailing the free market.

By the way, all the cases that you mention, to the extent they might use AI-related technology, are most likely heavily reliant on machine learning.

2 Likes

This is the perfect formula for failure, ensuring that all attempts at regulation are only reactive and too slow. AI could influence the economy of a small nation faster than any team of humans in its parliament can react, enact law and recover. Human dictators can cripple a nation in a decade; imagine how quickly AI could do so. There needs to be a line between what AI is and is not allowed to encroach upon in our daily lives. If there are fundamental rules for AI in commerce at least, then we can sleep easier. The same applies to the military, and so on.

If some entrepreneur came out and said “There is an asteroid heading to Earth that will destroy us. I know because I’m around telescopes all day. We must devote all our research to stopping it.” and all astrophysicists said “That’s nonsense. You don’t know what you’re talking about. There’s no asteroid coming.”, would you accuse them of acting out of self-interest?

Also, what’s the broader picture Musk is considering? What makes his understanding of AI better than that of AI scientists?

You give AI much more credit than it deserves. You don’t tell AI to maximize profits and set it loose to do as it pleases. AI is as vague a term as they get. If you want to rely on computational methods to maximize your profits (which is what the application of “AI” in this case really means), at the very best you’ll have a fine-tuned algorithm that’ll give you an “optimal” quantity to invest, or a “good” time to do so, or something of the sort.

Worrying about AI not taking all aspects of human safety into account is like worrying about the sensors of a plane not taking all aspects of human safety into account. It’s just technology. It’s going to do what you built it for. AI won’t take human safety into account because it can’t. It’s not actually intelligent. It’s the designer that must take those concerns into account when designing it. If it’s going to be deployed in an environment where it poses risks, just design it thoroughly and test it extensively, just like any other technology. There’s nothing special about it, really.

2 Likes

Problem solved then, no need to get upset.

3 Likes

That sounds like analyzing each case individually to me… Actually, it doesn’t really make sense to think of writing regulation for those two cases together. You might find common factors after analyzing them, but I wouldn’t expect substantial similarities. Banks and the military surely use vastly different technologies (even though they might be grounded on common theoretical principles), and apply them to definitely different scenarios. We would get nowhere by trying to regulate all AI at once.

I think the use of the term AI is a problem. There is no such thing as artificial intelligence today. The term is used to encompass a wide family of methods and techniques, which perhaps have in common that they involve computations. But in reality it’s just mathematics, statistics, pattern recognition and algorithms. The term AI, however, is suggestive of an entity with intellectual capabilities close to ours, which is still science fiction, I’m afraid.

1 Like

Nope, fundamental rules do not mean analyzing each case individually.

“all astrophysicists”, are you fantasizing that all experts are in agreement with you in regard to AI? Your analogies fall very flat.

You’re repeating my point when I said “why would it?”

It would be more like an astrophysicist who concentrates his work on black-hole detection denying the possibility of said impact and demanding that all effort and thinking be concentrated on black holes…

I think you are right there… We might be referring to autonomous artificial general intelligence. When we “neophytes” talk about AI, we are indeed talking about the next logical step… a highly theoretical, taken-for-granted, “matter of time” scenario…

1 Like

I haven’t really thought much about that scenario, although I do find it very interesting indeed.

Right off the bat I would say that we shouldn’t be too worried. All computational devices we can conceive of today are grounded in logic, and no matter how intelligent a machine is, it would not be able to contradict the logic that governs its circuits. Even if its “brain” were extremely complex, we could bottleneck all its output, making it conditional on an external switch inaccessible to the machine.

The only risk then would be some component physically malfunctioning, for instance a transistor momentarily giving a wrong value and allowing the machine to bypass the security measure. Even in that case, we could combine various failsafe mechanisms to make a fatal error arbitrarily unlikely.
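To put that combination argument in rough numbers: if each failsafe fails independently, the chance of all of them failing at once shrinks exponentially with how many you stack. This is only a sketch, and the per-failsafe probability below is an assumed, purely illustrative figure, not a measured one:

```python
# Minimal sketch of the failsafe-combination argument: if n failsafes
# fail independently with probability p each, they all fail at once
# with probability p ** n, which shrinks exponentially as n grows.
def combined_failure_probability(p: float, n: int) -> float:
    """Probability that all n independent failsafes fail simultaneously."""
    return p ** n

# Illustrative, assumed numbers only:
p = 1e-6  # chance that a single failsafe fails
for n in (1, 2, 3):
    print(f"{n} independent failsafe(s): {combined_failure_probability(p, n):.0e}")
```

The independence assumption is the crux, of course; failsafes sharing a common cause of failure (the same power supply, say) would not combine this favorably.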

So in principle, I think we should be fine…

Even that attracts regulations and failsafes, so anything exponentially more intelligent should have even more caution applied.

1 Like

As I said, I believe that fundamental rules can only be extracted as common factors after case-by-case analysis, and nowadays I don’t see much ground for fundamental rules. I could be wrong, but it seems unlikely for the most part.

Ok, some astrophysicists. The point is the same. Bear in mind that I only made it after you accused me and everyone in my line of research who confronts Musk of simply acting to preserve our career interests, which is a pretty wild accusation and dodges the argument.

In any case, I would be happy (honestly) to hear some AI experts with completely opposite views to mine, but so far I haven’t. I can name a few that think along the same lines, though.

Then why not raise the same concerns about airplane sensors, or any other technology for that matter?

We can always wait for a solar storm to let us come out of the caves again… if ever. But by then it would have self-replicating abilities, backups, and would have expanded through our solar system… even radiating its own code to the cosmos, waiting to pollinate some less advanced hardware-based species. I’m optimistic by nature!

2 Likes

Stay Vigilant Citizen Scientist!

3 Likes

Correct, it will only happen when it’s too late as usual.

No dodge; I’m just responding to someone on a forum who purports to be smarter than Elon Musk.

Yes, drones are a big concern for example.

1 Like