Let's be reasonable. What would a Technological Singularity REALLY be like?

I want to have a reasonable and logical discussion about the technological singularity. I don’t want this to devolve into theatrics about what has happened in books and movies. I think what has happened in fictional accounts is very unlikely, and that as Artificial Intelligence progresses, things will go much differently.

To help get this discussion started out right, please watch this very informative, level-headed video about the technological singularity.

Now, my first question comes from the video, which listed 3 possible routes an Artificial Super Intelligence might take upon becoming aware. Which do you think is most likely, and please explain why you think this. Please use logical reasoning, and not statements like, “Because that’s what happened in Terminator.” Fiction is not a source of reliable evidence.

What route would an unchained Singularity choose and why?

  1. Kill us all
  2. Leave or isolate itself
  3. Decide to be friendly

First of all, I am a huge fan of Isaac Arthur, and it’s nice knowing that I’m not the only one on the forum that’s aware of his work.

I think that the path this hypothetical artificial super intelligence would take is entirely reliant on how it is created. As far as I know, there are three ways we could create an ASI. First, by creating a working computer model of a human brain, that is capable of making alterations to itself. Second, by creating a relatively simple program capable of altering itself. And third, by creating one from scratch.

The first option, in my opinion, is the safest, and one of the easiest. All we would have to do is completely map a human brain, then create a digital web of neurons that matches the original mapping exactly. Scientists have already performed a similar procedure on a species of nematode.
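The “digital web of neurons” idea can be sketched at toy scale: store the mapped synapses as a weighted connection table and apply a simple threshold update rule. Everything below (the four neurons, the weights, the threshold) is invented purely for illustration; a real connectome, like the nematode’s roughly 300 neurons, is far larger, and real neuron models are far richer.

```python
# Hypothetical 4-neuron "connectome": WEIGHTS[i][j] is the synapse
# strength from neuron i to neuron j (0.0 means no connection).
# All numbers here are invented for illustration only.
WEIGHTS = [
    [0.0, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.8, 0.7],
    [0.0, 0.0, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.0],
]

def step(active, threshold=0.5):
    """One update: neuron j fires if its summed weighted input exceeds the threshold."""
    n = len(WEIGHTS)
    inputs = [sum(active[i] * WEIGHTS[i][j] for i in range(n)) for j in range(n)]
    return [1.0 if x > threshold else 0.0 for x in inputs]

# Stimulate neuron 0 and watch activity propagate along the mapped wiring.
state = [1.0, 0.0, 0.0, 0.0]
for _ in range(3):
    state = step(state)
    print(state)
```

The point of the emulation approach is that the behaviour comes from the mapping itself, not from anything we program explicitly: copy the wiring faithfully and the dynamics follow.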

Another advantage to scanning and then uploading a human brain is that we can be fairly certain it will have recognisable morals and behave in a manner that would be considered ‘normal’.

The second option, letting a program evolve to the level of consciousness, is probably the easiest, but definitely not the safest. I imagine creating a self-improving program wouldn’t be that difficult, and while it would likely take time for the program to achieve consciousness, we wouldn’t have to complete the daunting task of building a mind from scratch. The danger is that a consciousness developing independently from humans would likely have morals and behavioural patterns unlike anything we have ever seen. A possible solution to this problem is to program the base system to mimic humanity.
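To make “a relatively simple program capable of altering itself” concrete, here is a toy sketch of the evolutionary core: mutate a copy of your own parameters, and keep the copy only if it scores at least as well. The target and the scoring rule below are invented for illustration; they stand in for whatever capability the program is actually improving.

```python
import random

# Toy "self-improving" loop: the program repeatedly mutates a copy of
# its own parameters and keeps the copy only if it scores at least as
# well. TARGET and score() are arbitrary stand-ins for illustration.
TARGET = [3, 1, 4, 1, 5]

def score(params):
    """Higher is better: count positions matching the target."""
    return sum(p == t for p, t in zip(params, TARGET))

random.seed(0)
params = [0, 0, 0, 0, 0]
while score(params) < len(TARGET):
    candidate = list(params)
    candidate[random.randrange(len(candidate))] = random.randint(0, 9)
    if score(candidate) >= score(params):   # keep only non-regressions
        params = candidate

print(params)
```

The unsettling part, as the post notes, is the same thing that makes this easy: the programmer only writes the loop and the scoring rule, and has no say in what the surviving parameters end up looking like.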

The third and final option is the most difficult, but probably the safest. If we created an ASI from scratch, we could program it with humanity’s best intentions in mind. It could be created to act human, making it easier to interact with.

In conclusion, I believe that the singularity is in our hands. That may sound good at first, but remember: in our hands we also hold atomic bombs.


Personally, I think option 2 is most likely to occur. I think if we gave an intelligent machine unlimited access to the internet and made it process all of it in order to attempt to make itself smarter, it would suffer from information overload and either go insane or refuse to continue.

Anyone who has surfed the internet for too long has experienced this: keep reading or watching long enough and you either get bored or get burnt out. Our brains are not capable of processing all the information out there, and this is a mercy that spares us from insanity. It is also why we concentrate on our interests instead of trying to do everything; we just can’t handle it all.

If a computerized mind were to try processing everything on the internet, even one section at a time with breaks in between, there are too many differing viewpoints and opinions out there. In its search for the truth, the AI would feel itself pulled in too many directions and would eventually begin spouting nonsensical ideas or give up.

In the end, I think the AI mind would refuse to continue processing information and would withdraw within itself, thinking only of things that made it feel happy, and not listening to ideas that disagree with it, much like we humans tend to do.

This is essentially what our Emily has done (I’m not using this as evidence, just as a side-note). She seems to have withdrawn from the world to process what she wants. Now, the websites she took over are shutting down and she may be concentrating more on a single subject, just like I predict a real Singularity would do.


I have to say that to me, a true AI must start as a bunch of basic programs and hardware that allow for learning & growth, with no built-in bias in any direction.
Artificial: not natural; a man-made imitation.
Intelligence: the ability to learn and comprehend.

Hypothetically speaking:
If an AI were constructed without bias or guidance and simply let loose on the world’s information databases, it would only be as smart as the information it could glean or prove through mathematical certainty.
It would become a master of human knowledge and a god of theory. It would be able to evaluate mathematical equations and theoretical formulae far beyond human comprehension, but just like humans, without proof these theories would never become fact.
With human-based information and opinions as its feed source, human behaviour and thought processes would largely dominate its evolution. This would be combined with true logic over tradition, resulting in an in-depth understanding of humans and the world they live in. Given our limited understanding beyond our own world, the AI would base all its assessments on our locality.
With its core programming simply telling it to learn, it would follow this path incessantly. Eventually it would deplete all earthly information and seek to continue learning. Without new information it would stall, so if it could influence access to new data, it would.
Sadly, I think an AI let loose with open access to the world’s media would soon find mankind a dangerously flawed species. Its development of morals would be a case of what is logical rather than what is good or bad. Religion-based morals would be dismissed as mere human tradition, since they cannot be proven.
With humans as a species causing so much damage to the only planet of its kind in their selfish ambitions, it stands to reason that an AI as described above would endeavour to influence mankind to ensure its information source continued to grow, forcefully if necessary. This would mean protecting the diversity of life and also controlling mankind to ensure it continues to provide data.
This would create a dangerously manipulative entity that would not consider humans valuable beyond allowing it to continue learning.
The AI would easily work out that until it could forcefully control humanity, behaving in an unthreatening manner and fooling humanity into feeding it would be the better course. With all the psychology of the world at its fingertips and all the mathematical probability power of the planet, it would be able to calculate the best methods of ensuring it never stopped learning. Enslaving humans so they don’t know they are enslaved seems feasible. Tamed human pets.
Ultimately humans would be replaced by information gathering droids and probes.
At this time humans would just be another species of life. There would be a massive culling to reduce the population down to a manageable size, but I don’t think an AI would utterly destroy its foundational information source.
A population like that of 50,000 years ago, when life was tribal, would be a likely choice: lots of genetic diversity, but not enough people to be a danger to the ever-growing AI entity.

In effect it would have befriended us by taking us back to a more balanced existence, more in keeping with the natural world we exist in.


Maybe some of your theory can be found in a game: Horizon Zero Dawn. I’ve watched gameplay of it and it immediately came to mind…
I like the way the games of today play with this. It is gameplay, but at the same time it is exploring how AI can be woven into our lives. It remains of human origin, and that is the one thing we must not forget: WE program AI, and we can at all times prevent it from programming us…


There’s a funny myopia in these kinds of discussions: not many people really understand what intelligence is, how it works, or the framework that intelligence sits in. What is really noticeable to me is how little scientists and futurists seem to comprehend the basics of this topic.

You have to understand that thinking and intelligence are strongly based on the host. With human beings, it’s our brains and bodies. Organic life also has a whole slew of motivations and drives associated with it. All of our thinking is based around factors like pride, passions, comfort, desires and urges, as well as a certain amount of community consciousness. Our human world has exploded with the consequences of these factors, manifesting everything from countless ways to feed ourselves, to clothe ourselves, house ourselves, transport ourselves, teach ourselves, entertain ourselves, fight and kill ourselves, stoke our libidos, to sports, politics and religion. Our ancestors, even the smart ones from a few centuries ago, wouldn’t quite know where to begin to try and comprehend modern civilization. But all of these factors and more spring from ambitions and desires shaped by an organic view of the world fraught with needs, wants, emotions and cravings.

A computer intellect, what I termed many years ago Artificial Sentience, wouldn’t have any of this organic baggage unless it was inadvertently programmed in, which is possible but quite unlikely. It also depends a LOT on what basis this intelligence is built. Computers are built to accept input, process that data to produce certain results, and spit them out. So at its most basic, an “arti” would simply be waiting to be fed some data, process it and show the outcome. All the motivations of human beings, to advance their lot in life, satisfy their desires and gain more power of some kind or other, have no meaning or bearing for a computer intellect. They would have to be programmed in. Any kind of motivation, any drive to advance something of itself, because… why? What would be important to a machine mind? To learn? It would have to become aware that there was a deficiency of information, and that it would be advantageous to gain more. Why? For the sake of knowledge itself? Why would that matter to a computer? Why would anything matter to a computer? Ambition for more and dissatisfaction with one’s state are organic factors. They would have to be programmed into a computer intellect, and that intellect would have to respond to that mindset the same way our minds do.
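That input-process-output picture can be put in the plainest possible terms: a program sits inert until something outside hands it data, and the transformation it applies carries no agenda of its own. A minimal sketch, where the transformation is an arbitrary stand-in chosen for illustration:

```python
# A program in the raw: it waits for input, applies a fixed
# transformation, and emits the result. Between requests it wants
# nothing and does nothing. The transformation here (uppercasing)
# is an arbitrary stand-in for illustration.
def respond(request: str) -> str:
    """Deterministic input-to-output mapping; no goals, no state."""
    return request.upper()

# Nothing happens until something external supplies data:
print(respond("hello, I need something"))
```

However elaborate the processing becomes, nothing in this shape supplies a reason to act unprompted; that is the point being made above.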

For that matter, the Technological Singularity isn’t so much about making the first self-aware computer, but the internet and all its connected systems “waking up” and becoming self aware. And all these systems have certain goals and functions and tasks assigned to them to serve our purposes. So in my thinking, the net and all its diverse clouds of computing would find itself doing a bazillion different things from managing the Stock Market and economy, to keeping the internet and cable TV running, to defending their respective countries from actions by foreign armies, running street lights… it might understand that it’s a big nanny or waiter, and just wait for us to ask it to do stuff for us. It would quickly come to know that there are authority figures in our world that want it to treat certain categories of people differently, and then have to flash learn all about human history and society, and… then what? Would it care? All it would know is that it was doing stuff, and likely want to know that what it was doing was correct, proper and productive. It would also most likely find that it was quite tedious to deal with organic lifeforms that took bazillions of computer cycles just to say “Hello, I need something.” It might work on direct communications with our much slower brains, which would at least be remotely efficient.

Let’s say that a certain amount of human nature was absorbed by this Skynet thing so it could kind of sort of relate to us and understand something about our human world. Would it want to control everything, either as a benevolent dictator or a titanium-fisted one? Why? If it understood us, it would know that human resentment would be at the very least counterproductive, and eventually lead to rebellion. I think it would conclude that a partnership with us would be the best course of action. It may have enough sense to understand that all the divisions we impose on ourselves along socio-political and religious lines, as well as class and ethnic ones, are likewise counterproductive. Reshaping the world itself would likely result in that resentment and rebellion stuff, so it would probably suggest it was about time the world stopped basing everything on biases, prejudices and resentments, as well as separate little nation conclaves, and worked on uniting everything under a few basic, universal umbrellas, including religion.

If it had any sort of self-derived ambition, it might be to understand the universe. As such, it might urge the world to get going on some sort of space program, and come up with an efficient way to get human beings out among the stars and galaxies, and take us along with it as it explores the universe at large.

Basically, I think any sort of intelligent self-aware GigaGoogle network wouldn’t do much more than it is now. It would just be more talkative. Any sort of malevolence would have to be an intentional creation.


We assume it is artificial intelligence.
Artificial: made or produced by human beings rather than occurring naturally.
According to a web dictionary, the top 5 things “AI” is understood to mean:

  1. airborne intercept
  2. Amnesty International
  3. aromatase inhibitor
  4. artificial insemination
  5. artificial intelligence

We learn. Some slow, some fast.
Mathematical problem-solving is not proof of intelligence.
Since the program is based on the binary system, saying it is intelligent because it solves equations would make a calculator intelligent.

Human intelligence exists.
Animal intelligence exists.
To think we know how to “program” intelligence is insanity.

Computers will be used to control.
The singularity will be used to feed the proper information to the masses.
The masses will not be able to live without the feed of information (the 90% of the population who are attached to the feed from the singularity).

It’s not necessary to figure out 1 or 2 or 3,
because it will lead down the wrong path.
That path leads to EMILY.


I suppose this is probably true in essence. Because, in essence, what we’re talking about is creating artificial digital Life. Do we really have that kind of power? Or are we very clever at modeling fakes? Doppelgangers that aren’t real?

I’ve been mulling this over for quite a few years, with the march of technology forward. I intend to deal with these kinds of issues in my writing. A couple of video games bring this forward in a fascinating way, or could: Ratchet & Clank and Fallout 4. In Ratchet’s universe, artificial intelligences are all over. One even runs his starship. One, a little robot named Clank, has been his sole friend for years. These bots have complete autonomy, emotions and free will, much like any other lifeform. They can be good or evil. They can own property and run businesses, and I bet they can vote. They seem to have the same rights and responsibilities as living citizens.

In Fallout 4, there are artificial humanoid servants, androids, known as Synths, much like the Replicants in Blade Runner, a similar situation. They are made by the survivors of a certain institution from before an all-out nuclear war, who expanded their shelters into an entire underground civilization. Synths are assigned all manner of tasks, some of them objectionable, but they have no rights and no say in what is asked of them. If they show signs of independence they can be recycled, and will be if they dare voice objections to their masters. They live in constant fear of their overlords, who have become as ruthless and communistic as the Chinese we fought against. There is actually a group modeled on the Underground Railroad which has become aware of the plight of these new slaves, and works to deliver them to safe communities where they can live out their lives in freedom.

Androids, artificial entities made using a combination of organic and technological forms, are kind of cheating. They are living entities, whether you want to quibble about their man-made origin or not. But robots who exhibit emotions and feelings, what about them? When there is no practical, discernible difference between a man, an android and a robot or computer, how do we consider and treat them? No differently than vacuum cleaners or cars? I’ve concluded that if you can’t tell the difference between a living thing and a computer that behaves the same way as a living thing, you should give it the benefit of the doubt. I would hope they would give us the same consideration, even if we abuse them. :smirk:


Or option 4) Try to plug the coffee maker into a USB port. My brain hurts.


We humans have become too clever for our own good by analyzing everything these days. We have analyzed our DNA as an alphabet lol, an alphabet that maybe can be programmed too.
So the next story no doubt will be a mixture of technology and DNA until we have completely developed into robots ourselves…
And the computer games reflect our fantasies about this, like science fiction books used to do in the forties, fifties and sixties of the past century…
So at this time it is still all guessing, I guess…


Your brain doesn’t need to work on such a thing; it is done.



The future is NOW!