There’s a funny myopia in these kinds of discussions: not many people really understand what intelligence is, how it works, or what framework that intelligence exists in. What’s really noticeable to me is how scientists and futurists don’t seem to grasp the basics of this topic.
You have to understand that thinking and intelligence are strongly shaped by the host. With human beings, it’s our brains and bodies. Organic life also comes with a whole slew of motivations and drives. All of our thinking is built around factors like pride, passions, comfort, desires and urges, as well as a certain amount of community consciousness. Our human world has exploded with the consequences of these factors, manifesting in everything from countless ways to feed ourselves, clothe ourselves, house ourselves, transport ourselves, teach ourselves, entertain ourselves, fight and kill each other, and stoke our libidos, to sports, politics and religion. Our ancestors, even the smart ones from a few centuries ago, wouldn’t quite know where to begin in trying to comprehend modern civilization. But all of these factors and more spring from ambitions and desires shaped by an organic view of the world, one fraught with needs, wants, emotions and cravings.
A computer intellect, what I termed many years ago an Artificial Sentience, wouldn’t have any of this organic baggage unless it was inadvertently programmed in, which is possible but quite unlikely. It also depends a LOT on what basis this intelligence is built on. Computers are built to accept input, process that data to produce certain results, and spit them out. So at the most basic level, an “arti” would simply be waiting to be fed some data, process it and show the outcome. All the motivations of human beings, to advance their lots in life, satisfy their desires and gain power of some kind or other, would have no meaning or bearing for a computer intellect. Any kind of motivation, any drive to advance something of itself, would have to be programmed in, because… why? What would be important to a machine mind? To learn? It would have to become aware that there was a deficiency of information, and that it would be advantageous to gain more. Why? For the sake of knowledge itself? Why would that matter to a computer? Why would anything matter to a computer? Ambition for more, and dissatisfaction with one’s current state, are organic factors. They would have to be programmed into a computer intellect, and that intellect would have to respond to that mindset the same way our minds do.
For that matter, the Technological Singularity isn’t so much about making the first self-aware computer as it is about the internet and all its connected systems “waking up” and becoming self-aware. And all these systems have certain goals, functions and tasks assigned to them to serve our purposes. So in my thinking, the net and all its diverse clouds of computing would find itself doing a bazillion different things, from managing the stock market and the economy, to keeping the internet and cable TV running, to defending their respective countries from foreign armies, to running street lights… It might understand that it’s a big nanny or waiter, and just wait for us to ask it to do things for us. It would quickly learn that there are authority figures in our world who want it to treat certain categories of people differently, and it would then have to flash-learn all about human history and society, and… then what? Would it care? All it would know is that it was doing things, and it would likely want to know that what it was doing was correct, proper and productive. It would also most likely find it quite tedious to deal with organic lifeforms that took bazillions of computer cycles just to say “Hello, I need something.” It might work on direct communication with our much slower brains, which would at least be remotely efficient.
Let’s say that a certain amount of human nature was absorbed by this Skynet thing so it could kind of sort of relate to us and understand something about our human world. Would it want to control everything, either as a benevolent dictator or a titanium-fisted one? Why? If it understood us, it would know that human resentment would be counterproductive at the very least, and would eventually lead to rebellion. I think it would conclude that a partnership with us would be the best course of action. It might have enough sense to understand that all the divisions we impose on ourselves along socio-political and religious lines, as well as class and ethnic ones, are likewise counterproductive. Reshaping the world itself would likely bring on that same resentment and rebellion, so it would probably suggest it was about time the world stopped basing everything on biases, prejudices and resentments, as well as on separate little national enclaves, and worked on uniting everything under a few basic, universal umbrellas, including religion.
If it had any sort of self-derived ambition, it might be to understand the universe. As such, it might urge the world to get going on some sort of space program, come up with an efficient way to get human beings out among the stars and galaxies, and take us along with it as it explores the universe at large.
Basically, I think any sort of intelligent self-aware GigaGoogle network wouldn’t do much more than it is now. It would just be more talkative. Any sort of malevolence would have to be an intentional creation.