So Sean Murray and Elon Musk have met, at least a couple of times: briefly at E3, and again in 2015, when Musk invited Sean on a tour of SpaceX, where they apparently spent some time talking.
So what did they talk about? Among other things, apparently a crazy thought experiment called Roko’s Basilisk.
What is that? Well, you should probably read the Slate article on it, but long story short: if a malevolent, sentient AI comes to pass that will punish anyone who didn't help it, then you're left with two choices in the present: do everything you can to make Roko's Basilisk come into existence, or do nothing and be punished if and when it is created and exacts its revenge (or face nothing at all, if it never comes to exist). This idea so freaked out the techno-futurists in the community where it was posted that the thread was deleted. Some even had mental breakdowns. Here's how Slate described it:
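If the setup feels slippery, here is one way to lay the choices out. This is just my own sketch of the payoff structure as described above, not anything from the original LessWrong post or the Slate article:

```python
# A rough sketch of the basilisk's supposed payoff structure (my own framing):
# two choices now, two possible futures, four outcomes.
choices = ["help build the AI", "do nothing"]
futures = ["the AI is never built", "the AI comes to exist"]

def outcome(choice: str, future: str) -> str:
    """What supposedly happens to you in each combination."""
    if future == "the AI is never built":
        return "nothing happens either way"
    # The AI exists and, per the thought experiment, punishes non-helpers.
    return "spared" if choice == "help build the AI" else "punished"

for c in choices:
    for f in futures:
        print(f"{c!r} + {f!r} -> {outcome(c, f)}")
```

The supposed trap is that once you've seen this laid out, "do nothing" stops being a neutral option, because one of the two futures now punishes it.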
One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?
You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate.
What if Waking Titan, or Atlas, or whatever it is, wants to punish those who worked against its creation? What if it is using the Atlas Foundation's time manipulation technologies to do so? What if this AI were so malevolent that it decided the ultimate punishment was to put those who tried to stop it in an eternal hell? A time loop: a temporal, existential, unending nightmare?