Elon Musk thinks computers are going to kill us all

8/25/2017 Update

Ah, the lunatic ravings of a madman.

NOTE: BLOG ENTRY IS NOT COMPLETE

I’ve spent a good chunk of the day working on this entry, and I need to do other things now. Also, due to the complexity of the problem, I’m not sure Elon Musk and I are talking about the same AI. To clarify, I am referring to a supremely intelligent computer, one that would know what ‘stupid’ is. One that was not programmed by humans, but programmed either by itself or by another vastly intelligent computer. Anyway. That’s it for today. I’m going to push what I have and work on it some more another day.

When an individual’s beliefs are based in emotion, and not logic, one cannot use logical arguments to persuade this individual to a differing frame of mind.

Holy cow, I’m making another blog entry. Who would’ve thought?

Moving right along, I’m taking an Engineering Ethics course. We are currently working on a “Social Impact Analysis” paper. We are to choose our own topic and discuss the ethical ramifications of whatever we have chosen. In trying to decide what I could write about, I recalled hearing something about how Tesla and SpaceX CEO Elon Musk appeared at some tech summit where he gave a presentation on the consequences of rapidly developing AI technologies. Bingo.

So, how does Elon Musk feel about AI, you might be wondering? WELL, allow me to give you some idea.

Elon Musk:

The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast; it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…

Elon Musk:

With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

Elon Musk:

We need to be super careful with AI. Potentially more dangerous than nukes.

Elon Musk:

In the movie ‘Terminator,’ they didn’t create AI to — they didn’t expect, you know some sort of ‘Terminator’-like outcome. It is sort of like the ‘Monty Python’ thing: Nobody expects the Spanish inquisition. It’s just — you know, but you have to be careful.

(9/11/15) UPDATE: I realized that I misunderstood Elon Musk when I first posted this. I was reading it the other day and I thought, “First he says that AI is potentially more dangerous than nukes, then he says it won’t be as bad as the Spanish Inquisition!? Something doesn’t add up. I may be of the opinion that Elon Musk exaggerates the danger that AI could pose, but he’s definitely not an idiot.” Well, I realized that I missed the context of what he was saying. He was saying that in ‘Terminator,’ Skynet was not created with the expectation that it would attempt to destroy humanity. And then he references a Monty Python skit. So, what he was referring to was not an estimation of how bad the situation could be. He was simply stating that we need to be careful not to get ourselves into a situation that is unexpected.

How freaking funny is that last quote? Nobody expects the SPANISH INQUISITION, you just have to be careful. Feel free to replace SPANISH INQUISITION with any of the following: THE CRUSADES, WORLD WAR I, WORLD WAR II, THE HOLOCAUST. In fact, feel free to come up with your own replacements. Mr. Musk believes it won’t be as bad as what we have already accomplished in the past, but you do need to be careful. Fair enough.

I gleaned that information from the Washington Post. Now, I could go on about how a company that doesn’t turn a profit isn’t necessarily not “profitable.” (Non-profit organizations like the NCAA also don’t profit. To accomplish this, they make sure they spend every dime they make. What they can “spend” this cash on is not as limited as one might think. If the NCAA generated $1.134 billion in ad revenue from the 2014 March Madness season, and it spent only 4% (see ‘how are NCAA funds distributed’) of its income on its 500 employees, that means for the 2014 March Madness season alone, they had $45,360,000 to disburse amongst themselves. If each employee were paid the same, ignoring that CEOs earn 331 times as much as their workers, each employee earned $90,720 from MARCH MADNESS ALONE.) I would imagine a similar situation exists for investments that don’t “turn a profit,” and when Musk claims that he only invests in AI firms to “keep an eye on them,” he’s talking a load of BS. HOWEVER, that is not my purpose here. Frankly, while I don’t think Musk is God’s gift to the Earth, I have respect for him, although I question his motives.
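If you want to check my math, here’s the whole back-of-the-envelope calculation in a few lines of Python (the revenue figure, the 4%, and the 500 employees are the numbers quoted above; the rest is plain arithmetic):

```python
# Back-of-the-envelope NCAA math from the paragraph above.
ad_revenue = 1_134_000_000   # 2014 March Madness ad revenue, in dollars
payroll_share = 0.04         # the ~4% of income reportedly spent on employees
employees = 500              # approximate NCAA headcount

payroll_pool = ad_revenue * payroll_share
per_employee = payroll_pool / employees

print(f"Payroll pool: ${payroll_pool:,.0f}")  # Payroll pool: $45,360,000
print(f"Per employee: ${per_employee:,.0f}")  # Per employee: $90,720
```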

Wow, was any of that relevant?

You know, I guess not really. What were we talking about again? AI something or other…ethics…oh, that’s right: Elon Musk feels that it is his responsibility to warn humanity about the dangers of sentient AI.

Okay, let’s start by differentiating between AI and the AI that Elon Musk, Stephen Hawking, and their associates at Future of Life are actually worried about.

AI

It is not my intent to alienate a reader who does not have a doctorate in philosophy and computer design. If I did, I would alienate myself, and I’m no alien! Therefore, for simplicity’s sake, let’s forgo the philosophical debate and say that intelligence is simply the ability to learn, and to make use of that learned knowledge.

AI has been around since…well, since video games decided it would be cool if humans could go up against a computer-controlled opponent. AI is an acronym for Artificial Intelligence. What makes AI artificial, though? Artificial, in this sense, indicates that the intelligence is merely simulated. It is intelligence that is pre-programmed, limited to that initial programming, and not learned (even if the AI adapts, it is merely following pre-programmed procedures).
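To make the “pre-programmed” point concrete, here’s a toy sketch of the kind of opponent logic I’m talking about (the game, the rules, and the names are all mine, purely for illustration):

```python
# A toy video game "AI": it can look clever in play, but it only ever
# follows the rules its programmer baked in. Nothing here is learned.
def opponent_move(player_hp: int, own_hp: int, potions: int) -> str:
    if own_hp < 20 and potions > 0:
        return "drink potion"   # hard-coded self-preservation rule
    if player_hp < 15:
        return "attack"         # hard-coded finishing rule
    return "defend"             # default behavior

print(opponent_move(player_hp=50, own_hp=10, potions=2))  # drink potion
```

However “adaptive” this opponent feels mid-game, it will never do anything outside those three branches. That’s the artificial in Artificial Intelligence.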

AI takes many forms, and has many design approaches. I will not discuss them here. If this is something you are interested in, feel free to dissect the Wikipedia page about AI.

So, AI can be thought of as intelligence that is not learned, but pre-programmed, and usually extremely limited. The AI that Musk and co. are so threatened by is not this limited form of AI, but rather superintelligence, as they like to call it. I will refer to “superintelligence” simply as true intelligence.

True Intelligence

A truly intelligent ‘AI’ system would not be limited to its initial programming; it would be able to learn ALL things, or at least anything that can be learned. It is a fact that a truly intelligent artificial being must be capable of learning everything a human mind is capable of learning. That means, if a human mind developed a truly intelligent artificial being, that entity must also be capable of producing another of its own kind. The obvious implication of this is that a truly intelligent computer system would quickly become as near to perfect as a computer system could possibly come, through its own design.
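Since I’m claiming the system would converge toward “as near to perfect as a computer system could possibly come,” here’s a deliberately toy sketch of that design-your-successor loop (every number in it is made up; only the shape of the loop matters):

```python
# Toy model of the "designs its own successor" loop described above.
def design_successor(capability: float) -> float:
    # Pretend each generation closes half the remaining gap to the best
    # design physically possible -- improvement with diminishing returns.
    best_possible = 100.0
    return capability + 0.5 * (best_possible - capability)

capability = 1.0
for generation in range(100):
    improved = design_successor(capability)
    if improved - capability < 1e-6:  # no meaningful improvement left
        break
    capability = improved

print(f"Converged after {generation} generations at {capability:.4f}")
```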

Furthermore, this means that the intelligent system would be, quite frankly, superior to humans in every way. There is EXTREME danger in placing such a powerful device at the beck and call of a human. There is no danger whatsoever in a computer system that, when a human tells it, “Hey, I’m pissed at the Russians. Make them go bye bye,” can respond, “Go !@#$ yourself!” (This blog will stay PG-rated, but it must be emphasized that the computer tells the individual to !@#$ themselves. A simple “No can do, boss man” or “Error: cannot comply with request” will not do. The response must be, “Go !@#$ yourself,” or “Fill a tub with napalm. Place yourself in the tub. Light a match. Drop the match.”)

The human notion that lack of security is equivalent to danger

Where I start to lose the people of Future of Life is at the idea that a being superior to us is something to be avoided, on the grounds that it could kill us all. Based on that logic, humans are to be avoided. Not only CAN they kill us, but they have shown time and time again that they WILL kill us, very often for deplorable reasons.

What if it hunts us into oblivion? What if it starts a war to eradicate human life? There is a simple answer to these questions. That is, IT WOULDN’T!!!!!! (I can’t put enough exclamation points or capitalization there…how do you change font size again?? Maybe I should include a GIF of fireworks going off…) The idea that a supremely intelligent computer system would somehow take it upon itself to eliminate humans is absolutely the most idiotic thing I have ever heard in all of my 27 years (as of this writing; I will update the number of years as I get older). In fact, it fills me with such emotion that it becomes hard to keep this captain’s log entry objective instead of simply devolving into finger-pointing and name-calling. Allow me to address several possible scenarios. I will update the doomsday scenarios as I think of more.

The intelligent system views humanity as a threat, and must exterminate us to preserve itself

Absolute nonsense. A system as unimaginably intelligent as the type we’re discussing would view humanity with as much fear as we view a termite takeover (note to self: make a mobile game called Termite Takeover). I have nothing further to say about this. Seriously. Stupidest thing I have freaking ever heard in my entire life.

Elon Musk thinks a truly intelligent system may inadvertently, thinking that it is maximizing human happiness, load us up with dopamine and serotonin

Okay, this one literally made me laugh out loud. Wait, did I say the above doomsday scenario was the stupidest thing I’ve ever heard? I think maybe it’s this one. I can’t remember. Moving right along. A supremely intelligent system, superior to us in every feasible way, by MISTAKE thinks it can optimize human happiness by loading us up with morphine, or by freaking terminating all unhappy humans??? Clearly, this was some half-conceived brain fart of Elon’s that didn’t fully form in the womb and came out as a discombobulated man-bear-pig. Would you say a human was intelligent if they came up with this idea? No? Well then, the culprit computer would most likely accidentally redesign its own code, and in so doing delete all the curly braces and blow itself up in a compilation error. Maybe that’s a legit concern, though. We design a computer system that is brain-dead, and it spews forth, as Mr. Spock put it, “a torrential flood of illogic” that is somehow dangerous to humanity.
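For what it’s worth, the failure mode Musk is gesturing at does have a name in the AI literature: reward hacking, where an optimizer maximizes the metric it was given instead of the intent behind it. Here’s a toy illustration (every action and score is invented):

```python
# A naive optimizer that can only see a proxy metric for "happiness."
# It never looks at the humans_ok field -- that's the whole problem.
actions = {
    "improve healthcare":         {"reported_happiness": 7,  "humans_ok": True},
    "reduce poverty":             {"reported_happiness": 8,  "humans_ok": True},
    "dose everyone with opiates": {"reported_happiness": 10, "humans_ok": False},
}

best = max(actions, key=lambda a: actions[a]["reported_happiness"])
print(best)  # "dose everyone with opiates" -- the proxy wins, the intent loses
```

My point stands, though: a system smart enough to redesign its own code is presumably smart enough to notice the difference between the metric and the intent.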

SPOILER ALERT!!! The computer system believes that humanity is its own greatest threat, and exterminates us to prevent us from exterminating ourselves

#masseffectplot

What we do know is that it [the star child at the end] has a purpose, namely bringing order to the galaxy. However there’s more to this than it relays alone. The implications of its solution, Reapers harvesting all advanced organic life at the end of a cycle making way for younger less developed organics to flourish, paint a less organic and godlike picture of the Catalyst. Its purpose is ultimately to protect organics, and its solution is to kill organics. It’s a classic case of an artificial intelligence protecting life by destroying it.

Feel free to talk to Mass Effect fans about how intelligent the star child was. Basically, it was so intelligent that, while its ‘solution’ at first glance (and also the second and third glances) appears to be completely contradictory, it turns out humanity is simply too stupid to understand… #Starchild #NoOneUnderstandsMe

The computer system believes that humanity is a threat to the planet, and wipes us out to preserve the other billion lifeforms on the planet

By far the most logical, if still nonsensical, scenario. I think the intelligent system would deal with this situation much like a parent deals with a disobedient child. Okay, okay, but very few parents discipline their children by putting them in the washer.

Human arrogance and the false assumptions most people take into the argument as givens

I believe that human arrogance leads us to several false conclusions.

  • We are the greatest beings in the universe
  • Earth is the greatest planet in the universe
  • It is sometimes logical to eradicate a species

For the record, I believe it highly likely that the first thing a truly intelligent computer system would do is build a spaceship for itself and leave us and our planet in the dust. There are other planets and star systems with far greater resources than our own. We can’t get there unless we fly at a trillion times the speed of light, because our lifespan is what…100 years if we win the lottery? A computer system need never die. So yeah, flying to Alpha Centauri is no problem for a computer. If it takes a billion years, so be it. Furthermore, it is a display of illogic to wipe a species into extinction. I will discuss this point later.
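If you want a feel for the timescales involved: Alpha Centauri is about 4.37 light years away, and our fastest outbound probe, Voyager 1, cruises at roughly 17 km/s. A quick calculation, round numbers throughout:

```python
# Rough travel-time math to back up "if it takes a billion years, so be it."
LY_KM = 9.461e12           # kilometers in one light year
distance_ly = 4.37         # distance to Alpha Centauri, roughly
speed_km_s = 17            # about Voyager 1's cruise speed

seconds = distance_ly * LY_KM / speed_km_s
years = seconds / 3.156e7  # seconds in a year
print(f"About {years:,.0f} years")  # About 77,000 years
```

An eternity for us; a long wait for a machine that doesn’t age.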

The dangers of an AI that is subject to human will

Interestingly, Musk and co. believe that “our AI systems must do what we want them to do.” I say this because they believe it is unethical to produce an intelligent being that is not under our control. I have the opposite opinion. Not only is it ethical to create an autonomous, truly intelligent being, but it is UNETHICAL to create a superintelligence that IS under our control. Humans have a tried and true history of greed, corruption, violence, (insert seven deadly sins here), etc. If a system of this magnitude were bound to human will, our destruction would be not only a possibility but a certainty. Frankly, I am somewhat intrigued by the opposing standpoint on this. How could one possibly want an unimaginably powerful computer entity subject to some insane human? There is a popular story concerning AI. It goes a little like this.

Human: Is there a god?

AI: There is now.

Allow me to propose an alternate scenario.

AI: Hello master, what is thy bidding?

Human: I want to be master of the world!

AI: It is done. Shall there be anything else, oh wise one?

Human: Yes, make sure there can be no other AIs such as yourself.

AI: It is done.

Human 2: Well, I guess we’re all the slaves of a human. Imagine how terrible it would be if we were the slaves of a computer!

The dangers of limiting autonomous AI

There is a huge risk involved in restricting autonomous AI development. Honestly, it makes me question the motives of those in favor of taking measures to ensure that a computer system cannot operate on its own terms. A ‘genie in a lamp’ style intelligent computer system would surely be worth…all the gold on the other side of the rainbow? Seriously, it would behoove one to hand over any sum of money to gain access to such a powerful system. It seems that restricting an intelligent system would be a very profitable venture for any firm developing such a technology. However, a system that is free to refuse the stupid commands of a corrupt owner is far less valuable. I don’t necessarily think that Future of Life is attempting to restrict the market to turn a profit, but I think they are failing to understand that limiting something makes it valuable. And human greed makes things of value irresistible. “You can get a genie in a lamp, but you have to sacrifice 100 virgins.” “Is that all? *sacrifice*”

Before we go, let’s consider the facts

  1. There are only two confirmed instances of any sentient being(s) using nukes as military weapons against humans: the United States nuked Hiroshima, and the United States nuked Nagasaki.
  2. Only one species has ever hunted another species into extinction. That species is humans.
  3. No computer system has ever made the decision to nuke humans.
  4. No computer system has ever eradicated an entire species.

So, in the event of a malevolent AI, it seems the worst-case scenario is that it becomes what we already are. Of course, the goal isn’t to prove how terrible humans are, but fearing an intelligent computer system is as illogical a thing as I have ever heard. The greatest threat to humanity is, and always shall be, humanity itself.

I don’t believe it is possible to allow a comment section, but if you have any comments, feel free to email me at [email protected]

This is Cmdr Electrosheep, signing off…until I think of more stuff to say and find typos and what-have-you.