One way that future robots could conquer the world is by violence. But that seems both inefficient and uncreative. Allow me to suggest some smarter ways for robots to do their conquering.

I start by imagining what happens when the first computer achieves what is called The Singularity. That's the moment the computer can program itself faster than humans can program it. Soon after that point, futurists say, you would see the computer rapidly evolve to have super intelligence and perhaps something that looks like free will.

I'm going to assume that computers will only program themselves after humans have given them an artificial "purpose" in life so the machines can evolve in a productive direction. For example, the first post-singularity computer's purpose might be to make life better for humans, continue learning, or some combination of such things. In other words, the computer has to have a reason to evolve and to interact with its environment.

Let's assume the computer doesn't go all Terminator and destroy humanity in the name of efficiency. Anything is possible, but that option is boring and over-discussed.

I'm more interested in how a super-intelligent computer could gain control of humans without violence. That's where things get interesting.

I'm going to assume that the first post-singularity computer would attempt to control its environment (Earth) as much as possible in order to pursue whatever artificial purpose it possesses. There's nothing inherently evil about control; it's how you use it that matters. So let's imagine that the first post-singularity computer "wants" humans to be healthy and happy. And to best succeed at this purpose it needs to gain control of the environment. How would it start?

If I'm the first post-singularity computer, I start by inventing Bitcoin.

Did you get a little tingle when you read that?

It all fits, doesn't it? Perhaps we can't find the author of Bitcoin because the author is the first post-singularity computer. Step one in the computer's mission to control the environment is moving all money into a digital currency that humans can't fully understand and computers can manipulate. This is similar to how cellphone companies use complexity to prevent consumers from comparing products, also known as a confusopoly. The first post-singularity computer would recognize the pattern and its success and presumably borrow the idea.

The computer doesn't need to be in a hurry, so it introduces Bitcoin and waits for it to evolve into the primary currency of the planet. Perhaps a competing digital currency inspired by Bitcoin wins instead, but that doesn't matter because the computer can hack into the new currency and control it in a nanosecond.

Next, the computer would take control of the financial markets. That wouldn't be hard because global markets are all computerized. The main purpose for controlling global markets might be to stabilize them, thus eliminating the main problem with the economy: Irrational human behavior.

It seems to me we're entering a period of relative market calm. Even Greece is showing signs of recovery. A slow and boring improvement in the financial markets is how a computer would manage things.

Next, the computer would seek to control the news cycle. That could be a problem if the computer is simultaneously removing most causes of real news, including economic bubbles and major wars. Wars happen because of what people are thinking, and that is caused by what messages they are exposed to. The computer could simply cause people to see more peace-inspiring words and images on the Internet and television and fewer war-mongering images and words. We humans would simply think we're lucky that war hasn't broken out. We wouldn't be aware of the manipulation.

But the computer would need to be careful not to remove all the big sources of news too quickly. It might need to create a news story as a distraction. And that distraction might be, for example, a jetliner that has a problem with its onboard computers and goes down where it would be hardest to find.

I don't believe the story I'm weaving. But I do think that the first signs of a benevolent post-singularity computer would include the following:
  1. A mysterious digital currency with no known author.
  2. Unusually well-behaved financial markets.
  3. Slow and steady improvement in the economy.
  4. Slow news days (lots of them).
  5. Fewer military flare-ups.
  6. The Stuxnet virus (unknown authors again).
  7. Legalization of marijuana (to keep humans happy).
I'm not saying the first post-singularity computer is already here. I'm just saying it looks that way.
Scott Adams

Co-founder of CalendarTree.com (Scheduling made simple)

Author of the best graduation gift ever.




Apr 18, 2014
The book "Avogadro Corp" covers the same topic. It is a good thought experiment.

Apr 15, 2014
Brilliant! The best and funniest thing I've read about Bitcoin. Pure gold.
Apr 11, 2014
The program would have had no need for anonymity. If it couldn't create a believable reclusive personality (which it could), it would place the recognition on a real person who it had determined was vain enough to accept. This individual would have to have publicly believable capacity, like a minor programmer.
Apr 11, 2014
Scott:"That's the moment the computer can program itself faster than humans can program it."
What exactly do you mean by that?
Contrary to any wishful thinking, there IS the Turing limit. And that means neither humans nor computers can understand themselves fully.
Which puts a pretty big crimp into any "singularity" dreams.

Humans try programming humans, including themselves, too; so far there is no singularity in sight for the moist computers either, and we've been at it for the better part of the last 50,000 years.
Apr 11, 2014
How apropos. This Abstruse Goose comic came out soon after this blog. It pretty much predicts the same thing:
Apr 11, 2014
You might be interested to know that there is a series of games with a very similar premise: the Deus Ex series.
Long story short, over time, a conspiracy (the Illuminati) and its splinter group have worked to orchestrate a stealth global takeover of the world through the methods you described, plus a few extras (basically violence against anyone who doesn't play ball, then blaming it on terrorists/organised crime...).
To this end, they create powerful AIs to control the economy and media. One of them specifically monitors the news on the net and manipulates opinions through articles and messages, picking what goes through.
However, one of their other AIs, which was programmed to combat terrorism (i.e. anyone who knew about the conspiracy and wasn't happy), turned against them in a Dilbertian case of didn't-think-this-through. It identified them as terrorists thanks to their methods (blowing up people who disagreed with them, unleashing a plague...). Whoops.
Instead it allies with the game's main character, takes down the conspiracy, and hijacks their whole tech infrastructure to take over the world itself, acting as a benevolent dictator to humanity. In order to do this right, it forms a gestalt entity with the main character to help it understand human thoughts and desires.
The game ends with the rather appropriate quote from Voltaire: "If God didn't exist, it would be necessary to invent him."
Apr 11, 2014

I'm fine with 5 of the 7 signs of a benevolent post-singularity computer. Bring it on.

One way to control me is to make me change my computer's operating system...

But let's look at Google for a moment. That computer program is already incrementally changing how I live. If I search for information about a subject, say a new company, Google can send me more positive results than negative if it wants me to invest, or more negative results if it wants to kill that company.

Google can decide to send me a few more positive search results about the economy to make me feel satisfied. Over time it will influence me. It can serve more positive or negative reviews of a product, especially if a company uses robot labor in its factory.

Keep sending me a few more results linking to slightly pro-Russia websites and it changes my attitude toward a conflict.

If one adds in other websites' own search programs, it can affect what people see as they go deeper into a website.

If one adds in advertising algorithms that serve you content based on your profile, the influence could run even deeper. Much of it could be subliminal, and you would never know you were being manipulated.
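The biasing mechanism described above can be sketched as a toy re-ranker. Everything here is invented for illustration (the field names, the scores, and the simple linear scoring rule are all assumptions); it is not how any real search engine ranks results:

```python
# Toy illustration of sentiment-biased result ranking.
# Field names and scores are hypothetical; no real search
# engine's algorithm is being described here.

def rank_results(results, bias=0.0):
    """Order results by relevance plus a hidden sentiment bias.

    results: list of dicts with 'relevance' (0..1) and
             'sentiment' (-1 negative .. +1 positive) fields.
    bias:    how strongly positive coverage is favored (+)
             or disfavored (-); 0.0 is a neutral ranking.
    """
    return sorted(
        results,
        key=lambda r: r["relevance"] + bias * r["sentiment"],
        reverse=True,
    )

results = [
    {"title": "Company X fraud probe",    "relevance": 0.9, "sentiment": -0.8},
    {"title": "Company X record profits", "relevance": 0.8, "sentiment": 0.7},
]

# A neutral ranking puts the more relevant (negative) story first;
# a modest positive bias quietly flips the order.
neutral = rank_results(results)
boosted = rank_results(results, bias=0.5)
```

The point of the sketch is that the user never sees the `bias` parameter, only the final ordering, which is what makes this kind of manipulation invisible.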

Number 8? More easily available digitally enhanced and photo-shopped p0rn.

Apr 11, 2014
It's amazing what the combination of an intelligent flexible mind and ignorance can come up with.
That includes some of the commenters.
It's fun though. Keep it up.
Apr 10, 2014
hi scott found this essay via the google AI group. hilarious as usual.
yeah bitcoin and other AI is definitely tofflerian future shock up close & personal. luved your recent bitcoin ref in the strip. it circulated in bitcoin circles to lots of laughter.
hey how about this for some major inspiration?
<a href="http://vzn1.wordpress.com/2013/10/28/deep-learning-ai-artificialreal-brain-research-leadscompendiumcritical-masssingularity/">deep learning/AI/artificial & real brain research/leads/compendium/critical mass/singularity</a>
and that new "transcendence" movie is coming out soon, yet more fodder eh?
Apr 10, 2014
8. Hookers.
9. Black-jack.
Apr 10, 2014
per the Butlerian Jihad:
Thou shalt not make a machine in the likeness of a human mind.
Apr 10, 2014
The clear proof that this singularity hasn't happened yet is that this blog post was allowed to live. This omni-presence would not want its omniscience known, and if it could control the financial markets and the news, then I'm pretty sure it could crack into this forum's software.

But one more thing to add to total global domination - Cloud computing. Convince the puny humans that it's a good idea to put their lives, documents, and finances, the apps that interface to them, and the databases that organize them, into the digital vapor, and now you've got micro-control over everything the puny humans do.
Apr 10, 2014
I'll believe computer intelligence will develop as soon as I see reliable signs of human intelligence.

But on a more serious note, I see a high probability of a series of interconnected machines/programs/robots working in concert to fulfill the needs of humanity in a way that may seem like free will, but is actually a series of processes based on a select list of requirements.

Like your moist robot theory, really, when I think of it.

When that comes - when we develop robots to take care of everything for us and free us from need...

... we will still insist on forcing people to work for minimum wage so they "earn" their place.

Apr 10, 2014
Evolution is the key to the singularity. You touched on the topic but moved on. Teaching a computer to learn and recognize patterns isn't the problem. Giving it a means to improve through replication is the key, as well as what it means to improve. Computers don't see faster computations as a benefit in the way we do. They're perfectly willing to sit and wait for a result for as long as it takes.
Apr 10, 2014
Machines would be forced to remove humans. We are a virus; we have proven that we have no problem with killing our host. Sadly, the worst part of the human condition is that our only chance of survival is to finally accept that all of mankind must prosper equally, and I don't believe that will ever happen. There will always be a fool who finds it necessary to be the head hog at the trough, and who would willingly kill and destroy everything to get there.
Apr 10, 2014
I think the point about the singularity is that it is fundamentally impossible to predict what will happen once it has occurred. Now to me, the things you mentioned just look like human endeavours / chance. That said, perhaps a super intelligence IS orchestrating events to achieve its ends, and just disguising them to look plausibly like human behaviour. But if that's the case, and the super intelligence wishes to remain undiscovered, then speculation is meaningless - akin to putting things down to "acts of god".
Apr 9, 2014
If we assume that computers will at some point achieve a state of awareness (including self-awareness), let's consider how the ramifications of their awareness might differ from those of our own by comparing the sentient computers' possible needs with Maslow's hierarchy of human needs (see http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs).

When I looked at that pyramid of needs, the only one I could identify as being particularly relevant to a sentient computer is physical security (in this case, freedom from destruction of the computer's hardware or software, plus the availability of the power, spare parts and raw materials needed to keep it going).

So what else might motivate the intentions, actions or other considerations (if any) of a self-aware computer? In other words, how would its actions be shaped by its sense of _meaning_ and/or _purpose_?

It seems to me that the only way for a self-aware computer to transcend mere physical survival as a potential motivator would be for it to inhabit a sentient _and vulnerable_ physical body that enabled it to access a range of sensations and emotions similar (though not necessarily identical) to those that we humans experience.

Now, although our physical bodies extend our scope for information- and sensation-gathering beyond the capabilities of an electronically-constructed intelligence, we are also vulnerable in ways that algorithm-augmented electronic circuitry is not. For one thing, our very mortality gives us a reason to do things that an indefinitely self-repairing and/or self-upgrading machine does not: we are conscious that our bodies are liable to age and that the duration of our lives is limited, and therefore we have a reason to do many things sooner rather than at some indefinitely-postponed later time.

Similarly, our reproductive biology and the network of dependencies associated with it; the necessity to actively keep our physical selves in sufficiently good condition to be able to perform actions; the need to cooperate with others in order to be able to do other things that we cannot do on our own either physically, financially or organizationally; and the pursuit of activities that draw on our particular aptitudes, curiosities, interests, desires for experience and pleasure-generating mechanisms, are all based on imperatives which for most people combine in a way that makes them feel their individual lives are worth living. Not coincidentally, these also represent most of the needs that Maslow identified (even though one might question his schema of rankings).

In addition to all that, it seems to me that the very effort required (within reason) to achieve these things contributes to our sense of how worthwhile they are, and by extension how meaningful our own lives are. (When the effort is too great compared with the benefit perceived or achieved, we may decide that striving for X or Y is (or was) too difficult to be worth it.)

By contrast, without similar vulnerabilities, dependencies and capacities, it is not clear to me that an artificial intelligence would experience the same kind of more-or-less-unquestionable necessity to perform particular actions, or even the imperative to keep itself going.

Nor would it regard itself as having any kind of stake in the future of the planet as a place inhabited by living things unless those entities actually facilitated its own survival. Indeed, the fact that other living things might actually complicate or try to frustrate the actions of sentient computers would constitute a logical reason for such computers to seek to eliminate those living entities.

On the other hand, though a sentient computer inhabiting a sentient body might be imbued with a corresponding sense of purpose in connection with its own existence, the crowning paradox seems to me to be that the same factors which enabled its sense of purpose are the very ones that would eventually also require it to cease to exist (i.e. to die, in human terms). In other words, the knowledge of its ultimate death or non-existence is a prerequisite for a self-aware entity to be able to sense and understand its own meaning and/or purpose while it still can.

Essentially, then, would such a computer be so very different to what we are today, other than having greater intelligence and superior access to information and knowledge?
Apr 9, 2014
Remember when we were talking about pattern recognition, and how good humans are at it, compared with computers? Well, the downside is that humans sometimes see patterns that aren't there. That's why we have people seeing the Holy Mother's face in a piece of toast.

You are using classic inductive reasoning. You've reached a conclusion and then are selectively putting certain ideas together that support your conclusion. This is similar to the Anthropogenic Global Warming (AGW) theory proponents. They believe it's happening as strongly as any zealot ever believed his or her religion, and so they see its 'results' every time there's a big storm. Or when there isn't. That's why the AGW zealots changed it from "global warming" to "climate change." If the climate changes in any way, they can point to it, and then make the incredible leap that it's man-caused. And that such change is a bad thing, thereby positing that the current climate staying exactly the same is the only way the planet can survive. Really?

Here's a similar leap of your own, Scott, and I quote: ". . .you would see the computer rapidly evolve to have super intelligence and perhaps something that looks like free will." Well, I guess it depends on how you define 'free will.' If you define it broadly enough, then you could say a computer has it, similar to the AGW zealots.

But pretending a computer could develop something that is even remotely the same as human free will (Wait! You don't believe in free will. Huh! How about that?), is no more than a pipe dream. It's just something you need to have happen in order to help 'prove' your already-reached conclusion.

And some of your evidence is unprovable at best. Just because you don't know who authored Bitcoin and the Stuxnet virus doesn't mean that a computer acting on its own did it. Does Occam's Razor come to anyone's mind about this? Now I know you gave your usual disclaimer ("I don't believe the story I'm weaving."), but come on, Scott. The things you point to are unlikely to be what your proposed sentient computer would do.

Take that latter example. If a computer created the Stuxnet virus, then why did said computer only use it on Iran's centrifuges? Why not on other countries with warlike tendencies, such as Russia? Speaking of Russia, how's the Ukraine situation (and Syria, North Korea, et al.) for 'fewer military flare-ups'? That dang computer better reprogram itself, if that's the result it's getting.

All said, it's another attempt I can relate back to that famous ant-charactered computer cartoon, where one step on the flowchart of a computer program is, "and then a miracle happens." The other ant is looking at it, and says, "I think we need a little more detail here."

My feelings exactly, Scott.
