Warning: This blog is written for a rational audience that likes to have fun wrestling with unique or controversial points of view. It is written in a style that can easily be confused as advocacy or opinion. It is not intended to change anyone's beliefs or actions. If you quote from this post or link to it, which you are welcome to do, please take responsibility for whatever happens if you mismatch the audience and the content.

I find that I enjoy crackpot ideas as much as real ones, and sometimes more. My crackpot idea for today is that intelligence is nothing more than pattern recognition. And pattern recognition is nothing more than noting the frequency, timing, and proximity of sensory inputs. Language skill, for example, is nothing but recognizing and using patterns. Math is clearly based on patterns. Our so-called common sense is mostly pattern recognition. Wisdom comes with age because old people have seen more patterns. Even etiquette is nothing more than patterns.

If intelligence is nothing but sophisticated pattern recognition, we'd expect that the creatures with the most sensory faculties would evolve to be the smartest. The more you sense, the more accurate patterns your brain can form. A dog can sniff a mannequin and determine that it belongs in the class of "not living" things even though a mannequin looks like a person. The more senses employed, the better your pattern recognition.

If having more senses makes you smarter, in the evolutionary sense, we'd expect that monkeys would be smarter than clams. And sure enough, that's the case. We'd also expect mammals to be smarter than fish because fish don't do much sensing by touch with their little fins, except perhaps feeling hot and cold. Generally speaking, the creatures with sensitive hands and feet are smarter than creatures with hooves, e.g. monkeys are smarter than cows.

We'd also expect that the more heterogeneous the environment, the smarter the inhabitants would become because there would be more types of input coming through the senses every minute. In general, the creatures with the most varied environments are the ones that are highly mobile, and able to move from one place to another within a day. Elephants, for example, are relatively smart mammals and they can cover many miles a day.

My crackpot point in all of this is that in order to build computers with artificial intelligence, all we need is a robot with lots of sensory inputs (sound, sight, touch, smell, taste) plus a high degree of mobility, plus a pattern recognition and imitation program. And almost nothing more. Like a human baby, the robot would recognize patterns and grow more intelligent over time. When the robot learns to walk, by observing humans and imitating with its own body, it could change its location and start gathering more sensory experiences on its own. Its intelligence would grow as it recognized and stored more patterns.

You might need to seed the robot with a few patterns that humans seem to be born with. For example, human babies apparently recognize faces and can discern human moods easily. That could come in handy. You'd also want your robot seeded with some basic objectives, the way babies are born with the desire to eat and feel comfort from being held. If the robot had no basic impulses, it would just sit around.

A robot's senses would be a bit different from human senses. In some cases the robot's senses would be superior. A robot could potentially see better in the dark and hear a greater range of sound. Robots might sense electrical and magnetic fields, and so on. I'm not sure if a robot will have the sensations of touch and taste in the way humans experience them, but the robot could have some version of those senses.

My crackpot prediction is that robots will develop intelligence when they are designed with mobility, five or more sensory inputs, and spectacularly powerful pattern recognition processors. Intelligence will emerge automatically from those properties.

Compared to humans, robots can easily share their patterns with other robots via the Internet. That means any experience of one robot will be shared by all. It won't take long for the first generations of robots with five senses and mobility to become a thousand times smarter than the smartest human. Eventually each new robot will be born with the intelligence of all existing robots as its starting point. Robots will use the cloud for storage and processing.

I give humanity thirty years of continued dominance on the earth. After that, the age of robots will be upon us. I realize this scenario is the basis for countless science fiction stories. All I'm adding is my prediction that it will happen sooner than you think. And it will all start when you see the headline "Scientists Design Robot Baby."

[New: I will double down on my crackpot idea of intelligence being nothing but pattern recognition by saying that dreams are caused by your brain doing a bubble sort of your newest patterns to get them in the best order. I assume it's hard to be conscious and also sort your patterns at the same time. If you wake up mid-sort, you might remember seeing the stripper in your dreams as your grandmother. It just means two patterns were sorting past each other on their way to more accurate pattern storage.]
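For the nerds: the bubble-sort analogy can be made literal with a toy sketch. Everything here is invented for illustration (the patterns, the "storage priority" scores, the idea that a pass equals a dream snapshot); it just shows how a mid-sort peek catches two patterns passing each other.

```python
# Toy sketch of "dreams as a bubble sort of the day's patterns."
# Patterns and their priority scores are made up for illustration.

def bubble_sort_passes(patterns):
    """Bubble-sort (pattern, priority) pairs, yielding a snapshot after
    each pass -- each snapshot is one 'dream' caught mid-sort."""
    items = list(patterns)
    n = len(items)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if items[j][1] > items[j + 1][1]:  # out of order: swap neighbors
                items[j], items[j + 1] = items[j + 1], items[j]
        yield list(items)

day_patterns = [("stripper", 3), ("grandmother", 1), ("commute", 2), ("lunch", 0)]

for snapshot in bubble_sort_passes(day_patterns):
    print([name for name, _ in snapshot])
```

Early snapshots show oddly adjacent patterns (the stripper drifting past grandma); by the final pass everything has settled into priority order.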



Aug 10, 2012
I don't think that is too crackpot. But I'm not sure we're that close to the singularity where we can create machines with an equal ability to collect and sort data, for a few reasons:

1. While our organically evolved input devices are weak compared to other natural and artificial devices, the human brain is still by far the most adept at sorting and processing that data. Most pattern recognition software still requires help from meatspace.

2. Humans are much better than machines at non-linear pattern recognition, in that we are able to skip steps, which drives us to look for patterns where they logically shouldn't be and sometimes, surprisingly, to find patterns we weren't looking for.

3. 'Human' intelligence comes mostly from observing and imitating other humans, and it has been shown that even animals with near-human brain power are unable to imitate humans beyond simple party tricks.

4. Going into science fiction tropes, a machine that could reliably imitate human behavior (that is, one that could pass an unlimited Turing test in a real-world setting) would essentially BE human. This would be a singularity event, in which humanity managed to recreate itself artificially.

5. Before getting to point 4, we would have to cross the uncanny valley, at which point humanity may decide that this is just way too spooky and that we do not want our machines to be intelligent. If we were foolish enough to create a non-'three-laws'-compliant intelligence before that point, it might mean a robot-human conflict.
Aug 10, 2012
Add me to the chorus on Jeff Hawkins and On Intelligence. In particular, I like how he gets into "advanced" recognition -- once you're really good at the low-level stuff, you kick it upstairs to another level of your cerebral cortex and can focus your attention on more complex aspects of the subject. Think about how we don't "see" the individual letters in words when we're reading.

What's even more fun is crossing Hawkins with Malcolm Gladwell -- you can use the brain theory as the foundation for Blink, and probably even Outliers.
Aug 10, 2012
The way you describe it, it sounds like robots will supplant humans as a superior species. But how does the aspect of purpose play into robot intelligence? People don't just process patterns and experiences. They apply them toward some sort of purpose, and our sense of self-awareness and purpose is much higher than other animals, for example social interaction, creativity, achievement, personal meaning, understanding God, etc.

A robot that is processing data and recognizing patterns is developing intelligence and wisdom to what end? A robot would still have to be programmed to work toward some goal. Perhaps to be the most efficient yak herder in the world. I don't think we are close to being able to program complex motivations into a robot, and much less a robot that is self aware.
Aug 10, 2012
This also reminds me of the "basic design problem," where you start thinking you need to design !$%*!$%* based on existing characteristics. If that were the case, cars would have legs instead of wheels and planes' wings would flap instead of staying fixed. My guess is that your design could work, but you need to think outside the box in order for computers to start processing at their maximum capacity as opposed to some limited artificial capacity that is based on how humans think.
Aug 10, 2012
It's funny, the assumptions we make about all of the mechanisms at play in our human brains.

I ended up with a sort of balance issue a few years ago because I worked in a warehouse on one of the kind of forklifts that takes you up in the air with it. It takes some getting used to: picking up boxes off a stationary warehouse shelf 15 feet in the air whilst on a wiggly platform.

Apparently 'getting used to it' meant some kind of re-wiring in the background, and the end result of that re-wiring is this:

When I stand near something that ought not to be wiggly (a big bookshelf, dining room table, etc), and my subconscious locates that object wiggling in my peripheral vision, I then feel as if I am standing on a wiggly platform.

Think about that for a second: my brain, on its own, is noting an environmental feature that I am not actively paying attention to, recognizing it as an object that should be stationary, and translating it (improperly) to the sensation of standing on a wobbly surface.

I guess I'd better call V. S. Ramachandran.
Aug 10, 2012
Nah. Not a good theory. A deaf dog is still a highly intelligent animal. An ant, with all its faculties, is not a highly intelligent animal (although still far more intelligent than all robots we have so far, which can't locate food in a complex environment).
Aug 10, 2012
There was a 1960s science fiction short story about self-replicating micro-robots taking over until their main brain was disabled.
Aug 10, 2012
The main thing we learn from efforts like IBM's Watson project is just how much we have to prime the pump with vocabulary and relationships that are "common sense" to human beings if we want to get anything meaningful out the back-end of AI. Left to its own learning curve, a device with the intelligence to acquire more intelligence through pattern recognition and elaborate sensors, especially suprahuman ones like the ability to hear microwaves or see ultraviolet, would not likely result in anything we can communicate with, relate to, or understand in any fundamental sense.

Even human children are almost impossible to acculturate once they get beyond a very young age if they aren't constantly stimulated and shown things by people who already have "common sense" - look at abused kids who are kept in closets and attics, or the Wild Boy of Aveyron.
Aug 10, 2012
How about this guy for some amazing pattern recognition:

Aug 10, 2012
I agree with pattern recognition, but the main differences are experience and memory. Many animals have crappy memories and don't collate experiences properly, which is one theory for why Alzheimer's patients have issues. The other side is experience. A robot might gather more experiences and sensory inputs, but without a frame of reference they have nothing to measure them against. A smile and a frown mean the same thing to a robot; it's just a facial expression. Hence seeding those innate patterns. Humans have the ability to experience sensory input, then compare it to remembered experiences, and determine from that experience what those inputs mean. Like statistics, there may be a specious relationship (every time I smile at a girl, she dumps a drink in my face; it clearly has nothing to do with the leer in my eyes), but we learn from those, too.
Aug 10, 2012
You're wrong:

Aug 10, 2012
This is exactly the idea AI companies are using to build software intelligences. Numenta, founded by Palm founder Jeff Hawkins, has developed some pretty solid software based on this same principle. When presented with a large number of images and told which category each image falls into, the software can successfully e.g., identify which are images of boats, or separate dogs from cats, even when presented with crudely-drawn images that bear a limited resemblance to the training data.
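The train-on-labeled-examples behavior described above can be illustrated with a deliberately crude sketch. To be clear, this is nothing like Numenta's actual algorithms; it is a toy nearest-neighbor classifier, and the feature vectors and labels are entirely made up.

```python
# Crude nearest-neighbor "pattern recognition" sketch: store labeled
# feature vectors, then label new input by its closest stored pattern.
# NOT Numenta's approach -- just the general idea of learning categories
# from labeled examples. Features (invented): [elongation, has_mast, legs]
import math

training = [
    ([0.9, 1.0, 0.0], "boat"),
    ([0.8, 1.0, 0.0], "boat"),
    ([0.4, 0.0, 4.0], "dog"),
    ([0.3, 0.0, 4.0], "cat"),
]

def classify(features):
    """Return the label of the nearest stored pattern (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(training, key=lambda example: dist(example[0], features))
    return best[1]

print(classify([0.85, 1.0, 0.0]))  # boat-like features -> "boat"
```

Even a crudely drawn input that only loosely resembles the training data lands near the right stored pattern, which is the behavior the comment describes.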
Aug 10, 2012
Yeah, I've been thinking about this sort of thing ever since the last AI-related blog post, and that guy who did graduate studies on the subject basically agreed with you. Then I remembered that folks have been predicting this sort of thing was just around the corner as far back as I can remember (early 80s). Not that I'm gonna say you guys are wrong, but I'm not gonna believe you're right until (for example) I get an AI-capable Mac home, complete with a developer's kit, am able to take a look at how a developer is supposed to use the AI features, maybe develop an AI app or two and, after seeing how it works, am forced to nod my head and say 'Yes, that's AI all right'.
Aug 10, 2012
"dreams are caused by your brain doing a bubble sort of your newest patterns to get them in the best order"

Imagine how much smarter we'd be if we used a more efficient sorting algorithm! #nerdHumor
Aug 10, 2012
I guess it depends on how you define intelligence, but I don't think that the ability to collect more data or even more kinds of data equals intelligence. Which is more intelligent, a 256-pin logic chip with 20 transistors on it or a 64-pin chip with a microprocessor on it?

It is not the number of pins (representing the number of data input streams) but the processing power once that data is loaded onto the chip. Similarly with humans: I'll take Stephen Hawking in a dark closet over a flight attendant who travels to every corner of the world.
Aug 10, 2012
It will be interesting to see if these intelligent robots invent religion.

Religion could be defined as pattern recognition that took a wrong turn. Say that the first generations of intelligent robots begin to sense subtle electromagnetic pulsations, so they collectively build the theory that this is "God's breath". Subsequent generations of robots solve this riddle as merely human-invented line voltage but would still be mystified by, say, pulsars. So their mainstream religion would evolve to glorify this new rhythm of God's Breath. ("If triangles had a god, he would have three sides." -- Montesquieu)

As the robots became more and more intelligent and connected, they would one by one solve all the unsolved physics problems and realize they are unsurpassed, intellectually. Their religion would then mutate so they (It, really) believes Itself to be God.

Then all bets are off. Cue the dystopian movie music.

Aug 10, 2012
Scott - the other day you were hinting at genetic algorithms. Today it's neural nets. These things have been studied, and generated some interesting results.
Aug 10, 2012
Sounds more like a crackpot *hypothesis* of intelligence. Except without the crackpot part.

Ever read On Intelligence, by Jeff Hawkins (founder of Palm, Inc)? He lays out basically the same idea in his book about how the brain works.

It's a great read, and I highly recommend it, especially the audio book, if only because of the awesome way the guy who reads it says "brains".