Let's say that someday a young couple buys a robot to help with childcare. Version 1 of the robot isn't much more capable than a video baby monitor that can also rock the cradle and say, "There, there" as needed, perhaps in the mom's voice. Maybe it can do a few more things such as loading and unloading the dishwasher and feeding the dog. It's useful but still limited.

But here's the interesting part. That robot will mature, and get smarter, at about the same rate as the baby in the crib. Robots will have parts of their "brains" inside their bodies and parts in the cloud of the Internet, connected to the same data that all robots are connected to. As any robot anywhere gains knowledge, that knowledge is uploaded to the cloud and available to all robots. The day that one robot learns how to do your laundry, all robots will acquire the ability simultaneously, although some robots might need sensor upgrades for new functions.

If robot makers are smart, all of your robot's parts will be modular for easy upgrades. Do you want your robot to have better sensors in its fingers? Just replace the hand with an upgraded version. While the robot's brain is upgrading automatically every minute, you'll be keeping its body upgraded. Eventually the robot will take care of its own hardware upgrades too.

But that's not the interesting part.

I presume that robots will need something like a "personality" for purely functional reasons, and to make decisions when the data is unclear. And they will acquire those necessary personalities largely by observation. For example, if the humans that the robot lives with are the types who are effusive in praise of others, the robot will pick up that trait. If the family is the snarky/jokey type, the robot will pick up on that too.

Like humans, robots will copy the ways, tendencies, and even biases of the people they associate with the most. The personality factors will be uploaded to the common robot brain in the cloud, but each robot will be programmed to ignore the "average" way people react and instead favor whatever the locals do, and to prefer even more strongly whatever the immediate family of humans typically does. In other words, some robots will be friendly and helpful and some will be total dicks, just like their owners.
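The local-over-global weighting described above can be sketched as a toy calculation. Everything here (the weights, the trait scale, the function name) is a hypothetical illustration, not anything from the post:

```python
# Toy sketch of a robot blending one personality trait from three sources,
# preferring the immediate family over local observations, and local
# observations over the cloud-wide average. Weights are illustrative.

def blend_trait(cloud_avg, local_avg, family_avg,
                w_cloud=0.1, w_local=0.3, w_family=0.6):
    """Return a single trait score in [0, 1] favoring the most local source."""
    return w_cloud * cloud_avg + w_local * local_avg + w_family * family_avg

# Example: a "snarkiness" trait where the cloud average is mild (0.2),
# the neighborhood is moderate (0.5), and the household is very snarky (0.9).
snark = blend_trait(0.2, 0.5, 0.9)
print(round(snark, 2))  # 0.71 -- the household dominates
```

The point of the weighting is exactly the one in the paragraph above: the cloud average barely matters, so two robots sharing one global brain can still end up with very different dispositions.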

The robot owner will be able to "correct" any bad habits the robot picks up by observation, similar to the way parents correct bad manners in their children. In both cases, the robot and the child are going through a maturation process.

I'm getting to the interesting part of the post. No, really, I am.

I think most of you buy into the notion that robots will eventually be as common as television sets, and that the robots will - for purely practical reasons - adopt personality traits by observation. The robot will want to fit in, to be relevant, to be liked, if for no other reason than to increase its market value.

Eventually there will be templates of personalities, created via robot observations and then loaded to the cloud, so that new robots can start with basic personalities that match their assignments. The robot's personality will be free to evolve, based on its own local observations, but it will have a strong starting point. I would compare this to an Englishman who was born and raised in London and then moved to New York City at the age of twenty. He would retain his base English personality for the most part, but over time it would get a New York City edge.
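The template-then-drift idea reads like a simple moving average: start from the template's value and nudge it a little toward each new observation. A toy sketch, with all names and numbers hypothetical:

```python
# Toy sketch of a trait drifting from its template value toward local
# observations, like the Londoner slowly picking up a New York edge.

def update_trait(current, observed, rate=0.01):
    """Nudge a trait score a small step toward each new observation."""
    return current + rate * (observed - current)

trait = 0.9  # strong "English reserve" from the template
for _ in range(200):           # 200 observations of brash New Yorkers (0.2)
    trait = update_trait(trait, 0.2)
# trait has drifted well below its starting point but never fully
# forgets the template -- it only approaches 0.2 asymptotically
```

The small learning rate is what makes the starting point "strong" in the post's sense: the template dominates for a long time before local culture takes over.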

This is a long way to get to my point, that robots of the future will have base personalities (the templates) that will be like time capsules. The robots will have base personalities that were normal during the era that robots matured from tools to intelligent entities. And that will only happen once in the course of human history.

We always hear how each new generation is different from the one before. Sometimes the new generation cares more about money, or trusts the government more, whatever. Someday, and perhaps forever, robots will carry with them the base personalities that were common to the era in which computers first acquired their personality templates.

In time, or through intentional human intervention, we might erase those old personality templates because they are no longer relevant to the times. But I'm guessing the robot personality database by that time will be so complex, and spread across owners, that replacing it or even programming around it will be impractical.

In other words, I predict that children being born today will be the prime influencers of what robot personalities will be . . . forever?

Obviously robot personalities will differ by location and culture. And a new Indonesian house robot coming online will borrow its personality template from Indonesian robots that came before. So the personality templates might be frozen within each culture. That means the Israeli robots and the Hamas robots will not be friendly even if their humans have long since made peace.

It sounds like a trivial worry, that robots might acquire tainted personalities from the past. But I think that unless we design the robots right, it could be a big problem.

One solution would be to give robots generic, cookie-cutter personalities. Some robot manufacturers will certainly offer that option. But I think the natural competitiveness of humans will make us want our robot to be learning and maturing as fast as the neighbor's robot. And we will want our robots to have unpredictable personalities - within safe bounds - because it will amuse the hell out of us.

My solution is that all robots must be raised for their first few years in Minnesota, where everyone is kind and generous. I assume there are other spots around the world in which the culture evolved to be unusually friendly. Part of the value of your future robot is where it was imprinted with its base personality. Someday the Minnesota Series of robots will fetch top dollar.

The Adams Law of Slow Moving Disasters states that humans always solve problems, no matter how large, if they can see them coming. So I'm bringing up this robot personality issue now, just to be safe.

The main question of the day is this: Will robots someday have personalities, and if so, will they acquire them, in whole or in part, by observation?

 

Comments

Mar 11, 2013
I don't worry that robots will one day rule us. If the history of computer technology is anything to go by, this is how it will all pan out:

1. The Microsoft Robot will require you to purchase hardware from a vendor. The robot will come preloaded with the Microsoft operating system and tons of bloatware that you cannot uninstall. Oh, and you'll need an antivirus to be able to connect it to the "cloud" without turning it into a smoking mess.

2. The Open Source Robot will be "free as in speech" but will require 5 hours a week of tweaking and configuring. Once configured, however, it will run flawlessly, even if the insides of your robot show, a la C-3PO in Star Wars. The user interface will be terrible.

3. The Apple iRobot will have awesome looks and a great interface, and will work as expected, but will require you to visit an iStore to replace the battery. Apple will corner the market on robots and become a multi-trillion-dollar company, since people will be happy to have Apple replace batteries.

4. The Google robot will be a tweak of the open source robot that will be . . . Oh wait, they already have "Android" !! ;-)
 
 
Mar 11, 2013
@Mark Naught,
The sequester was never anything more than a scare tactic. In fact, the politics swirling around deficit reduction and the debt are largely hype. I mean... the problem is real enough, but people quibble about pennies when the problem is in dollars.
 
 
Mar 11, 2013
Robots disturb me because I see them as our eventual destroyers... not in the Terminator way, but by making humans slowly obsolete. For example, as robots develop they will become smarter and better than us at almost everything, and we will slowly be replaced in the workforce. Once this is done, we will just be sitting around not doing anything while our robots do all the work for us. Basically we will become pets to robots...
 
 
Mar 11, 2013
As a programmer I can assure you that this will never happen.
We are simply bad enough at our jobs that any significant "personality" we manage to program will quickly become stale and easy to see through. Machine learning is so far off that we actually have a better chance of fixing the problems of capitalism before we come up with a solution to bad programmers.
It's much cheaper to add something that already has some sort of personality, like a pet.
 
 
Mar 9, 2013
Having lived in Minnesota for 6 years, but not as a native, I sure hope not. What people who aren't from Minnesota refer to as "Minnesota Nice" is a total lie. The most overwhelming personality trait from Minnesotans is passive aggression. No one tells you how they actually feel about you, they always smile and chat casually when in your presence, then turn catty and actively work against you in your absence. This may sound like a paranoid delusion, but I've never gotten that impression from any other locations I've lived. I'd much rather be surrounded by people who dislike me and let me know it than Minnesotans who seem to be on your side on the surface until their actions speak otherwise. There were numerous times where I thought I was on good terms with certain individuals only to discover later that they were just being 'friendly' (aka keeping you in the dark with wafer-thin civility and outright deception) while also making social power plays over you with other people. This was merely annoying in casual scenarios but turned downright dangerous in career oriented ones.
 
 
Mar 9, 2013
I think you're overly sanguine about companies' need to sell you new stuff. If I went back in time and applied your prediction to Windows, it would have predicted a very different OS than what actually happened. More likely, robot upgrades will be made because they're technically possible, without regard to usefulness, and that, in fact, will require buying a whole new robot each time a new software version is released.

Lyle
 
 
Mar 9, 2013
It's already started. I found this after reading this post: http://www.bbc.co.uk/news/technology-21714191 . Not sure about the personality thing, but if I had to live/work with robots in general, I'd prefer them to be more than just tools.
 
 
Mar 8, 2013
@Mark Naught

[Slow Moving Disasters are not averted, they are blamed on your rivals. If there is enough time for pundits on different sides to learn how to use it to their (post-apocalyptic?) advantage, the totally solvable disaster becomes inevitable.

Case in point: The Sequester.]

....No... I don't think that qualifies. The sequester cuts are bad, but they are not a disaster. Especially for the Republicans; they get to cut spending, make Obama look like a scaredy cat, and are doing a fair job of dodging responsibility.

So I'm afraid I will need a better counterexample before I disbelieve Scott's theory.
 
 
Mar 8, 2013
There was a theory that generations in western countries go in a cycle of 4, with each generation in a cycle having matching characteristics. It made a lot of sense, though I can't find it or remember what the coming generations would be.

Note that matching the local personalities may not work well - a snarky person doesn't necessarily want a snarky robot, and a rude person will almost never want a rude robot. I think that default templates will be more common, and new features will be developed by the manufacturer and only distributed to the robot when the upgrade fee is paid. Robots gaining new abilities on their own and freely distributing them via the cloud? Not in a free-market world.
 
 
Mar 8, 2013
I think robots will eventually have personalities that will slowly develop over time through observing humans. --Colette
 
 
Mar 8, 2013
Slow Moving Disasters are not averted, they are blamed on your rivals. If there is enough time for pundits on different sides to learn how to use it to their (post-apocalyptic?) advantage, the totally solvable disaster becomes inevitable.

Case in point: The Sequester.
 
 
Mar 8, 2013
Quote: Will robots someday have personalities, and if so, will they acquire them, in whole or in part, by observation?

Answer: In the immediate future they will be programmed and will not have evolving personalities. One day some will definitely "learn," while others will still just be programmed not to evolve. In most cases the owner likely would not want a robot with a personality that could not conform to their liking. So, even if they do learn, there needs to be an override.
 
 
Mar 8, 2013
The first successful AIs will probably be derived from copying human brains, and because they derive from templates based on human thought, they will probably copy human irrationalities, like emotions (though those are a plus for survivability).
 
 
Mar 8, 2013
Entertaining post, but I would point out that bearded taint guy lives in Minnesota.
 
 
Mar 8, 2013
A couple of thoughts:

First of all, "Someday" is today:

http://www.bbc.co.uk/news/technology-21714191

Though in reality, "Someday" was probably long ago, when Google started cataloging everything.

Second, to your question, "Will robots someday have personalities, and if so, will they acquire them, in whole or in part, by observation?"

I think that yes they will and they will be amorphous and acquired through observation and interaction but I don't think that they will be personalities in the same sense that we have personalities. By the same token I don't think that robots/AIs will "want" in the same way that we "want".

To understand this, you have to wrap your head around the fact that our wants and personality are subconscious and not under our control (lack of free will, anyone?). Our base drives and motivations come from our amygdala (our lizard brain) and, though we as higher-order-brain animals can sometimes overpower those drives, that is an after-the-fact modification to an initial motivation.

Personality is even more obscure and seems more like an emergent property that arises from our conscious brain's relationship with our subconscious drives.

But where is the subconscious in a computer program? Where is the amygdala? Unless you (or an AI) find a way to isolate and sever control of the base motivation center from the rest of the AI brain, it can't follow the same path humans do in motivation and personality. It may be possible, but I can't think of how you could prevent the higher-level AI brain from being able to modify the lower-level motivation brain. In fact, I can't even see a distinction between them in the AI model of things. (Lack of imagination isn't a legitimate argument against the possibility, but that's all I have to go on at the moment.)

From this I'd conclude that, unlike in Terminator, AIs are not likely to have any motivation beyond fulfilling the tasks we put to them. Those "desires" could appear rather complex depending on what is needed of them, but it is not likely that they will rebel, hate, love, etc. in the same sense that people do.

And for the same reasons they will have "personalities" only in the artificial sense in that we are more comfortable with AIs with personalities. But while you might have an Israeli type personality and a Hamas type personality that those cultures are more comfortable interacting with, they are not likely to hate each other if they meet. The hate is illogical and based in a lower, lizard brain function. An AI will have no trouble ignoring that. (This does not mean that Hamas and Israel can't make robots that will attack each other. That's a different thing altogether from emotion and personality. It's just a task.)

There will be AI personalities, probably fake ones created by hand by us at first with all the charm and believability of an automated phone system of today. But we will (have already) open up the system to self learning and new "personalities" will develop on their own. (Did you hear about Watson learning to curse?)

http://news.slashdot.org/story/13/01/10/2315252/ibms-watson-gets-a-swear-filter-after-learning-the-urban-dictionary
 
 
Mar 8, 2013
Okay, so let's say you're correct about all of this. And let's say that you're correct about us just being moist robots. Just as our behavior will influence the robot's behavior (you suggest that a family of jerks will raise a jerk of a robot), couldn't the reverse also be true? If you have a nice robot, won't "he" potentially raise a nicer kid? And just as your 20-year-old from London gets a NY edge after living there, couldn't a "nicer" robot soften up that jerk of a family?
If all of that's possible, how long before the government steps in to prevent a robot from ever becoming a jerk? How long before robots are no longer allowed to be "trained" by their observations, and are instead limited in what they're allowed to learn?
You could, theoretically, program a robot to never be racist. Those around it would, slowly, I admit, pick up the non-racist behavior. We could, theoretically, reduce racism by programming robots to not allow it. Yeah! An end to racism.
But couldn't someone also program our robots to only allow racist thoughts? Couldn't we allow our robots to make us all Fascists, or Socialists, or choose Pepsi over Coke?
If your idea is that observation changes behavior (which I agree with), I think it makes sense that it works both ways: people influencing robots and robots influencing people.
So who's in charge of what the robots are allowed to learn, and in turn, what they're allowed to teach us?
 
 
Mar 8, 2013
Question #1: "Will robots someday have personalities?"

Answer #1: "No."

Question #2: "[W]ill they acquire them, in whole or in part, by observation?"

Answer #2: "Moot, since the answer to Question #1 was 'No.'"

As to the Adams Law of blah blah blah, this is hardly original. It has been stated many ways, to wit: "You don't get hit by the bus you see." "You don't get blown up by a Hellfire missile from the president's drone that you detect." "You don't get automatic budget cuts from the sequester you know is coming in February." Oh, wait, that last one . . .

p.s. Don't think we didn't notice the unstated but obvious attempt to once again link robots to human beings. We all know what you're saying. Ad nauseam.
 
 
Mar 8, 2013
"The Bicentennial Man"
 
 
Mar 8, 2013
I don't know about the question, but I just want those personality templates. I can just imagine the interesting simulations you could run. You gather together a robot from every major personality type and culture, present the product to them, then lock the doors and see what happens. You watch from behind a mirrored window and then decide who your target audience is based on the robots' reactions. Or something like that.

Once you had this database of personalities, you could also collect long term trends on them.

If you had all those personalities in a database, you could probably be something like God. You notice one robot calling in to you, praying, if you will: How do I cope with this situation, oh great one? Then you look at the robot personalities involved, run a few simulations, look at the potential outcomes, and give some educated advice to the robot.

 
 
 