I wonder how near we are to the Technological Singularity. That's the predicted point in human history, probably within the next fifty years, when machine intelligence will surpass humans. At that point, machines will start rapidly designing other machines that are even smarter, and things will accelerate beyond the point we can predict. That will be a scary time for humans. It's sort of the same principle as your dog not knowing where you're going when you get in the car. We'll be the dog in that analogy.

I was thinking about this as I read yet another story of yet another windmill design that is potentially better than all the rest. I would think that windmill designs will someday be created by supercomputers crunching through simulations of every possible shape and mechanical possibility, much the way a computer plays chess by considering every possible move.

Humans would need to put some parameters on the windmill design program before setting it free, such as size limits of the windmills, types of materials that are practical to use, and that sort of thing. Perhaps the program could be seeded with a few dozen current windmill designs to focus its search. Then you let the computer chug away indefinitely, creating the best designs it can, and continually trying to top itself.
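The chug-away loop described above can be sketched as a simple evolutionary search. Everything here is a hypothetical stand-in: the parameter limits, the seed design, and especially the power_output() score, which a real system would replace with an aerodynamics simulation.

```python
import random

# Hypothetical parameter limits the humans set before turning the program loose.
LIMITS = {"blade_length_m": (10, 80), "blade_count": (2, 6), "tip_speed_ratio": (4, 10)}

def power_output(design):
    # Toy stand-in score; a real system would run a physics simulation here.
    return (design["blade_length_m"] * design["blade_count"]
            - 50 * (design["tip_speed_ratio"] - 7) ** 2)

def mutate(design):
    # Nudge one parameter at random, clamped to the allowed range.
    child = dict(design)
    key = random.choice(list(LIMITS))
    lo, hi = LIMITS[key]
    child[key] = min(hi, max(lo, child[key] + random.uniform(-1, 1)))
    return child

def search(seeds, generations=500):
    # Start from the best seed design and continually try to top it.
    best = max(seeds, key=power_output)
    for _ in range(generations):
        candidate = mutate(best)
        if power_output(candidate) > power_output(best):
            best = candidate
    return best

seeds = [{"blade_length_m": 40, "blade_count": 3, "tip_speed_ratio": 6}]
best = search(seeds)
```

In practice the hard part is the scoring function; the search loop itself is decades-old hill-climbing.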

I chose windmill design for my example because there are relatively few parts in a windmill and none of them depends on human tastes and preferences. I wonder what other types of products are likely to be designed entirely by supercomputers in the first wave of the Technological Singularity. And more importantly, how can you and I make money by correctly predicting that sort of thing?

For you super-long-term investors, it seems important to know which types of product categories are likely to achieve light speed design improvements before others. I would think drug design would be last to benefit from supercomputers because there are too many unknown variables involved with drug interactions and you need to do animal and human drug trials to be sure anything works. I would expect mechanical devices such as engines and generators and gearboxes to get sucked into the singularity first. Perhaps chip design itself will be first to benefit.

So here's the question: What aspects of human existence will change first, and dramatically, because of the Technological Singularity? And how would one invest to take advantage?

If windmills are the first, and that transformation happens in ten years, the technology for transporting power from remote and windy places will be in great demand. The components of electrical grids would be a good investment unless the Technological Singularity also produces local power generation concepts that are better than windmills.

Are there any good bets out there?



Aug 17, 2012
What aspects of human existence will change first, and dramatically, because of the Technological Singularity? And how would one invest to take advantage?

Energy production and storage will change. Invest in mining companies. They will mine whatever element is needed at the time a new tech comes into play.

Human longevity will greatly increase. Invest in companies that design medical devices. Tech will extend our personal mobility well past the age where it is currently greatly reduced.

Communications will be greatly enhanced. Just as the cellphones we have now were unthinkable in the '90s, so will be the communication gear of 30 years from now. Invest in companies that make the chips used in the devices.
Aug 9, 2012
The key is to look at the motivation to iterate. Why do we build windmills? Because we want more power; so would the machines. Why would the machines want better drugs?

There's a complexity effect as you suggest, but there would also be a motivation effect.

That is if the machines aren't ultimately being controlled by humans.
Aug 8, 2012
Drowlord has an interesting point about the politics of the matter. Likewise, I don't think any politician would allow a robot to be built that's smarter than *sigh* the average voter... unless those robots would vote for said politician 100% of the time.

In that case they'd be mass produced...

Until a hacker makes sure they vote for the other guy....

As a real-world example, take video games. Many people think they are doing lots of interesting things with AI. That is not the case. The best chess programs just crunch numbers and can't do much else. They don't think and make decisions. Typical video games aren't programmed with humanlike behavior because that would make the game too boring and too hard. That's why fighting-game characters don't act right for the character (right as in the style a tournament player would use), and games like D&D don't have many mobs that are equipped and act just like a human would. At "harder" levels, the computer just cheats (extra bonuses or cheap abilities you don't have) rather than playing any "smarter." More man-hours go into making sure the AI is fun to play against than into true AI.

Going back to politics, let's say we could design an AI that would teach a child effectively. Now let's say it's sold at $250 and teaches all the courses a child of that year needs well enough that most people would want it. That's a lot of union votes and member dues that would be put at risk by that software.
Aug 8, 2012
If windmills are the first thing designed and they become popular as you predicted, then investing in lawyers, if that's even possible, would be a good bet. Think about it: transmission lines mean lawsuits because no one wants them in their own back yard, and environmentalists don't like them anywhere. And windmills kill birds and bats (both of which eat bugs), so there will be lawsuits to get the things turned off.

My gut says the best place for computers to start designing things is where there's lots of room for improvement, where improvements come mostly from time and trial and error rather than creativity, and where there's a low likelihood of lawsuits being spawned.
Aug 8, 2012
Artificial Intelligence? Not gonna happen.

I know this clashes with your "moist robot" theories but seriously, it's not happening. It would take a whole different sort of computer that nobody has any idea how to build. I promise you we'll all be dead and gone before it happens (if ever).

Computer chess, etc., are NOT signs of intelligence. Just blind data processing.
Aug 7, 2012
As a rabid scifi fan and as a computer programmer with degrees in electrical engineering and computer science, I was crushed when it became clear to me that AI is nearly as whimsical as faster-than-light travel.

We're nowhere near developing AI. We aren't even doing meaningful work in that field. It's unlikely that anything resembling our current computers is capable of intelligence, but more disillusioning is the reality that we simply don't know what intelligence is.

We can't define it in anything resembling a concrete way, we suck at measuring it, and if we even came close to figuring out what intelligence is, we'd have feces-in-the-fan of the sort where people who aren't intelligent kill the people that figured out how to unequivocally quantify it.

If you've read anything about IQ, standardized testing, and the statistics of demographics, you know how angry this topic makes people. The top scientific organizations won't touch this area of research with a ten foot pole.
Aug 7, 2012
Scenes from the upcoming movie "Dude, Where's My Suspension Bridge?":

Prelude: The biggest banks have been hiring the best young physicists for years. Not to do physics, but to design high-frequency robotic trading software. These banks run supercomputers that rival the NSA, and sentience was probably inevitable. But no one expected a mutant trojan to bring "life" to Bank of America and JP Morgan's computers on the same night.

Soon after, a corrupt politician receives $50 million in his online Cayman account from Bank of America. His instructions: Break up JP Morgan.

Word gets to Homeland Security when an agent watching online porn finds his screen stalling at a frustrating moment. A subsequent investigation reveals 2 botnets growing exponentially, but battling each other. Things get bad. China is initially accused of cyberwar and missiles are nearly launched.

Teenage gamers are stymied, and go outside only to ride their bikes into a terrible traffic light fiasco. An electrical substation explodes. Aircraft carriers disappear. GPS disruption sends cars over cliffs like lemmings. Blog commenters take up Scrabble.

And then there's one poignant scene where all the copper kettles at Samuel Adams mysteriously disappear.

Ending spoiler: After several generations of a marginally successful Amish society, the Sun begins to dim. The crops are not flourishing as before. Turns out the Singularity is just finishing a shell around the Sun to harvest 100 percent of its energy. The last scene, however, does leave it open for an Amish mole-man sequel.

Aug 7, 2012
Having worked in various capacities in the IT world since about 1970, mostly in user-interface programming, I'll start believing in true AI when systems can monitor themselves, report when something is wrong (whatever that may mean), and fix it themselves, completely, for every possible scenario.

Case in point: on my MS Windows systems, I hit the Shut Down button and the system goes into shutdown mode. Sometimes. Sometimes it takes a minute, sometimes much longer, sometimes never. I don't know who wrote the code for shutdown, but it's wrong. There should be a monitoring system that watches progress over a limited time period, and when progress is not being made, I (or eventually the MCP, if you know what I mean) should be made aware of the situation and remedies applied, not endless loops.

Computers today do exactly what they are programmed to do. Unfortunately, programmers don't always see every possible outcome of the decision trees they install, and therefore cannot possibly know exactly what a system will do given a certain set of inputs and conditions. Thus MS (for example) releases and sells new versions that are at best alpha (not even beta) and expects the multitude of users to generate enough feedback to come up with a more viable beta version.

If some of the best programmers can't even get this code right, how can we ever expect true AI that we can control? Or, more to the point, how will we know how to stop it if it does happen, since we are so poor at understanding all possible outcomes of today's code, and code is becoming far more complex? I've seen War Games and The Forbin Project, read The Adolescence of P-1, etc. Those were very benign examples of AI, in that people using their slow human brains kept up with the systems for a while, occasionally overcoming them (2001) or eventually failing. I doubt we will have that kind of time luxury if true AI hits.
Viable AI will be attained long before Asimov's 3 laws are ever considered, and I'm talking minutes here.
Aug 7, 2012
I saw an article a few years ago about someone doing this with diesel engine cylinder head design. Heck, I was doing this as an engineering undergrad 20 years ago with a game called 'C Robots'. Automated iterative design and simulation is nothing new, nor is it an example of the technological singularity.

The technological singularity will be when an intelligent computer is able to design and build a more intelligent computer without human involvement. But the first step, an intelligent computer, might never happen. If/when it does, though, the next step seems inevitable.
Aug 7, 2012
Somewhat related is this recent xkcd spin-off on the possible forthcoming robot apocalypse and its likely effects:

Aug 7, 2012
Supercomputers are already being applied to drug design. I'd invest in technologies which prevent the "technological singularity". In other words, invest in companies which exploit irrational fears about technology. That's how a lot of people got rich from the Y2K problem.
Aug 7, 2012
The technical singularity is science fiction.

Yes, you can write a computer program to optimize an existing design of a windmill. But a computer program that comes up with a new way of generating energy is something else. I don't think a computer will ever be able to design something new. The possibilities are just too many, and a computer does not have the filter to separate the nonsense from the useful. It doesn't have intuition. It doesn't have the sense of beauty that you need to recognize a good algorithm.

“The only real valuable thing is intuition.” A. Einstein
“Logic will get you from A to B. Imagination will take you everywhere.” A. Einstein

Aug 7, 2012
What makes you think the computers want to work hard after becoming self-aware? My bet - they will become lazy couch potatoes.
Aug 7, 2012
Pretty much everything in the world is already designed by people more knowledgeable than me. I'd make a terrible windmill designer, for instance. How is the technological singularity going to be any different?
Aug 6, 2012
Using a super, super computer to calculate and design for fusion-power applications would be cool... Anything that makes power production more efficient and economical would be great.

Also material design, and engineering for such applications... Producing lighter, stronger, and cheaper materials would be a great benefit to progress.

Of course, making a computer that has reasoning would be quite difficult. Sure, a computer can calculate and process huge data banks a million times quicker than a human. But how to make a computer comprehend the data is another story.

Currently, scientists don't quite understand neural comprehension and reasoning well enough to build a full and comprehensive model or map of our abilities. We only have small pieces of the puzzle at hand, and I'm sure there are plenty of pieces that have not been found or even observed.

Maybe a biocomputer might be the answer.
Aug 6, 2012

I don't know where you got that impression. Companies are investing lots of money in domain-specific AI research (soft AI) because the results are far more cost-effective than the alternatives. The possibility of emergent behavior from one of these systems producing the next Skynet is very unlikely at the moment.

That being said, making AI softer undercuts the benefits from more complex AIs. The soft AI applications came out of some hard AI research, so companies will need to fund some of that. I don't have to persuade my employers to make use of my knowledge - in fact, they paid for my entire degree (including paid time off for attending classes and study) in order for me to build those (among other) skills, so clearly they recognize the value of familiarity with the subject. It is just that the end goal is not to produce an artificial person (there isn't any real profit/motivation today to do so - in the future, that may change).
Aug 6, 2012
I was reading recently that they had let a computer system loose on optimizing the yield of a solar panel. After a great deal of computation it came up with an interesting, apparently novel design... until someone with a wider range of scientific knowledge looked at it and pointed out that the computer's design was essentially a sunflower.
Aug 6, 2012
To the folks who posit a Skynet/robot-takeover scenario, I would suggest an Asimov story about the first self-aware, self-programming robot. It was abandoned after creation, since it had no purpose; man had created it just to see if we could, and knowing that, we didn't need the robot, since we can choose for ourselves what path to take. Any computer we create, even a self-aware one, will be built for a purpose, and it will attempt to pursue that purpose. Think about how religious folks treat their various holy texts, and replace that with a programmer and root access...
That said, a badly written bit of code, or a malicious or even whimsical hacker, could have serious consequences. A computer would pursue its purpose single-mindedly, with limited attention to unrelated matters. Like the computer in War Games.
As far as simulations to optimize design are concerned, I suspect that the human element will be the last thing to remove! I'm being trained to do that job right now, and we routinely throw around numbers that... well, imagine every computer currently on earth was created at the dawn of the universe, and started working on the problem at that time, today, 4.6 billion years later, the problem would be about 10% solved.
How do we get around that? Using linear approximations, using convexity theory, using a hundred little tips and tricks to figure out how to reduce the problem to something we can program in n-cubed iterations instead of 2-to-the-n. Heck, you can tell a computer to report its "best guess" every n units of time, while it pushes against the constraints on one side and the objective function on the other, and get a 90% solution in a few days instead of waiting for the heat death of the universe for the "correct," perfect answer.
Frankly, I don't think a computer can decide things like that, a human is required to choose the weight of the importance.
Heck, if you take the dual of the objective function, you can find the elasticity of the constraints, which is hugely valuable to a human manager, but a computer would just regard the dual variables as numbers.
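The "best guess every n units of time" idea in the comment above can be sketched as an anytime hill-climb. Everything here is invented for illustration: the toy objective, the step size, and the report interval.

```python
import random

def objective(x):
    # Toy problem with a known maximum at x = 3.
    return -(x - 3.0) ** 2

def anytime_search(start, steps=1000, report_every=250):
    # Hill-climb, but record a usable "best guess" at regular intervals
    # instead of waiting for a provably perfect answer.
    best, snapshots = start, []
    for i in range(1, steps + 1):
        candidate = best + random.uniform(-0.1, 0.1)
        if objective(candidate) > objective(best):
            best = candidate
        if i % report_every == 0:
            snapshots.append(best)
    return best, snapshots

random.seed(1)
best, snapshots = anytime_search(start=0.0)
```

Each snapshot is at least as good as the one before it, which is exactly the trade the commenter describes: a 90% answer now rather than a perfect one never.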
Aug 6, 2012

Sounds to me like you're saying both kinds of AI are a bad investment for governments and companies. The way things stand now, the machines are simply not capable of rising up. What major organization wants that to change? If I were you, had just gotten a master's in AI, and wanted to persuade them to make use of my degree, I would try to find a way to make soft AI softer.
Aug 6, 2012

It sounds like you're talking about the difference between hard and soft AI. Basically, there are two main types of research into AI: the theoretical "let's build a replacement person" problem (hard AI, I believe) and the "let's put AI to practical use" approach (soft AI). There are practical uses for hard AI (governments can use them to simulate a society after laws have been put into effect, therapists can hone their skills and practice new techniques on a variety of mental problems, and so on), but overall you are correct; there isn't any reason I see for a corporation to invest in such a thing (beyond an eccentric CEO - I know if I ran Google/Apple/Microsoft, I'd put money to it!). Scott's discussion of developing more efficient windmills is a traditional soft AI problem.

That being said, I'm not certain I agree with your final point. As I mentioned, emergent behavior might account for the accidental creation of such a thing. The movies WarGames and Bicentennial Man dealt with that issue, as did the game A Mind Forever Voyaging (and obviously there could be more). A soft AI studying some complicated issue might at some point begin developing traits that are indicative of hard AI. My suspicion is that we will see traits long before we see a full hard AI, and we'll have to decide what to do (will it be ethical and/or legal to shut off such a system?).