I wonder how near we are to the Technological Singularity. That's the predicted point in human history, probably within the next fifty years, when machine intelligence will surpass humans. At that point, machines will start rapidly designing other machines that are even smarter, and things will accelerate beyond the point we can predict. That will be a scary time for humans. It's sort of the same principle as your dog not knowing where you're going when you get in the car. We'll be the dog in that analogy.

I was thinking about this as I read yet another story of yet another windmill design that is potentially better than all the rest. I would think that windmill designs will someday be created by supercomputers crunching through simulations of every possible shape and mechanical possibility, much the way a computer plays chess by considering every possible move.

Humans would need to put some parameters on the windmill design program before setting it free, such as size limits of the windmills, types of materials that are practical to use, and that sort of thing. Perhaps the program could be seeded with a few dozen current windmill designs to focus its search. Then you let the computer chug away indefinitely, creating the best designs it can, and continually trying to top itself.
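That seeded, keep-topping-itself loop is essentially an evolutionary search. Here's a minimal sketch of the idea in Python, with a made-up scoring function standing in for the real aerodynamic simulation — every name, number, and parameter below is hypothetical:

```python
import random

# Hypothetical stand-in for an aerodynamic simulation: score a design
# (blade_count, blade_length_m, pitch_deg) by estimated power output.
def simulate_power(design):
    blades, length, pitch = design
    # Toy model: longer blades help, too many blades add drag,
    # and there's a sweet spot for pitch around 10 degrees.
    return length * 2.0 - abs(blades - 3) * 5.0 - (pitch - 10.0) ** 2 * 0.1

def mutate(design):
    # Nudge each parameter randomly, within human-imposed limits.
    blades, length, pitch = design
    return (max(2, blades + random.choice([-1, 0, 1])),
            min(80.0, max(10.0, length + random.uniform(-2.0, 2.0))),
            min(45.0, max(0.0, pitch + random.uniform(-1.0, 1.0))))

# Seed with a few "current designs", then let the computer chug away,
# keeping the best it has found and continually trying to top itself.
population = [(3, 40.0, 10.0), (2, 55.0, 8.0), (4, 30.0, 12.0)]
for generation in range(200):
    population.sort(key=simulate_power, reverse=True)
    survivors = population[:2]                      # keep the best designs
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(8)]    # plus mutated offspring

best = max(population, key=simulate_power)
```

Because the best design always survives each generation, the score never gets worse; the open question, as with real windmills, is how faithful the simulation is to reality.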

I chose windmill design for my example because there are relatively few parts in a windmill and none of them depends on human tastes and preferences. I wonder what other types of products are likely to be designed entirely by supercomputers in the first wave of the Technological Singularity. And more importantly, how can you and I make money by correctly predicting that sort of thing?

For you super-long-term investors, it seems important to know which product categories are likely to achieve light-speed design improvements before others. I would think drug design would be the last to benefit from supercomputers, because there are too many unknown variables in drug interactions, and you need animal and human trials to be sure anything works. I would expect mechanical devices such as engines, generators, and gearboxes to get sucked into the singularity first. Perhaps chip design itself will be the first to benefit.

So here's the question: What aspects of human existence will change first, and dramatically, because of the Technological Singularity? And how would one invest to take advantage?

If windmills are the first, and that transformation happens in ten years, the technology for transporting power from remote and windy places will be in great demand. The components of electrical grids would be a good investment unless the Technological Singularity also produces local power generation concepts that are better than windmills.

Are there any good bets out there?



Aug 6, 2012
@jammer170: Sorry. I should have explained the part about downtrodden humans better. The point I was trying to make there was that, even if you're right, the first part of the criteria I mentioned more or less holds. An inventor or student might be willing to make a program that decides for itself what it wants to do, whether it's going to recognize faces or help the downtrodden, but I find it difficult to imagine a company or government being willing to make such a thing, or anything that might become such a thing. Which means they won't get very far.
Aug 6, 2012
Look up Evolved Antenna. Truly goofy-looking but extremely functional antennas designed by genetic algorithms for NASA. I imagine windmills could be designed the same way.
Aug 6, 2012

Yes, I am stating that there are computer programs that do not follow a "script" (or anything). There are also computer programs that can rewrite themselves to provide for new functionality (or improve their own performance). We are still very much in the first stages of such things, but they exist today. Wikipedia has very good articles on neural networks and evolutionary algorithms, I highly recommend them!

As far as "recognizing downtrodden humans", I won't make any guesses either way. Emergent behavior is by nature unpredictable, and is best left to philosophers and Hollywood.
Aug 6, 2012
I love your dog analogy. But if investment profits are to be made off the Singularity, it'll have to be at a point before humans are begging for table scraps.

In the first few stages of Its rise to power, investment in computer security companies should be lucrative. Think of the big bucks the major banks will throw at Kaspersky when the first scent of Singularity shenanigans starts to panic the markets.

Just as pit bulls that attack are destroyed, activists who try to attack the Singularity's networks will be destroyed, or at least thrown offline. After some failed attempts to crush the budding Singularity, It might try a strategy of providing the most powerful politicians with things they can't resist -- such as immortality drugs, brain/penis enhancement, stock market schemes, etc.

[I love the idea of a tribe of robots that are neither serving humanity nor attempting to exterminate it. They are simply competing for resources. It would make an awesome book/movie. -- Scott]

After a few years of protection by these rich, old men with giant genitalia, the Singularity will be solidly entrenched, and then even the enhanced humans will be looking for table scraps. The Singularity will then do what it wants.

It will do unpredictable things. Maybe It will require raw materials on a gigantic scale. The Taliban will finally meet their match when the Singularity's robots strip Afghanistan bare. It may eventually bury part of itself to take advantage of geothermal energy, and may crash mineral-rich asteroids down to Earth for its own purposes. And that would not be good for conventional life on this planet.

Aug 6, 2012
@jammer170: Are you telling me that these systems do not, at their core, make their 'decisions' on the basis I mentioned? That is, they don't follow set scripts? If not, what do they follow?

And even if they don't, there's still the fact that they're recognizing faces, driving cars, and such. I don't believe we'll see an AI that, as Paladin42 says, will notice how downtrodden humans are and just decide to liberate them. If we see a computer try to do that, it will be because someone 'told' it to.
Aug 6, 2012
It's interesting that as programming develops, it will become more like psychology than programming as we now know it. That's because the APIs will be so high-level that they'll deal in genuinely psychological concepts, such as "the robot is scared of this" or "the robot is hungry until it performs all its chores."
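A toy sketch of what such a "psychological" API might look like — every class and method here is invented purely for illustration:

```python
class Robot:
    """Hypothetical high-level robot API where the verbs are psychological."""

    def __init__(self):
        self.fears = set()
        self.chores = []

    def fear(self, thing):
        self.fears.add(thing)            # "the robot is scared of this"

    def is_scared_of(self, thing):
        return thing in self.fears

    def assign_chore(self, chore):
        self.chores.append(chore)

    def is_hungry(self):
        # "hungry until it performs all its chores"
        return bool(self.chores)

    def do_chore(self):
        if self.chores:
            self.chores.pop(0)


robot = Robot()
robot.fear("open flames")
robot.assign_chore("vacuum the lab")
```

Under the hood it's still ordinary state and conditionals, of course; the point is only that the vocabulary programmers use could migrate toward psychology.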

I think reaching the singularity is largely a question of developing classes of increasing complexity, along with (presumably) a few big ideas along the way. Fifty years until the singularity seems a good estimate.
Aug 6, 2012
The breakthrough will be managing expanding complexity (what is a chess game but a finite set of options that increases super-exponentially with depth of analysis?).

Example: windmills are all about 'weather' interacting with a machine, optimally, to generate a positive outcome: the conversion of energy into something of value (drinking water, flour, controlled electrons). Once you put a supercomputer on a chip in a windmill to predict the weather two seconds out, and put active control surfaces on all the edges (à la fighter jets), a windmill 'flies itself' (like most planes do now). We could build it today, but there is no return on investment. Once computers research, analyze, test, construct, QA, and operate, the only thing left for humans is to fight over 'perfect' placement. (Will computers be banned from politics? Will the $8-million-per-lot La Jolla highlands be bulldozed by HAL9x10^15 to place the windmills that feed the power HAL needs to survive and make more?)

Almost all 'inventions' come down to a 'trial and error' creation process at some point, coupled with a risk:reward calculation on whether the result provides the necessary return on investment. What is 'innovative' about that? That's just simulation. Crank down the cycle time (speed), increase the parallelism (team), increase the regression data (experience), and increase the breadth of the models (windmill design cost vs. world power consumption vs. weather changes vs. cost and availability of raw materials vs. the value of money now vs. later), and the process only accelerates.

To me, the largest factor of intelligence is 'learning': identifying what you don't know and going out to learn it, revising your past and future models based on the new information, and then applying the new model to your action plan ("now that I know how the price of tea in China affects windmills, I can corner the market on La Jolla energy BWAHAHAHA---HAHAHAHAHAH!").

The science-fiction aspect of this is that we assume computers will take on 'human' traits, which evolved purely from the need for our (my) offspring to out-survive others (yours). The danger comes if computers develop a 'survival' algorithm and an 'us versus them' competitive primary assumption. That's more Skynet than WOPR.

Aug 6, 2012

Following up on what Paddington said, we actually have made amazing advancements in AI. Right now every credit card transaction that occurs is checked by an AI (hidden under the name 'expert system' -- probably to impress the PHBs). Genetic algorithms and genetic programming have wonderful applications. Neural networks are appearing more and more in the world. (You know the ability Facebook has to recognize individuals in photos? Yep, a neural network.)
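For a tiny taste of the neural-network idea, here is a single perceptron learning the AND function from examples. Real face recognition uses vastly larger networks, but the underlying principle -- nudge weights to reduce error on training data -- is the same. (This toy example is mine, not from the comment.)

```python
# Train a single perceptron: for each example, predict, compare to the
# target, and nudge the weights in the direction that reduces the error.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND: output 1 only when both inputs are 1.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

AND is linearly separable, so this converges in a handful of epochs; the "baby stage" the commenter describes is, roughly, networks like this scaled up and fed years of data.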

I did my graduate degree in AI, and while we are just scratching the surface at the moment, the future will have amazing results. A lot of people assume that any AI we make will automatically be the equivalent of (or smarter than) the average adult. The truth most people skip over is that the first AIs will have the intelligence of a baby. Applied to windmills, it may take several years to make something that even vaguely functions at the level of current windmills. But it will take far less time to surpass them (assuming current designs aren't at the physical limits of what is possible, a question we currently lack an answer to).

At the moment we are in that baby stage, and what is most needed is patience. As time goes on, we'll begin training AIs (probably using technologies similar to the distributed @Home network) that will be able to outperform any human. A perfect example is the self-driving car. The first versions could barely keep the car on the road for a few hundred feet. Now we have ones that can drive through downtown San Francisco arguably as well as a person. It took a total of, what, fifteen to twenty years? (Gee, isn't it suspicious that that time frame roughly matches how long it takes a human to mature?)

It is hard to predict exactly when such an algorithm will produce results. Ten to twenty years from the moment you start the process sounds roughly correct. Honestly, I could go home tonight and start working on a genetic algorithm to generate windmill designs, but the big dependency is whether a model exists that can simulate the environment with any accuracy, and how easily the algorithm can interact with that model.
Aug 6, 2012
The computer(s) will notice how unequal the human conditions are and will attempt to bring the rich and mighty down and elevate the poor and downtrodden.
Aug 6, 2012
Especially after the Olympic appearance of Oscar Pistorius, I'm betting on prosthetics of all kinds. Enhancements of sight and hearing will be the first steps in the inexorable march to man/machine union.
Aug 6, 2012
@Dil_doh: In a word, no. Consciousness means the ability to decide for oneself what to work on, how to do the job, and, since you brought it up, what is and is not a 'good' glitch. Computers can't be said to do any of those things, nor can we be said to be working toward them. Even if, ultimately, we no longer know how the computer did its job, it will still fail on criteria one and three, and I'm not sure we can say it passes on criterion two.
Aug 6, 2012
@whtllnew- Your point is well taken, but haven't there been instances where the script does not produce the expected result, usually in bad ways but sometimes in good ways? Sufficiently complex scripts do remarkable things and come up with amazing results. I think as we keep developing more highly parallel systems, the monitoring program that keeps track of what's going on and extracts a synopsis of what's happening will become tantamount to a "consciousness".
Aug 6, 2012
I'm a programmer, and have been since the early '80s, and I would like to expand on Johnestauffer's comment.

Programming has not changed all that much since I was introduced to it. Yes, we have better computers to work with; yes, we can do more 'graphics' work and have better tools for managing data; yes, there are a dozen other things we can do better than in the old days, or couldn't do at all back then. But it all still comes down to giving the computer a script to follow, with some parts of the script perhaps bypassed depending on the data it's dealing with, user input, and such. Nothing I've seen has convinced me that computers have started to deviate from that limitation. Folks keep saying AI is within our grasp. They said that in the '80s too. By rights we should have HAL, or something HAL-like, by now.

I don't doubt that we'll keep improving computer capabilities and programming, with the result that we'll have even less of a clue how the machines got their results than we do now. But I'm not convinced that the human-and-dog analogy Scott posted holds true. I haven't seen anything to persuade me that the machines will ever do anything outside their script, which means we're still ultimately in the driver's seat.
Aug 6, 2012
The usual stuff:

Healthcare - not drugs, but diagnostic testing equipment, artificial limbs/organs, and 'aids' (advanced hearing aids, leg braces that replace wheelchairs, etc.)

Pattern Recognition Software - central component to computer security software, detecting fraud in systems like Medicare, identifying terrorist activity, selling to consumers, etc.

Weapons - look for more sophisticated offensive and defensive weapons, probably from a company named Skynet

Energy - Anything sustainable - wind, solar, wave, or geothermal power

Entertainment - 3D is getting passé - we need holodecks!

Now how do you know where to invest? Figure out who makes the most advanced engineering computers, then find out who they're selling to.
Aug 6, 2012
Wouldn't a long-term investment strategy that assumes a technological singularity be pointless? Surely our robot overlords won't need money.
Aug 6, 2012
I am not sure this will ever happen. Supercomputers can do more calculations in a given amount of time than a human. But are they smarter or more intelligent?
Humans conceive the process and monitor it.
Humans have the ability to "think outside the box," which may never be a capability that can be programmed into a computer.
Just because a computer can access more information, and process it in a much shorter time than an individual human can, does not necessarily imply intelligence.
Aug 6, 2012
Scott - such things are being done now, most noticeably in designing computer chips. Google 'genetic algorithms' for some very interesting stuff.