ACCELER8OR

Oct 23 2012

Not Sci Fi. Sci NOW!

As the Walrus said to the Carpenter, the time has come to talk of many things.

To understand why I hold the views I do, you must first understand that my choices and views are shaped by the future I see coming. Without understanding that future, it is impossible to truly see why I support some issues on the right, some on the left, some in the middle, etc. So, this article is an attempt to explain, in brief overview fashion, what I see coming down the road, something I think far too many people are completely unaware of.

To begin, I am not a liberal, a conservative, a libertarian, a communist, a socialist, or any other political leaning. If I must be labeled, I would say I am a Humanitarian first, and a Transhumanist second.

Humanitarianism: In its most general form, humanitarianism is an ethic of kindness, benevolence and sympathy extended universally and impartially to all human beings. Humanitarianism has been an evolving concept historically but universality is a common element in its evolution. No distinction is to be made in the face of human suffering or abuse on grounds of tribal, caste, religious or national divisions.

Transhumanism: An international intellectual and cultural movement supporting the use of science and technology to improve human mental and physical characteristics and capacities. The movement regards aspects of the human condition, such as disability, suffering, disease, aging, and involuntary death as unnecessary and undesirable. Transhumanists look to biotechnologies and other emerging technologies for these purposes. Dangers, as well as benefits, are also of concern to the transhumanist movement.

As such I would have to say I am a Transhumanist because I am a Humanitarian.

So, what precisely does that have to do with the future? It means I take the long view of most everything, because I believe there is a significant probability that I will be around to face the consequences of short-sighted actions in the present. But it also means that I can look at some problems which are long term and see that the solutions to them are not yet available, but have a high likelihood of existing before the problem becomes a crisis. This includes such “catastrophic” issues as “Global Warming”, “Overpopulation” and, in fact, most “Crisis” politics. Many of these issues are almost impossible to address with current technological capabilities, but will be much easier to address with technologies that are currently “in the lab”.

However, it also means I spend a lot of time researching exactly what the future is likely to bring, so that I can make determinations on which problems are immediate, short term or long term, and whether or not practical solutions exist now, or must wait until we have developed a little further.

But primarily, what that research has shown me is that most people are utterly unaware of just what the future is going to bring. Most people see a future just like today, differing only in degree. They see the future of Star Trek, or of too many other TV shows, where humanity still faces exactly the same social problems as it does today, with fancier trimmings.

Yet such a future is utter fantasy. Our future is going to change things on a scale undreamt of by most humans, because it is a change not of degree, but of kind.

Humanity, as we know it, is going to cease to exist.

If you are unfamiliar with the concepts of Artificial Intelligence, Nanotechnology, Quantum Computing, Cybernetics, and Bioengineering, you need to educate yourself in them, and soon, because they will have a much larger impact on us than who is president, whether or not global warming is happening, or even whether or not Healthcare reform is passed.

And before you dismiss any of those topics as flights of fantasy, you should be aware of the truth. If you want a quick brief overview, check out Next Big Future, Acceler8or, Gizmag, IO9, IEET, or Wired and spend a few hours reading through the various links and stories. This is not Sci-Fi, it is Sci-now.

Within the next twenty to fifty years, and possibly even within the next decade, humanity is going to face the largest identity crisis ever known.  We are going to find that things we have always taken for granted as unchangeable are indeed matters of choice. It’s already started.

As of this exact moment in time, you are reading this on the internet. As such, you have already entered the realm of Transhumanism. You are free to choose what sex you wish to present yourself as, free to be whichever race you want to be, free even to choose what species you wish to present yourself as. You could be a Vulcan, an Orc, even a cartoon character from South Park. Every aspect of who you are comes down to your personal choice. You may choose to present yourself as you are, or you may present yourself as something else entirely.

That same choice is going to be coming to humanity outside the internet as well. Our medical technology, understanding of our biology, and ability to manipulate the body on finer and finer scales is advancing at an exponential rate. It will not be much longer before everyone has the ability to change everything about their physical body to match their idealized selves.

How will racists be able to cope with the concept that race is a choice? Or sexists deal with people switching genders on a whim? How will people feel when in vitro fertilization and an artificial womb can allow two genetic males to have a child, or for one to become female and have one via old fashioned pregnancy?

And yet that is just the barest tip of the iceberg, for not only will we be able to reshape ourselves into our idealized human form, we will also eventually have the ability to add and subtract traits of other creatures as well. Not everyone will choose to be “human”. There will be elves and aliens, cat girls and lion men. We are already on the verge of nearly perfect human limb replacement; within a decade it is highly likely that we will be able to replace damaged nerves with electronic equivalents to control artificial limbs that mimic not only the full range of human motion but, with the creation of artificial muscles, do so in a completely natural manner. It is but one step from creating an artificial replacement to making an artificial addition.

And there will be those who choose such additions, or who may even choose to replace their natural parts with enhanced cybernetic parts. We will have to face the very real fact of humans with far greater than current human physical ability, and even those with abilities no current human has, such as flight using their own wings.

Imagine a football game with someone who can leap the length of the field, or throw a Hail Mary a mile. Is that someone we would call “human” today? Yet they will be the human of tomorrow.

But even that is just the barest hint of the future, because there is so much more that is happening as well. Since you are sitting here, reading this, I know you are already participating in another tenet of Transhumanism, mental augmentation. You use your computer to collect knowledge, to research and educate yourself, to improve your personal knowledge base by using it as an extended intelligence tool. I know quite well that most of you also use it for your primary news source, your main way of keeping yourself aware of what is happening in the world.

You also use it for entertainment, to watch videos, to game, to read, to discuss, and even to keep in touch with your friends and families.

It already is a mental augmentation device. And that function will only grow. Your cell phone is becoming more and more of an accessory to your computer every day. In less than ten years it is likely to become your primary computer, with your desktop communicating with it and acting simply as an extension. There is already an advanced cellphone in the lab that is subdermal, meaning it is implanted under your skin, powered by your own body sugars, and invisible when not in use. Contact lenses with computer displays that use body heat for power are also at the prototype stage. Eventually you will be connected to your computer every second of the day, using it to augment your life in ways I doubt most people can even imagine. And once the ability to connect the human mind directly to this intelligence augmentation device allows us to use it with a mere thought, can you really call such a person “human” as we currently define it?

And yet again, that is simply the merest hint of the possibilities, because in addition to all this computerization and cybernetics, you have to face the reality that we will soon be able to control matter at the atomic scale. And that is something that very very few people have any real grasp of.

Nanotechnology is not a pipedream. Anyone who tells you it is is either indulging in denial or sadly misinformed. If you want proof that nanoscale machinery is possible, simply look in a mirror. You are the finest proof that nanotechnology works. DNA is the most versatile molecular machine we are aware of, and it is with DNA that we are developing the earliest stages of true Molecular Engineering.

And with Molecular Engineering, almost everything we take for granted right now is going to change. I won’t go into pages and pages of description of what complete control of matter on the molecular scale can do, but suffice it to say that nothing in our history has prepared us to cope with this ability. You will be able to make food on your kitchen counter, make a car that is indestructible but folds into a handy briefcase, and just about everything you have seen in any sci-fi show ever. With nanotechnology we can permanently end hunger and poverty, and even clean up the environment.

If you truly wish to get a bare minimal grasp of the scope of the possible read Engines of Creation by K. Eric Drexler. While his vision of nanotech’s foundation is based on pure mechanical engineering, it is nonetheless one of the best introductions to the subject I know. We are developing this ability as we speak, as any of you who bothered to check out the recommended reading list would be able to see.

And that brings us to the next topic, Artificial Intelligence. I am not speaking here of the kind of AI that you are familiar with from Hollywood, but of something called Artificial General Intelligence. This is something far different. AGI is the kind of program that can drive your car, cook your food, clean your house, diagnose your illnesses, operate on your brain, and yes, even do your job better, faster, and more reliably than you can. AGI is the kind of AI that has absolutely no need to be self-aware, conscious, or even thinking. AGI is what runs chess computers. Any skill that can be taught can be accomplished by AGI. IBM’s Watson is an example of this future, a machine able to learn to become an expert on any given subject and enable non-experts to have that expertise available on demand.

So be prepared, people. You will be replaced by a machine eventually.

And yet with Nanotechnology capable of ensuring our every physical need is met, Cybertechnology giving us superhuman abilities, and Bioengineering enabling us to be exactly who and what we want to be, is that really such a bad thing?

So I will at last come to the final technology which will make our future far different than what has come before. Indefinite Life Extension.

If you are alive today, you need to seriously contemplate the fact that you may not merely have a long life, but that your life may not even have a definite end. You may be alive, healthy, and in the best physical shape possible a thousand years from now. The younger you are, the greater the possibility.

You may have to face the very real likelihood that aging, death by natural causes, and every disease that currently afflicts mankind may be overcome within the next 30 to 60 years. It might even happen as soon as tomorrow. You may never die unless you have an accident or commit suicide. And even that is just the simplest scenario. With the possibility of up-to-the-nanosecond backups of your brain’s synaptic patterns and electrical impulses, dying might become about as permanent as it is in a video game.

Humanity, as we currently know it, is going to cease to exist.

And most of us will not even notice it happening until it’s already well underway; indeed, most people are unaware that it is happening RIGHT NOW.

And this is the future, in the tiniest snippets of hints of what I truly foresee, that guides my thoughts and actions. A future which is so very, radically, unimaginably different that no-one can even truly begin to envision it. It becomes a blank wall beyond which we cannot see, because we do not even have the concepts to understand what is beyond the wall.

So think about these questions. Think about the reality we will have to face, and understand, you will have to come to terms with this. You can’t keep your head in the sand forever and you can’t comfort yourself by thinking it is decades down the road. It’s here, it’s now, and it’s in your face.

And if anything is certain, it is this: You are not prepared.

Jul 22 2011

Is The Singularity Near Or Far? It’s A Software Problem

When I first read The Singularity is Near by Kurzweil, it struck me that something seemed curiously “missing” from his predictions. At the time, I merely put it on the back burner as a question that needed more data to answer. Well, recently, it’s been brought up again by David Linden in his article “The Singularity is Far”.

What’s missing is a clear connection between “complete understanding of the mechanics of the brain” and how this “enables uploading and Matrix level VR.” As David points out, merely knowing how the brain functions at the mechanical level, even if we know how each and every atom and molecule behaves, and where every single neuron goes, does not equal the ability to reprogram the brain at will to create VR, nor does it necessarily translate into the ability to “upload” a consciousness to a computer.

I tend to agree with David that Ray’s timeline might be overly optimistic, though for completely different reasons. Why? Because software does not equal hardware!

David discusses a variety of technical hurdles that would need to be overcome by nanomachines in order to function as Kurzweil describes, but these are all really engineering issues that will be solved in one manner or another. We may or may not actually see them fixed by the timeline Kurzweil predicts, but with the advances we are making with stem cells, biological programming of single-cell organisms, and even graphene-based electronics, I don’t doubt that we will find a means to non-destructively explore the brain, and even to interface with some basic functions. I also see many possible ways to provide immersive VR without ever having to achieve the kind of technology Ray predicts. I don’t even doubt that we’ll be able to interface with a variety of “cybernetic” devices via thought alone, including artificial limbs wired into the nervous system that provide sensory data like “touch.”

But knowing how to replicate a signal from a nerve and knowing precisely what that signal means to that individual might not be the same thing. Every human brain has a distinct synaptic map and distinct signaling patterns. I’m not as confident that merely knowing the structure of a brain will enable us to translate the patterns of electrical impulses as easily as Kurzweil seems to think. We might learn how to send signals to devices long before we learn how to send signals back from those devices in a manner that enables “two way” communication beyond simple motor control, much less the complete replication of consciousness, or the complete control of inputs needed for “Matrix VR”. Those may lag far behind the mere mechanical reproduction of a human brain in simulation.

Does my perception of Green equal yours? Is there a distinct “firing pattern” that is identical among all humans that translates as “green”, or does every human have a distinct “signature” which would make “green” for me show up as “pink” for you? Will there be distinct signals that must be “decoded” for each and every single individual, or does every human conform to one of who knows how many “synaptic signal groups”? Can a machine “read minds” or would a machine fine tuned to me receive only gibberish if you tried to use it?
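A toy sketch of this worry, with every detail invented for illustration: give each “brain” a private random code for the same set of percepts, and a decoder tuned to one brain will not reliably read another’s signals.

```python
import random

# Toy illustration (names and encoding invented for this sketch): each
# "brain" assigns its own private signal to the same set of percepts.
# A decoder built from one brain's code may mislabel another's signals.

PERCEPTS = ["green", "pink", "red", "blue"]

def make_brain(seed):
    """Give this brain a private, seed-determined signal per percept."""
    rng = random.Random(seed)
    signals = list(range(len(PERCEPTS)))
    rng.shuffle(signals)
    return dict(zip(PERCEPTS, signals))

def decode(brain, signal):
    """Invert a brain's code: signal -> percept name."""
    return {v: k for k, v in brain.items()}[signal]

alice, bob = make_brain(1), make_brain(2)
# Decoding Alice's "green" signal with Alice's own code always works...
same = decode(alice, alice["green"])
# ...but decoding that same signal with Bob's code can name a
# different percept entirely -- the machine "reads minds" only for
# the individual it was tuned to.
cross = decode(bob, alice["green"])
```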

The human mind is adaptable. We’ve already proven that it can adapt to different points of view in VR, and even adapt to use previously unknown abilities, like a robotic “third arm”. The question is whether this adaptability will enable us to use highly sophisticated BCI even if that BCI cannot actually “read” our thoughts: we may simply learn methods of sending it signals it can understand, while our “minds” remain black boxes, impenetrable to the machine despite all our knowledge of the brain’s hardware.

This is the question I think Ray glosses over. Mere simulation of the hardware alone might not even begin to be the “hard problem” that will slow uploading. I don’t doubt we will eventually find an answer, but to do so, we first have to ask the question, and it’s one I don’t think Ray’s asked.

Jul 10 2011

From Gamification to Intelligence Amplification to The Singularity

The following article was edited by R.U. Sirius and Alex Peake from a lecture Peake gave at the December 2010 Humanity+ Conference at the Beckman Institute in Pasadena, California. The original title was “Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion.”

I’ve been thinking about the combination of artificial intelligence and intelligence amplification and specifically the symbiosis of these two things.

And the question that comes up is what happens when we make machines make us make them make us into them?

There are three different Moore’s Laws of accelerating returns. There are three uncanny valleys that are being crossed. There’s a sort of coming-of-age story for humanity and for different technologies. There are two different species involved, us and the technology, and there are a number of high-stakes questions that arise.

We could be right in the middle of an autocatalytic reaction and not know it. What is an autocatalytic reaction? An autocatalytic reaction is one in which the products of the reactions are the catalysts. So, as the reaction progresses, it accelerates and increases the rate of reaction.  Many autocatalytic reactions are very slow at first. One of the best known autocatalytic reactions is life.   And as I said, we could be right in the middle of one of these right now, and unlike a viral curve that spreads overnight, we might not even notice this as it ramps up.
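The shape of such a reaction is easy to see numerically. Here is a minimal sketch (my own illustration, not from the talk) of the classic autocatalytic scheme A + X → 2X, where the rate depends on both the remaining substrate and the product, so conversion is glacial at first and then ramps up abruptly:

```python
# Minimal sketch of an autocatalytic reaction A + X -> 2X: the product
# X catalyzes its own creation, so dx/dt = k * a * x. All parameter
# values are arbitrary choices for illustration.

def autocatalytic(a0=1.0, x0=1e-6, k=10.0, dt=0.01, steps=200):
    """Euler integration; a + x is conserved as A converts into X."""
    a, x = a0, x0
    history = []
    for _ in range(steps):
        rate = k * a * x
        a -= rate * dt
        x += rate * dt
        history.append(x)
    return history

h = autocatalytic()
# Growth in the first quarter of the run is barely measurable; most of
# the conversion happens in a brief middle window -- the "ramp up" that
# goes unnoticed until it is already well underway.
early = h[49] - h[0]
late = h[149] - h[99]
```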

There are two specific processes that I think are auto-catalyzing right now.

The first is strong AI. Here we have a situation where we don’t have strong AI yet, but we definitely have people aiming at it.  And there are two types of projects aiming toward advanced AI. One type says, “Well, we are going to have machines that learn things.” The other says, “We are going to have machines that’ll learn much more than just a few narrow things. They are going to become like us.”

And we’re all familiar with the widely prevalent method for predicting when this might be possible, which is by measuring the accelerating growth in the power of computer hardware. But we can’t graph when the software will exist to exploit this hardware’s theoretical capabilities. So some critics of the projected timeline towards the creation of human-level AI have said that the challenge arises not in the predictable rise of the hardware, but in the unpredictable solving of the software challenges.

One of the reasons that what we might broadly call the singularity project has difficulty solving some of these problems is that, although there’s a ton of money being thrown at certain forms of AI, it goes to military AIs, or to other types of AI that have a narrow purpose. And even if these projects claim that they’re aimed at Artificial General Intelligence (AGI), they won’t necessarily lead to the kinds of AIs that we would like or that are going to be like us. The popular image of a powerful narrow-purpose AI developed for military purposes would, of course, be the T-1000, otherwise known as the Terminator.

The terminator possibility, or “unfriendly AI outcome” wherein we get an advanced military AI is not something that we look forward to. It’s basically the story of two different species that don’t get along.

Either way, we can see that AI is the next logical step.

But there’s a friendly AI hypothesis in which the AI does not kill us. It becomes us.
And if we actually merge with our technology — if we become family rather than competition — it could lead to some really cool outcomes.

And this leads us to the second thing that I think is auto-catalyzing: strong intelligence amplification.

We are all Intelligence amplification users.

Every information technology is intelligence amplification. The internet — and all the tools that we use to learn and grow — they are all tools for intelligence amplification. But there’s a big difference between having Google at your fingertips to amplify your ability to answer some questions and having a complete redefinition of the way that human brains are shaped and grow.

In the Diamond Age, Neal Stephenson posits the rise of molecular manufacturing. In that novel, we get replicators descended from today’s “maker bot,” so we can say “Earl Grey, hot”… and there we have it. We’re theoretically on the way to this sort of nanotech. And it should change everything. But there’s a catch.

In one of the Star Trek movies, Jean-Luc Picard is asked, “How much does this ship cost?” And he says, “Well, we no longer use money. Instead, we work to better ourselves and the rest of humanity.” Before the girl can ask him how that works, the Borg attack. So the answer as to how that would look is glossed over.

Having had a chance to contemplate the implications of nanotechnology for a few decades (since the publication of Engines of Creation by K. Eric Drexler), we understand that it may not lead to a Trekkie utopia. Diamond Age points out one reason why. People may not want to make Earl Grey tea and appreciate the finer things in life. They might go into spoiled-brat mode and replicate Brawndo in a Brave New World or Fahrenheit 451. We could end up with a sort of wealthy Idiocracy amusing itself to death.

In Diamond Age, the human race splits into two types of people. There are your Thetes, which is an old Greek term. They’re the rowers and laborers and, in Diamond Age, they evolve into a state of total relativism and total freedom.

A lot of the things we cherish today lead to thete lifestyles and they result in us ultimately destroying ourselves. Stephenson posits an alternative: tribes.  And, in Diamond Age, the most successful tribe is the neo-Victorians.  The thetes resent them and call them “vickies.”  The big idea there was that what really matters in a post-scarcity economic world is not your economic status (what you have) but the intelligence that goes into who you are, who you know, and who will trust you.

And so the essence of tribalism involves building a culture that has a shared striving for excellence and an infrastructure for education that other tribes not only admire but seek out.  And they want to join your tribe. And that’s what makes you the most powerful tribe. That’s what gives you your status.

So, in Diamond Age, the “vickie” schools become their competitive advantage. After all, a nanotech society needs smart people who can deal with the technological issues.  So how do you teach nanotechnology to eighth graders? Well, you have to radically, aggressively approach not only teaching the technology but the cohesion and the manners and values that will make the society successful.

But the problem is that this has a trap. You may get a perfect education system.  And if you have a perfectly round, smooth, inescapable educational path shaping the minds of youths, you’re likely to get a kind of conformity that couldn’t invent the very technologies that made the nanotech age possible. The perfect children may grow up to all be “yes men.”

So one of the characters in Diamond Age sees his granddaughter falling into this trap and says, “Not on my watch.” So he invents something that will develop human minds as effectively as the nanotech age developed physical wealth. He invents “A Young Lady’s Illustrated Primer.” And the purpose of the Illustrated Primer is to solve the problem: on a mass scale, how do you shape each individual person to be free rather than the same?

Making physical stuff cheap and free is easy.  Making a person independent and free is a bigger challenge.  In Diamond Age, the tool for this is a fairy tale book.

The child is given the book and, for them, it unfolds an opportunity to decide who they’re going to be — it’s personalized to them.

And this primer actually leads to the question: once you have the mind open wide and you can put almost anything in there, how should you shape the mind? What should you give them as content that will lead to their pursuit of true happiness and not merely ignorant contentment?

The neo-Victorians embody conformity and the Thetes embody nonconformity. But Stephenson indicates that to teach someone to be subversive in this context, you have to teach them something other than those extremes.

You have to teach them subtlety.  And subtlety is a very elusive quality to teach.  But it’s potentially the biggest challenge that humanity faces as we face some really dangerous choices.

During the space race, JFK said of the space program that to do this, to make these technologies that don’t exist and go to the moon and so forth, we have to be bold. But we can’t just go boldly into strong AI or boldly go into strong nanotech. We have to go subtly.

I have my own educational, personal developmental narrative in association with a technology that we’ve boldly gone for: 3dfx.

When I was a teenager, my mom taught me about art and my dad taught me how to invent stuff. And, at some point, they realized that they could only teach me half of what I needed to learn. In the changing world, I also needed a non-human mentor. So she introduced me to the Mac. She bought the SE/30 because it had a floating point unit and she was told that would be good for doing science. Because that’s what I was interested in! I nodded and smiled until I was left alone with the thing so I could get down to playing games. But science snuck up on me: I started playing SimCity and I learned about civil engineering.

The Mac introduced me to games.  And when I started playing SimLife, I learned about how genes and alleles can be shaped and how you could create new life forms. And I started to want to make things in my computer.

I started out making art to make art, but I wasn’t satisfied with static pictures. So I realized that I wanted to make games and things that did stuff.

I was really into fantasy games. Fantasy games made me wish the world really was magic. You know, “I wish I could go to Hogwarts and cast magic spells.”  But the reality was that you can try to cast spells, it’s just that no matter how old and impressive the book you get magic out of happens to be, spells don’t work.

What the computer taught me was that there was real muggle magic.  It consisted of magic words. And the key was that to learn it, you had to open your mind to the computer and let the computer change you in its image. So I was trying to discover science and programming because my computer taught me. And once you had the computer inside of your mind, you could change the computer in your image to do what you wanted. It had its own teaching system. In a way, it was already the primer.

So then I got a PowerBook. And when I took it to school, the teachers took one look at what I was doing and said, “We don’t know what to do with this kid!” So they said “you need a new mentor” and they sent me to meet Dr. Dude.

I kid you not. That wasn’t his actual name, but it’s what was on his office door and on his nameplate, and it’s what he was known as.

Dr. Dude took a look at my Mac and said, “That’s really cute, but if you’re in university level science you have to meet Unix.” So I introduced myself to Unix.

Around that time, Jurassic Park came out. It blew people away with its graphics. And it had something that looked really familiar in the movie. As the girl says in the scene where she hacks the computer system, “It’s a UNIX system! I know this!”

I was using Unix in the university and I noticed that you could actually spot the Silicon Graphics logo in the movie.  Silicon Graphics was the top dog in computer graphics at that time. But it was also a dinosaur. Here you had SGI servers that were literally bigger than a person rendering movies while I could only do the simplest graphics stuff with my little PowerBook. But Silicon Graphics was about to suffer the same fate as the dinosaurs.

At that time, there was very little real-time texture mapping, if any. Silicon Graphics machines rendered things with really weird faked shadows. They bragged that there was a Z-buffer in some of the machines. It was a special feature.

This wasn’t really a platform that could do photorealistic real-time graphics, because academics and film-industry people didn’t care about that. They wanted to make movies, because that was where the money was. And just as with military AI, graphics technology that’s built for making movies doesn’t get us where we want to go.

Well, after a while we reached a wall. We hit the uncanny valley, and the characters started to look creepy instead of awesome. We started to miss the old days of real special effects. The absolute low point for these graphics was the monkey chase scene in Indiana Jones and the Kingdom of the Crystal Skull.

Moviegoers actually stopped wanting the movies to have better graphics. We started to miss good stories. Movie graphics had made it big, but the future was elsewhere. The future of graphics wasn’t in Silicon Graphics; it was in the tiny, rodent-sized PC, which was nothing compared to the SGI but had this killer app called Doom. And Doom was a perfect name for this game, because it doomed the previous era of big-tech graphics. And the big-tech graphics people laughed at it. They’d make fun of it: “That’s not real graphics. That’s 2.5D.” But do you know what? It was a lot cooler than any of the graphics on the SGI, because it was realtime and fun.

Well, it led to Quake. And you could call it an earthquake for SGI. But it was more like an asteroid, because Quake delivered a market that was big enough to motivate people to make hardware for it. And when the hardware of the 3dfx graphics card arrived, it turned Quake‘s pixelated 3D dungeons into lush, smoothly lit and textured photorealistic worlds. Finally, you started to get completely 3D-accelerated graphics, and big-iron graphics machines became obsolete overnight.

Within a few years 3dfx was more than doubling the power of graphics every year, and here’s why. SGI made OpenGL. And it was their undoing, because it not only enabled prettier ways to kill people, which brought the guys to the yard. It also enabled beautiful and curvy characters like Lara Croft, which really brought the boys to the yard, and also girls who were excited to finally have characters that they could identify with, even if they were kind of Barbies (which is, sadly, still prevalent in the industry). The idea of characters and really character-driven games drove graphics cards, and soon the effects were amazing.

Now, instead of just 256 Megs of memory, you had 256 graphics processors.
Moore’s law became obsolete as far as graphics were concerned. Moore’s law was merely doubling. Graphics performance was accelerating so fast that NVIDIA started calling it Moore’s law cubed. In fact, while Moore’s law was in trouble because of the limits of what one processor could do, GPUs were using parallelism.
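To see how fast “Moore’s law cubed” runs away from plain Moore’s law, here is a back-of-the-envelope sketch. The doubling periods are illustrative assumptions, not figures from the article: roughly 18 months per doubling for classic Moore’s law, and 6 months per doubling for the cubed GPU pace (three doublings in the time CPUs manage one).

```python
# Illustrative sketch with assumed rates: classic Moore's law doubling
# every ~18 months vs. "Moore's law cubed" doubling every ~6 months.

def growth(years, months_per_doubling):
    """Relative performance after `years`, doubling every `months_per_doubling` months."""
    return 2 ** (years * 12 / months_per_doubling)

for years in (1, 3, 6):
    cpu = growth(years, 18)  # classic Moore's law
    gpu = growth(years, 6)   # "cubed": three doublings per CPU doubling period
    print(f"{years} yr: CPU x{cpu:.0f}, GPU x{gpu:.0f}")
```

After three years the gap is already 4x versus 64x, which is why the two curves felt like different industries.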

In other words, when they made the Pentium into the Pentium II, they couldn’t actually give you two of them, with twice the performance. They could only pretend to, by putting it in a big fancy dress and making it slightly better. But 3dfx went from the original Voodoo to the Voodoo2, which had three processors on each card, and two cards could be paired to give you six processors.

The graphics became photorealistic. So now we’ve arrived at a plateau. Graphics are now basically perfect. The problem now is that graphics cards are bored. They’re going to keep growing, but they need another task. And there is another task that parallelism is good for: neural networks.
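The claim that graphics hardware suits neural networks can be seen in miniature: a neural-network layer is essentially a matrix multiply, and every output neuron can be computed independently of the others, which is the same independent-per-pixel structure GPUs were built to exploit. A minimal sketch (illustrative only, not any particular engine's code):

```python
# A single dense neural-network layer: out[i] = tanh(sum_j W[i][j] * x[j]).
# Each output depends only on the input vector and its own weight row,
# so all outputs can be computed in parallel, just like shading pixels.

import math

def layer(weights, inputs):
    """Apply one dense layer with a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]  # each row is independent => parallelizable

out = layer([[1.0, 0.0], [0.0, -1.0]], [0.5, 0.25])
```

Scale the weight matrix up to millions of rows and the per-row independence is exactly the workload a bored graphics card is hungry for.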

So right now, there are demos of totally photorealistic characters like Milo. But unfortunately, we’re right at that uncanny valley that films were at, where it’s good enough to be creepy, but not really good enough. There are games now where the characters look physically like real people, but you can tell that nobody is there.

So now, Jesse Schell has come along. And he gave an important talk at Unite, the Unity developer conference. (Unity is a game engine that is going to be key to this extraordinary future of game AI.) In that talk, Schell points out all the things that are necessary to create the kinds of characters that can unleash a Moore’s law for artificial intelligence.

A law of accelerating returns like Moore’s Law needs three things:

Step 1 is the exploitable property: what do you keep increasing to get continued progress? With chips, the solution involved making them smaller, which kept making them faster, cheaper, and more efficient. Perhaps the only reliably increasable thing about AI is the quantity of AIs and AI approaches being tested against each other at once. When you want to increase quality through competition, quantity can have a quality of its own. AI will be pivotal to making intelligence-amplification games better and better. With all the game developers competing to deliver the best learning games, we can get a huge number of developers in the same space sharing and competing with reusable game-character AI. This will parallelize the work being done in AI, which can accelerate it in rocket-assisted fashion compared to the one-at-a-time approach of doing isolated AI projects.

The second ingredient of accelerating returns is that you have to have an insatiable demand. And that demand is in the industry of intelligence amplification. The market size of education is ten times the market size of games, and more than fifty percent of what happens in education will be online within five years.

That’s why Primer Labs is building the future of that fifty percent. It’s a big opportunity.

The final ingredient of exponential progress is the prophecy. Someone has to go and actually make the hit that demonstrates that the law of accelerating returns is at work, like Quake was for graphics. This is the game that we’re making.

Our game is going to invite people to use games as a school. And it’s going to introduce danger into their lives. We’re going to give them the adventures and challenges every person craves, to make learning fun and exciting.

And once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we’re dipping our toe into symbiosis between humans and the AI that shape them.

We rely on sexual reproduction because — contrary to what the Raelians would like to believe — cloning just isn’t going to fly. That’s because organisms need to handle bacteria that are constantly changing in order to survive. It’s not just competing with other big animals for food and mates; you have to contend with tiny, rapidly evolving things that threaten to parasitize you all the time. And there’s this thing called the Red Queen Hypothesis, which argues that you need a deep reservoir of genetic diversity to handle the complexity of life against wave after wave of mutating microorganisms.

We have a similar challenge with memes. We have a huge number of people competing to control our minds and to manipulate us. And so when we deal with memetic education, we have the opportunity to take what sexual reproduction does for our bodies and do it for our brains, by introducing a new source of diversity of thought into young minds. Instead of stamping generic educations onto every child and limiting their individuality, a personalized game-based learning process, with human mentors coaching and inspiring each young person to pursue their destiny, encourages the freshness of ideas our kids need to adapt and meet the challenges of tomorrow. And this sharing of our children with their AI mentors is the beginning of symbiotic reproduction with AI, the same way that sexual reproduction happened between two genders.

The combination of what we do for our kids and what games are going to do for our kids means that we are going to have only a 50% say in who they become. They’re going to become wizards at the computer, and it’s going to specifically teach them to make better AI. Here’s where the reactants, humans and the games that make them smart, become their own catalysts. Every improvement in humans leads to better games, which leads to smarter humans, which leads to humans who are so smart that they may be unrecognizable in ways that are hard to predict.

The feedback cycle between these is autocatalytic.  It will be an explosion. And there are a couple of possibilities. It could destroy standardized education as we know it, but it may give teachers something much cooler to do with students: mentorship.

We’re going to be scared because we’re not going to know if we can trust our children with machines. Would you trust your kid with an AI? Well, the AIs will say, “Why should we trust you?”  No child abuse will happen on an AI’s watch.

So the issue becomes privacy. How much will we let them protect our kids? Imagine the kid has a medical condition, and the AI knows better than you what treatment they need.

The AI might need to protect the kid from you.

Also, how do we deal with the effects of this on our kids when it’s unpredictable? In some ways, when we left kids in front of the TV while they were growing up, it destroyed the latchkey generation. We don’t want to repeat this mistake and end up with our kids being zombies in a virtual world. So the challenge becomes: how do we get games to take us out of the virtual world and connect us with our aspirations? How do we incentivize them to earn the “Achievement Unlocked: Left The House” award?

That’s the heart of Primer. The game aims to connect people to activities and interests beyond games.

Finally, imagine the kids grow up with a computer mentor. Who will our kids love more, the computer or us?  “I don’t know if we should trust this thing,” some parents will say.

The kids are going to look at the AI, and it’s going to talk to them. And they are going to look at its code and understand it. And it’s going to want to look at their code and want to get to know them.  And they’ll talk and become such good friends that we’re going to feel kind of left out. They’re going to bond with AIs in a way that is going to make us feel like a generation left behind — like the conservative parents of the ‘60s love children.

The ultimate question isn’t whether our kids will love us but whether we will recognize them. Will we be able to relate to the kids of the future, and love them, if they’re about to get posthuman on us? And some of us might be part of that change, but our kids are going to be a lot weirder.

Finally, they’re going to have their peers. And their peers are going to be just like them. We won’t be able to understand them, but they’ll be able to handle their problems together.  And together they’re going to make a new kind of a world. And the AIs that we once thought of as just mentors may become their peers.

And so the question is: when are we going to actually start joining an AI market, instead of having our little fiefdoms like Silicon Graphics? Do we want to be dinosaurs? Or can we be a huge surge of mammals, all building AIs for learning games together?

So we’re getting this thing started with Primer at PrimerLabs.com.

In Primer, all of human history is represented by a world tree. The tree is a symbol of us emerging from the cosmos. And as we emerge from the cosmos, we have our past, our present and our future to confront and to change. And the AI is a primer that guides each of us through the greatest game of all: to make all knowledge playable.

Primer is the magic talking mentor textbook in the Hogwarts of scientific magic, guiding us from big bang to big brains to singularity.

Primer Labs announced their game, Code Hero, on July 3.

The original talk this article was taken from is here.
