ACCELER8OR

Oct 23 2012

Not Sci Fi. Sci NOW!


As the Walrus said to the oysters, the time has come to talk of many things.

To understand why I hold the views I do, you must first understand that my choices and views are shaped by the future that I see is coming, and without understanding that future, it is impossible to truly see why I support some issues on the right, some on the left, some in the middle, etc. So, this article is an attempt to explain, in a brief overview fashion, what I see coming down the road, and which I think far too many people are completely unaware of.

To begin, I am not a liberal, a conservative, a libertarian, a communist, a socialist, or any other political leaning. If I must be labeled, I would say I am a Humanitarian first, and a Transhumanist second.

Humanitarianism: In its most general form, humanitarianism is an ethic of kindness, benevolence and sympathy extended universally and impartially to all human beings. Humanitarianism has been an evolving concept historically but universality is a common element in its evolution. No distinction is to be made in the face of human suffering or abuse on grounds of tribal, caste, religious or national divisions.

Transhumanism: An international intellectual and cultural movement supporting the use of science and technology to improve human mental and physical characteristics and capacities. The movement regards aspects of the human condition, such as disability, suffering, disease, aging, and involuntary death as unnecessary and undesirable. Transhumanists look to biotechnologies and other emerging technologies for these purposes. Dangers, as well as benefits, are also of concern to the transhumanist movement.

As such I would have to say I am a Transhumanist because I am a Humanitarian.

So, what precisely does that have to do with the future? It means I take the long view of most everything, because I believe there is a significant probability that I will be around to face the consequences of short-sighted actions in the present. But it also means that I can look at some problems which are long term and see that the solutions to them are not yet available, but have a high likelihood of existing before the problem becomes a crisis. This includes such “catastrophic” issues as “Global Warming”, “Overpopulation” and in fact, most “Crisis” politics. Many of these issues are almost impossible to address with current technological capabilities, but will be much easier to address with technologies that are currently “in the lab”.

However, it also means I spend a lot of time researching exactly what the future is likely to bring, so that I can make determinations on which problems are immediate, short term or long term, and whether or not practical solutions exist now, or must wait until we have developed a little further.

But primarily, what that research has shown me is that most people are utterly unaware of just what the future is going to bring. Most people see a future just like today, with differences only of degree. They see the future of Star Trek, or of too many other TV shows, where humanity still has to face the exact same problems as it does today on a social level, with fancier trimmings.

Yet such a future is utter fantasy.  Our future is going to change things on a scale undreamt of by most humans, because it is a change not of scale, but of kind.

Humanity, as we know it, is going to cease to exist.

If you are unfamiliar with the concepts of Artificial Intelligence, Nanotechnology, Quantum Computing, Cybernetics, and Bioengineering, you need to educate yourself in them, and soon, because they will have a much larger impact on us than who is president, whether or not global warming is happening, or even whether or not healthcare reform is passed.

And before you dismiss any of those topics as flights of fantasy, you should be aware of the truth. If you want a quick brief overview, check out Next Big Future, Acceler8or, Gizmag, IO9, IEET, or Wired and spend a few hours reading through the various links and stories. This is not Sci-Fi, it is Sci-now.

Within the next twenty to fifty years, and possibly even within the next decade, humanity is going to face the largest identity crisis ever known.  We are going to find that things we have always taken for granted as unchangeable are indeed matters of choice. It’s already started.

At this exact moment, you are reading this on the internet. As such you have already entered into the realm of Transhumanism. You are free to choose what sex you wish to present yourself as, free to be whichever race you want to be, free even to choose what species you wish to present yourself as. You could be a Vulcan, an Orc, even a cartoon character from South Park. Every aspect of who you are comes down to your personal choice. You may choose to present yourself as you are, or you may present yourself as something else entirely.

That same choice is going to be coming to humanity outside the internet as well. Our medical technology, understanding of our biology, and ability to manipulate the body on finer and finer scales is advancing at an exponential rate. It will not be much longer before everyone has the ability to change everything about their physical body to match their idealized selves.

How will racists be able to cope with the concept that race is a choice? Or sexists deal with people switching genders on a whim? How will people feel when in vitro fertilization and an artificial womb can allow two genetic males to have a child, or for one to become female and have one via old fashioned pregnancy?

And yet that is just the barest tip of the iceberg, for not only will we be able to reshape ourselves into our idealized human form, we will also eventually have the ability to add and subtract features of other creatures as well. Not everyone will choose to be “human”. There will be elves and aliens, cat girls and lion men. We are already on the verge of nearly perfect human limb replacement; within a decade it is highly likely that we will be able to replace damaged nerves with electronic equivalents to control artificial limbs that mimic not only the full range of human motion but, with the creation of artificial muscles, do so in a completely natural manner. It is but one step from creating an artificial replacement to making an artificial addition.

And there will be those who choose such additions, or who may even choose to replace their natural parts with enhanced cybernetic parts. We will have to face the very real fact of humans with far greater than current human physical ability, and even those with abilities no current human has, such as flight using their own wings.

Imagine a football game with someone who can leap the length of the field, or throw a Hail Mary a mile. Is that someone we would call “human” today? Yet they will be the human of tomorrow.

But even that is just the barest hint of the future, because there is so much more that is happening as well. Since you are sitting here, reading this, I know you are already participating in another tenet of Transhumanism, mental augmentation. You use your computer to collect knowledge, to research and educate yourself, to improve your personal knowledge base by using it as an extended intelligence tool. I know quite well that most of you also use it for your primary news source, your main way of keeping yourself aware of what is happening in the world.

You also use it for entertainment, to watch videos, to game, to read, to discuss, and even to keep in touch with your friends and families.

It already is a mental augmentation device. And that function will only grow. Your cell phone is becoming more and more of an accessory to your computer every day. In less than ten years it is likely to become your primary computer, with your desktop communicating with it and becoming simply an extension. There is already an advanced cellphone in the lab that is subdermal, meaning it is implanted under your skin, is powered by your own body sugars, and is invisible when not in use. Contact lenses with computer displays that use body heat for power are also at the prototype stage. Eventually you will be connected to your computer every second of the day, and using it to augment your life in ways I doubt most people will even be able to imagine. And once the ability to connect the human mind directly to this intelligence augmentation device allows us to use it with a mere thought, can you really call such a person “human” as we currently define it?

And yet again, that is simply the merest hint of the possibilities, because in addition to all this computerization and cybernetics, you have to face the reality that we will soon be able to control matter at the atomic scale. And that is something that very, very few people have any real grasp of.

Nanotechnology is not a pipe dream. Anyone who tells you it is, is either indulging in denial or is sadly misinformed. If you want proof that nanoscale machinery is possible, simply look in a mirror. You are the finest proof that nanotechnology works. DNA is the most versatile molecular machine in existence that we are aware of, and it is with DNA that we are developing the earliest stages of true Molecular Engineering.

And with Molecular Engineering, almost everything we take for granted right now is going to change. I won’t go into the pages and pages of description of what complete control of matter on the molecular scale can do, but suffice it to say that nothing in our history has prepared us to cope with this ability. We will be able to make food on the kitchen counter, build a car that is indestructible but can fold into a handy briefcase, and create just about everything you have seen in any sci-fi show ever. With nanotechnology we can permanently end hunger and poverty, and even clean up the environment.

If you truly wish to get a bare minimal grasp of the scope of the possible read Engines of Creation by K. Eric Drexler. While his vision of nanotech’s foundation is based on pure mechanical engineering, it is nonetheless one of the best introductions to the subject I know. We are developing this ability as we speak, as any of you who bothered to check out the recommended reading list would be able to see.

And that brings us to the next topic, Artificial Intelligence. I am not speaking here of the kind of AI that you are familiar with from Hollywood, but of something called Artificial General Intelligence. This is something far different. AGI is the kind of program that can drive your car, cook your food, clean your house, diagnose your illnesses, operate on your brain, and yes, even do your job better, faster, and more reliably than you can. AGI is AI which has absolutely no need to be self-aware, conscious, or even thinking. AGI is what runs chess computers. Any skill that can be taught can be accomplished by AGI. IBM’s Watson is an example of this future, a machine able to learn to become an expert on any given subject and enable non-experts to have that expertise available on demand.

So be prepared people.  You will be replaced by a machine eventually.

And yet with Nanotechnology capable of ensuring our every physical need is met, Cybertechnology giving us superhuman abilities, and Bioengineering enabling us to be exactly who and what we want to be, is that really such a bad thing?

So I will at last come to the final technology which will make our future far different than what has come before. Indefinite Life Extension.

If you are alive today, you need to seriously contemplate the fact that you may not merely have a long life, but that your life may not even have a definite end. You may be alive, healthy, and in the best physical shape possible a thousand years from now. The younger you are, the greater the possibility.

You may have to face the very real likelihood that aging, death by natural causes, and every disease that currently afflicts mankind may be overcome within the next 30 to 60 years. It might even happen as soon as tomorrow. You may never die unless you have an accident, or commit suicide. And even that is just the simplest scenario. With the possibility of up to the nanosecond backups of your brain’s synaptic patterns and electrical impulses, dying might simply become as permanent as it is in a video game.

Humanity, as we currently know it, is going to cease to exist.

And most of us will not even notice it happening until it is well underway; indeed, most people are unaware that it is happening RIGHT NOW.

And this is the future, in the tiniest snippets of hints of what I truly foresee, that guides my thoughts and actions. A future which is so very, radically, unimaginably different that no-one can even truly begin to envision it. It becomes a blank wall beyond which we cannot see, because we do not even have the concepts to understand what is beyond the wall.

So think about these questions. Think about the reality we will have to face, and understand, you will have to come to terms with this. You can’t keep your head in the sand forever and you can’t comfort yourself by thinking it is decades down the road. It’s here, it’s now, and it’s in your face.

And if anything is certain, it is this: You are not prepared.

Jun 17 2012

The John Henry Fallacy


If you are familiar with American folklore, you probably recall the story of John Henry. He was a steel driver in the early days of the Industrial Revolution. If you don’t know what that means, it basically means he drove steel wedges into rock to cut through it for railroads. John Henry was supposedly the best of them, and is famous for the tale of his competition against an early steam drill. He won against this prototype, barely, and it cost him his life. This story is often used as an allegory of the “Man vs. Machine” meme, in which we are presented a choice – either Man or Machine – without any other options. In these arguments, the author is generally proposing to eliminate the machine in favor of the man, and advocates abandoning technology or imposing limits on it.

Indeed, even one of the few books which I would consider positive on the subject of technological advancement, Martin Ford’s The Lights in the Tunnel, frequently falls into this dualistic view: that man is in competition with machine, and that this competition will inevitably be won by the machine. In a recent blog post he links to numerous articles showing the ongoing replacement of humans in the workplace by machines. In another recent post he shows examples of how many businesses are reaching a point where it is impossible for them to keep human workers and remain competitive. If we accept that the John Henry options, man or machine, are the only two that exist, then it looks very much like man is losing, and losing badly.

Yet I titled this article as I did precisely because this “choice” is a complete falsehood based on an underlying assumption: that the economy will always be one of scarcity. In an economy of scarcity, the assumption that individual humans need to compete against each other for scarce natural resources, and that this requires them to have “jobs” in order to acquire the means to survive, makes such a “choice” seem inevitable. If “machines” win, “humanity” loses. Everywhere you turn, machines are taking away human jobs, replacing humans in the workforce in ever greater numbers, and invading jobs which once only humans could perform, from doing basic science research, to preparing legal paperwork, to financial trading, and even medical diagnostics. It’s a bleak prospect for the overwhelming majority of humanity about to be rendered “obsolete” by the scarcity economy. Looked at from this perspective, it’s a possible future that makes William Gibson’s “cyberpunk future” look positively rosy. For a rather dark and disturbing look at the possibilities, Marshall Brain’s “Manna” is a highly recommended start.

There’s just one huge, gigantic, impossible to overlook flaw in this logic. “The Market” exists only so long as “consumers” exist to “purchase” goods and services. Without people to supply a demand, it doesn’t matter how much supply exists. A completely automated system of production will destroy the economy of scarcity by creating a mode in which supply becomes effectively infinite, and demand becomes so easily met that it can no longer be “sold” and thus becomes essentially “free”. For all the logical errors I could point out in the first part of Manna, Brain’s view of the possibilities full automation could bring about is just the tiniest tip of the iceberg.

The dichotomy presented by the John Henry choice is not merely false; it blinds us to the reality that we want the machines to win. As I pointed out in Adding Our Way to Abundance, the 3D printing revolution is going to force the costs of manufacturing to plunge to below rock bottom. With the addition of robotic “resource gatherers” that can mine, refine, and process natural resources, and robotic drone delivery systems, the need for a human element in the supply chain vanishes, leaving only the demand side. With supply able to meet demand at effectively zero cost, the only remaining “jobs” left to humanity will be in creating “new” demand. Because until we create true AI, all of those machines will ultimately have one single purpose: to give humanity what it wants, because only humanity can have “desires” for those machines to meet.

So like John Henry, fighting the machines is the worst possible choice. If we “win”, we will only lose.

Jul 14 2011

Optimist Author Mark Stevenson Is Trippin’… Through The Tech Revolution


“The oddest thing I did was attend an underwater cabinet meeting in the Maldives.”

Mark Stevenson’s An Optimist’s Tour of the Future is a rare treat — an upbeat tour visiting major shakers behind all the technologies in transhumanism’s bag of tricks — written by a quippie (a culturally hip person who uses amusing quips to liven up his or her narrative). Stevenson trips through visits to genetic engineers, roboticists, nanotechnology enthusiasts, longevity seekers, independent space explorers and more, among them names you’ll recognize like Ray Kurzweil, Aubrey de Grey, Eric Drexler and Dick Rutan.

I interviewed him via email.

RU SIRIUS:  Were you an optimist growing up?

MARK STEVENSON:  No, not especially – although I was always trying new things. For most of my childhood I was convinced I was going to be a songwriter for a living.

RU: What made you look forward to the future?

MS: I think that’s a natural thing that humans do. Time is a road. Those who don’t pay attention to the road tend to crash. A better question is: what stops people looking to the future? One reason is because the story we hear about the future is so rubbish. I mean think about it. If I recall the story of the future I’ve been used to hearing since I was born pretty much it goes something like this: “The future is not going to be very good (especially if you vote for that guy), it was better in the old days, you’ve got to look after yourself, the world is violent and unsafe, your job is at risk, your boss is an idiot, your employees are lazy, the generation below you are feral and dangerous, things are changing too fast and you can’t trust those scientists/ new-agers/ left wingers/ right wingers /religious people /atheists /the rich /the poor /what you eat /your neighbor. You are alone. Make the best of it. Vote for me. Buy my paper. I understand.” It’s hardly inspiring, is it?

RU:  As you’ve promoted the book, have you run into arguments or questions that challenge optimistic views?  What’s the most important argument or question?

MS:  I’m not intrinsically optimistic about the future; I’m not an optimist by disposition. I’d say I’m a possibilist – which is to say, it’s certainly possible that we’ll have a much better future, but it’s also certainly possible that we’ll have a really rubbish one. The thing that’s going to move that in one direction or another will be how all of our interactions in the march of history nudge us. One thing I do know is, if you can’t imagine a better future, you’re certainly not going to make it happen. It’s like going into a job interview thinking about how you’re not going to get it. You just won’t get the job. The biggest problem I have is semantic. As soon as you associate yourself with the word “optimism” some people will instantly dismiss you as a wishful thinker who really hasn’t understood the grand challenges we face. As a result, I constantly have to battle against a lazy characterization of my views that suggest I am some kind of Pollyanna in rose-tinted spectacles. My position is simply this: that we should have an unashamed optimism of ambition about our future, and then couple that with our best creative and critical skills to realize those ambitions. Have good dreams – and then work hard to do something about them. It’s obvious stuff but it seems to me that not nearly enough people are saying it these days.

RU:  Since writing the book, what has happened that makes you more optimistic?

MS: That there is a huge hunger for pragmatic change – in fact I’m setting up The League for Pragmatic Optimists to help catalyze this. Also I’m being asked to help organizations re-imagine themselves. That’s challenging and hopeful. The corporation is one of the biggest levers we have for positive change.

RU:  Less optimistic?

MS: When we talk about innovation we easily reference technology, medicine – or we might talk about innovation in music, dance, fashion. But we rarely talk about institutional innovation, and nowhere is this more apparent than in government. Almost every prime minister or president at some point early into their first term of government gives a rousing and highly ironic speech about how they wish to promote innovation. But isn’t it strange that while governments (and many corporations it has to be said) so often talk about stimulating innovation they themselves don’t change the way they work. When we introduced parliamentary democracy in the 1700s it was a massive innovation, a leap forward. Yet here we are, 300 years later and I get to vote once every four years for two people, both of whom I disagree with to run an archaic system that cannot keep up with the pace of change. To quote Einstein,  “We can’t solve problems we’ve got by using the same kind of thinking we used when we created them.” It’s why I now dedicate much of my life helping institutions change the way they think about their place in the world and the way they operate.

RU:  Among the technologies you explore, we can include biotech, AI and nanotech.  In which of these disciplines do you most see the future already present?  In other words, whether it’s in terms of actual worthwhile or productive activities or in terms of stuff that’s far along in the labs, where can you best catch a glimpse of the future?

MS:  To quote William Gibson: “The future is here. It’s just not widely distributed yet.” So, synthetic biology is already in use, and has been for a while. If you’re diabetic, it’s almost certain your insulin supply is produced by E. coli bacteria whose genome has been tinkered with. The list of nanotechnology-based consumer products already available numbers in the thousands, including computer memory and microprocessors, numerous cleaning products, antimicrobial bandages, anti-odour socks, toothpaste, air filters, sunscreen, kitchenware, fabric softeners, pregnancy tests, cosmetics, stain resistant clothing and pet furniture, long-wearing paint, bed-ware, guitar strings that stay sounding fresh thanks to a nano-coating and (it seems to me) a disproportionate number of hair straightening devices. It looks set to underpin revolutions in energy production, medicine and sanitation. Already we’re seeing it increase the efficiency of solar cells and herald cheap water desalinization/purification technology. In fact, the Toffler Institute predicts that this will “solve the growing need for drinkable water, significantly reducing global conflict between water-starved nation-states.” In short, nanotech can take the ‘Water War’ off the table.

When it comes to AI I’m going to quote maverick robot designer Rodney Brooks (formerly of MIT): “There’s this stupid myth out there that AI has failed, but AI is everywhere around you every second of the day. People just don’t notice it. You’ve got AI systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an AI scheduling system. Every time you play a video game, you’re playing against an AI system.”

What I think is more important to pay attention to is how all these disciplines are blurring together sometimes creating hyper-exponential growth. If you look at progress in genome sequencing for example — itself an interplay of infotech, nanotech and biotech — it’s outstripping Moore’s Law by a factor of four.

RU: What would you say was the oddest or most “science fictional” scene you visited or conversation you had during the course of your “tour”?

MS: The most “science fictional” was meeting the sociable robots at MIT’s Personal Robotics Group. Get onto YouTube and search for “Leo Robot” or “Nexi Robot” and you’ll see what I mean. Talking of robots, check out video of Boston Dynamics’ “BigDog” too.

The oddest thing I did was attend an underwater cabinet meeting in the Maldives – the idea of the first elected president of the nation, Mohamed Nasheed. (I was one of only four people not in the government or the support team allowed in the water.) As we swam back to the shore I found myself swimming next to the president. His head turned my way and I must have looked startled, because he made the underwater hand signal for “Are you okay?” I signalled back to assure him I was, because there is no hand signal for “Bloody hell! I’m at an underwater cabinet meeting in the Maldives! How cool is that?!”

RU: Many of our readers are transhumanists.  What course of action would you recommend toward creating a desirable future?

MS: During my journey I spoke to a man called Mark Bedau, a philosopher and ethicist who said: “Change will happen and we can either try to influence it in a constructive way, or we can try to stop it from happening, or we can ignore it. Trying to stop it from happening is, I think, futile. Ignoring it seems irresponsible.”
This then, I believe, is everybody’s job: to try to influence change in a constructive way. The first way you do that is to get rid of your own cynicism. Cynicism is like smoking. It may look cool but it’s really bad for you, and worse still, it’s really bad for everyone around you. Cynicism is an institution of the mind that’s just as damaging as anything our governments or our employers can do to us.

I also like something a man called Dick Rutan told me when I visited the Mojave Space Port. He’s arguably the world’s finest aviator, most famous for flying around the world nonstop on one tank of gas. He’s seventy years old and still test piloting high-performance aircraft, and he told me: “Never look at a limitation as something you ever comply with. Never. Only look at it as an opportunity for greatness.”

RU: Your book is pretty funny… and you’ve been a stand up comedian.  What’s the funniest thing about the future?

MS: My next book, obviously!

Jul 10 2011

From Gamification to Intelligence Amplification to The Singularity


“Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling. It was accelerating so fast that Nvidia started calling it Moore’s law cubed.”

The following article was edited by R.U. Sirius and Alex Peake from a lecture Peake gave at the December 2010 Humanity+ Conference at the Beckman Institute in Pasadena, California. The original title was “Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion.”

I’ve been thinking about the combination of artificial intelligence and intelligence amplification and specifically the symbiosis of these two things.

And the question that comes up is what happens when we make machines make us make them make us into them?

There are three different Moore’s Laws of accelerating returns. There are three uncanny valleys that are being crossed. There’s a sort of coming-of-age story for humanity and for different technologies. There are two different species involved, us and the technology, and there are a number of high-stakes questions that arise.

We could be right in the middle of an autocatalytic reaction and not know it. What is an autocatalytic reaction? An autocatalytic reaction is one in which the products of the reactions are the catalysts. So, as the reaction progresses, it accelerates and increases the rate of reaction.  Many autocatalytic reactions are very slow at first. One of the best known autocatalytic reactions is life.   And as I said, we could be right in the middle of one of these right now, and unlike a viral curve that spreads overnight, we might not even notice this as it ramps up.
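The deceptive slow start is easy to see numerically. Here is a minimal sketch (my own illustration, not from the lecture) of an autocatalytic reaction A + B → 2B, integrated with a simple Euler loop: because the product B appears in its own rate law, almost nothing seems to happen for a long stretch, and then the conversion takes off.

```python
def simulate_autocatalysis(a0=1.0, b0=1e-6, k=10.0, dt=0.01, steps=200):
    """Euler-integrate da/dt = -k*a*b, db/dt = +k*a*b.

    A is the reactant, B the product; B catalyzes its own formation,
    so the reaction rate grows as B accumulates.
    """
    a, b = a0, b0
    history = [b]
    for _ in range(steps):
        rate = k * a * b      # the product b appears in its own rate law
        a -= rate * dt
        b += rate * dt
        history.append(b)
    return history

h = simulate_autocatalysis()
# A quarter of the way through the run, B is still under 1% of its
# final value; by the end, nearly all of A has been converted.
```

The resulting curve is the logistic S-curve: invisible for most of its early life, explosive in the middle, which is exactly why such a process can ramp up without being noticed.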

There are two specific processes that I think are auto-catalyzing right now.

The first is strong AI. Here we have a situation where we don’t have strong AI yet, but we definitely have people aiming at it.  And there are two types of projects aiming toward advanced AI. One type says, “Well, we are going to have machines that learn things.” The other says, “We are going to have machines that’ll learn much more than just a few narrow things. They are going to become like us.”

And we’re all familiar with the widely prevalent method for predicting when this might be possible, which is by measuring the accelerating growth in the power of computer hardware. But we can’t graph when the software will exist to exploit this hardware’s theoretical capabilities. So some critics of the projected timeline towards the creation of human-level AI have said that the challenge arises not in the predictable rise of the hardware, but in the unpredictable solving of the software challenges.

One of the reasons that what we might broadly call the singularity project has difficulty solving some of these problems is that, although there’s a ton of money being thrown at certain forms of AI, it goes to military AIs or other types of AI with a narrow purpose. And even if these projects claim that they’re aimed at Artificial General Intelligence (AGI), they won’t necessarily lead to the kinds of AIs that we would like or that are going to be like us. The popular image of a powerful narrow-purpose AI developed for military purposes would, of course, be the T-800, otherwise known as the Terminator.

The Terminator possibility, or “unfriendly AI outcome,” wherein we get an advanced military AI, is not something that we look forward to. It’s basically the story of two different species that don’t get along.

Either way, we can see that AI is the next logical step.

But there’s a friendly AI hypothesis in which the AI does not kill us. It becomes us.
And if we actually merge with our technology — if we become family rather than competition — it could lead to some really cool outcomes.

And this leads us to the second thing that I think is auto-catalyzing: strong intelligence amplification.

We are all Intelligence amplification users.

Every information technology is intelligence amplification. The internet — and all the tools that we use to learn and grow — they are all tools for intelligence amplification. But there’s a big difference between having Google at your fingertips to amplify your ability to answer some questions and having a complete redefinition of the way that human brains are shaped and grow.

In The Diamond Age, Neal Stephenson posits the rise of molecular manufacturing. In that novel we get replicators, descendants of today’s MakerBot, so we can say “Earl Grey, hot”… and there we have it. We’re theoretically on the way to this sort of nanotech. And it should change everything. But there’s a catch.

In one of the Star Trek movies, Jean-Luc Picard is asked, “How much does this ship cost?” And he says, “Well, we no longer use money. Instead, we work to better ourselves and the rest of humanity.” Before his questioner can ask him how that works, the Borg attack. So the answer as to how that would look is glossed over.

Having had a chance to contemplate the implications of nanotechnology for a few decades (since the publication of Eric Drexler’s Engines of Creation), we understand that it may not lead to a Trekkie utopia. The Diamond Age points out one reason why. People may not want to make Earl Grey tea and appreciate the finer things in life. They might go into spoiled-brat mode and replicate Brawndo in a Brave New World or Fahrenheit 451. We could end up with a sort of wealthy Idiocracy amusing itself to death.

In Diamond Age, the human race splits into two types of people. There are your Thetes, which is an old Greek term. They’re the rowers and laborers and, in Diamond Age, they evolve into a state of total relativism and total freedom.

A lot of the things we cherish today lead to thete lifestyles and they result in us ultimately destroying ourselves. Stephenson posits an alternative: tribes.  And, in Diamond Age, the most successful tribe is the neo-Victorians.  The thetes resent them and call them “vickies.”  The big idea there was that what really matters in a post-scarcity economic world is not your economic status (what you have) but the intelligence that goes into who you are, who you know, and who will trust you.

And so the essence of tribalism involves building a culture that has a shared striving for excellence and an infrastructure for education that other tribes not only admire but seek out.  And they want to join your tribe. And that’s what makes you the most powerful tribe. That’s what gives you your status.

So, in Diamond Age, the “vickie” schools become their competitive advantage. After all, a nanotech society needs smart people who can deal with the technological issues.  So how do you teach nanotechnology to eighth graders? Well, you have to radically, aggressively approach not only teaching the technology but the cohesion and the manners and values that will make the society successful.

But the problem is that this has a trap. You may get a perfect education system.  And if you have a perfectly round, smooth, inescapable educational path shaping the minds of youths, you’re likely to get a kind of conformity that couldn’t invent the very technologies that made the nanotech age possible. The perfect children may grow up to all be “yes men.”

So one of the characters in Diamond Age sees his granddaughter falling into this trap and says, “Not on my watch.” So he invents something that will develop human minds as well as the nanotech age developed physical wealth. He invents “A Young Lady’s Illustrated Primer.” And the purpose of the Illustrated Primer is to solve the problem: on a mass scale, how do you shape each individual person to be free rather than the same?

Making physical stuff cheap and free is easy.  Making a person independent and free is a bigger challenge.  In Diamond Age, the tool for this is a fairy tale book.

The child is given the book and, for them, it unfolds an opportunity to decide who they’re going to be — it’s personalized to them.

And this primer actually leads to the question: once you have the mind open wide and can put almost anything in there, how should you make the mind? What should you give them as content that will lead to their pursuit of true happiness and not merely ignorant contentment?

The neo-Victorians embody conformity and the Thetes embody nonconformity. But Stephenson indicates that to teach someone to be subversive in this context, you have to teach them something other than those extremes.

You have to teach them subtlety.  And subtlety is a very elusive quality to teach.  But it’s potentially the biggest challenge that humanity faces as we face some really dangerous choices.

During the space race, JFK said, about the space program, that to do this – to make these technologies that don’t exist and go to the moon and so forth — we have to be bold. But we can’t just go boldly into strong AI or boldly go into strong nanotech. We have to go subtly.

I have my own educational, personal developmental narrative in association with a technology that we’ve boldly gone for: 3dfx.

As a teenager, my mom taught me about art and my dad taught me how to invent stuff. And, at some point, they realized that they could only teach me half of what I needed to learn. In a changing world, I also needed a non-human mentor. So my mom introduced me to the Mac. She bought the SE/30 because it had a floating point unit and she was told that would be good for doing science. Because that’s what I was interested in! I nodded and smiled until I was left alone with the thing so I could get down to playing games. But science snuck in on me: I started playing SimCity and I learned about civil engineering.

The Mac introduced me to games.  And when I started playing SimLife, I learned about how genes and alleles can be shaped and how you could create new life forms. And I started to want to make things in my computer.

I started out making art to make art, but I wasn’t satisfied with static pictures. So I realized that I wanted to make games and things that did stuff.

I was really into fantasy games. Fantasy games made me wish the world really was magic. You know, “I wish I could go to Hogwarts and cast magic spells.”  But the reality was that you can try to cast spells, it’s just that no matter how old and impressive the book you get magic out of happens to be, spells don’t work.

What the computer taught me was that there was real muggle magic.  It consisted of magic words. And the key was that to learn it, you had to open your mind to the computer and let the computer change you in its image. So I was trying to discover science and programming because my computer taught me. And once you had the computer inside of your mind, you could change the computer in your image to do what you wanted. It had its own teaching system. In a way, it was already the primer.
So then I got a PowerBook.  And when I took it to school, the teachers took one look at what I was doing and said, “We don’t know what to do with this kid!” So they said “you need a new mentor” and they sent me to meet Dr. Dude.

I kid you not. That wasn’t the actual name on his office door or his nameplate, but it’s what he was known as.

Dr. Dude took a look at my Mac and said, “That’s really cute, but if you’re in university level science you have to meet Unix.” So I introduced myself to Unix.

Around that time, Jurassic Park came out. It blew people away with its graphics. And it had something that looked really familiar in the movie. As the girl says in the scene where she hacks the computer system, “It’s a UNIX system! I know this!”

I was using Unix at the university, and I noticed that you could actually spot the Silicon Graphics logo in the movie. Silicon Graphics was the top dog in computer graphics at that time. But it was also a dinosaur. SGI servers that were literally bigger than a person were rendering movies, while I could only do the simplest graphics stuff on my little PowerBook. But Silicon Graphics was about to suffer the same fate as the dinosaurs.

At that time, there was very little real-time texture mapping, if any. Silicon Graphics machines rendered things with really weird faked shadows. They bragged that there was a Z-buffer in some of the machines. It was a special feature.
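For readers who haven’t met it, the Z-buffer those machines bragged about is a simple idea: keep a depth value for every pixel, and only draw a fragment if it’s closer than what’s already stored there. Here is a minimal illustrative sketch with a toy framebuffer; this is the general technique, not how any actual SGI pipeline was coded.

```python
# Toy Z-buffer: per-pixel depth values gate which fragment "wins."
WIDTH, HEIGHT = 4, 4
FAR = float("inf")

zbuffer = [[FAR] * WIDTH for _ in range(HEIGHT)]       # depths, start at infinity
framebuffer = [["."] * WIDTH for _ in range(HEIGHT)]   # colors, start empty

def draw_fragment(x, y, depth, color):
    """Draw only if this fragment is nearer than the stored depth."""
    if depth < zbuffer[y][x]:
        zbuffer[y][x] = depth
        framebuffer[y][x] = color

draw_fragment(1, 1, depth=5.0, color="A")  # far surface drawn first
draw_fragment(1, 1, depth=2.0, color="B")  # nearer surface overwrites it
draw_fragment(1, 1, depth=9.0, color="C")  # farther surface is rejected

print(framebuffer[1][1])  # prints "B"
```

Real hardware does this comparison per fragment during rasterization; the point is that hidden-surface removal collapses to a single depth test per pixel, which is why having it in silicon was worth bragging about.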

This wasn’t really a platform that could do photorealistic real-time graphics, because academics and film industry people didn’t care about that. They wanted to make movies, because that was where the money was. And just as with military AI, graphics technology built for making movies doesn’t get us where we want to go.

Well, after a while we reached a wall. We hit the uncanny valley, and the characters started to look creepy instead of awesome. We started to miss the old days of real special effects. The absolute low point for these graphics was the monkey chase scene in Indiana Jones and the Kingdom of the Crystal Skull.

Moviegoers actually stopped wanting the movies to have better graphics. We started to miss good stories. Movie graphics had made it big, but the future was elsewhere. The future of graphics wasn’t in Silicon Graphics; it was in the tiny, rodent-sized PC, which was nothing compared to the SGI but had this killer app called Doom. And Doom was a perfect name for this game, because it doomed the previous era of big-tech graphics. And the big-tech graphics people laughed at it. They’d make fun of it: “That’s not real graphics. That’s 2.5D.” But, do you know what? It was a lot cooler than any of the graphics on the SGI, because it was real-time and fun.

Well, it led to Quake. And you could call it an earthquake for SGI. But it was more like an asteroid, because Quake delivered a market that was big enough to motivate people to make hardware for it. And when the 3dfx graphics card arrived, it turned Quake‘s pixelated 3D dungeons into lush, smoothly lit, textured, photorealistic worlds. Finally, you started to get completely 3D-accelerated graphics, and big-iron graphics machines became obsolete overnight.

Within a few years 3dfx was more than doubling the power of graphics every year, and here’s why. SGI made OpenGL. And it was their undoing, because it not only enabled prettier ways to kill people, which brought the guys to the yard; it also enabled beautiful and curvy characters like Lara Croft, which really brought the boys to the yard, and also girls who were excited to finally have characters that they could identify with, even if they were kind of Barbies (which is, sadly, still prevalent in the industry). The idea of characters, and really character-driven games, drove graphics cards, and soon the effects were amazing.

Now, instead of just 256 megs of memory, you had 256 graphics processors.
Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling; graphics performance was accelerating so fast that Nvidia started calling it Moore’s law cubed. In fact, while Moore’s law was in trouble because of the limits of what one processor could do, GPUs were exploiting parallelism.
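To make the “cubed” quip concrete, here is a back-of-the-envelope sketch with purely illustrative numbers (the label was marketing shorthand, not a measured law): if parallelism lets you double three times as often over the same period, the total speedup is the plain Moore’s-law speedup cubed.

```python
# Illustrative arithmetic only: compare transistor-style doubling
# (roughly one doubling every two years) with a curve that doubles
# three times as often, as the "Moore's law cubed" slogan suggests.

def growth(doublings_per_year, years):
    """Total speedup after `years` at a given doubling rate."""
    return 2 ** (doublings_per_year * years)

moore = growth(0.5, 10)        # one doubling every two years -> 32x
moore_cubed = growth(1.5, 10)  # three times the doubling rate -> 32,768x

print(moore)        # 32.0
print(moore_cubed)  # 32768.0, which is exactly 32.0 ** 3
```

Tripling the doubling rate multiplies the exponent by three, and multiplying an exponent by three cubes the result, which is the whole arithmetic content of the slogan.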

In other words, when they made the Pentium into the Pentium II, they couldn’t actually give you two of them, with that much more performance. They could only pretend to give you two by putting one in a big fancy dress and making it slightly better. But 3dfx went from the original Voodoo to the Voodoo2, which had three processors on each card, and a pair of cards could be doubled up into six processors.

The graphics became photorealistic. So now we’ve arrived at a plateau. Graphics are now basically perfect. The problem now is that graphics cards are bored.  They’re going to keep growing but they need another task. And there is another task that parallelism is good for — neural networks.

So right now, there are demos of totally photorealistic characters like Milo. But unfortunately, we’re right at that uncanny valley that films were at, where it’s good enough to be creepy, but not really good enough.  There are games now where the characters look physically like real people, but you can tell that nobody is there.
So now, Jesse Schell has come along. And he gave this important talk  at Unite, the Unity developer conference. (Unity is a game engine that is going to be the key to this extraordinary future of game AI.) And in this talk, Schell points out all the things that are necessary to create the kinds of characters that can unleash a Moore’s law for artificial intelligence.

A law of accelerating returns like Moore’s Law needs three things:

Step 1 is the exploitable property: what do you keep increasing to get continued progress? With chips, the solution involved making them smaller, which kept making them faster and cheaper and more efficient. Perhaps the only reliably increasable thing about AI is the quantity of AIs and AI approaches being tested against each other at once. When you want to increase quality through competition, quantity can have a quality of its own. AI will be pivotal to making intelligence-amplification games better and better. With all the game developers competing to deliver the best learning games, we can get a huge number of developers in the same space sharing and competing with reusable game-character AI. This will parallelize the work being done in AI, which can accelerate it in a rocket-assisted fashion compared to the one-at-a-time approach of isolated AI projects.

The second ingredient of accelerating returns is that you have to have an insatiable demand. And that demand is in the industry of intelligence amplification. The market size of education is ten times the market size of games, and more than fifty percent of what happens in education will be online within five years.

That’s why Primer Labs is building the future of that fifty percent. It’s a big opportunity.

The final ingredient of exponential progress is the prophecy. Someone has to go and actually make the hit that demonstrates that the law of accelerating is at work, like Quake was to graphics. This is the game that we’re making.

Our game is going to invite people to use games as a school. And it’s going to introduce danger into their lives. We’re going to give them the adventures and challenges every person craves, to make learning fun and exciting.

And once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we’re dipping our toe into symbiosis between humans and the AI that shape them.

We rely on sexual reproduction because, contrary to what the Raelians would like to believe, cloning just isn’t going to fly. That’s because organisms need to handle bacteria that are constantly changing in order to survive. It’s not just about competing with other big animals for food and mates; you have to contend with tiny, rapidly evolving things that threaten to parasitize you all the time. And there’s this thing called the Red Queen Hypothesis, which argues that you need the constant genetic reshuffling of sex to handle the complexity of life against wave after wave of mutating microorganisms.

We have a similar challenge with memes. We have a huge number of people competing to control our minds and to manipulate us. And so when we deal with memetic education, we have the opportunity to take what sexual reproduction does for our bodies and do it for our brains by introducing a new source of diversity of thought into young minds. Instead of stamping generic educations onto every child and limiting their individuality, a personalized game-based learning process, with human mentors coaching and inspiring each young person to pursue their destiny, encourages the freshness of ideas our kids need to adapt and meet the challenges of tomorrow. And this sharing of our children with their AI mentors is the beginning of symbiotic reproduction with AI, the same way that sexual reproduction happened between two genders.

The combination of what we do for our kids and what games are going to do for our kids means that we are going to have only a 50% say in who they become. They’re going to become wizards at the computer, and it’s going to specifically teach them to make better AI. Here’s where the reactants, humans and the games that make them smart, become their own catalysts. Every improvement in humans leads to better games, which leads to smarter humans, which leads to humans so smart that they may be unrecognizable in ways that are hard to predict.

The feedback cycle between these is autocatalytic.  It will be an explosion. And there are a couple of possibilities. It could destroy standardized education as we know it, but it may give teachers something much cooler to do with students: mentorship.

We’re going to be scared because we’re not going to know if we can trust our children with machines. Would you trust your kid with an AI? Well, the AIs will say, “Why should we trust you?”  No child abuse will happen on an AI’s watch.

So the issue becomes privacy. How much will we let them protect our kids? Imagine the kid has a medical condition and the AI knows better than you what treatment to give them.

The AI might need to protect the kid from you.

Also, how do we deal with the effects of this on our kids when it’s unpredictable? In some ways, when we left kids in front of the TV while they were growing up, it destroyed the latchkey generation. We don’t want to repeat this mistake and end up with our kids being zombies in a virtual world. So the challenge becomes: how do we get games to take us out of the virtual world and connect us with our aspirations? How do we incentivize them to earn the “Achievement Unlocked: Left The House” awards?
That’s the heart of Primer. The game aims to connect people to activities and interests beyond games.

Finally, imagine the kids grow up with a computer mentor. Who will our kids love more, the computer or us?  “I don’t know if we should trust this thing,” some parents will say.

The kids are going to look at the AI, and it’s going to talk to them. And they are going to look at its code and understand it. And it’s going to want to look at their code and get to know them. And they’ll talk and become such good friends that we’re going to feel kind of left out. They’re going to bond with AIs in a way that is going to make us feel like a generation left behind, like the conservative parents of the ’60s love children.

The ultimate question isn’t whether our kids will love us but whether we will recognize them. Will we be able to relate to the kids of the future, and love them, if they’re about to get posthuman on us? Some of us might be part of that change, but our kids are going to be a lot weirder.

Finally, they’re going to have their peers. And their peers are going to be just like them. We won’t be able to understand them, but they’ll be able to handle their problems together.  And together they’re going to make a new kind of a world. And the AIs that we once thought of as just mentors may become their peers.

And so the question is: when are we going to actually start joining an AI market, instead of having our little fiefdoms like Silicon Graphics? Do we want to be dinosaurs? Or can we be a huge surge of mammals, all building AIs for learning games together?
So we’re getting this thing started with Primer at PrimerLabs.com.

In Primer, all of human history is represented by a world tree. The tree is a symbol of us emerging from the cosmos. And as we emerge from the cosmos, we have our past, our present and our future to confront and to change. And the AI is a primer that guides each of us through the greatest game of all: to make all knowledge playable.

Primer is the magic talking mentor textbook in the Hogwarts of scientific magic, guiding us  from big bang to big brains to singularity.

Primer Labs announced their game, Code Hero, on July 3.

The original talk this article was taken from is here.

Jun 23 2011

I Am A Mechanical Man: Robocops & Robowars


“Now, to some extent, we’re all Part Man, Part Machine, All Cop.”

Some movies ought to be left alone. Not because they’re no longer relevant… but because they’re too relevant. José Padilha’s planned 2013 reboot of Paul Verhoeven’s 1987 masterwork Robocop is one such transgression of cinematic and historical decency. In 1987, Robocop was science fiction. Now, it’s the nightly news. One wonders what a Robocop reboot would have to say about a world that’s now a lot closer to the original movie than we might like to admit.

Robocop was a profoundly humanist film. It was Dutch director Paul Verhoeven’s satire of American corporate culture, as he would later parody American imperialism with Starship Troopers — though the point of both movies was largely lost on American audiences easily distracted by the tongue-in-cheek hyperviolence. It was about Detroit as a microcosm of America. It was about American industry — both blue and white collar — becoming outmoded. It was about an Alvin Toffler Third Wave world in which cops, criminals and governments alike are just branches of corporations; corporations that fuel inner city chaos and wars of imperial expansion in order to keep the bottom line up. Robocop was about a world only slightly less commodified than our own — as tagged by the film’s running catch phrase, “I’d buy that for a dollar!”

Set in an exaggerated version of the Reagan/Thatcher era, much of the film’s narrative fascination came from observing a corporate, cybernetic police state, considered to be a science fiction parody of the then-current political climate, but science fiction nonetheless. A quarter century and two Bushes later, this is no longer the case.

Now, to some extent, we’re all Part Man, Part Machine, All Cop. Though we may not be physically grafted to machines (yet), we are welded to them in every other possible way, fused to them in consciousness, dependent on them not only to support or enhance almost every part of our existences but also to uphold an increasingly restrictive social order. We live in a corporate military state in which wars are conducted by robotics, in which Predator drones patrol our far-off imperial holdings and we patrol ourselves through the voluntary surveillance system called Facebook.

We are completely enmeshed and interwoven with technology, both as consumer and producer — reduced to being subjects of the narrative of “high tech” in which there is no longer a split between human and machine, but rather a split between “human machine” and “machine machine,” like the split between Robocop and his nemesis, the ED209 walking tank. Now humanity is not something that maintains opposition to “machine” but something that is performed within the context of “machine.” Some machines are considered human (for instance, Apple products) and some are not (Microsoft products), and we are only ever as human as the electronic experiences we choose to consume. Our social identities are subsets of these machines — a carefully cultivated Google trail; a mask worn within the mainframe.

Now, the corporatized police of Robocop seem prophetically accurate — quaint even. In a 2009 TED talk, the Brookings Institution’s P. W. Singer revealed that there are 5,300 unmanned air drones and 12,000 unmanned ground systems currently deployed in the Middle East by the United States military. These numbers are projected to skyrocket in coming years — by 2015, more than half of the army will be robotic. And that’s only the U.S. — 43 countries are currently working on military robots.

The soldier of the near future will look a lot like Robocop — consider DARPA and Raytheon’s combat exoskeleton prototypes. The ED209 isn’t that different from U.S. military robots already in development or deployment like the BigDog rough-terrain robot, much publicized on the Internet, as well as lesser-known tank or pack robots like the ACER, MATILDA, TALON, MARV and MAUD, and many others. Or Japanese company Sakakibara Kikai’s Landwalker, which looks pretty much exactly like ED209. ED209 short-circuits at the beginning of the film and accidentally kills a corporate lackey. This, too, is now something that has occurred. In his TED talk, Singer describes a South African anti-aircraft cannon that had a “software glitch” and killed nine soldiers. Singer calls this “unmanned slaughter,” conducted by machines that are unable to comprehend the idea of “war crime.” Even ED209 squeals like a recognizable form of life when vanquished. However, Predator and Reaper drones are completely silent, providing no warning before they strike.

We have robots in the air — unmanned drones; the newly completed Anubis assassination micro-drone. We have robots in space — the recently launched, classified X-37B plane. And we have a whole host of other current or projected future weapons seemingly culled from 1980s science fiction films — spiderweb armor, liquid armor, invisibility cloaks, drones made to look like insects.

These are not merely efficient, emotionless killing machines. They are also instruments of psychological terror. They are the new face of the Panopticon — as Jeremy Bentham’s design demonstrated (to the great detriment of everybody ever since, as it has become a model on which our culture is to some extent based), those who are made to think they are being watched are just as controlled as those who actually are being watched.

“We have them thinking that we can track them anywhere,” a former top CIA operations official recently told the Washington Post, referring to the psychological tactic of leading Taliban to believe that tracking devices for Predator drones could be everywhere and in anything. “That we’ve got devices in their cars, their houses, everywhere. They’re so afraid to stay in their houses at night they’re digging foxholes to sleep in.”

These machines are the implements of casual genocide. They are antithetical to human life, a betrayal of humanity, as they are a way to further remove the act of killing from anything that might be able to find remorse in doing so. Indeed, no one will even be able to find any meaning at all, even flat-out hatred, which would still be a human emotional response. Robotic war will be war conducted by spreadsheets. And, ultimately, such machines will hold no allegiance to any country, as they will be quickly copied by or even sold to the highest bidder.

This is where questions must be raised about the responsibility and power not only of arms manufacturers and their comrades, but also of science fiction writers and directors. Over the preceding decades, we have fetishized the machine. Art has concerned itself with the shock of new technology; with the process of becoming cybernetic. Artists have become spectators at the surgery, providing running commentary as we wait to see whether our culture will accept or reject its implants. Yet artists are more than just observers, reporters, and commentators. They are also creators. The narrative of robotic war, begun in science fiction and made real by defense contracts, might be seen, from a certain angle, as the progression of a single thing manifesting over time. Though art may be the play-acting of an idea, it can also, to some extent, be the testing of an idea — and if successful in its simulation of reality, can all too easily become reality.

On the other hand, counter-narratives to “technological progress” prove just as appalling.  The complete rejection of science represented by the Sarah Palins of the world is almost inconceivably brutal dehumanization — a complete subjugation to a reactionary, patriarchal, anti-woman, anti-human “god” — every bit as frightening as the narrative of cyborg hypercapitalism.

In A Cyborg Manifesto (1991), Donna Haraway wrote, “From one perspective, a cyborg world is about the final imposition of a grid of control on the planet, about the final abstraction embodied in a Star Wars apocalypse waged in the name of defense, about the final appropriation of women’s bodies in a masculinist orgy of war… From another perspective, a cyborg world might be about lived social and bodily realities in which people are not afraid of their joint kinship with animals and machines, not afraid of permanently partial identities and contradictory standpoints. The political struggle is to see from both perspectives at once because each reveals both dominations and possibilities unimaginable from the other vantage point.”

What would the real cybernetic shock be now? The grafting of more machine parts into our lives or the grafting of more human parts? Our lives are almost unthinkable without Internet connections, or without the oil brought home for us by the machines of war. To withdraw from either would be a far more potentially fatal shock to the system than the implantation of actual wetware cybernetics. An augmented reality optical chip, for instance, would only help facilitate our current condition, and would likely become socially enforced within certain economic brackets, just as smart phones were.

Can we create a non-alienated cybernetic world? Can we even begin to conceive of what that would look like? We can’t undo the past, but we can change the script of the future before it is acted out. Perhaps the challenge lies in finding new narratives that, instead of reacting against high technology, effectively reorient it towards serving human life — and humane values — instead of destroying them.

The Luddite back-to-the-land ethos of the early environmental movement has given way in recent decades to a vision of a more integrated future. Our most viable version of a livable future is the Green Cyborg in which technology and humanity meet halfway and start caretaking rather than dominating the Earth’s natural resources. This should be framed not as a return to neolithic, matriarchal values but as a forward synthesis of industrial technology and holistic thinking. This requires a simple shift in perspective from observing the world as a jumble of disconnected parts to observing it as an integrated system in which each part affects every other. It is a shift from seeing the world as parts in competition with each other to seeing it as parts striving for an emergent state of co-operative efficiency.

A liveable future lies not in a wholesale rejection of the cyborg process of becoming welded to high technology, but in remembering that we are already cyborgs — that we are already inseparably connected not only to each other, but to everything on the planet, including even the worst parts of postindustrial society and its byproducts and side-effects.

The challenges of this century will be cyborg ones. They will be challenges of synthesis — of discovering how to achieve balance within systems. We will work to establish an ever-evolving cybernetic balance within a frontierless, privacy-free, boundary-free, pluralistic world. This is not a New Age band-aid in which the easy answer is to simply realize that we are all one. Realizing that we are all parts of a single system is only the first step in effectively coping with and implementing that realization — work that may require more time than we have, yet which we must accomplish nonetheless. It is nothing less than the firm establishment and protection of our humanity and humaneness against all affronts to it; nothing less than remembering that we must use our tools properly lest we be used by them.

Robocop can’t be remade because it’s no longer the story of one comic book hero — it’s the story of all of us, left scratching our heads after the operation, struggling to integrate, hoping to one day remember what life was once like, left with the daily task of making sense and meaning of a mechanized world from which the only escape is that which we build from the scrapheap.
