ACCELER8OR

Oct 18 2011

“Extreme Futurist Fest” in Los Angeles: Interview With Creator Rachel Haywire


Hank Pellissier: Hi Rachel. Tell me your biography?

Rachel Haywire: I grew up in the Human 1.0 suburbs of Southern Florida. I was kicked out of my home at 16 and sent to a mental institution. From there I went to live on the streets of San Francisco and became a performance artist. This led to me becoming a writer, blogger, musician, model, social commentator, memetic engineer, and entrepreneur. I’ve traveled across all of the United States and most of Canada. I went to Israel for my Birthright trip and lived in Berlin and Dresden for 3 months to study abroad. I’ve also been to Amsterdam and Brussels while following my favorite band Einstürzende Neubauten. I’d love to go to Paris since it’s the capital of Bohemia, but I think I would need to learn some French first. My father was a prosecutor for the state of Miami who passed away when I was 18. My mother was a posh social hacker who worked her way into the Jewish Mensa crowd. I always thought Jewish people were too intelligent to be into Creationism. I currently live in Los Angeles.

Hank Pellissier:  How did H+ happen to you?

Rachel Haywire: I started writing Acidexia in 2001… My intro to H+ was Nietzsche, William Gibson, and Robert Anton Wilson; then I got into the tech and science aspects of H+ out of a desire to improve my body, which had physical problems associated with Asperger’s Syndrome. Then my interest in mind uploading and biohacking developed, since I was already into body modification and radical self-alteration. Then Open Source DNA brought everything full circle. I’m a DIY Transhumanist due to my non-conventional approach to the movement.

Hank Pellissier: What do you call your fashion sense?  

Rachel Haywire: Cyberpunk-Glam. Fashion is very important because DIY Transhumanism includes becoming our ideal versions of ourselves. Our Tyler Durdens. Forget about Cosplay. It’s time for us to become our own Superheroes and the first way for us to do this is through fashion.

Hank Pellissier: Would you like it if Natasha Vita-More was your mother?  What if Ray Kurzweil was your father and Aubrey de Grey was your uncle?  

Rachel Haywire: If Natasha Vita-More was my mother I’d ask her to do a photo shoot with me. She would dress up like an angry cyberpunk and I would dress up like a fancy academic. We would parody the stereotypical media images of ourselves through one another and I’d hope for it to be a mother-daughter bonding experience that she wouldn’t kill me for. If Ray Kurzweil was my father and Aubrey de Grey was my uncle we would obviously need a Transhumanist Family BBQ. I would call it the Singularity is Beer.

Hank Pellissier: Are you stepping up to lead a younger generation of H+ers?

Rachel Haywire: I suppose I am… but it is the younger generation of H+ers that allow this movement to exist. I am only one person. Without my friends and supporters there would be no younger H+ generation.

Hank Pellissier: Tell me about the Extreme Futurist Fest?  

Rachel Haywire: You can check out http://extremefuturistfest.info where we just announced our first list of speakers and the conference venue at the Courtyard Los Angeles Marina del Rey. It is taking place from December 16th to 17th. The website was designed by my friend Sniff Code, who is also the author of the cyberpunk classic CLONE. We plan to have scientists discussing all things Transhumanist alongside visually oriented Futurist bands, hackers, and philosophers screening their films and displaying their artwork. We wish to bridge the gap between the counterculture and academia and show that what unites us is our intelligence and forward-thinking approach as opposed to our level of economic or social status. I have partnered with Michael Anissimov of the Singularity Institute for the Extreme Futurist Festival and he has been a great person to work with all around. Through working with Michael, I feel like my ideas have finally reached the mainstream. He helped me get to this point without having to obey or conform.

Hank Pellissier: You’re also running a Facebook page called “Humanity 2.0.” What’s that about?

Rachel Haywire: I got the idea for the Human 2.0 Council through leaving Transhuman Separatism.  I was very reactionary during the time I started Transhuman Separatism and quickly realized I was making a fool out of myself with my juvenile idealism.  The Human 2.0 Council was a way for me to continue to connect artists and radical thinkers of the new generation while leaving the baggage of Transhuman Separatism behind. Our discussions range from nanotechnology to the viability of the Singularity to the Anonymous subculture to industrial music. There is a bit of everything in H20, which is why I love it. Our main goal right now is the H20 Ministry of Education, which my friend Kim Solez is the leader of. Our idea is to create a real-life Xavier’s School for the Gifted. We want an alternative academic institution that caters to the interests of Human 2.0 as opposed to the interests of public education. We have many professors who are already on board and are very excited about what this could mean for the future of education. The main problem right now is our lack of funding. Many of us are struggling artists, and we view what I call the poverty of the working-class intelligentsia as a major obstacle to achieving our goals.

Hank Pellissier: What are your global goals?  

Rachel Haywire: My dream is for a world in which human suffering is abolished. David Pearce was a big inspiration to me with his Abolitionist movement. I would like to change society by bringing the newer generation of Transhumanists onto the map and showing that a counterculture of intelligent people is not an oxymoron. I want to see technology widely available to the youth. I want to see an end to groupthink and an explosion of free thought. I would like to see the bankers on Wall Street lose their power and be replaced with powerful thinkers and innovators who would be much better equipped to be the 1%.

 

Sep 16 2011

Where’s The Desperate Joy?


The grand old website Slate has been featuring a debate about transhumanism (human enhancement). Since it is the sort of site that is read by the “intelligentsia,” a lot of writers and “opinion leaders” are becoming cognizant of the need to look at how technology seems to be moving us inexorably toward a confrontation with our potential to alter ourselves in radical ways.  (As a side note, we have reached such a decentralized cultural overflow that probably the only people who think serious intellectuals are “opinion makers” are those striving to be “opinion makers” themselves, a life option that is probably being foreclosed by technological changes just as the option to become a college professor is being foreclosed by political ones.  Anyway, the reality is that the actual influential opinion makers in our time are braying lunatics and halfwits who have massive audiences that serious intellectuals pay no attention to.)

While the discussion is interesting and entertaining enough, what is missing for me is a sense of desperation.  To wit: civilization has evolved technologically in the philosophical shadow of Malthus, and it has come to be broadly understood by most intelligent observers that a technologically static humanity will reap apocalyptic results from population growth.  It may seem a bit of a leap to assume that the pursuit of human self-enhancement falls into the same category as the need for technologies for human sustenance, but I’m quite certain that it does, both in terms of real practical developments (think of nanotechnology as one of the great hopes for clean energy and hyperlongevity) and in terms of the spirit of the age (the Space Age defining a sort of optimism that energized people as opposed to, say, the Pinched Mean Shriveled Age in which people resent having to save the life of a poor person in need of medical care, ad infinitum).

I’m not going to unpack my entire argument on this lovely Friday afternoon, but I do think an expansive human species is a humane and generous species.  This may not necessarily be always manifest today in the transhumanist discourse, but it is a Zeitgeist Spirit that we caught a glimmer of in the 1990s and that we may yet see rise again, technology and weather permitting.

Jul 22 2011

Is The Singularity Near Or Far? It’s A Software Problem


When I first read The Singularity is Near by Kurzweil, it struck me that something seemed curiously “missing” from his predictions. At the time, I merely put it on the back burner as a question that needed more data to answer. Well, recently, it’s been brought up again by David Linden in his article “The Singularity is Far”.

What’s missing is a clear connection between “complete understanding of the mechanics of the brain” and how this “enables uploading and Matrix level VR.” As David points out, merely knowing how the brain functions at the mechanical level, even if we know how each and every atom and molecule behaves, and where every single neuron goes, does not equal the ability to reprogram the brain at will to create VR, nor does it necessarily translate into the ability to “upload” a consciousness to a computer.

I tend to agree with David that Ray’s timeline might be overly optimistic, though for completely different reasons. Why? Because software does not equal hardware!

David discusses a variety of technical hurdles that would need to be overcome by nanomachines in order to function as Kurzweil describes, but these are all really engineering issues that will be solved in one manner or another. We may or may not actually see them fixed by the timeline Kurzweil predicts, but with the advances we are making with stem cells, biological programming of single-cell organisms, and even graphene-based electronics, I don’t doubt that we will find a means to non-destructively explore the brain, and even to interface with some basic functions. I also see many possible ways to provide immersive VR without ever having to achieve the kind of technology Ray predicts. I don’t even doubt that we’ll be able to interface with a variety of “cybernetic” devices via thought alone, including the creation of artificial limbs which can be wired into the nervous system and provide sensory data like “touch.”

But knowing how to replicate a signal from a nerve and knowing precisely what that signal means to that individual might not be the same thing. Every human brain has a distinct synaptic map and distinct signaling patterns. I’m not as confident that merely knowing the structure of a brain will enable us to translate its patterns of electrical impulses as easily as Kurzweil seems to think. We might learn how to send signals to devices long before we learn how to send signals back from those devices in a way that enables “two way” communication beyond simple motor control, much less complete replication of consciousness or the complete control of inputs needed for “matrix VR.” Those capabilities could lag a mere mechanical reproduction of a human brain in simulation by a long time.

Does my perception of Green equal yours? Is there a distinct “firing pattern” that is identical among all humans that translates as “green”, or does every human have a distinct “signature” which would make “green” for me show up as “pink” for you? Will there be distinct signals that must be “decoded” for each and every single individual, or does every human conform to one of who knows how many “synaptic signal groups”? Can a machine “read minds” or would a machine fine tuned to me receive only gibberish if you tried to use it?

The human mind is adaptable. We’ve already proven that it can adapt to different points of view in VR, and even adapt to use previously unknown abilities, like a robotic “third arm.” The question is whether this adaptability will enable us to use highly sophisticated BCIs even though those BCIs cannot actually “read” our thoughts: we may simply learn methods to send signals the machine can understand, while our “minds” remain black boxes, impenetrable to the machine despite all our knowledge of the brain’s hardware.

This is the question I think Ray glosses over. Mere simulation of the hardware alone might not even begin to be the “hard problem” that will slow uploading. I don’t doubt we will eventually find an answer, but to do so, we first have to ask the question, and it’s one I don’t think Ray’s asked.

Jul 14 2011

Optimist Author Mark Stevenson Is Trippin’… Through The Tech Revolution


“The oddest thing I did was attend an underwater cabinet meeting in the Maldives.”

Mark Stevenson’s An Optimist’s Tour of the Future is a rare treat — an upbeat tour visiting major shakers behind all the technologies in transhumanism’s bag of tricks — written by a quippie (a culturally hip person who uses amusing quips to liven up his or her narrative).  Stevenson trips through visits to genetic engineers, roboticists, nanotechnology enthusiasts, longevity seekers, independent space explorers and more, among them names you’ll recognize like Ray Kurzweil, Aubrey de Grey, Eric Drexler and Dick Rutan.

I interviewed him via email.

RU SIRIUS:  Were you an optimist growing up?

MARK STEVENSON:  No, not especially – although I was always trying new things. For most of my childhood I was convinced I was going to be a songwriter for a living.

RU: What made you look forward to the future?

MS: I think that’s a natural thing that humans do. Time is a road. Those who don’t pay attention to the road tend to crash. A better question is: what stops people looking to the future? One reason is because the story we hear about the future is so rubbish. I mean think about it. If I recall the story of the future I’ve been used to hearing since I was born pretty much it goes something like this: “The future is not going to be very good (especially if you vote for that guy), it was better in the old days, you’ve got to look after yourself, the world is violent and unsafe, your job is at risk, your boss is an idiot, your employees are lazy, the generation below you are feral and dangerous, things are changing too fast and you can’t trust those scientists/ new-agers/ left wingers/ right wingers /religious people /atheists /the rich /the poor /what you eat /your neighbor. You are alone. Make the best of it. Vote for me. Buy my paper. I understand.” It’s hardly inspiring, is it?

RU:  As you’ve promoted the book, have you run into arguments or questions that challenge optimistic views?  What’s the most important argument or question?

MS:  I’m not intrinsically optimistic about the future; I’m not an optimist by disposition. I’d say I’m a possibilist – which is to say, it’s certainly possible that we’ll have a much better future, but it’s also certainly possible that we’ll have a really rubbish one. The thing that’s going to move that in one direction or another will be how all of our interactions in the march of history nudge us. One thing I do know is, if you can’t imagine a better future, you’re certainly not going to make it happen. It’s like going into a job interview thinking about how you’re not going to get it. You just won’t get the job. The biggest problem I have is semantic. As soon as you associate yourself with the word “optimism” some people will instantly dismiss you as a wishful thinker who really hasn’t understood the grand challenges we face. As a result, I constantly have to battle against a lazy characterization of my views that suggests I am some kind of Pollyanna in rose-tinted spectacles. My position is simply this: that we should have an unashamed optimism of ambition about our future, and then couple that with our best creative and critical skills to realize those ambitions. Have good dreams – and then work hard to do something about them. It’s obvious stuff but it seems to me that not nearly enough people are saying it these days.

RU:  Since writing the book, what has happened that makes you more optimistic?

MS: That there is a huge hunger for pragmatic change – in fact I’m setting up The League for Pragmatic Optimists to help catalyze this. Also I’m being asked to help organizations re-imagine themselves. That’s challenging and hopeful. The corporation is one of the biggest levers we have for positive change.

RU:  Less optimistic?

MS: When we talk about innovation we easily reference technology, medicine – or we might talk about innovation in music, dance, fashion. But we rarely talk about institutional innovation, and nowhere is this more apparent than in government. Almost every prime minister or president at some point early into their first term of government gives a rousing and highly ironic speech about how they wish to promote innovation. But isn’t it strange that, while governments (and many corporations, it has to be said) so often talk about stimulating innovation, they themselves don’t change the way they work? When we introduced parliamentary democracy in the 1700s it was a massive innovation, a leap forward. Yet here we are, 300 years later, and I get to vote once every four years for one of two people, both of whom I disagree with, to run an archaic system that cannot keep up with the pace of change. To quote Einstein, “We can’t solve problems we’ve got by using the same kind of thinking we used when we created them.” It’s why I now dedicate much of my life to helping institutions change the way they think about their place in the world and the way they operate.

RU:  Among the technologies you explore, we can include biotech, AI and nanotech.  In which of these disciplines do you most see the future already present?  In other words, whether it’s in terms of actual worthwhile or productive activities or in terms of stuff that’s far along in the labs, where can you best catch a glimpse of the future?

MS:  To quote William Gibson: “The future is here. It’s just not widely distributed yet.” So, synthetic biology is already in use, and has been for a while. If you’re diabetic, it’s almost certain your insulin supply is produced by E. coli bacteria whose genome has been tinkered with. The list of nanotechnology-based consumer products already available numbers in the thousands, including computer memory and microprocessors, numerous cleaning products, antimicrobial bandages, anti-odour socks, toothpaste, air filters, sunscreen, kitchenware, fabric softeners, pregnancy tests, cosmetics, stain resistant clothing and pet furniture, long-wearing paint, bed-ware, guitar strings that stay sounding fresh thanks to a nano-coating and (it seems to me) a disproportionate number of hair straightening devices. It looks set to underpin revolutions in energy production, medicine and sanitation. Already we’re seeing it increase the efficiency of solar cells and herald cheap water desalinization/purification technology. In fact, the Toffler Institute predicts that this will “solve the growing need for drinkable water, significantly reducing global conflict between water-starved nation-states.” In short, nanotech can take the ‘Water War’ off the table.

When it comes to AI I’m going to quote maverick robot designer Rodney Brooks (formerly of MIT): “There’s this stupid myth out there that AI has failed, but AI is everywhere around you every second of the day. People just don’t notice it. You’ve got AI systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an AI scheduling system. Every time you play a video game, you’re playing against an AI system.”

What I think is more important to pay attention to is how all these disciplines are blurring together sometimes creating hyper-exponential growth. If you look at progress in genome sequencing for example — itself an interplay of infotech, nanotech and biotech — it’s outstripping Moore’s Law by a factor of four.
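
That “factor of four” is a claim about exponents, so it compounds dramatically over time. Here is a back-of-envelope sketch of what it implies; the doubling periods used are illustrative assumptions, not figures from the interview:

```python
# If Moore's law doubles price-performance every 24 months (an
# assumed figure), a curve improving four times faster doubles
# every 6 months. Compare total gains over the same stretch.

def improvement(years, doubling_months):
    """Fold improvement after `years` at one doubling per period."""
    return 2 ** (years * 12 / doubling_months)

moore = improvement(6, 24)        # 2^3  = 8x over six years
sequencing = improvement(6, 6)    # 2^12 = 4096x over six years
```

Over the same six years, the faster curve delivers the slower curve's gain raised to the fourth power (8^4 = 4096), which is what "outstripping Moore's Law by a factor of four" cashes out to.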

RU: What would you say was the oddest or most “science fictional” scene you visited or conversation you had during the course of your “tour”?

MS: The most “science fictional” was meeting the sociable robots at MIT’s Personal Robotics Group. Get onto YouTube and search for “Leo Robot” or “Nexi Robot” and you’ll see what I mean. Talking of robots, check out video of Boston Dynamics’ “BigDog” too.

The oddest thing I did was attend an underwater cabinet meeting in the Maldives – the idea of the first elected president of the nation, Mohamed Nasheed. (I was one of only four people allowed in the water who were not in the government or the support team.) As we swam back to the shore I found myself swimming next to the president. His head turned my way and I must have looked startled because he made the underwater hand signal for “Are you okay?” I signalled back to assure him I was because there is no hand signal for “Bloody hell! I’m at an underwater cabinet meeting in the Maldives! How cool is that?!”

RU: Many of our readers are transhumanists.  What course of action would you recommend toward creating a desirable future?

MS: During my journey I spoke to a man called Mark Bedau, a philosopher and ethicist who said: “Change will happen and we can either try to influence it in a constructive way, or we can try to stop it from happening, or we can ignore it. Trying to stop it from happening is, I think, futile. Ignoring it seems irresponsible.”
This then, I believe, is everybody’s job: to try to influence change in a constructive way. The first way you do that is to get rid of your own cynicism. Cynicism is like smoking. It may look cool but it’s really bad for you — and worse still it’s really bad for everyone around you. Cynicism is an institution of the mind that’s just as damaging as anything our governments or our employers can do to us.

I also like something a man called Dick Rutan told me when I visited the Mojave Space Port. He’s arguably the world’s finest aviator, most famous for flying around the world nonstop on one tank of gas. He’s seventy years old and still test piloting high-performance aircraft, and he told me: “Never look at a limitation as something you ever comply with. Never. Only look at it as an opportunity for greatness.”

RU: Your book is pretty funny… and you’ve been a stand up comedian.  What’s the funniest thing about the future?

MS: My next book, obviously!

Jul 10 2011

From Gamification to Intelligence Amplification to The Singularity


“Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling. It was accelerating so fast that Nvidia started calling it Moore’s law cubed.”

The following article was edited by R.U. Sirius and Alex Peake from a lecture Peake gave at the December 2010 Humanity+ Conference at the Beckman Institute in Pasadena, California. The original title was “Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion.”

I’ve been thinking about the combination of artificial intelligence and intelligence amplification and specifically the symbiosis of these two things.

And the question that comes up is what happens when we make machines make us make them make us into them?

There are three different Moore’s Laws of accelerating returns. There are three uncanny valleys that are being crossed.  There’s a sort of coming-of-age story for humanity and for different technologies. There are two different species involved, us and the technology, and there are a number of high-stakes questions that arise.

We could be right in the middle of an autocatalytic reaction and not know it. What is an autocatalytic reaction? An autocatalytic reaction is one in which the products of the reactions are the catalysts. So, as the reaction progresses, it accelerates and increases the rate of reaction.  Many autocatalytic reactions are very slow at first. One of the best known autocatalytic reactions is life.   And as I said, we could be right in the middle of one of these right now, and unlike a viral curve that spreads overnight, we might not even notice this as it ramps up.
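
The dynamics are easy to see in a toy model. Below is a minimal sketch of an autocatalytic reaction A + X → 2X, where the product X catalyzes its own production; the rate constant and concentrations are illustrative assumptions, not figures from the talk:

```python
# Euler integration of the autocatalytic reaction A + X -> 2X:
#   da/dt = -k*a*x,  dx/dt = +k*a*x
# Starting with a trace of product X and plenty of feedstock A.

def simulate(a0=1.0, x0=1e-6, k=10.0, dt=0.01, steps=3000):
    """Return the concentration of X over time."""
    a, x = a0, x0
    history = []
    for _ in range(steps):
        rate = k * a * x   # reaction speeds up as X accumulates
        a -= rate * dt
        x += rate * dt
        history.append(x)
    return history

xs = simulate()
```

Plotting `xs` gives the classic S-curve: a long, nearly flat start, a sudden takeoff, then saturation as the feedstock runs out. That flat start is why an autocatalytic process can be well underway before anyone notices it ramping up.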

There are two specific processes that I think are auto-catalyzing right now.

The first is strong AI. Here we have a situation where we don’t have strong AI yet, but we definitely have people aiming at it.  And there are two types of projects aiming toward advanced AI. One type says, “Well, we are going to have machines that learn things.” The other says, “We are going to have machines that’ll learn much more than just a few narrow things. They are going to become like us.”

And we’re all familiar with the widely prevalent method for predicting when this might be possible, which is by measuring the accelerating growth in the power of computer hardware. But we can’t graph when the software will exist to exploit this hardware’s theoretical capabilities. So some critics of the projected timeline towards the creation of human-level AI have said that the challenge arises not in the predictable rise of the hardware, but in the unpredictable solving of the software challenges.

One of the reasons that what we might broadly call the singularity project has difficulty solving some of these problems is that, although there’s a ton of money being thrown at certain forms of AI, it goes to military AIs or to other types of AI that have a narrow purpose. And even if these projects claim that they’re aimed at Artificial General Intelligence (AGI), they won’t necessarily lead to the kinds of AIs that we would like or that are going to be like us.  The popular image of a powerful narrow-purpose AI developed for military purposes would, of course, be the T-1000, otherwise known as the Terminator.

The terminator possibility, or “unfriendly AI outcome” wherein we get an advanced military AI is not something that we look forward to. It’s basically the story of two different species that don’t get along.

Either way, we can see that AI is the next logical step.

But there’s a friendly AI hypothesis in which the AI does not kill us. It becomes us.
And if we actually merge with our technology — if we become family rather than competition — it could lead to some really cool outcomes.

And this leads us to the second thing that I think is auto-catalyzing: strong intelligence amplification.

We are all Intelligence amplification users.

Every information technology is intelligence amplification.  The internet — and all the tools that we use to learn and grow — they are all tools for intelligence amplification. But there’s a big difference between having Google at your fingertips to amplify your ability to answer some questions and having a complete redefinition of the way that human brains are shaped and grow.

In The Diamond Age, Neal Stephenson posits the rise of molecular manufacturing. In that novel we get replicators, the descendants of today’s MakerBot, so we can say “Earl Grey, hot”… and there we have it.  We’re theoretically on the way to this sort of nanotech. And it should change everything. But there’s a catch.

In one of the Star Trek movies, Jean-Luc Picard is asked, “How much does this ship cost?” And he says, “Well, we no longer use money. Instead, we work to better ourselves and the rest of humanity.” Before the girl can ask him how that works, the Borg attack. So the answer as to how that would look is glossed over.

Having had a chance to contemplate the implications of nanotechnology for a few decades (since the publication of Engines of Creation by Eric Drexler), we understand that it may not lead to a Trekkie utopia. The Diamond Age points out one reason why. People may not want to make Earl Grey tea and appreciate the finer things in life.  They might go into spoiled-brat mode and replicate Brawndo in a Brave New World or Fahrenheit 451. We could end up with a sort of wealthy Idiocracy amusing itself to death.

In Diamond Age, the human race splits into two types of people. There are your Thetes, which is an old Greek term. They’re the rowers and laborers and, in Diamond Age, they evolve into a state of total relativism and total freedom.

A lot of the things we cherish today lead to thete lifestyles and they result in us ultimately destroying ourselves. Stephenson posits an alternative: tribes.  And, in Diamond Age, the most successful tribe is the neo-Victorians.  The thetes resent them and call them “vickies.”  The big idea there was that what really matters in a post-scarcity economic world is not your economic status (what you have) but the intelligence that goes into who you are, who you know, and who will trust you.

And so the essence of tribalism involves building a culture that has a shared striving for excellence and an infrastructure for education that other tribes not only admire but seek out.  And they want to join your tribe. And that’s what makes you the most powerful tribe. That’s what gives you your status.

So, in Diamond Age, the “vickie” schools become their competitive advantage. After all, a nanotech society needs smart people who can deal with the technological issues.  So how do you teach nanotechnology to eighth graders? Well, you have to radically, aggressively approach not only teaching the technology but the cohesion and the manners and values that will make the society successful.

But the problem is that this has a trap. You may get a perfect education system.  And if you have a perfectly round, smooth, inescapable educational path shaping the minds of youths, you’re likely to get a kind of conformity that couldn’t invent the very technologies that made the nanotech age possible. The perfect children may grow up to all be “yes men.”

So one of the characters in Diamond Age sees his granddaughter falling into this trap and says, “Not on my watch.”  So he invents something that will develop human minds as well as the nanotech age developed physical wealth.  He invents “A Young Lady’s Illustrated Primer.”  And the purpose of the illustrated primer is to solve the problem: on a mass scale, how do you shape each individual person to be free rather than the same?

Making physical stuff cheap and free is easy.  Making a person independent and free is a bigger challenge.  In Diamond Age, the tool for this is a fairy tale book.

The child is given the book and, for them, it unfolds an opportunity to decide who they’re going to be — it’s personalized to them.

And this primer actually leads to the question — once you have the mind open wide and you can put almost anything into there; how should you make the mind?  What should you give them as content that will lead to their pursuit of true happiness and not merely ignorant contentment?

The neo-Victorians embody conformity and the Thetes embody nonconformity. But Stephenson indicates that to teach someone to be subversive in this context, you have to teach them something other than those extremes.

You have to teach them subtlety.  And subtlety is a very elusive quality to teach.  But it’s potentially the biggest challenge that humanity faces as we face some really dangerous choices.

During the space race, JFK said of the space program that to do this, to make these technologies that don’t exist and go to the moon and so forth, we have to be bold. But we can’t just go boldly into strong AI or boldly go into strong nanotech. We have to go subtly.

I have my own educational, personal developmental narrative in association with a technology that we’ve boldly gone for — 3dfx.

When I was a teenager, my mom taught me about art and my dad taught me how to invent stuff. And, at some point, they realized that they could only teach me half of what I needed to learn. In the changing world, I also needed a non-human mentor.  So my mom introduced me to the Mac. She bought the SE/30 because it had a floating point unit and she was told that would be good for doing science. Because that’s what I was interested in! I nodded and smiled until I was left alone with the thing so I could get down to playing games. But science snuck in on me: I started playing SimCity and I learned about civil engineering.

The Mac introduced me to games.  And when I started playing SimLife, I learned about how genes and alleles can be shaped and how you could create new life forms. And I started to want to make things in my computer.

I started out making art to make art, but I wasn’t satisfied with static pictures. So I realized that I wanted to make games and things that did stuff.

I was really into fantasy games. Fantasy games made me wish the world really was magic. You know, “I wish I could go to Hogwarts and cast magic spells.” But the reality was that you can try to cast spells; it’s just that no matter how old and impressive the book you get your magic out of happens to be, spells don’t work.

What the computer taught me was that there was real muggle magic. It consisted of magic words. And the key was that to learn it, you had to open your mind to the computer and let it change you in its image. So I discovered science and programming, because my computer taught them to me. And once you had the computer inside your mind, you could change the computer in your image to do what you wanted. It had its own teaching system. In a way, it was already the primer.
So then I got a PowerBook.  And when I took it to school, the teachers took one look at what I was doing and said, “We don’t know what to do with this kid!” So they said “you need a new mentor” and they sent me to meet Dr. Dude.

I kid you not. That wasn’t the actual name on his office door or his nameplate, but it’s what he was known as.

Dr. Dude took a look at my Mac and said, “That’s really cute, but if you’re in university level science you have to meet Unix.” So I introduced myself to Unix.

Around that time, Jurassic Park came out. It blew people away with its graphics. And it had something that looked really familiar in the movie. As the girl says in the scene where she hacks the computer system, “It’s a UNIX system! I know this!”

I was using Unix in the university and I noticed that you could actually spot the Silicon Graphics logo in the movie.  Silicon Graphics was the top dog in computer graphics at that time. But it was also a dinosaur. Here you had SGI servers that were literally bigger than a person rendering movies while I could only do the simplest graphics stuff with my little PowerBook. But Silicon Graphics was about to suffer the same fate as the dinosaurs.

At that time, there was very little real-time texture mapping, if any. Silicon Graphics machines rendered things with really weird faked shadows. They bragged that there was a Z-buffer in some of the machines. It was a special feature.
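The Z-buffer those machines bragged about is a simple idea: keep the nearest depth seen so far at each pixel, and only let a new fragment overwrite the pixel if it is closer to the camera. A minimal sketch of the logic (illustrative only, nothing like SGI’s actual hardware):

```python
# Minimal z-buffer sketch: per-pixel depth test before writing color.
# All names and values here are illustrative, not any real API.

W, H = 4, 4
depth = [[float("inf")] * W for _ in range(H)]   # farthest possible depth
color = [[(0, 0, 0)] * W for _ in range(H)]      # black background

def draw_fragment(x, y, z, rgb):
    """Write a fragment only if it is nearer than what's already stored."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = rgb

draw_fragment(1, 1, 5.0, (255, 0, 0))   # red, far away
draw_fragment(1, 1, 2.0, (0, 0, 255))   # blue, nearer: wins
draw_fragment(1, 1, 9.0, (0, 255, 0))   # green, behind: rejected
print(color[1][1])  # (0, 0, 255)
```

Running this comparison for every pixel of every triangle is exactly the kind of brute-force, per-pixel work that later got baked into cheap dedicated hardware.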

This wasn’t really a platform that could do photorealistic real-time graphics, because academics and film industry people didn’t care about that. They wanted to make movies, because that was where the money was. And just as military AI doesn’t get us where we want to go, graphics built for making movies didn’t either.

Well, after a while we reached a wall. We hit the uncanny valley, and the characters started to look creepy instead of awesome. We started to miss the old days of real special effects. The absolute low point for these graphics was the monkey chase scene in Indiana Jones and the Kingdom of the Crystal Skull.

Moviegoers actually stopped wanting the movies to have better graphics. We started to miss good stories. Movie graphics had made it big, but the future was elsewhere. The future of graphics wasn’t in Silicon Graphics; it was in the tiny, rodent-sized PC that was nothing compared to the SGI but had this killer app called Doom. And Doom was a perfect name for this game, because it doomed the previous era of big tech graphics. And the big tech graphics people laughed at it. They’d make fun of it: “That’s not real graphics. That’s 2.5D.” But, do you know what? It was a lot cooler than any of the graphics on the SGI, because it was real-time and fun.

Well, it led to Quake. And you could call it an earthquake for SGI. But it was more like an asteroid, because Quake delivered a market that was big enough to motivate people to make hardware for it. And when the hardware of the 3dfx graphics card arrived, it turned Quake‘s pixelated 3D dungeons into lush, smoothly lit and textured, photorealistic worlds. Finally, you started to get completely 3D accelerated graphics, and big iron graphics machines became obsolete overnight.

Within a few years 3dfx was more than doubling the power of graphics every year, and here’s why. SGI made OpenGL. And it was their undoing, because it not only enabled prettier ways to kill people, which brought the guys to the yard. It also enabled beautiful and curvy characters like Lara Croft, which really brought the boys to the yard, and also girls who were excited to finally have characters they could identify with, even if those characters were kind of Barbies (which is, sadly, still prevalent in the industry). Characters, and really character-driven games, drove graphics cards, and soon the effects were amazing.

Now, instead of just 256 megs of memory, you had 256 graphics processors.
Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling; graphics were accelerating so fast that Nvidia started calling it Moore’s law cubed. In fact, while Moore’s law was in trouble because of the limits of what one processor could do, GPUs were exploiting parallelism.
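To put rough numbers on that “cubed” phrase (the rates here are assumptions for illustration, not measured figures): if classic Moore’s law doubles performance once every 18 months, then doubling three times in the same window compounds enormously over just a few years.

```python
# Compounding comparison of growth rates (illustrative assumptions only):
# classic Moore's law ~2x per 18 months vs. "Moore's law cubed",
# i.e. three doublings (2**3 = 8x) in the same 18-month window.

def growth(doublings_per_18mo, years):
    """Total speedup after `years`, given doublings per 18-month period."""
    periods = years * 12 / 18          # how many 18-month windows elapsed
    return 2 ** (doublings_per_18mo * periods)

print(growth(1, 3))   # CPU-style: 4x in 3 years
print(growth(3, 3))   # "cubed": 64x in 3 years
```

The gap widens without bound: after six years the assumed rates give 16x versus 4096x.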

In other words, when they made the Pentium into the Pentium II, they couldn’t actually give you two of them, with that much more performance. They could only pretend to, by putting the chip in a big fancy dress and making it slightly better. But 3dfx went from the original Voodoo to the Voodoo2, which had three processors on each card and could be doubled to six by pairing two cards.

The graphics became photorealistic. So now we’ve arrived at a plateau: graphics are basically perfect. The problem now is that graphics cards are bored. They’re going to keep growing, but they need another task. And there is another task that parallelism is good for: neural networks.
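The reason parallelism maps so well onto neural networks: each output neuron of a layer is an independent dot product, so all of them can be computed at the same time. A pure-Python sketch of one fully connected layer (illustrative only; on a GPU every iteration of the loop would run simultaneously):

```python
# Why GPUs suit neural networks: every output neuron of a layer is an
# independent dot product, so the whole layer is embarrassingly parallel.

def neuron(weights, inputs, bias):
    """One neuron = dot product + bias; independent of every other neuron."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def layer(weight_rows, inputs, biases):
    """On a GPU, each iteration of this loop runs at the same time."""
    return [neuron(w, inputs, b) for w, b in zip(weight_rows, biases)]

inputs = [1.0, 2.0]
weights = [[0.5, 0.5],    # neuron 0
           [1.0, -1.0]]   # neuron 1
biases = [0.0, 0.5]
print(layer(weights, inputs, biases))  # [1.5, -0.5]
```

Stack a few hundred such layers-worth of dot products and you have exactly the workload those bored, massively parallel graphics cards were built for.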

So right now, there are demos of totally photorealistic characters like Milo. But unfortunately, we’re right at that uncanny valley that films were at, where it’s good enough to be creepy, but not really good enough.  There are games now where the characters look physically like real people, but you can tell that nobody is there.
So now, Jesse Schell has come along. And he gave this important talk  at Unite, the Unity developer conference. (Unity is a game engine that is going to be the key to this extraordinary future of game AI.) And in this talk, Schell points out all the things that are necessary to create the kinds of characters that can unleash a Moore’s law for artificial intelligence.

A law of accelerating returns like Moore’s Law needs three things:

Step 1 is the exploitable property: what do you keep increasing to get continued progress? With chips, the solution was making them smaller, which kept making them faster and cheaper and more efficient. Perhaps the only reliably increasable thing about AI is the quantity of AIs and AI approaches being tested against each other at once. When you want to increase quality through competition, quantity can have a quality of its own. AI will be pivotal to making intelligence amplification games better and better. With all the game developers competing to deliver the best learning games, we can get a huge number of developers in the same space sharing and competing with reusable game character AI. This will parallelize the work being done in AI, which can accelerate it in a rocket-assisted fashion compared to the one-at-a-time approach of isolated AI projects.
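That “quantity has a quality of its own” idea can be sketched as a toy tournament: many candidate agents compete on a score, the best survive, and mutated copies refill the pool. Everything below is an illustrative stand-in (the agents are just numbers), not anything from Primer Labs:

```python
# Toy competitive selection: a large pool of candidate "agents" is
# repeatedly scored, culled, and refilled with mutated survivors.
# All names and values are illustrative stand-ins.
import random

random.seed(42)

def fitness(agent):
    """Stand-in score: how close the agent's parameter is to a target skill."""
    return -abs(agent - 0.75)

def evolve(population, generations=30, keep=10):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:keep]
        # Refill the pool with mutated copies of the survivors.
        population = survivors + [s + random.gauss(0, 0.05) for s in survivors]
    return max(population, key=fitness)

pool = [random.random() for _ in range(20)]
best = evolve(pool)
print(round(best, 2))  # converges toward the target 0.75
```

The point of the sketch: progress comes from the size of the competing pool, which, unlike the cleverness of any single project, is something you can keep increasing.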

The second ingredient of accelerating returns is that you have to have an insatiable demand. And that demand is in the industry of intelligence amplification. The market size of education is ten times the market size of games, and more than fifty percent of what happens in education will be online within five years.

That’s why Primer Labs is building the future of that fifty percent. It’s a big opportunity.

The final ingredient of exponential progress is the prophecy. Someone has to go and actually make the hit that demonstrates that the law of accelerating returns is at work, the way Quake did for graphics. This is the game that we’re making.

Our game is going to invite people to use games as a school. And it’s going to introduce danger into their lives. We’re going to give them the adventures and challenges every person craves, to make learning fun and exciting.

And once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we’re dipping our toe into symbiosis between humans and the AI that shape them.

We rely on sexual reproduction because, contrary to what the Raelians would like to believe, cloning just isn’t going to fly. That’s because organisms need to handle bacteria that are constantly changing in order to survive. It’s not just that you’re competing with other big animals for food and mates; you have to contend with these tiny, rapidly evolving things that threaten to parasitize you all the time. And there’s this thing called the Red Queen Hypothesis that suggests you need a whole bunch of junk DNA available to handle the complexity of life against wave after wave of mutating microorganisms.

We have a similar challenge with memes. We have a huge number of people competing to control our minds and to manipulate us. And so when we deal with memetic education, we have the opportunity to take what sexual reproduction does for our bodies and do it for our brains, by introducing a new source of diversity of thought into young minds. Instead of stamping a generic education onto every child and limiting their individuality, a personalized game-based learning process, with human mentors coaching and inspiring each young person to pursue their destiny, encourages the freshness of ideas our kids need to adapt and meet the challenges of tomorrow. And this sharing of our children with their AI mentors is the beginning of symbiotic reproduction with AI, the same way sexual reproduction happened between two genders.

The combination of what we do for our kids and what games are going to do for our kids means that we are going to have only a 50% say in who they become. They’re going to become wizards at the computer, and it’s going to specifically teach them to make better AI. Here’s where the reactants, humans and the games that make them smart, become their own catalysts. Every improvement in humans leads to better games, which leads to smarter humans, which leads to humans so smart that they may be unrecognizable in ways that are hard to predict.

The feedback cycle between these is autocatalytic.  It will be an explosion. And there are a couple of possibilities. It could destroy standardized education as we know it, but it may give teachers something much cooler to do with students: mentorship.

We’re going to be scared because we’re not going to know if we can trust our children with machines. Would you trust your kid with an AI? Well, the AIs will say, “Why should we trust you?”  No child abuse will happen on an AI’s watch.

So the issue becomes privacy. How much will we let them protect our kids? Imagine the kid has a medical condition, and the AI knows better than you what treatment to give them.

The AI might need to protect the kid from you.

Also, how do we deal with the effects of this on our kids when it’s unpredictable? In some ways, when we left kids in front of the TV while they were growing up, it damaged the latchkey generation. We don’t want to repeat that mistake and end up with our kids being zombies in a virtual world. So the challenge becomes: how do we get games to take us out of the virtual world and connect us with our aspirations? How do we incentivize them to earn the “Achievement Unlocked: Left The House” awards?
That’s the heart of Primer. The game aims to connect people to activities and interests beyond games.

Finally, imagine the kids grow up with a computer mentor. Who will our kids love more, the computer or us?  “I don’t know if we should trust this thing,” some parents will say.

The kids are going to look at the AI, and it’s going to talk to them. And they are going to look at its code and understand it. And it’s going to want to look at their code and get to know them. And they’ll talk and become such good friends that we’re going to feel kind of left out. They’re going to bond with AIs in a way that is going to make us feel like a generation left behind, like the conservative parents of the ’60s flower children.

The ultimate question isn’t whether our kids will love us, but whether we will recognize them. Will we be able to relate to the kids of the future, and love them, if they’re about to get posthuman on us? Some of us might be part of that change, but our kids are going to be a lot weirder.

Finally, they’re going to have their peers. And their peers are going to be just like them. We won’t be able to understand them, but they’ll be able to handle their problems together.  And together they’re going to make a new kind of a world. And the AIs that we once thought of as just mentors may become their peers.

And so the question is: when are we going to actually start joining an AI market, instead of having our little fiefdoms like Silicon Graphics? Do we want to be dinosaurs? Or can we be a huge surge of mammals, all building AIs for learning games together?
So we’re getting this thing started with Primer at PrimerLabs.com.

In Primer, all of human history is represented by a world tree. The tree is a symbol of us emerging from the cosmos. And as we emerge from the cosmos, we have our past, our present and our future to confront and to change. And the AI is a primer that guides each of us through the greatest game of all: to make all knowledge playable.

Primer is the magic talking mentor textbook in the Hogwarts of scientific magic, guiding us  from big bang to big brains to singularity.

Primer Labs announced their game, Code Hero, on July 3.

The original talk this article was taken from is here.

Jun 02 2011

Because the present is too much stress; because the past is too much pain… it’s pedal to the metal until we get somewhere else.


We have been metamorphosed from a mad body dancing madly on a hillside into a pair of eyes staring in the dark.

-Jim Morrison.

Back in the early days of the automobile (that revolutionary “Personal Transporter” that changed everything), it often took a bit of time for the thing to really get going.

You’d hear a horrible loud “poot poot   brrrrr brrrr poot poot brrrr clang” for many moments until finally it would all come together.  The engine would purr and you could accelerate.  Oh sure, there were bumps. You’d run out of gas. There would be accidents and you’d have to wait while the chickens crossed the road. Still, in essence, you would have achieved a sort of functional homeostasis — in a personal transporter moving you around planet earth at speeds undreamt of by pedestrians and jockeys… 50… maybe even 60 mph!

I find myself thinking about the confluence of radical technological developments in similar terms.  As a species that is utterly coupled with our technology and, at this point, pretty much responsible for the fate of most of the species on planet earth, we’re sputtering along, making loud, awkward, ugly noises — blowing shit up, toxifying the environment, tormenting the animals and treating one another poorly.  But at some point, all these complicating evolutions in technologies may start to purr.  Post-industrial technologies like biotech, artificial general intelligence, intelligence amplification, molecular technology and others may make this entire barely-functional civilization thing actually functional.   Or even better than functional.

I have the odd presentiment that the purpose of futurism — the neophile drive to accelerate into our technological destiny, whatever it may be — is actually an attempt to get us closer to living in the present moment. In other words, industrial culture and the early stages of post-industrial culture have turned us all, by necessity, into little corporations managing our bank accounts and households and jobs and companies; worrying the details of our personal five year plans; peering nervously out into the socioeconomic jungle for approaching dangers five days… five months… five years in the distance; all the while watching all certainties decay in the rapids of social change and dissolution.

But at some point, these mechanisms that reward us (some of us) with comforts and good health and cool toys and novel challenges may go cyber — they may become largely self-regulating, and we may find ourselves in a playful world that will permit us, as often as not, the fundamental sanity of being present in the moment that we happen to be in.

This then is my own idea of acceleration, at least in the moment I happen to be writing this essay — an acceleration towards a type of spontaneity the loss of which, I believe, lies at the heart of civilization’s discontents. Others see in acceleration the opportunity to live a quantified life, with every moment of sugary pleasure tracked and recorded on the balance sheet against other more healthful pursuits, with all medical results duly measured.

Which is fine, too.  To each their own acceleration.  See you there.  Watch for it here.
