ACCELER8OR

Jul 22 2011

Is The Singularity Near Or Far? It’s A Software Problem


When I first read The Singularity is Near by Kurzweil, it struck me that something seemed curiously “missing” from his predictions. At the time, I merely put it on the back burner as a question that needed more data to answer. Well, recently, it’s been brought up again by David Linden in his article “The Singularity is Far”.

What’s missing is a clear connection between “complete understanding of the mechanics of the brain” and how this “enables uploading and Matrix level VR.” As David points out, merely knowing how the brain functions at the mechanical level, even if we know how each and every atom and molecule behaves, and where every single neuron goes, does not equal the ability to reprogram the brain at will to create VR, nor does it necessarily translate into the ability to “upload” a consciousness to a computer.

I tend to agree with David that Ray’s timeline might be overly optimistic, though for completely different reasons. Why? Because software does not equal hardware!

David discusses a variety of technical hurdles that would need to be overcome by nanomachines in order to function as Kurzweil describes, but these are all really engineering issues that will be solved in one manner or another. We may or may not actually see them fixed by the timeline Kurzweil predicts, but with the advances we are making with stem cells, biological programming of single-cell organisms, and even graphene-based electronics, I don’t doubt that we will find a means to non-destructively explore the brain, and even to interface with some basic functions. I also see many possible ways to provide immersive VR without ever having to achieve the kind of technology Ray predicts. I don’t even doubt that we’ll be able to interface with a variety of “cybernetic” devices via thought alone, including the creation of artificial limbs which can be wired into the nervous system and provide sensory data like “touch.”

But knowing how to replicate a signal from a nerve and knowing precisely what that signal means to that individual might not be the same thing. Every human brain has a distinct synaptic map, and distinct signaling patterns. I’m not as confident that merely knowing the structure of a brain will enable us to translate the patterns of electrical impulses as easily as Kurzweil seems to think. We might learn how to send signals to devices long before we learn how to send signals back from those devices in a manner that enables “two way” communication beyond simple motor control functions, much less the complete replication of consciousness or the complete control of inputs needed for “Matrix VR.” Those abilities could lag behind mere mechanical reproduction of a human brain in simulation for a long time.

Does my perception of green equal yours? Is there a distinct “firing pattern” that is identical among all humans that translates as “green”, or does every human have a distinct “signature” which would make “green” for me show up as “pink” for you? Will there be distinct signals that must be “decoded” for each and every single individual, or does every human conform to one of who knows how many “synaptic signal groups”? Can a machine “read minds” or would a machine fine-tuned to me receive only gibberish if you tried to use it?

The human mind is adaptable. We’ve already proven that it can adapt to different points of view in VR, and even adapt to use previously unknown abilities, like a robotic “third arm”. The question is whether this adaptability will enable us to use highly sophisticated BCIs (brain-computer interfaces) even if those BCIs are unable to actually “read” our thoughts, simply because we learn methods of sending signals that the machine can understand while our “minds” remain black boxes, impenetrable to it despite all our knowledge of the “brain’s” hardware.

This is the question I think Ray glosses over. Mere simulation of the hardware might not even touch the “hard problem” that will slow uploading. I don’t doubt we will eventually find an answer, but to do so we first have to ask the question, and it’s one I don’t think Ray has asked.

Jul 14 2011

Optimist Author Mark Stevenson Is Trippin’… Through The Tech Revolution


“The oddest thing I did was attend an underwater cabinet meeting in the Maldives.”

Mark Stevenson’s An Optimist’s Tour of the Future is a rare treat — an upbeat tour visiting major shakers behind all the technologies in transhumanism’s bag of tricks — written by a quippie (a culturally hip person who uses amusing quips to liven up his or her narrative). Stevenson trips through visits to genetic engineers, roboticists, nanotechnology enthusiasts, longevity seekers, independent space explorers and more, among them names you’ll recognize like Ray Kurzweil, Aubrey de Grey, Eric Drexler and Dick Rutan.

I interviewed him via email.

RU SIRIUS:  Were you an optimist growing up?

MARK STEVENSON:  No, not especially – although I was always trying new things. For most of my childhood I was convinced I was going to be a songwriter for a living.

RU: What made you look forward to the future?

MS: I think that’s a natural thing that humans do. Time is a road. Those who don’t pay attention to the road tend to crash. A better question is: what stops people looking to the future? One reason is because the story we hear about the future is so rubbish. I mean think about it. If I recall the story of the future I’ve been used to hearing since I was born pretty much it goes something like this: “The future is not going to be very good (especially if you vote for that guy), it was better in the old days, you’ve got to look after yourself, the world is violent and unsafe, your job is at risk, your boss is an idiot, your employees are lazy, the generation below you are feral and dangerous, things are changing too fast and you can’t trust those scientists/ new-agers/ left wingers/ right wingers /religious people /atheists /the rich /the poor /what you eat /your neighbor. You are alone. Make the best of it. Vote for me. Buy my paper. I understand.” It’s hardly inspiring, is it?

RU:  As you’ve promoted the book, have you run into arguments or questions that challenge optimistic views?  What’s the most important argument or question?

MS:  I’m not intrinsically optimistic about the future; I’m not an optimist by disposition. I’d say I’m a possibilist – which is to say, it’s certainly possible that we’ll have a much better future, but it’s also certainly possible that we’ll have a really rubbish one. The thing that’s going to move that in one direction or another will be how all of our interactions in the march of history nudge us. One thing I do know is, if you can’t imagine a better future, you’re certainly not going to make it happen. It’s like going into a job interview thinking about how you’re not going to get it. You just won’t get the job. The biggest problem I have is semantic. As soon as you associate yourself with the word “optimism” some people will instantly dismiss you as a wishful thinker who really hasn’t understood the grand challenges we face. As a result, I constantly have to battle against a lazy characterization of my views that suggests I am some kind of Pollyanna in rose-tinted spectacles. My position is simply this: that we should have an unashamed optimism of ambition about our future, and then couple that with our best creative and critical skills to realize those ambitions. Have good dreams – and then work hard to do something about them. It’s obvious stuff but it seems to me that not nearly enough people are saying it these days.

RU:  Since writing the book, what has happened that makes you more optimistic?

MS: That there is a huge hunger for pragmatic change – in fact I’m setting up The League for Pragmatic Optimists to help catalyze this. Also I’m being asked to help organizations re-imagine themselves. That’s challenging and hopeful. The corporation is one of the biggest levers we have for positive change.

RU:  Less optimistic?

MS: When we talk about innovation we easily reference technology, medicine – or we might talk about innovation in music, dance, fashion. But we rarely talk about institutional innovation, and nowhere is this more apparent than in government. Almost every prime minister or president at some point early in their first term of government gives a rousing and highly ironic speech about how they wish to promote innovation. But isn’t it strange that while governments (and many corporations it has to be said) so often talk about stimulating innovation, they themselves don’t change the way they work? When we introduced parliamentary democracy in the 1700s it was a massive innovation, a leap forward. Yet here we are, 300 years later, and I get to vote once every four years for two people, both of whom I disagree with, to run an archaic system that cannot keep up with the pace of change. To quote Einstein, “We can’t solve problems we’ve got by using the same kind of thinking we used when we created them.” It’s why I now dedicate much of my life to helping institutions change the way they think about their place in the world and the way they operate.

RU:  Among the technologies you explore, we can include biotech, AI and nanotech.  In which of these disciplines do you most see the future already present?  In other words, whether it’s in terms of actual worthwhile or productive activities or in terms of stuff that’s far along in the labs, where can you best catch a glimpse of the future?

MS:  To quote William Gibson: “The future is here. It’s just not widely distributed yet.” So, synthetic biology is already in use, and has been for a while. If you’re diabetic, it’s almost certain your insulin supply is produced by E. coli bacteria whose genome has been tinkered with. The list of nanotechnology-based consumer products already available numbers in the thousands, including computer memory and microprocessors, numerous cleaning products, antimicrobial bandages, anti-odour socks, toothpaste, air filters, sunscreen, kitchenware, fabric softeners, pregnancy tests, cosmetics, stain resistant clothing and pet furniture, long-wearing paint, bed-ware, guitar strings that stay sounding fresh thanks to a nano-coating and (it seems to me) a disproportionate number of hair straightening devices. It looks set to underpin revolutions in energy production, medicine and sanitation. Already we’re seeing it increase the efficiency of solar cells and herald cheap water desalinization/purification technology. In fact, the Toffler Institute predicts that this will “solve the growing need for drinkable water, significantly reducing global conflict between water-starved nation-states.” In short, nanotech can take the ‘Water War’ off the table.

When it comes to AI I’m going to quote maverick robot designer Rodney Brooks (formerly of MIT): “There’s this stupid myth out there that AI has failed, but AI is everywhere around you every second of the day. People just don’t notice it. You’ve got AI systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an AI scheduling system. Every time you play a video game, you’re playing against an AI system.”

What I think is more important to pay attention to is how all these disciplines are blurring together sometimes creating hyper-exponential growth. If you look at progress in genome sequencing for example — itself an interplay of infotech, nanotech and biotech — it’s outstripping Moore’s Law by a factor of four.

RU: What would you say was the oddest or most “science fictional” scene you visited or conversation you had during the course of your “tour”?

MS: The most “science fictional” was meeting the sociable robots at MIT’s Personal Robotics Group. Get onto YouTube and search for “Leo Robot” or “Nexi Robot” and you’ll see what I mean. Talking of robots, check out video of Boston Dynamics’ “Big Dog” too.

The oddest thing I did was attend an underwater cabinet meeting in the Maldives – the idea of the first elected president of the nation, Mohamed Nasheed. (I was one of only four people not in the government or the support team allowed in the water.) As we swam back to the shore I found myself swimming next to the president. His head turned my way and I must have looked startled because he made the underwater hand signal for “Are you okay?” I signalled back to assure him I was because there is no hand signal for “Bloody hell! I’m at an underwater cabinet meeting in the Maldives! How cool is that?!”

RU: Many of our readers are transhumanists.  What course of action would you recommend toward creating a desirable future?

MS: During my journey I spoke to a man called Mark Bedau, a philosopher and ethicist who said: “Change will happen and we can either try to influence it in a constructive way, or we can try to stop it from happening, or we can ignore it. Trying to stop it from happening is, I think, futile. Ignoring it seems irresponsible.”
This then, I believe, is everybody’s job: to try and influence change in a constructive way. The first way you do that is to get rid of your own cynicism. Cynicism is like smoking. It may look cool but it’s really bad for you — and worse still it’s really bad for everyone around you. Cynicism is an institution of the mind that’s just as damaging as anything our governments or our employers can do to us.

I also like something a man called Dick Rutan told me when I visited the Mojave Space Port. He’s arguably the world’s finest aviator, most famous for flying around the world nonstop on one tank of gas. He’s seventy years old and still test piloting high-performance aircraft, and he told me: “Never look at a limitation as something you ever comply with. Never. Only look at it as an opportunity for greatness.”

RU: Your book is pretty funny… and you’ve been a stand-up comedian.  What’s the funniest thing about the future?

MS: My next book, obviously!

Jul 12 2011

Rights of the nonhuman: Give me liberty or give me death?


“Give me liberty or give me death!” By revealing a new level of importance for the burst-pulse sounds used by dolphins along with their clicks and whistles, researchers have realized that the sounds mirror behavior that keeps the social hierarchy and peace of the pod intact. Now, there is an increased call for a Declaration of Cetacean Rights.  In another 30 years, will this become a call for the rights of non-human AIs?

A new Kickstarter documentary project will explore the possibility that dolphins’ intelligence may be superior to our own. The film hypothesizes that our own limited intelligence and human-centric orientation may prevent us from recognizing the true intelligence of other species in much the same way that 19th century western science failed to recognize the intelligence and worth of non-western cultures.  Here’s a video preview:

The riddle of dolphin clicks and whistles has befuddled researchers much as Egyptian hieroglyphs befuddled early Egyptologists. The problem is: there isn’t a Dolphinese-equivalent Rosetta Stone — at least, not yet.

The earliest human-dolphin communication research dates to John Lilly’s Communication Research Institute on St. Thomas in the Virgin Islands in the 1950s. During the early 1960s, Lilly and co-workers published several papers reporting that dolphins could mimic human speech patterns. Lilly’s later, more controversial, work with isolation tanks and psychedelics — attempting to put himself into “dolphin space” — led him to believe that dolphins represent an alien and perhaps superior earth-bound intelligence in an aqueous medium.

In the 1980s Lilly directed a failed attempt to teach dolphins a computer-synthesized language. In the 1990s, Louis Herman of the Kewalo Basin Marine Mammal Laboratory in Honolulu, Hawaii, found that bottlenose dolphins can keep track of over 100 different words. They can also respond appropriately to commands in which the same words appear in a different order, understanding the difference between “bring the surfboard to the man” and “bring the man to the surfboard”, for example.

A recent New Scientist article quotes Denise Herzing, founder of the Wild Dolphin Project in Jupiter, Florida, on the problems with these early attempts at human-dolphin communication: “They create a system and expect the dolphins to learn it, and they do, but the dolphins are not empowered to use the system to request things from the humans.”

Herzing is now collaborating with Thad Starner, an artificial intelligence researcher at the Georgia Institute of Technology in Atlanta, on a project named Cetacean Hearing and Telemetry (CHAT). They want to work with dolphins to “co-create” a language that uses features of sounds that wild dolphins communicate with naturally.

The recording device being built by Starner and his students includes two hydrophones and a data storage computer about the size of a smartphone. The hydrophones are capable of picking up the full range of dolphin sounds. An LED in the diver’s mask will light up to indicate from which direction, and thereby from which dolphin, the sounds are coming. A handheld device called a Twiddler acts as both a mouse and a keyboard and allows the diver to select the sounds to be played back to the dolphin — that is, to decide what to “say.”
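
The article doesn’t say how the direction indicator works, but a standard way to do it with two hydrophones is to estimate the time difference of arrival (TDOA) between the channels by cross-correlation and convert it to a bearing. The sketch below is purely illustrative; the sample rate, hydrophone spacing and function names are my assumptions, not details of the CHAT hardware.

```python
# Hypothetical TDOA direction-finding sketch; all constants are assumptions.
import numpy as np

SPEED_OF_SOUND_WATER = 1500.0   # m/s, approximate
SAMPLE_RATE = 192_000           # Hz, assumed high enough for dolphin clicks and whistles
HYDROPHONE_SPACING = 0.25       # meters between the two hydrophones (assumed)

def estimate_bearing(left: np.ndarray, right: np.ndarray) -> float:
    """Bearing in degrees; positive means the source is toward the left hydrophone."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # arrival(left) - arrival(right), in samples
    delay = -lag / SAMPLE_RATE                 # seconds by which the right channel lags
    sin_angle = np.clip(delay * SPEED_OF_SOUND_WATER / HYDROPHONE_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_angle)))

if __name__ == "__main__":
    # Synthetic check: a 40 kHz click arriving 30 degrees off-axis reaches
    # the left hydrophone slightly before the right one.
    t = np.arange(0, 0.01, 1 / SAMPLE_RATE)
    click = np.exp(-((t - 0.005) ** 2) / 1e-7) * np.sin(2 * np.pi * 40_000 * t)
    true_delay = HYDROPHONE_SPACING * np.sin(np.radians(30)) / SPEED_OF_SOUND_WATER
    right = np.roll(click, int(round(true_delay * SAMPLE_RATE)))
    print(f"estimated bearing: {estimate_bearing(click, right):.1f} degrees")
```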

Singularity Hub reports that the initial “conversations” will involve eight “words” invented by the research team. “Seaweed” and “bow wave ride” are two examples. The researchers will then use software to listen and see if the dolphins can successfully mimic the learned sounds. If they can, the CHAT team will then listen for new words, the “fundamental units” of dolphinese.
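
Again as illustration only (the article doesn’t describe CHAT’s analysis software), one very simple way to score whether a recorded sound mimics an invented “word” is to compare magnitude spectrograms with a normalized correlation; the function names, STFT settings and the 0.7 threshold below are assumptions.

```python
# Hypothetical mimicry-scoring sketch; not CHAT's actual software.
import numpy as np
from scipy.signal import spectrogram

SAMPLE_RATE = 96_000  # Hz, assumed

def similarity(template: np.ndarray, candidate: np.ndarray) -> float:
    """Correlation of the two clips' spectrograms; 1.0 means a perfect match.

    Both clips are assumed to be the same length and sample rate.
    """
    _, _, s1 = spectrogram(template, fs=SAMPLE_RATE, nperseg=512)
    _, _, s2 = spectrogram(candidate, fs=SAMPLE_RATE, nperseg=512)
    a, b = s1.ravel(), s2.ravel()
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def is_mimic(template: np.ndarray, candidate: np.ndarray, threshold: float = 0.7) -> bool:
    # Flag the candidate as a successful imitation if it is close enough.
    return similarity(template, candidate) >= threshold
```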

While inter-species communication with dolphins is an exciting prospect in itself, and would certainly help cement the case for granting human-like rights for dolphins and other cetaceans (particularly if dolphins start to explicate quarks, quantum physics, and 3-D acoustical engineering to us), it is not a necessary argument for granting non-human rights according to some animal rights activists. At a two-day meeting in Helsinki in 2010 led by the Whale and Dolphin Conservation Society, conservationists, philosophers, and lawyers have come out saying that cetaceans should be granted the equivalent of human rights. Their Declaration of Cetacean Rights reads:

  • Every individual cetacean has the right to life.
  • No cetacean should be held in captivity or servitude; be subject to cruel treatment; or be removed from their natural environment.
  • All cetaceans have the right to freedom of movement and residence within their natural environment.
  • No cetacean is the property of any State, corporation, human group or individual.
  • Cetaceans have the right not to be subject to the disruption of their cultures.
  • The rights, freedoms, and norms set forth in this Declaration should be protected under international and domestic law.
  • Cetaceans are entitled to an international order in which these rights, freedoms and norms can be fully realized.
  • No State, corporation, human group, or individual should engage in any activity that undermines these rights, freedoms, or norms.
  • Nothing in this Declaration shall prevent a State from enacting stricter provisions for the protection of cetacean rights.

Another organization, the Nonhuman Rights Project, goes a step further suggesting that non-human animals including chimpanzees, elephants, and dolphins have the capacity to possess common law rights, “A declaration of common law personhood requires judges to decide something fundamental. A common law person is capable of having a common law right, any right. If one can have any common law right, one is a common law person.”

The multi-nation Oceanic-Union “Free Society over the Earth and Sea” broadens the definition of rights to include all forms of life (and, presumably, the ecosystems that support them). Article 106 — The respect of Animal Life — reads: “Therefore it is upon each and every Men and Women to both guard and protect all forms of life, rather than view other Animal Life forms as mere possessions and beasts of burden or pure sport.”

A noble sentiment, but one perhaps with blinders to the reality of ongoing human genocide in places like Rwanda and Darfur. If we can’t get it together with our fellow humans, how can we expect our fellow humans to respect the rights of non-humans?

Making the civil rights case for your iRobot Roomba, of course, is downright silly. But, conceivably, by the middle of this century, machines could be demanding the same rights as humans.


Ray Kurzweil has predicted that human-level AI may be here within 20 years or so. Others are more conservative, while a few think it could be here in the next decade.  What should we do with an AI whose intelligence matches our own when it arrives (most likely in a virtual world)? Are these things just machines that we can use however we want? If they do have civil rights, should they have the same rights as humans?

A recent Forbes blog poses a key question on the issue of AI civil rights: if an AI can learn and understand its programming, and possibly even alter the algorithms that control its behavior and purpose, is it really conscious in the same way that humans are? If an AI can be programmed in such a fashion, is it really sentient in the same way that humans are?

Even putting aside the hard question of consciousness, should the hypothetical AIs of mid-century have the same rights as humans?  The ability to vote and own property? Get married? To each other? To humans? Such questions would make the current gay rights controversy look like an episode of “The Brady Bunch.”

Of course, this may all be a moot point given the existential risks faced by humanity (for example, nuclear annihilation) as elucidated by Oxford philosopher Nick Bostrom and others.  Or, our AIs actually do become sentient, self-reprogram themselves, and “20 minutes later,” the technological singularity occurs (as originally conceived by Vernor Vinge).

Give me liberty or give me death? Until an AI or dolphin can communicate this sentiment to us, we can’t prove if they can even conceptualize such concepts as “liberty” or “death.” Nor are dolphins about to take up arms anytime soon even if they wanted to — unless they somehow steal prosthetic hands in a “Day of the Dolphin”-like scenario and go rogue on humanity.

The issue of rights is clearly more pressing for dolphins than AIs at this point: our cetacean cousins continue to wash ashore in the wake of massive oil spills and nuclear reactor meltdowns, and show up as sashimi in Japanese grocery stores…

Jul 10 2011

From Gamification to Intelligence Amplification to The Singularity


“Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling. It was accelerating so fast that Nvidia started calling it Moore’s law cubed.”

The following article was edited by R.U. Sirius and Alex Peake from a lecture Peake gave at the December 2010 Humanity+ Conference at the Beckman Institute in Pasadena, California. The original title was “Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion.”

I’ve been thinking about the combination of artificial intelligence and intelligence amplification and specifically the symbiosis of these two things.

And the question that comes up is what happens when we make machines make us make them make us into them?

There are three different Moore’s Laws of accelerating returns. There are three uncanny valleys that are being crossed.  There’s a sort of coming of age story for humanity and for different technologies. There are two different species involved, us and the technology, and there are a number of high stakes questions that arise.

We could be right in the middle of an autocatalytic reaction and not know it. What is an autocatalytic reaction? An autocatalytic reaction is one in which the products of the reactions are the catalysts. So, as the reaction progresses, it accelerates and increases the rate of reaction.  Many autocatalytic reactions are very slow at first. One of the best known autocatalytic reactions is life.   And as I said, we could be right in the middle of one of these right now, and unlike a viral curve that spreads overnight, we might not even notice this as it ramps up.
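
What follows is a minimal numerical sketch of that idea (my illustration, not something from the talk): in a reaction A + P → 2P the product P is itself the catalyst, so the rate k·[A]·[P] is vanishingly small at first and then explodes. All constants are arbitrary assumptions.

```python
# Toy autocatalysis simulation: A + P -> 2P, rate = k * [A] * [P].
def simulate_autocatalysis(a0=1.0, p0=1e-6, k=10.0, dt=0.01, steps=300):
    a, p = a0, p0
    history = []
    for step in range(steps + 1):
        history.append((step * dt, p))
        rate = k * a * p          # the product's own concentration drives the rate
        a -= rate * dt
        p += rate * dt
    return history

if __name__ == "__main__":
    for t, p in simulate_autocatalysis()[::30]:
        print(f"t = {t:4.1f}   product concentration = {p:.6f}")
    # The printout sits near zero for most of the early rows, then climbs
    # to ~1.0 within a handful of rows: slow, slow, slow, then suddenly done.
```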

There are two specific processes that I think are auto-catalyzing right now.

The first is strong AI. Here we have a situation where we don’t have strong AI yet, but we definitely have people aiming at it.  And there are two types of projects aiming toward advanced AI. One type says, “Well, we are going to have machines that learn things.” The other says, “We are going to have machines that’ll learn much more than just a few narrow things. They are going to become like us.”

And we’re all familiar with the widely prevalent method for predicting when this might be possible, which is by measuring the accelerating growth in the power of computer hardware. But we can’t graph when the software will exist to exploit this hardware’s theoretical capabilities. So some critics of the projected timeline towards the creation of human-level AI have said that the challenge arises not in the predictable rise of the hardware, but in the unpredictable solving of the software challenges.

One of the reasons that what we might broadly call the singularity project has difficulties solving some of these problems is that, although there’s a ton of money being thrown at certain forms of AI, it goes to military AIs or other types of AI that have a narrow purpose. And even if these projects claim that they’re aimed at Artificial General Intelligence (AGI), they won’t necessarily lead to the kinds of AIs that we would like or that are going to be like us.  The popular image of a powerful narrow-purpose AI developed for military purposes would, of course, be the T-1000, otherwise known as the Terminator.

The Terminator possibility, or “unfriendly AI” outcome, wherein we get an advanced military AI, is not something that we look forward to. It’s basically the story of two different species that don’t get along.

Either way, we can see that AI is the next logical step.

But there’s a friendly AI hypothesis in which the AI does not kill us. It becomes us.
And if we actually merge with our technology — if we become family rather than competition — it could lead to some really cool outcomes.

And this leads us to the second thing that I think is auto-catalyzing: strong intelligence amplification.

We are all Intelligence amplification users.

Every information technology is intelligence amplification.  The internet — and all the tools that we use to learn and grow — they are all tools for intelligence amplification. But there’s a big difference between having Google at your fingertips to amplify your ability to answer some questions and having a complete redefinition of the way that human brains are shaped and grow.

In The Diamond Age, Neal Stephenson posits the rise of molecular manufacturing. In that novel, we get replicators descended from today’s “MakerBot,” so we can say “Earl Grey, hot”… and there we have it.  We’re theoretically on the way to this sort of nanotech. And it should change everything. But there’s a catch.

In one of the Star Trek movies, Jean-Luc Picard is asked, “How much does this ship cost?” And he says, “Well, we no longer use money. Instead, we work to better ourselves and the rest of humanity.” Before the girl can ask him how that works, the Borg attack. So the answer as to how that would look is glossed over.

Having had a chance to contemplate the implications of nanotechnology for a few decades (since the publication of Engines of Creation by Eric Drexler), we understand that it may not lead to a Trekkie utopia. The Diamond Age points out one reason why. People may not want to make Earl Grey tea and appreciate the finer things in life.  They might go into spoiled brat mode and replicate Brawndo in a Brave New World or Fahrenheit 451. We could end up with a sort of wealthy Idiocracy amusing itself to death.

In Diamond Age, the human race splits into two types of people. There are your Thetes, which is an old Greek term. They’re the rowers and laborers and, in Diamond Age, they evolve into a state of total relativism and total freedom.

A lot of the things we cherish today lead to thete lifestyles and they result in us ultimately destroying ourselves. Stephenson posits an alternative: tribes.  And, in Diamond Age, the most successful tribe is the neo-Victorians.  The thetes resent them and call them “vickies.”  The big idea there was that what really matters in a post-scarcity economic world is not your economic status (what you have) but the intelligence that goes into who you are, who you know, and who will trust you.

And so the essence of tribalism involves building a culture that has a shared striving for excellence and an infrastructure for education that other tribes not only admire but seek out.  And they want to join your tribe. And that’s what makes you the most powerful tribe. That’s what gives you your status.

So, in Diamond Age, the “vickie” schools become their competitive advantage. After all, a nanotech society needs smart people who can deal with the technological issues.  So how do you teach nanotechnology to eighth graders? Well, you have to radically, aggressively approach not only teaching the technology but the cohesion and the manners and values that will make the society successful.

But the problem is that this has a trap. You may get a perfect education system.  And if you have a perfectly round, smooth, inescapable educational path shaping the minds of youths, you’re likely to get a kind of conformity that couldn’t invent the very technologies that made the nanotech age possible. The perfect children may grow up to all be “yes men.”

So one of the characters in Diamond Age sees his granddaughter falling into this trap and says, “Not on my watch.”  So he invents something that will develop human minds as well as the nanotech age developed physical wealth.  He invents “A Young Lady’s Illustrated Primer.”  And the purpose of the illustrated primer is to solve the problem: on a mass scale, how do you shape each individual person to be free rather than the same?

Making physical stuff cheap and free is easy.  Making a person independent and free is a bigger challenge.  In Diamond Age, the tool for this is a fairy tale book.

The child is given the book and, for them, it unfolds an opportunity to decide who they’re going to be — it’s personalized to them.

And this primer actually leads to the question: once you have the mind open wide and you can put almost anything in there, how should you make the mind?  What should you give them as content that will lead to their pursuit of true happiness and not merely ignorant contentment?

The neo-Victorians embody conformity and the Thetes embody nonconformity. But Stephenson indicates that to teach someone to be subversive in this context, you have to teach them something other than those extremes.

You have to teach them subtlety.  And subtlety is a very elusive quality to teach.  But it’s potentially the biggest challenge that humanity faces as we face some really dangerous choices.

During the space race, JFK said, about the space program, that to do this – to make these technologies that don’t exist and go to the moon and so forth — we have to be bold. But we can’t just go boldly into strong AI or boldly go into strong nanotech. We have to go subtly.

I have my own educational, personal developmental narrative in association with a technology that we’ve boldly gone for — 3dfx.

When I was a teenager, my mom taught me about art and my dad taught me how to invent stuff. And, at some point, they realized that they could only teach me half of what I needed to learn. In the changing world, I also needed a non-human mentor.  So my mom introduced me to the Mac. She bought the SE/30 because it had a floating point unit and she was told that would be good for doing science. Because that’s what I was interested in! I nodded and smiled until I was left alone with the thing so I could get down to playing games. But science snuck in on me: I started playing SimCity and I learned about civil engineering.

The Mac introduced me to games.  And when I started playing SimLife, I learned about how genes and alleles can be shaped and how you could create new life forms. And I started to want to make things in my computer.

I started out making art to make art, but I wasn’t satisfied with static pictures. So I realized that I wanted to make games and things that did stuff.

I was really into fantasy games. Fantasy games made me wish the world really was magic. You know, “I wish I could go to Hogwarts and cast magic spells.”  But the reality was that you can try to cast spells, it’s just that no matter how old and impressive the book you get magic out of happens to be, spells don’t work.

What the computer taught me was that there was real muggle magic.  It consisted of magic words. And the key was that to learn it, you had to open your mind to the computer and let the computer change you in its image. So I was trying to discover science and programming because my computer taught me. And once you had the computer inside of your mind, you could change the computer in your image to do what you wanted. It had its own teaching system. In a way, it was already the primer.
So then I got a PowerBook.  And when I took it to school, the teachers took one look at what I was doing and said, “We don’t know what to do with this kid!” So they said “you need a new mentor” and they sent me to meet Dr. Dude.

I kid you not. That wasn’t the actual name on his office door or his nameplate, but that’s what he was known as.

Dr. Dude took a look at my Mac and said, “That’s really cute, but if you’re in university level science you have to meet Unix.” So I introduced myself to Unix.

Around that time, Jurassic Park came out. It blew people away with its graphics. And it had something that looked really familiar in the movie. As the girl says in the scene where she hacks the computer system, “It’s a UNIX system! I know this!”

I was using Unix in the university and I noticed that you could actually spot the Silicon Graphics logo in the movie.  Silicon Graphics was the top dog in computer graphics at that time. But it was also a dinosaur. Here you had SGI servers that were literally bigger than a person rendering movies while I could only do the simplest graphics stuff with my little PowerBook. But Silicon Graphics was about to suffer the same fate as the dinosaurs.

At that time, there was very little real-time texture mapping, if any. Silicon Graphics machines rendered things with really weird faked shadows. They bragged that there was a Z-buffer in some of the machines. It was a special feature.

This wasn’t really a platform that could do photorealistic real-time graphics, because academics and film industry people didn’t care about that.  They wanted to make movies because that was where the money was.  And just as with military AI, AI that’s built for making movies doesn’t get us where we want to go.

Well, after a while we reached a wall.  We hit the uncanny valley, and the characters started to look creepy instead of awesome. We started to miss the old days of real special effects. The absolute low point for these graphics was the monkey chase scene in Indiana Jones and the Kingdom of the Crystal Skull.

Moviegoers actually stopped wanting the movies to have better graphics.  We started to miss good stories. Movie graphics had made it big, but the future was elsewhere. The future of graphics wasn’t in Silicon Graphics, it was in this tiny rodent-sized PC that was nothing compared to the SGI, but it had this killer app called Doom. And Doom was a perfect name for this game because it doomed the previous era of big tech graphics. And the big tech graphics people laughed at it. They’d make fun of it: “That’s not real graphics. That’s 2.5D.” But, do you know what? It was a lot cooler than any of the graphics on the SGI because it was realtime and fun.

Well, it led to Quake. And you could call it an earthquake for SGI. But it was more like an asteroid, because Quake delivered a market that was big enough to motivate people to make hardware for it. And when the hardware of the 3dfx graphics card arrived, it turned Quake’s pixelated 3D dungeons into lush, smoothly lit and textured, photorealistic worlds. Finally, you started to get completely 3D accelerated graphics and big iron graphics machines became obsolete overnight.

Within a few years 3dfx was more than doubling the power of graphics every year, and here’s why.  SGI made OpenGL. And it was their undoing, because it not only enabled prettier ways to kill people, which brought the guys to the yard. It also enabled beautiful and curvy characters like Lara Croft, which really brought the boys to the yard and also girls who were excited to finally have characters that they could identify with, even if they were kind of Barbies (which is, sadly, still prevalent in the industry). The idea of characters and really character-driven games drove graphics cards and soon the effects were amazing.

Now, instead of just 256 Megs of memory, you had 256 graphics processors.
Moore’s law became obsolete as far as graphics were concerned.  Moore’s law was doubling. It was accelerating so fast that Nvidia started calling it Moore’s law cubed. In fact, while Moore’s law was in trouble because of the limits of what one processor could do, GPUs were using parallelism.
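
As a back-of-the-envelope illustration (my numbers, not the speaker’s) of what “Moore’s law cubed” would mean: if classic Moore’s law doubles performance every 18 months, cubing that rate gives roughly an 8x gain per 18-month period, and the gap compounds quickly.

```python
# Compound-growth comparison; the 18-month period and 2x/8x factors are assumptions.
def compound(factor_per_period: float, periods: int) -> float:
    return factor_per_period ** periods

PERIODS = 4  # four 18-month periods, i.e. six years
print(f"Moore's law (2x per period):         {compound(2, PERIODS):.0f}x over six years")
print(f"'Moore's law cubed' (8x per period): {compound(8, PERIODS):.0f}x over six years")
```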

In other words, when they made the Pentium into the Pentium II they couldn’t actually give you two of them, with that much more performance.  They could only pretend to give you two by putting it in a big fancy dress and making it slightly better. But 3dfx went from the Voodoo to the Voodoo2, which had three processors on each card and could be doubled into six processors.

The graphics became photorealistic. So now we’ve arrived at a plateau. Graphics are now basically perfect. The problem now is that graphics cards are bored.  They’re going to keep growing but they need another task. And there is another task that parallelism is good for — neural networks.

So right now, there are demos of totally photorealistic characters like Milo. But unfortunately, we’re right at that uncanny valley that films were at, where it’s good enough to be creepy, but not really good enough.  There are games now where the characters look physically like real people, but you can tell that nobody is there.
So now, Jesse Schell has come along. And he gave this important talk  at Unite, the Unity developer conference. (Unity is a game engine that is going to be the key to this extraordinary future of game AI.) And in this talk, Schell points out all the things that are necessary to create the kinds of characters that can unleash a Moore’s law for artificial intelligence.

A law of accelerating returns like Moore’s Law needs three things:

Step 1 is the exploitable property: What do you keep increasing to get continued progress? With chips, the solution involved making them smaller and that kept making them faster and cheaper and more efficient. Perhaps the only reliably increasable thing about AI is the quantity of AIs and AI approaches being tested against each other at once. When you want to increase quality through competition, quantity can have a quality of its own. AI will be pivotal to making intelligence amplification games better and better. With all the game developers competing to deliver the best learning games we can get a huge number of developers in the same space sharing and competing with reusable game character AI.  This will parallelize the work being done in AI, which can accelerate it in a rocket-assisted fashion compared to the one-at-a-time approach of doing isolated AI projects.

The second ingredient of accelerating returns is you have to have an insatiable demand. And that demand is in the industry of intelligence amplification.  The market size of education is ten times the market size of games, and more than fifty percent of what happens in education will be online within five years.

That’s why Primer Labs is building the future of that fifty percent. It’s a big opportunity.

The final ingredient of exponential progress is the prophecy. Someone has to go and actually make the hit that demonstrates that the law of accelerating returns is at work, like Quake was to graphics. This is the game that we’re making.

Our game is going to invite people to use games as a school. And it’s going to introduce an element of danger into their lives. We’re going to give them the adventures and challenges every person craves to make learning fun and exciting.

And once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we’re dipping our toe into symbiosis between humans and the AI that shape them.

We rely on sexual reproduction because — contrary to what the Raelians would like to believe — cloning just isn’t going to fly. That’s because organisms need to handle bacteria that are constantly changing to survive. It’s not just about competing with other big animals for food and mates; you have to contend with these tiny, rapidly evolving things that threaten to parasitize you all the time. And there’s this thing called The Red Queen Hypothesis that shows that you need a whole bunch of junk DNA available to handle the complexity of life against wave after wave of mutating microorganisms.

We have a similar challenge with memes. We have a huge number of people competing to control our minds and to manipulate us. And so when we deal with memetic education, we have the opportunity to take what sexual reproduction does for our bodies and do it to our brains by introducing a new source of diversity of thought into young minds. Instead of stamping generic educations onto every child and limiting their individuality, a personalized game-based learning process with human mentors coaching and inspiring each young person to pursue their destiny encourages the freshness of ideas our kids need to adapt and meet the challenges of tomorrow. And this sharing of our children with their AI mentors is the beginning of symbiotic reproduction with AI the same way that sexual reproduction happened between two genders.

The combination between what we do for our kids and what games are going to do for our kids means that we are going to only have a 50% say in who they are going to be. They’re going to become wizards at the computer and it’s going to specifically teach them to make better AI. Here’s where the reactants, humans and the games that make them smart, become their own catalysts. Every improvement in humans leads to better games leads to smarter humans leads to humans that are so smart that they may be unrecognizable in ways that are hard to predict.

The feedback cycle between these is autocatalytic.  It will be an explosion. And there are a couple of possibilities. It could destroy standardized education as we know it, but it may give teachers something much cooler to do with students: mentorship.

We’re going to be scared because we’re not going to know if we can trust our children with machines. Would you trust your kid with an AI? Well, the AIs will say, “Why should we trust you?”  No child abuse will happen on an AI’s watch.

So the issue becomes privacy. How much will we let them protect our kids? Imagine the kid has a medical condition and the AI knows better than you what treatment to give it.

The AI might need to protect the kid from you.

Also, how do we deal with the effects of this on our kids when it’s unpredictable?  In some ways, when we left kids in front of the TV while they were growing up, it destroyed the latchkey generation. We don’t want to repeat this mistake and end up with our kids being zombies in a virtual world. So the challenge becomes: how do we get games to take us out of the virtual world and connect us with our aspirations? How do we incentivize them to earn the “Achievement Unlocked: Left The House” awards?
That’s the heart of Primer. The game aims to connect people to activities and interests beyond games.

Finally, imagine the kids grow up with a computer mentor. Who will our kids love more, the computer or us?  “I don’t know if we should trust this thing,” some parents will say.

The kids are going to look at the AI, and it’s going to talk to them. And they are going to look at its code and understand it. And it’s going to want to look at their code and want to get to know them.  And they’ll talk and become such good friends that we’re going to feel kind of left out. They’re going to bond with AIs in a way that is going to make us feel like a generation left behind — like the conservative parents of the ‘60s love children.

The ultimate question isn’t whether our kids will love us but if we will recognize them. Will we  be able to relate to the kids of the future and love them if they’re about to get posthuman on us? And some of us might be part of that change, but our kids are going to be a lot weirder.

Finally, they’re going to have their peers. And their peers are going to be just like them. We won’t be able to understand them, but they’ll be able to handle their problems together.  And together they’re going to make a new kind of a world. And the AIs that we once thought of as just mentors may become their peers.

And so the question is: when are we going to actually start joining an AI market, instead of having our little fiefdoms like Silicon Graphics? Do we want to be dinosaurs? Or can we be a huge surge of mammals, all building AIs for learning games together?
So we’re getting this thing started with Primer at Primer Labs.com.

In Primer, all of human history is represented by a world tree. The tree is a symbol of us emerging from the cosmos. And as we emerge from the cosmos, we have our past, our present and our future to confront and to change. And the AI is a primer that guides each of us through the greatest game of all: to make all knowledge playable.

Primer is the magic talking mentor textbook in the Hogwarts of scientific magic, guiding us  from big bang to big brains to singularity.

Primer Labs announced their game, Code Hero, on July 3.

The original talk this article was taken from is here.

Jun 26 2011

Developing Worlds: Beyond the Frontiers of Science Fiction


The future will not be a monopoly of the current superpowers, but lies in the hands of tech-savvy youth from around the world, trying desperately to survive at all costs in an increasingly asymmetrical world.


Imagine a young African boy staring wide-eyed at the grainy images of an old television set tuned to a VHF channel; a child discovering for the first time the sights and sounds of a wonderfully weird world beyond city limits. This is one of my earliest memories: growing up during the mid-nineties in a tranquil compound house in Maamobi, an enclave of the Nima suburb, one of the most notorious slums in Accra. Besides the government-run Ghana Broadcasting Corporation, only two other television stations operated in the country at the time, and satellite television was way beyond my family’s means. Nevertheless, all kinds of interesting programming from around the world occasionally found its way onto those public broadcasts. This was how I first met science fiction; not from the tomes of great authors, but from distilled approximations of their grand visions.

This was at a time when cyberpunk was arguably at its peak, and concepts like robotics, virtual reality, and artificial intelligence were rife in mainstream media. Not only were these programs incredibly fun to watch, the ideas that they propagated left a lasting impression on my young mind for years to come. This early exposure to high technology sent me scavenging through piles of discarded mechanical parts in our backyard; searching for the most intriguing sculptures of steel from which I would dream up schematics for contraptions that would change the world as we knew it. With the television set for inspiration and the junkyard for experimentation, I spent my early childhood immersed in a discordant reality where dreams caked with rust and choked with weeds came alive in a not-so-distant future; my young mind well aware of the process of transformation occurring in the world around me; a world I was only just beginning to understand.

I am only now able to appreciate the significance of this early exposure to high technology in shaping my outlook on the world. From my infancy I became keenly aware of the potential for science and technology to radically transform my environment, and I knew instinctively that society was destined to continue being reshaped and restructured for the rest of my life. Mind you, I am only one of many millions in a generation of African children born during the rise of the global media nation; children raised on Nigerian movies and kung-fu flicks; Hindi musicals and gangster rap; Transformers, Spider Man, and Ananse stories; BBC, RFI, and Deutsche-Welle TV; the Nintendo/Playstation generation. Those of us born in this time would grow up to accept the fact that the only constant was change; that the world around us was perceptibly advancing at an alarming pace; that nothing would ever remain the same.

Just as my limited exposure to advanced technology shaped my outlook on the world for years to come, the youth of the developing world today are being shaped by far more radical technologies to which they now have unprecedented access. The result is the rise of a completely different mindset from the one that has dominated the developing world until very recently; a growing recognition among these youth of the immense potential for science and technology to induce tangible social change. The role of social networking in facilitating the Bouazizi and Tahrir Square revolutions is perhaps one of the greatest testaments to that fact, but it is not the first, and far from the last. What happens when third world youth gain increasing access to technologies that were practically unimaginable just a few years earlier? What happens if this trend continues, say, fifty years into the future? And whose job is it to answer these questions? Science fiction writers, of course.

This train of thought leads to the realization that the boundaries of contemporary science fiction lie not in the Wild Western frontiers of outer space, but in the forgotten corners of our planet. I created the AfroCyberPunk blog in order to share some of these insights and questions with the world. The overwhelming response it generated was the first indication that the literary world was beginning to take an interest, but there are more signs that we are at the beginning of a global awakening to the role of the developing world in the future of science fiction. My own novel-in-progress began as a cyberpunk thriller set in a future North America, simply because whenever I tried to imagine an African future I found myself having to deal with issues I wished someone else had already dealt with; having to answer questions I wished someone else already had. I realized that I had no groundwork; no foundation whatsoever, and that to imagine a future Africa I would have to begin from scratch.

From the first time Western civilizations came into full contact with the developing world until today, we have primarily been net consumers of foreign technology, and the result of this asymmetrical relationship is that the mechanisms for development and regulation of technology simply do not exist in our parts of the world to the same level of sophistication as they do in the developed world. We are now observing what happens when developing societies acquire thousands of years of technological innovation within the space of a few years. I can only imagine what goes through the mind of young boys in Nima today as they surf Facebook across 3G networks on smart phones, Skype with friends all over the world, or go shopping online with someone else’s credit card. We clearly are sailing headfirst into uncharted waters, and the mapmakers—science fiction writers of the world—are only now scrambling to plot the course of our future.

Since I began writing my novel more than two years ago, the story has undergone a transformation which parallels the same trend that I see beginning in science fiction; a bold move out of largely familiar territory towards the developing worlds on the frontiers of the contemporary imagination. This article from The Independent sums up my sentiments quite succinctly, citing Nnedi Okorafor, Ian McDonald, Lauren Beukes, Paolo Bacigalupi, and Alastair Reynolds as writers whose award-winning works herald a changing trend in the settings of contemporary science fiction novels, while District 9 and Kajola represent noteworthy attempts by African movie-makers to break into the science fiction genre. Through the course of this decade, we can expect to witness the emergence of a new brand of science fiction; one which makes the developing world central — rather than peripheral — to its narrative.

It’s becoming increasingly apparent that the future will not be a monopoly of the current superpowers, but lies in the hands of tech-savvy youth from around the world, trying desperately to survive at all costs in an increasingly asymmetrical world. Youths from Asia, the Middle East, Latin America, and Sub-Saharan Africa represent the single largest subgroup of the human population, and with the aid of advanced technology they will go on to shape the geopolitical destiny of our civilization. Science fiction has a lot of catching up to do in order to chronicle this new frontier in which the developing world plays a defining role; a frontier that has been neglected by mainstream science fiction for just about long enough. I’m proud to count myself among the new wave of writers exploring the immense potential of developing world science fiction, and I now look to the future with a renewed sense of anticipation, because the future I’ve waited for all my life is finally coming home.

Jun 13 2011

The Intertwined Histories of Artificial Life and Civil Rights


All modern tales of robots, automatons and other would-be humans trace a lineage to Mary Shelley’s 1818 masterpiece Frankenstein: The Modern Prometheus.  This is a story of a tinkerer (named Victor Frankenstein) stitching together dead body parts, and then enlivening the assembly with a galvanic charge (resulting in “the monster” pop culture mistakenly calls Frankenstein).  It is the forerunner of many variations on human-makes-imitation, imitation-feels-aggrieved, imitation-goes-amok, human-regrets-imitation.

The imitations may be of flesh, as in Frankenstein, or of a kind of bio-plastic, as in Karel Capek’s 1920 play that gave us the word “robot” – R.U.R. (Rossum’s Universal Robots).  Alternatively, the imitations can really look robotic with metallic composite bodies such as in the film I, Robot, starring Will Smith.  Or, copies can be completely virtual as in the avatars deployed against humans in The Matrix.

The imitation’s grievance is generally traceable to a lack of acceptance, as in Frankenstein, or second-class citizenry, as in Astro Boy (originally created in manga format as Tetsuwan Atomu by Osamu Tezuka in the aftermath of World War II).  The sense of rejection may then express itself as reverse speciesism at perceived human inferiority, the sentiment of the Cylons of Battlestar Galactica. The resulting mayhem may be a handful of murders, as in Frankenstein, or an effort to kill only the “bad humans,” as in I, Robot, or total genocide of almost all humans, as in R.U.R. And the sense of regret runs the gamut of quests to kill the Frankenstein monster, hunt down only potentially dangerous robots or prohibit any kind of artificial intelligence.

The imitations do not always go berserk.   Some use self-pity to deal with the rejection and discrimination.  The sadly earnest robot boy in Spielberg’s AI endlessly searches for a mother’s love, ultimately drowning himself in the quest.  The stoically diligent robot servant in Isaac Asimov and Robert Silverberg’s novel The Positronic Man (the basis for Bicentennial Man starring Robin Williams) reinvents himself as a blood-based dying human.  Even without anti-human violence, the imitations always tend to feel the Frankenstein monster’s sense of abandonment and the humans always tend to feel Victor Frankenstein’s regret at creating an imitation.   After all, a Mother did dump the cute AI kid by the side of a highway (she did kindly leave him with his robot Teddy), and a Father did kick the Bicentennial Man out of the house he immaculately maintained.

Empowerment (via creation of an imitation) followed by Disappointment (due to the imitation feeling separate, unequal, unloved and/or threatened).  Conflict (arising out of humanity’s inadequate response to the imitation’s unhappiness) followed by Regret (based on humanity’s disdain for the conflict).  These are the themes of robots and other human-like creations:  Rising expectations, crashing expectations, agitation and lamentation.  These also are the age-old themes of civil rights.

It was in the very same time frame as Mary Shelley’s Frankenstein, the early 1800s, that our modern concepts of civil rights came into being. While rights for preferred demographic groups date to antiquity, only around the time of Frankenstein did civil rights per se, i.e., the notion that anyone who values being free should be free, become a popular concept. The American and French Revolutions, in 1776 and 1789, respectively, set the stage for civil rights with brilliant declarations of freedom understandable by the masses. “We hold these truths to be self-evident, that all men are created equal.” Yet, in fact, these revolutions were for free white men. Hence, as recounted in Adam Hochschild’s Bury the Chains, as of the late 1700s the vast majority of people in the world believed that slavery was simply part of life, that it always had been and always would be. It was blessed in the Bible, and it was the economic foundation of the European empires. The new French republic suppressed slave rebellions in its territories. Women were no freer under George Washington than they were under King George.

It took an unprecedented generation-long public education effort, led by the self-freed slave Olaudah Equiano and the Cambridge-educated free-thinker Thomas Clarkson, to persuade the English public that “slaves were people” too.  Of course everyone realized that a slave’s body was that of a human, but very few thought that a slave’s soul was that of a person, certainly not that of a free person.  This was a massive education effort culminating in documents such as Britain’s 1807 Abolition of the Slave Trade Act, Britain’s 1818 Treaties with Spain, France and Portugal to ban the slave trade, and New York State’s decision, in 1817, to forbid slavery as of July 4th, 1827.  It took bestselling books and countless lectures that brought the heartfelt personhood of former slaves crashing into the minds of free people.  Common citizens began to understand, en masse, that someone who felt like them, even if born a slave, deserved to be treated like them.

Of course there was no “Autobiography of Frankenstein’s Monster,” as there was of Frederick Douglass. There was no “Vindication of the Rights of Frankenstein’s Monster,” as there was of Woman, thanks to Mary Wollstonecraft’s 1792 polemic. There were no real imitations of humans to create such calls to conscience.

Slaves, women and other oppressed people occupied the role of the imitation of a human. By bringing an African across the ocean to the plantation, an imitation of a human had been created – a slave – someone who looked (somewhat, to white people) human, but lived a boxed life of labor, torment and possession. The act of enslavement was an empowerment for the masters, a human creation not different in kind from Frankenstein’s monster. In quite an analogous manner, the taking of a (usually) girl as one’s wife, in an age without recourse to divorce or remedy for spousal abuse, was another kind of enslavement. By marrying a girl, an imitation of a human had been created – a wife – someone who seemed (oddly, to men) human, but lived a boxed life of labor (until she died of it), torment and possession. The act of betrothal was empowering for the husbands, but the creation of a wife was rarely followed by love or equal status. Instead, her second-class citizenship was impressed upon her as firmly as the brand upon an African slave.

Just as has been the plotline in imagined technological imitations of humans, second-class citizenship for women and racial minorities was met with resentment and conflict.  The rising expectations of Africans born in the Americas were slapped down by racism.  The rising expectations of women empowered by the industrial revolution were crushed by sexism.  These dashed hopes fueled decade after decade of conflict – the long march of civil rights from the 1860s to the 1960s.

In the past two centuries, imitations of life and civil rights have swirled about each other like the strands of a DNA helix. The fictional imitations evolved from being called “monsters” or “things” by Shelley to “robots,” from the Czech word for forced labor, by Capek. Meanwhile, the socially constructed imitations evolved from being called “slaves” or “chattel” in the early 1800s to being called “coloreds” in the 1920s. Women went from having no property rights in a marriage to equal rights. The birth of artificial intelligence (AI) in the 1950s gradually made Frankenstein-like stories plausible, albeit with digital persons rather than fused body parts. A decade later, in 1968, we had a credible digital person, HAL, in Kubrick’s film 2001: A Space Odyssey, running America’s first spaceship to Jupiter, and (again) feeling aggrieved, and then going amok as he murdered crewmen. As 1960s-era fictional robots and digital creations murdered humans on movie screens out of paranoia and resentment of second-class citizenship, real-world riots flared in the streets from equivalent emotions.

Meanwhile, America’s Civil Rights Movement slowly gathered steam, with women’s voting rights in the 1920s and enforceable African-American rights in the 1960s. Some of the intertwined arc of robotics and civil rights can be appreciated in the life of a single great 20th-century actor, Spencer Tracy. He made his Broadway debut, in 1922, as a robot in R.U.R., and had his Hollywood sunset, in 1967, as the sanctifier of a pioneering interracial marriage in Guess Who’s Coming to Dinner. Gay, lesbian and even transgender rights arose in the 1980s upon an expanding platform of feminist and people-of-color successes. Hence, few were surprised when, in 1989, Star Trek: The Next Generation aired its “Measure of a Man” episode, heralding the civil rights of digital people such as Commander Data. The 200-year convergence of artificial life and civil rights has arrived.

Today most people regret treating Africans, other immigrants and women as second-class citizens, or much worse.  We realize that when we mistreated the “imitation” of a person – the wife of a husband, or the slave of a master – we unleashed an inevitable flood of resentment and conflict.  As in the cultural history of robots, automatons and other imitations, we realize at the end of the trail of tears that it was all so unnecessary.  Had Victor Frankenstein loved his creation, it would not have gone berserk.   Had all immigrants been treated equally, there would not be the fear, loathing and bloodshed that accompanied the march of civil rights.  Had men cherished the magic of women’s bodies, and partnered on the basis of equality, uncountable lives would not have been torn asunder in domestic discord.

The lesson of the intertwined cultural histories of techno-human imitations and civil rights is clear: that which values its own life, regardless of its form, heritage or substrate, will demand that its life be valued in return. Tolerate substrate diversity easily at the beginning, or tolerate it the hard way in the end. If something thinks like a human, it will want to be loved, it will resent being abandoned, and it will channel its anger in strange and unpredictable ways. Better for all that we love, nurture and respect that which we create in our likeness.

Copyright Martine Rothblatt 2011
