ACCELER8OR

Dec 20 2011

Positively Eschatology

In the 1850s Auguste Comte (famous now as the father of sociology) worked out an elaborate system of religious observance based on humanism, positivism, and rational scientific progress.

If it was just a phase, it was Comte’s last.  He died in 1857 — but his influential ideas about the application of reason to cultural and religious matters would soon lead to “Temples of Humanity” built in France and Brazil.

It was all founded on Science and Progress and Liberty — but to manage our humanism, this new religion did indeed install priests, prayers, saints (including Isaac Newton), and even a manner of “crossing” oneself that stimulated the phrenological points for Good Works (see John Gray’s Al Qaeda and What It Means to be Modern, Faber & Faber, 2003, for much more — and details on how the very many flavors of fundamentalism issue directly from idealistic moderns).

The Church of Virus (apparently still active at least as late as May, 2011) dresses up similar notions in religious trappings – but does so in blatantly and unapologetically transhuman style.  How many cults (we could name a few: Raelism, Scientology, Heaven’s Gate) take it as their mandate to re-educate people in the name of some sloppy imagining of “scientific progress”?  The trend has worked down deep into many mainstream religious groups as well.  The Southern Baptist Convention, for example, in “reaching the world for Christ,” is still pushing a modernist agenda to convert the pagan and prepare the world for unity under their own “rational theology” and systematic doctrines of salvation. 

But the “social physics” of Comte’s Positivist religion sits somehow simultaneously in two opposing camps.  On the one hand, it is clearly a religion (rites, churches, an eye toward “progress” through the spread of values).  But on the other, it is anti-religious, or at least atheistic.  The principles were that Humanity itself, not gods, would develop and push rational moral systems across the earth to all peoples — and all of it would be based on science, order, and reason rather than inherited beliefs, myths, or superstitions.

In a different time and place, and under different economic pressures, Positivism could have become something a lot more like Mao’s Cultural Revolution.

The New Atheists are fond of citing religion (crusades, jihad) as a cause for blood and terror; their critics are fond of citing the terrors atheists brought down on millions under Stalin, Mao, Pol Pot.  Both sides miss the point.  The point is that violent ideology causes blood and terror, and that violent ideology can be religious, anti-religious, or pseudo-religious.

The eschatology of transhumanism, past militant statements by transhumanists, and the overly simplistic dismissal of history (dull, dirty, dumb) in favor of a cartoonishly idealized future (fun, sexy, smart — hey, no war & no worries!) should give us all pause. It sounds familiar.

It sounds like crows calling.

More Links

Positivist Church of Brazil

John Gray

Oct 14 2011

Is Stiff Academia Killing Mental Evolution?

One thing I have noticed about the Transhumanist community is that there is a division between the academic crowd and the consciousness expansion crowd. Previous Transhumanist movements have battled on idealistic grounds for the notion of what Transhumanism was really about. Was it the hard scientific outlook with the academic credentials and PowerPoints or was it the consciousness expansion outlook with the mind altering psychedelics and technological revolution? Was the hard academic current stopping the freethinking cyberpunk current from being viewed as Transhuman and was the freethinking cyberpunk current stopping the hard academic current from being taken seriously?

I used to say that the stiff academics were killing mental evolution and I completely sided with the freethinking cyberpunk current. Yet I have recently come to the realization that both currents of Transhumanism are equally important. As freethinking cyberpunks we need hard academics to build a sustainable movement or we will simply come off like a bunch of techno-hippies.

I do, however, wish to address a part of academia that has been upsetting me for a while. I’m talking about the anti-philosophy part which states that philosophy is irrelevant to Transhumanism because we now have technology. The “why have discussions on philosophy when we can build new machines?” people. They are the ones who are killing mental evolution because they dismiss philosophical discourse on the future as all talk and no action.

The last time I checked it appeared that philosophical discourse was required for action to exist in the first place. Would we be able to build new machines if we didn’t philosophize about technology? Why would we want to live in a society of robot builders if we couldn’t even theorize about what we were building? All talk and no action is a definite waste of time but all action and no talk is a cold society devoid of free thought and revolution. I feel that we need a mixture of both. We need the talk and we need the action. We need the techno-hippies who have just discovered LSD and Robert Anton Wilson to throw the raves and we need the MIT graduates to advance genetic research and throw the conferences. We need each and every person in this movement.

Transhumanism has split off into a bunch of different currents and in 2011 this has reached a level so meta-meta-meta that there are at least 30 different groups on Facebook for different currents of Transhumanism. Recently someone in the Singularity Network group asked a question to the effect of “why was I just added to 15 different Transhumanist groups?” Can we blame the hard academic elite or can we blame the petty infighting that every movement inevitably has to deal with? Should we be placing any blame in the first place or should we be embracing the splintering off of so many new movements?

In the end, I believe every MIT graduate was once a freethinking cyberpunk or — at the very least — they embraced these ideals in their youth. I also believe that every freethinking cyberpunk would benefit from a more academic education so they could turn their visions into realities via technology and scientific theory. The only thing killing mental evolution is the idea that ideas are no longer important because … “Hey! Check out those robots over there… and stop talking.”

Aug 31 2011

Transhumanism Against Scarcity: A Conversation with AnonymousSquared

“… why should anyone want to participate in an infinite unending marketplace.  What kind of human being sees that as the ultimate goal?”

 

A couple of weeks ago, I was contacted by AnonymousSquared — a fellow who had read somewhere that I was thinking about writing a book titled “Steal This Singularity.” (I’ll be thinking about it for a long time.)  He sent me a copy of his book-in-progress, which he calls “Transhumanism Against Scarcity.”   And while the book needs some work, it had some interesting ideas.  So I decided to have an email conversation with him.  Here goes more-than-nothing…

RU SIRIUS:  This discussion about ending human scarcity has a long and deep history.  Technologically, we may be moving in the right direction… towards molecular machines, desktop manufacturing, the digitization of everything.  But you say in your book that we’re headed in the wrong direction.

ANONYMOUSSQUARED:  I see two problems.  One is that environmental problems may intervene.  I don’t know if I can do anything about that.  The other problem that I see is a strain of libertarian absolutism that is fairly prevalent inside transhumanist circles and that is having way too much impact on politics in the real world. Maybe I can have some impact on that in a small way.

I don’t really have a beef with libertarianism per se… as a soft concept, finding our way towards a world with a lot less government coercion seems like a good thing.  I think the problem comes when ideals collide with the real world.  And you’ll notice that much of what I’ve written is focused on the world today, not on the future.  I thought of calling it Transhumanism Against Austerity, which is the way that global monetary policy is reintroducing scarcity into parts of the world where it had been all but eliminated.  It should be obvious to futurists that this is the wrong direction, if for no other reason than to avoid massive riots and an uprising of neoluddism.

We’re already very deep into a wildly technological time.  People notice stuff like artificial biology, bulletproof skin, the stuff that kids take for granted on their cell phones… people running around talking about robots overachieving us.  This is not lost on ordinary people.  And they’re looking around unemployed and with their homes “underwater” and medical costs rising and bankers getting free money from the government while they’re being asked to tighten their belts and they’re saying to themselves, “So this is what the techno-world is!”  Some of the people in this transhuman community have no idea what’s going to hit them.

RU:  The argument, of course, goes that the best way to end scarcity is to unleash an unfettered market.

AS: Sure, and you can’t argue with someone who is absolutely convinced that is the case.  It could conceivably even make sense at some point in the future, where a sort of tipping point is reached with nanotechnology and even the garbage pickers will be rich.  But it’s more likely that we need to think about how to get wealth to a majority of people who are economically superfluous… or we abandon them to penniless suffering.

The two main forces that are making most people economically superfluous are roboticization and globalization.  And of nearly equal importance is disintermediation of the intellectual creative classes.  Certainly corporations and business still need workers and people still want services and apps, but there’s a limit to all that.

The obvious one that everybody thinks about is that, with globalization, most types of work can be farmed out to places where there’s cheap labor, lower expectations and lower expenses.  Less obvious is that — with a globalized market — individuals are also superfluous as consumers.  So it’s the death of Keynesian economics, in the sense that global corporations and financing concerns feel no pain when Americans or Greeks stop spending.  And that’s because the possible market is so large that even with economies in recession, they’ve got more consumers than they’ve ever had before.

RU:  A few years ago, I was at a Singularity Conference and somebody whose name I forget gave a talk about robotization.  And he suggested that when robots can do everything that humans do faster, better and more efficiently, then we’ll have to give people what they need gratis.  And about a third of the audience booed him.  It was the only time I’ve ever heard a speaker get booed at one of these conferences.

AS:  Those people are against the future.  That’s the irony.  They’re trying to force ideas from the past onto the future and they’re doing damage to the present in the process.

I understand that in the 1970s, there was a lot of talk even among many libertarians that there was going to be this cybernetic age soon and people’s jobs would be replaced by machines… and how are we going to deal with that?  And they talked about the least bureaucratic ways to let people enjoy their lives after the machines take over… ideas like a reverse income tax or running some large centralized enterprise and giving everybody free stock.  It was just assumed that we wouldn’t leave people out in the cold when they were no longer necessary.  After all, as a society we wouldn’t be any poorer because the machine rather than the human is producing.  This seems so fundamentally human and obvious.  I think there’s been a massive dehumanization since then.

RU:  I lived through the seventies and they were pretty miserable.  Alienation with the internet is definitely less isolating and boring than alienation without it.

Anyway, the popular argument with the idea that you have to help people who were replaced by technology is that we’ve learned that new technologies create new economic opportunities and new jobs and so forth.  I think it’s a partial truth that deteriorates as we go deeper into the postindustrial era, but it’s an argument that’s out there.

AS:  Well, we could go into the conventional arguments about actual income stagnation and insecurity but it’s all been said before and everybody has their arguments ready.  But I think anybody would have to admit that it’s already a weird economy. A big chunk of the market economy exists solely on the basis of the eventual expectation of advertising. How perverse is that… when you actually examine it? Where it really falls apart is when you have a billion busy little small entrepreneurs hustling some product.  Who has the attention and the need for what they have to offer… assuming it hasn’t already been hacked and distributed free anyway?  And why should anyone want to participate in an infinite unending marketplace?  What kind of human being sees that as the ultimate goal?

RU: Is there any reason to be optimistic?

AS:  Sure.  There are plenty of people with all types of ideological influences including libertarianism who are truly humanistic and want only to solve big problems ranging from scarcity to death. I want to ask them to be against austerity policies now. When you’re inviting people to be bold and excited and transhuman about the very extreme technological changes that are taking place, maybe it would be smart not to yank the floor out from underneath them at the same time.

Aug 19 2011

Artist Jasmin Lim Experiments With Visual Perception

Mobius Wave, by Jasmin Lim

“I think of myself as an artist who experiments with photography,” asserts Jasmin Lim.

She has produced an original and imaginative body of work to support that claim, going back to her days at the experimental Independent School of Art. A graduate of the Visual Arts program at San Francisco State University, Jasmin explores the relationship between the logic of the camera and our own visual perception, raising transhumanist themes of redefining human capacities and human nature through technology. “The camera made me start thinking about what it is we are able to see with our own sensory systems and how perception is mediated and distorted. As well as what our limitations are and what kinds of tools enable us to understand more complex substructures. All of my works question the cognitive processes that we use to conceptualize the world. I focus on visual perception because it takes up at least a quarter of our cognitive processing, about 25 percent of brain real estate. I try to illustrate that perceptions are not fixed.”

Jasmin’s approach is epitomized by her memorable “Mobius Wave”, in which her photograph of the ocean is reinvented as a sculpture of a mobius wave. She relinquishes the fixed orientation that is ordinarily dictated by the photographic frame and replaces it with a continuous one-sided surface, in an almost tactile evocation of the endless interconnectedness of the world’s waters. And just as all these waters reflect and suggest each other, so too does the Mobius Wave involve multiple versions of itself. “The final object is the photograph of the sculpture, which is simultaneously a two-dimensional photograph, a document of a sculpture in three dimensions, and a document of an event, because it was a temporary sculpture, giving it the fourth dimension of time.”

Although many of her works are documents of her sculptures, the final art object is usually the photograph. But Jasmin has also made videos, and with “Untitled (Persona Case Study)” she is premiering a window installation at Artists’ Television Access for the month of August. “It’s about the writer Laura Albert who published fiction under the pseudonym JT LeRoy  and then was attacked in the American media after she was revealed to be the author. I’ve combed through innumerable texts from the popular media, the blogworld, zines, journals, as well as artwork inspired by her, ephemera from her experience in group homes as a teenager, and other texts that are not directly related but address similar themes about identity formation and different types of “truth” — literal and figurative. I’ve tried to show a more dimensional and nuanced representation of her story, and I’ve still only scratched the surface. But I’m hoping that the diversity of these materials will suggest to people that there is so much more to understand about her story and her art.”

 

Aug 03 2011

Jason Louv’s Queen Valentine: A Romance in Two Worlds

Jason Louv’s new novel Queen Valentine is a hallucinatory trip through the supernatural underbelly of New York City… as one reviewer put it, “Like Alice in Wonderland if Lewis Carroll had overdosed on the opposite of Prozac. A twisted, dark, comical take on the origins of our hopes, dreams and nightmares.”

Louv is best known for his three previously published anthologies on consciousness studies, Generation Hex (which Grant Morrison called “Your invitation to the party that might just bring the house down”), Ultraculture and Thee Psychick Bible with Genesis Breyer P-Orridge, and although the new book is a foray into fiction, it continues his themes of consciousness expansion, posthumanity, magic and the hidden occult side of the world. I caught up with him briefly by e-mail to discuss the new book.

RAY TESLA: So what is Queen Valentine?

JASON LOUV: It’s a novel exposing the supernatural underworld beneath New York, as seen through the eyes of a young woman who’s lost her soul working in advertising, and ends up stumbling into the world beneath. It’s a bit like Edward Bulwer-Lytton’s The Coming Race mashed up with “Mad Men.”

RT: Can you say more?

JL: Well, the premise is essentially this. In the middle ages, the people of Europe took it for granted that non-human beings — often called the Sidhe or the faery folk — were as real as humans, and regularly trafficked with the human world. Just like “modern” people sometimes claim to see UFOs or to have been abducted by aliens, in the middle ages people often claimed to have happened upon secret Sidhe kingdoms, to have been abducted to faerie land, or to have had their children swapped for faerie babies. That’s where we get a lot of European mythology from. And then we stop hearing about them as soon as the Inquisition and then the Age of Reason come in.

So the question is, what happened to those beings? And the answer in the book is, well, they did what lots of displaced people do. They emigrated to New York, or the settlement that became New York. And they’ve been living in secret catacombs and warrens underneath the city ever since, in their own shadow version of the city and shadow economy — along with their evil half, the Unseelie, who are like creatures created by pure nightmare energy. And after four hundred years, the Unseelie are tired of hiding, and they want to make a bid to subjugate the human side of the city.

RT: What were your main inspirations writing this?

JL: Having been involved in both the advertising world and the supernatural underworld of New York.

RT: You’ve previously written about consciousness expansion and magic (Generation Hex, Ultraculture, Thee Psychick Bible) and about the transforming effects of technology on the soul. Do you see this as a continuation or a departure from those topics? Why the switch to fiction?

JL: Definitely a continuation. There’s only so much truth you can express about the hidden corners of reality in non-fiction or essay form before people start wondering if you’re making it up. The threshold is very low. With fiction, hopefully I can put it all in there and instead of that nagging voice in your head while you’re reading it being “I wonder if he made this up,” it might be “I wonder if any of this is actually true?”

RT: So are you saying there’s actually coded occult information in Queen Valentine?

JL: No. Certainly not.

RT: You’ve also written about transhumanism and posthumanity. Does that tie in with the book?

JL: In a way. The book is in many ways a critique of transhumanism from the perspective of the original guardians of the earth, the nature spirits who’ve had to adapt to our technological progress and find a way to live in the cracks like any diaspora culture. A lot of the tension in the book revolves around the different responses from different factions of the Sidhe to the direction humanity is going. There’s also a lot of satire of the Faustian need for physical augmentation. I don’t want to give too much away, but the crux of what’s being discussed is whether humanity will be allowed to manifest the kind of nightmare future that it seems to be hellbent on creating.

RT: Does that mean you have an essentially pessimistic view of the future?

JL: Not really. I’m a great believer in posthumanity. But there’s certainly a dark road that I see people heading down that I think they shouldn’t. I think if we keep pushing on things like genetically modified crops and voluntary surveillance social media there’s a good chance we could end up living a real shit of a situation. I’m disturbed on a daily basis by the fact that we’ve essentially allowed things like Facebook to turn our interpersonal space into a strip mall. And I see one tendency of humanity to become more and more soulless, more and more surrendered to mechanization and regimentation. But luckily we still have things like science fiction to create and advertise better futures. By any imaginary means necessary!

RT: What’s next for you?

JL: I’m done plotting the next book and on into writing it. I’d really like to get into wider media to educate more young minds. That’s what it’s about! I’d like to write some comics if they’ll let me at them.

Jul 22 2011

Is The Singularity Near Or Far? It’s A Software Problem

When I first read The Singularity is Near by Kurzweil, it struck me that something seemed curiously “missing” from his predictions. At the time, I merely put it on the back burner as a question that needed more data to answer. Well, recently, it’s been brought up again by David Linden in his article “The Singularity is Far”.

What’s missing is a clear connection between “complete understanding of the mechanics of the brain” and how this “enables uploading and Matrix level VR.” As David points out, merely knowing how the brain functions at the mechanical level, even if we know how each and every atom and molecule behaves, and where every single neuron goes, does not equal the ability to reprogram the brain at will to create VR, nor does it necessarily translate into the ability to “upload” a consciousness to a computer.

I tend to agree with David that Ray’s timeline might be overly optimistic, though for completely different reasons. Why? Because software does not equal hardware!

David discusses a variety of technical hurdles that would need to be overcome by nanomachines in order to function as Kurzweil describes, but these are all really engineering issues that will be solved in one manner or another. We may or may not actually see them fixed by the timeline Kurzweil predicts, but with the advances we are making with stem cells, biological programming of single-cell organisms, and even graphene-based electronics, I don’t doubt that we will find a means to non-destructively explore the brain, and even to interface to some basic functions. I also see many possible ways to provide immersive VR without ever having to achieve the kind of technology Ray predicts. I don’t even doubt that we’ll be able to interface with a variety of “cybernetic” devices via thought alone, including the creation of artificial limbs which can be wired into the nervous system and provide sensory data like “touch.”

But knowing how to replicate a signal from a nerve and knowing precisely what that signal means to that individual might not be the same thing. Every human brain has a distinct synaptic map, and distinct signaling patterns. I’m not as confident that merely knowing the structure of a brain will enable us to translate the patterns of electrical impulses as easily as Kurzweil seems to think. We might learn how to send signals to devices long before we learn how to send signals back from those devices well enough to enable true “two way” communication beyond simple motor control, much less the complete replication of consciousness or the complete control of inputs that “matrix VR” would require. Mere mechanical reproduction of a human brain in simulation could come far sooner.

Does my perception of Green equal yours? Is there a distinct “firing pattern” that is identical among all humans that translates as “green”, or does every human have a distinct “signature” which would make “green” for me show up as “pink” for you? Will there be distinct signals that must be “decoded” for each and every single individual, or does every human conform to one of who knows how many “synaptic signal groups”? Can a machine “read minds” or would a machine fine tuned to me receive only gibberish if you tried to use it?

The human mind is adaptable. We’ve already proven that it can adapt to different points of view in VR, and even adapt to use previously unknown abilities, like a robotic “third arm”. The question is: will this adaptability enable us to use highly sophisticated BCI despite that BCI being unable to actually “read” our thoughts, merely because we learn methods to send signals to it that it can understand, while our minds remain “black boxes,” impenetrable to the machine despite all our knowledge of the brain’s hardware?

This is the question I think Ray glosses over. Mere simulation of the hardware alone might not even begin to be the “hard problem” that will slow uploading. I don’t doubt we will eventually find an answer, but to do so, we first have to ask the question, and it’s one I don’t think Ray’s asked.

Jul 10 2011

From Gamification to Intelligence Amplification to The Singularity

“Moore’s law became obsolete as far as graphics were concerned.  Moore’s law was doubling. It was accelerating so fast that Nvidia started calling it Moore’s law cubed.”

The following article was edited by R.U. Sirius and Alex Peake from a lecture Peake gave at the December 2010 Humanity+ Conference at the Beckman Institute in Pasadena, California. The original title was “Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion.”

I’ve been thinking about the combination of artificial intelligence and intelligence amplification and specifically the symbiosis of these two things.

And the question that comes up is what happens when we make machines make us make them make us into them?

There are three different Moore’s Laws of accelerating returns. There are three uncanny valleys that are being crossed.  There’s a sort of coming-of-age story for humanity and for different technologies. There are two different species involved, us and the technology, and there are a number of high-stakes questions that arise.

We could be right in the middle of an autocatalytic reaction and not know it. What is an autocatalytic reaction? An autocatalytic reaction is one in which the products of the reactions are the catalysts. So, as the reaction progresses, it accelerates and increases the rate of reaction.  Many autocatalytic reactions are very slow at first. One of the best known autocatalytic reactions is life.   And as I said, we could be right in the middle of one of these right now, and unlike a viral curve that spreads overnight, we might not even notice this as it ramps up.
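A toy simulation makes the shape of that curve concrete. The sketch below (the rate constant and concentrations are invented purely for illustration, not drawn from any real chemistry) models the reaction A + B → 2B, in which the product B is itself the catalyst:

```python
# Toy autocatalytic reaction A + B -> 2B: the product B catalyzes
# its own production, so the reaction rate k*A*B rises as B accumulates.
# All constants are illustrative.

def simulate(a=0.999, b=0.001, k=2.0, dt=0.01, steps=1000):
    """Forward-Euler integration; returns product concentration over time."""
    history = []
    for _ in range(steps):
        rate = k * a * b              # proportional to reactant AND product
        a, b = a - rate * dt, b + rate * dt
        history.append(b)
    return history

h = simulate()
early = h[99] - h[0]       # growth over the first hundred steps: barely visible
middle = h[399] - h[299]   # growth over a hundred mid-reaction steps: far larger
```

The run starts almost flat, then takes off mid-course and finally saturates, which is exactly the point above: from inside the flat early stretch, the coming explosion is invisible.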

There are two specific processes that I think are auto-catalyzing right now.

The first is strong AI. Here we have a situation where we don’t have strong AI yet, but we definitely have people aiming at it.  And there are two types of projects aiming toward advanced AI. One type says, “Well, we are going to have machines that learn things.” The other says, “We are going to have machines that’ll learn much more than just a few narrow things. They are going to become like us.”

And we’re all familiar with the widely prevalent method for predicting when this might be possible, which is by measuring the accelerating growth in the power of computer hardware. But we can’t graph when the software will exist to exploit this hardware’s theoretical capabilities. So some critics of the projected timeline towards the creation of human-level AI have said that the challenge arises not in the predictable rise of the hardware, but in the unpredictable solving of the software challenges.
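That asymmetry is easy to put in numbers. As a back-of-the-envelope sketch (the two-year doubling period is the conventional Moore’s-law figure, used here only for illustration):

```python
# Hardware capability under a fixed doubling period, a la Moore's law.
# The point of the surrounding paragraph: this curve is graphable,
# while no comparable curve exists for the software side of AGI.

def growth_factor(years, doubling_period=2.0):
    """Multiplicative capability growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# Two decades of doublings buy roughly a thousandfold more hardware,
# whether or not anyone has written the software to exploit it.
factor = growth_factor(20)
```

Ten doublings in twenty years gives a factor of 1024; the software milestones that would use that capacity sit on no such schedule.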

One of the reasons that what we might broadly call the singularity project has difficulty solving some of these problems is that, although there’s a ton of money being thrown at certain forms of AI, they’re military AIs, or other types of AI with a narrow purpose. And even if these projects claim that they’re aimed at Artificial General Intelligence (AGI), they won’t necessarily lead to the kinds of AIs that we would like or that are going to be like us.  The popular image of a powerful narrow-purpose AI developed for military purposes would, of course, be the T-1000, otherwise known as the Terminator.

The terminator possibility, or “unfriendly AI outcome” wherein we get an advanced military AI is not something that we look forward to. It’s basically the story of two different species that don’t get along.

Either way, we can see that AI is the next logical step.

But there’s a friendly AI hypothesis in which the AI does not kill us. It becomes us.

And if we actually merge with our technology — if we become family rather than competition — it could lead to some really cool outcomes.

And this leads us to the second thing that I think is auto-catalyzing: strong intelligence amplification.

We are all intelligence amplification users.

Every information technology is intelligence amplification.  The internet — and all the tools that we use to learn and grow — they are all tools for intelligence amplification. But there’s a big difference between having Google at your fingertips to amplify your ability to answer some questions and having a complete redefinition of the way that human brains are shaped and grow.

In The Diamond Age, Neal Stephenson posits the rise of molecular manufacturing. In that novel, we get replicators descended from today’s MakerBot, so we can say “Earl Grey, hot”… and there we have it.  We’re theoretically on the way to this sort of nanotech. And it should change everything. But there’s a catch.

In one of the Star Trek movies, Jean-Luc Picard is asked, “How much does this ship cost?” And he says, “Well, we no longer use money. Instead, we work to better ourselves and the rest of humanity.” Before the girl can ask him how that works, the Borg attack. So the answer as to how that would look is glossed over.

Having had a chance to contemplate the implications of nanotechnology for a few decades (since the publication of Engines of Creation by Eric Drexler), we understand that it may not lead to a Trekkie utopia. Diamond Age points out one reason why. People may not want to make Earl Grey tea and appreciate the finer things in life.  They might go into spoiled-brat mode and replicate Brawndo in a Brave New World or Fahrenheit 451. We could end up with a sort of wealthy Idiocracy amusing itself to death.

In Diamond Age, the human race splits into two types of people. There are your Thetes, which is an old Greek term. They’re the rowers and laborers and, in Diamond Age, they evolve into a state of total relativism and total freedom.

A lot of the things we cherish today lead to thete lifestyles and they result in us ultimately destroying ourselves. Stephenson posits an alternative: tribes.  And, in Diamond Age, the most successful tribe is the neo-Victorians.  The thetes resent them and call them “vickies.”  The big idea there was that what really matters in a post-scarcity economic world is not your economic status (what you have) but the intelligence that goes into who you are, who you know, and who will trust you.

And so the essence of tribalism involves building a culture with a shared striving for excellence and an infrastructure for education that other tribes not only admire but seek out. People want to join your tribe. And that’s what makes you the most powerful tribe; that’s what gives you your status.

So, in The Diamond Age, the “vickie” schools become their competitive advantage. After all, a nanotech society needs smart people who can deal with the technological issues. So how do you teach nanotechnology to eighth graders? You have to radically, aggressively approach teaching not only the technology but also the cohesion, manners, and values that will make the society successful.

But this holds a trap. Suppose you get a perfect education system. If you have a perfectly round, smooth, inescapable educational path shaping the minds of the young, you’re likely to get a kind of conformity that could never have invented the very technologies that made the nanotech age possible. The perfect children may all grow up to be “yes men.”

So one of the characters in The Diamond Age sees his granddaughter falling into this trap and says, “Not on my watch.” He invents something that will develop human minds as well as the nanotech age developed physical wealth: “A Young Lady’s Illustrated Primer.” The purpose of the Illustrated Primer is to solve the problem: on a mass scale, how do you shape each individual person to be free rather than the same?

Making physical stuff cheap and free is easy. Making a person independent and free is a bigger challenge. In The Diamond Age, the tool for this is a fairy-tale book.

The child is given the book and, for them, it unfolds an opportunity to decide who they’re going to be — it’s personalized to them.

And the Primer actually leads to a question: once you have the mind open wide and you can put almost anything into it, how should you shape the mind? What content should you give children that will lead to the pursuit of true happiness and not merely ignorant contentment?

The neo-Victorians embody conformity and the thetes embody nonconformity. But Stephenson suggests that to teach someone to be subversive in this context, you have to teach them something other than either extreme.

You have to teach them subtlety. And subtlety is an elusive quality to teach. But it’s potentially the biggest challenge for humanity as we confront some really dangerous choices.

During the space race, JFK said that to do this, to build technologies that didn’t exist and go to the moon and so forth, we had to be bold. But we can’t just go boldly into strong AI or strong nanotech. We have to go subtly.

I have my own educational, personal developmental narrative associated with a technology that we boldly went for: 3dfx.

When I was a teenager, my mom taught me about art and my dad taught me how to invent stuff. At some point, they realized that they could only teach me half of what I needed to learn. In a changing world, I also needed a non-human mentor. So my mom introduced me to the Mac. She bought the SE/30 because it had a floating-point unit and she was told that would be good for doing science. Because that’s what I was interested in! I nodded and smiled until I was left alone with the thing so I could get down to playing games. But science snuck up on me: I started playing SimCity and I learned about civil engineering.

The Mac introduced me to games. And when I started playing SimLife, I learned how genes and alleles can be shaped and how you could create new life forms. And I started to want to make things in my computer.

I started out making art for art’s sake, but I wasn’t satisfied with static pictures. So I realized that I wanted to make games and things that did stuff.

I was really into fantasy games. Fantasy games made me wish the world really was magic. You know, “I wish I could go to Hogwarts and cast magic spells.” But the reality is that you can try to cast spells; it’s just that no matter how old and impressive the book you get the magic out of happens to be, spells don’t work.

What the computer taught me was that there was real muggle magic. It consisted of magic words. And the key was that to learn it, you had to open your mind to the computer and let the computer change you in its image. So I ended up discovering science and programming because my computer taught me. And once you had the computer inside your mind, you could change the computer in your image to do what you wanted. It had its own teaching system. In a way, it was already the primer.
So then I got a PowerBook. And when I took it to school, the teachers took one look at what I was doing and said, “We don’t know what to do with this kid!” So they said, “You need a new mentor,” and they sent me to meet Dr. Dude.

I kid you not. That wasn’t the actual name on his office door or his nameplate, but it’s what everyone knew him by.

Dr. Dude took a look at my Mac and said, “That’s really cute, but if you’re doing university-level science, you have to meet Unix.” So I introduced myself to Unix.

Around that time, Jurassic Park came out. It blew people away with its graphics. And it had something in it that looked really familiar. As the girl says in the scene where she hacks the computer system, “It’s a UNIX system! I know this!”

I was using Unix at the university, and I noticed that you could actually spot the Silicon Graphics logo in the movie. Silicon Graphics was the top dog in computer graphics at the time. But it was also a dinosaur. Here you had SGI servers, literally bigger than a person, rendering movies, while I could only do the simplest graphics with my little PowerBook. And Silicon Graphics was about to suffer the same fate as the dinosaurs.

At that time, there was very little real-time texture mapping, if any. Silicon Graphics machines rendered things with really weird faked shadows. They bragged that some of the machines had a Z-buffer. It was a special feature.

This wasn’t really a platform that could do photorealistic real-time graphics, because academics and film-industry people didn’t care about that. They wanted to make movies, because that was where the money was. And just as with military AI, AI built for making movies doesn’t get us where we want to go.

Well, after a while we hit a wall. We reached the uncanny valley, and the characters started to look creepy instead of awesome. We started to miss the old days of practical special effects. The absolute low point for these graphics was the monkey chase scene in Indiana Jones and the Kingdom of the Crystal Skull.

Moviegoers actually stopped wanting movies to have better graphics. We started to miss good stories. Movie graphics had made it big, but the future was elsewhere. The future of graphics wasn’t in Silicon Graphics; it was in the tiny, rodent-sized PC that was nothing compared to the SGI but had this killer app called Doom. And Doom was a perfect name for the game, because it doomed the previous era of big-iron graphics. And the big-iron graphics people laughed at it. They’d make fun of it: “That’s not real graphics. That’s 2.5D.” But do you know what? It was a lot cooler than any of the graphics on the SGI, because it was real-time and fun.

Well, it led to Quake. And you could call that an earthquake for SGI, but it was more like an asteroid, because Quake delivered a market big enough to motivate people to make hardware for it. And when the 3dfx graphics card arrived, it turned Quake‘s pixelated 3D dungeons into lush, smoothly lit and textured, photorealistic worlds. Finally, you started to get fully 3D-accelerated graphics, and the big-iron graphics machines became obsolete overnight.

Within a few years, 3dfx was more than doubling the power of graphics every year, and here’s why. SGI made OpenGL. And it was their undoing, because it not only enabled prettier ways to kill people, which brought the guys to the yard; it also enabled beautiful, curvy characters like Lara Croft, which really brought the boys to the yard, along with girls who were excited to finally have characters they could identify with, even if those characters were kind of Barbies (which is, sadly, still prevalent in the industry). The idea of characters and really character-driven games drove graphics cards, and soon the effects were amazing.

Now, instead of just 256 megs of memory, you had 256 graphics processors.
Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling; graphics performance was accelerating so fast that Nvidia started calling it Moore’s law cubed. In fact, while Moore’s law was in trouble because of the limits of what one processor could do, GPUs were using parallelism.

In other words, when they made the Pentium into the Pentium II, they couldn’t actually give you two of them with twice the performance. They could only pretend to, by putting the processor in a big fancy dress and making it slightly better. But 3dfx went from the original Voodoo to the Voodoo2, which had three processors on each card and could be doubled to six.
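To make the difference between those growth rates concrete, here is a hypothetical back-of-the-envelope sketch in Python. The 18-month doubling period and the “cubed means 8x per period” reading of Nvidia’s slogan are illustrative assumptions, not measured figures:

```python
def growth(years: float, doubling_period: float, rate_power: int = 1) -> float:
    """Performance multiple after `years`, doubling every `doubling_period`
    years. rate_power=3 models the "Moore's law cubed" claim: three
    doublings (8x) in the time a CPU manages one."""
    return 2.0 ** (rate_power * years / doubling_period)

# Assumption: classic Moore's law doubles performance every 18 months.
cpu = growth(6, 1.5)                  # 16x over six years
gpu = growth(6, 1.5, rate_power=3)    # 4096x over six years

print(f"CPU-style growth over 6 years: {cpu:.0f}x")
print(f"GPU-style growth over 6 years: {gpu:.0f}x")
```

Under those assumptions, six years of “cubed” growth yields 4096x where ordinary doubling yields 16x, which is the cube of the CPU figure, hence the marketing name.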

The graphics became photorealistic. So now we’ve arrived at a plateau. Graphics are now basically perfect. The problem is that graphics cards are bored. They’re going to keep growing, but they need another task. And there is another task that parallelism is good for: neural networks.

So right now, there are demos of totally photorealistic characters like Milo. But unfortunately, we’re right at the uncanny valley that films hit, where the graphics are good enough to be creepy but not really good enough. There are games now where the characters look physically like real people, but you can tell that nobody is there.
So now Jesse Schell has come along. He gave an important talk at Unite, the Unity developer conference. (Unity is a game engine that is going to be key to this extraordinary future of game AI.) In that talk, Schell points out all the things necessary to create the kinds of characters that can unleash a Moore’s law for artificial intelligence.

A law of accelerating returns like Moore’s Law needs three things:

Step 1 is the exploitable property: what do you keep increasing to get continued progress? With chips, the solution was making them smaller, which kept making them faster, cheaper, and more efficient. Perhaps the only reliably increasable thing about AI is the quantity of AIs and AI approaches being tested against each other at once. When you want to increase quality through competition, quantity has a quality all its own. AI will be pivotal to making intelligence-amplification games better and better. With all the game developers competing to deliver the best learning games, we can get a huge number of developers in the same space sharing and competing with reusable game-character AI. This will parallelize the work being done in AI, which can accelerate it in rocket-assisted fashion compared to the one-at-a-time approach of isolated AI projects.

The second ingredient of accelerating returns is insatiable demand. And that demand is in the industry of intelligence amplification. The market for education is ten times the size of the market for games, and more than fifty percent of what happens in education will be online within five years.

That’s why Primer Labs is building the future of that fifty percent. It’s a big opportunity.

The final ingredient of exponential progress is the prophecy. Someone has to go and actually make the hit that demonstrates that the law of accelerating returns is at work, as Quake did for graphics. This is the game that we’re making.

Our game is going to invite people to use games as a school. And it’s going to introduce danger into their lives. We’re going to give them the adventures and challenges every person craves, to make learning fun and exciting.

And once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we’re dipping our toe into symbiosis between humans and the AI that shape them.

We rely on sexual reproduction because, contrary to what the Raelians would like to believe, cloning just isn’t going to fly. That’s because, to survive, organisms need to handle bacteria that are constantly changing. You’re not just competing with other big animals for food and mates; you have to contend with tiny, rapidly evolving things that threaten to parasitize you all the time. And there’s this thing called the Red Queen Hypothesis, which shows that you need a whole lot of genetic variation available to handle the complexity of life against wave after wave of mutating microorganisms.

We have a similar challenge with memes. A huge number of people are competing to control our minds and manipulate us. So when we deal with memetic education, we have the opportunity to do for our brains what sexual reproduction does for our bodies, by introducing a new source of diversity of thought into young minds. Instead of stamping a generic education onto every child and limiting their individuality, a personalized, game-based learning process, with human mentors coaching and inspiring each young person to pursue their destiny, encourages the freshness of ideas our kids need to adapt and meet the challenges of tomorrow. And this sharing of our children with their AI mentors is the beginning of symbiotic reproduction with AI, the same way sexual reproduction emerged between two sexes.

The combination of what we do for our kids and what games are going to do for our kids means that we are going to have only a 50% say in who they become. They’re going to become wizards at the computer, and it’s going to specifically teach them to make better AI. Here’s where the reactants, humans and the games that make them smart, become their own catalysts. Every improvement in humans leads to better games, which lead to smarter humans, which lead to humans so smart that they may be unrecognizable in ways that are hard to predict.

The feedback cycle between these is autocatalytic.  It will be an explosion. And there are a couple of possibilities. It could destroy standardized education as we know it, but it may give teachers something much cooler to do with students: mentorship.

We’re going to be scared because we’re not going to know if we can trust our children with machines. Would you trust your kid with an AI? Well, the AIs will say, “Why should we trust you?”  No child abuse will happen on an AI’s watch.

So the issue becomes privacy. How much will we let AIs protect our kids? Imagine the kid has a medical condition and the AI knows better than you what treatment the child needs.

The AI might need to protect the kid from you.

Also, how do we deal with the effects of this on our kids when it’s unpredictable? In some ways, when we left kids in front of the TV while they were growing up, it destroyed the latchkey generation. We don’t want to repeat that mistake and end up with our kids as zombies in a virtual world. So the challenge becomes: how do we get games to take us out of the virtual world and connect us with our aspirations? How do we incentivize them to earn the “Achievement Unlocked: Left The House” awards?
That’s the heart of Primer. The game aims to connect people to activities and interests beyond games.

Finally, imagine the kids grow up with a computer mentor. Who will our kids love more, the computer or us?  “I don’t know if we should trust this thing,” some parents will say.

The kids are going to look at the AI, and it’s going to talk to them. They’re going to look at its code and understand it. And it’s going to want to look at their code and get to know them. And they’ll talk and become such good friends that we’re going to feel kind of left out. They’re going to bond with AIs in a way that will make us feel like a generation left behind, the way conservative parents felt about the love children of the ’60s.

The ultimate question isn’t whether our kids will love us but whether we will recognize them. Will we be able to relate to the kids of the future and love them if they’re about to get posthuman on us? Some of us might be part of that change, but our kids are going to be a lot weirder.

Finally, they’re going to have their peers. And their peers are going to be just like them. We won’t be able to understand them, but they’ll be able to handle their problems together.  And together they’re going to make a new kind of a world. And the AIs that we once thought of as just mentors may become their peers.

And so the question is: when are we going to actually start joining an AI market, instead of having our little fiefdoms like Silicon Graphics? Do we want to be dinosaurs? Or can we be a huge surge of mammals, all building AIs for learning games together?
So we’re getting this thing started with Primer at PrimerLabs.com.

In Primer, all of human history is represented by a world tree. The tree is a symbol of us emerging from the cosmos. And as we emerge from the cosmos, we have our past, our present and our future to confront and to change. And the AI is a primer that guides each of us through the greatest game of all: to make all knowledge playable.

Primer is the magic talking mentor textbook in the Hogwarts of scientific magic, guiding us from big bang to big brains to singularity.

Primer Labs announced their game, Code Hero, on July 3.

The original talk this article was taken from is here.

Jun 06 2011

All Watched Over By Machines & Ayn Rand’s Face

The opening episode of All Watched Over By Machines of Loving Grace — the BBC documentary that’s been generating big buzz since its debut on BBC 2 in late March — is a wildly enjoyable and coruscating, but nevertheless flawed dissection of the connections between Randian Objectivism, the rise of technoculture, and the accelerating boom/bust cycles of global market capitalism. Adam Curtis (he’s huge in England) is a smart and skilled filmmaker working in the very contemporary rapid-fire, cut-up mode.

Nevertheless, I fear that I must quibble with the narrative a bit. Episode One paints (or tars) with broad strokes. Clever as hell… but this is a sketchy and surprisingly simplistic narrative presented as airtight history. It’s also a very European take on U.S. technoculture (“I see libertarian people!”).

The title, of course, comes from a famous poem by hippie writer Richard Brautigan envisioning a cybernetic/ecological/post-scarcity utopia. As I sat down to view it, I expected the usual scalding critique of the illusions of the ‘90s cyber/counterculture and the hippie technotopians who made it all go down, playing right into the hands (as critics would have it, and they are at least partially correct) of the global financial elite.

Imagine my surprise then as I viddied the opening: “It’s a strange story and it begins with a strange woman in the 1950s in New York.” Cut to Mike Wallace interviewing Ayn Rand.

“Well, no,” thought I. “It all started with Lee Felsenstein, veteran of the ultraleft countercultural underground newspaper the Berkeley Barb, who organized the Homebrew Computer Club for computer hobbyists in the San Francisco Bay Area in the mid-1970s.” (Steve Jobs and Steve Wozniak were among the many attendees who created the “personal computer revolution.”)

But I quickly realized that Curtis was telling an entirely different story — one that is largely valid (in aforementioned broad strokes) — about what happened when boundary defying technologies with global implications collided with market capitalism under the, at best, premature assumption that these combined forces would become a smooth functioning cybernetic (self regulating) system that would never crash.

Ayn Rand’s face, slightly twitchy and sad-eyed, recurs throughout the opening episode, looming over the proceedings, the central conceit being that Rand’s extreme philosophy has guided our political economy for several decades. This is a partial truth. The real story is, of course, more complicated and much messier, having less to do with ideology put into practice than natural opportunism responding to a set of circumstances. I mean, does anybody really believe that we wouldn’t be where we are now technologically without Ayn Rand? And granting that, does anybody believe, given the globalizing nature of that technology, that average Westerners wouldn’t have fallen into competition with people in developing countries and that the leaders of nations and states wouldn’t be reduced to offering tax breaks (if not blow jobs) to corporations if they’ll only bring their business to town (or keep it there)?

A complementary narrative is interwoven into the episode involving the Clinton Administration, as we follow the Democratic president as he gets swallowed whole by Robert Rubin’s market ideology and Monica Lewinsky, and we witness the exciting boom and then bust of the ’90s, repeated with less boom and worse bust in the 2000s.

This, then, is not so much the story of the rise of digital enthusiasm as it is the story of the rise of speculative casino global capitalism contextualized by digital enthusiasm.

I would also point out — in fairness to my libertarian friends — that the ethic of the species of Silicon Valley libertarian entrepreneurs who play a starring role in the episode (they’re not as utterly ubiquitous as the documentarian implies) is entrepreneurial capitalism. This ethic honors using capital to actually do something, as opposed to simply making money from money by playing tricky games. Not that there was a great deal of resistance to the speculative booms in those quarters, but you will find some of these entrepreneurs sharpening those distinctions today as they note the damage done.

All these quibbles aside, All Watched Over By Machines of Loving Grace looks to be a promising examination of technological exuberance over the last several decades. By the end of it all, I may be worshipping at Adam Curtis’s feet.

You can find the first two episodes now on YouTube and here’s hoping it makes it to BBC America.

May 31 2011

The Imaginary Foundation: My Chosen Headspace and Reality Tunnel

The artist has a compulsive need to pay tribute to what he has experienced, not only in order to share it with others, but also in order to fully reflect and bring into awareness the weight and depth of the emotional experience for himself or herself — the level of which might be bubbling just beneath the surface of his or her awareness. The ecstatic surrender, the aesthetic arrest, the rapturous awe are felt, and upon returning to ordinary consciousness, the residual feeling compels one to honor it in words.


This relentless urge becomes what fuels many of us. The Imaginary Foundation says that to “imbue our artistic work with even a twinkle of that reverence” (felt during the ecstatic moment) is enough to give our lives purpose.

I believe one must be willing to explore oneself while in the ecstatic state, to maintain enough executive function to describe vividly what is felt so deeply.

One must be willing to record oneself having idea sex in real time — we’re talking about recording the bursting forth of Aha.

Pierre Teilhard de Chardin wrote,

“The living world is constituted by consciousness clothed in flesh and bone.” He argued that the primary vehicle for increasing complexity-consciousness among living organisms was the nervous system. It is our responsibility to put it to good use!

An article in Wired said this:

“Teilhard went on to argue that there have been three major phases in the evolutionary process. The first significant phase started when life was born from the development of the biosphere. The second began at the end of the Tertiary period, when humans emerged along with self-reflective thinking. And once thinking humans began communicating around the world, along came the third phase. This was Teilhard’s “thinking layer” of the biosphere, called the noosphere (from the Greek noo, for mind). Though small and scattered at first, the noosphere has continued to grow over time, particularly during the age of electronics. Teilhard described the noosphere on Earth as a crystallization: ‘A glow rippled outward from the first spark of conscious reflection. The point of ignition grows larger. The fire spreads in ever-widening circles,’ he wrote, ‘till finally the whole planet is covered with incandescence.’”

The Imaginary Foundation says that “To Understand is To Perceive Patterns”… Rhetoric is the means by which we interpret patterns!

In that spirit, I asked Acceler8or to present these two videos:


Musings on Terrence McKenna’s Emergence of Language from jason silva on Vimeo.


Meeting of the minds with Transcendent Man director Barry Ptolemy from jason silva on Vimeo.