ACCELER8OR

Jul 10 2011

From Gamification to Intelligence Amplification to The Singularity

By Alex Peake


“Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling. Graphics were accelerating so fast that NVIDIA started calling it Moore’s law cubed.”

The following article was edited by R.U. Sirius and Alex Peake from a lecture Peake gave at the December 2010 Humanity+ Conference at the Beckman Institute in Pasadena, California. The original title was “Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion.”

I’ve been thinking about the combination of artificial intelligence and intelligence amplification and specifically the symbiosis of these two things.

And the question that comes up is what happens when we make machines make us make them make us into them?

There are three different Moore’s Laws of accelerating returns. There are three uncanny valleys being crossed. There’s a sort of coming-of-age story for humanity and for different technologies. There are two different species involved, us and the technology, and there are a number of high-stakes questions that arise.

We could be right in the middle of an autocatalytic reaction and not know it. What is an autocatalytic reaction? It’s one in which the products of the reaction are its own catalysts. So, as the reaction progresses, it accelerates, increasing its own rate. Many autocatalytic reactions are very slow at first. One of the best-known autocatalytic reactions is life. And as I said, we could be right in the middle of one of these right now, and unlike a viral curve that spreads overnight, we might not even notice this one as it ramps up.
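To make the kinetics concrete, here is a toy numerical sketch (the rate constant and concentrations are made-up illustration values, not real chemistry) of the simplest autocatalytic scheme, A + B → 2B, where the product B catalyzes its own production:

```python
# Toy Euler integration of the autocatalytic reaction A + B -> 2B.
# Because the product B appears in its own rate law, the reaction
# starts out almost imperceptibly and then takes off.
def simulate(a=1.0, b=1e-6, k=12.0, dt=0.01, t_end=5.0):
    trajectory = []
    for _ in range(int(t_end / dt)):
        rate = k * a * b        # the product catalyzes its own formation
        a -= rate * dt
        b += rate * dt
        trajectory.append(b)
    return trajectory

traj = simulate()
# Early on B barely grows; by the end nearly all of A has converted.
print(f"10% mark: {traj[len(traj) // 10]:.4f}, end: {traj[-1]:.4f}")
```

The sigmoid shape is the point: for most of the run nothing seems to be happening, and then the curve explodes, which is exactly the “we might not even notice as it ramps up” property.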

There are two specific processes that I think are auto-catalyzing right now.

The first is strong AI. Here we have a situation where we don’t have strong AI yet, but we definitely have people aiming at it.  And there are two types of projects aiming toward advanced AI. One type says, “Well, we are going to have machines that learn things.” The other says, “We are going to have machines that’ll learn much more than just a few narrow things. They are going to become like us.”

And we’re all familiar with the widely prevalent method for predicting when this might be possible, which is by measuring the accelerating growth in the power of computer hardware. But we can’t graph when the software will exist to exploit this hardware’s theoretical capabilities. So some critics of the projected timeline towards the creation of human-level AI have said that the challenge arises not in the predictable rise of the hardware, but in the unpredictable solving of the software challenges.

One of the reasons that what we might broadly call the singularity project has difficulty solving some of these problems is that, although there’s a ton of money being thrown at certain forms of AI, they’re military AIs or other types of AI with a narrow purpose. And even if these projects claim that they’re aimed at Artificial General Intelligence (AGI), they won’t necessarily lead to the kinds of AIs that we would like or that are going to be like us. The popular image of a powerful narrow-purpose AI developed for military purposes would, of course, be the Terminator.

The Terminator possibility, or “unfriendly AI outcome,” wherein we get an advanced military AI, is not something to look forward to. It’s basically the story of two different species that don’t get along.

Either way, we can see that AI is the next logical step.

But there’s a friendly AI hypothesis in which the AI does not kill us. It becomes us.
And if we actually merge with our technology — if we become family rather than competition — it could lead to some really cool outcomes.

And this leads us to the second thing that I think is auto-catalyzing: strong intelligence amplification.

We are all intelligence amplification users.

Every information technology is intelligence amplification. The internet, and all the tools that we use to learn and grow, are tools for intelligence amplification. But there’s a big difference between having Google at your fingertips to amplify your ability to answer some questions and having a complete redefinition of the way that human brains are shaped and grow.

In The Diamond Age, Neal Stephenson posits the rise of molecular manufacturing. In that novel, we get replicators evolving out of today’s MakerBot, so we can say “Earl Grey, hot”… and there we have it. We’re theoretically on the way to this sort of nanotech. And it should change everything. But there’s a catch.

In one of the Star Trek films, Jean-Luc Picard is asked, “How much does this ship cost?” And he says, “Well, we no longer use money. Instead, we work to better ourselves and the rest of humanity.” Before she can ask him how that works, the Borg attack. So the answer as to how that would look is glossed over.

Having had a chance to contemplate the implications of nanotechnology for a few decades (since the publication of Eric Drexler’s Engines of Creation), we understand that it may not lead to a Trekkie utopia. The Diamond Age points out one reason why. People may not want to make Earl Grey tea and appreciate the finer things in life. They might go into spoiled-brat mode and replicate Brawndo in a Brave New World or Fahrenheit 451. We could end up with a sort of wealthy Idiocracy amusing itself to death.

In The Diamond Age, the human race splits into two types of people. There are your thetes, an old Greek term for rowers and laborers; in the novel, they evolve into a state of total relativism and total freedom.

A lot of the things we cherish today lead to thete lifestyles and they result in us ultimately destroying ourselves. Stephenson posits an alternative: tribes.  And, in Diamond Age, the most successful tribe is the neo-Victorians.  The thetes resent them and call them “vickies.”  The big idea there was that what really matters in a post-scarcity economic world is not your economic status (what you have) but the intelligence that goes into who you are, who you know, and who will trust you.

And so the essence of tribalism involves building a culture that has a shared striving for excellence and an infrastructure for education that other tribes not only admire but seek out.  And they want to join your tribe. And that’s what makes you the most powerful tribe. That’s what gives you your status.

So, in Diamond Age, the “vickie” schools become their competitive advantage. After all, a nanotech society needs smart people who can deal with the technological issues.  So how do you teach nanotechnology to eighth graders? Well, you have to radically, aggressively approach not only teaching the technology but the cohesion and the manners and values that will make the society successful.

But this has a trap. You may get a perfect education system. And if you have a perfectly round, smooth, inescapable educational path shaping the minds of youths, you’re likely to get a kind of conformity that couldn’t have invented the very technologies that made the nanotech age possible. The perfect children may grow up to all be “yes men.”

So one of the characters in The Diamond Age sees his granddaughter falling into this trap and says, “Not on my watch.” He invents something that will develop human minds as well as the nanotech age developed physical wealth: A Young Lady’s Illustrated Primer. And the purpose of the Illustrated Primer is to solve the problem: on a mass scale, how do you shape each individual person to be free rather than the same?

Making physical stuff cheap and free is easy.  Making a person independent and free is a bigger challenge.  In Diamond Age, the tool for this is a fairy tale book.

The child is given the book and, for them, it unfolds an opportunity to decide who they’re going to be — it’s personalized to them.

And this primer actually leads to the question: once you have the mind open wide and you can put almost anything in there, how should you shape the mind? What should you give them as content that will lead to their pursuit of true happiness and not merely ignorant contentment?

The neo-Victorians embody conformity and the Thetes embody nonconformity. But Stephenson indicates that to teach someone to be subversive in this context, you have to teach them something other than those extremes.

You have to teach them subtlety.  And subtlety is a very elusive quality to teach.  But it’s potentially the biggest challenge that humanity faces as we face some really dangerous choices.

During the space race, JFK said of the space program that to do this – to make these technologies that don’t exist and go to the moon and so forth – we have to be bold. But we can’t just go boldly into strong AI or boldly into strong nanotech. We have to go subtly.

I have my own educational, personal developmental narrative in association with a technology that we’ve boldly gone for: 3dfx.

When I was a teenager, my mom taught me about art and my dad taught me how to invent stuff. At some point, they realized that they could only teach me half of what I needed to learn. In a changing world, I also needed a non-human mentor. So my mom introduced me to the Mac. She bought the SE/30 because it had a floating point unit and she was told that would be good for doing science. Because that’s what I was interested in! I nodded and smiled until I was left alone with the thing so I could get down to playing games. But science snuck in on me: I started playing SimCity and I learned about civil engineering.

The Mac introduced me to games.  And when I started playing SimLife, I learned about how genes and alleles can be shaped and how you could create new life forms. And I started to want to make things in my computer.

I started out making art to make art, but I wasn’t satisfied with static pictures. So I realized that I wanted to make games and things that did stuff.

I was really into fantasy games. Fantasy games made me wish the world really was magic. You know, “I wish I could go to Hogwarts and cast magic spells.” But the reality was that you can try to cast spells; it’s just that no matter how old and impressive the book you get your magic out of happens to be, spells don’t work.

What the computer taught me was that there was real muggle magic. It consisted of magic words. And the key was that to learn it, you had to open your mind to the computer and let the computer change you in its image. So I ended up discovering science and programming because my computer taught me. And once you had the computer inside your mind, you could change the computer in your image, to do what you wanted. It had its own teaching system. In a way, it was already the primer.

So then I got a PowerBook. And when I took it to school, the teachers took one look at what I was doing and said, “We don’t know what to do with this kid! You need a new mentor.” So they sent me to meet Dr. Dude.

I kid you not. That wasn’t the actual name on his office nameplate, but that’s what he was known as.

Dr. Dude took a look at my Mac and said, “That’s really cute, but if you’re in university level science you have to meet Unix.” So I introduced myself to Unix.

Around that time, Jurassic Park came out. It blew people away with its graphics. And it had something that looked really familiar. As the girl says in the scene where she hacks the computer system, “It’s a UNIX system! I know this!”

I was using Unix at the university, and I noticed that you could actually spot the Silicon Graphics logo in the movie. Silicon Graphics was the top dog in computer graphics at that time. But it was also a dinosaur. Here you had SGI servers, literally bigger than a person, rendering movies while I could only do the simplest graphics stuff on my little PowerBook. But Silicon Graphics was about to suffer the same fate as the dinosaurs.

At that time, there was very little real-time texture mapping, if any. Silicon Graphics machines rendered things with really weird faked shadows. They bragged that there was a Z-buffer in some of the machines. It was a special feature.

This wasn’t really a platform that could do photorealistic real-time graphics, because academics and film industry people didn’t care about that.  They wanted to make movies because that was where the money was.  And just as with military AI, AI that’s built for making movies doesn’t get us where we want to go.

Well, after a while we reached a wall. We hit the uncanny valley, and the characters started to look creepy instead of awesome. We started to miss the old days of real special effects. The absolute low point for these graphics was the monkey chase scene in Indiana Jones and the Kingdom of the Crystal Skull.

Moviegoers actually stopped wanting the movies to have better graphics. We started to miss good stories. Movie graphics had made it big, but the future was elsewhere. The future of graphics wasn’t in Silicon Graphics; it was in the tiny rodent-sized PC, which was nothing compared to the SGI but had a killer app called Doom. And Doom was a perfect name for this game, because it doomed the previous era of big-iron graphics. And the big-iron graphics people laughed at it. They’d make fun of it: “That’s not real graphics. That’s 2.5D.” But do you know what? It was a lot cooler than any of the graphics on the SGI, because it was real-time and fun.

Well, it led to Quake. And you could call it an earthquake for SGI. But it was more like an asteroid, because Quake delivered a market that was big enough to motivate people to make hardware for it. And when the 3dfx graphics card arrived, it turned Quake‘s pixelated 3D dungeons into lush, smoothly lit, textured, photorealistic worlds. Finally, you started to get completely 3D-accelerated graphics, and big-iron graphics machines became obsolete overnight.

Within a few years 3dfx was more than doubling the power of graphics every year, and here’s why. SGI made OpenGL. And it was their undoing, because it not only enabled prettier ways to kill people, which brought the guys to the yard. It also enabled beautiful and curvy characters like Lara Croft, which really brought the boys to the yard, and also girls who were excited to finally have characters they could identify with, even if they were kind of Barbies (which is, sadly, still prevalent in the industry). The idea of characters, and really character-driven games, drove graphics cards, and soon the effects were amazing.

Now, instead of just 256 megs of memory, you had 256 graphics processors. Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling. It was accelerating so fast that NVIDIA started calling it Moore’s law cubed. In fact, while Moore’s law was in trouble because of the limits of what one processor could do, GPUs were using parallelism.

In other words, when they made the Pentium into the Pentium II, they couldn’t actually give you two of them with twice the performance. They could only pretend to, by putting it in a big fancy dress and making it slightly better. But 3dfx went from the original Voodoo to the Voodoo2, which had three processors on each card and could be doubled with SLI into six processors.

The graphics became photorealistic. So now we’ve arrived at a plateau. Graphics are now basically perfect. The problem is that graphics cards are bored. They’re going to keep growing, but they need another task. And there is another task that parallelism is good for: neural networks.
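Why is that a natural fit? A neural network layer is mostly a batch of independent dot products: each output neuron depends only on the shared input and its own weight row, so every row can be computed at the same time. Here’s a minimal sketch in plain Python (the weights and sizes are made up for illustration; on a GPU each row would run on its own core):

```python
import math

def neuron(weights, inputs):
    # One output activation: weighted sum passed through a nonlinearity.
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

def layer(weight_rows, inputs):
    # Each neuron() call is independent of the others; this loop is
    # "embarrassingly parallel," which is exactly what GPU hardware exploits.
    return [neuron(row, inputs) for row in weight_rows]

weights = [[0.5, -0.2],
           [0.1, 0.9],
           [-0.7, 0.3]]
print(layer(weights, [1.0, 2.0]))
```

No row ever needs another row’s result, which is why graphics-style parallelism transfers so directly to neural networks.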

So right now, there are demos of totally photorealistic characters like Milo. But unfortunately, we’re right at that uncanny valley that films were at, where it’s good enough to be creepy, but not really good enough.  There are games now where the characters look physically like real people, but you can tell that nobody is there.

So now, Jesse Schell has come along. And he gave an important talk at Unite, the Unity developer conference. (Unity is a game engine that is going to be key to this extraordinary future of game AI.) In this talk, Schell points out all the things that are necessary to create the kinds of characters that can unleash a Moore’s law for artificial intelligence.

A law of accelerating returns like Moore’s Law needs three things:

Step 1 is the exploitable property: what do you keep increasing to get continued progress? With chips, the answer was making them smaller, which kept making them faster, cheaper, and more efficient. Perhaps the only reliably increasable thing about AI is the quantity of AIs and AI approaches being tested against each other at once. When you want to increase quality through competition, quantity can have a quality of its own. AI will be pivotal to making intelligence amplification games better and better. With all the game developers competing to deliver the best learning games, we can get a huge number of developers in the same space sharing and competing with reusable game character AI. This will parallelize the work being done in AI, which can accelerate it in rocket-assisted fashion compared to the one-at-a-time approach of isolated AI projects.
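The “quantity has a quality of its own” claim can be sketched as a selection process: the more candidate approaches you evaluate against the same benchmark, the better the best survivor tends to be. A toy model, where the “strategy” and scoring function are invented purely for illustration:

```python
import random

random.seed(42)  # deterministic for the example

def random_strategy():
    # Stand-in for one AI approach: a guess at a hidden target behavior.
    return random.uniform(0, 100)

def score(strategy, target=73.0):
    # Higher is better: closeness to the target behavior.
    return -abs(strategy - target)

def tournament(n_candidates):
    # Evaluate many competing candidates on the same benchmark
    # and keep the winner.
    candidates = [random_strategy() for _ in range(n_candidates)]
    return max(candidates, key=score)

# With more entrants competing, the winning approach lands closer
# to the target, on average.
for n in (10, 10000):
    print(n, round(tournament(n), 2))
```

The point is only the scaling behavior: each individual candidate is no smarter, but the field as a whole reliably produces better winners as it grows.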

The second ingredient of accelerating returns is insatiable demand. And that demand is in the industry of intelligence amplification. The market size of education is ten times the market size of games, and more than fifty percent of what happens in education will be online within five years.

That’s why Primer Labs is building the future of that fifty percent. It’s a big opportunity.

The final ingredient of exponential progress is the prophecy. Someone has to go and actually make the hit that demonstrates the law of accelerating returns at work, the way Quake did for graphics. This is the game that we’re making.

Our game is going to invite people to use games as a school. And it’s going to bring danger into their lives. We’re going to give them the adventures and challenges every person craves, to make learning fun and exciting.

And once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we’re dipping our toe into symbiosis between humans and the AI that shape them.

We rely on sexual reproduction because, contrary to what the Raelians would like to believe, cloning just isn’t going to fly. That’s because organisms need to handle bacteria that are constantly changing in order to survive. It’s not just competing with other big animals for food and mates; you have to contend with tiny, rapidly evolving things that threaten to parasitize you all the time. And there’s this thing called the Red Queen Hypothesis, which argues that you need the constant genetic reshuffling of sex to handle the complexity of life against wave after wave of mutating microorganisms.

We have a similar challenge with memes. We have a huge number of people competing to control our minds and to manipulate us. And so when we deal with memetic education, we have the opportunity to take what sexual reproduction does for our bodies and do it for our brains by introducing a new source of diversity of thought into young minds. Instead of stamping a generic education onto every child and limiting their individuality, a personalized game-based learning process, with human mentors coaching and inspiring each young person to pursue their destiny, encourages the freshness of ideas our kids need to adapt and meet the challenges of tomorrow. And this sharing of our children with their AI mentors is the beginning of symbiotic reproduction with AI, the same way that sexual reproduction happens between two sexes.

The combination of what we do for our kids and what games are going to do for our kids means that we are going to have only a 50% say in who they become. They’re going to become wizards at the computer, and it’s going to specifically teach them to make better AI. Here’s where the reactants, humans and the games that make them smart, become their own catalysts. Every improvement in humans leads to better games, which leads to smarter humans, which leads to humans so smart that they may be unrecognizable in ways that are hard to predict.

The feedback cycle between these is autocatalytic.  It will be an explosion. And there are a couple of possibilities. It could destroy standardized education as we know it, but it may give teachers something much cooler to do with students: mentorship.

We’re going to be scared because we’re not going to know if we can trust our children with machines. Would you trust your kid with an AI? Well, the AIs will say, “Why should we trust you?”  No child abuse will happen on an AI’s watch.

So the issue becomes privacy. How much will we let them protect our kids? Imagine the kid has a medical condition and the AI knows better than you what treatment to give.

The AI might need to protect the kid from you.

Also, how do we deal with the effects of this on our kids when it’s unpredictable? In some ways, when we left kids in front of the TV while they were growing up, it damaged the latchkey generation. We don’t want to repeat that mistake and end up with our kids being zombies in a virtual world. So the challenge becomes: how do we get games to take us out of the virtual world and connect us with our aspirations? How do we incentivize them to earn the “Achievement Unlocked: Left The House” awards?

That’s the heart of Primer. The game aims to connect people to activities and interests beyond games.

Finally, imagine the kids grow up with a computer mentor. Who will our kids love more, the computer or us?  “I don’t know if we should trust this thing,” some parents will say.

The kids are going to look at the AI, and it’s going to talk to them. And they are going to look at its code and understand it. And it’s going to want to look at their code and get to know them. And they’ll talk and become such good friends that we’re going to feel kind of left out. They’re going to bond with AIs in a way that is going to make us feel like a generation left behind, like the conservative parents of the ’60s flower children.

The ultimate question isn’t whether our kids will love us but whether we will recognize them. Will we be able to relate to the kids of the future and love them if they’re about to get posthuman on us? Some of us might be part of that change, but our kids are going to be a lot weirder.

Finally, they’re going to have their peers. And their peers are going to be just like them. We won’t be able to understand them, but they’ll be able to handle their problems together.  And together they’re going to make a new kind of a world. And the AIs that we once thought of as just mentors may become their peers.

And so the question is: when are we going to actually start joining an AI market, instead of having our little fiefdoms like Silicon Graphics? Do we want to be dinosaurs? Or can we be a huge surge of mammals, all building AIs for learning games together?

So we’re getting this thing started with Primer at PrimerLabs.com.

In Primer, all of human history is represented by a world tree. The tree is a symbol of us emerging from the cosmos. And as we emerge from the cosmos, we have our past, our present and our future to confront and to change. And the AI is a primer that guides each of us through the greatest game of all: to make all knowledge playable.

Primer is the magic talking mentor textbook in the Hogwarts of scientific magic, guiding us  from big bang to big brains to singularity.

Primer Labs announced their game, Code Hero, on July 3.

The original talk this article was taken from is here.

  • By Paul McGlothin, July 11, 2011 @ 2:33 pm

    An extremely well written, provocative article. It made me think more deeply about what may happen as the singularity becomes reality.As a leader of the CR Way, which helps people live longer and better by harnessing their innate biochemistry, I realize that many of the things we try for will become easily reachable as the singularity grows nigh. However, if AI takes a nasty, terminator turn, all that we try for and then some could be wiped out very quickly.

  • By Mark Bruce, July 11, 2011 @ 7:13 pm

    “And the question that comes up is what happens when we make machines make us make them make us into them?”

    By the end of the article I finally understood and appreciated what that sentence actually meant. Your vision is grand, and while transiently troubling, the scenario you paint is both uplifting and inspiring. Looking forward to trying Code Hero. Thanks for assembling this novel collection of memes – it was a pleasure to be infected ;)

  • By iPan, July 12, 2011 @ 12:03 pm

    Yes, yessity, yes yes!!!

  • By Derek New Orleans, July 14, 2011 @ 12:15 am

    Great Article! Shared it on facebook.

  • By zeroreference, July 24, 2011 @ 8:17 pm

    Ugh, please. This is not a well-written article. It’s turned me into an ass just from all the assumptions being made, and to top it off there are more holes than a block of swiss cheese that was hit by a cluster bomb (what the cheese was doing in the cluster bomb’s vicinity is one of the universe’s great mysteries).

    Science fiction is allowed to make such suppositions and glosses – which doesn’t necessarily detract from its value. Philosophy, on the hand, is not. And any sort of futurist, predictive screed such as this is fundamentally aiming for the same sort of universality, predictive value, and intellectual recognition which is traditionally accorded to philosophy.

  • By Antman, July 24, 2011 @ 9:07 pm

    Interesting article. Having worked with various researchers and academics, and having seen how differently they code (and how reusable their code isn’t), I can see a need for a standardised framework for AI development so that we can make the most out of distributed AI research. i.e.: Just as we’ve had OpenGL for graphic and OpenAL for audio, someday soon we’re going to need an OpenAI.

  • By Alex Peake, July 24, 2011 @ 11:18 pm

    This article is based on a talk I gave at Humanity+ @ Caltech and our first game is nearing release.

    You can watch the video with visuals that illustrate the main ideas of the talk here:

    http://primerlabs.com/hplusatcaltech

    Our first game Code Hero is a game about making games where you shoot code with a javascript code gun that let you code the change you wish to see. Competing AIs recruit you and teach computer programming to empower us to shape our future.

    You can watch the early prototype trailer here and sign up for the upcoming beta release:

    http://www.primerlabs.com

    @ZeroReference: Fair criticism and I’ll get detailed if you’ll ask detailed questions. My article covers a lot of ground without going into much detail on each item. Everybody’s busy, but if you would provide detailed points, I would like to provide detailed answers now that we have a forum to get in-depth.

    During a live talk with a short timeslot with a live audience one does not have time to go into scholarly detail with footnotes to back up every idea and point. The difference between science fiction storytelling and talks about science fiction-inspired startups is that the speaker backs up their words by shipping the product.

    Assumptions in the article are based on much more than I had time to go into and work we’re doing at Primer Labs, not passive predictions of the way things are going by themselves. As Frank Herbert said in Dune, predictions of the future become accurate when the predictor acts to make the future come about.

    Our first game Code Hero is a game about making games where you shoot code with a javascript code gun that let you code the change you wish to see. Competing AIs recruit you and teach computer programming to empower us to shape our future.

    You can watch the prototype trailer here and sign up for the upcoming beta release:

    http://www.primerlabs.com

    I’m going to write a written followup exploring these ideas in more detail, and I’m happy to answer your specific questions and I will be doing so here and in the Slashdot forum.

    You can also discuss with the rest of the 100+ comments on Slashdot:

    http://games.slashdot.org/story/11/07/25/0144209/Can-AI-Games-Create-Super-Intelligent-Humans

  • By Rachel Haywire, July 25, 2011 @ 2:39 am

    I enjoyed seeing your presentation and and love the tone of this article Alex.

  • By Daniel Cordey, July 25, 2011 @ 6:40 am

    Alex,
    I found your article very interesting and your description of AI in games is very enlighting.

    However, you seem to ignore work done [on the subject] by many people before you. Here are a few points I’d like to mention.

    - Thoughts about the limit of understanding between humans and AI has been extensively handled by Isaac Asimov in many of it’s books (and many others).
    - Seneca was the tutor and later advisor of… Nero. But Seneca has’nt been very successful in the “transmission” of it’s “intelligence”. Though, even if we can trust the source, the target needs some “validation” :-)
    - Evolution and survival if not a mater of “intelligence”, but a “process” of biological evolution, natural selection (Charles Darwin). and “adaptation” (Jean Piaget).
    - Before talking any further on “learning” process, I’d suggest you consult the Jean Piaget bibliography (http://en.wikipedia.org/wiki/Jean_Piaget). His work on “cognitive development” can’t be ignore if you decide to debate on the subject.

    Anyway, thank’s for bringing new aspects of AI potentials and limits.

  • By TyposDistractMe, July 25, 2011 @ 8:24 am

    Interesting speculation and game promotion.

    I noticed a couple of typos:
    “imagine the kid has a medical condition and the AI knows better then you what treatment to give it.”
    should be:
    “Imagine the kid has a medical condition and the AI knows better than you what treatment to give it.”

  • By Matthew C. Tedder, July 25, 2011 @ 10:22 am

    I have an approach to strong AI for which I’ve been seeking a test-bed environment. A game-like environment makes the most economic sense to me. I was trying to develop in Panda3D but have recently switched to WebGL. I’ve designed the brain/mind of units to enable real-time addition of unit types and modification of AI. There are four modules per unit:

    Affector — this translates sensory input to internal attributes. E.g. what is seen, how eating affects energy levels, etc.

    Intrinsor — this is the mind (for animate units). It may read internal attributes, have its own memory, and make requests for actuations to the Effector.

    Containor — this maintains contained objects. It can initiate or mediate affects and effects between objects contained. e.g. food processing in the stomach (if an animate unit); or objects growing and living in a land (for a “land” type unit); or objects in a bag, in a house, in the water, etc. It has the power to filter or control the physics within.

    Effector — this receives actuation requests from the Intrinsor and performs outputs effect requests to this unit’s containor object. But first, it filters them through limitations based such as energy levels, physical limitations of the unit design, etc.

  • By Alex Peake, July 25, 2011 @ 12:21 pm

    @Matthew C. Tedder: I would love to hear more about your framework! We are also using a very modular approach to letting a thousand AI implementations bloom so that they can interoperate and compete over time. Email me at alex@primerlabs.com if you’d like to discuss further.

  • By Alex Peake, July 25, 2011 @ 12:26 pm

    @Daniel Cordey: I didn’t have time to go into academic theory in a 23 minute talk but I do love to discuss it and learn from feedback.

    The limits of human-machine understanding is a big part of making this possible. We’re treating the concepts the player can learn as memes and the AI’s model of those as temes so players are learning memes from the mentor temes and improving on the temes to more accurately model the behavior the meme should enact and to better convey it to the player who is trying to learn it.

    The AI teme doesn’t really “understand” the concept it is trying to teach to the player, it merely needs to model it sufficiently to appear to successfully enough to transmit the corresponding meme.

  • By John Moser, July 25, 2011 @ 5:15 pm

    Disgustingly cutesy.

    “And the question that comes up is what happens when we make machines make us make them make us into them?”

    Ick.

  • By Crazy Irish Dan, July 25, 2011 @ 10:18 pm

    Well I, for one, welcome our robot overlords…

    While I do take issue with some of the assumptions (‘magic spells don’t work’, really, or do yours just not? For example, I’ve never yet seen a practicing magician claiming that science doesn’t exist…) for a ‘singularity’ article this actually had something to say – instead of just a gushing of masturbatory fantasy. “I’m gonna live forever, and my nanobot army will cover everything in boobs, BOOBS I tell you!”

    Mostly, I’m impressed by the connection between thought and action. What a novel concept in the ‘I have an idea, or at least I’ll act like I do so I get blog hits’ era.

  • By Protagoras, July 31, 2011 @ 9:34 am

    Awesome, really inspiring and interesting – I would really like to play the game once it comes out. I also related deeply to the whole “looking for magic but accepting that the ‘supernatural’ doesn’t exist, thus turning into science fiction and hard science to find that ‘magic’”. Or atleast that’s how I understood your writing.

    Also, and perhaps more intriguing, is if (and when) you or someone else starts a backlash again this, ideally in the form of some religion-backed-conservative-front looking to “stop the heretics from destroying humanity”. Not only will it be a great publicity campaign, it also seems inevitable and (perhaps) required for the further development of your idea – along the lines of needing opposition in order to create the friction that pushes us from statu quo and into real change. (also, would make for a great story)

    Just a thought.

  • By Harrison, September 12, 2011 @ 6:33 am

    This is one of the most exciting things I’ve ever heard of.
    I wonder why javascript?
    And what next? Could we have an open-ended exploration of mathematics and physics? What would that look like? Performing calculations and manipulations ‘by hand’, by mind that is, would be too slow, so we’d need very high-level interactions and tool selection.
    The possibilities are endless.

  • By Cinderella, September 20, 2011 @ 3:19 am

    Thanks for spending time on the copmuetr (writing) so others don’t have to.
