ACCELER8OR

Dec 06 2011

Why Second Life Has Succeeded Beyond Anybody’s Wildest Expectations


A recent article on Slate proclaims “Why Second Life Failed.”  Assuming you buy into the author’s overall viewpoint, it makes a decent case. In essence, SL was touted as a “revolutionary solution” for a job it really wasn’t qualified to do. The problem is that this viewpoint shows a profoundly limited understanding of what Second Life is compared to what it was hyped to be.

Giulio Prisco and I have discussed this previously in commentary on his blog, and he makes some very good points about why businesses didn’t do well in SL — causes ranging from a lack of needed controls over their “space” to prevent griefing, to a need for greater stability, to better conferencing. But there is one very big reason that I believe explains why most current “business models” failed in SL. It’s one I’ve discussed in my H+ article on 3D printers adding our way to abundance. SL is a prototype of an economy of abundance, and as such, inherently hostile to business strategies based on scarcity. It is not a “business tool” that the majority of current corporate structures can use, simply because those structures are dependent on levels of centralized control and restriction of access to products that are impossible to maintain in a world in which everyone has the same basic ability to manufacture any desired item.

Modern businesses are essentially based on the “gatekeeper” model. They offer a “product” that they know you want, but which is either not easily made by you, or which cannot be obtained except through them. The example used in the Slate article is the “milkshake.” We could easily make milkshakes at home, provided we had the ingredients and a blender, but the effort involved for most of us is prohibitive. It’s simply easier to go to the local fast food place and buy one than it is to go to the store, get all the ingredients, and make one ourselves. As silly as saying that may sound, it’s true. (Yes, I know that is not the point made by the example in the article, but I’m discussing factors that it overlooks.) The point is that the “business” provides “access” to something in a manner that is more convenient than making it ourselves, setting up a “tollbooth” between us and the item we desire.

This same “gatekeeper” model underlies nearly all current business models. It works so long as the “product” is easier to get by going through the “gate” than by making it ourselves or acquiring it from some other source. It’s this business model that doesn’t work in SL, because in many cases the “product” is easier to get by either making it yourself, or by finding a nearly identical product offered by a different “vendor” for less than the prices demanded by the “brand names.” In fact, given the innovation and ingenuity displayed by some designers in SL, many of those “brand names” came up severely lacking. Couple that with the lack of those features Giulio discusses, and I am not surprised that the originally hyped dreams for what SL would become failed, and failed miserably.

So yes, if you buy the model used in the Slate article, it is easy to claim that Second Life “failed.” But if you look at it not as a business platform, but as what it truly is — a “Virtual Reality Prototype Testing Laboratory” in which many of the issues we will face in the not very distant future as VR, nanotech, genetic manipulation and robotics technology begin to invade our day to day reality are already under investigation, then I would have to say that SL has succeeded beyond anyone’s wildest expectations.

No, it is not a perfect “prototype” because it does indeed fail to incorporate many activities that have become commonplace, like the social networking abilities of Facebook, or the ability to add in modular “apps” and such, but considering that those “products” came into existence after the creation of Second Life, that’s forgivable. What is remarkable is the prescient way in which the 3D manufacturing/nanofactory revolution is present in the object creation system, enabling anyone to have access to the “means of production.” While this system does require knowledge to use, the availability of online tutorials is phenomenal, and many of them use Second Life “actors” as tutors. Additionally, as time has passed and enhanced features have become available, such as better scripts, sculpted prims and the latest addition of meshes, the range of items that can be created has expanded enormously. And despite the massive variety of items and scripts already available, there are still nearly unlimited possibilities for a creative designer to create a unique and desirable product. This ability is the very reason that the “gatekeeper” model of business is impossible to implement in Second Life.

But even that pales compared to the social impacts that morphological freedom will have on humanity, and it is so integral to Second Life that even the Slate article mentions it in passing. I’ve discussed this frequently in other articles, but it bears repeating. There is no better laboratory in the world today for exploring the potentials and consequences of the ability to reshape our bodies as we wish. There are endless articles on “Digital people” and other “non human” entities that populate Second Life, offering us insights into what the reality of such “shape shifting” abilities will bring. Indeed, we are already beginning to see such “pop icons” as Katy Perry, Lady Gaga, (and, of course, Rachel Haywire) sporting hair styles and fashion designs that seem very SL inspired.

So yes, if all you think of Second Life as is a “business platform”, it’s easy to view it as a failed technology. But if you look beyond such a shallow framework, and look at the deeper implications of this “prototype of the future” it’s hard to see it as anything but a very rare and valuable opportunity to study the challenges and promises of a future beyond anything we have ever experienced in all of history. It’s the closest thing we have to a “working model” of Post Singularity reality, a simulation which could enable us to foresee the perils and pitfalls, to make mistakes and find solutions, all without suffering the consequences of making those mistakes in “First Life.”

It’s basically a matter of whether your only concern is immediate profit or the long term benefits it could provide to the entire human race.

Aug 25 2011

Dillon Beresford and The Strange Case of the Stuxnet Worm

Cyber Security

Cyber security has come front and center recently with the threat of the Guy Fawkes cyber attack on Facebook and the U.S. Department of Homeland Security’s warning about the use of Chinese-made software. Malicious hackers are everywhere these days, it seems.

Dillon Beresford, a “good guy” hacker who works for security firm NSS Labs, demonstrated at the Black Hat Briefings conference in Las Vegas this month how he had successfully exploited flaws in commonly-used industrial computer systems made by Siemens that are used in thousands of industrial plants.

The Siemens industrial control system (ICS) is the same product targeted by Stuxnet, the sophisticated computer worm discovered last year to have crippled Iran’s nuclear program. The worm reprogrammed the computer-controlled centrifuges used to enrich uranium so that they spun out of control and destroyed themselves.

Beresford’s talk was given in lieu of one he had planned to give at TakedownCon in June. He cancelled that talk voluntarily after Siemens and ICS CERT (cyber emergency response team) raised concerns about the impact of a public disclosure of the security holes.  Here is the latest ICS CERT advisory.

The Washington Times quotes Vikram Phatak, chief technology officer of NSS Labs: Beresford’s work shows that “you don’t need Stuxnet to do real damage” to industrial plants. The demonstration showed vulnerabilities in the software and hardware used to run everything from nuclear power plants to manufacturing assembly lines to water treatment plants and prisons.

What is Stuxnet?
In the attack on the Iranian centrifuges, the Stuxnet worm spread from one computer to another via infected USB sticks. The vulnerability it exploited was in the LNK files of Windows Explorer, a fundamental component of Microsoft Windows. When an infected USB stick was inserted into a computer, Explorer automatically scanned the contents of the stick; Stuxnet awoke and dropped a large, partially encrypted file onto the computer.

It was subsequently discovered that the worm itself appeared to have included two major components. One was designed to send Iran’s nuclear centrifuges spinning wildly out of control. The second seemed right out of a spy thriller: Stuxnet secretly recorded what normal operations at the nuclear plant looked like, then played those readings back to plant operators, like a pre-recorded security tape, so that it would appear that everything was operating normally while the centrifuges were actually tearing themselves apart.
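That second component, record what “normal” looks like, then replay it to the operators while the real process runs away, can be sketched as a toy simulation. Everything below is invented purely for illustration (the speeds, the loop, the way the display is fed), not taken from analysis of the actual worm:

```python
import random

def normal_rpm():
    """Centrifuge speed under normal operation (hypothetical values)."""
    return 1064 + random.uniform(-5, 5)

# Phase 1: the malware records what "normal" readings look like.
recorded = [normal_rpm() for _ in range(10)]

# Phase 2: the attack. The real speed runs away, but the operators'
# display is fed the recorded values in a loop, like a pre-recorded
# security tape.
real_speed = 1064.0
for step in range(10):
    real_speed *= 1.15                          # process spinning out of control
    displayed = recorded[step % len(recorded)]  # operators see stale data
    print(f"step {step}: display={displayed:7.1f} rpm, actual={real_speed:7.1f} rpm")

# The display never strays from the ~1064 rpm band even as the actual
# value climbs far past it.
```

The point of the sketch is only the asymmetry: the monitoring channel and the physical process have been decoupled, so nothing on the operators’ screens reflects what the centrifuges are actually doing.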

The attacks were not fully successful: some parts of Iran’s operations ground to a halt, while others survived, according to the reports of international nuclear inspectors. The New York Times reported that it’s not clear the attacks are over yet: some experts believe the Stuxnet code contains the seeds for yet more versions and assaults.

Iran’s Nuclear Capabilities
Iran’s ability to produce bomb-ready enriched uranium became a major concern during the Bush administration. “Bomb, bomb, bomb, bomb Iran,” said Senator John McCain, parodying The Beach Boys’ tune Barbara Ann.

President Obama spent 2009 trying to engage Iran diplomatically. Tehran initially accepted but then rejected an offer for an interim solution under which it would ship some uranium out of the country for enrichment. In June 2010, after months of lobbying by the Obama administration and Europe, the United Nations Security Council voted to impose a new round of sanctions on Iran, the fourth such move.

The Cyber Attack on Iran’s Nuclear Centrifuges
Wired Magazine’s Threat Level reported that as early as January 2010, investigators with the International Atomic Energy Agency were completing an inspection at the uranium enrichment plant outside Natanz in central Iran when they realized that something wasn’t right in the cascade rooms where thousands of centrifuges were enriching uranium.

Workers had been replacing the units at an incredible rate: perhaps between 1,000 and 2,000 centrifuges were swapped out over a few months. This was, of course, due to Stuxnet.

Stuxnet, it turns out, was actually released in June 2009. But it would be nearly a year before the inspectors would learn of this. It took dozens of computer security researchers around the world months of analysis and deconstruction to determine that a worm exploiting a “zero-day” vulnerability had struck.

The zero-day exploit in the Iranian incident was dubbed “Stuxnet” by Microsoft, from a combination of file names (.stub and MrxNet.sys) found in the code.

An Israeli Connection?
Israel’s never-acknowledged nuclear arms program is supposedly centered in the Dimona complex in the Negev desert. The New York Times reported that behind Dimona’s barbed wire, Israel spun nuclear centrifuges virtually identical to Iran’s at Natanz. Did Israel test the effectiveness of the Stuxnet computer worm there before it infected the Iranian computers?

In January 2011, the retiring chief of Israel’s Mossad intelligence agency, Meir Dagan, and Secretary of State Hillary Rodham Clinton separately announced that they believed Iran’s uranium enrichment efforts had been set back by several years. Mrs. Clinton cited American-led sanctions, which have hurt Iran’s ability to buy components and do business around the world.

Officially, neither American nor Israeli officials will acknowledge the existence of the Stuxnet worm.

But Israeli officials were reported as “grinning widely” when asked about its effects. President Obama’s chief WMD strategist, Gary Samore, sidestepped a Stuxnet question at a conference about Iran. He added, “with a smile,” “I’m glad to hear they are having troubles with their centrifuge machines, and the U.S. and its allies are doing everything we can to make it more complicated.”

Enter Dillon Beresford
Dillon Beresford is not just an everyday hacker. He has an extensive IT security background in exploit development, penetration testing, reverse code engineering, intrusion prevention systems, and intrusion detection systems.

After working with Siemens to identify the security breaches that allowed the Stuxnet incident to occur, he canceled a planned demonstration of the vulnerabilities (as mentioned earlier) at the TakeDownCon security conference in Texas in early June 2011, after Siemens and the Department of Homeland Security expressed concern about disclosing information before Siemens could patch the vulnerabilities.

The vulnerabilities affect the programmable logic controllers, or PLCs, in several Siemens SCADA (supervisory control and data acquisition) systems. Siemens PLC products are used in companies throughout the United States and the world.

It was a vulnerability in a PLC belonging to Siemens’ Step7 control system that was the target of the Stuxnet worm.

Beresford researched SCADA systems independently at home. He purchased SCADA products online with funding from NSS Labs, intending to examine systems belonging to multiple vendors. Beresford began with Siemens and found multiple vulnerabilities in the products very quickly.

Cyber Warfare?
The increasing attention to SCADA systems, coming on the heels of Stuxnet and other cyber security incidents, is bringing pressure on both the U.S. Department of Homeland Security (DHS) and firms like Siemens to take a hard look at the security of PLCs and other industrial control equipment.

Underscoring the importance of cyber security, ZDNet reports that DHS just issued a warning about using Chinese-made software, especially at chemical, defense, and energy firms. Much of the concern comes from recent hacking attacks against companies like Lockheed Martin and Sony; the warning itself centers on software from a specific Beijing company called Sunway ForceControl.

A huge Internet attack this month targeted 72 organizations, including the U.N., and analysts say it apparently originated in China.

The Daily Beast quotes Richard Clarke, the former top U.S. government official who famously held roles in counterterrorism and cybersecurity in the Clinton and Bush administrations: “What’s going on is very large-scale Chinese industrial espionage. They’re stealing our intellectual property. They’re getting our research and development for pennies on the dollar.”

What’s at stake goes beyond the ability to breach industrial control systems — even as scary as that is — into the realm of state secrets… and global military and economic dominance.

Jul 24 2011

VR Integration Requires Total Transparency


I’d like you to imagine it’s the year 2019. You are wearing a set of extremely lightweight wraparound lenses and have just gotten off the train in an unfamiliar part of the city to meet your friend at a new club. As you look around, you see a floating icon over a board against the wall. You point at it, and before your eyes, a transparent map of the city opens, floating in midair before you. You tell it your destination, and the map zooms into where you are, highlights a path to where you want to go, and then zooms in even more as it tilts and merges with the scenery around you, the path now an illuminated line on the floor.

You follow the path out to the stairs, and up to the street, but as you come to street level, a warning pops up advising you that it’s started to rain and that its previous path will result in you getting drenched, so would you like to reroute along a longer path that will keep you dry? You nod in acknowledgement and follow the line into a shopping district across the street. As you enter the mall, a small sign pops up in front of you asking if you’d like to see a list of current sales. You shake your head, and the sign vanishes. You continue to follow the line through the mall, looking around at the various people standing in front of store windows, making waving motions as they browse through the inventories. For you, all you see is a blank screen with the store’s logo and a button saying “touch here for catalog”.

You do happen to notice a logo for a store you frequent, and pause to hit the catalog button. The window clears and an attractive lady with horns and a spaded tail appears. “Hiya, and Welcome to the Succubus’s Den! We’re having a special today on horns and halos, all models are 50% off. Would you like to browse our selection?”

“Sure,” you say, as a three-way mirror pops up in front of you showing your present appearance. You frown as you take in your business attire, and decide it’s just way too boring for a night at a club. A request sign pops up asking if you’d like to deactivate “professional mode,” and you think “yes” at it.

Suddenly the mall around you transforms from a rather dull set of storefronts to a sylvan glade, with elves, and centaurs, even a couple of fairies mixed in with trolls, Klingons, and what appears to be a storm trooper shopping behind you. Your suit and tie have also vanished, replaced by the brawny barbarian warrior you chose to wear when you were playing an MMO last night. As you think about changing it, a menu pops up and you decide to go with your goth avatar, the image in the mirror changing. The sales clerk smiles. “Ohhh, I have a nice pair of wings and a black light halo that would match that avatar so well!”

You tell her to let you see it, and in an instant, you are admiring the smoky black wings and the shimmering purple halo. You nod in approval, and tell the clerk you’ll take them. A small icon shows the price and you think “yes” at it. A note comes up showing that “Goth Angel” has been added to your inventory. You thank the clerk and start following your guideline again.  As you walk down the mall, you note a couple of vampire girls licking their fangs as you pass. At the far end of the mall a shimmering portal opens onto the city street, where you can see the rain is still falling. Your guideline leads under an awning along the sidewalk, and down the street. You can see a glowing arrow pointing down at the club you are heading for. You head down the sidewalk, then stop when a warning sign pops up pointing at an alley entrance just ahead of you, and you wait as the delivery truck pulls out onto the road.

As you enter the club, you look around and note that there’s a nice mix of reals and virtuals, with only a small icon over the heads of those visiting entirely in VR to tell you who’s physically there and who isn’t. A flashing icon calls your attention to where your friend is waiting, and as you head towards him, a small fairy flutters up and asks you what you’d like from the bar. You order, and by the time you get to the table, the waitress, of whom the fairy was a small-sized copy, has your drink waiting. You smile as you anticipate a nice evening and settle down to have some fun.

That’s just a taste of a world in which VR and reality are intermixed, and I’m sure it’s pretty simplistic compared to what we will actually experience. Nonetheless, it’s sufficient to make this article’s real point, which is actually not the uses of VR. Instead, I’m hoping I can get you thinking about what’s going to make this sort of VR possible, and about the implications of that technology.

So let’s start with our map, shall we? How, exactly, did we call it up in front of us?  It should be obvious that we’re wearing a pair of video lenses capable of overlaying graphics on our view of the world, but how did the map know we wanted to access it? How did the sign realize we clicked on it from who knows how far away?

The short answer is that there’s communication between our glasses and the sign, but the reality is that it’s not quite that simple. In order to position a button on the map, our glasses had to know where in our field of vision the sign was, which means our glasses had to be aware of the environment around us. They had to be aware of the three-dimensional space surrounding us, be aware of the physical objects in that environment, and, on top of everything else, know that the map was a map. They could do this many ways: by scanning our environment with lidar or a THz-wave scanner; by linking to a local system that already has a 3D map of the station; by communicating with a set of lidars or other scanners in the environment; and there are quite a few other methods they could employ. The common factor in all of them is still the same. You are being watched continuously by an untold number of sensors and cameras.
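Once the glasses know their own pose and the sign’s position in that shared 3D map, placing the icon is straightforward geometry. Here is a minimal sketch, assuming a simple pinhole camera model; the function name, coordinates, and camera parameters are all invented for illustration:

```python
import math

def project(point, cam_pos, cam_yaw, focal=800, cx=640, cy=360):
    """Project a world point (x, y, z) into pixel coordinates on the lens display."""
    # Translate into the camera's frame, then rotate by the wearer's yaw.
    dx, dy, dz = (p - c for p, c in zip(point, cam_pos))
    cos_y, sin_y = math.cos(-cam_yaw), math.sin(-cam_yaw)
    x_cam = cos_y * dx + sin_y * dz      # right
    z_cam = -sin_y * dx + cos_y * dz     # forward (depth)
    y_cam = dy                           # up
    if z_cam <= 0:
        return None                      # behind the wearer: draw nothing
    u = cx + focal * x_cam / z_cam
    v = cy - focal * y_cam / z_cam
    return (u, v)

# A sign 4 m ahead, half a meter to the left, slightly above eye level.
sign_world = (-0.5, 1.8, 4.0)
wearer_pos = (0.0, 1.6, 0.0)
pixel = project(sign_world, wearer_pos, cam_yaw=0.0)
# The icon lands left of center and a little above it.
```

Even this stripped-down version makes the dependency plain: without a continuously updated estimate of the wearer’s position and heading, accurate to well under the width of the sign, the overlay drifts off its target.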

Got it? Every person in that station has a device just like yours, watching you and your every action, recording every twitch of every muscle. For that map to provide the “guideline,” it has to know to a millimeter where you are. The same goes for the “weather warning.” The “mall” knew when you entered. The store knew when you were standing in front of it, who you were, what your avatar looked like, and how to access your payment info. To allow others to see your avatar, they had to be enabled to know what that avatar was and overlay it over your physical position — again requiring sensors able to map you to millimeter precision. And what’s more, to enable such things as the warning about the truck, your lenses had to know more about the environment than you did. They had to “see around corners” by connecting to sensors in the alleyway. In both the mall and the club, they had to access not only the “real” environment but the “virtual” one as well, be able to distinguish which one you desired to see at any given moment, and create the appearance of virtual objects overlaying the real world. In other words, our environment was “self aware.”

Now, think about that for a second — think about how many cameras and sensors it’s going to take to make our environment “aware” of itself and of us, so it can enable such VR interactivity.

Then think about being able to walk onto an airplane without having to pass security, because no one with explosives would be able to get within ten miles of the airport. Think about being able to walk down the darkest alleyway in NYC in perfect safety, because there are no more muggers, because everyone knows that it’s impossible to escape arrest if you try. Imagine your car speeding down the road at 200 mph while you are surfing the web without the slightest fear of being arrested for speeding or crashing because you’re distracted, because your car knows where every other car is on the road, and is driving itself. Imagine working in space, while living in Iowa, telecommuting to a remote telepresence unit building a new and much larger space station. Imagine a classroom filled with students from nations all around the world, learning about ancient Rome by visiting it. Think about a million other uses for VR that we will demand, and the endless other potentials made possible by a self-aware environment.

Think about it, and maybe you’ll understand why I laugh at those who continue to believe that we will never become a “Transparent Society.”

Jul 14 2011

Optimist Author Mark Stevenson Is Trippin’… Through The Tech Revolution


“The oddest thing I did was attend an underwater cabinet meeting in the Maldives.”

Mark Stevenson’s An Optimist’s Tour of the Future is a rare treat — an upbeat tour visiting major shakers behind all the technologies in transhumanism’s bag of tricks — written by a quippie (a culturally hip person who uses amusing quips to liven up his or her narrative). Stevenson trips through visits to genetic engineers, roboticists, nanotechnology enthusiasts, longevity seekers, independent space explorers and more, among them names you’ll recognize like Ray Kurzweil, Aubrey de Grey, Eric Drexler and Dick Rutan.

I interviewed him via email.

RU SIRIUS:  Were you an optimist growing up?

MARK STEVENSON:  No, not especially – although I was always trying new things. For most of my childhood I was convinced I was going to be a songwriter for a living.

RU: What made you look forward to the future?

MS: I think that’s a natural thing that humans do. Time is a road. Those who don’t pay attention to the road tend to crash. A better question is: what stops people looking to the future? One reason is because the story we hear about the future is so rubbish. I mean think about it. If I recall the story of the future I’ve been used to hearing since I was born pretty much it goes something like this: “The future is not going to be very good (especially if you vote for that guy), it was better in the old days, you’ve got to look after yourself, the world is violent and unsafe, your job is at risk, your boss is an idiot, your employees are lazy, the generation below you are feral and dangerous, things are changing too fast and you can’t trust those scientists/ new-agers/ left wingers/ right wingers /religious people /atheists /the rich /the poor /what you eat /your neighbor. You are alone. Make the best of it. Vote for me. Buy my paper. I understand.” It’s hardly inspiring, is it?

RU:  As you’ve promoted the book, have you run into arguments or questions that challenge optimistic views?  What’s the most important argument or question?

MS:  I’m not intrinsically optimistic about the future; I’m not an optimist by disposition. I’d say I’m a possibilist – which is to say, it’s certainly possible that we’ll have a much better future, but it’s also certainly possible that we’ll have a really rubbish one. The thing that’s going to move that in one direction or another will be how all of our interactions in the march of history nudge us. One thing I do know is, if you can’t imagine a better future, you’re certainly not going to make it happen. It’s like going into a job interview thinking about how you’re not going to get it. You just won’t get the job. The biggest problem I have is semantic. As soon as you associate yourself with the word “optimism” some people will instantly dismiss you as a wishful thinker who really hasn’t understood the grand challenges we face. As a result, I constantly have to battle against a lazy characterization of my views that suggests I am some kind of Pollyanna in rose-tinted spectacles. My position is simply this: that we should have an unashamed optimism of ambition about our future, and then couple that with our best creative and critical skills to realize those ambitions. Have good dreams – and then work hard to do something about them. It’s obvious stuff but it seems to me that not nearly enough people are saying it these days.

RU:  Since writing the book, what has happened that makes you more optimistic?

MS: That there is a huge hunger for pragmatic change – in fact I’m setting up The League for Pragmatic Optimists to help catalyze this. Also I’m being asked to help organizations re-imagine themselves. That’s challenging and hopeful. The corporation is one of the biggest levers we have for positive change.

RU:  Less optimistic?

MS: When we talk about innovation we easily reference technology, medicine – or we might talk about innovation in music, dance, fashion. But we rarely talk about institutional innovation, and nowhere is this more apparent than in government. Almost every prime minister or president at some point early into their first term of government gives a rousing and highly ironic speech about how they wish to promote innovation. But isn’t it strange that while governments (and many corporations, it has to be said) so often talk about stimulating innovation, they themselves don’t change the way they work? When we introduced parliamentary democracy in the 1700s it was a massive innovation, a leap forward. Yet here we are, 300 years later, and I get to vote once every four years for one of two people, both of whom I disagree with, to run an archaic system that cannot keep up with the pace of change. To quote Einstein, “We can’t solve problems we’ve got by using the same kind of thinking we used when we created them.” It’s why I now dedicate much of my life to helping institutions change the way they think about their place in the world and the way they operate.

RU:  Among the technologies you explore, we can include biotech, AI and nanotech.  In which of these disciplines do you most see the future already present? In other words, whether it’s in terms of actual worthwhile or productive activities or in terms of stuff that’s far along in the labs, where can you best catch a glimpse of the future?

MS:  To quote William Gibson: “The future is here. It’s just not widely distributed yet.” So, synthetic biology is already in use, and has been for a while. If you’re diabetic, it’s almost certain your insulin supply is produced by E. coli bacteria whose genome has been tinkered with. The list of nanotechnology-based consumer products already available numbers in the thousands, including computer memory and microprocessors, numerous cleaning products, antimicrobial bandages, anti-odour socks, toothpaste, air filters, sunscreen, kitchenware, fabric softeners, pregnancy tests, cosmetics, stain resistant clothing and pet furniture, long-wearing paint, bed-ware, guitar strings that stay sounding fresh thanks to a nano-coating and (it seems to me) a disproportionate number of hair straightening devices. It looks set to underpin revolutions in energy production, medicine and sanitation. Already we’re seeing it increase the efficiency of solar cells and herald cheap water desalinization/purification technology. In fact, the Toffler Institute predicts that this will “solve the growing need for drinkable water, significantly reducing global conflict between water-starved nation-states.” In short, nanotech can take the ‘Water War’ off the table.

When it comes to AI I’m going to quote maverick robot designer Rodney Brooks (formerly of MIT): “There’s this stupid myth out there that AI has failed, but AI is everywhere around you every second of the day. People just don’t notice it. You’ve got AI systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an AI scheduling system. Every time you play a video game, you’re playing against an AI system.”

What I think is more important to pay attention to is how all these disciplines are blurring together sometimes creating hyper-exponential growth. If you look at progress in genome sequencing for example — itself an interplay of infotech, nanotech and biotech — it’s outstripping Moore’s Law by a factor of four.
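To put a rough number on what “outstripping Moore’s Law by a factor of four” compounds to, here is a back-of-the-envelope sketch. The doubling times are assumptions chosen purely for illustration (24 months for chips, a quarter of that for sequencing), not measured figures:

```python
# Compare compounded gains when one curve doubles four times as often
# as the other, over the same stretch of years.
years = 8
moore_doublings = years * 12 / 24   # one doubling per 24 months (assumed)
seq_doublings = years * 12 / 6      # one doubling per 6 months (assumed, 4x faster)

moore_gain = 2 ** moore_doublings   # 2^4 doublings
seq_gain = 2 ** seq_doublings       # 2^16 doublings

print(f"Over {years} years: chips ~{moore_gain:.0f}x, sequencing ~{seq_gain:,.0f}x")
```

The asymmetry is the point: a fourfold faster doubling rate doesn’t give four times the improvement, it gives the improvement raised to the fourth power, which is why Stevenson’s “blurring together” of disciplines matters so much.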

RU: What would you say was the oddest or most “science fictional” scene you visited or conversation you had during the course of your “tour”?

MS: The most “science fictional” was meeting the sociable robots at MIT’s Personal Robotics Group. Get onto YouTube and search for “Leo Robot” or “Nexi Robot” and you’ll see what I mean. Talking of robots, check out video of Boston Dynamics’ “Big Dog” too.

The oddest thing I did was attend an underwater cabinet meeting in the Maldives – the idea of the first elected president of the nation, Mohamed Nasheed. (I was one of only four people not in the government or the support team allowed in the water.) As we swam back to the shore I found myself swimming next to the president. His head turned my way and I must have looked startled, because he made the underwater hand signal for “Are you okay?” I signalled back to assure him I was, because there is no hand signal for “Bloody hell! I’m at an underwater cabinet meeting in the Maldives! How cool is that?!”

RU: Many of our readers are transhumanists.  What course of action would you recommend toward creating a desirable future?

MS: During my journey I spoke to a man called Mark Bedau, a philosopher and ethicist who said: “Change will happen and we can either try to influence it in a constructive way, or we can try to stop it from happening, or we can ignore it. Trying to stop it from happening is, I think, futile. Ignoring it seems irresponsible.”
This, then, I believe, is everybody’s job: to try to influence change in a constructive way. The first way you do that is to get rid of your own cynicism. Cynicism is like smoking. It may look cool but it’s really bad for you — and worse still, it’s really bad for everyone around you. Cynicism is an institution of the mind that’s just as damaging as anything our governments or our employers can do to us.

I also like something a man called Dick Rutan told me when I visited the Mojave Space Port. He’s arguably the world’s finest aviator, most famous for flying around the world nonstop on one tank of gas. He’s seventy years old and still test piloting high-performance aircraft, and he told me: “Never look at a limitation as something you ever comply with. Never. Only look at it as an opportunity for greatness.”

RU: Your book is pretty funny… and you’ve been a stand up comedian.  What’s the funniest thing about the future?

MS: My next book, obviously!

Share
Jul 10 2011

From Gamification to Intelligence Amplification to The Singularity

Share

“Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling. It was accelerating so fast that Nvidia started calling it Moore’s law cubed.”

The following article was edited by R.U. Sirius and Alex Peake from a lecture Peake gave at the December 2010 Humanity+ Conference at the Beckman Institute in Pasadena, California. The original title was “Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion.”

I’ve been thinking about the combination of artificial intelligence and intelligence amplification and specifically the symbiosis of these two things.

And the question that comes up is what happens when we make machines make us make them make us into them?

There are three different Moore’s Laws of accelerating returns. There are three uncanny valleys being crossed. There’s a sort of coming-of-age story for humanity and for different technologies. There are two different species involved, us and the technology, and there are a number of high-stakes questions that arise.

We could be right in the middle of an autocatalytic reaction and not know it. What is an autocatalytic reaction? An autocatalytic reaction is one in which the products of the reaction are its catalysts. So, as the reaction progresses, it accelerates and increases its own rate. Many autocatalytic reactions are very slow at first. One of the best-known autocatalytic reactions is life. And as I said, we could be right in the middle of one of these right now, and unlike a viral curve that spreads overnight, we might not even notice this as it ramps up.

There are two specific processes that I think are auto-catalyzing right now.

The first is strong AI. Here we have a situation where we don’t have strong AI yet, but we definitely have people aiming at it.  And there are two types of projects aiming toward advanced AI. One type says, “Well, we are going to have machines that learn things.” The other says, “We are going to have machines that’ll learn much more than just a few narrow things. They are going to become like us.”

And we’re all familiar with the widely prevalent method for predicting when this might be possible, which is by measuring the accelerating growth in the power of computer hardware. But we can’t graph when the software will exist to exploit this hardware’s theoretical capabilities. So some critics of the projected timeline towards the creation of human-level AI have said that the challenge arises not in the predictable rise of the hardware, but in the unpredictable solving of the software challenges.

One of the reasons that what we might broadly call the singularity project has difficulty solving some of these problems is that, although there’s a ton of money being thrown at certain forms of AI, it goes to military AIs or other types of AI with a narrow purpose. And even if these projects claim they’re aimed at Artificial General Intelligence (AGI), they won’t necessarily lead to the kinds of AIs that we would like or that are going to be like us. The popular image of a powerful narrow-purpose AI developed for military purposes would, of course, be the T-1000, otherwise known as the Terminator.

The Terminator possibility, or “unfriendly AI outcome,” wherein we get an advanced military AI, is not something we look forward to. It’s basically the story of two different species that don’t get along.

Either way, we can see that AI is the next logical step.

But there’s a friendly AI hypothesis in which the AI does not kill us. It becomes us.
And if we actually merge with our technology — if we become family rather than competition — it could lead to some really cool outcomes.

And this leads us to the second thing that I think is auto-catalyzing: strong intelligence amplification.

We are all Intelligence amplification users.

Every information technology is intelligence amplification. The internet — and all the tools that we use to learn and grow — they are all tools for intelligence amplification. But there’s a big difference between having Google at your fingertips to amplify your ability to answer some questions and having a complete redefinition of the way that human brains are shaped and grow.

In The Diamond Age, Neal Stephenson posits the rise of molecular manufacturing. In that novel, we get replicators (the descendants of today’s MakerBot), so we can say “Earl Grey, hot”… and there we have it. We’re theoretically on the way to this sort of nanotech. And it should change everything. But there’s a catch.

In one of the Star Trek movies, Jean-Luc Picard is asked, “How much does this ship cost?” And he says, “Well, we no longer use money. Instead, we work to better ourselves and the rest of humanity.” Before the girl can ask him how that works, the Borg attack. So the answer as to how that would look is glossed over.

Having had a chance to contemplate the implications of nanotechnology for a few decades (since the publication of Eric Drexler’s Engines of Creation), we understand that it may not lead to a Trekkie utopia. The Diamond Age points out one reason why. People may not want to make Earl Grey tea and appreciate the finer things in life. They might go into spoiled-brat mode and replicate Brawndo in a Brave New World or Fahrenheit 451. We could end up with a sort of wealthy Idiocracy amusing itself to death.

In Diamond Age, the human race splits into two types of people. There are your Thetes, which is an old Greek term. They’re the rowers and laborers and, in Diamond Age, they evolve into a state of total relativism and total freedom.

A lot of the things we cherish today lead to thete lifestyles and they result in us ultimately destroying ourselves. Stephenson posits an alternative: tribes.  And, in Diamond Age, the most successful tribe is the neo-Victorians.  The thetes resent them and call them “vickies.”  The big idea there was that what really matters in a post-scarcity economic world is not your economic status (what you have) but the intelligence that goes into who you are, who you know, and who will trust you.

And so the essence of tribalism involves building a culture that has a shared striving for excellence and an infrastructure for education that other tribes not only admire but seek out.  And they want to join your tribe. And that’s what makes you the most powerful tribe. That’s what gives you your status.

So, in Diamond Age, the “vickie” schools become their competitive advantage. After all, a nanotech society needs smart people who can deal with the technological issues.  So how do you teach nanotechnology to eighth graders? Well, you have to radically, aggressively approach not only teaching the technology but the cohesion and the manners and values that will make the society successful.

But this has a trap. You may get a perfect education system. And if you have a perfectly round, smooth, inescapable educational path shaping the minds of youths, you’re likely to get a kind of conformity that couldn’t invent the very technologies that made the nanotech age possible. The perfect children may grow up to all be “yes men.”

So one of the characters in Diamond Age sees his granddaughter falling into this trap and says, “Not on my watch.” So he invents something that will develop human minds as well as the nanotech age developed physical wealth. He invents “A Young Lady’s Illustrated Primer.” And the purpose of the Illustrated Primer is to solve the problem: on a mass scale, how do you shape each individual person to be free rather than the same?

Making physical stuff cheap and free is easy.  Making a person independent and free is a bigger challenge.  In Diamond Age, the tool for this is a fairy tale book.

The child is given the book and, for them, it unfolds an opportunity to decide who they’re going to be — it’s personalized to them.

And this primer actually leads to the question: once you have the mind open wide and you can put almost anything in there, how should you make the mind? What should you give them as content that will lead to their pursuit of true happiness and not merely ignorant contentment?

The neo-Victorians embody conformity and the Thetes embody nonconformity. But Stephenson indicates that to teach someone to be subversive in this context, you have to teach them something other than those extremes.

You have to teach them subtlety.  And subtlety is a very elusive quality to teach.  But it’s potentially the biggest challenge that humanity faces as we face some really dangerous choices.

During the space race, JFK said that to do this – to make these technologies that don’t exist, to go to the moon and so forth – we have to be bold. But we can’t just go boldly into strong AI or strong nanotech. We have to go subtly.

I have my own educational, personal developmental narrative in association with a technology that we’ve boldly gone for — 3dfx.

When I was a teenager, my mom taught me about art and my dad taught me how to invent stuff. And, at some point, they realized that they could only teach me half of what I needed to learn. In the changing world, I also needed a non-human mentor. So my mom introduced me to the Mac. She bought the SE/30 because it had a floating-point unit and she was told that would be good for doing science. Because that’s what I was interested in! I nodded and smiled until I was left alone with the thing so I could get down to playing games. But science snuck in on me: I started playing SimCity and I learned about civil engineering.

The Mac introduced me to games.  And when I started playing SimLife, I learned about how genes and alleles can be shaped and how you could create new life forms. And I started to want to make things in my computer.

I started out making art to make art, but I wasn’t satisfied with static pictures. So I realized that I wanted to make games and things that did stuff.

I was really into fantasy games. Fantasy games made me wish the world really was magic. You know, “I wish I could go to Hogwarts and cast magic spells.”  But the reality was that you can try to cast spells, it’s just that no matter how old and impressive the book you get magic out of happens to be, spells don’t work.

What the computer taught me was that there was real muggle magic.  It consisted of magic words. And the key was that to learn it, you had to open your mind to the computer and let the computer change you in its image. So I was trying to discover science and programming because my computer taught me. And once you had the computer inside of your mind, you could change the computer in your image to do what you wanted. It had its own teaching system. In a way, it was already the primer.
So then I got a PowerBook. And when I took it to school, the teachers took one look at what I was doing and said, “We don’t know what to do with this kid!” So they said, “You need a new mentor,” and they sent me to meet Dr. Dude.

I kid you not. That wasn’t his real name, but it was on his office door and his nameplate, and it’s what everyone knew him as.

Dr. Dude took a look at my Mac and said, “That’s really cute, but if you’re in university level science you have to meet Unix.” So I introduced myself to Unix.

Around that time, Jurassic Park came out. It blew people away with its graphics. And it had something that looked really familiar in the movie. As the girl says in the scene where she hacks the computer system, “It’s a UNIX system! I know this!”

I was using Unix at the university, and I noticed that you could actually spot the Silicon Graphics logo in the movie. Silicon Graphics was the top dog in computer graphics at that time. But it was also a dinosaur. Here you had SGI servers literally bigger than a person rendering movies, while I could only do the simplest graphics stuff with my little PowerBook. But Silicon Graphics was about to suffer the same fate as the dinosaurs.

At that time, there was very little real-time texture mapping, if any. Silicon Graphics machines rendered things with really weird faked shadows. They bragged that there was a Z-buffer in some of the machines. It was a special feature.

This wasn’t really a platform that could do photorealistic real-time graphics, because academics and film industry people didn’t care about that. They wanted to make movies, because that was where the money was. And just as with military AI, graphics technology built for making movies doesn’t get us where we want to go.

Well, after a while we hit a wall. We hit the uncanny valley, and the characters started to look creepy instead of awesome. We started to miss the old days of real special effects. The absolute low point for these graphics was the monkey chase scene in Indiana Jones and the Kingdom of the Crystal Skull.

Moviegoers actually stopped wanting the movies to have better graphics. We started to miss good stories. Movie graphics had made it big, but the future was elsewhere. The future of graphics wasn’t in Silicon Graphics; it was in the tiny, rodent-sized PC, which was nothing compared to the SGI but had this killer app called Doom. And Doom was a perfect name for the game, because it doomed the previous era of big-tech graphics. And the big-tech graphics people laughed at it. They’d make fun of it: “That’s not real graphics. That’s 2.5D.” But do you know what? It was a lot cooler than any of the graphics on the SGI, because it was real-time and fun.

Well, it led to Quake. And you could call that an earthquake for SGI. But it was more like an asteroid, because Quake delivered a market big enough to motivate people to make hardware for it. And when the hardware of the 3dfx graphics card arrived, it turned Quake‘s pixelated 3D dungeons into lush, smoothly lit, textured, photorealistic worlds. Finally, you started to get completely 3D-accelerated graphics, and big-iron graphics machines became obsolete overnight.

Within a few years 3dfx was more than doubling the power of graphics every year, and here’s why. SGI made OpenGL. And it was their undoing, because it not only enabled prettier ways to kill people, which brought the guys to the yard. It also enabled beautiful and curvy characters like Lara Croft, which really brought the boys to the yard, and also girls who were excited to finally have characters they could identify with, even if they were kind of Barbies (which is, sadly, still prevalent in the industry). The idea of characters and really character-driven games drove graphics cards, and soon the effects were amazing.

Now, instead of just 256 Megs of memory, you had 256 graphics processors.
Moore’s law became obsolete as far as graphics were concerned. Moore’s law was doubling. It was accelerating so fast that Nvidia started calling it Moore’s law cubed. In fact, while Moore’s law was in trouble because of the limits of what one processor could do, GPUs were using parallelism.

In other words, when Intel made the Pentium into the Pentium II, they couldn’t actually give you two of them with twice the performance. They could only pretend to, by putting the chip in a big fancy dress and making it slightly better. But 3dfx went from the original Voodoo to the Voodoo2, which had three processors on each card, and two cards could be paired to give you six processors.
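The arithmetic behind “Moore’s law cubed” can be sketched like this. A toy illustration, assuming three independent doubling factors per GPU generation (faster cores, more cores per chip, more chips per board); that reading of the phrase is my assumption, not Nvidia’s published numbers:

```python
# A CPU generation gives you one factor of two: a faster core.
# A GPU generation, on this reading, compounds three factors of two:
# faster cores, more cores per chip, and more chips per board.
GENERATIONS = 4

cpu_speedup = 2 ** GENERATIONS             # Moore's law: 2^4 = 16x
gpu_speedup = (2 * 2 * 2) ** GENERATIONS   # "cubed": 8^4 = 4096x

print(f"CPU after {GENERATIONS} generations: {cpu_speedup}x")
print(f"GPU after {GENERATIONS} generations: {gpu_speedup}x")
```

The point is that parallelism multiplies the exponent rather than the base, which is why graphics performance pulled away from single-core CPUs so quickly.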

The graphics became photorealistic. So now we’ve arrived at a plateau. Graphics are now basically perfect. The problem now is that graphics cards are bored.  They’re going to keep growing but they need another task. And there is another task that parallelism is good for — neural networks.

So right now, there are demos of totally photorealistic characters like Milo. But unfortunately, we’re right at that uncanny valley that films were at, where it’s good enough to be creepy, but not really good enough.  There are games now where the characters look physically like real people, but you can tell that nobody is there.
So now, Jesse Schell has come along. And he gave an important talk at Unite, the Unity developer conference. (Unity is a game engine that is going to be key to this extraordinary future of game AI.) And in this talk, Schell points out all the things that are necessary to create the kinds of characters that can unleash a Moore’s law for artificial intelligence.

A law of accelerating returns like Moore’s Law needs three things:

Step 1 is the exploitable property: what do you keep increasing to get continued progress? With chips, the solution involved making them smaller, which kept making them faster, cheaper and more efficient. Perhaps the only reliably increasable thing about AI is the quantity of AIs and AI approaches being tested against each other at once. When you want to increase quality through competition, quantity can have a quality of its own. AI will be pivotal to making intelligence amplification games better and better. With all the game developers competing to deliver the best learning games, we can get a huge number of developers in the same space sharing and competing with reusable game character AI. This will parallelize the work being done in AI, which can accelerate it in a rocket-assisted fashion compared to the one-at-a-time approach of isolated AI projects.
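That “quantity breeding quality” dynamic is essentially an evolutionary search: many candidates competing on the same task, with winners copied and varied. A minimal sketch of the idea; the task, fitness function, and mutation step are stand-ins invented for illustration, not a real AI benchmark:

```python
import random

TARGET = 0.75  # a hypothetical behaviour the competing agents should approximate

def fitness(agent):
    """Closer to TARGET is better; each 'agent' here is just one number."""
    return -abs(agent - TARGET)

def evolve(pop_size=50, rounds=100, seed=1):
    """Many agents compete at once; winners are kept and copied with small mutations."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(pop_size)]
    for _ in range(rounds):
        population.sort(key=fitness, reverse=True)
        winners = population[: pop_size // 2]          # selection
        mutants = [w + rng.gauss(0, 0.01) for w in winners]  # variation
        population = winners + mutants
    return max(population, key=fitness)

best = evolve()
print(f"best agent after evolution: {best:.3f}")  # converges near TARGET
```

Each individual mutation is blind, but running many candidates in parallel against the same test is what makes progress reliable — which is the argument for lots of developers sharing one competitive AI space.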

The second ingredient of accelerating returns is that you have to have insatiable demand. And that demand is in the industry of intelligence amplification. The market size of education is ten times the market size of games, and more than fifty percent of what happens in education will be online within five years.

That’s why Primer Labs is building the future of that fifty percent. It’s a big opportunity.

The final ingredient of exponential progress is the prophecy. Someone has to go and actually make the hit that demonstrates that the law of accelerating is at work, like Quake was to graphics. This is the game that we’re making.

Our game is going to invite people to use games as a school. And it’s going to introduce danger into their lives. We’re going to give them the adventures and challenges every person craves, to make learning fun and exciting.

And once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we’re dipping our toe into symbiosis between humans and the AI that shape them.

We rely on sexual reproduction because — contrary to what the Raelians would like to believe — cloning just isn’t going to fly. That’s because organisms need to handle bacteria that are constantly changing in order to survive. It’s not just about competing with other big animals for food and mates; you have to contend with tiny, rapidly evolving things that threaten to parasitize you all the time. And there’s this thing called the Red Queen Hypothesis, which suggests that you need a whole bunch of junk DNA available to handle the complexity of life against wave after wave of mutating microorganisms.

We have a similar challenge with memes. We have a huge number of people competing to control our minds and to manipulate us. And so when we deal with memetic education, we have the opportunity to take what sexual reproduction does for our bodies and do it for our brains by introducing a new source of diversity of thought into young minds. Instead of stamping generic educations onto every child and limiting their individuality, a personalized game-based learning process, with human mentors coaching and inspiring each young person to pursue their destiny, encourages the freshness of ideas our kids need to adapt and meet the challenges of tomorrow. And this sharing of our children with their AI mentors is the beginning of symbiotic reproduction with AI, in the same way that sexual reproduction happens between two sexes.

The combination of what we do for our kids and what games are going to do for our kids means that we are going to have only a 50% say in who they are going to be. They’re going to become wizards at the computer, and it’s going to specifically teach them to make better AI. Here’s where the reactants, humans and the games that make them smart, become their own catalysts. Every improvement in humans leads to better games, which leads to smarter humans, which leads to humans so smart that they may be unrecognizable in ways that are hard to predict.

The feedback cycle between these is autocatalytic.  It will be an explosion. And there are a couple of possibilities. It could destroy standardized education as we know it, but it may give teachers something much cooler to do with students: mentorship.

We’re going to be scared because we’re not going to know if we can trust our children with machines. Would you trust your kid with an AI? Well, the AIs will say, “Why should we trust you?”  No child abuse will happen on an AI’s watch.

So the issue becomes privacy. How much will we let them protect our kids? Imagine the kid has a medical condition and the AI knows better than you what treatment to give.

The AI might need to protect the kid from you.

Also, how do we deal with the effects of this on our kids when it’s unpredictable? In some ways, when we left kids in front of the TV while they were growing up, it destroyed the latchkey generation. We don’t want to repeat that mistake and end up with our kids being zombies in a virtual world. So the challenge becomes: how do we get games to take us out of the virtual world and connect us with our aspirations? How do we incentivize them to earn the “Achievement Unlocked: Left The House” awards?
That’s the heart of Primer. The game aims to connect people to activities and interests beyond games.

Finally, imagine the kids grow up with a computer mentor. Who will our kids love more, the computer or us?  “I don’t know if we should trust this thing,” some parents will say.

The kids are going to look at the AI, and it’s going to talk to them. And they are going to look at its code and understand it. And it’s going to want to look at their code and want to get to know them.  And they’ll talk and become such good friends that we’re going to feel kind of left out. They’re going to bond with AIs in a way that is going to make us feel like a generation left behind — like the conservative parents of the ‘60s love children.

The ultimate question isn’t whether our kids will love us but if we will recognize them. Will we  be able to relate to the kids of the future and love them if they’re about to get posthuman on us? And some of us might be part of that change, but our kids are going to be a lot weirder.

Finally, they’re going to have their peers. And their peers are going to be just like them. We won’t be able to understand them, but they’ll be able to handle their problems together.  And together they’re going to make a new kind of a world. And the AIs that we once thought of as just mentors may become their peers.

And so the question is: when are we going to actually start joining an AI market, instead of having our little fiefdoms like Silicon Graphics? Do we want to be dinosaurs? Or can we be a huge surge of mammals, all building AIs for learning games together?
So we’re getting this thing started with Primer at PrimerLabs.com.

In Primer, all of human history is represented by a world tree. The tree is a symbol of us emerging from the cosmos. And as we emerge from the cosmos, we have our past, our present and our future to confront and to change. And the AI is a primer that guides each of us through the greatest game of all: to make all knowledge playable.

Primer is the magic talking mentor textbook in the Hogwarts of scientific magic, guiding us  from big bang to big brains to singularity.

Primer Labs announced their game, Code Hero, on July 3.

The original talk this article was taken from is here.

Share