Posted by: Stenros | January 21, 2010


I finally went to see Avatar yesterday. It was difficult to meet that film with an open mind considering that I’m the right age to have grown up with director James Cameron’s previous films, I used to work as a film critic, this film has been in production hell forever, everyone and their grandpa have been talking and tweeting about it for some time (and it seems that I have to do so as well), I’m not a fan of 3D though it seems everyone else is (colour and sound also ruined cinema, as you know), and so on. Still, I thoroughly enjoyed the film. Many seem to have thought it is a bit of Dances with Pocahontas-Smurf and the Last of the Space Mohicans, and it is, but to me it seemed more like James Cameron’s Pink Narcissus with Furries.

This of course has nothing to do with pervasive games. Except that today I ran into this behind-the-scenes video on Gizmodo, and it turns out that they layered computer-generated characters into live-action scenes in real time. You need to go there to see it; WordPress doesn’t allow embedding video from Yahoo.

Anyhoo, as Gizmodo points out, this is essentially augmented reality. Back in IPerG we evaluated a few games that used AR. They were fairly advanced for that moment in time. Still, it was not exactly James Cameron’s IPerG. The equipment they use is a bit cumbersome (not exactly wearable in a game without a strong tech theme), but this does look pretty good. I’d certainly like to hear what Blair MacIntyre (who gave a pretty impressive speech on AR at GDC last year) has to say about this. Update: Professor MacIntyre puts it all in perspective in the comments.


  1. Ok, I’ll bite, but I don’t have much earth-shattering to say here. One way I think about this is:

    The AR-ish technology is pretty cool, although it’s not as new as they try to give the impression it is; folks I’ve talked to at other major FX companies (e.g., at least LucasFilm, from what I recall of a hallway conversation) have been doing this kind of live-AR-view for a while. Or so they claim. :) Being able to do real-time AR machinima production was the motivation for our “AR SecondLife” project a few years back (we created an AR client for SL so that we could set up real/virtual stages, and have the virtual characters controlled in real time by off-stage SL clients, but have them appear in the real-time view), and so I talked with folks from these shops to see what they thought. Of course, what we did was pretty low-brow compared to this! But, taking the next step to mocap control is what everyone wanted to do when they saw it.

    Live-AR-view is a pretty logical next step to what’s been out there for a while: if you watch the making of AI, for example, or LOTR, they had real-time VR views of the CG from the viewpoint of the camera. But it’s a big step!

    As for the difference between this and AR games, “the tracking’s the thing.” That and a predetermined stage with known and controllable physical props and people. Or, put another way: precise world knowledge. Knowing precisely where the viewer (camera, phone, game console, etc.) is, that’s half the battle. But, in a “real world” situation, if you want to create a visually coherent scene that really blends the physical and virtual world, you need to know where all that physical stuff is. Chromakey helps in a big way (since you don’t need to track all the physical stuff, you just subtract it away and put the virtual stuff behind); but, when you want the virtual content in front of (or worse, mixed in with) the physical stuff, everything needs to be carefully choreographed; you need to know whether that table, or that person’s hand, is in front of or behind the virtual content.

    It will be a LONG time till we can do this kind of integration at large scale in outdoor AR. In controlled scenarios, we already can do it (think of the museum pieces that folks at UCF have done). In tabletop AR games (like we’ve been doing) we can also do it to a limited extent, as long as we assume the only things in the game space are the props we know should be there.

    But to have those zombies come running down the hallway and through your living room, you need to know a lot about the building and the furniture and the random junk in the way; that’s what my vision colleagues would euphemistically call a “hard problem.”

    What makes the AR-camera work is the amazing physical setups, of course. Having a precise mocap system (notice the retro-reflective spheres on the cameras, people, props, etc) that covers a vast space and tracks all these things simultaneously is just amazing. Once you have all that technology integrated and working, the AR view is almost trivial, I suspect (they even seemed to imply that; “once we had the fusion and virtual cameras, we got this!”).

    All that said, the really impressive thing they did was the combined mocap/face tracking: that is the essence of their term “performance capture” and it’s what really let the actors shine. My suspicion is that they could have gotten by without all the AR views — it might have meant more post-production time, more work for Cameron, etc., but it wouldn’t have mattered so much in the end. But, without the performance capture, none of it would have mattered, since that’s what made the virtual characters so believable. Imagine what Polar Express could have been … poor, poor Tom Hanks, too far ahead of the curve. :)

  2. Wow, thanks for cutting through all the marketing embellishment! This really put the video in perspective.

  3. Finally got around to seeing Avatar and was able to check out the spoilery behind-the-scenes. And I gotta say we’ve come a long way from Phantom Menace and that stuff.

    The part where the colonel is testing the battle robot is the meta moment of the movie: The robot mimics the colonel like the CG mimics actors.

    For theme park use, I could imagine them creating a bluescreen room, giving people mocap tags and AR/VR screens and providing access to the Avatarland in the foreseeable future.

    I wonder when they’ll be able to use the graphics and 3D models made for movies directly in video games. Or perhaps they do that already, I don’t know.

  4. @montola: a free-roaming game in the world of Pandora rendered in real time… awesome :)

    thanks to Blair MacIntyre for the interesting post
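
The chromakey compositing described in the first comment — subtract away the keyed background and put the virtual content behind the physical foreground — can be sketched in a few lines. This is a minimal illustration only, assuming 8-bit RGB frames held as NumPy arrays; the function name, key colour, and tolerance are invented for the example, not taken from any real pipeline:

```python
import numpy as np

def chroma_key_composite(live, virtual, key=(0, 255, 0), tol=60):
    """Composite a virtual layer behind the keyed regions of a live frame.

    Pixels of `live` whose colour is close to `key` count as background
    and are replaced by `virtual`; everything else (the physical
    foreground) stays in front. Both frames are HxWx3 uint8 arrays of
    the same shape.
    """
    live16 = live.astype(np.int16)
    # Manhattan distance from each pixel to the key colour
    dist = np.abs(live16 - np.array(key, dtype=np.int16)).sum(axis=-1)
    mask = dist < tol  # True where the key colour shows through
    out = np.where(mask[..., None], virtual, live16)
    return out.astype(np.uint8)
```

Note this only handles the easy case the comment describes: virtual content strictly behind the physical foreground. Putting virtual objects in front of, or interleaved with, physical ones needs per-pixel depth, which is exactly the “hard problem” of knowing where everything in the scene is.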
