Archive | Immersive Technology

Coffee, Beer, Zombies, and the Future of Immersive Entertainment

30 Oct

Unfortunately, it couldn’t be all play, all the time in Atlanta. We had a full day of work ahead of us to set the record straight on a few items.

Does Octane deserve to be considered one of the nation’s best coffee establishments? Yes. Especially when paired with a popover from the adjoining Little Tart Bakeshop in the Grant Park location.

Will Rosebud provide a superlative experience for the discerning brunch enthusiast? We started with fried cheese grits with smoked cheddar and pepper jelly. Need I go on?

Can Porter Beer Bar live up to its reputation as one of Atlanta’s best destinations for enjoying a wide variety of amazing beer? Early results are extremely promising, but from a due diligence standpoint, we feel that it would be imprudent to end our investigation prematurely.

“Atlanta” is a flexible concept

Refreshed and recharged, and in gratitude for Atlanta’s gracious hospitality, we thought it was only right that we do our part to help the city with its zombie problem. Atlanta Zombie Apocalypse promises an interesting variant on the standard haunted attraction. Unlike NetherWorld, which was incongruously located a few blocks from suburban strip malls, AZA is convincingly nestled in the woods off a deserted stretch of highway. The attraction is built on and around the grounds of a deserted motel, and it didn’t take much effort to believe that zombies might be nearby.

For those who don’t frequent this type of entertainment, the haunted attraction industry is all grown up and embraces more than what we endearingly referred to as a “haunted house” back in the day. There is a growing number of variants on the theme of scaring people for fun and profit, including haunted mazes, haunted trails, scare zones, haunted hayrides, etc. Many attractions aggregate and even mash up several of these styles to make sure that you don’t get too comfortable. AZA provides three attractions which can be generally described as 1) a haunted house/haunted trail mashup, 2) a haunted maze + paintball, and 3) haunted house/immersive theatre.

Out of respect for the organizers, I won’t give up any spoilers, but I will share some general impressions. Admission can be purchased on a per-attraction basis. I was here from California and was in the middle of nowhere, GA. I was going to all three.

Although the attractions can be visited on a standalone basis, some of the employees had specific ideas on the order in which to experience them if you were seeing more than one. They were essentially imposing a meta-level of story and emotional engagement that hadn’t actually been accounted for in the overall design of the attraction. The best part is: different people had different opinions on the ideal order, and they were pretty passionate about their individual assessments. It was like having a personal team of horror sommeliers. More importantly, it was unscripted proof that the tendency to find story in our lives is central to the human experience. It also echoed my thoughts on the very personalized reactions that one can expect from immersive experiences.

Unlike many haunted attractions, which are essentially self-directed (i.e., here’s the entrance, proceed to the exit), each of these attractions had a guided component. While the most memorable features of many attractions are the live performers, the level of interactivity is typically low and unidirectional: they scare, you scream, keep moving. The guides and performers at AZA added a sense of theatricality and interactivity which enhanced the overall immersive quality of the experience. The first attraction we chose was called “The Curse,” and it took advantage of the ample, wild surroundings by using parts of the neighboring forest as the set. There was a backstory and a mystery to the attraction, and we were recruited as investigators and addressed directly by the guides and performers. Although I saw room for improvement, I was certainly entertained.

The blogger in the line of duty

For me, the main attraction, and the reason I found myself in Conley, GA deferring my important research of Atlanta’s brewpubs, was the “Zombie Shoot.” We were provided with a semi-automatic AirSoft gun (essentially paintball, but using specialized BBs instead of paintballs) and protective headgear, and were told, “Aim for the head or the chest. Keep firing until they go down.” Pretty much the exact opposite of your standard preshow advisement, “Keep your arms and legs inside the vehicle at all times.” Also, in case it’s not totally clear, you’re firing at real people, I mean zombies, not cardboard cut-outs or animatronics. Did it work? Mostly. Was it fun? Definitely.

Maybe Shakespeare actually IS the future of haunted attractions

The final attraction was called “?” and I think it’s telling that it might have been my favorite even though on its surface, it seemed to be a standard haunted house. Why? Story. It was the most theatrical of the three and featured the most interesting and inventive story, while still remaining focused on trying to scare you out of the building. Granted, we’re not talking Shakespeare, but it was a refreshing and successful variant on what could have been an uninspired production.

The production value in these attractions was not award-winning, and the storytelling was not expert, but I’m still glad I checked them out. It was all produced with a lot of heart, and everybody from the people selling the tickets to the performers and guides and zombies seemed to be having a good time and wanted to make sure that we were too. By providing three very different, hybridized attractions, the creators made it clear that they were willing to take some risks and try out some new ideas, which will always impress me more than playing it safe.

Crossing the line from a one-way performance to an interactive experience can immediately raise many expectations that are potentially difficult to meet. The biggest challenge becomes the reconciliation between the audience’s sense of agency and the practical and aesthetic parameters of structured entertainment. It’s a tricky balancing act between living an experience and playing along. To choose an extreme yet practical example from this show, being provided with a gun and an opportunity to fend off assailants is real and pulls you right into the experience. However, remaining cognizant of where you can and can’t shoot can temporarily pull you out. It trades the steady equilibrium of an evenly immersive but passive experience for one which attempts to balance instances of deep immersion within a more structured rules-based framework. The resulting dynamics suggest a hybrid with gaming, sports, and other experiences that would otherwise seem at odds with what (in this case) is essentially theater. Personally, I think that finding a satisfying balance in these intersecting sensibilities is a challenge worth taking on, and that we’re hopefully just scratching the surface of the types of experiences we can expect to enjoy in the future.

Breaking Bad: Need a Third Eye For My Second Screen

20 Sep
I Heart Breaking Bad

I finally got a chance to experience Story Sync for AMC’s Breaking Bad.

DISCLAIMER: I’m a big fan of Breaking Bad; it’s one of the few shows I can remember watching from inception instead of telebinging my way into a state of catatonia.

RANT ALERT: I have almost nothing good to say about this, so if you’re generally having a nice day, and would rather not read about someone complaining about television, then feel free to check in later when I’m in a better mood.

Quick! Breaking Bad is about to start!

Story Sync is a “second screen” experience, which refers to the use of an additional device (e.g., smartphone, tablet) while consuming a primary source of content (e.g., TV). Despite the high-tech sound of it, Story Sync actually requires you to recalibrate your habits by a few decades. That’s right, prepare to depart the world of DVR, Hulu, VOD, Netflix, iTunes, and any other means that you regularly use to consume content at pretty much ANY OTHER TIME than the moment it originally aired, because Story Sync only really works as designed during the show’s original broadcast. Aside from the Super Bowl, I can’t remember the last time I watched a TV show during its scheduled airtime. At least the Super Bowl is a live event, broadcast once annually, with an unknowable outcome that will be spoiled within about five minutes of engaging the outside world. On the other hand, it makes no sense for me to drop everything in my life to watch a pre-recorded, serialized drama, but that is exactly what AMC requires of anyone who wants to experience Story Sync.

AMC HQ

To get started, you must download the AMC Mobile app to your supported device of choice and keep it handy as you enjoy one of the best shows on television. As the show progresses, your device plays an alert sound whenever it has new synchronized content for you to consume, and the interface provides a running countdown to when the next tantalizing update will be served. The second-screen content comes in the form of “cards,” each of which is typically one of the following (a rough sketch of the mechanics follows the list):

– relevant still photo from the current or a past episode

– relevant video clip from a past episode

– interactive multiple choice trivia question

– interactive poll

– advertisements; yes, advertisements.
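For the technically curious, here is roughly how I imagine the plumbing works: each card carries a content type and a timestamp offset into the live broadcast, and the app surfaces it once the playhead catches up. Below is a minimal sketch of that idea in Python; the field names, offsets, and polling loop are my own assumptions, not AMC’s actual data format or delivery mechanism.

```python
# A toy model of a "Story Sync"-style second screen. Everything here is an
# illustrative assumption; AMC's real format and transport are not public.
import time
from dataclasses import dataclass

@dataclass
class Card:
    offset_s: float  # seconds into the live broadcast when the card should fire
    kind: str        # "photo", "clip", "trivia", "poll", or "ad"
    payload: dict    # caption, question text, choices, etc.

EPISODE_CARDS = [
    Card(120.0, "photo", {"caption": "Remember this from Season 2?"}),
    Card(300.0, "trivia", {"question": "Who said it?", "choices": ["Walt", "Jesse", "Saul"]}),
    Card(480.0, "ad", {"sponsor": "(inevitably)"}),
]

def run_story_sync(broadcast_start: float, cards: list) -> None:
    """Ping the viewer whenever the live playhead passes a card's offset."""
    pending = sorted(cards, key=lambda c: c.offset_s)
    while pending:
        elapsed = time.time() - broadcast_start
        if elapsed >= pending[0].offset_s:
            card = pending.pop(0)
            print(f"[PING] {card.kind}: {card.payload}")  # the alert-sound moment
        else:
            time.sleep(1.0)  # counting down to the next tantalizing update

# run_story_sync(time.time(), EPISODE_CARDS)  # paces the cards against wall-clock time
```

Note that this only works if the viewer’s clock and the broadcast clock agree, which is exactly why the whole scheme falls apart the moment you time-shift.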

Let me just cut to the chase. Breaking Bad is an intense show. It has successfully navigated the murky waters of long-format television to create an epic story that has momentum and feels real without getting mired in its own intricacies. On the whole, the aggregate experience can be as visceral and emotionally satisfying as watching a great movie. The level of technical artistry from the writing to the cinematography to the editing, sound, music, etc. is exemplary, and all of it is brought home by incredible performances from a very talented and well-cast collection of performers. [PING] Hold on, it’s Story Sync, be right back, I need to answer a trivia question. Dammit! Oh, that’s the right answer? Right, now I remember. OK, where was I? Without giving anything away, characters do unexpected things and the plot never fails to enthrall over literally dozens of hours. Particularly in these final episodes, there are moments of great pathos and emotional catharsis as [PING] Let’s see here, a poll: how would I rate Walt’s ethical decision here on a scale of 1-5? Hmmm. Well, could be a three, but I’m leaning toward two. Two? Three? Umm. OK, two. Cool, 31% of Story Sync’d America agrees with me; I feel so . . . connected. Even the arguably relevant content, which is designed to refresh your memory about a pertinent detail from a past episode, is usually an inappropriately timed distraction.

You’re probably wondering why I don’t just pause the show so I can focus my attention on the poll and not miss what’s happening on the “first screen.” Well, this is a “live TV” experience, and pausing would cause me to lose my sync, not to mention run the risk that my second screen updates might actually end up SPOILING something I haven’t seen on the first screen yet. Also, while you can technically experience Story Sync on a non-live version of the show, it is provided as an unsynchronized “archive” version of all of the second screen content. This means that spoiler information for the entire show that you are about to watch is just a swipe away.

Before you write me off as a Luddite (actually a pretty audacious proposition if you’ve read any of my other posts), I admit that the concept of the second screen might have its place. I’m an unabashedly huge fan of Project Runway, which has a second screen feature that allows you to vote on whether you agree with the judges or whether you think a particular contestant is headed for disaster. Chances are, you’re probably already thinking or talking about the exact question, and it takes about a second to form a quick opinion and cast a vote. The results are shown on the air in a corner of the screen in real time and then they disappear. This works because Project Runway is a reality show and a competition. There is no disbelief to suspend, and the second screen experience provides a brief, intermittent outlet for your inner judge with instant gratification. This type of entertainment has a familiar and expected structure (setup, competition, drama, judging, victory/loss, reflection) that is simply more compatible with a second screen experience that can enhance those inherent emotional beats.

The opposite is true with a dramatic show. The emotional impact of the show’s created world is directly correlated with your engagement and attention. Every detail that is seen and heard on-screen is deliberately designed to deepen and retain your immersion in the story as it unfolds. But it’s a two-way street. While I can see that the idea behind Story Sync was to encourage a deeper level of engagement, unfortunately it seems to have the opposite effect. Moreover, even the smallest amount of analysis or audience-testing would have reached the same conclusion before it saw the light of day, or at least the dim lighting of your living room.

Heisenberg votes “No” on Story Sync

Plain and simple, Story Sync represents an extraordinary lapse of judgment on the part of the decision-makers who thought this would be a good idea for Breaking Bad. When this show premiered more than five years ago, I’m pretty sure that the creators did not intend for the attention of their audience to be dissipated into a Chinese water torture of inconsequential irritations and distractions. I’m sure that there must be a creative and compelling way to keep fans immersed and engaged with this world, but I’m also sure that this isn’t it.

Choose Your Own Adventure: 21st Century Edition

13 Sep
Spoiler Alert!

I have many fond memories of the Choose Your Own Adventure series of books. I whiled away many hours of post-recess silent reading wandering through myriad outcomes in my imagination. I always wondered why that experience had to be limited to the world of books, and I thought it would be so cool if that sensibility could be woven into movies. Well, as it turns out, it’s happening . . . sort of.

When I was looking for apps to purchase for my NeuroSky MindWave EEG headset, I couldn’t resist Paranormal Mynd: Exorcism from MyndPlay. Before I go any further, if you aren’t familiar with what’s currently happening with consumer-grade EEG technology, a brief background will be helpful. Electroencephalography (EEG) is the detection and recording of the electrical activity generated by the brain. For this information to be useful in a medical context, it is not uncommon for a recording device, typically worn as some form of headgear, to have anywhere from 16 to 256 electrodes. This grade of equipment can be expensive as well as generally impractical for casual/recreational use.

Medical grade data and great for metal night at the karaoke bar

In the past few years, NeuroSky and other companies, including Emotiv and InteraXon, have developed consumer-grade devices that have 1-4 electrodes and range in price from $80 to $300. The tradeoff pertains to the quantity and quality of the data collected. NeuroSky’s MindWave headset uses a single dry electrode, but can be worn and activated in less than one minute. Their proprietary algorithms currently interpret your brain activity as variable levels of FOCUS and RELAXATION. From an application development standpoint, this allows for the conditional execution of routines based on either or both of those levels, as shown in the sketch below.
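As a rough illustration of what “conditional execution of routines” could look like, here is a minimal sketch. It assumes the SDK hands you a pair of 0-100 values (NeuroSky calls these eSense “attention” and “meditation”); read_levels and trigger are hypothetical placeholders for whatever the headset library and your application actually provide, not the real NeuroSky API.

```python
# Minimal sketch: branch on consumer-EEG FOCUS/RELAXATION levels (0-100).
# read_levels() and trigger() are hypothetical stand-ins, not the NeuroSky SDK.
FOCUS_THRESHOLD = 60
RELAX_THRESHOLD = 60

def read_levels():
    """Placeholder: return the latest (focus, relaxation) pair from the headset."""
    raise NotImplementedError

def trigger(routine_name: str) -> None:
    print(f"running {routine_name}")

def on_sample(focus: int, relaxation: int) -> None:
    if focus >= FOCUS_THRESHOLD:
        trigger("focus_routine")       # e.g., lift the drone, advance the scene
    if relaxation >= RELAX_THRESHOLD:
        trigger("relaxation_routine")  # e.g., dim the lights, slow the pacing
```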

Possession or just caffeine withdrawal?

Paranormal Mynd: Exorcism is essentially a movie that contains “decision points” which launch different scenes based on the outcome of some period of EEG analysis. The story is told primarily from a first person point of view. As the viewer, you are an exorcist, and the decision points require you to maintain a threshold level of focus for a period of time in order to successfully exercise your powers. In this title, there are only two decision points, and the entire experience takes less than ten minutes.
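To make the “decision point” mechanic concrete, here is a guess at the underlying logic: sample the focus level over a window and branch on how often it cleared a threshold. The window length, threshold, ratio, and scene names are all invented for illustration; MyndPlay’s actual rule isn’t documented here.

```python
# Hypothetical branching-video decision point: hold focus above a threshold
# for enough of the window and the "successful exorcism" scene plays next.
import time

def decision_point(read_focus, window_s=10.0, threshold=60, required_ratio=0.7):
    """Return the filename of the next scene based on sustained focus."""
    samples = hits = 0
    start = time.time()
    while time.time() - start < window_s:
        focus = read_focus()  # latest 0-100 focus value from the headset
        samples += 1
        hits += focus >= threshold
        time.sleep(0.5)
    return "exorcism_succeeds.mp4" if hits / samples >= required_ratio else "possession_wins.mp4"
```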

On one hand, it’s easy to dismiss this as something that I paid a buck for that entertained me for fifteen minutes. However, I found it unique enough to also see it as a proof of concept that raises some interesting questions about the potential for this style of storytelling.

The plot goes something like this . . .

Even more than the old Choose Your Own Adventure books, there is a game aspect to it that makes it more than just a story. While some of those books had multiple satisfying outcomes, others seemed designed to encourage you to exercise your judgment to find a single outcome that felt like “winning.” The MyndPlay title adds a new dimension by requiring some level of “skill” to trigger a desired outcome. In other words, your decision to access a certain outcome is separate and different from your ability to access it, which certainly amplifies the game quality of the experience. While a good game often has a great story component to it, I still make a distinction between when I’m in the mood to “play a game” vs. “enjoy a story.” This experience essentially combines the two, and it’s still an open question as to whether it works for me.

Other questions that I had:

Do I want to see all the outcomes? Even the ones where I don’t win? Do I want to try harder to see a different outcome? How does this meta-level of engagement change my emotional relationship to the story? Does it matter?

This title employed classic horror tropes with very little subtlety. Girl is possessed. Bad things happen if she stays that way. You need to save the day. Could this format be effective in the telling of more sophisticated stories?

Co-author of this post

Do the nuances of the decision mechanism enhance or detract from the experience of the story? For instance, I actually found it very easy to achieve threshold levels of focus, because my attention was riveted on the possessed character on the screen. In fact, I found I had to try harder to trigger a threshold of non-focus just to explore the alternate outcome. What if the threshold tests were structured in counterpoint to the content, e.g., what if I had to trigger a threshold of relaxation while watching the same intense scene? Would that make it too much of a game?

Basically, if this is an intermediate step in the exploration of a new style of storytelling, where do we go from here?

Neurogaming Conference 2013: Expo Highlights Part 2

16 Aug
‘Nuff Said

In my previous post, I talked about some of the highlights from the 2013 Neurogaming Conference and Expo. Did I mention that the Expo was in a nightclub? So basically on a pleasant weekday in early May, while you were probably engaged in performing or avoiding something at work, I was hanging out in a nightclub making drones fly around with my brain. Man, I’ve always wanted to say that!

Anyway, where was I? Ah yes, the bar had just opened. My go-to industry event cocktail tends to be a Gin & Tonic. It tastes great in a plastic cup and it’s almost impossible to screw up. So drink in hand, I continued my explorations.

Tactical Haptics: These guys are developing a technology called Reactive Grip which is partially based on technologies developed out of the University of Utah. I’m not a gamer, so please feel free to correct me if I get some of this wrong. In a sense, they are extending the general paradigm of a motion-based game controller such as the Razer Hydra by Sixense by adding haptic feedback. Sounds straightforward enough, but it’s not. It was one of my favorite demos at the Expo, and I’ve been trying to figure out why. Basically, my limited experience with haptic feedback is that it’s like the 3D of the gaming world. With rare exception, studios are churning out 3D movies because they feel like they’re supposed to, and the end result is an obligatory, thoughtlessly-executed assault that is ultimately distracting. Haptic feedback can be the same way: I’m firing a big gun or driving a big car, my hands are buzzing, OK, I get it, tell me something I don’t know. I don’t feel that much more involved in the experience just because I’m feeling the exact same buzzing sensation too many times at totally predictable points.

Michael Buffer doesn’t need Tactical Haptics

Tactical Haptics takes it much further. First of all, the haptic feedback in Reactive Grip is palm-facing, so it’s accessing what is probably a more vulnerable and sensitive part of your hand. As a result, it lends itself to enhancing the types of actions that you would perform while gripping something with your palm and fingers. I have found a disconnect between the way you hold a standard haptic game controller and the object it intends to emulate, e.g., steering wheel, gun, etc. However, the form factor of Reactive Grip maps pretty closely to how you would hold a sword, for instance. In fact, one of their demos featured a virtual on-screen mannequin that you could hack with a sword. The continuity of what you see on the screen with what you feel in your hand is unlike anything I’d ever experienced. Even the level of feedback was regulated based on whether you were “touching” the mannequin or attempting to cut through it. The effect is uncanny, which is usually a good indication that you’re on to something big. Rather than providing a token enhancement, this technology really pulls you into the experience by engaging a critical sense in a well-considered way. These guys are looking to launch a Kickstarter and they’re hustling on the road, making appearances at Meetups in the Bay Area. Worth checking out!

NextGen Interactions: I had been wanting to check this out for a while, because it allowed hands-on (or would that be heads-on) experience with the Oculus Rift virtual reality headset. One of the guys in line mentioned that there had been “a line to get into the line” to try Oculus Rift at the recent Game Developers Conference, so waiting for a few minutes seemed like a good deal. Jason, the founder of NextGen Interactions, was allowing visitors to experience a prototype of a game that he was developing that featured the headset. At this point, it’s hard to say anything about experiencing Oculus Rift for the first time that hasn’t been said before, but the general consensus, with which I agree, can be accurately summed up as HOLY $#!+!!!

Better give the Oculus Rift a break

It’s real easy. You sit down in the chair, put on the goggles, and the world that you know disappears. Let me say that again. The world disappears. The first thing I did when I put on the goggles was look up and then look behind me. Yup, looking up at a tall building or back at an empty landscape. Total 360-degree immersion. Jason incorporated the Razer Hydra as the controller, which makes for a very intuitive way to move around, and once you start moving, it is impossible to continue believing that your chair is not actually moving. Actually, you basically forget all about the chair. Jason patiently walked me through the level he designed, where he had embedded some features that I appreciated, including puzzles that required you to interact directly with your environment, like picking up objects and stacking them, which you literally do with your hands, thanks to the Hydra. As a testament to the level of immersion, though, I found it very difficult to focus on what he was saying, because the whole concept of his voice coming in from some nightclub in San Francisco was totally alien to what was “real” for me, which was a post-apocalyptic landscape that I was intent on exploring. After about ten minutes of this, it was time to move on, and I actually found it somewhat disappointing to return to the real world. Jason was meticulous about collecting feedback, and apparently my enjoyment of the Gin & Tonic made me a subject of interest with regard to the potential for motion sickness. I admit to a slight feeling of queasiness, which I think I would have felt even without the drink. Hopefully, he had a control group. Oculus Rift did not invent virtual reality, but they seem to have figured out how to engineer it in a way that will be consumer-friendly (i.e., it won’t cost thousands of dollars). With developers like NextGen Interactions building content for the platform, it shouldn’t take long to catch on.

Coming Soon: Isaac Asimov posthumously sighs at what I’m getting out of the Foundation Trilogy.

Neurogaming Conference 2013: Expo Highlights Part 1

15 Aug

In my previous post, I talked a bit about Neurogaming in general. But how does one even end up at a Neurogaming Conference? Good question, and someone should really aggregate a list of responses, because I’m sure it would be a fascinating read. Speaking for myself, earlier this year I was searching for a project that would allow me to learn about robotics, Arduino, interactivity, etc., and I came across a book called “DIY Make a Mind-Controlled Arduino Robot.”

Lead me not into temptation

For $4.99, if your curiosity can resist this type of temptation, then your willpower is an order of magnitude stronger than mine. The details of this project are best discussed in a future posting, but in short, it makes use of a NeuroSky MindWave, which is a consumer-grade EEG headset priced at less than $100 that performs rudimentary analysis of your brainwaves, and interfaces with a software development environment and an “app store” featuring games, brain training, interactive films, etc. It didn’t take long to discover that there is a thriving community of makers, researchers, artists, and hobbyists who are actively monkeying around with these types of devices with astonishing results. From there, it didn’t take long to stumble on the conference.

The conference featured an excellent lineup of speakers participating in panels on a wide variety of topics, but in this post, I’m going to focus on the Expo. There was a modest but diverse array of exhibitors ranging from EEG headset manufacturers to research organizations to gaming startups to the city of Helsinki, which apparently has a high concentration of both the gaming industry (Angry Birds) and neuroscience. Go figure. For my part, my body was just moving my head around the room trying to figure out what I could plug it into, so here are some of the highlights.

If you look closely, you can see the headset

Puzzlebox: The Puzzlebox booth allowed visitors to test-fly their Orbit product, which is a small helicopter that you can control with your brain using a NeuroSky headset. Because I had some prior experience with this type of headset, I was able to sustain flight for approx. 20 seconds on my first try. At this point, the level of control is essentially limited to on/off, or in this case fly/don’t fly. From what I understand about NeuroSky’s platform, it would be tricky to develop a product that you could both fly AND direct (e.g., turn right, turn left, etc.). The employee at the booth told me that they are planning to release additional functionality in the form of software upgrades to combat the trend of planned obsolescence that seems to be the norm in toys (and pretty much everything else) these days. She also mentioned that the founder had designed a mind-controlled pyrokinetic installation, but that it wasn’t quite ready for public consumption. So, basically, you’re one EEG headset away from recreating Stephen King’s Firestarter in the comfort of your own home. On a serious note, there is clearly a lot of creativity and energy at this company, and I have a lot of respect for their commitment to maintaining the value in their products through periodic updates. I can’t wait to see what they come up with next.
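It also helps to see why “fly AND direct” is tricky with this class of device: a single attention value maps naturally onto one degree of freedom, leaving nothing left over for steering. A toy sketch of the idea, not Puzzlebox’s actual control code:

```python
# One scalar in, one degree of freedom out: attention gates the throttle,
# and there is no second channel left for left/right. (Illustrative only.)
def throttle_command(attention: int, threshold: int = 60) -> str:
    """Map a 0-100 attention value to a fly/don't-fly command."""
    return "FLY" if attention >= threshold else "LAND"
```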

Foc.us: The founders of this company were among the friendliest people at the whole conference. They market a different sort of headset. Rather than trying to figure out what electrical signals are being generated by your brain, their device actually stimulates your brain using a method called transcranial direct current stimulation (tDCS). The foc.us is marketed toward gamers, but the technique has a history of therapeutic application and use as a cognitive aid. This device is unregulated by the FDA and was essentially self-funded, but when you think about the things that are FDA-approved and funded by institutional capital, you quickly realize how this might actually be a good thing. So I tried it. The guys advised that you probably shouldn’t use it for more than 40 minutes in one day. I wore it for about 7 minutes, and probably would have kept going, but there were people waiting. The sensation can best be described as “prickly,” and lives in a grey area between “annoying” and “not quite painful.”

I’ll take this over Olestra any day

It reminded me of how I feel about exercise — you’re basically lying to yourself if you think that it feels good, but you exert a temporary override on your subjective definition of “tolerable.” So did it do anything? I’m glad that I tried it, but it’s difficult to get a handle on the effects after only one trial. I do know that while I was wearing it, I was talking the founders’ ears off by providing them unsolicited business advice about the product. They seemed like good ideas at the time! I want to say that I felt a little bit more tuned in and focused afterward, but in the interest of full disclosure, it was evening, and I realized the bar had just opened. Generally, it’s exciting to see technology that was previously the domain of medical research and military training make its way into the mainstream.

In my next post, I’ll share a few more mind-blowing highlights and more about the bar.

Neurogaming Conference 2013: The Enchanting Prelude

13 Aug

For the purposes of this post, we’ll just pretend that “blog” is short for “backlog.” I attended the 1st Neurogaming Conference and Expo in San Francisco in May 2013, but chances are, you weren’t there, and you’ve never heard about Neurogaming, so we’re cool. By the way, I get a real kick out of the fact that this was the first incarnation of this event. This is one of the few instances where you can show up to the party early and have it be a good thing.

NO! This is not neurogaming!

To start, I’m reminded of a comment that was made by one of the panelists on the “Investing in Neurogaming” panel. He basically said, “NEVER use the word Neurogaming with consumers.” This made me laugh out loud, because it explains the odd looks and slow backing-away when I mention the word to people. The conference actually touched on a number of interesting topics including medical/therapeutic and educational applications, and business/industry considerations in addition to “gaming.”

So, what does it mean exactly? Like any other recently minted compound buzzword, the answer to that question is somewhat up for grabs, so I’ll take a hack at it. For me, neurogaming means the deliberate incorporation of emotional dynamics into the feedback loop of gameplay. Notice that I’m not talking about sensors or lidlocks or pharmaceuticals or mind control or anything like that. In fact, I’m deliberately taking a position that is not based in technology or biology, because it has helped me zero in on what fascinates me about it. I would go so far as to say that, using my definition, Neurogaming has arguably been around for decades.

Have a seat = pwnd (literally)

Take for instance (some of) the chess players that you see in a typical metropolitan park. Personally, I get spooked just walking by these guys, watching them move their pieces with efficient lethality before commandingly pummeling their timeclocks. If that doesn’t affect your game, then you should check your pulse.

People play games for any number of reasons, but some kind of emotional kick is almost always at the root of it. However, this dynamic potentially leads to a couple of dead ends.

The calm before the storm

First, games inherently require a level of abstract mental processing which competes with and interrupts the emotional experience. Unless you have a very vivid imagination, chess is still some funky-looking pieces on alternating-colored squares, and there’s probably a lot of ping-ponging going on inside your head that keeps you from developing an impractical level of emotional momentum. In fact, for certain games, it’s probably advisable to contain your emotions so that they don’t negatively impact your performance, in which case the emotional payoff is felt afterward. Second, despite the standard bluffs, threats, intimidation, and occasional laughter that characterize most gameplay chatter, there usually isn’t a method for any of this emotional energy to be utilized directly by the game itself. None of this is to suggest that emotions do not have a real effect on gameplay. Even a casual engagement with board games suggests that the opposite is true. However, my definition requires a “deliberate” as opposed to incidental incorporation of emotions. And what better means than technology to facilitate that? Hence the conference.
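To show what “deliberate incorporation” might look like in practice, here is a toy feedback loop in which a measured emotional signal directly tunes the game rather than just leaking out as table talk. The arousal scale and adjustment factors are arbitrary assumptions, not anything presented at the conference.

```python
# Toy neurogaming loop: an emotional signal (however it is measured) feeds
# back into gameplay on purpose. Scale and factors are arbitrary.
def adapt_spawn_rate(current_rate: float, arousal: float) -> float:
    """Ease off when the player is overwhelmed, push harder when they're flat.

    arousal is assumed to be normalized from 0.0 (flatlined) to 1.0 (panicked).
    """
    if arousal > 0.8:
        return current_rate * 0.9   # back off before frustration kills the fun
    if arousal < 0.3:
        return current_rate * 1.1   # turn up the pressure
    return current_rate             # sweet spot: leave it alone
```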

Personally, looking at the trends, technologies, platforms, and prototypes that were discussed at the conference from a few steps back has allowed me to get a better grasp on their implications in other areas, including storytelling and experience design, regardless of whether an actual game is part of the equation.

What? You wanted to hear about the actual conference? I guess I got carried away. In the next post in this series, I share some stories and experiences from the event, including my highlights from the Expo where I attempt to plug my head into everything that I can.

AWE 2013 Highlights – Part 2: ChatPerf, Seebright, Hermaton

2 Aug

In Part 1 of my highlights from AWE 2013, I focused on the challenges that attend the quest for the holy grail of Augmented Reality: a wearable solution that provides a seamless experience. It’s a high-stakes game with a lot of heavy-hitting players, and the outcome will potentially have a significant impact on the high-tech sector, and possibly on society in general. In this post, though, I want to touch on some organizations that charmed me because they seemed to be doing things that nobody else was doing, yet were still pushing the envelope and having a lot of fun doing it.

Scent never looked so good

ChatPerf: You can watch movies and listen to music on your phone. With haptics, your phone can even generate some useful, if rudimentary, tactile feedback. But smell things on your phone? Admit it, you’ve been waiting for it, and it’s here! The sheer novelty of experiencing scent coming from a phone more than makes up for a certain lack of polish in a product that still seems to be in the first-gen/preorder phase of its lifecycle. However, what charmed me most about ChatPerf is that they have already managed to make this product attractive and fun. It comes in different colors, and the overall aesthetic does not necessarily clash viciously with the well-considered design of the iPhone. You literally plug it into an iPhone and then tap a button on the screen, which causes the device to emit a fairly potent aromatic puff. Yes, you can literally see it puff! There is a Willy Wonka/steampunk aspect to it which is delightfully dissonant and unexpected. The developers at the booth told me that each unit is good for 200 blasts (trust me, that’s more than enough), especially since each unit can only generate a single scent. However, they said that they are working on a single device that will generate 1,000 different scents. Yeah, kind of stopped me dead in my tracks, too. Now that I’ve crossed that off my list, I’m hoping that there will be at least one lickable smartphone at AWE 2014.

Has anyone seen my phone?

Seebright: As I’ve previously indicated, the Holy Grail won’t be found anytime soon. Personally, I think the folks at Innovega are on the right track, but that’s a whole other discussion. However, we clearly live in an age where we are essentially undaunted by the prospect of technical limitations, and when we want something, we want it now! At least, that’s what the folks at Seebright seem to be thinking. I hesitate to oversimplify what they’ve done, but I haven’t come across anybody else who is doing it, and it works, so here goes. The Spark is a piece of headgear into which you insert your phone; it uses optics to beam an image of the phone’s screen into your line of sight. Once you’ve played with AR long enough, you get to an apex of frustration where you want to take duct tape, an old watering can, and a thick rubber band, and attach your phone to your head. Seebright feels your pain, but they put a lot of thought into it, and the Spark looks and works quite well, despite seeming to be a somewhat low-tech solution. It may be an intermediate step that will be made obsolete by the HUD that currently only exists in everyone’s dreams. However, so far we’re hearing a lot of talk without a lot of results, whereas the Spark makes you feel like we’re getting somewhere and tries to scratch that itch. I think that technology like this can facilitate prototyping of new experiences, since it bridges a gap that is currently too wide, impractical, or expensive to traverse from where we’re standing.

And you thought your dreams were weird

Hermaton: A central feature of AR, and one of the reasons why it has such a magic quality, is that there is no substitute for experiencing it firsthand. Nowhere is this more evident than with Hermaton from Darf Design. By definition, all AR requires a real-world object to serve as a “marker” to anchor one or more layers of virtual content. While this marker can be anything, it’s typically something two-dimensional like a page in a magazine, or, with the rapid development of computer-vision algorithms, a 3D object. In most cases, you’re dealing with an “average-sized” object, although there are some impressive examples of augmenting an entire building. Hermaton, however, is basically a marker that you can walk inside, one that surrounds you. Words and pictures simply won’t do this justice. Even the marker, which was deployed as two full walls of a booth, is a compelling piece of abstract art. Hermaton succeeds as installation art, architectural statement, immersive game/story, and most importantly a satisfying experience. My experience walking through it was to temporarily lose myself in a novel form of exploration. There are interactive components in the environment, but they can be difficult to access, which requires you to slow down, absorb your surroundings, and enjoy the journey. It’s a powerful archetype for immersive storytelling, and I’m sure there will be followers and copycats. Either way, I can’t wait to see what’s next from this group.
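If you want a feel for the marker idea at desktop scale, the sketch below detects a printed fiducial in a webcam frame and pins a bit of “virtual content” to it. It assumes opencv-contrib-python 4.7+ for the aruco module; Hermaton’s room-sized marker and Darf Design’s pipeline are obviously far more sophisticated than this.

```python
# Detect an ArUco fiducial and anchor a label to it: the bare essence of
# marker-based AR. Assumes opencv-contrib-python >= 4.7 (class-based aruco API).
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        x, y = corners[0][0][0]  # top-left corner of the first detected marker
        cv2.putText(frame, "virtual layer anchored here", (int(x), int(y)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("marker demo", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```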

Aeropress: The Art & Science of Perfect Coffee

26 Jul

If you don’t believe that coffee makes you a more productive, creative, attractive, interesting, and overall better person, then you may not find the rest of this post very interesting. I wouldn’t go so far as to call myself a coffee geek. True, I occasionally roast my own beans, and will sometimes indulge in the zen of a pour-over, but most days I can make my peace with Mr. Coffee if it comes out strong enough. But that all changed when the Aeropress entered my life.

As with broadband, mobile phones, and DVRs, I’m finding it increasingly difficult to remember what the world was like before the Aeropress. I vaguely recall a notion that there were two worlds of coffee: the one at home, where I could make something well above functional but acceptably short of sublime, and the other in the outside world of expensive espresso machines operated by artful baristas in some aromatic sacrosanctum. For less than $30, the Aeropress collapsed those worlds into two minutes of meditative perfection.

I should host this in my kitchen

Coffee is a ritual. I know that I am not unique in the peace of mind that I experience in the particular and personal way in which I prepare, measure, grind, filter, and execute the first and most important cup of the day. But with almost every method, you must ultimately disconnect yourself and trust the machine, the process, or simply time itself. Once your water is heated and your beans are ground, you are less than one minute away from coffee, but the Aeropress requires you to become part of the process. Without rehashing the details of how the system works, which is more thoroughly described elsewhere, the Aeropress requires you to exercise equal parts patience, finesse, judgment, and care. In other words, it takes a bit of practice, but you feel a true sense of connection with your coffee. For what it’s worth, in my opinion, the properly-executed result is as good or better than what I can get in a cafe in the Bay Area, which is High Praise across the board. If you’re looking for a good bean, I’ve been pretty fixated on Highwire Espresso.

You’re probably wondering, what on earth does this have to do with Immersive Storytelling Technology? Well, I’m not going to lie to you, strictly speaking, nothing. However, it’s worth noting that the Aeropress is known as an immersion brewing method. While simple, it has enough science and technique behind it to qualify as technology. And as far as I’m concerned, after a cup of this stuff, I’m ready to jump into just about anything creative, including my favorite pursuit: storytelling.

AWE 2013 Highlights – Part 1: Epson Moverio

14 Jun

I made the journey from Oakland to Santa Clara to attend the Augmented World Expo (AWE 2013) last week, and enjoyed a day dedicated to the magic of Augmented Reality (AR). AR is a rapidly developing technology that has the potential to be a real game-changer in the development of immersive experiences and entertainment. There was a lot to absorb and discover and I will be sharing some of my personal highlights over the next few posts.

First some background. AR is hardly mainstream; it has only been deployed in a commercial context in the past few years. AWE is only in its 4th year, and the event has an electric air of excitement, novelty, and discovery to it. I attended it last year, and I was surprised and happy to see how much it had grown in terms of attendees and exhibitors since then. At last year’s event, I had been hoping to try a Heads-Up Display (HUD), which is basically a fancy term for futuristic eyewear that allows you to experience the world like Tony Stark or The Terminator.

As seen at AWE 2029.

The development of a high-quality, consumer-grade HUD is the type of holy-grail achievement that would have a dramatic, transformative impact on mainstream engagement with AR. In my opinion, as it’s currently conceived, Google Glass will not provide the type of game-changing, HUD-enabled AR that I am referring to. However, based on the attention that it’s getting, despite the fact that it’s not even for sale yet, it seems clear that this concept has captured the public’s collective imagination, to its delight or disgust, depending on your perspective.

To my recollection (and disappointment), there was not a single eyewear exhibitor at the 2012 event. This year though, an entire section of the expo was dedicated to eyewear and the topic of wearable computing was very hot. What a difference a year makes! I made a point of visiting every single eyewear booth, and attempting to try on every single product I could get my hands on. For pure entertainment potential and application, I was most impressed by the Epson Moverio.

Epson Moverio BT-100: Not quite cool.

Don’t get off your horses just yet — this is not the grail, but it does seem like a step in the right direction. When you look at this image, remember: this represents the best effort from the marketing department of a multi-billion dollar, global corporation to make their product look cool. If they can’t pull it off, it’s probably not there yet. Does he look relaxed in that picture? Actually, he’s just rendered slightly immobile by the sheer weight of the glasses. OK, it’s not quite that bad, but it basically takes quite a bit of hardware to pull off what is otherwise a very impressive experience. The glasses project video content so that it looks like you’re watching a screen from a few feet away. The gear also natively supports 3D content without any additional enhancements or hardware. I could almost, almost look forward to a long airplane flight if I had this handy, which incidentally is probably one of Epson’s most compelling use-cases.

However, the glasses are essentially see-through, so that when you’re not watching any content, you can still see the world around you. Enter AR. Epson has been working closely with interested partners to support the integration of aftermarket components with some impressive results. Rig up a standard-issue webcam and load up a custom application on the Android-powered hardware that ships with the glasses, and you have a very compelling wearable AR solution. In fact, Scope Technologies was exhibiting alongside Epson and was demo’ing an impressive AR-driven training solution that was built on their “hacked” version of the Epson platform. Although I was only watching a monitor of what the glasses were projecting, and was not able to experience it myself, it was a promising glimpse of something real which I’d previously only thought of in conceptual terms.

In general though, this integration was a mess. Cables everywhere. 3rd party accessories mashed into each other and poking out at odd angles. A bit of showmanship and legerdemain to create the illusion of a seamless experience. One developer had to integrate an Android tablet into his setup to provide some missing functionality in the Epson-supplied box. The end result was . . . cosmetically challenged — about the furthest thing you could get from the elegance of an iPhone or the milligram design precision of Google Glass. However, there was a DIY sensibility to it that was driven by the fire of wanting to touch the future NOW, and not when some massive corporation hands it down to us from on high. I loved it!

In Part 2 of my posts about AWE 2013, I talk about some of the smaller exhibitors who are breaking exciting, new ground.
