Perspective & Experiences in Storytelling
Choosing the stage upon which to set our web3 plays loose on the world.
When I was 10 years old, I went to a friend’s birthday party at a local shopping mall in Saint Louis. It was the golden era of shopping malls in my lifetime, with Sbarro pizza in the food court, Sam Goody CD racks, The Sharper Image gadgetry, and the best part of all: arcades.
Arcades just aren’t what they used to be, at least not in malls. The mall I went to had an arcade called “The Millennium Arcade,” located in the basement next to the food court and auxiliary parking garage. Everything from Dance Dance Revolution pads to an air-gun shooting gallery could be found in this childhood heaven, but the coolest thing to experience in there was a monolithic machine I saw streams of people lining up to climb into. It was a virtual reality roller coaster, the size of a large Dodge work van, sitting smack dab in the middle of the arcade, and anyone who was anyone begged their parents to ride it.

I remember getting into the giant glossy capsule for the first time and being blown away by the experience. I couldn’t even fathom that at 10 I was flying through volcanoes, dodging and rolling to evade pterodactyls and raptors, terrified and awestruck the entire time. When I got out of that capsule and walked down the stairs, I said to myself that this was the best game in the arcade…and then I realized I hadn’t even played anything.
To immerse oneself visually in a completely different world was the stuff of dreams for me until that awesome birthday party, and the experience has only become more prevalent in our culture as technology has advanced. Since that day we’ve collectively invested in this experience. We have movies, console games, and even phone applications built to immerse users in what is called an extended reality experience, and this kind of reality has become a primary marketing strategy for brands and companies looking to engage viscerally with their target audiences.
This article is going to discuss this type of digital experience, breaking down in a very general way how extended reality experiences are constructed to tell stories and immerse users, especially within the evolving metaverse and Web3 world. I’m not a philosopher, physicist, or even a professional extended reality programmer, but understanding this in a general sense has given me (and hopefully will give you) a sense of the immersive power extended reality technologies have to develop deeply visceral stories.
Setting the Virtual Stage
To be clear about what extended reality is, we should first be clear about which types of reality fall under its domain. To do that, we need to identify which parts of an extended reality experience are needed to create each of these types of reality.
On a foundational level, to make an extended reality experience, we need the following (sketched in code just after the list):
A participant with access to one or more perspectives; for the sake of clarity, we can call assets that provide a specific perspective artificial eyes.
A source program incorporating a projection device.
A space (depending on the type of experience, this may vary).
An orientation (in most cases, gravity).
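To make those ingredients concrete, here is a minimal sketch of how they might be modeled in code. Every name below is hypothetical; it is just the list above expressed as a data structure, not any real XR engine’s API.

```typescript
// A hypothetical model of the four ingredients above; no real XR engine
// is being described, this is simply the list expressed as types.

type Vec3 = { x: number; y: number; z: number };

// An "artificial eye": any asset that provides one specific perspective.
interface ArtificialEye {
  position: Vec3;
  lookDirection: Vec3;
}

interface ExtendedRealityExperience {
  // 1. A participant with access to one or more perspectives.
  participantEyes: ArtificialEye[];
  // 2. A source program incorporating a projection device.
  projectionDevice: "headset" | "glasses" | "phone" | "tablet";
  // 3. A space (varies by the type of experience).
  spaceBoundsMeters: Vec3 | null; // null for a fully virtual, unbounded space
  // 4. An orientation, in most cases gravity.
  gravity: Vec3;
}

// A bare-bones VR setup: one eye, a headset, its own unbounded space and gravity.
const example: ExtendedRealityExperience = {
  participantEyes: [
    { position: { x: 0, y: 1.7, z: 0 }, lookDirection: { x: 0, y: 0, z: -1 } },
  ],
  projectionDevice: "headset",
  spaceBoundsMeters: null,
  gravity: { x: 0, y: -9.81, z: 0 },
};
```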
You probably won’t see this list represented anywhere except in far greater technical detail, but each of these elements is necessary to provide the illusion of extended reality. For the sake of precision, let’s define what reality actually means before we get hot and heavy into the awesome power of this stuff.
Reality as we experience it is the present state of information-collecting performed by your senses, from input to thought to reaction and everything in between. So extended reality is the extension of our present state of sensory data collection. It is an experience, and extended reality experiences are most often delivered through visual manipulation via headsets or glasses. These experiences typically fall under one of three categories: virtual reality, augmented reality, and the somewhat controversially implemented but inevitable mixed reality experience.
Working our way from the first to the last, we can see how immersion is evolving in each space, and hopefully I can give you a clear picture of what to expect from implementations of these spaces in Web3. After all, these realities and stories are already being played out across the metaverse, and in the great words of Walt Whitman, “the powerful play goes on, and you may contribute a verse.”
Let’s dive in and talk about the different ways to put on our play…
Virtual Reality: Electric Dreams
Virtual reality as we know it is arguably the most customizable of these realities with regard to the complexity involved in its production. Virtual reality is an independent, artificial environment experienced through sensory stimuli (such as sights and sounds) provided by a computer from a singular perspective, where that perspective partially or entirely determines what happens in the environment. There are two major parts to that definition:
VR is ultimately independent of, and artificial to, the physical space a participant is in.
VR also requires a secondary brain (the computer) that always provides a singular perspective.
Most often, the sensory interpreter is a headset projecting images directly to your eyes via screens. Top-of-the-line individual headsets add eye-tracking sensors to feed the program extra information and perfect the projected perspective, but ultimately the perspective being experienced is singular. In essence, a single camera POV is used when experiencing a virtual reality.
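To illustrate that singular POV, here is a minimal sketch of a render loop in which both eye images derive from one tracked head pose, offset slightly per eye. All the names are assumptions, not any headset SDK’s actual API, and the offset math is simplified (it assumes the head is upright and facing forward).

```typescript
// A sketch of VR's singular perspective: one head pose drives everything
// the participant sees. Hypothetical names throughout.

type Vec3 = { x: number; y: number; z: number };

interface HeadPose {
  position: Vec3; // where the single "camera" sits in the virtual space
  forward: Vec3;  // where it points (eye tracking can refine this)
}

const IPD = 0.063; // interpupillary distance in meters, roughly average

// Each frame, the left and right eye images are just small offsets from
// the SAME pose; there is no second, independent perspective involved.
function renderFrame(
  pose: HeadPose,
  renderEye: (eyePosition: Vec3, forward: Vec3) => void,
): void {
  const left: Vec3 = { x: pose.position.x - IPD / 2, y: pose.position.y, z: pose.position.z };
  const right: Vec3 = { x: pose.position.x + IPD / 2, y: pose.position.y, z: pose.position.z };
  renderEye(left, pose.forward);
  renderEye(right, pose.forward);
}
```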
That first takeaway is crucial to understanding this technology. VR offers an environment that is independent of the physical space your body occupies. You can dogfight with TIE fighters in space or be transported across the world to a jungle, but those experiences produce their own space and gravity within the confines of the room you’re in. And if your brain decides that the space you’re entering virtually supersedes the physical space your body occupies, you can experience some nasty side effects. You might get motion sickness from unconsolidated gravity, fall right off your couch or down a flight of stairs, or suffer some other mishap, comical or dangerous, in front of those of us watching anyone wearing a VR headset.
It’s no secret that we live in an attention-driven society. “Hype culture” is what we’d rather call it, but both operate on the same exploit: the majority of society wants experiences, not products. Experiences require your time and attention, which is why many of the best video games out there take hours to complete, and the really, REALLY good ones don’t just want you to beat the game; they want you to have fun playing it regardless. Sounds like the metaverse is a natural stage for these prospective plays to be performed on.
There are companies that have picked up that exact idea and run with it, namely Meta (which changed its name from Facebook for that very reason). Meta has thrown a fortune into developing its acquired subsidiary, Oculus, and creating a VR headset that can be used in almost any metaverse world that currently supports VR. Virtual immersion in a metaverse-centered application lets users interact in shared space no matter where their source profile or source representation comes from, and right now the Oculus Quest 2 is the accessible consumer standard to beat.
That doesn’t mean it’s necessarily the way to enter the extended reality space, though. The type of immersive experience you wish to market needs to line up with the needs of your consumer base. Meta is ultimately a social platform, and its purpose of bringing people together in shared space aligns with both its industry and its development plan for Oculus and the Meta VR platform. That’s a very different experience from, say, learning how to fix a virtual carburetor or playing a virtual violin replicated from the original. VR is great for practicing something safely, but both of those simulations require a user to rely on the artificial gravity of the virtual space for orientation and input. Augmented reality programming may be more applicable to circumstances that require a more personal physical interaction with reality.
That, in essence, is the natural limitation of VR: if your participant doesn’t collect the singular perspective correctly, the immersion is lost. Many VR experiences are projected quite restrictively because of this, and in most cases they corral participants within the experience, guiding them along (sometimes literally) a track to take or a path to follow.
To eliminate these restrictions, we need something else present on stage during our play. An important element is often missing for participants who sometimes need to walk before they can dance in a virtual story: relativity.
Augmented Reality: Space on Space
VR is a remarkable technology, but as stated in the previous section, its independence from the physical space the participant occupies can be a problem. A person could see a vast ocean from a VR beach but have a wall six inches in front of them physically, and that lack of consolidated spatial information has caused everything from broken equipment to injured users during a VR game, a movie, or even just trying on a VR headset for the first time. What does this type of reality lack? A new perspective.
I consider augmented reality a more evolved form of virtual reality, not because the hardware or development process is necessarily more complicated or expensive, but because more information is being translated in real time at any given moment. AR requires the projection device not only to recognize your perspective (whether through AR glasses or your phone or tablet’s screen), but also to recognize a separate perspective through what I previously called an artificial eye. This eye is often a camera or sensor on a device that collects spatial information about the environment the experience is taking place in, which is then fed to the program facilitating the experience so it can blend the physical space with the virtually rendered one (often through an AI interpretation program). This new information offers a new layer of immersion. A table can be recognized and identified by a separate eye, then virtually painted so that no matter what perspective you take, each pixel of paint stays perfectly in place, granting you the illusion that the table has been painted in real space.
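Here is a minimal sketch of that painted-table idea, assuming a hypothetical per-frame camera-pose feed. Real frameworks like ARKit, ARCore, or WebXR expose richer anchor APIs; this version handles translation only and ignores camera rotation for brevity.

```typescript
// A simplified sketch of AR world-anchoring; hypothetical names throughout.
// Translation only: a real system would also apply the camera's rotation.

type Vec3 = { x: number; y: number; z: number };

// The "artificial eye" reports the camera's position in world space each frame.
interface CameraPose {
  position: Vec3;
}

// A painted point on the table, stored ONCE in world coordinates when the
// eye's sensors recognized the table surface.
const paintAnchor: Vec3 = { x: 0.4, y: 0.72, z: -1.1 }; // meters

// Each frame, re-express the fixed world point relative to the moving camera,
// so the paint appears glued to the table from any perspective you take.
function toCameraSpace(worldPoint: Vec3, cam: CameraPose): Vec3 {
  return {
    x: worldPoint.x - cam.position.x,
    y: worldPoint.y - cam.position.y,
    z: worldPoint.z - cam.position.z,
  };
}
```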
AR is great for device-driven experiences, meaning you’re perfectly comfortable using the hardware provided by the source projection device to interact with the virtual environment. Pokémon GO is quite possibly the perfect example of this (although it’s worth noting that Snapchat has also capitalized on a similar UI). Users playing the Niantic and Nintendo-developed game simply point their phone in different directions or walk around real space to randomly generate virtual assets they can then “catch” with a flick of their fingers on their phone. The Pokémon Company leaned heavily on the nostalgic power of its IP in the development of the game, but the ability AR offers participants to immerse themselves with their favorite characters in the real world was obvious to them, and they capitalized on the opportunity perfectly. Nintendo’s stock rose 25% in the week after the game’s release, adding a total of $7.5 billion to the company’s market value.
There’s also an added layer to reality that is often a consequence of the additional perspective: gravitational relativity. Just as it takes two points to make a line, it takes two perspectives to create relativity. In virtual reality, objects and bodies in the artificially generated space are bound by their own artificially relative gravity, and when you’re really immersed, your brain often cannot consolidate its own conception of gravity with the artificial one. Since AR provides perspectives that look at physical space simultaneously with virtual rendering, it incorporates physical objects into the illusion, which helps your brain realign itself with real physical gravity. This often allows users not only to experience the reality longer, but to do so with greater ease and familiarity.
Mixed Reality: Holographic Matter
There’s virtual reality, which in essence is a projection of an independently rendered space: a play with a script, sets, and a story, often produced with an intentional path to take. Then there’s augmented reality, which incorporates virtual elements projected upon physical space. The thing is, neither of these experiences interacts with arguably the most important component of the whole thing: the participant. Sure, Oculus and HTC sell controllers for their respective headsets, but those mechanically driven input devices aren’t your actual hands. They feel clunky or muddy, even with a stellar network connection. We as participants don’t want mere intersection of spaces; we want immersion between realities.
This is where mixed reality enters the arena. The original MCU Iron Man movie contains probably one of the more quintessential representations of this in cinema: the scene where Tony Stark (Robert Downey Jr.) works with his home AI assistant, Jarvis, via hologram to develop his "Mark II" suit.
It may not be the first representation or the best, but it illustrates what extended reality could offer when the tech has its day. VR gives you artificial space, then AR takes that to a new level by projecting artificial space upon physical space. Mixed reality, or MR, offers the last layer of the extended reality illusion: passive input.
Passive input is a reaction in an artificial environment triggered by observed behavior rather than expected direct input. VR and AR rely on controllers and screens: devices coded into the extended reality experience, and those input schemes are the exclusive cause of the program behaving the way it does. Passive input, by contrast, is a reaction to what the program observes, not what the program independently intends to accomplish. So when Tony sticks his hand through the holographically rendered sleeve of his suit’s stabilizer, the hologram reacts to his motion and behavior by tethering the projection to his arm, but to be completely immersive it has to anticipate every potential interaction of his arm or body touching the hologram.
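Here is a minimal sketch of passive input, assuming a hypothetical hand-tracking feed. Nothing here is a real MR API; it just illustrates reacting to observed behavior instead of waiting for a button press.

```typescript
// Passive input sketch: the program watches the participant's hand and
// reacts, rather than waiting on a controller event. Hypothetical names.

type Vec3 = { x: number; y: number; z: number };

interface Hologram {
  center: Vec3;
  radius: number;            // simple spherical interaction volume
  tetheredTo: string | null; // id of the hand it is following, if any
}

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Called every frame with the OBSERVED hand position; the participant never
// pressed anything, the program simply watched their behavior.
function onFrame(hologram: Hologram, handId: string, handPos: Vec3): void {
  if (hologram.tetheredTo === null && distance(handPos, hologram.center) < hologram.radius) {
    hologram.tetheredTo = handId; // react: tether the projection to the hand
  }
  if (hologram.tetheredTo === handId) {
    hologram.center = handPos;    // projection follows the arm, like Tony's sleeve
  }
}
```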
This is what makes mixed reality the holy grail of extended reality experiences: real physical immersion between realities. Combining VR’s virtual interactivity with AR’s augmented perspective, mixed reality (MR) allows us not only to create virtual space on top of reality, but also to interact with it ourselves, with our own hands, so to speak.
Getting into MR development is tricky because programming a truly immersive experience requires a particular kind of operational utility: empathetic programming. What I mean is that this type of development requires the programmer to understand, from start to finish, what the experience could be, as opposed to what it should be. For example, say a participant is playing a virtual violin; each recognizable placement of a finger on the violin’s neck must match each tone or microtone the physical violin would produce, or the operational utility of the experience is lost. If a mixed reality doesn’t incorporate a level of believable realism in its development, the operational utility of the program is null and void.
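As a sketch of how unforgiving that requirement is, here is the finger-to-pitch math a virtual violin would have to get right. The tracker feeding it finger positions is hypothetical, but the string physics and the equal-temperament snapping are real.

```typescript
// The "empathetic programming" demand from the violin example: every
// observed finger placement must map to the tone a real violin would make.
// Assumes a hypothetical tracker reporting the finger's distance from the
// nut along the string, in meters.

const SCALE_LENGTH = 0.325;   // typical violin vibrating string length (m)
const OPEN_STRING_HZ = 440.0; // the A string

// Shortening the string raises the pitch: f = f_open * L0 / L.
function frequencyAt(fingerFromNut: number): number {
  const vibrating = SCALE_LENGTH - fingerFromNut;
  return OPEN_STRING_HZ * (SCALE_LENGTH / vibrating);
}

// The experience is only useful if the rendered tone matches what the
// physical instrument would play, so snap to the nearest semitone
// (12-tone equal temperament) before synthesizing audio.
function nearestSemitone(freq: number): number {
  const semis = Math.round(12 * Math.log2(freq / OPEN_STRING_HZ));
  return OPEN_STRING_HZ * Math.pow(2, semis / 12);
}

// e.g. a finger ~3.4 cm from the nut shortens the A string enough to
// sound roughly a whole step higher (B, ~493.9 Hz).
console.log(nearestSemitone(frequencyAt(0.034)));
```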
This makes MR a big bet, one that only a single company has truly had the guts to sink its teeth into in this space: Microsoft. Microsoft’s development of its HoloLens device line, and of the Mesh communication platform for the device, shows an active interest in the space, and it’s clear why. Microsoft Windows runs the enterprise world; almost everything in the infrastructure of a corporation relies in some capacity on Microsoft-implemented assets, and Microsoft as a company has always had the strongest relationship with its industry clients. Today, Microsoft offers HoloLens devices for plant workers and industrial technicians, and it is slowly but surely working its way from industrial applications toward a more consumer-facing approach after having thoroughly tested the HoloLens in production environments. This slow and steady approach to development in mixed reality is not just something to keep an eye on; it’s a brilliant and effective way to develop what is still considered a futurist technology.
Owning an Extended Experience
I recently posted an article about storytelling, nostalgia, and their power in a web3 product. Storytelling is an experience: a constructed reality we place upon the ears or eyes of our audience to immerse them in plots, characters, and settings that transport them out of the reality they’re accustomed to. Each of these extended reality experiences offers a different way to develop and tell these stories, and it is crucial to decide which experience or reality suits the story we want to tell.
We’ve explored these different experiences through the lens of narrative because we have reached the point in our evolution where we can create and preserve our own stories. Stories differentiate us from one another and individuate our character, yet they also unite us under the human experience. As cheesy as that all sounds, it’s stories that develop culture, motivation, and passion for the things we truly care about in this reality we physically experience, and in the end, it’s our stories that we’ll live on through when we’re long gone.
This is the world of web3: a world where anyone can develop a narration, a plot, and an asset for the greater metaverse, and how a company or an individual tells the story of what they make is crucial to their acceptance and growth in the web3 community. Why is extended reality important in web3? Because it lets us actually build the stage for the play we wish to put on. Extended reality is a platform for storytelling, visceral storytelling. Instead of just participating in the reality we’re given, we now have the power to participate in completely different layers of the same reality, or in a completely new one. That fact alone, that we can now participate in these stories like never before thanks to these experiences, is what makes the future of web3 and the metaverse all the more exciting.
Writing made possible by Immutable Labs. Check them out!