What's virtual reality? It's a computer-generated world in which we move about and interact with objects, other real people, and virtual people. It's a place that isn't really there but that offers the powerful illusion of existence.
Virtual reality today comes in two flavors. One surrounds you with three-dimensional objects and scenes so that you feel you are walking through them, visually enveloped by the virtual world. This effect requires equipment: virtual-reality goggles, for instance, or specially equipped rooms.
The second type of virtual reality appears before you on a two-dimensional screen, such as your computer monitor. The computer graphics and programming are so well done that a full three-dimensional world comes alive on your two-dimensional screen. Many computer games are forms of virtual reality. They aren't known as virtual reality games, though, but simply as three-dimensional games with some built-in artificial intelligence. Yet when we play them, we're there. Much of the basic programming for this kind of on-screen VR is the same as for the more elaborate kind. The computer doesn't care whether the virtual space it constructs is an image on a screen or a three-dimensional holographic projection.
On-screen virtual reality also exists as a result of a special programming language called the Virtual Reality Modeling Language, or VRML 2.0. Uses for interactive VRML worlds include business applications: walking people through the internals of equipment, showing them how to fix a machine from different angles, letting them stroll through an on-line store, exploring battle simulations, and cruising through the layout of a new public sports arena.
If we wanted to build a virtual world, we might begin as God did, with a light source. We supply numbers defining its direction, its intensity, and its color.
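A minimal sketch of such a light source in VRML 2.0 might look like this; the particular values are illustrative, not taken from any specific world:

    DirectionalLight {
        direction 0 -1 0      # rays point straight down (an x y z vector)
        intensity 1           # full brightness
        color 1 1 1           # RGB: full red, full green, full blue = white
    }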
The DirectionalLight is like a stage floodlight for a virtual reality scene. It illuminates the scene with light rays parallel to the direction, a vector supplied by x, y, and z coordinates.
The intensity is the brightness of the light, and the color is the Red-Green-Blue (RGB) value that defines the light's color. In the RGB example of 1 1 1, each 1 is the floating-point equivalent of the hexadecimal code ff, meaning full red, full green, and full blue. With 1 1 1, the combined color is white. Therefore, our light is bright white in this example.
As a caveat, you might notice that the color setting makes intensity somewhat redundant. Light emission is approximately equal to intensity times color, so with the color turned to maximum white, what's the point of reducing intensity? You can just as easily reduce the color from full white to something less intense.
We might want to specify background textures or images for the ceiling (such as a sky), ground (some grass, perhaps), and a wraparound world (perhaps a forest that encircles us as we move through the scenes). Or, for fast loading and easier lighting, we can just specify background colors as gradients.
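Sketched as a VRML 2.0 Background node, with made-up angles and colors, such a gradient might look like this:

    Background {
        groundAngle [ 0.9, 1.2, 1.57 ]          # cutoff angles in radians
        groundColor [ 0 0.8 0, 0 0.7 0,         # green underfoot...
                      0 0.6 0, 0.6 0.6 0.4 ]    # ...fading toward the horizon
        skyAngle    [ 0.9, 1.57 ]
        skyColor    [ 0 0 0.8, 0.3 0.3 1,       # deep blue overhead...
                      0.8 0.8 1 ]               # ...to pale blue at the horizon
    }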
The groundAngle supplies a cutoff angle for each groundColor. In the example, we have four groundColor values separated by commas. Each groundColor is an RGB value, and the first (0 0.8 0) is what we see when looking straight down. Because the straight-down color needs no cutoff angle of its own, there's always one more groundColor than groundAngle.
The colors for the sky are created in the same way: one more skyColor than skyAngle, with the first skyColor being the RGB value we see when looking straight up.
These are very simple examples. Rather than supply colors for the ground and sky, we can instead designate background images for the entire virtual reality scene: front, back, right, left, top, and bottom. Using this second method, we essentially define a cube of images, which together, define a panorama surrounding our virtual reality world. We can place clouds in the sky, or on the floor. We can place mountains in the distance, or on the ceiling.
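In VRML 2.0, that cube of images goes in the same Background node, one URL per face; the image file names here are hypothetical:

    Background {
        frontUrl  "forest_front.jpg"
        backUrl   "forest_back.jpg"
        leftUrl   "forest_left.jpg"
        rightUrl  "forest_right.jpg"
        topUrl    "clouds.jpg"       # the sky above us
        bottomUrl "grass.jpg"        # the ground beneath us
    }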
At this point in coding a VR world, we have to move beyond the easy steps of defining the sky and ground. We have to create the objects that will fill the world, and we must make the objects interact and move.
Understanding virtual reality code requires a basic comprehension of object-oriented programming (OOP). That's way beyond the scope of this book, but to get a feeling for the holodecks, which are virtual reality worlds, we have to start somewhere.
Think of OOP as a hierarchy of objects. Each object describes a “thing,” what it looks like, what it does, the data it uses. We might
define various VR objects and some of the components that enable them to interact. For example, here's a snippet of code:
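The listing below is a plausible reconstruction, not the original; the prototype's field names (snipColor, snipPosition, snipSize) are invented for illustration:

    PROTO Snippet [
        exposedField SFColor snipColor    0.7 0.3 0.2   # brick red by default
        exposedField SFVec3f snipPosition 0 0 0
        exposedField SFVec3f snipSize     1 1 1
    ]
    {
        Transform {
            translation IS snipPosition
            scale       IS snipSize
            children [
                Shape {
                    appearance Appearance {
                        material Material { diffuseColor IS snipColor }
                    }
                    geometry Box { size 2 1 1 }    # brick-like proportions
                }
            ]
        }
    }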
PROTO Snippet defines an object called Snippet that we can use repeatedly in the program without consuming extra resources. Snippet itself is a simple three-dimensional brick.
Each exposedField can be accessed from other parts of the program, for example, to change the color of each Snippet we create. An exposedField implicitly knows how to handle two event types: an incoming set_ event that changes the field value, and an outgoing _changed event that sends the exposedField's changed value to another node. In the example, code can change the color, position, and size of the Snippet object.
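As a sketch of those implicit events in action, reusing the invented field names from above, a standard ColorInterpolator and TimeSensor could drive a brick's set_snipColor event:

    DEF FADER ColorInterpolator {
        key      [ 0, 1 ]
        keyValue [ 1 0 0, 0 0 1 ]     # fade from red to blue
    }
    DEF CLOCK TimeSensor { cycleInterval 5 loop TRUE }
    DEF BRICK1 Snippet { }

    ROUTE CLOCK.fraction_changed TO FADER.set_fraction
    ROUTE FADER.value_changed    TO BRICK1.set_snipColor

The exposedField's matching snipColor_changed event could, in turn, be routed onward to other nodes.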
Simple geometric constructions enable us to code the appearance of each Snippet. Thus, much of the VR world can be built up from variations on specific aspects of fundamentally identical parts.
Suppose, to make the metaphor tangible, this particular Snippet is an actual brick, or a representation of one, and you see this Snippet resting on the ground. In the VR world, you might see 45 Snippet blocks on the ground. Or 100 of them. Or only one.
Each looks like a real brick. Each has a different color. Each feels real to your touch in the VR world. It's all programming code, all created from one tiny Snippet prototype defined in the VR language.
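To make that concrete with the invented prototype from above, each brick below is the same Snippet with only a few fields overridden:

    Snippet { }                             # the default brick-red brick
    Snippet { snipColor 0.4 0.4 0.4         # a gray brick off to one side
              snipPosition 3 0 0 }
    Snippet { snipColor 0.8 0.7 0.5         # a stretched, sand-colored brick
              snipPosition -3 0 0
              snipSize 2 1 1 }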
You're immersed in this Snippet-filled VR world, just as Trek characters are immersed in holodeck adventures. In reality, you're sitting on a chair in your living room. But your brain's immersed in a fantasy world, the Snippet world on your computer screen (or delivered directly into your brain through your eyes via goggles).
Perhaps you pick up the brick and hurl it at a huge spider web obstructing your entrance to the cave of Dr. Cruelman. In reality, you're still sitting on a chair in your living room. Only in virtual reality are you throwing the brick at a spider web.
The PlaneSensor notices that you moved the Snippet brick. The ROUTE statements and vrmlscript enable the code to move the Snippet brick on the computer screen. It seems to you, in real time, that you lifted and hurled the brick. No pause. No frame jitter. You continue to play the adventure game.
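A minimal drag sketch, again assuming the invented Snippet prototype; a real game would add scripted physics for the throw itself, which is omitted here:

    DEF BRICK Transform {
        children [
            DEF SENSOR PlaneSensor { }    # tracks pointer drags over sibling geometry
            Snippet { }                   # the brick itself
        ]
    }
    # each drag event repositions the brick as you move the pointer
    ROUTE SENSOR.translation_changed TO BRICK.set_translation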
Perhaps a spider flies into the scene, angry that you destroyed its web. In the real world, spiders can't fly. In virtual reality worlds, objects and creatures can do anything we program them to do.
The VR spider might be an object composed of many parts: legs, hair, eyeballs, mouth, ears (we can do anything we want in code), a tail, and pincers. Perhaps our VR spider has dragon-fire breath as well. Each part of the spider can be programmed to move in any way our imagination dictates. The dragon-fire breath can spray from its tail or eyeballs. Perhaps when we throw a VR brick at the web, the spider sprays fire from whatever body part is closest to us.
In general, we can program living creatures in VR worlds to do anything we want. The only limitation is our imagination. We can code one prototype spider that defines the basic parts of spiders in our VR world. From the prototype, we can then create many other spiders, each of which inherits the basic spider's properties and then adds to the mix by moving in different ways, spraying bombs as well as fire, smiling sweetly and throwing flowers rather than fire, and so forth.
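VRML 2.0 prototypes don't inherit in the strict OOP sense, but we can approximate the idea: one Spider PROTO exposes fields, and each instance overrides them. Everything below, including the Spider name and its fields, is invented for illustration:

    PROTO Spider [
        exposedField SFColor bodyColor 0 0 0        # basic black spider
        exposedField SFVec3f position  0 0 0
        field        SFString attack   "fire"       # read by a behavior Script, not shown
    ]
    {
        Transform {
            translation IS position
            children [
                Shape {
                    appearance Appearance {
                        material Material { diffuseColor IS bodyColor }
                    }
                    geometry Sphere { radius 0.5 }  # stand-in body; legs omitted
                }
            ]
        }
    }

    Spider { }                                               # the basic fire-breather
    Spider { bodyColor 0.5 0 0.5  position  4 0 0  attack "bombs" }
    Spider { bodyColor 1 1 1      position -4 0 0  attack "flowers" }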
We route actions from one object to another. An action, such as throwing a brick at the spider web, triggers another action, such as the spider flying on-scene and hurling fireballs at us.
For more complex events, the code might be written in vrmlscript, JavaScript, or Java. For example, you throw the brick at the spider web, and I want my coded spider to perform three actions and another spider to perform four, and I want those two spiders' actions to trigger an attack from six spider colonies, which live in giant webs on islands in my VR world. While I can route one event to multiple additional events, when things get this complicated, simple routing statements may misfire during program execution. With a programming language that offers more sophistication, when you throw that brick at the web, I can trigger complex, even artificially intelligent actions from creatures, settings, and objects anywhere in the world I've created.
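A hedged sketch of that fan-out, using an inline vrmlscript Script node; all the node names and events are invented, and the TimeSensors stand in for animation clocks that would drive each spider's attack:

    DEF WEB_TOUCH TouchSensor { }        # stands in for detecting the brick's impact

    DEF SPIDER_ONE_TIMER TimeSensor { cycleInterval 2 }
    DEF SPIDER_TWO_TIMER TimeSensor { cycleInterval 2 }
    DEF COLONY_TIMER     TimeSensor { cycleInterval 4 }

    DEF DIRECTOR Script {
        eventIn  SFTime brickHitsWeb
        eventOut SFTime startSpiderOne
        eventOut SFTime startSpiderTwo
        eventOut SFTime startColonyAttack
        url "vrmlscript:
            function brickHitsWeb(value, timestamp) {
                // one incoming event fans out to several coordinated actions
                startSpiderOne    = value;
                startSpiderTwo    = value;
                startColonyAttack = value;
            }"
    }

    ROUTE WEB_TOUCH.touchTime        TO DIRECTOR.brickHitsWeb
    ROUTE DIRECTOR.startSpiderOne    TO SPIDER_ONE_TIMER.set_startTime
    ROUTE DIRECTOR.startSpiderTwo    TO SPIDER_TWO_TIMER.set_startTime
    ROUTE DIRECTOR.startColonyAttack TO COLONY_TIMER.set_startTime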