A favorite scene for any AV geek has to be the moment on Tatooine when Luke Skywalker cleans up R2-D2. All of a sudden, a light sputters to life and projects a holographic image of Princess Leia pleading with Obi-Wan Kenobi to save her. That scene set the stage in many people’s heads for the future of audiovisual technology, including 3D and holographic images.
So, 37 years later (yes, it has been THAT long), where are we in realizing this type of technology? Well, it’s not quite there yet, is it? We have seen many attempts at producing realistic multidimensional images, and it seems the stakes have been raised in the last decade to revive old methods or find new ways of accomplishing this goal. A recent YouTube video on “physical pixels” is actually what inspired me to write this blog, but more on that later.
So how many ways can we currently add dimension to our images? Let’s take a look.
Stereoscopic
This is what most of the world refers to as “3D,” although it’s not really. Stereoscopic content relies on separate left- and right-eye images that, when combined, give the illusion of depth. There have been many ways to do this over the years since the 1950s.
Originally we had anaglyph 3D, the old red-and-blue glasses that would filter out the image meant for the other eye. Dolby revived this approach with more sophisticated dichroic (interference) filters in its latest version of stereoscopic.
Then of course we have good old linear polarization, which polarizes the left- and right-eye images to achieve the same effect. Circular polarization, what most people know as RealD, improved on this by widening the viewing angle, although contrast is sacrificed somewhat.
There is also the much debated active-shutter format that, for the most part, blacks out each eye on alternating frames so one eye never sees the frame intended for the other. XpanD seems to be the best known in this field, although there are others as well.
We are also seeing auto-stereoscopic displays start to get better; these can be viewed with the naked eye, no glasses required.
However this colorization or polarization of light is achieved (through a film patterned retarder, spinning filter wheels, frame stacking, interlacing, side-by-side, frame sequencing, etc.), the fact remains . . . this is not “3D.” It is two 2D images combined to give a depth illusion. Moving left or right will NOT give you any extra perspective to see around or behind objects, or to see a different side of the object represented in those 2D frames.
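That left/right merging is easy to see in code. Here is a minimal sketch, assuming NumPy and 8-bit RGB frames (the function name is my own), of how a red/cyan anaglyph is assembled from two flat 2D images:

```python
import numpy as np

def anaglyph(left, right):
    """Combine left/right eye images (H x W x 3, RGB) into a red/cyan anaglyph.

    The red channel comes from the left-eye image; green and blue come from
    the right-eye image. Tinted glasses then route each 2D image to one eye.
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]      # red channel   <- left-eye image
    out[..., 1:] = right[..., 1:]   # green + blue  <- right-eye image
    return out

# Two tiny flat-color "frames": left is pure red, right is pure cyan.
left = np.zeros((2, 2, 3), dtype=np.uint8);  left[..., 0] = 255
right = np.zeros((2, 2, 3), dtype=np.uint8); right[..., 1:] = 255
merged = anaglyph(left, right)
print(merged[0, 0])   # -> [255 255 255]: both eyes' channels survive intact
```

The glasses simply undo the merge, sending red to one eye and green/blue to the other. No amount of head movement adds perspective, because only these two fixed 2D views exist.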
Head Tracking
Head tracking is an interesting method of creating a 3D experience that goes beyond stereoscopic. Content is created in a 3D environment, with pixel data existing for all sides of an object and its surroundings. The viewer, through some use of technology like IR lights on glasses, or today even a Kinect sensor, can move left, right, up and down to examine an object from many different angles. The image changes and is adjusted based on where the viewer is looking at the content from, and the content also changes scale as the viewer moves forward and back.
The best way to describe the effect is that it is exactly like looking through a window. As the viewer moves closer and looks to the right, they can see around the frame of the screen to objects that were obscured from further back at another angle. (It’s no wonder that some companies are using this technology in conjunction with 4K screens and deeming it a True Window.)
This is much closer to “3D” than stereoscopic, and it is definitely of the glassless variety. There is no effective way to completely and intuitively “circle an object,” however. Typically this content is kept to a 180-degree horizontal viewing area as well as an acute vertical viewing area. The major drawback is that it is dependent on the one viewer being tracked. Multiple people can see the screen, but only the one being tracked gets an intuitive 3D experience. The others observe the content being manipulated, but with no real relevance to their location.
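The window analogy maps directly onto how such systems typically re-render the scene: each frame, the tracked head position defines an off-axis (asymmetric) view frustum through the screen rectangle. A minimal sketch, with the function name and the screen-at-z=0 setup being my own simplifications:

```python
def offaxis_frustum(eye, screen_w, screen_h, near):
    """Asymmetric view frustum for a head-tracked 'window' display.

    The screen is modeled as a screen_w x screen_h rectangle in the plane
    z = 0, centered on the origin; eye = (ex, ey, ez) is the tracked viewer
    position with ez > 0. Returns (left, right, bottom, top) clip extents
    at the near plane, glFrustum-style. As the viewer moves, the frustum
    skews, sliding previously occluded geometry into view.
    """
    ex, ey, ez = eye
    scale = near / ez                       # project screen edges onto near plane
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top

# Viewer centered: a symmetric frustum.
print(offaxis_frustum((0.0, 0.0, 1.0), 2.0, 1.0, 0.1))
# Viewer steps right: the frustum skews, revealing geometry that was
# hidden past the left edge of the "window."
print(offaxis_frustum((0.5, 0.0, 1.0), 2.0, 1.0, 0.1))
```

Because the frustum is computed from one tracked head, only that viewer gets a geometrically correct image, which is exactly the single-viewer drawback described above.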
Holographic
Most of what we know today as holographic is really a reflective technology of some sort that allows reflected light to create the illusion of depth. This is a much older trick than the red-and-blue 3D of the 1950s. In fact, it predates electricity, as it was used in the theater. A piece of glass would be placed at an angle toward the audience in front of the stage. As the play went on, an actor would be positioned below this angled piece of glass in the orchestra pit. When lit by a lamp, he would suddenly “appear” on stage as an apparition, looking as if he were there in a thin, vapory form. A Pepper’s Ghost.
You can use this to create images in front of a live set, or you can use it in conjunction with another traditional screen in the background. This allows layering of background content on the traditional screen and foreground content on the angled “holographic” surface. In this sense you are painting content digitally, like a Disney artist who lays characters drawn on clear cels over a painted background.
As you can see, this again is really a merging of 2D images on different planes to create real depth, but not “3D”. You will likely see more of this technique using combinations of transparent projection films and LCD or OLED films placed in front of more traditional screens or set work.
Some other holographic methods, like volumetric displays, use mirrors or clear screen substrates to create some very compelling 3D imagery inside a “volume” like a cube or a sphere. Typically these are very small displays as well.
Smoke, water vapor, etc. are all used as well to create the illusion of an image popping out of thin air. No matter what the method, there is still a required substrate to reflect the light. Images are not indeed popping out of thin air; they are just being projected onto, or reflected by, some less obvious screen materials.
Projection Mapping
Anyone who has seen the amazing large-venue YouTube videos or case studies from companies like Projection Design and Christie is familiar with projection/pixel mapping. A projector, or typically multiple projectors, is used to project content onto an actual 3D object.
It is accomplished by using a program to model the surface being projected on, and then creating content that is pre-warped to fit it. When you actually project on the 3D object, the physical dimensions of the object itself restore the content to its correct proportions. Because you are using 2D projection on an actual three-dimensional object, it is easier to create some real 3D effects while using perspective to create some artificial ones. I like to think this would be M.C. Escher’s favorite.
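For a single planar face of the object, that pre-warp step reduces to a homography: mapping the rectangle the content was authored on to the skewed quad where that face sits in the projector’s frame. A sketch using NumPy’s SVD (the direct linear transform; the corner coordinates here are made up for illustration):

```python
import numpy as np

def homography(src, dst):
    """3x3 homography mapping 4 src points to 4 dst points (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null-space vector of A: last row of V^T from the SVD.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply the homography to a 2D point (homogeneous divide)."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return x / w, y / w

# Content is authored on a unit square; the surveyed face of the physical
# object appears as a skewed quad in the projector's frame.
content_corners = [(0, 0), (1, 0), (1, 1), (0, 1)]
surface_corners = [(100, 120), (400, 100), (420, 380), (90, 350)]
H = homography(content_corners, surface_corners)
print(warp_point(H, (0, 0)))   # lands on the surveyed corner near (100, 120)
```

Warping every content pixel through H (in practice, the inverse map on the GPU) is what makes the image look undistorted once the object’s own geometry “undoes” the skew.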
Light Field Displays
I learned about these a few years back at InfoComm, in a 3D course given by Holografika, a company out of Hungary. If you remember my take on head tracking up above (if not, maybe you should be tattooing clues on your body and looking for the John G. who killed your wife), it allows the computer to manipulate content in reaction to the viewer’s position, giving them extra data to see around objects, etc. However, this was only good for the viewer being tracked.
Light field displays also contain all that extra data that allows viewers to see around objects, getting a truer 3D experience . . . with one huge plus: multiple viewers can all get a 3D experience at the same time.
How? A light field display can create different colors from the same pixel location on a screen by splaying light out of that pixel at multiple angles both horizontally and vertically.
Go stand in front of your window. Go on . . . do it! OK, now hold out your arm and randomly touch a spot on the window with your index finger. What color is the object directly in line with your eye and finger? Now keep your finger still, don’t move it, but take a step or move your head to the right. What color is the object directly in line with your eye and finger now?
Your finger location represents a pixel on the screen. That pixel location did not change when you moved your head at the window, but the pixel color value did. A light field display emits different colors out of the same pixel location at multiple angles, and which color you see depends on your viewing angle. This means multiple users in multiple locations are seeing pixel data created especially for them, at the angle they are viewing from.
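Here is a toy model of that angle-dependence, assuming 31 horizontal sub-views spread across a 100-degree field of view (both numbers are illustrative, and the function and flat-panel geometry are my own simplification):

```python
import math

def view_index(viewer_x, viewer_z, pixel_x, num_views=31, fov_deg=100.0):
    """Which of a light-field pixel's directional sub-images a viewer sees.

    A light-field pixel splays num_views differently colored beams across a
    horizontal field of view of fov_deg degrees. The ray from the pixel to
    the viewer's eye picks out exactly one of them, so two viewers at
    different angles see different colors from the SAME pixel location.
    (num_views=31 echoes the 31-viewing-angle example in the text.)
    """
    angle = math.degrees(math.atan2(viewer_x - pixel_x, viewer_z))
    half = fov_deg / 2
    angle = max(-half, min(half, angle))        # clamp to the display's FOV
    return round((angle + half) / fov_deg * (num_views - 1))

# Two viewers, same pixel, different sub-images:
print(view_index(-1.0, 2.0, 0.0))   # viewer standing to the left
print(view_index(+1.0, 2.0, 0.0))   # viewer standing to the right
```

Each viewer intercepts a different beam, which is why no tracking is needed: the display has already emitted a correct view in every direction at once.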
Again, you can’t intuitively walk all the way around an object, but you can get a great 3D experience horizontally at a fairly wide angle, although currently the vertical light field is much narrower.
The main issue with these is content. These are currently roughly 63-megapixel displays, but they could range much higher. (Take the resolution of the physical pixel locations and multiply by the number of viewing angles in the light field.) That’s the equivalent of a 1920×1080, or 2-megapixel, device with 31 separate viewing angles. Imagine the HDMI spec it would take to deliver that content over one cable!
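To put a rough number on that single-cable problem, here is the raw arithmetic (24-bit color and 60 fps are my assumptions for illustration, not figures from Holografika):

```python
# Rough bandwidth estimate for the light-field example above: a 1920x1080
# base panel with 31 directional views per pixel, 24-bit color, 60 fps.
pixels = 1920 * 1080 * 31            # pixel values per frame (~64 million)
bits_per_frame = pixels * 24         # 24 bits of color per pixel value
gbps = bits_per_frame * 60 / 1e9     # uncompressed data rate at 60 fps
print(f"{pixels / 1e6:.1f} Mpixels/frame, {gbps:.1f} Gbit/s uncompressed")
```

That works out to over 90 Gbit/s uncompressed. For comparison, HDMI 2.0 tops out at 18 Gbit/s, so delivering this would take several cables or very aggressive compression.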
Physical Pixels
Physical pixels are a very interesting idea that takes the principles of projection mapping up a notch. A projector (or multiple projectors) is still needed to project color data onto the white pixels, but in these displays each pixel is a physical column. The column can be extended or retracted to a range of depths, actually moving the pixel toward or away from your eyes. This is real depth that can be perceived from multiple angles by multiple viewers. Again, a 180-degree horizontal and vertical limitation is in place; you couldn’t walk behind the display to see the back of the object.
The example shown is obviously “low resolution,” as there are only an estimated 900 pixel columns in the display prototype. However, a 1080p-resolution display of this type would be stunning, to say the least, if used with the proper projector and pixel-mapping logic. It steps projection/pixel mapping up a notch by allowing you to manipulate the physical object being mapped onto as well.
I think what impressed me most was that in the video, the display is not playing back canned content, but reacting in real time to recreate whatever is placed in the scanning area. There is some high-level computing going on there, to say the least.
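Conceptually, each frame of such a display is just a depth map driving actuators, plus a color frame for the projector. A toy sketch of the depth half (the 100 mm travel range and the function itself are my assumptions, not specs from the prototype):

```python
def column_depths(depth_map, max_travel_mm=100.0):
    """Map a normalized depth frame to physical pixel-column extensions.

    Each entry of depth_map is a value in [0, 1]: 0 = fully retracted,
    1 = fully extended. max_travel_mm is an assumed actuator range, not a
    figure from the video. Every frame, each column is driven toward its
    target depth while a projector paints color onto its white face.
    """
    return [[max(0.0, min(1.0, d)) * max_travel_mm for d in row]
            for row in depth_map]

# A tiny 2x3 "scene": a fully extended bump in the middle of the top row.
frame = [[0.0, 1.0, 0.0],
         [0.2, 0.5, 0.2]]
print(column_depths(frame))
```

The real-time behavior in the video then amounts to regenerating this depth map (and the matching color frame) from the scanner on every frame, which is where the heavy computing comes in.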
Laser Plasma
This is where my Big Boy Pants just aren’t big enough. This technology is extremely complicated, but it uses an array of red, green and blue lasers to focus light at different distances from the laser, actually exciting the oxygen and nitrogen atoms in the air at that location and causing them to emit more intense light at a point in space. It can work in air or water. Currently this type of technology creates a lot of stray visible light above and below the image being created. There are also many remaining safety concerns with laser technology, especially when the beams are not contained. However, if you want to make an image appear out of nowhere from R2-D2’s small projection device, this seems like the way to do it.
At the end of the day, we haven’t quite reached our “Save us, Obi-Wan” moment yet. There are many potentially promising ideas out there. Some are retreads, some are compromises born of a desire to get to market as quickly as possible, and some are truly unique and innovative new ideas that may not yet be practical, but may end up defining the future.
Which technology do you think is the most promising? Chime in below!