Is the Era of “CompCine” at Hand?

This blog originally appeared at DisplayDaily.

The term “CompCine” stands for computational cinematography, a coinage that SMPTE Director of Engineering and Standards Howard Lukk used to describe a new category of image capture at the recent Streaming Media for Field of Light Displays (SMFoLD) workshop, held at the SMPTE conference last week. It describes a style of image capture that blends optics with image processing to enable new capabilities that conventional optical camera systems cannot achieve.

Light field cameras are one type of CompCine device; they capture light from the scene from multiple perspectives. They fall into three main types: plenoptic cameras, camera arrays, and moving cameras.

The cameras being developed by Lytro, for example, are of the plenoptic type: they feature one or more sensors at the focal plane, a single main lens system, and a microlens array in between. The company started with a consumer version of this design, but now offers a massive cinema-grade version and a multi-headed VR camera version as well.
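
To make the plenoptic idea concrete, here is a toy sketch of how the raw sensor image decomposes into sub-aperture views, one per pixel position under each microlens. It assumes an idealized square microlens grid perfectly aligned to the pixel grid; real plenoptic cameras use hexagonal arrays and require calibration and demosaicing, so treat this as an illustration rather than Lytro's actual pipeline.

```python
import numpy as np

def subaperture_views(raw, ml_size):
    """Split an idealized plenoptic raw image into sub-aperture views.

    raw     : 2D array (H x W), with H and W multiples of ml_size
    ml_size : pixels per microlens along each axis (square grid assumed)

    Returns an array of shape (ml_size, ml_size, H//ml_size, W//ml_size);
    views[u, v] is the scene as seen through sub-aperture (u, v).
    """
    h, w = raw.shape
    ny, nx = h // ml_size, w // ml_size
    # The pixel at offset (u, v) under every microlens samples the same
    # region of the main lens aperture, so grouping by offset yields views.
    lf = raw[:ny * ml_size, :nx * ml_size].reshape(ny, ml_size, nx, ml_size)
    return lf.transpose(1, 3, 0, 2)

# Toy example: a 6x6 "sensor" with 3x3-pixel microlenses
raw = np.arange(36, dtype=float).reshape(6, 6)
views = subaperture_views(raw, 3)
print(views.shape)  # (3, 3, 2, 2): nine views of 2x2 pixels each
```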

Companies like Fraunhofer IIS are developing a 2D camera array based on discrete cameras that can synchronously capture images with horizontal and vertical parallax. Alternatively, a camera or camera array can also move in space to capture images that can then be processed to create light field data.

Image Processing is Key

The image processing is just as fundamental to these devices. It allows the images to be manipulated to change the focus or depth of field, to change the viewpoint (including to virtual viewpoints of the scene), and it enables the creation of depth maps of the scene. Depth maps in turn allow a 3D model of the scene to be built, with video “textures” overlaid on the model. This is extremely powerful, as textures can then be replaced within the model, creating, in essence, a virtual green screen. It also allows the scene to be virtually re-lit in different ways to create new effects in post production. In other words, it is like working in a game engine using video textures instead of graphic textures. A minimal sketch of the refocusing operation follows.
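
Here is a minimal shift-and-sum refocusing sketch over a grid of sub-aperture views (whether extracted from a plenoptic sensor as above, or captured by a camera array). The single `alpha` parameter and integer-pixel shifts are simplifications of mine; production light field renderers use calibrated geometry and sub-pixel interpolation.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-sum refocusing of a light field.

    views : array (U, V, H, W) of grayscale sub-aperture images
    alpha : refocus parameter; 0 keeps the captured focal plane,
            other values move the synthetic focal plane nearer or farther
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # central aperture position, then accumulate.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(views[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Averaging only a subset of the views narrows the synthetic aperture, which is how the depth of field can be changed after the fact.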

At the SMFoLD workshop, we not only heard about advanced CompCine capture devices, we heard about the latest field of light displays, plus efforts underway to standardize the encoding, formatting, compression and signaling of light field data to enable the distribution of this content to 2D or 3D displays. The presentations can be viewed at www.smfold.org/agenda, and include presentations on related activities with JPEG, MPEG and SMPTE standards organizations.

Synthetic Apertures

But there are other intriguing examples of CompCine as well. For example, Tony Davis of RealD presented a paper at the SMPTE conference describing what he called a synthetic aperture. His idea is to use image processing to replace the 180-degree shutter, with its square temporal profile, used on almost all cinematic capture today. This is a historical artifact of the mechanical film capture days of 100 years ago, so it is time to apply some image processing to this area and improve performance.

Davis says that if content is captured at 120 fps with a 360-degree shutter, a synthetic shutter can be applied that mimics any desired “look”, including the conventional 24-frame, 180-degree shutter with its familiar film motion blur and judder. This allows much more creative control, even from frame to frame. He also noted that the technique was used on Ang Lee's recent film “Billy Lynn's Long Halftime Walk”, which was shot at 120 fps in 4K. Derivative versions at lower frame rates and resolutions will be needed, and the synthetic shutter may help preserve the 120 fps look even when projected at lower frame rates, by reducing the judder associated with lower capture rates. A simple model of the resampling is sketched below.
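
Davis did not publish code, but the core resampling can be modeled simply: treat each 120 fps, 360-degree-shutter frame as the light integrated over its full 1/120 s, then average frames (with fractional weights at the edges) over each synthetic shutter's open interval. For 24 fps at 180 degrees, the shutter is open for 1/48 s, or 2.5 input frames, so each output frame averages three inputs with weights 1, 1, and 0.5. The sketch below assumes linear-light frames; a real pipeline would also linearize, align, and regrade.

```python
import numpy as np

def synthetic_shutter(frames, capture_fps=120, out_fps=24, shutter_deg=180):
    """Resample 360-degree-shutter footage to a new frame rate and
    synthetic shutter angle by weighted frame averaging.

    frames : array (N, H, W) or (N, H, W, C), captured at capture_fps
             with a 360-degree shutter (each frame integrates fully)
    """
    step = capture_fps / out_fps            # input frames per output frame
    open_len = step * shutter_deg / 360.0   # input frames the shutter is open
    n_out = int(len(frames) // step)
    out = []
    for k in range(n_out):
        start = k * step
        acc = np.zeros_like(frames[0], dtype=float)
        total, t = 0.0, start
        while t < start + open_len:
            i = int(t)
            # Fraction of input frame i covered by the open interval
            w = min(i + 1, start + open_len) - t
            acc += w * frames[i]
            total += w
            t = i + 1
        out.append(acc / total)
    return np.array(out)
```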

Let There be Light

I also just looked into a new company called Light, which will soon market a consumer camera that can be classified as a CompCine device. The first version of the Light camera is called the L16, and it should be available in early 2017 for $1,699. It has 16 individual camera modules with lenses of three different focal lengths: five are 28-mm equivalent, five are 70-mm equivalent, and six are 150-mm equivalent. All the lenses in the L16 are molded plastic, and the camera modules are similar to those used in smartphone cameras. Each camera module has a lens, a 13-megapixel image sensor, and an actuator for moving the lens to focus the image. The longer focal length modules also have mirrors to redirect the field of view depending on the mode of operation.

A camera with any focal length between 28 and 150 mm can be realized by engaging certain modules and performing image processing to create the final image. Changing the focus and depth of field, and creating depth maps, are all possible.

For example, suppose you wanted to capture an image with a 50-mm field of view. For this, all the 28-mm and 70-mm camera modules capture images simultaneously (with the 70-mm mirrors tilted to capture the image as four quadrants). The resulting image is 52 megapixels and has much better light sensitivity than a single camera thanks to the redundancy. That redundancy also means some modules can be dedicated to under- and over-exposing the image to create an HDR result, which can then be merged as sketched below.
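
Light has not published its merge algorithm, but combining differently exposed module outputs can be illustrated with a standard weighted HDR merge. The sketch assumes the images are already registered and have a linear response, and the hat-shaped weighting is a common textbook choice rather than Light's documented approach.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge registered, same-scene exposures into an HDR radiance estimate.

    images         : list of float arrays in [0, 1], linear response assumed
    exposure_times : relative exposure time of each image
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        # Hat weight: trust mid-range pixels, distrust near-clipped ones
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t   # each frame's radiance estimate is img / t
        den += w
    return num / np.maximum(den, 1e-6)
```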

Image processing can likewise be used to create a desired bokeh effect. Bokeh is the aesthetic quality of the blur in the out-of-focus parts of an image, and it is an important factor in lens and camera selection for achieving a particular look. Now it can be achieved digitally.
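
For a flavor of how a depth map enables synthetic bokeh, here is a simplified layered-blur sketch. It uses a Gaussian blur as a stand-in for a real lens's disc-shaped point spread function, and it ignores occlusion effects at depth edges, both of which a production implementation would handle.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focus_depth, strength=4.0, layers=8):
    """Approximate depth-of-field blur from a depth map.

    image       : (H, W) grayscale image (apply per channel for color)
    depth       : (H, W) depth map on the same scale as focus_depth
    focus_depth : depth value to keep sharp
    strength    : maximum blur sigma at full defocus
    layers      : number of discrete blur levels (>= 2)
    """
    defocus = np.abs(depth - focus_depth)
    defocus = defocus / (defocus.max() + 1e-6)
    bins = np.minimum((defocus * layers).astype(int), layers - 1)
    out = np.zeros_like(image, dtype=float)
    for b in range(layers):
        # Blur the whole image at this layer's sigma, then keep only
        # the pixels whose defocus falls in this layer.
        sigma = strength * b / (layers - 1)
        blurred = gaussian_filter(image.astype(float), sigma)
        out[bins == b] = blurred[bins == b]
    return out
```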

While it does seem we are entering the era of computational cinematography on the camera side, we are not yet seeing as much activity on the display side. Perhaps that will change in the next few years. We will have to wait and see. – Chris Chinnock
