Under Pressure: Uncompressed vs. Lossless

In today’s world of video, there are a lot of numbers swirling around. We have resolutions like 4096×2160 (DCI 4K), 2160p (UHD), and 1080p. We have frame rates like 24, 29.97, 50, and 60. We have color bit depths of 8, 10, and 12. And we have data bit rates describing how fast all of this is transferred down our cables. It’s enough to make all but the most patient engineer’s head spin.

Then we have the additional pleasure of having to distribute these signals over multiple types of cables and through several different types of switches. We even convert the signals to other signal types to push them longer distances through those same cables.

If you really think about everything that happens in this convoluted chain of electrical manipulations, it’s amazing that we ever get a picture back on the other side in the first place. Thankfully, we do get a picture at the other end more often than not (even if it sometimes takes hours of troubleshooting). But what is actually happening to our pictures in the process?

Two words typically emerge when discussing video extension and video distribution. Those two words are “uncompressed” and “lossless.” Manufacturers love to use these words when describing their products, but what do they actually mean?

Video signals are usually compressed to save space. The space in question is the space required to store the video. Compression is almost always required for stored media. Think about a movie, for instance. A Blu-ray disc easily stores a 1920x1080p movie that plays at 24fps. However, if you calculate the space needed for a file of that size (even at 8 bits and 4:2:0 color sampling), it would take up roughly 536 GB. (Here’s a fun tool to play with to calculate uncompressed video in case your geekery overtakes your weekend.) Yet a Blu-ray disc typically holds 50 GB. How does that happen? Compression. In VTC scenarios, most codecs use H.264. Compression.
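The arithmetic behind that ~536 GB figure is easy to check yourself. Here’s a quick sketch in Python, assuming a two-hour runtime (the runtime is my illustrative assumption, not something stated above); 8-bit 4:2:0 sampling averages 1.5 bytes per pixel:

```python
# Uncompressed video size: width x height x bytes-per-pixel x fps x seconds.
# 4:2:0 chroma subsampling at 8 bits averages 1.5 bytes per pixel.
width, height = 1920, 1080
fps = 24
bytes_per_pixel = 1.5          # 8-bit 4:2:0
runtime_seconds = 2 * 60 * 60  # assumed two-hour movie

bytes_per_frame = width * height * bytes_per_pixel
total_bytes = bytes_per_frame * fps * runtime_seconds

print(f"{total_bytes / 1e9:.0f} GB")  # ~537 GB, versus a 50 GB disc
```

That’s more than ten discs’ worth of data squeezed onto one disc, which gives a feel for how hard the codec is working.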

Now let’s explore the other term, “Lossless.” Compression can actually be either a “lossless” process (preserving every bit of data but just compressing it to take up less space) or a “lossy” process where pieces of the data are actually thrown out. So given these parameters, compression is not always a bad thing for quality as long as the compression is lossless. It can be bad for our signals if the compression is lossy.
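Here’s a tiny demonstration of the lossless case using Python’s built-in zlib. It’s a general-purpose lossless compressor, not a video codec, so treat it purely as an analogy: the data shrinks, yet decompression returns every byte exactly.

```python
import zlib

# Repetitive data compresses well; "lossless" means the round trip is exact.
original = b"uncompressed video frame " * 1000

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), "->", len(compressed), "bytes")
assert restored == original  # lossless: every bit comes back
```

A lossy codec, by contrast, could never pass that final check; the whole point is that some of the original bits are gone for good.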

Funnily enough, most of the compressed signals we deal with use lossy compression, MPEG-4 and H.264 included. This is why we get motion and compression artifacts in many of our signals. The codecs (compressors/decompressors) use algorithms to predict what data should have been in the spot where the data was tossed aside, and some do this better than others. We make quite a few compromises to save space and bandwidth.

We are now seeing companies start to adopt better compression schemes like JPEG2000. JPEG2000 is the standard compression scheme for the digital cinema packages used in theaters. It is a major upgrade from MPEG-4 in that it is less lossy and much more resistant to bit errors and artifacts. (It can also be used in a fully lossless mode as well.)

Some extenders like these from SVSI now use JPEG2000 compression to allow larger video streams like UHD to pass through standard gigabit Ethernet switches. An uncompressed 3840x2160p signal averages around 6 Gbps (depending on bit depth and refresh rate), while SVSI can use JPEG2000 to get that under 900 Mbps to pass through a gigabit switch. (If you want to do uncompressed UHD, SVSI does that as well at 6 Gbps, but you’d need a 10G backbone with SFP connectors.)
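That ~6 Gbps figure falls out of the same arithmetic as before, this time per second instead of per movie. A sketch assuming 8-bit 4:2:0 at 60 Hz (the bit depth and refresh rate are my illustrative assumptions):

```python
# Uncompressed UHD bitrate: pixels x bits-per-pixel x frames-per-second.
width, height = 3840, 2160
fps = 60
bits_per_pixel = 12  # 8-bit 4:2:0 averages 1.5 bytes = 12 bits per pixel

bps = width * height * bits_per_pixel * fps
print(f"{bps / 1e9:.2f} Gbps")  # ~5.97 Gbps: far too big for gigabit Ethernet

# How much compression is needed to fit under 900 Mbps?
target = 900e6
print(f"ratio needed: {bps / target:.1f}:1")  # roughly 6.6:1
```

A ratio under 7:1 is mild by video-codec standards, which is part of why JPEG2000 can do it while staying visually (or even mathematically) lossless.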

So if a system can distribute, extend, and switch signals by compressing them, and do it in a way that is lossless, what is the advantage of an uncompressed signal? The answer is latency. Compression on the send side and decompression on the receive side take time. How much time depends on the technology being used. The SVSI piece mentioned above advertises a 16ms delay. I’m not sure if that is cumulative or at each end, but either way, it doesn’t sound like a lot. In mission-critical situations, however, the importance of that latency would need to be discussed.
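To put 16ms in perspective: at 60fps each frame lasts about 16.7ms, so that delay works out to roughly one frame (and about two frames if the 16ms applies at each end). A quick sanity check:

```python
# How many frames of delay does a 16ms codec latency represent at 60 fps?
latency_s = 0.016
fps = 60
frame_time_s = 1 / fps  # ~16.7 ms per frame

frames_of_delay = latency_s / frame_time_s
print(f"{frames_of_delay:.2f} frames")  # 0.96 frames: about one frame
```

One frame is invisible for playback, but in live IMAG or camera-to-screen scenarios those frames add up across every hop in the chain.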

At the end of the day, I think we can all agree that the idea of our signals being compressed and lossy does not sound very attractive. We’d much rather tell our clients that their signals are uncompressed and lossless. The reality, however, is that compression can be lossless and not compromise the integrity of the signals given that latency is not a major issue.

So here is the million-dollar question: if you extend your Blu-ray or VTC signals with video extenders that tout “uncompressed” signals, is the signal actually uncompressed? No. It just means that the extenders didn’t compress the video again. They transferred the already-compressed video from point to point without squashing it down further. It was lossless and traveled without delay from point A to point B, BUT it was lossy before it ever arrived at point A to begin with!

In summation: Uncompressed is not always realistic. Compressed is not always bad. Lossless does not always really equal lossless. Lossy is never a good thing.

Have a great weekend #AVTweeps.