Post Production Myths Vol. 1

01/11/17 • 10 min

5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only

1. Transcoding to a better codec will improve quality

This is a very, very common question. It doesn’t matter what forum you contribute to – or troll on – this question is always asked: if I take my compressed camera footage, like 8-bit H.264, and convert it into a more robust codec, like 10-bit ProRes or DNx, will it look better?

And on the surface, it makes sense. If I put something small into something bigger, well, that’s better, right? Unfortunately, the math doesn’t support this. Transcoding to a better codec won’t add quality that wasn’t there to begin with. This includes converting an 8-bit source to 10-bit or greater, or converting a compressed color sampling scheme like 4:2:0 to something a bit more robust like 4:2:2.
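
Here’s a minimal NumPy sketch of why (my illustration, not from the episode): promote every possible 8-bit code value into a 10-bit container, the way a transcode would, and count what comes out.

```python
import numpy as np

# Every possible 8-bit code value...
levels_8bit = np.arange(256, dtype=np.uint16)

# ...promoted into a 10-bit container (shift left 2 bits, as a transcode would).
levels_10bit = levels_8bit << 2

# The container got bigger, but the information didn't:
print(len(np.unique(levels_10bit)))  # still only 256 distinct values
# 768 of the 1024 possible 10-bit codes are never used – the "empty bits".
```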

Think of it this way.

Putting the same quality file into a larger file won’t make it visually “better”.

Imagine this glass of eggnog is the sum quality of your original video. Adding rum or not is strictly your call. And you decide you want more of it. So, you pour the eggnog into a larger glass.

You’re not getting more eggnog – you’re just getting a larger container holding the same amount of eggnog, and the empty space not occupied by your eggnog is filled with empty bits.

What transcoding will do, however, is make your footage easier for your computer to handle, in terms of rendering and playback. Less compressed formats, like ProRes and DNx, are easier for the computer to play than, say, H.264. This means you can scrub your timeline more easily and render faster. Now, this is mainly due to Long GOP vs. non-Long GOP, which I discuss here.
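
If you want to try this yourself, here’s a hedged sketch of that transcode using FFmpeg’s stock prores_ks encoder, driven from Python. The filenames are placeholders, and your NLE’s built-in transcode tools do the same job.

```python
import subprocess

# Transcode a Long GOP H.264 camera file into edit-friendly ProRes 422.
# "camera_original.mp4" / "edit_ready.mov" are placeholder filenames.
subprocess.run([
    "ffmpeg", "-i", "camera_original.mp4",
    "-c:v", "prores_ks", "-profile:v", "2",  # profile 2 = ProRes 422
    "-c:a", "pcm_s16le",                     # uncompressed audio, typical in a mezzanine file
    "edit_ready.mov",
], check=True)
```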

In fact, if you wanna get REAL nitpicky, ProRes and DNX are NOT lossless codecs – they’re lossy, which means when you transcode using them, you will lose a little bit of information. You most likely won’t notice, but it’s there...or should I say, NOT there?

Interesting read. Click here.

Now, there is some validity to one unique situation.

Let’s say you shoot with a 4K camera. Perhaps it samples at 8-bit color depth with 4:2:0 color sampling. By downscaling to a 1080p file, the half-resolution chroma becomes full-resolution 4:4:4, and averaging the luma samples effectively dithers the sample depth up to 10-bit. However, as you’ve probably surmised, this comes at a loss of resolution – from 4K all the way down to HD.
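
Here’s an illustrative NumPy sketch of the luma half of that trick (my example, not the episode’s): summing each 2×2 block of 8-bit UHD samples yields values that need a genuine 10 bits to store. The chroma half is even simpler – 4:2:0 chroma on a 4K frame is already sampled at roughly 1920×1080, so at HD every pixel gets its own chroma sample: 4:4:4.

```python
import numpy as np

# Stand-in for an 8-bit UHD luma plane (values 0-255).
luma_4k = np.random.randint(0, 256, (2160, 3840), dtype=np.uint16)

# Box-filter downscale: sum each 2x2 block. Keeping the sum (rather than
# dividing by 4) makes the extra precision explicit.
luma_hd = (luma_4k[0::2, 0::2] + luma_4k[0::2, 1::2] +
           luma_4k[1::2, 0::2] + luma_4k[1::2, 1::2])

print(luma_hd.shape)  # (1080, 1920) – HD raster
print(luma_hd.max())  # up to 1020, which requires a full 10 bits
```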

More resources:

2. Log formats are the same as HDR

The two go hand in hand, but you can do one without the other. Let me explain.

HDR – when we talk about acquisition – involves capturing material with a greater range of light and dark – stops – as well as greater color depth. It’s a combination of multiple factors. Shooting in a log format – whether it’s S-Log, Log C, or another variant – is used to capture as much data as possible given the limitations of the camera sensor.

So, let’s say you have a camera that only shoots in SDR – standard dynamic range – like Rec.709 – which has been the broadcast standard for almost 27 years.

But camera tech has gotten better in those 27 years. So, how do we account for the camera’s improved abilities within this aging spec? We can shoot in a log format. Log reallocates the sensor’s limited SDR range to the parts of the shot you need most. Log simply allows us to use more of the camera’s inherent abilities.
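
To make “reallocates” concrete, here’s a toy curve – my own illustration, not actual S-Log or Log C math – showing how a log transfer function spends a disproportionate share of the limited code range on shadows and midtones:

```python
import numpy as np

linear = np.linspace(0.0, 1.0, 6)  # normalized scene light, black to clip
k = 64.0                           # made-up curve "strength" constant

# Generic log curve: more code values for the dark end, fewer for highlights.
log_encoded = np.log2(1 + k * linear) / np.log2(1 + k)

for lin, enc in zip(linear, log_encoded):
    print(f"scene light {lin:.2f} -> code value {enc:.2f}")
# The darkest 20% of scene light lands on roughly 60% of the code range.
```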

So, while you get the extra abilities that the camera’s sensor allows, it doesn’t give you the complete HDR experience.

Now, if you shoot with a camera that isn’t constrained to Rec.709 and offers a log format, you have the best of both worlds – greater dynamic range, and a format that allows you to capture this extra realm of color possibilities.

3. You can grade video on a computer monitor


Previous Episode


Offline / Online Workflows for Film, TV, and Media

1. What is an Offline/Online workflow?

Various flavors of tape used for offline (and sometimes tape online!)

Like most things in the media industry, the term originates from the analog days of yore. We shot stuff on film. Now, cutting and splicing film, unless done really carefully, can easily cause damage. So we used workprints – film copies of the original negative – that could be manipulated without ruining the original stock. This would be the offline edit. Once the offline creative edit was completed with the workprint, the points at which the film was cut – the frame numbers – were then applied to the master footage – your online. Nowadays this is also called the conform. Over the years, the workprint became videotape – 2”, 3⁄4”, VHS, and in some cases, LaserDisc... but all of these formats were still offlines for the film online.

Towards the late ’80s, there began a movement to replace the creative analog editing that had been done for the past few decades... with computers. The NLE – non-linear editor – was born.

And today, now that film is fading away, we apply the same paradigm to digital footage inside our computers. We perform creative edits with low-resolution versions of the high-res master footage. And once our creative edit is done, we tell our NLE to look at the high-res footage instead of the low-res.
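
As a toy illustration of that relink step (all paths here are made up – real NLEs actually match clips on metadata like tape/reel name and timecode, not bare file paths):

```python
# The offline timeline references proxies; at online, each reference is
# swapped for its original camera master. All paths are hypothetical.
relink = {
    "proxies/A001_C003_proxy.mov": "ocm/A001_C003.mxf",
    "proxies/A001_C007_proxy.mov": "ocm/A001_C007.mxf",
}

offline_timeline = ["proxies/A001_C003_proxy.mov",
                    "proxies/A001_C007_proxy.mov"]
online_timeline = [relink[clip] for clip in offline_timeline]
print(online_timeline)  # the edit now points at the high-res masters
```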

2. What are some examples?

If we focus on how offline/online is used today, it invariably involves a discussion on codecs, and you know there are few things in this world I like more than a good codec discussion.

This means taking the OCM – original camera masters – and creating low-res versions, often called proxies, for the creative team to edit with. The low-res versions typically don’t look nearly as good as the originals, but they take up less storage space and are easy for the computer to handle.

A traditional offline/online workflow

The quality of the proxies usually lies below what we call “broadcast quality”.

Broadcast quality is just as it sounds – the quality at which something will or will not be broadcast, per the standards of the broadcaster. And this is somewhat subjective. Over the years, “broadcast quality” has become a moving target, and the transitions from SD to HD and HD to UHD have thrown a wrench in the works... and this doesn’t even account for theatrical quality. Suffice it to say, “broadcast quality” is essentially “what the distribution partner asks for”.

Generally speaking, any format you work with that is beneath broadcast quality is considered “offline”, and any resolution at or above broadcast quality – or, the original camera masters themselves – is considered your online version.... a vast majority of the time. High-end feature films with larger budgets can walk this line, but it’s a rarity.

Here are some places to start.

Common Offline                 | Common Online
15:1, 14:1, 10:1 (Avid SD)     | 2:1, 1:1
DNx36 – DNx115*                | DNx145+
ProRes 422 Proxy, LT           | ProRes 422+
DNxHR LB                       | DNxHR SQ – DNxHR 444
h.264 below ~2.5Mbps           | Cineform: High or Better
                               | OCM (Original Camera Masters)

It’s important to remember offline proxy resolutions don’t need to match the resolutions of the originals. We can do a standard-def offline for an HD or 4K master. Unscripted reality television shows often cut in standard def, such as the 15:1, 14:1, and 10:1 flavors listed here. They are small in file size – which really adds up to storage savings with thousands of hours of footage... and the format is easy for computers to decode – especially when dealing with multicam.
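
Some quick back-of-the-envelope math on those savings (nominal bitrates, my own arithmetic):

```python
# Rough storage math for a hypothetical 2,000-hour unscripted footage library.
hours = 2000

def terabytes(mbps: float, hours: float) -> float:
    # megabits/sec * seconds -> megabits, /8 -> megabytes, /1e6 -> terabytes
    return mbps * hours * 3600 / 8 / 1e6

print(f"DNx36 offline: {terabytes(36, hours):.1f} TB")    # ~32 TB
print(f"DNx145 online: {terabytes(145, hours):.1f} TB")   # ~131 TB
```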

Scripted shows with less footage, or feature films, may move to DNx for their offline. Bad Robot, for example, often works in DNx115 for movies like Star Trek. Deadpool and

Next Episode


Live Streaming

Editor’s Note: This episode was not live-streamed.

1. What components do I need for live streaming?

Getting started is pretty easy. Thanks to platforms like Facebook and YouTube, you can do it with a few clicks from your phone. But that won’t help you much when you want to up your production value. Here is what you need to get started.

Live Streaming from your CDN to end users

To successfully plan your setup, we need to work backward and follow the old credo of “begin with the end in mind”.

So, the last part of live streaming is “how do I get my video to viewers?” This is normally done through a CDN – a content delivery network. A CDN takes your video stream and not only pushes it out to your legion of fans, but also does things like creating lower-resolution variants – also called transcoding and transrating. A good CDN also supports different platforms, such as streaming to mobile devices and various computer platforms. A CDN also takes the load off of your streaming machine – imagine your single CPU being tasked with sending out a specific video stream in a specific format for every user.

OK, now that we have our CDN, we need to send a high-quality signal to it. I’ll address the specifics of that later in the episode, but suffice it to say, for a successful broadcast, you’ll need a decent Internet connection with plenty of headroom.
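
As a rough sketch of what “plenty of headroom” means – the 2x factor is a common rule of thumb, not a spec:

```python
# Hypothetical numbers: a 1080p contribution feed at 5 Mbps.
stream_kbps = 5000
headroom_factor = 2.0  # rule of thumb; leaves room for bitrate spikes and other traffic

required_upload_kbps = stream_kbps * headroom_factor
print(f"Aim for at least ~{required_upload_kbps / 1000:.0f} Mbps sustained upload")
```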

Live Streaming to a CDN

All CDNs will provide you with a protocol – that is, the way in which they want to receive the live feed for your broadcast. Now, this is different from the protocols that your end device will use, as a good CDN will take care of that mumbo jumbo translation for you... but only if you get the video to them in the right format.

These upload streaming protocols can include HLS, RTSP, RTMP, and Silverlight. The end game is that your software needs to be able to stream in the CDN’s mandated format for it to be recognized.
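
For example, pushing a program feed to a CDN’s RTMP ingest with FFmpeg might look like the sketch below – the ingest URL and stream key are placeholders your CDN would supply:

```python
import subprocess

# Encode and push a prerecorded file to an RTMP ingest point in real time.
# The URL/stream key are hypothetical; a live capture input works the same way.
subprocess.run([
    "ffmpeg", "-re", "-i", "program_feed.mov",   # -re: feed at real-time rate
    "-c:v", "libx264", "-preset", "veryfast", "-b:v", "4500k",
    "-c:a", "aac", "-b:a", "128k",
    "-f", "flv", "rtmp://ingest.example-cdn.com/live/STREAM_KEY",
], check=True)
```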

Speaking of software, this is one of the most critical decisions to make.
Will your streaming software ONLY stream, or will it also handle production tasks, like switching between multiple sources, playing prerecorded video, adding graphics, and more? Of the utmost importance: does your software support the camera types you’re sending it?

Cameras to I/O device to Streaming Device

...which leads us to the beginning of your broadcast chain: “what kind of cameras are you using?” Are you using USB cameras? Or are you using cameras with HDMI or HD-SDI outputs? If the latter, then you need an I/O device on your computer that can take not only all of the inputs you want, but also the frame size and frame rate you want.

You’ll quickly see that the successful technical implementation of a live stream is based on each part playing with the others seamlessly.

A complete live streaming workflow

2. CDNs?!

In case I wasn’t clear, your CDN choice is one of the most critical decisions. Your CDN will dictate your end users’ experience.

Here are some things to look for in a CDN.

Does your CDN allow an end user a DVR-like experience, where they can rewind to any point in the stream and watch? Does this include VOD options, to watch later after the event is over?

Many CDNs have a webpage front end, where you can send users to watch the stream. However, most users prefer to take the video stream and embed it in their own website so they can control the user experience.

Also, “is this a private stream?” If so, ensure your CDN has a password feature. Speaking of filtering viewers, does your CDN tie into your RSVP system – or are you using your CDN’s RSVP system? This is another way to create a more personalized experience for the viewer – as well as track who watches – and get their contact info so you can follow up with them after the event if needed... as well as track the stream’s metrics, so you can improve the content and experience for the next live stream.

CDNs can’t do all of this for free. This is why most CDNs restrict video stream quality on free accounts. This means your upload quality may be throttled, or the end users’ viewing quality or even total viewer count may be lim...
