
5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only
Top 10 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only Episodes
Goodpods has curated a list of the 10 best 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only episode by adding your comments to the episode page.

How to Edit Remotely with Adobe Premiere Pro
10/20/22 • 15 min
Today we’re going to take a deep dive into the various ways to edit remotely with Adobe Premiere Pro, when you or your team, plus your computers and media just can’t be in the same place at the same time. Let’s get started.
1. Team Projects
Team Projects in Premiere Pro – as well as After Effects – has been around for quite a while. And it’s free!
Team Projects is Adobe’s solution for creatives who each have their own local edit system – either at the office or at home – and a local copy of the media attached to that edit machine. Meaning, no one on your team is using any shared storage. Everyone accesses a Project that Adobe hosts in the cloud. And because Creative Cloud is managing the Team Projects database, versions and changes are tracked.
Overview of Adobe Team Projects
Of course, this workflow does require discipline, including organizing media carefully and utilizing standardized naming conventions.
But once you’re in the groove, Team Projects is very easy to use.
Let’s take a quick look, so you can see how the flow goes.
You can start the process when you have Premiere Pro open. Give the project a name, and then add team members as collaborators with their email addresses. They’ll get a desktop notification through Creative Cloud that they’ve been added to the Team Project.
Be sure to set your scratch disks correctly on the 3rd tab – as every Team Projects editor will be saving their files to their own local storage.
Let’s fast-forward till we have an edit we want to share.
In your Team Project pane, you’ll see a cute little arrow at the bottom right of the pane, that tells you to share your changes in the team project.
Don’t worry if you forget – if you look at the sequence name tab and see an arrow, that’s a reminder to share and upload your changes.
Click the “Share My Changes” button, and you’ll see all the stuff you’ll be sharing with your team members. Add a comment if you wanna summarize what you did. Click “Share”. Premiere Pro will then upload your changes to your Team’s Creative Cloud account.
Your Team members will then open the Team Projects they’ve been invited to through the Creative Cloud Desktop app.
Don’t worry – if any media files are marked offline in Premiere, team members can either relink to that media if they have it locally, or download it from within Premiere Pro via the Media Management option.
As you can see, this is where the aforementioned Scratch disks, media management, and organization really come into play – or else you’ll be relinking all day.
Now, this is just the project, sequences, and media. What about review and approve with your team? With the Adobe acquisition of Frame.io last year, review and approval capability is now built right into Team Projects.
Despite the fact Team Projects has been around for a while, it’s still an excellent solution that is already part of your Adobe Creative Cloud subscription, so there is no extra cost to test it out.
2. Adobe Productions
Adobe Productions is the evolution of the “Shared Projects” feature Adobe rolled out in 2017 and it differs significantly from Adobe’s Team Projects.
While the workflow with Team Projects expects each user to have their own local workstation and local copies of the media, Productions operates assuming that everyone on your team is already connected to the same shared storage. This would be akin to having all your creatives in the same building, and all of them mounting the same NAS or SAN volumes to their machine. Because everyone can access the same media at the same time, there is no need to copy media back and forth from your local drive to the shared drive. It saves a ton of time.
Overview of Adobe Productions
The trick to making this work is a software “traffic cop” working in the background and watching over every user who opens what Adobe calls a “Production”.
A Production is essentially a souped-up Premiere Pro project file that points to several smaller project files. The smaller project files are what each editor works with, and all changes are saved to the smaller projects within the primary Production.
The ...

Editing Remotely with Avid Media Composer
04/19/20 • 16 min
All of you are asking the same thing. “How can I edit remotely or work from home?” Today we’ll look at Avid, as they have many supported options, so you can cut with Media Composer from just about anywhere. Let’s get started.
1. Extending Your Desktop
The first method we’ll look at is simply extending your desktop, that is – having your processing computer at the office, while you work from home and remote into that machine. This has been the crutch that most facilities have relied on in the past few weeks. Let’s examine how this works.
First, this scenario assumes that you edit at a facility, where all of the networked computers and shared storage are...and that you can’t take any of those things home. This can be due to security, or other concerns like needing access to hundreds of TB of data.
In this case, the creatives are sent home, and I.T. installs a remote desktop hardware or software solution on each of the machines. The creatives then connect through a VPN or virtual private network – to gain secure access from their home editing fortresses of solitude back into the facility and attempt to work as normal.
Now, on the surface this sounds like a real win-win, right? You get access to your usual machine and usual shared storage. Sure, you lose things like a confidence monitor (if you had one), but you should be fine, right?
The devil, as always, is in the details.
Typical screen sharing software solutions that are installed on your office editing machine are often dumpster fires for creatives. I’m not saying they are bad for general I.T. use, or when you need to remote in and re-export something, but by and large most screen sharing protocols do not give a great user experience. Full frame rate, A/V sync, color fidelity, and responsiveness usually suffer. Solutions like TeamViewer, Apple or Microsoft Remote Desktop, VNC, or most any of the other web-based solutions fail. Hard. You’ll pull all of your hair out before you finish an edit.
Moving up to more robust solutions like HP’s RGS – Remote Graphics Software – or a top-of-the-line solution like Teradici’s PCoIP software – is about as good as you’re gonna get. The license may cost a few hundred dollars, too...depending on your configuration.
But here’s the kicker. They’re Windows only as a host.
While you can access the computer running the Teradici software with a macOS or Windows equipped computer – or even via a hardware zero client – the environment you create in will always be a Windows OS.
Quite unfortunately, there does not exist a post-production-creative friendly screen sharing solution for macOS.
The only solution I’ve come across over these many years is a company called Amulet Hotkey – yes, that’s their name – who take the Teradici PCoIP Tera2 card and put it into a custom external enclosure and add some secret sauce.
You then feed the output of your graphics card, plus your keyboard, mouse, and audio into the device and the PCoIP technology takes over. It’s quite frankly the best of both worlds: PCoIP and macOS.
This ain’t gonna be cheap.
Expect a few thousand dollars per hardware device, and availability at the moment may be difficult. You’re also going to need to do some network configuration for Quality of Service, and then decide ho...

YouTube Tips And Tricks For Your Media
09/12/18 • 13 min
On this episode of 5 THINGS, I’ve got a few tricks that you may not know about that will help you upload, manage, and make YouTube do your bidding.
1. Upload Tricks
You probably have media on YouTube, and you probably think that after thousands of hours, you’ve mastered the ‘Tube. But there are some little-known upload tricks and workarounds, playback shortcuts, and voodoo that you may not even know that YouTube does.
Let’s start at the beginning. Let’s say you’ve got a masterpiece of a video, maybe it’s an uncle getting softballed in the crotch, maybe it’s your buddy taking a header down a flight of stairs, or maybe, just maybe it’s the cutest pet in the world. MINE.
Lucy: the cutest, bestest pet in the world.
...and the world needs to see it, right? And you export a totally-worth-the-hard-drive-space-huge-monster-master-file so you don’t lose 1 bit of quality.
And that’s OK.
Now, in actuality, it’s not the most efficient, but let’s save that existential technology discussion for another episode.
Did you know that YouTube will actually take this massive file? That you don’t need to shrink it, recompress it, or otherwise use the presets in your transcoder du jour? YouTube used to publish a page that specified video file sizes and data rates for “enterprise” grade connections. Ostensibly, this was so companies with fat internet connections could upload massive files. After all, YouTube re-compresses all files anyway. Yes, as I’ve said many times before, YouTube will ALWAYS recompress your media. ALWAYS.
“Enterprise” video bitrates for YouTube. These are 5-6x LARGER than what YouTube recommends for typical users.
But, this page was taken down.
Why?
Because accepting larger file sizes ties up YouTube’s servers, and takes longer for their transcoding computers to chomp through and subsequently create the versions you’ll end up watching. Plus, it’s a crappy experience for you, the end user, to wait hours for an upload AND the processing. Despite this, you can still do it. In the above video, you can see I have an HD ProRes file, and it’s several GB. As I select the file and start the upload, YouTube tells me this will take several hours. That sucks. However, uploading a less compressed file means the versions that YouTube creates will be based on higher quality than the compressed version you’d normally export and upload from your video editor or one that you’d create from your media encoder.
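For a rough sense of what “several hours” means, here’s a quick back-of-the-envelope sketch. The file size and uplink speed below are made-up assumptions for illustration, not anyone’s actual numbers:

```python
def upload_time_hours(file_size_gb: float, uplink_mbps: float) -> float:
    """Estimate upload time from file size (decimal GB) and uplink speed (Mbps)."""
    size_megabits = file_size_gb * 1000 * 8  # GB -> megabits
    return size_megabits / uplink_mbps / 3600  # seconds -> hours

# A ~60 GB HD ProRes master over a 20 Mbps home uplink:
print(f"{upload_time_hours(60, 20):.1f} hours")  # roughly 6.7 hours
```

Double the uplink and you halve the wait – which is exactly why that old “enterprise” page assumed fat corporate pipes.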
YouTube creates all of your media variants...so you don’t have to.
So what would you rather have...a copy of a copy of your finished piece? Or just 1 copy? The less compression you use, the better looking the final video will be.
More on this here: A YouTube user uploads, downloads and re-uploads the same video to YouTube 1000 times and the compression is staggering. This is called “generational loss”.
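You can get an intuition for generational loss with a toy model. To be clear, this is not how a real video codec works – but each “generation” below filters the signal slightly and re-quantizes it back to 8-bit, and the error only ever accumulates:

```python
def reencode(samples):
    """One toy 'generation': mild low-pass filter, then re-quantize to 8-bit."""
    filtered = [samples[0]] + [
        0.25 * samples[i - 1] + 0.75 * samples[i] for i in range(1, len(samples))
    ]
    return [min(255, max(0, round(v))) for v in filtered]

original = [(i * 37) % 256 for i in range(64)]  # pseudo-random 8-bit "pixels"

generation, errors = original, []
for _ in range(10):
    generation = reencode(generation)
    rms = (sum((a - b) ** 2 for a, b in zip(original, generation)) / 64) ** 0.5
    errors.append(round(rms, 1))

print(errors)  # the error grows with every re-encode
```

After 1000 trips through a real encoder – as in the video linked above – the accumulated error is the entire picture.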
By the way, you know that YouTube creates all of your various versions, right? From the file you upload, YouTube creates the 240, 360, 480, 720, 1080 and higher versions. Are yours not showing up? Have patience. YouTube takes a little bit.
OK, back to the high res fat files. I know, the fact there is a long upload for these large files sucks, but it does lead me to the next tip. Did you know you can resume a broken upload? As long as you have the same file name, you can resume an upload for 24 hours after a failed upload. In the video above, let me show you. Here’s the file I was uploading before. As you can see, it still has quite a way to go. Oops, closed my tab. Now, I’m gonna wait a little bit.
OK, it’s been several hours, and I’m going to open a new tab and navigate back to the upload page. Now, I’ll reselect the same file. The upload picks up ...

Building a Hackintosh
06/19/18 • 23 min
Have you ever thought about using a Hackintosh? Wondering how they perform? Or maybe you wanna build one? Fear not my tech friends, for in this episode, we’ve got you covered.
1. Why Build a Hackintosh?
If you spend any amount of time following Apple, you’ve realized that they are a consumer technology juggernaut. Phones, tablets, watches, headphones. This has led some to speculate that Apple isn’t paying attention to the professional market.
That is, Apple isn’t making computers for those who need a lot of horsepower for creative applications, and expandability to make the system more powerful than what the factory model ships with...or what Apple deems us worthy of.
We also need to look at the cost. The Apple logo carries a price premium, and without much exception, Apple computers are more expensive than their Windows or Linux counterparts. And while I concede that a ready-to-roll machine should cost more than the sum of its parts, Apple tends to inflate this more than most.
Another reason to build a Hackintosh....is, well, because it’s there. Because you can. Well, physically, anyway. I’m not a lawyer, and debating the legalities of building a Hackintosh is not my idea of an afternoon well spent. However, the tech challenge in and of itself is enough for some to dig in.
Lastly, owning a Hackintosh means at some point you’re gonna need to troubleshoot the build due to a software update breaking things. If you don’t build it yourself, you’re not gonna know where the bodies are buried, and you’ll be relying on someone else to fix it.
For all of these reasons, I rolled up my sleeves, grabbed some thermal paste, and went down the road of building my very own Hackintosh.
2. Choosing the Right Parts
“Look before you leap.”
When building my Hackintosh, this was my cardinal rule. See what others had done before, what hardware and software junkies had deemed as humanly possible, and follow build guides. Although I was willing to build it, I didn’t want it to be a constant source of annoyance due to glitches, and then no avenue to search for answers if things went south. Part of building a Hackintosh is being prepared for things to break with software updates – and to only update after others had found the bugs. I wanted to keep the tinkering after the build to a minimum.
More createy, less fixey.
The main site online for a build like this is tonymacx86.com. The site has tons of example builds, a large community on their forums, and even better, users who have done this a lot longer than me.
A great starting point is the “Buyer’s Guide” which has parts and pieces that lend themselves to the power that many Apple machines have. A CustoMac Mini, for example, is closely related to the horsepower and form factor you’d find with a Mac Mini.
As I tend to ride computers out for a while, I decided to build a machine with some longevity.
Longevity meant building a more powerful machine, and thus as close as possible to a Mac Pro. And wouldn’t you know it, there is a section called “CustoMac Pro”.
The downside to a machine as powerful and expandable as a CustoMac Pro is that it’s fairly large. After I took inventory of all of the expansion cards I’d want to use, I realized I didn’t need everything that a CustoMac Pro afforded me. The large motherboard in the system – known as an ATX board – was simply overkill and too large a footprint for my work area. I could actually go with something a little bit smaller and still have plenty of horsepower.
So, I looked into the CustoMac mATX builds. M stands for Micro, and the mATX board would be similar to a full sized ATX board, but a bit smaller. I’d also lose some expandability with a smaller, micro ATX motherboard, but I could use the same processor that I would use in a full size build – in this case, a Core i7-8700...

LiveU Solo vs Teradek VidiU Pro
04/19/18 • 13 min
In this episode, we’re going to dive deep into live streaming. Sure your phone can do it, but if you want a professional live stream to YouTube or Facebook or just about anywhere else, then this episode is for you.
My friends at LiveU asked me to do a deep dive on their Solo streaming device. I suggested, “why not a shootout between the LiveU Solo and the Teradek VidiU Pro?” – (which is another similar device).
So, is that cool with you, LiveU?
1. What do they do?
Both units have a ton of similarities; so let’s get those out of the way first so we can focus on the ooey gooey differences.
At the core of both units is the ability to take an SD or HD video source – usually HDMI – encode the signal, and stream it out to the web to sites like YouTube or Facebook. This can be via a traditional Ethernet Internet connection, going mobile with Wi-Fi, or even streaming via a USB cellular 3G/4G connection.
Both the LiveU Solo and VidiU Pro have the ability to bond across a number of onboard connections, which gives you an added level of not only redundancy but also throughput, ensuring your signal gets out smoothly and at the highest bandwidth possible. Both units, however, charge extra for this feature, which I’ll get into later.
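As a sketch of why bonding matters – the link speeds and overhead figure below are made up for illustration – the bonded uplink is roughly the sum of its parts, and losing one link degrades the stream rather than killing it:

```python
# Hypothetical link speeds in Mbps: Ethernet, Wi-Fi, and two USB 4G modems
links = {"ethernet": 10.0, "wifi": 6.0, "4g_a": 4.0, "4g_b": 3.0}
overhead = 0.15  # assume ~15% lost to bonding/protocol overhead

usable = sum(links.values()) * (1 - overhead)
print(f"bonded uplink: ~{usable:.1f} Mbps")

links.pop("4g_a")  # one cellular modem drops out mid-stream
degraded = sum(links.values()) * (1 - overhead)
print(f"after losing a modem: ~{degraded:.1f} Mbps")
```

On a single cellular connection, that dropout would have been the end of your broadcast.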
Both units can take AC power, run on an internal battery, are small in size, and both weigh under a pound (although the VidiU Pro is less than half of that) and they easily mount on your camera or attach to your Batman utility belt.
The units also have built-in batteries so you can stream for several hours before having to re-charge. Both models also have webpage backends to control the units.... to varying degrees.
Web Interfaces for the LiveU Solo (left) and Teradek VidiU Pro (right)
The Solo and VidiU Pro both do their jobs very well. But the devil is certainly in the details, and that’s what we’re going to tackle next.
2. Setup and Connectivity
The LiveU and Teradek units both allow setup via a web browser on your computer or mobile device – LiveU via the LiveU Portal, and Teradek via the web server on the unit.
LiveU is pretty straightforward. Create an account at http://solo.liveu.tv/, boot up your LiveU Solo unit (although this may take a while), and connect your Solo to the internet. Then, enter in your Solo serial number into the portal, and you’re off and running. You now can do your entire streaming configuration of the Solo signal on the webpage for your unit. Now, this is a blessing and a curse...which I’ll get into a bit later on.
Teradek’s setup is very similar. Boot up the unit (which doesn’t take as long as LiveU), and get your unit on your local network – it doesn’t even have to access the internet at this point. It just needs an IP address so your local computers or Wi-Fi enabled cellular devices can see it. Unlike the Solo, the VidiU Pro has a web server built-in, which makes configuration a bit easier.
Why is that Michael?
Having a local webserver built-in means I don’t need to have Internet connectivity on my laptop or workstation if I’m out in the field and want to configure the unit. Yes, I know, you can use cellular devices like your phone to configure their unit, but if cellular service on location is lacking for your phone’s carrier, now what? It’s a slight issue, but I think having a web server built into the unit as Teradek does makes for an easier way of local configuration.
Speaking of cellular connections, at this point, if you’re planning on going into the great wide open and streaming via 4G connections, make sure you’ve purchased your cellular modems and chosen the right data plan – one that won’t throttle your bandwidth or cut you off at the knees during an event.
Now it’s time for the speeds and feeds and goes in and goes out part of our ...

The Truth About Video Editing Software in Hollywood
07/12/17 • 17 min
NLE – non-linear editing – has been around for over 40 years, but it didn’t become commonplace in Hollywood – that is, being used for feature film and broadcast television – until the early 90’s. And that’s where we’ll start.
But before I start, I do need to set a disclaimer. I also work for Key Code Media, who sells many of the tech solutions that I talk about on 5 THINGS. And wouldn’t ya know it, we sell a heck of a lot of Avid and things that play with Avid...including Adobe, and Apple, for that matter. I don’t want any of you to think I’m a paid shill, so I got clearance
from this guy...ya really gotta watch the video above to see.
1. Avid Media Composer
A large part of understanding something’s popularity is examining WHY it became popular. And that requires sharing the briefest of history lessons.
OK, do you remember a time before Internet connected cell phones? Now, try and remember how our daily lives changed when most everyone had one of these devices.
It was a definite shift in how we consumed media. Now, imagine that, only with the CREATION side of media. This was Hollywood in the early 90’s. Digital video cameras were still very new, and limited to standard definition. There were many companies toying with building digital editing software, but none really took hold. That is, until Avid Media Composer came along in the early 90’s.
By building a digital editing platform, based on the terminology and methodology the experienced film editors knew, Avid was able to make the industry adoption of their technology much easier. Thus, we already have 2 reasons Media Composer was popular: it appealed to the sensibilities of the user base, and it was one of the few solutions out there.
Avid also built out their ecosystem, including not only their own shared storage, but also the top audio editing system in the industry: Pro Tools, from then-Digidesign. This gave users a complete solution from a single tech partner. We call this the “one throat to choke” paradigm.
Many facilities already invested in a complete end-to-end Avid infrastructure.
By the time other NLEs were in a useable state for film and TV projects, Avid had a massive head start. This meant a decent-sized user base in the Hollywood market, facility infrastructures (and thus lots of money already invested in hardware and software) that were built around Media Composer, in addition to workflows that incorporated legacy film-based material, tape acquisition, and newer digital formats. Avid also had project sharing by the early 00’s, something other NLEs have only recently gotten right. For all of these reasons, Avid had the Hollywood market cornered. And all of this played into one of the greatest untold truths about Hollywood technology.
Hollywood is predominantly risk-averse.
If something worked last season, why change it for this season? Changing it messes with budgets and timelines and generally upsets the natives.
And that’s why today, Avid is still used on a vast majority of all feature films and broadcast television here in Hollywood. Existing customer investment in infrastructure, experienced talent pool – both available and already on staff, documented workflows with other departments, a complete ecosystem, and a risk-averse industry. If you plan on getting a job tomorrow out in Hollywierd, working in broadcast television or feature film, Media Composer needs to be your strongest software tool.
2. Apple Final Cut Pro X

Transcoding in Post Production
03/08/17 • 16 min
1. Onset and Editorial Transcoding
When does Post actually begin?
Since we’ve moved from celluloid to digital, the answer to this query has quickly moved to production. In fact, over the past 20 years, a new position has emerged – the DIT, or Digital Imaging Technician – as a direct response to the need to coordinate digital acquisition with subsequent post-production. The DIT is such an instrumental part of the process that they are often the liaison that connects production and post together.
Adding a watermark, timecode window burn, and LUT inside Blackmagic Resolve
Now, this can vary depending on the size of the production, but the DIT will not only wrangle the metadata and media from the shoot and organize it for post, but they may have additional responsibilities. This can include syncing 2nd system audio to the camera masters. It may also include adding watermarks to versions for security during the dailies process, or putting a LUT on the camera footage. Lastly, the DIT may also create edit-ready versions – either high or low res – depending on your workflow. A very common tool is Blackmagic Resolve, but also tools like EditReady, Cortex Dailies, or even your NLE.
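Conceptually, the 1D flavor of a LUT is nothing more than a lookup table: 256 input code values mapped to 256 output values, applied per pixel. Here’s a toy sketch – the gamma curve below is an arbitrary stand-in for a real camera LUT, not any vendor’s actual math:

```python
# Build a toy "lift the shadows" LUT: a gentle gamma curve on 8-bit values
lut = [min(255, round(255 * (i / 255) ** 0.9)) for i in range(256)]

pixels = [0, 16, 64, 128, 255]     # sample 8-bit code values
graded = [lut[p] for p in pixels]  # applying the LUT is just indexing

print(list(zip(pixels, graded)))   # midtones and shadows come up; 0 and 255 pin
```

Real tools like Resolve do the same lookup (in 3D, across all three channels) for every pixel of every frame.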
Now, having the DIT do all of this isn’t a hard and fast rule, as often assistant editors will need to create these after the raw location media gets delivered to post. What will your production do? Often, this comes down to budget. Lower budget? This usually means the assistants in post are doing the majority of this rather than the folks on set.
As for the creation of edit-ready media, this speaks to the workflow your project will utilize. Are you creating low res offline versions for editorial, and then reconforming to the camera originals during your online?
A traditional offline/online workflow
Or, are you creating a high res version that will be the mezzanine file you’ll work with throughout the creative process?
A mezzanine format workflow
OK, now on to actually creating edit-worthy media.
This can be challenging for several reasons.
You need to create media that is recognized by and optimized for the editorial platforms you’re cutting on. For Avid Media Composer, this is OPAtom MXF wrapped media. This media is commonly DNxHD, DNxHR and ProRes. What these have in common is that they are non-long GOP formats, which makes them easier for Avid to decode in real time.
The go-to has been the 20-year-old offline formats of 15:1, 14:1, or 10:1.
These formats are very small in size, are easy for computers to chomp through, but look like garbage. But if it means not spending money for larger storage or new computers, it’s tolerated. Recently, productions have been moving to the 800k and 2Mb Avid h.264 variants so they can keep larger frame sizes.
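To see why those ugly offline formats are tolerated, just run the storage math. The bitrates below are ballpark figures for illustration, not exact spec numbers:

```python
# Rough video bitrates in megabits/sec -- ballpark, for illustration only
codecs_mbps = {
    "Avid 15:1s offline": 2.0,
    "Avid 2Mb h.264 proxy": 2.0,
    "DNxHD 36": 36.0,
    "ProRes 422 HQ (1080p)": 220.0,
}

hours = 100  # a documentary-sized pile of dailies
for name, mbps in codecs_mbps.items():
    gigabytes = mbps * 3600 * hours / 8 / 1000  # Mb/s * seconds -> GB
    print(f"{name:24s} ~{gigabytes:,.0f} GB")
```

A hundred hours of 2 Mbps proxies fits on a laptop drive; the same footage as ProRes HQ is pushing 10 TB – and that’s the difference between “everyone cuts locally” and “everyone needs a SAN.”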
You can create this media through the import function, using dynamic media folders, or consolidate and transcode within Media Composer itself.
Adobe Premiere is a bit more forgiving in terms of formats. Premiere, like Avid, will work best with non

Live Streaming
02/07/17 • 12 min
1. What components do I need for live streaming?
Getting started is pretty easy. Thanks to platforms like Facebook and YouTube, you can do it with a few clicks from your phone. But that won’t help you much when you want to up your production value. Here is what you need to get started.
Live Streaming from your CDN to end users
To successfully plan your setup, we need to work backward and follow the old credo of “begin with the end in mind”.
So, the last part of live streaming is “how do I get my video to viewers”? This is normally done through a CDN – a content delivery network. A content delivery network takes your video stream, and not only pushes it out to your legion of fans but also does things like create lower resolution variants, also called transcoding and transrating. A good CDN also supports different platforms, such as streaming to mobile devices and various computer platforms. A CDN also takes the load off of your streaming machine. Imagine your single CPU being tasked with sending out specific video streams in specific formats for every user?
OK, now that we have our CDN, we need to send a high-quality signal to it. I’ll address the specifics of that later in the episode, but suffice it to say, for a successful broadcast, you’ll need a decent Internet connection with plenty of headroom.
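What does “plenty of headroom” mean in practice? A common rule of thumb – and that’s all it is, so adjust for your own setup – is to provision roughly double the bitrate you plan to stream:

```python
def required_uplink_mbps(stream_mbps: float, headroom: float = 2.0) -> float:
    """Rule of thumb: provision your uplink at ~2x the stream bitrate."""
    return stream_mbps * headroom

# A 5 Mbps 1080p stream would want about a 10 Mbps dedicated uplink
print(required_uplink_mbps(5.0))  # 10.0
```

The spare capacity absorbs bitrate spikes and network hiccups, so one bad moment doesn’t stall the whole broadcast.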
Live Streaming to a CDN
All CDNs will provide you with a protocol – that is, the way in which they want to receive the live feed for your broadcast. Now, this is different than the protocols that your end device will use – as a good CDN will take care of that mumbo jumbo translation for you....but only if you get the video to them in the right format.
These upload streaming protocols can include HLS, RTSP, RTMP, and Silverlight. The end game is that your software needs to be able to stream in the CDN’s mandated format for it to be recognized.
Speaking of software, this is one of the most critical decisions to make.
Will your streaming software ONLY stream, or will it also do production tasks as well, like switching between multiple sources, playing prerecorded video, adding graphics, and more? Of the utmost importance is “does your software support the camera types you’re sending it?”
Cameras to I/O device to Streaming Device
....which leads us back to the beginning of your broadcast chain... “what kind of cameras are you using?” Are you using USB cameras? Or, are you using cameras with HDMI or HDSDI outputs? If the latter, then you need an I/O device on your computer which can take not only all of the inputs you want, but also the frame size and frame rate you want.
You’ll quickly see that the successful technical implementation of a live stream is based on each part playing with the others seamlessly.
A complete live streaming workflow
2. CDNs?!
In case I wasn’t clear, your CDN choice is one of the most critical decisions. Your CDN will dictate your end users’ experience.
Here are some things to look for in a CDN.
Does your CDN allow an end user a DVR like experience, where they can rewind to any point in the stream and watch? Does this include VOD options to watch later after the event is over?
Many CDNs have a webpage front end, where you can send users to watch the stream. However, most users prefer to take the video stream and embed it in their own website so they can control the user experience.
Also, is this a private stream? If so, ensure your CDN has a password feature. Speaking of filtering viewers, does your CDN tie into your RSVP system – or are you using your CDN’s RSVP system? This is another way to create a more personalized experience for the viewer, as well as track who watches – and get their contact info so you can follow up with them after the event if needed...as well as track the stream’s metrics, so you can improve the content and experience for the next live stream.
CDNs can’t do all of this for free. This is why most CDNs restrict video stream quality on free accounts. This means your upload quality may be throttled, or the end user’s viewing quality or even total viewer count may be lim...

Post Production Myths Vol. 1
01/11/17 • 10 min
1. Transcoding to a better codec will improve quality
This is a very, very common question. It doesn’t matter what forum you contribute on, or troll on, this question is always asked. If I take my compressed camera footage, like an 8bit h.264, and convert it into a more robust codec, like 10bit ProRes or DNX – will it look better?
On the surface, it does make sense. If I put something small into something bigger, well, that’s better, right? Unfortunately, the math doesn’t support this. Transcoding to a better codec won’t add quality that wasn’t there to begin with. This includes converting an 8bit source to 10bit or greater, or even converting from a subsampled color value like 4:2:0 to something a bit more robust like 4:2:2.
Think of it this way.
Putting the same quality file into a larger file won’t make it visually “better”.
Imagine this glass of eggnog is the total quality of your original video. Adding rum or not is strictly your call. And you decide you want more of it. So, you pour the eggnog into a larger glass.
You’re not getting more eggnog, you’re just getting a larger container of the same amount of eggnog, and empty space not occupied by your eggnog is filled with empty bits.
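If you want to see the eggnog analogy as math, here's a toy Python sketch (a deliberate oversimplification of what a real transcode does): promoting every 8bit code value into a 10bit container just rescales the same 256 levels.

```python
# A toy illustration of why "up-converting" bit depth adds nothing:
# promoting every 8-bit code value (0-255) into a 10-bit container
# (0-1023) just rescales the same 256 levels.

def promote_8bit_to_10bit(value: int) -> int:
    """Shift an 8-bit code value into the 10-bit range."""
    return value << 2  # multiply by 4

distinct_levels = {promote_8bit_to_10bit(v) for v in range(256)}

# Still only 256 distinct levels out of a possible 1024 -- the extra
# "space" in the bigger glass stays empty.
print(len(distinct_levels))  # 256
```

The bigger container holds exactly the same number of tonal steps as the original; the other 768 possible 10bit codes are the "empty bits."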
What transcoding will do, however, is make your footage easier for your computer to handle, in terms of rendering and playback. Less compressed formats, like ProRes and DNX, are easier for the computer to play than, say, an h.264. This means you can scrub more easily in your timeline and render faster. Now, this is mainly due to long GOP vs. non-long GOP, which I discuss here.
In fact, if you wanna get REAL nitpicky, ProRes and DNX are NOT lossless codecs – they’re lossy, which means when you transcode using them, you will lose a little bit of information. You most likely won’t notice, but it’s there...or should I say, NOT there?
Interesting read. Click here.
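Here's a toy Python illustration of "lossy" (a simplified stand-in, not how ProRes or DNX actually quantize): once you round a signal onto a coarser grid, decoding can't bring the detail back.

```python
# A toy illustration of "lossy": quantize a signal onto a coarser grid
# (as any lossy codec effectively does), then decode. The fine detail
# does not come back.

def lossy_roundtrip(sample: float, step: float = 0.1) -> float:
    """Quantize to a coarse grid, then 'decode' back to a value."""
    return round(sample / step) * step

original = 0.237
decoded = lossy_roundtrip(original)
print(decoded)                  # 0.2 -- close, but not 0.237
print(abs(original - decoded))  # the information that's... NOT there
```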
Now, there is some validity to a unique situation.
Let’s say you shoot with a 4K camera. Perhaps it samples at 8bit color depth with 4:2:0 color sampling. By transcoding to a 1080p file, you can dither the color sampling to 4:4:4, and dither the sample depth to 10bit. However, as you’ve probably surmised, this comes at a loss of resolution – from 4K all the way down to HD.
More resources:
- CAN 4K 4:2:0 8-BIT BECOME 1080P 4:4:4 10-BIT? DOES IT MATTER?
- When 420 8bit becomes 444 10bit from Barry Green
- Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4
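The trick described in the resources above can be sketched numerically (ignoring real-world chroma siting and filtering): each 2x2 block of four 8bit luma samples collapses into one output pixel, and summing four 0-255 values gives a 0-1020 range, which genuinely needs 10 bits to hold.

```python
# A toy sketch of the 4K-to-1080p trick: each 2x2 block of four 8-bit
# luma samples collapses into ONE output pixel. Summing four 0-255
# values gives a 0-1020 range, which genuinely needs 10 bits to hold.

def combine_2x2_block(block):
    """Merge four 8-bit samples into one 10-bit-range value."""
    assert len(block) == 4 and all(0 <= s <= 255 for s in block)
    return sum(block)  # 0..1020 fits in 10 bits (max 1023)

# Four slightly different 8-bit neighbors become one finer value:
print(combine_2x2_block([128, 129, 128, 130]))  # 515 -- equivalent to
# 128.75 in 8-bit terms, a level no 8-bit code value could represent
```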
2. Log formats are the same as HDR
The two go hand in hand, but you can do one without the other. Let me explain.
HDR – when we talk about acquisition – involves capturing material with a greater range of light and dark (stops) as well as color depth. It's a combination of multiple factors. Shooting in a log format – whether it's S-Log, Log-C, or another variant – is used to gain as much data as possible within the limitations of the camera sensor.
So, let’s say you have a camera that only shoots in SDR – standard dynamic range – like Rec.709 – which has been the broadcast standard for almost 27 years.
But camera tech has gotten better in the last 27 years. So, how do we account for this better ability of the camera within this aging spec? We can shoot in a log format. Log reallocates the camera sensor's limited SDR range to the parts of the shot you need most. Log simply allows us to use more of the camera's inherent abilities.
So, while you get the extra abilities that the camera’s sensor allows, it doesn’t give you the complete HDR experience.
Now, if you shoot with a camera that isn't constrained to Rec.709 and offers a log format, you have the best of both worlds: greater dynamic range, and a format that allows you to capture this extra realm of color possibilities.
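To show the shape of the trick, here's a made-up log curve in Python (it is NOT any vendor's actual S-Log or Log-C formula, just a sketch): a log encode spends more of the limited code values on shadows than a straight linear mapping would.

```python
import math

# A made-up log curve (NOT any vendor's actual S-Log or Log-C formula),
# just to show the shape of the trick: log encoding spends more of the
# limited code values on shadows than a straight linear mapping would.

def toy_log_encode(linear: float) -> float:
    """Map linear scene light (0.0-1.0) to a 0.0-1.0 log code value."""
    return math.log2(1 + 255 * linear) / 8  # log2(256) == 8, so max is 1.0

# The darkest 5% of scene light gets stretched across ~47% of the
# available code range, preserving shadow detail.
print(round(toy_log_encode(0.05), 3))  # 0.473
print(toy_log_encode(1.0))             # 1.0
```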
3. You can grade video on a computer monitor

AI Tools For Post Production You Haven’t Heard Of
5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only
01/23/24 • 12 min
Generative A.I. is the big sexy right now – specifically, creating audio, stills, and videos using A.I. What's often overlooked, however, is how useful analytical A.I. is. In the context of video analysis, that means facial or location recognition, logo detection, sentiment analysis, and speech-to-text, just to name a few. And those analytical tools are what we'll focus on today.
We’ll start with small tools and then go big with team and facility tools. But I will sneak in some Generative AI here and there for you to play with.
1. StoryToolkitAI
StoryToolKitAI can transcribe audio and index video
Welcome to the forefront of post-production evolution with StoryToolKitAI, a brainchild of Octavian Mots. And with an epic name like Octavian, you’d expect something grand.
He understood the assignment.
StoryToolKitAI transforms how you interact with your own local media. Sure, it handles the tasks we've come to expect from A.I. tools that work with media, like speech-to-text transcription.
But it also leverages zero-shot learning for your media. “What’s zero shot learning?” you ask.
Imagine an AI that can understand and execute tasks it was never explicitly trained for. StoryToolKitAI with zero-shot learning is like a charades partner who somehow gets things right on the first guess, while I'm still standing there making random gestures, hoping someone will figure out that I'm trying to act out Jurassic Park and not just practicing my T-Rex impersonation for Halloween.
How Zero-Shot Learning Works. Via Modular.ai.
Powered by the goodness of GPT, StoryToolKitAI isn't just a tool. It's a conversational partner. You can use it to ask detailed questions about your indexed content, just like you would with ChatGPT. And for you DaVinci Resolve users out there, StoryToolkitAI integrates with Resolve Studio. However, remember Resolve Studio is the paid version, not the free one.
Diving Even Nerdier: StoryToolkit AI employs various open-source technologies, including the RN50x4 CLIP model for zero-shot recognition. One of my favorite aspects of StoryToolkit is that it runs locally. You get privacy, and with ongoing development, the future holds endless possibilities.
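As a rough sketch of the idea (with hand-made toy vectors, not real CLIP embeddings), zero-shot tagging boils down to comparing an image's embedding against the embeddings of candidate text labels in one shared space.

```python
import math

# A minimal sketch of CLIP-style zero-shot tagging: the frame and the
# candidate text labels live in one shared embedding space, and the
# label whose vector points the same way as the frame's wins.
# The 2-D vectors below are made up purely for illustration.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

frame_embedding = (0.9, 0.2)  # hypothetical embedding of a video frame

labels = {
    "a dinosaur in a jungle": (0.8, 0.3),
    "a person playing charades": (0.1, 0.95),
}

# No "dinosaur detector" was ever trained -- the labels are just text.
best = max(labels, key=lambda text: cosine(frame_embedding, labels[text]))
print(best)  # a dinosaur in a jungle
```

Because any text can be embedded, the model can "recognize" categories it was never explicitly trained on; that's the zero-shot part.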
Now, imagine wanting newer or even bespoke analytical A.I. models tailored for your specific clients or projects. The power to choose and customize A.I. models. Well, who doesn't like playing God with a bunch of zeros and ones, right? Lastly, StoryToolKitAI is passionately open-source. Octavian is committed to keeping this project accessible and free for everyone. To this end, you can visit their Patreon page to support ongoing development efforts (I do!).
On a larger scale, and on a personal note, I believe the architecture here is a blueprint for how things should be done in the future. That is, media processing should be handled by an A.I. model of your choosing, with transparent practices, that can process media independently of your creative software.
A potential architecture for AI implementation for Creatives
Or better yet, tie this into a video editing software’s plug-in structure, and then you have a complete media analysis tool that’s local and using the model that you choose.
2. Twelve Labs
Have you ever heard of Twelve Labs?
Don't confuse Twelve Labs with ElevenLabs, the A.I. voice synthesis company.
Twelve Labs is another interesting solution that I think is poised to blow up... or at least be acquired. While many analytical A.I. indexing solutions search for content based on literal keywords, what if you could perform a semantic search? That is, using a search engine that understands words from the searcher’s intent and their search context.
This type of search is intended to improve the quality of search results. Let's say, here in the U.S., we wanted to search for the term "knob".
Other English speakers may be searching for something completely different.
That may not actually be the best way to illustrate this. Let’s try something different.
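A toy Python contrast makes the point: the tiny hand-made vectors below stand in for the embeddings a real learned model (like the ones behind services such as Twelve Labs) would produce.

```python
import math

# A toy contrast between literal keyword search and semantic search.
# The tiny hand-made vectors below stand in for embeddings a real
# learned model would produce.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

clips = {
    "presenter turns the door handle": (0.9, 0.1),
    "crowd cheers at a concert": (0.1, 0.9),
}

query_text = "knob"
query_vec = (0.85, 0.15)  # hypothetical embedding of "knob"

# Keyword search finds nothing: the literal word never appears.
print([c for c in clips if query_text in c])  # []

# Semantic search still surfaces the door-handle clip, because the
# meanings are close even though the words differ.
print(max(clips, key=lambda c: cosine(query_vec, clips[c])))
```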
“And in orde...
FAQ
How many episodes does 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only have?
5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only currently has 36 episodes available.
What topics does 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only cover?
The podcast is about Podcasts, Technology and Tv & Film.
What is the most popular episode on 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only?
The episode title 'Editing Remotely with Avid Media Composer' is the most popular.
What is the average episode length on 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only?
The average episode length on 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only is 14 minutes.
How often are episodes of 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only released?
Episodes of 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only are typically released every 34 days, 22 hours.
When was the first episode of 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only?
The first episode of 5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only was released on Sep 23, 2014.