Unless the editorial team is just one person, you will need storage that can be shared by multiple editors. When choosing that storage, you will need to balance storage size (how many GB or TB can you store?), simultaneous connections (how many people can access the files at once?), and reliability (what happens if a hard drive crashes?).

For example, if you shot something on your cell phone (H.264), exporting in an uncompressed format would be a waste of drive space. The balance you have to consider when choosing a color space is the same balance that we’ve discussed when talking about bit depth and codecs. The DP may even have a different creative LUT for each scene. We dive into camera technology so that you can start your workflow off right, covering topics like cameras, codecs, and color spaces. As with everything in the post-production process, you will need to balance resources. Captioning, both open and closed, serves many purposes. HTTP is also not as secure as other transmission methods, meaning your media stream could be intercepted. Although the media files for a film tend to be very large, the project files created by the NLE are usually quite small.

If you are using a highly compressed codec, a high-bitrate codec, or a log recording format, an offline edit will probably allow you to work faster, on cheaper hardware. The main exception is when you have an extremely fast turnaround time. Any pixels that are identical between the two tracks will appear black, while any pixels that are different will appear in color. Even a simple drama or corporate video project frequently makes use of VFX to fix small errors or to save the production time and money.

The “5” in 5.1 represents the Left, Center, Right, Ls, and Rs channels, and 5.1 is the most common surround format outside of a theater environment. This eliminates many of the visual differences between takes, locations, and cameras, and ultimately serves to maintain a story’s continuity. There are a few ways to accomplish this. On a documentary film, the editorial process often runs in parallel with production, since the story is built primarily in the edit. Top-of-the-line color correction brings a level of precision that is difficult to match, but it is expensive in both money and time. Unless your effects contribute meaningfully to your storyline, they are of no use. Knowing the basic components of these tools and what they can do is vital to understanding the artistic potential of color correction as a whole, and the impact it can have on your project.

Frame.io is a great online tool for reviewing an audio mix remotely. Because VFX work can often extend very close to the project deadline, color correction may take place gradually as individual shots are finalized and handed off to the coloring team. Nearly every camera maker has its own flavor of log, but they all do essentially the same thing. To make this process as quick and smooth as possible, the tracks in the editing software should be labeled and organized before the AAF/OMF is created. The relevant delivery requirements should be passed along to post audio to ensure the final mix meets them. The first step is organizing the audio clips from the AAF/OMF in a digital audio workstation.

But even in these scenarios, a remote workflow is valuable for sharing dailies with studio execs, investors, or other third-party stakeholders who might not sit in on early review sessions. You could shoot video of fire against a black background and then composite it into a shot of the house. On set, data management involves syncing, backups, checksums, and more.
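To make the checksum step concrete, here is a minimal sketch in Python, using only the standard library. MD5 is shown for simplicity (many on-set tools use faster hashes such as xxHash), and the card and shuttle-drive paths are hypothetical:

```python
# verify_copy.py — a minimal sketch of checksum verification during a backup.
# Assumes only the Python standard library; all paths are hypothetical.
import hashlib
import pathlib

def file_checksum(path: pathlib.Path) -> str:
    """Hash a file in 1 MB chunks so large camera files don't fill RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

source = pathlib.Path("/Volumes/CARD_A001/clip.mov")
backup = pathlib.Path("/Volumes/Shuttle/A001/clip.mov")

# A copy is only trustworthy if both files hash to the same value.
if file_checksum(source) == file_checksum(backup):
    print("checksums match: backup verified")
else:
    print("MISMATCH: re-copy this file before formatting the card!")
```

The same pattern scales to whole card offloads: hash every file on copy, store the values in a manifest, and re-verify against that manifest before the card is wiped.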
Even if you’re doing all of your color and VFX work inside your NLE, you’re still going to relink back to those camera files before you export. In a traditional workflow, if schedules are tight, the editor may have two separate in-person meetings, one with the director and another with the producer, who may give conflicting feedback. The once-exclusive services offered by high-end post-production facilities have been eroded away by video editing software built on non-linear editing systems (NLEs). On a smaller project, it may be quite feasible to send the media files to another editor over the internet. VR video players have special decoders that tailor the audio experience depending on where the viewer is looking. Basic metadata is often enough, but some editors use more detailed metadata to organize their edit. If the editors decide to combine two performances in the same take, they will probably be able to do a pretty good job in their NLE, but the finishing artist will recreate that shot with their more advanced tools. Or, if you’re using Frame.io for remote feedback, you may add the Frame.io plugin to bring review and collaboration tools into the NLE.

It’s also worth noting that, depending on the quality of the software you’re using to key out the background, very fine details can disappear, too (such as wisps of hair or feathers, which you may want to keep in mind while casting or costuming). Captioning services can also help improve your video viewership by letting you reach a wider audience, including viewers who speak a different language or have a disability. Stems/splits are also commonly required. It’s important for each audio clip to be organized on a track with similar audio clips before the post audio process can continue (dialogue with dialogue, sound effects with sound effects, etc.). Is it being heard through a small mobile device with headphones, or in a movie theater? On the higher end, Avid Symphony, Autodesk Flame/Lustre, and Assimilate Scratch are all common finishing options. Transcribing also makes a lot of sense, and it is worth opting for whenever you can.

The colorist and the VFX artist often require carefully tuned, powerful custom machines in order to handle the enormous task of processing hundreds of millions of pixels every second for high-resolution images, but the editor will probably be fine with a mid-to-high-tier standard computer. In the US, typical measurements are -24 LUFS/LKFS integrated (+/-1 LU) and a true peak of -10 dBTP (some networks provide a bit more wiggle room, but these recommendations should cover all the bases). On the lower end, and even on some high-end films, After Effects is common for finishing because of its low price tag but broad feature set. Imagine taking water from a small glass and pouring it into a larger glass.

That still does happen with digital dailies, but it’s now much more common for the reviewers to watch the dailies separately online, using a platform like Frame.io, which allows them to leave comments directly on the video from their laptop or smartphone. This allows the editor to make sure that the performance works in the scene before sending the shot off, or that the background is the best one in which to integrate the effects.
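To sanity-check a mix against those US loudness numbers, here is a minimal sketch, assuming ffmpeg is installed and on your PATH (the file name “mix.wav” is hypothetical). It runs ffmpeg’s loudnorm filter in measurement mode, which prints a JSON summary of integrated loudness and true peak to stderr:

```python
# loudness_check.py — a minimal sketch, assuming ffmpeg is installed.
# loudnorm with print_format=json emits a JSON block on stderr containing
# the measured integrated loudness ("input_i") and true peak ("input_tp").
import json
import subprocess

INPUT = "mix.wav"  # hypothetical file name

result = subprocess.run(
    ["ffmpeg", "-hide_banner", "-i", INPUT,
     "-af", "loudnorm=print_format=json", "-f", "null", "-"],
    capture_output=True, text=True,
)

# The JSON block is the last braced section loudnorm prints on stderr.
stderr = result.stderr
stats = json.loads(stderr[stderr.rindex("{"):stderr.rindex("}") + 1])

integrated = float(stats["input_i"])   # integrated loudness, LUFS
true_peak = float(stats["input_tp"])   # true peak, dBTP

print(f"Integrated: {integrated} LUFS (target -24, +/-1)")
print(f"True peak:  {true_peak} dBTP (ceiling -10)")
if abs(integrated + 24.0) > 1.0 or true_peak > -10.0:
    print("WARNING: mix falls outside the typical US broadcast spec")
```

A quick check like this is no substitute for a QC pass, but it catches gross loudness problems before a deliverable goes out the door.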
When we say “highest quality”, we mean that you want to capture as much information as possible—so you want less compression, higher bit rate, higher bit depth, and less chroma subsampling. This means either exporting from within the NLE or exporting through a software application that understands the project file, such as Adobe Media Encoder with Adobe Premiere projects. If you’re working with very tight turnaround times, however, you may choose a codec that will allow you to start editing immediately after a shoot, even if that means a higher cost or a sacrifice in image quality. While an all-in-one embedded file sounds like the most complete option, it’s not always the easiest. In discussing log images, we’ve just touched on the topic of color spaces, which is a fundamental technical concept that underlies all image data and processing. Luckily, most closed caption files are nothing but specially formatted text, so a quick internet search will find many free sites that can convert between the formats. The media player will access both and sync the playback during viewing. This workflow is becoming less common, however. Avid’s DNxHR and Apple’s ProRes are very common for editorial. On a smaller project, the DIT or a DIT’s assistant may do most of this prep work. In order to make your final decision about the camera and codec, you’ll need to return to this section after you’ve read the rest of this guide, but we’ll give you a good overview here.

Rotoscoping (or roto) is the frame-by-frame process of tracing an object as it moves through a shot to create a matte so that it can be combined with other images. This idea applies to broadcast television as well as to online streaming, although the technical specs are different for each one. Sound is the invisible half of the film. This format has been around for some time, but it is highly immersive and remains one of the most popular surround formats. With both XMLs and AAFs, it’s necessary to research and test the workflow before you begin post-production to ensure that the project can be accurately transferred between the tools you are using. Individual networks may also have unique spec requirements that post audio should be made aware of. The basic process for producing a 3D character or object starts with modeling, which involves creating a detailed, rigid 3D sculpture that will subsequently be “skinned” or painted. Animatics – A group of storyboards laid out on a timeline to give a sense of pace and timing. (Contributed by Mike McCarthy, Tech with Mike First.) Both file types accomplish a similar task – exporting a timeline from one piece of software so it can be imported into another. This process is critical to both the technical and creative aspects of a film, as it ties together everything presented on screen into a cohesive and beautiful image. [Image courtesy Paul Machliss.] Cameras also embed technical metadata (shutter speed, lens information, date, time, timecode, etc.). Pro tip: follow the general rule that less is more. In essence, a holdout matte is just what it sounds like—a cutout of a foreground object that allows you to isolate it from its background and place it over a new background. With lower-end projects or fast-turnaround projects, however, the tools that are built into the NLE may get the job done. Usually, if you are adding a CG element to a live-action shot, the VFX software can generate a matte automatically.
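As an illustration of just how “plain text” those caption formats are, here is a minimal sketch (file names hypothetical) that converts SubRip (.srt) captions to WebVTT (.vtt). The conversion mostly amounts to adding a header and swapping the decimal separator in the timestamps:

```python
# srt_to_vtt.py — a minimal sketch of caption format conversion.
# SRT and WebVTT are both plain text; the main differences are the
# "WEBVTT" header line and commas vs. periods in cue timestamps
# (00:01:02,500 vs. 00:01:02.500). File names are hypothetical.
import re

with open("captions.srt", encoding="utf-8") as f:
    srt = f.read()

# Swap the comma decimal separator inside timecodes only.
vtt_body = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt)

with open("captions.vtt", "w", encoding="utf-8") as f:
    f.write("WEBVTT\n\n" + vtt_body)
```

The numeric cue identifiers that SRT files carry are legal in WebVTT, so a simple pass like this is usually enough for the common case; more exotic formats (e.g. broadcast caption formats) need dedicated tools.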
With workflow software like Process Street, documenting your workflows has become so easy that there is no longer any excuse for not doing it. Failing QC can be a time-consuming and expensive proposition. Compositing is the task of combining multiple images together so that they appear to be part of the same shot. Any sound effects that don’t fit into the reality of the scene, but rather are used for emotional impact, would fit into this category. If you do need to send your project from your NLE to another piece of software, however, you have a series of steps to follow. Imagine that the different video clips were stacked on top of each other.

Transcoding can be done either “in software” on a Mac, Windows, or Linux operating system with a specialized transcoding application, or it can be done “in hardware”, which requires the user to play the video out of the computer via “baseband” connections (video cables). If you’re consistent with your file naming, however, it can almost always be automated. For most people, transcoding the footage isn’t a huge issue because it can be done overnight or on a spare computer.

When it’s done right, the dialog track should sound completely natural and unedited. Plugins like FxFactory by Noise Industries provide an entire ecosystem of third-party plugins for post-production software from Apple, Adobe, and Blackmagic’s DaVinci Resolve. Post-production is the third and final stage of video production, but it is also among the most crucial. If you record log footage with a high-quality codec (the codec you choose is important), you will have many of the benefits of working with raw footage (e.g. greater latitude and flexibility in color correction, greater retention of detail in highs and lows, etc.). Once the project is reconnected to the original camera files, it is back “online.” The “.1” represents the discrete low-frequency effects channel. While providing a captions file for optional viewing seems pretty self-explanatory, there are some considerations to be made. The sound recording capabilities of most cameras, even professional cinema cameras, can’t match a high-end sound recorder. On a large project like a feature film or TV show, for the most part, the various roles in post-production are kept fairly separate. As soon as a draft is ready for review, the timeline is rendered and compressed, the file is uploaded, and the reviewers are notified automatically. There are numerous ways to print, each with its own qualities and drawbacks, too many to list here. [Image: Jellyfish shared storage rack array.] If ignored, a project may fail quality control in preparation for broadcast and be unable to air until addressed. The VFX editor (and/or the post-production supervisor/coordinator) and the VFX house will each use detailed lists and spreadsheets (or some sort of off-the-shelf or custom project management software) to track the status of each shot throughout the process. Now: how do you get your finished project out to the world? While all NLEs can create titles, they don’t have the level of nuance that a finishing tool provides. That way, your dialog editor can easily choose the best microphone for each scene. Each video endpoint requires your media to be encoded in such a way that their data center can process the media effectively. The major difference is that, when an editorial team is working together in the same office, they work off of a single set of shared files.
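To make the “in software”, overnight transcoding point concrete, here is a minimal sketch, assuming ffmpeg (with its prores_ks encoder) is installed; the directory names are hypothetical. It batch-creates ProRes Proxy files while preserving each camera file’s name, which is what keeps the eventual relink back to the originals automatable:

```python
# make_proxies.py — a minimal sketch of overnight "in software" transcoding.
# Assumes ffmpeg (with the prores_ks encoder) is on the PATH; the
# "camera_originals" and "proxies" directory names are hypothetical.
import pathlib
import subprocess

SOURCE = pathlib.Path("camera_originals")
DEST = pathlib.Path("proxies")
DEST.mkdir(exist_ok=True)

for clip in sorted(SOURCE.glob("*.mov")):
    # Consistent naming: keep the camera file name so relinking back to
    # the originals (going "online") stays a simple path swap.
    out = DEST / clip.name
    subprocess.run(
        ["ffmpeg", "-i", str(clip),
         "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes Proxy
         "-vf", "scale=-2:720",                   # smaller frame for editorial
         "-c:a", "pcm_s16le",                     # uncompressed audio
         str(out)],
        check=True,
    )
    print(f"transcoded {clip.name}")
```

Kick a script like this off at wrap, and the proxies are waiting in the morning; because the names match one-to-one, most NLEs can relink the whole project to either set of files in a single operation.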
Good color correction pushes a story’s tone and draws a certain emotional reaction from viewers. Once the editor is confident of the shot’s placement in the sequence, the VFX editor (or an assistant editor) will prep several files. These are often fantastical creatures or objects, but they can also be perfectly normal things used in invisible effects. [Image: Apple ProRes RAW, from Frame.io’s Apple FCP X integration promo film.] Unfortunately, the files that editors like to use aren’t as convenient for producers to review, partly because of their file size and partly because they may require special software to be installed. The colorist builds the look. It’s unlikely that the film will be entirely black, because there will be many minor differences between the reference video and the conformed version. We hope these will help you organize anything that relates to post-production.

Here’s a quick overview of the process: the dialog edit has just as much to do with listening to the space in between the words as it does with the words themselves. In the simplest scenario, the file structure inside the NLE will exactly match the structure of the files as they are stored in the shared storage. It is preferable to send a larger number of well-organized tracks, instead of combining different kinds of clips together on a smaller number of tracks. These transmission methods are considered facility-based tools, and thus are more expensive than FTP or HTTP methods. The dialog edit aims to smooth out those chopped-up bites to make the edits “invisible” to the listener, and give the impression that what you hear is what was actually said.
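To illustrate the conform comparison described earlier (identical pixels render black, differences show up in color), here is a minimal sketch in Python. The two frame files, and the Pillow and NumPy dependencies, are assumptions; in practice you would export matching frames from the reference video and the conformed timeline:

```python
# conform_check.py — a minimal sketch of the difference comparison:
# identical pixels come out black, and any mismatch shows up in color.
# Assumes two same-sized frames; "reference.png" and "conform.png"
# are hypothetical file names.
import numpy as np
from PIL import Image

ref = np.asarray(Image.open("reference.png").convert("RGB"), dtype=np.int16)
con = np.asarray(Image.open("conform.png").convert("RGB"), dtype=np.int16)

# Per-pixel absolute difference: 0 (black) wherever the frames match.
diff = np.abs(ref - con).astype(np.uint8)
Image.fromarray(diff).save("difference.png")

# A frame-wide summary: a max of 0 means the frames are pixel-identical.
print("max per-channel difference:", diff.max())
```

As the text notes, a real conform will rarely difference to pure black; the point is to spot large, structured areas of color that indicate a wrong shot, a timing slip, or a missed effect.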