Team Deakins Podcast

Episode 19 - Color Science with JZ (7 replies and 11 comments)

James
12 months ago

Team Deakins has an in-depth discussion with "JZ" Zell, Color Scientist. We talk LUTs, resolution, color space, pixels, workflow and lots more. He really knows his stuff!

Please use this topic for further questions and comments.

Colemar Nichols
12 months ago

Hi James,

In this episode, when discussing the de-bayering of 'In Time', I was confused. You said the film was recorded as uncompressed HD to Codex recorders, and was therefore de-bayered in camera rather than in post? Was ArriRaw not available at that time?

dmullenasc
12 months ago

When the Alexa first came out, a raw output wasn't enabled yet, so uncompressed RGB HD to an external recorder was the best option. The movie "Hugo" and the early seasons of "Game of Thrones" did the same thing, recording uncompressed RGB HD externally.

Colemar Nichols
12 months ago

Do you know why it took some time for the Alexa to output ArriRaw when the D-21 had been doing it for years?

Wouter
12 months ago

to ACES or not to ACES? 

Colemar Nichols
12 months ago

I’ve found ACES is great for a show with multiple cameras of different types. I’m currently timing a film shot over three years on Panavision DXL, DXL2, Genesis, Arriflex D-21, Kodak Vision2, Vision3, Blackmagic Micro, and Sony A7s and when pushing it through ACES it all matches up 90%. Really is a great workflow, but not necessary with a single camera and lens set.
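
For readers curious what "pushing it through ACES" looks like in practice, here is a minimal sketch using OpenColorIO's Python bindings: each camera's footage goes through its own Input Transform into the common ACES2065-1 space, after which everything lives in one working space. The config path and the exact colorspace names are assumptions; they vary with the OCIO config in use.

```python
import PyOpenColorIO as ocio

# Load an ACES OCIO config (the path is an assumption; use whatever
# ACES config the show supplies).
config = ocio.Config.CreateFromFile("aces_1.2/config.ocio")

# Each camera gets its own Input Transform (IDT) into ACES2065-1;
# the colorspace names below are config-dependent examples.
proc = config.getProcessor("Input - ARRI - V3 LogC (EI800) - Wide Gamut",
                           "ACES - ACES2065-1")
cpu = proc.getDefaultCPUProcessor()

# A single LogC-encoded RGB sample, converted into ACES2065-1.
aces_rgb = cpu.applyRGB([0.5, 0.4, 0.3])
```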

Roger Deakins
12 months ago

The de-bayering was done by EFILM, not in the camera. We did not use ACES as we have a proprietary workflow.

Kinora
12 months ago

How do you address archival concerns on the film productions you work on? Is this something outside of your ability to specify or influence? I ask in relation to the ongoing development of new color spaces and the conundrum of digital storage uncertainty. Do you advocate for film-out masters for archival purposes and, if so, how do you ensure your look is recaptured in any future release IF these elements have to be used?

JZell
11 months ago

For the digital content we keep and archive the camera RAW, since new debayer algorithms will most likely give us better results in the future.
For film productions we keep the original camera negative and the print LUT. The color-corrected master gets filmed out as 3-strip YCM on three separate black-and-white film strips. Statistically the color NEG will last just over 100 years, while the black-and-white stock might last 400 years ...
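
As a rough sketch of what those separations amount to, assuming a float RGB master held in a numpy array: each black-and-white strip carries one color record, conventionally named for the dye that record prints through when the strips are recombined.

```python
import numpy as np

def ycm_separations(rgb_master):
    """Split an RGB master (H, W, 3) into three monochrome records.

    The cyan record holds the red channel, magenta the green, and
    yellow the blue, since each dye controls its complementary light.
    """
    return {
        "cyan":    rgb_master[..., 0],  # red record
        "magenta": rgb_master[..., 1],  # green record
        "yellow":  rgb_master[..., 2],  # blue record
    }
```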

bangsunei
12 months ago

If I understood the episode correctly, you guys still do your own de-bayering on all your features, and not only before onboard ArriRaw was introduced? Is it correctly understood that this is not the norm even in high-end Hollywood productions?

I know that the "idea" behind a RAW signal is to have the signal before any de-bayering/de-mosaicing is done, but I truly thought that the signal from Alexa cameras (and all other high-end digital cameras) had the de-bayering baked into their proprietary file formats like ArriRaw.

Is it really possible for anyone (or a mathematical genius) to de-bayer the signal with their own algorithms, or did the people at EFILM do some ingenious reverse engineering and software programming to get it out of the metadata?

Also, how does the process work? Since you have your own proprietary workflow, do you have your own software programs to de-bayer the raw signal before handing it over to an editor in an NLE or a grader in filmbase or Resolve? And does this create problems that wouldn't otherwise occur using Arri's default de-bayering?

Very last question, and sorry for the long post, but this episode simply blew my mind! What are the benefits, in your opinion, of doing this? Certainly doing your own de-bayering, changing the RGB values of the pixels, must change the actual "colour" of the picture. What are the benefits of this compared to, say, doing it only with a monitoring LUT? Do you do it to have an image with more leeway for VFX and DI work?

Thank you for a great episode!

bangsunei
12 months ago

I read up a little more and it definitely seems that Steve Yedlin does his own debayering as well, which JZ also mentions on the pod. If anyone is interested in this kind of thing I would sincerely recommend his webpage https://www.yedlin.net/index.html — watch the resolution and display prep videos and read his short article about colour science. The way he has managed to emulate a film print digitally, and with multiple cameras, is astonishing and seriously inspiring.

JZell
9 months ago

You don't necessarily have to invent your own debayer algorithm. Often "just" comparing the different debayer options from different vendors will already give you a good way to find the right solution for your movie. And it is not only the debayer version, it is the custom sharpness setting as well. And if you want to really get fancy you can choose different sharpening settings for RED, GREEN and BLUE.
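
A minimal sketch of that last idea, per-channel sharpening, using a plain unsharp mask in numpy/scipy; the strengths and blur radius here are made-up values, not anyone's actual settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_per_channel(rgb, amounts=(0.6, 0.4, 0.2), sigma=1.0):
    """Unsharp-mask each channel of a float (H, W, 3) image separately.

    amounts are hypothetical strengths for R, G and B; a real pipeline
    would pick these by eye against the chosen debayer.
    """
    out = np.empty_like(rgb)
    for c, amount in enumerate(amounts):
        chan = rgb[..., c]
        blurred = gaussian_filter(chan, sigma=sigma)
        out[..., c] = chan + amount * (chan - blurred)  # unsharp mask
    return np.clip(out, 0.0, 1.0)
```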

K.Wasley
11 months ago

It would be great to be able to get some information from JZ on 'look' LUT creation. What tools were used to create Roger's LUT? When Roger's LUT was created, what was the aim for the 'look'? What were the targets for the look? For example, was it Kodak film stock / print targets or data that were used to form the basis for the look?

What are the characteristics of colour hues and luminance and contrast within hues? Do they shift much from what is typical of the Alexa's native colour science? Where does JZ aim to set 18% grey for the Alexa? E.g. does the LUT have a mid-grey point lower than LogC's typical 416 or 0.41 in order to build in a bit of extra exposure? Is the contrast curve more reminiscent of a film print type curve, e.g. more contrasty than K1S1?

What methods does JZ use to assess interpolation between data points in order to have smooth transitions? Any further information digging deeper into look creation that helps DOPs in creating their own looks would be much appreciated.

JZell
11 months ago

Remember, in the podcast we talked about how we first met on True Grit. For True Grit it was our job to create a digital viewing device which matched the projected film print. To get there we used an EFILM/ARRI proprietary NEG and print film analyzer, analyzing the True Grit NEG & print stock developed at the Deluxe lab. We combined this print film pre-visualization with a custom P3-style output LUT to match what the monitor could do at that time. For the following ALEXA shows we "just" replaced the NEG component of this 3D LUT with a "LogC Alexa Wide Gamut" component.

For 3D LUT processing we like to use tools which can perform a tetrahedral interpolation between data points. In a controlled environment, the 18% gray card should measure 0.18, 0.18, 0.18 in linear light. That is true if frame rate, shutter angle, ASA settings and T-stops are in sync with the light meter reading, while on set the DP of course has the freedom to under- or overexpose to his or her liking.

The key to success is to always build 4 LUT versions in one go. They all have the same look, but one is for Rec709 monitoring, one for sRGB, one for P3 and one for Rec2020 PQ HDR monitoring.
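
For anyone curious, tetrahedral interpolation splits each lattice cube of the 3D LUT into six tetrahedra and blends the four corners of whichever one the input falls in, which avoids some of the softening trilinear interpolation can introduce. A minimal single-pixel sketch in numpy, assuming the LUT is stored as an (N, N, N, 3) array indexed [r][g][b]:

```python
import numpy as np

def apply_lut_tetrahedral(lut, rgb):
    """Sample a 3D LUT with tetrahedral interpolation.

    lut: array of shape (N, N, N, 3), indexed [r][g][b].
    rgb: input triplet with components in [0, 1].
    """
    n = lut.shape[0]
    p = np.clip(np.asarray(rgb, float), 0.0, 1.0) * (n - 1)
    i = np.minimum(p.astype(int), n - 2)  # lower lattice corner
    fr, fg, fb = p - i                    # fractional position in the cube

    def c(dr, dg, db):                    # fetch one cube corner
        return lut[i[0] + dr, i[1] + dg, i[2] + db]

    # Pick one of six tetrahedra based on the ordering of the fractions.
    if fr > fg:
        if fg > fb:      # r > g > b
            out = c(0,0,0) + fr*(c(1,0,0)-c(0,0,0)) + fg*(c(1,1,0)-c(1,0,0)) + fb*(c(1,1,1)-c(1,1,0))
        elif fr > fb:    # r > b >= g
            out = c(0,0,0) + fr*(c(1,0,0)-c(0,0,0)) + fb*(c(1,0,1)-c(1,0,0)) + fg*(c(1,1,1)-c(1,0,1))
        else:            # b >= r > g
            out = c(0,0,0) + fb*(c(0,0,1)-c(0,0,0)) + fr*(c(1,0,1)-c(0,0,1)) + fg*(c(1,1,1)-c(1,0,1))
    else:
        if fb > fg:      # b > g >= r
            out = c(0,0,0) + fb*(c(0,0,1)-c(0,0,0)) + fg*(c(0,1,1)-c(0,0,1)) + fr*(c(1,1,1)-c(0,1,1))
        elif fb > fr:    # g >= b > r
            out = c(0,0,0) + fg*(c(0,1,0)-c(0,0,0)) + fb*(c(0,1,1)-c(0,1,0)) + fr*(c(1,1,1)-c(0,1,1))
        else:            # g >= r >= b
            out = c(0,0,0) + fg*(c(0,1,0)-c(0,0,0)) + fr*(c(1,1,0)-c(0,1,0)) + fb*(c(1,1,1)-c(1,1,0))
    return out
```

A handy sanity check: applied to an identity LUT, this returns the input unchanged.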

K.Wasley
10 months ago

I'm sorry JZ, I didn't have notifications switched on so I did not see that you had replied. Thank you so much for answering those questions and expanding on the art and science of look LUT development; it's much appreciated.

I assume the print stock that you modelled for True Grit was 2383? When you removed the NEG part and replaced it with LogC Alexa Wide Gamut, was there any modelling of the NEG, or the film negative's response in general, applied to the Alexa LogC component? Being digital, Alexa has a pretty different response to light compared with film, being an additive process rather than subtractive. Was any of the subtractive character inherent in film, the deeper luminance of film's colour response for example, applied to the Alexa component, or any adjustment of the print emulation to compensate for this? For example, I notice that when Alexa LogC is pushed through a 2383 LUT you usually get pretty orange greens that feel somewhat unnatural. Did you therefore apply any additional transform to adjust the Alexa component to counter this before combining with the print emulation part of the LUT?

In terms of tools, do you use Nuke for example to plot in the data points, or proprietary software? Do you test the LUT on a Granger rainbow or similar for artefacts, noise etc.? Thanks so much for sharing your knowledge.

JZell
9 months ago

To go from Cineon film log to Alexa Wide Gamut LogC, we mainly "just" change the Cineon log curve to an Alexa LogC curve, and use a matrix to map film scan primaries to Alexa Wide Gamut. That is possible because the Cineon log scan already came from an ARRI pin-registered scanner. EFILM does get matching results when mixing film with Alexa footage.

During the podcast we talked about Steve Yedlin. On Steve's web page you will find an impressive comparison between a film shot and an Alexa-acquired shot. While Steve Y uses Nuke, EFILM used proprietary software to analyze NEG and print and to create LUTs. And back in the day we had to create 3 different print film previz LUTs depending on whether we sent the film for development to Technicolor, Deluxe or Fotokem.
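
A sketch of that curve swap, using the published 10-bit Cineon constants and ARRI's published LogC (V3, EI 800) encoding. The scan-to-Wide-Gamut matrix here is an identity placeholder: the real one depends on the scanner and is proprietary.

```python
import numpy as np

# Published 10-bit Cineon convention: black at code 95, white at code 685,
# 0.002 density per code value, 0.6 gamma for the negative.
def cineon_to_linear(code10):
    offset = 10 ** ((95 - 685) * 0.002 / 0.6)
    lin = 10 ** ((np.asarray(code10, float) - 685) * 0.002 / 0.6)
    return (lin - offset) / (1 - offset)

# ARRI LogC (V3, EI 800) encoding, constants from ARRI's published white paper.
def linear_to_logc(x):
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d, e, f = 0.247190, 0.385537, 5.367655, 0.092809
    x = np.asarray(x, float)
    return np.where(x > cut, c * np.log10(a * x + b) + d, e * x + f)

# Hypothetical 3x3 matrix mapping film-scan primaries to Alexa Wide Gamut;
# identity placeholder standing in for the proprietary scanner matrix.
SCAN_TO_AWG = np.eye(3)

def cineon_to_awg_logc(code10_rgb):
    lin = cineon_to_linear(code10_rgb)      # Cineon log -> scene linear
    return linear_to_logc(SCAN_TO_AWG @ lin)  # remap primaries, re-encode
```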

James
11 months ago

JZ!! Thanks for joining in on the discussion! You know everything so you're a great resource!

James

kuba
10 months ago

Hello! On the podcast, JZ spoke about a relatively cheap consumer monitor calibration tool (~$200, I believe). If possible, I would love to know specifically what tool this is. Also, how would you implement a tool like this in your workflow?

Thanks in advance!

JZell
10 months ago

There is the SpectraCal C6 or the X-Rite i1Display Pro Plus colorimeter. You want to combine it with a software tool which gives you x and y readings. You want to calibrate the white point of your monitor so that it matches your LUT target. If you, for example, use a Rec709 LUT for a D65 white point, you want to adjust your monitor to an x and y reading of x=0.3127 and y=0.3290. The white point numbers are defined by SMPTE standards: SMPTE provides the target numbers for Rec709 D65 or P3 DCI white, and SMPTE also provides the x and y numbers for the target color gamut. To confirm that your Rec709, P3 or Rec2020 color gamut numbers are right, you want to measure the x and y values for red, green and blue.
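
In practice that check reduces to converting each probe reading to CIE xy and comparing it against the published target. A small sketch, assuming the probe reports XYZ; the Rec709/D65 targets are the standard published values, and the tolerance is a made-up number:

```python
# Convert a colorimeter's XYZ reading to CIE xy and compare to a target.

# Published Rec709 chromaticity targets with a D65 white point.
TARGETS = {
    "white (D65)": (0.3127, 0.3290),
    "red":         (0.640, 0.330),
    "green":       (0.300, 0.600),
    "blue":        (0.150, 0.060),
}

def xyz_to_xy(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s

def check(patch, X, Y, Z, tol=0.003):  # tol is a made-up tolerance
    x, y = xyz_to_xy(X, Y, Z)
    tx, ty = TARGETS[patch]
    ok = abs(x - tx) < tol and abs(y - ty) < tol
    print(f"{patch}: x={x:.4f} y={y:.4f} (target {tx}, {ty}) -> "
          f"{'OK' if ok else 'adjust'}")

# Hypothetical white-patch reading from the probe:
check("white (D65)", X=95.05, Y=100.0, Z=108.90)
```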
