What do we really change when we change EI/ISO? (5 replies and 14 comments)
Hello there, I'm really confused about the EI setting on cameras. As somebody said, "changing EI/ISO does not change anything but the distribution of latitude," and as the Arri Log-C chart shows, the higher the EI you choose, the fewer stops of shadow range you have. From this point of view, we should use a lower EI rather than a higher one in a dark scene, because a lower EI gives more dynamic range in the shadows. But in practice, no one shoots that way.
I'm wondering how changing EI works in the camera. Does it happen during A/D conversion, or is it simply digital amplification? Can we really get more detail by choosing a higher EI like 1600, or could we just shoot at EI 800 and lift one stop in post to get the same result?
The screenshots are from FilmmakerIQ.com, John P. Hess.
I do know that real shadow detail comes from giving more exposure. But why do we choose a higher EI for a night scene rather than the lower EI that the reasoning above suggests?
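The "EI only redistributes latitude" idea can be sketched in a few lines. The numbers below are illustrative assumptions, loosely in the spirit of Arri's published latitude chart, not the manufacturer's exact figures: the total range is held fixed, and raising the EI one stop moves one stop of latitude from below mid-gray to above it.

```python
import math

TOTAL_STOPS = 14.0   # assumed total dynamic range (illustrative, not Arri's figure)
BASE_EI = 800        # the commonly recommended Alexa rating
BASE_ABOVE = 7.4     # assumed stops above 18% gray at EI 800 (illustrative)

def latitude_split(ei):
    """Return (stops above mid-gray, stops below) for a given EI.
    Raising the EI one stop shifts one stop of latitude from the
    shadows to the highlights; the total stays the same."""
    shift = math.log2(ei / BASE_EI)
    above = BASE_ABOVE + shift
    below = TOTAL_STOPS - above
    return above, below

for ei in (200, 400, 800, 1600, 3200):
    above, below = latitude_split(ei)
    print(f"EI {ei:4d}: {above:4.1f} stops above mid-gray, {below:4.1f} below")
```

Running this shows the trade-off the original quote describes: at EI 1600 you gain a stop of highlight latitude and lose a stop of shadow latitude relative to EI 800, while the total never changes.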
Just putting in my two cents on this topic; of course, there are other great DPs who can write more and better than I can (such as Mr. Deakins and Mr. Mullen).
The distribution of dynamic range (related to EI or ISO, even if those are slightly different as sensitivity values) differs from sensor to sensor and from brand to brand.
As you write, the Alexa, for instance, has 14+ stops of dynamic range, and how that range is distributed changes with the EI you set on the shooting day; this may depend on the A/D conversion in the processor (but maybe I'm wrong). I think it's a good habit to test a sensor before starting to shoot, just to see how the dynamic range and color rendition change across the ISO/EI scale.
For instance, the BMPCC 6K sensor (which I know) gives its best highlight capability around ISO 1000 (even though the brand claims the first base ISO is 400), preserving more detail above 18% gray than at ISO 200 but letting the blacks crush more quickly. The same sensor keeps more information in the shadows (more capability below 18% gray) around ISO 1250 (but not above 3200), letting the highlights blow out more quickly.
For this reason, I think it's a good habit to test a camera thoroughly and learn how to use each of its characteristics to get the best result in each situation.
But as I said before, this is only my penny, and like you I await an answer from masters like Mr. Deakins and Mr. Mullen to learn more.
Have a nice day!
Thank you for your kind answer, Mr. MaxA. Indeed, testing is the best way to learn, and practice is the best teacher. I will take that advice! Have a nice day too!
I can't really answer which parts of the signal involve analog amplification before A/D conversion versus digital amplification afterward. Since Arriraw recording only uses the ISO setting as metadata for raw conversion in post, I suspect that when you record ProRes Log-C in-camera instead of raw, the amplification happens after the A/D step. But I could be wrong.
And it's also complicated by the fact that the Alexa sensor uses two A/D paths, high and low gain, for better dynamic range, whereas other cameras use something similar for dual-ISO capability. I don't know at which point the two signals from the Alexa sensor are recombined; I think it involves two 14-bit A/D converters creating one combined 16-bit data stream.
This is all to say that not all camera manufacturers handle amplification the same way.
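The general dual-gain idea can be sketched as a toy model. To be clear, Arri's actual HDR readout and the way the two 14-bit streams become one 16-bit value are proprietary; the function below only illustrates the principle of trusting the clean high-gain read until it clips, then falling back to the scaled-up low-gain read.

```python
def combine_dual_gain(high, low, gain_ratio=4.0, clip=16383):
    """Merge two 14-bit reads of the same photosite into one value on
    a roughly 16-bit scale (a toy model, not Arri's actual method).
    'high' is the high-gain read: clean in the shadows but clips first.
    'low' is the low-gain read: noisier but holds the highlights.
    Below clipping we keep the high-gain value; once it clips we
    switch to the low-gain read, scaled up by the gain ratio."""
    if high < clip:
        return float(high)
    return low * gain_ratio

# A shadow value comes straight from the clean high-gain read:
print(combine_dual_gain(1000, 250))
# A clipped high-gain read falls back to the scaled low-gain read:
print(combine_dual_gain(16383, 10000))
```

With a 4:1 gain ratio, a 14-bit low-gain read scaled up tops out near 65,532, which is roughly how two 14-bit converters can yield a 16-bit combined range.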
Also keep in mind that while in theory you might think that (on the Alexa) using a higher ISO for daytime makes sense, for better highlight information at the expense of shadow information, and a lower ISO for night exteriors, for better shadow information at the expense of highlight information, you have to be careful: a night scene might have very bright highlights in the frame, and a day scene might have very dark shadows in the frame. This is one reason why it is simpler to just stick close to the recommended ISO 800, or in that neighborhood.
As for why night exterior scenes often use higher ISOs, not lower, there are two reasons: (1) often you are balancing your lighting with existing levels of real-world practical lighting that you want to capture for a more naturalistic effect (or you are only using available or practical light); (2) it is often impractical or difficult to light large areas at night for a low ISO setting, or doing so might look artificial.
Your professional answers always reassure me, Mr. David Mullen. Thank you very much.
I read your answer carefully and finally got the point. The really important thing is not to capture all the brightness in the scene, but to use the latitude like a spoon and scoop out what we want most from the scene. Feeling determines exposure in cinematography, like Mr. Deakins using EI 1600 on 1917 to create such a strong feeling of night at the front.
Wishing you a nice day.
David... thanks for this explanation of the stages the light signal passes through (A/D)... I saw your post on FB and read the subsequent comments... all this discussion of how to avoid noise, given the inherent characteristics and limitations of current sensor technology when it comes to dealing with the extremes of dynamic range... (I agree with your advice on exposing so the noise-prone area falls into black.) I wonder if Erik Messerschmidt's use of HDR on Mank and Mindhunter to control the extremes is not a way out of the conundrum... I have a long background in using light meters and dealing with the limitations of color transparency film, both in 16mm and large-format stills... that was one reason I switched to shooting color neg... I am not advocating the hyperreal results of HDR, rather wondering if HDR, or some lighter version of it, might offer a way of dealing with the problems of noise in the deep shadows and burn in the highlights...
Unfortunately, noise and grain are more pronounced on HDR displays due to the increase in brightness (plus the sharpness of UHD/4K), so it doesn't go away. But there are post tools to mitigate grain and noise if necessary.
"Mank" used the monochrome Red sensor so it was inherently faster due to the lack of a Bayer filter.
Have you seen Erik's explanation of his process? There are a couple of videos on ASC where he explains how this enabled him to achieve an effectively greater dynamic range and control the point where the highlights burn out...
I need to read up on the monochrome Red sensor... I shoot stills with a Fuji X camera, so I'm familiar with the Bayer filter (or the lack of one)... Thanks
Here is a quote from an ASC article on Erik's use of HDR...
" Everyone likes to talk about the bright whites in HDR, but I think perhaps the added range in the shadows is more interesting and more important than added range in the highlights. ... I use an HDR reference display when I’m shooting and monitor in Dolby PQ and Rec. 2020. In my mind, it’s a totally different format with different exposure parameters and lighting requirements. The added color gamut is particularly exciting, as I can be very subtle and controlled in my use of color, particularly the secondaries. I also find I use a lot less fill because I can more comfortably expose to the right given the added range in the highlights.... In Rec. 709 video, we often exposed the set in terms of maximizing the dynamic range of the image because the window is so small, so our practicals and windows all ended up at the top of the curve. Now that we have more range, I can put my practicals at [an exposure level that hits] 300-400 nits [on a display] and then put a background window at 800. The audience senses the environment is lit by the practicals while simultaneously recognizing that the background window is substantially brighter. That’s something I never felt we could accomplish in SDR effectively.”
... and I just saw where he used a custom-designed Red with the monochrome Helium sensor... one of these days I will learn to slow down and READ for total comprehension... looks like I have more homework. Gracias, Maestro
The other quotes are his comments on using HDR in Season 2 of Mindhunter...
What you're asking highly depends on the camera system... for most camera systems ('most' being an intentionally vague word), further amplification happens after A/D conversion.
It's also important to note (sometimes people overlook this) that 800 ASA (ISO), 1600, or 400 is just a speed rating; how a camera achieves that speed is entirely up to its design. Note Sony's recent 'EXMOR' CMOS venture, which alters the layering of a standard CMOS sensor, 'moving the photosensitive material above the metal wiring'. Photosite size is another factor that dictates sensitivity before any analogue gain is applied to the base signal (of course, larger photosites with more surface area will be more sensitive). These are simple examples in a complex pipeline, but the latter especially is the most compromised and altered in contemporary sensor design.
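The photosite-size point reduces to simple geometry. This is a deliberately first-order sketch of my own, assuming sensitivity scales with light-gathering area (pitch squared) and ignoring microlenses, fill factor, and quantum efficiency, all of which matter in real sensors:

```python
import math

def stops_gained(pitch_a_um, pitch_b_um):
    """Sensitivity advantage, in stops, of photosite A over photosite B,
    assuming sensitivity scales with light-gathering area (pitch squared).
    A toy first-order model: it ignores microlenses, fill factor, and QE."""
    area_ratio = (pitch_a_um / pitch_b_um) ** 2
    return math.log2(area_ratio)

# Doubling the pitch quadruples the area: two stops more light per site.
print(stops_gained(8.0, 4.0))
```

Under this assumption, an 8-micron photosite gathers four times the light of a 4-micron one, a two-stop head start before any amplification enters the pipeline at all.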
In regards to lifting in post, again it depends on the camera design and its internal pipeline. The most obvious factor is whether you're capturing information in a logarithmic container or an already compressed file. I'm not sure whether boosting the gain internally comes before or after conforming the information from linear to log. If it did come before, hypothetically, you'd be amplifying ahead of the potential for further errors, which is always preferable, since amplifying errors only makes them... bigger.
The way I look at it, each transformation the signal/image goes through leaves room for error (even just rounding errors and so forth). Given that, amplification earlier in the chain is preferable. This is of course incredibly loose and depends highly on the camera system and its internal pipeline (another example being shooting in ARRIRAW versus ProRes 4444).
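The "amplify before the errors, not after" point can be demonstrated with a toy quantizer. The bit depth, signal value, and gain below are all arbitrary choices of mine for illustration; the principle is simply that gain applied after rounding multiplies the rounding error along with the signal:

```python
def quantize(x, bits=10):
    """Round a 0..1 linear value to an integer code (toy quantizer)."""
    levels = (1 << bits) - 1
    return round(x * levels)

def dequantize(code, bits=10):
    levels = (1 << bits) - 1
    return code / levels

signal = 0.01234   # a deep-shadow linear value (arbitrary example)
gain = 8.0         # a three-stop lift

# Gain first, then quantize: only one small rounding error is introduced.
early = dequantize(quantize(signal * gain))

# Quantize first, then gain: the rounding error gets multiplied too.
late = dequantize(quantize(signal)) * gain

err_early = abs(early - signal * gain)
err_late = abs(late - signal * gain)
print(err_early, err_late)   # the late-gain error is larger here
```

For this particular value the late-applied gain produces a much larger error, which is the intuition behind preferring amplification earlier in the chain, before each lossy transformation adds its own rounding.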
"What you're asking highly depends on the camera system... for most camera systems further amplification happens after A/D conversion." - Elaborating on that again: an obvious example of an exception is dual native ISO, where a camera has two analogue outputs from each photosite, one with a high base and one with a low base. That, of course, is amplification before A/D conversion. Then you have ARRI's system, and it would be interesting to know at what point they combine the signals and whether, for example, rating at a higher EI would combine the two differently to allocate more of the high-gain path to the bottom end of the signal. However, in all their webinars they always seem to present their captured signal as, in a sense, a 'package' (for lack of a better term) which they then redistribute when altering gain.
Thank you for your advice, Mr. Gabj3. A very reasonable analysis of camera systems. The Sony FS7 I used at school, for example, works like Arriraw when set to "Cine EI" mode, in that it just records the EI as metadata. I will take your careful advice.
*Different photosite sizes (of course, larger photosites with more surface area will be more sensitive) being another factor that dictates sensitivity before the analogue gain applied to the base signal.
It is a similar principle, yes. The difficulty with Sony's FX6 and FX9 is that it's really frustrating to figure out where amplification takes place in the pipeline of their contemporary camera systems. With internal compensation for various 'features', including the internal ND, noise mitigation (digital and analogue), and some strange compression at the high and low ends of the signal, the behavior is just odd.
They advertise their S-Log3 container curve as similar to that of Cineon, but I found that highlights held just below clipping had strange compression artefacts. I'm no expert in the field, but I'd imagine that if their container were indeed similar to Cineon at the high end, no such compression would take place.