gabj3


Forum Replies Created

Viewing 15 replies - 1 through 15 (of 27 total)
  • in reply to: Continuity with Haze #215789
    gabj3
    Participant

      Lighting is an exact science if you want it to be.

      Maxwell’s equations surrounding electromagnetism can be pretty thrilling!

      However, you hit the nail on the head: we measure everything in photographic stops, where each stop is double or half the energy. Errors are generally negligible; a third of a stop of difference is usually more than acceptable, even though in energy terms that's roughly a 26% change in power (2^(1/3) ≈ 1.26).

      Because of various laws of electromagnetism, angle plays a factor. In cinematography it's typically referred to as 'Lambert's cosine law'; it can also be derived from Maxwell's equations.

      The greater the angle between the light and the normal of the grey card, the more 'light loss' you have; illuminance falls off with the cosine of that angle.
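      If you want to put numbers on it, here's a minimal sketch (my own illustration, not from any manual) of what the cosine law costs you in stops:

      ```python
      import math

      def cosine_loss_stops(angle_deg: float) -> float:
          """Light loss in stops for a card tilted angle_deg off the source,
          per Lambert's cosine law (illuminance ~ cos(theta))."""
          return math.log2(1.0 / math.cos(math.radians(angle_deg)))

      for angle in (0, 15, 30, 45, 60):
          print(f"{angle:2d} deg -> {cosine_loss_stops(angle):.2f} stops lost")
      # 60 degrees off the normal costs a full stop (cos 60 = 0.5).
      ```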

      So, keep the angle normal. I recommend lighting your grey card with the same intensity as your key/shooting stop.

      You can set it up anywhere in the room with the same haze density as the set, e.g. a corner of the set.

      I'd recommend dedicating a small sacrificial light at a fixed distance from the grey card; turn it off while shooting.

      Keep everything identical; the only variation should be the haze density.

      A plug-in to introduce haze is akin to a plug-in to introduce a filter or any other optical variation. It is inherently possible to do so, but it will never be truly accurate.

       

      Infinityvision.tv

      in reply to: Continuity with Haze #215779
      gabj3
      Participant

        The idea is a spot meter and an 18% grey card or a white card: anything with constant, non-specular reflectance.

        Set the card up; if you want to be diligent, take a reading…

        Haze the room and stand XYZ meters away from the grey card (the XYZ number should be replicable and marked).

        Take a reading; that value partly defines the haze density between you and the grey card. If all other variables stay the same (distance, light level, etc), a change in that reading would indicate a change in haze density.
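        If it helps, here's a minimal sketch of that bookkeeping (the readings are hypothetical; the method is as above):

        ```python
        def haze_attenuation_stops(clear_reading: float, hazed_reading: float) -> float:
            """Stop difference between clear-air and hazed spot readings, taken
            off the same grey card at the same distance and light level."""
            return clear_reading - hazed_reading

        # Hypothetical numbers: a clear reading 4.0 stops above some reference
        # dropping to 3.5 stops once hazed = 0.5 stops of attenuation on that path.
        loss = haze_attenuation_stops(4.0, 3.5)
        print(f"haze attenuation: {loss:.2f} stops "
              f"({(1 - 2 ** -loss) * 100:.0f}% of the light lost to the haze)")
        ```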

        Infinityvision.tv

        in reply to: Am I a rubbish cameraman or is it my autism? #215778
        gabj3
        Participant

          I’m autistic.

          I’m an engineer; I design and run the technical department for Top Gear Australia, The Mole, etc.

          I got work by embracing aspects of autism, leaning towards the technical side of camera technology.

          Cinematography is a path of mastery; nobody will ever know enough.

           

          Infinityvision.tv

          in reply to: The rise of A.I. #215630
          gabj3
          Participant

            Nothing will ever replace the human mind's utterly sporadic and unpredictable nature.

            I’m an engineer; I use AI and am not scared of it replacing us.

            What makes film production so great is the collaboration of many minds and many people, and its unique, unpredictable nature.

            What it replaces is people who shoot pretty pictures.

             

            Infinityvision.tv

            in reply to: split tone look- how much to get in camera? #215424
            gabj3
            Participant

              To explain why it will cause you artefacts:

              We spend much of our time keeping the green channel of any image acquisition as noise-free as possible. This is because, photopically, human vision is most sensitive to the green part of the spectrum.

              In all WB gains for the Alexa line cameras (and most), no gain is ever applied to the green channel.

              To add green in post-production, you would have to gain up the green channel; depending on how much, this would introduce visible noise.
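              As a rough sketch of why (the gain values below are invented for illustration):

              ```python
              # White-balance gains typically pin green at 1.0 and scale R and B;
              # adding green in post means gaining the channel we try hardest to keep clean.
              push_green = {"R": 1.0, "G": 1.5, "B": 1.0}   # hypothetical post 'add green'

              def apply_gains(pixel, gains):
                  return {ch: round(pixel[ch] * gains[ch], 4) for ch in pixel}

              mean  = {"R": 0.18, "G": 0.18, "B": 0.18}
              noisy = {"R": 0.18, "G": 0.19, "B": 0.18}     # a pixel with +0.01 green noise

              print(apply_gains(mean, push_green), apply_gains(noisy, push_green))
              # The green deviation grows from 0.01 to 0.015: gained-up, visible noise.
              ```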

              That being said, consider what you're trying to achieve. Even if you hold the view that a camera is just data collection and a large part of authorship comes in display preparation and post-production, you still need two different hues in camera to make the selects easy.

              Creating a lighting effect in post is painful to make look realistic when you can just light it.

              Post is great at de-emphasising, emphasising, or stylising an effect.

               

              Infinityvision.tv

              in reply to: Multi camera coverage disadvantages #215310
              gabj3
              Participant

                The more cameras you introduce, the more compromises you make.

                Generally, you add more cameras when you can’t retake or want to save time.

                For example, on Top Gear, we use 7 ENG cams in the studio, as it’s a presenter-led show based on actuality.

                 

                 

                Infinityvision.tv

                in reply to: Lighting for dark sequences #215276
                gabj3
                Participant

                  Hello,

                  It depends on the creative intent you’re trying to achieve.

                  In a photographic sense, noise is the deviation from the mean signal.

                  Noise is inherent to any sampling of information. In the case of imaging, the electromagnetic activity of the sensor impacts every value it creates: interference from the reset line (shutter) and thermal noise, among many other causes, add variation in charge as the analogue signal is amplified and read out from the sensor.

                  It's important to note there are multiple stages of analogue amplification; with an APS sensor, every photosite has an amplifier, and the base of every column of photosites (pixels) has an amplifier. Amplifiers increase the noise in direct proportion to the amplification of the signal.

                  However, as this amplification happens in the analogue domain, there are effectively infinite steps, so the resulting variation can even read as pleasing, provided it isn't too overwhelming.

                  The final sensitivity after amplification in the analogue realm is considered a camera's 'native ISO'. For dual-native-base cameras, this means the camera has two readouts with different levels of analogue amplification.

                  The FX6 uses gain-adaptive column amplifiers with a high- or low-gain option on the same line; the downside is reduced dynamic range at the higher base.

                  Once your signal is quantized, i.e. sampled digitally in a radiometrically linear fashion, you enter the realm of digital gain, which has a far more disruptive effect.

                  Imagine you have an 18% grey card, and your camera represents it digitally with a mean signal of R 18, G 18, B 18. Now, due to the impacts of sensor architecture, say a stray pixel reads R 21, G 19, B 15. If you then amplified the quantized digital signal linearly by a factor of two, your mean signal would be R 36, G 36, B 36 and the stray pixel would read R 42, G 38, B 30: the linear deviation has doubled because your signal has doubled. SNR is a ratio!

                  However, if you halved your signal (reduced your ISO by a stop), the mean would be R 9, G 9, B 9 and the stray pixel roughly R 10.5, G 9.5, B 7.5; you've halved the noise deviation.
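                  The same arithmetic as a sketch, so you can watch the deviations scale with the gain:

                  ```python
                  mean  = (18, 18, 18)   # the 18% grey card from the example above
                  stray = (21, 19, 15)   # the astray pixel

                  def digital_gain(px, k):
                      """Linear digital gain: every code value, and hence every
                      deviation from the mean, scales by the same factor k."""
                      return tuple(v * k for v in px)

                  for k in (2.0, 0.5):   # +1 stop and -1 stop of digital gain
                      m, s = digital_gain(mean, k), digital_gain(stray, k)
                      print(f"x{k}: mean={m} stray={s} "
                            f"deviation={[b - a for a, b in zip(m, s)]}")
                  # x2.0 doubles the deviations to (6, 2, -6); x0.5 halves them.
                  ```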

                  There you go, that’s noise, that’s it, nothing else to it.

                  It’s lost information; the noise floor is where a camera can’t distinguish further information. If you raise the crap out of digital ISO, like Bradford Young, you lose details in the shadows, which could be artistically pleasant!

                   

                   

                   

                   

                  Infinityvision.tv

                  gabj3
                  Participant

                    I’m unsure exactly what you mean by blue clipping –

                     

                    By that, I assume you mean your B channel has become saturated?

                    Or, potentially, you're highlighting errors in some cameras' OETFs (opto-electronic transfer functions).

                     

                    The former –

                    If your blue channel becomes saturated, it can't register any more information; somewhere along the signal chain there is a saturation point, whether it's the photodiode, the photosite, or the ADC.

                    This means that if you have an intense light with a fuller spectrum, the R and G channels will keep reading higher while your blue channel stays pinned, desaturating the blue in the image. This is akin to RGB clipping.
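                    A little sketch of that desaturation effect (the values are invented; only the clipping behaviour matters):

                    ```python
                    def clip(px, ceiling=1.0):
                        """Simulate a saturation point somewhere in the chain: each
                        channel pins at the ceiling and registers nothing further."""
                        return tuple(min(v, ceiling) for v in px)

                    blue_light = (0.20, 0.30, 0.90)        # a strong blue-ish source
                    for stops in (0.0, 0.5, 1.0, 1.5):     # exposing up in half stops
                        r, g, b = clip(tuple(v * 2 ** stops for v in blue_light))
                        print(f"+{stops} stops -> ({r:.2f}, {g:.2f}, {b:.2f})  B/G = {b / g:.2f}")
                    # Once B pins, R and G keep rising and the B/G ratio falls: the blue
                    # desaturates toward white even though the light hasn't changed colour.
                    ```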

                     

                    The latter –

                    A camera doesn't perceive colour as you do, in that its RAW spectral response produces an image that looks quite different to the eye: significantly more magenta than both what we see and the camera's final output.

                    All cameras have OETFs to transfer from their initial RGB space to XYZ to match our LMS cone functions.

                    This is done with a 3×3 matrix: a simple matrix multiplication adjusts the gain of the RGB triplet for each spectral primary.
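                    For illustration, the whole step is just this (the matrix below is made up for shape only; real matrices are fitted per sensor and illuminant by the manufacturer):

                    ```python
                    M = [[0.6, 0.3, 0.1],   # invented camera-RGB -> XYZ matrix
                         [0.2, 0.7, 0.1],
                         [0.0, 0.1, 0.9]]

                    def camera_rgb_to_xyz(rgb):
                        """One 3x3 matrix multiply per pixel."""
                        return tuple(sum(M[row][i] * rgb[i] for i in range(3))
                                     for row in range(3))

                    print(camera_rgb_to_xyz((0.5, 0.4, 0.3)))   # -> (0.45, 0.41, 0.31)
                    ```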

                    However, this equation is over-determined and error-prone. Hence, some colours are prone to errors and can cause pretty gnarly artefacts; this is especially common in more saturated, far-gamut colours.

                    Each spectral response and OETF matrix is dependent on the camera and manufacturer, so – test test test!

                    Thanks

                    G

                     

                     

                    Infinityvision.tv

                    gabj3
                    Participant

                      Technically, yellow for a blue light.

                      However, when lighting your key, be careful with LED light: an RGB LED doesn't emit any spectral power in the yellow region; it approximates yellow with even output in the red and green channels.

                      It might be worthwhile using gelled tungsten fixtures for a clean key (I would light your key with genuinely yellow light, as you then have the most spectral content in the region you desire).

                       

                       

                       

                      Infinityvision.tv

                      in reply to: Dynamic Range #214993
                      gabj3
                      Participant

                        In a technical sense –

                        Your camera's analogue architecture (the photosites, column amps, and all the circuitry before digital quantization) is radiometrically linear: the signal is proportional to the light.

                        In contrast, your photopic vision perceives light logarithmically, i.e. you perceive each photographic stop as an equal increment of brightness even though each stop is linearly double or half the energy.

                        This is where the digital world of signal processing and your photopic vision do not align. Producing a camera’s analogue sensor architecture in anything other than a linear fashion would be problematic from a sampling and gain perspective.

                        However, for a camera to capture an additional stop of dynamic range, it must have double the internal latitude in quantization and double the physical saturation limit.
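                        Back-of-envelope, since each stop is a doubling:

                        ```python
                        # Each extra stop of dynamic range doubles the ratio between the
                        # saturation limit and the floor, so a linear container needs
                        # roughly one more bit of quantization per stop.
                        for stops in (12, 13, 14, 15):
                            print(f"{stops} stops -> {2 ** stops:>6}:1 linear ratio, "
                                  f"~{stops} bits of linear quantization")
                        ```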

                        So yes, extending a camera’s dynamic range isn’t easy – especially in a fashion that works photographically.

                        Infinityvision.tv

                        in reply to: Uncompressed HD vs Arriraw #204634
                        gabj3
                        Participant

                          Hi David,

                          100%: no gamut is applied, and the Log-C container is applied to the scene-linear RGB values.

                          From my understanding (which could be entirely wrong), Log-C defines the equation that encodes a linear signal into a quasi-log container for ARRI cameras.

                          Log-C doesn't inherently dictate the gamut; it just dictates 'luminance values'. Of course, all of this happens prior to debayer or any kind of chromatic-related OETF.
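                          A sketch of that encode, using the LogC3 (EI 800) constants as published in ARRI's white paper (copied from memory here, so verify against the original before relying on it):

                          ```python
                          import math

                          # ARRI LogC3 encode, EI 800 constants (ARRI LogC white paper).
                          CUT, A, B = 0.010591, 5.555556, 0.052272
                          C, D, E, F = 0.247190, 0.385537, 5.367655, 0.092809

                          def logc3_encode(x: float) -> float:
                              """Scene-linear relative exposure -> LogC3 signal (~0..1)."""
                              if x > CUT:
                                  return C * math.log10(A * x + B) + D   # log segment
                              return E * x + F                           # linear toe

                          print(f"18% grey -> {logc3_encode(0.18):.3f}")  # ~0.391, LogC mid-grey
                          ```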

                          Infinityvision.tv

                          in reply to: Uncompressed HD vs Arriraw #204486
                          gabj3
                          Participant

                            I should note as well that ARRI has its whole 32-bit word and wrapping, but the core information is 16-bit linear Y values compressed with ARRI's LogC container.

                            That’s it, that’s ARRIRAW.

                            Infinityvision.tv

                            in reply to: Uncompressed HD vs Arriraw #204485
                            gabj3
                            Participant

                              To kind of second what was said above –

                              A RAW signal is a radiometrically linear readout of your sensor.

                              In the case of an Alexa, a 16-bit unsigned-integer linear readout.

                              The issue with a linear RAW signal is that human beings don't perceive light linearly but rather logarithmically, and our images are typically encoded to reflect this.

                               

                              As we know, a stop is double or half the amount of energy, and we perceive each stop logarithmically (each stop has the same apparent increment of brightness). A linear encoding, by contrast, allocates values linearly: the brightest stop takes half of all the code values, the second-brightest a quarter, and so on. That's very inefficient, given we perceive all stops equally.
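                              You can see the inefficiency by counting codes per stop in a linear container (16-bit here, as in ARRIRAW):

                              ```python
                              # Counting down from clipping in a 16-bit linear container:
                              # the brightest stop takes half of all code values, the next
                              # a quarter, and so on.
                              total = 2 ** 16
                              for stop in range(1, 6):
                                  codes = total // 2 ** stop
                                  print(f"stop {stop} below clip: {codes:>5} codes "
                                        f"({codes / total:.0%})")
                              # 32768, 16384, 8192, ... while perceptually each stop matters
                              # about the same. Hence the logarithmic container described next.
                              ```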

                               

                              So we compress the linear signal with a logarithmic container. We take the Y value and give it a linear bias to 256 and then apply a log encode that gives 512 values per stop.

                               

                              This logarithmic compression is the only difference in information from an absolute RAW acquisition format (a straight 16-bit linear signal).

                              Infinityvision.tv

                              in reply to: Attenuation of light #200142
                              gabj3
                              Participant

                                You're balancing values; if you want to calculate how larger sources fall off, see here:

                                Gabriel Devereux | SOFT-LIGHT CALCULATOR

                                If it's a small source, treating it as a point source radiating over 4π × distance² is about right.
                                Your sensor can resolve a linear range of light.
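                                A minimal sketch of the point-source case (the soft-light calculator above covers the large-source case):

                                ```python
                                import math

                                def point_source_irradiance(power_w: float, distance_m: float) -> float:
                                    """A small source spreads its power over a sphere of area 4*pi*d^2."""
                                    return power_w / (4 * math.pi * distance_m ** 2)

                                def falloff_stops(d1: float, d2: float) -> float:
                                    """Stops lost moving from distance d1 to d2 (inverse square law)."""
                                    return 2 * math.log2(d2 / d1)   # two stops per doubling of distance

                                print(f"{point_source_irradiance(100, 3):.2f} W/m^2 at 3 m")   # ~0.88
                                print(f"3 m -> 6 m loses {falloff_stops(3, 6):.0f} stops")     # 2 stops
                                ```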

                                Infinityvision.tv

                                gabj3
                                Participant

                                  So you're suggesting shooting daylight under 3200K and correcting it in your NLE?

                                  White balance is gain in your R and B channels (dependent on filtration). In a typical Bayer-CFA signal process, you amplify both the red and blue channels dependent on the white balance; if you're shooting at 3200K, a higher factor of gain is applied to the blue channel, and so on.

                                  Note, this doesn’t happen in the analogue domain but after demosaicing, when shooting non-RAW type codecs like ProRes or DNx.

                                  Adjusting white balance in post is applying another level of gain to your R and B channels. I guess if you like the look, sure, but it’s an unnecessary level of gain and would lead to chromatic noise in some instances.
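                                  A quick sketch of that second gain stage (the gain values are invented; the point is what happens to the deviations):

                                  ```python
                                  # Correcting daylight footage balanced at 3200K means another
                                  # round of R/B gain in the NLE; R and B noise deviations scale
                                  # with every gain stage applied.
                                  def gain_rb(px, r_gain, b_gain):
                                      r, g, b = px
                                      return (r * r_gain, g, b * b_gain)

                                  mean  = (0.18, 0.18, 0.18)
                                  noisy = (0.19, 0.18, 0.17)      # small R/B deviations

                                  m = gain_rb(mean, 0.7, 1.6)     # hypothetical post correction
                                  n = gain_rb(noisy, 0.7, 1.6)
                                  print([round(b - a, 3) for a, b in zip(m, n)])  # [0.007, 0.0, -0.016]
                                  # The blue deviation grows from -0.01 to -0.016: chromatic noise.
                                  ```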

                                   

                                   

                                  Infinityvision.tv
