gabj3


Forum Replies Created

  • gabj3
    Participant

So you’re suggesting shooting daylight under a 3200K white balance and correcting it in your NLE?

White balance is gain in your R and B channels (dependent on filtration). In a typical Bayer CFA-type signal process, you amplify both the red and blue channels depending on the white balance; if you’re shooting at 3200K, a higher factor of gain is applied to the blue channel, and so on and so forth.

      Note, this doesn’t happen in the analogue domain but after demosaicing, when shooting non-RAW type codecs like ProRes or DNx.

      Adjusting white balance in post is applying another level of gain to your R and B channels. I guess if you like the look, sure, but it’s an unnecessary level of gain and would lead to chromatic noise in some instances.
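A minimal sketch of the idea, assuming demosaiced linear RGB and purely illustrative gain values (not any manufacturer’s actual numbers):

```python
import numpy as np

# White balance as per-channel gain on demosaiced, linear RGB.
# The gain factors are illustrative placeholders, not real camera values.
def apply_white_balance(rgb_linear, r_gain, b_gain):
    """Scale R and B; G is conventionally left at 1.0 as the reference."""
    return rgb_linear * np.array([r_gain, 1.0, b_gain])

# At a 3200K white balance the blue channel receives the larger gain factor;
# correcting again in post multiplies these channels a second time.
grey = np.array([0.18, 0.18, 0.18])          # neutral mid grey, linear
print(apply_white_balance(grey, r_gain=1.2, b_gain=2.1))
```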

       

       

      Infinityvision.tv

      in reply to: Raw compressed format – RED patent #176319
      gabj3
      Participant

        To complement the above, I have a brief on DCT compression using more accurate terminology.

Many other image and video formats use DCT-based compression and decompression, such as JPEG and MPEG-2. The main idea behind DCT compression is to divide the image into blocks (most commonly 8 by 8 pixels) which are treated separately. Inside each block, the Discrete Cosine Transform (DCT-II) is applied to obtain a series of coefficients indicating the contribution of each of the different cosine waves that approximate the original image. These coefficients are divided by the corresponding values of a quantisation table and rounded to the nearest integer. Usually, each standard has a series of different tables which dictate the compression level; this step is where most of the lossy compression happens.

Finally, the resulting block is zig-zag scanned, producing long runs of zeros which are then compressed by run-length and Huffman encoding. As the image is converted into Y’CbCr beforehand, compression can be accompanied by chroma subsampling, which reduces the bandwidth used for the chroma components, exploiting the human visual system’s lower acuity for colour differences compared to luminance.
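A minimal sketch of the block transform and quantisation step described above; the flat quantisation table is illustrative rather than any codec’s actual table:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D DCT-II of an 8x8 block (orthonormal)."""
    return dct(dct(block, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

def idct2(coeffs):
    """Inverse 2-D DCT, recovering the block from its coefficients."""
    return idct(idct(coeffs, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

# Illustrative flat quantisation table; real codecs (JPEG, ProRes, etc.)
# use perceptually weighted tables, and this is where the loss happens.
Q = np.full((8, 8), 16.0)

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128   # centre around zero
coeffs = dct2(block)
quantised = np.round(coeffs / Q)             # lossy step: divide and round
reconstructed = idct2(quantised * Q) + 128   # decoder multiplies back and inverts
```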

        Infinityvision.tv

        in reply to: Raw compressed format – RED patent #176317
        gabj3
        Participant

          To nerd,

It’s interesting to look at the different types of image compression that occur at acquisition and what RED RAW actually is/means.

          RED owns the patent for spatial compression in RAW image files; spatial information is image detail, like lines on an MTF chart. It’s the difference in values between pixels.

RED now employs DCT-II (no longer wavelet) compression, the same found in ProRes, JPEG, etc. It converts spatial information into a sum of cosine waves, representing the pixel values of an 8×8 area with a corresponding table of coefficients. Thus it significantly reduces the number of values required for an area of an image.

          Note, RED itself didn’t create any of the compression algorithms used; they applied it to a RAW codec (sampling a single Y value per pixel) and patented it, and, more importantly, enforced said patent.

Now, ARRI, RED, Sony and everybody else with a RAW acquisition method generally applies a level of compression before acquisition; however, it’s not spatial compression but compression of each single pixel value: rather than delivering a 16-bit value per pixel, we deliver a 10–12-bit value per pixel, disregarding unnecessary information – a logarithmic container!
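A rough sketch of the ‘logarithmic container’ idea, using a generic pure-log curve (not LogC, Log3G10 or any real manufacturer curve) to squeeze roughly 16 stops of linear values into 10-bit code values:

```python
import numpy as np

# Generic illustration only: map linear scene values spanning ~16 stops
# into 10-bit code values. Not any manufacturer's actual encoding.
STOPS, BLACK = 16.0, 2.0 ** -10

def to_generic_log(linear):
    norm = (np.log2(np.maximum(linear, BLACK)) - np.log2(BLACK)) / STOPS
    return np.round(np.clip(norm, 0.0, 1.0) * 1023).astype(int)

for value in (0.01, 0.18, 1.0, 8.0):     # deep shadow, mid grey, white, highlight
    print(value, to_generic_log(value))
```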

So, everybody applies compression (unless under extreme circumstances) to the range of values; RED applies compression to the number of values required to create an image, decreasing the bitrate further. The cleverness is, of course, taken from JPEG 2000, where you have wavelet (now DCT) compression applied while still debayering afterwards – a clever but reasonably simple addition on top of the complexity of DCT-II.

Now, of course, as soon as you don’t sample a single Y value per pixel and minorly interpolate values, you’re no longer genuinely infringing on RED’s patent (BMD); ProRes RAW is almost identical to RED RAW (Apple gave RED the DCT-II compression algorithm) and blah blah blah, the rest is well known.

          Now, does it significantly hold back the potential of cinema and, therefore, cinematography? No.

          It’s just spatial compression of a RAW image that reduces bitrate. I’m sure it hurts some independent filmmakers and lowers the possibility of RAW acquisition on some shoots, but they’re by no means limiting the potential of a camera.

          It’s just one of the more talked about scuffles in our industry.

           

          Infinityvision.tv

          in reply to: Avoiding the “digital motion” look #173620
          gabj3
          Participant

To throw a spanner in the works:

            The term ‘motion cadence’ is thrown around too much.

Your digital sensor will inherently perform differently to film. With film you’re dealing with the analogue movement of a negative through a gate, not a bunch of photodiodes being line-reset, and you’re probably seeing a multitude of film artefacts like gate weave and so forth.

However, it’s very important to note that the likelihood of there being a difference in ‘motion cadence’ between digital cameras is minute.

A CMOS sensor is inherently an array of photodiodes read out through MOSFETs; electrons absorb the quanta of energy from photons, which allows charge to bridge the gap across a PN junction – thus giving a voltage proportional to the amount of light that hits each photodiode.

A photodiode is shorted at a set interval, and this is your shutter speed. Even an APS ‘global shutter’ design (adding local capacitors) still has a line reset (the reset running across each horizontal row of diodes/pixels); it just changes the interval at which each line is reset, as the local photosite can hold the voltage in its capacitor.

However, motion – the idea of each pixel receiving light over a period of time – is the same. Adjusting your shutter (reset interval) will change this.
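A rough timing sketch of the line-reset idea, with purely illustrative numbers (not any specific sensor’s readout figures):

```python
# Rolling-shutter timing: each row is reset and read out at a fixed line
# interval, so each row integrates light over a slightly later window.
rows = 2160
line_time_us = 8.0                      # assumed per-line readout time
shutter_angle = 180.0
frame_rate = 24.0

exposure_us = (shutter_angle / 360.0) * (1e6 / frame_rate)   # 180° at 24 fps ≈ 20.8 ms
for row in (0, rows // 2, rows - 1):
    start_us = row * line_time_us       # each row starts integrating later
    print(f"row {row}: exposes {start_us:.0f}-{start_us + exposure_us:.0f} us")
```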

            Infinityvision.tv

            in reply to: The future of HMI’s #170925
            gabj3
            Participant

              Cost and consistency,

Not to sound like a broken record, but apart from single-phosphor white-light LEDs, all LEDs have spiky, discontinuous spectra that limit the gamut of colours resolved underneath them and also make cross-shooting with different formats (different spectral primaries) difficult.

An HMI does inherently have a spiky spectrum and does lack consistency, but it’s tried and tested and we’ve acclimatised to the disparity between fixtures.

If you buy an old decrepit HMI with a new bulb, you know what you’re getting, unlike with an old decrepit LED. There are also changeable bulbs – possible with LEDs, but they require calibration.

Also, I believe in terms of actual power ratio LEDs and HMIs are similar. The difference is more that an HMI bulb emits over 4πr² as a point source and therefore loses total output outside its ‘beam angle’, unlike an LED’s planar emission.
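A quick sketch of that point-source argument, assuming an ideal isotropic emitter: the fraction of total output inside a given full beam angle is just the cone’s solid angle over the full sphere.

```python
import math

# Fraction of an ideal point source's total output that falls inside a given
# full beam angle: solid angle of the cone divided by the 4*pi sphere.
def fraction_in_beam(full_beam_angle_deg):
    half = math.radians(full_beam_angle_deg / 2)
    return (1 - math.cos(half)) / 2

for angle in (30, 60, 90):
    print(f"{angle} deg beam: {fraction_in_beam(angle):.1%} of total output")
```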

Also, heat dissipation for LEDs is still quite tricky, so an 18K or 24K HMI may stay unmatched for a few years to come.

              But, yes, LEDs are taking over… just buy the good ones!

               

               

              Infinityvision.tv

              in reply to: Differences between the ARRI XT Studio and other cameras #170457
              gabj3
              Participant

Keep in mind, for cameras up to the Mini there was significant latency to the EVF – ‘significant’ in our experience being 40 ms or more.

Keep in mind a camera is taking in a signal: a sensor readout (which isn’t instantaneous, even for a global shutter), conversion to digital, matrix/WB/EI maths, the OETF to LogC, and the LogC-to-display-gamma transform – to briefly describe the functions that need to take place for every single pixel, and then for three channels of values, before anything is displayed in the EVF (which itself adds further latency).

In the world of video tap, we define this as cumulative latency. Our links may be two scan lines or two frames of latency, but once you throw in camera processing, display processing and genlock (not applicable in this instance), it’s generally greater.
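A back-of-envelope illustration of cumulative latency; every figure below is an assumed placeholder, not a measured value for any specific camera, link or EVF.

```python
# Cumulative latency as a simple sum of per-stage delays (assumed numbers).
stages_ms = {
    "sensor readout": 10.0,
    "in-camera processing (matrix/WB/EI, OETF, display transform)": 12.0,
    "link (two scan lines to two frames)": 8.0,
    "EVF/display processing": 10.0,
}
total = sum(stages_ms.values())
print(f"cumulative latency ~ {total:.0f} ms")
```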

This introduces severe strain for the operator. With modern advancements in computation, latency has decreased; display technology has also advanced.

                Infinityvision.tv

                in reply to: Changing the Cinematographer’s Exposure Values in Post #170189
                gabj3
                Participant

You can offset a logarithmic gamma similarly to RAW; the camera manufacturer itself just then has no ‘say’ in the redistribution of values.

As you say, when you adjust RAW values in the decoder it’s done in a linear space (keep in mind, in RAW almost nobody still captures linear values; we just re-linearise the signal from its logarithmic container).

You can do the same with a ProRes 4444 – non-‘RAW’ – codec by applying the same mathematical transforms; it’s just a little jankier, as most logarithmic equations have an EI aspect that you cannot adjust, so the signal redistribution is not the same as if it’s done with the manufacturer’s SDK.
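A sketch of that offset, using a generic pure-log curve as a stand-in (not LogC or Log3G10, and ignoring any EI-dependent terms, which is exactly the ‘jankiness’ mentioned): re-linearise, multiply by 2^stops, re-encode.

```python
import numpy as np

# Generic pure-log container (assumed curve, not any manufacturer's).
STOPS, BLACK = 16.0, 2.0 ** -10

def encode(linear):
    return np.clip((np.log2(np.maximum(linear, BLACK)) - np.log2(BLACK)) / STOPS, 0, 1)

def decode(code):
    return 2.0 ** (code * STOPS + np.log2(BLACK))

def exposure_offset(code, stops):
    # Re-linearise, scale by 2^stops, re-encode into the log container.
    return encode(decode(code) * (2.0 ** stops))

print(exposure_offset(encode(0.18), +1.0))   # push mid grey by one stop
```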

                   

However, this does bring up an interesting subject – I recently had a series of cinematographers shoot on REDs and Sonys for the entirety of a show, each claiming one or the other looked better. I told production they were indistinguishable.

We created our show look in the Log3G10 RedWGRGB IPP2 colour pipeline, I matched all the Sonys to Log3G10 RedWGRGB, and once the additional show ‘look’ was applied they were near indistinguishable.

Just map out the transform with both cameras in near-identical settings as a test and make your corrections (I don’t do this in a colour NLE but rather in a compositor). It’ll be as accurate as – if not more accurate than – applying corrections after normalising the signal, and it retains artistic intent.

                   

                  Infinityvision.tv

                  in reply to: Alexa 35 vs Mini LF #170134
                  gabj3
                  Participant

                    I should emphasise –

                    The Alexa 35 has some crazy internal circuitry on an analogue sensor level.

They’ve gone against their previous statements by creating a physically smaller photodiode with greater latitude and higher sensitivity. That isn’t inherently a good thing; it’s just an update to the circuitry of their ALEV chipset.

If they used the same circuitry on a larger photosite they would achieve even greater sensitivity – however, to do so and stick to the 4K mandate, an LF-sized optical block would be required.

I should note, when I talk about circuitry I’m not talking about some magic sauce; it’s generally quite simple physics – the hard part is actually fabricating the silicon.

All CMOS cameras have a photodiode (which builds charge as electrons absorb the quanta of energy from photons), a reset switch (the shutter) that shorts the diode at every assigned interval, a source follower – an amplifier/gate in which the charge generated by the photodiode dictates how much VDD (voltage) passes through the MOSFET transistor, plus a capacitor for ‘global shutter’ cameras – and a column-line readout switch.

Now, that’s all reasonably simple until you have to make it 2.2 micrometres wide and tall and mitigate SNR while dealing with unimaginably small amounts of voltage.
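As a very rough feel for the noise problem, a shot-noise-limited SNR estimate with assumed figures (not ALEV or any other real sensor’s specifications):

```python
import math

# Shot-noise-limited SNR for a small photosite; all numbers are assumptions.
full_well_e = 30000       # assumed full-well capacity in electrons
read_noise_e = 3.0        # assumed read noise in electrons (rms)

signal_e = full_well_e * 0.18                               # mid-grey exposure
snr = signal_e / math.sqrt(signal_e + read_noise_e ** 2)    # shot + read noise
print(f"SNR ~ {20 * math.log10(snr):.1f} dB")
```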

                     

                    Infinityvision.tv

                    in reply to: Alexa 35 vs Mini LF #170133
                    gabj3
                    Participant

                      It won’t make other cameras ‘obsolete’

Setting aside the ‘new colour’ and the ‘new logarithmic container’ – which, by all means, are just a different way of storing values.

Any tristimulus observer – any camera with three channels – can resolve ANY colour. Yes, this does mean that a Canon T3i Rebel and an ALEXA Mini LF Super 35 Pro Colour 65 can register the same range of colours.

The balance is the initial spectral sensitivity of the sensor with respect to the XYZ space / LMS cone-function gamut or, as an alternative, the camera’s ‘Wide Gamut’ space.

However, they’ve changed the size of their photodiodes and, therefore, the spectral sensitivity of the camera. While it’s possible to remap to the original AWG3 (note ARRI’s 65-point LUT that remaps the colour near perfectly), a 3×3 matrix (applied in-camera) will not: as with all 3×3 matrices, it’s error-prone because the fit is inherently overdetermined when we’re trying to match arbitrary spectral light.
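A small sketch of why the 3×3 fit is a compromise: mapping many training patches from camera RGB to target RGB is an overdetermined least-squares problem, so residual error always remains. The data here is random placeholder data, not real chart measurements.

```python
import numpy as np

# Fit a 3x3 matrix mapping camera RGB -> target RGB over many patches.
# With more patches than unknowns the system is overdetermined, so the
# solution minimises, but cannot eliminate, the error.
rng = np.random.default_rng(0)
camera_rgb = rng.random((24, 3))     # e.g. 24 chart patches under one illuminant
target_rgb = rng.random((24, 3))     # their desired values in the working space

M, residuals, rank, _ = np.linalg.lstsq(camera_rgb, target_rgb, rcond=None)
print("fitted 3x3 matrix:\n", M.T)
print("residual error remains:", residuals)
```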

They rebranded it as ‘new and exciting’ rather than ‘sh*t, we can’t match our old gamut anymore’.

So, what is remarkable about the Alexa 35 is the apparent noise floor of −100 dB and, with that, inherently more latitude/dynamic range – for which they needed to create an extended logarithmic container to store the additional values (that’s it).

This is a marvel of internal camera circuitry – and I, for one, am looking forward to taking it apart. However, that doesn’t inherently make other cameras obsolete. Unless you were clipping signal left, right and centre and constantly struggling for exposure, there is not much difference.

                       

                      Infinityvision.tv

                      gabj3
                      Participant

                        I don’t work a lot with film! I have worked with film, but its organic nature is harder to quantify.

“Resulting sensitometric curve of the developed negative” – I’m just going to call that the SRC (spectral response curve) of the negative.

The SRC dictates a few things; its amplification of the G and B channels shows it’s a tungsten-balanced stock (higher gain in the blue channel to compensate for the tungsten illuminant, and I believe the aim is a D65 illuminant).

If you get very tech-y, you can compare the SRC against the CIE standard for a tungsten illuminant, and you’ll be able to gauge its apparent colour shift relative to other illuminants.

However, that is, to quote a Cambridge professor, a ‘shit ton of math’, and it’s easier to just shoot it.

With spiky, discontinuous illuminants like RGB LEDs (or spiky illuminants such as fluorescents, HMIs and so forth), you can ensure the SPD of the illuminant aligns with the SRC of the negative in terms of producing white and mitigating unwanted colour shifts.

Unless a photographic push defies my fundamental understanding of applying gain to channels, it amplifies all RGB values linearly, and the separation between channels grows linearly in a radiometric sense.

Let’s say you have an RGB triplet of 100, 75 and 50 in a linear space. If you push it by a stop, you should have an RGB triplet of 200, 150 and 100 in that linear space; the differences between the three values have doubled accordingly. In a logarithmic camera container the math becomes trickier and is dictated by the linear bias and encoding.
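A quick check of that worked example, assuming a simple log2 view of the container: in linear the differences double, while in log space the same push is just a constant offset.

```python
import numpy as np

# One-stop push: linear values and their differences double, whereas in a
# log2-style container the push appears as a constant +1-stop offset.
linear = np.array([100.0, 75.0, 50.0])
pushed = linear * 2.0
print(pushed, np.diff(pushed))            # [200 150 100], differences doubled

log2_before, log2_after = np.log2(linear), np.log2(pushed)
print(log2_after - log2_before)           # constant +1.0 stop in log space
```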

                        Infinityvision.tv

                        in reply to: Lighting Distance Formulas #169602
                        gabj3
                        Participant

                          The general consensus is –

Buy a light, a diffuser and a reflector, play around with them for a few years (or days), and you’ll get the hang of it.

One alternative is to hire a gaffer with the above experience (preferable for young cinematographers).

The other alternative is to give high-school geometry another crack – your understanding of a camera, power and so forth is far greater in complexity than most of the applicable equations.

It’s just a matter of taking the time, maybe downloading Python, and exploring that aspect of it. In my head that’s the only alternative to real-world practice. Equations are useful because, as you build them, you learn to see patterns in light propagation. Therefore, on set you can make judgement calls based on that knowledge – you don’t need to whip out the calculator unless the time to rig and test greatly outweighs the 10–30 minutes it’d take to calculate.
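As a starter for the ‘download Python’ suggestion, a minimal inverse-square illuminance sketch; the intensity figure is an assumed placeholder, not any fixture’s photometric data.

```python
import math

# Illuminance from a source of known luminous intensity via the inverse
# square law: E = I / d^2. Intensity value below is illustrative only.
def illuminance_lux(intensity_cd, distance_m):
    return intensity_cd / distance_m ** 2

for d in (2.0, 4.0, 8.0):
    print(f"{d} m: {illuminance_lux(40000, d):.0f} lux")   # quarters per doubling
```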

                          G

                          Infinityvision.tv

                          in reply to: Lighting Distance Formulas #169572
                          gabj3
                          Participant

Continuing on this! Regarding how far away your light should be from your reflector: have a play with the calculator – how much of your reflector you illuminate, how much of it is actually used, and so forth.

For that it’s more a matter of looking at the beam angle, drawing an isosceles triangle and seeing what happens.
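The isosceles triangle in code, as a minimal sketch: the diameter of the illuminated circle from the fixture’s full beam angle and the throw distance.

```python
import math

# Diameter of the illuminated circle on a reflector (or wall) from the
# full beam angle and the distance to it: d = 2 * distance * tan(angle / 2).
def spot_diameter_m(full_beam_angle_deg, distance_m):
    return 2 * distance_m * math.tan(math.radians(full_beam_angle_deg / 2))

print(spot_diameter_m(30, 4.0))   # e.g. a 30 degree beam at 4 m -> ~2.1 m circle
```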

                            Infinityvision.tv

                            in reply to: Lighting Distance Formulas #169570
                            gabj3
                            Participant

                              Here is the calculator I wrote on the subject

                              Gabriel Devereux | SOFT-LIGHT CALCULATOR

                              I’d recommend having a bit of a read on a few other topics in this forum.

I find you can work out most behaviours of light with a firm understanding of the inverse square law, basic geometry and a few orthogonal laws (Lambert’s cosine law). As you introduce different levels of reflectivity and specularity in reflectors (very non-Lambertian), it gets increasingly challenging – silver stipple and gold stipple being an ultimate pain – but typical soft reflectors like Ultrabounce and Muslin fall into the realm of Lambertian-‘enough’, and mirrors fall into the world of the exact opposite (just basic geometry).
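A minimal sketch of Lambert’s cosine law, which is why those ‘Lambertian-enough’ reflectors behave so predictably: reflected intensity falls with the cosine of the angle from the surface normal.

```python
import math

# Lambert's cosine law for an ideal diffuse reflector: intensity relative
# to the on-axis (surface-normal) direction falls as cos(angle).
def relative_intensity(angle_from_normal_deg):
    return max(0.0, math.cos(math.radians(angle_from_normal_deg)))

for angle in (0, 30, 60, 85):
    print(f"{angle} deg: {relative_intensity(angle):.2f} of on-axis intensity")
```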

Re the inverse square law: it’s a statement of proportionality, not of absolute magnitude. By that I mean the whole ‘double the distance, quarter the output’ assumes the energy distribution is proportionate to that ratio. In the optical far field and such, the statement applies. Still, the common misconception that Fresnels, ellipsoidal spots and lasers don’t conform to the inverse square law is untrue.

However, one thing I will say: as an industry, as a craft, we have somewhat limited our technical understanding of what we do by hiding behind subjectivity and preference. Some say certain aspects of light calculation are impossible, and so forth.

That’s just categorically untrue. It’s simply that the education around our industry doesn’t reflect this. EMF calculations exist for RF propagation, and the same laws apply.

As an example of my rant – we say one camera has ‘superior’ colour science to another while not understanding the fundamental fact that all of them are approximations of an HVS response, and each manufacturer takes artistic liberty in its CMFs and OETF. The Alexa doesn’t have ‘superior’ colour science to another; it just looks nicer. It may have spectral primaries more indicative of its look relative to RED Monstro’s spectral primaries, but one can achieve the same result – blah, blah.

                               

                              Infinityvision.tv
