Search Results for 'no'

  • #204801
    Stip
    Participant

      When shooting raw I like to lower ISO just a tad bit to get a “thicker” negative, especially in low light scenes, but am generally an advocate of getting it as close as possible to the final look in camera. For example I like Alexa’s noise and night exteriors shot at ISO 1600.

      “I’m wary about my work being judged as not up to snuff if that room for tinkering isn’t there. Perhaps this is a consequence of the level I’m working at currently, and it’s something one has to learn to navigate with collaborators.”

I know what you mean. I often didn’t have a say in post, and it happened a lot that the colorists changed the exposure – and thus the mood – quite distinctly. I think it definitely depends on the scale of the production: the smaller it is, the more tinkering in post, in my experience.

      #204791
      Carl
      Participant

Yes, I think you’ve touched on the key issue here, David, i.e. changing one’s mind.

I would prefer to light and expose for the desired look on set. It’s a way of working that just makes sense to me. I generally stick to the native ISO, where the camera performs its best, and only shift the dynamic range in favour of either end if necessary.

        I’m early in my career (only one film shot) but have noticed that people my age very much like to tinker in post.

        I’m wary about my work being judged as not up to snuff if that room for tinkering isn’t there. Perhaps this is a consequence of the level I’m working at currently, and it’s something one has to learn to navigate with collaborators.

         

        #204780
        dmullenasc
        Participant

If you pick the “correct” exposure for the mood you want, then in theory you wouldn’t be pushing it around in post. That’s the issue: do you want to expose for the look you want… or do you want maximum flexibility to push it around in post, i.e. change your mind?

          It’s possible to split the difference, i.e. play it safer by working at a base ISO with fairly minimal noise so you have some room to adjust without noise becoming too problematic even while exposing for the look you want.

          #204779
          dmullenasc
          Participant

            I got this info from someone at ARRI:

            There are two distinct types of logarithmic data:

            Users see LogC3 (pre-ALEXA 35) or LogC4 (ALEXA 35 and later) RGB data.

            Writers at the device driver level see another logarithmic format, extremely early in the imaging pipeline, that is used to store the image while it is still a photomosaic, i.e. has not yet been debayered into RGB.

            For pre-ALEXA 35 cameras, there is a ‘hidden’ form for 12-bit photomosaic data, and then for each exposure index there is a particular variant of the LogC3 curve (‘gamma’ if you will, though not in the mathematical sense). The LogC3 differences are slight enough that most people don’t even know they exist.

            With the ALEXA 35 there is the same ‘hidden’ form, this time extended for 13-bit photomosaic data, and then there is one (and only one) LogC4 curve. 

            SMPTE RDD 55, which documents ARRIRAW in MXF, lays out the details of both 12- and 13-bit low-level bitstreams.
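For reference, the user-facing LogC3 curve is simple enough to sketch in a few lines. Below is a minimal Python sketch using the constants from ARRI’s published LogC3 formula for EI 800 (quoted here from the white paper as I recall it – verify against ARRI’s current documentation before relying on them):

```python
import math

# LogC3 (v3) constants for EI 800, as published in ARRI's LogC documentation.
# Other exposure indexes use slightly different constants, per the post above.
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def logc3_encode(x: float) -> float:
    """Map a scene-linear value (18% grey = 0.18) to a LogC3 code value in [0, 1]."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F          # linear toe below the cut point

print(logc3_encode(0.18))     # ~0.391, the familiar LogC mid-grey level
print(logc3_encode(0.36))     # ~0.463, one stop brighter
```

The ‘hidden’ 12/13-bit photomosaic format described above is a separate, earlier encoding and is not what this curve describes.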

            #204667
            dmullenasc
            Participant

Log-C gamma is meant to emulate film negative scans, but I think it’s applied when Arriraw is converted / debayered to RGB for color correction; it’s not the 12-bit log storage format of Arriraw itself. ARRI’s own website is not completely clear on this, but I believe we’re talking about two different things: the Log-C used for debayered images and the mathematical log used for data storage in Arriraw.

              #204634
              gabj3
              Participant

                Hi David,

100% – no gamut is applied; the Log-C container is applied to the scene-linear RGB values.

From my understanding (which could be entirely wrong), Log-C defines the equation that encodes a linear signal into a quasi-log container for Arri cameras.

Log-C doesn’t inherently dictate the gamut, just the ‘luminance values’. Of course, all of this happens prior to debayer or any kind of chroma-related OETF.

                Infinityvision.tv
                Gabriel Devereux - Engineer

                #204495
                dmullenasc
                Participant

Arriraw is a 12-bit log recording, but it’s not Log-C; there’s no color space or gamma applied to the image. It’s just data that is converted from 16-bit linear to 12-bit log for storage.
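As a very rough illustration of what a purely mathematical linear-to-log repack looks like (a hypothetical curve for illustration only, not ARRI’s actual one):

```python
import math

BITS_IN, BITS_OUT = 16, 12
MAX_OUT = (1 << BITS_OUT) - 1

def lin16_to_log12(v: int) -> int:
    """Hypothetical log2-based repack of a 16-bit linear sensor value into 12 bits."""
    # log2(1 + v) spreads the ~16 stops of linear range evenly over the 12-bit output.
    return round(math.log2(1 + v) / BITS_IN * MAX_OUT)

def log12_to_lin16(c: int) -> int:
    """Approximate inverse, back to 16-bit linear for debayering and grading."""
    return round(2 ** (c / MAX_OUT * BITS_IN) - 1)

v = 30000
print(lin16_to_log12(v), log12_to_lin16(lin16_to_log12(v)))  # quantization error is tiny relative to a stop
```

The point is only that the storage curve redistributes code values; it isn’t a viewing gamma or a color space.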

                  #204486
                  gabj3
                  Participant

I should note as well that ARRI has its whole 32-bit word and wrapping, but the core information is 16-bit linear Y values compressed with ARRI’s LogC container.

                    That’s it, that’s ARRIRAW.

                    Infinityvision.tv
                    Gabriel Devereux - Engineer

                    #204485
                    gabj3
                    Participant

                      To kind of second what was said above –

A RAW signal is a radiometrically linear readout of your sensor.

In the case of an Alexa, a 16-bit unsigned-integer linear readout.

The issue with a linear raw signal is that human beings don’t perceive light linearly but rather logarithmically, and our images typically reflect this.

                       

As we know, a stop is a doubling or halving of the amount of energy. We perceive each stop logarithmically (each stop looks like the same apparent change in brightness), whereas a linear encoding records light linearly: the brightest stop takes up half of the total code values, the second-brightest stop a quarter, and so on – very inefficient, given that we perceive all stops as equal steps.

                       

So we compress the linear signal with a logarithmic container. We take the Y value, give it a linear bias to 256, and then apply a log encode that gives 512 values per stop.

                       

This logarithmic compression is the only difference in information from an absolute RAW acquisition format (acquiring a 16-bit linear signal).
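To make the code-value arithmetic concrete, here is a small Python sketch (my own illustration, not ARRI’s pipeline) that counts how many codes of a 16-bit linear encoding land in each successive stop below clipping, versus the fixed per-stop budget of a log encode:

```python
def codes_per_stop_linear(bits: int = 16, stops: int = 16) -> list[int]:
    """How many code values of a linear encoding fall inside each stop below clipping."""
    top = (1 << bits) - 1
    counts = []
    for s in range(stops):
        hi = top / (2 ** s)   # upper edge of this stop
        lo = hi / 2           # one stop further down
        counts.append(int(hi - lo))
    return counts

counts = codes_per_stop_linear()
print(counts[:3], "...", counts[-2:])   # [32767, 16383, 8191] ... [2, 1]

# A log encode gives every stop the same budget instead: at the 512 codes per
# stop described above, 16 stops fit in 512 * 16 = 8192 codes, i.e. 13 bits.
print(512 * 16)
```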

                      Infinityvision.tv
                      Gabriel Devereux - Engineer

                      #204447
                      dmullenasc
                      Participant

                        In the early days of the Alexa, uncompressed HD out to a Codex was used because Arriraw wasn’t an option yet. “Game of Thrones” also used that format at first. Uncompressed HD is uncompressed, unlike ProRes, but it’s still a debayered RGB signal so color temp and ISO are baked into a Log-C output like with ProRes. And it’s downsampled to HD resolution.

I also think three uncompressed HD channels are actually more data to handle than Arriraw… leaving the data as a single Bayer-pattern signal is a form of data compression, in that debayering it to RGB triples the amount of data (if keeping the same resolution – in this case, though, the signal is also being downsampled to HD).
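For a back-of-envelope comparison (my assumptions: ARRIRAW 16:9 at 2880×1620 with 12 bits per photosite, versus uncompressed 10-bit 4:4:4 HD RGB – check the real recorder specs):

```python
# Per-frame sizes under the assumptions stated above.
arriraw_bits = 2880 * 1620 * 12          # one Bayer value per photosite
uncomp_hd_bits = 1920 * 1080 * 3 * 10    # three full channels per HD pixel

print(arriraw_bits / 8 / 1e6)     # ~7.0 MB per frame
print(uncomp_hd_bits / 8 / 1e6)   # ~7.8 MB per frame
```

So even though the HD stream is lower resolution, it comes out slightly heavier per frame under these assumptions.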

Today, with the internal ProRes 4444 XQ option, or Arriraw, there’s no reason to record uncompressed HD out to a Codex.

                        #204439
                        LucaManciniLuca
                        Participant

                          Hello Mr. Deakins,
I was reading an ASC article on Skyfall and they mentioned:

“The uncompressed ArriRaw format was also a new development for the Alexa since In Time. Deakins notes, ‘We were told Skyfall would get an Imax release, so how our images would look on that giant screen was a big consideration. I shot tests comparing uncompressed HD, which I used on In Time, and ArriRaw, and we blew those images up and watched them on an Imax screen at Swiss Cottage. It was quite startling; both images looked pretty damned good, but the ArriRaw had a definite advantage.’”

                           

What was the particular advantage you saw in ArriRaw, considering that both are lossless formats, I believe?

                           

                          Thanks

                           

                          #204430
                          andrewtrost
                          Participant

Curious what everyone here thinks of the EL Zone system that Ed Lachman created a few years ago – it seems applicable to this thread!

                            https://www.newsshooter.com/2023/03/06/el-zone-exposure-system-how-does-it-work-and-how-do-you-use-it/

                            I haven’t used it on a project yet, but recently attended an event where Ed explained his reasoning behind creating it and how he personally uses it.  I think it definitely has a place as a useful tool that could/should replace false color.

                            Ed is currently trying to get manufacturers like ARRI and Sony to add it directly to their camera systems, seems like a no-brainer to me!
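For anyone who hasn’t clicked through: as I understand it from the article, the core idea is labelling each tone by how many stops it sits from 18% grey, roughly like this sketch (my own simplification of the concept, not Ed’s actual implementation):

```python
import math

MID_GREY = 0.18   # scene-linear 18% grey

def el_zone_stops(linear_value: float) -> int:
    """Whole stops above (+) or below (-) middle grey for a scene-linear value.
    The real EL Zone display assigns a fixed color per stop; this just returns the count."""
    return round(math.log2(linear_value / MID_GREY))

print(el_zone_stops(0.18))    # 0  -> middle grey
print(el_zone_stops(0.72))    # +2 -> two stops over
print(el_zone_stops(0.045))   # -2 -> two stops under
```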

                            dmullenasc
                            Participant

“Lately I’ve been pondering the reverse of your idea. What if I set my digital cinema camera to 3200K and then use a #85 or #85B as filtration? I was wondering if it might have a pleasing effect on skin tones or other ‘warmer’ elements of the frame. Something I will test soon.”

I think mainly you’ll just find that your blue channel gets noisier, which is how a digital camera makes a raw conversion to RGB come out at 3200K. There may be a subtle difference in the color values due to the dyes of the optical filter… but the question is whether that could just as easily be created with minimal color correction.
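A toy sketch of why that happens (hypothetical gain numbers, not ARRI’s actual processing): white-balancing raw for 3200K essentially multiplies the blue photosites by a larger gain than the others, and the noise is multiplied right along with the signal.

```python
import random
import statistics

# Made-up per-channel white-balance gains for a tungsten (3200K) conversion.
GAINS_3200K = {"R": 1.3, "G": 1.0, "B": 2.4}

random.seed(0)
noise = [random.gauss(0.0, 0.01) for _ in range(10_000)]   # identical sensor noise per channel

for ch, gain in GAINS_3200K.items():
    balanced = [(0.18 + n) * gain for n in noise]          # mid-grey patch, gained up
    print(ch, round(statistics.stdev(balanced), 4))        # blue ends up ~2.4x noisier than green
```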

                              Stip
                              Participant

I did a similar quick test once with a blue filter to see if it’d help with day-to-night conversion, shooting raw so I could change the white balance in post in a visually lossless way.

                                I couldn’t see a difference between using the blue filter and using no filter but turning WB to a higher Kelvin in post.

Modern cine cams are so good at balancing color temperature – it would be interesting to know if you’ll see a difference at all with that test.

                                #203765

                                In reply to: Bathtube Lighting

                                Al Duffield
                                Participant

Tough one without knowing the layout of the room or what “romantic” looks like in this film’s visual language.

Classic romantic tropes are long lenses, soft lighting, shallow focus, creamy images, bokeh, candlelight, rose petals, etc. – the list goes on.

I think we need a lot more information to give you any specific input, but generally speaking I suggest you discuss with the director what “romantic” feels like, and use that to work out how it should look. Then it’s just a case of execution 🙂
