dmullenasc


Forum Replies Created

Viewing 15 replies - 226 through 240 (of 303 total)
in reply to: editor modifying composition in post #214494
dmullenasc
Participant

For an editor to change the cinematographer’s work without a very good reason shows a lack of respect — a cinematographer wouldn’t come into the editing room and start changing cuts, after all. Sometimes there is a good reason, though — but an editor would usually tell the cinematographer what was going on and why they had to crop into the shot, etc.

I only once worked with an editor who routinely recomposed and even re-colored footage I shot, which was crazy. But the movie was not very good and the director was crazy, so I just let them do what they wanted to the footage. But it was really annoying — the editor only wanted to use close-ups, so they took all the medium and wide shots and zoomed into them to create even more close-ups… and then told me I wasn’t shooting enough close-ups!

in reply to: ENR process with Digital ? #214123
dmullenasc
Participant

The Varicon was a redesign of the original Lightflex, which flashed the image with white or colored light in front of the lens — sort of a controlled and even veil of light. The Panaflasher sat on the unused mag port of the Panaflex (there were two mag ports) and fogged the film inside the camera body as it ran through.

in reply to: ENR process with Digital ? #214066
dmullenasc
Participant

The Varicon and Panaflasher were ways of flashing the negative while filming. This lifted the blacks, which softened the colors and lowered the contrast. There was some minimal increase in shadow detail (it’s always hard to separate any increase in actual shadow information from the lifted blacks simply making the detail more apparent).
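The effect described above — a uniform exposure added in linear light lifting shadows far more than highlights — can be sketched with a toy model. This is purely illustrative; the flash amount and scene values are arbitrary numbers, not measurements of any real Varicon or Panaflasher setting.

```python
import math

# Toy model of flashing: a small uniform exposure ("flash") is added to
# every pixel in linear light before taking the log exposure, so deep
# shadows lift substantially while highlights barely move.
def log_exposure(scene_lin, flash=0.0):
    return math.log10(scene_lin + flash)

flash = 0.02  # arbitrary illustrative flash level
shadow_lift = log_exposure(0.005, flash) - log_exposure(0.005)
highlight_lift = log_exposure(0.5, flash) - log_exposure(0.5)
# shadow_lift is roughly 40x larger than highlight_lift: blacks rise
# and overall contrast drops, but highlights are nearly untouched.
```

That asymmetry is why flashing reads as lifted blacks and lowered contrast rather than an overall brightening.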

in reply to: ENR process with Digital ? #213015
dmullenasc
Participant

Keep in mind that ENR was a print process – it involved adding b&w developer tanks to the ECP print processing line so that after the bleach step was skipped, a percentage of silver could be permanently developed and left in the print. For ECN processing (camera negative, intermediates) your only option was skip bleach or partial skip bleach.

The result when silver was left in a print was: (1) deeper blacks than the D-Max of the print stock, (2) less shadow detail due to increased contrast in the shadows, (3) darker colors that were somewhat desaturated from the addition of black silver to the dyes, (4) increased graininess in the print due to silver grains being added to the color dye clouds.

So this is really a color-correction for display / release format issue, not a sensor issue.

Sure, you can increase the contrast of the shadows and decrease the saturation using a LUT or basic DIT adjustments using ASC-CDL values for the display, the dailies, etc. As for the increase in graininess, you either have to live with just using a higher ISO for noise as a grain substitute — or use film grain software in post, but then it would probably be added in the final color-correction, not in dailies or seen on set.
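A CDL-style version of that shadow-contrast-plus-desaturation move can be sketched as below. The slope/offset/power and saturation operators are the standard ASC-CDL ones; the specific numbers (power 1.3, saturation 0.8) are illustrative guesses, not a published ENR emulation.

```python
# Minimal sketch of an ASC-CDL-style grade approximating an ENR look:
# a power > 1 deepens shadows, and a saturation < 1 mutes color.

def cdl(rgb, slope=1.0, offset=0.0, power=1.3):
    # ASC-CDL per channel: out = clamp(in * slope + offset) ** power
    return tuple(max(0.0, min(1.0, c * slope + offset)) ** power for c in rgb)

def desaturate(rgb, sat=0.8):
    # Rec. 709 luma weights, as used by the ASC-CDL saturation operator
    luma = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    return tuple(luma + sat * (c - luma) for c in rgb)

graded = desaturate(cdl((0.5, 0.3, 0.2)))
# every channel ends up darker and closer to the luma than the input
```

Since CDL values travel with dailies metadata, a look like this can be seen on set and carried through to the final grade.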

The last item — the increase in black levels beyond what is normally possible for a 35mm release print — is difficult to control; your only hope is to get the movie shown using laser projection (which is becoming more commonplace) and accept that black level as a base.

Silver retention done to camera negative stocks had a somewhat different look than ENR. Leaving silver in a negative increases density and contrast in the highlights, not the shadows, since that’s where the silver is mostly formed. So highlights get hotter rather than shadows getting blacker. Color is desaturated from the addition of black silver. Graininess is heavier than when done to the print for the simple fact that grains in camera negative stocks are larger than on intermediate and print stocks, since those have a very low ASA.

in reply to: A reflection of our profession. #212384
dmullenasc
Participant

You develop your own personal language based on your own emotional responses to light and color. I recall interviews with both John Boorman and Ingmar Bergman stating that they found the bright sunshine of California to be “oppressive” (of course, they were also probably channeling their feelings toward Hollywood studios). Boorman said he found cold colors like blue to be “relaxing.” So there are limits to ascribing universal values to things like light and color.

Also keep in mind that even if your guide is the script, it’s not always necessary to mimic the emotions of a scene visually — what in literature is called “pathetic fallacy.” Sometimes the visuals can be a counterpoint to the emotions of a scene, like when a character gets bad news on a perfect spring day surrounded by nature.

in reply to: Education on the more expensive things. #212157
dmullenasc
Participant

When I was coming up, often my 1st AC, my Key Grip, and my Gaffer were more experienced than I was, so I could talk to them about specialized equipment like remote heads, telescoping cranes, etc. I also went to trade shows, read trade magazines, visited rental houses, and read the specs on websites, owner’s manuals, etc.

dmullenasc
Participant

35mm trailers were prepared in both 1.85 “flat” and 2.39 “scope” to match the feature presentation so that the projectionist would not have to change lenses and masks.

So clearly in this case the studio marketing division had access to an “open gate” D.I. master to make the 1.85 trailer (and any 16×9 unletterboxed versions needed). Of course, they should have made a 1.85 flat trailer with a 2.39 matte.

in reply to: Artemis software use #207593
dmullenasc
Participant

I use weather apps for the weather. I only use Artemis as a lens preview or a lens finder that can record video or stills. I’m not aware of other features like drawing lighting diagrams — that seems like something that would be done in other apps.

in reply to: Artemis software use #207592
dmullenasc
Participant

in reply to: Artemis software use #207511
dmullenasc
Participant

in reply to: Artemis software use #207396
dmullenasc
Participant

I use it as a lens finder. The problem with normal lens finders is that you line up the shot and then hand it to the director or camera operator to see, but you don’t know if they are really framing it the same way. We use the iPhone Artemis on scouts or in prep to figure out the lens choice, but on the set we use the Artemis Pro, which allows us to mount our actual lens onto an iPad. This way we can line up the shot with the director watching and record the blocking rehearsal. We can store video or still frames; if done earlier with stand-ins, these can be used for a storyboard.

dmullenasc
Participant

She has the more direct eyeline because she is the main character; later in the movie, he will have the more direct eyeline.

in reply to: Is there such a thing as ‘correct’ exposure? #205617
dmullenasc
Participant

I think Gordon Willis (or maybe it was Conrad Hall) once said that there was nothing wrong with working on the edge… as long as you are consistent enough not to fall over that edge. For example, maybe in your testing you find that you can underexpose everything by 3 stops for a look and technically you are fine with the quality… but if you go a 1/2-stop too far, you fall off the cliff, so to speak. So when working at the more extreme ISO settings, you have to understand your reduced margin for correcting errors.

in reply to: Is there such a thing as ‘correct’ exposure? #204780
dmullenasc
Participant

If you pick the “correct” exposure for the mood you want, then in theory you wouldn’t be pushing it around in post. That’s the issue: do you want to expose for the look you want… or do you want maximum flexibility to push it around in post, i.e. change your mind?

It’s possible to split the difference, i.e. play it safer by working at a base ISO with fairly minimal noise so you have some room to adjust without noise becoming too problematic, even while exposing for the look you want.

in reply to: Uncompressed HD vs Arriraw #204779
dmullenasc
Participant

I got this info from someone at ARRI:

There are two distinct types of logarithmic data:

Users see LogC3 (pre-ALEXA 35) or LogC4 (ALEXA 35 and later) RGB data.

Writers at the device-driver level see another logarithmic format, extremely early in the imaging pipeline, that is used to store the image while it is still a photomosaic, i.e. has not yet been debayered into RGB.

For pre-ALEXA 35 cameras, there is a ‘hidden’ form for 12-bit photomosaic data, and then for each exposure index there is a particular variant of the LogC3 curve (‘gamma’ if you will, though not in the mathematical sense). The LogC3 differences are slight enough that most people don’t even know they exist.

With the ALEXA 35 there is the same ‘hidden’ form, this time extended for 13-bit photomosaic data, and then there is one (and only one) LogC4 curve.

SMPTE RDD 55, which documents ARRIRAW in MXF, lays out the details of both 12- and 13-bit low-level bitstreams.
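For reference, the user-facing LogC3 curve mentioned above is documented publicly. A sketch of the encode at EI 800, using the constants from ARRI’s published LogC3 description (other exposure indices use slightly different constants, which is the per-EI variation described above):

```python
import math

# ALEXA LogC3 encode at EI 800 (constants from ARRI's published LogC
# curve description; each exposure index has its own slight variant).
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def logc3_encode(x):
    # x is relative scene exposure in linear light (0.18 = mid grey).
    # Logarithmic segment above the cut, linear toe below it.
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F

# 18% grey encodes to roughly 0.391 — the familiar LogC mid-grey value.
```

The ‘hidden’ 12- and 13-bit photomosaic encodings are a separate matter, specified in SMPTE RDD 55 rather than in the LogC white papers.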
