Forum Replies Created
Here is a recent photo I took at twilight on a cloudy day. I shot in raw so I could process the photo to any color temperature setting. The thing to remember is that the level of blue is a creative choice. Also, in real twilight, the light gets more blue minute by minute, and how clear the sky is will affect the color.
Traditionally, in older movies, day-for-night or dusk-for-night was achieved by using 3200K tungsten-balanced film and removing the 85 correction filter. But keep in mind that when the movie was color timed later, the degree of blue was often adjusted to taste. Also, the blue will look more blue if there is something lit with warm tungsten light in the shot.
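For anyone curious to try the digital equivalent, here is a minimal sketch of that kind of grade, assuming a float RGB image handled with NumPy; the function name and the gain/exposure values are illustrative stand-ins for the blue cast of uncorrected tungsten-balanced film, not calibrated figures, and as noted above the degree of blue is ultimately a taste call:

```python
import numpy as np

def day_for_night(img, red_gain=0.65, blue_gain=1.35, exposure=0.45):
    """Crude day-for-night grade (illustrative values, adjust to taste).

    img: RGB array of floats in [0, 1]. Suppressing red and boosting
    blue mimics tungsten-balanced film shot without an 85 correction
    filter; pulling exposure down suggests night.
    """
    out = img.astype(np.float64).copy()
    out[..., 0] *= red_gain   # cool the image by suppressing red
    out[..., 2] *= blue_gain  # and boosting blue
    out *= exposure           # underexpose for a night feel
    return np.clip(out, 0.0, 1.0)
```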
If a drop costs money, what about the hours of a VFX person working on compositing an image into the window? At minimum, you could blur the view further with some 1/4 Hampshire Frost behind the window (or a stretched scrim) and then put anything outside that might create some texture: a Day Blue Muslin with some green bushes, a Tan Muslin with some black & white tape stripes to suggest window frames and pipes running up a brick wall, etc.
I don’t know about composition, but from a lighting standpoint, “Alien” (1979) is a compendium of lighting textures on faces, from ultra hard to ultra soft. In a similar vein is “Apocalypse Now” (1979).
In terms of composition of faces in close-up, it depends on the aspect ratio, but for scope movies, I’d look at Sergio Leone movies like “Once Upon a Time in the West”.
If you have people in the frame, then it’s easiest to shift strong colors that are opposite to the faces, which is why blue screens and green screens are used for VFX. So if a background is pure blue, then it can easily be shifted to blue-green (cyan) or green without affecting foreground faces lit warm. It also works if there is only blue in the frame and the faces are pure silhouette, like in that frame from “Vertigo”.
BUT what about the blue light spilling onto the faces, perhaps in the shadow side, so the face has a mix of blue and warmth in that area? Or any other area in the frame where the blue light is transitioning into the warm light, creating an in-between color? Even in that scene from “Vertigo” there are other shots where the green light is mixing with warm interior lighting.
Now, just shifting the blue channel towards green isn’t going to look as “clean” overall as if you had lit with cyan light in the first place, where the blending point of the colors would come out naturally with that mix of green. And it gets harder as you go further from the original blue: you can get away with some minor shift to blue-green, but turning blue to pure green is much trickier in those areas of mixed colored light. So in the long run, it’s better to just light it with the colors you want in the first place. Why spend time in post trying to create that effect, with mixed results?
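To illustrate why those transition areas are the problem, here is a rough sketch of a masked hue rotation, assuming a float RGB image and matplotlib’s vectorized HSV conversion; the function name, hue center, and falloff width are made up for the example:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def shift_blue_hues(img, shift_deg=-60.0, center_deg=240.0, width_deg=60.0):
    """Rotate hues near pure blue (240 deg) toward cyan/green.

    img: RGB array of floats in [0, 1]. A smooth falloff mask confines
    the rotation to blue-ish pixels -- but pixels where blue and warm
    light mix sit on the edge of that mask, which is exactly where a
    bigger shift (blue to pure green, -120 deg) starts to break down.
    """
    hsv = rgb_to_hsv(img)
    hue = hsv[..., 0] * 360.0
    # hue distance to the target color, wrapped around the color wheel
    d = np.abs(hue - center_deg)
    d = np.minimum(d, 360.0 - d)
    mask = np.clip(1.0 - d / width_deg, 0.0, 1.0)  # 1 at pure blue, 0 outside
    hsv[..., 0] = ((hue + shift_deg * mask) % 360.0) / 360.0
    return hsv_to_rgb(hsv)
```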
That’s interesting, the idea that a movie takes on the perspective of so many characters in a subjective camera style that the overall effect is objective. That often seems to be the case in crime dramas, particularly with a detective / police investigator character — information is doled out to the viewer as a character learns it. And despite the subjective nature of that approach, there is also an objective tone of a dispassionate or cynical “god’s eye” view of humanity. And the truth is, most movies switch back and forth from subjective to objective perspectives.
Shallow focus was not only available to “large productions” in the past, it was available to low-budget productions as well. Even if you’re talking about people limited to smaller sensors like in the early days of 2/3″ HD in 2000-2007 or so, there were tools to get shallow focus like the P&S Technik Mini-35, the Brevis35, the Letus… all spinning groundglass attachments that allowed one to use 35mm lenses and get that depth of field.
No single technique is better or worse than another, it all depends on whether it is wielded by an artist or not — and Greig Fraser is an artist, he’s proven that on many productions.
I think Roger is happy that every movie doesn’t look like he shot it! Variety is the spice of life as they say.
First of all, shallow focus has been available to filmmakers ever since the 1930s, low or high-budget, when f/2.0 lenses became available for cinema. Unless you are talking only about digital cameras, where 35mm-sized sensors appeared in the mid-2000s with the Panavision Genesis; but low-budget filmmakers had access to the Red One by the late 2000s, so I’m not sure why you picked 2010 as the date shallow focus became available to smaller movies.
Second of all, I don’t know why a visual effects scene shot on a volume or against a green screen, etc. would be more realistic and less “CG” if the focus was deeper. And that frame from “The Batman” that you posted doesn’t look like it was grabbed from the Blu-ray, so I’m not sure that is the correct color timing.
Shallow and deep focus filmmaking follow trends, and both approaches can be used well or inappropriately. The 1930s was a time when shallow focus and lens diffusion were popular, to the point where they were sometimes used when they shouldn’t have been. Then after the critical reaction to “Citizen Kane”, you sometimes saw deep focus used when it wasn’t really necessary, or when it caused the lighting to look artificial due to the high light levels required.
But after fast f/1.4 lenses arrived in the 1970s, you saw plenty of low-budget movies with shallow focus.
I think in the case of “The Batman”, as with other Greig Fraser works lately, the shallow focus works well for creating a certain dreamlike mood or a feeling that one is trapped in the headspace of a character (it worked well in Roger’s “Empire of Light” as well). But certainly we are in a time where shallow focus is trendy to the point where it gets used indiscriminately even on big-budget projects just because it is “pretty”.
Our minds can “zoom” in on some detail, like spotting a face in a crowd from a distance — flying in on a drone across a stadium to a face in a crowd wouldn’t really make sense for a POV of someone who can’t move in that direction.
From Wikipedia:
The f-number N is given by:
N = f/D
where f is the focal length, and D is the diameter of the entrance pupil (effective aperture). It is customary to write f-numbers preceded by “f/”, which forms a mathematical expression of the entrance pupil’s diameter in terms of f and N. For example, if a lens’s focal length were 10 mm and its entrance pupil’s diameter were 5 mm, the f-number would be 2. This would be expressed as “f/2” in a lens system.
But focal length isn’t really a measurement of the physical length of the lens; it’s the distance from the optical center to the sensor plane when the lens is focused at infinity. And the aperture size is not the physical size of the hole measured at the back of the lens but the “effective aperture” size as seen through the front element. So this is why lenses can vary in size even though they might all be 50mm lenses.
But if you put a 50mm lens set to f/2 on cameras with different sized sensors, it’s still f/2 exposure-wise just as if you only had one sized sensor and then cropped in post to smaller areas.
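The quoted definition is trivial to restate as a worked calculation; this little Python snippet just repeats the numbers from the Wikipedia example and the 50mm point above:

```python
def f_number(focal_length_mm, entrance_pupil_mm):
    """N = f / D, per the definition quoted above."""
    return focal_length_mm / entrance_pupil_mm

# The Wikipedia example: 10mm focal length, 5mm entrance pupil
print(f_number(10.0, 5.0))   # 2.0, written as f/2

# A 50mm at f/2 has a 25mm entrance pupil; N is a property of the
# lens, so it stays f/2 exposure-wise no matter the sensor behind it.
print(f_number(50.0, 25.0))  # 2.0
```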
It’s a common misunderstanding that focal length creates perspective effects. Perspective on landscapes and faces is determined by camera position and distance to the subject. It’s not even a camera issue — take a walk in the mountains. You see a boulder on the hill you are standing on and there is a distant mountain peak. As you walk closer to the boulder, it gets relatively larger compared to the distant mountain. It’s the same thing with a face: get closer and the distance from your viewpoint to the nose changes, but the distance from the nose to the ears is a constant, so the nose gets relatively larger compared to the ears as you get closer. What the focal length provides is the view of the subject: how tight or wide it is, the degree of magnification or expansion.
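As a back-of-the-envelope sketch of that face example: apparent size falls off as 1/distance, so the nose-to-ears size ratio depends only on how close the camera is. The 12cm nose-to-ear depth below is just an illustrative figure; note that focal length never enters the formula:

```python
def nose_to_ear_ratio(camera_to_ears_cm, depth_cm=12.0):
    """Apparent size of the nose relative to the ears (1/distance falloff).

    depth_cm is an assumed, illustrative nose-to-ear distance.
    """
    camera_to_nose = camera_to_ears_cm - depth_cm
    return camera_to_ears_cm / camera_to_nose

print(round(nose_to_ear_ratio(50), 2))   # ~1.32: up close, the nose looms
print(round(nose_to_ear_ratio(300), 2))  # ~1.04: step back, the face flattens
```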
So if you match distance and you match field of view between formats by using the equivalent focal lengths, you get the same image more or less. What’s different is the depth of field: because the larger format needs a longer focal length to achieve the same view, there is less depth of field unless you stop down the lens (by roughly the same amount as the conversion factor, so if the factor is 1.5X then you’d stop down the equivalent lens on the larger format by about 1.5 stops to match the depth of field of the smaller format).
To calculate the equivalent focal lengths in terms of horizontal view, find out the width of the sensor area being used and divide the larger number by the smaller number to get the conversion (crop) factor.
Generic figures of 24mm wide for Super 35 and 36mm wide for Full-Frame 35 mean you divide 36 by 24 to get 1.5. So that’s the conversion factor. To get a matching horizontal view you’d multiply the Super35 focal length by 1.5 to get the equivalent view in FF35. So a 32mm lens in Super35 has the same view as a 48mm in FF35.
But to be more precise you need the actual widths of the sensor areas being used. Open Gate on a classic Alexa is 28.25mm and on the Alexa 35, it’s 27.99mm. Open Gate on the Alexa LF is 36.70mm. So if you were comparing a classic Alexa to an LF, both using Open Gate ARRIRAW, you’d divide 36.70 by 28.25, which is about 1.3. So a 32mm lens on a classic Alexa would have to be a 41.6mm on an Alexa LF if you wanted to match the horizontal view.
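Here is the same arithmetic as a sketch (the helper names are made up for the example). One refinement on the depth-of-field point: the exact equivalence multiplies the f-number by the conversion factor, which works out to 2 × log2(factor) stops, about 1.17 stops for a 1.5X factor, so the roughly-1.5-stops figure above is a rule of thumb:

```python
from math import log2

def crop_factor(larger_width_mm, smaller_width_mm):
    """Divide the larger sensor width by the smaller one."""
    return larger_width_mm / smaller_width_mm

def equivalent_focal(focal_mm, factor):
    """Focal length on the larger format matching the smaller format's view."""
    return focal_mm * factor

def stops_to_match_dof(factor):
    """Stops to close down on the larger format to match depth of field.

    Equivalent f-number = N * factor, and each stop changes the
    f-number by sqrt(2), hence 2 * log2(factor) stops.
    """
    return 2 * log2(factor)

# Generic figures: 36mm-wide FF35 vs 24mm-wide Super 35
f = crop_factor(36.0, 24.0)                 # 1.5
print(equivalent_focal(32.0, f))            # 48.0mm
print(round(stops_to_match_dof(f), 2))      # 1.17 stops

# Actual Open Gate widths: Alexa LF vs classic Alexa
f = crop_factor(36.70, 28.25)               # ~1.3
print(round(equivalent_focal(32.0, f), 1))  # 41.6mm
```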
Being a keen observer of natural and practical light is an important skill.
There’s no right or wrong answer here. Some people want a clean image and will try to work at lower ISOs, others either want some noise or have to work in low light level conditions and will use a high ISO.