Forum Replies Created
At 24/25 fps at ISO 100:
100 fc = f/2.8
200 fc = f/4
400 fc = f/5.6
800 fc = f/8
So with ISO 800:
100 fc = f/8
200 fc = f/11
400 fc = f/16
You lose 5 stops going from 25 fps to 800 fps (the exposure time is 32× shorter), so to shoot at f/2.8 at 800 fps, you need to light the scene for f/16 at 25 fps; the 5-stop loss then brings you down to f/2.8.
That’s about a 5-stop light loss, so you look at the photometric data for the light and figure out how many foot-candles you need to shoot at, say, f/2.8 at 800 fps — or think of it as shooting at f/16 at 25 fps.
The old rule of thumb is that you need 100 fc to shoot at f/2.8 at 24 fps with a 180-degree shutter at 100 ASA. So that’s roughly f/8 at 800 ASA if you have 100 fc. To get 2 more stops of exposure for f/16 at 800 ASA, you’d need 400 fc. Then you’d be shooting near f/2.8 at 800 fps with 400 fc. I think.
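The arithmetic above can be sketched as a small calculator. This is a hypothetical helper built only on the 100 fc = f/2.8 at 24 fps / 180-degree shutter / 100 ASA rule of thumb; a real light meter and the fixture’s photometric charts are the authority.

```python
import math

BASE_FC = 100.0   # rule of thumb: 100 fc -> f/2.8...
BASE_STOP = 2.8   # ...at 24 fps, 180-degree shutter, ISO 100
BASE_ISO = 100.0
BASE_FPS = 24.0

def f_stop(footcandles, iso=100.0, fps=24.0):
    """Estimate aperture from the 100 fc = f/2.8 rule of thumb.

    Assumes a 180-degree shutter throughout. Doubling the light or
    the ISO gains one stop; doubling the frame rate loses one stop.
    """
    stops = (math.log2(footcandles / BASE_FC)
             + math.log2(iso / BASE_ISO)
             - math.log2(fps / BASE_FPS))
    # The f-number scales by sqrt(2) per stop of exposure gained.
    return BASE_STOP * (2 ** 0.5) ** stops
```

For example, `f_stop(100, iso=800)` lands near f/8, and `f_stop(400, iso=800, fps=800)` comes out close to f/2.8, matching the reasoning above (within rounding, since 800/24 is slightly more than 5 stops).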
Mainly you shoot tests in advance for any technique that you want to try. Otherwise (or as well) you contact people who have tried that technique or you hire crew people with some experience in that technique. If you only do things you’ve done before, you never learn. You take calculated risks based on research.
If you get the look in camera, then why do you need the flexibility to enhance the look in post? You already have the look. There’s nothing wrong with taking advantage of post tools to create a look… but one would hope the primary reason isn’t that you want to be able to change your mind later. Hedging your bets like that isn’t the most effective way to learn and grow as an artist because you never risk failure.
Every technique has its limits so you just have to learn what they are.
Now of course most cinematographers want some post flexibility to deal with sub-optimal things that happen on set, which is why one shoots raw or log. For example, outdoors, changes in weather can shift colors and contrast around, sometimes shot to shot. It’s one reason (of many) that in the past most filmmakers would rather shoot color negative than color reversal. Having almost no margin for error can be a bit too challenging on some projects. But on the flip side, a cinematographer is hesitant to hand in footage with no look at all — with maximum flexibility to create any look — and hope that control over that image will be given to them throughout the post chain.
I think a lot of cinematographers opt for a hybrid approach, creating the look in terms of light and shadow, mood, composition, etc. in camera, knowing that it will go through a post color-correction step, so they record raw or log to make adjustments easier. Some looks can only be created in post (or by live-grading on set, perhaps, but then you run into time issues) — like adding film grain, selectively correcting portions of the frame, or halating the blacks. You just have to pick the best methods to create your look — but the hope is that you know that look in advance, not that you’ll figure it out later.
Usually when a movie used a silver retention process for the release prints, the transfers for home video use the camera negative (or a timed color interpositive made off of the negative) that is “normal” and the look of the silver retention print process is created digitally in the color-correction — mainly by increasing contrast in the shadows, making the blacks deeper, and lowering saturation.
If the skip bleach process had been done to the camera negative, then it would be baked in but some of the effect is a little different than when done to the release print. The silver grains in camera negative are larger than in print stock (since print stock has a very low ISO), and the increase in density from leaving silver in the negative causes the highlights to get hotter rather than the shadows to get darker.
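The digital emulation described above — deeper blacks, more shadow contrast, lower saturation — can be sketched very roughly in code. The luma weights are the standard Rec.709 coefficients, but the mix amount and gamma value here are illustrative assumptions, not any particular colorist’s recipe.

```python
def bleach_bypass_look(rgb, desat=0.5, shadow_gamma=1.3):
    """Rough sketch of emulating a silver-retention print in grading.

    rgb is an (r, g, b) tuple of floats in [0, 1]. Lowers saturation
    by mixing toward luma, then deepens shadows with a gamma > 1.
    """
    r, g, b = (min(max(v, 0.0), 1.0) for v in rgb)
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 weights
    # Mix each channel toward the luma to lower saturation.
    desaturated = [desat * luma + (1.0 - desat) * v for v in (r, g, b)]
    # Gamma above 1.0 pulls shadows and midtones down, deepening blacks.
    return [v ** shadow_gamma for v in desaturated]
```

A real grade would do this selectively (shadows more than highlights) and by eye; this only shows the direction of the adjustments.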
There are some body cam shots in the opening of “Seconds” (1966):
They are distracting and unreal but obviously that was the intent, whether it is heavy-handed or not is open for debate.
This reply was modified 4 months, 3 weeks ago by dmullenasc.
Changing a lens versus moving the camera can each be faster or slower depending on the situation; there are too many variables to consider, but in general, swapping a prime lens for one slightly longer is not hard.
I’ve been in some situations where the OTS used a longer lens than the CU. For example, if cross-covering where A and B cameras are shooting opposite angles to get both sides of the overs at the same time, it is easier to keep the other camera out of the shot if working on longer lenses from further back. Or maybe you want the foreground shoulder to not get larger in size relative to the person facing the camera so you use a longer lens for that flatter perspective, but in the close-up you want the effect of the lens being physically closer.
The other issue is eyelines — if you shoot a close-up with a wide-angle lens and want a really tight eyeline, the actor might have to look at the edge of the lens or inside the mattebox rather than the other actor. In the OTS, they can naturally look at each other. So if you push in for the CU rather than go to a longer lens, you have to consider whether the actor can still see the off-camera actor and whether that eyeline is too wide now. I did a movie where the director wanted wider-angle lenses for CUs, like a 28mm or 35mm, but wanted a tight eyeline, so the actor had to look at a mark in the lens. It made the scenes more intense… but after four weeks, the main actor complained “for just once on this movie, I’d like to look at the other actor in my close-ups!”
I don’t have a scientific answer but skin is a pastel color and as it gets lighter, like any color, it gets less saturated by eye.
A similar thing happens in colored lighting — 5600K light on a 3200K camera looks pale blue when overexposed and deep blue when underexposed.
As to how overexposed a face can get and still be corrected back to normal, that depends on the camera and the recording format, the skill of the colorist — and the face.
The same show LUT could be used for set monitors, dailies, and then as a starting point in the final color-correction with the likely option to start from scratch and go back to log or raw, but use the show LUT as a reference. LUTs might be loaded into some cameras or the camera sends a log image to the DIT cart and the LUT is applied there and sent out to all the other monitors, or it might be loaded into individual monitors.
But this might be separate from technical LUTs, like for viewing in P3 versus Rec.709, or for working within the gamut of Kodak Vision 2383 print stock for a film-out.
Not sure what the difference is between a “show” LUT versus a “look” LUT unless a look LUT refers to things in post like a Vision 2383 print LUT as a base for correction so you don’t work outside of the color gamut that print stock can create.
On many shows, the dailies colorist gets the footage, either shot raw or in something like ProRes, converts it from raw if necessary, applies the LUT being used on set for the monitors, applies any shot-by-shot image adjustments sent in by the DIT (usually as ASC CDL values), probably applies any letterboxing needed, and outputs the deliverables that editorial needs plus whatever is used for streaming dailies. They will also back up the data.
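The ASC CDL values mentioned above are defined by a per-channel slope/offset/power formula plus a single saturation control. A minimal sketch, assuming float RGB values in [0, 1]:

```python
def apply_cdl(rgb, slope=(1.0, 1.0, 1.0), offset=(0.0, 0.0, 0.0),
              power=(1.0, 1.0, 1.0), sat=1.0):
    """Apply ASC CDL: out = (in * slope + offset) ** power per channel,
    clamped to [0, 1] before the power, then a saturation adjustment
    that mixes each channel toward Rec.709 luma."""
    out = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = min(max(v * s + o, 0.0), 1.0)  # slope, offset, clamp
        out.append(v ** p)                 # power
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return [luma + sat * (v - luma) for v in out]
```

Because the CDL is just these ten numbers, the same on-set adjustments can travel with the footage and be re-applied identically by the dailies colorist and again in the final grade.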
The softness of a light (i.e. the blurriness of the shadows created by the light) is determined by its size relative to the subject, that’s basically it.
What different bounce surfaces and diffusion materials control is the degree to which you can fill that surface evenly to maximize the size of the source, versus getting a hotter spot in the center for example, or some specular light mixed into the soft light (which is sometimes desirable to give the soft light some “texture”).
So putting grid cloth over UltraBounce isn’t going to do much other than maybe give you a bit more of a hot spot in the center (or wherever your lamp is aimed) because grid cloth has a tiny bit more shine to it. You could try putting bleached muslin over the UltraBounce; it’s maybe slightly more matte. But it’s not additive; you’re basically swapping an UltraBounce for a bleached muslin. The grid cloth idea is a bit more additive in the sense that bouncing off grid cloth alone is less light-efficient, since so much light passes through the cloth, so adding an UltraBounce behind it will improve the amount of light you get off of the bounce.
But it’s not going to make the UltraBounce light “softer” and less sourcey. To do that you either need a larger UltraBounce (and be able to fill the larger surface) to create a larger bounce — or have the room to “book light” the bounce by passing it through another frame closer to the actor but then basically now the closer diffusion frame is the “source” in terms of the size determining the softness. But book-lighting would make it easier to fill that diffusion frame more evenly.
Now it’s possible that the reason the UltraBounce is not soft enough for you is that you aren’t filling it evenly, so it may help to hit it with multiple smaller units rather than one larger unit.
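The “size relative to the subject” point can be made concrete as the source’s angular size as seen from the subject — a rough proxy for softness, since a larger apparent angle means blurrier shadow edges. This helper and its units are just an illustration, not a production formula.

```python
import math

def apparent_size_deg(source_width_ft, distance_ft):
    """Angular size (degrees) of an evenly filled bounce or diffusion
    frame as seen from the subject. Bigger angle = softer shadows.
    Only meaningful if the whole surface is actually filled with light."""
    return math.degrees(2 * math.atan(source_width_ft / (2 * distance_ft)))
```

Note that doubling the frame size and halving the distance have the same effect — an 8 ft frame at 16 ft subtends the same angle as a 4 ft frame at 8 ft — which is why a small source flown in close can be as soft as a big one farther away.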
This reply was modified 5 months, 1 week ago by dmullenasc.
Test, test, test is all I can say…
Check out “Lost Highway” for some of the very dark scenes in the apartment — it’s quite underexposed with lifted blacks, so it has somewhat of a foggy “head cold” sort of feeling. Perhaps a hazed set would help create some of this nebulous quality. To me it sounds like you want very dim underexposed detail under soft light, at least in some areas, so you can still see some movement through space. Pure blacks or milky blacks (grey) with no real image detail would make it hard to sense motion.
I’m all for poetic language to get at the feeling desired but at some point, you have to get into specifics. Do you see anything of the scene, the action, the set, etc.? Is it very dim with low-contrast or shadowy with large areas of blackness but a few highlights? Do you want noise? Do you want true blacks? There are no right or wrong answers.
The problem with very dim underexposed imagery is that it plays differently on a large theater screen, where it commands attention and is still the biggest and brightest thing in your field of view, versus a TV monitor, where it is smaller in your field of view and competing with the rest of the room lights.
Generally what they mean when they say they don’t use negative stock is that they don’t use camera negative stock, which is much higher in ISO (and grainier) than lab intermediate and print stocks.