# Dynamic Range War



## unfocused (Mar 9, 2012)

Okay, I've noticed a lot of discussion (to put it politely) in other threads about the "dynamic range" of the new 5D III sensor. I'm hoping someone can enlighten me a bit and explain why or if I should care.

I'm not clear what exactly people mean by dynamic range. It seems like at least two definitions are possible.

1) Are you referring to the ability of the sensor to record detail in a scene that has a wide range of light? For example, are we talking about the ability to capture detail in a brightly lit canyon, where the light ranges from near total sunlight to near black? So that a sensor with a dynamic range of, say, "9" would be able to record detail up to four stops from the midpoint in either direction?

2) Or, are you referring to the ability of the sensor to record discernible differences in light. For example, a range of "9" would mean that on a scale from black to white, there would be nine clear steps visible?

It's been many years since I read the Zone System (and frankly, I found the books excruciatingly boring), but as I recall Adams' basic premise was that film was capable of recording far greater dynamic range than could be reproduced by photographic paper (much less commercial printing). By manipulating exposure and development of the film, he sought to compress the dynamic range recorded by the film, so that it could be aligned with what the final print could reproduce. The general concept, as I recall, was to expose to retain some detail in the shadows and then develop to retain detail in the highlights. 

My understanding is that photographic prints even today have less possible range than sensors, and computer monitors less than prints. (Although the backlighting of monitors gives the appearance of greater saturation and richness in colors.)

So, if I am wrong about this, can someone explain it in understandable terms? And, if I am right, then why should I care at all about dynamic range so long as the final medium is always going to be more limited than the medium used to capture the image in the first place?


----------



## dtaylor (Mar 9, 2012)

unfocused said:


> 1) Are you referring to the ability of the sensor to record detail in a scene that has a wide range of light. For example, are we talking about the ability to capture detail in a brightly lit canyon, where the light ranges from near total sunlight to near black. So that, a sensor that has a dynamic range of say "9" would be able to record detail for up to four stops from the midpoint in either direction?



This is what people mean when they say dynamic range. Though the midpoint, i.e. the point where a gray tone is rendered middle gray, is not necessarily in the middle of the range: digital sensors typically have more shadow range than highlight range, and print film typically has the opposite.



> My understanding is that photographic prints even today have less possible range than sensors and computer monitors less than prints. (Although the back lighting of monitors gives the appearance of greater saturation and richness in colors)



Monitors have more DR than prints. Some may even exceed sensors.



> And, if I am right, then why should I care at all about dynamic range so long as the final medium is always going to be more limited than the medium used to capture the image in the first place?



Because you can compress the captured range into a range that will fit on paper, and your viewer can see the shadow and highlight detail you saw at the scene. More DR also covers more exposure errors.
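That compression is easy to sketch in code. A toy example with hypothetical numbers - a simple power curve standing in for the far more sophisticated tone curves real raw converters use:

```python
# Minimal sketch (hypothetical numbers): squeezing a 12-stop linear
# capture into a 6-stop output range with a simple power curve.
def compress(value, in_stops=12, out_stops=6):
    # value is linear in [0, 1]; a gamma of out/in lifts the shadows
    # so the full captured range fits the narrower output range.
    return value ** (out_stops / in_stops)

# The deepest captured shadow (2^-12) lands at 2^-6, the darkest
# tone the narrower output medium can still show.
print(compress(2 ** -12))  # 0.015625
```

With this curve every captured stop survives into the print's range, just with its separation from neighboring tones reduced.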

On that note... I don't know why anybody is talking about 5D3 DR yet. To my knowledge there are no published transmission step wedge tests of the 5D3. DPR is the site that usually does this first. You can safely ignore any and all claims based on noise measurements (i.e. DxO and personal estimates made from available RAW files). Trying to compute DR from the noise floor does NOT result in an accurate measurement of sensor DR in the real world.


----------



## awinphoto (Mar 9, 2012)

Hey unfocused... your definitions are both correct to a degree... The downfall of DR in photography was, and is, printing... film had a few more stops than photographic paper back in the day... I think film, especially negative film, had closer to 9 stops give or take (slide film had anywhere from 11-13 if I'm not mistaken, but it's been quite a few years). Photographic paper at the time had maybe 5-6 stops of DR, so the Zone System was created to leverage the developing process, exposure process, and printing to get the maximum range out of the paper's limitations. It was an entire class in itself... a lot of math and testing and experimenting. 

Now with digital, we are once again limited by CMYK and printing... The RGB color gamut is vastly wider than CMYK, commercial printing has not been able to catch up, and there is really nothing you can do about it without adding spot colors and such... Modern Epson and HP printers have even added red and orange inks, and I think even blue, not to mention light cyan, light magenta, etc... all trying to get the widest gamut possible. Commercial printers have yet to do something similar without really adding to the cost of production.

Now regarding dynamic range, it is the range of stops and subtleties that you see throughout the entire print... DR is best seen as an S curve... most cameras can capture most of the middle ranges but struggle to get the information in the extreme highlights and shadows. The 5D2, for example, had around 11-12 stops of DR, depending on the testing source. I would guesstimate most consumer digital cameras on the market today can capture around 9-10 stops of DR easily, if not more. Now, if Nikon/Canon/Sony/Phase One, etc. develop a sensor that could capture 13, 14, 15 stops of DR, most of the gain, as mentioned above, would be in the subtleties of the highlights and shadows, but whether any of it would show up in print is another thing. Of course, with the increasing development of digital frames, projectors, HD monitors, etc., you can make a good presentation, but for professional photographers delivering paper prints to clients, until printer/ink/output/CMYK development keeps pace with cameras, it really is a futile argument. 

Edit... Epson a few years ago came out with the R2800, which was supposed to give more DR by having multiple black/gray inks for more definition in subtle tones, as well as cleaner B&W prints without the color tinting that was prevalent in standard B&W inkjet printing... While successful, it also struggled on the color side, because where they added black and gray inks, they took away color inks to compensate... now they have the R2880 and the R3000, which try to blend the technologies... It's still a work in progress; however, these printer companies update their printers almost less frequently than the 1D series, and commercial printing really hasn't changed a whole lot in the last half decade.


----------



## Larry (Mar 9, 2012)

unfocused said:


> if I am right, then why should I care at all about dynamic range so long as the final medium is always going to be more limited than the medium used to capture the image in the first place?



Despite whatever limitations of the final medium, if there are discernible differences in the print that result from DR differences in the sensors, most of us would care.

The old, out of tune piano with some sticking keys will never deliver the whole tune, but 10 fingers will still do a better job than 5 ;-)


----------



## epsiloneri (Mar 9, 2012)

Dynamic range is the ratio between the brightest signal you can reliably detect and the faintest signal you can reliably detect. This is close to your definition 1. Your definition 2 is closer to the numerical dynamic range, but is not necessarily related to the photographic dynamic range (because with DR = brightest/darkest, the number of steps in between doesn't matter).
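In code, this definition is just a base-2 logarithm; a quick sketch with made-up numbers:

```python
import math

# DR in stops = log2(brightest / faintest reliably detectable signal).
def dr_stops(brightest, faintest):
    return math.log2(brightest / faintest)

# A hypothetical sensor whose saturation signal is 4096x its noise floor:
print(dr_stops(4096, 1))  # 12.0 stops
```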

BTW, the human eye has a DR of about 30 stops (though not simultaneously). It's fortunate reproductions don't have that kind of DR, as it could easily be quite uncomfortable to view. Just imagine _actually_ being blinded by a photograph of the Sun...


----------



## Mt Spokane Photography (Mar 9, 2012)

This image shows why we would like more dynamic range in a camera. I took this on a bright day, and the camera exposed for the bright sky. I then switched to manual exposure and exposed for the person, but kept the image as an example.

A higher dynamic range might let you get good detail in both the bright and shadow areas; without enough dynamic range, you must choose the area where you want to show detail.

Black and white film typically had very good dynamic range, and you routinely got good prints from a bit of under- or overexposure. Digital is not so forgiving; it's because there is less dynamic range.

Most of the discussion involves how much range you can theoretically extract from a raw image, and is not necessarily related to the quality of the final image.


----------



## awinphoto (Mar 9, 2012)

Mt Spokane Photography said:


> Black and white film typically had very good dynamic range, and you routinely got good prints from a bit of under- or overexposure. Digital is not so forgiving; it's because there is less dynamic range.



Only if you knew you screwed up your exposure in the beginning and developed your film with push/pull times to compensate... otherwise you were left with a muddy mess...

Also, DR may or may not help your image... much like epsiloneri said, your eye has a huge DR, but what your eye does is compensate on the fly... so in your scene, you look at the sky, your subject goes out of focus, and the sky slightly darkens to reveal the detail in the clouds, sky, etc... then you focus on your subject, your pupil dilates, lets more light in, and your subject appears normal... then you lower your gaze, look into shadow, and your eye sees the shadow detail fine... It does all this by not processing all the information at the same time. Kinda like walking indoors and outdoors, or, when outside, trying to discern something in the distance: squinting forces your eye to focus exactly on said subject, and your eye recalculates the light so you can see it. Our eyes are the most advanced camera/lens system ever created, and we mostly don't even think of it...

So in theory, if you nailed exposure on the guy, the shadows would have been brought up nicely and you would get white skies... increased DR may give you nicer shadow detail and maybe a cloud or two, but it probably couldn't make this a perfect photo unless we get the same DR as our eyes.


----------



## dtaylor (Mar 9, 2012)

For the record...

* Slide film has 5-8 stops depending on emulsion.

* Early generation DSLRs were around 8 stops.

* Current DSLRs are in the 10-12 stop range. (Note: the newest FF bodies from Canon and Nikon haven't been tested yet.)

* Print film has 9-14 stops depending on emulsion.

* I've seen some B&W emulsions, when properly processed, yield 18 stops.


----------



## awinphoto (Mar 9, 2012)

dtaylor said:


> For the record...
> 
> * Slide film has 5-8 stops depending on emulsion.
> 
> ...



I think you got your print film and slide film mixed... Slide film typically had much wider DR than color negatives... So much so that many pros who shot color would shoot slides, and then try re-exposing the slides onto negative film when needed to print. But they were two separate beasts... With negatives you exposed for the shadows and printed for the highlights; with slides, you had to expose slightly for the highlights. B&W film had maybe upwards of 9 or so when we did our densitometer tests, but it really depended on ISO and brand.


----------



## qwerty (Mar 9, 2012)

As neuroanatomist pointed out in another thread, people (myself included) always want a little bit more. However, I really was expecting a boost in DR with the new sensor. I do hope that the people reporting a lack of significant improvement are wrong (and they generally list ways their analysis might be off); however, it does have me a little concerned.

If you look at DxOMark scores for dynamic range (see http://www.dxomark.com/index.php/Cameras/Compare-Camera-Sensors/Compare-cameras-side-by-side/(appareil1)/680%7C0/(brand)/Nikon/(appareil2)/485%7C0/(brand2)/Nikon/(appareil3)/483%7C0/(brand3)/Canon ; click on Measurements, then Dynamic Range), you can see that the 5DII dynamic range asymptotes at low ISO, while the DR for the two Nikon sensors keeps improving as you reduce ISO.

I have some background in statistics, but know nothing about sensor design. However, I did expect that Canon would have an amazing low-iso dynamic range with their latest generation of sensors, if only to keep up with Nikon. Right now, Nikon's full frame sensor from 2008 and crop sensor from 2010 beat Canon's best full frame sensor by a wide margin (almost 2 full stops). 

I am not very knowledgeable about film photography, but my recollection is that, beyond the reported dynamic range, film is more forgiving than digital; with digital, if you blow a highlight, it is blown completely and utterly beyond recovery (your Spinal Tap brand amp won't go to 12, no matter how hard you try). With film, since it is analogue, it is a more gradual process; it becomes progressively harder to distinguish between different highlight areas, but there is still some minuscule difference.


I really hope that the people who are claiming no such improvement for the 5D III are wrong. My photographs will certainly be limited by my skill and not my camera's DR, but I would (rationally or not) feel better about purchasing a 5D III if I felt like it was optimal in every way.


----------



## awinphoto (Mar 9, 2012)

qwerty said:


> As neuroanatomist pointed out in another thread, people (myself included) always want a little bit more. However, I really was expecting a boost in DR with the new sensor. I do hope that the people reporting a lack of significant improvement are wrong (and they generally list ways their analysis might be off); however it does have me a little concerned.
> 
> If you look at DxoMark scores for dynamic range (see http://www.dxomark.com/index.php/Cameras/Compare-Camera-Sensors/Compare-cameras-side-by-side/(appareil1)/680%7C0/(brand)/Nikon/(appareil2)/485%7C0/(brand2)/Nikon/(appareil3)/483%7C0/(brand3)/Canon ; click on measurements, the dynamic range), you can see that the 5dII dynamic range asymptotes at low ISO while the DR for the two Nikon sensors keeps improving as you reduce ISO.
> 
> ...



While I too hope to see improvement in the new 5D3... if it helps in any way, it also has that cool HDR function... from what I could tell in reviews, you can set the brackets as close or as far apart as you want, and it will either merge them in camera or save the individual files for you to merge later, based on your preference, and it has several different settings... If it's as good as advertised, it could be one way to really boost DR in your images without them looking fake. I'm excited to give it a whirl.


----------



## unfocused (Mar 9, 2012)

Thanks all. This is a very helpful and reasonable discussion.

Now, to flog the horse's corpse just a bit more: looking at Mt. Spokane's example, what I have a hard time wrapping my head around is that it seems to me the problem is not that the sensor fails to record _enough_ dynamic range, but rather the sensor does not know how to _compress_ the range that exists in the scene.

It still strikes me that the problem is that the distance between the various tones from shadow to sky needs to be narrowed. Granted, this particular image is not properly exposed for the skin tones, but if it were, the sky would be blown out. Is that a problem with too little dynamic range? Or is it more correctly a problem with too much dynamic range? 

Finally, I would just add that in this example, my first reaction is to see an image that would look about the same if shot with slide film. We've always had a problem with these kinds of lighting conditions, and the old way of dealing with them was to change the conditions by shifting position or moving the subject. 

Again, I appreciate the responses here. Candidly, my non technical take on all this is that the improvements that we are talking about with modern sensor technology are primarily around the margins. Whether it is ISO, noise, dynamic range, etc. etc., it sure seems to this old film photographer that we are light years ahead of where we used to be.


----------



## neuroanatomist (Mar 9, 2012)

unfocused said:


> Now, to flog the horse's corpse just a bit more: looking at Mt. Spokane's example, what I have a hard time wrapping my head around is that it seems to me the problem is not that the sensor fails to record _enough_ dynamic range, but rather the sensor does not know how to _compress_ the range that exists in the scene.



In order to compress it, the sensor must first _record_ it - and that's the problem. 

Consider your eyes - they have what amounts to a situationally flexible dynamic range. When you are in bright sunlight but then walk into a nearly pitch-black room, at first you can't see anything - but after a while, your eyes accommodate and you can see... until you walk out into the sunlight again, which momentarily blinds you (although you adjust more rapidly in that direction). 

A sensor does not have that flexibility - its dynamic range is fixed. You can adjust the exposure to capture a higher or lower portion of the scene's total dynamic range, but you're limited. Imagine a scene with 13 stops of dynamic range in it from deepest shadow to brightest highlight. If your sensor has 11 stops of DR, you can choose to set the exposure so you capture stops 1-11 of the scene and blow two stops of highlights, or capture stops 3-13 of the scene and lose two stops of shadow detail. If you had a sensor with a 13-stop DR, you could capture the whole range of the scene in one shot. 
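That trade-off can be put in toy code (the 13-stop scene and the 11-stop sensor are hypothetical numbers, as above):

```python
# Toy model: which stops of a 13-stop scene an 11-stop sensor keeps,
# depending on where exposure places the capture window.
SCENE = range(1, 14)      # stop 1 = deepest shadow ... stop 13 = brightest
SENSOR_DR = 11

def capture(shadow_stops_sacrificed):
    """Return (lost shadows, kept stops, blown highlights)."""
    lo = 1 + shadow_stops_sacrificed
    hi = lo + SENSOR_DR - 1
    lost = [s for s in SCENE if s < lo]
    kept = [s for s in SCENE if lo <= s <= hi]
    blown = [s for s in SCENE if s > hi]
    return lost, kept, blown

print(capture(0))  # expose for shadows: keep stops 1-11, blow 12-13
print(capture(2))  # expose for highlights: lose stops 1-2, keep 3-13
```

No exposure choice keeps all 13 stops; only a sensor with `SENSOR_DR = 13` would.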

Yes, output media are often more limiting (and JPEG images are limited to 8 bits) - but the point is, if you capture the full tonality of the scene in the RAW file, you can then choose which portions to keep, or compress the tonal range of the scene, as desired. But if your sensor doesn't have a sufficiently large DR to capture the lowest lows to the highest highs in the first place, those data are gone and cannot be recovered later.


----------



## unfocused (Mar 9, 2012)

Thanks. Got it.

http://www.facebook.com/photo.php?fbid=2010338070756&set=a.2010337630745.95914.1612860769&type=3&theater

Not sure this will work, but if it does, you may see an example of why I think we are pretty much just dealing with the margins. Shot of the amphitheater in Arles. I wanted the highlights to blow out and the shadows to go near black. Not sure I would have liked the effect otherwise.

It's quite mind-boggling to stand there and realize that two thousand years ago, people were walking through these same halls on their way to their seats, where they were going to watch gladiators fight to the death. Anyway, I really do appreciate everyone's comments. This is why I enjoy this forum.


----------



## dtaylor (Mar 9, 2012)

awinphoto said:


> I think you got your print film and slide film mixed... Slide film typically had much wider DR than color negatives... So much so that many pros who shot color would shoot slides, and then try re-exposing the slides onto negative film when needed to print.



Slide film never had wider DR than color negative. Check the characteristic curves published by manufacturers. The best slide film I'm aware of, in terms of DR, was Astia at roughly 8 stops. That's where print films began.

The only reason slides were ever shot to negatives and then printed is because you can't directly print a slide with standard darkroom printing processes. The resulting print would be a negative of the slide.


----------



## dtaylor (Mar 9, 2012)

qwerty said:


> If you look at DxoMark scores for dynamic range



Don't. Their scores are all over the place and do not concur with standard transmission step wedge tests or real world results. The fact that their DR values change between the screen and print reports suggests that they don't even know what DR is, as harsh as that may sound.

I've stayed out of the other thread but I got a good laugh out of the first couple pages. Nobody can tell you the 5D3's DR by shoving some IR samples into a piece of software. IR's samples are very good for other things, but not that. We won't know the 5D3's DR until someone performs a transmission step wedge test (DPR does this). Even then, digital DR is highly sensitive to the RAW converter used. I have a funny feeling, based on initial review of IR high ISO RAW and JPEG files, that our RAW converters will need an update to get the most out of the 5D3's sensor.



> Right now, Nikon's full frame sensor from 2008 and crop sensor from 2010 beat Canon's best full frame sensor by a wide margin (almost 2 full stops).



It's about 1 stop (11 vs. 12, RAW, ACR maximized for DR).



> I am not very knowledgeable about film photography, but my recollection is that, beyond the reported dynamic range, film is more forgiving than digital; with digital, if you blow a highlight, it is blown completely and utterly beyond recovery (your Spinal Tap brand amp won't go to 12, no matter how hard you try). With film, since it is analogue, it is a more gradual process; it becomes progressively harder to distinguish between different highlight areas, but there is still some minuscule difference.



True, but print film quickly fell apart on the shadow side.


----------



## awinphoto (Mar 9, 2012)

dtaylor said:


> awinphoto said:
> 
> 
> > I think you got your print film and slide film mixed... Slide film typically had much wider DR than color negatives... So much so that many pros who shot color would shoot slides, and then try re-exposing the slides onto negative film when needed to print.
> ...



While you're technically right that the negative had more range, it all came down to the fact that, any way you slice it, to be seen in final output it had to be printed, and paper only had 5-6 stops, whereas slides had much more. In the darkroom you could add contrast filters, dodge/burn, etc., but you were still limited in that regard... Also, while negatives had more DR, they were not as vibrant, they had more grain, and they were not as contrasty/punchy; in the end, you were still limited by the paper, whether it was magazine stock or photographic paper. It really boiled down to that. 

Color, low grain, contrast... these were the reasons many pros chose to shoot slides over negative film, and simply put, when you dealt with final output, projecting the slides just looked better than prints from negatives, especially in color. If it came down to a client wanting to use their photos, they could expose the slide to a negative and then to a print, and have the best of both worlds by going that route. I loved film and that era, but in the end I love digital more, because I prefer to work in my office rather than a dark, smelly darkroom.


----------



## Caps18 (Mar 9, 2012)

Mt Spokane Photography said:


> This image shows why we would like more dynamic range in a camera. I took this on a bright day, and the camera exposed for the bright sky. I then switched to manual exposure and exposed for the person, but kept the image as an example.
> 
> A higher dynamic range might let you get good detail in both the bright and shadow areas; without enough dynamic range, you must choose the area where you want to show detail.



Were you able to use Digital Photo Professional or another image processing tool to adjust the shadows, highlights and contrast in RAW? 

But I agree that you shouldn't have to take an HDR image in order to get what the human eye can see. Either have a setting mode that will apply the settings to artificially make an HDR image, or have it automatically adjust the settings in such a way as to bring out the mid-darks and underexpose the mid-brights.


----------



## dtaylor (Mar 9, 2012)

awinphoto said:


> While you're technically right that the negative had more range, it all came down to the fact that, any way you slice it, to be seen in final output it had to be printed, and paper only had 5-6 stops, whereas slides had much more.



The capture range is what is at issue. It was easy to compress the detail from a negative into the DR of a print in the darkroom, and is trivial today with scanners and PS.

Slides never had much more. As I said, Astia hit 8 stops, but others were 5-6, maybe 7.



> Also while negatives had more DR, they were not as vibrant, they had more grain, they were not as contrasty/punchy,



All of this really depends on the emulsion. While probably nothing tops Velvia for contrast and saturation, I remember Kodak Supra 100 and 400 as very vibrant films. Grain also wasn't one sided. Slide films generally had less grain at ISO 50-100, but couldn't touch print films at 400 or higher.


----------



## te4o (Mar 10, 2012)

Fantastic thread! I always highly respect great teachers, and dtaylor is obviously among the best. I enjoyed your comments like cool water on a hot sunny day: all these heated and merely preliminary discussions about DR from an IR RAW download should be stopped and discarded immediately. 
I enjoyed your explanation, thank you. Now some practical points:
Do you consider a particular RAW converter more suitable for DR "rescue"? You mentioned ACR optimized for DR - what is that?
Do you think the current RAW converters are incapable of delivering optimal results with the 5D3 sensor and need to be adjusted/updated? Is this offered in Canon's proprietary RAW converter? 
We had a discussion about the "best" RAW converter and obviously no one addressed these points. Please, dtaylor, specify how you convert your RAWs and why. Thank you again!


----------



## CanineCandidsByL (Mar 29, 2012)

dtaylor said:


> Monitors have more DR than prints. Some may even exceed sensors.



Sorry, but that's not true. Some extreme monitors might, but generally if you look at a monitor's specs it will show you differently.
For example, I looked up one monitor and it showed 1:20,000 (and you can find ones up to 1:1,000,000), but in the specs it says 1:1000 typical.

The backlight intensity can vary, which gives you the difference between the two specs. So for a movie that moves between bright and dark, you might get a 1:20,000 difference between the darkest pixel in a dark scene and the brightest pixel in a bright scene. However, for a stationary picture, you're limited to the amount of filtering the forward layer can provide... in this case 1:1000.

For those who haven't left already, how many stops are in these ratios? Remember, a stop is a doubling of light, so 1 stop of DR represents a 2x ratio between the darkest and lightest. So we start with a DR of 1 equal to 1:2 and double each time....

| Stops | Ratio |
|-------|-------|
| 1 | 1:2 |
| 2 | 1:4 |
| 3 | 1:8 |
| 4 | 1:16 |
| 5 | 1:32 |
| 6 | 1:64 |
| 7 | 1:128 |
| 8 | 1:256 |
| 9 | 1:512 |
| 10 | 1:1024 (typical for most monitors) |
| 11 | 1:2048 |
| 12 | 1:4096 |
| 13 | 1:8192 |
| 14 | 1:16384 |
| 15 | 1:32768 |
| 16 | 1:65536 |
| 17 | 1:131072 |
| 18 | 1:262144 |
| 19 | 1:524288 |
| 20 | 1:1048576 (the largest "nonsense" number I have ever seen for a monitor) |
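The table is just powers of two; any published contrast ratio converts to stops with a single base-2 logarithm:

```python
import math

# Stops of DR implied by a monitor contrast ratio: log base 2 of the ratio.
for ratio in (1000, 20000, 1000000):
    print(f"1:{ratio} is about {math.log2(ratio):.1f} stops")
```

So 1:1000 is about 10 stops, 1:20,000 about 14.3, and 1:1,000,000 about 20.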

I should make special mention of LED-backlit monitors. Instead of one light source for the whole monitor, they have LEDs that can be individually adjusted brighter or darker. As such, you actually might get those 1:1,000,000 ratios on the screen, but a single LED controls a group of pixels, so a white pixel next to a black pixel would generally be back at that 1:1000 figure.

TMI? ;D


----------



## keithfullermusic (Mar 29, 2012)

Mt Spokane Photography said:


> This image shows why we would like more dynamic range in a camera. I took this on a bright day, and the camera exposed for the bright sky. I then switched to manual exposure and exposed for the person, but kept the image as an example.
> 
> A higher dynamic range might let you get good detail in both the bright and shadow areas; without enough dynamic range, you must choose the area where you want to show detail.
> 
> ...



If DR were that high, they wouldn't make any money selling flashes!!!

Also, you can just use grad filters for things like this. Granted, it wouldn't be ideal right here, where the top half of the guy would be darker than the lower, but grad filters solve this problem with ease when taking landscape shots. Polarizers also do a good job of darkening the sky and water more than other things, so that might help in situations like this.

I bet you could lighten the shadows in LR/Aperture and get tons of info back in the darker parts, too.

Granted, more DR would be sweet, but there are some ways around it.


----------



## helpful (Mar 29, 2012)

For the record, slide film has a smaller dynamic range than negative film.

However, it still held at least as much "data" as did negative film.

Therefore, there was much more detail in the image coming from a slide, but exposure latitude was not as forgiving. Blown highlights and lost shadows were more likely with slide film. Overexposure in particular is much less of a problem with negative film, because negative film had a lot of headroom. Not so with slide film.

The way this compares to digital images is as follows:

Approximately the brightest 1/2 of the image data (i.e., the right side of the histogram) represents one stop. Therefore, the most image detail is in this area. Subtle variations in brightness are easily discernible.

The darkest 1/2 of the image data is divided up as if it were another image, like this:

* the brightest 1/2 of the remaining 1/2 of the image data (i.e., the upper half of the bottom half of the histogram) contains another stop of brightness data. There is slightly less detail, since not as much data is used to represent one stop of light.

The remaining quarter of the image data is divided up again, recursively:

* the brightest 1/2 of the remaining 1/4 of the image data contains another stop of brightness. There is significantly less detail, etc.

Let's assume that your image data is stored as 16 bits per pixel of linear data.

Then the brightest stop occupies the top half of the code values - about 32,768 levels. The next stop down gets half of what remains - about 16,384 levels; the stop below that, about 8,192; and so on. Each successive darker stop is recorded with half the tonal resolution of the one above it.

This is how it works, simplified, with linear encoding.

Raw capture is essentially linear; gamma-encoded output formats (such as 8-bit JPEG) apply a non-linear curve that redistributes those levels more evenly across the stops.

But the principle is the same.

If you want to get the most detail from your images, then the major area of your image's histogram should be towards the right.

Try taking a photo of a subject with little dynamic range, like a square foot in the middle of a field of green grass. If the image is underexposed, the histogram will make a narrow band towards the left side. The width of the band represents the detail recorded from the grass.
If the image is properly exposed, the band will be a little wider, showing that more detail is being recorded: the upper area of the histogram holds more detail because more code values are used to record each stop of light.
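The halving of tonal resolution per stop is easy to see in a sketch. This assumes 14-bit linear raw data (a common DSLR bit depth); real raw files also reserve some codes for black level, which this ignores:

```python
# Tonal levels available in each stop below clipping for linear-encoded data.
BITS = 14
MAX_CODE = 2 ** BITS          # 16384 code values in an idealized 14-bit file

levels = []
for stop in range(1, 7):
    hi = MAX_CODE // (2 ** (stop - 1))   # top of this stop's code range
    lo = MAX_CODE // (2 ** stop)         # bottom of this stop's code range
    levels.append(hi - lo)
    print(f"stop {stop} below clipping: {hi - lo} levels")
```

The first stop below clipping gets 8192 levels, the sixth only 256 - which is the whole argument for exposing to the right.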


----------



## Marsu42 (Mar 29, 2012)

helpful said:


> If you want to get the most detail from your images, then the major area of your image's histogram should be towards the right.



Thanks for pointing that out. I only discovered this fact after some trial and error - maybe they should have put a sentence like this in the manual.


----------



## neuroanatomist (Mar 29, 2012)

Marsu42 said:


> helpful said:
> 
> 
> > If you want to get the most detail from your images, then the major area of your image's histogram should be towards the right.
> ...



The concept is termed ETTR (Expose To The Right).


----------



## Marsu42 (Mar 29, 2012)

neuroanatomist said:


> The concept is termed ETTR.


While it might not make me seem like a pro (hey, I'm not!): I didn't know about that, thanks again, Dr. Neuro!

The only thing left for me to wonder about when shooting in low light is whether it's better to have a properly exposed histogram at a higher ISO, or a histogram that leans to the left at a lower ISO. Using LR4 and its smart shadow recovery, I'm tending towards the latter with my APS-C sensor, because anything above ISO 800 really ruins the picture, while 1 EV of underexposure does not.


----------



## helpful (Mar 29, 2012)

Marsu42 said:


> neuroanatomist said:
> 
> 
> > The concept is termed ETTR.
> ...



That is a question for which there may never be a definitive answer. 

In my experience it depends on the camera. Some cameras' high ISO modes are just "software enhanced"; those are better used at lower ISOs with slight underexposure, and then pushed on the computer with software that is better designed and can increase the brightness without producing as much noise as the in-camera "software-enhanced" ISO boosting.

For other cameras that really do have the ability to increase their light sensitivity, it is absolutely better to increase the ISO to get more bits of detail in the file, and then try to decrease noise later. Underexposing reduces the actual amount of data captured from the scene. So in an ideal world, you would get the proper exposure by increasing the ISO and then reduce the noise in post-processing; your image would have higher quality than if you shot underexposed and then boosted the brightness with software.

It all depends on whether the ISO level you are shooting at is provided by the intrinsic analog capabilities of the image sensor and its readout electronics (in which case you should shoot at the higher ISO and reduce noise afterwards), or by the camera's image processing (in which case you should shoot at a lower ISO and increase brightness afterwards).
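The analog-vs-software distinction can be illustrated with a toy noise model (the electron counts here are made up, not any real camera's figures): analog gain amplifies the signal before the read noise added by the downstream electronics, while a software push amplifies signal and downstream noise together.

```python
import math

def snr_db(signal_e, downstream_noise_e, analog_gain, digital_gain):
    """Toy model: shot noise = sqrt(signal); analog gain is applied
    before the downstream (read) noise, digital gain after it."""
    shot = math.sqrt(signal_e)
    sig = signal_e * analog_gain * digital_gain
    noise = math.hypot(shot * analog_gain * digital_gain,
                       downstream_noise_e * digital_gain)
    return 20 * math.log10(sig / noise)

# Same final brightness (8x total gain) for a 100-electron signal
# with 5 electrons of downstream read noise:
print(snr_db(100, 5, analog_gain=8, digital_gain=1))  # high analog ISO
print(snr_db(100, 5, analog_gain=1, digital_gain=8))  # push in software
```

In this model the analog route comes out cleaner, because the read noise is not amplified along with the signal; the software push leaves SNR exactly where it was before the push.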

Boosting ISO in software always reduces the amount of usable data in the image (which is why the Nikon D4 has only about 6-8 stops of DR at ISO 200,000, where the boost is done in software). So that is what you always want to avoid.


----------



## TrumpetPower! (Mar 29, 2012)

I'd caution strongly against exposing to the right, and against even bothering to think about linear encoding and the like. Even if that's still what's going on in the silicon, it's been ages since that sort of folk wisdom had any practical application. Unless you've got a very specific, uncommon, awkward, and carefully-crafted workflow, you're just going to risk blowing your highlights and wind up with unnatural, weird-looking tonal and color shifts.

Expose properly, ideally with a well-calibrated incident meter.

If you've still got crushed shadows and blown highlights in critical areas of the image, you either need better light or you need to go to HDR -- and that's assuming that the crushed shadows and blown highlights are a problem in the first place...the kinds of photography where it's a problem but you can't either fix the light or use HDR are basically nonexistent.

Don't forget that there's a great deal more DR to be had in any well-exposed RAW image than what comes right out of the converter with the default settings. Much more, in fact, than any of the various numerical tests would lead you to believe. And, unless your printer is too big to fit on a tabletop, noise simply isn't a factor any more. Those whose printers take ink by the gallon have to worry about it, but they also generally know how to capture good exposures such that noise again doesn't become a problem or is at least manageable / acceptable.

And that's why the whole brouhaha over the wider dynamic range of the D800 over the 5DIII is meaningless. In the real world, you're never going to find yourself in a situation where the 5DIII has insufficient DR but the D800 is good enough. That minuscule set of scenes where it could theoretically apply still requires either better light or HDR for proper results, even if you've got the D800 in hand.

It's also why Canon went the better route this time 'round: the 5DIII's non-sensor qualities (AF, FPS, etc.) are significantly better than the D800's, and those qualities have the potential for significantly more improvements in image quality than just a few extra megapickles. At least, they do if you're shooting something other than dollar bills taped to brick walls....

Cheers,

b&


----------



## helpful (Mar 29, 2012)

neuroanatomist said:


> Marsu42 said:
> 
> 
> > helpful said:
> ...



Great point. I would like to add a link to a really cool article on the subject as well:

http://www.luminous-landscape.com/tutorials/expose-right.shtml

In summary,

"For Maximum S/N Ratio [i.e., image quality]

"The simple lesson to be learned from this is to bias your exposures so that the histogram is snugged up to the right, but not to the point that the highlights are blown. This can usually be seen by the flashing alert on most camera review screens. Just back off so that the flashing stops."

There are photos of the histograms in the article.

Note that for a dark subject you still want the final image to be darker than for a bright subject, so it is overly simplistic to say "always have the histogram snugged up to the right." But to get the most detail out of the light available from the subject, that is the way.
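The signal-to-noise argument behind the article's advice is easy to quantify for shot noise alone (a simplification that ignores read noise): SNR grows as the square root of the captured signal, so each stop of extra exposure buys roughly a 41% SNR improvement.

```python
import math

def shot_noise_snr(photons):
    # Photon arrivals are Poisson-distributed: noise = sqrt(signal).
    return photons / math.sqrt(photons)

base = shot_noise_snr(1000)
one_stop_more = shot_noise_snr(2000)
print(one_stop_more / base)  # sqrt(2) per stop of extra exposure
```

The photon counts are arbitrary; the sqrt(2)-per-stop ratio holds regardless of the starting signal level.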


----------



## Aglet (Mar 29, 2012)

Mt Spokane Photography said:


> This image shows why we would like more dynamic range in a camera. I took this on a bright day, and the camera exposed for the bright sky. I then switched to manual exposure and exposed for the person, but kept the image as a example.
> 
> A higher dynamic range might let you get good detail from both the bright and the shadow areas, without enough dynamic range, you must chose the area you want to be able to show the detail.
> 
> ...



This image is a good example of how to make use of the full DR of a sensor - although it hasn't yet been processed to do so for final output, if one wanted to achieve that look.

You expose for the highlights so they're not blown. Then you tone-curve adjust to lighten the dark areas. There are numerous ways to do this, but Adobe's Fill Light is one of the simplest and most effective.

Where the DR limitation comes in, for shadows at low ISO, is NOISE.
When you start to brighten those shadows you may also start to see more chroma noise show up in those areas. If the noise is random it's more acceptable and easier to minimize its appearance.

If the noise has a pattern to it, banding, cross-hatching or similar, then it's very difficult to impossible to remove the appearance of this noise.

This is where the Canon vs. Sony/Nikon sensor argument arises.
One company's sensors have more pattern noise at low iso shadows than the other company's, thus limiting the ability to boost shadows and achieve an effective HDR image from one exposure.

Other than that, they all make pretty good cameras, and each has its respective compromises. You choose the one that works the way you need it to.


----------



## helpful (Mar 29, 2012)

TrumpetPower! said:


> I'd caution strongly against expose to the right
> 
> Expose properly, ideally with a well-calibrated incident meter.
> 
> ...



Good points, and actually you have explained very well what the point of ETTR is: to maximize the potential of camera sensors, which are capable of far more than what is offered by default exposure settings. The sensors are so good that the dynamic range of most scenes falls well within the maximum DR capabilities of the sensor. So photographers are faced with a question: use zero exposure compensation, or try to manually expose to get the most detail? Due to the way the data is recorded (in a 12-bit file, for example, 2,048 of the 4,096 available levels describe the single brightest stop), there is much more data recorded on the right side of the histogram.

http://schewephoto.com/ETTR/index.html

There is an amazing example on that page which shows that the little tiny blip on the right side of the histogram (way, way overexposed; extreme ETTR) actually contains as much image detail as almost the entire rest of the histogram. That's not what anyone should do. The point is just to show how much more data and detail are recorded for any part of the image that lands on the right side of the histogram, compared to the left.


----------



## TrumpetPower! (Mar 29, 2012)

helpful said:


> http://schewephoto.com/ETTR/index.html
> 
> There is an amazing example on that page which shows that the little tiny blip on the right side of the sensor (way, way, overexposed, and extreme ETTRing) actually has as much image detail as almost the entire image histogram. That's not what anyone should do. The point is just to show how much more data and details are being recorded for any part of the image that is on the right side of the histogram, compared to the left.



Actually, that page is an excellent example of why _not_ to do ETTR.

Look at the three adjusted histograms. Notice that tall spike on the right on the rightmost histogram? See how it's the only part of the histogram significantly different from the other two? And how it goes all the way to the top?

Even though that spike isn't all the way at the right edge, it still represents saturated, blown-out pixels. All that's happened is that ACR has uniformly reduced the blown pixels to a still-uniform value less than maximum.

Sure, the shadows are cleaner. But I bet a 100% crop of that "Prairie" sign would show much more detail in the properly-exposed version than in the overexposed one. Were that a wedding dress, Mr. Schewe's smartypants exposure hijinks would have made the mother of the bride very angry indeed, even though his histogram showed a "good ETTR" exposure.

And have a look at the waterfall, too. Sure, he was able to recover a good amount of tonality, but the colors are posterized to a ridiculous extent. That exact same sort of posterization is going on in all the other overexposed highlights, with the degree of posterization proportional to the amount of overexposure.

In other words, using ETTR means all your specular highlights will be either devoid of color or have that same sort of severe posterization. Now, granted, the definition of specular highlights is that they get blown...but they'll be much bigger in area and the transition from colorful to blown will be much more abrupt and less colorful. You're basically taking a sledgehammer to your specular highlights, when they really should (in my opinion) remain light and delicate.
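The channel-clipping mechanism behind that color damage can be sketched numerically. The RGB values here are arbitrary; the point is that once any channel clips at sensor saturation, pulling the exposure back down in the converter cannot restore the original hue:

```python
def expose_and_recover(true_rgb, overexposure, clip=1.0):
    """Clip each channel at sensor saturation, then divide the
    exposure back out, the way a raw converter's negative
    exposure adjustment would."""
    captured = [min(c * overexposure, clip) for c in true_rgb]
    return [c / overexposure for c in captured]

true_rgb = (0.9, 0.6, 0.3)              # a warm, colorful highlight
print(expose_and_recover(true_rgb, 1))  # [0.9, 0.6, 0.3] -- hue intact
print(expose_and_recover(true_rgb, 2))  # [0.5, 0.5, 0.3] -- R and G clipped; hue is lost
```

One stop of overexposure clips two of the three channels, and the "recovered" pixel comes back desaturated and hue-shifted: the abrupt white outlines described above.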

By all means, if you like the ETTR look, especially if you're shooting sleeping black cats at the bottom of a coal mine, go for it. But, when that cat wakes up and you want to capture the glint of the candlelight in her eye...use ETTR if you want the glint to be a hard-edged white outline, and expose properly if you want it to look like a candle flame.

Cheers,

b&


----------



## sarangiman (Mar 29, 2012)

> * Current DSLRs are in the 10-12 stop range. (Note: the newest FF bodies from Canon and Nikon haven't been tested yet.)
> 
> * Print film has 9-14 stops depending on emulsion.



dtaylor: Print film is more forgiving, yes, but I've always found it hard to compare the stops of DR in print film vs. digital, because it boils down to: how much noise are you willing to accept in the shadows?

If you look at Roger Clark's treatise on DR (http://www.clarkvision.com/articles/dynamicrange2/), negative film falls apart so quickly that its _acceptable_ DR is significantly lower than that of even earlier-generation DSLRs, and almost on par with some slide films (though with Velvia, for example, the actual signal is very low, so even with lower noise it may be hard to extract that signal without a drum scanner). However, latitude with slide film is terrible; I'm always afraid of clipping shadows or blowing highlights. With negative film, I can overexpose a sunset by 3 stops and still retain color around the sun. But the shadows are still starving for exposure and are just obliterated by noise! For example, here's a shot overexposed by 2 2/3 stops (compared to what Evaluative Metering thought the exposure should be on an EOS-3):






Ektar did well, considering the DR of the scene (the sun is still high up in the sky). But those rocks are incredibly noisy upon closer inspection.

So it's my opinion that your estimation of the DR of negative film is highly dependent upon your subjective opinion of acceptable SNR in the shadows.

Which is why DXO attempts to standardize measurements by setting that acceptable SNR to 1. But, like you, I have my doubts about DXO's measurements when they claim the D800 has 1.4 stops more DR than the D4, normalized or not. At a pixel level (not normalized), DXO claims 13.24 stops of DR for the D800, but only 13.1 stops for the D4 even when _normalized_ (so, less at the pixel level). That just doesn't make any sense.
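For what it's worth, DXO's "Print" numbers normalize every sensor to an 8 MP output: downsampling averages noise away, so per-pixel ("Screen") DR is credited about half a stop per doubling of pixel count. A sketch of that normalization, as I understand their methodology (the megapixel counts are the cameras' nominal specs):

```python
import math

def normalization_gain_stops(megapixels, reference_mp=8.0):
    """Extra DR credited by downsampling to the 8 MP reference:
    SNR improves by sqrt(N / 8 MP), i.e. 0.5 * log2(N / 8 MP) stops."""
    return 0.5 * math.log2(megapixels / reference_mp)

print(round(normalization_gain_stops(36.3), 2))  # D800: about +1.09 stops
print(round(normalization_gain_stops(16.2), 2))  # D4:   about +0.51 stops
```

So the D800's higher pixel count alone buys it roughly half a stop of normalized DR over the D4, which accounts for part (though not all) of the gap DXO reports.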



> True, but print film quickly fell apart on the shadow side.



Exactly. You said it yourself.



> "For Maximum S/N Ratio [i.e., image quality]
> 
> "The simple lesson to be learned from this is to bias your exposures so that the histogram is snugged up to the right, but not to the point that the highlights are blown.



Agreed, but this brings up another issue I've had with most RAW converters and image-processing software for a while: when you ETTR, even without blowing out channels, you quickly desaturate bright regions like skies. Software doesn't make it easy to recover those tones, which is why I've often found myself being careful about how much I ETTR when I want saturated skies in my final photo. Lightroom 4 is changing that with its 'Highlights' and 'Whites' sliders, which now let you really pull back color and tone from bright regions of your photograph. Aperture and Photoshop have allowed you to do this, to a limited extent, with their 'Highlights/Shadows' tools for some time now... but I never found them to be enough, or as good as LR4 is now.



> Where the DR limitations come in, shadows at low iso, is NOISE.
> When you start to brighten those shadows you may also start to see more chroma noise show up in those areas. If the noise is random it's more acceptable and easier to minimize its appearance.
> 
> If the noise has a pattern to it, banding, cross-hatching or similar, then it's very difficult to impossible to remove the appearance of this noise.
> ...



EXACTLY. Thank you for concisely stating the reason why some of us care about banding in so-called 'useless shots taken with the lens cap on'.


----------

