# Why isn't Canon working on DSLRs with higher dynamic range?



## DanG_UE (May 30, 2014)

It seems that many people are interested in film in part because it captures light with a similar range to the human eye. Between that, the wide utilization of RAW, and just the general issue of needing HDR or some other technique to balance many scenes, why isn't Canon focusing some efforts on creating sensors able to capture at least closer to the 24 stops the human eye can see?

I shoot in very high contrast areas with poor lighting (abandoned buildings), and dynamic range is actually the trait I care more about than any other at the moment. Canon has only reached 12.1 EV, while Nikon is at least hitting 14.2 EV.

Fujifilm had that one DSLR back in the day that could recover an insane amount from the highlights. I remember hearing it had something to do with a grid of different sized holes over the pixel array. I'm surprised no one has looked to challenge that technique.


----------



## bdunbar79 (May 30, 2014)

DanG_UE said:


> It seems that many people are interested in film in part because it captures light with a similar range to the human eye. Between that, the wide utilization of RAW, and just the general issue of needing HDR or some other technique to balance many scenes, why isn't Canon focusing some efforts on creating sensors able to capture at least closer to the 24 stops the human eye can see?
> 
> I shoot in very high contrast areas with poor lighting (abandoned buildings), and dynamic range is actually the trait I care more about than any other at the moment. Canon has only reached 12.1 EV, while Nikon is at least hitting 14.2 EV.
> 
> Fujifilm had that one DSLR back in the day that could recover an insane amount from the highlights. I remember hearing it had something to do with a grid of different sized holes over the pixel array. I'm surprised no one has looked to challenge that technique.



So they are hitting 14.2 EV with a 14-bit ADC you are saying?


----------



## Don Haines (May 30, 2014)

bdunbar79 said:


> DanG_UE said:
> 
> 
> > It seems that many people are interested in film in part because it captures light with a similar range to the human eye. Between that, the wide utilization of RAW, and just the general issue of needing HDR or some other technique to balance many scenes, why isn't Canon focusing some efforts on creating sensors able to capture at least closer to the 24 stops the human eye can see?
> ...


And you wonder why people question DXO?


----------



## 3kramd5 (May 30, 2014)

Who says they aren't?

I think the noise (haha) over a stop or two of additional shadow recovery is fairly inconsequential to Canon.

Would it be cool for them to bring in a 24-bit ADC? Yeah sure, but at the end of the day, most people look at screens with maybe 8 stops of DR, or prints with less. When the recording device is already capturing significantly more range than the end product is capable of displaying, photographers still need to employ methods to compress the DR of a scene, either before it hits the film/sensor (ND filters, lighting techniques, etc.) or before it hits the print (exposure stacking, etc.).

Having more information to work with when we enter post would be welcome, but instances in which someone wants to show the words on a newspaper under a picnic table in broad daylight as well as the details of the clouds in the sky above it are pretty rare.

That said, I also welcome DR improvements because of what it means for noise.


----------



## LetTheRightLensIn (May 30, 2014)

Don Haines said:


> bdunbar79 said:
> 
> 
> > DanG_UE said:
> ...



That is not a reason to question DxO.
DxO simply normalizes everything to an 8MP equivalent for fair comparison, so it makes perfect sense that a D800 could end up with more DR than bits in its ADC on their normalized comparison. (If you look at the screen view chart it doesn't go over 14.)

But yeah, it is true that at its full res it doesn't hit over 14 stops (then again, neither does the Canon hit over 12 stops at its full res).
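For anyone curious about the normalization being described, here is a rough sketch of the "print" DR math, assuming noise averages down as the square root of the pixel count when downsampling. The 13.2-stop per-pixel figure below is illustrative, not an official DxO number:

```python
import math

def normalized_dr(screen_dr_stops, sensor_mp, target_mp=8.0):
    """DxO-style 'print' DR: downsampling to target_mp averages pixels,
    which cuts noise by sqrt(N) and so adds log2(sqrt(sensor_mp / target_mp))
    stops to the measured per-pixel ('screen') DR."""
    return screen_dr_stops + math.log2(math.sqrt(sensor_mp / target_mp))

# Illustrative: a 36 MP D800 with ~13.2 stops of per-pixel DR
print(round(normalized_dr(13.2, 36), 1))  # → 14.3
```

So a 36 MP sensor picks up roughly an extra stop in the 8MP-normalized comparison, which is how a 14-bit camera can score above 14 "print" stops without the per-pixel chart ever going over 14.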


----------



## LetTheRightLensIn (May 30, 2014)

Anyway, hopefully they are at this point.


----------



## Orangutan (May 30, 2014)

DanG_UE said:


> It seems that many people are interested in film in part because it captures light with a similar range to the human eye.



Does it? The eye is more of a 10fps video camera than a still camera, if I remember correctly.



> I shoot in very high contrast areas with poor lighting (abandoned buildings), and dynamic range is actually the trait I care more about than any other at the moment. Canon has only reached 12.1 EV, while Nikon is at least hitting 14.2 EV.



CR Geek has made a valid point on numerous occasions: the fraction of scenes where the DR is strictly beyond Canon's range, and strictly within Nikon/Sony range is very small. Even in your circumstance, you probably need more than 14 stops of DR, so you may need to do something HDR-ish.

So,


> Why isn't Canon working on DSLRs with higher dynamic range


Who says they aren't working on one? But if you mean why don't they put one in a next-generation camera, maybe it's because the market doesn't require it.


----------



## Don Haines (May 30, 2014)

DanG_UE said:


> why isn't Canon focusing some efforts on creating sensors able to capture at least closer to the 24 stops the human eye can see?



Yes, the human eye is capable of seeing 24 stops of light... but even my 60D can see 40 stops of light.... (3 stops of aperture, 7 stops of ISO, 18 of shutter speed, and 12 stops of DR).... but only about 12 at a time.
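Don's back-of-envelope arithmetic can be sketched as follows; the per-setting stop counts are his estimates (e.g. a 30 s to 1/8000 s shutter range is log2(30 × 8000) ≈ 17.9 ≈ 18 stops), not official specs:

```python
# Total "capture range" of a 60D across all settings, in stops (EV).
aperture_stops = 3    # typical lens aperture range
iso_stops = 7         # ISO 100 to 12800
shutter_stops = 18    # 30 s to 1/8000 s, log2(30 * 8000) ≈ 17.9
sensor_dr_stops = 12  # single-exposure dynamic range

total = aperture_stops + iso_stops + shutter_stops + sensor_dr_stops
print(total)  # → 40
```

The point being: the 40 stops are available across settings, but only the 12-stop single-exposure window at a time.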

And that's similar to how the eye works. You do not see the whole 24 stops at once. The iris adjusts to let light in, giving several stops of range, and in dim light you go to a very low resolution B+W sensor. Note how your eyes take time to adjust as you go from bright to dim areas... And on top of that, your "video feed" from your eyes is an incredibly processed predictive feed where the output is based on past events and not what you actually see; plus the resolution is highest NEAR the centre, falling off severely at the edges, and blank in the middle (blind spot). The mind takes this incredibly lousy video feed and processes it into what we perceive as vision.

In short, just about every camera out there has resolution and DR superior to the human eye.


----------



## eml58 (May 31, 2014)

Don Haines said:


> DanG_UE said:
> 
> 
> > why isn't Canon focusing some efforts on creating sensors able to capture at least closer to the 24 stops the human eye can see?
> ...



I'm thinking of strapping a pair of 1Dx onto my face, I can always see better through the viewfinder, age related no doubt.


----------



## Don Haines (May 31, 2014)

eml58 said:


> Don Haines said:
> 
> 
> > DanG_UE said:
> ...


Seriously though.... ever used a night vision headset? You can see great in what appears to be complete darkness... imagine a camera like that... ISO 26,214,400?


----------



## Lee Jay (May 31, 2014)

Base ISO DR isn't particularly important, IMHO. Sure, more is better, but the only time I absolutely could not capture a scene with a Canon camera at base ISO due to limited DR, I needed about 30 stops, so one or two extra stops wouldn't have mattered a bit.

High ISO DR, on the other hand, is crucial, and here Canon is on par with the rest, none of which are good enough.

The catch is, any improvement in high ISO DR will help at base ISO also.

Canon has a huge opportunity to leverage dual pixel technology to massively increase DR at base and moderate ISO.


----------



## Lee Jay (May 31, 2014)

Don Haines said:


> In short, just about every camera out there has resolution and DR superior to the human eye.



Baloney.

The human eye can see about 14 stops of DR in a single image without moving. Some cameras can do that in a single image, many cannot. I read this long ago, and tested myself as a result of what I had read, and it's correct.

By the way, the eye can see much more than 24 stops across the entire range of light adaptation and pupil diameters. It depends on your age and diet and such, but 30 is doable (but only 14 at a time, in good light, as I said).


----------



## Aglet (May 31, 2014)

Lee Jay said:


> ..the eye can see much more than 24 stops across the entire range of light adaptation and pupil diameters. It depends on your age and diet and such, but 30 is doable (but only 14 at a time, in good light, as I said).



Good explanation why Exmors are good enough (I'm happy with mine) and Canon is suffering inadequacy anxiety. ;D

When the organic sensor from Fuji-Matsushita sees the light of day, it'll also likely be able to see in the shadows of dark holes at the same time.

EDIT - adding a link to the claimed 15.3 stops of DR on the Sony A7s:
www.sonyalpharumors.com/sony-adds-silent-mode-and-15-3-stops-in-raw-via-fw-upgrade-on-the-new-sony-a7s/

Uhmmmm... isn't that still a 14b camera?...

even downsampled to 8MP, DxO-style... (what's the noise math again?)


----------



## dak723 (May 31, 2014)

Maybe Canon likes the "punchier," higher-contrast images that they have been producing for all these years, rather than the lower-contrast images that you get with more DR. Yes, there are times you want more DR - and professional photographers definitely find the difference between "what you see" and the more limited DR the camera captures to be frustrating - but the average person probably likes the higher-contrast images they are getting with Canon. I know, as a professional artist, that when drawing or painting the usual conventional wisdom is to limit the value range (DR in artist speak) and "push" the darks (make the darks darker). Many artists, when they take pics of their artwork, find the pics to be better! That's because there are more and deeper darks.

So maybe it's not a high priority. Just a total guess on my part.


----------



## jrista (May 31, 2014)

Aglet said:


> Lee Jay said:
> 
> 
> > ..the eye can see much more than 24 stops across the entire range of light adaptation and pupil diameters. It depends on your age and diet and such, but 30 is doable (but only 14 at a time, in good light, as I said).
> ...



It is a 14-bit ADC. I think this has to do with their in-camera image processing (which is kind of what the A7s is all about, and the reason it has such clean ultra-high ISO video). It's shifting the exposure around, lifting shadows and compressing highlights. I'm guessing that's where they get the "15.3 stops DR". It wouldn't be "sensor output RAW", though...the output from the sensor is 14-bit, so it would have to be limited to 14 stops of DR AT MOST (and there is always some overhead, some noise, so it would have to be LESS than 14 stops, i.e. 13.something.)


----------



## zlatko (May 31, 2014)

DanG_UE said:


> It seems that many people are interested in film in part because it captures light with a similar range to the human eye. Between that, the wide utilization of RAW, and just the general issue of needing HDR or some other technique to balance many scenes, why isn't Canon focusing some efforts on creating sensors able to capture at least closer to the 24 stops the human eye can see?



Film never had the dynamic range that you imagine. Film was very far from the human eye — and that was part of its charm. In any event, we don't know what Canon is working on.


----------



## LetTheRightLensIn (May 31, 2014)

Just found out that there are plans to try to start releasing HDR displays starting around 2020 (12bits per channel, HDR, ultra wide gamut, 4k-8k, with true blacks). Kinda like what I had been talking about might arrive one day.


----------



## LetTheRightLensIn (May 31, 2014)

Don Haines said:


> DanG_UE said:
> 
> 
> > why isn't Canon focusing some efforts on creating sensors able to capture at least closer to the 24 stops the human eye can see?
> ...



And yet that is not true at all, despite some claims. Just look at some scenes: you can register both bright and dark parts at once that totally blow out current sensors.


----------



## LetTheRightLensIn (May 31, 2014)

Lee Jay said:


> Base ISO DR isn't particularly important, IMHO. Sure, more is better, but the only time I absolutely could not capture a scene with a Canon camera at base ISO due to limited DR, I needed about 30 stops, so one or two extra stops wouldn't have mattered a bit.



I find plenty of scenes where an extra 2-3 stops would help a ton. These scenes also can be mapped pretty well onto current displays.


----------



## LetTheRightLensIn (May 31, 2014)

Aglet said:


> Uhmmmm... isn't that still a 14b camera?...
> 
> even downsampled to 8MP, DxO-style... (what's the noise math again?)



maybe a7s has a higher bit depth ADC??


----------



## Sporgon (May 31, 2014)

zlatko said:


> DanG_UE said:
> 
> 
> > It seems that many people are interested in film in part because it captures light with a similar range to the human eye. Between that, the wide utilization of RAW, and just the general issue of needing HDR or some other technique to balance many scenes, why isn't Canon focusing some efforts on creating sensors able to capture at least closer to the 24 stops the human eye can see?
> ...



Exactly. Where did this come from? Talk about seeing the past through rose-tinted spectacles!

So much rubbish about what the 'eye can see'. That's like saying your camera lens gives better DR than another. The brain sees, so you could say we see in a form of HDR, or multiple exposures, as someone pointed out, rather than one single exposure. 

If you produced a picture with the same contrast as we 'see' it would be very flat and unappealing. Even old artists added contrast in their paintings, often giving very dark, heavy shadows. 

The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure the Exmor is better.


----------



## SoullessPolack (May 31, 2014)

LetTheRightLensIn said:


> Just found out that there are plans to try to start releasing HDR displays starting around 2020 (12bits per channel, HDR, ultra wide gamut, 4k-8k, with true blacks). Kinda like what I had been talking about might arrive one day.



Lol!

Source?


----------



## Aglet (May 31, 2014)

LetTheRightLensIn said:


> And yet that is not true at all despite some claims. Just look at some scenes, and register both bright and dark parts at once which totally blow out current sensors.
> I find plenty of scenes where an extra 2-3 stops would help a ton. These scenes also can be mapped pretty well onto current displays.



That's what I find. When I render a high-DR scene to a print, especially if it's displayed in subdued lighting, I have to compress the heck out of it, mostly by lifting the darker areas, so that there's something to actually see other than too-dark-to-bother-looking. That's not for all scenes or prints of course, but having the ability to lift those dark areas a lot without FPN interfering is really a nice feature of the Sony sensors.




Sporgon said:


> Exactly. Where did this come from ? Talk about seeing the past through Rose tinted spectacles !
> 
> So much rubbish about what the 'eye can see'. That's like saying your camera lens gives better DR than another. The brain sees, so you could say we see in a form of HDR, or multiple exposures, as someone pointed out, rather than one single exposure.
> 
> ...



I disagree with you, heartily!
When you look at a scene, you tend to look around it, and the rapid adjustments your eye makes allow you to see and interpret a wide natural DR.
If you don't map that effect into a large print, at least to some extent, then it's like staring at the brightest part and not really seeing the detail in the darker areas. So if your eyeballs don't move, go ahead and shoot and print that way.
I produce images for people with articulated eyeballs.


----------



## jrista (May 31, 2014)

Aglet said:


> Sporgon said:
> 
> 
> > Exactly. Where did this come from ? Talk about seeing the past through Rose tinted spectacles !
> ...



This is correct in one sense. The eye is constantly processing, and has a refresh rate of at least 500 frames per second in normal lighting levels (under low light levels, it can be considerably slower, and under very bright levels it can be quite a bit faster.) That high refresh rate, more so than the movement of the eye, is what's responsible for our high moment-DR. We can see a lot more than 14 stops of DR in any given second, but that's because our brains have effectively HDR-tonemapped ~500 individual frames. 

When it comes to print, you're not entirely correct. I've done plenty of printing. You have to be VERY careful when tweaking shadows to lift enough detail that they don't look completely blocked, but not lift so much that you lose the contrast. The amazing thing about our vision is that while we see a huge dynamic range, what we see is still RICH with contrast. When it comes to photography, when we lift shadows, we're compressing the original dynamic range of the image into a LOWER contrast outcome. With a D800, while technically you do have the ability to lift to your heart's content, doing so is not necessarily the best thing if your goal is to reproduce what your eyes saw. It's a balancing act between lifting the shadows enough to bring out some detail, but not so much that you wash out the contrast.

Canon cameras certainly do have more banding noise. However, just because they have banding noise does not mean you have to print it. After lifting, you can run your print copies through one of a number of denoising tools these days that have debanding features. I use Topaz DeNoise 5 and Nik Dfine 2 myself. Both can do wonders when it comes to removing banding. Topaz DeNoise 5 in particular is a Canon user's best friend, as its debanding is second to none, and it has a dynamic range recovery feature. You can also easily use your standard Photoshop masking layers to protect highlight and midtone regions of your images and only deband/denoise the shadows, and avoid softening higher frequency detail in regions that don't need any noise reduction at all.

This is a little bit of extra work, but you CAN recover a LOT of dynamic range from Canon RAW images. They use a bias offset, rather than changing the black point in-camera. As such, even though Canon's read noise floor is higher at low ISO than Nikon or Sony cameras, there are still a couple stops of recoverable detail interwoven WITHIN that noise. Once you deband...it's all there. You can easily get another stop and a half with debanding, and if you're more meticulous and properly use masking, you can gain at least two stops. That largely negates the DR advantage that Nikon and Sony cameras have. You won't have quite the same degree of spatial resolution in the shadows as an Exmor-based camera, but our eyes don't pick up high frequency details all that well in the shadows like that anyway, so at least personally, I haven't found it to be a significant issue.

There are benefits to having more DR in camera. Not the least of which is a simplified workflow...you don't have to bother with debanding, and you have better spatial resolution in the shadows. That said, if you ignore Canon's downstream noise contributors, their SENSORS are still actually quite good...the fact that you can reduce the read noise and recover another stop or two of usable image detail means their sensors are just as capable as their competitors. Their problem is really eliminating the downstream noise contributors. The secondary amplifier, the ADC, and even the simple act of shipping an analog signal across an electronic bus. Canon can solve most of those problems by moving to an on-die CP-ADC sensor design, similar to Exmor. They have the technology to do just that as well...they have a CP-ADC patent. They also have a number of other patents that can reduce dark current, adjust the frequency of readout to produce lower noise images (at a slower frame rate, say) or support higher frame rates (for action photography). Canon has the patents to build a better, lower noise, high dynamic range camera. It's really just a question of whether they will put those patents to work soon, or later...or maybe even not at all. (I'm pretty sure they have had the CP-ADC patent at least since they released the 120mp 9.5fps APS-H prototype sensor...which was years ago now.)


----------



## 9VIII (May 31, 2014)

Clever wording, Jrista. Since that conversation we had in (January? February?)... today I happened to have the afternoon off, and knowing where this conversation was headed, I actually read the relevant chapters in "Principles of Neural Science" (absolutely fantastic book, not difficult to read at all).
The only direct reference they make to dynamic range is 10 stops.


----------



## 100 (May 31, 2014)

Sporgon said:


> The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure the Exmor is better.



If the scene has less dynamic range than the sensor is capable of recording you don’t need more dynamic range, that’s true for a lot of situations. However, filter manufacturers sell loads of 1-3 stop GND filters so a couple more stops of dynamic range is useful to a lot people as well, even people who understand exposure, or should I say, especially to people who understand exposure.


----------



## jrista (May 31, 2014)

9VIII said:


> Clever wording Jrista. Since that conversation we had in (January? February?) Today I actually happened to have the afternoon off and knowing where this conversation was headed actually read the relevant chapters in "Principles of Neural Science" (absolutely fantastic book, not difficult to read at all).
> The only direct reference they make to dynamic range is 10 stops.



Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds. 

Make sure you bring a camera along. Meter the deepest shadows under the tree using aperture priority mode, and set the ISO to 100. Then meter the brightest part of the clouds. Compute the difference via the shutter speed (which should be the only setting that changes as you meter). In my experience, that is a dynamic range of 16-17 stops at least, if not more. My eyes have had no trouble seeing bright white cloud detail simultaneously with seeing detail in the depths of the shadows under a large pine tree. I do mean simultaneously...you want to stand back far enough that you can see both generally within the central region of your eye, and be able to scan both the shadows and the highlights of the scene without having to move your eyes much. The sun should be behind you somewhere, otherwise you're looking at significantly more dynamic range and your eyes WON'T be able to handle it.
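The metering arithmetic above can be sketched like this. Since aperture and ISO are held fixed, the EV spread of the scene falls straight out of the ratio of the two metered shutter speeds; the readings below are hypothetical, just to show the calculation:

```python
import math

def scene_stops(shadow_shutter_s, highlight_shutter_s):
    """With aperture and ISO held fixed, the EV spread between two meter
    readings is the log2 ratio of the metered shutter speeds."""
    return math.log2(shadow_shutter_s / highlight_shutter_s)

# Hypothetical readings: deep shadows meter at 1/2 s, bright clouds at 1/8000 s
print(round(scene_stops(1/2, 1/8000), 1))  # → 12.0
```

Swap in your own two readings; each doubling of the ratio is one more stop of scene DR.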

Whatever 9VIII's books may say, this is a real world test. Compare what your eyes SEE with what your camera meters. You'll be surprised how much dynamic range there is in such a simple scene, and the fact that your eyes can pick it all up in a moment...well, to me, that means our vision is certainly more than 10 stops of dynamic range "at once", and more than even a D800. The thing about a neuroscience book is that, whatever it may say, it can only be a guess. They cannot actually measure the dynamic range of human vision; at best they can only measure the basic neural response of the human EYE, which is not the same thing as vision. The eye is the biological device that supports vision, but vision is more than the eye.


----------



## 9VIII (May 31, 2014)

The funny thing is Neuro was using that book as his reference for saying 20 stops, but I guess you have to look at some of the data beyond just the author's words.

I wholeheartedly agree though, practical testing trumps textbooks. I have the same argument with people over and over concerning resolution (we don't have nearly enough).


----------



## Orangutan (May 31, 2014)

jrista said:


> Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.



Could you post a picture of this scene? I'm having difficulty imagining how I can simultaneously (without moving my eyes) see into the dark depths of a stand of trees, while simultaneously seeing clouds. The closest I can imagine is a brightly lit flower nearer to me than a stand of trees, but both along the same line-of-sight.


----------



## jrista (May 31, 2014)

Orangutan said:


> jrista said:
> 
> 
> > Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.
> ...



You move your eyes, just not a lot. The point is the scene should generally be static...you shouldn't be looking in one direction for the shadows, then turning around 180 degrees for the highlights. The point is that, while our eyeballs themselves, our retinas and the neurochemical process that resolves a "frame", may only be capable of 5-6 stops of dynamic range, our "vision", the biochemical process in our brains that gives us sight, is working with FAR more information than what our eyes process at any given moment. It's got hundreds if not thousands of "frames" that it's processing, one after the other, in a kind of circular buffer. It's gathering up far more color and contrast information from all the frames in total than each frame has in and of itself. The microscopic but constant movements are what give us our high resolution vision...it's like superresolution, so we get HDR and superresolution at the same time, all in the span of a second or two.

My point is that if I look at the deep shadows under a large tree, then a moment later flick my eyes to a cloud, then a moment later flick back, I can see detail in both. There is no delay, there is no adjustment period. My visual perception is that I can see details in extremely bright highlights SIMULTANEOUSLY with seeing details in extremely dark shaded details. My "vision" is better than what my eyeballs themselves are capable of (which, really, last I checked, was only about 5-6 stops of dynamic range, and actually less color fidelity and resolution than what we actually "see" in our brains.) Our brains are doing a degree of processing that far outpaces any specific "device". Our vision is the combination of a device and a high powered HDR/Superresolution crunching computer that does this amazing thing all at once.


----------



## Lee Jay (May 31, 2014)

Individual rods and cones have relatively poor dynamic range. However, the combination of both and the fact that sensitivity can vary from one place to another over the surface of the retina means that the combined DR of the entire imaging surface is quite good.


----------



## Orangutan (May 31, 2014)

jrista said:


> Orangutan said:
> 
> 
> > jrista said:
> ...



Yes, that I'd believe. I think it's fair to say it's our brains that actually "see" -- our eyes just feed some raw info to the brain.


----------



## neuroanatomist (Jun 1, 2014)

Lee Jay said:


> Individual rods and cones have relatively poor dynamic range. However, the combination of both and the fact that sensitivity can vary from one place to another over the surface of the retina means that the combined DR of the entire imaging surface is quite good.



The resolution of 'the entire imaging surface' of the retina...sucks. The fovea centralis has the highest acuity, outside of that small, central area the acuity drops precipitously. An analogy might be an 18 MP FF sensor (24x36mm) where the central 3x3mm area delivers 9 MP of the final image, with the other 9 MP coming from the remaining 99% of the sensor. So, the fovea basically has a 100-fold higher resolution than the rest of the retina. There are no rods in the fovea, only cones. 

In bright light, rhodopsin (the visual pigment in rods) is fully isomerized, meaning rods are fully saturated, and it takes several seconds for the photoactivatable form of rhodopsin to begin to be regenerated (and many minutes for full regeneration). At light levels where rhodopsin is transducing photon input, cone opsins are not receiving sufficient light to signal. So, your statement that the combination of rods and cones leads to a higher DR is practically incorrect, since the functional activation of the rod vs. cone systems occurs at very different light levels – the two aren't active simultaneously. 'Variation (in DR) over the surface of the retina' is also not practically useful, since it's the fovea that delivers the high-acuity information.

Feel free to debate the point, but bear in mind that I taught neuroscience to medical and graduate students for 8 years, and prior to that I studied the isomerization of 11-cis to all-trans retinal using time-resolved resonance Raman spectroscopy (the isomerization takes ~6 femtoseconds, if you were curious...a 'shutter speed' of 1/166,000,000,000,000 s). 8)
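For the curious, the quoted "shutter speed" is pure arithmetic, nothing more: inverting ~6 femtoseconds gives a reciprocal on the order of the 1/166,000,000,000,000 s figure above (the small difference is just rounding):

```python
# Express a ~6 femtosecond event as a photographic 'shutter speed' of 1/x s.
t_isomerization = 6e-15   # seconds
x = 1 / t_isomerization   # ≈ 1.67e14
print(f"1/{x:.3g} s")     # → 1/1.67e+14 s
```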


----------



## Orangutan (Jun 1, 2014)

neuroanatomist said:


> Feel free to debate the point, but bear in mind that I taught neuroscience to medical and graduate students for 8 years, and prior to that I studied the isomerization of 11-cis to all-trans retinal using time-resolved resonance Raman spectroscopy (the isomerization takes ~6 femtoseconds, if you were curious...a 'shutter speed' of 1/166,000,000,000,000 s). 8)


----------



## Lee Jay (Jun 1, 2014)

neuroanatomist said:


> Lee Jay said:
> 
> 
> > Individual rods and cones have relatively poor dynamic range. However, the combination of both and the fact that sensitivity can vary from one place to another over the surface of the retina means that the combined DR of the entire imaging surface is quite good.
> ...



I took neural anatomy from a very bright guy (Marvin Lutches at the University of Colorado), and we did some tests to show that both the variation in sensitivity across the retina and lateral inhibition (I suppose most here would call it on-sensor sharpening) were real and detectable while staring with one eye at one target.

The test I did on myself was to stare out a window, with one eye, while trying to read a sign in the foreground in my near-central vision (a couple degrees out - right next to the window from my perspective). The exposure difference between the window and the sign was 12 stops, and the sign was dark brown. I could easily distinguish details in the clouds while being able to read the sign at the same time.


----------



## neuroanatomist (Jun 1, 2014)

Lee Jay said:


> I took neural anatomy from a very bright guy (Marvin Lutches at the University of Colorado), and we did some tests to show that both the variation in sensitivity across the retina and lateral inhibition (I suppose most here would call it on-sensor sharpening) were real and detectable while staring with one eye at one target.
> 
> The test I did on myself was to stare out a window, with one eye, while trying to read a sign in the foreground in my near-central vision (a couple degrees out - right next to the window from my perspective). The exposure difference between the window and the sign was 12 stops, and the sign was dark brown. I could easily distinguish details in the clouds while being able to read the sign at the same time.



I wasn't arguing against the relatively good DR of the human visual system, merely pointing out the fallacies in your explanation of the physiology underlying that DR. 

Google doesn't turn up much about Professor Marvin Lutches beyond a page on dragonfly flight. I studied neuroanatomy under, and subsequently taught for, Marian Diamond. I even had the opportunity to analyze the dendritic spines of Albert Einstein's brain in her lab. Good times...


----------



## Darkmatter (Jun 1, 2014)

Even though you don't think you're moving your eyes when looking at clouds and tree bark, you more than likely are, just a small amount, and at hard-to-notice speeds. You're so used to this process that it's not something you can be sure you would notice, since you normally don't. Your brain combines all these partial images together to give you what you perceive as "the world." It is also very possible that you aren't truly seeing the bark. You've seen it many times before, even if you don't realize it. Your brain will often fill in information that it isn't actually "seeing" at that very moment, because it either knows it is there or believes it is there. This is one of the reasons why eyewitness accounts of sudden and traumatic crimes can be notoriously inaccurate. As an example, a person may honestly believe that a mugger has a gun in the hand that is down at his side near his (the robber's) pocket, when in reality what is there is a dark pattern on his jacket pocket, either from the jacket's colour/style or from a shadow. The witness isn't lying; he or she was just so afraid for his or her life that he or she imagined a gun was there.

Probably the only real, reliable way to conduct an experiment like looking at a very dark and a very bright thing at the same time, and knowing that you didn't look at each separately, would be to have a special camera (or cameras) closely monitoring your head, eyeballs, and pupils for any movement. It would also have to be an artificial or staged scene, so that there was some symbol or other target in the dark area that you would have to identify without any movement. I honestly don't think "not moving" at all is possible without medical intervention, such as somehow disabling a person's ability to move at all: body, neck, head, even eyeballs.

Not exactly a fun experiment.


----------



## jrista (Jun 1, 2014)

Darkmatter said:


> Even though you don't think you're moving your eyes when looking at clouds and tree bark, you more than likely are, just a small amount, and at hard-to-notice speeds. You're so used to this process that it's not something you can be sure you would notice, since you normally don't. Your brain combines all these partial images together to give you what you perceive as "the world." It is also very possible that you aren't truly seeing the bark. You've seen it many times before, even if you don't realize it. Your brain will often fill in information that it isn't actually "seeing" at that very moment, because it either knows it is there or believes it is there. This is one of the reasons why eyewitness accounts of sudden and traumatic crimes can be notoriously inaccurate. As an example, a person may honestly believe that a mugger has a gun in the hand that is down at his side near his (the robber's) pocket, when in reality what is there is a dark pattern on his jacket pocket, either from the jacket's colour/style or from a shadow. The witness isn't lying; he or she was just so afraid for his or her life that he or she imagined a gun was there.



You're talking about a different kind of eye movement, but you are correct: our eyes are always adjusting.

Regarding eyewitness accounts...the reason they are unreliable is that people are unobservant. There are some individuals who are exceptionally observant and can recall a scene, such as a crime, in extensive detail. But how observant an individual is, like many human traits, falls on a bell curve. The vast majority of people are barely observant of anything not directly happening to them, and even in the case of things happening to them, they still aren't particularly aware of all the details. (Especially this day and age...the age of endless chaos, immeasurably hectic schedules, and the ubiquity of distractions...such as our phones.)

I believe the brain only fills in information if it isn't readily accessible. I do believe that, for the most part, when we see something, the entirety of what we see is recorded. HOW it's recorded is what affects our memories. Someone who is attuned to their senses is evaluating and reevaluating the information passing into their memories more intensely than the average individual. The interesting thing about memory is that it isn't just the act of exercising it that strengthens it...it's the act of evaluating and _associating _memories that TRULY strengthens them. Observant individuals are more likely to be processing the information their senses pick up in a more complex manner, evaluating the information as it comes in, associating that new information with old information, and creating a lot more associations between memories. A lot of what gets associated may be accidental...but in a person who has a highly structured memory, with a strong and diverse set of associations between memories, one single observation can create a powerful memory that is associated with dozens of other powerful memories. The more associations there are, the more of your overall memory the brain accesses, and thereby strengthens and enhances, compared to when you have fewer associations.

I actually took a course on memory when I first started college some decade and a half ago. The original intent of the course was to help students study, and remember what they study. The course delved into the mechanics and biology of sensory input processing and memory. Memory is a multi-stage process. Not just short term and long term, but immediate term (the things you remember super vividly because they just happened, but this memory fades quickly, in seconds), short term, mid term, and long term (and even that is probably not really accurate, it's still mostly a generalization). Immediate term memory is an interesting thing...you only have a few "slots" for this kind of memory. Maybe 9-12 at most. As information goes in, old information in this part of your memory must go out. Ever have a situation where you were thinking about something, were distracted for just a moment, but the thing you were thinking about before is just....gone? You can't, for the life of you, remember what it was you were thinking about? That's the loss of an immediate term memory. It's not necessarily really gone...amazingly, our brains remember pretty much everything that goes into them, it's just that all the memories we create are not ASSOCIATED to other things that help us remember them. It's just that the thing you were thinking about was pushed out of your immediate-mode memory by that distraction. It filled up your immediate mode memory slots.

Your brain, to some degree, will automatically move information from one stage to the next; however, without active intervention on your part, how those memories are created and what they are associated with may not be ideal, and may not be conducive to your ability to remember them later on. The most critical stage for you to take an active role in creating memories is when new information enters your immediate-term memory. You have to actively think about it, actively _associate _it with useful other memories. Associations of like kind, and associations of context, can greatly improve your ability to recall a memory from longer-term modes of memory storage. The more associations you create when creating new memories, the stronger those memories will be, and the longer they are likely to last (assuming they continue to be exercised). The strongest memories are those we exercise the most, and which have the strongest associations to other strong memories. Some of the weakest memories we have are formed when we're simply not in a state of mind to take any control over the creation of our memories...such as, say, when a thug walks in and puts a gun in someone's face. Fear and fight-or-flight responses, kicked into gear by a strong surge of hormones, can completely mess with our ability to actively think about what's going on, and instead...we react (largely out of pure self-preservation, in which case, if we do form memories in such situations, they are unlikely to be about anyone but ourselves, and if they are about something else, they aren't likely to be very reliable memories).

It was one of the best courses I ever took. Combined with the fact that I'm hypersensitive to most sensory input (particularly sound, but sight and touch as well...smell not so much, but I had some problems with my nose years ago), the knowledge of how to actively work sensory input and properly use my memory has been one of the most beneficial things to come out of my time in college.

If you WANT to have a rich memory, it's really a matter of using it correctly. It's an active process as much as a passive one, if you choose to be an observant individual. If not...well, your recallable memory will be more like swiss cheese than a finely crafted neural network of memories and associations, and yes...the brain will try to fill in the gaps. Interestingly, when your brain fills in the gaps, it isn't working with nothing. As I mentioned before, we remember most of what goes in...it's just that most of the information is randomly dumped, unassociated or weakly associated to more random things. The brain knows that the information is there, it just doesn't have a good record of where the knowledge is. I don't think that really has to do with the way our eyes function...it has to do with how the brain processes and stores incoming information. Our eyes are simply a source of information, not the tool that's processing and storing that information.



Darkmatter said:


> Probably the only real, reliable way to conduct an experiment like looking at a very dark and a very bright thing at the same time, and knowing that you didn't look at each separately, would be to have a special camera (or cameras) closely monitoring your head, eyeballs, and pupils for any movement. It would also have to be an artificial or staged scene, so that there was some symbol or other target in the dark area that you would have to identify without any movement. I honestly don't think "not moving" at all is possible without medical intervention, such as somehow disabling a person's ability to move at all: body, neck, head, even eyeballs.
> 
> Not exactly a fun experiment.



You do move your eyes, a little. You can't really do the experiment without moving them back and forth. The point is not to move your eyes a lot. If you look in one direction at a tree, then another direction at the cloud, you're not actually working within the same "scene". In my case, I was crouched down, the tree in front of me, the cloud just to the right behind it. Your whole field of view takes in the entire scene at once. You can move your eyes just enough to center your 2° foveal spot on either the shadows or the cloud highlights, but don't move around, don't move your head, don't look in a different direction, as that would mess up the requirements of the test. So long as "the scene" doesn't change...so long as the whole original scene you pick stays within your field of view, and all you do is change what you point your foveal spot at, you'll be fine.

To be clear, a LOT is going on every second that you do this test. Your eyes are sucking in a ton of information in tiny fractions of a second, and shipping it to the visual cortex. Your visual cortex is processing all that information to increase resolution, color fidelity, dynamic range, etc. So long as you pick the right test scene, one which has brightly lit clouds and a deep shaded area, you should be fine. In my experience, when using my in-camera meter, the difference between the slowest and fastest shutter speed is about 16 stops or so. I wouldn't say that such a scene actually had a full 24 stops in it...that would be pretty extreme (that would be another eight DOUBLINGS of the kind of tonal range I am currently talking about...so probably not a chance). But I do believe it is more dynamic range than any camera I've ever used was capable of handling in a single frame.
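The meter-based estimate above can be put in code: with aperture and ISO held fixed, the scene's dynamic range in stops is just the base-2 log of the ratio between the shutter speeds the meter picks for the darkest and brightest parts. The shutter speeds below are illustrative, not the actual readings from this test.

```python
import math

# Scene DR in stops from two spot-meter readings at fixed aperture/ISO:
# each doubling of the required shutter time is one stop.
def scene_dr_stops(shadow_shutter_s: float, highlight_shutter_s: float) -> float:
    return math.log2(shadow_shutter_s / highlight_shutter_s)

# e.g. the meter wants 8 s in the deep shade but 1/8000 s on the clouds:
print(f"{scene_dr_stops(8.0, 1/8000):.1f} stops")  # 16.0 stops
```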


----------



## Hillsilly (Jun 1, 2014)

As a film shooter, I've read DanG_UE's comments with interest. But, I wouldn't be so quick to write off current digital sensors. While sensors can certainly be improved, when I go looking for it, I'm always surprised by the amount of detail that is hidden in highlights and shadows. 

Rather than discussing the DR of the eye (as fascinating as that is), a more interesting experiment would be for the OP (or anyone) to upload a RAW image from a current Canon DSLR, shot at ISO 100, where lack of DR is perceived to be a problem, i.e. an image that appears to have a similar amount of blown highlights and dark shadows and where exposure was set somewhere near the middle (is that a correct understanding of the problem?). We can all then take turns to see if these images can be fixed, or whether there is a major problem with the current state of sensors. Getting a few people's input on this could also highlight which programs and post-processing techniques work best for this.


----------



## jrista (Jun 1, 2014)

dilbert said:


> jrista said:
> 
> 
> > ...
> ...



B&H Photo's product page, under specifications:

http://www.bhphotovideo.com/c/product/1044728-REG/sony_ilce7s_b_alpha_a7s_mirrorless_digital.html

It took me about 3 seconds to find that. I just searched for "Sony A7s Bit Depth", and that was one of the first five links (the rest, for some reason, were all about the A77...)

It states:


----------



## jrista (Jun 1, 2014)

If Sony had come out with a sensor with on-die 16-bit ADCs, that would have been far, far bigger news than the fact that it can do ISO 409k. No one really cares about ISO 409k. The noise levels at that ISO are a simple matter of physics when it comes to stills. 

When it comes to the A7s video performance, their DSP, BIONZ X, is the bigger news, since it's doing a significant amount of processing on the RAW signal to reduce noise at ultra-high ISO settings. The BIONZ X image processor does 16-bit IMAGE PROCESSING; however, the sensor output is 14-bit, and the output of the image processing is ALSO 14-bit. There is a page on Sony's site somewhere that describes this; as soon as I find it, I'll link it.

The BIONZ X processor is the same basic thing as Canon's DIGIC and Nikon's EXPEED. It's the in-camera DSP. Canon's DIGIC 6 has a lot of similar capabilities to Sony's BIONZ X. They both do advanced noise reduction for very clean high-ISO JPEG and video output. They both do high-quality detail enhancement as well. I don't believe Canon's DIGIC 6 does 16-bit processing; it's still 14-bit as far as I know. The use of 16-bit processing can help maintain precision throughout the processing pipeline, however since the sensor output is 14-bit, you can never actually increase the quality of the information you start with. That would be like saying that when you upscale an image in Photoshop, you "extracted" more detail out of the original image. No, you don't extract detail when you upscale...you FABRICATE more information when you upscale.

Same deal with BIONZ X...during processing, having a higher bit depth reduces the impact of errors (especially if any of that processing is floating point), however it cannot create more out of something you didn't have to start with. That is evident by the fact that Sony is still outputting a 14-bit RAW image, instead of a 16-bit RAW image, from their BIONZ X processor.

UPDATE: 

From the horse's mouth: http://discover.store.sony.com/sony-technology-services-apps-NFC/tech_imaging.html#BIONZ



> *16-bit image processing and 14-bit RAW output*
> 16-bit image processing and 14-bit RAW output help preserve maximum detail and produce images of the highest quality with rich tonal gradations. The 14-bit RAW (Sony ARW) format ensures optimal quality for later image adjustment (via Image Data Converter or other software).



Higher precision processing, but still 14-bit RAW. The fact that the raw sensor output is 14-bit means that the dynamic range of the system cannot exceed 14 bits. The use of 16-bits during processing increases the working space, so when Sony generates a JPEG or video, it can lift shadows and compress highlights with more precision and less error. I suspect their "15.3 stops of dynamic range" is really referring to the useful working space within the 16-bit processing space of BIONZ X. Simple fact of the matter, though, is that when it comes to RAW...it's RAW. Your dynamic range is limited by the bit depth of the ADC. Since the ADC is still 14-bit, and ADC occurs on the CMOS image sensor PRIOR to processing by BIONZ X, then any processing Sony does in-camera can do no more, really, than what you could do with Lightroom yourself.
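The bit-depth ceiling described above is easy to state in code: a linear N-bit ADC can separate at most about N stops between its smallest and largest non-zero codes, no matter how wide the downstream processing space is. This is a sketch of the arithmetic, not Sony's actual pipeline.

```python
import math

# A linear N-bit ADC spans codes 1..(2**N - 1); the ratio of the
# largest to the smallest representable signal caps DR at ~N stops.
def adc_dr_ceiling_stops(bits: int) -> float:
    return math.log2((2 ** bits) - 1)

print(f"14-bit ADC ceiling: {adc_dr_ceiling_stops(14):.2f} stops")  # just under 14
print(f"16-bit ADC ceiling: {adc_dr_ceiling_stops(16):.2f} stops")  # just under 16
```

Processing the 14-bit output in a 16-bit working space reduces rounding error during tone adjustments, but the ceiling is set at conversion time.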


----------



## K-amps (Jun 1, 2014)

Being in my late teens in the 80s and witnessing how the DACs in CD players went from 16 to 18 to 20 to 24 bits every 2-3 years, with the sampling frequency climbing even faster than that....

Apologies for the stupid question in advance: 

What's the huge deal with this 14-bit wall in digital photography? Why can't they make a 16-bit processor and give us 16 stops of DR?


----------



## neuroanatomist (Jun 1, 2014)

dilbert said:


> I'm still waiting for your reply to me asking for a reference (you know, a URL) to something that supports your claim of the Sony a7s only having a 14bit ADC ...





dilbert said:


> That's the bit depth of the file, not the width of the ADC.



Let's turn this around. Can you provide a reference showing the a7S has a 16-bit ADC? As far as I know, there have not yet been any dSLRs with greater than a 14-bit ADC. If Sony was the first to release a true 16-bit camera, one would think they wouldn't be shy about it...it should be childishly simple for you to provide many such references. 

Since they are claiming >15 stops of DR, it's certainly in their best interest to not make much of the fact that they're using a 14-bit ADC which cannot deliver the actual DR they claim, meaning they're merely cooking the RAW file to include fabricated data. 

Speaking of Sony lying about their RAW data, perhaps you could also provide evidence that the a7S outputs a real 14-bit (or higher) RAW file, instead of using the *lossy* 11+7-bit delta compressed RAW format used by the a7 and a7R.


----------



## Orangutan (Jun 1, 2014)

dilbert said:


> jrista said:
> 
> 
> > B&H Photo's product page, under specifications:
> ...



Why would they do 16-bit ADC, which generates a 16-bit data set, then downsample it back to 14-bits? Technically you are correct that they're not the same, but it would be monumentally stupid for Sony to throw away data and incur additional processing overhead. It's therefore reasonable to assume they are the same.
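For illustration of why that would be wasteful: if a hypothetical 16-bit conversion really were truncated back to 14 bits, the discarded bits would be unrecoverable, since four adjacent 16-bit codes collapse onto each 14-bit code.

```python
# Truncating a 16-bit sample to 14 bits drops the two least-significant
# bits; four adjacent 16-bit codes map onto a single 14-bit code.
def truncate_16_to_14(sample: int) -> int:
    return sample >> 2

print(truncate_16_to_14(0xFFFF))                      # 16383 (the 14-bit maximum)
print([truncate_16_to_14(s) for s in (4, 5, 6, 7)])   # [1, 1, 1, 1]
```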


----------



## Orangutan (Jun 1, 2014)

My comments are based on my (imperfect) memory of science podcasts and other science journalism I've encountered in the last few years. If you have contradictory info I'd love to see a reference.



jrista said:


> Regarding eye-witness accounts...the reason they are unreliable is people are unobservant. There are some individuals who are exceptionally observant, and can recall a scene, such as a crime, in extensive detail.



My understanding is that new research has shown this to be wrong. There are a few "savant" types who have very precise/correct memory function, but for "neurotypical" (i.e. "normal") people, this is not so.



> I believe the brain only fills in information if it isn't readily accessible. I do believe that for the most part, when we see something, the entirety of what we see is recorded.



Again, my understanding is that recent research shows the adage "seeing is believing" has it backwards: it should be "believing is seeing." The brain does not record raw image info at all, but constructs a reality that incorporates visual data with existing beliefs and expectations. It's that highly processed "reality" that's recorded. As an example, back in 2004 there was that videotape of a purported Ivory-Billed Woodpecker. Subsequent analysis showed that it was almost certainly the rather common pileated woodpecker. The "eyewitnesses," however, recall seeing detail that would clearly distinguish it as an IBW. Even if it was a pileated, those witnesses may truthfully and genuinely believe they saw those distinguishing characteristics.


----------



## neuroanatomist (Jun 1, 2014)

dilbert said:


> Nope. The specs for the a7s only quote the bit depth for the image files, not the ADC.


So, you are suggesting, or at least implying, they are using an ADC with more than 14-bits, but you have no evidence to back that up. Simple logic would say that if Sony has indeed released a 16-bit camera, they would promote that fact and not throw away those extra bits. *Conclusion: the a7S uses a 14-bit ADC.*




dilbert said:


> Or maybe they're using a spreading function (i.e applying a curve to the sensor feed) rather than doing a linear conversion?


Whether they are interpolating or extrapolating is irrelevant - the former is fabricating data inside the existing range, the latter is fabricating data outside the existing range, but either way the data are being fabricated. *Conclusion: Sony continues to lie about their RAW image data.*




dilbert said:


> Note that at present the claim for 15.3 stops of DR comes from a 3rd party ... even I'm dubious on that. I'll wait and see what Sony says and more importantly, what DxO can measure.


It's a little sad when your factual errors lead you to qualify and hedge your own statements. The screenshot below is from Sony's a7S page. *Conclusion: dilbert doesn't bother to check his facts.*


----------



## Sporgon (Jun 1, 2014)

100 said:


> Sporgon said:
> 
> 
> > The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However if you have no understanding of exposure the Exmor is better.
> ...



Those people will know that by using a 1-3 stop GND you are able to get more light to the non-ND part of the frame, which, depending upon what you are shooting, results in improved data from the dark areas whether you are using a camera capable of 12 or 14 stops of DR. So you probably have as many Sonikon photographers buying them as Canon shooters.


----------



## jrista (Jun 1, 2014)

dilbert said:


> neuroanatomist said:
> 
> 
> > Let's turn this around. Can you provide a reference showing the a7S has a 16-bit ADC?
> ...



Did you completely miss the post where I linked directly to Sony's site, which SHOWS the sensor output (which CONTAINS the ADC) is 14-bit? How convenient...that you read my first post, and just magically didn't happen to see my second post. Sony's OWN SITE says the sensor output is 14-bit. The sensor is an Exmor. Exmor uses CP-ADC ON-DIE. The last output of the sensor is FROM the ADC.

Therefore...the A7s IS 14-BIT! I love how you DEMAND I PROVE things to you, then simply ignore the FACTS when I smack you upside the face with them.

There is absolutely ZERO question about it. The facts are the facts. The A7s is still "just" an Exmor, and Exmors use 14-bit ADCs. Here, I'll smack you upside the face with them *again*:

From the horse's mouth: http://discover.store.sony.com/sony-technology-services-apps-NFC/tech_imaging.html#BIONZ



> *16-bit image processing and 14-bit RAW output*
> 16-bit image processing and 14-bit RAW output help preserve maximum detail and produce images of the highest quality with rich tonal gradations. The 14-bit RAW (Sony ARW) format ensures optimal quality for later image adjustment (via Image Data Converter or other software).


----------



## jrista (Jun 1, 2014)

Orangutan said:


> My comments are based on my (imperfect) memory of science podcasts and other science journalism I've encountered in the last few years. If you have contradictory info I'd love to see a reference.
> 
> 
> 
> ...



I'm not saying everyone can be a savant. I'm saying everyone can learn how to WORK their memory to improve it. I did it...I used to have the same poor memory everyone has; I forgot stuff all the time and couldn't remember accurately. By thinking about, exercising, and processing sensory input more actively, I can intentionally bring up other memories that I want associated with the new ones I'm creating. Purposely recalling memories in certain ways, and reviewing them after creating them, has helped me strengthen those memories, improving my ability to accurately recall the original event, be it sight, sound, smell, touch, taste, or all of the above.

Whatever current research shows, memory is NOT simply some passive process we have absolutely no control over. It's also an active process that we CAN control, and we can improve our memory if we choose to...either only of specific events of importance, or we can train ourselves to process input in a certain way such that most input is more adequately remembered and strongly associated.



Orangutan said:


> > I believe the brain only fills in information if it isn't readily accessible. I do believe that for the most part, when we see something, the entirety of what we see is recorded.
> 
> 
> 
> Again, my understanding is that recent research shows that the adage "seeing is believing" has it backwards: it should be "believing is seeing." The brain does not record raw image info at all, but constructs a reality that incorporates visual data with existing beliefs and expectations. It's that highly-processed "reality" that's recorded. As an example, back in 2004 there was that video tape of a purported Ivory-Billed Woodpecker. Subsequent analysis showed that it was almost certainly the rather common pileated woodpecker. The "eyewitnesses," however, recall seeing detail that would clearly distinguish it as an IBW. Even if it was a pileated, those witness may truthfully and genuinely believe they saw those distinguishing characteristics.



I don't think any of that contradicts the notion that our brains store much or most of everything that goes into them. I don't deny that our beliefs and desires can color HOW we remember...just as they can control what we recall. Remember, memory is often about association. If the guy watching the woodpecker was vividly remembering an IBW at the time (it would have been an amazing find, for sure! I really hope they aren't extinct, but... :'(), that wouldn't necessarily change the new memories being created, but it could overpower the new memories with the associations to old memories of an IBW. Upon recall...you aren't just recalling the new memories, but things associated with them as well. What you finally "remember" could certainly be colored by your desires, causing you to misremember. Good memory is not necessarily good recall, and it certainly doesn't overpower an individual's desire for something to be true. All that gets into a level of complexity about how our brains work that goes well beyond any courses on the subject I've ever taken.

BTW, I am not talking about savants who have perfect memory. Eidetic memories, or whatever you want to call them, are a different thing than what I'm talking about. Eidetic memories are automatic; it's more how those individuals' brains work, maybe a higher and more cohesive level of processing than in normal individuals. That doesn't change the fact that you CAN actively work with your memory to improve it, considerably. I'm not as good at it these days as I used to be...severe chronic insomnia has stolen a lot of abilities like that from me, but when I was younger, I had an exceptional memory. I remembered small details about everything because I was always working and reviewing the information going in. Before I took that class, my memory was pretty average; after, and still largely since, it's been better than average to truly excellent.

That has to do with memory creation itself, though...it doesn't mean my memories can't be colored by prior experiences or desires. I think it lessens the chance of improper recall, but it's still possible to overpower a new memory with associations to old ones, and over time, what is recalled may not be 100% accurate (again, not talking about eidetic memories here, still just normal memory). There have been cases of obsessive-compulsive individuals having particularly exceptional memory, on the level of supposed eidetics and in some respects better. For the very, very rare individual, memories become their obsession, and because it's an obsession, every memory is fully explored, strengthened, and associated to a degree well beyond normal. Recall is very fast, and the details can be very vivid. It isn't just image-based either; all sensory input can be remembered this way (sounds, smells, etc.). With such strong associations and synaptic strengthening, such an individual's memories are effectively permanent as well. The difference would be that the obsessive-compulsive chooses which memories to obsess over...so their recall isn't necessarily as complete as an eidetic's (whose memory for imagery is more automatic).


----------



## GMCPhotographics (Jun 1, 2014)

Sporgon said:


> 100 said:
> 
> 
> > Sporgon said:
> ...


----------



## Sporgon (Jun 1, 2014)

GMCPhotographics said:


> Sporgon said:
> 
> 
> > 100 said:
> ...



I agree with you; I haven't used GND filters at all since digital came of age, i.e. for about the last ten years.

100 seemed to me to be suggesting that the continued production of GND filters is to support those poor souls who still use Canon for landscape photography.

Most of my panoramics are shot in the way you describe, but often I don't need to do this. There is more latitude in these modern Canon sensors than some people give them credit for. Here's a shot out of a window at Bolton Castle in the English Yorkshire Dales, where Mary Queen of Scots was imprisoned by Elizabeth I. It's actually taken from a 'garderobe' (toilet) L-shaped passage, and the only light coming in is from this window. It was taken at midday with the sun out.

I've shown the original JPEG from RAW with the standard picture style applied. Then I've shown the finished picture, and finally, for those that like absurdity, I've lightened the shadows and brought the sky down to silly levels. Anything blown? No. FPN? Only in the few areas where the sensor recorded zero light, and even then it's not bad.

And this was taken on the 'old' 5DII, which is nothing like as good as the 6D in its latitude and data manipulation.


----------



## K-amps (Jun 2, 2014)

dilbert said:


> K-amps said:
> 
> 
> > Being in my late teens in the 80s and witnessing how the DACs in CD players went from 16 to 18 to 20 to 24 bits every 2-3 years, with the sampling frequency climbing even faster than that....
> ...



Thanks. Again, I am no expert, but perhaps they would use 16 or 20 bits for similar reasons to the ones CD player manufacturers had, i.e. larger bit-depth converters are more linear over the part of the range actually used...i.e. offset the data stream from the sensor by 2 bits and then encode/decode. If for nothing else, it might be less noisy, maybe. I don't know...just thinking out loud. I would think visual acuity is more sensitive than auditory...so why doesn't this have the (marketing) traction that audio component manufacturers had? In the end it was a Sony idea, using a direct stream as opposed to a ladder process, that satisfied the most die-hard audiophiles...

How do 59k electrons convert to 14 bits, and what influences the number of electrons? Do Exmors do more than 59k electrons?
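For what it's worth, the usual engineering definition ties those numbers together directly: dynamic range in stops is log2(full-well capacity / read noise), and the ADC bit depth caps what can actually be recorded. A rough sketch, where the 59k full well is the figure quoted above and the ~3 e- read noise is an illustrative assumption:

```python
import math

def sensor_dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

def adc_dr_ceiling(bits):
    """A b-bit ADC distinguishes at most 2**b levels, i.e. at most b stops."""
    return bits

# Illustrative numbers: ~59k e- full well with an assumed ~3 e- read noise
sensor = sensor_dr_stops(59000, 3)          # roughly 14.3 stops at the sensor
recorded = min(sensor, adc_dr_ceiling(14))  # capped at 14 by a 14-bit ADC
```

So with those assumed numbers the sensor itself would slightly exceed what a 14-bit ADC can carry, which is one reason the recorded DR of 14-bit cameras clusters just under 14 stops.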


----------



## K-amps (Jun 2, 2014)

Has any manufacturer implemented a dual scan, or a dual-pixel single scan, whereby one scan reads the scene (as an example) 6 stops over and the second scan reads 6 stops under, and these scans are then merged to output a file with 12 stops more DR than either file alone?

If the dual scan would cause blur for fast-moving objects... perhaps have a dual-pixel readout where each partner pixel offsets the recording of the image by +/- 6 stops, and either yield 2 RAW files for manual processing, or do in-camera processing to compress the file into visible DR for output purposes...


----------



## wickidwombat (Jun 2, 2014)

i think a more pertinent question is why isn't everyone bored of this discussion, since there have been near on a billion threads about it all running on endlessly.... yes i know i exaggerated the billion threads part.... but since everything else in these threads is exaggerated to the point of absurdity, why not?


----------



## neuroanatomist (Jun 2, 2014)

dilbert said:


> neuroanatomist said:
> 
> 
> > dilbert said:
> ...



So...Sony's RAW files, much like your arguments, are half-baked. No surprise, really.


----------



## Badger (Jun 2, 2014)

Holy crap!
You guys and gals are way too smart for me!
Canon, I love you, can I have just a little more DR?


----------



## jrista (Jun 2, 2014)

dilbert said:


> > Sony's OWN SITE says the Sensor output is 14-bit. The sensor is an Exmor. Exmor uses CP-ADC ON-DIE. The last output of the sensor is FROM the ADC.
> >
> > Therefor...the A7s IS 14-BIT!
> > ...
> ...



Well, it's no surprise that you buy into DXO's bull. There are two values on DXO's site for DR. One is a measure, as in something actually MEASURED from a REAL RAW file. The other is an EXTRAPOLATION. It isn't even a real extrapolation, it is just a number spit out by a simple mathematical formula...they don't actually even do what they say they are doing.

The first of these is Screen DR. Screen DR is the ONLY actual "measure" of dynamic range that DXO does. It is the SINGLE and SOLE value for DR that is actually based on the actual RAW data. In the case of the D800....do you know what Screen DR is? (My guess is not.) 

The other of these is Print DR. Print DR is supposedly the dynamic range "taken" from a downsampled image. The image size is an 8x12" "print", or so DXO's charts say. As it actually happens, and this is even according to DXO themselves...Print DR is not a measure at all. It isn't a measurement taken from an actually downsampled image. You know what it is? It is an extremely simple MATHEMATICAL EXTRAPOLATION based on...what? Oh, yup...the only actual TRUE MEASURE of dynamic range that DXO has: Screen DR. Print DR is simply the formula *DR + log2(sqrt(N/N0))*. DR is Screen DR, N is the actual image size, and N0 is the supposed downsampled size. The formula is rigged to guarantee that "Print DR" is higher than Screen DR...not even equal to, always higher. And, as it so happens, potentially 100% unrelated to reality, since it is not actually measured.
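The normalization formula is simple enough to reproduce; here's a sketch using the D800 Screen DR figure quoted in this thread and DxO's published 8 MP reference size:

```python
import math

def print_dr(screen_dr, n_pixels, n0_pixels=8_000_000):
    """DxO's 'Print DR' extrapolation: Screen DR + log2(sqrt(N / N0))."""
    return screen_dr + math.log2(math.sqrt(n_pixels / n0_pixels))

# D800: ~13.2 stops Screen DR measured on a 36.3 MP raw file
d800_print = print_dr(13.2, 36_300_000)   # roughly 14.3 stops
```

Note that whenever N > N0 the sqrt term is greater than 1, so Print DR always comes out above Screen DR, exactly as described above.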

DXO doesn't even have the GUTS to ACTUALLY downsample real images and actually measure the dynamic range from those downsampled images. They just run a mathematical formula against Screen DR and ASSUME that the dynamic range of an image, IF they had downsampled it, would be the same as what that mathematical value says it should be.

Print DR is about as bogus as "camera measurement 'science'" can possibly get. It's a joke. It's a lie. It's bullshit. The D800 does not have 14.4 stops of DR, as DXO's Print DR would indicate. The Screen DR measure of the D800? Oh, yeah...it's LESS than 14 stops, as one would expect with a 14-bit output. It's 13.2 stops, over ONE FULL STOP less than Print DR. The D600? Says Print DR 14.2, but Screen DR is 13.4. D610? Print DR 14.36, but Screen DR 13.55. D5300? Print DR 13.8, but Screen DR 13. A7? Print DR 14, but Screen DR 13.2. A7s? Print DR 14 but Screen DR 13. NOT ONE SINGLE SENSOR with 14-bit ADC output has EVER actually MEASURED more than 14 stops of dynamic range. That's because it's impossible for a 14-bit ADC to output enough information to allow for more than 14 stops of dynamic range. There simply isn't enough room in the bit space to contain enough information to allow for more than 14 stops...not even 0.1 more stops. Every stop is a doubling, just as every bit is a doubling. Bits and stops, in this context, are interchangeable terms. In the first bit you have two values. With the second bit, your "dynamic range" of number space doubles...you now have FOUR values. Third bit, eight values. Fourth bit, sixteen values. Fifth bit, thirty-two values. To begin using numeric space beyond what the 14th bit allows, which would be necessary to start using up some of the 15th stop of dynamic range, you need at least 15 bits of information. It's theoretically, technologically, and logically impossible for any camera that uses a 14-bit ADC to have more than 14 stops of dynamic range.

Here is another fact about dynamic range. Dynamic range, as most photographers think about it these days, is the number of stops of editing latitude you have. While it also has connotations to the amount of noise in an image, the biggest thing that photographers think about when it comes to dynamic range is: How many stops can I lift this image? We get editing latitude by editing RAW images. RAW. Not downsampled TIFFs or JPEGs or any other format. RAW images. How do we edit RAW images? Well...as _*RAW*_ images. There IS NO DOWNSAMPLING when we edit a RAW image. Even if there was...who says that we are all going to downsample our images to an 8x12" print size (3600x2400 pixels, or 8.6mp)? We edit RAW images at full size. It's the only possible way to edit a RAW image...otherwise, it simply wouldn't be RAW, it would be the output of downsampling a RAW to a smaller file size...which probably means TIFF. Have you ever tried to push the exposure of a TIFF image around the same way you push a RAW file around? You don't even get remotely close to the kind of shadow lifting or highlight recovery capabilities editing a TIFF as you do a RAW. Not even remotely close. And the editing latitude of JPEG? HAH! Don't even make me say it.
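The editing-latitude gap between a linear RAW and an 8-bit file described above comes down to quantization: the deep shadows simply contain far fewer distinct levels in an 8-bit file, so a big push reveals banding. A toy sketch (the gradient range and lift amount are illustrative):

```python
import numpy as np

# A linear shadow gradient occupying only the bottom 1% of the tonal range
signal = np.linspace(0, 0.01, 1000)

raw14 = np.round(signal * (2**14 - 1))  # 14-bit raw: well over 100 distinct levels
tiff8 = np.round(signal * (2**8 - 1))   # 8-bit file: only a handful of levels

# A +6 stop "lift" multiplies linear values by 2**6; it cannot create new
# levels, so the 8-bit version bands badly where the raw stays smooth.
lifted_raw = np.unique(raw14 * 64)
lifted_tiff = np.unique(tiff8 * 64)
```

The point isn't the exact numbers, just that the level count is fixed at capture/quantization time; lifting exposure only spreads the existing levels apart.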

Therefore, the ONLY valid measure of dynamic range is the DIRECT measure, the measure from a RAW file itself, at original size, in the exact same form that photographers are going to be editing themselves. Screen DR is the sole valid measure of dynamic range from DXO. Print DR is 100% bogus, misleading, fake.

It doesn't matter what Sony does in their BIONZ X chip. The sensor output is 14-bit RAW. The only thing their BIONZ chip can do is...the same thing YOU do. They can lift shadows, and compress highlights. They can shift exposure around and reduce noise by applying detail-softening noise reduction algorithms. But then, well, then you don't actually have a RAW file anymore. You have a camera-modified file. With Sony's propensity for using a lossy compression algorithm in their RAWs, you don't even get full 14-bit precision data per pixel, and that fact has shown in many cases when people go to edit their Sony RAWs in post. The compression artifacts can be extreme. I find it simply pathetic that Sony, with all this horsepower under their thumb, would completely undermine it all by storing their RAW images in a lossy compressed format. It completely invalidates the power of their sensors, and speaks to the fact that Sony is probably just as schizophrenic internally as Nikon is. That will lead to inconsistent products and product lines, poor product cohesion, lackluster design for OTHER aspects of their cameras beyond the sensor, etc. We're already seeing many of these problems with Sony cameras. Their sensors may be good, but how Sony themselves are using their sensors is crap.


----------



## jrista (Jun 2, 2014)

dilbert said:


> jrista said:
> 
> 
> > ...
> ...



It doesn't matter what they do in the middle. The FILE you WORK WITH is 14-bit. All that BIONZ does is do some post processing. That doesn't change the dynamic range, it only changes the contrast within the original dynamic range. That's it. There is no magic here, nothing special. A 14-bit file has enough numeric space for 14 stops of dynamic range. Anything you do to the data, such as apply a non-linear curve, simply COMPRESSES the information contained within those original 14 stops to SOMETHING LESS!! There is no ex nihilo here.

It's no different than say upconverting from sRGB to AdobeRGB. There is no value to doing that, since you already lost the original color fidelity when you converted to sRGB in the first place. You cannot restore colors in the AdobeRGB space once they are lost...they are gone forever. It's the same thing with bit depth and stops of dynamic range.

The only way Sony can literally achieve 15.3 stops of dynamic range is if their sensor ADC units are 16 bit, the processing pipeline is 16-bit, AND the output RAW file is 16-bit. And I mean a FULL 16-bit, not some half-witted 13+3-bit lossy compressed RAW file, as that would just decimate the full tonal range. I mean a full, uncompressed (or at least lossLESSly compressed) 16-bit RAW file. There is no other way to achieve more than 15 stops of dynamic range...you have to have the numeric space to represent those stops.


----------



## awinphoto (Jun 2, 2014)

And this is why I don't spend much time anymore on this forum... but it is fun to watch the banter back and forth... let me get the popcorn warmed up... extra butter this time!


----------



## mackguyver (Jun 2, 2014)

awinphoto said:


> And this is why I dont spend much time anymore on this forum... but is fun to watch the banter back and forth... let me get the popcorn warmed up... extra butter this time!


LOL, I spend too much time on the forum, but I'm usually a spectator anytime I see posts about DR or Megapixels. It's fun to watch 8).


----------



## Valvebounce (Jun 2, 2014)

Hi Folks. 
I'm looking for an app to automatically mark any post with DR in the title as read! ;D

Cheers Graham.



mackguyver said:


> awinphoto said:
> 
> 
> > And this is why I dont spend much time anymore on this forum... but is fun to watch the banter back and forth... let me get the popcorn warmed up... extra butter this time!
> ...


----------



## unfocused (Jun 2, 2014)

mackguyver said:


> awinphoto said:
> 
> 
> > And this is why I dont spend much time anymore on this forum... but is fun to watch the banter back and forth... let me get the popcorn warmed up... extra butter this time!
> ...



I wish I had your willpower. Sometimes I just can't help myself and respond to these stupid threads.


----------



## jrista (Jun 2, 2014)

Ah, you guys just don't like a good debate!  Sometimes debate is healthy...

Although I'll grant it's less debate and more "Let's beat that last little bit of brain matter and bone of the dead horse over there...AGAIN" on these forums...but hey, that isn't my fault.  Some people are just too thick to let the facts soak into their skulls. I'm just responding to the call:







;D


----------



## unfocused (Jun 3, 2014)

jrista said:


> Ah, you guys just don't like a good debate!  Sometimes debate is healthy...
> 
> Although I'll grant it's less debate and more "Let's beat that last little bit of brain matter and bone of the dead horse over there...AGAIN" on these forums...but hey, that isn't my fault.  Some people are just too thick to let the facts soak into their skulls. I'm just responding to the call:
> 
> ;D



I, for one, am glad you are afflicted with this obsession. I have learned a lot from your posts. And, I mean that in all sincerity.


----------



## mackguyver (Jun 3, 2014)

jrista said:


> Ah, you guys just don't like a good debate!  Sometimes debate is healthy...
> 
> Although I'll grant it's less debate and more "Let's beat that last little bit of brain matter and bone of the dead horse over there...AGAIN" on these forums...but hey, that isn't my fault.  Some people are just too thick to let the facts soak into their skulls. I'm just responding to the call:
> 
> ...


That cartoon is awesome and I'm all for debates, and even (gasp) recently replied to a megapixels post, but that and DR are two topics that generally seem to get out of control! Sometimes it's more fun to watch, but as unfocused says, you certainly know a lot and we always learn from you.


----------



## LetTheRightLensIn (Jun 3, 2014)

Orangutan said:


> As an example, back in 2004 there was that video tape of a purported Ivory-Billed Woodpecker. Subsequent analysis showed that it was almost certainly the rather common pileated woodpecker. The "eyewitnesses," however, recall seeing detail that would clearly distinguish it as an IBW. Even if it was a pileated, those witness may truthfully and genuinely believe they saw those distinguishing characteristics.



Actually, a great many still hold that it is an IBW on the tape. And multiple people had multiple sightings, many better than the one on the tape. This reminds me of the time my 100% clear observation of something else was laughed off by the snobby, 'if we didn't see it first, forget it' counting birders (just a year later dozens were replicating my 'absurd, ridiculous, impossible' sighting).


----------



## LetTheRightLensIn (Jun 3, 2014)

jrista said:


> dilbert said:
> 
> 
> > neuroanatomist said:
> ...



I'm guessing that by 15.3 stops they just mean in 8MP normalized terms as DxO does (however, that is kinda misleading, since it really leaves you guessing, I mean what if it is 15.3 normalized to 1MP? Then it's even worse than the current Sonys.)


----------



## LetTheRightLensIn (Jun 3, 2014)

K-amps said:


> Has any manufacturer implemented a dual scan or dual pixel single scan of a read whereby one scan reads the scene (as an example) +6 stops over and the second scan reads 6 stops under, then these scans are merged to output a file that is +12 stops more in DR than the one file.
> 
> If the dual scan will cause a blur for fast objects... perhaps have a dual pixel read out where each partner pixel offsets the recording of the image by +/- 6 stops, and either yield 2 raw files for manual processing, or do an in camera processing to compress the file into visible DR for output purposes...



Canon has a patent to not quite do that but to read it at two ISOs at once (so you get the shadow detail from an ISO3200 read and the brights and midtones and upper darks from an ISO100 read), similar to something E.M. proposed on DPR a few years ago.
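Neither the patent nor E.M.'s proposal is reproduced here, but the general idea of merging a low-gain and a high-gain read of the same exposure can be sketched with toy numbers (the 32x gain, the clip threshold, and the sample values below are illustrative assumptions, not Canon's actual implementation):

```python
import numpy as np

def merge_dual_iso(low_iso, high_iso, gain=32.0, clip=1.0):
    """Toy merge of two reads of the same exposure.

    low_iso:  preserves highlights (e.g. an ISO 100 read), values in [0, 1]
    high_iso: same signal amplified by `gain` (e.g. an ISO 3200 read), so its
              shadows sit well above the read-noise floor but it clips early.
    Use the high-ISO read (scaled back down) wherever it hasn't clipped, and
    fall back to the low-ISO read for everything it clipped.
    """
    high_unclipped = high_iso < clip
    return np.where(high_unclipped, high_iso / gain, low_iso)

# Hypothetical 1-D "scene" spanning deep shadow to bright highlight
scene = np.array([0.001, 0.01, 0.1, 0.9])
low = scene                           # ISO 100: noisy shadows, clean highlights
high = np.clip(scene * 32, 0, 1.0)    # ISO 3200: clean shadows, clipped brights
merged = merge_dual_iso(low, high)    # recovers the full scene
```

A real implementation would also have to handle noise weighting and the fact that the two reads aren't perfectly registered in time or space, which is where the patents get interesting.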


----------



## LetTheRightLensIn (Jun 3, 2014)

Oh, no. Jrista, what happened?? About a year ago, you had finally gotten down with the concept of normalization, but now you are back to your old game of "normalization doesn't make sense" again.



jrista said:


> Well, it's no surprise that you buy into DXO's bull. There are two values on DXO's site for DR. One is a measure, as in something actually MEASURED from a REAL RAW file. The other is an EXTRAPOLATION. It isn't even a real extrapolation, it is just a number spit out by a simple mathematical formula...they don't actually even do what they say they are doing.



And since the formula is so simple, it actually gives worse results, if anything, compared to fancier techniques, not better.



> The first of these is Screen DR. Screen DR is the ONLY actual "measure" of dynamic range that DXO does. It is the SINGLE and SOLE value for DR that is actually based on the actual RAW data. In the case of the D800....do you know what Screen DR is? (My guess is not.)
> 
> The other of these is Print DR. Print DR is supposedly the dynamic range "taken" from a downsampled image. The image size is an 8x12" "print", or so DXO's charts say. As it actually happens, and this is even according to DXO themselves...Print DR is not a measure at all. It isn't a measurement taken from an actually downsampled image. You know what it is? It is an extremely simple MATHEMATICAL EXTRAPOLATION based on...what? Oh, yup...the only actual TRUE MEASURE of dynamic range that DXO has: Screen DR. Print DR is simply the formula *DR + log2(sqrt(N/N0))*. DR is Screen DR, N is the actual image size, and N0 is the supposed downsampled size. The formula is rigged to guarantee that "Print DR" is higher than Screen DR...not even equal to, always higher. And, as it so happens, potentially 100% unrelated to reality, since it is not actually measured.



Oh brother. It is not rigged! Why are you back to calling normalization rigged again???? Do you realize that 90% of modern tech and science wouldn't work out if what you say were true?



> DXO doesn't even have the GUTS to ACTUALLY downsample real images and actually measure the dynamic range from those downsampled images. They just run a mathematical formula against Screen DR and ASSUME that the dynamic range of an image, IF they had downsampled it, would be the same as what that mathematical value says it should be.
> 
> Print DR is about as bogus as "camera measurement 'science'" can possibly get. It's a joke. It's a lie. It's bullshit. The D800 does not have 14.4 stops of DR, as DXO's Print DR would indicate. The Screen DR measure of the D800? Oh, yeah...it's LESS than 14 stops, as one would expect with a 14-bit output. It's 13.2 stops, over ONE FULL STOP less than Print DR. The D600? Says Print DR 14.2, but Screen DR is 13.4. D610? Print DR 14.36, but Screen DR 13.55. D5300? Print DR 13.8, but Screen DR 13. A7? Print DR 14, but Screen DR 13.2. A7s? Print DR 14 but Screen DR 13. NOT ONE SINGLE SENSOR with 14-bit ADC output has EVER actually MEASURED more than 14 stops of dynamic range. That's because it's impossible for a 14-bit ADC to output enough information to allow for more than 14 stops of dynamic range. There simply isn't enough room in the bit space to contain enough information to allow for more than 14 stops...not even 0.1 more stops. Every stop is a doubling, just as every bit is a doubling. Bits and stops, in this context, are interchangeable terms. In the first bit you have two values. With the second bit, your "dynamic range" of number space doubles...you now have FOUR values. Third bit, eight values. Fourth bit, sixteen values. Fifth bit, thirty-two values. To begin using numeric space beyond what the 14th bit allows, which would be necessary to start using up some of the 15th stop of dynamic range, you need at least 15 bits of information. It's theoretically, technologically, and logically impossible for any camera that uses a 14-bit ADC to have more than 14 stops of dynamic range.



no, no, no and no

comparing noise at different energy scales as if the scales were the same is what would be totally bogus!
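The term DxO adds when normalizing is just the standard sqrt(N) noise-averaging argument: averaging n pixels together reduces random noise by sqrt(n), which is one extra stop per 4:1 downsample. A quick numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 million "pixels" of pure read noise with standard deviation 1.0
noise = rng.normal(0.0, 1.0, size=8_000_000)

# Downsample 4:1 by averaging blocks of 4 pixels
binned = noise.reshape(-1, 4).mean(axis=1)

# The noise std drops by sqrt(4) = 2, i.e. one extra stop of headroom:
# log2(sqrt(4)) = 1.0, exactly the normalization term for N/N0 = 4.
print(noise.std(), binned.std())   # ~1.0 vs ~0.5
```

This is why a higher-megapixel sensor can legitimately compare better after normalization even though no single pixel gained anything.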



> Here is another fact about dynamic range. Dynamic range, as most photographers think about it these days, is the number of stops of editing latitude you have. While it also has connotations to the amount of noise in an image, the biggest thing that photographers think about when it comes to dynamic range is: How many stops can I lift this image? We get editing latitude by editing RAW images. RAW. Not downsampled TIFFs or JPEGs or any other format. RAW images. How do we edit RAW images? Well...as _*RAW*_ images. There IS NO DOWNSAMPLING when we edit a RAW image. Even if there was...who says that we are all going to downsample our images to an 8x12" print size (3600x2400 pixels, or 8.6mp)? We edit RAW images at full size. It's the only possible way to edit a RAW image...otherwise, it simply wouldn't be RAW, it would be the output of downsampling a RAW to a smaller file size...which probably means TIFF. Have you ever tried to push the exposure of a TIFF image around the same way you push a RAW file around? You don't even get remotely close to the kind of shadow lifting or highlight recovery capabilities editing a TIFF as you do a RAW. Not even remotely close. And the editing latitude of JPEG? HAH! Don't even make me say it.
> 
> Therefore, the ONLY valid measure of dynamic range is the DIRECT measure, the measure from a RAW file itself, at original size, in the exact same form that photographers are going to be editing themselves. Screen DR is the sole valid measure of dynamic range from DXO. Print DR is 100% bogus, misleading, fake.



Please go to the library and check out a book on normalization and mathematics.





> Their sensors may be good, but how Sony themselves are using their sensors is crap.



sometimes they do do some annoying things, that is true at least


----------



## jrista (Jun 3, 2014)

LetTheRightLensIn said:


> Oh, no. Jrista what happened?? About a year ago, you had finally gotten down with the concept of normalization, but now you are back to your old game of normalization doesn't make sense again.
> 
> 
> 
> ...



I don't disagree with you. But you're missing my point. I tried to be clear about what I'm referring to. Noise is one thing. And noise in an image doesn't just come from read noise; it's the photon shot noise in the signal as well. And YES, downsampling normalizes results. I'm not debating that.

I'm specifically debating the notion that you actually have 14.4 stops worth of EDITING LATITUDE with a D800, or 14.2 with a D600, etc. Because that's what everyone thinks about. That's what everyone is referring to when they bring up the DR difference. It's not a bad thing, and there is no question that Sony Exmor has more DR than a Canon sensor. The problem I have is the misleading notion that DXO's Print DR "results" have created in the community.

We don't push the exposure of downsampled TIFF files around, so it makes no sense to refer to 14.4 stops of DR in the context of, say, discussing the benefit of DR when working with landscapes. It really doesn't make any sense to refer to 14.4 stops of 8mp image DR when discussing actual photographic editing in ANY context EXCEPT when directly comparing cameras, and then, only in a very neat and tidy context...such as when you're actually on the DXO web site. In all other contexts, the only legitimate measure is that taken directly from the RAW...from the actual image we actually work with out in the actual world. In that context...the D800 has 13.2 stops of DR.

Does that make sense? As far as I'm concerned: Comparison Shmarison!  I care about actual real-world editing latitude. Mathematically extrapolated imaginary downsampled fake "measurements" don't tell me jack about what I am ACTUALLY going to be able to do FOR REAL. Screen DR? It tells me exactly what I want to know. It tells every photographer what they want to know: How much can I lift my landscape photos? Print DR is lying...it tells you you could lift more than you actually can, because you don't edit RAW images downsampled, JPEGs don't even remotely cut it, and with TIFF images, because they are RGB triples rather than RAW independent digital signal values, you can't lift the shadows or compress the highlights the same way...not without significant artifacts after a push or pull of a couple stops. (i.e. you may be able to lift shadows by two, maybe three stops without artifacts with a TIFF at "14.4" stops DR, but you could easily lift a Nikon D800 RAW by six stops at 13.2 stops DR.)

So I don't disagree. I agree. I am just working within a different context, the context in which I believe most photographers approach the subject of DR (based on the things they reference when they approach it). To you and me, dynamic range means signal cleanliness across the entire band. To most everyone else, it means: How much can I lift without banding in the shadows?


----------



## Deleted member 91053 (Jun 3, 2014)

OK I give up! - "Why isn't Canon working on DSLRs with higher dynamic range"??????
Am I missing something? I have yet to have problems with the Dynamic range capabilities of any Canon DSLR that I have owned.
I have used several Nikons (D800/800E + others) that are alleged to have increased dynamic range but, frankly, I was not too impressed with the results; lenses perhaps? I read that they have higher DR at low ISO - perhaps they do - but I was not impressed by the overall IQ.
I am not saying that my cameras are perfect, but what I am saying is that they have yet to let me down in the DR department. 
Am I just exposing properly?


----------



## LetTheRightLensIn (Jun 4, 2014)

jrista said:


> I'm specifically debating the notion that you actually have 14.4 stops worth of EDITING LATITUDE with a D800, or 14.2 with a D600, etc. Because that's what everyone thinks about. That's what everyone is referring to when they bring up the DR difference. It's not a bad thing, and there is no question that Sony Exmor has more DR than a Canon sensor. The problem I have is the misleading notion that DXO's Print DR "results" have created in the community.



Mostly it gets brought up when talking camera vs. camera though, so it's fine.



> We don't push the exposure of downsampled TIFF files around, so it makes no sense to refer to 14.4 stops of DR in the context of, say, discussing the benefit of DR when working with landscapes. It really doesn't make any sense to refer to 14.4 stops of 8mp image DR when discussing actual photographic editing in ANY context EXCEPT when directly comparing cameras, and then, only in a very neat and tidy context...such as when you're actually on the DXO web site.



Not a tiny context. People compare bodies all the time, that's a pretty large context. And the Print DR is a much better, fairer comparison than the Screen DR charts, when comparing body to body.




> Does that make sense? As far as I'm concerned: Comparison Shmarison!  I care about actual real-world editing latitude.



Most people seem to care how one does relative to another and whether they will have more or less latitude than with what they currently own, and the Print DR chart is what tells you that. The Screen DR chart is generally more misleading. If you want to know what you'll get using every MP and ignoring other cameras, the Screen DR chart tells you what to expect, though.



> Mathematically extrapolated imaginary downsampled fake "measurements" don't tell me jack about what I am ACTUALLY going to be able to do FOR REAL. Screen DR? It tells me exactly what I want to know. It tells every photographer what they want to know: How much can I lift my landscape photos?



Not so simple, though, as looking at the Screen DR chart alone could still trick you. You just look at it and see that you can do, say, 10 stops, but you have no way to relate that to what you are used to dealing with on your other cameras. Maybe they say 14 stops, but maybe the real-world performance is the same between the two cameras. Yeah, if you use all the MP and want 14 stops at a higher frequency of detail you won't get it and will only get 10, but you might actually do just the same as with your old camera if you compare them at the same detail scale.



> Print DR is lying...it tells you you could lift more than you actually can, because you don't edit RAW images downsampled, JPEGs don't even remotely cut it, and with TIFF images, because they are RGB triples rather than RAW independent digital signal values, you can't lift the shadows or compress the highlights the same way...not without significant artifacts after a push or pull of a couple stops. (i.e. you may be able to lift shadows by two, maybe three stops without artifacts with a TIFF at "14.4" stops DR, but you could easily lift a Nikon D800 RAW by six stops at 13.2 stops DR.)



It's not lying; it's just letting you know how you do between various cameras.



> So I don't disagree. I agree. I am just working within a different context. The context I believe most photographers approach the subject of DR (based on the things they reference when they approach it.) To you and me, dynamic range means signal cleanliness across the entire band. To most everyone else, it means: How much can I lift without banding in the shadows?



yeah but that can easily lead one to trick oneself; you have to be very careful to realize that you can't use your past references as a basis (unless the cameras happen to have the same MP count)


----------



## LetTheRightLensIn (Jun 4, 2014)

johnf3f said:


> OK I give up! - "Why isn't Canon working on DSLRs with higher dynamic range"??????
> Am I missing something? I have yet to have problems with the Dynamic range capabilities of any Canon DSLR that I have owned.
> Am I just exposing properly?



You are simply just shooting scenes that don't have a lot of DR and avoiding all the ones that do.


----------



## neuroanatomist (Jun 4, 2014)

LetTheRightLensIn said:


> johnf3f said:
> 
> 
> > OK I give up! - "Why isn't Canon working on DSLRs with higher dynamic range"??????
> ...



I've shot many scenes where the 11-12 stops of DR my Canon sensor could capture was insufficient. However, for the vast majority of those scenes, the 13-14 stops of DR from a SoNikon sensor would also have been insufficient.


----------



## mackguyver (Jun 4, 2014)

neuroanatomist said:


> LetTheRightLensIn said:
> 
> 
> > johnf3f said:
> ...


Same here. If two extra stops were enough, I could bracket at +/-1 EV, but I typically bracket at 2 stops, which would theoretically give me 16 stops with the 5DIII / 1D X.


----------



## sanj (Jun 4, 2014)

neuroanatomist said:


> LetTheRightLensIn said:
> 
> 
> > johnf3f said:
> ...



Yes. In extreme situations no camera can provide details both in shadows and highlights. And I like this fact: It helps me create contrast vs muddy photos.


----------



## Aglet (Jun 4, 2014)

johnf3f said:


> OK I give up! - "Why isn't Canon working on DSLRs with higher dynamic range"??????
> Am I missing something? I have yet to have problems with the Dynamic range capabilities of any Canon DSLR that I have owned.
> I have used several Nikons (D800/800E + others) that are alleged to have increased Dynamic Range but, frankly, I was not too impressed wit the results, lenses perhaps? I read that they have higher DR at low ISO - perhaps they do - but I was not impressed by the overall IQ.
> I am not saying that my cameras are perfect, but what I am saying is that they have yet to let me down in the DR department.
> Am I just exposing properly?



Could be a different tone/gamma curve that makes things pop or look more appealing from your Canon compared to other bodies.

Here's an example from my D800E.

I was looking at backlit granaries in a grassy field last weekend, with a partly cloudy sky.
I was using my D800E with the 70-200. As I framed the shot I was looking around at the 3 main elements in the scene and noting their relative brightness to each other within the VF. Within the constraint of the optical VF, it's easy to do that.

I could clearly see all the cloud detail in the viewfinder
I could clearly see all the detail of the granaries' shadow sides simultaneously
I could clearly see the grass detail as well.
This scene did not, visually, appear to have a lot of DR. But it does have enough to make the camera's standard tone-curve/gamma interpretation for jpg appear flawed when shooting it.

If you expose to retain the cloud detail without clipping, the shadowed structures look too dark compared to how they look by eye.
If you expose a little more to bring up the structures' shadow area to look like it appeared in the viewfinder, the highlights get clipped.
After last week's discussion of how things look to the eye, I was surprised to see just how much my organic visual system was compressing the DR of this scene compared to the camera's (jpg) response.

There was no "correct exposure" for nailing this scene in one shot using the standard curves that produced the OOC jpg. It had to be exposed to retain highlight detail, and the darker areas will have to be brought up in post so that it looks as it appeared to my eyeball looking at the real scene at the time of the shot.
Just using the exif data from the jpgs I use to catalog a shoot, the granaries and grass were about 1.5 stops too dark compared to the sky. I manually bracketed 2 stops with 3 shots. I'll use the one without highlight clipping and tweak the shadows and midtones in post so it looks closer to how it did in reality

to my eye:
- 1154 was very close to how the sky looked in the VF, reality was a tad brighter, maybe 1/3 stop
- 1153 is still a little too dark for how the grass and granaries looked
- 1152 is slightly brighter than how the granaries looked but very close for the grass

All are ISO 100, f/4
shutter: 1154 = 1/1250, 1153 = 1/640, 1152 = 1/320

So, even with a scene that isn't especially challenging to shoot, some manipulation in post is required to make the image look close to reality, by lifting shadows to the point of low midtones. Most cameras can cope with this small a shadow push without any FPN issues.
If I wanted to push hard enough to see what's inside the open door of the round granary, then an Exmor sensor would give a better chance, but that would be merely experimental, as I could see no detail with my eye beyond that doorframe.
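For anyone checking the arithmetic, the bracket above can be verified from the EXIF values: at a fixed aperture and ISO, EV = log2(N^2 / t). A small Python sketch (frame numbers and settings taken from the post; the helper name is mine):

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    """EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100.0)

# The three bracketed frames: all ISO 100, f/4
shutters = {"1154": 1 / 1250, "1153": 1 / 640, "1152": 1 / 320}
evs = {frame: exposure_value(4.0, t) for frame, t in shutters.items()}

# Stops of extra exposure each frame received relative to the darkest (1154)
for frame, ev in evs.items():
    print(frame, round(evs["1154"] - ev, 2))
```

The darkest frame (1154) sits about 2 stops above the brightest (1152), matching the described bracket; standard shutter-speed steps make each gap ~0.97 stops rather than exactly 1.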


----------



## Aglet (Jun 4, 2014)

From a different perspective, a change of less than 90 degrees shows how relatively flat the lighting was otherwise... Although this was shot with a Fuji. EDIT - taken within about 1 minute of the last shot with the D800.


----------



## LetTheRightLensIn (Jun 6, 2014)

neuroanatomist said:


> LetTheRightLensIn said:
> 
> 
> > johnf3f said:
> ...



Well I've encountered numerous where the extra 2-3 stops would help a lot.


----------



## jrista (Jun 6, 2014)

Ah, you guys and your 12-14 stops.  Little ppls with their little bits of DR. 

Here's a glimpse at the big leagues. Try this:

Original 50-frame integration of 270-second exposures, calibrated with a master bias (180 frames), master dark (50 frames), and master flat (30 frames); grand total exposure time across all frames ~8 hrs:





After stretching that totally BLACK image by some 20-25 stops, and two days' worth of post-processing with the most advanced noise reduction and data extraction tools on the planet:






Fourteen stops. HAH! I fart in the general direction of your fourteen stops! And Laugh. MUHAHAHAHAHAAAA!
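The calibration described above follows the standard CCD reduction recipe: subtract the master dark, divide by the normalized flat, then average the stack so noise falls roughly as 1/sqrt(frames). A minimal NumPy sketch with synthetic stand-in data (shapes, names, and noise levels are invented for illustration; this is not the actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real calibration data: master frames are the per-pixel
# mean of many bias, dark, and evenly lit (flat) exposures.
master_bias = rng.normal(100, 2, (64, 64))               # mean of 180 bias frames
master_dark = master_bias + rng.normal(5, 1, (64, 64))   # mean of 50 darks
master_flat = rng.normal(30000, 300, (64, 64))           # mean of 30 flats

# Normalize the flat so it only encodes relative pixel sensitivity.
flat_norm = master_flat - master_bias
flat_norm /= flat_norm.mean()

def calibrate(light: np.ndarray) -> np.ndarray:
    """Classic CCD reduction: subtract dark signal, divide out vignetting."""
    return (light - master_dark) / flat_norm

# Integrate 50 'light' frames by averaging: per-frame noise drops by
# ~1/sqrt(50), which is where the extra usable stops in the stretched
# image come from.
lights = [rng.normal(500, 20, (64, 64)) for _ in range(50)]
stack = np.mean([calibrate(f) for f in lights], axis=0)
```

The averaged stack is visibly cleaner than any single calibrated frame, which is what makes an aggressive 20+ stop stretch survivable.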


----------



## sdsr (Jun 6, 2014)

johnf3f said:


> OK I give up! - "Why isn't Canon working on DSLRs with higher dynamic range"??????
> Am I missing something? I have yet to have problems with the Dynamic range capabilities of any Canon DSLR that I have owned.
> I have used several Nikons (D800/800E + others) that are alleged to have increased dynamic range but, frankly, I was not too impressed with the results; lenses, perhaps? I read that they have higher DR at low ISO - perhaps they do - but I was not impressed by the overall IQ.
> I am not saying that my cameras are perfect, but what I am saying is that they have yet to let me down in the DR department.
> Am I just exposing properly?



There are lots of possibilities: maybe you don't take photos of situations/places with lots of high contrast; or you expose correctly for the elements in the scene that matter to you and don't worry about, or even notice, the rest; or when you have dark shadows you don't want to lighten them as much as others do, if at all; or when you do, the resulting noise doesn't bother you; etc.

(I too wasn't wild about the images I took with a D800E when I rented one last year out of curiosity; I prefer what I can get from my Sony A7r, though like you I don't know exactly why: the camera, the lenses I used, not having the camera long enough to figure out how best to use it, or some mix of these.)


----------



## Deleted member 91053 (Jun 6, 2014)

Some very interesting comments and food for thought - thanks.
Perhaps I should have stated at the beginning that my primary interest is wildlife photography, generally with long lenses and rarely at the lowest ISO. In this type of photography I feel that DR may be a lesser consideration compared to my camera's and lenses' other capabilities?


----------



## LetTheRightLensIn (Jun 7, 2014)

johnf3f said:


> Some very interesting comments and food for thought - thanks.
> Perhaps I should have stated at the beginning that my primary interest is wildlife photography, generally with long lenses and rarely at the lowest ISO. In this type of photography I feel that DR may be a lesser consideration compared to my camera's and lenses' other capabilities?



It depends. Higher ISOs have more limited DR, so you tend to hit the issue even more easily, although the light is often flatter in those situations.

Anyway, at higher ISOs Canon's DR is as good as the others', so you certainly aren't held back by Canon compared to other brands if upper-mid to high ISO is where you tend to live.
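The reason the gap closes at high ISO can be sketched with a simple noise model: engineering DR is log2(full well / read noise), and the off-chip ADC noise that dominates at base ISO is divided down by the analog gain as ISO rises. All numbers below are invented for illustration, not measured data for any real sensor:

```python
import math

def dyn_range_stops(full_well_e: float, read_noise_e: float) -> float:
    """Engineering dynamic range in stops: log2(saturation / noise floor)."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensor: a noisy off-chip ADC dominates at base ISO; its
# input-referred contribution shrinks with analog gain, so by high ISO
# the gain-independent on-chip read noise is all that remains.
for iso in (100, 800, 6400):
    gain = iso / 100
    sat_e = 60000 / gain            # usable full well, referred to input
    adc_noise_e = 20 / gain         # ADC noise divided down by gain
    pixel_noise_e = 3.0             # on-chip read noise, gain-independent
    noise_e = math.hypot(pixel_noise_e, adc_noise_e)
    print(iso, round(dyn_range_stops(sat_e, noise_e), 1))
```

In this toy model a sensor with a clean on-chip ADC (adc_noise_e near zero) would instead start near log2(60000 / 3) ≈ 14.3 stops at base ISO, which is roughly the low-ISO advantage being discussed; at high ISO both models converge to the same read-noise-limited figure.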


----------



## eml58 (Jun 7, 2014)

jrista said:


> Fourteen stops. HAH! I fart in the general direction of your fourteen stops! And Laugh. MUHAHAHAHAHAAAA!



I think the message taped to an arrow and shot into the chest may have had more impact and been more pointed ;D

Love those Movies.


----------



## jrista (Jun 7, 2014)

eml58 said:


> jrista said:
> 
> 
> > Fourteen stops. HAH! I fart in the general direction of your fourteen stops! And Laugh. MUHAHAHAHAHAAAA!
> ...



Me too!


----------



## Deleted member 91053 (Jun 7, 2014)

jrista said:


> eml58 said:
> 
> 
> > jrista said:
> ...



Obviously not a Troll as he didn't say " I don't want to talk to you no more, you empty-headed animal food trough wiper! I fart in your general direction! Your mother was a hamster and your father smelt of elderberries!"

P.S. a friend of mine was an extra in the battle scene at the end of the film.


----------



## clicstudio (Jun 10, 2014)

I've been asking the same question for years. It's funny that I was just about to start a thread bragging about the amazing HDR capabilities of the 5DIII…

I did a furniture shoot yesterday. I didn't take my 1DX; instead I took my friend's 5DIII because it has in-camera HDR which WORKS… Unfortunately, this only works with a tripod, since the camera takes three images and then combines them into one. Also, there is a little bit of cropping on the final image, so you always have to frame wider...

I also wish for the day my camera can see what my eyes see. Even the most expensive cameras can't capture a perfect sunset or a backlit portrait without a lot of tweaking…

Magic Lantern has a hack for the 5D and I think the 7D that enhances the DR, but it makes the image noisy.
My suggestion? Dual or triple sensors: one for highlights and one for shadows.

If one sensor could be calibrated to "see" only the top range of light and the other the bottom, it could work…
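The dual-sensor suggestion is essentially exposure fusion: merge a frame metered for highlights with one metered for shadows, weighting each pixel by how well exposed it is. A toy sketch of that blending (illustrative only, with invented weights; this is not how Canon's in-camera HDR, which merges three sequential frames, is implemented):

```python
import numpy as np

def fuse(short_exp: np.ndarray, long_exp: np.ndarray, gain: float) -> np.ndarray:
    """Blend two exposures of the same scene, both as linear values in [0, 1].

    `gain` is the exposure ratio (e.g. 4.0 for a 2-stop difference). Pixels
    are weighted by how close they sit to mid-grey in their own frame, so the
    long exposure supplies shadows and the short exposure supplies highlights.
    """
    # Bring the short exposure onto the long exposure's brightness scale.
    short_scaled = short_exp * gain
    # Well-exposedness weight: peaks at 0.5, falls off toward 0 and 1.
    w_long = np.exp(-((long_exp - 0.5) ** 2) / (2 * 0.2 ** 2))
    w_short = np.exp(-((short_exp - 0.5) ** 2) / (2 * 0.2 ** 2))
    return (w_short * short_scaled + w_long * long_exp) / (w_short + w_long)

# Example: a highlight clipped in the long exposure but preserved in the
# short one, and a shadow readable only in the long exposure.
short = np.array([0.20, 0.05])   # highlight reads 0.20, shadow nearly black
long_ = np.array([1.00, 0.20])   # highlight clipped at 1.0, shadow readable
print(fuse(short, long_, gain=4.0))
```

The first pixel comes out near 0.8 instead of the clipped 1.0, recovering the curtain-style highlight; where both frames agree (the second pixel) the blend leaves the value unchanged.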

I attached one image from yesterday's shoot, taken with available light only… This photo would be IMPOSSIBLE with my 1DX. I am very impressed with it and it looks great. Check out the white curtain: you can see the trees outside, not a washed-out 100% white curtain, and the detail of the orchids against the backlight. Also, no noise or distortion.
So, to the OP: get a 5DIII and a tripod and it will change the way you see things…

Technical info: 5DIII, Canon 24-70mm f/2.8L II, ISO 320, 28mm, f/8.0, 1/20s



Happy Shooting.


----------



## jrista (Jun 10, 2014)

clicstudio said:


> I've been asking the same question for years. It's funny that I was just about to start a thread bragging about the amazing HDR capabilities of the 5DIII…
> 
> I did a furniture shoot yesterday. I didn't take my 1DX; instead I took my friend's 5DIII because it has in-camera HDR which WORKS… Unfortunately, this only works with a tripod, since the camera takes three images and then combines them into one. Also, there is a little bit of cropping on the final image, so you always have to frame wider...
> 
> ...



Wow. I want that house! NICE!!!


----------

