# For our DR peepers: Sony A7000 - rumored 15.5 stops DR



## xps (Jul 8, 2015)

Some rumors from sonyalpharumors:
http://www.sonyalpharumors.com/sr3-sony-a7000-is-the-first-e-mount-camera-with-15-5-stops-on-sensor-hdr/

What the hell, what if this were a Canon sensor? ;D


----------



## psolberg (Jul 8, 2015)

Interesting trick, but seeing how Nikon manages to pull DR from Sony sensors better than Sony itself, and how the two-year-old Sony sensors were already in the mid 14s, I think the world's first full-frame sensor capable of 15 stops of DR will be a Nikon/Sony part that doesn't resort to any tricks.

but time will tell.


----------



## neuroanatomist (Jul 8, 2015)

Yes, it's quite a good trick to have DR "in mid 14's" when your camera has a 14-bit ADC. Hail to the almighty DxO Biased Scores, and kudos to those that revel in that BS. :


----------



## psolberg (Jul 8, 2015)

neuroanatomist said:


> Yes, it's quite a good trick to have DR "in mid 14's" when your camera has a 14-bit ADC. Hail to the almighty DxO Biased Scores, and kudos to those that revel in that BS. :



Isn't their margin of error about that much (half a stop)? You're also assuming their ADC quantization scale is a uniform power of two. It doesn't have to be. You can quantize a larger DR than 14 stops into 14 bits simply by deciding how your conversion curve maps the signal to a bit value. This is not that different from how tone mapping works in HDR software. It is of course not as preferable as a true 15-bit value, but given that precision in the lower bits is just measuring noise, and that one extra stop is not going to distort your tone curve excessively, it could easily be done.
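The nonlinear mapping described above can be sketched as a toy log-encoding (this is an illustration of the principle, not any camera's actual curve): a signal spanning 15 stops is squeezed into 14-bit codes, and nearby shadow levels still land on distinct codes.

```python
import math

BITS = 14
MAX_CODE = 2**BITS - 1   # 16383
SCENE_STOPS = 15         # linear signal spans a 2**15 : 1 range

def encode(linear):
    """Log-map a linear value in [1, 2**SCENE_STOPS] to a 14-bit code."""
    return round(MAX_CODE * math.log2(linear) / SCENE_STOPS)

def decode(code):
    """Invert the log mapping back to a linear value."""
    return 2 ** (SCENE_STOPS * code / MAX_CODE)

# Two shadow levels one stop apart still get distinct codes, even though
# the scene spans more stops than the ADC has bits.
print(encode(2.0), encode(1.0))   # distinct codes near the bottom
print(encode(2**15))              # brightest value maps to 16383
```

The cost is coarser linear precision in the highlights, which is the trade-off tone curves always make.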

But it may very well be that the world's first full-frame 15-bit camera is coming. It was only a matter of time, and if anybody has a reason to do this it's Sony's sensors; they are after all the only ones with this problem... which is actually a good problem.


----------



## neuroanatomist (Jul 9, 2015)

Except that the data don't suggest that's the case....but people just go on quoting 14.whatever-stops of DR, maybe some people just like the stench of bovine scat. 

You're correct that we'll see true 15-bit or higher DR in consumer cameras at some point. It's already in use in some scientific imaging applications.


----------



## psolberg (Jul 9, 2015)

neuroanatomist said:


> Except that the data don't suggest that's the case....but people just go on quoting 14.whatever-stops of DR, maybe some people just like the stench of bovine scat.
> 
> You're correct that we'll see true 15-bit or higher DR in consumer cameras at some point. It's already in use in some scientific imaging applications.



What data is it you're referring to? If you mean the DxO RAW data, I have never seen it, but I have no problem seeing the two-stop advantage of the Sony sensor, and neither have basically all the reviewers who have scrutinized it. Probably the most extensive set of DR analysis that is not numeric but visual is here:
http://diglloyd.com/search-ajax.html?q=dynamic+range

So we have quite a few years' worth of images that show a clear advantage for Sony's sensors, and all but those in denial accept it. It is a dead horse and time to move on, buddy. Dwelling on how that is quantized in the 14-bit raw file and how DxO measures it is ultimately focusing on the wrong number. You can argue that, given photons are random, there aren't 14 stops in a 14-bit file anyway. But the various measurements are still useful for comparing differences: it may be that the 14-bit file contains only 12 stops of USABLE DR. Still, the camera that gets you 12 usable stops will have higher DR than the one that yields 10. Again, the absolute value isn't what you're comparing: it is the difference between the absolute values. That is what the images will show.

I agree it is unfortunate that some people focus on the absolute value, but it is ultimately the value that must be given, because how else are we to extract the useful value: the difference. So while you can disagree all you want with absolute numbers in relation to actual DR, the difference, and the evidence of the difference, already settled this topic.

Back to speculation:
Noise control via sensor tech should make better use of the 14-bit file and maybe gain one more stop of usable range. So if the current baseline they are using scores it at 14.5, they will score it at 15.5. If the ADC has a non-power-of-two curve, then it can probably fit more. Point being, it will be a 3-stop usable difference over a lesser sensor even if there aren't actually 15 stops of usable range.


----------



## 3kramd5 (Jul 9, 2015)

This was digitized with a 16-bit ADC. I see 17-18 different tones (19 & 20 appear indistinguishable on this display). 

I have no reason to believe the a7000 won't be nearly as capable with a tone curve similar to what they have now plus multi-exposure blending.


----------



## neuroanatomist (Jul 9, 2015)

Interesting that I have not, here or elsewhere, denied that SoNikon sensors have 2+ more stops of low ISO DR...yet you feel compelled to argue the point anyway. Nicely done. 

2 + 2 = 5. Don't focus on the fact that the absolute answer of 5 is wrong, what matters is that the sum is greater than the addends...that's all that really matters.


----------



## psolberg (Jul 9, 2015)

neuroanatomist said:


> Interesting that I have not, here or elsewhere, denied that SoNikon sensors have 2+ more stops of low ISO DR...yet you feel compelled to argue the point anyway. Nicely done.
> 
> 2 + 2 = 5. Don't focus on the fact that the absolute answer of 5 is wrong, what matters is that the sum is greater than the addends...that's all that really matters.



Buddy, if you feel compelled enough to whine about people claiming 14.4 stops and 14-bit ADC conversion curves, are you really going to complain when somebody answers? Here is a question: if you didn't want to hear it, why did you bring it up?

Also, I'm not saying 2+2=5. I think you've gone mad. What I'm saying is that if you quantize DR the same way from a set of 14-bit values, the absolute DR you calculate doesn't mean anything unless you can compare it to something else. DxO is in the business of comparison. Plenty of field testing (see the Diglloyd link) supports the findings. You can spin around whether the actual DR is 14 stops or not. Given that the conversion curve is never stated to be a power of two, it really doesn't matter as long as they use the same curve on all cameras, which they do (or their comparison would be meaningless).

What the takeaway from DxO should be is that some cameras have better DR than others, as measured by them in stops relative to each other. And that is the key part: stops apart from each other. Sony/Nikon: 2-3 stops ahead, whatever the actual DR of their curve is. That is it. Just get over it.


----------



## neuroanatomist (Jul 9, 2015)

Still arguing the point I never disputed. At the outset, you wrongly stated that SoNikon sensors have >14-stops of DR, and I agreed that SoNikon delivers 2+ more stops of low ISO DR than Canon. You've posted a few hundred words, and I still agree that SoNikon delivers 2+ more stops of low ISO DR than Canon, and you were still wrong to state that SoNikon sensors have >14-stops of DR. Again, well done.


----------



## StudentOfLight (Jul 10, 2015)

A question for all you technical experts...
Let's say I have an image properly exposed for my middle-gray subject, but I've lost detail in the highlights and shadows because the dynamic range of the scene was very high. For argument's sake, let's say my camera is a D810, which has 13.7 stops of dynamic range at pixel level and 36MP. How much would I need to downsize my image to regain detail that is hidden in the highlight areas and shadow areas, both of which are important to properly convey the meaning of the image? (see attached image)


----------



## 3kramd5 (Jul 10, 2015)

As I understand it, you are not going to gain highlight detail by downsampling. Downsampling affects DR because noise is averaged and becomes lower relative to signal. It's only advantageous where SNR is low (i.e., not in the highlights).
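The two halves of that point can be shown with a toy simulation; the signal levels and read-noise figures below are made-up illustrative numbers, not any camera's specs. Averaging blocks of pixels shrinks the shadow noise, but a clipped highlight stays clipped.

```python
import random, statistics

random.seed(0)

def capture(signal, read_noise=2.0, full_well=16383, n=100_000):
    """Simulate n pixel reads of a constant signal with Gaussian read noise,
    clamped to the sensor's black level and full-well/ADC ceiling."""
    return [min(max(signal + random.gauss(0, read_noise), 0), full_well)
            for _ in range(n)]

def downsample(pixels, factor=4):
    """Average non-overlapping groups of `factor` pixels."""
    return [statistics.mean(pixels[i:i + factor])
            for i in range(0, len(pixels), factor)]

shadow = capture(signal=4)                 # deep shadow, SNR ~ 2
noise_before = statistics.stdev(shadow)
noise_after = statistics.stdev(downsample(shadow))
print(noise_before / noise_after)          # ~2: averaging 4 pixels halves noise

clipped = capture(signal=20000)            # above full well: every read clips
print(max(clipped))                        # 16383; no highlight detail regained
```

Averaging N pixels cuts random noise by roughly sqrt(N), which is exactly where the "print DR" gain comes from, and exactly why it does nothing for a saturated pixel.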


----------



## neuroanatomist (Jul 10, 2015)

Moreover, you don't gain DR beyond what can be captured by the sensor's actual (pixel level) DR. If your scene contains more than 13-14 stops of DR, you're going to lose highlights, shadows, or both...and no amount of downsampling will recover what you lose.


----------



## 3kramd5 (Jul 10, 2015)

neuroanatomist said:


> Moreover, you don't gain DR beyond what can be captured by the sensor's actual (pixel level) DR. If your scene contains more than 13-14 stops of DR, you're going to lose highlights, shadows, or both...and no amount of downsampling will recover what you lose.



Right. You aren't going to suddenly see details on the surface of the sun, or that dastardly black cat who haunts the neighborhood coal mine, if they weren't digitized from the onset. You're going to see less noise.


----------



## neuroanatomist (Jul 10, 2015)

3kramd5 said:


> neuroanatomist said:
> 
> 
> > Moreover, you don't gain DR beyond what can be captured by the sensor's actual (pixel level) DR. If your scene contains more than 13-14 stops of DR, you're going to lose highlights, shadows, or both...and no amount of downsampling will recover what you lose.
> ...



And yet...some people will keep bleating on about how SoNikon sensors deliver 14.whatever stops of DR. :


----------



## Don Haines (Jul 10, 2015)

neuroanatomist said:


> 3kramd5 said:
> 
> 
> > neuroanatomist said:
> ...



I wonder how long it will be before we start to see larger photon wells and 16-bit A/D converters in cameras..... Right now, the full well size takes 15 bits to digitize, and Sony/Nikon throws away about 1.3 bits for noise while Canon throws away about 3 bits for noise..... A bit deeper well and a bit cleaner circuitry, and 14 bits will no longer cut it...

My bet is within two years....
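The bit-counting above corresponds to the usual per-pixel engineering-DR formula, log2(full well / read noise). The electron counts below are illustrative assumptions chosen to match the "15 bits to digitize" framing, not measured values for any camera.

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Per-pixel engineering DR: log2 of full-well capacity over read noise,
    both in electrons."""
    return math.log2(full_well_e / read_noise_e)

well = 2**15  # assume a 32768 e- full well: takes 15 bits to digitize

# Cleaner electronics "throw away" ~1.3 bits to noise:
print(engineering_dr_stops(well, read_noise_e=2.5))   # ~13.7 stops
# Noisier electronics throw away 3 bits:
print(engineering_dr_stops(well, read_noise_e=8.0))   # 12.0 stops
```

On these numbers, either a deeper well or lower read noise pushes the per-pixel DR toward the 14-bit ceiling, which is the scenario where a 16-bit ADC starts to matter.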


----------



## 3kramd5 (Jul 11, 2015)

Don Haines said:


> neuroanatomist said:
> 
> 
> > 3kramd5 said:
> ...



You think consumer cameras will be able to make significant use of 16-bit ADCs in two years? I'll be surprised, but I guess it's possible.


----------



## Don Haines (Jul 11, 2015)

3kramd5 said:


> Don Haines said:
> 
> 
> > neuroanatomist said:
> ...


I should have written my thoughts a bit more clearly....

I think that in 2 years it will happen on one or two very high-end cameras... 1DX II? But most cameras will not have it or even need it.

Also, 16-bit A/D is easy to do... in the electronics world, 16-bit A/D is low-res.... 24-bit is considered normal, and 32-bit (and above) is considered high-res.... and in some very specialized instruments (think $100,000+) you can even find 64-bit A/D...


----------



## 3kramd5 (Jul 11, 2015)

Don Haines said:


> 3kramd5 said:
> 
> 
> > Don Haines said:
> ...



Going to a 16-bit ADC is certainly easy to do, but I don't think sensors will be such that two additional bits would be useful to anyone outside the marketing department.


----------



## msm (Jul 11, 2015)

neuroanatomist said:


> Yes, it's quite a good trick to have DR "in mid 14's" when your camera has a 14-bit ADC. Hail to the almighty DxO Biased Scores, and kudos to those that revel in that BS. :



Yes, clearly. Here is a 1-bit image 8192 pixels wide, then the same 1-bit image downsampled to 1024 pixels. As you can clearly see, there can't possibly be more than 1 stop of DR in this image :


----------



## neuroanatomist (Jul 11, 2015)

msm said:


> neuroanatomist said:
> 
> 
> > Yes, it's quite a good trick to have DR "in mid 14's" when your camera has a 14-bit ADC. Hail to the almighty DxO Biased Scores, and kudos to those that revel in that BS. :
> ...



Certainly, there are ways to digitally increase the DR in an already-captured image. I bet your grandmother knows how to suck eggs without you having to teach her. 

Which camera did you use that converted the original >1 stop analog signal to digital using a 1-bit ADC? 

At issue is the scene DR vs. the system DR _at image capture_. Are you suggesting that signal above the full well capacity or below the noise floor can be included in the dynamic range of an image? That a sensor with a 13.5-stop difference between noise floor and full well capacity can capture the full DR of a scene with 14.4-stops of DR, in a single image? 

But lots of people like BS, and seem to eat it up with a giant spoon. For example, in this post on Digital Camera World that purports to explain, "_...what you need to know about capturing all the tones in a scene,_" the author states that:

[quote author=Markus Hawkins on Digital Camera World]
For instance, the Nikon D610’s dynamic range has been measured at between 13 and 14.4 EV at ISO 100.
[/quote]

The D610's DR has been *measured* at 14.4 EV. Do you believe that statement to be true? Is the D610 _capturing all the tones in a scene_ when that scene has 14.4 stops of DR? Personally, I think Markus has eaten a big helping of DxO and his breath reeks of BS. 

DxO aside, it's true that 14-bit ADCs are commonly used because that's more than enough to fully represent the captured range of current sensors. As sensors exceed that 14-bit limit, camera makers _could_ clip or map that >14-bit signal into the smaller range. It would be transformed data still called RAW, but that's already common (Nikon applies NR to RAW, Sony applies lossy compression, and Canon's mRAW and sRAW aren't really RAW). More likely they'll just move to 16-bit ADCs (just as they went from 12 to 14 bits a while back). The step after that (18 bits) will be the tricky one.


----------



## msm (Jul 11, 2015)

Neuro, I am gonna ignore your usual irrelevant BS. I will just point out that there are multiple definitions of dynamic range. You've got the traditional per-pixel ones (i.e., engineering DR and DxO Screen) and the more modern ones which take resolution into account (i.e., DxO Print and Bill Claff's photographic dynamic range). Both Bill Claff and DxO disclose their DR definitions on their websites. When the resolution of the sensor is sufficiently high, it is perfectly possible for the dynamic range measured in stops to exceed the number of bits of the ADC when using the DR measures which take resolution into account.

In a per-pixel measure of DR, a sensor with 1 pixel and 11 stops of DR has better dynamic range than a 50-megapixel sensor with 10.9 stops of DR. For a photographer, this is obviously a completely useless piece of information; that's why we have the more modern measures. A 50-megapixel sensor with per-pixel DR of 13.5 stops captures way more information than a 12-megapixel sensor with per-pixel DR of 13.5; in fact, it captures more than a stop more per-pixel DR after you downscale to 12MP than the 12MP sensor does.
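The "more than a stop" figure follows from the standard normalization: averaging down by a pixel ratio R reduces noise by sqrt(R), adding 0.5 * log2(R) stops. A quick check, assuming DxO's Print-style normalization and its 8MP reference:

```python
import math

def print_dr_gain_stops(native_mp, target_mp):
    """Stops of per-pixel DR gained by downsampling: noise averages down
    by sqrt(pixel ratio), i.e. 0.5 * log2(native / target)."""
    return 0.5 * math.log2(native_mp / target_mp)

# 50MP downscaled to 12MP: just over one stop of extra per-pixel DR.
print(13.5 + print_dr_gain_stops(50, 12))   # ~14.53 stops
# Normalized to an 8MP reference instead:
print(13.5 + print_dr_gain_stops(50, 8))    # ~14.82 stops
```

This is also why a resolution-normalized DR number can exceed the ADC's bit count without any individual pixel doing so.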

Unless you have some information to share with us that points out that DXO's D750 test has not been performed in accordance with their definition, then you may as well consider the following statement to be a fact: "The D750 sensor has been measured at 14.5EV dynamic range at base iso using the DXO print definition".


----------



## 3kramd5 (Jul 11, 2015)

msm said:


> 50 megapixel sensor with per pixel DR of 13.5stops captures way more information than a 12mpixel sensor with per pixel DR of 13.5, in fact it captures more than a stop more per pixel DR after you downscale to 12mpix than the 12mpix sensor does.



By what magic does downsampling an image change what the sensor already captured? 

You'll never capture more by downsampling; what gets digitized initially is the most of anything you get. Downsampling will reduce noise and therefore increase the DR of an image, but it doesn't affect the capture.

[quote author=msm]
Unless you have some information to share with us that points out that DXO's D750 test has not been performed in accordance with their definition, then you may as well consider the following statement to be a fact: "The D750 sensor has been measured at 14.5EV dynamic range at base iso using the DXO print definition".[/quote]

It is not fact. Had DxO downsampled a D750 image and run analysis on it, it would be fair to say "an image from the D750 has been measured at 14.5," but that isn't what they do. They analyze RAW (the DxO ONE notwithstanding) and use a math model to predict downsampled "print" DR; it isn't measured.


----------



## neuroanatomist (Jul 11, 2015)

msm said:


> Neuro, I am gonna ignore the main point you made and the specific question you asked me.



Fixed that for ya. I'll try once more, and I'll keep it simple. You're holding a D750 and have framed a scene with 14.5 stops of DR from deepest shadow detail to brightest highlight detail. Yes or no: can you capture in a single image all that detail from the deepest shadow to the brightest highlight? Feel free to use whichever definition of DR that you prefer. 




msm said:


> Unless you have some information to share with us that points out that DXO's D750 test has not been performed in accordance with their definition, then you may as well consider the following statement to be a fact: "The D750 sensor has been *measured* at 14.5EV dynamic range at base iso using the DXO print definition".



"Measured"... in the context of science and engineering, the word has a specific meaning. I understand it, DxO understands it, but it seems that you do not. DxO's understanding is described in the attached screenshot (from here). In terms of their plots, Screen DR is *measured*, and Print DR (the one that's >14 stops) is mathematically determined from that measured value. If you were able to answer the above question correctly, you should understand why the distinction is important.


----------



## msm (Jul 12, 2015)

3kramd5 said:


> msm said:
> 
> 
> > 50 megapixel sensor with per pixel DR of 13.5stops captures way more information than a 12mpixel sensor with per pixel DR of 13.5, in fact it captures more than a stop more per pixel DR after you downscale to 12mpix than the 12mpix sensor does.
> ...



Downsampling does not create more information. But, for instance, a 50-megapixel sensor captures more information than a 16-megapixel one with identical per-pixel DR, which results in less noise and higher dynamic range from the 50-megapixel sensor when the images are viewed at identical magnification (or downsampled to identical resolution). This is why looking at per-pixel DR without considering resolution is meaningless from a photography perspective.



> It is not fact. Had DXO down sampled a d750 image and run analysis on it it would be fair to say "an image from the d750 has been measured at 14.5," but that isn't what they do. They analyze RAW (dxoone notwithstanding) and use a math model to predict downsampled "print" DR; it isn't measured.



In practice it doesn't matter whether they downsample first and then calculate, or use a formula; the difference will be negligible. Or are you implying the statistics are wrong? If you know for a fact that they calculate it from the formula, you can substitute the word "measured" with "estimated" in my statement above, but unless DxO messes it up, it will be a good estimate.


----------



## msm (Jul 12, 2015)

neuroanatomist said:


> msm said:
> 
> 
> > Neuro, I am gonna ignore the main point you made and the specific question you asked me.
> ...



Uninteresting question, but if you absolutely want to, then yeah, you can come up with a scene and a definition of DR where the answer in practice is yes. In general, however, a DSLR can never capture all the detail of a scene.


----------



## 3kramd5 (Jul 12, 2015)

msm said:


> In practice it doesn't matter if they downsample first then calculate or if they use a formula, the difference will be negligable.



I guess what bothers me, as a bit of a purist, is that they make every effort to discuss how their scorings are based on RAW, for example:

*All sensor scores reflect only the RAW sensor performance of a camera body. All measurements are performed on the RAW image file BEFORE demosaicing or other processing prior to final image delivery. DxOMark does not address such other important criteria as image signal processing*, mechanical robustness, ease of use, flexibility, optics quality, value for money, etc. While RAW sensor performance is critically important, it is not the only factor that should be taken into consideration when choosing a digital camera.

and

Since users can choose their RAW converter and tune the settings to a very fine degree, and since we want to evaluate the intrinsic quality of the sensor and the lens, it is only logical to perform measurements on RAW images.

and 

Only RAW-based measurements report on the image quality of the photographic hardware irrespective of the RAW converter.

and yet by default what they display in their sensor scoring is not RAW, but rather something which has been noise-reduced via the averaging of pixels involved in downsampling. Seems a little strange. It also negates a common use (if internet posts are to be believed) for high-resolution sensors: cropping. 

That which is based on downsampling a demosaiced RGB image shouldn't be referred to as a sensor property, score, etc., any more than one which has been run through DxO Prime. It's a reasonable way to compare potential SNR between different cameras with different resolutions, but that's about it. 



msm said:


> But if you know for a fact they calculate it from the formula you can substitute the word "measured" with "estimated" in my statement above, but unless DXO mess it up it will be a good estimate however.



I'd probably still take issue with it since it says "the sensor has been..." 

I'd have no problem with the statement "Prints which have been averaged down to roughly 1/3 the native sensor resolution have been estimated as potentially representing 14.5EV DR" ;D

Personally, I work on my images at full resolution, and whatever happens when they get downsampled to another format is a bonus. I would much prefer if DxO gave prominence to "Screen DR," but it isn't my website and I'm not about to start a competing one, so I'll deal with a few additional clicks. Unfortunately, given how the internet works, each step away from DxO makes it a little less likely the caveats are understood by the end reader.


----------



## RGF (Jul 12, 2015)

To have 15.5 (call it 16) stops of DR, the sensor needs to output at least 16 bits of data, plus a few extra for the darkest darks. Let's say 4 bits is sufficient. That means 20 bits of data.

Normally LR, etc. save 16 bits of data (actually there are only 15 bits; the high-order bit is a sign bit, so data is recorded from 0 to 32767, which is 2^15-1).

For 20 bits of data (required to support 15.5 stops of DR), the sensor and processing would need to use 32-bit data, which means files would be twice the size.

Sounds like DxO is measuring something that is not actually used.


----------



## msm (Jul 12, 2015)

3kramd5 said:


> That which is based on downsampling a demosaiced RGB image shouldn't be referred to as a sensor property, or score, etc., any more than one which has been run through DXO Prime. It's a reasonable way to compare potential SNR between different cameras with different resolutions, but that's about it.



Where is it stated that it is based on a demosaiced image?



3kramd5 said:


> I'd probably still take issue with it since it says "the sensor has been..."
> 
> I'd have no problem with the statement "Prints which have been averaged down to roughly 1/3 the native sensor resolution have been estimated as potentially representing 14.5EV DR" ;D
> 
> Personally, I work on my images at full resolution, and what happens when they get down sampled to another format are a bonus. I would much prefer if I DXO gave prominence to "screen DR," but it isn't my website and I'm not about to start a competing one, so I'll deal with a few additional clicks. Unfortunately, given how the internet works, each step away from DXO makes it a little less likely the caveats are understood by the end reader.



Ok, so you only care about pixel-peeping 100% crops and don't care about how the entire image looks from a reasonable viewing distance? In that case I would recommend you get a low-resolution camera with good per-pixel DR measures.

Measures like DxO Print DR and Bill Claff's photographic dynamic range are based on the image in its entirety; for instance, Bill Claff states, "PDR is the dynamic range you would expect in an 8x10” print viewed at a distance of about arm's length."


----------



## 3kramd5 (Jul 12, 2015)

msm said:


> Ok so you only care about pixel peeping 100% crops and don't care about how the entire image look from a reasonable viewing distance?



Incorrect on both. I said I work at full res, and that if noise is reduced when I change format, it's a bonus. However, I wouldn't purchase a high-resolution camera (e.g., the 42.4-ish or 50.1-ish MP bodies I have on order) with the intent to shed the vast majority of the resolution.

I mostly use a 5D3, which isn't high-res by current standards and doesn't have huge per-pixel DR by the current state of the art. Doesn't matter; it works for me. I sometimes use a 36MP camera, and when I do, it's because I intend to use the resolution, by which I mean print a whole heck of a lot bigger than 8x10.



msm said:


> 3kramd5 said:
> 
> 
> > That which is based on downsampling a demosaiced RGB image shouldn't be referred to as a sensor property, or score, etc., any more than one which has been run through DXO Prime. It's a reasonable way to compare potential SNR between different cameras with different resolutions, but that's about it.
> ...



Fair 'nuff. Substitute "that which is based on an estimate of what could result were you to downsample a demosaiced RGB image..." 
Regardless, it's signal processing and is no longer RAW by any stretch of DXO's verbiage.


----------



## neuroanatomist (Jul 12, 2015)

msm said:


> neuroanatomist said:
> 
> 
> > msm said:
> ...



Uninteresting to you... perhaps because you don't like the answer. What it means is that the D750 does *not* deliver 14.5 stops of DR. Furthermore, no current SoNikon sensor delivers >14 stops of DR. 

Yet...there are claims like that all over the place, including the one by psolberg which started this discussion. All thanks to the BS plopping to the ground behind DxO.


----------



## msm (Jul 12, 2015)

neuroanatomist said:


> msm said:
> 
> 
> > neuroanatomist said:
> ...



Uninteresting because it is a stupid question. No sensor can capture all the detail in any scene, period. And even if you ignore that point, the question is not well defined:

"a scene with 14.5 stops of DR from deepest shadow detail to brightest highlight detail": how do you define this precisely?
"can you capture in a single image all that detail from the deepest shadow to the brightest highlight?": again, defined how?


----------



## neuroanatomist (Jul 12, 2015)

msm said:


> neuroanatomist said:
> 
> 
> > msm said:
> ...



14.5 stops of DR in the scene. Can the D750 capture that full 14.5 stops of DR without clipping a highlight or blocking up a shadow? Oh, wait – you've already answered and the answer is, "No." Now, it's both uninteresting and stupid because you don't like the answer. Weaseling around interpretations doesn't alter the facts. No current Sony/Nikon/etc. sensor used in a dSLR or MILC can capture more than 14-stops of DR. Can you grasp that simple fact, or do you need more basic terminology and concepts defined for you?


----------



## 3kramd5 (Jul 12, 2015)

msm said:


> "a scene with 14.5 stops of DR from deepest shadow detail to brightest highlight detail": How do you define this precisely?



How about a scene of a constant texture lit at one end by a spot. If you meter in the spotlight, it gives f/128, and if you meter the opposite end, it gives f/1 with an extra half stop from a longer exposure time. 

The question then would be: can you see the same constant texture across the entire scene?
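For reference, the stop arithmetic behind that scene works out as follows (each full stop scales the f-number by a factor of sqrt(2)):

```python
import math

def stops_between(f_bright, f_dark):
    """Exposure difference in stops between two metered f-numbers at the
    same shutter speed: exposure goes as 1/N**2, so stops = 2*log2 ratio."""
    return 2 * math.log2(f_dark / f_bright)

aperture_stops = stops_between(1, 128)   # f/1 to f/128 -> 14.0 stops
print(aperture_stops + 0.5)              # plus the half-stop longer exposure -> 14.5
```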


----------



## neuroanatomist (Jul 12, 2015)

3kramd5 said:


> msm said:
> 
> 
> > "a scene with 14.5 stops of DR from deepest shadow detail to brightest highlight detail": How do you define this precisely?
> ...



I don't have a lens that opens up to f/1 or stops down to f/128, so your question is impractical and irrelevant. See...I can weasel around on interpretation, too.


----------



## msm (Jul 13, 2015)

neuroanatomist said:


> msm said:
> 
> 
> > neuroanatomist said:
> ...



Wrong; the problem here is that you can't formulate a meaningful question. You *can* "capture" more than 14 stops of DR (as in, you can separate signals more than 14 stops below the white point): just downsample sufficiently and you are left with pixels that are way past 14. I have already demonstrated that in the goose picture above. And as a consequence, by some definitions of dynamic range you can have more than 14 stops of DR from a 14-bit ADC (because the measured signal is supersampled by definition). Why is this so hard for you to accept? It is just a definition, you know.


----------



## neuroanatomist (Jul 13, 2015)

msm said:


> Wrong, the problem here is that you can't formulate a meaningful question. You can "capture" more than 14stops DR (as in you can separate signals of more than 14 stops below white point), just downsample sufficently and you are left with pixels which are way past 14. I have already demonstrated that in the goose picture above. And as a consequence by some definitions of dynamic range you can have more than 14stops of DR from a 14bit ADC (because the measured signal is supersampled by definition). Why is this so hard to accept for you? It is just a definition you know.



Well, it looks like we've run up against the wall of your inability to accept facts. As I already stated, you can digitally introduce over 14 stops of data into a 14-bit digital file. But current sensors cannot *capture* the complete dynamic range of a scene with >14 stops of DR. It must be nice for you; you are able to have as much DR as you want in your files. You can capture the detail of a white rock face in full sun and the detail inside an unlit cave, all in one shot... all you have to do is downsample. The rest of us live in the real world, where detail in the highlights will be lost to clipping, detail in the shadows will be lost and unrecoverable, or both. Your handwaving about "by some definitions" ignores the central point that information lost at capture cannot be recreated later, and your unwillingness or inability to admit that fact is rather sad. 

Your intransigence has apparently reached the point of a mental handicap regarding this issue, so I see no point in continuing this discussion.


----------



## StudentOfLight (Jul 13, 2015)

How is the noise floor measured? Is the "Screen" DR value not already an average derived from a representative sample of pixels?


----------



## msm (Jul 13, 2015)

neuroanatomist said:


> msm said:
> 
> 
> > Wrong, the problem here is that you can't formulate a meaningful question. You can "capture" more than 14stops DR (as in you can separate signals of more than 14 stops below white point), just downsample sufficently and you are left with pixels which are way past 14. I have already demonstrated that in the goose picture above. And as a consequence by some definitions of dynamic range you can have more than 14stops of DR from a 14bit ADC (because the measured signal is supersampled by definition). Why is this so hard to accept for you? It is just a definition you know.
> ...



This is getting funny. I do not have a problem with accepting facts. However I do have a problem with facts from Neuroland.

Let's instead consider some facts from reality:

- A single pixel in a modern image sensor can at most create 14 bits of information per scan, via a 14-bit ADC.
- Modern image sensors, however, consist of millions of pixels; a 5DS sensor creates more than 700 megabits of information per scan.
- In digital imaging, some parties have started to measure dynamic range as a property of the sensor, not of the individual pixels. By these measures, dynamic range in stops can exceed the number of bits in the ADC.

I have tried to explain why several times now, and even demonstrated the principle practically, but it is obviously a waste of time. You should read your last post again and reflect on whether it maybe applies to yourself, but it seems self-reflection is not one of your strengths. :
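The third point, sensor-level DR exceeding the ADC's bit count once pixels are averaged, is easy to demonstrate with a simulation. A minimal sketch; the white point, read noise, and downsample factor are assumed values, not any real camera's:

```python
import numpy as np

rng = np.random.default_rng(0)

CLIP = 16383.0     # white point of a 14-bit ADC (2**14 - 1)
READ_NOISE = 4.0   # assumed per-pixel read noise, in DN

# Simulate a dark frame: the only "signal" is read noise.
dark = rng.normal(0.0, READ_NOISE, size=(2000, 2000))

def dr_stops(noise_floor):
    """Engineering DR in stops: log2(white point / noise floor)."""
    return float(np.log2(CLIP / noise_floor))

per_pixel_dr = dr_stops(dark.std())          # ~12.0 stops

# Downsample 4x in each direction by block-averaging 16 pixels:
# uncorrelated noise drops by sqrt(16) = 4, so DR gains ~2 stops.
blocks = dark.reshape(500, 4, 500, 4).mean(axis=(1, 3))
downsampled_dr = dr_stops(blocks.std())      # ~14.0 stops
```

The averaged image has a lower noise floor than any individual pixel can report, which is the whole sense in which a sensor-level DR figure can exceed the ADC's bit depth.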


----------



## neuroanatomist (Jul 13, 2015)

3kramd5 said:


> How about a scene of a constant texture lit at one end by a spot. If you meter in the spot light, it gives f/128, and if you meter the opposite end it gives f/1 with an extra half stop from a longer exposure time.
> 
> The question then would be can you see the same constant texture across the entire scene?



Apparently the answer is yes. Just downsample and the detail in the texture lost to highlight clipping and/or shadow blocking will magically be created. 

_Physics – it's only a suggestion. _.


----------



## msm (Jul 13, 2015)

neuroanatomist said:


> 3kramd5 said:
> 
> 
> > How about a scene of a constant texture lit at one end by a spot. If you meter in the spot light, it gives f/128, and if you meter the opposite end it gives f/1 with an extra half stop from a longer exposure time.
> ...



Oh, is this another wonderful insight from Neuroland, or an attempt to ridicule someone based on arguments they never made? My guess is the latter, but from what I have seen above I can't be 100% sure :


----------



## StudentOfLight (Jul 14, 2015)

msm said:


> - A single pixel in a modern image sensor can at most create 14 bits of information per scan, via a 14-bit ADC.


Using a 14-bit Analog-to-Digital-Converter (ADC), a pixel's quantized value can be anything from 0 to 16383. When you combine multiple pixels you can average their values. So if you combine an input pixel array (0;0;0;1) you can have an output pixel value of 0.25. Log2(16383/0.25)>14. I believe this is your argument msm, please correct me if I'm wrong.

For the Nikon D610, if your noise floor (average read noise) is 0.32 EV then that corresponds to a 14-bit pixel value of 1.25. If you combine an input pixel array (1.25;1.25;1.25;1.25) you get an output pixel value of 1.25. Log2(16383/1.25)=13.7. I believe this is Neuro's argument, please correct me if I'm wrong.
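The two calculations above can be checked directly; the numbers are the ones in this post, and the code just redoes the arithmetic:

```python
import math

FULL_SCALE = 16383  # max 14-bit ADC code

# msm's side: averaging the input pixel array (0, 0, 0, 1) gives 0.25,
# a value finer than one 14-bit step, so the ratio exceeds 14 stops.
avg_low = (0 + 0 + 0 + 1) / 4                    # 0.25
stops_msm = math.log2(FULL_SCALE / avg_low)      # ~16.0

# Neuro's side: if the noise floor itself is 1.25 DN, averaging four
# identical noisy pixels leaves it at 1.25, and the range stays put.
avg_floor = (1.25 + 1.25 + 1.25 + 1.25) / 4      # still 1.25
stops_neuro = math.log2(FULL_SCALE / avg_floor)  # ~13.7
```

The disagreement in the thread is essentially over which denominator is the honest one: an averaged sub-LSB value, or the noise floor that averaging cannot push below when the noise is the signal.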


----------



## msm (Jul 14, 2015)

StudentOfLight said:


> msm said:
> 
> 
> > > - A single pixel in a modern image sensor can at most create 14 bits of information per scan, via a 14-bit ADC.
> ...



I have neither the time nor the motivation to write up a fully detailed and correct explanation, but I'll try to sum up some of it. 

One important aspect here is that reading pixel values is subject to noise, which means we need to turn to statistics. The key point is that when you average measurements from multiple pixels, the result has less noise than the individual measurements; the more measurements you average, the less noise you get. If you want to understand the details, I recommend reading up on the concepts of basic statistics: random variables, expected values, variance, and standard deviation.

Also important are the definitions of DR which you can read about on DXO's site or at http://home.comcast.net/~NikonD70/GeneralTopics/Sensors_&_Raw/Sensor_Analysis_Primer/Sensor_Analysis_Primer.htm

The sensor-based DR definitions either downsample to a specific resolution by bilinear filtering or adjust for a circle of confusion. This is done by calculating weighted averages over multiple pixels. So obviously, the more measurements you have (i.e. more pixels), the lower the noise is in the downsampled pixels or in the CoC. Since modern sensors based on the same technology typically have identical per-pixel noise at base ISO regardless of resolution, in practice a high-resolution sensor typically produces less noisy images (when viewed at identical distance or at identical resolution) than a low-resolution sensor based on the same technology. This is why I argue that per-pixel measurements of dynamic range alone are of limited value in photography.

When DXO says that the landscape DR is above 14 stops, it simply means that the ratio between the noise floor (probably defined as the signal where the signal-to-noise ratio is 1; check the DXO site to be sure) and the clipping or white point is above 14 stops at base ISO, per pixel, after the image has been downsampled to 8 megapixels. It doesn't mean anything other than this, so it is important not to misinterpret the number. A signal-to-noise ratio of 1 also means a very noisy signal, so it does not mean that you can "capture details 14 stops below the white point" or anything like that. The real answer to a question like that would also depend on things like how fine the detail is (i.e. how many pixels are needed to reproduce it at sufficient quality) and how large an area of the sensor it covers.

The actual values from dynamic range measurements can vary significantly depending on which criteria are used. What matters is usually not the values themselves but the differences in values for different sensors measured under the same definition.
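For reference, DxO's "print" normalization is commonly described as adding half a stop per doubling of pixel count over the 8 MP reference; a hedged sketch of that conversion (the formula is the commonly cited one, not quoted from DxO in this thread):

```python
import math

def print_dr(screen_dr, megapixels, reference_mp=8.0):
    """Approximate DxO-style normalized ("print") DR.

    Downsampling N pixels to the 8 MP reference averages N / 8e6
    source pixels into each output pixel, cutting uncorrelated noise
    by sqrt(N / 8e6) and so adding 0.5 * log2(N / 8e6) stops.
    This is the commonly cited formula, not DxO's published code.
    """
    return screen_dr + 0.5 * math.log2(megapixels / reference_mp)

# e.g. a 36 MP sensor measured at 13.7 stops per pixel:
normalized = print_dr(13.7, 36.0)  # ~14.8 stops
```

Which is how a camera with a 13.7-stop per-pixel ("screen") figure ends up quoted at roughly 14.8 stops of "print" DR.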


----------



## zim (Jul 14, 2015)

Glad to see that last post was removed


----------



## Frodo (Jul 14, 2015)

This thread discusses per pixel and per sensor dynamic range. What about whole system dynamic range? 
I thought that flare inside the lens and camera defined an upper limit for dynamic range, although I don't know what that is.


----------



## Sporgon (Jul 14, 2015)

Frodo said:


> This thread discusses per pixel and per sensor dynamic range. What about whole system dynamic range?
> I thought that flare inside the lens and camera defined an upper limit for dynamic range, although I don't know what that is.



I've got a feeling that some lenses do allow the sensor to capture slightly more dynamic range, specifically modern 'made for digital' ones compared with some of the late film-era lenses that were designed to boost contrast with film.


----------



## PhotographyFirst (Jul 14, 2015)

Frodo said:


> This thread discusses per pixel and per sensor dynamic range. What about whole system dynamic range?
> I thought that flare inside the lens and camera defined an upper limit for dynamic range, although I don't know what that is.



That's a very complex issue to try to quantify. It depends on where the bright and dark areas are positioned in the frame. Some lenses can maintain very high contrast ratios in one portion of the frame where bright light is shining, and very low contrast in other parts of the frame. There is also the question of how far apart the brightest and darkest areas are in the frame. If you have a sharp edge transition from super dark to super bright, all lenses will struggle to maintain any decent level of DR. If the darkest area is in one corner and the brightest area in the opposite corner, then most lenses will perform far beyond what any current sensor can do in terms of DR.


----------



## rfdesigner (Jul 15, 2015)

StudentOfLight said:


> A question for all you technical experts...
> Let's say I have an image properly exposed for my middle-gray subject, but I've lost detail in the highlights and shadows because the dynamic range of the scene was very high. For argument's sake, let's say my camera is a D810, which has 13.7 stops of dynamic range at the pixel level and 36 MP. How much would I need to downsize my image to regain detail that is hidden in the highlight and shadow areas, both of which are important to properly convey the meaning of the image? (see attached image)




You don't expose for mid-tones, you expose for highlights; it's called Expose To The Right (ETTR). As long as you don't clip any pixels, you can recover shadows by downsampling: you get one stop of improvement in the shadows by halving both vertical and horizontal resolution, and doing the same again gains another stop.

You can also gain one stop in the shadows by averaging 4 images, or 2 stops with 16 images; this is what is often done in astro imaging.

Additionally, you get the same real-world performance from either the Sony or the Nikon implementation of the same sensor, despite what DxO would have you believe. Nikon clips the noise, making the zero point look less noisy; if you have an extremely low but non-zero signal level, the noise is raised so that it is fully sampled and you lose the "benefit" of clipping. So while areas 14 EV below clipping might appear marginally better, those at 13 EV below clipping and brighter should appear identical.
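rfdesigner's frame-averaging numbers follow from uncorrelated noise shrinking with the square root of the number of samples; a quick simulation with assumed Gaussian read noise:

```python
import numpy as np

rng = np.random.default_rng(1)
READ_NOISE = 8.0  # assumed per-frame read noise, in DN

# Sixteen simulated "dark" frames containing nothing but read noise.
frames = rng.normal(0.0, READ_NOISE, size=(16, 500_000))

single = frames[0].std()                # noise of one frame
avg4 = frames[:4].mean(axis=0).std()    # noise / sqrt(4)
avg16 = frames.mean(axis=0).std()       # noise / sqrt(16)

gain4 = np.log2(single / avg4)    # ~1.0 stop gained in the shadows
gain16 = np.log2(single / avg16)  # ~2.0 stops gained
```

Averaging 4 frames (or 4 pixels, in the downsampling case) halves the noise floor, which is exactly one stop; 16 frames gives two.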


----------



## Aglet (Jul 15, 2015)

It's been a long day, I'm tired, and the silly bickering here has ceased to be amusing.

Y'all might need to pause your arguments for a moment and think about a particular aspect of DxO's downsampled DR. I'm sure you're all aware of how it's calculated, but either you don't think it's real-world relevant or you just enjoy arguing over the semantics.

Remember that for any maximum number representing a fully quantized count of a signal, the MINIMUM quantifiable amount is ZERO.
Anything divided by zero = infinity. Infinity would be a lot of DR!

What DXO's downsampling does is merely to average out the black level data provided by the sensor.
The closer to zero you can get, the higher the DR, no matter whether its 14 bits, 12 or 8 or less.

Since Canon's sensor systems don't produce many zeros, due to prodigious read noise, their DR is going to be limited.
ABC cameras produce more zero data for black levels, so they simply have a better black level when averaged, and that makes for a better DR number the way DxO calculates it.

Theoretically, it's possible for a really high-resolution sensor, digitized at only 1 bit but (for the sake of argument) using some sort of diffusion-and-dithering scheme, to produce an infinite DR measurement, because its black levels would always be represented by exactly zero rather than some slightly-greater-than-zero noise value (because the diffusion and dithering algorithms it uses are perfect).
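Aglet's 1-bit thought experiment has a real counterpart in dithered quantization. A toy simulation (ideal uniform dither is assumed, which is exactly the "perfect algorithms" caveat above) of how averaging 1-bit pixels preserves tone well below the per-pixel bit depth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the 1-bit sensor above, with ideal uniform dither.
# All values are assumed for illustration.
true_tone = 0.37  # scene tone as a fraction of full scale
dither = rng.uniform(-0.5, 0.5, size=1_000_000)

# Each "pixel" stores a single bit: did signal + dither cross mid-scale?
bits = (true_tone + dither > 0.5).astype(float)

# Averaging many 1-bit pixels recovers the tone almost exactly, so the
# usable range is not bounded by the per-pixel bit depth.
recovered_tone = bits.mean()  # ~0.37
```

The same mechanism is why averaging real sensor pixels, whose read noise acts as (imperfect) dither, can resolve values between ADC steps.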


So, IMO, the print DR number is carp!
The screen DR number is somewhat useful, but it is also misleading, as we're not really sure how they're averaging all the black pixels (simple mean, RMS, mean + RMSvar).

The full SNR measurements are slightly more useful as you can see how clean the entire signal is at any tonal level (in %) for any major ISO.
By looking at where the plot intersects the bottom axis, the farther to the left, the better. The higher the plot intersects the vertical right axis, the better. 
Log2(latter / former) is your DR when using a 1:1 signal-to-noise ratio as the lower limit for the measurement, as they've chosen to depict it. 

ALL OF THESE MEASUREMENTS ARE STILL ALMOST USELESS IF YOU DON'T HAVE A FIGURE FOR PATTERN NOISE which is what really limits useful DR.
But if you KNOW a particular body does not exhibit pattern noise, then DxOmark's numbers are very useful and directly comparable.
Have a look at the SNR graph for a Pentax K5ii if you want to see an impressively clean camera which is also devoid of any significant pattern noise.

http://www.dxomark.com/Cameras/Pentax/K-5-II---Measurements#measuretabs-6


So, perhaps to surprise Neuro, I don't fit into his DRone group.
I'm in the smaller anti-pattern-noise group. 
Maybe someone can come up with a catchy acronym for that.


----------



## 3kramd5 (Jul 16, 2015)

dilbert said:


> 3kramd5 said:
> 
> 
> > This was digitized with a 16-bit ADC. I see 17-18 different tones (19 & 20 appear indistinguishable on this display).
> ...



Yes, I see them all on my desktop (originally posted from an iphone6), and 19 on my laptop (2015 macbook pro). Note that DXO's "print DR" score is 14.8, so presumably 15-and-on are where the math model predicts SNR of less than 1 after downsampling from 6k.


----------



## 3kramd5 (Jul 16, 2015)

msm said:


> Measures like DXO print DR and Bill Claff's photographic dynamic range are based on the image in its entirety; for instance, Bill Claff states "PDR is the dynamic range you would expect in an 8x10” print viewed at a distance of about arms length."



I think I missed this before. What papers and inks are you using to see 14.5 stops of DR in a print? Maybe with neat backlighting you could get there, but in my experience the final format, be it web-based or printed, has a far narrower DR than any modern DSLR musters. You've got to map tones down. 

Print DR from DXO doesn't tell me anything about what I could expect in a printed 8X10, it tells me what I could expect a downsampled digital file to contain.


----------



## msm (Jul 17, 2015)

3kramd5 said:


> msm said:
> 
> 
> > > Measures like DXO print DR and Bill Claff's photographic dynamic range are based on the image in its entirety; for instance, Bill Claff states "PDR is the dynamic range you would expect in an 8x10” print viewed at a distance of about arms length."
> ...



Well, yes, of course; that statement is maybe a bit simplistic. It is a measure of the quality of the data off the sensor, not the quality of an actual print. So you could see it as what you could expect from some theoretical perfect print, or simply as a measure of how many stops of information you have available to tone-map into your print.


----------



## 3kramd5 (Jul 19, 2015)

Fair enough. I entirely agree that DR is an appropriate sensor property. And I do have more fidelity to compress shadow tonality when I use my A7R. I'm hoping that holds with the A7R2.


----------



## neuroanatomist (Jul 19, 2015)

msm said:


> 3kramd5 said:
> 
> 
> > msm said:
> ...



You continue to perpetuate that fallacy. 

"Print DR" is *not* a measurement of any property of the sensor. "Print DR" does *not* indicate the amount of scene DR – light information in the real world – that can be captured by the sensor. Information/image detail in the real-world subject at luminance levels which fall outside of the directly measured DR of a sensor (what DxO reports as "screen DR") is lost, and downsampling or other post-capture manipulation of the digital file will *not* recover those data. 

As 3kramd5 states, "print DR" tells you what DR you can expect in a downsampled digital file, and that's a pretty useless piece of information for practical purposes. "Print DR" is a contrived value that facilitates comparison of DR among sensors of different MP count – in other words, the primary practical utility of "print DR" is as a comparison shopping tool.


----------



## 3kramd5 (Jul 19, 2015)

Sensor DR, though, i.e. what DXO calls "screen DR," is an entirely reasonable and appropriate measure for a digital I/O signal chain. I don't think anyone could successfully argue (and note I am not claiming Neuro is attempting to) that the Sony signal chain doesn't produce lower noise, and thus allow larger DR, than the Canon signal chain, given the usual caveats (at or near base ISO, practicality, etc.).


----------



## msm (Jul 19, 2015)

neuroanatomist said:


> "Print DR" is *not* a measurement of any property of the sensor. "Print DR" does *not* indicate the amount of scene DR – light information in the real world – that can be captured by the sensor. Information/image detail in the real-world subject at luminance levels which fall outside of the directly measured DR of a sensor (what DxO reports as "screen DR") is lost, and downsampling or other post-capture manipulation of the digital file will *not* recover those data.
> 
> As 3kramd5 states, "print DR" tells you what DR you can expect in a downsampled digital file, and that's a pretty useless piece of information for practical purposes. "Print DR" is a contrived value that facilitates comparison of DR among sensors of different MP count – in other words, the primary practical utility of "print DR" is as a comparison shopping tool.



Oh, OK, so when you fall below "Screen DR" or an SNR of 1, all of a sudden all information disappears in Neuroland? :

This is not how things work in statistics. It is not how sensors work in the real world either. And fortunately it is easy to test; here are a couple of pictures for you: one is ETTR'ed (not a single pixel clipped, according to RawDigger) and the other is underexposed 16 stops (where all information would be lost, according to you) and then pushed.

If it wasn't for pattern noise the 16 stops pushed image would look much better and that is the one thing DXO DR doesn't account for.

Amusing how you prefer to keep making a fool out of yourself rather than admitting you are wrong :


----------



## 3kramd5 (Jul 19, 2015)

msm said:


> neuroanatomist said:
> 
> 
> > "Print DR" is *not* a measurement of any property of the sensor. "Print DR" does *not* indicate the amount of scene DR – light information in the real world – that can be captured by the sensor. Information/image detail in the real-world subject at luminance levels which fall outside of the directly measured DR of a sensor (what DxO reports as "screen DR") is lost, and downsampling or other post-capture manipulation of the digital file will *not* recover those data.
> ...



Of course not, hence [re-attached from the first page].

Print DR is a somewhat dubious way to compare sensors (since for sensors with resolutions above 8 MP it involves a noise-reducing process), and it's arbitrary. So is SNR >= 1, but at least that's somewhat common.


----------



## neuroanatomist (Jul 19, 2015)

msm said:


> Oh ok so when you fall below "Screen DR" or a SNR of 1 all of a sudden all information dissapears in Neuro land? :
> 
> This is not how things work in statistics. It is not how sensors work in the real world either. And fortunately it is easy to test, here is a couple of pictures for you, one is ETTR'ed (not a single pixel clipped according to rawdigger) and the other is underexposed 16 stops (where all information would be lost according to you) and then pushed.



Apparently you don't understand the meaning of the word "range". It seems you think _dynamic_ range represents a _static_ set of values distributed around metered 'middle gray'. 

At least you have succeeded in proving one thing...your lack of comprehension and knowledge regarding this topic. Great job!


----------



## StudentOfLight (Jul 19, 2015)

rfdesigner said:


> StudentOfLight said:
> 
> 
> > A question for all you technical experts...
> ...


By downscaling you average pixels, essentially sacrificing fine detail to gain a cleaner rendition of larger-scale detail. But if the darks are being clipped, does that not affect the efficacy of downscaling? If the ((dark detail) + (read noise)) pixels are being deleted, will averaging those pixels not result in lost dark detail?


----------



## jrista (Jul 20, 2015)

neuroanatomist said:


> Yes, it's quite a good trick to have DR "in mid 14's" when your camera has a 14-bit ADC. Hail to the almighty DxO Biased Scores, and kudos to those that revel in that BS. :



It's actually the compression algorithm. Also, technically speaking, it isn't actually RAW. cRAW applies a tone curve to the data coming off the sensor before it black-clips and compresses. It's the same thing they do with the A7s, and will probably do with future cameras: they take a dynamic range greater than their bit depth and use a mathematical curve to compress it into a smaller space. Same thing we do in our RAW editors.

The thing with Sony cameras is that they have the data fidelity to actually do that. The data precision in their RAW files may only be 12-bit, but in terms of usable information they still deliver stops more than any Canon camera. You can hate DXO all you want (I don't like most of what they do either), but the real-world results are all you should need to understand that bit depth and usable information are not synonymous. 

The thing I don't get is why Sony doesn't just go straight to 16-bit RAW. Digging around in their SDK lately, I think Sony has some strange data bottlenecks somewhere in the readout pipeline. There are a number of cases where they restrict the sensor readout to 12 bits, and it always seems to be throughput-related. If at some point they resolve those issues, I'd be willing to bet Sony puts the first 16-bit RAW consumer camera on the market.
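The curve-then-quantize idea described above can be sketched with a toy curve. This is NOT Sony's actual cRAW transform; a plain square-root curve just illustrates how a ~15-stop linear range can survive in far fewer code values at reduced precision:

```python
import numpy as np

# Toy curve-then-quantize compression (illustrative only).
MAX_LINEAR = 2**15 - 1  # assumed ~15-stop linear signal range
MAX_CODE = 2**11 - 1    # only 11 bits of codes after compression

def encode(x):
    """Compress linear signal through a sqrt curve into integer codes."""
    return np.round(np.sqrt(x / MAX_LINEAR) * MAX_CODE)

def decode(code):
    """Invert the curve to recover an approximate linear value."""
    return (code / MAX_CODE) ** 2 * MAX_LINEAR

signal = np.linspace(0.0, MAX_LINEAR, 10_000)
recovered = decode(encode(signal))
worst_error = np.abs(recovered - signal).max()
# The full range survives the 11-bit file; the price is precision,
# and the worst absolute error lands in the highlights, where photon
# shot noise is already far larger than the quantization step.
```

The design trade is exactly the one described in the post: same range of information, lower precision where the eye (and the shot noise) can afford it.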


----------



## msm (Jul 20, 2015)

neuroanatomist said:


> msm said:
> 
> 
> > Oh ok so when you fall below "Screen DR" or a SNR of 1 all of a sudden all information dissapears in Neuro land? :
> ...



I have proven you wrong with practical examples several times already, and here you are trying to weasel out again with a straw-man argument. Well done! :

Time to look into that mirror, troll.


----------



## neuroanatomist (Jul 20, 2015)

jrista said:


> neuroanatomist said:
> 
> 
> > Yes, it's quite a good trick to have DR "in mid 14's" when your camera has a 14-bit ADC. Hail to the almighty DxO Biased Scores, and kudos to those that revel in that BS. :
> ...



Regarding the rumored A7000: yes, it's certainly possible to map higher DR from on-sensor HDR down into a lower-bit-depth file (assuming the rumors are true). As you state, it won't be RAW data at that point, but Sony doesn't seem to care much about that anyway. It certainly breaks the long-standing practice of using an ADC that can fully encompass the DR of the sensor. Maybe compressing/mapping wider analog data down prior to digitizing is common in other fields, e.g. audio; I'm not sure. 

Regarding the more recent discussion in this thread regarding current cameras, it has nothing to do with in-camera mapping/compression, and everything to do with DxO's downsampling the files to 8 MP. AFAIK, no current dSLR/MILC can record >14-stops of DR _at capture_ (but for example the Red Epic Dragon can, and it uses a 16-bit ADC).


----------



## neuroanatomist (Jul 20, 2015)

msm said:


> I have proven you wrong with practical examples several times already and here you are trying to weasel yourself out again with a straw man argument.



You've proven 'with practical examples several times' that current dSLRs/MILCs (e.g. D750, a7R) can capture more than 14-stops of DR present in the scene being imaged? If you truly believe that, I feel sorry for you.


----------



## msm (Jul 20, 2015)

neuroanatomist said:


> msm said:
> 
> 
> > I have proven you wrong with practical examples several times already and here you are trying to weasel yourself out again with a straw man argument.
> ...



DR has precise definitions in mathematical terms; do yourself a favour and stop your pathetic attempts at making it something it isn't.

For instance, how do you decide whether a sensor "can capture more than 14 stops of DR present in the scene being imaged"? Your personal opinion? If you can't even give a precise definition, this is just a completely ridiculous waste of time. It is like arguing with a child who changes the meaning of things as it suits him.


----------



## neuroanatomist (Jul 20, 2015)

msm said:


> neuroanatomist said:
> 
> 
> > msm said:
> ...



I know how to define DR. Your error is in defining it for only one part of the system (the digital file) rather than considering the (more relevant for photography) definition in terms of the subject being imaged. 

Forest : trees :: scene : pixels. 

Light meter for the scene, image analysis for the captured image. Alternatively, an example was provided by 3kramd5 a couple of pages back that needs only a light bulb and a sheet of canvas. Too complex for you?

Yes, your ridiculous intransigence and lack of comprehension makes further discussion on this issue a complete waste of my time. If ignorance is bliss, you must be a very happy person!


----------



## msm (Jul 20, 2015)

neuroanatomist said:


> msm said:
> 
> 
> > neuroanatomist said:
> ...



Is this what passes as a scientific definition for you? It can be implemented an infinite number of ways, leading to whatever conclusion you want. 

If you really want to go there, then we can just make up unscientific criteria like "being able to capture enough data to enable a person with normal eyesight to unmistakably read 2 rows of text consisting of 5 letters each". By that standard I have already demonstrated that the a7r easily has more than 16 stops of DR!


----------



## jrista (Jul 21, 2015)

neuroanatomist said:


> jrista said:
> 
> 
> > neuroanatomist said:
> ...



You know I don't like DXO any more than anyone else. Their persistent attempts to box all the complexity of sensor IQ into a single scalar number are as annoying as ever. That said, all they are doing is normalizing. Normalization should be a well-understood concept, especially for someone such as yourself, given how many times it's been explained on these forums.

Personally, outside of a pure comparison context, I don't believe the numbers spat out by 8 MP normalization tell us much about what we'll experience when actually editing a RAW file in a program like Lightroom. I believe Screen DR (non-normalized DR) tells us that, since it is the per-pixel DR of the RAW data we are literally working with. Another issue I have with DXO's Print DR is that, while they call it a "measurement", it is nothing of the sort. It is a purely extrapolated number, obtained by running another DR value through a simple mathematical formula. It's the theoretical maximum DR the camera might achieve if it had perfect noise characteristics. It does not account for actual noise characteristics, because it is not an actual measurement; it's an extrapolation from an actual measurement (which happens to be Screen DR). 

The fact that DXO does not make that clear, and worse, the fact that they present their Print DR numbers as THE DR numbers, has led a significant percentage of the photography community to regurgitate numbers like 14.8 stops of DR as actual real-world DR. If all of those people are downsampling all of their photos to 8 megapixels, then they may well be getting 14.8 stops, or 14.5, or 14.2, or whatever it is for the camera they use. If they are keeping their data RAW, or worse, printing at larger than native sizes, then they are decidedly NOT getting the theoretical maximum potential DR.

Sadly, I do not believe that actually downsampling images and measuring them would paint Canon in any better light. Canon has worse noise characteristics than the competition, so actual Print DR measurements for Canon would probably end up worse than the extrapolated Print DR "measures" DXO uses now. I spend a lot of time working at the noise floor in astrophotography, and I run FFTs on individual subs and integrations every so often. Canon data doesn't come close to exhibiting a Gaussian distribution of noise. Sony cameras are closer, but their compression limits how close they can get. Nikon D800 data with the black-point hack is actually the closest to Gaussian, but even it isn't purely Gaussian. A clean, pure Gaussian noise profile would give ideal downsampling results, while the non-Gaussian noise profile of Canon data will give less-than-ideal downsampling results. I don't believe any ILC currently on the market could actually achieve the Print DR that DXO lists, although I think the D800 and D810 probably get closer than anything else. 

As for recording more than 14 stops of DR "at capture": depending on exactly what you mean, it's possible. Assuming "at capture" means in the analog signal on the sensor, then consider a sensor with, say, an 80,000 e- FWC and 3 e- read noise. The dynamic range of the analog signal is log2(80,000/3) ≈ 14.7 stops. With a straight ADC conversion using a 14-bit ADC, you would have to clip that to 14 stops in some way. Alternatively, you could compress the dynamic range, preserving the original bounds of the information by encoding some of it with less precision. Lower precision, same range of information. That's what cRAW does: it applies a curve that compresses the sensor DR into a tighter range, then digitizes it. 

Conversely, if you have a sensor with an 80,000 e- FWC and 25 e- read noise, then your dynamic range is log2(80,000/25) ≈ 11.6 stops. Your analog signal doesn't have enough tonality to even use 12 bits, so storing it in 14-bit data is really just wasteful. It could affect camera design in other ways: larger numbers use more bandwidth, potentially limiting your maximum throughput, putting a cap on frame rate, etc. Canon could easily be using 12-bit data, and we wouldn't be losing a thing. We wouldn't be able to represent steps of noise as accurately, but we certainly wouldn't be losing tonality. Because of the higher read noise, the sensor isn't capable of delivering anywhere near 14 stops of DR, so compressing the signal makes no sense.

If Canon delivers a 5D IV with an 80k e- FWC and 5 e- read noise, on the other hand, they would have log2(80,000/5) ≈ 14.0 stops of DR, and at that point they would be able to fully utilize the bit depth of their ADCs. They still wouldn't need to compress the original sensor information to fit the bit depth of the ADC, but they would at least be using all of it.

Bandwidth is, IMO, the primary reason Sony hasn't gone to 16-bit ADCs yet. They seem to be borderline on 14-bit as it is, but since their sensors deliver at the very least more than 13 stops of DR, they are able to utilize a 14-bit ADC. If it weren't for throughput bottlenecks, I suspect Sony would already be using 16-bit ADCs. One of the reasons Exmor has lower noise is that its ADC units are column-parallel, which allows them to run at a lower frequency. It is entirely possible that running them at a higher frequency to handle 16-bit conversion introduces more noise, which would diminish dynamic range; I'm not sure. Either way, what Sony is doing, compressing the original signal before conversion, is the best way to preserve the information their sensors can deliver, even if it costs some precision. In practical use, the loss of precision doesn't seem to be a huge issue. It might result in some posterization of smooth gradients at the low and high ends of the signal, where linearity may drop, but in practice photon shot noise is generally high enough that posterization isn't a problem outside of extreme circumstances.
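The stop counts in this post all come from the engineering definition of per-pixel DR, log2(FWC / read noise). A quick check of the arithmetic (the FWC and read-noise values are the hypothetical ones above, plus the D750 figures quoted elsewhere in the thread):

```python
import math

def engineering_dr(full_well_e, read_noise_e):
    """Per-pixel DR in stops: log2(full-well capacity / read noise)."""
    return math.log2(full_well_e / read_noise_e)

dr_low_noise = engineering_dr(80_000, 3)    # ~14.7 stops
dr_high_noise = engineering_dr(80_000, 25)  # ~11.6 stops
dr_5div = engineering_dr(80_000, 5)         # ~14.0 stops (hypothetical 5D IV)
dr_d750 = engineering_dr(81_608, 5.5)       # ~13.9 stops (D750 figures)
```

Note that none of these per-pixel figures clears 14 stops except the 3 e- case, which is the whole reason the curve-compression and downsampling arguments exist.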


----------



## neuroanatomist (Jul 21, 2015)

jrista said:


> You know I don't like DXO any more than anyone else. Their persistent attempts to box all of the complexity of sensor IQ into a single scalar number is as annoying as ever. That said, all they are doing is normalizing. Normalization should be a well understood concept, especially by someone such as yourself,
> 
> Personally, outside of a pure comparison context, I don't believe the numbers spit out by 8mp normalized results tell us much about what we'll experience when actually editing a RAW file in a program like lightroom. I believe that Screen DR (non-normalized DR) tell us that, since that is the per-pixel DR of the RAW data that we are literally working with.



Certainly I understand normalization and its utility, given that I routinely compare datasets across platforms. In the case of "print DR," it seems we agree: it's a comparison-shopping tool (and, I'd add, for some a bragging-rights tool). As you say, "screen DR" is what matters when considering an individual RAW file from a photographic standpoint, as opposed to comparing sensors. 



jrista said:


> The fact that DXO does not make that clear, and worse the fact that they utilize their Print DR numbers as THE DR numbers, has led a significant percentage of the photography community to regurgitate numbers like 14.8 stops of DR as actual real-world DR.



Exactly the point I made earlier in the thread, when I pointed out a camera magazine's statement concerning 'capturing all the tones in a scene' and stating the D610's DR has been measured at up to 14.4 stops. A patently false statement that someone inexplicably argued to support. 




jrista said:


> As for recording more than 14 stops of DR "at capture". Depending on exactly what you mean there, it's possible. Assuming "at capture" means in the analog signal on the sensor, then consider a sensor with, say, an 80,000e- FWC and 3e- read noise. The dynamic range of the analog signal is 14.75 stops of DR. With a straight ADC conversion using a 14-bit ADC, you would have to clip that to 14 stops in some way. Alternatively, you could compress the dynamic range, preserving the original bounds of the information it represented by combining some of the information with less precision. Lower precision, same range of information. That's what cRAW does...applies a curve that compresses the sensor DR into a tighter range, then digitizes it.



Yes, it's theoretically possible. But to reiterate, the discussion was about specifics and current cameras, such as the "fact" that the D750 can capture 14.5 stops of DR. Given a FWC of 81608 e- and read noise of 5.5 e-, that's clearly not the case (as shown by DxO's screen DR measurement). But some people – well, one person – apparently can't seem to come to grips with the fact that a screen DR of 13.9 stops means information in a scene that exceeds that range will be lost and unrecoverable. I suspect you can explain it more effectively if you choose, but I wouldn't bother.
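The DR figure here is just log2(FWC / read noise); a quick Python check with the D750 numbers quoted above:

```python
import math

def engineering_dr(fwc_e, read_noise_e):
    """Engineering dynamic range in stops: log2(full well / read noise)."""
    return math.log2(fwc_e / read_noise_e)

# D750 figures quoted above: 81608 e- full well, 5.5 e- read noise
print(f"{engineering_dr(81608, 5.5):.1f} stops")  # → 13.9 stops
```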


----------



## 3kramd5 (Jul 21, 2015)

jrista said:


> As for recording more than 14 stops of DR "at capture". Depending on exactly what you mean there...



Seems like we should be able to come up with an agreement about what it means. I propose that capture consists of everything between and inclusive of the light hitting the sensor, and the digitized data being written.


----------



## jrista (Jul 21, 2015)

3kramd5 said:


> jrista said:
> 
> 
> > As for recording more than 14 stops of DR "at capture". Depending on exactly what you mean there...
> ...



I don't know that it is that simple. Sensors currently still accumulate *analog* signals, signals represented by an electric charge. Cameras ultimately produce digital signals, signals represented by bits stored as integral numbers. Analog signals and digital signals have similarities, but they also have key differences. 

There is also the fact that the readout pipeline is where noise gets added to the signal, and the native dynamic range of the sensor can be reduced. 

Furthermore, if dynamic range compression is being used somewhere in the readout pipeline, precision may be lost, but information can be preserved. So, if a sensor is capable of 15.5 stops of dynamic range, and that 15.5 stops of information in an actual signal is compressed into 14 stops, or even 12 stops, at an early stage in the readout pipeline, then how much dynamic range do you have?

Not a simple question. You may have as little as 12 bits of precision, but in my experience you can have considerably more *usable information* than that. Sony cameras lossy-compress their information. That's probably the thing I like LEAST about their cameras; it's kind of a big deal for me to have truly raw data. However, in practice, somehow, despite the most precise values in every 32-pixel block of an ARW being stored with only 11 bits, I can still push up orders of magnitude more information from the shadows of an A7r or A7s than I can with any Canon camera. 

So...I don't think things are just as simple as: if it's bigger than the bit depth of the ADC, it simply can't be. That just doesn't jibe with reality. It doesn't jibe with what I can do with the data from an Alpha camera, let alone from something like the D810 (which does NOT lossy-compress the data). 

Precision vs. usable information. Sony is reducing their precision...a lot in some cases, more than I think they should. However that does not seem to cost them the information...and that's really what matters in the end. 

Anything else...it's all just playing games, semantics, reasons to perpetuate a pointless argument for...well, it seems forever. 

These days, it's become fairly simple to me. These days, what I care about are the actual results. I don't always need 14 stops or 15 stops or 16 stops of dynamic range. However, when I DO need *more*...my Canon cameras are only giving me 11 stops. I then have to resort to something more complex, such as multi-frame capture and HDR blending..._assuming I even have that option_. I can't count the number of times I've ETTRed with my 5D III on a day with wonky light...nailed the exposure on one frame in a burst only to have it come out slightly OOF, then had the next frame come out perfectly in focus yet two-thirds of a stop brighter and clip my highlights because a cloud moved and the sun popped out. ETTR is essential with a Canon camera if you wish to preserve all the information you can, especially when shooting birds with both bright and dark feathers...and it's also a major risk. If I had 12 stops or more of DR, I could just keep the exposure a safe stop from the clipping point and never have to worry. If I had 14 stops of DR, I could push the shadows on those darker feathers, and they would not just be barely usable...they would be pristine! I wouldn't always need the full 14 stops...but having *more than eleven*? INVALUABLE.


----------



## Hector1970 (Jul 21, 2015)

Great contribution Jon (JRista). I always love reading your posts. They are very informative (quite technical but not overly technical). You are a real asset to Canon Rumors. I must say too your website is a delight. There are a lot of posters here from whom you hear a lot of technical details and complaints about sharpness or dynamic range or how bad Canon is. They don't show much of their work. 
I can see from your photos that you are pushing the technical limits of the camera gear in both wildlife and the stunning astrophotography. Keep up the good work. It's inspiring.


----------



## 3kramd5 (Jul 21, 2015)

jrista said:


> 3kramd5 said:
> 
> 
> > jrista said:
> ...



Of course, but you and I can't do anything with charge. We can only use what is ultimately produced.




jrista said:


> There is also the fact that the readout pipeline is where noise gets added to the signal, and the native dynamic range of the sensor can be reduced.



Indeed, and since we can't ever store the data without reading it out, it seems logical to include the entire signal chain in the process of capture. 



jrista said:


> Furthermore, if dynamic range compression is being used somewhere in the readout pipeline, precision may be lost, but information can be preserved. So, if a sensor is capable of 15.5 stops of dynamic range, that 15.5 stops of information in an actual signal is compressed into 14 stops, or even 12 stops, at an early stage in the readout pipeline, then how much dynamic range do you have?



Whatever it is compressed to. If the data is made non-linear before being filed, that is included. If it's noise-reduced before being filed, that's included.

It's all well and good to consider the resulting generation of charge as capturing photons, but perhaps outside of a laboratory subassy, nobody can use that charge until it's digitized and filed. 



jrista said:


> Not a simple question. You may have as little as 12 bits of precision, but in my experience you can have considerably more *usable information* than that. Sony cameras lossy compress their information. That's probably the thing I like LEAST about their cameras, it's kind of a big deal for me to have truely raw data. However in practice, somehow, despite the most precise bit of information in every 32 pixel block of pixels in an ARW being 11 bits, I can still push up orders of magnitude more information from the shadows in an A7r or A7s than I can with any Canon camera.



Yah, I've not seen any issues with my A7R's lossy compression. I would certainly like the option to store a lossless RAW with the A7R2, but the compression artifacts I've seen in rawdigger's analysis haven't manifested to my knowledge in any of my shots with the platform.



jrista said:


> These days, it's become fairly simple to me. These days, what I care about are the actual results.



Concur, and going back to the beginning of this reply, hence my suggestion that we define capture as the entire process of converting photons to charge, reading, amplifying, digitizing those data, and writing files. The files are the results.


----------



## msm (Jul 21, 2015)

neuroanatomist said:


> But some people – well, one person – apparently can't seem to come to grips with the fact that a screen DR of 13.9 stops means information in a scene that exceeds that range will be lost and unrecoverable. I suspect you can explain it more effectively if you choose, but I wouldn't bother.



Yes please, looking forward to seeing that explanation. I particularly want to see the explanation of how I was able to recover information from a 16-stop underexposed image from my A7R, considering it has a "screen DR" of less than 13 stops and "information in a scene that exceeds that range will be lost and unrecoverable". ;D


----------



## neuroanatomist (Jul 21, 2015)

msm said:


> neuroanatomist said:
> 
> 
> > But some people – well, one person – apparently can't seem to come to grips with the fact that a screen DR of 13.9 stops means information in a scene that exceeds that range will be lost and unrecoverable. I suspect you can explain it more effectively if you choose, but I wouldn't bother.
> ...



Pointless as stated, but I have about one minute while my coffee brews so once more into the breach...

What do you need explained? The fact that a _range_ has both lower and upper bounds? The fact that DR is a measure of the difference between those bounds? The fact that your 'proof':







...is so ridiculously far from simultaneously exceeding both of those bounds that it would be funny as a joke, but is just pathetic for your intent? The fact that the bounds of the range are defined by physics (FWC and read noise in e-), and not by any relationship to your camera's arbitrary algorithm for selecting a matrix-metered exposure on which your claim of a 16-stop underexposure is based? 

I could go on, but as I stated...it's pointless, and my coffee is ready.


----------



## msm (Jul 21, 2015)

neuroanatomist said:


> msm said:
> 
> 
> > neuroanatomist said:
> ...



Metering has absolutely nothing to do with this. My basis for claiming 16 stops of underexposure is simple: RawDigger shows me the brightest pixels are near clipping in an 8-second exposure (but not a single pixel is clipped). Then I take another exposure at 1/8000s while keeping everything else the same (aperture, ISO, lighting). Going from 8 seconds to 1/8000 is lowering the exposure 16 stops, simple. So yeah, you are right, your post was completely pointless, but please share your vast expertise in physics etc. and explain how all information suddenly is lost when you go below an SNR of 1. ;D
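The 16-stop claim is plain shutter-speed arithmetic (each stop halves the exposure time), e.g.:

```python
import math

# Stops of exposure difference between two shutter speeds,
# with aperture and ISO held constant (the experiment above).
def stops_between(t_long, t_short):
    return math.log2(t_long / t_short)

print(f"{stops_between(8, 1/8000):.2f} stops")  # → 15.97 stops
```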


----------



## neuroanatomist (Jul 21, 2015)

News flash: changing the exposure conditions (shutter speed or aperture) changes the absolute luminance values over which the dynamic range is distributed. Is shifting the range synonymous with expanding it? Here's a novel thought...I wonder...if one could somehow take two different exposures and blend them...somehow...maybe that would improve DR?


----------



## emko (Jul 21, 2015)

neuroanatomist said:


> News flash: changing the exposure conditions (shutter speed or aperture) changes the absolute luminance values over which the dynamic range is distributed. Is shifting the range synonymous with expanding it? Here's a novel thought...I wonder...if one could somehow take two different exposures and blend them...somehow...maybe that would improve DR?



bracketing to increase DR always has issues


----------



## neuroanatomist (Jul 22, 2015)

emko said:


> neuroanatomist said:
> 
> 
> > News flash: changing the exposure conditions (shutter speed or aperture) changes the absolute luminance values over which the dynamic range is distributed. Is shifting the range synonymous with expanding it? Here's a novel thought...I wonder...if one could somehow take two different exposures and blend them...somehow...maybe that would improve DR?
> ...



It was a nice idea, or so I thought...in retrospect, though, an unnecessary one. No need to bracket, just get an a7R...it can capture >16 stops of DR. That *fact* was *proven* by msm a few posts back. Well, at least _his_ a7R can capture >16 stops of DR...I have no idea what the rest of the world is doing wrong that limits their a7R sensors to the laws of physics, I guess they just don't understand. :


----------



## jrista (Jul 22, 2015)

3kramd5 said:


> jrista said:
> 
> 
> > These days, it's become fairly simple to me. These days, what I care about are the actual results.
> ...



Here is where the discrepancy lies, though. With a maximum precision of 11 bits in the two most precise values stored in a cRAW block (the rest are 7-bit offsets from those two)...how much DR do you have? There are some hardliners who would say you don't have more than 11 stops...however, in practice that is clearly not the case. The sensor may be capable of 14 stops, or 15 stops, or 16 stops...and that original DR is being compressed to fit within a lower-precision data file. The sensor itself might indeed be capable of 15 stops. Will hardliners accept that? 

There is also the simple fact that dynamic range is really just a hardware thing. The term dynamic range is applied to RAW image files as a matter of course these days...however fundamentally, I don't think that is valid. A RAW image file has a signal to noise ratio...but it does not have dynamic range. The RAW image IS the signal, and that signal came off the sensor, which has dynamic range.

It's just not a simple situation anymore.  Camera companies are employing different and alternative ideas in the way they process the data on the camera hardware and firmware. Some of them are clearly preserving more information, despite the fact that they are using less precision to store it. I wish it really was just as simple as 14 bit ADC, 14 bit file, max 14 stops of DR. I wish it was just as simple as Sony moving to a 16 bit ADC in the A7000 and storing their 15.5 stops of DR without compression or anything else as-is in a 16-bit data file. I would so much prefer that. But...it's just not that simple anymore.
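A heavily simplified sketch of the block/delta idea described above. The real ARW layout differs in detail; the block size, bit widths, and pixel values below are illustrative only, to show range surviving reduced precision:

```python
# Store each block's minimum exactly and everything else as coarse
# 7-bit offsets; the full range survives, mid-tones lose up to `step`.
def compress_block(pixels, offset_bits=7):
    lo, hi = min(pixels), max(pixels)
    # ceiling division keeps every offset within offset_bits
    step = max(1, -(-(hi - lo) // (2**offset_bits - 1)))
    offsets = [(p - lo) // step for p in pixels]
    return lo, step, offsets

def decompress_block(lo, step, offsets):
    return [lo + o * step for o in offsets]

block = [10, 500, 8191, 16000, 12, 40]   # hypothetical wide-range block
lo, step, offs = compress_block(block)
restored = decompress_block(lo, step, offs)
# restored spans the same 10..~16000 range; worst-case error is `step`
```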


----------



## 3kramd5 (Jul 22, 2015)

Well again outside of a subassy lab environment with specialty equipment (which sensor fab houses likely have and use to acceptance test production articles), the files are the best you can do. So it's good enough in my mind to treat the camera as a black box. 

And sure, I'm talking about the output of the signal chain, not DR of the system, strictly speaking.


----------



## Aglet (Jul 22, 2015)

neuroanatomist said:


> msm said:
> 
> 
> > neuroanatomist said:
> ...



Considering that, as of that post, you've spent approximately 276 days, 7.75 hrs online in this forum, that's a pretty lame response.

Arbitrarily limiting your precious time to one minute to respond to a perfectly valid experiment, and using that minute to be little more than insulting, is quite ironic when you've obviously spent many hours of your online time in highly repetitive and pointless arguments with no educational opportunity for anyone.

For someone who claims to be so educationally-oriented your reply to msm is all attitude, no education.

Are you weaseling out of providing an explanation because you don't know how to describe why msm's experimental data supports msm's theory?

C'mon Dr. Brain, educate him.


----------



## neuroanatomist (Jul 22, 2015)

Aglet said:


> ...a perfectly valid experiment...



Hey, do you have one of those magical a7R cameras with >16-stops of DR, too? Is it powered by a perpetual motion machine?


----------



## jrista (Jul 22, 2015)

neuroanatomist said:


> Aglet said:
> 
> 
> > ...a perfectly valid experiment...
> ...



I'm still confused about your reactions here. I think we have clearly demonstrated that there is a disconnect between dynamic range, bit depth, and usable range of information. The Sony cameras store the most precise values in every 32-pixel block with only 11 bits. However, that does not prevent them from supporting data recovery many stops beyond what any Canon camera is capable of. All of the Canon cameras I own can handle about two stops of lifting before non-gaussian noise begins to appear. The 5Ds may be capable of about three stops (although with heavy color noise). The A7r, A7s, and A6000 all seem to be capable of lifting information as much as seven or eight stops, and the noise characteristics remain very close to gaussian (in other words, ridiculously easy to clean up; with some tools you can totally wipe out clean gaussian noise and leave behind useful information).

You seem bound and determined to stand your ground. That's all well and good, certainly up to you...but I'm not really sure what it accomplishes.


----------



## Aglet (Jul 22, 2015)

jrista said:


> neuroanatomist said:
> 
> 
> > Aglet said:
> ...



Maybe it burst Neuro's bubble of comprehension and he's trying to buy time with snark while he inflates a new one to live in. 
edit: I mean, it could be hard to accept that 1 bit could potentially represent something other than one stop of photographic information. As a Canon-only user he may not have had any experience with equipment that would demonstrate otherwise. All that expensive 24, 32, 48, 64-bit lab instrumentation doesn't have much bandwidth, so raw file compression is unlikely to be encountered. Unlike Canon's generous supply of 14 bits to encode only 10 to 11 stops worth of data, generally dithered with a repetitive data pattern _feature_.


----------



## neuroanatomist (Jul 22, 2015)

jrista said:


> I'm still confused about your reactions here. I think we have clearly demonstrated that there is a disconnect between dynamic range, bit depth, and usable range of information. The Sony cameras store the most precise pixels in every 32-pixel block with only 11 bits. However that does not prevent them from supporting data recovery many stops beyond what any Canon camera is capable of.



A scene with 14.0 stops of dynamic range - can you capture that entire range with a single image using an a7R?

Most people posting in this thread know the correct answer to that question. One person does not. But you do have a point - that person will not be convinced.

P.S. To be clear, the 14-bit ADC 'limit' on DR is an artificial one, true, but AFAIK no current camera breaks that limit; the rumored A7000 would be the first. Until now, manufacturers have chosen ADCs that have greater bit depth than the sensor has DR, even if there are options to produce files with lower bit depth. If Sony releases a sensor with 15.5 stops of DR and a 14-bit ADC, that would be innovative, but not in a completely positive way. It's not unlikely they can make such a sensor, but they really should pair it with a 16-bit ADC.


----------



## neuroanatomist (Jul 22, 2015)

Yeah, Aglet - I've never had to map a higher number of bits of data down into a lower bit depth. Well, except for the few times I've converted an image to jpg. I think I did that once or twice in 2011.


----------



## jrista (Jul 22, 2015)

neuroanatomist said:


> jrista said:
> 
> 
> > I'm still confused about your reactions here. I think we have clearly demonstrated that there is a disconnect between dynamic range, bit depth, and usable range of information. The Sony cameras store the most precise pixels in every 32-pixel block with only 11 bits. However that does not prevent them from supporting data recovery many stops beyond what any Canon camera is capable of.
> ...



Since the sensor is actually capable of 13.5 stops, my answer is 'no'. However, it WILL capture 13.5 out of 14 stops of that scene with usable noise characteristics. The other 0.5 stops will have poorer noise characteristics and thus reduced usability. Despite that...the full 14 stops of dynamic range is there, and in significantly better shape than the roughly 3 stops of information the Canon camera is not capturing.  

Remember, a four stop lift with an ISO 100 image made with an Exmor sensor is ISO 1600. I doubt there is anyone on these forums that wouldn't call an ISO 1600 image from a modern Canon DSLR usable. I think you might be able to find some who find ISO 3200 images unusable, and a lot who find ISO 6400 images unusable...but I think you would be pretty hard pressed to find someone on these forums that finds ISO 1600 images unusable.

Now, a +4 stop push with an A7r is not only going to be like ISO 1600...but only the shadows are going to be like that. Everything else is still going to have ISO 100 quality! That bottom 0.5 stops of the shadows? You could very well lift that, but in that 14-stop scene that's your true zone zero, so it doesn't need to be lifted a ton. It could be left alone, or lifted slightly if you want just a hint of detail in there. 

Either way, from a practical standpoint, the A7r is going to gather significantly more usable information from that 14 stop scene than any Canon DSLR on the market. It'll gather about 13.5 stops, to be exact.  That information is going to be limited by gaussian noise, which cleans up wonderfully, with minimal amounts of NR that are marginally destructive to detail at worst (especially if you use higher end NR tools, but LR's NR will do just fine.)
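The push-to-ISO analogy used above is just a doubling per stop; trivially:

```python
# Noise-wise, pushing shadows N stops in post is roughly like having
# shot those tones at base_iso * 2**N (the analogy used above).
def pushed_iso(base_iso, stops):
    return base_iso * 2**stops

print(pushed_iso(100, 4))  # → 1600
```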


----------



## neuroanatomist (Jul 22, 2015)

jrista said:


> neuroanatomist said:
> 
> 
> > Most people posting in this thread know the correct answer to that question. One person does not. But you do have a point - that person will not be convinced.
> ...



Mine, too. 

As for the differential DR between camera brands, although that wasn't the point of the discussion, your position on the importance of those differences is clear, and as you stated earlier, I'm not really sure what rehashing it accomplishes.


----------



## msm (Jul 22, 2015)

I note a lack of symmetry in this thread. Both neuro and I make claims, but I always back mine up with explanations and examples, while neuro never does. Seeing that neuro is the first to ask people to back up their claims when he doesn't like them, that is a quite clear example of double standards.

Here is a little challenge for you, Neuro: start backing up your claims, or they are just worthless.



neuroanatomist said:


> [Information/image detail in the real-world subject at luminance levels which fall outside of the directly measured DR of a sensor (what DxO reports as "screen DR") is lost, and downsampling or other post-capture manipulation of the digital file will *not* recover those data.



Still waiting for the explanation of this one.....



neuroanatomist said:


> A scene with 14.0 stops of dynamic range - can you capture that entire range with a single image using an a7R?
> 
> Most people posting in this thread know the correct answer to that question. One person does not. But you do have a point - that person will not be convinced.



You can't say that the correct answer is no, because it depends entirely on what you mean by "a scene with 14.0 stops" and "capturing that entire range", as already explained. As you still haven't managed to produce any meaningful explanation of what either of those things means, it is still a completely meaningless question. Go back to page 3 or wherever this was first brought up.



jrista said:


> Personally, outside of a pure comparison context, I don't believe the numbers spit out by 8mp normalized results tell us much about what we'll experience when actually editing a RAW file in a program like lightroom. I believe that Screen DR (non-normalized DR) tell us that, since that is the per-pixel DR of the RAW data that we are literally working with.



I agree, if you do all your editing at 200% view. However, I personally care more about what happens at the image level, and I could be wrong, but I think most photographers will ultimately agree with me on that one. The 20D, 7D, 7DII, 5DIII, and 5DS/R all got screen DR which is practically identical, in the range 10.95 to 11.12. I think we all know what happens at the image level when we try to edit raw files from those cameras of the same high-contrast scene exposed similarly relative to saturation. But if someone needs proof, it would be nice if someone who has a 5DS/R and one of the other cameras could post some raw files taken under the above conditions so we could all see for ourselves.


----------



## jrista (Jul 23, 2015)

msm said:


> neuroanatomist said:
> 
> 
> > [Information/image detail in the real-world subject at luminance levels which fall outside of the directly measured DR of a sensor (what DxO reports as "screen DR") is lost, and downsampling or other post-capture manipulation of the digital file will *not* recover those data.
> ...



There is no explanation; it's just wrong. Engineering DR is based on an SNR of 1 at the noise floor. For information within the noise floor, the SNR is less than one. Most people won't use such information. I won't use it for daytime photography, although with astrophotography I will. However, there IS information in that data where the SNR is less than one. It's just that the noise is larger than the information, so identifying the information is extremely difficult. It is not impossible, just difficult. The higher the noise, the more difficult it is to identify signals at increasingly lower SNRs below 1. 
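A toy stdlib-only simulation of why data below SNR = 1 still carries information: averaging N independent samples improves SNR by roughly sqrt(N), so a signal at half the noise sigma becomes easy to estimate from enough pixels. Values are illustrative:

```python
import random, statistics

random.seed(0)                 # reproducible
signal = 0.5                   # true level: half the noise sigma (SNR = 0.5)
sigma = 1.0
n = 10_000

samples = [signal + random.gauss(0, sigma) for _ in range(n)]
estimate = statistics.fmean(samples)
# Effective SNR of the mean: 0.5 * sqrt(10_000) = 50, so `estimate`
# lands very close to 0.5 even though every sample is noise-dominated.
print(round(estimate, 2))
```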



msm said:


> jrista said:
> 
> 
> > Personally, outside of a pure comparison context, I don't believe the numbers spit out by 8mp normalized results tell us much about what we'll experience when actually editing a RAW file in a program like lightroom. I believe that Screen DR (non-normalized DR) tell us that, since that is the per-pixel DR of the RAW data that we are literally working with.
> ...



I care about how it will look at my largest publication size. In my case, that is usually a 36x24" print, although I print at a range between 14x11" to as large as 40x30". This is part of why I have a problem with "print" DR...such a radically subjective term. 

I edit at both the full image size and 1:1 scale in Lightroom. I don't usually edit at larger than 100%. I check detail and noise quality at 100%.


----------



## Aglet (Jul 23, 2015)

msm said:


> neuroanatomist said:
> 
> 
> > [Information/image detail in the real-world subject at luminance levels which fall outside of the directly measured DR of a sensor (what DxO reports as "screen DR") is lost, and downsampling or other post-capture manipulation of the digital file will *not* recover those data.
> ...



perhaps you need to make that second image clearer, as below, since lots of poorly set-up displays and viewing environments will not show what's there.

It's quite apparent that the white text, against a gray background, is showing up very well. If the background had been much darker, then the 'noise' level of the gray background would be even lower (darker) and would provide an even greater relative contrast (SNR) than this.
I'm not doing any analysis on it, but it looks to me like that white text is at an SNR >1 even if the gray background is considered the noise.

I think I saw, somewhere, that you'd set the white text to just below clipping (full well) with an exposure of 8 seconds.
You then made that other exposure at 1/8000 second. (presumably at the same F-stop) for a difference of 16 stops. (please correct me if I mis-remembered that)

So, with a 14-bit ADC, that "white" should have been buried 2 stops below the SNR=1 level and not even readable in a 14-bit linear system. It should be only somewhat visible in the LSB of a quiet 16-bit ADC's conversion of that signal.
Yet, there it is, nearly as discernible as snow against coal on a moonless night.

Now, if that were a 1:1 pixel crop and this was the result, we'd be really amazed.
If it's the full frame reduced to this tiny output, well then, statistical smoothing goes a long way towards... 
_Hey, Wait a minute, didn't someone here say you're not going to be able to recover more information (scene DR &-or SNR) if it was outside of the range of the data conversion?..._
And this is clearly a good example of the data being considerably smaller than the ADC's least significant bit.

Maybe they meant to say if it was only outside the UPPER ADC limit.
Did anyone say that, somewhere?..

Cuz this demo seems to prove that it is possible to pick fly-poop out of black pepper while wearing boxing gloves if given enough to work with. (I think I'd made that same point to the same party many months ago)


----------



## 3kramd5 (Jul 23, 2015)

msm said:


> I agree if you do all your editing at 200% view.



If you do your post-processing in RAW, you are not working on a downsampled image; it doesn't matter what magnification you're viewing at. That's what I was getting at several pages ago.


----------



## msm (Jul 23, 2015)

3kramd5 said:


> msm said:
> 
> 
> > I agree if you do all your editing at 200% view.
> ...



True, but what you view on the screen is downsampled unless you are at 100% view or higher.


----------



## StudentOfLight (Jul 25, 2015)

I just created an image with more than 22 stops of dynamic range from a single RAW capture with my 6D. It is highly recommended that you calibrate your screen before viewing in order to fully appreciate the true epicness of this image. (See attached)


----------



## Proscribo (Jul 25, 2015)

StudentOfLight said:


> I just created an image with more than 22 stops of dynamic range from a single RAW image capture with my 6D. It is high recommended that you calibrate your screen before viewing in order to fully appreciate the true epicness of this image. (See attached)


I don't know why, but I decided to take a picture of your masterpiece with my camera.. and the result turned out to be quite crappy, I suppose it's because of that amazing DR.


----------



## StudentOfLight (Jul 25, 2015)

Proscribo said:


> StudentOfLight said:
> 
> 
> > I just created an image with more than 22 stops of dynamic range from a single RAW image capture with my 6D. It is high recommended that you calibrate your screen before viewing in order to fully appreciate the true epicness of this image. (See attached)
> ...


Too bad, but don't worry, I'll eventually share a blog post on my processing workflow. One day you'll also be able to create these masterpieces ;D


----------




