# EOS-1D X Mark II Claims of 15 Stops of DR [CR3]



## Canon Rumors Guy (Jan 11, 2016)

```
We’re told that Canon will claim 15+ stops of dynamic range for the new Canon EOS-1D X Mark II. This claim was also made for the Cinema EOS C300 Mark II, which may be true at the hardware level, but in practice it may not actually perform to that specification. Let’s hope it’s actually the case with the new 22mp sensor.
Specifications for this camera have been extremely slow to come in from known sources. There is definitely a much tighter ship being run at Canon, but we do expect more to leak out as we approach an announcement in the next 4-8 weeks.
```


----------



## heptagon (Jan 11, 2016)

15 stops at what resolution?

If on the single pixel level, that would truly be phenomenal!

If on a scaled down image of 1 Megapixel, that wouldn't be very impressive.


----------



## tpatana (Jan 11, 2016)

One more claim I'm not sure I'd believe completely. Hoping for the best, of course, but...


----------



## boozed (Jan 11, 2016)

On-chip ADCs maybe? Say it isn't so Canon!


----------



## Mt Spokane Photography (Jan 11, 2016)

When you really need lots of DR, 15 stops is not enough. DR is, however, basically a measure of noise, so lower noise means higher DR and better high ISO.


----------



## frankchn (Jan 11, 2016)

If that is true, then this is basically the C300 Mark II sensor scaled up. 

(36mm * 24mm) / (24.6mm x 13.8mm) * (8.85 MP) = 22.5 MP
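As a quick sanity check of that area scaling (assuming constant pixel pitch, and taking 24.6 × 13.8 mm as the C300 Mark II's active Super 35 area):

```python
# Scale the C300 Mark II's 8.85 MP Super 35 sensor to full frame by area,
# assuming the same pixel pitch, so megapixels scale with sensor area.
ff_area = 36.0 * 24.0        # full-frame sensor area, mm^2
s35_area = 24.6 * 13.8       # Super 35 active area, mm^2
c300_mp = 8.85               # C300 Mark II effective megapixels

scaled_mp = ff_area / s35_area * c300_mp
print(f"{scaled_mp:.1f} MP")  # -> 22.5 MP, matching the rumored ~22 MP
```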


----------



## tpatana (Jan 11, 2016)

frankchn said:


> If that is true, then this is basically the C300 Mark II sensor scaled up.
> 
> (36mm * 24mm) / (24.6mm x 13.8mm) * (8.85 MP) = 22.5 MP



Since the claim is the same, it's highly likely it's the same technology, or at least similar.

I missed the discussion of how/why the C300-II claim was incorrect. Can someone give a short description of what happened there?


----------



## frankchn (Jan 11, 2016)

tpatana said:


> Since the claim is same, it's highly likely it's same technology, or similar at least.
> 
> I missed the parts how/why the C300-II claim was incorrect. Someone give short description what happened there?



Cinema5D has good write-ups about this issue, but it seems to primarily concern filmmakers who are using Log2 gammas and such. I am not sure if it has any direct relevance to photographers.


https://www.cinema5d.com/canon-c300-mark-ii-review-dynamic-range/
https://www.cinema5d.com/canon-measured-15-stops-dynamic-range-c300-mark-ii/

In any case, I would assume that the 4K recording in this camera would be DCI 4K within a Super 35 frame (i.e. exactly the same as the C300 Mark II).


----------



## frankchn (Jan 11, 2016)

FWIW, I've looked at some reviews from Cinema5D specifically regarding dynamic range and I've summarized them here. Again, this is primarily of interest to filmmakers rather than photographers and video DR does not directly map to photo DR.


Arri ALEXA - 14 stops
Sony FS7 - 12.4 stops
Canon C300 Mark II - 12.3 stops
Sony A7r II - 12.3 stops
Sony A7s / A7sII - 11.8 stops
Leica SL - a bit more than 9 stops


----------



## PureClassA (Jan 11, 2016)

This is of NO surprise after the C300II. I realize things are all relative as to how they are measured, but as a shooter of the Full Frame 6D, 5D3, and 5DSR, let me offer my following personal observations:

The 5D3 does a very good job allowing me to pull shadows and compress highlights to my satisfaction across the ISO range.

The 6D does a better job.

The 5DSR does a really good job across the ISO range, equal to or better than the 6D.

When I have had the pleasure of shooting 1DX, it has done an excellent job of providing all the latitude I needed, provided I didn't completely F up the shot by 5 stops with the lens cap on...

I'll bet the 1DX2 will allow for at least ONE if not MORE stops of latitude than I have previously experienced. This will be a VERY well received professional tool that will sell extremely well to its market base, far far far more than a D5.


----------



## Don Haines (Jan 11, 2016)

Canon Rumors said:


> We’re told that Canon will claim 15+ stops of dynamic range for the new Canon EOS-1D X Mark II. This claim was also made for the Cinema EOS C300 Mark II, which may be true at the hardware level, but in practice it may not actually perform to that specification. Let’s hope it’s actually the case with the new 22mp sensor.
> 
> Specifications for this camera have been extremely slow to come in from known sources. There is definitely a much tighter ship being run at Canon, but we do expect more to leak out as we approach an announcement in the next 4-8 weeks.


If the RAW files are still 14 bit, then say no to 15 stops of dynamic range..... If the RAW files are 16 bit, then say hello!


----------



## PureClassA (Jan 11, 2016)

Don, I suspect not, because there was no such 16-bit RAW capture in the C300 II. We will likely get the same tech scaled up for full-frame stills that we got in the C300 II. THAT BEING SAID... with a FF sensor vs. Super 35, that SHOULD account for a nice step up even over what the C300 II provides. So perhaps a real-world output of 13-14 stops is not off the table.



Don Haines said:


> Canon Rumors said:
> 
> 
> > We’re told that Canon will claim 15+ stops of dynamic range for the new Canon EOS-1D X Mark II. This claim was also made for the Cinema EOS C300 Mark II, which may be true at the hardware level, but in practice it may not actually perform to that specification. Let’s hope it’s actually the case with the new 22mp sensor.
> ...


----------



## H. Jones (Jan 11, 2016)

Whether or not the claim is true, I'll certainly accept any boost as a worthy upgrade. Professionally I've never once found myself really needing even 12 stops outside of cases where I could easily bracket for a HDR, but I can understand certain uses. 

I'd really love to pick up a 1DX Mark II for personal use, but with the added cost over the 5D series, I'm really going to have to justify that to my wallet.


----------



## IglooEater (Jan 11, 2016)

What was cool (at least to me) about the C300 Mark II's claimed dynamic range was not so much the 15+ stops, but the fact that it maintained 15 stops through to almost ISO 100,000 - or so they said.


----------



## candc (Jan 11, 2016)

Don Haines said:


> Canon Rumors said:
> 
> 
> > We’re told that Canon will claim 15+ stops of dynamic range for the new Canon EOS-1D X Mark II. This claim was also made for the Cinema EOS C300 Mark II, which may be true at the hardware level, but in practice it may not actually perform to that specification. Let’s hope it’s actually the case with the new 22mp sensor.
> ...



I am not up on the binary bits part of it all. Is that how it works, you need 16 bit raw to record 16 stops dr?


----------



## adaminc (Jan 11, 2016)

boozed said:


> On-chip ADCs maybe? Say it isn't so Canon!



On-chip ADCs have been around since, well, the beginning of CMOS image sensors. Essentially any camera these days that uses a CMOS sensor has ADCs on the sensor, and uses a high-speed differential serial link like LVDS to move those digital signals off the sensor package.

Edit: It seems that Canon has been using both on- and off-sensor ADCs for some time now, but is going to concentrate on on-sensor ADC.


----------



## Orangutan (Jan 11, 2016)

candc said:


> I am not up on the binary bits part of it all. Is that how it works, you need 16 bit raw to record 16 stops dr?


I believe the answer is that it's not strictly necessary, but it makes the circuitry simpler. Just as you can represent the range of water temperatures from freezing to boiling as 32-212 or 0-100 (or any other range you choose), you can also represent electron counts on whatever scale you want. However, there would be extra work to "map" (technical term) 15 "stops" of DR into 14 bits of data. Extra work means more chips, more heat, higher costs, etc.

If someone out there knows more about this, I hope they'll write in with a better explanation.
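To make the analogy concrete, here is a toy sketch (all numbers made up) of that linear "mapping": any analog range can be rescaled onto any code range, just like converting temperature scales; the only thing fixed is the number of distinct codes.

```python
def rescale(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map x from [in_lo, in_hi] onto [out_lo, out_hi]."""
    return (x - in_lo) / (in_hi - in_lo) * (out_hi - out_lo) + out_lo

# Water temperature: the same physical range on two different scales.
print(rescale(100.0, 0, 100, 32, 212))  # boiling point: 212.0

# A signal spanning a 15-stop linear range (0..2**15 - 1) can be coded
# into a 14-bit range the same way, at the cost of halved step precision.
code = round(rescale(20000, 0, 2**15 - 1, 0, 2**14 - 1))
print(code)  # about half the input value
```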


----------



## Matthew Saville (Jan 11, 2016)

As someone who is rooting for Canon but isn't as up to date with the latest progress, can anyone fill me in on just how "verified" the C300 mk2's dynamic range claims are?

If so, I'm pretty willing to believe Canon when they say (leak) that the 1DX mk2 is *designed* to achieve 15+ EV's, and in real-world has at least 13-14 EV's of usable dynamic range.


----------



## Matthew Saville (Jan 11, 2016)

adaminc said:


> boozed said:
> 
> 
> > On-chip ADCs maybe? Say it isn't so Canon!
> ...



Yeah, that has been one of the issues holding Canon back in the base ISO dynamic range category, even while on par or ahead in the high ISO dynamic range category.

If Canon is indeed switching entirely to on-chip ADC from now on, then we can safely assume an easy ~2EV bump in base ISO shadow recovery right off the bat, I bet.


----------



## jrista (Jan 11, 2016)

If true, this is very welcome. About time. That said, I'll believe it when I see it. 

I am skeptical, as they claimed this before with the C300 II, and in testing it wasn't close. I am also curious whether it is an actual linear DR increase at the hardware level, or due to some kind of processing curve.


----------



## Don Haines (Jan 11, 2016)

Orangutan said:


> candc said:
> 
> 
> > I am not up on the binary bits part of it all. Is that how it works, you need 16 bit raw to record 16 stops dr?
> ...


You can scale a 16-bit number into a 14-bit register..... and you end up with 14 bits of precision.... if you try scaling it back up, you get two bits of random noise added on to the bottom of the signal.
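Don's point in a few lines, with a hypothetical 16-bit sample: once the low two bits are dropped to fit a 14-bit register, scaling back up cannot recover them.

```python
x = 0b1010_1010_1010_1011   # a 16-bit sample (decimal 43691)
down = x >> 2               # squeeze into a 14-bit register: low 2 bits dropped
back = down << 2            # scale back up to the 16-bit range
print(x - back)             # 3: the discarded low bits (0b11) are gone for good
```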


----------



## jrista (Jan 11, 2016)

Don Haines said:


> Orangutan said:
> 
> 
> > candc said:
> ...



Dead on. Once you discard information you cannot get it back. You can reorganize information, though, and a compression curve applied to the raw pixel data before ADC could allow 15 stops of information to be compressed more usefully into 14 bits with only a small loss.


----------



## Orangutan (Jan 11, 2016)

Don Haines said:


> Orangutan said:
> 
> 
> > candc said:
> ...


I'd love to hear the fuller explanation of that. From what little I understand, the sensor has a capacity for a certain number of electrons. Ideally, you want to just count those electrons. For example, the 1DX has 88,600 (per Clarkvision). That would require 17 bits to represent directly, so some magic is already occurring to scale that to 14 bits. Part of that, I presume, comes from "removing the noise." With my almost-nonexistent understanding of signal processing, I'm at a loss to understand how it's inherently harder to convert 17 bits of electrons to 14 bits of data than to convert 17 bits of electrons to 15 bits of data. (I'm assuming the 1DX2 will not have significantly higher FWC due to slightly smaller pixel size.) I.e., it's not binary data until after the signal processing is done, so the analog signal can be sliced into slabs of any desired size.

Again, I confess ignorance and curiosity.


----------



## heheapa (Jan 11, 2016)

15 stops @ base ISO down to 12 stops @ ISO 25600 would be welcome


----------



## PureClassA (Jan 11, 2016)

FWC can't be greater with smaller pixels on a denser sensor (22MP vs 18MP... I don't believe). But then again, as has been discussed at length on this forum over the years, the problem Canon sensors truly suffer from is read noise, which robs a great deal of the native DR they actually capture prior to being processed OFF chip on a separate ADC. Now we have a FF sensor (NOT a Super 35 like the C300 II) with ON-chip ADC.

Take the 5D3 sensor, which is capable of more DR than the rest of the system allows, and give it ON-chip ADC. I think it's entirely plausible that we see a lot more of its native capabilities (13 stops +, perhaps) than we have before, because we aren't losing signal to noise introduced along a signal path that no longer exists now that the ADC process is on board. That alone could account for a significantly cleaner signal, and that's NOT taking into account the newer die process Canon has apparently been using since the 5D3 to create the pixels for the 7D2 and 5DSR, neither of which benefitted from on-chip ADC.

The 1DX2 will be the true marker for Canon going forward. I think I got most everything right, but I'll bow to Jon's wisdom on the math here.



Orangutan said:


> Don Haines said:
> 
> 
> > Orangutan said:
> ...


----------



## Pompo (Jan 11, 2016)

frankchn said:


> If that is true, then this is basically the C300 Mark II sensor scaled up.
> 
> (36mm * 24mm) / (24.6mm x 13.8mm) * (8.85 MP) = 22.5 MP



15 stops of DR sure is better than 11ish. I pray that'll be the case!


----------



## raptor3x (Jan 11, 2016)

jrista said:


> If true, this is very welcome. About time. That said, I'll believe it when I see it.
> 
> I am skeptical, as they claimed this before with the C300 II, and in testing it wasn't close. I am also curious whether it is an actual linear DR increase at the hardware level, or due to some kind if processing curve.



There was a follow up article including a response from Canon that indicated there were different camera settings needed to extract the maximum dynamic range from the sensor as well as showing how Canon is defining dynamic range a bit differently from Cinema5D. By Canon's definition, which is similar to how DxO defines it, and the "optimal" camera settings they were indeed hitting 15 stops.


----------



## PureClassA (Jan 11, 2016)

Video is a funny animal, because most capture isn't done in RAW mode, whereas nearly all pro work in stills is. Like Sony S-Log and Canon C-Log, you need to use certain image profiles to get the flattest image and widest DR possible from the camera, whereas shooting in RAW, you get everything no matter what. All provided, of course, you expose correctly (no lens cap shots). I'd love to read the article you're speaking of. Have a link?



raptor3x said:


> jrista said:
> 
> 
> > If true, this is very welcome. About time. That said, I'll believe it when I see it.
> ...


----------



## jrista (Jan 11, 2016)

So they ARE using a compression curve to achieve it. Well, that is less than ideal. Sounds much the same as Sony using craw to preserve dynamic range with their lossy raw compression. Bummer. Definitely not good for Astro...we need linear signals. Might be fine for landscapes though.


----------



## Diltiazem (Jan 11, 2016)

frankchn said:


> FWIW, I've looked at some reviews from Cinema5D specifically regarding dynamic range and I've summarized them here. Again, this is primarily of interest to filmmakers rather than photographers and video DR does not directly map to photo DR.
> 
> 
> Arri ALEXA - 14 stops
> ...



Interesting. According to this site, the C300 Mark II and A7r II seem to have the same DR, and it's better than the A7s/A7s II.


----------



## Policar (Jan 11, 2016)

frankchn said:


> FWIW, I've looked at some reviews from Cinema5D specifically regarding dynamic range and I've summarized them here. Again, this is primarily of interest to filmmakers rather than photographers and video DR does not directly map to photo DR.
> 
> 
> Arri ALEXA - 14 stops
> ...



This is very interesting/scandalous. Makes for good yellow journalism/blog.

It's all video measurements so there's a ton of post processing on top of everything else, whereas DXOmark (faulty in its own respects) at least measures raw readout data... less useful, but... more meaningful for stills.

It seems like Canon has sensor technology that's on par or close enough to on par with Sony's that for stills you're getting the same DR. For video it should serve you well, too.

The Alexa is special. Its dual-gain path is magic. Watch The Revenant. Nothing compares for pure high-quality capture, but they're using a 65mm (medium format) sensor to get there, and that sensor is an "unofficially" dramatically improved version of a dual-gain-path, way-out-of-your-league, 90W-to-use-it monster of a sensor, so...

What we can take from this is: Canon decided to catch up with Sony.

The D5 and 1DX2 might have a lot in common. But with Canon and Nikon trading advantages...

...and ideally both pushing for huge innovation in their competitors, by pushing their specific buttons.....

Game on.


----------



## Pitbullo (Jan 11, 2016)

Mt Spokane Photography said:


> When you really need lots of DR, 15 stops is not enough. DR is, however, basically a measure of noise, so lower noise means higher DR and better high ISO.



I do find this statement a bit odd, though I agree with some of it. When my camera clips highlights, and I try to expose for the highlights, forcing me to raise the shadows, with noise and banding, the 11 stops of DR my sensor gives me is not enough. That does not make a 15-stop-DR sensor necessary. Perhaps 13 stops would do in my case, which is still 2 stops more than I get from my current setup. Hence, a 15-stop sensor would be great, and more than enough.


----------



## Neutral (Jan 11, 2016)

jrista said:


> So they ARE using a compression curve to achieve it. Well, that is less than ideal. Sounds much the same as Sony using craw to preserve dynamic range with their lossy raw compression. Bummer. Definitely not good for Astro...we need linear signals. Might be fine for landscapes though.


In general it is even possible to squeeze 20 dB of scene dynamic range into a 14-bit ADC output.
For best results it should of course be analog signal compression before the ADC input, not digital compression like Sony does.
A kind of S-Log analog compressor circuit which could be switched on/off and also have an adjustable gain curve via camera settings.
The problem, though, is that using a 14-bit ADC would not give any real gain in tonal range, which is important. So higher than 14 stops of per-pixel DR using a 14-bit ADC would still be some compromise/tradeoff, which is not OK for some of the applications where tonal range fidelity is important.
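As a rough sketch of that idea (not Canon's or Sony's actual curve; the shape here is a plain logarithmic stand-in): compress a 15-stop linear signal into 14-bit codes before quantization, then expand on decode.

```python
import math

BITS = 14                   # ADC output width
STOPS = 15                  # scene dynamic range to squeeze in
FULL_WELL = 2 ** STOPS      # linear range covering 15 stops

def compress(linear):
    """Map [1, FULL_WELL] onto 14-bit codes along a log ('S-log-like') curve."""
    stops = math.log2(max(linear, 1))            # position in stops, 0..15
    return round(stops / STOPS * (2 ** BITS - 1))

def expand(code):
    """Inverse mapping from a 14-bit code back to the linear domain."""
    return 2 ** (code / (2 ** BITS - 1) * STOPS)

sig = 20000                                      # a bright linear value
rec = expand(compress(sig))
print(f"relative error: {abs(rec - sig) / sig:.5%}")  # small, well under 0.1%
```

The cost, as the post notes, is that codes are spent evenly per stop rather than linearly, so tonal precision within a stop drops compared to a true linear 15-bit readout.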


----------



## Woody (Jan 11, 2016)

If Canon manages to match Sony/Nikon in the low ISO dynamic range arena, that will be a massive achievement already.

Now, can they get their DPAF to work in AF Servo mode? That would be totally awesome.

What about getting their standard AF sensor to match Nikon's 3D tracking capabilities?


----------



## Woody (Jan 11, 2016)

dilbert said:


> When DxO test cameras and say > 14 stops of DR for Nikon, everyone here says "bullsh*t, DxO are stupid/wrong."
> 
> When Canon does a press release and says "15 stops of DR", everyone goes "wow, cool."
> 
> ... I suspect if Canon said "The Sun will rise in the west tomorrow" lots of people here would go "Cool! Where can I go and see it?"



LOL ;D ;D


----------



## Neutral (Jan 11, 2016)

Pitbullo said:


> Mt Spokane Photography said:
> 
> 
> > When you really need lots of DR, 15 stops is not enough. DR is, however, basically a measure of noise, so lower noise means higher DR and better high ISO.
> ...



Mt Spokane's statement is perfectly correct.
DR is the ratio between the strongest and weakest signals.
The strongest signal is limited by the photocell saturation point, the weakest by the circuit noise floor.
So analog DR (before the ADC) could be improved by raising the saturation point for a given circuit noise floor, by reducing the noise level, or by doing both at the same time.
Then the 14-bit ADC would be the limiting factor, so to squeeze more analog DR into the ADC's limits it would be required to use a pre-ADC analog compression circuit (with an S-log gain curve).

On the other hand, the statement that 15 stops of DR is more than enough is more than odd/strange.
More DR means better overall sensor quality, especially more DR at high ISO.
So having more DR is the same as having more money - the more the better )


----------



## Diltiazem (Jan 11, 2016)

Policar said:


> frankchn said:
> 
> 
> > FWIW, I've looked at some reviews from Cinema5D specifically regarding dynamic range and I've summarized them here. Again, this is primarily of interest to filmmakers rather than photographers and video DR does not directly map to photo DR.
> ...



Cinema5D doesn't measure sensor DR; its measurement is based on 'how people usually shoot'. So they use different exposures and ISOs for different cameras. If you did that for a still camera you would be called a moron.


----------



## Neutral (Jan 11, 2016)

Woody said:


> If Canon manages to match Sony/Nikon in the low ISO dynamic range arena, that will be a massive achievement already.
> 
> Now, can they get their DPAF to work in AF Servo mode? That will totally awesome.
> 
> What about getting their standard AF sensor to match Nikon's 3D tracking capabilities?



For the 1DX II I am more interested in high ISO DR improvements and do not care much about low ISO DR. If it's better, then good; if the same as before, I am OK with that.
As a general walkaround camera I already get better performance within the ISO range from 100 to 12800 using my Sony a7s and a7rII than using my 1DX.
But high ISO performance is extremely important for this camera's intended applications.
My wish is that it would be at least 1 stop better than the a7s and a7rII at high ISO.
Best would be 1 stop better performance at ISO 25600 than the Sony a7s.


----------



## AndreeOnline (Jan 11, 2016)

Here are your 15 stops broken down into discrete levels. Each stop must per definition be a doubling of the previous value in a linear representation:

0. Stop: 0-1
1. Stop: 1-2
2. Stop: 2-4
3. Stop: 4-8
4. Stop: 8-16
5. Stop: 16-32
6. Stop: 32-64
7. Stop: 64-128
8. Stop: 128-256
9. Stop: 256-512
10. Stop: 512-1024
11. Stop: 1024-2048
12. Stop: 2048-4096
13. Stop: 4096-8192
14. Stop: 8192-16384
15. Stop: 16384-32768

Since it would be very impractical to assign 1 value to the first stop and 16384 values to the last stop, we use a logarithmic function to distribute the values in a non-linear fashion. The log curve will assign roughly the same number of discrete values per stop.

Now, you might think that you can take these values and, with the help of a log curve, break them out to even more "stops" by using a flatter profile. But the only thing that means is that you start to define fractions of a real, linear light stop. So that isn't possible.

In practical terms, on a signal level, Dynamic Range (DR) in dB is calculated:

DR (dB) = 20 · log10(peak signal at full-well saturation / r.m.s. noise)

For an image sensor, in essence, each stop will correspond to a 6dB change.

15 stops would in theory need a 90dB sensor. The C300 mkII achieves 67dB (which Canon is pretty proud of). 67dB is equivalent to 11.17 stops. So, there is a discrepancy here.
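That dB-to-stops arithmetic, using the exact factor of 20·log10(2) ≈ 6.02 dB per stop (the 6 dB round figure above gives 67/6 ≈ 11.17):

```python
import math

def db_to_stops(db):
    # One stop is a doubling of signal, i.e. 20*log10(2) ≈ 6.02 dB.
    return db / (20 * math.log10(2))

def sensor_dr_db(full_well_e, rms_noise_e):
    # DR (dB) = 20 log (peak signal at full-well saturation / r.m.s. noise)
    return 20 * math.log10(full_well_e / rms_noise_e)

print(f"{db_to_stops(67):.2f} stops")      # the C300 II's 67 dB ≈ 11.1 stops
print(f"{sensor_dr_db(2**15, 1):.1f} dB")  # a true 15-stop sensor needs ≈ 90.3 dB
```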

My own little theory is that cameras recording log images do some significant pulling of the signal (under exposing) and then use the ISO to boost the levels back up (and recover the shadows). Kind of like a dual ISO thing.

Meaning if you shoot at ISO800 you get 3 stops highlight protection (based off of ISO100). 9 real stops + 3 ISO stops = 12 stops. A Sony that requires ISO3200 for log would give you a whopping 5 stops extra. 9 real stops + 5 ISO stops = 14 stops.

This technique would be especially advantageous if the sensor has low readout noise—something the Sony sensors have been good at. And now the new generation of Canon sensors too, I think.

Another example of this is Canon's HTP that gives you an additional stop of highlight recovery, but the lowest available ISO is 200. Or, as in my 1Dc: ISO400 (required for log) would give me 10+2=12 stops.


----------



## AndreeOnline (Jan 11, 2016)

dilbert said:


> HTP is just for JPEG. It does not impact raw files at all.



I know, I'm only addressing the movie mode here. Should have made that clear.


----------



## neuroanatomist (Jan 11, 2016)

dilbert said:


> When DxO test cameras and say > 14 stops of DR for Nikon, everyone here says "bullsh*t, DxO are stupid/wrong."
> 
> ....
> 
> I suspect if Canon said "The Sun will rise in the west tomorrow" lots of people here would go "Cool! Where can I go and see it?"



Your understanding of facts is as astute as ever. The point was that DxO reported >14 stops of DR as a calculation based on downsampling an image to 8 MP, and the cameras reported with >14 stops of DR would be incapable of recording a scene with >14 stops of DR without clipping highlights, shadows or both. 

I wonder if the sun rises in the west in dilbertland? :




Neutral said:


> On the other hand statement that 15 stops DR is more than enough is more than odd /strange.



I don't know that anyone here has argued that more DR wouldn't be better. It's just not a priority for everyone, even though for some it's obviously the sine qua non, and many of those people seem to think their opinion/desire is universally shared, despite substantial evidence to the contrary. 

Ancient CR proverb: There are none so blind as those who must lift shadows by six stops to see.


----------



## memoriaphoto (Jan 11, 2016)

dilbert said:


> AndreeOnline said:
> 
> 
> > ...
> ...



Actually, it does. I believe when activated, ISO is dropped one stop and then a low-mid curve is applied by the image processor when the RAW file is produced by the camera.

The ALO however, is another story and only affects JPEG (and/or RAW conversions if done in DPP)


----------



## docsmith (Jan 11, 2016)

Pitbullo said:


> Mt Spokane Photography said:
> 
> 
> > When you really need lots of DR, 15 stops is not enough. DR is, however, basically a measure of noise, so lower noise means higher DR and better high ISO.
> ...



Mt. Spokane's description fits my experience perfectly. I shoot with the 5DIII and I run into 2 types of "DR" issues. The first is bright sun and shadows. This far exceeds 10-15 stops DR. In other words, expose for the shadows and blow out the part of the frame in full light. Expose for the bright light and the shadows are completely dark. An additional stop or two of DR will not help me in this circumstance.

But, the other time I run into DR is noise in the blacks/shadows. This can be in astro or a few other situations, like with a background behind a waterfall. I do not run into it often (you know, proper exposure), but occasionally I see some shadow noise. Here, simply cleaning up noise in the blacks/shadows will help my images and expand DR by 1-2 stops. From what I've gathered over the years on this forum and others is that the primary cause of this issue is noise that is gained as the analog signal moves from the chip to off the chip before being converted to a digital signal, thus, the hope is that on chip A/D converters would minimize this noise and the issue with black/shadow noise.

Quickly, on the question someone had about 15 bits being needed for 15 stops of DR: my understanding is that each "bit" is really a digit in a binary sequence. So a 4-"bit" sequence is 0000, 0101, 1111, etc.; 5-bit is 00000, 11111, etc. For simplicity in understanding why 15 bits are needed for 15 stops of DR, I imagine that each bit measures the light filling up the pixel well behind a Bayer sensor. So, in my 4-bit system, 0000 would be black, and 1111 would be completely bright. So, to define 15 stops of brightness, you need 15 digits in a binary sequence to quantify that light.

Simplistic, but that is my level of understanding. If others know better, please expand.


----------



## AndreeOnline (Jan 11, 2016)

docsmith said:


> Simplistic, but that is my level of understanding. If others know better, please expand.



2 bit would be 2^2 levels of luminance = 4
8 bit would be 2^8 levels of luminance = 256
10 bit would be 2^10 levels of luminance = 1024
14 bit would be 2^14 levels of luminance = 16384

For 15 stops, as I've already stated above, you need 32768 or 2^15 (15 bit).

This, of course, assumes a single exposure and a single image-processing pipeline. I can imagine a 14-bit solution with parallel processing (sort of like internal HDR bracketing) that generates 15-stop files.

15 stops can be compressed into 14-bit, or even 10-bit files for that matter. But you can't take a single 14-bit-originated file and expand it to 15 stops.
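Those level counts in one loop (n bits give 2^n codes, so each extra bit doubles the count, matching one extra linear stop):

```python
for bits in (2, 8, 10, 14, 15):
    print(f"{bits:2d} bit -> {2 ** bits:6d} levels of luminance")
```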


----------



## jeffa4444 (Jan 11, 2016)

AndreeOnline said:


> Here are your 15 stops broken down to discreet levels. Each stop must per definition be a doubling of the previous value in a linear representation:
> 
> 0. Stop: 0-1
> 1. Stop: 1-2
> ...


I completely agree with your evaluation in terms of dB per stop. Arri use an Esser light sphere with Esser plates to accurately measure their cameras, and indeed developed one with Esser to read the Alexa. We have the same device, which can be used for stills cameras as well as video cameras, and we use it to make sure manufacturers' claims are accurate. Both the DXOMark and Cinema5D methods are open to a degree of interpretation, whereas the Esser method we have found gives very accurate results and shows the variation even of cameras of the same type (very important in visual effects). 
We have not tested the Canon C300 MKII yet, but have tested cameras like the Sony F55 & F5, which give more DR than the FS7 but not Alexa levels. 
The Alexa 65 uses the same sensor (three stitched sideways) as the regular Alexa XT and gives the same 14-stop DR. 
Cinematographers will always use more DR. However, we fixate on it yet don't fixate on the limitations created by CFAs, which "lock in" the colorimetry to a certain degree; that's why the big advances going forward will come from this area.


----------



## neuroanatomist (Jan 11, 2016)

memoriaphoto said:


> dilbert said:
> 
> 
> > AndreeOnline said:
> ...



Neither of you are correct, although dilbert's statement is closer to the reality. HTP does not directly affect the RAW image data, the tone curve is applied only to the jpg image. However, although the RAW image data aren't directly affected, the RAW file is affected because the metadata are recorded incorrectly. What HTP does is deliberately underexpose by one stop, and 'misrecord' the ISO in the metadata - that's why ISO 100 isn't available when you turn on HTP, i.e. you set ISO 200, it shoots at ISO 100 but records 200, or you set ISO 800, it shoots at 400 and records 800. If shooting JPG, it processes the underexposed image to brighten everything except the highlights (meaning it applies a tone curve). If shooting RAW, it sets a metadata flag so DPP can apply that tone curve. 

If you open that RAW file in a 3rd party converter, results vary. Some ignore the flag and you just get an underexposed image. Others compensate by just boosting the total exposure by one stop - and that just re-blows your highlights and adds shadow noise.


----------



## Pitbullo (Jan 11, 2016)

docsmith said:


> Pitbullo said:
> 
> 
> > I do find this statement a bit odd, though I agree with some. When my camera clips highlights, and I try to expose for the highlights, making me raise the shadows, with noise and banding, the 11 stops of DR my sensor gives me is not enough. That does not make a 15 stop DR - sensor not necassary. Perhaps 13 stop would do in my case, which is still 2 stop more than I get from my current setup. Hence, a 15 stop sensor would be great, and more than enough.
> ...


Usually my problems start when I take pictures of the kids outside in the afternoon. Low sun, not too bright outside, and quite fast-moving subjects. Perhaps ISO 800 to get the shutter speed right. DR is a problem then: either blown highlights or miserable shadows. A few more stops of DR (esp. at ISOs above base, 400-3200) would be great.


----------



## davidmurray (Jan 11, 2016)

docsmith said:


> Quickly on the question someone had about 15 bits needed for 15 stops of DR, my understanding that this is needed is because each "bit" is really a digit in a binary sequence. So a 4 "bit" sequence is 0000, 0101, 1111, etc. 5 bit is 00000, 11111, etc. For simplicity in understanding why 15 bits are needed for 15 stops of DR, I imagine that each bit measures the light filling up the pixel well behind a bayer sensor. So, in my 4 bit system, 0000 would be black, and 1111 would be completely bright. So, to define 15 stops of brightness, you need 15 digits in a binary sequence to quantify that light.
> 
> Simplistic, but that is my level of understanding. If others know better, please expand.



The signal that has the dynamic range is analogue. We want the same degree of difference between each stop, so in reality it doesn't matter whether we take an 8/10/12/14/15/16 stop analog dynamic range and map the analog signal onto arbitrary digital values.

What does matter is whether there are enough digital values, and whether the gaps between the analog levels they represent are sufficiently small, so that digital quantization noise is at or below the analog noise floor.

So if the mapping of the analog signal onto digital values is done carefully, the analog DR shouldn't be a problem.

Or, to put it another way, adding an additional 6dB to the analog signal doesn't necessarily mean you need another digital bit to record the values. You MIGHT, but not necessarily. It all depends on how big the gaps in the analog voltage become between each digital value representing that voltage.
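A quick numerical sanity check of that point, using a toy model with made-up numbers: when the quantization step is fine relative to the analog noise floor, the added quantization noise (roughly step/sqrt(12)) disappears under it.

```python
import math
import random

random.seed(0)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Toy model: an analog level with a noise floor of sigma = 4 (arbitrary units)
sigma = 4.0
samples = [random.gauss(1000.0, sigma) for _ in range(100_000)]

# Quantize with step q = 1, i.e. gaps much finer than the noise floor
q = 1.0
quantized = [round(s / q) * q for s in samples]

# RMS quantization error is ~ q/sqrt(12) = 0.29, buried under sigma = 4
quant_error = rms([s - d for s, d in zip(samples, quantized)])
print(round(quant_error, 3))
```

Make the step several times larger than sigma and the quantization error would start to dominate; keep it smaller and the digitization is effectively transparent.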


----------



## rfdesigner (Jan 11, 2016)

davidmurray said:


> docsmith said:
> 
> 
> > Quickly on the question someone had about 15 bits needed for 15 stops of DR, my understanding that this is needed is because each "bit" is really a digit in a binary sequence. So a 4 "bit" sequence is 0000, 0101, 1111, etc. 5 bit is 00000, 11111, etc. For simplicity in understanding why 15 bits are needed for 15 stops of DR, I imagine that each bit measures the light filling up the pixel well behind a bayer sensor. So, in my 4 bit system, 0000 would be black, and 1111 would be completely bright. So, to define 15 stops of brightness, you need 15 digits in a binary sequence to quantify that light.
> ...



it's more complex than that.

The analogue signal has intrinsic noise in it, equal to the square root of the signal (because the original signal is a number of electrons).

The brightest pixels do not need 14 bits recorded for them; in reality 9 bits would be more than adequate, just so long as we know it's the brightest 9 bits we're talking about.

A non-linear curve between the ADC and recorded file is fine, just so long as it is completely defined and reversible without error (no reason why this is not possible)
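A minimal sketch of such a reversible nonlinear curve (my own toy example with assumed numbers, not any camera's actual encoding): because photon shot noise grows as the square root of the signal, a square-root style curve can squeeze a wide linear range into far fewer code values while keeping the round-trip error below the shot noise at every level.

```python
import math

# Assumed numbers: a 15-stop linear range stored in only 10 bits of codes
FULL_WELL = 32768      # linear full-scale signal, in electrons
CODES = 1023           # 10-bit output

def encode(e):
    """Square-root compression: allocates fine steps to the shadows."""
    return round(math.sqrt(e / FULL_WELL) * CODES)

def decode(code):
    """Inverse mapping back to linear electrons."""
    return (code / CODES) ** 2 * FULL_WELL

# Round-trip error stays below the photon shot noise sqrt(e) at every level
for e in [10, 100, 1000, 10000, FULL_WELL]:
    err = abs(decode(encode(e)) - e)
    print(e, round(err, 2), "vs shot noise", round(math.sqrt(e), 2))
```

The curve is completely defined and invertible, so nothing is lost that wasn't already below the noise, which is exactly the condition stated above.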


----------



## davidmurray (Jan 11, 2016)

rfdesigner said:


> davidmurray said:
> 
> 
> > docsmith said:
> ...



Yes - agreed.

The key is the ability to take the digital value and get back the original electrical voltage without introducing more noise than what was in the original analog signal.


----------



## kaihp (Jan 11, 2016)

AndreeOnline said:


> docsmith said:
> 
> 
> > Simplistic, but that is my level of understanding. If others know better, please expand.
> ...



All this assumes integer calculations - which isn't a bad assumption at all. But it might be that Canon chooses to store the luminance levels as a floating-point number. It's more computational work, but it could definitely be done to reduce the data to a (say) 14 or even 12 bit number.


----------



## rrcphoto (Jan 11, 2016)

who's to say it's 14 bit RAW data?

canon was the first to go 14 bit, they could conceivably be the first to go to 16 bit (in the non medium format space that is).

Historically the first camera they upped the bit depth on was the 1 series (the 1D Mark III was the first with 14 bit RAW data).

there's a lot of talk / whining / hand wringing, and none of you explored the possibility that canon simply went to 15/16 bit RAW files.

*IF* canon implemented the ADC patents, which are dual-slope ADCs (not ADCs in the traditional sense, but time-to-digital converters), quantizing that value could easily be done at a variety of bit depths.


----------



## 3kramd5 (Jan 11, 2016)

dilbert said:


> When Canon does a press release and says "15 stops of DR", everyone goes "wow, cool."...
> And not only that, people are accepting Canon's "15 stops of DR" statements over testing that actually shows less.



Did you and I read the same thread?



heptagon said:


> 15 stops at what resolution?
> 
> If on the single pixel level, that would truly be phenomenal!
> 
> If on a scaled down image of 1 Megapixel, that wouldn't be very impressive.





Mt Spokane Photography said:


> When you really need lots of DR, 15 stops is not enough. DR is, however, basically a measure of noise, so lower noise means higher DR and better high ISO.





tpatana said:


> One more claim I'm not sure I'd believe completely. Hoping for best of course, but...



etc


----------



## PureClassA (Jan 11, 2016)

At 16 bit, how much larger of a file are we talking here, though? It's pretty substantial, and I think Canon would give that serious consideration. As previously mentioned, the full "15 stops of DR" Canon is boasting is likely something firmware-driven that can/will only be recoverable within proprietary software like DPP. Once the RAW file gets imported and interpreted, it can be exported as a lossless TIFF for LR or PS manipulation if desired.

I'm not saying they won't go to 16-bit RAW files, just that I'd be surprised if they did. The DR on current Canon bodies is more than satisfactory for most pros, though I understand there are SOME shots where more would help, and that there are those who specialize in certain styles where more is commonly demanded. Canon isn't a panacea for everyone, but does serve the majority of pro togs very well.

That being said, if you have a shot where you really need to squeeze every drop of highlight and shadow manipulation out of the widest image range possible on the new Canon models, you'll need to use DPP for those particular shots first. (I would suspect.)





rrcphoto said:


> who's to say it's 14 bit RAW data?
> 
> canon was the first to go 14 bit, they could conceivably be the first to go to 16 bit (in the non medium format space that is).
> 
> ...


----------



## rfdesigner (Jan 11, 2016)

PureClassA said:


> At 16 bit, how much larger of a file are we talking here though? It's pretty substantial. I think Canon would give serious consideration to that. As previously mentioned, the full "15 stops of DR" Canon is boasting is likely something firmware driven that can/will only be recoverable within proprietary software like DPP. Once the RAW file gets imported and interpreted, it can be exported in a loss-less TIFF for LR or PS manipulation if desired. I'm not saying they won't go 16bit RAW files, just saying I'd be surprised if they did. The DR on current Canon bodies is more than satisfactory for most pros, however I understand there are SOME shots where more would help and that there are those who specialize in certain styles where more is commonly demanded. Canon isn't a panacea for everyone, but do serve the majority of pro togs very well. That being said, if you have a shot where you really need to squeeze every drop of highlight and shadow manipulation out of the widest image range possible on the new Canon models, you'll need to use DPP for those particular shots first. (I would suspect)
> 
> 
> 
> ...



it's only 2/14ths larger.

So a 22MPixel 16 bit file will look much like a 19MPixel 14 bit file, assuming the same number of bits are thrashing at the bottom.


----------



## Maiaibing (Jan 11, 2016)

Canon Rumors said:


> We’re told that Canon will claim 15+ stops of dynamic range for the new Canon EOS-1D X Mark II. This claim was also made for the Cinema EOS C300 Mark II, which may be true at the hardware level, but in practice it may not actually perform to that specification.



Wow. If Canon wants to attract bad press and flak at the release of the 1D X II, making a baseless/deceptive claim about one of its perceived shortcomings versus the direct competition must be the sure-fire way to get it.

I hope for Canon's sake they are smarter than this. So Canon either does not make the claim, or delivers where it counts...


----------



## FEBS (Jan 11, 2016)

Don Haines said:


> Canon Rumors said:
> 
> 
> > > We’re told that Canon will claim 15+ stops of dynamic range for the new Canon EOS-1D X Mark II. This claim was also made for the Cinema EOS C300 Mark II, which may be true at the hardware level, but in practice it may not actually perform to that specification. Let’s hope it’s actually the case with the new 22mp sensor.
> ...



+1

I hope so, but a change from 14- to 16-bit sampling is a major upgrade!


----------



## kubelik (Jan 11, 2016)

3kramd5 said:


> dilbert said:
> 
> 
> > When Canon does a press release and says "15 stops of DR", everyone goes "wow, cool."...
> ...



he's just trolling. everyone else is here having a pretty enlightening discussion on how it's unlikely that Canon has actually packed in 15 stops of DR, how it could be claimed through marketing nonsense/obfuscation, or how it could actually happen if Canon used the right technology and gave us 16-bit files.


----------



## neuroanatomist (Jan 11, 2016)

FEBS said:


> I hope to say hello, but a change of 14 to 16 bits sampling is a major upgrade!



Unless you're Sony, who 'innovatively' claim 16-bit image processing.


----------



## PureClassA (Jan 11, 2016)

rfdesigner said:


> it's only 2/14ths larger.
> 
> So a 22MPixel 16 bit file will look much like a 19MPixel 14 bit file, assuming the same number of bits are thrashing at the bottom.



Forgive my ignorance. Ya lost me. You're saying a 16bit RAW file from 22MP sensor will have a similar file size to a 14bit RAW file from a 19MP sensor?? I'm missing something. How could that be smaller/same size?


----------



## 3kramd5 (Jan 11, 2016)

PureClassA said:


> rfdesigner said:
> 
> 
> > it's only 2/14ths larger.
> ...



16 bits per pixel * 19,000,000 pixels = 304,000,000 bits
14 bits per pixel * 22,000,000 pixels = 308,000,000 bits


----------



## MrToes (Jan 11, 2016)

I'll believe it when I see it!


----------



## heptagon (Jan 11, 2016)

candc said:


> I am not up on the binary bits part of it all. Is that how it works, you need 16 bit raw to record 16 stops dr?



Some enlightening information about Signal-to-Noise Ratio (SNR) and the Dynamic Range (DR):

1) The SNR is usually defined as the square of the signal amplitude (e.g. the number of photons detected in a pixel) divided by the square of the standard deviation of the noise on that pixel.

The following factors add up into that pixel noise:

* The Poisson distribution of the number of detected photons.

* The random electronic thermal noise.

* Signal interference with other electronic components.

* Quantization errors.

2) To get the DR you ask the question: What is the highest amplitude that I can get and what is the amplitude with an SNR of 1? Then you divide them and get the DR.

Why is the SNR not equal to the DR?

The reason is the Poisson noise. While the other noise factors mostly stay at the same level for dark and bright pixels in the same image (but change with different selected ISO values), the noise of a bright pixel is higher than the noise of a dark pixel. When you have a high amplitude, e.g. a high number of photons, there is a high variation in the actual number of photons in that pixel. This is simple Poisson statistics and cannot be avoided.

The bright parts of your image are dominated by Poisson noise, and all sensors are electronically very close to perfection there. Look up the SNR values at DXO: they are the same for all sensors of the same size.

Why is the DR of Sony sensors better than the DR of Canon sensors?

Sony has a better A/D-converter which reduces the electronic interference noise. Therefore dark pixels have lower noise. Bright pixels are virtually identical.

Why is the DR of Sony and Canon sensors equal at high ISO?

Canon amplifies the signal on the sensor before sending it to the A/D-converter, therefore the electronic interference is not as high in comparison. Here, the maximum brightness is limited.


Now to answer your question:

In a linear system, yes, you need 16 bits to represent the values needed for 16 stops of DR at the single pixel level.

However if you scale down the image you gain 1 bit of SNR and DR when you reduce width and height by a factor of 2. This even works when you use 14 bit for the RAW image and 16 bit for the digital processing afterwards. This is simple statistics. If you take a picture at 22 MP at 13 stops of DR and scale it down to 1.4 MP you get 15 stops of DR. Therefore HD-ready video with 15 stops of DR is totally believable with the current sensor technology.

One way to get more than 14 stops of DR out of 14 bits is to use a nonlinear A/D curve and digital processing to linearize it afterwards in 16 bits. This would reduce the SNR a bit, but it's not really a problem because, due to Poisson noise, you don't have 16 bits of SNR anyway. The only difficulty is making a good low-noise nonlinear amplifier.

So, yes, there are some tricks to get more than 14 stops of DR out of a 14 bits A/D-converter.
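The scaling-down trick above is easy to verify numerically. A toy model with assumed numbers: 2x2 binning of a noisy dark patch halves the noise, i.e. gains one stop of DR, just as described.

```python
import random
import statistics

random.seed(1)

# Toy dark patch: mean signal 10 e-, Gaussian read noise of 3 e- (assumed)
signal, read_noise = 10.0, 3.0
n = 40_000
pixels = [random.gauss(signal, read_noise) for _ in range(n)]

# "Downscale": average each group of 4 pixels (one 2x2 block)
binned = [sum(pixels[i:i + 4]) / 4 for i in range(0, n, 4)]

# Noise drops by a factor of 2 -> one extra stop of DR after binning
print(round(statistics.stdev(pixels), 2))   # close to 3.0
print(round(statistics.stdev(binned), 2))   # close to 1.5
```

Halving width and height four times (22MP down to about 1.4MP) repeats this gain four times, which is where the "13 stops becomes 15 stops" figure comes from.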


----------



## PureClassA (Jan 11, 2016)

Oh! Ok. You originally meant a 22MP 14bit vs. 19MP 16 bit. I'm with ya.



3kramd5 said:


> PureClassA said:
> 
> 
> > rfdesigner said:
> ...


----------



## heptagon (Jan 11, 2016)

MrToes said:


> I'll believe it when I see it!



This. So far everything is smoke and mirrors.

It starts to mean something to me if the sensor technology 1) really improves the images I make, 2) makes it into the 6DII and 3) comes at an affordable price point.


----------



## jrista (Jan 11, 2016)

Well, if we assume 22mp, then at 14-bit:

Size = 22000000 * 14 / 8 = 38.5MB

And at 16-bit:

Size = 22000000 * 16 / 8 = 44.0MB

The uncompressed raw size of every light sensitive pixel on the sensor would require 5.5 additional megabytes per image. Not really all that much. This ignores the fact that the masked border pixels are also included, along with some metadata, so the difference might be 6MB in the end. Anyway, it shouldn't be a big deal going from 14 bits to 16 bits for a lower-resolution sensor like this.

Now, if we were talking 50mp, that might be a bit of a heftier growth in file size:

Size = 51600000 * 14 / 8 = 90.3MB
Size = 51600000 * 16 / 8 = 103.2MB

Gain of about 13MB per image there. Of course, with a 5Ds you would likely be taking far fewer shots at a lower frame rate, in the case of landscapes possibly as little as one shot per minute. With a 1D X II you could be taking 15 frames per second in regular bursts. If you regularly bring home a couple thousand shots with a 1D X, then you would need an additional 12 gigs or so to handle it at 16-bit.
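For anyone who wants to replay that arithmetic, here's a small helper (same simplification as above: image pixels only, ignoring masked border pixels and metadata):

```python
# Uncompressed raw payload in MB: pixels * bits-per-pixel / 8 bits-per-byte
def raw_megabytes(pixels, bits):
    return pixels * bits / 8 / 1_000_000

for pixels, label in [(22_000_000, "22MP"), (51_600_000, "50MP-class")]:
    mb14 = raw_megabytes(pixels, 14)
    mb16 = raw_megabytes(pixels, 16)
    print(f"{label}: {mb14:.1f}MB @ 14-bit, {mb16:.1f}MB @ 16-bit, "
          f"difference {mb16 - mb14:.1f}MB")
```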



PureClassA said:


> At 16 bit, how much larger of a file are we talking here though? It's pretty substantial. I think Canon would give serious consideration to that. As previously mentioned, the full "15 stops of DR" Canon is boasting is likely something firmware driven that can/will only be recoverable within proprietary software like DPP. Once the RAW file gets imported and interpreted, it can be exported in a loss-less TIFF for LR or PS manipulation if desired. I'm not saying they won't go 16bit RAW files, just saying I'd be surprised if they did. The DR on current Canon bodies is more than satisfactory for most pros, however I understand there are SOME shots where more would help and that there are those who specialize in certain styles where more is commonly demanded. Canon isn't a panacea for everyone, but do serve the majority of pro togs very well. That being said, if you have a shot where you really need to squeeze every drop of highlight and shadow manipulation out of the widest image range possible on the new Canon models, you'll need to use DPP for those particular shots first. (I would suspect)
> 
> 
> 
> ...


----------



## jrista (Jan 11, 2016)

For those who are interested, I use this SNR formula (well, a more complicated version, but this is the stuff that matters for daytime terrestrial photography) frequently in my astro work:

SNR = ImageSignal/SQRT(ImageSignal + DarkCurrent + ReadNoise^2)

While in a pure signal, noise is simply the SQRT(ImageSignal), in a digital camera we also have dark current and read noise. Dark current is negligible for most daytime terrestrial photography, although with older cameras (like the 5D II) even an exposure of a few seconds (say, for long exposure water photography) would suffer from increased color noise and hot pixels. These days, DSLRs and mirrorless cameras have very low dark current, so that term can usually be ignored.

That leaves the image signal and the read noise to be added in quadrature. When you have a strong signal, over midtone gray, read noise doesn't matter much. As you drop farther and farther below midtone gray, read noise can matter more and more, depending on how high it is. If you have a faint image signal of 100e-, and 30e- read noise:

SNR = 100/SQRT(100 + 30^2) = 100/SQRT(100 + 900) = 100/SQRT(1000) = 100/31.62 = 3.16:1 ~= 10dB

If you reduce read noise to 3e-, on the other hand:

SNR = 100/SQRT(100 + 3^2) = 100/SQRT(100 + 9) = 100/SQRT(109) = 100/10.44 = 9.58:1 ~= 19.63dB

In terms of stops, the camera with 3e- read noise has a stop and a half advantage over the camera with 30e- read noise. It was demonstrated some time ago that the Canon sensors themselves seem to be capable of up to around 15.6 stops of DR, given the intrinsic noise that comes from electronics on the sensor itself. It's their downstream ADC units that seem to add the most noise. If Canon could resolve that downstream noise problem, then I don't doubt that they could deliver true, raw 15 stops of dynamic range. 
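Both worked examples above can be reproduced directly from the quoted formula:

```python
import math

def snr_db(signal_e, read_noise_e, dark_current_e=0.0):
    """SNR in dB, with shot noise, dark current, and read noise added in
    quadrature, per the formula quoted above."""
    snr = signal_e / math.sqrt(signal_e + dark_current_e + read_noise_e ** 2)
    return 20 * math.log10(snr)

print(round(snr_db(100, 30), 2))   # 10.0  dB: the 30e- read noise case
print(round(snr_db(100, 3), 2))    # 19.63 dB: the 3e- read noise case
```

The 9.6dB difference is the roughly stop-and-a-half advantage described above (one stop being about 6dB in this convention).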

The question is...have they fixed the ADC units and reduced the noise from them?

I don't think it is just as simple as moving the ADC units onto the sensor die. That allows increased parallelism and a lower ADC operating frequency, but that was not all that Sony Exmor or the CP-ADC Toshiba sensors did. They also improved other things, such as moving the clock generator to a remote area of the sensor to prevent it from introducing noise into the ADC units, adding logic to tune each ADC unit to its column and eliminate banding, improving the signaling that drives all the circuitry, etc. Canon would have to do more than just move the ADC units onto the sensor die...but given recent patents, I think they have the technology to do what's necessary.

I just hope they actually DO...I would rather see a real hardware level gain in dynamic range, due to a reduction in read noise, than the use of some kind of artificial gain due to compressing the high dynamic range analog signal into a lower dynamic range 14-bit output via some kind of tone or compression curve.


----------



## rrcphoto (Jan 11, 2016)

jrista said:


> I don't think it is just as simple as moving the ADC units onto the sensor die. That allows increased parallelism, and lower ADC operating frequency. But that was not all that Sony Exmor or the CP-ADC Toshiba sensors did. They also improved other things, such as moving the clock generator to a remote area of the sensor to prevent it from introducing noise into the ADC units, by adding logic to tune each ADC unit to its column and eliminate banding, by improving the signaling that drives all the circuitry, etc. Canon would have to do more than just move the ADC units onto the sensor die...but given recent patents, I think they have the technology to do what's necessary.



this gets lost in the noise. I don't think canon in one fell swoop is going to catch up to 2-3 generations of exmor sensors. I suspect this will take at least one more generation.

While they have the patents, it's a lot to do in one generation, and those patents are all pretty new.


----------



## jrista (Jan 11, 2016)

rrcphoto said:


> jrista said:
> 
> 
> > I don't think it is just as simple as moving the ADC units onto the sensor die. That allows increased parallelism, and lower ADC operating frequency. But that was not all that Sony Exmor or the CP-ADC Toshiba sensors did. They also improved other things, such as moving the clock generator to a remote area of the sensor to prevent it from introducing noise into the ADC units, by adding logic to tune each ADC unit to its column and eliminate banding, by improving the signaling that drives all the circuitry, etc. Canon would have to do more than just move the ADC units onto the sensor die...but given recent patents, I think they have the technology to do what's necessary.
> ...



I agree that it probably won't happen in one fell swoop, however I disagree that it COULDN'T happen in one fell swoop. It happened like that before.  

The patents were recently granted, but if you look at the dates on most of them, they are not new by any means, two years old or older (I think one was from 2011 even.) 

Canon has a bad habit of sitting on lucrative technology. I've never understood it, but they have had patents for some pretty kick-ass technology since 2008...they just haven't employed it. At least, not in their DSLRs...some of the technology did find its way into their smaller form factor sensors for P&S cameras, and those were the most advanced sensors Canon ever manufactured, using a 180nm process, copper interconnects, etc. There is some kind of lethargy in Canon's larger sensor division that keeps them moving forward at nothing faster than a snail's pace.

Canon fans often ridicule Nikon, which has its own business problems. However, that has never kept Nikon from pushing the technology envelope on every front at all times. The D500 is, in my humble opinion, a phenomenal camera. It's a refinement of technology on every front, not just the ergonomics or just the AF or just the sensor...it packs a ton of high end functionality at every level. I hear all the arguments about Canon needing to continue delivering the excellent customer service and reliability they always have...but if a company that is struggling even more than Canon, with a much smaller R&D budget, can keep pushing the technology envelope, why can't Canon? Canon has BILLIONS to play with...

Anyway. Someday I'm sure they will catch up. I would just prefer they do it in one fell swoop, because I believe they can.


----------



## Proscribo (Jan 11, 2016)

rrcphoto said:


> this gets lost in the noise. I don't think canon in one fell swoop is going to catch up to 2-3 generations of exmor sensors. I suspect this will take at least one more generation.
> 
> While they have the patents, it's alot to do in one generation, and those patents are all pretty new.


I don't see why Canon couldn't do it if they wanted, as Sony isn't the only one who has good sensors. Actually, according to DXO, Toshiba and Samsung have BETTER sensors than Sony, although they are APS-C; also, Toshiba's sensor business is now pretty much Sony's.


----------



## AndreeOnline (Jan 11, 2016)

kaihp said:


> All this assumes integer calculations - which isn't a bad assumption at all. But it might be that Canon chooses to store the luminance levels as a floating-point number. It's more computational work, but it could definitely be done to reduce the data to a (say) 14 or even 12 bit number.



Well, obviously they don't "store the luminance levels as a number" at all. It's a transfer function that de-linearizes the raw data.

And I'm not sure if I'm reading you correctly here, but I get the feeling you think you could gain more DR by using a "finer scale". I see variants of this reasoning from time to time. That is why I tried to address it earlier and make it clear that you can't just "define more stops at the cost of fewer luminance levels per stop".

Anyway. There's little point discussing it here. The thread is already all over the place with some information mixed with lots of speculation. =)

To get back on an easier to follow line of thought, think of it this way: 



Canonrumors said:


> EOS-1DX Mark II Claims 15 Stops of DR





Canon at release said:


> …
> …
> …
> …
> …and 15 stops of DR when using Canon Log 2 shooting videos in 4k mode.


----------



## jrista (Jan 11, 2016)

AndreeOnline said:


> kaihp said:
> 
> 
> > All this assumes integer calculations - which isn't a bad assumption at all. But it might be that Canon chooses to store the luminance levels as a floating-point number. It's more computational work, but it could definitely be done to reduce the data to a (say) 14 or even 12 bit number.
> ...



Aye, when the camera is only capable of 12 stops of dynamic range, increasing bit depth just increases the number of levels of noise you can differentiate; it doesn't really improve the dynamic range of the camera.

That said, personally, I prefer the smoother, finer grain of noise from a 16-bit camera to that from a 12-bit camera. When you are limited to 12 bits, or even 14 bits, the noise grain is a bit harsher. With the full range of 16 bits, noise looks cleaner and smoother, and the results just look better IMO. Of course, this is going off what I see with astro CCD cameras, which are usually 16-bit. Those cameras tend to be quite expensive because of the high-grade engineering and high-quality signal processing, which results in nearly perfect Gaussian characteristics:







In my testing, Canon DSLRs don't have perfectly Gaussian noise...there are other characteristics that show up quite readily in an FFT:


----------



## PureClassA (Jan 11, 2016)

jrista said:


> I don't think it is just as simple as moving the ADC units onto the sensor die. That allows increased parallelism, and lower ADC operating frequency. But that was not all that Sony Exmor or the CP-ADC Toshiba sensors did. They also improved other things, such as moving the clock generator to a remote area of the sensor to prevent it from introducing noise into the ADC units, by adding logic to tune each ADC unit to its column and eliminate banding, by improving the signaling that drives all the circuitry, etc. Canon would have to do more than just move the ADC units onto the sensor die...but given recent patents, I think they have the technology to do what's necessary.
> 
> I just hope they actually DO...I would rather see a real hardware level gain in dynamic range, due to a reduction in read noise, than the use of some kind of artificial gain due to compressing the high dynamic range analog signal into a lower dynamic range 14-bit output via some kind of tone or compression curve.



Your knowledge is always a marvel, thanks. And having looked at all the amazing patents Canon has filed and published in recent years, I would tend to agree that there is no reason (apart perhaps from cost) they can't implement a major overhaul in one big step. And it's not just about when the patents were filed: Canon would have been maturing those technologies for some time before even filing, so filing dates aren't really a measure of where they are on the timeline to market-ready mass production. "Hey, we know EXACTLY how to do it, we just have to drop a ton of time and money to make it happen." As Maeda said himself, even he is frustrated with how slow Canon moves.

I would have to assume that if they are now producing the ADCs ON the sensor die, that was the biggest manufacturing cost obstacle to overcome; the rest seems rather mundane in comparison. That raises the question again: "Well then, why wouldn't they?" Of course, this is all academic until we get a damn camera...


----------



## yeahright (Jan 11, 2016)

jrista said:


> AndreeOnline said:
> 
> 
> > kaihp said:
> ...


the fact that the FFT-spectrum is not flat means that the noise is not WHITE, but not that it is not GAUSSIAN.


----------



## rfdesigner (Jan 11, 2016)

jrista said:


> (2D FFTs)



For those that don't understand 2D FFTs: the ideal response to an image of noise is a flat noise plane with a single white dot in the middle (representing the overall average brightness). Low frequencies are in the centre, high frequencies are towards the edge.

If there are dots or stripes in the FFT space, that means there are repeating patterns in the image space, even if you can't see them, which is often the case.
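Here's a small demonstration of that point, using synthetic data (the banding amplitude and period are made up): a faint repeating pattern that is hard to see in the image jumps out as isolated spikes in the 2D FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64

# Pure white noise: its FFT magnitude is a roughly flat plane
noise = rng.normal(0.0, 1.0, (h, w))

# Same noise plus faint horizontal banding (period 8 rows): subtle in
# image space, but concentrated into single bins in frequency space
banded = noise + 0.5 * np.sin(2 * np.pi * np.arange(h) / 8)[:, None]

def centred_spectrum(img):
    # Subtract the mean so the central DC dot doesn't dominate the plot
    return np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))

flat = centred_spectrum(noise)
spiky = centred_spectrum(banded)

# Peak-to-median ratio: modest for white noise, large once banding exists
print(round(float(flat.max() / np.median(flat)), 1))
print(round(float(spiky.max() / np.median(spiky)), 1))
```

The banding shows up as two bright dots on the vertical axis of the shifted spectrum, exactly the "dots or stripes" described above.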


----------



## FEBS (Jan 11, 2016)

heptagon said:


> So, yes, there are some tricks to get more than 14 stops of DR out of a 14 bits A/D-converter.



What you explain is indeed the basis of oversampling. To get a 22MP picture at 15 bits of DR instead of 14, you need an effective sensor of 22MP x 4 = 88MP sampled by a 14-bit analog-to-digital converter. Then, by scaling down with a good software low-pass filter, you can get 22MP at 15 bits, at least in theory; in practice you'd get less, just as we don't get a full 14 stops of DR from a 14-bit converter, due to noise.

But an 88MP sensor with a 14-bit ADC does not seem to me like the way Canon will get 15 stops of DR.


----------



## jrista (Jan 11, 2016)

yeahright said:


> jrista said:
> 
> 
> > AndreeOnline said:
> ...



Well, the 5D III is neither Gaussian nor white, really. Here is the actual noise (not the FFT) and its histogram:










Here is the actual noise and histogram for the QSI sample:










The standard deviation and character of the noise from the QSI, which is also 16-bit, are significantly cleaner. A D800/D810 has noise much closer to the QSI than to the 5D III, for what it's worth; the standard deviation is larger, but not that much larger. Anyway, this kind of noise quality is why astrophotographers spend the big bucks on a high quality CCD camera.


----------



## deleteme (Jan 11, 2016)

Canon Rumors said:


> We’re told that Canon will claim 15+ stops of dynamic range for the new Canon EOS-1D X Mark II. This claim was also made for the Cinema EOS C300 Mark II, which may be true at the hardware level, but in practice it may not actually perform to that specification. Let’s hope it’s actually the case with the new 22mp sensor.
> Specifications for this camera have been extremely slow to come in from known sources. There is definitely a much tighter ship being run at Canon, but we do expect more to leak out as we approach an announcement in the next 4-8 weeks.




It was inevitable that the DR bragging race would be an integral part of marketing.
Now we have to see if it really exists.


----------



## PureClassA (Jan 11, 2016)

Normalnorm said:


> It was inevitable that the DR bragging race would be an integral part of marketing.
> Now we have to see if it really exists.



That's why we're here... :


----------



## PhotographyFirst (Jan 11, 2016)

Hasn't it already been shown that Canon sensors could produce 15 stops of DR for a long time now?

It's really just a matter of extracting that 15 stops of DR and getting it into a file for the memory card, which they have not done as of yet. 

I am betting a large sum of internet points that if Canon becomes the new stills DR leader, the next point of contention for hating Canon will probably be the new automated AFMA system Nikon now has. Once that hits the streets, everything less will be incapable of taking even mediocre photos.


----------



## LetTheRightLensIn (Jan 11, 2016)

Pitbullo said:


> Mt Spokane Photography said:
> 
> 
> > When you really need lots of DR, 15 stops is not enough. DR is, however, basically a measure of noise, so lower noise means higher DR and better high ISO.
> ...



Yeah, what he meant is that if someone else does it better, it doesn't matter, since it's either not enough anyway, fake, whatever. But if Canon does it, then it's great and helpful.

(For the record: from what I see in dappled-forest photography, that little extra bit he says is useless would actually very often be just enough to be totally helpful. Maybe not absolutely ideal, but enough to make the shot work. And to those who say it still doesn't matter because HDR and tone mapping are ugly: first, you can do careful types of tone mapping, toning certain ranges and areas separately, and get a decent bit of HDR out of it without that super-HDR look at all. Second, HDR TVs are starting to appear, and in another 2 years most monitors and TVs produced will probably be 4K, ultra-wide-gamut, 10-bit and HDR, at least apart from the least expensive stuff. The next-generation disc format already states that video should be encoded as ultra-wide-gamut and HDR. I'm not sure they will make it, but the industry has set a goal of making all but the most basic displays 4K, ultra-wide-gamut, HDR and 10-bit by the end of 2018.)


----------



## PureClassA (Jan 11, 2016)

PhotographyFirst said:


> Hasn't it already been shown that Canon sensors could produce 15 stops of DR for a long time now?
> 
> It's really just a matter of extracting that 15 stops of DR and getting it into a file for the memory card, which they have not done as of yet.



Yes. That discussion was rehashed briefly earlier on this very chain in fact. 

(1) Shorten the signal path (on-board ADCs). (2) Keep noise out better than now (other patents Canon already has, though it's unknown whether they will be employed in the DX2). And (3) realize more of the natural DR of the sensor (like the 5D3, which has several stops more than we realize, thanks to problems with 1 & 2 at present).


----------



## zim (Jan 11, 2016)

rfdesigner said:


> jrista said:
> 
> 
> > (2D FFTs)
> ...



appreciated, thank you


----------



## zim (Jan 11, 2016)

ok I might have to delete this out of embarrassment but I've got to ask....

Does all this increased DR mean that a *correctly* exposed image which has DR within the bounds of the camera, so doesn't have to be pushed or pulled, at a high iso say 25600 would have less of the grainy stuff and look more like existing 12800 ?


----------



## heptagon (Jan 11, 2016)

FEBS said:


> But 88Mb sensor and a 14bit ADC does not seems the way to me how Canon will get the 15 stops of DR.



Quad-Pixel-Phase-Detection-Focus?


----------



## rfdesigner (Jan 11, 2016)

zim said:


> ok I might have to delete this out of embarrassment but I've got to ask....
> 
> Does all this increased DR mean that a *correctly* exposed image which has DR within the bounds of the camera, so doesn't have to be pushed or pulled, at a high iso say 25600 would have less of the grainy stuff and look more like existing 12800 ?



DR is only an issue at low ISO. Above about ISO 1600, Canon = Nikon = Sony (roughly).

What's happening is this: the sensor collects electrons -> amplifier -> ADC -> stored in RAW format. If that amplification stage adds noise (and it will, to some extent), it limits the range of brightnesses that can be accurately determined.

At high ISO the amplification is high, so the signal level at the input can be low, making the camera sensitive. At low ISO the amplification is low, so the camera can now handle a much higher signal level without clipping the ADC.

If the whole system has the same apparent noise, normalised to the sensor, at both low ISO and high ISO, then you can see that low ISO = wide range of signals = high DR. Roughly, this is what the Sony sensors achieve. It's usually useful where you have an unshielded light source and you want to avoid blowing it out while also showing what it's illuminating.

i.e. sunset shots, indoor shots where the view outside matters to the shot too (& no flash), etc.
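rfdesigner's chain above can be put into numbers. A minimal sketch (the full-well and read-noise figures are illustrative assumptions, not measured values for any real camera): engineering dynamic range is the full-well capacity divided by the total read noise, expressed in stops.

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering DR in stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative numbers only: a sensor that holds ~80,000 electrons
# at base ISO with ~5 electrons of total read noise.
print(round(dynamic_range_stops(80000, 5), 1))    # 14.0
# Halving the read noise buys exactly one extra stop:
print(round(dynamic_range_stops(80000, 2.5), 1))  # 15.0
```

This is why the thread keeps coming back to read noise: the top end (full well) is fixed by pixel size, so the cheap stops are all at the bottom.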


----------



## Don Haines (Jan 11, 2016)

dilbert said:


> kubelik said:
> 
> 
> > ...
> ...


And this is the point that I was trying to raise... if Canon (or Nikon or Sony) had managed to increase the DR of their sensors by two stops, then they are going to need extra bits to hold that increase in tonal resolution. The easiest way to do this is to start saving 16-bit RAW files. If the 1DX2 does not have an appreciable gain in RAW bit depth, then it will not have an appreciable gain in real DR.
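Don's point follows from how a linear ADC quantizes light: each stop of scene brightness doubles the signal, so an N-bit linear encoding can separate at most N stops before the darkest stop falls below one count. A toy sketch of that limit:

```python
def max_stops_linear(bits):
    """A linear ADC with 2**bits levels can separate at most `bits`
    halvings of full scale before the bottom stop drops below 1 count."""
    value = 2 ** bits - 1  # start at full scale
    stops = 0
    while value >= 1:
        value /= 2         # each stop down halves the recorded value
        stops += 1
    return stops

print(max_stops_linear(14))  # 14 -> a 14-bit linear file can't hold 16 stops
print(max_stops_linear(16))  # 16
```

(This assumes a strictly linear encoding; a nonlinear transfer curve, discussed later in the thread, can pack more stops into fewer bits.)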


----------



## heptagon (Jan 11, 2016)

zim said:


> ok I might have to delete this out of embarrassment but I've got to ask....
> 
> Does all this increased DR mean that a *correctly* exposed image which has DR within the bounds of the camera, so doesn't have to be pushed or pulled, at a high iso say 25600 would have less of the grainy stuff and look more like existing 12800 ?



No, high DR only plays a role at low ISO. High ISO cannot have high DR because there are only so few photons per pixel. Canon cameras already add little excess noise at high ISO.

An image which is exposed "correctly", such that nothing has to be pushed or pulled, does not need high DR. Do you know of any display device that can display more than 10 stops of DR?


----------



## Jack Douglas (Jan 11, 2016)

rfdesigner,

Thanks, that's helpful. By analogy it's not unlike an analog audio signal: too low and it's lost in the noise, since you amplify both noise and signal; too high and the signal clips, and intelligible louder sounds are lost. Is that a reasonable comparison?

Jack


----------



## Lee Jay (Jan 11, 2016)

heptagon said:


> An image which is exposed "correctly" such that nothing has to be pushed or pulled does not need high DR. Do you know any display device that can display more than 10 stops of DR?



All of them, including prints.

It's not hard to tone map 25 stops of DR into a 6 stop capable display system. If you go to the Hubble Site, you'll see a lot of that sort of thing going on. Many of those images (but not all) have absolutely huge DR, all mapped into an 8-bit JPEG.
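Compressing a huge scene range into a display's few stops is usually done with a nonlinear curve. A minimal sketch using a simple global logarithmic operator (one of many possible curves, chosen here for brevity; not a claim about how the Hubble images were actually processed):

```python
import math

def tonemap_log(luminance, scene_max):
    """Map scene luminance in [0, scene_max] into display range [0, 1]
    with a global logarithmic curve, so each extra stop of scene
    brightness costs roughly the same slice of display range."""
    return math.log1p(luminance) / math.log1p(scene_max)

scene_max = 2 ** 25  # ~25 stops between darkest and brightest detail
for stops in (0, 5, 15, 25):
    L = 2 ** stops
    print(stops, round(tonemap_log(L, scene_max), 3))
```

The point of the curve is exactly what Lee Jay describes: 25 stops of input survive into an 8-bit file because display values are spent per stop, not per linear unit of luminance.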


----------



## zim (Jan 11, 2016)

Thanks for the replies, rfdesigner and heptagon. I get where the benefits lie now; I had just been wondering if there could have been higher-ISO benefits.

Heptagon, no I don't, all my output goes to print and photobooks ! ;D


----------



## tpatana (Jan 11, 2016)

3kramd5 said:


> dilbert said:
> 
> 
> > When Canon does a press release and says "15 stops of DR", everyone goes "wow, cool."...
> ...



I was wondering exactly the same. Then I saw who wrote that. Not the first time he's written in his own bubble.


----------



## sanj (Jan 11, 2016)

Pitbullo said:


> docsmith said:
> 
> 
> > Pitbullo said:
> ...



Yes!


----------



## neuroanatomist (Jan 11, 2016)

Lee Jay said:


> heptagon said:
> 
> 
> > An image which is exposed "correctly" such that nothing has to be pushed or pulled does not need high DR. Do you know any display device that can display more than 10 stops of DR?
> ...



That doesn't answer the question.


----------



## Jack Douglas (Jan 11, 2016)

Lee Jay said:


> heptagon said:
> 
> 
> > An image which is exposed "correctly" such that nothing has to be pushed or pulled does not need high DR. Do you know any display device that can display more than 10 stops of DR?
> ...



I don't have a firm handle on this so may be displaying ignorance. When the human eye views a scene where there is bright sun somewhat in the field of view and deep shadows as well, it does not perceive all the detail in the highest brights and the lowest darks. To me, looking at some HDR photos that were bragged up, I felt they just looked "unnatural". Is that what we're after, being able to distinguish detail in the darkest and brightest areas even though it doesn't represent reality? 
This is just an innocent question. Isn't mapping just another way of saying compression?

Jack


----------



## sanj (Jan 11, 2016)

heptagon said:


> zim said:
> 
> 
> > ok I might have to delete this out of embarrassment but I've got to ask....
> ...



Not my experience.


----------



## sanj (Jan 11, 2016)

Jack Douglas said:


> Lee Jay said:
> 
> 
> > heptagon said:
> ...



Jack, my understanding is that our eyes look at one thing at a time, so at a sunset, when we look at the bright clouds our eyes adjust to show detail, and when we look at the darkness under the trees our eyes adjust again. We can do this very rapidly, and in effect have a very high DR built in.


----------



## JoseB (Jan 12, 2016)

The color of a pixel is defined by R (14-bit), G (14-bit), B (14-bit): 42 bits in total.

About non-linear sampling, remember old vinyl records: to fit the frequency range into a thin groove, the bass is attenuated and the treble boosted during recording, following the RIAA curve.
On playback, the signal was restored by an amplifier stage, with the help of a couple of capacitors and resistors, applying the inverse RIAA curve: the bass was boosted back and the treble attenuated.
This also reduced noise in the high frequencies.
Something similar could be done to the analog signal read from the pixels, before the ADC.
I think.
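JoseB's RIAA analogy is essentially companding: compress the signal with a nonlinear curve before quantization, then expand it on decode, so quantization error is spread more evenly across the stops. A toy digital sketch of the idea (a square-root curve stands in for whatever curve real hardware might use; this describes no actual camera):

```python
def encode(x, bits=8):
    """Compress linear signal x in [0, 1] with a square-root curve,
    then quantize to 2**bits - 1 levels."""
    levels = 2 ** bits - 1
    return round((x ** 0.5) * levels)

def decode(code, bits=8):
    """Apply the inverse curve to recover a linear value."""
    levels = 2 ** bits - 1
    return (code / levels) ** 2

# A deep-shadow value survives 8-bit quantization far better than
# with a straight linear encode:
shadow = 0.001
linear_8bit = round(shadow * 255) / 255   # 0.0 -- crushed to black
companded = decode(encode(shadow))        # ~0.00098 -- detail retained
print(linear_8bit, round(companded, 5))   # 0.0 0.00098
```

The cost, as with RIAA, is that highlight steps get coarser; the win is that shadow steps no longer fall below one count.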


----------



## neuroanatomist (Jan 12, 2016)

Jack Douglas said:


> To me, looking at some HDR photos that were bragged up, I felt they just looked "unnatural".



*shad·ow* /ˈSHadō/ _noun_
1. a dark area or shape produced by a body coming between rays of light and a surface.

Your brain sort of expects shadows to be...dark. When they're not dark, and there's no obvious reason for that, it naturally looks unnatural. Naturally.


----------



## Diltiazem (Jan 12, 2016)

Jack Douglas said:


> Lee Jay said:
> 
> 
> > heptagon said:
> ...



There are several aspects to your question.
1. The fact that you have asked it says something about most people taking pictures: most people simply don't lift shadows, feel the need to do it, or know that it can be done.
2. Most people who lift shadows overdo it and make the picture look 'unrealistic'.
3. In high-contrast scenes our eyes adjust to both bright and dark areas, so if you expose for the bright areas, the dark areas may end up darker in the image than how we actually see them. Shadow lifting may be needed in these situations to make a picture look realistic (not necessarily better). 
4. Exmor and other non-Canon sensors are better at shadow lifting, although none of them are perfect. A better technique at the moment is to blend multiple exposures, but there are situations where blending is not an option, so better shadow characteristics are always welcome.


----------



## Jack Douglas (Jan 12, 2016)

neuroanatomist said:


> Jack Douglas said:
> 
> 
> > To me, looking at some HDR photos that were bragged up, I felt they just looked "unnatural".
> ...



So, it seems I just don't prefer "unnatural" when it comes to DR. And it seems I'm in the minority. Now relative to other metrics ...... well, probably I'm more forgiving. 

Jack


----------



## Jack Douglas (Jan 12, 2016)

Diltiazem, that's all logical and reasonable and reason to want more DR, especially if you're regularly in those situations. I always shoot raw and often do adjust exposures to the extent that DPP allows and I agree it can enhance the quality of a shot. The bald eagles in Haida Gwaii taught me quite a bit with the white head presenting challenges. Thanks for that.

Jack


----------



## Lee Jay (Jan 12, 2016)

Jack Douglas said:


> Lee Jay said:
> 
> 
> > heptagon said:
> ...



Yes.

Example, this is about what the Orion Nebula looks like with a normal contrast curve:






This is the Hubble image:






That bright area in the middle is way, way, way brighter than the dust that surrounds it. The Hubble Site folks have massively compressed probably 20+ stops of DR into an 8 bit JPEG. And it still looks fine. Great, even. Not all HDRs are so good. Many look very unnatural. Go look in the sunrises and sunsets thread for a few that are pretty bad.


----------



## Jack Douglas (Jan 12, 2016)

Lee Jay, I had already concluded that an example like you give is good reason for more DR. I guess it was the bad examples that kind of turned me off. Especially when the directional effect of the light source/s is lost - then the photo becomes visually confusing.

Jack


----------



## Diltiazem (Jan 12, 2016)

Jack Douglas said:


> Lee Jay, I had already concluded that an example like you give is good reason for more DR. I guess it was the bad examples that kind of turned me off. Especially when the directional effect of the light source/s is lost - then the photo becomes visually confusing.
> 
> Jack



Actually, bad examples are way more common than the good ones.


----------



## Lee Jay (Jan 12, 2016)

Diltiazem said:


> Jack Douglas said:
> 
> 
> > Lee Jay, I had already concluded that an example like you give is good reason for more DR. I guess it was the bad examples that kind of turned me off. Especially when the directional effect of the light source/s is lost - then the photo becomes visually confusing.
> ...



And very few of the good ones require a high DR sensor, since a quick 3-shot HDR can crush a single shot from the best Sony sensors as far as DR goes.


----------



## scyrene (Jan 12, 2016)

jrista said:


> rrcphoto said:
> 
> 
> > jrista said:
> ...



But isn't that the point? If the executives feel they're doing fine financially already, why would they bother innovating so much? I'm not defending that, by the way, but it seems like a reasonable reading of the situation. And would also explain why smaller/less profitable companies innovate more or faster (if indeed they do) - they feel they need to, in order to sell more.


----------



## kphoto99 (Jan 12, 2016)

Lee Jay said:


> Diltiazem said:
> 
> 
> > Actually, bad examples are way more common than the good ones.
> ...



On that note, is there an optimum stop spacing between the shots for a 3-shot HDR? And if so, is it the same or different for a 5-shot HDR?


----------



## Diltiazem (Jan 12, 2016)

Lee Jay said:


> Diltiazem said:
> 
> 
> > Jack Douglas said:
> ...



Yes, that is one of the reasons why this feature hasn't had any significant impact on the market. I often laugh when people lift shadows by 5 stops to show how good it is compared to Canon. In those examples one can clearly see how bad the Sony/Nikon images look when you push shadows that much (although Canon looks worse).


----------



## djrocks66 (Jan 12, 2016)

If Canon can make a sensor that has the same dynamic range as the Nikon D750's, I would be very happy. That's all I have to say.


----------



## candc (Jan 12, 2016)

Lee Jay said:


> Diltiazem said:
> 
> 
> > Jack Douglas said:
> ...




I am not a big fan of HDR tone-mapped images and don't like the hassle of bracket processing. Sony sensors are not any better at preserving highlights than Canon's, but they are really good at lifting underexposed images. That's a plus when shooting, because you don't have to try to center your exposure or pick which end you want to preserve in a high-DR scene; you just have to make sure you don't overexpose the highlights. It's easy because you have a live histogram in the viewfinder, and you can use zebras if you want. It would be nice to just center and shoot, but that would take 19 or 20 stops, I think.


----------



## 3kramd5 (Jan 12, 2016)

Lee Jay said:


> That bright area in the middle is way, way, way brighter than the dust that surrounds it. The Hubble Site folks have massively compressed probably 20+ stops of DR into an 8 bit JPEG. And it still looks fine. Great, even. Not all HDRs are so good. Many look very unnatural. Go look in the sunrises and sunsets thread for a few that are pretty bad.



Sure it looks great, but in fairness none of us have a baseline for what it "should" look like (i.e. to the naked human eye), as opposed say to looking around outside. Heavily lifted shadows are often very obvious in photos of common scenes; they look neither like natural light nor an artificially lit scene.


----------



## jrista (Jan 12, 2016)

neuroanatomist said:


> Jack Douglas said:
> 
> 
> > To me, looking at some HDR photos that were bragged up, I felt they just looked "unnatural".
> ...



Dark does not mean noisy, though. Neither does dark mean black, or devoid of detail. By definition, dark means "approaching black"...but not actually black. When it comes to whether DR matters, it's not the darkness that matters, it's how much noise is in that darkness, and how much that noise costs you in useful detail. 

There's also the fact that, because of the nature of RAW images, in their natural linear form they do not represent the world as we see it, and with a default camera curve they tend to compress both the highlights and the shadows too much. Our eyes see far more detail in shadows and highlights than a RAW rendered with the default settings of any RAW editor. Yes, shadows should be dark...but not everything that is dark in a camera RAW image before it is processed is necessarily _*supposed*_ to be dark, or, for that matter, an actual shadow. 

When you're storing more information than you can display (and unless you're particularly lucky and have a 10-bit display, a video card that can actually render 10-bit, and software that supports 10-bit output, you're working with 8 bits, and what you see on screen is about 8 stops of dynamic range...anything more gets crushed to black or clipped to white), you're bound, by default, to render some of that information incorrectly. Hence the reason we need the ability to lift shadows. It's the same reason we need the ability to recover highlights: when something starts to blow out to nearly pure white in a RAW, it's not representative of reality. In reality, things don't suddenly clip to hard white with harsh edges. Why should shadows be any different?

Canon users are *used* to having to keep shadows very dark (personally, I would say unnaturally dark in many cases), because to do otherwise means dealing with more noise. They don't push more because they either can't, or are simply unwilling to have that much noise in their image, especially if it has unnatural patterns in it. Once you work with data that has more dynamic range for a while, the NEED to keep shadows so crisply dark fades, and you begin to appreciate the flexibility that lower read noise offers. You don't always have to lift your shadows. You won't always need to. But having the freedom to is extremely useful when you do need it.


----------



## Lee Jay (Jan 12, 2016)

candc said:


> I am not a big fan of hdr tone mapped type images and dont like the hassle of bracket processing.



I'm not usually either, but in Lightroom it's now just three buttons - control-shift-m and you have a new almost raw file without that nasty HDR tone mapping that many such tools produce. Then you can develop it like any other image, just with six stops cleaner shadows.


----------



## jrista (Jan 12, 2016)

Lee Jay said:


> Jack Douglas said:
> 
> 
> > Lee Jay said:
> ...



Jack, to add to what Lee Jay has said, this is my own Orion Sword image...(and for all the effort I put into it, I think I can say one of the best you'll find (outside of the Hubble image, of course) ):





http://www.astrobin.com/full/142576/F/

This is an HDR composite. I took a number of sets of data, including 210s, 120s, 60s, 30s, 15s, 10s and 5s sets of subs. (I had intended to take 240s subs...I accidentally took 210s, hence the break in the linearity of the exposure times.) To actually capture the central stars (the Trapezium or "Trap" as we call it), I had to keep dropping my exposure time down to a mere 5 seconds per sub. Even then...note how much brighter the core of Orion Nebula is in my image? The Trap itself is still overexposed, I wasn't able to accurately reproduce all of the major stars in there (there are four major stars in the trap, and a number of other smaller ones) at 5s. I could have gone down to 2.5s, maybe even 1.25s subs, to better resolve those stars without the heavy clipping in them. Down to 5s, I expanded my dynamic range to 17 stops, and if I'd gone down to 1.25s that would have expanded it to 19 stops. Throw in the integration of 40x210s subs, which reduces noise by over a factor of 6x, and you have even more dynamic range. So, this was a truly high dynamic range image...probably 18-20 stops from the trap to the darkest background sky. 

I manually linear-fit each integration to each other, starting with the 10s to the 5s, then the 15s to the fit 10s, etc. Once all the integrations were fit, I combined each subsequent set into a combined image using an exponential transfer curve formula in PixelMath in PixInsight to produce a single high dynamic range image. 

Now, if you look at Orion Nebula with a pair of binoculars or a decently large telescope, you will only see the large central nebula, including the bluish rim...but you won't see all the rest of the outer dust. You would need an extremely large telescope to start seeing some of that outer dust, and a gargantuan telescope to see it all (albeit more faintly than I have depicted here for sure.) 

But...what is reality, here? Because we cannot normally see the outer dust around the Orion Nebula, is it improper and incorrect and lying to reveal it? This is one of my favorite regions of the night sky. Has been since I was 7 or 8 as a kid. The Orion Nebula was the first nebula I think I ever saw through a telescope. I've wanted to see it FOR REAL for my entire life. I've wanted to know what's really out there in space in the constellation of Orion my whole life. This image represents what's really there. It isn't what we see when we look through a telescope...what we see when we look through a telescope is a pale, soft gray shadow of the reality of the object up there in space, 1,350 light years away. I didn't want to just reproduce what I could see every night during winter by pointing a telescope up at the sky...I wanted to reproduce everything, all of it, top to bottom, brightest to faintest, as much of the intriguing structure and detail of the region as I could. 

And damn if it wasn't a HELL of a LOT of work. I reprocessed this data three times. I spent two solid weekends the third time, and I developed some of my own processing techniques in the end to bring out all the structures and all the nuances of color that I could. It would have been a lot easier to do if my camera had more dynamic range. The most tedious part of the process was linear fitting everything, identifying the blending regions, and tuning the exponential transfer curve formula for each deeper integration to blend them properly. Took a lot of time. Lot of meticulous and tedious measurement. The 5D III has 11 stops of DR. If I'd had 13.8 stops, it would have required fewer integrations to capture the entire dynamic range of the whole object. If I'd had 15 stops, I'd have required even fewer integrations...maybe only two or three. 

Reality is more than what we see. Sometimes we don't see all of what really is there. Sometimes the goal isn't to exactly reproduce reality (and other times, it's to reproduce it more accurately.) Sometimes having a piece of technology that is more capable than our humble little eyes can let you reveal something that people rarely see. Sometimes it can let you realize a childhood dream.
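The linear-fit-and-blend workflow jrista describes can be sketched in a few lines. Assuming two linear exposures of the same scene that differ only by exposure time, the shorter one is scaled by the exposure ratio to match the longer, and the two are merged by taking the short exposure wherever the long one clips (a toy stand-in for what PixInsight's LinearFit and PixelMath steps do, not his actual formulas):

```python
import numpy as np

def merge_two_exposures(long_img, short_img, ratio, clip=0.98):
    """Combine a long exposure (clean shadows, clipped highlights)
    with a short exposure scaled up by the exposure-time ratio."""
    fitted = short_img * ratio      # linear fit: match brightness scales
    clipped = long_img >= clip      # where the long exposure blew out
    return np.where(clipped, fitted, long_img)

# Toy scene: true luminance 0..4, in units of the long frame's full scale
truth = np.array([0.01, 0.2, 0.9, 2.5, 4.0])
long_exp = np.clip(truth, 0, 1.0)       # long frame clips above 1.0
short_exp = np.clip(truth / 4, 0, 1.0)  # 4x shorter frame, nothing clips
hdr = merge_two_exposures(long_exp, short_exp, ratio=4)
print(hdr)  # recovers the values the long exposure clipped
```

Each additional, shorter set of subs repeats the same fit-and-blend step, which is why his stack of 210s down to 5s subs could span 17+ stops.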


----------



## Aglet (Jan 12, 2016)

well said, JR


----------



## eninja (Jan 12, 2016)

sanj said:


> Jack Douglas said:
> 
> 
> > Lee Jay said:
> ...



I am not an expert, but I hope what I say makes sense. As an analogy to an image sensor, our eyes can do selective ISO: given a scene with bright and dark portions, the eye increases ISO on the dark portion so we can see the desired detail, and at the same time decreases ISO on the bright areas to see detail there. 

Try taking a photo of a similar scene. In the image, if you can see detail in the dark portion, the highlights will be overexposed; if you can see detail in the highlights, there is no detail in the dark areas. But when you view the actual scene with your eyes, you see both.


----------



## retroreflection (Jan 12, 2016)

My introduction to high dynamic range occurred one November evening as I was leaving work - the moon was just over the blast furnace's flare stack with its blue flame visible in the night sky (carbon monoxide flames are invisible in daylight). Add the little lights around the furnace, the wispy clouds, touch of snow on the ore piles, and it was beautiful. I rushed back to my office for the best camera available. Not even close to capturing the scene. So, I went home and learned some things.
Yes you need to capture the data. A high dynamic range sensor can sometimes get it all in one shot, but there are many scenes that need more than fifteen (I feel, but have no proof, that the moon frequently features in scenes with more than 15 stops of information). Therefore, bracketing is still needed. 
All of this focus on sensor capability misses the other steps in the process. Jrista has just told us that he's had to develop techniques in post to process his beauties. We also have to be mindful of the display(s). But, I can sit in my living room at noon while reading a book and glance over the top of the page to identify the bird in the tree. One shot, all of the information compressed into the display in my head. I think that sensor and firmware system has been available for a million years or so.
Maybe some of this SENSOR DR!!!!! energy should be directed to software coupled with displays. After the shot, why can't it be as easy as just looking? Shouldn't it be hard to make it look unnatural?
BTW the "high dynamic range" of our vision is accomplished, in part, by in-retina COMPRESSION. Dynamic, pixel level neutral density filtering could be the thing we've been waiting for.


----------



## Jack Douglas (Jan 12, 2016)

I can honestly say I understand the issues reasonably well now - thanks. Obviously, some folk could always do with more DR no matter how much Canon dished out. I'm closer to the group that says they shoot most often at ISOs where DR is similar across the brands. But hey, if Canon can give more I wouldn't complain unless it was at the expense of something more important to me personally.

I really want better IQ with lower noise in the ISO 2000 to 6400 range. 

Jack


----------



## heptagon (Jan 12, 2016)

Lee Jay said:


> heptagon said:
> 
> 
> > An image which is exposed "correctly" such that nothing has to be pushed or pulled does not need high DR. Do you know any display device that can display more than 10 stops of DR?
> ...



How is tonemapping 25 stops DR into an 8 bit JPEG different from pushing and pulling like crazy?


Your answer to my question is like:

Q: Do you know a stair with 1000 steps?

A: With 6 decimal digits you can represent 1 million different numbers.


----------



## jrista (Jan 12, 2016)

Jack Douglas said:


> I really want better IQ with lower noise in the ISO 2000 to 6400 range.



Have you read about the D500? Might want to google around for some articles and have a read.  It sounds really impressive, at any ISO, even as high as 12800.


----------



## heptagon (Jan 12, 2016)

Jack Douglas said:


> I really want better IQ with lower noise in the ISO 2000 to 6400 range.



Not going to happen with current silicon technology using the Bayer pattern, because it is physically not possible.

* They have about 50% quantum efficiency. If that were increased to a perfect 100%, it would be an improvement of 1 stop. With silicon this is probably not going to happen.

* Readout noise is not the big problem at high ISO. There is little to nothing to gain here, except regarding DR at low ISO.

* What I'm waiting for is stacked photodetectors to replace the Bayer pattern. The green channel would double its pixel count, resulting in 1 stop of improvement that could actually be achieved.

* A diversification into more color channels would reduce the noise amplification when decomposing the sensor data into RGB channels. It would also improve white balance and color rendition.

That's what we can hope for in the next 10, 100, 1000 years: 2 stops and a little something of improvement in high-ISO performance.

This is also why Canon keeps improving the digital noise reduction in its JPEG engine. There is much more to gain for people who shoot JPEG.
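heptagon's QE argument comes down to Poisson statistics: at high ISO the noise floor is dominated by photon shot noise, whose SNR is the square root of the detected photon count. Doubling QE doubles the detected photons and improves SNR by sqrt(2), the same gain as one extra stop of exposure. A quick sketch:

```python
import math

def shot_noise_snr(photons, qe):
    """SNR of a Poisson-limited measurement: detected = photons * QE,
    noise = sqrt(detected), so SNR = detected / sqrt(detected)."""
    detected = photons * qe
    return math.sqrt(detected)

# Going from 50% QE to a perfect 100% with the same light:
snr_50 = shot_noise_snr(10000, 0.5)    # sqrt(5000)  ~ 70.7
snr_100 = shot_noise_snr(10000, 1.0)   # sqrt(10000) = 100.0
print(round(snr_100 / snr_50, 3))      # 1.414
```

No readout trickery changes this ceiling, which is why the remaining headroom at high ISO is so small: once you are shot-noise limited, only more detected photons help.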


----------



## jrista (Jan 12, 2016)

heptagon said:


> Lee Jay said:
> 
> 
> > heptagon said:
> ...



You can't always expose a scene such that nothing needs to be pushed or pulled. Not when the scene has more dynamic range than the camera. One of the most difficult subjects for me to shoot as a bird photographer is birds with white and black feathers in direct sunlight. Even if I've got the sun behind me, black feathers don't reflect much, and white feathers reflect a ton. The 5D III suffers severely in that case. So did my 7D. I simply don't have enough dynamic range in the camera to get an exposure I could just leave alone without recovering highlights AND lifting shadows.

There are a lot of birds like this. Loons. A wide range of ducks. Chickadees. Nuthatches. The list goes on and on. Bird photography isn't a situation where you can sit still and bracket. The birds are there, then gone a moment later. You've got to get the shot in a single frame, and to get the best shot, you need to fire off a whole string of single frames to catch one with the bird posed right and without any subject blur. The compromise I am forced into these days is to underexpose the dark feathers. I have no other option, because going the other way would mean permanently clipping the light feathers, and that is completely unrecoverable. Because of the high read noise in Canon cameras, I cannot recover those dark feathers without heavy noise. The 5D III in particular has very speckled noise (due to hot pixels from the rather high dark current), and often banded noise as well.

There are many ways to use an increase in dynamic range. I would LOVE to be able to set my exposure 2/3rds to 1 stop under in a well-lit scene, and just leave it there. Bird photography is catching fleeting moments where a bird shows up before it's gone, and short sequences of action. Sometimes the light changes on a dime, and with the limited DR of current Canon cameras, you don't have the option of leaving yourself any headroom for those moments when the sun pops out from behind a passing cloud and the scene brightens by a stop for a couple of frames before it slips behind a cloud again. You have to be constantly on the ball, constantly adjusting your exposure for every change in the light. It gets really tedious after a while, and there are always moments when you're focused on getting the right moment instead of keeping an eye on the exposure meter...and oops, you just clipped the crap out of those bright white feathers. 

Sometimes you just can't get that perfectly exposed shot without the need to recover anything, and it's in those moments that having more DR is invaluable.


----------



## jrista (Jan 12, 2016)

heptagon said:


> Jack Douglas said:
> 
> 
> > I really want better IQ with lower noise in the ISO 2000 to 6400 range.
> ...



It's already happened. The A7s did it. The A7s II did it. The D500 did it. Those cameras are already pushing 65% Q.E. Sony also has sensors on the market (not in DSLRs) with over 77% Q.E. The A7s has phenomenal high-ISO performance not because it's got higher Q.E., but because it's got bigger pixels...and, more importantly, because it switches to a higher-gain mode at ISO 2000. In the higher-gain mode Sony mapped the input analog signal to the output digital signal differently, which is how they were able to gain back dynamic range at higher ISOs. 

We were talking about transfer curves earlier. That is another way that the analog signal could be mapped more effectively into the output bit depth. Instead of using a linear amplifier, a non-linear amplifier (log amplifier or something similar) could be used to amplify darker pixels more and brighter pixels less. Maybe even combine that with something like Sony's high gain mode. That would preserve dynamic range even more, at both low and high ISO. A reversing curve could be stored in the RAW (or simply specified in the specifications for how to decode the given RAW format) allowing the non-linear compression to be reversed by a RAW editor for proper rendering to screen. 

There are many ways to improve dynamic range beyond just increasing Q.E. or pixel size.


----------



## heptagon (Jan 12, 2016)

jrista said:


> Jack Douglas said:
> 
> 
> > I really want better IQ with lower noise in the ISO 2000 to 6400 range.
> ...



Are there solid tests out there, yet? I didn't see it on DXOmark yet (only preview, not tested).


----------



## heptagon (Jan 12, 2016)

jrista said:


> heptagon said:
> 
> 
> > Jack Douglas said:
> ...



I interpreted the question as being about SNR rather than DR.

The 18% grey area does not benefit from increased DR. 

You won't improve SNR with any technique when it's limited by Poisson noise. You need more detected photons, and you only get those with higher QE, more sensor area, or lower ISO.

Regarding DR, yes, that is an interesting area to improve upon.


----------



## heptagon (Jan 12, 2016)

jrista said:


> heptagon said:
> 
> 
> > Lee Jay said:
> ...



You are actually playing into my argument for more DR with a pretty good example:

1) I think that "correctly exposed" such that "nothing needs to be pushed or pulled" is ill defined. (What does "correct" mean anyways without a reference to a specific technique?)

2) Different photographers have different needs. (Who would have thunk that I'm not the centerpin of the universe.)

- Some photographers can control their scene. (Good for them.)

- Some photographers cannot control their scene. (Usually it's not their fault.)

3) Some scenes intrinsically have a high dynamic range. (Should we not take an image of these situations because it would be a "bad" scene?)

4) Some photographers need pushing and pulling to actually make an image look good on the display devices we currently have. (It's an art: showing what you want to show as well as possible. Different people want to show different things. Some people want to do things that are not possible with current technology. Some people want to do things that are not possible ever.)


----------



## neuroanatomist (Jan 12, 2016)

jrista said:


> neuroanatomist said:
> 
> 
> > Jack Douglas said:
> ...



Let me explain. No, there is too much...let me sum up. 

Question: Why do the images used to demonstrate (and brag about) how much more DR one camera has compared to another so often look unnatural?

Answer N: Because shadows are supposed to be dark. 

Answer J: Because Canon sensors are bad and Sony sensors are good and dark is relative and Sony allows more shadow pushing. 

Well, I know which one sounds like an answer to the question posed.


----------



## kaihp (Jan 12, 2016)

jrista said:


> The 5D III has 11 stops of DR. If I'd had 13.8 stops, it would have required fewer integrations to capture the entire dynamic range of the whole object. If I'd had 15 stops, I'd have required even fewer integrations...maybe only two or three.



Nah, you would still do all that hard tedious work so you could gain even more DR 

See scientist complain about long computational time. Give a scientist a computer that is 10x faster. See scientist refine model and data to make computation 10x longer. See runtime stay the same.

Trust me, I've seen this too many times ;D


----------



## rfdesigner (Jan 12, 2016)

jrista said:


> Lee Jay said:
> 
> 
> > Jack Douglas said:
> ...



Just for reference: I took a high-resolution, high-dynamic-range image of the trapezium at the core of this image. In order not to blow out the stars I had to go beyond 16-bit/channel pre-processing; at 16 bits I either blew out the stars or got banding in the nebula surrounding them. That's more than 16 EVs of DR in the linear image.


----------



## Orangutan (Jan 12, 2016)

neuroanatomist said:


> jrista said:
> 
> 
> > Dark does not mean noisy, though. Neither does dark mean black, or devoid of detail. By definition dark means "approaching black"...but not actually black. When it comes to whether DR matters...it's not the darkness that matters, it's how much noise is in that darkness, and how much that noise costs you useful detail.
> ...



While I often agree with Neuro on these things, I'll say that there have definitely been times with my camera, a 70D, when I've felt more DR would have helped. To be clear, I understand that this is not Canon's best sensor for DR. The most notable example is a forest scene with dappled light. I don't expect to capture the full scene DR, but it would be nice to reduce the amount of blowout and the noise in the high shadows.

I think we all agree on the following:


- More DR is better, but not at the expense of other important qualities (everything here relative to how you shoot)
- Until we hit 20+ stops we will not have enough
- You work with what you've got


----------



## jeffa4444 (Jan 12, 2016)

sanj said:


> Jack Douglas said:
> 
> 
> > Lee Jay said:
> ...


Our eyes constantly pan and scan, but the debate about how much DR we can see at any one time has shifted as the science and our knowledge have improved, and that debate is just as fierce as this one. I saw a German paper claiming most people can see up to 14 stops at any one time, but that the eyes and brain can adapt across up to 20 stops. Think of sitting in a movie theatre and then going out under bright sunny blue summer skies: the variance can be 20 stops, but your eyes need time to adjust, and our protection mechanisms are things like squinting. 

A similar argument is made about how many colors we can see; the estimate has constantly been adjusted upward as the science has improved, and a couple of years back a US optometrist discovered that some women have four cone types instead of the usual three in each eye and can describe even more colors.


----------



## 3kramd5 (Jan 12, 2016)

heptagon said:


> How is tonemapping 25 stops DR into an 8 bit JPEG different from pushing and pulling like crazy?



With the former, you can actually have detail throughout the range. 



heptagon said:


> Are there solid tests out there, yet?



Doubtful; it was only just announced.



jrista said:


> The A7s has phenomenal high ISO performance not because it's got higher Q.E. but because it's got bigger pixels...and more importantly, because it switches to a higher gain mode at ISO 2000. In the higher gain mode Sony mapped the input analog signal to the output digital signal differently, which is how they were able to gain back dynamic range at higher ISOs.



The A7R II implements DR-Pix as well, albeit switching at ISO 640.


----------



## bdunbar79 (Jan 12, 2016)

Question: Suppose you do increase QE of the pixel without making the dimensions larger. Can you "count" that as part of the FWC?


----------



## scyrene (Jan 12, 2016)

jrista said:


> Canon users are *used *to having to keep shadows very dark (and personally I would say unnaturally dark in many cases), because to do otherwise means dealing with more noise.



I don't disagree with most of what you say, but this statement strikes me as odd. It's certainly not my experience (and while one person's experience is merely anecdote, it's no worse than 'lots of unspecified people think/do XYZ'). Partly it comes back to: could you tell, given a series of finished images, what brand of camera was used to create each one? If you're right, it should be obvious. But it's not - I spend a lot of time looking at images made by all manner of equipment, and you can hardly ever tell what was used except in the broadest terms (obviously we'd all welcome equipment that makes things easier - more DR, lower noise, etc. would mean less intensive processing). But also, a technique like ETTR* (which I appreciate is more of a Canon thing nowadays) means the shadow worries aren't what you make them out to be - you're pulling the exposure down afterwards, so the shadows aren't especially compromised. It's not perfect, but I just don't think Canon shots have 'unnaturally dark' shadows, nor do I approach exposure or editing with trepidation towards darker areas - does anyone else?

*You talk about black and white birds later in the thread. I shoot birds more than any other subject, and certainly black and white ones are a challenge (in direct sunlight). But I don't feel like it's the massive problem you make out - I can only conclude you want images with brighter shadows than I do (a legitimate difference of taste). You can ETTR a fair bit before the highlights are truly blown. Much more and the shadows would look weird imho.


----------



## scyrene (Jan 12, 2016)

Orangutan said:


> neuroanatomist said:
> 
> 
> > jrista said:
> ...



Pretty much!

Just to reiterate - I've seen nobody say more DR would be worse (that it might encourage 'unnatural-looking' images is not to say it wouldn't help in other ways). A few don't think it would help them, that's fine. Most of us would like the extra leeway *sometimes*. A few people feel their photography is suffering a lot for want of more DR.

Given this is a thread on the rumour that the next Canon body will have more DR, it's a mild surprise the discussion has gone this way. If you want more, more may be coming! If you don't, it's not gonna harm you in any way!


----------



## Lee Jay (Jan 12, 2016)

Orangutan said:


> neuroanatomist said:
> 
> 
> > jrista said:
> ...



So have I, but never at base ISO. At high ISO, I'm always struggling for DR but at base ISO, it's never really been a problem except for that time I was trying to shoot a solar eclipse setting behind the mountains. The dark mountain foliage in shadow compared to the surface of the sun is about a 30 stop range so the little bit of extra you get from on-sensor ADC would have made no difference.


----------



## Orangutan (Jan 12, 2016)

Lee Jay said:


> Orangutan said:
> 
> 
> > neuroanatomist said:
> ...



I certainly have had the problem at base ISO, even on tripod, with LiveView and bracketing.

Edit: I should add that I'm not in the market for the new 1DXII, but I hope the tech trickles down quickly to my price range.


----------



## scyrene (Jan 12, 2016)

Lee Jay said:


> So have I, but never at base ISO. At high ISO, I'm always struggling for DR but at base ISO, it's never really been a problem except for that time I was trying to shoot a solar eclipse setting behind the mountains. The dark mountain foliage in shadow compared to the surface of the sun is about a 30 stop range so the little bit of extra you get from on-sensor ADC would have made no difference.



The ISO thing is a good point I'd forgotten - with regard to shooting those high contrast birds, it's surely always gonna be ISO 400-800+...


----------



## ewg963 (Jan 12, 2016)

Interesting!!!


----------



## neuroanatomist (Jan 12, 2016)

scyrene said:


> ...I just don't think Canon shots have 'unnaturally dark' shadows, nor do I approach exposure or editing with trepidation towards darker areas - does anyone else?



Clearly some do. 




scyrene said:


> Just to reiterate - I've seen nobody say more DR would be worse (that it might encourage 'unnatural-looking' images is not to say it wouldn't help in other ways). A few don't think it would help them, that's fine. Most of us would like the extra leeway *sometimes*. A few people feel their photography is suffering a lot for want of more DR.



Agreed. 

It seems that for some people, a couple of extra stops of low ISO DR can elevate their photography to the sublime and end world hunger.


----------



## NancyP (Jan 12, 2016)

OK, I haven't read the entire thread - I am not generally an early adopter - but I want to express my appreciation for the really beautiful Orion nebula photos (looking at you, rfdesigner, and of course the Hubble folks) and the blown-out one for comparison.

Yes, I would love more DR - who wouldn't?
I am a dull dog, and am working on getting the most out of my 6D.


----------



## Lee Jay (Jan 12, 2016)

jrista said:


> You can't always expose a scene such that nothing needs to be pushed or pulled. Not when the scene has more dynamic range than the camera. One of the most difficult subjects for me to shoot as a bird photographer are birds with white and black feathers in direct sunlight.



I'm not much of a bird photographer (I've never gone out to shoot birds, only getting a few bird shots when I was doing something else) but I've never found this to be a major problem.


----------



## Orangutan (Jan 12, 2016)

jrista said:


> One of the most difficult subjects for me to shoot as a bird photographer are birds with white and black feathers in direct sunlight. Even if I've got the sun behind me, black feathers don't reflect much, and white feathers reflect a ton.
> 
> <snip>
> 
> Sometime you just can't get that perfectly exposed shot without the need to recover anything, and it's in those moments when having more DR is invaluable.



I have this problem as well: white feathers blown (by a stop or two) and dark feathers indistinct. I've attributed this to the fact that I don't have a top-end camera, but this is a situation where just a few stops more DR would help me. A more common problem is a bright but overcast day, shooting birds on the water or in flight.

To be fair, these are niche problems, but real to me.


----------



## Jack Douglas (Jan 12, 2016)

scyrene said:


> *You talk about black and white birds later in the thread. I shoot birds more than any other subject, and certainly black and white ones are a challenge (in direct sunlight). But I don't feel like it's the massive problem you make out - I can only conclude you want images with brighter shadows than I do (a legitimate difference of taste). You can ETTR a fair bit before the highlights are truly blown. Much more and the shadows would look weird imho.



Agreed. Jon, you're almost frightening me away from shooting any more birds. Personally, my eyes seem to accept deep shadow better than blown highlights. Still, the present discussion has been quite educational for someone just learning the basics. Thanks to all.

Jack


----------



## neuroanatomist (Jan 12, 2016)

Lee Jay said:


> I'm not much of a bird photographer (I've never gone out to shoot birds, only getting a few bird shots when I was doing something else) but I've never found this to be a major problem.



That's because you haven't processed the image properly. You've left the underside unacceptably dark, and when you correct that in post, it has a horrible effect on IQ as you can clearly see from the 100% crop.


----------



## Jack Douglas (Jan 12, 2016)

"Shadows must be destroyed". This thread has shifted to DR as I guess they always do but in this case I'm really benefiting from the information and understanding a lot more. Here is a recent experience where I decided I'd like to try the moon with clouds and see if something close to what the eye sees is feasible with a single shot. This was pushed and pulled in DPP as much as I was able and it doesn't do justice to the clouds. Just a single hand held exposure where DR is relevant no doubt. Any comments or suggestions?

Jack


----------



## Lee Jay (Jan 12, 2016)

The moon can be a difficult target, especially when full or near full, for two reasons. First, your eyes tend to adapt to the dark sky around it, making it appear very, very bright to an observer. Second, it has very low surface contrast (it's all the same color, and that color is charcoal).

I tried to shoot the fullest full moon you could ever shoot from Earth (it was taken during a penumbral lunar eclipse with the penumbra shadow filled in) and I tried to process it so that it appears similarly to the way it appears through a telescope. This is a two-shot panorama, but there was no HDR type blending. The key to this, in my opinion, is capturing it exposed pretty far to the right, and processing so that the brightest areas are right at blow-out.


----------



## Jack Douglas (Jan 12, 2016)

Lee Jay said:


> The moon can be a difficult target, especially when full or near full because of two problems. First, your eyes tend to adapt to the dark sky around it thus making it appear very, very bright to an observer, and because it has very low surface contrast (it's all the same color, and that color is charcoal).
> 
> I tried to shoot the fullest full moon you could ever shoot from Earth (it was taken during a penumbral lunar eclipse with the penumbra shadow filled in) and I tried to process it so that it appears similarly to the way it appears through a telescope. This is a two-shot panorama, but there was no HDR type blending. The key to this, in my opinion, is capturing it exposed pretty far to the right, and processing so that the brightest areas are right at blow-out.



That's great, but what if you want some context to the moon, like a beautiful cloud pattern with that iridescence that we see?

Jack


----------



## Lee Jay (Jan 12, 2016)

Jack Douglas said:


> That's great, but what if you want some context to the moon, like a beautiful cloud pattern with that iridescence that we see?
> 
> Jack



If the actual contrast is really, really large, I don't have a problem letting things crush and blow a bit.

Yours looks over exposed to me (too blown) and like you pushed the shadows way too much.


----------



## 3kramd5 (Jan 12, 2016)

Jack Douglas said:


> This thread has *shifted to* DR



Um. 

Thread title: EOS-1D X Mark II Claims of 15 Stops of DR [CR3]
First sentence: We’re told that Canon will claim 15+ stops of dynamic range for the new Canon EOS-1D X Mark II.


----------



## PureClassA (Jan 12, 2016)

3kramd5 said:


> Jack Douglas said:
> 
> 
> > This thread has *shifted to* DR
> ...



Ya think?  : ;D


----------



## Jack Douglas (Jan 12, 2016)

3kramd5 said:


> Jack Douglas said:
> 
> 
> > This thread has *shifted to* DR
> ...



No more from me here since I'm not qualified to comment on Canon's 15+ stops. Then again, how could a thread with that title go on for so many pages without taking some twists and turns? I was just _selfishly_ trying to extract some useful information (so much to learn, and CR helps a lot). My apologies :-[ 

Jack


----------



## Sporgon (Jan 12, 2016)

Jack Douglas said:


> 3kramd5 said:
> 
> 
> > Jack Douglas said:
> ...



The difference in EV range (and so the DR demanded of the camera) between a full moon and anything else other than the stars is huge - more than any 15 stops can cover. The moon has become the light source for everything else in the frame, so you are trying to expose for not only what is being lit but the light source itself. 

Also, all this extra recoverable latitude at the bottom of the file (because that's what it really is) represents a tiny fraction of the light available mid-range, so it's not as much as it sounds. Still, if it comes at no loss anywhere else, who's not going to welcome it?

What would excite me is a sensor with greater DR at the upper, highlight end. Now that would make a difference, but it will be harder to achieve because you are talking about a much, much greater increase in light intensity.


----------



## Jack Douglas (Jan 12, 2016)

Makes total sense. Thanks.

Jack


----------



## 3kramd5 (Jan 12, 2016)

Jack Douglas said:


> No more from me here since I'm not qualified to comment on Canon's 15+ stops. Actually, how could a thread with that title go on for so many pages, almost has to take twists and turns.
> 
> Jack



Threads with no explicit reference to DR can go on this long. 

I agree regarding the twists and turns - that's usually where the good information comes from. 



Sporgon said:


> What would excite me is a sensor with greater DR at the upper - highlight end. Now that would make a difference, but it will be harder to achieve as you are talking about much much more of a light intensity increase.



Couldn't you do that with larger pixels?


----------



## scyrene (Jan 12, 2016)

Jack Douglas said:


> "Shadows must be destroyed". This thread has shifted to DR as I guess they always do but in this case I'm really benefiting from the information and understanding a lot more. Here is a recent experience where I decided I'd like to try the moon with clouds and see if something close to what the eye sees is feasible with a single shot. This was pushed and pulled in DPP as much as I was able and it doesn't do justice to the clouds. Just a single hand held exposure where DR is relevant no doubt. Any comments or suggestions?
> 
> Jack



I'm afraid to say moon-in-context shots are one of the hardest of all, and frankly no photograph can do justice to what we see (or to put it another way, what we see isn't quite what's there).

If you shoot it as one exposure, you cannot - with any sensor - get details on the moon's face and the full range of tones in the cloud. If you try a multiple exposure blend, you'll get weird transitions (it's possible someone very skilled could do it with seven or nine exposures blended manually, but that point where the cloud meets the moon will usually look odd or fake in my opinion). And the moon always looks too small in a context shot - we see the moon as larger than it really is, if that makes sense, especially close to the horizon (see https://en.wikipedia.org/wiki/Moon_illusion - note the first shot in that article is a good attempt, but relies on a very long FL, a distant foreground subject, and the fact it's taken before it got dark).

Attached are some recent shots to illustrate what I mean. Shot 1: some context (sky light and tree silhouettes), but moon blown. 2: Moon details, but to get anything in the background it's +5 exposure in Lr plus pulling the highlights down -100 (no sensor will produce a clean result here) - and this was the brightest I could shoot it without blowing the moon. Also at the time, the moon was low and appeared large and yellowish, neither of which is obvious from these shots.

Shot 3: a single exposure from an HDR - moon blown but much better transitions, a much better look overall than shot 4, an attempted HDR with cloud details, showing weird effects around the moon.


----------



## scyrene (Jan 12, 2016)

Addendum: it's probably to do with the massively steep drop off in brightness from the moon's disc and the surrounding sky. Our eyes are probably adjusting as they flick between the two.


----------



## Sporgon (Jan 12, 2016)

3kramd5 said:


> Sporgon said:
> 
> 
> > What would excite me is a sensor with greater DR at the upper - highlight end. Now that would make a difference, but it will be harder to achieve as you are talking about much much more of a light intensity increase.
> ...



I don't think so because it's a result of light density rather than light volume. If you went with really large pixels the base ISO would end up being higher.


----------



## bdunbar79 (Jan 12, 2016)

Sporgon said:


> 3kramd5 said:
> 
> 
> > Sporgon said:
> ...



Hmm. Larger pixels are the ONLY way I can think of to do this. How else would you be able to collect more light without blowing highlights? I also don't think it makes sense to say "more DR at the upper end." That is not how DR works.


----------



## Jack Douglas (Jan 12, 2016)

scyrene, thanks! Very kind of you to share that. Most of this is probably common sense, but when one is a beginner sometimes common sense is faulty and wrong conclusions are drawn.

What really tweaked my understanding was the statement that we're photographing a light source and the lit objects together, so the light levels are many magnitudes apart and DR specs will never resolve the issue.

Jack


----------



## neuroanatomist (Jan 12, 2016)

Jack Douglas said:


> What really tweaked my understanding was the statement that we're photographing a light source and the lit objects together, so the light levels are many magnitudes apart and DR specs will never resolve the issue.



In many ways that's the crux – when the light source is not in the frame, 11-12 stops of DR are usually more than enough, and when the light source is in the frame, 15-16 stops falls short.


----------



## Lee Jay (Jan 12, 2016)

bdunbar79 said:


> Sporgon said:
> 
> 
> > 3kramd5 said:
> ...



Larger pixels don't help. Think about putting a bucket out in the rain: whether it's a huge-diameter bucket or a small one, the water level rises at the same rate.

The way to do this is deeper wells (the analogy being taller buckets). That would result in lower base ISOs. I've heard that a base of ISO 25 is not out of the range of possibilities for today's technologies.
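The "taller buckets" idea can be put in numbers: engineering dynamic range in stops is log2(full-well capacity / read noise), so doubling the well depth adds one stop, all of it at the highlight end, and halves base ISO for the same exposure. The figures below are illustrative, not measured values for any camera:

```python
import math

# Engineering DR in stops = log2(FWC / read noise).
def dr_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

print(dr_stops(65_000, 3))    # ~14.4 stops
print(dr_stops(130_000, 3))   # ~15.4 stops: one stop gained at the top
```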


----------



## bdunbar79 (Jan 12, 2016)

Lee Jay said:


> bdunbar79 said:
> 
> 
> > Sporgon said:
> ...



Sorry. I didn't mean larger pixels, I really meant a larger FWC.


----------



## DR. High ISO (Jan 12, 2016)

dilbert said:


> When DxO test cameras and say > 14 stops of DR for Nikon, everyone here says "bullsh*t, DxO are stupid/wrong."
> 
> When Canon does a press release and says "15 stops of DR", everyone goes "wow, cool."
> 
> ...



LOL! A good one, dilbert!  I'm dying right now..


----------



## jhpeterson (Jan 12, 2016)

dilbert said:


> I suspect if Canon said "The Sun will rise in the west tomorrow" lots of people here would go "Cool! Where can I go and see it?"


Hasn't it been rising there the last few days? 
Oh, wait! I just got back from a nearly month-long Australia trip and still have trouble remembering to drive on the right side of the road.


----------



## can0nfan2379 (Jan 12, 2016)

MIT researchers reported working on this earlier this year. Perhaps in a Canon 1DX Mk IV 

http://www.dpreview.com/articles/5923827506/mit-proposes-new-approach-to-hdr-with-modulo-camera


----------



## scyrene (Jan 12, 2016)

Jack Douglas said:


> scyrene, thanks! Very kind of you to share that. Most of this is probably common sense, but when one is a beginner sometimes common sense is faulty and wrong conclusions are drawn.
> 
> What really tweaked my understanding was the statement that we're photographing a light source and the lit objects together, so the light levels are many magnitudes apart and DR specs will never resolve the issue.
> 
> Jack



Ah my pleasure. Not common sense so much as a lot of trial and error! It's a great subject, but one of the most frustrating...

And that description is very good, I hadn't thought of it that way. You could experiment by trying to photograph a candle and the room it illuminates without blowing out the flame - similarly tricky!


----------



## Don Haines (Jan 12, 2016)

Rutgerhermelin said:


> jrista said:
> 
> 
> > You can't always expose a scene such that nothing needs to be pushed or pulled. Not when the scene has more dynamic range than the camera. One of the most difficult subjects for me to shoot as a bird photographer are birds with white and black feathers in direct sunlight. Even if I've got the sun behind me, black feathers don't reflect much, and white feathers reflect a ton. The 5D III suffers severely in that case. So did my 7D. I simply don't have enough dynamic range in the camera to get an exposure where I could just leave it be without recovering highlights AND lifting shadows.
> ...


Sometimes all you need is a multi-coloured cat in the sunshine.... some parts overexposed (red), some parts underexposed (blue).... a bit more DR won't solve all problems, but it will certainly help....


----------



## PureClassA (Jan 12, 2016)

Don Haines said:


> Sometimes all you need is a multi-coloured cat in the sunshine.... some parts overexposed (red), some parts underexposed (blue).... a bit more DR won't solve all problems, but it will certainly help....



Don Haines. Posting bad DR pics of cats on the internet since 1996. :


----------



## 3kramd5 (Jan 13, 2016)

Lee Jay said:


> bdunbar79 said:
> 
> 
> > Sporgon said:
> ...



Deeper = larger, no? Fair enough; "larger" in the sensor context generally refers to frontal area. Regardless, I intended to convey pixels with larger capacities. The noise floor is already pretty low, relatively speaking. So either increase the total amount of light a given pixel can collect, or add buffers to count the number of times each pixel saturates during a given exposure.
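The saturation-counting idea (essentially the MIT "modulo camera" linked earlier in the thread) can be sketched in a few lines. This is a pure toy model: the stored pixel value wraps at the well capacity, and a per-pixel rollover counter lets the true value be reconstructed. The well size is an arbitrary assumption.

```python
# Toy modulo-pixel model: the readout keeps only the residual charge
# plus a count of how many times the well filled and was reset.
WELL = 4096   # electrons per "fill" (assumed)

def capture(true_signal_e):
    """What the sensor would store: (rollover count, residual charge)."""
    rollovers, residual = divmod(true_signal_e, WELL)
    return rollovers, residual

def reconstruct(rollovers, residual):
    """Recover the true signal, far beyond a single well's capacity."""
    return rollovers * WELL + residual

sig = 150_000                      # far beyond one well
assert reconstruct(*capture(sig)) == sig
```

The catch in practice, of course, is building the counter and reset into each pixel without hurting fill factor or noise, which is why this stays a research idea for now.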


----------



## jrista (Jan 13, 2016)

heptagon said:


> jrista said:
> 
> 
> > heptagon said:
> ...



Depends on what ISO we are talking about.  An 18% gray at ISO 6400 is more like a shadow at ISO 100 than anything, and it is still going to be affected by read noise. Not a lot, but it will still be affected by it. 

Increasing Q.E. will help as well, but at the moment the best high ISO performers are already at 60-65% Q.E. There isn't much headroom left, and I don't know of any consumer-grade sensors that top 90% Q.E....and the few sensors that do cost immense sums of money in most cases. (I don't know of any sensor with over 100% Q.E....theoretically it's possible, where a single photon strike releases more than one electron, but I don't think that's possible with visible-spectrum photons; they just don't have enough energy.) So Q.E. is not where we are going to get the most gains. 

We could move to multi-layer photodiodes. That's been patented a good deal...but there are a lot of inherent problems with it that make the gains too small for the cost, so it hasn't hit the mainstream yet. Maybe in the future, assuming the problems can be overcome. 

I wouldn't say that LESS ISO is really the best option. Higher ISO actually makes more efficient use of each and every electron in the photodiode; low ISO is actually more wasteful. The reason we have more DR at lower ISO is just the linear and limited nature of the output buffers for the amplified signals. If the buffer is limited to the ISO 100 capacity, 65ke-, then every time you increase ISO you lose about half of that in terms of usable pixel capacity: amplify the pixel signal by a factor of two, and you can only amplify a 32.5ke- signal before the output buffer saturates. However...what if the output buffer was larger? What if it was 130ke-, or 260ke-? At that point, we could amplify signals at ever higher ISOs more (i.e. use higher gain, like the Sony A7s) without clipping the output signal...and gain dynamic range. That wouldn't help much for short exposures, but if you had the option of using longer exposures (which would normally clip at a higher ISO), then you could have more dynamic range at high ISO without needing bigger pixels or higher Q.E. 

Anyway...there are a lot of options out there; it's just a matter of figuring out which ones are most viable. There are some amazing sensor patents from just the last couple of years that radically increase sensitivity in unconventional ways. The thing is, most of them require fairly advanced fabrication processes, some down to 65nm, which is WELL beyond what Canon is capable of (as far as I know, 180nm at best, and 500nm for most of their DSLR sensors).
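The ISO/DR trade-off described in this post can be put in a toy model: with a linear amplifier and a fixed-range ADC, each doubling of gain halves the electron count that fits before the output clips, so DR falls about a stop per ISO doubling, while a second higher-gain readout mode (as on the Sony A7s above ISO 2000) re-maps the signal with lower input-referred read noise and claws some DR back. All of the numbers below (read noise values, switch point) are illustrative assumptions:

```python
import math

FULL_WELL = 65_000  # electrons at ISO 100 (assumed)

def dr_stops(iso, base_iso=100, read_noise_lo=12.0, read_noise_hi=1.5,
             gain_switch_iso=2000):
    """DR in stops for a fixed-range ADC with an optional high-gain mode."""
    gain = iso / base_iso
    clip_e = FULL_WELL / gain          # electrons before the output clips
    noise = read_noise_hi if iso >= gain_switch_iso else read_noise_lo
    return math.log2(clip_e / noise)

# DR drops ~1 stop per ISO doubling, then jumps back up at the
# high-gain switch point before resuming the same downward slope.
for iso in (100, 400, 1600, 2000, 6400):
    print(iso, round(dr_stops(iso), 1))
```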


----------



## jrista (Jan 13, 2016)

scyrene said:


> jrista said:
> 
> 
> > Canon users are *used *to having to keep shadows very dark (and personally I would say unnaturally dark in many cases), because to do otherwise means dealing with more noise.
> ...









I guess it is best to demonstrate. This is a shot that I grabbed before noticing that the sun had popped out from behind a cloud. If you've ever photographed buffleheads...when they are bathing or eating, you usually have only a few seconds from the time they bob up to the surface, look around for a second, then dive again. The sun popped out for about 5 seconds, then was behind a cloud again. In that moment, the bird had already popped up and I had pressed the shutter button. Not much I could do; the moment was gone. It was overexposed, not by a ton, but enough to clip some of the white highlights. It overexposed because I had already ETTRed to try and get more detail in the darker feathers. Despite that...the shadows still ended up unnaturally dark. What I saw with my own eyes did not look like this...I could still see color in the dark underside and in the iridescent feathers on the bird's head.

To me, the extreme darkness of those shadows (and they run right down to the left edge of the histogram, which represents pure black) is unnatural. I've shared this image on these forums in the past, and the responses I got were along the lines of "Well shadows are supposed to be dark!" Well, sure...but how dark? Black dark? Totally devoid of detail dark? That was a "shadow" in real life...but it wasn't black. I could still see color and detail in that region of the bird. It was a shadow in real life...it just didn't have that much global contrast. I see the shadow as unnatural, other Canon users on these forums in the past did not. 

Furthermore, this photo WAS ETTRed, and it was because I had it ETTRed for a slightly lower light level that the highlights clipped in the short moment that the sun came out. It's not just the highlights that were affected, though...all of those beautiful iridescent feathers on the bird's head were brightened and washed out as well. With careful processing a better contrast can be restored, and color saturation can be restored, but it still doesn't look quite as amazing as those feathers do in real life. They look a bit over-processed...slightly unnatural. ETTR is a risky and less than ideal NECESSITY because of the more limited DR in Canon cameras. I'd have much preferred to not ETTR, and maybe even ETTL a bit, so those iridescent feathers drifted back towards the upper midtones, where they would naturally have the greatest color saturation and would preserve their natural contrast without washing out like they did here.

Anyway. I don't see landscapes or astrophotography as the only reason to NEED more dynamic range. I could have totally used a couple extra stops on this image right here with this bird photograph.


----------



## Gnocchi (Jan 13, 2016)

jhpeterson said:


> dilbert said:
> 
> 
> > I suspect if Canon said "The Sun will rise in the west tomorrow" lots of people here would go "Cool! Where can I go and see it?"
> ...


Hope you enjoyed your time in straylia


----------



## Jack Douglas (Jan 13, 2016)

Jon, I think your case is well illustrated here and I agree that the shadows would have contained more color detail, having shot these guys a few times myself. 

In these discussions there will always be folk who are not as focused on perfection. As a youth I could never understand how my mother could compliment the music emanating from a 3" speaker, but that's not unlike looking at a person with many faults and still praising them. 

Sometimes we need to mentally fill in the last bit of perfection that was missing in the photo and be satisfied or even thrilled that we are getting such gems. Trouble is you and I somehow caught the perfection bug in our formative years.

Jack


----------



## LetTheRightLensIn (Jan 13, 2016)

jrista said:


> So they ARE using a compression curve to achieve it. Well, that is less than ideal. Sounds much the same as Sony using craw to preserve dynamic range with their lossy raw compression. Bummer. Definitely not good for Astro...we need linear signals. Might be fine for landscapes though.



For video they often do, since the output is often cooked files at 8-10 bits, so there is no other way.


----------



## LetTheRightLensIn (Jan 13, 2016)

neuroanatomist said:


> Lee Jay said:
> 
> 
> > heptagon said:
> ...



I'm not quite sure the spec but I think Dolby Cinema Laser projectors can display at least 14 stops of DR (I see some claims of 21, but that might be the goal for the next version). LG and a few others have just released the first HDR displays. Hard to get exact specs but I think they are maybe 10-14 stop range.

As I've been saying for some time, UHD, ultra wide gamut and HDR displays will soon be sold all over the place and close to standard in another two years.


----------



## Eldar (Jan 13, 2016)

I share Jon's views. I have the same experience with birds, where the white is blown and the black is texture-free black, with noise (debated to death in a previous thread). I also share his point about shadow noise. Quite a few posters on CR claim that all this shadow lifting makes images look unnatural, and several examples have been posted to prove that point. When you lift it as much as some of those examples show, you have to be blind to disagree. I do not wish to lift shadows nearly that much, but I'd like the shadows to have structure and texture, which requires DR and/or better-controlled noise. Because that looks natural. Jon's example of feathers is a good one. When you look at a black and white bird through your binoculars, you can see that it is dressed in feathers; it is not painted black and white. 

I also find noise and DR to be less of a problem with landscapes. Firstly because I have the time to make sure exposure is dead on and secondly, if the contrast is bad enough, I can use HDR. Birds and wildlife in shifting light conditions is a totally different ball game.


----------



## heptagon (Jan 13, 2016)

LetTheRightLensIn said:


> I'm not quite sure the spec but I think Dolby Cinema Laser projectors can display at least 14 stops of DR (I see some claims of 21, but that might be the goal for the next version). LG and a few others have just released the first HDR displays. Hard to get exact specs but I think they are maybe 10-14 stop range.
> 
> As I've been saying for some time, UHD, ultra wide gamut and HDR displays will soon be sold all over the place and close to standard in another two years.




Are we talking about a) static or b) dynamic dynamic range?

a) Where each pixel on the same image has the whole dynamic range.

b) Where they dim the light source (or parts of it) to increase the dynamic range between images or between distant parts of the image.


Using OLED or laser displays in a dark room sound promising to me.


----------



## Sporgon (Jan 13, 2016)

For those that show pictures of iridescent black and white subjects where the software shows both highlight and shadow detail to be lost: prepare for disappointment in the increased DR. For a given exposure you are getting no more highlight range, so to preserve the highlights with your new higher-DR camera you underexpose. But even with your previous longer exposure you had lost all shadow data, so by underexposing to preserve highlights with a camera that doesn't actually have any more highlight range, you use up your extra DR in the shadows anyway, and you end up trying to lift zero data. 

The extra DR does have occasional advantages in a very narrow EV band, but this example isn't one of them.


----------



## jrista (Jan 13, 2016)

Sporgon said:


> For those that show pictures of iridescent black and white subject where the software is showing both highlight and lowlight to be lost; prepare for disappointment in the increased DR. For a given exposure you are getting no more highlight range, so, to preserve the highlight with your new higher DR camera you under expose to hold the highlights. But even with your previous longer exposure you had lost all shadow data, so by under exposing to preserve highlights with a camera that doesn't actually have any more highlight range you use up your extra DR range in the shadows anyway, and you end up trying to lift zero data.
> 
> The extra DR does have occasional advantages in a very narrow EV band, but this example isn't one of them.



There is no such thing as "highlight DR", nor is there "shadow DR"...there is simply DR. You either have more dynamic range or not. Dynamic range is by definition the ratio of the full well capacity to the read noise floor. This is something I hear a lot from Canon users, and it's just a misconception, a misnomer. Dynamic range represents the entire range of tones the camera can discern, without segregation.

In practice, with a camera that has nearly 14 stops of DR, you can indeed back off exposure a bit to preserve highlights, and still have plenty of room left to recover detail out of the shadows, and with significantly less noise than a camera that has 11 or even 12 stops of DR. The shadows won't be totally noise-free, but they don't need to be. They just need to have low enough noise to support an acceptable shadow push to reveal the right amount of detail in them.
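The definition above (DR as the ratio of full well capacity to the read noise floor, expressed in stops) can be put into a few lines of Python. The electron counts below are assumed, illustrative figures, not measurements of any real camera:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in stops: log2 of full well capacity over the read noise floor."""
    return math.log2(full_well_e / read_noise_e)

# Assumed figures for two hypothetical sensors sharing the same full well:
print(dynamic_range_stops(67000, 33))   # high read noise -> ~11.0 stops
print(dynamic_range_stops(67000, 4.5))  # low read noise  -> ~13.9 stops
```

Note that on this definition the gain from lowering read noise shows up entirely at the shadow end, which is why the highlight clipping point doesn't move.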


----------



## ewg963 (Jan 13, 2016)

Eldar said:


> I share Jon's views. I have the same experience with birds, where the white is blown and black is texture free black, with noise (debated to death in a previous thread). I also share his point about shadow noise. Quite a few posters on CR claim that all this shadow lifting makes images look unnatural and several examples have been posted to prove that point. When you lift it as much as some of these examples show, you have to be blind to disagree. I do not wish to lift shadows nearly that much, but I’d like the shadows to have structure and texture, which requires DR and/or better controlled noise. Because that looks natural. Jon’s example of feathers is a good one. When you look at a black and white bird through your binoculars, you can see that it is dressed in feathers, it is not painted black and white.
> 
> I also find noise and DR to be less of a problem with landscapes. Firstly because I have the time to make sure exposure is dead on and secondly, if the contrast is bad enough, I can use HDR. Birds and wildlife in shifting light conditions is a totally different ball game.


+1000000000


----------



## bdunbar79 (Jan 13, 2016)

jrista said:


> Sporgon said:
> 
> 
> > For those that show pictures of iridescent black and white subject where the software is showing both highlight and lowlight to be lost; prepare for disappointment in the increased DR. For a given exposure you are getting no more highlight range, so, to preserve the highlight with your new higher DR camera you under expose to hold the highlights. But even with your previous longer exposure you had lost all shadow data, so by under exposing to preserve highlights with a camera that doesn't actually have any more highlight range you use up your extra DR range in the shadows anyway, and you end up trying to lift zero data.
> ...



Then would the read noise floor, the weaker part of Canon's sensor, govern the shadow lifting ability? Is that where the noise originates in the shadows?


----------



## 3kramd5 (Jan 13, 2016)

bdunbar79 said:


> jrista said:
> 
> 
> > Sporgon said:
> ...



Noise doesn't originate in the shadows, noise is everywhere. Signal, however, is low relative to noise in the shadows.

So yes, the noise level governs how far you can lift the shadows. However, it seems many sensors are nearing diminishing returns with respect to read noise.

But imagine if you could keep exposing twice as long without blowing out highlights (assume some magical doubling of full well capacity and no associated increase in the noise floor). You would in turn double the signal in the shadow range.
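A minimal sketch of this tradeoff, assuming the usual simple noise model in which shot noise (the square root of the signal) and read noise add in quadrature; all electron counts here are made up for illustration:

```python
import math

def snr(signal_e, read_noise_e):
    """Signal-to-noise ratio with shot noise and read noise added in quadrature."""
    shot_noise = math.sqrt(signal_e)
    return signal_e / math.sqrt(shot_noise**2 + read_noise_e**2)

# A deep-shadow patch holding 50 electrons:
print(snr(50, 30))   # high read-noise floor: SNR ~1.6 (buried in grain)
print(snr(50, 3))    # low read-noise floor:  SNR ~6.5 (survives a push)
print(snr(100, 30))  # doubled exposure via a bigger well: shadow SNR roughly doubles
```

The last line is the "magical doubling of full well capacity" case: twice the signal in the shadows against the same noise floor.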


----------



## bdunbar79 (Jan 13, 2016)

3kramd5 said:


> bdunbar79 said:
> 
> 
> > jrista said:
> ...



Right. So, again, does that come from read noise? I'm talking about shadow lifting and apparent noise. Yes, I realize that the noise is always present, but I'm also keeping in mind that we are viewing photos. If read noise were to improve, would that then translate to more latitude in the shadows? Or not? Because some sensors with large pixels still have lower read noise than Canon's large-pixel sensors.


----------



## 3kramd5 (Jan 13, 2016)

bdunbar79 said:


> Right. So, again, does that come from read noise? I'm talking about shadow lifting and apparent noise. Yes I realize that the noise is always present but I'm also keeping in mind that we are viewing photos. If read noise were to improve, would that then translate to more latitude in the shadows? Or not? Because there are some sensors with large pixels that also have lower read noise than Canon sensors, even with large pixels.



It comes from total noise, but yes: if you reduce noise, all else being equal, you have a corresponding higher signal to noise ratio, and therefore more latitude in the shadows. My question is: how much room is there to improve in noise (sensors in general, not just Canon's), versus the potentially limitless growth in well capacity?


----------



## Sporgon (Jan 13, 2016)

jrista said:


> Sporgon said:
> 
> 
> > For those that show pictures of iridescent black and white subject where the software is showing both highlight and lowlight to be lost; prepare for disappointment in the increased DR. For a given exposure you are getting no more highlight range, so, to preserve the highlight with your new higher DR camera you under expose to hold the highlights. But even with your previous longer exposure you had lost all shadow data, so by under exposing to preserve highlights with a camera that doesn't actually have any more highlight range you use up your extra DR range in the shadows anyway, and you end up trying to lift zero data.
> ...



You know exactly what I mean: the ability to record a greater light density within the overall EV range of the camera.

The current greater-DR cameras cannot do this, so to preserve highlights you have to use a faster exposure - relatively - and then use the greater shadow recovery. Then what I described above is exactly what happens. I know. I bought an Exmor-sensored camera with a purported 14.5 stops of DR at ISO 100, and in the intense black and white scenario I describe above it is of little benefit - precisely because it clips the highlights at the same exposure as the older Canon camera. 

This is my whole argument with those that hype up the odd stop or two of dynamic range; they speak as if it were able to record a higher light density, and it can't.


----------



## kphoto99 (Jan 13, 2016)

3kramd5 said:


> Noise doesn't originate in the shadows, noise is everywhere. Signal, however, is low relative to noise in the shadows.
> 
> So yes, the noise level governs how far you can lift the shadows. However, many sensors it seems are nearing diminishing return with respect to read noise.
> 
> But imagine if you could keep exposing twice as long without blowing out highlights (assume some magical doubling of full well capacity and no associated increase in the noise floor). You would in turn double the signal in the shadow range.



Your statement just gave me an idea.

Normally you start with an empty bucket (at the photosite) and fill it with electrons as photons strike the photosite.
Canon has the dual pixel photosite now, so maybe half could operate as normal and the other half could start out full, with electrons removed from the bucket as photons strike it.
Since noise is the problem when the bucket is mostly empty, this should increase SNR. At the end of the exposure the second bucket is inverted and compared to the first bucket. Either the bigger one wins, or maybe both are added together. Or the difference is taken to remove the noise.

Is this even a possibility?


----------



## bdunbar79 (Jan 13, 2016)

3kramd5 said:


> bdunbar79 said:
> 
> 
> > Right. So, again, does that come from read noise? I'm talking about shadow lifting and apparent noise. Yes I realize that the noise is always present but I'm also keeping in mind that we are viewing photos. If read noise were to improve, would that then translate to more latitude in the shadows? Or not? Because there are some sensors with large pixels that also have lower read noise than Canon sensors, even with large pixels.
> ...



Yeah, good point. I don't know. You could increase QE several different ways and USE more of the photons captured. Or, like others said, a completely different non-Bayer design.


----------



## fentiger (Jan 13, 2016)

if i want to see detail in the shadows i use a flash gun


----------



## Neutral (Jan 13, 2016)

3kramd5 said:


> bdunbar79 said:
> 
> 
> > Right. So, again, does that come from read noise? I'm talking about shadow lifting and apparent noise. Yes I realize that the noise is always present but I'm also keeping in mind that we are viewing photos. If read noise were to improve, would that then translate to more latitude in the shadows? Or not? Because there are some sensors with large pixels that also have lower read noise than Canon sensors, even with large pixels.
> ...


I think there is still a lot of room to decrease sensor noise, based on smaller manufacturing processes (14nm is currently used for processor chips), 3D technologies for active elements and sensor layers, copper or better conductors, BSI with a cooling layer on the circuit side, etc. Reducing dark currents, and having an individual ADC per pixel to shorten the signal path, would nearly eliminate read noise. A number of these things are available for implementation now; some are already used in sCMOS.
This would all result in better DR and less noise in the shadows in situations with good light.

For low-light photography there will still be a limiting factor, which is shot noise - a smaller number of captured photons results in lower SNR, which is close to SQRT(number of photons).
The only way to improve here is to increase the number of captured photons.
The Bayer sensor design itself is a limiting factor: it captures only 25% of photons for the blue and red channels and 50% for the green channel. So a Foveon-like sensor is the way to go (or one with microprisms, using the Panasonic patent) - both ensure that 100% of photons are captured for each color channel. 
This could give about 2 stops of improvement in light-capturing capability.
Another way is using true MF sensors - like the latest Sony 100MP one used by Phase One, which is already available but far too expensive so far.
In time MF will be where FF is now, once MF sensor production costs are much lower.
Then, some time later, an MF Foveon-like sensor - resulting in 3 stops better low-light performance than current FF sensors.
Then controlled photon-multiplication elements/layers in lenses or on sensors, which would allow more than 100% QE.
Also using materials other than silicon for the semiconductors.
It all depends on how the technologies evolve and the ability to implement them at affordable cost using better manufacturing processes.
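The SQRT(photons) point can be illustrated with a toy calculation. This is a naive per-sensel accounting under an assumed photon count; real demosaicing and microlens designs complicate the picture considerably:

```python
import math

def shot_limited_snr(photons):
    """In the shot-noise limit, SNR approaches the square root of captured photons."""
    return math.sqrt(photons)

incident = 10000  # assumed photons falling on one sensel's area during the exposure
bayer_blue = shot_limited_snr(incident * 0.25)  # ~25% of sites see a blue filter
full_capture = shot_limited_snr(incident)       # hypothetical 100%-capture stack
print(full_capture / bayer_blue)  # -> 2.0: capturing 4x the photons doubles SNR
```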


----------



## PureClassA (Jan 13, 2016)

fentiger said:


> if i want to see detail in the shadows i use a flash gun



I would agree, but unfortunately that isn't always possible to do. That still doesn't mean an extra 2 stops of latitude will save the world, but may be helpful in certain specific situations.


----------



## Neutral (Jan 13, 2016)

Neutral said:


> 3kramd5 said:
> 
> 
> > bdunbar79 said:
> ...


Also, post-processing can do a lot.
From my experience, I believe DxO PRIME noise reduction copes very well with the shot noise described by the Poisson distribution.


----------



## Lee Jay (Jan 13, 2016)

fentiger said:


> if i want to see detail in the shadows i use a flash gun



Sounds good. What model do I need to illuminate the dark side of the moon?


----------



## 3kramd5 (Jan 13, 2016)

Lee Jay said:


> fentiger said:
> 
> 
> > if i want to see detail in the shadows i use a flash gun
> ...


----------



## jrista (Jan 13, 2016)

bdunbar79 said:


> jrista said:
> 
> 
> > Sporgon said:
> ...



The way I see it is this. If you're working with a scene, or a subject, that has more dynamic range than your camera, then you have options. Risk clipping the highlights with ETTR, to preserve as much as you can in the shadows, or preserve the highlights and lose more detail and color fidelity to shadow noise. The dynamic range is just a range...how you use it is up to you. You can use that range to preserve the highlights or to preserve the shadows. With more limited dynamic range (i.e. 11 stops) you are going to have to make compromises more often; with greater dynamic range (i.e. 13.8 stops or even 15 stops) you are going to have more leeway to move the signal around within the dynamic range, and the ability to preserve more detail...at either or both ends of the tonal range. 

This can happen with any camera, because until we get to the point where cameras have like 20 stops of DR, there are always going to be scenes with higher DR. The difference is how much the noise affects your shadows if you end up choosing to preserve the highlights.

The difference with a camera that has 2-3 stops more DR is that when you pull back exposure a bit to preserve the highlights, you have far less noise to eat away at those darker details, making them much more recoverable before the post-push noise levels of the shadows increase to the point where they are unsightly. 

I really hope Canon delivers 15 stops of DR. Even if it isn't literal DR in a 16-bit RAW, if they find a way to use a non-linear compression curve to preserve more shadow detail in the data that is ultimately stored in a 14-bit RAW, I still think that would be much better than just leaving those extra stops...what, nearly four stops of DR if we compare to the 5D III's 10.97 stops...buried in the noise. I think that would be a huge step forward, and I'd take it in a heartbeat. ;P I would still prefer true 16-bit RAW with the full precision if it were an option, but I'll take any interim step Canon can give me.
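A non-linear compression curve of the kind speculated about here can be sketched as a toy log-style transfer function. This is purely illustrative; a real curve (Canon Log, S-Log, etc.) uses carefully tuned constants and offsets, not this simple form:

```python
import math

def log_encode(signal, max_signal=65535, out_bits=14, black_offset=16):
    """Toy log-style curve: map a wide linear range into fewer output codes,
    spending proportionally more code values on the shadows."""
    max_code = (1 << out_bits) - 1
    x = max(signal, 1) / max_signal
    # log2(1 + 1023*x) / 10 runs from ~0 at black to exactly 1.0 at clipping
    frac = math.log2(1 + 1023 * x) / 10
    return black_offset + round((max_code - black_offset) * frac)

# Linear signals one stop apart near the shadow end land hundreds of codes
# apart, versus just a handful of codes under a straight linear 14-bit mapping:
for s in (64, 128, 256):
    print(s, log_encode(s))
```

The point of such a curve is exactly what's described above: the shadow stops keep enough code values to survive a push, at the cost of a non-linear signal, which is why it would be a problem for astro work that needs linear data.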


----------



## StudentOfLight (Jan 13, 2016)

3kramd5 said:


> Lee Jay said:
> 
> 
> > fentiger said:
> ...


That recycle time won't keep up with the 1D-X II


----------



## 3kramd5 (Jan 13, 2016)

StudentOfLight said:


> 3kramd5 said:
> 
> 
> > Lee Jay said:
> ...



I find your lack of faith disturbing.


----------



## PhotographyFirst (Jan 13, 2016)

How much of the bit depth is taken up by DR that is measurable but not useful? I really doubt some of the 14-stop sensors that DXO measures really offer that much usable latitude. 

Couldn't Canon just clip off the bottom end, where most of the noise is, and create 14-bit files with the effective usefulness of a measured 15 stops? The files would measure around 14 stops like other cameras, but the usable DR would be a stop or two greater.

Sony cameras do not do true 14-bit files, do they? Yet they still kill the Canon cameras at low ISO for exposure latitude. 

Just some thoughts from someone who knows far less than everyone else in this thread about sensor tech.
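One relevant piece of arithmetic here: in a linear raw file, the brightest stop consumes half of all code values and each stop down gets half again, so the deepest measured stops hold almost no tonal information regardless of sensor quality. A quick sketch (pure arithmetic, no camera specifics assumed):

```python
def codes_per_stop(bits, stops):
    """Code values available in each stop of a linear raw file, top stop first."""
    total = 1 << bits
    return [total >> (s + 1) for s in range(stops)]

print(codes_per_stop(14, 14))
# Top stop: 8192 codes; the 14th stop down: a single code -- which is why the
# bottom stops are quantization- and noise-limited even when they are "measured".
```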


----------



## jeffa4444 (Jan 13, 2016)

jrista said:


> bdunbar79 said:
> 
> 
> > jrista said:
> ...


Speaking from a cinematography point of view, where we get 14 stops of DR from the Alexa, preserving highlights is far more important, because clipped highlights are simply unrecoverable in any shape. Getting noise and even muted color in the shadows is not too dissimilar to the human eye, which sees less than modern CMOS sensors can anyway. Different artificial light sources such as street lighting also change the look of colors; dark reds, for instance, can first appear brown until we adjust to what we're looking at. 
In astrophotography I can see your desire to preserve both ends of the spectrum, but in general photography we also need to preserve a sense of realism, and the fact is the human eye can adjust across 20 stops of DR but cannot see 20 stops at any one time. 
14-stop cameras still need to use filters to balance exposure, and so would a 15-stop camera; certainly not to the same degree as a 12-stop camera, but it's not uncommon to be using filters for a three- to four-stop difference, and generally this is to reduce overexposure of highlights.


----------



## LetTheRightLensIn (Jan 13, 2016)

heptagon said:


> LetTheRightLensIn said:
> 
> 
> > I'm not quite sure the spec but I think Dolby Cinema Laser projectors can display at least 14 stops of DR (I see some claims of 21, but that might be the goal for the next version). LG and a few others have just released the first HDR displays. Hard to get exact specs but I think they are maybe 10-14 stop range.
> ...



static


----------



## LetTheRightLensIn (Jan 13, 2016)

Sporgon said:


> jrista said:
> 
> 
> > Sporgon said:
> ...



But it can, in essence, and you are still thinking about it wrong. You are free to expose so as not to clip the highlights without then ending up with no shadow detail. You set the exposure as needed to avoid clipping, but then with, say, Exmor vs. current Canon, you can still make use of a few more stops in the darkest parts and have a real usable signal there.


----------



## Jack Douglas (Jan 13, 2016)

jrista said:


> bdunbar79 said:
> 
> 
> > jrista said:
> ...



I don't detect anything in this comment that is not on the mark. I guess what it comes down to is how each of us individually relates to shadows that aren't as defined as some would like. 

It reminds me of a friend's comment that my "black" bird was overexposed because it showed hints of gray. It was the Pileated woodpecker, which I now observe daily, often up close; in real life it is grayish (more definition in the black) or black, depending on the quality of the light. 

If I choose to lift the shadows then I have more detail, but the bird may not really impress the public, depending on their foreknowledge and personal taste. As others have said, I think DR has been presented as critically important when it's just important. I'll gladly take whatever more they can deliver, but it's not going to faze me too much.

So what should be done with this sample (excluding cleaning the beak!) - trash it, raise the shadows or??

Jack


----------



## 3kramd5 (Jan 13, 2016)

Jack Douglas said:


> So what should be done with this sample (excluding cleaning the beak!) - trash it, raise the shadows or??
> 
> Jack




Keeper. 

I'd perhaps change the aspect ratio (3:2, to lose some of the black at the left-hand side at the expense of driving the subject towards the center of the frame), and I'd probably brighten the iris slightly. Regardless, fantastic as is. I don't need to see the right-hand side of the frame lifted; it's dramatic this way.


----------



## jrista (Jan 13, 2016)

3kramd5 said:


> Jack Douglas said:
> 
> 
> > So what should be done with this sample (excluding cleaning the beak!) - trash it, raise the shadows or??
> ...



I agree here. Bit too much negative black space around the bird, I'd crop a little of that out. I'd do the same with the iris. I might also brighten the dark part of the beak just a bit to improve contrast along the upper edge with what's behind it. 

Jack kind of hit the nail on the head though. I've received similar comments about many of my bird photos...that they look too overexposed on darker feathers because those feathers "should be black". Chickadees are probably the most common case. Chickadees have "black" feathers...but if you spend any amount of time watching them, you'll quickly learn that those feathers are not black...they are dark gray. They usually end up getting crushed to black because Chickadees also have very light grey and nearly white feathers, and the two side by side results in a very high contrast ratio...making the scene a high dynamic range scene. Depending on the angle of the light, that may fit within 11 stops (i.e. light directly behind you) or it might expand to 12, 14, or more stops (light over your shoulder or off to the side a bit more). Once that contrast kicks up, the noise in the black feathers when they are increased to that proper dark gray level ticks up significantly with a Canon DSLR (and with my 5D III, I get a bit of the salt and pepper speckled noise as well...hate that.) 

I compensate for the issue by usually shooting birds with the sun right behind me, just over my left or right shoulder. That balances out the dynamic range and gives me better illumination on the darker feathers...but it's not ideal. Those tend to make for more bland compositions. I like birds in action, at a bit of a higher angle (but not too high), but that is usually where I run into DR limitations and higher noise in darker feathers.


----------



## PureClassA (Jan 13, 2016)

StudentOfLight said:


> 3kramd5 said:
> 
> 
> > Lee Jay said:
> ...



I heard the Empire used Canon FD glass back in the day to focus the Death Star's laser beams...


----------



## neuroanatomist (Jan 13, 2016)

3kramd5 said:


> Lee Jay said:
> 
> 
> > fentiger said:
> ...



Haven't you heard? The Empire recently upgraded to a different brand that provides substantially more *D*estructive *R*adiance.


----------



## Jack Douglas (Jan 13, 2016)

3kramd5 said:


> Jack Douglas said:
> 
> 
> > So what should be done with this sample (excluding cleaning the beak!) - trash it, raise the shadows or??
> ...



Actually, I really just dropped the shot in to see if it would provoke more thoughts regarding the acute need for DR; it was not my intention to sidetrack the thread. I never sensed I needed it, but that's in a more artistic context, not rendering the bird for inclusion in Stokes. Is that part of the debate, artistic vs. accurate rendering?

Jack



----------



## jrista (Jan 13, 2016)

Jack Douglas said:


> 3kramd5 said:
> 
> 
> > Jack Douglas said:
> ...



More DR will never limit you artistically. Less DR might, however.


----------



## 3kramd5 (Jan 14, 2016)

Jack Douglas said:


> 3kramd5 said:
> 
> 
> > Jack Douglas said:
> ...



Well, it really is up to you. Did you want to show detail throughout the shadows? If so, you were likely DR-limited to some extent, although there is detail in the RHS of the frame and it doesn't appear to be near clipping anywhere (e.g. ETTR would have helped). From the perspective of an outsider, I like it as is, but since I don't know your goals I would be remiss to tell you "you have plenty of DR."

I don't think there is necessarily a debate between artistic and accurate rendering (and the desire for the latter is overblown outside of journalism anyway, else nobody would want smooth-bokeh thin-DOF portraiture or motion-blur waterscapes, etc). All that matters is that you can show what you want to show, IMO.


----------



## Jack Douglas (Jan 14, 2016)

3kramd5,

Every comment helps in the process of becoming a better photographer for me personally. I'm all ears and always thankful for criticism and advice. On that shot I dropped the exposure on purpose to get the effect that you see. If I had more expertise and software I'd probably be tweaking it more. Although one should please oneself it's good for the ego to know others like what you're doing. 

A few months back a comment by jrista really encouraged me and caused me to look back and reflect on the improvements I've made. The more I read on CR the more I can adjust my shooting. Sure is a fun hobby.

Jack


----------



## 3kramd5 (Jan 14, 2016)

Jack Douglas said:


> On that shot I dropped the exposure on purpose to get the effect that you see.
> 
> Jack



Then... bravo


----------



## Diltiazem (Jan 14, 2016)

Jack Douglas said:


> jrista said:
> 
> 
> > bdunbar79 said:
> ...



Hi Jack
This picture looks great as it is. But if it were me, I would just crop a bit from the sides to get a 3:2 width/height ratio. I was also thinking that the top of the bird's head should have slightly more definition, along with slightly more texture in the background tree. Then I downloaded the picture, and the image appeared a little brighter, which gave me the definition I wanted. So there is a difference between seeing a picture in a browser and seeing it on your own monitor. Just my experience and opinion. No more right than anyone else's. 

While we are talking about DR, let me tell you my personal experience. I have been a Canon shooter. When the D800 and D600 were released, lots of people were talking about how good the Nikon DR was. I always wanted to have a second system, so I took the opportunity and got a D600 (14 stops of DR). I compared it with my 5DIII and 6D. Clearly, shadows with the D600 were much cleaner. It was exciting. For 2-3 months I kept shooting with the D600 and the Canons side by side. Soon I realized that I was struggling to find a scene where I needed to lift shadows more than I could with my Canons with acceptable results. It was as if I was creating scenes just to justify the D600's shadow-lifting abilities. My shooting style and preferences changed, and I realized that I was not a happy shooter anymore. The photographer I had changed into was not me. So I stopped doing comparisons and stopped looking for shadow-lifting opportunities. I went back to my 5DIII as my primary camera (nearly 99% of my shots) and to my shooting style. I only use the D600 when I feel like using Nikon's fabulous 14-24/2.8 lens, not to lift shadows. 
But that's my experience only. I understand that other people may have different needs or tastes, so having cleaner shadows can't be a bad thing. As jrista said, it would give you more artistic leverage if you were so inclined.


----------



## Jack Douglas (Jan 14, 2016)

Diltiazem, thank you for that interesting commentary (always appreciate positive critical feedback). I believe you are bang on. There are aspects of the technology that definitely are more critical than others. We all are enjoying amazing features that allow us, if we really concentrate and challenge our capability, to produce very impressive photos (for me, at least relative to the average person who's not into photography). 

So, speaking for myself, I get drawn into these discussions because I'm learning an awful lot (my ignorance from 3 years ago is embarrassing; present ignorance too!). Then, if I'm not careful, I get caught up in the hype - in this case, DR.

No question more DR will help, but I'm always on the edge relative to natural lighting and would greatly appreciate that extra f-stop or shutter-speed increase without stepping into ISOs that give me grain. For my taste, with the 6D, I don't like going above ISO 1250, given that I often crop 50-60%. With the 1D4, ISO 640 pleased me, but 800 was on the edge. 

Others have first-hand disappointments relative to DR, but mine are more ISO-related.

Relative to the woodpecker shot, it's just hitting me. I've been converting to 16:9 because it fills the monitor. Often I've sensed that did not work advantageously for the composition, especially since I've framed at 3:2, so I need to rethink that. I'm still working myself out of the idiosyncrasy of feeling compelled to maximize the size of the subject, which is often a small bird (jrista has helped there). The Freeman book, The Photographer's Eye, was a very helpful purchase, thanks to other great CR contributors. Anyone else out there new to photography - that's a great book!

Jack


----------



## hubie (Jan 14, 2016)

So that means:

1/8000 s
1/4000 s
1/2000 s
1/1000 s
1/500 s
1/250
1/125
1/60
1/30
1/15
1/8
1/4
1/2
1
2
4

So when I shoot into the bright sun at ISO 100, I could sit at 1/60, the sun still wouldn't exceed the sensor's capacity (if 1/8000 is short enough), and I could see features in the image that need exposures of ~4 s to be visible?
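
As a rough sanity check on the ladder above (a quick Python sketch; the 1/8000 s and 4 s endpoints are taken straight from the list):

```python
import math

# Stops spanned by the shutter-speed ladder above: each doubling of
# exposure time from 1/8000 s up to 4 s is one stop.
shortest, longest = 1 / 8000, 4.0
stops = math.log2(longest / shortest)
print(f"{stops:.2f} stops")  # ~14.97, i.e. roughly 15 stops
```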

I don't believe it; even setting aside that it would be a horror show considering the noise in the images, I doubt that's true.
A hoax, definitely. Still, I'd like to be proven wrong by Canon.


----------



## 3kramd5 (Jan 14, 2016)

hubie said:


> he sun still wouldn't exceed the sensor's capacity (*if 1/8000 is short enough*)



It's not.


----------



## Lee Jay (Jan 14, 2016)

3kramd5 said:


> hubie said:
> 
> 
> > he sun still wouldn't exceed the sensor's capacity (*if 1/8000 is short enough*)
> ...



That depends on sun elevation, atmospheric transparency, and f-stop.


----------



## jhpeterson (Jan 14, 2016)

Gnocchi said:


> jhpeterson said:
> 
> 
> > dilbert said:
> ...


I did and I'll post in more detail on the thread: http://www.canonrumors.com/forum/index.php?topic=26630.msg563135#msg563135


----------



## Maiaibing (Jan 14, 2016)

PureClassA said:


> I heard the Empire used Canon FD glass back in the day to focus the Death Star's laser beams...



The Dark Side uses Nikon - Why do you think Nikon is called the "Dark Side" in the first place?!? ;D


----------



## Jack Douglas (Jan 14, 2016)

Photographing the sun - yesterday my wife said, "Get that photo, look, you can clearly see the moon." It did look like the moon, except it was perfectly full when it should have been a sliver, and it kept fading in and out and didn't have moony characteristics. So out I went to get the "moon" in a good location for a landscape shot, with some definition, and lo and behold it was totally blown unless I seriously underexposed. 

So, yes, I now have finally, definitively, demonstrated for myself a strong need for more DR! I want more than 15 stops!! And I refuse to be happy until I get it!!! Or else (reader, supply their choice of retributive/destructive/indolent behaviour involving camera brands) !!!!!

Jack


----------



## tpatana (Jan 14, 2016)

With the modulo-sensor, in theory you could have unlimited DR: just keep going until each pixel has enough data for noise-free signal-level information.


----------



## scyrene (Jan 15, 2016)

jrista said:


> scyrene said:
> 
> 
> > jrista said:
> ...



Black and white birds in direct sun are notoriously tricky to expose properly, and I absolutely agree that more processing latitude would help us with these especially. But it is nonetheless possible to photograph these birds with current Canon sensor technology and get good results.

Actually, in this example I'd underexposed a little by accident, but I don't think you could tell in the finished image one way or the other. Perhaps I like the dark parts darker than some others. You can have a little more noise in these dark feathers, I'd say, as it can be hard to tell noise apart from feather texture. But again, not having to do anything special to retain highlight and shadow detail would be most welcome. I just don't think it's as nightmarish at present as some might infer from these discussions.

Edit: I've included a second processed version, with brighter shadows for those who prefer them. I don't think the noise is really noticeable, and certainly not objectionable.


----------



## scyrene (Jan 15, 2016)

Jack Douglas said:


> jrista said:
> 
> 
> > bdunbar79 said:
> ...



That's a nice shot, I don't think it needs much doing (for me, just a little extra contrast/brightening the highlights). It's chiaroscuro, which is all too rare in a lot of current photography, despite often having the most visual impact.


----------



## PureClassA (Jan 15, 2016)

Maiaibing said:


> PureClassA said:
> 
> 
> > I heard the Empire used Canon FD glass back in the day to focus the Death Star's laser beams...
> ...



Vader's Lightsaber had "VR" on it. Luke's said "IS"


----------



## Don Haines (Jan 15, 2016)

PureClassA said:


> Maiaibing said:
> 
> 
> > PureClassA said:
> ...


Yeah, but how many people do you see at ComicCon dressed up as Luke Skywalker and how many do you see as Darth Vader?


----------



## 3kramd5 (Jan 16, 2016)

Lee Jay said:


> 3kramd5 said:
> 
> 
> > hubie said:
> ...



Well sure, obviously if one can continue stopping down past common limits, one could shoot the sun at a full second.

Here are some benign conditions: early morning, sun partially occluded, 1/8000sec exposure time, f/40, ISO 100. The sun is blown out.


----------



## Lee Jay (Jan 16, 2016)

Here's a solar eclipse, 1/4000th, f/16, ISO 100, the sun is not blown out. I could show you images of the sun just coming up over the horizon with much more exposure and a sun that isn't blown out.


----------



## 3kramd5 (Jan 16, 2016)

Lee Jay said:


> Here's a solar eclipse, 1/4000th, f/16, ISO 100, the sun is not blown out. I could show you images of the sun just coming up over the horizon with much more exposure and a sun that isn't blown out.



No filtering?

Pretty crazy that atmospheric conditions could account for that much disparity (2-1/2 stops or so).


----------



## tpatana (Jan 16, 2016)

Moon looks underexposed IMHO.


----------



## Don Haines (Jan 16, 2016)

3kramd5 said:


> Lee Jay said:
> 
> 
> > Here's a solar eclipse, 1/4000th, f/16, ISO 100, the sun is not blown out. I could show you images of the sun just coming up over the horizon with much more exposure and a sun that isn't blown out.
> ...


When the sun is low there is a LOT of attenuation through the atmosphere.

At 15 degrees up, the magnitude is -26.28
at 10, it's -26.09
at 8, it's -25.89
at 6, it's -25.63
at 4, it's -25.17
at 3, it's -24.81
at 2, it's -24.25
at 1, it's -23.35

The drop from 15 degrees up to just above the horizon is nearly 3 magnitudes, a factor of roughly 15, or about 4 stops.....
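
The magnitude-to-stops conversion behind that figure can be sketched like this (a Python sketch; the elevation/magnitude pairs are the ones listed above, and the helper name is my own):

```python
import math

def mag_diff_to_stops(m1, m2):
    # One astronomical magnitude is a flux factor of 100**(1/5) ~ 2.512,
    # so convert the magnitude difference to a flux ratio, then to stops.
    flux_ratio = 10 ** (0.4 * abs(m1 - m2))
    return math.log2(flux_ratio)

# Sun at 15 degrees elevation (-26.28) vs 1 degree (-23.35)
print(f"{mag_diff_to_stops(-26.28, -23.35):.1f} stops")  # ~3.9 stops
```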


----------



## Don Haines (Jan 16, 2016)

tpatana said:


> Moon looks underexposured IMHO.


and where is the shadow detail?


----------



## tron (Jan 16, 2016)

tpatana said:


> Moon looks underexposured IMHO.


 ;D ;D ;D ;D ;D ;D ;D ;D ;D ;D ;D


----------



## Lee Jay (Jan 16, 2016)

Don Haines said:


> tpatana said:
> 
> 
> > Moon looks underexposured IMHO.
> ...



This is the one base-ISO shot I've taken where there isn't enough dynamic range for all subjects. However, if you do the math, you'd need close to 30 stops of DR to get it all so the 1-2 extra you get from a Sony sensor would be utterly useless.


----------



## Don Haines (Jan 16, 2016)

Lee Jay said:


> Don Haines said:
> 
> 
> > tpatana said:
> ...


This is just my personal observation, but most of the time 12 stops is fine.....yeah, a couple more stops would be better, and I certainly would not complain if Canon delivered, but I can live with 12. That said, I could live happier with 15 stops......


----------



## K (Jan 17, 2016)

I can hear it already...Nikon fanboys saying "you have to spend $6,500 to do a 5-stop push"


----------



## jrista (Jan 17, 2016)

K said:


> I can hear it already...Nikon fanboys saying "you have to spend $6,500 to do a 5-stop push"



Unless the 5D IV comes out with similar DR.


----------



## Don Haines (Jan 17, 2016)

jrista said:


> K said:
> 
> 
> > I can hear it already...Nikon fanboys saying "you have to spend $6,500 to do a 5-stop push"
> ...


Or the EOS-M


----------



## jrista (Jan 17, 2016)

dilbert said:


> jrista said:
> 
> 
> > K said:
> ...



Yes, CR_*1*_ rumored. The 5Ds is NOT due for replacement...the 5D III is long overdue for replacement.


----------



## jrista (Jan 17, 2016)

Don Haines said:


> jrista said:
> 
> 
> > K said:
> ...



Not as something that would directly compete with a Nikon D810, though. I'm not terribly impressed by, nor excited about, Canon's mirrorless offerings...they would really have to pack one full of features, including a high-DR sensor, to even move the needle as far as I am concerned. Canon is much farther behind on the mirrorless front than they are on the sensor front.

The 5D IV is overdue, and if there is any camera out there people are looking forward to for having a big jump in IQ, especially in comparison to Nikon cameras, it's the 5D IV.


----------



## tron (Jan 17, 2016)

I do not believe that it will have 15 stops of DR. I will be satisfied, though, if it comes very close to 14.


----------



## PureClassA (Jan 17, 2016)

If they can deliver ISO 1600 noise levels at ISO 3200 and/or 6400, that would be a good win. When I use the 1DX, I'm never below ISO 1600, and 75%+ of my shots are at ISO 3200-6400.



tron said:


> I do not believe that it will have 15 stops of DR. I will be satisfied though if it will come very close to 14.


----------



## tron (Jan 18, 2016)

PureClassA said:


> If they can generate the ISO1600 noise levels at ISO 3200 and/or 6400, that would be a good win. When I use the 1DX, I'm never below ISO 1600 and 75%+ as ISO 3200 - 6400.
> 
> 
> 
> ...


OK, that's high-ISO performance, but I am interested in that too. I use my 5D3 for landscape astrophotography and I have to use it up to ISO 10000. So, I would welcome a serious improvement in high ISO too.... 
Also, the fact that the 1DX II will be 22MP will make it look like a supercharged 5D3!


----------



## Sporgon (Jan 18, 2016)

jrista said:


> Don Haines said:
> 
> 
> > jrista said:
> ...



Have you tried the M3, Jon? The overall 'IQ' of the 24 MP DIGIC 6 driven sensor is superb, so much so I'm sorely tempted myself, though I don't like the mirrorless configuration with no built-in viewfinder.


----------



## JMZawodny (Jan 18, 2016)

tron said:


> PureClassA said:
> 
> 
> > If they can generate the ISO1600 noise levels at ISO 3200 and/or 6400, that would be a good win. When I use the 1DX, I'm never below ISO 1600 and 75%+ as ISO 3200 - 6400.
> ...



You should always shoot Astro at the ISO setting that provides the gain to digitize 1 electron as one count. Typically that ISO is near 800. Anything less and you begin to lose information for the sake of increased dynamic range. More than that and you lose dynamic range. Shooting at 10000 simply reduces the dynamic range with absolutely nothing in return. Your ISO 10000 histograms very likely look like a comb.


----------



## PhotographyFirst (Jan 18, 2016)

JMZawodny said:


> tron said:
> 
> 
> > PureClassA said:
> ...


Strange. I don't know of anyone else who shoots astro landscapes at ISO 800 unless they are using a star tracker and blending sky and ground exposures. 

Are you saying shoot at ISO 800, f/2.8, and 20-30 seconds, then push the exposure in post? Is that the exact same as shooting at a native ISO of 6400 or greater?


----------



## JMZawodny (Jan 18, 2016)

PhotographyFirst said:


> JMZawodny said:
> 
> 
> > tron said:
> ...



Yup, my bad! I somehow missed the "landscape" part of that. My comment referred only to pure astrophotography where the image is used as a straight linearly scaled image without any enhancements (i.e., straight RAW from the sensor). Obviously, if landscape is included then other criteria come into consideration.


----------



## rfdesigner (Jan 18, 2016)

JMZawodny said:


> You should always shoot Astro at the ISO setting that provides the gain to digitize 1 electron as one count. Typically that ISO is near 800. Anything less and you begin to lose information for the sake of increased dynamic range. More than that and you lose dynamic range. Shooting at 10000 simply reduces the dynamic range with absolutely nothing in return. Your ISO 10000 histograms very likely look like a comb.



If I may I'd like to disagree..

I've played astrophotography for a long while, built my own drive system, built a CCD camera, currently use a 383L+ and am in a long winded process of building an observatory.

You do not need to aim for 1 ADU/e-, because the readout noise exceeds 1e-. Most DSLRs' readout noise is 2.5e- or above... so the need to ensure you don't miss any steps is removed by the spreading effect of the noise.

You do need to aim for the lowest ISO where the readout noise is more or less unchanged from very high ISO. This usually works out to 400-1600 on Canon cameras, so in practical terms we agree, but the reasoning is different. Nikons can go as low as ISO 200 in this regard, but Nikon has a history of meddling with their RAW files, either actively hunting out hot pixels and so deleting stars, or clipping blacks with the D800(e?).

For reference my dedicated astro CCD has a conversion factor of about 2.2ADU/e

I only bring this up to prevent a misunderstanding from propagating too far and wide.


----------



## Lee Jay (Jan 18, 2016)

I thought the choice of ISO for astro subs was usually set by the longest tracking time your mount can sustain and/or the sky glow of your site.

http://www.samirkharusi.net/sub-exposures.html

"As long as we expose long enough for the entire skyfog mountain to be entirely detached from the camera's Read Noise we can be reasonably confident that stacking N exposures each T-minutes long will be closely equivalent to shooting one very long exposure that is NT-minutes long, yielding similar SNRs in the final processed images. "

http://www.cloudynights.com/topic/351257-stacking-efficiency-sub-length-skyfog-histogram/


----------



## jrista (Jan 18, 2016)

rfdesigner said:


> JMZawodny said:
> 
> 
> > You should always shoot Astro at the ISO setting that provides the gain to digitize 1 electron as one count. Typically that ISO is near 800. Anything less and you begin to lose information for the sake of increased dynamic range. More than that and you lose dynamic range. Shooting at 10000 simply reduces the dynamic range with absolutely nothing in return. Your ISO 10000 histograms very likely look like a comb.
> ...



I agree with this!

Unity gain is more a myth than anything. Most of the time we don't know enough about what the actual unity gain is for it to matter, and furthermore, we usually cannot actually use unity gain. Additionally, while sampling every electron as finely as you can tends to be better, it is still possible to image at lower ISO/gain settings where multiple electrons are required per ADU (i.e. ISO 400 or lower on my 5D III, which I have imaged at on many occasions). The read noise and quantization error are just too small to worry about in most cases.

The actual quantization noise that you get in any pixel is also actually quite small. It's around SQRT(RN^2 + (RN-SQRT(RN^2 + QN^2))^2). Quantization noise is +/-1 bit, pretty much regardless, so if you have 3e- RN (common for most DSLRs) you would have ~3.004e- noise with the quantization error (and we can only really measure it if we actually have proper low-level specs on the sensor, including input referred and output referred noise at the ADC). The quantization error is so small as to be meaningless in the grand scheme of things. Read noise itself is usually too small to matter in most cases...the exception being LRGB imaging at an EXCEPTIONALLY dark site, or narrow band imaging with a very narrow band pass (5nm or smaller). In both cases, getting significant background skyfog levels can be difficult, so you may not be able to sufficiently swamp read noise with background sky level. 

FAR more important sources of noise in astrophotography are the noise from dark current and the noise from light pollution. Both tend to DWARF any other sources of noise with a DSLR for sure, and often with a CCD (light pollution is usually dominant with a CCD, as dark current is minimized by thermal regulation most of the time). My 5D III can have as much as 5e-/s/px dark current during the summer...which actually makes it the single most significant source of noise in the image, period.


----------



## jrista (Jan 18, 2016)

Lee Jay said:


> I thought the choice of ISO for astro subs was usually set by the longest tracking time your mount can sustain and/or the sky glow of your site.
> 
> http://www.samirkharusi.net/sub-exposures.html
> 
> ...



I have found it doesn't much matter in the end. I have imaged at ISO's from 400 through 3200 on my 5D III. While there is higher read noise at ISO 400, you are also able to gather twice the signal strength as at a higher ISO. Signal grows faster than noise, so 9.8e- RN @ ISO 400 with appropriately long subs (say 15-20 minutes) vs. 5.6e- RN @ ISO 800 with about half the sub length (7-10 minutes) vs. 3.6e- RN @ ISO 1600 with again half the sub length (3-5 minutes) usually means you have a higher SNR with the longer subs at lower ISO:

200/SQRT(200 + 9.8^2) = 11.62:1
100/SQRT(100 + 5.6^2) = 8.75:1
50/SQRT(50 + 3.6^2) = 6.3:1

If you throw in dark current, that normalizes things a bit, because read noise becomes a secondary noise factor. Throw in light pollution, and things tend to flatten out even more, as read noise becomes a distant tertiary noise factor. However, I have never actually encountered a situation where imaging with longer subs at a lower ISO gave me worse results than imaging with shorter subs at a higher ISO. Six of one, half a dozen of the other. It tends to break even in the end. On a PER-SUB basis, anyway.

The real caveat with using shorter subs is you need to stack more of them. Then read noise compounding becomes a problem...you get a constant amount of read noise in every frame, and since you need to stack MANY more ISO 1600 subs than ISO 400 subs, you end up with more total read noise in the end. However, it still usually doesn't matter if you are dark current and/or light pollution limited, as the noise from both of those is still significantly higher.
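
The per-sub comparison above can be reproduced with a minimal shot-noise-plus-read-noise model (a sketch only; the signal and read-noise figures are the ones quoted in the post, and it deliberately ignores dark current and skyfog):

```python
import math

def sub_snr(signal_e, read_noise_e):
    # Per-sub SNR with only the signal's shot noise and the read noise
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

# Signal halves as the sub length halves; read noise falls as ISO rises
for signal, rn in [(200, 9.8), (100, 5.6), (50, 3.6)]:
    print(f"{signal}e- signal, {rn}e- RN -> {sub_snr(signal, rn):.2f}:1")
```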


----------



## JMZawodny (Jan 19, 2016)

jrista said:


> Lee Jay said:
> 
> 
> > I thought the choice of ISO for astro subs was usually set by the longest tracking time your mount can sustain and/or the sky glow of your site.
> ...



I guess we'll agree to disagree on some points. Whatever works for you.

I'm a bit surprised by your variation in read noise vs. ISO. Neither my 5D nor my 5D2 behaves like that. It makes me wonder whether Canon did something different with the 5D3.

I gave up trying to use a DSLR for astro as the dark current and spectral filtering were severe issues. Given my site, I mostly use a cooled camera (KAF-8300 chip) and narrow band filters now.


----------



## Don Haines (Jan 19, 2016)

JMZawodny said:


> jrista said:
> 
> 
> > Lee Jay said:
> ...


I am nowhere near jrista, but when I moved from my 60D to a 7D2, I found a significant jump in the quality of my long night exposures.... The 60D and the 5D2 are of a similar technology age, as are the 7D2 and 5D3, so a jump in the quality of astro imaging from the 5D2 to the 5D3 would seem likely to me.


----------



## jrista (Jan 19, 2016)

JMZawodny said:


> jrista said:
> 
> 
> > Lee Jay said:
> ...



The 5D and 5D II have insane levels of dark current. I complain about my 5D III's ~5e-/s/px dark current @ 25C because it can be a real problem (a 300 second sub @ 5e-/s DC has 1500e- of dark current, which, even though it is suppressed by CDS, leaves behind almost 39e- of dark current noise!). A 5D II has several times as much dark current. That horrid red color noise in a 5D II at higher ISO is primarily from dark current (it's high enough to add several electrons' worth of noise in and of itself, even at sub-second exposure times, which at high ISO could be higher than read noise). 

So I would venture to guess that your primary problem with the 5D and 5D II was not necessarily read noise (IIRC, the 5D II actually had LESS read noise, not more). The problem was the ridiculous levels of dark current. It's still fairly ridiculous with my 5D III. It got a bit better with the 6D, which dropped to around 2.5-3e-/s @ 25C, and scaled better as you went cooler thanks to a lower doubling temp. The 7D II is the first camera Canon produced that actually had truly competitive dark current levels relative to other brand sensors. It's around 0.2-0.3e-/s @ 25C, and also has a smaller doubling temp. 

In my experience over the last year, particularly this summer and fall, I have learned that dark current is often the single most significant noise term when doing astrophotography with a DSLR. During winter, depending on where you are, that can change. My recent images have had sensor temps around -2C to -5C, so dark current doesn't matter (it's less than read noise with my average sub lengths). But even during winter it's rare that I get sub-zero (C) temps at the sensor; usually it's around 8-10C or higher, and most of the year it's 20C or higher.


----------



## JMZawodny (Jan 19, 2016)

jrista said:


> JMZawodny said:
> 
> 
> > jrista said:
> ...



Yes, I was never worried about read noise with the 5D or 5D2, but it was very constant vs ISO except at the high and low extremes of the ISO range. With the KAF-8300 cooled 30C below ambient, I'm no longer worried by dark current either. Now I wish that I had a darker site.


----------



## jrista (Jan 19, 2016)

JMZawodny said:


> tron said:
> 
> 
> > PureClassA said:
> ...



To address this more directly. There is always noise in the conversion. It actually is not possible to always convert 1 electron to 1 ADU because noise will usually mean you get 0 ADU, or >1 ADU. 

Technically speaking, it is better to sample every electron as finely as possible. That means you want a gain (in terms of e-/ADU) of LESS than 1. Nyquist would dictate a gain of 0.5e-/ADU, but that only really works for something like an audio signal; because of the various noise terms that play a role in digital imaging (including read noise and PRNU), and because we usually stack, a gain of 0.3e-/ADU is better, and higher is certainly not bad, especially with very faint signals. There is certainly a balancing act here...too high an ISO and you will throw away too much dynamic range...and in the case of a Canon DSLR, too low an ISO and read noise will increase to ridiculous levels (anything over 10e- and you have to start wondering why you're bothering...the 5D III has ~35e- RN @ ISO 100; square that, and you add 1225e- to your noise term when calculating SNR!!!)

There is a caveat here. Outside of the >10e- read noise scenario...read noise usually barely matters! This is because for most imagers, there are other sources of noise that are more significant. Dark current is one, which can add as much as 1500e- of signal or thereabouts, which is almost 39e- of noise. Another significant source of noise is light pollution, which can add anywhere from 200 to 2000e- of additional signal outside of a nice dark site (yellow through white zones on the Bortle scale). With minimal dark current and no light pollution, an SNR calculation might look like this (assuming 50e- of signal from space in, say, a 120 second exposure):


```
50e-/SQRT(50e- + 10e-^2) = 4.1:1
```

At high ISO, where you're sampling each and every electron better, your read noise (in relative terms) is smaller:


```
50e-/SQRT(50e- + 3e-^2) = 6.5:1
```

An improvement of almost 60%. Quite significant. However, it's never that simple. Add in dark current at, say, 10e-/s/px @ 20C (i.e. a 5D II):


```
50e-/SQRT(50e- + (10*120) + 3e-^2) = 50e-/SQRT(50e- + 1200e- + 9e-) = 1.41:1
```

That dark current totally decimated our SNR. Increasing read noise to 10e- really doesn't matter much at that point:


```
50e-/SQRT(50e- + (10*120) + 10e-^2) = 50e-/SQRT(50e- + 1200e- + 100e-) = 1.36:1
```

Less than a 4% difference here. Read noise matters even less when we factor in light pollution (say deep in a red zone, suburbia central):


```
50e-/SQRT(50e- + 1500e- + (10e-/s*120s) + 10e-^2) = 50e-/SQRT(50e- + 1500e- + 1200e- + 100e-) = 0.94:1
```

Even if you had a mere 1e- read noise, it wouldn't matter:


```
50e-/SQRT(50e- + 1500e- + (10e-/s*120s) + 1e-^2) = 50e-/SQRT(50e- + 1500e- + 1200e- + 1e-) = 0.95:1
```

It really doesn't matter if you image at ISO 400, 800, or 1600...in the end, the amount of read noise barely affects the results. Dark current and light pollution dominate by such a significant margin. That changes if you can regulate your sensor temp, and find dark skies, though:


```
50e-/SQRT(50e- + 30e- + (0.02e-/s*120s) + 3e-^2) = 5.23:1
```

Bump read noise up to 10e-:


```
50e-/SQRT(50e- + 30e- + (0.02e-/s*120s) + 10e-^2) = 3.7:1
```

Read noise becomes a more significant factor when you're imaging under pristine dark skies (i.e. 21.5mag/sq" or better; deep blue zone, gray zone, black zone) with very low dark current. Dark current itself could easily be the single most devastating noise term you may have to deal with (i.e. a dark sky...in the summer):


```
50e-/SQRT(50e- + 30e- + (5e-/s*120s) + 3e-^2) = 1.9:1
50e-/SQRT(50e- + 30e- + (10e-/s*120s) + 3e-^2) = 1.4:1
```
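
All of the worked ratios above follow from one formula; here is a small consolidated sketch (Python; the function name is my own, and the two example noise budgets are taken from the calculations above):

```python
import math

def snr(obj_e, skyfog_e, dark_rate_e_s, t_s, read_noise_e):
    # Single-sub SNR: object signal over the square root of the total
    # collected charge (object + skyfog + dark current) plus read noise squared
    total = obj_e + skyfog_e + dark_rate_e_s * t_s + read_noise_e ** 2
    return obj_e / math.sqrt(total)

# Dark site, cooled sensor (0.02 e-/s dark current), 120 s sub, 3e- read noise
print(f"{snr(50, 30, 0.02, 120, 3):.2f}:1")  # 5.23:1
# Same conditions in summer with 5 e-/s dark current
print(f"{snr(50, 30, 5, 120, 3):.2f}:1")     # 1.90:1
```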


----------



## JMZawodny (Jan 19, 2016)

jrista said:


> To address this more directly. There is always noise in the conversion. It actually is not possible to always convert 1 electron to 1 ADU because noise will usually mean you get 0 ADU, or >1 ADU.



That would depend upon what the gain of the system is, as well as the nature of the read noise. I must note that a proper signal chain must have a noise level of at least 0.5 ADU (assuming you plan to stack). You say as much when you cite Nyquist. You must agree that digitization is the final step in the process. These factors are also why placing the ADC chain on-chip improves performance so much: you can control the nature of the noise.



jrista said:


> Technically speaking, it is better to sample every electron as finely as possible.



I'm sorry, but in my quantum world measurements of electrons always yield an integer result. Sampling them more finely only serves to reduce the DR of the system.



jrista said:


> Read noise becomes a more significant factor when your imaging under pristine dark skies (i.e. 21.5mag/sq" or better, deep blue zone, gray zone, black zone) with very low dark current. Dark current itself could easily be the single most devastating noise term you may have to deal with (i.e. dark sky...in the summer):



Naturally.


----------



## jrista (Jan 19, 2016)

JMZawodny said:


> jrista said:
> 
> 
> > To address this more directly. There is always noise in the conversion. It actually is not possible to always convert 1 electron to 1 ADU because noise will usually mean you get 0 ADU, or >1 ADU.
> ...



I was speaking explicitly of unity gain, which is by definition 1e- to 1 ADU. I was trying to point out that you aren't always going to get 1 ADU, because of noise, including PRNU.



JMZawodny said:


> jrista said:
> 
> 
> > Technically speaking, it is better to sample every electron as finely as possible.
> ...



You're misunderstanding. At a gain of 0.3e-/ADU, a single electron would result in 3 to 4 ADU. That would be a more finely sampled electron vs. unity gain. At a gain of 0.15e-/ADU, a single electron would result in 6-7 ADU. That would be even more finely sampled. The quantization error becomes smaller with the finer sampling as well, as at +/-1 ADU the discrepancy is only 0.15e- rather than 0.3e-.
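
The ADU-per-electron figures in that paragraph follow directly from the gain (a trivial sketch, using the gains discussed above):

```python
# ADU recorded per electron at a given gain (in e-/ADU); a gain below 1
# spreads a single electron across several ADU steps (finer sampling).
for gain_e_per_adu in (1.0, 0.3, 0.15):
    adu = 1 / gain_e_per_adu
    print(f"gain {gain_e_per_adu} e-/ADU -> {adu:.2f} ADU per electron")
```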

Whether you use all the DR depends on what you're imaging, and how much clipped stars matter to the end result. I tend to always clip my brighter stars (and I usually image around ISO 800-1600, which samples each electron well, but not necessarily as ideally as possible), and my results are usually excellent, and very deep:

I could expose less, and avoid clipping any stars at all...but then I'm getting SIGNIFICANTLY less faint detail in each sub. Stars saturate at a much higher _*rate*_ than nebula and dust, so when you reduce exposure enough to prevent star clipping, you've usually stuffed all the faint details down well into the noise floor. That can greatly increase the number of subs required to fully reveal all those faint details. 

If push comes to shove, you can always do an HDR blend to bring out immense dynamic range. This Orion Sword image has around 18 stops, and was the result of a very meticulous linear fitting and HDR blending process in PixInsight:

This was sampled at 0.36e-/ADU, which was probably closer to ideal, resulting in great outer dust SNR, but requiring several sets of shorter subs to add in the ultra bright details around the Trap.


----------



## JMZawodny (Jan 19, 2016)

jrista said:


> JMZawodny said:
> 
> 
> > jrista said:
> ...



Yes, most impressive results to be sure. Better than what I was able to do during my Pre-CCD days at the university in Boulder back in the 80's. Yet what I am saying is that you are being most conservative in your approach to manipulating noise.


----------



## jrista (Jan 19, 2016)

JMZawodny said:


> jrista said:
> 
> 
> > JMZawodny said:
> ...



Manipulating noise at the level you are talking about only matters if you are truly limited by the noise of the camera, though. I am FAR from being that limited, even though I use a dark site. If I were still imaging under 21.6mag/sq" skies, then it MIGHT be a different story...maybe...if airglow was at a minimum, I had no inversion layer, and I was getting 15 minute color subs. At 21mag/sq" skies, I'm working under skies almost 1.8 stops brighter, and am totally skyfog limited (I have more signal from LP than from the object, so it doesn't take all that much to make SQRT(SkyFog) my most dominant source of noise).

If I was doing narrow band imaging with 3nm filters on ultra faint objects, say OU4 in OIII, then I would be more concerned about read noise, FPN, PRNU, quantization error, etc. I still wouldn't be all that concerned about any of them except maybe read noise if I was doing 3-5nm Ha imaging, as again it's not terribly difficult to get extremely high contrast subs with Ha.

Very few astrophotographers need to be that concerned about noise at this level, because it is such a minuscule amount of noise in the grand scheme of things. Light pollution at the very least will be the most significant source of noise for any DSLR or mirrorless astrophotographer, followed by dark current noise. The two together are at least a factor of ten more significant than read noise, quantization error, etc.

If you do a lot of narrow band imaging with that Atik CCD camera, then sure, you should pay attention to the read noise. Either that, or just get some seriously long sub exposures, 60-90 minutes or so, and actually get your subs skyfog limited.


----------



## JMZawodny (Jan 19, 2016)

jrista said:


> Manipulating noise at the level you are talking about only matters if you are truly limited by the noise of the camera, though. I am FAR from being that limited, even though I use a dark site. If I was still imaging under 21.6mag/sq" skies, then it MIGHT be a different story...maybe...if airglow was at a minimum and I had no inversion layer, and I was getting 15 minute color subs. At 21mag/sq" skies, I'm working under skies almost 1.8 stops brighter, and am totally skyfog limited (I have more signal from LP than from the object), so it doesn't take all that much to make SQRT(SkyFog) my most dominant source of noise.
> 
> If I was doing narrow band imaging with 3nm filters on ultra faint objects, say OU4 in OIII, then I would be more concerned about read noise, FPN, PRNU, quantization error, etc. I still wouldn't be all that concerned about any of them except maybe read noise if I was doing 3-5nm Ha imaging, as again it's not terribly difficult to get extremely high contrast subs with Ha.
> 
> ...



In my day job, I guess I have to quantify the detailed nature of noise more than most do (google "zawodny sage" if you have not already). As a result, I have a refined expectation for the performance of measurement HW and the justified expectations for a proposed approach to making a measurement. All I can say is that you are being rather conservative in your approach to making the measurement. You are certainly most practical in your methodology, but you may not be pushing the state of the art.

Let me be quite clear here. Jon, I find your technique both in data capture and post processing to be first rate. Your imagery is masterful and very pleasing to the eye. I must ask, however, what will it take to be the master 5 years from now? Do not be afraid of digitization/quantization (sp?) noise.


----------



## jrista (Jan 19, 2016)

JMZawodny said:


> jrista said:
> 
> 
> > Manipulating noise at the level you are talking about only matters if you are truly limited by the noise of the camera, though. I am FAR from being that limited, even though I use a dark site. If I was still imaging under 21.6mag/sq" skies, then it MIGHT be a different story...maybe...if airglow was at a minimum and I had no inversion layer, and I was getting 15 minute color subs. At 21mag/sq" skies, I'm working under skies almost 1.8 stops brighter, and am totally skyfog limited (I have more signal from LP than from the object), so it doesn't take all that much to make SQRT(SkyFog) my most dominant source of noise.
> ...



Oh, I'm NOT pushing the state of the art.  I have no illusions there.  But I don't really need to, either...not with the LP and dark current I have these days. I also have no good specs, as with a CCD, for my DSLRs...I don't know what the actual input referred read noise is, what the actual PRNU the pixels is, etc. so I couldn't really be all that accurate about perfectly optimizing my exposure times even if I wanted to. Things change enough throughout a night at my dark site that it would be difficult to get a truly optimal exposure time and stick with it as well...post-midnight, my dark site can improve by 0.2-0.4mag/sq" as cities go to sleep and LP fades, and the shift in the signal peak can easily render too many faint details into the noise floor. So indeed...I choose a more conservative approach. But it works for what I'm currently doing. I've been doing astrophotography for just shy of two years now (started Feb. 12, 2014), and I have results that rival well known DSLR imagers who have been doing it for a decade or more. 

(Although I'm never satisfied...heh, I'm always looking to do more, get more, etc. Last year my average dark site integration time with the DSLR was 4-5 hours, which was decent. This year I am aiming to make my minimum integration time with the DSLR 8 hours, preferably over 10 hours, and go after even more faint stuff. I'm never satisfied! )

Out of curiosity, what do you do for a living?



JMZawodny said:


> Let me be quite clear here. Jon, I find your technique both in data capture and post processing to be first rate. Your imagery is masterful and very pleasing to the eye. I must ask, however, what will it take to be the master 5 years from now? Do not be afraid of digitization/quantization (sp?) noise.



Thank you. 

I plan to get into narrow band imaging this year (if the funds pan out...and if the camera pans out, looking at the new APS-H sized KAF-16200 cameras from FLI, QHY, Moravian), and I'll be furnishing myself with a full set of Astrodon 3nm Narrow Band filters. Once that rig is going, I'll be far more concerned about minute noise factors than I am now. I've been processing narrow band images for most of this year, using data from various friends, so I am quite well versed in the processing techniques. I am also aware of how clean CCD data can be...even if it has higher read noise. I am particularly partial to QSI and FLI CCD cameras...incredibly clean noise, pure gaussian, small standard deviation...wonderful. Despite 5-7e- read noise, which is higher than the 2-3e- common with DSLRs at high ISO. The 5D III is just NOT a clean camera, even after removal of the bias signal, after cosmetic correction, after everything I can do to clean it up, it's just not all that clean. I'll be quite happy to move to CCD, narrow band, and to deal with the worries over quantization noise.


----------



## JMZawodny (Jan 19, 2016)

jrista said:


> Out of curiosity, what do you do for a living?



I am a senior research scientist at NASA (28+ years at NASA and have a PhD from Univ of Colorado Boulder, 1985). I design and develop remote sensing space flight hardware as well as develop the analysis algorithms for that HW. I am currently the project scientist for SAGE III - headed to the ISS in August of this year (hopefully) - and manage a team of over-achiever science padawans (but I'll never tell them that they know more than I ever did). In my spare time, I am a (former) coauthor of the AviStack v2 (now open source) software.

I live, breathe, and eat noise. I have turned crap into gold and cannot wait for the next adventure to begin.

Thanks for asking!


----------



## rfdesigner (Jan 19, 2016)

JMZawodny said:


> jrista said:
> 
> 
> > Out of curiosity, what do you do for a living?
> ...



I hate the transatlantic time gap... I keep missing the middle of conversations. I see nothing that jrista has said that I really disagree with.

It also sounds like we both have to concern ourselves with noise professionally, in my case almost a quarter century of radio development. The astronomy is pure hobby, but parts do occasionally spill over.

Given your background I'd be interested in fully understanding why you think 1e/ADU is important in the noisy systems we have today (I agree that in a noiseless system 1e/ADU is optimal). My understanding is that the readout noise alone will disturb the measurement sufficiently that we don't have to concern ourselves. As jrista has already stated, we normally adjust the length of shots to maximise overall performance, and that usually means skyglow > K*RN^2, where K is an arbitrary constant; I use K=2 as an absolute minimum.
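The skyglow > K*RN^2 rule translates directly into a minimum sub-exposure length once the skyfog rate is known. A minimal sketch; the skyfog rate and read noise here are hypothetical illustrative values:

```python
def min_sub_seconds(skyglow_rate, read_noise, k=2.0):
    """Shortest sub length (s) satisfying skyglow_rate * t > k * RN^2,
    i.e. the point at which skyfog shot noise swamps read noise."""
    return k * read_noise ** 2 / skyglow_rate

# Hypothetical values: 1.5 e-/s/pixel of skyfog, 3 e- read noise, K = 2.
print(min_sub_seconds(1.5, 3.0))  # 12.0 seconds
```

Note how darker skies (a lower skyfog rate) push the minimum sub length up, which is why narrowband work at dark sites calls for much longer subs.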

On the subject of 5D rn have you seen: http://www.astrosurf.com/buil/50d/test.htm Buil does seem to know his stuff, but then he started with CCDs in the 80s, so he ought to.


----------



## jrista (Jan 19, 2016)

I think it boils down to the difference between art and science in the end. 

If you're a scientist doing science with sensors and imaging, then getting the most accurate replication of the original signal possible matters. In that case, you're probably not going to be using a DSLR. You're not even going to be using an average CCD...you're going to be using a Grade 0 CCD camera that has minimal if any defects, with extremely deep cooling, probably deep depletion, and you'll effectively be counting photons. You might even be using an emCCD or something similar, where you literally count photons, and get as close as possible to replicating every signal with near-perfect accuracy.

At the other end of the spectrum is me. I'm an artist, not a scientist (although I DO love the science, that's not really why I do what I do.) I am not using my images for the purposes of analysis and discovery...I use them to share my view of the universe with the people around me, hopefully with my own artistic bent. There is certainly an inextricable scientific aspect to astrophotography...it's just the nature of the hobby. I could put a lot of extra time into gathering every single photon as efficiently as humanly possible, but it wouldn't be an efficient use of my time, not for the goals I have. It might give me marginally better results, however few would actually notice, and then, they would probably only notice if I had a comparison image produced with my normal techniques to compare with.


----------



## JMZawodny (Jan 19, 2016)

rfdesigner said:


> I hate the transatlantic time gap.. I keep missing the middle of conversations. I see nothing that Jrista has said that I really disagree with.



When we were doing the AviStack development our team of three had the lead in Germany, me in the US and the third fellow in Australia. Somehow we managed to get everyone together for regular discussions. It was great fun.



rfdesigner said:


> It also sounds like we both have to concern ourselves with noise professionally, in my case almost a quarter century of radio development. The astronomy is pure hobby, but parts do occasionally spill over.



Radio is quite a different beast from simply collecting and estimating the number of photons that fall into a bucket. Imagers do not have to worry about phase jitter/recovery, bandwidth, or proper temporal sampling of the signal. These days RF work is a mix of analog front ends and digital (de)modulation. With some of the new direct synthesis chips out there, the analog bits may soon disappear altogether.



rfdesigner said:


> Given your background I'd be interested in fully understanding why you think 1e/ADU is important in the noisy systems we have today (I agree in a noiseless system 1e/ADU is optimal). My understanding is that the readout noise alone will disturb the measurement sufficiently not to have to concern ourselves, as Jrista has already stated we normally adjust length of shots to maximise overall performance and that usually means skyglow>K*RN^2, where K is an arbitrary constant, I use K=2 as an absolute minimum.



A noiseless system is never optimal unless the detection system is analog (continuous, not quantized). Digital systems require noise to work properly (specifically when combining samples or doing any sort of statistical manipulations). While we can measure and characterize the various sources of noise, in the end noise is noise. Knowing their origins does allow us to take corrective action and/or design the system properly. At the point of digitization, noise should never be less than 0.5ADU (1-sigma). Lower than that, quantization artifacts emerge and can be quite annoying.

Maximizing performance is very subjective and necessarily implies a pre-selected course of action for processing the signal to extract the desired information. As I stated originally, 1e/ADU maximizes both the DR and the amount of information (available levels of signal if you will). Alter that gain and you will give up one or the other. While it has been a while since I've gone shopping for imager HW, back when I bought my FLI camera the standard was to build HW that operated very near 1e/ADU. I believe that is still standard practice. So, don't just take my word on this approach.
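The 0.5 ADU point can be demonstrated with a toy dithering simulation: averaging many quantized samples recovers a sub-ADU signal only if noise randomizes the rounding. All numbers below are illustrative:

```python
import random

def averaged_quantized(signal, noise_sigma, n=100_000, seed=1):
    """Average n samples of `signal` plus Gaussian noise, each rounded
    to an integer ADU. With noise_sigma >= ~0.5 ADU the average recovers
    the fractional part of the signal; with no noise it cannot."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        total += round(signal + rng.gauss(0.0, noise_sigma))
    return total / n

true_signal = 10.3  # ADU, deliberately non-integer
print(averaged_quantized(true_signal, 0.0))  # no dither: stuck at 10.0
print(averaged_quantized(true_signal, 0.7))  # dithered: recovers ~10.3
```

This is the sense in which digital systems "require noise to work properly" when combining samples: the noise linearizes the quantizer on average.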

Depending upon what your measurement requirements are, it is certainly reasonable to set the gain lower so that it takes more electrons per ADU. Although there are practical limits relating to pixel size and the associated well capacity. The SAGE III detector (developed in the mid 1990's) has a gain of 75e/ADU, but we have a reasonably bright source and a high SNR requirement.



rfdesigner said:


> On the subject of 5D rn have you seen: http://www.astrosurf.com/buil/50d/test.htm Buil does seem to know his stuff, but then he started with CCDs in the 80s, so he ought to.



Interesting, the 5D and 5D2 numbers look very familiar. I can't find my original characterization data and results, so I suspect that my recollection was that the read noise was a relatively constant ADU - not electrons. His data do show that. It is a shame that he did not report the full well capacity at ISO 100. Characterizing these sensors is not difficult. It takes only a handful of exposures and some simple code. I am a bit surprised that he did not specify the values for the R, G, and B channels separately. Perhaps he does, but I did not dig into the details on his site.

I have to run and will edit this later if needed. Cheers!


----------



## jrista (Jan 19, 2016)

JMZ, I do have a question for you. You note that 1e-/ADU maximizes both DR and signal simultaneously. However, does that not imply that your output buffer for amplification is also limited to the same capacity as the pixels themselves? What about more innovative sensor and pixel architectures that have a larger output buffer? You could theoretically get at least as much dynamic range (possibly more), with better sampling of each electron in the pixel, if you could amplify the entire pixel range (say up to 65ke- for a largish-pixel sensor with a 16-bit ADC) with an output buffer of 3x the capacity of the pixels themselves, at 0.33e-/ADU. I've read some papers about prototype sensors (or just patentable ideas) that cover such things. Some existing CCD cameras achieve this to a degree: at 1x1 binning, the extra output buffer capacity can be used for better gain/DR characteristics (although for some reason it rarely seems ideal, so the DR gain is not as significant as in some of the prototypical ideas out there; it also isn't possible when binning, as the extra output buffer capacity is needed to store the combined charges of the binned pixels).


----------



## JMZawodny (Jan 19, 2016)

jrista said:


> JMZ, I do have a question for you. You note that 1e-/ADU maximizes both DR and signal simultaneously. However, does that not imply that your output buffer for amplification is also limited to the same capacity as the pixels themselves?



Typically the output buffer is reasonably matched to the pixel full well. There are always exceptions to this. We did a custom sensor development quite some time ago (15 years or so) where we needed tiny pixels that also needed to have a very large full well. It was a linear array, not 2D, so we simply made the pixels out of photo-diodes that drained to huge, un-illuminated CCD pixels. Performance was excellent, but we had to deal with some non-linearities arising from the FET capacitance. Signal chain design is very important and compromises must be made when you alter specifications (such as the size of the output buffer).



jrista said:


> What about more innovative sensor and pixel architectures that have a larger output buffer. You could theoretically get at least just as much dynamic range, (but possibly more) with better sampling of each electron in the pixel, if you could amplify the entire pixel range (say up to 65ke- for a largish pixel sensor with a 16-bit ADC) with an output buffer 3x as large a capacity as the pixels themselves, with 0.33e-/ADU. I've read some papers about prototype sensors (or just patentable ideas) that cover such things.



I'm still baffled by the perceived need to crank the gain to 1e/3ADU. It seems to me that one should focus on managing the undesirable signal/noise source that is driving you to want each electron to be 3 counts. The current state of the art on-chip digitization methods have no issue reliably counting electrons with reasonable speed. There most certainly are applications where a very large effective full well would improve performance, but it is not immediately clear to me whether such a system would also perform well in signal limited applications as is found in some (most?) aspects of photography.



jrista said:


> There are existing CCD cameras that are able to achieve this to some degree as well even, as at 1x1 binning the extra output buffer capacity can be used to achieve better gain/DR characteristics (although for some reason it rarely seems ideal, so DR gain is not as significant as some of the prototypical ideas out there; not possible with binning, as the extra output buffer capacity is needed to store the combined charges of the binned pixels.)



We are getting to the point where there will be little advantage to analog binning vs digital summation.


----------



## privatebydesign (Jan 19, 2016)

Canon have very clearly, and exceptionally openly, demonstrated exactly how they measure 15 stops of DR from the C300 II, I see no reason why they would change their methodology for another camera. 

https://www.cinema5d.com/canon-measured-15-stops-dynamic-range-c300-mark-ii/

Basically they removed the entirely subjective idea of 'how much noise is too much' and counted everything above background noise. So those expecting 15 stops of noiseless DR are going to be disappointed. As always I don't believe that is the entire picture, I believe there will be good improvements in the actual usable RAW data and I am pretty confident I will be happy with the malleability and workability of those RAW files.

I also expect to piss myself laughing at the arguments, lectures and condescending technical micro deconstructions the absence of a 'traditional 15 stops' will create.

What I would point out is that many were very disappointed on the release of the 5DS/R on the expectation of 'better' sensor performance and those expectations not being realised on paper spec sheets, however when people actually started working the files they all seemed exceptionally happy with the real world output. I expect the exact same thing with the 1DX MkII, people will argue it is crap and the end of Canon, until the next distraction comes along, meanwhile those that actually use it will be surprised at how much better than its predecessors it actually is.


----------



## JMZawodny (Jan 20, 2016)

rfdesigner said:


> On the subject of 5D rn have you seen: http://www.astrosurf.com/buil/50d/test.htm Buil does seem to know his stuff, but then he started with CCDs in the 80s, so he ought to.



I dug up the characterization I did of my 5D back in 2007. Since this is now slightly off topic I'll keep this brief. Compared to Buil's results, I measured (in the green pixels) a "full well" of 15,380 at ISO 400, a read noise of 6.7e, gain of 4.32e/ADU. I put the full well in quotes since the actual full well is used only at ISO 100. Gain was slightly lower in the blue pixels and less than half in the red. Read noise in the red and blue pixels was virtually identical to that in the green. It also varied with ISO in a similar manner. At ISO 100, I calculated a DR of 11.7 stops (3330) with a peak SNR of 260. At ISO 400 the DR was still a healthy 2300 although SNR dropped by half. I apparently never did characterize my 5D2, but I did do my 300D.
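As a cross-check, the DR ratios quoted above convert to stops as log2(ratio). A quick sketch using the figures from the post:

```python
import math

def dr_stops(ratio):
    """Dynamic range in stops (powers of two) for a given linear ratio."""
    return math.log2(ratio)

# Ratios quoted in the post: 3330 at ISO 100, 2300 at ISO 400.
print(f"ISO 100: {dr_stops(3330):.1f} stops")  # ~11.7, matching the post
print(f"ISO 400: {dr_stops(2300):.1f} stops")  # ~11.2
```

The ISO 400 ratio is also consistent with the measured full well and read noise: 15,380 / 6.7 ≈ 2295.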


----------



## jrista (Jan 20, 2016)

JMZawodny said:


> jrista said:
> 
> 
> > JMZ, I do have a question for you. You note that 1e-/ADU maximizes both DR and signal simultaneously. However, does that not imply that your output buffer for amplification is also limited to the same capacity as the pixels themselves?
> ...



Yeah, they call those multi-bucket or memory-backed pixels these days. Same idea, and from the patents I've read, some of the same problems. 



JMZawodny said:


> jrista said:
> 
> 
> > What about more innovative sensor and pixel architectures that have a larger output buffer. You could theoretically get at least just as much dynamic range, (but possibly more) with better sampling of each electron in the pixel, if you could amplify the entire pixel range (say up to 65ke- for a largish pixel sensor with a 16-bit ADC) with an output buffer 3x as large a capacity as the pixels themselves, with 0.33e-/ADU. I've read some papers about prototype sensors (or just patentable ideas) that cover such things.
> ...


[/quote]

I agree reducing the read noise that gives one reason to want higher gain is a better solution...however sometimes we really do work with EXTREMELY faint details. A decent OIII signal is maybe up to 0.125 photons/second, meaning you get maybe one photon every eight seconds, and if your Q.E. is say around 50% (as is the case for most KAF CCDs, not even that good), then you really only get an electron every 16 seconds. To get a reasonable SNR with 5e- read noise (which really isn't a lot in the grand scheme of things), you would need to wait through at least five 16-second periods just for the signal to match the read noise, and many more for the signal to reach a reasonable SNR (which personally I consider to be no less than 7:1, and some NB imagers I know prefer higher than that). It would take 10 minutes just to reach that minimal desired SNR, and 20-30 minutes at least to get a reasonably strong SNR (per sub, BTW) across the plenum of pixels (thanks, Poisson!). There are much fainter objects out there, like OU4, which I think is at least an order of magnitude fewer photons/second (0.0125), so you would only get a photon every 80 seconds and an electron every 160 seconds. It would take about 20 minutes just to reach the minimal SNR in a single pixel, and an hour to reach a reasonably strong SNR, on a target like OU4, for all pixels. (I actually know some fellas who have actually done that, 60-90 minute subs to get a rather faint signal on OU4 with IIRC a KAF-8300). 
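The arithmetic above can be wrapped in a small model: signal electrons accumulate at photon_rate × QE × t, and single-sub SNR is signal over the quadrature sum of shot noise and read noise. Skyfog and dark current are ignored here, so this is a simplified sketch rather than the exact calculation in the post:

```python
import math

def single_sub_snr(photon_rate, qe, t_seconds, read_noise):
    """Approximate SNR of one sub: signal electrons over shot noise
    plus read noise in quadrature (skyfog/dark current ignored)."""
    signal = photon_rate * qe * t_seconds
    return signal / math.sqrt(signal + read_noise ** 2)

# Figures from the post: 0.125 photons/s, 50% Q.E., 5 e- read noise.
for minutes in (10, 20, 30):
    snr = single_sub_snr(0.125, 0.5, minutes * 60, 5.0)
    print(f"{minutes} min: SNR {snr:.1f}")
```

Under these simplified assumptions the 7:1 threshold lands near the 20-minute mark (SNR 7.5), in the same ballpark as the estimates above.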

How easy is it, really, to get your read noise, for reasonably sized pixels in the 5-6 micron range, below 3-5e-? I gather it isn't as easy as it sounds, as most of the sensors I know of that have 1-2e- also have really tiny pixels (2-3 micron tops) and more limited FWC and DR.


----------



## Lee Jay (Jan 20, 2016)

jrista said:


> There are much fainter objects out there, like OU4, which I think is at least an order of magnitude fewer photons/second (0.0125), so you would only get a photon every 80 seconds and an electron every 160 seconds. It would take about 20 minutes just to reach the minimal SNR in a single pixel, and an hour to reach a reasonably strong SNR, on a target like OU4, for all pixels.



This is why Hyperstar makes a lot of sense, especially for fainter and bigger targets. Switch from f/10 or f/7 to f/2 or so.


----------



## JMZawodny (Jan 20, 2016)

jrista said:


> How easy is it, really, to get your read noise, for reasonably sized pixels in the 5-6 micron range, below 3-5e-? I gather it isn't as easy as it sounds, as most of the sensors I know of that have 1-2e- also have really tiny pixels (2-3 micron tops) and more limited FWC and DR.



The answer depends upon how much money you are willing to spend. I think your real question is can a company produce these devices at a commercial scale and feed them into reasonably priced consumer goods. Let's wait and see what the 1Dx2 performance looks like. If they are going to keep the full well size in the vicinity of 60ke- they are going to have to produce sensors with ~2e- read noise if the claims of 15 stops of DR are to be realized. I'll predict here that the 1Dx2 will have 16-bit RAW files. Canon has increased the bit depth every time they increased DR. 16-bit files aren't unusual, in fact they are more "normal" than 12-bit or 14-bit files are. We'll know a lot more in 2 weeks.
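The ~2e- figure follows from the engineering definition DR = log2(FWC / RN). A one-line check, assuming the ~60ke- full well mentioned above:

```python
def required_read_noise(full_well_e, stops):
    """Read noise (e-) needed for a given engineering dynamic range,
    where DR (stops) = log2(full_well / read_noise)."""
    return full_well_e / 2 ** stops

# 15 stops at a 60ke- full well:
print(round(required_read_noise(60_000, 15), 2))  # 1.83 e-
```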


----------



## jrista (Jan 20, 2016)

JMZawodny said:


> jrista said:
> 
> 
> > How easy is it, really, to get your read noise, for reasonably sized pixels in the 5-6 micron range, below 3-5e-? I gather it isn't as easy as it sounds, as most of the sensors I know of that have 1-2e- also have really tiny pixels (2-3 micron tops) and more limited FWC and DR.
> ...



Certainly, it boils down to money. I'm concerned about what is accessible to a middle class to upper middle class consumer. The price for extremely high end equipment skyrockets, tens of thousands to hundreds of thousands of dollars. There already are amazing cameras on the market, like this one:

http://www.andor.com/scientific-cameras/ikon-xl-and-ikon-large-ccd-series/ikon-xl-231

That thing is a superbeast of a camera. It's got MONSTER pixels with a massive FWC, but a mere 2.1e- read noise. MASSIVE dynamic range (17.4 stops!!) with 18-bit encoding. Insane cooling (better than an emCCD) at -100C dT. I mean, that's it, right there. Basically the holy grail. But it almost costs as much as a house! 

The 1DX II won't have 15 stops of DR. It won't even come close. It'll have about 12.3 stops, like the C300 II. If anyone expects more than that, they are deluding themselves. I won't be disappointed when that's all it delivers. I don't even bother looking to Canon for cutting edge sensor technology anymore...I've had my hopes pumped and dashed far too many times. Canon cameras are fine, but they are far from world shattering when it comes to sensor technology and core image quality. 

If anyone is going to deliver a class leading consumer-affordable sensor that has great FWC, low read noise, and excellent dynamic range, something that could really be a powerhouse CMOS sensor for astro, it'll be Sony. I am actually curious why no one has stuck a FF Exmor in a CCD-class camera housing with cooling and all that (sans bayer array, of course) yet...but perhaps it is just a matter of companies like QSI and FLI learning how to use CMOS sensors rather than CCD sensors.


----------



## jrista (Jan 20, 2016)

Lee Jay said:


> jrista said:
> 
> 
> > There are much fainter objects out there, like OU4, which I think is at least an order of magnitude fewer photons/second (0.0125), so you would only get a photon every 80 seconds and an electron every 160 seconds. It would take about 20 minutes just to reach the minimal SNR in a single pixel, and an hour to reach a reasonably strong SNR, on a target like OU4, for all pixels.
> ...



Yeah, a fast aperture can help. It isn't without its own difficulties, though. Collimating an f/2 scope is extremely difficult...the focal plane is like 5 microns thick. It becomes more difficult with larger sensors. With a giant aperture like that, you also have problems exposing stars...because star flux and nebula flux are a ratio of ratios, stars saturate EXTREMELY quickly with such a fast scope, and by the time your nebula data is deep enough, you have massively clipped stars riddled with reflections. My original plan when I got into astrophotography was to get the 11" EdgeHD with Hyperstar...and while it's still a future goal, I passed on it because of the challenges. 

Narrow band imaging also already has star halo challenges. You will usually experience a discrepancy in OIII halo size vs. Ha halo size (vs. SII halo size, if you gather SII as well). Hyperstar exacerbates those problems several fold. That often makes NB channel combinations very challenging, leaving behind funky star halos. It's led to much more advanced processing techniques like starless tonemapping and RGB star replacement, both of which are not easy and very tedious. 

It's always six of one, half a dozen of the other with astrophotography. You make a gain in one area by trading off in another. (That is, unless you are independently wealthy and dropping a couple hundred grand on a personal observatory with top-of-the-line gear is a drop in the bucket.)


----------



## JMZawodny (Jan 20, 2016)

jrista said:


> The 1DX II won't have 15 stops of DR.



One of us will be right!


----------



## 3kramd5 (Jan 20, 2016)

JMZawodny said:


> One of us will be right!



Not necessarily. It could have 13.5 and you'd both be wrong.


----------



## PureClassA (Jan 21, 2016)

Bingo. I've cited that same piece over and over and I think that's what we will see. However, that's a Super 35 sensor. Obviously this will be FF, so we have to assume more light gathering and less noise compared to the C300 II. I'm thinking the internet blogger standard will show something in the 13-14 range. The 5DSR is in the 12 range and that's 50MP. We're talking a FF, 22MP, on-sensor ADC here.... It'll be 13-14, I'd wager. Canon will claim 15 based on their measure.

And yes, the 5DSR, for all the silly (pardon my French) bitching and moaning out there by people who don't own one, I have been very, very pleased with my push/pull latitude. It's notably better than my 5D3 and noise on par or better (real world appearance) than even my 6D. And I mean at 1600-6400 ISO. I fully expect the 1DX2 to be 1-2 stops better than that, which should put it in the 13-14 range.




privatebydesign said:


> Canon have very clearly, and exceptionally openly, demonstrated exactly how they measure 15 stops of DR from the C300 II, I see no reason why they would change their methodology for another camera.
> 
> https://www.cinema5d.com/canon-measured-15-stops-dynamic-range-c300-mark-ii/
> 
> ...


----------



## jrista (Jan 21, 2016)

PureClassA said:


> Bingo. I've cited that same piece over and over and I think that's what we will see. However, that's a Super 35 sensor. Obviously this will be FF, so we have to assume more light gathering and less noise compared to the C300 II. I'm thinking the internet blogger standard will show something in the 13-14 range. The 5DSR is in the 12 range and that's 50MP. We're talking a FF, 22MP, on-sensor ADC here.... It'll be 13-14, I'd wager. Canon will claim 15 based on their measure.
> 
> And yes, the 5DSR, for all the silly (pardon my French) bitching and moaning out there by people who don't own one, I have been very, very pleased with my push/pull latitude. It's notably better than my 5D3 and noise on par or better (real world appearance) than even my 6D. And I mean at 1600-6400 ISO. I fully expect the 1DX2 to be 1-2 stops better than that, which should put it in the 13-14 range.
> 
> ...



I don't think it'll even hit 13 stops. Canon would have to do a radical rearchitecture, changing their entire design for both the sensor and the off-die components. That would be an extremely expensive endeavor. I think we would have seen patents for other components that indicated they were going down that path by now, if they were. Given the nature of the C300 II, I see Canon...pushing it...with measurements (even though they have been forthcoming with why they call the C300 II a 15 stop camera, they are still purposely measuring differently than everyone else and accounting for stops where SNR is less than 0dB. NO ONE does that. Such measures are negative decibels for a reason.)


----------



## JMZawodny (Jan 21, 2016)

jrista said:


> I don't think it'll even hit 13 stops. Canon would have to do a radical rearchitecture, changing their entire design for both the sensor and the off-die components. That would be an extremely expensive endeavor.



But they have already said as much. They said they are switching to on die ADC and that these would be coming to market very soon. That was several months ago. As for expense, can they afford not to invest in this?


----------



## jrista (Jan 21, 2016)

JMZawodny said:


> jrista said:
> 
> 
> > I don't think it'll even hit 13 stops. Canon would have to do a radical rearchitecture, changing their entire design for both the sensor and the off-die components. That would be an extremely expensive endeavor.
> ...



I would have figured it was an expense Canon couldn't afford not to invest in years ago...but they have persisted with their previous overall architecture ever since. I guess it is possible that they have moved the 1D X II to an on-die ADC, but they would have had to do that years ago as well, not months ago; months ago, the product would have had to be well into field testing already. 

I guess we'll see, but I'm so skeptical of Canon these days, between things like the C300 II "15 stops" negative dB measurements, sketchy comments from Maeda in several interviews, and Canon having one of the lowest sensor patent filing/granted rates of any of the major imaging companies out there. I'm totally in "believe it when I see it" mode with Canon. I can't get my hopes up again.  Been getting my hopes up since 2009, tired of having them dashed. (Although in this case, I'd be quite happy to have my pessimism thwarted. ;D)


----------



## JMZawodny (Jan 21, 2016)

jrista said:


> JMZawodny said:
> 
> 
> > jrista said:
> ...



Obviously they would have had to make the move on a small developmental scale years ago. It should also be obvious that the company would not say anything publicly until the technology was ready for production. I see nothing wrong with the timing of things. We'll know very soon. Prepare to be happy.


----------



## StatisticsRule (Jan 21, 2016)

jrista said:


> I guess we'll see, but I'm so skeptical of Canon these days, between things like the C300 II "15 stops" negative dB measurements,



Out of curiosity, where did you see anything about negative dB?

The best explanation I have seen of what Canon did comes from: https://www.cinema5d.com/canon-measured-15-stops-dynamic-range-c300-mark-ii/

There is little mention of dB, but looking at the plot, the 15th step in the waveform monitor appears to me like it is fairly near SNR=1 (0dB). Of course if I had the data instead of a screen capture, I could be more precise in determining what the SNR of that last step is. In any case, the article by cinema5d claims to be more objective by using IMATEST, but according to the IMATEST website (http://www.imatest.com/docs/dynamic/):


> A dynamic range corresponding SNR = 1 (1 f-stop of noise) corresponds to the intent of the definition of ISO Dynamic range in section 6.3 of the ISO noise measurement standard: ISO 15739: Photography — Electronic still-picture imaging — Noise measurements. The *Imatest measurement differs in several details from ISO 15739*; hence the results cannot be expected to be identical.



Or in other words, the IMATEST measurement used by cinema5d to "disprove" Canon's claim of 15 stops does not conform to ISO standards for dynamic range.

Decisions about "usable" or "useful" DR are subjective and I won't argue that point, but I have yet to see anything inaccurate about Canon's measurements. I am sure you know that SNR=1 (dB=0) means the signal is the same level as the noise and so yes, it will look bad. Where did you see they used negative dB though?


----------



## privatebydesign (Jan 21, 2016)

I don't understand where the insinuation that Canon are measuring below 0dB comes from, or that they are including any information below that level in their measurements of the C300II DR. They state quite clearly "_*indeed, the definition of Dynamic Range is when the final lower step has a signal to noise of approximately 0 dB.*_"

Or, everything between FWC and 0dB. The issue seems to be that people (or their testing software) take an arbitrary dB value and say anything below it isn't usable, which would be fair if everybody agreed on a single value for 'too much noise' and all sensors behaved the same way at a given dB level, but we know they don't. Besides, even Canon themselves say _*"Even with the impressive 67dB Luma signal to noise specification this means those two steps ARE noisy."*_ But who are we to judge what is too noisy for somebody else? We can't; some people will use the D5 at 1,000,000 ISO and be happy with the results, while others will say their usable max is 25,000 ISO!

As for Canon not being able to make better sensors, that is farcical considering some of the specialist equipment they actually make. The simple reason they have not made them for general camera release up to now is cost and marketing; simply put, they have not been sufficiently pushed by other camera manufacturers (their market share has not suffered enough) to force them to spend the money and absorb the costs. They believe they have not needed to give us 'better' sensors up to this point, but indications are that they are prepared to do that now.


----------



## privatebydesign (Jan 21, 2016)

StatisticsRule said:


> Or in other words, the IMATEST measurement used by cinema5d to "disprove" Canon's claim of 15 stops does not conform to ISO standards for dynamic range.



Oh, don't spoil their fun by pointing out one person is measuring to international standards and one isn't. ;D


----------



## jrista (Jan 22, 2016)

From Canon's own plot:







The white band at the top of each strip IS noise. It's the noise intrinsic to the signal itself...the shot noise. The way I believe Cinema5D evaluated that plot is that given the last two stops have no separation between the noise in the signal and the read noise, it's all noise. So SNR would not be >0dB for those last two stops. The 13th stop had a very small amount of separation between the two, so it was the first stop with >0dB SNR.

Given the thickness of the band of shot noise at the top of each signal strip grew with each successive stop, the signal for the last two stops would have been completely buried in noise, hence the reason I called it negative decibels. I don't know what else to call it, not the way I understand this anyway. 

Noise is noise. You're going to have, at the very least, shot noise and read noise in every image, even assuming zero dark current and a perfect sensor devoid of any defects, with identical pixel response. The SNR of a signal cannot ignore the shot noise:

SNR = Signal/SQRT(Signal + ReadNoise^2)

So I understand why Cinema5D has a different interpretation of the plot than Canon, and personally I agree with their interpretation. I expect the 1D X II to be similar...however I'd be pleased if it was something totally new, with a much thinner band of read noise at the bottom of the graph.
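
For concreteness, the SNR formula above can be evaluated numerically. This is just an illustrative Python sketch with made-up signal and read-noise values, not measurements from any actual camera:

```python
import math

def snr_db(signal_e, read_noise_e):
    """Single-exposure SNR in decibels, counting shot noise (whose variance
    equals the signal in electrons) plus read noise, per the formula above."""
    snr = signal_e / math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(snr)

# Hypothetical values: a well-exposed stop vs. a deep-shadow stop, 4 e- read noise
print(snr_db(10000, 4.0))  # strongly positive: shot-noise limited
print(snr_db(1, 4.0))      # negative dB: signal buried in the noise floor
```

With these (hypothetical) numbers, the deep-shadow stop comes out below 0 dB, which is exactly the situation being debated for the 14th and 15th strips.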


----------



## neuroanatomist (Jan 22, 2016)

jrista said:


> From Canon's own plot



There is signal visible above the noise (although not separated) in the 14th and 15th stop, and obviously the scale of the plot is not ideally suited to distinguishing differences under 10.



jrista said:


> ..the C300 II "15 stops" negative dB measurements...



I'm confused...where is the negative dB on that plot?


----------



## privatebydesign (Jan 22, 2016)

Well, you might not know what to call it, but cinema5D say* "Was Canon’s evaluation parameter of 0db as a threshold too loose or are cinema5D’s testing parameters maybe too strict?"* so clearly they don't believe Canon were counting sub-0dB stops, and the ISO, by all accounts, also considers Canon's methodology to be sound.

As I said earlier, the problem with the way _"everybody else"_ is measuring DR is that it isn't to an international standard and there is no agreement on when 'noisy' is too noisy and shouldn't be counted. However, as we do have an international standard, and Canon use it, I truthfully don't see what the issue is.

Again, as I said earlier, Canon are not saying the C300 II has more DR than another camera, they are just stating what their measurements are when done to that international standard.


----------



## privatebydesign (Jan 22, 2016)

neuroanatomist said:


> jrista said:
> 
> 
> > From Canon's own plot
> ...




Obviously jrista considers the span from the bottom of the top white stripe down to the top of the bottom white stripe (0dB) to be the clear signal; he is saying that because the bottom of the top white stripe goes below the top of the bottom white stripe, which he considers 0dB, stops 14 and 15 are "below 0dB", though nobody else actually agrees with him, including cinema5D, Canon, and the ISO.

Everybody else seems to consider 0dB as the top of the bottom white stripe and the signal to be the top of the top white stripe.


----------



## neuroanatomist (Jan 22, 2016)

privatebydesign said:


> Obviously jrista considers the span from the bottom of the top white stripe down to the top of the bottom white stripe (0dB) to be the clear signal; he is saying that because the bottom of the top white stripe goes below the top of the bottom white stripe, which he considers 0dB, stops 14 and 15 are "below 0dB", though nobody else actually agrees with him, including cinema5D, Canon, and the ISO.



My bad. I thought zero meant...zero. I mean, it's right there at the bottom of the labels for the ordinate axis. Perhaps further explanation is required? Maybe theoretical physicist Michio Kaku can help with the concept of 'zero'. 

http://youtu.be/UV5Wo7YrZIU

;D


----------



## privatebydesign (Jan 22, 2016)

dilbert said:


> jrista said:
> 
> 
> > From Canon's own plot:
> ...



Good god dilbert, surely even you can follow this and not come out with the idiotic "pretending" nonsense? 

As for anybody else having similar read outs, it would be great if they did, and there is no doubt that the Arri one would look noticeably better, as would the Sony and Nikon stills cameras when compared to a similar plot from a Canon stills sensor.


----------



## jrista (Jan 22, 2016)

neuroanatomist said:


> privatebydesign said:
> 
> 
> > Obviously jrista considers the span from the bottom of the top white stripe down to the top of the bottom white stripe (0dB) to be the clear signal; he is saying that because the bottom of the top white stripe goes below the top of the bottom white stripe, which he considers 0dB, stops 14 and 15 are "below 0dB", though nobody else actually agrees with him, including cinema5D, Canon, and the ISO.
> ...



An SNR of 0dB is when the signal strength is equal to the noise, not when the plot is zero (if the plot was at zero, then there wouldn't even be any noise...for that matter, if we go by your interpretation of the graph, where zero on that graph is actually 0dB, then stops 16 and up would also be an SNR > 1...that clearly is not the case, so your interpretation of the 0 line on the graph also being 0dB SNR must be incorrect). 

For stops 14 and 15 in the plot, there is nothing but noise: it is white from the bottom of the graph to the top of the peak for that band in the image. I think the scale of the graph is fine. The noise band in the signal grew thicker with each successively darker stop, yet the band for the final two stops is much thinner than that of the 13th stop. Given that, I interpret those two strips as having a signal strength less than the total noise (the signal may just barely exceed the read noise alone, but again...you cannot ignore the noise intrinsic to the signal itself). The moment that happens:

SNR = 0.99/1, i.e. 20*log10(0.99) = -0.0873 dB


----------



## neuroanatomist (Jan 22, 2016)

jrista said:


> I interpret



Ahh, well then. It seems others interpret differently.


----------



## Neutral (Jan 22, 2016)

neuroanatomist said:


> jrista said:
> 
> 
> > I interpret
> ...


Sounds like a troll statement.
"Others" is not equal to ALL others )
For anyone who deals with system noise as part of high-sensitivity system design/development/deployment, it is all clear at first glance at this graph. Strips 12 and 13 give an instant visual representation of the SNR, and of the approximate SNR for strip 14.
So there is nothing wrong with the graph scale.
So I do not see 15 stops of DR here, and I tend to agree with jrista.
I will be happy, though, to see the 1DX II have a real 15 stops of DR and more than one stop of better high-ISO performance.
I will also be happy if Canon can prove that scientifically, disclosing the full test methodology to the public )
Personally, I do not see any point in arguing about specifications that are yet unknown.
That is just a waste of time.
I believe the 1DX II will definitely be better than the 1DX, but how much better we have yet to see in the near future.


----------



## JMZawodny (Jan 22, 2016)

dilbert said:


> privatebydesign said:
> 
> 
> > dilbert said:
> ...



Do you have any idea what that curve is supposed to look like? It has a well-defined theoretical shape. (Hint: all of the points have a small positive bias added to them.) It looks just fine to me. Additionally, the fact that there is a bump at the 14th and 15th steps is the significant aspect of this plot, not the lack of separation. Said differently, averaging many such frames would eventually produce a useful (clear) signal. I'd like to see these plots produced for any and all cameras (unfortunately this appears to be an analog video signal direct from the chip). I'd also like to see the results for the R, G, and B pixels plotted separately, as this would tell us a lot about color accuracy at very low signal.


----------



## JMZawodny (Jan 22, 2016)

dilbert said:


> JMZawodny said:
> 
> 
> > ...
> ...



Incorrect! If you take a strictly power progression plotted on a log plot, it is indeed a straight line. However, if you simply add a small constant to that same data, the line begins to curve as the value approaches the constant and becomes asymptotic to the constant.


----------



## StatisticsRule (Jan 22, 2016)

jrista said:


> The white band at the top of each strip IS noise. It's the noise intrinsic to the signal itself...the shot noise. The way I believe Cinema5D evaluated that plot is that given the last two stops have no separation between the noise in the signal and the read noise, it's all noise. So SNR would not be >0dB for those last two stops. The 13th stop had a very small amount of separation between the two, so it was the first stop with >0dB SNR.



jrista, thank you for explaining how you got the negative dB estimate. I have to agree with others that the band separation is not the important point though. To help explain, I created some random data in MATLAB. When the x-axis is between zero and 5, the signal is exactly 0. Between 5 and 10, the signal is exactly 1. On top of that signal, I add 100,000 points of normal random distributed noise with std dev = 1. Finally, to look similar to Canon's plot, I offset all the data by 4 units. 

I'd say the data is a reasonable approximation (qualitatively) to Canon's data for the high DR values. Most importantly, because I created the data, I know the right-half of the data has exactly SNR = 1 by definition of the signal height and the noise statistics. You can see the separation between steps is much smaller than the noise height.

From the crude MATLAB approximation here and the screenshot from Canon, I personally still believe Canon shows at least 14+ stops DR. Of course there is only so much you can get from a screenshot.

(As a side note, you might argue I did not use Poisson statistics, but by the time you make the mean value large enough so the data does not show discrete height steps, the poisson distribution is essentially the same as a normal distribution.)
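
For anyone without MATLAB, the same experiment can be sketched in Python/NumPy. The parameters come from the description above; the seed and sample layout are my own choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# 100,000 samples over x in [0, 10): the signal is 0 for x < 5 and 1 for x >= 5
x = rng.uniform(0.0, 10.0, 100_000)
signal = (x >= 5.0).astype(float)

# Unit-std-dev Gaussian noise on top of the step, then a constant offset of 4
data = signal + rng.normal(0.0, 1.0, x.size) + 4.0

left, right = data[x < 5.0], data[x >= 5.0]

# Step height (~1) equals the noise sigma (~1): SNR = 1 by construction,
# yet the two bands show no visible separation, as in Canon's plot
print(round(right.mean() - left.mean(), 2))
print(round(left.std(), 2), round(right.std(), 2))
```

The step is fully recoverable statistically even though, plotted as a scatter, the two halves would appear to merge into one noisy band.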


----------



## neuroanatomist (Jan 22, 2016)

Neutral said:


> neuroanatomist said:
> 
> 
> > jrista said:
> ...



True. But consider that one of those others is the International Organization for Standardization (see ISO 15739).


----------



## Lee Jay (Jan 22, 2016)

jrista said:


> From Canon's own plot:
> 
> 
> 
> ...



When you shoot a dim nebula, the separation between its light and the skyglow is zero: they overlap. Yet you are able to see the nebula anyway, because its light is added to the skyglow, and you can thus lop off some of the combined signal.

In other words, the lack of a gap between the shot noise and the read noise floor is not the same thing as "all noise" or zero signal-to-noise ratio.


----------



## jrista (Jan 22, 2016)

StatisticsRule said:


> (As a side note, you might argue I did not use Poisson statistics, but by the time you make the mean value large enough so the data does not show discrete height steps, the poisson distribution is essentially the same as a normal distribution.)



First off, I agree with this.



StatisticsRule said:


> jrista said:
> 
> 
> > The white band at the top of each strip IS noise. It's the noise intrinsic to the signal itself...the shot noise. The way I believe Cinema5D evaluated that plot is that given the last two stops have no separation between the noise in the signal and the read noise, it's all noise. So SNR would not be >0dB for those last two stops. The 13th stop had a very small amount of separation between the two, so it was the first stop with >0dB SNR.
> ...



Is your signal just representative of shot noise, or does it also factor in read noise? Based on Canon's plot, it appears that signal starts at the zero line (you can see the bits of shot noise between each band reaching down all the way to the zero line in the graph in a few places, below the band of read noise). I believe it is only the read noise band that is actually offset from 0 by about 4 units. If you remove read noise from the plot, stops 14 and 15 would indeed have separation of the shot noise above the zero line, and the snr would be > 0dB. However with read noise added in, I do not believe that remains the case.

That is not to say there isn't any signal...of course there is signal...it's just that the signal is buried in the noise, SNR < 1. If we are talking single-shot imaging, particularly with the CHARACTERISTIC of that noise (see the image below), I don't think those last two stops are very usable by any means (maybe if you're doing non-artistic work...police photography of a crime scene or something):






The Canon image has a purple band running right up through stop 12, and a good deal of banding running through stop 13 and up. The Arri, on the other hand, has very clean noise up through stop 18. Those upper stops on the Arri are undoubtedly no better, in terms of SNR, than Canon's stops 14 and 15...they are likely less than SNR 1, however they are much more usable. Similar to an Exmor sensor...you can easily dig very deep into the noise floor and pull out usable information, which is not the case with Canon data (about a seven stop or so lift here, well beyond what should be possible with either camera, but the A7r held up extremely well regardless; top row is just the shadow push, bottom row included additional processing to restore some aesthetic appeal, as much as was possible, to each image):






I also wonder what the power of those bands, particularly that purple band, is in Canon's data. I don't think a simple 2D slice of the signal is sufficient to explain how far those bands may protrude into the signal...and since the purple band is visible at least at stop 12, it's certainly more powerful than the signal at higher stops. 

Now, if we are talking astrophotography, we have the option of stacking to reduce noise. Stack 4 subs and you'll reduce the noise by half. That would undoubtedly lift the SNR above 1 in stop 14. Stack 16 subs and it'll probably make stop 15 viable. Stack 64 subs, and you're probably good across the board. That isn't going to help much with sports photography or anything like that, though.
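
The stacking arithmetic above (noise falling as the square root of the number of subs averaged) is easy to verify with a small Monte Carlo sketch. Signal and noise levels here are made up for illustration, and shot noise is ignored (pure Gaussian read-noise model):

```python
import numpy as np

rng = np.random.default_rng(0)

SIGNAL = 2.0      # e- per sub in a deep-shadow stop (illustrative)
READ_NOISE = 4.0  # e- read noise per sub (Gaussian model)

def stacked_snr(n_subs, trials=50_000):
    """Empirical SNR of the average of n_subs frames."""
    frames = SIGNAL + rng.normal(0.0, READ_NOISE, (trials, n_subs))
    stacked = frames.mean(axis=1)
    return stacked.mean() / stacked.std()

for n in (1, 4, 16, 64):
    print(n, round(stacked_snr(n), 2))
```

Averaging 4 subs halves the noise (SNR doubles), 16 subs quarters it, and so on, matching the ratios in the post.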


----------



## StatisticsRule (Jan 22, 2016)

jrista said:


> Is your signal just representative of shot noise, or does it also factor in read noise?



My goal was not to try and guess the noise source in the images, but rather to come up with an approximation of the signal we are seeing. The calculation for how I did that was described with my last post. 

You do raise an interesting point as to what falls under the category of noise, and I will admit I can imagine some cases where the offset could be considered "noise" even though I did not. Offhand, I don't know how the ISO standard defines noise.



jrista said:


> If we are talking single-shot imaging, particularly with the CHARACTERISTIC of that noise (see the image below), I don't think those last two stops are very usable by any means (maybe if your doing non-artistic work...police photography of a crime scene or something):



No argument from me. Everyone has their own interpretation of useful signal, and Canon has historically shown banding in the images. Hopefully the 1Dx M II will take care of that as well. My main point was that by everything I see, Canon appears to have a legitimate claim of 15 stops DR.

Anyhow, this has been a good discussion. If nothing else, it has made me re-examine some of my previous assumptions and I learned a few things I had not completely considered before.


----------



## jrista (Jan 23, 2016)

StatisticsRule said:


> jrista said:
> 
> 
> > Is your signal just representative of shot noise, or does it also factor in read noise?
> ...



Same.  Your Matlab example makes me wish I had it, as I think it would make demonstrating what I try to explain with just math and formulas easier at times. 

Regarding the bias offset, at 14-bit ADUs, it does contain some banding, potentially up to 2 ADU in my astrophotography testing. Removal of the bias signal can sometimes reduce vertical banding in a Canon signal in the deepest reaches of the signal. However if you do so, you have to restore another offset (called a pedestal) that is completely devoid of noise before subtracting the bias. If you do not, you'll clip negative values due to the read noise. 

Canon's bias offset is also 512 ADU...at least, it is in their DSLRs. I am a little surprised at how much read noise is indicated by Canon's chart, as despite the offset, it reaches right back down to nearly the zero line. It may be that the C300 II has a smaller bias offset, which might explain the read noise more.


----------



## 3kramd5 (Jan 23, 2016)

Jon -
Take a look at this open source math code:
https://www.r-project.org


----------



## jrista (Jan 23, 2016)

3kramd5 said:


> Jon -
> Take a look at this open source math code:
> https://www.r-project.org



Thanks, looks interesting. Extensive, but interesting. Might do in a pinch.


----------



## JMZawodny (Jan 23, 2016)

3kramd5 said:


> Jon -
> Take a look at this open source math code:
> https://www.r-project.org



Yay, R! We use that at work.


----------



## msm (Jan 23, 2016)

This claim is based on a signal after processing and noise reduction. It also appears to be downsampled to 1080p, i.e. 2 MP. Then, using this signal and a waveform monitor, they subjectively determine the DR to be 15 stops, instead of using a mathematical criteria. Judging from this forum, the last step alone introduces variations of several stops depending on who does it.

This is just pure marketing and is not saying much about the actual raw capabilities of the camera.


----------



## 3kramd5 (Jan 23, 2016)

msm said:


> This claim is based on a signal after processing and noise reduction. It also appears to be downsampled to 1080p, i.e. 2 MP. Then, using this signal and a waveform monitor, they subjectively determine the DR to be 15 stops, instead of using a mathematical criteria. Judging from this forum, the last step alone introduces variations of several stops depending on who does it.
> 
> This is just pure marketing and is not saying much about the actual raw capabilities of the camera.



It's a video camera...

Also, it looks like an objective measure. What's subjective is the non-standardized evaluation of a screenshot 

(And the singular is "criterion")


----------



## msm (Jan 23, 2016)

3kramd5 said:


> msm said:
> 
> 
> > This claim is based on a signal after processing and noise reduction. It also appears to be dowsnampled to 1080p ie 2mpix. Then using this signal and a waveform monitor they subjectively determine the DR to be 15 stops, instead of using a mathematical criteria. Judging from this forum the last step alone introduces variations of several stops depending on who does it.
> ...



And the 1DX II, which is the topic of this thread, is mainly a stills camera. So if anyone bases their expectations of the 1DX II's stills capabilities on this measure, they may be in for some disappointment.


----------



## 3kramd5 (Jan 23, 2016)

msm said:


> 3kramd5 said:
> 
> 
> > msm said:
> ...



I totally agree. It is silly to draw conclusions about how a vaporware stills camera will perform based on poorly understood measurements of a video camera.


----------



## PhotographyFirst (Jan 23, 2016)

Too bad the c300 II does not take RAW stills as well. Would be interesting to see how that would turn out. 

I actually have good faith the new stills sensor will be great for low ISO DR. Seeing as how the Sony A7 cameras are around the same as the c300 for video, it might be very possible that the stills DR is about on par as well.


----------



## Neutral (Jan 23, 2016)

JMZawodny said:


> 3kramd5 said:
> 
> 
> > Jon -
> ...


Yes, a very interesting alternative to commercial applications.
I personally prefer Mathcad, which I used quite often in the past when I was on the R&D side.

In general, I found this discussion quite useful; it even triggered a couple of ideas that could drastically improve sensor performance and useful DR.
If Canon (or any other company) were smart enough, they could already have brought this idea to market.
Basically, it could drastically reduce the impact of read noise, making DR equal to the full well capacity at base ISO and keeping it the same up to ISO 800 or even 1600, using current sensor manufacturing technology.
For the 1DX, with FWC = 90101 e-, this could yield up to 16.5 stops of DR at base ISO, and the same at ISO 800.
For the Sony A7S, with FWC = 155557 e-, this could yield over 17 stops of base-ISO DR.
But it would be much easier for Sony to implement than for Canon.

Basically, the idea is so simple and obvious, floating in the air so close to people's noses, that it could virtually be smelled. Maybe because it is so obvious, it has been overlooked by most of the big sensor players, who have some thinking inertia.
I searched the Internet to see if anyone had come up with the same idea and implementation, and found that a sensor company named Andor has something close to it (though a bit different, only part of the whole concept, and also a bit complicated), already implemented in their sCMOS sensors.
Maybe they have already patented their solution, and this is why we do not see it in Canon and Sony implementations.
My concept is more generic, wider, and more universal.
And I do not know what to do with it now.
Publish it somewhere (what would be the best medium?) where a free license could be declared for all manufacturers, or try to sell it to Canon or Sony?
Filing patents individually is so difficult and requires so much bureaucratic work that it is better done by companies with dedicated staff for it.
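
As a sanity check on the FWC figures quoted above: if read noise were truly negligible (the idealized case the post imagines), dynamic range in stops would be log2(FWC). Quick Python arithmetic:

```python
import math

# Idealized DR in stops for a zero-read-noise sensor: log2(FWC / 1 e-)
for name, fwc_electrons in (("1DX", 90101), ("A7S", 155557)):
    print(name, round(math.log2(fwc_electrons), 2))
```

This reproduces the roughly 16.5-stop and 17+ stop figures in the post.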


----------



## jrista (Jan 23, 2016)

Neutral said:


> In general I found this discussion quite useful and this even triggered couple of ideas which could drastically improve sensor performance and useful DR.
> If Canon be smart enough ( or any other company) they could already have come with this idea commercially implemented.
> Basically this could allow drastically reduce impact of read noise and have DR to be equal to Full Well Capacity at base ISO and keep it the same up to ISO 800 or even 1600 using current sensor manufacturing technologies.
> For 1DX with FWC=90101e- this could result to up to 16.5 stop DR at base ISO and the same at ISO 800.
> ...



What, exactly, is the idea? There are two options, given your mention of Andor.

I linked Andor's new low read noise high DR ccd camera on the previous page:



jrista said:


> Certainly, it boils down to money. I'm concerned about what is accessible to a middle class to upper middle class consumer. The price for extremely high end equipment skyrockets, tens of thousands to hundreds of thousands of dollars. There already are amazing cameras on the market, like this one:
> 
> http://www.andor.com/scientific-cameras/ikon-xl-and-ikon-large-ccd-series/ikon-xl-231
> 
> That thing is a superbeast of a camera. It's got MONSTER pixels with a massive FWC, but a mere 2.1e- read noise. MASSIVE dynamic range (17.4 stops!!) with 18-bit encoding. Insane cooling (better than an emCCD) at -100C dT. I mean, that's it, right there. Basically the holy grail. But it almost costs as much as a house!



The concept is not new, really. It's been around for a long time in the form of emCCD, or electron multiplying CCD. The idea is to multiply every electron by a significant amount, thus creating a large charge from the faintest signals. Because each electron is amplified so much, there are two consequences. One, read noise becomes sub-electron relative to the final amplified signal, and thus the cameras are effectively read noise free. Two, you cannot have any dark current at all, because even one electron from leakage would be amplified like any other, and that would damage the signal. So emCCD cameras, while effectively noiseless (excluding signal shot noise), must be cooled to an excessive degree to eliminate dark current as well. That makes the concept impractical for a digital camera as it requires TE cooling with liquid cooling to remove the heat. That works for fixed installations or setups where you don't really hold the camera (astrophotography).
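
The key emCCD property described here (per-electron multiplication making downstream read noise effectively sub-electron) reduces to simple arithmetic. A hedged sketch, ignoring the excess-noise factor real EM registers add:

```python
# Input-referred read noise after electron multiplication: the charge is
# multiplied by em_gain before hitting the output amplifier, so the
# amplifier's read noise shrinks by that factor when referred back to the input.
def input_referred_read_noise(read_noise_e, em_gain):
    return read_noise_e / em_gain

# e.g. a 50 e- output amplifier behind a 1000x EM register
print(input_referred_read_noise(50.0, 1000))
```

A fraction of an electron of effective read noise is why such cameras are described as read-noise free; the same logic is why any leaked dark-current electron gets amplified just as strongly, forcing the aggressive cooling mentioned above.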

The sCMOS concept is better in the long run, and will undoubtedly take over in place of CCD/emCCD when it fully matures. The sCMOS design uses many of the same improvements Sony Exmor did, by moving and hyperparallelizing all the readout electronics onto the sensor die. The sCMOS design actually parallelizes the readout logic for each column, as every column has dual column amps, dual CDS, and dual ADC. So for a 2000 column sensor, there are 4000 readout units. Each unit is independently tunable with a unique gain, which eliminates banding. The high parallelism allows each unit to operate at a lower frequency, which allows lower read noise. The CDS units are apparently a more advanced design that nearly eliminate dark current, meaning you don't need to cool as much to keep dark current levels very low. The high parallelism allows very fast readout with read noise levels similar to current CCD and CMOS sensors operating at much slower readout rate.

The sensors are only in a couple sizes at the moment, and are quite small (smaller than APS-C). They also don't have the extremely high Q.E. that emCCD designs have reached...~60% for sCMOS, over 90% for emCCD. I think those limitations will ultimately be overcome, but at the moment, an emCCD design like the Andor camera I linked still delivers the cleanest results of any type of camera on the market (so for astrophotography, it's hands down the winner, because you effectively only have shot noise). An sCMOS sensor would undoubtedly deliver better IQ for a DSLR, and the design isn't that much more advanced than a Sony Exmor...although we are probably talking about much more stringent grading of the sensor quality, with defects classified similar to Grade 0, 1, and 2 CCDs (so a true scientific grade 0 sCMOS is likely to be extremely expensive.)


----------



## Neutral (Jan 25, 2016)

jrista said:


> Neutral said:
> 
> 
> > In general I found this discussion quite useful and this even triggered couple of ideas which could drastically improve sensor performance and useful DR.
> ...


I do not think I can disclose details of the idea at the moment; it might become my bread and butter later, if it turns out to be feasible )
I need some spare time (which is a real problem for me now) to research it more and see whether anyone came up with something similar earlier.
The idea is not to improve single-photocell performance on the sensor, but to use it more efficiently and get better overall performance and more flexibility from the sensor, in a way that could be applied at the current level of sensor technology.
Basically, some changes in the sensor signal processing design concept.
I mentioned Andor because I was searching to see whether anyone had already come up with a similar idea, or had some of the elements required for this concept, and found that they have some of the pieces required for the implementation.

As my occupation is not related to imaging technology (photography is just an expensive hobby ) I had never heard of Andor before, and when I found them while researching my idea I was really impressed with their advances and the level of sensor technology they have reached so far.

And yes, their iKon-XL 231 is a truly impressive imaging device.
The dream of any astrophotographer. 
A medium format 6x6 cm CCD sensor, and every part of it is an amazing piece of state-of-the-art technology, from the photocell to the precision 18-bit ADC that extracts most of the photocell's performance.
It could provide stellar performance for space optical telescopes, where you can see deep space free from airglow light pollution, and for many other applications where imaging device cost is not a limiting factor.


----------



## jrista (Jan 26, 2016)

Neutral said:


> And yes, their iKon-XL 231 is a truly impressive imaging device.
> The dream of any astrophotographer.
> A medium format 6x6 cm CCD sensor, and every part of it is an amazing piece of state-of-the-art technology, from the photocell to the precision 18-bit ADC that extracts most of the photocell's performance.
> It could provide stellar performance for space optical telescopes, where you can see deep space free from airglow light pollution, and for many other applications where imaging device cost is not a limiting factor.



Yes, it's definitely a drool-sucking device.  Even here on earth with dark skies, I'd love to have one. 

There are a couple guys I know who actually use real emCCDs in their work (I have some bookmarks at home, so I may be able to share some examples of what these things can do). With an emCCD you effectively have no read noise (it's around 0.1e- tops, so for a normal conversion it does not actually add any noise). Because of the ultra-deep cooling, you also have no dark current. That means the exposure time doesn't matter...you're purely photon shot noise limited...so you can use ultra-short exposures or very long ones, it doesn't matter. I know guys who have stacked many hundreds to thousands of frames of only a few seconds (usually 5-10 seconds), and the results are mind-bogglingly clean. It's pretty amazing stuff. The cameras, even second hand, were quite expensive though. One old, used emCCD this guy picked up was over $20k I think. Kind of ridiculous.
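The "exposure time doesn't matter" claim follows directly from the standard SNR budget: with near-zero read noise and no dark current, N short subs collect the same photons, and accumulate the same noise, as one long exposure. A minimal sketch (hypothetical numbers, shot noise plus read noise only):

```python
import math

def stack_snr(signal_e, n_frames, read_noise_e):
    """Theoretical SNR of a stack of n_frames exposures, each collecting
    signal_e photoelectrons, with Gaussian read noise added per frame."""
    signal = n_frames * signal_e
    noise = math.sqrt(n_frames * (signal_e + read_noise_e ** 2))  # shot + read, in quadrature
    return signal / noise

# emCCD-like ~0.1 e- read noise: 720 x 5 s subs are essentially one 3600 s exposure.
snr_short = stack_snr(5, 720, 0.1)    # ~59.9
snr_long = stack_snr(3600, 1, 0.1)    # ~60.0

# DSLR-like 5 e- read noise: the same short subs pay a heavy penalty.
snr_dslr = stack_snr(5, 720, 5.0)     # ~24.5
```

With high read noise, every extra frame adds another dose of read noise, which is why DSLR imagers favor fewer, longer subs.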


----------



## Neutral (Jan 27, 2016)

jrista said:


> Neutral said:
> 
> 
> > And yes, their iKon-XL 231 is a truly impressive imaging device.
> ...


It could be very interesting to see what could be achieved using their sensors.
They even have a device with single-photon sensitivity.
As for price, this could be compared with the Phase One digital backs, which run about USD 50k for the latest 100 MP one using the Sony 4x6 cm MF sensor.
So even if the iKon-XL 231 costs around or over $100k, that is just a couple of Phase One digital backs, which looks pretty reasonable.
If I had such a passion and addiction to astrophotography as you, I might consider buying one for myself ))) One possible option is for several people to buy one for time-shared use - a concept sometimes used to share the cost of expensive property or an expensive yacht.

What is interesting is that the iKon-XL 231 sensor is the biggest MF square 6x6 cm BSI sensor I have seen so far.
I remember there was a lot of hype about the Sony A7R II sensor - that it was the first FF BSI sensor - and a lot of people argued about BSI benefits for FF, but nobody mentioned that true MF 6x6 cm BSI sensors already existed.


----------



## jrista (Jan 28, 2016)

Neutral said:


> jrista said:
> 
> 
> > Neutral said:
> ...



From what I've heard, an iKon XL is around $200,000. Guess making a BSI that large really does cost.  It's got over 95% Q.E., which basically means it's a photon counter. Most emCCD cameras are also photon counters.

The difficulty with these cameras is finding a scope with a large enough image circle. Most don't have a circle larger than 44-45mm. Some have image circles up to 65mm, and even fewer are larger than that. Most of the scopes that could handle an 84mm sensor diagonal are pretty expensive, in the $30,000-and-up range, with the exception of maybe a couple of Taks and maybe the TEC140, which are between $5,000 and $10,000.


----------



## kaihp (Jan 28, 2016)

jrista said:


> From what I've heard, an iKon XL is around $200,000. Guess making a BSI that large really does cost.  It's got over 95% Q.E., which basically means it's a photon counter. Most emCCD cameras are also photon counters.
> 
> The difficulty with these cameras is finding a scope with a large enough image circle. Most don't have a circle larger than 44-45mm. Some have image circles up to 65mm, and even fewer are larger than that. Most of the scopes that could handle an 84mm sensor diagonal are pretty expensive, in the $30,000-and-up range, with the exception of maybe a couple of Taks and maybe the TEC140, which are between $5,000 and $10,000.



This is getting fairly off-topic of the original subject, but what the heck.

If the sensor you're getting is $200K, isn't a scope for $30K kinda 'small change'? 

WRT the PhaseOne XF100MP, if I understood right, the $50K is not just for the back - that includes the XF modular camera 'body'.


----------



## jrista (Jan 30, 2016)

kaihp said:


> jrista said:
> 
> 
> > From what I've heard, an iKon XL is around $200,000. Guess making a BSI that large really does cost.  It's got over 95% Q.E., which basically means it's a photon counter. Most emCCD cameras are also photon counters.
> ...



The cheapest high quality reflecting scope (which are usually used for scientific and research applications because of the lack of glass optics) is ~$35k (i.e. a PlaneWave...and actually, I'm not even sure the smaller ones have a large enough image circle, which would mean the cheapest scope might be around $50k or so), but they can top a million bucks if you go for one of the larger institutional grade scopes.

Anyway. No one outside of an institution of some kind is going to be using an iKon. It's just kind of the pinnacle of CCD technology, showing what can be done. 

Interestingly, the PhaseOne cameras, or any MF camera for that matter, have the same problem as the iKon: a huge sensor, with a diagonal larger than 44mm. That means they are limited to the same scopes...so if you spend $50k on a PhaseOne XF100MP, then you'll need either a Tak FSQ106 (which costs about $5500 for the base scope, but you also have to buy a bunch of accessories for it, which usually puts the cost around $8000-$10,000), or one of the larger reflectors (so another $50k or more.) 

Using large sensors for astrophotography is basically limited to institutions and the independently wealthy.  

That's one of the reasons using full frame DSLRs is so popular these days. It's by far the cheapest way to get a big sensor frame that works with a lot more scopes. My $1000 AstroTech 8" Ritchey-Chrétien, a decent scope, actually works with FF DSLR sensors (with a bit of vignetting). There are a good deal of refractors out there that support FF, and there are plenty of reflecting scopes that do as well. They don't perform as well as a proper monochrome camera, but they are significantly more cost effective. The most popular FF DSLR option on the market these days is the 6D, because its price is so low. It performs ok, but I suspect if the 6D II has similarly low dark current to the 7D II, then the 6D II might become the FF DSLR of choice for those looking for a big frame. The best FF DSLR option on the market at the moment is the D810A, and the images from that are just stunning.


----------



## JMZawodny (Jan 30, 2016)

jrista said:


> kaihp said:
> 
> 
> > jrista said:
> ...



Get on the list for an AP Honders.


----------



## hubie (Jan 31, 2016)

Sorry, but 15 stops of DR is bull****... that would mean everything between for example 1/8000 and 4" exposure would be perfectly exposed. I doubt that very much... shoot into the sun on a summer day and have no burned white and still see details in shadows that would compare to a 4" exposure... haha ;D


----------



## 3kramd5 (Feb 1, 2016)

hubie said:


> Sorry, but 15 stops of DR is bull****... that would mean everything between for example 1/8000 and 4" exposure would be perfectly exposed.



That's not at all what it means.


----------



## Sporgon (Feb 1, 2016)

3kramd5 said:


> hubie said:
> 
> 
> > Sorry, but 15 stops of DR is bull****... that would mean everything between for example 1/8000 and 4" exposure would be perfectly exposed.
> ...



Indeed. The whole 'DR' thing is horribly misunderstood on the internet.


----------



## Alejandro (Feb 1, 2016)

Sporgon said:


> 3kramd5 said:
> 
> 
> > hubie said:
> ...



Exactly. Most overrated feature in a camera. 

They just want to "mistakenly" take a pitch black picture, move a few sliders and have everything in good exposure.


----------



## Don Haines (Feb 1, 2016)

Alejandro said:


> Sporgon said:
> 
> 
> > 3kramd5 said:
> ...


until I can take a picture with the lens cap on, and adjust it in photoshop to see the white cat in front of a white background, it just won't be good enough!


----------



## Alejandro (Feb 1, 2016)

Don Haines said:


> Alejandro said:
> 
> 
> > Sporgon said:
> ...



Dxomark wet dream.


----------



## jrista (Feb 1, 2016)

Well, the camera has been announced. Hopefully it will hit the streets sooner rather than later, and we'll know for sure what it's really capable of. I'm pessimistic...but that means I'll be pleasantly surprised if it actually does have significantly more DR.


----------



## MintChocs (Feb 3, 2016)

jrista said:


> Well, the camera has been announced. Hopefully it will hit the streets sooner rather than later, and we'll know for sure what it's really capable of. I'm pessimistic...but that means I'll be pleasantly surprised if it actually does have significantly more DR.


What is significant? 1/2 stop (well, it is Canon), 1 stop, 2 stops?


----------



## Stu_bert (Feb 3, 2016)

jrista said:


> Well, the camera has been announced. Hopefully it will hit the streets sooner rather than later, and we'll know for sure what it's really capable of. I'm pessimistic...but that means I'll be pleasantly surprised if it actually does have significantly more DR.



I'm with you Jon, but a couple of UK sites have reported that it has an on-chip ADC. Check out Andy Rouse's blog about it; appreciate he is an EOL, but he does say the noise in the darks is reduced.


----------



## Don Haines (Feb 3, 2016)

jrista said:


> Well, the camera has been announced. Hopefully it will hit the streets sooner rather than later, and we'll know for sure what it's really capable of. I'm pessimistic...but that means I'll be pleasantly surprised if it actually does have significantly more DR.


me too....

now if it had 16 bit RAW files..... that would be a good indication of some major improvements!


----------



## Eldar (Feb 3, 2016)

A friend of mine, who is a Canon Ambassador, has spoken with several of the early testers. According to him, they were all "lyrical" about the performance, including noise and DR. I have a preorder in and am keeping my fingers crossed. The news of an on-chip ADC was encouraging.


----------



## Mark D5 TEAM II (Feb 3, 2016)

Pictures of the 1DX2 sensor confirm column ADCs at the top and bottom of the pixel array:


----------



## Stu_bert (Feb 3, 2016)

It's only a JPEG, and it's already been posted in another thread here and on the FM forums, but:

http://www.fotosidan.se/cldoc/vi-har-provat-canon-1d-x.htm?page=-1


Mark 1 vs Mark 2 after a shadow push

Quite promising...


----------



## jrista (Feb 4, 2016)

MintChocs said:


> jrista said:
> 
> 
> > Well, the camera has been announced. Hopefully it will hit the streets sooner rather than later, and we'll know for sure what it's really capable of. I'm pessimistic...but that means I'll be pleasantly surprised if it actually does have significantly more DR.
> ...



A solid 2 stops minimum is what I would consider significant. That would bring it in line with the competition, and around 14 stops total. I haven't read anything specific yet, but I still doubt it will have 16-bit RAW.


----------



## Don Haines (Feb 4, 2016)

jrista said:


> MintChocs said:
> 
> 
> > jrista said:
> ...


One of the press releases/spec sheets said in writing that it is 14-bit RAW...


----------



## jrista (Feb 4, 2016)

Don Haines said:


> jrista said:
> 
> 
> > MintChocs said:
> ...



Well, there ya go. I guess it could use some kind of non-linear amplification pre-ADC and compress more than 14 stops into the 14-bit RAW, but I kind of doubt that is the case. Still, it would be interesting to see if the camera actually scores over 13 stops of engineering DR.
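For concreteness, here's one way such a compression could work in principle: square-root companding, which spends fine code steps on the shadows and coarse ones on the highlights, roughly tracking shot noise. This is purely illustrative; nothing suggests Canon does this, and all the constants are invented:

```python
import math

FULL_WELL = 2 ** 15       # hypothetical 15-stop linear range, in electrons
CODES = 2 ** 14           # the 14-bit RAW container

def encode(e):
    """Compress a linear electron count into a 14-bit code via sqrt companding."""
    return round(math.sqrt(e / FULL_WELL) * (CODES - 1))

def decode(code):
    """Invert the companding back to linear electrons."""
    return (code / (CODES - 1)) ** 2 * FULL_WELL

# Quantization steps grow with signal level, roughly matching shot noise sqrt(e):
step_shadow = decode(100) - decode(99)          # ~0.02 e- per code near black
step_highlight = decode(16383) - decode(16382)  # ~4 e- per code at saturation

# Round-trip error stays far below shot noise, so nothing visible is lost:
err = abs(decode(encode(1000)) - 1000)          # << sqrt(1000) ~ 32 e-
```

Because shot noise itself grows as sqrt(e), the coarse highlight steps sit well below the noise floor at those levels, which is exactly why 14 bits of codes can carry more than 14 stops of usable range.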


----------



## Don Haines (Feb 4, 2016)

jrista said:


> Don Haines said:
> 
> 
> > jrista said:
> ...


They could do a non-linear compression on the analog side (it is blazingly fast with a look-up table), but once you are at 14-bit digital you cannot go back to 16-bit without losing resolution. When you un-compress, you lose resolution. That's why I don't believe anyone who claims more stops of DR than bits of data. If they really had the resolution, they would bump up the number of bits... after all, that's why we went from 12-bit RAW to 14-bit.


----------



## jrista (Feb 4, 2016)

Don Haines said:


> jrista said:
> 
> 
> > Don Haines said:
> ...



It depends. There is a massive amount of highlight information, more than you really need. If you compress properly, what you lose isn't necessarily as important as what you gain. It's 14 bits of data, but that data could represent more than 14 stops of information.


----------



## Don Haines (Feb 4, 2016)

jrista said:


> Don Haines said:
> 
> 
> > jrista said:
> ...


agreed! The highlight information is nowhere near as critical as the lows....going from 11 to 13 while skipping 12 is significant, while going from 16371 to 16379 may be four times the jump, but the difference in an image would be almost completely undetectable.
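Two lines of arithmetic make the point concrete (using the values above):

```python
# The same kind of quantization jump, relative to the signal it sits on:
shadow_jump = (13 - 11) / 11              # ~0.18 -> an 18% brightness step, plainly visible
highlight_jump = (16379 - 16371) / 16371  # ~0.0005 -> 0.05%, far below visibility
```

The shadow jump is hundreds of times larger in relative terms, which is why code values are far more precious at the bottom of the scale than at the top.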


----------



## jrista (Feb 4, 2016)

Don Haines said:


> jrista said:
> 
> 
> > Don Haines said:
> ...



Exactly! Hah! Glad someone gets it.


----------



## Don Haines (Feb 4, 2016)

jrista said:


> Don Haines said:
> 
> 
> > jrista said:
> ...


Yes, but I still think if they had that much DR they would go 16 bit....the marketing people would go nuts  "Our new 1DX2 has so much dynamic range that we had to expand our RAW files to 16 bits to handle it!!!! Those inferior Sony and Nikon cameras only need 14 bits with their inferior colour depth...."


----------



## jrista (Feb 4, 2016)

Don Haines said:


> jrista said:
> 
> 
> > Don Haines said:
> ...



I dunno. I don't think the two are synonymous. There is having more true dynamic range, which in the case of a linear sensor means you have lower read noise and thus can actually benefit from 16-bit data. A non-linear sensor, on the other hand, amplifies shallow signals more, and changes the relationship of those shallow signals vs. read noise more than it does deeper signals. If Canon has not reduced their read noise enough to actually support 16 stops of linear dynamic range, then going to 16-bit wouldn't help. However, a non-linear sensor capable of 15 stops of DR compressed into 14-bit data WOULD help. 

I would obviously prefer to have a linear sensor with read noise low enough to support 15 stops of DR. I think that would be much better than applying curves to the information to compress and decompress it. A non-linear sensor would just be a means of overcoming high read noise...it would just be a stop-gap measure. However Canon would have to get their read noise down to around 2e- with a 60ke- FWC (i.e. 5D IV), or 3e- with a 100ke- FWC (i.e. 1D X II), in order to have 15 true stops of linear DR. Rather doubtful they have achieved that...I might believe they have reduced read noise down to the ~10e- range @ ISO 100...that would be almost 1/4 what the 1D X had, and if the II still has around 90ke- FWC, then that would be ~13.2 stops DR (I'd take that, though!!). Sony has barely achieved that even...most FF exmors have 4-6e- read noise at ISO 100, and the A7s has ~25e- (although it also has 150ke- FWC).
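The stop figures above come straight from the engineering-DR definition, log2(full-well capacity / read noise). A small helper to check them (the FWC/read-noise pairings are the hypothetical ones discussed above, not confirmed specs):

```python
import math

def engineering_dr(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2 of the full-well capacity
    to read noise ratio, both expressed in electrons."""
    return math.log2(full_well_e / read_noise_e)

dr_60k_2e = engineering_dr(60_000, 2)     # ~14.9 stops (hypothetical 5D IV case)
dr_100k_3e = engineering_dr(100_000, 3)   # ~15.0 stops (hypothetical 1D X II case)
dr_90k_10e = engineering_dr(90_000, 10)   # ~13.1 stops (10 e- read noise guess)
dr_a7s = engineering_dr(150_000, 25)      # ~12.6 stops (A7s-style figures)
```

So hitting a true 15 stops really does require read noise in the 2-3 e- range at these well depths, which is the crux of the skepticism.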


----------



## rfdesigner (Feb 4, 2016)

jrista said:


> Don Haines said:
> 
> 
> > jrista said:
> ...



Re: pre-de-emphasis... really, what is the point? Which is easier to encode, 14 bits non-linear or 16 bits linear? CR2 files already losslessly compress 14 bits into about 9 on average, and card costs are now almost zero compared to camera costs. If it were my decision I'd go with a CR3 that is merely 16-bit linear data with TIFF-style compression. Perhaps Canon sees CR3 as a potential problem, so they want everyone on side (Adobe etc.) before letting that little nugget of info out (after all, pros want things to "just work" and don't care so much about specmanship)... but that's pure speculation.

It would explain why they have only released JPEGs.


----------



## kaihp (Feb 4, 2016)

jrista said:


> Don Haines said:
> 
> 
> > jrista said:
> ...



Think "floating point numbers"


----------



## jrista (Feb 4, 2016)

rfdesigner said:


> jrista said:
> 
> 
> > Don Haines said:
> ...



Non-linear amplification of the analog signal is significantly more complex, requiring a considerable investment of die space per pixel. Unless Canon has shrunk to something smaller than 180nm, I don't think they even have the space to implement such a thing. It's not about what's easy, either...it's about what's effective. If Canon's read noise hasn't changed, then encoding in 16-bit buys no one anything, because instead of 15adu of noise you'll have 60adu...it's the same amount of noise, just encoded more finely. You're still limited to the same tonal range, which before topped out at not even 4096 levels...why store fewer than 4096 usable levels of information in 65535 levels worth of numeric space? It's totally wasteful, would limit their ability to process at a fast rate, would increase data size (they probably wouldn't have been able to achieve 170 continuous RAW frames), would increase storage space requirements, etc.


----------



## LonelyBoy (Feb 5, 2016)

Is higher bit-depth files something that could be released later via firmware if they're still working on everything else around it, like DPP?


----------



## 3kramd5 (Feb 5, 2016)

LonelyBoy said:


> Is higher bit-depth files something that could be released later via firmware if they're still working on everything else around it, like DPP?


If the hardware supports it (16-bit ADCs, memory registers, etc).


----------



## Lee Jay (Feb 6, 2016)

kaihp said:


> jrista said:
> 
> 
> > Don Haines said:
> ...



The roughly logarithmic encoding of floating point numbers is ideal for storing images. Lightroom can do 16-bit floating point math, for a very satisfactory 30 stops or so of DR in the image.
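The ~30-stop figure checks out against the IEEE 754 half-precision format (1 sign bit, 5 exponent bits, 10 mantissa bits): the normal range alone spans about 30 doublings, while the 10-bit mantissa keeps a roughly constant ~0.1% relative step at every exposure level. A quick check:

```python
import math

F16_MAX = 65504.0             # largest finite binary16 value: (2 - 2**-10) * 2**15
F16_MIN_NORMAL = 2.0 ** -14   # smallest normal binary16 value

stops = math.log2(F16_MAX / F16_MIN_NORMAL)   # ~30 stops of representable range
rel_step = 2.0 ** -10                         # ~0.1% quantization step, constant across the range
```

That constant relative step is exactly the behavior a linear integer format lacks: integers waste precision on highlights and starve the shadows, while floats spread it evenly per stop.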


----------

