# Patent: Canon 5 Layer UV, IR, RGB Sensor



## Canon Rumors Guy (Jun 27, 2014)

A patent showing a 5-layer image sensor from Canon has appeared. UV and IR layers help with color reproduction, especially for skin tones.

[Keith at Northlight](http://www.northlight-images.co.uk/cameras/Canon_rumours.html) has this to add about the patent: *"The pixel structure above, shows a BSI design (back illumination), but obviously no pixel structure. Extending the range of light response to UV and IR would cause issues for current lens designs, but the idea of more than three colour primaries is not new, although it would require a major re-write for support from RAW converter software."*

[![5-layer](http://www.canonrumors.com/wp-content/uploads/2014/06/5-layer-575x341.gif)](http://www.canonrumors.com/wp-content/uploads/2014/06/5-layer.gif)

**Patent Information (Google Translated)**

- Patent Publication No. 2014-103644
  - Publication date 2014.6.5
  - Filing date 2012.11.22
- Canon patents
  - I get the visible light
  - I get the ultraviolet light
  - I get the infrared light
  - I want to extract the skin area
  - The skin region of visible light, I will correct the image from the difference of the skin area of ultraviolet light
  - Signal value of the infrared light is regarded as the skin area, the high part

Source: [EG](http://egami.blog.so-net.ne.jp/2014-06-27) via [NL](http://www.northlight-images.co.uk/cameras/Canon_rumours.html)

**cr**


----------



## Lee Jay (Jun 27, 2014)

This is likely to mean very near IR and very near UV, and thus existing lenses would be okay. Far UV would be removed by the glass, as would far IR. While it's likely true that CA would increase as you go away from green, I doubt it would matter for this use.


----------



## Don Haines (Jun 27, 2014)

very interesting....

BTW, those posters who claim Canon has a lack of innovation... comments please?


----------



## Max ☢ (Jun 27, 2014)

Don, this is going to be a really meaningful innovation when an actual product hits the shelves... until then this remains only a patent.


----------



## bvukich (Jun 27, 2014)

Max ☢ said:


> Don, this is going to be a really meaningful innovation when an actual product hits the shelves... until then this remains only a patent.



It still represents a commitment to R&D, which is important. But not everything you throw at the wall sticks.


----------



## Click (Jun 27, 2014)

Very interesting information. 8)


----------



## Max ☢ (Jun 27, 2014)

This is surely a strong commitment to R&D, and I really like the idea, but innovations remaining in Canon's lab do not benefit the consumer, and for this reason I think this is not meaningful for us, the users. So, until this hits the shelves in the form of an actual product, it remains only a patent.


----------



## keithcooper (Jun 27, 2014)

Purely coincidence of course ;-)

...but a week or two before the 7D was announced, there was a Canon multi-layer patent, which was apparently related to the dual layer one that appeared in the 7D as its metering sensor.

This was the patent image, and the colour dual-layer one is from the 7D launch info; both are from my original 7D page:
http://www.northlight-images.co.uk/cameras/Canon_7d.html


----------



## mackguyver (Jun 27, 2014)

keithcooper said:


> Purely coincidence of course ;-)
> 
> ...but a week or two before the 7D was announced, there was a Canon multi-layer patent, which was apparently related to the dual layer one that appeared in the 7D as its metering sensor.


Keith, thanks for the great info as always, and I'm curious, how does the 7D metering sensor compare to the 1D X metering sensor?


----------



## nostrovia (Jun 27, 2014)

Canon Rumors said:


> I want to extract the skin area



I love it when machines are able to absolutely nail the translation.

With this type of sensor, could one extract information from each layer separately? Could I get a "normal" shot, then tweak a button on the back of the camera or a few sliders in Lightroom and get an IR image?


----------



## keithcooper (Jun 27, 2014)

mackguyver said:


> keithcooper said:
> 
> 
> > Purely coincidence of course ;-)
> ...



The 1Dx says "252 zone from 100,000-pixel RGB AE sensor"
The 7D just says 'dual layer 63 zone'

I've never seen more details about the 1D X

The 5D3 does say though "iFCL metering with 63-zone dual-layer sensor" which would suggest that the 7D metering chip went into the 5D3 but something more went into the 1D X


----------



## preppyak (Jun 27, 2014)

nostrovia said:


> I love it when machines are able to absolutely nail the translation.


I'm also a fan of this

I get the visible light
I get the ultraviolet light
I get the infrared light

ALL YOUR LIGHT ARE BELONG TO ME!


----------



## NancyP (Jun 27, 2014)

This is quite interesting. I wonder what bandpass filters they will use on the sensor? I am assuming this would not be the equivalent of a full-spectrum conversion on an existing sensor, in which the bandpass filter is swapped for a clear filter of equal thickness. My 60D is still "at risk" ;D for mods, should the 7D2 come out and I decide to "replace" the 60D for regular APS-C (wildlife) shooting.

The major implication is a serious disruption to many people's workflow while the third-party software companies (Adobe, Capture One, DxO, etc.) make a serious addition to their RAW conversion algorithms to accommodate the new sensor. That alone would be one reason why a new sensor would premiere in a mid-level camera: pros, most of whom use something other than DPP, can't be bothered to wait around while Adobe etc. get on this.

The workflow disruption can be somewhat annoying for amateurs as well. I shoot Sigma Foveon files (DP#M series, x3f), and these can only be processed in the Sigma SPP program and in Iridient Developer. I am currently using the Sigma program to make global adjustments and then exporting TIFF files to other software (LR, Pano, etc.) for local adjustments, because there aren't good local adjustment tools in the Sigma program. PITA.


----------



## Jan (Jun 27, 2014)

Ya... nice. Please, Canon: announce the f*ckin camera. 
And announce the T6i featuring the same sensor too, please.


----------



## CANONisOK (Jun 27, 2014)

preppyak said:


> ALL YOUR LIGHT ARE BELONG TO ME!


Nice.


----------



## dadgummit (Jun 27, 2014)

I wonder if this could eliminate the need for IR converted cameras? Maybe with this sensor you could only record the light from one of the layers.
I know this is a small market and not at all likely the original point of the sensor, but it could potentially be a happy side effect.


----------



## mackguyver (Jun 27, 2014)

dadgummit said:


> I wonder if this could eliminate the need for IR converted cameras?


IR converted cameras are a niche item, but IR surveillance cameras are a huge market!



keithcooper said:


> mackguyver said:
> 
> 
> > keithcooper said:
> ...


Thanks, and I guess it's a mystery to all of us...


----------



## Meh (Jun 27, 2014)

bvukich said:


> Max ☢ said:
> 
> 
> > Don, this is going to be a really meaningful innovation when an actual product hits the shelves... until then this remains only a patent.
> ...



Fully agree. It shows Canon is working on next generation sensors. On the other hand, real products matter and many large tech companies file a lot of patents defensively so that no one else can develop a product and then sit on the technology rather than invest in developing real products.


----------



## Meh (Jun 27, 2014)

Does the comment from Northlight that the patent doesn't show any pixel structure make sense? If it's a layered (Foveon-type) sensor then there wouldn't be any "pixel structure" per se.


----------



## Lawliet (Jun 27, 2014)

Lee Jay said:


> This is likely to mean very near IR and very near UV, and thus existing lenses would be okay. Far UV would be removed by the glass, as would far IR.



Very near would be enough to solve, for example, the purple/violet problem, i.e. colors that would be represented as a red/blue blend in RGB but, being of shorter wavelength than blue, only register on the blue sensor cells and so shift colors.


----------



## Meh (Jun 27, 2014)

Lawliet said:


> Lee Jay said:
> 
> 
> > This is likely to mean very near IR and very near UV, and thus existing lenses would be okay. Far UV would be removed by the glass, as would far IR.
> ...



"Red/blue blend" and "shorter wavelength than blue" doesn't quite jive... can you explain further what you mean?


----------



## rrcphoto (Jun 27, 2014)

dadgummit said:


> I wonder if this could eliminate the need for IR converted cameras? Maybe with this sensor you could only record the light from one of the layers.
> I know this is a small market and not at all likely the original point of the sensor, but it could potentially be a happy side effect.


It certainly could. While it's a small market, the ability to flip a sensor and shoot strictly UV or IR or a combination would be incredible; and there are more converted cameras out there than some give credit for.


----------



## Lawliet (Jun 27, 2014)

Meh said:


> "Red/blue blend" and "shorter wavelength than blue" doesn't quite jive... can you explain further what you mean?



You can get violet hues either directly from the pigment or by mixing red and blue (additive color mixing is the keyword; try two flashlights with gels for experimenting). Your screen does the latter. Nature has a bit of both.
Now look at a picture of a rainbow, preferably a drawing rather than a photo: the colors go red (long wavelength), orange, yellow, green, blue, and then the violet hues the camera mistakes for blue, because the red you'd require to mix the color is so far away that it doesn't register on the corresponding sensor cells.

So you can have two problems. One is really bad reproduction of some colors: think flowers, minerals and such. The other occurs if two things have the same color but get it the two different ways described at the start: half the stuff will be properly pink, magenta, violet, but the other half renders in blue. And you can't even explain that this is the way it's supposed to be...
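The gamut side of this can be illustrated with a quick calculation: spectral violet around 420 nm lies outside the sRGB triangle, so an RGB pipeline has no faithful way to reproduce it. The color-matching values below are from the standard CIE 1931 2-degree tables and the matrix is the usual linear XYZ-to-sRGB conversion; this is just an illustration of the violet problem, not anything from the patent.

```python
# Where does spectral violet (~420 nm) land in sRGB?
# CIE 1931 2-degree color-matching values at 420 nm (standard tables).
X, Y, Z = 0.13438, 0.00400, 0.64560

# Standard linear XYZ -> sRGB (D65) conversion.
r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z

print(round(r, 3), round(g, 3), round(b, 3))
# g comes out negative: the color is outside the sRGB gamut, so a naive
# RGB pipeline clips it and the violet shifts toward plain blue.
```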


----------



## Meh (Jun 27, 2014)

rrcphoto said:


> dadgummit said:
> 
> 
> > I wonder if this could eliminate the need for IR converted cameras? Maybe with this sensor you could only record the light from one of the layers.
> ...



Not likely. IR or UV converted cameras typically filter for just IR or UV in order to get unique images based just on those wavelengths that we can't see. The images are "false color images" with the variation in IR (or UV) mapped back to visible wavelengths. I would suspect that this new 5 layer sensor tech would not be designed to pick up wavelengths too far from visible... rather, just extending slightly into the IR and UV in order to use that information to improve color rendering at the edges and possibly correct better for color shifts and other optical anomalies.


----------



## Meh (Jun 27, 2014)

Lawliet said:


> Meh said:
> 
> 
> > "Red/blue blend" and "shorter wavelength than blue" doesn't quite jive... can you explain further what you mean?
> ...



Except that the human eye works in a similar way as an RGB sensor so your eye would make the same "mistake" and therefore it wouldn't be a mistake relative to our vision.

I believe (I'm no expert) the fact that humans perceive a mix of red and blue to be "visible purple" is not the same thing as observing light of a "violet" wavelength. If you look at an object and see it as purple it actually is preferentially reflecting red and blue wavelengths of light. Therefore an RGB sensor would not be confused by that... the blue pixels would register the blue photons and the red pixels would register the red photons just like our eyes do.


----------



## Lawliet (Jun 27, 2014)

Meh said:


> the blue pixels would register the blue photons and the red pixels would register the red photons just like our eyes do.



No; sensels separate wavelengths relatively sharply via filters, while L-cone cells are still somewhat sensitive to short wavelengths, akin to the spectral response of a Foveon sensor.
Take a sample of cobalt violet, for example: light reflected off it has no spike in the red band; it absorbs red light about as well as black does.


----------



## brianleighty (Jun 27, 2014)

Nobody's brought it up, so I'll mention it. Isn't the timing on this interesting, in that Canon just released a new version of DPP that isn't backwards compatible? Maybe the new RAW format is already in the software?


----------



## Meh (Jun 27, 2014)

Lawliet said:


> Meh said:
> 
> 
> > the blue pixels would register the blue photons and the red pixels would register the red photons just like our eyes do.
> ...



Technically true, the response curves of our cone cells do not have sharp cut-offs, but please define "somewhat sensitive to short wavelengths"... if by that you mean "close to zero" then you are right. If you observe short-wavelength light your L cones register a tiny response, but the response in the S cones would be orders of magnitude higher, and your brain would register that as blue light. Similarly, incident green light would cause a response in all cones almost equally, but your brain knows it is green rather than white because of the relative responses to the blue and red components.

Our brains have to be more complex to deal with the overlap and larger range of response patterns, but that still does not mean our eyes, or a sensor, would be confused by UV light... your eye simply will not see UV light as purple. Our visual perception of "visible purple" is NOT the observation of near-UV light.
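The disagreement here is mostly about the shape of the cone response curves, which is easy to sketch. Single Gaussians are a crude stand-in for measured cone fundamentals (the peak and width numbers below are illustrative, not real data), and they deliberately omit the L cone's small secondary short-wavelength lobe that Lawliet is referring to:

```python
import math

# Toy single-Gaussian stand-ins for human cone sensitivities.
# Peak/width values are rough illustrative numbers, not measured data.
def cone(peak_nm, width_nm, wavelength_nm):
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def responses(wavelength_nm):
    s = cone(445, 30, wavelength_nm)  # S ("blue") cone
    m = cone(540, 45, wavelength_nm)  # M ("green") cone
    l = cone(565, 50, wavelength_nm)  # L ("red") cone
    return s, m, l

s, m, l = responses(420)  # violet / near-UV end of the visible spectrum
print(s, m, l)
# In this model S dominates by orders of magnitude, which is Meh's point;
# real L cones add a small extra bump near 420 nm, which is Lawliet's point.
```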


----------



## keithcooper (Jun 27, 2014)

Meh said:


> Does the comment from Northlight that the patent doesn't show any pixel structure make sense? If it's a layered (Foveon-type) sensor then there wouldn't be any "pixel structure" per se.


I was referring to there being no details of the internal structure of the light sensitive regions, or positions of wiring interconnects and the like.

Compare it, for example, to the Canon patent drawing I linked earlier. This one is very much a block diagram - although I like the assorted boxes and stuff on the underside, to suggest a BSI sensor...


----------



## zim (Jun 27, 2014)

Got excited at the start; now thinking, mmm, this is for surveillance cameras. Aren't patents filed long before any (if any) product sees the light of day?? jrista...... where are you? Help ;D ;D ;D ;D


----------



## Cheryll (Jun 27, 2014)

Canon Rumors said:


> A patent showing a 5-layer image sensor from Canon has appeared. UV and IR layers help with color reproduction, especially for skin tones.




New Patent here new Sensor there
New Patent here new Sensor there
New Patent here new Sensor there

Canon, it is time to give us camera(s) with these new technologies and let us test how good they are ;D


----------



## Meh (Jun 27, 2014)

keithcooper said:


> Meh said:
> 
> 
> > Does the comment from Northlight that the patent doesn't show any pixel structure make sense? If it's a layered (Foveon-type) sensor then there wouldn't be any "pixel structure" per se.
> ...



Ah ok, got what you mean now.


----------



## wsmith96 (Jun 27, 2014)

Perhaps this is part of the reason there was a DPP 4.0 redesign. Maybe there is more in that tool than is "turned on" right now to allow for such future sensors.


----------



## danski0224 (Jun 27, 2014)

Canon Foveon... bring it


----------



## LetTheRightLensIn (Jun 28, 2014)

Hmm, someone on another site showed a different version where they show faces and stuff that makes it more clear.

Oh I think I get it now, actually kinda disappointing. This reminds me of the Nikon Scanners with Infrared ICE technology to help remove dirt and dust and such. The fact they talk about skin and wrinkles and blemishes makes me think they want to grab some UV and IR and use it as a sort of auto skin-specialized fine bump/wrinkle/blemish remover. Maybe helpful for particular types of portraits, but nothing of any use for anyone else, I'd think. Actually I guess it is sort of different than IR ICE, as here it takes the IR hot spots as the better parts, but using a bit of UV, IR and visible it's the same sort of idea in the grandest scheme of things.

I hope this isn't the big thing that is coming and that they tested at a portrait studio the other month.

(Unless the regular three Foveon-like layers themselves do wonders when you have the IR and UV stuff shut off, or I don't get what they are trying to imply.)


----------



## LetTheRightLensIn (Jun 28, 2014)

LetTheRightLensIn said:


> Hmm someone on another site showed a different version where they show faces and stuff that makes it more clear.
> 
> Oh I think I get it now, actually kinda disappointing. This reminds me of the Nikon Scanners with Infrared ICE technology to help remove dirt and dust and such. The fact they talk about skin and wrinkles and blemishes makes me think they want to grab some UV and IR and use it as a sort of auto skin-specialized fine bump/wrinkle/blemish remover. Maybe helpful for particular types of portraits, but nothing of any use for anyone else, I'd think.
> 
> ...



They have a diagram with more to it here:
http://thenewcamera.com/


----------



## LetTheRightLensIn (Jun 28, 2014)

I guess there could be a large market for helping to automatically clean up skin and complexions a bit (with all the selfies, portrait shots, and quick snapshots of friends taken these days). For any other purpose, though, I think this patent is nothing doing, and some are reading into it stuff it's not meant for at all, I suspect.

There is a chance this is aimed at P&S only; hard to say.


----------



## jrista (Jun 28, 2014)

Cheryll said:


> Canon Rumors said:
> 
> 
> > A patent showing a 5-layer image sensor from Canon has appeared. UV and IR layers help with color reproduction, especially for skin tones.
> ...



Hah! I agree. Canon has a lot of really good patents for camera sensors...they just never seem to apply them. I'd love to have a 120mp APS-H that can do 9.5fps...I really wonder why they haven't stuffed that wonder into an actual DSLR and just trounced all the competition.


----------



## Jan (Jun 28, 2014)

brianleighty said:


> Nobody's brought it up, so I'll mention it. Isn't the timing on this interesting, in that Canon just released a new version of DPP that isn't backwards compatible? Maybe the new RAW format is already in the software?


Good point. I thought about it too... but why should the 6D, 1D X... already use the new RAW format?


----------



## zalas (Jun 28, 2014)

LetTheRightLensIn said:


> Oh I think I get it now, actually kinda disappointing. This reminds me of the Nikon Scanners with Infrared ICE technology to help remove dirt and dust and such. The fact they talk about skin and wrinkles and blemishes makes me think they want to grab some UV and IR and use it as a sort of auto skin-specialized fine bump/wrinkle/blemish remover. Maybe helpful for particular types of portraits, but nothing of any use for anyone else, I'd think. Actually I guess it is sort of different than IR ICE, as here it takes the IR hot spots as the better parts, but using a bit of UV, IR and visible it's the same sort of idea in the grandest scheme of things.


Yes, you have the right idea. The original Japanese blog post says that the patent assumes the availability of a 5-channel sensor and focuses on what you can do with the extra information -- namely extracting the blemishes and wrinkles from the UV image and using the IR image to extract where the skin is visible, combining the two pieces of information to do blemish and wrinkle removal without reducing contrast or the feeling of depth (I assume it means that current algorithms remove shadows by accident).
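A minimal sketch of the pipeline zalas describes, assuming the 5-channel data is already available: an IR threshold stands in for skin detection, a UV threshold stands in for blemish detection, and a simple box blur stands in for whatever correction step Canon actually intends. All names, thresholds, and the smoothing step are invented for illustration; nothing here is from the patent itself.

```python
import numpy as np

def retouch(visible, uv, ir, ir_thresh=0.6, uv_thresh=0.3):
    """Toy sketch of the idea zalas describes (not Canon's actual method):
    high IR response marks skin, strong UV response marks blemishes,
    and the visible image is smoothed only where both agree."""
    skin = ir > ir_thresh            # IR signal high -> treat as skin
    blemish = uv > uv_thresh         # UV signal high -> blemish/wrinkle
    target = skin & blemish          # retouch only blemished skin

    # Cheap 3x3 box blur as a stand-in for a real inpainting step.
    pad = np.pad(visible, 1, mode="edge")
    smoothed = sum(pad[dy:dy + visible.shape[0], dx:dx + visible.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0

    return np.where(target, smoothed, visible)

# Tiny synthetic example: a flat "skin" patch with one dark blemish pixel.
vis = np.full((5, 5), 0.8)
vis[2, 2] = 0.2
uv = np.zeros((5, 5)); uv[2, 2] = 1.0   # blemish shows strongly in UV
ir = np.ones((5, 5))                    # whole patch reads as skin in IR
out = retouch(vis, uv, ir)
```

On the toy patch, only the pixel flagged by both masks is pulled toward its neighbours; everything else passes through untouched, which matches the "without reducing contrast" goal in the description.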


----------



## wickidwombat (Jun 28, 2014)

mackguyver said:


> keithcooper said:
> 
> 
> > Purely coincidence of course ;-)
> ...


the 7D licks balls and the 1Dx doesn't... well thats the short version


----------



## candyman (Jun 28, 2014)

jrista said:


> Cheryll said:
> 
> 
> > Canon Rumors said:
> ...




But if they do, then what is next? After that it will be difficult to come up with something revolutionary again.
Look at cars. They develop a concept car, and over the next three models you see technology & design from that concept car incorporated in the new models.


----------



## leGreve (Jun 28, 2014)

Don Haines said:


> very interesting....
> 
> BTW, those posters who claim Canon has a lack of innovation... comments please?



Was it 7 or so years ago that Sigma presented a 3-layered sensor that unfortunately didn't become a hit? Canon just added IR and UV and you call that innovation... 7 years later.
I suspect Canon maybe bought Sigma's patent and let it rot on the shelf.


----------



## Dylan777 (Jun 28, 2014)

dilbert said:


> jrista said:
> 
> 
> > Cheryll said:
> ...



I would say... another failure in the Nikon lineup


----------



## jrista (Jun 28, 2014)

bvukich said:


> Max ☢ said:
> 
> 
> > Don, this is going to be a really meaningful innovation when an actual product hits the shelves... until then this remains only a patent.
> ...



Aye. This is basically "'photoshopping' in a chip". Lately, it seems, for many people and photographers who do portraiture, 'photoshopping' has kind of fallen by the wayside. Ultra-crisp, ultra-sharp, ultra-detailed portraits, with all the blemishes, seem to be a growing "in" thing these days.


----------



## dgatwood (Jun 28, 2014)

dilbert said:


> jrista said:
> 
> 
> > Cheryll said:
> ...



I'd say, "I hope they came up with a novel compression scheme that provides lossless reproduction without taking up so much space."

And chances are, they would. I'm amazed at how big Canon's CR2 files are. My 6D's RAW files are somewhere on the order of 25-30 megs for an 18 MP photo, which comes out to (on average) about 12 bits per sample, or only about a 5–10% reduction over raw, uncompressed 14-bit data. I think a reasonably competent data compression engineer ought to be able to quadruple that compression rate while drunk.

And as resolution increases, I'd expect to see, on average, smaller and smaller differences between adjacent pixels, so I would think that a particularly intelligent encoding ought to be able to losslessly do far better than 2:1. With that said, data compression isn't my specialty, so I could easily be wrong. For example, one approach that springs to mind is to do a lossy encoding at a reasonably high quality (say an 8:1 wavelet encoding), decode it, subtract the result from the original raw data, and Huffman-encode the resulting difference signal, which should be mostly zeroes....

Of course, this would require real CPUs with serious horsepower in the cameras....
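The scheme in the previous paragraph (lossy base layer, subtract, entropy-code the residual) can be sketched in a few lines. Coarse quantisation stands in for the wavelet step, zlib stands in for the Huffman coder, and the "raw" frame is synthetic; this is a toy illustration of the idea, not Canon's CR2 format.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 14-bit "raw" frame: smooth gradient plus mild noise, so
# neighbouring samples are highly correlated, as assumed above.
base = np.linspace(2000, 14000, 256 * 256).reshape(256, 256)
raw = np.clip(base + rng.normal(0, 20, base.shape), 0, 16383).astype(np.uint16)

# Cheap stand-in for the "lossy encoding" step: coarse quantisation
# (a real codec would use wavelets, as the post suggests).
lossy = (raw >> 6) << 6

# The residual is small and clustered near zero, so it entropy-codes well.
# zlib here stands in for the Huffman stage described above.
residual = raw - lossy

encoded = zlib.compress(lossy.tobytes(), 9) + zlib.compress(residual.tobytes(), 9)
print(raw.nbytes, len(encoded))
```

Because the residual is kept exactly, `lossy + residual` reproduces the original bit-for-bit, which is the lossless property being asked for; how much smaller the encoded stream gets depends entirely on how correlated the real sensor data is.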


----------



## jrista (Jun 28, 2014)

dilbert said:


> jrista said:
> 
> 
> > Cheryll said:
> ...



Oh, I've made no claims that 120mp images would be "nice" to work with. They would be an utter pain to work with. Even my beast of a new computer, with 32GB of RAM and an overclocked 4930K, would have trouble. Doesn't change the fact that if Canon stuffed their ALREADY EXISTING 120mp APS-H sensor in a 9.5fps camera, I'd buy it in a heartbeat.

With that many pixels, you would always be downsampling, unless you were printing 40x30" 300ppi prints (which would be the native size for a 120mp image, or ~90x60" at 150ppi !!! ;D). For "normal" prints, or even display at full size on 4k screens, the IQ of downsampled output from processed 120mp images would trounce anything out there, even if the pixels weren't _technologically_ the best: in terms of noise only minimally (you would still suffer a little from having less total area than FF), but especially in terms of color fidelity, crispness of detail and sheer resolution. I'd deal with the processing hassle for that. It would be a significant enough improvement in raw resolution that few other factors would matter...

The D800? It's fantastic at low ISO, which makes it excellent for landscapes. At high ISO? In practice, the difference is minimal at best, and at worst the D800 exhibits more color noise than the 6D, 5D III, or 1D X at high ISO. Not enough of a raw resolution margin, and certainly not enough frame rate, to make me go out and buy one in a heartbeat like a 120mp 9.5fps sensor would. A 36mp sensor isn't even twice as many pixels as the 5D III, 5D II/1Ds III, etc. A 120mp APS-H on the other hand...that's more than FIVE TIMES the pixel count as a 5D III, and still over three times that of a D800. 



dilbert said:


> Let me put this another way. If Nikon or Sony debuted a camera next week with a 120MP APS-H sensor, what do you think the forums here would have to say about it?



If they debuted a *CAMERA* next week with a 120mp APS-H sensor, it would be big news. If they debuted a 120mp _SENSOR_ next week...eh, copycats. Already done.

APS-H is Canon's thing. I believe they hold patents for it. That renders the point moot. 

So what if SoNikon drop a 120mp FF sensor on everyone next week? Again, not going to happen. Sony already has 50mp sensors...but the only cameras using one of them are MFDs. So again, that renders the point moot. There is no IF about Canon's 120mp APS-H...it has actually been done, the thing exists...Canon is just sitting on it until it becomes a more lucrative product (logically, and from a business standpoint, jumping suddenly from 20-30mp sensors to 100mp-range sensors is a BIG jump...it cuts out a lot of interim improvements that Canon could be making money off of for....YEARS.)

It'll be interesting to see if Nikon uses one of the FF versions of that in a camera...then we would be talking about a 2x pixel count improvement over anything Canon has...that would be interesting. Again, from a logical and business standpoint, probably not going to happen. Especially for Nikon...Nikon desperately NEEDS to milk, and I mean REALLY MILK, EVERY advantage they have to restore their business to health. If they rapidly jump from 36.3mp to 50mp or so, they are wasting opportunities. It would be a terrible business decision. That's not to say they won't...Nikon execs don't seem to have the business sense that many of their competitors, especially Canon, have. Nikon burns a lot of resources on too-rapid R&D cycles, niche and fad products, etc., and it's hurt their bottom line.


----------



## Cheryll (Jun 29, 2014)

jrista said:


> I'd love to have a 120mp APS-H that can do 9.5fps...I really wonder why they haven't stuffed that wonder into an actual DSLR and just trounced all the competition.



Its a wonder for Photographers who need much MP.
I want a camera with extremely lowlight performance (like or better than Sony a7s). A 120 MP Sensor hasn't it :-\


----------



## Lawliet (Jun 29, 2014)

dgatwood said:


> And chances are, they would. I'm amazed at how big Canon's CR2 files are. My 6D's RAW files are somewhere on the order of 25-30 megs for an 18 MP photo, which comes out to (on average) about 12 bits per sample, or only about a 5–10% reduction over raw, uncompressed 14-bit data.



Step 1: throw out the JPG preview, which has to be of high enough resolution to check for details & focus.
Just convert the actual crop from the RAW for high magnifications.


----------



## 9VIII (Jun 29, 2014)

Cheryll said:


> jrista said:
> 
> 
> > I'd love to have a 120mp APS-H that can do 9.5fps...I really wonder why they haven't stuffed that wonder into an actual DSLR and just trounced all the competition.
> ...



In all the tests I've seen of low-MP vs. high-MP sensors, once you bring them to the same resolution you get the same noise. The D810 is getting a one-stop boost to high ISO, just like everything else.


----------



## Cheryll (Jun 29, 2014)

9VIII said:


> In all the tests I've seen of low-MP vs. high-MP sensors, once you bring them to the same resolution you get the same noise. The D810 is getting a one-stop boost to high ISO, just like everything else.



A user here plans a test between the Canon 5D Mark III and the Sony a7s. Another photographer and I gave him the tip to downsample the pictures and videos to see the difference better. I'm curious about the results.
I have read a test with downsampling against the a7r; the result is that the a7s is 1.5 stops better than the a7r, at the same resolution.

See here:
Sony not only enlarged the pixel size; Sony made more changes to the sensor to bring more light to it. The result: even after downsampling, the a7s is better than other cameras. Only a little better, but better..

http://www.sony.jp/ichigan/products/ILCE-7S/feature_1.html


----------



## jrista (Jun 30, 2014)

Cheryll said:


> jrista said:
> 
> 
> > I'd love to have a 120mp APS-H that can do 9.5fps...I really wonder why they haven't stuffed that wonder into an actual DSLR and just trounced all the competition.
> ...



Pixel size doesn't matter for low-light performance. Total sensor area and quantum efficiency matter. It doesn't matter how finely you divide the light you're receiving and converting into free charge. If you increase the amount of light you're receiving (more total sensor area) and increase the rate at which incident photon strikes convert to electrons, then you have better high-ISO performance. It wouldn't matter if you had 10mp, 50mp, 120mp, or 500mp.

The notion that pixel size affects noise is largely a myth. All pixel size does is make noise finer. On a normalized basis, i.e. when you render images at the same size, there is little difference in noise but a huge difference in detail and resolution when moving to a higher resolution sensor. The only reason there is a TINY (and imperceptible to the human eye) difference in noise with smaller pixels is fill factor: with more pixels, you have more sensor area dedicated to transistors and wiring, and less to photodiode. You need mathematical tools to determine the difference, though. (Something like PixInsight's Statistics script, which can derive a whole host of details about an image, including noise STDevs, could tell you; and if you significantly magnified, overlaid, and compared by alternating back and forth, you MIGHT be able to tell the difference with your bare eyes. But on a normalized basis... there is never anything bad about having more pixels.)
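The normalization argument is easy to check with a shot-noise-only simulation: give the same sensor area either one big pixel or a 3x3 block of small pixels, bin the small ones back down, and compare SNR. The photon counts are arbitrary, and read noise and fill factor (the caveats above) are deliberately left out.

```python
import numpy as np

rng = np.random.default_rng(1)

photons = 900                      # mean photons per "big pixel" worth of area
big = rng.poisson(photons, size=(200, 200))

# Same sensor area split into 3x3 smaller pixels: each sees 1/9 of the light.
small = rng.poisson(photons / 9, size=(600, 600))

# Normalise by binning the small pixels back to the big-pixel grid.
binned = small.reshape(200, 3, 200, 3).sum(axis=(1, 3))

snr_big = big.mean() / big.std()
snr_binned = binned.mean() / binned.std()
print(snr_big, snr_binned)
```

With pure Poisson statistics the two SNR values come out essentially identical: the small pixels only chop the same photon stream into finer pieces, and summing the pieces recovers it.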


----------



## Cheryll (Jun 30, 2014)

jrista said:


> If you increase the amount of light you're receiving (more total sensor area) and increase the rate at which incident photon strikes convert to electrons, then you have better high-ISO performance. It wouldn't matter if you had 10mp, 50mp, 120mp, or 500mp.



The sensor in the Sony a7s has a different construction to bring more light to the photodiode.

And a question: what about the sensitivity of the sensor itself, as with the not-yet-built graphene sensor? That sensor needs fewer photons per electron conversion, so it would need less light to build the same picture for which a current CMOS sensor needs more light?



jrista said:


> The notion that pixel size affects noise is largely a myth. All pixel size does is make noise finer.



Do you mean that if you downsample a picture from 120 MP (or 36 MP) to 12 MP, the noise is the same as from a 12 MP camera with a larger pixel area?


----------



## PhotographerJim (Jun 30, 2014)

jrista said:


> Cheryll said:
> 
> 
> > jrista said:
> ...



I think it comes from film days, where the higher the film speed, the more grain, due to the larger silver halide crystals that made the film more sensitive to light.


----------



## Orangutan (Jun 30, 2014)

jrista said:


> Pixel size doesn't matter for low light performance. Total sensor area and quantum efficiency matter. It doesn't matter how finely you divide the light you're receiving and converting into free charge. If you increase the amount of light you're receiving (more total sensor area) and increase the rate of incident photon strikes to electron conversions, then you have better high ISO performance. It wouldn't matter if you had 10mp, 50mp, 120mp, or 500mp.



I get this, but I've wondered whether there _might_ be some truth to the myth, though not in the way many people imagine. While I accept that your explanation is true, it applies when using identical tech throughout the sensor. I've wondered whether it's disproportionately more expensive to make high-density sensors and whether some compromises would be made to keep the costs of the higher MP sensors within reason. The practical result would be that higher MP had worse low-light performance, but only because it's not identical sensor tech.


----------



## Woody (Jun 30, 2014)

jrista said:


> The notion that pixel size affects noise is largely a myth. All pixel size does is make noise finer.



This myth was championed and promulgated by former chief editor of DPReview, Phil Askey.


----------



## GaryJ (Jun 30, 2014)

Cheryll said:


> jrista said:
> 
> 
> > I'd love to have a 120mp APS-H that can do 9.5fps...I really wonder why they haven't stuffed that wonder into an actual DSLR and just trounced all the competition.
> ...


+1


----------



## jrista (Jun 30, 2014)

Orangutan said:


> jrista said:
> 
> 
> > Pixel size doesn't matter for low light performance. Total sensor area and quantum efficiency matter. It doesn't matter how finely you divide the light you're receiving and converting into free charge. If you increase the amount of light you're receiving (more total sensor area) and increase the rate of incident photon strikes to electron conversions, then you have better high ISO performance. It wouldn't matter if you had 10mp, 50mp, 120mp, or 500mp.
> ...



There has certainly been a LOT of research into making smaller sensors (which pretty much always have smaller pixels) more sensitive to light. That research undoubtedly has cost billions. That said, most of the research into making better small pixels has been done to make ultra tiny sensors viable...the kinds of 1/3" down to around 1/8" sized sensors found in small compact cameras, tablets, phablets, phones, and every other device that uses a microscopic sensor. Each of those sensors is usually a tiny fraction of the cost of one APS-C or FF sensor, though, despite having considerably smaller pixels (between 1 and 2 microns these days, with a new generation of sub-micron pixel sensors coming very soon.)

The reason those sensors have problems with noise, again, isn't because of the small pixels...it's the small sensor area. They are WAY smaller than even an APS-C. A couple orders of magnitude smaller at least, if not many more. To have enough pixels to be useful on such small sensors, the pixels themselves have to be tiny. That doesn't increase noise...all it means is that the sensor is "resolving" and/or "exhibiting" noise at a higher frequency. Blend a 2x2 matrix of pixels together with a median algorithm, and you would have the same noise as a sensor with pixels twice as large (linearly, 4x as much area...again, assuming similar tech, however within a given generation of cameras, sensor tech is usually very similar). These tiny sensors in tiny cameras in all the tiny devices we have these days perform so well because they actually use significantly better technology than what is found in our DSLRs. These tiny cameras employ some cutting edge science to increase their light gathering capacity, increase photodiode surface area, increase quantum efficiency, use per-pixel memories to increase charge capacity, etc. If a full-frame DSLR had the same kind of technology as a 1/8" sensor, we would have something like an 864mp, 15fps, ISO 1.6 million MONSTER that used color splitting (rather than color filtration) with at least 24 stops of dynamic range thanks to multi-bucket memories, digital readouts, black silicon (basically silicon that uses the same general technology as nanocoated lens elements to eliminate reflection), and a host of other advancements. A full-frame sensor in a DSLR that used the same technology as the microsensor in an upcoming iPhone or Android would be utterly mind blowing. (Not to mention space guzzling...we would need a new kind of storage technology to handle 2.7GB per RAW. )

BTW, when I talk about noise in this context, I am pretty much referring to random sources of noise. That is primarily photon shot noise, as well as a bit of random noise from dark current and the random component of read noise. Pattern noise, which is always due to the electronics, is a different story. That is a matter of specific technological construction, materials, and sensor design. Pattern noise is usually buried very deeply within the signal, though, and unless you're lifting your shadows by many stops, it is usually a non-factor. Photon shot noise and dark current are really the big ones. In normal photography, dark current is pretty much inconsequential, as CDS takes care of it (in astrophotography, dark current can be your worst enemy, as it accumulates with time....ugh...)

It's this difference in noise frequency...all noise frequency, particularly random noise frequency...where image normalization matters (LTRLI will like this). Dynamic range is talked about a lot, however it's usually talked about in the context of editing latitude: "How many stops can I lift my shadows?" That is certainly a factor of dynamic range, and clearly the one that everyone cares about today. Increasing dynamic range in such a way that you gain editing latitude means reducing read noise so that the original RAW, unscaled, has less noise in the shadows, thereby increasing the usable range of bit depth in the RAW image. Dynamic range is also affected by sources of noise other than just the pattern read noise, however: all random sources of noise affect it as well, including random noise introduced during read as well as the primary source of random noise, photon shot noise.

In order to compare noise of cameras with different size sensors, one must normalize their outputs: scale them to the same size. It really doesn't matter if you scale up or down, however scaling down to a common target is usually the approach taken. Assuming you downsampled the images from a number of cameras, all with different sensor sizes but all with the same pixel count, to the same image size, say an image with 2000 pixels on the long side, you'll find that the larger the sensor, the lower the noise. If we instead had a set of cameras where the larger sensors had fewer pixels and smaller sensors had more pixels, again we would still see that the larger sensor had less noise...however we would also find that the smaller sensors had more detail. The thing about detail is, especially when there is a lot of it, it tends to drown out noise. This is a perceptual matter...the noise of the smaller sensors with smaller pixels is still higher, statistically speaking (i.e. if it was measured), however that higher level of noise is more readily recognized when it occurs in smooth areas, gradients and solid areas (i.e. background bokeh).

The perceptual factor is difficult to nail down...it's highly subjective...but it does play a role in whether we as humans THINK one camera is noisier than another. This is actually one of the big problems with the 7D. It still has a very high resolution sensor...its pixels are still a lot smaller than those of the 5D III, 6D, and most other DSLRs on the market, with the exception of less than a handful (i.e. the 70D, a couple Nikon APS-C cameras). The reason the 7D is perceived as noisy is because it has a tendency to be a bit soft. It's got a "strong" AA filter (personally, I think it's just right for the job it was designed to do, but it does blur more than a lot of AA filters on newer cameras these days), and that strong AA filter eliminates a certain amount of high frequency detail...high frequency detail that would otherwise drown out noise. (The other problem is that the 7D doesn't actually gather as much light as newer counterparts, even including some of the lower end Rebels that ended up with the same sensor...the 7D can only gather a charge of about 20ke- per pixel, vs. say the 70D, which gathers nearly 27ke- per pixel...per SMALLER pixel, which indicates the 70D is gathering almost 50% more light than the 7D within the same sensor area.) The 7D isn't necessarily much noisier than its counterparts and competitors...it just SEEMS noisier because it's a bit softer, and that softer detail has a harder time drowning out noise with meaningful information. I also think, in practice, that the 7D's noise is more difficult to clean up, as photon shot noise isn't "crisp" and just per-pixel...it kind of "bleeds" into multiple pixels (probably because of the AA filter).
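The "almost 50% more light" figure above can be sanity-checked from the quoted per-pixel charge capacities (20ke- vs. 27ke-) and the cameras' pixel counts (18MP for the 7D, 20.2MP for the 70D); both sensors are APS-C, so the comparison is over the same total area:

```python
# Rough check of the charge-capacity comparison above, using the
# spec figures quoted in the post (both sensors are APS-C, so they
# share the same total sensor area).
cam_7d  = {"mp": 18.0, "full_well_e": 20_000}
cam_70d = {"mp": 20.2, "full_well_e": 27_000}

# Total charge capacity across the whole sensor, in MP * e-
total_7d  = cam_7d["mp"]  * cam_7d["full_well_e"]
total_70d = cam_70d["mp"] * cam_70d["full_well_e"]

gain = total_70d / total_7d - 1.0  # fractional increase, ~0.5
```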

Anyway, when it comes to sensors of the same size, the biggest differences are usually quantum efficiency and read noise (and, for some applications, dark current). The Sony Exmor, for example, is a superior sensor in all three of those categories. It has quite a bit more Q.E. than any Canon sensor (by as much as 15%), it has significantly lower read noise, and it actually also has less dark current (which only really matters for longer exposures.) Full frame Exmors are still the same area as the sensors in the 5D III and 1D X, but they gather a lot more light, and they introduce far less noise into the deep shadows. That's the only real difference. Assuming one created an exposure where the lowest pixel level was well above the read noise floor, you would find little significant difference between cameras with these sensors that actually had anything to do with the sensor. (You would find differences, but if you really looked into the reasons for those differences, I am willing to bet good money you would find the AF system, metering system, frame rate, and ability of the photographer to work quickly with the camera to change settings, find their subject, focus it, etc. as the key factors driving the differences in IQ.)
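The point that Q.E. and read noise differences mostly show up in the deep shadows can be illustrated with the standard per-pixel SNR model: signal electrons over the quadrature sum of photon shot noise and read noise. The numbers below are hypothetical, loosely in the range quoted in this thread:

```python
import math

def snr(photons, qe, read_noise_e):
    """Per-pixel SNR: signal electrons over the quadrature sum of
    photon shot noise and read noise (dark current ignored)."""
    signal = photons * qe
    return signal / math.sqrt(signal + read_noise_e ** 2)

# Hypothetical sensors loosely matching the figures in this thread
bright_a = snr(10_000, qe=0.50, read_noise_e=33)  # well-exposed pixel
bright_b = snr(10_000, qe=0.57, read_noise_e=3)
shadow_a = snr(40, qe=0.50, read_noise_e=33)      # deep shadow
shadow_b = snr(40, qe=0.57, read_noise_e=3)
```

In the well-exposed case the two sensors are within ~20% of each other (shot-noise limited); in the deep shadow the low-read-noise sensor's SNR is several times higher, which is exactly where the "lifting shadows" difference lives.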

I had an increasingly tough time with my 7D getting it to focus consistently...using the 5D III is EFFORTLESS...it practically works itself, and when I need to do anything, it's like it knows my mind. It's that factor right there, the ability to expend little effort using a camera to get good results, that makes Canon king of the DSLR. Canon is at the pinnacle of DSLR design. Their current generation of cameras are truly exquisite when it comes to making it easy, making it effortless, for the photographer to be a photographer, instead of a camera operator. I put off the 5D III for a good long while, largely because I wanted to see what the 7D II turned out to be. I rather regret that decision now, as even if the 7D II turns out to be phenomenal, and is just as effortless to use as the 5D III or 1D X...I spent an extra year hassling around with the 7D when I didn't really have to.

If you want low noise, go with a bigger frame, regardless of pixel count or size. If you want more detail, go with a smaller frame and more pixels. That's all that should really go into the decision making of whether to get a FF camera or an APS-C camera. Once you've picked one of those two things, then it's time to figure out which of all the other features will best serve your needs...and in my experience, it's all those other factors that are WAY, WAY more important. "Effortless"...that should really be Canon's new ad campaign. That's what Canon's current cameras do for you...they make photography effortless. I couldn't really give a crap about IQ minutiae when I can just point and shoot and the camera just does what I need it to.


----------



## Orangutan (Jun 30, 2014)

jrista said:


> Orangutan said:
> 
> 
> > jrista said:
> ...



You've written about all that before and, again, I don't disagree with any of it. I may not have made my point very clearly: I'm not talking about R&D, but about actual production costs. I presume that P&S sensors can tolerate a higher pixel defect rate than SLR-quality sensors, so yield is pretty high for those sensors. I _assume_ that keeping the defect rate down in order to get a reasonable yield is easier (hence cheaper) with recent, but not leading edge, technology. It's my non-expert understanding that there are many refinements that occur to get a beautiful new design to produce a high yield. I'm _assuming_ that this problem is increased for smaller pitch pixels and the needed smaller circuitry. Of course, once you get those production problems worked out the yield is comparable.


----------



## weixing (Jun 30, 2014)

Hi,
Got one question to ask: Does readout noise increase when resolution increase?

Have a nice day.


----------



## jrista (Jun 30, 2014)

Orangutan said:


> jrista said:
> 
> 
> > Orangutan said:
> ...



Regarding your very specific point, I'm honestly not sure. I don't know that fabricating a sensor with any of the modern pixel sizes is more expensive just because of the pixel size. Sensor transistors are pretty large these days...I mean, 180nm, 90nm? CPUs are using 22nm, with 14nm on the way. Yield certainly becomes a bigger issue with smaller pixels, however even sensors with pixels in the 1-2 micron range are still much more capable of handling defects than a processor. Based on ChipWorks analyses of multiple chips from camera manufacturers across the board, it is not uncommon for companies to farm out help for designing and fabricating their DSPs like DIGIC or EXPEED. On the other hand, Canon has continued to fabricate their own sensors.

Why hasn't Canon moved to a smaller process yet? Building a fab is a monster investment. That's hundreds of millions to a couple billion dollars just for one, depending on how small you need to fabricate transistors. I'm more inclined to assume Canon put that off simply because it's a gargantuan up-front outlay that they haven't critically needed until now. Up through the 5D III, I don't think Canon's use of an old 500nm fabrication process was hurting them. Now? I think that there is certainly a perception that Canon is really starting to lag behind the competition from a low-level technical standpoint, and spending a billion on a new high tech fab that can produce sensors with huge photodiodes and tiny transistors on large wafers is probably a worthwhile expenditure, both from a perceptual and technological standpoint. Would they do it because making sensors with smaller pixels is too expensive? They already make sensors with smaller pixels on a 180nm process using copper interconnects. They have been for years, and that is their cheaper fab (I think it's already on 300mm wafers). I think that there are increased costs when building any sensor, with any size pixels, when you're just working out the kinks in a particular process. Whether the pixels are big or small, I think moving to a more modern, advanced fab capable of creating more sensors on larger wafers with smaller transistors will actually, in the long term, be a huge cost saver, even (and maybe particularly when) they move to much smaller pixels.


----------



## jrista (Jun 30, 2014)

weixing said:


> Hi,
> Got one question to ask: Does readout noise increase when resolution increase?
> 
> Have a nice day.



In practice, no. Actually, in practice, it often seems to be the opposite, particularly in the case of Canon. Canon sensors have historically seen a drop in read noise as pixel size shrinks. The 1D X has nearly 39e- read noise, the 5D III has 33.1e-, while the 7D and its other 18mp siblings have about 8e- read noise. Sony Exmor sensors tend to have around 2.7e- to 3.3e- read noise at all ISO settings regardless of pixel size.

Read noise is not specifically tied to pixel size for CCD astro imagers, either. Some older generation KAF CCD sensors (originally Kodak, now TrueSense Imaging), like the KAF-11000 series with its 9µm pixels, used to have very high read noise...as much as around 40e-. Today, newer cameras with those same sensors have about 10e- read noise. Similarly, older KAF-8300 CCD cameras used to have 25-30e- read noise. Today, they often have as little as 7e- read noise.

"Read noise" is often used as an umbrella term for the conglomeration of potential electronic noise sources in a sensor. In more specific terms, read noise is the noise introduced by the readout electronics. Dark current is probably better kept separate (and in CCD cameras, read noise and dark current usually are specified independently). Dark current is a variable type of noise...it accumulates over time, and the rate of accumulation is ultimately dependent upon temperature. Dark current is intrinsic to all electronics, including sensors, and is therefore the one type of noise that may specifically be caused by the sensor electronics themselves.

Read noise, on the other hand, is usually introduced by "downstream" electronics. When the sensor signal is shipped over a bus and through processing electronics, such as amplifiers (in the case of CCDs) and ADC units, that's where it can pick up a lot of noise. Higher frequency components tend to add more noise, and with the exception of a very few sensors, most use high frequency downstream ADC units in external components that are one of the primary sources of read noise, and one of the primary reasons that dynamic range often falls off and flattens out at lower ISO.

CCD cameras often resort to alternate readout speeds as a means of lowering read noise. Some CCD cameras might take several seconds to read out one frame, and one frame of significantly lower resolution (maybe as little as a few megapixels) compared to your average DSLR sensor. This very low readout rate can reduce read noise by a significant degree. When it comes to astrophotography, that is often not a big deal. You are usually going to be doing other things while readout is occurring anyway, such as dithering (basically, moving the mount ever so slightly to offset the star positions, which helps greatly in reducing random noise once you stack). In the case of DSLRs, where high frame rate, often as high as 10fps or so, is a critical feature, then high frequency components are to be expected, and so is an increase in read noise. 

An increase in parallelism, or the number of units dedicated to high workload tasks, such as reading out pixels and converting them to digital units, is one way of reducing the need for high frequency output. Canon has also filed a patent for dual-scale ADC units, where they have the ability to switch to a slower readout rate, and thus a lower operating frequency, when a high readout rate is unnecessary. This could lead to even lower noise images for types of photography that are inherently "slow"...such as landscapes. The reduced noise leads to higher dynamic range, and everyone is happy.

So no...there really isn't any reason to link read noise with pixel size in general, and especially not with smaller pixel sizes. Read noise is a much more complex beast than that, and multiple things factor into determining how much read noise a given camera has.
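One way to see why "multiple things factor into" read noise is that independent noise sources in the read chain add in quadrature, so whichever stage is noisiest (often a fast off-chip ADC, per the discussion above) dominates the total. A sketch with made-up per-stage figures:

```python
import math

def total_read_noise(*stage_noise_e):
    """Independent noise sources add in quadrature (RMS sum)."""
    return math.sqrt(sum(n ** 2 for n in stage_noise_e))

# Hypothetical per-stage noise in electrons: pixel source follower,
# column amplifier, and the ADC, at two different readout speeds.
fast_chain = total_read_noise(2.0, 3.0, 12.0)  # fast ADC dominates
slow_chain = total_read_noise(2.0, 3.0, 2.5)   # slow, quiet readout
```

With the fast ADC the total is ~12.5e- (almost entirely the ADC); slowing the readout drops the total to ~4.4e-, which is the trade the slow-readout CCD cameras above are making.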


----------



## Michael Clark (Mar 7, 2018)

jrista said:


> Orangutan said:
> 
> 
> > jrista said:
> ...



Pixel size does affect full well capacity, which directly affects dynamic range.
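This effect is easy to quantify: per-pixel engineering dynamic range in stops is log2 of full well capacity over the read noise floor, so quartering the full well (roughly what halving the linear pixel size does) costs about two stops if read noise stays fixed. A sketch with hypothetical numbers:

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2 of full-well
    capacity over the read-noise floor."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical pixels: quartering the full well (about what halving
# the linear pixel size does) costs 2 stops at fixed read noise.
big_pixel   = dr_stops(80_000, 8)  # ~13.3 stops
small_pixel = dr_stops(20_000, 8)  # ~11.3 stops
```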


----------

