# Another of My Stupid Questions: Sensor Sizes



## surapon (Aug 19, 2014)

Dear Teachers and Friends.
Well, yes, I can take so-so or good photos, because I have been taking photos for a long time. But when it comes to the high tech of digital photography, I know almost nothing about this new technology.
My stupid questions are:
1) Does the size of the sensor matter, or does the MP count matter?
For example, the tiny sensor in the Nokia Lumia 1020 is 41 MP, compared to the Canon 1D X FF at 18.1 MP and the Canon 5D Mk III FF at 22.3 MP.
2) If two sensors are the same size and the same MP, does the camera company matter? Can one claim, "My Ca--- is sharper than your Ni---"?
3) What makes one company's sensor better than another company's sensor of the same size?
Thanks for your answers; they will bring me up to date on the new technology.
Have a great day, Sir/Madam.
Surapon


----------



## Don Haines (Aug 19, 2014)

surapon said:


> Dear Teachers and Friends.
> Well, yes, I can take so-so or good photos, because I have been taking photos for a long time. But when it comes to the high tech of digital photography, I know almost nothing about this new technology.
> My stupid questions are:
> 1) Does the size of the sensor matter, or does the MP count matter?
> ...



It's not really the size of the sensor that counts, it's the size of the pixel.

Make the pixel larger and it gathers more light. With more light you get better low light performance and you get more flexibility with exposure times and apertures...

Make the pixel smaller and you get more resolving power, but at the expense of less light, worse low light performance, and less flexibility with exposure times and apertures.

In the end, it all comes down to striking a balance that the public will accept. For example, make a 2Mpixel APS-C sensor and you have 2 1/3 stops MORE low light performance than a 5D3...but is anyone going to buy it?

Obviously, the larger the sensor, the more pixels of your chosen size will fit... but as sensors get larger, so does the size of the lens required to get the same field of view, and prices go up astronomically.
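To put rough numbers on this trade-off, here is a quick sketch (sensor dimensions are nominal, and the 2 Mpixel APS-C camera is the hypothetical one from the post above):

```python
import math

def pixel_area_um2(width_mm, height_mm, megapixels):
    """Average light-collecting area of one pixel, in square microns."""
    return (width_mm * height_mm) / (megapixels * 1e6) * 1e6  # mm^2 -> um^2

ff_22mp = pixel_area_um2(36.0, 24.0, 22.3)    # 5D Mark III-class full frame
aps_c_2mp = pixel_area_um2(22.3, 14.9, 2.0)   # hypothetical 2 Mpixel APS-C

stops = math.log2(aps_c_2mp / ff_22mp)        # per-pixel light advantage
print(f"FF 22.3MP pixel: {ff_22mp:.0f} um^2")
print(f"APS-C 2MP pixel: {aps_c_2mp:.0f} um^2")
print(f"Advantage: {stops:.1f} stops")        # a bit over 2 stops
```

The result lands a little over two stops, in the same ballpark as the figure quoted above.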


----------



## surapon (Aug 21, 2014)

Don Haines said:


> surapon said:
> 
> 
> > Dear Teachers and Friends.
> ...



A thousand thanks, Dear Mr. Don Haines.
A great, clear answer from you, and I learned something new today. Yes, bigger pixels gather more light than smaller pixels at the same sensor size, which means higher usable ISO and less digital noise. That is why the lower-MP FF sensor of the Canon 1D X is a lot better than the 41 MP cellphone camera with its tiny sensor.
Thank you, Sir.
Have a great Day.
Surapon


----------



## PureClassA (Aug 21, 2014)

There are no stupid questions ;D

Most folks in general do not understand the relationship between sensor size, pixel density, and pixel size. For many, the picture they take on their phone looks great on that little 4 inch screen. Put it on an 8 by 10 print and it doesn't look so great anymore.

Most folks who own good DSLRs are shooting crop sensors (APS-C). Those sensors are already about 20 times bigger than the one in an iPhone. Full frame is roughly 2.5 times as big as that. Mr. Haines nailed it for you. 

But again there are never stupid questions when you're trying to learn!


----------



## surapon (Aug 21, 2014)

PureClassA said:


> There are no stupid questions ;D
> 
> Most folks in general do not understand the relationship of sensor size, pixel density, and pixel size. For many, the picture they take on their phone looks great on that little 4 inch screen. Put it on an 8 by 10 print and it doesn't so much anymore.
> 
> ...




Thank you, Sir, Dear Mr. PureClassA.
That is why I love this great site, CR: I have great, knowledgeable friends and great teachers here who help us solve problems and clarify the questions we cannot answer ourselves.
Have a great day, Sir.
Surapon


----------



## Keith_Reeder (Aug 21, 2014)

Don Haines said:


> It's not really the size of the sensor that counts, it's the size of the pixel.



That's _precisely_ back to front, Don - pixel size doesn't matter one little bit in terms of a sensor's light-gathering abilities, in any practical sense. Sensor _size_ is the whole story, at any given "state of the art".

Jon Rista must've explained this about a million times on here - and he's completely, demonstrably right. Simply put, a big window lets in more light than a small one, whether it's made up of one pane of glass, or many - a perfect sensor analogy in this context.


----------



## AlanF (Aug 21, 2014)

Keith_Reeder said:


> Don Haines said:
> 
> 
> > It's not really the size of the sensor that counts, it's the size of the pixel.
> ...



That is a perfect sensor analogy: have a sensor made of one pane and it will have superb signal to noise and absolutely zero resolution.


----------



## Don Haines (Aug 21, 2014)

Keith_Reeder said:


> Don Haines said:
> 
> 
> > It's not really the size of the sensor that counts, it's the size of the pixel.
> ...


So let's use a hypothetical example... and both cameras use the exact same lens...

Example 1:
You use the exact same technology to manufacture a pair of sensors. One is a 10Mpixel APS-C sensor, and the other is a 25.6Mpixel FF sensor. The pixels on the two sensors are exactly alike. Both will have the same electrical characteristics and both will have the same optical characteristics. They will have the exact same noise, the exact same ISO performance, the exact same DR..... because they are exactly the same.

Obviously, the FF sensor takes in more light, but it is spread over a wider field of view and the light per pixel is the same.

Example 2: 
You use the exact same technology to manufacture a pair of sensors. One is a 20Mpixel APS-C sensor, and the other is a 20Mpixel FF sensor. The pixels on the FF sensor are 2.56 times larger in area than the pixels on the APS-C sensor. In this case, the FF sensor receives 2.56 times the amount of light. Its performance will be about 1 1/3 stops better than the APS-C sensor's. This is your typical scenario when comparing sensor sizes... both are near the same pixel count, and the added real estate lets you make the pixels larger on FF.

Example 3:
The Sony A7S. Larger pixels. ISO 409,600. 'Nuff said...

Either way, it is the pixel size that matters....

FF does not perform better because it is larger; it performs better because the pixels are larger. The larger sensor size allows you to place a similar number of pixels of greater size. It is a subtle difference.
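The Example 2 figures can be checked numerically: with the same pixel count, the per-pixel area ratio is just the crop factor squared. A minimal sketch, assuming a 1.6x Canon-style crop factor:

```python
import math

crop = 1.6                    # Canon APS-C crop factor (assumption)
area_ratio = crop ** 2        # same MP -> each FF pixel has crop^2 the area
stops = math.log2(area_ratio)

print(f"FF pixel area / APS-C pixel area: {area_ratio:.2f}x")
print(f"Per-pixel light advantage: {stops:.2f} stops")  # ~1.36, about 1 1/3
```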


----------



## sgs8r (Aug 21, 2014)

Keith_Reeder said:


> Don Haines said:
> 
> 
> > It's not really the size of the sensor that counts, it's the size of the pixel.
> ...



Really, both matter. Resolution increases as the pixels get smaller, but noise performance increases as they get larger. So for a given sensor size, there's always a tradeoff. Also, smaller pixels make greater demands on the performance of the lens, exposing any weaknesses in resolution. On the other hand, a larger sensor is more demanding of the edge-to-edge performance of the lens. A normal lens on an APS-C sensor only uses the center of the image circle, so the performance in the corners and edges isn't important. And bigger sensors require more glass in general, so the cost is higher for equivalent quality. So choosing a sensor (or camera, lens, etc.) involves a tradeoff that you need to make based on the type of photography you do (and your budget).


----------



## AlanF (Aug 21, 2014)

There is an ongoing thread on the topic, started very recently and now 11 pages long: http://www.canonrumors.com/forum/index.php?topic=22161.0


----------



## Orangutan (Aug 21, 2014)

Don Haines said:


> Obviously, the FF sensor takes in more light, but it is spread over a wider field of view and the light per pixel is the same.



But the output image (print, projected image, etc) is the same size, so the light gathered by the FF sensor requires less enlargement (attenuation) to achieve that output size, and this negates your argument.

So I believe you're mistaken: with identical technology, the size of the sensor is all that matters for low-light properties. For ample-light IQ, total MP, AA filter, etc. are definitely important for resolution.

I'm sure jrista will jump in here any minute to correct us all.


----------



## troppobash (Aug 22, 2014)

Hi Surapon

Where does the G1X Mark II, with its 1.5" sensor, fit in?

8)


----------



## Ophthaltographer (Aug 22, 2014)

Don Haines said:


> Make the pixel smaller and you get more resolving power, but at the expense of less light, worse low light performance, and less flexibility with exposure times and apertures.



I believe the number of pixels a sensor has determines its resolving power, not the size of the pixel. A large sensor with 18 (large) MP will resolve the same as a tiny sensor with 18 (tiny) MP.


----------



## weixing (Aug 22, 2014)

Hi,


Orangutan said:


> Don Haines said:
> 
> 
> > Obviously, the FF sensor takes in more light, but it is spread over a wider field of view and the light per pixel is the same.
> ...


No. For example, with a 36MP FF and an 18MP FF using the same manufacturing technology, the 18MP FF will have better low-light properties, since each pixel receives more light than on the 36MP FF.



> Ophthaltographer said:
> 
> 
> > Make the pixel smaller and you get more resolving power, but at the expense of less light, worse low light performance, and less flexibility with exposure times and apertures.
> ...


Pixel size will determine the resolving power (provided the lens and environmental conditions are not limiting the resolution), so a smaller sensor will have better resolving power than a larger sensor with the same MP.

Have a nice day.


----------



## sgs8r (Aug 22, 2014)

Ophthaltographer said:


> > Make the pixel smaller and you get more resolving power, but at the expense of less light, worse low light performance, and less flexibility with exposure times and apertures.
> 
> I believe the number of pixels a sensor has determines its resolving power, not the size of the pixel. A large sensor with 18 (large) MP will resolve the same as a tiny sensor with 18 (tiny) MP.



This is true if the image seen by the tiny sensor is the same as the image seen by the large sensor, which requires a different lens (to map the image onto a smaller area). So this combines/confuses the effect of the sensor with that of the lens. Instead, imagine a 35mm camera with a specific lens and a specific sensor size, so the image registering on the sensor area is fixed. Then smaller pixels (everything else the same) will allow resolving finer line pairs (e.g. on a test chart image). Of course, this assumes that smaller pixels would be packed closer together on the sensor, so for the fixed sensor, the pixel count would increase as the pixel size shrinks (you could, of course, keep the pixel count the same by adding "dead area" around each pixel, but that would make no sense).


----------



## sagittariansrock (Aug 22, 2014)

Orangutan said:


> Don Haines said:
> 
> 
> > Obviously, the FF sensor takes in more light, but it is spread over a wider field of view and the light per pixel is the same.
> ...




Don Haines posted "the larger sensor size allows you to place a similar number of pixels of greater size". This is a key sentence. 

When you're comparing FF to APS-C, you are always comparing disparate pixel counts. You say "with identical technology, size of sensor is all that matters". I disagree. If you simply made a bigger 7D sensor, with the same technology and same pixel density, then it will have the same noise characteristics as the 7D sensor. The only thing different would be the field of view, and the ability to enlarge. 

Going by the sentence I quoted above, on the other hand- a full frame sensor, even with the same tech as the 7D, will allow larger pixel size for the same pixel count. Here, the noise characteristics become better due to the pixel size.

So, bottom line is, you cannot improve light capturing ability without changing pixel size, given the same sensor technology. I am not sure what you mean by "enlargement of light gathered", and the output image is also subjective. The fact remains that a 46MP FF sensor can be enlarged to a greater size without pixellation than an 18MP APS-C sensor due to the larger size of the original image. It has nothing to do with reduced noise or increased light gathering in the former.
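The "46MP" figure above lines up with an 18MP APS-C sensor's pixel density scaled up to full frame. A quick sketch, with nominal sensor dimensions:

```python
aps_c_mp = 18.0            # 18 MP APS-C (7D-class) pixel count
aps_c_area = 22.3 * 14.9   # mm^2, nominal APS-C
ff_area = 36.0 * 24.0      # mm^2, full frame

# A FF sensor built at the same pixel density:
ff_mp = aps_c_mp * ff_area / aps_c_area
print(f"FF at the same pixel density: {ff_mp:.1f} MP")  # ~47 MP
```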


----------



## Don Haines (Aug 22, 2014)

Scenario 1: Normalize for pixel size
We have a camera that we can swap sensors on. We mount a 100mm lens to that camera.
All of our sensors use the exact same technology and have the exact same 0.01mm by 0.01mm pixels.

FF sensor - 36x24 mm, 3600x2400 (8.64Mp), 20.0 degree fov
APS-C sensor - 22.2x14.8 mm, 2220x1480 (3.29Mp), 12.3 degree fov
4/3 sensor - 17.3x13.3 mm, 1730x1330 (2.30Mp), 9.6 degree fov
1/2.3 sensor - 5.76x4.29 mm, 576x429 (0.25Mp), 3.2 degree fov

In this case, each sensor will have the exact same resolving power. Each sensor will have the exact same IQ; the ISO performance and the noise performance will be identical. In this case, what sensor size buys you is the number of pixels and the field of view.

The crop value is what the focal length would have to be to get the same field of view from a FF camera.
FF - crop value is 1
APS-C - crop value is 20/12.3 or 1.62
4/3 - crop value is 20/9.6 or 2.08
1/2.3 - crop value is 20/3.2 or 6.25

To look at what the equivalent FF focal length would be for the same field of view...
APS-C, 100mm has the same field of view as 162mm on FF
4/3, 100mm has the same field of view as 208mm on FF
1/2.3, 100mm has the same field of view as 625mm on FF


Scenario 2: normalize for the field of view (20.0 degrees)
FF sensor - 36x24 mm, 3600x2400 (8.64Mp), 100mm lens
APS-C sensor - 22.2x14.8 mm, 2220x1480 (3.29Mp), 61.7mm lens
4/3 sensor - 17.3x13.3 mm, 1730x1330 (2.30Mp), 48.1mm lens
1/2.3 sensor - 5.76x4.29 mm, 576x429 (0.25Mp), 16.0mm lens

We now have the same field of view from all the cameras. Each sensor will have the exact same IQ; the ISO performance and the noise performance will be identical. 
In this case, what sensor size buys you is the number of pixels: over the same field of view, the FF camera has far greater resolving power.


Scenario 3: Normalize for the number of pixels.
In this case, the lens stays the same but the size of the pixels varies to keep a constant 8.64Mpixels on the sensor.

FF sensor - 36x24 mm, 0.0100mm pixels, 20.0 degree fov
APS-C sensor - 22.2x14.8 mm, 0.0062mm pixels, 12.3 degree fov
4/3 sensor - 17.3x13.3 mm, 0.0048mm pixels, 9.6 degree fov
1/2.3 sensor - 5.76x4.29 mm, 0.0016mm pixels, 3.2 degree fov

We now have smaller pixel sizes on the smaller sensors, and as they get smaller ISO performance drops and noise rises. In this case, the smaller sensors have greater resolving power than the larger sensors.

In summary:
Larger pixels give you better ISO performance and lower noise.
Smaller pixels give you more resolving power ON THE SAME LENS.
Sensor size affects the field of view.

The balance you select between the three is what determines the performance of your camera.
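The fields of view and crop values in the scenarios above can be reproduced with the standard rectilinear angle-of-view formula. A sketch (the results land within rounding of the table above, which works from a nominal 20.0-degree base):

```python
import math

def h_fov_deg(width_mm, focal_mm):
    """Horizontal angle of view of a rectilinear lens, in degrees."""
    return math.degrees(2 * math.atan(width_mm / (2 * focal_mm)))

sensors = {"FF": 36.0, "APS-C": 22.2, "4/3": 17.3, "1/2.3": 5.76}
focal = 100.0  # mm

results = {}
for name, width in sensors.items():
    crop = sensors["FF"] / width  # focal-length multiplier vs. full frame
    results[name] = (h_fov_deg(width, focal), crop)
    print(f"{name:6s} fov={results[name][0]:5.2f} deg  crop={crop:.2f}  "
          f"FF-equivalent={focal * crop:.0f} mm")
```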


----------



## Orangutan (Aug 22, 2014)

I'm no expert on this, so I'll refer you to: http://www.clarkvision.com/articles/does.pixel.size.matter and to jrista's various write-ups.



Don Haines said:


> Scenario 2: normalize for the field of view (20.0 degrees)
> FF sensor - 36x24 mm, 3600x2400 (8.60Mp), 100mm lens
> APS-C sensor - 22.2x14.8 mm, 2220x1480 (3.29Mp), 61.7mm lens
> 4/3 sensor - 17.3x13.3 mm, 1730x1330 (2.30Mp), 48.1mm lens
> ...



To me this is the *only* case that matters -- the question is irrelevant and misleading unless we're talking about identically-framed shots. With identically framed shots, a larger sensor will collect more light from the overall field of view (and therefore per-unit-area of the scene), even if the smaller sensor has larger pixels. With our hypothetical _identical technology_, a smaller sensor with larger pixels simply cannot collect the same amount of light as a larger sensor. Compare this to 35mm film vs MF film using identical emulsion. To what degree that's important depends on the lighting of the scene. Higher pixel density may give higher resolution (if the lens allows it).
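The "identically framed" case can be made concrete: at the same f-number, shutter speed, and framing, exposure (light per unit area) is equal, so total light scales with sensor area. A minimal sketch with nominal sensor dimensions:

```python
import math

ff_area = 36.0 * 24.0      # mm^2, full frame
aps_c_area = 22.2 * 14.8   # mm^2, APS-C

ratio = ff_area / aps_c_area
print(f"Total light, FF vs APS-C: {ratio:.2f}x ({math.log2(ratio):.2f} stops)")
```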


----------



## Keith_Reeder (Aug 22, 2014)

weixing said:


> No. For example, with a 36MP FF and an 18MP FF using the same manufacturing technology, the 18MP FF will have better low-light properties, since each pixel receives more light than on the 36MP FF.



Not true either - the world is _awash_ with examples that prove the opposite: Nikon's D7000 has clearly superior low light performance to the D300; the Canon 70D is much better than the 30D; the 1D Mk IV is far superior to the 1D Mk II!

And so on.

Smaller pixels _do not_ mean inferior noise performance. 

Even DxO gets it:
http://www.dxomark.com/Reviews/More-pixels-offset-noise


----------



## robbinzo (Aug 22, 2014)

Orangutan said:


> I'm no expert on this, so I'll refer you to: http://www.clarkvision.com/articles/does.pixel.size.matter and to jrista's various write-ups.
> 
> 
> 
> ...



This is something that I have had many discussions about. My understanding from talking to more experienced togs is that pixel size is what affects light gathering capability, not sensor size. 
A larger sensor (say FF vs APS-C) will capture more light because, obviously, it is bigger. However, the intensity of light reaching the pixels on each sensor will be identical for a given lighting scenario. Therefore, the light reaching the sensor "per-unit-area of the scene" will be identical. The light entering the pixels will not be identical, however.
So how does sensor size alter exposure? For example: generally, f/4 on FF means you will require f/2.8 for the same exposure on APS-C, but from what I have been led to believe, that is due to pixel size, not sensor size.
Thoughts?


----------



## jrista (Aug 22, 2014)

Don Haines said:


> Scenario 1: Normalize for pixel size
> We have a camera that we can swap sensors on. We mount a 100mm lens to that camera.
> All of our sensors use the exact same technology and have the exact same 0.01mm by 0.01mm pixels.
> 
> ...



The only thing I would dispute is the repeated use of "each sensor will have the exact same IQ". Hate to say it, but that is wrong... at least, so long as another required factor is not specified. Sensor size is the primary factor that controls "image quality", not pixel count, not pixel size. You could have twice the Q.E. on a smaller sensor, and even then the only sensor that would merely EQUAL the IQ of the larger one is the one exactly half its size. A 4/3rds or 1/2.3" sensor could never compare to the IQ of a FF sensor, not even with double the Q.E. 

Now, the statement about equal IQ would be true... IF the aperture was specified. The 100mm lens on the FF sensor has to be using a smaller aperture than the 61.7mm lens, by the ratio of the sensor diagonals. If the 61.7mm lens is f/2.8, then the 100mm lens should be f/4.5. THEN, and ONLY THEN, would "each sensor have the exact same IQ." You have to use a smaller aperture to normalize the amount of light reaching the sensor. Otherwise, one has to assume the same aperture: a 100mm lens on FF produces the same FoV as a 61.7mm lens on APS-C, but if they are both f/2.8, the FF sensor is without question gathering more light. 

Equivalence. Aperture matters here.
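That aperture scaling is just the ratio of the sensor diagonals applied to the f-number. A quick sketch with nominal sensor dimensions:

```python
import math

ff_diag = math.hypot(36.0, 24.0)     # full-frame diagonal, mm
aps_c_diag = math.hypot(22.2, 14.8)  # APS-C diagonal, mm
crop = ff_diag / aps_c_diag

f_aps_c = 2.8
f_ff = f_aps_c * crop  # same entrance pupil -> same total light on sensor
print(f"crop = {crop:.2f}; f/{f_aps_c} on APS-C ~ f/{f_ff:.1f} on FF")
```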

I've said this so many times... but I think Orangutan is the only one who actually heard it. So I'll just quote his answer:



Orangutan said:


> I'm no expert on this, so I'll refer you to: http://www.clarkvision.com/articles/does.pixel.size.matter and to jrista's various write-ups.
> 
> 
> 
> ...



This is EXACTLY CORRECT. The only thing that really matters in the end, assuming all the sensors use the same technology, is TOTAL light gathered. More light, less noise. It's as simple as that. Pixel size doesn't really matter from a noise standpoint. Smaller pixels in the SAME sensor size mean you get more resolution, that does increase IQ...but smaller pixels in a smaller sensor DO NOT mean better IQ....they just mean more resolution, but with worse IQ. 

This is equivalence. This scientific concept is documented very, very, very thoroughly here:

http://www.josephjamesphotography.com/equivalence/

If you still doubt, please, read the article on equivalence linked above. It's really not that complicated of a concept.



robbinzo said:


> Orangutan said:
> 
> 
> > I'm no expert on this, so I'll refer you to: http://www.clarkvision.com/articles/does.pixel.size.matter and to jrista's various write-ups.
> ...



You are incorrect. Pixel size affects per-pixel dynamic range and per-pixel noise. That affects what you see when editing a RAW image at 100%. Larger pixels do gather more light. But larger pixels stuffed into the same area as smaller pixels (i.e. larger vs. smaller pixels in a full frame sensor) don't actually change the final noise levels *of an identically framed subject.* Smaller pixels affect the final detail levels; however, noise is ultimately relative to the whole sensor frame. And I have to stress... for identically framed subjects. There is no point to using a larger sensor if you are not framing the same. You don't gain any of the benefits of using a larger frame if you're not filling it just as much as you would fill a smaller frame. That means either getting closer to your subject (which full frame cameras allow; it's one of their primary attractions for a lot of photographers), or it means you're going to be using a longer lens.

Now, the following assumes the same ISO setting for every circumstance. Let's just assume ISO 100. It also assumes the same aperture is used. Let's say f/4. 

Two FF sensors, one with 10µm pixels with a 100ke- FWC and one with 5µm pixels with a 25ke- FWC, framing a portrait of a woman, are both gathering the same amount of total light when the woman's head fills the frame. Bin four 5µm pixels together, and you combine four distinct 25ke- charges to create... what? Yup, one single 100ke- charge in an effective 10µm pixel. Similarly, combining four pixels together reduces the noise level by... what? Yup, the square root of the number of pixels combined... SQRT(4) is 2. Noise levels drop by a factor of two... they are cut in half. This is effectively the same as downsampling the larger image produced by the sensor with smaller pixels to the same dimensions as the smaller image produced by the sensor with larger pixels. Same difference... the noise of each small image pixel is interpolated... averaged together... to produce a less noisy output pixel. 

So, while noise PER PIXEL is less with larger pixels... you're actually LOSING something with larger pixels. You're losing resolution. With smaller pixels... you can always just average them together and end up with the same amount of noise as you would have gotten if you'd sacrificed resolution to have larger pixels. However, downsampling also enhances detail, so a downsampled image taken with a same-size sensor with smaller pixels is pretty much always going to look better than one from a sensor (of the same size) with larger pixels. 

Pixel size does not change the TOTAL amount of light gathered by a sensor. It just divides the incoming real image signal into smaller parts. Each smaller part has more RELATIVE noise, because the noise is relative to a maximum signal level of 25ke- rather than relative to 100ke-. Combine four pixels together, and now your noise is relative to 100ke-. It's equivalent. Sensor size is the primary factor that affects image quality, assuming all other factors are equal (i.e. same fabrication process, same Q.E., same ISO, same exposure).

Let's say you have a FF sensor with 10µm pixels and a 5µm-pixel sensor that is exactly 1/4 the FF sensor size (18x12mm). These two sensors have the exact same pixel count, and the exact same relative pixel size (relative to the size of the sensor). According to Don and Robbinzo's theory, the IQ of these two sensors would be identical. Which one is going to produce better IQ? The one with bigger pixels? Well, yes. But is it because of bigger pixels? What happens if we make the FF sensor use 5µm pixels as well? Now it has four times the pixels of the 18x12mm sensor. Will its image quality drop? How do you know whether its IQ will drop or not? 

Well, think of it this way. The large pixels of the FF sensor are now one quarter as large... however, so long as you keep the subject framed the same, for every one pixel that you used to place on the subject before, you now have four pixels. However, with the 18x12mm sensor... you still have just one pixel. Worse, that one pixel represents an area four times larger with the smaller sensor than it does with the FF sensor. So technically speaking, the FF sensor with 5µm pixels is putting a total of four pixels onto the same absolute area of the subject as every one pixel of the 18x12mm sensor. You could halve the pixel size of the 18x12mm sensor... but it's the same deal. Finer resolution relative to itself, but nothing changes in terms of the difference between the FF sensor and the 18x12mm sensor. The subject is still framed the same... and the same relative differences exist. For any given absolute area of the subject, the larger sensor will always use a larger area of the sensor to resolve that area of the subject than the smaller sensor. 

No matter how you slice it, no matter how much you make pixels smaller or larger... fundamentally, IQ is related to total sensor area, not pixel size. The only thing that is really related to pixel size is resolving power... the ability to delineate elements of detail. Smaller pixels delineate finer elements of detail than larger pixels. That's it. So, having to use f/4 on FF and f/2.8 on APS-C *has everything to do with sensor size*, _and nothing to do with pixel size!_ Otherwise, how could that ratio work? We have had APS-C sensors with pixel sizes ranging from 10µm down through 3.8µm released over the last 10 years. However, people have been saying the same thing the whole time: that you have to use f/4 on FF and f/2.8 on APS-C with FoV-equivalent lenses to get the same IQ.

Actually, they have been saying it for a whole hell of a lot longer, back in the film days, when we had APS-C film vs. 35mm film. And before that, when we had various medium format films vs. large format films. And even among large format films, say 4x5 vs. 8x10! This debate has been raging for decades, the better part of a century! If it were pixel size that mattered... how could that statement have remained true for a decade? For decades? For nearly 100 years?!? Because it's not the pixel size that matters... it's the sensor size that matters!
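The binning arithmetic in this post can be sketched for shot-noise-limited pixels, where noise is the square root of the signal (the full-well numbers are the illustrative values used above):

```python
import math

small = 25_000      # e-, full-well charge of one 5 um pixel (illustrative)
binned = 4 * small  # four 25ke- charges combined into one effective 10 um pixel

snr_small = small / math.sqrt(small)    # photon shot noise = sqrt(signal)
snr_binned = binned / math.sqrt(binned)

print(f"SNR per 5 um pixel: {snr_small:.0f}")
print(f"SNR after 2x2 bin:  {snr_binned:.0f}")
print(f"Gain: {snr_binned / snr_small:.1f}x")  # sqrt(4) = 2: noise cut in half
```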


----------



## AlanF (Aug 22, 2014)

Keith_Reeder said:


> weixing said:
> 
> 
> > No. For example, with a 36MP FF and an 18MP FF using the same manufacturing technology, the 18MP FF will have better low-light properties, since each pixel receives more light than on the 36MP FF.
> ...



To draw conclusions about pixel size and noise you must compare sensors with the same technology. We know, for example, that current Nikon sensors have much better S/N than equivalent Canon sensors at low ISO. So comparing Nikon with Canon is misleading. 

The DxO article says that the Canon 350D and 1Ds have identically sized pixels (6.4 micron) and identical S/N, which could be interpreted as meaning it is pixel size that determines S/N. It then goes on to say that at the same field of view the 1Ds has better S/N than the 350D when both images are printed at the same size. That is not due to the pixel size but results from the larger sensor of the 1Ds.


----------



## Straightshooter (Aug 22, 2014)

Oh, no, here we go AGAIN.........


----------



## sagittariansrock (Aug 22, 2014)

jrista said:


> Don Haines said:
> 
> 
> > Scenario 1: Normalize for pixel size
> ...



I am sorry, but neither you nor Orangutan is following Don's logic; you are traversing an entirely parallel path. It is funny, because no one is actually disputing anyone, yet each is saying the other is wrong.

1. Forget aperture (or presume DoF to be infinite for comparison purposes) and focus on the question at hand: if the pixel size is the same, with the same sensor tech, an APS-C sensor will have the same noise characteristics as a FF sensor. Example: DX mode in a Nikon FF dSLR - what happens there is that a portion of the sensor is being used. Exactly the same thing will happen if you have a smaller sensor with the same pixel size. 

2. I don't think Don or anyone else is claiming that a smaller sensor can collect more light than a larger one if the pixel size is larger in the former. We all know that is quite impossible. The point being made is that the amount of light per pixel depends on the size of the pixel; in the case of a larger sensor, it is additionally multiplied by a greater number of pixels. However, if a FF sensor were made using the same technology as the 7D sensor and with the same pixel size (making it 46MP, I think), then it would be equally noisy. 

I am not disputing that aperture affects image quality, but for this discussion let's just talk noise characteristics.




jrista said:


> This is EXACTLY CORRECT. The only thing that really matters in the end, assuming all the sensors use the same technology, is TOTAL light gathered. More light, less noise. It's as simple as that. Pixel size doesn't really matter from a noise standpoint. Smaller pixels in the SAME sensor size mean you get more resolution, that does increase IQ...but smaller pixels in a smaller sensor DO NOT mean better IQ....they just mean more resolution, but with worse IQ.
> 
> This is equivalence. This scientific concept is documented very, very, very thoroughly here:
> 
> ...



Sorry, but that's only partially true. Pixel size does matter. Quoting from your own link:

Given four cameras, one with...

...an mFT (4/3) sensor,
...another with a 1.6x sensor,
...another with a 1.5x sensor,
...and another with a FF sensor...

...and...

...a photo of a scene from the same position with the same focal point and the same settings (e.g. 25mm f/1.4 1/200 ISO 400) with all cameras,
...the photos cropped to the same framing as the photo from the mFT (4/3) camera,
...and the photos are displayed at the same size...

...then the resulting photos will be Equivalent. In addition, if...

...all the sensors are equally efficient, then all the photos will also have the same noise,
...the pixels are all the same size, the AA filter the same strength, and the lens is the same sharpness, then all the photos will also have the same detail,
...the exact same lens is used and the sensors are of the exact same design with the exact same size pixels, AA filter, CFA, and processing...


It is possible to have the same noise characteristics if the pixel size and sensor tech are the same, irrespective of total sensor size.


----------



## robbinzo (Aug 22, 2014)

Thoughts on the following, anyone?

http://www.clarkvision.com/articles/does.pixel.size.matter/

Conclusions 

Current good quality sensors in digital cameras are photon noise limited. This means there is no possible improvement in performance for the high signal region (bright things in an image) except to increase quantum efficiency of the devices and/or the fractional active area for which the sensor converts photons to electrons (called the fill factor). As both of these properties are reasonably high already, there is limited room for improvement. And even if these properties were improved, there would still be a big difference between large and small pixels. Larger pixels enable higher signal-to-noise ratios at all levels, but especially at low signal levels, assuming the lens scales with the sensor. The obvious improvement still possible would be to reduce the read noise, but that would likely improve large sensors also, thus large sensors with large pixels will always have an advantage for the same field of view, and correspondingly longer focal length lenses are used. Whether the difference in noise is great enough for you to choose a larger sensor, and thus likely a larger and heavier camera, is a decision you must make for yourself. 

When choosing between cameras with the same sized sensor but differing pixel counts, the one with larger pixels (and fewer total pixels) will have better high ISO and low light performance (assuming read noise and fixed pattern noise are similar, which may not be the case), while the camera with more pixels can deliver images with finer detail in good light. You will need to decide where that trade point is. My models show the optimum in DSLR-sized sensors have pixels around 5 microns. You will need to determine what your prime imaging will be. For low light work, I might bias the pixels to a little larger than 5 microns; if low light/high ISO work is not as important, I might bias my choice to slightly smaller than 5 microns. For P&S cameras with small sensors, I prefer cameras with pixels larger than 2 microns. 

Because good digital cameras are photon noise limited, the larger pixels will always have higher signal-to-noise ratios unless someone finds a way around the laws of physics, which is highly unlikely. Important to remember, however, is larger pixels enable more light to be collected, but it is the lens that delivers the light. An analogy is buckets of water. A large bucket will hold more water than a small bucket. But if you want to collect more water in a given time, one must turn the faucet on higher. So too with cameras and lenses: the bigger lens collects more light and delivers it to the sensor. 

Image detail can be blurred by diffraction. Diffraction is more of an issue with smaller pixels, so again cameras with larger pixels will perform better, giving sharper images with higher contrast in the fine details. A direct example of this effect is that a small-sensor P&S camera can be diffraction limited at f/5.6 to f/8, whereas the larger pixels in a DSLR will not show the same effects until f/11, f/16, and slower. And given the same pixel count in the P&S and DSLR, the DSLR will resolve finer details.
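Clark's photon-noise-limited point can be put in numbers: for shot noise, SNR equals the square root of the photons collected, so SNR scales linearly with pixel pitch. A minimal sketch (illustrative photon counts, not measured values):

```python
import math

def shot_noise_snr(photons):
    """SNR of a photon-noise-limited measurement: signal / sqrt(signal)."""
    return photons / math.sqrt(photons)  # = sqrt(photons)

# Same illuminance on both pixels, so photons collected scale with area.
photons_per_um2 = 1000
small = shot_noise_snr(photons_per_um2 * 2 ** 2)  # a 2 micron P&S-class pixel
large = shot_noise_snr(photons_per_um2 * 5 ** 2)  # a 5 micron DSLR-class pixel

print(f"2um pixel SNR ~{small:.0f}, 5um pixel SNR ~{large:.0f}")
print(f"advantage: {large / small:.2f}x")  # 2.50x, the ratio of the pitches
```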


----------



## ewg963 (Aug 22, 2014)

PureClassA said:


> There are no stupid questions ;D
> 
> Most folks in general do not understand the relationship of sensor size, pixel density, and pixel size. For many, the picture they take on their phone looks great on that little 4 inch screen. Put it on an 8 by 10 print and it doesn't so much anymore.
> 
> ...


Ditto +10000000000000000000000


----------



## Lee Jay (Aug 22, 2014)

sagittariansrock said:


> You say "with identical technology, size of sensor is all that matters". I disagree. If you simply made a bigger 7D sensor, with the same technology and same pixel density, then it will have the same noise characteristics as the 7D sensor.



This is totally false. If you make the 7D sensor bigger, you'll have more of the same pixels AND about a stop and a third better noise performance, assuming constant f-stop and constant framing. That means, for the same image, you're going to have to either get closer or use a longer focal length.

The right-hand column of this image demonstrates this. It's all the same sensor (and so all the same pixels) just using different sized portions of that sensor, and reframing to keep the final image framing constant. According to what you said above, the noise performance should all be the same. It isn't, and it isn't even close. The left column demonstrates by just how much. It's exactly how much you would think - the light you've lost with cropping is the amount of noise performance you've lost.
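That stop-and-a-third figure falls straight out of the area ratio (a quick sketch, crop factor 1.6 assumed):

```python
import math

# Cropping full-frame to APS-C framing (1.6x crop) discards sensor area;
# the light thrown away maps directly onto lost noise performance.
crop_factor = 1.6
area_ratio = crop_factor ** 2       # 2.56x less light collected
stops_lost = math.log2(area_ratio)  # ~1.36 stops: "a stop and a third"

print(f"{stops_lost:.2f} stops")  # 1.36 stops
```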


----------



## Orangutan (Aug 22, 2014)

sagittariansrock said:


> Given four cameras, one with...
> 
> ...an mFT (4/3) sensor,
> ...another with a 1.6x sensor,
> ...



Here are the problems: as I said above, the comparison is irrelevant and misleading unless it's the same framing. I will clarify that to say that it must be photographed initially at the same framing, without any cropping. *If you don't start with the same framing you are not comparing the IQ of the sensors, and the comparison is invalid.*

There are certain reach-limited circumstances (which jrista has illustrated) where a high-density crop sensor can demonstrate superior IQ for a heavily cropped image. However, that's not a comparison of the sensors themselves.

You must start with the same frame on each sensor, or you have no valid data on which to perform comparisons.


----------



## sagittariansrock (Aug 22, 2014)

Lee Jay said:


> sagittariansrock said:
> 
> 
> > You say "with identical technology, size of sensor is all that matters". I disagree. If you simply made a bigger 7D sensor, with the same technology and same pixel density, then it will have the same noise characteristics as the 7D sensor.
> ...



I wish people wouldn't be so vehement in their comments, and leave some room for discussion ;P

You lose light when you magnify, so it is not an apples to apples comparison. This is why a macro lens loses half the incident light (with the same aperture) when you move from, say, 1:2 magnification to 1:1. Try using a microscope and go through the different magnifications- it will be immediately apparent.

I am not talking of the same image. I am talking of the same subject distance, therefore I am talking of a different FoV. In this situation, the total luminous flux on the center 1/2.56th of an FF sensor is the same as the total luminous flux on an APS-C sensor placed in the same location. This is why, when you select DX mode on a Nikon FF dSLR, you still get the same amount of light per unit area as before, and don't need to readjust your ISO, shutter speed or aperture, etc.
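To put the DX-mode point in numbers (a toy sketch assuming a 36x24mm full frame and a 1.6x crop; Nikon's DX crop is actually ~1.5x, but the principle is the same):

```python
# Same lens, same position: the f-stop fixes the illuminance (light per
# unit area) at the sensor plane, so cropping to DX changes nothing about
# exposure -- only the *total* light captured shrinks with the area.
crop_factor = 1.6
ff_area = 36.0 * 24.0                   # mm^2
crop_area = ff_area / crop_factor ** 2  # the centre "DX" chunk

illuminance = 1.0                       # arbitrary units/mm^2, set by f-stop
total_ff = ff_area * illuminance
total_crop = crop_area * illuminance

print(total_ff / total_crop)  # 2.56x more total light on the full frame
```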


----------



## sagittariansrock (Aug 22, 2014)

Orangutan said:


> Here are the problems: as I said above, the comparison is irrelevant and misleading unless it's the same framing. I will clarify that to say that it must be photographed initially at the same framing, without any cropping. *If you don't start with the same framing you are not comparing the IQ of the sensors, and the comparison is invalid.*
> 
> You must start with the same frame on each sensor, or you have no valid data on which to perform comparisons.



Why is the comparison irrelevant unless it's the same framing? Last time you said "to me," which is understandable; but to understand sensor and pixel properties, it totally makes sense to compare apples to apples where the luminance on the sensors is the same (i.e., the same position of the hypothetical camera, capturing the same incident light, or part thereof in the case of the smaller sensor).

You can compare sensors on the basis of equal sensor area. In that case, is the total light on an APS-C sized chunk located in the center of the FF sensor the same as the total light on an APS-C sensor at the exact same location? I'd say it is (it would violate the laws of physics to be otherwise). Now, let's say both of these (chunk of FF vs. APS-C) have the same number of pixels. Will the images on these sensors have the same noise characteristics? Yes, they will. This is easy to measure, both objectively and subjectively. What do you mean by not having data to perform comparisons!




Orangutan said:


> There are certain reach-limited circumstances (which jrista has illustrated) where a high-density crop sensor can demonstrate superior IQ for a heavily cropped image. However, that's not a comparison of the sensors themselves.



Exactly, and that is completely irrelevant here. An APS-C sensor will never have the same, or even close to the same, IQ as a same-generation FF sensor, while in reach-limited circumstances a higher resolution will demonstrate advantages. In this case, we are NOT talking about that. We are NOT talking about the APS-C sensor being better than FF. This is a very focused argument: the size of a pixel defines its light-gathering capacity. This capacity will be the same whether that pixel resides in an FF sensor, an APS-C sensor or an MF sensor.


----------



## jrista (Aug 22, 2014)

@sagittariansrock: There is a difference between your explanation and the standard one: your explanation does not utilize the full sensor area of larger sensors. It is based on the subject filling the same absolute area of the sensor, regardless of the total sensor area.

That is the reach-limited argument. That is the ONE AND ONLY case where smaller sensors can achieve the same image quality as a larger sensor. However, it SEVERELY handicaps the larger sensors. The fair comparison is when your subject is framed the same, which means that for progressively larger sensors, a greater absolute area of sensor covers the subject. In that case, everything Orangutan, Lee Jay, and I have stated is true. There is no circumstance where smaller sensors, regardless of their pixel size, can ever outperform a larger sensor.

There are real-world use cases where a limited reach is an actual problem. I already posted a topic on that, demonstrating the differences between a 5D III and a 7D, and the 7D does indeed maintain the IQ edge (I really need to try that on a day with better seeing, or find a good terrestrial subject to compare.) But in the "normative" case, you buy a larger sensor to use the greater area to get better IQ. I mean, that's the entire point. That's where improved IQ comes from. 

Before I found the equivalence article, I used to think the same thing...that pixel size mattered. But it simply doesn't. Not at lower and midrange ISO settings, anyway. At really high ISO settings, the game does change a bit. Spatially, information in an incoming wavefront is sparser when you're working in really low light, or at a really small aperture, or in any other circumstance where you NEED something like ISO 12800 or higher. Sparser data ultimately renders smaller pixels useless, since you just don't have complete enough information to render a whole picture. Then, pixel size really does start to matter. Or, conversely, downsampling your image becomes more important for reducing noise.

For the ultra high ISO use cases, I would actually love to see Canon create a sensor that had some kind of dynamic binning. At low ISO, use full maximum resolution, then have a configurable option to switch to a hardware binning mode of 2x2 for, say, ISO 6400 through 25600, and maybe even have an additional 4x4 binning option for ISO 51200 through 400k or whatever. I think that would be awesome, since you can't really get clean high resolution at ultra high ISO anyway.

However, fundamentally, in a fair or normative situation where you're utilizing all the sensor area you can (i.e. assuming identical framing) and the same aperture is used, larger sensors gather more light per subject area. If you read the equivalence article, when he gets down into the myths, he clearly covers how, for a given FoV, you need to use a narrower aperture on a larger sensor to make image quality equivalent. For 80mm f/4 FF, you would need 50mm f/2.5 APS-C (that is 4 divided by 1.6, the scale factor between FF and Canon APS-C...it does not take pixel size into account at all), or 40mm f/2 4/3rds:

http://www.josephjamesphotography.com/equivalence/#1



> *1) f/2 = f/2 = f/2*
> 
> This is perhaps the single most misunderstood concept when comparing formats. Saying "f/2 = f/2 = f/2" is like saying "50mm = 50mm = 50mm". Just as the effect of 50mm is not the same on different formats, the effect of f/2 is not the same on different formats.
> 
> ...
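That scaling can be written as a one-line helper (the function is mine, just reproducing the article's arithmetic):

```python
def equivalent_settings(focal_mm, f_number, crop_factor):
    """Scale a full-frame focal length and f-number to a smaller format so
    that field of view, depth of field, and total light gathered all match:
    divide both by the crop factor."""
    return focal_mm / crop_factor, f_number / crop_factor

print(equivalent_settings(80, 4, 1.6))  # (50.0, 2.5) -- Canon APS-C
print(equivalent_settings(80, 4, 2.0))  # (40.0, 2.0) -- 4/3rds
```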


----------



## sagittariansrock (Aug 22, 2014)

jrista said:


> @sagittariansrock: There is a difference in your explanation than the standard one: Your explanation does not utilize the full sensor area of larger sensors. Your explanation is based on the subject filling the same absolute area of the sensor, regardless of the total sensor area. That is the reach-limited argument. That is the ONE AND ONLY case where smaller sensors can achieve the same image quality as a larger sensor.



That is exactly right. Except, here we are discussing the capacity of a pixel to collect light, which is why this scenario should be used- that is, where the incident light is exactly the same in terms of intensity and quality.



jrista said:


> The fair comparison is when your subject is framed the same, which means that for progressively larger sensors, a greater absolute area of sensor covers the subject. In that case, everything Orangutang, Lee Jay, and myself have stated is true. There is no circumstance where smaller sensors, regardless of their pixel size, can ever outperform a larger sensor.



I don't know if I would call it a fair comparison, but I can call it a real-world comparison. And as I said before, I am sure everyone agrees with what you, Lee Jay and Orangutan are contending here: larger sensors have better IQ. No way can a smaller sensor collect the same amount of light. Except that is not the point here. The point is: would a smaller pixel collect less light than a larger pixel? Yes. Would a pixel collect the same amount of light whether it's part of a large sensor or a small sensor? Of course!
This is why I said you are all disputing each other while everyone is right at the same time.


----------



## Lee Jay (Aug 22, 2014)

sagittariansrock said:


> I am not talking of the same image. I am talking of the same subject distance, therefore I am talking of a different FoV.



In which case, you're only talking about a focal-length or magnification-limited situation. That happens, and I work with it regularly, but it's not the normal situation when comparing the use of different formats in the same conditions.

Constant framing is the norm.


----------



## Lee Jay (Aug 22, 2014)

sagittariansrock said:


> Exactly, that is completely irrelevant here. An APS-C sensor will never have the same or even close IQ to a same-generation FF sensor, while in reach-limited circumstances a higher resolution will demonstrate advantages. In this case, we are NOT talking about that. We are NOT talking about APS-C sensor being better than FF. This is a very focused argument: the size of pixel defines its light-gathering capacity. This capacity will be the same whether the same pixel resides in an FF sensor, an APS-C sensor or an MF sensor.



In which case, smaller pixels covering the same area almost always win in a comparison of final images. There are edge cases where larger pixels win (generally, exceptionally photon-starved conditions with effective ISOs into the 6 and 7 digits), but even those are because of specific limitations of certain technologies.


----------



## jrista (Aug 22, 2014)

sagittariansrock said:


> jrista said:
> 
> 
> > @sagittariansrock: There is a difference in your explanation than the standard one: Your explanation does not utilize the full sensor area of larger sensors. Your explanation is based on the subject filling the same absolute area of the sensor, regardless of the total sensor area. That is the reach-limited argument. That is the ONE AND ONLY case where smaller sensors can achieve the same image quality as a larger sensor.
> ...



I'm sorry, but I beg to differ, given that this is the title of the thread:

*Another my Stupid question = Sensor Sizes*

And this is the actual question asked:



surapon said:


> Dear Teachers and Friends.
> Well, Yes, I can take a SoSo-or Good Photos, Because of I take the photos so long time. But for the High Tech of Digital Photography, I almost know nothing a bout this New Technology.
> My Stupid Question are :
> 1) *Are the Size of the Sensor Matter ?---Or the MP. count are matter ?*
> ...



The question is whether the size of the sensor matters or not. So, the point here is NOT about whether smaller pixels will collect more light...the point, very specifically, is whether differences in sensor size matter. Within the scope of the original question asked by Surapon, the proper context for comparisons is a normalized one...one in which the subject is framed identically, and, to be truly fair, where all the output images are resampled to the same dimensions.

That is the standard context for comparing images. It's a requirement of using ISO 12233 test charts (the standard chart that pretty much every lens and camera tester, with maybe the exception of DXO, uses to compare the IQ of different camera systems) that all framing be identical regardless of sensor size.

The reach-limited comparison is not wrong; however, it does place a handicap on larger sensors, a handicap that becomes increasingly severe the larger the discrepancy between the small sensor and the large sensor. In a reach-limited scenario, I'd rather have an APS-C sensor, or maybe even smaller, with small pixels, than a FF sensor with large pixels. But when I have the option of getting closer, or using a longer lens, I'll take the FF every time. I'd certainly rather have more pixels in the FF than fewer, as I can always downsample if I need less noise...but when I have the option of framing identically, larger sensors trounce smaller sensors.


----------



## sagittariansrock (Aug 23, 2014)

jrista: Of course, sensor size does matter. Pixel size matters too, but that is not the topic of the OP's question. Now I can see how that got transformed over the few pages.
Lee Jay: You are right in considering a real-world situation where an equally framed image should be the parameter of comparison, whereas I am supporting Don Haines' theoretical consideration that a larger pixel will gather more light than a smaller pixel, and that the same-sized pixel will gather an equal amount of light irrespective of the total area of the sensor it is a part of.

I think we all understand the physics, essentially, and also the real world fact that a larger sensor provides a ton of benefits under certain conditions, and a smaller sensor provides benefit in one specific situation. Good for us...


----------



## surapon (Aug 24, 2014)

Wow, Wow, Wow.
Thank you Sir/ Madam for Complete Explain and answer my question to me---Yes, Thousand Thanks that I just Learn some thing New from all of my Teachers and All of My Friends.
Have a great Night, Sir/ Madam.
Surapon


----------



## sgs8r (Aug 24, 2014)

Lee Jay said:


> sagittariansrock said:
> 
> 
> > You say "with identical technology, size of sensor is all that matters". I disagree. If you simply made a bigger 7D sensor, with the same technology and same pixel density, then it will have the same noise characteristics as the 7D sensor.
> ...



Well, how do you re-frame to keep the image on the larger sensor the same? With the same lens/optics, you need to get closer, but then you're getting more light on the lens, and thus on the sensor. Let's assume the large sensor is twice the diagonal size of the small sensor. You'll have to halve the distance to the subject. So 4 times as much light, but also 4 times as many pixels, hence the same light on each pixel. Each pixel on the large sensor thus has the same SNR as those on the small sensor. But you could downsample, combining groups of 4 pixels to get the same image and number of pixels as the smaller sensor, with better noise performance by a factor of two (averaging N pixels drops noise by sqrt(N)). Really, though, this is because you've moved closer and thereby increased the light (signal).

On the other hand, without changing position, you could use a different lens to fill the larger sensor with the same view (so keeping the framing the same). This implies an increase in the focal length, which, for the same aperture, implies an increase in the f-stop, i.e. a reduction in the light density on the sensor. We've kept the total captured light the same but spread it over a larger area with more pixels, so the per-pixel SNR would decrease with the larger sensor. Again, you could combine pixels, downsampling, to improve the SNR. But I think only by a factor of two (again assuming the large sensor diagonal is twice that of the smaller sensor). So worse SNR as compared to the small sensor (but higher resolution due to more pixels).

One difficulty with this whole discussion is that one wants to say "Keeping everything else the same, here's what happens when you change the pixel size...". But it's actually impossible to keep everything else the same. Same optics, same lens, same shooting location, same framing, same viewing size, etc. One issue raised with the original post (which was excellent, by the way) was the upsampling applied to the full-frame image. But if you want to view them so the moon is the same size on your screen in both images, you need to either upsample one or downsample the other. Otherwise one image will be bigger than the other, making comparison problematic.

One other thing. Based on my somewhat crude calculation above (maybe this is well-known to the rest of you), it seems like from an SNR standpoint (with sensor size fixed), you are better off using bigger pixels, rather than subdividing each big pixel into smaller pixels and then averaging/downsampling them to recover the same number of pixels (as with the big pixels). I'm assuming that the noise comes from the electronics downstream of the light-gathering component, so that a big pixel has the same absolute amount of noise as a small pixel (but more signal), so 4 times the area means 4 times the SNR, whereas combining pixels will add the 4 light values, but also the 4 noise values. Assuming the noise is random and independent, you'll get some noise cancellation, but only a sqrt(4)=2 factor reduction, so lower SNR than the big pixel. To put this in practical terms, you get better SNR from the HTC One's 4 MP camera than from downsampling the Nokia 1020's 40 MP image to 4 MP (assuming the same sensor size and optics, which may not be the case, but you get my point).
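The crude calculation above, in code (read-noise-only model, illustrative values):

```python
import math

# Assumption from the paragraph above: read noise is a fixed absolute amount
# per pixel, regardless of pixel size, and photon noise is ignored.
signal_per_small_pixel = 100.0  # arbitrary units
read_noise = 5.0                # same absolute noise for any pixel size

# One big pixel with 4x the area: 4x the signal, one dose of read noise.
big_snr = (4 * signal_per_small_pixel) / read_noise

# Adding 4 small pixels: 4x the signal, but 4 independent noise doses
# combine in quadrature, i.e. sqrt(4) = 2x the noise.
binned_snr = (4 * signal_per_small_pixel) / (math.sqrt(4) * read_noise)

print(big_snr / binned_snr)  # 2.0 -- the big pixel wins by sqrt(4)
```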


----------



## Lee Jay (Aug 24, 2014)

sgs8r said:


> One difficulty with this whole discussion is that one wants to say "Keeping everything else the same, here's what happens when you change the pixel size...". But it's actually impossible to keep everything else the same. Same optics, same lens, same shooting location, same framing, same viewing size, etc.



Same focal length, same shutter speed, same f-stop, same ISO, same lighting, same shooting position, shot in raw, same raw processor, pixel area different by a factor of 16 (small pixels on the left).


----------



## jrista (Aug 24, 2014)

sgs8r said:


> Lee Jay said:
> 
> 
> > sagittariansrock said:
> ...



Downsampling uses averaging, not adding. If you added, then you would end up with a bunch of blown pixels. Averaging reduces noise, where adding does not. So downsampling has the exact same effect on SNR as using larger pixels or binning smaller pixels in hardware. Additionally, noise is Poisson. If you have a pixel with twice the pixel pitch, you have four times the area, and you have four times the signal...but you still have SQRT(4) the noise. A pixel twice the pitch still only has half the relative noise. It doesn't matter if you use a larger pixel, or bin/average smaller pixels together. It doesn't even matter if you integrate four separate frames with the same noise together. It's always the same noise in the end. A pixel four times the area, averaging four pixels together, integrating four separate frames: all have SQRT(4) the amount of noise.

Also, you're not quite right about a smaller aperture reducing SNR to a level below that of the smaller sensor. If you really do have two sensors, one with half the diagonal, then you could use a 100mm f/4 on the larger sensor and a 50mm f/2 on the smaller. That would get you identical framing. In that case, the total amount of light reaching the sensor is also identical. THAT, right there, is exactly what equivalence is all about. But...pixel size isn't a factor. Because downsampling averages pixels together (which involves first adding, yes...but then dividing), when you NORMALIZE, pixel size doesn't matter. Two large-sensor cameras with different pixel sizes are still going to gather the same amount of light for any absolute area of the subject. A small-sensor camera, for an identically framed subject (50mm f/2 instead of 100mm f/4), is going to gather the same amount of light for the same absolute area as the larger-sensor camera...however, it's only gathering the same amount of light because of the wider aperture. Slap a 100mm f/2.8 lens on your larger sensor, and it is now gathering twice the amount of light. (Plus, there are other benefits with the larger sensor...narrower depth of field, or a wider field of view, etc.)
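A quick Monte Carlo sketch of the binning claim: a pixel with 4x the area versus an average of four small pixels, using a Normal approximation to the Poisson photon counts (illustrative numbers):

```python
import math
import random
import statistics

random.seed(0)

# Photon arrival is Poisson; for counts this large, Normal(mean, sqrt(mean))
# is a close stand-in (the stdlib has no Poisson sampler).
def photon_samples(mean, n):
    return [random.gauss(mean, math.sqrt(mean)) for _ in range(n)]

def snr(xs):
    return statistics.mean(xs) / statistics.stdev(xs)

N = 50_000
small_mean = 100  # photons per small pixel per exposure

# One pixel with 4x the area collects 4x the photons.
big = photon_samples(4 * small_mean, N)

# Averaging four small pixels (software downsample or hardware bin).
binned = [statistics.mean(photon_samples(small_mean, 4)) for _ in range(N)]

print(f"big pixel SNR ~{snr(big):.1f}")  # both come out near sqrt(400) = 20
print(f"binned SNR ~{snr(binned):.1f}")
```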

Pixel size is irrelevant. SNR, and therefore dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise, are ultimately relative to total sensor area. That's it.


----------



## sagittariansrock (Aug 24, 2014)

jrista said:


> Pixel size is irrelevant. SNR, and therefor dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise are ultimately relative to total sensor area. That's it.



If that is so, what is stopping Canon from making a 46 MP FF camera with the same sensor tech as, say, the 7D? 
I am not really an expert on this, but I think every pipeline (pixel-->signal processor) must add its own bit of noise. So the noise from four 1x1 micron pixels > the noise from one 2x2 micron pixel. 
It also has a bearing on processor power, but that's another topic.
Maybe an expert can chime in on this?


----------



## Lee Jay (Aug 24, 2014)

sagittariansrock said:


> jrista said:
> 
> 
> > Pixel size is irrelevant. SNR, and therefor dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise are ultimately relative to total sensor area. That's it.
> ...



You already got the correct answer.


----------



## Aglet (Aug 24, 2014)

jrista said:


> Pixel size is irrelevant. SNR, and therefor dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise are ultimately relative to total sensor area. That's it.



Uhmm... except when pixel size is not irrelevant.
I have to disagree with you, somewhat, on one point: dynamic range will become limited when pixels become too small, and hence their full-well capacity decreases by more than just the ratio of their surface area.
I say this because, I suspect, the vertical dimension of the photodiode will have some aspect-ratio limit with regards to the surface area. When the surface area becomes too small, the other dimension will have to shrink also, and that will limit the full-well capacity per unit of surface area, decreasing maximum DR. You'll still be able to reduce noise levels quite effectively by binning/averaging, either hardware or software, but you'll reach a lower maximum when the pixel geometry gets too small.
I suspect something like a 40MP smartphone camera may be an example.

EDIT: Actually, we're already there in varying degrees.
Since many sensor systems are already counting individual electrons, smaller pixels are just gonna be DR-limited. 14 bits at one electron per count is only 16384 e-.
Small pixels are useful even with full-well counts well below that, like 2^10, but then that's already a 10-stop-or-less DR. When you start averaging them, you're not gonna gain quite all of that DR back. And then, when you hit the aspect-ratio limit for the photodiode, the DR curve will really drop off.
Perhaps a resident math-whiz could graph that curve for a demo.... (nudge, hint-hint  )
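Not a graph, but a crude numeric version of that tradeoff, using the usual engineering definition DR = log2(full well / read noise) and made-up electron counts:

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

fw, rn = 4000, 2.0  # electrons; made-up smartphone-class figures

single = dr_stops(fw, rn)          # one small pixel: ~11 stops
binned = dr_stops(4 * fw, 2 * rn)  # 2x2 bin: 4x well, sqrt(4)x read noise
big = dr_stops(4 * fw, rn)         # one pixel of 4x area, same read noise

# Binning recovers only one of the two stops a genuinely larger pixel gets.
print(f"{single:.1f} / {binned:.1f} / {big:.1f} stops")  # 11.0 / 12.0 / 13.0
```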


----------



## jrista (Aug 24, 2014)

sagittariansrock said:


> jrista said:
> 
> 
> > Pixel size is irrelevant. SNR, and therefor dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise are ultimately relative to total sensor area. That's it.
> ...



I think Canon's 500nm process is stopping them. They could make a 46MP FF sensor today...but I don't think it would perform as well as an Exmor. As I've said...pixel size, and therefore pixel count, don't really matter. It's primarily the sensor size that matters. When the sensor sizes are the same, then it's the core technology that matters. A D800 is better not because of its pixel size or sensor size...it's better because of the higher Q.E., because of the lower read noise, and because of the clean, random nature of the tiny bit of read noise that does exist. 

You are correct that electronics throughout the whole pipeline add noise. How you design that pipeline can have a big impact on how much noise is added and where. Based on Roger Clark's work, Canon's sensors themselves are actually not that bad. They suffer from the large 500nm transistor size and lower Q.E., but from an electronic noise standpoint, the noise introduced by the sensor itself is quite low. It's the downstream components, the high frequency ones that all have to process huge numbers of pixels, that add most of the read noise. Canon cameras have a secondary downstream amplifier, which is used to amplify the signal post-read for really high ISO settings (i.e. to get ISO 12800, Canon first amplifies to ISO 3200 straight off the pixel with the per-pixel amplifiers, then amplifies another two stops using the downstream amp; the downstream amp processes all the pixels and must operate at a much higher frequency, which produces more heat (so more dark current), and the higher frequency of the oscillations results in high frequency noise being introduced into the signal). Canon also places its ADC units off the sensor die, in the DIGIC chips. There are either 8 or 16 ADC channels, depending on whether the camera has one or two DIGIC chips. Those ADC channels each have to process tens of thousands to millions of pixels, and again must operate at a higher frequency, which introduces more noise.

Canon's competitors have moved to on-die ADC units. Most use a column-parallel ADC design, one unit per column of pixels. Most are also fabricated with smaller transistors, which reduces power consumption and reduces energy dissipation. Since each CP-ADC unit processes fewer pixels, they can operate at a lower frequency, which reduces heat and introduces less noise. In Sony Exmor's case, the high frequency clock was also located on a remote corner of the sensor die, away from the ADC units, to avoid any high frequency noise from being introduced.

In practice, read noise is actually higher the larger the pixel. Look at Sensorgen.info pages for the 7D and 5D III:

http://sensorgen.info/CanonEOS_7D.html
http://sensorgen.info/CanonEOS_5D_MkIII.html

The 7D has 8e- read noise, while the 5D III has 35e- read noise. That's the amount of read noise introduced into each pixel during the readout and ADC pipeline. I'm not exactly sure why that is. Even if you compute the relative areas of the pixels for both cameras and multiply the 7D's RN by that ratio, it still only comes out to about 17e-. So on an absolute-area basis, the 7D has less read noise per area than the 5D III. The 1D X has slightly more read noise than the 5D III. The main difference in the read pipelines is the DIGIC chips...the 7D uses a DIGIC 4, whereas the 5D III and 1D X use DIGIC 5 generation chips. The DIGIC 5s use much higher frequency ADC units...but I'm just speculating that that's the sole or primary cause of the higher read noise.
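For reference, that area scaling in numbers (the pixel pitches are approximate figures for these two cameras, and the scaling is the same linear one described above):

```python
# Sensorgen read-noise figures quoted above, plus approximate pixel pitches.
pitch_7d, pitch_5d3 = 4.3, 6.25  # microns
rn_7d, rn_5d3 = 8.0, 35.0        # electrons

area_ratio = (pitch_5d3 / pitch_7d) ** 2  # ~2.1x larger pixel area on the 5D3
scaled_7d_rn = rn_7d * area_ratio         # ~17e-, still far below 35e-

print(f"{scaled_7d_rn:.0f}e- (area-scaled 7D) vs {rn_5d3:.0f}e- (5D III)")
```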


----------



## jrista (Aug 24, 2014)

Aglet said:


> jrista said:
> 
> 
> > Pixel size is irrelevant. SNR, and therefor dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise are ultimately relative to total sensor area. That's it.
> ...



You're absolutely right...at some point, fill factor becomes an issue. I've talked about fill factor in many of my posts in the past. The fill factor issue is why we have BSI sensor designs, and why pretty much every very small sensor, ones using 1.2µm pixels and smaller, are BSI. The BSI design maximizes fill factor, effectively creating an ideal pixel, thereby exhibiting the ideal behavior I've described. 

Large sensors don't use BSI, though I think Canon's APS-C sensors could actually benefit from it. There is most certainly a small loss to fill factor, so you are correct that smaller pixels on a 500nm process are not going to gain back 100% of the DR during the downsampling process. I don't know that the pixels are small enough for that to result in a difference in noise detectable by anything other than a computer algorithm, though.


----------



## K-amps (Aug 24, 2014)

Guys: Great responses, but I have a simple question and will take Nikon as an example:

Will the D800, with its files downsampled to 12mp, have the same noise performance as the D700?


----------



## jrista (Aug 24, 2014)

K-amps said:


> Guys: Great responses, but I have a simple question and will take Nikon as an example:
> 
> Will the D800, with its files downsampled to 12mp, have the same noise performance as the D700?



The D800 would have better performance. The D700 was one of the last cameras that still used a Nikon-designed sensor, IIRC; its sensor is much more like a Canon sensor than an Exmor. It is also limited to a base ISO of 200...which is a pretty severe handicap. It would be trounced by the D800.


----------



## sgs8r (Aug 24, 2014)

jrista said:


> sgs8r said:
> 
> 
> > Lee Jay said:
> ...



I was a bit sloppy. I agree, and was trying (sort of) to say the same thing. Averaging involves adding and then normalizing, and it is the adding that affects SNR. I was imagining a low-noise situation where one was trying to get the "signal" into a reasonable range. Imagine shooting a white card in low light. You want it to come out at 8-bit RGB (255,255,255), but it is coming out at (64,64,64). To get it to the "correct" value, you would scale/normalize by 4x. Alternatively, you could add the values of 4 pixels together (no normalization necessary) and increase SNR at the same time. From an SNR standpoint, it is the adding step of the averaging that is significant. The divide-by-number-of-pixels can be lumped in with the downstream normalizing you do via ISO/Levels/Curves or whatever, since it doesn't change the SNR.

More interesting (to me at least) is that you seem to be saying that the noise is intrinsic to the light capture at the front end of the processing chain and not due to downstream amplification (or whatever). So (effectively) turning the photon count into, say, a voltage already carries the Poisson noise (shot noise), and increasing the pixel area already does the adding. Thus 4 times the area generates 4 times the signal and sqrt(4) = 2 times the noise, so 2 times the SNR, before any subsequent processing (e.g. ISO-related amplification).
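That sqrt relationship is easy to sanity-check with a quick simulation (a sketch of pure shot noise, not a sensor model; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mean = 100.0                             # mean photon count on the small pixel

small = rng.poisson(mean, n)             # small pixel
big = rng.poisson(4 * mean, n)           # 4x the area -> 4x the photon count

snr_small = small.mean() / small.std()   # ~ sqrt(100) = 10
snr_big = big.mean() / big.std()         # ~ sqrt(400) = 20

print(snr_big / snr_small)               # ~2: quadruple the area, double the SNR
```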

This gets at the point I was trying to make in comparing the HTC One vs the Nokia Lumia. And sagittariansrock is basically asking a similar question: is it better to have pixels 4 times bigger, or to have 4 times as many pixels and downsample/average them together (by a factor of 4) in low-light situations? Sounds like you are saying that the result is the same (SNR-wise). In which case (as sagittariansrock says) it would seem to make sense to put as many pixels on the sensor as possible and simply average/downsample in high-noise (low light) situations. When light is ample, you don't downsample and you have the advantage of higher resolution.
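For what it's worth, the "one big pixel vs. four small pixels binned" comparison can be sketched the same way (shot noise only; read noise and fill factor ignored, which is where real sensors differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
mean = 50.0                                           # photons per small pixel

one_big = rng.poisson(4 * mean, n)                    # one pixel with 4x the area
four_small = rng.poisson(mean, (n, 4)).sum(axis=1)    # four small pixels, summed

snr = lambda x: x.mean() / x.std()
print(snr(one_big), snr(four_small))                  # both ~sqrt(200) ~= 14.1
```

Averaging instead of summing divides both signal and noise by 4, so the SNR comes out identical either way; per-pixel read noise is what breaks the tie in practice.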



> Also, you're not quite right about a smaller aperture reducing SNR to a level below that of the smaller sensor. If you really do have two sensors, one with half the diagonal, then you could use a 100mm f/4 on the larger sensor and a 50mm f/2.8 on the smaller. That would get you identical framing. In that case, the total amount of light reaching the sensor is also identical.



Hmmm. Doubling the focal length would scale the image diagonal to produce the same framing on the larger sensor. Decreasing by one stop from f/2.8 to f/4 would cut the light intensity by a factor of 2, but with 4 times the area, it seems like the large sensor would capture 2 times the total light. I think the issue is that in going from 50mm f/2.8 to 100mm f/4, the aperture diameter increases by sqrt(2), not by 2 as the sensor diagonal does. So it seems like a 100mm f/5.6 would produce the same total light on the large sensor as the 50mm f/2.8 on the small sensor. In that case, the actual aperture diameters are the same: 100/5.6 for the 100mm vs. 50/2.8 for the 50mm. If we suppose that lens cost is mostly driven by aperture diameter (lens pricing being what it is, reality may be totally different, of course), the cost of the two lenses would be about the same. So even though two different lenses are involved, the cost is about the same, which makes for more of an apples-to-apples comparison lens-wise.
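The arithmetic checks out (a sketch in relative units, constants dropped):

```python
# Illuminance on the sensor scales as 1/N^2 (N = f-number), so
# total light ~ sensor_area / N^2, in relative units.
def total_light(area, f_number):
    return area / f_number ** 2

area_small, area_large = 1.0, 4.0     # large sensor has 2x the diagonal

print(total_light(area_small, 2.8))   # 50mm f/2.8 on the small sensor
print(total_light(area_large, 5.6))   # 100mm f/5.6 on the large sensor -- equal

# And the entrance-pupil (aperture) diameters match:
print(50 / 2.8, 100 / 5.6)            # ~17.9mm each
```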

But if the total light has stayed the same while the total Poisson noise has increased with the area by sqrt(4) = 2, it seems like we have lost overall SNR. If the SNR is still ample, then no big deal. Otherwise, it seems that we would be better off concentrating the light on a smaller sensor area where we would get less total noise, so for a fixed number of pixels, you'd get higher pixel SNR.



> THAT right there is exactly what equivalence is all about. But...pixel size isn't a factor. Because downsampling averages (which involves first adding, yes...but then dividing) pixels together, when you NORMALIZE, pixel size doesn't matter. Two large sensor cameras with different pixel sizes are still going to gather the same amount of light for any absolute area of the subject.



Right, so for a given sensor size, both the total signal and total noise are the same. But on a pixel-by-pixel basis, as you shrink the pixels by a factor of N, the signal goes down as 1/N but the noise goes down only as 1/sqrt(N) (the reverse of the averaging/downsampling/binning situation). So increasing resolution involves decreasing SNR for each pixel. But if you fix the output resolution (monitor or print), then the sensor pixel size doesn't matter as long as the sensor resolution exceeds the output resolution. If you have more sensor pixels (with more noise), you'll just average them back together for each output pixel, getting back to the same SNR. On the other hand, if the output resolution is higher than the sensor resolution, then, well, how do you up-res? You have some type of resolution-noise tradeoff.



> A small sensor camera, for an identically framed subject (50mm f/2.8 instead of 100mm f/4) is going to gather the same amount of light for the same absolute area as the larger sensor camera...however it's only gathering the same amount of light because of the wider aperture. Slap a 100mm f/2.8 lens on your larger sensor, and it is now gathering twice the amount of light. (Plus, there are other benefits with the larger sensor...narrower depth of field, or a wider field of view, etc.)



Of course a 100mm f/2.8 will be pricier than a 50mm f/2.8. But I'm nitpicking. 



> Pixel size is irrelevant. SNR, and therefore dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise are ultimately relative to total sensor area. That's it.



I think I'm finally seeing how things are getting confused. One needs to fix the output size and pixel count. Then the SNR of each output pixel is independent of the sensor pixel size, for a fixed sensor area. The SNR of each sensor pixel varies with the size of the sensor pixel, but the output pixel SNR does not. If the output is being displayed in a 1500x1000 window (on, say, a 90 dpi display), then a full frame sensor of 1500x1000 pixels will produce the same result as a 3000x2000 pixel full frame sensor. The individual pixels of the latter will have lower SNR, but you'll be averaging 4 of them to produce each output pixel and will end up with the same SNR. So it all washes out. But this brings us back to the earlier question: why isn't the finest pixel size used across all sensor sizes, since the effect of a larger pixel can be gained by downsampling (but the reverse cannot)? I assume the cost of the electronics and/or lower yield with finer features is the reason?
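The windowed-display example is simple enough to simulate directly (shot noise only, ignoring read noise; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
h, w = 1000, 1500
mean = 25.0                                   # photons per small pixel

# 1500x1000 sensor with big pixels (each with 4x the area):
coarse = rng.poisson(4 * mean, (h, w)).astype(float)

# 3000x2000 sensor with small pixels, 2x2-binned down to 1500x1000:
fine = rng.poisson(mean, (2 * h, 2 * w)).astype(float)
binned = fine.reshape(h, 2, w, 2).sum(axis=(1, 3))

snr = lambda a: a.mean() / a.std()
print(snr(coarse), snr(binned))               # both ~sqrt(100) = 10
```

Same output grid, same SNR per output pixel, exactly as described above; only when per-pixel read noise is added does the finer sensor start paying a penalty.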


----------



## jrista (Aug 25, 2014)

sgs8r said:


> > Pixel size is irrelevant. SNR, and therefore dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise are ultimately relative to total sensor area. That's it.
> 
> 
> 
> I think I'm finally seeing how things are getting confused. One needs to fix the output size & pixel count. Then the SNR of each output pixel is independent of the sensor pixel size---for a fixed sensor area. The SNR of each sensor pixel varies with the size of the sensor pixel, but the output pixel SNR does not. If the output is being displayed in a 1500x1000 window (on, say, a 90 dpi display), then a full frame sensor of 1500x1000 pixels will produce the same result as a 3000x2000 pixel full frame sensor. The individual pixels of the latter sensor will have lower SNR, but you'll be averaging 4 pixels to produce each output pixel and end up with the same SNR. So it all washes out. But this brings us back to the earlier question: Why isn't the finest pixel size used across all sensor sizes, since the effect of a larger pixel can be gained by downsampling (but the reverse cannot)? I assume that the cost of the electronics and/or lower yield with finer features is the reason?



First, you are correct: we needed a 100mm f/5.6 lens for the larger sensor. I was stuck on the equivalence article, where the difference in sensor sizes was a factor of two in area, rather than a factor of four (I was mostly quoting). In your particular scenario, since the sensors differ in area by a factor of four, we do indeed need two stops less light with the longer lens/larger sensor to get an equivalent result.

As for the above, now you've got it. It has to do with relative OUTPUT size. It doesn't really matter what size pixels you use if all you ever do is sample your images down to 1920x1080 for viewing online on computer screens. However, total sensor area DOES matter: for the same framing, a larger sensor will produce a better 1920x1080 pixel image than a smaller sensor.

As for your question:



sgs8r said:


> The individual pixels of the latter sensor will have lower SNR, but you'll be averaging 4 pixels to produce each output pixel and end up with the same SNR. So it all washes out. But this brings us back to the earlier question: Why isn't the finest pixel size used across all sensor sizes, since the effect of a larger pixel can be gained by downsampling (but the reverse cannot)? I assume that the cost of the electronics and/or lower yield with finer features is the reason?



Smaller pixels are more difficult to manufacture. I don't exactly know where the threshold is...it's probably easy to compute. But at some point fill factor, the ratio of light-sensitive photodiode area to total sensor area (the remainder being used by readout logic and wiring), becomes small enough that smaller pixels consistently perform worse than larger pixels. The primary solution for manufacturers that do make sensors with very small pixels (usually less than two microns) is BSI, or Back-Side Illuminated sensors (sometimes just BI, Back-Illuminated). Manufacturing BSI sensors is quite a bit more difficult than manufacturing FSI sensors. With BSI, the entire sensor surface is effectively photodiodes. There are literally no gaps, only microlenses and the CFA; the other side has all the transistors and wiring. The problem is that these sensors tend to be fragile. FSI sensor designs usually have a pretty thick silicon substrate, but when you etch both sides, the substrate becomes quite thin. It is easier to manufacture small sensors with BSI designs than really large ones, which is why APS-C and FF sensors are FSI these days.

At some point I suspect manufacturers will start pushing the pixel size envelope, and they will figure out a way to stabilize BSI sensor designs so they can be used for larger form factors. I doubt Canon will be the one to figure out the solution. At the moment, some of the patents that Omnivision has filed seem to have to do with making sensors more rigid and less fragile, so they may be the first to find a solution. If/when someone DOES figure out how to make a stable BSI design for larger form factors, I'm sure we will see another quantum leap in full frame sensor resolution. I wouldn't be surprised to see pixels in the 3µm or smaller range on a full frame sensor. Since the light-sensitive surface would effectively have 100% or nearly 100% fill factor, such a sensor should exhibit "ideal" characteristics...fill factor wouldn't be an issue, only output magnification would matter.


----------

