# dual pixel tech going forward



## 3kramd5 (Apr 24, 2014)

On the 70D, Canon has two photo-diodes per pixel across almost the entire sensor. The fact that they can get usable phase information from them suggests that they can read them independently.

So, could they swap out the Bayer filter and double resolution rather than get sensor-level phase detection? Perhaps being co-located they couldn't use a traditional Bayer design, but could they, for example, have green AND either red or blue at every pixel?

If so, that could be a cost-effective way forward for producing 1DmkV and 1DmkVs cameras once DPAF is perfected to the point that it equals or betters SIR AF. The former could have a traditional Bayer filter with the second processor dedicated to amazing autofocus; the latter could have double the resolution and use a simpler last-gen SIR AF unit.

I am probably fundamentally misunderstanding the implications of having two photo-diodes per pixel, though. More likely DPAF is their way into high end mirrorless.


----------



## neuroanatomist (Apr 24, 2014)

Hope you like the pano look… 

The 'dual pixels' are all split vertically, so if they altered the microlenses and CFA to increase the actual resolution of the sensor, you'd end up with images having a 3:1 aspect ratio.


----------



## 3kramd5 (Apr 24, 2014)

Good point.

I was thinking along the lines of how Bayer filters have twice as much green as either red or blue. So perhaps it's not so much a real resolution increase (like 20MP becoming 40MP) as an information increase within the same dimensions. 

This way (again assuming they can read/record them individually), they could have, as I mentioned, red or blue at each pixel, rather than one red and one blue per every four pixels. They still get the 2:1 ratio of green to red and green to blue, but do so without dedicating individual pixels to green - they get green everywhere. It's like 2/3 of a Foveon.


----------



## Don Haines (Apr 24, 2014)

neuroanatomist said:


> Hope you like the pano look…
> 
> The 'dual pixels' are all split vertically, so if they altered the microlenses and CFA to increase the actual resolution of the sensor, you'd end up with images having a 3:1 aspect ratio.



That's why the 7D2 is held up.... Quad Pixel technology so they can use vertical and horizontal phase for the AF system...


----------



## caruser (Apr 24, 2014)

neuroanatomist said:


> The 'dual pixels' are all split vertically, so if they altered the microlenses and CFA to increase the actual resolution of the sensor, you'd end up with images having a 3:1 aspect ratio.


I think that's not the right way to look at this. It'd be more like having two color channels per pixel in the raw file rather than only one as input to the demosaic.


----------



## Drizzt321 (Apr 24, 2014)

Wouldn't you also end up having to deal with a significant drop-off in number of photons hitting the photo-diodes? After all, you're essentially turning one 'pixel' site into 3 sub-pixels, none of which covers the entire area of the 'pixel'. Not that I don't want them to try innovative new things like that, but I don't think it's practical except for maybe some specialized applications.


----------



## 3kramd5 (Apr 24, 2014)

Drizzt321 said:


> Wouldn't you also end up having to deal with a significant drop-off in number of photons hitting the photo-diodes? After all, you're essentially turning one 'pixel' site into 3 sub-pixels, none of which covers the entire area of the 'pixel'. Not that I don't want them to try innovative new things like that, but I don't think it's practical except for maybe some specialized applications.



I don't know if you'd lose any additional light. Right now, there is a color filter immediately covering two diodes. If you had two smaller color filters adjacent to one another, you aren't going to halve the light, though you may move it around. Rather than "all light hitting here is red", it would be "some of the light hitting here is red and some of it is green," and they would have varying intensities. I think.


----------



## 3kramd5 (Apr 24, 2014)

caruser said:


> neuroanatomist said:
> 
> 
> > The 'dual pixels' are all split vertically, so if they altered the microlenses and CFA to increase the actual resolution of the sensor, you'd end up with images having a 3:1 aspect ratio.
> ...



To be fair, in my initial post I did indeed mean spatially, so neuro's comment was pertinent.

That being said, correct: you wouldn't have, say, 10,368 × 3,456 (1Dx with twice as many horizontal pixels); you'd have 5,184 × 3,456, but each pair of pixels would be sufficient to yield RGB values rather than every four, meaning the color accuracy could be doubled, which could have a meaningful impact on a subsequent raster.


----------



## Bahrd (Apr 24, 2014)

I think you need to take into account the side-effect of the split pixels: they collect different (shifted) half-images in out-of-focus areas. See the following simple example of the POV-Ray-produced GIF presenting left and right half-images (the scene consists of just two, front- and back-focused, spheres): 





It thus seems that you can take advantage of twice as many pixels in the in-focus regions if you are able to precisely distinguish them from the out-of-focus ones...


----------



## pwp (Apr 25, 2014)

Don Haines said:


> That's why the 7D2 is held up.... Quad Pixel technology so they can use vertical and horizontal phase for the AF system...


Interesting...speculation or information?

-pw


----------



## Don Haines (Apr 25, 2014)

pwp said:


> Don Haines said:
> 
> 
> > That's why the 7D2 is held up.... Quad Pixel technology so they can use vertical and horizontal phase for the AF system...
> ...


A healthy mix of speculation and sarcasm..... I have zero inside information. I also have a perfect record on predicting future camera bodies..... wrong every time


----------



## Drizzt321 (Apr 25, 2014)

3kramd5 said:


> Drizzt321 said:
> 
> 
> > Wouldn't you also end up having to deal with a significant drop-off in number of photons hitting the photo-diodes? After all, you're essentially turning one 'pixel' site into 3 sub-pixels, none of which covers the entire area of the 'pixel'. Not that I don't want them to try innovative new things like that, but I don't think it's practical except for maybe some specialized applications.
> ...



Well, you realistically would. Since a photon can only hit one of the photo-diodes, if you give a photo-diode less surface area, fewer photons can hit it. With the current design, you end up with two photo-diodes that get the same color of light, which together have nearly as much surface area as a single photo-diode at the same pixel location.

It probably would also screw up the phase-detect AF, since now you have different colors of light being compared for the phase, and you can't be sure you're getting the same _amount_ of light in the different colors...so it'd probably be really, really hard to do phase-detect accurately. Then again...I'm no scientist, so maybe it's not so bad and you can reliably correct it via software.


----------



## 3kramd5 (Apr 25, 2014)

But don't you double your chances of getting the correct color?

Maybe my understanding is wrong, but doesn't the CFA filter out all but one color (frequency range) per pixel? So a red pixel in blue light won't receive any charge? 

In the case of red light on red pixels, yah, you'll get half as much, but in the case of green light on red or blue pixels, you'll get a reading. So yah, on a per-pixel level maybe you'd affect available signal, but it seems like it would average out across the entire array. But I'm not a scientist either 

And yah, it would likely prevent sensor level phase detection, which is why I suggested it could be for a "1DmkVs" model while the Bayer + dual pixel AF could go to a sports 1DmkV. Two lines, identical hardware except the CFA; different firmware.


----------



## jrista (Apr 26, 2014)

3kramd5 said:


> On the 70D, Canon has two photo-diodes per pixel across almost the entire sensor. The fact that they can get usable phase information from them suggests that they can read them independently.
> 
> So, could they swap out the Bayer filter and double resolution rather than get sensor-level phase detection? Perhaps being co-located they couldn't use a traditional Bayer design, but could they, for example, have green AND either red or blue at every pixel?
> 
> ...



Having two photodiodes per pixel means the photodiode pair exists _*underneath*_ the CFA filter and the microlens(es). That is actually the only way DPAF really works...to be able to detect a phase differential, you need to check the *HALVES* of each *PIXEL*. If you just shrink the pixel size and put different color filters over those smaller pixels...well, now you have smaller pixels (and an odd image ratio), and you no longer have DPAF. It's a tradeoff...resolution or a focus feature, which do you want/need? (Or, as the case may be, you get a cross between both: slightly smaller pixels (i.e. 20mp 70D vs. the 18mp that came before) AND DPAF.)

I know everyone likes to speculate about all the wonderful things that DPAF might potentially bring to the table...but so long as it is Dual-Pixel *Autofocus*, that's all you're really going to get. There really isn't any magic bullet here, no trickery that you can pull off by somehow using one half of the pixels at ISO 100 and the other half at ISO 800 for more dynamic range, etc. Pixel area is pixel area, and phase detect is phase detect. DPAF pixels serve one purpose when read out for AF, and another purpose when the halves are binned and read out for an image. Those are really the only two functions DPAF will ever serve, and while I'm sure the Magic Lantern guys will figure out something cool about the specific mechanism of DPAF's implementation...they will still only be able to work within the bounds of the sensor's design. The ML DR increase was ultimately thanks to an OFF-die downstream amplifier that allowed them to control the readout process, not really due to any specific nuance of Canon's actual sensor design. 

Assuming Canon does not remove that downstream amp in favor of some kind of on-die parallel ADC and readout system, I honestly don't expect them to be able to do anything more radical with DPAF. They may find a way of doing creative focus things with AF, maybe add the ability to remember AF positions for video purposes, things like that...but the design of DPAF doesn't really mean Canon suddenly has some amazing wildcard on their hands that can give them a significant edge in the stills photography department.


----------



## 3kramd5 (Apr 26, 2014)

jrista said:


> 3kramd5 said:
> 
> 
> > On the 70D, Canon has two photo-diodes per pixel across almost the entire sensor. The fact that they can get usable phase information from them suggests that they can read them independently.
> ...



Right. I'm more curious about what else they can do with dual pixels in general than DPAF itself. I am running on the assumption that using the pixel pairs for an IQ improvement (be it resolution, color depth, better demosaicing, etc.) would come at the cost of losing sensor-level phase AF. 

Maybe I'm grasping at straws, it just struck me as potentially a HUGE leap in pixel density for Canon SLRs.


----------



## jrista (Apr 26, 2014)

3kramd5 said:


> Right. I'm more curious about what else they can do with dual pixels in general than DPAF itself. I am running on the assumption that using the pixel pairs for an IQ improvement (be it resolution, color depth, better demosaicing, etc.) would come at the cost of losing sensor-level phase AF.
> 
> Maybe I'm grasping at straws, it just struck me as potentially a HUGE leap in pixel density for Canon SLRs.



I'm not really sure what kind of sensor design you're proposing. The name, technically speaking, is misleading. There are not actually "dual pixels" in Canon's design...given what a pixel actually is. There is a split photodiode within each single pixel. Since PIXELS are the elementary unit of an image, the fact that there are two photodiodes per pixel doesn't actually change anything from an imaging standpoint. 

Here is a diagram of the DPAF FSI Sensor design:






The split photodiode is beneath the color filter and the microlens. This is an essential aspect of sensor-plane phase-detection, as in order to detect phase, you have to have phase in the first place. The way a DPAF pixel works is light from the left half of the lens, the left phase, is detected by the left side of the split photodiode, and the right phase is detected by the right side. The PDAF firmware in the camera deals with determining if there is a phase differential between these two detections, and if there is, it computes how much of an AF adjustment is necessary to eliminate the differential. This CAN NOT BE DONE if the two halves of the photodiode are not contained within a SINGLE pixel.
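A toy numerical sketch of that phase comparison (purely illustrative - the function, signal, and shift range here are all invented, not Canon's firmware): the left- and right-half signals of an out-of-focus edge are shifted copies of each other, and the shift that best realigns them tells the AF logic the size and direction of the needed focus correction.

```python
import numpy as np

def phase_shift(left, right, max_shift=8):
    """Return the shift (in pixels) that best aligns the right half-image
    to the left one along a single row; the sign gives the focus direction."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, s)
        # compare only the interior to ignore wrap-around at the edges
        err = np.mean((left[max_shift:-max_shift] - shifted[max_shift:-max_shift]) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

edge = np.clip(np.arange(64) - 28, 0, 8).astype(float)  # a soft intensity edge
left = edge
right = np.roll(edge, 3)  # defocus displaces the right half-image by 3 px
print(phase_shift(left, right))  # prints -3: shift right back 3 px to align
```

In-focus detail would show zero shift; the larger the defocus, the larger the measured differential.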

Canon's marketing moniker, DPAF or "Dual Pixel" AF, is misleading. It is not dual pixels. It's dual photodiodes PER pixel. The two photodiodes in each pixel are electronically binned during readout. Electronic binning isn't the same as pixel averaging...by binning, Canon is effectively able to maintain the same behavior with a split photodiode as they would have with a single photodiode...instead of having half the full well capacity, because they are binning the two charges, they maintain the same kind of FWC as they would have with a single photodiode.

If Canon decided to put different color filters and shrink the microlens size over the two photodiode halves, they would no longer be able to do sensor-plane phase detection AF. They would simply have a higher resolution sensor, although that sensor would have a 2:1 pixel aspect ratio (twice as many pixels horizontally as vertically), which would be a little odd to demosaic and might not produce the best quality results. Canon might as well just shrink the entire pixel size by a factor of two, drop four times as many pixels on the sensor, and call it a day if they are going to do that. 

DPAF, as it is currently designed (according to the diagram above) is pretty strictly an AF thing. As far as imaging goes, since the photodiode halves are binned per pixel, there is really no difference vs. a sensor that just has one photodiode per pixel. There isn't anything special or magical about DPAF that will give Canon the ability to do something no other manufacturer can. It won't improve resolution (since the photodiode is at the bottom of the pixel well, below the color filter and microlens), it won't improve dynamic range (I've discussed this at length elsewhere, but reading one half at one ISO and the other half at another ISO ultimately results in a net-zero gain...you can't really improve DR, you can't improve SNR, it won't be giving ML anything better than they already have by using Canon's downstream amplifier...actually, the downstream amp is better.) 

DPAF is just that...an autofocus feature. Nothing more.


----------



## mb66energy (Apr 26, 2014)

jrista:
"It won't improve resolution (since the photodiode is at the bottom of the pixel well, below the color filter and microlens),"

I think that the whole structure below the filter is the photodiode - to discriminate both "phases" you need to discriminate light that hits both photodiodes of one pixel. So there is a chance to enhance resolution SLIGHTLY by reading out both photodiodes separately.


jrista:
"it won't improve dynamic range (I've discussed this at length elsewhere, but reading one half at one ISO and the other half at another ISO ultimately results in a net-zero gain"

If you can make one of the two photodiodes "less sensitive" by some procedure (I do not know how), you have additional non-saturated information about brightness.

Both theoretically possible improvements need
* the capability to read out both photodiodes independently. That is possible because it is necessary for DPAF, but it is questionable whether you can read the WHOLE sensor in this manner 
* the capability to play with the sensitivity curves of both photodiodes independently ...

So basically you are right that - at the moment - the sensor will use the two-photodiodes-per-pixel design for AF only. And binning (adding both photodiode charges) will give a reasonable "photosite size".


----------



## jrista (Apr 26, 2014)

mb66energy said:


> jrista:
> "It won't improve resolution (since the photodiode is at the bottom of the pixel well, below the color filter and microlens),"
> 
> I think that the whole structure below the filter is the photodiode - to discriminate both "phases" you need to discriminate light that hits both photodiodes of 1 pixel. So there is a chance to enhance resolution SLIGHTLY by reading out of both photodiodes separately.



Trust me, the entire structure below the filter is not the photodiode. The photodiode is a specially doped area at the bottom of what we call the "pixel well". The diode is doped, then the substrate is etched, then the first layer of wiring is added, then more silicon is added, more wiring. Front-side Illuminated sensors are designed exactly as I've depicted. The photodiode is very specifically the bit of properly doped silicon at the bottom of the well.

As for phase, it doesn't matter how deep the photodiode is, as I've said many times before, depth does not matter, only area. Phase is detected because the left half of the photodiode only receives light from the left half of the lens, and the right half only receives light from the right half of the lens. This is exactly how dedicated PDAF sensors work...the AF unit contains lenses that do exactly the same thing...split the light from the lens, sending light from one half to the AF strips on one side of the sensor, and sending light from the other half to the AF strips on the other side of the sensor. 



mb66energy said:


> jrista:
> "it won't improve dynamic range (I've discussed this at length elsewhere, but reading one half at one ISO and the other half at another ISO ultimately results in a net-zero gain"
> 
> If you can make one of both photodiodes "less sensitive" by some procedure (I do not know how) you have additional non saturated information about brightness.



In terms of the actual silicon, there is only one sensitivity. Quantum Efficiency dictates how efficient the sensor is, and that is a fixed trait based on materials purity, doping, dark current levels, temperature, etc. At room temperature (usually that is defined as 70° or 72°), Q.E. of current Canon sensors is around 50% (+/- 2%).

The photodiodes are as sensitive as they are. The only thing that can change how sensitive they are is to design an entirely new sensor with the explicit goal of improving Q.E. (ISO has nothing to do with sensitivity; ISO is simply a means of controlling gain, the amount the signal is amplified, during readout.) 



mb66energy said:


> Both theoretically possible improvements need
> * the capability to read out both photodiodes independently. That is possible because it is necessary for DPAF
> but it is questionable that you can read the WHOLE sensor in this manner



This is already possible. If it were not, there would be no way for the AF feature to work. Both halves of the photodiode are indeed read independently, and the entire sensor (or to be more specific, the 80% of the sensor that actually has dual photodiodes) must indeed be read out at once for FP-PDAF to work. There is no need to "innovate" this "improvement"; it is essential and intrinsic to the design in the first place. 

But again...this has nothing to do with imaging. It doesn't matter that the two photodiode halves of the entire sensor can all be read out at once. They are BELOW THE COLOR FILTER. I don't know how else to explain it, but since the photodiodes are below the CFA, it doesn't matter if you read them as independent halves, or binned...they are still just one color. You aren't gaining any improvement in resolution or anything like that by reading them independently. All you're doing is creating two pixels with half the signal range and therefore half the maximum brightness. You would still need to find some way of digitally binning them in post to achieve the proper brightness levels to produce a full pixel at the proper exposure.

Again...no magic bullet here. What you're saying is necessary is already possible and present. It's actually essential for DPAF's sensor-plane (focal-plane, or FP) PDAF function to work in the first place.



mb66energy said:


> * the capability to play with sensitivity curves of both photodiodes independently ...



Again, sensitivity is an intrinsic and fixed trait of the silicon itself. ISO is simply a means of controlling gain, not sensitivity. There is no way to play with sensitivity curves of photodiodes...period. Doesn't matter if there are one, two, or more per pixel. Their sensitivity is fixed for a given sensor design.



mb66energy said:


> So basically you are right that - at the moment - the sensor will use the two-photodiode-per-pixel-design for AF only. And binning (adding both photodiode charges) will give reasonable "photosite size".



This will be right forever, so long as Canon desires to support AF via the image sensor. Even if they move to a quad design, for phase detection in both the vertical and horizontal, the fundamental design characteristics will not change...the photodiodes will still have to be beneath the CFA and Microlens layers for the ability to detect phase to work. This would also remain true even if Canon moved to a BSI design...all that would change is where the wiring is and how deep the pixel well is...the photodiodes would again remain below the CFA.


----------



## mb66energy (Apr 26, 2014)

jrista said:


> mb66energy said:
> 
> 
> > jrista:
> ...



The "well", or better, the potential well of a photodiode is the part of the photodiode where the charge is stored during exposure. It is made of (doped) silicon, which is opaque. The image you provided seems a little bit strange to me: how could the light hit the photodiode at the bottom if the well is opaque? Please send me the source of the image, and hopefully I can find some enlightening information about it!

Thanks in advance - Michael


----------



## 3kramd5 (Apr 26, 2014)

jrista said:


> 3kramd5 said:
> 
> 
> > Right. I'm more curious about what else they can do with dual pixels in general than DPAF itself. I am running on the assumption that using the pixel pairs for an IQ improvement (be it resolution, color depth, better demosaicing, etc.) would come at the cost of losing sensor-level phase AF.
> ...



Not proposing, just wondering.



jrista said:


> If Canon decided to put different color filters and shrink the microlens size over the two photodiode halves, they would no longer be able to do sensor plane phase detection AF.



Agreed. Giving up sensor-level phase detect is part and parcel of what I'm asking. Specifically, COULD they do what you wrote: put different color filters and shrink the microlenses, and read out each photodiode individually rather than binning them?



jrista said:


> which would be a little odd to demosaic and might not produce the best quality results.



Interesting. Why could it not produce results as good, if not better? I'm assuming an array like this:


```
RG-BG-RG-BG-RG-BG
BG-RG-BG-RG-BG-RG
RG-BG-RG-BG-RG-BG
BG-RG-BG-RG-BG-RG
```

Each pixel would read two colors: green plus either red or blue. That maintains the same color ratio as a Bayer-type CFA (2G/1R/1B); it's just collocating each green with another color. 
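A quick, purely illustrative check of that ratio claim - counting the sub-filters in a couple of rows of the sketched layout confirms green appears at every pixel while keeping the Bayer 2:1:1 proportions:

```python
from collections import Counter

# Two rows of the proposed split-pixel CFA from the post above;
# each two-letter group is one pixel carrying two half-filters.
rows = ["RG-BG-RG-BG-RG-BG",
        "BG-RG-BG-RG-BG-RG"]

subfilters = Counter(c for row in rows for c in row if c in "RGB")
print(subfilters)  # G occurs twice as often as R or B, and every pixel has a G
```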




jrista said:


> Canon might as well just shrink the entire pixel size by a factor of two, drop four times as many pixels on the sensor, and just call it a day if they are going to do that.



Sure, but if they could build one sensor and then have either high resolution OR sensor-level phase detect, depending on which CFA is packaged, I figure they could cut their development and production costs in a two-camera offering.

shrug.


----------



## jrista (Apr 27, 2014)

mb66energy said:


> jrista said:
> 
> 
> > mb66energy said:
> ...



Silicon is naturally semitransparent to light, even well into the UV range, and particularly in the IR range. The natural response curve for silicon tends to peak somewhere around the yellow-greens or orange-reds, and tapers slowly off into the infrared (well over 1100nm). The entire structure I've drawn is only a dozen or so microns thick at most. Light can easily reach the bottom of the well or photochannel or whatever you want to call it. The photodiode is indeed at the bottom. Sometimes the entire substrate is doped, sometimes it's an additional layer attached to the readout wiring. Sometimes it's filled with something. Here is an image of an actual Sony sensor:







Here is Foveon, which clearly shows the three layers of photodiodes (cathodes) that penetrate deeper into the silicon substrate for each color (the deeper you go, the more the higher frequencies are filtered, hence the reason the blue photodiode is at top and red is at bottom), and there is no open well, it's all solid material:






Here is an actual image of one of Canon's 180nm Cu LightPipe sensor designs. This particular design fills the pixel "well" as I called it with a highly refractive material, the well itself is also lined with a highly reflective material, and the photodiode is the darker material at the bottom attached to the wiring:






Regardless of the actual material in the well, which is usually some silicon-based compound, the photodiode is always at the bottom. Even in the case of backside illuminated sensors, the photodiode is still beneath the CFA, microlens layers, and all the various intermediate layers of silicon:






This image is from a very small sensor. It is overall a lot thinner than your average APS-C or FF sensor. The entire substrate is apparently photodiode cathodes; you can see the CFA, microlenses, and some wiring at the bottom. The readout wiring is at the top. The photodiode layer is in the middle. 

Every sensor design requires light to penetrate silicon to reach the photodiode.


----------



## jrista (Apr 27, 2014)

3kramd5 said:


> jrista said:
> 
> 
> > which would be a little odd to demosaic and might not produce the best quality results.
> ...



You have a pixel size ratio issue here. You have twice as many pixels horizontally as you do vertically. I think this was the first thing Neuro mentioned. To correct that, you would have to merge the two halves during demosaicing...in which case...why do it at all? You lose the improved resolution once you "bin"...regardless of whether your binning is electronic or a digital algorithmic blend.

Regarding color fidelity, I don't know that there is any evidence that your particular design would improve color fidelity. There have been a LOT of attempts to use various alternative CFA designs to improve color fidelity. Some may have worked, for example Sony added an emerald pixel (which is basically a blend of blue and green), Kodak experimented with various arrangements with white pixels. Fuji has used a whole range of alternative pixel designs, as well as utilizing a 6x6 pixel matrix with lots of green, some red, and some blue pixels, extra luminance pixels, and a variety of other designs. Sony has even designed sensors with triangular and hexagonal pixels and alternative demosaicing algorithms to improve color fidelity, sharpness, reduce aliasing. 

None of these other designs have ever PROVEN to offer better color fidelity than your simple, standard RGBG Bayer CFA. The D800 is an excellent example of how good a plain old Bayer can get...its color fidelity is second to none (and even bests most MFD sensors).

Anyway...DPAF isn't a magic bullet. It solved an AF problem, and solved it quite nicely, while concurrently offering the most flexibility by rendering the entire sensor (well, 80% of it) as usable for AF purposes. To get more resolution, use more pixels that are smaller. If you want better dynamic range, reduce downstream noise contributors (bus, downstream amps, ADC units). If you want better high-ISO sensitivity, increase quantum efficiency. If you want improved light-gathering capacity, make the photodiodes larger, increase the transparency of the CFA, employ microlenses (even in multiple layers), move to BSI, use color splitting instead of color filtration, etc. Sensor design is pretty straightforward. There isn't anything magical here, and you also have to realize that a LOT of ideas have already been tried, and most ideas, even if they ultimately get employed at one point or another, often end up failing in the end. The good, old, trusty, straightforward Bayer CFA has stood the test of time, and withstood the onslaught of countless alternative layouts.


----------



## mb66energy (Apr 27, 2014)

jrista said:


> [...]
> 
> Every sensor design requires light to penetrate silicon to reach the photodiode.



Thanks for your extensive explanations, but I disagree on some important details.

Your last sentence is truly correct - you need to reach the pn-junction of the photodiode, which is "inside" the die structure.
But after checking a lot of images in the web I came to the following conclusion:

1 micron of silicon would (according to http://www.aphesa.com/downloads/download2.php?id=1, page 2) reduce the amount of light at 500 nm to 0.36^3 = 0.05, or 5% - a sensor with 1 micron of silicon between the front and the photodiode structure would be orthochromatic (red sensitive).
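That back-of-the-envelope attenuation figure can be reproduced directly (illustrative numbers only - real silicon absorption depends strongly on wavelength and doping):

```python
# If each ~1/3 micron of silicon transmits about 36% of 500 nm light
# (the figure cited from the Aphesa note), then ~1 micron transmits roughly:
t_per_third_micron = 0.36
t_1um = t_per_third_micron ** 3
print(round(t_1um, 2))  # prints 0.05 - only ~5% of the green light survives
```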

Therefore the space between the semiconductor chip surface and the photodiode is filled with oxides. If silicon is the base material, the oxide is usually silicon dioxide, which is the same as quartz and highly transparent. I have tried to depict that in the sketch "Simplified Imaging Sensor Design" attached here (transistors and x-/y-readout channels are omitted).

*Regarding photodiode sensitivity:* You can surely reduce the sensitivity of the photodiode in a system by
(1) using a filter
(2) initiating a current that discharges the photodiode permanently
(3) stopping integration during exposure independently
For (1), think about a tiny LCD window in front of the second photodiode of one color pixel: blackening the LCD has the same effect as a gray filter - e.g. ND3. Both photodiodes read the same pixel at different sensitivity. The unchanged photodiode has full sensitivity; the filtered photodiode has 3 EV lower sensitivity. The LCD should be closed during exposure but is left open for DPAF.
For (2), think of a transistor for the second photodiode of a pixel which acts as a variable resistor between something like 1000 MOhms and 100 kOhms - photodiode 1 of the pixel integrates the charge fast, photodiode 2 integrates it more slowly because some charge is withdrawn by the transistor acting as a discharge resistor.
For (3), you need a transistor too, and you stop integration after e.g. 10% of the exposure time, before the full well capacity is reached.
All methods require replacing information from the saturated photodiodes 1 with that from the non-saturated photodiodes 2 (with the slower integration rate). It is like doing an HDR shot combined from 2 images which were taken SIMULTANEOUSLY (except for (3)).
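That simultaneous-HDR idea can be sketched as follows (a hypothetical illustration: `FULL_WELL`, the 8x/3 EV ratio, and the `merge` helper are all invented for this example, not an existing camera feature):

```python
import numpy as np

FULL_WELL = 1000.0  # hypothetical saturation level, in electrons

def merge(fast, slow, ratio=8.0):
    """Combine a full-sensitivity read with a 1/ratio-sensitivity read:
    where the fast diode saturates, substitute the rescaled slow one."""
    fast = np.asarray(fast, float)
    slow = np.asarray(slow, float)
    out = fast.copy()
    saturated = fast >= FULL_WELL
    out[saturated] = slow[saturated] * ratio  # rescale the darker reading
    return out

scene = np.array([100.0, 900.0, 4000.0])   # true light levels; the last clips
fast = np.minimum(scene, FULL_WELL)        # full-sensitivity diode saturates at 1000
slow = np.minimum(scene / 8.0, FULL_WELL)  # "3 EV less sensitive" diode stays linear
print(merge(fast, slow))                   # recovers [100, 900, 4000]
```

The highlight that clipped in the fast diode is recovered from its less sensitive twin, exactly like a two-exposure HDR merge but captured in a single shot.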

*Enhancing resolution (perhaps) slightly* (according to 3kramd5's or caruser's description): ( <=EDIT)
Typical pattern is (for DPAF sensor in current config, AF and exposure): ( <=EDIT)


```
rr  GG  rr  GG  rr  GG  rr  GG
GG  bb  GG  bb  GG  bb  GG  bb
rr  GG  rr  GG  rr  GG  rr  GG
GG  bb  GG  bb  GG  bb  GG  bb
```

Just re-sort this (after AF is done) into the following readout with 20 MPix but 2 colors per (virtual) pixel: (<=EDIT)

```
r  rG  Gr  rG  Gr  rG  Gr  rG  G
G  Gb  bG  Gb  bG  Gb  bG  Gb  b
r  rG  Gr  rG  Gr  rG  Gr  rG  G
G  Gb  bG  Gb  bG  Gb  bG  Gb  b
```

You are right (and that was my feeling to) that this will not dramatically enhance resolution but I see one special case there it might help a lot: Monochromatic light sources which will used more and more while signs (street signs, logos, etc.) are lit by LEDs. I observed that de-bayering works bad with LED light, especially blue and red light because the neigboured green photosites aren't excited enough. I very often see artifacts in that case that vanish if you downsample the picture by a factor 2 (linear).

My conlusion is that "dual photodiode per pixel"-structures might have a strong potential beyond the AF-method which it provides now. Don't know if the current 70D sensor has this potential but I think there is some headroom for real products.


----------



## jrista (Apr 27, 2014)

mb66energy said:


> jrista said:
> 
> 
> > [...]
> ...



Indeed. I did mention that it was a silicon-based compound, not pure silicon: "Regardless of the actual material in the well, *which is usually some silicon-based compound*, the photodiode is always at the bottom. "

I agree, though, SiO2 is usually the material used for layers deposited above the substrate, or is silicon dioxide based, but not always. It depends on the design and size of the pixel. As most small form factor designs have moved to BSI, I don't think there are many lightpipe designs out there, however in sensors with pixels around 2µm and smaller, the channel to the photodiode is lined with SiN, then filled with a highly refractive material. In one paper (http://www.silecs.com/download/CMOS_image_sensor_with_high_refractive_index_lightpipe.pdf, very interesting read, if your interested), they mentioned two other compounds used: Silecs XC400L and Silecs XC800, which are organosiloxane based materials (partially organic silicates, so still generally SiO2 based, but the point is to make them refractive enough to bend light from highly oblique angles from the microlens down a deep, narrow channel to the photodiode).

I have another paper bookmarked somewhere that covered different lightpipe materials, but with BSI having effectively taken over for small form factor sensors, I don't think it much matters.



mb66energy said:


> *According to photodiode sensitivity:* You can shurely reduce the sensitivity of the photodiode in a system by
> (1) using a filter
> (2) initiating a current that discharges the photodiode permanently
> (3) stopping integration during exposure independently
> ...



I understand what your getting at, but it isn't quite the same as doing HDR. With HDR, your using the full total photodiode area with multiple exposures. In what you have described, your reducing your photodiode capacity by using one half for the fast-saturation and the other half for slow-saturation. Total light sensitivity is determined by the area of the sensor that is sensitive to light...your approach effectively reduces sensitivity by 25% by reducing the saturation rate of half the sensor by one stop.

If your photodiode is 50% Q.E., has a capacity of 50,000e-, you have a photonic influx rate of 15,000/sec, and you expose for five seconds, your photodiode ends up with a charge of 37,500e-. In your sensor design, assuming the same scenario, the amount of photons striking the sensor is the same...you end up with a charge of 18,750e- in the fast-sat. half, and 9,375e- in the slow-sat. half. for a total saturation of 28,125e-. You gathered 75% of the charge that the full single photodiode did, and therefor require increased gain, which means increased noise. 

I thought about this fairly extensively a while back. I also ran through Don's idea of using ISO 100 for one half and ISO 800 for the other, but ultimately it's the same fundamental issue: sensitivity (true sensitivity, i.e. quantum efficiency) is a fixed trait of any given sensor. Aside from a high speed cyclic readout which constantly reads the charge from the sensor and stores it in high capacity accumulators for each pixel, for standard sensor designs (regardless of how wild you get with materials), there isn't any magic or clever trickery that can be done to increase the amount of light gathered than what the base quantum efficiency would dictate. The best way to maximize sensitivity is to:

A) Minimize the amount of filtration that occurs before the light reaches the photodiode.
B) Maximize the quantum efficiency of the photodiode itself. 

I think, or at least hope, that color filter arrays will ultimately become a thing of the past. Their name says it all, color FILTER. They filter light, meaning they eliminate some portion of the light that reached the sensor in the first place, before it reaches the photodiode. Panasonic designed a new type of sensor called a Micro Color Splitting array, which instead of using filters, used tiny "deflector" (SiN) to either deflect or pass light that made it through an initial layer of microlenses by taking advantage of the diffracted nature of light. The SiN material, used every other pixel, deflected red light to the neighboring photodiodes, and passed "white minus red" light to the photodiode of the current pixel. The alternate "every other pixel" had no deflector, and passed all of the light without filtration. Here is the article:

http://image-sensors-world.blogspot.com/2013/02/panasonic-develops-micro-color-splitters.html

The ingenuity of this design results in only two "colors" of photodiode, instead of three: W+R and W-R, or White plus Red and White minus Red. I think that, if I understand where your going with the descriptions both above and below, that this is ultimately where you would end up if you took the idea to it's extreme. Simply do away with filtration entirely, and pass through the microlenses as much light as you possibly can. Panasonic claims "100%" of the light reaches the photodiodes...I'm doubtful of that, there are always losses in every system, but it's certainly a hell of a lot more light reaching photodiodes than is currently possible with a standard bayer CFA. 

I think Micro Color Splitting is probably the one truly viable alternative to your standard Bayer CFA, however the sad thing is it's Panasonic that owns the patent, and I highly doubt that Sony or Canon will be licensing the rights to use the design any time soon...so, once again, I suspect the trusty old standard Bayer CFA will continue to persist throughout the eons of time. 




mb66energy said:


> *Enhancing resolution (perhaps) slightly* (according to 3kramd5's or caruser's description): ( <=EDIT)
> Typical pattern is (for DPAF sensor in current config, AF and exposure): ( <=EDIT)
> 
> 
> ...



I understand the general goal, but I think Micro Color Splitting is the solution, rather than trying to use DPAF in a quirky way to increase the R/B color sensitivity. Also, LED lighting is actually better than sodium or mercury vapor lighting or even CFL lighting. Even a blue LED with yellow phosphor has a more continuous spectrum than any of those forms of lighting, albeit at a lower intensity level. However progress in the last year or so with LED lighting has been pretty significant, and were starting to see high-CRI LED bulbs with around 89-90 CRI, and specially designed LED bulbs are starting to come onto the market that I suspect will ultimately replace the 95-97 CRI CFL bulbs that have long been used in photography applications where clean, broad-spectrum light is essential.

Regardless of what kind of light sources we'll have in the future, though, I think that, assuming Panasonic can get more manufacturers using their MCS sensor design, or maybe if they sell the patent to Sony or Canon, standard Bayer CFA designs will ultimately disappear, as they simply filter out too much light. MCS preserves the most light possible, which is really what we need to improve total sensor efficiency. Combine MCS with "black silicon", which employs the "moth eye" effect at the silicon substrate level to nearly eliminate reflection, and we have ourselves one hell of a sensor. ;D

(Sadly, I highly doubt Canon will be using any of these technologies in their sensors any time soon...most of the patents for this kind of technology is held by other manufacturers...Panasonic, Sony, Aptina, Omnivision, SiOnyx, etc. There have been TONS of CIS innovations over the last few years, some with amazing implications (like black silicon)...the only thing Canon developed that barely made it on the sensor-innovation radar is DPAF, and it was like someone dropped a pebble into the ocean, the DPAF innovation was pretty much ignored entirely...)


----------



## mb66energy (Apr 27, 2014)

So I think we have some substantial "overlap" now - great.

The silecs paper is interesting read! It's funny what they put together in these dimensions and 20 million fold to make an image sensor that really works well. But the improvements are gradual and ...



jrista said:


> [...]
> 
> The alternate "every other pixel" had no deflector, and passed all of the light without filtration. Here is the article:
> 
> ...



... I agree that this is the "Königsweg" as we in germany say, the "kings way": Splitting the color instead of filtering out miswanted colors (throwing light away) at the cost of system efficiency.
I read about that technology and I think they use interference filters which reflect a part of the spectrum and transmit the opposite part of the spectrum.
An alternative might be a sensor which uses a prism or optical grating to separate wavelengths and three or four photodiodes to sense the colors.

These sensors are the counterpart to OLED displays which omit filtering (like LCD displays) and produce light in the wanted colors directly. It is the way of the future.

Before I forget: I found an interesting patent about a dual photodiode per pixel architecture which is used to increase the DR to 120 dB called _"Dynamic-Range Widening in a CMOS Image Sensor Through Exposure Control Over a Dual-Photodiode Pixel"_. They have a pixel split into a L shaped photodiode with 75% area and the 25% area which complets the L-shape to a square (might be not available at the moment due to a web site update):
http://www.researchgate.net/publication/224611868_Dynamic-Range_Widening_in_a_CMOS_Image_Sensor_Through_Exposure_Control_Over_a_Dual-Photodiode_Pixel/file/e0b495219bbfda2524.pdf

Best - Michael


----------



## jrista (Apr 27, 2014)

mb66energy said:


> Before I forget: I found an interesting patent about a dual photodiode per pixel architecture which is used to increase the DR to 120 dB called _"Dynamic-Range Widening in a CMOS Image Sensor Through Exposure Control Over a Dual-Photodiode Pixel"_. They have a pixel split into a L shaped photodiode with 75% area and the 25% area which complets the L-shape to a square (might be not available at the moment due to a web site update):
> http://www.researchgate.net/publication/224611868_Dynamic-Range_Widening_in_a_CMOS_Image_Sensor_Through_Exposure_Control_Over_a_Dual-Photodiode_Pixel/file/e0b495219bbfda2524.pdf



Aye, I read about that. There are a few other patents for similar technology as well. They all use a different exposure time for the luminance pixels, though, and the way they achieve that is to extend the exposure time for the luminance pixels across frames. The majority of these sensors are used in video applications, which is the primary reason they can employ the technique. They can expose luma for two frames, and blend that single luma value into the color for both. (I cannot actually access the patent article you linked without an account, and they require an institutional email to sign up, however based on the abstract it sounds pretty much the same as other patents that use exposure control for DR improvement.)


----------

