# Patent: Quad pixel AF sensor



## Canon Rumors Guy (Oct 15, 2020)

> Canon News has uncovered another Quad Pixel Autofocus (QPAF) sensor patent.
> *Canon News breaks down why QPAF should be coming in the future.*
> Right now with dual pixel AF (DPAF) sensors, you can focus reliably while the camera is in the horizontal position and the edge of contrast you are locking onto is vertical.  If the edge is horizontal (parallel to the camera orientation), the camera has extreme difficulty locking on.  This is because all the pixels are split in one direction for dual pixel AF.  What Canon needs is a quad pixel, where the pixel is split not once but twice, allowing phase detection in both directions.
> This patent application specifically deals with suppressing the deterioration that may happen with...



Continue reading...


----------



## Maximilian (Oct 15, 2020)

There we have the “groundbreaking” new AF system


----------



## BroderLund (Oct 15, 2020)

Sounds like R1 to me


----------



## usern4cr (Oct 15, 2020)

If this is implemented in the R1, then it would help to define the "groundbreaking" new AF system, as Maximilian mentioned. Since each "quad pixel" would have to cover more area than a "dual pixel" to fit all the circuitry, I could see the R1 coming in at 20+MQP (where a pixel is a quad pixel), which is ideal for 4K video as well as plenty for professional stills use. It wouldn't surprise me if the marketing department then decides to call this an "80MQP" sensor and asks the programmers to come up with a new Bayer decoding (up-res'ing) to get 80MP to display & advertise.

Just keep in mind that the IQ and dynamic range will be affected by the increased circuitry on the sensor if that circuitry blocks more of the total light reaching the sensor. And more false artifacts will occur if the Bayer decoding shifts from 4 cells (RGBG) to 16 cells (RRRRGGGGBBBBGGGG) - but I don't think the marketing department will mention that! 
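For what it's worth, the binning step itself (averaging each 2x2 group of sub-pixels down to one output pixel, e.g. 80M sub-pixels to a 20MP image) is simple enough to sketch in numpy. The array sizes and values here are toy numbers for illustration, not real sensor dimensions:

```python
import numpy as np

def bin_quad_pixels(subpixels):
    """Average each 2x2 group of sub-pixels into one output pixel,
    turning a (2H, 2W) sub-pixel readout into an (H, W) image."""
    h, w = subpixels.shape
    assert h % 2 == 0 and w % 2 == 0
    # Reshape so each 2x2 block sits on its own pair of axes, then average.
    return subpixels.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy 4x4 "sensor" of sub-pixel values -> 2x2 binned image.
sensor = np.arange(16.0).reshape(4, 4)
binned = bin_quad_pixels(sensor)  # -> [[2.5, 4.5], [10.5, 12.5]]
```

Reading out the full sub-pixel grid instead of binning is what would give the marketed "80MP" number - with the read noise and demosaicing trade-offs mentioned above.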

You know, at some point there's going to be so much circuitry and high resolution that I can see Canon finally coming out with a BSI (back-side illuminated) version of their sensors. That's really the only way to keep getting more complexity and resolution and image quality in a FF sensor as you approach the limits of what a non-BSI sensor can do. We may be seeing Canon's first BSI introduction, which would be quite an announcement in itself! Do I think this will happen in the R1? - No, but it sure wouldn't surprise me. And if they want to get a 45M QP sensor and beyond then I think they may be forced to use BSI in the future.


----------



## Josh Leavitt (Oct 15, 2020)

It would be interesting to see if a quad-pixel AF array is actually going to be four true pixels with independent circuitry for each. It might open up some possibilities for enhancing DR by altering exposure time or signal amplification between each pixel row prior to being merged into a single pixel. Sony's quad Bayer arrays have that functionality and it seems to work pretty well. But yeah, this is probably going to debut on the R1.


----------



## Deleted member 381342 (Oct 15, 2020)

This makes sense, since a lot of mirrorless systems struggle a wee bit if lines are going in a particular direction - somewhat like how we needed cross-type points. On-sensor AF has a lot of innovation waiting to be tapped, and it is an exciting time for photographers.


----------



## H. Jones (Oct 15, 2020)

usern4cr said:


> You know, at some point there's going to be so much circuitry and high resolution that I can see Canon finally coming out with a BSI (back-side illuminated) version of their sensors. That's really the only way to keep getting more complexity and resolution and image quality in a FF sensor as you approach the limits of what a non-BSI sensor can do. We may be seeing Canon's first BSI introduction, which would be quite an announcement in itself! Do I think this will happen in the R1? - No, but it sure wouldn't surprise me. And if they want to get a 45M QP sensor and beyond then I think they may be forced to use BSI in the future.



Well, when Canon announces the new R1 sensor as a 21 megapixel quad-pixel back-side illuminated global shutter full-frame CMOS sensor... 

We'll have *all* the sensor adjectives. All of them.


----------



## usern4cr (Oct 15, 2020)

H. Jones said:


> Well, when Canon announces the new R1 sensor as a 21 megapixel quad-pixel back-side illuminated global shutter full-frame CMOS sensor...
> 
> We'll have *all* the sensor adjectives. All of them.


Yep - "Buzz-Word Bingo WINNER!"


----------



## blackcoffee17 (Oct 15, 2020)

Splitting pixels into 4 might be too difficult.
They could also arrange the pixel splits horizontally and vertically in alternating order. There are plenty of pixels to work with; half of them could be sensitive to vertical lines only.


----------



## adrian_bacon (Oct 15, 2020)

Canon Rumors Guy said:


> Continue reading...



This could potentially be used as a quad gain output for more dynamic range, just like how the dual gain output works with the current dual pixel sensor, where one side has half the gain. With 4 sub-pixels, you could potentially have 4 different gains that get combined into one pixel with significantly more captured DR.
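Just to make the idea concrete, here's a rough numpy sketch of merging four gain-staggered readouts of one pixel. The gain ladder, bit depth, and merge rule are made-up illustration values, not anything from the patent or a real Canon pipeline:

```python
import numpy as np

# Hypothetical analog gains for the four sub-pixel readouts.
GAINS = np.array([1.0, 2.0, 4.0, 8.0])
FULL_WELL = 4095  # 12-bit saturation level, for illustration only

def combine_quad_gain(readouts):
    """Merge four gain-staggered readouts of the same pixel into one value.

    readouts: array of shape (4,) with raw ADC values, one per sub-pixel.
    Clipped samples carry no highlight information, so only unclipped
    samples are used; higher-gain samples give cleaner shadows.
    """
    # Normalize each readout back to a common (gain = 1) exposure scale.
    normalized = readouts / GAINS
    # Mask out samples that hit saturation.
    valid = readouts < FULL_WELL
    # Simple merge: average the valid normalized samples (a real pipeline
    # would weight by SNR instead of taking a plain mean).
    return normalized[valid].mean()

# Example: a bright pixel clips the two highest-gain readouts,
# so only the gain-1 and gain-2 samples contribute.
raw = np.array([1000.0, 2000.0, 4095.0, 4095.0])
merged = combine_quad_gain(raw)  # -> 1000.0
```

The DR gain comes from the low-gain samples holding highlight detail while the high-gain samples lift the shadows above the read noise.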


----------



## addola (Oct 15, 2020)

We have seen Quad-Pixel AF patents since at least 2017, and whether this has anything to do with the rumored R1 is anyone's guess.

From CanonNews ( Japan Patent Application 2017-228829):
https://www.canonnews.com/another-quad-pixel-af-sensor-patent-application


----------



## tron (Oct 15, 2020)

Whaaaaat, the R5 has (in theory) focusing issues due to dual pixel technology? 

1. I will wait for R5 Mk II (at least) 

2. Canon is *******!


----------



## canonnews (Oct 15, 2020)

addola said:


> We have seen Quad-Pixel AF patents since at least 2017, and whether this has anything to do with the rumored R1 is anyone's guess.
> 
> From CanonNews ( Japan Patent Application 2017-228829):
> https://www.canonnews.com/another-quad-pixel-af-sensor-patent-application


I know, but like I said, I think it's more likely we'd see something of this sort to address vertical and horizontal shooting and the AF's ability to lock on in the R1, versus a global shutter.

I think there's even a GREATER chance we'll just see the 20MP 1DX Mark III sensor again.


----------



## Mr Majestyk (Oct 16, 2020)

Given they discussed QPAF in the original DPAF patent way back in around 2012, it's a long time coming. They also discussed asymmetric DPAF and using it for HDR. One does not really need QPAF for x-type: they could easily make groups of pixels with DPAF in the perpendicular direction, so AF points could consist of just two orientations of DPAF. That would be easier to implement and have greater sensitivity than QPAF.


----------



## mclaren777 (Oct 16, 2020)

This seems dumb.

Keep DPAF, but make the microlenses so they split each pixel at a 45º angle.

Problem solved.


----------



## Rzrsharp (Oct 16, 2020)

mclaren777 said:


> This seems dumb.
> 
> Keep DPAF, but make the microlenses so they split each pixel at a 45º angle.
> 
> Problem solved.


Then it would only be sensitive to diagonal lines.


----------



## usern4cr (Oct 16, 2020)

mclaren777 said:


> This seems dumb.
> 
> Keep DPAF, but make the microlenses so they split each pixel at a 45º angle.
> 
> Problem solved.


Interesting idea, but it won't work. If you have lines halfway between those 2 angles, then you can't see any difference at all. So you have all the same problems, you've reduced your contrast sensitivity by a good amount (about a 35% reduction, I think), and you've just rotated your "blind angle" by 45 (or whatever) degrees.


----------



## David - Sydney (Oct 16, 2020)

H. Jones said:


> Well, when Canon announces the new R1 sensor as a 21 megapixel quad-pixel back-side illuminated global shutter full-frame CMOS sensor...
> 
> We'll have *all* the sensor adjectives. All of them.


Well, that would be an 84MP sensor, which is entirely possible. DPRAW files never really took off though. Maybe QPRAW will be different.

... or do we start talking about megadots (like for EVFs) now instead of subpixels?


----------



## Famateur (Oct 16, 2020)

usern4cr said:


> 16 cells(RRRRGGGGBBBBGGGG) - but I don't think the marketing department will mention that!



Yeah...it kinda sounds like pirates that love disco. 

Then again, it kinda worked in 1982: https://www.imdb.com/title/tt0084504/


----------



## EOS 4 Life (Oct 16, 2020)

adrian_bacon said:


> This could potentially be used as a quad gain output for more dynamic range, just like how the dual gain output works with the current dual pixel sensor, where one side has half the gain. With 4 sub-pixels, you could potentially have 4 different gains that get combined into one pixel with significantly more captured DR.


Quad gain output would not surprise me in the RF mount C700 replacement.
I would not expect it in a flagship hybrid mirrorless but I could see it in a flagship cinema camera.


----------



## Sporgon (Oct 16, 2020)

I'm curious as to why Canon would use all vertical-sensing DPAF on sensor when in their higher-end DSLRs the AF points that aren't cross type (or dual cross type) are horizontal sensing.

Just thinking on this, I thought DPAF had the split vertical, left and right, (with camera horizontal) so it would be horizontal sensing, not vertical ?


----------



## Sibir Lupus (Oct 16, 2020)

Mr Majestyk said:


> Given they discussed QPAF in the original DPAF patent way back in around 2012, it's a long time coming. They also discussed asymmetric DPAF and using it for HDR. One does not really need QPAF for x-type: they could easily make groups of pixels with DPAF in the perpendicular direction, so AF points could consist of just two orientations of DPAF. That would be easier to implement and have greater sensitivity than QPAF.



My guess is the delay might have to do with processing power catching up with QPAF tech, as I'm sure it needs at least twice as many calculations vs DPAF II. We'll either see a cranked-up DIGIC X in the R1, or possibly dual DIGIC X to handle QPAF.


----------



## jam05 (Oct 16, 2020)

Maximilian said:


> There we have the “groundbreaking” new AF system


My thoughts exactly. Timing: the 2021 Olympics. Especially since it's from "Canon News" and not some patent rediscovered by a consumer.


----------



## jam05 (Oct 16, 2020)

Sporgon said:


> I'm curious as to why Canon would use all vertical sensing DPAF on sensor when in their higher end DSLRs the AF points that aren't cross type ( or dual cross type) are horizontal sensing.
> 
> Just thinking on this, I thought DPAF had the split vertical, left and right, (with camera horizontal) so it would be horizontal sensing, not vertical ?


Even the cross types are arranged in columns and have vertical-sensing and horizontal components. Now rotate the camera: each pixel is still in its original alignment.


----------



## masterpix (Oct 16, 2020)

Canon Rumors Guy said:


> Continue reading...


A quad AF sensor will allow creating HDR with 1/4, 1/2, 3/4 and 1 factors integrated into the same picture, as the CR2 of the 5D does.


----------



## Mt Spokane Photography (Oct 16, 2020)

Making a quad pixel sensor isn't so difficult, but the software to operate it must be a nightmare. If it ever comes to market, the software is going to be fairly simple at first. I expect that they have been working on software for years in research, but in the real world I'd bet that there are all kinds of strange issues. I've seen Canon patents in the past for "n" numbers of subpixels, since they were dealing with the electronics portion and patents want to cover every possible permutation.

When dual pixel came out, Canon said that it was the software that was the problem with bringing it to market, and they brought in experts from their professional video division to help figure it out. Even then, it was difficult. Presumably, there are now engineers with a much greater understanding of how quad pixel software might work to autofocus vertically and horizontally from the same pixel. I think that diagonal sensing will be a future development, if ever. Other possibilities, like independent gain for each sub-pixel, may also be future developments; the complications in processing something like that will require a lot of testing. They have it working for dual pixel sensors, and the processing power needed may restrict it to video cameras right now, but it's coming.


----------



## adrian_bacon (Oct 17, 2020)

EOS 4 Life said:


> Quad gain output would not surprise me in the RF mount C700 replacement.
> I would not expect it in a flagship hybrid mirrorless but I could see it in a flagship cinema camera.



Well, Canon is always getting bagged on for not enough dynamic range... If they went with quad gain in a flagship like the R1 and got a very usable 15-16+ stops, that would put a lot of heat down on Sony and Nikon.


----------



## yeahright (Oct 17, 2020)

Sporgon said:


> I'm curious as to why Canon would use all vertical sensing DPAF on sensor when in their higher end DSLRs the AF points that aren't cross type ( or dual cross type) are horizontal sensing.
> 
> Just thinking on this, I thought DPAF had the split vertical, left and right, (with camera horizontal) so it would be horizontal sensing, not vertical ?


If, as in Canon's current DPAF cameras, the pixels are split *horizontally* (meaning the line that splits the pixel into two halves is vertical, so the two halves sit horizontally next to each other, i.e. a left and a right half in landscape camera orientation), then the arrangement can focus on *vertical* structures. Horizontal structures appear identical in both the left and right halves and thus cannot be used for focusing, because focusing relies on the structure appearing different in the two pixel halves.


----------



## Sporgon (Oct 17, 2020)

yeahright said:


> If, as in Canon's current DPAF cameras, the pixels are split *horizontally* (meaning the line that splits the pixel in two halves is vertical and the pixels are therefore horizontally next to each other, i.e. left and right pixel half in horizontal, i.e. landscape camera orientation) in DPAF, then the arrangement can focus on *vertical* structures. Horizontal structures appear identical in both left and right pixels and thus cannot be used for focusing. Because focusing relies on the different appearance of the structure to focus on in the two pixel halves.


Ah, thanks for that. I’d just assumed that they worked like a split image focus finder but I now see it’s more like a rangefinder. 
So in fact it’s the same orientation as the non x type on DSLRs.


----------



## AlanF (Oct 17, 2020)

Sporgon said:


> Ah, thanks for that. I’d just assumed that they worked like a split image focus finder but I now see it’s more like a rangefinder.
> So in fact it’s the same orientation as the non x type on DSLRs.


Just make sure your subject is not a vertical line 1 pixel wide on the sensor.


----------



## Sporgon (Oct 17, 2020)

AlanF said:


> Just make sure your subject is not a vertical line 1 pixel wide on the sensor.


I’ll leave that kind of detail to you Alan !


----------



## usern4cr (Oct 17, 2020)

Since we're on the subject of quad-pixel or dual-pixel focus, I was wondering if anyone knew of the details of how a phase detect (not contrast detect) pixel actually works. Everywhere I look on the internet they don't really explain it. I do understand that there are 2 sensor areas next to each other and there is some sort of micro lens in front of each area that "somehow" splits the light differently into the two areas. But how can they make one of the areas focus at a nearer distance relative to the farther distance of the other sensor so that they can decide which direction to move the focal distance to reach the correct focus? Do they have a concave lens above one and a convex lens above the other? And if so, why would this technique be sensitive to vertical lines and not to horizontal lines (which would be what I'd expect for a contrast detection sensor but I'm interested in a phase-detect sensor).

This is the kind of detail I'm interested in, if anyone knows?


----------



## AlanF (Oct 17, 2020)

usern4cr said:


> Since we're on the subject of quad-pixel or dual-pixel focus, I was wondering if anyone knew of the details of how a phase detect (not contrast detect) pixel actually works. Everywhere I look on the internet they don't really explain it. I do understand that there are 2 sensor areas next to each other and there is some sort of micro lens in front of each area that "somehow" splits the light differently into the two areas. But how can they make one of the areas focus at a nearer distance relative to the farther distance of the other sensor so that they can decide which direction to move the focal distance to reach the correct focus? Do they have a concave lens above one and a convex lens above the other? And if so, why would this technique be sensitive to vertical lines and not to horizontal lines (which would be what I'd expect for a contrast detection sensor but I'm interested in a phase-detect sensor).
> 
> This is the kind of detail I'm interested in, if anyone knows?


This is the best explanation - Marc Levoy's applet (http://graphics.stanford.edu/courses/cs178/applets/autofocusPD.html). Just rejig it so the focusing sensors are on the image sensor for mirrorless.


----------



## usern4cr (Oct 17, 2020)

AlanF said:


> This is the best explanation - Marc Levoy's applet http://graphics.stanford.edu/courses/cs178/applets/autofocusPD.html just rejig it so the focussing sensors are on the image sensor for mirrorless.


Thanks, AlanF, for the link. While the example shows how to focus on a bright dot on a black background with 2 lenses and 2 arrays of sensors that are not on the final image sensor itself, I really would like to see a diagram showing how they put the system together on the final image sensor itself. It's difficult to imagine each pixel of the 45MP array containing such a system.


----------



## Rzrsharp (Oct 18, 2020)

Dual Pixel AF = DSLR Lines AF
Quad Pixel AF = DSLR Crosses AF
Octa Pixel AF = DSLR Double-Crosses AF


----------



## Joules (Oct 18, 2020)

Rzrsharp said:


> Dual Pixel AF = DSLR Lines AF
> Quad Pixel AF = DSLR Crosses AF
> Octa Pixel AF = DSLR Double-Crosses AF


Wouldn't Quad Pixel AF already be a bit more precise than just a cross, as you'd get two pairs of horizontal contrasts, and two of vertical (although each along the same line, so different from double cross) in each pixel with QPAF?


----------



## AlanF (Oct 18, 2020)

What's been really impressive is the way that Canon has been able to get the current DPAF to focus so fast and accurately. Some of us were worried that processing the dual pixels would be so processor-intensive that it would not be able to compete with embedded phase detect, but the R5 is up there with Sony speed, and with a more flexible system that doesn't require sacrificing pixels to accommodate phase detect.


----------



## Mt Spokane Photography (Oct 18, 2020)

AlanF said:


> What's been really impressive is the way that Canon has been able to get the current DPAF to focus so fast and accurately. Some of us were worried that processing the dual pixels would be so processor-intensive that it would not be able to compete with embedded phase detect, but the R5 is up there with Sony speed, and with a more flexible system that doesn't require sacrificing pixels to accommodate phase detect.


A bigger AF area has been a development as well. I recall patents proposing various tweaks to sensor design that would allow more accurate AF at the edges and corners. I wonder if any of those have been implemented. A global shutter type sensor has been rumored, and perhaps quad pixel as well, for an R1. 

As I understand it, a global shutter CMOS sensor will have memory associated with each photosite, and values will be saved in those memory sites all at once, then read out to the camera processor and memory in the standard parallel/sequential fashion. It may require backlit sensor technology to do that. Canon has a ton of patents for doing it; it's just a matter of cost to get enough good sensors out of the process. Canon is extremely price conscious - they squeeze every penny.


----------



## yeahright (Oct 20, 2020)

Sporgon said:


> Ah, thanks for that. I’d just assumed that they worked like a split image focus finder but I now see it’s more like a rangefinder.


 and @usern4cr:
Actually, dual pixel autofocus works pretty much like an optical split-image rangefinder. In a split-image rangefinder with which you can focus on vertical structures, there is a horizontal line splitting the top and bottom images in your focus area. Your image is in focus if the top and bottom images match up, i.e. the vertical structure you are trying to focus on has no horizontal offset between the top and bottom images but instead runs in one line from top to bottom. If your image is out of focus, then the top image is shifted either to the left or to the right vs. the bottom image, corresponding to focus too close or too far away (the actual direction is specific to the particular implementation in the camera). This is how, in a manual focus camera, you can determine the direction in which the focus needs to change: by the direction of misalignment between the top and bottom images.

Such rangefinders are realized by including a prism in the focusing screen in a way that the top half shows only rays from the left side of the lens, and the bottom half shows only rays from the right side of the lens (or vice versa, depending on the implementation). Dual pixel autofocus works exactly the same way: one subset of pixels receives light from the left side of the lens and the other subset from the right side, by placing appropriate microlenses on top of each pixel. So if you take a horizontal row of pixels (say, 50 pixels) in the image area in which you want to achieve focus, you compare the image (in our example this 'image' is 1 pixel high and 50 pixels wide) you get from the 50 left-sensitive pixels to the image from the 50 right-sensitive pixels. If the two images are shifted to the left or to the right with respect to each other, your image is out of focus, and the direction of the shift determines the direction of the necessary focus change.

The left- and right-sensitive pixels have essentially the same function as the top and bottom halves of the optical split-image focusing screen, but the focusing screen rearranges the light from left and right to top and bottom in order to provide both human-interpretable focusing information and a complete image in the focusing screen. For DPAF, of course, this is not necessary: the outputs from the left- and right-sensitive lines of pixels are compared to each other, and the relative shift between the images is determined for autofocus. Note (an apparently common misconception) that a single pixel in DPAF is NOT enough to perform autofocus. You always need a number of (in current Canon implementations, horizontally) adjacent pixels in order to compute the shift between the left- and right-sensitive images.
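A rough numpy sketch of that comparison, purely illustrative (the signal shapes, window size, and search range are made up; Canon's actual correlation algorithm isn't public):

```python
import numpy as np

def estimate_defocus_shift(left, right, max_shift=10):
    """Find the horizontal offset between the left- and right-sensitive
    'images' along one row of dual pixels.

    left, right: 1-D arrays (the 50-sample strips in the example above).
    Returns the integer shift (in pixels) that best aligns them: 0 means
    in focus, and the sign tells which way focus must move.
    """
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Overlap the two strips at offset s ...
        if s >= 0:
            a, b = left[s:], right[:len(right) - s]
        else:
            a, b = left[:len(left) + s], right[-s:]
        # ... and score the match with a mean-removed correlation.
        score = np.dot(a - a.mean(), b - b.mean())
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# Example: the same small feature lands 3 pixels apart in the two strips.
x = np.arange(50.0)
left_img = np.exp(-0.5 * ((x - 25.0) / 2.0) ** 2)   # feature at pixel 25
right_img = np.exp(-0.5 * ((x - 22.0) / 2.0) ** 2)  # feature at pixel 22
shift = estimate_defocus_shift(left_img, right_img)  # -> 3
```

The magnitude of the shift even tells you roughly how far to drive the lens, which is why phase detect can jump straight toward focus instead of hunting like contrast detect.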



Sporgon said:


> So in fact it’s the same orientation as the non x type on DSLRs.


In fact (at least in the 5D4, for example) it appears to be the other way round: the manual says that the non-x-type autofocus points are sensitive to horizontal structures. But this could also be due to technical constraints on where a particular type of autofocus sensor can be placed given everything else that has to fit in a DSLR (mirror box, viewfinder prism, etc.), and not so much because it is more advantageous from a real-world focusing perspective, in which I believe accurate focusing on vertical structures might be more important.


----------



## usern4cr (Oct 20, 2020)

yeahright said:


> and @usern4cr:
> Actually, dual pixel autofocus in fact works pretty much like an optical split image range finder. In a split image range finder with which you can focus on vertical structures, there is a horizontal line splitting top and bottom image in your focus area. Your image is in focus if the top and bottom images match up, i.e. the vertical structure in your image that you are trying to focus on has no horizontal offset between top and bottom image but instead runs in one line from top to bottom. If your image is out of focus, then the top image is either shifted to the left or to the right vs. the bottom image, corresponding to focus too close or too far away (the actual direction is specific to the particular implementation in the camera). This is how in a manual focus camera you can determine the direction that you need your focus to change - by the direction of misalignment between top and bottom image. Such rangefinders are realized by including a prism in the focusing screen in a way that the top half shows only rays from the left side of the lens, and the bottom half shows only rays from the right side of the lens (or vice versa, depending on the implementation). Dual pixel autofocus works exactly the same: one subset of pixels receiving light from the left side of the lens, the other subset of pixels from the right side of the lens, by placing appropriate microlenses on top of each pixel. So if you take a horizontal row of pixels (say e.g. 50 pixels) in the image area in which you want to achieve focus, you compare the image (in our example this 'image' is 1 pixel high and 50 pixels wide) you get from the 50 left-sensitive pixels to the image from the 50 right-sensitive pixels. If the two images are shifted to the left or to the right with respect to each other, your image is out of focus, and the direction of the shift determines the direction of the necessary focus change. 
> The left- and right sensitive pixels have essentially the same function as the top and bottom optical split image focusing screen, but the optical split image focusing screen is rearranging the light from left and right to top and bottom in order to achieve both a 'human interpretable' optical focusing information and showing a complete image in the focusing screen. For DPAF of course this is not necessary, the outputs from the left- and right-sensitive lines of pixels are compared to each other and the relative shift between the images is determined for autofocus. Note (an apparently common misconception) that a single pixel in DPAF is NOT enough to perform autofocus. You always need a number of (in current Canon implementations horizontally) adjacent pixels in order to compute the shift between left- and right-sensitive images.
> 
> 
> In fact (at least e.g. in the 5D4) it appears to be the other way round: in the manual it says that the non x-type autofocus points are sensitive to horizontal structures. But this could also be due to technical reasons on where a particular type of autofocus sensor can be placed in the presence of everything that is also necessary in a DSLR (mirror box, viewfinder prisma, etc.), and not so much because it is more advantageous from a real-world focusing perspective, in which I believe that accurate focusing on vertical structures might be more important.


I'm still trying to understand this. So you're saying you have a horizontal row of 50 pixels, each pixel with 2 sub-pixels: one sub-pixel sensitive to the rays from the left of the main lens, and the other sensitive to the rays from the right of the main lens. I'm wondering what type of micro lens above each of these 2 sub-pixels can be so selective? Wouldn't each of the micro lenses have to reject half (at least) of the light, and how would they make such an optical structure that would be so completely effective in the splitting when the sub-pixels are right next to each other in the sensor? I think the physical construction of the two sub-pixel micro lenses, and how they split the light from the left & right sides of the main lens, is the crux of what I need in order to really understand it.

I also assume that you could shift your computational logic by 1 (or more) whole pixels to the left or right to have another 50 pixels (25 to the left and 25 to the right) give you a new AF value, correct? That is, the 2 sub-pixels of a single pixel could be used by up to 50 different sets of AF logic if they wanted to design that many AF points, correct?


----------



## yeahright (Oct 21, 2020)

usern4cr said:


> I'm still trying to understand this: So, you're saying you have a horizontal row of 50 pixels, each pixel with 2 sub pixels. One sub pixel is sensitive to the rays from the left of the main lens, and the other sub pixel sensitive to the right of the main lens. I'm wondering what type of micro lens above each of these 2 sub pixels can be so selective? Wouldn't each of the micro lenses have to reject half (at least) of the light, and how would they make such an optical structure that would be so completely effective in the splitting when they sub pixels are both next to each other in the sensor? I think the physical construction of the two sub pixel micro lenses and how they split the sensation of light from the left & right side of the main lens is the crux of what I need in order to really understand it.
> 
> I also assume that you could shift your computational logic by 1 (or more) whole pixel to the left or right to have another 50 pixels (25 to the left, and to the right) to give you a new AF value, correct? That is, the 2 sub pixels of a single pixel could be used by up to 50 different sets of AF logic if they wanted to design that many AF points, correct?


Here is an article including a (simplified) image of a DPAF pixel with microlens: "Canon EOS R: A deep-dive Q&A session with the Canon engineers" at www.imaging-resource.com.

There is actually one lens on each dual-pixel, and it does not reject light (which would reduce sensor sensitivity) but rather directs the light to the appropriate sub-pixel.

Yes, you could move your 50-pixel AF 'sensor' by only one pixel; however, in that case the result would be almost the same, because 49 of the pixels are shared. So I assume that there is a sensible number of pixels between AF positions (maybe at least half the length?), and I assume this is why the number of AF points is significantly lower than the number of horizontal sensor pixels. You could also vary the length of the sensor, which would have an impact on the selectivity of the AF point and on the probability of finding a target.


----------



## Rzrsharp (Oct 21, 2020)

yeahright said:


> Here is an article including a (simplified) image of a DPAF pixel with microlens:
> 
> 
> 
> ...


Fixing the location and number of pixels per AF point is a choice made for faster AF: it reduces processing time. There are no extra sensitive pixels.


----------



## usern4cr (Oct 21, 2020)

yeahright said:


> Here is an article including a (simplified) image of a DPAF pixel with microlens:
> 
> 
> 
> ...


Thanks, yeahright!  That was an excellent article on AF, with some other good info on the R mount. Now I feel that I understand enough of how dual pixels work. It also makes it pretty obvious that a quad pixel would work the same way, but with the 4 sub-pixels in the 4 corners of the pixel (in an "X" pattern) instead of in a cross "+" pattern, for maximum packing of pixels on the sensor.

A little thing I noticed about the R mount (over the EF mount) is the shape of the 3 mount ridges that hold the lens in place once properly inserted. The R mount has a much longer ridge at the top of the circle than the EF mount. This helps a *lot* (IMHO) since the top of the mount is where the weight of the lens is trying to pull the lens away from the mount and only this ridge is holding it on. The ridges at the bottom don't hold it on from the downward pull of gravity much at all, and are appropriately smaller. Those big long lenses stress the mount a lot, especially if you don't choose to use the lens' tripod collar when there is one (as I try to do).


----------



## canonmike (Apr 25, 2021)

usern4cr said:


> If this is implemented in the R1, then it would help to define the "groundbreaking" new AF system, as Maximilian mentioned. Since each "quad pixel" would have to cover more area than a "dual pixel" for all the circuitry then I could see the R1 coming in at the 20+MQP (where a pixel is a quad pixel) which is ideal for 4K video as well as plenty for professional stills use. It wouldn't surprise me if the marketing department then decides to call this a "80MQP" sensor and asks the programmers to come up with a new Bayer decoding (up-res'ing) to get 80MP to display & advertise.
> 
> Just keep in mind that the IQ and dynamic range will be affected by the increased circuitry on the sensor if that circuitry blocks more of the total light sensed by the sensor. And more false artifacts will occur if the Bayer decoding shifts from 4 cells (RGBG) to 16 cells(RRRRGGGGBBBBGGGG) - but I don't think the marketing department will mention that!
> 
> You know, at some point there's going to be so much circuitry and high resolution that I can see Canon finally coming out with a BSI (back-side illuminated) version of their sensors. That's really the only way to keep getting more complexity and resolution and image quality in a FF sensor as you approach the limits of what a non-BSI sensor can do. We may be seeing Canon's first BSI introduction, which would be quite an announcement in itself! Do I think this will happen in the R1? - No, but it sure wouldn't surprise me. And if they want to get a 45M QP sensor and beyond then I think they may be forced to use BSI in the future.


""""You know, at some point there's going to be so much circuitry and high resolution that I can see Canon finally coming out with a BSI (back-side illuminated) version of their sensors. That's really the only way to keep getting more complexity and resolution and image quality in a FF sensor as you approach the limits of what a non-BSI sensor can do. We may be seeing Canon's first BSI introduction, which would be quite an announcement in itself! Do I think this will happen in the R1? - No, but it sure wouldn't surprise me. And if they want to get a 45M QP sensor and beyond then I think they may be forced to use BSI in the future. """" 

Well, my fellow CR member, you sure called it and back in Oct, 2020, no less. BSI on its way in the R3.


----------



## usern4cr (Apr 25, 2021)

canonmike said:


> """"You know, at some point there's going to be so much circuitry and high resolution that I can see Canon finally coming out with a BSI (back-side illuminated) version of their sensors. That's really the only way to keep getting more complexity and resolution and image quality in a FF sensor as you approach the limits of what a non-BSI sensor can do. We may be seeing Canon's first BSI introduction, which would be quite an announcement in itself! Do I think this will happen in the R1? - No, but it sure wouldn't surprise me. And if they want to get a 45M QP sensor and beyond then I think they may be forced to use BSI in the future. """"
> 
> Well, my fellow CR member, you sure called it and back in Oct, 2020, no less. BSI on its way in the R3.


Thanks, Canonmike! I'm probably just one of the many who have predicted that Canon would have to eventually come out with BSI sensors as you can only have so much circuitry blocking your photo receptors before there is insufficient light for them. But thanks for seeing my post on it and mentioning it.

I also remember "guessing" early on that the price of the RF 100-500L would be the same as the RF 70-200L ($2,700 as I figured rounding was appropriate). So I missed by $1 on that one. Well, that one was just a lucky guess (I've guessed prices on others that have missed as well).

The one I like to remember was when I saw the early patent post for the RF 800mm DO lens. I noticed 2 things: it didn't have any IS elements indicated in it (obviously they did add IS to it in production), and there were absolutely no lenses in the back of it, so I predicted it would have a collapsing ability to shorten its length by ~30% or so for storage. That one I nailed!

Now, if I only knew which way the stock market was going ...


----------



## canonmike (Apr 26, 2021)

usern4cr said:


> Thanks, Canonmike! I'm probably just one of the many who have predicted that Canon would have to eventually come out with BSI sensors as you can only have so much circuitry blocking your photo receptors before there is insufficient light for them. But thanks for seeing my post on it and mentioning it.
> 
> I also remember "guessing" early on that the price of the RF 100-500L would be the same as the RF 70-200L ($2,700 as I figured rounding was appropriate). So I missed by $1 on that one. Well, that one was just a lucky guess (I've guessed prices on others that have missed as well).
> 
> ...


Ha! Ha! When you figure that out, I hope you share your info with the same insight and enthusiasm as your camera predictions. Now, go get to work on it.


----------

