# Patent: Expanded dynamic range using DPAF sensors



## Canon Rumors Guy (Aug 14, 2019)

> Canon News has uncovered a patent showing how to increase dynamic range using DPAF equipped sensors.
> Here’s a breakdown of Canon Japan Patent Application 2019-129491:
> The basic idea is that during read mode, one half of the pixel is amplified at one level, and the other half is amplified at another level. Reading the two simultaneously and processing the results can increase the dynamic range of the sensor. Basically, dual ISO.
> It should be noted that the patent mentions in-vehicle sensors, but the technique can be applied to any other type of sensor as well.
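The dual-gain ("dual ISO") readout described in the breakdown can be sketched in a few lines of Python. All the numbers here - the gain values and the ADC full-scale - are illustrative assumptions for the sketch, not figures from the patent:

```python
# Sketch of the dual-gain ("dual ISO") readout idea: one half-pixel is read
# at low gain, the other at high gain, and the two reads are merged.
# Gains and the ADC ceiling are illustrative assumptions, not patent values.

FULL_SCALE = 4095  # 12-bit ADC ceiling (assumed)

def combine_dual_gain(signal_e, low_gain=1.0, high_gain=4.0):
    """Merge a low-gain and a high-gain read of the same light signal.

    signal_e: light signal (in electrons) falling on each half-pixel.
    Returns an estimate of the scene signal in low-gain units.
    """
    low = min(signal_e * low_gain, FULL_SCALE)    # keeps highlights
    high = min(signal_e * high_gain, FULL_SCALE)  # lifts shadows above read noise
    if high < FULL_SCALE:
        # High-gain half not clipped: prefer it (read noise matters less).
        return high / high_gain
    # High-gain half clipped: fall back to the low-gain half.
    return low / low_gain

print(combine_dual_gain(10))    # shadow: served by the high-gain half
print(combine_dual_gain(3000))  # highlight: high-gain half clips, low-gain survives
```

The merge is where the extra dynamic range comes from: the high-gain read lifts shadow detail above the read-noise floor, while the low-gain read preserves highlights that the high-gain read clips.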



Continue reading...


----------



## SecureGSM (Aug 14, 2019)

As was discovered 3 years ago:

> Forget subtle focus tweaks, Canon’s Dual Pixel RAW tech can give you an additional stop in the highlights!
>
> A week ago, the Canon 5D Mark IV DSLR launched, and with it a brand-new feature based around the same technology that underlies Canon's Dual Pixel CMOS AF function. Called Dual Pixel RAW, it's explained in great detail…

www.imaging-resource.com


----------



## Deleted member 381342 (Aug 14, 2019)

This would certainly be a differentiating feature that Canon could hold over everyone else for a long time. However, I personally do not see the need. I never suffer from a lack of ISO (I shoot animals); what I do occasionally miss is focus, by a few mm.


----------



## BurningPlatform (Aug 14, 2019)

SecureGSM said:


> as it was discovered 3 years ago:
> 
> 
> 
> ...



No, this is completely different. Current dual pixel RAW files contain the other half-pixel's data in the secondary frame and the combined data (the two halves added together) in the main frame. But there is no difference in the amplification of the original dual pixels. With the dual-gain system they could achieve a boost of more than one stop.

I guess the maximum achievable benefit would depend on the noise characteristics of the sensor. This would not help with photon shot noise, but certainly would help if read noise is a factor.
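That distinction between shot noise and read noise can be illustrated with a toy noise model. The signal levels and read-noise figures below are made-up values for illustration, not measured sensor data:

```python
import math

# Toy SNR model: shot noise grows as sqrt(signal), read noise is a fixed
# per-read contribution. All numbers are illustrative assumptions.
def snr(signal_e, read_noise_e):
    shot_noise = math.sqrt(signal_e)
    return signal_e / math.sqrt(shot_noise**2 + read_noise_e**2)

# Deep shadow (20 e-): read-noise dominated, so an effectively lower
# read noise (as with a higher-gain read) gives a clear SNR improvement.
print(snr(20, read_noise_e=5.0), snr(20, read_noise_e=1.5))

# Bright area (20000 e-): shot-noise dominated, so the same change
# in read noise barely moves the SNR.
print(snr(20000, read_noise_e=5.0), snr(20000, read_noise_e=1.5))
```

In the shadow case the SNR improves by roughly 40%; in the bright case the improvement is a small fraction of a percent. That is the sense in which dual gain helps against read noise but cannot help against shot noise.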


----------



## amorse (Aug 14, 2019)

SecureGSM said:


> as it was discovered 3 years ago:
> 
> 
> 
> ...


I remember that - it's an interesting feature I'd considered using on some rare occasions, but it just doesn't seem practical in its current form, since the DPRaw system wasn't designed for that (yet, I guess).

I shoot a lot of high-DR scenes and have a 5D IV, but I've never gone through the trouble of using DPRaw that way, since it seems quicker and more reliable to just bracket. To save card space, I wouldn't enable DPRaw to extend DR unless I knew I'd otherwise clip, just as I wouldn't bracket unless I knew I'd otherwise clip. The issue that keeps me from using DPRaw to increase DR is that I can't check whether the file is still clipped until I get it off the camera and split it into two exposures; I can only guess, based on the assumption of roughly a one-stop darkening in the highlights. With that said, I believe the histogram shown for captured images reflects only the JPEG preview, so even that isn't a perfect measure of clipping. Bracketing just seems more practical whenever the subjects aren't moving.


----------



## privatebydesign (Aug 14, 2019)

BurningPlatform said:


> No, this is completely different. Current dual pixel RAW files contain the other half-pixel's data in the secondary frame and the combined data (the two halves added together) in the main frame. But there is no difference in the amplification of the original dual pixels. With the dual-gain system they could achieve a boost of more than one stop.
> 
> I guess the maximum achievable benefit would depend on the noise characteristics of the sensor. This would not help with photon shot noise, but certainly would help if read noise is a factor.


Yes, it's amazing what can be learned when you actually read the article, isn't it!

Current dual pixel RAW images give you an extra stop of DR in the highlights (which, much to the chagrin of the Sony fanboy club, puts the Canon sensor above the Sony sensor for DR at most ISOs); that is just a simple mathematical fact. Enabling dual amplification/sensitivity/ISO gives even greater mathematical potential.


----------



## Kit. (Aug 14, 2019)

All this would work unreliably in areas where the information it is supposed to provide interferes with the information DPAF is supposed to provide, i.e., in defocused areas with local contrast.


----------



## Jack Douglas (Aug 14, 2019)

Kit. said:


> All this would work unreliably in areas where the information it is supposed to provide interferes with the information DPAF is supposed to provide, i.e., in defocused areas with local contrast.


I guess you are right but isn't it the focused area where you're most interested in the detail via dynamic range?

Jack


----------



## mb66energy (Aug 14, 2019)

BurningPlatform said:


> [...]
> 
> I guess the maximum achievable benefit would depend on the noise characteristics of the sensor. This would not help with photon shot noise, but certainly would help if read noise is a factor.


The simplest way to increase DR with dual pixels is to make one half-pixel less sensitive electronically: switch a series resistor (or an op-amp arrangement) in between the photodiode and the capacitor structure to slow the rate of charging. In dark areas the sensitive half provides the DR, and in bright areas the "blinded" half does. This could effectively result in lower ISO, e.g. ISO 12 or ISO 6, for ultra-high DR.

But I wonder if we will start to see lens imperfections (flare / lower contrast from stray light) that aren't visible with lower-DR sensors. Maybe that's the reason for the improved coatings Canon promotes in its currently released lenses?!


----------



## Jack Douglas (Aug 14, 2019)

Is it reasonable to assume that Canon has been paying attention to all the CR hype about DR? 

Jack


----------



## Quarkcharmed (Aug 14, 2019)

privatebydesign said:


> Current dual pixel RAW images give you an extra stop of DR in the highlights (which, much to the chagrin of the Sony fanboy club, puts the Canon sensor above the Sony sensor for DR at most ISOs),



Unfortunately, no, they don't. Have you actually used the DPRSplit app? On a couple of occasions I was able to recover some highlights.
But in general it doesn't really work as expected: merged files have no additional DR, and it often creates an awful greenish colour cast in the highlights.
It's good to have as a last resort, but I never rely on it, and I wouldn't recommend that anybody rely on it.


----------



## olympus593 (Aug 14, 2019)

This is kinda similar to Magic Lantern's Dual ISO, where odd and even pixel rows are read at different ISOs. The file must go through a special demosaicing process, but the result is very impressive, at the cost of some vertical resolution (proportional to the difference in stops). Maybe this is the dynamic-range stuff the rumors were about.
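A toy version of that row-interleaved scheme looks something like the following. This is a sketch of the idea only, not Magic Lantern's actual algorithm; the gain values and clip level are made up, and it assumes an even number of rows:

```python
import numpy as np

# Toy sketch of Magic Lantern-style Dual ISO: even rows are read at low ISO,
# odd rows at high ISO, then the rows are normalized back to a common scale.
# Gains and the clip level are illustrative assumptions.

CLIP = 4095.0
LOW_GAIN, HIGH_GAIN = 1.0, 4.0

def reconstruct(raw):
    """raw: 2-D array (even row count) with low-ISO even rows, high-ISO odd rows."""
    out = raw.astype(float)
    out[0::2] /= LOW_GAIN          # normalize the low-ISO rows
    high = out[1::2]               # view of the high-ISO rows
    valid = high < CLIP            # clipped high-ISO samples are unusable
    high[valid] /= HIGH_GAIN       # normalize the usable high-ISO samples
    # Where the high-ISO rows clipped, borrow the neighbouring low-ISO row.
    # This substitution is where the vertical-resolution cost comes from.
    high[~valid] = out[0::2][~valid]
    return out
```

In the real Dual ISO workflow the recovery is far more involved (interpolation, noise matching, demosaicing), but the shadow/highlight trade between alternating rows is the same.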


----------



## sdz (Aug 14, 2019)

Codebunny said:


> This would certainly be a differentiating feature that Canon could hold over everyone else for a long time. However, I personally do not see the need. I never suffer from a lack of ISO (I shoot animals); what I do occasionally miss is focus, by a few mm.



The needs: 

1. More effective dynamic range is better than less dynamic range, all else being equal.
2. To put large quantities of crow into the mouths of its critics.


----------



## masterpix (Aug 14, 2019)

Canon Rumors Guy said:


> Continue reading...


I recall this post some time ago: https://www.dpreview.com/news/08686...ge-from-canon-5d-mark-iv-dual-pixel-raw-files , seems it does the same thing.


----------



## Kit. (Aug 14, 2019)

Jack Douglas said:


> I guess you are right but isn't it the focused area where you're most interested in the detail via dynamic range?


We would also not want to mess with bokeh balls in highlights.


----------



## sebasan (Aug 14, 2019)

Quarkcharmed said:


> Unfortunately no, they don't. Have you actually used this DPRSplit app? On a couple of occasions I was able to recover some highlights.
> But in general it doesn't really work as expected, merged files have no additional DR and it often creates an awful greenish colour cast in the highlights.
> Good to have as a last resort, but I never rely on it. And I wouldn't recommend anybody to rely on it.



I have tried DPRSplit and its outputs, and it works very well.
You need to dual-process the files to obtain the dynamic-range benefits, which is better than processing just one file. (It's like bracketing, but with one frame.)


----------



## Mt Spokane Photography (Aug 14, 2019)

Kit. said:


> All this would work unreliably in areas where the information it is supposed to provide interferes with the information DPAF is supposed to provide, i.e., in defocused areas with local contrast.


Why do you say that? Did you understand the patent?

Looking at many of the comments, it is obvious that some have not read, or did not understand, the patent. Are they just commenting on what they imagine it might say?

It's not simple; it is very complex. It does not interfere with DPAF autofocus, it takes sensor noise into account, and it's very well thought out. The patent also covers several different ways of accomplishing the process.

Read it; it goes into step-by-step detail for each of the different methods. Timing charts are shown for the AF mode and readout mode in each case.


----------



## Kit. (Aug 14, 2019)

Mt Spokane Photography said:


> Why do you say that? Did you understand the patent??


I say that because people assume that it will work with DPAF and I understand DPAF.

I don't know Japanese, but I won't be surprised if the patent says nothing about DPAF at all.


----------



## Mt Spokane Photography (Aug 14, 2019)

Codebunny said:


> This would certainly be a differentiating feature that Canon could hold over everyone else for a long time. However, I personally do not see the need. I never suffer from a lack of ISO (I shoot animals); what I do occasionally miss is focus, by a few mm.


This is not boosting ISO; it's increasing DR, even at ISO 100.

For myself, I definitely find cases where DR is totally inadequate. I'm not sure if this would fix those cases, but any help is good, and I'd put it to the test right away. I've just been editing 2,000 shots from a theater event. Generally the stage is not brightly lit; then they hit a subject in white clothing with a spotlight while the rest of the stage stays dim, and the result is an exposure nightmare. Or there is a light in the scene, perhaps a streetlight, which blows out unless I drop the exposure drastically on the fly. Then, in post, there is always more noise than I'd like in those shots.

I toss far more images due to exposure issues than to poor focus. The DR of my 5D Mark IV and my EOS R is very good, but it still can't capture a significant percentage of my shots unless I underexpose. Sometimes I set EC to -2 or even more.


----------



## Mt Spokane Photography (Aug 14, 2019)

masterpix said:


> I recall this post some time ago: https://www.dpreview.com/news/08686...ge-from-canon-5d-mark-iv-dual-pixel-raw-files , seems it does the same thing.


If you mean it increases DR, then yes, but it's not done the same way. DPRSplit often produces images with strange artifacts, and resolution suffers too. Results differ between photo-stacking programs, and it's a lot of work.


----------



## SecureGSM (Aug 14, 2019)

Quarkcharmed said:


> Unfortunately no, they don't. Have you actually used this DPRSplit app? On a couple of occasions I was able to recover some highlights.
> But in general it doesn't really work as expected, merged files have no additional DR and it often creates an awful greenish colour cast in the highlights.
> Good to have as a last resort, but I never rely on it. And I wouldn't recommend anybody to rely on it.


What software have you used to blend the two subframes into one? If you don't have that greenish colour cast in either subframe - and you shouldn't - the resulting frame should be clean.

On another note, it has been proven and confirmed that we gain about 0.7 to 1 full stop of DR improvement in the end.

Now I am keen to understand what advantage a quad-pixel RAW technology might provide in terms of broadening dynamic range.


----------



## Quarkcharmed (Aug 15, 2019)

SecureGSM said:


> What software have you used for blending the two subframes into a single one? If you haven’t that greenish colourcast in either subframe - and you shouldn’t- the resulting frame should be clean.
> On another note, it has been proven and confirmed that there is about 0.7 to 1 full stop of DR improvement that we gain In the end.
> 
> Now, I am keen to understand what DR advantage a quad pixel RAW technology may provide in terms of Dynamic Range broadening.



I use Lightroom for the HDR merge, but the colour cast is evident *before* the merge; compare the file produced by DPRSplit on the left with the normal CR2 on the right. The cast is actually greenish-blueish, but anyway.

(Some LR sliders were tweaked on the left and on the right; I didn't bother to reset them. The point is that the cast is always there, and you can't recover from it.)

As above, a couple of times DPRSplit worked OK for me, but I didn't bother to figure out what conditions sometimes make it produce the colour cast. It's just unreliable and can't be part of a normal workflow - e.g. I wouldn't recommend deliberately overexposing and relying on DPRSplit for recovery.



SecureGSM said:


> On another note, it has been proven and confirmed that we gain about 0.7 to 1 full stop of DR improvement in the end.



I highly doubt it was 'proven' and 'confirmed'. It might work under certain conditions; for me, it just spoils the images.


----------



## Mt Spokane Photography (Aug 15, 2019)

I did try DPRSplit long ago; it required DNG files and a lot of hassle, but it did provide more DR.

Here are three files: a JPEG created from the original raw (notice the detail at the top of the white post), then a Helicon merge and a Photomatix merge.

I also tried the Lightroom merge, but the result was so poor I must have deleted it.

Notice that the different programs do give different results; you could tweak them to look the same, but I chose not to.


----------



## Mt Spokane Photography (Aug 15, 2019)

Of course, I could just use Lightroom and click Auto Tone on the original CR2 file, so maybe it's not so clear-cut after all.


----------



## Mt Spokane Photography (Aug 15, 2019)

Kit. said:


> I say that because people assume that it will work with DPAF and I understand DPAF.
> 
> I don't know Japanese, but I won't be surprised if the patent says nothing about DPAF at all.


Follow the link and click on "English" in the upper right corner. It does go through the DPAF process in fine detail.

Prior to the part where different amplification is applied to one of the halves, both half-pixels operate in normal mode with no differential amplification, and DPAF is performed as usual. Then the additional amplification is applied, and the output is averaged and sent to the A/D converter. A second method involves the A/D converter as part of the averaging, done either as a full frame or as individual pixel columns. A lot happens in a very short period, most of it the normal things that happen for every digital photo, but the patent steps through it all.

They have a single-transistor switch that switches from averaged output to separate outputs, so it's very fast. But as I said, a lot of things have to happen to make it all work smoothly.

This is the description/chart for DPAF; you need to see the entire patent to see which parts of the circuit they refer to. There is a second figure for how the part works with a different ISO for each half of the pixel. Then there are alternate methods...

1) AF read mode
Referring to FIG. 3, a reading method for acquiring phase-difference information from a subject (referred to as the "AF reading mode" or "AF reading") will be described. In AF reading, the amplification gains of the first read circuit 211-1 and the second read circuit 211-2 are set equal. At time t1, SEL1 and SEL2 are set high, and the pixels of the corresponding row are put into the selected state. At time t2, RES1 and RES2 are set low to end the pixel reset. Thereafter, sampling of the noise level is performed until time t3. At time t3, TX1 and TX2 are set high, and charge transfer from the photoelectric conversion units begins. At time t4, TX1 and TX2 are set low, and charge transfer ends. Thereafter, sampling of the signal level is performed until time t5. At time t5, RES1 and RES2 are set high, and the pixels are reset. At time t6, SEL1 and SEL2 are set low, and the pixel selection is cancelled. From time t1 to time t6, ADD is always low, and the signals corresponding to photoelectric conversion units 202-1 and 202-2 are individually amplified and input independently to the signal output circuit 106. By performing the AF reading with the two photoelectric conversion units 202-1 and 202-2 simultaneously, the phase-difference information of the subject can be obtained. When the two photoelectric conversion units 202-1 and 202-2 are arranged left and right within a pixel, horizontal phase-difference information is acquired.

When the two photoelectric conversion units 202-1 and 202-2 are arranged vertically within a pixel, vertical phase-difference information is acquired.


----------



## SecureGSM (Aug 15, 2019)

As I mentioned above, quad-pixel RAW tech may theoretically provide up to 3 additional stops of DR. I do not care how large the resulting file might be: 4 frames taken simultaneously, moving subjects or not. This may end up being a strong value proposition for landscape, architecture and outdoor sports, where we are forced to shoot under whatever lighting conditions are available at the time. Having up to 18 stops of DR on demand... just wow.


----------



## koenkooi (Aug 15, 2019)

SecureGSM said:


> As I mentioned above, quad-pixel RAW tech may theoretically provide up to 3 additional stops of DR. I do not care how large the resulting file might be: 4 frames taken simultaneously, moving subjects or not. This may end up being a strong value proposition for landscape, architecture and outdoor sports, where we are forced to shoot under whatever lighting conditions are available at the time. Having up to 18 stops of DR on demand... just wow.



I do wonder, if this becomes available in a real product, how Canon will present it to the user. Will it be a single, regular CR3 file with 18 stops of DR, or something like the current DPRAW CR3s? If it's the latter, I hope 3rd parties like Adobe will add support for it. But I fear I will still be using DPP to generate TIFFs to import into LR for a long, long time.


----------



## Quarkcharmed (Aug 15, 2019)

koenkooi said:


> I do wonder, if this becomes available in a real product, how Canon will present it to the user. Will it be a single, regular CR3 file with 18 stops of DR, or something like the current DPRAW CR3s? If it's the latter, I hope 3rd parties like Adobe will add support for it. But I fear I will still be using DPP to generate TIFFs to import into LR for a long, long time.


I highly doubt it will ever become a real product. Realistically, what we're going to get is a new-tech sensor, probably (hopefully) better than the 5D IV's, but not exceptional; just good enough.


----------



## sulla (Aug 15, 2019)

This is basically ML's Dual ISO, just done on the sub-pixels of a DPAF sensor, while ML does it on neighbouring full pixels.
While it is fabulous that Canon is implementing something ML invented years ago, how on earth can this be patentable? Canon's inventive step is minimal.


----------



## Kit. (Aug 15, 2019)

sulla said:


> This is basically ML's dual-ISO, just done on the subpixels in DPAF Sensor while ML does it on neighbouring full pixels.
> While it is fabulous that Canon implements something ML invented years ago, how on earth can this be patentable? Canon's inventive step is minimal.


Do you realize that in order for ML to use this functionality, first it needs to be implemented in hardware by... guess whom?


----------



## Kit. (Aug 15, 2019)

Mt Spokane Photography said:


> Follow the link. Click on English in the upper right corner. It does go thru the DPAF process in fine detail.


Ah, it auto-translates when you select "Text" instead of "PDF"?

Still, I couldn't find anything that addresses the concern that amplifying the sub-pixel that got highlighted by one half of the lens exit pupil, and merging it with the sub-pixel that did not get highlighted by the other half, is a bad idea. Bad translation, omission, or not a concern in practice?


----------



## Mt Spokane Photography (Aug 15, 2019)

Kit. said:


> Ah, it auto-translates when you select "Text" instead of "PDF"?
> 
> Still, I couldn't find anything that addresses the concern that amplifying the subpixel that got highlighted by one half of the lens exit pupil and merging it with the subpixel that did not get highlighted by another is a bad idea. Bad translation, omission or not a concern in practice?


You can't prove a negative. Please explain why it's a bad idea. Both halves of the pixel are under the same microlens - is that what you are referring to? It's the same as averaging two adjacent pixels, as in the pixel binning that has been done for many years; why is it suddenly a problem?


----------



## Kit. (Aug 15, 2019)

Mt Spokane Photography said:


> You can't prove a negative. Please explain why it's a bad idea. Both halves of the pixel are under the same microlens - is that what you are referring to? It's the same as averaging two adjacent pixels, as in the pixel binning that has been done for many years; why is it suddenly a problem?


Imagine the case with an ideal separation by phase shift: left half-pixel is almost saturated, right half-pixel is almost black. Now, imagine you decided to amplify the left half-pixel. The result will be full saturation and loss of the actual amplitude information about that defocused highlight.


----------



## nchoh (Aug 15, 2019)

Kit. said:


> Imagine the case with an ideal separation by phase shift: left half-pixel is almost saturated, right half-pixel is almost black.



I can't imagine a situation where the left half-pixel is almost saturated and the right half-pixel is almost black. And that's just a single pixel... what about the other pixels around the same area? Same thing, where one half is almost saturated and the other half is almost black? Do you realize how nearly impossible such a scenario would be?


----------



## Kit. (Aug 15, 2019)

nchoh said:


> I can't imagine a situation where the left half-pixel is almost saturated and the right half-pixel is almost black. And that's just a single pixel... what about the other pixels around the same area? Same thing, where one half is almost saturated and the other half is almost black? Do you realize how nearly impossible such a scenario would be?


It's when we have a close to perfect separation of a phase shift on a defocused point light source in the night, for example.


----------



## Quarkcharmed (Aug 15, 2019)

nchoh said:


> I can't imagine a situation where the left half-pixel is almost saturated and the right half-pixel is almost black. And that's just a single pixel... what about the other pixels around the same area? Same thing, where one half is almost saturated and the other half is almost black? Do you realize how nearly impossible such a scenario would be?


It happens in almost every image: sharp edges of high-contrast objects, such as the line between the sky and a building.


----------



## Bob Wiglz (Aug 16, 2019)

Not for nothing, but what is that schematic from? Does anyone remember their high-school analog electronics? That is about as basic as a differential amplifier gets - like page 4 of any Electronics 101 book.

Also - isn't this what A1ex of Magic Lantern / CHDK fame did with Canons back in '13 with the Dual ISO hack? https://www.magiclantern.fm/forum/index.php?topic=7139.0 Without low-level access to the op-amp circuits he was toggling ISO high and low, alternating... he didn't have per-pixel access, but as I understand it the sensor data is scanned line by line, so that's what he worked with... and now a Canon engineer who does in fact have low-level access implements it properly? Nice.

I've always wondered if they allowed the hacking so they could get ideas and implement the ones that were "_oh duh! why didn't we think of that!_"... I guess it's too much to hope that A1ex learned Japanese and moved to Ohta-ku.


----------



## Sharlin (Aug 16, 2019)

sulla said:


> While it is fabulous that Canon implements something ML invented years ago, how on earth can this be patentable? Canon's inventive step is minimal.



It may have escaped you, but Canon did not try to patent the single sentence "Expand dynamic range by using different gains on each DPAF subpixel" or something. The patent is about a detailed design for electronics that _implements_ that idea on Canon's real-world sensor hardware, and does it fast enough to be useful. _That's_ the patentable part.


----------



## nchoh (Aug 16, 2019)

Quarkcharmed said:


> Happens in almost every image. Sharp edges of contrast objects, such as a line between the sky and buildings.


An alignment of the left and right half-pixels such that the left half is sky and the right half is the building, and vice versa... on one row of pixels?


----------



## Uneternal (Aug 16, 2019)

I've been predicting this feature for months, and it seems the gods of Canon have heard me.
Finally Canon has pulled this off, and if I'm right we are going to see it in the next cameras. I'm already excited to see the Sony fanboys shut up about their great 1-stop-higher dynamic range.
What would be even greater is if Canon put this into the EOS R with a firmware update. However, I'm realistic and won't get my hopes up, but I'm still crossing my fingers.
If they did pull this stunt with the EOS R, it would take away one of Sony's unique selling points, so they would have just two left (full-frame 4K and IBIS).


----------



## Quarkcharmed (Aug 17, 2019)

nchoh said:


> An alignment of the left and right half-pixels such that the left half is sky and the right half is the building, and vice versa... on one row of pixels?



In your message https://www.canonrumors.com/forum/i...e-using-dpaf-sensors.37447/page-2#post-787800 you never said it was about a whole row; you were talking about one pixel. In one row that's unlikely, of course, but in two half-pixels, or in several adjacent half-pixels - easily.


----------



## Don Haines (Aug 17, 2019)

This is the same idea as HDR photography: take one shot at one ISO, another at a different ISO, and merge the pictures. Now we get to take both pictures at the same time.

I believe several forum members proposed this about a day after Canon announced its first DPAF camera.


----------



## SecureGSM (Aug 17, 2019)

nchoh said:


> An alignment of the left and right half-pixels such that the left half is sky and the right half is the building, and vice versa... on one row of pixels?



just thinking... with this dual pixel RAW tech we are essentially looking at two subframes, each with a half-pixel shift, if combined into a single image - similar to this one here:

> Is Pentax’s Pixel Shift Technology Worth Using?
>
> We were pretty impressed with the Pentax K-1 when we got our hands on it last month, and one of the most-hyped inclusions was Ricoh’s Pixel Shift technology. The feature isn’t entirely new, debutin…

www.digitalrev.com

I might consider investing some time experimenting to see if there is a detectable difference in sharpness between a single and a "stacked" dual pixel RAW 5D IV file.

p.s. Actually, looking at this post here, there is a difference:

www.canonrumors.com

----------



## Uneternal (Aug 18, 2019)

SecureGSM said:


> p.s. actually, looking at this post here: there is a difference
> 
> 
> 
> ...


The difference there is just from different saturation and contrast; I don't see a difference in sharpness.


----------



## nchoh (Aug 19, 2019)

Quarkcharmed said:


> In your message https://www.canonrumors.com/forum/i...e-using-dpaf-sensors.37447/page-2#post-787800 you never said it was about a whole row. You were talking about one pixel. In one row that's unlikely of course. But in two half-pixels or in several adjacent half-pixels - easily.



Yes, true. I just didn't know how to explain it at the time... sometimes a thought experiment that considers just one pixel doesn't make sense when expanded to the whole sensor.

That said, it seems many people are not reading the article correctly. It is about amplification during read.

Amplification of one half of the pixel could also mean de-amplification of the other half, so that the pixel as a whole would be at the "correct" exposure. Blown-out pixels would still be blown out on the amplified half but would retain information on the de-amplified half, and vice versa for pixels too dark to hold any information. That would give expanded ISO.
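That amplify-one-half / de-amplify-the-other idea fits in a tiny sketch. The gain values and ADC ceiling here are illustrative assumptions, not anything from the patent:

```python
# Sketch of reading the two halves of one pixel at opposite gains, so that
# at least one half keeps usable data at each extreme of the scene.
# Gains and the ADC ceiling are illustrative assumptions.
FULL_SCALE = 4095

def read_halves(half_signal, up_gain=2.0, down_gain=0.5):
    boosted = min(half_signal * up_gain, FULL_SCALE)       # favours shadows
    attenuated = min(half_signal * down_gain, FULL_SCALE)  # favours highlights
    return boosted, attenuated

# A highlight that clips on the boosted half still keeps detail
# on the attenuated half:
boosted, attenuated = read_halves(3000)
print(boosted, attenuated)
```

The combined pixel then sits roughly at the "correct" exposure, while each extreme of the scene survives on one of the two halves.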


----------

