# Just Touching the Surface of Dual Pixel Technology? [CR1]



## Canon Rumors Guy (Nov 25, 2013)

**Dual Pixel technology more than just AF?**

[NL](http://www.northlight-images.co.uk/cameras/Canon_rumours.html) reports that they've been told to expect even more new features from Canon's Dual Pixel technology beyond autofocus. Currently the tech appears in the [EOS 70D](http://www.bhphotovideo.com/c/product/986389-REG/canon_8469b002_canon_eos_70d_dslr.html/bi/2466/kbid/3296) and will also appear in an upgraded [EOS C100](http://www.bhphotovideo.com/c/product/889545-REG/Canon_EOS_C100_EF_Cinema.html/bi/2466/kbid/3296). Will the [C300 get a similar upgrade](http://www.canonrumors.com/2013/11/from-interbee-interview-with-the-head-of-cinema-eos/)?

Apparently the dual pixel design will need the latest generation of processing (DIGIC 6/7?) to realize its full potential. The benefits of this, alongside new codecs, will be seen in the next Cinema EOS cameras and possibly in new high-end DSLRs.

The video and stills segments of the professional lineup will get upgrades in 2014. Cinema EOS will get it first, possibly shown in April at NAB 2014 in Las Vegas. DSLRs will get it in the second half of the year, most likely shown at Photokina 2014 in Cologne, Germany.

Source: [NL](http://www.northlight-images.co.uk/cameras/Canon_rumours.html)


----------



## Mt Spokane Photography (Nov 25, 2013)

I expect that future generations will have a lot more features. It was apparently pretty difficult to get the first version working, but it has gained acceptance and good reviews, so maybe we will see tracking, and it may even replace phase detect on lower end DSLRs. That would cut the cost to make them.

Right now, there are plenty of weak points even with such a good start.


----------



## Ruined (Nov 25, 2013)

I wonder if this will be seen in the 6D2 next year? The 6D2 needs a new autofocus system by far the most out of all the FF cameras, plus it would be a good testbed before they put it in the higher end cameras.


----------



## Mantanuska (Nov 25, 2013)

6D2 next year? No. Try '15 or '16


----------



## unfocused (Nov 25, 2013)

Trying to imagine what the uses might be. 

I don't know if this is possible, but it would be fantastic if the technology could be used to eliminate the need to micro adjust lenses. 

Photographer points camera at a target (a high-contrast image with sharp lines, such as a block of type). Focusing pixels feed information back to the camera, which displays a message telling the photographer what micro adjustment setting to use for that lens.

Since the dual pixels are used to gather focusing data, are there other kinds of data they can gather and incorporate? 

Could this be used for noise reduction, higher resolution, dynamic range, etc.?

It would certainly disappoint many on this forum if Canon were to leapfrog the competition utilizing this new technology.


----------



## Ruined (Nov 25, 2013)

Mantanuska said:


> 6D2 next year? No. Try '15 or '16



Do you think they are perhaps referencing 7D2?


----------



## JimKarczewski (Nov 25, 2013)

Higher Dynamic range?? Would be an obvious choice.

1 Pixel pulls low info, 1 pixel pulls high..
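As a rough sketch of that idea (the gain ratio and clipping threshold here are made-up illustration values, not anything Canon has described):

```python
# Hypothetical single-frame HDR merge: one sub-pixel read at low gain (keeps
# highlights), its twin at high gain (cleaner shadows), combined in float.
def merge_dual_gain(low, high, gain_ratio=4.0, clip=1.0):
    """low/high: per-pixel values in [0, 1]; high was amplified gain_ratio x."""
    merged = []
    for lo, hi in zip(low, high):
        if hi < clip:                    # high-gain sample usable: better shadow SNR
            merged.append(hi / gain_ratio)
        else:                            # high-gain sample clipped: fall back to low
            merged.append(lo)
    return merged
```

In this toy version a shadow value comes from the amplified read and a near-clipped highlight falls back to the low-gain read, which is the basic dual-gain trade being speculated about.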


----------



## Drizzt321 (Nov 25, 2013)

JimKarczewski said:


> Higher Dynamic range?? Would be an obvious choice.
> 
> 1 Pixel pulls low info, 1 pixel pulls high..



Yea, that's what I was thinking. Especially ever since Magic Lantern started doing that with the dual-readout line skipping on the 5d3. Not quite the same thing, but if the dual-pixel setup lets them have a different read-out/amp for each photosite per bayer CFA point...that'd be pretty awesome. Remains to be seen if they can maintain the good high ISO performance as they are essentially cutting the size of each photosite in half.


----------



## Don Haines (Nov 25, 2013)

Drizzt321 said:


> JimKarczewski said:
> 
> 
> > Higher Dynamic range?? Would be an obvious choice.
> ...



HDR is almost a given..... but with 20 million plus focus points and gobs more computing power you can expect to see object tracking autofocus that puts the capabilities of the 1DX to shame...


----------



## pedro (Nov 25, 2013)

What are the possible effects on better high ISO IQ with this tech? Let's say: in the 25k - 51k range?


----------



## neuroanatomist (Nov 25, 2013)

Regarding HDR, as pointed out, the chip architecture must support separate amplifier circuitry for each sub-pixel, and there will be a noise penalty (base ISO will effectively be 200, but current HTP is similar, so it would be like HTP on steroids).


----------



## Marsu42 (Nov 25, 2013)

Ruined said:


> I wonder if this will be seen in the 6D2 next year? The 6D2 needs a new autofocus system by far the most out of all the FF cameras, plus it would be a good testbed before they put it in the higher end cameras.



If there is a 6D2 in 2014 my bet is that it will *not* get a new or significantly better phase AF system - why should Canon waste a main selling point of the even more expensive DSLRs?

What will probably happen is that the live view dual pixel AF is supposed to "fix" this shortcoming and is expected to serve as the AF system for video and some stills work for amateurs, just like on the 70D.


----------



## transpo1 (Nov 25, 2013)

Canon Rumors said:


> **Dual Pixel technology more than just AF?**
> 
> ...



"DSLRs will get it in the second half of the year" means the 7D2.


----------



## vscd (Nov 25, 2013)

>but if the dual-pixel setup lets them have a different read-out/amp for each photosite
>per bayer CFA point...that'd be pretty awesome. Remains to be seen if they can maintain
>the good high ISO performance as they are essentially cutting the size of each photosite in half.

They don't have to. When increased ISOs are used, you could switch back to using both diodes together as one bigger one. Or you could do two frames while the mirror is up, one with pushed/pulled sensitivity and one with normal sensitivity; rendered together, that gives you more range at the same ISO.

There are a lot of possibilities here, because you can do a lot of mathematical variations in real time. You could even get rid of the AA filter by comparing two different pictures, each captured with one or the other sub-pixel. They already wrote of the DIGIC 6/7 needed for the increasing data/algorithms.

I don't think they will remove the AF module from the higher spec cameras... you can't compare some phase-detecting pixels on the chip with a dedicated AF sensor. The areas are much bigger and more precise... especially in the dark. But you could make nearly silent cameras for shoots, since the mirror doesn't have to move up.


----------



## meauounji (Nov 25, 2013)

Thing I'm excited about in that post: new codecs.

Thing I hope dual pixel AF tech will do: reduce rolling shutter in video (faster CMOS read/reset speed)


----------



## jebrady03 (Nov 25, 2013)

I'm ready for QPAF (Quad Pixel). 
HDR plus AF. 
It seems like a natural evolution to me.


----------



## dufflover (Nov 25, 2013)

A lot of these ideas sound pretty neat, but at the same time some of them don't sound like dual-pixel so much as something you could do if you just crammed twice as many normal pixels into the camera. To put it another way, if the dual-pixel system is basically two fully functioning pixels (because that's what some of the ideas seem to be using), what's the difference?


----------



## 9VIII (Nov 25, 2013)

dufflover said:


> A lot of these ideas sound pretty neat, but at the same time some of them don't sound like dual-pixel so much as something you could do if you just crammed twice as many normal pixels into the camera. To put it another way, if the dual-pixel system is basically two fully functioning pixels (because that's what some of the ideas seem to be using), what's the difference?



That's what I'm wondering right now.
I would use the dual pixels for a more compact RGBG pixel layout with better colour accuracy, but that's just halfway to increasing resolution by 4 times and getting a perfect RGB signal per pixel (counting four photosites as one pixel).
It would be nice if camera companies would just switch to the same standards as display companies use and count groupings of three sub-pixels as one pixel.


----------



## rs (Nov 25, 2013)

9VIII said:


> I would use the dual pixels for a more compact RGBG pixel layout with better colour accuracy, but that's just halfway to increasing resolution by 4 times and getting a perfect RGB signal per pixel (counting four photosites as one pixel).
> It would be nice if camera companies would just switch to the same standards as display companies use and count groupings of three sub-pixels as one pixel.


Nice idea, but that presumes you want to display the image on screen at 1:1 using a current generation display. The problem is that people print, people display at sizes other than 1:1, and display technology changes. Compare colour CRTs with their seemingly unrelated pixel and RGB layout, LCDs with a predictable pixel-to-RGB layout, PenTile displays, etc.

Take video for example. Rolling shutter is a very real problem, but roll back the clock to the very first video camera and TV - a one pixel camera with a spinning Nipkow disk. It had zero rolling shutter because the display device was a single light lit by the electrical output of the single pixel, and another Nipkow disk. Great system, but only good when matched with a specific output system.

The best is surely to get the recorded image as close to theoretically perfect as possible, then as output devices mature (by chasing that same goal), it all looks good regardless. However, with retina displays, high DPI printers and high MP cameras most of us have within reach now, the detailed arrangement of how primary colours are individually captured and reproduced has become almost meaningless.


----------



## dgatwood (Nov 26, 2013)

rs said:


> Take video for example. Rolling shutter is a very real problem, but roll back the clock to the very first video camera and TV - a one pixel camera with a spinning Nipkow disk. It had zero rolling shutter because the display device was a single light lit by the electrical output of the single pixel, and another Nipkow disk. Great system, but only good when matched with a specific output system.



I would argue that Nipkow disk designs have rolling shutter problems just like a CMOS sensor, just like tube cameras, etc., and for the same reason. Any time you scan an image from left to right, top to bottom over the course of a thirtieth of a second, the scene you are shooting can change considerably between when you read the upper left corner and when you read the bottom right corner. That's rolling shutter. The only difference is that you never stored a whole frame image from a Nipkow disk, so the viewer would probably not have *perceived* the rolling shutter. 

There's only one way to avoid rolling shutter in an all-electronic image system (*), and that's to use a secondary off-screen buffer. Basically, you simultaneously reset all of the pixels to start sampling, then wait a period of time (the exposure time), and then simultaneously shift all of the pixels into that secondary buffer so that they won't change while you're reading them, and finally read the pixels out in whatever order you want to, at whatever speed you can manage.

(*) If you don't care about being all-electronic, you can use either a physical shutter as DSLRs do for stills or use film with a physical shutter and then scan it later.
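The row-by-row readout described above can be illustrated with a toy simulation (pure illustration, nothing camera-specific): a vertical edge moving right while a rolling readout samples one row per time step, versus a global readout sampling every row at once.

```python
# Toy rolling-shutter simulation: each row of a rolling readout is sampled one
# time step later than the previous row, so a moving vertical edge lands in a
# different column per row (skew); a global readout samples all rows at t = 0.
HEIGHT, WIDTH, SPEED = 8, 16, 1  # SPEED: columns the edge moves per row time

def edge_col(t):
    return 4 + SPEED * t  # edge position at time t

rolling = [[1 if x < edge_col(y) else 0 for x in range(WIDTH)]
           for y in range(HEIGHT)]        # row y sampled at t = y
global_ = [[1 if x < edge_col(0) else 0 for x in range(WIDTH)]
           for y in range(HEIGHT)]        # every row sampled at t = 0

skew = edge_col(HEIGHT - 1) - edge_col(0)  # slant of the edge, in columns
```

The rolling frame renders the straight edge as a diagonal (7 columns of skew across 8 rows here), while every row of the global frame agrees, which is exactly the buffered-readout argument above.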


----------



## Dylan777 (Nov 26, 2013)

For higher ISO I'm in. Otherwise it's just another feature for the video guys.


----------



## Rienzphotoz (Nov 26, 2013)

This is fantastic ... hope to see an awesome 7D II with cool features and performance.


----------



## Lawliet (Nov 26, 2013)

Dylan777 said:


> Otherwise it's just another feature for the video guys.



And for still life/high res guys. Landscape, fashion, and so on.


----------



## Marsu42 (Nov 26, 2013)

Lawliet said:


> Dylan777 said:
> 
> 
> > Otherwise it's just another feature for the video guys.
> ...



How so? Not to shamelessly promote Magic Lantern (again ), but focus peaking in live view is terrific for manual focus, and personally I really wouldn't know what I'd want dual pixel af in stills for as I nearly never use contrast af. Great feature if you get it for free with the latest gen cameras, but nothing to write home about unless it's in a mirrorless body.


----------



## 9VIII (Nov 26, 2013)

Dylan777 said:


> For higher ISO I'm in. Otherwise it's just another feature for the video guys.



You wouldn't give up anything for better video performance even if it means that all youtube videos will forever suffer from constant focus hunting?






rs said:


> 9VIII said:
> 
> 
> > I would use the dual pixels for a more compact RGBG pixel layout with better colour accuracy, but that's just halfway to increasing resolution by 4 times and getting a perfect RGB signal per pixel (counting four photosites as one pixel).
> ...



http://forums.lenovo.com/t5/Idea-Windows-based-Tablets-and/Re-Yoga-2-Pro-13-Yellow-Color-Issues/td-p/1270427

After seeing this I'm not sure if the sub pixel layout is ever going to be less important than it is now.


----------



## vscd (Nov 26, 2013)

> A lot of these ideas sound pretty neat, but at the same time some of them don't sound like dual-pixel so much as something you could do if you just crammed twice as many normal pixels into the camera. To put it another way, if the dual-pixel system is basically two fully functioning pixels (because that's what some of the ideas seem to be using), what's the difference?



The difference is using different voltages/sensitivities for each of those "half" pixels. By just doubling the pixel count you can't shoot HDR in one frame, because all pixels are bound to the same settings.

The idea was to use one pixel, for example, from -8 EV to 0 EV and the other from 0 EV to 8 EV, making a (theoretical) total of 16 EV... the real range of a *single* cell couldn't capture 16 EV. Today you need to make two shots.
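Taking the post's 8-stop-per-half split at face value, the arithmetic works out like this: stops add when the linear ranges multiply.

```python
import math

# If one sub-pixel covers an 8-stop range and its twin covers the adjacent
# 8 stops, the combined linear range is 2**8 * 2**8 = 2**16, i.e. 16 stops
# total (ignoring overlap, read noise, and the clipping hand-off in practice).
per_half_stops = 8
combined_stops = math.log2((2 ** per_half_stops) * (2 ** per_half_stops))
```

The "ignoring noise" caveat matters: a real sensor's floor is set by read noise, not by where the ranges nominally abut.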


----------



## AvTvM (Nov 26, 2013)

dufflover said:


> A lot of these ideas sound pretty neat, but at the same time some of them don't sound like dual-pixel so much as something you could do if you just crammed twice as many normal pixels into the camera. To put it another way, if the dual-pixel system is basically two fully functioning pixels (because that's what some of the ideas seem to be using), what's the difference?



I see it the same way. With enough (sub) pixels and enough computing power and some clever algorithms/ firmware it would be possible to kill all birds with one stone ... in real time.


incredibly good hi res - unbinned
incredibly good Hi-ISO - lower res, binned any which way
incredibly good DR - combining low/hi ISO takes from pixel-subsets - any which way
incredibly fast and precise AF - limited only by lens AF-drive capabilities 

All of it in a "truly digital" camera body without any mechanics, noise and (internal) vibrations. End of mirrorslapping. Camera size defined by sensor-size, battery-size and ergonomic reasons (grip, balance). 

Will be interesting, when Canon (as well as Nikon) finally see the light.


----------



## vscd (Nov 26, 2013)

> I see it the same way. With enough (sub) pixels and enough computing power and some clever algorithms/ firmware it would be possible to kill all birds with one stone ... in real time.



You are totally right, but you assume that every pixel can be controlled individually by the CPU. The sensor doesn't work that way. Try reading out one single bit of your computer's memory without affecting the other bits of the corresponding byte!


----------



## M.ST (Nov 26, 2013)

The dual pixel AF is only the beginning.

Canon is working very intensely on new functions for the dual pixel technology.

You will see some new features in the upcoming 7D Mark II and the new EOS 1 series body.


----------



## neuroanatomist (Nov 26, 2013)

dufflover said:


> A lot of these ideas sound pretty neat, but at the same time some of them don't sound like dual-pixel so much as something you could do if you just crammed twice as many normal pixels into the camera. To put it another way, if the dual-pixel system is basically two fully functioning pixels (because that's what some of the ideas seem to be using), what's the difference?



The difference is the dual-pixel method has both of the sub-pixels under one microlens. Two separate pixels would mean a loss of spatial resolution in one dimension or the other, or a 'stretched' image (3:1 or 3:4 instead of 3:2 aspect ratio) if interpolation isn't done. So, you wouldn't need twice as many, but four times as many separate pixels with individual microlenses. For a '20 MP' APS-C sensor like 70D, that drives pixel size down into the ~2um range – PowerShot territory.
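The arithmetic behind that ~2um estimate, as a quick check (using the 70D's nominal 22.5 x 15.0 mm sensor and 20.2 MP):

```python
import math

# Pixel pitch for square pixels: sqrt(sensor area / pixel count).
sensor_w_mm, sensor_h_mm = 22.5, 15.0   # APS-C, roughly the 70D's sensor
pixels = 20.2e6

pitch_um = math.sqrt(sensor_w_mm * sensor_h_mm / pixels) * 1000
quad_pitch_um = pitch_um / 2  # four separate pixels (each with its own
                              # microlens) in the footprint of one: half the pitch
```

That gives about a 4.1um pitch for the 70D and about 2um for the hypothetical quad layout, which is indeed small-sensor compact-camera territory.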


----------



## Lawliet (Nov 26, 2013)

Marsu42 said:


> How so? Not to shamelessly promote Magic Lantern (again ), but focus peaking in live view is terrific for manual focus, and personally I really wouldn't know what I'd want dual pixel af in stills for as I nearly never use contrast af.



Some of my favorite models are at their best in the inbetween moments - good luck keeping up with the girls 

ML... well, with the 70D or 1D X I get about 50% more flashes out of a battery, correspondingly shorter flash durations and faster recycle times than with a 5D3. The difference in rental & logistics fees outweighs the cost of the camera by far. ML is nowhere to be seen. Now with a high res body - how long will that take? That kind of stuff makes "out of the box"/"officially supported" valuable; without such assurances ML remains a bonus, but can't be a factor in mid- to long-term decision making.


----------



## neuroanatomist (Nov 26, 2013)

Lawliet said:


> ...well, with the 70D or 1Dx I get about 50% more flashes out of a battery, corresponding shorter flash durations and faster recycle times...than with a 5D3.



Sorry, but...huh? Of the three cameras you list, only the 70D has a popup flash. How does the 1D X provide shorter flash durations and faster recycle times than the 5DIII for an external flash? Since you ascribe the same benefit to the 70D, I assume you're not referring to something like using a higher ISO. Can you explain?


----------



## Cali_PH (Nov 26, 2013)

*Interesting rumor of a new sensor developed by Hasselblad and Sony* paralleling some of the ideas discussed here. 

_"Every single pixel can have a different shutter time! This means the sensor allows a dramatic increase of the dynamic range. What sources didn’t tell me is how exactly this works and if the sensor is going to be first used by Hasselblads new medium format camera or by a new generation of FF sensors. Anyhow, its great news to see that Hasselblad is working on some exciting new tech with Sony!"_


----------



## neuroanatomist (Nov 26, 2013)

Cali_PH said:


> *Interesting rumor of a new sensor developed by Hasselblad and Sony* paralleling some of the ideas discussed here.
> 
> _"Every single pixel can have a different shutter time! This means the sensor allows a dramatic increase of the dynamic range. What sources didn’t tell me is how exactly this works and if the sensor is going to be first used by Hasselblads new medium format camera or by a new generation of FF sensors. Anyhow, its great news to see that Hasselblad is working on some exciting new tech with Sony!"_



Sounds interesting…at least for static subjects.


----------



## Cali_PH (Nov 26, 2013)

neuroanatomist said:


> Sounds interesting…at least for static subjects.



Yes, I was wondering how that'd work, what settings one would use vs. what the camera actually does...at least for my main personal interest (landscape), I wouldn't have to worry much. But for something moving fast...you'd get some interesting mistakes. If that's something they (or someone else) has experimented with, I'm guessing the early test shots were very interesting. ;D

Of course, the rumor could be incorrect about different exposure times, and it's actually different ISO's as some have discussed here. Or just incorrect altogether.


----------



## Don Haines (Nov 26, 2013)

Cali_PH said:


> Of course, the rumor could be incorrect about different exposure times, and it's actually different ISO's as some have discussed here. Or just incorrect altogether.



NO!!! Not an incorrect rumour! Say it isn't so....

I find it interesting that all of a sudden dual pixel technology has popped up in several sources as rumours and that Canon and Olympus (to a limited degree) have it on the market. Since this is something that has taken at least 5 years to go from the labs to the marketplace, you can bet that everyone is working on it... my bet is that in a couple of years everyone will have it on all their DSLRs and mirrorless cameras...


----------



## AvTvM (Nov 26, 2013)

Don Haines said:


> I find it interesting that all of a sudden dual pixel technology has popped up in several sources as rumours and that Canon and Olympus (to a limited degree) have it on the market.



My understanding is that currently only Canon 70D (and C100, possibly soon also C300) utilize "split/dual pixel"-on-sensor PD-AF with sensels on 80% of sensor surface useable for on-sensor PD-AF - as well as capturing light for image data just like any "regular" sensel. http://www.dpreview.com/reviews/canon-eos-70d/3

All other hybrid "on-sensor phase-detect-AF" implementations seem to be of the "older" (2010) "Fuji-type" http://www.dpreview.com/news/2010/8/5/fujifilmpd where only a small number [e.g. 99 for Sony NEX 5R] of "special-purpose" AF-pixels [partially masked off] are used for PD-AF purposes. These sensels do not capture light for image data, the blanks have to be filled by interpolation. 

This latter approach seems to be used by a number of companies: 

Canon in EOS 650D, 700D, EOS-M and 
in an improved version II with wider coverage of sensor area in the SL-1/100D http://www.dpreview.com/reviews/canon-eos-100d-rebel-sl1/6 
by Oly in the OMD5 and OMD1 and 
by Sony - first in their NEX-5R http://www.dpreview.com/previews/sony-alpha-nex-5r/3, then also in NEX-6 and NEX-7 and maybe in a slightly different version in the Alpha A7 (but not in A7R which uses CD-AF only) 
and also in a number of Fuji cameras 

Panasonic also filed an on-sensor PD-AF patent in 03/2012 http://www.freepatentsonline.com/8482657.html - from the looks of one of the illustration images, it seems to also use a finite number of designated PD-AF sensels on the sensor, but has a rather different layout with (separate) PD-AF line sensors behind transmissive image sensor layers and condenser lenses.

The Sony/Hassy rumour - if true at all - would be yet another, altogether different thing.


----------



## Lawliet (Nov 26, 2013)

neuroanatomist said:


> Since you ascribe the same benefit to the 70D, I assume you're not referring to something like using a higher ISO. Can you explain?



It's about the sync speed; the 5D3 is noticeably behind there. The additional power required to balance the increased influx of ambient light takes its toll on all fronts.


----------



## neuroanatomist (Nov 26, 2013)

Lawliet said:


> neuroanatomist said:
> 
> 
> > Since you ascribe the same benefit to the 70D, I assume you're not referring to something like using a higher ISO. Can you explain?
> ...



I wouldn't have thought 1/3 of a stop would make that much of a difference...


----------



## Lawliet (Nov 26, 2013)

neuroanatomist said:


> I wouldn't have thought 1/3 of a stop would make that much of a difference...


With fast triggers it's 2/3 of a stop, or a full one if you allow for the same amount of shading (there is a reason the manuals are quite YMMV in that regard), i.e. twice the number of packs, no more lightweight heads but bi-tubes that each cost not much less than a 1D X. Or a D800 and a nice set of lenses.
Enough difference to put it rather high on my priority list.
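The stop arithmetic behind those numbers (the sync speeds are the cameras' nominal specs; 1/160 s for the 5D III on radio triggers is a common rule of thumb here, not a spec):

```python
import math

def stop_diff(fast_denom, slow_denom):
    # Difference in stops between two shutter speeds given as denominators,
    # e.g. stop_diff(250, 200) compares 1/250 s with 1/200 s.
    return math.log2(fast_denom / slow_denom)

nominal = stop_diff(250, 200)        # 70D / 1D X (1/250 s) vs 5D III (1/200 s)
with_triggers = stop_diff(250, 160)  # 5D III often limited to ~1/160 s on triggers
```

That lands at roughly 1/3 of a stop on paper and roughly 2/3 with triggers, matching the two figures quoted in the exchange; at these speeds each extra stop of ambient must be matched with double the flash power.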


----------



## 9VIII (Nov 26, 2013)

Wow, all I've been reading for months is that camera technology is mature and stale, and now it sounds like we're going to be seeing all sorts of big changes that affect fundamental aspects of using a camera.
Per pixel exposure and ISO? A camera that never blows highlights? Count me in!


----------



## thome (Nov 27, 2013)

I actually don't understand why this aspect has not yet been discussed - not here, nor on Facebook, but maybe I am just wrong:

If you have phase detection capability on *every* pixel of your sensor, which means for *every* pixel in the final picture, it should be easy to get a 3D image from it.

As I understand phase detection AF, you can actually get the *distance* from just one metering.
Buffer the readout of *ALL* dual pixels, render the image from the light, save a "depth map" document alongside the image, and let software on a PC render the scene in 3D. Or let the camera do it. There are even 3D-capable displays that could be used in camera.

What did I overlook on the technical side?

I think THIS would be a HUGE step in photography. I am completely happy with 2D, but 3D movies and TVs showed us where things could lead. 3D images for everyone would then be just a logical step.

Any thoughts on this?
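A minimal sketch of the first step of that idea (everything here is illustrative: a toy 1-D search, not how Canon's AF actually works): estimate the per-row shift between the images formed by the left and right sub-pixels. Zero shift means the scene point is at the focus plane; the sign and magnitude map to a relative depth.

```python
# Toy dual-pixel disparity: find the integer shift that best aligns one image
# row from the "left" sub-pixels with the same row from the "right" sub-pixels,
# by minimizing the mean squared difference over the overlapping samples.
def row_disparity(left, right, max_shift=3):
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err, count = 0.0, 0
        for i, lo in enumerate(left):
            j = i + s
            if 0 <= j < len(right):
                err += (lo - right[j]) ** 2
                count += 1
        err /= count
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift
```

Repeating this per pixel neighbourhood would give the relative depth map the post imagines; turning it into absolute distances would additionally need lens and calibration data.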


----------



## Busted Knuckles (Nov 28, 2013)

neuroanatomist said:


> _"Every single pixel can have a different shutter time! This means the sensor allows a dramatic increase of the dynamic range. What sources didn’t tell me is how exactly this works and if the sensor is going to be first used by Hasselblads new medium format camera or by a new generation of FF sensors. Anyhow, its great news to see that Hasselblad is working on some exciting new tech with Sony!"_
> 
> Sounds interesting…at least for static subjects.



Would this be similar to the old CCD type - track the time needed to establish a defined signal level and use it to provide luminosity vs. signal level for a given time???

Self edit - wouldn't it be cool if, instead of clipped highlights regardless of shutter speed, the pixel "shuttered" itself at some % of the set shutter speed? I.e. the shutter speed is set at 1/1000, and at 1/2000 the pixel reaches a set value short of blown-out white, so it stops recording and provides a time value. This time value is then used to predict a clipped highlight if that is what is desired, or software applies some sort of sliding scale to recapture all that detail that would have been lost.

this could be really cool at longer shutter speeds... 

Got to go, the nurse is here with my meds.


----------



## Marsu42 (Nov 28, 2013)

neuroanatomist said:


> Lawliet said:
> 
> 
> > neuroanatomist said:
> ...



It can, at least when shooting macro with the 100L on 60d - 1/250s is *just* enough to motion-stop something that moves a bit, unfortunately with some ambient light you end up in too high iso regions to get good results.

With the 6D, the 1/180s max x-sync is too slow, so I mostly end up shooting with HSS when using macro. You would think 1/250s at 100mm*1.6x (crop) would be about the same as 1/180s at 100mm*1.0x (FF), but my recent experience is that it isn't.


----------



## neuroanatomist (Nov 28, 2013)

Marsu42 said:


> neuroanatomist said:
> 
> 
> > Lawliet said:
> ...



Makes sense. 

But... the benefit of the 1D X and 70D over the 5DIII being discussed was 50% more flash battery life, faster flash recycle times, etc.


----------



## rs (Nov 28, 2013)

thome said:


> I actually don't understand why this aspect has not yet been discussed - not here, nor on Facebook, but maybe I am just wrong:
> 
> If you have phase detection capability on *every* pixel of your sensor, which means for *every* pixel in the final picture, it should be easy to get a 3D image from it.
> 
> ...


http://www.samsung.com/uk/consumer/smart-camera-camcorder/lenses/special-purpose-lenses/EX-S45ADW

Different way of accomplishing what you're after. Both will suffer from nasty looking half cut bokeh from each of the two images used to make up the stereoscopic image, but they go about achieving the same end result in a different way - one blocks off half the lens, the other blocks off half of each pixel.


----------



## thome (Dec 1, 2013)

I don't know what you mean by "nasty looking half cut bokeh", but there is a biiig difference between capturing stereoscopic pictures with two "lenses" set apart to get depth information (two different angles, two pictures) and calculating it from a depth map from just one picture. Actually I don't know if the latter will be better, as you would have to split this one picture again into different angles for the human eyes. At best from a source that captured both pictures at a "human eye distance". But for a PC this depth map would be sufficient. But how to get it done for human eyes? Hm. I know much too little about 3D. ;-)


----------



## jrista (Dec 2, 2013)

jebrady03 said:


> I'm ready for QPAF (Quad Pixel).
> HDR plus AF.
> it seems like a natural evolution to me



I'm not sure DPAF or a hypothetical evolution to QPAF is really a means to achieving HDR. Remember, ML had to cut resolution in half in order to achieve its makeshift approach, not because they did not have dual pixels...but because they had to use both the per-pixel amps as well as a secondary downstream amp. Doesn't matter how many times you dice up a pixel...if you have to use the downstream amplifier to achieve ML's style of "HDR", then diced pixels won't help.

Additionally, HDR implies 32-bit float data storage. Current camera ADCs are still limited to 14 bits int. Canon already has 12 stops of DR...seems a bit extreme to use such a convoluted approach to improving that by a mere two stops, when their problem actually lies in the ADCs themselves. Canon could take a far simpler approach...increase the parallelism of the ADCs, and move them closer to the pixels, to reduce the amount of noise they introduce into the signal. That's what everyone else is doing, and it is quite effective.

Assuming Canon was able to use QPAF to do some form of HDR... unless they increase the bit depth of the ADC, it isn't really going to be HDR. You would still be limited to 14 stops of DR, albeit achieved via a rather convoluted approach that could be more costly and less effective than simply modernizing their read pipeline architecture. To get true HDR, Canon would need to use a 32-bit ADC and floats rather than ints. At the very least, to improve DR by a meaningful degree, they would need to move to a 16-bit integer ADC, though that wouldn't necessarily be "HDR".
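The bit-depth ceiling invoked above can be made concrete (this is the engineering DR upper bound of an ideal integer ADC; real sensors fall short of it because of read noise):

```python
import math

def adc_dr_stops(bits):
    # Upper bound on the dynamic range an ideal integer ADC can encode:
    # log2(largest code / smallest nonzero code) = log2(2**bits - 1) ~ bits
    return math.log2(2 ** bits - 1)

dr_14bit = adc_dr_stops(14)  # just under 14 stops
dr_16bit = adc_dr_stops(16)  # just under 16 stops
```

So a 14-bit pipeline caps encoded DR just under 14 stops no matter how the pixels are diced, which is the core of the argument that the ADC, not the pixel, is the bottleneck.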


----------

