# More Sensor Technology Talk [CR1]



## Canon Rumors Guy (Apr 30, 2014)

We’re told by a few other people that Canon is working on a “Foveon-like” sensor for their next generation of full frame cameras. The goal is to have the new sensor tech in their next “prosumer” camera, so perhaps that means a new large-megapixel camera with a smaller body than an EOS-1, or the EOS 5D Mark IV in 2015.

The replacement to the EOS 7D will have a new sensor not currently available in the Canon lineup, but we’re told the camera is being specced more on features, especially for fast action and video, than brand new EOS technology. By the sounds of things, the new sensor tech is best suited for the full frame segment.

More to come…

**cr**


----------



## scottkinfw (Apr 30, 2014)

I would be up for a 5DIV.

Aside from a better image, what else would others want in a 5DIII upgrade to make it worthwhile? Hmmmm.

I for one would like better/faster focusing, and it would be great if they included an intrinsic automated AFMA adjustment like FoCal, or better, a way for it to be done in real time, all the time.

sek



Canon Rumors said:


> We’re told by a few other people that Canon is working on a “Foveon-like” sensor for their next generation of full frame cameras. The goal is to have the new sensor tech in their next “prosumer” camera, so perhaps that means a new large-megapixel camera with a smaller body than an EOS-1, or the EOS 5D Mark IV in 2015.
>
> The replacement to the EOS 7D will have a new sensor not currently available in the Canon lineup, but we’re told the camera is being specced more on features, especially for fast action and video, than brand new EOS technology. By the sounds of things, the new sensor tech is best suited for the full frame segment.
>
> More to come…
>
> **cr**


----------



## Drizzt321 (Apr 30, 2014)

Interesting.

A 7d2 more focused on fast action definitely makes sense. Birders & sports guys will probably love that.

5d4 features...hmmm...

I'd love to have metering based on focused AF points, even if it's not the full RGB of the 1DX, it'd be great to still meter based on the AF point.

Dual CFast slots, or at worst, dual UDMA7 CF slots.

More AFMA points along a zoom could be nifty, not sure if needed though. Add in an automated AFMA adjust (with appropriate target of course) would be awesome.

USB3 port instead of USB2.

Full, uncompressed 4:2:2 via HDMI, and it'd be great for similar off of the USB3 port, but I find that unlikely.

'Dual-Pixel' sensor Phase Detect AF could be handy, although I'd rather have the better color accuracy/DR that a Foveon-like sensor might bring.

Probably a few other things that would be great, although I highly doubt I'd buy it day one. My 5d3 is working great, but eventually I imagine I'd go for a 5d4. Some day. Unless I win the lotto first *crosses fingers*


----------



## 2n10 (Apr 30, 2014)

As a birder with a 7D, I like that it's slated to have great speed and so on.


----------



## BL (Apr 30, 2014)

Can someone help me understand why foveon sensors are a big deal? 

I get that the 3 layers can do away with moire and other false color noises, but this tech doesn't feel like a game changer, and rather a small evolution of what we currently use now.


----------



## Dylan777 (Apr 30, 2014)

Put this sensor in a FF mirrorless, similar to the Sony A7 series body size.

I doubt I'm the only one on CR who wants a Canon FF mirrorless + some pancakes.


----------



## ScottyP (Apr 30, 2014)

I think they lost me at "Foveon-like". 

So it will have all the negatives of a high MP camera, like massive files to store, and a slowed FPS, and a faster-clogging buffer, but none of what you actually want from all those MP's, namely higher resolution and more detail to spare when doing things like shooting at high ISO, or cropping heavily.

Am I missing something wonderful about Foveon? If so, then so is everyone else, based on the failure of Sigma's Foveon bodies to fly off the shelves. Why not copy Fuji sensors instead? That more complex, non-Bayer pixel, no-filter thing sounds much more interesting to me, anyway.

Crud.


----------



## gmrza (May 1, 2014)

scottkinfw said:


> I would be up for a 5DIV.
> 
> Aside from a better image, what else would others want in a 5DIII upgrade to make it worthwhile? Hmmmm.
> 
> I for one would like better/faster focusing, and it would be great if they included an intrinsic automated AFMA adjustment like FoCal, or better, a way for it to be done in real time, all the time.



I wonder if it might be a little early for a 5DIII replacement, because a lot of people have not fully amortised the purchase cost of their 5DIIIs yet. When the 5DIII came out, most people with 5DIIs sorely needed to replace them, so there was an almost guaranteed revenue stream for Canon. I think Canon would want to delay a launch long enough to guarantee a sufficiently strong stream of upgrades.


----------



## Lee Jay (May 1, 2014)

I wonder if quad pixel technology, with a blue, a red, and two green pixels under a single microlens would be Foveon like to Canon.


----------



## EverydayGetaway (May 1, 2014)

ScottyP said:


> I think they lost me at "Foveon-like".
> 
> So it will have all the negatives of a high MP camera, like massive files to store, and a slowed FPS, and a faster-clogging buffer, but none of what you actually want from all those MP's, namely higher resolution and more detail to spare when doing things like shooting at high ISO, or cropping heavily.
> 
> ...



Do your homework before b***ing. Foveon sensors are a big deal for landscape shooters. If Canon can sort out the processing time for that sensor tech, they'd have the upper hand in sensor tech.


----------



## Drizzt321 (May 1, 2014)

ScottyP said:


> I think they lost me at "Foveon-like".
> 
> So it will have all the negatives of a high MP camera, like massive files to store, and a slowed FPS, and a faster-clogging buffer, but none of what you actually want from all those MP's, namely higher resolution and more detail to spare when doing things like shooting at high ISO, or cropping heavily.
> 
> ...



I thought with Foveon-like you'd actually get higher effective resolution and detail, because no de-Bayering needs to occur. Plus, no anti-aliasing filter is generally needed. Granted, based on Wikipedia, Sigma's quoted resolution doesn't add up to the same spatial resolution (the number of actual pixel locations), since they count every photosite toward the megapixel figure, even those stacked within the same pixel location, tripling the number.

That said, based on the Wikipedia article, a Foveon sensor likely would outperform a slightly higher spatial resolution Bayer-pattern sensor in general.

If Canon has improved QE significantly, added individual color response for each photodiode, and improved the read-out times, it might make for awesome photos, even if the actual spatial resolution stays at its current value, or even decreases slightly.
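The 3x photosite counting works out like this. A minimal sketch in Python, where the ~15.4 MP spatial figure for Sigma's SD1-generation sensor is my assumption from public specs, not from the thread:

```python
def marketed_megapixels(spatial_mp: float, layers: int = 3) -> float:
    """Photosite count a stacked (Foveon-style) sensor can quote:
    every layer's photodiode is counted, even though the stacked
    photodiodes share a single spatial pixel location."""
    return spatial_mp * layers

# ~15.4 MP of actual pixel locations gets marketed as ~46 MP
print(round(marketed_megapixels(15.4), 1))
# A "literal 40mp" layered sensor would hold 120 million photosites
print(round(marketed_megapixels(40.0), 1))
```

Spatial resolution (pixel locations) is what sets resolved detail; the 3x figure only reflects how much color information is captured per location.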


----------



## Mt Spokane Photography (May 1, 2014)

Canon has several patents relating to Foveon-like sensors that claim to bypass the problems with the Sigma cameras and their Foveon sensor. If they can eliminate the issues in the technology, I'd go for one.

One issue with digital cameras is color accuracy and depth. It's not bad, but it can be greatly improved.

Once again, video at the professional level would be the biggest beneficiary.


----------



## TAF (May 1, 2014)

Canon Rumors said:


> We’re told by a few other people that Canon is working on a “Foveon-like” sensor for their next generation of full frame cameras. The goal is to have the new sensor tech in their next “prosumer” camera, so perhaps that means a new large-megapixel camera with a smaller body than an EOS-1, or the EOS 5D Mark IV in 2015.
>
> More to come…
>
> **cr**



Uh-oh. Foveon like 5D4? My bank account trembles.

Assuming, of course, that we get excellent high ISO, fast readout (effectively frames per second), and even better autofocus.


----------



## EchoLocation (May 1, 2014)

After using DSLRs for years, I can say that they are simply too big for me.
If Canon wants me to buy one of their products in the future, then they should focus on making a FF camera of the A7 ilk.
I use my EOS-M a lot and love it, except that it is rather poor in low light and has no flash and no viewfinder. If Canon wants me to consider buying their products in the future, instead of just reading about them, they will have to solve these issues.
This 5DIV tech sounds cool, but in my case (where size matters quite a bit), DSLRs are way more camera than I need on a regular basis.


----------



## jrista (May 1, 2014)

ScottyP said:


> I think they lost me at "Foveon-like".
> 
> So it will have all the negatives of a high MP camera, like massive files to store, and a slowed FPS, and a faster-clogging buffer, but none of what you actually want from all those MP's, namely higher resolution and more detail to spare when doing things like shooting at high ISO, or cropping heavily.
> 
> ...



You're making a LOT of assumptions. The "negatives" of high-MP cameras can be mitigated. With on-die CP-ADC (which Canon does have a patent for), they can dramatically improve readout speed (they already proved they could read out a 120mp APS-H sensor at 9.5fps). With CFast 2 technology, we'll have faster writes to memory, so the buffer won't necessarily be a problem. With Foveon, we get full color information at every single pixel, full spatial information, we no longer need AA filters as strong as is usually necessary with Bayer, etc.
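As a rough sanity check on those numbers, here's a back-of-the-envelope sketch of the data rates a 120mp sensor at 9.5fps implies. The 14-bit RAW depth and packed-frame assumption are mine, not from the thread:

```python
MEGA = 1e6

def pixel_rate(megapixels: float, fps: float) -> float:
    """Pixels per second the on-die ADC chain must digitize."""
    return megapixels * MEGA * fps

def raw_frame_mb(megapixels: float, bits_per_pixel: int = 14) -> float:
    """Uncompressed RAW frame size in MB (packed, no overhead)."""
    return megapixels * MEGA * bits_per_pixel / 8 / MEGA

# The 120mp APS-H demo read out at 9.5fps:
rate = pixel_rate(120, 9.5)    # pixels/s the ADCs sustain
frame = raw_frame_mb(120)      # MB per 14-bit frame
stream = frame * 9.5           # MB/s the buffer/card path would see
print(f"{rate:.3g} pixels/s, {frame:.0f} MB/frame, {stream:.0f} MB/s")
```

That works out to over a billion pixel conversions and nearly 2 GB of raw data per second, which is why fast on-die ADC and fast media (CFast-class) go hand in hand.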

Sigma's failure is that they market their product with lies and misleading information, and their bodies/firmware have never been very good (in comparison to Canon and Nikon bodies, anyway). Basing the prospects of ALL layered sensor designs on Sigma's success is a fallacy.

Fuji's 6x6 pixel interpolation is just another way of blurring high-frequency data, only it is LESS effective than a standard AA filter. I covered this in great detail in a long topic a while back, and the impact of the 6x6 pixel interpolation is quite obvious when comparing fine detail (i.e. hairs, telephone wires, etc.) between Fuji's X-Trans sensor and pretty much any Bayer sensor.

I couldn't care less about what technology "sounds" more interesting. I care about what technology DELIVERS better results. Canon is a very conservative company...if they are going to move to a Foveon-like sensor design, then they must have solved some of the more significant problems that Sigma has encountered, and made it a viable design. They wouldn't bet on it if they hadn't. And the chances they HAVE solved many of those problems are very high: Canon has a couple of patents on layered Foveon-like pixels that use a different structure both for the photodiodes themselves and for readout. Throw in their patents for on-die per-column dual-rate ramp ADC, and Canon could have a real powerhouse sensor in development that could give the competition a run for the money...especially if it hits at a literal 40mp (i.e. 120 million photosites in 40 million actual pixels, not a trumped-up 40mp like Sigma's Foveon).


----------



## Nitroman (May 1, 2014)

Canon ... please stop f*rting about and just give me a higher megapixel camera !

My 21Mp 1Ds3 is six years old, tired and itching to be replaced. 

We've waited long enough ...


----------



## Quackator (May 1, 2014)

The patent for the sensor was posted here in May last year, and my opinion is that the (working!) engineering sample of a 120 MP APS-C body that Canon exhibited was the first time we actually saw this sensor.

I expect at least a development announcement at Photokina, if not a working sample of at least one new camera with this technology.

No, the technology does not have Foveon's problems, as can be seen when you compare both patents.


----------



## CarlTN (May 1, 2014)

ScottyP said:


> I think they lost me at "Foveon-like".
> 
> So it will have all the negatives of a high MP camera, like massive files to store, and a slowed FPS, and a faster-clogging buffer, but none of what you actually want from all those MP's, namely higher resolution and more detail to spare when doing things like shooting at high ISO, or cropping heavily.
> 
> ...



You certainly are missing out; obviously you've never owned a Foveon. But I agree, if Canon ever builds one of these sensors into one of their cameras, absolutely NO ONE will buy it. Zero. All Bayer-array fanboys unite! I pronounce this future Canon camera an abysmal failure, based on the fact that I am proud of my Foveon prejudice! No real photographer would ever use one!


----------



## geonix (May 1, 2014)

If true, this news is another hint not to expect too much from a 7D successor.
The 7D successor will have a sensor not yet used in the EOS line, but it will not be "brand new EOS technology"? What does that mean? They will use a Sony sensor? ;D YES !!!


----------



## jonjt (May 1, 2014)

Just as I'm trying to justify an upgrade to a refurbished 5DIII, this rumor happens. I'll definitely have to wait and see what the 7DII brings.


----------



## Don Haines (May 1, 2014)

jrista said:


> ScottyP said:
> 
> 
> > I think they lost me at "Foveon-like".
> ...


+1


I like how people can passionately ridicule or dismiss a new technology sight unseen.....


----------



## traveller (May 1, 2014)

Nitroman said:


> Canon ... please stop f*rting about and just give me a higher megapixel camera !
> 
> My 21Mp 1Ds3 is six years old, tired and itching to be replaced.
> 
> We've waited long enough ...



I've got a friend in the same position: he likes the 1-series bodies and now wants to upgrade to higher resolution and better noise control. I believe that Keith Cooper over at Northlight Images is also waiting... Neither find the 1D X the right solution, as they are unwilling to spend the money on a camera that doesn't improve on the resolution they get from the 1Ds MkIII. 

People assume that their needs are the same as everyone else's and that people who want a high resolution body would prefer a smaller camera. For some, this is indeed the case, but I also believe that there is a significant proportion of 1Ds owners that are happy with their cameras' configuration. How many of these will continue to wait if Canon further delays a replacement, and how many will be tempted to migrate to the likes of the Pentax 645Z?


----------



## Lightmaster (May 1, 2014)

http://www.luminous-landscape.com/reviews/cameras/sigma_sd1_review.shtml




> What We See
> 
> There has been a lot of nonsense promulgated over the so-called 3D qualities of Foveon / X3 images. I now understand (I think) what people have been talking about, but there really is no magic involved. There are also some issues that are relevant to the current version of SPP software.
> •The SD1 does not have a blurring (anti-aliasing) filter. When used with a very good lens this allows extremely fine micro-detail to be recorded, creating prints and on-screen images (sometimes) with a feeling of greater depth and dimensionality. This isn't unique to the SD1 or other Foveon / X3 cameras because it isn't a function of this sensor technology; it is simply a result of not having a softened image caused by an AA filter. This is also seen with the Leica M8 / M9, which similarly do not have an AA filter, and which many users claim have a comparable 3Dish quality to their files. Indeed the absence of an AA filter is part of the appeal of medium format cameras and backs, and in the above comparison series is seen as well with the Pentax 645D.
> ...







> Direct Colour vs. Colour Filter Array
> 
> Other than sensors that use Foveon X3 technology all sensors (CCD and CMOS) use what is called a Bayer Matrix so as to be able to reproduce colour. Silicon photo sites are not able to record colour directly, and so various Filter Array technogies have been developed to make this possible. A Bayer matrix is by far the most common, and is used in virtually every camera on the market, from the smallest point-and-shoot to the largest medium format backs.
> 
> ...


----------



## scyrene (May 1, 2014)

Someone asked what would induce a 5D3 owner to upgrade?

Maybe I'm more relaxed than most people here seem to be, because my two desires - more megapixels and better high ISO performance - are almost certain to be delivered, simply following past progression and what pretty much every other manufacturer has done (my bird work almost always requires cropping, and rarely allows shooting below ISO 400). That's just me, I understand why others' needs differ.

A minor concern but it would be nice to see a higher rated shutter life - I take a lot of photos and have always had that niggling worry at the back of my mind, although replacing the shutter isn't too much of a problem. But the difference between 150,000 and 400,000 is massive.

I don't actually care what sensor technology Canon uses, just the end results (sorry).


----------



## bbasiaga (May 1, 2014)

I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!

:


----------



## scyrene (May 1, 2014)

bbasiaga said:


> I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!
> 
> :



High ISO is often portrayed - even if only jokingly - like that, but a lot of us wildlife folks (outside of the sunny tropics) could make good use of clean shots in the 12800-25600 range, easily.


----------



## jrista (May 1, 2014)

scyrene said:


> bbasiaga said:
> 
> 
> > I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post this will be a total fail. Its 2014 Canon!
> ...



At some point I suspect the quantum efficiency of sensors will reach the 70-80% level (at least, I hope it happens someday). Once it does, we can expect a *real world* improvement of about 2x for high ISO settings. To get any better than that, we would need larger sensors.


----------



## scyrene (May 1, 2014)

jrista said:


> scyrene said:
> 
> 
> > bbasiaga said:
> ...



I'd settle for another stop - actually roughly what the 1Dx can do (from what I've seen) but with more megapixels for cropping. I once read an article about someone using medium format for bird photography, but I don't think that would be practical for most people, given the extra size lenses would have to be for the same reach (*unless* the extra MP allowed for so much cropping as to cancel it out).


----------



## jrista (May 1, 2014)

scyrene said:


> jrista said:
> 
> 
> 
> ...



The 1D X does not even get one full stop. It's largely a perceptual thing regarding how good the 1D X looks; technically, the 1D X is only a fraction of a stop better, and the 5D III when downsampled gets similar results (not quite as good, due to less total photodiode area).

When it comes to smaller pixels and croppability, you're going to lose high ISO noise performance. I've mentioned this in other topics, but overall, high ISO performance is fundamentally down to total sensor area and quantum efficiency. It is a higher Q.E. and a larger sensor area that make the 1D X better in the long run, not its pixel size. Once you bring cropping into the picture, especially with smaller pixels, you start to experience worsening high ISO performance. You're not only putting fewer pixels on subject, you're using a smaller area of the sensor, which means less total light for your subject.

There really isn't any way that a FF sensor with smaller pixels will produce better results than a FF sensor with bigger pixels. It will have more detail, but per-pixel noise will be higher, so cropping means more noise. Cropping a 1D X means less per-pixel noise, but also less detail. It's a tradeoff...low noise, or more detail. For any given sensor area, the only way to improve noise performance is to improve Q.E. The 1D X has 47% Q.E., which means that to actually double high ISO noise performance with the 1D neXt, you would need 94% Q.E. The 5D III actually has 49% Q.E., which means you would need 98% to double its noise performance. That's not going to happen...not with consumer-grade devices. The highest-grade astro CCD sensors, Grade 1 parts with 82% Q.E. or more, are exceptionally expensive. They also require significant cooling (usually two- or three-stage thermo-electric Peltier cooling), which requires SIGNIFICANTLY more power than a DSLR normally draws.
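The "double the Q.E. to double the noise performance" arithmetic above can be put in stops. A minimal sketch, treating a Q.E. gain as an exposure-equivalent light gain (the shot-noise-limited view; the specific Q.E. pairings below just replay the figures quoted in this post):

```python
import math

def qe_gain_stops(qe_old: float, qe_new: float) -> float:
    """Exposure-equivalent gain, in stops, from a Q.E. improvement:
    doubling Q.E. collects twice the photoelectrons, i.e. one stop."""
    return math.log2(qe_new / qe_old)

print(qe_gain_stops(0.47, 0.94))            # 1D X: needs 94% for a full stop
print(round(qe_gain_stops(0.49, 0.98), 2))  # 5D III: likewise 98%
print(round(qe_gain_stops(0.47, 0.82), 2))  # Grade 1 astro CCD vs. the 1D X
```

Even the jump from 47% to an exotic 82% Q.E. buys well under a full stop of light, which is the point being made here.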

Hopes of a super high resolution sensor that performs as well as or better than a 1D X when cropped are just a pipe dream. It will resolve more detail, but that detail will be noisier, not less noisy.


----------



## NancyP (May 1, 2014)

I am both a Canon and a Sigma X3F (Foveon) user, and there are good arguments for both sensors. There is no question that the Sigma CAMERAS are deficient in many areas, some not related to the sensor's demands, but for specialized uses, mostly landscapes, the Sigma Foveon cameras have unique qualities that make it worthwhile to put up with the annoyances. Canon cameras are good all-around cameras; Sigma cameras are specialty cameras.

The Sigma Foveon sensors are perceptually sharper, per pixel, than the Canon Bayer sensors: the 15 Mp APS-C DP#M sensor is (slightly) sharper than my FF Canon 6D 20 Mp Bayer sensor, and users of both the Sigma and the Sony/Nikon FF 24 Mp sensor rank them as similar, with the Sony sensor minimally better in sharpness.

The major rendition difference in the Foveon and Bayer sensors in the current iterations is that there is a certain color subtlety in the Foveon sensor RAW files, sometimes called "film-like", that is not present in the Bayer sensor RAW files. Low local contrast, hue-restricted areas are considerably more detailed on the Foveon sensor files than the Bayer sensor files. 

To my mind, the combination of color subtlety and acutance is the one and only reason to go for the Foveon sensor. Foveon sensors excel at landscapes and floral portraits, and are "too sharp" for most portrait use - one is likely to need to do more blemish-removal post-processing than for Bayer sensor files.

Canon Bayer sensors: very well developed computational protocols mean fast high-throughput processing. Great for action because you can get very high still frame rates and you can pack more frames into the buffer. 
Sigma Foveon sensors: fewer generations of computational protocols, new ones being tested, current (non-Quattro) generation is s-l-o-w, still frame rate is maybe 3 fps, and it takes several seconds to clear a full (7 photos at ~55 Mb each for the 15 Mp DP#M/ SD1M sensors) buffer.

Canon Bayer sensors: nearly "infinite" post-processing software options that work with RAW files. Seamless integration of your RAW developing program with external programs and plug-ins.
Foveon sensors: 2 RAW processors, Sigma Photo Pro and Iridient Developer. If I want to make a panorama, I need to do the RAW color / contrast / exposure adjustments in Sigma Photo Pro, then export as a 16 bit tif into my pano program. Ditto for HDR program. SPP gives good results but lacks some exceedingly simple local editing maneuvers such as cropping (!). On the other hand, SPP monochrome mode makes gorgeous B&W images from your color-adjusted (fake filter) RAW files.

I have two parallel workflows, and this is a PITA. I have two parallel file trees, one for Bayer, one for Foveon files. I use Lightroom as my organizer and RAW developer, and LR does not recognize (Adobe likely NEVER will recognize) the X3F Foveon files. So, if I want to catalog my Sigma images, I have to export a small jpg next to its parent file, and acquire that proxy jpg in LR. Then, I can score, tag, keyword, etc., but in order to work with the X3F RAW file, I have to leave LR and manually go to the physical location of the X3F file, launch SPP, etc. 
P-I-T-A!

I will be lining up for the Canon 7D2 for bird photography (replacing 60D), and will be very interested in the Canon Foveon-like FF sensor.


----------



## jrista (May 1, 2014)

NancyP said:


> I am both a Canon and a Sigma X3F (Foveon) user, and there are good arguments for both sensors. There is no question that the Sigma CAMERAS are deficient in many areas, some not related to the sensor demands, but that for specialized uses, mostly landscapes, the Sigma Foveon cameras have unique qualities making it worthwhile to put up with annoyances. Canon cameras are good all-around cameras, Sigma cameras are specialty cameras.
> 
> The Sigma Foveon sensors are perceptually sharper, per pixel, than the Canon Bayer sensors, 15 Mp APS-C DP#M sensor is sharper (slightly) than my FF Canon 6D 20 Mp Bayer sensor, and users of both Sigma and Sony/Nikon FF 24 Mp sensor rank them as similar, with the Sony sensor minimally better in sharpness.



This is a complete fallacy, and easily demonstrable with actual images. I've disproven this concept many times on these forums recently. I'm happy to use your own images, even, but more pixels, even with an AA filter, still lead to greater sharpness. The difference becomes clear when you downsample, say, the 6D 20mp images to the same image dimensions as the Foveon sensor's. Even when you're talking about a 15mp (what Sigma calls a 46mp) Foveon, on a normalized basis it isn't as sharp as a Bayer.

Foveon's strengths are not in the spatial resolution/sharpness category. They are in the color fidelity and moire departments.



NancyP said:


> The major rendition difference in the Foveon and Bayer sensors in the current iterations is that there is a certain color subtlety in the Foveon sensor RAW files, sometimes called "film-like", that is not present in the Bayer sensor RAW files. Low local contrast, hue-restricted areas are considerably more detailed on the Foveon sensor files than the Bayer sensor files.



This is indeed where the Foveon's strengths truly lie...in color fidelity...richness, saturation, rendition, etc. That's expected, given that each pixel has full color data. 



NancyP said:


> To my mind, the combination of color subtlety and acutance is the one and only reason to go for the Foveon sensor. Foveon sensors excel at landscapes and floral portraits, and are "too sharp" for most portrait use - one is likely to need to do more blemish-removal post-processing than for Bayer sensor files.



Compare a full-resolution D800 (non-E) image with a Foveon image. The D800 will trounce the Foveon (even the 15mp version) in terms of sharpness. The Foveon cannot touch "too sharp"; the D800 is often so sharp that aliasing becomes a problem (even with the non-E version).


----------



## LetTheRightLensIn (May 1, 2014)

jrista said:


> ScottyP said:
> 
> 
> > I think they lost me at "Foveon-like".
> ...



We can hope. There's a LOT to have overcome, but if they have, while keeping DR high with that design... it could be that the Exmor folks are suddenly looking over in envy at Canon sensors for some years to come.


----------



## jrista (May 1, 2014)

LetTheRightLensIn said:


> jrista said:
> 
> 
> > ScottyP said:
> ...



Yeah, it's definitely a LOT to overcome. There is no question at this point that Canon is behind the curve on sensor tech. I watch patents pretty closely these days, and Canon is practically non-existent in the new patent arena. Now, that is not to say that they don't have any. They do. They have dual-scale CP-ADC (basically a CP-ADC with two alternate readout rates: when the exposure time is long enough to allow it, they switch to a lower-frequency counter, since slower readout usually results in cleaner read noise). They have a few patents on layered, Foveon-style pixels. They have a number of patents on low-noise readout concepts, such as power disconnection (supposedly that can nearly eliminate dark current noise...I'm much more interested in that for astrophotography applications, but it could help a little for very high ISO readout as well).

The big question is, given Canon HAS these patents...will they actually use them in a sensor, and when.


----------



## scyrene (May 1, 2014)

jrista said:


> scyrene said:
> 
> 
> > jrista said:
> ...



Well that's depressing. How much would you say future improvements in software noise reduction will improve the final output?


----------



## jrista (May 2, 2014)

scyrene said:


> jrista said:
> 
> 
> > scyrene said:
> ...



Software is a difficult thing to discuss. The biggest reason why is: which software? There are countless ways of, and countless algorithms for, reducing noise. There are your basic averaging/blurring algorithms, your wavelet algorithms, your deconvolution algorithms, etc. Some denoising tools are more complex, and thus more difficult to use effectively, but when used effectively they can produce significantly better results. Some denoising tools are extremely simple, but don't produce results as good.

Fundamentally, though, pretty much every algorithm suffers from the same core problem, to varying degrees: they blur detail. Your most basic denoising algorithm takes high-frequency data and blurs it by a certain amount. For each pixel, it takes some component of the surrounding pixels, generates an averaged result (with some given weight, usually attenuated by a UI control somewhere), and replaces the original pixel value with the weighted average value. Do that for each and every pixel, and each and every pixel ends up blended with its neighboring pixels. There are varying kernel sizes, i.e. 3x3, 6x6, that can be used when performing a very basic noise reduction, which spread the effect out more or less.
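The basic weighted-average denoise described above can be sketched in a few lines of Python (pure lists, grayscale values; `strength` plays the role of the UI control mentioned):

```python
def box_denoise(img, strength=0.5):
    """Blend each pixel with the mean of its 3x3 neighborhood.
    strength=0.0 leaves the image untouched; strength=1.0 replaces
    each pixel with the plain neighborhood average (maximum blur)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            # Gather the 3x3 neighborhood, clamped at the image borders
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            mean = sum(vals) / len(vals)
            out[y][x] = (1 - strength) * img[y][x] + strength * mean
    return out

# A lone hot pixel in a flat field gets pulled toward its neighbors,
# but real edge detail would be smeared by exactly the same mechanism.
flat = [[10, 10, 10], [10, 90, 10], [10, 10, 10]]
print(box_denoise(flat, strength=1.0)[1][1])
```

The same averaging that suppresses the hot pixel is what erases fine detail, which is the core tradeoff of every blur-based denoiser.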

Wavelets and deconvolution tend to be more intelligent about how they reduce noise. They either try to generate a "kernel" based on the information in the image, or try to break the image up into multiple spatial-frequency levels and apply different degrees of noise reduction to each wavelet level, in an attempt to preserve certain frequencies while blurring others, with the ultimate goal of preserving detail. The problem with these algorithms is that, while they can reduce noise without blurring detail as much, they often suffer from greater artifact introduction: halos, excessive acutance, blotching, things like that.

Noise reduction is best applied in extreme moderation, in which case it will always have very significant limitations. It can only take you so far, and the less noisy your images start out, the better the results will be. This is one of the reasons the "low" resolution images from the 1D X clean up so well...1D X pixels start out with significantly more dynamic range than sensors with smaller pixels, so there is less per-pixel noise to start with, and a minimal amount of NR is perceived as being more effective. (It really isn't...there was less noise to start with, so less noise to remove, and a small amount of NR has a greater relative effect than with images that start out noisier. In other words, to ridiculously simplify things down to simple numbers: if a 1D X has noise of 7, a 5D III has noise of 12, and you reduce noise by 5, the 1D X is left with noise of 2, whereas the 5D III is left with 7...as bad after NR as the 1D X was before NR.)

Noise reduction algorithms are already extremely powerful and extremely intelligent. I recently purchased software called PixInsight, which is primarily an astrophotography processing program, but its tools can be used on regular photos as well. It has a whole suite of noise reduction tools that work in different ways. Depending on the kind of noise you have, and the region of your image that you wish to denoise, PixInsight's noise reduction tools can be more effective than any other tool...but as advanced as they are, they are still not perfect. Wavelets still introduce mottling and blotching, deconvolution can still introduce halos, median sharpening and denoise can still introduce sparkles and panda eyes, etc.

The best way to reduce noise is to increase the rate of conversion of light to charge in a pixel, increase the maximum charge of each pixel, increase the total maximum charge of the sensor, etc. The more light you can convert into charge in a given time, the less noise you will have. I don't expect to see a major jump in Q.E. any time soon...I suspect Canon's next round of sensors will be around 51-53%, maybe 56% at most, up from the current 47-49%. That will certainly help in the noise department, but it is nowhere even remotely close to supporting a true one-stop improvement in noise. It's less than a third of a stop (barely a tenth of a stop, in fact). Elimination of color filters in favor of color splitting, a reduction in heat conversion (i.e. with light pipes or BSI), reduction in reflection (i.e. with black silicon), etc. can all increase the rate at which photons convert to charge, and increase Q.E. These technologies exist, and lots of patents exist, however I don't see any patents for these specific kinds of technology from Canon, so I don't expect them to show up in Canon's next sensor designs. A layered sensor is capable of converting more light to charge per pixel, however that charge is divvied up amongst different color channels, so its effectiveness is attenuated...a Foveon-like design from Canon is a step in the right direction, but I don't expect the impact on noise to be all that much (and we'll see a shift in which color channels are noisiest...instead of blue being noisiest, red is likely to become noisiest, green will become noisier, and blue would likely see a modest drop in noise levels).
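To put those Q.E. figures in perspective, here's the trivial stop calculation (the Q.E. numbers are the speculative ones from the post above, not measurements):

```python
import math

def stops_gained(qe_old, qe_new):
    """Improvement in light-to-charge conversion, expressed in stops
    (doublings of signal). Going from 50% to 100% Q.E. would be one stop."""
    return math.log2(qe_new / qe_old)

# Speculative figures from the post:
print(round(stops_gained(0.49, 0.53), 3))  # ~0.113 stops
print(round(stops_gained(0.49, 0.56), 3))  # ~0.193 stops
print(round(stops_gained(0.50, 1.00), 3))  # 1.0 stop, at an unrealistic 100% Q.E.
```

The point of the arithmetic: even the optimistic 49% to 56% jump buys well under a quarter stop, which is why Q.E. alone can't deliver a one-stop noise improvement.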


----------



## pedro (May 2, 2014)

scyrene said:


> bbasiaga said:
> 
> 
> > I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post this will be a total fail. Its 2014 Canon!
> ...



I'll second that, although I am not into birding. Give me a clean ISO 25k and I am a happy camper ;-) Low light is my passion! But as jrista said: a lower pixel count is the basis for better high ISO. So hopefully they keep to at least the current 22.3 MP; if not, I guess we're in the same boat again despite new sensor tech. In that case a 6D looks much more attractive to me in the next cycle, as long as they keep the same MP count...I wouldn't consider it a downgrade, as I am only looking for the best affordable low light/high ISO IQ.


----------



## scyrene (May 2, 2014)

jrista said:


> scyrene said:
> 
> 
> > Well that's depressing. How much would you say future improvements in software noise reduction will improve the final output?
> ...



I'll look into that software, thanks for the tip! And you've given me increased respect for what noise reduction is doing - it sounds hugely complicated. I know nothing about programming, but I wonder how intelligent it could be made - my eye can tell what is noise and what is detail by parsing the scene, knowing what the photograph is *of*. I wonder if machine intelligence can move in that direction? Even if it was just a matter of cases - telling it 'this area is feathers, so expect lots of fine linear detail' etc. Asking a lot, no doubt 

I think the conclusion is at this point, have a large megapixel camera for good light (for cropping), and a lower-MP camera with better low light noise for dusk and dawn (I'm intrigued by the A7s in this regard), and accept I won't have the same reach 

(I should stress I think the current technology is still amazing).


----------



## NancyP (May 2, 2014)

Jrista, I was comparing the DP2M to the 6D - 15 Mp physical pixels (which Sigma has called "46 Mp" in the past) to 20 physical Bayer Mp. In low-ISO situations, the DP2M does look a tad sharper than the 6D, despite the 5 Mp disadvantage. I attribute this to the color fidelity, because my subjects are generally landscapes with subtle color variation in leaves, grasses, etc. It is not a 100% fair comparison, because my current 50mm lens is a pre-computer-design manual AIS Nikkor 50mm f/1.2 used on an adapter on the 6D, which does look pretty darn sharp at f/4 to f/5.6 and still sharp at f/8. The DP2M's fixed Sigma 30mm f/2.8 lens (45mm equivalent) at the same f-stops looks sharper, but then again, the lens is 30 years younger. The real test would be the new Sigma 50mm f/1.4 Art - new design, no adapter, best affordable lens resolution-wise for the EF mount.


----------



## jrista (May 2, 2014)

NancyP said:


> Jrista, I was comparing DP2M to 6D - 15 Mp physical pixels (which Sigma has called "46 Mp" in the past) to 20 physical Bayer Mp. In low-ISO situations, the DP2M does look a tad sharper than the 6D, despite the 5 Mp disadvantage. I attribute this to the color fidelity, because my subjects are generally landscapes with subtle color variation in leaves, grasses, etc. It is not 100% fair comparison because my current 50mm lens is a pre-computer-design manual AIS Nikkor 50mm f/1.2 used on an adapter on the 6D, which does look pretty darn sharp at f/4 to f/5.6 and still sharp f/8. The DP2M's fixed Sigma 30mm f/2.8 lens (45 mm equivalent) at same f stops looks sharper, but then again, the lens is 30 years younger. The real test would be the new Sigma 50mm f/1.4 Art - new design, no adapter, best affordable lens resolution-wise for the EF mount.



Thanks for the additional facts! That's always helpful when trying to understand things like this.

One of the things photographers don't quite understand, possibly due in no small part to Sigma's marketing of Foveon, is that DSLRs have full luminance data...they only really suffer a loss in color resolution, and therefore color fidelity. Standard Bayer interpolation uses 2x2 matrices of RGBG sensor pixels, in overlapping fashion, to produce RGB pixels in an image rendered to screen or saved to a file (i.e. TIFF). Effectively, the dimensions of an image from a Bayer sensor are a count of the intersections between 2x2 pixel matrices on the sensor. This tends to cost you a little luminance spatial resolution and a fair bit of chroma spatial resolution, and is prone to artifacts like stair-stepping.
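The overlapping-2x2 idea can be sketched like this (a deliberately naive illustration, not AHD or any camera's actual pipeline; the function name and the RGGB layout are assumptions):

```python
import numpy as np

def demosaic_2x2(mosaic):
    """Naive overlapping-2x2 demosaic of an RGGB Bayer mosaic. Each output
    pixel is built from one 2x2 window of sensels, so an H x W mosaic
    yields an (H-1) x (W-1) RGB image -- the 'intersections' of the
    overlapping 2x2 matrices."""
    h, w = mosaic.shape
    out = np.zeros((h - 1, w - 1, 3), dtype=float)
    for y in range(h - 1):
        for x in range(w - 1):
            win = mosaic[y:y + 2, x:x + 2]
            for dy in range(2):
                for dx in range(2):
                    sy, sx = y + dy, x + dx
                    # RGGB pattern: R at (even,even), B at (odd,odd),
                    # G at the two mixed-parity positions.
                    if sy % 2 == 0 and sx % 2 == 0:
                        out[y, x, 0] = win[dy, dx]          # red
                    elif sy % 2 == 1 and sx % 2 == 1:
                        out[y, x, 2] = win[dy, dx]          # blue
                    else:
                        out[y, x, 1] += win[dy, dx] / 2.0   # average the two greens
    return out

# A flat gray scene (all channels equal) survives the round trip intact;
# real scenes lose chroma resolution because each color is sampled sparsely.
flat = np.full((6, 8), 100.0)
rgb = demosaic_2x2(flat)
print(rgb.shape)  # (5, 7, 3)
```

Every 2x2 window contains exactly one red, one blue, and two green sensels, which is why each output pixel gets a full RGB triple but the chroma samples are shared between neighbors.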

More advanced algorithms, like AHD, or Adaptive Homogeneity-Directed demosaicing, aim to maximize the luminance detail in each individual pixel (using the luma information directly, rather than interpolating it), while concurrently interpolating chroma data from neighboring pixels in such a manner that chroma spatial resolution is maximized and stair-stepping and other artifacts are eliminated. Lightroom, Apple Aperture, RawTherapee, and darktable all use/support AHD, which means that, generally speaking, demosaiced RAW images have nearly the full resolution of modern Bayer sensors.

A sharper lens used with the 6D, when demosaiced with something like Lightroom, will produce superior sharpness compared to the Foveon (even the 15mp Foveon). It will have lower color fidelity, but because of the higher resolution luminance detail, that won't matter all that much. Color depth on a Bayer sensor can be extremely high. Canon sensors don't have the best color fidelity, but Sony Exmor sensors have very high color fidelity.


----------



## Surfwooder (May 2, 2014)

The last real news about the 7D II was that Canon was having problems manufacturing the new sensor for it. Since then, nothing. There is no need to further discuss this new camera's engineering; the discussion should focus on the production problems.


----------



## Drizzt321 (May 2, 2014)

arco iris said:


> Nothing indicates that Canon have a new sensor ready.
> I do not understand all the rumors that are flourishing.
> That a Foveon-like sensor is to be launched is a joke, what is the probability that Canon could do anything better than the five major sensor manufacturers? Sony alone has over 50% of the whole world wide sensor market.
> 
> ...



And we have lots more processing power available to the average user, even in camera, to say nothing of the systems that are available for design simulations. I'm not saying it's automatically a solution, just pointing out that we have a LOT more CPU horsepower, to say nothing of a properly designed camera chip having a few specialist bits as ASICs.


----------



## jrista (May 2, 2014)

dilbert said:


> jrista said:
> 
> 
> > ...
> ...



"so it goes without saying"? LOL, love that. :

Oh, and...prove it! (I have already proven the opposite with visual examples on multiple occasions, so the burden of proof is on you this time.)


----------



## jrista (May 2, 2014)

dilbert said:


> jrista said:
> 
> 
> > ...
> ...



Because Canon's in-camera NR (excluding LENR), which only applies to JPEG, really sucks. It blurs the crap out of the data, so IMO it is not a viable option. It isn't particularly advanced, either, given the limited horsepower of Canon's DIGIC processors, so it can't be much more than a tweaked averaging algorithm anyway.


----------



## neuroanatomist (May 2, 2014)

dilbert said:


> How is it that in your treatise on software and noise reduction you didn't include the method used by Canon cameras?



Probably because he doesn't shoot JPG, which is the only situation where the method used by Canon cameras is relevant, and if you're shooting SOOC JPG images, then achieving the best IQ of which your camera is capable is certainly not your priority.


----------



## 3kramd5 (May 2, 2014)

arco iris said:


> ...what is the probability that Canon could do anything better than the five major sensor manufacturers?

1/6


----------



## x-vision (May 3, 2014)

Canon Rumors said:


> We’re told by a few other people that Canon is working on a “foveon like” sensor for their next generation of full frame cameras.



Any sensor expert can testify that Foveon is an impractical technology.

At the same time, Canon has its own 'dual pixel' tech, which has a lot of potential and room to improve. 
So, why would Canon go chasing Foveon when they already have a new, promising technology in-house?

This Foveon rumor is totally fake.


----------



## jrista (May 3, 2014)

x-vision said:


> Canon Rumors said:
> 
> 
> > We’re told by a few other people that Canon is working on a “foveon like” sensor for their next generation of full frame cameras.
> ...



Why would you say Foveon-type sensors are impractical? Sigma doesn't even rank within throwing distance of the radar, let alone on the radar, when it comes to sensor design and manufacture. Foveon's struggles have far less to do with it being "impractical", and more to do with the fact that it's Sigma, a company that doesn't have the resources to really bring Foveon to bear and make it as competitive as it has the potential to be.

Fundamentally speaking, gathering full color information at each and every pixel is a superior means of imaging with digital image sensors. It's a more complicated sensor design, requiring advanced techniques and the proper silicon-based compounds in both the substrate and the photodiodes in order to allow enough light to penetrate deeply enough to work. Given that you end up with three times as many photodiodes to read for any given pixel count, you need a faster readout system, and that generally requires higher frequency components to support a reasonable frame rate. Sigma just hasn't demonstrated that they have the R&D budget, the technological resources, or the prowess to develop the technology that would allow them to build a truly high resolution Foveon-style sensor with a high speed readout that doesn't junk the signal with a crap ton of read noise.

Canon, on the other hand...they very possibly DO have the resources to make a Foveon-style sensor both high resolution and a truly viable alternative to Bayer. I would actually bet on Sony as being the most likely to have the resources to do it, but Sony hasn't shown any interest, whereas Canon actually has patents for the technology.



> At the same time, Canon has its own 'dual pixel' tech, which has a lot of potential and room to improve.
> So, why would Canon go chasing Foveon when they already have a new, promising technology in-house?



Because DP tech has nothing to do with base image quality. DP adds an ALTERNATIVE option for performing autofocus, one that as of yet has not proven to be better than the tried and true approach of using a dedicated AF unit and a mirror. A Foveon-style sensor design, on the other hand, if done right - with new techniques to increase parallelism and readout rate without increasing read noise, improved pixel structure, and increased pixel count - has the potential to radically improve image quality.

So...the real question is, why *wouldn't* Canon pursue the technology?


----------



## Don Haines (May 3, 2014)

dilbert said:


> neuroanatomist said:
> 
> 
> > dilbert said:
> ...


I thought (assumed) that dark frame subtraction in JPEGs was done by taking a raw image, subtracting a raw dark frame, and then creating the JPEG. Wouldn't it be harder to do this by subtracting JPEGs?


----------



## neuroanatomist (May 3, 2014)

dilbert said:


> neuroanatomist said:
> 
> 
> > dilbert said:
> ...



Canon's in-camera LENR doesn't really remove noise (defined as such) from images, in many cases it actually _adds_ noise. It's effective at removing hot/stuck pixels, but that's about it.


----------



## jrista (May 3, 2014)

x-vision said:


> jrista said:
> 
> 
> > Because DP tech has nothing to do with base image quality.
> ...



I've been through every free Chipworks article they have ever published. They do not have any images of Canon's 70D sensor that show anything like a quad pixel. Their full teardown of the 70D sensor costs $16,000, so I highly doubt anyone on these forums has seen those documents, but I have little doubt they show a dual-pixel design, not a quad-pixel design.

I've also read Canon's actual patents on the technology, which show two separate photodiodes, not four. Even their revised patent, which includes high and low sensitivity halves, is still just HALVES, so there are only two photodiodes.

So sorry, but I'm calling bulls*it on this one.  Unless you can furnish an actual image of the actual sensor die itself that proves otherwise (and believe me, I spent a lot of time looking for that image after the 70D was released). I've actually found real sensor images for prototype DPAF sensors from Omnivision and Panasonic, which basically steal Canon's design...however, again, they clearly show two photodiodes per pixel, not four. (It also means Canon won't be the only company with DPAF technology in the relatively near future...so their potential advantage in this area will probably disappear. Omnivision's patents have DPAF pixels in 100% of the sensor pixels, vs. Canon's 80%.)

The term "dual PIXEL" is also very misleading terminology. A pixel is a single image element; in Canon's designs, each single pixel has a split photodiode. The photodiode exists _*below *_the microlenses and CFA. As such, there is nothing special that can be done to make it magically provide higher resolution or anything like that, and changing the design to actually allow higher resolution would mean giving up the split-photodiode AF functionality entirely.



x-vision said:


> So, what does this have to do with image quality?
> 
> The quad-pixel tech will eventually allow Canon to use non-bayer color filters.
> I can easily see them using dichromatic or polychromatic filters for each sub-pixel - and then deriving the overal pixel color from the values of the different sub-pixels.
> ...



This is the same old logical fallacy that everyone seems to be making. DPAF is not a magic bullet for higher image resolution. What happens if you turn a green _quad-pixel_ into green, red, blue, and, say, white (luma) "sub pixels"? You no longer have a quad pixel!! You have separate, smaller pixels with one quarter of the area...that's all! You probably also lose the ability to do focal-plane autofocus, as having a SINGLE microlens over the split photodiodes is essential to how focal-plane AF works. If you try to keep one microlens over a "quad pixel", then you have problems with distributing the right amount of light over each of the RGBW "sub pixels", which would increase noise.

There is nothing special here about this technology. It has ONE purpose: To support autofocus. That's it. Anything else, and you simply have smaller color pixels. 



x-vision said:


> Using non-bayer color filters has the advantage of both better resolution and better light sensitivity, as a bayer filter 'throws' away two thirds of incoming light vs one third for a dichromatic filter, for example.
> 
> In addition to allowing the use of non-bayer color filters, the quad-pixel tech also has the known ability that sub-pixels can be read at different ISO/amplification.
> MagicLantern has shown that Canon already implements this in the 5DIII - but they are not using it.
> ...



Again, this is a lot of wishful thinking. For one, it has NEVER been demonstrated that the pixel halves can be read at different ISO settings. Even if they could, I've debunked the notion that it would improve anything on several occasions...in the end, assuming you read one half at ISO 100 and the other half at ISO 800 or 1600, you have a net neutral result: you're jacking up the ISO on HALF the amount of light (since it's half the photodiode), which does NOT get you the same result as what Magic Lantern is doing with current Canon sensors (which uses a downstream amplifier, not the per-pixel amplifiers, to achieve their results...plus, they are using full pixels, not half pixels). I highly doubt that we will ever see Canon's DPAF technology used in such a way that it improves dynamic range. Not even from Magic Lantern.

Sensitivity has to do with total photodiode (or rather sensor) area and quantum efficiency. Splitting the photodiode does nothing to improve quantum efficiency...Q.E. is a fixed trait of the silicon, how the silicon is doped, and the design of the photodiodes. We're already at about 50% Q.E., so to gain ONE stop of improved ISO, we would need to double that to 100% Q.E. That isn't ever going to happen in a consumer-grade device.

Reducing the amount of light filtered is also a means of increasing the rate at which light fills the sensor, but again, total sensor area is the real limiting factor here. A reduction in filtering simply means you fill the sensor with charge faster. That lets you use a lower gain, however it also means that you could end up clipping your signal for any given exposure length...and the only way to avoid that is to increase the total sensor area (i.e. move from APS-C to FF, which is the only way to really improve dynamic range). Switching to some kind of dichroic filter still means you're using a filter, and still means you're losing light. You're actually losing about 50% of the light per pixel color, which isn't anywhere close to a two-stop improvement in high ISO (it's actually only half a stop at best).

The only real improvement on a Bayer sensor design is the use of MCS, or Micro Color Splitting. This concept replaces filtration entirely with light splitting, using a special prism-like component between the microlens and the photodiodes that exploits the diffractive nature of light to channel one part of the light downward and deflect the other part to the neighboring pixels. You end up with White-Red and White+Red light in this kind of sensor. MCS preserves nearly 100% of the light. This does indeed improve low light sensitivity, but again, the gain is at most one stop, and you still benefit from it most by using a larger sensor. And I doubt we could ever actually achieve 100% Q.E....the only sensors I know of that achieve higher than 70% Q.E. are Grade 1 CCDs, extremely expensive commercial and scientific grade devices, and even there the Q.E. curve generally peaks somewhere in the greens or yellows and falls off to around 50-60% for the other wavelengths.




x-vision said:


> So, the dual pixel tech has a lot to do with image quality.
> 
> Note that I have no inside info or anything like that.
> I'm just making informed guesses of where Canon might take this technology.



You're making wild assumptions, not informed guesses.  I spend a considerable amount of time reading actual sensor patents, reading everything Chipworks and Image Sensors World post, and while I do not have inside info, I do have a lot of references I can point to that back up my conclusions. 

Canon's dual "pixel" technology is not dual pixels. It is dual photodiodes within each pixel, and it is quite LITERALLY "dual", not "quad". Even if they had "quad" photodiodes, that still wouldn't improve image quality. The photodiodes are split below the microlenses and color filters, by necessity (that is required for the AF functionality to work). All you've really talked about is Canon making the pixels (the whole pixels, microlenses and filters and all) smaller...which really isn't anything special, and it precludes the option of having split photodiodes (which must sit below the microlens and CFA in order to function properly for AF purposes).


----------



## jrista (May 3, 2014)

dilbert said:


> jrista said:
> 
> 
> > dilbert said:
> ...



This doesn't prove anything. For one, the reviewer is highly biased. He talks about a resolution advantage for the Foveon in comparison to the M9, despite the fact that the M9 image CLEARLY suffers from camera shake blur and still has the resolution advantage. The guy is comparing 100% crops rather than normalizing the image size. The M9, for example, has about a 30% advantage in spatial resolution, meaning it should have been downsampled a bit. Once downsampled, any perception of a resolution advantage for the Foveon disappears entirely. (And, of course, if the guy had actually used an appropriately stable tripod to snap his images, the M9 would have trounced the Foveon hands down, with or without downsampling.)

The NEX is in the same boat as the M9...it has a significant true (spatial) resolution advantage over the Foveon. The reviewer was also pretty clear that he used a sucky lens on the NEX, but still used it anyway because he was more interested in framing parity (which is naive; you can achieve that by moving the camera). The NEX comparison had what appears to be an intentional handicap vs. the Foveon, not because the sensor isn't as sharp...but because a soft lens was used. Despite that, downsample the NEX image, and most of the resolution advantage of the Foveon disappears. Use an appropriately sharp lens, and the NEX will best the Foveon both at native size and downsampled.

The Sony RX100? MASSIVELY diffraction limited, as it's only a 1" sensor. The Fuji X100? The X100 sensor has never been one for the sharpest images. It uses shifted microlenses, which helps reduce the kind of vignetting that occurs when you place the lens so close to the sensor, but doesn't really do anything to improve resolution. The X100 is softer than pretty much any Canon camera except the Rebels.

And, finally, that article does NOTHING to prove that the DP2M offers more resolution than the Canon 6D.


----------



## jrista (May 3, 2014)

dilbert said:


> neuroanatomist said:
> 
> 
> > dilbert said:
> ...



Dark Frame Subtraction (LENR, Long Exposure Noise Reduction, which I explicitly excluded in my response) is not the standard JPEG noise reduction. LENR is user-togglable for either RAW or JPEG, and its sole purpose is to remove hot pixels. However, because of how LENR works, it actually tends to make deep shadow random noise worse. This is why astrophotographers generally do not use in-camera LENR, and instead take 30-50 "dark frames" which are then averaged together. The averaging greatly reduces the random noise component, to the point where it is practically non-existent, and makes the hot pixel information more accurate. This master dark frame is then subtracted from each light frame, which again is superior to using in-camera LENR.
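The stacking argument can be sketched numerically (a toy numpy simulation with made-up noise figures, not real camera data):

```python
import numpy as np

rng = np.random.default_rng(1)

HOT = (5, 7)        # location of a simulated hot pixel
READ_NOISE = 5.0    # per-frame random noise, in arbitrary units

def dark_frame(shape=(32, 32)):
    """Simulated dark frame: fixed hot-pixel signal + random read noise."""
    frame = rng.normal(0.0, READ_NOISE, size=shape)
    frame[HOT] += 200.0
    return frame

# Master dark: average of 40 darks. The fixed hot-pixel signal is preserved,
# while the random component shrinks roughly as 1/sqrt(N).
darks = np.stack([dark_frame() for _ in range(40)])
master = darks.mean(axis=0)

mask = np.ones(master.shape, dtype=bool)
mask[HOT] = False
print(darks[0][mask].std())  # ~5: one frame's random noise
print(master[mask].std())    # ~0.8: about 5 / sqrt(40)
print(master[HOT])           # ~200: hot pixel measured accurately

# Calibration: subtract `master` from each light frame. Subtracting a SINGLE
# dark (as in-camera LENR effectively does) would also remove the hot pixel,
# but would add that dark's full read noise to the light frame -- which is
# why LENR can make deep shadows noisier.
```

The numbers make the trade-off concrete: one dark frame carries the full read noise into the subtraction, while a 40-frame master carries only about an eighth of it.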


----------



## x-vision (May 3, 2014)

jrista said:


> I've been through every free Chipworks article they have ever published.



Hmm. Obviously not, because Chipworks has a free partial die photo of the 70D sensor: 
https://chipworks.secure.force.com/catalog/ProductDetails?sku=CAN-EOS-70D_Pri-Camera&viewState=DetailView

Take a careful look and consider the geometry of a dual-photodiode pixel: 
- you can have two rectangular photodiodes that form a square pixel
- or, you can have two square photodiodes that form a rectangular pixel
- finally, you can have two square photodiodes plus wasted space on the die that form a square pixel

Now, as I said, take a careful look at the partial sensor die and tell me if you see:
a) any rectangular features in this photo
b) any apparently wasted space

A partial die photo is certainly not a definitive proof.
It's a very good clue, though, that the 70D sensor is in fact using a quad photodiode design, not a dual one.
Again, just think of the geometry of a dual pixel design and make your own conclusions. 

As for the resolution of a non-bayer filter: I should have been more clear.
The 70D sensor is a Bayer sensor, where each pixel has a monochromatic R/G/B color filter. 
Thus, each of the four constituent photodiodes of that pixel lies under a single, common monochromatic filter - which happens to throw away 2/3 of the incoming light.

Now, imagine if each of the photodiodes had their own, individual color filters.
You still have a single pixel with a single microlens. 
Underneath, however, there are four individual color filters - one for each photodiode.
Here's the thing about the individual color filters: they don't have to be monochromatic R/G/B filters anymore.
Instead, you can use a combination of di/poly-chromatic filters, from which you can derive the overall pixel color. 
And instead of deriving a single R/G/B color, as in a Bayer sensor, you derive *all three* primary colors. 

In summary, if you have a single, monochromatic filter for the entire pixel, you can only get one color per pixel (either R, G, or B).
But if you use individual di/poly-chromatic filters for each photodiode, you can derive all three primary colors per pixel (R+B+G).
Plus, you have a more sensitive/efficient pixel, as di/poly-chromatic filters by definition are filtering-in more light than a monochromatic filter.

Hopefully I'm able to communicate the point.

Back to the topic of extra resolution: 
The increase in resolution comes from the fact that you have all three primary colors per pixel vs the single color per pixel in a Bayer sensor. 
Admittedly, the resolution increase is not all that big - but it's still an increase. 
(Sigma/Foveon fans will tell you that it is a significant increase 8) ).

Think about all those things. 
You seem to be dismissing the quad-photodiode tech - seemingly without fully realizing its potential.
If you believe that Foveon is better than Bayer, just consider that a quad-photodiode design with individual non-Bayer color filters (one per photodiode) is a better solution than Foveon.

Finally, even if you are still not convinced that the 70D sensor is a quad-photodiode sensor, consider that going from dual-photodiode to quad-photodiode is the next evolutionary step of this design - for all the reasons outlined above.

The simple fact is that a Bayer sensor throws away 2/3 of the incoming light. 
And the seemingly low-hanging fruit for improving sensor efficiency is to throw away less light.
Foveon is just one solution to the problem. There will be others soon.

Regards


----------



## jrista (May 3, 2014)

x-vision said:


> jrista said:
> 
> 
> > I've been through every free Chipworks article they have ever published.
> ...



The image you are referring to does not even have any of the dual pixels in it, assuming those are pixels at all. On the contrary, they look like readout pins in a land grid array, which would be on the BOTTOM of the sensor, the opposite side from where the actual pixels are. And assuming they are not readout pins, I would know a CMOS sensor pixel if I saw one...those are not even remotely close to what a CMOS pixel looks like. They don't even have microlenses or color filters; it's just wiring and bare silicon substrate. That image is from the outer periphery of the sensor die, which is usually riddled with power regulation transistors and other non-pixel logic. Canon's DPAF pixels are only in the center 80% of the part of the die that actually contains pixels...so even if the image WAS of pixels (which it is not), they wouldn't be DPAF pixels...they would be standard single-photodiode pixels.




x-vision said:


> Now, as I said, take a careful look at the partial sensor die and tell me if you see:
> a) anything rectangular features on this photo
> b) any apparently wasted space
> 
> A partial die photo is certainly not a definitive proof.



It isn't proof, because you are gravely mistaken about what that photo actually shows. There is even some kind of stamp on top of the electronics in the region of the photo that Chipworks has shared. You don't stamp the actual pixels...and usually such stamps are, again, on the back side or the very outer periphery of the sensor, not the side with the pixels. This photo is either of peripheral logic on the top side of the sensor, or of circuitry or pinning on the bottom side.



x-vision said:


> It's a very good clue, though, that the 70D sensor is in fact using a quad photodiode design, not a dual one.
> Again, just think of the geometry of a dual pixel design and make your own conclusions.



Again, you're completely misinterpreting what that image is.



x-vision said:


> As for the resolution of a non-bayer filter: I should have been more clear.
> The 70D sensor is a bayer sensor, where each pixel has a monochromatic R/G/B color filer.
> Thus, each of the four constituent photodiodes of that pixel lies under a single, common monochromatic filter - that happens to throw away 2/3 of the incoming light.
> 
> Now, imagine if each of the photodiodes had their own, individual color filters.



I don't need to imagine, as that is exactly what a sensor with split photodiodes WITHOUT DPAF or QPAF would be...each photodiode would have its own color filter...because each photodiode would *be a pixel*!  Thus, what you are proposing is the removal of DPAF technology, a factor of two reduction in pixel size, and higher resolution. That's it! There really, truly, honestly isn't anything special about giving each smaller photodiode its own filter. That just means you have a sensor with four times as many pixels, which is pretty much what each new generation of sensors gets anyway. (Well, not four times as many pixels, but a pixel size reduction and an increase in pixel count is a pretty consistent fact of just about every new still photography camera release.)



x-vision said:


> You still have a single pixel with a single microlens.



If you do this, then you are going to have problems properly distributing light into each photodiode. The entire purpose of the microlens is to guide as much light as possible onto the photodiode. If you try to increase the pixel resolution below the microlens, then the problem you have is that one of those four subpixels gets more light than the rest, as the microlens, just like any other _lens_, *FOCUSES LIGHT*. The focal point, where the majority of the light is concentrated, is rarely dead center underneath the microlens (the farther from the center of the sensor you go, the more off-centered the focal point from the microlens will be). So, if you split the color filter and photodiode underneath the microlens, you'll greatly increase noise levels...one out of four subpixels will get most of the light, and the other subpixels will get significantly less light. Your idea effectively trades noise for resolution.

Your counter might be: well, just use more layers of microlenses for each photodiode. If you throw in more layers of microlenses, then you further screw with the AF capability of the subpixels, as you would be mucking with the phase of the light below the initial microlens. Muck with phase, and you can no longer "phase detect" (PD), or at least not detect it as well or as accurately. So again, as I said before, all you are proposing is a factor of two reduction in pixel size, or a factor of four increase in pixel count. In other words, a standard (non-AF capable) sensor with higher resolution...and more noise.
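The tradeoff described above can be sketched with a toy shot-noise model (every number here is hypothetical, chosen only for illustration): if the microlens dumps most of the light onto one of four individually-read subpixels, the dim subpixels end up with far worse SNR than a single photodiode reading the same total light would have.

```python
import math

def snr(signal_e, read_noise_e):
    """Shot-noise-limited SNR for one photodiode readout."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

total_light = 4000.0   # electrons under one microlens (hypothetical)
read_noise = 5.0       # electrons per readout (hypothetical)

# One photodiode per microlens: one readout sees all the light.
single = snr(total_light, read_noise)

# Four individually-read subpixels, with the microlens focusing most of
# the light onto one of them (uneven 70/10/10/10 split, hypothetical).
sub_snrs = [snr(total_light * f, read_noise) for f in (0.7, 0.1, 0.1, 0.1)]

print(f"single pixel SNR: {single:.1f}")
print(f"subpixel SNRs: {[round(s, 1) for s in sub_snrs]}")
```

The single readout lands above 60:1 here, while the starved subpixels fall below 20:1, which is the noise penalty being argued.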



x-vision said:


> Underneath, however, there are four individual color filters - one for each photodiode.
> Here's the thing about the individual color filters: they don't have to be monochromatic R/G/B filters anymore.
> Instead, you can use a combination of di/poly-chromatic filters, from which you can derive the overall pixel color.
> And instead of deriving a single R/G/B color, as in a bayer sensor, you derive *all three* primary colors.



Look up Micro Color Splitting Sensor. Panasonic's design is vastly superior to any kind of di/poly-chromatic *filter*, because it simply _doesn't filter_. It *splits* light, but directs *all of it* into photodiodes. 



x-vision said:


> In summary, if you have a single, monochromatic filter for the entire pixel, you can only get one color per pixel (either R, G, or B).
> But if you use individual di/poly-chromatic filters for each photodiode, you can derive all three primary colors per pixel (R+B+G).
> Plus, you have a more sensitive/efficient pixel, as di/poly-chromatic filters by definition are wasting less light than a monochromatic filter.



And, by definition, MCS wastes zero light. Why invest time, money, and effort into a very complicated pixel design, one that is prone to being much noisier due to improper use of a microlens, when there are proven techniques that eliminate filtration entirely?



x-vision said:


> Back to the topic of extra resolution:
> The increase in resolution comes from the fact that you have all three primary colors per pixel vs the single color per pixel in a beyer sensor.
> Admittedly, the resolution increase is not all that big - but it's still an increase.



What you're proposing *is* a significant increase in resolution. The fact that you don't understand even that demonstrates that you don't understand sensor technology all that well, which indicates that you're just speculating and dreaming. Nothing wrong with dreaming, but you should be aware that's what you're doing.  You're DOUBLING resolution in both the horizontal and vertical by making each photodiode one quarter the size. The D800 clearly has a lot more resolution than the 1D X, and it's basically the same thing...twice the pixel count.

You are really just talking about a pixel size reduction. Again...there isn't anything special here, and because you're proposing that one single microlens be used for multiple pixels, you're going to have an increase in noise due to what I described above. The increase in noise is going to be a severe drag on IQ, so again...you're talking about, at the very best, a _net neutral_ difference, and at worst, you're going to get WORSE IQ with your sensor design because of the increased noise.
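The pixel-count arithmetic behind this is simple to check (sensor size and pitches below are illustrative, not any specific camera): halving the pitch doubles the linear pixel count on each axis and quadruples the total.

```python
# Halving the pixel pitch doubles linear resolution and quadruples pixel count.
width_mm, height_mm = 36.0, 24.0          # full-frame sensor, illustrative
counts = {}
for pitch_um in (6.25, 3.125):            # second pitch is half the first
    cols = int(width_mm * 1000 / pitch_um)
    rows = int(height_mm * 1000 / pitch_um)
    counts[pitch_um] = cols * rows
    print(f"{pitch_um:5.3f} um -> {cols} x {rows} = {cols * rows / 1e6:.1f} MP")
```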



x-vision said:


> Think about all those things.
> You seem to be dismissing the quad-photodiode tech - seemingly without fully realizing its potential.
> If you believe that Foveon is better than Bayer, just consider that a quad-photodiode design with individual non-bayer color-filters (one per photodiode) is a better solution that Foveon.



I fully understand what DUAL-photodiode technology is, how it works, why it's designed the way it's designed, and I also understand that it isn't some magical technology that will suddenly slingshot Canon ahead of the competition. You are _dreaming_, pure and simple, that somehow Canon has solved their IQ problems with an *AF* invention. It's just a dream, though. It's the same dream a lot of Canon users have, because they all want better IQ out of Canon sensors, but it's still just a dream. It's an ill-educated dream, I am sorry to say, and you're misinterpreting a lot of information (such as the Chipworks photo of the OUTER PERIPHERY of the 70D sensor...anyone who knows anything about die fabrication understands that the outer periphery of any CMOS die, sensor, cpu, memory, whatever, is the domain of power regulation, control circuitry, wiring and pin solder points, etc., not core logic, memory cells, or pixels.)

Canon does not have quad pixel technology. If they had already used it in the 70D, then they would have received patents for it years ago. I've read all of Canon's photography-related patent releases for the last three years. They have several for DPAF technology, some new ones since the 70D that have not been implemented anywhere. Their patents, being patents, MUST be extremely precise and explicit about the design (that's what patents are, specific details about specific implementations of a concept). Not one single patent Canon has ever filed for DPAF has ever detailed quad photodiodes. Neither would Canon have sold themselves short by announcing DUAL pixel technology if in reality they had QUAD pixel technology...if they had QPAF, they would have told the world. It would be big news. 

Finally, Canon also already has patents for layered sensor technology that really, truly DOES have the potential to increase image quality. Given some of the things their patents discuss, such as the use of what is basically akin to the nanocoating technology they use on some of their lenses on the second and third photodiode layers, Canon has the potential to improve the total amount of light their red and green photodiodes are sensitive to by reducing the chance of reflection at those lower layers, thereby increasing Q.E. Canon's Foveon-like technology has the potential to be superior to Sigma's Foveon technology, and with Canon's R&D budget, they certainly have the power to bring the technology to market and continue improving it.

If you want to root for Canon, and really want better image quality (which has less to do with photodiode count, and more to do with pixel design quality, quantum efficiency, etc.), then you should look into their layered sensor patents and root for them to actually make a DSLR camera that uses it. If Canon is indeed using nano-crystal technology to reduce reflection and increase Q.E. of the photodiodes in their layered sensors, I think they really have something that could outdo Sigma's Foveon, and outdo it enough that Canon could produce a 30 or 40 megapixel layered sensor that not only has the benefit of higher color fidelity, but also higher native, non-bayer spatial resolution. THAT is where a meaningful increase in IQ for Canon DSLRs will come from...not DPAF.


----------



## jrista (May 3, 2014)

CarlTN said:


> Don't pretend you don't have your own biases, though. You are proud of, and trumpet often, your bias against an entire company, Sigma.



I've never pretended. I'm pretty straight up about what I think of Sigma. I am not against the entire company. I've said on many occasions I think their new lenses from the last couple of years are excellent, and that I appreciate the competitive force they bring in that arena.

I have NEVER hidden my feelings about how Sigma has handled Foveon. I have been quite open about it. I think they do Foveon, which I believe is technology with a lot of potential, a severe disservice by misleadingly selling it as having some magical power to increase resolution, when it does nothing of the sort. Spatial resolution is determined by pixel size, plain and simple. Foveon's strengths lie in areas other than spatial resolution, and they are good strengths: no color moire, good sharpness (for the resolutions that Foveon sensors come in), and excellent color fidelity. 

Sigma wastes far too much time, money, and effort trying to trick potential customers into thinking they will get more resolution with a Foveon than a bayer, which is just a blatant, outright lie. I don't appreciate that, and yes, I fault Sigma for it. If Sigma would take a big chunk of their false advertising budget and inject it into their R&D department instead, I think they could make Foveon viable on both the color fidelity and spatial resolution fronts, and actually have a real competitor on their hands. But sadly, they keep pushing their misleading advertising. 



CarlTN said:


> Your bias and the need to feel proud of it somehow, is rather juvenile, don't you think?



Bait. Hmm. I'll let another fish bite. 



CarlTN said:


> Since you are very concerned about having the highest image quality, you should never use an aps-c camera, yet you do, very often. Practice what you preach.



I use an APS-C camera because I haven't had the money to buy a full-frame camera. I spent over ten grand on a lens last year. No one who isn't independently wealthy spends that kind of money, then turns right around and spends thousands more on MORE equipment. I do practice what I preach. Soon as I have the funds, I'll be using a full frame camera. Until then, my 7D has more reach, thanks to its higher spatial resolution, and that's a fact I greatly appreciate. Oh, it's also a fact I preach, too. ;P


----------



## scyrene (May 4, 2014)

The more you post, jrista, the more I respect you.


----------



## jrista (May 4, 2014)

CarlTN said:


> Lol, well I know you like the higher spatial resolution...but that only works if the lens is up to the task (at least for "spatial" resolution of the image itself...not for comparing final or effective resolution of the larger sensor to a smaller denser one at the same lens focal length, etc...obviously ultimate image quality is less of a factor in that case). Not many lenses are up to the task. Also I'm not saying the 600 ii is not able to pull it off, obviously of course it is. For astro imaging, would you not still need to do a similar multi shot NR process, even for a 6D, 5D3, or 1DX sensor? How about for the 24MP or 36MP Exmors? Wouldn't the A7r be an interesting option (since it can be adapted for EF lenses)? Or is the closer flange distance enough to discourage trying that, due to the higher ghosting? I assume in that process, you are not using (and would not want to try to use) ISO settings above 1000 or so (meaning the Exmors would have clear advantage).



If you are talking about astrophotography (honestly not really sure what you're trying to get at here), then the answer would really be "none of the above". I use my 7D for AP only because it's what I have right now. As far as the best sensors for AP, one doesn't use a camera built for normal photography. Every normal photography camera "cooks" the images. Even Canon's, though they cook them less than the competitors', still modify the raw signal in some ways...more than enough to make it difficult to properly calibrate and integrate a stack of images to produce a low noise, easily stretched astro image.

Astro CCD imagers tend to be vastly superior to any CMOS image sensor from normal photography cameras. They are usually monochrome, therefore their spatial resolution, particularly for color filtered frames, is higher despite the fact that they often have slightly larger pixels. They use higher grade silicon and fabrication processes, and usually have higher Q.E. (55-65% is common for low end CCDs, 70-96% is what you get for higher end CCDs). They also usually have considerably more usable dynamic range. Most of the nice high end astro CCDs use the Kodak KAF-16803, a square 36.8x36.8mm sensor with 4096x4096 9µm pixels (or similar variants whose specs are generally the same): FWC is around 100,000e-, read noise is about 9-11e-, and dark current (when fully cooled) is around 0.02e-/s or less. Since one stop is about 6 dB, that works out to roughly 79-81 dB, or a bit over 13 stops, in a single frame...and because the sensor is cooled and the signal is left untouched, essentially all of it is usable, which is more than can be said for the D800 or any other Sony Exmor based imager on the market. 
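Per-frame dynamic range follows directly from full well capacity and read noise; a minimal sketch using the CCD figures above and the standard engineering convention (20·log10, so one stop is about 6.02 dB):

```python
import math

def dyn_range(full_well_e, read_noise_e):
    """Per-frame dynamic range in stops and dB (20*log10 convention)."""
    ratio = full_well_e / read_noise_e
    return math.log2(ratio), 20 * math.log10(ratio)

# KAF-16803-class figures: ~100,000 e- full well, ~10 e- read noise.
stops, db = dyn_range(100_000, 10)
print(f"{stops:.1f} stops ({db:.0f} dB)")
```

With these numbers the math gives a bit over 13 stops, or 80 dB, per single exposure.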

When it comes to core technology, a lot of the technology that matters for normal photography really doesn't matter a whit for astrophotography. Spatial resolution is an important factor for normal photography. Not the single most important (you should know me well enough by now that I don't believe in the concept of a single most important feature for IQ ). When it comes to astrophotography, it's a very keen balancing act between getting enough resolution and not so much that you're dramatically oversampling your subject. You have a number of factors that go into producing a "spot size", the size of a diffraction-limited star at the sensor. When you factor in seeing (atmospheric turbulence), most of the time it's difficult for amateur astrophotographers to find seeing good enough that stars are less than 2-3" (arcseconds) in diameter. For nebula, galaxies, clusters, basically anything non-planetary, you want your sensor resolution to be fairly close to your spot size, not oversampling them too much, but also not undersampling them. For the most part, a pixel size around 5-6µm is pretty ideal for this purpose, but most astro CCDs allow pixel binning, so you can make your effective pixels larger or smaller as necessary when adding barlows or focal reducers in order to match your pixel size to your seeing/spot size. Astrophotography is also dependent on having sensitivity to wavelengths of light that are either utterly unimportant for normal photography, or which may even have a negative impact on color accuracy (i.e. deep reds and near IR and near UV), while concurrently being averse to other wavelengths that are often very important to normal photography (i.e. the various bandwidths within which sodium and mercury vapor lighting emit...the yellows, greens, and violets that contribute to light pollution in cities, which are often filtered out with light pollution reduction filters.) 
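The sampling balance above comes down to image scale, for which the standard rule of thumb is 206.265 × pixel pitch (µm) ÷ focal length (mm) arcseconds per pixel. A minimal sketch with a hypothetical setup (the gear numbers are illustrative):

```python
# Image scale: 206.265 * pixel_pitch_um / focal_length_mm, in arcsec per pixel.
def image_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

# Hypothetical setup: 9 um CCD pixels behind a 2000 mm scope.
native = image_scale(9.0, 2000.0)      # finer than 2-3" seeing really needs
binned = image_scale(2 * 9.0, 2000.0)  # 2x2 binning doubles the effective pitch
print(f'native: {native:.2f}"/px, binned 2x2: {binned:.2f}"/px')
```

Here binning 2x2 moves the sampling from roughly 0.93"/px to roughly 1.86"/px, which is closer to a 2-3" seeing-limited spot.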

What I need for astrophotography is very different than what I need for stills photography. There is nothing wrong with more spatial resolution for normal photography; more of it certainly doesn't hurt. Total sensor area is also important for normal photography, for VERY different reasons than it is important for astrophotography. Total sensor area leads to higher real sensitivity with normal photography. A larger sensor will always trump a smaller sensor when it comes to high ISO performance. 

With astrophotography, most of what you're imaging is point light sources. This makes full well capacity, quantum efficiency, and having a low gain setting far more important than high ISO performance, as the higher you crank gain (or ISO), the faster your stars saturate and "bloom" (clip, then begin to spill over into neighboring pixels, which also eventually clip). Physical aperture size is vastly more important than relative aperture in astrophotography, as it doesn't matter so much how fast you image as how much light you get from each and every definable point of the sky that you are resolving. Physical aperture is also the primary factor in determining limiting magnitude, so a larger physical aperture, even if the telescope is effectively only f/8 or f/10, is important if your goal is to resolve very small details of very distant objects, or very small, dim stars.

It's generally illogical to compare normal photography needs with astrophotography needs. They are very different. What I argue for here on CR is very different than what I may argue for in the astrophotography threads here, or on astrophotography forums. Conflating what I've said about CMOS image sensors for normal photography with what I may have said about astrophotography is generally pointless, as there is no real correlation between those two types of photography.



CarlTN said:


> Have you seen Sigma's internal balance sheets and accounting? You claim you know where their money goes. I admit obviously their foveon sensor is still very much in infancy, which is a shame. However, they did buy the rights to the design from the American company. And, they are the only ones producing a sensor like it (so far). They even have a new one (which you were quick to trash, without ever having tried it).



You're kind of missing the point of what I was saying. It doesn't matter how much money is involved. My point is that if they dumped their Foveon advertising budget into Foveon R&D, the money would be better spent, regardless of how much they actually spend. A truly competitive Foveon (one that has BOTH the color fidelity advantage as well as competitive spatial resolution) would speak for itself, in images, through a much larger community, and by word of mouth. 



CarlTN said:


> I see nothing wrong with giving Sigma credit for trying, for being different...it seems like it works for the segment of the market they have laid claim to.



I've never faulted Sigma for trying. Ever. I've only faulted them for lying or being misleading and creating this mistaken notion that somehow Foveon's layered pixels give them the magical ability of creating more resolution out of nothing. Sigma has a misleading, fallacious advertising agenda for Foveon. They seem to think they NEED to falsely trump up Foveon's resolution capabilities in comparison to bayer sensors, when they really don't. That's my beef with them. If they were truthful and sold Foveon on its REAL strengths, I'd have nothing to call Sigma out for, and we wouldn't be having this discussion.



CarlTN said:


> Primarily they make lenses, after all. The cameras are a very small niche. Why would you expect them to be able to spend the funds necessary for the R&D to develop the sensor to your liking, when Canon and Sony have (as yet) not been able to do it? Canon is trying to do it, and they are the largest camera company in the world. Yet it's still not even for sale.



Based on the earliest patents from Canon for similar technology, they haven't been at it for even half as long as Sigma (or the prior owner of the technology). Hence my quip about Sigma better spending their money on R&D...it shouldn't take so long for such an intriguing sensor technology to go...almost nowhere. It was at 4-5mp for years, then it had a jump in the last couple of years to higher resolution, but it still lags behind bayer sensors. Foveon still suffers from noise problems, so it's never been as viable at high ISO (which immediately makes it a non-viable option for a LOT of photographers). Some of the technology in Canon's patents already surpasses Sigma's technology that is already in Foveon.

I sincerely hope that as more cash flows into Sigma from their lens division, they will be able to better prioritize more funds for Foveon R&D. I do like the core concept. I just don't believe that Sigma has done Foveon justice (so far). Things could change, and if/when they do, I'll applaud Sigma for the change...but to date, the snail is still losing the race.



CarlTN said:


> (And let's face it, if Sigma spent $1 billion to develop it, it would still be a failure in your opinion, no matter how good it ultimately was...how is that fair or unbiased?)



Now you're just assuming things. If you had actually learned anything about me over my time on these forums, you would understand how ludicrous that assumption is.  

I couldn't care less, really, about how much money Sigma spends. What matters more to me is whether the money they spend results in progress that produces real value, and whether they HONESTLY sell the thing or whether they resort to misleading factoids and spurious claims. If Sigma could make the Foveon a truly competitive sensor TECHNOLOGICALLY (and it certainly has the potential, nothing wrong with the technology itself), it wouldn't matter if it cost $1,000,000 or $1,000,000,000...so long as in the end they turned enough of a profit to continue investing in the technology and _keep_ it competitive. If they end up failing in the end, well it still wouldn't matter if they spent a hundred grand or a hundred billion, it would all be a waste in the end.



CarlTN said:


> It will be both interesting and amusing, to see your criticism of Canon's new camera (assuming it even uses this technique...for all we know the next full frame model may not even use it after all. It's just rumors...)
> 
> Again, your disgust with Sigma for simply existing, is juvenile, misplaced, and unnecessary. As is your harsh view of those who use, or have used their products. If we state our opinion of the images we got from using the camera, who are you to say we don't have a right to state it?



And we're back to the personal insults. You and I do indeed have a mutual loathing of each other, and I have no interest in being friends with you...but I'm really trying to keep it off the public forum. No one else wants to see us fight, so I respectfully ask that if you want to insult me, please use PMs. Then you can get as nasty and hateful as you want.


----------



## jrista (May 4, 2014)

dilbert said:


> jrista said:
> 
> 
> > ...
> ...



If Canon comes out and makes spurious claims about how their 15mp layered sensor is really a 45mp sensor, I'll be the first to call them out for using the same misleading tactics as Sigma. I almost hope they do, and if they do, I really hope you're still around, because I would love to prove to you that I stick to the facts and the physics, regardless of brand.

How many times have you heard me say the D800 has a superior sensor at low ISO, or in terms of resolution (hell, just a couple posts ago I stated that the D800 had twice the pixel count of the 1D X)? I only dispute what's wrong. The Foveon, like Canon's DPAF, is not a magic bullet. It cannot give you more resolution than it actually has. Canon DPAF cannot give you more resolution, because DPAF isn't about resolution. The D800 cannot give you better high ISO performance because high ISO performance is physics-limited. I couldn't care less about the brand...all I really care about are the facts, the engineering, and the physics when it comes to what a sensor or camera is capable of. 

I would have thought my tirade against the mistaken notion of Canon's DPAF also being a magic bullet for better IQ in the future would be an indication of how little I care about brand when debating the facts. 



dilbert said:


> > I spent over ten grand on a lens last year.
> 
> 
> 
> Why should we care about this?



Well, if you're going to intentionally miss the point, you shouldn't.


----------



## 3kramd5 (May 4, 2014)

CarlTN said:


> Actually I did read it, but I'm not wasting my time reading this new one. Try to help some people out, and they bite your head off. Irrational? Indeed...and you're extremely guilty of outright insulting me and trolling me, many times over. *Again...I asked a simple question, how about a simple answer that is less than 100 words, with no insults and no whining? What is the reason you would not buy an A7r, to try for astrophotography? Is it the ghosting? I could understand that, if that's what it is. *It can't be the cost, because we both know how upset you got when I suggested you would not "buy a $30k lens right this moment"...and you said you would, if you thought it would help your photography achieve new heights of greatness. You could also use the A7r for static bird photography, something a $1995 CCD imager couldn't do.



Glancing at his gear wish list, it looks like he's more into action than astro. An A7R is 2500 less in the budget (camera + EF adapter). Personally I would love one for portrait and landscape work, but I can not justify the expense. I suspect I'd get more use from that tamron 150-600 and a new tripod.


----------



## jrista (May 4, 2014)

3kramd5 said:


> Glancing at his gear wish list, it looks like he's more into action than astro. An A7R is 2500 less in the budget (camera + EF adapter). Personally I would love one for portrait and landscape work, but I can not justify the expense. I suspect I'd get more use from that tamron 150-600 and a new tripod.



I'm actually pretty into astrophotography. It splits my budgets now. The A7r, along with pretty much any Sony camera, Nikon camera (with the exception of a couple that use different sensors), and a lot of other cameras that use Sony sensors (i.e. Pentax) are all pretty poor choices for astrophotography. Those manufacturers all mess with the image signal pretty heavily. 

They clip the black point, rather than using a bias offset (Canon uses a bias offset). That causes two problems for astrophotography. First, by clipping to the black point you simply eliminate a lot of the dimmer background stars entirely; they are gone from the signal, unable to be retrieved. Second, it makes it difficult to use standard bias frame calibration techniques to remove any noise caused by sensor bias and recover those dim stars (which IS possible with Canon cameras). 

Sony/Nikon/Pentax/etc. also tend to apply noise reduction to the RAW signal in hardware...an unconfigurable noise reduction that's just always applied. Having total control over noise is a pretty critical facet of astrophotography...the vast majority of images you create for astrophotography have image data only in the lowest echelons of the signal; stars are the only things that have levels throughout the signal. While you can do some pretty amazing things with the D800 at ISO 100 when it comes to lifting shadows, that's nothing compared to the kind of lifting you do in astrophotography. The D800 can be lifted about six stops. In astrophotography, you're often lifting by a lot more than that...to really pull out dust lane detail and dark nebula detail and things like that, it's common to lift things by an equivalent of 10-15 stops! Not even the great D800 or any other Exmor DSLR camera can handle that, in part because of the black point clipping, which is throwing away a couple/few stops of potentially recoverable information in the first place.
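The bias-offset point is easy to see in a toy calibration, a minimal sketch with synthetic frames (numpy assumed; real frames would of course come from the camera): averaging many zero-exposure bias frames into a master bias and subtracting it removes the fixed offset without clipping away the faint signal underneath.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic frames: a faint "star" on a fixed bias offset plus read noise.
bias_level, read_noise = 512.0, 8.0
sky = np.zeros((64, 64))
sky[32, 32] = 200.0                      # one dim star (hypothetical level)

def light_frame():
    return sky + bias_level + rng.normal(0, read_noise, sky.shape)

def bias_frame():
    return bias_level + rng.normal(0, read_noise, sky.shape)

# Master bias: average many zero-exposure frames to isolate the fixed offset.
master_bias = np.mean([bias_frame() for _ in range(64)], axis=0)

# Calibrate each light frame, then stack (average) to beat down the noise.
stack = np.mean([light_frame() - master_bias for _ in range(16)], axis=0)

print(f"background after calibration: {stack.mean():.3f} (near 0)")
print(f"star survives: {stack[32, 32]:.1f} (near 200)")
```

If the camera had clipped everything below the bias level instead, the dim star would have been partially or wholly discarded before calibration could ever see it.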

A proper astro CCD camera delivers around 13-14 stops of usable dynamic range in a single frame, and stacking many calibrated frames extends that even further. They are thermally regulated (anywhere from -40°C to -80°C Delta-T from ambient), which nearly eliminates dark current noise, generally have relatively low read noise, usually have much higher Q.E., and usually have larger pixels (smaller astro CCD sensors usually have around 5-6µm pixels, larger astro CCD sensors usually have 9-24µm pixels; FF DSLRs tend to have pixels in the 6-7µm range, and APS-C DSLRs are now around 3.5-4.5µm). Since astro CCD sensors are also most often monochrome, and you usually image in LRGB (luminance + RGB), you can produce images with much stronger signals than you can with bayer-filtered DSLRs.

So, while I'd like an A7r for my landscape photography, it is actually one of the worst possible choices for astrophotography. I do landscapes sometimes, wildlife and birds most of the time, and astrophotography every time there is a clear night. Since Canon cameras don't mess with the image signal nearly to the degree that other manufacturers do (they do some response curve tweaks at certain higher ISO settings, but I usually image at ISO 400, which Canon pretty much leaves alone), and since the 5D III can be used for landscapes (it has a very respectable pixel count and frame size for that), wildlife and birds (it meets my minimum expectations for rate at 6fps), AND can be used for astrophotography, it's a far better investment in the interim (especially with prices hitting $2700 pretty regularly now.) It may not have the DR of the A7r, but it is a vastly more versatile device. 

If it wasn't for the astrophotography, I'd get a 1D X. By getting a 5D III, that leaves me plenty of cash to invest in a proper astro CCD, a filter wheel and filter system, and a few other accessories.

So...given how versatile Canon's DSLRs already are...do they really need to become a Sony clone with their new sensors? ;D


----------



## 3kramd5 (May 4, 2014)

Are you using telescopes for astro or the 600?


----------



## jrista (May 4, 2014)

3kramd5 said:


> Are you using telescopes for astro or the 600?



Currently using the 600, however this baby is at the top of my list:

Astro-Tech 10" f/8 truss tube Ritchey-Chrétien optical tube

Telescopes are kind of like lenses, though. You usually need a few. The 600 is ideal for wider field work. I think the 200mm f/2 L would be an excellent one for very wide field work, but I think when I get the 300/2.8 L II that will be the last Canon supertele for a long while. The 300 is still excellent for wide field work. The 10" RC is a longer focal length, which is better for galaxies and clusters, and for close-up work of parts of nebula.


----------



## 3kramd5 (May 5, 2014)

Nice. That's quite an apparatus. My only experience (about 7 years of engineering design work) with telescopes is with a rather different variety, unless you're into IR.


----------



## GaryJ (May 5, 2014)

scyrene said:


> The more you post, jrista, the more I respect you.


+1


----------



## Stu_bert (May 5, 2014)

jrista said:


> 3kramd5 said:
> 
> 
> > Glancing at his gear wish list, it looks like he's more into action than astro. An A7R is 2500 less in the budget (camera + EF adapter). Personally I would love one for portrait and landscape work, but I can not justify the expense. I suspect I'd get more use from that tamron 150-600 and a new tripod.
> ...



I was looking at the A7R with an adapter for landscapes, but then I read on Thom Hogan's site that Sony uses lossy compression on their RAWs (unless I misread him), and you can't switch it off!

Why would they do that?

On that basis, it may have amazing DR, but then it will surely just smudge out some of the detail for, err, actually I'm not sure what benefit...

Had a look at that astro link - it's a whole new language there! If I understood correctly, it's a 2000mm lens? And optically, is it better than your 600mm lens with a 1.4x or 2x attached? Just curious as to the benefits. Thanks.


----------



## jrista (May 5, 2014)

Stu_bert said:


> jrista said:
> 
> 
> > 3kramd5 said:
> ...



Hmm, I hadn't heard of that. If they do, it's foolish, and you really no longer have a RAW image. I am a bit skeptical of that...it doesn't seem logical, but who knows.



Stu_bert said:


> Had a look at that astro link - it's a whole new language there  If I understood correctly, then it's a 2000mm lens? And optically is it better than your 600mm lens with a 1.4x and 2.x attached? Just curious as to the benefits. Thanks.



Reflected light tends to produce superior spots at the sensor plane compared to refracted light. A reflector can warp star diffraction spots due to coma and astigmatism, but that's about it. A refractor, on the other hand, suffers from all forms of optical aberration, including chromatic aberration, spherical aberration, etc. The RC, or Ritchey-Chrétien, telescope design is one of the more superior designs. It's the same basic design used in many of the major Earth-bound telescopes...the huge ones, up to 10 meters in size. It tends to produce superior results, although it does suffer from some coma and astigmatism in the corners. 

There is an even better telescope design than the RC, called a CDK, or Corrected Dall-Kirkham. The CDK uses a mirror and a built-in corrector to get one of the best spot shapes, center to corner, of any telescope design I've ever seen. PlaneWave makes CDK scopes, but they are pretty pricey. From what I've read and seen, a CDK is about the best telescope design in the world today.

As good as my lens is, and it is very good with a very flat field corner to corner, it is no RC and certainly no CDK. If I throw on teleconverters, that gets me more focal length (which is not necessarily the best thing...a LOT of nebulae are even larger than I can fit in my field at 600mm, let alone with a 2000mm scope), but it also increases the optical aberrations. For galaxies, clusters, and getting close up on parts of nebulae, a longer, better scope like the Astro-Tech 10" RC is better. The larger aperture, ten inches vs. six inches, also means I can resolve fainter stars, galaxies, and other details. Most scopes work with focal reducers, so while it is 2000mm natively, I can use a 0.63x reducer to make it a 1260mm f/5 telescope. That is relatively fast, with a moderately wide field. For planetary work, I can also throw on a 2x or 3x Barlow lens and get a 4000mm f/16 or 6000mm f/24 scope, which is much better for planetary imaging (f-ratio doesn't usually matter for planetary, as you image planets by taking videos with thousands of frames for anywhere from a couple minutes to as long as a half hour...then filter, register, and stack the best frames of the video, which is basically performing a superresolution integration...that eliminates blurring from seeing, and effectively allows you to image well beyond the diffraction limit.)
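The reducer/Barlow arithmetic here is easy to check: the physical aperture is fixed by the tube, so the f-ratio just scales with the effective focal length. A tiny sketch (the round 250mm aperture figure for a 10" f/8 tube is my simplification; 10 inches is strictly ~254mm):

```python
def effective_optics(native_focal_mm, aperture_mm, multiplier):
    """Apply a focal reducer (multiplier < 1) or a Barlow/teleconverter
    (multiplier > 1). The aperture is unchanged, so the f-ratio scales
    directly with the effective focal length."""
    focal = native_focal_mm * multiplier
    return focal, focal / aperture_mm

# 10" RC: ~2000mm native over a ~250mm aperture -> f/8
print(effective_optics(2000, 250, 1.0))   # native: 2000mm, f/8
print(effective_optics(2000, 250, 0.63))  # 0.63x reducer: ~1260mm, ~f/5
print(effective_optics(2000, 250, 2.0))   # 2x Barlow: 4000mm, f/16
print(effective_optics(2000, 250, 3.0))   # 3x Barlow: 6000mm, f/24
```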


----------



## NancyP (May 5, 2014)

Very dangerous, jrista, very dangerous. Astrophotography is like boating - you start out with a $300.00 (slow) toy kayak, and you end up wanting an America's Cup yacht. I am at the toy kayak stage, and likely to stay there. A combination of living in the center of a "white" zone (central city, to the rest of you), having a day job, no longer having the ability to easily adapt to swing schedules, and living in an often cloudy location (St. Louis, MO) makes serious application to astrophotography difficult. I can learn a bit at our local astronomy park, 45 minutes away in an "orange-soon-to-be-red" zone. High quality darkness is about 2.5 to 3 hours away at minimum.

Hats off to you for taking on PixInsight. 

I am still drinking the Sigma DP#M koolaid because the color subtlety is very suitable for landscape, and the camera weighs ~300 grams including an aluminum L bracket/grip and can be well supported by a 1600 gram tripod/head/QR kit. Pop some extra batteries, filters, and "nodal" slide in my pocket, and I have a great fast-hiking compatible landscape kit.


----------



## jrista (May 5, 2014)

NancyP said:


> Very dangerous, jrista, very dangerous. Astrophotography is like boating - you start out with a $300.00 (slow) toy kayak, and you end up wanting an America's Cup yacht. I am at the toy kayak stage, and likely to stay there. A combination of living in the center of a "white" zone (central city, to the rest of you), having a day job, no longer having the ability to easily adapt to swing schedules, and living in an often cloudy location (St. Louis, MO) makes serious application to astrophotography difficult. I can learn a bit at our local astronomy park, 45 minutes away in an "orange-soon-to-be-red" zone. High quality darkness is about 2.5 to 3 hours away at minimum.



I know a lot of imagers who shoot under red and white zones. Have you ever looked into a light pollution reduction/suppression filter? There are a number of them. I'm in a yellow zone myself, but I still use the Astronomik CLS filter (I prefer shooting nebulae; if you shoot galaxies, LP filters are a mixed bag). You could also look into doing narrowband imaging...with NB, you block out all light except the one (or three) very narrow bands you're interested in. You need longer exposures, but NB works extremely well under red and white zones, and I've seen some stellar work from people in some of the most heavily populated places in the eastern half of America. 



NancyP said:


> Hats off to you for taking on PixInsight.



PI isn't so bad once you get used to it. It has a funky way of doing things until you learn why...then you realize how incredibly awesome it is.  I also recommend it if you image under light-polluted skies. Its DBE, or Dynamic Background Extraction, script can help you extract light pollution from your background skies and flatten them, whether you use LPR filters or not.



NancyP said:


> I am still drinking the Sigma DP#M koolaid because the color subtlety is very suitable for landscape, and the camera weighs ~300 grams including an aluminum L bracket/grip and can be well supported by a 1600 gram tripod/head/QR kit. Pop some extra batteries, filters, and "nodal" slide in my pocket, and I have a great fast-hiking compatible landscape kit.



For those who understand what Sigma Foveon cameras offer, I say more power to 'em! There is no question the color fidelity is extremely high with Foveon sensors. The light weight is also pretty nice for when you gotta hike to your vistas. That's one of the reasons I like the idea of an A7r for landscape photography...but the camera overall is just...not general purpose enough to justify the cost.


----------



## scyrene (May 5, 2014)

jrista said:


> (f-ratio doesn't usually matter for planetary, as you image planets by taking videos with thousands of frames for anywhere from a couple minutes to as long as a half hour...then filter, register, and stack the best frames of the video, which is basically performing a superresolution integration...that eliminates blurring from seeing, and effectively *allows you to image well beyond the diffraction limit*.)



This is very interesting, and news to me. Dare I ask how that is possible? I assumed stacking would take the image to the theoretical best the setup can produce - how does it deal with diffraction? I was using my 500L with extenders to photograph planets using stacking recently, and assumed softness due to diffraction (I was at 4000mm f/40 for Jupiter and 5600mm f/56 for Mars).


----------



## jrista (May 5, 2014)

scyrene said:


> jrista said:
> 
> 
> > (f-ratio doesn't usually matter for planetary, as you image planets by taking videos with thousands of frames for anywhere from a couple minutes to as long as a half hour...then filter, register, and stack the best frames of the video, which is basically performing a superresolution integration...that eliminates blurring from seeing, and effectively *allows you to image well beyond the diffraction limit*.)
> ...



There are different ways to stack. The most common is averaging: basic averaging, weighted averaging, or kappa-sigma clipped averaging. Those forms of stacking are usually used on star field images (nebulae, galaxies, clusters) to reduce noise. Noise is reduced by a factor of SQRT(stackCount), so stacking 100 frames reduces noise by a factor of 10. 
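The SQRT(stackCount) behavior is easy to demonstrate numerically. A small sketch with synthetic frames (not real data; the kappa-sigma step at the end is a minimal version of the rejection idea, not any particular program's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic sub-frames: a flat patch of signal 1000 with Gaussian
# noise of sigma = 50, stacked 100 deep.
signal, sigma, n_frames = 1000.0, 50.0, 100
frames = signal + sigma * rng.standard_normal((n_frames, 256, 256))

single_noise = frames[0].std()     # noise of one frame: ~50
stacked = frames.mean(axis=0)      # basic average stack
stacked_noise = stacked.std()      # ~50 / sqrt(100) = ~5

# Kappa-sigma clipping (kappa = 3): reject per-pixel outliers
# (satellite trails, cosmic-ray hits) before averaging.
mu, sd = frames.mean(axis=0), frames.std(axis=0)
keep = np.abs(frames - mu) < 3 * sd
clipped = np.where(keep, frames, 0.0).sum(axis=0) / keep.sum(axis=0)

print(single_noise, stacked_noise)
```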

You can also use "drizzle" stacking and other forms of superresolution stacking. The purpose of these methods is less to reduce noise (although they do help with that) and more to increase detail. Stacking for superresolution aims to choose the best version or versions of any given pixel out of thousands of frames, and to sample each pixel multiple times, within and across frames, at slightly different sub-pixel offsets. That allows the algorithm to extract the maximum amount of information for each point of your subject.

While diffraction certainly limits your resolution when doing planetary imaging, seeing limits it to a FAR greater degree. The vast majority of blurriness in planetary imaging is due to atmospheric turbulence and poor transparency, by about an order of magnitude compared to diffraction. Stacking thousands of frames with a superresolution algorithm easily cuts through both, assuming you get enough high quality frames. Because these algorithms pick the best version of a pixel and multisample each pixel, you can end up with surprisingly detailed images, despite the effects of seeing and diffraction.
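The "pick the best frames" step can be sketched without any astronomy software. Below is a hypothetical, minimal lucky-imaging selector; the Laplacian-variance sharpness score is a common proxy (not the specific metric any stacking program uses), and registration/alignment of the frames is deliberately omitted:

```python
import numpy as np

def lucky_stack(frames, keep_fraction=0.10):
    """Select the sharpest fraction of frames and average them.

    Sharpness proxy: variance of the discrete Laplacian. Blurrier
    frames (worse seeing) carry less high-frequency energy, so
    they score lower and get rejected.
    """
    frames = np.asarray(frames, dtype=float)

    def sharpness(img):
        lap = (-4 * img
               + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1))
        return lap.var()

    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]   # indices of the sharpest frames
    return frames[best].mean(axis=0)
```

In a real pipeline the kept frames would be registered to sub-pixel accuracy before averaging; this sketch only shows the selection-and-average core.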


----------



## Stu_bert (May 5, 2014)

jrista said:


> Stu_bert said:
> 
> 
> > jrista said:
> ...



Thanks for the comprehensive reply.

Re A7R - http://www.sansmirror.com/cameras/a-note-about-camera-reviews/sony-nex-camera-reviews/sony-a7-and-a7r-review.html

Scroll down to "How do they Perform?"


----------



## scyrene (May 6, 2014)

jrista said:


> scyrene said:
> 
> 
> > jrista said:
> ...



It's certainly powerful, though I don't understand the technicalities and I'm not sure what the program is doing to obtain the results (for nebulae I do it all by hand, which takes a long time but I have a grasp of every step of the process).

Sadly, I can't do thousands of images, due to limitations of my setup. I know most planetary work nowadays is done with video, stacking lots of extracted frames, but because even at 5600mm (lens focal length), the targets are too small in the frame to use the camera's video function, I take stills manually at full sensor resolution, and then crop to a reasonable size for stacking. That way I can do tens to over a hundred, but I could never do much more (I'm also aiming by hand, so it's a matter of human fatigue, no way of automating the process). Still, it's amazing what you can do without dedicated kit - a few key postprocessing techniques are what make the difference.


----------



## jrista (May 6, 2014)

Welcome.



Stu_bert said:


> Re A7R - http://www.sansmirror.com/cameras/a-note-about-camera-reviews/sony-nex-camera-reviews/sony-a7-and-a7r-review.html
> 
> Scroll down to "How do they Perform?"



I believe that only applies to their 11-bit "RAW" encoding. That would be something akin to Canon's sRAW and mRAW, not necessarily in encoding, but in lossiness. Neither is actually a RAW file; both encode the data in a specific way. In Canon's case, the m/sRAW formats are Y'CbCr formats: luminance plus a blue-difference chrominance channel plus a red-difference chrominance channel. The Y, or luminance, channel is stored at full resolution, while the Cb and Cr channels are stored "sparse". In Canon's case, all of the stored values are still 14-bit precision, but less chrominance data is stored. Canon's images would be superior to Sony's, in that they store both more information in total and at a greater bit depth...however, both suffer from the same limitation: the information is not actually RAW, which severely limits your editing latitude.

Generally speaking, the fact that these formats store lower-resolution color information doesn't matter all that much. Because of the way our brains process visual information, if done carefully, the reduced chrominance resolution goes unnoticed in favor of a higher level of luminance detail. YCbCr formats have been around for a long time, since the dawn of color TV even. The luminance channel was extracted and sent in full detail, while the blue-difference and red-difference channels were sent separately, in a more heavily compressed form. This allowed color information to be piggybacked on the same signal that "black and white" TV was sent on, making it possible for B&W TVs to pick up the same signal as color TVs.

If you have paid any attention to Canon's video features, you've already heard of similar compression techniques. You may have heard of 4:1:1, 4:2:2, or 4:4:4. Those numbers refer to the sampling of the Y, Cb, and Cr channels. A 4:1:1 encoding has full-resolution luminance and 1/4-resolution Cb and Cr channels. A 4:2:2 encoding has full luminance and 1/2-resolution Cb and Cr. As you might expect, a 4:4:4 encoding uses the same sampling rate for all three channels, and is effectively "full resolution". A standard RAW image is also, technically, a 4:4:4 R'G'B' image.
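The 4:2:2 idea is simple enough to sketch directly. A toy subsampler (not any camera's actual encoder) that keeps full-resolution Y and every other column of Cb/Cr:

```python
import numpy as np

def subsample_422(ycbcr):
    """Simulate 4:2:2 chroma subsampling on a float (H, W, 3) array
    with channels Y, Cb, Cr: keep Y untouched, store only every
    other column of Cb/Cr, then reconstruct by repeating columns."""
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
    # "Stored" chroma: half the samples of the full-resolution planes.
    cb_s, cr_s = cb[:, ::2], cr[:, ::2]
    # Reconstruction: repeat each stored chroma column to full width.
    cb_r = np.repeat(cb_s, 2, axis=1)[:, :cb.shape[1]]
    cr_r = np.repeat(cr_s, 2, axis=1)[:, :cr.shape[1]]
    return np.stack([y, cb_r, cr_r], axis=-1)
```

Comparing input and output shows the trade described above: luma detail survives intact, while half the chroma samples are discarded and smeared across neighboring columns.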


----------



## Stu_bert (May 6, 2014)

jrista said:


> Welcome.
> 
> 
> 
> ...



Jrista - I did not appreciate that about mRAW/sRAW, so thank you. My understanding on Sony, however, is that their 11-bit RAW is their standard RAW. I'm not sure if you read the whole article, but Thom is talking about deficiencies as a result, and he's comparing it to the D800. There's a further link embedded:

http://www.rawdigger.com/howtouse/sony-craw-arw2-posterization-detection

which I believe confirms that they do the same on all their RAW encoding.

Don't get me wrong, I think many people would not notice it or could work around it - Fred Miranda has been positive in his review, and he's a Canon user - in fact, I think there's a whole forum on his site discussing it in detail. To me it just seems somewhat self-defeating - you have a sensor with better DR than your competitors, but you then impair the output with a compression scheme that "fails" on scenes with higher DR.

Maybe I misinterpreted the information...
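The posterization mechanism the linked RawDigger article describes can be illustrated with a toy model: push 14-bit linear sensor values through a coarse tone curve into 11-bit codes, then decode back. The square-root curve here is my own stand-in, not Sony's actual ARW2 scheme; the point is only that the step size between decodable values grows toward the highlights, which is exactly where a high-DR scene gets hurt:

```python
import numpy as np

levels_in, levels_out = 2 ** 14, 2 ** 11  # 14-bit linear -> 11-bit codes

def encode(x):
    # Square-root tone curve: spends codes preferentially on shadows.
    return np.round(np.sqrt(x / (levels_in - 1)) * (levels_out - 1))

def decode(code):
    return (code / (levels_out - 1)) ** 2 * (levels_in - 1)

x = np.arange(levels_in)
steps = np.diff(np.unique(decode(encode(x))))
print(steps.min(), steps.max())  # shadow steps ~1 DN, highlight steps ~16 DN
```

Smooth highlight gradients quantized in ~16-DN jumps are what show up as posterization, even though shadow precision is barely affected.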


----------



## NancyP (May 6, 2014)

This is an entertaining, if now grossly off-topic, thread. 
I am still identifying the more obvious Messier objects with binoculars. Learning the basics is a good idea. At some point soon, I will buy a second-hand beginner's (Dobsonian) telescope from a club member for learning basic visual observation. A good German equatorial mount and a starter astrophotography optical tube are further away.

I have run some Moon shot stacks (400mm f/5.6L + 1.4x TCII) through Lynkeos freeware, which is really designed as a very simple moon/planet video image processor.


----------



## whatfind (May 9, 2014)

Could this be an EF mount without a mirror?
That would explain the "lower cost": without a mirror, the AF chip, the exposure metering chip, etc. can be discarded, which lowers the cost quite a lot.
Look at the 70D - if Dual Pixel AF can be used for AF in stills (and made to draw less battery power), no mirror is needed.
That would also explain the EVF rumor.


----------

