# Multilayer Sensors are Coming From Canon [CR2]



## Canon Rumors Guy (Oct 8, 2014)

```
<p>Canon has been hard at work on multi-layer sensors, and we’re told they will be coming in an upcoming professional-level full-frame DSLR. There will be two cameras based on this technology coming in 2015; one would expect a replacement for the EOS-1D X and perhaps another professional body with a high megapixel count. There was no mention of an EOS 5D Mark IV.</p>
<p>The high resolution camera is expected to be announced in Q1 of 2015, even though we have heard from places unknown that such a camera would make an appearance at PhotoPlus at the end of this month. A Canon Explorer of Light has told us they’ve been promised a high megapixel camera to come “sooner than later”.</p>
<p>I suspect most of this came from Photokina chatter.</p>
<p><strong><span style="color: #ff0000;">c</span>r</strong></p>
```


----------



## Maximilian (Oct 8, 2014)

Sounds reasonable! 

Especially the part with the 5D4.
It'll take some one to two years until that tech trickles down to the semi-pro bodies.

Now let's hope that it comes true within a short time and that we'll see some real good IQ improvements on this.


----------



## LetTheRightLensIn (Oct 8, 2014)

Hopefully it will be worth the wait. (Although the wait for a 5D4 sounds perhaps even longer, if this tech delivers, then at least it would be just a Sony to tide things over for now instead of a full-out switch to a Nikon/Sony mix.)


----------



## Hannes (Oct 8, 2014)

It would certainly go some way to explaining the lack of fundamental updates in Canon sensors, other than DPAF obviously. Very interesting though.


----------



## Dylan777 (Oct 8, 2014)

Bring it on


----------



## ajfotofilmagem (Oct 8, 2014)

From what I know of the Sigma Foveon sensor, we will see major improvements in color depth. This could be the true replacement of the 1Ds Mark III.

Does anyone know how good (or bad) the DR is at low ISO on Sigma Foveon sensors?


----------



## zim (Oct 8, 2014)

And straight to a CR2; this does seem very plausible. I do hope one of those top-end cameras is the high-MP camera and they leave the 5-series at around 24MP though, if that's even how you measure these things with a multilayer sensor?


----------



## Mt Spokane Photography (Oct 8, 2014)

Those Canon patents are supposed to be an improvement over the Foveon weaknesses, but the big benefit is more accurate, better colors.


It might be interesting to see what happens. Canon is very conservative, so we do not see new technology until it's been tested pretty well. Even so, there are always issues once you get tens of thousands of users.


http://www.canonrumors.com/2014/07/patent-another-multi-layer-related-sensor-patent-from-canon/

http://thenewcamera.com/canon-patent-news-three-layer-sensor-from-canon/


----------



## Drizzt321 (Oct 8, 2014)

zim said:


> and straight to a CR2, this does seem very plausible. I do hope one of those top end cameras is the high MP camera and they leave the 5 series at around 24 though, if that's even how you measure these things with a multilayer sensor ?



Yeah, with a CR2 I'm a lot more hopeful that it'll actually happen. Still not there yet, but IMHO a lot better than a CR1; it means he's got some reason to trust the source.


----------



## lintoni (Oct 8, 2014)

Mt Spokane Photography said:


> Those Canon patents are supposed to be an improvement over the Foveon weaknesses, but the big benefit is more accurate, better colors.
> 
> 
> It might be interesting to see what happens. Canon is very conservative, so we do not see new technology until it's been tested pretty well. Even so, there are always issues once you get tens of thousands of users.
> ...


Another one:

http://www.canonrumors.com/2014/06/patent-canon-5-layer-uv-ir-rgb-sensor/


----------



## scottkinfw (Oct 8, 2014)

New sensors have to be coming sooner or later. The questions I have are: what improvements will we see, when will we see them, and what camera bodies will they come in? I'm ready for a new camera by next spring/summer, if it is the right fit. Come on Canon, hit one out of the park for us.

Sek


----------



## hoodlum (Oct 8, 2014)

I don't expect this sensor to be used in a high-speed body due to the low sensor read-out speed of a stacked sensor. That would also explain why the 7DII was just released with existing sensor tech. I could see a totally new body announced as a niche for landscape or other areas that require greater detail without the need for high FPS. Video would also not work very well with this sensor due to the low read-out speed.


----------



## msm (Oct 8, 2014)

Hope it is true this time.


----------



## KAS (Oct 8, 2014)

Well, put me on the pre-order list for 1Ds replacements for next spring! This might be a great way to start the new year.


----------



## jrista (Oct 8, 2014)

I'm curious if this new layered sensor is the one that uses UV and IR layers to remove skin blemishes. If so, is this next Canon DSLR a replacement for the 1Ds, a studio and portrait camera?


----------



## daniela (Oct 8, 2014)

Currently rumored in Japan: the next high-end body will be a superb successor. No mention of which body it is. What seems clear is that this sensor technology will enter Canon's high-end line, and a definite premium will be added to the current price.
The rumored detailed specifications are still _not_ really believable. It looks like 2-3 bodies are mixed up: 22-50MP, 8-20fps, clean ISO up to 51000, AF speed halved, better battery... The 1DX successor is rumored at around 20MP.
But: no Canon fanboy knows exactly what will be inside. 
Before Photokina, Japanese forums rumored 24MP, 10fps and clean pictures at ISO 6400 for the 7DII.


----------



## V8Beast (Oct 8, 2014)

Other than more accurate color, what are the fundamental benefits of a multi-layer sensor?

IMHO, as great as the 1DX may be, Canon could definitely benefit from a true studio/landscape-oriented 1Ds3 successor. The whole "merging of the 1D and 1Ds lines" is pure marketing nonsense. Nikon dropped a full-frame sensor into their sports body, so Canon had to follow suit. It's as simple as that. The 1DX is essentially a full-frame replacement for the 1D4. 

I know lots of pros still shooting with 1Ds3 bodies. Maybe a real studio/landscape camera will give them reason to finally upgrade.


----------



## danski0224 (Oct 8, 2014)

I suppose that "multi layer" doesn't necessarily mean Foveon, but I'd really like to see a "mainstream" Foveon sensor in a Canon dSLR body.


----------



## Maui5150 (Oct 8, 2014)

The big question... when does Canon take on the camera phone market? Clearly, if the 1DXs can make phone calls, play games, and text, then they will have a real winner on their hands.


----------



## Maiaibing (Oct 8, 2014)

Hmmm. Getting a high MP camera next year one way or the other. Great if Canon finally can deliver visible progress on the sensor side. Would much prefer a Canon but their pro models are too big for me - so this may be bad "news". Have my eyes set on a worthy 5DIV. Remain hopeful that Canon will come with some announcement before end of year.


----------



## bchernicoff (Oct 8, 2014)

I hope we get at least a development announcement soon. I've got at least $10k in Canon gear that rarely gets used any more. My Fuji X-T1 is used > 90% of the time now and the two reasons I haven't sold any Canon gear yet are my 400mm 2.8 and hope for something revolutionary in a sensor.


----------



## drjlo (Oct 8, 2014)

Sigma Foveon sensors have high noise in low light/high ISO situations, so I hope Canon has effectively addressed that problem somehow.


----------



## Dylan777 (Oct 8, 2014)

bchernicoff said:


> I hope we get at least a development announcement soon. I've got at least $10k in Canon gear that rarely gets used any more. My Fuji X-T1 is used > 90% of the time now and the two reasons I haven't sold any Canon gear yet are my 400mm 2.8 and hope for something revolutionary in a sensor.



Many would go for 300 & 600 combo. I'm an odd egg in the basket, went with 200mm f2 IS & 400mm f2.8 IS II combo. So far, so good ;D


----------



## ariliquin (Oct 8, 2014)

Does multi layer mean foveon like or does it mean something else? Seems there are other possibilities with additional filters. 

I am a foveon user and can say for some applications it is exceptional and for others terrible. It would be fantastic to see another company push this type of technology beyond its current state. I hope Canon is this company. Maybe Canon does not need to make this type of sensor to be good at everything, just good at specific things. Look at Sony A7r and A7s. Both sell well, yet both are good at different tasks. 

Really looking forward to seeing companies take image quality more seriously; by this I mean eliminating IQ-destroying interpolation and Bayer colour filters.


----------



## ScottyP (Oct 8, 2014)

ariliquin said:


> Does multi layer mean foveon like or does it mean something else? Seems there are other possibilities with additional filters.
> 
> I am a foveon user and can say for some applications it is exceptional and for others terrible. It would be fantastic to see another company push this type of technology beyond its current state. I hope Canon is this company. Maybe Canon does not need to make this type of sensor to be good at everything, just good at specific things. Look at Sony A7r and A7s. Both sell well, yet both are good at different tasks.
> 
> Really looking forward to seeing companies take image quality more seriously, by this I mean, eliminating IQ destroying Interpolation and bayer colour filters.



I dunno. 
"Truer colors" does not blow my hair back, nor does less chromatic aberration (but with monochrome aberrations still there, thus still needing a Bayer filter as stated above??). 

And the negatives seem problematic. Lessened frame rate and ginormous files?

And better IQ at higher ISO is the most important thing for me, but as stated above, Foveon may actually be WORSE rather than better?


----------



## Woody (Oct 8, 2014)

Finally, a noteworthy rumor. ;D


----------



## Diko (Oct 8, 2014)

Yep.

We've got an instant *CR2* after the stupid countdown nonsense from yesterday. 

Neuro and I both wrote on *that article* mentioned.

And both of us were unhappy with Canon staying behind the others, as were many others in those comments. The main point was the lack of new product features and bold action from Canon management in such competitive times.

And today we FINALLY have an (almost) unofficial announcement of new products to come. Such an obvious action seems a little late.

And nobody can persuade me that there wasn't a call to that member of _Canon Explorer of Light_ who was told to tell CR about it... 

Come on! That's cheap!


----------



## bosshog7_2000 (Oct 9, 2014)

bchernicoff said:


> I hope we get at least a development announcement soon. I've got at least $10k in Canon gear that rarely gets used any more. My Fuji X-T1 is used > 90% of the time now and the two reasons I haven't sold any Canon gear yet are my 400mm 2.8 and hope for something revolutionary in a sensor.



Ditto....I have thousands in Canon gear yet these days I find myself shooting with an X-Pro1 and 2 lenses for almost everything. I would love to replace my 5D2 with something worth upgrading to....namely a 5D4 with this rumored new sensor.


----------



## Diko (Oct 9, 2014)

bosshog7_2000 said:


> ... I would love to replace my 5D2 with something worth upgrading to....namely a 5D4 with this rumored new sensor.



Just imagine the dual AF and that new sexy sensor .... in MF ;-) What do you think? If this was more than just daydreaming - prepare thy kidney


----------



## LetTheRightLensIn (Oct 9, 2014)

jrista said:


> I'm curious if this new layered sensor is the one that uses UV and IR layers to remove skin blemishes. If so, is this next Canon DSLR a replacement for the 1Ds, a studio and portrait camera?



The patent with the UV/IR layers sounded like it was for P&S cams.


----------



## Woody (Oct 9, 2014)

Completely off-track here but I've a question for Fujifilm users. Doesn't Adobe still struggle with Fujifilm RAW files? I know there are improvements, but they are still not entirely artifact free. So, how does one cope with that?


----------



## LetTheRightLensIn (Oct 9, 2014)

drjlo said:


> Sigma Foveon sensors have high noise in low light/high ISO situations, so I hope Canon has effectively addressed that problem somehow.



And the effective resolution is a bit lower for the processing power and storage space required, and they have relatively modest low-ISO DR.

If Canon pulls it off well and sensibly, then maybe they will show us the impossible, and not in a silly marketing way.
It would be cool. We'll see. Not much word on a new fab, and this would require some shocking new tech.


----------



## Diko (Oct 9, 2014)

LetTheRightLensIn said:


> jrista said:
> 
> 
> > I'm curious if this new layered sensor is the one that uses UV and IR layers to remove skin blemishes. If so, is this next Canon DSLR a replacement for the 1Ds, a studio and portrait camera?
> ...



I don't believe that Canon would leave such a strong concept available only to P&S. It might also be available for security cams.


----------



## Woody (Oct 9, 2014)

Diko said:


> Neuro and I both wrote on *that article* mentioned...
> 
> And nobody can persuade me that there wasn't a call to that member of _Canon Explorer of Light_ who was told to tell CR about it...
> 
> Come on! That's cheap!



Judging by the numerous grammatical errors in those Adweek comments, I am pretty sure it's not the same Neuro in CR forum. ;D

Also, the Canon Explorer of Light has not revealed anything more than what Maeda said in the DPReview interview. ;D


----------



## V8Beast (Oct 9, 2014)

LetTheRightLensIn said:


> And the effective res is a bit lower per processing power and storage space required and they have relatively modest low ISO DR.



So what's the point, then? It sounds like there are lots of drawbacks with the only potential advantage being "truer colors." 

Does modest low ISO DR mean worse than current Canon sensors, similar to current Canon sensors, or similar to Canon sensors with the potential for a big DR increase given innovative tech/manufacturing?


----------



## pedro (Oct 9, 2014)

Looking forward to more information...good hints...meanwhile my 5D3 still rocks.


----------



## raptor3x (Oct 9, 2014)

Woody said:


> Completely off-track here but I've a question for Fujifilm users. Doesn't Adobe still struggle with Fujifilm RAW files? I know there are improvements, but they are still not entirely artifact free. So, how does one cope with that?



I've not noticed any artifacts but X-T1 RAW files processed in LR tend to be very soft.


----------



## Woody (Oct 9, 2014)

V8Beast said:


> So what's the point, then? It sounds like there are lots of drawbacks with the only potential advantage being "truer colors."
> 
> Does modest low ISO DR mean worse than current Canon sensors, similar to current Canon sensors, or similar to Canon sensors with the potential for a big DR increase given innovative tech/manufacturing?



There is another plus: DXOMark won't be able to grade the new sensor, just like they do not test Fujifilm sensors with their unique RGB layout. So, less DXOMark angst in these forums. ;D

As for DR, high ISO performance and AF speed, no one really knows how these will be affected. But if Canon is confident enough to introduce new sensor technology into their top end body, I am sure all these potential drawbacks have already been addressed. Let's hope we do not see a repeat of the 1D3 AF saga.


----------



## Good24 (Oct 9, 2014)

> Canon EF 50 f/1.4 IS in 2013 [CR2]
> « on: December 12, 2012, 03:00:12 PM »
> More non-L primes coming
> Expect to see a new EF 50 f/1.4 IS sometime in 2013

http://www.canonrumors.com/forum/index.php?topic=11586.0

I'm still waiting for my new 50mm despite CR2.


----------



## NancyP (Oct 9, 2014)

The Sigma Merrill Foveon sensor is cracking good on some subjects in some conditions. Dynamic range at low ISO is worse than current Canon, and don't bother trying at high ISO because the noise is awful. If the light is good, or you have your camera on a tripod with a non-moving subject, and the contrast is not excessive, the files are amazing. Yes, the camera writes files slowly, but as I use it almost exclusively for still life or landscape, who cares? Very much a specialist camera. I will be quite interested to see Canon's take on the multilayer sensor idea.

I will also be very interested to see how Adobe deals with this development, as it has resisted working with .x3f Sigma Foveon files (and to be fair, most other RAW developer platforms also don't support the Sigma files). The single most annoying thing about shooting with Sigma is the s*cky Sigma Photo Pro software, which is buggy, crashes, lacks some features, and is not the most user friendly. On the other hand, its fill light slider is awesome.


----------



## LetTheRightLensIn (Oct 9, 2014)

V8Beast said:


> LetTheRightLensIn said:
> 
> 
> > And the effective res is a bit lower per processing power and storage space required and they have relatively modest low ISO DR.
> ...



Maybe they have some totally new tech that lets it do better at both high ISO and low ISO than the regular sensors they are using now? And maybe it provides great colors, they have a ton of processing power to handle all the data, and the less efficient data storage is more than outweighed by how well they have the basic tech working. (CFA filters rob a lot of light; if they managed to capture almost all of it with three layers, that could be the only way left to give a big boost to high ISO at this point. But it would take some amazing new tech, as all the current three-layer tech actually does worse in most regards.)

If not, though, then it does seem potentially questionable, yeah.


----------



## neuroanatomist (Oct 9, 2014)

Woody said:


> Diko said:
> 
> 
> > Neuro and I both wrote on *that article* mentioned...
> ...



No, certainly not me. Likely one of the banned ex-CR trolls attempting a lame and pathetic sort of payback.


----------



## ScottyP (Oct 9, 2014)

neuroanatomist said:


> Woody said:
> 
> 
> > Diko said:
> ...



Haha. The writing (and spelling) was like 2nd grade skill level. They should have used a copy of Neuro's avatar, but with a little cross section of a peanut where the brain would go.


----------



## skoobey (Oct 9, 2014)

At this point, it's much better IQ and no anti-aliasing filter, with really good software to remove moire right away if needed, and I'll upgrade. Otherwise, I'll skip yet another generation.


----------



## Lee Jay (Oct 9, 2014)

A problem with Foveon sensors is lousy color separation. It's not a red, blue, and green layer; it's three white layers with a little bit of bias on each one. This is why they have lousy, inaccurate colors with lots of color artifacts like purple and green splotches all over the place.

I hope Canon has a way to dramatically improve on Foveon sensors before they'd release this into the wild. Foveons have lousy DR, lousy high ISO performance, lousy colors, and the lack of an AA filter means a ton of aliasing artifacts.


----------



## eml58 (Oct 9, 2014)

neuroanatomist said:


> No, certainly not me. Likely one of the banned ex-CR trolls attempting a lame and pathetic sort of payback.



Mt Spokane copped a mention as well; both sets of comments seemed off?

A Mt Spokane ashamed of being a Canon user? Don't think so.

A Neuro supportive of Sony & unable to spell or write with decent grammar? Don't think so.


----------



## jrista (Oct 9, 2014)

Lee Jay said:


> A problem with Foveon sensors is lousy color separation. It's not a red, blue and green layer, it's three white layers with a little bit of bias on each one. This is why they have lousy, inaccurate colors with lots of color artifacts like purple and green splotches all over the place.
> 
> I hope Canon has a way to dramatically improve on Foveon sensors before they'd release this into the wild. Foveon's have lousy DR, lousy high ISO performance, lousy colors, and the lack of an AA filter means a ton of aliasing artifacts.



Hmm, that hasn't been my experience with Foveon images. They seem to have pretty good color fidelity at low ISO. They also seem to handle blues quite well, which isn't surprising given that blue is the top layer.

I was never impressed with the high ISO capabilities, and I think their higher ISO noise is pretty splotchy...but Canon noise is often just as bad (only Canon color splotches tend to be primarily reddish, with a bit of green.)


----------



## Lee Jay (Oct 9, 2014)

jrista said:


> Lee Jay said:
> 
> 
> > A problem with Foveon sensors is lousy color separation. It's not a red, blue and green layer, it's three white layers with a little bit of bias on each one. This is why they have lousy, inaccurate colors with lots of color artifacts like purple and green splotches all over the place.
> ...



I'm talking about different splotches. They aren't a few pixels like chroma noise, they're a few thousand pixels.


----------



## jrista (Oct 9, 2014)

Lee Jay said:


> jrista said:
> 
> 
> > Lee Jay said:
> ...



Yeah, Canon RAWs have the same problem. I am not sure it's thousands of pixels, but certainly several hundred.


----------



## Woody (Oct 9, 2014)

LetTheRightLensIn said:


> Maybe they have some totally new tech that makes it able to do better at high ISO and low ISO than the regular type of sensors they are using now? And maybe it provides great colors and they have a ton of processing power to handle all the data and the less efficient data storage is more than outweighed by how well they have the basic tech working (CFA filters rob a lot of light, if they managed to capture almost all of it with three layer that could be the only way to give a big boost to high iso at this point; but it would take some amazing new tech as all the current three layer tech actually does worse in most regards)?
> 
> If not, though, then it does seem potentially questionable yeah.



One thing we ought to remember is that when Canon first embarked on CMOS sensor technology, there were lots of skeptics out there. At that time, most believed in the superiority of CCD to CMOS, until they were proven wrong. ;D

On another note, Sony also has a patent on multilayer sensor:
http://www.sonyalpharumors.com/sony-3-layer-patent-in-detail/

So, Canon is not alone. Almost appears as if Foveon type sensor is the Nirvana for sensor designers, despite Eric Fossum's doubts. ;D


----------



## dufflover (Oct 9, 2014)

A CR2 involving new, and actually "real" new sensor tech. Finally, bring it on!


----------



## SoullessPolack (Oct 9, 2014)

Can someone explain to me why they're working on this rather than more megapixels? Rather, why the focus is on more layers? I understand it's to better represent colors. But what is wrong with colors? My cameras have always had nice, realistic, vivid colors as long as I use a good lens. I've never had a photograph where I even had the slightest hint of a thought that the color was not accurate. When I look at my pictures, it looks like when I was there. Granted, the dynamic range is not the same, but we're not taking pictures with our eyes, so that's expected. What is it about color that makes them need to squeeze that last 0.01% of color accuracy out of the camera?


----------



## jrista (Oct 9, 2014)

SoullessPolack said:


> Can someone explain to me why they're working on this rather than more megapixels? Rather, why the focus is on more layers? I understand it's to better represent colors. But what is wrong with colors? My cameras have always nice, realistic, vivid colors as long as I use a good lens. I've never had a photograph where I even had the slightest hint of a thought that the color is not accurate. When I look at my pictures, it looks like when I was there. Granted, the dynamic range is not the same, but we're not taking pictures with our eyes, so that's expected. What is it about color that they need to squeeze that last 0.01% of color accuracy out of the camera?



It's more than just better color fidelity. Bayer sensors have sparse data (sparse color data, you generally have more complete luminance data, but it's still not ideal), and need to be debayered. Assuming Canon is able to create a layered sensor with similar photosite counts as bayer sensors today, say 20mp, the image from a layered sensor should be much more complete, more detailed, sharper. The only real drawback to current Foveon sensors is they are really low resolution. For cameras of similar resolutions, Foveon is better because it's sharper out of camera for the given file size.

Sparse color information, and the act of debayering, is a primary source of color noise. Canon weakened the color filters in their more recent sensors (excluding the 7D II...not sure about that one yet), which results in more color bleed between pixels of differing colors, which just makes the color noise issue even worse. Luminance information is also biased...while it's higher resolution than the color information, different color channels have different sensitivities. When the color profile tone curves are applied to correct that discrepancy, it exacerbates noise (both luminance and color.)

When you gather a full complement of color information at every photosite, if done right, you should have far lower color noise (doubtful it can be eliminated, but certainly lowered), and since every photosite gathers full luminance information, you won't get that increase in luminance noise due to different amplification of each color channel. 
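A toy sketch (purely illustrative, not any camera's actual pipeline) of the "sparse data" point: in an RGGB Bayer mosaic, only a fraction of pixels directly sample each color, while a layered design would sample all three at every pixel.

```python
# Toy illustration: in an RGGB Bayer mosaic each pixel directly samples
# only one color; a layered sensor would sample all three everywhere.

def bayer_sample_fractions(width, height):
    """Fraction of pixels directly sampling R, G and B in an RGGB mosaic."""
    counts = {"R": 0, "G": 0, "B": 0}
    for y in range(height):
        for x in range(width):
            if y % 2 == 0:                       # R G R G ... rows
                counts["R" if x % 2 == 0 else "G"] += 1
            else:                                # G B G B ... rows
                counts["G" if x % 2 == 0 else "B"] += 1
    total = width * height
    return {c: n / total for c, n in counts.items()}

print(bayer_sample_fractions(4, 4))  # {'R': 0.25, 'G': 0.5, 'B': 0.25}
# The missing 75% of R/B (and 50% of G) samples must be interpolated in
# debayering; a layered design would have 100% for each channel.
```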

There are a lot of benefits for moving to a layered sensor design. The difficulties lie in getting good sensitivity at each layer, and in handling the photodiode count. A 20mp layered sensor with three colors is 60 million photodiodes that need to be read out. That's roughly triple Canon's current highest pixel count...I don't think even DIGIC 6 can handle that at even moderately reasonable frame rates...assuming 14-bit, a 20mp RGB layered sensor could do maybe 3.3fps with a pair of DIGIC 6 (based on the 10fps frame rate of the 7D II.) At best, that's a slow studio camera. 

If Canon is intending to use this in the 1D X replacement, either they have something seriously powerful in DIGIC 7, or they are dramatically lowering the photosite count. If they released a 7mp RGB layered sensor with ~21 million photodiodes, they could get 12fps with dual DIGIC 5 or 6. They would need twice the throughput of DIGIC 5/6 to do 12fps at 14mp. They would need to process 2GB/s (which is basically the equivalent of eight DIGIC 5/6) to do 12fps at 28mp.
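The frame-rate arithmetic above can be sketched as follows; note that the throughput figure is only inferred from the 7D II (20MP at 10fps on dual DIGIC 6), not a published Canon spec.

```python
# Back-of-the-envelope fps estimate for a 3-layer sensor. The processing
# throughput is an assumption inferred from the 7D II (20MP x 10fps on
# dual DIGIC 6), for illustration only.

DUAL_DIGIC6_READS_PER_SEC = 20e6 * 10  # ~200M photosite reads/s

def layered_fps(megapixels, layers=3, throughput=DUAL_DIGIC6_READS_PER_SEC):
    """Max fps if every layer of every pixel is read out each frame."""
    photodiodes = megapixels * 1e6 * layers
    return throughput / photodiodes

print(f"20MP x 3 layers: {layered_fps(20):.1f} fps")  # ~3.3 fps
print(f" 7MP x 3 layers: {layered_fps(7):.1f} fps")
```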


----------



## Lee Jay (Oct 9, 2014)

SoullessPolack said:


> Can someone explain to me why they're working on this rather than more megapixels? Rather, why the focus is on more layers? I understand it's to better represent colors. But what is wrong with colors? My cameras have always nice, realistic, vivid colors as long as I use a good lens. I've never had a photograph where I even had the slightest hint of a thought that the color is not accurate. When I look at my pictures, it looks like when I was there. Granted, the dynamic range is not the same, but we're not taking pictures with our eyes, so that's expected. What is it about color that they need to squeeze that last 0.01% of color accuracy out of the camera?



The problem with Bayer cameras isn't color, it's that the Bayer dyes absorb something like half the light coming in. In theory, if you could go without that dye layer, you could gain a stop, or perhaps a bit more, of high ISO performance.
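To put Lee Jay's ballpark in numbers (the ~50% CFA transmission figure is his estimate, not a measurement), the stop gain from removing the filter layer is just a base-2 log:

```python
import math

# Rough sensitivity gain from removing the Bayer color filter array (CFA).

def stops_gained(cfa_transmission):
    """Extra stops of light if a CFA passing `cfa_transmission` of the
    incoming light were removed entirely."""
    return math.log2(1.0 / cfa_transmission)

print(stops_gained(0.5))              # 1.0 stop, matching the estimate
print(round(stops_gained(0.4), 2))    # 1.32 stops if the CFA is lossier
```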


----------



## wtlloyd (Oct 9, 2014)

So, at first I'm thinkin', "Here we go, 100 pages, easy".

Then I see only 4 pages in 8 hours... ???

And I realized, what kinda rumor is that? Multilayer sensors, what does that even mean? Stupid rumor, let's get back to talking about DR and diffraction and AA filters, and all that good stuff we're so familiar with...

How are you supposed to argue when you don't know what you're talking about?


----------



## Perio (Oct 9, 2014)

Guys, is it possible to get 16-bit images with FF DSLRs in theory? Would it give any real life benefit vs. 14-bit?


----------



## jrista (Oct 9, 2014)

Here is one of the layered sensor patents from a few years ago (2011):

http://translate.google.com/translate?hl=en&sl=ja&u=http://egami.blog.so-net.ne.jp/2013-05-22&prev=/search%3Fq%3Dhttp://egami.blog.so-net.ne.jp/2013-05-22%26client%3Dfirefox-a%26hs%3DC0u%26rls%3Dorg.mozilla:en-USfficial

This one seems to apply the nanocoating concept to the red layer. Nanocoating uses nanoscopic-scale spikes of differing sizes on a reflective surface to produce a non-abrupt transition layer. Reflections occur at abrupt transitions in refractive index, so by creating a non-abrupt transition layer, you can nearly eliminate reflections entirely. This is different from standard multicoating, which still allows reflections to occur; it just cancels them out via wave interference. 

Here is another patent from 2012:

http://translate.google.com/translate?hl=en&sl=ja&u=http://egami.blog.so-net.ne.jp/2013-05-22&prev=/search%3Fq%3Dhttp://egami.blog.so-net.ne.jp/2013-05-22%26client%3Dfirefox-a%26hs%3DC0u%26rls%3Dorg.mozilla:en-USfficial

This is another sensitivity-increasing patent. This apparently uses dielectric antireflective layers underneath the preceding layer to reduce ghosting reflections. Not sure if this is intended to be used in conjunction with the nanocoating of the red layer or not...it seems to explicitly call out the blue and green layers (which are higher up than the red layer).

Canon also has their more recent patent for the five-layer sensor with UV and IR layers:

http://translate.google.com/translate?sl=auto&tl=en&u=http://egami.blog.so-net.ne.jp/2014-06-27

This patent is interesting, because it seems to depict a multi-layered BSI design, at least based on the diagram of the sensor (all the transistors are on the back side...that alone would be HUGE for layered sensor sensitivity...if you look at the ChipWorks electron micrographs of current Foveon designs, the transistors take up a huge amount of die space, as Foveon is still an FSI design...which is probably the biggest reason that sensor suffers so in low light.)

It was discovered some time ago that infrared light diffuses and reflects back subcutaneously in human skin. It can be used to greatly reduce the appearance of skin blemishes (I found a page a while back that shows that most skin features effectively disappear when you shoot full infrared). I'm not sure what UV light does for skin...apparently Canon found something useful with UV light.

Anyway, wtlloyd, there's some reference material.  That's what Canon's got for multi-layered sensors.


----------



## neuroanatomist (Oct 9, 2014)

Perio said:


> Guys, is it possible to get 16-bit images with FF DSLRs in theory? Would it give any real life benefit vs. 14-bit?



Sure, if they start using 16-bit ADCs. Whether or not it's of real life benefit depends on the specific sensor designs, and on the importance you personally place on a couple extra stops of dynamic range.
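As a quick illustration of why extra ADC bits map to extra stops of *representable* range (whether a given sensor can actually fill them is the separate question raised above):

```python
# Each extra ADC bit doubles the number of output levels, i.e. one more
# stop of encodable range. Illustrative only.

def adc_levels(bits):
    """Number of distinct output codes for an ADC of the given bit depth."""
    return 2 ** bits

for bits in (12, 14, 16):
    print(f"{bits}-bit ADC: {adc_levels(bits)} levels")
# 16-bit has 4x the levels of 14-bit: two extra stops of encodable range.
```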


----------



## Lee Jay (Oct 9, 2014)

Perio said:


> Guys, is it possible to get 16-bit images with FF DSLRs in theory? Would it give any real life benefit vs. 14-bit?



Yes, and possibly.


----------



## BozillaNZ (Oct 9, 2014)

Perio said:


> Guys, is it possible to get 16-bit images with FF DSLRs in theory? Would it give any real life benefit vs. 14-bit?



For the current line of Canon sensors, even 14-bit is overkill; you would do just fine with 12-bit. The extra bits are only recording noise.

If you can make a sensor that outputs a clean 65,536 grades of shade, then a 16-bit signal path will unleash a lot more potential.

And no, I am not a drone.


----------



## jrista (Oct 9, 2014)

Perio said:


> Guys, is it possible to get 16-bit images with FF DSLRs in theory? Would it give any real life benefit vs. 14-bit?



You won't benefit from a higher bit depth if your full-well capacity in electrons is less than the maximum digital unit supported by the bit depth. With 14 bits, you can represent digital units from 0 through 2^14 - 1, or 16,383. With 16 bits, you can represent digital units from 0 through 2^16 - 1, or 65,535.

Most APS-C sensors don't have enough full-well electron charge to really benefit from 16-bit ADCs. Canon sensors top out at around 26,000e-. That's more than the 16k supported by 14 bits of data, but not enough more to warrant 65k. It may even be beneficial to "oversample" electrons, at base ISO, relative to the output bit depth. If you had ~32ke- FWC, you would effectively convert every two electrons into one digital unit. That's good dynamic range (oh please, don't let that be misinterpreted! ) A couple Nikon APS-C cameras have 30-40ke- FWC.

Full frame sensors, at least with current pixel sizes, gather a lot more charge per pixel than APS-C sensors. Most are over 55ke-, including Canon's older FF 1D-series cameras. The 5D II had almost exactly 65ke-, and even the old 5DC had over 55ke-. The 5D III, 6D, and 1D X all have FWCs over 65ke-. The D800 (and A7r) have 45ke- FWC, which is on the lower side, but the D810 cranks it up to nearly 80ke- at ISO 64. The A7s has a whopping 155ke- FWC at ISO 100.

I'd say that most FF cameras could benefit from a 16-bit ADC. Even Canon cameras, which still have high read noise, can benefit. You won't see an editing-latitude increase on a Canon camera (not with current read noise levels, anyway); however, overall you should still see improved tonal grading. Convert 65, 80, or 150 thousand electrons into 16k digital units and you're needlessly limiting your tonal range. Convert them into 65k digital units and you greatly expand your tonal range...that should mean smoother gradients, softer shadow falloff (until you hit the read noise floor), etc.

So, assuming you have the electron charge capacity in each pixel to support it, you could benefit from 16-bit ADC. I don't think many APS-C sensors currently would really benefit. I think most FF sensors could benefit, especially those with really high FWC counts...the 1D X, the 6D, the A7s, the D810.
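The relationship described above is easy to sketch numerically. The snippet below is a rough illustration, not anyone's official spec sheet: the full-well figures are the approximate values quoted in the post, and the "gain" is simply FWC spread across the ADC's code space.

```python
# Rough sketch: how many electrons one digital unit (ADU) represents at a
# given ADC bit depth, for the approximate full-well capacities quoted above.
full_well = {
    "Canon APS-C":   26_000,
    "5D Mark II":    65_000,
    "1D X":          68_000,
    "D810 (ISO 64)": 80_000,
    "A7s":          155_000,
}

def gain_e_per_adu(fwc, bits):
    """Electrons represented by one ADU when the full well spans the ADC range."""
    return fwc / (2 ** bits)

for name, fwc in full_well.items():
    g14 = gain_e_per_adu(fwc, 14)
    g16 = gain_e_per_adu(fwc, 16)
    print(f"{name:14s} 14-bit: {g14:5.2f} e-/ADU   16-bit: {g16:5.2f} e-/ADU")
```

A gain near 1.0 e-/ADU means every captured electron gets its own output code; a gain well above 1.0 means the ADC is throwing tonal resolution away.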


----------



## Etienne (Oct 9, 2014)

Multi-layer, 16 bit, super charged bosons, or whatever. They could have jellyfish tentacles inside for all I care. It's a new tech, and when it hits the street we'll see if it redefines "awesome" or not. 
What I do like is that the leapfrog game of competition appears to be on, and Canon, as we all should know, is not sleeping while Sony pumps out new tech.


----------



## BozillaNZ (Oct 9, 2014)

jrista said:


> I'd say that most FF cameras could benefit from a 16-bit ADC. Even Canon cameras, which still have high read noise, can benefit.



You won't gain anything by sampling the noise floor four times more precisely. I could just as well read the sensor at 12-bit, fill the lower 4 bits with a random noise generator, and get a "smoother tonal gradient".


----------



## wtlloyd (Oct 9, 2014)

Thanks! That's gonna be some good readin'!





jrista said:


> Here is one of the layered sensor patents from a few years ago (2011):
> 
> http://translate.google.com/translate?hl=en&sl=ja&u=http://egami.blog.so-net.ne.jp/2013-05-22&prev=/search%3Fq%3Dhttp://egami.blog.so-net.ne.jp/2013-05-22%26client%3Dfirefox-a%26hs%3DC0u%26rls%3Dorg.mozilla:en-USfficial
> 
> ...


----------



## jrista (Oct 9, 2014)

BozillaNZ said:


> jrista said:
> 
> 
> > I'd say that most FF cameras could benefit from a 16-bit ADC. Even Canon cameras, which still have high read noise, can benefit.
> ...



You're only thinking about the bottom range of the signal. Once you're above the read noise floor, it's clean signal limited only by photon shot noise. You very much do still benefit from higher bit depth in that (very vast) range of signal. 

Think about it: if read noise is 35e- and the maximum signal strength is 68,000e-, then with 16-bit conversion your gain is 1.0376 e-/ADU. That's almost unity gain...at ISO 100! Unity gain is what you want. With 14-bit conversion, your gain is 4.15 e-/ADU. So, with 14-bit, your read noise turns into 8-9 tonal levels. With 16-bit, your read noise turns into 33-34 tonal levels. That's the bottom of the signal, though. For 16-bit, you still have 65502 levels for all the signal detail above the noise floor. Any gradients in the image at tones 35 through 65535 are going to be smoother with 16-bit conversion than with 14-bit conversion.

You don't gain in editing latitude...you would still have the same amount of dynamic range...but you do gain in tonal fidelity.
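The arithmetic above can be reproduced in a few lines. This is a sketch under the post's own assumptions (68,000 e- full well, 35 e- read noise, and an ideal ADC whose gain is simply FWC over the code count):

```python
# Reproduce the post's figures: how many ADU a 35 e- read-noise floor spans
# at 14- vs 16-bit conversion, assuming a 68,000 e- full well.
FWC = 68_000
READ_NOISE = 35  # electrons

for bits in (14, 16):
    gain = FWC / (2 ** bits)          # e- per ADU
    noise_levels = READ_NOISE / gain  # ADU consumed by read noise
    clean_levels = 2 ** bits - noise_levels
    print(f"{bits}-bit: gain {gain:.4f} e-/ADU, "
          f"noise spans ~{noise_levels:.0f} levels, "
          f"~{clean_levels:.0f} levels above the floor")
```

At 14 bits the noise floor occupies roughly 8 codes; at 16 bits it occupies roughly 34, leaving about 65,502 codes for signal above it, matching the numbers in the post.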


----------



## V8Beast (Oct 9, 2014)

jrista said:


> I'd say that most FF cameras could benefit from a 16-bit ADC. Even Canon cameras, which still have high read noise, can benefit. You won't see an editing latitude increase on a Canon camera (not with current read noise levels anyway), however overall, you should still see improved tonal grading. Convert 65, 80, 150 thousand electrons into 16k digital units, and you're needlessly limiting your tonal range. Convert 65, 80, 150 thousand electrons into 65k digital units, and you greatly expand your tonal range...that should mean smoother gradients, softer shadow falloff (until you hit the read noise floor), etc.



Very interesting info. Other than higher bit ADC, what other methods can be used to improve tonal range? 

I'm often shooting product images of engine or suspension parts that are all slightly different shades of gray, white, or black. Since I have 100 percent control over the lighting, in these scenarios I'm far more interested in improvements in tonal range than dynamic range, although dynamic range is still a nice convenience. 

Similar product shots I've seen captured with medium format gear absolutely kick the $hit out of anything captured on 35mm sensors in terms of super-fine tonal gradations. It makes me envious, but I don't shoot nearly enough product gigs like this to warrant investing in medium format.


----------



## BozillaNZ (Oct 9, 2014)

jrista said:


> Think about it. If read noise is 35e-, and the maximum signal strength is 68,000e-. If you're doing 16-bit conversion, then your gain is 1.0376e-/ADU. That's almost unity gain...at ISO 100! Unity gain is what you want. With 14-bit conversion, your gain is 4.15e-/ADU. So, with 14-bit, your read noise turns into 8-9 tonal levels. With 16-bit, your read noise turns into 33-34 tonal levels. That's the bottom of the signal, though. For 16-bit, you still have 65502 levels for all the signal detail above the noise floor. Any gradients in the image at tones 35 through 65535 are going to be smoother with 16-bit conversion than with 14-bit conversion.



The noise floor does not only "exist" at the lower end of the signal range; it exists throughout the range, right up to the clip point.

Let's do a math exercise:

Here's a stream of signal:
0 10 100 1000 10000 100000

If my sensor has a noise floor of 3 bits (0~7), even if I sample it at full precision, I get this:
*3 12 107 1004 10002 100005*; my lowest 3 bits are drowned in noise, even in the highlights.

If I sample it with 1/8 precision (chops off lower 3 bits), I get:
0 1 13 125 1250 12500

Then recreate the signal by multiplying the sample signal by 8 times:
0 8 104 1000 10000 100000

And mix it with random number generator for 3-bits:
*5 10 109 1001 10007 100004*

The results of high- and low-precision sampling only fluctuate within the noise floor, so they are essentially the same.

The conclusion? If you are sampling with more precision than your SNR supports, you are just sampling the noise more precisely, which is still noise, and the same as sampling less precisely and adding noise in post.

Apply this to your example: with a 35e- noise floor, your tonal range is not 65535 - 35 = 65500 levels, but rather 65535/35 = ~1872 levels (~10.8 stops), because the bottom 5 (!) bits are unstable noise.
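The exercise above can be simulated directly. This is illustrative only: it assumes a uniform random noise model, and uses 65,000 as the top value so everything fits in a 16-bit range.

```python
import random

# Simulate the exercise: full-precision sampling of a noisy signal vs
# truncating below the noise floor and re-adding random noise afterwards.
random.seed(0)
true_signal = [0, 10, 100, 1000, 10000, 65000]
NOISE = 8  # 3-bit noise floor (values 0..7)

def noisy(x):
    # Add uniform "read noise" below the floor.
    return x + random.randrange(NOISE)

full_precision = [noisy(x) for x in true_signal]
coarse = [noisy(x) // NOISE for x in true_signal]  # sample at 1/8 precision
reconstructed = [c * NOISE + random.randrange(NOISE) for c in coarse]  # re-dither

for t, a, b in zip(true_signal, full_precision, reconstructed):
    # Both versions land within a couple of noise floors of the true value.
    assert abs(a - t) < 2 * NOISE
    assert abs(b - t) < 2 * NOISE
print("both samplings agree to within the noise floor")
```

Neither version recovers the bits under the floor; they differ only by which random fluctuation got recorded, which is the post's point.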


----------



## jrista (Oct 9, 2014)

BozillaNZ said:


> jrista said:
> 
> 
> > Think about it. If read noise is 35e-, and the maximum signal strength is 68,000e-. If you're doing 16-bit conversion, then your gain is 1.0376e-/ADU. That's almost unity gain...at ISO 100! Unity gain is what you want. With 14-bit conversion, your gain is 4.15e-/ADU. So, with 14-bit, your read noise turns into 8-9 tonal levels. With 16-bit, your read noise turns into 33-34 tonal levels. That's the bottom of the signal, though. For 16-bit, you still have 65502 levels for all the signal detail above the noise floor. Any gradients in the image at tones 35 through 65535 are going to be smoother with 16-bit conversion than with 14-bit conversion.
> ...



I don't disagree, however I would still take a finer sampling of noise over a coarser sampling of noise. I mean, if your sensor has an analog range up to 68,000e- with 35e- of noise, that's how far the noise can fluctuate. You might have a bunch of pixels where the correct signal value for a midtone gray is 34,000e-. With noise, the readings randomly fluctuate by up to 35e- around that value, so you might have:

34,004
34,035
34,009
34,014
34,010
34,020
34,012
34,000

If you sample these analog values at 14-bit, you get the following:

8193
8200
8194
8195
8194
8196
8194
8192

If you sample them at 16-bit, you get the following:

32772
32802
32777
32781
32778
32787
32780
32768

If you multiply the 14-bit samples by four to put them into the same numeric range as the 16-bit sampling:

32772
32800
32776
32780
32776
32784
32776
32768

I'd rather take the finer and more random sampling that a 16-bit ADC allows than the often-repetitive sampling that a 14-bit ADC gives. It's that repetitive sampling with fewer bits that can lead to posterization in smooth gradients (like sky), lower-SNR regions (shadows), etc. I'll take a finer, more random sampling of noise any day...despite the fact that it's still just sampling noise.
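The comparison above is easy to reproduce with an idealized ADC that maps a 68,000 e- full range onto each code space (a sketch; the exact rounding convention differs slightly from the hand-calculated numbers in the post):

```python
# Quantize the same noisy analog values (electrons) at 14 and 16 bits and
# count how many distinct output codes survive at each depth.
FWC = 68_000
samples = [34_004, 34_035, 34_009, 34_014, 34_010, 34_020, 34_012, 34_000]

def adc(value, bits):
    """Ideal ADC: map [0, FWC] onto integer codes [0, 2**bits - 1]."""
    return round(value * (2 ** bits - 1) / FWC)

q14 = [adc(v, 14) for v in samples]
q16 = [adc(v, 16) for v in samples]

print("14-bit codes:", q14, "->", len(set(q14)), "distinct")
print("16-bit codes:", q16, "->", len(set(q16)), "distinct")
```

At 14 bits several inputs collapse onto the same code, while at 16 bits every sample gets its own code, which is the repeated-vs-distinct pattern the lists above show.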


----------



## BozillaNZ (Oct 9, 2014)

The kicker is, you can take the 14-bit samples, scale them up to 16-bit, then add 2 bits of randomly generated noise, and get an indistinguishable (same) result. You can't get something for nothing, really.

I've been doing a lot of raw data manipulation recently; Canon 12-bit RAW files have a range of 127-3850. Seems pretty low, doesn't it? But the tonal range and DR of this model (1Ds II) on DxO are still not much different from newer 14-bit sensors.


----------



## dgatwood (Oct 9, 2014)

hoodlum said:


> I don't expect this sensor to be used in a high speed body due to the low sensor read-out speed of a stacked sensor. That would also explain why the 7Dii was just replaced with existing sensor tech. I could see a totally new body announced as a niche for landscape or other areas that require greater detail without the need for high FPS. Video would also not work very well with this sensor due to the low read-out speed.



Readout speed is almost infinitely parallelizable, as is image compression/encoding. Performance is almost entirely a question of how much hardware they decide to throw at it. If they're feeling particularly nuts, they could use a two-sided silicon wafer, put all the circuitry on the back side with vias, and use a per-pixel ADC, as in this design....


----------



## Lawliet (Oct 9, 2014)

dgatwood said:


> Readout speed is almost infinitely parallelizable, as is image compression/encoding. Performance is almost entirely a question of how much hardware they decide to throw at it. If they're feeling particularly nuts, they could use a two-sided silicon wafer, put all the circuitry on the back side with vias, and use a per-pixel ADC, as in this design....



For starters, one could take a look at the NX1: in its action-trigger mode it reads and evaluates all of those 28MP at a quite impressive 240 fps. Enough power to read 8K video at 120fps for slow motion, for example, or 36M full-RGB pixels faster than the 1D X manages now. Well, at 12-bit; still not bad compared to 14-bit with the lower bits filled with a blend of random and pattern noise.
Not that a 1DsIII-successor would need such framerates...most of the time the strobes would be the limiting factor anyway.
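As a rough sanity check of the throughput figures above (using nominal 8K UHD dimensions of 7680x4320 as an assumption):

```python
# Pixel-rate comparison for the readout claims above.
nx1_rate = 28e6 * 240            # NX1: 28 MP read at 240 fps
uhd_8k_rate = 7680 * 4320 * 120  # assumed 8K UHD frame at 120 fps

print(f"NX1 readout:  {nx1_rate / 1e9:.2f} Gpix/s")
print(f"8K @ 120 fps: {uhd_8k_rate / 1e9:.2f} Gpix/s")
```

The 28 MP x 240 fps figure works out to roughly 6.7 Gpix/s, comfortably above the roughly 4 Gpix/s that 8K at 120 fps would require, so the claim is at least arithmetically consistent.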


----------



## ChristopherMarkPerez (Oct 9, 2014)

Interesting. I hope this comes to market.

It could explain why Canon has been so quiet on the topic of offering a Bigger Is Better sensor while Sony has been extending the old Bayer technologies. New designs, new processes and, very likely, the development of new tooling have to have cost Canon a boat-load of money.

Could it be that some of these sensors (escaped from the lab as demo-units) were the ones shown around NY recently (as rumored on this site)?


----------



## sigint (Oct 9, 2014)

As an old Sigma camera user, I can tell you that if this rumor is real, you can only expect:


- A bulky, high-end, top-of-the-line camera (it needs a triple memory buffer, a specially designed multicore GPU, and a much bigger battery)

- A full-frame sensor of at most 7 Mpix per layer, so the nominal equivalent of 35 Mpix in Bayer terms, but in real-life tests it will deliver ~20-25 Mpix IQ

- 5 layers will generate so much data to process that it will be a very slow camera, max 7-8 FPS (5 layers also sidestep a potential patent lawsuit from Sigma, though that is uncertain, because Sigma can't patent the physical properties of silicon that multilayer sensors exploit, only methods of image data processing)

- Canon can use the IR and UV layers only in a minimal way; this could potentially reduce the data stream and speed up in-camera image processing

- An ISO range from 100 to 6400, useful in color to 1800 and in monochrome to 6400; it will blow away the IQ of Foveon sensors if Canon can get it right to ISO 1800 in color

- Multilayer sensors need telecentric lenses to avoid many problems with image sharpness in the corners

- A very good IR-cut filter to avoid the green-corners problem (with a 3-layer sensor), or, if 5 layers are used, a special method to manage this problem, which plagued Sigma users for many years

- And finally the biggest problem: a newly designed high-speed RAW developer. Even Sigma can't manage this to date; it takes ages to process files from the new sensor, even on very new computers

If Canon hits the market with this multilayer sensor, lots of users will be forced to buy very expensive new high-end computers that can handle the RAWs in a reasonable time without frustrating their workflow (a single Quattro RAW file can take up to 60 seconds to develop), which is currently the main factor driving escapes from the Sigma camp.

Look at how long it takes to develop just one RAW file from the Sigma DP2 Quattro and you get a taste of a future Canon multilayer camera's workflow; keep in mind that was an APS-C sensor, and imagine what hell it will be with full-frame files.

http://www.youtube.com/watch?list=UUqpOf_Nl5F4tjwlxOVS6h8A&v=o7ktvDUyTyU&feature=player_detailpage#t=603

The newest Foveon sensor is a terrible mistake. Almost half a year after the Quattro's introduction, Sigma still can't get the noise level right even at base ISO 100. Just look at the pictures: there is such distinctive noise in blues and in the light parts of the images that even older sensors, not only the Merrill but even the 1.7-crop Foveon sensors, get better results. There is an obvious engineering mistake in the Quattro Foveon sensor, using the blue layer at 4x the resolution of the others, which can't be corrected in software; even Sigma's pro photographers are talking about it.

So Sigma is in a blind alley with the new sensor, and sales are dropping. Sigma users are reluctant to buy the Quattro; instead they buy the cheap Merrill, or skip the Quattro and wait the next 2-3 years for a new generation of Foveon sensors, and in the meantime they are also buying Fujifilm MILCs (Sigma doesn't have a MILC or a DP zoom on offer, and Sigma DSLRs are not in the same IQ league as the DP series).

Sigint


----------



## lintoni (Oct 9, 2014)

ChristopherMarkPerez said:


> Interesting.  I hope this comes to market.
> 
> It could explain why Canon has been so quiet on the topic of offering a Bigger Is Better sensor while Sony has been extending the old Bayer technologies. New designs, new processes and, very likely, the development of new tooling have to have cost Canon a boat-load of money.
> 
> *Could it be that some of these sensors (escaped from the lab as demo-units) were the ones shown around NY recently (as rumored on this site)?*



http://www.canonrumors.com/2014/05/new-full-frame-camera-in-testing-cr1-2/



> "much" better colour accuracy and detail



That certainly sounds like something to expect from a Foveon type sensor.


----------



## ChristopherMarkPerez (Oct 9, 2014)

So... would a multi-layer sensor favor cameras with mirror-boxes as a way of getting the light down the layered tunnel at the edges? Might existing SLR lens designs work better on such a sensor?

;D ;D ;D



sigint said:


> ...
> - Multilayer sensors need telecentric lenses to avoid many problems with image sharpness in the corners...


----------



## danski0224 (Oct 9, 2014)

The file sizes from a Sigma DP2 Merrill are much larger than even a Canon 5DIII, and the Sigma DP2 is "15 MP".

The X3F files (RAW) are about 45 MB each at full resolution. Nothing but the Sigma software reads them, as far as I know.

They also take a bit of time to open up on a computer once the Sigma Photo Pro software kicks in. I don't have a USB3 reader or the fastest SD card, though.

Writing the image files in-camera to the SD card also takes a while. It is certainly 2 or 3 seconds from pressing the shutter to being able to review the image on the camera screen, which is an eternity compared to current Canon cameras. It is entirely comparable to the Canon 1Ds MK 1 though.

For color, ISO 100 or 200 is pretty much it. Monochrome is something else.

Battery life is abysmal on the DP2 Merrill cameras. Upside is the batteries are inexpensive.

I have not used the "pro" Sigma Foveon cameras.

Canon would have an enormous amount of work to do to bring the Sigma DP2/DP3 shooting process acceptably close to even 5DII operational standards, exclusive of high ISO (5DII-level high ISO to 1D X-level high ISO would be amazing). 

When you get a keeper with a Merrill, it's a good one though.


----------



## Destin (Oct 9, 2014)

Just throwing it out there..... could it be multiple layers for highlights and shadows rather than RGB?


----------



## Woody (Oct 9, 2014)

sigint said:


> As a old Sigma cams user, i can tell you that if this rumor is real you can only expect...
> 
> Newest Foveon sensor it's a terrible mistake, there is almost half a year after Quattro introduction and Sigma can't get noise level right even at base ISO100, just look at pictures there is so distinctive noise in blues and light parts of the images that even older sensors not only Merrill but even 1.7 crop Foveon sensors can get better results.



This is all very amusing to me. When Sigma Foveon cameras first hit the market, users were literally RAVING about them. They were putting down the Bayer sensors on the market, especially the CMOS sensors from Canon. Of course, all problems associated with Foveon sensors were either disputed, denied or dismissed.

Now, the skeletons are coming out of the closet. I am wondering if this may also be true for current users of other competing brands... you know who you are. ;D

Anyway, as I said earlier, if this multilayer sensor technology indeed comes to pass, we can be sure most of the apparent drawbacks will have been ironed out already. Didn't folks make the same dire prognostications when Canon first announced their implementation of CMOS instead of CCD sensors? Sigma may not be able to resolve the accompanying issues of multilayer sensors, but this does not imply Canon and Sony (which also has patents on multilayer sensors) don't have other clever tricks up their sleeves.


----------



## memoriaphoto (Oct 9, 2014)

Destin said:


> Just throwing it out there..... could it be multiple layers for highlights and shadows rather than RGB?



Interesting thought. Something like the legendary (?) Fuji S5 Pro. I guess that would dramatically increase the DR performance.

Personally, though, I hope for a more "true to life" color approach out of camera, with film-like quality. The color palette of older Canons was/is more pleasing, imho. Lately Canon has focused too much on high-ISO performance with thinner CFAs, which has required more color work in post for us VERY RARE daylight shooters ;-)


----------



## jeffa4444 (Oct 9, 2014)

A pro in a particular line of photography has already alluded to a 45MP camera. What will be interesting is whether it will find its way into the Cinema line of the C300 / C500, as Canon has cheaper competition from Sony with the FS7, which trumps them in many ways the FS700 never did. The C300 was announced in late 2011 and didn't have the competition it now has in Blackmagic, AJA, Panasonic & Sony.


----------



## jeffa4444 (Oct 9, 2014)

We also shouldn't forget that Fuji & Panasonic are collaborating on organic sensors with BSI. 

http://www.fujifilm.com/news/n130611.html

http://www.fujirumors.com/updated-organic-sensor-patent/

And also Sony. 

http://thenewcamera.com/sony-hybrid-organic-inorganic-sensor-patent/

All due in cameras in 2015.


----------



## LetTheRightLensIn (Oct 9, 2014)

Lee Jay said:


> A problem with Foveon sensors is lousy color separation. It's not a red, blue and green layer, it's three white layers with a little bit of bias on each one. This is why they have lousy, inaccurate colors with lots of color artifacts like purple and green splotches all over the place.
> 
> I hope Canon has a way to dramatically improve on Foveon sensors before they'd release this into the wild. Foveon's have lousy DR, lousy high ISO performance, lousy colors, and the lack of an AA filter means a ton of aliasing artifacts.



Agreed, they'd have to have had mega tech breakthroughs. That seems unlikely. However, I don't know that there is anything in basic physics that instantly jumps out at you and says that such a breakthrough would be impossible in this case.


----------



## LetTheRightLensIn (Oct 9, 2014)

BozillaNZ said:


> jrista said:
> 
> 
> > Think about it. If read noise is 35e-, and the maximum signal strength is 68,000e-. If you're doing 16-bit conversion, then your gain is 1.0376e-/ADU. That's almost unity gain...at ISO 100! Unity gain is what you want. With 14-bit conversion, your gain is 4.15e-/ADU. So, with 14-bit, your read noise turns into 8-9 tonal levels. With 16-bit, your read noise turns into 33-34 tonal levels. That's the bottom of the signal, though. For 16-bit, you still have 65502 levels for all the signal detail above the noise floor. Any gradients in the image at tones 35 through 65535 are going to be smoother with 16-bit conversion than with 14-bit conversion.
> ...



+1


----------



## LetTheRightLensIn (Oct 9, 2014)

V8Beast said:


> jrista said:
> 
> 
> > I'd say that most FF cameras could benefit from a 16-bit ADC. Even Canon cameras, which still have high read noise, can benefit. You won't see an editing latitude increase on a Canon camera (not with current read noise levels anyway), however overall, you should still see improved tonal grading. Convert 65, 80, 150 thousand electrons into 16k digital units, and you're needlessly limiting your tonal range. Convert 65, 80, 150 thousand electrons into 65k digital units, and you greatly expand your tonal range...that should mean smoother gradients, softer shadow falloff (until you hit the read noise floor), etc.
> ...



Improving SNR/DR so you can make use of more captured bits per channel, and making the color filter array super well tuned (and likely a lot less color-blind than CFAs have become; although then you run into the issue of decreasing SNR, so it's a balance, but since noise is not so bad in the upper tones at low ISO, I think you'd do better with a more tightly refined color filter array). You may also want a display with 10 bits or more per channel.


----------



## StudentOfLight (Oct 9, 2014)

I wonder if it would be helpful to have a quasi-Bayeresque multi-layer sensor. 

So instead of every pixel having what I assume would be a B/G/R stack, they could intersperse some pixels with other channel arrangements (e.g. R/G/B).


----------



## bchernicoff (Oct 9, 2014)

Woody said:


> Completely off-track here but I've a question for Fujifilm users. Doesn't Adobe still struggle with Fujifilm RAW files? I know there are improvements, but they are still not entirely artifact free. So, how does one cope with that?



No, starting with Lightroom 5.4 and the associated Camera Raw release, it's perfectly fine. They also added the Fuji camera profiles (Provia, Astia, etc.). The camera JPGs and other RAW converters (especially PhotoNinja) can pull a bit more detail out than Lightroom can, but I only worry about that for a large print or a tightly cropped image. LR is great 95% of the time. Here is a comparison: I took a screenshot while zoomed to 100% in Lightroom; RAF is left, JPG is right. I chose an area with green because that is where detail is hardest to come by. Edit: the forum recompresses the upload when shown here in the post. You can right-click on it and choose "Open image in new tab" in the Chrome web browser to see the full-size screenshot. Other browsers are similar.

Also here is a link to view and download the camera JPG in Google Drive: https://drive.google.com/file/d/0Bxu8IhRmJPZ2Rmc2c1BwUmNwd1E/view?usp=sharing


----------



## Lee Jay (Oct 9, 2014)

jrista said:


> StudentOfLight said:
> 
> 
> > I wonder if it would be helpful to have a quasi-bayeresque multi-layer sensor.
> ...



Dichroic and trichroic filters have been used before, most notably in 3-CCD camcorders.


----------



## Lee Jay (Oct 9, 2014)

jrista said:


> Lee Jay said:
> 
> 
> > jrista said:
> ...



You saw this?

http://www.dpreview.com/articles/6555348105/nikonimagesensor


----------



## Diko (Oct 10, 2014)

Yep. That Panasonic idea is quite interesting, and I'm hearing about it here for the first time. Seems I missed it.

About the 16-bit talk a few posts earlier... I wonder about one alternative (aside from sports photography, maybe): Canon could develop an image processor that uses stacked/HDR-like photos to produce a 32-bit TIFF or even a RAW (which is similar). Perhaps combined with that dual-ISO exploit from ML (whose mechanics I honestly don't remember very well). I quite enjoy the Adobe PS feature for 32-bit TIFFs, but native is always better. I admit 10 FPS might not be possible, but who knows... Anyway, the tech is here: hardly much R&D is required for this kind of feature compared to a 16-bit sensor option. Although, having in mind jrista's data on the wells... who knows. But 32 is always bigger than 16 ;-)


jrista, did you get your numbers from http://sensorgen.info/ ? BTW, their domain has expired... Bah! The Google cache is still there.


----------



## Woody (Oct 10, 2014)

jrista said:


> Canon, while a highly innovative company, doesn't seem to be all that innovative when it comes to sensors. (Which is REALLY weird to me...given that they are an imaging company...pretty much everything they do revolves around digital image sensor technology.) Canon just doesn't seem to be in the sensor innovation game right now...
> 
> And this is with Aptina's current HDR technology...I can only imagine what they can do with multi-bucket technology.



Canon did change the sensor game when they first started with CMOS technology instead of the widely accepted CCD stuff. But after Sony embarked on their own CMOS sensors (first shown in D300/D3), Canon basically stagnated. Sigh... I am still waiting for them to leapfrog Sony... not sure if the day will ever come...


----------



## Diko (Oct 10, 2014)

jrista said:


> Yeah, I'm bummed the sensorgen.info domain expired. Those guys do seem to update the site with recent cameras...maybe it will come back when they notice the domain expired.


I will try to ask around about it :-(



jrista said:


> Regarding the whole stacking/in-sensor HDR idea. Other companies are working on that. There are patents that support just that very thing. Some companies are even researching how to use dual-gain (basically the same thing as Magic Lantern's Dual ISO) to improve dynamic range WELL beyond 14 stops (20, 22 stops maybe more).


Are you sure about that? Still, a simple stack of 3/5/9 frames at different stops, embedded in one 32-bit image, is the best option even without the multi-bucketing. The latter would only be useful for sports & high-FPS photography. 

However, if you have some time, could I kindly ask for a link or two to the patents? They would certainly be of high interest to me, and you definitely read patents better than I do. :-[



jrista said:


> This is why I'm frustrated with Canon, and now open to alternative brands. Canon, while a highly innovative company, doesn't seem to be all that innovative when it comes to sensors. (Which is REALLY weird to me...given that they are an imaging company...pretty much everything they do revolves around digital image sensor technology.) Canon just doesn't seem to be in the sensor innovation game right now. They have had a couple, but none of it (at least so far...maybe their layered sensors will change things) has been very ground breaking. DPAF is pretty awesome for video...but even that was an evolution on an idea Fuji originally implemented (and I think Fuji got it from a much older paper.)


Yeah! Tell me about it.

On at least one point, that *Neuro imposter was IMO right*:

"_the real problems are the managers at canon
they are lazy and have made their fortune.
hell is there even one under 60?_"

Or Canon was more conservative on R&D spending (which I believe is not the case since if I recall correctly they have kept investing about 2% of their profit).



jrista said:


> Back to the HDR sensor topic. I think the most viable technology at the moment is multi-bucket CCD-backed pixels. This is an Aptina innovation...they took the basic single-CCD backing buffer or "memory" for global shutter, and expanded it to four CCDs per pixel (it's a CCD, or charge-coupled device, as that's a very simple and efficient way to move charge from one place to another). With four memories per pixel, the charge in the pixel can be moved to memory four times.


 Patent link, patent link PLZZZzzzzz 

As for global shutter, don't even remind me about that. Everybody is slowly getting on that ship too. If I remember correctly, Canon has no such feature in any of its CMOS sensors in any pro-level camera.

And that is a good trend to follow at least.


----------



## Diko (Oct 10, 2014)

jrista said:


> Not the patent, not sure where that is. Here is a paper on the topic, though:
> 
> https://graphics.stanford.edu/papers/gordon-multibucket-jssc12.pdf
> 
> Mostly by Aptina engineers.


Nice one, but the main goal should be to get all 32 bits of information, not some discrete auto-HDR image.

I love to walk around with a brush or any other PS tool in order to get what I want.


----------



## Jon_D (Oct 10, 2014)

Will Canon use more than 3 layers?

What do the patents say?


----------



## dgatwood (Oct 12, 2014)

jrista said:


> Jon_D said:
> 
> 
> > Will Canon use more than 3 layer?
> ...



I would expect the extra information to be included in RAW files as a separate data layer, so that post-processing tools could benefit from it in the same way. That would also have the rather sizable advantage of making it possible to do IR photography with an unmodified camera just by changing the way that you combine the various layers in post-processing, which would be really cool.

Incidentally, there shouldn't be any demosaicing with a multilayer sensor. That's the whole point of having spatially coincident subpixels.
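That "combine the layers in post" idea can be sketched as a simple matrix multiply over spatially coincident subpixels. The 5-layer ordering and the weight values below are purely hypothetical:

```python
import numpy as np

def combine_layers(layers, weights):
    """Collapse N spatially coincident sensor layers into RGB.

    layers:  (H, W, N) array of per-layer intensities
    weights: (N, 3) mixing matrix mapping layers to R, G, B
    """
    return layers @ weights

# Hypothetical 5-layer stack ordering: UV, B, G, R, IR.
# For a normal photo the UV/IR rows are zeroed; for "IR photography
# in post" you would instead weight the IR layer into the channels.
visible = np.array([
    [0.0, 0.0, 0.0],   # UV ignored
    [0.0, 0.0, 1.0],   # blue layer  -> B
    [0.0, 1.0, 0.0],   # green layer -> G
    [1.0, 0.0, 0.0],   # red layer   -> R
    [0.0, 0.0, 0.0],   # IR ignored
])

img = np.random.rand(4, 6, 5)          # tiny 4x6 test frame, 5 layers
rgb = combine_layers(img, visible)     # (4, 6, 3)
```

Since there is no mosaic, the same raw file supports any such mix; switching to an IR rendering is just a different weight matrix, no reshoot needed.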


----------



## dgatwood (Oct 12, 2014)

jrista said:


> It would also be a tremendous amount of data, and a lot more data to be factored into image processing. Five layers at 25 megapixels is 125 million photodiodes. At 14 bits, that's around 235-245 megabytes *per image*. RAW editors would also have to add the right kind of support to utilize those extra layers.



Even three layers would be unworkable uncompressed at 25 megapixels per layer. It's hard enough to deal with 25–30 megabyte image files, much less four times that. They're clearly going to have to come up with a good lossless compression algorithm. A lossless scheme similar to PNG should get you about 2.7:1 compression, which means about 81 MB with all five layers included, or 49 MB with only three layers. But I think it is possible to do better than 2.7:1. After all, the high order bits of nearby pixels are likely to be fairly similar except near high-contrast edges, and the more bit depth you have, the more identical bits you'll probably have.
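Those size estimates are easy to check; a quick sketch using the figures above (25 MP per layer, 14 bits per photodiode, and the assumed 2.7:1 lossless ratio):

```python
def raw_size_mb(megapixels, layers, bits=14, compression=1.0):
    """Raw file size in (decimal) megabytes for a stacked sensor."""
    bytes_total = megapixels * 1e6 * layers * bits / 8
    return bytes_total / compression / 1e6

five_raw  = raw_size_mb(25, 5)                   # ~219 MB uncompressed
five_png  = raw_size_mb(25, 5, compression=2.7)  # ~81 MB at 2.7:1
three_png = raw_size_mb(25, 3, compression=2.7)  # ~49 MB for three layers
```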


----------



## Lee Jay (Oct 12, 2014)

jrista said:


> dgatwood said:
> 
> 
> > jrista said:
> ...



So what? When you're working in Lightroom or Camera Raw you're working on demosaiced data anyway, at 16 bits per channel for four channels. The size is 8 bytes * pixel count.


----------



## modeleste (Oct 12, 2014)

> I agree, it would be pretty cool to have an IR layer that was usable, but I don't think it'll happen immediately. I just don't see image files that large being...viable.

I don't agree at all. It might not be viable on some systems, but much larger files are quite viable on today's pro systems. I regularly work with stitched aerial photos that can be tens or hundreds of gigs in size.

20 years ago a Photoshop filter could slow a Mac Pro down.
Now they can apply filters in real time to 4K video. For still photos we've had grossly excessive computing power available for a long time.

A tablet might be overwhelmed, but there are certainly computers you can buy that wouldn't break a sweat over a 5-layer Phase One file.


----------



## Lee Jay (Oct 12, 2014)

jrista said:


> You're *rendering* demosaiced data, which doesn't necessarily require the constant memory space.



Yes, it does. When you're in the Develop module of Lightroom or using Camera Raw, the entire 64 bit per-pixel image is in memory. The rendered view is on top of that.


----------



## StudentOfLight (Oct 12, 2014)

A few questions pertaining to the usefulness of capturing IR data in a separate channel:
1) Can humans see infrared?
2) How much of the IR spectrum can be transmitted through DSLR lenses?
3) Can you gain added colour accuracy by sampling additional channels that overlap with wavelengths outside human visual perception?
4) For a given ISO and aperture, what is the difference in exposure time needed to create an IR image vs. a visible-light image?


----------



## Lee Jay (Oct 12, 2014)

StudentOfLight said:


> A few questions pertaining to the usefulness of capturing IR data in a separate channel:
> 1) Can humans see InfraRed?


No.


> 2) How much of the IR spectrum can be transmitted through DSLR lenses?


All of the near-IR spectrum, but little gets through the sensor's IR filter.


> 3) Can you gain added colour accuracy by sampling additional channels which overlap with wavelengths outside human visual perception?


I doubt it.


> 4) For a given ISO and Aperture, what is the difference in exposure time needed to create an IR image vs a visible light image?


With the IR filter in place, orders of magnitude.


----------



## StudentOfLight (Oct 12, 2014)

Lee Jay said:


> StudentOfLight said:
> 
> 
> > A few questions pertaining to the usefulness of capturing IR data in a separate channel:
> ...


Regarding 4), I meant with an IR-modified camera or, for example, with the 60Da.


----------



## Lee Jay (Oct 12, 2014)

StudentOfLight said:


> I meant with an IR-modified camera or for example with the 60Da



Oh...in that case, roughly the same (give or take a stop or two - not orders of magnitude).


----------



## modeleste (Oct 12, 2014)

"- 5 layers will generate so much data to process that it will be very slow camera, max 7-8 FPS (5 layers are used to run potential patent lawsuit from Sigma, but it is uncertain because Sigma can't patent physical properties of silicon which are used in multilayered sensors, but only methods of image data processing.)"

But remember, for some of us the issues you cite aren't drawbacks.

I mostly shoot landscape and architecture, almost always on a tripod, and rarely go over ISO 100 (in the film days I almost never bought anything other than ISO 25 and 64).

ISO and FPS just don't matter to people like me.

I'd jump to buy a camera that had a max ISO of 100 and 1 frame per 10 seconds if it offered better resolution and DR.

Actually, I'm looking hard at a used last-generation 80 MP Phase One back, which is quite limited in ISO and fps.

(The prices of the last generation of MF backs seem to be crashing now that the Sony MF sensors are out, offering much improved high-ISO performance and live view.)

However, this rumor has me thinking I'll put that purchase off in the hope that the new Canon camera can meet my needs just as well.


----------



## Lee Jay (Oct 12, 2014)

modeleste said:


> I'd jump to buy a camera that had a max ISO of 100 and 1 frame per 10 seconds if it offered better resolution and DR.



Get a 7D Mark II and a Gigapan. In 10 seconds, you can shoot something like a 9-shot panorama with a 5-shot bracket at each spot. That should get you 16+ stops of DR and 115 megapixels.
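A back-of-the-envelope check on those figures, with my own assumed numbers (20.2 MP frames, ~36% of each frame's area lost to overlap, ~11 stops of base DR, and a 5-shot bracket at 1.5 EV spacing):

```python
def pano_estimate(frames, mp_per_frame, overlap_frac,
                  base_dr_stops, bracket_shots, bracket_step_ev):
    """Rough effective resolution and DR for a bracketed panorama.

    Overlap shrinks the usable area of each frame; each extra bracket
    shot at a given EV spacing extends the combined dynamic range.
    """
    effective_mp = frames * mp_per_frame * (1 - overlap_frac)
    dr_stops = base_dr_stops + (bracket_shots - 1) * bracket_step_ev
    return effective_mp, dr_stops

mp, dr = pano_estimate(9, 20.2, 0.36, 11, 5, 1.5)
# roughly 116 MP and 17 stops, in the ballpark of the claim above
```

All of the inputs are assumptions for illustration, not the poster's own numbers; the point is just that the claim is plausible.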


----------



## modeleste (Oct 12, 2014)

Lee Jay said:


> modeleste said:
> 
> 
> > I'd jump to buy a camera that had a max ISO of 100 and 1 frame per 10 seconds if it offered better resolution and DR.
> ...



Thank you, I'll take a look


----------



## Diko (Oct 12, 2014)

Lee Jay said:


> modeleste said:
> 
> 
> > I'd jump to buy a camera that had a max ISO of 100 and 1 frame per 10 seconds if it offered better resolution and DR.
> ...


 I would go with the 6D instead ;-)


----------



## sigint (Oct 13, 2014)

modeleste said:


> "- 5 layers will generate so much data to process that it will be very slow camera, max 7-8 FPS (5 layers are used to run potential patent lawsuit from Sigma, but it is uncertain because Sigma can't patent physical properties of silicon which are used in multilayered sensors, but only methods of image data processing.)"
> 
> But remember, for some of us the issues you cite aren't drawbacks.
> 
> ...



Hi Modeleste,

I didn't say those were drawbacks in my opinion; I was just talking about the technical limits of a potential new product, drawing the line beyond which there is no reason to go, because that would currently be an impossible dream. People here tend to run ahead without realistic knowledge of this technology.

I use Sigma cameras for their special low-ISO color rendition and the micro-contrast of detail in their images, which I can't find in other cameras in that price range. The slowness of Sigma cameras was never a problem for me at all, but for many people from the Canon camp it could be, because they are accustomed to high-speed operation.

MF cameras (Phase One, Mamiya Leaf, HB) are another league of IQ, but if you plan to go that road you know the camera is not the only cost you will be obliged to pay, so I would advise you to wait a little and see what Canon may show in Q1 of 2015. Even if a new Canon multilayer-sensor camera is expensive at the beginning, it would be much cheaper to buy the lenses you need than to go with an MF system.

Most importantly, it will show you whether other manufacturers will follow the same multilayer-sensor path, and it will help you plan future investments so you don't end up wanting to change systems again in the near future. Many Sigma users count on Canon because Canon could spend ten times more on R&D for this technology than Sigma, and speed up both the development and the market adoption of this type of sensor.

Potentially this is the right moment to take a step ahead, the same as with CCD vs. CMOS sensors. There is also a new window of possibilities for working around today's drawbacks of these multilayer sensors: new metamaterials for sensors are being tested, like graphene and, much easier to introduce into fab production, molybdenum disulfide and molybdenum diselenide on a silicon base.

The latter two materials could be mass-produced now, as a scalable industrial method for their production was developed last year. More importantly, they have the advantage that they could be used in existing fabs with only minor changes to production lines, because silicon remains the wafer base, so they could potentially enter the sensor-materials market very quickly.

But the most important news is that sensors made from these materials already exist and have been tested for three years, so this isn't a dream but a little-known reality; these projects live only in high-tech university labs and the labs of sensor producers.

One of the first versions of a molybdenite image sensor has been built and tested in EPFL's labs, and the test results are impressive: the sensor needs five times less light to trigger a charge transfer than currently available silicon-based sensors; in other words, it is five times more sensitive to light. In practice that would remove the need for artificial light in many situations, since the sensor would be that much more sensitive to the light already in the scene. There are other advantages too: molybdenite has a 4:1 signal-to-noise ratio, so noise would be very low, and combined with the material's sensitivity, noise would remain a problem mainly in astrophotography, which would itself benefit hugely from these sensors.

Another advantage of molybdenite is its large electronic band gap, which makes graphene a much harder material to use by comparison (graphene has no band gap and a 1:1 signal-to-noise ratio). From a production point of view, molybdenite could therefore be introduced into sensors much more quickly than graphene, on which so many high-tech equipment makers are counting.

I can add, at the end, that molybdenite multilayer sensors are now also being tested, and not only by one company: Samsung and Sony are into this very deeply, so you can imagine Canon will not miss the opportunity either.
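For scale, a hedged aside: if the claimed five-fold sensitivity holds, it translates to a bit over two stops of exposure:

```python
import math

# Claimed sensitivity gain of molybdenite over silicon (per the EPFL reports)
sensitivity_gain = 5
stops = math.log2(sensitivity_gain)
# ~2.32 stops: e.g. a scene needing 1/25 s on silicon could be shot
# at roughly 1/125 s at the same aperture and ISO
```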

If someone wants to read more about this, just look here:

http://lanes.epfl.ch/publications
http://phys.org/search/?search=molybdenum
http://phys.org/search/?search=molybdenite


----------



## dgatwood (Oct 13, 2014)

Lee Jay said:


> jrista said:
> 
> 
> > dgatwood said:
> ...



I doubt they maintain an alpha channel during processing; it would always be 1.0f/65535.

Either way, jrista is correct that when you process the data, you'll need more working space, because every time you edit the IR/UV handling, you'd have to redo the computation where you collapse the five channels into three. (I'm not going to call it demosaicing because it isn't mosaiced in the first place.) With that said, outside of cell phones, the difference between 50 megabytes (25 megapixels at two bytes each) and 250 megabytes is IMO mostly noise compared with all the other memory usage in these sorts of apps.

It also requires more CPU power to read three or five 16-bit values than one; effectively, each destination pixel in a traditional debayer algorithm requires reading on average one new subpixel value that hasn't been read before, so assuming your algorithm achieves maximum reuse of values (which it won't), a multilayer sensor would be 3–5 times as CPU-intensive. In practice, it is probably closer to a factor of two, though, and I'm pretty sure the debayer algorithm is a small percentage of the total processing, so I doubt this will be a serious problem.

Basically, if your computer is barely tolerable now, it might be intolerable with a multilayer sensor. In practice, though, the apps will probably evolve to take better advantage of multiple cores, and this will probably make the difference moot.


----------



## jrista (Oct 16, 2014)

dgatwood said:


> Basically, if your computer is barely tolerable now, it might be intolerable with a multilayer sensor. In practice, though, the apps will probably evolve to take better advantage of multiple cores, and this will probably make the difference moot.




They should really be evolving to take advantage of GPUs. GPUs were designed to do this kind of stuff, and do it wicked-fast. They also have gobs of their own memory, so you wouldn't necessarily have to waste as much system memory on image rendering. Just about every computer has a GPU of some kind these days...either integrated into the CPU, or as an add-on card. Even laptops have dedicated GPUs. 


I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.


----------



## neuroanatomist (Oct 16, 2014)

jrista said:


> I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.



Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.


----------



## Don Haines (Oct 16, 2014)

neuroanatomist said:


> jrista said:
> 
> 
> > I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.
> ...


Agreed!
When AutoPano (panorama rendering) added GPU rendering the time to render large panoramas dropped from hours to minutes. My video card has 512 CUDA cores running at a gigahertz.... WAY!!!! more computing power than a quad core Pentium....


----------



## Lee Jay (Oct 16, 2014)

neuroanatomist said:


> jrista said:
> 
> 
> > I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.
> ...



The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.


----------



## jrista (Oct 16, 2014)

Lee Jay said:


> neuroanatomist said:
> 
> 
> > jrista said:
> ...




I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU. CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add things people have been asking for since v2 was released...like debanding support, or AF point overlays, etc.


----------



## Lee Jay (Oct 16, 2014)

jrista said:


> Lee Jay said:
> 
> 
> > neuroanatomist said:
> ...



So, you know more than the guy that's writing the code? Kind of arrogant, don't you think?


----------



## Don Haines (Oct 16, 2014)

jrista said:


> Lee Jay said:
> 
> 
> > neuroanatomist said:
> ...



For creating a RAW file in the camera, it is doubtful that GPUs would accelerate the process. Creating the RAW file is a read/dump process with very little (if any) processing being done. It is basically read from the sensor as fast as you can and dump to the buffer....

Creating a JPEG out of the RAW file is a completely different story... Processing that RAW file is a massively parallel operation... the image is typically broken up into 8x8 blocks and run through the JPEG compression engine... then groups of blocks are run through the compression engine... and so on until the whole image is done. The 18-megapixel sensor makes a 5184x3456 image... and that makes 279,936 blocks to compress on the first pass, 4374 blocks on the second pass, and 68 blocks to finish off on the third pass..... Since it is essentially the same sequence of operations on each block, parallel cores on a GPU can speed things up by well over an order of magnitude....

Same thing holds true for rendering images in software to display on the screen or to create print files...
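The block counts in that example can be reproduced directly (assuming, as the post does, 8x8-pixel blocks on the first pass and groups of 64 results on each later pass):

```python
width, height = 5184, 3456            # 18 MP sensor output

first = (width // 8) * (height // 8)  # 8x8-pixel blocks: 648 * 432
second = first // 64                  # groups of 64 blocks
third = second // 64                  # groups of groups

print(first, second, third)           # 279936 4374 68
```

Each block in a pass is independent of the others, which is exactly why the work maps so cleanly onto thousands of GPU cores.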


----------



## Lee Jay (Oct 16, 2014)

Don Haines said:


> jrista said:
> 
> 
> > Lee Jay said:
> ...



The CR Pipeline doesn't create 8x8 blocks of compressed data. It creates uncompressed raster data that's highly interdependent (think about applying gradient filters, healing spot corrections, brushed adjustments, etc.).


----------



## jrista (Oct 16, 2014)

Lee Jay said:


> jrista said:
> 
> 
> > Lee Jay said:
> ...




I write heavily parallelized and highly threaded code for a living. I have been for nearly two decades. I think I have the background knowledge to know.


Will you guys knock it off with this crap? I've had enough.


----------



## Lee Jay (Oct 16, 2014)

jrista said:


> Lee Jay said:
> 
> 
> > jrista said:
> ...



The CR Pipeline is not very parallelizable, according to the guy that writes it.


----------



## jrista (Oct 16, 2014)

Don Haines said:


> jrista said:
> 
> 
> > Lee Jay said:
> ...




I wasn't talking about creating RAW images in the camera. I was talking about rendering RAW images to a computer.


That said, CP-ADC is effectively a means of hyperparallelizing the most critical processing done in-camera. Sony has one ADC per pixel column, vs. Canon's 8 or 16 ADCs per output channel. A patent linked here recently described a means of integrating one ADC per 2x2 pixel group, with 4 processing channels per ADC for what was effectively per-pixel ADC.


Move the DSP either onto the sensor die, or at least as part of a system-on-chip package, make parts of it column-parallel, and you can gain even more parallelism.




Don Haines said:


> Creating a Jpg out of the RAW file is a completely different story... Processing that RAW file is a massively parallel operation... the image is typically broken up into 8x8 blocks and run through the jpg compression engine... then groups of blocks are run through the compression engine... and so on until the whole image is done. The 18Mpixel sensor makes an 5184x3456 image... and that makes 279,936 blocks to compress on the first pass, 4374 blocks on the second pass, and 68 blocks to finish off on the third pass..... Since it is essentially the same sequence of operations on each block, parallel cores on a GPU can speed things up by well over a magnitude....
> 
> 
> Same thing holds true for rendering images in software to display on the screen or to create print files...




Aye. It wouldn't matter if you were rendering to JPEG or simply rendering to some kind of viewport buffer. Each pixel can be independently processed. Since you have millions of pixels, and each one is processed the same, you can write very little code, and run it on a GPU which is explicitly designed to hyperparallelize pixel processing. You would simply be executing pixel shaders instead of standard CPU code. With the modern architectures of GPUs, you can make highly efficient use of the resources available.
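The per-pixel independence described above is the whole trick; here is a minimal sketch, with NumPy's vectorization standing in for what a GPU pixel shader would do (the exposure and gamma values are arbitrary, and the pipeline is a toy, not ACR's):

```python
import numpy as np

def develop(raw, exposure_ev=0.5, gamma=2.2):
    """Toy per-pixel develop step: every output pixel depends only on
    its own input pixel, so the work is embarrassingly parallel --
    exactly the shape of computation a pixel shader is built for."""
    linear = np.clip(raw * 2.0 ** exposure_ev, 0.0, 1.0)  # exposure push
    return linear ** (1.0 / gamma)                        # gamma encode

frame = np.random.rand(1080, 1920).astype(np.float32)  # fake linear raw data
out = develop(frame)                                   # one pass over ~2M pixels
```

On a GPU the same two lines would run as a fragment/compute shader, one thread per pixel, with no change to the math.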


----------



## Don Haines (Oct 16, 2014)

Lee Jay said:


> Don Haines said:
> 
> 
> > jrista said:
> ...


Different words, but close to what I was saying.... (RAW data is NOT highly interdependent)
Sensor to RAW - serial process.... all you need is a single core to read the sensor quickly and dump to memory. There is no way to "parallelize" the process unless you redesign the sensor and A/D to dump out enough bits at a time to make it worthwhile... in other words, instead of reading one pixel at a time, read multiple pixels at a time.... perhaps it will be done that way in the future, but as things stand now with Canon sensors and A/D you get a byte at a time and any attempts to throw multiple cores at that process would probably slow it down.

RAW to JPG - parallel process. The more cores the better.

Since Neuro's and Jrista's comments were about rendering RAW (or other formats) files on a computer, the comment/argument of creating RAW files in-camera is a tangent that sidetracks from the discussion at hand, which is rendering images on a computer.

In theory, using a GPU with multiple cores (there are Nvidia chips with 512 CUDA cores) will speed up rendering of images. THAT IS THE REASON THE CHIPS WERE CREATED!!!!! You can plop 3 cards with dual chips into a computer for 3072 cores.... if you so choose. BTW, Cray made a supercomputer out of Nvidia graphics chips....

In practice, on my home system, rendering a panorama from 324 images took 2 1/2 hours with the GPU disabled and 11 minutes with it enabled.... about a 14 times increase in speed.

EDIT:
I was wrong about the GPU specs.... the Nvidia 980 cards have 2048 cores running at 1.2 GHz and render 144 billion points per second. I could fit 3 into my chassis at home for 6144 cores... that's 7.3 teraflops! Over 12 times the GPU power I have now......


----------



## Don Haines (Oct 16, 2014)

jrista said:


> Don Haines said:
> 
> 
> > Creating a Jpg out of the RAW file is a completely different story... Processing that RAW file is a massively parallel operation... the image is typically broken up into 8x8 blocks and run through the jpg compression engine... then groups of blocks are run through the compression engine... and so on until the whole image is done. The 18Mpixel sensor makes an 5184x3456 image... and that makes 279,936 blocks to compress on the first pass, 4374 blocks on the second pass, and 68 blocks to finish off on the third pass..... Since it is essentially the same sequence of operations on each block, parallel cores on a GPU can speed things up by well over a magnitude....
> ...



YES!
The GPU's are far more efficient than general purpose CPUs for running shaders and the like.... as mentioned above, That's what the chip was designed for!


----------



## Lee Jay (Oct 16, 2014)

Don Haines said:


> Different words, but close to what I was saying.... (RAW data is NOT highly interdependent)
> 
> ...
> 
> ...



The raw data is interdependent, but demosaicing the images isn't the hold up in Lightroom for just about anything. Raw to JPEG is largely irrelevant since turning a raster image into a JPEG is only done for previews and exports, and takes very little time. Turning demosaiced data into raster data *with all your corrections applied* takes some time and is not highly parallelizable.

Of all the things that are inefficient in LR, the raw processing pipeline is the least. It's actually pretty efficient. Now, handling huge numbers of previews and putting them up in a grid, doing the resizing, scrolling them, adding the metadata and other badges, interfacing to the database, saving metadata to files and to the database, updating previews and preview thumbs, handling the user interface, etc., now those are things that LR could do a lot better. The CR pipeline is already pretty good and largely not a holdup for most things.


----------



## jrista (Oct 16, 2014)

Lee Jay said:


> Don Haines said:
> 
> 
> > Different words, but close to what I was saying.... (RAW data is NOT highly interdependent)
> ...




It's possible to put more than simple pixel processing onto GPUs these days. That's where the term GPGPU came from: General-Purpose GPU. That's why the supercomputers of today are really just massive numbers of GPUs configured in parallel, to hyperparallelize the hyperparallelism. It's possible to rewrite Lightroom to operate primarily off the GPU. You could solve all the performance problems. Most GPUs have at least a gig of memory these days, and even midrange ones have as much as three gigs. That much memory could be used to cache a lot of previews. There is a direct, ultra-high-speed pipeline between GPU memory and system memory, allowing massive amounts of information to be paged in on demand...and if that information is images, all the better, as GPUs are optimized for that.


Processing a RAW...all of it, not just the demosaicing but the entire render pipeline, can easily be handled by pixel shaders. There is plenty of lag in Lightroom's Develop module when I run Lightroom full screen on my 30" Cinema Display. I have an extremely powerful system: an overclocked i7 4930K with 16 GB of high-speed, low-latency RAM, and a pair of 4 GB 760s running in SLI. It's a massive amount of computing power. LR should be able to handle rendering a full-screen, full-detail image off a RAW at 30fps...it can barely handle 12fps (and that's with a D III 22.3mp RAW). A GPU would make it a no-brainer to achieve at least 30fps performance.


As I said before, it would probably take a rewrite of ACR. I don't doubt the current author's claim that ACR, as it is currently written, wouldn't benefit from a GPU. They would have to redesign it to take advantage of a GPU's parallelism. I don't think it's just a patch...it would be a massive overhaul at the very least, if not a total rewrite. I still think it is not only valuable...it'll probably be necessary in the future if pixel counts keep increasing. General-purpose CPUs aren't good at massively parallel processing. They have some parallelism, but it pales in comparison to what GPUs can do (especially when you use two or three or four of them together).


And with that, I'm out.


----------



## Lee Jay (Oct 16, 2014)

jrista said:


> It's possible to rewrite Lightroom to operate primarily off the GPU. You could solve all the performance problems.



Keep in mind that something like 90% of all new machines use the on-CPU (embedded) GPU. You have to be able to support those who use those as well. The machine I use at work for running LR has no separate GPU, and buying a machine with a separate GPU isn't really allowed where I work.


----------



## Don Haines (Oct 16, 2014)

Lee Jay said:


> jrista said:
> 
> 
> > It's possible to rewrite Lightroom to operate primarily off the GPU. You could solve all the performance problems.
> ...


But when we buy a computer, we buy it for the task at hand...

For example, when I built my computer for home, I wanted something for image processing and my software supported GPUs with CUDA cores.... So I got a solid state "scratch" drive (on a card, NOT one of the slow SATA drives) and a video card with 1024 CUDA cores....

If they re-write Lightroom, they are going to look to the future, not the past. This is why software packages have "recommended hardware". This is why my panorama software lists a decent Nvidia card as "recommended hardware", yet still runs without.... just a LOT more slowly.


----------



## Lee Jay (Oct 16, 2014)

Don Haines said:


> Lee Jay said:
> 
> 
> > jrista said:
> ...



In the past, all GPUs were off-CPU. Now they are 90% on-CPU. In the past, 100% of computers were desktops. Now, they are more than 70% laptops.

The last time I bought a desktop was 2004. If they are looking to the future, they are looking to smaller devices that are not going to include board-based SSDs and huge off-CPU GPU cards, since both are going away as fast as CRT televisions.


----------



## neuroanatomist (Oct 16, 2014)

Lee Jay said:


> In the past, all GPUs were off-CPU. Now they are 90% on-CPU. In the past, 100% of computers were desktops. Now, they are more than 70% laptops.



Lightroom and Photoshop have mobile versions for the iPad. From a processing standpoint, that's about doing more with less hardware, not better leveraging the fastest hardware.


----------



## Don Haines (Oct 16, 2014)

Lee Jay said:


> Don Haines said:
> 
> 
> > Lee Jay said:
> ...


I guess we had better tell the gamers about that...

Yes, the bulk of the market is now tablets and laptops, but there is a very vibrant market for "power systems". If you want something with decent power, that's the way you go.


----------



## Lee Jay (Oct 16, 2014)

Don Haines said:


> Yes, the bulk of the market is now tablets and laptops, but there is a very vibrant market for "power systems". If you want something with decent power, that's the way you go.



Adobe likely doesn't want to exclude 90% of their market by making products that only work properly on very high-performance machines.

By the way, my current laptop is about 100 times faster than my previous desktop - a desktop on which I ran the first versions of Lightroom.


----------



## Don Haines (Oct 16, 2014)

Lee Jay said:


> Don Haines said:
> 
> 
> > Yes, the bulk of the market is now tablets and laptops, but there is a very vibrant market for "power systems". If you want something with decent power, that's the way you go.
> ...


It doesn't exclude anyone. You write the software to take advantage of a GPU. If you don't have one, the software still works fine; if you have one, it is faster. The user can run it on their tablet, their laptop, or their desktop ( rack mount for me  ). If you want more performance, buy better hardware. It is better to have that option than no option at all.


----------



## Lawliet (Oct 16, 2014)

Don Haines said:


> It doesn't exclude anyone. You write the software to take advantage of a GPU. If you don't have one, the software works perfectly. If you have one, it is faster. The user can run it on their tablet, thier laptop, or their desktop ( rack mount for me  ). If you want more performance, buy better hardware. It is better to have that option than no option at all.



And sooner or later the naysayers will realize that your average tablet also has a GPU that can do those calculations faster while using less energy per operation. Their cores are actually very closely related to their desktop/console counterparts, much more so than the main CPUs.
They already see quite a lot of action in the image manipulation done for GUI rendering, so using them to manipulate images isn't exactly terra incognita.


----------

