# Canon Dual-Scale Column-Parallel ADC Patent



## jrista (Dec 11, 2013)

It's been a while since I last scanned through the Image Sensors World blog. Around the beginning of August, as a matter of fact. Since then, they noted that Canon filed for a "Dual Scale" CPADC patent:

http://image-sensors-world.blogspot.com.es/2013/08/canon-files-for-dual-range-column.html

If I understand the diagrams and the patent correctly, and I am no CMOS engineer, it sounds like Canon is maybe following ML's lead in using a dual gain (i.e. Dual ISO) approach to achieving higher dynamic range. Given how long it takes to produce technology viable enough for a patent, I suspect Canon had this idea long before ML...perhaps it was simply that ML got wind of this patent, and looked for a way to achieve the same thing with current Canon sensors...either way, interesting.

The more interesting thing to me than the dual gain, though, is the CP-ADC design. I've long said that Canon needs to modernize their sensor design, get rid of the noise generators (i.e. ADCs) in their DIGIC chips, and bring all that image processing onto the same die as the rest of the sensor. This is what Sony did (although they took it a step farther and converted to a digital readout/CDS approach, whereas as far as I can tell Canon's is still analog CDS and whatnot until it is actually converted to digital), and they achieved some significant DR benefits from the move.

Anyway, personally, I'm glad to hear Canon is investigating these options. CP-ADC is something I've wanted Canon to do for a long time, happy to see they might actually do it. God only knows if/when this technology may actually find its way into their sensors...I only hope and pray it is soon. And dual-gain to boot...which has the potential to support FAR more than 14 stops of DR. With a 16-bit CP-ADC, we might even see a full 16 stops of DR (and who knows what might come after that...20-bit, 24-bit ADC? Can't imagine the file sizes though...46mp * 24bit...phew, 1.1Gbit (~138MB) of uncompressed RAW data per image! Canon would need a DIGIC more than four times as fast as the current DIGIC chip...)
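For anyone who wants to check that back-of-the-envelope number, the arithmetic is straightforward (decimal units, image data only, ignoring metadata and compression):

```python
# Uncompressed RAW image-data size for a hypothetical 46 MP sensor
# at various ADC bit depths. Pure arithmetic, no camera specifics.
def raw_size_bits(megapixels: float, bits_per_pixel: int) -> int:
    """Return uncompressed image data size in bits."""
    return int(megapixels * 1_000_000) * bits_per_pixel

for bits in (14, 16, 24):
    total_bits = raw_size_bits(46, bits)
    print(f"{bits}-bit: {total_bits / 1e9:.2f} Gbit "
          f"({total_bits / 8 / 1e6:.0f} MB)")
```

At 24 bits it comes out to 1.10 Gbit, i.e. about 138 MB of raw image data per frame before any overhead.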


----------



## CarlTN (Dec 11, 2013)

Very interesting, but recently you had said that if Canon relies on dual ISO, that's only a bandaid, and might not yield enough of a DR increase, at least with the combined benefit of a lower noise floor. Obviously you meant more akin to what ML did, rather than starting from quasi-scratch, as this link hints at.

It seems to me there will be a lot of lossless compression necessary for the large RAW files (and a lot of processing power). Also though, does this not make it likely, that the 2014 1-series camera, assuming it's in the 40MP range, may not use the above process? If so, it might just "only" have 14 bit RAW capability. I too was hoping it was actually going to be 16 bit, whether it actually got much over 14 stops of "real" DR or not. That would really be something, if Canon just suddenly introduced a camera that could actually do 16 stops.

Are you planning on buying the new camera, early on?


----------



## jrista (Dec 11, 2013)

CarlTN said:


> Very interesting, but recently you had said that if Canon relies on dual ISO, that's only a bandaid, and might not yield enough of a DR increase, at least with the combined benefit of a lower noise floor. Obviously you meant more akin to what ML did, rather than starting from quasi-scratch, as this link hints at.



Using the existing downstream amplifier on half the pixels, which is what ML is doing, is a bandaid (and not ideal, as it costs you in resolution). What Canon has patented here is MUCH better...the way I would expect it to be done. Since they are reading the sensor with two different gain levels, I really don't see why there would be any reasonable limits on DR for the foreseeable future...ML is only limited to 14 stops because the ADC is 14-bit. Technically, the potential for very scalable DR is there in Canon's patent (assuming I've understood it correctly, that is.)



CarlTN said:


> It seems to me there will be a lot of lossless compression necessary for the large RAW files (and a lot of processing power). Also though, does this not make it likely, that the 2014 1-series camera, assuming it's in the 40MP range, may not use the above process? If so, it might just "only" have 14 bit RAW capability. I too was hoping it was actually going to be 16 bit, whether it actually got much over 14 stops of "real" DR or not. That would really be something, if Canon just suddenly introduced a camera that could actually do 16 stops.
> 
> Are you planning on buying the new camera, early on?



Agreed, normally a RAW file will have lossless compression. Still, a gigabit of information is a lot...you can't compress the read stream, really...you have to process it all in order to compress the output file. So, while from a storage space standpoint it wouldn't be all that bad, from an image processing standpoint...you would need much faster processors.

Canon, or someone, mentioned around a year ago, maybe not quite that long, that Canon might push a bit depth increase with the Big MP camera. Who knows if that is the case (it was a CR1), but still, interesting nevertheless. I can't imagine anyone pushing bit depth until there is a definitive reason to do so. For all of DXO's claims about the Nikon D800 and D600 offering more than 14 stops of DR, they are talking about downscaled output images. The native DR of the hardware itself is still less than 14 stops...13.2 for the D800 IIRC.

That's with 3e- of read noise...which is INSANELY LOW (usually, you don't see that kind of read noise until you start Peltier-cooling sensors to sub-freezing temperatures). There are a few new ideas floating about regarding how to reduce read noise. There have been a number of patents and other things floating around lately about "black silicon", a structural modification of silicon that gives it extremely low reflectivity. It supports a natural read noise level of around 2e- and some of the best low-light sensitivity known, and it is being researched for use in extreme low-light security cameras that can see by starlight (which blows my mind.) Theoretically, this can greatly improve DR at what would be high ISO settings.

Canon's approach with dual scaling is potentially another way to get a lot more average dynamic range at low or high ISO out of a single read by using two separate signals with different gain and sampling (I guess) to effectively do a low ISO and high ISO read at the same time for each pixel, and blend the results together using on-die CP-ADC.

As for new cameras...all that is on hold until I can get my business started and start making some money again. I don't have any plans to purchase anything at the moment, outside of possibly a 5D III if the price is right. I certainly won't be buying a 1D MPM (megapixel monster) any time soon if it hits with a price over $5k. Besides, I like to wait and see how things settle first...I am still interested in the 7D II, and want to wait for both cameras to hit the street and demonstrate their real-world performance before I make a decision.


----------



## Don Haines (Dec 12, 2013)

I keep wondering what is going to happen in the future with dual-pixel technology. They have the ability to read both sides of the pixel separately; I wonder how much work it would be to set the two sides to different ISO values, read them both, and combine the values for greatly expanded DR.

This would obviously require more computing power than just reading the sensor would, but comments out of Canon about the greater computational needs of future cameras tie in with this... I am really curious to see what happens with the 7D2..... It should be dual-pixel and dual processor (Digic6 or even 6+????) so it will be able to do a lot more computing than a 70D. The next year or so could be interesting.


----------



## jrista (Dec 12, 2013)

Don Haines said:


> I keep wondering what is going to happen in the future with dual-pixel technology. They have the ability to read both sides of the pixel separately; I wonder how much work it would be to set the two sides to different ISO values, read them both, and combine the values for greatly expanded DR.
> 
> This would obviously require more computing power than just reading the sensor would, but comments out of Canon about the greater computational needs of future cameras tie in with this... I am really curious to see what happens with the 7D2..... It should be dual-pixel and dual processor (Digic6 or even 6+????) so it will be able to do a lot more computing than a 70D. The next year or so could be interesting.



They wouldn't need to bother with the dual-pixel approach with this patent. They simply read "the pixel" (regardless of whether it is a single photodiode, or two/four binned, whatever) with two different gain levels (different ISO settings, applied simultaneously to different signals). This patent offers a much better way to solve the problem without resorting to "hackish" approaches like what ML did, or like what you suggest with reading one half of the pixel at one ISO and the other half at another ISO. That wouldn't be nearly as good: each half pixel only gets half the light, so the half-reads would start at a disadvantage large enough to completely eliminate any gains you might make with the dual-read process in the first place.

Even better than simply reading half pixels at different ISO settings, this patent reads each pixel twice simultaneously at different gain levels, while also bringing the ADC on-die and column-parallelizing it, allowing the converters to run at a lower frequency, thus reducing their potential to add downstream noise. With column-parallel ADC, they could do what Sony Exmor does...per-column read tuning to eliminate vertical banding. It also brings in the benefit of shipping image data off the sensor in an error-correctable digital form, eliminating the chance that the data picks up even further noise as it travels along a high frequency bus and through a high frequency DIGIC chip. This patent would single-handedly solve a LOT of Canon's noise problems.
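To make the dual-gain idea concrete, here's a toy sketch of how the two reads might merge. This is entirely my own illustration of the concept, not anything from the patent text: the 14-bit full scale and 16x gain ratio are made-up numbers, and Canon's actual blending would surely be more sophisticated.

```python
import numpy as np

# Each pixel is sampled twice at two analog gains: the high-gain
# sample resolves shadows cleanly, the low-gain sample keeps
# highlights from clipping, and the two merge into one value.

LOW_GAIN, HIGH_GAIN = 1.0, 16.0   # hypothetical gain ratio (4 stops)
FULL_SCALE = 2**14 - 1            # 14-bit ADC full scale

def merge_dual_gain(low, high):
    """Merge two reads of the same pixel taken at different gains."""
    low = np.asarray(low, dtype=float)
    high = np.asarray(high, dtype=float)
    # Prefer the high-gain (cleaner) sample unless it clipped,
    # in which case fall back to the scaled low-gain sample.
    clipped = high >= FULL_SCALE
    merged = np.where(clipped, low * (HIGH_GAIN / LOW_GAIN), high)
    return merged / HIGH_GAIN     # back to linear scene units

# A dim pixel and a bright pixel that clips the high-gain read:
scene = np.array([10.0, 5000.0])                    # "true" signal
low_read = np.clip(scene * LOW_GAIN, 0, FULL_SCALE)
high_read = np.clip(scene * HIGH_GAIN, 0, FULL_SCALE)
print(merge_dual_gain(low_read, high_read))         # recovers both
```

The key point is that shadows come from the clean high-gain sample and highlights from the unclipped low-gain one, so the effective DR exceeds what a single read at either gain could hold.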

The only real difference between Canon's Dual-Scale CP-ADC patent and Exmor's is that Exmor uses digital CDS and digital amplification (basically, it is an entirely digital pipeline)...I see no mention of Canon's patent referring to digital data processing on-die. There are theoretically pros and cons to both digital and analog readout, so only time will tell (assuming Canon actually IMPLEMENTS this design sometime soon) whether Canon's approach produces results that are as good as Exmor or not. Sometimes it is easier, and more accurate/precise, to apply certain kinds of processing and filtering on an analog signal rather than digital bits.


----------



## Don Haines (Dec 12, 2013)

jrista said:


> Don Haines said:
> 
> 
> > I keep wondering what is going to happen in the future with dual-pixel technology. They have the ability to read both sides of the pixel seperately, I wonder how much work it would be to set the two sides to different ISO values, read them both, and combine the values for greatly expanded DR.
> ...



Good explanation! Now I understand.... Thanks!


----------



## neuroanatomist (Dec 14, 2013)

Also, Canon has a prior Foveon-like patent, and now this patent. While there is certainly a large (sometimes insurmountable) gap between patent and product, these patents belie the statements of those who suggest Canon is failing to innovate in the area of sensor design (as do prototypes like the 120 MP APS-H sensor).


----------



## jrista (Dec 14, 2013)

Don Haines said:


> jrista said:
> 
> 
> > Don Haines said:
> ...





I just hope it finds its way into a Canon camera body soon. The patent was filed pretty recently, so I am pretty doubtful we would see it in the likes of say the Big MP camera, or even the 7D II. If Canon employs the technology, I suspect it would be in something like a next generation 1D X or maybe the 5D IV. Kind of a bummer, thinking that far out...but then again, we don't yet know what technology Canon HAS employed in either the 7D II or Big MP camera yet!


----------



## Mt Spokane Photography (Dec 14, 2013)

Digic 6 is coming, we would likely see such a new converter in conjunction with a new processor. I'd also bet on dual pixel technology for most of the new cameras that come out. That technology has potential for producing mirrorless bodies that are very competitive with DSLRs. Fewer moving components in a camera body means more reliability. That flapping mirror is the cause of many issues in photography, but even so, it works and nothing has matched it yet.


----------



## jrista (Dec 14, 2013)

Mt Spokane Photography said:


> Digic 6 is coming, we would likely see such a new converter in conjunction with a new processor. I'd also bet on dual pixel technology for most of the new cameras that come out. That technology has potential for producing mirrorless bodies that are very competitive with DSLRs. Fewer moving components in a camera body means more reliability. That flapping mirror is the cause of many issues in photography, but even so, it works and nothing has matched it yet.



Yeah, I bet we see DPAF in all new Canon bodies as well. I wonder if/when they will start improving that (QPAF?) The thing I want to see from Canon is something REALLY compelling on the EVF front. I can't even consider mirrorless, even with its advantages, until there is one HELL of an EVF to accompany it. Outside of landscapes, I rely so heavily on the viewfinder for everything else (even astrophotography...you MUST have an OVF to find and frame sky objects).


----------



## Don Haines (Dec 14, 2013)

jrista said:


> Mt Spokane Photography said:
> 
> 
> > Digic 6 is coming, we would likely see such a new converter in conjunction with a new processor. I'd also bet on dual pixel technology for most of the new cameras that come out. That technology has potential for producing mirrorless bodies that are very competitive with DSLRs. Fewer moving components in a camera body means more reliability. That flapping mirror is the cause of many issues in photography, but even so, it works and nothing has matched it yet.
> ...



DIGIC 6 has been out now for about half a year in a P&S.... Perhaps we get to see dual DIGIC 6 in the 7D2...

I agree about EVFs... They are the future, but not quite ready yet. There are a few nice ones starting to appear that are getting close, but they are not there yet.


----------



## LetTheRightLensIn (Dec 14, 2013)

Yeah I posted this some months ago. I could swear you even commented on it then ;D . But I think the thread got quickly turned into a mess by all those calling it a troll thread and more DRibble and all that sort of nonsense and perhaps everyone forgot the basis of that thread.

We can just hope it is ready for the next main bodies.


----------



## jrista (Dec 15, 2013)

LetTheRightLensIn said:


> Yeah I posted this some months ago. I could swear you even commented on it then ;D . But I think the thread got quickly turned into a mess by all those calling it a troll thread and more DRibble and all that sort of nonsense and perhaps everyone forgot the basis of that thread.
> 
> We can just hope it is ready for the next main bodies.



It's entirely possible I DID see it...but I've had a lot going on since August, and am still trying to get a business going, so it isn't out of the question that I forgot. 

The only thing, tickling the back of my brain, that I worry about is Canon's penchant for announcing really KICK-ASS things that...just disappear. Like a 120 MEGApixel APS-H sensor that could rip out 9.5 frames a second. I mean, come on. I wanted that years ago...I only want it even MORE so now. Why the hell isn't it in the *Canon EOS Whoop-ASS Ds X* yet?!??!?!?!!?!?!?!!!!!!! ;P


----------



## jrista (Dec 15, 2013)

Don Haines said:


> jrista said:
> 
> 
> > Mt Spokane Photography said:
> ...



If Canon does indeed move the ADC onto the sensor die, they would have to design a new DIGIC to pair it with, since the ADC currently lives inside the DIGIC chip up through DIGIC 6. So, if say the 7D II got this new DS/CP-ADC design...I would then expect it to have a DIGIC 7 paired with it. I would also expect that it would only need one DIGIC...the only reason the 7D, 1D IV and 1D X have had multiple DIGIC chips was to increase the number of ADC channels...with ADC on the sensor die, so long as the DIGIC 7's raw processing power was sufficient, you wouldn't even need two.


----------



## CarlTN (Dec 15, 2013)

jrista said:


> CarlTN said:
> 
> 
> > Very interesting, but recently you had said that if Canon relies on dual ISO, that's only a bandaid, and might not yield enough of a DR increase, at least with the combined benefit of a lower noise floor. Obviously you meant more akin to what ML did, rather than starting from quasi-scratch, as this link hints at.
> ...



Very informative points, thank you. And I think it was you who first mentioned "black silicon" on here earlier this year. I recall trying to read more about it, probably a link you posted. I think I read something on Wikipedia about it as well, for what little that is worth.

Thanks for pointing out that the compression would be useless during the read and processing stage. I knew that but hadn't even considered it...I was just thinking of the large files being written to a storage media of some kind. It almost seems like the high processing power is more achievable than the speed required to write and store the files, say while at 5 frames a second or more. You would need large internal buffer capacity. I suppose some kind of wireless technique could be used to write very large files quickly to an external computer, or watch phone or something...haha! I guess it would all get designed to work, if the need for really large files came to the fore...or rather _when_ it does.


----------



## CarlTN (Dec 15, 2013)

neuroanatomist said:


> Also, Canon has a prior Foveon-like patent, and now this patent. While there is certainly a large (sometimes insurmountable) gap between patent and product, these patents belie the statements those who suggest Canon is failing to innovate in the area of sensor design (as do prototypes like the 120 MP APS-H sensor).



But was that 120MP aps-h sensor ever tested? What process was used to produce the 120MP sensor? All I've seen on here is how Canon's process hasn't gotten small enough, but something must have been small to make that sensor.

As you might know I'm a bit of a fan of the Foveon technique, so if Canon actually produces one for sale, maybe it will perform really well. It does seem almost plausible to me that the 60 or 75MP sensors that have been rumored would use the technique...because at this point, is a Bayer RGB array with that many photo sites really viable on a 36mm wide sensor? Seems like that would descend into the noise levels of compact point and shoot sensors...


----------



## jrista (Dec 15, 2013)

CarlTN said:


> neuroanatomist said:
> 
> 
> > Also, Canon has a prior Foveon-like patent, and now this patent. While there is certainly a large (sometimes insurmountable) gap between patent and product, these patents belie the statements those who suggest Canon is failing to innovate in the area of sensor design (as do prototypes like the 120 MP APS-H sensor).
> ...



It was most certainly tested. That was the entire point of the press release now so many years ago...that they had successfully fabricated AND tested a 120mp APS-H sensor that was capable of 9.5 frames per second. It was an amazing feat. As for process, it would have had to have been done on their small form factor fab, as the pixels would have been only 2µm in size (too small for a 500nm process to effectively create, especially with the added logic for column-parallel readout, which the press release did mention). Canon does have the capacity to fabricate larger sensors in multiple exposures on the fab that is dedicated to their smaller parts. It isn't particularly efficient, but that doesn't matter when you are only creating a few prototype parts for testing.


----------



## jrista (Dec 15, 2013)

CarlTN said:


> jrista said:
> 
> 
> > CarlTN said:
> ...



When it comes to the processing power required to process the image on the sensor, it has to be uncompressed data. But not only that, it has to be uncompressed data PLUS overhead...there is always a certain amount of overhead, additional data, additional processing to combat one problem or another, etc. So while the data size may be a gigaBIT (about 125MB per image), the actual total amount of data read is going to be larger, maybe closer to 160MB per image. If one wanted a high readout rate...say 9.5 fps, then the total throughput rate would need to be about 1.5 gigaBYTES per second!
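Sketching that arithmetic in a few lines (the 25% overhead factor here is just an assumed placeholder, not anything from a spec):

```python
# Sustained sensor read-out rate: bits per frame -> bytes,
# padded by an assumed overhead factor, times frames per second.
def throughput_gb_per_s(frame_bits: float, overhead_factor: float,
                        fps: float) -> float:
    """Return the sustained read-out rate in gigabytes/sec (decimal)."""
    frame_bytes = frame_bits / 8
    return frame_bytes * overhead_factor * fps / 1e9

# 1 Gbit of raw pixel data per frame, ~25% read overhead (assumed),
# at 9.5 frames per second:
print(f"{throughput_gb_per_s(1e9, 1.25, 9.5):.2f} GB/s")
```

Roughly a gigabyte and a half every second, sustained, before any compression can even begin.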


----------



## CarlTN (Dec 16, 2013)

jrista said:


> CarlTN said:
> 
> 
> > jrista said:
> ...



And what type of device or computer is currently capable of that kind of throughput?


----------



## jrista (Dec 17, 2013)

CarlTN said:


> jrista said:
> 
> 
> > CarlTN said:
> ...



The original SATA standard was capable of 1.5Gbit/s, SATA2 was capable of 3.0Gbit/s, and SATA3 is currently capable of 6.0Gbit/s. That would be one of the SLOWEST data transfer rates for modern computing devices. A modern CPU is capable of around 192Gbit/s data throughput on the CPU itself and along its primary buses. A modern GPU is capable of even higher transfer rates in order to redraw graphics up to 144 times per second (on 144Hz computer screens): several hundred million pixels at least, trillions of operations per second, and data throughputs of hundreds of billions of bits.

In order to handle 120 or 144 frames per second on modern high framerate gaming and 3D screens at 2560x1600 or even 3840x2160 (4k) with 10 bits per pixel, you would need at least an 11,943,936,000bit/s throughput rate from video card to screen (10 bits per color channel would triple that). (This is, BTW, the next generation of hardware, already trickling onto the market...high end gaming and graphics computing hardware, running on next generation GPUs and on early 4k SuperHD screens, using interfaces like Thunderbolt, which so happens to operate via a single channel at 10Gbit/s for v1, and 20Gbit/s for v2 via "aggregate" channels.)
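For reference, that link-rate figure works out like this (treating "10-bit" as 10 bits per pixel, as in the estimate above; per-channel 10-bit would be three times higher):

```python
# Uncompressed video link rate: pixels per frame x bits per pixel
# x refresh rate.
def link_rate_bits(width: int, height: int, bits_per_pixel: int,
                   refresh_hz: int) -> int:
    """Return the required video link rate in bits per second."""
    return width * height * bits_per_pixel * refresh_hz

# 4k at 144 Hz with 10 bits per pixel:
rate = link_rate_bits(3840, 2160, 10, 144)
print(f"{rate:,} bit/s = {rate / 1e9:.1f} Gbit/s")
# → 11,943,936,000 bit/s = 11.9 Gbit/s
```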

General computing is currently capable of very significant data throughput. With the next generation of GPUs paving the way for high performance 4k 3D gaming (and even multi-screen 3D gaming, at that!), the average desktop of 2015 and beyond should be able to handle 150mp image files as easily as it handles the 20/30/50/80mp image files from DSLR and MFD cameras today.

Assume a 120mp camera operating at 9.5fps with 16-bit image frames (just speculating, though Canon has at least demonstrated a sensor like that). The raw per-frame size at 16-bit is 1,920,000,000 bits (1.92 gigabits); divide by 8 for bytes, which comes to 240,000,000 bytes (240 megabytes). Multiply by 9.5 frames per second, and you have a total data throughput of 2.28GB (2.28 gigabytes) per second, or 18.2Gbit/s. A single Thunderbolt v2 aggregate channel would be sufficient to handle that kind of data throughput, and be capable of transferring a full 240MB RAW image onto a computer in around a second...assuming you had comparable memory card technology that could keep up (which certainly doesn't seem _unlikely_ given the rate at which memory card speed is improving.)
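Spelling out that frame-size arithmetic (same speculative 120mp / 16-bit / 9.5fps inputs):

```python
# Per-frame size and sustained throughput for an uncompressed
# sensor read-out (decimal units throughout).
def sensor_throughput(megapixels: int, bit_depth: int, fps: float):
    """Return (frame_MB, GB_per_s, Gbit_per_s) for an uncompressed read."""
    frame_bits = megapixels * 1_000_000 * bit_depth
    frame_bytes = frame_bits / 8
    bytes_per_s = frame_bytes * fps
    return frame_bytes / 1e6, bytes_per_s / 1e9, bytes_per_s * 8 / 1e9

frame_mb, gbps, gbitps = sensor_throughput(120, 16, 9.5)
print(f"{frame_mb:.0f} MB/frame, {gbps:.2f} GB/s, {gbitps:.1f} Gbit/s")
# → 240 MB/frame, 2.28 GB/s, 18.2 Gbit/s
```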

The real question is whether onboard graphics processors and DSPs (or rather computing packages, as they are today...usually a DSP stacked with a general purpose processor like an ARM core and usually a specialized ultra high speed memory buffer) will be able to reach the necessary data throughput rates. As a matter of fact, they already operate at fairly decent speeds. A single 120mp frame is 240MB. With a pair of DIGIC5+, you would be able to process two 120mp frames per second. The DIGIC5+ chip was about seven times faster than its predecessor, DIGIC4. If we assume a similar jump for the next DIGIC part, it would be capable of processing 3.36GB/s, more than the necessary 2.28GB/s to process 120mp at 9.5 frames per second, and quite probably enough to handle around 11 frames per second (and still have room for the necessary overhead.)

Given that release cycles for interchangeable lens cameras are usually on the order of several years, we probably wouldn't see next generation memory card performance until 1D X Next and 5D Next ca. 2016 or 2017. Sadly, at least historically, DSLRs have lagged even farther behind in data transfer standards support, so it could very likely be that we don't see comparable interface support in DSLRs and other interchangeable lens parts until 2019/2020. :\ Which means, instead of being able to transfer our giant 100mp+ images in about one second each, we will still have to slog through imports at about a quarter the speed our desktop computer technology is capable of...but we're all used to that already. ;P (Which, BTW, is one of the key reasons I believe desktop computers are a LONG way from being dead...they are still the pinnacle of computing technology, and no matter how popular ultra-portable tablets and convertibles are, I think most people still have and use a desktop computer with a trusty old keyboard and mouse for their truly critical work. Tablets and convertibles and phablets and phones simply augment our computing repertoire.)


----------



## Don Haines (Dec 17, 2013)

jrista said:


> (Which, BTW, is one of the key reasons I believe desktop computers are a LONG way from being dead...they are still the pinnacle of computing technology, and no matter how popular ultra-portable tablets and convertibles are, I think most people still have and use a desktop computer with a trusty old keyboard and mouse for their truly critical work. Tablets and convertibles and phablets and phones simply augment our computing repertoire.)



Could not agree more....

One of the packages I run at home is AutoPano Giga, and it allows you to enable GPU computing to speed up rendering. Think of it as 1000 1GHz cores.... as opposed to four 3.4GHz i7 cores.... My desktop renders a large panorama 20-30 times faster than my laptop....


----------



## jrista (Dec 17, 2013)

Don Haines said:


> jrista said:
> 
> 
> > (Which, BTW, is one of the key reasons I believe desktop computers are a LONG way from being dead...they are still the pinnacle of computing technology, and no matter how popular ultra-portable tablets and convertibles are, I think most people still have and use a desktop computer with a trusty old keyboard and mouse for their truly critical work. Tablets and convertibles and phablets and phones simply augment our computing repertoire.)
> ...



Yeah, the PC rules.  This is also why I think Microsoft will be a very successful company in the long term. They may not still be a "WOW" company like Apple or Facebook or Google or Twitter, but they have it where it counts. Windows 8, for all that people complain about it, offers the best of all worlds in a single, unified platform experience...WITHOUT forgetting about the desktop, keyboard, and mouse. That will be the key thread of their success five, ten, twenty years out.


----------



## unfocused (Dec 17, 2013)

jrista said:


> Don Haines said:
> 
> 
> > jrista said:
> ...



This is interesting to me because I think it has parallels to DSLRs. 

A few years ago, laptops were the wave of the future. It seemed like everyone under 30 (which leaves me way, way out) was buying a laptop and wouldn't even consider having a clunky old desktop, even though they were paying a huge premium in price and performance for that portability.

The tech gurus were predicting the death of desktop.

Then the next wave hit. Netbooks, tablets, e-readers and smart phones and the under 25 crowd looked on the laptop in much the same way their older siblings had looked on desktops. Who needs all that computing power when you just want to surf the web and post to social media? Why would you carry around some big old laptop?

Suddenly it was the laptop, not the desktop that was endangered. Those who wanted and needed real computing power found the desktop form factor much more practical (larger screen or dual screen, more memory and hard drive space, much more suited for programs that require real computing power). Laptops have become a niche market. 

Today, in the photo world the tech gurus are predicting the death of the DSLR and saying the future will be mirrorless interchangeable lens cameras. But, to me, these seem like laptops. Too big to be truly portable, overpriced and with too many compromises to truly replace a DSLR.

I strongly suspect that in five years, the tech gurus will have moved on to the next big thing. Mirrorless will have run its course and the DSLR will still be plugging away because the form factor that has worked for 75 years remains the best form factor for its purpose.


----------



## jrista (Dec 17, 2013)

unfocused said:


> jrista said:
> 
> 
> > Don Haines said:
> ...



Very insightful! I totally agree, too. The parallel you've drawn is pretty intriguing, and while the desktop computer hasn't been around for 75 years, its life cycles have run about four times faster than cameras', so I think the parallel scales well.

I think there will always be a market for smaller and lighter, for sure. While I think everyone does ultimately turn back to their desktop for any important and critical work, I do think that tablets, convertibles, phablets, and even laptops are here to stay. It's just that they will never actually topple the vaunted desktop...at least, not until computing becomes so ubiquitous and so omnipresent that we could literally call up a virtual keyboard and mouse on any flat surface, turn on a holographic display, and crank away wherever whenever.

(I don't really foresee that kind of thing happening, despite the fact that it seems to be every key tech company's goal for the future...too many technologies that not only have to be perfected, but seamlessly intertwined, and capable of presenting a universal, omnipresent, ubiquitous computing interface. To achieve tech companies' ubiquitous computing vision of the future would basically mean scrapping and replacing ALL major power and control infrastructures everywhere, with hooks and plugins for every person and every kind of device. It MAY happen in some homes, multi-million-dollar homes where the cost of setting up a centralized, wireless, and omnipresent computing system is still just a drop in the bucket...but overall, I think the desktop is here to stay, even if the rate of desktop computer sales drops. (That drop is expected given the strained economic times; as people work more hours for less pay, are faced with artificially increasing costs at the mandate of governments, and otherwise have their disposable income soaked up in the name of status equality, it's no surprise that discretionary spending on moderately big ticket items like desktop computers has waned.))


----------



## CarlTN (Dec 18, 2013)

jrista said:


> CarlTN said:
> 
> 
> > jrista said:
> ...



Very interesting speculation and statistics. The key word is "work". That's the problem with the mobile world. It's more about leisure than work...but in reality it's really just a toy, a slave obsession that robs people of living in the moment, and doing actual physical things, and interacting socially in person with people...replacing it with texting or playing games.

Speaking of games, why on earth does anyone need 4K 3D for gaming? It's all just computer generated cartoons anyway, and the senses can only take in so much detail. In real life your attention is on a narrow part of your field of view. So in a 4K 3D game, most of that pixel information is unnecessary detail, it seems to me. If this weren't so, then you should be able to read an entire large screen of text that fills your field of view, without ever moving your eyes...but you can't. Or at least I can't. I read a book on how to "speed read", but I never got very far.

But then I'm not a "gamer". I would prefer to watch "How the West was Won", "To Catch a Thief", "The Empire Strikes Back", and "Raiders of the Lost Ark" (in that order) remastered in 4K 2D, than to play a game.


----------



## CarlTN (Dec 18, 2013)

unfocused said:


> Today, in the photo world the tech gurus are predicting the death of the DSLR and saying the future will be mirrorless interchangeable lens cameras. But, to me, these seem like laptops. Too big to be truly portable, overpriced and with too many compromises to truly replace a DSLR.
> 
> I strongly suspect that in five years, the tech gurus will have moved on to the next big thing. Mirrorless will have run its course and the DSLR will still be plugging away because the form factor that has worked for 75 years remains the best form factor for its purpose.



Agree.


----------



## danski0224 (Dec 18, 2013)

jrista said:


> Yeah, the PC rules.  This is also why I think Microsoft will be a very successful company in the long term. They may not still be a "WOW" company like Apple or Facebook or Google or Twitter, but they have it where it counts. Windows 8, for all that people complain about it, offers the best of all worlds in a single, unified platform experience...WITHOUT forgetting about the desktop, keyboard, and mouse. That will be the key thread of their success five, ten, twenty years out.



One thing that amazes me about the Surface (Pro 2 specifically) is I can use my finger (or two), the keyboard touchpad, the stylus or the wireless mouse at any given point and it works. I can switch from any one to the other seamlessly. I've never had an iPad, so maybe it isn't a big deal for some, but prior iterations of touchscreen on Windows have kinda sucked...

The wireless keyboard adapter is pretty cool, too.

My only beef is I would actually prefer the iPad 4:3 form factor.

Windows 8 on a touchscreen is pretty awesome.

Big touch monitors are still kinda pricey, but there are some touch mice for desktops.


----------



## jrista (Dec 19, 2013)

danski0224 said:


> jrista said:
> 
> 
> > Yeah, the PC rules.  This is also why I think Microsoft will be a very successful company in the long term. They may not still be a "WOW" company like Apple or Facebook or Google or Twitter, but they have it where it counts. Windows 8, for all that people complain about it, offers the best of all worlds in a single, unified platform experience...WITHOUT forgetting about the desktop, keyboard, and mouse. That will be the key thread of their success five, ten, twenty years out.
> ...



My Surface Pro (original) works the same way. When it comes to touch, I think Microsoft has always done well. I remember my old Windows XP tablet, which supported touch and stylus, allowed pretty seamless switching between input modes. My Windows Phone 7 device had incredibly responsive and fluid touch support, as does my Lumia 920. When it comes to touch and voice control, Microsoft devices are actually pretty good, and have been for at least four years or so. The voice recognition capabilities of the Lumia 920 are pretty amazing...it rarely ever misses a beat, even while I'm driving a car, where noise levels are high.

Microsoft may be slow to the market, but what they do put out works exceptionally well. I honestly cannot say the same for Android. Every time I've used Android devices, the newest features always seem to lack polish, have numerous quirks, don't seem to always work with every device, etc. It took several iterations before Android's touch was consistent and fluid, and even today it just doesn't seem to have the responsiveness of either WP8 or iOS.


----------



## danski0224 (Dec 20, 2013)

My first tablet laptop, an HP TX series, couldn't even do pinch-to-zoom. There were two different screens available, and not knowing the differences, I chose poorly. I could switch between input devices without issues, but the rest of the "touch experience" wasn't there yet, at least for me. Vista only added to the problems.

I tried the early Windows smartphones, an HTC TouchPro2 and an HTC HD2, and I found both to fall short of expectations. When these devices were new, Microsoft didn't impress with upgrades and continued support. The HD2 still lives on if you like to mess with phones.

I went Android after that. Got burned with the HTC Amaze bluetooth issue, so that has soured my HTC experience.

I messed around a bit with Cyanogen on the HD2 through the SD card, and I actually prefer the "plain" Android interface over the skins that almost everyone else uses. Unfortunately, the phone OEMs make it increasingly difficult to root your device. I'd be happy with less bloatware.

When it came time to shop for a new phone, I would have gone with the Nokia 1020, but it isn't available on my carrier natively and I'm not moving to AT&T. So, I stuck with Android, but no longer HTC.

A unifying experience across platforms (computer, tablet, phone) has appeal and that is lacking in Android.


----------



## jrista (Dec 20, 2013)

danski0224 said:


> My first tablet laptop, a HP TX series, couldn't even do pinch to zoom. There were 2 different screens available, and not knowing the differences, I chose poorly. I could switch between input devices without issues, but the rest of the "touch experience" wasn't there yet- at least for me. Vista only added to the problems.
> 
> I tried the early Windows smartphones, a HTC TouchPro2 and a HTC HD2, and I found both to fall short of expectations. When these devices were new, Microsoft didn't impress with upgrades and continued support. The HD2 still lives on if you like to mess with phones.
> 
> ...



HTC doesn't make a particularly great phone. I've had a few of them, one for WP7 and two for Android...none offered particularly great quality. Even the first round of HTC WP8 phones was lackluster compared to the competition. I am not sure if HTC has anything really good out these days, but I've pretty much given up on them.

When it comes to Windows phones, Nokia really does well. Their Lumia line is excellent, and in areas where HTC phones perform poorly, Lumia excels. So, when it comes to touch, I think the issues are more hardware related than OS related. I have also been impressed with Samsung phones and tablets...but their products feel like plastic toys. People complain about the weight of the Lumia line of devices...personally, that heft to me is the mark of a solid, durable product.


----------



## roguewave (Dec 20, 2013)

jrista said:


> Don Haines said:
> 
> 
> > I keep wondering what is going to happen in the future with dual-pixel technology. They have the ability to read both sides of the pixel separately, I wonder how much work it would be to set the two sides to different ISO values, read them both, and combine the values for greatly expanded DR.
> ...



Thank you for the explanation! I got the gist of it, but I still don't understand the basics: how is it possible to read the same pixel twice simultaneously? I thought you can't eat your cake and have it too? I mean, wouldn't the signal become weaker if you split it?


----------



## jrista (Dec 20, 2013)

roguewave said:


> jrista said:
> 
> 
> > Don Haines said:
> ...



They aren't splitting it. I am not 100% certain exactly what they are doing, but from what I do understand, when a pixel is read, it is amplified twice, and the results of those different amplifications are transferred to the CP-ADC units simultaneously (on different channels). Same source pixel, two separate but full-power signals, which are then blended together at conversion time. It is basically the same thing ML did, only with the appropriate dedicated hardware fabricated right into the sensor to do it right.
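For intuition, here's a toy numeric sketch of that dual-gain idea. Everything in it (gain values, noise level, ADC depth) is an illustrative assumption of mine, not anything taken from Canon's patent:

```python
import numpy as np

def dual_gain_read(electrons, low_gain=1.0, high_gain=4.0,
                   downstream_noise_adu=8.0, adc_max=16383, seed=0):
    """Toy dual-gain pixel read (all numbers are illustrative, not Canon's).

    The same charge packet feeds two amplifier chains at different gains,
    so neither signal is weakened. Downstream read noise is added *after*
    the gain, so referred back to electrons it shrinks by the gain factor
    on the high-gain path -- which is where the shadow improvement lives.
    """
    rng = np.random.default_rng(seed)
    e = np.asarray(electrons, dtype=float)
    # Each chain digitizes independently; the high-gain chain clips sooner.
    lo = np.clip(e * low_gain + rng.normal(0.0, downstream_noise_adu, e.shape),
                 0, adc_max)
    hi = np.clip(e * high_gain + rng.normal(0.0, downstream_noise_adu, e.shape),
                 0, adc_max)
    # Blend at conversion time: prefer the cleaner high-gain sample unless
    # it is near clipping, then fall back to the rescaled low-gain sample.
    return np.where(hi < 0.95 * adc_max, hi / high_gain, lo / low_gain)
```

A dark pixel comes back through the high-gain path with its effective read noise cut by the gain ratio, while a bright pixel falls back to the low-gain path and keeps its headroom.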


----------



## roguewave (Dec 20, 2013)

jrista said:


> They aren't splitting it. I am not 100% exactly certain what they are doing, but from what I do understand, when a pixel is read, it is amplified twice, and the results of those different amplifications are transferred to the CP-ADC units simultaneously (on different channels). Same source pixel, two separate but full power signals, which are then blended together at conversion time. It is basically the same thing ML did, only with the appropriate dedicated hardware fabricated right into the sensor to do it right.



I assume there is something clever somewhere in the implementation. HDR has been around for a while, even before ML. It's hard to believe nobody thought earlier about pushing the process into the sensor instead of software.


----------



## jrista (Dec 20, 2013)

roguewave said:


> jrista said:
> 
> 
> > They aren't splitting it. I am not 100% exactly certain what they are doing, but from what I do understand, when a pixel is read, it is amplified twice, and the results of those different amplifications are transferred to the CP-ADC units simultaneously (on different channels). Same source pixel, two separate but full power signals, which are then blended together at conversion time. It is basically the same thing ML did, only with the appropriate dedicated hardware fabricated right into the sensor to do it right.
> ...



I wouldn't call it HDR. HDR is a very misused term as it is. In its proper form, a High Dynamic Range image is an image with an EXCESSIVELY HIGH dynamic range, stored as 32-bit floating-point numbers with extremely fine precision and a dynamic range spanning hundreds of stops (i.e. it can represent numbers from a couple billion down to billionths, and beyond.)

HDR as it is commonly (mis)used simply refers to the mapping of tones into a limited dynamic range from a source file that might have slightly higher dynamic range. What Canon is doing isn't exactly HDR...it is a specialized read process that will allow them to better utilize the dynamic range they already have access to, but which is otherwise being *diminished* by read noise.


----------



## Lawliet (Dec 20, 2013)

roguewave said:


> Thank you for the explanation! I got the gist of it, but I still don't understand the basics: how is it possible to read the same pixel twice simultaneously? I thought you can't eat your cake and have it too ? I mean, wouldn't the signal become weaker if you split it?



The patent looks like a ramp ADC - they don't take the electrons out to count them, but use a voltage comparison. With the unknown pile of e- on the right, you measure how long you have to add charge on the left side until both are equal (or the known one grows larger than the unknown). In theory nothing stops you from using multiple heaps that grow at different rates. You just have to keep crosstalk, external influences and such under control.
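That counting picture maps almost directly onto a toy single-slope (ramp) ADC. This is only a sketch of the general technique, with made-up step size and count depth, not the patent's actual circuit:

```python
def ramp_adc(signal_v, ramp_step_v=0.001, max_counts=16383):
    """Toy single-slope (ramp) ADC: a known reference ramps up while a
    counter runs; the count at the moment the ramp crosses the unknown
    signal *is* the digital value. (Step size and count depth here are
    made-up illustration values.)
    """
    count = 0
    ramp_v = 0.0
    while ramp_v < signal_v and count < max_counts:
        ramp_v += ramp_step_v   # reference grows by one step per clock
        count += 1              # counter latches when the comparator flips
    return count
```

Using "multiple heaps that grow at different rates" would just mean picking a different `ramp_step_v` per conversion.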


----------



## roguewave (Dec 20, 2013)

jrista said:


> I wouldn't call it HDR. HDR is a very misused term as it is. In its proper form, a High Dynamic Range image is an image with an EXCESSIVBLY HIGH dynamic range, stored as 32-bit floating point numbers with extremely fine precision and a dynamic range that could potentially equal thousands of stops (i.e. it can represent numbers from a couple billion down to billionths.)
> 
> HDR as it is commonly (mis)used simply refers to the mapping of tones into a limited dynamic range from a source file that might have slightly higher dynamic range. What Canon is doing isn't exactly HDR...it is a specialized read process that will allow them to better utilize the dynamic range they already have access to, but which is otherwise being *diminished *by read noise.



I didn't mean to call this process HDR in the strict sense - you're right, it is a misused term.

Regardless of the exact meaning, I was thinking of the common understanding of HDR along the lines of:
...HDR compensates for this loss of detail by capturing multiple photographs at different exposure levels and combining them to produce a photograph representative of a broader tonal range... (wikipedia)

I could be wrong, but isn't that the same idea? Creating the equivalent of two different exposures by applying two different gain levels and then combining them. The difference is pushing it onto the sensor rather than post-processing in software, so there is no need to take multiple shots at different exposures.


----------



## roguewave (Dec 20, 2013)

Lawliet said:


> The patent looks like a ramp ADC - they don't take the electrons out to count them, but use a voltage comparison. The unknown pile of e- on the right, you measure how long you have to add charge on the left side until both are equal(or the known one grows larger then the unknown). In theory nothing stops you from using multiple heaps that grow at different rates. You just have to keep crosstalk, external influences and such under control.



Thank you, I think I got it!


----------



## jrista (Dec 20, 2013)

roguewave said:


> jrista said:
> 
> 
> > I wouldn't call it HDR. HDR is a very misused term as it is. In its proper form, a High Dynamic Range image is an image with an EXCESSIVBLY HIGH dynamic range, stored as 32-bit floating point numbers with extremely fine precision and a dynamic range that could potentially equal thousands of stops (i.e. it can represent numbers from a couple billion down to billionths.)
> ...



Yeah, pretty much. I don't know exactly how they get the two reference signals, but in the end, the gain isn't huge. Canon sensors currently get around 11.5 stops on average. This could allow them to get ~13.5 stops on average unless they move to an ADC with a higher bit depth. If they do move beyond a 14-bit ADC, then it would definitely be a lot more like hardware HDR (imagine 15.5 stops or thereabouts for a 16-bit ADC.)
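Those stop estimates can be sanity-checked with a quick back-of-envelope: dynamic range is capped by whichever is lower, the sensor's full-well-to-read-noise ratio or the ADC bit depth. The specific numbers below are my own illustrative guesses, not measured Canon values:

```python
import math

def dr_stops(full_well_e, read_noise_e, adc_bits):
    """Back-of-envelope DR in stops: the sensor delivers
    log2(full well / read noise) stops, but a linear ADC can only
    encode adc_bits stops, so the lower of the two is the ceiling."""
    return min(math.log2(full_well_e / read_noise_e), adc_bits)

# ~60k e- full well with ~20 e- read noise: read noise limits DR to ~11.6 stops.
# Cut read noise to ~4 e- and you are nearly at the 14-bit ADC ceiling (~13.9).
```

This is why reducing read noise and widening the ADC go hand in hand: past a point, one without the other buys nothing.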


----------



## jrista (Dec 20, 2013)

Lawliet said:


> roguewave said:
> 
> 
> > Thank you for the explanation! I got the gist of it, but I still don't understand the basics: how is it possible to read the same pixel twice simultaneously? I thought you can't eat your cake and have it too ? I mean, wouldn't the signal become weaker if you split it?
> ...



Yeah, that sounds very much like what they are describing. Here is the actual abstract from the patent: 

_*ABSTRACT*

An image sensor comprises plural sets of a unit pixel outputting a pixel signal based on an electric charge generated through photoelectric conversion and a conversion unit converting the pixel signal into a digital signal. A reference signal source generates reference signals and supplies the generated reference signals to the conversion unit through signal lines. The conversion unit of each set comprises a comparator which compares the level of the reference signal with that of the pixel signal, a count circuit which counts a clock based on the comparison processing, a selection circuit which selects among the signal lines, a signal line to be selectively connected to the input of the comparator, and a switch which selectively connects the selected signal line to the input of the comparator, and selectively connects a load to an unselected one of the signal lines. _

I am still not entirely certain I understand what the purpose of this is. I read embodiments three and four, and the summary in the last section of each always refers to increasing the accuracy of ADC. I am interpreting that to mean less noise, but I am not sure how much less noise. Here is the summary from embodiment three:

_*[0086]* As described above, by selecting, from two reference signals which have been offset from each other and have different voltage ramp gradients, a reference signal to be compared with a significant signal (not sure what this significant signal is -jrista), it is possible to shorten the conversion period as compared with a case in which one reference signal is used to perform A/D conversion. At this time, the load of the reference signal line is used for comparison processing, and the load variations of the reference signal line depending on the pixel signal are suppressed, thereby enabling to prevent the accuracy of A/D conversion from decreasing. _

From what I understand about Canon noise in their current setup, the high-frequency ADC in their DIGIC chips is a significant source of banding noise. I've assumed that this patent, by increasing the accuracy of ADC, would reduce that noise, thereby allowing a gain in DR. Based on what I read in embodiment three, I am not really sure whether that is the case or not.

I hadn't read much farther than that before, but reading into embodiments five, six, and seven, they start talking about inverting, during the reset read, one of the reference signals that is applied during the normal read. That sounds a lot like what Sony does with Exmor for digital CDS...but they don't actually call it that. They also state that an analog CDS is still supported, but not necessarily required.


----------



## Lawliet (Dec 21, 2013)

jrista said:


> I am still not entirely certain I understand what the purpose of this is.
> 
> 
> > It has two implications. At ~64K e- full well capacity and 16K/14-bit resolution you get away with 4e- steps for sampling. In the same timeframe you can sample the low quartile at 1e- resolution, cutting the shadow noise by a good margin. Engineering details have impact on the actual numbers...
> ...


One prong gets us closer to single-electron counting/ISO-less readout, at least for pulling up shadows. No benefit for recovering blown-out highlights or spreading the midtones. We need some margins for whining, I guess. 
The other should shift the main cause of banding from analog amplification and transmission to timing accuracy. The latter is much easier to handle; something at the clock rate of parts of current CPUs would do the trick. Good news: "normal" clock instabilities cause only a marginal drift of effective ISO, but at the same rate for all channels.
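Redoing the quantization arithmetic from the quoted numbers (64K e- full well, 14-bit conversion) as a quick check:

```python
FULL_WELL_E = 64_000      # example full-well capacity from the post
ADC_CODES = 2 ** 14       # 14-bit conversion -> 16384 output codes

# One code spans ~4 electrons if the whole well maps onto 16384 codes...
coarse_step = FULL_WELL_E / ADC_CODES        # ~3.9 e- per code
# ...but devoting the full code depth to just the lowest quartile of the
# well gives ~1 e- steps, which is the claimed shadow-noise win.
fine_step = (FULL_WELL_E / 4) / ADC_CODES    # ~0.98 e- per code
```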

Question to be answered: how noise-free are the comparators and their infrastructure?


----------



## horshack (Dec 21, 2013)

Some of this patent seems to overlap what Emil Martinec came up with back in 2008:

http://www.dpreview.com/forums/post/28750076


----------



## LetTheRightLensIn (Dec 21, 2013)

horshack said:


> Some of this patent seems to overlap what Emil Martinec came up with back in 2008:
> 
> http://www.dpreview.com/forums/post/28750076



Yeah, I thought of that as soon as I first saw this patent.


----------



## jrista (Dec 21, 2013)

LetTheRightLensIn said:


> horshack said:
> 
> 
> > Some of this patent seems to overlap what Emil Martinec came up with back in 2008:
> ...



Seems similar, at least if I've read the patent correctly.


----------



## Aglet (Dec 21, 2013)

unfocused said:


> Today, in the photo world the tech gurus are predicting the death of the DSLR and saying the future will be mirrorless interchangeable lens cameras. But, to me, these seem like laptops. Too big to be truly portable, overpriced and with too many compromises to truly replace a DSLR.
> 
> I strongly suspect that in five years, the tech gurus will have moved on to the next big thing. Mirrorless will have run its course and the DSLR will still be plugging away because the form factor that has worked for 75 years remains the best form factor for its purpose.



I don't think you're correct on this mirrorless prediction.
If you've tried an Olympus E-M1 you'll see just how responsive and useful a good mirrorless EVF system can be, and the tech's got some legs yet.

I don't think mirrorless will "run its course." It will become an alternative to the traditional SLR-type camera body; each will have its pros and cons and appeal to different consumer segments.

All the MFT cameras and Fuji's higher-end bodies are proving they're very capable already. It will only be a matter of another generation or two before they likely outperform even the best DSLRs for shooting speed, both AF and fps.
Higher-frame-rate, higher-resolution EVFs will surely arrive, though they're already adequate to rival optical VFs for functionality. Battery drain will improve, extending operating duration. These advances may even arrive from the traditional SLR manufacturers first. I'm sure they can see the shadow such technology is casting on them. PentNikCan have already ventured into mirrorless categories, not very successfully, but they've gotten their toes wet and will have to continue, and may even have to get competitive, at least in their own way, within the next couple of years.

The future WILL carry on WITH mirrorless cameras. The end-of-times for glass-flappers is nigh.
And I welcome the advantages it will bring.


----------



## jrista (Dec 22, 2013)

*UPDATE*

So, I've read through most of the patent's embodiments now. I am not sure this is actually similar to what ML did. I do think it has to do with noise reduction; however, it achieves it in a different way. From what I understand, this patent uses two reference signals of different ADC "precision" for comparison with the source pixel signal, plus a set of circuitry to increase ADC speed and accuracy while maintaining a constant load level despite changing voltages.

The "constant load level" is what intrigued me the most. I believe it is a varying load level in Canon's current ADCs that leads to a bulk of their read noise. When load varies in an electrical circuit, it creates oscillations..."noise", a good example of which would be that electrical buzz in a DC circuit. If you can maintain a constant load, your noise level will drop considerably.

So, while this might not be as interesting as a Magic Lantern-style dual ISO read, I think it would still have the same effective result: less read noise, more dynamic range at low ISO. The use of reference signals at different voltage ramps is simply to provide a secondary source for comparison with the actual pixel signal, and the option to select the more accurate signal...it really doesn't have anything to do with Dual ISO. I suspect that if Canon ever does pursue Dual ISO, the patent would probably refer more directly to such a mechanism...this patent only really directly referred to high and low precision ADC, constant load, and higher ADC accuracy...none of which really seemed to indicate ISO to me.


----------



## TheSuede (Dec 22, 2013)

The patent is old and probably conceptually invalid due to prior art. And I'm pretty surprised about the confusion over what it does, as it's fairly straightforward and easy to read.

What Canon patented here is a specific implementation, not a method, of a two-stage pre-selection of AD reference voltage.

As the signal is presented to the AD section, the absolute voltage is first presented to a comparator circuit. In the "determination period" the comparator sets the AD ramp signal to either a high (fast) reference ramp, or a low (slow) reference ramp.

If the signal is (was) lower than the comparator set point, then the AD works with a slower ramp and afterwards scales the result down numerically by a factor of [high ramp] / [low ramp]. This enables a "slower" readout of weak signals, something which offsets the crappy (noisy) AD converter's base-level noise for low-level signals. Signals stronger than the comparator set point will be digitized with lower precision, but in strong signals that inaccuracy is totally dominated by photon shot noise.

If you use higher-quality AD converters or a slower conversion rate, this two-stage setup is not necessary. Often slower reads are implemented through higher parallelism, using more AD converters per image. This is what Sony's Exmor does, as do the on-sensor AD conversions of the other big five: they use the "slow ramp" for all pixels, all the time, anyway.
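That two-stage description translates into a small sketch: a comparator decides during the determination period whether the signal is weak or strong, then the conversion counts against the matching ramp and rescales to a common code scale. Step sizes, threshold, and count depth below are my own toy numbers, not values from the patent:

```python
def dual_ramp_adc(signal_v, slow_step_v=0.00025, fast_step_v=0.001,
                  threshold_v=2.0, max_counts=16383):
    """Toy two-stage ramp ADC (illustrative numbers, not the patent's).

    A comparator first classifies the signal as weak or strong, then the
    conversion counts against a slow (fine) ramp for weak signals or a
    fast (coarse) ramp for strong ones.
    """
    # Determination period: pick the ramp gradient for this conversion.
    step = slow_step_v if signal_v < threshold_v else fast_step_v
    count, ramp_v = 0, 0.0
    while ramp_v < signal_v and count < max_counts:
        ramp_v += step
        count += 1
    # Express both paths on the coarse-ramp scale so codes are comparable;
    # the slow path keeps its finer quantization as fractional precision.
    return count * (step / fast_step_v)
```

Weak signals get quarter-size steps here, cutting their quantization step fourfold, while strong signals stay on the coarse ramp where photon shot noise dominates anyway.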


----------



## CarlTN (Dec 26, 2013)

jrista said:


> roguewave said:
> 
> 
> > jrista said:
> ...



Nice concise explanation...


----------



## jrista (Dec 26, 2013)

TheSuede said:


> The patent is old and probably conceptually invalid due to prior-art. And I'm pretty surprised about the confusion about what it does, as it's fairly straight-forward and easy to read.
> 
> What Canon patented here is a specific implementation, not a method, of a two-stage pre-selection of AD reference voltage.
> 
> ...



Thanks for the explanation! Basically what it sounded like, but I couldn't figure out the exact mechanism by which they reduced noise. Slower readout for lower signals makes total sense. Sorry, I was reading a rather poor translation from Japanese...god-awful, ass-backwards sentences and funky wording.


----------

