# Canon Successfully Develops the World’s First 1-megapixel SPAD Sensor



## Canon Rumors Guy (May 31, 2021)

> Canon has once again announced the development of the world’s first 1-megapixel SPAD sensor. Back in June of 2020, Canon made a similar announcement.
> From Canon Global
> AR, VR, driverless vehicles, ultra-high frames-per-second shooting speeds, automated robots…the IT revolution has greatly expanded the limits of what’s possible. One of the key components that will change society as we know it is the “sensor,” a device that changes light into electronic signals. In June 2020, Canon announced that it had successfully developed the world’s first 1-megapixel single-photon avalanche diode (SPAD) image sensor, drawing attention from industry watchers all over the world.
> 
> SPAD sensors are a type of image sensor. The term “image sensor” probably brings to mind the CMOS sensors found in digital cameras, but SPAD sensors operate...



Continue reading...


----------



## Traveler (May 31, 2021)

This looks big


----------



## Doug7131 (May 31, 2021)

Not sure why Canon are bringing this up again. This news is over a year old.


----------



## dak3 (May 31, 2021)

Mind blown! This is revolutionary in terms of electronic engineering, and will change the way we all take, process, and interpret photographs. BTW, does anyone know how to reattach a mandible? My jaw may have dislocated and fallen to the floor.


----------



## Canon Rumors Guy (May 31, 2021)

Doug7131 said:


> Not sure why Canon are bringing this up again. This news is over a year old.


I thought I remembered this, and yes... it's a year old. I'm comparing the articles to see if there is new information.


----------



## Flamingtree (Jun 1, 2021)

Maybe this is what they are holding back for the R1


----------



## calfoto (Jun 1, 2021)

Flamingtree said:


> Maybe this is what they are holding back for the R1


Hmmm, an R1(megapixel) camera doesn't really get my juices flowing - just sayin'


----------



## Mr Majestyk (Jun 1, 2021)

calfoto said:


> Hmmm, an R1(megapixel) camera doesn't really get my juices flowing - just sayin'


Well indeed, but maybe, just maybe, they could include a ToF SPAD sensor on the R1. I expect the R1 to still be only around 20MP due to its global shutter. That, plus a price tag probably over $7K, makes the R3 the real deal for me, unless they gimp it and also make it only 20MP or so.


----------



## Stu_bert (Jun 1, 2021)

Canon Rumors Guy said:


> I thought I remembered this, and yes... it's a year old. I'm comparing the articles to see if there is new information.


Canon released a full article on it via their global site just recently:

Canon Successfully Develops the World’s First 1-megapixel SPAD Sensor | Canon Global (global.canon)

but the announcement between EPFL and Canon was a year ago.


----------



## CanonGrunt (Jun 1, 2021)

Skynet is next right?


----------



## jam05 (Jun 1, 2021)

Canon Rumors Guy said:


> I thought I remembered this, and yes... it's a year old. I'm comparing the articles to see if there is new information.


CR didn't take the time to edit the title. That's what happens when one chases patents.


----------



## Chig (Jun 1, 2021)

CanonGrunt said:


> Skynet is next right?


Start running now, before they send a Terminator


----------



## Chig (Jun 1, 2021)

calfoto said:


> Hmmm, an R1(megapixel) camera doesn't really get my juices flowing - just sayin'


A 1 mp SPAD sensor will have much higher resolution than a 1 mp CMOS sensor, I suspect, so perhaps equivalent to 20-30 mp?


----------



## Rambutan (Jun 1, 2021)

> achieving an ultra-high frame rate of up to 24,000 frames-per-second (FPS) in 1-bit output. This enables the sensor to capture slow motion videos of phenomena that occur in extremely short time frames and were previously impossible to capture.


Can anyone explain why 24,000 FPS 1-bit video allows capturing previously impossible phenomena, when commercially available high-speed cameras with much faster speeds already exist?

For example, the popular Phantom v2512 can capture 1 megapixel (1280x800) at a comparable 25,700 FPS but with a much higher 12-bit depth, and Phantom's fastest camera, the TMX 7510, can capture the same 1-megapixel (1280x800) resolution at 76,000 FPS, also with 12-bit output, or at a reduced resolution of 1280 x 32 or 640 x 64 at _1.75 million FPS_.
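For scale, the raw output rates can be sketched with back-of-the-envelope arithmetic. The snippet below assumes the SPAD sensor also reads out roughly 1280×800 (an assumption for comparison, not Canon's published geometry):

```python
# Back-of-the-envelope raw output rates for the sensors discussed above.
# The 1280x800 geometry for the SPAD sensor is an assumption, chosen only
# to match the Phantom for an apples-to-apples comparison.
def raw_rate_gbit_s(width: int, height: int, fps: int, bits: int) -> float:
    """Uncompressed sensor output in gigabits per second."""
    return width * height * fps * bits / 1e9

spad_1bit = raw_rate_gbit_s(1280, 800, 24_000, 1)       # SPAD at 1-bit
phantom_12bit = raw_rate_gbit_s(1280, 800, 25_700, 12)  # Phantom v2512

print(f"SPAD, 1-bit:     {spad_1bit:.1f} Gbit/s")       # ~24.6 Gbit/s
print(f"Phantom, 12-bit: {phantom_12bit:.1f} Gbit/s")   # ~315.8 Gbit/s
```

One takeaway: at similar frame rates, the 12-bit Phantom pushes roughly 13x the data, so the 1-bit readout is part of what makes such speeds tractable on a small sensor.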


----------



## Rambutan (Jun 1, 2021)

Chig said:


> A 1 mp SPAD sensor will have much higher resolution than a 1 mp CMOS sensor, I suspect, so perhaps equivalent to 20-30 mp?


That's not how resolution or pixels work. It may have higher dynamic range or better low-light performance, though.

BTW, a SPAD sensor cannot discern color on its own, so for a mirrorless camera the sensor will also need a color Bayer filter added, just like a traditional CMOS sensor.


----------



## Limpan4all (Jun 1, 2021)

Rambutan said:


> Can anyone explain why 24,000 FPS 1-bit video allows capturing previously impossible phenomena, when commercially available high-speed cameras with much faster speeds already exist?
> 
> For example, the popular Phantom v2512 can capture 1 megapixel (1280x800) at a comparable 25,700 FPS but with a much higher 12-bit depth, and Phantom's fastest camera, the TMX 7510, can capture the same 1-megapixel (1280x800) resolution at 76,000 FPS, also with 12-bit output, or at a reduced resolution of 1280 x 32 or 640 x 64 at _1.75 million FPS_.


I can see numerous reasons.
With signal processing, variable sampling time could be used: stop sampling as soon as any pixel reaches the wanted bit depth, then freeze the bit count for all other pixels. That gives you the maximum possible resolution for the bit depth your output format can handle, and makes over-exposure impossible.
Or store the arrival time of each and every photon; then, through post-processing, dynamic pictures could be produced with a selectable trade-off between speed and clarity, or even both, from one "video" stream. But none of the existing data storage formats could be used, as they are based on completely different physics.
With a synchronized light source, the distance to the reflection could be determined for each pixel, so building a 3D model of the surroundings is "easy" with some signal processing. Or just take the easy route and put a hard limit on specific pixels that should NOT receive any photons within a specific time frame.
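The synchronized-light-source idea is direct time-of-flight ranging, and the per-pixel distance arithmetic is simple. A minimal sketch (illustrative only; the timing numbers are invented, not Canon's):

```python
# Direct time-of-flight: a SPAD timestamps a returning photon, and the
# round-trip time gives the distance. All numbers here are illustrative.
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def tof_distance_m(emit_time_s: float, detect_time_s: float) -> float:
    """Distance to the reflecting surface from a photon's round-trip time."""
    round_trip_s = detect_time_s - emit_time_s
    return C_M_PER_S * round_trip_s / 2  # halved: light travels out and back

# A photon emitted at t=0 and detected ~66.7 ns later bounced off a
# surface roughly 10 m away.
print(f"{tof_distance_m(0.0, 66.7e-9):.2f} m")
```

In a real sensor this per-pixel computation runs across the whole array at once, which is what turns a depth measurement into a 3D model of the scene.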

The first CMOS sensor was not very impressive either. This is a gigantic leap forward, and I think it is a bit unfair to compare a mature, 20-year-old technology with something that is brand new (or a year old..).

I do expect this technology to make a huge impact in machine vision to start with especially for autonomous cars.
But it will also move into cameras within 10 to 15 years, and it will change everything. More will move from the moment we take the "picture" into post-processing, as the very concept of pictures/video gets thrown out: a huge change of concept. So you as a photographer can have video, high-speed photos, or high-resolution photos, and at the same time a dynamic range way above everything we have seen so far. And it can all be done in post-processing.

The only downside I can see is the storage requirements. On the other hand, 15 years from now, 2-petabyte memory devices will most likely exist as high-end components the size of a CF card... Fully utilized, this technology will consume huge amounts of storage, and totally new concepts for "lossy" data compression must be invented as compressed "RAW" formats.


----------



## Jasonmc89 (Jun 1, 2021)

Here is the first application fitted with dual SPAD sensors.


----------



## Limpan4all (Jun 1, 2021)

I do hope that you realize that the Terminator movies were not documentaries.

They cannot exist before a new type of energy source has been invented; that is the key question, not vision or computers.


----------



## Stuart (Jun 1, 2021)

So could QPAF be assisted by SPAD ToF for super-accurate AF?

LOL, a one-bit B&W dynamic range is probably best left for the robots' style of photography


----------



## RunAndGun (Jun 1, 2021)

Limpan4all said:


> I do hope that you realize that the Terminator movies were not documentaries.
> 
> They can not exist before a new type of energy source has been invented, that is the key question, not vision or computers.


And you do realize what a joke is, right?


----------



## melgross (Jun 1, 2021)

Chig said:


> A 1 mp SPAD sensor will have much higher resolution than a 1 mp CMOS sensor, I suspect, so perhaps equivalent to 20-30 mp?


And how could they achieve that with a 1mp sensor?


----------



## slclick (Jun 1, 2021)

So this is going in the R7 with APS-H, right?


----------



## slclick (Jun 1, 2021)

Limpan4all said:


> I do hope that you realize that the Terminator movies were not documentaries.
> 
> They can not exist before a new type of energy source has been invented, that is the key question, not vision or computers.


Wait, what?


----------



## calfoto (Jun 2, 2021)

Chig said:


> Start running now , before they send a Terminator


What’s the use? They’ve already been here 5 or 6 times


----------



## SnowMiku (Jun 2, 2021)

In 10-20 years' time a higher-megapixel version of this sensor could be in cameras and smartphones: just point at the Milky Way in the dark, handheld, and have no noise. But I don't understand how sensor technology works, so I could be completely wrong.


----------



## Limpan4all (Jun 2, 2021)

Yes, of course, but it was based on bad assumptions about reality, so I played along.

The biggest problem with AF is not the focusing part, it is understanding what to focus on, and SPAD will not help much in solving that problem. I think that twenty or thirty years from now we will have fixed-focus cameras/lenses that do all the focusing in post-processing, and we will not care much about having perfect lenses. Any imperfections in the lens system will be compensated for in post-processing. The best part of doing it this way is that it could lead to lenses with fantastic low-light performance at a price point very few can afford today.

One-bit B/W dynamic range is the best thing that could happen (color separation can and will be done with filters, unless the next stage is also determining the energy level and wavelength of each photon, which could lead to a fully linear color range). Back to signal processing: to understand this, you need at least some basic signal processing.
Dynamic range can be measured in a number of different ways. The way we have had it so far is by taking a defined time (shutter time) and "chipping electrons out of a charged piece of silicon". This is how both CMOS and CCD work. The main problem with this is all the noise that comes along with it. After that defined time, we basically read out (with an A/D converter that is far from perfect) what is left of the charge, and say that this, inverted, is the amount of photon energy that reached the sensor.
If we could instead count every photon that reaches the surface, the noise would be way lower and never reach the "noise floor", in other words the threshold at which noise looks like a valid signal. So the dynamic range could be effectively unlimited, depending on your total or chosen sampling time, but without the noise issue. What would you prefer: 24 bits of true signal range and zero noise, or a 24-bit signal range where you only use 17 bits and have 7 bits of noise (at least)? I prefer the first, every day of the week and twice on Sundays.
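The noise argument can be put in numbers with basic shot-noise statistics. A toy comparison (the 5 e⁻ read-noise figure is an assumed, illustrative value, not a measured spec):

```python
import math

# Toy SNR comparison: ideal photon counting vs. charge integration with
# read noise. The 5 e- RMS read-noise figure is assumed for illustration.
def snr_counting(n_photons: float) -> float:
    """Ideal photon counting: only Poisson shot noise, SNR = sqrt(N)."""
    return n_photons / math.sqrt(n_photons)

def snr_integrating(n_photons: float, read_noise_e: float = 5.0) -> float:
    """Charge integration: read noise adds in quadrature with shot noise."""
    return n_photons / math.sqrt(n_photons + read_noise_e ** 2)

for n in (10, 100, 10_000):
    print(f"{n:>6} photons: counting {snr_counting(n):6.1f}, "
          f"integrating {snr_integrating(n):6.1f}")
```

At 10,000 photons the two are nearly identical; at 10 photons the read noise roughly halves the SNR, which is why photon counting matters most in the dark.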

A lot can be done with signal processing, but it comes down to one of two things: removing the noise with clever methods so it is never part of your data, or hiding the noise, in which case you will always have some of it left in your data (artifacts).


----------



## landon (Jun 2, 2021)

OT: Gordon Laing from Camera Labs has got a new video out on the R3. Oh, and a bunch of other videos as well.


----------



## COBRASoft (Jun 2, 2021)

Limpan4all said:


> The only downside I can see is the storage requirements. But on the other hand, 15 years from now will most likely 2 Peta Byte memory devices exist as high-end components in the size of a CF card... Fully utilized will this technology consume huge amounts of storage, and totally new concepts about "lossy" data compression must be invented as compressed "RAW" formats.


IBM already has a memory solution based on crystals. The problem is CPU speed; they're not fast enough even to reset the 'memory' on startup. Those crystals are 3D memory, ideal for scanning video for a specific face or for database 'filtering'.
Since a laser system reads through the crystal, a lot can be read in parallel instead of sequentially.

Completely new applications have to be built around these concepts, but it is coming.


----------



## TAF (Jun 3, 2021)

Rambutan said:


> Can anyone explain why 24,000 FPS 1-bit video allows capturing previously impossible phenomena, when commercially available high-speed cameras with much faster speeds already exist?
> 
> For example, the popular Phantom v2512 can capture 1 megapixel (1280x800) at a comparable 25,700 FPS but with a much higher 12-bit depth, and Phantom's fastest camera, the TMX 7510, can capture the same 1-megapixel (1280x800) resolution at 76,000 FPS, also with 12-bit output, or at a reduced resolution of 1280 x 32 or 640 x 64 at _1.75 million FPS_.



Single-photon avalanche. Suggesting that it is as sensitive to light as technically possible (you can't do better than one photon, right?). Very low-light capability, with a very fast 'reset' time, in an array. Single-pixel devices have had this sort of capability for years, but not an image sensor (to the best of my knowledge).

The Phantoms require a lot of light to work. I use them in the lab, and we use a multi-watt (not milliwatt) laser for illumination.


----------



## Jasonmc89 (Jun 3, 2021)

TAF said:


> Single-photon avalanche. Suggesting that it is as sensitive to light as technically possible (you can't do better than one photon, right?). Very low-light capability, with a very fast 'reset' time, in an array. Single-pixel devices have had this sort of capability for years, but not an image sensor (to the best of my knowledge).
> 
> The Phantoms require a lot of light to work. I use them in the lab, and we use a multi-watt (not milliwatt) laser for illumination.


I have no idea what you do for work, but I wish my work sounded like that!


----------

