# Superresolution Photos with Photoshop



## chauncey (Sep 10, 2016)

I came across this technique for increasing image quality by using PS...
https://www.youtube.com/results?search_query=Superresolution+Photos+with+Photoshop

From what I can tell, it mimics some sort of sensor shift technology available in some cameras.
Can anyone explain it in layman's terms?


----------



## chauncey (Sep 11, 2016)

C'mon folks, someone's gotta understand it.


----------



## privatebydesign (Sep 11, 2016)

I do. I have used it to demonstrate noise reduction techniques in a lecture.

It doesn't add pixels; it just averages them to increase the detail. If you start out with twenty 20MP raw files you end up with one 20MP file.

All the technique does is take each pixel position, compare its value across all the exposures, and work out a new value in one of two ways. The median option takes the middle value, which works best when there are moving objects in the scene, provided the object didn't occupy the same place for more than half the exposures. The mean option adds the values from all the exposures and divides by the number of exposures.

So why does this make the image more detailed? It is removing noise: most noise is random, so averaging exposures removes that random noise substantially. It is a very basic version of what astrophotographers do.


So if you take a pixel whose value in real life is 45, across five exposures you might get values like this: 30, 47, 38, 52, 48. Put them in numerical order: 30, 38, 47, 48, 52. The mean value for your new and improved output file is 215/5 = 43; the median value is 47.
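The arithmetic above is easy to check in a few lines of Python. The five readings are the hypothetical values from the example, not real sensor data:

```python
import statistics

# Hypothetical readings of one pixel across five exposures;
# the "true" scene value in the example is 45.
readings = [30, 47, 38, 52, 48]

mean_value = sum(readings) / len(readings)   # 215 / 5
median_value = statistics.median(readings)   # middle of the sorted list

print(mean_value)    # 43.0
print(median_value)  # 47
```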

This makes for a much more accurate representation of the scene. It works best when you have to use higher ISOs than you would like but can take multiple exposures of the same scene (astrophotography), where you can keep the subject in roughly the same place in the frame (astrophotography), and where the subject is not moving relative to the background (astrophotography).

I haven't found much use for it in real life other than very dim landscapes; darker images work much better than lighter ones. But it is fun to play around with.


----------



## rs (Sep 11, 2016)

When you take a single shot with a conventional camera, the image is limited to the resolution of the sensor, with effects such as the Bayer array and AA filter only reducing that resolution further.

This technique is about taking multiple images, all slightly different, and then letting software fill in the gaps by using the different frames.

Sensor-shift multi-shot in Hasselblad, Olympus and Pentax cameras accomplishes this automatically, and requires the camera to be perfectly still on a tripod so the shifted images align as intended.

However, this technique works without sensor shift, so it relies on the user not framing the image 100% identically on each shot.

For simplicity's sake, imagine a photo of a 2D scene, such as a test target. There will be a certain area each pixel covers. For example, with a large test target and a low-res camera, each pixel could cover a 10 mm by 10 mm square. Take one photo, then take another aligned exactly 5 mm to the right. Each pixel now sees an average of what's going on in a different 10 mm by 10 mm square, with a new brightness and colour. Offset it vertically to get something else again. Combine the frames in software that can auto-align, and you get the ability to see more detail than the sensor could otherwise resolve.
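A toy one-dimensional version of that idea can be sketched in NumPy. A smooth "scene" is sampled by a simulated low-res sensor at four quarter-pixel offsets, and a naive shift-and-add combine tracks the scene better than any single frame. The scene, pixel size, and offsets are all invented for illustration; real software also has to estimate the offsets by aligning the frames:

```python
import numpy as np

# A smooth 1-D "scene" on a fine grid (400 samples).
scene = np.sin(np.linspace(0.0, 20.0, 400))

def capture(offset, factor=4):
    """Simulate one low-res exposure: each sensor pixel averages `factor`
    adjacent scene samples, starting `offset` fine-grid steps in."""
    window = scene[offset:offset + 396]
    return window.reshape(-1, factor).mean(axis=1)

# Four frames, each shifted by a quarter of a sensor pixel.
frames = [capture(o) for o in range(4)]

# Naive shift-and-add: upsample each frame onto the fine grid at its
# known offset, then average the overlapping estimates per fine sample.
fine = np.zeros(400)
counts = np.zeros(400)
for o, frame in enumerate(frames):
    up = np.repeat(frame, 4)            # nearest-neighbour upsample
    fine[o:o + up.size] += up
    counts[o:o + up.size] += 1
recon = fine / np.maximum(counts, 1)

# Compare squared errors on the region covered by all four frames.
region = slice(3, 396)
single_err = np.mean((np.repeat(frames[0], 4)[region] - scene[region]) ** 2)
multi_err = np.mean((recon[region] - scene[region]) ** 2)
print(multi_err < single_err)  # True
```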

If you take enough handheld photos, by process of elimination you will fill in enough blanks to at the very least claw back what the CFA and AA filter lost (putting it on a par with a Foveon sensor of equivalent resolution), or potentially do even better (if you follow the instructions in your link to increase resolution beyond 100% for each frame). Obviously, due to alignment issues, the edges will be cropped out, but if the framing is similar enough it's going to be negligible.

As it's a form of image stacking, the signal-to-noise ratio will improve too. Signal is constant; shot noise is random. Add the constant signal together and you get a stronger signal. Add random noise together and it starts to cancel itself out.
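That cancellation is easy to simulate. Here is a minimal NumPy sketch; the signal level, noise level, and frame count are arbitrary assumptions, not measurements from a real camera:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat synthetic "scene": every pixel has true value 45.
signal = np.full((100, 100), 45.0)

def noisy_exposure(sigma=8.0):
    """One simulated exposure: the constant signal plus random Gaussian noise."""
    return signal + rng.normal(0.0, sigma, signal.shape)

single = noisy_exposure()
stack = np.mean([noisy_exposure() for _ in range(20)], axis=0)

# Residual noise (std of the error) drops by roughly sqrt(N) for N frames.
print(np.std(single - signal))  # ~8
print(np.std(stack - signal))   # ~8 / sqrt(20), i.e. about 1.8
```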


----------



## privatebydesign (Sep 11, 2016)

rs said:


> When you take a single shot with a conventional camera, the image is limited to the resolution of the sensor, with effects such as the Bayer array and AA filter only reducing that resolution further.
> 
> This technique is about taking multiple images, all slightly different, and then letting software fill in the gaps by using the different frames.
> 
> ...



This is not how this technique works. It doesn't _"fill in the blanks"_; it averages the value for any one pixel, and it only works if PS can get a pixel-perfect alignment of the scene. If it can't, you get fringing and ghosting.

The technique is still limited to the sensor resolution: twenty 20MP images in gets you one 20MP image out. Nothing is added; it is just refined/averaged to be more accurate.
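The "twenty in, one out" point can be sketched in NumPy. The frames here are random placeholder data, and a tiny 4×6 "image" stands in for a 20MP one; the two stack operations mirror Photoshop's Mean and Median stack modes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Twenty simulated, already-aligned exposures of the same tiny scene.
frames = rng.normal(45.0, 8.0, size=(20, 4, 6))

mean_stack = frames.mean(axis=0)           # like the "Mean" stack mode
median_stack = np.median(frames, axis=0)   # like the "Median" stack mode

# Twenty frames in, one frame out, same pixel dimensions as each input.
print(mean_stack.shape)  # (4, 6)
```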


----------



## Tuke (Sep 11, 2016)

Most cameras use a Bayer pattern in their sensors, so you won't actually get a true 50MP image from a 50MP sensor. This method fixes that. It's like using a 50MP (Canon) sensor and getting true Foveon (Sigma) results.


----------



## privatebydesign (Sep 11, 2016)

Tuke said:


> Most cameras use a Bayer pattern in their sensors. So you won't actually get true 20mpix from a 20mpix sensor. This method fixes that. It's like using a 50mpix (Canon) sensor and getting Foveon (Sigma) results.



No it isn't.

If you have a 20MP sensor you get 20MP of data, and that data is the truth. The quality of that data is what is being changed here, by averaging the data from each pixel over several exposures.

CFAs/Bayer arrays are a red herring: the capture is in B&W, and the resolution is the same if you change your image to B&W. You start with 20MP and you end with 20MP. The only thing the Bayer filter does is give the algorithms data to extrapolate colour for that pixel; each pixel has a clear and defined value of luminosity (which gives us the resolution).

So this technique might improve colour, but not to any noticeable degree. What it does do, very dramatically at massive enlargements, is remove random noise.

It is nothing like using a 50MP sensor and getting Foveon-like results, nothing at all.

Same with the AA filter: if you start with 20MP you end up with 20MP (well, slightly less, because you normally have to crop a little). The average of several AA-filter-blurred images (not motion-blurred images) is a sharper image that responds better to post sharpening.


----------



## Zeidora (Sep 11, 2016)

There are cameras that use piezo-electric elements to move the sensor around by (sub-)pixel values. One of them is the Zeiss Axiocam HRc, a dedicated peltier-cooled microscope camera. It has a small chip (1.3 MP) but by "co-site sampling" you get a 12 MP file with dedicated Zeiss software (Zen or Axiovision). I assume it involves some deconvolution, so nothing that can be done in PS.

It has other advanced features such as pixel binning for low light photography, and full well capacity of 20K e-.

The 12 MP files are plenty large given diffraction limitation of microscope optics, and they look very nice. I have one of those.


----------



## chauncey (Sep 11, 2016)

I don't doubt anything you guys have said. My problem is that I don't grasp it.


----------



## privatebydesign (Sep 11, 2016)

chauncey said:


> I don't doubt anything you guys have said. My problem is that I don't grasp it.



What don't you understand about this?



> "So if you take a pixel whose value in real life is 45, across five exposures you might get values like this: 30, 47, 38, 52, 48. Put them in numerical order: 30, 38, 47, 48, 52. The mean value for your new and improved output file is 215/5 = 43; the median value is 47."


----------



## chauncey (Sep 11, 2016)

ya might as well give up...like flogging a dead horse.


----------

