# FoCal Database for Lens Quality of Focus



## AlanF (Jan 20, 2017)

Reikan FoCal collects the results automatically from the huge number of calibrations we do, and we can compare our "Quality of Focus" (QoF) data with the range of values found by other users. 

http://www.reikan.co.uk/focalweb/index.php/2016/08/focal-2-2-add-full-canon-80d-and-1dx-mark-ii-more-comparison-data-and-internal-improvements/

It's a very useful guide to how our copies of lenses perform compared with the rest out there, and to how different bodies react to different lenses. I think the QoF is a measure of the acutance of a lens, measuring the sharpness of a black-white transition. Here is a table for the telephotos I have used on various bodies over the years. The ranges given seem to fit in with the trends I find for my own lenses. Fortunately, my expensive primes are all above the average ranges. My 100-400mm II, which Lensrentals finds to be very consistent over many copies, tends to be in the average ranges, as you would expect. 

The comparative values are regularly updated, as I can see from some lens-camera combinations that were not covered until very recently. FoCal is providing an independent database over many copies. It's quite a resource.


----------



## bluenoser1993 (Jan 31, 2017)

My subscription is too old; I'd have to pay again to get access. It would be interesting, I just wonder how truthful the findings are. If the number is derived from the sharpness of the black-to-white transition, the results would be affected by the quality of the printed target, how much light is used, and the distance from which the test was run. If you look at the 100-400 on the chart, the lowest-resolution body (5DIII) scored highest, and they fall off in order of increasing resolution despite the 5DIV having the newest AF system available. As resolution increases, sharpness suffers more with poor technique, such as a slow shutter due to lighting, or clicking OK in FoCal too soon after the manual change of the AFMA value on the body. 

The QoF might be relevant when considering just one body, but I don't think it is a good comparison of body/lens combinations.


----------



## neuroanatomist (Jan 31, 2017)

bluenoser1993 said:


> ...poor technique...
> 
> The QoF might be relevant when considering just one body, but I don't think it is a good comparison of body/lens combinations.



Crowdsourced data, dependent on the technical skills of individual users in widely varying conditions.


----------



## takesome1 (Jan 31, 2017)

Differences in lighting as well as other conditions can change the number. 
Perhaps the data would help someone who understands all the variables.
As an indication of whether or not your lens is above average, I do not think it is. 
It is an average, and how do we know the average is not pulled down by the technical skill of the masses?
It would be interesting to see the highs.


----------



## Mt Spokane Photography (Jan 31, 2017)

Lens testers know that lenses perform differently on different bodies, but, as others noted, the unknown skill levels of the testers make one take the results with a grain of salt.

The high MP bodies are the most difficult to use to their maximum advantage, just putting one on a tripod is not enough. Testers have had to completely redo their test methodology in order to come up with reasonable test values. Even on concrete, nearby traffic causes issues, so very bright lighting and fast shutter speeds boost the numbers. If testing is done on the floor of a typical home or apartment under less than super bright lighting, the numbers will fall. 

As long as there is enough light, FoCal will find the proper AFMA, but believing the QoF values represent the highest values possible is a stretch.


----------



## AlanF (Jan 31, 2017)

The basic assumption in such crowd-sourcing data is that in a large number of measurements the errors and unknown factors average out so that the (relative) mean values are accurate but the spread is wider than for more carefully controlled measurements. It's a general principle in analysing statistics that a large number of inaccurate measurements often gives a more accurate measure of the true mean than a single precise measurement on one sample. 
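That averaging principle is easy to sandbox with a quick simulation. The numbers below are purely illustrative (not FoCal data), and the whole argument assumes the errors are unbiased, i.e. they can push readings both up and down:

```python
import random

random.seed(42)
TRUE_MEAN = 1500.0  # hypothetical "true" mean QoF for one lens/body combo

# One careful measurement with a small error (sd = 20)...
careful = random.gauss(TRUE_MEAN, 20)

# ...versus 10,000 sloppy crowd measurements with a large error (sd = 200).
crowd = [random.gauss(TRUE_MEAN, 200) for _ in range(10_000)]
crowd_mean = sum(crowd) / len(crowd)

# Standard error of the crowd mean = 200 / sqrt(10000) = 2, i.e. ten
# times tighter than the single careful reading, IF errors are unbiased.
print(f"careful off by {abs(careful - TRUE_MEAN):.1f}, "
      f"crowd mean off by {abs(crowd_mean - TRUE_MEAN):.1f}")
```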

I am sure that many CR members take their cholesterol-lowering drugs, anti-hypertensives etc based on the measured levels of their cholesterol and blood-pressure that have large uncertainties due to "copy (= your own body, not your camera's)" variation and other variables measured in different ways.


----------



## bluenoser1993 (Jan 31, 2017)

For example, I'm embarrassed to make public my recent testing, but I know the value of the results and was only trying to get a rough calibration in the limited time I had.

100-400ii on 5Ds @400mm QoF of 1525
@560mm QoF of 1320

The numbers compare with the bottom of the scale for the 5DsR

Now the embarrassing part. Test done at night, travel tripod on wood floor, target home printed on typical home inkjet and taped to a refrigerator (I know, I know. Only location with enough line of sight), and only lighting available which was a single LED work flood light 30W.

So if that test method actually landed in the range of a body that should be sharper than mine, take the QoF value comparisons with a grain of salt, even when you do your testing with a bit of care.


----------



## takesome1 (Jan 31, 2017)

bluenoser1993 said:


> For example, I'm embarrassed to make public my recent testing, but I know the value of the results and was only trying to get a rough calibration in the limited time I had.
> 
> 100-400ii on 5Ds @400mm QoF of 1525
> @560mm QoF of 1320
> ...



The refrigerator was running, and a 30W bulb. Obviously you have a bad lens, since it scored so low.


----------



## takesome1 (Jan 31, 2017)

AlanF said:


> The basic assumption in such crowd-sourcing data is that in a large number of measurements the errors and unknown factors average out so that the (relative) mean values are accurate but the spread is wider than for more carefully controlled measurements. It's a general principle in analysing statistics that a large number of inaccurate measurements often gives a more accurate measure of the true mean than a single precise measurement on one sample.
> 
> I am sure that many CR members take their cholesterol-lowering drugs, anti-hypertensives etc based on the measured levels of their cholesterol and blood-pressure that have large uncertainties due to "copy (= your own body, not your camera's)" variation and other variables measured in different ways.



I think if they organized the data to allow comparisons with the same lighting and exposure, it would give you a better database to compare against. I haven't paid attention to whether they collect that data.


----------



## bluenoser1993 (Jan 31, 2017)

It's possible they're collecting the data, but Kelvin is the only thing I see in the report. I remember older versions used to show the EV level during setup, but it doesn't anymore (not that I saw, anyway). I agree, if they could group the results by shutter speed, they would compare better.

The 30W was the LED value, not the incandescent equivalent, but still way too low. Not to mention LED is not the best for AF anyway.


----------



## neuroanatomist (Jan 31, 2017)

AlanF said:


> The basic assumption in such crowd-sourcing data is that in a large number of measurements the errors and unknown factors average out so that the (relative) mean values are accurate but the spread is wider than for more carefully controlled measurements. It's a general principle in analysing statistics that a large number of inaccurate measurements often gives a more accurate measure of the true mean than a single precise measurement on one sample.



Given normal variance, yes. However, in the case of lens testing the variance is not normally distributed, it's skewed – there is proper testing, which will in effect yield the highest value possible for that copy of the lens; there is less proper (or improper) testing, which will yield lower values for the same lens copy; but there is no 'more proper' testing that will yield higher values. The mean is generally not a useful summary statistic for a skewed distribution. Further, that skewed distribution is superimposed on the presumably normally distributed copy-to-copy lens variance.
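The one-sided nature of the error matters in practice. A quick sketch with made-up numbers (not FoCal's actual model) shows why the crowd mean under-reports a lens's capability when technique can only ever subtract:

```python
import random

random.seed(42)
TRUE_QOF = 2000.0  # hypothetical best-achievable QoF for one lens copy

# Technique errors only ever LOWER the score; model the loss from poor
# technique as an exponential penalty averaging 300 QoF points.
scores = [TRUE_QOF - random.expovariate(1 / 300) for _ in range(10_000)]

mean_score = sum(scores) / len(scores)
best_score = max(scores)

# The mean settles near TRUE_QOF - 300, well below the lens's true
# capability, while the best run gets close to it.
print(f"mean {mean_score:.0f}, best {best_score:.0f}, true {TRUE_QOF:.0f}")
```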

However, the accuracy of user-derived QoF values is really not the major concern here. Rather, the bigger issue is that you are comparing your own measured absolute QoF value (the peak of the curve) with the absolute QoF values measured by other users (semi-arbitrary color coding of the Y-axis values as 'better'/green, 'typical'/blue, and 'poor'/red):







Why is comparing absolute QoF values a bad thing? Well, let's review what Reikan themselves had to say about it when FoCal v1.9 was released:

[quote author=Reikan]
First, it’s important to understand that FoCal works by analysing the relative differences between the QoF numbers, not the absolute value. For example, suppose you have two measurements during a test that give QoF values of 3000 and 1500 – the most important piece of information here is that the second value is 50% of the first value. If you change the lighting and target image, you may find that the actual QoF values for the same measurements are 2000 and 1000, but the end result is the same – the second is still 50% of the first.
...
*As we have said from the release of FoCal, the absolute QoF value is unimportant, so you cannot compare the numbers from one test to another.* 
[/quote]
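Reikan's relative-QoF argument is simple to verify: scale every reading by a common factor (different light, different target) and the normalised curve, and hence the peak AFMA, is unchanged. The numbers below are made up for illustration:

```python
# Hypothetical AFMA sweep: QoF measured at four adjustment values.
afma = [-5, 0, 5, 10]
bright = [2400.0, 3000.0, 2700.0, 1500.0]  # QoF under bright light
dim = [v * 0.5 for v in bright]            # same sweep, half the light

def normalise(curve):
    """Scale a QoF curve so its peak is 1.0."""
    peak = max(curve)
    return [v / peak for v in curve]

# Identical shape, therefore the same best AFMA value...
assert normalise(bright) == normalise(dim)
best = afma[bright.index(max(bright))]
# ...even though every absolute QoF number changed.
print("peak AFMA:", best)  # prints "peak AFMA: 0"
```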

But with v2.0, all of a sudden the absolute QoF _is_ important, and all of a sudden you _can_ compare the numbers from one test to another. So what changed 'from the release of FoCal' to the release of v2.0? Oh yeah, they developed a database and now require people to pay for access to those data. File that under things that make you go hmmmmm...


----------



## takesome1 (Jan 31, 2017)

neuroanatomist said:


> But with v2.0, all of a sudden the absolute QoF _is_ important, and all of a sudden you _can_ compare the numbers from one test to another. So what changed 'from the release of FoCal' to the release of v2.0? Oh yeah, they developed a database and now require people to pay for access to those data. File that under things that make you go hmmmmm...



Is it like the ADHD conspiracy theory and Ritalin, promoting a disorder for an existing drug to treat? If you have it, why not find a way to sell it.

Surely there is not a monetary motivation on Reikan's part.


----------



## AlanF (Jan 31, 2017)

neuroanatomist said:


> AlanF said:
> 
> 
> > The basic assumption in such crowd-sourcing data is that in a large number of measurements the errors and unknown factors average out so that the (relative) mean values are accurate but the spread is wider than for more carefully controlled measurements. It's a general principle in analysing statistics that a large number of inaccurate measurements often gives a more accurate measure of the true mean than a single precise measurement on one sample.
> ...



> But with v2.0, all of a sudden the absolute QoF _is_ important, and all of a sudden you _can_ compare the numbers from one test to another. So what changed 'from the release of FoCal' to the release of v2.0? Oh yeah, they developed a database and now require people to pay for access to those data. File that under things that make you go hmmmmm...

First of all, they are not charging extra for the comparison data: it comes free with at least my version 2.4 with FoCal Pro. "_FoCal users have been uploading calibration and test results for over 4 years, the database contains literally tens of millions of data points across tens of thousands of camera and lens combinations. Starting from FoCal 2.0, FoCal Pro users started to benefit from information showing how their camera and lens compares to other FoCal users._" https://www.reikan.co.uk/focalweb/index.php/2016/08/focal-2-2-add-full-canon-80d-and-1dx-mark-ii-more-comparison-data-and-internal-improvements/ (They did charge for earlier versions and I don't know when it became free.)

Secondly, I do a really bad job of using FoCal, and I know the spread from many repeat runs. I have a target set up on a wall in the garden, simply laser-printed on regular photocopying paper, and it sometimes gets soaked in the rain and dries (though I do change it occasionally). The light levels vary from uniformly very dull, to changing so much when the sun comes out from behind a cloud that I get warned by the software, to brightly illuminated. Here is the chart of the FoCal values with the spread of my values below. Despite my rotten technique, which should skew them below average, the spread of my QoF values parallels the database and tends to be above the spread of typical values.


----------



## takesome1 (Jan 31, 2017)

AlanF said:


> Secondly, I do a really bad job of using FoCal, and I know the spread from many repeat runs. I have a target set up on a wall in the garden, simply laser-printed on regular photocopying paper, and it sometimes gets soaked in the rain and dries (though I do change it occasionally). The light levels vary from uniformly very dull, to changing so much when the sun comes out from behind a cloud that I get warned by the software, to brightly illuminated. Here is the chart of the FoCal values with the spread of my values below. Despite my rotten technique, which should skew them below average, the spread of my QoF values parallels the database and tends to be above the spread of typical values.



I think all that I would draw from this is despite how you feel about your technique, your technique is performing better than the average of the widely sampled group. Without data that shows the top level of performance from other lenses I wouldn't credit the results to the lens. Perhaps if you hand picked each of your lenses for the best copy when you bought them, then maybe you could make the claim it is the lens.


----------



## AlanF (Jan 31, 2017)

takesome1 said:


> AlanF said:
> 
> 
> > Secondly, I do a really bad job of using FoCal, and I know the spread from many repeat runs. I have a target set up on a wall in the garden, simply laser-printed on regular photocopying paper, and it sometimes gets soaked in the rain and dries (though I do change it occasionally). The light levels vary from uniformly very dull, to changing so much when the sun comes out from behind a cloud that I get warned by the software, to brightly illuminated. Here is the chart of the FoCal values with the spread of my values below. Despite my rotten technique, which should skew them below average, the spread of my QoF values parallels the database and tends to be above the spread of typical values.
> ...



You cannot logically derive that my technique is performing better than average. What you can draw from the data is that the spread of my values is above the average. The reason for the better than average results could be a better technique or a better range of samples. Given the description of my technique, it is unlikely it is better than average.


----------



## takesome1 (Jan 31, 2017)

AlanF said:


> takesome1 said:
> 
> 
> > AlanF said:
> ...



If I cannot derive that your technique is better than average, then you cannot make the claim _"it is unlikely it is better than average"_, since there is no data provided describing the skill level of the average user.

I based my assumptions on a few things: one is that you are a regular poster to this forum; second, you are serious enough that you would break down your data to compare. Both things I can relate to. Both would lead me to believe that on your worst day you are probably testing better than the average tester.

The average user may at best be one of the individuals who comes to the forum with one post asking why his camera is taking soft pictures. The forum ends up recommending FoCal, corn flake boxes, and television screens to perform AFMA. Without knowing who the purchasers of FoCal are, we just do not know.


----------



## neuroanatomist (Jan 31, 2017)

AlanF said:


> First of all, they are not charging extra for the comparison data: it comes free with at least my version 2.4 with FoCal Pro. "_FoCal users have been uploading calibration and test results for over 4 years, the database contains literally tens of millions of data points across tens of thousands of camera and lens combinations. Starting from FoCal 2.0, FoCal Pro users started to benefit from information showing how their camera and lens compares to other FoCal users._" https://www.reikan.co.uk/focalweb/index.php/2016/08/focal-2-2-add-full-canon-80d-and-1dx-mark-ii-more-comparison-data-and-internal-improvements/ (They did charge for earlier versions and I don't know when it became free.)



Indeed, I read that. But note that they've moved from a buy a major version of FoCal (which was the case for v1 - you bought it, you got all the updates perpetually until the next major version - for me, that was 3 years) to an annual subscription model that includes the database. So while database access came with your v2.4 purchase, after your annual Included Updates period ends, you'll lose access to the database, unless you pay for another year of updates (which you likely will not need unless you buy a new camera or they include a feature you can't live without). 




AlanF said:


> Secondly, I do a really bad of job of using FoCal, and I know the spread from many repeat runs. I have a target set up on a wall in the garden, simple laser printed on regular photocopying paper, and it sometimes gets soaked in the rain and dries (though I do change it occasionally). The light levels vary from very dull, uniformly, to changing to such an extent when the sun comes out from behind a cloud that I get warned by the software, to brightly illuminated. Here is the chart of the FoCal values with the spread of the values of mine below. Despite my rotten technique, which should skew them below average, the spread of my QoF values parallels the database and tends to be above the spread of typical values.



Agree with takesome1 here. I'm not surprised at all that, as a scientist, your 'really bad job' is still better than the average person's typical effort.


----------



## bluenoser1993 (Jan 31, 2017)

AlanF said:


> First of all, they are not charging extra for the comparison data: it comes free with at least my version 2.4
> 
> Secondly, I do a really bad of job of using FoCal



They aren't charging extra if you are a new subscriber, but in the past there wasn't a limit to getting the updates. You didn't have to worry about getting a new body that wasn't supported; you just got the update once it was available. Once this idea of database sharing came up, your subscription became timed, and all the users that created that database now have to subscribe again if they want access to it.

On your second point, did you read my test method? The target was taped to an operating refrigerator! The light level was low enough that the target setup option wouldn't work; I kept getting the message "focus couldn't be achieved". Those results, and I'm sure worse ones, are part of the database.


----------



## AlanF (Jan 31, 2017)

neuroanatomist said:


> Agree with takesome1 here. I'm not surprised at all that, as a scientist, your 'really bad job' is still better than the average person's typical effort.



As a scientist, I always have more unpublished data to present to the referees to counter their arguments. Here are some lenses where I scored below average (blue is the average spread): my old 100-400, now gone; 40mm f/2.8, borrowed and returned; Sigma 35/2, sent back; and EF-S 55-250 II, which is actually pretty good. In contrast, my favourite, the 400mm DO II + 1.4xTC on the 5DS R, is almost off the scale. If my technique is better than average, the first four must have been total cr*p, rescued by my outstanding skills.

You are right, I am not upgrading FoCal until I buy a new body with which it is not currently compatible. For the time being, I have all the data for my existing lenses saved.


----------



## takesome1 (Jan 31, 2017)

AlanF said:


> neuroanatomist said:
> 
> 
> > Agree with takesome1 here. I'm not surprised at all that, as a scientist, your 'really bad job' is still better than the average person's typical effort.
> ...



The peak in the chart indicates a good run. The second is probably the weakest.
On further review, the original call stands. 

I have seen very bad runs, and these just do not qualify.


----------



## takesome1 (Jan 31, 2017)

I kind of get the feeling that the point of the thread was to say your lenses are above average.
We cannot let you do that while you tear your own abilities down.

But I would concede that, given the care you have taken to check your gear, it is probably above average. More than likely you wouldn't have tolerated a substandard lens.


----------



## AlanF (Jan 31, 2017)

takesome1 said:


> I kind of get the feeling that the point of the thread was to say your lenses are above average.
> We cannot let you do that while you tear your own abilities down.



Read the first posts - I started this thread to discuss the usefulness of the database. It was after neuro's comments that the values are skewed by poor technique to lower the average values that I quoted my data with poor technique that gave values above average. Don't make it personal and make false assumptions about my motives.


----------



## takesome1 (Jan 31, 2017)

AlanF said:


> Fortunately, my expensive primes are all above the average ranges.



I re-read it as requested.


----------



## AlanF (Jan 31, 2017)

The point of starting this thread was to give the heads up that there is a database by which you can test your lenses' performances. I do that now and it has stopped me buying two lenses that were below par and confirmed my suspicions that my old 100-400mm was a bad copy. It also shows how lenses perform on different bodies. If you wish to be a naysayer and use it as yet another debating game, then it is your loss. I wondered how good my lenses were and I am now comforted I didn't buy lemons.


----------



## takesome1 (Feb 1, 2017)

AlanF said:


> The point of starting this thread was to give the heads up that there is a database by which you can test your lenses' performances. I do that now and it has stopped me buying two lenses that were below par and confirmed my suspicions that my old 100-400mm was a bad copy. It also shows how lenses perform on different bodies. If you wish to be a naysayer and use it as yet another debating game, then it is your loss. I wondered how good my lenses were and I am now comforted I didn't buy lemons.



I do not dispute that it is useful information. I could see it being useful if your results were continually low; that could show that either your technique or your equipment has a problem. As an indication that you have better-than-average or exceptional equipment, I think it is lacking because of the variables involved.

Also, I do not see how it would be a good indication of how lenses perform on different bodies. If that were the case, wouldn't the 5Ds R be the worst one in the sample?


----------



## bluenoser1993 (Feb 1, 2017)

Alan, what does the lens profile look like for the 400 DO @560 on your other bodies?


----------



## bluenoser1993 (Feb 1, 2017)

bluenoser1993 said:


> It's possible they're collecting the data, but Kelvin is the only thing I see in the report. I remember older versions used to show the EV level during setup, but it doesn't anymore (not that I saw, anyway). I agree, if they could group the results by shutter speed, they would compare better.
> 
> The 30W was the LED value, not the incandescent equivalent, but still way too low. Not to mention LED is not the best for AF anyway.



I'd like to correct myself after looking at old reports in my FoCal history. The EV is not in the summary at the beginning of the report; it is included in the details for each AFMA value tested, so it is collected. The shutter speed is also collected in version 2.0 reports. Reports from tests I did on version 1... do not have a shutter speed.


----------



## AlanF (Feb 1, 2017)

bluenoser1993 said:


> Alan, what does the lens profile look like for the 400 DO @560 on your other bodies?


See below. At 560, the 400 is very good on the 5DS R and 7DII. A few of us who had the 400 DO II lens agreed that there was very little improvement, if any, in resolution on going from the 1.4xTC to the 2xTC on these bodies with similarly small pixels. Conversely, there was a good gain with the 5DIV body and 2xTC. I had earlier found the same with the 300/2.8 II and 1.4xTC and 2xTC on the 5DIII (and now IV) vs the 5DS R and 7DII. The FoCal QoF data in the tables I presented on page 1 bear this out as well, although they measure acutance. So, with the 7DII I stick to 560mm, and with the 5DIV I use the 2xTC.


----------



## AlanF (Feb 1, 2017)

takesome1 said:


> AlanF said:
> 
> 
> > The point of starting this thread was to give the heads up that there is a database by which you can test your lenses' performances. I do that now and it has stopped me buying two lenses that were below par and confirmed my suspicions that my old 100-400mm was a bad copy. It also shows how lenses perform on different bodies. If you wish to be a naysayer and use it as yet another debating game, then it is your loss. I wondered how good my lenses were and I am now comforted I didn't buy lemons.
> ...



You are absolutely correct that the 5DS R (along with the 7DII) should be the worst on the QoF scores (because smaller pixels give less sharp transitions than large ones). And, if you had in fact read the table I presented in the opening post (and repeated with my own data added), you would have seen that the 5DS R is the worst.


----------



## takesome1 (Feb 1, 2017)

AlanF said:


> You are absolutely correct that the 5DS R (along with the 7DII) should be the worst on the QoF scores (because smaller pixels give less sharp transitions than large ones). And, if you had in fact read the table I presented in the opening post (and repeated with my own data added), you would have seen that the 5DS R is the worst.



Yes, there is a reason for it.
The point is that the differences between the bodies are in the design, not necessarily in the quality of picture one should expect of the camera. For that reason, comparing how one body performs versus another is really only useful with respect to this test.

Unless you see other information about the bodies that can be derived from it, I do not.


----------



## bluenoser1993 (Feb 1, 2017)

AlanF said:


> See below. At 560, the 400 is very good on the 5DS R and 7DII. A few of us who had the 400 DO II lens agreed that there was very little improvement, if any, in resolution on going from the 1.4xTC to the 2xTC on these bodies with similarly small pixels. Conversely, there was a good gain with the 5DIV body and 2xTC. I had earlier found the same with the 300/2.8 II and 1.4xTC and 2xTC on the 5DIII (and now IV) vs the 5DS R and 7DII. The FoCal QoF data in the tables I presented on page 1 bear this out as well, although they measure acutance. So, with the 7DII I stick to 560mm, and with the 5DIV I use the 2xTC.



I searched their site a little but couldn't find how they define the average range; knowing their limits would help a bit. I agree with others that the tool will certainly indicate to a user that technique or equipment is poor when well below the average, probably the best use of the tool. However, in your case the result for the 400 is so far beyond the average that I think it merits the claim of good equipment and technique. For the 5DsR and 7DII, I think it also shows your ability to get the best out of the equipment, more so than the masses.

I went to a print shop last night to print a couple of targets with far better quality than my printer can manage, one of a larger size for the longer tests as well. Once I resolve the lighting, I intend to get my best possible result for the 100-400 to compare to the results I posted earlier in the thread. I hope it will show the spread of the average range vs the spread from poor to better technique. I'll share the findings here once I've had a chance to complete the test.

I will add, I like the way you used it to compare teleconverter results.


----------



## Mt Spokane Photography (Feb 1, 2017)

AlanF said:


> You cannot logically derive that my technique is performing better than average. What you can draw from the data is that the spread of my values is above the average. The reason for the better than average results could be a better technique or a better range of samples. Given the description of my technique, it is unlikely it is better than average.



The big factor in the testing is vibration during the long exposures that FoCal uses in poor light. You get around the issue with brighter lighting and the resulting faster shutter speeds, or with firmer tripod support, or, in very few cases, both.

Generally, even dull light outdoors is very bright compared to indoor lighting. But even if it isn't, your tripod is sitting on the ground rather than on the wooden floor typical of many homes, which vibrates and rings like a bell with any movement. So I would say that your technique likely surpasses most users' on the critical vibration issue, because it addresses both factors: firmer support and brighter lighting.


----------



## AlanF (Feb 1, 2017)

So, am I a pretty average experimentalist with some above-average lenses, or a better-than-average experimentalist who gets the best out of his average lenses? Which would you prefer to be?


----------



## bluenoser1993 (Feb 1, 2017)

AlanF said:


> So, am I a pretty average experimentalist with some above-average lenses, or a better-than-average experimentalist who gets the best out of his average lenses? Which would you prefer to be?



LOL. All I'm saying in the case of the 5DsR and 7DII paired with your 400 DO is that with their premium glass I'd like to think Canon's QC is better than what it would take for your lens quality alone to explain how far above the average range it is.


----------



## AlanF (Feb 1, 2017)

bluenoser1993 said:


> AlanF said:
> 
> 
> > So, am I a pretty average experimentalist with some above-average lenses, or a better-than-average experimentalist who gets the best out of his average lenses? Which would you prefer to be?
> ...



Lensrentals frequently has articles on the poor quality control, e.g.: "_The summary is almost all of you greatly overestimate the type and amount of optical testing that lenses get, whether it’s at the factory after assembly or in the repair center when it has a problem._" https://www.lensrentals.com/blog/2016/09/is-your-camera-really-the-best-optical-test/

They have also measured copy to copy variation in lenses. The variation can be horrendous. Here is one article with references to their earlier studies https://www.lensrentals.com/blog/2015/07/variance-measurement-for-35mm-slr-lenses/ 

The copy of the 400mm DO II used by Imaging Resource is a pretty poor one, not being tack sharp: http://www.imaging-resource.com/lenses/canon/ef-400mm-f4-do-is-ii-usm/review/

The ePhotozine copy tested was spectacular. https://www.ephotozine.com/article/canon-ef-400mm-f-4-do-is-ii-usm-lens-review-26785

There is considerable variation even in these super-expensive lenses.


----------



## takesome1 (Feb 1, 2017)

AlanF said:


> So, I am either a pretty average experimentalist with some above average lenses or I am better than average experimentalist who gets the best out of his average lenses? What would you prefer to be?



It seems everyone thinks you are better than average; I just cannot see how that would be offensive.

I don't think a comparison of Reikan's reports establishes lenses as just average or exceptional. It could indicate a bad one, though.


----------



## bluenoser1993 (Feb 2, 2017)

Well, I hope someone can explain these results. I did screenshots of the reports at the peak AFMA value so you can see the images side-by-side that are giving the QoF. You can see the extra light in the report, and the shutter speed. What you can't see is the heavy weight hanging on the tripod, tripod on basement floor (cement), central heat shut off (no vibration), tripod legs forced into a wider spread at the floor to reduce wobble, target on a solid wall, target larger and of much better print quality, FoCal delay after mirror lock increased to 3 seconds, more time waiting after each AFMA change on the body before continuing.

400mm and 140mm were done at almost the same distance of 10.1 - 10.3 meters; I didn't have room to get any further away, so the new 560mm test was done at 10.1m whereas the original was done at 13m.

It made sense to see the 400mm QoF increase (though I had hoped for more), but then I was puzzled to see a reduction of QoF at 560mm, and then essentially no change at 140mm.

The one thing I wonder, does FoCal use the declared target size (and hence calculated distance) as a part of the calculation of the QoF? I assumed having the target size increase from 116mm to 209mm would have improved the QoF because of the improved image captured, but maybe it's accounted for? 

EDIT - left image is the new (better technique) attempt in each focal length below.


----------



## takesome1 (Feb 2, 2017)

bluenoser1993 said:


> Well, I hope someone can explain these results. I did screenshots of the reports at the peak AFMA value so you can see the images side-by-side that are giving the QoF. You can see the extra light in the report, and shutter speed. What you can't see is the heavy weight hanging on the tripod, tripod on basement floor (cement), central heat shut off (no vibration), tripod legs forced into wider spread at floor to reduce wobble, target on solid wall, target larger and of much better print quality, FoCal delay after mirror lock increased to 3 seconds, more time waiting after each AFMA change on body before continuing.
> 
> 400mm and 140mm were done at almost same distance of 10.1 - 10.3 meters, I didn't have room to get any further away so the new 560mm was done at 10.1m where as the original was done at 13m.
> 
> ...



I can say they are absolutely meaningless when compared to each other.
Different lighting produces different results. The size of the target and the distance modify everything. The examples you show are from different distances or different targets.

I think this is a good example of why the averages that FoCal provides cannot be relied on as a guide of lens quality.

If you want to use this number to compare the quality of a lens, you need to have all things equal. Here are a few factors I know of that you didn't mention: matte paper vs. glossy for a target; printing with a printer capable of printing photos vs. a normal laser printer; target position and placement being square to the setup; positioning of the lighting, and whether the lighting is direct or indirect at the target; type of lighting - I have several halogens that cast shadows.


----------



## neuroanatomist (Feb 2, 2017)

AlanF said:


> As a scientist, I always have more unpublished data to present to the referees to counter their arguments.



It also seems that as a scientist, you initially 'published' your best examples, not representative examples. 

I may have done that a few times...


----------



## AlanF (Feb 2, 2017)

Bluenoser, I have done hundreds of calibrations at many different distances (10-25m) using the same target to see if the AFMA changes with distance, and found that the reported QoF regularly changes no more than it does for repeat measurements at the same distance. So target size doesn't make much difference. What your experiments show very clearly, and what answers the objections thrown at me, is that the FoCal procedure is very robust and doesn't depend that much on technique - you have gone from very poor technique to a much improved one.

I corresponded with FoCal about this a couple of days ago:

_On Wed, 1 Feb at 12:00 PM, AlanF wrote:
My experience is that the QoF for my telephoto lenses doesn’t change that much with distance or lighting conditions - I have done loads of repeat runs over the years.

Reply from Reikan:
Yep, the current QoF calculation is designed to provide comparison as much as possible. It's hard to guarantee as test environments can be very different. The more similar the test set up the more likely the results will be directly comparable._

I think that the FoCal database is very useful and far more robust than the critics here claim. I'll ask FoCal how the spreads of the average values are calculated.


----------



## neuroanatomist (Feb 2, 2017)

takesome1 said:


> If you want to use this number to compare the quality of a lens you need to have all things equal. Here are a few factors I know of that you didn't mention. Matte paper vs glossy for a target. Printing with a printer capable of printing photos vs a normal laser printer. Target position and placement being square to the set up. Positioning of lighting, is the lighting direct or indirect at target. Type of lighting, I have several halogens that cast shadows.



^^This.

That's why, IMO, Reikan initially and correctly took the position they stated: "_The absolute QoF value is unimportant, so you cannot compare the numbers from one test to another._" It was only when they could make money from comparing absolute QoF numbers that their position changed. They're certainly not alone in changing their tune when money is at stake.

Nikon, until 2013: "Fluorite cracks easily and messes up focusing, so we developed ED glass because it's much better."
Nikon, post-2013: "Flourite is great because it optimally corrects CA and makes a lens lighter."


----------



## AlanF (Feb 2, 2017)

neuroanatomist said:


> AlanF said:
> 
> 
> > As a scientist, I always have more unpublished data to present to the referees to counter their arguments.
> ...



I know you have a tongue-in-cheek emoticon. But, for the record, I "published" initially none of my values, just the values from the FoCal database. I then "published" my own data that overlapped with the FoCal database in the context of your arguments that poor technique would skew the numbers down - my less than perfect technique gave above average results. Bluenoser has elegantly shown that, on going from about the worst conditions - a crude target pasted onto a vibrating refrigerator door, illuminated by something a bit brighter than a candle, with his camera mounted on what Mt Spokane describes as a floor ringing like a bell - to sophisticated techniques, there is hardly any change in measured QoF.


----------



## bluenoser1993 (Feb 2, 2017)

takesome1 said:


> bluenoser1993 said:
> 
> 
> > Well, I hope someone can explain these results. I did screenshots of the reports at the peak AFMA value so you can see the images side-by-side that are giving the QoF. You can see the extra light in the report, and shutter speed. What you can't see is the heavy weight hanging on the tripod, tripod on basement floor (cement), central heat shut off (no vibration), tripod legs forced into wider spread at floor to reduce wobble, target on solid wall, target larger and of much better print quality, FoCal delay after mirror lock increased to 3 seconds, more time waiting after each AFMA change on body before continuing.
> ...



I realized the results wouldn't be meaningful to compare, particularly if trying to compare different equipment. The point was that it is the same equipment: the first run was with very poor technique, and the second run was done by changing the aspects I could in an attempt to get the best QoF value with the things I had at hand. I assumed going into the second test that it might shed a little light on the spread of the average, but instead I actually got a reduced value in one case and an equal one in another. Alan's reply from Reikan seems to confirm the findings of my test: the QoF calculation is designed to attempt an equal playing field for comparison. I did doubt the comparability, but now think it is more comparable than I thought. A little bummed that my 100-400 scores low.


----------



## AlanF (Feb 2, 2017)

neuroanatomist said:


> takesome1 said:
> 
> 
> > If you want to use this number to compare the quality of a lens you need to have all things equal. Here are a few factors I know of that you didn't mention. Matt paper vs glossy for a target. Printing with a printer capable of printing photo's vs a normal laser jet. Target position and placement being square to the set up. Positioning of lighting, is the lighting direct or indirect at target. Type of lighting, I have several halogens that cast shadows.
> ...



Flourite is more likely to be used in baking cakes. 

Takesome1's comments are all hypothetical. You, as a fellow scientist, know full well that he should do experiments to test the actual magnitudes of his hypotheses before pronouncing them like Newton's Laws. I have done experiments under enough conditions to know that what he states are just second-order effects, as now confirmed under more extreme conditions by Bluenoser.


----------



## takesome1 (Feb 2, 2017)

AlanF said:


> neuroanatomist said:
> 
> 
> > takesome1 said:
> ...



That's funny. Perhaps it is because I bought one of the early versions of FoCal and you have a new version that is far more refined. Or perhaps it is because of the variations I have seen from one test to the next. Or maybe it's the hundreds of posters who do not even know why they have soft pictures, whom the forum refers to FoCal. It is just not a test that would find meaningful results.

I pulled this from your post, so I am relying on you for the accuracy:

Reply from Reikan:
Yep, the current QoF calculation is designed to provide comparison as much as possible. It's hard to guarantee as test environments can be very different. The more similar the test set up the more likely the results will be directly comparable

I have in the past used FoCal to compare identical lenses on the same body. I have always done this with the same identical setup, at the same distance and, if possible, on the same day. The last one that I did was several 500mm II's and my old 500mm. Interestingly, in this test the version 1 matched the new versions, which were both within a few points of each other. Also, while initially setting up the test, my results varied by as much as 300.


----------



## neuroanatomist (Feb 2, 2017)

takesome1 said:


> Perhaps it is because I bought one of the early versions of FoCal and maybe you have a new version that is far more refined.
> 
> ...
> 
> ...



Interesting point, and one which caught my eye in Alan's earlier reply. That suggests they've refined the QoF algorithm.

The problem I have with that is they stated 'absolute QoF is irrelevant and not useful for comparisons' at the time of v1.9, then with v2.0 they launched their comparative database...which therefore must have been compiled with data generated from v.1.9 and earlier, when QoF was not to be used for comparisons. 

Honestly, I don't have the data to know whether their comparisons are useful or not. But based on their own statements, I really question their motivation...


----------



## takesome1 (Feb 2, 2017)

neuroanatomist said:


> Honestly, I don't have the data to know whether their comparisons useful or not. But based on their own statements, I really question their motivation...



I think it would be useful if they published the top 5%, rather than the average from the masses.
But then, if the motivation is monetary, the average from the masses will make more people feel better. In contrast, if an individual spends $12k on a new 600mm, how will he feel when he is below the top 5% group?


----------



## AlanF (Feb 2, 2017)

takesome1 said:


> neuroanatomist said:
> 
> 
> > Honestly, I don't have the data to know whether their comparisons useful or not. But based on their own statements, I really question their motivation...
> ...



The information we need, and which is most useful, is whether the distribution is a standard Gaussian (normal), skewed or bimodal, and, if a normal distribution, the mean and standard deviation. If Neuro's fears are correct, the distribution will be skewed.
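To make the statistics concrete: a simple way to check whether a set of QoF values is symmetric or skewed is the Fisher-Pearson skewness coefficient. A minimal Python sketch (my own illustration; Reikan's actual analysis isn't published):

```python
import statistics

def skewness(xs):
    """Fisher-Pearson moment coefficient of skewness (g1):
    roughly 0 for a symmetric (Gaussian-like) set of values,
    positive when the upper tail is longer, negative when the
    lower tail is."""
    n = len(xs)
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / n

# A symmetric spread of QoF values gives a skewness of roughly 0;
# a few stellar copies above an otherwise flat pack push it positive.
```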

Regarding the clientele, it's us geeks that buy and want to know about their lenses. In certain matters, I would be very happy to be in the top 5%. But, for lenses, average is good enough, although better than average is nice. It would be good for Canon to have their own dumbed-down database so everyone thinks they have bought a cracker of a lens.


----------



## FoCal Rich (Feb 2, 2017)

Hi!

I'm Rich - the lead developer of FoCal, thought I'd chip in with a couple of thoughts 

There are quite a lot of sets of data in the comparison information fed back to FoCal, but the main graphed values are typically the IQR (so the 25th to 75th percentile range) of filtered data. The filtering is designed to remove results which are obviously erroneous or suspicious. Single-user data is aggregated so there's no bias from a single user (even if they run hundreds of tests), and there's a threshold for the number of unique cameras that are required to create any particular data set. Note that the comparison shown on the graphs is matched specifically to the camera/lens combination under test.
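The aggregation Rich describes (collapsing each user's runs, a unique-camera threshold, then the IQR) could be sketched roughly like this in Python. This is my own guess at the shape of the pipeline, with a made-up threshold, not Reikan's actual code:

```python
from collections import defaultdict
import statistics

def comparison_range(samples, min_cameras=5):
    """samples: list of (user_id, camera_serial, qof) tuples.
    Collapse each user's runs to a single median (so a prolific
    tester can't bias the range), require enough unique cameras,
    then return the 25th-75th percentile band."""
    if len({cam for _, cam, _ in samples}) < min_cameras:
        return None  # not enough unique cameras for this combo
    by_user = defaultdict(list)
    for user, _, qof in samples:
        by_user[user].append(qof)
    per_user = sorted(statistics.median(v) for v in by_user.values())
    q25, _, q75 = statistics.quantiles(per_user, n=4)
    return q25, q75
```

The per-user median step is what stops someone who runs hundreds of tests from dragging the published band toward their own copy of the lens.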

With regards to the QoF value - up to FoCal 1.9.5 we used an analysis method (called Q7) which was quite susceptible to lighting variations and image content. From 1.9.5 onwards, the analysis method (called Q10d) was changed to be much more robust. There's quite a detailed blog post here: http://www.reikan.co.uk/focalweb/index.php/2014/02/reikan-focal-rgb-analysis/

With Q10d, the QoF value is generally comparable for the same camera/lens combination. It's even roughly comparable across different cameras and lenses, but there are some limitations. We only use Q10d results to build the comparison data, so even though data was uploaded before FoCal 1.9.5, that data isn't considered robust enough to form part of the comparison data.

The building of the comparison data (the filtering and aggregation of raw uploaded data) is being continuously improved, and it's an area which will be receiving some quite considerable attention very shortly.

Hopefully useful information(!)

Rich


----------



## takesome1 (Feb 2, 2017)

AlanF said:


> takesome1 said:
> 
> 
> > neuroanatomist said:
> ...



Canon does have their own dumbed-down database. In the US, if you send your less-than-1-year-old lens to Canon and tell them it is soft, they will send it back to you and say it tested within acceptable parameters. It is so dumbed down they just tell you what it says.

The whole conversation gives me an idea. In a forum such as this you could have a competition, or just a comparison, to see who with the same body and lens can produce the highest quality measure. It would be a competition for the Geeks.


----------



## takesome1 (Feb 2, 2017)

FoCal Rich said:


> Hi!
> 
> I'm Rich - the lead developer of FoCal, thought I'd chip in with a couple of thoughts
> 
> ...



Rich, thanks for the clarification.


----------



## AlanF (Feb 2, 2017)

Thanks Rich. You have answered the key questions, our findings fit in with what you say about the new, more robust statistical analysis, and I think you will have satisfied the critics. I am also pleased that my large number of calibrations isn't distorting the database.


----------



## takesome1 (Feb 2, 2017)

Rich,

one question.

You say you only use the middle range. One would think the reports with the very high QoF values are more likely to be accurate than the low ones, since it would take both good technique and a good lens to achieve the higher QoF.

Is there any chance you will be releasing a report that would show the upper 25%? Or perhaps one that would show the whole spectrum?

I do understand that your primary goal is showing data that indicates if you are in an acceptable range to do an AFMA on your camera, and part of our discussion is using your software to determine how well our equipment is performing.


----------



## neuroanatomist (Feb 2, 2017)

Thanks for the information, Rich.




AlanF said:


> ...I think you will have satisfied the critics.



Not quite. For example, they stated, "Back in FoCal 1.5, we started collecting data about the results of your tests (nothing to personally identify you, just a few numbers showing how cameras and lenses behave)." But according to Rich's statement above, results up to v1.9.5 aren't terribly reliable. I wonder what fraction of the current database comprises pre-v1.9.5 data (perhaps zero), and I also wonder what fraction of the database comprised pre-v1.9.5 at the time FoCal 2.0 was released.


----------



## FoCal Rich (Feb 2, 2017)

takesome1 said:


> Is there any chance you will be releasing a report that would show the upper 25%? Or perhaps one that would show the whole spectrum?



This is something we've got on the list. It requires a change to the structure of the data, so it won't work with the current version of FoCal, but we will add more detail (probably every 10th percentile as a compromise between detail and data size).


----------



## FoCal Rich (Feb 2, 2017)

neuroanatomist said:


> Thanks for the information, Rich.
> 
> 
> 
> ...



It's slightly lost in the detail, but in my original post I mentioned that we *only* build comparison data using the Q10d results - which is only data since 1.9.5 with the new analysis method.

We have done some quite detailed profiling of the performance difference between the Q7 and Q10d analysis and there is some useful data to be obtained from the early results, but as it relates only to older cameras (2013 and before) it's less important than focusing on the newer data.


----------



## neuroanatomist (Feb 2, 2017)

Sorry, Rich...missed that. Thanks!

Also appreciate the conservative approach of using the interquartile range.


----------



## AlanF (Feb 2, 2017)

I think we have had a very good robust discussion and have come to a consensus, which is what we should be doing. The database may not be perfect but the uncertainties do not stop it from being useful.

All my dealings with Reikan convince me that it is an ethical company and very responsive. I don't know for how long there will be a market in selling AFMA software if Canon and Nikon introduce auto-AFMA methods and mirrorless erodes the market further. But, software for checking lenses could have a good future for all brands of cameras.


----------



## bluenoser1993 (Feb 2, 2017)

Thanks Rich, it's always nice to have a subject matter expert involved in the conversation. A lot of threads on this forum would be shorter if that happened more often. I think you've cleared up any doubt I had.

In your testing, have you found RAW vs JPEG to have much effect on the QoF?


----------



## AlanF (Feb 2, 2017)

I have found it makes a difference in my testing in just one case, 400mm DO II + 2xTC + 5DIV, where RAW has a much higher QoF than JPEG. It makes a negligible difference for all other combinations I have used whether RAW or JPEG is used, but JPEG is much faster.


----------



## bluenoser1993 (Feb 2, 2017)

AlanF said:


> I have found it makes a difference in my testing for just one case only, 400mm DO II + 2xTC + 5DIV, where RAW has much higher QoF than for jpeg. It makes negligible differences for all other combinations I have used whether RAW or jpeg is used, but jpeg is much faster.



Thanks Alan, I've done RAW in the past, but not with the 5Ds. I won't bother with the extra time.


----------



## FoCal Rich (Feb 3, 2017)

In all "normal" test cases, it's fine to use JPEG analysis (and it's certainly a lot quicker).

Some info about raw analysis:

For raw processing we use a custom demosaicing algorithm which keeps all three colour channels totally isolated (unlike a processed JPEG which will have bleed of information between the colour channels). If you're trying to look at something very specific to individual colour channels or you're analysing under a different light source to normal (e.g. monochromatic light) then you'll want to use raw.
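For anyone curious what "keeping the colour channels totally isolated" means in practice: with an RGGB Bayer sensor you can pull out each channel's raw samples without any interpolation between channels. A toy Python sketch (purely illustrative, not Reikan's demosaicing code):

```python
def split_bayer_rggb(mosaic):
    """Extract the raw R, G and B sample planes from an RGGB
    Bayer mosaic (a 2D list with even dimensions) without
    interpolating across channels, so no information bleeds
    between them."""
    r  = [row[0::2] for row in mosaic[0::2]]  # top-left of each 2x2 tile
    g1 = [row[1::2] for row in mosaic[0::2]]  # top-right
    g2 = [row[0::2] for row in mosaic[1::2]]  # bottom-left
    b  = [row[1::2] for row in mosaic[1::2]]  # bottom-right
    return r, g1, g2, b
```

A JPEG, by contrast, is built by interpolating each missing colour at every pixel, which is where the inter-channel bleed comes from.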

Also, using raw will ensure any camera processing effects are not applied to the analysed image - e.g. vignetting correction, distortion correction, white balance colour shifts, etc. However, FoCal adjusts picture style and white balance and checks for various settings which may affect analysis before running tests, so again in most cases you won't need to worry about this.

There is new processing related to Dual Pixel Raw in FoCal specifically for the 5Dmk4 - we use the dual pixel information to increase confidence in the AF Microadjustment result, to show focus offset in the AF Consistency results and to give an idea of lens focus shift in the aperture sharpness test. This blog post (http://www.reikan.co.uk/focalweb/index.php/2016/09/bringing-dual-pixel-raw-to-reikan-focal/) gives more detail about this.


Just to re-iterate - you almost certainly *don't* need to use raw processing mode. Unless you understand specifically why you need raw, then you don't need to use it. It won't degrade the results you get to use raw unnecessarily, but it just takes longer!

Hope this helps.


----------



## AlanF (Feb 3, 2017)

Rich
Can you think of a reason why I get a higher QoF with RAW than JPEG in just one isolated case of a particular lens combination, the 400mm DO II + 2xTC + 5DIV? It's a particularly bad example with FoCal in general; the profile of QoF against AFMA is very flat, maybe because of the f/8 aperture and long distance to target, and the r, g and b channels sometimes give widely different optimal AFMA.

I don't get the same QoF discrepancy on a 5DS R with the same lens.


----------



## FoCal Rich (Feb 3, 2017)

AlanF said:


> Rich
> Can you think of a reason why I get higher QoF with RAW than jpeg for just one isolated case of a particular lens combination, the 400mm DO II + 2xTC + 5DIV?



Actually, yes, there is a possible reason. We use a third-party tool for decoding and some basic preprocessing of raw files, and this utility hasn't yet been updated to support the 5Dmk4.

FoCal 2.3 had a significant issue with contrast on 5Dmk4 raw files (i.e. the resulting analysed image could sometimes have wildly incorrect contrast), but this was _mostly_ corrected in FoCal 2.4. However, there may still be some issues under certain conditions, so I'd be more inclined to trust JPEG results from the 5D4 if you get wildly differing values, until this is fully fixed. (Note that this is JUST for the 5D4 - no other cameras.)


----------



## AlanF (Feb 7, 2017)

Discovered a flaw in the comparisons. For telephoto zooms, the spread given by FoCal represents the entire focal length range and not the individual focal length you are measuring. For example, for the Canon 100-400mm it will give the overall spread of measurements at both 100mm and 400mm, so the comparison reads the same whether you test at 100mm or at 400mm. So, for a lens like the Sigma 150-600mm, which is very good at 150mm and not as good at 600mm, it gives the overall spread of both, and you think that your 150mm end is better than average and your 600mm end is worse!


----------



## bluenoser1993 (Feb 7, 2017)

AlanF said:


> Discovered a flaw in the comparisons. For telephotos, the spread given by FoCal represents the entire focal length range and not the individual focal length you are measuring. For example, for the Canon 100-400mm it will give the overall spread for measurements at 100mm and for 400mm, which reads the same when you compare yours at 100mm and at 400mm. So, for a lens like the Sigma 150-600mm, which is very good at 150mm and not as good at 600mm, it gives the overall spread of both and you think that your 150mm end is better than average and your 600mm is worse!



Interesting. So this makes my 100-400 II a little better when compared to the 5DsR numbers. At 400mm I was only getting 1553, pretty well the bottom of the average, but at 100mm the QoF is 1683, which is at least a little more middle of the road. Funny they would report it that way; it would be simple to have separate results with all the data they are collecting.


----------



## FoCal Rich (Feb 7, 2017)

Actually, at the moment the data is split into 3 focal length regions (widest third, mid third and telephoto third). If there's enough data to pass the quality threshold (for the specific camera and lens), then the data for the specific focal length region is used for comparison; otherwise the combined data is used to give an idea.
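Rich's three regions could be binned along these lines - a Python guess at the split, where the linear division into thirds is my assumption, not Reikan's actual rule:

```python
def focal_region(fl, wide_end, tele_end):
    """Assign a tested focal length to the widest, mid or
    telephoto third of a zoom's range, splitting the range
    linearly into thirds (an assumed scheme)."""
    span = tele_end - wide_end
    if fl < wide_end + span / 3:
        return "wide"
    if fl < wide_end + 2 * span / 3:
        return "mid"
    return "tele"

# e.g. on a 100-400mm: 100 -> "wide", 250 -> "mid", 400 -> "tele"
```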

After the next release of FoCal, we will be working a lot more with the comparison data, so comparison at focal lengths may be made more specific.


----------



## AlanF (Feb 7, 2017)

For the 2 I have looked at, the Sigma 150-600mm is definitely an average value, and the 100-400mm II probably is (both ends are the same, but the lens is fairly constant along its length). The extender data look pooled.


----------



## FoCal Rich (Feb 7, 2017)

I've had a quick play and generated the following 2 charts. These show the range of median data for all the camera bodies combined together for the 4 possible "focal length ranges" (wide, mid, tele and combined) for the Canon 100-400 and Sigma 150-600.

The error bars show the IQR and the blue marker is the median of the combined median sharpness (wide open) data for each camera.

Things to bear in mind:
1. This is ALL camera types combined, so you shouldn't take the exact values and apply them to your camera results, but it does give a rough idea of the range and sharpness across the range

2. I hacked this together quickly, so I make no promises as to the accuracy of the data! (although I do think it's correct)

3. This is generated from slightly newer data than is currently available within FoCal - this new data will be uploaded shortly.


----------



## bluenoser1993 (Feb 9, 2017)

OK, once again I'm not completely sold on the comparability of the results. I was focusing more on my 100-400II with poor and better technique, but in getting ready to use my 135L this weekend I looked back at the test results. I had only paid attention to the AFMA setting at near and far distances, which only changed by 2. However, at 3.3m distance the QoF was 1520 and at 10.7m distance the QoF was 1820. Tested within minutes with the same lighting, target, setup, etc on my 5Ds. I don't have access to FoCal's data base, so don't know how this compares to the average range, but it does demonstrate how much spread is possible with the same camera, lens, target, lighting at different distances. It would appear that the algorithm has overcompensated for the softer image result because of the longer range, at least in this case.

Rich, I was doing this intentionally to see the AFMA required for longer range work and using my camera in crop mode. Because of the calculated distance, would this be a result that wouldn't make it into the data FoCal shares?

EDIT to change 10.3m to 10.7m, shouldn't have trusted my memory.


----------



## bluenoser1993 (Feb 9, 2017)

bluenoser1993 said:


> OK, once again I'm not completely sold on the comparability of the results. I was focusing more on my 100-400II with poor and better technique, but in getting ready to use my 135L this weekend I looked back at the test results. I had only paid attention to the AFMA setting at near and far distances, which only changed by 2. However, at 3.3m distance the QoF was 1520 and at 10.7m distance the QoF was 1820. Tested within minutes with the same lighting, target, setup, etc on my 5Ds. I don't have access to FoCal's data base, so don't know how this compares to the average range, but it does demonstrate how much spread is possible with the same camera, lens, target, lighting at different distances. It would appear that the algorithm has overcompensated for the softer image result because of the longer range, at least in this case.
> 
> Rich, I was doing this intentionally to see the AFMA required for longer range work and using my camera in crop mode. Because of the calculated distance, would this be a result that wouldn't make it into the data FoCal shares?
> 
> EDIT to change 10.3m to 10.7m, shouldn't have trusted my memory.



Just to demonstrate that this isn't a sample of one, the results with the 1.4X attached show the same. At 4.7m the QoF is 1450 and at 10.7m the QoF is 1600.

So with all the same test conditions, my body and lens score a better QoF with the 1.4X attached at 10.7m than it does as a bare lens at 3.3m.

Again, these distances were not chosen to compare QoF, the 25x = 4.7m for standard use and the 50x plus 1.6 crop factor = 10.7m for long range use to determine if custom AFMA settings were required. I just mention it because the QoF results seem pertinent to this discussion. 

EDIT: I PM'ed Rich my serial number and test date/time in case it would allow him to look up my tests and add any insight to the results and hopefully post them here.

EDIT: changed the near distance with 1.4x attached to 4.7. I shouldn't post so late at night!!


----------



## AlanF (Feb 9, 2017)

FoCal has an on-line calculator for minimum distance. For the 5DS, the value for a 135mm lens is 4.35m (4.2m for a 120mm, 4.5m for a 150mm) and with a 1.4xTC, just under 6m. So, at 3.3m you are well under their recommended minimum values, which might mean you are out of range. 

A couple of other points

How many repeat runs do you make at each distance? All experimental measurements have a spread of mean values and a standard deviation. As an experimental scientist whose work depends on accurate measurements, I am anal-compulsive about repeat runs; even when measuring my blood pressure at home I do at least 5 repeats and calculate the mean and standard deviation. I find the QoF values do vary on repeat measurements at the same distance. You'll see on the chart in the thread for my lenses that I reported a range of values for each one on each body.
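The repeat-run bookkeeping described above amounts to a couple of lines of Python (my own trivial helper, just to show the arithmetic):

```python
import statistics

def summarize_runs(qofs):
    """Mean and sample standard deviation across repeat FoCal
    runs at a single distance, so a QoF can be reported as a
    range rather than a single number."""
    return statistics.fmean(qofs), statistics.stdev(qofs)
```

For example, five repeats of 1800, 1820, 1810, 1790 and 1830 give a mean of 1810 with a standard deviation of about 16, so a single-run difference of that size means nothing.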

The differences with distance could be real. Some lenses do change their MTF values at different distances. It gets reported, for example, that the Tamron 150-600mm is sharper at long distances while the converse is true for the Nikon 200-500mm (from memory). Lensrentals do their MTFs on "Olaf" at infinity, whereas Imatest, used by most testers, is closer up, which some have suggested accounts for different review results.


----------



## Alex_M (Feb 9, 2017)

All Sigma Art lenses perform poorly at MFD (approx. 15% less than peak sharpness) and worst of all at infinity (approx. 20% less than peak sharpness), with peak sharpness achieved at a distance to the FoCal target of approx. 30x the focal length of the lens.



AlanF said:


> FoCal has an on-line calculator for minimum distance. For the 5DS, the value for a 135mm lens is 4.35m (4.2m for a 120mm, 4.5m for a 150mm) and with a 1.4xTC, just under 6m. So, at 3.3m you are well under their recommended minimum values, which might mean you are out of range.
> 
> A couple of other points
> 
> ...


----------



## AlanF (Feb 9, 2017)

Alex_M said:


> All Sigma Art lenses perform poor at MFD (approx. 15% less than peak sharpness) and absolutely worst at infinity (approx. 20% less than peak sharpness) with peak sharpness achieved at distance to Focal target being approx. x30 the focal length of the lens.
> 
> 
> 
> ...



The thought has crossed my mind that a lens manufacturer might be tempted to optimise their lenses for distances used by most reviewers for their test charts. But, of course they wouldn't.


----------



## bluenoser1993 (Feb 9, 2017)

AlanF said:


> FoCal has an on-line calculator for minimum distance. For the 5DS, the value for a 135mm lens is 4.35m (4.2m for a 120mm, 4.5m for a 150mm) and with a 1.4xTC, just under 6m. So, at 3.3m you are well under their recommended minimum values, which might mean you are out of range.
> 
> A couple of other points
> 
> ...



I just edited my post regarding the 1.4x attached to reflect 4.7m at the near distance; I mistakenly used the same 3.3m number that was used for the bare-lens test. I just looked at the recommendations on their site and was surprised by the minimum they recommend. I have read in many places that the recommended distance for testing lenses is 25x to 50x the focal length. Even on FoCal's site, if you click the "more info" link below their distance calculator, they say you can go as low as 20x for longer focal lengths (defined by them as 300mm or more). My near distances were 25x.


----------



## FoCal Rich (Feb 9, 2017)

Hi

I have seen this and the PM and will reply soon (very busy with release testing at the moment)

Rich


----------



## bluenoser1993 (Feb 9, 2017)

bluenoser1993 said:


> bluenoser1993 said:
> 
> 
> > OK, once again I'm not completely sold on the comparability of the results. I was focusing more on my 100-400II with poor and better technique, but in getting ready to use my 135L this weekend I looked back at the test results. I had only paid attention to the AFMA setting at near and far distances, which only changed by 2. However, at 3.3m distance the QoF was 1520 and at 10.7m distance the QoF was 1820. Tested within minutes with the same lighting, target, setup, etc. on my 5Ds. I don't have access to FoCal's database, so don't know how this compares to the average range, but it does demonstrate how much spread is possible with the same camera, lens, target, and lighting at different distances. It would appear that the algorithm has overcompensated for the softer image result because of the longer range, at least in this case.
> ...



Quoted my post so all the numbers would be in one spot. As suggested, I ran another set of tests on the 135 with and without the 1.4x, and added distance to the near range to meet FoCal's minimum recommendation (though this puts it outside the intended range for portrait work). Bare lens, there was improvement at both distances, but the gap narrowed a bit:
At 10.4m = 1950
At 4.4m = 1700

With 1.4x attached:
At 10.4m = 1630
At 6m = 1520

Bare lens, total spread of 4 tests is 1520 - 1950.
With 1.4x, total spread of 4 tests is 1450 - 1630.

(Note: there were in fact higher spikes of QoF values, but I took the representative value from the FoCal report, not the highest spike.)
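
To put those numbers in perspective, the bare-lens spread works out to over 20% of the best score (a quick calculation on the four QoF values above; expressing it this way is my own summary, not a FoCal metric):

```python
# QoF values for the bare 135L across both sessions, near and far
bare_qof = [1520, 1820, 1700, 1950]

def spread_pct(values):
    """Spread (max - min) as a percentage of the best score."""
    return (max(values) - min(values)) / max(values) * 100

print(f"bare-lens spread: {spread_pct(bare_qof):.1f}%")
```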

A second finding: this second round of testing, like the 100-400II last week, was done with greater care. While the amount of care I took with the 100-400 had very little effect on the QoF, it had a considerable effect on the outcome with the 135L. While FoCal has taken some care to equalize QoF results by measuring test conditions and factoring them into the algorithm, it doesn't work equally well in all cases. I'm afraid I'm back to believing the data isn't as reliable as we'd like it to be for comparing our lenses against. That said, I don't have the data for the 135 to compare to my results; perhaps FoCal has found that lens to be less predictable in focus accuracy, though that would contradict many reviews.


----------



## takesome1 (Feb 9, 2017)

bluenoser1993 said:


> I'm afraid I'm back to believing the data isn't as reliable as we'd like it to be for comparing our lenses against.



What exactly do you think this comparison is going to tell you about your lens?


----------



## bluenoser1993 (Feb 9, 2017)

takesome1 said:


> bluenoser1993 said:
> 
> 
> > I'm afraid I'm back to believing the data isn't as reliable as we'd like it to be for comparing our lenses against.
> ...



It's being marketed as a means to compare your lens against all the other users' lenses. There is a claim that the results are normalized to make this comparison possible. The trials I ran with one lens seemed to back that up, but the results from another lens do not support the claim.


----------



## takesome1 (Feb 9, 2017)

bluenoser1993 said:


> takesome1 said:
> 
> 
> > bluenoser1993 said:
> ...



Unless I am missing something here (Rich can correct me), the data is collected from the Quality of Focus report that you can create on your computer. This test relies on the AF system of your camera. Your AF system may be precise and accurate, or it may not, and your camera body and its AF system can have an impact on this test. So how you test and what body you test with can all sway the results. This data gives you a body-and-lens comparison, not just a lens comparison.

If you are testing just the lens, the Aperture / Sharpness report would be a better judge of the lens. In this test you do not have to use the AF system; you can manually focus in live view and run the test. As AlanF mentioned in a previous post about repeating tests, if you run this test many times you will have a good understanding of how your lens performs at various apertures and, if the information is available, how it compares to the same camera and lenses of others. I find this test the most useful.

I do not think either test is a good indication of a great lens, since the lower and upper 25% of the data are removed. An average lens might be indicated, and a bad lens certainly would be.


----------



## AlanF (Feb 9, 2017)

Another set of measurements that tells you mostly about the lens itself is for zooms, where you can see the relative performance at different focal lengths. Rich posted wide, medium and tele values for the 100-400mm II and the Sigma 150-600mm. Both photozone and lenstip have very similar MTFs at wide and tele for the 100-400mm II, but ephotozine shows a drop from 100mm to 400mm. The FoCal data, with its systematic decrease from wide to tele, suggests the ephotozine results are more typical. My own 100-400mm II has very similar values at 100mm and 400mm. My copy of the Sigma is really good at 400mm, rivalling the 100-400mm for resolution from charts, and its FoCal scores are the same at 150mm and 400mm.

Otherwise, as you say, the scores are indeed generally camera-dependent. But, they do tell you something about how well a particular camera and particular lens go together.


----------



## takesome1 (Feb 9, 2017)

It was mentioned earlier, but with the data Reikan collects it wouldn't be a stretch to add a comparison tool geared specifically at determining lens sharpness.

Imagine a new poster asking about his new lens that is soft. We tell him to buy Focal and run a lens sharpness test.
He does, and it tells him his lens performs in the top 10 percent (or wherever it falls).

Then the best part, Reikan sends all of us in this thread a royalty check for our brilliant idea and helping expand their customer base.


----------



## Mt Spokane Photography (Feb 9, 2017)

takesome1 said:


> It was mentioned earlier, but with the data Reikan collects it wouldn't be a stretch to add a comparison tool geared specifically at determining lens sharpness.
> 
> ...


There are companies that do this; their software is expensive and the setup is critical. http://www.imatest.com/

An amateur trying to run a lens sharpness test would likely come up on the low end, and keep exchanging lenses and complaining about how bad they are. We have plenty of that around the internet now.

One test that most users can run (but don't) is for the biggest issue by far to affect zoom lenses, and that's centering. A person can purchase or download and print a star chart. As long as you are squared up with it, it can tell you if you have a problem.

http://www.edmundoptics.com/test-targets/

Example: https://www.bhphotovideo.com/c/product/717671-REG/Zeiss_1849_755_Siemens_Star_Test_Chart.html#!


----------



## takesome1 (Feb 9, 2017)

Both good tests.

I would think an amateur who doesn't follow the FoCal instructions will most likely botch the other tests as well.

There are things FoCal could monitor to ensure comparable results, and fail runs with errors. But no matter what safeguards and instructions you give, you can't prevent stupid.


----------

