Originally Posted by PeterLewis
If I were to add red according to the beta at 30 IRE, people's faces would look like they are on fire.
That is overstating what the measurements are telling you.
Here is the 20% and 30% data you sent to me:

        20%        30%
x   0.311000   0.307636
y   0.331839   0.327961
Y   3.697756   8.434181

x   0.313392   0.311480
y   0.331604   0.327961
Y   3.653055   8.356387
So for 20% you measure dx = 0.0024, dy = 0.0002; for 30%, dx = 0.0038, dy = 0. A dx of 0.002-0.003 might result in a 1-2 click adjustment of your red and blue bias, hardly enough to cause fiery faces. By the way, this generates a difference in the red reading of 3.6%, half of what you state above.
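For reference, here is a short Python sketch of how those deltas fall out of the posted numbers. Treating the first block of data as one measurement set and the second block as the other is my assumption about how the data you sent is ordered:

```python
# Posted (x, y) chromaticities at 20% and 30% stimulus.
# "first_run" / "second_run" are my labels for the two data blocks above.
first_run = {20: (0.311000, 0.331839), 30: (0.307636, 0.327961)}
second_run = {20: (0.313392, 0.331604), 30: (0.311480, 0.327961)}

for level in (20, 30):
    dx = second_run[level][0] - first_run[level][0]
    dy = second_run[level][1] - first_run[level][1]
    print(f"{level}%: dx = {dx:.4f}, dy = {dy:.4f}")
```

Running this reproduces the dx = 0.0024 (20%) and dx = 0.0038 (30%) figures quoted above.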
If you really want to chase this down and nail down whether or not it is a true systematic offset from the previous code, you need more statistics and should do what I first suggested: run the gray scale sequence with the 30% pattern left up and calculate the average and standard deviation of the 10 measurements. Do this for both versions of the code, and with and without the ccss correction. Those numbers will tell you what statistical offset, if any, exists between the two versions, and whether or not it is due to the ccss correction. I have done these tests with the D3 and no offset exists for this probe.
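As a sketch of the averaging procedure, here is how you would reduce the 10 repeated readings from each code version. The measurement values below are invented purely for illustration; they are not real probe data:

```python
import statistics

# Hypothetical x-coordinate readings from 10 repeats of the 30% pattern,
# one list per code version (numbers invented for illustration only).
version_a = [0.3134, 0.3131, 0.3136, 0.3133, 0.3132,
             0.3135, 0.3130, 0.3134, 0.3133, 0.3132]
version_b = [0.3133, 0.3135, 0.3131, 0.3134, 0.3133,
             0.3132, 0.3136, 0.3130, 0.3134, 0.3133]

# Per-run offset between the versions, then its mean and sample stdev.
dx = [a - b for a, b in zip(version_a, version_b)]
mean_dx = statistics.mean(dx)
stdev_dx = statistics.stdev(dx)

print(f"average +/- 1 stdev dx: {mean_dx:.4f} +/- {stdev_dx:.4f}")

# A real offset exists only if the average differs from zero
# by more than one standard deviation.
significant = abs(mean_dx) > stdev_dx
```

With overlapping scatter like this, the average lands well within 1 sigma of zero, which is the "no offset" outcome described below.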
Here are aggregate results of my D3 testing:
average +/- 1 stdev dx between 184.108.40.206 and beta: -0.0017 +/- 0.0015
average +/- 1 stdev dy between 220.127.116.11 and beta: 0.0011 +/- 0.0012
average +/- 1 stdev dx between 18.104.22.168 and beta: -0.0016 +/- 0.0018
average +/- 1 stdev dy between 22.214.171.124 and beta: 0.0013 +/- 0.0015
All averages are within 1 sigma of 0, meaning there is no statistically significant offset.
Before you repeat all that, there is one other thing you can try. I've made a beta3 package that increases the munki integration time to 800 ms; this will slow it down some, but maybe you will be happier with the stability. Give it a try.

Edited by zoyd - 11/7/13 at 6:56pm