Computer Program by Ken Muranaka
QCMP 178: Dixon's Q-Test Values by Monte Carlo Simulation

Many factors can affect repeatability and reproducibility. Standardized tests and analytical procedures usually involve repeated measurements, and any process of repeated determination shows some variation. The standard deviation (or the range) is commonly used as a measure of sample variability. QTEST [1] and its upgrade QTEST2 [2] calculate test statistics for rejecting outliers on statistical grounds by Monte Carlo (stochastic) simulation. The test statistic is the ratio known as Dixon's Q value [3-6]: the difference between the suspect value and its nearest neighbor divided by a subrange of the sample.
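
For example, in the most commonly tabulated form of the ratio, Dixon's r_{10}, the sample is ordered as x_1 <= x_2 <= ... <= x_n and a suspect low value x_1 is tested with

    Q = r_{10} = (x_2 - x_1) / (x_n - x_1)

with the mirror-image ratio built from the high end used when x_n is the suspect value.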

A problem with the Q-test is that Dixon published the general probability density function only for sample sizes of 3 to 5, and only for some of the subrange ratios. A recent cubic-spline analysis of Dixon's published critical values showed that the statistical tables must contain many typographical errors [7]. Many standard science texts still present Dixon's Q-test for outlier testing; thus, incorrect statistical procedures are likely still in use. QTEST and QTEST2 can provide a better percentage point at any desired confidence level, without restriction on sample size, and can be extended to subranges not covered by Dixon's six original ratios. The programs accomplish this by repeatedly sampling from a standard Gaussian distribution, computing the Q-ratios, and ranking the ratios to extract the value at the critical point; this whole process is then repeated, and the median of the resulting estimates is taken as the theoretical Q-test value. Like the bootstrap and other resampling techniques, the stochastic calculation of percentage points on personal computers, where the corresponding analytical solutions are not available, should find a place in many fields of data analysis.
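
A minimal FORTRAN 77 sketch of that loop follows. It is not the distributed QTEST2 source; the program name QSIM, the sample size, the trial counts, and the Park-Miller/Box-Muller generators are illustrative assumptions.

      PROGRAM QSIM
C     Minimal sketch (not the distributed QTEST2 source): estimate
C     the one-sided critical value of Dixon's r10 ratio for sample
C     size N at confidence level CLEV by Monte Carlo simulation.
      INTEGER N, NTRIAL, NREP
      PARAMETER (N = 5, NTRIAL = 20000, NREP = 11)
      REAL X(N), Q(NTRIAL), QC(NREP)
      REAL CLEV, GAUSS
      INTEGER ISEED, I, J, K, IDX
      COMMON /RNG/ ISEED
      ISEED = 12345
      CLEV = 0.95
      DO 40 K = 1, NREP
         DO 30 J = 1, NTRIAL
C           Draw a sample of size N from the standard normal.
            DO 10 I = 1, N
               X(I) = GAUSS()
   10       CONTINUE
            CALL SORT(X, N)
C           Dixon's r10 ratio: gap at the low end over the range.
            Q(J) = (X(2) - X(1)) / (X(N) - X(1))
   30    CONTINUE
C        Rank the NTRIAL ratios and read off the CLEV quantile.
         CALL SORT(Q, NTRIAL)
         IDX = NINT(CLEV * REAL(NTRIAL))
         QC(K) = Q(IDX)
   40 CONTINUE
C     The median of the NREP repeated estimates is the reported
C     theoretical Q-test value.
      CALL SORT(QC, NREP)
      WRITE (*,*) 'N =', N, ' CONF =', CLEV, ' Q =', QC((NREP+1)/2)
      END

      SUBROUTINE SORT(A, M)
C     Straight insertion sort in ascending order.
      INTEGER M, I, J
      REAL A(M), T
      DO 20 I = 2, M
         T = A(I)
         J = I - 1
   10    CONTINUE
         IF (J .GE. 1) THEN
            IF (A(J) .GT. T) THEN
               A(J+1) = A(J)
               J = J - 1
               GO TO 10
            END IF
         END IF
         A(J+1) = T
   20 CONTINUE
      RETURN
      END

      REAL FUNCTION URAND()
C     Park-Miller minimal-standard congruential generator (Schrage).
      INTEGER ISEED, IK
      COMMON /RNG/ ISEED
      IK = ISEED / 127773
      ISEED = 16807 * (ISEED - IK*127773) - IK*2836
      IF (ISEED .LT. 0) ISEED = ISEED + 2147483647
      URAND = REAL(ISEED) / 2147483647.0
      RETURN
      END

      REAL FUNCTION GAUSS()
C     Box-Muller transform of two uniform deviates (the paired
C     sine deviate is discarded for simplicity).
      REAL URAND, U1, U2
      U1 = URAND()
      U2 = URAND()
      GAUSS = SQRT(-2.0*LOG(U1)) * COS(6.2831853*U2)
      RETURN
      END

With a larger NTRIAL the extracted quantile stabilizes, and the printed value can be checked against the published tables for small sample sizes.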


Lines of Code: 605+

FORTRAN 77


See an example of an analysis involving Dixon's test.