# 3: Evaluating Analytical Data

When we use an analytical method we make three separate evaluations of experimental error. First, before we begin the analysis we evaluate potential sources of error to ensure they will not adversely affect our results. Second, during the analysis we monitor our measurements to ensure that errors remain acceptable. Finally, at the end of the analysis we evaluate the quality of the measurements and results, and compare them to our original design criteria. This chapter provides an introduction to sources of error, to evaluating errors in analytical measurements, and to the statistical analysis of data.

- 3.1: Characterizing Measurements and Results
- One way to characterize data from multiple measurements is to assume that the measurements are scattered randomly around a central value that provides the best estimate of the expected, or “true,” value. We describe the distribution of these results by reporting its central tendency and its spread.
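
  As a minimal sketch of these two descriptors (in Python here, though the chapter itself works with Excel and R; the replicate values are made up for illustration), the mean reports the central tendency and the sample standard deviation reports the spread:

  ```python
  import statistics

  # Hypothetical replicate measurements (e.g., masses in grams)
  results = [3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198]

  mean = statistics.mean(results)    # central tendency
  stdev = statistics.stdev(results)  # spread, using the n - 1 denominator

  print(f"mean = {mean:.3f}, s = {stdev:.3f}")
  ```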

- 3.2: Characterizing Experimental Errors
- Two essential questions arise from any set of data. First, does our measure of central tendency agree with the expected result? Second, why is there so much variability in the individual results? The first of these questions addresses the accuracy of our measurements and the second addresses the precision of our measurements. In this section we consider the types of experimental errors that affect accuracy and precision.

- 3.3: Propagation of Uncertainty
- A propagation of uncertainty allows us to estimate the uncertainty in a result from the uncertainties in the measurements used to calculate that result. The derivation of the general equation for any function, along with a rectangular solid example, was added by J. Breen.
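
  As a hedged sketch of the idea for the rectangular solid case (Python used for illustration; the dimensions and their uncertainties below are invented): for a product such as V = l × w × h, the relative uncertainties add in quadrature.

  ```python
  import math

  # Hypothetical dimensions and their uncertainties, in cm
  l, u_l = 10.00, 0.02
  w, u_w = 5.00, 0.01
  h, u_h = 2.00, 0.01

  V = l * w * h
  # For multiplication, relative uncertainties add in quadrature
  u_V = V * math.sqrt((u_l / l)**2 + (u_w / w)**2 + (u_h / h)**2)

  print(f"V = {V:.1f} ± {u_V:.2f} cm^3")
  ```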

- 3.4: The Distribution of Measurements and Results
- To compare two samples to each other, we need more than measures of their central tendencies and their spreads based on a small number of measurements. We need also to know how to predict the properties of the broader population from which the samples were drawn; in turn, this requires that we understand the distribution of samples within a population.

- 3.5: Statistical Analysis of Data
- A confidence interval is a useful way to report the result of an analysis because it sets limits on the expected result. In the absence of determinate error, a confidence interval based on a sample’s mean indicates the range of values in which we expect to find the population’s mean. In this section we introduce a general approach to the statistical analysis of data. Specific statistical tests are presented in the next section.
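
  A minimal sketch of a confidence interval for the population mean (Python used for illustration; the data are invented and the t value assumed here is the two-tailed 95% value for 6 degrees of freedom):

  ```python
  import math
  import statistics

  # Hypothetical replicate results, in mg
  results = [49.92, 50.05, 49.88, 50.10, 49.97, 50.01, 49.95]

  n = len(results)
  xbar = statistics.mean(results)
  s = statistics.stdev(results)
  t = 2.447  # two-tailed t at 95% confidence for n - 1 = 6 degrees of freedom

  # Confidence interval: xbar ± t * s / sqrt(n)
  half_width = t * s / math.sqrt(n)
  print(f"mu = {xbar:.2f} ± {half_width:.2f} mg (95% CI)")
  ```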

- 3.6: Statistical Methods for Normal Distributions
- The most common distribution for our results is a normal distribution. Because the area between any two limits of a normal distribution curve is well defined, constructing and evaluating significance tests is straightforward. The median/MAD methods that appear in the section on outliers were added by J. Breen.
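
  As a hedged sketch of a median/MAD outlier screen (Python used for illustration; the data and the cutoff k are illustrative choices, not a fixed standard): flag any value whose distance from the median exceeds k times the median absolute deviation.

  ```python
  import statistics

  # Hypothetical replicates, with one suspiciously large value
  data = [10.1, 10.3, 9.9, 10.2, 10.0, 12.8]

  med = statistics.median(data)
  # MAD: median of the absolute deviations from the median
  mad = statistics.median(abs(x - med) for x in data)

  k = 5  # illustrative cutoff
  outliers = [x for x in data if abs(x - med) > k * mad]
  print(outliers)
  ```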

- 3.7: Detection Limits
- The International Union of Pure and Applied Chemistry (IUPAC) defines a method’s detection limit as the smallest concentration or absolute amount of analyte that has a signal significantly larger than the signal from a suitable blank.
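
  A minimal sketch of this definition in signal terms (Python used for illustration; the blank readings are invented, and the factor of 3 is a common but assumed choice for "significantly larger"): take the detection-limit signal as the mean blank signal plus three times its standard deviation.

  ```python
  import statistics

  # Hypothetical replicate signals from a suitable blank
  blanks = [0.50, 0.48, 0.53, 0.51, 0.49, 0.52]

  # Signal detection limit: mean blank + 3 * standard deviation of the blank
  s_dl = statistics.mean(blanks) + 3 * statistics.stdev(blanks)
  print(f"signal detection limit = {s_dl:.3f}")
  ```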

- 3.8: Using Excel and R to Analyze Data
- Although the calculations in this chapter are relatively straightforward, it can be tedious to work problems using nothing more than a calculator. Both Excel and R include functions for many common statistical calculations. In addition, R provides useful functions for visualizing your data.

- 3.9: Problems
- End-of-chapter problems to test your understanding of topics in this chapter.

- 3.10: Additional Resources
- A compendium of resources to accompany topics in this chapter.

- 3.11: Chapter Summary and Key Terms
- Summary of the chapter's main topics and a list of key terms introduced in this chapter.