WSU STAT 360
Class Session 1 Summary and Notes
September 1, 2000
Statistical measures
I mentioned the following measures as an introduction to descriptive statistics, and as a prelude to making graphs to illustrate data.
- Measures of central tendency, including the arithmetic mean, median, and mode. As I mentioned in class, the fact that the world is dominated by noise, mistakes, and stochastic behavior does not make us helpless to describe it. These three measures, as well as others, suggest a value about which our data seem to cluster; any of them could be called a best estimate of a typical data value.
- Measures, or norms, of difference or error among the data, including the sum of squared differences (the L2 norm), the sum of absolute differences (the L1 norm), and the greatest absolute difference (the L-infinity norm). I showed how the mean arises from minimizing L2, and simply stated that the median minimizes L1; a short numerical check of both claims appears after this list.
- Measures of dispersion of observations around the central value. These include the range, the interquartile range, and the standard deviation. In fact, now that I have mentioned the range, I may as well say that the middle of the range, or midrange ( (Max+Min)/2 ), is another measure of central tendency. Can anyone guess what norm it minimizes? Can you back up your guess with a proof?
- Measures of distribution, including quartiles, quintiles, and percentiles.
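As a supplement to these notes (not something we did in class), here is a minimal Python sketch that computes these measures for the page-29 data used in the stem-and-leaf example below, and then checks numerically which candidate central value minimizes each norm. The variable and function names are my own, and I assume NumPy is available.

    import numpy as np
    from statistics import mode

    # The 18 observations from the example on page 29.
    x = np.array([0.193, 0.201, 0.204, 0.210, 0.213, 0.214, 0.215, 0.217, 0.218,
                  0.223, 0.223, 0.223, 0.224, 0.226, 0.228, 0.231, 0.233, 0.237])

    # Measures of central tendency.
    print("mean     =", round(float(x.mean()), 4))
    print("median   =", float(np.median(x)))
    print("mode     =", mode(x.tolist()))
    print("midrange =", float((x.min() + x.max()) / 2))

    # Measures of dispersion.
    print("range    =", round(float(x.max() - x.min()), 4))
    print("IQR      =", round(float(np.percentile(x, 75) - np.percentile(x, 25)), 4))
    print("std dev  =", round(float(x.std(ddof=1)), 4))

    # Norms of the differences between the data and a candidate central value c.
    def L2(c):   return np.sum((x - c) ** 2)      # sum of squared differences
    def L1(c):   return np.sum(np.abs(x - c))     # sum of absolute differences
    def Linf(c): return np.max(np.abs(x - c))     # greatest absolute difference

    # Search a fine grid of candidate central values and report the minimizer of each norm.
    grid = np.linspace(x.min(), x.max(), 10001)
    for name, norm in [("L2", L2), ("L1", L1), ("L-infinity", Linf)]:
        best = grid[np.argmin([norm(c) for c in grid])]
        print(f"value minimizing the {name} norm: {best:.4f}")

Running it should confirm the statements about L2 and L1 above, and it also lets you check your guess about the midrange numerically; the proof is still up to you.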
I also began a hurried explanation of a couple of graphical schemes for representing data which I do not think are very important, but to which the book devotes space. As usually happens when I start a hurried explanation, I have some further explanation to present in these notes, this time dealing with the stem-and-leaf plot.
Let me use the example from class. The data are found on page 29. By the end of class we had organized these data by value into five classes, as follows.
class | data observations
0.190 | 0.193
0.200 | 0.201,0.204
0.210 | 0.210,0.213,0.214,0.215,0.217,0.218
0.220 | 0.223,0.223,0.223,0.224,0.226,0.228
0.230 | 0.231,0.233,0.237
Now this looks almost like a stem-and-leaf diagram, but it is not abbreviated enough. We do not actually need to list each data value in full; within each class we need only record the final digit of each value (the leaf). This produces a much more concise diagram, from which we can still recapture all of the original data. I will therefore drop 0.19 from each value in the first row, 0.20 from each in the second, and so forth. The result is
class | leaves (final digits)
0.190 | 3
0.200 | 14
0.210 | 034578
0.220 | 333468
0.230 | 137
We will build a more complicated example of such a diagram in our Practicum next Friday (Sept. 8) using Excel.
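In the meantime, here is a minimal Python sketch of the same bookkeeping, offered purely as a supplement on my part (it is not part of the Practicum and the names in it are my own): group each value by its class (the stem) and keep only its final digit (the leaf).

    from collections import defaultdict

    # The 18 observations from page 29.
    data = [0.193, 0.201, 0.204, 0.210, 0.213, 0.214, 0.215, 0.217, 0.218,
            0.223, 0.223, 0.223, 0.224, 0.226, 0.228, 0.231, 0.233, 0.237]

    # Group each value by its stem (the first two decimal places) and keep
    # only the final digit as the leaf.
    plot = defaultdict(list)
    for v in data:
        text = f"{v:.3f}"                  # e.g. 0.193 -> "0.193"
        stem, leaf = text[:4], text[4]     # "0.19" and "3"
        plot[stem].append(leaf)

    # Print stems in order with their sorted leaves, e.g. "0.190 | 3".
    for stem in sorted(plot):
        print(f"{stem}0 | {''.join(sorted(plot[stem]))}")

Its output reproduces the little table above.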
Notes from August 27, 1999, that may be of interest.
Motivation for studying statistics
- Engineers commonly use statistics to test acceptability of their products and design their experiments.
- While engineers use deterministic models for many things, these models all have stochastic character of some sort. For example, material properties are not known exactly.
- At the limits of technology, for example in very small electronic devices, the fundamental uncertainties of nature make everything behave stochastically.
- Statistics have invaded every aspect of our lives: measurement, performance, advertising, public policy, and science. A person can hardly be well educated without an understanding of statistics.
Notes: Vining presented two examples of deterministic models in Chapter 1. I attempted in class to show how these models are fundamentally stochastic and become deterministic only under certain circumstances. Questions that students asked after class made me realize that I had done a mediocre job of explaining several points.
- First, I mentioned models without defining what they are. Models are descriptions of relationships between causes and effects, or between inputs and outputs. Generally, in engineering and science, models are mathematical equations that tell us how to calculate effects from causes. However, models can be more general: recipes, for example, are models.
- Second, I mentioned at one point that a model is "correct." What I meant is that we do not always know which mathematical model is the correct one to apply, and whatever we choose has some probability of being "correct" or "not correct." For example, I used Newton's law F=ma as an example of a correct, deterministic model. If any of you recall your engineering physics course, F=ma describes mechanics correctly only when speeds are slow compared to the speed of light; otherwise we have to use another theory of mechanics called special relativity. So if we are trying to analyze the behavior of particles in a synchrotron and are using F=ma as a model, then there will be a discrepancy between the model and the observations (data) that is not random and has nothing to do with noise. Statistics may not help our analysis in this case because the underlying model is simply wrong. (A small numerical illustration of this kind of systematic discrepancy follows this list.)
- Third, Vining mentioned the ideal gas law, PV=nRT, as an example of a deterministic law. However, I attempted to show that if our device for measuring pressure happens to be very small, then it is subject to variations in the number of molecules that strike it at any instant. This causes the measured pressure to vary and behave like a random variable. Only when the device becomes so large that the variations in molecular impacts can be neglected does the pressure settle down to a precise value. The "gas law" that applies at very small scales is stochastic. (A small simulation of this effect also follows this list.)
- Finally, Vining presented Ohm's law as another example of a deterministic law. However, if we attempt to measure voltage too accurately (i.e., if we try to analyze very small signals), then Johnson noise becomes an issue and "Ohm's law" becomes stochastic rather than deterministic. Johnson noise is fundamental; we cannot overcome it by improving our measurements of voltage, and it sets limits on the ultimate sensitivity of communications receivers. It results from the random thermal energy of (1/2)kT per degree of freedom of the particles that carry charge in the resistor. (A back-of-the-envelope calculation of its size is the last sketch following this list.)
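On the F=ma point above, here is a small numerical illustration of my own (not from Vining) comparing the Newtonian kinetic energy (1/2)mv^2 of an electron with the relativistic value (gamma-1)mc^2 at several speeds. The discrepancy is systematic, grows with speed, and no amount of averaging of repeated measurements would remove it.

    import math

    c = 2.998e8       # speed of light, m/s
    m = 9.109e-31     # electron rest mass, kg

    for frac in (0.01, 0.1, 0.5, 0.9, 0.99):
        v = frac * c
        gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
        ke_newton = 0.5 * m * v ** 2          # Newtonian kinetic energy
        ke_rel = (gamma - 1.0) * m * c ** 2   # relativistic kinetic energy
        print(f"v = {frac:4.2f}c   Newtonian / relativistic = {ke_newton / ke_rel:.3f}")

The ratio is essentially 1 at low speeds and falls far below 1 near the speed of light; that departure is model error, not noise.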
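On the gas-law point, here is a small simulation of my own (not from the text). If the number of molecular impacts on the pressure gauge during a sampling interval is modeled as Poisson with mean N, the relative fluctuation in the measured pressure scales like 1/sqrt(N), so a tiny gauge sees an obviously random pressure while a large gauge sees an essentially constant value.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Mean number of molecular impacts per sampling interval; a stand-in for
    # the size of the gauge (a larger gauge collects more impacts).
    for mean_impacts in (10, 1_000, 100_000, 10_000_000):
        counts = rng.poisson(lam=mean_impacts, size=10_000)
        relative_spread = counts.std() / counts.mean()
        print(f"mean impacts = {mean_impacts:>10}   relative fluctuation = {relative_spread:.5f}")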
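On the Ohm's-law point, the standard Johnson-Nyquist result gives the open-circuit rms noise voltage of a resistor as sqrt(4kTRB), where B is the measurement bandwidth. A quick calculation shows the size of the signals at which this matters; the resistance and bandwidth below are values I picked purely for illustration.

    import math

    k = 1.381e-23     # Boltzmann constant, J/K
    T = 300.0         # room temperature, K
    R = 10_000.0      # resistance in ohms (illustrative value)
    B = 1.0e6         # measurement bandwidth in Hz (illustrative value)

    v_rms = math.sqrt(4.0 * k * T * R * B)   # Johnson-Nyquist rms noise voltage
    print(f"rms Johnson noise: {v_rms * 1e6:.1f} microvolts")

For these values the noise is on the order of ten microvolts; a signal across this resistor much smaller than that is buried in thermal noise over this bandwidth, no matter how good the voltmeter is.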
Link forward to the next set of class notes for Friday, September 8, 2000