Tuesday, June 3, 2014

Standard Error of Estimate

To measure the reliability of the estimating equation, statisticians have developed the standard error of estimate. This standard error is symbolized s_e and is similar to the standard deviation in that both are measures of dispersion. The standard deviation measures the dispersion of a set of observations about their mean. The standard error of estimate, on the other hand, measures the variability, or scatter, of the observed values around the regression line.
The standard error of estimate may be defined as follows:

$$ s_e = \sqrt{\frac{\sum (y - \hat{y})^2}{n - 2}} $$

where
•    y = values of the dependent variable
•    ŷ = estimated values from the estimating equation that correspond to each y value
•    n = number of data points used to fit the regression line

The divisor is n − 2 rather than n because two degrees of freedom are lost when the two coefficients of the regression line (the intercept and the slope) are estimated from the same data.
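As a concrete illustration of this formula (my own sketch, not part of the original example), here is how s_e can be computed in Python, assuming NumPy is available; the function name standard_error_of_estimate is made up for this post:

    import numpy as np

    def standard_error_of_estimate(x, y):
        """Standard error of estimate s_e for a least-squares line of y on x."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        b, a = np.polyfit(x, y, 1)       # fitted slope b and intercept a
        y_hat = a + b * x                # estimated values from the line
        sse = np.sum((y - y_hat) ** 2)   # sum of squared deviations
        n = len(y)
        return np.sqrt(sse / (n - 2))    # n - 2: two coefficients estimated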
Example: Now let’s refer again to our earlier example and the estimating equation we fitted there. To calculate s_e for this problem, we must determine the value of Σ(y − ŷ)²: for each observation we compute the estimated value ŷ from the estimating equation, take the deviation y − ŷ, square it, and sum the squares over all the data points, as the sketch below illustrates.
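Continuing from the sketch above: since the example’s actual data and fitted equation aren’t reproduced here, the numbers below are purely illustrative. The sketch simply shows how the table of y, ŷ, and (y − ŷ)² values, and then s_e itself, can be produced:

    # Illustrative data only -- not the figures from the earlier example.
    x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8])

    b, a = np.polyfit(x, y, 1)     # fitted slope and intercept
    y_hat = a + b * x              # estimated value for each observation
    dev_sq = (y - y_hat) ** 2      # squared deviations (y - y_hat)^2

    print(f"{'y':>6} {'y_hat':>8} {'(y-y_hat)^2':>13}")
    for yi, yh, d in zip(y, y_hat, dev_sq):
        print(f"{yi:6.2f} {yh:8.2f} {d:13.4f}")

    n = len(y)
    print("s_e =", np.sqrt(dev_sq.sum() / (n - 2)))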

Interpreting the Standard Error of Estimate

As was true of the standard deviation, the larger the standard error of estimate, the greater the scatter (or dispersion) of the points around the regression line. Conversely, if s_e = 0, we expect the estimating equation to be a “perfect” estimator of the dependent variable: in that case, all the data points lie directly on the regression line and none are scattered around it.
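This limiting case is easy to check numerically (again my own illustration, reusing the standard_error_of_estimate sketch from above):

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    on_line = 3.0 + 0.5 * x                      # points exactly on y = 3 + 0.5x
    rng = np.random.default_rng(0)
    scattered = on_line + rng.normal(0, 0.8, 5)  # same line plus random scatter

    print(standard_error_of_estimate(x, on_line))    # ~0: a "perfect" fit
    print(standard_error_of_estimate(x, scattered))  # > 0, grows with scatter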
