G.1.1 This annex addresses the general question of obtaining from the estimate y of the measurand Y, and from the combined standard uncertainty u_{c}(y) of that estimate, an expanded uncertainty U_{p} = k_{p}u_{c}(y) that defines an interval y − U_{p} ≤ Y ≤ y + U_{p} that has a high, specified coverage probability or level of confidence p. It thus deals with the issue of determining the coverage factor k_{p} that produces an interval about the measurement result y that may be expected to encompass a large, specified fraction p of the distribution of values that could reasonably be attributed to the measurand Y (see Clause 6).
G.1.2 In most practical measurement situations, the calculation of intervals having specified levels of confidence — indeed, the estimation of most individual uncertainty components in such situations — is at best only approximate. Even the experimental standard deviation of the mean of as many as 30 repeated observations of a quantity described by a normal distribution has itself an uncertainty of about 13 percent (see Table E.1 in Annex E).
In most cases, it does not make sense to try to distinguish between, for example, an interval having a level of confidence of 95 percent (one chance in 20 that the value of the measurand Y lies outside the interval) and either a 94 percent or 96 percent interval (1 chance in 17 and 25, respectively). Obtaining justifiable intervals with levels of confidence of 99 percent (1 chance in 100) and higher is especially difficult, even if it is assumed that no systematic effects have been overlooked, because so little information is generally available about the most extreme portions or “tails” of the probability distributions of the input quantities.
G.1.3 To obtain the value of the coverage factor k_{p} that produces an interval corresponding to a specified level of confidence p requires detailed knowledge of the probability distribution characterized by the measurement result and its combined standard uncertainty. For example, for a quantity z described by a normal distribution with expectation μ_{z} and standard deviation σ, the value of k_{p} that produces an interval μ_{z} ± k_{p}σ that encompasses the fraction p of the distribution, and thus has a coverage probability or level of confidence p, can be readily calculated. Some examples are given in Table G.1.
Table G.1 — Value of the coverage factor k_{p} that produces an interval having level of confidence p assuming a normal distribution

| Level of confidence p (percent) | Coverage factor k_{p} |
|---|---|
| 68,27 | 1 |
| 90 | 1,645 |
| 95 | 1,960 |
| 95,45 | 2 |
| 99 | 2,576 |
| 99,73 | 3 |
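The Table G.1 values can be reproduced from the inverse cumulative distribution function of the normal distribution; a minimal sketch using only the Python standard library (the function name is an illustrative choice):

```python
from statistics import NormalDist

def coverage_factor(p):
    """Coverage factor k_p such that the interval mu +/- k_p*sigma
    encompasses the fraction p of a normal distribution (two-sided)."""
    return NormalDist().inv_cdf((1 + p) / 2)

for p in (0.6827, 0.90, 0.95, 0.9545, 0.99, 0.9973):
    print(f"p = {p:.2%}  k_p = {coverage_factor(p):.3f}")
```

The two-sided interval leaves a fraction (1 − p)⁄2 in each tail, hence the argument (1 + p)⁄2 to the inverse CDF.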
NOTE By contrast, if z is described by a rectangular probability distribution with expectation μ_{z} and standard deviation σ = a⁄√3^{‾‾}, where a is the half‑width of the distribution, the level of confidence p is 57,74 percent for k_{p} = 1; 95 percent for k_{p} = 1,65; 99 percent for k_{p} = 1,71; and 100 percent for k_{p} ≥ √3^{‾‾} ≈ 1,73; the rectangular distribution is “narrower” than the normal distribution in the sense that it is of finite extent and has no “tails”.
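The rectangular-distribution figures in the note can be checked directly: for a rectangular distribution of half-width a, the standard deviation is σ = a⁄√3, so the fraction of the distribution within ±k_{p}σ is min(1, k_{p}⁄√3). A sketch:

```python
import math

def rect_coverage(k):
    """Level of confidence p for the interval mu +/- k*sigma when the
    distribution is rectangular with sigma = a/sqrt(3): p = min(1, k/sqrt(3))."""
    return min(1.0, k / math.sqrt(3))

for k in (1.0, 1.65, 1.71, math.sqrt(3)):
    print(f"k = {k:.3f}  p = {rect_coverage(k):.2%}")
```

For k ≥ √3 the coverage is exactly 100 percent, reflecting the finite extent of the distribution.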
G.1.4 If the probability distributions of the input quantities X_{1}, X_{2}, ..., X_{N} upon which the measurand Y depends are known [their expectations, variances, and higher moments (see C.2.13 and C.2.22) if the distributions are not normal distributions], and if Y is a linear function of the input quantities, Y = c_{1}X_{1} + c_{2}X_{2} + ... + c_{N}X_{N}, then the probability distribution of Y may be obtained by convolving the individual probability distributions [10]. Values of k_{p} that produce intervals corresponding to specified levels of confidence p may then be calculated from the resulting convolved distribution.
G.1.5 If the functional relationship between Y and its input quantities is nonlinear and a first‑order Taylor series expansion of the relationship is not an acceptable approximation (see 5.1.2 and 5.1.5), then the probability distribution of Y cannot be obtained by convolving the distributions of the input quantities. In such cases, other analytical or numerical methods are required.
G.1.6 In practice, because the parameters characterizing the probability distributions of input quantities are usually estimates, because it is unrealistic to expect that the level of confidence to be associated with a given interval can be known with a great deal of exactness, and because of the complexity of convolving probability distributions, such convolutions are rarely, if ever, implemented when intervals having specified levels of confidence need to be calculated. Instead, approximations are used that take advantage of the Central Limit Theorem.
G.2.1 If Y = c_{1}X_{1} + c_{2}X_{2} + ... + c_{N}X_{N} = ∑^{N}_{i = 1}c_{i}X_{i} and all the X_{i} are characterized by normal distributions, then the resulting convolved distribution of Y will also be normal. However, even if the distributions of the X_{i} are not normal, the distribution of Y may often be approximated by a normal distribution because of the Central Limit Theorem. This theorem states that the distribution of Y will be approximately normal with expectation E(Y) = ∑^{N}_{i = 1}c_{i}E(X_{i}) and variance σ^{2}(Y ) = ∑^{N}_{i = 1}c^{2}_{i}σ^{2}(X_{i}), where E(X_{i}) is the expectation of X_{i} and σ^{2}(X_{i}) is the variance of X_{i}, if the X_{i} are independent and σ^{2}(Y ) is much larger than any single component c^{2}_{i}σ^{2}(X_{i}) from a non‑normally distributed X_{i}.
G.2.2 The Central Limit Theorem is significant because it shows the very important role played by the variances of the probability distributions of the input quantities, compared with that played by the higher moments of the distributions, in determining the form of the resulting convolved distribution of Y. Further, it implies that the convolved distribution converges towards the normal distribution as the number of input quantities contributing to σ^{2}(Y) increases; that the convergence will be more rapid the closer the values of c^{2}_{i}σ^{2}(X_{i}) are to each other (equivalent in practice to each input estimate x_{i} contributing a comparable uncertainty to the uncertainty of the estimate y of the measurand Y); and that the closer the distributions of the X_{i} are to being normal, the fewer X_{i} are required to yield a normal distribution for Y.
EXAMPLE The rectangular distribution (see 4.3.7 and 4.4.5) is an extreme example of a non‑normal distribution, but the convolution of even as few as three such distributions of equal width is approximately normal. If the half‑width of each of the three rectangular distributions is a so that the variance of each is a^{2}∕3, the variance of the convolved distribution is σ^{2} = a^{2}. The 95 percent and 99 percent intervals of the convolved distribution are defined by 1,937 σ and 2,379 σ, respectively, while the corresponding intervals for a normal distribution with the same standard deviation σ are defined by 1,960 σ and 2,576 σ (see Table G.1) [10].
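The near-normality claimed in the example can be checked numerically. The sketch below draws the sum of three rectangular (uniform) variables of half-width a = 1, so that the standard deviation of the sum is σ = a, and estimates the fraction of the convolved distribution within ±1,937σ; the sample size and seed are arbitrary choices made for reproducibility:

```python
import random

random.seed(12345)   # fixed seed so the estimate is reproducible
N = 200_000
a = 1.0              # half-width of each rectangular distribution; sigma of the sum equals a

# Count draws of the three-fold sum falling inside the 95 percent interval +/- 1.937*sigma
inside = sum(
    1 for _ in range(N)
    if abs(random.uniform(-a, a) + random.uniform(-a, a) + random.uniform(-a, a)) <= 1.937 * a
)
print(f"fraction within +/-1.937 sigma: {inside / N:.4f}")
```

With 200 000 samples the statistical scatter of the estimated fraction is below 0,001, so the result should sit close to 0,95.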
NOTE 1 For every interval with a level of confidence p greater than about 91,7 percent, the value of k_{p} for a normal distribution is larger than the corresponding value for the distribution resulting from the convolution of any number and size of rectangular distributions.
NOTE 2 It follows from the Central Limit Theorem that the probability distribution of the arithmetic mean q‾‾ of n observations q_{k} of a random variable q with expectation μ_{q} and finite standard deviation σ approaches a normal distribution with mean μ_{q} and standard deviation σ∕√n^{‾‾} as n → ∞, whatever may be the probability distribution of q.
G.2.3 A practical consequence of the Central Limit Theorem is that when it can be established that its requirements are approximately met, in particular, if the combined standard uncertainty u_{c}(y) is not dominated by a standard uncertainty component obtained from a Type A evaluation based on just a few observations, or by a standard uncertainty component obtained from a Type B evaluation based on an assumed rectangular distribution, a reasonable first approximation to calculating an expanded uncertainty U_{p} = k_{p}u_{c}(y) that provides an interval with level of confidence p is to use for k_{p} a value from the normal distribution. The values most commonly used for this purpose are given in Table G.1.
G.3.1 To obtain a better approximation than simply using a value of k_{p} from the normal distribution as in G.2.3, it must be recognized that the calculation of an interval having a specified level of confidence requires, not the distribution of the variable [Y − E(Y )]⁄σ(Y ), but the distribution of the variable (y − Y )⁄u_{c}(y). This is because in practice, all that is usually available are y, the estimate of Y as obtained from y = ∑^{N}_{i = 1}c_{i}x_{i}, where x_{i} is the estimate of X_{i}; and the combined variance associated with y, u^{2}_{c}(y), evaluated from u^{2}_{c}(y) = ∑^{N}_{i = 1}c^{2}_{i}u^{2}(x_{i}), where u(x_{i}) is the standard uncertainty (estimated standard deviation) of the estimate x_{i}.
NOTE Strictly speaking, in the expression (y − Y )⁄u_{c}(y), Y should read E(Y ). For simplicity, such a distinction has only been made in a few places in this Guide. In general, the same symbol has been used for the physical quantity, the random variable that represents that quantity, and the expectation of that variable (see 4.1.1, notes).
G.3.2 If z is a normally distributed random variable with expectation μ_{z} and standard deviation σ, and z‾‾ is the arithmetic mean of n independent observations z_{k} of z with s(z‾‾) the experimental standard deviation of z‾‾ [see Equations (3) and (5) in 4.2], then the distribution of the variable t = (z‾‾ − μ_{z})⁄s(z‾‾) is the t‑distribution or Student's distribution (C.3.8) with v = n − 1 degrees of freedom.
Consequently, if the measurand Y is simply a single normally distributed quantity X, Y = X; and if X is estimated by the arithmetic mean X^{‾‾‾} of n independent repeated observations X_{k} of X, with experimental standard deviation of the mean s(X^{‾‾‾}), then the best estimate of Y is y = X^{‾‾‾} and the experimental standard deviation of that estimate is u_{c}(y) = s(X^{‾‾‾}). Then t = (z‾‾ − μ_{z})∕s(z‾‾) = (X^{‾‾‾} − X)∕s(X^{‾‾‾}) = (y − Y )∕u_{c}(y) is distributed according to the t‑distribution with

Pr[−t_{p}(v) ≤ t ≤ t_{p}(v)] = p (G.1a)

which can be rewritten as

Pr[y − t_{p}(v)u_{c}(y) ≤ Y ≤ y + t_{p}(v)u_{c}(y)] = p (G.1b)

In these expressions, Pr[ ] means “probability of” and the t‑factor t_{p}(v) is the value of t for a given value of the parameter v — the degrees of freedom (see G.3.3) — such that the fraction p of the t distribution is encompassed by the interval −t_{p}(v) to +t_{p}(v). Thus the expanded uncertainty

U_{p} = k_{p}u_{c}(y) = t_{p}(v)u_{c}(y)

defines an interval y − U_{p} ≤ Y ≤ y + U_{p}, conveniently written as Y = y ± U_{p}, that may be expected to encompass a large fraction p of the distribution of values that could reasonably be attributed to Y, and p is the coverage probability or level of confidence of the interval.
G.3.3 The degrees of freedom v is equal to n − 1 for a single quantity estimated by the arithmetic mean of n independent observations, as in G.3.2. If n independent observations are used to determine both the slope and intercept of a straight line by the method of least squares, the degrees of freedom of their respective standard uncertainties is v = n − 2. For a least-squares fit of m parameters to n data points, the degrees of freedom of the standard uncertainty of each parameter is v = n − m. (See Reference [15] for a further discussion of degrees of freedom.)
G.3.4 Selected values of t_{p}(v) for different values of v and various values of p are given in Table G.2 at the end of this annex. As v → ∞ the t‑distribution approaches the normal distribution and t_{p}(v) ≈ (1 + 2∕v)^{1/2}k_{p}, where in this expression k_{p} is the coverage factor required to obtain an interval with level of confidence p for a normally distributed variable. Thus the value of t_{p}(∞) in Table G.2 for a given p equals the value of k_{p} in Table G.1 for the same p.
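The quoted large-v approximation t_{p}(v) ≈ (1 + 2∕v)^{1/2}k_{p} is a one-line computation; a sketch using the standard library for k_{p} (the values of p and v are illustrative):

```python
from statistics import NormalDist

def t_factor_approx(p, v):
    """Approximate t_p(v) via t_p(v) ~ (1 + 2/v)**0.5 * k_p,
    where k_p is the normal-distribution coverage factor; the
    approximation improves as the degrees of freedom v grow."""
    k_p = NormalDist().inv_cdf((1 + p) / 2)
    return (1 + 2 / v) ** 0.5 * k_p

# Table G.2 gives t_95(19) = 2.09; the approximation yields about 2.06
print(f"{t_factor_approx(0.95, 19):.3f}")
```

As v → ∞ the correction factor tends to 1 and the approximation reduces to k_{p}, consistent with t_{p}(∞) = k_{p}.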
NOTE Often, the t‑distribution is tabulated in quantiles; that is, values of the quantile t_{1 − α} are given, where 1 − α denotes the cumulative probability and the relation

1 − α = ∫^{t_{1 − α}}_{−∞} f(t, v) dt

defines the quantile, where f is the probability density function of t. Thus t_{p} and t_{1 − α} are related by p = 1 − 2α. For example, the value of the quantile t_{0,975}, for which 1 − α = 0,975 and α = 0,025, is the same as t_{p}(v) for p = 0,95.
G.4.1 In general, the t‑distribution will not describe the distribution of the variable (y − Y )⁄u_{c}(y) if u^{2}_{c}(y) is the sum of two or more estimated variance components u^{2}_{i}(y) = c^{2}_{i}u^{2}(x_{i}) (see 5.1.3), even if each x_{i} is the estimate of a normally distributed input quantity X_{i}. However, the distribution of that variable may be approximated by a t‑distribution with an effective degrees of freedom v_{eff} obtained from the Welch‑Satterthwaite formula [16], [17], [18]

u^{4}_{c}(y)⁄v_{eff} = ∑^{N}_{i = 1} u^{4}_{i}(y)⁄v_{i} (G.2a)

that is,

v_{eff} = u^{4}_{c}(y) ⁄ ∑^{N}_{i = 1} [u^{4}_{i}(y)⁄v_{i}] (G.2b)

with v_{eff} ≤ ∑^{N}_{i = 1} v_{i}.
NOTE 1 If the value of v_{eff} obtained from Equation (G.2b) is not an integer, which will usually be the case in practice, the corresponding value of t_{p} may be found from Table G.2 by interpolation or by truncating v_{eff} to the next lower integer.
NOTE 2 If an input estimate x_{i} is itself obtained from two or more other estimates, then the value of v_{i} to be used with u^{4}_{i}(y) = [c^{2}_{i}u^{2}(x_{i})]^{2} in the denominator of Equation (G.2b) is the effective degrees of freedom calculated from an expression equivalent to Equation (G.2b).
NOTE 3 Depending upon the needs of the potential users of a measurement result, it may be useful, in addition to v_{eff}, to calculate and report also values for v_{effA} and v_{effB}, computed from Equation (G.2b) treating separately the standard uncertainties obtained from Type A and Type B evaluations. If the contributions to u^{2}_{c}(y) of the Type A and Type B standard uncertainties alone are denoted, respectively, by u^{2}_{cA}(y) and u^{2}_{cB}(y), the various quantities are related by

u^{2}_{c}(y) = u^{2}_{cA}(y) + u^{2}_{cB}(y)

u^{4}_{c}(y)⁄v_{eff} = u^{4}_{cA}(y)⁄v_{effA} + u^{4}_{cB}(y)⁄v_{effB}
EXAMPLE Consider that Y = f(X_{1}, X_{2}, X_{3}) = bX_{1}X_{2}X_{3} and that the estimates x_{1}, x_{2}, x_{3} of the normally distributed input quantities X_{1}, X_{2}, X_{3} are the arithmetic means of n_{1} = 10, n_{2} = 5, and n_{3} = 15 independent repeated observations, respectively, with relative standard uncertainties u(x_{1})⁄x_{1} = 0,25 percent, u(x_{2})⁄x_{2} = 0,57 percent, and u(x_{3})⁄x_{3} = 0,82 percent. In this case, c_{i} = ∂f⁄∂X_{i} = Y⁄X_{i} (to be evaluated at x_{1}, x_{2}, x_{3} — see 5.1.3, Note 1), [u_{c}(y)⁄y]^{2} = ∑^{3}_{i = 1}[u(x_{i})⁄x_{i}]^{2} = (1,03 percent)^{2} (see Note 2 to 5.1.6), and Equation (G.2b) becomes

v_{eff} = [u_{c}(y)⁄y]^{4} ⁄ ∑^{3}_{i = 1} {[u(x_{i})⁄x_{i}]^{4}⁄v_{i}}

Thus, with v_{i} = n_{i} − 1,

v_{eff} = (1,03)^{4} ⁄ [(0,25)^{4}⁄9 + (0,57)^{4}⁄4 + (0,82)^{4}⁄14] = 19,0
The value of t_{p} for p = 95 percent and v = 19 is, from Table G.2, t_{95}(19) = 2,09; hence the relative expanded uncertainty for this level of confidence is U_{95} = 2,09 × (1,03 percent) = 2,2 percent. It may then be stated that Y = y ± U_{95} = y(1 ± 0,022) (y to be determined from y = bx_{1}x_{2}x_{3}), or that 0,978y ≤ Y ≤ 1,022y, and that the level of confidence to be associated with the interval is approximately 95 percent.
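The arithmetic of this example can be reproduced directly from Equation (G.2b), working in relative terms since c_{i} = Y⁄X_{i}; a sketch:

```python
# Relative standard uncertainties (percent) and degrees of freedom v_i = n_i - 1
components = [(0.25, 10 - 1), (0.57, 5 - 1), (0.82, 15 - 1)]

u_rel = sum(u**2 for u, _ in components) ** 0.5          # combined relative uncertainty
v_eff = u_rel**4 / sum(u**4 / v for u, v in components)  # Welch-Satterthwaite, Eq. (G.2b)

print(f"u_c(y)/y = {u_rel:.2f} percent, v_eff = {v_eff:.1f}")

# Truncating v_eff to 19 and using t_95(19) = 2.09 from Table G.2:
U_95 = 2.09 * u_rel
print(f"U_95 = {U_95:.1f} percent")   # about 2.2 percent, as in the text
```

Note that the unrounded combined uncertainty is used inside Equation (G.2b); rounding to 1,03 percent first changes v_{eff} only negligibly.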
G.4.2 In practice, u_{c}(y) depends on standard uncertainties u(x_{i}) of input estimates of both normally and non‑normally distributed input quantities, and the u(x_{i}) are obtained from both frequency‑based and a priori probability distributions (that is, from both Type A and Type B evaluations). A similar statement applies to the estimate y and input estimates x_{i} upon which y depends. Nevertheless, the probability distribution of the function t = (y − Y)⁄u_{c}(y) can be approximated by the t‑distribution if it is expanded in a Taylor series about its expectation. In essence, this is what is achieved, in the lowest order approximation, by the Welch‑Satterthwaite formula, Equation (G.2a) or Equation (G.2b).
The question arises as to the degrees of freedom to assign to a standard uncertainty obtained from a Type B evaluation when v_{eff} is calculated from Equation (G.2b). Since the appropriate definition of degrees of freedom recognizes that v as it appears in the t‑distribution is a measure of the uncertainty of the variance s^{2}(z‾‾), Equation (E.7) in E.4.3 may be used to define the degrees of freedom v_{i},

v_{i} ≈ (1⁄2) u^{2}(x_{i})⁄σ^{2}[u(x_{i})] ≈ (1⁄2) [Δu(x_{i})⁄u(x_{i})]^{−2} (G.3)
The quantity in large brackets is the relative uncertainty of u(x_{i}); for a Type B evaluation of standard uncertainty it is a subjective quantity whose value is obtained by scientific judgement based on the pool of available information.
EXAMPLE Consider that one's knowledge of how input estimate x_{i} was determined and how its standard uncertainty u(x_{i}) was evaluated leads one to judge that the value of u(x_{i}) is reliable to about 25 percent. This may be taken to mean that the relative uncertainty is Δu(x_{i})⁄u(x_{i}) = 0,25, and thus from Equation (G.3), v_{i} = (0,25)^{−2}⁄2 = 8. If instead one had judged the value of u(x_{i}) to be reliable to only about 50 percent, then v_{i} = 2. (See also Table E.1 in Annex E.)
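Equation (G.3) reduces to a one-line computation; a sketch reproducing the example's numbers (the function name is an illustrative choice):

```python
def type_b_dof(rel_uncertainty_of_u):
    """Degrees of freedom v_i ~ (1/2) * [delta_u(x_i)/u(x_i)]**-2
    from Equation (G.3), given the judged relative uncertainty of u(x_i)."""
    return 0.5 * rel_uncertainty_of_u ** -2

print(type_b_dof(0.25))  # u(x_i) judged reliable to about 25 percent -> v_i = 8
print(type_b_dof(0.50))  # judged reliable to only about 50 percent   -> v_i = 2
```

The more reliable the judged value of u(x_{i}), the larger the assigned degrees of freedom, with v_{i} → ∞ in the limit of an exactly known standard uncertainty.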
G.4.3 In the discussion in 4.3 and 4.4 of Type B evaluation of standard uncertainty from an a priori probability distribution, it was implicitly assumed that the value of u(x_{i}) resulting from such an evaluation is exactly known. For example, when u(x_{i}) is obtained from a rectangular probability distribution of assumed half‑width a = (a_{+} −a_{−})⁄2 as in 4.3.7 and 4.4.5, u(x_{i}) = a⁄√3^{‾‾} is viewed as a constant with no uncertainty because a_{+} and a_{−}, and thus a, are so viewed (but see 4.3.9, Note 2). This implies through Equation (G.3) that v_{i} → ∞ or 1⁄v_{i} → 0, but it causes no difficulty in evaluating Equation (G.2b). Further, assuming that v_{i} → ∞ is not necessarily unrealistic; it is common practice to choose a_{−} and a_{+} in such a way that the probability of the quantity in question lying outside the interval a_{−} to a_{+} is extremely small.
G.5.1 An expression found in the literature on measurement uncertainty and often used to obtain an uncertainty that is intended to provide an interval with a 95 percent level of confidence may be written as

U′_{95} = [t^{2}_{95}(v′_{eff})s^{2} + 3u^{2}]^{1⁄2} (G.4)
Here t_{95}(v′_{eff}) is taken from the t‑distribution for v′_{eff} degrees of freedom and p = 95 percent; v′_{eff} is the effective degrees of freedom calculated from the Welch‑Satterthwaite formula [Equation (G.2b)] taking into account only those standard uncertainty components s_{i} that have been evaluated statistically from repeated observations in the current measurement; s^{2} = ∑c^{2}_{i}s^{2}_{i}; c_{i} ≡ ∂f⁄∂x_{i}; and u^{2} = ∑u^{2}_{j}(y) = ∑c^{2}_{j}(a^{2}_{j}⁄3) accounts for all other components of uncertainty, where +a_{j} and −a_{j} are the assumed exactly known upper and lower bounds of X_{j} relative to its best estimate x_{j} (that is, x_{j} − a_{j} ≤ X_{j} ≤ x_{j} + a_{j}).
NOTE A component based on repeated observations made outside the current measurement is treated in the same way as any other component included in u^{2}. Hence, in order to make a meaningful comparison between Equation (G.4) and Equation (G.5) of the following subclause, it is assumed that such components, if present, are negligible.
G.5.2 If an expanded uncertainty that provides an interval with a 95 percent level of confidence is evaluated according to the methods recommended in G.3 and G.4, the resulting expression in place of Equation (G.4) is

U_{95} = t_{95}(v_{eff})u_{c}(y) = t_{95}(v_{eff})[s^{2} + u^{2}]^{1⁄2} (G.5)
In most cases, the value of U_{95} from Equation (G.5) will be larger than the value of U′_{95} from Equation (G.4), if it is assumed that in evaluating Equation (G.5), all Type B variances are obtained from a priori rectangular distributions with half‑widths that are the same as the bounds a_{j} used to compute u^{2} of Equation (G.4). This may be understood by recognizing that, although t_{95}(v′_{eff}) will in most cases be somewhat larger than t_{95}(v_{eff}), both factors are close to 2; and in Equation (G.5) u^{2} is multiplied by t^{2}_{p}(v_{eff}) ≈ 4 while in Equation (G.4) it is multiplied by 3. Although the two expressions yield equal values of U′_{95} and U_{95} for u^{2} ≪ s^{2}, U′_{95} will be as much as 13 percent smaller than U_{95} if u^{2} ≫ s^{2}. Thus in general, Equation (G.4) yields an uncertainty that provides an interval having a smaller level of confidence than the interval provided by the expanded uncertainty calculated from Equation (G.5).
NOTE 1 In the limits u^{2}⁄s^{2} → ∞ and v_{eff} → ∞, U′_{95} → 1,732u while U_{95} → 1,960u. In this case, U′_{95} provides an interval having only a 91,7 percent level of confidence, while U_{95} provides a 95 percent interval. This case is approximated in practice when the components obtained from estimates of upper and lower bounds are dominant, large in number, and have values of u^{2}_{j}(y) = c^{2}_{j}a^{2}_{j}⁄3 that are of comparable size.
NOTE 2 For a normal distribution, the coverage factor k = √3^{‾‾} ≈ 1,732 provides an interval with a level of confidence p = 91,673... percent. This value of p is robust in the sense that it is, in comparison with that of any other value, optimally independent of small deviations of the input quantities from normality.
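The 91,673 percent figure in Note 2 follows directly from the normal cumulative distribution function evaluated at k = √3; a standard-library sketch:

```python
import math
from statistics import NormalDist

k = math.sqrt(3)                     # ~1.732, the limiting factor of Eq. (G.4)
p = 2 * NormalDist().cdf(k) - 1      # two-sided coverage of mu +/- k*sigma
print(f"p = {100 * p:.3f} percent")  # about 91.673 percent
```

This is the coverage a normal distribution assigns to the interval that a pure rectangular treatment (the √3 factor) would produce, which is why U′_{95} falls short of a true 95 percent interval in the u^{2} ≫ s^{2} limit.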
G.5.3 Occasionally an input quantity X_{i} is distributed asymmetrically — deviations about its expected value of one sign are more probable than deviations of the opposite sign (see 4.3.8). Although this makes no difference in the evaluation of the standard uncertainty u(x_{i}) of the estimate x_{i} of X_{i}, and thus in the evaluation of u_{c}(y), it may affect the calculation of U.
It is usually convenient to give a symmetric interval, Y = y ± U, unless the interval is such that there is a cost differential between deviations of one sign over the other. If the asymmetry of X_{i} causes only a small asymmetry in the probability distribution characterized by the measurement result y and its combined standard uncertainty u_{c}(y), the probability lost on one side by quoting a symmetric interval is compensated by the probability gained on the other side. The alternative is to give an interval that is symmetric in probability (and thus asymmetric in U): the probability that Y lies below the lower limit y − U_{−} is equal to the probability that Y lies above the upper limit y + U_{+}. But in order to quote such limits, more information than simply the estimates y and u_{c}(y) [and hence more information than simply the estimates x_{i} and u(x_{i}) of each input quantity X_{i}] is needed.
G.5.4 The evaluation of the expanded uncertainty U_{p} given here in terms of u_{c}(y), v_{eff}, and the factor t_{p}(v_{eff}) from the t‑distribution is only an approximation, and it has its limitations. The distribution of (y − Y)⁄u_{c}(y) is given by the t‑distribution only if the distribution of Y is normal, the estimate y and its combined standard uncertainty u_{c}(y) are independent, and if the distribution of u^{2}_{c}(y) is a χ^{2} distribution. The introduction of v_{eff}, Equation (G.2b), deals only with the latter problem, and provides an approximately χ^{2} distribution for u^{2}_{c}(y); the other part of the problem, arising from the non‑normality of the distribution of Y, requires the consideration of higher moments in addition to the variance.
G.6.1 The coverage factor k_{p} that provides an interval having a level of confidence p close to a specified level can only be found if there is extensive knowledge of the probability distribution of each input quantity and if these distributions are combined to obtain the distribution of the output quantity. The input estimates x_{i} and their standard uncertainties u(x_{i}) by themselves are inadequate for this purpose.
G.6.2 Because the extensive computations required to combine probability distributions are seldom justified by the extent and reliability of the available information, an approximation to the distribution of the output quantity is acceptable. Because of the Central Limit Theorem, it is usually sufficient to assume that the probability distribution of (y − Y )⁄u_{c}(y) is the t‑distribution and take k_{p} = t_{p}(v_{eff}), with the t‑factor based on an effective degrees of freedom v_{eff} of u_{c}(y) obtained from the Welch‑Satterthwaite formula, Equation (G.2b).
G.6.3 To obtain v_{eff} from Equation (G.2b) requires the degrees of freedom v_{i} for each standard uncertainty component. For a component obtained from a Type A evaluation, v_{i} is obtained from the number of independent repeated observations upon which the corresponding input estimate is based and the number of independent quantities determined from those observations (see G.3.3). For a component obtained from a Type B evaluation, v_{i} is obtained from the judged reliability of the value of that component [see G.4.2 and Equation (G.3)].
G.6.4 Thus the following is a summary of the preferred method of calculating an expanded uncertainty U_{p} = k_{p}u_{c}(y) intended to provide an interval Y = y ± U_{p} that has an approximate level of confidence p:

1) Obtain y and u_{c}(y) as described in Clauses 4 and 5.

2) Estimate the effective degrees of freedom v_{eff} of u_{c}(y) from the Welch‑Satterthwaite formula, Equation (G.2b). If u(x_{i}) is obtained from a Type A evaluation, determine v_{i} as outlined in G.3.3. If u(x_{i}) is obtained from a Type B evaluation and it can be treated as exactly known, which is often the case in practice, v_{i} → ∞; otherwise, estimate v_{i} from Equation (G.3).

3) Obtain t_{p}(v_{eff}) for the desired level of confidence p from Table G.2. If v_{eff} is not an integer, either interpolate or truncate v_{eff} to the next lower integer.

4) Take k_{p} = t_{p}(v_{eff}) and calculate U_{p} = k_{p}u_{c}(y).
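The summary procedure above can be sketched end to end. The component values, and the small excerpt of Table G.2 used for the t-factor lookup, are illustrative choices; v → ∞ for an exactly known Type B component is represented by `math.inf`, whose reciprocal contribution vanishes:

```python
import math

# t_95(v) excerpted from Table G.2, indexed by integer degrees of freedom
T95 = {9: 2.26, 10: 2.23, 16: 2.12, 19: 2.09, 20: 2.09}

def expanded_uncertainty_95(components):
    """components: list of (u_i(y), v_i) pairs.
    Returns (u_c, v_eff, U_95) following G.6.4: combine the variances,
    apply Eq. (G.2b), truncate v_eff to the next lower integer, look up t_95."""
    u_c = sum(u**2 for u, _ in components) ** 0.5
    v_eff = u_c**4 / sum(u**4 / v for u, v in components)  # u**4/inf -> 0.0
    t = T95[int(v_eff)]                                    # truncation, per Note 1 of G.4.1
    return u_c, v_eff, t * u_c

# One Type A component (10 observations, v = 9) and one exactly known
# rectangular Type B component (v -> infinity):
u_c, v_eff, U95 = expanded_uncertainty_95([(0.5, 9), (0.3, math.inf)])
print(f"u_c = {u_c:.3f}, v_eff = {v_eff:.1f}, U_95 = {U95:.2f}")
```

A production implementation would carry the full Table G.2 (or a t-quantile routine) rather than this excerpt, and would interpolate rather than truncate where higher accuracy is wanted.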
G.6.5 In certain situations, which should not occur too frequently in practice, the conditions required by the Central Limit Theorem may not be well met and the approach of G.6.4 may lead to an unacceptable result. For example, if u_{c}(y) is dominated by a component of uncertainty evaluated from a rectangular distribution whose bounds are assumed to be exactly known, it is possible [if t_{p}(v_{eff}) > √3^{‾‾}] that y + U_{p} and y − U_{p}, the upper and lower limits of the interval defined by U_{p}, could lie outside the bounds of the probability distribution of the output quantity Y. Such cases must be dealt with on an individual basis but are often amenable to an approximate analytic treatment (involving, for example, the convolution of a normal distribution with a rectangular distribution [10]).
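The overshoot described here is easy to exhibit numerically: if a single exactly known rectangular component of half-width a dominates, then u_{c}(y) ≈ a⁄√3 and any coverage factor greater than √3 pushes the interval limits beyond the bounds of the distribution of Y. A sketch with an arbitrary half-width:

```python
import math

a = 1.0                  # half-width of the dominant rectangular component
u_c = a / math.sqrt(3)   # its standard uncertainty, u = a/sqrt(3)

k_95 = 1.96              # normal-distribution coverage factor for p = 95 percent
U_95 = k_95 * u_c
print(f"U_95 = {U_95:.3f}, bound a = {a}")  # U_95 exceeds a: the interval overshoots
```

Since Y cannot lie outside y ± a at all, quoting y ± U_{95} here overstates the interval, which is why such cases call for the individual treatment described above.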
G.6.6 For many practical measurements in a broad range of fields, the following conditions prevail:

— the estimate y of the measurand Y is obtained from estimates x_{i} of a significant number of input quantities X_{i} that are describable by well‑behaved probability distributions, such as the normal and rectangular distributions;

— the standard uncertainties u(x_{i}) of these estimates, which may be obtained from either Type A or Type B evaluations, contribute comparable amounts to the combined standard uncertainty u_{c}(y) of the measurement result y;

— the linear approximation implied by the law of propagation of uncertainty is adequate (see 5.1.2 and E.3.1);

— the uncertainty of u_{c}(y) is reasonably small because its effective degrees of freedom v_{eff} has a significant magnitude, say greater than 10.
Under these circumstances, the probability distribution characterized by the measurement result and its combined standard uncertainty can be assumed to be normal because of the Central Limit Theorem; and u_{c}(y) can be taken as a reasonably reliable estimate of the standard deviation of that normal distribution because of the significant size of v_{eff}. Then, based on the discussion given in this annex, including that emphasizing the approximate nature of the uncertainty evaluation process and the impracticality of trying to distinguish between intervals having levels of confidence that differ by one or two percent, one may do the following:

— adopt k = 2 and assume that U = 2u_{c}(y) defines an interval having a level of confidence of approximately 95 percent; or

— adopt, for more critical applications, k = 3 and assume that U = 3u_{c}(y) defines an interval having a level of confidence of approximately 99 percent.
Although this approach should be suitable for many practical measurements, its applicability to any particular measurement will depend on how close k = 2 must be to t_{95}(v_{eff}) or k = 3 must be to t_{99}(v_{eff}); that is, on how close the level of confidence of the interval defined by U = 2u_{c}(y) or U = 3u_{c}(y) must be to 95 percent or 99 percent, respectively. Although for v_{eff} = 11, k = 2 and k = 3 underestimate t_{95}(11) and t_{99}(11) by only about 10 percent and 4 percent, respectively (see Table G.2), this may not be acceptable in some cases. Further, for all values of v_{eff} somewhat larger than 13, k = 3 produces an interval having a level of confidence larger than 99 percent. (See Table G.2, which also shows that for v_{eff} → ∞ the levels of confidence of the intervals produced by k = 2 and k = 3 are 95,45 percent and 99,73 percent, respectively). Thus, in practice, the size of v_{eff} and what is required of the expanded uncertainty will determine whether this approach can be used.
Table G.2 — Value of t_{p}(v) from the t‑distribution for degrees of freedom v that defines an interval −t_{p}(v) to +t_{p}(v) that encompasses the fraction p (in percent) of the distribution

a) For a quantity z described by a normal distribution with expectation μ_{z} and standard deviation σ, the interval μ_{z} ± kσ encompasses p = 68,27 percent, 95,45 percent and 99,73 percent of the distribution for k = 1, 2 and 3, respectively.

| v | 68,27^{a)} | 90 | 95 | 95,45^{a)} | 99 | 99,73^{a)} |
|---|---|---|---|---|---|---|
1 | 1,84 | 6,31 | 12,71 | 13,97 | 63,66 | 235,80 |
2 | 1,32 | 2,92 | 4,30 | 4,53 | 9,92 | 19,21 |
3 | 1,20 | 2,35 | 3,18 | 3,31 | 5,84 | 9,22 |
4 | 1,14 | 2,13 | 2,78 | 2,87 | 4,60 | 6,62 |
5 | 1,11 | 2,02 | 2,57 | 2,65 | 4,03 | 5,51 |
6 | 1,09 | 1,94 | 2,45 | 2,52 | 3,71 | 4,90 |
7 | 1,08 | 1,89 | 2,36 | 2,43 | 3,50 | 4,53 |
8 | 1,07 | 1,86 | 2,31 | 2,37 | 3,36 | 4,28 |
9 | 1,06 | 1,83 | 2,26 | 2,32 | 3,25 | 4,09 |
10 | 1,05 | 1,81 | 2,23 | 2,28 | 3,17 | 3,96 |
11 | 1,05 | 1,80 | 2,20 | 2,25 | 3,11 | 3,85 |
12 | 1,04 | 1,78 | 2,18 | 2,23 | 3,05 | 3,76 |
13 | 1,04 | 1,77 | 2,16 | 2,21 | 3,01 | 3,69 |
14 | 1,04 | 1,76 | 2,14 | 2,20 | 2,98 | 3,64 |
15 | 1,03 | 1,75 | 2,13 | 2,18 | 2,95 | 3,59 |
16 | 1,03 | 1,75 | 2,12 | 2,17 | 2,92 | 3,54 |
17 | 1,03 | 1,74 | 2,11 | 2,16 | 2,90 | 3,51 |
18 | 1,03 | 1,73 | 2,10 | 2,15 | 2,88 | 3,48 |
19 | 1,03 | 1,73 | 2,09 | 2,14 | 2,86 | 3,45 |
20 | 1,03 | 1,72 | 2,09 | 2,13 | 2,85 | 3,42 |
25 | 1,02 | 1,71 | 2,06 | 2,11 | 2,79 | 3,33 |
30 | 1,02 | 1,70 | 2,04 | 2,09 | 2,75 | 3,27 |
35 | 1,01 | 1,70 | 2,03 | 2,07 | 2,72 | 3,23 |
40 | 1,01 | 1,68 | 2,02 | 2,06 | 2,70 | 3,20 |
45 | 1,01 | 1,68 | 2,01 | 2,06 | 2,69 | 3,18 |
50 | 1,01 | 1,68 | 2,01 | 2,05 | 2,68 | 3,16 |
100 | 1,005 | 1,660 | 1,984 | 2,025 | 2,626 | 3,077 |
∞ | 1,000 | 1,645 | 1,960 | 2,000 | 2,576 | 3,000 |