
Subsection 6.6.2 Summary

In our discussions, the set of floating point numbers, $F$, is the set of all numbers $\chi = \mu \times \beta^e$ such that

  • $\beta = 2$,

  • $\mu = \pm . \delta_0 \delta_1 \cdots \delta_{t-1}$ ($\mu$ has only $t$ (binary) digits), where $\delta_j \in \{ 0, 1 \}$,

  • $\delta_0 = 0$ iff $\mu = 0$ (the mantissa is normalized), and

  • $L \leq e \leq U$.
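
As a small illustration (a sketch in Python, using only the standard library), `math.frexp` exposes exactly this $\mu \times 2^e$ decomposition for a double precision number, with the mantissa normalized so that $0.5 \leq \vert \mu \vert < 1$ whenever $\mu \neq 0$:

```python
# Sketch: expose the mu * 2^e form of a concrete IEEE double.
# math.frexp returns (mu, e) with x = mu * 2**e and 0.5 <= |mu| < 1 for x != 0,
# i.e. the leading binary digit of mu is 1 (the mantissa is normalized).
import math

x = 0.1
mu, e = math.frexp(x)
print(mu, e)                 # 0.8 -3 (for binary64)
print(mu * 2.0 ** e == x)    # True: x is reconstructed exactly
```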

Definition 6.6.2.1. Machine epsilon (unit roundoff).

The machine epsilon (unit roundoff), $\meps$, is defined as the smallest positive floating point number $\chi$ such that the floating point number that represents $1 + \chi$ is greater than one.
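
A minimal sketch (assuming IEEE double precision with round-to-nearest, as used by Python floats) that searches for this number over powers of two and recovers the conventional value $2^{-52} \approx 2.2 \times 10^{-16}$:

```python
# Sketch: locate the machine epsilon of double precision by repeated halving,
# restricting the search to powers of two.
chi = 1.0
while 1.0 + chi / 2.0 > 1.0:    # stop once fl(1 + chi/2) is no longer > 1
    chi = chi / 2.0

print(chi)                       # 2.220446049250313e-16 = 2^(-52) for binary64

import sys
print(sys.float_info.epsilon)    # the same value, as reported by the runtime
```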

\begin{equation*} {\rm fl}( {\rm expression} ) = [ {\rm expression} ] \end{equation*}

equals the result when computing $ {\rm expression} $ using floating point computation (rounding or truncating as every intermediate result is stored). If

\begin{equation*} \kappa = {\rm expression} \end{equation*}

in exact arithmetic, then we denote the associated floating point result with

\begin{equation*} \check \kappa = [ {\rm expression} ]. \end{equation*}

The Standard Computational Model (SCM) assumes that, for any two floating point numbers $\chi$ and $\psi$, the basic arithmetic operations satisfy the equality

\begin{equation*} {\rm fl}( \chi \mathbin{\rm op} \psi ) = ( \chi \mathbin{\rm op} \psi ) ( 1 + \epsilon ), \quad \vert \epsilon \vert \leq \meps, \mbox{ and } {\rm op} \in \{ +, -, \times, / \}. \end{equation*}

The Alternative Computational Model (ACM) assumes for the basic arithmetic operations that

\begin{equation*} {\rm fl}( \chi \mathbin{\rm op} \psi ) = \frac{\chi \mathbin{\rm op} \psi}{1 + \epsilon}, \quad \vert \epsilon \vert \leq \meps, \mbox{ and } {\rm op} \in \{ +, -, \times, / \}. \end{equation*}
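
The following sketch (Python, standard library only; the operand range and sample count are arbitrary choices) empirically checks the SCM bound by comparing each floating point result against an exact rational reference:

```python
# Sketch: check |fl(x op y) - (x op y)| <= eps_mach * |x op y| for random
# operands, using exact rational arithmetic as the reference.
import random
import sys
from fractions import Fraction

eps_mach = sys.float_info.epsilon
ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b, '/': lambda a, b: a / b}

for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    for name, op in ops.items():
        computed = op(x, y)                      # floating point result fl(x op y)
        exact = op(Fraction(x), Fraction(y))     # exact result x op y
        if exact != 0:
            rel_err = abs((Fraction(computed) - exact) / exact)
            assert rel_err <= eps_mach, (name, x, y)
```
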
Definition 6.6.2.2. Backward stable implementation.

Given the mapping $f: D \rightarrow R$, where $D \subset \mathbb{R}^n$ is the domain and $R \subset \mathbb{R}^m$ is the range (codomain), let $\check f: D \rightarrow R$ be a computer implementation of this function. We will call $\check f$ a backward stable (also called "numerically stable") implementation of $f$ on domain $D$ if for all $x \in D$ there exists a $\check x$ "close" to $x$ such that $\check f( x ) = f( \check x )$.

  • Conditioning is a property of the problem you are trying to solve. A problem is well-conditioned if a small change in the input is guaranteed to only result in a small change in the output. A problem is ill-conditioned if a small change in the input can result in a large change in the output.

  • Stability is a property of an implementation. If the implementation, when executed with an input, always yields an output that can be attributed to a slightly changed input, then the implementation is backward stable (see the worked instance of floating point addition after this list).
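
As a worked instance of backward stability that follows directly from the SCM above, consider a single floating point addition:

\begin{equation*} {\rm fl}( \chi + \psi ) = ( \chi + \psi ) ( 1 + \epsilon ) = \underbrace{\chi ( 1 + \epsilon )}_{\check \chi} + \underbrace{\psi ( 1 + \epsilon )}_{\check \psi}, \quad \vert \epsilon \vert \leq \meps, \end{equation*}

so the computed sum is the exact sum of perturbed inputs $\check \chi$ and $\check \psi$, each within relative distance $\meps$ of the original.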

Definition 6.6.2.3. Absolute value of vector and matrix.

Given $x \in \mathbb{R}^n$ and $A \in \mathbb{R}^{m \times n}$,

\begin{equation*} \vert x \vert = \left( \begin{array}{c} \vert \chi_0 \vert \\ \vert \chi_1 \vert \\ \vdots \\ \vert \chi_{n-1} \vert \end{array} \right) \quad \mbox{and} \quad \vert A \vert = \left( \begin{array}{c c c c} \vert \alpha_{0,0} \vert \amp \vert \alpha_{0,1} \vert \amp \cdots \amp \vert \alpha_{0,n-1} \vert \\ \vert \alpha_{1,0} \vert \amp \vert \alpha_{1,1} \vert \amp \cdots \amp \vert \alpha_{1,n-1} \vert \\ \vdots \amp \vdots \amp \amp \vdots \\ \vert \alpha_{m-1,0} \vert \amp \vert \alpha_{m-1,1} \vert \amp \cdots \amp \vert \alpha_{m-1,n-1} \vert \end{array} \right). \end{equation*}
Definition 6.6.2.4.

Let $\triangle \in \{ <, \leq, =, \geq, > \}$ and $x, y \in \mathbb{R}^n$. Then

\begin{equation*} \vert x \vert \, \triangle \, \vert y \vert \quad \mbox{iff} \quad \vert \chi_i \vert \, \triangle \, \vert \psi_i \vert, \end{equation*}

with $i = 0, \ldots, n-1$. Similarly, given $A$ and $B \in \mathbb{R}^{m \times n}$,

\begin{equation*} \vert A \vert \, \triangle \, \vert B \vert \quad \mbox{iff} \quad \vert \alpha_{ij} \vert \, \triangle \, \vert \beta_{ij} \vert, \end{equation*}

with $i = 0, \ldots, m-1$ and $j = 0, \ldots, n-1$.
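
A short sketch (assuming NumPy; the matrices are arbitrary examples) of these elementwise operations:

```python
# Sketch: the elementwise absolute value and the elementwise comparison
# |A| <= |B| from Definitions 6.6.2.3 and 6.6.2.4, expressed with NumPy.
import numpy as np

A = np.array([[ 1.0, -2.0],
              [-3.0,  4.0]])
B = np.array([[ 2.0,  2.0],
              [ 3.0, -5.0]])

absA, absB = np.abs(A), np.abs(B)    # |A| and |B|, entry by entry
print(np.all(absA <= absB))          # True: |alpha_ij| <= |beta_ij| for all i, j
```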

Consider

\begin{equation*} \kappa \becomes x^T y = \left( \begin{array}{c} \chi_0 \\ \chi_1 \\ \vdots \\ \chi_{n-2} \\ \chi_{n-1} \end{array} \right)^T \left( \begin{array}{c} \psi_0 \\ \psi_1 \\ \vdots \\ \psi_{n-2} \\ \psi_{n-1} \end{array} \right) = \Big( \big( ( \chi_0 \psi_0 + \chi_1 \psi_1 ) + \cdots \big) + \chi_{n-2} \psi_{n-2} \Big) + \chi_{n-1} \psi_{n-1}. \end{equation*}

Under the computational model given in Subsection 6.2.3, the computed result $\check \kappa$ satisfies

\begin{equation*} \check \kappa = \sum_{i=0}^{n-1} \left( \chi_i \psi_i ( 1 + \epsilon_{*}^{(i)} ) \prod_{j=i}^{n-1} ( 1 + \epsilon_{+}^{(j)} ) \right) , \end{equation*}

where $\epsilon_{+}^{(0)} = 0$ and $\vert \epsilon_{*}^{(0)} \vert, \vert \epsilon_{*}^{(j)} \vert, \vert \epsilon_{+}^{(j)} \vert \leq \meps$ for $j = 1, \ldots, n-1$.

Definition 6.6.2.7.

For all $n \geq 1$ and $n \meps < 1$, define

\begin{equation*} \displaystyle \gamma_n = {n \meps}/{(1 - n \meps)}\text{.} \end{equation*}

With this notation, the expression above for $\check \kappa$ simplifies to

\begin{equation*} \begin{array}{l} \check \kappa \\ ~~~=~~~~\\ \chi_0 \psi_0 ( 1 + \theta_n ) + \chi_1 \psi_1 ( 1 + \theta_n ) + \cdots + \chi_{n-1} \psi_{n-1} ( 1 + \theta_2 ) \\ % \label{eqn:dot3b} \\ ~~~=~~~~\\ \left( \begin{array}{c} \chi_0 \\ \chi_1 \\ \chi_2 \\ \vdots \\ \chi_{n-1} \end{array} \right)^T \left( \begin{array}{c c c c c} ( 1 + \theta_n ) \amp 0 \amp 0\amp \cdots \amp 0 \\ 0 \amp ( 1 + \theta_n ) \amp0 \amp \cdots \amp 0 \\ 0 \amp 0 \amp ( 1 + \theta_{n-1} ) \amp \cdots \amp 0 \\ \vdots \amp\vdots \amp\vdots \amp\ddots \amp \vdots \\ 0 \amp 0 \amp 0 \amp \cdots \amp ( 1 + \theta_{2} ) \end{array} \right) \left( \begin{array}{c} \psi_0 \\ \psi_1 \\ \psi_2 \\ \vdots \\ \psi_{n-1} \end{array} \right) \\ %\nonumber ~~~=~~~~\\ \left( \begin{array}{c} \chi_0 \\ \chi_1 \\ \chi_2 \\ \vdots \\ \chi_{n-1} \end{array} \right)^T \left( I + \left( \begin{array}{c c c c c} \theta_n \amp 0 \amp 0\amp \cdots \amp 0 \\ 0 \amp \theta_n \amp0 \amp \cdots \amp 0 \\ 0 \amp 0 \amp \theta_{n-1} \amp \cdots \amp 0 \\ \vdots \amp\vdots \amp\vdots \amp\ddots \amp \vdots \\ 0 \amp 0 \amp 0 \amp \cdots \amp \theta_{2} \end{array} \right) \right) \left( \begin{array}{c} \psi_0 \\ \psi_1 \\ \psi_2 \\ \vdots \\ \psi_{n-1} \end{array} \right), \end{array} \end{equation*}

where $\vert \theta_j \vert \leq \gamma_j$, $j = 2, \ldots, n$.
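
Since $\gamma_j \leq \gamma_n$ for $j \leq n$, this yields the bound $\vert \check \kappa - x^T y \vert \leq \gamma_n \vert x \vert^T \vert y \vert$. The following sketch (Python, standard library only; the vector length and entries are arbitrary) checks that bound for the left-to-right accumulation order shown above, using exact rational arithmetic as the reference:

```python
# Sketch: check |check_kappa - x^T y| <= gamma_n * |x|^T |y| for the
# left-to-right accumulation ((x0 y0 + x1 y1) + ...) + x_{n-1} y_{n-1}.
import random
import sys
from fractions import Fraction

eps = sys.float_info.epsilon
n = 200
x = [random.uniform(-1, 1) for _ in range(n)]
y = [random.uniform(-1, 1) for _ in range(n)]

check_kappa = 0.0
for xi, yi in zip(x, y):        # floating point, left-to-right order
    check_kappa += xi * yi

kappa = sum(Fraction(xi) * Fraction(yi) for xi, yi in zip(x, y))   # exact x^T y
gamma_n = n * eps / (1 - n * eps)
bound = gamma_n * sum(abs(Fraction(xi)) * abs(Fraction(yi)) for xi, yi in zip(x, y))

print(abs(Fraction(check_kappa) - kappa) <= bound)   # expected: True
```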

An important example that demonstrates how LU with partial pivoting can incur "element growth":

\begin{equation*} A = \left( \begin{array}{r r r r r r} 1 \amp 0 \amp 0 \amp \cdots \amp 0 \amp 1 \\ -1 \amp 1 \amp 0 \amp \cdots \amp 0 \amp 1 \\ -1 \amp -1 \amp 1 \amp \cdots \amp 0 \amp 1 \\ \vdots \amp \vdots \amp \vdots \amp \ddots \amp \vdots \amp \vdots \\ -1 \amp -1 \amp \amp \cdots \amp 1 \amp 1 \\ -1 \amp -1 \amp \amp \cdots \amp -1 \amp 1 \\ \end{array} \right). \end{equation*}
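
A minimal sketch (assuming NumPy; `growth_matrix` and `lu_partial_pivoting` are ad hoc helpers written for this illustration, not library routines) that builds this matrix and applies Gaussian elimination with partial pivoting; the entries in the last column double at every elimination step, so the largest entry of the computed upper triangular factor reaches $2^{n-1}$:

```python
# Sketch: build the matrix above and run Gaussian elimination with partial
# pivoting; the last column doubles at every step, so max|U| = 2^(n-1).
import numpy as np

def growth_matrix(n):
    A = -np.tril(np.ones((n, n)), -1) + np.eye(n)   # 1 on diagonal, -1 below
    A[:, -1] = 1.0                                  # last column of ones
    return A

def lu_partial_pivoting(A):
    U = A.copy()
    n = U.shape[0]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))         # pivot row
        U[[k, p]] = U[[p, k]]                       # swap rows (a no-op here)
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return U

n = 10
U = lu_partial_pivoting(growth_matrix(n))
print(np.max(np.abs(U)), 2.0 ** (n - 1))            # both equal 512.0
```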