Monday 27 August 2012

Newton Difference Methods





Chapter Three

“Interpolation for Equal and Unequal Intervals”

• Objective of the Chapter

• Lagrange Interpolation

• Divided Differences

• Forward Differences

• Backward Differences

• Central Differences

• Least Squares Methods


• Curve Fitting

• Spline Interpolation

• References

• Examination









Objective of Chapter Three

Interpolation Polynomials and Finite Differences

Polynomials are used as the basic means of approximation in nearly all areas of numerical analysis. They are used in the solution of equations and in the approximation of functions, of integrals and derivatives, of solutions of integral and differential equations, etc. Polynomials owe this popularity to their simple structure, which makes it easy to construct effective approximations and then make use of them. We discuss this topic in the present chapter in the context of polynomial interpolation, the simplest and certainly the most widely used technique for obtaining polynomial approximations.

Historically speaking, numerical analysts have always been concerned with tables of numbers, and many techniques have been developed for dealing with mathematical functions represented in this way. For example, the value of the function at an untabulated point may be required, so that interpolation is necessary. It is also possible to estimate the derivative or the definite integral of a tabulated function, using some finite process to approximate the corresponding (infinitesimal) limiting procedures of calculus. In each case, it has been traditional to use finite differences.


Another application of finite differences, which is outside the scope of this book, is the numerical
solution of partial differential equations.


Interpolation and Approximation

Let f(x) be a continuous function defined on some interval [a, b] and prescribed at n + 1 distinct tabular points x0, x1, ..., xn such that a = x0 < x1 < x2 < ... < xn = b. The distinct tabular points x0, x1, ..., xn may be non-equispaced or equispaced; in the latter case xk+1 - xk = h, k = 0, 1, 2, ..., n - 1.

The problem of polynomial approximation is to find a polynomial Pn(x), of degree ≤ n, which fits the given data exactly, that is,

Pn(xi) = f(xi), i = 0, 1, 2, ..., n. ………………(1)

The polynomial Pn(x) is called the interpolating polynomial, and the conditions given in equation (1) are called the interpolating conditions.

Remark:
1. Through two distinct points, we can construct a unique polynomial of degree 1 (a straight line).
2. Through three distinct points, we can construct a unique polynomial of degree ≤ 2 (a parabola).
3. In general, through n + 1 distinct points, we can construct a unique polynomial of degree ≤ n. The interpolating polynomial fitting a given set of data is unique.










Interpolation: connecting discrete data points in a plausible way, so that one can get reasonable estimates of the function between the given data points; the interpolating curve passes through all the data points.

Extrapolation: estimating the function at a point lying outside the range of the tabulated points.

Error of interpolation

We assume that f(x) has continuous derivatives of order up to n + 1 for all x ∈ (a, b). Since f(x) is approximated by Pn(x), the results contain errors. We define the error of interpolation or truncation error as

E(f, x) = f(x) - Pn(x). …………….. (2)

It can be shown that

E(f, x) = (x - x0)(x - x1)...(x - xn) f^(n+1)(ξ)/(n + 1)!,

where min(x0, x1, ..., xn, x) < ξ < max(x0, x1, ..., xn, x). Since ξ is unknown, it is difficult to find the exact value of the error. However, we can find a bound of the error:

|E(f, x)| ≤ |(x - x0)(x - x1)...(x - xn)| M/(n + 1)!,  where M = max of |f^(n+1)(x)| on [a, b].

Since the interpolating polynomial is unique, the error of interpolation is also unique; that is, the error is the same whichever form of the polynomial is used.


Tables of values

Many books contain tables of mathematical functions. One of the most comprehensive is
the Handbook of Mathematical Functions, edited by Abramowitz and Stegun (see the
Bibliography for publication details), which also contains useful information about
numerical methods.

Although most tables use constant argument intervals, some functions do change rapidly in
value in particular regions of their argument, and hence may best be tabulated using
intervals varying according to the local behaviour of the function. Tables with varying
argument intervals are more difficult to work with, however, and it is common to adopt
uniform argument intervals wherever possible. As a simple example, consider the 6S table of the exponential function over 0.10(0.01)0.18 (a notation which specifies the domain 0.10 ≤ x ≤ 0.18 at the uniform interval 0.01).



It is extremely important that the interval between successive values is small enough to
display the variation of the tabulated function, because usually the value of the function will
be needed at some argument value between values specified (for example, from
the above table). If the table is constructed in this manner, we can obtain such intermediate
values to a reasonable accuracy by using a polynomial representation (hopefully, of low
degree) of the function f.

1. Finite differences

Since Newton, finite differences have been used extensively. The construction of a table of
finite differences for a tabulated function is simple: One obtains first differences by
subtracting each value from the succeeding value in the table, second differences by
repeating this operation on the first differences, and so on for higher order differences.
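The construction just described is easy to automate. The sketch below (plain Python; the function name is illustrative) builds every difference column for f(x) = x³, the function of Exercise 1 below:

```python
def difference_table(values):
    """Return all difference columns: column k holds the k-th order
    forward differences of the tabulated values."""
    table = [list(values)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# f(x) = x^3 tabulated for x = 0(1)6
tab = difference_table([x**3 for x in range(7)])
print(tab[1])  # first differences: [1, 7, 19, 37, 61, 91]
print(tab[3])  # third differences: [6, 6, 6, 6]
```

Note how the third differences come out constant and the fourth differences vanish, anticipating the theorem on differencing polynomials later in this chapter.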


From the above table of the exponential function, one has the following difference table (note the standard layout, with decimal points and leading zeros omitted from the differences):



(In this case, the differences must be multiplied by 10⁻⁵ for comparison with the function values.)

Checkpoint

1. What factors determine the intervals of tabulation of a function?
2. What is the name of the procedure to determine a value of a tabulated function at an
intermediate point?
3. What may be the cause of irregularity in the highest order differences in a difference
table?




EXERCISES

1. Construct the difference table for the function f(x) = x³ for x = 0(1)6.

2. Construct difference tables for each of the polynomials:

a. 2x - 1 for x = 0(1)3.
b. 3x² + 2x - 4 for x = 0(1)4.
c. 2x³ + 3x - 3 for x = 0(1)5.




Study your resulting tables carefully; note what happens in the final few columns of each
table. Suggest a general result for polynomials of degree n and compare your answer with
the theorem.

3. Construct a difference table for the function f(x) = eˣ, given to 7D, for x = 0.1(0.05)0.5.



FINITE DIFFERENCES

Forward, backward, central difference notations

There are several different notations for the single set of finite differences, described in the
preceding Step. We introduce each of these three notations in terms of the so-called shift operator,
which we will define first.

1. The shift operator E

Let {fj} = {f(xj)}, j = 0, 1, ..., n, be a set of values of the function f(x), tabulated at the equidistant points xj = x0 + jh. The shift operator E is defined by

E fj = fj+1.

Consequently,

E²fj = E(E fj) = E fj+1 = fj+2,

and so on, i.e.,

E^k fj = fj+k,

where k is any positive integer. Moreover, the last formula can be extended to negative integers, and indeed to all real values of j and k, so that, for example,

E^(-1) fj = fj-1,

and

E^(1/2) fj = fj+1/2.

2. The forward difference operator Δ

If we define the forward difference operator Δ by

Δ ≡ E - 1,

then

Δfj = fj+1 - fj,

which is the first-order forward difference at xj. Similarly, we find that

Δ²fj = Δ(Δfj) = fj+2 - 2fj+1 + fj

is the second-order forward difference at xj, and so on. The forward difference of order k is

Δ^k fj = Δ(Δ^(k-1) fj) = (E - 1)^k fj,

where k is any positive integer.

3. The backward difference operator ∇

If we define the backward difference operator ∇ by

∇ ≡ 1 - E^(-1),

then

∇fj = fj - fj-1,

which is the first-order backward difference at xj. Similarly,

∇²fj = fj - 2fj-1 + fj-2

is the second-order backward difference at xj, etc. The backward difference of order k is

∇^k fj = ∇(∇^(k-1) fj),

where k is any positive integer. Note that ∇fj+1 = Δfj.

4. The central difference operator δ

If we define the central difference operator δ by

δ ≡ E^(1/2) - E^(-1/2),

then

δfj = fj+1/2 - fj-1/2,

which is the first-order central difference at xj. Similarly,

δ²fj = fj+1 - 2fj + fj-1

is the second-order central difference at xj, etc. The central difference of order k is

δ^k fj = δ(δ^(k-1) fj),

where k is any positive integer. Note that δfj+1/2 = Δfj = ∇fj+1.

5. Differences display

The roles of the forward, central, and backward differences are displayed by the difference table.

Forward Difference Table

The entries Δy0, Δ²y0, Δ³y0, ... along the top diagonal are called the leading differences.








Although forward, central, and backward differences represent precisely the same data:

1. Forward differences are useful near the start of a table, since they only involve
tabulated function values below xj ;
2. Central differences are useful away from the ends of a table, where there are
available tabulated function values above and below xj;
3. Backward differences are useful near the end of a table, since they only involve
tabulated function values above xj.
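In code, the three notations are simply different index conventions on the same subtraction; a minimal sketch (function names are illustrative):

```python
f = [0, 1, 8, 27, 64]   # f(x) = x^3 tabulated at x = 0, 1, 2, 3, 4

def forward(f, j):      # forward difference at j: f[j+1] - f[j]
    return f[j + 1] - f[j]

def backward(f, j):     # backward difference at j: f[j] - f[j-1]
    return f[j] - f[j - 1]

# the forward difference at j and the backward difference at j+1
# are the same tabulated number
print(forward(f, 1), backward(f, 2))  # 7 7
```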




Central difference table







Problem 1. Show that δ²y5 = y6 - 2y5 + y4.

Solution. By definition,

δy_{11/2} = y6 - y5 and δy_{9/2} = y5 - y4.

Now

δ²y5 = δy_{11/2} - δy_{9/2} = (y6 - y5) - (y5 - y4) = y6 - 2y5 + y4.

Problem 2. Show that δ²y0 = y1 - 2y0 + y_{-1}.

Solution. By definition,

δ²y0 = δy_{1/2} - δy_{-1/2}, with δy_{1/2} = y1 - y0 and δy_{-1/2} = y0 - y_{-1}.

Hence

δ²y0 = (y1 - y0) - (y0 - y_{-1}) = y1 - 2y0 + y_{-1}.

Numerical Analysis, 3rd Mathematics Department
Checkpoint

1. What is the definition of the shift operator?
2. How are the forward, backward, and central difference operators defined?
3. When are the forward, backward, and central difference notations likely to be of
special use?




EXERCISES

1. Construct a table of differences for the polynomial




;

for x = 0(1)4. Use the table to obtain the values of :

1. ;
2. ;
3. .


2. For the difference table of f(x) = eˣ for x = 0.1(0.05)0.5 constructed earlier, determine to six significant digits the quantities (taking x0 = 0.1):
1. ;
2. ;
3. ;
4. ;







5. ;


3. Prove the statements:
1. ;
2. ;
3. ;
4. .






FINITE DIFFERENCES

Polynomials

Since polynomial approximations are used in many areas of Numerical Analysis, it is important to investigate what happens when polynomials are differenced.

1. Finite differences of a polynomial

Consider the finite differences of an n-th degree polynomial

p(x) = a_n x^n + a_{n-1} x^(n-1) + ... + a_1 x + a_0,

tabulated for equidistant points at the tabular interval h.

Theorem: The n-th difference of a polynomial of degree n is the constant a_n n! h^n, and all higher order differences are zero.

Proof: For any positive integer k, the binomial expansion

(x + h)^k = x^k + k h x^(k-1) + [k(k - 1)/2!] h² x^(k-2) + ... + h^k

yields

Δx^k = (x + h)^k - x^k = k h x^(k-1) + (terms of degree lower than k - 1).

Omitting the subscript of x, we find

Δp(x) = p(x + h) - p(x) = a_n n h x^(n-1) + (terms of degree lower than n - 1),

so each application of Δ lowers the degree by one, multiplying the leading coefficient by the degree and by h. After n applications,

Δ^n p(x) = a_n n! h^n,

a constant, whence Δ^(n+1) p(x) = 0.

In passing, the student may recall that in the differential calculus the increment Δp(x) is related to the derivative of p(x) at the point x.

2. Example

Construct for f(x) = x³, with x = 5.0(0.1)5.5, the difference table.

Since in this case n = 3, a_n = 1, h = 0.1, we find

Δ³f = a_n n! h^n = 3! (0.1)³ = 0.006.

Note that round-off error noise may occur; for example, consider the tabulation of f(x) = x³ for x = 5.0(0.1)5.5, rounded to two decimal places:
















3. Approximation of a function by a polynomial

Whenever the higher differences of a table become small (allowing for round-off noise), the function represented
may be approximated well by a polynomial. For example, reconsider the difference table of 6D for f (x ) = ex with
x = 0.1(0.05)0.5:




Since the third differences are constant to within the estimated round-off error (cf. the table in the section above), we deduce that a cubic approximation is appropriate for eˣ over the range 0.1 ≤ x ≤ 0.5. An example in which polynomial approximation is inappropriate occurs for f(x) = 10ˣ tabulated at x = 0(1)4, as is shown by the next table:



Although the function f(x) = 10ˣ is 'smooth', the large tabular interval (h = 1) produces large higher order finite differences. It should also be understood that there exist functions that cannot usefully be tabulated at all, at least in certain neighborhoods; for example, f(x) = sin(1/x) near the origin x = 0. Nevertheless, these are fairly exceptional cases.


Finally, we remark that the approximation of a function by a polynomial is fundamental to the widespread use of
finite difference methods.

Checkpoint

1. What may be said about the higher order (exact) differences of a polynomial?
2. What is the effect of round-off error on the higher order differences of a polynomial?
3. When may a function be approximated by a polynomial?


EXERCISES

1. Construct a difference table for the polynomial f(x) = x⁴ for x = 0(0.1)1 when
a. the values of f are exact;
b. the values of f have been rounded to 3D.
Compare the fourth difference round-off errors with the estimate ±6.


2. Find the degree of the polynomial which fits the data in the table:








INTERPOLATION

Linear and quadratic interpolation


Interpolation is the art of reading between the lines in a table. It may be regarded as a special
case of the general process of curve fitting. More precisely, interpolation is the process whereby
untabulated values of a function, given only at certain values, are estimated on the assumption that
the function has sufficiently smooth behaviour between tabular points, so that it can be
approximated by a polynomial of fairly low degree.

Interpolation is not as important in Numerical Analysis as it once was, now that computers (and calculators with built-in functions) are available and function values may often be obtained readily by an algorithm (probably from a standard subroutine). However,

1. interpolation is still important for functions that are available only in tabular form
(perhaps from the results of an experiment); and
2. interpolation serves to introduce the wider application of finite differences.


We have observed that, if the differences of order k are constant (within round-off fluctuation), the
tabulated function may be approximated by a polynomial of degree k. Linear and quadratic
interpolation correspond to the cases k = 1 and k = 2, respectively.

1. Linear interpolation

When a tabulated function varies so slowly that first differences are approximately constant,
it may be approximated closely by a straight line between adjacent tabular points. This is
the basic idea of linear interpolation. In Fig. 10, the two function points (xj, fj) and (xj+1, fj+1) are connected by a straight line. Any x between xj and xj+1 may be defined by a value of θ such that

x = xj + θh,  0 ≤ θ ≤ 1.

If f(x) varies only slowly in the interval, a value of the function at x is approximately given by the ordinate to the straight line at x. Elementary geometrical considerations yield

(f(x) - fj)/(x - xj) ≈ (fj+1 - fj)/(xj+1 - xj),

so that

f(x) ≈ fj + θ(fj+1 - fj) = fj + θ Δfj.



FIGURE 10. Linear interpolation.

In analytical terms, we have approximated f(x) by

P1(x) = fj + θ Δfj,

the linear function of x which satisfies

P1(xj) = fj,  P1(xj+1) = fj+1.

As an example, consider the following difference table, taken from a 4D table of e^(-x). The first differences are almost constant locally, so that the table is suitable for linear interpolation; for example, f(0.934) may be estimated from the two entries bracketing x = 0.934.
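Since the 4D table itself is not reproduced above, the sketch below assumes two plausible bracketing rows, e^(-0.93) = 0.3946 and e^(-0.94) = 0.3906, to show the mechanics:

```python
def linear_interp(x, x0, f0, x1, f1):
    """Linear interpolation: f(x) ~ f0 + theta * (f1 - f0),
    with theta = (x - x0)/(x1 - x0)."""
    theta = (x - x0) / (x1 - x0)
    return f0 + theta * (f1 - f0)

# two assumed 4D rows of e^(-x) bracketing x = 0.934
est = linear_interp(0.934, 0.93, 0.3946, 0.94, 0.3906)
print(f"{est:.4f}")  # 0.3930
```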

2. Quadratic interpolation

As previously indicated, linear interpolation is appropriate only for slowly varying
functions. The next simple process is quadratic interpolation, based on a quadratic
approximating polynomial; one might expect that such an approximation would give better
accuracy for functions with larger variations.

Given three adjacent points xj, xj+1 = xj + h, and xj+2 = xj + 2h, suppose that f(x) can be approximated by

P2(x) = a + b(x - xj) + c(x - xj)(x - xj+1),

where a, b, and c are chosen so that

P2(xj) = fj,  P2(xj+1) = fj+1,  P2(xj+2) = fj+2.

Thus,

a = fj,  a + bh = fj+1,  a + 2bh + 2ch² = fj+2,

whence

a = fj,  b = Δfj/h,  c = Δ²fj/(2h²).

Setting x = xj + θh, we obtain the quadratic interpolation formula:

f(x) ≈ fj + θ Δfj + [θ(θ - 1)/2] Δ²fj.

We note immediately that this formula introduces a second-difference term (involving Δ²fj), not included in the linear interpolation formula.

As an example, we determine the second-order correction to the value of f(0.934) obtained above by linear interpolation. The extra term

[θ(θ - 1)/2] Δ²fj

is simply added to the linear estimate, so that the quadratic interpolation formula yields the same 4D value as before. (In this case, the extra term -0.0024/200 is negligible!)
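The quadratic formula f ≈ fj + θ Δfj + θ(θ − 1)Δ²fj/2 translates directly into code; since it reproduces any quadratic exactly, f(x) = x² makes a convenient check:

```python
def quadratic_interp(theta, f0, f1, f2):
    """Newton forward quadratic: f0 + theta*d1 + theta*(theta-1)/2 * d2."""
    d1 = f1 - f0            # first forward difference
    d2 = f2 - 2 * f1 + f0   # second forward difference
    return f0 + theta * d1 + theta * (theta - 1) / 2 * d2

# f(x) = x^2 at x = 2, 3, 4; estimate f(2.5), i.e. theta = 0.5
print(quadratic_interp(0.5, 4, 9, 16))  # 6.25
```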

Checkpoint

1. What process obtains an untabulated value of a function?
2. When is linear interpolation adequate?
3. When is quadratic interpolation needed and adequate?




EXERCISES

1. Obtain an estimate of sin(0.55) by linear interpolation of f (x) = sin x over the
interval [0.5, 0.6] using the data:






Compare your estimate with the value of sin(0.55) given by your calculator.

2. The entries in a table of cos x are:





Obtain an estimate of cos(80° 35') by means of


1. linear interpolation,
2. quadratic interpolation.


3. The entries in a table of tan x are:






Is it more appropriate to use linear or quadratic interpolation? Obtain an estimate
of tan(80° 35').

INTERPOLATION

Newton interpolation formulae

The linear and quadratic interpolation formulae are based on first and second degree polynomial approximations. Newton derived general forward and backward difference interpolation formulae for tables with constant interval h. (For tables with variable interval, one can use the interpolation procedure involving divided differences, treated in a later section.)

1. Newton's forward difference formula

Consider the points xj, xj + h, xj + 2h, . . ., and recall that

E^θ fj = f(xj + θh),

where θ is any real number. Formally, one has (since E = 1 + Δ)

f(xj + θh) = E^θ fj = (1 + Δ)^θ fj = [1 + θΔ + θ(θ - 1)Δ²/2! + θ(θ - 1)(θ - 2)Δ³/3! + ...] fj,

which is Newton's forward difference formula. The linear and quadratic (forward) interpolation formulae correspond to first and second order truncation, respectively. If we truncate at n-th order, we obtain

f(xj + θh) ≈ [1 + θΔ + θ(θ - 1)Δ²/2! + ... + θ(θ - 1)...(θ - n + 1)Δⁿ/n!] fj,

which is an approximation based on the values fj, fj+1, . . . , fj+n. It will be exact if (within round-off errors)

Δ^(n+1) fj = 0,

which is the case if f is a polynomial of degree n.






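Truncation at n-th order can be sketched as follows; the coefficients θ(θ − 1)⋯(θ − k + 1)/k! are accumulated one factor at a time (a minimal illustration, not tied to any particular table in the text):

```python
def newton_forward(fvals, theta, order):
    """Newton's forward difference formula truncated at the given order,
    using the leading differences of the tabulated values."""
    diffs, col = [fvals[0]], list(fvals)
    for _ in range(order):
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])            # leading k-th difference
    est, coeff = 0.0, 1.0
    for k, d in enumerate(diffs):
        est += coeff * d                # coeff = theta(theta-1)...(theta-k+1)/k!
        coeff *= (theta - k) / (k + 1)
    return est

# f(x) = x^3 at x = 0, 1, 2, 3: exact at third order, so f(1.5) = 3.375
print(newton_forward([0, 1, 8, 27], 1.5, 3))
```

Because f is a cubic, truncation at third order reproduces 3.375 up to round-off, in agreement with the exactness condition above.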

2. Newton's backward difference formula

Formally, one has (since E^(-1) = 1 - ∇)

f(xj + θh) = E^θ fj = (1 - ∇)^(-θ) fj = [1 + θ∇ + θ(θ + 1)∇²/2! + θ(θ + 1)(θ + 2)∇³/3! + ...] fj,

which is Newton's backward difference formula. The linear and quadratic (backward) interpolation formulae correspond to truncation at first and second order, respectively. The approximation based on the values fj, fj-1, . . . , fj-n is

f(xj + θh) ≈ [1 + θ∇ + θ(θ + 1)∇²/2! + ... + θ(θ + 1)...(θ + n - 1)∇ⁿ/n!] fj.

3. Use of Newton's interpolation formulae

Newton's forward and backward difference formulae are well suited for use at the
beginning and end of a difference table, respectively. (Other formulae which use central
differences may be more convenient elsewhere.)

As an example, consider the difference table of f (x) = sin x for x = 0°( 10°)50°:


.

Since the fourth order differences are constant, we conclude that a quartic approximation
is appropriate. (The third-order differences are not quite constant within expected round-
offs, and we anticipate that a cubic approximation is not quite good enough.) In order to
determine sin 5° from the table, we use Newton's forward difference formula (to fourth
order); thus, taking xj = 0°, we find θ = 0.5 and



Note that we have kept a guard digit (in parentheses) to minimize accumulated round-off
error.

In order to determine sin 45° from the table, we use Newton's backward difference
formula (to fourth order); thus, taking xj = 40°, we find θ = 0.5 and



4. Uniqueness of the interpolating polynomial


Given a set of values f(x0), f(x1), . . , f(xn) with xj = x0 + jh, we have two interpolation
formulae of order n available:



Clearly, Pn and Qn are both polynomials of degree n. It can be verified (homework) that Pn(xj) = Qn(xj) = f(xj) for j = 0, 1, 2, . . . , n, which implies that Pn - Qn is a polynomial of degree n which vanishes at n + 1 points. In turn, this implies that Pn - Qn ≡ 0, i.e., Pn ≡ Qn. In fact, the polynomial of degree n through any given n + 1 (distinct but not necessarily equidistant) points is unique, and is called the interpolating polynomial.

5. Analogy with Taylor series

If we define, for an integer k,

fj^(k) = (d^k f/dx^k) evaluated at xj,

the Taylor series about xj becomes

f(xj + θh) = fj + θh fj^(1) + (θh)²/2! fj^(2) + ... .

Setting D ≡ d/dx, we have formally

f(xj + θh) = e^(θhD) fj.

A comparison with Newton's interpolation formula

f(xj + θh) = (1 + Δ)^θ fj = E^θ fj

shows that the operator e^(hD) (applied to functions of a continuous variable) is analogous to the operator E (applied to functions of a discrete variable).

Checkpoint

1. What is the relationship between the forward and backward linear and quadratic
interpolation formulae (for a table of constant interval h) and Newton's interpolation
formulae?
2. When do you use Newton's forward difference formula?
3. When do you use Newton's backward difference formula?




EXERCISES

1. From a difference table of f(x) = eˣ to 5D for x = 0.10(0.05)0.40, estimate:
1. e0.14 by means of Newton's forward difference formula;
2. e0.315 by means of Newton's backward difference formula.


2. Show that for j = 0, 1, 2, . . .,






3. Derive the equation of the interpolating polynomial for the data.






INTERPOLATION

Lagrange interpolation formula

The linear and quadratic interpolation formulae of earlier sections correspond to first and second degree polynomial approximations, respectively. In the preceding section, we discussed


Newton's forward and backward interpolation formulae and noted that higher order interpolation
corresponds to higher degree polynomial approximation. In this Step we consider an interpolation
formula attributed to Lagrange, which does not require function values at equal intervals.
Lagrange's interpolation formula has the disadvantage that the degree of the approximating
polynomial must be chosen at the outset; an alternative approach is discussed in the next Step.
Thus, Lagrange's formula is mainly of theoretical interest for us here; in passing, we mention that
there are some important applications of this formula beyond the scope of this book - for example,
the construction of basis functions to solve differential equations using a spectral (discrete
ordinate) method.

1. Procedure

Let the function f be tabulated at n + 1 (not necessarily equidistant) points xj, j = 0, 1, 2, ..., n, and be approximated by the polynomial Pn(x) of degree at most n, such that

Pn(xj) = fj, j = 0, 1, 2, ..., n.

Since, for k = 0, 1, 2, ..., n,

Lk(x) = [(x - x0)(x - x1)...(x - xk-1)(x - xk+1)...(x - xn)] / [(xk - x0)(xk - x1)...(xk - xk-1)(xk - xk+1)...(xk - xn)]

is a polynomial of degree n which satisfies

Lk(xj) = 0 for j ≠ k,  Lk(xk) = 1,

it follows that

Pn(x) = L0(x) f0 + L1(x) f1 + ... + Ln(x) fn

is a polynomial of degree (at most) n which satisfies

Pn(xj) = fj, j = 0, 1, 2, ..., n,

i.e., it is the (unique) interpolating polynomial. Note that for x = xj all terms in the sum vanish except the j-th, which is fj; Lk(x) is called the k-th Lagrange interpolation coefficient, and the identity

L0(x) + L1(x) + ... + Ln(x) ≡ 1

(established by setting f(x) ≡ 1) may be used as a check. Note also that with n = 1 we recover the linear interpolation formula:

f(x) ≈ L0(x) f0 + L1(x) f1 = [(x - x1)/(x0 - x1)] f0 + [(x - x0)/(x1 - x0)] f1.

2. Example

We will use Lagrange's interpolation formula to find the interpolating polynomial P3 through the points (0, 3), (1, 2), (2, 7), and (4, 59), and then find the approximate value P3(3).

The Lagrange coefficients are:

L0(x) = (x - 1)(x - 2)(x - 4)/[(0 - 1)(0 - 2)(0 - 4)] = -(x - 1)(x - 2)(x - 4)/8,
L1(x) = x(x - 2)(x - 4)/[(1)(-1)(-3)] = x(x - 2)(x - 4)/3,
L2(x) = x(x - 1)(x - 4)/[(2)(1)(-2)] = -x(x - 1)(x - 4)/4,
L3(x) = x(x - 1)(x - 2)/[(4)(3)(2)] = x(x - 1)(x - 2)/24.

(The student should verify that L0 + L1 + L2 + L3 ≡ 1.) Hence, the required polynomial is

P3(x) = 3 L0(x) + 2 L1(x) + 7 L2(x) + 59 L3(x) = x³ - 2x + 3.

Consequently, P3(3) = 27 - 6 + 3 = 24. However, note that, if the explicit form of the interpolating polynomial were not required, one would proceed to evaluate P3(x) for some value of x directly from the factored forms of Lk(x). Thus, in order to evaluate P3(3), one has

P3(3) = 3(2)(1)(-1)/(-8) + 2(3)(1)(-1)/3 + 7(3)(2)(-1)/(-4) + 59(3)(2)(1)/24 = 0.75 - 2 + 10.5 + 14.75 = 24.
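The direct evaluation from the factored forms of Lk(x), as in this example, can be sketched in a few lines:

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial at x,
    working directly from the factored forms of Lk(x)."""
    total = 0.0
    for j, (xj, fj) in enumerate(points):
        term = fj
        for k, (xk, _) in enumerate(points):
            if k != j:
                term *= (x - xk) / (xj - xk)
        total += term
    return total

# the four data points of this example
pts = [(0, 3), (1, 2), (2, 7), (4, 59)]
print(round(lagrange(pts, 3), 6))  # 24.0
```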
3. Notes of caution

In the case of the Newton interpolation formulae, considered in the preceding Step, or the
formulae to be discussed in the next Step, the degree of the required approximating
polynomial may be determined merely by computing terms until they no longer appear to be
significant. In the Lagrange procedure, the polynomial degree must be chosen at the
outset! Also, note that

1. a change of degree involves recomputation of all terms; and
2. for a polynomial of high degree the process involves a large number of
multiplications, whence it may be quite slow.




Lagrange interpolation should be used with considerable caution. For example, let us employ it to obtain an estimate of the cube root of 20 from the points (0, 0), (1, 1), (8, 2), (27, 3), and (64, 4) on y = x^(1/3). Carrying out the computation, one finds P4(20) ≈ -1.31, which is not at all close to the correct value 2.7144! A better result (i.e., 2.6316) can be obtained by linear interpolation between (8, 2) and (27, 3). The problem is that the Lagrange method yields no indication as to how well x^(1/3) is represented by a quartic. In practice, therefore, Lagrange interpolation is used only rarely.

Checkpoint

1. When is the Lagrange interpolation formula used in practical computations?
2. What distinguishes the Lagrange formula from many other interpolation formulae?
3. Why should the Lagrange formula be used in practice only with caution?


EXERCISE

Given that f (-2) = 46, f (-1 ) = 4, f ( 1 ) = 4, f (3) = 156, and f (4) = 484, use Lagrange's
interpolation formula to estimate the value of f(0).

INTERPOLATION

Divided differences

We have noted that the Lagrange interpolation formula is mainly of theoretical interest because, in practice, it involves very considerable computation and its use can be quite risky. It is much more efficient to use divided differences to interpolate a tabulated function (especially if the arguments are unequally spaced); moreover, their use is relatively safe, since the required degree of the interpolating polynomial need not be decided at the outset. An allied procedure, due to Aitken, is also commonly used in practice.

1. Divided differences

Again, let the function f be tabulated at the (not necessarily equidistant) points x0, x1, . . . , xn. We define the divided differences between points as follows:

f[x0, x1] = (f1 - f0)/(x1 - x0)  (first divided difference),
f[x0, x1, x2] = (f[x1, x2] - f[x0, x1])/(x2 - x0)  (second divided difference),

and, in general,

f[x0, x1, ..., xk] = (f[x1, ..., xk] - f[x0, ..., xk-1])/(xk - x0).

As an example, we will construct from the data:



the divided difference table:



We note that the third divided differences are constant. In Section 3, we shall use the table to
interpolate by means of Newton's divided difference formula and determine the
corresponding interpolating cubic.

2. Newton's divided difference formula

According to the definitions of divided differences,

f(x) = f0 + (x - x0) f[x0, x],
f[x0, x] = f[x0, x1] + (x - x1) f[x0, x1, x],
f[x0, x1, x] = f[x0, x1, x2] + (x - x2) f[x0, x1, x2, x], etc.

Multiplying the second equation by (x - x0), the third by (x - x0)(x - x1), etc., and adding the results yields Newton's divided difference formula, suitable for computer implementation:

f(x) = f0 + (x - x0) f[x0, x1] + (x - x0)(x - x1) f[x0, x1, x2] + ... + (x - x0)(x - x1)...(x - xn-1) f[x0, x1, ..., xn] + R,

where

R = (x - x0)(x - x1)...(x - xn) f[x0, x1, ..., xn, x].

Note that the remainder term R vanishes at x0, x1, . . . , xn, whence we infer that the other
terms on the right-hand side constitute the interpolating polynomial or, equivalently, the
Lagrange polynomial. If the required degree of the interpolating polynomial is not known
in advance, it is customary to arrange the points x1, . . . , xn, according to their increasing
distance from x and add terms until R is small enough.

3. Example

From the tabulated function in Section 1, we will estimate f(2) and f(4), using Newton's divided difference formula, and find the corresponding interpolating polynomials.

The third divided differences being constant, we can fit a cubic through the five points. By Newton's divided difference formula, using x0 = 0, x1 = 1, x2 = 3, and x3 = 6, the interpolating cubic becomes

f(x) ≈ 1 - 7x + 4x(x - 1) + x(x - 1)(x - 3),

so that

f(2) ≈ 1 - 14 + 8 - 2 = -7.

Obviously, the interpolating polynomial is

P3(x) = x³ - 8x + 1.

In order to estimate the value of f(4), we identify x0 = 1, x1 = 3, x2 = 6, x3 = 10, whence

f(x) ≈ -6 + 5(x - 1) + 10(x - 1)(x - 3) + (x - 1)(x - 3)(x - 6),

and

f(4) ≈ -6 + 15 + 30 - 6 = 33.

As expected, the two interpolating polynomials are the same cubic, i.e., x³ - 8x + 1.
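Newton's divided difference formula lends itself to a short routine; the sketch below (names are illustrative) reproduces the estimates f(2) = −7 and f(4) = 33 from points on the cubic x³ − 8x + 1 of this example:

```python
def newton_divided(xs, fs, x):
    """Newton's divided difference formula: compute the coefficients
    f[x0], f[x0,x1], ..., then evaluate the nested products at x."""
    n = len(xs)
    coef = list(fs)
    for j in range(1, n):                  # coef[i] -> f[x(i-j), ..., xi]
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    est, prod = 0.0, 1.0
    for j in range(n):
        est += coef[j] * prod
        prod *= x - xs[j]
    return est

# points on the cubic x^3 - 8x + 1 used in this example
xs, fs = [0, 1, 3, 6], [1, -6, 4, 169]
print(newton_divided(xs, fs, 2), newton_divided(xs, fs, 4))  # -7.0 33.0
```

Note that the points need not be equally spaced, which is precisely the advantage of divided differences.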

4. Errors in interpolating polynomials

In Section 2, we have seen that the error in an interpolating polynomial of degree n is given by the remainder term

E(f, x) = R = (x - x0)(x - x1)...(x - xn) f[x0, x1, ..., xn, x].

As it stands, this expression is not very useful, because it involves the unknown quantity f[x0, x1, ..., xn, x]. However, it may be shown (cf., for example, Conte and de Boor (1980)) that, if a = min(x0, ..., xn, x), b = max(x0, ..., xn, x), and f is (n + 1)-times differentiable on (a, b), then there exists a ξ in (a, b) such that

f[x0, x1, ..., xn, x] = f^(n+1)(ξ)/(n + 1)!,

whence it follows that

E(f, x) = (x - x0)(x - x1)...(x - xn) f^(n+1)(ξ)/(n + 1)!.

This formula may be useful when we know the function generating the data and wish to find lower and upper error bounds. For example, let there be given sin 0 = 0, sin(0.2) = 0.198669, and sin(0.4) = 0.389418 to 6D (where the arguments of the sine function are in radians). The divided differences are

f[0, 0.2] = 0.993345,  f[0.2, 0.4] = 0.953745,  f[0, 0.2, 0.4] = -0.099.

Thus, the quadratic approximation to sin(0.1) is

P2(0.1) = 0 + 0.993345 (0.1) + (-0.099)(0.1)(0.1 - 0.2) = 0.100325 (to 6D).

Since n = 2, the magnitude of the error in the approximation is given by

|E| = |x(x - 0.2)(x - 0.4)| |f'''(ξ)|/3!,

where 0 < ξ < 0.4. For f(x) = sin x, one has f'''(x) = -cos x, so that cos(0.4) ≤ |f'''(ξ)| ≤ 1. It then follows that

(0.003/6) cos(0.4) ≤ |E| ≤ 0.003/6, i.e., 0.000461 ≤ |E| ≤ 0.000500.

The absolute value of the actual error is 0.000492, which is within these bounds.
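The whole calculation, including the check that the actual error falls between the two bounds, can be sketched as:

```python
import math

# sin x to 6D at x = 0, 0.2, 0.4 (radians), as in the text
xs = [0.0, 0.2, 0.4]
fs = [0.0, 0.198669, 0.389418]

# divided differences f[x0,x1], f[x1,x2], f[x0,x1,x2]
f01 = (fs[1] - fs[0]) / (xs[1] - xs[0])
f12 = (fs[2] - fs[1]) / (xs[2] - xs[1])
f012 = (f12 - f01) / (xs[2] - xs[0])

# quadratic approximation to sin(0.1) and its actual error
x = 0.1
p2 = fs[0] + f01 * (x - xs[0]) + f012 * (x - xs[0]) * (x - xs[1])
err = abs(math.sin(x) - p2)

# bounds: |E| = |x(x - 0.2)(x - 0.4)| |f'''|/3!, with cos(0.4) <= |f'''| <= 1
upper = abs(x * (x - 0.2) * (x - 0.4)) / 6
lower = upper * math.cos(0.4)
print(lower <= err <= upper)  # True
```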

Checkpoint

1. What major practical advantage has Newton's divided difference interpolation
formula over Lagrange's formula?
2. Are divided differences actually used in interpolation by Aitken's method?


EXERCISES

1. Use Newton's divided difference formula to show that it is quite invalid to interpolate from the points given.
2. Given a short table of values of eˣ, use Newton's divided difference formula to estimate the value of e^0.25. Find lower and upper bounds on the magnitude of the error and verify that the actual magnitude is within the calculated bounds.
3. Given that f(-2) = 46, f(-1) = 4, f(1) = 4, f(3) = 156, and f(4) = 484, estimate the value of f(0) from
a. Newton's divided difference formula, and
b. Aitken's method.
Comment on the validity of this interpolation.
4. Given that f(0) = 2.3913, f(1) = 2.3919, f(3) = 2.3938, and f(4) = 2.3951, use Aitken's method to estimate the value of f(2).






INTERPOLATION

Inverse interpolation

Instead of the value of a function f (x) for a certain x, one might seek the value of x which
corresponds to a given value of f (x), a process referred to as inverse interpolation. For example,
the reader may have contemplated the possibility of obtaining roots of f (x) = 0 by inverse
interpolation.

1. Linear inverse interpolation


An obvious elementary procedure is to tabulate the function in the neighbourhood of the given value at an interval so small that linear inverse interpolation may be used. Denoting the given function value by c, solving

c = fj + θ Δfj

for θ yields

θ = (c - fj)/Δfj,

where

x = xj + θh

is the linear approximation. (Note that if c = 0, we recover the method of false position.)

For example, one finds from a 4D table of f(x) = e^(-x) that f(0.91) = 0.4025 and f(0.92) = 0.3985, so that f(x) = 0.4 corresponds to

θ = (0.4000 - 0.4025)/(0.3985 - 0.4025) = 0.625, i.e., x = 0.91 + 0.625 (0.01) = 0.91625.

In order to obtain an immediate check, we will use direct interpolation to recover f(x) = 0.4. Thus,

f(0.91625) ≈ 0.4025 + 0.625 (-0.0040) = 0.4000.
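A minimal sketch of the computation, using the two 4D rows quoted above, with a direct check against the true inverse of e^(-x):

```python
import math

x0, f0 = 0.91, 0.4025   # e^(-0.91) to 4D
x1, f1 = 0.92, 0.3985   # e^(-0.92) to 4D
c = 0.4                 # the prescribed function value

# linear inverse interpolation: theta = (c - f0)/df0, then x = x0 + theta*h
theta = (c - f0) / (f1 - f0)
x = x0 + theta * (x1 - x0)
print(round(theta, 6), round(x, 6))  # 0.625 0.91625

# direct check: the exact inverse is -ln(0.4) = 0.916291 to 6D
print(abs(x - (-math.log(0.4))) < 1e-3)  # True
```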

2. Iterative inverse interpolation



As undoubtedly the reader may appreciate, it may be preferable to adopt (at least
approximately) an interpolating polynomial of degree higher than one rather than seek to
tabulate at a small enough interval to permit linear inverse interpolation. The degree of the
approximating polynomial may be decided implicitly by an iterative (successive
approximation) method.

For example, Newton's forward difference formula may be rearranged as follows:

θ = [f(x) - fj - (θ(θ - 1)/2!) Δ²fj - (θ(θ - 1)(θ - 2)/3!) Δ³fj - ...]/Δfj.

Since the terms involving second and higher differences may be expected to decrease fairly quickly, we obtain successive approximations θ(i) to θ given by

θ(1) = [f(x) - fj]/Δfj,
θ(i+1) = [f(x) - fj - (θ(i)(θ(i) - 1)/2!) Δ²fj - ...]/Δfj.

Similar iterative procedures may be based on other interpolation formulae, such as Newton's backward difference formula.

In order to illustrate this statement, consider the first table of f(x) = sin x and let us seek the value of x for which f(x) = 0.2. Obviously, 10° < x < 20°, so we take xj = 10° and iterate. (Note that it is unnecessary to carry many digits in the first estimates of θ.) Consequently,

θ ≈ 0.1539,

which yields x = 10° + 0.1539 (10°) = 11.539°.

A check, either by the usual method of direct interpolation or in this case directly, yields sin(11.539°) = 0.2000.

3. Divided differences


Since divided differences are suitable for interpolation with tabular values which are unequally spaced, they may also be used for inverse interpolation, with the roles of x and f interchanged. Consider again

f(x) = sin x for x = 10°(10°)50°,

and determine the value of x for which f(x) = 0.2. Ordering the entries according to increasing distance from f(x) = 0.2, one finds the divided difference table of x as a function of f (entries multiplied by 100). Hence, by Newton's divided difference formula, x ≈ 11.54°.

Aitken's scheme could also have been used here! However, by either method, we note that
any advantage in accuracy gained by the use of iterative inverse interpolation may not
justify the additional computational demand.

Checkpoint

1. Why may linear inverse interpolation be either tedious or impractical?
2. What is the usual method for checking inverse interpolation?




EXERCISES


1. Use linear inverse interpolation to find the root of x + cos x = 0 correct to 4D.
2. Solve 3x eˣ = 1 to 3D.
3. Given a table of values of a cubic f without knowledge of its specific form:






find x for which f(x) = 10, 20 and 40, respectively. Check your answers by (direct)
interpolation. Finally, obtain the equation of the cubic and use it to recheck your
answers.



CURVE FITTING

1. Least squares

Scientists often wish to fit a smooth curve to experimental data. Given (n + 1) points, an obvious
approach is to use the interpolating polynomial of degree n, but when n is large, this is usually
unsatisfactory. Better results are obtained by piecewise use of polynomials, i.e., by fitting lower
degree polynomials through subsets of the data points. The use of spline functions, which, as a
rule, provide a particularly smooth fit, has become widespread.

A rather different, but often quite suitable approach is a least squares fit, in which, instead of trying
to fit the points exactly, a polynomial of low degree (often linear or quadratic) is obtained which fits
the points closely (after all, the points themselves are, in general, not exact, but subject to
experimental error).





2. An illustration of the problem


Suppose we are studying experimentally the relationship between two variables x and y - for
example, quantities x of drug injected and observed responses y, recorded in a laboratory
experiment. By carrying out the appropriate experiment, say, six times, we obtain six pairs of
values (xj, yj), which can be plotted on a diagram such as Figure 11(a).



Fig. 12 Fitting a straight line and a parabola

We may believe that the relationship between the variables can be described satisfactorily by a
function y = f (x), but that the y-values, obtained experimentally, are subject to errors (or noise).
Therefore one arrives at the mathematical model:

yi = f(xi) + εi, i = 1, 2, . . , n,

with n data, where f(xi) is the value of y corresponding to the value xi used in the
experiment, and εi is the experimental error involved in the measurement of the variable y at that
point. Thus, the error in y at the observed point is εi = yi − f(xi).

In the problem of curve fitting, we use the information of the sample data points to determine a
suitable curve (i.e., find a suitable function f ) so that the equation y = f (x) gives a description of
the (x, y) relationship, in other words, it is hoped that predictions made by means of this equation
will not be too much in error.

How does one choose the function f? There is an unlimited range of functions available. Figure
11(b) shows four possibilities. The polygon A passes through all six points; intuitively, however,


we would prefer to fit a straight line B, or an exponential curve such as C. The curve D is clearly
not a good candidate for our model.

3. A general approach to the problem

Let us, first of all, answer the question regarding the choice of function. Given a set of values (x1,
y1), (x2, y2), . . , (xn, yn), we shall pick a function which we can specify completely except for the
values of a set of k parameters c1, c2, . . , ck; we shall denote this function by f(x; c1, c2, . . , ck). We
then choose values for the parameters which will make the errors at the observation points (xi, yi) as
small as possible. Next, we shall suggest three ways by which the phrase as small as possible can
be given specific meaning.

Examples of functions to use are:

1. f(x; c1, . . , ck) = c1 + c2x + . . + ckx^(k-1) (polynomials),
2. f(x; c1, . . , ck) = c1 sin λx + c2 sin 2λx + . . + ck sin kλx (combinations of sine functions),
3. f(x; c1, . . , ck) = c1 + c2 cos λx + . . + ck cos (k-1)λx (combinations of cosine functions).


These examples may be termed general linear forms:

4. f(x; c1, . . , ck) = c1 φ1(x) + c2 φ2(x) + . . + ck φk(x), where the functions φ1, φ2, . . , φk are a preselected
set of functions.


In 1., the set of functions is {1, x, x², . . , x^(k-1)}; in 2., it is {sin λx, sin 2λx, . . , sin kλx}, with λ a
constant chosen to coincide with a periodicity in the data; while in 3., the set is {1, cos λx, . . , cos (k-1)λx}. Other
functions commonly used in curve fitting are exponential functions, Bessel functions, Legendre
polynomials, and Chebyshev polynomials (cf., for example, Burden and Faires (1993)).

4. The meaning of Errors as small as possible

We now present criteria which make precise the concept of choosing a function to make
measurement errors as small as possible. We suppose that the curve to be fitted can be expressed in
a general linear form, with a known set of functions φ1, φ2, . . , φk.

The errors at the n data points are:

εi = yi − [c1 φ1(xi) + c2 φ2(xi) + . . + ck φk(xi)], i = 1, 2, . . , n.

If the number of data points is less than or equal to the number of parameters, i.e., n ≤ k, it is
possible to find values for {c1, c2, . . , ck} which make all the errors εi zero. If n < k, there is an infinite
number of solutions for {ci} which make all the errors zero; then an infinite number of curves of
the given form pass through all the experimental points. In this case, the problem is not fully
determined, i.e., more information is needed to choose an appropriate curve.

If n > k, which, in practice, is mostly the case, then it is not normally possible to make all the errors
zero by a choice of the {ci}. There are three possible choices:

1. a set {ci} which minimizes the total absolute error, i.e., minimizes the sum |ε1| + |ε2| + . . + |εn|;
2. a set {ci} which minimizes the maximum absolute error, i.e., minimizes max |εi|;
3. a set {ci} which minimizes the sum of the squares of the errors, i.e., minimizes ε1² + ε2² + . . + εn².


In general, Procedures 1 and 2 are not readily applied. Procedure 3 leads to a linear system of
equations for the set {ci}, referred to as the Principle of least squares; it is used almost
exclusively.
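For a concrete comparison of the three criteria, the sketch below evaluates all three error measures for one candidate curve; the data points and the candidate line y = 1 + x are illustrative assumptions:

```python
# Illustrative data and a candidate line y = 1 + x (both assumptions).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.2, 1.9, 3.3, 3.8]

# Errors at the data points: e_i = y_i - f(x_i).
errors = [y - (1.0 + x) for x, y in zip(xs, ys)]

total_abs = sum(abs(e) for e in errors)    # criterion 1: total absolute error
max_abs = max(abs(e) for e in errors)      # criterion 2: maximum absolute error
sum_squares = sum(e * e for e in errors)   # criterion 3: sum of squared errors
```

Each criterion ranks candidate parameter sets differently; the principle of least squares selects the parameters minimizing the third quantity.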

5. The least squares method and normal equations

In order to apply the principle of least squares, use has to be made of partial differentiation, a
calculus technique which may not be known to some readers of this text. For that reason, a general
justification of the method is deferred; here we simply describe it and
give examples, in order to show how it is used.

The sum of squared errors to be minimized is

S = ε1² + ε2² + . . + εn² = Σ [yi − (c1 φ1(xi) + c2 φ2(xi) + . . + ck φk(xi))]²,

where the sum runs from i = 1 to i = n. The n values of (xi, yi) are the known measurements taken from n experiments. When they are
inserted on the right-hand side, S becomes an expression involving only the k unknowns c1, c2, . . ,
ck. In other words, S may be regarded as a function of the ci, i.e., S = S(c1, c2, . . , ck). The problem is
now to choose that set of values {ci} which makes S a minimum.

A theorem in calculus tells us that, under certain conditions which are usually satisfied in practice,
the minimum of S occurs when all the partial derivatives

∂S/∂c1, ∂S/∂c2, . . , ∂S/∂ck

vanish. The partial derivative ∂S/∂c1 coincides here with the differential coefficient dS/dc1 obtained while all the
other ci are held constant; for instance, if S = 3c1 + 5c2, then

∂S/∂c1 = 3 and ∂S/∂c2 = 5.

Thus, we have to solve the system of k equations:

∂S/∂cj = 0, j = 1, 2, . . , k.
This system is a set of equations which is linear in the variables c1, c2, . . , ck and is referred to as
the normal equations for the least squares approximation. One of the numerical methods
presented before may be used to obtain the required set {ci} which minimizes S. However, we
note that the normal equations may be ill-conditioned, in which case it is preferable to invoke QR
factorization, as outlined in the next (optional) section, or to employ orthogonal basis functions (cf.,
for example, Conte and de Boor (1980)).
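The assembly and solution of the normal equations for a general linear form can be sketched as follows; the basis {1, x}, the sample data, and the helper names `normal_equations` and `solve` are illustrative assumptions, and the tiny Gaussian elimination is only meant for small, well-conditioned systems:

```python
def normal_equations(xs, ys, basis):
    """Form the k x k normal equations A c = b for the least squares
    fit f(x; c) = sum_j c_j * phi_j(x) over the data (xs, ys)."""
    k = len(basis)
    A = [[sum(basis[r](x) * basis[s](x) for x in xs) for s in range(k)]
         for r in range(k)]
    b = [sum(basis[r](x) * y for x, y in zip(xs, ys)) for r in range(k)]
    return A, b

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for col in range(i, n):
                A[r][col] -= m * A[i][col]
            b[r] -= m * b[i]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c

# Illustrative data and basis {1, x}: fit a straight line c1 + c2*x.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 4.0, 4.0]
A, b = normal_equations(xs, ys, [lambda x: 1.0, lambda x: x])
c = solve(A, b)
```

Any other preselected basis (sines, cosines, Chebyshev polynomials) can be passed in the same way.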

6. Example

The following points were obtained in an experiment:


.

We shall plot the points on a diagram and use the method of least squares to fit through them

a) a straight line, and b) a parabola.



The plotted points are shown in Figure 12(a). In order to fit a straight line, we have to find a
function y = c1 + c2x, i.e., a first degree polynomial which minimizes

S = Σ (yi − c1 − c2xi)².

Differentiating first with respect to c1 (keeping c2 constant) and then with respect to c2 (keeping c1
constant), and setting the results equal to zero, yields the normal equations:

−2 Σ (yi − c1 − c2xi) = 0,   −2 Σ xi(yi − c1 − c2xi) = 0.

We may divide both equations by −2, take the summation operations through the brackets, and
rearrange, in order to obtain:

n c1 + c2 Σ xi = Σ yi,   c1 Σ xi + c2 Σ xi² = Σ xiyi.

We see that, in order to obtain a solution, we have to evaluate the four sums Σ xi, Σ yi, Σ xi², Σ xiyi,
and insert them into these equations. We can arrange the work in a table as follows (the last three
columns are devoted to fitting of the parabola and the required sums are in the last row):




The corresponding normal equations for fitting a straight line are:



The solutions to 2D are c1 = 2.13 and c2 = 0.20, whence the required line (shown in Figure 12(b)) is

y = 2.13 + 0.20x.

In order to fit a parabola, we must find the second degree polynomial

y = c1 + c2x + c3x²

which minimizes

S = Σ (yi − c1 − c2xi − c3xi²)².

Taking partial derivatives and proceeding as above, we obtain the normal equations:

n c1 + c2 Σ xi + c3 Σ xi² = Σ yi,
c1 Σ xi + c2 Σ xi² + c3 Σ xi³ = Σ xiyi,
c1 Σ xi² + c2 Σ xi³ + c3 Σ xi⁴ = Σ xi²yi.

Inserting the values for the sums (see the table above), we obtain the system of linear equations.


The solution to 3D is c1 = −1.200, c2 = 2.700, and c3 = −0.357. The required parabola is therefore
(retaining 2D):

y = −1.20 + 2.70x − 0.36x².

It is also plotted in Figure 12(b). Obviously, the parabola is a better fit than the straight line!
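The sums-table computation for the parabola can be scripted as follows; since the table of the worked example is not reproduced here, the data points below are illustrative assumptions:

```python
# Illustrative data (the original example's table is not reproduced here).
pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 4.0), (3.0, 4.0), (4.0, 2.0)]
n = len(pts)

# Sums required by the normal equations for y = c1 + c2*x + c3*x^2.
Sx = sum(x for x, _ in pts)
Sx2 = sum(x ** 2 for x, _ in pts)
Sx3 = sum(x ** 3 for x, _ in pts)
Sx4 = sum(x ** 4 for x, _ in pts)
Sy = sum(y for _, y in pts)
Sxy = sum(x * y for x, y in pts)
Sx2y = sum(x ** 2 * y for x, y in pts)

# Normal equations as an augmented matrix, reduced by Gauss elimination.
M = [[n,   Sx,  Sx2, Sy],
     [Sx,  Sx2, Sx3, Sxy],
     [Sx2, Sx3, Sx4, Sx2y]]
for i in range(3):
    for r in range(i + 1, 3):
        f = M[r][i] / M[i][i]
        M[r] = [a - f * b for a, b in zip(M[r], M[i])]

# Back substitution for the coefficients.
c3 = M[2][3] / M[2][2]
c2 = (M[1][3] - M[1][2] * c3) / M[1][1]
c1 = (M[0][3] - M[0][1] * c2 - M[0][2] * c3) / M[0][0]
```

For these illustrative points the fitted parabola is y ≈ 0.914 + 2.871x − 0.643x².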



Checkpoint

1. What is meant by the term error at a point?
2. Give three criteria which may be applied to choose the set (ci).
3. How are the normal equations obtained?


EXERCISES

1. For the example above (the data points are shown in Figure 12(a)), compute the value of S, the
sum of the squares of the errors at the points, for 1. the fitted line, and 2. the fitted
parabola. Plot the points on graph paper, and fit a straight line by eye (i.e., use a ruler to
draw a line, guessing its best position). Determine the value of S for this line and compare it
with the value for the least squares line. Fit a straight line by the least squares method to
each of the following sets of data:


a) Toughness x and percentage of nickel y in eight specimens of alloy steel.




b) Aptitude test marks x, given to six trainee sales people, and their first-year sales y in
thousands of dollars.



For both sets of data, plot the points and draw the least squares line. Use the lines to predict
the % - nickel of a specimen of steel the toughness of which is 38, and the likely first-year
sales of a trainee sales person who obtains a mark of 48 in the aptitude test.

2. Obtain the normal equations for fitting a third-degree polynomial y = c1 + c2x + c3x² + c4x³
to a set of n points. Show that they can be written in matrix form (all sums being from i = 1
to i = n):




Deduce the matrix form of the normal equations for fitting a fourth-degree polynomial.

3. Use the least squares method to fit a parabola to the points (0,0), (1,1), (2,3), (3,3), and
(4,2). Find the value of S for this fit.
4. Find the normal equations which arise in fitting, by the least squares method, an
equation of the form y = c1 + c2 sin x to a given set of points. Solve
them for c1 and c2.




CURVE FITTING

Splines


Suppose we want to fit a smooth curve which actually goes through n + 1 given data points when n
is quite large. Since an interpolating polynomial of correspondingly high degree n tends to be
highly oscillatory and therefore is likely to give an unsatisfactory fit, at least in certain locations,
an interpolation is often constructed by linking lower degree polynomials (piecewise
polynomials) at certain or all of the given data points (called nodes or knots). This interpolation is
smooth if we also insist that the piecewise polynomials have matching derivatives at the nodes,
and this smoothness is enhanced by matching higher order derivatives.

Let the data points (x0, f0), (x1, f1), . . ., (xn, fn) be ordered according to their magnitude so that

x0 < x1 < . . . < xn.

We will seek a function S which is a polynomial of degree d on each subinterval [xj-1, xj], j = 1, 2,
. . , n.

In order to achieve maximum smoothness at the nodes, we shall allow S to have up to d − 1
continuous derivatives. Such functions are referred to as splines. An example of a spline for the
linear (d = 1) case is the polygon (Curve A) of Figure 11(b) above. It is clear that this spline is
continuous, but does not have a continuous first derivative. In practice, the most popular are cubic
splines, constructed from polynomials of degree three with continuous first and second derivatives
at the nodes and discussed below in detail. Figure 13 below shows an example of a cubic spline S
for n = 5. (The data points are taken from the table in the earlier example.) We see that S passes through
all the data points. The function Sj on the subinterval [xj-1, xj] is a cubic. As has already been
indicated, the first and second derivatives of Sj and Sj+1 match at (xj, fj), the point where they meet.

The term spline refers to the thin flexible rods used in the past by draughtsmen to draw smooth
curves, for example, in ship design. The graph of a cubic spline approximates the shape which
forms when such a rod is forced to pass through given n + 1 nodes and corresponds, according to
the theory of bending of thin rods, to minimum strain energy.

1. Construction of cubic splines


As has been indicated, a cubic spline S is constructed by fitting a cubic to each subinterval
[xj-1, xj] for j = 1, 2, . . , n, whence it is convenient to assume that S has values Sj(x) for
xj-1 ≤ x ≤ xj, where

Sj(x) = aj + bj(x − xj) + cj(x − xj)² + dj(x − xj)³.


FIGURE 13. Schematic example of a cubic spline over subintervals [x0, x1], [x1, x2], [x2,
x3], [x3, x4], and [x4, x5], each function Sj on [xj-1, xj] being a cubic.

We now impose the condition S(xj) = fj, whence aj = fj for j = 1, 2, . . , n. For S to be
continuous and to have continuous first and second derivatives at the given data points, we
require

Sj(xj) = Sj+1(xj),   S′j(xj) = S′j+1(xj),   S″j(xj) = S″j+1(xj),

for j = 1, 2, . . , n − 1.

Since we have a cubic with the four unknowns (aj, bj, cj, dj) on each of the n subintervals,
and so a total of 4n unknowns, we need 4n equations to specify them. The requirements S(xj)
= fj, j=0,1, 2, . . , n yield n + 1 equations, while 3(n -1) equations arise from the continuity
requirement on S and its first two derivatives given above. This yields a total of n+1+3(n-
1)=4n-2 equations, whence we need to impose two more conditions to specify S completely.


The choice of these two extra conditions determines the type of the cubic spline obtained.
Two common options are:

1. A natural cubic spline, when S″(x0) = S″(xn) = 0;
2. A clamped cubic spline, when S′(x0) = α and S′(xn) = β for some given constants α and β.
If the values of f′(x0) and f′(xn) are known, then α and β can be set to these values.




We shall not go here into the algebraic details; however, it turns out that if we write hj
= xj − xj-1 and mj = S″(xj), then the coefficients of Sj are given by

aj = fj,   bj = (fj − fj-1)/hj + hj(2mj + mj-1)/6,   cj = mj/2,   dj = (mj − mj-1)/(6hj).

The spline is thus determined by the values of the mj, which depend on whether we want a natural
or a clamped cubic spline.

For a natural cubic spline, we have m0 = mn = 0, and the equations:

hj mj-1 + 2(hj + hj+1) mj + hj+1 mj+1 = 6[(fj+1 − fj)/hj+1 − (fj − fj-1)/hj],

for j = 1, 2, . . . , n − 1. (Note that if all the values of hj are the same, say h, then the right-hand side of this
last equation is just 6(fj+1 − 2fj + fj-1)/h, and dividing through by h gives

mj-1 + 4mj + mj+1 = 6(fj+1 − 2fj + fj-1)/h².)

These linear equations for the unknowns m1, m2, . . , mn-1 form an (n − 1) × (n − 1) system.

Note that the coefficient matrix has non-zero entries only on the leading diagonal
and the two sub-diagonals either side of it. Such a system is called a tri-diagonal
system. Since most of the entries of the matrix are zero, it is possible to
modify Gauss elimination (described before) to produce a very efficient method for solving
tri-diagonal systems.
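Such a tridiagonal solve can be sketched as follows for the natural-spline system with equally spaced nodes; the spacing h = 1, the ordinates, and the helper name `thomas` are illustrative assumptions:

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by forward elimination and back
    substitution (the standard modification of Gauss elimination)."""
    n = len(diag)
    diag = diag[:]
    rhs = rhs[:]
    for i in range(1, n):
        w = sub[i - 1] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    m = [0.0] * n
    m[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        m[i] = (rhs[i] - sup[i] * m[i + 1]) / diag[i]
    return m

# Natural cubic spline, equally spaced nodes (h = 1), illustrative
# ordinates f0..f5; unknowns are m1..m4, since m0 = m5 = 0.
f = [0.0, 1.0, 3.0, 2.0, 1.0, 0.0]
rhs = [6.0 * (f[j + 1] - 2.0 * f[j] + f[j - 1]) for j in range(1, 5)]
m = thomas([1.0] * 3, [4.0] * 4, [1.0] * 3, rhs)
```

The elimination touches each equation once, so the cost grows only linearly with the number of nodes, in contrast to full Gauss elimination.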

For the clamped boundary conditions, the equations for j = 1, 2, . . , n − 1 above still hold, together with:

2h1 m0 + h1 m1 = 6[(f1 − f0)/h1 − α],

and

hn mn-1 + 2hn mn = 6[β − (fn − fn-1)/hn].

It may be verified that these equations for m0, m1, . . . , mn can be written as an (n + 1)
× (n + 1) tridiagonal system.

3. Examples

We will fit a natural cubic spline to data,

,


Since the values of the xj are equally spaced, we find hj = 1 for j = 1, 2, . . , 5. Also,
m0=m5 = 0 and the remaining values m1, m2, m3, and m4 satisfy the linear system:

.

Using Gauss elimination to solve this system to 5D, we find:

.

Calculating the coefficients, we then find that the spline S is given by:

,

where

,

The data points and the cubic spline have already been displayed in Figure 13.

The next example demonstrates graphically a comment made earlier, namely that the
behaviour of interpolating polynomials of high degree tends to be very oscillatory.
For this purpose, we consider the data of the function f(x) = 10/(1 + x2):

.

It is readily verified that the interpolating polynomial of degree 6 is given by

P6(x) = 10 − 6.4x² + 1.5x⁴ − 0.1x⁶.

The function f is plotted as a solid line along with the interpolating polynomial as a
dashed line in Figure 14 below. (Since both f and P6 are symmetric about the y-axis,
only the [0, 3] section of the graph has been displayed.) The oscillatory behaviour of
the interpolating polynomial of degree 6 is obvious.
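This oscillation is easy to reproduce numerically. The sketch below evaluates the degree-6 Lagrange interpolant of f(x) = 10/(1 + x²) on the nodes −3, −2, . . , 3 at the evaluation point x = 2.5 (an illustrative choice) and compares it with the true value:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through
    the points (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

nodes = [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
values = [10.0 / (1.0 + t * t) for t in nodes]  # f(x) = 10/(1 + x^2)

p = lagrange_eval(nodes, values, 2.5)  # degree-6 interpolant
f = 10.0 / (1.0 + 2.5 ** 2)            # true value of f(2.5)
```

Between the last two nodes the interpolant gives about 4.18 where f(2.5) ≈ 1.38, so the polynomial overshoots by nearly 3 units.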

Now fit a natural cubic spline to the data. In order to do so, we must solve the linear
system

.

Gauss elimination yields (to 5D )

.

We now find that the natural spline S is symmetric about the y-axis. On [0, 3], it is
given by

,

where

.

This spline is also plotted in Figure 14 as a dotted line. It is clear that it is a much
better approximation to f than the interpolating polynomial.




FIGURE 14. The function f (x) =10/(1 +x2) (solid line) approximated by an
interpolating polynomial (dashed line) and a natural cubic spline (dotted line).

Checkpoint

1. What characterizes a spline?
2. What are two common types of cubic spline?
3. What type of linear system arises when determining a cubic spline?




EXERCISE

Given the data points (0, 1 ), ( 1, 4), (2,15), and (3, 40), find the natural cubic spline fitting
these data. Use the spline to estimate the value of y at x = 2.3.



References

1. D. Kincaid and W. Cheney, Numerical Analysis, 3rd edition, 2002.

2. J. H. Mathews and K. D. Fink, Numerical Methods Using MATLAB, 3rd edition, 1999.

3. C. F. Gerald and P. O. Wheatley, Applied Numerical Analysis, 6th edition, 2002.

4. W. Y. Yang, Applied Numerical Methods Using MATLAB, 2005.

5. J. L. Buchanan and P. R. Turner, Numerical Methods and Analysis, 1992.

6. K. E. Atkinson, An Introduction to Numerical Analysis, 1989.




Examination




Q1: a. Use the divided difference formula to compute the cubic interpolating polynomial for
f(x) = e^x, using the points x = 0(1)3.

 b. Find: i. Δ⁴y0, and ii. Δ(log xk), where h = xk+1 − xk.



Q2: Consider the curve y = 1/(ax + b).

 a. Use the least squares method to find the normal equations;

 b. use the points (1,2), (2,5), (3,10), (4,17) and (5,26) to find the fitted curve y = 1/(ax + b),
then find the errors E1(f) and E2(f).



Q3: a. Use Bessel's formula to find the interpolating polynomial for the following data:

 (0, 0), (1, -1), (2, 8), (3, 135), (4, 704) and (5, 2375).

 b. Find the approximate value of f(0.65) for the following data:

 (0, 1), (0.2, 1.221), (0.4, 1.491), (0.6, 1.822) and (0.8, 2.225).

 …………………………………………….

Final Exam (Numerical Analysis, 3rd year Computer)

Department of Mathematics-Science College

The University of Sulaimani,

June-2006, Time 3 hours.

First Trial

Lecturer Faraidun K.H





Q1: a. Does the Newton-Raphson method always converge? If not, how can convergence be ensured?

 b. Use the false position method to solve the non-linear equation x + cos(x) = 0, with ε = 0.00001. (10 M)



Q2: a. Solve the non-linear system, using the initial point (0.9, 0.6). (12 M)

 b. Use the Gauss-Seidel method to solve the system; stop the iteration after three steps. (12 M)

Q3: i. Write the algorithm for LU-decomposition.

 ii. If yk = k⁴, then find Δ⁴yk.

 iii. Find the normal equations of the curve y = b e^(ax), using the least squares method.



Q4: a. Find the polynomial of degree four using Newton's backward difference formula for the
following data (0, 1), (0.2, 1.2214), (0.4, 1.4918), (0.6, 1.8221) and (0.8, 2.2255), and
estimate the value of f(0.65).

 b. Construct the Lagrange interpolating polynomials for the function f(x) = e^(2x) cos(3x),
with x0 = 0, x1 = 0.3 and x2 = 0.6, and find f(0.1).

 c. Use Simpson's 3/8 rule to find the value of the integral of x/(x + 1) from 1 to 5, when n = 4.

 (16 M)





 Best wishes

………………………..



Final Exam (Numerical Analysis, 3rd year Physics)

Department of Mathematics-Science College

The University of Sulaimani,

June-2006, Time 3 hours.

First trial

Lecturer Faraidun K.H





Q1: a. Derive the secant method from the Newton-Raphson method. (10 M)

 b. Find the approximate solution of e^x − 3x − 4 = 0, using an iterative method with ε = 0.01.



Q2: a. What is the condition for the Newton-Raphson method to converge? (10 M)

 b. Solve the non-linear system

 x² + y² − 4 = 0,   x² − y + 0.5 = 0,

 with the initial point (0.2, 1).



Q3: 1. Prove or disprove, with explanation: (10 M)

 Δ⁴yk = yk+4 − 4yk+3 + 6yk+2 − 4yk+1 + yk.

 2. Find the normal equations of the curve y = ax⁵ + b, using the least squares method.
.



Q4: a. Discuss the cubic spline functions, and find the quadratic spline for the following data:

 (0, 0), (5, 2), (7, 1), (8, 2) and (10, 20).

 b. From the following data, find the values of f′(5) and f″(5): (10 M)

 (4.9, 1.589), (4.95, 1.599), (5, 1.61), (5.05, 1.619) and (5.1, 1.629).



Q5: a. Find the integral of f(x) from a to b at four points to derive Simpson's 3/8 rule,
and use the composite Simpson's rule to find an approximate value of the integral over [1, 3].

 b. Use a Taylor series to find the solution y(x) of the initial value problem

 y′ = x + y, y(0) = 1,

 and compute y(0.1). (10 M)



---……………………….


Q1: a. Write three differences between the Newton-Raphson method and the Bisection method. (15 M)

 b. Find the approximate solution of x³ − 2x + 2 = 0, using an iterative method,
and determine the oscillatory convergence type.


Q2: a. Write the outline of the LU-decomposition method. (15 M)

 b. Solve the non-linear system with the given initial point.


Q3: 1. Prove or disprove, with explanation: (10 M)

 Δ⁴yk = yk+4 − 4yk+3 + 6yk+2 − 4yk+1 + yk.

 2. Find the normal equations of the curve y = ax⁵ + b, using the least squares method.



Q4: a. Let y = f(x) = x cos(x) on x = 0(0.4)1.2. Use the Lagrange method to construct a cubic
interpolation polynomial. (10 M)

 b. From the following data, find the values of





Q5: a. Use the trapezoidal rule to find an approximate value of the integral (10 M)

 of e^x (1 + sin(x)) from 0 to 1.2, with n = 5,

 and compare with Simpson's rule to find an error estimate.

……………………………….

(Numerical Analysis, 2nd year Statistics & Computer) Third Exam

Department of Statistics & Computer, May 14, 2008, Time 1.5 hours.

The University of Sulaimani, Lecturer Faraidun K.H



Q1: a. Find the following: 1. 2. 3.

 b. Given , ,,, then find ?

 c. What are the differences between curve fitting and interpolation?

Q2: i. Find a polynomial of degree four which takes the values

 (2, 0), (4, 0), (6, 1), (8, 0) and (10, 0).

 ii. Find the normal equation of convert to linear form and find

 for the points (-4, 4), (1, 6), (2, 10) and (3, 8).

 iii. From the following data find y(5) , using Bessel’s formula

 (0, 143), (4, 158), (8, 177) and (12, 199).

……………………………………….

Final Exam (Numerical Analysis 2nd year statistics & computer)

Department of statistics & computer –Commerce College

The University of Sulaimani,

 June.4 -2008, Time 3 hours.

First Trial

Lecturer Faraidun K.H





Q1: a. Define the following: i. Δ, ∇; ii. Truncation Error. (12 M)

 b. Write the difference between the false position method and the Newton-Raphson method.



Q2: i. Find the real root of cos(x) = x e^x by the iteration method, where ε = 0.0001. (12 M)

 ii. Find the iterative formula to find the value of 1/19, where ε = 0.001.





Q3: a. Solve the system, using the Gauss-Seidel and Jacobi methods, and compare
the results after two steps. (12 M)

 b. What is the principle of the least squares method? Find the normal equations of the
curve y = ax + b/x.

Q4: 1. Find y6 if y0 = 9, y1 = 18, y2 = 20, y3 = 24. (12 M)

 2. From the following data, estimate f(0.12), f(0.26) and f′(0.12):

 (0.1, 0.1003), (0.15, 0.1511), (0.2, 0.2027), (0.25, 0.2553), (0.3, 0.3093).



Q5: a. Explain the difference between interpolation and curve fitting. (12 M)

 b. Compute the value of the integral of x e^x from 0.5 to 0.7, with n = 5, using the trapezoidal rule
and Simpson's rule. Explain your answer.





 Good Luck

………………………

(Numerical Analysis 3rd year Mathematics) Third Exam

Department of Mathematics, May 2010, Time 1.5 hours.

The University of Sulaimani Lecturer Dr. Faraidun K.H

Science Education College





Q1: a. State and prove the existence theorem for the interpolation polynomial.

 b. Prove or disprove the following:

 i. μ² = 1 + δ²/4; ii. Δⁿyk = ∇ⁿyn+k.

 iii. Find y6 if y0 = 9, y1 = 18, y2 = 20, y3 = 24, given that the third differences are constant.

 iv. If yk = kⁿ, then Δⁿyk = n!.

Q2: a. From the following data, find sin(52°) by using Newton forward interpolation, and also estimate the error:
(45°, 0.7071), (50°, 0.766), (55°, 0.8192), (60°, 0.866).

 b. Find the normal equations of the curve y = bx/(x + a).

 c. Find the natural cubic spline on [1, 2], and estimate y′(1.5), for the following data:

 (1, 1), (2, 5), (3, 11), (4, 8).

……………………………

(Numerical Analysis 3rd year Mathematics) First Trial : Final Exam

Department of Mathematics, June 2010, Time 3 hours.

The University of Sulaimani Lecturer Dr. Faraidun K.H

Science Education College



 Note: Each question carries (10 marks).




Q1: a- Find the absolute, relative and percentage errors, if x is rounded off to three decimal digits,
given x = 0.005998.

 b- Show that the Newton-Raphson method converges of order two.



Q2: i. Show that f(x) = 1 − x e^(1−x) has a double root at x = 1, and use the modified Newton-Raphson
method with x0 = 0.

 ii. How many steps of the Bisection method are needed to compute an approximate root of f(x) = 0
with error ε?

 iii. Use the Aitken method to find the approximate root of the function f(x) = x³ − 5x + 1, using only two steps.



Q3: a. Use the Gauss-Seidel method, in matrix notation, for two steps only:

 2x − y = 7,   −x + 2y − z = 1,   −y + 2z = 1.

 b. State and prove the fundamental theorem for the interpolation error.

 c. Find the values of a and b such that

 f(x) = x², 1 ≤ x ≤ 2;   f(x) = ax + b, 2 ≤ x ≤ 3,

 is a quadratic spline.



Q4: 1- Find Δ³y3 if y3 = 2, y4 = 6, y5 = 8, y6 = 17.

 2- Show that: i. E = 1 + Δ; ii. ∇ = 1 − E⁻¹.

 3- Use Lagrange interpolation at the points x = 1, 2, 3 for the function y = sin(x): find the Lagrange
polynomial and a bound of the truncation error, and estimate y(1.5) and y(2.5).



Q5: a. Evaluate the integral of e^(−x²) x² over (−∞, ∞), using the Gauss-Hermite formula with n = 2.

 b. Find a, b and c such that

 the integral of x f(x) from 0 to 1 ≈ a f(0) + b f(1/2) + c f(1),

 which is exact for polynomials of the highest possible degree; then use the formula to evaluate
the integral of x/(1 + 3x) from 0 to 1.

 c. Use Taylor's method to compute y(0.2) and y(0.4), for two steps, of

 y′ = x + y², y(0) = 0.



……………………………………………………….




Q3: a. Solve the system of equations in x, y and z, starting from x0 = 0.9.

 b. Find f(1.6) and f′(1.6) for the given data, using the divided difference formula: (1, 0),
(1.5, 0.40547), (2, 0.69315) and (3, 1.09861).

 c. Find the values of a and b such that

 f(x) = x², 1 ≤ x ≤ 2;   f(x) = ax + b, 2 ≤ x ≤ 3,

 is a quadratic spline.



Q4: 1- Find Δ³y3 if y3 = 2, y4 = 6, y5 = 8, y6 = 17.

 2- Show that: i. E = 1 + Δ; ii. ∇ = 1 − E⁻¹.



Q5: a. Evaluate the integral of cos(2x)/(1 − x²)^(1/2) over (−1, 1), using the Gauss-Chebyshev
quadrature formula for three points.

 b. Find y at x = 0.1 and 0.2 of y′ = y − 2x/y, y(0) = 1, and find the truncation error, using the
Runge-Kutta fourth order method.









Good Luck



