Truncation error is the discrepancy that arises when a finite number of steps is used to approximate an infinite process. It is the difference between the truncated value and the exact value, and it is present even with infinite-precision arithmetic. In numerical analysis and scientific computation, truncation error is the error made when an infinite sum is cut off and approximated by a finite sum. This page is chiefly concerned with the truncation error of numerical methods for ordinary differential equations.
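As a concrete illustration of truncating an infinite sum, the sketch below (an assumed example, not taken from this text) cuts off the Taylor series of exp(x) after a finite number of terms and reports the resulting truncation error.

```python
# Assumed illustration: truncation error from cutting off the Taylor series
# of exp(x) = sum_{k=0}^{inf} x**k / k! after n_terms terms.
import math

def exp_truncated(x, n_terms):
    """Partial (truncated) sum of the Taylor series of exp(x)."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
for n in (2, 4, 8, 16):
    approx = exp_truncated(x, n)
    # Truncation error: difference between the exact value and the truncated sum.
    print(n, approx, math.exp(x) - approx)
```

The error shrinks rapidly as more terms are kept, which is exactly the discrepancy the definition above describes.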
It is closely related to discretization error, the error that arises when a finite number of steps in a calculation is used to approximate an infinite process. For example, in numerical methods for ordinary differential equations, the continuously varying function that solves the differential equation is approximated by a process that advances step by step, and the error this introduces is a discretization or truncation error. Rounding error (the consequence of using finite-precision floating-point numbers on computers) is also often called truncation error, especially when a number is rounded by chopping. Knowing the truncation error or other error measures is important for program verification, since it allows convergence rates to be established empirically.
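The following sketch shows such an empirical convergence check. The model problem u' = -u with u(0) = 1 and the choice of the forward Euler method are assumptions made purely for illustration.

```python
# Assumed illustration: empirically estimating the convergence rate of the
# forward Euler method on u' = -u, u(0) = 1, whose exact solution is exp(-t).
import math

def euler_error(dt, T=1.0):
    n_steps = round(T / dt)
    u = 1.0
    for _ in range(n_steps):
        u += dt * (-u)                 # one forward Euler step for u' = -u
    return abs(u - math.exp(-T))       # accumulated (global) discretization error

errors = [euler_error(dt) for dt in (0.1, 0.05, 0.025, 0.0125)]
for e_coarse, e_fine in zip(errors, errors[1:]):
    # Observed order of accuracy: log2 of the error ratio when dt is halved;
    # for forward Euler this should approach 1.
    print(math.log2(e_coarse / e_fine))
```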
We will be concerned with calculating the truncation errors that arise in finite difference formulas and in finite difference discretizations of differential equations. For one-step methods, the local truncation error measures the amount by which the exact solution of the differential equation fails to satisfy the difference equation. As a simple exercise, consider the truncation error incurred when an integral is approximated by a left Riemann sum with two segments of equal width (see the sketch after this paragraph). The term is used in several contexts, including truncation of infinite series, finite-precision arithmetic, finite differences, and differential equations. We will first look at a particular example in detail, and then we will list the truncation error of the most common finite difference approximation formulas.
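A minimal sketch of that Riemann-sum exercise; the integrand f(x) = x**2 on [0, 1] is an assumed choice made only for illustration.

```python
# Assumed illustration: truncation error of a left Riemann sum with two
# equal-width segments for the integral of f(x) = x**2 over [0, 1] (exact value 1/3).
def left_riemann(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))   # heights taken at left endpoints

f = lambda x: x**2
approx = left_riemann(f, 0.0, 1.0, 2)   # two segments: heights f(0.0) and f(0.5)
exact = 1.0 / 3.0
print(approx, exact - approx)           # truncation error of the two-segment sum
```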
In the mathematical process of integration, the error caused by using a finite number of rectangles instead of an infinite number of them is a truncation error. In computing, truncation error is one of the main sources of error in numerical methods for the algorithmic solution of continuous problems, and its analysis, estimation, and control are central problems in numerical analysis. The following text will provide many examples of how to calculate truncation errors for finite difference discretizations of ODEs and PDEs.
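As a starting point for those examples, the sketch below compares the truncation error of the forward and centered difference approximations to a first derivative; the test function f(x) = sin(x) and the evaluation point x = 1 are assumptions made for illustration.

```python
# Assumed illustration: truncation error of two finite difference approximations
# to f'(x) for f(x) = sin(x) at x = 1.  The forward difference is first-order
# accurate (error ~ h); the centered difference is second-order accurate (~ h**2).
import math

f, x, exact = math.sin, 1.0, math.cos(1.0)
for h in (0.1, 0.05, 0.025):
    forward = (f(x + h) - f(x)) / h
    centered = (f(x + h) - f(x - h)) / (2 * h)
    print(h, abs(forward - exact), abs(centered - exact))
```

Halving h roughly halves the forward-difference error and quarters the centered-difference error, matching the orders of accuracy noted in the comments.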