Truncation error is the discrepancy that arises when an infinite process is approximated by a finite number of steps. It is present even with infinite-precision arithmetic, because it is caused by truncating an infinite Taylor series to form the algorithm. In numerical analysis and scientific computing, truncation error is the error committed when an infinite sum is cut off and approximated by a finite sum. The term is sometimes also applied to rounding error, especially when a number is rounded by truncation.
Knowing the truncation error or other error measures is important for program verification by empirically establishing convergence rates. Truncation error generally increases as the step size increases, while rounding error decreases as the step size increases. The term is used in several contexts, including the truncation of infinite series, finite-precision arithmetic, finite differences, and differential equations. An advantage of truncation error analysis over empirical estimation of convergence rates, or over detailed analysis of a special problem with a closed-form expression for the numerical solution, is that it reveals the accuracy of the individual building blocks of a numerical method and how each building block affects the overall accuracy. As a simple exercise, consider the truncation error incurred by a left Riemann sum with two segments of equal width.
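The Riemann-sum exercise above can be sketched as follows. This is an illustrative example, not taken from the source: the integrand f(x) = x², the interval [0, 2], and the helper name `left_riemann` are all assumptions chosen to make the truncation error easy to compute by hand.

```python
def left_riemann(f, a, b, n):
    """Left Riemann sum of f over [a, b] using n segments of equal width."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

# Hypothetical example: approximate the integral of x^2 over [0, 2].
f = lambda x: x**2
exact = 2**3 / 3                         # exact value: 8/3
approx = left_riemann(f, 0.0, 2.0, 2)    # segments [0,1] and [1,2], width 1
truncation_error = exact - approx
print(approx)            # 1.0  (f(0)*1 + f(1)*1)
print(truncation_error)  # 8/3 - 1 = 5/3 ≈ 1.667
```

Here the entire error is truncation error: the arithmetic is exact, and the discrepancy comes solely from replacing the integral (an infinite limiting process) by a finite two-term sum.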
We will be concerned with calculating the truncation errors that arise in finite difference formulas and in finite difference discretizations of differential equations.
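As a minimal preview of finite difference truncation error, the sketch below (my own illustrative example, with f = sin and the evaluation point x = 1 chosen arbitrarily) measures the error of the forward difference formula (f(x+h) - f(x))/h against the known derivative. Taylor expansion predicts the leading error term is (h/2)·f''(x), so halving h should roughly halve the error, which is the empirical convergence-rate check mentioned above.

```python
import math

def forward_diff(f, x, h):
    """First-order forward difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # derivative of sin(x) at x = 1
for h in (0.1, 0.05, 0.025):
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(h, err)  # error shrinks roughly in proportion to h: first-order accuracy
```

Observing that the error ratio between successive step sizes approaches 2 verifies the first-order convergence rate predicted by the truncation error analysis.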