A Chronology of Interpolation
From Ancient Astronomy to Modern Signal and Image Processing
Proceedings of the IEEE, vol. 90, no. 3, March 2002, pp. 319-342
It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it.
G. W. Leibniz, Historia et Origo Calculi Differentialis (ca. 1714). Translation as in J. M. Child, "Newton and the Art of Discovery", in Isaac Newton 1642-1727: A Memorial Volume, W. J. Greenstreet (ed.), G. Bell and Sons, London, 1927, pp. 117-129.
Ancient Times and the Middle Ages
ca. 300 BC and earlier: Babylonian astronomers use linear and higher-order interpolation to fill gaps in ephemerides of the sun, moon, and the then-known planets, recorded on cuneiform tablets.
For explanations and more details, see O. Neugebauer, Astronomical Cuneiform Texts, Lund Humphries, London, 1955.
ca. 150 BC: Hipparchus of Rhodes uses linear interpolation in the construction of tables of the so-called "chord-function" (related to the sine function) for the purpose of computing the position of celestial bodies.
ca. 140 AD: In an influential work known as the Almagest, Claudius Ptolemy uses an "adaptive" linear interpolation technique in constructing tables of functions of more than one variable defined for astronomical purposes.
600 AD: In producing the so-called Imperial Standard Calendar, the Chinese astronomer Liu Zhuo uses an interpolation formula equivalent to the second-order version of the Gregory-Newton formula.
625 AD: In his work Dhyanagraha, the Indian astronomer-mathematician Brahmagupta describes a method for second-order interpolation of the sine and versed sine function. In a later work, Khandakhadyaka (665 AD), he also describes a method for interpolation of unequal-interval data.
ca. 800 AD: In a commentary on a seventh-century work by Bhaskara I, Govindasvami uses an interpolation formula equivalent to the second-order version of the Newton-Gauss formula.
For a translation and explanation of this Sanskrit passage, see R. C. Gupta, "Second Order Interpolation in Indian Mathematics up to the Fifteenth Century", Indian Journal of History of Science, vol. 4, nos. 1 & 2, 1969, pp. 86-98.
727 AD: The Chinese monk Yi Xing produces the so-called Da Yan Calendar and for that purpose uses a second-order interpolation formula that can handle unequal-interval observation data.
ca. 1000 AD: The erudite Arabian scientist al-Biruni writes his major work al-Qanun'l-Mas'udi (in Latin later referred to as Canon Masudicus), in which he describes a method for second-order interpolation. Several recent authors suspect Indian influences, given al-Biruni's knowledge of India and the Sanskrit language, as well as the fact that the works of Brahmagupta (see above) were translated into Arabic as early as the eighth century AD.
1280 AD: Together with others, the Chinese Guo Shoujing produces the so-called Works and Days Calendar, for which he uses third-order interpolation techniques.
1303 AD: The Chinese mathematician Zhu Shijie writes his major work Jade Mirror of the Four Origins, in which he poses some interesting problems. In explaining the answer to one of them, he gives a general rule that is very closely related to the Gregory-Newton interpolation formula.
The Age of Scientific Revolution
1611: Harriot uses an interpolation formula involving up to fifth-order differences equivalent to the Gregory-Newton formula. This is the oldest known example of the use of interpolation techniques in the Western World.
1624: In the introductory chapters of his important work Arithmetica Logarithmica, and later also in his Trigonometria Britannica (1633), Briggs describes the precise rules by means of which he carried out the computations in constructing his logarithmic and trigonometric tables. These include his interpolation rules, which amount to the Gregory-Newton formula for the case when third- and higher-order differences are zero.
1655: Wallis, in his Arithmetica Infinitorum, is the first to use the Latin verb interpolare (to interpolate) in a mathematical sense.
Taken from his Opera Mathematica, vol. I, Oxford, 1695. For an explanation of parts of this work, see J. F. Scott, The Mathematical Work of John Wallis, Chelsea Publishing Company, New York, N. Y., 1981, Chapter 4.
Note that the word "interpolation" had already been introduced in the English language around 1612, and was then used in the sense of "to alter or enlarge texts by insertion of new matter".
1670: In a letter to Collins, dated November 23, 1670, Gregory describes the now well-known Gregory-Newton interpolation formula for equal-interval data.
1675: Newton begins his fundamental work on the topic and eventually lays the foundation of classical interpolation theory. His contributions can be found in:
A letter to Smith, dated May 8, 1675, and also a letter to Oldenburg, dated October 24, 1676, in the latter of which he wrote:
(...) I have another method not yet published by which the problem is easily dealt with. It is based upon a convenient, ready and general solution of this problem: to describe a geometrical curve which shall pass through any given points. (...) Although it may seem to be intractable at first sight, it is nevertheless quite the contrary. Perhaps indeed it is one of the prettiest problems that I can ever hope to solve.
A manuscript entitled Methodus Differentialis, published in 1711, but probably written in the mid-1670s.
For a translation and explanation, see D. C. Fraser, Newton's Interpolation Formulas, C. & E. Layton, London, 1927.
A manuscript entitled Regula Differentiarum, written in 1676 but first discovered in the 20th century.
For a transcription, translation, and explanation, see D. T. Whiteside, The Mathematical Papers of Isaac Newton, vol. IV (1674-1684), Cambridge University Press, Cambridge, 1971, Section I.1.3, pp. 36-51.
Lemma V from Book III of his celebrated Philosophiae Naturalis Principia Mathematica, published in 1687.
For a translation and explanation, see D. C. Fraser, Newton's Interpolation Formulas, C. & E. Layton, London, 1927.
This Lemma contains his general interpolation formula for unequal-interval data as well as the formula for equal-interval data, the latter of which is in fact a special case of the former and is equal to the one described earlier by Gregory, hence the Gregory-Newton formula.
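As a modern illustration of the equal-interval case, the Gregory-Newton forward-difference formula can be sketched in a few lines of Python (the function name and conventions below are my own, not taken from any of the historical sources):

```python
def gregory_newton(xs, ys, x):
    """Evaluate the Gregory-Newton forward-difference interpolating
    polynomial at x, for equally spaced abscissae xs with ordinates ys."""
    h = xs[1] - xs[0]                      # common spacing (assumed constant)
    # Leading entries of the forward-difference table: y0, Dy0, D^2 y0, ...
    leading, diffs = [ys[0]], list(ys)
    while len(diffs) > 1:
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        leading.append(diffs[0])
    # Sum binomial(p, k) * D^k y0 with p = (x - x0) / h.
    p = (x - xs[0]) / h
    result, coeff = 0.0, 1.0
    for k, d in enumerate(leading):
        result += coeff * d
        coeff *= (p - k) / (k + 1)
    return result
```

For data sampled from a polynomial of matching degree the formula reproduces it exactly; e.g., for the squares 0, 1, 4, 9 at abscissae 0 through 3, it returns 2.25 at x = 1.5.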
1719: Stirling discusses several of Newton's interpolation formulae in the Methodus Differentialis, one of which is now known as the Newton-Stirling formula. In 1730, Stirling publishes a more elaborate booklet on the topic.
1779: Waring publishes a very elegant alternative representation of Newton's general formula for arbitrary-interval data that does not require the computation of divided differences. The formula is nowadays generally attributed to Lagrange, who proposed the same formula 16 years later.
From E. Waring, "Problems Concerning Interpolations", Philosophical Transactions of the Royal Society of London, vol. 69, 1779, pp. 59-67.
1783: Euler writes a tract which contains a variant of Newton's divided-differences formula close to the Waring-Lagrange formula.
1795: In apparent ignorance of the works of Waring and Euler, Lagrange publishes the formula now known under his name.
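The Waring-Lagrange formula lends itself to an equally short sketch in Python (again with names of my own choosing); note that, unlike the Gregory-Newton formula, it requires no difference table and places no restriction on the spacing of the abscissae:

```python
def waring_lagrange(xs, ys, x):
    """Evaluate at x the unique polynomial of degree < len(xs)
    passing through the points (xs[i], ys[i])."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # Basis polynomial: equal to 1 at xs[i], 0 at every other abscissa.
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total
```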
1812: Gauss gives a lecture on interpolation in which he discusses several formulae, including the one which is nowadays known as the Newton-Gauss formula. This fact is known only because one of his students, Encke, made notes of it, which were published in 1830.
1821: In his Cours d'Analyse, Cauchy studies interpolation by means of a ratio of two polynomials and shows that the solution to this problem is unique, the Waring-Lagrange formula being the special case in which the second polynomial is identically one.
1824: Bessel publishes a paper on computing the motion of the moon. In it, he also goes into some detail about the formula he used for his interpolations, because he could "not recollect having seen it anywhere". The formula is, however, equivalent to one of Newton's in his Methodus Differentialis, which had earlier been discussed by Stirling and, so it seems, also by Laplace. It is now known as the Newton-Bessel formula.
1841: Cauchy finds an expression, the so-called Cauchy remainder term, for the error caused by truncating finite-difference interpolation series.
1860: Borchardt, and a little later Kronecker, publish first works on the problem of multivariate interpolation in the case of fairly arbitrary point configurations.
1874: Tchebychef introduces his famous polynomials, the zeroes of which are the optimal abscissae for interpolation, in the sense that the absolute value of the Cauchy remainder term is minimized.
1878: Hermite publishes his solution to the problem of finding a polynomial that assumes prespecified values also for its first few derivatives at given points, where the order of the highest derivative may differ from point to point.
1885: Weierstrass proves the famous approximation theorem, which states that every continuous function on a closed interval can be approximated uniformly to any prescribed accuracy by a polynomial. This justifies the use of polynomials for function approximation. Later it would become clear, however, that this theorem does not necessarily apply if the polynomial is forced to be interpolatory. A notorious counter-example of this theorem in the case of interpolation was published in 1901, by Runge.
1900: Everett publishes an interpolation formula which involves only even-order differences of the table entries between which to interpolate. It is said that this formula had been proven earlier by Laplace.
1906: Birkhoff publishes a paper on the most general interpolation problem: given any set of points, find a polynomial function that satisfies given criteria concerning its value and/or the values of any of its derivatives at each individual point. Birkhoff interpolation is also known as "lacunary interpolation"; Waring-Lagrange interpolation and Hermite interpolation are in fact special cases of it.
The Information and Communication Era
Note that here, the focus of attention is on two very important theorems for interpolation of regularly-sampled data and their later impact on the fields of signal and image processing. Several alternative interpolation methods are also mentioned briefly.
1915: E. T. Whittaker studies the Newton-Gauss interpolation formula for an infinite number of equidistant abscissae on both sides of a given point and shows that, under certain conditions, the resulting interpolant converges to what he calls the "cardinal function", which consists of a linear combination of shifted functions of the form (sin x)/x. Analyzing the frequency content of this interpolant, he observes that all constituents with a period of less than 2w, with w the distance between the abscissae, are absent.
From E. T. Whittaker, "On the Functions which are Represented by the Expansions of Interpolation-Theory", Proceedings of the Royal Society of Edinburgh, vol. 35, 1915, pp. 181-194.
In recent years it has become known that closely related variants of the cardinal function had appeared earlier, in the works of Borel (1899), Hadamard (1901), de la Vallée Poussin (1908), and Steffensen (1914). None of them, however, seems to have studied its "bandlimited" nature.
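A direct transcription of the cardinal series in Python may clarify its form; the function and argument names below are illustrative only, and the genuinely infinite sum is of course truncated here to a finite set of given samples:

```python
import math

def cardinal_series(samples, w, x, x0=0.0):
    """Truncated cardinal series: sum over k of
    samples[k] * sinc((x - x0 - k*w) / w),
    with w the common distance between the abscissae."""
    def sinc(t):
        return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)
    return sum(s * sinc((x - x0 - k * w) / w) for k, s in enumerate(samples))
```

At any abscissa x0 + k*w the series reproduces the corresponding sample exactly, since every other term vanishes there.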
1926: Ferrar publishes a paper on the cardinal function which contains more refined results concerning its convergence.
1928: Writing about telegraph transmission theory, Nyquist points out the importance of having a sampling interval equal to the reciprocal value of twice the highest frequency of the signal to be transmitted.
1929: J. M. Whittaker (son of E. T. Whittaker) publishes more refined statements as to the relation between the cardinal series and the truncated Fourier integral representation of a function in the case of convergence. These also appear in his 1935 book, Interpolatory Function Theory.
1946: Continuing work on osculatory interpolation by many others in the preceding decades, Schoenberg proves that any of the then-existing polynomial interpolation formulae may be written as a linear combination of shifted versions of some basic function, which completely determines the properties of the resulting interpolant. In addition, he introduces the notion of a mathematical spline and proves that any spline curve may be written as a linear combination of B-splines (a term he did not use at the time but would himself coin some 20 years later).
From I. J. Schoenberg, "Contributions to the Problem of Approximation of Equidistant Data by Analytic Functions", Quarterly of Applied Mathematics, vol. 4, nos. 1 & 2, 1946, pp. 45-99 & 112-141.
1948/49: Shannon, referring to the works of J. M. Whittaker and Nyquist, presents and proves the now well-known sampling theorem.
From C. E. Shannon, "Communication in the Presence of Noise", Proceedings of the Institution of Radio Engineers, vol. 37, no. 1, 1949, pp. 10-21.
Later it became known that similar or even fully equivalent theorems had been published earlier by Ogura (1920), Kotel'nikov (1933, in Russian), and Raabe (1939, in German), and around the same time as Shannon by Someya (1949, in Japanese), and Weston (1949).
The theorems of Schoenberg and Shannon greatly stimulate the way of thinking about interpolating functions as linear combinations of basis functions, or kernels. In the decades to follow, both theorems find their way into applications, initially in largely different fields: Shannon's in communication engineering, in numerous signal processing and analysis applications, and to some degree in numerical analysis; Schoenberg's in approximation theory, numerical analysis, statistics, geometrical modeling, computer-aided geometric design, and computer graphics.
1971: Strang & Fix present an expression for the upper bound of the error introduced by the use of a given kernel for function approximation. In particular they show that the approximation order of the kernel, which determines the rate at which the error goes to zero when the sampling interval goes to zero, can be obtained from a relatively simple analysis of the kernel's spectrum.
1973: Schafer & Rabiner derive the kernels involved in Waring-Lagrange interpolation, study their spectral properties, and conclude that the higher-order kernels possess considerably better properties. They also discuss the design of alternative finite impulse response interpolators based on prespecified bandpass and bandstop characteristics.
1973: Rifman introduces a still very popular interpolation technique known as "cubic convolution" for the purpose of geometrical rectification of digital images obtained from the first Earth Resources Technology Satellite (ERTS) launched by the US National Aeronautics and Space Administration (NASA). The technique is later discussed in more detail by Simon (1975) and Bernstein (1976).
1978: The use of splines in digital image interpolation is pioneered by Hou & Andrews. They demonstrate the superiority of B-spline interpolation over other popular approaches, such as nearest-neighbor, linear, or windowed-sinc interpolation.
1981: Keys derives two cubic convolution kernels with optimal approximation orders. Although the derivations are new, the third-order kernel had been published earlier by Catmull & Rom (1974) and Simon (1975). Recently I have shown that the interpolant resulting from this kernel is identical to that resulting from the osculatory interpolation formula published by Karup and King around 1900. In fact, Karup even studied what we would now call the impulse response of his technique and included its graph in his paper. Up to a scaling factor of 5, it is exactly Keys' third-order cubic convolution kernel.
From J. Karup, "Ueber eine Neue Mechanische Ausgleichungsmethode", Transactions of the Second International Actuarial Congress, G. King (ed.), C. & E. Layton, London, 1899, pp. 31-77.
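For reference, Keys' one-parameter cubic convolution kernel can be written out directly; the Python sketch below follows the standard piecewise-cubic form, with a = -1/2 giving the third-order kernel discussed above:

```python
def cubic_convolution(t, a=-0.5):
    """Keys' cubic convolution kernel; a = -1/2 yields the
    third-order kernel."""
    t = abs(t)
    if t < 1.0:
        return (a + 2.0) * t**3 - (a + 3.0) * t**2 + 1.0
    if t < 2.0:
        return a * (t**3 - 5.0 * t**2 + 8.0 * t - 4.0)
    return 0.0
```

The kernel is interpolating (it equals 1 at the origin and 0 at all other integers) and, for any fixed offset, its four nonzero samples sum to one, so constants are reproduced exactly.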
1983: Park & Schowengerdt publish results which are complementary to those of Keys in finding the optimal parameter for cubic convolution interpolation. In addition they introduce an interesting measure for the error caused by sampling and reconstruction using a given kernel.
1983: Parker et al. publish a first comparison of interpolation techniques in medical image processing. They fail, however, to implement cubic B-spline interpolation correctly, and arrive at erroneous conclusions concerning this technique.
1986 and later: Numerous researchers study the possibility of interpolation based on the Fourier transform, the Hartley transform, and the discrete cosine transform.
1986/89: Deslauriers & Dubuc study the problem of b-adic (iterative) interpolation and conclude that the main properties of such an interpolation process are determined by the fundamental function, obtained by feeding the process with a discrete impulse sequence.
1988: Mitchell & Netravali derive a still quite popular two-parameter family of cubic kernels, special cases of which are Keys' third-order kernel and the cubic B-spline. Later authors refer to this family as the BC-splines.
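The BC-spline family can be stated compactly; the Python sketch below uses the standard piecewise form, in which (B, C) = (1, 0) gives the cubic B-spline and (B, C) = (0, 1/2) gives Keys' third-order kernel:

```python
def bc_spline(t, B, C):
    """Mitchell-Netravali two-parameter family of cubic kernels."""
    t = abs(t)
    if t < 1.0:
        return ((12 - 9*B - 6*C) * t**3
                + (-18 + 12*B + 6*C) * t**2
                + (6 - 2*B)) / 6.0
    if t < 2.0:
        return ((-B - 6*C) * t**3
                + (6*B + 30*C) * t**2
                + (-12*B - 48*C) * t
                + (8*B + 24*C)) / 6.0
    return 0.0
```

Mitchell & Netravali themselves recommended (B, C) = (1/3, 1/3) as a perceptually good compromise between blur and ringing.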
1989: Maeland publishes a paper on the comparison of interpolation methods and concludes from an analysis of the true spectra of several interpolation kernels that cubic spline interpolation is superior compared to cubic convolution.
1990: Raya & Udupa present a method known as shape-based interpolation, the basic idea of which had appeared shortly before in the numerical analysis literature, and investigate its application to interpolation of segmented objects. Follow-up papers are published by Herman (1992) and Grevera & Udupa (1996) to mention but a few.
1991: Unser et al. present an elegant approach to spline interpolation of regularly sampled data which is based on a computationally very efficient recursive prefilter.
1993: Following up on this result, Unser et al. publish a two-part paper in which they study the fundamental properties of spline interpolation and discuss their application to signal and image processing. In the years to follow, their results would lead to efficient algorithms for image rotation, enlargement or reduction, the construction of multiresolution pyramids, image registration, wavelet transformation, texture mapping, and many other applications.
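A sketch of that recursive prefilter for the cubic case may be helpful. The Python version below follows the causal/anti-causal scheme popularized by Unser and colleagues, with mirror boundary conditions; the truncation of the initialization sum and all names are my own simplifications, not theirs:

```python
import math

def cubic_bspline_coeffs(s, tol=1e-12):
    """Compute the cubic B-spline coefficients c of a uniformly sampled
    signal s (len >= 2), so that sum_k c[k]*beta3(x - k) interpolates s
    at the integers. Mirror boundary conditions are assumed."""
    z = math.sqrt(3.0) - 2.0          # single pole of the cubic prefilter
    n = len(s)
    c = [6.0 * v for v in s]          # overall gain for the cubic case
    # Causal initialization: truncated geometric sum (mirror boundaries).
    horizon = min(n, int(math.ceil(math.log(tol) / math.log(abs(z)))))
    c[0] = sum(c[k] * z**k for k in range(horizon))
    # Causal recursion.
    for k in range(1, n):
        c[k] += z * c[k - 1]
    # Anti-causal initialization and recursion.
    c[n - 1] = (z / (z * z - 1.0)) * (c[n - 1] + z * c[n - 2])
    for k in range(n - 2, -1, -1):
        c[k] = z * (c[k + 1] - c[k])
    return c
```

Filtering the coefficients back with the discrete cubic B-spline mask (1, 4, 1)/6 recovers the original samples, which is precisely the interpolation condition the prefilter enforces.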
1993: Schaum studies the kernels that can be derived from the Waring-Lagrange interpolation formula and presents an interesting error analysis which extends the results of Park & Schowengerdt (1983). In particular, he points out the link between the behavior of his error function at zero frequency and the capability of the kernels to reproduce monomials.
1993: Stytz & Parrot experiment with an interpolation technique known long before in geostatistics under the name kriging, and investigate its application to medical image interpolation.
1994 and later: Several researchers develop interpolation kernels defined explicitly as linear combinations of B-splines. Examples are the kernels by Chen et al. (1994) and Knockaert & Olyslager (1999).
1995 and later: Numerous researchers develop adaptive signal and image interpolation techniques. Examples include the techniques by Jensen & Anastassiou (1995), Thurnhofer & Mitra (1996), Ramponi (1999), Gao & Yin (1999), and Li & Orchard (2001).
1997 and later: The design methodologies originally used by Keys (1981) are reused by several researchers in developing alternative piecewise polynomial kernels. Examples of the latter include the quadratic kernel by Dodgson (1997), the fifth-order quartic kernel by German (1997), and the infinitely large family of third-order odd-degree kernels by myself (1999).
1999: Blu & Unser extend the results of Strang & Fix (1971) and Park & Schowengerdt (1983) concerning quantitative approximation error estimation. In particular they show that the error introduced by a given interpolation or approximation kernel consists of two parts, one of which is dominant and is determined entirely by the spectrum of the function to be approximated and a certain error function, the latter of which depends only on the kernel.
1999 and later: Notwithstanding earlier studies by Parker et al. (1983), Schreiner et al. (1996), Ostuni et al. (1997), Haddad & Porenta (1998), and Grevera & Udupa (1998), several independent large-scale evaluations of convolution-based interpolation methods are carried out by Lehmann et al. (1999), Thevenaz et al. (2000), and myself (2001). These studies all lead to the conclusion that splines generally offer the best cost-performance trade-off. Smaller errors can be achieved by using the so-called O-MOMS kernels recently introduced by Blu et al. (2001), if one is prepared to sacrifice the smoothness of the interpolant.
To be continued...
Copyright © 2002 by Erik Meijering. All rights reserved. No part of this page may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the author.