Taylor's Theorem

Taylor's theorem is a cornerstone of maths. The gist is that if you have some function $f(x)$ and you know all of its derivatives at, say, $x=0$, then (for a well-behaved function) you can recreate the entire function from this information.

If you know everything about the function at one point, you know everything about the function!

Here is the degree-$k$ Taylor polynomial for $f(x)$, expanding around $x=0$:
$T_k(x) = f(0) + x\cdot f'(0) + \frac{x^2}{2!} f''(0) + \dots + \frac{x^k}{k!} f^{(k)}(0)$

(Note: we are going to expand around 0 just to keep clutter to a minimum; it is straightforward to extend this result to expand around any other point.)
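(If you like to see things concretely: here is a minimal sketch of how you might build $T_k$ symbolically, assuming Python with sympy is available; taylor_poly is just an illustrative name:)

```python
import sympy as sp

x = sp.symbols('x')

def taylor_poly(f, k):
    # Build T_k(x) around 0 by summing x**i / i! * f^(i)(0) for i = 0..k
    return sum(sp.diff(f, x, i).subs(x, 0) * x**i / sp.factorial(i)
               for i in range(k + 1))

# Example: the degree-7 Taylor polynomial of sin(x)
print(taylor_poly(sp.sin(x), 7))  # x - x**3/6 + x**5/120 - x**7/5040 (term order may vary)
```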

and here is Taylor's Theorem:

Given that $f^{(n)}(\cdot)$ is continuous on $[0,x]$, we can write $f(x) = T_{n-1}(x) + \frac{x^n}{n!} f^{(n)}(\xi)$ for some $\xi \in [0,x]$

(We write that last $\frac{x^n}{n!} f^{(n)}(\xi)$ term as $E_n(x)$.)

Motivating question: how can we approximate $f(x)$ in the neighbourhood of $x=0$?

Can we find a function that has the same value and the same slope as $f$ at $x=0$?

Yes! $T_1(x) = f(0) + x\cdot f'(0)$ works.
Check: at $x=0$, $T_1(0) = f(0)$ and $T_1'(0) = f'(0)$, ok!

This could be seen as just constructing the tangent line to the curve at $x=0$: it is a common high school exam question. But here is the clever bit: let's keep going!

Now let's go one step further, and create a function that, at $x=0$, is identical to $f$ in value, slope (first derivative) and rate of change of slope (second derivative):

$T_2(x) = f(0) + x\cdot f'(0) + \frac{x^2}{2!} f''(0)$

Just differentiate it twice and evaluate at $x=0$, and you'll see $f''(0)$ fall out: $T_2(0) = f(0)$, $T_2'(0) = f'(0)$ and $T_2''(0) = f''(0)$.

i.e. $T_2(\cdot)$ has the same value, first derivative and second derivative as $f(x)$.

Etc.

Basically, if you take $T_k(x)$ and differentiate it repeatedly (up to $k$ times), the derivatives of $f$ at $x=0$ fall out.
i.e. for $i \leq k$, $T_k^{(i)}(0) = f^{(i)}(0)$
So at $x=0$, $T_k(x)$ has the same value, first derivative, second derivative, …, $k$th derivative as $f(x)$.
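(A quick sanity check of that claim, as a sketch in Python with sympy; taylor_poly is the same hypothetical helper as above, repeated so this snippet stands alone:)

```python
import sympy as sp

x = sp.symbols('x')

def taylor_poly(f, k):
    # T_k(x): the sum of x**i / i! * f^(i)(0) for i = 0..k
    return sum(sp.diff(f, x, i).subs(x, 0) * x**i / sp.factorial(i)
               for i in range(k + 1))

f = sp.exp(x) * sp.cos(x)   # any smooth function will do
k = 5
T = taylor_poly(f, k)

# At x = 0, the i-th derivative of T_k equals that of f, for every i <= k
for i in range(k + 1):
    assert sp.diff(T, x, i).subs(x, 0) == sp.diff(f, x, i).subs(x, 0)
print("All derivatives up to order", k, "agree at x = 0")
```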

The question now becomes: can we put a bound on the error between $T_k(x)$ and $f(x)$? Before we come to this, let's have a quick look at the theorem in action:

Example: approximating $\sin(x)$

Now you can try this out for yourself: calculate the 7th-degree Taylor polynomial for $\sin(x)$, and Taylor's theorem gives you:
$\sin(x) = x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!} +\frac{x^9}{9!}f^{(9)}(\xi)$
(The remainder jumps straight to $x^9$ because the $x^8$ coefficient of $\sin$ is zero, so $T_8 = T_7$ and we can apply the theorem with $n = 9$.)

Plot $T_7(x)$ and you will find it gives a near-perfect approximation over the range $-2 < x < 2$.
Have a look at the error $E_9(x) = \sin(x) - T_7(x)$ on Wolfram Alpha

If you want to extend the range, throw in a few more terms. In fact this is probably how your calculator calculates these functions.

Now let's be precise: can we find the maximum range where the error is guaranteed to be within ±0.1?
Sure! Notice $f^{(9)}(\xi) \in [-1,+1]$; in fact ALL derivatives of $\sin$ for ANY $\xi$ MUST be in $[-1,+1]$.
So we need $\frac{|x|^9}{9!} < 0.1$, which gives us $|x|^9 < 0.1\cdot 9! = 36288$, i.e. $|x| < \sqrt[9]{36288} = 3.2110002323...$
So between $(-3.2, +3.2)$ our error is guaranteed to be within ±0.1
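(You can check this numerically too; a minimal sketch in Python:)

```python
import math

def T7(x):
    # Degree-7 Taylor polynomial of sin(x) around 0
    return (x - x**3/math.factorial(3) + x**5/math.factorial(5)
              - x**7/math.factorial(7))

# Sample the interval [-3.2, 3.2] and find the worst-case error
worst = max(abs(math.sin(x) - T7(x))
            for x in (i / 1000 * 3.2 for i in range(-1000, 1001)))
print(worst)  # about 0.088 at the endpoints, safely below the 0.1 guarantee
```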

Example: Proving Euler's Formula

Ok, so that is a decent engineering example of how useful this theory is. But it is essential in complex analysis: you can use it to prove Euler's formula:
$e^{i\theta} = \cos(\theta) + i\cdot \sin(\theta)$
You just generate the Taylor expansion for $e^{i\theta}$, separate the real and imaginary parts, and observe that these give you the Taylor expansions for $\cos(\theta)$ and $\sin(\theta)$.
$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$
so $e^{i\theta} = 1 +i\theta + \frac{i^2\theta^2}{2!} + \frac{i^3\theta^3}{3!} + \dots$
$= 1 + i\theta - \frac{\theta^2}{2!} - i\frac{\theta^3}{3!} + \dots$
$= \left\{ 1 - \frac{\theta^2}{2!} + \dots \right\}+ i \left\{ \theta - \frac{\theta^3}{3!} + \dots \right\}$
$= \cos(\theta) + i\cdot \sin(\theta)$
as:
$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots$
$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots$

(Note: if you want to be rigorous, you need to show this expansion for $e^x$ converges, but I don't want to go off topic; I will chalk it up separately at some point.)
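(Again, a quick sketch of the bookkeeping, assuming Python with sympy installed:)

```python
import sympy as sp

theta = sp.symbols('theta', real=True)

# Expand e^(i*theta) as a truncated Taylor series around 0,
# then split it into real and imaginary parts
series = sp.series(sp.exp(sp.I * theta), theta, 0, 8).removeO()
re, im = series.as_real_imag()

print(re)  # 1 - theta**2/2 + theta**4/24 - theta**6/720 (start of cos; term order may vary)
print(im)  # theta - theta**3/6 + theta**5/120 - theta**7/5040 (start of sin)
```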

Taylor series

Now, that formula for $T_k(x)$
$T_k(x) = f(0) + x\cdot f'(0) + \frac{x^2}{2!} f''(0) + \dots + \frac{x^k}{k!} f^{(k)}(0)$
… wasn't arbitrary or magic.

Look at its derivatives at $x=0$:

$T_k(0) = f(0)$
$T_k'(0) = f'(0)$
$\vdots$
$T_k^{(k)}(0) = f^{(k)}(0)$

So at $x=0$, $T_k$ agrees with $f$ in value and in every derivative up to order $k$.

So, close to 0 you can expect it to be a very good fit. But how good? That is, if we move a distance $x$ from 0, can we put bounds on the error between $f(x)$ and $T_k(x)$?

Error term

Consider $E_n(x) = f(x) - T_{n-1}(x)$.
Notice $E_n(0) = E_n'(0) = \dots = E_n^{(n-1)}(0) = 0$ (since $T_{n-1}$ matches $f$'s value and first $n-1$ derivatives at 0).
Also note that $E_n^{(n)}(\cdot) = f^{(n)}(\cdot)$ (since $T_{n-1}$ is a polynomial of degree $n-1$, its $n$th derivative is zero).

Well, if $E_n^{(n)}(\cdot)$ (a.k.a. $f^{(n)}(\cdot)$) is continuous on $[0, x]$, then by the extreme value theorem $E_n^{(n)}$ must be bounded by, say, $m$ and $M$, and it will achieve these bounds: say $E_n^{(n)}(x_m) = m$ and $E_n^{(n)}(x_M) = M$. This should be visually obvious; just imagine the graph over the range $[0, x]$: being continuous over the whole range, it will have a maximum and a minimum value.

And we will show via Lemma A below that this puts $E_n(x)$ between $m\frac{x^n}{n!}$ and $M\frac{x^n}{n!}$,

i.e. there is some $\lambda$ between $m$ and $M$ s.t. $E_n(x) = \lambda \frac{x^n}{n!}$

Now $E_n^{(n)}(\cdot)$ is continuous, and
$E_n^{(n)}(x_m) = m$
$E_n^{(n)}(x_M) = M$
so by the intermediate value theorem there must be some $x_\lambda$ between $x_m$ and $x_M$ that maps to $\lambda$, i.e.
$E_n^{(n)}(x_\lambda) = \lambda$

So $\lambda = E_n^{(n)}(x_\lambda) = f^{(n)}(x_\lambda)$

Therefore, $E_n(x) = \lambda \frac{x^n}{n!} = f^{(n)}(\xi)\frac{x^n}{n!}$ (note that I am writing $\xi$ instead of $x_\lambda$ so as to recover the original notation)

…and we have our error term as required!

$\blacksquare$
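(To see the mean-value flavour of this result in action, here is a sketch, using only Python's math module, that recovers a valid $\xi$ for the illustrative choice $f = e^x$, $n = 3$, $x = 1$:)

```python
import math

# Taylor's theorem for f = exp, n = 3, x = 1 says:
#   f(1) = T_2(1) + (1**3 / 3!) * f'''(xi)   for some xi in [0, 1]
x = 1.0
T2 = 1 + x + x**2 / 2                 # T_2(x) for exp, expanded around 0
E3 = math.exp(x) - T2                 # the actual error E_3(1)

# Since f'''(xi) = exp(xi), solve exp(xi) = 3! * E3 for xi
xi = math.log(math.factorial(3) * E3)
print(xi)  # roughly 0.27, which indeed lies inside [0, 1]
```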


Lemma A: Getting bounds on the error term $E_n(x)$

Let's consider some function $f$, and let's stipulate that $f(0) = f'(0) = \dots = f^{(n-1)}(0) = 0$, and that $f^{(n)}(x)$ is continuous on $[0, a]$

The question is: can we place an upper and lower bound on $f(x)$ on $[0,a]$?

We will now build up from the ground to this point. The only two results we will use are:

  1. FTC (a.k.a. the Fundamental Theorem of Calculus): the result we actually need is $\int_0^x{f'(t)}\,\mathrm{d}t = f(x) - f(0)$; it's very simple to get this from FTC
  2. The fact that the anti-derivative of $\frac{x^k}{k!}$ is $\frac{x^{k+1}}{(k+1)!}$

This can be shown by differentiating $\frac{x^{k+1}}{(k+1)!}$ from first principles (most students have seen differentiating $x^2$ from first principles; this is just the same, only with a little more working).

$f'(\cdot)$ bounded by $x^0$

Let's consider some function $f$ that is differentiable on $[0, a]$

Now let's say $f(0) = 0$, and $f'(x) < 1$ everywhere on this interval

What is the greatest possible value for f(a)?

Now you could think: "At any point, the slope can be at most 45°, so our function has to stay beneath the line $y=x$." It isn't rigorous, but it is intuitive.

Now you could use the Mean Value Theorem and argue for a contradiction to get a rigorous proof. Suppose our function is NOT bounded by $y=x$; then there exists some $x_0$ such that $f(x_0) = M\cdot x_0$ with $M > 1$. Drawing the straight line from $(0,0)$ to the point $(x_0, M\cdot x_0)$, we have a slope of more than 45°, i.e. a gradient equal to $M$ ($> 1$). The MVT then tells us that there must be some point $u$ in $[0, x_0]$ with $f'(u) = M$, i.e. $> 1$: contradiction.

Another way of doing this would be using FTC (Fundamental Theorem of Calculus), which gives us $\int_0^x{f'(t)}\,\mathrm{d}t = f(x) - f(0)$

Let's imagine a graph of $f'(x)$. It is bounded above by 1 over $[0,a]$.

So the integral between 0 and $x$ will be less than the area of the rectangle from $(0,0)$ to $(x,1)$, i.e. $1\cdot x$.

FTC says that $\int_0^x{f'(t)}\,\mathrm{d}t = f(x) - f(0) = f(x)$, since we are given $f(0) = 0$

So we have our desired integral, which comes out as $f(x)$. This clearly must be less than the area of the rectangle (which you could think of as $\int_0^x{1}\,\mathrm{d}t = x$), so $f(x) < x$.
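(A tiny numeric sanity check of this base case in Python: $f = \sin$ has $f(0) = 0$ and $f'(x) = \cos(x) \leq 1$, so we expect $\sin(x) \leq x$:)

```python
import math

# f(0) = 0 and f'(x) = cos(x) <= 1, so the argument above
# predicts f(x) = sin(x) <= x on [0, a]
a = 2.0
assert all(math.sin(i / 1000 * a) <= i / 1000 * a for i in range(1, 1001))
print("sin(x) <= x holds at all sampled points of (0, 2]")
```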

$f'(\cdot)$ bounded by $x^1$

Now let's suppose $f'(x)$ is bounded by $x$ (a.k.a. $\frac{x^1}{1!}$, as opposed to previously, when it was bounded by $1$, a.k.a. $\frac{x^0}{0!}$). The same argument gives an upper bound of $\frac{x^2}{2!}$. You can still picture this visually: the area under the line $y=x$ is just the area of a triangle of width and height $x$, so $\frac{1}{2}x^2$. (You wouldn't be able to make such an easy picture for the next power, however.)

$f'(\cdot)$ bounded by $x^k$

An obvious pattern is emerging, so let's generalise: what if $f(0) = 0$ but $f'(x)$ is now bounded by $\frac{x^k}{k!}$ ?

So, we can plot the graphs of $f'(x)$ and $\frac{x^k}{k!}$.

$f'(x)$ is always beneath $\frac{x^k}{k!}$, so the area underneath $f'(x)$ from 0 to $x$ (which FTC tells us is $f(x)$) must be less than the area underneath $\frac{x^k}{k!}$ (which FTC gives as $\frac{x^{k+1}}{(k+1)!}$)

This process will work equally well for a lower bound. So let's move the goalposts again, and bound $f'(x)$ between $m\frac{x^k}{k!}$ and $M\frac{x^k}{k!}$ (still keeping the condition $f(0) = 0$)

And you will be able to bound $f(x)$ between $m\frac{x^{k+1}}{(k+1)!}$ and $M\frac{x^{k+1}}{(k+1)!}$

$\blacksquare$

Telescoping…

Now let's consider $f(0) = f'(0) = \dots = f^{(n-1)}(0) = 0$, and $f^{(n)}(\cdot)$ bounded by $m$ and $M$ on $[0,a]$

Then we can repeatedly apply our previous result:

$f^{(n)}(\cdot)$ bounded by $m\cdot \frac{x^0}{0!}$ and $M\cdot \frac{x^0}{0!}$

plus $f^{(n-1)}(0) = 0$ gives:
$f^{(n-1)}(x)$ bounded by $m\cdot \frac{x^1}{1!}$ and $M\cdot \frac{x^1}{1!}$

plus $f^{(n-2)}(0) = 0$ gives:
$f^{(n-2)}(x)$ bounded by $m\cdot \frac{x^2}{2!}$ and $M\cdot \frac{x^2}{2!}$
$\vdots$
plus $f^{(n-n)}(0)$ (a.k.a. $f(0)$) $= 0$ gives:
$f^{(n-n)}(x)$, i.e. $f(x)$, bounded by $m\cdot \frac{x^n}{n!}$ and $M\cdot \frac{x^n}{n!}$

$\blacksquare$
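(One last numeric sketch in Python: take $f(x) = \sin(x) - x$, which has $f(0) = f'(0) = f''(0) = 0$ and $f'''(x) = -\cos(x)$, and check the telescoped $n=3$ bounds on $[0, 1]$:)

```python
import math

# f(x) = sin(x) - x satisfies f(0) = f'(0) = f''(0) = 0,
# and f'''(x) = -cos(x) is bounded on [0, 1] by:
a = 1.0
m = -1.0            # minimum of -cos(x) on [0, 1]
M = -math.cos(a)    # maximum of -cos(x) on [0, 1]

# The telescoped result bounds f(x) between m*x^3/3! and M*x^3/3!
for i in range(1, 1001):
    x = i / 1000 * a
    f = math.sin(x) - x
    assert m * x**3 / 6 <= f <= M * x**3 / 6
print("m*x^3/3! <= sin(x) - x <= M*x^3/3! holds on (0, 1]")
```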

