
Convergent Series

We have seen that a series is the sum of the terms of a sequence.

Just like sequences, series can also converge or diverge. A series converges if its sequence of partial sums converges. In other words, from the original sequence we construct a new sequence whose \(n\)-th term is the sum of the first \(n\) terms:

\[\begin{align*} \text{Sequence} & : (a_1, a_2, a_3, \ldots, a_n) \\ \text{Series} & : S_n = \sum^{n}_{k=1}{a_k} = a_1 + a_2 + a_3 + \ldots + a_n \\ \text{Sequence of partial sums} & : S_1, S_2, S_3, \ldots, S_n \end{align*} \]

If the sequence of partial sums converges then the series converges. The limit of the sequence of partial sums is called the sum or value of the series. If the sequence of partial sums diverges then the series diverges.

For a series to converge the underlying sequence must be a null sequence, in other words the limit of the original sequence must be zero. This is a necessary but not sufficient condition. There are series that diverge even though the sequence of terms converges to zero.

\[\sum_{n=1}^{\infty} a_n \text{ converges} \implies \lim_{n \to \infty} a_n = 0 \]

There is an intuitive interpretation behind this condition. Imagine you’re summing up the terms of a series. For the series to converge, the partial sums need to settle on a finite value as you keep adding more and more terms. If the terms of the sequence do not approach zero, it becomes impossible for the partial sums to settle, and the series will diverge. In short, if the terms are not getting smaller and smaller, the series will keep getting larger and larger and will not converge.
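As a quick illustration of how this condition is used in practice (a hypothetical example of my own): the terms of \(\sum^{\infty}_{n=1}{\frac{n}{n+1}}\) converge to 1 rather than 0, so the series diverges without any further analysis:

\[\lim_{n \to \infty} \frac{n}{n+1} = 1 \neq 0 \implies \sum^{\infty}_{n=1}{\frac{n}{n+1}} \text{ diverges} \]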

Another important property concerns the order in which we sum the terms. For finite sums, associativity and commutativity of addition let us rearrange freely. For infinite series this is more delicate: as we will see with the Riemann rearrangement theorem below, rearranging a conditionally convergent series can change its value. However, if the series converges absolutely (also defined below), then any rearrangement converges to the same value. So formally, if the function \(f\) is a bijective mapping from the natural numbers to the natural numbers and the series converges absolutely, then:

\[\sum_{n=1}^{\infty} a_n = \sum_{n=1}^{\infty} a_{f(n)} \]

In addition, just like with the convergence of a sequence, what happens at the beginning of a series does not affect whether it converges. For example, if we start summing the terms at a later index, the series still converges, although its value generally changes because the skipped terms are missing from the sum. Note that this works because each term is a finite real (or complex) number: dropping finitely many finite terms can only shift the value, never change convergence. So more formally, for any \(m \geq N_0\) we have:

\[\sum_{n=N_0}^{\infty} a_n \text{ converges} \iff \sum_{n=m}^{\infty} a_n \text{ converges} \]
Geometric Series

Let’s look at some examples of series and analyze their convergence. We have seen the geometric sequence before: a sequence where each term is a constant multiple of the previous term. So formally the geometric sequence is defined as follows, where \(q \in \mathbb{C}\) is a constant:

\[a_n = q^n \]

Then the geometric sequence has the following properties as \(n \to \infty\):

  • If \(|q| < 1\), then the sequence converges to 0.
  • If \(q = 1\), then the sequence converges to 1.
  • If \(q = -1\), then the sequence oscillates between 1 and -1.
  • If \(|q| > 1\), then the sequence diverges, since \(|q^n| \to \infty\).

The geometric series is then just the sum of the terms of the geometric sequence:

\[\sum^{\infty}_{k=0}{q^k} \]

For the series to converge we need to check if the sequence of partial sums converges. For \(q \neq 1\) we can derive a closed form for the sequence of partial sums:

\[\begin{align*} S_n &= \sum^{n}_{k=0}{q^k} = 1 + q + q^2 + \ldots + q^n \\ q \cdot S_n &= q + q^2 + q^3 + \ldots + q^{n+1} \\ S_n - q \cdot S_n &= 1 - q^{n+1} \\ (1-q) \cdot S_n &= 1 - q^{n+1} \\ S_n &= \frac{1 - q^{n+1}}{1-q} \end{align*} \]

Now that we have a closed form for the sequence of partial sums, we can analyze the convergence of the geometric series by taking the limit. For \(|q| < 1\) the term \(q^{n+1}\) goes to zero as \(n \to \infty\), so the candidate limit is \(\frac{1}{1-q}\). Let’s prove it is indeed the limit:

\[\lim_{n \to \infty}\left|\frac{1 - q^{n+1}}{1-q} - \frac{1}{1-q}\right| = \lim_{n \to \infty}\left|\frac{1 - q^{n+1} - 1}{1-q}\right| = \lim_{n \to \infty}\left|\frac{- q^{n+1}}{1-q}\right| = \lim_{n \to \infty}\left|\frac{q^{n+1}}{1-q}\right| = 0 \]

So the sequence of partial sums converges to \(\frac{1}{1-q}\) with \(|q| < 1\), which means the geometric series converges to \(\frac{1}{1-q}\).

Let’s look at a concrete example of a geometric series for \(q = \frac{1}{2}\):

\[\begin{align*} \sum^{\infty}_{k=0}{\left(\frac{1}{2}\right)^k} &= 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \ldots \\ &= \frac{1}{1-\frac{1}{2}} = \frac{1}{\frac{1}{2}} = 2 \end{align*} \]
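For a quick numerical sanity check (my own sketch, not part of the notes), we can watch the partial sums approach 2:

```python
# Partial sums of the geometric series with q = 1/2 should approach 2.
for n in [5, 10, 20, 50]:
    s = sum(0.5**k for k in range(n + 1))
    print(n, s)
```

Already for \(n = 20\) the partial sum agrees with 2 to about six decimal places.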

But what if the index starts at 1 rather than 0? Does the starting index have an effect on the convergence or the value of the series? We can answer this by subtracting the \(k=0\) term:

\[\begin{align*} \sum^{\infty}_{k=1}{q^k} &= q + q^2 + q^3 + \ldots \\ &= \sum^{\infty}_{k=0}{q^k} - q^0 = \frac{1}{1-q} - 1 = \frac{q}{1-q} \end{align*} \]

We see that this series still converges. So what we do in the first steps does not have an effect on the convergence of the series; however, it does have an effect on its value. The geometric series for \(q = \frac{1}{2}\) starting at 1 converges to \(\frac{\frac{1}{2}}{1-\frac{1}{2}} = 1\) instead of 2.

Another way of looking at this is that we know that \(\sum^{\infty}_{k=0}{\frac{1}{2^k}} = 2\) and that we can write this also as follows:

\[\sum^{\infty}_{k=0}{\frac{1}{2^k}} = 2 = 1 + \frac{1}{2} + \frac{1}{4} + \sum^{\infty}_{k=3}{\frac{1}{2^k}} \]

So therefore we must have:

\[\sum^{\infty}_{k=3}{\frac{1}{2^k}} = \frac{1}{4} \]
Harmonic Series

We have seen an example of a series that converges; now let’s look at an example of a series that diverges. Just like the geometric series is built from the geometric sequence, the harmonic series is built from the harmonic sequence. We will analyze the following harmonic series:

\[\sum^{\infty}_{n=1}{\frac{1}{n}} \]

We already know that the harmonic sequence converges to zero as \(n\) approaches infinity, so the necessary condition is met. However, this does not automatically mean that the series converges. In fact, the harmonic series diverges despite its terms converging to zero. This is a good example showing that the terms converging to zero is a necessary but not sufficient condition for the series to converge.

Todo

This isn’t so intuitive, so it needs a proper proof, for example via the Cauchy criterion introduced below; a sketch follows.
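A sketch of such an argument (my own addition, using the Cauchy criterion from the section below): for any \(n\), the block of terms from \(n+1\) to \(2n\) contributes

\[\sum_{k=n+1}^{2n} \frac{1}{k} \geq n \cdot \frac{1}{2n} = \frac{1}{2} \]

so these blocks never become small: the Cauchy criterion fails for \(\epsilon = \frac{1}{2}\) and the series diverges.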

However, there is a nice visualization of how slowly the partial sums grow, even though they are unbounded.

[Image: Harmonic Series]
Telescoping Series

The telescoping series is another special case of a series that converges: its terms cancel each other out in such a way that the series collapses to a finite value. A classic example is the following series:

\[\sum^{\infty}_{n=1}{\frac{1}{n(n+1)}} \]

Now you might be wondering why this is called a telescoping series. The reason is that we can rewrite the terms using a partial fraction decomposition:

\[\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1} \]
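The identity is quickly verified by combining the fractions over a common denominator:

\[\frac{1}{n} - \frac{1}{n+1} = \frac{(n+1) - n}{n(n+1)} = \frac{1}{n(n+1)} \]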

So then when we write out the first few terms of the series, we get:

\[\sum^{\infty}_{n=1}{\left(\frac{1}{n} - \frac{1}{n+1}\right)} = \left(1 - \frac{1}{2}\right) + \left(\frac{1}{2} - \frac{1}{3}\right) + \left(\frac{1}{3} - \frac{1}{4}\right) + \ldots \]

and as we can see the terms cancel each other out so that in each partial sum only the first and the last term survive, leaving \(S_n = 1 - \frac{1}{n+1}\). So we can rewrite the series as follows:

\[\sum^{\infty}_{n=1}{\frac{1}{n(n+1)}} = \lim_{n \to \infty} \left(1 - \frac{1}{n+1}\right) = 1 \]
Beware of Infinite Sums

In the case of series we cannot just use our usual rules for the sum operator. The reason is that the sums go to infinity, so the normal rules of finite arithmetic no longer automatically apply.

To see this, let’s compare two previously seen examples. Suppose we try to split

\[\sum^{\infty}_{n=1}{\frac{1}{n(n+1)}} = \sum^{\infty}_{n=1}{\frac{1}{n}} - \sum^{\infty}_{n=1}{\frac{1}{n+1}} \]

The left side is the telescoping series, which converges to 1, but both series on the right diverge, so the split suggests \(\infty - \infty = 1\). We could just as well split the value 0:

\[0 = \sum^{\infty}_{n=1}{1} - \sum^{\infty}_{n=1}{1} \]

which suggests \(\infty - \infty = 0\). Since \(\infty - \infty\) could seemingly be assigned any value, such splits into divergent series are not valid.

So we can only perform such operations if we have two series \(\sum^{\infty}_{k=1}{a_k}\) and \(\sum^{\infty}_{k=1}{b_k}\) that both converge. If this is the case we can perform the following operations:

  • \(\sum^{\infty}_{k=1}{c \cdot a_k}=c \cdot \sum^{\infty}_{k=1}{a_k}\) for \(c \in \mathbb{C}\)
  • \(\sum^{\infty}_{k=1}{(a_k \pm b_k)}=\sum^{\infty}_{k=1}{a_k}\pm \sum^{\infty}_{k=1}{b_k}\)
Todo

Add a proof for this. Maybe using the sequence of partial sums.
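A sketch of the argument (my own, via partial sums as the todo suggests): finite sums split term by term, and the limit laws for sequences carry the identity over to the limit:

\[\sum^{n}_{k=1}{(a_k \pm b_k)} = \sum^{n}_{k=1}{a_k} \pm \sum^{n}_{k=1}{b_k} \xrightarrow{\,n \to \infty\,} \sum^{\infty}_{k=1}{a_k} \pm \sum^{\infty}_{k=1}{b_k} \]

and similarly \(\sum^{n}_{k=1}{c \cdot a_k} = c \sum^{n}_{k=1}{a_k} \to c \sum^{\infty}_{k=1}{a_k}\).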

Cauchy Criterion

We have seen that for sequences we can use the Cauchy criterion to check if a sequence converges. Specifically we can say that a sequence converges if the following holds:

\[\forall \epsilon > 0 \quad \exists N_{\epsilon} \in \mathbb{N} : |a_n - a_m| < \epsilon \text{ for all } n, m \geq N_{\epsilon} \]

We can also apply this to series. The Cauchy criterion for series states that a series converges if the following holds:

\[\forall \epsilon > 0 \quad \exists N_{\epsilon} \in \mathbb{N} : \left|\sum^{m}_{k=n} a_k\right| < \epsilon \text{ for all } m \geq n \geq N_{\epsilon} \]

So after some point \(N_{\epsilon}\), every block of consecutive terms from \(n\) up to any \(m \geq n\) sums to less than \(\epsilon\) in absolute value. In other words, past some point the remaining terms barely change the value of the partial sums anymore. This is a very powerful test to check if a series converges.

Proof

The proof is rather simple. We know a series converges if the sequence of partial sums converges. So we can just apply the Cauchy criterion for sequences to the sequence of partial sums. So if the partial sums are Cauchy, then the series converges. For the partial sums to be Cauchy, we need to show that the following holds where \(S_n = \sum^{n}_{k=1}{a_k}\):

\[\forall \epsilon > 0 \quad \exists N_{\epsilon} \in \mathbb{N} : |S_m - S_{n-1}| < \epsilon \text{ for all } m \geq n > N_{\epsilon} \]

From this we can derive the following which gives us the Cauchy criterion for series:

\[|S_m - S_{n-1}| = \left|\sum^{m}_{k=1}{a_k} - \sum^{n-1}_{k=1}{a_k}\right| = \left|\sum^{m}_{k=n}{a_k}\right| < \epsilon \text{ for all } m \geq n > N_{\epsilon} \]

The Cauchy criterion also has a consequence for the tails of a series. Just as with sequences, the blocks \(\sum_{k=n}^{m} a_k\) must become arbitrarily small. For a convergent series this means that the entire tail from \(n\) onward goes to zero. More formally:

\[\sum_{k=1}^{\infty} a_k \text{ converges} \implies \lim_{n \to \infty} \sum_{k=n}^{\infty} a_k = 0 \]
Proof

Using the Cauchy criterion we can actually prove that the underlying sequence must be a null sequence. The proof is pretty simple: we assume we have a series \(\sum_{n=1}^{\infty} a_n\) that converges. Then from the Cauchy criterion we know that the following holds:

\[\forall \epsilon > 0 \quad \exists N_{\epsilon} \in \mathbb{N} : \left|\sum^{m}_{k=n} a_k\right| < \epsilon \text{ for all } m \geq n \geq N_{\epsilon} \]

Then if we specifically set \(m=n\) we get:

\[\begin{align*} \left|\sum^{n}_{k=n} a_k\right| &< \epsilon \\ |a_n| &< \epsilon \\ |a_n - 0| &< \epsilon \end{align*} \]

So we can see that the terms of the series must go to zero as \(n\) approaches infinity. So we can conclude that the underlying sequence must be a null sequence.

Example

The idea is to first check that the terms of the sequence go to zero and then split the series into a telescoping part and a geometric part to find its value.

\[\sum^{\infty}_{n=1}{\frac{3^n + n^2 + n}{3^{n+1}(n(n+1))}} \]
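A sketch of how the split could look (this decomposition is my own; the lecture may do it differently). Using \(n^2 + n = n(n+1)\):

\[\begin{align*} \frac{3^n + n^2 + n}{3^{n+1} n(n+1)} &= \frac{3^n}{3^{n+1} n(n+1)} + \frac{n(n+1)}{3^{n+1} n(n+1)} = \frac{1}{3} \cdot \frac{1}{n(n+1)} + \frac{1}{3^{n+1}} \\ \sum^{\infty}_{n=1}{\frac{3^n + n^2 + n}{3^{n+1} n(n+1)}} &= \frac{1}{3} \cdot 1 + \sum^{\infty}_{n=1}{\frac{1}{3^{n+1}}} = \frac{1}{3} + \frac{\frac{1}{9}}{1 - \frac{1}{3}} = \frac{1}{3} + \frac{1}{6} = \frac{1}{2} \end{align*} \]

Both parts converge, so splitting is allowed by the rules for operations on convergent series above.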

Direct Comparison Test

The direct comparison test is another method, like the Cauchy criterion, to check if a series converges. The idea comes from the monotone convergence theorem and is in my opinion a lot more intuitive than the Cauchy criterion.

First let’s just focus on series whose terms are non-negative from some point \(k\) onward. More formally, we focus on series where the following holds:

\[\text{Let } \sum^{\infty}_{n=1}{a_n} \text{ be a series with } a_n \geq 0 \text{ for all } n \geq k \text{ where } k \geq 1 \]

The idea is to use the monotone convergence theorem. First we need to show that the sequence of partial sums is monotonically increasing, which is true whenever the terms of the series are non-negative:

\[S_{n+1} - S_n = \sum^{n+1}_{k=1}{a_k} - \sum^{n}_{k=1}{a_k} = a_{n+1} \geq 0 \text{ for all } n \geq 1 \]

Now because the sequence of partial sums is monotonically increasing, the monotone convergence theorem tells us that it converges if and only if it is bounded. So for a series with non-negative terms, convergence is equivalent to the partial sums being bounded.

Todo

Add an example of using this and showing that the sequence of partial sums is bounded.
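A minimal illustration (my own): for the geometric series with \(q = \frac{1}{2}\) the closed form of the partial sums shows that they are bounded, so the monotone convergence theorem applies:

\[S_n = \sum^{n}_{k=0}{\left(\frac{1}{2}\right)^k} = \frac{1 - \left(\frac{1}{2}\right)^{n+1}}{1 - \frac{1}{2}} = 2 - \left(\frac{1}{2}\right)^n \leq 2 \]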

The direct comparison test follows from the idea shown above, in a more general way that is reminiscent of the squeeze theorem. The idea is that we can compare two series and show that one converges if the other converges. The formal statement is as follows. If we have two series \(\sum^{\infty}_{n=1}{a_n}\) and \(\sum^{\infty}_{n=1}{b_n}\) and we know that the following holds for all \(k \geq 1\):

\[0 \leq a_k \leq b_k \]

So the terms of both series are always positive and the terms of the first series are always less than or equal to the terms of the second series. Then we can say that if the second series converges, then the first series also converges.

\[\sum^{\infty}_{n=1}{b_n} \text{ converges} \implies \sum^{\infty}_{n=1}{a_n} \text{ converges} \]

The intuition behind this is rather simple. If \(\sum b_n\) converges, then the sum of all terms of \(b_n\) is finite. If \(a_n\) is always less than or equal to \(b_n\), then the sum of all terms of \(a_n\) must also be finite. Because we only allow positive terms, we also do not have to worry about divergence to negative infinity.

Proof

The proof follows from the Cauchy criterion.

In a similar way we can also show that if the first series diverges, then the second series also diverges.

\[\sum^{\infty}_{n=1}{a_n} \text{ diverges} \implies \sum^{\infty}_{n=1}{b_n} \text{ diverges} \]

The idea is again the same: if \(\sum a_n\) diverges, then the sum of all its terms is infinite. If \(b_n\) is always greater than or equal to \(a_n\), then the sum of all terms of \(b_n\) must also be infinite, and the series therefore diverges.

We can actually make this test even more powerful. Above we required that \(0 \leq a_k \leq b_k\) for all \(k\). It turns out that the test also works if the conditions are only met for all \(k \geq m\) for some \(m \in \mathbb{N}\), so finitely many terms may violate them. The intuition behind this is that the first \(m\) terms of a series only influence its value, not its convergence. So the test works as long as the following holds:

\[0 \leq a_k \leq b_k \text{ for all } k \geq m \text{ where } m \geq 1 \]
Example

We know that the harmonic series \(\sum^{\infty}_{n=1}{\frac{1}{n}}\) diverges. So what if the denominator is squared, does the series \(\sum^{\infty}_{n=1}{\frac{1}{n^2}}\) converge? We can use the direct comparison test to show that it does, by comparing it to the telescoping series in the shifted form \(\sum{\frac{1}{n(n-1)}}\). Shifting the index shows \(\sum^{\infty}_{n=2}{\frac{1}{n(n-1)}} = \sum^{\infty}_{n=1}{\frac{1}{n(n+1)}} = 1\). The comparison term is not defined for \(n=1\), but if we start at \(n=2\) we can see that the terms of the first series are always positive and less than or equal to the terms of the second series:

| \(k\) | \(\frac{1}{k^2}\) | \(\frac{1}{k(k-1)}\) |
| --- | --- | --- |
| 1 | \(1\) | undefined |
| 2 | \(\frac{1}{4}\) | \(\frac{1}{2}\) |
| 3 | \(\frac{1}{9}\) | \(\frac{1}{6}\) |
| 4 | \(\frac{1}{16}\) | \(\frac{1}{12}\) |
| 5 | \(\frac{1}{25}\) | \(\frac{1}{20}\) |
| 6 | \(\frac{1}{36}\) | \(\frac{1}{30}\) |

This comes from the following:

\[k(k-1) \leq k^2 \implies \frac{1}{k(k-1)} \geq \frac{1}{k^2} \text{ for all } k \geq 2 \]

So therefore:

\[1 + \sum^{\infty}_{n=2}{\frac{1}{n^2}} \leq 1 + \sum^{\infty}_{n=2}{\frac{1}{n(n-1)}} \]

and by the direct comparison test because the telescoping series converges we can say that the series \(\sum^{\infty}_{n=1}{\frac{1}{n^2}}\) converges as well.

What about the series \(\sum^{\infty}_{n=1}{\frac{1}{n^3}}\)? We can use the same idea as above:

\[k^3 \geq k^2 \implies \frac{1}{k^3} \leq \frac{1}{k^2} \text{ for all } k \geq 1 \]

So then it follows that:

\[\sum^{\infty}_{n=1}{\frac{1}{n^3}} \leq \sum^{\infty}_{n=1}{\frac{1}{n^2}} \]

And because we know that the series \(\sum^{\infty}_{n=1}{\frac{1}{n^2}}\) converges, we can say that the series \(\sum^{\infty}_{n=1}{\frac{1}{n^3}}\) also converges by the direct comparison test. This actually generalizes to all series of the form \(\sum^{\infty}_{n=1}{\frac{1}{n^p}}\): for \(p > 1\) the series converges and for \(p \leq 1\) it diverges.

To get a feel for the divergent case, we can look at \(p = \frac{1}{2}\), so we have the following series:

\[\sum^{\infty}_{n=1}{\frac{1}{\sqrt{n}}} \]

We can use the direct comparison test to show that this series diverges. We can compare it to the harmonic series \(\sum^{\infty}_{n=1}{\frac{1}{n}}\) which we know diverges. We can see that for all \(n \geq 1\):

\[\sqrt{n} \leq n \implies \frac{1}{\sqrt{n}} \geq \frac{1}{n} \]

So we can conclude that:

\[\sum^{\infty}_{n=1}{\frac{1}{\sqrt{n}}} \geq \sum^{\infty}_{n=1}{\frac{1}{n}} \]

And because the harmonic series diverges, we can say that the series \(\sum^{\infty}_{n=1}{\frac{1}{\sqrt{n}}}\) also diverges by the direct comparison test.

Example

Another series that is actually similar to the series above is the following:

\[\sum^{\infty}_{n=1}{\frac{1}{n!}} \]

We can use the direct comparison test to show that this series converges. You may know from analysing algorithms that the factorial grows very fast. A simple comparison is with the series \(\sum^{\infty}_{n=1}{\frac{1}{n^2}}\), which we know converges, since the following holds:

\[\frac{1}{n!} \leq \frac{1}{n^2} \text{ for all } n \geq 4 \]

Since the inequality only holds from \(n = 4\) onward, this is exactly the situation of the extended comparison test above with \(m = 4\):

\[\sum^{\infty}_{n=4}{\frac{1}{n!}} \leq \sum^{\infty}_{n=4}{\frac{1}{n^2}} \]

and therefore it converges by the direct comparison test. However, we can also derive an explicit bound by observing the following:

\[\begin{align*} k! = 1 \cdot 2 \cdot 3 \cdots k \geq 1 \cdot 2 \cdot 2 \cdots 2 = 2^{k-1} \text{ for all } k \geq 2 \\ \frac{1}{k!} \leq \frac{1}{2^{k-1}} \text{ for all } k \geq 2 \end{align*} \]

which means:

\[\sum^{\infty}_{n=1}{\frac{1}{n!}} \leq \sum^{\infty}_{n=1}{\frac{1}{2^{n-1}}} \]

And we know that the series \(\sum^{\infty}_{n=1}{\frac{1}{2^{n-1}}} = \sum^{\infty}_{k=0}{\left(\frac{1}{2}\right)^k}\) is a geometric series with \(q = \frac{1}{2}\), which converges to 2 (for \(n = 1\) both sides of the term-wise inequality equal 1, so the comparison holds from the first term). So we can conclude that the series \(\sum^{\infty}_{n=1}{\frac{1}{n!}}\) converges as well.
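As a consistency check (my own remark, using the exponential series from a later section): the exact value is

\[\sum^{\infty}_{n=1}{\frac{1}{n!}} = e - 1 \approx 1.718 \]

which indeed stays below the geometric bound of 2.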

Absolute Convergence

We have seen that series can converge to some limit or diverge. But there is also a stronger notion called absolute convergence. A series is said to be absolutely convergent if the series of the absolute values of its terms converges. In other words, we take the absolute value of each term and check whether that new series converges. More formally, a series \(\sum^{\infty}_{n=1}{a_n}\) is absolutely convergent if:

\[\sum^{\infty}_{n=1}{|a_n|} \text{ converges} \]

If a series is absolutely convergent, then it is also convergent. However, the converse is not true. A series can be convergent without being absolutely convergent. We call a series that is convergent but not absolutely convergent conditionally convergent.

\[\sum^{\infty}_{n=1}{|a_n|} \text{ converges} \implies \sum^{\infty}_{n=1}{a_n} \text{ converges} \]

This is easily proven by the Cauchy criterion for series. If the series is absolutely convergent, then the blocks \(\sum^{m}_{k=n}{|a_k|}\) become arbitrarily small. By the triangle inequality, the corresponding blocks of the original series are smaller or equal, as some terms may cancel each other out. So the original series also satisfies the Cauchy criterion and therefore converges:

\[\left|\sum^{m}_{k=n} a_k\right| \leq \sum^{m}_{k=n} |a_k| < \epsilon \text{ for all } m \geq n \geq N_{\epsilon} \]

If we say that \(S_n = \sum^{n}_{k=1}{a_k}\) and \(T_n = \sum^{n}_{k=1}{|a_k|}\), then we also get the following from the above inequality:

\[\left|\sum^{\infty}_{k=1}{a_k}\right| = \left|\lim_{n \to \infty} S_n\right| = \lim_{n \to \infty} |S_n| \leq \lim_{n \to \infty} T_n = \sum^{\infty}_{k=1}{|a_k|} \]

Or in short:

\[|\sum^{\infty}_{n=1}{a_n} | \leq \sum^{\infty}_{n=1}{|a_n|} \]
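As a small concrete illustration (my own): for the alternating geometric series with ratio \(-\frac{1}{2}\) both sides can be computed exactly:

\[\left|\sum^{\infty}_{n=1}{\frac{(-1)^{n+1}}{2^n}}\right| = \frac{\frac{1}{2}}{1 + \frac{1}{2}} = \frac{1}{3} \leq 1 = \sum^{\infty}_{n=1}{\frac{1}{2^n}} \]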
Example

Let’s look at the series \(\sum^{\infty}_{n=1}{(-1)^{n+1} \frac{1}{n}}\), the so-called alternating harmonic series. The sequence of terms converges to zero, so our first condition is met. One can show that the series indeed converges, and that its value is \(\ln(2)\).

But if we take the absolute values of the terms, we get the harmonic series \(\sum^{\infty}_{n=1}{\frac{1}{n}}\), which diverges. So the alternating harmonic series is convergent but not absolutely convergent, i.e. conditionally convergent. This is a good example showing that the converse of “absolute convergence implies convergence” does not hold.

Alternating Series Test

The alternating series test is a useful test for checking the convergence of series that alternate in sign, such as the alternating harmonic series. The test states that if we have a series of the form:

\[\sum^{\infty}_{n=1}{(-1)^{n+1} a_n} \]

where either all \(a_n\) are positive or all \(a_n\) are negative, the absolute values of the terms are decreasing (more formally, \(|a_{n+1}| \leq |a_n|\) for all \(n \geq N_0\)), and the terms tend to zero, then the series converges.

Intuitively this makes sense as the terms of the series are alternating in sign and the absolute values of the terms are decreasing. So the series is “balancing out” and converging to a finite value.

So in the case of the alternating harmonic series all the above conditions are met and therefore it converges to some value \(S\). Alternating series that suit this form also have the interesting property that the value of the series \(S\) can be bounded if all the terms are positive:

\[a_1 - a_2 \leq S \leq a_1 \]

This comes from the fact that the first term of the series is positive and is the largest term. As an example, for the alternating harmonic series we have:

\[1 - \frac{1}{2} \leq S \leq 1 \implies \frac{1}{2} \leq S \leq 1 \]
Todo

This can probably be extended to the alternating series estimation theorem, which presumably also works when all terms are negative.

There is a nice visualization of the idea behind this estimation, with the partial sums going back and forth within the bounds.

Riemann Rearrangement Theorem

A rearrangement of a series is a new series that is formed by reordering the terms of the original series. More formally, the series \(\sum^{\infty}_{n=1}{x_n}\) is a rearrangement of the series \(\sum^{\infty}_{n=1}{a_n}\) if there exists a bijective function of the form:

\[f: \mathbb{N} \to \mathbb{N} \text{ such that } x_n = a_{f(n)} \text{ for all } n \in \mathbb{N} \]

Riemann showed that a conditionally convergent series such as \(\sum^{\infty}_{n=1}{(-1)^{n+1} \frac{1}{n}}\) can be rearranged to converge to any real number whatsoever.

Dirichlet’s Rearrangement Theorem

Dirichlet showed that if a series converges absolutely, then any rearrangement of the series also converges, and it even converges to the same value.

Ratio Test

One of the most important tests for convergence of series is the ratio test, which checks whether a series converges or diverges by looking at the ratio of consecutive terms. Where the Cauchy criterion looked at differences of partial sums, we now look at ratios of consecutive terms. The ratio test states that if we have a series \(\sum^{\infty}_{n=1}{a_n}\), where importantly \(a_n \neq 0\) for all \(n\) to avoid division by zero, then we can analyze the convergence of the series as follows:

\[\limsup_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ converges} \] \[\liminf_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| > 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ diverges} \]

If the value of the limit is equal to 1, then the test is inconclusive and we cannot say anything about the convergence of the series. In this case we need to use another test to check if the series converges or diverges.

The intuition behind the ratio test is that if the ratio of consecutive terms is less than 1, then the terms of the series are getting smaller and smaller, which means that the series converges. If the ratio is greater than 1, then the terms of the series are getting larger and larger, which means that the series diverges. If the ratio is equal to 1, then we can not say anything about the convergence of the series.

Proof

The idea is to compare with a geometric series. Suppose \(\limsup_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right| = L < 1\) and pick some \(q\) with \(L < q < 1\). Then there is an \(N\) such that \(|a_{n+1}| \leq q |a_n|\) for all \(n \geq N\), and by induction \(|a_n| \leq |a_N| q^{n-N}\). So the tail of the series is dominated by a convergent geometric series and the series converges absolutely by the direct comparison test. If instead \(\liminf_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right| > 1\), the terms eventually grow in absolute value, so they cannot form a null sequence and the series diverges.

Example

Let’s look at the following series:

\[\sum^{\infty}_{n=1}{\frac{n!}{n^n}} \]

Using the ratio test we can analyze the convergence of this series. We can calculate the ratio of consecutive terms:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{(n+1)!}{(n+1)^{n+1}} \cdot \frac{n^n}{n!} \right| \\ &= \left| \frac{(n+1)!n^n}{(n+1)^{n+1}n!} \right| \\ &= \left| \frac{(n+1)!}{n!} \cdot (\frac{n}{n+1})^n \cdot \frac{1}{n+1} \right| \\ &= \left| (n+1) \cdot \left(\frac{n}{n+1}\right)^n \cdot \frac{1}{n+1} \right| \\ &= \left| \left(\frac{n}{n+1}\right)^n \right| \\ &= \left| \frac{1}{\left(1 + \frac{1}{n}\right)^n} \right| \\ &\to \left| \frac{1}{e} \right| = \frac{1}{e} < 1 \end{align*} \]

Therefore we can conclude that the series converges by the ratio test. Because a precondition of convergence is that the terms of the series must go to zero, we can also conclude that the terms of the series go to zero as \(n\) approaches infinity so we get the following:

\[\lim_{n \to \infty} a_n = \lim_{n \to \infty} \frac{n!}{n^n} = 0 \]
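A quick numerical sanity check (my own sketch, not part of the notes) that the ratio really approaches \(\frac{1}{e} \approx 0.3679\):

```python
from math import factorial

# a_n = n!/n^n; the ratio a_{n+1}/a_n should approach 1/e ~ 0.3679.
def a(n: int) -> float:
    return factorial(n) / n**n

for n in [1, 5, 10, 50, 100]:
    print(n, a(n + 1) / a(n))
```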
Example

What happens if we analyze the harmonic series \(\sum^{\infty}_{n=1}{\frac{1}{n}}\), which we know diverges, with the ratio test?

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{\frac{1}{n+1}}{\frac{1}{n}} \right| \\ &= \left| \frac{n}{n+1} \right| \\ &= \left| 1 - \frac{1}{n+1} \right| \end{align*} \]

As \(n\) approaches infinity, the limit of the ratio is equal to 1, therefore the test is inconclusive: it cannot detect that the harmonic series diverges.

Importantly, we need to use the limit superior and limit inferior, as these always exist (possibly as infinity), whereas the limit of the ratios does not always exist, which could render the test meaningless. However, if the limit of the ratio does exist, so \(\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = L\), then the limit superior and limit inferior are both equal to \(L\) and we can use the limit instead. So we can write the ratio test as follows:

\[\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ converges} \] \[\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| > 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ diverges} \]
Example

Another reason why we need the limit superior and limit inferior is that we can construct a series \(\sum a_n\) that converges but whose ratios have a limit inferior less than 1 and a limit superior greater than 1, and a series \(\sum b_n\) that diverges but has the same limit superior and limit inferior. So in this regime the ratios alone cannot distinguish convergence from divergence.

Exponential Series

We have seen a possible origin of Euler’s number \(e\). We can also give meaning to Euler’s number raised to some power \(z \in \mathbb{C}\). Specifically, one can show the following:

\[\sum^{\infty}_{n=0}{\frac{z^n}{n!}} = \exp(z) = e^z \]

Hence it is called the exponential series. For \(z = 0\) we get the following:

\[1 + \frac{z}{1!} + \frac{z^2}{2!} + \frac{z^3}{3!} + \ldots = 1 + 0 + 0 + 0 + \ldots = 1 \]

Which is correct. Let’s now check if the series converges for \(z \neq 0\). We can use the ratio test to check if the series converges:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{z^{n+1}}{(n+1)!} \cdot \frac{n!}{z^n} \right| \\ &= \left| \frac{z^{n+1}n!}{(n+1)!z^n} \right| \\ &= \left| \frac{z^{n+1}}{z^n} \cdot \frac{n!}{(n+1)!} \right| \\ &= \left| z \cdot \frac{1}{n+1} \right| \\ &= \left| \frac{z}{n+1} \right| \end{align*} \]

Which as \(n\) approaches infinity goes to zero for all \(z \in \mathbb{C}\). So we can conclude that the series converges for all \(z \in \mathbb{C}\). However the question still remains if the series converges to \(e^z\).

Convergence Radius

For complex \(z\) the set of points where such a series converges turns out to be a disc, and for real \(z\) an interval; this is why one also speaks of an interval of convergence in the real case.

If we are given the following series where \(z \in \mathbb{C}\):

\[\sum^{\infty}_{n=0}{\frac{z^n n!}{n^n}} \]

Then the question is for which values of \(z\) does this series converge? To analyze this we can use the ratio test:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{z^{n+1} (n+1)!}{(n+1)^{n+1}} \cdot \frac{n^n}{z^n n!} \right| \\ &= \left| \frac{z^{n+1}(n+1)! n^n}{(n+1)^{n+1} z^n n!} \right| \\ &= \left| \frac{z (n+1) n^n}{(n+1)^{n+1}} \right| \\ &= \left| z \cdot \left(\frac{n}{n+1}\right)^n \right| \\ &= \left| z \cdot \frac{1}{\left(1 + \frac{1}{n}\right)^n} \right| \\ &\to \left| z \cdot \frac{1}{e} \right| = \frac{|z|}{e} \end{align*} \]

So for the series to converge we need the following to hold:

\[\frac{|z|}{e} < 1 \implies |z| < e \]

This results in a circle with radius \(e\) around the origin of the complex plane inside of which the series converges; this is the so-called convergence radius of the series. Outside of this circle the series diverges, and on the circle itself the ratio test is inconclusive.

Another example would be the following series which is similar to the geometric series:

\[\sum^{\infty}_{n=1}{n \cdot z^n} \]

We can use the ratio test to check for which values of \(z\) this series converges:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{(n+1)z^{n+1}}{n z^n} \right| \\ &= \left| \frac{n+1}{n} \cdot z \right| \\ &= \frac{n+1}{n} \cdot |z| < 1 \end{align*} \]

As \(n\) approaches infinity, the term \(\frac{n+1}{n}\) approaches 1. So we can conclude that the series converges for all \(|z| < 1\) and diverges for all \(|z| > 1\); the convergence radius of this series is 1. (On the circle \(|z| = 1\) the terms \(n z^n\) do not tend to zero, so the series diverges there as well.) We might also wonder what the actual value of the series is. If we write out the terms we can notice some patterns:

\[\begin{align*} \sum^{\infty}_{n=1}{n \cdot z^n} &= z + 2z^2 + 3z^3 + 4z^4 + \ldots \\ &= z + z^2 + z^2 + z^3 + z^3 + z^3 + z^4 + z^4 + z^4 + z^4 + \ldots \\ &= z (1 + z + z^2 + z^3 + \ldots) + z^2 (1 + z + z^2 + \ldots) + z^3 (1 + z + \ldots) + \ldots \\ &= (z + z^2 + z^3 + \ldots)(1 + z + z^2 + \ldots) \\ &= z(1 + z + z^2 + \ldots) \cdot \frac{1}{1 - z} \\ &= \frac{z}{(1 - z)^2} \end{align*} \]
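As a sanity check (my own): at \(z = \frac{1}{2}\) the formula gives

\[\sum^{\infty}_{n=1}{\frac{n}{2^n}} = \frac{\frac{1}{2}}{\left(1 - \frac{1}{2}\right)^2} = 2 \]

which matches the well-known value of this sum.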

Root Test

We have seen that the ratio test can be very useful for analyzing the convergence of series, but it is inconclusive when the limit of the ratio of consecutive terms equals 1. In some of these cases another, strictly more powerful test helps: the root test. Everything the ratio test decides, the root test also decides, but the root test can additionally handle cases where the ratio test is inconclusive (see the example below). Instead of the ratio of consecutive terms, the root test looks at the \(n\)-th root of the absolute value of the terms: for a series \(\sum^{\infty}_{n=1}{a_n}\) we consider the limit superior of \(\sqrt[n]{|a_n|}\). More formally we have the following:

\[\limsup_{n \to \infty} \sqrt[n]{|a_n|} < 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ converges absolutely} \]

Because absolute convergence implies convergence, we can also say that the series itself converges. If on the other hand the limit superior is greater than 1, then the terms do not form a null sequence, so both the series and the series of absolute values diverge. More formally we have the following:

\[\limsup_{n \to \infty} \sqrt[n]{|a_n|} > 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ and } \sum^{\infty}_{n=1}{|a_n|} \text{ diverges} \]

Again, if the limit superior is equal to 1, then the test is inconclusive and we cannot say anything about the convergence of the series.

Importantly, if the limit exists, so \(\lim_{n \to \infty} \sqrt[n]{|a_n|} = L\), then the limit superior and limit inferior are both equal to \(L\) and we can use the limit instead of the limit superior. So we can write the root test as follows:

\[\lim_{n \to \infty} \sqrt[n]{|a_n|} < 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ converges} \] \[\lim_{n \to \infty} \sqrt[n]{|a_n|} > 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ and } \sum^{\infty}_{n=1}{|a_n|} \text{ diverges} \]
Example

We define the following sequence and series:

\[\sum^{\infty}_{n=1}{a_n} \text{ where } a_n = \begin{cases} \frac{1}{2^{n+1}} & \text{if } n \text{ is even} \\ \frac{1}{2^{n}} & \text{if } n \text{ is odd} \end{cases} \]

First we can check if the series converges by using the ratio test. Let’s assume that \(n\) is even, so \(n = 2k\) for some \(k \in \mathbb{N}\) then we have:

\[\frac{a_{n+1}}{a_n} = \frac{\frac{1}{2^{n+1}}}{\frac{1}{2^{n+1}}} = 1 \]

and if \(n\) is odd, so \(n = 2k + 1\) for some \(k \in \mathbb{N}\) then we have:

\[\frac{a_{n+1}}{a_n} = \frac{\frac{1}{2^{n+2}}}{\frac{1}{2^{n}}} = \frac{1}{4} \]

So we get the following limit superior and limit inferior:

\[\limsup_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = 1 \text{ and } \liminf_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = \frac{1}{4} \]

Because the limit superior is not less than 1 and the limit inferior is not greater than 1, the ratio test is inconclusive, so it tells us nothing about the convergence of the series. So instead let’s use the root test. We assume that \(n\) is even, so \(n = 2k\) for some \(k \in \mathbb{N}\); then we have:

\[\sqrt[n]{|a_n|} = \sqrt[2k]{\frac{1}{2^{2k+1}}} = \left(\frac{1}{2^{2k+1}}\right)^{\frac{1}{2k}} = \frac{1}{2^{\frac{2k+1}{2k}}} = \frac{1}{2^{1 + \frac{1}{2k}}} \]

and if \(n\) is odd, so \(n = 2k + 1\) for some \(k \in \mathbb{N}\) then we have:

\[\sqrt[n]{|a_n|} = \sqrt[2k+1]{\frac{1}{2^{2k+1}}} = \left(\frac{1}{2^{2k+1}}\right)^{\frac{1}{2k+1}} = \frac{1}{2} \]

In the even case the exponent \(1 + \frac{1}{2k}\) approaches 1 as \(k \to \infty\), so there too \(\sqrt[n]{|a_n|}\) approaches \(\frac{1}{2}\). The limit therefore exists and equals \(\frac{1}{2}\), and we can conclude that the series converges absolutely by the root test, as the limit is less than 1.

Power Series

One of the most important types of series in maths and computer science is the power series as it finds many applications such as in the Taylor series and Fourier series. A power series is a series with \(z \in \mathbb{C}\) or \(z \in \mathbb{R}\) of the form:

\[\sum^{\infty}_{n=0} c_n z^n = c_0 + c_1 z + c_2 z^2 + c_3 z^3 + \ldots = p(z) \]

where \(c_n \in \mathbb{C}\) or \(c_n \in \mathbb{R}\) are the coefficients of the series. So a power series can be thought of as a polynomial of infinite degree.

We have also actually already seen some examples of power series such as the exponential series \(\sum^{\infty}_{n=0} \frac{z^n}{n!}\) where the coefficients are \(c_n = \frac{1}{n!}\) and the geometric series \(\sum^{\infty}_{n=0} z^n\) where the coefficients are \(c_n = 1\).

The power series converges for all \(z\) in some interval or disc around the origin. The radius of this region is called the convergence radius and is often denoted \(\rho\) or simply \(r\). So we can say that the power series:

\[\text{converges if } |z| < r \text{ and diverges if } |z| > r \]

But how do we define the convergence radius of a power series? We can use the root test to define the convergence radius of a power series. The convergence radius is defined as follows:

\[r = \begin{cases} \infty & \text{if } \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} = 0 \\ \frac{1}{\limsup_{n \to \infty} |c_n|^{\frac{1}{n}}} & \text{if } \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} \neq 0 \end{cases} \]

By convention if the set \(\{|c_n|^{\frac{1}{n}} : n \in \mathbb{N}\}\) is not bounded we set the convergence radius to zero and therefore the power series does not converge. On the other hand if the set is bounded and \(\limsup_{n \to \infty} |c_n|^{\frac{1}{n}} = 0\), then the convergence radius is infinite and the power series converges for all \(z \in \mathbb{C}\) or \(z \in \mathbb{R}\).

Proof

We can prove that the convergence radius is defined as above by using the root test:

\[\begin{align*} \limsup_{n \to \infty} |a_n|^{\frac{1}{n}} &= \limsup_{n \to \infty} |c_n z^n|^{\frac{1}{n}} \\ &= \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} |z| \\ &= |z| \cdot \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} < 1 \\ &\implies |z| < \frac{1}{\limsup_{n \to \infty} |c_n|^{\frac{1}{n}}} \end{align*} \]

Where the right side is the convergence radius \(r\). So we can conclude that the power series converges for all \(|z| < r\) and diverges for all \(|z| > r\).

Example

Let’s look at the following power series:

\[\sum^{\infty}_{n=1}{\left(\frac{5n + 2n^3}{2n + 6n^3}\right)^n} \]

Then the contents of the bracket, raised to the power \(n\), are the coefficients \(c_n\), and \(z\) is equal to 1. Let’s analyze the behaviour of \(|c_n|^{\frac{1}{n}}\) as \(n\) approaches infinity to find the convergence radius of the power series:

\[\begin{align*} \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} &= \limsup_{n \to \infty} \frac{5n + 2n^3}{2n + 6n^3} \\ &= \lim_{n \to \infty} \frac{\frac{5}{n^2} + 2}{\frac{2}{n^2} + 6} = \frac{2}{6} = \frac{1}{3} \end{align*} \]

So \(|c_n|^{\frac{1}{n}}\) converges to \(\frac{1}{3}\), and therefore the convergence radius of the power series is:

\[r = \frac{1}{\limsup_{n \to \infty} |c_n|^{\frac{1}{n}}} = \frac{1}{\frac{1}{3}} = 3 \]

and because the convergence radius is greater than 1, the power series converges in particular at \(z = 1\), which is exactly the series we started with. More generally, it converges for all \(|z| < 3\) and diverges for all \(|z| > 3\).

Riemann Zeta Function

The Riemann zeta function is a special function that is defined by the following series for \(s > 0\):

\[\zeta(s) = \sum^{\infty}_{n=1}{\frac{1}{n^s}} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \ldots \]

If we analyze the convergence of this series we hit a few walls. First we can use the ratio test to check if the series converges:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{\frac{1}{(n+1)^s}}{\frac{1}{n^s}} \right| \\ &= \left| \frac{n^s}{(n+1)^s} \right| \\ &= \left| \left(\frac{n}{n+1}\right)^s \right| \\ &= \left| \left(1 - \frac{1}{n+1}\right)^s \right| \to 1 \text{ as } n \to \infty \end{align*} \]

So the ratio test is inconclusive. We can also use the root test to check if the series converges:

\[\begin{align*} \sqrt[n]{|a_n|} &= \sqrt[n]{\frac{1}{n^s}} \\ &= \left|\frac{1}{n^\frac{1}{n}}\right|^{s} \\ &\to 1 \text{ as } n \to \infty \end{align*} \]

So the root test is also inconclusive. Instead let’s look at some of the terms, maybe we can use the direct comparison test to check if the series converges. We can see that for \(s=1\) we get the harmonic series which diverges because of the following observation:

\[\begin{align*} &1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \ldots \\ &\geq 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \ldots \\ &= 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \ldots = \infty \end{align*} \]

For \(s > 1\) we can use a similar idea:

\[\begin{align*} &1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \frac{1}{5^s} + \frac{1}{6^s} + \frac{1}{7^s} + \frac{1}{8^s} + \ldots \\ &\leq 1 + \frac{1}{2^s} + \frac{1}{2^s} + \frac{1}{4^s} + \frac{1}{4^s} + \frac{1}{4^s} + \frac{1}{4^s} + \ldots \\ &= 1 + \frac{2}{2^s} + \frac{2^2}{2^{2s}} + \frac{2^3}{2^{3s}} + \frac{2^4}{2^{4s}} + \ldots \\ &= 1 + \frac{1}{2^{s-1}} + \frac{1}{2^{2(s-1)}} + \frac{1}{2^{3(s-1)}} + \ldots \\ &= 1 + \frac{1}{2^{s-1}} + \left(\frac{1}{2^{s-1}}\right)^2 + \left(\frac{1}{2^{s-1}}\right)^3 + \ldots \\ &= \sum^{\infty}_{n=0}{\left(\frac{1}{2^{s-1}}\right)^n} \end{align*} \]

Which is just the geometric series with the first term \(1\) and the common ratio \(\frac{1}{2^{s-1}}\). So we can conclude that the series converges because for \(s > 1\) the common ratio is less than 1. And using our formula we can find the sum of the series:

\[\sum^{\infty}_{n=0}{q^n} = \frac{1}{1 - q} \text{ where } |q| < 1 \]

Therefore we can give an upper bound for the Riemann zeta function for \(s > 1\):

\[\zeta(s) = \sum^{\infty}_{n=1}{\frac{1}{n^s}} \leq \frac{1}{1 - \frac{1}{2^{s-1}}} \]

Using this upper bound we have shown, by direct comparison with the convergent geometric series, that the Riemann zeta function converges for \(s > 1\). For the missing values \(0 < s \leq 1\) the series diverges, again by direct comparison: since \(n^s \leq n\) we have

\[\frac{1}{n} \leq \frac{1}{n^s} \]

and therefore, comparing with the divergent harmonic series, the Riemann zeta function diverges for \(0 < s \leq 1\). So in summary:

\[\zeta(s) = \sum^{\infty}_{n=1}{\frac{1}{n^s}} = \begin{cases} \text{diverges} & \text{if } 0 < s \leq 1 \\ \text{converges} & \text{if } s > 1 \end{cases} \]
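As a sanity check of the bound (my own remark): for \(s = 2\) it gives

\[\zeta(2) \leq \frac{1}{1 - \frac{1}{2}} = 2 \]

which is consistent with the known value \(\zeta(2) = \frac{\pi^2}{6} \approx 1.645\).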

Double Series and Multiplication of Series

We can define a double sequence \(c_{kl}\), which we can picture as an infinite matrix. If we add the entries column by column we get a sequence \(U_n\) of column sums, and summing these gives a series, a sum of sums: a double series. We can also add the entries row by row to get a different sequence \(S_n\) of row sums and sum those up to get a different double series. There are many ways of adding up the entries, and depending on the order the value of the series can differ, because infinite summation, unlike finite addition, is in general not order-independent.

So we say that a series is a linear ordering of the double series if there is a bijection between the indices of the single series and the index pairs of the double series.

From the Cauchy criterion it follows that if a linear ordering of the double series converges absolutely, then so do the row-by-row and column-by-column series, and they all converge to the same value. In this case every other linear ordering also converges absolutely to that same value.

Cauchy Product

Now what if we want to multiply two series? We need to find some way of multiplying the terms of the two series together such that the products again form a new series. This is called the Cauchy product; it collects the products \(a_j b_k\) along the diagonals \(j + k = n\), similar in spirit to Cantor’s diagonal enumeration of pairs.
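Written out (the standard definition, assuming both series start at \(n = 0\)):

\[\left(\sum^{\infty}_{n=0}{a_n}\right) \cdot \left(\sum^{\infty}_{n=0}{b_n}\right) = \sum^{\infty}_{n=0}{c_n} \qquad \text{where } c_n = \sum^{n}_{k=0}{a_k b_{n-k}} \]

Each \(c_n\) collects all products \(a_j b_k\) with \(j + k = n\), i.e. one diagonal of the product matrix.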

This again results in an ordinary series. The reason for defining the product of two series in this way is that polynomials multiply exactly like this, and power series are in effect polynomials of infinite degree, so this definition gives an accurate result and representation when we multiply them.

Importantly, the Cauchy product need not converge! So when does it converge?

Even if the two original series both converge, the Cauchy product can still diverge. We can only say that if both original series converge absolutely, then the Cauchy product also converges, and its value is simply the product of the two sums.

Using this we can prove the functional equation of the exponential function, \(\exp(x + y) = \exp(x) \cdot \exp(y)\).
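A sketch of the key step (my own, using the binomial theorem): the Cauchy product of the exponential series for \(x\) and \(y\) has the terms

\[c_n = \sum^{n}_{k=0}{\frac{x^k}{k!} \cdot \frac{y^{n-k}}{(n-k)!}} = \frac{1}{n!} \sum^{n}_{k=0}{\binom{n}{k} x^k y^{n-k}} = \frac{(x+y)^n}{n!} \]

and summing over \(n\) gives exactly the exponential series of \(x + y\). Since both factors converge absolutely, the Cauchy product converges to the product of the sums.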

We can also compare the exponential series at \(z = 1\) with the sequence \(\left(1 + \frac{1}{n}\right)^n\) that defines Euler’s number, to see that both give the same value \(e\).
