
Convergent Series

We have seen that a series is the sum of the terms of a sequence.

Just like sequences, series can also converge or diverge. A series converges if its sequence of partial sums converges. In other words, from the original sequence we build a new sequence, where each term is the sum of all terms of the original sequence up to that point.

\[\begin{align*} \text{Sequence} & : (a_1, a_2, a_3, \ldots, a_n) \\ \text{Series} & : S_n = \sum^{n}_{k=1}{a_k} = a_1 + a_2 + a_3 + \ldots + a_n \\ \text{Sequence of partial sums} & : S_1, S_2, S_3, \ldots, S_n \end{align*} \]

If the sequence of partial sums converges then the series converges. The limit of the sequence of partial sums is called the sum or value of the series. If the sequence of partial sums diverges then the series diverges.

For a series to converge, the underlying sequence must be a null sequence; in other words, the limit of the original sequence must be zero. This is a necessary but not sufficient condition: there are series that diverge even though their terms converge to zero.

\[\sum_{n=1}^{\infty} a_n \text{ converges} \implies \lim_{n \to \infty} a_n = 0 \]

There is an intuitive interpretation behind this condition. Imagine you’re summing up the terms of a series. For the series to converge, the partial sums need to settle on a finite value as you keep adding more and more terms. If the terms of the sequence do not approach zero, it becomes impossible for the partial sums to settle, and the series will diverge. In short, if the terms are not getting smaller and smaller, the series will keep getting larger and larger and will not converge.

Another important property concerns the order in which we sum the terms. For finitely many terms the order never matters, since addition of real numbers is associative and commutative. For infinite series, however, this carries over only if the series converges absolutely (a property we define further below); in that case we can rearrange the terms without affecting the convergence or the value of the series. So formally, if the series converges absolutely and the function \(f\) is a bijective mapping from the natural numbers to the natural numbers, then:

\[\sum_{n=1}^{\infty} a_n = \sum_{n=1}^{\infty} a_{f(n)} \]

In addition, just like with the convergence of a sequence, what the series does in the beginning does not affect its convergence. If we start summing the terms at a different (later) index, the series still converges or diverges exactly as before, but its value will in general change. This presumes all terms are finite real (or complex) numbers: dropping or adding finitely many finite terms shifts the partial sums only by a fixed finite amount. So more formally, for any \(m \geq N_0\) we have:

\[\sum_{n=N_0}^{\infty} a_n \text{ converges} \iff \sum_{n=m}^{\infty} a_n \text{ converges} \]
Geometric Series

Let’s look at some examples of series and analyze their convergence. We have seen the geometric sequence before. The geometric sequence is a sequence where each term is a constant multiple of the previous term. So formally, the geometric sequence is defined as follows, where \(q \in \mathbb{C}\) is a constant:

\[a_n = q^n \]

Then the geometric sequence has the following properties as \(n \to \infty\):

  • If \(|q| < 1\), then the sequence converges to 0.
  • If \(q = 1\), then the sequence converges to 1.
  • If \(q = -1\), then the sequence oscillates between 1 and -1.
  • If \(|q| > 1\), then the sequence diverges, as \(|q^n| \to \infty\).

The geometric series is then just the sum of the terms of the geometric sequence:

\[\sum^{\infty}_{k=0}{q^k} \]

For the series to converge we need to check if the sequence of partial sums converges. The sequence of partial sums is:

\[\begin{align*} S_n &= \sum^{n}_{k=0}{q^k} = 1 + q + q^2 + \ldots + q^n \\ q \cdot S_n &= q + q^2 + q^3 + \ldots + q^{n+1} \\ S_n - q \cdot S_n &= 1 - q^{n+1} \\ (1-q) \cdot S_n &= 1 - q^{n+1} \\ S_n &= \frac{1 - q^{n+1}}{1-q} \end{align*} \]

Now we have a closed form for the sequence of partial sums, and therefore we can analyze the convergence of the geometric series by taking the limit of the partial sums. For \(|q| < 1\) the term \(q^{n+1}\) goes to zero as \(n \to \infty\), so we expect the limit to be \(\frac{1}{1-q}\). Let’s prove it is indeed the limit:

\[\lim_{n \to \infty}\left|\frac{1 - q^{n+1}}{1-q} - \frac{1}{1-q}\right| = \lim_{n \to \infty}\left|\frac{1 - q^{n+1} - 1}{1-q}\right| = \lim_{n \to \infty}\left|\frac{- q^{n+1}}{1-q}\right| = \lim_{n \to \infty}\left|\frac{q^{n+1}}{1-q}\right| = 0 \]

So the sequence of partial sums converges to \(\frac{1}{1-q}\) with \(|q| < 1\), which means the geometric series converges to \(\frac{1}{1-q}\).

Let’s look at a concrete example of a geometric series for \(q = \frac{1}{2}\):

\[\begin{align*} \sum^{\infty}_{k=0}{\left(\frac{1}{2}\right)^k} &= 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \ldots \\ &= \frac{1}{1-\frac{1}{2}} = \frac{1}{\frac{1}{2}} = 2 \end{align*} \]
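As a quick numerical sanity check (a minimal Python sketch, not part of the derivation), we can watch the partial sums approach the predicted value \(\frac{1}{1-q} = 2\):

```python
# Partial sums of the geometric series with q = 1/2 approaching 1/(1 - q) = 2.
q = 0.5
partial = 0.0
for n in range(21):
    partial += q**n
print("S_20      =", partial)
print("1/(1 - q) =", 1 / (1 - q))
```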

But what if the index starts at 1 rather than 0? Does the starting index have an effect on the convergence and the value of the series? For this, let’s revisit the closed form for the sequence of partial sums:

\[\begin{align*} \sum^{\infty}_{k=1}{q^k} &= q + q^2 + q^3 + \ldots \\ &= \sum^{\infty}_{k=0}{q^k} - q^0 = \frac{1}{1-q} - 1 = \frac{q}{1-q} \end{align*} \]

We know that this series still converges. So we can say that what we do in the first steps does not have an effect on the convergence of the series. However, it does have an effect on the value of the series. So the geometric series for \(q = \frac{1}{2}\) starting at 1 converges to \(\frac{\frac{1}{2}}{1-\frac{1}{2}} = 1\) instead of 2.

Another way of looking at this is that we know that \(\sum^{\infty}_{k=0}{\frac{1}{2^k}} = 2\) and that we can write this also as follows:

\[\sum^{\infty}_{k=0}{\frac{1}{2^k}} = 2 = 1 + \frac{1}{2} + \frac{1}{4} + \sum^{\infty}_{k=3}{\frac{1}{2^k}} \]

So therefore we must have:

\[\sum^{\infty}_{k=3}{\frac{1}{2^k}} = \frac{1}{4} \]
Harmonic Series

We have seen an example of a series that converges, now let’s look at an example of a series that diverges. Just like with the geometric series that uses the geometric sequence, we can also look at the harmonic series that uses the harmonic sequence. In this case we will analyze the following harmonic series:

\[\sum^{\infty}_{n=1}{\frac{1}{n}} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \ldots \]

We already know that the harmonic sequence converges to zero as \(n\) approaches infinity, so the necessary condition is met. However, this does not automatically mean that the series converges. In fact, the harmonic series diverges even though its terms converge to zero. This is a good example showing that the terms converging to zero is a necessary but not sufficient condition for the series to converge.

\[\sum^{\infty}_{n=1}{\frac{1}{n}} = \infty \]

The intuitive idea behind this is that we can always group enough subsequent terms to add up to at least a previous term. For example, after the term \(\frac{1}{2}\) come the two terms \(\frac{1}{3}\) and \(\frac{1}{4}\), which together sum to at least \(\frac{1}{2}\):

\[\frac{1}{2} \leq \frac{1}{3} + \frac{1}{4} \]

This process can be continued indefinitely: the next four terms \(\frac{1}{5}\), \(\frac{1}{6}\), \(\frac{1}{7}\) and \(\frac{1}{8}\) together sum to at least \(\frac{1}{4}\):

\[\frac{1}{4} \leq \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} \]

Since we can keep producing such groups, each contributing at least a fixed amount, the partial sums grow without bound and the series diverges.
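A small numerical sketch (Python) makes this visible: the partial sums grow without bound, but only about as fast as \(\ln(n)\):

```python
import math

# Partial sums of the harmonic series: unbounded, but growing only like ln(n).
partial = 0.0
checkpoints = {10, 100, 1000, 10**4, 10**5, 10**6}
for n in range(1, 10**6 + 1):
    partial += 1.0 / n
    if n in checkpoints:
        print(f"n = {n:>7}: S_n = {partial:8.4f}, ln(n) = {math.log(n):8.4f}")
```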


However, there is a nice visualization of this argument, showing how the partial sums grow past every bound.

Telescoping Series

The telescoping series is another special case of a convergent series: one where consecutive terms cancel each other out so that the partial sums collapse to just a few terms. A classic example is the following series:

\[\sum^{\infty}_{n=1}{\frac{1}{n(n+1)}} \]

Now you might be wondering why this series is called telescoping series. The reason is that we can rewrite the terms to the following:

\[\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1} \]

So then when we write out the first few terms of the series, we get:

\[\sum^{\infty}_{n=1}{\left(\frac{1}{n} - \frac{1}{n+1}\right)} = \left(1 - \frac{1}{2}\right) + \left(\frac{1}{2} - \frac{1}{3}\right) + \left(\frac{1}{3} - \frac{1}{4}\right) + \ldots \]

and as we can see, the terms cancel each other out so that only the first term and a vanishing remainder are left. So we can rewrite the series as follows:

\[\sum^{\infty}_{n=1}{\frac{1}{n(n+1)}} = \lim_{n \to \infty} \left(1 - \frac{1}{n+1}\right) = 1 \]
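The cancellation is easy to verify numerically; a short Python sketch shows the partial sums matching the closed form \(1 - \frac{1}{n+1}\) and approaching 1:

```python
# Partial sums of the telescoping series sum 1/(n(n+1)), compared to the
# closed form 1 - 1/(n+1) derived from the cancellation.
partial = 0.0
for n in range(1, 11):
    partial += 1.0 / (n * (n + 1))
    print(f"S_{n:<2} = {partial:.6f}   closed form: {1 - 1 / (n + 1):.6f}")
```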
Beware of Infinite Sums

In the case of series we cannot just use our usual rules for the (finite) sum operator, because the sums are infinite and the normal rules of arithmetic do not simply carry over.

To see this let’s compare two previously seen examples. We can rewrite:

\[\sum^{\infty}_{n=1}{\frac{1}{n(n+1)}} = \sum^{\infty}_{n=1}{\frac{1}{n}} - \sum^{\infty}_{n=1}{\frac{1}{n+1}} \]

We already know this is the telescoping series that converges to 1, so in this "infinity minus infinity" case we would get 1. However, if we play the same game with the value 0, writing it as the sum over \(1 - 1 = 0\), we can do the following split:

\[0 = \sum^{\infty}_{n=1}{1} - \sum^{\infty}_{n=1}{1} \]

which is not a valid step, as both series on the right diverge. Infinity minus infinity has no well-defined value: it "gave" 1 in the first case and would have to give 0 here.

So we can only perform such operations if we have two series \(\sum^{\infty}_{k=1}{a_k}\) and \(\sum^{\infty}_{k=1}{b_k}\) that both converge. If this is the case, we can perform the following operations:

  • \(\sum^{\infty}_{k=1}{c \cdot a_k}=c \cdot \sum^{\infty}_{k=1}{a_k}\) for \(c \in \mathbb{C}\)
  • \(\sum^{\infty}_{k=1}{(a_k\pm b_k)}=\sum^{\infty}_{k=1}{a_k}\pm \sum^{\infty}_{k=1}{b_k}\)

Cauchy Criterion

We have seen that for sequences we can use the Cauchy criterion to check if a sequence converges. Specifically we can say that a sequence converges if the following holds:

\[\forall \epsilon > 0 \quad \exists N_{\epsilon} \in \mathbb{N} : |a_n - a_m| < \epsilon \text{ for all } n, m \geq N_{\epsilon} \]

We can also apply this to series. The Cauchy criterion for series states that a series converges if the following holds:

\[\forall \epsilon > 0 \quad \exists N_{\epsilon} \in \mathbb{N} : \left|\sum^{m}_{k=n} a_k\right| < \epsilon \text{ for all } m \geq n \text{ and } n \geq N_{\epsilon} \]

So after some point \(N_{\epsilon}\), the sum of the terms from \(n\) up to any \(m \geq n\) is less than \(\epsilon\). In other words, after some point the remaining terms barely change the value of the series anymore. This is a very powerful test to check if a series converges.

Proof

The proof is rather simple. We know a series converges if the sequence of partial sums converges. So we can just apply the Cauchy criterion for sequences to the sequence of partial sums. So if the partial sums are Cauchy, then the series converges. For the partial sums to be Cauchy, we need to show that the following holds where \(S_n = \sum^{n}_{k=1}{a_k}\):

\[\forall \epsilon > 0 \exists N_{\epsilon} \in \mathbb{N} : |S_m - S_{n-1}| < \epsilon \text{ for all } n, m \geq N_{\epsilon} \]

From this we can derive the following which gives us the Cauchy criterion for series:

\[|S_m - S_{n-1}| = |\sum^{m}_{k=1}{a_k} - \sum^{n-1}_{k=1}{a_k}| = |\sum^{m}_{k=n}{a_k}| < \epsilon \text{ for all } n, m \geq N_{\epsilon} \]

The Cauchy criterion also has a special property that follows from it. Just like with sequences, at some point the differences must go to zero. For series this means that the sections (partial tails) of the series must go to zero. So, in other words, after some point the sum from \(n\) to any \(m \geq n\) must vanish in the limit. More formally:

\[\sum_{k=1}^{\infty} a_k \text{ converges} \implies \lim_{n \to \infty} \sum_{k=n}^{m} a_k = 0 \]

which, for \(m = n\), is exactly the necessary condition we have already seen.

Proof

Using the Cauchy criterion we can actually prove that the underlying sequence must be a null sequence. The proof is pretty simple. We assume we have a series \(\sum_{n=1}^{\infty} a_n\) that converges. Then from the Cauchy criterion we know that the following holds:

\[\forall \epsilon > 0 \exists N_{\epsilon} \in \mathbb{N} : |\sum^{m}_{k=n} a_k| < \epsilon \text{ for all } m \geq n \text{ and } n \geq N_{\epsilon} \]

Then if we specifically set \(m=n\) we get:

\[\begin{align*} |\sum^{n}_{k=n} a_k| < \epsilon \\ |a_n| < \epsilon \\ |a_n - 0| < \epsilon \end{align*} \]

So we can see that the terms of the series must go to zero as \(n\) approaches infinity. So we can conclude that the underlying sequence must be a null sequence.

Example

The approach for this example is to first show that the sequence of terms goes to zero and then split the series into a telescoping part and a geometric part to find its value.

\[\sum^{\infty}_{n=1}{\frac{3^n + n^2 + n}{3^{n+1}(n(n+1))}} \]
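Concretely, using \(n^2 + n = n(n+1)\), each term splits into a telescoping part and a geometric part:

\[\frac{3^n + n^2 + n}{3^{n+1} n(n+1)} = \frac{3^n}{3^{n+1} n(n+1)} + \frac{n(n+1)}{3^{n+1} n(n+1)} = \frac{1}{3} \cdot \frac{1}{n(n+1)} + \frac{1}{3^{n+1}} \]

Both parts converge, so we may split the sum and get \(\frac{1}{3} \cdot 1 + \sum^{\infty}_{n=1}{\frac{1}{3^{n+1}}} = \frac{1}{3} + \frac{1}{9} \cdot \frac{1}{1 - \frac{1}{3}} = \frac{1}{3} + \frac{1}{6} = \frac{1}{2}\).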

Direct Comparison Test

The direct comparison test is another method, like the Cauchy criterion, to check if a series converges. The idea comes from the monotone convergence theorem and is, in my opinion, a lot more intuitive than the Cauchy criterion.

First let’s just focus on series where, after some point \(k\), the terms are always nonnegative. So more formally we focus on series where the following holds:

\[\text{Let } \sum^{\infty}_{n=1}{a_n} \text{ be a series with } a_n \geq 0 \text{ for all } n \geq k \text{ where } k \geq 1 \]

The idea is to use the monotone convergence theorem. First we need to show that the sequence of partial sums is monotone increasing, which is true if the terms of the series are nonnegative:

\[S_{n+1} - S_n = \sum^{n+1}_{k=1}{a_k} - \sum^{n}_{k=1}{a_k} = a_{n+1} \geq 0 \text{ for all } n \geq 1 \]

Because the sequence of partial sums is monotone increasing, the monotone convergence theorem tells us that it converges if and only if it is bounded. So, in other words, for such a series to converge, the sequence of partial sums must be bounded.

The direct comparison follows from the idea shown above but in a more general way and kind of mixing it with the squeeze theorem. The idea is that we can compare two series and show that one converges if the other converges. The formal definition of the direct comparison test is as follows. If we have two series \(\sum^{\infty}_{n=1}{a_n}\) and \(\sum^{\infty}_{n=1}{b_n}\) and we know that the following holds for all \(k \geq 1\):

\[0 \leq a_k \leq b_k \]

So the terms of both series are always positive and the terms of the first series are always less than or equal to the terms of the second series. Then we can say that if the second series converges, then the first series also converges.

\[\sum^{\infty}_{n=1}{b_n} \text{ converges} \implies \sum^{\infty}_{n=1}{a_n} \text{ converges} \]

The intuition behind this is rather simple. If \(b_n\) converges, then the sum of all terms of \(b_n\) is finite. If \(a_n\) is always less than or equal to \(b_n\), then the sum of all terms of \(a_n\) must also be finite. Because we are only allowing for positive terms, we also do not have to worry about convergence against negative infinity.

Proof

This follows from the Cauchy criterion. If the series \(\sum^{\infty}_{n=1}{b_n}\) converges, then for every \(\epsilon > 0\) there exists an \(N \in \mathbb{N}\) such that for all \(m \geq n \geq N\):

\[|\sum^{m}_{k=n} b_k| < \epsilon \]

Because the terms of \(b_k\) are always positive, we can drop the absolute value and rewrite this as:

\[\sum^{m}_{k=n} b_k < \epsilon \]

We also know that the terms of \(a_k\) are always less than or equal to the terms of \(b_k\). So we can rewrite this as:

\[\sum^{m}_{k=n} a_k \leq \sum^{m}_{k=n} b_k < \epsilon \]

So the Cauchy criterion is satisfied for \(\sum^{\infty}_{n=1}{a_n}\) as well, and we can conclude that it converges.

In a similar way we can also show that if the first series diverges, then the second series also diverges.

\[\sum^{\infty}_{n=1}{a_n} \text{ diverges} \implies \sum^{\infty}_{n=1}{b_n} \text{ diverges} \]

The idea is again the same: if \(a_n\) diverges, then the sum of all terms of \(a_n\) is infinite. If \(b_n\) is always greater than or equal to \(a_n\), then the sum of all terms of \(b_n\) must also be infinite, and the series therefore diverges.

We can actually make this test even more powerful. Above we required that the terms of \(b_k\) are always greater than or equal to the terms of \(a_k\), and that the terms of \(a_k\) are always nonnegative. It turns out that the test also works if the conditions are only met for all \(k \geq m\) for some \(m \in \mathbb{N}\), so we can have a finite number of terms that do not meet the conditions. The intuition behind this is that the first \(m\) terms of the series do not have an influence on the convergence of the series, only on its value. So the test works as long as the following holds:

\[0 \leq a_k \leq b_k \text{ for all } k \geq m \text{ where } m \geq 1 \]
Example

We know that the harmonic series \(\sum^{\infty}_{n=1}{\frac{1}{n}}\) diverges. So what if the denominator is squared: does the series \(\sum^{\infty}_{n=1}{\frac{1}{n^2}}\) converge? We can use the direct comparison test to show that it does, by comparing it to the telescoping series \(\sum^{\infty}_{n=2}{\frac{1}{n(n-1)}}\), which converges to 1. We want to show that the terms of the first series are always less than or equal to the terms of the second series, and that they are always positive. The telescoping series is not defined for \(n=1\), but if we start at \(n=2\) we can see that the comparison holds:

| \(k\) | \(\frac{1}{k^2}\) | \(\frac{1}{k(k-1)}\) |
| --- | --- | --- |
| 1 | \(1\) | undefined |
| 2 | \(\frac{1}{4}\) | \(\frac{1}{2}\) |
| 3 | \(\frac{1}{9}\) | \(\frac{1}{6}\) |
| 4 | \(\frac{1}{16}\) | \(\frac{1}{12}\) |
| 5 | \(\frac{1}{25}\) | \(\frac{1}{20}\) |
| 6 | \(\frac{1}{36}\) | \(\frac{1}{30}\) |

This comes from the following:

\[k(k-1) \leq k^2 \implies \frac{1}{k(k-1)} \geq \frac{1}{k^2} \text{ for all } k \geq 2 \]

So therefore:

\[1 + \sum^{\infty}_{n=2}{\frac{1}{n^2}} \leq 1 + \sum^{\infty}_{n=2}{\frac{1}{n(n-1)}} \]

and by the direct comparison test because the telescoping series converges we can say that the series \(\sum^{\infty}_{n=1}{\frac{1}{n^2}}\) converges as well.
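As a quick numerical check (Python sketch), the partial sums of \(\sum 1/n^2\) indeed stay below the bound of 2 coming from the comparison (the true limit is \(\pi^2/6 \approx 1.645\)):

```python
# Partial sums of sum 1/n^2 stay below the comparison bound 2
# (the true limit is pi^2/6 ~ 1.644934).
partial = 0.0
for n in range(1, 10**5 + 1):
    partial += 1.0 / n**2
print("S_100000 =", partial)
```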

What about the series \(\sum^{\infty}_{n=1}{\frac{1}{n^3}}\)? We can use the same idea as above:

\[k^3 \geq k^2 \implies \frac{1}{k^3} \leq \frac{1}{k^2} \text{ for all } k \geq 1 \]

So then it follows that:

\[\sum^{\infty}_{n=1}{\frac{1}{n^3}} \leq \sum^{\infty}_{n=1}{\frac{1}{n^2}} \]

And because we know that the series \(\sum^{\infty}_{n=1}{\frac{1}{n^2}}\) converges, we can say that the series \(\sum^{\infty}_{n=1}{\frac{1}{n^3}}\) also converges by the direct comparison test. We can actually generalize this: the series \(\sum^{\infty}_{n=1}{\frac{1}{n^p}}\) converges for \(p > 1\) and diverges for \(p \leq 1\).

To show this we can look at the case where \(p = \frac{1}{2}\) so we have the following series:

\[\sum^{\infty}_{n=1}{\frac{1}{\sqrt{n}}} \]

We can use the direct comparison test to show that this series diverges. We can compare it to the harmonic series \(\sum^{\infty}_{n=1}{\frac{1}{n}}\) which we know diverges. We can see that for all \(n \geq 1\):

\[\sqrt{n} \leq n \implies \frac{1}{\sqrt{n}} \geq \frac{1}{n} \]

So we can conclude that:

\[\sum^{\infty}_{n=1}{\frac{1}{\sqrt{n}}} \geq \sum^{\infty}_{n=1}{\frac{1}{n}} \]

And because the harmonic series diverges, we can say that the series \(\sum^{\infty}_{n=1}{\frac{1}{\sqrt{n}}}\) also diverges by the direct comparison test.

Example

Another series that is actually similar to the series above is the following:

\[\sum^{\infty}_{n=1}{\frac{1}{n!}} \]

We can use the direct comparison test to show that this series converges. You may know from analyzing algorithms that the factorial grows very fast. A simple approach would be to just compare it with the squared series \(\sum^{\infty}_{n=1}{\frac{1}{n^2}}\), which we know converges, as the following holds:

\[\frac{1}{n!} \leq \frac{1}{n^2} \text{ for all } n \geq 4 \]

So for the tails we know that:

\[\sum^{\infty}_{n=4}{\frac{1}{n!}} \leq \sum^{\infty}_{n=4}{\frac{1}{n^2}} \]

and therefore the series converges by the direct comparison test, since the finitely many terms before \(n = 4\) do not affect convergence. However, we can also derive a nicer bound by observing the following:

\[\begin{align*} k! = 1 \cdot 2 \cdot 3 \cdots k \geq 1 \cdot 2 \cdot 2 \cdots 2 = 2^{k-1} \text{ for all } k \geq 2 \\ \frac{1}{k!} \leq \frac{1}{2^{k-1}} \text{ for all } k \geq 2 \end{align*} \]

which means:

\[\sum^{\infty}_{n=1}{\frac{1}{n!}} \leq \sum^{\infty}_{n=1}{\frac{1}{2^{n-1}}} \]

And we know that the series \(\sum^{\infty}_{n=1}{\frac{1}{2^{n-1}}} = \sum^{\infty}_{n=0}{\frac{1}{2^{n}}}\) is a geometric series with \(q = \frac{1}{2}\), which converges to 2. So we can conclude that the series \(\sum^{\infty}_{n=1}{\frac{1}{n!}}\) converges as well.
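A short numerical sketch (Python) confirms both the convergence and the geometric bound; the limit is \(e - 1 \approx 1.718\), safely below the bound of 2:

```python
import math

# Partial sums of sum_{n>=1} 1/n! converge to e - 1, below the geometric bound 2.
partial = 0.0
for n in range(1, 15):
    partial += 1.0 / math.factorial(n)
print("S_14  =", partial)
print("e - 1 =", math.e - 1)
```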

Absolute Convergence

We have seen that series can converge to some limit or diverge. But there is also something called absolute convergence. A series is said to be absolutely convergent if the absolute values of the terms of the series converge. In other words, if we take the absolute value of each term in the series and then check if that series converges, then we can say that the original series is absolutely convergent. More formally we define absolute convergence as:

\[\sum^{\infty}_{n=1}{|a_n|} \text{ converges} \]

If a series is absolutely convergent, then it is also convergent. However, the converse is not true. A series can be convergent without being absolutely convergent. We call a series that is convergent but not absolutely convergent conditionally convergent.

\[\sum^{\infty}_{n=1}{|a_n|} \text{ converges} \implies \sum^{\infty}_{n=1}{a_n} \text{ converges} \]

This is easily proven by the Cauchy criterion for series. If the series is absolutely convergent, the Cauchy criterion holds for \(\sum^{\infty}_{n=1}{|a_n|}\), and by the triangle inequality every section of the original series is bounded by the corresponding section of the absolute series, as some terms may cancel each other out:

\[\left|\sum^{m}_{k=n} a_k\right| \leq \sum^{m}_{k=n} |a_k| < \epsilon \text{ for all } m \geq n \text{ and } n \geq N_{\epsilon} \]

If we say that \(S_n = \sum^{n}_{k=1}{a_k}\) and \(T_n = \sum^{n}_{k=1}{|a_k|}\), then we also get the following from the above inequality:

\[\left|\sum^{\infty}_{k=1}{a_k}\right| = \left|\lim_{n \to \infty} S_n\right| = \lim_{n \to \infty} |S_n| \leq \lim_{n \to \infty} T_n = \sum^{\infty}_{k=1}{|a_k|} \]

Or in short:

\[|\sum^{\infty}_{n=1}{a_n} | \leq \sum^{\infty}_{n=1}{|a_n|} \]

Absolutely convergent series also have the nice property that any subseries or rearrangement of the series also converges. This is not true for conditionally convergent series. So if we have a series \(\sum^{\infty}_{n=1}{a_n}\) that converges absolutely, then we can say that any subseries or rearrangement of the series also converges. Intuitively this makes sense, as the absolute values of the terms are always positive and therefore the value of the series stays finite.

Example

Let’s look at the series \(\sum^{\infty}_{n=1}{(-1)^{n+1} \frac{1}{n}}\), the so-called alternating harmonic series. The sequence of terms converges to zero, so the necessary condition is met. If we analyze the partial sums, we notice that the series converges to \(\ln(2)\). So the series converges.

But if we take the absolute value of the terms of the series, we get the harmonic series \(\sum^{\infty}_{n=1}{\frac{1}{n}}\), which diverges. So the original series is convergent but not absolutely convergent. This is a good example showing that the converse of "absolute convergence implies convergence" does not hold.
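We can check the claimed value numerically with a few lines of Python; the partial sums oscillate around and slowly approach \(\ln(2)\):

```python
import math

# Partial sums of the alternating harmonic series approach ln(2) ~ 0.693147.
partial = 0.0
for n in range(1, 10**6 + 1):
    partial += (-1) ** (n + 1) / n
print("S_1000000 =", partial)
print("ln(2)     =", math.log(2))
```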

Example

If we have a series \(\sum^{\infty}_{n=1}{a_n}\) that converges absolutely, then it also follows that the series \(\sum^{\infty}_{n=1}{a_n^2}\) converges absolutely. We can prove this as follows. Because \(a_n\) converges (to zero), we know it is bounded, i.e. there exists some \(C \in \mathbb{R}\) such that \(a_n \leq |a_n| \leq C\) for all \(n \geq N\) for some \(N \in \mathbb{N}\). So we can say that:

\[|a_n|^2 \leq C |a_n| \text{ for all } n \geq N \]

and because we know that \(\sum^{\infty}_{n=1}{|a_n|}\) converges, so does \(\sum^{\infty}_{n=1}{C|a_n|}\), and we can use the direct comparison test to show that \(\sum^{\infty}_{n=1}{|a_n|^2}\) converges as well. If we also had a series \(\sum^{\infty}_{n=1}{b_n}\) that converges absolutely, then we can show in a similar way that the series \(\sum^{\infty}_{n=1}{a_nb_n}\) converges as well, as there exists some bound \(C \in \mathbb{R}\) such that:

\[|a_nb_n| \leq C |b_n| \text{ for all } n \geq N \]

and therefore we can use the direct comparison test to show that \(\sum^{\infty}_{n=1}{a_nb_n}\) converges absolutely as well.

Alternating Series Test

The alternating series test is a useful test for checking the convergence of series whose terms alternate in sign, such as the alternating harmonic series. The test states that if we have a series of the form:

\[\sum^{\infty}_{n=1}{(-1)^{n+1} a_n} \]

where either all \(a_n\) are positive or all \(a_n\) are negative, where the absolute values of the terms are decreasing (more formally, \(|a_{n+1}| \leq |a_n|\) for all \(n \geq N_0\)), and where the limit of the terms is zero, then the series converges.

Intuitively this makes sense as the terms of the series are alternating in sign and the absolute values of the terms are decreasing. So the series is “balancing out” and converging to a finite value.

So in the case of the alternating harmonic series all the above conditions are met and therefore it converges to some value \(S\). Alternating series that suit this form also have the interesting property that the value of the series \(S\) can be bounded if all the terms are positive:

\[a_1 - a_2 \leq S \leq a_1 \]

This comes from the fact that the first term of the series is positive and is the largest term. As an example, for the alternating harmonic series we have:

\[1 - \frac{1}{2} \leq S \leq 1 \implies \frac{1}{2} \leq S \leq 1 \]
Todo

This can somehow be extended to the alternating series estimation theorem which probably also works for all negative terms?

Riemann Rearrangement Theorem

A rearrangement of a series is a new series formed by rearranging the terms of the original series. More formally, the series \(\sum^{\infty}_{n=1}{x_n}\) is a rearrangement of the series \(\sum^{\infty}_{n=1}{a_n}\) if there exists a bijective function of the form:

\[f: \mathbb{N} \to \mathbb{N} \text{ such that } x_n = a_{f(n)} \text{ for all } n \in \mathbb{N} \]

If we then have a series \(\sum^{\infty}_{n=1}{a_n}\) that converges but does not converge absolutely, then we can rearrange the series to converge to any \(L \in \mathbb{R} \cup \{\pm\infty\}\). Specifically, Riemann showed that the alternating harmonic series \(\sum^{\infty}_{n=1}{(-1)^{n+1} \frac{1}{n}}\) can be rearranged to converge to any real number, as this series converges (to \(\ln(2)\)) but does not converge absolutely.

Proof

We can prove that the alternating harmonic series converges. First we observe the following:

\[\begin{align*} S_2 &= a_1 - a_2 = 1 - \frac{1}{2} = \frac{1}{2} \\ S_3 &= S_2 + a_3 = (a_1 - a_2) + a_3 \text{ where } (a_1 - a_2) > 0 \\ S_4 &= S_3 - a_4 = S_2 + (a_3 - a_4) \text{ where } (a_3 - a_4) > 0 \\ &\dots \end{align*} \]

If we then focus on the odd-indexed partial sums, we can see that they are decreasing:

\[S_{2n+1} = a_1 - a_2 + a_3 - \dots - a_{2n} + a_{2n+1} = S_{2n-1} + (a_{2n + 1} - a_{2n}) \text{ where } (a_{2n + 1} - a_{2n}) < 0 \]

So we have \(S_{2n+1} < S_{2n-1}\), and therefore the odd-indexed partial sums are decreasing. We can similarly show that the even-indexed partial sums are increasing:

\[S_{2n} = a_1 - a_2 + a_3 - \dots + a_{2n-1} - a_{2n} = S_{2n-2} + (a_{2n-1} - a_{2n}) \text{ where } (a_{2n-1} - a_{2n}) > 0 \]

So we have \(S_{2n} > S_{2n-2}\), and therefore the even-indexed partial sums are increasing. We also know that these subsequences are bounded:

\[a_1 - a_2 = S_2 \leq S_{2n} \leq S_{2n+1} \leq S_1 = a_1 \]

So by the monotone convergence theorem we can conclude that both subsequences converge. The odd terms converge to some limit \(L_1\) and the even terms converge to some limit \(L_2\).

\[\lim_{n \to \infty} S_{2n+1} = L_1 \text{ and } \lim_{n \to \infty} S_{2n} = L_2 \]

However, we notice that \(S_{2n} = S_{2n-1} - a_{2n}\). We also know that \(a_{2n} \to 0\) as \(n \to \infty\), as it is a subsequence of the harmonic sequence \(\frac{1}{n}\), which converges to zero. So we can conclude that \(L_2 = L_1 - 0 = L_1\). So both subsequences converge to the same limit \(L\):

\[\lim_{n \to \infty} S_{2n+1} = \lim_{n \to \infty} S_{2n} = L \]

More generally, it follows that if the even and odd partial sums of a series converge to the same limit, then the series converges to that limit.

Dirichlet’s Rearrangement Theorem

Dirichlet showed that if a series converges absolutely, then any rearrangement of the series converges, and it even converges to the same value.

Ratio Test

One of the most important tests for convergence of series is the ratio test. The ratio test checks whether a series converges or diverges by looking at the ratio of consecutive terms. Similarly to the Cauchy criterion, where we looked at the difference between consecutive terms, we can also look at their ratio. The ratio test states that if we have a series \(\sum^{\infty}_{n=1}{a_n}\), where importantly \(a_n \neq 0\) for all \(n\) to avoid division by zero, then we can analyze the convergence of the series by looking at the limit of the ratio of consecutive terms:

\[\limsup_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ converges absolutely} \] \[\liminf_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| > 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ diverges} \]

If the value of the limit is equal to 1, then the test is inconclusive and we cannot say anything about the convergence of the series. In this case we need to use another test to check if the series converges or diverges.

The intuition behind the ratio test is that if the ratio of consecutive terms is less than 1, then the terms of the series are getting smaller and smaller, which means that the series converges. If the ratio is greater than 1, then the terms of the series are getting larger and larger, which means that the series diverges. If the ratio is equal to 1, then we can not say anything about the convergence of the series.

Proof

We define the following sequence:

\[c_n = \sup \left\{ \left| \frac{a_{k+1}}{a_k} \right| : k \geq n \right\} \]

then the sequence \(c_n\) is decreasing because we are taking the supremum over fewer and fewer terms. Because it is a supremum of absolute values, we also know that \(c_n \geq 0\) for all \(n\). So we can conclude that the sequence \(c_n\) converges to some limit \(L \geq 0\) by the monotone convergence theorem; this limit is exactly the limit superior of the ratios. If we pick some \(q\) such that \(0 \leq L < q < 1\), then we can find some \(N \in \mathbb{N}\) such that:

\[\left| \frac{a_{k+1}}{a_k} \right| < q \text{ for all } k \geq N \]

This results in the following:

\[|a_{k+1}| < q |a_k| \text{ for all } k \geq N \]

and therefore for all \(j \geq 1\) we can repeatedly apply this to get:

\[|a_{N+j}| \leq q\,|a_{N + j - 1}| \leq \ldots \leq q^j |a_N| \text{ for all } j \geq 1 \]

So we have:

\[\sum^{\infty}_{k=N+1}{|a_k|} = \sum^{\infty}_{j=1}{|a_{N+j}|} \leq \sum^{\infty}_{j=1}{q^j |a_N|} = |a_N| \cdot \frac{q}{1 - q} < \infty \]

By the comparison test with the geometric series we can conclude that the series \(\sum^{\infty}_{n=1}{|a_n|}\) converges if \(q < 1\), or in other words if \(L < 1\). Conversely, if the limit inferior of the ratios is greater than 1, then from some point on \(|a_{k+1}| > |a_k|\), so the terms cannot converge to zero and the series diverges. If the limit equals 1, then we cannot say anything about the convergence of the series.

Importantly, we need to use the limit superior and limit inferior, as these always exist, whereas the plain limit does not always exist. However, if the limit of the ratio does exist, so \(\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = L\), then the limit superior and limit inferior are both equal to the limit and we can use the limit instead. So we can write the ratio test as follows:

\[\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ converges} \] \[\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| > 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ diverges} \]
Example

Let’s look at the following series:

\[\sum^{\infty}_{n=1}{\frac{n!}{n^n}} \]

Using the ratio test we can analyze the convergence of this series. We can calculate the ratio of consecutive terms:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{(n+1)!}{(n+1)^{n+1}} \cdot \frac{n^n}{n!} \right| \\ &= \left| \frac{(n+1)!n^n}{(n+1)^{n+1}n!} \right| \\ &= \left| \frac{(n+1)!}{n!} \cdot (\frac{n}{n+1})^n \cdot \frac{1}{n+1} \right| \\ &= \left| (n+1) \cdot \left(\frac{n}{n+1}\right)^n \cdot \frac{1}{n+1} \right| \\ &= \left| \left(\frac{n}{n+1}\right)^n \right| \\ &= \left| \frac{1}{\left(1 + \frac{1}{n}\right)^n} \right| \\ &\to \left| \frac{1}{e} \right| = \frac{1}{e} < 1 \end{align*} \]

Therefore we can conclude that the series converges by the ratio test. Because a precondition of convergence is that the terms of the series must go to zero, we can also conclude that the terms of the series go to zero as \(n\) approaches infinity so we get the following:

\[\lim_{n \to \infty} a_n = \lim_{n \to \infty} \frac{n!}{n^n} = 0 \]
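A quick numeric check (Python sketch) shows the consecutive ratios settling near \(1/e \approx 0.3679\), matching the computation above:

```python
import math

# Consecutive ratios of a_n = n!/n^n settle near 1/e ~ 0.367879.
def a(n):
    return math.factorial(n) / n**n

for n in (1, 5, 10, 50, 100):
    print(f"n = {n:>3}: a(n+1)/a(n) = {a(n + 1) / a(n):.6f}")
print("1/e =", 1 / math.e)
```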
Example

If we try to analyze the harmonic series \(\sum^{\infty}_{n=1}{\frac{1}{n}}\), which we know diverges, using the ratio test, we get:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{\frac{1}{n+1}}{\frac{1}{n}} \right| \\ &= \left| \frac{n}{n+1} \right| \\ &= \left| 1 - \frac{1}{n+1} \right| \end{align*} \]

As \(n\) approaches infinity, the limit of the ratio is equal to 1; therefore the ratio test is inconclusive, even though the harmonic series does in fact diverge.

Exponential Series

We have seen a possible origin of the Euler number \(e\). But we can also give a meaning to the Euler number raised to some power \(z \in \mathbb{C}\). Specifically, we can look at the following series:

\[\sum^{\infty}_{n=0}{\frac{z^n}{n!}} = \exp(z) = e^z \]

Hence it is called the exponential series. For \(z = 0\) we get the following:

\[1 + \frac{z}{1!} + \frac{z^2}{2!} + \frac{z^3}{3!} + \ldots = 1 + 0 + 0 + 0 + \ldots = 1 \]

Which is correct. Let’s now check if the series converges for \(z \neq 0\). We can use the ratio test to check if the series converges:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{z^{n+1}}{(n+1)!} \cdot \frac{n!}{z^n} \right| \\ &= \left| \frac{z^{n+1}n!}{(n+1)!z^n} \right| \\ &= \left| \frac{z^{n+1}}{z^n} \cdot \frac{n!}{(n+1)!} \right| \\ &= \left| z \cdot \frac{1}{n+1} \right| \\ &= \left| \frac{z}{n+1} \right| \end{align*} \]

Which as \(n\) approaches infinity goes to zero for all \(z \in \mathbb{C}\). So we can conclude that the series converges for all \(z \in \mathbb{C}\). However the question still remains if the series converges to \(e^z\).
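Numerically, at least for real arguments, the partial sums do match the built-in exponential; since the terms shrink factorially, a handful of terms already give good accuracy (a small Python sketch):

```python
import math

# Approximate exp(z) for real z by partial sums of the exponential series.
def exp_series(z, terms=30):
    total, term = 0.0, 1.0        # term holds z^n / n!, starting at n = 0
    for n in range(terms):
        total += term
        term *= z / (n + 1)       # z^(n+1)/(n+1)! from z^n/n!
    return total

for z in (0.0, 1.0, 2.5, -3.0):
    print(f"z = {z:+.1f}: series = {exp_series(z):.8f}, math.exp = {math.exp(z):.8f}")
```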

Convergence Radius

For complex numbers the set of convergence is a disk, while for real numbers it is an interval; the name "radius" comes from the complex picture.

If we are given the following series where \(z \in \mathbb{C}\):

\[\sum^{\infty}_{n=0}{\frac{z^n n!}{n^n}} \]

Then the question is for which values of \(z\) does this series converge? To analyze this we can use the ratio test:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{z^{n+1} (n+1)!}{(n+1)^{n+1}} \cdot \frac{n^n}{z^n n!} \right| \\ &= \left| \frac{z^{n+1}(n+1)!n^n}{(n+1)^{n+1}z^n n!} \right| \\ &= \left| \frac{z(n+1)n^n}{(n+1)^{n+1}} \right| \\ &= \left| \frac{zn^{n}}{(n+1)^{n}} \right| \\ &= \left| z \cdot \left(\frac{n}{n+1}\right)^{n} \right| \\ &= \left| z \cdot \frac{1}{\left(1 + \frac{1}{n}\right)^{n}} \right| \\ &\to \left| z \cdot \frac{1}{e} \right| = \frac{|z|}{e} \end{align*} \]

So for the series to converge we need the following to hold:

\[\frac{|z|}{e} < 1 \implies |z| < e \]

This means that in the complex plane there is a circle with radius \(e\) around the origin within which the series converges: this is the so-called convergence radius of the series. Outside of this circle the series diverges; inside it converges. On the boundary circle itself the ratio test is inconclusive.

Another example would be the following series which is similar to the geometric series:

\[\sum^{\infty}_{n=1}{n \cdot z^n} \]

We can use the ratio test to check for which values of \(z\) this series converges:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{(n+1)z^{n+1}}{n z^n} \right| \\ &= \left| \frac{n+1}{n} \cdot z \right| \\ &= \frac{n+1}{n} \cdot |z| < 1 \end{align*} \]

As \(n\) approaches infinity, the term \(\frac{n+1}{n}\) approaches 1. So the series converges for all \(|z| < 1\), meaning the convergence radius of this series is 1. Outside of this circle the series diverges; inside it converges (on the boundary \(|z| = 1\) the terms \(n z^n\) do not tend to zero, so the series diverges there as well). We might also wonder what the actual value of the series is. If we write out the terms we can notice some patterns:

\[\begin{align*} \sum^{\infty}_{n=1}{n \cdot z^n} &= z + 2z^2 + 3z^3 + 4z^4 + \ldots \\ &= z + z^2 + z^2 + z^3 + z^3 + z^3 + z^4 + z^4 + z^4 + z^4 + \ldots \\ &= z (1 + z + z^2 + z^3 + \ldots) + z^2 (1 + z + z^2 + \ldots) + z^3 (1 + z + \ldots) + \ldots \\ &= (z + z^2 + z^3 + \ldots)(1 + z + z^2 + \ldots) \\ &= z(1 + z + z^2 + \ldots) \cdot \frac{1}{1 - z} \\ &= \frac{z}{(1 - z)^2} \end{align*} \]
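We can sanity-check the closed form \(\frac{z}{(1-z)^2}\) numerically for some \(|z| < 1\) (Python sketch):

```python
# Compare partial sums of sum n*z^n with the closed form z/(1 - z)^2.
z = 0.5
partial = sum(n * z**n for n in range(1, 200))
print("partial sum =", partial)            # ~2.0
print("closed form =", z / (1 - z) ** 2)   # 0.5 / 0.25 = 2.0
```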

Root Test

We have seen that the ratio test can be very useful for analyzing the convergence of series. However, the ratio test is inconclusive when the limit of the ratio of consecutive terms equals 1. In some of those cases we can use another, more powerful test: the root test. Everything the ratio test decides, the root test also decides, but the root test can additionally handle cases where the ratio test is inconclusive. The root test is similar to the ratio test, but instead of the ratio of consecutive terms it looks at the \(n\)-th root of the absolute value of the terms: if we have a series \(\sum^{\infty}_{n=1}{a_n}\), we analyze the limit superior and limit inferior of \(\sqrt[n]{|a_n|}\). More formally we have the following:

\[\limsup_{n \to \infty} \sqrt[n]{|a_n|} < 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ converges absolutely} \]

Because absolute convergence implies convergence, we can also say that the series converges. However, if the limit superior is greater than 1, then the series diverges and therefore also the absolute series diverges. More formally we have the following:

\[\limsup_{n \to \infty} \sqrt[n]{|a_n|} > 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ and } \sum^{\infty}_{n=1}{|a_n|} \text{ diverges} \]

Again, if the limit superior is equal to 1, then the test is inconclusive and we cannot say anything about the convergence of the series.

Importantly, if the limit exists, so \(\lim_{n \to \infty} \sqrt[n]{|a_n|} = L\), then the limit superior and limit inferior are equal to the limit and we can use the limit instead. So we can write the root test as follows:

\[\lim_{n \to \infty} \sqrt[n]{|a_n|} < 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ converges} \] \[\lim_{n \to \infty} \sqrt[n]{|a_n|} > 1 \implies \sum^{\infty}_{n=1}{a_n} \text{ and } \sum^{\infty}_{n=1}{|a_n|} \text{ diverges} \]
Proof

The proof for the root test is similar to the proof for the ratio test. We define the following sequence:

\[c_n = \sup \left\{ \sqrt[k]{|a_{k}|} : k \geq n \right\} \]

then the sequence \(c_n\) is decreasing because we are taking the supremum over fewer and fewer terms. Because each element is a root of an absolute value, we also know that \(c_n \geq 0\) for all \(n\). So we can conclude that the sequence \(c_n\) converges to some limit \(L \geq 0\) by the monotone convergence theorem. If we then pick some \(q\) such that \(0 \leq L < q < 1\), we can find some \(N \in \mathbb{N}\) such that for all \(n \geq N\):

\[\sqrt[n]{|a_n|} < q \]

If we then take the \(n\)-th power of both sides, we get:

\[|a_n| < q^n \text{ for all } n \geq N \]

So we have:

\[\sum^{\infty}_{n=N}{|a_n|} \leq \sum^{\infty}_{n=N}{q^n} = \frac{q^N}{1 - q} < \infty \]

By the comparison test with the geometric series we can conclude that the series \(\sum^{\infty}_{n=1}{|a_n|}\) converges if \(q < 1\), or in other words if \(L < 1\). If \(L > 1\), then we can also conclude that the series diverges, as by the definition of the limit superior there is some \(\epsilon > 0\) such that:

\[\sqrt[n]{|a_n|} > 1 + \epsilon \text{ for infinitely many } n \]

Taking the \(n\)-th power of both sides, we get:

\[|a_n| > (1 + \epsilon)^n \text{ for infinitely many } n \]

Since \((1 + \epsilon)^n\) grows beyond every bound, the terms cannot converge to zero, so the series diverges.

Example

We define the following sequence and series:

\[\sum^{\infty}_{n=1}{a_n} \text{ where } a_n = \begin{cases} \frac{1}{2^{n+1}} & \text{if } n \text{ is even} \\ \frac{1}{2^{n}} & \text{if } n \text{ is odd} \end{cases} \]

First we can check if the series converges by using the ratio test. Let’s assume that \(n\) is even, so \(n = 2k\) for some \(k \in \mathbb{N}\) then we have:

\[\frac{a_{n+1}}{a_n} = \frac{\frac{1}{2^{n+1}}}{\frac{1}{2^{n+1}}} = 1 \]

and if \(n\) is odd, so \(n = 2k + 1\) for some \(k \in \mathbb{N}\) then we have:

\[\frac{a_{n+1}}{a_n} = \frac{\frac{1}{2^{n+2}}}{\frac{1}{2^{n}}} = \frac{1}{4} \]

So we get the following limes superior and limes inferior:

\[\limsup_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = 1 \text{ and } \liminf_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = \frac{1}{4} \]

Because the limit superior is not less than 1 and the limit inferior is not larger than 1, the ratio test is inconclusive, and we cannot say anything about the convergence of the series. So instead let’s use the root test. We assume that \(n\) is even, so \(n = 2k\) for some \(k \in \mathbb{N}\); then we have:

\[\sqrt[n]{|a_n|} = \sqrt[2k]{\frac{1}{2^{2k+1}}} = \frac{1}{2^{\frac{2k+1}{2k}}} = \frac{1}{2^{1 + \frac{1}{2k}}} \]

and if \(n\) is odd, so \(n = 2k + 1\) for some \(k \in \mathbb{N}\), then we have:

\[\sqrt[n]{|a_n|} = \sqrt[2k+1]{\frac{1}{2^{2k+1}}} = \frac{1}{2} \]

In the even case the exponent \(1 + \frac{1}{2k}\) goes to 1 as \(k \to \infty\), so in both cases the \(n\)-th roots tend to \(\frac{1}{2}\). So we can conclude that the series converges absolutely by the root test, as the limit is less than 1.
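The contrast between the two tests is easy to see numerically: the ratios keep oscillating between \(1\) and \(\frac{1}{4}\) while the \(n\)-th roots settle at \(\frac{1}{2}\) (a small Python sketch):

```python
# a_n = 1/2^(n+1) for even n, 1/2^n for odd n:
# the ratios oscillate between 1 and 1/4, but the n-th roots converge to 1/2.
def a(n):
    return 1 / 2 ** (n + 1) if n % 2 == 0 else 1 / 2**n

for n in (10, 11, 50, 51, 100, 101):
    print(f"n = {n:>3}: ratio = {a(n + 1) / a(n):.4f}, root = {a(n) ** (1 / n):.4f}")
```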

Power Series

One of the most important types of series in maths and computer science is the power series as it finds many applications such as in the Taylor series and Fourier series. A power series is a series with \(z \in \mathbb{C}\) or \(z \in \mathbb{R}\) of the form:

\[\sum^{\infty}_{n=0} c_n z^n = c_0 + c_1 z + c_2 z^2 + c_3 z^3 + \ldots = p(z) \]

where \(c_n \in \mathbb{C}\) or \(c_n \in \mathbb{R}\) are the coefficients of the series. So the power series is actually equivalent to a polynomial of infinite degree.

We have also actually already seen some examples of power series such as the exponential series \(\sum^{\infty}_{n=0} \frac{z^n}{n!}\) where the coefficients are \(c_n = \frac{1}{n!}\) and the geometric series \(\sum^{\infty}_{n=0} z^n\) where the coefficients are \(c_n = 1\).

The power series converges for all \(z\) in some interval (for real \(z\)) or disk (for complex \(z\)) around the origin. The radius of this region, the convergence radius, is often denoted \(\rho\) or just simply \(r\). So we can say that:

\[\sum^{\infty}_{n=0} c_n z^n \text{ converges if } |z| < r \text{ and diverges if } |z| > r \]

But how do we define the convergence radius of a power series? We can use the root test to define the convergence radius of a power series. The convergence radius is defined as follows:

\[r = \begin{cases} \infty & \text{if } \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} = 0 \\ \frac{1}{\limsup_{n \to \infty} |c_n|^{\frac{1}{n}}} & \text{if } \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} \neq 0 \end{cases} \]

By convention, if the set \(\{|c_n|^{\frac{1}{n}} : n \in \mathbb{N}\}\) is not bounded, we set the convergence radius to zero, and the power series then converges only for \(z = 0\). On the other hand, if the set is bounded and \(\limsup_{n \to \infty} |c_n|^{\frac{1}{n}} = 0\), then the convergence radius is infinite and the power series converges for all \(z \in \mathbb{C}\) or \(z \in \mathbb{R}\).

Proof

We can prove that the convergence radius is defined as above by using the root test:

\[\begin{align*} \limsup_{n \to \infty} |a_n|^{\frac{1}{n}} &= \limsup_{n \to \infty} |c_n z^n|^{\frac{1}{n}} \\ &= \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} |z| \\ &= |z| \cdot \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} < 1 \\ &\implies |z| < \frac{1}{\limsup_{n \to \infty} |c_n|^{\frac{1}{n}}} \end{align*} \]

Where the right side is the convergence radius \(r\). So we can conclude that the power series converges for all \(|z| < r\) and diverges for all \(|z| > r\).

Example

Let’s look at the following power series:

\[\sum^{\infty}_{n=1}{\left(\frac{5n + 2n^3}{2n + 6n^3}\right)^n} \]

Then the contents of the bracket are the coefficients \(c_n\) and \(z\) is equal to 1. Let’s analyze the behaviour of the coefficients \(c_n\) as \(n\) approaches infinity to find the convergence radius of the power series:

\[\begin{align*} \limsup_{n \to \infty} |c_n|^{\frac{1}{n}} &= \limsup_{n \to \infty} \frac{5n + 2n^3}{2n + 6n^3} \\ &= \frac{2}{6} = \frac{1}{3} \end{align*} \]

So \(\limsup_{n \to \infty} |c_n|^{\frac{1}{n}} = \frac{1}{3}\), and therefore the convergence radius of the power series is:

\[r = \frac{1}{\limsup_{n \to \infty} |c_n|^{\frac{1}{n}}} = \frac{1}{\frac{1}{3}} = 3 \]

and because the convergence radius is greater than 1, the power series converges for \(z = 1\), which is exactly the series above. More generally, it converges for all \(|z| < 3\) and diverges for all \(|z| > 3\).

Riemann Zeta Function

The Riemann zeta function is a special function that is defined by the following series for \(s > 0\):

\[\zeta(s) = \sum^{\infty}_{n=1}{\frac{1}{n^s}} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \ldots \]

If we analyze the convergence of this series we hit a few walls. First we can use the ratio test to check if the series converges:

\[\begin{align*} \left| \frac{a_{n+1}}{a_n} \right| &= \left| \frac{\frac{1}{(n+1)^s}}{\frac{1}{n^s}} \right| \\ &= \left| \frac{n^s}{(n+1)^s} \right| \\ &= \left| \left(\frac{n}{n+1}\right)^s \right| \\ &= \left| \left(1 - \frac{1}{n+1}\right)^s \right| \to 1 \text{ as } n \to \infty \end{align*} \]

So the ratio test is inconclusive. We can also use the root test to check if the series converges:

\[\begin{align*} \sqrt[n]{|a_n|} &= \sqrt[n]{\frac{1}{n^s}} \\ &= \left|\frac{1}{n^\frac{1}{n}}\right|^{s} \\ &\to 1 \text{ as } n \to \infty \end{align*} \]

So the root test is also inconclusive. Instead let’s look at some of the terms, maybe we can use the direct comparison test to check if the series converges. We can see that for \(s=1\) we get the harmonic series which diverges because of the following observation:

\[\begin{align*} &1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \ldots \\ &\geq 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \ldots \\ &= 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \ldots = \infty \end{align*} \]

For \(s > 1\) we can use a similar idea:

\[\begin{align*} &1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \frac{1}{5^s} + \frac{1}{6^s} + \frac{1}{7^s} + \frac{1}{8^s} + \ldots \\ &\leq 1 + \frac{1}{2^s} + \frac{1}{2^s} + \frac{1}{4^s} + \frac{1}{4^s} + \frac{1}{4^s} + \frac{1}{4^s} + \ldots \\ &= 1 + \frac{2}{2^s} + \frac{2^2}{2^{2s}} + \frac{2^3}{2^{3s}} + \frac{2^4}{2^{4s}} + \ldots \\ &= 1 + \frac{1}{2^{s-1}} + \frac{1}{2^{2(s-1)}} + \frac{1}{2^{3(s-1)}} + \ldots \\ &= 1 + \frac{1}{2^{s-1}} + \left(\frac{1}{2^{s-1}}\right)^2 + \left(\frac{1}{2^{s-1}}\right)^3 + \ldots \\ &= \sum^{\infty}_{n=0}{\left(\frac{1}{2^{s-1}}\right)^n} \end{align*} \]

Which is just the geometric series with the first term \(1\) and the common ratio \(\frac{1}{2^{s-1}}\). So we can conclude that the series converges because for \(s > 1\) the common ratio is less than 1. And using our formula we can find the sum of the series:

\[\sum^{\infty}_{n=0}{q^n} = \frac{1}{1 - q} \text{ where } |q| < 1 \]

Therefore we can give an upper bound for the Riemann zeta function for \(s > 1\):

\[\zeta(s) = \sum^{\infty}_{n=1}{\frac{1}{n^s}} \leq \frac{1}{1 - \frac{1}{2^{s-1}}} \]

Using this upper bound and the direct comparison test, we have therefore shown that the Riemann zeta function converges for \(s > 1\), as it is less than or equal to a convergent geometric series. If we look at the missing values, so \(0 < s \leq 1\), we can see that the series diverges, again using the direct comparison test, as the following holds:

\[\frac{1}{n} \leq \frac{1}{n^s} \text{ for } 0 < s \leq 1 \]

and therefore, by direct comparison with the harmonic series, we can conclude that the Riemann zeta function diverges for \(0 < s \leq 1\). So in summary:

\[\zeta(s) = \sum^{\infty}_{n=1}{\frac{1}{n^s}} = \begin{cases} \text{diverges} & \text{if } 0 < s \leq 1 \\ \text{converges} & \text{if } s > 1 \end{cases} \]
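For instance, for \(s = 2\) a quick Python check gives partial sums around \(\pi^2/6 \approx 1.645\), safely below the bound \(\frac{1}{1 - 2^{-1}} = 2\):

```python
# Partial sums of zeta(2) stay below the geometric bound 1/(1 - 2^(1-s)) = 2.
s = 2
partial = sum(1 / n**s for n in range(1, 10**5 + 1))
print("partial sum of zeta(2) =", partial)                 # ~1.644925
print("upper bound            =", 1 / (1 - 2 ** (1 - s)))  # 2.0
```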

Double Series

So far, we have focused on series with a single index. However, it is also common to encounter series indexed by two variables, such as sequences \(c_{ij}\) where both \(i\) and \(j\) run through the natural numbers. This leads to the concept of the double series (or double sum), which is essentially the sum over a double-indexed sequence (often visualized as a matrix):

\[\sum_{i=0}^{\infty} \sum_{j=0}^{\infty} c_{ij} \]

The partial sum of the double series is the sum of all terms within a finite rectangle:

\[S_{m,n} = \sum_{i=0}^{m} \sum_{j=0}^{n} c_{ij} \]

We can then consider the limit as both \(m\) and \(n\) go to infinity. However, just as with single series, the order in which we take the limit—or, more concretely, the order in which we sum the terms—can affect the value of the double series if the series does not converge absolutely. So unlike finite sums, infinite double sums can depend on the order of summation if the double series does not converge absolutely. Recall that addition is only “order-agnostic” for absolutely convergent series.

There are several natural ways to sum the elements of a double series:

  • Row-wise sum (then columns): First sum over \(j\) for each fixed \(i\) (i.e., sum across each row), then sum the resulting values over \(i\).

    \[U_i = \sum_{j=0}^{\infty} c_{ij} \]

    Then

    \[\sum_{i=0}^{\infty} U_i = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} c_{ij} \]
  • Column-wise sum (then rows): First sum over \(i\) for each fixed \(j\) (i.e., sum down each column), then sum over \(j\).

    \[V_j = \sum_{i=0}^{\infty} c_{ij} \]

    Then

    \[\sum_{j=0}^{\infty} V_j = \sum_{j=0}^{\infty} \sum_{i=0}^{\infty} c_{ij} \]
  • Linear Ordering: We can also sum the \(c_{ij}\) by any linear ordering (or enumeration) of a double series where the ordering is a bijection \(\sigma : \mathbb{N} \to \mathbb{N}^2\) so \(k \mapsto (i,j)\). We can then write the double series as a single series:

\[\sum_{k=0}^{\infty} b_k = \sum_{k=0}^{\infty} c_{\sigma(k)} \]
Example

If a double series does not converge absolutely, then different summation orders can give different results (or not converge at all!). Here is a classic example:

\[c_{ij} = \begin{cases} 1 & \text{if } i = j \\ -1 & \text{if } j = i+1 \\ 0 & \text{otherwise} \end{cases} \]

So the matrix looks like:

\[\begin{pmatrix} 1 & -1 & 0 & 0 & 0 & \cdots \\ 0 & 1 & -1 & 0 & 0 & \cdots \\ 0 & 0 & 1 & -1 & 0 & \cdots \\ \vdots & & & & & \ddots \end{pmatrix} \]
  • Sum by rows: Each row is \(1 + (-1) = 0\) (since the rest are zeros). Thus, summing by rows gives \(0 + 0 + 0 + \ldots = 0\).

  • Sum by columns: The first column: \(1 + 0 + 0 + \ldots = 1\). The second column: \(-1 + 1 + 0 + \ldots = 0\). The third column: \(0 + (-1) + 1 + \ldots = 0\). So the first term is \(1\), and the rest are \(0\); the sum is \(1\).

So, the value depends on the summation order! If we look at the series we can see that it is not absolutely convergent:

\[\sum_{i, j} |c_{ij}| = \sum_{i=0}^{\infty} (|1| + |-1|) = \infty \]

This is an explicit demonstration that absolute convergence is required for the order of summation to be irrelevant.
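The order dependence can be checked directly in a few lines of Python. Each row and each column of this matrix has only finitely many nonzero entries, so the inner (infinite) sums can be computed exactly on a finite window:

```python
# c(i, j) = 1 if i == j, -1 if j == i + 1, else 0.
# Each row and column has only finitely many nonzero entries, so the inner
# infinite sums are computed exactly on a finite window.
def c(i, j):
    if i == j:
        return 1
    if j == i + 1:
        return -1
    return 0

N = 50
row_sums = [sum(c(i, j) for j in range(N + 2)) for i in range(N)]
col_sums = [sum(c(i, j) for i in range(N + 2)) for j in range(N)]
print("summing rows first:   ", sum(row_sums))  # 0
print("summing columns first:", sum(col_sums))  # 1
```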

Fubini’s Theorem

We say the double series converges absolutely if the sum of the absolute values converges, so there exists some \(M \in \mathbb{R}\) such that:

\[\sum_{i=0}^{\infty} \sum_{j=0}^{\infty} |c_{ij}| \leq M < \infty \]

If this is the case, the value of the double series is independent of the order in which we sum the terms. This is a consequence of Fubini’s theorem for series. From this it then also follows that the row-wise and column-wise series converge absolutely and converge to the same value. In fact, any linear ordering of the double series converges to the same value:

\[\sum_{k=0}^{\infty} c_{\sigma(k)} = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} c_{ij} = \sum_{j=0}^{\infty} \sum_{i=0}^{\infty} c_{ij} \]

Cauchy Product

So far, we’ve seen how to sum and manipulate series, but what if we want to multiply two series together? Just as the product of two polynomials is a new polynomial whose coefficients are sums of products of the original coefficients, we can define the product of two infinite series in an analogous way. This construction is known as the Cauchy product.

Recall that multiplying two polynomials results in new coefficients which are sums of products of the coefficients from each polynomial. For example, consider two polynomials of degree \(n\):

\[P(x) = a_0 + a_1 x + a_2 x^2 + \ldots + a_n x^n \] \[Q(x) = b_0 + b_1 x + b_2 x^2 + \ldots + b_n x^n \]

Their product is

\[P(x)Q(x) = a_0 b_0 + (a_0 b_1 + a_1 b_0)x + (a_0 b_2 + a_1 b_1 + a_2 b_0)x^2 + \ldots + a_n b_n x^{2n} \]

We can rewrite this as a sum of products of coefficients:

\[P(x)Q(x) = \sum_{k=0}^{2n} c_k x^k \]

where the coefficients \(c_k\) are defined as:

\[c_k = \sum_{j=0}^{k} a_j b_{k-j} \]

That is, each coefficient \(c_k\) is the sum of all products \(a_j b_{k-j}\) where the indices add up to \(k\).

When we extend this to infinite series, we get the Cauchy product. Let \(\sum_{n=0}^{\infty} a_n\) and \(\sum_{n=0}^{\infty} b_n\) be two series (typically we take \(a_n, b_n \in \mathbb{R}\) or \(\mathbb{C}\)). The Cauchy product is the series whose \(n\)-th term is defined as:

\[c_n = \sum_{k=0}^{n} a_k b_{n-k} \]

So, the Cauchy product is

\[\sum_{n=0}^{\infty} c_n = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{n} a_k b_{n-k} \right ) \]

This formula captures exactly the way polynomial coefficients multiply: each \(c_n\) is a sum of products, where the indices of \(a_k\) and \(b_{n-k}\) add up to \(n\).

Now because it is a series, we also want to analyze the convergence of the Cauchy product. In general, the Cauchy product of two series may or may not converge, and if it does converge, it may not converge to the product of the sums of the original series. However, there are important conditions under which the Cauchy product behaves nicely. Even if both original series converge, their Cauchy product need not converge; conversely, even if the two series diverge, the Cauchy product can still converge.

However, we have the following important result:

If both \(\sum a_n\) and \(\sum b_n\) converge absolutely, then their Cauchy product \(\sum c_n\) also converges absolutely, and:

\[\sum_{n=0}^{\infty} c_n = \left( \sum_{n=0}^{\infty} a_n \right) \left( \sum_{n=0}^{\infty} b_n \right) \]

That is, the sum of the Cauchy product is the product of the sums.
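As a concrete check (Python sketch), take two absolutely convergent geometric series, say \(a_n = (1/2)^n\) and \(b_n = (1/3)^n\); the Cauchy product sums to \(2 \cdot \frac{3}{2} = 3\), the product of the sums:

```python
# Cauchy product of two absolutely convergent geometric series:
# sum (1/2)^n = 2 and sum (1/3)^n = 3/2, so the product of sums is 3.
N = 60
a = [0.5**n for n in range(N)]
b = [(1 / 3) ** n for n in range(N)]
c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

print("sum of Cauchy product:", sum(c))           # ~3.0
print("product of sums:      ", sum(a) * sum(b))  # ~3.0
```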

Proof

Suppose \(\sum_{n=0}^{\infty} a_n\) and \(\sum_{n=0}^{\infty} b_n\) are absolutely convergent series. Define

\[A = \sum_{n=0}^{\infty} |a_n|, \qquad B = \sum_{n=0}^{\infty} |b_n| \]

Let \(c_n = \sum_{k=0}^n a_k b_{n-k}\) be the terms of the Cauchy product.

We can bound the absolute value of each \(c_n\) using the triangle inequality:

\[|c_n| = \left| \sum_{k=0}^n a_k b_{n-k} \right| \leq \sum_{k=0}^n |a_k| |b_{n-k}| \]

Therefore,

\[\sum_{n=0}^{\infty} |c_n| \leq \sum_{n=0}^{\infty} \sum_{k=0}^n |a_k| |b_{n-k}| \]

Notice that the right-hand side is a double sum over all \((k, l)\) such that \(k + l = n\), which, when summed over all \(n\), covers all \((k, l)\) pairs in \(\mathbb{N}^2\) exactly once.

Therefore,

\[\sum_{n=0}^{\infty} \sum_{k=0}^n |a_k| |b_{n-k}| = \sum_{k=0}^{\infty} |a_k| \sum_{l=0}^{\infty} |b_l| = A \cdot B < \infty \]

This shows that the Cauchy product is absolutely convergent.

To show that the sum of the Cauchy product equals the product of the sums, consider the partial sums:

\[S_N = \sum_{n=0}^N a_n, \qquad T_N = \sum_{n=0}^N b_n \]

Then, their product is:

\[S_N T_N = \sum_{n=0}^N a_n \sum_{m=0}^N b_m = \sum_{n=0}^N \sum_{m=0}^N a_n b_m \]

But for each fixed \(k = n + m\), the sum over all \((n, m)\) such that \(n + m = k\) with \(0 \leq n, m \leq N\) yields:

  • For \(k \leq N\), all \((n, m)\) with \(n + m = k\) are in \(0 \leq n, m \leq N\), so we sum \(a_n b_{k-n}\) for \(n = 0\) to \(k\).
  • For \(k > N\), not all pairs are in \(0 \leq n, m \leq N\).

Thus,

\[S_N T_N = \sum_{k=0}^{2N} \left( \sum_{j = \max(0, k-N)}^{\min(k, N)} a_j b_{k-j} \right ) \]

As \(N \to \infty\), \(S_N \to S = \sum_{n=0}^\infty a_n\) and \(T_N \to T = \sum_{n=0}^\infty b_n\). For \(k \leq N\) the inner sum is exactly \(c_k = \sum_{j=0}^k a_j b_{k-j}\), and by the absolute convergence established above the contribution of the remaining pairs with \(k > N\) tends to zero. So the sum of the Cauchy product equals \(S \cdot T\).

Example

Suppose

\[a_n = b_n = (-1)^n \frac{1}{\sqrt{n+1}} \]

Both \(\sum a_n\) and \(\sum b_n\) converge by the alternating series test, since \(\frac{1}{\sqrt{n+1}}\) decreases to 0. But they are not absolutely convergent since:

\[\sum_{n=0}^{\infty} |a_n| = \sum_{n=0}^{\infty} \frac{1}{\sqrt{n+1}} = \infty \]

Therefore, the Cauchy product \(\sum c_n\) may or may not converge; in fact, we will see that it diverges. To see this, let’s compute the terms of the Cauchy product:

\[c_n = \sum_{k=0}^{n} a_k b_{n-k} \]

Plugging in the values for \(a_k\) and \(b_{n-k}\):

\[c_n = \sum_{k=0}^n (-1)^k \frac{1}{\sqrt{k+1}} \cdot (-1)^{n-k} \frac{1}{\sqrt{n-k+1}} \]

Combine the \((-1)\) factors:

\[(-1)^k \cdot (-1)^{n-k} = (-1)^{n} \]

So

\[c_n = (-1)^n \sum_{k=0}^n \frac{1}{\sqrt{k+1}\sqrt{n-k+1}} = (-1)^n \sum_{k=0}^n \frac{1}{\sqrt{(k+1)(n-k+1)}} \]

We can bound the product under the root from above: both factors are at most \(n+1\), so for all \(0 \leq k \leq n\):

\[(k+1)(n-k+1) \leq (n+1)^2 \]

So each summand is at least \(\frac{1}{n+1}\), and since there are \(n+1\) summands:

\[|c_n| = \sum_{k=0}^n \frac{1}{\sqrt{(k+1)(n-k+1)}} \geq \sum_{k=0}^n \frac{1}{\sqrt{(n+1)^2}} = \sum_{k=0}^n \frac{1}{n+1} = 1 \]

Therefore \(|c_n| \geq 1\) for all \(n\), so the terms do not converge to zero and the Cauchy product diverges:

\[\sum_{n=0}^{\infty} c_n \text{ diverges since } |c_n| \geq 1 \text{ for all } n \]

So we can conclude that the Cauchy product of two series that converge, but do not converge absolutely, can diverge.
Example

Let’s see a powerful application of the Cauchy product. The exponential function can be written as a power series:

\[\exp(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}, \qquad \exp(y) = \sum_{n=0}^{\infty} \frac{y^n}{n!} \]

Both series are absolutely convergent for all \(x, y \in \mathbb{R}\) or \(\mathbb{C}\). The Cauchy product is:

\[\sum_{n=0}^{\infty} c_n \text{ where } c_n = \sum_{k=0}^{n} \frac{x^k}{k!} \frac{y^{n-k}}{(n-k)!} \]

We can write:

\[c_n = \sum_{k=0}^{n} \frac{x^k}{k!} \frac{y^{n-k}}{(n-k)!} \]

But using the binomial theorem we have:

\[c_n = \frac{1}{n!} \sum_{k=0}^n \binom{n}{k} x^k y^{n-k} = \frac{(x + y)^n}{n!} \]

So the Cauchy product of the exponential series is again the exponential series with the argument \(x + y\):

\[\exp(x) \exp(y) = \sum_{n=0}^{\infty} \frac{(x + y)^n}{n!} = \exp(x + y) \]

This is a classic and fundamental result: the exponential function turns addition into multiplication, i.e., \(\exp(x + y) = \exp(x) \exp(y)\).
