Given a sequence \((w_n)_{n\ge0}\subseteq \mathbb C\), consider the series \(\sum_{n=0}^{\infty} w_{n}\). If \[\lim _{n \rightarrow \infty} \sum_{k=0}^{n} w_{k}=w\] for some \(w\in\mathbb C\), then we say that the series converges to \(w\) and write \(w=\sum_{n=0}^{\infty} w_{n}\). Otherwise, the series is said to diverge.
A useful observation is that a series is convergent iff the partial sums \(\sum_{k=0}^{n} w_{k}\) form a Cauchy sequence, that is, \(\lim_{m, n \rightarrow \infty}\sum_{k=m}^{n} w_{k} =0\).
The series \(\sum_{n=0}^{\infty} w_{n}\) is said to converge absolutely if the series \(\sum_{n=0}^{\infty}\left|w_{n}\right|\) is convergent.
As in the real variables case, an absolutely convergent series is convergent.
A necessary and sufficient condition for absolute convergence is that the sequence of partial sums \(\sum_{k=0}^{n}\left|w_{k}\right|\) be bounded.
(Ratio test) Let \(\sum_{n\ge0} w_{n}\) be a series of nonzero terms.
If \(\limsup _{n \rightarrow \infty}\big|\frac{w_{n+1}}{w_{n}}\big|<1\), then the series converges absolutely.
If \(\big|\frac{w_{n+1}}{w_{n}}\big| \geq 1\) for all sufficiently large \(n\), the series diverges.
Proof: Exercise! $$\tag*{$\blacksquare$}$$
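For instance, for a fixed \(z \in \mathbb{C}\setminus\{0\}\) the series \(\sum_{n=0}^{\infty} \frac{z^{n}}{n!}\) has \[\Big|\frac{w_{n+1}}{w_{n}}\Big| = \frac{|z|}{n+1} \longrightarrow 0 < 1,\] so it converges absolutely by the ratio test; this example will reappear below as the complex exponential.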
(Root test) Let \(\sum_{n\ge0} w_{n}\) be any complex series.
If \(\limsup _{n \rightarrow \infty}|w_{n}|^{1 / n}<1\), the series converges absolutely.
If \(\limsup _{n \rightarrow \infty}|w_{n}|^{1 / n}>1\), the series diverges.
Proof: Exercise! $$\tag*{$\blacksquare$}$$
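As an illustration, for a fixed \(w \in \mathbb{C}\) the series \(\sum_{n=1}^{\infty} n^{2} w^{n}\) satisfies \[\limsup_{n \to \infty} \big|n^{2} w^{n}\big|^{1/n} = \lim_{n \to \infty} n^{2/n}\,|w| = |w|,\] so by the root test it converges absolutely when \(|w| < 1\) and diverges when \(|w| > 1\).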
Fact Let \((f_{n})_{n\in\mathbb N}\) be a sequence of complex-valued functions on a set \(S\).
Then \((f_{n})_{n\in\mathbb N}\) converges pointwise on \(S\) (that is, for each \(z \in S\), the sequence \((f_{n}(z))_{n\in\mathbb N}\) is convergent in \(\mathbb{C}\) ) iff \((f_{n})_{n\in\mathbb N}\) is pointwise Cauchy (that is, for each \(z \in S\), the sequence \((f_{n}(z))_{n\in\mathbb N}\) is a Cauchy sequence in \(\mathbb{C}\)).
Also, \((f_{n})_{n\in\mathbb N}\) converges uniformly on \(S\) iff \((f_{n})_{n\in\mathbb N}\) is uniformly Cauchy on \(S\), in other words, \(\lim_{m, n \rightarrow \infty}\left|f_{n}(z)-f_{m}(z)\right|= 0\) uniformly for \(z \in S\).
(Weierstrass \(M\)-test) Let \((g_{n})_{n\in\mathbb N}\) be a sequence of complex-valued functions on a set \(S\subseteq \mathbb C\), and assume that \(\left|g_{n}(z)\right| \leq M_{n}\) for all \(z \in S\) and all \(n\). If \(\sum_{n=1}^{\infty} M_{n}<+\infty\), then the series \(\sum_{n=1}^{\infty} g_{n}(z)\) converges uniformly on \(S\).
Proof: Let \(f_{n}=\sum_{k=1}^{n} g_{k}\). Then \(|f_n-f_m|\le \sum_{k=m+1}^n|g_{k}|\le \sum_{k=m+1}^nM_k\) for \(n>m\). Thus \((f_{n})_{n\in\mathbb N}\) is uniformly Cauchy on \(S\) and we are done.$$\tag*{$\blacksquare$}$$
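For example, the series \(\sum_{n=1}^{\infty} \frac{z^{n}}{n^{2}}\) converges uniformly on the closed unit disk \(\overline{D}(0,1)\): taking \(g_{n}(z) = z^{n}/n^{2}\) and \(M_{n} = 1/n^{2}\), we have \(|g_{n}(z)| \leq M_{n}\) for \(|z| \leq 1\) and \(\sum_{n=1}^{\infty} M_{n} < +\infty\).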
A series of the form \(\sum_{n=0}^{\infty} a_{n}\left(z-z_{0}\right)^{n}\), where \(z_{0}, a_{n}\in\mathbb C\), is called a power series. Thus we are dealing with a series of functions \(\sum_{n=0}^{\infty} f_{n}\) of a very special type, namely \(f_{n}(z)=a_{n}\left(z-z_{0}\right)^{n}\).
If \(\sum_{n=0}^{\infty} a_{n}\left(z-z_{0}\right)^{n}\) converges at the point \(z\in\mathbb C\) with \(\left|z-z_{0}\right|=r\), then the series converges absolutely on \(D\left(z_{0}, r\right)\), uniformly on each closed subdisk of \(D\left(z_{0}, r\right)\), hence uniformly on each compact subset of \(D\left(z_{0}, r\right)\).
Proof: For any \(z^{\prime}\in\mathbb C\) we have \(\left|a_{n}\left(z^{\prime}-z_{0}\right)^{n}\right|=\left|a_{n}\left(z-z_{0}\right)^{n}\right|\big|\frac{z^{\prime}-z_{0}}{z-z_{0}}\big|^{n}\).
Since the series converges at \(z\), its terms satisfy \(\lim_{n\to\infty}a_{n}\left(z-z_{0}\right)^{n} = 0\); in particular, the sequence \((a_{n}\left(z-z_{0}\right)^{n})_{n\in\mathbb N}\) is bounded, say \(\left|a_{n}\left(z-z_{0}\right)^{n}\right|\le M\) for all \(n\).
If \(\left|z^{\prime}-z_{0}\right| \leq r^{\prime}<r\), then \(\big|\frac{z^{\prime}-z_{0}}{z-z_{0}}\big| \leq \frac{r^{\prime}}{r}<1\), proving absolute convergence at \(z^{\prime}\) by comparison with the convergent geometric series \(\sum_{n} M\left(r^{\prime}/r\right)^{n}\). The same bound \(M_{n}=M\left(r^{\prime}/r\right)^{n}\) shows, via the Weierstrass \(M\)-test, that the series converges uniformly on \(\overline{D}\left(z_{0}, r^{\prime}\right)\).$$\tag*{$\blacksquare$}$$
Let \(\sum_{n=0}^{\infty} a_{n}\left(z-z_{0}\right)^{n}\) be a power series, and let \(r=\Big[\limsup _{n \rightarrow \infty}\sqrt[n]{|a_{n}|}\Big]^{-1}\) be the radius of convergence of the series. (Adopt the convention that \(1 / 0=\infty\), \(1 / \infty=0\).) The series converges absolutely on \(D\left(z_{0}, r\right)\), and uniformly on its compact subsets. The series diverges for \(\left|z-z_{0}\right|>r\).
Proof: We have \(\limsup _{n \rightarrow \infty}\left|a_{n}\left(z-z_{0}\right)^{n}\right|^{1 / n}=\left|z-z_{0}\right| / r\), which will be less than \(1\) if \(\left|z-z_{0}\right|<r\).
By the root test, the series converges absolutely on \(D\left(z_{0}, r\right)\). Uniform convergence on compact subsets follows from the previous result. (We do not necessarily have convergence for \(\left|z-z_{0}\right|=r\), but we do have convergence for \(\left|z-z_{0}\right|=r^{\prime}\), for any \(r^{\prime}\in(0, r)\).)
If the series converges at some point \(z\) with \(\left|z-z_{0}\right|>r\), then by the previous theorem it converges absolutely at every point \(z^{\prime}\) with \(r<\left|z^{\prime}-z_{0}\right|<\left|z-z_{0}\right|\). But then \(\left|z^{\prime}-z_{0}\right| / r>1\), so the series diverges at \(z^{\prime}\) by the root test, a contradiction. $$\tag*{$\blacksquare$}$$
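For instance, the series \[\sum_{n=0}^{\infty} z^{n}, \qquad \sum_{n=0}^{\infty} \frac{z^{n}}{n!}, \qquad \sum_{n=0}^{\infty} n!\, z^{n}\] have radii of convergence \(r = 1\), \(r = \infty\), and \(r = 0\), respectively, since \(\limsup_{n\to\infty}\sqrt[n]{1} = 1\), \(\lim_{n\to\infty}\sqrt[n]{1/n!} = 0\), and \(\lim_{n\to\infty}\sqrt[n]{n!} = \infty\).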
Let \(\Omega\subseteq \mathbb C\) be an open set. We say that a function \(f:\Omega\to \mathbb C\) is representable by power series in \(\Omega\) if for every disc \(D(a,r) \subseteq \Omega\) there is a sequence \((c_n)_{n\ge0}\subseteq\mathbb C\) such that \[f(z)=\sum_{n=0}^{\infty} c_{n}(z-a)^{n} \quad \text{ for all } \quad z \in D(a,r). \tag{*}\]
Let \(\Omega\subseteq \mathbb C\) be an open set. If \(f:\Omega\to \mathbb C\) is representable by power series in \(\Omega\), then \(f \in H(\Omega)\) and \(f^{\prime}\) is also representable by power series in \(\Omega\). In fact, if (*) holds, then we also have \[f^{\prime}(z)=\sum_{n=1}^{\infty} n c_{n}(z-a)^{n-1} \quad \text{ for all } \quad z \in D(a,r). \tag{**}\]
Proof: The key idea of the proof is to compare the difference quotient \[\frac{f(z) - f(w)}{z - w}\] with the power series (**) evaluated at \(w\). Then, we let \(z \to w\).
If the series (*) converges in \(D(a,r)\), then, since \(\limsup_{n\to\infty}|n c_{n}|^{1/n}=\limsup_{n\to\infty}|c_{n}|^{1/n}\) (because \(\sqrt[n]{n}\to1\)), the root test shows that the series (**) also converges in \(D(a,r)\).
Without loss of generality, set \(a = 0\) and let \[g(z)=\sum_{n=1}^{\infty} nc_{n}z^{n-1}.\]
Fix \(w \in D(0,r)\) and choose \(\rho\) such that \(|w| < \rho < r\). We have \[\frac{f(z)-f(w)}{z-w}-g(w)=\sum_{n=1}^{\infty} c_{n}\left[\frac{z^{n}-w^{n}}{z-w}-n w^{n-1}\right] \quad \text{ if } \quad z \neq w.\]
For \(n = 1\), the expression in brackets equals \(0\). For \(n \geq 2\), it becomes: \[\left[\frac{z^{n}-w^{n}}{z-w}-n w^{n-1}\right]=(z - w) \sum_{k=1}^{n-1} k w^{k-1} z^{n-k-1}.\]
This follows from the formula \[z^n - w^n = (z - w) \sum_{k=0}^{n-1} z^{n-1-k} w^k,\] and the telescoping identity \[\begin{aligned} \sum_{k=0}^{n-1} z^{n-1-k} w^k& = \sum_{k=0}^{n-1} ((k+1) - k) z^{n-1-k} w^k\\ &= \sum_{k=1}^{n} k z^{n-k} w^{k-1} - \sum_{k=1}^{n-1} k z^{n-1-k} w^k. \end{aligned}\]
If \(|z| < \rho\) (recall that also \(|w| < \rho\)), then \[\Big|\sum_{k=1}^{n-1} k w^{k-1} z^{n-k-1}\Big|\le \frac{n(n-1)}{2} \rho^{n-2}.\]
Thus, for \(|z| < \rho\) and \(z \neq w\), we have \[\left|\frac{f(z) - f(w)}{z - w} - g(w)\right| \leq |z - w| \sum_{n=2}^{\infty} n^2 \left|c_n\right| \rho^{n-2}.\]
Since \(\rho < r\), the last series converges (again by the root test, as \(n^{2/n}\to1\)).
Hence, \[\begin{aligned} \lim_{z\to w}\left|\frac{f(z) - f(w)}{z - w} - g(w)\right|=0. \end{aligned}\]
This implies that \(f^{\prime}(w) = g(w)\), completing the proof.$$\tag*{$\blacksquare$}$$
Remark. Since \(f^{\prime}\) satisfies the same conditions as \(f\), the theorem can be applied to \(f^{\prime}\) as well. This implies that \(f\) has derivatives of all orders, each of which can be represented by a power series in \(\Omega\).
Specifically, if (*) holds, then \[f^{(k)}(z) = \sum_{n=k}^{\infty} n(n-1) \cdots (n-k+1) c_n (z-a)^{n-k}.\]
Consequently, (*) leads to \[k! c_k = f^{(k)}(a) \quad \text{for } \quad k = 0, 1, 2, \ldots,\] ensuring that for each \(a \in \Omega\), there exists a unique sequence \((c_n)_{n\ge0}\) satisfying (*).
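For example, for the geometric series discussed below, \(f(z)=\sum_{n=0}^{\infty} z^{n}=\frac{1}{1-z}\) on \(D(0,1)\) has \(c_{k}=1\) for every \(k\), so the formula gives \(f^{(k)}(0)=k!\), which matches the direct computation \(f^{(k)}(z)=k!/(1-z)^{k+1}\).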
The geometric series \[\sum_{n=0}^{\infty} z^{n}\] converges absolutely on the disk \(|z| < 1\) and diverges for \(|z| \geq 1\), since there its terms do not tend to \(0\).
Its sum within this region is the function \(\frac{1}{1-z}\), which is holomorphic in the open set \(\mathbb{C}\setminus\{1\}\).
This identity is established as in the real case: \[\sum_{n=0}^{N} z^{n} = \frac{1 - z^{N+1}}{1 - z},\] and \(\lim_{N \to \infty} z^{N+1} = 0\) when \(|z| < 1\).
By the previous theorem, for \(z \in D(0,1)\), we have \[\frac{1}{(1-z)^2}=\bigg(\frac{1}{1-z}\bigg)'=\sum_{n=1}^\infty nz^{n-1}.\]
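For instance, differentiating once more, as in the remark above, gives \[\frac{2}{(1-z)^{3}}=\sum_{n=2}^{\infty} n(n-1) z^{n-2}, \qquad z \in D(0,1).\]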
The most important example of a power series is the complex exponential function, which is defined for \(z \in \mathbb{C}\) by \[\exp(z) = \sum_{n=0}^{\infty} \frac{z^n}{n!}.\] When \(z\) is real, this definition coincides with the usual exponential function. In fact, the series above converges absolutely for every \(z \in \mathbb{C}\).
Further examples of power series that converge in the whole complex plane are given by the standard trigonometric functions; these are defined by \[\cos z = \sum_{n=0}^{\infty} (-1)^n \frac{z^{2n}}{(2n)!}, \quad \text{and} \quad \sin z = \sum_{n=0}^{\infty} (-1)^n \frac{z^{2n+1}}{(2n+1)!},\] and they agree with the usual cosine and sine of a real argument whenever \(z \in \mathbb{R}\).
In order to show that the series defining \(\exp(z)\) converges absolutely, observe that \[\left|\frac{z^{n}}{n!}\right| = \frac{|z|^n}{n!}.\] Thus the series defining \(\exp(z)\) can be compared with \(\sum_{n=0}^\infty \frac{|z|^n}{n!} = e^{|z|} < \infty\); in particular, \(|\exp(z)| \leq e^{|z|}\). This estimate also shows that the series defining \(\exp(z)\) converges uniformly on every disk in \(\mathbb{C}\) (by the Weierstrass \(M\)-test, since every disk is contained in some \(\overline{D}(0,R)\) and one may take \(M_{n}=R^{n}/n!\)).
A similar argument can be used to deduce the convergence of power series for \(\sin z\) and \(\cos z\).
By the previous theorem, for any \(z \in \mathbb{C}\), the complex derivative of \(\exp(z)\) exists and is given by \[\exp'(z) = \sum_{n=1}^{\infty} n \frac{z^{n-1}}{n!} = \sum_{m=0}^{\infty} \frac{z^{m}}{m!} = \exp(z);\] therefore \(\exp(z)\) is its own derivative.
A similar argument gives \(\cos'z = -\sin z\) and \(\sin'z = \cos z\); in particular, \(\cos\) and \(\sin\) are entire functions as well.
Since the series defining the exponential function is absolutely convergent, we may multiply it with itself to obtain that for \(z, w \in \mathbb{C}\), we have \[\begin{aligned} \left( \sum_{k=0}^{\infty} \frac{z^k}{k!} \right) \left( \sum_{m=0}^{\infty} \frac{w^m}{m!} \right) &= \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{k=0}^{n} \frac{n!}{k!(n-k)!} z^k w^{n-k}\\ &= \sum_{n=0}^{\infty} \frac{(z+w)^n}{n!}, \end{aligned}\] which shows that \[\exp(z)\exp(w) = \exp(z+w). \tag{A}\]
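For instance, taking \(w = -z\) in (A) gives \[\exp(z)\exp(-z) = \exp(0) = 1,\] so \(\exp(z) \neq 0\) for every \(z \in \mathbb{C}\).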
A simple calculation shows that for \(y \in \mathbb{R}\) we have \[\exp(iy) = \cos(y) + i\sin(y).\tag{B}\]
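Indeed, since \(i^{2m} = (-1)^{m}\) and \(i^{2m+1} = (-1)^{m} i\), separating the even and odd powers in the absolutely convergent series gives \[\exp(iy) = \sum_{n=0}^{\infty} \frac{(iy)^{n}}{n!} = \sum_{m=0}^{\infty} (-1)^{m}\frac{y^{2m}}{(2m)!} + i \sum_{m=0}^{\infty} (-1)^{m}\frac{y^{2m+1}}{(2m+1)!} = \cos y + i \sin y.\]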
We have \(\exp(z) = 1\) if and only if \(z = 2k\pi i\) for some \(k \in \mathbb{Z}\). Indeed, write \(z = x + iy\) with \(x, y \in \mathbb{R}\). If \(\exp(z) = 1\), then \(|\exp(z)| = 1\).
The identities (A) and (B), together with the Pythagorean identity \(\cos^2 y + \sin^2 y = 1\), give \(|\exp(z)| = e^{x}\), so \(x = 0\) and hence \(z = iy\).
If \(\exp(iy) = 1\), then by (B) we have \(\cos y = 1\) and \(\sin y = 0\), so \(y = 2k\pi\) for some \(k \in \mathbb{Z}\). Therefore \(z = 2k\pi i\). Conversely, \(\exp(2k\pi i) = \cos(2k\pi) + i\sin(2k\pi) = 1\).
Consequently, by (A), \[\exp(z + 2k\pi i) = \exp(z)\exp(2k\pi i) = \exp(z), \quad \text{for all } \quad z \in \mathbb{C} \text{ and } k \in \mathbb{Z};\] that is, the complex exponential is periodic with period \(2\pi i\).
If \(X\) is a topological space, a curve in \(X\) is a continuous mapping \(\gamma\) of a compact interval \([\alpha, \beta] \subset \mathbb R\) into \(X\); here \(\alpha<\beta\). We call \([\alpha, \beta]\) the parameter interval of \(\gamma\) and denote the range of \(\gamma\) by \[\gamma^{*}=\gamma([\alpha, \beta])=\{\gamma(t): t\in[\alpha, \beta]\}.\]
Observe that \(\gamma^{*}\) is compact and connected.
If the initial point \(\gamma(\alpha)\) of \(\gamma\) coincides with its end point \(\gamma(\beta)\), we call \(\gamma\) a closed curve.
In the definition of a curve in \(\mathbb C\), we will distinguish between the one-dimensional geometric object in the plane (endowed with an orientation) \(\gamma^*\), and its parametrization \(\gamma\), which is a mapping from a closed interval to \(\mathbb{C}\). This parametrization is not uniquely determined.
Two parametrizations, \[\gamma:[\alpha, \beta] \rightarrow \mathbb{C} \quad \text{and} \quad \tilde{\gamma}:[\alpha_1, \beta_1] \rightarrow \mathbb{C},\] are equivalent if there exists a continuously differentiable bijection \(\varphi:[\alpha_1, \beta_1] \to [\alpha, \beta]\) such that \(\varphi^{\prime}(s) > 0\) for all \(s\) and \[\tilde{\gamma}(s) = \gamma(\varphi(s)).\]
The condition \(\varphi^{\prime}(s) > 0\) precisely ensures that the orientation is preserved: as \(s\) travels from \(\alpha_1\) to \(\beta_1\), \(\varphi(s)\) travels from \(\alpha\) to \(\beta\).
The family of all parametrizations that are equivalent to \(\gamma(t)\) determines a smooth curve \(\gamma^\ast \subset \mathbb{C}\), namely the image of \([\alpha, \beta]\) under \(\gamma\), with the orientation given by \(\gamma\) as \(t\) travels from \(\alpha\) to \(\beta\).
A path is a piecewise continuously differentiable curve in the plane. More precisely, a path with parameter interval \([\alpha, \beta]\) is a continuous complex function \(\gamma\) defined on \([\alpha, \beta]\), satisfying the following conditions:
There exist finitely many points \(s_{j}\) such that \[\alpha = s_{0} < s_{1} < \cdots < s_{n} = \beta,\] and on each interval \([s_{j-1}, s_{j}]\), the function \(\gamma\) has a continuous derivative.
However, at the points \(s_{1}, \ldots, s_{n-1}\), the left-hand and right-hand derivatives of \(\gamma\) may differ.
A closed path is a closed curve that is also a path.
Let \(\gamma\) be a path with parameter interval \([\alpha, \beta]\). Assume that \(f\) is continuous on \(\gamma^{*}\subset\mathbb C\). Then we define \[\int_{\gamma} f(z) d z=\int_{\alpha}^{\beta} f(\gamma(t)) \gamma^{\prime}(t) d t.\]
When \(\gamma\) is a closed path, integration over \(\gamma\) is understood to be in the anticlockwise direction, unless otherwise mentioned.
Here, the integral on the right-hand side is the Riemann integral since \(\gamma^{\prime}(t)\) is a bounded function of \(t\) in \([\alpha, \beta]\) with at most finitely many discontinuities.
Let \(\phi:[a, b] \rightarrow[\alpha, \beta]\) be a continuous, strictly increasing function from \([a, b]\) onto \([\alpha, \beta]\). Further assume that \(\phi\) is continuously differentiable.
Then \(\phi(a)=\alpha, \phi(b)=\beta\) and \(\phi([a, b])=[\alpha, \beta]\).
Let \(\gamma\) be a path with parameter interval \([\alpha, \beta]\). Then \[\sigma=\gamma \circ \phi\] is a path with parameter interval \([a, b]\).
Let \(f\) be continuous on \(\gamma^{*}\). Then \(f\) is also continuous on \(\sigma^{*}=\gamma^{*}\) and, by the change of variable \(t=\phi(s)\), \[\int_{\sigma} f(z) d z=\int_{a}^{b} f(\gamma(\phi(s)))\, \gamma^{\prime}(\phi(s))\, \phi^{\prime}(s)\, d s=\int_{\gamma} f(z) d z.\]
We call \(\phi:[a, b] \rightarrow[\alpha, \beta]\) a change of parameter function.
Let \(\gamma_{1}\) and \(\gamma_{2}\) be paths such that the end point of \(\gamma_{1}\) coincides with the initial point of \(\gamma_{2}\). Then, after suitable re-parametrization, we obtain a path \(\gamma\) by first following \(\gamma_{1}\) and then \(\gamma_{2}\).
By the invariance under change of parameter and the additivity of the Riemann integral over subintervals, we have \[\int_{\gamma} f(z) d z=\int_{\gamma_{1}} f(z) d z+\int_{\gamma_{2}} f(z) d z,\] where \(f\) is continuous on \(\gamma_{1}^{*} \cup \gamma_{2}^{*}\). We write \(\gamma=\gamma_{1}+\gamma_{2}\).
For paths \(\gamma_{1}, \gamma_{2}, \ldots, \gamma_{n}\) such that the end point of \(\gamma_{j}\) coincides with the initial point of \(\gamma_{j+1}\) for \(1 \leq j<n\), the path \[\gamma=\gamma_{1}+\gamma_{2}+\cdots+\gamma_{n}\] is defined similarly.
Let \(\gamma:[0, 1]\to\mathbb C\) be a path. Define \(\gamma_{1}(t)=\gamma(1-t)\) for \(t\in[0, 1]\). Then \(\gamma_{1}\) is called a path opposite to \(\gamma\). We have \[\int_{\gamma} f(z) d z=-\int_{\gamma_{1}} f(z) d z,\] where \(f\) is continuous on \(\gamma^{*}\).
Let \(\gamma:[\alpha, \beta]\to\mathbb C\) be a path and \(f\) be continuous on \(\gamma^{*}\). Then \[\begin{aligned} \left|\int_{\gamma} f(z) d z\right|&=\left|\int_{\alpha}^{\beta} f(\gamma(t)) \gamma^{\prime}(t) d t\right|\\ & \leq \max _{z \in \gamma^{*}}|f(z)| \int_{\alpha}^{\beta}\left|\gamma^{\prime}(t)\right| d t \\ & =\ell(\gamma) \max _{z \in \gamma^{*}}|f(z)|, \end{aligned}\] where \(\ell(\gamma)=\int_{\alpha}^{\beta}\left|\gamma^{\prime}(t)\right| d t\) is the length of \(\gamma\).
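As a quick application, if \(\gamma\) is the positively oriented circle of radius \(R>1\) centered at \(0\) (defined below) and \(f(z)=\frac{1}{z^{2}+1}\), then \(|z^{2}+1| \geq R^{2}-1\) on \(\gamma^{*}\), so \[\Big|\int_{\gamma} \frac{dz}{z^{2}+1}\Big| \leq \frac{2\pi R}{R^{2}-1} \longrightarrow 0 \quad \text{as} \quad R \to \infty.\]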
Let \(\gamma\) be a closed path.
Then the complement of \(\gamma^{*}\) in the metric space \(\mathbb{C}_{\infty}\) is open. Thus it is a disjoint union of regions, since every open set is a disjoint union of open connected sets.
We say that these regions are determined by \(\gamma\) in \(\mathbb{C}_{\infty}\). Exactly one of these regions contains \(\infty\); we call it the unbounded region determined by \(\gamma\), since its intersection with \(\mathbb{C}\) is unbounded.
The regions determined by \(\gamma\) in \(\mathbb{C}_{\infty}\) and the regions determined by \(\gamma\) in \(\mathbb{C}\) are identical except that the unbounded region determined by \(\gamma\) in \(\mathbb{C}\) does not contain \(\infty\).
If \(a\) is a complex number and \(r>0\), the path defined by \[\gamma(t):=a+r e^{i t}, \quad0 \leq t \leq 2 \pi,\] is called the positively oriented circle with center at \(a\) and radius \(r\). For \(f\) continuous on \(\gamma^{*}\) we then have \[\int_{\gamma} f(z)\,{\rm d} z=\int_{0}^{2 \pi} f\left(a+r e^{i t}\right) ire^{i t}\,d t,\] and the length of \(\gamma\) is \(2 \pi r\), as expected.
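For example, for any integer \(n\) (note that \((z-a)^{n}\) is continuous on \(\gamma^{*}\) even when \(n<0\)), \[\int_{\gamma} (z-a)^{n}\, dz = \int_{0}^{2\pi} r^{n} e^{i n t}\, i r e^{i t}\, dt = i r^{n+1} \int_{0}^{2\pi} e^{i(n+1)t}\, dt = \begin{cases} 2\pi i, & n = -1,\\ 0, & n \neq -1.\end{cases}\]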
If \(a\) and \(b\) are complex numbers, the path \(\gamma\) given by
\[\gamma(t) := a + (b-a) t, \quad 0 \leq t \leq 1,\] is the positively oriented interval \([a, b]\); its length is \(|b-a|\), and
\[\int_{[a, b]} f(z) \, {\rm d} z = (b-a) \int_{0}^{1} f\big(a + (b-a)t\big) d t.\] Let \(\alpha < \beta\) be real numbers. If \[\gamma_{1}(t) := \frac{a(\beta - t) + b(t - \alpha)}{\beta - \alpha}, \quad \alpha \leq t \leq \beta,\] then we obtain an equivalent path, which we still denote by \([a, b]\).
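For instance, \[\int_{[a, b]} dz = (b-a)\int_{0}^{1} dt = b-a \qquad \text{and} \qquad \int_{[a, b]} z\, dz = (b-a)\int_{0}^{1} \big(a+(b-a)t\big)\, dt = \frac{b^{2}-a^{2}}{2}.\]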
The path opposite to \([a, b]\) is \([b, a]\).
Let \(\{a, b, c\}\) be an ordered triple of complex numbers, and let \[\Delta = \Delta(a, b, c)\] be the triangle with vertices at \(a\), \(b\), and \(c\). The set \(\Delta\) is the smallest convex set that contains \(a\), \(b\), and \(c\). Define \[\int_{\partial \Delta} f = \int_{[a, b]} f + \int_{[b, c]} f + \int_{[c, a]} f \tag{$\triangle$}\] for any \(f\) continuous on the boundary of \(\Delta\). We can regard (\(\triangle\)) as the definition of its left side. Alternatively, we can consider \(\partial \Delta\) as the path obtained by joining \([a, b]\) to \([b, c]\) to \([c, a]\), as in the definition of the sum of paths above, in which case (\(\triangle\)) is easily proved to be true.
If \(\{a, b, c\}\) is permuted cyclically, we see from (\(\triangle\)) that the left side of (\(\triangle\)) is unaffected. If \(\{a, b, c\}\) is replaced by \(\{a, c, b\}\), then the left side of (\(\triangle\)) changes sign.
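As a simple check, taking \(f \equiv 1\) and using \(\int_{[a, b]} dz = b - a\) from the example above, we get \[\int_{\partial \Delta} dz = (b-a) + (c-b) + (a-c) = 0.\]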