A series that looks like the following:
$\frac{1}{c}+\frac{1}{c^2}+\frac{1}{c^3}+\cdots=\sum_{k=1}^{\infty}\frac{1}{c^k}, \quad c \text{ is a constant with } |c|>1$

is a convergent geometric series, which can be evaluated using the following infinite sum formula:
$\frac{a}{1-r}$

where $a$ is the first term of the series and $r$ is the common ratio. In the case of the series above, $a = \frac{1}{c}$ and $r = \frac{1}{c}$. Plugging these in, we obtain the sum, $S_0$, for the above series:
$S_0=\frac{1}{c-1}.$

This series is useful in various applications, from distance calculations in physics, compound interest in finance and population growth in biology to even fascinating topics like fractal geometry. Indeed, this series is good fun, but as most mathematicians would ask, "Can we generalise it even further, and how?"
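As a quick numerical sanity check of $S_0$, we can compare a partial sum against the closed form (the value $c = 3$ and the number of terms are arbitrary choices for illustration):

```python
# Sanity check: 1/c + 1/c^2 + ... should approach 1/(c - 1).
# c = 3 and the 60-term cutoff are arbitrary illustration choices.
c = 3
partial = sum(1 / c**k for k in range(1, 60))
closed_form = 1 / (c - 1)
print(abs(partial - closed_form) < 1e-12)  # True
```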
This article demonstrates one way this series can be generalised, which is by allowing variables in the numerator of the summed over terms.
Now, what if instead of the numerator being a constant number, it is the index of summation $k$?
$\sum_{k=1}^{\infty}\frac{k}{c^k}, \quad c \text{ is a constant with } |c|>1.$

It somewhat resembles the previous series but is no longer a geometric one. In fact, it is now an arithmetico-geometric series: each term in the sum is the product of the corresponding terms of an arithmetic sequence and a geometric sequence.
It seems like the infinite sum formula above is not really useful currently. How should we tackle this problem then?
Since the variable $k$ in the numerator is of degree $1$, let's call the desired sum $S_1$, and list the first few terms of the series.
$S_1=\frac{1}{c}+\frac{2}{c^2}+\frac{3}{c^3}+\cdots$

Next, let's multiply all of the terms by $\dfrac{1}{c}$. We then obtain $\dfrac{1}{c} S_1$, with the terms becoming the following.
$\frac{1}{c}S_1=\frac{1}{c^2}+\frac{2}{c^3}+\frac{3}{c^4}+\cdots$

Now, observe that with the exception of the $k=1$ term, there is a difference of $\dfrac{1}{c^k}$ between corresponding terms of the former and latter series. With this in mind, let's see what happens if we subtract the latter sum from the former.
$\frac{c-1}{c}S_1=\frac{1}{c}+\frac{1}{c^2}+\frac{1}{c^3}+\cdots=\sum_{k=1}^{\infty}\frac{1}{c^k}=S_0$

We recover the geometric series at the beginning! The key here is that we have managed to reduce the problem down to what we've seen before, and we can now apply the geometric infinite sum formula!
How can we express this in terms of $S_1$ then? Recalling what we have previously done, we can in fact write it as $S_1-\frac{1}{c} S_1$. Doing some simplification and applying the infinite sum formula gives us the following equation:
$\frac{c-1}{c}S_1=\frac{1}{c-1}$

Solving this equation finally gives us the desired answer, also known as Gabriel's staircase:
$S_1=\frac{c}{(c-1)^2}.$

This sum has an application in probability theory: it gives us the expected value of a discrete random variable defined by a geometric distribution.
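As with the geometric case, a quick partial-sum computation supports the closed form (here $c = 2$ is an arbitrary choice):

```python
# Gabriel's staircase: sum_{k>=1} k/c^k should approach c/(c - 1)^2.
# c = 2 and the 200-term cutoff are arbitrary illustration choices.
c = 2
partial = sum(k / c**k for k in range(1, 200))
closed_form = c / (c - 1) ** 2  # equals 2 when c = 2
print(abs(partial - closed_form) < 1e-12)  # True
```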
Explanations of some of the jargon involved.
A random variable is a representation of a collection of possible outcomes of an experiment, each of which being assigned a probability value.
There are two types of random variables: discrete and continuous. Roughly speaking, a discrete random variable is "defined over whole (natural) numbers like 1, 2 and 3" whereas a continuous random variable is "defined over decimals (real numbers) like 0.2, 1.67 or even π." Formally, a discrete random variable takes on a countable set of possible outcomes while a continuous random variable takes on an uncountable set of possible values.
A common example of a discrete random variable, labelled here as $X$, is the roll of a six-sided fair die 🎲. There are six possible results of a die roll: $1,2,3,4,5,6$, each of which occurring with a probability of $\frac{1}{6}$, thus $X=\{1,2,3,4,5,6\}$, and each of the probabilities can be tabulated as follows:
| $k$ | $P(X=k)$ |
|---|---|
| $1$ | $\frac{1}{6}$ |
| $2$ | $\frac{1}{6}$ |
| $3$ | $\frac{1}{6}$ |
| $4$ | $\frac{1}{6}$ |
| $5$ | $\frac{1}{6}$ |
| $6$ | $\frac{1}{6}$ |
The expected value, also called expectation, of a random variable is the average value of all possible outcomes of an experiment, weighted by their respective probabilities. Its formula is
$E[X]=\sum_{k \in X}k \cdot P(X=k)$

which is the sum of the products of all possible values of a random variable $X$ and their respective probabilities.
Using the previous die 🎲 example, the expected value obtained from a single die roll is then
$1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{6} + 3 \cdot \frac{1}{6} + 4 \cdot \frac{1}{6} + 5 \cdot \frac{1}{6} + 6 \cdot \frac{1}{6} = \frac{7}{2} = 3.5.$

A geometric distribution is a probability distribution that gives the number of Bernoulli trials (i.e. trials in which you either "succeed" or "fail") needed until achieving the first success.
If the probability of success on each trial is $p$, then the probability of having the $n$-th trial as the first success is
$P(X=n)=(1-p)^{n-1}p.$

Using this result, the expected value formula and the fact that the number of trials can theoretically be infinite, the expectation of this distribution is then found to be
$E[X]=p \sum_{k=1}^{\infty}{k \cdot (1-p)^{k-1}}$

which explains why it is related to the arithmetico-geometric series discussed in this section. In fact, this infinite sum corresponds to the formula with $c$ substituted by $\frac{1}{1-p}$ (so that it lies within the interval of convergence), up to the multiplicative constant $p$ and an index shift.
Upon further simplification, accounting for the multiplicative constant $p$ as well as the index shift via an extra factor of $\frac{1}{1-p}$, we obtain this surprisingly simple result:
$p \cdot \cfrac{\frac{1}{1-p}}{\left(\frac{1}{1-p}-1\right)^2} \cdot \frac{1}{1-p}=\frac{1}{p}.$

This is helpful in calculating the average number of trials needed before attaining the first success. Here are a few examples.
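As a sanity check of the $\frac{1}{p}$ result, the partial sums of the expectation can be computed with exact rational arithmetic ($p = \frac{1}{4}$ is an arbitrary choice):

```python
from fractions import Fraction

# Partial sums of E[X] = p * sum_{k>=1} k(1-p)^(k-1) should approach 1/p.
# p = 1/4 and the 400-term cutoff are arbitrary illustration choices.
p = Fraction(1, 4)
expectation = sum(k * p * (1 - p) ** (k - 1) for k in range(1, 400))
print(abs(expectation - 1 / p) < Fraction(1, 10**6))  # True
```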
For the $c=10$ case, this sum also provides a proof that the infinite sum $0.1+0.02+0.003+0.0004+\cdots=\sum_{k=1}^{\infty}\frac{k}{10^k}=\frac{10}{81}$ is in fact rational, which may not be obvious at first glance.
This method only works if the series is indeed convergent. In other words, it won't work for a series like $1-1+1-1+\cdots$ (whose partial sums oscillate) or $1+\frac{1}{2}+\frac{1}{3}+\cdots$ (which, surprisingly, blows up to positive infinity).
As a sidenote, this also means that the use of the same method covered in the infamous Numberphile -1/12 video to "evaluate" the sum of the natural numbers is actually wrong (as explained in more detail in this Mathologer video), because the series mentioned in that video are clearly not even convergent to begin with!
A proof to show that $\sum_{k=1}^{\infty}\frac{k}{c^k}$ is indeed convergent.
The ratio test will come in handy here, i.e. we need to evaluate the limit $L = \lim\limits_{k \to \infty}\left\vert{\frac{(k+1)\text{-th term}}{k\text{-th term}}}\right\vert = \lim\limits_{k \to \infty}\left\vert{\cfrac{\frac{k+1}{c^{k+1}}}{\frac{k}{c^k}}}\right\vert = \lim\limits_{k \to \infty}\left\vert{\frac{k+1}{k} \cdot \frac{1}{c}}\right\vert$, which gives us $L=\frac{1}{|c|} < 1$, so the series is convergent.
In fact, it exhibits a stronger form of convergence: it is absolutely convergent, i.e. even if we replace each term of the series with its absolute value, the series will still converge!
At this point, one may ask, "Why stop here? Why not replace the numerator $k$ with $k^2$ and find the general formula of the resulting new series? It will still be convergent (proof below) anyway."
$\sum_{k=1}^{\infty}{\frac{k^2}{c^k}}$

Let's try to apply the same technique we used to obtain $S_1$. Here, we call the desired sum $S_2$.
$S_2=\frac{1}{c}+\frac{4}{c^2}+\frac{9}{c^3}+\cdots$

Multiplying throughout by $\dfrac{1}{c}$ yields
$\frac{1}{c}S_2=\frac{1}{c^2}+\frac{4}{c^3}+\frac{9}{c^4}+\cdots$

Finally, subtracting the latter from the former, we obtain
$\frac{c-1}{c}S_2=\frac{1}{c}+\frac{3}{c^2}+\frac{5}{c^3}+\cdots=\sum_{k=1}^{\infty}\frac{2k-1}{c^k}=2S_1-S_0.$

Once again, we've managed to utilise the fact that the pairwise differences of consecutive perfect squares produce the sequence of odd numbers to reduce this series down to simpler results.
Applying the previous results obtained so far, we finally have
$\frac{c-1}{c}S_2=\sum_{k=1}^{\infty}\frac{2k-1}{c^k}=2 \cdot \sum_{k=1}^{\infty}\frac{k}{c^k}-\sum_{k=1}^{\infty}\frac{1}{c^k}=2 \cdot \frac{c}{(c-1)^2}-\frac{1}{c-1}$ $\implies S_2=\frac{c(c+1)}{(c-1)^3}.$

Now, what can we get if the numerator is a cubic term?
$\sum_{k=1}^{\infty}{\frac{k^3}{c^k}}$

We will call this sum $S_3$, and let's try the same method again.
$S_3=\frac{1}{c}+\frac{8}{c^2}+\frac{27}{c^3}+\cdots$ $\implies \frac{1}{c}S_3=\frac{1}{c^2}+\frac{8}{c^3}+\frac{27}{c^4}+\cdots$ $\implies \frac{c-1}{c}S_3=\frac{1}{c}+\frac{7}{c^2}+\frac{19}{c^3}+\frac{37}{c^4}+\cdots \tag{\#}$

Notice that in (#), the numerator terms are in the form of $k^3-(k-1)^3$, with $k = 1,2,3,\ldots$ This simplifies to $3k^2-3k+1$, which means that (#) can be rewritten as:
$\frac{c-1}{c}S_3=3\sum_{k=1}^{\infty}{\frac{k^2}{c^k}}-3\sum_{k=1}^{\infty}{\frac{k}{c^k}}+\sum_{k=1}^{\infty}{\frac{1}{c^k}}=3S_2-3S_1+S_0$

Finally, we then arrive at:
$\frac{c-1}{c}S_3=3 \cdot \frac{c(c+1)}{(c-1)^3}-3 \cdot \frac{c}{(c-1)^2}+\frac{1}{c-1}$ $\implies S_3=\frac{c^3+4c^2+c}{(c-1)^4}.$

You may start to notice a pattern here, and this will be discussed in the next section.
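Both closed forms can be sanity checked numerically against partial sums ($c = 5$ is an arbitrary choice):

```python
# Check S_2 = c(c+1)/(c-1)^3 and S_3 = (c^3 + 4c^2 + c)/(c-1)^4 against
# partial sums.  c = 5 and the 200-term cutoff are arbitrary choices.
c = 5
s2_partial = sum(k**2 / c**k for k in range(1, 200))
s3_partial = sum(k**3 / c**k for k in range(1, 200))
print(abs(s2_partial - c * (c + 1) / (c - 1) ** 3) < 1e-12)  # True
print(abs(s3_partial - (c**3 + 4 * c**2 + c) / (c - 1) ** 4) < 1e-12)  # True
```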
All of these naturally lead to the question: what happens if we generalise the power of $k$ on the numerator to be any positive integer, i.e. what is the infinite sum of the following series?
$\sum_{k=1}^{\infty}\frac{k^p}{c^k}, \quad c \text{ is a constant with } |c|>1,\; p>1 \text{ is an integer}.$

We know from previous sections that such series are in fact convergent, so we could try to find the respective general formulae for them.
A proof to show that this series is convergent.
Using the ratio test again as in the previous proof, we find that $L=\lim\limits_{k \to \infty}\left\vert{\cfrac{\frac{(k+1)^p}{c^{k+1}}}{\frac{k^p}{c^k}}}\right\vert=\lim\limits_{k \to \infty}\left\vert{\left(\frac{k+1}{k}\right)^p \cdot \frac{1}{c}}\right\vert=\frac{1}{|c|} < 1$ as well, so it is indeed convergent.
Let's call this sum $S_p$. Inductively, one can see that if we obtain $\dfrac{c-1}{c}S_p$, its numerator terms are then in the form of $k^p-(k-1)^p$, where $k=1,2,3,\ldots$
This is where the binomial theorem comes into play: expanding $(k-1)^p$, the highest-power term $k^p$ gets cancelled out while the terms of lower powers remain intact. This means that this method gives us a recurrence relation for $S_p$ in terms of the lower-power sums, i.e. $S_{p-1}$, $S_{p-2}$ and so on, which helps us evaluate its general formula.
Since $k^p-(k-1)^p=k^p-\sum_{i=0}^{p}\binom{p}{i}k^{p-i}(-1)^i=\sum_{i=1}^{p}\binom{p}{i}k^{p-i}(-1)^{i+1}$, we then arrive at this recurrence relation:
$S_p=\frac{c}{c-1} \cdot \sum_{i=1}^{p}(-1)^{i+1}\binom{p}{i}S_{p-i}.$

In fact, this derivation is very similar to that in this short paper by Alan Gorfin.
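The recurrence translates directly into a short program. Below is a sketch that evaluates $S_p$ exactly with rational arithmetic (the function name `S` is my own choice) and cross-checks it against the closed forms derived earlier:

```python
from fractions import Fraction
from math import comb

def S(p, c):
    """S_p = sum_{k>=1} k^p / c^k, computed via the recurrence
    S_p = c/(c-1) * sum_{i=1}^p (-1)^(i+1) * C(p, i) * S_{p-i},
    with the geometric base case S_0 = 1/(c-1)."""
    if p == 0:
        return Fraction(1, c - 1)
    acc = sum((-1) ** (i + 1) * comb(p, i) * S(p - i, c) for i in range(1, p + 1))
    return Fraction(c, c - 1) * acc

# Cross-checks against the closed forms, at c = 3 (an arbitrary choice):
print(S(1, 3))  # 3/4,  matching c/(c-1)^2
print(S(2, 3))  # 3/2,  matching c(c+1)/(c-1)^3
print(S(3, 3))  # 33/8, matching (c^3+4c^2+c)/(c-1)^4
```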
Interestingly, the closed form of this general formula has connections with combinatorics. According to a short paper by Tom Edgar, $S_p$ has the following formula, due to Carlitz:
$S_p=\frac{c \cdot A_{p}(c)}{(c-1)^{p+1}}$

where $A_p(c)=\sum_{k=0}^{p-1}A(p,k)c^k$ denotes the $p$-th order Eulerian polynomial (following the Wikipedia notation).
The coefficients of the Eulerian polynomial, denoted $A(p,k)$ as above, are called Eulerian numbers. We can construct a triangle similar to Pascal's triangle using Eulerian numbers, with $p$ starting from $1$ and increasing by one as we go down the rows, whereas $k$ ranges from $0$ to $p-1$ as we go from left to right within each row. This triangle is then usually called Euler's triangle.
$\begin{array}{lc} p=1: & 1 \\ p=2: & 1 \quad 1 \\ p=3: & 1 \quad 4 \quad 1 \\ p=4: & 1 \quad 11 \quad 11 \quad 1 \\ p=5: & 1 \quad 26 \quad 66 \quad 26 \quad 1 \\ p=6: & 1 \quad 57 \quad 302 \quad 302 \quad 57 \quad 1 \\ & \quad \quad \quad \vdots \quad \quad \quad \end{array}$

Eulerian numbers have an application in combinatorics. To quote Wikipedia,
the Eulerian number is the number of permutations of the numbers $1$ to $n$ in which exactly $k$ elements are greater than the previous element (permutations with $k$ "ascents").
For example, $A(3,1) = 4$ gives us the number of permutations of $1,2,3$ with exactly one element being greater than the previous element, i.e. $132, 213, 231, 312$.
This means that this extension of the geometric series provides us a connection to a combinatorial pattern! Isn't that cool?
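Both the ascent-counting interpretation and Carlitz's formula are easy to check by brute force. Here is a sketch (the helper name `eulerian` is my own choice):

```python
from fractions import Fraction
from itertools import permutations

def eulerian(p, k):
    """A(p, k): count permutations of 1..p with exactly k ascents, by brute force."""
    return sum(
        1
        for perm in permutations(range(1, p + 1))
        if sum(perm[i] < perm[i + 1] for i in range(p - 1)) == k
    )

# Rows of Euler's triangle, matching the table above:
print([eulerian(3, k) for k in range(3)])  # [1, 4, 1]
print([eulerian(4, k) for k in range(4)])  # [1, 11, 11, 1]

# Carlitz's formula S_p = c * A_p(c) / (c-1)^(p+1), checked at p = 3, c = 3:
c, p = 3, 3
A_p_at_c = sum(eulerian(p, k) * c**k for k in range(p))  # Eulerian polynomial A_p(c)
print(Fraction(c * A_p_at_c, (c - 1) ** (p + 1)))  # 33/8, agreeing with the closed form for S_3
```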
As a sidenote, what if we generalise even further, so that the numerator variable could take on powers of any complex number?
This brings us beyond the realm (heh) of real numbers and opens us up to the world of Dirichlet series and the polylogarithm function, which are common research topics in analytic number theory.
A Dirichlet series is defined as a series of the form
$\sum_{k=1}^{\infty}{\frac{a_k}{k^s}}$where $s$ is a complex number and $a_k$ is a complex sequence.
The polylogarithm function is then a particular family of Dirichlet series in which the numerators $a_k$ form a power series in $z$, i.e. $a_k = z^k$, where $|z|<1$ and $z$ is allowed to be complex (hence the absolute value here is in fact the modulus). Here, $s$ is called the order or weight of the function.
$\operatorname {Li} _{s}(z)=\frac{z}{1^s}+{z^{2} \over 2^{s}}+{z^{3} \over 3^{s}}+\cdots=\sum _{k=1}^{\infty }{z^{k} \over k^{s}}$

One can then notice that $S_p$ discussed previously is in fact $\operatorname{Li}_{-p}\left(\frac{1}{c}\right)$, with $|c| > 1$.
Unfortunately, even allowing analytic continuation, obtaining a value or an explicit expression for $\operatorname{Li}_{-s}\left(\frac{1}{c}\right)$ for even just positive rational values of $s$, let alone a non-real number, is difficult. For example, even a closed-form expression for $\operatorname{Li}_{-\frac{3}{2}}\left(\frac{1}{c}\right)=\sum_{k=1}^{\infty}\frac{k\sqrt{k}}{c^k}$ is currently unknown.
That said, the polylogarithm function is still a topic with various directions of research, including its integral representations, dilogarithms (i.e. the case $s = 2$) and polylogarithm ladders.
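As a concrete taste of the polylogarithm, the dilogarithm at $z = \frac{1}{2}$ has the known closed form $\operatorname{Li}_2\left(\frac{1}{2}\right)=\frac{\pi^2}{12}-\frac{\ln^2 2}{2}$, which can be checked by summing the defining series directly:

```python
from math import log, pi

# Direct summation of Li_2(1/2) = sum_{k>=1} (1/2)^k / k^2, compared against
# the known closed form pi^2/12 - (ln 2)^2/2.  The 200-term cutoff is arbitrary.
partial = sum(0.5**k / k**2 for k in range(1, 200))
closed_form = pi**2 / 12 - log(2) ** 2 / 2
print(abs(partial - closed_form) < 1e-12)  # True
```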
To read more about the subject, as well as other topics covered here, feel free to search online using the keywords or navigate to the references/further reading section below.
Here is a prime-number-related result, adapted from a tutorial question in a number theory course, stated as follows.
For any odd prime $p$, when $1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{p-1}$ is written as a single fraction, its numerator is divisible by $p$.

A proof of this result is fairly straightforward once the right observation is made. Since $p$ is odd, the sum above has an even number of terms. Now here's the key part: note that the terms can be rearranged so that the first and last terms are paired, as are the second and second-last terms, and so on. Summing each pair, we then obtain
$\frac{p}{p-1}+\frac{p}{2(p-2)}+\cdots+\cfrac{p}{\left(\frac{p-1}{2}\right)\left(\frac{p-1}{2}+1\right)}.$

We can then factorise $p$ out of the sum; since every denominator involved is a product of integers smaller than $p$, it is coprime to $p$, so the numerator of the combined fraction is indeed divisible by $p$.
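The divisibility claim is easy to verify for small odd primes using exact rational arithmetic; here is a sketch (the helper name is my own choice):

```python
from fractions import Fraction

def numerator_divisible(p):
    """Check that 1 + 1/2 + ... + 1/(p-1), in lowest terms, has numerator divisible by p."""
    h = sum(Fraction(1, k) for k in range(1, p))
    return h.numerator % p == 0

print([numerator_divisible(p) for p in (3, 5, 7, 11, 13)])  # [True, True, True, True, True]
```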
In fact, using this result together with the fact that there are infinitely many primes (which makes the set of primes unbounded above, since primes are by definition integers), it follows that the series $\sum_{n=1}^{\infty}\frac{1}{n}$ diverges to positive infinity. This provides an alternative to the ingenious classical proof that groups the terms into infinitely many blocks, each summing to at least one half.
This gives rise to another interesting question: can we prove that there are infinitely many primes, assuming the divergence of the harmonic series?
In fact, this is true. If we break down the denominator of each term of the harmonic series into its respective prime factorisation, then apply the distributive law and the formula for the geometric infinite sum, we then obtain the following result due to Euler, where $\mathbb{P}$ denotes the set of prime numbers:
$\sum _{i=1}^{\infty }{\frac {1}{i}}=\prod _{p\in \mathbb {P} }\left(1+{\frac {1}{p}}+{\frac {1}{p^{2}}}+\cdots \right)=\prod _{p\in \mathbb {P} }{\frac {1}{1-1/p}}.$

Euler then took logarithms on both sides and applied the Taylor series of the logarithm, giving the following:
$\displaystyle \ln \prod _{p\in \mathbb {P} }{\frac {1}{1-1/p}}=\sum _{p\in \mathbb {P} }\ln {\frac {1}{1-1/p}}=\sum _{p\in \mathbb {P} }\left({\frac {1}{p}}+{\frac {1}{2p^{2}}}+{\frac {1}{3p^{3}}}+\cdots \right)=\sum _{p\in \mathbb {P} }{\frac {1}{p}}+\text{convergent terms}.$

Since this is equal to the divergent harmonic series, while a sum of finitely many finite terms must be finite, it follows that there must be infinitely many primes. This result also implies that the sum of the reciprocals of the prime numbers diverges, which was more rigorously verified by Franz Mertens.
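The inequality at the heart of Euler's argument — every $n \le N$ factors into primes $\le N$, so the expanded product dominates the harmonic partial sum $H_N$ — can be sketched in code (the helper `primes_up_to` is my own minimal implementation):

```python
from fractions import Fraction

def primes_up_to(n):
    """List primes <= n by simple trial division (fine for small n)."""
    ps = []
    for m in range(2, n + 1):
        if all(m % p for p in ps):
            ps.append(m)
    return ps

# Expanding each factor 1/(1 - 1/p) as a geometric series produces 1/n for
# every n <= N (plus further positive terms), so the product dominates H_N.
# Since H_N grows without bound, so must the product over the primes.
N = 50
product = Fraction(1)
for p in primes_up_to(N):
    product *= 1 / (1 - Fraction(1, p))
H_N = sum(Fraction(1, n) for n in range(1, N + 1))
print(product > H_N)  # True
```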
The problem statement of the challenge (with required data set and answer) can be found here.
Here is my submitted code for the challenge, which can also be found via this GitHub repository.
import os
import traceback
import xml.etree.ElementTree as ET

import matplotlib.pyplot as plt

cwd = os.getcwd()
K = 2254  # total number of log files
invalid = [317, 1768]  # indices of files to skip
coordinates = []

for i in range(K):
    if i in invalid:
        continue
    try:
        filename = os.path.join(cwd, f'log_{i}.xml')
        tree = ET.parse(filename)
        root = tree.getroot()
        location_raw = root[0].text
        coordinate = tuple(map(int, location_raw.split(',')))
        coordinates.append(coordinate)
    except Exception:
        print(traceback.format_exc())
        print(i)

plt.scatter(*zip(*coordinates))
plt.savefig('key.jpeg')
The subtle differences between function composition and higher-order functions (HOFs) can be shown through the syntax of Python, where different placements of the parentheses result in different behaviours.
A classic example is demonstrated as follows.
def thrice(f):
    return lambda x: f(f(f(x)))

add1 = lambda x: x + 1
x = thrice(thrice(add1))(0)  # 9
y = thrice(thrice)(add1)(0)  # 27
In the example above, `x` will output `9` because the defining line of code is interpreted 'right-to-left' (like function composition), first composing the `add1` function thrice to produce a function equivalent to `lambda x: x + 3`, then composing the first output thrice to produce a function equivalent to `lambda x: x + 9`, thus `0 + 9 = 9`.
As for `y`, it will output `27` because it is defined 'left-to-right', first composing the `thrice` function itself three times, resulting in an HOF that effectively composes an input function $3^3=27$ times, which then takes in the function `add1` to produce a function equivalent to `lambda x: x + 27`, and finally we get `0 + 27 = 27`.
Higher-order functions are closely related to mathematics, in the sense that they enable us to implement mathematical operators, as well as families of functions, in code.
For example^{1}, in set theory, we can define a family of functions $F$, which itself is also considered a function from $\mathbb{N}^{\mathbb{N}} \to \mathbb{N}^{\mathbb{N}}$. This operator $F$ takes in a function $f : \mathbb{N} \to \mathbb{N}$ and outputs another function $F(f) : \mathbb{N} \to \mathbb{N}$ that takes in whatever $f$ takes in and outputs whatever $f$ outputs plus one.
Phrased in a set-theoretic language as ‘pure’ as possible, $F$ is defined as
$F=\{(f,\{(n,f(n)+1) : n \in \mathbb{N}\}) : f \in \mathbb{N}^{\mathbb{N}}\}.$

An HOF implementation of $F$ in Python can then look like the following:
# The following type hints indicate that the functions are in N^N, and are unnecessary for practical purposes.
from typing import Annotated, Callable
from annotated_types import Ge  # requires an additional pip install: https://pypi.org/project/annotated-types/

def F(f: Callable[[Annotated[int, Ge(0)]], Annotated[int, Ge(0)]]) -> Callable[[Annotated[int, Ge(0)]], Annotated[int, Ge(0)]]:
    def output(n: Annotated[int, Ge(0)]) -> Annotated[int, Ge(0)]:
        return f(n) + 1
    return output

f = lambda x: x + 1
F(f)(0)  # 2
This example is taken from the lecture notes of the Set Theory course taught by Dilip Raghavan at the National University of Singapore during the Fall 2022 semester. ↩
It was originally published on my main website, but I have since moved my blog posts to this site. For backward compatibility (so that the old link remains accessible), I am linking to the article instead of reposting it; it can be found here.