author     Nguyễn Gia Phong <mcsinyx@disroot.org>  2021-03-08 21:45:21 +0700
committer  Nguyễn Gia Phong <mcsinyx@disroot.org>  2021-03-08 21:45:21 +0700
commit     818a0bd6f1b3305351d482eeab4e9e64c2af3a18 (patch)
tree       807afa8b0a26d98202f97bf7d4ab9d6513054fa2 /blog
parent     c9409b7b0b426a8af44c2046972d26ff5621f37b (diff)
download   site-818a0bd6f1b3305351d482eeab4e9e64c2af3a18.tar.gz
Migrate *the rest* of the math blogs
Diffstat (limited to 'blog')
-rw-r--r--  blog/conseq.md  307
-rw-r--r--  blog/index.md     5
-rw-r--r--  blog/system.md   96
3 files changed, 408 insertions, 0 deletions
diff --git a/blog/conseq.md b/blog/conseq.md
new file mode 100644
index 0000000..eadc970
--- /dev/null
+++ b/blog/conseq.md
@@ -0,0 +1,307 @@
++++
+rss = "SICP subsection 3.5.2 in Python"
+date = Date(2019, 2, 28)
++++
+@def tags = ["sicp", "fun", "python", "calculus"]
+
+# Infinite Sequences: A Case Study in Functional Python
+
+In this article, we will only consider sequences defined by a function
+whose domain is a subset of the set of all integers.  Such sequences will be
+*visualized*, i.e. we will evaluate their first few (thousand) elements,
+using the functional programming paradigm, where functions behave much like
+the ones in math (in contrast to the imperative style, whose side effects
+tend to confuse inexperienced coders).  The idea is taken from [subsection 3.5.2
+of SICP][] and adapted to Python, which, compared to Scheme, is significantly
+more popular: Python is pre-installed on almost every modern Unix-like system,
+namely macOS, GNU/Linux and the \*BSDs; and even at MIT, the new 6.01 in Python
+has recently replaced the legendary 6.001 (SICP).
+
+One notable advantage of using Python is its huge **standard** library.
+For example, the *identity sequence* (the sequence defined by the identity
+function) can be imported directly from ``itertools``:
+
+```python
+>>> from itertools import count
+>>> positive_integers = count(start=1)
+>>> next(positive_integers)
+1
+>>> next(positive_integers)
+2
+>>> for _ in range(4): next(positive_integers)
+... 
+3
+4
+5
+6
+```
+
+To open a Python interpreter, simply launch your terminal and run `python`.
+If that is still too much of a hassle, navigate to [the interactive shell][]
+on Python.org.
+
+*Let's get it started* with something everyone hates: recursively defined
+sequences, e.g. the famous Fibonacci sequence ($F_n = F_{n-1} + F_{n-2}$,
+$F_1 = 1$ and $F_0 = 0$).  Since [Python does not support][] [tail recursion][],
+it's generally **not** a good idea to define anything recursively (which is,
+ironically, the only trivial *functional* solution in this case),
+but it does no harm here since we will only evaluate the first few terms
+(use the **Tab** key to indent the lines when needed):
+
+```python
+>>> def fibonacci(n, a=0, b=1):
+...     # To avoid making the code look complicated,
+...     # n < 0 is not handled here.
+...     return a if n == 0 else fibonacci(n - 1, b, a + b)
+... 
+>>> fibo_seq = (fibonacci(n) for n in count(start=0))
+>>> for _ in range(7): next(fibo_seq)
+... 
+0
+1
+1
+2
+3
+5
+8
+```
+
+@@colbox-blue
+The `fibo_seq` above is just to demonstrate how `itertools.count`
+can be used to create an infinite sequence defined by a function.
+For better performance, this should be used instead:
+
+```python
+def fibonacci_sequence(a=0, b=1):
+    yield a
+    yield from fibonacci_sequence(b, a + b)
+```
+@@
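+
+For instance, assuming the definition above (a quick check that is not part of
+the original post), the first few terms can be taken with `itertools.islice`:
+
+```python
+>>> from itertools import islice
+>>> list(islice(fibonacci_sequence(), 7))
+[0, 1, 1, 2, 3, 5, 8]
+```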
+
+It is noticeable that the elements that have been iterated through (using
+`next`) disappear forever into the void (oh no!), but that is the cost we are
+willing to pay to save some memory, especially when we need to evaluate a
+member at an (arbitrarily) large index to estimate the sequence's limit.
+One case in point is estimating a definite integral using a [left Riemann sum][].
+
+```python
+def integral(f, a, b):
+    def left_riemann_sum(n):
+        dx = (b-a) / n
+        def x(i): return a + i*dx
+        return sum(f(x(i)) for i in range(n)) * dx
+    return left_riemann_sum
+```
+
+The function `integral(f, a, b)` as defined above returns a function taking
+$n$ as an argument.  As $n\to\infty$, its result approaches
+$\int_a^b f(x)\mathrm d x$.  For example, we are going to estimate
+$\pi$ as the area of a semicircle whose radius is $\sqrt 2$:
+
+```python
+>>> from math import sqrt
+>>> def semicircle(x): return sqrt(abs(2 - x*x))
+... 
+>>> pi = integral(semicircle, -sqrt(2), sqrt(2))
+>>> pi_seq = (pi(n) for n in count(start=2))
+>>> for _ in range(3): next(pi_seq)
+... 
+2.000000029802323
+2.514157464087051
+2.7320508224700384
+```
+
+Whilst the first few terms aren't quite close, at indices around 1000,
+the results are somewhat acceptable:
+
+```
+3.1414873191059525
+3.1414874770617427
+3.1414876346231577
+```
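+
+One way to reach that region without keeping the earlier terms around is to
+discard them with `itertools.islice` (a sketch reusing the `pi` function from
+above; skipping exactly a thousand terms is an arbitrary choice):
+
+```python
+from itertools import islice
+
+pi_seq = (pi(n) for n in count(start=2))
+next(islice(pi_seq, 1000, None))   # throw away roughly the first thousand terms
+for _ in range(3):
+    print(next(pi_seq))            # then look at a few consecutive terms
+```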
+
+Since we are comfortable with sequences of sums, let's move on to sums of
+a sequence, which are called series.  For estimation, again, we are going to
+make use of infinite sequences of partial sums, which are implemented as
+`itertools.accumulate` by thoughtful Python developers.  [Geometric][] and
+[p-series][] can be defined as follows:
+
+```python
+from itertools import accumulate as partial_sums
+
+def geometric_series(r, a=1):
+    return partial_sums(a*r**n for n in count(0))
+
+def p_series(p):
+    return partial_sums(1 / n**p for n in count(1))
+```
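+
+As a quick sanity check (not part of the original post), the geometric series
+with ratio $r = 1/2$ (and the default $a = 1$) should approach $a / (1 - r) = 2$:
+
+```python
+>>> s = geometric_series(1/2)
+>>> for _ in range(5): next(s)
+... 
+1.0
+1.5
+1.75
+1.875
+1.9375
+```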
+
+We can then use these to determine whether a series is convergent or divergent.
+For instance, one can easily verify that the $p$-series with $p = 2$
+converges to $\pi^2 / 6 \approx 1.6449340668482264$ via
+
+```python
+>>> s = p_series(p=2)
+>>> for _ in range(11): next(s)
+... 
+1.0
+1.25
+1.3611111111111112
+1.4236111111111112
+1.4636111111111112
+1.4913888888888889
+1.511797052154195
+1.527422052154195
+1.5397677311665408
+1.5497677311665408
+1.558032193976458
+```
+
+We can observe that it takes quite a lot of steps to get the precision we would
+generally expect ($s_{11}$ is only precise to the first decimal place;
+to the second: $s_{101}$; to the third: $s_{2304}$).
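+
+Those indices can be found with a small helper (a sketch that is not part of
+the original post; it looks for the first partial sum that rounds to the same
+value as the limit, which is one of several reasonable notions of precision):
+
+```python
+>>> def first_match(seq, limit, places):
+...     # 1-based index of the first element of seq agreeing with limit
+...     # when both are rounded to the given number of decimal places
+...     for i, s in enumerate(seq, start=1):
+...         if round(s, places) == round(limit, places):
+...             return i
+... 
+>>> first_match(p_series(2), 1.6449340668482264, 2)
+101
+```
+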
+Luckily, many techniques for series acceleration are available.
+[Shanks transformation][], for instance, can be implemented as follows:
+
+```python
+from itertools import islice, tee
+
+def shanks(seq):
+    return map(lambda x, y, z: (x*z - y*y) / (x + z - y*2),
+               *(islice(t, i, None) for i, t in enumerate(tee(seq, 3))))
+```
+
+In the code above, `lambda x, y, z: (x*z - y*y) / (x + z - y*2)` denotes
+the anonymous function $(x, y, z) \mapsto \frac{xz - y^2}{x + z - 2y}$,
+and `map` is a higher-order function applying that function to the respective
+elements of three copies of `seq` (made by `tee`), offset by zero, one
+and two terms via `islice`.  On Python 2, one should import `imap` from
+`itertools` to get the same [lazy][] behavior as `map` on Python 3.
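+
+As a sanity check (not part of the original post), feeding the first three
+partial sums of the $p$-series above into that formula by hand reproduces
+the first accelerated term printed below:
+
+```python
+>>> x, y, z = 1.0, 1.25, 1.3611111111111112
+>>> (x*z - y*y) / (x + z - y*2)
+1.4500000000000002
+```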
+
+```python
+>>> s = shanks(p_series(2))
+>>> for _ in range(10): next(s)
+... 
+1.4500000000000002
+1.503968253968257
+1.53472222222223
+1.5545202020202133
+1.5683119658120213
+1.57846371882088
+1.5862455815659202
+1.5923993101138652
+1.5973867787856946
+1.6015104548459742
+```
+
+The result is quite satisfying, yet we can go one step further
+by repeatedly applying the transformation to the sequence:
+
+```python
+>>> def compose(transform, seq):
+... 	yield next(seq)
+... 	yield from compose(transform, transform(seq))
+... 
+>>> s = compose(shanks, p_series(2))
+>>> for _ in range(10): next(s)
+... 
+1.0
+1.503968253968257
+1.5999812811165188
+1.6284732442271674
+1.6384666832276524
+1.642311342667821
+1.6425249569252578
+1.640277484549416
+1.6415443295058203
+1.642038043478661
+```
+
+The Shanks transformation works on any sequence (not just sequences of
+partial sums).  Back to the previous example of using a left Riemann sum
+to compute a definite integral:
+
+```python
+>>> pi_seq = compose(shanks, map(pi, count(2)))
+>>> for _ in range(10): next(pi_seq)
+... 
+2.000000029802323
+2.978391111182236
+3.105916845397819
+3.1323116570377185
+3.1389379264270736
+3.140788413965646
+3.140921512857936
+3.1400282163913436
+3.1400874774021816
+3.1407097229603256
+>>> next(islice(pi_seq, 300, None))
+3.1415061302492413
+```
+
+Now, having series defined, let's see if we can learn anything
+about power series.  The sequence of partial sums of the power series
+$\sum c_n (x - a)^n$ can be defined as
+
+```python
+from operator import mul
+
+def power_series(c, start=0, a=0):
+    # c yields the coefficients starting from the term of degree `start`;
+    # a is the center of the series.
+    return lambda x: partial_sums(map(mul, c, ((x - a)**n for n in count(start))))
+```
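+
+To exercise the nonzero center, here is a sketch (not from the original post)
+of the Taylor series of the natural logarithm around $a = 1$,
+$\ln x = \sum_{n \ge 1} \frac{(-1)^{n+1}}{n} (x - 1)^n$:
+
+```python
+def log(x):
+    c = ((-1)**(n + 1) / n for n in count(1))
+    return power_series(c, start=1, a=1)(x)
+
+# The partial sums of log(1.5) approach ln(1.5) ≈ 0.4055
+# (slowly, so feel free to throw compose(shanks, ...) at them).
+```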
+
+We can use this to compute functions that can be written as
+[Taylor series][]:
+
+```python
+from math import factorial
+
+def exp(x):
+    return power_series(1/factorial(n) for n in count(0))(x)
+
+def cos(x):
+    c = ((1 - n%2) * (1 - n%4) / factorial(n) for n in count(0))
+    return power_series(c)(x)
+
+def sin(x):
+    c = (n%2 * (2 - n%4) / factorial(n) for n in count(1))
+    return power_series(c, start=1)(x)
+```
+
+Amazing!  Let's test 'em!
+
+```python
+>>> e = compose(shanks, exp(1))  # this should converge to 2.718281828459045
+>>> for _ in range(4): next(e)
+... 
+1.0
+2.749999999999996
+2.718276515152136
+2.718281825486623
+```
+
+Impressive, huh? For sine and cosine, series acceleration is not even necessary:
+
+```python
+>>> from math import pi as PI
+>>> s = sin(PI/6)
+>>> for _ in range(5): next(s)
+... 
+0.5235987755982988
+0.5235987755982988
+0.49967417939436376
+0.49967417939436376
+0.5000021325887924
+>>> next(islice(cos(PI/3), 8, None))
+0.500000433432915
+```
+
+[subsection 3.5.2 of SICP]: https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-24.html#%_sec_3.5.2
+[the interactive shell]: https://www.python.org/shell
+[Python does not support]: http://neopythonic.blogspot.com/2009/04/final-words-on-tail-calls.html
+[tail recursion]: https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-11.html#call_footnote_Temp_48
+[left Riemann sum]: https://en.wikipedia.org/wiki/Riemann_sum#Left_Riemann_sum
+[Geometric]: https://en.wikipedia.org/wiki/Geometric_series
+[p-series]: https://math.oregonstate.edu/home/programs/undergrad/CalculusQuestStudyGuides/SandS/SeriesTests/p-series.html
+[Shanks transformation]: https://en.wikipedia.org/wiki/Shanks_transformation
+[lazy]: https://en.wikipedia.org/wiki/Lazy_evaluation
+[Taylor series]: https://en.wikipedia.org/wiki/Taylor_series
diff --git a/blog/index.md b/blog/index.md
new file mode 100644
index 0000000..bdfba6a
--- /dev/null
+++ b/blog/index.md
@@ -0,0 +1,5 @@
+# Web Logs
+
+I occasionally blog about functional programming, lambda calculus
+and other computational stuff, or anything related to computers in general.
+Such posts are tagged as [`fun`](/tag/fun).
diff --git a/blog/system.md b/blog/system.md
new file mode 100644
index 0000000..bb92343
--- /dev/null
+++ b/blog/system.md
@@ -0,0 +1,96 @@
++++
+rss = "Properties of cascade connected systems analyzed via anonymous functions"
+date = Date(2020, 4, 15)
++++
+@def tags = ["system", "fun", "anonymous"]
+
+# System Cascade Connection
+
+Consider two discrete-time systems $A$ and $B$ connected in cascade
+to form a new system $C = x \mapsto B(A(x))$.
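+
+In Python (a sketch that is not part of the original post), with signals
+modeled as functions from integers to numbers, the cascade is simply
+function composition:
+
+```python
+# Hypothetical example systems: A is a two-point moving sum,
+# B is a one-sample delay; both map signals to signals.
+A = lambda x: (lambda n: x(n) + x(n - 1))
+B = lambda x: (lambda n: x(n - 1))
+C = lambda x: B(A(x))
+
+unit_step = lambda n: 1.0 if n >= 0 else 0.0
+print([C(unit_step)(n) for n in range(-1, 4)])  # [0.0, 0.0, 1.0, 2.0, 2.0]
+```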
+
+## Linearity
+
+If $A$ and $B$ are linear, i.e. for all signals $x_i$ and scalars $a_i$,
+
+\[\begin{aligned}
+  A\left(n \mapsto \sum_i a_i x_i[n]\right) = n \mapsto \sum_i a_i A(x_i)[n]\\
+  B\left(n \mapsto \sum_i a_i x_i[n]\right) = n \mapsto \sum_i a_i B(x_i)[n]
+\end{aligned}\]
+
+then $C$ is also linear
+
+\[\begin{aligned}
+  C\left(n \mapsto \sum_i a_i x_i[n]\right)
+  &= B\left(A\left(n \mapsto \sum_i a_i x_i[n]\right)\right)\\
+  &= B\left(n \mapsto \sum_i a_i A(x_i)[n]\right)\\
+  &= n \mapsto \sum_i a_i B(A(x_i))[n]\\
+  &= n \mapsto \sum_i a_i C(x_i)[n]
+\end{aligned}\]
+
+## Time Invariance
+
+If $A$ and $B$ are time invariant,
+i.e. for all signals $x$ and integers $k$,
+
+\[\begin{aligned}
+  A(n \mapsto x[n - k]) &= n \mapsto A(x)[n - k]\\
+  B(n \mapsto x[n - k]) &= n \mapsto B(x)[n - k]
+\end{aligned}\]
+
+then $C$ is also time invariant
+
+\[\begin{aligned}
+  C(n \mapsto x[n - k])
+  &= B(A(n \mapsto x[n - k]))\\
+  &= B(n \mapsto A(x)[n - k])\\
+  &= n \mapsto B(A(x))[n - k]\\
+  &= n \mapsto C(x)[n - k]
+\end{aligned}\]
+
+## LTI Ordering
+
+If $A$ and $B$ are linear and time-invariant, there exist
+signals $g$ and $h$ such that $A = x \mapsto x * g$
+and $B = x \mapsto x * h$, thus
+
+\[B(A(x)) = B(x * g) = x * g * h = x * h * g = A(x * h) = A(B(x))\]
+
+i.e. interchanging the order of $A$ and $B$ does not change $C$.
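+
+A quick numeric check of this (a sketch with made-up finite impulse
+responses, not part of the original post):
+
+```python
+def convolve(x, h):
+    # Convolution of a signal x (int -> float) with a finitely
+    # supported impulse response h, given as {index: value}.
+    return lambda n: sum(c * x(n - m) for m, c in h.items())
+
+g = {0: 1.0, 1: 0.5}          # impulse response of A
+h = {0: 2.0, 2: -1.0}         # impulse response of B
+A = lambda x: convolve(x, g)
+B = lambda x: convolve(x, h)
+
+x = lambda n: float(n % 3)    # an arbitrary test signal
+assert all(B(A(x))(n) == A(B(x))(n) for n in range(-5, 6))
+```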
+
+## Causality
+
+If $A$ and $B$ are causal,
+i.e. for all signals $x$, $y$ and any choice of integer $k$,
+
+\[\begin{aligned}
+  \forall n < k, x[n] = y[n]\quad
+  \Longrightarrow &\;\begin{cases}
+  \forall n < k, A(x)[n] = A(y)[n]\\
+  \forall n < k, B(x)[n] = B(y)[n]
+  \end{cases}\\
+  \Longrightarrow &\;\forall n < k, B(A(x))[n] = B(A(y))[n]\\
+  \Longleftrightarrow &\;\forall n < k, C(x)[n] = C(y)[n]
+\end{aligned}\]
+
+then $C$ is also causal.
+
+## BIBO Stability
+
+If $A$ and $B$ are stable, i.e. for every signal $x$ and scalar $a$,
+there exists a scalar $b$ such that for all integers $n$,
+
+\[\begin{aligned}
+  |x[n]| < a &\Longrightarrow |A(x)[n]| < b\\
+  |x[n]| < a &\Longrightarrow |B(x)[n]| < b
+\end{aligned}\]
+
+then $C$ is also stable, i.e. for every signal $x$ and scalar $a$,
+there exist scalars $b$ and $c$ such that for all integers $n$,
+
+\[\begin{aligned}
+  |x[n]| < a\quad
+  \Longrightarrow &\;|A(x)[n]| < b\\
+  \Longrightarrow &\;|B(A(x))[n]| < c\\
+  \Longleftrightarrow &\;|C(x)[n]| < c
+\end{aligned}\]