Complex (Cognitive) Systems
the brain meets the operational definition of a complex² system:
- it's non-stationary (a dynamical system whose model constants/parameters vary over time; the system doesn't settle into a single attractor)
- non-linear behaviour, emergent properties (hard to determine from the observation of components alone, which number in the millions)
- sensitive to initial conditions, i.e. chaotic (slight differences in starting state lead to widely different trajectories)
- hierarchical organisation, scale-free structure, many-to-many relations among components, interconnectedness
- phase transitions, equilibrium depends on critical states
- noisy measurements, plus some rather stochastic processes (spiking)
• dynamical systems basics
we'll distinguish between two kinds of systems:
- dynamical (deterministic). e.g. the harmonic oscillator:
```
|
|--\/\/\--[mass]
|
      |----------------> x
```
From Hooke's law, $F = -kx$; combined with Newton's second law this gives the differential equation $m\ddot{x} = -kx$.
we could have arrived at the same harmonic oscillator equation using a pendulum, or an LC circuit.
in order to solve the differential equation, we partition it into two linear equations: $\dot{x} = v$ and $\dot{v} = -\frac{k}{m}x$. the general solution is $x(t) = A\cos(\omega t) + B\sin(\omega t)$, with $\omega = \sqrt{k/m}$.
in order to determine the initial conditions/constants we rewrite the general solution as: $x(t) = C\cos(\omega t + \phi)$.
this has the property that $x(0) = C\cos\phi$, and $\dot{x}(0) = -C\omega\sin\phi$.
let $\phi$ be the initial phase. therefore $C = \sqrt{x(0)^2 + \left(\dot{x}(0)/\omega\right)^2}$ and $\tan\phi = -\frac{\dot{x}(0)}{\omega\, x(0)}$.
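the closed-form solution can be checked against a direct numerical integration; a minimal sketch (mass, spring constant and initial conditions are arbitrary choices here):

```python
import numpy as np

# harmonic oscillator m x'' = -k x, integrated with semi-implicit Euler
# and compared against the closed form x(t) = C cos(omega t + phi)
m, k = 1.0, 4.0          # arbitrary mass and spring constant
omega = np.sqrt(k / m)   # natural frequency
x, v = 1.0, 0.0          # x(0) = 1, x'(0) = 0  =>  C = 1, phi = 0
dt, steps = 1e-4, 20000  # integrate up to t = 2

for _ in range(steps):
    v += -(k / m) * x * dt   # v' = -(k/m) x
    x += v * dt              # x' = v

t = steps * dt
x_exact = np.cos(omega * t)  # closed form with C = 1, phi = 0
print(abs(x - x_exact))
```

semi-implicit Euler is used here because it conserves the oscillation amplitude over long runs, unlike plain forward Euler.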
• homework 1
• linear independence of Fourier terms
let $\phi_n(t) = e^{i 2\pi n t / T}$ with $n \in \mathbb{Z}$ be functions on the interval $[-T/2, T/2]$. prove that $\langle \phi_n, \phi_m \rangle = \delta_{nm}$, given the proper definition of the dot product.
let $\delta_{nm}$ be the Kronecker delta; and let $\langle f, g \rangle = \frac{1}{T} \int_{-T/2}^{T/2} f(t)\, g^*(t)\, dt$ be the scalar product for functions of the vector basis defined by $\{\phi_n\}$.
proof by modus ponens follows:
- restatement of conditional premise
given the above definitions; if $\langle \phi_n, \phi_m \rangle = \delta_{nm}$ for all $n, m$, then $\{\phi_n\}$ is an orthogonal basis. that is, distinct basis functions have zero scalar product.
- proof of the antecedent premise
$\langle \phi_n, \phi_m \rangle = \frac{1}{T} \int_{-T/2}^{T/2} e^{i 2\pi (n - m) t / T}\, dt$
from Euler's formula: $e^{i 2\pi (n - m) t / T} = \cos\!\left(\tfrac{2\pi (n - m) t}{T}\right) + i \sin\!\left(\tfrac{2\pi (n - m) t}{T}\right)$
from the integration of the odd function over a zero-centered interval, the $\sin$ term vanishes; and for $n \neq m$ the $\cos$ term integrates over a whole number of periods, so it vanishes too: $\langle \phi_n, \phi_m \rangle = 0$.
let $n = m$; then $\langle \phi_n, \phi_n \rangle = \frac{1}{T} \int_{-T/2}^{T/2} 1\, dt = 1$.
note that both results together give $\langle \phi_n, \phi_m \rangle = \delta_{nm}$.
$\therefore \{\phi_n\}$ is an orthogonal basis.
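the orthogonality result can be verified numerically; a minimal sketch (the period $T = 2$ and the indices are arbitrary choices):

```python
import numpy as np

# numerical check of <phi_n, phi_m> = delta_nm, with
# phi_n(t) = exp(i 2 pi n t / T) and <f, g> = (1/T) * integral of f g* over one period
T = 2.0
N = 200000
t = np.linspace(-T / 2, T / 2, N, endpoint=False)  # one zero-centered period
dt = T / N

def inner(n, m):
    f = np.exp(1j * 2 * np.pi * (n - m) * t / T)  # phi_n(t) * conj(phi_m(t))
    return np.sum(f) * dt / T                     # Riemann-sum inner product

same = inner(3, 3)       # expect 1
different = inner(3, 5)  # expect 0
print(abs(same - 1), abs(different))
```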
• Fourier transform
the Fourier series represents a periodic function $h(t)$ as a discrete linear combination of sines and cosines. on the other hand, the Fourier transform is capable of representing a more general set of functions, not restricted to being periodic. how is that achieved?
(for convenience, we will start from the series in complex exponential notation)
$h(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{i 2\pi n t / T}, \qquad c_n = \frac{1}{T} \int_{-T/2}^{T/2} h(\tau)\, e^{-i 2\pi n \tau / T}\, d\tau$
if $h(t)$ weren't periodic, the series would be representing it only at $[-T/2, T/2]$. nonetheless, we can make the interval arbitrarily large:
let $f_n = n / T$, then $\Delta f = f_{n+1} - f_n = 1/T$. substituting in the last expression yields:
$h(t) = \sum_{n=-\infty}^{\infty} \left[ \int_{-T/2}^{T/2} h(\tau)\, e^{-i 2\pi f_n \tau}\, d\tau \right] e^{i 2\pi f_n t}\, \Delta f$
which is a Riemann sum for variable $f$: as $T \to \infty$,
$h(t) = \int_{-\infty}^{\infty} H(f)\, e^{i 2\pi f t}\, df, \qquad H(f) = \int_{-\infty}^{\infty} h(\tau)\, e^{-i 2\pi f \tau}\, d\tau$
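the Riemann-sum picture can be sanity-checked by approximating $H(f)$ directly as a discrete sum; a sketch using the Gaussian $h(t) = e^{-\pi t^2}$, which is its own transform in this convention:

```python
import numpy as np

# Riemann-sum approximation of H(f) = integral of h(t) exp(-i 2 pi f t) dt
# for h(t) = exp(-pi t^2), whose transform is H(f) = exp(-pi f^2)
dt = 1e-3
t = np.arange(-10, 10, dt)
h = np.exp(-np.pi * t**2)

def H(f):
    return np.sum(h * np.exp(-1j * 2 * np.pi * f * t)) * dt

for f in (0.0, 0.5, 1.0):
    print(f, abs(H(f) - np.exp(-np.pi * f**2)))  # errors should be tiny
```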
what symmetry properties does $H(f)$ have if $h(t)$ is:
- odd real
- odd imaginary
- odd complex
and what symmetry properties does $h(t)$ have if $H(f)$ is:
- odd real
- even complex
- even imaginary
• odd real h(t)
the real part of $H(f)$ vanishes. writing $h(t) = r(t)$ with $r$ odd and real:
$H(f) = \int_{-\infty}^{\infty} r(t) \left[ \cos(2\pi f t) - i \sin(2\pi f t) \right] dt$
from integration of an odd function ($r(t)\cos(2\pi f t)$, odd $\times$ even) at a symmetric interval, the first term is zero:
$H(f) = -i \int_{-\infty}^{\infty} r(t) \sin(2\pi f t)\, dt$
since $\sin$ is odd in $f$, $H(f)$ is odd. $\therefore H(f)$ is odd and imaginary.
• odd imaginary h(t)
writing $h(t) = i\, s(t)$ with $s$ odd and real, the $i\, s(t)\cos(2\pi f t)$ term vanishes as before, leaving $H(f) = \int_{-\infty}^{\infty} s(t) \sin(2\pi f t)\, dt$, which is odd in $f$. $\therefore H(f)$ is odd and real.
• odd h(t)
writing $h(t) = r(t) + i\, s(t)$ with both parts odd, the two previous cases combine: $H(f) = \int_{-\infty}^{\infty} s(t)\sin(2\pi f t)\, dt - i \int_{-\infty}^{\infty} r(t)\sin(2\pi f t)\, dt$, which is odd in $f$. $\therefore H(f)$ is odd and complex.
• odd real H(f)
by the symmetry between the transform and its inverse ($h(t) = \int_{-\infty}^{\infty} H(f)\, e^{i 2\pi f t}\, df$), the same reasoning applies with the roles of $h$ and $H$ swapped: the $H(f)\cos(2\pi f t)$ term is odd in $f$ and vanishes, leaving $h(t) = i \int_{-\infty}^{\infty} H(f)\sin(2\pi f t)\, df$, odd in $t$. $\therefore h(t)$ is imaginary and odd.
• complex even H(f)
for even $H(f)$, the $H(f)\sin(2\pi f t)$ term is odd in $f$ and vanishes, leaving $h(t) = \int_{-\infty}^{\infty} H(f)\cos(2\pi f t)\, df$, even in $t$. $\therefore h(t)$ is even and complex.
• imaginary even H(f)
by the same argument, with $H(f) = i\, S(f)$ and $S$ even and real: $h(t) = i \int_{-\infty}^{\infty} S(f)\cos(2\pi f t)\, df$ is even. $\therefore h(t)$ is even and imaginary.
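one of these symmetries can be confirmed numerically; a sketch using the (arbitrarily chosen) odd real signal $h(t) = t\, e^{-t^2}$, whose transform should be purely imaginary and odd:

```python
import numpy as np

# symmetry check: an odd real signal should have an odd, purely imaginary transform
dt = 1e-3
t = np.arange(-10, 10, dt)
h = t * np.exp(-t**2)            # odd and real

def H(f):
    return np.sum(h * np.exp(-1j * 2 * np.pi * f * t)) * dt

Hf = H(0.3)
print(abs(Hf.real))        # real part vanishes
print(abs(H(-0.3) + Hf))   # H(-f) = -H(f): odd
```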
• convolution: graphical interpretation
let $h(t) = e^{-t}$ for $t \geq 0$, and $h(t) = 0$ elsewhere. also let $x(t) = \sin(t)$ for $0 \leq t \leq \pi/2$ and $x(t) = 0$ elsewhere. graphically estimate the convolution of these two functions.
```python
#!/usr/bin/env python3
# convolution graphical demo
# Copyright 2018 Isaac David
# License: GNU AGPLv3

%matplotlib inline
from ipywidgets import interactive, IntSlider, FloatSlider
import matplotlib.pyplot as plt
import numpy as np
import math

def x(t):
    if 0 <= t <= (math.pi / 2):
        return math.sin(t)
    else:
        return 0

def h(t):
    if t > 0:
        return math.exp(-t)
    else:
        return 0

def part_convolution(mini, t):
    time = np.arange(0, t, 0.01)
    x2 = []
    h2 = []
    for i in time:
        x2.append(x(i))
        h2.append(h(i))

    # produce array of 'same' size as x and h
    convolution = np.convolve(x2, h2, 'same')
    convolution_normalized = convolution / (max(convolution) + 0.001)

    # plotting
    fig = plt.figure(figsize=(12, 5))
    plt.plot(time, 2 * convolution_normalized)
    plt.plot(time, x2)
    plt.plot(time, h2[::-1])  # [::-1] reverses h2
    plt.ylim(0, max(2.2 * convolution_normalized))
    plt.xlim(-1, 5)
    plt.xlabel('t')
    plt.ylabel('conv(x, h) / max(convolution(x, h))')
    plt.axhline(0, color='gray')
    plt.show()

form = interactive(part_convolution,
                   t = FloatSlider(min=0, max=10, step=0.01, value=0.01),
                   mini = IntSlider(min=-10, max=0, step=1, value=0))
form
```
• convolution theorem
$y(t) = (x * h)(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau$ is the convolution of functions $x(t)$ and $h(t)$. $F[y(t)]$ denotes its Fourier transform:
$F[y(t)] = X(f)\, H(f)$
time-domain convolution is equivalent to frequency-domain product!
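the theorem holds exactly for circular (periodic) convolution of discrete signals, which makes it easy to verify; a minimal sketch with random vectors:

```python
import numpy as np

# convolution theorem on discrete signals: DFT(x circ-conv h) = DFT(x) * DFT(h)
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(64)

# circular convolution y[k] = sum_n x[n] h[(k - n) mod N], computed directly
y = np.array([np.sum(x * np.roll(h[::-1], k + 1)) for k in range(64)])

lhs = np.fft.fft(y)
rhs = np.fft.fft(x) * np.fft.fft(h)
print(np.max(np.abs(lhs - rhs)))  # should be at round-off level
```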
• cross-correlation theorem
$z(t) = \int_{-\infty}^{\infty} x^*(\tau)\, h(\tau + t)\, d\tau$ is the correlation of functions $x(t)$ and $h(t)$ as a function of a time lag between them. $F[z(t)]$ denotes its Fourier transform:
$F[z(t)] = X^*(f)\, H(f)$
cross-correlation is equivalent to frequency-domain product (with conjugated $X(f)$).
• uncertainty principle
the more short-lived the wave in the time domain, the more widespread its power spectrum (a.k.a. frequency spectrum), and vice versa.
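a quick numerical illustration: halving the time-domain width of a Gaussian pulse doubles the RMS width of its power spectrum (the pulse widths here are arbitrary choices):

```python
import numpy as np

# uncertainty demo: time-domain width vs power-spectrum width for a Gaussian
def spectral_width(sigma):
    dt = 1e-3
    t = np.arange(-20, 20, dt)
    h = np.exp(-t**2 / (2 * sigma**2))
    power = np.abs(np.fft.fft(h))**2
    power /= power.sum()
    f = np.fft.fftfreq(t.size, d=dt)
    return np.sqrt(np.sum(power * f**2))  # RMS spectral width (zero-centered)

w1 = spectral_width(1.0)   # wide pulse  -> narrow spectrum
w2 = spectral_width(0.5)   # narrow pulse -> wide spectrum
print(w2 / w1)             # expect about 2
```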
• discrete Fourier transform
the computation of a DFT is done segment by segment. the signal is multiplied by some "window" function (e.g. a square pulse) which is zero outside the window. this is known as windowing.
the effect of using a finite window is that nonexistent frequencies appear in the power spectrum (around the original frequencies). this phenomenon is known as leakage. the explanation is evident from the convolution theorem: one can use similar reasoning to show that $F[h(t)\, w(t)] = (H * W)(f)$. therefore, multiplication by a window in the time domain induces a convolution in the frequency domain.
low frequencies (and therefore non-stationary systems) are the most problematic to compute a DFT for, since the window may not be long enough to capture them. there's no single test for stationarity; it can nonetheless be probed by measuring the effect of varying window sizes on different parameters (centrality and spread statistics, Lyapunov exponent, etc.).
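leakage is easy to demonstrate: a sinusoid completing a whole number of cycles inside the window occupies a single DFT bin, while a non-integer number of cycles smears across many (the frequencies here are arbitrary):

```python
import numpy as np

# leakage demo: integer vs non-integer number of cycles in the window
n = np.arange(1024)
clean = np.sin(2 * np.pi * 8.0 * n / 1024)   # exactly 8 cycles in the window
leaky = np.sin(2 * np.pi * 8.5 * n / 1024)   # 8.5 cycles: leaks into neighbors

def bins_above(signal, frac=0.01):
    power = np.abs(np.fft.rfft(signal))**2
    return int(np.sum(power > frac * power.max()))

print(bins_above(clean), bins_above(leaky))  # 1 bin vs many bins
```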
• sinc interpolation
what is the corresponding frequency-domain convolution kernel for square-wave windowing? the Fourier transform of a square pulse of width $T$ is the sinc function, $W(f) = T\, \frac{\sin(\pi f T)}{\pi f T}$.
calculate the first three non-zero terms of the Fourier series for the square-wave function $f(x) = \mathrm{sgn}(x)$, with period $2\pi$.
in terms of the Fourier series:
$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos(n x) + b_n \sin(n x) \right]$
from the integration of odd functions ($f(x)$ and $f(x)\cos(nx)$) at a symmetrical interval ($[-\pi, \pi]$): $a_0 = a_n = 0$.
we will consider the interval $[-\pi, \pi]$, which includes a full period of $\sin(nx)$ for any n. therefore:
$b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(n x)\, dx = \frac{2}{\pi} \int_{0}^{\pi} \sin(n x)\, dx = \frac{2}{n\pi}\left(1 - \cos(n\pi)\right)$
so $b_n = \frac{4}{n\pi}$ for odd $n$ and $0$ for even $n$:
$f(x) \approx \frac{4}{\pi} \left[ \sin(x) + \frac{\sin(3x)}{3} + \frac{\sin(5x)}{5} \right]$
f(x) could be just a discrete sample of a continuous signal, but because convolution produces a continuous function (a sum of sine waves), the operation fills in the missing time points.
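the series coefficients can be recovered numerically; a sketch, taking $f(x)$ to be the odd square wave $\mathrm{sgn}(x)$ on $[-\pi, \pi]$ (the analytic values for this choice are $b_n = 4/(n\pi)$ for odd $n$, $0$ for even $n$):

```python
import numpy as np

# numerical sine coefficients b_n = (1/pi) * integral of f(x) sin(nx) over [-pi, pi]
dx = 1e-5
x = np.arange(-np.pi, np.pi, dx)
f = np.sign(x)  # odd square wave: -1 on (-pi, 0), +1 on (0, pi)

coeffs = [np.sum(f * np.sin(n * x)) * dx / np.pi for n in (1, 2, 3)]
print(coeffs)  # approximately [4/pi, 0, 4/(3 pi)]
```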
• Nyquist-Shannon sampling theorem
continuous functions of bounded frequency bandwidth ("band-limited") contain a bounded amount of information. they can be perfectly represented using countable (i.e. discrete) sets: the function's Fourier coefficient/phase pairs.
the Nyquist rate is the minimum sampling frequency at which the digitization of a continuous signal still loses no fidelity; below it, aliasing occurs:
$f_s > 2 f_{max}$
that is, the sampling frequency should be greater than twice the maximum frequency in the power spectrum of the original signal.
undersampling results in the unconsidered high frequencies folding back (adding up) onto lower frequencies in the frequency domain. oversampling is harmless, beyond being a waste of disk space.
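a minimal aliasing sketch (the frequencies are arbitrary choices): a 9 Hz sine sampled at 12 Hz violates the criterion and shows up at $|12 - 9| = 3$ Hz:

```python
import numpy as np

# aliasing demo: a 9 Hz sine sampled at 12 Hz (< 2 * 9) folds back to 3 Hz
fs, f0, seconds = 12.0, 9.0, 8
n = np.arange(int(fs * seconds))
x = np.sin(2 * np.pi * f0 * n / fs)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n.size, d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # aliased frequency, 3 Hz rather than 9 Hz
```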
• testing for undersampling
what if the maximum frequency isn't known in advance? it is possible to test that the maximum frequency has been considered, because aliasing will make two power spectra of the same signal look dissimilar under two different sampling frequencies:
- record the signal using two sampling frequencies, one greater than the rate being tested against the Nyquist criterion.
- compute both Fourier transforms.
- if they are simply scaled versions of one another, aliasing didn't occur. i.e. the ratio between two features (e.g. maxima) should stay constant.
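the procedure above can be sketched numerically (the signal and rates are arbitrary choices); both rates satisfy the Nyquist criterion here, so the spectra agree on the peak frequency:

```python
import numpy as np

# two-rate test sketch: the same 5 Hz signal recorded at 50 Hz and 100 Hz
def peak_freq(fs, f0=5.0, seconds=4):
    n = np.arange(int(fs * seconds))
    x = np.sin(2 * np.pi * f0 * n / fs)
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(n.size, d=1 / fs)[np.argmax(spectrum)]

print(peak_freq(50.0), peak_freq(100.0))  # matching peaks: no aliasing
```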
complexity doesn't imply using advanced math, for it can emerge from very simple descriptions (see rule 110 for instance). this page uses calculus (including some differential equations), inferential statistics (also the fundamentals of information theory), bits of graph theory and linear algebra; most (or all) of which should be familiar to STEM undergraduates. topics like Fourier analysis and chaos theory are introduced from there. ↩
not to be confused with "complicated" or "difficult" ↩