Useful Formulas for Math
I am opening a new post to collect formulas from Learning Theory and the general Machine Learning area. It is purely mathematical, and it is best used as a reference rather than studied item by item.
Probability Theory and Expectation
Law of Total Expectation (a.k.a. Tower Rule)
Let random variables $X$ and $Y$ be defined on the same probability space. Then $E_X(X) = E_Y(E_X(X\lvert Y))$.
Proof:
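A short derivation for the discrete case (the continuous case is analogous, with sums replaced by integrals):

$$
\begin{aligned}
E_Y\bigl(E_X(X\mid Y)\bigr) &= \sum_y Pr(Y=y)\sum_x x\,Pr(X=x\mid Y=y) \\
&= \sum_x x \sum_y Pr(X=x\mid Y=y)\,Pr(Y=y) \\
&= \sum_x x\,Pr(X=x) = E_X(X).
\end{aligned}
$$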
Learning Theory
L’Hospital’s Rule
L’Hospital’s Rule uses derivatives to help evaluate limits involving indeterminate forms. It states that for functions $f$ and $g$ which are differentiable on an open interval $I$ except possibly at a point $c$ contained in $I$, if
$\lim\limits_{x\to c} f(x) = \lim\limits_{x\to c} g(x) = 0$ or $\pm\infty$,
$g^{\prime}(x)\neq 0$ for all $x$ in $I$ with $x\neq c$, and $\lim\limits_{x\to c} \frac{f^{\prime}(x)}{g^{\prime}(x)}$ exists,
then $\lim\limits_{x\to c} \frac{f(x)}{g(x)} = \lim\limits_{x\to c} \frac{f^{\prime}(x)}{g^{\prime}(x)}$.
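For example, applying the rule to the $\frac{0}{0}$ form $\lim\limits_{x\to 0}\frac{\sin x}{x}$ gives

$$\lim\limits_{x\to 0}\frac{\sin x}{x} = \lim\limits_{x\to 0}\frac{\cos x}{1} = 1.$$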
Markov Inequality
For a non-negative random variable $X \geq 0$ and any $b > 0$,
$Pr(X \geq b) \leq \frac{E[X]}{b}$
Proof: $E[X] = \sum_x xPr(X=x) \geq \sum_{x\geq b} bPr(X=x) = bPr(X\geq b)$
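As a quick numeric sanity check, here is a minimal simulation sketch; the exponential distribution and the threshold $b$ are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # non-negative samples with E[X] = 2
b = 5.0

empirical = np.mean(x >= b)  # estimate of Pr(X >= b), about 0.08 here
bound = x.mean() / b         # Markov bound E[X] / b, about 0.4 here
print(empirical <= bound)    # True: the tail probability never exceeds the bound
```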
Math
Log properties
$a^{\log_b n} = (b^{\log_b a})^{\log_b n} = (b^{\log_b n})^{\log_b a} = n^{\log_b a}$
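A quick numeric check of the identity; the values of $a$, $b$, $n$ are arbitrary:

```python
import math

a, b, n = 3.0, 2.0, 10.0
print(a ** math.log(n, b))  # a^(log_b n) ≈ 38.45
print(n ** math.log(a, b))  # n^(log_b a), the same value
```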
Geometric Series
$1 + x + x^2 + \dots + x^n = \frac{1 - x^{n+1}}{1 - x}$ for $x\neq 1$
$1 + x + x^2 +\dots = \frac{1}{1 - x}$ for $\lvert x\rvert <1$
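A quick numeric check of the finite-sum formula, with $x$ and $n$ chosen arbitrarily:

```python
x, n = 0.7, 10
partial_sum = sum(x ** k for k in range(n + 1))
closed_form = (1 - x ** (n + 1)) / (1 - x)
print(partial_sum, closed_form)  # both ≈ 3.2674
```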
Harmonic Series
$\sum\limits_{i=1}^{n} \frac{1}{i} \approx \ln{n} + \gamma + \frac{1}{2n} - \frac{1}{12n^2}$, where $\gamma \approx 0.5772$ is the Euler-Mascheroni constant, is an excellent approximation for the partial sums of the series.
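For instance, at $n = 100$ (an arbitrary choice) the approximation agrees with the exact partial sum to many decimal places:

```python
import math

n = 100
gamma = 0.5772156649015329  # Euler-Mascheroni constant
exact = sum(1 / i for i in range(1, n + 1))
approx = math.log(n) + gamma + 1 / (2 * n) - 1 / (12 * n ** 2)
print(exact, approx)  # both ≈ 5.1873775
```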
Stirling’s approximation
$n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$
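A quick numeric check at $n = 10$ (an arbitrary choice):

```python
import math

n = 10
exact = math.factorial(n)                                  # 3628800
stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n  # ≈ 3598695.6
print(exact, stirling)  # relative error is under 1%
```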
Trace Properties
$tr(AB) = tr(BA)$
$tr(ABC) = tr(CAB) = tr(BCA)$, which is called the cyclic property of the trace.
$\nabla_{A} tr(AB) = B^{T}$
$\nabla_{A^T} f(A) = (\nabla_A f(A))^T$
$\nabla_A tr(ABA^TC) = CAB + C^TAB^T$
$\nabla_A \lvert A\rvert = \lvert A\rvert (A^{-1})^T$
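A quick numpy check of the cyclic property and of $\nabla_{A} tr(AB) = B^T$ via central finite differences; the random $3 \times 3$ matrices are chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# cyclic property: all three traces are equal
print(np.trace(A @ B @ C), np.trace(C @ A @ B), np.trace(B @ C @ A))

# finite-difference gradient of tr(AB) with respect to A equals B^T
eps = 1e-6
grad = np.zeros_like(A)
for i in range(3):
    for j in range(3):
        dA = np.zeros_like(A)
        dA[i, j] = eps
        grad[i, j] = (np.trace((A + dA) @ B) - np.trace((A - dA) @ B)) / (2 * eps)
print(np.allclose(grad, B.T))  # True
```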
Woodbury Matrix Identity
It says that for any given $A$ of size $n \times n$, $U$ of size $n \times k$, $C$ of size $k \times k$, and $V$ of size $k \times n$ such that $A$ and $C$ are nonsingular, we have:
$(A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}$
The proof can be found on Wikipedia.
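A quick numeric verification with randomly generated, well-conditioned matrices; the sizes $n = 5$ and $k = 2$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)  # nonsingular n x n
U = rng.standard_normal((n, k))
C = rng.standard_normal((k, k)) + k * np.eye(k)  # nonsingular k x k
V = rng.standard_normal((k, n))

lhs = np.linalg.inv(A + U @ C @ V)
Ainv = np.linalg.inv(A)
rhs = Ainv - Ainv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U) @ V @ Ainv
print(np.allclose(lhs, rhs))  # True
```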
Some useful approximations

$(1 - \frac{\lambda}{n})^n \approx e^{-\lambda}$ where $\frac{\lambda}{n} < 1$

Binomial approximation: $(1+x)^{\alpha} \approx 1 + \alpha x$ when $\lvert x\rvert <1$ and $\lvert \alpha x\rvert \ll 1$.

Euler related: $\sum\limits_{n=1}^{\infty} n^{-2} = \frac{\pi^2}{6}$
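A quick numeric look at the three approximations above; all constants are chosen only for illustration:

```python
import math

# (1 - lambda/n)^n ≈ e^{-lambda}
lam, n = 2.0, 1000
print((1 - lam / n) ** n, math.exp(-lam))        # ≈ 0.1351 vs 0.1353

# binomial approximation: (1 + x)^alpha ≈ 1 + alpha * x
x, alpha = 0.01, 3.0
print((1 + x) ** alpha, 1 + alpha * x)           # 1.030301 vs 1.03

# Basel problem: the partial sums of n^{-2} approach pi^2 / 6
partial = sum(1 / k ** 2 for k in range(1, 100_001))
print(partial, math.pi ** 2 / 6)                 # ≈ 1.64492 vs 1.64493
```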
Maclaurin series

$\ln(1-x) = -\sum\limits_{n=1}^{\infty} \frac{x^n}{n}$ when $\lvert x\rvert < 1$ or $x=-1$

$\ln(1+x) = \sum\limits_{n=1}^{\infty} (-1)^{n+1} \frac{x^n}{n}$ when $\lvert x\rvert < 1$ or $x=1$
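A quick partial-sum check of the second series at $x = 0.5$, truncating at 50 terms (both choices arbitrary):

```python
import math

x, terms = 0.5, 50
partial = sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))
print(partial, math.log(1 + x))  # both ≈ 0.405465
```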
Vector Calculus
Gradient: Vector in, Scalar out
Let’s denote a function $f:\mathbb{R}^n\mapsto\mathbb{R}$. The gradient of the function is defined as:
$\nabla_x f(x) = \left[\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_n}\right]^T$
It simply says that for a function which takes a vector as input and outputs a scalar, the gradient of the function is a column vector in which each element is the partial derivative of $f$ with respect to a single component of $x$. The $i$th element indicates the rate of change of the function with respect to the $i$th variable, i.e. the $i$th dimension.
For example, $y = x^Tz$ where $x \in \mathbb{R}^n$ should be a good practice to work on.
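Working that example out, treating $z \in \mathbb{R}^n$ as a constant vector:

$$y = x^Tz = \sum_{i=1}^{n} x_i z_i, \qquad \frac{\partial y}{\partial x_i} = z_i, \qquad \nabla_x y = z.$$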
Jacobian Matrix: Vector in, Vector out
For the Jacobian, the case is more complicated. Let’s denote a function $f:\mathbb{R}^n\mapsto\mathbb{R}^m$. The Jacobian of the function is defined as the $m \times n$ matrix:
$J = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \dots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \dots & \frac{\partial f_m}{\partial x_n} \end{bmatrix}, \qquad J_{i,j} = \frac{\partial f_i}{\partial x_j}$
It says that if we have a function which takes a vector as input and outputs a vector, the Jacobian of this function is the matrix described above. The entry $J_{i,j}$ tells us the rate of change of the $i$th output with respect to the $j$th variable.
For example, $f(x,y) = (x^2 + y, y^3)$ should be a good one to try.
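Working that example out:

$$J = \begin{bmatrix} \frac{\partial}{\partial x}(x^2+y) & \frac{\partial}{\partial y}(x^2+y) \\ \frac{\partial}{\partial x}(y^3) & \frac{\partial}{\partial y}(y^3) \end{bmatrix} = \begin{bmatrix} 2x & 1 \\ 0 & 3y^2 \end{bmatrix}.$$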
Generalized Jacobian Matrix: Tensor in, Tensor out
In the previous Jacobian matrix, we assumed the input is a vector and the output is a vector too. However, we can generalize this to tensors to form the generalized Jacobian matrix.
Suppose that $f:\mathbb{R}^{N_1\times \dots\times N_{D_x}}\mapsto\mathbb{R}^{M_1\times \dots\times M_{D_y}}$. It means the input to $f$ is a $D_x$-dimensional tensor of shape $N_1\times \dots\times N_{D_x}$ and the output of $f$ is a $D_y$-dimensional tensor of shape $M_1\times \dots\times M_{D_y}$. Thus, the generalized Jacobian matrix has the shape $(M_1\times \dots\times M_{D_y})\times(N_1\times \dots\times N_{D_x})$.
You can picture this generalized Jacobian matrix as a “2D” matrix where the row dimension is indexed by $M_1\times \dots\times M_{D_y}$ and the column dimension is indexed by $N_1\times \dots\times N_{D_x}$.
Now, given that $i\in\mathbb{Z}^{D_y}$ and $j\in\mathbb{Z}^{D_x}$ are multi-indices into $y$ and $x$ respectively, we can write:
$(J)_{i,j} = \frac{\partial y_i}{\partial x_j}$
So we know that $y_i$ and $x_j$ are both scalars, and thus $\frac{\partial y_i}{\partial x_j}$ is also a scalar. This tells us the relative rates of change between all elements of $x$ and all elements of $y$.
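As a minimal numeric sketch, consider a hypothetical function mapping a $(2, 3)$ tensor to a $(4,)$ tensor; its generalized Jacobian has shape $(4, 2, 3)$ and can be estimated entry by entry with central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))

def f(x):
    # hypothetical map: flatten the (2, 3) input, apply a linear layer, squash
    return np.tanh(W @ x.reshape(-1))

x = rng.standard_normal((2, 3))
eps = 1e-6
J = np.zeros((4, 2, 3))  # output dimensions first, then input dimensions
for idx in np.ndindex(x.shape):
    dx = np.zeros_like(x)
    dx[idx] = eps
    J[(slice(None),) + idx] = (f(x + dx) - f(x - dx)) / (2 * eps)
print(J.shape)  # (4, 2, 3)
```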
Hessian Matrix: Vector in, Scalar out
Whereas the gradient and Jacobian are somewhat like a first-order derivative, the Hessian is somewhat like a second-order partial derivative. Let’s denote a function $f:\mathbb{R}^n\mapsto\mathbb{R}$. The Hessian of the function is defined as the $n \times n$ matrix with entries:
$H_{i,j} = \frac{\partial^2 f}{\partial x_i \partial x_j}$
The entry $H_{i,j}$ indicates the rate of change of the function with respect to the $i$th and $j$th variables, where $i$ and $j$ might be the same.
For example, $f(x,y,z) = x^2 + y^2 + z^2$ could be an example for practice.
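Working that example out, the Hessian is constant:

$$H = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} = 2I.$$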
Negation of Logic Summary
Product of Sum
In general, we have:
It cannot be reversed.