Generally speaking, an object should be regarded as a complete individual: something that is meaningful in reality and cannot easily be split apart.

An **object** is the concrete thing being described. A **table** (or matrix) is a container that holds these objects. In other words, an object is an element of the table, and the table is a set of objects: every object in the table has the same features (dimensions), and each object takes a definite value for each feature.

**Classification** and **clustering** can be viewed as partitions of this matrix space according to the similarities and differences of the objects' features.

**Prediction** and **regression** can be viewed as extrapolating a trend from the correlation of objects along some ordered sequence (such as time).

`import numpy as np`

```
a = np.arange(9).reshape((3, -1))
a
```

```
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
```

`b = a.copy()`

`id(a) == id(b)`

`False`

`repr(a) == repr(b)`

`True`
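The `copy()` call matters: a plain slice of a NumPy array is a *view* that shares memory with the original, while `copy()` allocates new storage. A minimal sketch of the difference (variable names are illustrative):

```python
import numpy as np

a = np.arange(9).reshape((3, -1))

# A slice is a view: it shares memory with `a`.
v = a[:, 0]
v[0] = 100
print(a[0, 0])   # 100 -- modifying the view modified `a`

# copy() allocates new memory, so changes stay independent.
b = a.copy()
b[0, 0] = -1
print(a[0, 0])   # still 100, unaffected by the change to `b`
```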


## Linalg

`A = np.mat([[1, 2, 4, 5, 7], [9, 12, 11, 8, 2], [6, 4, 3, 2, 1], [9, 1, 3, 4, 5], [0, 2, 3, 4, 1]])`

`np.linalg.det(A)`

`-812.00000000000068`

`np.linalg.inv(A)`

```
matrix([[ -7.14285714e-02,  -1.23152709e-02,   5.29556650e-02,   9.60591133e-02,  -8.62068966e-03],
        [  2.14285714e-01,  -3.76847291e-01,   1.22044335e+00,  -4.60591133e-01,   3.36206897e-01],
        [ -2.14285714e-01,   8.25123153e-01,  -2.04802956e+00,   5.64039409e-01,  -9.22413793e-01],
        [  5.11521867e-17,  -4.13793103e-01,   8.79310345e-01,  -1.72413793e-01,   8.10344828e-01],
        [  2.14285714e-01,  -6.65024631e-02,   1.85960591e-01,  -8.12807882e-02,  -1.46551724e-01]])
```
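As a quick sanity check (a small sketch, not part of the original session), the product of `A` and its inverse should be the identity matrix up to floating-point error:

```python
import numpy as np

A = np.array([[1, 2, 4, 5, 7], [9, 12, 11, 8, 2], [6, 4, 3, 2, 1],
              [9, 1, 3, 4, 5], [0, 2, 3, 4, 1]])

# A @ inv(A) should equal the 5x5 identity up to floating-point error.
I = A @ np.linalg.inv(A)
print(np.allclose(I, np.eye(5)))  # True
```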

### transpose

`A.T`

```
matrix([[ 1,  9,  6,  9,  0],
        [ 2, 12,  4,  1,  2],
        [ 4, 11,  3,  3,  3],
        [ 5,  8,  2,  4,  4],
        [ 7,  2,  1,  5,  1]])
```

`A * A.T`

```
matrix([[ 95, 131,  43,  78,  43],
        [131, 414, 153, 168,  91],
        [ 43, 153,  66,  80,  26],
        [ 78, 168,  80, 132,  32],
        [ 43,  91,  26,  32,  30]])
```

### rank

`np.linalg.matrix_rank(A)`

`5`
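Here the rank is full (5) because no row is a linear combination of the others. For contrast, a hypothetical matrix with a linearly dependent row is rank-deficient:

```python
import numpy as np

# Illustrative example: the third row is the sum of the first two,
# so only 2 rows are linearly independent.
B = np.array([[ 1,  2,  4,  5,  7],
              [ 9, 12, 11,  8,  2],
              [10, 14, 15, 13,  9]])  # row 2 = row 0 + row 1
print(np.linalg.matrix_rank(B))  # 2
```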

### solving equations

\[Ax = b\]

```
b = [1, 0, 1, 0, 1]
S = np.linalg.solve(A, b)
S
```

`array([-0.0270936 , 1.77093596, -3.18472906, 1.68965517, 0.25369458])`
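To verify the solution (a quick sketch re-creating `A` and `b` as plain arrays), substitute it back into \(Ax = b\):

```python
import numpy as np

A = np.array([[1, 2, 4, 5, 7], [9, 12, 11, 8, 2], [6, 4, 3, 2, 1],
              [9, 1, 3, 4, 5], [0, 2, 3, 4, 1]])
b = [1, 0, 1, 0, 1]
S = np.linalg.solve(A, b)

# Substituting S back in should reproduce b (up to floating-point error).
print(np.allclose(A @ S, b))  # True
```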

The three cornerstones of modern mathematics:

**Probability theory** tells us what things may be like; **numerical analysis** reveals why they are so and how they come to be so; **linear algebra** tells us that things are never just one way, so that we can observe them from many angles.

# Similarity measures

The most common ones are the **norms** (such as the *Euclidean distance* (\(L_2\)), the *Manhattan distance* (\(L_1\)), and the *Chebyshev distance* (\(L_{\infty}\))) and the **cosine similarity**. Next I would like to highlight some other interesting metrics.

## Hamming distance

Definition: the Hamming distance between two equal-length strings `s1` and `s2` is the minimum number of substitutions needed to turn one of them into the other.

Application: information coding (to improve fault tolerance, the minimum Hamming distance between code words should be as large as possible).

```
A = np.mat([[1, 1, 0, 1, 0, 1, 0, 0, 1], [0, 1, 1, 0, 0, 0, 1, 1, 1]])
smstr = np.nonzero(A[0] - A[1])
```

`A[0] - A[1]`

`matrix([[ 1, 0, -1, 1, 0, 1, -1, -1, 0]])`

`smstr`

```
(array([0, 0, 0, 0, 0, 0], dtype=int64),
 array([0, 2, 3, 5, 6, 7], dtype=int64))
```

```
d = smstr[0].shape[0]
d
```

`6`
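The count of differing positions can also be obtained more directly (a small sketch using the same two vectors; note that scipy's `hamming` metric returns the *fraction* of mismatching positions, not the count):

```python
import numpy as np
import scipy.spatial.distance as dist

s1 = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1])
s2 = np.array([0, 1, 1, 0, 0, 0, 1, 1, 1])

# Direct count of the positions where the vectors differ:
print(np.count_nonzero(s1 != s2))       # 6

# scipy normalizes by the length, giving 6/9:
print(dist.pdist([s1, s2], 'hamming'))  # [0.66666667]
```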

## Jaccard similarity coefficient

\[
J(A, B) = \frac{|\;A \bigcap B\;|}{|\;A \bigcup B\;|}
\]

## Jaccard distance

\[
J_{\delta}(A, B) = 1 - J(A, B) = 1 - \frac{|\;A \bigcap B\;|}{|\;A \bigcup B\;|}
\]

Application: every dimension of samples \(A\) and \(B\) takes the value \(0\) or \(1\), indicating whether a certain element is present or not.

`import scipy.spatial.distance as dist`

`A`

```
matrix([[1, 1, 0, 1, 0, 1, 0, 0, 1],
        [0, 1, 1, 0, 0, 0, 1, 1, 1]])
```

`dist.pdist(A, 'jaccard')`

`array([ 0.75])`
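To see where the \(0.75\) comes from, here is the hand computation (a minimal sketch using the same two binary vectors): for 0/1 vectors, the Jaccard distance is the number of positions where exactly one vector has a 1, divided by the number of positions where at least one does.

```python
import numpy as np

s1 = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1])
s2 = np.array([0, 1, 1, 0, 0, 0, 1, 1, 1])

union = np.count_nonzero(s1 | s2)      # positions where at least one is 1 -> 8
disagree = np.count_nonzero(s1 != s2)  # positions where exactly one is 1 -> 6
print(disagree / union)                # 0.75
```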

# Probability

Sample space: for example, \(10\) apples and \(2\) little pigs.

**Random event**: a subset of the sample space. It can be understood as a class, and it actually points to a probability distribution: e.g. "the apple is red", "the pig is white".

**Random variable**: a variable that points to an event, e.g. \(X\{x_i = \text{yellow}\}\).

**Probability distribution of a random variable**: given the range of values of a random variable, the likelihood of each random event. It can be understood as the possibility that an object whose features fall in a certain range belongs to a certain class or follows a certain trend.
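As an illustrative sketch (reusing the apples-and-pigs sample space above, with hypothetical labels), the event "draw an apple" has probability \(10/12\), which a quick simulation confirms:

```python
import random

# Sample space: 10 apples and 2 little pigs.
space = ['apple'] * 10 + ['pig'] * 2

random.seed(0)  # fixed seed for reproducibility
draws = [random.choice(space) for _ in range(100_000)]

# Empirical frequency of the event "draw an apple" approaches 10/12.
freq = draws.count('apple') / len(draws)
print(freq)
```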

# space transformation

The matrix space formed by the values of the feature columns should be complete, that is, it should reflect the spatial form or transformation rules of the things it describes.

Vector: a quantity with both magnitude and direction.

Multiplying a vector by a matrix is the process of linearly transforming the vector from one linear space (coordinate system), spanned by one basis, into another linear space spanned by a new basis.

Matrix multiplication \(C = A \cdot B\):

- \(A\): the vector group (the coordinates of the vectors)
- \(B\): the matrix of the linear transformation

Suppose we observe a set of objects \(\scr{A} = \{\alpha_1, \cdots, \alpha_m\}\) in two linear spaces of different dimensions, \(V^n\) and \(V^p\), whose bases are \(\{\vec{e_1}, \cdots, \vec{e_n}\}\) and \(\{\vec{d_1}, \cdots, \vec{d_p}\}\) respectively. Let \(T\) denote the linear transformation from \(V^n\) to \(V^p\). Then (for \(k = 1, \cdots, m\)):

\[
\begin{align}
&T
\begin{pmatrix}
\begin{bmatrix}
\vec{e_1} \\ \vdots \\ \vec{e_n}
\end{bmatrix}
\end{pmatrix} =
A
\begin{bmatrix}
\vec{d_1} \\ \vdots \\ \vec{d_p}
\end{bmatrix} \\
&\alpha_k =
\begin{bmatrix}
x_1^{k} & \cdots & x_n^k
\end{bmatrix}
\begin{bmatrix}
\vec{e_1} \\ \vdots \\ \vec{e_n}
\end{bmatrix} \\
&T(\alpha_k) =
\begin{bmatrix}
y_1^{k} & \cdots & y_p^k
\end{bmatrix}
\begin{bmatrix}
\vec{d_1} \\ \vdots \\ \vec{d_p}
\end{bmatrix}
\end{align}
\]

Let

\[
\begin{cases}
&X^k =
\begin{bmatrix}
x_1^{k} & \cdots & x_n^k
\end{bmatrix} \\
&Y^k =
\begin{bmatrix}
y_1^{k} & \cdots & y_p^k
\end{bmatrix}
\end{cases}
\]

and write:

\[
\begin{cases}
&X =
\begin{bmatrix}
X^{1} \\ \vdots \\ X^m
\end{bmatrix} \\
&Y =
\begin{bmatrix}
Y^{1} \\ \vdots \\ Y^m
\end{bmatrix}
\end{cases}
\]

From formula (1), we obtain:

\[
\begin{align}
XA = Y
\end{align}
\]

Thus \(X\) and \(Y\) are the coordinate representations of the same set of objects in the two linear spaces, and \(A\) is the matrix representation of the linear transformation under the pair of bases \((\{\vec{e_1}, \cdots, \vec{e_n}\}, \{\vec{d_1}, \cdots, \vec{d_p}\})\).
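As a concrete (hypothetical) instance of \(XA = Y\): take \(m = 2\) objects in a 2-D space and let \(A\) be a \(90^\circ\) rotation acting on row vectors. Each row of \(Y\) is the corresponding object's coordinates in the new basis:

```python
import numpy as np

# X: coordinates of 2 objects (one per row) in the source basis.
X = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# A: matrix of the linear transformation (90-degree rotation, row-vector convention).
A = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])

Y = X @ A  # coordinates in the target basis
print(Y)   # [[ 0.  1.]
           #  [-1.  0.]]
```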

# Eigenvalues and eigenvectors

\[
A v = \lambda v
\]

```
A = [[8, 7, 6], [3, 5, 7], [4, 9, 1]]
evals, evecs = np.linalg.eig(A)
```

`print('Eigenvalues:\n%s\nFeature vectors:\n%s'%(evals, evecs))`

```
Eigenvalues:
[16.43231925  2.84713925 -5.2794585 ]
Feature vectors:
[[ 0.73717284  0.86836047 -0.09167612]
 [ 0.48286213 -0.4348687  -0.54207062]
 [ 0.47267364 -0.23840995  0.83531726]]
```

With the eigenvalues and eigenvectors, we can reconstruct the original matrix:

\[

A = Q \Sigma Q^{-1}

\]

```
sigma = evals * np.eye(3)
sigma
```

```
array([[ 16.43231925,   0.        ,  -0.        ],
       [  0.        ,   2.84713925,  -0.        ],
       [  0.        ,   0.        ,  -5.2794585 ]])
```

Alternatively, use `np.diag`:

`np.diag(evals)`

```
array([[ 16.43231925,   0.        ,   0.        ],
       [  0.        ,   2.84713925,   0.        ],
       [  0.        ,   0.        ,  -5.2794585 ]])
```

`np.dot(np.dot(evecs, sigma), np.linalg.inv(evecs))`

```
array([[ 8.,  7.,  6.],
       [ 3.,  5.,  7.],
       [ 4.,  9.,  1.]])
```
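Finally, a sanity check (a small sketch re-creating the same `A`) that \(Q \Sigma Q^{-1}\) really reproduces the matrix up to floating-point error:

```python
import numpy as np

A = np.array([[8, 7, 6], [3, 5, 7], [4, 9, 1]])
evals, evecs = np.linalg.eig(A)

# Q * Sigma * Q^{-1} should equal A up to floating-point error.
restored = evecs @ np.diag(evals) @ np.linalg.inv(evecs)
print(np.allclose(restored, A))  # True
```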