- Knowledge points
- 1. ReLU (Rectified Linear Unit) activation function: max(0, z)
- 2. Common supervised learning application scenarios
- 3. Structured data and unstructured data
- 4. The development trend of deep learning
- 5. Supplementary mathematical knowledge points
1. ReLU (Rectified Linear Unit) activation function: max(0, z)
The ReLU activation function is widely used in neural networks. Compared with the sigmoid activation function covered in the machine learning curriculum, it has the following advantages:
1. The sigmoid activation function saturates (its gradient approaches zero for large |z|), so sigmoid is usually not used as a hidden-layer activation function.
2. ReLU converges faster during training.
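The two activation functions above can be sketched in a few lines; this is a minimal illustration, not a production implementation:

```python
import numpy as np

def relu(z):
    """Rectified linear unit: max(0, z), applied element-wise."""
    return np.maximum(0.0, z)

def sigmoid(z):
    """Sigmoid: 1 / (1 + exp(-z)); saturates for large |z|."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))     # [0. 0. 3.]
print(sigmoid(z))  # values squashed into (0, 1)
```

Note that for z = -2 the sigmoid gradient is already small (saturation), while the ReLU gradient is exactly 0 for negative inputs and exactly 1 for positive ones, which keeps gradients from shrinking layer by layer.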
2. Common supervised learning application scenarios
3. Structured data and unstructured data
Structured data can be represented and stored in a relational database as two-dimensional tables. Its general characteristic is that data is organized row by row: each row represents one entity, and every row has the same set of attributes.
Unstructured data has no fixed structure; documents, images, and video/audio all belong to this category. Such data is usually stored as a whole, in a binary format.
Semi-structured data includes mail, HTML, reports, and resource repositories, as found in mail systems, web clusters, teaching resource libraries, data mining systems, file systems, and so on.
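The distinction above can be made concrete with a small sketch (the records and byte values here are made-up examples):

```python
# Structured data: rows with an identical set of attributes,
# as in one table of a relational database.
rows = [
    {"id": 1, "name": "Alice", "age": 30},
    {"id": 2, "name": "Bob",   "age": 25},
]
# Every row shares the same schema (same attribute names).
assert all(row.keys() == rows[0].keys() for row in rows)

# Unstructured data: stored as a whole, in binary form.
# These bytes happen to be the magic prefix of a JPEG file; to a
# database they are just an opaque blob with no row/column structure.
blob = bytes([0xFF, 0xD8, 0xFF])
print(type(blob), len(blob))
```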
4. The development trend of deep learning
Increasing both the scale of the training data and the scale of the neural network can greatly improve the performance of deep learning.
5. Supplementary mathematical knowledge points
Sparsity: the number (or fraction) of zero elements in a vector. Sparsity is closely related to the compressibility of a signal: the more zero entries, the less information needs to be stored.
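A simple way to measure sparsity is the fraction of zero entries in a vector; the helper below is a sketch of that definition:

```python
import numpy as np

def sparsity(v):
    """Fraction of entries that are exactly zero in a vector."""
    v = np.asarray(v)
    return np.count_nonzero(v == 0) / v.size

v = np.array([0.0, 3.0, 0.0, 0.0, 1.5])
print(sparsity(v))  # 0.6  (3 of the 5 entries are zero)
```

A sparse vector can be stored compactly by keeping only the indices and values of its nonzero entries, which is why sparsity and compressibility go hand in hand.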