Article From: https://www.cnblogs.com/Shaylin/p/9957885.html

Naive Bayes algorithm: each tuple is described by its attributes, and it is assigned to the class that is most probable according to the probabilities estimated from the training data. Reference: https://blog.csdn.net/baimafujinji/article/details/50441927.
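A minimal Python sketch of this idea, on a hypothetical toy dataset (not from the article): class priors and per-attribute likelihoods are counted from the training tuples, and a new tuple gets the class with the highest posterior score.

    from collections import Counter, defaultdict

    def train_naive_bayes(rows, labels):
        # rows: list of attribute tuples; labels: class label per row
        n = len(labels)
        prior = {c: cnt / n for c, cnt in Counter(labels).items()}
        cond = defaultdict(int)              # (attr_index, value, class) -> count
        class_count = Counter(labels)
        for row, c in zip(rows, labels):
            for i, v in enumerate(row):
                cond[(i, v, c)] += 1
        def likelihood(i, v, c):
            # add-one smoothing so unseen attribute values do not zero out the product
            return (cond[(i, v, c)] + 1) / (class_count[c] + 2)
        return prior, likelihood

    def classify(x, prior, likelihood):
        # pick the class maximizing P(c) * prod_i P(x_i | c)
        best, best_score = None, -1.0
        for c, p in prior.items():
            score = p
            for i, v in enumerate(x):
                score *= likelihood(i, v, c)
            if score > best_score:
                best, best_score = c, score
        return best

    # toy usage (made-up data): attributes are (outlook, windy), label is play yes/no
    rows = [("sunny", "no"), ("rain", "yes"), ("sunny", "yes"), ("overcast", "no")]
    labels = ["yes", "no", "no", "yes"]
    prior, likelihood = train_naive_bayes(rows, labels)
    print(classify(("sunny", "no"), prior, likelihood))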

Hidden Markov Model: the formulas are long, but two quantities are the key: (1) the transition probability p(qt | qt-1), the probability of the next hidden state given the current one; (2) the emission probability p(yt | qt), the probability of observing yt given the hidden state qt. HMMs are used mostly in speech recognition.
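A small sketch of how these two quantities are used together, with made-up weather/umbrella parameters for illustration: the forward algorithm combines the transition probabilities p(qt | qt-1) and the emission probabilities p(yt | qt) to compute the probability of an observation sequence.

    def forward(observations, states, start, trans, emit):
        # alpha[q] = probability of the observations so far, ending in hidden state q
        alpha = {q: start[q] * emit[q][observations[0]] for q in states}
        for y in observations[1:]:
            alpha = {
                q: sum(alpha[p] * trans[p][q] for p in states) * emit[q][y]
                for q in states
            }
        return sum(alpha.values())

    # hypothetical parameters: hidden weather states, observed umbrella usage
    states = ["rainy", "sunny"]
    start = {"rainy": 0.6, "sunny": 0.4}
    trans = {"rainy": {"rainy": 0.7, "sunny": 0.3},        # p(qt | qt-1)
             "sunny": {"rainy": 0.4, "sunny": 0.6}}
    emit = {"rainy": {"umbrella": 0.9, "no_umbrella": 0.1},  # p(yt | qt)
            "sunny": {"umbrella": 0.2, "no_umbrella": 0.8}}

    print(forward(["umbrella", "umbrella", "no_umbrella"], states, start, trans, emit))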

Decision tree: also used for classification. Compared with the Bayes algorithm, its advantage is that building the tree requires no domain knowledge or parameter settings, so in practice it is well suited to exploratory knowledge discovery. Two classic algorithms are ID3 and C4.5. ID3 uses information gain: at each step it splits on the attribute with the maximum information gain. Both splitting criteria are sketched in the code after the C4.5 note below.

 

 

C4.5 splits on the attribute with the maximum gain ratio.
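A sketch of both criteria on a hypothetical toy dataset: information gain (ID3) is the drop in label entropy after splitting on an attribute, and gain ratio (C4.5) divides that gain by the attribute's split information, which penalizes attributes with many distinct values.

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(rows, labels, attr_index):
        # ID3: entropy of the labels minus the weighted entropy after the split
        groups = {}
        for row, y in zip(rows, labels):
            groups.setdefault(row[attr_index], []).append(y)
        remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
        return entropy(labels) - remainder

    def gain_ratio(rows, labels, attr_index):
        # C4.5: normalize the gain by the split information of the attribute
        n = len(rows)
        counts = Counter(row[attr_index] for row in rows)
        split_info = -sum((c / n) * math.log2(c / n) for c in counts.values())
        return information_gain(rows, labels, attr_index) / split_info if split_info > 0 else 0.0

    # toy usage (made-up data): attributes are (outlook, humidity), label is play yes/no
    rows = [("sunny", "high"), ("sunny", "low"), ("rain", "high"), ("rain", "low")]
    labels = ["no", "yes", "yes", "yes"]
    for i in range(2):
        print(i, information_gain(rows, labels, i), gain_ratio(rows, labels, i))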

 

