next word predictor #23
Comments
link 1: https://towardsdatascience.com/exploring-the-next-word-predictor-5e22aeb85d8f
link 2: https://www.youtube.com/watch?v=35tu6XnRkH0
Between the LSTM and n-gram approaches, RNN-LSTM is the better choice because it is a more advanced approach that uses a neural language model. Standard RNNs and other language models become less accurate as the gap between the context and the word to be predicted grows, but LSTM can tackle this long-term dependency problem because it has memory cells that remember the previous context.
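To make the "memory cells" point concrete, here is a minimal sketch of a single LSTM cell forward step in NumPy. All dimensions, weights, and names here are illustrative assumptions, not taken from the linked articles; the point is only that the cell state `c` is carried across time steps, which is what lets an LSTM retain long-range context.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W has shape (4*H, D+H), b has shape (4*H,)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[:H])        # forget gate: how much old memory to keep
    i = sigmoid(z[H:2 * H])   # input gate: how much new info to write
    o = sigmoid(z[2 * H:3 * H])  # output gate: how much memory to expose
    g = np.tanh(z[3 * H:])    # candidate memory content
    c = f * c_prev + i * g    # memory cell carries long-range context
    h = o * np.tanh(c)        # hidden state used for the next-word prediction
    return h, c

# Toy dimensions (illustrative only): input size D, hidden size H.
rng = np.random.default_rng(0)
D, H = 3, 4
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)

h = np.zeros(H)
c = np.zeros(H)
for t in range(5):  # run a short sequence of random inputs
    x = rng.standard_normal(D)
    h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)
```

In a real predictor, `h` would be fed into a softmax layer over the vocabulary; in practice one would use a framework such as Keras rather than hand-rolling the cell.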
A next word predictor is an application of Natural Language Processing (NLP), in which NLP techniques and Recurrent Neural Networks (RNNs) are used to predict the next word in a sentence. Candidate approaches include n-grams, Kneser-Ney smoothing, k-Nearest Neighbours, RNN-LSTM, and RNN-GRU.
After studying these models, RNN-LSTM and RNN-GRU were the best ones to implement, since they require less code and give higher accuracy. Between RNN-LSTM and RNN-GRU, RNN-LSTM is the better of the two. This is due to the following:
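For comparison with the neural models above, the n-gram approach mentioned in the comment can be sketched in a few lines of plain Python. This is a minimal bigram model with no smoothing; the corpus and function names are illustrative assumptions for the sketch, not part of the issue.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word pairs: model[w] maps each word to a Counter of its successors."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def predict_next(model, word):
    """Return the most frequent successor of `word`, or None if the word is unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

This also illustrates the weakness discussed above: a bigram model conditions only on the single previous word, so it cannot use context further back, which is exactly the long-term dependency problem LSTM memory cells address.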
Add the links of the resources you found in the comments, and also cover these points:
At the end, also conclude which approach is better.