Modified LSTM Model (2 - Layers) with Dropout Layer to Improve Sequence Processing #98
Conversation
Ensure the PR matches the requirements mentioned in the Contribution guide. The maintainer might get in touch to ensure quality. Thanks for your time.
Looks good to me. @rohitinu6, please check the technical details.
@jvedsaqib @rohitinu6 please review it
@rohitinu6 is there any issue? I didn't get it.
@deepanshubaghel
You are deleting the notebook and creating a new one. Does it contain all the necessary code?
…iction(Updated).ipynb
I have added a 2-layer LSTM model (model10) to enhance sequence processing in the project. The model includes the following layers:
LSTM Layer with 300 units and return_sequences=True.
Dropout Layer with a rate of 0.4 to prevent overfitting.
Second LSTM Layer with 160 units and return_sequences=False for the final sequence output.
Dense Layer with 50 units.
Final Dense Layer with 1 unit for output.
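For reference, the layer stack described above can be sketched in Keras as follows. This is a minimal sketch, not the PR's actual notebook code: the function name `build_model10`, the input shape (`timesteps`, `features`), and the optimizer/loss choices are assumptions; only the layer types, unit counts, dropout rate, and `return_sequences` settings come from the description.

```python
# Sketch of the 2-layer LSTM (model10) per the PR description.
# Input shape, optimizer, and loss are placeholder assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

def build_model10(timesteps=60, features=1):
    """Build the 2-layer LSTM with dropout. timesteps/features are
    hypothetical defaults, not taken from the PR."""
    model = Sequential([
        # First LSTM layer: 300 units, returns the full sequence
        LSTM(300, return_sequences=True, input_shape=(timesteps, features)),
        # Dropout at rate 0.4 to reduce overfitting
        Dropout(0.4),
        # Second LSTM layer: 160 units, returns only the final output
        LSTM(160, return_sequences=False),
        # Dense head: 50 units, then a single output unit
        Dense(50),
        Dense(1),
    ])
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model

model = build_model10()
model.summary()
```

Returning the full sequence from the first LSTM is what lets the second LSTM process every timestep rather than only the last hidden state.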
closes #91
@rohitinu6 or @jvedsaqib, can you please check the file and merge it?