After running the following code, I get `ModuleNotFoundError: No module named 'token_model'`:

```python
from keras_text.processing import WordTokenizer
from keras_text.data import Dataset
from keras_text.models import TokenModelFactory, YoonKimCNN

with open('tweets1k.txt', 'r') as infile:
    tweets = infile.readlines()

tokenizer = WordTokenizer()
tokenizer.build_vocab(tweets)

# `emojis` holds the corresponding labels (defined elsewhere)
ds = Dataset(tweets, emojis, tokenizer=tokenizer)
ds.update_test_indices(test_size=0.2)
ds.save('dataset')

factory = TokenModelFactory(1, tokenizer.token_index, max_tokens=100,
                            embedding_type='glove.6B.100d')
word_encoder_model = YoonKimCNN()
model = factory.build_model(token_encoder_model=word_encoder_model)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
```

How can I solve this?
A quick workaround is to edit `keras_text/models/__init__.py` to look like this:

```python
from .token_model import TokenModelFactory
from .sentence_model import SentenceModelFactory
from .sequence_encoders import *
```

Notice the period in front of each module name; it makes the import relative.
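A minimal sketch of why the dot matters, using a throwaway package named `pkg` rather than keras_text itself: in Python 3, an import without the leading dot inside a package is treated as absolute and raises exactly this `ModuleNotFoundError`.

```python
import os
import sys
import tempfile

# Build a tiny package on the fly: pkg/helper.py defines VALUE,
# and pkg/__init__.py tries to import it without the leading dot.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "pkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "helper.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from helper import VALUE\n")  # no dot: absolute import, fails on Python 3

sys.path.insert(0, root)
try:
    import pkg
except ModuleNotFoundError as exc:
    print("without the dot:", exc)

# Add the dot (explicit relative import) and the package imports cleanly.
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from .helper import VALUE\n")
import pkg
print("with the dot, pkg.VALUE =", pkg.VALUE)
```

This is the same fix as the `__init__.py` edit above: Python 2 allowed implicit relative imports, Python 3 requires the explicit dot.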
I'm unable to locate `__init__.py` on my system! Any idea how to find the path? I'm on macOS.
He means edit `keras_text/models/__init__.py`; look for that file ;)
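If you're not sure where pip installed the package, Python itself can report a module's file location. A sketch using the stdlib's `json` package as a stand-in; swap in `"keras_text.models"` to find the actual file to edit:

```python
import importlib.util

# find_spec reports where a package is loaded from without importing it;
# for a package, spec.origin is the path of its __init__.py.
spec = importlib.util.find_spec("json")
print(spec.origin)  # ends in json/__init__.py
```

The printed path is the file to open; for `keras_text.models` it will be the `models/__init__.py` mentioned above.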