This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

What if my sentence data is very large, like 20 GB? How can I use InferSent? #114

Open
YanLiang1102 opened this issue Mar 11, 2019 · 1 comment

Comments

@YanLiang1102

No description provided.

@YanLiang1102
Author

What happens during the build step and the encode step? If I skip build and call encode directly, all of my embeddings come out the same. Is there an iterative way to feed my sentences into build so the model can take them in batches? Thank you so much for the help.
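For reference, here is a minimal sketch of one way to handle a corpus too large to fit in memory, assuming the standard InferSent API from this repository (InferSent, set_w2v_path, build_vocab_k_words, encode). The file paths, chunk size, and model parameters below are placeholders, not values from this thread: the idea is to build the vocabulary once from the top-K words of the pretrained embedding file (so the 20 GB corpus never has to be scanned just to build the vocab) and then stream the corpus through encode in chunks.

```python
# Hypothetical sketch: stream a large corpus through InferSent in chunks.
# Paths, chunk sizes, and parameter values are illustrative assumptions.
import numpy as np
import torch
from models import InferSent  # models.py from the InferSent repository

params = {'bsize': 64, 'word_emb_dim': 300, 'enc_lstm_dim': 2048,
          'pool_type': 'max', 'dpout_model': 0.0, 'version': 2}
model = InferSent(params)
model.load_state_dict(torch.load('encoder/infersent2.pkl'))
model.set_w2v_path('fastText/crawl-300d-2M.vec')

# Build the vocabulary from the K most frequent words of the embedding file
# instead of passing the whole 20 GB corpus to build_vocab.
model.build_vocab_k_words(K=100000)

def encode_in_chunks(path, chunk_size=10000):
    """Yield embeddings for the sentences in `path`, chunk_size at a time."""
    buffer = []
    with open(path, encoding='utf-8') as f:
        for line in f:
            buffer.append(line.strip())
            if len(buffer) == chunk_size:
                yield model.encode(buffer, bsize=128, tokenize=True, verbose=False)
                buffer = []
    if buffer:
        yield model.encode(buffer, bsize=128, tokenize=True, verbose=False)

# Write each chunk's embeddings to disk rather than keeping them all in RAM.
for i, emb in enumerate(encode_in_chunks('my_corpus.txt')):
    np.save(f'embeddings_{i:05d}.npy', emb)
```

If some corpus words are missing from the top-K vocabulary, the repository also provides update_vocab, which can add the words of a new batch of sentences incrementally rather than rebuilding the vocabulary from the full corpus at once.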
