High-level approach: A module that creates sentence embeddings for every book. This could enable semantic search, clustering, recommendations, anomaly detection, diversity measurement, and classification using distance functions, and could be a first step toward a “talk to books” or “talk to library” feature.
Disadvantage: Distance functions operate in the high-dimensional space of embeddings and can be computationally expensive, especially for large-scale book datasets.
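To make the idea concrete, here is a minimal sketch of embedding-based semantic search over a catalog. The toy 3-dimensional vectors stand in for real sentence embeddings (which would typically have hundreds of dimensions and come from a model such as one in the sentence-transformers library); titles and values are illustrative only:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, book_vecs, top_k=2):
    """Rank books by cosine similarity of their embedding to the query embedding."""
    scored = [(title, cosine_similarity(query_vec, vec))
              for title, vec in book_vecs.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

# Hypothetical toy embeddings; real ones would be far higher-dimensional.
books = {
    "Sea Voyages": [0.9, 0.1, 0.0],
    "Desert Botany": [0.1, 0.9, 0.1],
    "Ocean Currents": [0.8, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]  # embedding of a query like "books about the ocean"
print(semantic_search(query, books))
```

Note that this scores every book against the query, which is exactly the cost concern above: for a large catalog, a brute-force scan over all embeddings is expensive.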
@finnless Great idea! Sentence embeddings can unlock powerful features. I was just talking about this during the IA event with @mekarpeles and team — something like a "talk to book/library" feature where I could directly ask it for personalised recommendations. This module (if implemented) could definitely pave the way for it.
Regarding the computational cost, were you hinting at using RAG (Retrieval-Augmented Generation)? It could help by retrieving relevant subsets, reducing the need for expensive distance calculations across the full dataset. This way, we balance scalability with performance. Let me know if this aligns with your thinking!