There are 697 informal names in Esmond’s dataset where the LLM gave a sensible output (not blank or a “No specific drug name” response). Of these, 63 (9%) are exact matches to a concept in the RxNorm vocabulary. Of the rest, in 208 cases a vector search gives exactly the same answer as GPT-3. Since the vector search has a far lower computational cost and can successfully answer at least 39% of queries, it's worth integrating into the pipeline. A little further effort might improve that fraction.
My experiment with this used roughly this code:
Then, using `embeddings.search()`, you can fetch the closest n embeddings. We need to
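Purely as an illustration of the vector-search step (not the actual experiment code, which isn't shown above), here is a minimal nearest-neighbour sketch. The vectors and the `search` helper are hypothetical stand-ins; in practice the embeddings would come from a model run over the RxNorm concept names, and a library such as an embeddings index would replace the brute-force loop.

```python
# Hypothetical sketch: cosine-similarity lookup over precomputed name
# embeddings. The 3-d toy vectors below stand in for real model output.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(index, query_vec, n=1):
    """Return the n concept names whose vectors are closest to query_vec."""
    scored = sorted(index.items(),
                    key=lambda kv: cosine(kv[1], query_vec),
                    reverse=True)
    return [name for name, _ in scored[:n]]

# Toy index: RxNorm concept name -> embedding vector (illustrative only).
index = {
    "paracetamol": [0.9, 0.1, 0.0],
    "ibuprofen":   [0.1, 0.9, 0.0],
    "aspirin":     [0.0, 0.2, 0.9],
}

# A query vector for an informal name would ideally land closest to the
# matching concept.
print(search(index, [0.8, 0.2, 0.1], n=1))  # -> ['paracetamol']
```

A real index would hold one vector per RxNorm concept name and answer each informal-name query with its top match, which is the comparison against the GPT-3 outputs described above.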