# Medical Evaluation Sphere


## Medical QA Benchmark 🤗

```python
from datasets import load_dataset

ds = load_dataset("lavita/medical-eval-sphere")

# load the benchmark into a data frame
df = ds['medical_qa_benchmark_v1.0'].to_pandas()
```
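Once converted with `to_pandas()`, the benchmark is an ordinary pandas DataFrame and can be inspected with the usual tools. A minimal sketch follows; the stand-in frame and its column names (`question`, `answer`) are illustrative placeholders, not the benchmark's actual schema:

```python
import pandas as pd

# Stand-in frame; on the real benchmark you would instead use:
#   df = ds['medical_qa_benchmark_v1.0'].to_pandas()
df = pd.DataFrame({
    "question": ["What causes anemia?", "How is hypertension treated?"],
    "answer": ["Iron deficiency is one common cause.",
               "Often with lifestyle changes and medication."],
})

# Basic inspection of size and schema
print(df.shape)          # (2, 2)
print(list(df.columns))  # ['question', 'answer']
print(df.head())
```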

## Notebooks Overview

The repository includes several Jupyter notebooks, located in the `notebooks` folder, that demonstrate the analyses and preprocessing steps.

## Set up API Keys

Create a .env file in the root directory of the project and add the following lines:

```
ANTHROPIC_API_KEY=your_api_key_here
OPENAI_API_KEY=your_api_key_here
LABELBOX_API_KEY=your_api_key_here
```
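A `.env` file like the one above is typically read with the `python-dotenv` package (`load_dotenv()`), but the format is simple enough to parse with the standard library. The sketch below is an illustrative stand-in, not the project's own loader; the `load_env` helper is a name introduced here:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=value lines from a .env file and export them.

    Blank lines, comments, and malformed lines are skipped; existing
    environment variables are not overwritten.
    """
    values = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        values[key] = value
        os.environ.setdefault(key, value)
    return values
```

In practice, `from dotenv import load_dotenv; load_dotenv()` at the top of a notebook achieves the same effect, after which keys are available via `os.environ["OPENAI_API_KEY"]`.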

## Citation

```bibtex
@article{hosseini2024benchmark,
  title={A Benchmark for Long-Form Medical Question Answering},
  author={Hosseini, Pedram and Sin, Jessica M and Ren, Bing and Thomas, Bryceton G and Nouri, Elnaz and Farahanchi, Ali and Hassanpour, Saeed},
  journal={arXiv preprint arXiv:2411.09834},
  year={2024}
}
```