diff --git a/docs/explore.html b/docs/explore.html
index d7b2fde..cf8c210 100644
--- a/docs/explore.html
+++ b/docs/explore.html
@@ -32,7 +32,8 @@
 SecEval is the first benchmark specifically created for evaluating cybersecurity knowledge in Foundation Models. It offers over 2000 multiple-choice questions across 9 domains: Software Security, Application Security, System Security, Web Security, Cryptography, Memory Safety, Network Security, and PenTest.
-These questions are generated by prompting OpenAI GPT4 with authoritative sources such as open-licensed textbooks, official documentation, and industry guidelines and standards. The generation process is meticulously crafted to ensure the dataset meets rigorous quality, diversity, and impartiality criteria. Explore our dataset and detailed methodology in our [research paper](paper_placeholder.html). You can explore samples by visiting [explore](explore.html).
+These questions are generated by prompting OpenAI GPT-4 with authoritative sources such as open-licensed textbooks, official documentation, and industry guidelines and standards. The generation process is meticulously crafted to ensure the dataset meets rigorous quality, diversity, and impartiality criteria. You can find our dataset and detailed methodology in our [research paper](paper_placeholder.html), or browse samples on the [explore](explore.html) page.
 Explore our dataset samples by visiting Explore, or consult our paper for more details.
diff --git a/docs/leaderboard.html b/docs/leaderboard.html
index b0b5afb..79066b8 100644
--- a/docs/leaderboard.html
+++ b/docs/leaderboard.html
@@ -26,7 +26,8 @@
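For reviewers unfamiliar with the generation step described in the edited paragraph, a minimal sketch of that kind of pipeline is below. It assumes the OpenAI chat completions API; the prompt wording, `SOURCE_EXCERPT`, and `make_question` are illustrative assumptions for this sketch, not SecEval's actual implementation.

```python
# Hypothetical sketch of prompting GPT-4 with an authoritative source
# excerpt to generate one multiple-choice question. The prompt text and
# helper names are illustrative, not SecEval's real pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for an excerpt from an open-licensed textbook or official doc.
SOURCE_EXCERPT = (
    "A stack buffer overflow occurs when a program writes more data to a "
    "stack-allocated buffer than it can hold, corrupting adjacent memory."
)

def make_question(excerpt: str) -> str:
    """Ask GPT-4 to turn a source excerpt into a multiple-choice question."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You write rigorous, impartial multiple-choice "
                           "questions grounded strictly in the given source.",
            },
            {
                "role": "user",
                "content": f"Source:\n{excerpt}\n\n"
                           "Write one four-option multiple-choice question "
                           "with exactly one correct answer, and mark the "
                           "correct answer.",
            },
        ],
    )
    return response.choices[0].message.content

print(make_question(SOURCE_EXCERPT))
```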