Finetuning CodeLlama on the Weights & Biases API

>> W&B Companion Report <<

This repo contains the code to fine-tune a CodeLlama model on an instruction dataset gathered by wandbot.

  • Uses the Hugging Face integration with Amazon SageMaker (see the sketch after this list)
  • Formats the dataset for instruction tuning
  • Evaluates the model on freshly gathered data

More info on the HF tools used: transformers, datasets.
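A minimal sketch of what launching the fine-tuning job through the SageMaker Hugging Face estimator can look like. The script name, source directory, instance type, framework versions, base model id, and S3 path are illustrative assumptions, not the repo's actual configuration.

```python
# Sketch: launching a fine-tuning job with the SageMaker Hugging Face
# estimator. Script name, instance type, framework versions, and paths
# are illustrative assumptions, not the repo's actual configuration.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

estimator = HuggingFace(
    entry_point="train.py",            # hypothetical training script
    source_dir="./scripts",            # hypothetical source directory
    instance_type="ml.g5.2xlarge",     # GPU instance; adjust to your quota
    instance_count=1,
    role=role,
    transformers_version="4.28",       # assumed versions; check the
    pytorch_version="2.0",             # supported SageMaker DLC matrix
    py_version="py310",
    hyperparameters={
        "model_id": "codellama/CodeLlama-7b-hf",  # assumed base model
        "epochs": 3,
        "lr": 2e-5,
    },
)

# Kick off training; the channel name and S3 path are placeholders.
estimator.fit({"training": "s3://<your-bucket>/wandbot-instructions/"})
```

The estimator builds and runs the training container on the managed instance, so the heavy lifting happens on AWS rather than on your local machine.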

Training

Requirements

Before we can start, make sure you have met the following requirements:

  • An AWS account with sufficient service quota
  • AWS CLI installed
  • An AWS IAM user configured in the CLI with permission to create and manage EC2 instances (see the check below)
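Once the CLI is configured, a quick sanity check from Python can confirm that credentials and region are picked up correctly. This uses boto3, which is not part of the repo's stated requirements and is only an assumption here.

```python
# Sanity check that AWS credentials are configured before launching jobs.
# boto3 usage here is an illustrative assumption, not part of the repo.
import boto3

sts = boto3.client("sts")
identity = sts.get_caller_identity()
print("Account:", identity["Account"])
print("IAM ARN:", identity["Arn"])
print("Region: ", boto3.session.Session().region_name)
```

If this prints your account id and the expected region, the IAM user and CLI profile are wired up correctly.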