diff --git a/Hugging Face/Hugging_Face_Few_Shot_Learning_with_Inference_API.ipynb b/Hugging Face/Hugging_Face_Few_Shot_Learning_with_Inference_API.ipynb
index 8b2f32ac46..a109756793 100644
--- a/Hugging Face/Hugging_Face_Few_Shot_Learning_with_Inference_API.ipynb
+++ b/Hugging Face/Hugging_Face_Few_Shot_Learning_with_Inference_API.ipynb
@@ -168,7 +168,27 @@
"- Now, create a new access token with name: `GPT_INFERENCE` and role: `read`\n",
"- Copy the generated token and paste it below\n",
"\n",
- "We will use gpt-neo-1.3B model for our demonstration. "
+ "We use GPT based models since they excel in few-shot learning due to their ability to generate coherent and contextually relevant responses based on limited examples, capturing relationships in data more effectively than many other large language models.\n",
+ "In this demonstration, we will utilize the gpt-neo-1.3B model; additional GPT-based models can be explored here. Developed by EleutherAI, GPT-Neo is a series of transformer-based language models built on the GPT architecture. EleutherAI aims to create a model of GPT-3's scale and provide open access."
]
},
{
@@ -386,7 +387,7 @@
"source": [
"### Few-shot learning with custom dataset\n",
"\n",
- "You can also use any custom dataset and generate prompts like above. For example, below we will use twitter-sentiment-analysis. More datasets in huggingface can be found here"
+ "You can also use any custom dataset and generate prompts like above. For example, below we will use twitter-sentiment-analysis. More datasets in huggingface can be found here."
]
},
{