
Commit: LLAMA3 doc update

prabod committed Sep 3, 2024
1 parent 546a1bd commit 1f222af
Showing 2 changed files with 13 additions and 12 deletions.
6 changes: 3 additions & 3 deletions python/sparknlp/annotator/seq2seq/llama3_transformer.py
@@ -20,13 +20,13 @@ class LLAMA3Transformer(AnnotatorModel, HasBatchedAnnotate, HasEngine):
"""Llama 3: Cutting-Edge Foundation and Fine-Tuned Chat Models
The Llama 3 release introduces a new family of pretrained and fine-tuned LLMs, ranging in scale
from 1B to 70B parameters (1B, 3B, 7B, 13B, 34B, 70B). Llama 3 models are designed with enhanced
from 8B and 70B parameters. Llama 3 models are designed with enhanced
efficiency, performance, and safety, making them more capable than previous versions. These models
are trained on a more diverse and expansive dataset, offering improved contextual understanding
and generation quality.
The fine-tuned models, known as Llama 3-Chat, are optimized for dialogue applications using an advanced
version of Reinforcement Learning from Human Feedback (RLHF). Llama 3-Chat models demonstrate superior
The fine-tuned models, known as Llama 3-instruct, are optimized for dialogue applications using an advanced
version of Reinforcement Learning from Human Feedback (RLHF). Llama 3-instruct models demonstrate superior
performance across multiple benchmarks, outperforming Llama 2 and even matching or exceeding the capabilities
of some closed-source models.
@@ -47,16 +47,17 @@ import org.json4s.jackson.JsonMethods._

/** Llama 3: Cutting-Edge Foundation and Fine-Tuned Chat Models
*
- * The Llama 3 release introduces a new family of large language models, ranging from 1B to 70B
- * parameters (1B, 3B, 7B, 13B, 34B, 70B). Llama 3 models are designed with a greater emphasis on
- * efficiency, performance, and safety, achieving remarkable advancements in training and
- * deployment processes. These models are trained on a diversified dataset that significantly
- * enhances their capability to generate more accurate and contextually relevant outputs.
+ * The Llama 3 release introduces a new family of large language models, ranging from 8B to 70B
+ * parameters. Llama 3 models are designed with a greater emphasis on efficiency, performance,
+ * and safety, achieving remarkable advancements in training and deployment processes. These
+ * models are trained on a diversified dataset that significantly enhances their capability to
+ * generate more accurate and contextually relevant outputs.
*
- * The fine-tuned variants, known as Llama 3-Chat, are specifically optimized for dialogue-based
- * applications, making use of Reinforcement Learning from Human Feedback (RLHF) with an advanced
- * reward model. Llama 3-Chat models demonstrate state-of-the-art performance across multiple
- * benchmarks and surpass the capabilities of Llama 2, particularly in conversational settings.
+ * The fine-tuned variants, known as Llama 3-Instruct, are specifically optimized for
+ * dialogue-based applications, making use of Reinforcement Learning from Human Feedback (RLHF)
+ * with an advanced reward model. Llama 3-Instruct models demonstrate state-of-the-art
+ * performance across multiple benchmarks and surpass the capabilities of Llama 2, particularly
+ * in conversational settings.
*
* Pretrained models can be loaded with `pretrained` of the companion object:
* {{{
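The Scaladoc usage example is collapsed in this diff view. As a rough Python-side sketch of how the annotator is typically wired into a Spark NLP pipeline (the default pretrained model and the parameter values below are assumptions patterned after Spark NLP's other seq2seq annotators, not taken from this commit):

```python
def build_llama3_generation_pipeline(max_output_length=60):
    """Sketch: assemble a text-generation pipeline around LLAMA3Transformer.

    Imports are kept local so the sketch can be defined without Spark NLP
    installed; nothing heavy runs until the function is called.
    """
    from pyspark.ml import Pipeline
    from sparknlp.base import DocumentAssembler
    from sparknlp.annotator import LLAMA3Transformer

    # Raw text -> DOCUMENT annotations expected by the transformer.
    document_assembler = (
        DocumentAssembler()
        .setInputCol("text")
        .setOutputCol("documents")
    )

    # pretrained() with no arguments loads the annotator's default model;
    # the first call downloads it from the Spark NLP Models Hub.
    llama3 = (
        LLAMA3Transformer.pretrained()
        .setInputCols(["documents"])
        .setOutputCol("generation")
        .setMaxOutputLength(max_output_length)
    )

    return Pipeline().setStages([document_assembler, llama3])
```

Calling `pipeline.fit(df).transform(df)` on a DataFrame with a `text` column would then produce a `generation` column; this needs a running Spark session (`sparknlp.start()`) and a model download, so it is not executed here.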
