Ads Dawson edited this page Jul 7, 2023 · 20 revisions

A centralized glossary of terms and their meaning, to clarify the language used throughout the top 10.

  • LLM - Large language model. A type of artificial intelligence (AI) that is trained on a massive dataset of text and code. LLMs use natural language processing to interpret requests and generate responses.
  • NLP (Natural Language Processing) - The branch of computer science focused on enabling computers to understand, interpret, and generate human language.
  • Transformer - A type of neural network architecture that is commonly used to train LLMs. Transformers are able to learn long-range dependencies between words, which makes them well-suited for natural language processing tasks.
  • Self-supervised learning - A type of machine learning in which the model is trained to learn from unlabeled data. In the case of LLMs, self-supervised learning is often used to train the model to predict the next word in a sequence.
  • Fine-tuning - A process of further training a model that has already been trained on a large dataset. Fine-tuning is often used to improve the performance of a model on a specific task.
  • Transfer learning - A process of using a model that has been trained on one task to improve the performance of a model on a different task. Transfer learning is often used to save time and resources when training new models.
  • Inference - The process of using a trained model to generate predictions or responses, usually as an API or web service.
  • Hallucinate - In the context of LLMs, to hallucinate is to generate text that is not grounded in the model's input or training data. This can happen for a variety of reasons. When an LLM hallucinates, the output may be factually incorrect, nonsensical, offensive, or even dangerous.
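Several of the terms above (self-supervised learning, fine-tuning, inference) describe stages of one pipeline: learn next-word statistics from raw text, optionally continue training on domain text, then query the trained model. The toy sketch below illustrates that pipeline with a simple bigram counter in place of a real transformer; the corpus strings, function names, and the domain-weighting factor are all hypothetical choices for illustration, not part of any real LLM training stack.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str, model=None, weight: int = 1):
    """Self-supervised training: the 'labels' are just the next words
    already present in the unlabeled text, so no manual annotation is needed.
    Passing an existing model continues training (a stand-in for fine-tuning);
    `weight` lets domain data count more heavily (an illustrative choice)."""
    model = model if model is not None else defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += weight
    return model

def predict_next(model, word: str):
    """Inference: use the trained model to predict the most likely next word."""
    followers = model.get(word.lower())
    if not followers:
        return None  # word never seen during training
    return followers.most_common(1)[0][0]

# Pretraining on a (tiny) general corpus.
model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" (its most frequent follower)

# Fine-tuning: continue training on domain-specific text, weighted higher,
# which shifts the model's predictions toward the new domain.
model = train_bigram("the server logs the request", model=model, weight=3)
print(predict_next(model, "the"))  # now "server"
```

A real LLM replaces the bigram counts with a transformer trained by gradient descent, but the shape of the workflow (pretrain on unlabeled text, fine-tune on task data, then serve predictions at inference time) is the same.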