From f9fbef40f6fa98748f55c9b7832efc9b80a33a76 Mon Sep 17 00:00:00 2001
From: Mark Mc Naught
Date: Wed, 27 Mar 2024 00:01:53 +0200
Subject: [PATCH] docs | standards on proper names (#997)

* fix: italise TruLens-Eval ref

* fix: italise TruLens-Eval ref in root scripts.

* docs: add contribution instructions for proper names with mod to inverted commas.

* Update standards.md

Markdown lint prefers _ to * for emphasis.

---------

Co-authored-by: Josh Reini <60949774+joshreini1@users.noreply.github.com>
Co-authored-by: Piotr Mardziel
---
 README.md                                   |  2 +-
 docs/trulens_eval/contributing/standards.md | 10 +++++-----
 docs/trulens_eval/gh_top_intro.md           |  2 +-
 docs/trulens_eval/intro.md                  |  2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index f7d15d073..1abcfef2a 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ community](https://communityinviter.com/apps/aiqualityforum/josh)!
 
 **Don't just vibe-check your llm app!** Systematically evaluate and track your
 LLM experiments with TruLens. As you develop your app including prompts, models,
-retreivers, knowledge sources and more, TruLens-Eval is the tool you need to
+retreivers, knowledge sources and more, *TruLens-Eval* is the tool you need to
 understand its performance.
 
 Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help
diff --git a/docs/trulens_eval/contributing/standards.md b/docs/trulens_eval/contributing/standards.md
index 190c83097..e010fe5ec 100644
--- a/docs/trulens_eval/contributing/standards.md
+++ b/docs/trulens_eval/contributing/standards.md
@@ -5,15 +5,15 @@ Enumerations of standards for code and its documentation to be maintained in
 
 ## Proper Names
 
-Styling/formatting of proper names.
+Styling/formatting of proper names in italics.
 
-- "TruLens"
+- _TruLens_
 
-- "LangChain"
+- _LangChain_
 
-- "LlamaIndex"
+- _LlamaIndex_
 
-- "NeMo Guardrails", "Guardrails" for short, "rails" for shorter.
+- _NeMo Guardrails_, _Guardrails_ for short, _rails_ for shorter.
 
 ## Python
 
diff --git a/docs/trulens_eval/gh_top_intro.md b/docs/trulens_eval/gh_top_intro.md
index 33e29dc1c..bcffa7496 100644
--- a/docs/trulens_eval/gh_top_intro.md
+++ b/docs/trulens_eval/gh_top_intro.md
@@ -30,7 +30,7 @@ community](https://communityinviter.com/apps/aiqualityforum/josh)!
 
 **Don't just vibe-check your llm app!** Systematically evaluate and track your
 LLM experiments with TruLens. As you develop your app including prompts, models,
-retreivers, knowledge sources and more, TruLens-Eval is the tool you need to
+retreivers, knowledge sources and more, *TruLens-Eval* is the tool you need to
 understand its performance.
 
 Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help
diff --git a/docs/trulens_eval/intro.md b/docs/trulens_eval/intro.md
index 4db36c1ae..2bb96030e 100644
--- a/docs/trulens_eval/intro.md
+++ b/docs/trulens_eval/intro.md
@@ -9,7 +9,7 @@ trulens_eval/README.md . If you are editing README.md, your changes will be over
 
 **Don't just vibe-check your llm app!** Systematically evaluate and track your
 LLM experiments with TruLens. As you develop your app including prompts, models,
-retreivers, knowledge sources and more, TruLens-Eval is the tool you need to
+retreivers, knowledge sources and more, *TruLens-Eval* is the tool you need to
 understand its performance.
 
 Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help
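
A note on the lint preference cited in the "Update standards.md" bullet ("Markdown lint prefers _ to * for emphasis"): in markdownlint this corresponds to rule MD049 (emphasis style). A minimal config sketch enforcing underscores follows, assuming markdownlint is the linter in question; the repository's actual lint configuration is not part of this patch, so the file name and setting here are illustrative only.

    # .markdownlint.yaml (hypothetical; not included in this patch)
    # MD049 controls emphasis markers: "underscore" accepts _word_ and flags *word*.
    MD049:
      style: underscore

With such a rule active, a lint run would accept the `_TruLens_`-style entries added to standards.md but flag the `*TruLens-Eval*` spans introduced in the README, gh_top_intro.md, and intro.md hunks above, which use asterisks rather than the underscores the standards page prescribes.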