diff --git a/README.md b/README.md
index f7d15d073..1abcfef2a 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ community](https://communityinviter.com/apps/aiqualityforum/josh)!
 
 **Don't just vibe-check your llm app!** Systematically evaluate and track your
 LLM experiments with TruLens. As you develop your app including prompts, models,
-retreivers, knowledge sources and more, TruLens-Eval is the tool you need to
+retrievers, knowledge sources and more, *TruLens-Eval* is the tool you need to
 understand its performance.
 
 Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help
diff --git a/docs/trulens_eval/contributing/standards.md b/docs/trulens_eval/contributing/standards.md
index 190c83097..e010fe5ec 100644
--- a/docs/trulens_eval/contributing/standards.md
+++ b/docs/trulens_eval/contributing/standards.md
@@ -5,15 +5,15 @@ Enumerations of standards for code and its documentation to be maintained in
 
 ## Proper Names
 
-Styling/formatting of proper names.
+Styling/formatting of proper names in italics.
 
-- "TruLens"
+- _TruLens_
 
-- "LangChain"
+- _LangChain_
 
-- "LlamaIndex"
+- _LlamaIndex_
 
-- "NeMo Guardrails", "Guardrails" for short, "rails" for shorter.
+- _NeMo Guardrails_, _Guardrails_ for short, _rails_ for shorter.
 
 ## Python
 
diff --git a/docs/trulens_eval/gh_top_intro.md b/docs/trulens_eval/gh_top_intro.md
index 33e29dc1c..bcffa7496 100644
--- a/docs/trulens_eval/gh_top_intro.md
+++ b/docs/trulens_eval/gh_top_intro.md
@@ -30,7 +30,7 @@ community](https://communityinviter.com/apps/aiqualityforum/josh)!
 
 **Don't just vibe-check your llm app!** Systematically evaluate and track your
 LLM experiments with TruLens. As you develop your app including prompts, models,
-retreivers, knowledge sources and more, TruLens-Eval is the tool you need to
+retrievers, knowledge sources and more, *TruLens-Eval* is the tool you need to
 understand its performance.
 
 Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help
diff --git a/docs/trulens_eval/intro.md b/docs/trulens_eval/intro.md
index 4db36c1ae..2bb96030e 100644
--- a/docs/trulens_eval/intro.md
+++ b/docs/trulens_eval/intro.md
@@ -9,7 +9,7 @@ trulens_eval/README.md . If you are editing README.md, your changes will be over
 
 **Don't just vibe-check your llm app!** Systematically evaluate and track your
 LLM experiments with TruLens. As you develop your app including prompts, models,
-retreivers, knowledge sources and more, TruLens-Eval is the tool you need to
+retrievers, knowledge sources and more, *TruLens-Eval* is the tool you need to
 understand its performance.
 
 Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help