docs | standards on proper names (#997)
* fix: italicise TruLens-Eval ref

* fix: italicise TruLens-Eval ref in root scripts.

* docs: add contribution instructions for proper names, replacing inverted commas with italics.

* Update standards.md

Markdown lint prefers _ to * for emphasis.

---------

Co-authored-by: Josh Reini <[email protected]>
Co-authored-by: Piotr Mardziel <[email protected]>
3 people authored Mar 26, 2024
1 parent 82f2d68 commit f9fbef4
Showing 4 changed files with 8 additions and 8 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -30,7 +30,7 @@ community](https://communityinviter.com/apps/aiqualityforum/josh)!
 
 **Don't just vibe-check your llm app!** Systematically evaluate and track your
 LLM experiments with TruLens. As you develop your app including prompts, models,
-retreivers, knowledge sources and more, TruLens-Eval is the tool you need to
+retreivers, knowledge sources and more, *TruLens-Eval* is the tool you need to
 understand its performance.
 
 Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help
10 changes: 5 additions & 5 deletions docs/trulens_eval/contributing/standards.md
@@ -5,15 +5,15 @@ Enumerations of standards for code and its documentation to be maintained in
 
 ## Proper Names
 
-Styling/formatting of proper names.
+Styling/formatting of proper names in italics.
 
-- "TruLens"
+- _TruLens_
 
-- "LangChain"
+- _LangChain_
 
-- "LlamaIndex"
+- _LlamaIndex_
 
-- "NeMo Guardrails", "Guardrails" for short, "rails" for shorter.
+- _NeMo Guardrails_, _Guardrails_ for short, _rails_ for shorter.
 
 ## Python
 
2 changes: 1 addition & 1 deletion docs/trulens_eval/gh_top_intro.md
@@ -30,7 +30,7 @@ community](https://communityinviter.com/apps/aiqualityforum/josh)!
 
 **Don't just vibe-check your llm app!** Systematically evaluate and track your
 LLM experiments with TruLens. As you develop your app including prompts, models,
-retreivers, knowledge sources and more, TruLens-Eval is the tool you need to
+retreivers, knowledge sources and more, *TruLens-Eval* is the tool you need to
 understand its performance.
 
 Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help
2 changes: 1 addition & 1 deletion docs/trulens_eval/intro.md
@@ -9,7 +9,7 @@ trulens_eval/README.md . If you are editing README.md, your changes will be over
 
 **Don't just vibe-check your llm app!** Systematically evaluate and track your
 LLM experiments with TruLens. As you develop your app including prompts, models,
-retreivers, knowledge sources and more, TruLens-Eval is the tool you need to
+retreivers, knowledge sources and more, *TruLens-Eval* is the tool you need to
 understand its performance.
 
 Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help
