change relative links to hardcoded links in docs (#657)
* change relative links to hardcoded links in docs

* hardlink contributions guide

* update readmes, contributing
joshreini1 authored Dec 13, 2023
1 parent f29bdc9 commit 62c0cbd
Showing 8 changed files with 13 additions and 18 deletions.
5 changes: 0 additions & 5 deletions CONTRIBUTING.md

This file was deleted.

4 changes: 2 additions & 2 deletions README.md
@@ -20,7 +20,7 @@ The best way to support TruLens is to give us a ⭐ on [GitHub](https://www.gith

Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help you to identify failure modes & systematically iterate to improve your application.

-Read more about the core concepts behind TruLens including [Feedback Functions](./trulens_eval/core_concepts_feedback_functions.md), [The RAG Triad](./core_concepts_rag_triad.md), and [Honest, Harmless and Helpful Evals](./core_concepts_honest_harmless_helpful_evals.md).
+Read more about the core concepts behind TruLens including [Feedback Functions](https://www.trulens.org/trulens_eval/core_concepts_feedback_functions/), [The RAG Triad](https://www.trulens.org/trulens_eval/core_concepts_rag_triad/), and [Honest, Harmless and Helpful Evals](https://www.trulens.org/trulens_eval/core_concepts_honest_harmless_helpful_evals/).

## TruLens in the development workflow

@@ -44,7 +44,7 @@ Walk through how to instrument and evaluate a RAG built from scratch with TruLen

### 💡 Contributing

-Interested in contributing? See our [contribution guide](https://github.com/truera/trulens/tree/main/trulens_eval/CONTRIBUTING.md) for more details.
+Interested in contributing? See our [contribution guide](https://www.trulens.org/trulens_eval/CONTRIBUTING/) for more details.


## TruLens-Explain
6 changes: 3 additions & 3 deletions docs/trulens_eval/core_concepts_feedback_functions.md
@@ -10,13 +10,13 @@ It can be useful to think of the range of evaluations on two axes: Scalable and

In early development stages, we recommend starting with domain expert evaluations. These evaluations are often completed by the developers themselves and represent the core use cases your app is expected to complete. This allows you to deeply understand the performance of your app, but lacks scale.

-See this [example notebook](./groundtruth_evals.ipynb) to learn how to run ground truth evaluations with TruLens.
+See this [example notebook](https://www.trulens.org/trulens_eval/groundtruth_evals/) to learn how to run ground truth evaluations with TruLens.
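
In practice, the linked notebook boils down to scoring app answers against a curated golden set. A minimal sketch, assuming the `GroundTruthAgreement` provider available in `trulens_eval` at the time of this commit; the golden-set contents are illustrative:

```python
# Minimal ground-truth eval sketch with trulens_eval.
# The golden set pairs expected queries with known-good responses.
from trulens_eval import Feedback
from trulens_eval.feedback import GroundTruthAgreement

golden_set = [
    {"query": "Who invented the light bulb?", "response": "Thomas Edison"},
]

# Scores agreement between the app's output and the golden response.
f_groundtruth = Feedback(
    GroundTruthAgreement(golden_set).agreement_measure,
    name="Ground Truth",
).on_input_output()
```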

## User Feedback (Human) Evaluations

After you have completed early evaluations and have gained more confidence in your app, it is often useful to gather human feedback. This can often be in the form of binary (up/down) feedback provided by your users. This is slightly more scalable than ground truth evals, but struggles with variance and can still be expensive to collect.

-See this [example notebook](./human_feedback.ipynb) to learn how to log human feedback with TruLens.
+See this [example notebook](https://www.trulens.org/trulens_eval/human_feedback/) to learn how to log human feedback with TruLens.
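
Logging human feedback amounts to attaching a score to an already-recorded app call. A sketch under assumptions: the `Tru.add_feedback` signature below mirrors the linked notebook as of this commit, and `record` and `tru_app` are presumed to come from a prior instrumented run:

```python
# Sketch: logging binary user feedback against a recorded app call.
# `record` and `tru_app` are assumed to exist from a prior instrumented run.
from trulens_eval import Tru

tru = Tru()

tru.add_feedback(
    name="Human Feedback",
    record_id=record.record_id,  # which call the user rated
    app_id=tru_app.app_id,       # which app version produced it
    result=1.0,                  # 1.0 = thumbs up, 0.0 = thumbs down
)
```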

## Traditional NLP Evaluations

@@ -34,4 +34,4 @@ Large Language Models can also provide meaningful and flexible feedback on LLM a

Depending on the size and nature of the LLM, these evaluations can be quite expensive at scale.

-See this [example notebook](./quickstart.ipynb) to learn how to run LLM-based evaluations with TruLens.
+See this [example notebook](https://www.trulens.org/trulens_eval/quickstart/) to learn how to run LLM-based evaluations with TruLens.
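
The quickstart's LLM-based evals reduce to wrapping a provider method in a `Feedback` and selecting which parts of the record it grades. A sketch using the OpenAI provider as it existed at this commit; model configuration is omitted and `OPENAI_API_KEY` is assumed to be set:

```python
# Sketch: an LLM-graded feedback function over the app's input/output pair.
from trulens_eval import Feedback
from trulens_eval.feedback.provider import OpenAI

provider = OpenAI()  # reads OPENAI_API_KEY from the environment

# The provider LLM scores how relevant the answer is to the question.
f_answer_relevance = Feedback(provider.relevance).on_input_output()
```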
2 changes: 1 addition & 1 deletion docs/trulens_eval/core_concepts_rag_triad.md
@@ -24,5 +24,5 @@ Last, our response still needs to helpfully answer the original question. We can

By reaching satisfactory evaluations for this triad, we can make a nuanced statement about our application’s correctness; our application is verified to be hallucination free up to the limit of its knowledge base. In other words, if the vector database contains only accurate information, then the answers provided by the RAG are also accurate.

-To see the RAG triad in action, check out the [TruLens Quickstart](./quickstart.ipynb)
+To see the RAG triad in action, check out the [TruLens Quickstart](https://www.trulens.org/trulens_eval/quickstart/)
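Concretely, the triad is three feedback functions over different slices of a record: retrieved context, answer, and question. A sketch following the quickstart of this era; the `retrieve` selector path is an assumption and must match however your app exposes its retrieval step:

```python
# Sketch: the RAG triad as three feedback functions.
# Assumes the instrumented app has a method named `retrieve` whose
# return value is the retrieved context; adjust the selector otherwise.
import numpy as np

from trulens_eval import Feedback, Select
from trulens_eval.feedback import Groundedness
from trulens_eval.feedback.provider import OpenAI

provider = OpenAI()
grounded = Groundedness(groundedness_provider=provider)

# 1. Groundedness: is each claim in the answer supported by the context?
f_groundedness = (
    Feedback(grounded.groundedness_measure_with_cot_reasons)
    .on(Select.RecordCalls.retrieve.rets)
    .on_output()
    .aggregate(grounded.grounded_statements_aggregator)
)

# 2. Answer relevance: does the answer address the question?
f_answer_relevance = Feedback(provider.relevance).on_input_output()

# 3. Context relevance: is the retrieved context relevant to the question?
f_context_relevance = (
    Feedback(provider.qs_relevance)
    .on_input()
    .on(Select.RecordCalls.retrieve.rets)
    .aggregate(np.mean)
)
```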

4 changes: 2 additions & 2 deletions docs/trulens_eval/gh_top_intro.md
@@ -20,7 +20,7 @@ The best way to support TruLens is to give us a ⭐ on [GitHub](https://www.gith

Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help you to identify failure modes & systematically iterate to improve your application.

-Read more about the core concepts behind TruLens including [Feedback Functions](./trulens_eval/core_concepts_feedback_functions.md), [The RAG Triad](./core_concepts_rag_triad.md), and [Honest, Harmless and Helpful Evals](./core_concepts_honest_harmless_helpful_evals.md).
+Read more about the core concepts behind TruLens including [Feedback Functions](https://www.trulens.org/trulens_eval/core_concepts_feedback_functions/), [The RAG Triad](https://www.trulens.org/trulens_eval/core_concepts_rag_triad/), and [Honest, Harmless and Helpful Evals](https://www.trulens.org/trulens_eval/core_concepts_honest_harmless_helpful_evals/).

## TruLens in the development workflow

@@ -44,4 +44,4 @@ Walk through how to instrument and evaluate a RAG built from scratch with TruLen

### 💡 Contributing

-Interested in contributing? See our [contribution guide](https://github.com/truera/trulens/tree/main/trulens_eval/CONTRIBUTING.md) for more details.
+Interested in contributing? See our [contribution guide](https://www.trulens.org/trulens_eval/CONTRIBUTING/) for more details.
4 changes: 2 additions & 2 deletions docs/trulens_eval/intro.md
@@ -6,7 +6,7 @@

Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help you to identify failure modes & systematically iterate to improve your application.

-Read more about the core concepts behind TruLens including [Feedback Functions](./trulens_eval/core_concepts_feedback_functions.md), [The RAG Triad](./trulens_eval/core_concepts_rag_triad.md), and [Honest, Harmless and Helpful Evals](./trulens_eval/core_concepts_honest_harmless_helpful_evals.md).
+Read more about the core concepts behind TruLens including [Feedback Functions](https://www.trulens.org/trulens_eval/core_concepts_feedback_functions/), [The RAG Triad](https://www.trulens.org/trulens_eval/core_concepts_rag_triad/), and [Honest, Harmless and Helpful Evals](https://www.trulens.org/trulens_eval/core_concepts_honest_harmless_helpful_evals/).

## TruLens in the development workflow

@@ -30,4 +30,4 @@ Walk through how to instrument and evaluate a RAG built from scratch with TruLen

### 💡 Contributing

-Interested in contributing? See our [contribution guide](https://github.com/truera/trulens/tree/main/trulens_eval/CONTRIBUTING.md) for more details.
+Interested in contributing? See our [contribution guide](https://www.trulens.org/trulens_eval/CONTRIBUTING/) for more details.
2 changes: 1 addition & 1 deletion trulens_eval/CONTRIBUTING.md
@@ -36,7 +36,7 @@ New contributors may want to start with issues tagged with good first issue.
Please feel free to open an issue and/or assign an issue to yourself.

## 🎉 Add Usage Examples
-If you have applied TruLens to track and evaluate a unique use-case, we would love your contribution in the form of an example notebook: e.g. [Evaluating Pinecone Configuration Choices on Downstream App Performance](https://github.com/truera/trulens/blob/main/trulens_eval/examples/vector-dbs/pinecone/constructing_optimal_pinecone.ipynb)
+If you have applied TruLens to track and evaluate a unique use-case, we would love your contribution in the form of an example notebook: e.g. [Evaluating Pinecone Configuration Choices on Downstream App Performance](https://colab.research.google.com/github/truera/trulens/blob/main/trulens_eval/examples/expositional/vector-dbs/pinecone/pinecone_evals_build_better_rags.ipynb)

All example notebooks are expected to:

4 changes: 2 additions & 2 deletions trulens_eval/README.md
@@ -6,7 +6,7 @@

Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help you to identify failure modes & systematically iterate to improve your application.

-Read more about the core concepts behind TruLens including [Feedback Functions](./trulens_eval/core_concepts_feedback_functions.md), [The RAG Triad](./trulens_eval/core_concepts_rag_triad.md), and [Honest, Harmless and Helpful Evals](./trulens_eval/core_concepts_honest_harmless_helpful_evals.md).
+Read more about the core concepts behind TruLens including [Feedback Functions](https://www.trulens.org/trulens_eval/core_concepts_feedback_functions/), [The RAG Triad](https://www.trulens.org/trulens_eval/core_concepts_rag_triad/), and [Honest, Harmless and Helpful Evals](https://www.trulens.org/trulens_eval/core_concepts_honest_harmless_helpful_evals/).

## TruLens in the development workflow

@@ -30,4 +30,4 @@ Walk through how to instrument and evaluate a RAG built from scratch with TruLen

### 💡 Contributing

-Interested in contributing? See our [contribution guide](https://github.com/truera/trulens/tree/main/trulens_eval/CONTRIBUTING.md) for more details.
+Interested in contributing? See our [contribution guide](https://www.trulens.org/trulens_eval/CONTRIBUTING/) for more details.
