From b61f5002ce265abf7ffe0c2611abdc8a3cdf699e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Furkan=20=C3=87etinkaya?=
Date: Tue, 26 Nov 2024 22:30:19 +0300
Subject: [PATCH] Fix wrong link href in docs/running-llms.md (#37)

## Description

Hello, I found a broken link href in the [/docs/guides/running-llms](https://docs.swmansion.com/react-native-executorch/docs/guides/running-llms) Running LLMs section. The constants link led to GitHub's 404 Not Found page, so I fixed its href.

### Type of change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation update (improves or adds clarity to existing documentation)

### Tested on

- [ ] iOS
- [ ] Android

### Testing instructions

### Screenshots

### Related issues

### Checklist

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings

### Additional notes

---
 docs/docs/guides/running-llms.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/guides/running-llms.md b/docs/docs/guides/running-llms.md
index 98882ed..eb3f0e8 100644
--- a/docs/docs/guides/running-llms.md
+++ b/docs/docs/guides/running-llms.md
@@ -5,7 +5,7 @@ sidebar_position: 1
 
 React Native ExecuTorch supports Llama 3.2 models, including quantized versions. Before getting started, you’ll need to obtain the .pte binary—a serialized model—and the tokenizer. There are various ways to accomplish this:
 
-- For your convienience, it's best if you use models exported by us, you can get them from our hugging face repository. You can also use [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/modelUrls.ts) shipped with our library.
+- For your convienience, it's best if you use models exported by us, you can get them from our hugging face repository. You can also use [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/constants/modelUrls.ts) shipped with our library.
 - If you want to export model by yourself,you can use a Docker image that we've prepared. To see how it works, check out [exporting Llama](./exporting-llama.mdx)
 - Follow the official [tutorial](https://github.com/pytorch/executorch/blob/fe20be98c/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md) made by ExecuTorch team to build the model and tokenizer yourself