Fix broken link in docs #1969

Merged 3 commits on Aug 9, 2024
docs/source/3x/PT_MixedPrecision.md (1 addition, 1 deletion)

@@ -107,5 +107,5 @@ best_model = autotune(model=build_torch_model(), tune_config=custom_tune_config,

 ## Examples

-Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/pytorch\cv\mixed_precision
+Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/pytorch/cv/mixed_precision
 ) on how to quantize a model with Mixed Precision.
docs/source/3x/TF_Quant.md (1 addition, 1 deletion)

@@ -13,7 +13,7 @@ TensorFlow Quantization

 `neural_compressor.tensorflow` supports quantizing both TensorFlow and Keras model with or without accuracy aware tuning.

-For the detailed quantization fundamentals, please refer to the document for [Quantization](../quantization.md).
+For the detailed quantization fundamentals, please refer to the document for [Quantization](quantization.md).


 ## Get Started
docs/source/3x/TF_SQ.md (1 addition, 1 deletion)

@@ -50,4 +50,4 @@ best_model = autotune(

 ## Examples

-Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/tensorflow/nlp/large_language_models\quantization\ptq\smoothquant) on how to apply smooth quant to a TensorFlow model with `neural_compressor.tensorflow`.
+Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/tensorflow/nlp/large_language_models/quantization/ptq/smoothquant) on how to apply smooth quant to a TensorFlow model with `neural_compressor.tensorflow`.
docs/source/3x/quantization.md (1 addition, 1 deletion)

@@ -396,7 +396,7 @@ For supported quantization methods for `accuracy aware tuning` and the detailed

 User could refer to below chart to understand the whole tuning flow.

-<img src="../source/imgs/accuracy_aware_tuning_flow.png" width=600 height=480 alt="accuracy aware tuning working flow">
+<img src="./imgs/workflow.png" alt="accuracy aware tuning working flow">
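Two of the fixes in this PR replace Windows-style backslashes with forward slashes inside Markdown link targets, which GitHub renders as broken links. A minimal sketch of a checker that could catch this class of breakage across a docs tree before it lands (hypothetical helper, not part of the neural-compressor repository):

```python
import re
from pathlib import Path

# Match Markdown inline links: [text](target), capturing the target.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+)")

def find_backslash_links(root: str) -> list[tuple[str, str]]:
    """Return (file, link_target) pairs where the target contains a backslash."""
    hits = []
    for md in Path(root).rglob("*.md"):
        for target in LINK_RE.findall(md.read_text(encoding="utf-8")):
            if "\\" in target:
                hits.append((str(md), target))
    return hits

if __name__ == "__main__":
    for path, link in find_backslash_links("docs"):
        print(f"{path}: {link}")
```

Run from the repository root, this would flag links such as `pytorch\cv\mixed_precision` so they can be corrected to forward slashes; a regex scan like this is a rough heuristic and does not validate that the corrected URLs actually resolve.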