[GPU] activations scaling to resolve accuracy issues for infer precision of f16 (#27265)

Merged: vladimir-paramuzov merged 65 commits into openvinotoolkit:master from e-ddykim:static_scaling on Jan 14, 2025.

Commits (all 65):
1bc8397  added the static scaling feature (e-ddykim)
de75452  added a new rt_info scale_factor (e-ddykim)
8bfa45a  fp16 scaling for vae decoder of sdxl (e-ddykim)
3fc1cc3  resolved accuracy issue in transformer of flux.1 (e-ddykim)
908f54a  removed unnecessary codes (e-ddykim)
fe02f65  removed unnecessary codes (e-ddykim)
06ebc95  renamed to ActivationsScaling (e-ddykim)
99ef573  updated code style (e-ddykim)
3aaa022  updated to use multiple MatcherPass (e-ddykim)
19d0a9f  updated code style (e-ddykim)
83104fe  updated code style (e-ddykim)
148a4ac  added unit tests (e-ddykim)
e87c702  update code style (e-ddykim)
efe8b37  updated code style (e-ddykim)
e9e2e1d  updated code style (e-ddykim)
6681856  updated code style (e-ddykim)
8ca4986  updated for transformer of FLUX.1 (e-ddykim)
d0d9b68  disabled FullyConnectedPerLayerScaling (e-ddykim)
e131fa2  added unit tests (e-ddykim)
d9e4246  fixed code style (e-ddykim)
79b2e48  Enable FullyConnectedHorizontalFusion with activations scaling (andrew-k-park)
bc8e9c1  updated ScaleDownMultipleLayers (e-ddykim)
1ca6cae  updated code style (e-ddykim)
6f277b7  reading ACTIVATIONS_SCALE_FACTOR from rt_info (e-ddykim)
45ae58b  updated to use LPT (e-ddykim)
b160899  fixed for flux.1 dynamic model (e-ddykim)
1a58a38  fix merging faults (e-ddykim)
e918f5d  fixes for flux.1 (e-ddykim)
562c263  update not to add redundant Convert (e-ddykim)
799a67a  updated apply_rt_info (e-ddykim)
9cb19a9  added a new ScaleDownFusion pass (e-ddykim)
4e169ec  added a new param useDefaultTransformation for activations scaling (e-ddykim)
3bfd7e5  update code style (e-ddykim)
a54ac37  update code style (e-ddykim)
fadef1f  updated clamp_fp16 tests (e-ddykim)
9dad71d  code cleanup (e-ddykim)
6f5ded8  code cleanup (e-ddykim)
a98ddf3  update code style (e-ddykim)
1d7592c  remove redundant code (e-ddykim)
caf8024  updated activations scaling tests (e-ddykim)
6d3cc27  updated ScaleDownFusion (e-ddykim)
390c8c4  fixed ScaleDownFusionTest (e-ddykim)
9ca735d  added MulNormTransformation and NormMulTransformation (e-ddykim)
a1a3255  removed apply_rt_info (e-ddykim)
9f5cfd1  updated activations scaling unit tests (e-ddykim)
e2caf35  updated code style (e-ddykim)
f58c08c  updated AddTransformation to use output_type instead of fp32 (e-ddykim)
e81ef27  added a new EliminateMultiplyX1 pass (e-ddykim)
d06b550  update code style (e-ddykim)
1b1e04a  added a new MulMulTransformation (e-ddykim)
2c450f9  added MulDownTransformation (e-ddykim)
966955d  fixed code style (e-ddykim)
7e53946  added a functional test (e-ddykim)
d31907f  applied reviews (e-ddykim)
83f65b2  merged master (e-ddykim)
a26d87f  applied reviews (e-ddykim)
6d6f7b0  updated to preserve the original output precision (e-ddykim)
000b11d  updated per reviews (e-ddykim)
20a7a44  reverted to apply activations_scale_factor from rt_info (e-ddykim)
3afa527  added MulMulTransformationTest (e-ddykim)
aa284a8  updated MulShareTransformation (e-ddykim)
366a1be  updated scaling tests (e-ddykim)
ecc48e6  applied reviews (e-ddykim)
8122cde  set scalingMode = true (e-ddykim)
0874b17  disabled scaling for quantized models (e-ddykim)
File: ...mmon/transformations/include/transformations/common_optimizations/activations_scaling.hpp (104 additions, 0 deletions)
// Copyright (C) 2024 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include <memory>

#include "openvino/pass/matcher_pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API ActivationsScaling;

namespace activations_scaling {

class TRANSFORMATIONS_API ScaleDownSingleLayer;
class TRANSFORMATIONS_API EliminateScalarMul;
class TRANSFORMATIONS_API MulConcatTransformation;
class TRANSFORMATIONS_API MulShareTransformation;
class TRANSFORMATIONS_API MoveDownScalarMul;

}  // namespace activations_scaling
}  // namespace pass
}  // namespace ov

// ActivationsScaling makes activation values smaller to prevent overflow due to the limited range of FP16.
// This feature is controlled by ov::hint::activations_scale_factor.
// For example, when this property is set to 16, activations are divided by 16.
// If ov::hint::activations_scale_factor is less than or equal to zero, the feature is disabled.

// Adds scale_down and scale_up layers around Convolution and MatMul nodes:
//
//   Conv/MatMul
//     ==>
//   Multiply(scale_down by scale_factor) --> Conv/MatMul --> Multiply(scale_up by scale_factor)
class ov::pass::activations_scaling::ScaleDownSingleLayer : public ov::pass::MatcherPass {
public:
    OPENVINO_MATCHER_PASS_RTTI("ScaleDownSingleLayer", "0");
    ScaleDownSingleLayer(float scale_factor, ov::element::Type scaled_prec);
};
// Normalization and ShapeOf have the following property:
//
//   Norm(input * const_a) = Norm(input)
//
// So, we can skip a scalar Multiply that feeds Normalization or ShapeOf:
//
//   input --> Multiply --> Normalization/ShapeOf
//     ==>
//   input --> Normalization/ShapeOf
class ov::pass::activations_scaling::EliminateScalarMul : public ov::pass::MatcherPass {
public:
    OPENVINO_MATCHER_PASS_RTTI("EliminateScalarMul", "0");
    EliminateScalarMul();
};
|
||
// input_a const_a input_b const_b input_c const_c | ||
// \ / \ / \ / | ||
// Multiply_a Multiply_b Multiply_c | ||
// \ | / | ||
// \ | / | ||
// ---------- Concat ------------ | ||
// ==> | ||
// (const_a (const_b (const_c | ||
// input_a /const_c) input_b /const_c) input_c /const_c) | ||
// \ / \ / \ / | ||
// Multiply_a Multiply_b Multiply_c | ||
// \ | / | ||
// \ | / | ||
// ---------- Concat ------------ | ||
// | const_c | ||
// | / | ||
// Multiply | ||
class ov::pass::activations_scaling::MulConcatTransformation : public ov::pass::MatcherPass { | ||
public: | ||
OPENVINO_MATCHER_PASS_RTTI("MulConcatTransformation", "0"); | ||
MulConcatTransformation(); | ||
}; | ||
//      input                  input
//      /    \                   |
//   Norm     Mul    ==>        Mul (expected to be fused into the input layer)
//     |       |               /   \
//   op_a    op_b           Norm    op_b
//                            |
//                          op_a
class ov::pass::activations_scaling::MulShareTransformation : public ov::pass::MatcherPass {
public:
    OPENVINO_MATCHER_PASS_RTTI("MulShareTransformation", "0");
    MulShareTransformation();
};
//            input_b   scalar            input_a   input_b
//               \       /                   \       /
//   input_a     Mul_b          ==>           Mul_a'   scalar
//      \         /                             \       /
//        Mul_a                                  Mul_b' (expected to be merged with Mul_a')
class ov::pass::activations_scaling::MoveDownScalarMul : public ov::pass::MatcherPass {
public:
    OPENVINO_MATCHER_PASS_RTTI("MoveDownScalarMul", "0");
    MoveDownScalarMul();
};
Review discussion:

Reviewer: This transformation duplicates ConcatTransformation behavior. I'd suggest re-enabling ConcatTransformation (it is currently disabled) and removing MulConcatTransformation. The subgraph test you provided passes successfully with these changes.

Reply: Thanks @e-ddykim for the help: it was found that the current ConcatTransformation implementation doesn't handle all the cases that this transformation is able to handle. I created a ticket for ConcatTransformation improvement: CVS-160325. After it is implemented, we will be able to remove MulConcatTransformation and reuse ConcatTransformation.