
[GPU] activations scaling to resolve accuracy issues for infer precision of f16 #27265

Merged: 65 commits, Jan 14, 2025

Commits (65)
1bc8397  added the static scaling feature (e-ddykim, Oct 16, 2024)
de75452  added a new rt_info scale_factor (e-ddykim, Oct 16, 2024)
8bfa45a  fp16 scaling for vae decoder of sdxl (e-ddykim, Oct 24, 2024)
3fc1cc3  resolved accuracy issue in transformer of flux.1 (e-ddykim, Oct 27, 2024)
908f54a  removed unnecessary codes (e-ddykim, Oct 27, 2024)
fe02f65  removed unnecessary codes (e-ddykim, Oct 27, 2024)
06ebc95  renamed to ActivationsScaling (e-ddykim, Oct 28, 2024)
99ef573  updated code style (e-ddykim, Oct 28, 2024)
3aaa022  updated to use multiple MatcherPass (e-ddykim, Oct 29, 2024)
19d0a9f  updated code style (e-ddykim, Oct 29, 2024)
83104fe  updated code style (e-ddykim, Oct 29, 2024)
148a4ac  added unit tests (e-ddykim, Oct 29, 2024)
e87c702  update code style (e-ddykim, Oct 29, 2024)
efe8b37  updated code style (e-ddykim, Oct 29, 2024)
e9e2e1d  updated code style (e-ddykim, Oct 29, 2024)
6681856  updated code style (e-ddykim, Oct 29, 2024)
8ca4986  updated for transformer of FLUX.1 (e-ddykim, Nov 4, 2024)
d0d9b68  disabled FullyConnectedPerLayerScaling (e-ddykim, Nov 4, 2024)
e131fa2  added unit tests (e-ddykim, Nov 4, 2024)
d9e4246  fixed code style (e-ddykim, Nov 4, 2024)
79b2e48  Enable FullyConnectedHorizontalFusion with activations scaling (andrew-k-park, Nov 5, 2024)
bc8e9c1  updated ScaleDownMultipleLayers (e-ddykim, Nov 11, 2024)
1ca6cae  updated code style (e-ddykim, Nov 11, 2024)
6f277b7  reading ACTIVATIONS_SCALE_FACTOR from rt_info (e-ddykim, Nov 12, 2024)
45ae58b  updated to use LPT (e-ddykim, Nov 20, 2024)
b160899  fixed for flux.1 dynamic model (e-ddykim, Nov 26, 2024)
1a58a38  fix merging faults (e-ddykim, Nov 26, 2024)
e918f5d  fixes for flux.1 (e-ddykim, Nov 28, 2024)
562c263  update not to add redundant Convert (e-ddykim, Nov 29, 2024)
799a67a  updated apply_rt_info (e-ddykim, Nov 29, 2024)
9cb19a9  added a new ScaleDownFusion pass (e-ddykim, Dec 2, 2024)
4e169ec  added a new param useDefaultTransformation for activations scaling (e-ddykim, Dec 2, 2024)
3bfd7e5  update code style (e-ddykim, Dec 2, 2024)
a54ac37  update code style (e-ddykim, Dec 2, 2024)
fadef1f  updated clamp_fp16 tests (e-ddykim, Dec 2, 2024)
9dad71d  code cleanup (e-ddykim, Dec 2, 2024)
6f5ded8  code cleanup (e-ddykim, Dec 3, 2024)
a98ddf3  update code style (e-ddykim, Dec 3, 2024)
1d7592c  remove redundant code (e-ddykim, Dec 3, 2024)
caf8024  updated activations scaling tests (e-ddykim, Dec 3, 2024)
6d3cc27  updated ScaleDownFusion (e-ddykim, Dec 4, 2024)
390c8c4  fixed ScaleDownFusionTest (e-ddykim, Dec 4, 2024)
9ca735d  added MulNormTransformation and NormMulTransformation (e-ddykim, Dec 8, 2024)
a1a3255  removed apply_rt_info (e-ddykim, Dec 8, 2024)
9f5cfd1  updated activations scaling unit tests (e-ddykim, Dec 8, 2024)
e2caf35  updated code style (e-ddykim, Dec 8, 2024)
f58c08c  updated AddTransformation to use output_type instead of fp32 (e-ddykim, Dec 10, 2024)
e81ef27  added a new EliminateMultiplyX1 pass (e-ddykim, Dec 10, 2024)
d06b550  update code style (e-ddykim, Dec 10, 2024)
1b1e04a  added a new MulMulTransformation (e-ddykim, Dec 16, 2024)
2c450f9  added MulDownTransformation (e-ddykim, Dec 17, 2024)
966955d  fixed code style (e-ddykim, Dec 18, 2024)
7e53946  added a functional test (e-ddykim, Dec 23, 2024)
d31907f  applied reviews (e-ddykim, Dec 24, 2024)
83f65b2  merged master (e-ddykim, Dec 24, 2024)
a26d87f  applied reviews (e-ddykim, Jan 2, 2025)
6d6f7b0  updated to preserve the original output precision (e-ddykim, Jan 8, 2025)
000b11d  updated per reviews (e-ddykim, Jan 8, 2025)
20a7a44  reverted to apply activations_scale_factor from rt_info (e-ddykim, Jan 8, 2025)
3afa527  added MulMulTransformationTest (e-ddykim, Jan 8, 2025)
aa284a8  updated MulShareTransformation (e-ddykim, Jan 9, 2025)
366a1be  updated scaling tests (e-ddykim, Jan 9, 2025)
ecc48e6  applied reviews (e-ddykim, Jan 9, 2025)
8122cde  set scalingMode = true (e-ddykim, Jan 10, 2025)
0874b17  disabled scaling for quantized models (e-ddykim, Jan 12, 2025)
@@ -252,11 +252,13 @@ class LP_TRANSFORMATIONS_API LayerTransformation : public ov::pass::MatcherPass
             element::Type deqPrecision = element::f32,
             const std::vector<ov::element::Type> defaultPrecisions =
                 { ov::element::u8, ov::element::i8 },
-            const bool reshapeIgnorePerTensorQuantizationCheck = false) :
+            const bool reshapeIgnorePerTensorQuantizationCheck = false,
+            const bool scalingMode = false) :
             updatePrecisions(updatePrecisions),
             deqPrecision(deqPrecision),
             defaultPrecisions(defaultPrecisions),
-            reshapeIgnorePerTensorQuantizationCheck(reshapeIgnorePerTensorQuantizationCheck) {}
+            reshapeIgnorePerTensorQuantizationCheck(reshapeIgnorePerTensorQuantizationCheck),
+            scalingMode(scalingMode) {}

         Params& setUpdatePrecisions(const bool updatePrecisions) {
             this->updatePrecisions = updatePrecisions;
@@ -281,6 +283,8 @@ class LP_TRANSFORMATIONS_API LayerTransformation : public ov::pass::MatcherPass
         std::vector<ov::element::Type> defaultPrecisions;
         // to support GPU workaround to keep Reshape and MatMul in FP32
         bool reshapeIgnorePerTensorQuantizationCheck;
+        // to support Activations Scaling
+        bool scalingMode;
     };

class PrecisionDetails {
@@ -352,6 +356,7 @@ class LP_TRANSFORMATIONS_API LayerTransformation : public ov::pass::MatcherPass
     element::Type deqPrecision;
     std::vector<ov::element::Type> defaultPrecisions;
     bool reshapeIgnorePerTensorQuantizationCheck;
+    bool scalingMode;

     static constexpr char originalLayerPostfix[] = "_original";
     TransformationContext* context;
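For orientation, here is a minimal sketch of how the extended constructor might be invoked with the new flag enabled. The parameter order and defaults follow the header above; the ov::pass::low_precision namespace and the surrounding setup are assumptions for illustration, not actual plugin code:

    // Hedged sketch: constructing LPT params with scalingMode enabled.
    ov::pass::low_precision::LayerTransformation::Params params(
        true,                                // updatePrecisions
        ov::element::f32,                    // deqPrecision
        {ov::element::u8, ov::element::i8},  // defaultPrecisions
        false,                               // reshapeIgnorePerTensorQuantizationCheck
        true);                               // scalingMode: enable activations-scaling support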
13 changes: 7 additions & 6 deletions src/common/low_precision_transformations/src/add.cpp
@@ -214,14 +214,15 @@ bool AddTransformation::transform(TransformationContext& context, ov::pass::pattern::Matcher& m)
                                 newSubtractFullPathValues),
                             newMultiplyFullPathValues);

+    auto output_type = scalingMode ? add->get_output_element_type(0) : element::f32;
     newAddOrSubtract = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Add>>(
-        std::vector<element::Type>{element::f32, element::f32}, std::vector<element::Type>{ element::f32 },
-        ov::op::TemporaryReplaceOutputType(inputs[0], element::f32).get(),
-        ov::op::TemporaryReplaceOutputType(inputs[1], element::f32).get());
+        std::vector<element::Type>{output_type, output_type}, std::vector<element::Type>{output_type},
+        ov::op::TemporaryReplaceOutputType(inputs[0], output_type).get(),
+        ov::op::TemporaryReplaceOutputType(inputs[1], output_type).get());
     newMultiply = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
-        std::vector<element::Type>{element::f32, element::f32}, std::vector<element::Type>{ add->get_output_element_type(0) },
-        ov::op::TemporaryReplaceOutputType(newAddOrSubtract, element::f32).get(),
-        ov::op::TemporaryReplaceOutputType(multiplyEmptyPathValues, element::f32).get());
+        std::vector<element::Type>{output_type, output_type}, std::vector<element::Type>{add->get_output_element_type(0)},
+        ov::op::TemporaryReplaceOutputType(newAddOrSubtract, output_type).get(),
+        ov::op::TemporaryReplaceOutputType(multiplyEmptyPathValues, output_type).get());

     NetworkHelper::insertDequantizationAfter(add, newMultiply, newAddOrSubtract);
     NetworkHelper::copyInfo(add, newAddOrSubtract);
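The net effect: with scalingMode set, the relaxed Add runs in the node's own output type (e.g. f16) instead of being widened to f32, so a scaled f16 graph avoids extra Convert nodes. Restated compactly (illustrative, mirroring the output_type variable above):

    // scalingMode == false: compute the Add in f32, the classic LPT dequantization behavior.
    // scalingMode == true : compute the Add in its own output type (e.g. f16); the earlier
    //                       scale-down keeps operand magnitudes inside the f16 range.
    const ov::element::Type output_type =
        scalingMode ? add->get_output_element_type(0) : ov::element::f32;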
@@ -45,6 +45,7 @@ LayerTransformation::LayerTransformation(const Params& params) :
     deqPrecision(params.deqPrecision),
     defaultPrecisions(params.defaultPrecisions),
     reshapeIgnorePerTensorQuantizationCheck(params.reshapeIgnorePerTensorQuantizationCheck),
+    scalingMode(params.scalingMode),
     context(nullptr) {}

 void LayerTransformation::setContext(TransformationContext* context) noexcept {
29 changes: 18 additions & 11 deletions src/common/low_precision_transformations/src/multiply_partial.cpp
@@ -79,16 +79,17 @@ bool MultiplyPartialTransformation::transform(TransformationContext& context, ov::pass::pattern::Matcher& m)
     auto constParent = multiply->input_value(multiplyBranch.first == 0 ? 1 : 0);
     auto multiplyParentParent = multiplyParent.get_node_shared_ptr()->input_value(multiplyBranch.second);
     auto multiplyParentConst = multiplyParent.get_node_shared_ptr()->input_value(multiplyBranch.second == 0 ? 1 : 0);
+    auto inputDataType = scalingMode ? multiply->get_output_element_type(0) : element::f32;

     newMultiply = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
-        std::vector<ov::element::Type>{ element::f32, element::f32 },
+        std::vector<ov::element::Type>{ inputDataType, inputDataType },
         std::vector<ov::element::Type>{ multiply->get_output_element_type(0) },
-        ov::op::TemporaryReplaceOutputType(multiplyParentParent, element::f32).get(),
+        ov::op::TemporaryReplaceOutputType(multiplyParentParent, inputDataType).get(),
         ov::op::TemporaryReplaceOutputType(
             fold<ov::opset1::Multiply>(
-                foldConvert(multiplyParentConst, element::f32),
-                foldConvert(constParent, element::f32)),
-            element::f32).get());
+                foldConvert(multiplyParentConst, inputDataType),
+                foldConvert(constParent, inputDataType)),
+            inputDataType).get());

     NetworkHelper::copyInfo(multiplyParent.get_node_shared_ptr(), newMultiply);
     NetworkHelper::copyInfo(multiply, newMultiply);
@@ -133,24 +134,30 @@ bool MultiplyPartialTransformation::transform(TransformationContext& context, ov::pass::pattern::Matcher& m)


     // before: Y = (SC1 * (X1 - SH1)) * (SC2 * X2)
-    // after : Y = (SC1' * (X1 - SH1)) * (X2) , where :
-    //     SC1' = SC1 * SC2
+    // if scalingMode == false
+    //     after : Y = (SC1' * (X1 - SH1)) * (X2) , where :
+    //         SC1' = SC1 * SC2
+    // else
+    //     after : Y = ((X1 - SH1) * X2) * SC1' , where :
+    //         SC1' = SC1 * SC2
     auto newMultiplyValuesFullPath = fold<ov::opset1::Multiply>(multiplyValuesEmptyPath, multiplyValuesFullPath);
     OutputVector inputs{ {}, {} };
-    inputs[emptyPathIndex] = dequantizationEmptyPath.data;
+    inputs[emptyPathIndex] = scalingMode ? newMultiplyValuesFullPath : dequantizationEmptyPath.data;
+    auto input_for_fullPath = scalingMode ? dequantizationEmptyPath.data.get_node_shared_ptr() :
+                                            newMultiplyValuesFullPath;

     ov::Output<ov::Node> parent0 = dequantizationFullPath.subtract == nullptr ?
         (dequantizationFullPath.convert == nullptr ? dequantizationFullPath.data : dequantizationFullPath.convert) :
         dequantizationFullPath.subtract;

     inputs[fullPathIndex] =
-        parent0.get_node()->get_output_element_type(0) == newMultiplyValuesFullPath->get_output_element_type(0) ?
-        std::make_shared<ov::opset1::Multiply>(parent0, newMultiplyValuesFullPath) :
+        parent0.get_node()->get_output_element_type(0) == input_for_fullPath->get_output_element_type(0) ?
+        std::make_shared<ov::opset1::Multiply>(parent0, input_for_fullPath) :
         std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
             std::vector<element::Type>{element::f32, element::f32},
             std::vector<element::Type>{element::f32},
             ov::op::TemporaryReplaceOutputType(parent0, element::f32).get(),
-            ov::op::TemporaryReplaceOutputType(newMultiplyValuesFullPath, element::f32).get());
+            ov::op::TemporaryReplaceOutputType(input_for_fullPath, element::f32).get());

     newMultiply = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
         std::vector<element::Type>{element::f32, element::f32},
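The before/after comment above is the core algebra of this hunk. A standalone sanity check of the re-association, using made-up values that are exactly representable in binary floating point (so the equality is exact):

    // Verifies: (SC1 * (X1 - SH1)) * (SC2 * X2) == ((X1 - SH1) * X2) * (SC1 * SC2)
    #include <cassert>

    int main() {
        const float x1 = 4.0f, sh1 = 1.0f, x2 = 2.0f, sc1 = 0.5f, sc2 = 0.25f;
        const float before = (sc1 * (x1 - sh1)) * (sc2 * x2);  // 1.5f * 0.5f   = 0.75f
        const float after  = ((x1 - sh1) * x2) * (sc1 * sc2);  // 6.0f * 0.125f = 0.75f
        assert(before == after);
        return 0;
    }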
@@ -218,7 +218,6 @@ std::shared_ptr<Node> NetworkHelper::swapMultiplyAndAdd(std::shared_ptr<ov::opset1::Add> addAfterMultiply, const int multiplyBranch)
     if (multiplyConst == nullptr)
         return addAfterMultiply;

-    const auto x = multiply->input_value(multiplyInputBranch);
     auto a = as_type_ptr<ov::opset1::Constant>(multiply->get_input_node_shared_ptr(multiplyInputBranch == 0 ? 1 : 0));
     auto b = as_type_ptr<ov::opset1::Constant>(addAfterMultiply->get_input_node_shared_ptr(multiplyBranch == 0 ? 1 : 0));
     std::shared_ptr<ov::opset1::Constant> bDivA;
@@ -263,15 +262,15 @@ std::shared_ptr<Node> NetworkHelper::swapMultiplyAndAdd(std::shared_ptr<ov::opset1::Add> addAfterMultiply, const int multiplyBranch)
         bDivA = as_type_ptr<ov::opset1::Constant>(foldConvert(bDivA->output(0), a->get_element_type()));
     }

-    OutputVector inputs{ {}, {} };
-    inputs[0] = x;
-    inputs[1] = bDivA->output(0);
-
+    const auto& add_input = multiply->input_value(multiplyInputBranch);
+    // Note: precision is copied to a separate variable intentionally,
+    // since TemporaryReplaceOutputType replaces add_input's precision, whereas we need to set the original precision on newAdd's output
+    const auto add_output_precision = add_input.get_element_type();
     std::shared_ptr<ov::opset1::Add> newAdd = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Add>>(
         std::vector<element::Type>{element::f32, element::f32},
-        std::vector<element::Type>{ x.get_element_type() },
-        ov::op::TemporaryReplaceOutputType(inputs[0], element::f32).get(),
-        ov::op::TemporaryReplaceOutputType(inputs[1], element::f32).get());
+        std::vector<element::Type>{ add_output_precision },
+        ov::op::TemporaryReplaceOutputType(add_input, element::f32).get(),
+        ov::op::TemporaryReplaceOutputType(bDivA, element::f32).get());
     copyInfo(addAfterMultiply, newAdd);

     auto newMultiply = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
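swapMultiplyAndAdd rewrites (x * a) + b into (x + b/a) * a so the Multiply ends up on the outside; the hunk above only changes how the Add's output precision is captured. The underlying identity as a quick standalone check (illustrative values, a != 0):

    #include <cassert>

    int main() {
        const float x = 3.0f, a = 2.0f, b = 4.0f;
        assert((x * a) + b == (x + b / a) * a);  // 10 == 10 for these values
        return 0;
    }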
@@ -0,0 +1,104 @@
// Copyright (C) 2024 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include <memory>

#include "openvino/pass/matcher_pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API ActivationsScaling;

namespace activations_scaling {

class TRANSFORMATIONS_API ScaleDownSingleLayer;
class TRANSFORMATIONS_API EliminateScalarMul;
class TRANSFORMATIONS_API MulConcatTransformation;
class TRANSFORMATIONS_API MulShareTransformation;
class TRANSFORMATIONS_API MoveDownScalarMul;

} // namespace activations_scaling
} // namespace pass
} // namespace ov

// ActivationsScaling makes activation values smaller to prevent overflow due to the limited range of FP16.
// This feature is controlled by ov::hint::activations_scale_factor.
// For example, when this property is set to 16, activations are divided by 16.
// If ov::hint::activations_scale_factor is less than or equal to zero, scaling is disabled.
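//
// Illustrative usage (a hedged sketch assuming the standard property-passing API;
// not part of this header):
//   ov::Core core;
//   auto compiled = core.compile_model(model, "GPU",
//                                      ov::hint::activations_scale_factor(16.0f));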

// Add scale_down and scale_up layers around Convolution and MatMul nodes
// Conv/MatMul
// ==>
// Multiply(scale_down by scale_factor) --> Conv/MatMul --> Multiply(scale_up by scale_factor)
class ov::pass::activations_scaling::ScaleDownSingleLayer : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("ScaleDownSingleLayer", "0");
ScaleDownSingleLayer(float scale_factor, ov::element::Type scaled_prec);
};
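
// Why this is numerically safe (illustrative note, not from the original header):
// in exact arithmetic, MatMul(X * (1/s), W) * s == MatMul(X, W), while the
// intermediate activations are s times smaller, keeping them inside the
// representable range of f16 (maximum finite value 65504).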

// Normalization and ShapeOf have the following property.
//
// Norm(input * const_a) = Norm(input)
//
// So, we can remove a Multiply that is connected to Normalization or ShapeOf.
//
// input --> Multiply --> Normalization/ShapeOf
// ==>
// input --> Normalization/ShapeOf
class ov::pass::activations_scaling::EliminateScalarMul : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("EliminateScalarMul", "0");
EliminateScalarMul();
};
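
// Note (illustrative, not from the original header): for mean/variance
// normalizations the identity Norm(input * const_a) == Norm(input) holds for a
// positive scalar const_a (up to the epsilon term), since the centered values
// and the standard deviation both scale by const_a; ShapeOf is trivially
// unaffected by a scalar Multiply.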

// input_a const_a input_b const_b input_c const_c
// \ / \ / \ /
// Multiply_a Multiply_b Multiply_c
// \ | /
// \ | /
// ---------- Concat ------------
// ==>
// (const_a (const_b (const_c
// input_a /const_c) input_b /const_c) input_c /const_c)
// \ / \ / \ /
// Multiply_a Multiply_b Multiply_c
// \ | /
// \ | /
// ---------- Concat ------------
// | const_c
// | /
// Multiply
class ov::pass::activations_scaling::MulConcatTransformation : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("MulConcatTransformation", "0");
MulConcatTransformation();
};
Review comment on lines +76 to +80 (Contributor):

This transformation duplicates ConcatTransformation behavior. I'd suggest re-enabling ConcatTransformation (it is currently disabled) and removing MulConcatTransformation. The subgraph test you provided passes successfully with these changes.

Reply (Contributor):

Thanks @e-ddykim for the help: it was found that the current ConcatTransformation implementation doesn't handle all the cases that this transformation is able to handle.
I created a ticket for ConcatTransformation improvement: CVS-160325. After it is implemented, we will be able to remove MulConcatTransformation and reuse ConcatTransformation.


// input input
// / \ |
// Norm Mul ==> Mul (expected to be fused into the input layer)
// | | / \_
// op_a op_b Norm op_b
// |
// op_a
class ov::pass::activations_scaling::MulShareTransformation : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("MulShareTransformation", "0");
MulShareTransformation();
};

// input_b scalar input_a input_b
// \ / \ /
// input_a Mul_b ==> Mul_a' scalar
// \ / \ /
// Mul_a Mul_b' (expected to be merged with Mul_a')
class ov::pass::activations_scaling::MoveDownScalarMul : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("MoveDownScalarMul", "0");
MoveDownScalarMul();
};
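
Taken together, a plugin would typically register these passes on a pass manager. A hedged sketch of such a registration (the scale factor and target precision are example values, model is assumed to be a std::shared_ptr<ov::Model>, and the actual registration point inside the GPU plugin is not shown in this diff):

    ov::pass::Manager manager;
    const float scale_factor = 16.0f;  // e.g. taken from ov::hint::activations_scale_factor
    manager.register_pass<ov::pass::activations_scaling::ScaleDownSingleLayer>(scale_factor, ov::element::f16);
    manager.register_pass<ov::pass::activations_scaling::EliminateScalarMul>();
    manager.register_pass<ov::pass::activations_scaling::MulConcatTransformation>();
    manager.register_pass<ov::pass::activations_scaling::MulShareTransformation>();
    manager.register_pass<ov::pass::activations_scaling::MoveDownScalarMul>();
    manager.run_passes(model);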
@@ -12,7 +12,7 @@ namespace ov {

TRANSFORMATIONS_API void mark_as_dequantization_node(const std::shared_ptr<Node>& node);

-TRANSFORMATIONS_API bool is_dequantization_node(const std::shared_ptr<Node>& node);
+TRANSFORMATIONS_API bool is_dequantization_node(const std::shared_ptr<const Node>& node);

/**
* @ingroup ov_runtime_attr_api