support other quantizers in QConv2DBatchnorm
jmduarte committed Apr 21, 2021
1 parent 8ad3bec commit bafb268
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion hls4ml/model/optimizer/passes/qkeras.py
@@ -106,7 +106,7 @@ class QKerasFactorizeAlpha(OptimizerPass):
     and an 'ApplyAlpha' layer is inserted to reapply the scale.
     '''
     def match(self, node):
-        q_layer = node.__class__.__name__ in ["Dense", "Conv1D", "Conv2D"]
+        q_layer = node.__class__.__name__ in ["Dense", "Conv1D", "Conv2D", "Conv2DBatchnorm"]
         has_w_quant = node.get_attr('weight_quantizer') is not None
         has_b_quant = node.get_attr('bias_quantizer') is not None
         has_w_alpha, has_b_alpha = False, False
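The one-line change widens the `match` check of the `QKerasFactorizeAlpha` optimizer pass so that `Conv2DBatchnorm` nodes (produced by QKeras's `QConv2DBatchnorm` layer) are also matched and have their quantizer scale factorized out. A minimal, self-contained sketch of this matching pattern is below; the `Node` class, its `get_attr` method, and the simplified return condition are hypothetical stand-ins for illustration, not the real hls4ml API:

```python
# Sketch of an optimizer-pass match() that selects nodes by layer class
# name and by the presence of quantizer attributes.
# NOTE: Node and Conv2DBatchnorm here are hypothetical stand-ins, and the
# match condition is abbreviated relative to the real hls4ml pass.

class Node:
    """Stand-in for a model-graph node with a simple attribute store."""
    def __init__(self, attrs=None):
        self.attrs = attrs or {}

    def get_attr(self, name):
        return self.attrs.get(name)


class Conv2DBatchnorm(Node):
    """Stand-in for the fused conv + batchnorm layer node."""


class QKerasFactorizeAlphaSketch:
    # The commit adds "Conv2DBatchnorm" to this list so QConv2DBatchnorm
    # layers are matched alongside Dense/Conv1D/Conv2D.
    SUPPORTED = ["Dense", "Conv1D", "Conv2D", "Conv2DBatchnorm"]

    def match(self, node):
        q_layer = node.__class__.__name__ in self.SUPPORTED
        has_w_quant = node.get_attr('weight_quantizer') is not None
        has_b_quant = node.get_attr('bias_quantizer') is not None
        # Simplified: match quantized layers of a supported class.
        return q_layer and (has_w_quant or has_b_quant)


if __name__ == "__main__":
    node = Conv2DBatchnorm({'weight_quantizer': object()})
    print(QKerasFactorizeAlphaSketch().match(node))  # prints True
```

With the old `SUPPORTED` list (without `"Conv2DBatchnorm"`), the same node would fall through unmatched, so its quantizer alpha would never be factorized out.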
