torch compilation function.
Global variables:
- MAX_BITWIDTH_BACKWARD_COMPATIBLE
- OPSET_VERSION_FOR_ONNX_EXPORT
convert_torch_tensor_or_numpy_array_to_numpy_array(
torch_tensor_or_numpy_array: Union[Tensor, ndarray]
) → ndarray
Convert a torch tensor or a numpy array to a numpy array.
Args:
- torch_tensor_or_numpy_array (Tensor): the value that is either a torch tensor or a numpy array.
Returns:
- numpy.ndarray: the value converted to a numpy array.
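A minimal usage sketch of this helper is shown below; it assumes the function is importable from concrete.ml.torch.compile (the module documented here) and uses illustrative inputs.

```python
import numpy
import torch

# Import path assumed to be the module documented on this page.
from concrete.ml.torch.compile import (
    convert_torch_tensor_or_numpy_array_to_numpy_array,
)

# A torch tensor is converted to a numpy array.
as_numpy = convert_torch_tensor_or_numpy_array_to_numpy_array(torch.randn(2, 3))
assert isinstance(as_numpy, numpy.ndarray)

# A numpy array is returned as a numpy array as well (no conversion needed).
already_numpy = convert_torch_tensor_or_numpy_array_to_numpy_array(numpy.ones((2, 3)))
assert isinstance(already_numpy, numpy.ndarray)
```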
compile_torch_model(
torch_model: Module,
torch_inputset: Union[Tensor, ndarray, Tuple[Union[Tensor, ndarray], ...]],
import_qat: bool = False,
configuration: Optional[Configuration] = None,
artifacts: Optional[DebugArtifacts] = None,
show_mlir: bool = False,
n_bits=8,
rounding_threshold_bits: Optional[int] = None,
p_error: Optional[float] = None,
global_p_error: Optional[float] = None,
verbose: bool = False
) → QuantizedModule
Compile a torch module into an FHE equivalent.
Take a model in torch, convert it to numpy, quantize its inputs / weights / outputs and finally compile it with Concrete.
Args:
- torch_model (torch.nn.Module): the model to quantize.
- torch_inputset (Dataset): the calibration input-set; can contain either torch tensors or numpy.ndarray.
- import_qat (bool): set to True to import a network that contains quantizers and was trained using quantization-aware training.
- configuration (Configuration): Configuration object to use during compilation.
- artifacts (DebugArtifacts): Artifacts object to fill during compilation.
- show_mlir (bool): if set, the MLIR produced by the converter and sent to the compiler backend is shown on the screen, e.g., for debugging or demos.
- n_bits: the number of bits for the quantization.
- rounding_threshold_bits (int): if not None, every accumulator in the model is rounded down to the given number of bits of precision.
- p_error (Optional[float]): probability of error of a single PBS.
- global_p_error (Optional[float]): probability of error of the full circuit. In FHE simulation, global_p_error is set to 0.
- verbose (bool): whether to show compilation information.
Returns:
- QuantizedModule: the resulting compiled QuantizedModule.
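The following sketch illustrates a typical call; it assumes the function is importable from concrete.ml.torch.compile, and the tiny network, the random calibration data and the fhe="simulate" inference call are illustrative assumptions rather than the library's own example.

```python
import numpy
import torch

from concrete.ml.torch.compile import compile_torch_model  # assumed import path


class TinyNet(torch.nn.Module):
    """Hypothetical example model, small enough to keep accumulators narrow."""

    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 16)
        self.fc2 = torch.nn.Linear(16, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


# Calibration input-set: torch tensors or numpy arrays are both accepted.
inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)

quantized_module = compile_torch_model(
    TinyNet(),
    inputset,
    n_bits=8,                   # quantization bit-width
    rounding_threshold_bits=6,  # optional: round accumulators down to 6 bits
)

# Assumes QuantizedModule.forward accepts an fhe="simulate" mode.
prediction = quantized_module.forward(inputset[:1], fhe="simulate")
```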
compile_onnx_model(
onnx_model: ModelProto,
torch_inputset: Union[Tensor, ndarray, Tuple[Union[Tensor, ndarray], ...]],
import_qat: bool = False,
configuration: Optional[Configuration] = None,
artifacts: Optional[DebugArtifacts] = None,
show_mlir: bool = False,
n_bits=8,
rounding_threshold_bits: Optional[int] = None,
p_error: Optional[float] = None,
global_p_error: Optional[float] = None,
verbose: bool = False
) → QuantizedModule
Compile an ONNX model into an FHE equivalent.
Take a model exported to ONNX, quantize its inputs / weights / outputs and finally compile it with Concrete-Python.
Args:
- onnx_model (onnx.ModelProto): the model to quantize.
- torch_inputset (Dataset): the calibration input-set; can contain either torch tensors or numpy.ndarray.
- import_qat (bool): flag to signal that the network being imported contains quantizers in its computation graph and that Concrete ML should not re-quantize it.
- configuration (Configuration): Configuration object to use during compilation.
- artifacts (DebugArtifacts): Artifacts object to fill during compilation.
- show_mlir (bool): if set, the MLIR produced by the converter and sent to the compiler backend is shown on the screen, e.g., for debugging or demos.
- n_bits: the number of bits for the quantization.
- rounding_threshold_bits (int): if not None, every accumulator in the model is rounded down to the given number of bits of precision.
- p_error (Optional[float]): probability of error of a single PBS.
- global_p_error (Optional[float]): probability of error of the full circuit. In FHE simulation, global_p_error is set to 0.
- verbose (bool): whether to show compilation information.
Returns:
- QuantizedModule: the resulting compiled QuantizedModule.
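A sketch of the ONNX path is shown below; it assumes the model is first exported with torch.onnx.export using the OPSET_VERSION_FOR_ONNX_EXPORT constant listed at the top of this page, and the tiny model and file name are illustrative.

```python
import numpy
import onnx
import torch

# Assumed import path; OPSET_VERSION_FOR_ONNX_EXPORT is the module constant listed above.
from concrete.ml.torch.compile import OPSET_VERSION_FOR_ONNX_EXPORT, compile_onnx_model

model = torch.nn.Sequential(
    torch.nn.Linear(10, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2)
)
inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)

# Export the torch model to ONNX with the opset version expected by Concrete ML.
torch.onnx.export(
    model,
    torch.from_numpy(inputset[:1]),
    "tiny_net.onnx",  # illustrative file name
    opset_version=OPSET_VERSION_FOR_ONNX_EXPORT,
)

# Compile the exported ONNX graph; the input-set drives quantization calibration.
quantized_module = compile_onnx_model(
    onnx.load("tiny_net.onnx"),
    inputset,
    n_bits=8,
)
```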
compile_brevitas_qat_model(
torch_model: Module,
torch_inputset: Union[Tensor, ndarray, Tuple[Union[Tensor, ndarray], ...]],
n_bits: Optional[Union[int, dict]] = None,
configuration: Optional[Configuration] = None,
artifacts: Optional[DebugArtifacts] = None,
show_mlir: bool = False,
rounding_threshold_bits: Optional[int] = None,
p_error: Optional[float] = None,
global_p_error: Optional[float] = None,
output_onnx_file: Union[Path, str] = None,
verbose: bool = False
) → QuantizedModule
Compile a Brevitas Quantization Aware Training model.
The torch_model parameter is a subclass of torch.nn.Module that uses quantized operations from brevitas.qnn. The model must be trained before calling this function, which compiles the trained model to FHE.
Args:
- torch_model (torch.nn.Module): the model to quantize.
- torch_inputset (Dataset): the calibration input-set; can contain either torch tensors or numpy.ndarray.
- n_bits (Optional[Union[int, dict]]): the number of bits for the quantization. By default, for most models, a value of None should be given, which instructs Concrete ML to use the bit-widths configured using Brevitas quantization options. For some networks that perform a non-linear operation on an input or an output, if None is given, a default value of 8 bits is used for the input/output quantization. For such models the user can also specify a dictionary with model_inputs/model_outputs keys to override the 8-bit default, or a single integer for both values.
- configuration (Configuration): Configuration object to use during compilation.
- artifacts (DebugArtifacts): Artifacts object to fill during compilation.
- show_mlir (bool): if set, the MLIR produced by the converter and sent to the compiler backend is shown on the screen, e.g., for debugging or demos.
- rounding_threshold_bits (int): if not None, every accumulator in the model is rounded down to the given number of bits of precision.
- p_error (Optional[float]): probability of error of a single PBS.
- global_p_error (Optional[float]): probability of error of the full circuit. In FHE simulation, global_p_error is set to 0.
- output_onnx_file (str): temporary file to store the ONNX model. If None, a temporary file is generated.
- verbose (bool): whether to show compilation information.
Returns:
- QuantizedModule: the resulting compiled QuantizedModule.
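A sketch of compiling a Brevitas QAT network is shown below; the brevitas.nn layers, the import path and the tiny network are illustrative assumptions, and the model would normally be trained before compilation.

```python
import brevitas.nn as qnn
import numpy
import torch

from concrete.ml.torch.compile import compile_brevitas_qat_model  # assumed import path


class TinyQATNet(torch.nn.Module):
    """Hypothetical QAT model built with Brevitas quantized layers."""

    def __init__(self):
        super().__init__()
        self.quant_inp = qnn.QuantIdentity(bit_width=4, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(10, 16, bias=True, weight_bit_width=4)
        self.relu = qnn.QuantReLU(bit_width=4, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(16, 2, bias=True, weight_bit_width=4)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(self.quant_inp(x))))


inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)

# n_bits is left at None so the bit-widths configured in Brevitas are used.
quantized_module = compile_brevitas_qat_model(
    TinyQATNet(),  # normally a model trained with QAT beforehand
    inputset,
)
```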