Implement scikit-learn linear models.
Global variables:
- TRUSTED_SKOPS
- USE_SKOPS
A linear regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, with the corresponding number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on LinearRegression, please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
__init__(
n_bits=8,
fit_intercept=True,
normalize='deprecated',
copy_X=True,
n_jobs=None,
positive=False
)
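A minimal usage sketch for this constructor, assuming the scikit-learn-style fit/compile/predict workflow of Concrete ML; the `fhe="execute"` predict argument is taken from recent Concrete ML releases (older versions used `execute_in_fhe=True`) and the dataset is purely illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LinearRegression

# Illustrative regression dataset
X, y = make_regression(n_samples=200, n_features=4, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 8 bits for both inputs and weights (the int form of n_bits)
model = LinearRegression(n_bits=8, fit_intercept=True)

# Train on clear data, then compile the quantized model into an FHE circuit
model.fit(X_train, y_train)
model.compile(X_train)

# Encrypted inference on a few samples (fhe="execute" is assumed from
# recent Concrete ML releases; older ones used execute_in_fhe=True)
y_pred_fhe = model.predict(X_test[:3], fhe="execute")
```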
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. It is None if the model is not fitted. More information is available in the Concrete documentation: https://docs.zama.ai/concrete/developer/terminology_and_structure#terminology
Returns:
Circuit
: The FHE circuit.
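The circuit is produced by compilation; a short sketch of the lifecycle, assuming the properties described here are exposed as `fhe_circuit`, `is_fitted` and `is_compiled` (the attribute names are not shown in the listing above):

```python
import numpy
from concrete.ml.sklearn import LinearRegression

X = numpy.random.uniform(-1, 1, size=(50, 2))
y = X @ numpy.array([2.0, -1.0])

model = LinearRegression(n_bits=8)
print(model.fhe_circuit)   # None: the model is not fitted or compiled yet

model.fit(X, y)
model.compile(X)
print(model.is_fitted)     # True
print(model.is_compiled)   # True
print(model.fhe_circuit)   # the compiled Concrete Circuit object
```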
Indicate if the model is compiled.
Returns:
bool
: If the model is compiled.
Indicate if the model is fitted.
Returns:
bool
: If the model is fitted.
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto
: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)
An ElasticNet regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, with the corresponding number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on ElasticNet, please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html
__init__(
n_bits=8,
alpha=1.0,
l1_ratio=0.5,
fit_intercept=True,
normalize='deprecated',
precompute=False,
max_iter=1000,
copy_X=True,
tol=0.0001,
warm_start=False,
positive=False,
random_state=None,
selection='cyclic'
)
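A short sketch showing the dictionary form of `n_bits` with ElasticNet, using the "op_inputs"/"op_weights" keys described above; the dataset and bit-widths are illustrative:

```python
import numpy
from concrete.ml.sklearn import ElasticNet

rng = numpy.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 3))
y = X @ numpy.array([1.5, -2.0, 0.5]) + 0.1 * rng.randn(100)

# Quantize inputs with 8 bits and learned parameters with 4 bits
model = ElasticNet(n_bits={"op_inputs": 8, "op_weights": 4}, alpha=0.1, l1_ratio=0.5)
model.fit(X, y)
model.compile(X)
```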
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. It is None if the model is not fitted. More information is available in the Concrete documentation: https://docs.zama.ai/concrete/developer/terminology_and_structure#terminology
Returns:
Circuit
: The FHE circuit.
Indicate if the model is compiled.
Returns:
bool
: If the model is compiled.
Indicate if the model is fitted.
Returns:
bool
: If the model is fitted.
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto
: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)
A Lasso regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, with the corresponding number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on Lasso, please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
__init__(
n_bits=8,
alpha: float = 1.0,
fit_intercept=True,
normalize='deprecated',
precompute=False,
copy_X=True,
max_iter=1000,
tol=0.0001,
warm_start=False,
positive=False,
random_state=None,
selection='cyclic'
)
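A hedged sketch comparing quantized clear inference with encrypted inference for Lasso; the `fhe="disable"` and `fhe="execute"` values are assumed from recent Concrete ML releases, and the dataset is illustrative:

```python
import numpy
from sklearn.datasets import make_regression

from concrete.ml.sklearn import Lasso

X, y = make_regression(n_samples=150, n_features=5, noise=0.5, random_state=1)

model = Lasso(n_bits=8, alpha=0.5)
model.fit(X, y)
model.compile(X)

# Quantized inference in the clear vs. encrypted inference on a few samples;
# both run the same quantized model, so the results should match closely
y_clear = model.predict(X[:3], fhe="disable")
y_fhe = model.predict(X[:3], fhe="execute")
print(numpy.abs(y_clear - y_fhe).max())
```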
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. It is None if the model is not fitted. More information is available in the Concrete documentation: https://docs.zama.ai/concrete/developer/terminology_and_structure#terminology
Returns:
Circuit
: The FHE circuit.
Indicate if the model is compiled.
Returns:
bool
: If the model is compiled.
Indicate if the model is fitted.
Returns:
bool
: If the model is fitted.
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto
: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)
A Ridge regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, with the corresponding number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on Ridge, please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
__init__(
n_bits=8,
alpha: float = 1.0,
fit_intercept=True,
normalize='deprecated',
copy_X=True,
max_iter=None,
tol=0.001,
solver='auto',
positive=False,
random_state=None
)
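A sketch contrasting the quantized Ridge model with its float scikit-learn counterpart to gauge the effect of `n_bits`; the dataset is illustrative and `fhe="disable"` (quantized inference in the clear) is assumed from recent Concrete ML releases:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge as SklearnRidge
from sklearn.metrics import r2_score

from concrete.ml.sklearn import Ridge

X, y = make_regression(n_samples=200, n_features=6, noise=1.0, random_state=2)

# Quantized, FHE-ready model and a float scikit-learn model on the same data
fhe_model = Ridge(n_bits=8, alpha=1.0)
fhe_model.fit(X, y)
fhe_model.compile(X)

float_model = SklearnRidge(alpha=1.0).fit(X, y)

# With 8 bits of quantization the two scores should stay close
print(r2_score(y, float_model.predict(X)))
print(r2_score(y, fhe_model.predict(X, fhe="disable")))
```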
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. It is None if the model is not fitted. More information is available in the Concrete documentation: https://docs.zama.ai/concrete/developer/terminology_and_structure#terminology
Returns:
Circuit
: The FHE circuit.
Indicate if the model is compiled.
Returns:
bool
: If the model is compiled.
Indicate if the model is fitted.
Returns:
bool
: If the model is fitted.
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto
: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)
A logistic regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, with the corresponding number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on LogisticRegression, please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
__init__(
n_bits=8,
penalty='l2',
dual=False,
tol=0.0001,
C=1.0,
fit_intercept=True,
intercept_scaling=1,
class_weight=None,
random_state=None,
solver='lbfgs',
max_iter=100,
multi_class='auto',
verbose=0,
warm_start=False,
n_jobs=None,
l1_ratio=None
)
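A classification sketch for this constructor; `fhe="execute"` and a scikit-learn-style `predict_proba` are assumed from recent Concrete ML releases, and the dataset is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, n_informative=5, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

model = LogisticRegression(n_bits=8, C=1.0, max_iter=100)
model.fit(X_train, y_train)
model.compile(X_train)

# Encrypted prediction on a couple of samples (FHE inference is slow,
# so keep the encrypted batch small)
y_pred = model.predict(X_test[:2], fhe="execute")

# Class probabilities, assumed to follow the usual scikit-learn interface
proba = model.predict_proba(X_test[:2], fhe="execute")
```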
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. It is None if the model is not fitted. More information is available in the Concrete documentation: https://docs.zama.ai/concrete/developer/terminology_and_structure#terminology
Returns:
Circuit
: The FHE circuit.
Indicate if the model is compiled.
Returns:
bool
: If the model is compiled.
Indicate if the model is fitted.
Returns:
bool
: If the model is fitted.
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto
: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)