fit! on a LinearBinaryClassifier fails for Float32 values #42
Comments
Hi @tiemvanderdeure. Thanks for reporting this issue. Out of curiosity, could you tell how much you would need Float32 support? The second situation already promotes the target to Float64:

```julia
julia> (X.a .+ X.b .+ rand(200)) |> typeof
Vector{Float64} (alias for Array{Float64, 1})
```

In any case, I've given fixing this a shot in #43, but it looks not as easy as I hoped.
I discovered this error because I am working with data extracted from rasters, which is often Float32. I don't actually need the Float32 computations, so if it's not easy to fix I'll just put in a safeguard that converts to Float64 before the data is passed to the machine (a sketch of such a safeguard follows the example below).

Sloppy on my part with the second example. So in reality:

```julia
using GLM, CategoricalArrays, MLJBase, MLJGLMInterface

X = (a = rand(Float32, 200), b = rand(Float32, 200))
y_categorical = X.a .+ X.b .> rand(200)

mach = MLJBase.machine(LinearBinaryClassifier(), X, categorical(y_categorical))
mach2 = MLJBase.machine(LinearRegressor(), X, X.a .+ X.b .+ rand(Float32, 200))

fit!(mach)   # errors
fit!(mach2)  # also errors!
```
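For what it's worth, a minimal sketch of such a safeguard, assuming a column-table (NamedTuple) input as in the example above; the `to_float64` helper is made up for illustration and is not part of MLJGLMInterface:

```julia
# Widen floating-point columns (Float32/Float16) to Float64 before building
# the machine; non-float columns (e.g. categorical) are passed through unchanged.
to_float64(col::AbstractVector{<:AbstractFloat}) = Float64.(col)
to_float64(col) = col

X32 = (a = rand(Float32, 200), b = rand(Float32, 200))
X64 = map(to_float64, X32)   # NamedTuple with the same names, Float64 columns

eltype(X64.a)   # Float64
```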
Solved by #45
Fitting a machine with a `LinearBinaryClassifier` model errors if the input data has datatype `Float32` or `Float16`. The same regression using vanilla GLM works. A `LinearRegressor` also works.

A reproducible example:
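Presumably along these lines, per the corrected snippet in the comments above (per the first comment, the second target in the original example used `rand(200)`, which promotes to `Float64`):

```julia
using GLM, CategoricalArrays, MLJBase, MLJGLMInterface

X = (a = rand(Float32, 200), b = rand(Float32, 200))   # Float32 feature columns
y_categorical = X.a .+ X.b .> rand(200)

mach = MLJBase.machine(LinearBinaryClassifier(), X, categorical(y_categorical))
fit!(mach)   # errors on the Float32 input
```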
Stacktrace: