RuntimeError: invalid 'type' (environment) of argument #1423
Using the functional API in Keras to train a multi-output model, the value supplied to `loss` in `compile()` is expected to provide one loss function per output. However, this API does not cover the use case when you have multiple outputs and you need the values of all the outputs in one scope to calculate the loss. There are, however, two straightforward ways to do this; the one your model already uses is to concatenate the outputs with `layer_concatenate()` into a single tensor and split them back apart inside one custom loss function.
You can pass `run_eagerly = TRUE` to `compile()`, which lets you drop into `browser()` from inside the loss function whenever a `NaN` appears:
custom_loss_fn <- function(y_true, y_pred) {
... # same as before
single_gaussian_nll <- function(.x) {
c(y, mu, sigma, p) %<-% .x
result <- ... # calculate as before
if (py_bool(op_isnan(result))) browser()
result
}
total_nll <- ... # same as before
if (py_bool(op_isnan(total_nll))) browser()
total_nll
}
model |> compile(run_eagerly = TRUE, loss = custom_loss_fn)
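For context, here is a minimal sketch of that concatenate-and-split pattern on a toy model (the layer sizes and the `my_loss` name are made up for illustration, not taken from this thread):
library(keras3)
input <- layer_input(shape = c(4))
out_a <- input |> layer_dense(2, name = "a")
out_b <- input |> layer_dense(2, name = "b")
# Concatenating the outputs yields one tensor, so a single loss
# function sees both parts at once.
model <- keras_model(input, layer_concatenate(out_a, out_b))
my_loss <- function(y_true, y_pred) {
  # Split the concatenated prediction back into its two halves.
  c(a, b) %<-% op_split(y_pred, 2, axis = 2)
  op_mean((y_true - a)^2) + op_mean(op_abs(b))
}
model |> compile(loss = my_loss, optimizer = "adam")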
Many thanks.
Thanks, I could reproduce. This was slightly harder to track down than I expected. The issue is that the custom loss you are calculating returns `NaN` (and eventually `Inf`) values. Here is your code updated with inserted debugging calls (`str()`, `print()`, and `browser()` guarded by `op_isnan()`/`op_isinf()` checks):
#install_keras()
Sys.setenv("CUDA_VISIBLE_DEVICES"="")
library(keras3)
# library(tensorflow, exclude = c("set_random_seed", "shape"))
library(reticulate)
num_components = 2 # Number of mixture components
input <- layer_input(shape = c(100)) # a 1-dimensional input
# Define a hidden layer
hidden <- input %>%
layer_dense(units = 128, activation = 'relu') %>%
layer_dense(units = 64, activation = 'relu')
# Output layers for mixture components
mu <- hidden %>%
layer_dense(units = num_components,
name = 'mu') # Means of the Gaussians
sigma <- hidden %>%
layer_dense(units = num_components,
activation = 'softplus',
name = 'sigma') # Standard deviation of the Gaussians (positive)
p <- hidden %>%
layer_dense(units = num_components,
activation = 'softmax',
name = 'p') # Mixture coefficients (sum to 1)
model <-
keras_model(inputs = input,
outputs = layer_concatenate(mu, sigma, p))
# A debugging stand-in for op_vectorized_map(): split each tensor in
# `elements` into per-sample slices, apply `fn` to each tuple of slices,
# and stack the results back into a batch. Much slower, but it lets
# str()/browser() run once per sample.
op_vectorized_map_debug <- function(elements, fn) {
batch_size <- elements[[1]] |> op_shape() |> _[[1]]
elements |>
lapply(\(e) op_split(e, batch_size)) |>
zip_lists() |>
lapply(fn) |>
op_stack()
}
ii <- 0L
custom_loss_fn <- function(y_true, y_pred) {
ii <<- ii + 1L
str(keras3:::named_list(ii, y_true, y_pred))
## browser() is safe to use here to work with the `y_true` and
## `y_pred` tracing tensors interactively. Just be sure to exit the
## browser context by pressing "Continue" (to raise an error) rather
## than "Quit". If you "Quit" the R browser context, it leaves the
## TensorFlow tracing context open, nothing else will work as
## expected, and it will eventually segfault.
if(py_bool(op_any(op_isnan(y_pred)))) browser()
c(mu, sigma, p) %<-% op_split(y_pred, 3, axis = 2)
sigma %<>% `+`(config_epsilon())
single_gaussian_nll <- function(.x) {
c(y, mu, sigma, p) %<-% .x
result <- -op_log(op_sum(op_exp(
op_log(p) +
(
-op_log(sigma) - op_log(op_sqrt(2 * pi)) - (1 / 2)
* ((y - mu) ^ 2 / sigma ^ 2)
)
)))
if(py_bool(op_isinf(result))) str(c(.x, result = result))
result
}
total_nll <-
op_sum(op_vectorized_map_debug(list(y_true, mu, sigma, p),
single_gaussian_nll))
if(py_bool(op_any(op_isnan(total_nll)))) browser()
if(py_bool(op_any(op_isinf(total_nll)))) browser()
str(keras3:::named_list(ii, total_nll))
print(total_nll)
total_nll
}
model |> compile(run_eagerly = TRUE,
loss = custom_loss_fn)
# data simulation
theta_alpha <- -10
theta_beta <- 10
alpha <- 1 / 4
sigma_1 <- 1
sigma_2 <- 0.1
n <- 10000 #number of samples from prior distribution
theta_prior <- runif(n, min = theta_alpha, max = theta_beta)
x_simulated <- matrix(nrow = n, ncol = 100)
for (i in 1:n) {
for (j in 1:100) {
indic <- rbinom(1, 1, alpha)
x_simulated[i, j] <-
indic * rnorm(1, mean = theta_prior[i], sd = sigma_1) +
(1 - indic) * rnorm(1, mean = -theta_prior[i], sd = sigma_2)
}
}
model %>% fit(x_simulated,
matrix(theta_prior, ncol = 1),
epochs = 10,
batch_size = 100)
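A side note on where those `NaN`/`Inf` values come from: `-op_log(op_sum(op_exp(...)))` in `single_gaussian_nll()` is a log-sum-exp computed naively, so when the per-component log-densities are very negative, `op_exp()` underflows to 0 and `-op_log(0)` gives `Inf`. Below is a minimal sketch of a numerically stable rewrite, assuming your keras3 version exports `op_logsumexp()` (wrapping `keras.ops.logsumexp`); the `stable_loss_fn` name is made up for illustration:
stable_loss_fn <- function(y_true, y_pred) {
  c(mu, sigma, p) %<-% op_split(y_pred, 3, axis = 2)
  sigma <- sigma + config_epsilon()
  # Per-component Gaussian log-density; broadcasting handles the
  # (batch, 1) y_true against the (batch, num_components) parameters.
  log_density <- -op_log(sigma) - 0.5 * log(2 * pi) -
    0.5 * ((y_true - mu) / sigma)^2
  # log-sum-exp over the component axis, vectorized over the batch.
  op_sum(-op_logsumexp(op_log(p) + log_density, axis = 2))
}
This also removes the need for the per-sample `op_vectorized_map_debug()` loop, since everything is computed on whole batch tensors.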
Many thanks for your prompt reply. I will continue to debug with your generous help.
It works. Thanks a lot.
Hi, I am using keras and tensorflow in R to train a mixture density network. My customized loss function has been tested.
However, when I try to fit the model, there is always an error:
RuntimeError: invalid 'type' (environment) of argument
My code is as follows:
Thanks in advance for any comments.