@Tamuel Hi Tamuel, I tried to run the code but something went wrong.
As the title says, Spyder traced the error through the following call chain:
```python
logits = segmentation.network(inputs=input_image, is_training=False)  # prediction.py
decoded = slim_decoder(... output_depth=256)  # lane_segmentator.segmentation.network
net = depthwise_conv2d_layer(inputs, 3)  # tf_module.slim_decoder
output = depthwise_conv2d(inputs, kernel, bias, strides, padding, dilations, to_batch_norm,
                          batch_norm_decay, is_training, activation_fn, name='conv')  # tf_util.depthwise_conv2d_layer
output = tf.nn.depthwise_conv2d(
    input=inputs,
    filter=filters,
    strides=strides,
    padding=padding,
    rate=dilations,
    name=name
)  # tf_util.depthwise_conv2d
```
I converted each frame of the original video into an image. The data file is:
This is my first time doing this. Could you please tell me what I missed?
@Keepbright,
The issue is coming from this check inside TensorFlow:
```python
if data_format is not None and data_format.startswith("NC"):
    expected_input_rank = spatial_dims[-1]
else:
    expected_input_rank = spatial_dims[-1] + 1
try:
    input_shape.with_rank_at_least(expected_input_rank)
except ValueError:
    raise ValueError(
        "input tensor must have rank %d at least" % (expected_input_rank))
```
I think the issue is due to the dilation argument: you are probably feeding it in a format like `[1, dh, dw, 1]`, but `tf.nn.depthwise_conv2d` expects a length-2 list `[dh, dw]`.
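A minimal sketch of a correct call, with hypothetical shapes that are not taken from the repo (in TF 1.x the keyword is `rate`, as in the traceback above; in TF 2.x it is `dilations` — both expect a length-2 list, while `strides` stays length 4):

```python
import numpy as np
import tensorflow as tf

# Hypothetical example tensors: NHWC input and a [kh, kw, in_ch, multiplier] filter.
inputs = tf.constant(np.random.rand(1, 8, 8, 3), dtype=tf.float32)
filters = tf.constant(np.random.rand(3, 3, 3, 1), dtype=tf.float32)

out = tf.nn.depthwise_conv2d(
    input=inputs,
    filter=filters,
    strides=[1, 1, 1, 1],  # strides IS rank 4: [1, sh, sw, 1]
    padding='SAME',
    dilations=[1, 1],      # dilations is rank 2: [dh, dw], NOT [1, dh, dw, 1]
)
print(tuple(out.shape))
```

With `SAME` padding, stride 1 and a depth multiplier of 1, the spatial size and channel count are preserved, so the output shape is `(1, 8, 8, 3)`.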