Why is the delay/latency of the neural net considered in the yolo_kp example but not in the netx/mnist example? #874
---
Hi @gwgknudayanga
This is expected when using connection models (like Dense or Sparse): each connection introduces one timestep of latency.
For the first 2 timesteps, L3 will process garbage (Dense23 produces all zeros as output even when there is no input, to avoid deadlocks). Only after the input has propagated through all layers (timestep 3) can you consider the output of L3 valid. Take all of the above with a grain of salt, but it might be helpful. BR
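To make the pipelining concrete, here is a toy sketch (plain Python, not lava code; the 3-layer L1 → L2 → L3 structure is inferred from the Dense23/L3 names above) of how one timestep of delay per layer shifts the output:

```python
# Toy model, not lava: each layer after the first holds its input for
# one timestep, so the network output lags behind the network input.
num_layers = 3                 # e.g. L1 -> Dense12 -> L2 -> Dense23 -> L3
inputs = list(range(1, 11))    # inputs presented at timesteps 1..10
state = [0] * num_layers       # per-layer buffered value, initially zero

outputs = []
for x in inputs:
    # Advance the pipeline by one timestep, deepest layer first.
    for i in range(num_layers - 1, 0, -1):
        state[i] = state[i - 1]
    state[0] = x
    outputs.append(state[-1])

print(outputs)  # [0, 0, 1, 2, ...]: garbage for the first 2 timesteps,
                # first valid output at timestep 3, as described above
```

This is also why yolo_kp keeps a FIFO of depth len(net) + 1: the output available at timestep t corresponds to an input presented roughly len(net) timesteps earlier, so the frames have to be delayed by the same amount before comparing them with the network output. The exact offset depends on the process models, hence the grain of salt.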
---
Hi Experts,

In the yolo_kp example, the input frames and annotations are buffered to compensate for the latency of the network:

```python
frame_buffer = netx.modules.FIFO(depth=len(net) + 1)
annotation_buffer = netx.modules.FIFO(depth=len(net) + 1)
```
But in the netx/mnist example there is no such compensation for the latency introduced by the layers of the network.

My assumption was that, within a single timestep, the input to the network is completely propagated through to the output by the end of that timestep. If that were the case, why would we worry about an additional offset/delay equal to len(net) in yolo_kp? This is very confusing.
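If I understand correctly, the FIFO just delays each frame by len(net) timesteps so that it lines up with the network output it belongs to. A minimal sketch of my understanding, using a plain collections.deque rather than the actual netx.modules.FIFO:

```python
from collections import deque

# My understanding as a sketch (collections.deque, not the actual
# netx.modules.FIFO): a buffer of depth len(net) + 1 delays each
# frame by len(net) timesteps.
net_len = 5                                   # stand-in for len(net)
fifo = deque([None] * (net_len + 1), maxlen=net_len + 1)

for t in range(12):
    fifo.append(f'frame_{t}')   # newest frame in, oldest frame out
    delayed = fifo[0]           # the frame from net_len timesteps ago
    # `delayed` is the frame that the network output at timestep t
    # would correspond to (None while the pipeline is still filling).
    print(t, delayed)
```

But if the input fully propagated within one timestep, none of this buffering would be necessary, which is exactly what I don't understand.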
Thanks and Rgds,
Udayanga