In the BuildArmComputeTensor(Tensor& tensor, const armnn::TensorInfo& tensorInfo) function in the ArmComputeTensorUtils.hpp file, I tried to change the alignment of the ComputeTensor memory. That is, I modified
tensor.allocator()->init(BuildArmComputeTensorInfo(tensorInfo));
to
tensor.allocator()->init(BuildArmComputeTensorInfo(tensorInfo), 4096);
so that the allocated memory is 4K-aligned (the default is 64 bytes). However, in testing I found that some operators now execute more slowly, and I don't understand why.
Superficially I can't see any obvious reason why inference performance would decrease by changing the tensor alignment. Is it CpuAcc or GpuAcc you're using? If you can tell me the hardware as well, that might be relevant too.