Problem
Currently the XSenseDataReader and APDMDataReader create a (semi-)uniformly sampled time vector from a provided sampling frequency by successively adding 1/rate to the previous value of time.
Successive addition maximizes the propagation of floating-point error, since each sample inherits the rounding error of every sample before it. This is a significant contributor to another open issue, and it is related to a further issue as well; the primary difference is that this issue currently causes anyone who uses the XSenseDataReader or the APDMDataReader to have non-negligible floating-point errors in their time intervals.
Solution
I have created a sample (shown below) that generates 100000 uniformly spaced numbers with a sampling rate of 40 Hz and a starting time offset of 10.675 (chosen arbitrarily). It fills three double vectors using three methods: successive addition; multiplication of a vector of integer indices by the interval plus the offset; and the same computation using std::fma (fused multiply-add). Finally, it computes the error of the last value in each array as the difference from the known (hand-calculated) result and prints it.
I have tested a few scenarios, and the floating-point errors differ depending on the starting value, sample interval, and number of samples. Even so, std::fma has always been the most accurate, followed by multiplication, followed by addition.
My proposal is to change the XSenseDataReader and the APDMDataReader to build the time column using the std::fma approach. A more accurate uniformly sampled time column would solve many downstream processing issues (such as not triggering an unnecessary resampling in the lowpass filter).
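As a sketch of what that change could look like (the helper name and signature here are hypothetical, not the readers' actual API):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical helper illustrating the proposed approach. Each stamp is
// derived independently from its sample index via std::fma, so rounding
// error does not accumulate with the length of the recording.
std::vector<double> createUniformTimeColumn(double startTime,
                                            double samplingRateHz,
                                            std::size_t numSamples) {
    const double dt = 1.0 / samplingRateHz;
    std::vector<double> time(numSamples);
    for (std::size_t i = 0; i < numSamples; ++i) {
        // i * dt + startTime, computed with a single rounding step.
        time[i] = std::fma(static_cast<double>(i), dt, startTime);
    }
    return time;
}
```

A 40 Hz file read this way would get `time[i] = std::fma(i, 0.025, offset)`, matching the third method in the sample below.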
Sample Code
The complete example is provided in the FloatingPointPrecision folder.
#include <cmath>
#include <iomanip>
#include <iostream>
#include <vector>

int main() {
    const int startVal = 0;
    const int numEl = 100000;
    // Sampling interval for a 40 Hz sampling rate
    const double rate = 1.0 / 40.0;
    // Last value expected in the sampled array: 10.675 + 99999 / 40
    const double last_num = 2510.65;
    const double offset = 10.675;

    std::vector<int> integers(numEl);
    for (int i = startVal; i < numEl; ++i) {
        integers[i] = i;
    }

    // Method 1: successive addition (the readers' current approach)
    std::vector<double> decimalValues(numEl);
    decimalValues[0] = offset;
    double time = decimalValues[0];
    for (int i = 1; i < numEl; ++i) {
        time += rate;
        decimalValues[i] = time;
    }

    // Method 2: multiply the integer index by the interval, add the offset
    std::vector<double> decimalValues2(numEl);
    for (int i = startVal; i < numEl; ++i) {
        decimalValues2[i] = integers[i] * rate + offset;
    }

    // Method 3: the same computation via fused multiply-add
    std::vector<double> decimalValues3(numEl);
    for (int i = startVal; i < numEl; ++i) {
        decimalValues3[i] = std::fma(integers[i], rate, offset);
    }

    for (int i = startVal; i < numEl; ++i) {
        std::cout << std::fixed << std::setprecision(32)
                  << "Add: " << decimalValues[i]
                  << " Multiply: " << decimalValues2[i]
                  << " FMA: " << decimalValues3[i] << std::endl;
    }

    const double err1 = std::abs(last_num - decimalValues[numEl - 1]);
    const double err2 = std::abs(last_num - decimalValues2[numEl - 1]);
    const double err3 = std::abs(last_num - decimalValues3[numEl - 1]);
    std::cout << "\nError of last value: " << std::endl;
    std::cout << std::fixed << std::setprecision(32) << "Add: " << err1
              << " Multiply: " << err2 << " FMA: " << err3 << std::endl;
    return 0;
}
which results in the following output (per-sample lines omitted):
Error of last value:
Add: 0.00000000475074557471089065074921 Multiply: 0.00000000000045474735088646411896 FMA: 0.00000000000000000000000000000000
The addition error is on the order of 1e-9, the multiplication error is on the order of 1e-13, and the std::fma error is exactly 0 in this case.
Further Reading
std::fma