Add least squares batch filter #85
Levenberg-Marquardt does not work well with a single noisy measurement. That isn't surprising given only one measurement, so I need to implement the multiple-measurement approach. That would be quite similar to a batch least squares, and the size of the Jacobian would vary with the number of measurements. The LM crate only supports fixed sizes, so I may just implement the BLS approach using a smarter optimization approach, like one from argmin, scheduled in #119.
Hello Chris, not sure this is relevant here, but anyway: my implementation is very tied to the process of solving a position.
This work should also move the
High level description
Least squares batch filters and classical Kalman filters are both methods for estimating the state of a dynamic system from noisy measurements. They both use a mathematical model of the system to predict the state at future times and then use the measurements to correct the predictions and update the estimate of the state.
One advantage of a least squares batch filter is that it can provide a more accurate estimate of the state than a classical Kalman filter, especially when the measurements are correlated or the system is nonlinear. This is because the batch filter solves for the state that minimizes the sum of squared residuals over all measurements at once, which is optimal in a least squares sense and accounts for the correlations between the measurements.
A disadvantage of a least squares batch filter is that it requires all of the measurements to be available at once in order to compute the optimal estimate of the state. This is known as a batch processing approach, and it is not suitable for systems that require a real-time estimate of the state. In contrast, a classical Kalman filter can update the estimate of the state incrementally as new measurements become available, using a recursive algorithm that is more computationally efficient than the batch approach.
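To make the batch idea concrete, here is a minimal sketch of a linear batch estimate via the normal equations, x̂ = (HᵀH)⁻¹Hᵀy, which requires the full measurement set up front. The line-fit scenario and all names (`batch_least_squares`, the slope/intercept model) are hypothetical illustrations, not Nyx APIs.

```python
import numpy as np

def batch_least_squares(H, y):
    """Solve the normal equations: x_hat = (H^T H)^{-1} H^T y.

    Every measurement in y must be available before this runs,
    which is what makes this a batch (not recursive) estimator.
    """
    return np.linalg.solve(H.T @ H, H.T @ y)

# Hypothetical example: estimate the bias and drift of a measurement
# trend y = x0 + x1 * t from all samples at once.
t = np.linspace(0.0, 9.0, 10)
H = np.column_stack([np.ones_like(t), t])  # linear measurement model
rng = np.random.default_rng(0)
y = (1.0 + 0.5 * t) + 0.01 * rng.standard_normal(t.size)

x_hat = batch_least_squares(H, y)  # x_hat[0] ~ 1.0, x_hat[1] ~ 0.5
```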
Another disadvantage of a least squares batch filter is that it can be more sensitive to model errors and outliers in the measurements than a classical Kalman filter. This is because the least squares method assumes that the model and measurements are correct, and any errors or outliers in the data can bias the estimate of the state. In contrast, a classical Kalman filter can use a process noise covariance matrix to account for model errors and a measurement noise covariance matrix to downweight the influence of outliers on the estimate of the state.
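The downweighting idea above can be illustrated with a weighted least squares estimate, x̂ = (HᵀR⁻¹H)⁻¹HᵀR⁻¹y, where a measurement assigned a large variance in R barely influences the solution. This is a standalone sketch with hypothetical data, not how Nyx handles outliers.

```python
import numpy as np

def weighted_least_squares(H, y, R):
    """x_hat = (H^T R^-1 H)^-1 H^T R^-1 y: measurements with a large
    variance in R are downweighted in the estimate."""
    W = np.linalg.inv(R)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

# Hypothetical scalar state observed directly. The last measurement is
# an outlier, but its large assumed variance downweights it.
y = np.array([1.01, 0.99, 1.02, 5.0])
H = np.ones((4, 1))
R = np.diag([0.01, 0.01, 0.01, 100.0])  # outlier gets variance 100

x_hat = weighted_least_squares(H, y, R)  # stays near 1, not pulled to 5
```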
The Levenberg-Marquardt optimization algorithm is a method for solving nonlinear least squares problems, which are optimization problems where the objective function is the sum of squares of nonlinear functions. This type of optimization problem is commonly used in state estimation, where the objective function is the sum of squares of the residuals between the measurements and the model predictions.
A Levenberg-Marquardt optimization algorithm can be used to solve a least squares batch filter problem, where the objective function is the sum of squares of the residuals between the measurements and the model predictions at the current estimate of the state. The algorithm can iteratively update the estimate of the state by computing the Jacobian matrix of the residuals with respect to the state variables and using this Jacobian matrix to compute a search direction for the update. The algorithm can then compute the optimal step size along this search direction using the Levenberg-Marquardt damping factor, which balances the tradeoff between the accuracy of the estimate and the convergence of the algorithm.
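The iteration described above can be sketched for a scalar state, where the update reduces to x ← x + (JᵀJ + λ)⁻¹Jᵀr and the damping factor λ is raised or lowered based on whether the cost decreased. The exponential model and the `levenberg_marquardt` helper are hypothetical illustrations of the algorithm, not the LM crate's or Nyx's API.

```python
import numpy as np

def levenberg_marquardt(f, jac, x0, y, lam=1e-3, iters=50):
    """Minimal scalar-state Levenberg-Marquardt.

    Minimizes sum((y - f(x))^2) with the damped update
    x <- x + (J^T J + lam)^-1 J^T r, where r = y - f(x).
    """
    x = x0
    cost = np.sum((y - f(x)) ** 2)
    for _ in range(iters):
        r = y - f(x)
        J = jac(x)  # d f / d x, one entry per measurement
        step = (J @ r) / (J @ J + lam)
        x_new = x + step
        cost_new = np.sum((y - f(x_new)) ** 2)
        if cost_new < cost:
            # Accept: reduce damping (behaves more like Gauss-Newton).
            x, cost, lam = x_new, cost_new, lam / 10.0
        else:
            # Reject: raise damping (behaves more like gradient descent).
            lam *= 10.0
    return x

# Hypothetical nonlinear model y = exp(a * t); recover a from samples.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(1.5 * t)
a_hat = levenberg_marquardt(lambda a: np.exp(a * t),
                            lambda a: t * np.exp(a * t),
                            x0=0.0, y=y)
```

Note how the damping factor implements the tradeoff mentioned above: a large λ takes small, safe gradient-descent-like steps, while a small λ takes fast Gauss-Newton-like steps near the solution.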
In summary, a Levenberg-Marquardt optimization algorithm can be used to solve a least squares batch filter problem, but it is not the only method that can be used. Other methods, such as the Gauss-Newton method or the trust-region method, can also be used to solve this type of optimization problem.
Requirements
Test plans
Edge cases
Design
This is the design section. Each subsection has its own subsection in the quality assurance document.
API definition
Define how the Nyx APIs will be affected by this: what new functions are available, whether any previous functions change their definitions, why these functions are named as they are, etc.
High level architecture
Human note: this does not seem correct. ChatGPT is recommending using a mix of Levenberg-Marquardt and an LSBF.
Detailed design
The detailed design will be used in the documentation of how Nyx works.
Feel free to fill out additional QA sections here, but these will typically be determined during the development, including the release in which this issue will be tackled.