Honeycomb 0.1
Component-Model Framework

honey::LinearLeastSqr< Real > Class Template Reference

Linear least squares solver.
#include <LinearLeastSqr.h>
Public Types

  typedef Matrix< matrix::dynamic, matrix::dynamic, Real >  Matrix
  typedef Vec< matrix::dynamic, Real >  Vec

Public Member Functions

  LinearLeastSqr ()

  void calc (const Matrix &x, const Vec &y, Vec &b)
      Linear least squares.

  void calc (const Matrix &x, const Vec &y, const Vec &w, Vec &b)
      Weighted linear least squares. Each of the m equations in X b = y has an associated weight. A relatively low weight corresponds to high uncertainty.

  void calc (const Matrix &x, const Vec &y, const Vec &w, const Matrix &c, const Vec &d, Vec &b)
      Constrained weighted linear least squares. Get a best-fit solution to X b ≈ y, subject to the constraints C b = d.
Detailed Description

Linear least squares solver.
Member Typedef Documentation

  typedef Matrix< matrix::dynamic, matrix::dynamic, Real > honey::LinearLeastSqr< Real >::Matrix

  typedef Vec< matrix::dynamic, Real > honey::LinearLeastSqr< Real >::Vec

Constructor & Destructor Documentation

  honey::LinearLeastSqr< Real >::LinearLeastSqr ( )   [inline]
Member Function Documentation

  void honey::LinearLeastSqr< Real >::calc ( const Matrix & x, const Vec & y, Vec & b )
Linear least squares.
Get a best-fit solution to the system X b = y, where the rows of the (m x n) matrix X and the m-dim vector y form a system of m linear equations, and the n-dim vector b contains unknowns assumed to be linearly related.
Given the residual (error) r = y - X b, the least squares approach is to minimize the residual's Euclidean L2-norm (aka. magnitude) ||r|| = ||y - X b||.
Thus, the least squares problem becomes: find the b that minimizes ||y - X b||^2.
It can be shown that this minimization problem can be uniquely solved using the SVD and the pseudo-inverse: b = X^+ y.
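In LaTeX form, the standard SVD route is the textbook identity below; this is a general fact about least squares, not a claim about this class's internals:

    X = U \Sigma V^T, \qquad
    X^{+} = V \Sigma^{+} U^T, \qquad
    b = X^{+} y
    % \Sigma^{+} inverts the nonzero singular values and transposes;
    % b = X^{+} y minimizes ||y - X b||^2 (and is the minimum-norm
    % solution when X is rank-deficient).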
Consider a quadratic curve: y = a + b x + c x^2.
Let's say we have a series of samples (x, y) on a graph that we want to fit to our curve. If we plug our samples in directly, we will find that the left- and right-hand sides don't agree; there are errors, and the best we can do is minimize them.
Let's introduce the linearly related unknowns b into the curve equation: y = b0 + b1 x + b2 x^2.
We want to find the coefficients b which minimize the errors across all samples (a best-fit curve).
With the model formulated we must now work it into a linear system for the solver. For each sample (x, y) we append a row (1, x, x^2) to X and a row (y) to y. We now have all the parameters needed to solve for b; a usage sketch follows the parameter list below.
Parameters
  x  An (m x n) matrix of function coefficients. In the usual case m > n, i.e. there are more samples than unknowns.
  y  An m-dim vector of function results.
  b  Result: an n-dim vector of coefficients that best approximates the solution.
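As an illustration, here is a minimal usage sketch for the quadratic-curve example above. Only the calc signature is taken from this reference; the Matrix/Vec sizing constructors, element access via operator() and operator[], the honey::Real scalar typedef, and the sample values are all assumptions made for the sketch.

    #include <LinearLeastSqr.h>
    using namespace honey;

    // Fit y = b0 + b1*x + b2*x^2 to m noisy samples (illustrative data).
    void fitQuadratic()
    {
        typedef LinearLeastSqr<Real>::Matrix Matrix;
        typedef LinearLeastSqr<Real>::Vec Vec;

        const int m = 5, n = 3;             // 5 samples, 3 unknowns
        Real xs[] = { 0, 1, 2, 3, 4 };      // sample inputs
        Real ys[] = { 1, 3, 7, 13, 21 };    // noisy sample outputs

        Matrix X(m, n);                     // assumed sizing constructor
        Vec y(m), b(n);
        for (int i = 0; i < m; ++i)
        {
            X(i,0) = 1;                     // row (1, x, x^2)
            X(i,1) = xs[i];
            X(i,2) = xs[i]*xs[i];
            y[i] = ys[i];
        }

        LinearLeastSqr<Real> lsq;
        lsq.calc(X, y, b);                  // b now holds best-fit (b0, b1, b2)
    }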
  void honey::LinearLeastSqr< Real >::calc ( const Matrix & x, const Vec & y, const Vec & w, Vec & b )
Weighted linear least squares. Each of the m equations in X b = y has an associated weight. A relatively low weight corresponds to high uncertainty.
Ideally, a sample's weight should be the inverse of its variance. The residuals that least squares minimizes are multiplied by the weights.
Parameters
  x, y, b  As in calc(x, y, b) above.
  w        An m-dim vector of sample weights.
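Continuing the quadratic-fit sketch above (same X, y, m and lsq), a weighted call might look like this; the per-sample variances are illustrative numbers, and the element-access syntax is the same assumption as before.

    // ...continuing fitQuadratic(): down-weight uncertain samples.
    // Ideal weight = 1 / variance of the sample's measurement error.
    Vec w(m);
    Real var[] = { 1, 1, 2, 1, 4 };     // illustrative per-sample variances
    for (int i = 0; i < m; ++i)
        w[i] = 1 / var[i];              // low weight <=> high uncertainty

    lsq.calc(X, y, w, b);               // weighted best fit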
  void honey::LinearLeastSqr< Real >::calc ( const Matrix & x, const Vec & y, const Vec & w, const Matrix & c, const Vec & d, Vec & b )
Constrained weighted linear least squares. Get a best-fit solution to X b ≈ y, subject to the constraints C b = d.
This is similar to solving X b = y with k additional equations C b = d that are infinitely weighted.
Parameters
  x, y, w, b  As in the weighted calc above.
  c           A (k x n) matrix of constraint coefficients, where k < n. The column counts of X and C must match.
  d           A k-dim vector of constraint results.
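Continuing the same sketch, a hypothetical constraint pins the fitted curve exactly through the point (x, y) = (0, 1), i.e. b0 = 1, giving a single constraint row (k = 1 < n = 3).

    // ...continuing fitQuadratic(): force the curve through (0, 1), i.e. b0 = 1.
    Matrix C(1, n);                       // k = 1 constraint row, n columns
    Vec d(1);
    C(0,0) = 1; C(0,1) = 0; C(0,2) = 0;   // constraint row (1, 0, 0)
    d[0] = 1;                             // right-hand side: b0 = 1

    lsq.calc(X, y, w, C, d, b);           // weighted fit subject to C b = d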