Introduction to R for Quantitative Finance

Solution concepts

In the last 50 years, many excellent algorithms have been developed for numerical optimization, and they work especially well in the case of quadratic functions. As we have seen in the previous section, we only have quadratic functions and constraints, so these methods (which are also implemented in R) can be used as a last resort, if there is nothing better.

However, a detailed discussion of numerical optimization is beyond the scope of this book. Fortunately, in the special case of linear and quadratic functions and constraints, these methods are unnecessary; we can use the Lagrange theorem from the 18th century.

Theorem (Lagrange)

If $f\colon \mathbb{R}^n \to \mathbb{R}$ and $g_i\colon \mathbb{R}^n \to \mathbb{R}$, $i = 1, \ldots, m$ (where $m < n$) have continuous partial derivatives, and $x^*$ is a relative extreme point of $f(x)$ subject to the constraints $g_i(x) = 0$, where the gradients $\nabla g_1(x^*), \ldots, \nabla g_m(x^*)$ are linearly independent.

Then, there exist coefficients $\lambda_1, \ldots, \lambda_m$ such that

$$\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0$$

In other words, all of the partial derivatives of the function $L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x)$ are 0 at $(x^*, \lambda)$ (Bertsekas, Dimitri P. (1999)).
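For intuition, here is a small worked example (not from the text): minimize $f(x, y) = x^2 + y^2$ subject to $g(x, y) = x + y - 1 = 0$. The Lagrange function is

$$L(x, y, \lambda) = x^2 + y^2 + \lambda(x + y - 1)$$

and setting all of its partial derivatives to zero,

$$\frac{\partial L}{\partial x} = 2x + \lambda = 0, \qquad \frac{\partial L}{\partial y} = 2y + \lambda = 0, \qquad \frac{\partial L}{\partial \lambda} = x + y - 1 = 0$$

gives $x = y = 1/2$ and $\lambda = -1$.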

In our case, this condition is also sufficient. The partial derivatives of a quadratic function are linear, so the optimization leads to solving a linear system of equations, which is a high-school task (unlike numerical methods).
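Concretely (using the notation of the problems below), for a quadratic objective $f(w) = w^T \Sigma w$ with a symmetric matrix $\Sigma$, the gradient is

$$\nabla f(w) = 2 \Sigma w$$

which is linear in $w$, so the first-order conditions of the Lagrange function form a linear system.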

Let's see how this can be used to solve the third problem, minimizing the portfolio variance subject to the budget constraint and a prescribed expected return $\mu$:

$$\min_{w}\; w^T \Sigma w \quad \text{subject to} \quad w^T \mathbf{1} = 1, \quad w^T r = \mu$$

It can be shown that this problem is equivalent to the following system of linear equations:

$$\begin{pmatrix} 2\Sigma & \mathbf{1} & r \\ \mathbf{1}^T & 0 & 0 \\ r^T & 0 & 0 \end{pmatrix} \begin{pmatrix} w \\ \lambda_1 \\ \lambda_2 \end{pmatrix} = \begin{pmatrix} \mathbf{0} \\ 1 \\ \mu \end{pmatrix}$$

(Two rows and two columns are added to the covariance matrix, so we have enough equations to determine the two Lagrange multipliers as well.) We can expect a unique solution for this system.
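As a sketch of how such a system can be solved in R with the built-in `solve` function (the covariance matrix, return vector, and target return below are made-up toy values for illustration):

```r
# Toy inputs (made up for illustration)
Sigma <- matrix(c(0.04, 0.01, 0.01, 0.09), nrow = 2)  # covariance matrix
r  <- c(0.05, 0.08)   # expected returns
mu <- 0.06            # prescribed portfolio return
n  <- length(r)

# Bordered system: two rows/columns appended to 2*Sigma for the
# two linear constraints (weights sum to 1, expected return = mu)
A <- rbind(cbind(2 * Sigma, 1, r),
           c(rep(1, n), 0, 0),
           c(r,         0, 0))
b <- c(rep(0, n), 1, mu)

sol <- solve(A, b)   # last two entries are the Lagrange multipliers
w   <- sol[1:n]      # optimal portfolio weights
```

By construction, the solution satisfies both constraints exactly: the weights sum to 1 and the portfolio's expected return equals `mu`.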

It is worth emphasizing that what we get with the Lagrange theorem is not an optimization problem anymore. Just as in one dimension, minimizing a quadratic function reduces to taking a derivative and solving a linear system of equations, which is trivial from a complexity point of view. Now let's see what to do with the return maximization problem:

$$\max_{w}\; w^T r \quad \text{subject to} \quad w^T \Sigma w = \sigma^2, \quad w^T \mathbf{1} = 1$$

The corresponding Lagrange function is

$$L(w, \lambda_1, \lambda_2) = w^T r + \lambda_1 \left( w^T \Sigma w - \sigma^2 \right) + \lambda_2 \left( w^T \mathbf{1} - 1 \right)$$

It's easy to see that the derivative of the Lagrange function with respect to λ is the constraint itself.

To see this, take the derivative of L:

  • $\frac{\partial L}{\partial \lambda_1} = w^T \Sigma w - \sigma^2$, which is the variance constraint
  • $\frac{\partial L}{\partial \lambda_2} = w^T \mathbf{1} - 1$, which is the budget constraint

So setting the partial derivatives to zero leads to non-linear equations (the first-order condition in $w$ contains the product $\lambda_1 \Sigma w$, and the variance constraint is quadratic in $w$), and solving such systems is more of an art than a science.
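One common numerical workaround is to fold the constraints into the objective as penalty terms and hand the result to a general-purpose optimizer. The sketch below uses base R's `optim` with made-up toy data; the penalty approach and the parameter `rho` are illustrative choices, not the book's method:

```r
# Toy inputs (made up for illustration)
Sigma  <- matrix(c(0.04, 0.01, 0.01, 0.09), nrow = 2)  # covariance matrix
r      <- c(0.05, 0.08)   # expected returns
sigma2 <- 0.05            # prescribed portfolio variance

# Penalty-method sketch: violated constraints make the objective worse
penalized <- function(w, rho = 1e4) {
  var_w <- drop(t(w) %*% Sigma %*% w)
  -sum(w * r) +                      # minus the return (optim minimizes)
    rho * (var_w - sigma2)^2 +       # variance constraint penalty
    rho * (sum(w) - 1)^2             # budget constraint penalty
}

res <- optim(c(0.5, 0.5), penalized, method = "BFGS")
w   <- res$par   # approximately feasible, approximately optimal weights
```

Unlike the linear-system case, the answer is only approximate: the constraints hold up to a tolerance governed by `rho`, which is exactly the "art" part of solving non-linear systems.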