Mean-Variance model
The Mean-Variance model by Markowitz (Markowitz, H. M., "Portfolio Selection", The Journal of Finance, 7(1), March 1952) is practically the ice-cream/umbrella business in higher dimensions. For the mathematical formulation, we need a few definitions, which are explained as follows:
- By asset, we mean a random variable with finite variance.
- By portfolio, we mean a combination of assets: $\Pi = \sum_{i=1}^{n} w_i X_i$, where $X_1, X_2, \ldots, X_n$ are assets, $w_i \in \mathbb{R}$, and $\sum_{i=1}^{n} w_i = 1$. The combination can be affine or convex. In the affine case, there is no extra restriction on the weights. In the convex case, all the weights are non-negative.
- By optimization, we mean the process of choosing the best weights $w_i$ so that our portfolio meets our needs (that is, it has minimal risk at a given expected return, or the highest expected return at a given level of risk, and so on).
Let $X_1, X_2, \ldots, X_n$ be random return variables with finite variance, $\Sigma$ be their covariance matrix, $\mu = \left(E(X_1), E(X_2), \ldots, E(X_n)\right)^T$ be the vector of their expected returns, and $w = (w_1, w_2, \ldots, w_n)^T$ be the vector of portfolio weights.
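To make the notation concrete, here is a minimal sketch in Python with NumPy (an illustration that is not part of the original text; the simulated return data and the equal weights are assumptions). It estimates $\mu$ and $\Sigma$ from a matrix of returns and evaluates the expected return and variance of an equally weighted portfolio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns for n = 4 hypothetical assets
# (250 observations in rows, assets in columns).
returns = rng.normal(loc=0.0005, scale=0.01, size=(250, 4))

mu = returns.mean(axis=0)              # estimated expected returns E(X_i)
sigma = np.cov(returns, rowvar=False)  # estimated covariance matrix

# An equally weighted convex combination: non-negative weights summing to 1.
w = np.full(4, 0.25)

expected_return = w @ mu   # expected return of the portfolio
variance = w @ sigma @ w   # variance of the portfolio

print(expected_return, variance)
```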
We will focus on the following optimization problems:
1. $\min_{w} w^T \Sigma w$, subject to $\sum_{i=1}^{n} w_i = 1$
2. $\max_{w} \mu^T w$, subject to $w^T \Sigma w = \sigma^2$ and $\sum_{i=1}^{n} w_i = 1$
3. $\min_{w} w^T \Sigma w$, subject to $\mu^T w = \mu^{*}$ and $\sum_{i=1}^{n} w_i = 1$
4. $\max_{w} \left(\mu^T w - \lambda\, w^T \Sigma w\right)$, subject to $\sum_{i=1}^{n} w_i = 1$
5. $\min_{w} \operatorname{Var}\left(\sum_{i=1}^{n} w_i X_i - Y\right)$, subject to $\sum_{i=1}^{n} w_i = 1$
It is clear that $w^T \Sigma w$ is the variance of the portfolio and $\mu^T w$ is its expected return. For the sum of the weights, we have $\sum_{i=1}^{n} w_i = 1$, which means that we would like to invest 1 unit of cash. (We can also consider adding the condition $w_i \geq 0$ for all $i$, which means that short selling is not allowed.) The problems are explained in detail in the following points:
- The first problem is to find the portfolio with minimal risk. It can be nontrivial if there is no riskless asset (see the sketch after this list).
- The second one is to maximize the expected return at a given level of variance.
- A slightly different approach is to find a portfolio with minimal variance at a desired level of expected return.
- The fourth problem is to maximize a simple utility function, $\mu^T w - \lambda\, w^T \Sigma w$, where $\lambda$ is the coefficient of risk tolerance; it is an arbitrary number that expresses our attitude to risk. It is practically the same as the third problem, since the $\lambda$-weighted objective is the Lagrangian form of minimizing variance at a given expected return.
- In the fifth problem, $Y$ is an $(n+1)$th asset (for example, an index) that we cannot or do not want to purchase, but would like to replicate. Other, similar problems can be formulated in the same way.
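As a concrete illustration of the first problem, when only the budget constraint $\sum_{i} w_i = 1$ is imposed (short selling allowed), the minimum-variance portfolio has the well-known closed form $w^{*} = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^T \Sigma^{-1} \mathbf{1})$. The sketch below is illustrative only; the covariance matrix is a made-up example.

```python
import numpy as np

def min_variance_weights(sigma):
    """Problem 1: min_w w' Sigma w  subject to  sum(w) = 1.

    Uses the closed form w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    Short selling is allowed, so some weights may be negative.
    """
    ones = np.ones(sigma.shape[0])
    sigma_inv_ones = np.linalg.solve(sigma, ones)
    return sigma_inv_ones / (ones @ sigma_inv_ones)

# A hypothetical covariance matrix of three assets.
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

w = min_variance_weights(sigma)
print(w, w.sum(), w @ sigma @ w)  # weights, their sum (1), portfolio variance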
It is clear that the second problem is a linear optimization with a quadratic constraint; all the others are quadratic optimization problems with linear constraints. As we will see later, this is an important difference, because linear constraints can be handled easily, while quadratic constraints are more difficult to handle. In the next two sections, we will focus on the complexity and possible solutions of these problems.
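To show why linear constraints are the easy case, here is a sketch (again with made-up numbers, not taken from the original text) that solves the third problem by writing down its first-order (Karush-Kuhn-Tucker) conditions: with equality constraints only, minimizing the quadratic $w^T \Sigma w$ reduces to solving a single linear system in $w$ and two Lagrange multipliers.

```python
import numpy as np

def target_return_weights(sigma, mu, mu_target):
    """Problem 3: min_w w' Sigma w  s.t.  mu' w = mu_target and sum(w) = 1.

    With linear equality constraints only, the KKT conditions form one
    linear system in (w, lambda_1, lambda_2); no iterative optimizer is
    needed. Short selling is allowed.
    """
    n = len(mu)
    ones = np.ones(n)
    kkt = np.block([
        [2.0 * sigma, mu[:, None], ones[:, None]],
        [mu[None, :], np.zeros((1, 2))],
        [ones[None, :], np.zeros((1, 2))],
    ])
    rhs = np.concatenate([np.zeros(n), [mu_target, 1.0]])
    solution = np.linalg.solve(kkt, rhs)
    return solution[:n]  # drop the two Lagrange multipliers

# Hypothetical expected returns and covariance matrix of three assets.
mu = np.array([0.05, 0.08, 0.12])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

w = target_return_weights(sigma, mu, mu_target=0.09)
print(w, w @ mu, w.sum())  # weights, achieved return (0.09), sum of weights (1)
```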