
I'm solving a standard optimisation problem from finance: portfolio optimisation. The vast majority of the time, NLopt returns a sensible solution. However, on rare occasions the SLSQP algorithm appears to iterate to the correct solution, and then for no obvious reason returns a solution from about one third of the way through the iterative process that is very obviously suboptimal. Interestingly, changing the initial parameter vector by a very small amount can fix the problem.

I have managed to isolate a relatively simple working example of the behaviour I am talking about. Apologies that the numbers are a bit messy - it was the best I could do. The following code can be cut-and-pasted into a Julia REPL, and will print the values of the objective function and parameters each time NLopt calls the objective function. I call the optimisation routine twice. If you scroll back through the printed output, you'll notice that on the first call the optimisation routine iterates to a good solution with an objective function value of 0.0022, but then for no apparent reason goes back to a much earlier solution where the objective function is 0.0007, and returns that instead. The second time I call the optimisation function, I use a slightly different starting vector of parameters. Again, the optimisation routine iterates to the same good solution, but this time it returns the good solution with objective function value 0.0022.

So, the question: Does anyone know why in the first case SLSQP abandons the good solution in favour of a much poorer one from only about a third of the way through the iterative process? If so, is there any way I can fix this?

#-------------------------------------------
#Load required packages (on Julia >= 0.7, dot() lives in LinearAlgebra)
using NLopt
using LinearAlgebra
#Define objective function for the portfolio optimisation problem (maximise expected return minus one-half the portfolio variance)
function obj_func!(param::Vector{Float64}, grad::Vector{Float64}, meanVec::Vector{Float64}, covMat::Matrix{Float64})
    if length(grad) > 0
        tempGrad = meanVec - covMat * param
        for j = 1:length(grad)
            grad[j] = tempGrad[j]
        end
        println("Gradient vector = " * string(grad))
    end
    println("Parameter vector = " * string(param))
    fOut = dot(param, meanVec) - (1/2)*dot(param, covMat*param)
    println("Objective function value = " * string(fOut))
    return(fOut)
end
#Define standard equality constraint for the portfolio optimisation problem
function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        for j = 1:length(grad)
            grad[j] = 1.0
        end
    end
    return(sum(param) - 1.0)
end
#Function to call the optimisation process with appropriate input parameters
function do_opt(meanVec::Vector{Float64}, covMat::Matrix{Float64}, paramInit::Vector{Float64})
    opt1 = Opt(:LD_SLSQP, length(meanVec))
    lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
    upper_bounds!(opt1, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
    equality_constraint!(opt1, eq_con!)
    ftol_rel!(opt1, 0.000001)
    fObj = ((param, grad) -> obj_func!(param, grad, meanVec, covMat))
    max_objective!(opt1, fObj)
    (fObjOpt, paramOpt, flag) = optimize(opt1, paramInit)
    println("Returned parameter vector = " * string(paramOpt))
    println("Return objective function = " * string(fObjOpt))
end
#-------------------------------------------
#Inputs to optimisation
meanVec = [0.00238374894628471,0.0006879970888824095,0.00015027322404371585,0.0008440624572209092,-0.004949409024535505,-0.0011493778903180567]
covMat = [8.448145928621056e-5 1.9555283947528615e-5 0.0 1.7716366331331983e-5 1.5054664977783003e-5 2.1496436765051825e-6;
          1.9555283947528615e-5 0.00017068536691928327 0.0 1.4272576023325365e-5 4.2993023110905543e-5 1.047156519965148e-5;
          0.0 0.0 0.0 0.0 0.0 0.0;
          1.7716366331331983e-5 1.4272576023325365e-5 0.0 6.577888700124854e-5 3.957059294420261e-6 7.365234067319808e-6;
          1.5054664977783003e-5 4.2993023110905543e-5 0.0 3.957059294420261e-6 0.0001288060347757139 6.457128839875466e-6;
          2.1496436765051825e-6 1.047156519965148e-5 0.0 7.365234067319808e-6 6.457128839875466e-6 0.00010385067478418426]
paramInit = [0.0,0.9496114216578236,0.050388578342176464,0.0,0.0,0.0]

#Call the optimisation function
do_opt(meanVec, covMat, paramInit)

#Re-define initial parameters to very similar numbers
paramInit = [0.0,0.95,0.05,0.0,0.0,0.0]

#Call the optimisation function again
do_opt(meanVec, covMat, paramInit)

Note: I know my covariance matrix is positive semi-definite rather than positive definite. This is not the source of the issue. I've confirmed this by altering the diagonal element of the zero row to a small but significantly non-zero value: the issue is still present in the above example, as well as in others that I can randomly generate. The check was along the lines of the sketch below.
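
A sketch of that check (the 1e-6 below is an illustrative value, not necessarily the exact one I used):

#Sketch of the positive-definiteness check described above: replace the zero
#diagonal element of the zero row with a small non-zero value (1e-6 is an
#illustrative choice) and re-run the optimisation
covMatPD = copy(covMat)
covMatPD[3, 3] = 1e-6
do_opt(meanVec, covMatPD, paramInit)   #the suboptimal return still occurs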


1 Answer


SLSQP is a constrained optimisation algorithm. At each iteration it has to check not only whether it has the best objective value so far, but also whether the constraints are satisfied. The final output is the best value found at a point where the constraints held.

Changing eq_con! so that it also prints out the value of the constraint:

function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        for j = 1:length(grad)
            grad[j] = 1.0
        end
    end
    @show sum(param)-1.0
    return(sum(param) - 1.0)
end

shows that the last feasible evaluation point in the first run has:

Objective function value = 0.0007628202546187453
sum(param) - 1.0 = 0.0

In the second run, every evaluation point satisfies the constraint. This explains the behaviour and shows that it is in fact reasonable: the 0.0022 point reached in the first run violated the equality constraint, so SLSQP fell back to the best point at which the constraint held.
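
You can confirm this at the returned point itself rather than by reading the log. A minimal sketch, assuming do_opt is modified to end with "return paramOpt" (as written, its final println returns nothing):

#Sketch: with do_opt modified to end in "return paramOpt", the constraint
#residual at the returned point can be inspected directly
paramOpt = do_opt(meanVec, covMat, paramInit)
println("Constraint residual = ", abs(sum(paramOpt) - 1.0))   #0.0 => feasible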

Addendum:

The underlying problem causing the sensitivity to the initial parameters is the exact nature of the equality constraint. Quoting the NLopt Reference (http://ab-initio.mit.edu/wiki/index.php/NLopt_Reference#Nonlinear_constraints):

"For equality constraints, a small positive tolerance is strongly advised in order to allow NLopt to converge even if the equality constraint is slightly nonzero."

Indeed, switching the equality_constraint! call in do_opt to

    equality_constraint!(opt1, eq_con!, 0.00000001)

gives the 0.0022 solution for both initial parameter vectors.
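
For completeness, here is do_opt with the fix applied; it is identical to the asker's version apart from the tolerance argument:

#do_opt with a small positive tolerance on the equality constraint
function do_opt(meanVec::Vector{Float64}, covMat::Matrix{Float64}, paramInit::Vector{Float64})
    opt1 = Opt(:LD_SLSQP, length(meanVec))
    lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
    upper_bounds!(opt1, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
    equality_constraint!(opt1, eq_con!, 0.00000001)   #tolerance added here
    ftol_rel!(opt1, 0.000001)
    fObj = ((param, grad) -> obj_func!(param, grad, meanVec, covMat))
    max_objective!(opt1, fObj)
    (fObjOpt, paramOpt, flag) = optimize(opt1, paramInit)
    println("Returned parameter vector = " * string(paramOpt))
    println("Return objective function = " * string(fObjOpt))
end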

answered 2016-02-05T17:03:16.330