
I am trying to solve an optimal control problem where the cost function is J = x^T Q x + u^T R u, subject to x_dot = A x + B u and bounds on x and u. I know there are solvers such as cvxpy, yalmip, etc. that can do this, but I want to implement it myself to get a better feel for the coding and for other parameters I may add in the future. I have attached the code I wrote. It runs, but it returns the same values for every time step. I have stacked x and u into a single vector; I don't know whether that is the right approach. I also think the code could be written in a better/more efficient way. All suggestions are welcome, and many thanks in advance for any help.

import numpy as np
import sympy as sp
import scipy.optimize as opt
import matplotlib.pyplot as plt

# Optimal Control Problem
# Cost, J = x.transpose() * Q * x + u.transpose() * R * u
# x_dot = A*x + B*u
# x_min < x < x_max
# u_min < u < u_max


class mpc_opt():

    def __init__(self):
        self.Q = sp.diag(0.5, 1, 0)  # state penalty matrix, Q
        self.R = sp.eye(2) # input penalty matrix
        self.A = sp.Matrix([[-0.79, -0.3, -0.1],[0.5, 0.82, 1.23], [0.52, -0.3, -0.5]])  # state matrix 
        self.B = sp.Matrix([[-2.04, -0.21], [-1.28, 2.75], [0.29, -1.41]])  # input matrix

        self.t = np.linspace(0, 1, 30)


    # reference trajectory  ## static!!!
    def ref_trajectory(self, i):  # y = 3*sin(2*pi*omega*t)
        # y = 3 * np.sin(2*np.pi*self.omega*self.t[i])
        x_ref = sp.Matrix([0, 1, 0])
        return x_ref
        # return sp.Matrix(([[self.t[i]], [y], [0]]))

    def cost_function(self, U, *args):
        t = args
        nx, nu = self.A.shape[-1], self.B.shape[-1]
        x0 = U[0:nx]
        u = U[nx:nx+nu]
        u = u.reshape(len(u), -1)
        x0 = x0.reshape(len(x0), -1)
        x1 = self.A * x0 + self.B * u
        # q = [x1[0], x1[1]]
        # pos = self.end_effec_pose(q)
        traj_ref = self.ref_trajectory(t)
        pos_error = x1 - traj_ref
        cost = pos_error.transpose() * self.Q * pos_error + u.transpose() * self.R * u
        return cost

    def cost_gradient(self, U, *args):
        t = args
        nx, nu = self.A.shape[-1], self.B.shape[-1]
        x0 = U[0:nx]
        u = U[nx:nx + nu]
        u = u.reshape(len(u), -1)
        x0 = x0.reshape(len(x0), -1)
        x1 = self.A * x0 + self.B * u
        traj_ref = self.ref_trajectory(t)
        pos_error = x1 - traj_ref
        temp1 = self.Q * pos_error
        cost_gradient = temp1.col_join(self.R * u)
        return cost_gradient


    def optimise(self, u0, t):
        umin = [-2., -3.]
        umax = [2., 3.]
        xmin = [-10., -9., -8.]
        xmax = [10., 9., 8.]
        bounds = ((xmin[0], xmax[0]), (xmin[1], xmax[1]), (xmin[2], xmax[2]), (umin[0], umax[0]), (umin[1], umax[1]))

        U = opt.minimize(self.cost_function, u0, args=(t), method='SLSQP', bounds=bounds, jac=self.cost_gradient,
                         options={'maxiter': 200, 'disp': True})
        U = U.x
        return U


if __name__ == '__main__':
    mpc = mpc_opt()
    x0, u0 = sp.Matrix([[0.1], [0.02], [0.05]]), sp.Matrix([[0.4], [0.2]])
    X, U = sp.zeros(len(x0), len(mpc.t)), sp.zeros(len(u0), len(mpc.t))
    U0 = sp.Matrix([x0, u0])  # stack state and input into a single decision vector
    nx, nu = mpc.A.shape[-1], mpc.B.shape[-1]
    for i in range(len(mpc.t)):
        print('i = :', i)
        result = mpc.optimise(U0, i)
        x0 = result[0:nx]
        u = result[nx:nx + nu]
        u = u.reshape(len(u), -1)
        x0 = x0.reshape(len(x0), -1)
        U[:, i], X[:, i] = u0, x0
        # x0 = mpc.A * x0 + mpc.B * u
        U0 = result

plt.plot(X[0, :], '--r')
plt.plot(X[1, :], '--b')
plt.plot(X[2, :], '*r')
plt.show()

2 Answers


A similar MPC application with Scipy.optimize.minimize is posted on the Process Dynamics and Control page for Model Predictive Control (select Show Python MPC). It uses a first-order linear system that can also be expressed in state-space form. Here is a screenshot of the animation:

[Screenshot: Model Predictive Control animation]

Although it is my course website, credit for this application goes to Junho Park. There is also another tutorial on linear MPC with MATLAB and Python Gekko. This source code may be helpful to you as you develop your application.

Answered 2019-10-10T12:30:26.773

Some points that I noticed:

1) Why do you use sympy matrices? It seems to me that numpy.array would be sufficient in your case (numpy.matrix exists but is discouraged in favor of plain arrays). sympy is meant for symbolic work, e.g. symbolic differentiation. Note: use the @ operator for matrix multiplication, e.g. in cost_gradient you would write temp1 = self.Q @ pos_error.
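For illustration, here is a minimal sketch of cost_function rewritten with plain NumPy. It assumes self.A, self.B, self.Q, self.R are created with np.array in __init__ and that ref_trajectory returns a NumPy column vector; the .item() at the end matters because scipy.optimize.minimize expects a plain float, not a 1x1 matrix:

import numpy as np

def cost_function(self, U, *args):
    t = args  # time index passed through by minimize (unused by the static reference)
    nx, nu = self.A.shape[-1], self.B.shape[-1]
    x0 = U[:nx].reshape(-1, 1)        # current state guess as a column vector
    u = U[nx:nx + nu].reshape(-1, 1)  # input guess as a column vector
    x1 = self.A @ x0 + self.B @ u     # one-step state prediction
    pos_error = x1 - self.ref_trajectory(t)
    # quadratic tracking cost plus input cost; .item() extracts the scalar
    cost = pos_error.T @ self.Q @ pos_error + u.T @ self.R @ u
    return cost.item()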

2) You can use the @staticmethod decorator to mark a method that uses no instance state (like ref_trajectory) as static, as sketched below.
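For example, since the active body of ref_trajectory never touches self, it could look like this (a sketch of the pattern only):

import numpy as np

class mpc_opt():

    @staticmethod
    def ref_trajectory(i):
        # no instance attributes are used, so 'self' can be dropped
        return np.array([[0], [1], [0]])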

3) Why do you use the optimization method SLSQP? It is meant for problems with general nonlinear constraints, which your problem does not have; you only have box bounds. If we look at the notes for scipy.optimize.minimize, we can see that there are several algorithms for bound-constrained optimization (e.g. L-BFGS-B), which will probably converge faster. In most cases you can let the algorithm be chosen automatically by not specifying method at all.
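As a sketch, you could drop method='SLSQP' entirely (per the scipy.optimize.minimize docs, with only bounds present it falls back to L-BFGS-B), or name a bound-constrained method explicitly. Note that for this to work, cost_function must return a plain float and the jac callable a 1-D array:

# let scipy choose; with only bounds present it selects L-BFGS-B
U = opt.minimize(self.cost_function, u0, args=(t,),
                 bounds=bounds, jac=self.cost_gradient,
                 options={'maxiter': 200, 'disp': True})

# or request a bound-constrained method explicitly
U = opt.minimize(self.cost_function, u0, args=(t,),
                 method='L-BFGS-B', bounds=bounds,
                 jac=self.cost_gradient)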

4) Put the plot commands inside the if __name__ == '__main__': block; in your script they sit at module level after it.
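That is, indent them under the main guard so that importing the module does not trigger plotting:

if __name__ == '__main__':
    mpc = mpc_opt()
    # ... optimisation loop from the question ...
    plt.plot(X[0, :], '--r')
    plt.plot(X[1, :], '--b')
    plt.plot(X[2, :], '*r')
    plt.show()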

Otherwise it looks alright. I haven't tested the functionality though.

Answered 2019-06-18T11:47:07.637