
I am trying to implement the revised simplex method (RSM) in Python with numpy. I am stuck with it either only handling maximization (correct on tiny matrices such as 2x4 or 5x5, but wrong on larger ones) or running into an infinite loop in most minimization cases. The code below shows my attempt at minimization:

    def __init__(self, A: np.ndarray, b: np.ndarray, c: np.ndarray):
        base_size = A.shape[0]  # Number of basis vectors
        I = np.eye(base_size)  # Identity matrix, an initial basis 
        self.extended_matrix = np.concatenate((A, I), axis=1)  # Extended matrix of the system
        self.free_size = A.shape[1]  # Number of free vectors
        self.b = b  # Right parts of the constraints
        self.base_inv = I  # Initial inverse basis matrix
        self.c = np.concatenate((c, np.zeros(base_size)))  # Objective function quotients including those related to the basis variables
        self.c_i = list(range(len(self.c)))  # Indices for all variables (free ones first, then the basis slots)
        
    @property
    def c_f_indices(self):
        """
        Indices of the free variables.
        """
        return self.c_i[:self.free_size]
    
    @property
    def c_T_B(self):
        """
        Objective function quotients related to the basis variables.
        """
        c_B_indices = self.c_i[self.free_size:]  # Basis variables indices.
        return self.c[c_B_indices]
    
    @property
    def c_T_f(self):
        """
        Objective function quotients related to the free variables.
        """
        return self.c[self.c_f_indices]
        
    @property
    def free(self):
        """
        Free vectors.
        """
        return self.extended_matrix[:, self.c_f_indices]
    
    @property
    def y_T(self):
        """
        Lagrange multipliers.
        """
        return self.c_T_B @ self.base_inv
    
    @property
    def deltas(self):
        """
        Net evaluations. 
        """
        return (self.y_T @ self.free) - self.c_T_f 
    


    def _swap(self, exits: int, enters: int) -> None:
        """
        In-/excluding respective vectors into/from the basis.
        """
        self.c_i[enters], self.c_i[exits + self.free_size] = self.c_i[exits + self.free_size], self.c_i[enters]
    
    def optimize(self):
        while np.any(self.deltas > 0):  # use < 0 in case of maximization
            x_B = self.base_inv @ self.b  # Current basis variables
            enters_base = np.argmax(self.deltas)  # Vector to enter the basis; argmin in case of maximization
            
            # Finding the vector to leave the basis:
            alpha = self.base_inv @ self.free[:, enters_base]

            try:
                exits_base = np.argmin([b/a if a > 0 else np.inf for b, a in zip(x_B, alpha)])
                assert alpha[exits_base] != 0
            except (ValueError, AssertionError):
                raise Exception("No solutions")
            
            # Finding the E_r matrix, which current inverse basis will be left-multiplied with in order to achieve the new inverse basis:
            zetas = [-alpha[j] / alpha[exits_base] if j != exits_base else 1 / alpha[exits_base] for j in range(len(alpha))]
            E = np.eye(self.free.shape[0])
            E[:, exits_base] = zetas
            self.base_inv = E @ self.base_inv
            
            # In-/excluding respective vectors into/from the basis:
            self._swap(exits_base, enters_base)
            
        return self.c_T_B @ self.base_inv @ self.b # Final objective function value

I have also tried sorting c_f_indices, but I still get an infinite loop. Similar RSM implementations likewise produce wrong results on larger matrices (e.g. 16x8) while working fine on small ones.

Where is the root of the problem?

1 Answer

As Erwin Kalvelagen has already mentioned, the standard Dantzig pivot rule can lead to cycling and stalling, where the objective value does not improve for long stretches of iterations.

In general, this phenomenon is known as degeneracy of the LP. Finite numerical precision and round-off errors contribute to the degeneracy of a problem, which is why it is usually more prevalent in large LPs.
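
Concretely, degeneracy means that some component of the current basic solution `x_B` is zero, so the ratio test in your `optimize()` loop returns a step length of zero: the basis changes, but the objective value stays exactly the same. A tiny numpy sketch of that situation (the numbers are made up purely for illustration):

    import numpy as np

    # Hypothetical degenerate basic feasible solution: one basic variable equals zero.
    x_B = np.array([0.0, 3.0, 5.0])
    # Column of the entering variable expressed in the current basis.
    alpha = np.array([2.0, 1.0, 4.0])

    # Same ratio test as in the question's optimize() loop.
    ratios = [b / a if a > 0 else np.inf for b, a in zip(x_B, alpha)]
    print(min(ratios))  # 0.0 -> the entering variable comes in at value zero

A run of such zero-length steps under Dantzig's rule is exactly what makes the loop look infinite.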

There are several ways to combat this. As Erwin mentioned, the most commonly used one is perturbation. However, since you are doing this as a learning project, I would suggest a solution based on more refined pivoting rules such as Zadeh's rule or Cunningham's rule, where you only need to keep a table or a variable to make sure you pick a different entering variable after a cycle; a sketch of one such rule is given below.
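
For instance, here is a minimal sketch of Zadeh's least-entered rule, written as a drop-in replacement for the `enters_base = np.argmax(self.deltas)` line. The `entered_count` dictionary and the `choose_entering` helper are hypothetical additions: among all improving columns, the variable that has entered the basis the fewest times so far is chosen.

    from collections import defaultdict

    entered_count = defaultdict(int)  # times each variable index has entered the basis so far

    def choose_entering(deltas, c_f_indices):
        """Zadeh's rule: among improving columns, prefer the least frequently entered variable."""
        improving = [j for j, delta in enumerate(deltas) if delta > 0]  # minimization, as in the question
        if not improving:
            return None  # current basis is optimal
        j = min(improving, key=lambda idx: entered_count[c_f_indices[idx]])
        entered_count[c_f_indices[j]] += 1
        return j

In `optimize()` you would call `choose_entering(self.deltas, self.c_f_indices)` instead of `np.argmax(self.deltas)` (the existing while-condition already guarantees that at least one improving column exists), leaving the rest of the iteration unchanged. Bland's rule (pick the improving column with the smallest variable index, with a matching smallest-index tie-break for the leaving variable) is an even simpler alternative with a termination guarantee.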

answered 2021-04-27T15:34:00.973