This code comes from Maziar Raissi's paper, from the example named Continuous_time_identification (Navier-Stokes).
# Predictions and PDE residuals from the physics-informed network
self.u_pred, self.v_pred, self.p_pred, self.f_u_pred, self.f_v_pred = \
    self.net_NS(self.x_tf, self.y_tf, self.t_tf)

# Loss = data misfit on u, v plus the two momentum-equation residuals
self.loss = tf.reduce_sum(tf.square(self.u_tf - self.u_pred)) + \
            tf.reduce_sum(tf.square(self.v_tf - self.v_pred)) + \
            tf.reduce_sum(tf.square(self.f_u_pred)) + \
            tf.reduce_sum(tf.square(self.f_v_pred))

# Second-order optimizer: L-BFGS-B via SciPy, run until its own limits/tolerance
self.optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    self.loss,
    method='L-BFGS-B',
    options={'maxiter': 50000,
             'maxfun': 50000,
             'maxcor': 50,
             'maxls': 50,
             'ftol': 1.0 * np.finfo(float).eps})

# First-order optimizer: Adam, stepped manually in train()
self.optimizer_Adam = tf.train.AdamOptimizer()
self.train_op_Adam = self.optimizer_Adam.minimize(self.loss)

init = tf.global_variables_initializer()
self.sess.run(init)
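For context, net_NS (not shown above) is where the physics enters: it predicts a stream function and pressure, differentiates them with automatic differentiation, and assembles the momentum-equation residuals f_u and f_v that the loss drives to zero. A sketch along the lines of the paper's formulation and the original repository (neural_net, self.weights, self.biases are the repository's fully connected network and its parameters; lambda_1 and lambda_2 are the trainable PDE coefficients):

def net_NS(self, x, y, t):
    lambda_1 = self.lambda_1  # unknown convection coefficient (learned)
    lambda_2 = self.lambda_2  # unknown viscosity coefficient (learned)

    # The network outputs a stream function psi and the pressure p
    psi_and_p = self.neural_net(tf.concat([x, y, t], 1), self.weights, self.biases)
    psi = psi_and_p[:, 0:1]
    p = psi_and_p[:, 1:2]

    # Velocities from the stream function (continuity holds by construction)
    u = tf.gradients(psi, y)[0]
    v = -tf.gradients(psi, x)[0]

    # Derivatives via automatic differentiation
    u_t = tf.gradients(u, t)[0]
    u_x = tf.gradients(u, x)[0]
    u_y = tf.gradients(u, y)[0]
    u_xx = tf.gradients(u_x, x)[0]
    u_yy = tf.gradients(u_y, y)[0]

    v_t = tf.gradients(v, t)[0]
    v_x = tf.gradients(v, x)[0]
    v_y = tf.gradients(v, y)[0]
    v_xx = tf.gradients(v_x, x)[0]
    v_yy = tf.gradients(v_y, y)[0]

    p_x = tf.gradients(p, x)[0]
    p_y = tf.gradients(p, y)[0]

    # Momentum-equation residuals; zero when the PDE is satisfied
    f_u = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
    f_v = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)

    return u, v, p, f_u, f_v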
Are two different optimizers defined here? The first uses L-BFGS-B, and the second is Adam. After the network is set up, it is trained with the following code:
def train(self, nIter):
    tf_dict = {self.x_tf: self.x, self.y_tf: self.y, self.t_tf: self.t,
               self.u_tf: self.u, self.v_tf: self.v}

    # Stage 1: nIter steps of Adam
    start_time = time.time()
    for it in range(nIter):
        self.sess.run(self.train_op_Adam, tf_dict)

        # Print progress every 10 iterations
        if it % 10 == 0:
            elapsed = time.time() - start_time
            loss_value = self.sess.run(self.loss, tf_dict)
            lambda_1_value = self.sess.run(self.lambda_1)
            lambda_2_value = self.sess.run(self.lambda_2)
            print('It: %d, Loss: %.3e, l1: %.3f, l2: %.5f, Time: %.2f' %
                  (it, loss_value, lambda_1_value, lambda_2_value, elapsed))
            start_time = time.time()

    # Stage 2: hand the Adam-warmed variables over to L-BFGS-B
    self.optimizer.minimize(self.sess,
                            feed_dict=tf_dict,
                            fetches=[self.loss, self.lambda_1, self.lambda_2],
                            loss_callback=self.callback)
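Note that loss_callback points at self.callback, which is not shown above; in the original repository it is just a progress printer, roughly:

def callback(self, loss, lambda_1, lambda_2):
    # Invoked by ScipyOptimizerInterface once per L-BFGS-B iteration
    print('Loss: %.3e, l1: %.3f, l2: %.5f' % (loss, lambda_1, lambda_2))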
So is the network effectively trained twice, once by each of the optimizers described above?
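For reference, here is the same two-stage pattern in isolation, as a minimal self-contained TF 1.x sketch (the toy variables and loss are illustrative, not from the original code):

import tensorflow as tf

# Toy quadratic loss, just to illustrate the Adam -> L-BFGS-B handoff
w = tf.Variable([5.0, -3.0])
target = tf.constant([1.0, 2.0])
loss = tf.reduce_sum(tf.square(w - target))

# The same two optimizers over the same loss
train_op_adam = tf.train.AdamOptimizer(1e-2).minimize(loss)
lbfgs = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B', options={'maxiter': 500})

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):        # stage 1: a fixed budget of Adam steps
        sess.run(train_op_adam)
    lbfgs.minimize(sess)        # stage 2: L-BFGS-B refines from there
    print(sess.run(w))          # close to [1., 2.]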