
I am trying to implement a mechanistic model in TensorFlow that will be used as part of a GAN, based on the approach described in this paper: https://arxiv.org/abs/2009.08267. I am using tensorflow 2.5.0 and tensorflow-probability 0.13.0.

The mechanistic model uses the TFP Dormand-Prince solver to solve a set of differential equations that produce pressure waveforms for different regions of the cardiovascular system. I want to obtain the gradients of the waveforms with respect to the mechanistic model's parameters in order to train the GAN's generator.

Several of my differential equations contain a time-varying variable (piecewise but continuous, with no "sharp corners") that is computed from a subset of the mechanistic model's parameters. If I set this variable to a constant, I can obtain the gradients of the waveforms with respect to the model parameters. However, if I keep the variable time-varying, I get a ZeroDivisionError when I try to compute the gradients.

Any idea why this error occurs? I've included a stack trace below.

Thanks very much for your help!

---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-4-462599885903> in <module>
----> 1 dy6_dX = tape.gradient(y6, X)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
   1078         output_gradients=output_gradients,
   1079         sources_raw=flat_sources_raw,
-> 1080         unconnected_gradients=unconnected_gradients)
   1081 
   1082     if not self._persistent:

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
     75       output_gradients,
     76       sources_raw,
---> 77       compat.as_str(unconnected_gradients.value))

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/ops/custom_gradient.py in actual_grad_fn(*result_grads)
    472                          "@custom_gradient grad_fn.")
    473     else:
--> 474       input_grads = grad_fn(*result_grads)
    475       variable_grads = []
    476     flat_grads = nest.flatten(input_grads)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow_probability/python/math/ode/base.py in grad_fn(*dresults, **kwargs)
    454               initial_time=result_time_array.read(initial_n),
    455               initial_state=make_augmented_state(initial_n,
--> 456                                                  terminal_augmented_state),
    457           )
    458 

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow_probability/python/math/ode/dormand_prince.py in _initialize_solver_internal_state(self, ode_fn, initial_time, initial_state)
    307     p = self._prepare_common_params(initial_state, initial_time)
    308 
--> 309     initial_derivative = ode_fn(p.initial_time, p.initial_state)
    310     initial_derivative = tf.nest.map_structure(tf.convert_to_tensor,
    311                                                initial_derivative)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow_probability/python/math/ode/base.py in augmented_ode_fn(backward_time, augmented_state)
    388              adjoint_constants_ode) = tape.gradient(
    389                  adjoint_dot_derivatives, (state, tuple(variables), constants),
--> 390                  unconnected_gradients=tf.UnconnectedGradients.ZERO)
    391             return (negative_derivatives, adjoint_ode, adjoint_variables_ode,
    392                     adjoint_constants_ode)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
   1078         output_gradients=output_gradients,
   1079         sources_raw=flat_sources_raw,
-> 1080         unconnected_gradients=unconnected_gradients)
   1081 
   1082     if not self._persistent:

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
     75       output_gradients,
     76       sources_raw,
---> 77       compat.as_str(unconnected_gradients.value))

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py in _gradient_function(op_name, attr_tuple, num_inputs, inputs, outputs, out_grads, skip_input_indices, forward_pass_name_scope)
    157       gradient_name_scope += forward_pass_name_scope + "/"
    158     with ops.name_scope(gradient_name_scope):
--> 159       return grad_fn(mock_op, *out_grads)
    160   else:
    161     return grad_fn(mock_op, *out_grads)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/ops/array_grad.py in _ConcatGradV2(op, grad)
    228 def _ConcatGradV2(op, grad):
    229   return _ConcatGradHelper(
--> 230       op, grad, start_value_index=0, end_value_index=-1, dim_index=-1)
    231 
    232 

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/ops/array_grad.py in _ConcatGradHelper(op, grad, start_value_index, end_value_index, dim_index)
    117       # in concat implementation to be within the allowed [-rank, rank) range.
    118       non_neg_concat_dim = (
--> 119           concat_dim._numpy().item(0) % input_values[0]._rank())  # pylint: disable=protected-access
    120       # All inputs are guaranteed to be EagerTensors in eager mode
    121       sizes = pywrap_tfe.TFE_Py_TensorShapeSlice(input_values,

ZeroDivisionError: integer division or modulo by zero
