
New setup: 2x 2080 Ti, NVIDIA driver 430, CUDA 10.0, cuDNN 7.6, TensorFlow 1.13.1

Old setup: 2x 1080 Ti, NVIDIA driver 410, CUDA 9.0, TensorFlow 1.10

I implemented a segmentation model that can be trained in FP32 or mixed precision (following the instructions here: http://on-demand.gputechconf.com/gtc-taiwan/2018/pdf/5-1_Internal%20Speaker_Michael%20Carilli_PDF%20For%20Sharing.pdf).

It worked on the old setup, but the 1080 Ti doesn't fully support float16, which is why I switched to the new setup.

On the new setup, FP32 works fine, but mixed precision always fails with: tensorflow.python.framework.errors_impl.InternalError: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc:3171) ShapeUtil::Equal(first_reduce->shape(), inst->shape())

Model structure:

with tf.name_scope('Inputs'):
    is_training_tensor = tf.placeholder(dtype=tf.bool, shape=(), name='is_training')

    input_tensor = tf.placeholder(dtype=tf.float32, shape=set_shape(hypes, hypes['arch']['num_channels']),
                                  name='inputs')

    if hypes['arch']['half_precision']:
        input_tensor = tf.cast(input_tensor, tf.float16)

    binary_label_tensors = []
    for label in hypes['data']['predict_labels']:
        binary_label_tensor = tf.placeholder(dtype=tf.int64, shape=set_shape(hypes, 1, is_input=False), name=label)
        binary_label_tensors.append(binary_label_tensor)

tower_grads = []
loss_dicts = []
eval_dicts = []

with tf.name_scope('Optimizer'):
    opt, step = create_optimizer_wrapper(hypes)

with tf.variable_scope('ModelCrossGPUs', reuse=tf.AUTO_REUSE, custom_getter=float32_variable_storage_getter
                       if hypes['arch']['half_precision'] else None):
    for i in range(gpus):
        with tf.device('/device:GPU:{}'.format(i)):
            with tf.name_scope('GPU_{}'.format(i)):
                # restructure input
                input_tensor_gpu = input_tensor[i * batch_size: (i + 1) * batch_size]

                binary_label_tensors_gpu = []
                for tensor in binary_label_tensors:
                    binary_label_tensors_gpu.append(tensor[i * batch_size: (i + 1) * batch_size])

                # instantiate the network
                net_module = getattr(importlib.import_module('ml.projects.xxx.nets.' +
                                                             hypes['arch']['net']), 'inference')
                inference_net = net_module(hypes,
                                           input_tensor=input_tensor_gpu,
                                           is_training_tensor=is_training_tensor)

                if hypes['arch']['half_precision']:
                    logitss = [tf.cast(logits, tf.float32) for logits in inference_net['logitss']]
                else:
                    logitss = inference_net['logitss']
                binary_seg_rets = inference_net['binary_seg_rets']

                with tf.name_scope('Loss'):
                    loss_dict = loss.multi_binary_segmentation_loss(hypes, input_tensor_gpu,
                                                                    binary_label_tensors_gpu, logitss)
                    loss_dict.update({'total_loss': loss.consolidation_loss(loss_dict['binary_seg_loss'])})
                    loss_dicts.append(loss_dict)

                with tf.name_scope('Evaluation'):
                    evaluator = eval.Evaluator()
                    eval_dict = evaluator.eval_logits(hypes, input_tensor_gpu, binary_label_tensors_gpu, logitss)
                    eval_dicts.append(eval_dict)

                with tf.name_scope('Gradients'):
                    grads = single_gradients(hypes, loss_dict['total_loss'], opt)

                    tower_grads.append(grads)

            with tf.name_scope('Summary_Train/'):
                with tf.name_scope('Summary_Train_{}'.format(i)):
                    add_tensor_to_summary(hypes, input_tensor_gpu, binary_label_tensors_gpu, inference_net)
                    for grad in grads:
                        tf.summary.histogram("Gradient/" + grad.name.split(':')[0], grad)

            with tf.name_scope('Summary_Eval/'):
                with tf.name_scope('Summary_Eval_{}'.format(i)):
                    add_tensor_to_summary(hypes, input_tensor_gpu, binary_label_tensors_gpu, inference_net)

with tf.name_scope('Optimizer'):
    grads = average_gradients(tower_grads)
    train_op = global_optimizer(grads, opt, step)
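
For reference, the `float32_variable_storage_getter` used above isn't shown; it follows the FP32 master-weight pattern from the slides linked earlier. A minimal sketch of that pattern (my actual helper may differ in detail):

def float32_variable_storage_getter(getter, name, shape=None, dtype=None,
                                    initializer=None, regularizer=None,
                                    trainable=True, *args, **kwargs):
    # Store trainable variables in float32 regardless of the compute dtype,
    # then cast them to the requested dtype (e.g. float16) on read.
    storage_dtype = tf.float32 if trainable else dtype
    variable = getter(name, shape, dtype=storage_dtype,
                      initializer=initializer, regularizer=regularizer,
                      trainable=trainable, *args, **kwargs)
    if trainable and dtype != tf.float32:
        variable = tf.cast(variable, dtype)
    return variable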

The error occurs here:

binary_label = tf.multiply(binary_label, mask)

is_binary_label_one = tf.equal(binary_label, 1)
is_out_one = tf.equal(out, 1)

# Ground truth
t = tf.count_nonzero(binary_label, dtype=tf.int64)
# Prediction
p = tf.count_nonzero(out, dtype=tf.int64)
# Union
u = tf.count_nonzero(tf.logical_or(is_binary_label_one, is_out_one))
# Intersection
i = tf.count_nonzero(tf.logical_and(is_binary_label_one, is_out_one))
# Valid mask region
m = tf.count_nonzero(mask)
# Correct predictions, counting both positive and negative predictions
c = tf.count_nonzero(tf.logical_and(tf.equal(binary_label, out), tf.equal(mask, 1)))

one = tf.constant(1.0, dtype=tf.float64)

accuracy = tf.cond(tf.equal(m, 0), lambda: one, lambda: c / m)
precision = tf.cond(tf.equal(p, 0), lambda: one, lambda: i / p)
recall = tf.cond(tf.equal(t, 0), lambda: one, lambda: i / t)
iou = tf.cond(tf.equal(u, 0), lambda: one, lambda: i / u)
f1 = tf.cond(tf.equal(precision + recall, 0), lambda: one, lambda: 2 * precision * recall /
             (precision + recall))
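
Since the failing node is an XLA cluster (`cluster_26_1/xla_compile`), one sanity check is to disable XLA auto-clustering and see whether the metrics graph then runs. A minimal sketch, assuming JIT was enabled through the session config:

config = tf.ConfigProto()
# Turn off XLA JIT compilation to test whether auto-clustering triggers the failure
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.OFF
session = tf.Session(config=config)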

The error:

    * Begin stack trace 

    tensorflow::Status xla::HloInstruction::Visit<xla::HloInstruction*>(xla::DfsHloVisitorBase<xla::HloInstruction*>*)

    tensorflow::Status xla::HloInstruction::Accept<xla::HloInstruction*>(xla::DfsHloVisitorBase<xla::HloInstruction*>*, bool, bool)
    tensorflow::Status xla::HloComputation::Accept<xla::HloInstruction*>(xla::DfsHloVisitorBase<xla::HloInstruction*>*) const
    xla::gpu::NVPTXCompiler::RunBackend(std::unique_ptr<xla::HloModule, std::default_delete<xla::HloModule> >, stream_executor::StreamExecutor*, xla::DeviceMemoryAllocator*)
    xla::Service::BuildExecutable(xla::HloModuleProto const&, std::unique_ptr<xla::HloModuleConfig, std::default_delete<xla::HloModuleConfig> >, xla::Backend*, stream_executor::StreamExecutor*, xla::DeviceMemoryAllocator*
    tensorflow::XlaCompilationCache::BuildExecutable(tensorflow::XlaCompiler::Options const&, tensorflow::XlaCompiler::CompilationResult const&, std::unique_ptr<xla::LocalExecutable, std::default_delete<xla::LocalExecutable> >*)
    tensorflow::XlaCompilationCache::CompileImpl(tensorflow::XlaCompiler::Options const&, tensorflow::NameAttrList const&, absl::Span<tensorflow::XlaCompiler::Argument const>, std::function<tensorflow::Status (tensorflow::XlaCompiler*, tensorflow::XlaCompiler::CompilationResult*)> const&, absl::optional<long long>, tensorflow::XlaCompiler::CompilationResult const**, xla::LocalExecutable**)
    tensorflow::XlaCompilationCache::Compile(tensorflow::XlaCompiler::Options const&, tensorflow::NameAttrList const&, absl::Span<tensorflow::XlaCompiler::Argument const>, tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompilationCache::CompileMode, tensorflow::XlaCompiler::CompilationResult const**, xla::LocalExecutable**)

    tensorflow::XlaCompileOp::Compute(tensorflow::OpKernelContext*)
    tensorflow::BaseGPUDevice::ComputeHelper(tensorflow::OpKernel*, tensorflow::OpKernelContext*)
    tensorflow::BaseGPUDevice::Compute(tensorflow::OpKernel*, tensorflow::OpKernelContext*)
    Eigen::ThreadPoolTempl<tensorflow::thread::EigenEnvironment>::WorkerLoop(int) std::_Function_handler<void (), tensorflow::thread::EigenEnvironment::CreateThread(std::function<void ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&)

    clone
    * End stack trace

2019-06-03 21:16:54.599314: W tensorflow/core/framework/op_kernel.cc:1401]
OP_REQUIRES failed at xla_ops.cc:429 : Internal: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc:3171) ShapeUtil::Equal(first_reduce->shape(), inst->shape()) 
Traceback (most recent call last):
  File "/home/usr/workspace/virtualenvs/xxx/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/home/usr/workspace/virtualenvs/xxx/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/usr/workspace/virtualenvs/xxx/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InternalError: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc:3171) ShapeUtil::Equal(first_reduce->shape(), inst->shape()) 
     [[{{node cluster_26_1/xla_compile}}]]
     [[{{node ModelCrossGPUs/GPU_0/Evaluation/cond_2/Merge}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/usr/pycharm/pycharm-community-2018.3.5/helpers/pydev/pydevd.py", line 1741, in <module>
    main()
  File "/home/usr/pycharm/pycharm-community-2018.3.5/helpers/pydev/pydevd.py", line 1735, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/usr/pycharm/pycharm-community-2018.3.5/helpers/pydev/pydevd.py", line 1135, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/usr/pycharm/pycharm-community-2018.3.5/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/usr/workspace/projects/xxx/train.py", line 201, in <module>
    tf.app.run()
  File "/home/usr/workspace/virtualenvs/xxx/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/usr/workspace/projects/xxx/train.py", line 197, in main
    train_net(hypes, graph, session, run_options, itr_init)
  File "/home/usr/workspace/projects/xxx/train.py", line 107, in train_net
    run_metadata=run_options['metadata'])
  File "/home/usr/workspace/virtualenvs/xxx/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/usr/workspace/virtualenvs/xxx/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/usr/workspace/virtualenvs/xxx/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/usr/workspace/virtualenvs/xxx/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc:3171) ShapeUtil::Equal(first_reduce->shape(), inst->shape()) 
     [[{{node cluster_26_1/xla_compile}}]]
     [[node ModelCrossGPUs/GPU_0/Evaluation/cond_2/Merge (defined at /home/usr/workspace/projects/xxx/utils/eval.py:84) ]]

1 Answer


I ran into a similar "RET_CHECK failure" error code after enabling XLA for a TensorFlow function:

tensorflow.python.framework.errors_impl.InternalError: RET_CHECK failure (tensorflow/compiler/jit/xla_launch_util.cc:586) input->dtype() != DT_RESOURCE  [Op:__inference_tf_train_3912]

Except that in my case it pointed at the inference call, so I commented out everything after the offending line, and to my surprise I got a much clearer and more useful error message.

The code lines after the inference call, commented out:

@tf.function(experimental_compile=True)
def train():
...
    with tf.GradientTape() as tape:
        train_batch_y_pred = m(xx, training=True)
        #loss_value = tf.losses.BinaryCrossentropy()(yy, train_batch_y_pred)
    #grads = tape.gradient(loss_value, m.trainable_weights)
    #opt.apply_gradients(zip(grads, m.trainable_weights))
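
(`experimental_compile=True` is what enables XLA compilation for the function here; note that in later TF releases this argument was renamed `jit_compile`.)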

The new error message:

 Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice.

It turned out that XLA was using an environment variable from a previous, since-removed CUDA installation. Since this error code can hide other, earlier errors, I think this answer may be useful even though it isn't directly related to the OP's case.
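
One way to fix it, as the warning itself suggests, is to point XLA at the live CUDA installation via `XLA_FLAGS`; a minimal sketch (the path is the one from my log and is only an example):

import os

# Set this before TensorFlow initializes XLA so it searches the right CUDA root
os.environ["XLA_FLAGS"] = "--xla_gpu_cuda_data_dir=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.0"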

Full error dump, before commenting out:

2021-05-28 05:36:34.723223: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1db60391100 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-05-28 05:36:34.723316: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5
2021-05-28 05:36:34.978528: E tensorflow/compiler/xla/status_macros.cc:56] Internal: RET_CHECK failure (tensorflow/compiler/jit/xla_launch_util.cc:586) input->dtype() != DT_RESOURCE 
0x00007FFF4E316C75  tensorflow::CurrentStackTrace
0x00007FFF4DA89B97  xla::status_macros::MakeErrorStream::Impl::GetStatus
0x00007FFF4DA8A028  xla::status_macros::MakeErrorStream::Impl::GetStatus
0x00007FFF4DA89A5C  xla::status_macros::MakeErrorStream::Impl::GetStatus
0x00007FFF310EEDE6  Eigen::TensorEvaluator<Eigen::TensorMap<Eigen::Tensor<tensorflow::ResourceHandle,5,1,__int64>,16,Eigen::MakePointer>,Eigen::DefaultDevice>::coeffRef
0x00007FFF310D6B93  absl::lts_2020_02_25::optional_internal::optional_data_dtor_base<tensorflow::Tensor,0>::~optional_data_dtor_base<tensorflow::Tensor,0>
0x00007FFF310D880C  absl::lts_2020_02_25::optional_internal::optional_data_dtor_base<tensorflow::Tensor,0>::~optional_data_dtor_base<tensorflow::Tensor,0>
0x00007FFF4B88F3FC  google::protobuf::RepeatedPtrField<tensorflow::InterconnectLink>::Add
0x00007FFF4B4E6075  tensorflow::EagerExecutor::~EagerExecutor
0x00007FFF4B4AEF91  google::protobuf::RepeatedPtrField<tensorflow::RunMetadata_FunctionGraphs>::Add
0x00007FFF4B4B55A5  google::protobuf::RepeatedPtrField<tensorflow::RunMetadata_FunctionGraphs>::Add
0x00007FFF4B4E310F  tensorflow::EagerExecutor::~EagerExecutor
0x00007FFF4B4AD2FC  google::protobuf::RepeatedPtrField<tensorflow::RunMetadata_FunctionGraphs>::Add
0x00007FFF4B4B0149  google::protobuf::RepeatedPtrField<tensorflow::RunMetadata_FunctionGraphs>::Add
0x00007FFF4B4AEC2F  google::protobuf::RepeatedPtrField<tensorflow::RunMetadata_FunctionGraphs>::Add
0x00007FFF4B4A2CC1  absl::lts_2020_02_25::Span<tensorflow::Tensor const >::end
0x00007FFF3103F185  TFE_Execute
0x00007FFF30FD1790  TFE_Py_ExecuteCancelable
0x00007FFF77D34FB1  (unknown)
0x00007FFF77D2632B  (unknown)
0x00007FFF77D09A06  (unknown)
0x00007FFF77D3A466  (unknown)
0x00007FFF886E3CC4  PyCFunction_Call
0x00007FFF886C4DCA  PyEval_EvalFrameDefault
0x00007FFF886BE618  PyEval_EvalCodeWithName
0x00007FFF886BFD5F  PyFunction_Vectorcall
0x00007FFF886C6B9E  PyEval_EvalFrameDefault
0x00007FFF886BE618  PyEval_EvalCodeWithName
0x00007FFF886C5675  PyEval_EvalFrameDefault
0x00007FFF886BE618  PyEval_EvalCodeWithName
0x00007FFF886C5675  PyEval_EvalFrameDefault
0x00007FFF886C2D24  PyEval_EvalFrameDefault
0x00007FFF886BE618  PyEval_EvalCodeWithName
0x00007FFF886C39B2  PyEval_EvalFrameDefault
0x00007FFF886BE618  PyEval_EvalCodeWithName
0x00007FFF886BFD5F  PyFunction_Vectorcall
0x00007FFF886B3061  PyObject_FastCallDict
0x00007FFF887B5CA6  PyObject_Call_Prepend
0x00007FFF887B5C15  PyNumber_InPlaceMultiply
0x00007FFF886C5C78  PyEval_EvalFrameDefault
0x00007FFF886C2C3B  PyEval_EvalFrameDefault
0x00007FFF886C2C3B  PyEval_EvalFrameDefault
0x00007FFF886C2C3B  PyEval_EvalFrameDefault
0x00007FFF886C2C3B  PyEval_EvalFrameDefault
0x00007FFF886BE618  PyEval_EvalCodeWithName
0x00007FFF886D315B  PyEval_EvalCodeEx
0x00007FFF886D30B9  PyEval_EvalCode
0x00007FFF886D2AC6  PyArena_New
0x00007FFF886D2A55  PyArena_New
0x00007FFF8877A1A3  Py_wfopen
0x00007FFF887785A8  PyUnicode_CompareWithASCIIString
0x00007FFF88777837  PyRun_SimpleFileExFlags
0x00007FFF888973FF  PyRun_AnyFileExFlags
0x00007FFF88846453  Py_gitversion
0x00007FFF8877B494  Py_RunMain
0x00007FFF8877B31D  Py_RunMain
0x00007FFF8877AECD  Py_Main
0x00007FF66CFD1258  (unknown)
0x00007FFFBEC27034  BaseThreadInitThunk
0x00007FFFC023D0D1  RtlUserThreadStart

2021-05-28 05:36:34.980650: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at xla_ops.cc:238 : Internal: RET_CHECK failure (tensorflow/compiler/jit/xla_launch_util.cc:586) input->dtype() != DT_RESOURCE 
Traceback (most recent call last):
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 364, in <module>
    start()
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 344, in start
    model_long, o = train_model(timeserie, True)
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 158, in train_model
    model = m1 = train_layered_model(i, o1, e, 60, timeserie, direction, math.log(101.0 / 100.0))
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 146, in train_layered_model
    train(m, x, y, epochs, batch_size)
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 114, in train
    _concrete_fn_train(x, y)
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 1669, in __call__
    return self._call_impl(args, kwargs)
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 1678, in _call_impl
    return self._call_with_structured_signature(args, kwargs,
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 1759, in _call_with_structured_signature
    return self._call_flat(
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 1918, in _call_flat
    return self._build_call_outputs(self._inference_function.call(
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 555, in call
    outputs = execute.execute(
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InternalError: RET_CHECK failure (tensorflow/compiler/jit/xla_launch_util.cc:586) input->dtype() != DT_RESOURCE  [Op:__inference_tf_train_3912]

Process finished with exit code 1

Full error dump, after commenting out:

2021-05-28 05:55:49.033829: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1fddbb09770 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-05-28 05:55:49.033924: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5
2021-05-28 05:55:49.092926: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-05-28 05:55:49.463671: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-05-28 05:55:49.512861: I tensorflow/core/platform/windows/subprocess.cc:308] SubProcess ended with return code: 0

2021-05-28 05:55:49.601997: I tensorflow/core/platform/windows/subprocess.cc:308] SubProcess ended with return code: 0

2021-05-28 05:55:49.697453: I tensorflow/core/platform/windows/subprocess.cc:308] SubProcess ended with return code: 0

2021-05-28 05:55:49.740538: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:70] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice.
2021-05-28 05:55:49.740661: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:71] Searched for CUDA in the following directories:
2021-05-28 05:55:49.740724: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74]   ./cuda_sdk_lib
2021-05-28 05:55:49.740772: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74]   C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.0
2021-05-28 05:55:49.740837: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74]   .
2021-05-28 05:55:49.740877: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:76] You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions.  For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work.
2021-05-28 05:55:49.742075: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:324] libdevice is required by this HLO module but was not found at ./libdevice.10.bc
2021-05-28 05:55:49.742459: I tensorflow/compiler/jit/xla_compilation_cache.cc:333] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.
2021-05-28 05:55:49.745001: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at xla_ops.cc:238 : Internal: libdevice not found at ./libdevice.10.bc
Traceback (most recent call last):
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 364, in <module>
    start()
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 344, in start
    model_long, o = train_model(timeserie, True)
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 158, in train_model
    model = m1 = train_layered_model(i, o1, e, 60, timeserie, direction, math.log(101.0 / 100.0))
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 146, in train_layered_model
    train(m, x, y, epochs, batch_size)
  File "C:/Users/Cfirm/PycharmProjects/NNProj/lstm_classification_double.py", line 114, in train
    _concrete_fn_train(x, y)
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 1669, in __call__
    return self._call_impl(args, kwargs)
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 1678, in _call_impl
    return self._call_with_structured_signature(args, kwargs,
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 1759, in _call_with_structured_signature
    return self._call_flat(
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 1918, in _call_flat
    return self._build_call_outputs(self._inference_function.call(
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 555, in call
    outputs = execute.execute(
  File "C:\Users\Cfirm\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InternalError: libdevice not found at ./libdevice.10.bc [Op:__inference_tf_train_1439]

Process finished with exit code 1

It turned out that even after I fixed this second error, the first one still appeared. It was caused by an LSTM layer whose `unroll` argument was set to False; the layer worked when it was set to True.
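
A minimal sketch of that change (the layer width and other arguments are placeholders, not my actual model):

import tensorflow as tf

# unroll=False triggered the RET_CHECK failure under XLA; unroll=True worked
layer = tf.keras.layers.LSTM(64, unroll=True)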

answered 2021-05-28T04:04:50.587