
Nova instances fail at launch with the error: "Failed to perform requested operation on instance ... The server has either erred or is incapable of performing the requested operation. (HTTP 500)". See the screenshot below.

[Screenshot: instance creation error]

Surprisingly, it works fine when the volume is attached separately after the instance has booted. For this you need to set "Create New Volume" to "No" when creating the instance.
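For reference, the same workaround can be scripted from the CLI. A minimal sketch; the image, flavor, network, server and volume names below are placeholders:

openstack server create --image <image> --flavor <flavor> --network <network> test-vm   # boot from image, no new volume
openstack volume create --size 10 test-vol                                              # create the volume separately
openstack server add volume test-vm test-vol                                            # attach once the instance is ACTIVE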

We restarted the cinder services, but that did not resolve the issue.
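On this CentOS 7 / RDO setup the restart was roughly the following (unit names assume a standard Packstack/RDO layout):

systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume
systemctl status openstack-cinder-volume   # confirm the service came back up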

From the API logs, we found HTTP 500 errors during the API interaction between the service endpoints (Nova and Cinder). The logs are pasted below.
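The entries below were pulled from the default RDO log locations, along these lines (the paths are an assumption and may differ on other installs):

grep 'HTTP 500' /var/log/cinder/api.log                    # on the controller
grep 'ERROR nova.compute' /var/log/nova/nova-compute.log   # on the compute nodes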

Can someone help resolve this issue?

Thanks in advance.

OpenStack - Details

It is a 3-node setup: one controller + 2 compute nodes. The controller runs CentOS 7 with the OpenStack Ocata release, Cinder version 1.11.0 and Nova version 7.1.2 (these are the client versions; see the output below). The list of Nova and Cinder RPMs follows.

==> api.log <==

2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] Caught error: <class 'oslo_messaging.exceptions.MessagingTimeout'> Timed out waiting for a reply to message ID bf2f80590a754b59a720405cd0bc1ffb
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault Traceback (most recent call last):
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault   File "/usr/lib/python2.7/site-packages/cinder/api/middleware/fault.py", line 79, in __call__
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault     return req.get_response(self.application)
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault   File "/usr/lib/python2.7/site-packages/webob/request.py", line 1299, in send
2019-01-30 04:16:28.793 275098 INFO cinder.api.middleware.fault [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] http://10.110.77.2:8776/v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action returned with HTTP 500
2019-01-30 04:16:28.794 275098 INFO eventlet.wsgi.server [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] 10.110.77.4 "POST /v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action HTTP/1.1" status: 500  len: 425 time: 60.0791931
2019-01-30 04:16:28.813 275098 INFO cinder.api.openstack.wsgi [req-53d149ac-6e60-4ddd-9ace-216d12122790 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] POST http://10.110.77.2:8776/v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action
2019-01-30 04:16:28.852 275098 INFO cinder.volume.api [req-53d149ac-6e60-4ddd-9ace-216d12122790 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] Volume info retrieved successfully.
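The MessagingTimeout above means cinder-api never received an RPC reply from cinder-volume. One way to sanity-check the message bus (assuming the default RabbitMQ backend) is:

rabbitmqctl status                      # broker health
rabbitmqctl list_queues | grep cinder   # look for cinder queues backing up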

Nova logs:

2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Instance failed block device setup
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Traceback (most recent call last):
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1588, in _prep_block_device
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     wait_func=self._await_block_device_map_created)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 512, in attach_block_devices
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     _log_and_attach(device)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 509, in _log_and_attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     bdm.attach(*attach_args, **attach_kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 408, in attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     do_check_attach=do_check_attach)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 48, in wrapped
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     ret_val = method(obj, context, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 258, in attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     connector)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 168, in wrapper
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     res = method(self, ctx, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 190, in wrapper
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     res = method(self, ctx, volume_id, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 391, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     exc.code if hasattr(exc, 'code') else None)})
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     self.force_reraise()
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     six.reraise(self.type_, self.value, self.tb)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 365, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     context).volumes.initialize_connection(volume_id, connector)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 404, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     {'connector': connector})
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 334, in _action
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     resp, body = self.api.client.post(url, body=body)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 167, in post
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     return self._cs_request(url, 'POST', **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 155, in _cs_request
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     return self.request(url, method, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 144, in request
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]     raise exceptions.from_response(resp, body)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-dcd4a981-8b22-4c3d-9ba7-25fafe80b8f5)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]
2019-01-30 03:58:04.811 5642 DEBUG nova.compute.claims [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Aborting claim: [Claim: 4096 MB memory, 40 GB disk] abort /usr/lib/python2.7/site-packages/nova/compute/claims.py:124
2019-01-30 03:58:04.812 5642 DEBUG oslo_concurrency.lockutils [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2019-01-30 03:58:04.844 5642 INFO nova.scheduler.client.report [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] Deleted allocation for instance aba62cf8-0880-4bf7-8201-3365861c8079

Output from a few OpenStack sanity-check commands:

[root@controller ~(keystone_admin)]# cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host           | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup    | controller     | nova | enabled | up    | 2019-01-31T10:27:20.000000 | -               |
| cinder-scheduler | controller     | nova | enabled | up    | 2019-01-31T10:27:13.000000 | -               |
| cinder-volume    | controller@lvm | nova | enabled | up    | 2019-01-31T10:27:12.000000 | -               |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+


[root@controller yum.repos.d]# rpm -qa | grep cinder
openstack-cinder-10.0.5-1.el7.noarch
puppet-cinder-10.4.0-1.el7.noarch
python-cinder-10.0.5-1.el7.noarch
python2-cinderclient-1.11.0-1.el7.noarch
[root@controller yum.repos.d]# rpm -qa | grep nova
openstack-nova-conductor-15.1.0-1.el7.noarch
openstack-nova-novncproxy-15.1.0-1.el7.noarch
openstack-nova-compute-15.1.0-1.el7.noarch
openstack-nova-cert-15.1.0-1.el7.noarch
openstack-nova-api-15.1.0-1.el7.noarch
openstack-nova-console-15.1.0-1.el7.noarch
openstack-nova-common-15.1.0-1.el7.noarch
openstack-nova-placement-api-15.1.0-1.el7.noarch
python-nova-15.1.0-1.el7.noarch
python2-novaclient-7.1.2-1.el7.noarch
openstack-nova-scheduler-15.1.0-1.el7.noarch
puppet-nova-10.5.0-1.el7.noarch
[root@controller yum.repos.d]#

[root@controller yum.repos.d]# rpm -qa | grep ocata
centos-release-openstack-ocata-1-2.el7.noarch
[root@controller yum.repos.d]# uname -a
Linux controller 3.10.0-862.2.3.el7.x86_64 #1 SMP Wed May 9 18:05:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@controller yum.repos.d]#

[root@controller yum.repos.d]# cinder --version
1.11.0
[root@controller yum.repos.d]# nova --version
7.1.2
[root@controller yum.repos.d]#

1 Answer


I found a way to resolve this issue. I observed that a few volumes in OpenStack were stuck in the "error_deleting" state. I explicitly changed the volume state in the Cinder DB using "cinder reset-state --state available volume-id".

That allowed me to delete the volume successfully. After that I restarted the cinder services and everything worked fine.
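For completeness, the recovery sequence was roughly the following (the volume ID is a placeholder):

cinder list | grep error                           # find volumes stuck in an error state
cinder reset-state --state available <volume-id>   # reset the state in the Cinder DB
cinder delete <volume-id>                          # now the delete succeeds
systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume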

Answered 2019-02-11T05:06:08.027