
So, for some of my tasks on Celery 3.0.19, Celery is apparently not respecting the queue attribute and is instead sending tasks to the default celery queue.

# This is a stupid test with the proprietary code ripped out.
import subprocess

from celery.task import task


@task  # assumption: the real code registers this as a Celery task
def run_chef_task(task_name, **env):
    if env is None:
        env = {}
    if task_name is not None:
        env['CHEF'] = task_name

    print env
    cmd = []
    if len(env):
        cmd = ['env']
        for key, value in env.items():
            if not isinstance(key, str) or not isinstance(value, str):
                raise TypeError(
                    "Environment values must be strings ({0}, {1})"
                    .format(key, value))
            key = "ND" + key.upper()
            cmd.append('%s=%s' % (key, value))

    cmd.extend(['/root/chef/run_chef', 'noudata_default'])
    print cmd
    ret = " ".join(cmd)  # kept from the original; immediately overwritten below
    ret = subprocess.check_call(cmd)
    print 'CHECK'
    return ret, cmd

r = run_chef_task.apply_async(args=['mongo_backup'],
                              queue='my_special_queue_with_only_one_worker')
r.get()  # returns immediately

Go to Flower. Find the task. Find the worker that ran the task. See that the workers are different, and that the worker that ran the task is not the special worker. Confirm that Flower says "special_worker" is only consuming from "my_special_queue", and that only "special_worker" is on "my_special_queue".
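(As a cross-check on what Flower shows, here is a minimal sketch, assuming the calling process is configured against the same broker, that asks every worker directly which queues it consumes, using Celery's remote-control inspect API:)

from celery import current_app

# Sketch: ask each worker which queues it is actually consuming from.
replies = current_app.control.inspect().active_queues() or {}
for worker, queues in replies.items():
    print worker, '->', sorted(q['name'] for q in queues)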

Now here is the really interesting part:

Pull up rabbitmq-management on the broker (and confirm that the broker really is the broker).
There is one message sent through the broker on the correct queue to the correct worker (verified). Immediately afterwards, another message is sent on the celery queue.
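The same check can be scripted rather than clicked through the UI. A minimal sketch, assuming the RabbitMQ management plugin is enabled with the default guest/guest credentials and the HTTP API on port 15672 (older brokers used 55672); the host, port, and credentials here are assumptions, not details from the setup above:

import base64
import json
import urllib2

# Sketch: dump every exchange -> queue binding the broker knows about,
# via the RabbitMQ management HTTP API.
req = urllib2.Request('http://localhost:15672/api/bindings')
req.add_header('Authorization', 'Basic ' + base64.b64encode('guest:guest'))
for b in json.load(urllib2.urlopen(req)):
    print '%-25s -> %-45s routing_key=%r' % (
        b['source'] or '(default exchange)', b['destination'], b['routing_key'])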

In the worker's log file, it says it accepted and completed the task:

[2013-05-16 02:24:15,455: INFO/MainProcess] Got task from broker: noto.tasks.chef_tasks.run_chef_task[0dba1107-2bb5-4c19-8df3-8a74d8e1234c]
[2013-05-16 02:24:15,456: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x2479c08> (args:('noto.tasks.chef_tasks.run_chef_task', '0dba1107-2bb5-4c19-8df3-8a74d8e1234c', ['mongo_backup'], {}, {'utc': True, 'is_eager': False, 'chord': None, 'group': None, 'args': ['mongo_backup'], 'retries': 0, 'delivery_info': {'priority': None, 'routing_key': u'', 'exchange': u'celery'}, 'expires': None, 'task': 'noto.tasks.chef_tasks.run_chef_task', 'callbacks': None, 'errbacks': None, 'hostname': 'manager1.i-6e958f0f', 'taskset': None, 'kwargs': {}, 'eta': None, 'id': '0dba1107-2bb5-4c19-8df3-8a74d8e1234c'}) kwargs:{})
// This is output from the task
[2013-05-16 02:24:15,459: WARNING/PoolWorker-1] {'CHEF': 'mongo_backup'}

[2013-05-16 02:24:15,463: WARNING/PoolWorker-1] ['env', 'NDCHEF=mongo_backup', '/root/chef/run_chef', 'default']
[2013-05-16 02:24:15,477: DEBUG/MainProcess] Task accepted: noto.tasks.chef_tasks.run_chef_task[0dba1107-2bb5-4c19-8df3-8a74d8e1234c] pid:17210
...A bunch of boring debug logs repeating the registered tasks
[2013-05-16 02:31:45,061: INFO/MainProcess] Task noto.tasks.chef_tasks.run_chef_task[0dba1107-2bb5-4c19-8df3-8a74d8e1234c] succeeded in 88.438395977s: (0, ['env', 'NDCHEF=mongo_backup',...

So it accepts the task, runs it, and at the same time somehow fires off another worker on another queue to run it as well, instead of just returning properly. The only thing I can think of is that this worker is the only one with the current source; all the other workers have old source in which the subprocess call is commented out, so they return more or less instantly.

Does anyone know what could be causing this? This is not the only task we have seen this happen with, and it seems to pick 3 machines at random off the celery queue to run it on. Is there something strange we could have done in our celeryconfig that would cause this?


1 Answer


Your TaskPool log suggests that there is no explicit routing; see the empty routing_key and the default "celery" exchange:

'delivery_info': {'priority': None, 'routing_key': u'', 'exchange': u'celery'}

My guess is that the problem comes from the automatic out-of-the-box defaults. Consider testing explicit manual routing in your celery config; see the manual routing section of the docs:

http://docs.celeryproject.org/en/latest/userguide/routing.html#manual-routing

For example:

CELERY_ROUTES = {
    "work-queue": {
        "queue": "work_queue",
        "binding_key": "work_queue"
    },
    "new-feeds": {
        "queue": "new_feeds",
        "binding_key": "new_feeds"
    },
}

CELERY_QUEUES = {
    "work_queue": {
        "exchange": "work_queue",
        "exchange_type": "direct",
        "binding_key": "work_queue",
    },
    "new_feeds": {
        "exchange": "new_feeds",
        "exchange_type": "direct",
        "binding_key": "new_feeds"
    },
}
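With queues declared explicitly like that, each worker also has to be told which queue to consume, and a call site can spell out its routing instead of relying on the defaults. A sketch reusing the names from the config above (the --app value is a placeholder):

# Workers are started against an explicit queue, e.g.:
#
#   celery worker --app=proj -Q work_queue
#
# and a producer can make the routing explicit too (reusing the
# question's task with the example queue):
result = run_chef_task.apply_async(
    args=['mongo_backup'],
    queue='work_queue',
    routing_key='work_queue')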
answered 2013-05-17T00:57:53.207