
I have a webapp written with django-channels + celery that uses websockets for client-server communication. After testing it on my host machine, running daphne, a celery worker and redis directly, I decided to wrap everything in docker-compose to have a deployable system.

This is where the problems started. After learning, tweaking and debugging my docker-compose.yaml I managed to get it running, but I still cannot get websockets to work again.

If I open a websocket and send a command, whether from the javascript part of the app or from the javascript console in Chrome, it never triggers the ws_connect or ws_receive consumers.

Here is my setup:

settings.py

# channels settings
REDIS_HOST = os.environ['REDIS_HOST']
REDIS_URL = "redis://{}:6379".format(REDIS_HOST)

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            # "hosts": [os.environ.get('REDIS_HOST', 'redis://localhost:6379')],
            "hosts": [REDIS_URL],
        },
        "ROUTING": "TMWA.routing.channel_routing",
    },
}
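The commented-out line above hints at an environment-variable fallback; as a standalone sketch (a hypothetical helper, not part of the project), the REDIS_URL logic amounts to:

```python
import os

def redis_url(default_host="localhost"):
    # Mirror the settings.py logic: read REDIS_HOST from the
    # environment and build a redis:// URL for the channel layer,
    # falling back to a default host when the variable is unset.
    host = os.environ.get("REDIS_HOST", default_host)
    return "redis://{}:6379".format(host)

# Inside the daphne/worker containers REDIS_HOST=redis, so the
# channel layer should point at redis://redis:6379.
os.environ["REDIS_HOST"] = "redis"
print(redis_url())
```

Both the daphne and worker containers set REDIS_HOST=redis in the compose file, so they should resolve to the same URL.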

routing.py

channel_routing = {
    'websocket.connect': consumers.ws_connect,
    'websocket.receive': consumers.ws_receive,
    'websocket.disconnect': consumers.ws_disconnect,
}

consumers.py

@channel_session
def ws_connect(message):
    print "in ws_connect"
    print message['path']
    prefix, label, sessionId = message['path'].strip('/').split('/')
    print prefix, label, sessionId
    message.channel_session['sessionId'] = sessionId
    message.reply_channel.send({"accept": True})
    connMgr.AddNewConnection(sessionId, message.reply_channel)

@channel_session
def ws_receive(message):
    print "in ws_receive"
    jReq = message['text']
    print jReq
    task = ltmon.getJSON.delay( jReq )
    connMgr.UpdateConnection(message.channel_session['sessionId'], task.id)

@channel_session
def ws_disconnect(message):
    print "in ws_disconnect"
    connMgr.CloseConnection(message.channel_session['sessionId'])
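Note that ws_connect assumes the path has exactly three segments; a path of any other shape raises ValueError before the connection is even accepted. A standalone sketch of that parsing step (with a hypothetical example path):

```python
def parse_ws_path(path):
    # Same parsing as in ws_connect: strip the surrounding slashes
    # and unpack into exactly three segments. Any other number of
    # segments raises ValueError, which would kill the consumer
    # before reply_channel.send({"accept": True}) is reached.
    prefix, label, session_id = path.strip('/').split('/')
    return prefix, label, session_id

print(parse_ws_path('/ws/tracker/abc123/'))
```

If the URL the client opens does not match this three-segment shape, the connect handler fails silently from the client's point of view.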

docker-compose.yaml

version: '3'
services:
  daphne:
    build: ./app
    image: "tmwa:latest"
    # working_dir: /opt/TMWA
    command: bash -c "./start_server.sh"
    ports:
      - "8000:8000"
    environment:
      - REDIS_HOST=redis
      - RABBIT_HOST=rabbit
      - DB_NAME=postgres
      - DB_USER=postgres
      - DB_SERVICE=postgres
      - DB_PORT=5432
      - DB_PASS=''
    networks:
      - front
      - back
    depends_on:
      - redis
      - postgres
      - rabbitmq
    links:
      - redis:redis
      - postgres:postgres
      - rabbitmq:rabbit
    volumes:
      - ./app:/opt/myproject
      - static:/opt/myproject/static
      - /Volumes/AMS_Disk/TrackerMonitoring/Data/:/Data/CalFiles

  worker:
    image: "tmwa:latest"
    # working_dir: /opt/myproject
    command: bash -c "./start_worker.sh"
    environment:
      - REDIS_HOST=redis
      - RABBIT_HOST=rabbit
      - DB_NAME=postgres
      - DB_USER=postgres
      - DB_SERVICE=postgres
      - DB_PORT=5432
      - DB_PASS=''
    networks:
      - front
      - back
    depends_on:
      - redis
      - postgres
      - rabbitmq
    links:
      - redis:redis
      - postgres:postgres
      - rabbitmq:rabbit
    volumes:
      - ./app:/opt/myproject
      - /Volumes/AMS_Disk/TrackerMonitoring/Data/:/Data/CalFiles

  postgres:
    restart: always
    image: postgres:latest
    networks:
      - back
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/

  redis:
    restart: always
    image: redis
    networks:
      - back
    ports:
      - "6379:6379"
    volumes:
      - redis:/data

  rabbitmq:
    image: tutum/rabbitmq
    environment:
      - RABBITMQ_PASS=password
    networks:
      - back
    ports:
      - "5672:5672"
      - "15672:15672"

networks:
  front:
  back:

volumes:
  pgdata:
    driver: local
  redis:
    driver: local
  app:
    driver: local
  static:

I run the server with

daphne -b 0.0.0.0 -p 8000 TMWA.asgi:channel_layer

and the worker with

python manage.py runworker

I removed nginx from the equation, so I run the worker and daphne in separate containers; all websocket connections should be handled by the daphne container, which then dispatches the computation tasks to the worker. The problem is that when I open a websocket and send data, nothing happens.
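For this split to work, daphne and the worker must reach the same Redis instance over the compose network. A first sanity check is plain TCP reachability of redis:6379 from inside each container; a minimal stdlib sketch (hypothetical helper, not part of the project):

```python
import socket

def can_connect(host, port, timeout=2.0):
    # Try a plain TCP connect; returns True if something is
    # listening at host:port (e.g. the "redis" service on the
    # compose "back" network), False on refusal or timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run inside a container, e.g. `docker-compose exec daphne python -c "...; print(can_connect('redis', 6379))"`, this distinguishes a networking problem from a channel-layer configuration problem.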

Output of docker-compose:

redis_1     | 1:C 11 Oct 15:25:22.012 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1     | 1:C 11 Oct 15:25:22.012 # Redis version=4.0.2, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1     | 1:C 11 Oct 15:25:22.012 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1     | 1:M 11 Oct 15:25:22.013 * Running mode=standalone, port=6379.
redis_1     | 1:M 11 Oct 15:25:22.013 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1     | 1:M 11 Oct 15:25:22.013 # Server initialized
redis_1     | 1:M 11 Oct 15:25:22.014 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1     | 1:M 11 Oct 15:25:22.014 * DB loaded from disk: 0.000 seconds
redis_1     | 1:M 11 Oct 15:25:22.014 * Ready to accept connections
rabbitmq_1  | => Securing RabbitMQ with a preset password
postgres_1  | LOG:  database system was interrupted; last known up at 2017-10-11 15:09:33 UTC
rabbitmq_1  | => Done!
postgres_1  | LOG:  database system was not properly shut down; automatic recovery in progress
rabbitmq_1  | ========================================================================
rabbitmq_1  | You can now connect to this RabbitMQ server using, for example:
postgres_1  | LOG:  invalid record length at 0/249A378: wanted 24, got 0
rabbitmq_1  |
postgres_1  | LOG:  redo is not required
rabbitmq_1  |     curl --user admin:<RABBITMQ_PASS> http://<host>:<port>/api/vhosts
rabbitmq_1  |
postgres_1  | LOG:  MultiXact member wraparound protections are now enabled
rabbitmq_1  | ========================================================================
postgres_1  | LOG:  database system is ready to accept connections
rabbitmq_1  |
postgres_1  | LOG:  autovacuum launcher started
rabbitmq_1  |               RabbitMQ 3.6.1. Copyright (C) 2007-2016 Pivotal Software, Inc.
rabbitmq_1  |   ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
rabbitmq_1  |   ##  ##
rabbitmq_1  |   ##########  Logs: /var/log/rabbitmq/rabbit@a22d1ccdf39e.log
rabbitmq_1  |   ######  ##        /var/log/rabbitmq/rabbit@a22d1ccdf39e-sasl.log
rabbitmq_1  |   ##########
daphne_1    | System check identified some issues:
daphne_1    |
daphne_1    | WARNINGS:
daphne_1    | ?: (1_7.W001) MIDDLEWARE_CLASSES is not set.
daphne_1    |   HINT: Django 1.7 changed the global defaults for the MIDDLEWARE_CLASSES. django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.messages.middleware.MessageMiddleware were removed from the defaults. If your project needs these middleware then you should configure this setting.
daphne_1    | Operations to perform:
daphne_1    |   Synchronize unmigrated apps: staticfiles, channels, messages
daphne_1    |   Apply all migrations: admin, TkMonitor, contenttypes, auth, sessions
daphne_1    | Synchronizing apps without migrations:
daphne_1    |   Creating tables...
daphne_1    |     Running deferred SQL...
daphne_1    |   Installing custom SQL...
daphne_1    | Running migrations:
daphne_1    |   No migrations to apply.
worker_1    | System check identified some issues:
worker_1    |
worker_1    | WARNINGS:
worker_1    | ?: (1_7.W001) MIDDLEWARE_CLASSES is not set.
worker_1    |   HINT: Django 1.7 changed the global defaults for the MIDDLEWARE_CLASSES. django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.messages.middleware.MessageMiddleware were removed from the defaults. If your project needs these middleware then you should configure this setting.
worker_1    | 2017-10-11 15:25:30,482 - INFO - runworker - Using single-threaded worker.
worker_1    | 2017-10-11 15:25:30,483 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
worker_1    | 2017-10-11 15:25:30,483 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
daphne_1    | System check identified some issues:
daphne_1    |
daphne_1    | WARNINGS:
daphne_1    | ?: (1_7.W001) MIDDLEWARE_CLASSES is not set.
daphne_1    |   HINT: Django 1.7 changed the global defaults for the MIDDLEWARE_CLASSES. django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.messages.middleware.MessageMiddleware were removed from the defaults. If your project needs these middleware then you should configure this setting.
rabbitmq_1  |               Starting broker... completed with 6 plugins.
daphne_1    | DEBUG: Init FileManager with path /Data/CalFiles
daphne_1    | DEBUG: Found 72026 files
daphne_1    | DEBUG: 72026 entries in the DB
daphne_1    | DEBUG: DB updated.
daphne_1    | 72026 entries in the DB
daphne_1    | Last file: /Data/CalFiles/CalTree_1500299703.root
daphne_1    |
daphne_1    | 0 static files copied to '/opt/myproject/static', 89 unmodified.
daphne_1    | 2017-10-11 15:25:42,068 INFO     Starting server at tcp:port=8000:interface=0.0.0.0, channel layer TMWA.asgi:channel_layer.
daphne_1    | 2017-10-11 15:25:42,070 INFO     HTTP/2 support enabled
daphne_1    | 2017-10-11 15:25:42,070 INFO     Using busy-loop synchronous mode on channel layer
daphne_1    | 2017-10-11 15:25:42,071 INFO     Listening on endpoint tcp:port=8000:interface=0.0.0.0

After this, radio silence. When running everything on the host I saw the print output, but now I get nothing. Any idea where the problem might be?


2 Answers


It looks like you haven't added routing for your websockets. You need to add a routing.py file (in the same directory as settings.py):

from channels.routing import ProtocolTypeRouter, URLRouter
import apps.tasks.routing

application = ProtocolTypeRouter({
    # Empty for now (http->django views is added by default)
    'websocket': URLRouter(apps.tasks.routing.websocket_urlpatterns),
})

where websocket_urlpatterns looks like:

from django.conf.urls import url
from . import consumers

websocket_urlpatterns = [
    url(r'^tasks/$', consumers.TasksConsumer),
]

I created a simple example with celery and django channels that shows how to use them together (github).

answered 2018-10-25T11:37:46.720

I think the problem is that Daphne, when not running in development, does not handle the websocket traffic by itself. Instead, you need to invoke runworker as described in the Channels documentation.

answered 2017-10-21T14:20:53.410