I have:
- an ML model (PyTorch) that vectorizes the data and makes a prediction in ~3.5 ms (median ≈ mean)
- an HTTP API (FastAPI + uvicorn) that handles a simple request in ~2 ms
But when I combine them, the median response time becomes almost 200 ms.
What is the cause of this degradation?
Note:
- I also tried serving the model with aiohttp alone, aiohttp + gunicorn, and the Flask development server - same results
- I tried sending 2, 20, and 100 requests per second - same results
- I do realize that parallel requests increase latency, but not 30-fold! (a sequential cross-check is sketched right after this list)
- CPU load is only ~7%
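For reference, this is the kind of minimal sequential client that can cross-check the wrk2 numbers below (a sketch; the URL and payload are placeholders, and it assumes the requests package is installed):

import statistics
import time

import requests

payload = {"user_id": "12345678-1234-1234-1234-123456789123"}  # placeholder record

latencies_ms = []
for _ in range(100):
    start = time.perf_counter()
    requests.post("http://localhost:8000/", json=payload)
    latencies_ms.append((time.perf_counter() - start) * 1000)

print(f"median: {statistics.median(latencies_ms):.1f}ms")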
Here is how I measured the model's performance (I measured the median separately; it was nearly identical to the mean):
def predict_all(predictor, data):
    for i in range(len(data)):
        predictor(data[i])

data = load_random_data()
predictor = load_predictor()

%timeit predict_all(predictor, data)
# manually divide total time by number of records in data
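Dividing the total by the record count only gives the mean; for the median I timed each call individually, roughly like this (a sketch using time.perf_counter and the statistics module):

import statistics
import time

def predict_all_timed(predictor, data):
    # Time each prediction separately so percentiles can be computed,
    # instead of dividing one total by the record count.
    latencies_ms = []
    for record in data:
        start = time.perf_counter()
        predictor(record)
        latencies_ms.append((time.perf_counter() - start) * 1000)
    return latencies_ms

latencies_ms = predict_all_timed(predictor, data)
print(f"median: {statistics.median(latencies_ms):.2f}ms, "
      f"mean: {statistics.mean(latencies_ms):.2f}ms")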
Here is the FastAPI version:
from fastapi import FastAPI
from starlette.requests import Request

from my_code import load_predictor

app = FastAPI()
app.predictor = load_predictor()

@app.post("/")
async def root(request: Request):
    predictor = request.app.predictor
    data = await request.json()
    return predictor(data)
HTTP load test (2 threads, 50 connections, a constant rate of 100 requests/s for 30 s):
wrk2 -t2 -c50 -d30s -R100 --latency -s post.lua http://localhost:8000/
Edit.
Here is a slightly modified version that I tried both with and without async:
import logging
import time

@app.post("/")
# async def root(request: Request, user_dict: dict):
def root(request: Request, user_dict: dict):
    predictor = request.app.predictor

    start_time = time.time()
    y = predictor(user_dict)
    finish_time = time.time()

    logging.info(f"user {user_dict['user_id']}: "
                 f"prediction made in {(finish_time - start_time) * 1000:.2f}ms")
    return y
So I just added logging of the prediction time.
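To see where in the ~200 ms the time goes (inside the app vs. transport/queueing), the whole request can also be timed with a middleware, something like this sketch based on FastAPI's HTTP middleware hook:

import time

from starlette.requests import Request

@app.middleware("http")
async def add_process_time_header(request: Request, call_next):
    # Measures everything the app does for one request, including body
    # parsing and response serialization, not just the predictor call.
    start = time.perf_counter()
    response = await call_next(request)
    response.headers["X-Process-Time"] = f"{(time.perf_counter() - start) * 1000:.2f}ms"
    return response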
Logs for the async version:
2021-02-03 11:14:31,822: user 12345678-1234-1234-1234-123456789123: prediction made in 2.87ms
INFO: 127.0.0.1:49284 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,329: user 12345678-1234-1234-1234-123456789123: prediction made in 3.93ms
INFO: 127.0.0.1:49286 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,345: user 12345678-1234-1234-1234-123456789123: prediction made in 15.06ms
INFO: 127.0.0.1:49287 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,351: user 12345678-1234-1234-1234-123456789123: prediction made in 4.78ms
INFO: 127.0.0.1:49288 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,358: user 12345678-1234-1234-1234-123456789123: prediction made in 6.85ms
INFO: 127.0.0.1:49289 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,363: user 12345678-1234-1234-1234-123456789123: prediction made in 3.71ms
INFO: 127.0.0.1:49290 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,369: user 12345678-1234-1234-1234-123456789123: prediction made in 5.49ms
INFO: 127.0.0.1:49291 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,374: user 12345678-1234-1234-1234-123456789123: prediction made in 5.00ms
So the predictions themselves are fast, under 10 ms on average, but the whole request takes ~200 ms.
Logs for the sync version:
2021-02-03 11:17:58,332: user 12345678-1234-1234-1234-123456789123: prediction made in 65.49ms
2021-02-03 11:17:58,334: user 12345678-1234-1234-1234-123456789123: prediction made in 23.05ms
INFO: 127.0.0.1:49481 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:49482 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:17:58,338: user 12345678-1234-1234-1234-123456789123: prediction made in 72.39ms
2021-02-03 11:17:58,341: user 12345678-1234-1234-1234-123456789123: prediction made in 78.66ms
2021-02-03 11:17:58,341: user 12345678-1234-1234-1234-123456789123: prediction made in 85.74ms
Now the predictions take much longer! For whatever reason, the exact same call, just made in a sync context, started taking ~30x longer. Yet the whole request still takes about the same time - 160-200 ms.
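Since FastAPI runs non-async path functions in a threadpool, my next step is to log which thread serves each request: overlapping thread names would mean the sync predictor calls run concurrently and contend with each other. A sketch of that variant of the handler above:

import logging
import threading
import time

@app.post("/")
def root(request: Request, user_dict: dict):
    predictor = request.app.predictor

    start_time = time.time()
    y = predictor(user_dict)
    finish_time = time.time()

    # If different thread names show up for near-simultaneous requests,
    # the sync predictor calls overlap in the threadpool.
    logging.info(f"user {user_dict['user_id']}: "
                 f"thread={threading.current_thread().name}, "
                 f"prediction made in {(finish_time - start_time) * 1000:.2f}ms")
    return y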