When I pull a small dataset from Snowflake, it works fine. However, when I try to pull a large dataset from Snowflake using the Python Snowflake connector, it raises an OperationalError.
Any help is appreciated.
Error: OperationalError: (snowflake.connector.errors.OperationalError) 250003: Failed to get the response. Hanging? method: get, url: https://sfc-oh-ds1-customer-stage.s3.amazonaws.com/cyok-s-ohss0400/results/0198120b-0077-b3d9-0000-0661009d1726_0/main/data_0_0_1?x-amz-server-side-encryption-customer-algorithm=AES256&response-content-encoding=gzip&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20201105T174742Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86399&X-Amz-Credential=AKIAZ767M53OOUHERYED%2F20201105%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Signature=047fa367aeea73a6ef66de485a3d5dccc229707a72c21096b01b180fd3369c7d (Background on this error at: http://sqlalche.me/e/e3q8)
Code:
import pandas as pd
import snowflake.connector
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.dialects import registry
from snowflake.sqlalchemy import URL
import sqlalchemy

# Build the Snowflake SQLAlchemy connection URL (credentials redacted)
snow_conn1 = URL(
    account="XXX.aws",
    user="XXX",
    password="XXX",
    insecure_mode=True,
    role="SYSADMIN",
    warehouse="XX",
    database="XX",
    schema="XX")

engine1 = create_engine(snow_conn1)

with engine1.connect() as con:
    mtd_query1 = "select * from information_schema.tables"
    df1 = pd.read_sql(mtd_query1, con)
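
For context, the small metadata query above runs fine; the failure only shows up on larger pulls. Below is a minimal sketch of the kind of chunked read I'd like to end up with, reusing the same engine. The table name and chunk size are placeholders for illustration, not from my actual run:

# Hypothetical chunked pull: stream a large result in batches instead of
# one big pd.read_sql call. MY_LARGE_TABLE and the chunk size are placeholders.
big_query = "select * from MY_LARGE_TABLE"
chunks = []
with engine1.connect() as con:
    for chunk in pd.read_sql(big_query, con, chunksize=100000):
        chunks.append(chunk)
df_big = pd.concat(chunks, ignore_index=True)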