I have a database table from which I am trying to fetch two columns for 5+ million rows.
The following Python code runs perfectly and quickly (it takes about 3 minutes to retrieve the full 5+ million rows via the query and write them to a CSV):
import pandas as pd
import teradatasql
hostname = "myhostname.domain.com"
username = "myusername"
password = "mypassword"
with teradatasql.connect(host=hostname, user=username, password=password, encryptdata=True) as conn:
    df = pd.read_sql("SELECT COL1, COL2 FROM MY_TABLE", conn)
    df.to_csv(mypath, sep='\t', index=False)
The following R code, using the teradatasql package, works when the explicitly supplied number of rows to retrieve is small. However, when n is large enough (and it really isn't that large), or when I ask it to retrieve the full 5+ million row dataset, it takes an extremely long time or essentially never returns.
Any idea what is going on?
library(teradatasql)
dbconn <- DBI::dbConnect(
  teradatasql::TeradataDriver(),
  host = 'myhostname.domain.com',
  user = 'myusername', password = 'mypassword'
)
dbExecute(dbconn, "SELECT COL1, COL2 FROM MY_TABLE")
[1] 5348946
system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 10))
user system elapsed
0.084 0.016 1.496
system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 100))
user system elapsed
0.104 0.024 1.548
system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 1000))
user system elapsed
0.488 0.036 1.826
system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 10000))
user system elapsed
7.484 0.100 9.413
system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 100000))
user system elapsed
767.824 4.648 782.518
system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 5348946))
< DOES NOT RETURN, EVEN AFTER HOURS >
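In case it helps isolate where the time goes, here is a sketch of how I would try fetching the same result in fixed-size batches through the lower-level DBI calls (dbSendQuery/dbFetch), timing each batch separately. It assumes the same dbconn as above; the batch size of 10,000 is an arbitrary choice for illustration, and I have not run or timed this version:

library(DBI)
# Diagnostic sketch (assumes dbconn from above): fetch in fixed-size batches
# and time each dbFetch() call to see whether the per-batch time grows.
res <- dbSendQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE")
batch_size <- 10000  # arbitrary batch size, for illustration only
batch_num <- 0
while (!dbHasCompleted(res)) {
  t <- system.time(chunk <- dbFetch(res, n = batch_size))
  batch_num <- batch_num + 1
  cat("batch", batch_num, "rows", nrow(chunk), "elapsed", t[["elapsed"]], "\n")
}
dbClearResult(res)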
Here is some version information for reference:
> packageVersion('teradatasql')
[1] ‘17.0.0.2’
> version
_
platform x86_64-pc-linux-gnu
arch x86_64
os linux-gnu
system x86_64, linux-gnu
status
major 3
minor 6.1
year 2019
month 07
day 05
svn rev 76782
language R
version.string R version 3.6.1 (2019-07-05)
nickname Action of the Toes