The pandas documentation has plenty of best-practice examples for working with data stored in various formats.
However, I can't find any good examples for working with a database like MySQL.
Can anyone point me to a link, or provide some code snippets, showing how to efficiently turn query results into a pandas DataFrame using mysql-python?
As Wes says, io/sql's read_sql will do it, once you have obtained a database connection using a DBI-compatible library. Here are two short examples that use the MySQLdb
and cx_Oracle
libraries to connect to Oracle and MySQL and query their data dictionaries. First the cx_Oracle
example:
import pandas as pd
import cx_Oracle
ora_conn = cx_Oracle.connect('your_connection_string')
df_ora = pd.read_sql('select * from user_objects', con=ora_conn)
print('loaded dataframe from Oracle. # Records:', len(df_ora))
ora_conn.close()
And here is the equivalent MySQLdb
example:
import MySQLdb
import pandas as pd
mysql_cn = MySQLdb.connect(host='myhost', port=3306,
                           user='myusername', passwd='mypassword',
                           db='information_schema')
df_mysql = pd.read_sql('select * from VIEWS;', con=mysql_cn)
print('loaded dataframe from MySQL. records:', len(df_mysql))
mysql_cn.close()
For later readers of this question: pandas added the following warning in its 0.14.0 documentation:
Warning: Some of the existing functions or function aliases have been deprecated and will be removed in future versions. This includes: tquery, uquery, read_frame, frame_query, write_frame.
and:
Warning: The support for the 'mysql' flavor when using DBAPI connection objects has been deprecated. MySQL will be further supported with SQLAlchemy engines (GH6900).
This makes many of the answers here outdated. You should use sqlalchemy
:
from sqlalchemy import create_engine
import pandas as pd
engine = create_engine('dialect://user:pass@host:port/schema', echo=False)
f = pd.read_sql_query('SELECT * FROM mytable', engine, index_col = 'ID')
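For MySQL specifically, the engine URL might look like the following. This is a minimal sketch, not part of the original answer; the pymysql driver, credentials, and table name are placeholders to substitute with your own.
from sqlalchemy import create_engine
import pandas as pd
# hypothetical MySQL URL using the pymysql driver; replace user, password, host, and database
engine = create_engine('mysql+pymysql://user:pass@localhost:3306/mydb', echo=False)
df = pd.read_sql_query('SELECT * FROM mytable', engine)
# read_sql_table can also load an entire table by name through the engine
df_whole = pd.read_sql_table('mytable', engine)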
For the record, here is an example using an SQLite database:
import pandas as pd
import sqlite3
with sqlite3.connect("whatever.sqlite") as con:
    sql = "SELECT * FROM table_name"
    df = pd.read_sql_query(sql, con)
    print(df.shape)
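If the query needs bind parameters, read_sql_query also accepts a params argument. A small sketch, with a hypothetical column name and value:
import pandas as pd
import sqlite3
with sqlite3.connect("whatever.sqlite") as con:
    # '?' is the sqlite3 paramstyle; other drivers use e.g. %s or %(name)s
    df = pd.read_sql_query("SELECT * FROM table_name WHERE some_col = ?",
                           con, params=("some_value",))
    print(df.shape)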
I prefer to build a query with SQLAlchemy and then make a DataFrame from it. SQLAlchemy makes it easier to combine SQL conditions Pythonically if you intend to mix and match things over and over.
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Table
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from pandas import DataFrame
import datetime
# We are connecting to an existing service
engine = create_engine('dialect://user:pwd@host:port/db', echo=False)
Session = sessionmaker(bind=engine)
session = Session()
Base = declarative_base()
# And we want to query an existing table
tablename = Table('tablename',
                  Base.metadata,
                  autoload=True,
                  autoload_with=engine,
                  schema='ownername')
# These are the "Where" parameters, but I could as easily
# create joins and limit results
us = tablename.c.country_code.in_(['US','MX'])
dc = tablename.c.locn_name.like('%DC%')
dt = tablename.c.arr_date >= datetime.date.today() # Give me convenience or...
q = session.query(tablename).\
    filter(us & dc & dt)  # That's where the magic happens!!!
def querydb(query):
    """
    Function to execute query and return DataFrame.
    """
    df = DataFrame(query.all())
    df.columns = [x['name'] for x in query.column_descriptions]
    return df
querydb(q)
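As a side note (a sketch, not part of the original answer), pandas can also consume the compiled query directly, which avoids rebuilding the column names by hand:
import pandas as pd
# q is the SQLAlchemy query built above; session.bind is the underlying engine
df = pd.read_sql(q.statement, session.bind)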
MySQL example:
import MySQLdb as db
from pandas import DataFrame
from pandas.io.sql import frame_query
database = db.connect('localhost','username','password','database')
data = frame_query("SELECT * FROM data", database)
The same syntax also works for MS SQL Server using pyodbc.
import pyodbc
import pandas.io.sql as psql
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=servername;DATABASE=mydb;UID=username;PWD=password')
cursor = cnxn.cursor()
sql = ("""select * from mytable""")
df = psql.frame_query(sql, cnxn)
cnxn.close()
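On newer pandas versions, where frame_query is gone, the same result can be had with pd.read_sql. A sketch, assuming the same connection string as above:
import pandas as pd
import pyodbc
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=servername;DATABASE=mydb;'
                      'UID=username;PWD=password')
df = pd.read_sql("select * from mytable", cnxn)
cnxn.close()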
This is how you connect to PostgreSQL with the psycopg2 driver (install it with "apt-get install python-psycopg2" if you are on a Debian Linux derivative).
import pandas.io.sql as psql
import psycopg2
conn = psycopg2.connect("dbname='datawarehouse' user='user1' host='localhost' password='uberdba'")
q = """select month_idx, sum(payment) from bi_some_table"""
df3 = psql.frame_query(q, conn)
pandas.io.sql.frame_query
is deprecated. Use pandas.read_sql
instead.
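Concretely, with the same q and conn as in the psycopg2 example above, the replacement is simply (a minimal sketch):
import pandas as pd
df3 = pd.read_sql(q, conn)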
For Sybase, the following works (using http://python-sybase.sourceforge.net):
import pandas.io.sql as psql
import Sybase
df = psql.frame_query("<Query>", con=Sybase.connect("<dsn>", "<user>", "<pwd>"))
import pandas as pd
import oursql
conn=oursql.connect(host="localhost",user="me",passwd="mypassword",db="classicmodels")
sql="Select customerName, city,country from customers order by customerName,country,city"
df_mysql = pd.read_sql(sql,conn)
print(df_mysql)
This works fine, and pandas.io.sql's frame_query works as well (with a deprecation warning). The database used is the sample database from the MySQL tutorial.
This helped me connect to AWS MySQL (RDS) from a Python 3.x based Lambda function and load the result into a pandas DataFrame:
import json
import boto3
import pymysql
import pandas as pd
user = 'username'
password = 'XXXXXXX'
client = boto3.client('rds')
def lambda_handler(event, context):
    conn = pymysql.connect(host='xxx.xxxxus-west-2.rds.amazonaws.com', port=3306,
                           user=user, passwd=password, db='database name',
                           connect_timeout=5)
    df = pd.read_sql('select * from TableName limit 10', con=conn)
    print(df)
    # TODO implement
    #return {
    #    'statusCode': 200,
    #    'df': df
    #}
For Postgres users:
import psycopg2
import pandas as pd
conn = psycopg2.connect("dbname='datawarehouse' user='user1' host='localhost' password='uberdba'")
customers = 'select * from customers'
customers_df = pd.read_sql(customers,conn)
customers_df
This should work just fine.
import MySQLdb as mdb
import pandas as pd

con = mdb.connect('127.0.0.1', 'root', 'password', 'database_name')

with con:
    cur = con.cursor()
    cur.execute("select random_number_one, random_number_two, random_number_three "
                "from randomness.a_random_table")
    rows = cur.fetchall()

df = pd.DataFrame([[ij for ij in i] for i in rows])
df.rename(columns={0: 'Random Number One', 1: 'Random Number Two',
                   2: 'Random Number Three'}, inplace=True)
print(df.head(20))
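For comparison (a sketch, not part of the original answer), pd.read_sql can do the cursor handling and DataFrame construction in one call; it also picks up the column names from the query itself:
import MySQLdb as mdb
import pandas as pd

con = mdb.connect('127.0.0.1', 'root', 'password', 'database_name')
df = pd.read_sql("select random_number_one, random_number_two, random_number_three "
                 "from randomness.a_random_table", con)
# rename only if you want the prettier column labels used above
df.columns = ['Random Number One', 'Random Number Two', 'Random Number Three']
con.close()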