
We are using SQLAlchemy's autoload feature for our column mappings, to avoid hardcoding the columns in our code.

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('mysql://...')  # connection details elided
Base = declarative_base(bind=engine)   # bound metadata lets autoload reflect

class users(Base):
    __tablename__ = 'users'
    __table_args__ = {
        'autoload': True,          # reflect columns from the live database
        'mysql_engine': 'InnoDB',
        'mysql_charset': 'utf8'
    }

Is there a way to serialize or cache the autoloaded metadata/ORM classes, so that we don't have to go through the autoload process every time we need to reference our ORM classes from another script/function?

I have looked at Beaker caching and pickling, but haven't found a clear answer on whether it is possible or how to do it.

Ideally, we would run the autoload mapping script only after a change to the database structure has been committed, with every other script/function referencing a non-autoloaded/persisted/cached version of our database mappings. A sketch of the split we have in mind follows below.
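(Script names here are made up, and this assumes pickling a reflected MetaData would work at all, which is the part we have not been able to confirm:)

# refresh_mappings.py -- hypothetical; run only after a schema change
import pickle
from sqlalchemy import create_engine, MetaData

engine = create_engine('mysql://...')  # connection details elided
metadata = MetaData()
metadata.reflect(bind=engine)          # the expensive autoload step
with open('orm_cache.p', 'wb') as f:
    pickle.dump(metadata, f)

# any_other_script.py -- hypothetical; no autoload round trips
import pickle
with open('orm_cache.p', 'rb') as f:
    metadata = pickle.load(f)          # cached table definitions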

Any ideas?


2 Answers


What I'm doing now is pickling the metadata after running the reflection through a database connection (MySQL), and, once a pickle is available, using that pickled metadata to reflect the schema, with the metadata bound to a SQLite engine.

import pickle
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.ext.declarative import declarative_base

cachefile = 'orm.p'
dbfile = 'database'
engine_dev = create_engine('mysql://...', echo=True)  # connection string elided
engine_meta = create_engine('sqlite:///%s' % dbfile, echo=True)
Base = declarative_base()
Base.metadata.bind = engine_dev
metadata = MetaData(bind=engine_dev)

# load from pickle
try:
    with open(cachefile, 'rb') as cache:  # binary mode for pickle
        metadata2 = pickle.load(cache)
    metadata2.bind = engine_meta

    class Users(Base):
        __table__ = Table('users', metadata2, autoload=True)

    print "ORM loaded from pickle"

# if no pickle yet, reflect through the database connection
except IOError:
    class Users(Base):
        __table__ = Table('users', metadata, autoload=True)

    print "ORM through database autoload"

    # create metapickle (only after a real reflection, so an empty
    # MetaData never overwrites a good cache)
    metadata.create_all()
    with open(cachefile, 'wb') as cache:  # binary mode for pickle
        pickle.dump(metadata, cache)
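For completeness, a sketch of how another script could then consume the pickle (same file names as above; the script name is made up). Since the users table is already present in the unpickled metadata, the autoload should be satisfied without touching the MySQL server:

# other_script.py -- consumes the cached metadata produced above
import pickle
from sqlalchemy import create_engine, Table
from sqlalchemy.ext.declarative import declarative_base

engine_meta = create_engine('sqlite:///database')
with open('orm.p', 'rb') as cache:
    metadata2 = pickle.load(cache)
metadata2.bind = engine_meta

Base = declarative_base()

class Users(Base):
    __table__ = Table('users', metadata2, autoload=True)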

Any comments on whether this is okay (it works), or anything I could improve?

Answered 2012-08-03T13:37:11.203

My solution isn't very different from @user1572502's, but it may be useful. I keep my cached metadata files in ~/.sqlalchemy_cache, but they can live anywhere.


# assuming something like this (imports added for completeness):
import os
import pickle

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('...')  # connection string elided
Base = declarative_base(bind=engine)

metadata_pickle_filename = "mydb_metadata_cache.pickle"

# ------------------------------------------
# Load the cached metadata if it's available
# ------------------------------------------
# NOTE: delete the cached file if the database schema changes!!
cache_path = os.path.join(os.path.expanduser("~"), ".sqlalchemy_cache")
cached_metadata = None
if os.path.exists(cache_path):
    try:
        with open(os.path.join(cache_path, metadata_pickle_filename), 'rb') as cache_file:
            cached_metadata = pickle.load(cache_file)
    except IOError:
        # cache file not found - no problem
        pass
# ------------------------------------------

# -----------------------------
# Define database table classes
# -----------------------------
class MyTable(Base):
    if cached_metadata:
        __table__ = cached_metadata.tables['my_schema.my_table']
    else:
        __tablename__ = 'my_table'
        __table_args__ = {'autoload':True, 'schema':'my_schema'}

# ... continue for any other tables ...

# ----------------------------------------
# If no cached metadata was found, save it
# ----------------------------------------
if cached_metadata is None:
    # cache the metadata for future loading
    # - MUST DELETE IF THE DATABASE SCHEMA HAS CHANGED
    try:
        if not os.path.exists(cache_path):
            os.makedirs(cache_path)
        # make sure to open in binary mode - we're writing bytes, not str
        with open(os.path.join(cache_path, metadata_pickle_filename), 'wb') as cache_file:
            pickle.dump(Base.metadata, cache_file)
    except (IOError, OSError):
        # couldn't write the file for some reason
        pass

Important note!! If the database schema changes, you must delete the cached file to force the code to autoload again and create a new cache. If you don't, your changes will not be reflected in the code. It's an easy thing to forget.
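For example, a snippet along these lines (paths hardcoded to match the example above) can be run, or wired into a deploy step, to force a fresh autoload on the next run:

import os

# remove the cached metadata so the next run re-reflects the schema
cache_file = os.path.join(os.path.expanduser('~'), '.sqlalchemy_cache',
                          'mydb_metadata_cache.pickle')
if os.path.exists(cache_file):
    os.remove(cache_file)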

Answered 2019-09-29T07:16:15.430