This is one way to do it, but I don't really recommend it, because (1) it adds an extra manual cache-check operation, and (2) it probably duplicates what the library is already doing internally. I haven't done any proper check of the performance impact, since I don't have production data/an environment with varying doc_ids, but as martineau's comment says, the extra lookup operation will probably slow things down.
But here it is anyway.
The diskcache.Cache object "supports a familiar Python mapping interface" (like dicts). You can then manually check whether a given key already exists in the cache, using the same key that is automatically generated from the memoize-d function's arguments:

An additional __cache_key__ attribute can be used to generate the cache key used for given arguments.
>>> key = fibonacci.__cache_key__(100)
>>> print(cache[key])
354224848179261915075
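If you want to see how that key-generation-plus-mapping-lookup pattern fits together without a diskcache install, here is a minimal dict-backed sketch. The names `memoize` and `cache_key` are illustrative only; the real library persists to disk and handles kwargs, typed arguments, expiry, etc.

```python
import functools

def memoize(cache: dict):
    """Toy memoizer: stores results in `cache`, keyed by the arguments.

    Only sketches the idea behind diskcache's memoize()/__cache_key__;
    it is not how the library is actually implemented.
    """
    def decorator(func):
        def cache_key(*args):
            # Same key derivation the wrapper uses, exposed for manual checks
            return (func.__name__,) + args

        @functools.wraps(func)
        def wrapper(*args):
            key = cache_key(*args)
            if key not in cache:
                cache[key] = func(*args)
            return cache[key]

        wrapper.cache_key = cache_key  # analogous to __cache_key__
        return wrapper
    return decorator

cache = {}

@memoize(cache)
def fibonacci(n: int) -> int:
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(100))                      # computed once, then served from cache
print(fibonacci.cache_key(100) in cache)   # True: the key is now in the mapping
```

The point is simply that the cache behaves like a mapping, so `key in cache` is an ordinary membership test.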
So you can wrap your fetch_doc function in another function that checks whether a cache key based on the url, database, and doc_id arguments already exists, prints that out to the user, and does all of this before calling the actual fetch_doc function:
import couchdb
from diskcache import Cache

cache = Cache("couch_cache")

@cache.memoize()
def fetch_doc(url: str, database: str, doc_id: str) -> dict:
    server = couchdb.Server(url=url)
    db = server[database]
    return dict(db[doc_id])

def fetch_doc_with_logging(url: str, database: str, doc_id: str):
    # Generate the key
    key = fetch_doc.__cache_key__(url, database, doc_id)
    # Print out whether we are getting from the cache or not
    if key in cache:
        print(f'Getting {doc_id} from cache!')
    else:
        print(f'Getting {doc_id} from DB!')
    # Call the actual memoize-d function
    return fetch_doc(url, database, doc_id)
When testing it with:
url = 'https://your.couchdb.instance'
database = 'test'
doc_id = 'c97bbe3127fb6b89779c86da7b000885'

cache.stats(enable=True, reset=True)
for _ in range(5):
    fetch_doc_with_logging(url, database, doc_id)
print(f'(hits, misses) = {cache.stats()}')
# Only for testing, so the 1st call will always miss and get from the DB
cache.clear()
It outputs:
$ python test.py
Getting c97bbe3127fb6b89779c86da7b000885 from DB!
Getting c97bbe3127fb6b89779c86da7b000885 from cache!
Getting c97bbe3127fb6b89779c86da7b000885 from cache!
Getting c97bbe3127fb6b89779c86da7b000885 from cache!
Getting c97bbe3127fb6b89779c86da7b000885 from cache!
(hits, misses) = (4, 1)
You can turn that wrapper function into a decorator:
def log_if_cache_or_not(memoized_func):
    def _wrap(*args):
        key = memoized_func.__cache_key__(*args)
        doc_id = args[2]  # 3rd positional argument of fetch_doc
        if key in cache:
            print(f'Getting {doc_id} from cache!')
        else:
            print(f'Getting {doc_id} from DB!')
        return memoized_func(*args)
    return _wrap

@log_if_cache_or_not
@cache.memoize()
def fetch_doc(url: str, database: str, doc_id: str) -> dict:
    server = couchdb.Server(url=url)
    db = server[database]
    return dict(db[doc_id])

for _ in range(5):
    fetch_doc(url, database, doc_id)
Or, as suggested in the comments, combine both into one new decorator:
def memoize_with_logging(func):
    memoized_func = cache.memoize()(func)
    def _wrap(*args):
        key = memoized_func.__cache_key__(*args)
        doc_id = args[2]  # 3rd positional argument of fetch_doc
        if key in cache:
            print(f'Getting {doc_id} from cache!')
        else:
            print(f'Getting {doc_id} from DB!')
        return memoized_func(*args)
    return _wrap

@memoize_with_logging
def fetch_doc(url: str, database: str, doc_id: str) -> dict:
    server = couchdb.Server(url=url)
    db = server[database]
    return dict(db[doc_id])

for _ in range(5):
    fetch_doc(url, database, doc_id)
Some quick timing tests:
In [9]: %timeit for _ in range(100000): fetch_doc(url, database, doc_id)
13.7 s ± 112 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [10]: %timeit for _ in range(100000): fetch_doc_with_logging(url, database, doc_id)
21.2 s ± 637 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
(It would probably be a better test if doc_id varied randomly across the calls.)
Again, as I mentioned at the start, caching and memoize-ing a function call is supposed to speed that function up. Whether you are fetching from the DB or from the cache, this answer adds the extra operations of a cache lookup and printing/logging, and that can affect the performance of the call. Test appropriately.