I have a StatisticStore model defined as:
from google.appengine.ext import ndb

class StatisticStore(ndb.Model):
    user = ndb.KeyProperty(kind=User)
    created = ndb.DateTimeProperty(auto_now_add=True)
    kind = ndb.StringProperty()
    properties = ndb.PickleProperty()

    @classmethod
    def top_links(cls, user, start_date, end_date):
        '''
        returns the user's top links for the given date range
        e.g.
        {'http://stackoverflow.com': 30,
         'http://google.com': 10,
         'http://yahoo.com': 15}
        '''
        stats = cls.query(
            cls.user == user.key,
            cls.created >= start_date,
            cls.created <= end_date,
            cls.kind == 'link_visited'
        )
        links_dict = {}
        # generate links_dict from stats
        # keys are from the 'properties' property
        return links_dict
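In place of those placeholder comments, the loop would be roughly this (assuming each 'link_visited' record stores the visited URL in its properties dict under a 'url' key, which is just an example name):

        for stat in stats:
            url = stat.properties.get('url')  # assumed key name
            if url:
                links_dict[url] = links_dict.get(url, 0) + 1
        return links_dict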
I would like a model, AggregateStatisticStore, that stores the daily totals of StatisticStore. It could be generated once a day. Something like:
class AggregateStatisticStore(ndb.Model):
    user = ndb.KeyProperty(kind=User)
    date = ndb.DateProperty()
    kinds_count = ndb.PickleProperty()
    top_links = ndb.PickleProperty()
So the following would hold:
start = datetime.datetime(2013, 8, 22, 0, 0, 0)
end = datetime.datetime(2013, 8, 22, 23, 59, 59)

aug22stats = StatisticStore.query(
    StatisticStore.user == user.key,
    StatisticStore.kind == 'link_visited',
    StatisticStore.created >= start,
    StatisticStore.created <= end
).count()

aug22toplinks = StatisticStore.top_links(user, start, end)

aggregated_aug22stats = AggregateStatisticStore.query(
    AggregateStatisticStore.user == user.key,
    AggregateStatisticStore.date == start.date()
).get()

assert aug22stats == aggregated_aug22stats.kinds_count['link_visited']
assert aug22toplinks == aggregated_aug22stats.top_links
I'm thinking of running a cronjob via the Task Queue API. The task would generate the AggregateStatisticStore records for each day. But I'm worried it might run into memory issues, since StatisticStore could have a lot of records per user.
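If I go the cron route, I assume the task would have to page through the day's records with query cursors so it never holds everything in memory at once. Roughly (the function name, batch size, and the 'url' key in properties are just placeholders):

import datetime
from collections import Counter

def aggregate_day_for_user(user, day, batch_size=500):
    # Day window [00:00:00, 23:59:59.999999]
    start = datetime.datetime.combine(day, datetime.time.min)
    end = datetime.datetime.combine(day, datetime.time.max)

    query = StatisticStore.query(
        StatisticStore.user == user.key,
        StatisticStore.created >= start,
        StatisticStore.created <= end
    )

    kinds_count = Counter()
    top_links = Counter()
    cursor = None
    more = True
    while more:
        # fetch_page keeps only one batch of entities in memory at a time
        stats, cursor, more = query.fetch_page(batch_size, start_cursor=cursor)
        for stat in stats:
            kinds_count[stat.kind] += 1
            if stat.kind == 'link_visited':
                url = stat.properties.get('url')  # assumed key name
                if url:
                    top_links[url] += 1

    AggregateStatisticStore(
        user=user.key,
        date=day,
        kinds_count=dict(kinds_count),
        top_links=dict(top_links)
    ).put()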
Also, the top_links property complicates things a bit. I'm not sure yet whether having it as a property on the aggregate model is the best approach. Any suggestions on that property would be great.
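One thing that might make the per-day top_links workable: for an arbitrary date range I could just merge the daily dicts instead of re-querying StatisticStore, something like:

from collections import Counter

def top_links_for_range(user, start_date, end_date):
    # Sum the per-day dicts stored on the aggregates
    merged = Counter()
    aggregates = AggregateStatisticStore.query(
        AggregateStatisticStore.user == user.key,
        AggregateStatisticStore.date >= start_date,
        AggregateStatisticStore.date <= end_date
    )
    for agg in aggregates:
        merged.update(agg.top_links or {})
    return dict(merged)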
Ultimately, I only want to keep StatisticStore records for roughly the last 30 days. Once a record is older than 30 days, it should be aggregated (and then deleted), to save space and keep the visualization queries fast.
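So the cleanup part of the cronjob would just be a keys-only query plus ndb.delete_multi, done in batches (the cutoff and batch size are placeholders, and this assumes those records have already been aggregated):

import datetime
from google.appengine.ext import ndb

def purge_old_stats(batch_size=500):
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=30)
    query = StatisticStore.query(StatisticStore.created < cutoff)
    cursor = None
    more = True
    while more:
        # keys_only avoids loading the pickled properties blobs
        keys, cursor, more = query.fetch_page(
            batch_size, start_cursor=cursor, keys_only=True)
        if keys:
            ndb.delete_multi(keys)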
EDIT: What about creating/updating the appropriate AggregateStatisticStore record every time a StatisticStore record is written? That way, all the cronjob would have to do is the cleanup. Thoughts?
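To sketch what I mean: give the day's aggregate a deterministic key id so it can be fetched by key inside a transaction (queries inside transactions have to be ancestor queries), then bump the counters whenever a stat is written. The 'url' key is again just an example:

import datetime
from google.appengine.ext import ndb

@ndb.transactional(xg=True)
def record_stat(user, kind, properties):
    StatisticStore(user=user.key, kind=kind, properties=properties).put()

    today = datetime.date.today()
    # Deterministic id: one aggregate per user per day, fetched by key
    agg_id = '%s:%s' % (user.key.id(), today.isoformat())
    agg = AggregateStatisticStore.get_by_id(agg_id)
    if agg is None:
        agg = AggregateStatisticStore(
            id=agg_id, user=user.key, date=today,
            kinds_count={}, top_links={})

    agg.kinds_count[kind] = agg.kinds_count.get(kind, 0) + 1
    if kind == 'link_visited':
        url = properties.get('url')  # assumed key name
        if url:
            agg.top_links[url] = agg.top_links.get(url, 0) + 1
    agg.put()

The one thing I'm aware of with this approach is the ~1 write/sec limit per entity group, so if a user can log many events per second the daily aggregate might need to be sharded.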