
I'm looking for an efficient way to rate-limit requests from Google App Engine to a third-party service. The third-party service rate-limits requests on a per-account basis, and on the App Engine side most of the work is carried out inside tasks. Token buckets are a good general algorithm here.

Q: What approach can be used to efficiently rate-limit requests per account rather than per service?

This shouldn't involve setting rates on GAE task queues, because the number of requests per account and the number of accounts on the service will vary widely. For performance reasons I'm most interested in memcache-based (incr/decr?) ideas!

I think this boils down to a memcache-based token bucket?

Thoughts?
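For reference, the token-bucket idea above can be sketched in plain Python. This uses an in-process dict and object as a stand-in for memcache (a real GAE version would keep the token count and timestamp in memcache with compare-and-set); all class and function names here are illustrative, not from any library:

```python
import time


class TokenBucket:
    """One bucket per account: refills at `rate` tokens/second, holds
    at most `capacity` tokens (the allowed burst size)."""

    def __init__(self, rate, capacity, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.time() if now is None else now

    def allow(self, now=None):
        """Return True if a request may proceed, consuming one token."""
        now = time.time() if now is None else now
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


_buckets = {}  # account_id -> TokenBucket; memcache would hold this state


def allow_request(account_id, rate=5, capacity=10):
    bucket = _buckets.setdefault(account_id, TokenBucket(rate, capacity))
    return bucket.allow()
```

The per-account `rate` and `capacity` values are placeholders; in practice they would come from each account's limits on the third-party service.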


3 Answers


I know this is an old question, but it's a top search result and I thought others might find an alternative I made useful. It's more granular (down to the second), simpler (only a single function), and more performant (only one memcache lookup) than the solution above:

import webapp2
from functools import wraps
from google.appengine.api import memcache


def rate_limit(seconds_per_request=1):
  def rate_limiter(function):
    @wraps(function)
    def wrapper(self, *args, **kwargs):
      # memcache.add is atomic: it succeeds only if the key does not
      # already exist, so only one request per window gets through.
      added = memcache.add('%s:%s' % (self.__class__.__name__, self.request.remote_addr or ''), 1,
                           time=seconds_per_request, namespace='rate_limiting')
      if not added:
        self.response.write('Rate limit exceeded.')
        self.response.set_status(429)
        return
      return function(self, *args, **kwargs)
    return wrapper
  return rate_limiter


class ExampleHandler(webapp2.RequestHandler):
  @rate_limit(seconds_per_request=2)
  def get(self):
    self.response.write('Hello, webapp2!')
Answered 2014-01-30T21:28:16.860

I bookmarked this project a while back: http://code.google.com/p/gaedjango-ratelimitcache/

Not really an answer to your specific question, but maybe it can help you get started.

Answered 2010-09-06T18:48:53.147

Here is how I implemented a token bucket with memcache on GAE:

EDIT: taking (another) stab at this.

This is partly borrowed from https://github.com/simonw/ratelimitcache/blob/master/ratelimitcache.py

import logging
import time

from google.appengine.api import memcache


def throttle(key, rate_count, rate_seconds, tries=3):
    '''
    Returns True if throttled (not enough tokens available), else False.
    Implements the token bucket algorithm: one counter per second of the
    window, summed with a single get_multi.
    '''
    client = memcache.Client(CLIENT_ARGS)
    for _ in range(tries):
        now = int(time.time())
        # One counter key per second in the window, newest first.
        keys = ['%s-%s' % (key, str(now - i)) for i in range(rate_seconds)]
        # Ensure the current second's counter exists (no-op if it does).
        client.add(keys[0], 0, time=rate_seconds + 1)
        tokens = client.get_multi(keys[1:])
        tokens[keys[0]] = client.gets(keys[0])  # gets() records the CAS id
        if sum(tokens.values()) >= rate_count:
            return True
        # Atomically increment the current counter; on contention, retry.
        if client.cas(keys[0], tokens[keys[0]] + 1, time=rate_seconds + 1):
            return False
    logging.error('cache contention error')
    return True

Here are usage examples:

def test_that_it_throttles_too_many_requests(self):
    burst = 1
    interval = 1
    assert shared.rate_limit.throttle('test', burst, interval) is False
    assert shared.rate_limit.throttle('test', burst, interval) is True


def test_that_it_doesnt_throttle_burst_of_requests(self):
    burst = 16
    interval = 1
    for i in range(burst):
        assert shared.rate_limit.throttle('test', burst, interval) is False
    time.sleep(interval + 1) # memcache has 1 second granularity
    for i in range(burst):
        assert shared.rate_limit.throttle('test', burst, interval) is False
Answered 2016-02-09T18:56:28.653