
I need to monitor a real-time application. It receives 60 connections per second, and for each connection I track 53 metrics.

So my simulated client sends 3180 metrics per second. I need the lower, upper, mean, median, and count_ps values, which is why I use the "timing" type.
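As a rough sanity check of those numbers (a sketch, assuming one sample per second per timer name and the 60 s flushInterval from dConfig.js below):

```python
# Hypothetical back-of-the-envelope check of the traffic described above.
connections_per_second = 60
metrics_per_connection = 53
packets_per_second = connections_per_second * metrics_per_connection
print(packets_per_second)  # 3180

# Each timer name gets 60 samples per second; over a 60 s flush window
# that is 3600 samples, so StatsD should report count_ps = 3600 / 60 = 60.
flush_interval_s = 60
samples_per_second = 60
expected_count_ps = samples_per_second * flush_interval_s / flush_interval_s
print(expected_count_ps)  # 60.0
```

If count_ps comes back as 40 instead of 60, roughly a third of the samples are not reaching a flush window.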

When I look at count_ps for a single metric on the StatsD side, I only get 40 instead of 60. I can't find any information about StatsD's capacity. Maybe I'm overloading it ^^

So can you help me? What are my options?

I can't reduce the number of metrics, but I don't need all the information the "timing" type provides. Can I restrict "timing"?

Thanks!

My configuration:

1) cat storage-schemas.conf

# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
#
#  [name]
#  pattern = regex
#  retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...

# Carbon's internal metrics. This entry should match what is specified in
# CARBON_METRIC_PREFIX and CARBON_METRIC_INTERVAL settings
[carbon]
pattern = ^carbon\.
retentions = 60:90d

[stats]
pattern = ^application.*
retentions = 60s:7d

2) cat dConfig.js

{
  graphitePort: 2003
, graphiteHost: "127.0.0.1"
, port: 8125
, backends: [ "./backends/graphite", "./backends/console" ]
, flushInterval: 60000
, debug: true
, graphite: { legacyNamespace: false, globalPrefix: "", prefixGauge: "", prefixCounter: "", prefixTimer: "", prefixSet: ""}
}
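For reference, every timing sample arrives at StatsD as a small UDP datagram in the line protocol `<name>:<value>|ms`. A minimal stdlib-only sketch (the metric name and value are illustrative; the host and port match the config above):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# One timing sample in the StatsD line protocol: "<name>:<value>|ms".
packet = b"TPS.global:42|ms"
sock.sendto(packet, ("127.0.0.1", 8125))
print(packet.decode())  # TPS.global:42|ms
```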

3) cat storage-aggregation.conf

# Aggregation methods for whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds
#
#  [name]
#  pattern = <regex>
#  xFilesFactor = <float between 0 and 1>
#  aggregationMethod = <average|sum|last|max|min>
#
#  name: Arbitrary unique name for the rule
#  pattern: Regex pattern to match against the metric name
#  xFilesFactor: Ratio of valid data points required for aggregation to the next retention to occur
#  aggregationMethod: function to apply to data points for aggregation
#
[min]
pattern = \.lower$
xFilesFactor = 0.1
aggregationMethod = min

[max]
pattern = \.upper$
xFilesFactor = 0.1
aggregationMethod = max

[sum]
pattern = \.sum$
xFilesFactor = 0
aggregationMethod = sum

[count]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum

[count_legacy]
pattern = ^stats_counts.*
xFilesFactor = 0
aggregationMethod = sum

[default_average]
pattern = .*
xFilesFactor = 0.3

4) Client:

#!/usr/bin/env python
import math
import time

import statsd

c = statsd.StatsClient('localhost', 8125)
k = 0
nbData = 60  # timing samples per metric name, per loop iteration
pause = 1    # target duration of one loop iteration, in seconds

while True:
    print(k)
    k += pause
    tps1 = time.perf_counter()
    for j in range(nbData):
        digit = j % 10 + k * 10 + math.sin(j / 500)
        c.timing('TPS.global', digit)
        c.timing('TPS.interne', digit)
        c.timing('TPS.externe', digit)
        for i in range(5):
            c.timing('TPS.a.' + str(i), digit)
            c.timing('TPS.b.' + str(i), digit)
            c.timing('TPS.c.' + str(i), digit)
            c.timing('TPS.d.' + str(i), digit)
            c.timing('TPS.e.' + str(i), digit)
            c.timing('CR.a.' + str(i), digit)
            c.timing('CR.b.' + str(i), digit)
            c.timing('CR.c.' + str(i), digit)
            c.timing('CR.d.' + str(i), digit)
            c.timing('CR.e.' + str(i), digit)
    tps2 = time.perf_counter()
    print('temps = ' + str(tps2 - tps1))
    if k >= 60:
        k = 0
    # Sleep only for the remainder of the second; time.sleep() raises
    # ValueError on a negative argument.
    remaining = pause - (tps2 - tps1)
    if remaining > 0:
        time.sleep(remaining)

Edit: added client code.
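One way to reduce the packet rate of a client like this without dropping samples is to batch several metrics into a single datagram: StatsD parses multiple newline-separated metrics per UDP packet. A stdlib-only sketch (names and values are illustrative):

```python
import socket

def send_batch(sock, samples, addr=("127.0.0.1", 8125)):
    """Send several timing samples in one UDP datagram.

    StatsD accepts newline-separated metrics, so this turns
    len(samples) packets into a single one.
    """
    packet = "\n".join(f"{name}:{value}|ms" for name, value in samples).encode()
    sock.sendto(packet, addr)
    return packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pkt = send_batch(sock, [("TPS.global", 12), ("TPS.interne", 12), ("TPS.externe", 12)])
print(pkt.decode())
```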


2 Answers


What is your CARBON_METRIC_INTERVAL setting? I suspect it needs to match the StatsD flushInterval.

Answered 2013-07-05T16:08:36.983

It's hard to say what's going on without more context. Are you using sampling when sending to StatsD? What hardware are you running StatsD on? Is your simulation entirely on localhost? Are you running it over a lossy connection?
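On the sampling point: a timing sample can carry an `|@<rate>` suffix, so the client sends only a fraction of the samples and StatsD scales the counts back up. A stdlib-only sketch (the function name and rate are illustrative, not part of any client library):

```python
import random
import socket

def send_timing_sampled(sock, name, value_ms, rate=0.1, addr=("127.0.0.1", 8125)):
    # Send only a fraction `rate` of samples; the |@rate suffix lets
    # StatsD compensate when computing count and count_ps.
    if random.random() < rate:
        packet = f"{name}:{value_ms}|ms|@{rate}".encode()
        sock.sendto(packet, addr)
        return packet
    return None

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(100):
    send_timing_sampled(sock, "TPS.global", 42, rate=0.1)
```

This trades per-sample precision (median, upper, lower become estimates) for roughly a 10x reduction in packet rate.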

There is currently no way to restrict a timing metric to only some of its aggregates.

Sorry I can't be of more direct help. If your problem persists, consider joining #statsd on Freenode IRC and asking there.

Answered 2013-07-01T12:30:16.783