Prepared for: Appboy
http://graphite01.mattjbarlow.com/render?from=-12hour&until=now&width=800&height=250&target=stats_counts.keystroke&lineMode=connected&target=drawAsInfinite(events("Pomodoro"))

You can manipulate that data in almost endless ways.
Sums and Histograms
Know how many times an event happens per second, per hour, per day, etc.

Overlays
2nd Y Axis

Raw Graphite: metric_path value timestamp\n:
foo.bar.baz 42 74857843
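The example above is Carbon's plaintext protocol: one line per datapoint. A minimal sketch of formatting and sending it from Python (`carbon_message` and `send_metric` are illustrative helpers, not a library API):

```python
import socket
import time


def carbon_message(path, value, timestamp=None):
    # One datapoint in Carbon's plaintext protocol: "metric_path value timestamp\n"
    if timestamp is None:
        timestamp = int(time.time())
    return "{} {} {}\n".format(path, value, timestamp)


def send_metric(host, path, value, port=2003):
    # Open a TCP connection to carbon-cache's line receiver (port 2003 by
    # default) and send a single datapoint.
    s = socket.socket()
    s.connect((host, port))
    s.sendall(carbon_message(path, value).encode())
    s.close()
```

You can test the format interactively with `echo "foo.bar.baz 42 $(date +%s)" | nc GRAPHITE_HOST 2003`.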
Counters:
StatsClient().incr(stat, count=1, rate=1)
'stats.' + key + ' ' + valuePerSecond + ' ' + ts + "\n";
'stats_counts.' + key + ' ' + value + ' ' + ts + "\n";
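Those two flush lines come from the same counter: `stats.<key>` is the per-second rate over the flush interval, `stats_counts.<key>` is the raw count. A sketch of the arithmetic (`flush_counter` is a hypothetical helper, not part of StatsD):

```python
def flush_counter(key, value, flush_interval, ts):
    # On each flush, StatsD emits two series per counter:
    #   stats.<key>        -> count normalized to a per-second rate
    #   stats_counts.<key> -> the raw count seen during the interval
    per_second = value / float(flush_interval)
    return [
        "stats.{} {} {}\n".format(key, per_second, ts),
        "stats_counts.{} {} {}\n".format(key, value, ts),
    ]
```

So 42 keystrokes in a 10-second flush interval show up as 4.2 in `stats.keystroke` and 42 in `stats_counts.keystroke`.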
Timers
StatsClient().timing(stat, delta, rate=1)
Gauges
StatsClient().gauge(stat, value, rate=1, delta=False)

Sample rate tells StatsD that your application is only sending data a certain percentage of the time. You can do this to avoid overwhelming the StatsD server with UDP traffic.
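The sample-rate mechanics can be sketched by building the UDP payload by hand (`statsd_packet` is an illustrative helper, not the client library's API):

```python
import random


def statsd_packet(stat, count=1, rate=1.0):
    # Build the UDP payload for a StatsD counter, e.g. "keystroke:1|c".
    # With rate < 1 the client sends only that fraction of the time and
    # appends "|@rate" so the server can scale the count back up.
    if rate < 1 and random.random() >= rate:
        return None  # this sample is skipped entirely
    packet = "{}:{}|c".format(stat, count)
    if rate < 1:
        packet += "|@{}".format(rate)
    return packet
```

At `rate=0.1`, roughly 90% of calls send nothing; the packets that do go out carry `|@0.1`, so StatsD multiplies each received count by 10 before flushing.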
c = StatsClient(host=secrets.GRAPHITE_URL, port=8125, prefix=None)
c.incr('keystroke', count=1, rate=1)
def self.query(search_item="pizza")
  starttime = Time.now
  result = Twitter.search(search_item, :count => 10, :result_type => "recent").results.first
  tweettime = ((Time.now - starttime) * 1000).to_i
  METRICS.timing('tweetsearch.query', tweettime)
  return result
end
ps = subprocess.Popen(('ps', 'aux'), stdout=subprocess.PIPE)
output = subprocess.check_output(('wc', '-l'), stdin=ps.stdout)
output = re.match(r'(?:^\s*)(\d.*$)', output).group(1)
c = statsd.StatsClient('STATSD_SERVER_IP', 8125)
c.gauge('{}.processes'.format(args.servername), output)

Gauges only flush the last value in StatsD.
Keystrokes Per Second
We all want to beat a number.

from evdev import InputDevice, ecodes
from statsd import StatsClient

c = StatsClient(host=secrets.GRAPHITE_URL, port=8125, prefix=None)
dev = InputDevice('/dev/input/event0')
for event in dev.read_loop():
    if event.type == ecodes.EV_KEY:
        if event.value == 1:
            c.incr('keystroke', count=1, rate=1)
[0] update LINE_RECEIVER_INTERFACE in carbon.conf for ports 2003 and 2004
[1] python manage.py changepassword root
Programming in the URL Bar :(
This web tool makes it easier :)
&target=alias(stats_counts.keystroke, "Key Strokes")
&target=alias(*.agents.graphite-a.cpuUsage, "cpu")
&target=alias(secondYAxis(*.timers.tweetsearch.query.mean), "tweets")
&target=stats_counts.keystroke&lineMode=connected&target=drawAsInfinite(events("Pomodoro"))
&target=color(summarize(stats_counts.keystroke%2C'1hour')%2C'64DD0E')
&target=summarize(timeShift(stats_counts.keystroke%2C'1hour')%2C'1hour')
&areaMode=stacked
&lineMode=staircase
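Rather than hand-escaping every `%2C`, a render URL like the ones above can be assembled programmatically. A sketch (`render_url` is a hypothetical helper; `graphite.example.com` is a placeholder host):

```python
from urllib.parse import urlencode


def render_url(base, targets, **params):
    # Build a Graphite /render URL; urlencode produces the %2C / %27
    # escapes that make hand-written URLs hard to read.
    pairs = [("target", t) for t in targets] + sorted(params.items())
    return base + "/render?" + urlencode(pairs)


url = render_url(
    "http://graphite.example.com",
    ["summarize(stats_counts.keystroke,'1hour')"],
    areaMode="stacked",
    lineMode="staircase",
)
```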
apt-get install collectd
# /etc/collectd/collectd.conf
LoadPlugin write_graphite
<Plugin write_graphite>
  <Node "graphite">
    Host "localhost"
    Port "2003"
    Protocol "udp"
    LogSendErrors true
    Prefix "collectd."
    StoreRates true
    AlwaysAppendDS false
    EscapeCharacter "_"
  </Node>
</Plugin>

103 plugins: sensors, thermal, postgres, varnish, etc. Most of them are read plugins.
curl -X POST http://GRAPHITE_URL/events/ \
  -d '{"what": "Ansible Jekyll Role", "tags": "Playbook"}'
drawAsInfinite(events("Playbook"))
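The same POST can be made from Python. A sketch building the event body with the stdlib (`event_payload` is a hypothetical helper; actually sending it would use e.g. `requests.post` against your Graphite webapp's `/events/` URL):

```python
import json


def event_payload(what, tags, data=""):
    # Body for POST /events/ on the Graphite webapp. The "tags" value is
    # what events("...") matches against in a render target later.
    return json.dumps({"what": what, "tags": tags, "data": data})
```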
import argparse
import re
import subprocess

import statsd

parser = argparse.ArgumentParser(description='Send num processes to StatsD')
parser.add_argument('servername', metavar='S', type=str,
                    help='The server that is generating this process list.')
args = parser.parse_args()

ps = subprocess.Popen(('ps', 'aux'), stdout=subprocess.PIPE)
output = subprocess.check_output(('wc', '-l'), stdin=ps.stdout)
output = re.match(r'(?:^\s*)(\d.*$)', output).group(1)

c = statsd.StatsClient('STATSD_URL', 8125)
c.gauge('{}.processes'.format(args.servername), output)
print output
def self.query(search_item="pizza")
  starttime = Time.now
  result = Twitter.search(search_item, :count => 10, :result_type => "recent").results.first
  tweettime = ((Time.now - starttime) * 1000).to_i
  METRICS.timing('tweetsearch.query', tweettime)
  return result
end

The METRICS constant is set in an initializer.
114 website metrics
Also creates:
target=stats.gauges.push.mattjbarlow.mattjbarlow-blog

Send both Events and Metrics. You can put a star on the user or repo to get an overlay comparison.
import requests
import socket
import time

TCP_IP = 'CARBON_IP'
TCP_PORT = 2003
github_user = ''
events_url = 'https://api.github.com/users/' + github_user + '/events'

r = requests.get(events_url)
for i in r.json():
    if i['type'] == 'PushEvent':
        git_time = i['created_at']
        dt = time.strptime(git_time, "%Y-%m-%dT%H:%M:%SZ")
        timestamp = time.mktime(dt)
        s = socket.socket()
        s.connect((TCP_IP, TCP_PORT))
        message = "github.{}.{} 1 {}\n".format(i['actor']['login'],
                                               i['repo']['name'],
                                               int(timestamp))
        print message
        s.sendall(message)
        s.close()
Subscribes to Rails instrumentation events and sends them to StatsD.
Circular Buffer means Fixed Size
You can put all kinds of data in here.

Lives in /opt/graphite/conf
[stats]
priority = 110
pattern = ^stats\..*
retentions = 10s:6h,1m:7d,10m:1y
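Those retentions fix each whisper file's size up front, because whisper preallocates every datapoint slot when the file is created. A rough sizing sketch (assuming whisper's 12 bytes per point, a 4-byte timestamp plus an 8-byte value, and ignoring the small per-file header):

```python
def archive_points(precision_s, retention_s):
    # Number of datapoint slots an archive holds: retention / precision.
    return retention_s // precision_s


def whisper_size_bytes(archives):
    # Whisper stores 12 bytes per point (4-byte timestamp + 8-byte value)
    # and preallocates the whole file; header overhead is ignored here.
    return 12 * sum(archive_points(p, r) for p, r in archives)


# The [stats] rule above: 10s:6h, 1m:7d, 10m:1y
archives = [(10, 6 * 3600), (60, 7 * 86400), (600, 365 * 86400)]
```

For this schema that is 2160 + 10080 + 52560 points, roughly 760 KB per metric, whether or not data ever arrives.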
Lives in /opt/graphite/conf
[default_average]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average

name: Arbitrary unique name for the rule
pattern: Regex pattern to match against the metric name
xFilesFactor: Ratio of valid data points required for aggregation to the next retention to occur
aggregationMethod: Function to apply to data points for aggregation
In /opt/graphite/storage/whisper
drwxr-xr-x 3 root root 4096 May  3 14:17 carbon
drwxr-xr-x 4 root root 4096 May  3 14:18 stats
drwxr-xr-x 3 root root 4096 May  3 14:18 stats_counts
drwxr-xr-x 2 root root 4096 May  3 14:18 statsd
graphite/storage/whisper/carbon/agents/graphited01-a/metricsReceived.wsp
http://graphite.example.com/render?target=carbon.agents.graphite01-a.metricsReceived
Start carbon:
/opt/graphite/bin/carbon-cache.py start --logdir=/var/log/carbon
Start django:
python /opt/graphite/webapp/graphite/manage.py runserver 192.168.0.1:80
node /opt/statsd/stats.js /opt/statsd/localConfig.js >> /var/log/statsd.log 2>&1
The smallest retention interval in Graphite must match the flush interval in StatsD.
storage-schemas.conf
localConfig.js
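A minimal sketch of a localConfig.js, assuming the default StatsD ports; note flushInterval is in milliseconds, so 10000 here pairs with a 10s smallest retention in storage-schemas.conf:

```javascript
// localConfig.js -- minimal StatsD config sketch
{
  graphiteHost: "localhost",  // carbon-cache host
  graphitePort: 2003,         // carbon plaintext line receiver
  port: 8125,                 // UDP port StatsD listens on for metrics
  flushInterval: 10000        // ms between flushes; match the smallest retention
}
```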
Carbon will only write the stats once per retention period.

storage-aggregation.conf