grafana

I attended the 33c3 in Hamburg. An awesome event, as always. One of the slogans was: “use more bandwidth!”. To display the current amount of data that had been sent, a dashboard was created (note: it's down at this time of the year). I wanted to have something similar, so I started digging into that dashboard while being at the congress. It runs on Grafana.

Grafana

What is grafana?

Grafana is an open source metric analytics & visualization suite. It is most commonly used for visualizing time series data for infrastructure and application analytics but many use it in other domains including industrial sensors, home automation, weather, and process control.

Quote from the docs

A similar look, but with more details on bandwidth: this is the dashboard I created.

Setup

For testing, I created a VM, a script that captures my traffic on an interface (wlan0), and a database. You can download a fitting Grafana binary from the project page. My example VM was Ubuntu 14.04, created with Vagrant.

wget https://grafanarel.s3.amazonaws.com/builds/grafana_4.0.2-1481203731_amd64.deb
sudo apt-get install -y adduser libfontconfig
sudo dpkg -i grafana_4.0.2-1481203731_amd64.deb

Note: only use this type of installation for testing; for production use, please add the repository!
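If the server isn't running after the install, it can be started by hand; service name and port below are the package defaults as far as I know:

sudo service grafana-server start

Grafana then listens on http://localhost:3000 (default login admin / admin).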

The next step is to read the getting started guide. You'll recognize that a data source is necessary; using plain text is not an option.

Data sources / InfluxDB

For the sake of simplicity, I used InfluxDB, as described here.

What’s that? Again a quote from the docs:

InfluxDB is a time series database built from the ground up to handle high write and query loads. It is the second piece of the TICK stack. InfluxDB is meant to be used as a backing store for any use case involving large amounts of timestamped data, including DevOps monitoring, application metrics, IoT sensor data, and real-time analytics.

It has a neat REST API you can send your data to.

Installation

The same simple trick as before for the installation:

wget https://dl.influxdata.com/influxdb/releases/influxdb_1.1.1_amd64.deb
sudo dpkg -i influxdb_1.1.1_amd64.deb

Note: only use this type of installation for testing; for production use, please add the repository!

enable remote access for InfluxDB

Because the data isn't created on the VM itself, a remote host needs access to the db. InfluxDB is configured to allow access ONLY from localhost, yet without any further authentication, so how great that is seems debatable. For another node to write to it, you need to change auth-enabled from true to false in /etc/influxdb/influxdb.conf.

Note: This is only done for the sake of testing!
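To check that the HTTP API is actually reachable from another machine, pinging the endpoint helps (the IP is my test VM, the same one used further down):

curl -i 'http://192.168.56.200:8086/ping'

A 204 No Content response means InfluxDB is answering.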

InfluxDB: create a db

Just start the influx shell and create a db. Note: the official way, as documented on GitHub, is broken…

influx -precision rfc3339
CREATE DATABASE mydb
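If you prefer to stay outside the shell, the same can be done through the HTTP API; a sketch against my test VM:

curl -XPOST 'http://192.168.56.200:8086/query' --data-urlencode 'q=CREATE DATABASE mydb'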

Get network data, the hacky way

There are many great tools out there for collecting network statistics; I like vnstat, for example. But getting straightforward data out of them is quite a pain, so instead I use Python to do the job.

psutil

There is a useful Python module called psutil. It can read data from the /proc/ filesystem in an easy fashion. To get the data sent per second, you need to read /proc/net/dev, which contains statistics for the network interfaces.

Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
  eth0:  217872    1985    0    0    0     0          0         0   202460    1690    0    0    0     0       0          0
  eth1:  237343     500    0    0    0     0          0         0   852594     594    0    0    0     0       0          0
    lo:  262258     570    0    0    0     0          0         0   262258     570    0    0    0     0       0          0
docker0:       0       0    0    0    0     0          0         0        0       0    0    0    0     0       0          0

_content of a /proc/net/dev file_
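psutil reads exactly these counters for you, so no manual parsing of the file is needed. A quick interpreter session, using the eth0 numbers from the listing above:

>>> import psutil
>>> psutil.net_io_counters(pernic=True)['eth0']
snetio(bytes_sent=202460, bytes_recv=217872, packets_sent=1690, packets_recv=1985, errin=0, errout=0, dropin=0, dropout=0)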

Next, select the interface of your choice and compute the delta of the received/transmitted bytes and/or packets.

Here’s the script to get the traffic during one second on an interface:

#!/usr/bin/env python3
"""
Interface statistic

Usage:
  interface-statistic.py help | --help | -h
  interface-statistic.py version | --version | -v
  interface-statistic.py <interface>

Akendo 2016
Apache 2.0
"""

# https://pypi.python.org/pypi/psutil/
import psutil  # for access to the procfs
from docopt import docopt
from time import sleep


class interface:
    def __init__(self, interface_name):
        # psutil.net_if_stats() lists every interface known to the kernel,
        # which covers the check against /sys/class/net/{interface}
        if interface_name not in psutil.net_if_stats():
            raise Exception('invalid interface name!')
        if not psutil.net_if_stats()[interface_name].isup:
            raise Exception('interface is down!')
        self.interface_name = interface_name
        self.data = []

    def stats(self):
        # read the counters, wait one second, read again, print the delta
        counters_a = psutil.net_io_counters(pernic=True)[self.interface_name]
        sleep(1)
        counters_b = psutil.net_io_counters(pernic=True)[self.interface_name]
        print('TX/s:{0} RX/s:{1}'.format(counters_b.bytes_sent - counters_a.bytes_sent,
                                         counters_b.bytes_recv - counters_a.bytes_recv))


if __name__ == '__main__':
    arguments = docopt(__doc__, version='Interface statistic 0.0a')
    if arguments['<interface>']:
        net = interface(arguments['<interface>'])
        net.stats()
    else:
        print(arguments)

When the code is executed with a valid interface that's up and running, you'll get the traffic of the last second.

python interface-statistic.py enp0s25
TX/s:102 RX/s:102

Putting everything together

Next is to send the data to InfluxDB and configure Grafana to display this dataset. Here's a hacky bash script to send it to the db; I was just too lazy at this point to do this properly in Python.

#!/usr/bin/env bash

# endless loop: measure one second of traffic, then POST it to InfluxDB
while /bin/true;
do
  # output looks like: TX/s:102 RX/s:102
  out=$(python interface-statistic.py wlp3s0)
  # field 2 is the TX value (stripped to digits), field 3 the RX value
  curl -XPOST 'http://192.168.56.200:8086/write?db=mydb' -d "traffic,host=x240 up=$(echo ${out}|cut -d ':' -f 2 |sed 's/[^0-9]*//g'),down=$(echo ${out}|cut -d ':' -f 3 )"
done
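Each loop iteration posts one line in the InfluxDB line protocol: measurement name, tag set, field set. With the numbers from the script run above, the body of such a POST looks like this:

traffic,host=x240 up=102,down=102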

This will create a measurement in InfluxDB with the tag host set to x240 and two field keys, up and down, which hold the values. Let's check this:

influx -precision rfc3339
Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
Connected to http://localhost:8086 version 1.1.1
InfluxDB shell version: 1.1.1
> use mydb
Using database mydb
SELECT "down", "up" FROM "traffic" 
...
2016-12-30T17:05:50.226740631Z  1268            816
2016-12-30T17:05:51.319601571Z  3815            1836
2016-12-30T17:05:52.401584693Z  4467            3186
2016-12-30T17:05:53.512383018Z  1137            758
2016-12-30T17:05:54.600852827Z  2857            1200
2016-12-30T17:05:55.698312396Z  2814            1164
2016-12-30T17:05:56.81280255Z   2898            372
2016-12-30T17:05:57.885670347Z  3000            1284
2016-12-30T17:05:59.040079442Z  2264            792
2016-12-30T17:06:00.134257182Z  1522            552
2016-12-30T17:06:07.686102817Z  2210            416
...

Here we go! Now we can utilize this in the dashboard!

create a dashboard in grafana

The last step is to add a dashboard, point it at the InfluxDB data source, and select the right data set for the graph panel. The result can be seen above.
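The panel query can roughly be sketched like this ($timeFilter is the macro Grafana injects for the selected time range when querying an InfluxDB data source; the exact query depends on your panel settings):

SELECT "up", "down" FROM "traffic" WHERE $timeFilter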

recap

Grafana is a nice tool; it also includes an alerting feature that will notify you in case a threshold value is hit. It's a tool I'm going to use more in the future.

best regards Akendo