Dashing: How to make it work for you

Now that you have Dashing up and running in standard form you’re going to want to add your own data to it.

So, how does it work?

There are 2 main folders you’ll need to work in.

The first folder is dashboards. If you have a look in there just now you’ll find 3 files.
The first is layout.erb – a basic layout file that you’ll probably not touch very much.
The second is sample.erb and the third sampletv.erb. These are your demo dashboards that show you how to use the basic widgets.

In this scenario each widget makes up one of the boxes displayed on your dashboard. You can also install other widgets but that’s not covered here.

To view the dashboards go to http://yourip:3030/sample or http://yourip:3030/sampletv – The sampletv one basically just has more widgets on it.

When you want to add a new dashboard it’s as simple as creating a new file (with the relevant code in it) in the dashboards folder, say testing.erb, and restarting Dashing. It then automatically becomes available at http://yourip:3030/testing.

Basically you’ll want to configure each widget’s size, its name and the type of widget it is.

So, let’s break down the configuration of a widget.

    <li data-row="1" data-col="1" data-sizex="2" data-sizey="1">
      <div data-id="redalerts" data-view="Text" data-title="Red alerts" data-text="# of red alerts on Hobbit"></div>
    </li>

The data-sizex and data-sizey attributes set the width and height of the widget.
The data-id value should be named something specific to what you’re using the widget for. You’ll use this name in the script that pulls the data you want to display here.
The data-view is the type of widget you’re going to use. In this case it’s a simple text widget.

The rest you can play about with.
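Putting those pieces together, a complete testing.erb might look something like this. This is a sketch modelled on the structure of the bundled sample.erb – the title and the single widget are just examples:

```erb
<% content_for(:title) { "Testing" } %>

<div class="gridster">
  <ul>
    <li data-row="1" data-col="1" data-sizex="2" data-sizey="1">
      <div data-id="redalerts" data-view="Text" data-title="Red alerts"></div>
    </li>
  </ul>
</div>
```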

So, now that you have a dashboard configured you need to throw some data in it.

The next folder to look at is jobs.

This has scripts inside it that are scheduled to run at various intervals.
I’ve found it easier to split out the tests into multiple scripts instead of having one large script.
This lets you set different schedule times and just makes it simpler to manage.
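As a sketch, two separate job files with different intervals might look like the following. SCHEDULER and send_event are provided by Dashing at runtime; the tiny stand-ins below exist only so the sketch runs on its own, and the job names, intervals and values are made up:

```ruby
# Stand-in for Dashing's SCHEDULER: the real one is a rufus-scheduler
# that repeats the block on a timer; here we just run the block once.
class FakeScheduler
  def every(_interval)
    yield
  end
end
SCHEDULER = FakeScheduler.new

# Stand-in for Dashing's send_event, which pushes data to the dashboards.
EVENTS = {}
def send_event(id, data)
  EVENTS[id] = data
end

# jobs/redalerts.rb – a cheap check, so run it every 10 seconds
SCHEDULER.every '10s' do
  send_event('redalerts', { current: 3 })
end

# jobs/dbstats.rb – a heavier check, so run it every 5 minutes
SCHEDULER.every '5m' do
  send_event('dbstats', { current: 42 })
end
```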

Let’s look at one of the scripts;

# we need open-uri to fetch the page
require 'open-uri'

# Initialise the variable
redalerts = 0

# Schedule the script to run every 10 seconds
SCHEDULER.every '10s' do

  # Set redalerts to a count of how many times redalert.gif is found on the page it scans
  redalerts = open("").read.scan(/redalert\.gif/).count

  # Send the count to your dashboard
  send_event('redalerts', { current: redalerts })

end

The above script uses open-uri to open a web page every 10 seconds and searches it for the term redalert.gif. It then sticks a count of the matches into the variable redalerts.
The send_event call then updates the dashboard with the value.
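You can see the scan-and-count step in isolation by using a static string in place of the fetched page (the HTML below is made up for illustration):

```ruby
# A static stand-in for the page the job would fetch with open-uri
html = '<img src="redalert.gif"> all good <img src="redalert.gif">'

# Count the occurrences of redalert.gif, just as the job does
redalerts = html.scan(/redalert\.gif/).count
puts redalerts  # prints 2
```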

You’ll notice that the event name send_event is using matches the data-id in the dashboard file. That’s how the two are matched up.

Now restart Dashing and your dashboard should be getting your new data.

This is a basic test but shows you how easy it is to get data in to your dashboard.

Red alerts

Next we’ll go over some more advanced tests, like getting Ruby to connect to a MS SQL server to grab data.

Dashing dashboard Ubuntu install

At work recently I found that there are certain snippets of information I would like to have available to me at a glance.
We already have monitoring systems in place for alerts but I just wanted something simple – a real-time snapshot of relevant information.

I can’t remember where, it might have been VelocityConf Europe, but I came across Dashing. Dashing is an open-source dashboard framework based on the Sinatra Ruby library. It was built by Shopify to meet their needs but it’s very flexible and I’ve found it meets our needs just fine.

Installing it is pretty simple. My installation was based on Ubuntu 12.04.3 LTS.

Install went as follows;

I started by updating the OS to the latest patch levels

apt-get update && apt-get upgrade && reboot

Now on to dashing itself.

I added a dashing user and installed it under that user. I found it easier to manage that way but I imagine you can install it just fine under root.

useradd dashing
apt-get install ruby1.9.1 ruby1.9.1-dev build-essential nodejs
gem install bundler
gem install dashing
su - dashing
dashing new test-dashboard
cd test-dashboard
dashing start

This should get a basic dashing install up and running. You’ll get a message along the lines of;

Thin web server (v1.6.1 codename Death Proof)
Maximum connections set to 1024
Listening on, CTRL+C to stop
For the twitter widget to work, you need to put in your twitter API keys in the jobs/twitter.rb file.

You can then access it on your IP address – http://yourip:3030 (or whatever your IP address is) should display a page like this;

Default dashing dashboard

All of the data on the initial dashboard is generated randomly and updated every 2 seconds.

Now that you’ve tested it you’ll want an init file.

I pulled and modified one from there;

You need to update the username you are using for dashing, the location of the dashboard and maybe a couple other things. Then stick it in /etc/init.d/.

You’ll need to make it executable and tell Ubuntu to start it on boot.

chmod +x /etc/init.d/dashing
update-rc.d dashing defaults

Now start it and your service is up.

Change Pacemaker resource parameters

I recently had to work out how to change some parameters of a Heartbeat based cluster. This was something new to me as typically I’ve never had a need to change the parameters but in this case we’re migrating services from an old cluster to a new one and will be taking the old IP address with it.

I found 3 ways to do it and all of them are nice and simple.

Option 1, live edit.

You can open the configuration in your favourite text editor (vim in my case) and modify the parameter you need;

crm configure edit

Simply key your way to the part you want to change and save the file as you normally would in your editor. This will automatically commit the change to the cluster.

Test that it’s working as expected by issuing the crm monitor command;

crm_mon -1

Option 2 is a one shot command line.

The line below sets the IP value on the resource named SITE1-VIP.

crm resource param SITE1-VIP set ip ""


Option 3 is really the same as option 2 but perhaps makes it a bit clearer what’s actually happening.

crm_resource --resource SITE1-VIP --set-parameter ip --parameter-value ""

All nice and simple options.

Strange MySQL server issue connecting to actual IP address

After spending the afternoon helping a friend troubleshoot some MySQL issues I thought I’d document what happened so I can look back and re-use this for similar issues in the future.

Ultimately the issue was down to name resolution on the MySQL server. This had never been an issue before and there appear to have been no changes made to the box, but it would be strange if this was a bug.

The issue was resolved by starting MySQL with the --skip-name-resolve option. You can set the option permanently in the my.cnf file by adding skip-name-resolve.
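In my.cnf that means adding the option under the server section, along these lines ([mysqld] is the standard section header; your file layout may differ):

```ini
[mysqld]
# Skip reverse-DNS lookups on client connections
skip-name-resolve
```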

What helped me get to the bottom of the problem (which took me far too long!) was logging on to the MySQL server and trying to connect to the MySQL instance locally. If I did;

mysql -uroot -p -h

it would work fine, and the same if I used localhost instead. Instant connection.

When I tried to connect on the IP that eth0 was using it took much, much longer. This IP wasn’t listed in the DNS server or in the /etc/hosts file. I looked up how to disable lookups to test and found that to be the problem.

By setting that option you need to remember that any host connecting to that instance of MySQL needs to use the server’s IP address and not its hostname. Hostnames listed in the MySQL privilege tables are rendered unusable.

Another benefit of skipping DNS lookups is extra performance. In some cases the gain can apparently be quite spectacular.

Pacemaker Cluster Management

In my last post I talked about creating a Pacemaker and Heartbeat based High Availability cluster. One thing I found when I first set one of these clusters up was that there wasn’t much information about how to query the cluster status, make changes and manually move resources around. Don’t get me wrong, it is all out there but I didn’t find it all easily in one place so I’m going to document it here.

You can check the cluster status easily by doing the following;

server1# crm_mon -1

This will output something like;

server1# crm_mon -1
Last updated: Wed Jun 27 22:33:43 2012
Stack: Heartbeat
Current DC: server2 (sdfsf4dsf-612c-42d4-8544-ce216d3b6095) - partition with quorum
Version: 1.0.9-da7075976b5ff0bee71074385f8fd02f296ec8a3
2 Nodes configured, unknown expected votes
2 Resources configured.
Online: [ server1 server2 ]
SITE1-VIP (ocf::heartbeat:IPaddr): Started server1
SITE2-VIP (ocf::heartbeat:IPaddr): Started server2

As you can see both nodes are online and one VIP is started on each node.

If you want to see the configuration of the cluster you can run;

server1# crm configure show

You can manually move resources around. In this case we are moving a VIP from one server to another.

crm resource move SITE1-VIP server2

Resources can be renamed but they must be brought offline to do it. To minimise downtime I did the following to rename SITE1-VIP to BUSYWEBSITE-VIP. You want to make these as descriptive as possible for other users who may need to troubleshoot the cluster in future.

crm resource stop SITE1-VIP && crm configure rename SITE1-VIP BUSYWEBSITE-VIP && crm resource start BUSYWEBSITE-VIP

More information can be found on the Clusterlabs website over here.

High Availability on Ubuntu

I recently had to re-build an Ubuntu High Availability cluster which was providing access from 2 front end web farms to a back end SOLR cluster.

It’s a simple but incredibly effective setup. 2 inexpensive servers each running Nginx (I won’t cover the config here) with a Heartbeat based Pacemaker cluster and 2 virtual IP addresses. I used Ubuntu Server 11.04 but I suspect more recent versions will be near identical in configuration.

I configured server1 as the primary server of website1 and the second server as the primary server of website2. If one of the servers were to fail, the VIP that was active on that server would switch to the other server.

Here’s how I did it:

On each server install Nginx, Pacemaker and Heartbeat.

server1# apt-get install nginx pacemaker heartbeat -y
server2# apt-get install nginx pacemaker heartbeat -y

There’s no risk of any data corruption so we don’t have to make sure Nginx is only running on one server.

Start Nginx right away by issuing the command below on both servers.

server# service nginx start

Now we need to configure Heartbeat. Edit /etc/ha.d/ha.cf on server1 and configure it as you need it.

autojoin none          
udpport 697
bcast bond0
warntime 10
deadtime 20
initdead 60
keepalive 1
node server1
node server2
crm respawn

If you want to know more about the options I recommend having a look here.

For a basic guide: autojoin none means that servers can’t just join the cluster themselves. It gives a bit of security, although on a local LAN you *should* be pretty safe. The node lines list the servers that should be in the cluster. The ping option tells the server which IP address to send test pings to – in this case the network gateway – to make sure it should remain part of the cluster. If it can’t see the gateway then it probably shouldn’t be advertising any VIPs.

Another file you need to modify is /etc/ha.d/authkeys. The format should be;

auth 1
1 sha1 ARanDomKeyShouLdGoHereMuchMoreSecureThanThis

You should make the sha1 key quite secure. The authkeys file needs strong permissions so chmod it to 600.

server1# chmod 600 /etc/ha.d/authkeys

The two files above need to be the same on both servers. From server1 do the following;

server1# scp /etc/ha.d/ha.cf server2:/etc/ha.d/
server1# scp /etc/ha.d/authkeys server2:/etc/ha.d/

The basics are now in place so on both servers start Heartbeat.

service heartbeat start && ssh server2 service heartbeat start

The next bit is important. Go make a cup of tea while the cluster gets itself running. It doesn’t take a massively long time, less than a couple of minutes, but I spent the entire time checking if it was up and being disappointed it wasn’t.

After your tea check the cluster status using crm_mon.

crm_mon -1

This should give you the status of the cluster. The important bit to look for is “Online: [ server1 server2 ]”

That shows the cluster is talking and you’re OK to configure your resources.

When entering the cluster configuration you only need to do it on one server and the cluster replicates the configuration between the servers.

We’re not going to enable stonith here so the first thing we do is disable it.

crm configure property stonith-enabled=false

Now we can configure some actual resources. The first thing we’ll do is configure a VIP.

crm configure primitive SITE1-VIP ocf:heartbeat:IPaddr params ip="" cidr_netmask="32" op monitor interval="30s"

Now we have a VIP, let’s set a preference. We’ll make this VIP prefer to be on server1.

crm configure location SITE1-VIP-PREF SITE1-VIP 100: server1

Now we configure the second VIP.

crm configure primitive SITE2-VIP ocf:heartbeat:IPaddr params ip="" cidr_netmask="32" op monitor interval="30s"

This time we set the VIP preference to server2.

crm configure location SITE2-VIP-PREF SITE2-VIP 100: server2

So all done. Now we check the cluster status.

server1# crm_mon -1
Last updated: Wed Jun 27 22:33:43 2012
Stack: Heartbeat
Current DC: server2 (sdfsf4dsf-612c-42d4-8544-ce216d3b6095) - partition with quorum
Version: 1.0.9-da7075976b5ff0bee71074385f8fd02f296ec8a3
2 Nodes configured, unknown expected votes
2 Resources configured.
Online: [ server1 server2 ]
SITE1-VIP (ocf::heartbeat:IPaddr): Started server1
SITE2-VIP (ocf::heartbeat:IPaddr): Started server2

And that’s us. A simple HA cluster utilising both servers.

Another note I would like to add is how resource friendly Nginx is. We’re throwing thousands of queries per second through each instance and the load on the box is tiny – we’re using less than 5 percent of the CPU during peak loads.

One final thing. Test! Pull cables. Reboot the machine. Run through as many scenarios as you can think of to make sure the cluster reacts as you want it to.