Daniel and I are in Chicago for HostingCon! To read about our first day’s adventures click here. I’ll be blogging about HostingCon a few times per day at my NetworkWorld blog, DatacenterJunkie. Highlights will include interviews with hosting industry leaders, almost-live coverage of the interesting tracks, and photos from the conference.
This weekend we say goodbye to our offices on floor 17 of the Westin Building.
It was good while it lasted. Here are some parting shots of our views, taken with my camera phone.
Thanks to Phil for both putting up with us and putting us up.
The Westin building is arguably still the primary hub of telcos and ISPs in Seattle.
I’d post some photos of the infamous meet-me room on floor 19 if I had any, but I will say my short visit in 2003 was astonishing.
Ski Kacoroski gave a great presentation last night at Seattle Area Systems Administrators Group about BitPusher’s work deploying Puppet onto NorthShore School District’s 5,000+ Apple Workstations & Laptops.
You can view the presentation slides here: Puppet For Mac Workstation Configuration Management.
Currently this is the second-largest deployment of Puppet, behind only Google’s recently announced Puppet deployment.
Cacti is a great tool for time-based visualization of data, but its out-of-the-box functionality can leave something to be desired. Here is a step-by-step tutorial for creating custom graphs of web server requests. Most of the instructions can be applied to other scenarios.
From a high level you need to:
1. Create a source of the (time-based) data which exists independent of Cacti
2. Create a means of obtaining the data (a Data Input Method)
3. Assemble the outputs of (2) into a Data Template
4. Create a Graph Template based on the values available in the Data Template
5. Apply those to a Device, which creates a Data Source and a set of graphs
Then, after some time passes, you will have some nice graphs to ogle.
A real-world example: capturing requests-per-second statistics from Nginx. With a little massaging, this technique should apply to any web server’s logs. In this case our Nginx server was not compiled with the stub_status module, which might otherwise be used, so instead we fashion a shell script that can run on each web server and determine rps from the access log.
statsnginx is a rudimentary script that reports the average rps over the last 10 seconds, 1 minute, and 5 minutes.
Example of running and the output:
c10s:389 c1m:409 c5m:312
Next step: make this data available to Cacti. I chose SNMP, since snmpd is already running on our Nginx servers.
Just add this line to snmpd.conf and restart snmpd, and we can see the data remotely (lines wrapped for clarity):

exec .1.3.6.1.4.1.5001.3 statsnginx

$ snmpget -v 1 -On -c public <hostname> .1.3.6.1.4.1.5001.3.101.1
.1.3.6.1.4.1.5001.3.101.1 = STRING: "c10s:275 c1m:274 c5m:230"
Figuring out the exact OID can be tricky. Use snmpwalk if necessary, e.g.

$ snmpwalk -v 1 -On -c public <hostname> .1.3.6.1.4.1.5001
Create the cacti script in /opt/cacti/scripts/statsnginx.sh:

#!/bin/sh
OUTPUT=`snmpget -Ov -v 2c -c bitpushsnmp $1 .1.3.6.1.4.1.5001.3.101.1`
echo $OUTPUT | sed -e 's/STRING: //' -e 's/"//g'
Test it from the cacti server.
$ ./statsnginx.sh 192.168.1.117
c10s:190 c1m:192 c5m:152
Notice how the output has three name:value pairs separated by spaces; this is the format Cacti likes. Don’t make the mistake I did of using name=value, which makes Cacti think it’s a PARTIAL result.
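A quick way to sanity-check a script’s output before wiring it into Cacti is to validate the format with a regex. This helper is hypothetical, not from the post:

```shell
#!/bin/sh
# Hypothetical helper: succeeds only if the output is in the
# space-separated name:value form Cacti expects (colons, not equals).
check_cacti_output() {
  echo "$1" | grep -Eq '^[A-Za-z0-9_]+:[0-9.]+( [A-Za-z0-9_]+:[0-9.]+)*$'
}

check_cacti_output "c10s:190 c1m:192 c5m:152" && echo "OK"
check_cacti_output "c10s=190 c1m=192 c5m=152" || echo "rejected: = instead of :"
```

The first call prints "OK"; the second is rejected, which is exactly the name=value trap described above.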
Now we can move into cacti to make use of the script and data it provides.
Please see this helpful link to Cacti docs.
Go to the Cacti console and create a Data Input Method to tell Cacti how to call the script …
- Data Input methods
- Name: statsnginx
- Input Type: Script/Command
- Input String: <path_cacti>/scripts/statsnginx.sh <hostname>
Now you have Input Fields and Output Fields; Cacti wants you to provide these. Since we are gathering data from a host that is remote to the Cacti server, we need to give the hostname as input.
- Add an Input Field
- Name: hostname
- Field Order: 1
- Friendly Name: Hostname
Add an Output Field for each of the name:value pairs above (c10s, c1m, c5m)
- Field [Output]: c10s
- Friendly Name: Average requests/sec over last 10 seconds
- Update RRD File: checked
- Do this for the other two fields.
- Then Save once more
Now create the Data Template
- Data Template
- Name: nginx – Requests
- Data Source
- Name: |host_description| – nginx – Requests
- Data Input Method: statsnginx (chosen from list)
- Data Source Items
- Internal Data Source Name: add one for each data point above (c10s, c1m, c5m), i.e. the Output Fields from the Data Input Method
This is a tricky part. You want all three data source items listed with appropriate min/max values (left at 0 in this case) and using GAUGE as the Data Source Type. Also make sure to select the appropriate Output Field from the list for each one.
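For reference, the RRD that Cacti builds from such a template is roughly equivalent to the following sketch (the 300s step, 600s heartbeat, and RRA row count are common Cacti defaults; none of these values appear in the post):

```
rrdtool create nginx_requests.rrd --step 300 \
    DS:c10s:GAUGE:600:0:U \
    DS:c1m:GAUGE:600:0:U \
    DS:c5m:GAUGE:600:0:U \
    RRA:AVERAGE:0.5:1:600
```

GAUGE stores each sampled value as-is, which is what you want here since the script already reports an averaged rate; COUNTER would wrongly derive a rate from deltas between samples.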
Now create the Graph Template
- Template Name: nginx – Requests
- Title: |host_description| – nginx – Requests
- I changed Upper Limit to 10000 just to be sure.
Now add Graph Template Items one by one
e.g. Data Source nginx – requests (c10s), and so on. Give each one an appropriate graph item type, such as AREA, STACK, or LINE1.
You can use the Graph Template for ucd/net – Load Average as a reference since it has similar measures (1, 5 and 15m load average).
BitPusher is excited to be sponsoring the Six Hour Startup Conference being held on May 31st at the Columbia City Theatre in Seattle, WA. Six Hour Startup Conference is being put on by Six Hour Startups, an exciting local company dedicated towards social business collaboration and development.
Six Hour Startup Conference’s agenda includes discussions with a variety of startup experts, on topics including business plan writing, corporate legal structures, and developing investment-oriented pitches. There will also be punch, and pie.
The event is from 9:30 AM until 5:20 PM. Registration is $99 + $2.48 processing fee from EventBrite. I’ll be attending as well, and hope to see you there! PS, I’m going to keep a backpack of t-shirts with me, if you’d like one, email me a size and look me up during lunch!
Come drink our Guinness! From 2:00 PM – 7:00 PM on Thursday, May 15th, 2008, BitPusher’s staff will be on-site serving Guinness and handing out T-shirts to all of our friends and customers while we celebrate our new managed colocation partnership with NetRiver.
NetRiver’s datacenter is located at 4200 194th Street SW in Lynnwood, WA 98036.
Please RSVP by leaving a comment here. All are welcome!
Last weekend we exhibited at LinuxFest NorthWest. It was pretty exciting, since this was the first time we’d actually had a booth and marketing schwag at an event. Hopefully it won’t be the last.
As these things go, the preparation was pretty intense. Between signing up for our booth and 10 days prior to the conference, we hadn’t done much planning for the event. At T-10 days, we decided we wanted stickers and T-shirts.
With 8 days left, I called up Robert Kaule over at essensys, who also owns a silk screening business. I asked him to make it happen on time. Being an engineer first and a screen-printer second, his response was “no problem, I’ll have to bill you for overnight delivery”.