Content here is by Michael Still mikal@stillhq.com. All opinions are my own.


Wed, 27 Sep 2017



I think I found a bug in python's unittest.mock library

    Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we've used various mock libraries to do that, with the flavor du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

    The problem is that python mocks are magical. A mock is an object on which you can call any method name, and the mock will happily pretend it has that method and return a new child mock. You can then later ask what "methods" were called on the mock.

    However, you use the same mock object later to make assertions about what was called. Herein lies the problem -- the mock object doesn't know if you're the code under test, or the code that's making assertions. So, if you fat finger the assertion method name in your test code, the assertion will just quietly map to a non-existent method which returns another mock, and your test will pass.

    Here's an example:

      #!/usr/bin/python3
      
      from unittest import mock
      
      
      class foo(object):
          def dummy(a, b):
              return a + b
      
      
      @mock.patch.object(foo, 'dummy')
      def call_dummy(mock_dummy):
          f = foo()
          f.dummy(1, 2)
      
          print('Asserting a call should work if the call was made')
          mock_dummy.assert_has_calls([mock.call(1, 2)])
          print('Assertion for expected call passed')
      
          print()
          print('Asserting a call should raise an exception if the call wasn\'t made')
          mock_worked = False
          try:
              mock_dummy.assert_has_calls([mock.call(3, 4)])
          except AssertionError as e:
              mock_worked = True
              print('Expected failure, %s' % e)
      
          if not mock_worked:
              print('*** Assertion should have failed ***')
      
          print()
          print('Asserting a call where the assertion has a typo should fail, but '
                'doesn\'t')
          mock_worked = False
          try:
              mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
          except AssertionError as e:
              mock_worked = True
              print('Expected failure, %s' % e)
              print()
      
          if not mock_worked:
              print('*** Assertion should have failed ***')
              print(mock_dummy.mock_calls)
              print()
      
      
      if __name__ == '__main__':
          call_dummy()
      


    If I run that code, I get this:

      $ python3 mock_assert_errors.py 
      Asserting a call should work if the call was made
      Assertion for expected call passed
      
      Asserting a call should raise an exception if the call wasn't made
      Expected failure, Calls not found.
      Expected: [call(3, 4)]
      Actual: [call(1, 2)]
      
      Asserting a call where the assertion has a typo should fail, but doesn't
      *** Assertion should have failed ***
      [call(1, 2), call.typo_assert_has_calls([call(3, 4)])]
      


    So, we should have been told that typo_assert_has_calls isn't a thing, but we weren't, because the call silently succeeded and was recorded as just another mock call. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday.

    I don't really have a solution to this right now (I'm home sick and not thinking straight), but it would be interesting to see what other people think.
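
    One partial defence, assuming unittest.mock's spec support, is autospeccing: a mock created with a spec raises AttributeError for attribute names the spec doesn't have, so a typo'd assertion blows up loudly instead of silently recording a call. A minimal standalone sketch (this dummy is a stand-in function, not Nova code):

```python
#!/usr/bin/python3

from unittest import mock


def dummy(a, b):
    return a + b


# create_autospec() builds a mock constrained to dummy's interface.
mock_dummy = mock.create_autospec(dummy)
mock_dummy(1, 2)

# The real assertion methods still work...
mock_dummy.assert_has_calls([mock.call(1, 2)])

# ...but a typo'd assertion name raises AttributeError instead of
# quietly recording a bogus call and passing.
try:
    mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
    print('*** typo was silently swallowed ***')
except AttributeError as e:
    print('Typo caught: %s' % e)
```

    The same applies to mock.patch.object(..., autospec=True). It doesn't catch every mistake (a typo that happens to match a real attribute still slips through), but it turns the common case into a hard failure.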

    Tags for this post: python unittest.mock mock testing
    Related posts: Example 2.1 from Dive Into Python; Universal Feedparser and XML namespaces; Calculating a SSH host key with paramiko; Twisted Python and Jabber SSL; Killing a blocking thread in python?; Starfish Prime

posted at: 21:58 | path: /python | permanent link to this entry


Sun, 28 May 2017



Configuring docker to use rexray and Ceph for persistent storage

    For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working...

    First off, I needed to install rexray:

      root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
      Selecting previously unselected package rexray.
      (Reading database ... 177547 files and directories currently installed.)
      Preparing to unpack rexray_0.9.0-1_amd64.deb ...
      Unpacking rexray (0.9.0-1) ...
      Setting up rexray (0.9.0-1) ...
      
      rexray has been installed to /usr/bin/rexray
      
      REX-Ray
      -------
      Binary: /usr/bin/rexray
      Flavor: client+agent+controller
      SemVer: 0.9.0
      OsArch: Linux-x86_64
      Branch: v0.9.0
      Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
      Formed: Thu, 04 May 2017 07:38:11 AEST
      
      libStorage
      ----------
      SemVer: 0.6.0
      OsArch: Linux-x86_64
      Branch: v0.9.0
      Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
      Formed: Thu, 04 May 2017 07:36:11 AEST
      


    Which is of course horrid. What that script seems to have done is install a deb'd version of rexray based on an alien'd package:

      root@labosa:~/rexray# dpkg -s rexray
      Package: rexray
      Status: install ok installed
      Priority: extra
      Section: alien
      Installed-Size: 36140
      Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
      Architecture: amd64
      Version: 0.9.0-1
      Depends: libc6 (>= 2.3.2)
      Description: Tool for managing remote & local storage.
       A guest based storage introspection tool that
       allows local visibility and management from cloud
       and storage platforms.
       .
       (Converted from a rpm package by alien version 8.86.)
      


    If I were building anything more than a test environment I'd want to do a better job of installing rexray than this, so you've been warned.

    Next, we need to configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren't mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on ceph. First off, we need to install the ceph tools, and copy the ceph authentication information from the ceph we installed using openstack-ansible earlier.

      root@labosa:/etc# apt-get install ceph-common
      root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
      The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
      ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
      Are you sure you want to continue connecting (yes/no)? yes
      Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
      rbdmap                       100%   92     0.1KB/s   00:00    
      ceph.conf                    100%  681     0.7KB/s   00:00    
      ceph.client.admin.keyring    100%   63     0.1KB/s   00:00    
      ceph.client.glance.keyring   100%   64     0.1KB/s   00:00    
      ceph.client.cinder.keyring   100%   64     0.1KB/s   00:00    
      ceph.client.cinder-backup.keyring   71     0.1KB/s   00:00  
      root@labosa:/etc# modprobe rbd
      


    You also need to configure rexray. My first attempt looked like this:

      root@labosa:/var/log# cat /etc/rexray/config.yml
      libstorage:
        service: ceph
      


    And the rexray output sure made it look like it worked...

      root@labosa:/etc# rexray service start
      ● rexray.service - rexray
         Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
       Main PID: 477423 (rexray)
          Tasks: 5
         Memory: 1.5M
            CPU: 9ms
         CGroup: /system.slice/rexray.service
                 └─477423 /usr/bin/rexray start -f
      
      May 29 10:14:07 labosa systemd[1]: Started rexray.
      


    Which looked good, but /var/log/syslog said:

      May 29 10:14:08 labosa rexray[477423]: REX-Ray
      May 29 10:14:08 labosa rexray[477423]: -------
      May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
      May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
      May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
      May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
      May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
      May 29 10:14:08 labosa rexray[477423]: Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
      May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:38:11 AEST
      May 29 10:14:08 labosa rexray[477423]: libStorage
      May 29 10:14:08 labosa rexray[477423]: ----------
      May 29 10:14:08 labosa rexray[477423]: SemVer: 0.6.0
      May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
      May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
      May 29 10:14:08 labosa rexray[477423]: Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
      May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:36:11 AEST
      May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
      msg="error starting libStorage server" error.driver=ceph time=1496016848215
      May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
      msg="default module(s) failed to initialize" error.driver=ceph time=1496016848216
      May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
      msg="daemon failed to initialize" error.driver=ceph time=1496016848216
      May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
      msg="error starting rex-ray" error.driver=ceph time=1496016848216
      


    It turns out that's because the service is called rbd, not ceph. So, the config file ended up looking like this:

      root@labosa:/var/log# cat /etc/rexray/config.yml
      libstorage:
        service: rbd
      
      rbd:
        defaultPool: rbd
      


    Now to install docker:

      root@labosa:/var/log# sudo apt-get update
      root@labosa:/var/log# sudo apt-get install linux-image-extra-$(uname -r) \
          linux-image-extra-virtual
      root@labosa:/var/log# sudo apt-get install apt-transport-https \
          ca-certificates curl software-properties-common
      root@labosa:/var/log# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      root@labosa:/var/log# sudo add-apt-repository \
          "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
          $(lsb_release -cs) \
          stable"
      root@labosa:/var/log# sudo apt-get update
      root@labosa:/var/log# sudo apt-get install docker-ce
      


    Now let's make a rexray volume.

      root@labosa:/var/log# rexray volume ls
      ID  Name  Status  Size
      root@labosa:/var/log# docker volume create --driver=rexray --name=mysql \
          --opt=size=1
      A size of 1 here means 1GB
      mysql
      root@labosa:/var/log# rexray volume ls
      ID         Name   Status     Size
      rbd.mysql  mysql  available  1
      


    Let's start the container.

      root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
          -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
      Unable to find image 'mysql:latest' locally
      latest: Pulling from library/mysql
      10a267c67f42: Pull complete 
      c2dcc7bb2a88: Pull complete 
      17e7a0445698: Pull complete 
      9a61839a176f: Pull complete 
      a1033d2f1825: Pull complete 
      0d6792140dcc: Pull complete 
      cd3adf03d6e6: Pull complete 
      d79d216fd92b: Pull complete 
      b3c25bdeb4f4: Pull complete 
      02556e8f331f: Pull complete 
      4bed508a9e77: Pull complete 
      Digest: sha256:2f4b1900c0ee53f344564db8d85733bd8d70b0a78cd00e6d92dc107224fc84a5
      Status: Downloaded newer image for mysql:latest
      ccc251e6322dac504e978f4b95b3787517500de61eb251017cc0b7fd878c190b
      


    And now to prove that persistence works and that there's nothing up my sleeve...

      root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
          sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
          -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
      mysql: [Warning] Using a password on the command line interface can be insecure.
      Welcome to the MySQL monitor.  Commands end with ; or \g.
      Your MySQL connection id is 3
      Server version: 5.7.18 MySQL Community Server (GPL)
      
      Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
      
      Oracle is a registered trademark of Oracle Corporation and/or its
      affiliates. Other names may be trademarks of their respective
      owners.
      
      Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
      
      mysql> show databases;
      +--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | mysql              |
      | performance_schema |
      | sys                |
      +--------------------+
      4 rows in set (0.00 sec)
      
      mysql> create database demo;
      Query OK, 1 row affected (0.03 sec)
      
      mysql> use demo;
      Database changed
      mysql> create table foo(val char(5));
      Query OK, 0 rows affected (0.14 sec)
      
      mysql> insert into foo(val) values ('a'), ('b'), ('c');
      Query OK, 3 rows affected (0.08 sec)
      Records: 3  Duplicates: 0  Warnings: 0
      
      mysql> select * from foo;
      +------+
      | val  |
      +------+
      | a    |
      | b    |
      | c    |
      +------+
      3 rows in set (0.00 sec)
      


    Now let's re-create the container and prove the data remains.

      root@labosa:/var/log# docker stop some-mysql
      some-mysql
      root@labosa:/var/log# docker rm some-mysql
      some-mysql
      root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
          -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
      99a7ccae1ad1865eb1bcc8c757251903dd2f1ac7d3ce4e365b5cdf94f539fe05
      
      root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
          sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -\
          P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
      mysql: [Warning] Using a password on the command line interface can be insecure.
      Welcome to the MySQL monitor.  Commands end with ; or \g.
      Your MySQL connection id is 3
      Server version: 5.7.18 MySQL Community Server (GPL)
      
      Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
      
      Oracle is a registered trademark of Oracle Corporation and/or its
      affiliates. Other names may be trademarks of their respective
      owners.
      
      Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
      
      mysql> use demo;
      Reading table information for completion of table and column names
      You can turn off this feature to get a quicker startup with -A
      
      Database changed
      mysql> select * from foo;
      +------+
      | val  |
      +------+
      | a    |
      | b    |
      | c    |
      +------+
      3 rows in set (0.00 sec)
      
    So there you go.

    Tags for this post: docker ceph rbd rexray
    Related posts: So you want to setup a Ceph dev environment using OSA; Juno nova mid-cycle meetup summary: containers

posted at: 18:45 | path: /docker | permanent link to this entry


Sat, 27 May 2017



So you want to setup a Ceph dev environment using OSA

    Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I need a Ceph development environment it seems logical to build it as an openstack-ansible Ocata AIO. There were a few gotchas there, so I want to explain the process I used.

    First off, Ceph is enabled in an openstack-ansible AIO using a thing I've never seen before called a "Scenario". Basically this means that you need to export an environment variable called "SCENARIO" before running the AIO install. Something like this will do the trick:

      export SCENARIO=ceph
      


    Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

      --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:55:07.803635173 +1000
      +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:58:30.417019878 +1000
      @@ -338,7 +338,9 @@
       #     foo: 1234
       #     bar: 5678
       #
      -ceph_conf_overrides: {}
      +ceph_conf_overrides:
      +  global:
      +    osd_pool_default_pg_num: 8
       
       
       #############
      @@ -373,4 +375,4 @@
       # Set this to true to enable File access via NFS.  Requires an MDS role.
       nfs_file_gw: true
       # Set this to true to enable Object access via NFS. Requires an RGW role.
      -nfs_obj_gw: false
      \ No newline at end of file
      +nfs_obj_gw: false
      


    That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.

    And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I'll never need to think about it again which is nice.

    Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

      root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
      root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
          cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
           health HEALTH_OK
           monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                  election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
           osdmap e20: 3 osds: 3 up, 3 in
                  flags sortbitwise,require_jewel_osds
            pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                  102156 kB used, 3070 GB / 3070 GB avail
                        40 active+clean
      root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
      ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
      -1 2.99817 root default                                      
      -2 2.99817     host labosa                                   
       0 0.99939         osd.0        up  1.00000          1.00000 
       1 0.99939         osd.1        up  1.00000          1.00000 
       2 0.99939         osd.2        up  1.00000          1.00000 
      


    Tags for this post: openstack osa ceph openstack-ansible
    Related posts: Configuring docker to use rexray and Ceph for persistent storage

posted at: 18:30 | path: /openstack/osa | permanent link to this entry


Wed, 17 May 2017



The Collapsing Empire




    ISBN: 076538888X
    LibraryThing
    This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don't know that and are busy having petty trade wars instead. It isn't a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire...

    Tags for this post: book john_scalzi
    Related posts: Agent to the Stars; The Android's Dream; Redshirts; The Ghost Brigades ; Old Man's War ; The End of All Things


posted at: 21:46 | path: /book/John_Scalzi | permanent link to this entry


Thu, 11 May 2017



Python3 venvs for people who are old and grumpy

    I've been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn't a thing over there as best I can tell.

    So how do I make a venv? It's really not too bad...

    First, install the dependencies:

      git clone git://github.com/yyuu/pyenv.git .pyenv
      echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
      echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
      echo 'eval "$(pyenv init -)"' >> ~/.bashrc
      git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
      source ~/.bashrc
      


    Now to make a venv, do something like this (in this case, infrasot is the name of the venv):

      mkdir -p ~/.virtualenvs/pyenv-infrasot
      cd ~/.virtualenvs/pyenv-infrasot
      pyenv virtualenv system infrasot
      


    You can see your installed venvs like this:

      $ pyenv versions
      * system (set by /home/user/.pyenv/version)
        infrasot
      


    Where system is the system-installed Python, and not a venv. To activate and deactivate the venv, do this:

      $ pyenv activate infrasot
      $ ... stuff you're doing ...
      $ pyenv deactivate
      


    I'll probably write wrappers at some point so that this looks like virtualenvwrapper, but it's good enough for now.

    Tags for this post: python venv virtualenvwrapper python3
    Related posts: Getting Google Talk working with PyXMPP; Packet capture in python; Implementing SCP with paramiko; More coding club; Dealing with remote HTTP servers with buggy chunking implementations; Learning Python

posted at: 21:20 | path: /python | permanent link to this entry


Sun, 07 May 2017



Things I read today: the best description I've seen of metadata routing in neutron

posted at: 17:52 | path: /openstack | permanent link to this entry


Tue, 04 Apr 2017



Light to Light, Day Three

    The third and final day of the Light to Light Walk at Ben Boyd National Park. This was a shorter (8 kms) easier walk. A nice way to finish the journey.







    Tags for this post: events pictures 20170313 photo scouts bushwalk
    Related posts: Light to Light, Day Two; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Exploring the Jagungal; Potato Point

posted at: 17:42 | path: /events/pictures/20170313 | permanent link to this entry


Light to Light, Day Two

    Our second day walking the Light to Light walk in Ben Boyd National Park. This second day was about 10 kms and was on easier terrain than the first day. That said, probably a little less scenic than the first day too.







    Tags for this post: events pictures 20170312 photo scouts bushwalk
    Related posts: Potato Point; Light to Light, Day Three; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Exploring the Jagungal

posted at: 16:59 | path: /events/pictures/20170312 | permanent link to this entry


Light to Light, Day One

    Macarthur Scouts took a group of teenagers down to Ben Boyd National Park on the weekend to do the Light to Light walk. The first day was 14 kms through lovely undulating terrain. This was the hardest day of the walk, but very rewarding and I think we all had fun.








    Tags for this post: events pictures 20170311 photo scouts bushwalk
    Related posts: Exploring the Jagungal; Scout activity: orienteering at Mount Stranger; Light to Light, Day Two; Light to Light, Day Three; Potato Point

posted at: 16:01 | path: /events/pictures/20170311 | permanent link to this entry


Thu, 02 Feb 2017



Nova vendordata deployment, an excessively detailed guide

    Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.

    User provided data

    The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

    Nova provided data

    Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

    Deployer provided data

    There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot -- the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user's behalf.

    Nova supports a mechanism to add "vendordata" to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

    • StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don't change between instances, such as the location of the corporate puppet server.
    • DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.


    Tell me more about DynamicJSON

    Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it's the most interesting bit here.

    To use DynamicJSON, you configure it like this:

    • Add "DynamicJSON" to the vendordata_providers configuration option. This can also include "StaticJSON" if you'd like.
    • Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.


    The format for an entry in vendordata_dynamic_targets is like this:

    <name>@<url>
    


    Where name is a short string not including the '@' character, and where the URL can include a port number if so required. An example would be:

    testing@http://127.0.0.1:125
    


    Metadata fetched from this target will appear in the metadata service in a new file called vendor_data2.json, with a path (either in the metadata service URL or in the configdrive) like this:

    openstack/2016-10-06/vendor_data2.json
    


    For each dynamic target, there will be an entry in the JSON file named after that target. For example:

            {
                "testing": {
                    "value1": 1,
                    "value2": 2,
                    "value3": "three"
                }
            }
    


    Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.

    The following data is passed to your REST service as a JSON encoded POST:

    • project-id: the UUID of the project that owns the instance
    • instance-id: the UUID of the instance
    • image-id: the UUID of the image used to boot this instance
    • user-data: as specified by the user at boot time
    • hostname: the hostname of the instance
    • metadata: as specified by the user at boot time
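
    As a concrete illustration of that contract, here is a minimal sketch of a service that could sit at the other end of vendordata_dynamic_targets, using only the Python standard library. This is just a toy, not the sample service from my vendordata repository: it skips the keystone middleware entirely, and the build_vendordata helper and its ad-token field are invented for the example:

```python
#!/usr/bin/python3

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def build_vendordata(request_doc):
    # request_doc is the JSON document nova POSTs, with keys like
    # 'project-id', 'instance-id', 'image-id', 'user-data', 'hostname'
    # and 'metadata'. What you return is entirely up to you; this
    # 'ad-token' value is a made-up stand-in for something like an
    # Active Directory registration token generated per instance.
    return {
        'ad-token': 'token-for-%s' % request_doc.get('instance-id', 'unknown'),
        'hostname-seen': request_doc.get('hostname'),
    }


class VendordataHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON POST body nova sends, and reply with JSON.
        length = int(self.headers.get('Content-Length', 0))
        request_doc = json.loads(self.rfile.read(length).decode('utf-8'))
        body = json.dumps(build_vendordata(request_doc)).encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To actually serve requests (this call blocks), on the port used
# elsewhere in this post:
# HTTPServer(('127.0.0.1', 8888), VendordataHandler).serve_forever()
```

    Whatever JSON this returns ends up under the target's name in vendor_data2.json, as described above.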


    Deployment considerations

    Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request -- you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.

    This behavior is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.

    Deploying the sample vendordata service

    There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:

    $ git clone http://github.com/mikalstill/vendordata
    $ cd vendordata
    $ apt-get install virtualenvwrapper
    $ . /etc/bash_completion.d/virtualenvwrapper (only needed if virtualenvwrapper wasn't already installed)
    $ mkvirtualenv vendordata
    $ pip install -r requirements.txt
    


    We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it's configured to work with an openstack-ansible all in one install that I set up for my private testing, which probably isn't what you're using:

    [keystone_authtoken]
    insecure = False
    auth_plugin = password
    auth_url = http://172.29.236.100:35357
    auth_uri = http://172.29.236.100:5000
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = nova
    password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
    region_name = RegionOne
    


    Per the README file in the vendordata sample repository, you can test the vendordata server in a standalone manner by generating a token manually from keystone:

    $ curl -d @credentials.json -H "Content-Type: application/json" http://172.29.236.100:5000/v2.0/tokens > token.json
    $ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`
    


    We then include that token in a test request to the vendordata service:

    curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/
    


    Configuring nova to use the external metadata service

    Now we're ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:

    [api]
    vendordata_providers=DynamicJSON
    vendordata_dynamic_targets=testing@http://metadatathingie.example.com:8888
    


    Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:

    nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo
    


    We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):

    # cat openstack/latest/vendor_data2.json | python -m json.tool
    {
        "testing": {
            "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
        }
    }
    


    Tags for this post: openstack nova metadata vendordata configdrive cloud-init
    Related posts: Things I read today: the best description I've seen of metadata routing in neutron; Technorati porn tags; Kilo Nova deploy recommendations; Juno nova mid-cycle meetup summary: conclusion; Compute Kilo specs are open; Merged in Havana: fixed ip listing for single hosts

posted at: 19:49 | path: /openstack | permanent link to this entry


Tue, 31 Jan 2017



Giving serial devices meaningful names

    This is a hack I've been using for ages, but I thought it deserved a write up.

    I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices one of the challenges is having them show up in predictable places so that the scripts which know how to drive each device are talking in the right place.

    For the trivial case, this is pretty easy with udev:

    $  cat /etc/udev/rules.d/60-local.rules 
    KERNEL=="ttyUSB*", \
        ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
        ATTRS{serial}=="A8003Ye7", \
        SYMLINK+="radish"
    


    This says that for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor ID, product ID, and serial number match the relevant values, to symlink the device to "/dev/radish".

    You find out the vendor and product ID from lsusb like this:

    $ lsusb
    Bus 003 Device 003: ID 0624:0201 Avocent Corp. 
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    


    You can play with inserting and removing the device to determine which of these entries is the device you care about.

    So that's great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more... difficult.

    It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

    KERNEL=="ttyUSB*", \
        ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \
        PROGRAM="/usr/bin/usbtest /dev/%k", \
        SYMLINK+="%c"
    


    This results in /usr/bin/usbtest being run with the path of the device file on its command line for every detection of a matching device. The stdout of that program is then used as the name of a symlink in /dev.

    So, that script attempts to talk to the device and determine what it is -- in my case either a currentcost or a solar panel inverter.
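    I haven't included usbtest itself here, but the shape of such a probe script is simple: open the device, poke it, and print a name based on what answers. A hypothetical sketch -- the banner strings and device names are invented, and a real version would use something like pyserial to do the actual talking:

```python
#!/usr/bin/python3

import sys


def identify(banner):
    """Map a device's hello banner to a symlink name for udev.

    The banner strings here are hypothetical -- substitute whatever
    your actual devices say when you first talk to them.
    """
    if 'CurrentCost' in banner:
        return 'currentcost'
    if 'Inverter' in banner:
        return 'inverter'
    # Unknown device: fall back to a generic name so udev still
    # creates a symlink we can inspect later.
    return 'unknownserial'


if __name__ == '__main__':
    # udev passes the device path (/dev/ttyUSBn, from %k) as argv[1];
    # a real probe would open it with pyserial and read the banner.
    with open(sys.argv[1], 'rb') as dev:
        banner = dev.read(64).decode('ascii', errors='replace')
    print(identify(banner))
```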

    Tags for this post: linux udev serial usb usbserial
    Related posts: Roomba serial cables; Via M10000, video, and a Belkin wireless USB thing; Linux USB quandary; SMART and USB storage; ov511 hackery; Ubuntu, Dapper Drake, and that difficult Dell e310

posted at: 12:04 | path: /linux | permanent link to this entry


Mon, 30 Jan 2017



A pythonic example of recording metrics about ephemeral scripts with prometheus

    In my previous post we talked about how to record information from short lived scripts (I call them ephemeral scripts by the way) with prometheus. The example there was a script which checked the SMART status of each of the disks in a machine and reported that via pushgateway. I now want to work through a slightly more complicated example.

    I think you hit the limits of reporting simple values in shell scripts via curl requests fairly quickly. For example with the SMART monitoring script, SMART is capable of returning a whole heap of metrics about the performance of a disk, but we boiled that down to a single "health" value. This is largely because writing a parser for all the other values that smartctl returns would be inefficient and fragile in shell. So for this post, we're going to work through an example of how to report a variety of values from a python script. Those values could be the parsed output of smartctl, but to mix things up a bit, I'm going to use a different script I wrote recently.
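    As an illustration of why python is a better fit, here's a sketch of parsing the attribute table that "smartctl -A" prints -- exactly the sort of thing that is painful in shell. The column layout here is assumed from typical smartctl output, and a real script would run smartctl itself and cope with its quirks:

```python
def parse_smart_attributes(smartctl_output):
    """Parse the vendor attribute table from `smartctl -A` output.

    Returns a dict mapping attribute names to their raw values,
    e.g. {'Temperature_Celsius': 34}.
    """
    attributes = {}
    in_table = False
    for line in smartctl_output.splitlines():
        fields = line.split()
        if fields[:2] == ['ID#', 'ATTRIBUTE_NAME']:
            in_table = True
            continue
        if in_table:
            if not fields:
                break  # a blank line ends the table
            # Columns: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH
            #          TYPE UPDATED WHEN_FAILED RAW_VALUE
            try:
                attributes[fields[1]] = int(fields[9])
            except (IndexError, ValueError):
                continue  # skip lines that don't parse cleanly
    return attributes
```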

    This new script uses the Weather Underground API to look up weather stations near my house, and then generate graphics of the weather forecast. These graphics are displayed on the various Cisco SIP phones I already had around the house. The forecasts look like this:



    The script to generate these weather forecasts is relatively simple python, and you can see the source code on github.

    My cunning plan here is to use prometheus' time series database and alert capabilities to drive home automation around my house. The first step for that is to start gathering some simple facts about the home environment so that we can do trending and decision making on them. The code to do this isn't all that complicated. First off, we need to add the python prometheus client to our python environment, which is hopefully a venv:

    pip install prometheus_client
    pip install six
    


    That second dependency isn't a strict requirement for prometheus, but the script I'm working on needs it (because it needs to work out what's a text value, and python 3 is bonkers).

    Next we import the prometheus client in our code and setup the counter registry. At the same time I record when the script was run:

    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway
    
    registry = CollectorRegistry()
    Gauge('job_last_success_unixtime', 'Last time the weather job ran',
          registry=registry).set_to_current_time()
    


    And then we just add gauges for any values we want to push to the pushgateway:

    Gauge('_'.join(field), '', registry=registry).set(value)
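    The field variable there comes from flattening the nested structure the weather API returns into path/value pairs. A sketch of that flattening -- the shape of the API response here is hypothetical:

```python
def flatten(data, prefix=None):
    """Flatten a nested dict of API results into (path, value) pairs,
    keeping only the numeric leaves that make sense as gauges."""
    prefix = prefix or []
    for key, value in data.items():
        if isinstance(value, dict):
            yield from flatten(value, prefix + [key])
        elif isinstance(value, (int, float)):
            yield prefix + [key], value

# Each (field, value) pair then becomes a gauge named '_'.join(field)
```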
    


    Finally, the values don't exist in the pushgateway until we actually push them there, which we do like this:

    push_to_gateway('localhost:9091', job='weather', registry=registry)
    


    You can see the entire patch I wrote to add prometheus support on github if you're interested in an example with more context.

    Now we can have pretty graphs of temperature and stuff!

    Tags for this post: prometheus monitoring python pushgateway
    Related posts: Recording performance information from short lived processes with prometheus; Basic prometheus setup; The Ghost Brigades (2); Python effective TLD library; Getting started with OpenStack development; Building a symlink tree for MythTV recordings

posted at: 01:08 | path: /prometheus | permanent link to this entry


Fri, 27 Jan 2017



Recording performance information from short lived processes with prometheus

    Now that I'm recording basic statistics about the behavior of my machines, I want to start tracking some statistics from various scripts I have lying around in cron jobs. In order to make myself sound smarter, I'm going to call these short lived scripts "ephemeral scripts" throughout this document. You're welcome.

    The promethean way of doing this is to have a relay process. Prometheus really wants to know where to find web servers to learn things from, and my ephemeral scripts are both not permanently around and also not running web servers. Luckily, prometheus has a thing called the pushgateway which is designed to handle this situation. I can run just one of these, and then have all my little scripts just tell it things to add to its metrics. Then prometheus regularly scrapes this one process and learns things about those scripts. It's like a game of Telephone, but for processes.

    First off, let's get the pushgateway running. This is basically the same as the node_exporter from last time:

    $ wget https://github.com/prometheus/pushgateway/releases/download/v0.3.1/pushgateway-0.3.1.linux-386.tar.gz
    $ tar xvzf pushgateway-0.3.1.linux-386.tar.gz
    $ cd pushgateway-0.3.1.linux-386
    $ ./pushgateway
    


    Let's assume once again that we're all adults and did something nicer than that involving configuration management and init scripts.

    The pushgateway implements a relatively simple HTTP protocol to add values to the metrics that it reports. Note that the values won't change once set until you change them again; they're not garbage collected or aged out or anything fancy. Here's a trivial example of adding a value to the pushgateway:

    echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job
    


    This is stolen straight from the pushgateway README of course. The above command will have the pushgateway start to report a metric called "some_metric" with the value "3.14", for a job called "some_job". In other words, we'll get this in the pushgateway metrics URL:

    # TYPE some_metric untyped
    some_metric{instance="",job="some_job"} 3.14
    


    You can see that this isn't perfect because the metric is untyped (what types exist? we haven't covered that yet!), and has these confusing instance and job labels. One tangent at a time, so let's explain instances and jobs first.

    On jobs and instances

    Prometheus is built for a universe a little bit unlike my home lab. Specifically, it expects there to be groups of processes doing a thing instead of just one. This is especially true because it doesn't really expect things like the pushgateway to be proxying your metrics for you because there is an assumption that every process will be running its own metrics server. This leads to some warts, which I'll explain in a second. Let's start by explaining jobs and instances.

    For a moment, assume that we're running the world's most popular wordpress site. The basic architecture for our site is web frontends which run wordpress, and database servers which store the content that wordpress is going to render. When we first started our site it was all easy, as they could both be on the same machine or cloud instance. As we grew, we were first forced to split apart the frontend and the database into separate instances, and then forced to scale those two independently -- perhaps we have reasonable database performance so we ended up with more web frontends than we did database servers.

    So, we go from something like this:



    To an architecture which looks a bit like this:



    Now, in prometheus (i.e. google) terms, there are three jobs here. We have web frontends, database masters (the top one which is getting all the writes), and database slaves (the bottom one which everyone is reading from). For one of the jobs, the frontends, there is more than one instance of the job. To put that into pictures:



    So, the topmost frontend job would be job="fe" and instance="0". Google also had a cool way to look up jobs and instances via DNS, but that's a story for another day.

    To harp on a point here, all of these processes would be running a web server exporting metrics in google land -- that means that prometheus would know that it's monitoring a frontend job because it would be listed in the configuration file as such. You can see this in the configuration file from the previous post. Here's the relevant snippet again:

      - job_name: 'node'
        static_configs:
          - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']
    


    The job "node" runs on three targets (instances), named "molokai:9100", "dell:9100", and "eeebox:9100".

    However, we live in the ghetto for these ephemeral scripts and want to use the pushgateway for more than one such script, so we have to tell lies via the pushgateway. So for my simple ephemeral script, we'll tell the pushgateway that the job is the script name and the instance can be an empty string. If we don't do that, then prometheus will think that the metric relates to the pushgateway process itself, instead of the ephemeral process.

    We tell the pushgateway what job and instance to use like this:

    echo "some_metric 3.14" | curl --data-binary @- http://localhost:9091/metrics/job/frontend/instance/0
    


    Now we'll get this at the metrics URL:

    # TYPE some_metric untyped
    some_metric{instance="",job="some_job"} 3.14
    some_metric{instance="0",job="frontend"} 3.14
    


    The first metric there is from our previous attempt (remember when I said that values are never cleared out?), and the second one is from our second attempt. To clear out values you'll need to restart the pushgateway process. For simple ephemeral scripts, I think it's ok to leave the instance empty, and just set a job name -- as long as that job name is globally unique.
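    The job and instance segments are just path components in the URL, so pushing from python needs nothing beyond the standard library. A sketch, assuming a pushgateway on localhost:9091:

```python
import urllib.request


def pushgateway_url(job, instance=None, host='localhost:9091'):
    """Build the pushgateway URL for a job (and optional instance)."""
    url = 'http://%s/metrics/job/%s' % (host, job)
    if instance is not None:
        url += '/instance/%s' % instance
    return url


def push_metric(name, value, job, instance=None):
    """POST a single untyped metric to the pushgateway."""
    body = ('%s %s\n' % (name, value)).encode('ascii')
    req = urllib.request.Request(pushgateway_url(job, instance),
                                 data=body, method='POST')
    return urllib.request.urlopen(req)
```

    Calling push_metric('some_metric', 3.14, job='frontend', instance='0') is then equivalent to the curl command above.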

    We also need to tell prometheus to believe our lies about the job and instance for things reported by the pushgateway. The scrape configuration for the pushgateway therefore ends up looking like this:

      - job_name: 'pushgateway'
        honor_labels: true
        static_configs:
          - targets: ['molokai:9091']
    


    Note the honor_labels there, that's the believing the lies bit.

    There is one thing to remember here before we can move on. Job names are blindly trusted from our reporting, so it's now up to us to keep them unique. If we export a metric on every machine, we might want to keep the job name specific to the machine. That said, it really depends on what you're trying to do -- so just pay attention when picking job and instance names.

    On metric types

    Prometheus supports a couple of different types for the metrics which are exported. We'll cover the first two now, and the third in more detail in a later post. The types are:

    • Gauge: a value which goes up and down over time, like the fuel gauge in your car. Non-motoring examples would include the amount of free disk space on a given partition, the amount of CPU in use, and so forth.
    • Counter: a value which always increases. This might be something like the number of bytes sent by a network card -- the value only resets when the network card is reset (probably by a reboot). These only-increasing types are valuable because it's easier to do maths on them in the monitoring system.
    • Histograms: a set of values broken into buckets. For example, the response time for a given web page would probably be reported as a histogram. We'll discuss histograms in more detail in a later post.


    I don't really want to dig too deeply into the value types right now, apart from explaining that our previous examples haven't specified a type for the metrics being provided, and that this is undesirable. For now we just need to decide if the value goes up and down (a gauge) or just up (a counter). You can read more about prometheus types at https://prometheus.io/docs/concepts/metric_types/ if you want to.

    A typed example

    So now we can go back and do the same thing as before, but we can do it with typing like adults would. Let's assume that the value of pi is a gauge, and goes up and down depending on the vagaries of space time. Let's also show that we can add a second metric at the same time because we're fancy like that. We'd therefore need to end up doing something like (again heavily based on the contents of the README):

    cat <<EOF | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/frontend/instance/0
    # TYPE some_metric gauge
    # HELP some_metric approximate value of pi in the current space time continuum
    some_metric 3.14
    # TYPE another_metric counter
    # HELP another_metric Just an example.
    another_metric 2398
    EOF
    


    And we'd end up with values like this in the pushgateway metrics URL:

    # TYPE some_metric gauge
    some_metric{instance="0",job="frontend"} 3.14
    # HELP another_metric Just an example.
    # TYPE another_metric counter
    another_metric{instance="0",job="frontend"} 2398
    


    A tangible example

    So that's a lot of talking. Let's deploy this in my home lab for something actually useful. The node_exporter does not report any SMART health details for disks, and that's probably a thing I'd want to alert on. So I wrote this simple script:

    #!/bin/bash
    
    hostname=`hostname | cut -f 1 -d "."`
    
    for disk in /dev/sd[a-z]
    do
      disk=`basename $disk`
    
      # Is this a USB thumb drive?
      if [ `/usr/sbin/smartctl -H /dev/$disk | grep -c "Unknown USB bridge"` -gt 0 ]
      then
        result=1
      else
        result=`/usr/sbin/smartctl -H /dev/$disk | grep -c "overall-health self-assessment test result: PASSED"`
      fi
    
      cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/$hostname/instance/$disk
      # TYPE smart_health_passed gauge
      # HELP smart_health_passed whether or not a disk passed a "smartctl -H /dev/sdX"
      smart_health_passed $result
    EOF
    done
    


    Now, that's not perfect and I am sure that I'll re-write this in python later, but it is actually quite useful already. It will report if a SMART health check failed, and now I could write an alerting rule which looks for disks with a health value of 0 and send myself an email to go to the hard disk shop. Once your pushgateways are being scraped by prometheus, you'll end up with something like this in the console:



    I'll explain how to turn this into alerting later.
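    In the meantime, a sketch of what such a rule might look like in the prometheus 1.x rule file syntax -- the threshold and duration here are invented, so tune them for your own lab:

```
ALERT SmartHealthFailed
  IF smart_health_passed == 0
  FOR 10m
  ANNOTATIONS {
    summary = "SMART health failing on {{ $labels.job }}/{{ $labels.instance }}"
  }
```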

    Tags for this post: prometheus monitoring ephemeral_script pushgateway
    Related posts: A pythonic example of recording metrics about ephemeral scripts with prometheus; Basic prometheus setup; Cryptonomicon; The Ghost Brigades (2); The Diamond Age ; Mona Lisa Overdrive

posted at: 20:17 | path: /prometheus | permanent link to this entry


Thu, 26 Jan 2017



Basic prometheus setup

    I've been playing with prometheus for monitoring. It feels quite familiar to me because it's based on an internal google technology called borgmon, but I suspect that means it feels really weird to everyone else.

    The first thing to realize is that everything at google is a web server. Your short lived tool that copies some files around probably runs a web server. All of these web servers have built in URLs which report the progress and status of the task at hand. Prometheus is built to: scrape those web servers; aggregate the data; store the data into a time series database; and then perform dashboarding, trending and alerting on that data.

    The most basic example is to just export metrics for each machine on my home network. This is the easiest first step, because we don't need to build any software to do this. First off, let's install node_exporter on each machine. node_exporter is the tool which runs a web server to export metrics for each node. Everything in prometheus land is written in go, which is new to me. However, it does make running node exporter easy -- just grab the relevant binary from https://prometheus.io/download/, untar, and run. Let's do it in a command line script example thing:

    $ wget https://github.com/prometheus/node_exporter/releases/download/v0.14.0-rc.1/node_exporter-0.14.0-rc.1.linux-386.tar.gz
    $ tar xvzf node_exporter-0.14.0-rc.1.linux-386.tar.gz
    $ cd node_exporter-0.14.0-rc.1.linux-386
    $ ./node_exporter
    


    That's all it takes to run the node_exporter. This runs a web server at port 9100, which exposes the following metrics:

    $ curl -s http://localhost:9100/metrics | grep filesystem_free | grep 'mountpoint="/data"'
    node_filesystem_free{device="/dev/mapper/raidvg-srvlv",fstype="xfs",mountpoint="/data"} 6.811044864e+11
    


    Here you can see that the system I'm running on is exporting a filesystem_free value for the filesystem mounted at /data. There's a lot more than that exported, and I'd encourage you to poke around at that URL a little before continuing on.

    So that's lovely, but we really want to record that over time. So let's assume that you have one of those running on each of your machines, and that you have it setup to start on boot. I'll leave the details of that out of this post, but let's just say I used my existing puppet infrastructure.

    Now we need the central process which collects and records the values. That's the actual prometheus binary. Installation is again trivial:

    $ wget https://github.com/prometheus/prometheus/releases/download/v1.5.0/prometheus-1.5.0.linux-386.tar.gz
    $ tar xvzf prometheus-1.5.0.linux-386.tar.gz
    $ cd prometheus-1.5.0.linux-386
    


    Now we need to move some things around to install this nicely. I did the puppet equivalent of:

    • Moving the prometheus file to /usr/bin
    • Creating an /etc/prometheus directory and moving console_libraries and consoles into it
    • Creating a /etc/prometheus/prometheus.yml config file, more on the contents on this one in a second
    • And creating an empty data directory, in my case at /data/prometheus


    The config file needs to list all of your machines. I am sure this could be generated with puppet templating or something like that, but for now here's my simple hard coded one:

    # my global config
    global:
      scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).
    
      # Attach these labels to any time series or alerts when communicating with
      # external systems (federation, remote storage, Alertmanager).
      external_labels:
          monitor: 'stillhq'
    
    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first.rules"
      # - "second.rules"
    
    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=` to any timeseries scraped from this config.
      - job_name: 'prometheus'
    
        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.
    
        static_configs:
          - targets: ['molokai:9090']
    
      - job_name: 'node'
        static_configs:
          - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']
    


    Here you can see that I want to scrape each of my metrics-exporting web servers every 15 seconds, and I also want to evaluate rules (such as firing alerts) every 15 seconds too. This might not scale if you have bajillions of processes or machines to monitor. I also label all of my values as coming from my domain, so that if I ever aggregate these values with another prometheus from somewhere else the origin will be clear.

    The other interesting bit for now is the scrape configuration. This lists the metrics exporters to monitor. In this case it's prometheus itself (molokai:9090), and then each of my machines in the home lab (molokai, dell, and eeebox -- all on port 9100). Remember, port 9090 is the prometheus binary itself and port 9100 is that node_exporter binary we now have running on all of our machines.

    Now if we start prometheus, it will do its thing. There is some configuration which needs to be passed on the command line here (instead of in the configuration file), so my command line looks like this:

    /usr/bin/prometheus -config.file=/etc/prometheus/prometheus.yml \
        -web.console.libraries=/etc/prometheus/console_libraries \
        -web.console.templates=/etc/prometheus/consoles \
        -storage.local.path=/data/prometheus
    


    Prometheus also presents an interactive user interface on port 9090, which is handy. Here's an example of it graphing the load average on each of my machines (it was something which caused a nice jaggy line):



    You can see here that the user interface has a drop down for selecting values that are known, and that the key at the bottom tells you things about each time series in the graph. So for example, if we added {instance="eeebox:9100"} to the end of the value in the text box at the top, then we'd be filtering for values with that label set, and would as a result only show one value in the graph (the one for eeebox).
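    For example, assuming the one minute load average metric from node_exporter is named node_load1, the filtered expression would be:

```
node_load1{instance="eeebox:9100"}
```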

    If you're interested in very simple dashboarding of basic system metrics, that's actually all you need to do. In my next post about prometheus I'm going to show how to write your own binary which exports values to be graphed. In my case, the temperature outside my house.

    Tags for this post: prometheus monitoring node_exporter
    Related posts: Recording performance information from short lived processes with prometheus; A pythonic example of recording metrics about ephemeral scripts with prometheus; Friday ; The Ghost Brigades (2); Cryptonomicon; Mona Lisa Overdrive

posted at: 21:23 | path: /prometheus | permanent link to this entry


Mon, 23 Jan 2017



Gods of Metal

posted at: 02:38 | path: /book/Eric_Schlosser | permanent link to this entry


Sat, 17 Dec 2016



A Walk in the Woods

posted at: 23:22 | path: /book/Bill_Bryson | permanent link to this entry


Sat, 10 Dec 2016



Leviathan Wakes

posted at: 21:16 | path: /book/James_SA_Corey | permanent link to this entry


Fri, 27 May 2016



Oryx and Crake




    ISBN: 9780385721677
    LibraryThing
    I bought this book ages ago, on the recommendation of a friend (I don't remember who), but I only just got around to reading it. Its a hard book to read in places -- its not hopeful, or particularly fun, and its confronting in places -- especially the plot that revolves around child exploitation. There's very little to like about the future society that Atwood posits here, but perhaps that's the point.

    Despite not being a happy fun story, the book made me think about things like genetic engineering in a way I didn't before and I think that's what Atwood was seeking to achieve. So I'd have to describe the book as a success.

    Tags for this post: book margaret_atwood apocalypse genetic_engineering
    Related posts: Nerilka's Story; The Android's Dream; Dragonquest; Cyteen: The Betrayal; Dragonflight ; The White Dragon


posted at: 03:07 | path: /book/Margaret_Atwood | permanent link to this entry


Sun, 22 May 2016



Potato Point

posted at: 18:21 | path: /diary/pictures/20160523 | permanent link to this entry


Sat, 23 Apr 2016



High Output Management

posted at: 01:30 | path: /book/Andy_Gove | permanent link to this entry