Content here is by Michael Still. All opinions are my own.

Sat, 22 Dec 2012

Image handlers (in essex)

    George asks an interesting question in the comments on my previous post about loop and nbd devices: how does this code behave on essex? I figured the question was worth bringing out into its own post so that it's more visible. I've edited George's question lightly so that this blog post flows reasonably.
    Can you please explain the order (and conditions) in which the three methods are used? In my Essex installation, the "img_handlers" is not defined in nova.conf, so it takes the default value "loop,nbd,guestfs". However, nova is using nbd as the chosen method.
    The handlers will be used in the order specified -- with the caveat that loop doesn't support Copy On Write (COW) images and will therefore be skipped if the libvirt driver is trying to create a COW image. Whether COW images are used is configured with the use_cow_images flag, which defaults to True. So, loop is being skipped because you're probably using COW images.
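    In other words, the selection behaves roughly like this sketch (the function and names here are illustrative, not the actual nova code):

```python
def pick_image_handler(handlers, use_cow):
    # Rough sketch of the essex behaviour described above: loop devices
    # can't back COW (qcow2) images, so "loop" is skipped whenever
    # use_cow_images is enabled. Illustrative only, not nova's code.
    for handler in handlers:
        if handler == "loop" and use_cow:
            continue
        return handler
    return None

# With the default img_handlers and use_cow_images=True, nbd is chosen:
pick_image_handler(["loop", "nbd", "guestfs"], True)  # -> "nbd"
```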
    My ssh keys are obtained by cloud-init, and still whenever I start a new instance I see in the nova-compute.logs this sequence of events:
    qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-0000076d/disk 
    kpartx -a /dev/nbd15 
    mount /dev/mapper/nbd15p1 /tmp/tmpxGBdT0 
    umount /dev/mapper/nbd15p1 
    kpartx -d /dev/nbd15 
    qemu-nbd -d /dev/nbd15 
    I don't understand why the mount of the first partition is necessary, and what happens when the partition is mounted.
    This is a bit harder than the first bit of the question. What I think is happening is that there are files being injected, and that's causing the mount. Just because the admin password isn't being injected doesn't mean that other things aren't still being injected. You'd be able to tell what's happening by grepping your logs for "Injecting .* into image" and seeing what shows up.
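    For example (the log path and exact message format here are my assumptions, so adjust them for your install):

```shell
# Fake up a log line so this example is self-contained; on a real compute
# node you would grep the actual nova-compute log instead.
echo "2012-12-22 INFO nova [-] Injecting key into image 42" > /tmp/sample-nova-compute.log
grep "Injecting .* into image" /tmp/sample-nova-compute.log
```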

    Tags for this post: openstack loop nbd libvirt file_injection rackspace
    Related posts: Some quick operational notes for users of loop and nbd devices; Upgrade problems with the new Fixed IP quota; Michael's surprisingly unreliable predictions for the Havana Nova release; Merged in Havana: configurable iptables drop actions in nova; Nova database continuous integration; Moving on

posted at: 15:51 | path: /openstack | permanent link to this entry

Sat, 15 Dec 2012

The Forever War (again)

posted at: 21:58 | path: /book/Joe_Haldeman | permanent link to this entry

Some quick operational notes for users of loop and nbd devices

posted at: 16:28 | path: /openstack | permanent link to this entry

Thu, 13 Dec 2012


posted at: 18:45 | path: /book/Robert_L_Forward | permanent link to this entry

Sat, 08 Dec 2012

Moving on

    Thursday this week is my last day at Canonical. After a little over a year there, I'm moving on to the private cloud team at Rackspace -- my first day with Rackspace will be the 17th of December. I'm very excited to be joining Rackspace -- I'm excited by the project, the team, and the opportunity to make OpenStack even better. We've also talked about some interesting stuff we'd like to do in the Australian OpenStack community, but I'm going to hold off on talking about that until I've had a chance to settle in.

    I am appreciative of my time at Canonical -- when I joined I was unaware of the existence of OpenStack, and without Canonical I might never have found this awesome project that I love. I also had the chance to work with some really smart people who taught me a lot. This move is about spending more time on OpenStack than Canonical was able to allow.

    Tags for this post: openstack canonical rackspace
    Related posts: Slow git review uploads?; Nova database continuous integration; Taking over a launch pad project; Wow, qemu-img is fast; Announcement video; Folsom Dev Summit sessions

posted at: 12:56 | path: /openstack | permanent link to this entry

Wed, 14 Nov 2012

Fuzzy Nation

posted at: 22:51 | path: /book/John_Scalzi | permanent link to this entry


posted at: 22:43 | path: /book/Suzanne_Collins | permanent link to this entry

Fri, 21 Sep 2012

On conference t-shirts

    Conference t-shirts can't be that hard, right? I certainly don't remember them being difficult when Canberra last hosted the conference in 2005. I was the person who arranged all the swag for that conference, so I should remember. Yet here I am having spent hours on the phone with vendors, and surrounded by discarded sample t-shirts, size charts and colour swatches. What changed?

    The difference between now and then is that in the intervening seven years the Australian Linux community has started to make a real effort to be more inclusive. We have anti-harassment policies, we encourage new speakers, and we're making real efforts to encourage more women into the community. The 2013 conference is making real efforts to be as inclusive as possible -- one of the first roles we allocated was a diversity officer, who is someone active in the geek feminism community. We've had serious discussions about how we can make our event as friendly to all groups as possible, and have some interesting things along those lines to announce soon. We're working hard to make the conference a safe environment for everyone, and will have independent delegate advocates available at all social events, as well as during the conference.

    What I want to specifically talk about here is the conference t-shirts though. We started out with the following criteria -- we wanted to provide a men's cut, and a separate women's cut, because we recognize that unisex t-shirts are not a good solution for most women. We also need a wider than usual size range in those shirts because we have a diverse set of delegates attending our event. We also didn't really want to do black, dark blue, or white shirts -- mostly because those colours are overdone, but also because the conference is in January when the mean temperature is around 30 degrees Celsius.

    Surprisingly, those criteria eliminate the two largest vendors of t-shirts in Australia. Neither Hanes nor Gildan makes any t-shirt that has both men's and women's cuts, in interesting colours and with a large size variety. So we went on the hunt for other manufacturers. However, I'm jumping a little ahead of myself here, so bear with me.

    First off we picked a Hanes shirt because we liked the look of it. We were comfortable with that choice for quite a while before we discovered that the range of colours available in both the men's and women's cut was quite small. Sure, there are heaps of colours in each cut, but the overlapping set of colours is much smaller than it first appears. At this point we knew we needed to find a new vendor.

    The next most obvious choice is Gildan. Gildan does some really nice shirts, and I immediately fell in love with a colour called "charcoal". However, once bitten twice shy, so we ordered some sample t-shirts for my wife and me to try out. I'm glad we did this, because the women's cut was a disaster. First off it didn't fit my wife very well in the size she normally wears, which it turns out is because the lighter cotton style of t-shirt is 10 centimeters smaller horizontally than the thicker cotton version! It got even worse when we washed the shirts and tried them again -- the shirt shrank significantly on first wash. We also noticed something else which had escaped our attention -- the absolute largest size that Gildan did in our chosen style for women was a XXL. Given the sizing ran small, that probably made the largest actual size we could provide a mere XL. That's not good enough.

    Gildan was clearly not going to work for us. I got back on the phone with the supplier who was helping us out and we spent about an hour talking over our requirements and the problems we were seeing with the samples. We even discussed getting a run of custom shirts made overseas and shipped in, but the timing wouldn't work out. They promised to go away and see what other vendors they could find in this space. Luckily for us they came back with a vendor called BizCollection, who do soft cotton shirts in the charcoal colour I like.

    So next we ordered samples of this shirt. It looked good initially -- my shirt fit well, as did my wife's. However, we'd now learnt that testing the shirts through a few wash cycles was useful. So then my wife and I wore the shirts as much as we could for a week, washing them each evening and abusing them in all the ways we could think of -- using the dryer, hanging them outside in the sun, pretty much everything apart from jumping up and down on them. I have to say these shirts have held up well, and we're very happy with them.

    The next step is to go back and order a bunch more sample shirts and make my team wear them. The goal here is to try and validate the size charts that the vendor provides and make sure that we can provide as much advice about fit as possible to delegates. Also, I love a free t-shirt.

    After all this we still recognize that some people will never be happy with the conference's t-shirt. Perhaps they hate the colour or the design, or perhaps they're very tall and every t-shirt is too short for them. So the final thing we're doing is we're giving delegates a choice -- they can select between a t-shirt, a branded cap, or a reusable coffee cup. In this way we don't force delegates to receive something they don't really want and are unlikely to use.

    When you register for the conference, please try to remember that we've put a lot of effort as an organizing team into being as detail oriented as possible with all the little things we think delegates care about. I'm sure we've made some mistakes, but we are volunteers after all who are doing our best. If you do see something you think can be improved I'd ask that you come and speak to us privately first and give us a chance to make it right before you complain in public.

    Thanks for reading my rant about conference t-shirts.

    Tags for this post: conference lca2013 swag t-shirts canonical
    Related posts: Returns to Canberra in 2013; Call for papers opens soon; Got Something to Say? The LCA 2013 CFP Opens Soon!; Announcement video; Are you in a LUG? Do you want some promotional materials for LCA 2013?; The mechanics of bidding for LCA

posted at: 14:42 | path: /conference/lca2013 | permanent link to this entry

Sat, 08 Sep 2012

The Tuloriad

posted at: 22:14 | path: /book/John_Ringo | permanent link to this entry

Wed, 22 Aug 2012

Yellow Eyes

posted at: 20:01 | path: /book/John_Ringo | permanent link to this entry

Sat, 04 Aug 2012

Watch on the Rhine

    ISBN: 9781416521204
    If you knew you were in deep trouble, had the technology to rejuvenate any soldier you wanted, and happened to be a late-nineties Germany desperate for cannon fodder, would you return the SS to service? A harsh reality is that they're some of the only soldiers you have left with real combat experience, even if their politics is abhorrent. This book has an interesting underlying concept, but to a certain extent it's ruined by the politics of the authors -- any concern for anything other than military strength is dismissed as another example of rampant nimbyism. However, the book tells a good story and made me think about some stuff I wouldn't have otherwise thought about, while being entertaining. So, overall a success I guess.

    Tags for this post: book john_ringo tom_kratman aliens combat personal_ai rejuv legacy_of_the_aldenata germany
    Related posts: The Tuloriad; Yellow Eyes; Hell's Faire; When the Devil Dances; Cally's War; A Hymn Before Battle

posted at: 18:13 | path: /book/John_Ringo | permanent link to this entry

Sat, 28 Jul 2012


posted at: 23:34 | path: /book/William_C_Dietz | permanent link to this entry

Tue, 10 Jul 2012

A first pass at glance replication

    A few weeks back I was tasked with turning up a new OpenStack region. This region couldn't share anything with existing regions because the plan was to test pre-release versions of OpenStack there, and if we shared something like glance then we would either have to endanger glance for all regions during testing, or not test glance. However, our users already have a favorite set of images uploaded to glance, and I really wanted to make it as easy as possible for them to use the new region -- I wanted all of their images to magically just appear there. What I needed was some form of glance replication.

    I'd sat in on the glance replication session at the Folsom OpenStack Design Summit. The NeCTAR use case at the bottom is exactly what I wanted, so it's reassuring that other people wanted something like that too. However, no one was working on this feature. So I wrote it. In fact, because of the code review process I wrote it twice, but let's not dwell on that too much.

    So, as of change id I7dabbd6671ec75a0052db58312054f611707bdcf there is a very simple replicator script in glance/bin. It's not perfect, and I expect it will need to be extended a bunch, but it's a start at least and I'm using it in production now, so I am relatively confident it's not totally wrong.

    The replicator supports the following commands at the moment:

    glance-replicator livecopy fromserver:port toserver:port
        Load the contents of one glance instance into another.
        fromserver:port: the location of the master glance instance.
        toserver:port:   the location of the slave glance instance.

    This is the main meat of the replicator. Take a copy of the fromserver, and dump it onto the toserver. Only images visible to the user running the replicator will be copied if you're using Keystone. Only images active on fromserver are copied across. The copy is done "on-the-wire", so there are no large temporary files on the machine running the replicator to clean up.
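    The on-the-wire behaviour amounts to a chunked copy between two streams, along these lines (a toy sketch using file-like objects, not the replicator's actual code):

```python
import io

def stream_copy(src, dst, chunk_size=64 * 1024):
    # Copy src to dst one chunk at a time, so only a single chunk is ever
    # held in memory and no large temporary file is written to disk.
    copied = 0
    while True:
        data = src.read(chunk_size)
        if not data:
            break
        dst.write(data)
        copied += len(data)
    return copied

# In the real replicator the source and destination would be HTTP streams
# to the two glance servers; BytesIO stands in for them here.
stream_copy(io.BytesIO(b"image bits" * 1000), io.BytesIO())
```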

    glance-replicator dump server:port path
        Dump the contents of a glance instance to local disk.
        server:port: the location of the glance instance.
        path:        a directory on disk to contain the data.

    Do the same thing as livecopy, but dump the contents of the glance server to a directory on disk. This includes metadata and image data, and this directory is probably going to be quite large, so be prepared.

    glance-replicator load server:port path
        Load the contents of a local directory into glance.
        server:port: the location of the glance instance.
        path:        a directory on disk containing the data.

    Load a directory created by the dump command into a glance server. dump / load was originally written because I had two glance servers that couldn't talk to each other over the network for policy reasons. However, I could dump the data and move it to the destination network out of band. If you had a very large glance installation and were bringing up a new region at the end of a slow link, then this might be something you'd be interested in.

    glance-replicator compare fromserver:port toserver:port
        Compare the contents of fromserver with those of toserver.
        fromserver:port: the location of the master glance instance.
        toserver:port:   the location of the slave glance instance.

    Wondering what a livecopy would do? The compare command will show you the differences between the two servers, so it's a bit like a dry run of the replication.
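    Conceptually, the comparison is just a diff over image metadata keyed by image id, something like this sketch (illustrative names, not the replicator's internals):

```python
def images_to_copy(master, slave):
    # An image needs copying if it is missing from the slave, or present
    # but with different metadata -- what a livecopy would transfer.
    return sorted(image_id for image_id, meta in master.items()
                  if slave.get(image_id) != meta)

master = {"img-1": {"size": 10}, "img-2": {"size": 20}}
slave = {"img-1": {"size": 10}}
images_to_copy(master, slave)  # -> ["img-2"]
```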

    glance-replicator size server:port
        Determine the size of a glance instance if dumped to disk.
        server:port: the location of the glance instance.

    The size command will tell you how much disk is going to be used by image data in either a dump or a livecopy. It doesn't, however, know about redundancy costs with things like swift, so it just gives you the raw number of bytes that would be written to the destination.

    The glance replicator is very new code, so I wouldn't be too surprised if there are bugs out there or obvious features that are lacking. For example, there is no support for SSL at the moment. Let me know if you have any comments or encounter problems using the replicator.

    Tags for this post: openstack glance replication multi-region canonical
    Related posts: Further adventures with base images in OpenStack; Openstack compute node cleanup; Returns to Canberra in 2013; The next thing; MySQL Users Conference; Reflecting on Essex

posted at: 16:09 | path: /openstack | permanent link to this entry

Sun, 20 May 2012

Got Something to Say? The LCA 2013 CFP Opens Soon!

    The call for presentations opens on 1 June, which is only 11 days away! So if you're thinking of speaking at the conference (a presentation, tutorial, or miniconference), now would be a good time to start thinking about what you're going to say. While you're thinking, please spare a thought for our web team, who are bringing up the entire zookeepr instance so that the CFP will work properly.

    We've been getting heaps of stuff done over the past few months. We've had a "ghosts" meeting (a meeting with former LCA directors), found conference and social venues, and are gearing up for the Call For Presentations.

    We've signed a contract for the keynote venue, which I think you will all really enjoy. We have also locked in our booking for the lecture theatres, which is now working its way through the ANU process. For social events, we've got a great venue for the penguin dinner, and have shortlisted venues for the speakers' dinner and the professional delegates' networking session. We're taking a bit of extra time here because we want venues that are special, and not just the ones which first came to mind.

    The ghosts meeting went really well and I think we learnt some important things. The LCA 2013 team is a bit unusual, because so many of us have been on a LCA core team before, but that gave us a chance to dig into things which deserved more attention and skip over the things which are self-evident. We want to take the opportunity in 2013 to have the most accessible, diverse and technically deep conference that we possibly can, and there was a lot of discussion around those issues. We've also had it drummed into us that communications with delegates is vitally important and you should expect our attempts to communicate to ramp up as the conference approaches.

    I'm really excited about the progress we've made so far, and I feel like we're in a really good state right now. As always, please feel free to contact the LCA2013 team at if you have any questions.

    Tags for this post: conference lca2013 cfp canonical
    Related posts: Call for papers opens soon; Returns to Canberra in 2013; Are you in a LUG? Do you want some promotional materials for LCA 2013?; Announcement video; On conference t-shirts; The next thing

posted at: 20:44 | path: /conference/lca2013 | permanent link to this entry

Thu, 10 May 2012

Catching Fire

posted at: 04:55 | path: /book/Suzanne_Collins | permanent link to this entry

Sun, 22 Apr 2012


posted at: 19:37 | path: /book/Ben_Bova | permanent link to this entry

Mon, 16 Apr 2012

The Hunger Games

posted at: 14:04 | path: /book/Suzanne_Collins | permanent link to this entry

Sat, 14 Apr 2012

The Android's Dream

posted at: 13:48 | path: /book/John_Scalzi | permanent link to this entry

Logos Run

    ISBN: 0441015360
    This is the continuation from Runner, and continues the story of the attempt to re-enable the star gates. It has the comically incompetent Technosociety once again, as well as a series of genetically engineered protagonists. I am bothered by why the star gate power supplies cause people to fall ill -- you'd think in a highly advanced society capable of building star gates they might have spent some time on shielding. Or did the shielding somehow fail on all the power sources sometime over the thousands of years of decay? The book has a disappointing ending, but was a fun read until then. I find it hard to suspend disbelief about how the AIs present themselves, but apart from that the book was solid. This one is probably not as good as the first.

    Tags for this post: book william_c_dietz religion combat space_travel decay courier engineered_human genetic_engineering runner_series
    Related posts: Runner; The Accidental Time Machine ; Rendezvous With Rama; Halo: The Flood; Friday ; The Sagan Diary

posted at: 13:45 | path: /book/William_C_Dietz | permanent link to this entry

Tue, 10 Apr 2012

Folsom Dev Summit sessions

    I thought I should write up the dev summit sessions I am hosting now that the program is starting to look solid. This is mostly for my own benefit, so I have a solid understanding of where to start these sessions off. Both are short brainstorm sessions, so I am not intending to produce slide decks or anything like that. I just want to make sure there is something to kick discussion off.

    Image caching, where to from here (nova hypervisors)

    As of essex, libvirt has an image cache to speed the startup of new instances. This cache stores images fetched directly from glance, as well as resized versions of those images. There is a periodic task which cleans up images in the cache which are no longer needed. The periodic task can also optionally detect images which have become corrupted on disk.
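    The cleanup behaves roughly like this simplified sketch (not nova's actual code; the real task also handles resized variants and the optional corruption checks):

```python
import os
import time

def clean_image_cache(cache_dir, in_use, max_age=24 * 60 * 60):
    # Remove cached images that no running instance references and that
    # haven't been touched within max_age seconds. Simplified sketch only.
    removed = []
    now = time.time()
    for name in sorted(os.listdir(cache_dir)):
        if name in in_use:
            continue
        path = os.path.join(cache_dir, name)
        if now - os.path.getmtime(path) > max_age:
            os.remove(path)
            removed.append(name)
    return removed
```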

    So first off, do we want to implement this for other hypervisors as well? As mentioned in a recent blog post I'd like to see the image cache manager become common code and have all the hypervisors deal with this in exactly the same manner -- that makes it easier to document, and means that on-call operations people don't need to determine what hypervisor a compute node is running before starting to debug. However, that requires the other hypervisor implementations to change how they stage images for instance startup, and I think it bears further discussion.

    Additionally, the blueprint proposed that popular / strategic images could be pre-cached on compute nodes. Is this something we still want to do? What factors do we want to use for the reference implementation? I have a few ideas here that are listed in the blueprint, but most of them require talking to glance to implement. There is some hesitance in adding glance calls to a periodic task, because in a keystone'd implementation that would require an admin token in the nova configuration file. Is there a better way to do this, or is it ok to rely on glance in a periodic task?

    Ops pain points (nova other)

    Apart from my own ideas (better instance logging for example), I'm very interested in hearing from other people about what we can do to make nova easier for ops people to run. This is especially true for relatively easy to implement things we can get done in Folsom. The blueprint for deployer friendly configuration files is a good example of a change which doesn't look too hard to implement, but that would make the world a better place for opsen. There are many other examples of blueprints in this space.

    What else can we be doing to make life better for opsen? I'm especially interested in getting people who actually run openstack in the wild into the room to tell us what is painful for them at the moment.

    Tags for this post: openstack canonical folsom image_cache_management sre
    Related posts: Reflecting on Essex; Further adventures with base images in OpenStack; Openstack compute node cleanup; A first pass at glance replication; Conference Wireless not working yet?; Managing MySQL the Slack Way: How Google Deploys New MySQL Servers

posted at: 17:25 | path: /openstack | permanent link to this entry

Thu, 05 Apr 2012

Reflecting on Essex

    This post is kind of long, and a little self indulgent. However, I really wanted to spend some time thinking about what I did for the Essex release cycle, and what I want to do for the Folsom release. I spent Essex mostly hacking on things in isolation, except for when Padraig Brady and I were hacking in a similar space. I'd like to collaborate more for Folsom, and I'm hoping talking about what I'm interested in doing in public might help with that.

    I came relatively late to the Essex development cycle, having never even heard of OpenStack before joining Canonical. We can talk about how I'd worked in the cloud space for six years and yet wasn't aware of the open source implementations at some other time.

    My initial introduction to OpenStack was being paged for compute nodes which were continually running out of disk. I googled around a bit and discovered that cached images for instances were never cleaned up (to start an instance, an image is fetched from glance, possibly converted to another format, and resized, and then the instance is started with the resulting image; none of those images were ever cleaned up). I filed bug 904532 as my absolute first interaction with the OpenStack community. Scott Moser kindly pointed me at the blueprint for how to actually fix the problem.

    (Remind me, if Phil Day comes to the OpenStack developer summit, that I should sit down with him at some point and see how close what was actually implemented got to what he wrote in that blueprint. I suspect we've still got a fair way to go, but I'll talk more about that later in this post.)

    This was a pivotal moment. I'd just spent the last six years writing python code to manage largish cloud clusters, and here was a bug which was hurting me in a python package intended to manage clusters very similar to those I had been running. I should just fix the bug, right?

    It turns out that the OpenStack core developers are super easy to work with. I'd say that the code review process certainly feels like it was modelled on Google's, but in general the code reviewers are nicer with their comments than what I'm used to. This makes it much easier to motivate yourself to go and spend some more time hacking than a deeply negative review would. I think Vish is especially worthy of a shout out as being an amazing person to work with. He's helpful, patient, and very smart.

    In the end I wrote the image cache manager which ships in Essex. It's not perfect, but it's a lot better than what came before, and it's a good basis to build on. There is some remaining tech debt for image cache management which I intend to work on for Folsom. First off, the image cache only works for libvirt instances at the moment. I'd like to pull all the other hypervisors into line as best as possible. There are hooks in the virtualization driver for this, but no one has started this work as far as I am aware. To be completely honest I'd like to see the image cache manager become common code and have all the hypervisors deal with this in exactly the same manner -- that makes it easier to document, and means that on-call operations people don't need to determine what hypervisor a compute node is running before starting to debug. This is something I very much want to sit down with other nova developers and talk about at the summit.

    The next step for image cache management is tracked in a very bare bones blueprint. The original blueprint envisaged that it would be desirable to pre-cache some images on all nodes. For example, a cloud host might want to offer slightly faster startup times for some images by ensuring they are pre-cached. I've been thinking about this a lot, and I can see other use cases here as well. For example, if you have mission critical instances and you wanted to tolerate a glance failure, then perhaps you want to pre-cache a class of images that serve those mission critical instances. The intention is to provide an interface and default implementation for the pre-caching logic, and then let users go wild working out their own requirements.

    The hardest bit of the pre-caching will be reducing the interactions with glance, I suspect. The current feeling is that calling glance from a periodic task is a bit scary, and has been actively avoided for Essex. This is especially true if Keystone is enabled, as the periodic task won't have an admin context unless we pull that from the config file. However, if you're trying to determine what images are mission critical, then you really need to talk to glance. I guess another option would be to have a table of such things in nova's database, but that feels wrong to me. We're going to have to talk about this a bit more.

    (It would also be interesting to talk about the relative priority of instances. If a cluster is experiencing outages, then perhaps some customers would pay more to have their instances be the last killed off or something. Or perhaps I have instances which are less critical than others, so I want the cluster to degrade in an understood manner.)

    That leads logically onto a scheduler change I would like to see. If I have a set of compute nodes I know already have the image for a given instance, shouldn't I prefer to start instances on those nodes instead of fetching the image to yet more compute nodes? In fact, if I already have a correctly resized COW base image for an instance on a given node, then it would make sense to run a new instance on that node as well. We need to be careful here, because you wouldn't want to run all of a given class of instance on a small set of compute nodes, but if the image was something like a default Ubuntu image, then it would make sense. I'd be interested in hearing what other people think of doing something like this.
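    To make the idea concrete, a cache-aware preference could be as simple as this hypothetical weighting (a real scheduler would blend in load, capacity, and spread so one image class doesn't pile onto a few nodes):

```python
def order_hosts_by_cache(hosts, image_id, cache_map):
    # Sort candidate compute nodes so those which already hold the image
    # in their cache come first; cache_map maps host -> cached image ids.
    # Ties are broken by host name so the ordering is stable.
    return sorted(hosts,
                  key=lambda host: (image_id not in cache_map.get(host, set()),
                                    host))

cache_map = {"node1": {"ubuntu"}, "node2": set(), "node3": {"ubuntu"}}
order_hosts_by_cache(["node2", "node3", "node1"], "ubuntu", cache_map)
# -> ["node1", "node3", "node2"]
```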

    Another thing I've tried to focus on for Essex is making OpenStack easier for operators to run. That started off relatively simply, by adding an option for log messages to specify what instance a message relates to. This means that when a user queries the state of their instance, the admin can now just grep for the instance UUID, and run from there. It's not perfect yet, in that not all messages use this functionality, but that's some tech debt that I will take on in Folsom. If you're a nova developer, then please pass instance= in your log messages where relevant!
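    The effect is roughly this toy version (nova's actual implementation formats the full instance dict; the names here are illustrative):

```python
import logging

LOG = logging.getLogger("nova.compute")

def log_for_instance(instance, msg):
    # Prefix each message with the instance uuid so an admin can grep a
    # single instance's lifecycle out of a busy compute log.
    LOG.info("[instance: %s] %s", instance.get("uuid", "-"), msg)

logging.basicConfig(level=logging.INFO, format="%(message)s")
log_for_instance({"uuid": "0000076d"}, "Rebooting instance")
```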

    This logging functionality isn't perfect, because if you only have the instance UUID in the method you're writing, it won't work. It expects full instance dicts because of the way the formatting code works. This is kind of ironic in that the default logging format only includes the UUID. In Folsom I'll also extend this code so that the right thing happens with UUIDs as well.

    Another simple logging tweak I wrote is that tracebacks now have the time and instance included in them. This makes it much easier for admins to determine the context of a traceback in their logs. It should be noted that both of these changes were relatively trivial, but trivial things can often make it much easier for others.

    There are two sessions at the Folsom dev summit talking about how to make OpenStack easier for operators to run. One is from me, and the other is from Duncan McGreggor. Neither has been accepted yet, but if I notice that Duncan's is accepted I'll drop mine. I'm very very interested in what operations staff feel is currently painful, because having something which is easy to scale and manage is vital to adoption. This is also the core of what I did at Google, and I feel I can make a real contribution here.

    I know I've come relatively late to the OpenStack party, but there's heaps more to do here and I'm super enthused to be working on code that I can finally show people again.

    Tags for this post: openstack canonical essex folsom image_cache_management sre
    Related posts: Folsom Dev Summit sessions; Openstack compute node cleanup; Further adventures with base images in OpenStack; Announcement video; Got Something to Say? The LCA 2013 CFP Opens Soon!; On conference t-shirts

posted at: 18:19 | path: /openstack | permanent link to this entry

Tue, 03 Apr 2012

Blathering for Tuesday, 03 April 2012

posted at: 06:43 | path: /blather | permanent link to this entry

Mon, 02 Apr 2012

Call for papers opens soon

    It's time to start thinking about your talk proposals, because the call for papers is only eight weeks away!

    For the 2013 conference, the papers committee are going to be focusing on deep technical content, and things we think are going to really matter in the future -- that might range from freedom and privacy, to open source cloud systems, to energy efficient server farms of the future. However, the conference is to a large extent what the speakers make it -- if we receive many excellent submissions on a topic, then it's sure to be represented at the conference.

    The papers committee will be headed by the able combination of Michael Davies and Mary Gardiner, who have done an excellent job in previous years. They're currently working through the details of the call for papers announcement. I am telling you this now because I want speakers to have plenty of time to prepare for the submissions process, as I think that will produce the highest quality of submissions.

    I also wanted to let you know the organising for 2013 is progressing well. We're currently in the process of locking in all of our venue arrangements, so we will have some announcements about that soon. We've received our first venue contract to sign, which is for the keynote venue. It's exciting, but at the same time a good reminder that the conference is a big responsibility.

    What would you like to see at the conference? I am sure there are things which are topical which I haven't thought of. Blog or tweet your thoughts (include the hashtag #lca2013 please), or email us at

    Tags for this post: conference lca2013 cfp canonical
    Related posts: Got Something to Say? The LCA 2013 CFP Opens Soon!; Are you in a LUG? Do you want some promotional materials for LCA 2013?; Announcement video; On conference t-shirts; Returns to Canberra in 2013; Yet more lca2013 setup

posted at: 20:45 | path: /conference/lca2013 | permanent link to this entry

Memorial service details

    This is what will be published in the paper on Wednesday this week:

    Robyn Barbara Boland
    24 April 1948 - 30 March 2012

    Dearly loved and cherished mother of
    Catherine and Michael, Emily and Justin
    Jonathan and Lynley, and Allister.
    Proud Ma of Andrew and Matthew.

    Robyn took Jesus' hand and
    walked peacefully
    into her Heavenly Father's arms.
    She was a friend to all who met her.
    Robyn will be deeply missed.

    A celebration of Robyn's life will be held
    at Woden Valley Alliance Church,
    81 Namatjira Drive, Waramanga on
    Tuesday, 10 April 2012 commencing at 1pm.

    Tags for this post: health robyn liver funeral
    Related posts: Continued improvement; Bigger improvements; RIP Robyn Boland; Weekend update; More on Robyn; Small improvements

posted at: 00:41 | path: /health/robyn | permanent link to this entry

Fri, 30 Mar 2012

RIP Robyn Boland

posted at: 04:01 | path: /health/robyn | permanent link to this entry

Tue, 27 Mar 2012

Update on Robyn from Catherine

    I apologize if there are factual inaccuracies in this post. It has been written with the best information I have available at the time.

    Cat sent this update out to robyn-discuss last night, but I am reposting it here for those who aren't on the mailing list.

      Mum is stable condition with no real change in her condition.
      She spends some time awake but is still hard to wake.
      Thank you all for your thoughts and prayers we really
      appreciate them. Below are some specific prayer requests:
      - Mum will spend more time awake (so they can safely remove her
        breathing tube which is currently there just in case) 
      - Mum's kidney function will improve (they are currently taking
        a hammering due to taking over from the liver) she is
        currently on dialysis.
      - her liver function will improve so that she has a chance to
        recover either fully or at least enough to have a transplant.

    Tags for this post: health robyn liver sydney
    Related posts: RIP Robyn Boland; Weekend update; Bigger improvements; Continued improvement; A further update on Robyn's health; Small improvements

posted at: 17:40 | path: /health/robyn | permanent link to this entry

Blathering for Wednesday, 28 March 2012

    16:39: Mikal shared: Death of a data haven: cypherpunks, WikiLeaks, and the world's smallest nation
      A data haven is "the information equivalent to a tax haven," a country that helps you evade other countries' rules on what you can and can't do with your bits. (Think "Swiss banking" for data.) The best-known example comes from Neal Stephenson's 1999 best-seller Cryptonomicon, whose heroes go up against murderous warlords, rapacious venture capitalists, and epic authorial digressions in their quest to bring untraceable communications to the masses and get rich in the process.

    Tags for this post: blather

posted at: 07:03 | path: /blather | permanent link to this entry

Sun, 25 Mar 2012

Weekend update

    I apologize if there are factual inaccuracies in this post. It has been written with the best information I have available at the time.

    Robyn's blood test results are showing a slight decline in both liver and kidney function. She was awake for slightly longer periods this morning, but was back to being really sleepy this afternoon. Based on her ability to stay awake this morning they were talking about removing her breathing tube.

    Robyn is still breathing on her own, but they want the breathing tube to stay in place until she is more conscious and able to be roused. Disappointingly, this afternoon she was back to being mostly unresponsive. Overall her condition is stable, but she has a long way to go.

    Tags for this post: health robyn liver sydney
    Related posts: RIP Robyn Boland; Bigger improvements; Continued improvement; A further update on Robyn's health; Update on Robyn from Catherine; Small improvements

posted at: 03:13 | path: /health/robyn | permanent link to this entry

Wed, 21 Mar 2012

Continued improvement

posted at: 17:04 | path: /health/robyn | permanent link to this entry

Tue, 20 Mar 2012

Bigger improvements

    I apologize if there are factual inaccuracies in this post. It has been written with the best information I have available at the time.

    Last night Robyn went for a CAT scan, which involved detaching her from dialysis. When she got back from the scan they didn't bother to connect her back up, and her kidneys seem to be coping on their own now. Since this morning she has also been breathing without assistance, and she continues to respond to input, which is also encouraging.

    Next steps are that Robyn needs to increase her blood pressure, her kidney function needs to improve, and her kidneys need to start producing urine again. The kids also need to decide at what point they're going to go back home, which is hard for them at the moment as they're all pretty tired. It seems that discussion will happen tomorrow sometime.

    Tags for this post: health robyn liver sydney
    Related posts: A further update on Robyn's health; Update on Robyn from Catherine; Small improvements; Robyn's Health; More on Robyn; RIP Robyn Boland

posted at: 19:48 | path: /health/robyn | permanent link to this entry

Mon, 19 Mar 2012

Small improvements

    I apologize if there are factual inaccuracies in this post. It has been written with the best information I have available at the time.

    I've just been told that Robyn has opened her eyes briefly and is responding more concretely to input than she was previously. Specifically she is squeezing people's hands in response to questions and moving her head around. This is more communicative than she's been for the last few days, so it seems small but it is still a move in the right direction.

    Tags for this post: health robyn liver sydney
    Related posts: Continued improvement; Bigger improvements; RIP Robyn Boland; Weekend update; More on Robyn; Robyn's Health

posted at: 17:14 | path: /health/robyn | permanent link to this entry

Sun, 18 Mar 2012

More on Robyn

posted at: 19:44 | path: /health/robyn | permanent link to this entry

Rage of a Demon King

posted at: 15:58 | path: /book/Raymond_E_Feist | permanent link to this entry

A further update on Robyn's health

    I apologize if there are factual inaccuracies in this post. It has been written with the best information I have available at the time.

    Here's a status update on what I know about Robyn's condition. She has been in the ICU at RPA in Sydney since about noon yesterday. All of her children made it up to see her, although she has been heavily sedated in the ICU and cannot respond to conversation with the kids. As part of the ICU process they have surgically implanted a series of devices, including dialysis, breathing assistance, and a feeding tube. My understanding is that the sedation is mostly about reducing panic about these devices.

    As best as I am aware the transplant meeting is still going ahead today. The doctors yesterday were saying that the battle isn't over, so that was at least slightly reassuring. Apparently it isn't unexpected for someone awaiting a liver transplant to have sudden flare ups.

    I have been getting a lot of requests for information, which is nice in that it's obvious that lots of people care deeply about Robyn. It's amazing how many people she has touched in her life. I have therefore created a mailing list I will send these updates to, for those who don't want to check here regularly. You can find details for the list here.

    Tags for this post: health robyn liver sydney
    Related posts: Robyn's Health; Small improvements; Update on Robyn from Catherine; More on Robyn; Weekend update; RIP Robyn Boland

posted at: 13:39 | path: /health/robyn | permanent link to this entry

Sat, 17 Mar 2012

Robyn's Health

    I apologize if there are factual inaccuracies in this post. It has been written with the best information I have available at the time.

    My mother in law, Robyn Boland, has had severe liver problems for a long time, and has been in and out of hospital in Canberra for at least a year. This was one of the major factors in being unable to move to Sydney for Google, and was one of the reasons I was ultimately laid off by them. However, Robyn's condition has been getting worse recently and she was transferred to Sydney about two weeks ago, because RPA is the regional specialist hospital for liver problems.

    On Friday I got a call from the resident overseeing Robyn's care. I was told that her liver had basically failed, and that at best it only had months of function remaining. The liver failure has resulted in the kidneys having to do more work than normal, and they are now failing under the additional workload. The doctors wanted to put Robyn on the transplant waiting list, which we of course agreed to.

    However, this morning we got a call that Robyn is now much worse, and has been moved into intensive care after requiring a crash cart to resuscitate. Emily and Justin were already at the hospital, but the other kids are now flying to Sydney as rapidly as they can to be there. I am staying home for now to look after the kids. We obviously don't know what the likely outcome is at this stage, but things are looking pretty grim to be honest.

    So, I'm pretty distracted at the moment. If you've emailed me about conference stuff or anything else, I apologize and will work through the mail backlog as soon as I can.

    Tags for this post: health robyn liver sydney
    Related posts: More on Robyn; Small improvements; A further update on Robyn's health; Update on Robyn from Catherine; Continued improvement; Bigger improvements

posted at: 16:24 | path: /health/robyn | permanent link to this entry

Thu, 15 Mar 2012

It seems stickers are a gas

posted at: 16:42 | path: /conference/lca2013 | permanent link to this entry

Wed, 14 Mar 2012

Blathering for Wednesday, 14 March 2012

    09:32: Mikal shared: Why I left Google
      An interesting take on the cultural changes happening at Google at the moment.

    Tags for this post: blather

posted at: 05:39 | path: /blather | permanent link to this entry

Wed, 07 Mar 2012

Blathering for Thursday, 08 March 2012

posted at: 15:16 | path: /blather | permanent link to this entry

Mon, 05 Mar 2012

Blathering for Tuesday, 06 March 2012

posted at: 15:08 | path: /blather | permanent link to this entry

Sun, 26 Feb 2012


posted at: 01:21 | path: /book/William_C_Dietz | permanent link to this entry

Fri, 10 Feb 2012

Red Storm Rising

    ISBN: 0006173624
    HarperCollins Publishers Ltd (1988), Paperback, 832 pages
    I had read this book many years ago, and remembered it fondly. I wasn't disappointed reading it again -- it's certainly a classic techno-thriller, even if it is a little dated now. I imagine it would make less sense to someone who hadn't grown up with the cold war, but within that context it's a good read. The worst bit is that, given what we knew back then, it is so completely plausible. Great book.

    Tags for this post: book tom_clancy combat communism thriller
    Related posts: The Road to Damascus

posted at: 18:13 | path: /book/Tom_Clancy | permanent link to this entry

Sat, 04 Feb 2012

The next thing

    It has been a couple of months, so I feel that perhaps it's time I mentioned more publicly where I ended up after Google. I am now a systems administrator at Canonical, the makers of Ubuntu. That's a pretty good fit for me in the sense that I have been an Ubuntu user for a very long time. For reference, I don't love the job title "systems administrator" because it doesn't really match what I do, but it isn't something I'm fixated on.

    One of the things I help manage at Canonical is our Openstack infrastructure. Along the way I've been finding a few things there I think can be improved, which is why I've been hacking on Openstack in my spare time. I've had a couple of patches merged already, and am generally having fun contributing to an open source project which I think stands a very good chance of being the Apache of cloud management. It is a lot like the stuff I was doing at Google in the sense that I like working on things which I think will affect the quality of life for a large number of people, and Openstack is clearly in that space.

    Canonical is also much more open about contributing to open source projects than Google was, so expect me to be able to talk more about what I do in my work life than I did before. I think it's already noticeable that I am blogging more than I did during my six years at the big G.

    Tags for this post: work canonical google
    Related posts: Further adventures with base images in OpenStack; A first pass at glance replication; Reflecting on Essex; Returns to Canberra in 2013; Openstack compute node cleanup; Are you in a LUG? Do you want some promotional materials for LCA 2013?

posted at: 15:50 | path: /work | permanent link to this entry

Rise of a Merchant Prince

posted at: 15:30 | path: /book/Raymond_E_Feist | permanent link to this entry

An update on Catherine's health

    In the last week we've now seen the two specialists we needed to see to learn more about Catherine's pituitary adenoma. The first was the ophthalmologist, who kindly saw us at very short notice. Even better, he's Andrew Tridgell's brother and a lovely guy. He did a great job of answering our questions and generally reassuring us, and the short of it is that Catherine's vision is not currently disturbed, and barring another hemorrhage or a significant growth in the tumor it shouldn't be. He of course couldn't rule these things out, but that's because all things are possible even if they are unlikely. The ongoing strategy appears to be a series of MRIs and visual field studies every six months or so for the foreseeable future. The only real wart is that if there is a hemorrhage, which is something we can't control, the prognosis could change rapidly for the worse with very little warning. There is evidence of a previous hemorrhage on the MRI.

    The second specialist was the endocrinologist, who we saw on Monday in Sydney. Again he was a lovely guy and put up with our two pages of questions. As best as he can tell the adenoma is not cancer, but he's not sure whether it is functional (controlling the level of prolactin in Catherine's body). The first steps are that he's going to take the MRI films to a radiologist specializing in cranial scans, and he has put Catherine on a drug which should control her prolactin levels. Then there will be a blood test in a month to see if the drug is working, and we'll take it from there. He also talked about the possibility that this whole thing is related to Catherine's sarcoidosis from a decade ago, but he thinks that only a biopsy of the tumor will confirm that. I feel that if they're going to do brain surgery for a biopsy they may as well just take the darn thing out while they're there, but we'll have that argument when we get there.

    So, overall not as horribly bad as it could have been. There are still risks if there is a hemorrhage, and it's possible that we'll end up seeing a neurosurgeon to have the tumor removed, but we'll cross those bridges when we come to them. The next step is either that the radiologist will see something on the MRI that he thinks needs more information, or that Catherine will have a blood test in a month. We'll keep you posted.

    Tags for this post: health catherine brain tumor pituitary adenoma
    Related posts: It hasn't been a very good week; A year of being more active; Continued improvement; Headache; Recumbent bikes; Bigger improvements

posted at: 02:02 | path: /health/catherine | permanent link to this entry

Fri, 03 Feb 2012

Wow, qemu-img is fast

    I wanted to determine if it's worth putting ephemeral images into the libvirt cache at all. How expensive are these images to create? They don't need to come from the image service, so it can't be too bad, right? It turns out that qemu-img is very, very fast at creating these images, based on the very small data set of my laptop with an ext4 file system...

      mikal@x220:/data/temp$ time qemu-img create -f raw disk 10g
      Formatting 'disk', fmt=raw size=10737418240 
      real	0m0.315s
      user	0m0.000s
      sys	0m0.004s
      mikal@x220:/data/temp$ time qemu-img create -f raw disk 100g
      Formatting 'disk', fmt=raw size=107374182400 
      real	0m0.004s
      user	0m0.000s
      sys	0m0.000s

    Perhaps this is because I am using ext4, which does funky things with extents when allocating blocks. However, the only ext3 file system I could find at my place is my off-site backup disks, which are attached via USB3 instead of the SATA2 my laptop uses. Here are the numbers from there:

      $ time qemu-img create -f raw disk 100g
      Formatting 'disk', fmt=raw size=107374182400 
      real	0m0.055s
      user	0m0.000s
      sys	0m0.004s

    So still very, very fast. Perhaps it's the mkfs that's slow? Here's a run creating an ext4 file system inside that 100gb file I just made on my laptop:

      $ time mkfs.ext4 disk 
      mke2fs 1.41.14 (22-Dec-2010)
      disk is not a block special device.
      Proceed anyway? (y,n) y
      warning: Unable to get device geometry for disk
      Filesystem label=
      OS type: Linux
      Block size=4096 (log=2)
      Fragment size=4096 (log=2)
      Stride=0 blocks, Stripe width=0 blocks
      6553600 inodes, 26214400 blocks
      1310720 blocks (5.00%) reserved for the super user
      First data block=0
      Maximum filesystem blocks=0
      800 block groups
      32768 blocks per group, 32768 fragments per group
      8192 inodes per group
      Superblock backups stored on blocks: 
      	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
      	4096000, 7962624, 11239424, 20480000, 23887872
      Writing inode tables: done                            
      Creating journal (32768 blocks): done
      Writing superblocks and filesystem accounting information: done
      This filesystem will be automatically checked every 36 mounts or
      180 days, whichever comes first.  Use tune2fs -c or -i to override.
      real	0m4.083s
      user	0m0.096s
      sys	0m0.136s

    That time includes the time it took me to hit the 'y' key, as I couldn't immediately find a flag to stop prompting.

    In conclusion, there is nothing slow here. I don't see why we'd want to cache ephemeral disks and use copy on write for them at all. It's very cheap to just create a new one each time, and it makes the code much simpler.
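    The timings above make sense if creating a raw image is essentially just allocating a sparse file: the filesystem records the apparent size without writing any data blocks. A quick check of that theory (my sketch in Python, nothing to do with the OpenStack code itself; assumes a filesystem that supports sparse files, which ext3/ext4 do):

```python
import os
import tempfile

# Create a "10G" file the sparse way: record the size via truncate
# without writing any data blocks -- this is effectively instant.
path = os.path.join(tempfile.mkdtemp(), "disk")
with open(path, "wb") as f:
    f.truncate(10 * 1024 ** 3)

st = os.stat(path)
print(st.st_size)          # 10737418240 -- the apparent size
print(st.st_blocks * 512)  # tiny -- the space actually allocated
```

    The apparent size is 10GB but almost no blocks are allocated, which is why both qemu-img create runs above finish in milliseconds regardless of the requested size.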

    Tags for this post: openstack qemu ephemeral mkfs swap speed canonical
    Related posts: Further adventures with base images in OpenStack; Openstack compute node cleanup; Large inodes = faster samba; Speed limit; The next thing; Reflecting on Essex

posted at: 17:16 | path: /openstack | permanent link to this entry

Thu, 02 Feb 2012

Slow git review uploads?

posted at: 16:53 | path: /openstack | permanent link to this entry

Tue, 31 Jan 2012

Announcement video

posted at: 15:04 | path: /conference/lca2013 | permanent link to this entry

Wed, 25 Jan 2012

Shadow of a Dark Queen

posted at: 22:26 | path: /book/Raymond_E_Feist | permanent link to this entry

Tue, 24 Jan 2012

Are you in a LUG? Do you want some promotional materials for LCA 2013?

posted at: 22:24 | path: /conference/lca2013 | permanent link to this entry

Sat, 21 Jan 2012

Returns to Canberra in 2013

posted at: 03:10 | path: /conference/lca2013 | permanent link to this entry

Fri, 06 Jan 2012

It hasn't been a very good week

    This week has presented me with a few learning opportunities. Catherine and I are expecting to get a fair few questions about the week, so we thought we'd try and write it up here. That way we can tell people something that's consistent and complete, without having to type the same thing out 200 times. I also think that this topic deserves more space than twitter will allow.

    On Wednesday Catherine was told she probably has a brain tumor, and to get an MRI immediately. This was obviously pretty upsetting, and if I've been irritable at you this week that's why and I apologize. Neither of us are medical professionals, and we didn't really know what this meant. Catherine was told that the tumor was "almost certainly" benign, but that wasn't all that reassuring.

    Catherine had her MRI the next day. It sounds like a pretty unpleasant process -- your head is clamped into position and an IV fitted, and then you're left in a room which makes the surgical metal in your lower spine feel hot for 40 minutes. Did I mention they clamp your head so you can't escape? Another irritation is that Medicare doesn't cover this MRI at all. So, you take people who have been told they have a brain tumor, and then you tell them that the government doesn't care enough about them to pay for what is considered the best diagnostic for their condition. Better than that, we rang our private insurer, and they told us that Medicare also forbids them to cover it. So, you're out of pocket at least $400.

    The MRI report says this: On the right side of the anterior pituitary, there is a hyperintense lesion measuring 9 x 9 x 10mm (T x CC x AP). There is a fluid/fluid level with no definite enhancement of the lesion following contrast injection. The pituitary stalk is minimally bowed to the left. These appearances are in keeping with haemorrhage into a pituitary adenoma.

    The first piece of information we had was this paragraph from the MRI company. The GP gets this information about 12 hours before the patient, but our GP was so busy she hadn't read it by the time we did. We saw this about 8pm on Wednesday night, and of course immediately started web searching for the terms in the description. "Hyperintense" for example means "bright white on the MRI", which I believe to be a measure of density of the tumor.

    Other things we learned include that the pituitary is the gland which moderates the behavior of various elements of the endocrine system, including reproductive hormones. Technically, the pituitary is not part of the brain, but is attached very closely to it.

    We saw the GP the next morning (yesterday), and it was mostly reassuring. The tumor is almost certainly not cancer -- I didn't even know there were non-cancerous tumors before yesterday. However, the tumor is affecting Catherine's reproductive hormones, and she is probably sterile for the period the tumor is present. The tumor might also get larger, and if it does it could impact on her optic nerves (which run to either side of the tumor) and that might result in varying levels of vision problems right up to blindness.

    It sounds like there are a few courses of action available. Regular MRIs can monitor the state of the tumor. Surgery to remove it is an option, which is more of an issue if you care about having more children or are suffering from vision disturbances. There are also radiotherapy and drug options, but we haven't really had those explained to us yet.

    The next steps are for Catherine to see an endocrinologist to see what he thinks about the MRI. Apparently there is a huge waiting list for those in Canberra, so it will mean a trip to Sydney at the end of the month. She also needs to have her vision tested. There's a huge waiting list for that in Canberra too, but the specialist she has been referred to does waiting list triage, so there is some hope that it won't be too long. We'll know more about that next month.

    On a personal note, one of the other things that the last six months has taught me is that I'm not very good at talking about things which are really upsetting me -- our builder going bankrupt leaving us with an unfinished house, my mother in law's ailing health, getting made redundant by Google and this tumor incident being four examples from the last six months. I find I cope much better with these things if I have a chance to internalize them first before I talk to heaps of people about them. So, if I appear standoffish, that's why.

    I think it's fair for people to have questions about this post, but please remember that we're not experts and we've tried to include everything we know in this post already.

    Tags for this post: health catherine brain tumor pituitary adenoma mri
    Related posts: An update on Catherine's health; Continued improvement; Headache; A year of being more active; Recumbent bikes; Bigger improvements

posted at: 16:59 | path: /health/catherine | permanent link to this entry

Tue, 03 Jan 2012

Blathering for Wednesday, 04 January 2012

    18:07: Mikal shared: TVs are all awful
      How motivational. A funny read if you care about video.

    Tags for this post: blather

posted at: 05:00 | path: /blather | permanent link to this entry

It's a good sign that they're already making fun of me, right?

    So, today on IRC...

      16:07 <mikal> So, breakfast catering at the student accommodation... Will there be bacon?
      16:07 <ctudball> mikal: You have my permission to riot if there is no bacon.
      16:07 <mikal> Yay!
      16:07 <mikal> Real coffee?
      16:08 <ctudball> mikal: No.
      16:08 <mikal> !
      16:10 <mikal> I can still add breakfast to my rego, right?
      16:10 <mikal> I'll just fill a sock with $18 worth of bacon each morning
      16:11 <ctudball> mikal: You can! 
      16:11 <mikal> Ok, that's official authorization for Operation Bacon Sock
      16:11 <mikal> If anyone complains, I am showing them a lightly edited version of this IRC log

    Which somehow became this.

    Tags for this post: conference lca2012 bacon irc
    Related posts: What do you do on days as a bachelor in a strange country?; The greatest IRC chat evar!; LCA 2012: Ballarat

posted at: 02:32 | path: /conference/lca2012 | permanent link to this entry

Mon, 02 Jan 2012

Further adventures with base images in OpenStack

    I was bored over the New Year's weekend, so I figured I'd have a go at implementing image cache management as discussed previously. I actually have an implementation of about 75% of that blueprint now, but it's not ready for prime time yet. The point of this post is more to document some stuff I learnt about VM startup along the way so I don't forget it later.

    So, you want to start a VM on a compute node. Once the scheduler has selected a node to run the VM on, the next step is for the compute service on that machine to start the VM. First the specified disk image is fetched from your image service (in my case glance) and placed in a temporary location on disk. If the image is already a raw image, it is then renamed to the correct name in the instances/_base directory. If it isn't a raw image then it is converted to raw format, and the converted file is put in the right place. Optionally, the image can be extended to a specified size as part of this process.

    Then, depending on whether you have copy on write (COW) images turned on, either a COW version of the file is created inside the instances/$instance/ directory, or the file from _base is copied to instances/$instance.
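    In qemu-img terms, those two paths come down to command lines like these. This is a hedged sketch: the helper names and example paths are mine, and the real nova code builds and runs its commands differently, but the qemu-img invocation shape is the standard one for a qcow2 image with a backing file:

```python
def cow_cmd(base, target):
    # COW path: a qcow2 image whose backing file is the shared
    # base image in instances/_base (writes go to the overlay).
    return ["qemu-img", "create", "-f", "qcow2",
            "-o", "backing_file=%s" % base, target]

def copy_cmd(base, target):
    # non-COW path: a full, independent copy of the base image.
    return ["cp", base, target]

print(cow_cmd("/var/lib/nova/instances/_base/abc123",
              "/var/lib/nova/instances/instance-1/disk"))
```

    The COW overlay is why the file in instances/$instance/ can be tiny while still presenting the full disk: only blocks the guest writes are stored there, with everything else read through from _base.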

    This has a side effect that had me confused for quite a while yesterday -- the checksums, and even file sizes, stored in glance are not reliable indicators of base image corruption. Most of my confusion was because image files in glance are immutable, so how could they differ from what's on disk? The other problem was that the images I was using on my development machine were raw images, for which checksums did work. It was only when I moved to a slightly more complicated environment that I had enough data to work out what was happening.

    We therefore have a problem for that blueprint: we can't use the checksums from glance as a reliable indicator of whether something has gone wrong with the base image. I need to come up with something nicer. What this probably means for the first cut of the code is that checksums will only be verified for raw images which weren't extended, but I haven't written that code yet.
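    For the one case where the glance checksum is still meaningful -- a raw image that was never converted or extended -- verification could look something like this. An illustrative sketch only, not the blueprint code; the function names are mine, and the only assumption is that glance stores an MD5 checksum for each image:

```python
import hashlib

def file_md5(path, chunk=1 << 20):
    """MD5 a file in chunks so large base images don't need
    to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def base_image_ok(path, glance_checksum):
    # Only trustworthy for raw images that were never converted
    # or resized on the way into instances/_base.
    return file_md5(path) == glance_checksum
```

    For converted or extended images the bytes on disk legitimately differ from what glance served, so a mismatch there tells you nothing about corruption -- which is exactly the problem described above.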

    So, there we go.

    Tags for this post: openstack cloud computing nova glance qemu image management canonical sre image_cache_management
    Related posts: Folsom Dev Summit sessions; Reflecting on Essex; Wow, qemu-img is fast; Juno nova mid-cycle meetup summary: slots; A first pass at glance replication; Openstack compute node cleanup

posted at: 23:13 | path: /openstack | permanent link to this entry