Content here is by Michael Still. All opinions are my own.

Mon, 14 Apr 2014

Thoughts from the PTL

    I sent this through to the openstack-dev mailing list (you can see the thread here), but I want to put it here as well for people who don't actively follow the mailing list.

    First off, thanks for electing me as the Nova PTL for Juno. I find the
    outcome of the election both flattering and daunting. I'd like to
    thank Dan and John for running as PTL candidates as well -- I strongly
    believe that a solid democratic process is part of what makes
    OpenStack so successful, and that isn't possible without people being
    willing to stand up during the election cycle.
    I'm hoping to send out regular emails to this list with my thoughts
    about our current position in the release process. It's early in the
    cycle, so the ideas here aren't fully formed yet -- however I'd rather
    get feedback early and often, in case I'm off on the wrong path. What
    am I thinking about at the moment? The following things:
    * a mid-cycle meetup. I think the Icehouse meetup was a great success,
    and I'd like to see us do this again in Juno. I'd also like to get the
    location and venue nailed down as early as possible, so that people
    who have complex travel approval processes have a chance to get travel
    sorted out. I think it's pretty much a foregone conclusion this meetup
    will be somewhere in the continental US. If you're interested in
    hosting a meetup in approximately August, please mail me privately so
    we can chat.
    * specs review. The new blueprint process is a work of genius, and I
    think it's already working better than what we've had in previous
    releases. However, there are a lot of blueprints there in review, and
    we need to focus on making sure these get looked at sooner rather than
    later. I'd especially like to encourage operators to take a look at
    blueprints relevant to their interests. Phil Day from HP has been
    doing a really good job at this, and I'd like to see more of it.
    * I promised to look at mentoring newcomers. The first step there is
    working out how to identify which newcomers to mentor, and who mentors
    them. There's not a lot of point in mentoring someone who writes a
    single drive-by patch, so working out who to invest in isn't as
    obvious as it might seem at first. Discussing this process for
    identifying mentoring targets is a good candidate for a summit
    session, so have a ponder. However, if you have ideas let's get
    talking about them now instead of waiting for the summit.
    * summit session proposals. The deadline for proposing summit sessions
    for Nova is April 20, which means we only have a little under a week
    to get that done. So, if you're sitting on a summit session proposal,
    now is the time to get it in.
    * business as usual. We also need to find the time for bug fix code
    review, blueprint implementation code review, bug triage and so forth.
    Personally, I'm going to focus on bug fix code review more than I have
    in the past. I'd like to see cores spend 50% of their code review time
    reviewing bug fixes, to make the Juno release as solid as possible.
    However, I don't intend to enforce that; it's just me asking real nice.
    Thanks for taking the time to read this email, and please do let me
    know if you think this sort of communication is useful.

    Tags for this post: openstack juno ptl nova
    Related posts: Expectations of core reviewers; Juno Nova PTL Candidacy; Review priorities as we approach juno-3; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: scheduler

posted at: 00:01 | path: /openstack/juno | permanent link to this entry

Sat, 05 Apr 2014

Initial play with wood turning

posted at: 17:17 | path: /wood/turning/20140406-woodturning | permanent link to this entry

Sat, 29 Mar 2014

Juno Nova PTL Candidacy

    This is a repost of an email to the openstack-dev list, which is mostly here for historical reasons.

    I would like to run for the OpenStack Compute PTL position as well.
    I have been an active nova developer since late 2011, and have been a
    core reviewer for quite a while. I am currently serving on the
    Technical Committee, where I have recently been spending my time
    liaising with the board about how to define what software should be
    able to use the OpenStack trade mark. I've also served on the
    vulnerability management team, and as nova bug czar in the past.
    I have extensive experience running Open Source community groups,
    having served on the TC, been the Director for 2013, as
    well as serving on the boards of various community groups over the years.
    In Icehouse I hired a team of nine software engineers who are all
    working 100% on OpenStack at Rackspace Australia, developed and
    deployed the turbo hipster third party CI system along with Joshua
    Hesketh, as well as writing nova code. I recognize that if I am
    successful I will need to rearrange my work responsibilities, and my
    management is supportive of that.
    The future
    To be honest, I've thought for a while that the PTL role in OpenStack
    is poorly named. Specifically, it's the T that bothers me. Sure, we
    need strong technical direction for our programs, but putting it in
    the title raises technical direction above the other aspects of the
    job. Compute at the moment is in an interesting position -- we're
    actually pretty good on technical direction and we're doing
    interesting things. What we're not doing well on is the social aspects
    of the PTL role.
    When I first started hacking on nova I came from an operations
    background where I hadn't written open source code in quite a while. I
    feel like I'm reasonably smart, but nova was certainly the largest
    python project I'd ever seen. I submitted my first patch, and it was
    rejected -- as it should have been. However, Vishy then took the time
    to sit down with me and chat about what needed to change, and how to
    improve the patch. That's really why I'm still involved with
    OpenStack: Vishy took an interest and was always happy to chat. I'm
    told by others that they have had similar experiences.
    I think that's what compute is lacking at the moment. For the last few
    cycles we've focused on the technical, and now the social aspects are
    our biggest problem. I think this is a pendulum, and perhaps in a
    release or two we'll swing back to needing to re-emphasise the
    technical aspects, but for now we're doing poorly on social things.
    Some examples:
    - we're not keeping up with code reviews because we're reviewing the
    wrong things. We have a high volume of patches which are unlikely to
    ever land, but we just reject them. So far in the Icehouse cycle we've
    seen 2,334 patchsets proposed, of which we approved 1,233. Along the
    way, we needed to review 11,747 revisions. We don't spend enough time
    working with the proposers to improve the quality of their code so
    that it will land. Specifically, whilst review comments in gerrit are
    helpful, we need to identify up and coming contributors and help them
    build a relationship with a mentor outside gerrit. We can reduce the
    number of reviews we need to do by improving the quality of initial submissions.
    - we're not keeping up with bug triage, or worse actually closing
    bugs. I think part of this is that people want to land their features,
    but part of it is also that closing bugs is super frustrating at the
    moment. It can take hours (or days) to replicate and then diagnose a
    bug. You propose a fix, and then it takes weeks to get reviewed. I'd
    like to see us tweak the code review process to prioritise bug fixes
    over new features for the Juno cycle. We should still land features,
    but we should obsessively track review latency for bug fixes. Compute
    fails if we're not producing reliable production grade code.
    - I'd like to see us focus more on consensus building. We're a team
    after all, and when we argue solely about the technical aspects of a
    problem we ignore the fact that we're teaching the people involved a
    behaviour that will continue on. Ultimately if we're not a welcoming
    project that people want to code on, we'll run out of developers. I
    personally want to be working on compute in five years, and I want the
    compute of the future to be a vibrant, friendly, supportive place. We
    get there by modelling the behaviour we want to see in the future.
    So, some specific actions I think we should take:
    - when we reject a review from a relatively new contributor, we should
    try and pair them up with a more experienced developer to get some
    coaching. That experienced dev should take point on code reviews for
    the new person so that they receive low-latency feedback as they
    learn. Once the experienced dev is ok with a review, nova-core can
    pile on to actually get the code approved. This will reduce the
    workload for nova-core (we're only reviewing things which are of a
    known good standard), while improving the experience for new contributors.
    - we should obsessively track review performance for bug fixes, and
    prioritise them where possible. Let's not ignore features, but let's
    agree that each core should spend at least 50% of their review time
    reviewing bug fixes.
    - we should work on consensus building, and tracking the progress of
    large blueprints. We should not wait until the end of the cycle to
    re-assess the v3 API and discover we have concerns. We should be
    talking about progress in the weekly meetings and making sure we're
    all on the same page. Let's reduce the level of surprise. This also
    flows into being clearer about the types of patches we don't want to
    see proposed -- for example, if we think that patches that only change
    whitespace are a bad idea, then let's document that somewhere so
    people know before they put a lot of effort in.
    Thanks for taking the time to read this email!

    Tags for this post: openstack juno ptl nova election
    Related posts: Expectations of core reviewers; Review priorities as we approach juno-3; Thoughts from the PTL; Havana Nova PTL elections; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: social issues

posted at: 00:01 | path: /openstack/juno | permanent link to this entry

Wed, 26 Mar 2014

The Hot Gate

posted at: 02:04 | path: /book/John_Ringo | permanent link to this entry

Wed, 05 Mar 2014


posted at: 18:34 | path: /book/John_Ringo | permanent link to this entry

Fri, 28 Feb 2014

The Runaway Jury

posted at: 23:49 | path: /book/John_Grisham | permanent link to this entry

Mon, 24 Feb 2014

NBNCo likely to miss its rollout targets

    In December 2013 NBNCo issued a strategic review ordered by Malcolm Turnbull. In this review, they stated that their target for number of premises passed by June 2014 was 637,000 of the 10,910,000 premises in Australia. How are they going on that goal? Well, there are currently 303,905 premises passed in Australia, with around 6,000 more being added per week. That means they're in deep trouble -- they need to be averaging more like 17,000 premises a week at this point to meet their goal. This is especially depressing because the goal is only a few months old and already looking grim.
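The required run rate above can be sanity checked with some quick arithmetic (the number of weeks remaining is my own rough estimate):

```python
# Back-of-the-envelope check of the run rate needed to hit the target.
# weeks_left is an approximation: late February to 30 June 2014.
target = 637000
passed = 303905
weekly_now = 6000
weeks_left = 18

needed = (target - passed) / weeks_left
print(int(needed))  # on the order of 18,000 premises a week, versus ~6,000 actual
```

Even allowing generous slack in the week count, the required rate is roughly triple the actual one.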

    (Update: so apparently I misread the NBN strategic review, having watched the Senate estimates hearing just now. The NBN target for 30 June 2014 is 357,000, not 600,000. This is the difference between the numbers on page 46 of the report and those on page 48. The 357,000 number is so low that NBNCo will probably meet that target.)

    While I think it's true that the Labor party didn't cover itself with glory in managing the NBN rollout, I have to say that Malcolm isn't doing much better, especially now that he's reneged on the 25mbit by the end of 2016 promise he took to the election.

    You can see more about the state of the NBN rollout in my Google spreadsheet. I read the boring NBN documents so you don't have to!

    Tags for this post: blog nbn politics
    Related posts: Dublin trip; Mistress of the Empire; I am sometimes amazed by the childlike political discourse in the US; The Moon Is A Harsh Mistress; Daughter of the Empire; Servant of the Empire

posted at: 17:04 | path: /diary | permanent link to this entry

Fri, 14 Feb 2014

Live Free or Die

    ISBN: 9781439133972
    This book is useful. When the Earth is invaded by evil aliens intent on stripping us of our heavy metals, I now know how to fight back using just Maple Syrup and a Death Star I just happen to have hanging around. That's education right there. This book is delightfully not sexist compared with some of Ringo's other books, which makes me happy. It does lack strong female characters, but at least they're not being used for titillation (refer to Cally's War for an example of how this isn't always true). I enjoyed this book.

    Tags for this post: book john_ringo alien invasion combat troy_rising
    Related posts: Citadel; The Hot Gate; Speaker For The Dead; Emerald Sea; Hell's Faire; Princess of Wands

posted at: 19:53 | path: /book/John_Ringo | permanent link to this entry

Thu, 06 Feb 2014

A Talent for War

posted at: 21:47 | path: /book/Jack_McDevitt | permanent link to this entry

Thu, 30 Jan 2014

NBNCo's ACT deployment, rolling out in the direction of backwards

    So those who follow me on twitter won't be surprised to discover that since last night I've been chasing NBNCo rollout statistics for the ACT. It turns out they now do weekly rollout reports (good), but make it hard to find historical ones (bad). So, I made an index of the reports, which you can see at

    I also did some historical analysis for the ACT, and it's not good. In fact, it seems we've rolled out -24 premises in the last two months. That's right, we're going backwards:

    Of course, NBNCo doesn't respond on twitter to a request for an explanation, but that's what I've come to expect.

    Tags for this post: blog nbnco rollout

posted at: 16:44 | path: /diary | permanent link to this entry

Wed, 29 Jan 2014

Weird email of the day

posted at: 15:36 | path: /diary | permanent link to this entry

Mon, 27 Jan 2014

Just storing this here -- how to fix the keyboard bindings on MacOS

posted at: 23:40 | path: /link | permanent link to this entry

Sun, 26 Jan 2014

The Long Earth

posted at: 17:13 | path: /book/Terry_Pratchett_and_Stephen_Baxter | permanent link to this entry

Sun, 03 Nov 2013

Comparing alembic with sqlalchemy migrate

    In the last few days there has been a discussion on the openstack-dev mailing list about converting nova to alembic. Nova currently uses sqlalchemy migrate for its schema migrations. I would consider myself a sceptic of this change, but I want to be a well educated sceptic so I thought I should take a look at an existing alembic user, in this case neutron. There is also at least one session on database changes at the Icehouse summit this coming week, and I wanted to feel prepared for those conversations.

    I should start off by saying that I'm not particularly opposed to alembic. We definitely have problems with migrate, but I am not sure that these problems are addressed by alembic in the way that we'd hope. I think we need to dig deeper into the issues we face with migrate to understand if alembic is a good choice.

    sqlalchemy migrate

    There are two problems with migrate that I see us suffering from at the moment. The first is that migrate is no longer maintained by upstream. I can see why this is bad, although there are other nova dependencies that the OpenStack team maintains internally. For example, the various oslo libraries and the oslo incubator. I understand that reducing the amount of code we maintain is good, but migrate is stable and relatively static. Any changes made will be fixes for security issues or feature changes that the OpenStack project wants. This relative stability means that we're unlikely to see gate breakages because of unexpected upstream changes. It also means that when we want to change how migrate works for our convenience, we don't need to spend time selling upstream on that change.

    The other problem I see is that it's really fiddly to land database migrations in nova at the moment. Migrations are a linear stream through time implemented in the form of a sequential number. So, if the current schema version is 227, then my new migration would be implemented by adding the following files to the git repository:

    In this example, the migration is called "implement_funky_feature", and needs custom sqlite upgrades and downgrades. Those sqlite specific files are optional.
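Concretely, with the current version at 227, a new migration named implement_funky_feature would add files along these lines (the paths and names here are illustrative of the naming scheme, not copied from the tree):

```
nova/db/sqlalchemy/migrate_repo/versions/228_implement_funky_feature.py
nova/db/sqlalchemy/migrate_repo/versions/228_sqlite_upgrade.sql
nova/db/sqlalchemy/migrate_repo/versions/228_sqlite_downgrade.sql
```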

    Now the big problem here is that if there is more than one patch competing for the next migration number (which is quite common), then only one patch can win. The others will need to manually rebase their change by renaming these files and then have to re-attempt the code review process. This is very annoying, especially because migration numbers are baked into our various migration tests.

    "Each" migration also has migration tests, which reside in nova/tests/db/. I say "each" in quotes because we haven't been fantastic about actually adding tests for all our migrations, so coverage is imperfect at best. When you miss out on a migration number, you also need to update your migration tests to have the new version number in them.

    If we ignore alembic for a moment, I think we can address this issue within migrate relatively easily. The biggest problem at the moment is that migration numbers are derived from the file naming scheme. If instead they came from a configuration file, then when you needed to change the migration number for your patch it would be a one line change in a configuration file, instead of a selection of file renames and some changes to tests. Consider a configuration file which looks like this:

      mikal@e7240:~/src/openstack/nova/nova/db/sqlalchemy/migrate_repo/versions$ cat versions.json | head
          "133": [
          "134": [
          "135": [

    Here, the only place the version number appears is in this versions.json configuration file. For each version, you just list the files present for the migration. In each of the cases here it's just the Python migration, but it could just as easily include sqlite specific migrations in the array of filenames.
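A sketch of how a loader could consume such a file (the migration filenames below are invented for illustration):

```python
# Sketch: build migrate's version -> files mapping from a versions.json
# file instead of parsing filenames. Filenames here are invented.
import json

versions_json = """
{
    "133": ["133_folsom.py"],
    "134": ["134_add_counters.py"],
    "135": ["135_implement_funky_feature.py",
            "135_sqlite_upgrade.sql",
            "135_sqlite_downgrade.sql"]
}
"""

# Keys map straight onto migrate's internal version numbering.
tempVersions = {int(num): files
                for num, files in json.loads(versions_json).items()}
print(sorted(tempVersions))  # [133, 134, 135]
```

Renumbering a migration then becomes a one-line change to the JSON, rather than a set of file renames.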

    Then we just need a very simple change to migrate to prefer the config file if it is present:

      diff --git a/migrate/versioning/ b/migrate/versioning/
      index d5a5be9..cee1e66 100644
      --- a/migrate/versioning/
      +++ b/migrate/versioning/
      @@ -61,22 +61,31 @@ class Collection(pathed.Pathed):
               """
               super(Collection, self).__init__(path)

      -        # Create temporary list of files, allowing skipped version numbers.
      -        files = os.listdir(path)
      -        if '1' in files:
      -            # deprecation
      -            raise Exception('It looks like you have a repository in the old '
      -                            'format (with directories for each version). '
      -                            'Please convert repository before proceeding.')
      -
      -        tempVersions = dict()
      -        for filename in files:
      -            match = self.FILENAME_WITH_VERSION.match(filename)
      -            if match:
      -                num = int(match.group(1))
      -                tempVersions.setdefault(num, []).append(filename)
      -            else:
      -                pass  # Must be a helper file or something, let's ignore it.
      +        # NOTE(mikal): If there is a versions.json file, use that instead of
      +        # filesystem numbering
      +        json_path = os.path.join(path, 'versions.json')
      +        if os.path.exists(json_path):
      +            with open(json_path) as f:
      +                tempVersions = json.loads(f.read())
      +        else:
      +            # Create temporary list of files, allowing skipped version numbers.
      +            files = os.listdir(path)
      +            if '1' in files:
      +                # deprecation
      +                raise Exception('It looks like you have a repository in the '
      +                                'old format (with directories for each '
      +                                'version). Please convert repository before '
      +                                'proceeding.')
      +
      +            tempVersions = dict()
      +            for filename in files:
      +                match = self.FILENAME_WITH_VERSION.match(filename)
      +                if match:
      +                    num = int(match.group(1))
      +                    tempVersions.setdefault(num, []).append(filename)
      +                else:
      +                    pass  # Must be a helper file or something, let's ignore it.

               # Create the versions member where the keys
               # are VerNum's and the values are Version's.

    There are some tweaks required to as well, but they are equally trivial. As an aside, I wonder what people think about moving the migration tests out of the test tree and into the versions directory so that they are beside the migrations. This would make it clearer which migrations lack tests, and would reduce the length of, which is starting to get out of hand at 3,478 lines.

    There's one last thing I want to say about migrate migrations before I move onto discussing alembic. One of the features of migrate is that schema migrations are linear, which I think is a feature not a limitation. In the Havana (and presumably Icehouse) releases there has been significant effort from Mirantis and Rackspace Australia to fix bugs in database migrations in nova. To be frank, we do a poor job of having reliable migrations, even in the relatively simple world of linear migrations. I strongly feel we'd do an even worse job if we had non-linear migrations, and I think we need to require that all migrations be sequential as a matter of policy. Perhaps one day when we're better at writing migrations we can vary that, but I don't think we're ready for it yet.

    alembic

    An example of an existing user of alembic in openstack is neutron, so I took a look at their code to work out what migrations in nova using alembic might look like. Here's the workflow for adding a new migration:

    First off, have a read of neutron/db/migration/README. The process involves more tools than nova developers will be used to; it's not a simple case of just adding a manually written file to the migrations directory. You need access to the neutron-db-manage tool to write a migration, so set up neutron first.

    Just as an aside, the first time I tried to write this blog post I was on an aeroplane, with no network connectivity. It is frustrating that writing a new database migration requires network connectivity if you don't already have the neutron tools set up in your development environment. Even more annoyingly, you need to have a working neutron configuration in order to be able to add a new migration, which slowed me down a fair bit when I was trying this out. In the end it seems the most expedient way to do this is just to run up a devstack with neutron configured.

    Now we can add a new migration:

      $ neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
      revision -m "funky new database migration" \
      No handlers could be found for logger "neutron.common.legacy"
      INFO  [alembic.migration] Context impl MySQLImpl.
      INFO  [alembic.migration] Will assume non-transactional DDL.
      INFO  [alembic.autogenerate] Detected removed table u'arista_provisioned_tenants'
      INFO  [alembic.autogenerate] Detected removed table u'ml2_vxlan_allocations'
      INFO  [alembic.autogenerate] Detected removed table u'cisco_ml2_nexusport_bindings'
      INFO  [alembic.autogenerate] Detected removed table u'ml2_vxlan_endpoints'
      INFO  [alembic.autogenerate] Detected removed table u'arista_provisioned_vms'
      INFO  [alembic.autogenerate] Detected removed table u'ml2_flat_allocations'
      INFO  [alembic.autogenerate] Detected removed table u'routes'
      INFO  [alembic.autogenerate] Detected removed table u'cisco_ml2_credentials'
      INFO  [alembic.autogenerate] Detected removed table u'ml2_gre_allocations'
      INFO  [alembic.autogenerate] Detected removed table u'ml2_vlan_allocations'
      INFO  [alembic.autogenerate] Detected removed table u'servicedefinitions'
      INFO  [alembic.autogenerate] Detected removed table u'servicetypes'
      INFO  [alembic.autogenerate] Detected removed table u'arista_provisioned_nets'
      INFO  [alembic.autogenerate] Detected removed table u'ml2_gre_endpoints'
        Generating /home/mikal/src/openstack/neutron/neutron/db/migration/alembic_migrations/

    This command has allocated us a migration id, in this case 297033515e04. Interestingly, the template migration drops all of the tables for the ml2 driver, which is a pretty interesting choice of default.
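As an aside, the 12-hex-digit id looks like a random token. Something like this sketch would produce ids of the same shape (this mirrors the approach alembic takes, but it is a sketch, not alembic's API):

```python
# Generate a short random revision id of the same shape as 297033515e04.
import uuid

def rev_id():
    # uuid4().hex is 32 hex digits; keep the last twelve.
    return uuid.uuid4().hex[-12:]

print(rev_id())  # a 12 character hex string
```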

    There are a bunch of interesting headers in the migration python file which you need to know about:

      """funky new database migration

      Revision ID: 297033515e04
      Revises: havana
      Create Date: 2013-11-04 17:12:31.692133

      """

      # revision identifiers, used by Alembic.
      revision = '297033515e04'
      down_revision = 'havana'

      # Change to ['*'] if this migration applies to all plugins
      migration_for_plugins = [

    The developer README then says that you can check your migration is linear with this command:

      $ neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini check_migration

    In my case it is fine because I'm awesome. However, it is also a little worrying that you need a tool to hold your hand to verify this, because it's too hard to read through the migrations and verify it yourself.

    So how does alembic go with addressing the concerns we have with the nova database migrations? Well, alembic is currently supported by an upstream other than OpenStack developers, so alembic addresses that concern. I should also say that alembic is obviously already in use by other OpenStack projects, so I think it would be a big ask to move to something other than alembic.

    Alembic does allow linear migrations as well, but it's not enforced by the tool itself (in other words, non-linear migrations are supported by the tooling). That means there's another layer of checking required of developers in order to maintain a linear migration stream, and I worry that will introduce another area in which we can make errors and accidentally end up with non-linear migrations. In fact, in the example of multiple patches competing to be next in line, alembic is worse, because the headers in the migration file would need to be updated to ensure that linear migrations are maintained.
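To make that concern concrete: alembic revisions chain via their down_revision headers, so two competing patches that branch from the same parent silently create two heads. A toy sketch of the kind of check a tool like check_migration has to do (the revision ids below are invented):

```python
# Toy model of an alembic-style revision graph: revision -> down_revision.
# Two patches branching from the same parent create two heads, which is
# exactly the non-linear history that has to be caught by tooling.
revisions = {
    '297033515e04': 'havana',
    'aaa111bbb222': '297033515e04',  # first competing patch
    'ccc333ddd444': '297033515e04',  # second competing patch
}

parents = set(revisions.values())
heads = sorted(rev for rev in revisions if rev not in parents)
print(heads)  # more than one head means the history is no longer linear
```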


    I'm still not convinced alembic is a good choice for nova, but I look forward to a lively discussion at the design summit about this.

    Tags for this post: openstack icehouse migrate alembic db migrations
    Related posts: On Continuous Integration testing for Nova DB; Exploring a single database migration; One week of Nova Kilo specifications; Specs for Kilo; OpenStack at 2013; Wow, qemu-img is fast

posted at: 22:52 | path: /openstack/icehouse | permanent link to this entry

Sat, 02 Nov 2013

On Continuous Integration testing for Nova DB

    To quote Homer Simpson: "All my life I've had one dream, to achieve my many goals.".

    One of my more recent goals is a desire to have real continuous integration testing for database migrations in Nova. You see, at the moment, database migrations can easily make upgrades painful for deployers, normally by taking a very long time to run. This is partially because we test on trivial datasets on our laptops, but it is also because it is hard to predict the scale of the various dimensions in the database -- for example: perhaps one deployment has lots of instances; whilst another might have a smaller number of instances but a very large number of IP addresses.

    The team I work with at Rackspace Australia has therefore been cooking up a scheme to try and fix this. For example, Josh Hesketh has been working on what we call Turbo Hipster, which he has blogged about. We've started off with a prototype to prove we can get meaningful testing results, which is running now.

    Since we finished the prototype we've been working on a real implementation, which is known as Turbo Hipster. I know it's an odd name, but we couldn't decide what to call it, so we just took a suggestion from the github project namer. It's just an added advantage that the OpenStack Infra team think that the name is poking fun at them. Turbo Hipster reads the gerrit event stream, and then uses our own zuul to run tests and report results to gerrit. We need our own zuul because we want to be able to offer federated testing later, and it isn't fair to expect the Infra team to manage that for us. There's nothing special about the tests we're running; our zuul is capable of running other tests if people are interested in adding more, although we'd have to talk about whether it makes more sense for you to just run your own zuul.

    Generally I keep an eye on the reports and let developers know when there are problems with their patchset. I don't want to link to where the reports live just yet; there are some problems which stop me from putting our prototype in a public place. Consider a migration that takes some form of confidential data out of the database and just logs it. Sure, we'd pick this up in code review, but by then we might have published test logs with confidential information. This is especially true because we want to be able to run tests against real production databases, both ones donated to run on our test infrastructure and ones where a federated worker is running somewhere else.

    We have therefore started work on a database anonymization tool, which we named Fuzzy Happiness (see earlier comment about us being bad at naming things). This tool takes markup in the sqlalchemy models file and uses that to decide what values to anonymize (and how). Fuzzy Happiness is what prompted me to write this blog post: Nova reviewers are about to see a patch with strange markup in it, and I wanted something to point at to explain what we're trying to do.
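The idea can be sketched like this; note that the hint names and strategies below are invented for illustration, not Fuzzy Happiness's actual markup:

```python
# Sketch of column-level anonymization hints: columns flagged as
# sensitive are rewritten, everything else passes through untouched.
import random
import string

# Hypothetical hints: (table, column) -> anonymization strategy.
ANONYMIZE = {
    ('instances', 'hostname'): 'random_string',
    ('instances', 'user_id'): 'random_string',
}

def _random_string(length):
    return ''.join(random.choice(string.ascii_lowercase)
                   for _ in range(length))

def anonymize_row(table, row):
    out = dict(row)
    for column, value in row.items():
        if ANONYMIZE.get((table, column)) == 'random_string':
            # Preserve the length so the scaled data stays realistic.
            out[column] = _random_string(len(value))
    return out

row = anonymize_row('instances', {'hostname': 'prod-web-01', 'vcpus': 4})
print(row['vcpus'])  # non-sensitive columns are unchanged
```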

    Once we have anonymization working there is one last piece we need, which is database scaling. Perhaps the entire size of your database gives away things you don't want leaked into gerrit. This tool is tentatively codenamed Elastic Duckface, and we'll tell you more about it just as soon as we've written it.

    I'd be very interested in comments on any of this work, so please do reach out if you have thoughts.

    Tags for this post: openstack turbo_hipster fuzzy_happiness db ci anonymization
    Related posts: Comparing alembic with sqlalchemy migrate; Nova database continuous integration; One week of Nova Kilo specifications; Specs for Kilo; OpenStack at 2013; Wow, qemu-img is fast

posted at: 13:10 | path: /openstack | permanent link to this entry

Mon, 30 Sep 2013

Starship Troopers (again)

posted at: 04:58 | path: /book/Robert_A_Heinlein | permanent link to this entry

Starship Troopers

posted at: 04:39 | path: /book/Robert_A_Heinlein | permanent link to this entry

Wed, 04 Sep 2013

Call for presentations for the 2014 OpenStack mini-conference

    I've just emailed this out to the relevant lists, but I figured it can't hurt to post it here as well... will be hosting the second OpenStack mini-conference to run in Australia. The first one was well attended, and this mini-conference will be the first OpenStack conference to be held on Australia's west coast. The mini-conference is a day long event focusing on OpenStack development and operations, and is available to attendees of

    The mini-conference is therefore calling for proposals for content. Speakers at the mini-conference must be registered for 2014 as delegates, or discuss their needs with the mini-conference organizers if that isn't possible.

    Some examples of talks we're interested in are: talks from OpenStack developers about what features they are working on for Icehouse; talks from deployers of OpenStack about their experiences and how others can learn from them; talks covering the functionality of OpenStack and how it can be used in new and interesting ways.

    Some important details:

    • linux.conf.au 2014 runs from 6 to 10 January 2014 in Perth, Australia at the University of Western Australia
    • the mini-conference will be on Tuesday the 7th of January
    • proposals are due to the mini-conference organizer no later than 1 November
    • there are two types of talks -- full length (45 minutes) and half length (20 minutes)

    CFP submissions are made by completing this online form: CFP submission form

    If you have questions about this call for presentations, please contact Michael Still for more details.

    Tags for this post: conference lca2014 openstack mini-conference rackspace
    Related posts: OpenStack at 2013; Moving on; Faster pip installs; Image handlers (in essex); Upgrade problems with the new Fixed IP quota; Merged in Havana: fixed ip listing for single hosts

posted at: 18:55 | path: /conference/lca2014 | permanent link to this entry

Fri, 02 Aug 2013

Exploring a single database migration

    Yesterday I was having some troubles with a database migration downgrade step, and Joshua Hesketh suggested I step through the migrations one at a time and see what they were doing to my sqlite test database. That's a great idea, but it wasn't immediately obvious to me how to do it. Now that I've figured out the steps required, I thought I'd document them here.

    First off we need a test environment. I'm hacking on nova at the moment, and tend to build throw away test environments in the cloud because it's cheap and easy. So, I created a new Ubuntu 12.04 server instance in Rackspace's Sydney data center, and then configured it like this:

      $ sudo apt-get update
      $ sudo apt-get install -y git python-pip git-review libxml2-dev libxml2-utils \
          libxslt-dev libmysqlclient-dev pep8 postgresql-server-dev-9.1 python2.7-dev \
          python-coverage python-netaddr python-mysqldb python-git python-numpy \
          virtualenvwrapper sqlite3
      $ source /etc/bash_completion.d/virtualenvwrapper
      $ mkvirtualenv migrate_204
      $ toggleglobalsitepackages

    Simple! I should note here that we probably don't need the virtualenv because this machine is disposable, but it's still a good habit to be in. Now I need to fetch the code I am testing. In this case it's from my personal fork of nova, and the git location to fetch will obviously change for other people:

      $ git clone

    Now I can install the code under test. This will pull in a bunch of pip dependencies as well, so it takes a little while:

      $ cd nova
      $ python setup.py develop

    Next we have to configure nova because we want to install specific database schema versions.

      $ sudo mkdir /etc/nova
      $ sudo vim /etc/nova/nova.conf
      $ sudo chmod -R ugo+rx /etc/nova

    The contents of my nova.conf look like this:

      $ cat /etc/nova/nova.conf
      sql_connection = sqlite:////tmp/foo.sqlite

    Now I can step up to the version before the one I am testing:

      $ nova-manage db sync --version 203

    You do the same thing but with a different version number to step somewhere else. It's also pretty easy to get the schema for a table under sqlite. I just do this:

      $ sqlite3 /tmp/foo.sqlite
      SQLite version 3.7.9 2011-11-01 00:52:41
      Enter ".help" for instructions
      Enter SQL statements terminated with a ";"
      sqlite> .schema instances
      CREATE TABLE "instances" (
              created_at DATETIME,
              updated_at DATETIME,
              ...

    So there you go.
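    The schema snapshot above can be scripted rather than typed into the sqlite3 shell. Here's a minimal sketch using Python's stdlib sqlite3 module; the two throwaway databases stand in for /tmp/foo.sqlite at two different migration versions, and the table definitions are illustrative, not nova's real schema.

```python
import os
import sqlite3
import tempfile

def schema(db_path):
    """Map table name -> CREATE statement, pulled from sqlite_master."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name, sql FROM sqlite_master "
            "WHERE type = 'table' ORDER BY name").fetchall()
    return dict(rows)

tmp = tempfile.mkdtemp()
v203 = os.path.join(tmp, "v203.sqlite")
v204 = os.path.join(tmp, "v204.sqlite")

# Stand-ins for the test database at migration versions 203 and 204.
with sqlite3.connect(v203) as conn:
    conn.execute("CREATE TABLE instances (created_at DATETIME)")
with sqlite3.connect(v204) as conn:
    conn.execute("CREATE TABLE instances "
                 "(created_at DATETIME, updated_at DATETIME)")

old, new = schema(v203), schema(v204)
changed = sorted(t for t in new if old.get(t) != new[t])
print(changed)  # → ['instances']
```

    Capturing the CREATE statements after each nova-manage db sync step and diffing consecutive snapshots shows exactly what each migration changed.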

    Disclaimer -- I wouldn't recommend upgrading to a specific version like this for real deployments, because the models in the code base won't match the tables. If you wanted to do that you'd need to work out which git commit added the version after the one you've installed, and then check out the commit before that commit.
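    The git archaeology in that disclaimer can be scripted too. This sketch builds a throwaway repository (the file layout is illustrative, loosely modeled on a migrate_repo/versions directory, not nova's actual tree) and uses git log --diff-filter=A to find the commit that added a migration file, then resolves its parent -- the tree whose models match the previous schema version.

```python
import os
import subprocess
import tempfile

def run(*cmd, cwd):
    """Run a command and return its stripped stdout."""
    return subprocess.run(cmd, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

repo = tempfile.mkdtemp()
ident = ["-c", "user.email=me@example.com", "-c", "user.name=me"]

run("git", "init", "-q", ".", cwd=repo)
run("git", *ident, "commit", "-q", "--allow-empty", "-m", "schema 203", cwd=repo)

# Simulate a later commit that adds migration 204.
os.makedirs(os.path.join(repo, "versions"))
open(os.path.join(repo, "versions", "204_add_column.py"), "w").close()
run("git", "add", "versions", cwd=repo)
run("git", *ident, "commit", "-q", "-m", "add migration 204", cwd=repo)

# --diff-filter=A restricts the log to the commit that Added the file;
# its parent (<hash>^) is the commit to check out for schema 203 models.
added = run("git", "log", "--diff-filter=A", "--format=%H",
            "--", "versions/204_*.py", cwd=repo)
parent = run("git", "rev-parse", added + "^", cwd=repo)
print(parent)
```

    The same two git commands work directly from the shell against a real checkout; the Python wrapper is just to keep the example self-contained.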

    Tags for this post: openstack tips rackspace nova database migrations sqlite
    Related posts: Faster pip installs; Upgrade problems with the new Fixed IP quota; Merged in Havana: fixed ip listing for single hosts; Nova database continuous integration; Merged in Havana: configurable iptables drop actions in nova; Michael's surprisingly unreliable predictions for the Havana Nova release

posted at: 18:37 | path: /openstack/tips | permanent link to this entry

Wed, 03 Jul 2013

Nova database continuous integration

    I've had some opportunity recently to spend a little quality time off line, and I spent some of that time working on a side project I've wanted to do for a while -- continuous integration testing of nova database migrations. Now, the code isn't perfect at the moment, but I think it's an interesting direction to take and I will keep pursuing it.

    One of the problems nova developers have is that we don't have a good way of determining whether a database migration will be painful for deployers. We can eyeball code reviews, but whether code looks reasonable or not, it's still hard to predict how it will perform on real data. Continuous integration is the obvious solution -- if we could test patch sets on real databases as part of the code review process, then reviewers would have more data about whether to approve a patch set or not. So I did that.

    At the moment the CI implementation I've built isn't posting to code reviews, but that's because I want to be confident that the information it gathers is accurate before wasting other reviewers' time. You can see the results online. For now, I am keeping an eye on the test results and posting manually to reviews when an error is found -- that has happened twice so far.

    The CI tests work by restoring a MySQL database to a known good state and then, if needed, upgrading that database from Folsom to Grizzly. It then runs the upgrades already committed to trunk, and then the proposed patch set. Timings for each step are reported -- for example, with my biggest test database the upgrade from Folsom to Grizzly takes between 7 and 9 minutes to run, which isn't too bad. You can see an example log here.
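    The per-step timing report is simple to sketch with the stdlib alone. The callables below are hypothetical stand-ins for the real stages -- the actual harness shells out to restore a MySQL dump and run nova-manage -- so only the timing scaffolding is shown.

```python
import time

def timed_steps(steps):
    """Run each (name, callable) step and record how long it took."""
    timings = []
    for name, step in steps:
        start = time.monotonic()
        step()
        timings.append((name, time.monotonic() - start))
    return timings

# Hypothetical stand-ins for the real upgrade stages.
results = timed_steps([
    ("restore known-good dump", lambda: time.sleep(0.01)),
    ("upgrade folsom -> grizzly", lambda: time.sleep(0.01)),
    ("apply trunk migrations", lambda: time.sleep(0.01)),
    ("apply proposed patch set", lambda: time.sleep(0.01)),
])
for name, seconds in results:
    print(f"{name}: {seconds:.3f}s")
```

    Reporting wall-clock time per stage like this is what makes a slow proposed migration stand out against the baseline of the trunk upgrades that ran just before it.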

    I'd be interested in knowing if anyone else has sample databases they'd like to see checks run against. If so, reach out to me and we can make it happen.

    Tags for this post: openstack rackspace database ci mysql
    Related posts: Exploring a single database migration; OpenStack at 2013; Call for presentations for the 2014 OpenStack mini-conference; Moving on; Faster pip installs; Image handlers (in essex)

posted at: 03:30 | path: /openstack | permanent link to this entry

Previous page