Content here is by Michael Still mikal@stillhq.com. All opinions are my own.


Sun, 31 May 2015



The linux.conf.au 2016 Call For Proposals is open!

    The OpenStack community has been well represented at linux.conf.au over the last few years, which I think is reflective of both the growing level of interest in OpenStack in the general Linux community and the fact that OpenStack is one of the largest Python projects around these days. linux.conf.au is one of the region's biggest Open Source conferences, and has a solid reputation for deep technical content.

    It's time to make it all happen again, with the linux.conf.au 2016 Call For Proposals opening today! I'm especially keen to encourage talk proposals which are somehow more than introductions to various components of OpenStack. It's time to talk in detail about how people's networking deployments work, what container solutions we're using, and how we're deploying OpenStack in the real world to do seriously cool stuff.

    The conference is in the first week of February in Geelong, Australia. I'd be happy to chat with anyone who has questions about the CFP process.

    Tags for this post: openstack conference linux.conf.au lca2016
    Related posts: LCA 2007 Video: CFQ IO; LCA 2006: CFP closes today; I just noticed...; LCA2006 -- CFP opens soon!; I just noticed...; Updated: linux.conf.au 2007 MythTV tutorial homework

posted at: 22:44 | path: /openstack | permanent link to this entry


Thu, 15 Jan 2015



Another Nova spec update

    I started chasing down the list of spec freeze exceptions that had been requested, and that resulted in the list of specs for Kilo being updated. That updated list is below, but I'll do a separate post with the exception requests highlighted soon as well.

    API

    • Add more detailed network information to the metadata server: review 85673 (approved).
    • Add separated policy rule for each v2.1 api: review 127863 (requested a spec exception).
    • Add user limits to the limits API (as well as project limits): review 127094.
    • Allow all printable characters in resource names: review 126696 (approved).
    • Consolidate all console access APIs into one: review 141065 (approved).
    • Expose the lock status of an instance as a queryable item: review 127139 (abandoned); review 85928 (approved).
    • Extend api to allow specifying vnic_type: review 138808 (requested a spec exception).
    • Implement instance tagging: review 127281 (fast tracked, approved).
    • Implement the v2.1 API: review 126452 (fast tracked, approved).
    • Improve the return codes for the instance lock APIs: review 135506.
    • Microversion support: review 127127 (approved).
    • Move policy validation to just the API layer: review 127160 (approved).
    • Nova Server Count API Extension: review 134279 (fast tracked).
    • Provide a policy statement on the goals of our API policies: review 128560 (abandoned).
    • Sorting enhancements: review 131868 (fast tracked, approved, implemented).
    • Support JSON-Home for API extension discovery: review 130715 (requested a spec exception).
    • Support X509 keypairs: review 105034 (approved).


    API (EC2)

    • Expand support for volume filtering in the EC2 API: review 104450.
    • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).


    Administrative

    • Actively hunt for orphan instances and remove them: review 137996 (abandoned); review 138627.
    • Add totalSecurityGroupRulesUsed to the quota limits: review 145689.
    • Check that a service isn't running before deleting it: review 131633.
    • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
    • Implement a daemon version of rootwrap: review 105404 (requested a spec exception).
    • Log request id mappings: review 132819 (fast tracked).
    • Monitor the health of hypervisor hosts: review 137768.
    • Remove the assumption that there is a single endpoint for services that nova talks to: review 132623.


    Block Storage

    • Allow direct access to LVM volumes if supported by Cinder: review 127318.
    • Cache data from volumes on local disk: review 138292 (abandoned); review 138619.
    • Enhance iSCSI volume multipath support: review 134299 (requested a spec exception).
    • Failover to alternative iSCSI portals on login failure: review 137468 (requested a spec exception).
    • Give additional info in BDM when source type is "blank": review 140133.
    • Implement support for a DRBD driver for Cinder block device access: review 134153 (requested a spec exception).
    • Poll volume status: review 142828 (abandoned).
    • Refactor ISCSIDriver to support other iSCSI transports besides TCP: review 130721 (approved).
    • StorPool volume attachment support: review 115716 (approved, requested a spec exception).
    • Support Cinder Volume Multi-attach: review 139580 (approved).
    • Support iSCSI live migration for different iSCSI target: review 132323 (approved).


    Cells



    Containers Service



    Database

    • Develop and implement a profiler for SQL requests: review 142078 (abandoned).
    • Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved, implemented).
    • Nova db purge utility: review 132656.
    • Online schema change options: review 102545 (approved).
    • Support DB2 as a SQL database: review 141097 (fast tracked, approved).
    • Validate database migrations and model: review 134984 (approved).


    Hypervisor: Docker



    Hypervisor: FreeBSD

    • Implement support for FreeBSD networking in nova-network: review 127827.


    Hypervisor: Hyper-V

    • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved, implemented).
    • Instance hot resize: review 141219.


    Hypervisor: Ironic



    Hypervisor: VMWare

    • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
    • Add support for the HTML5 console: review 127283 (requested a spec exception).
    • Allow Nova to access a VMWare image store over NFS: review 126866.
    • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
    • Enable the mapping of raw cinder devices to instances: review 128697.
    • Implement vSAN support: review 128600 (fast tracked, approved).
    • Support multiple disks inside a single OVA file: review 128691.
    • Support the OVA image format: review 127054 (fast tracked, approved).


    Hypervisor: libvirt



    Instance features



    Internal

    • A lock-free quota implementation: review 135296 (approved).
    • Automate the documentation of the virtual machine state transition graph: review 94835.
    • Fake Libvirt driver for simulating HW testing: review 139927 (abandoned).
    • Flatten Aggregate Metadata in the DB: review 134573 (abandoned).
    • Flatten Instance Metadata in the DB: review 134945 (abandoned).
    • Implement a new code coverage API extension: review 130855.
    • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
    • Move to polling for cinder operations: review 135367.
    • PCI test cases for third party CI: review 141270.
    • Transition Nova to using the Glance v2 API: review 84887 (abandoned).
    • Transition to using glanceclient instead of our own home grown wrapper: review 133485 (approved).


    Internationalization

    • Enable lazy translations of strings: review 126717 (fast tracked, approved).


    Networking

    • Add a new linuxbridge VIF type, macvtap: review 117465 (abandoned).
    • Add a plugin mechanism for VIF drivers: review 136827 (abandoned).
    • Add support for InfiniBand SR-IOV VIF Driver: review 131729 (requested a spec exception).
    • Neutron DNS Using Nova Hostname: review 90150 (abandoned).
    • New VIF type to allow routing VM data instead of bridging it: review 130732 (approved, requested a spec exception).
    • Nova Plugin for OpenContrail: review 126446 (approved).
    • Refactor of the Neutron network adapter to be more maintainable: review 131413.
    • Use the Nova hostname in Neutron DNS: review 137669.
    • Wrap the Python NeutronClient: review 141108.


    Performance

    • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.


    Scheduler

    • A nested quota driver API: review 129420.
    • Add a filter to take into account hypervisor type and version when scheduling: review 137714.
    • Add an IOPS weigher: review 127123 (approved, implemented); review 132614.
    • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
    • Add soft affinity support for server group: review 140017 (approved).
    • Allow extra spec to match all values in a list by adding the ALL-IN operator: review 138698 (fast tracked, approved).
    • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
    • Allow the remove of servers from server groups: review 136487.
    • Cache aggregate metadata: review 141846.
    • Convert get_available_resources to use an object instead of dict: review 133728 (abandoned).
    • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
    • Create an object model to represent a request to boot an instance: review 127610 (approved).
    • Decouple services and compute nodes in the SQL database: review 126895 (approved).
    • Distribute PCI Requests Across Multiple Devices: review 142094.
    • Enable adding new scheduler hints to already booted instances: review 134746.
    • Fix the race conditions when migration with server-group: review 135527 (abandoned).
    • Implement resource objects in the resource tracker: review 127609 (approved, requested a spec exception).
    • Improve the ComputeCapabilities filter: review 133534 (requested a spec exception).
    • Isolate Scheduler DB for Filters: review 138444 (requested a spec exception).
    • Isolate the scheduler's use of the Nova SQL database: review 89893 (approved).
    • Let schedulers reuse filter and weigher objects: review 134506 (abandoned).
    • Move select_destinations() to using a request object: review 127612 (approved).
    • Persist scheduler hints: review 88983.
    • Refactor allocate_for_instance: review 141129.
    • Stop direct lookup for host aggregates in the Nova database: review 132065 (abandoned).
    • Stop direct lookup for instance groups in the Nova database: review 131553 (abandoned).
    • Support scheduling based on more image properties: review 138937.
    • Trusted computing support: review 133106.


    Scheduling



    Security

    • Make key manager interface interoperable with Barbican: review 140144 (fast tracked, approved).
    • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked, approved).
    • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507 (approved).


    Service Groups



posted at: 19:16 | path: /openstack/kilo | permanent link to this entry


Mon, 12 Jan 2015



Kilo Nova deploy recommendations

    What would a Nova developer tell a deployer to think about before their first OpenStack install? This was the question I wanted to answer for my linux.conf.au OpenStack miniconf talk, and writing this essay seemed like a reasonable way to take the bullet point list of ideas we generated and turn it into something that was a cohesive story. Hopefully this essay is also useful to people who couldn't make the conference talk.

    Please understand that none of these are hard rules -- what I seek is for you to consider your options and make informed decisions. It's really up to you how you deploy Nova.

    Operating environment

    • Consider what base OS you use for your hypervisor nodes if you're using Linux. I know that many environments have standardized on a given distribution, and that many have a preference for a long term supported release. However, Nova is at its most basic level a way of orchestrating tools packaged by your distribution via APIs. If those underlying tools are buggy, then your Nova experience will suffer as well. Sometimes we can work around known issues in older versions of our dependencies, but often those work-arounds are hard to implement (and therefore likely to be less than perfect) or have performance impacts. Hypervisor kernel panics and disk image corruption are just two examples of the problems you can encounter. We are trying to work with distributions on ensuring they backport fixes, but the distributions might not always be willing to do that. Sometimes upgrading the base OS on your hypervisor nodes might be a better call.
    • The version of Python you use matters. The OpenStack project only tests with specific versions of Python, and there can be bugs between releases. This is especially true for very old versions of Python (anything older than 2.7) and new versions of Python (Python 3 is not supported for example). Your choice of base OS will affect the versions of Python available, so this is related to the previous point.
    • There are existing configuration management recipes for most configuration management systems. I'd avoid reinventing the wheel here and use the community supported recipes. There are definitely resources available for chef, puppet, juju, ansible and salt. If you're building a very large deployment from scratch consider triple-o as well. Please please please don't fork the community recipes. I know it's tempting, but contribute to upstream instead. Invariably upstream will continue developing their stuff, and if you fork you'll spend a lot of effort keeping in sync.
    • Have a good plan for log collection and retention at your intended scale. The hard reality at the moment is that diagnosing Nova often requires that you turn on debug logging, which is very chatty. Whilst we're happy to take bug reports where we've gotten the log level wrong, we haven't had a lot of success at systematically fixing this issue. Your log infrastructure therefore needs to be able to handle the demands of debug logging when it's turned on. If you're using central log servers think seriously about how much disk they require. If you're not doing centralized syslog logging, perhaps consider something like logstash.
    • Pay attention to memory usage on your controller nodes. OpenStack python processes can often consume hundreds of megabytes of virtual memory space. If you run many controller services on the same node, make sure you have enough RAM to deal with the number of processes that will, by default, be spawned for the many service endpoints. After a day or so of running a controller node, check in on the virtual memory used by the Python processes and make any adjustments needed to your "workers" configuration settings. There's a quick sketch of that check below.
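
    A quick sketch of what that check might look like (the ps and awk incantation assumes standard Linux ps output in kilobytes, and the worker option names are the ones I believe apply to the API services -- check the documentation for your release):

    $ # Show resident memory for each nova-api worker process
    $ ps -eo rss,cmd | grep '[n]ova-api'
    $ # Total it up in megabytes
    $ ps -eo rss,cmd | grep '[n]ova-api' | awk '{total += $1} END {print total / 1024 " MB"}'
    $ # If that total is more than the node can afford, trim the worker counts in
    $ # nova.conf, for example osapi_compute_workers and metadata_workers.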


    Scale
    • Estimate your final scale now. Sure, you're building a proof of concept, but these things have a habit of becoming entrenched. If you are planning a deployment that is likely to end up being thousands of nodes, then you are going to need to deploy with cells. This is also possibly true if you're going to have more than one hypervisor or hardware platform in your deployment -- it's very common to have a cell per hypervisor type or per hardware platform. Cells is relatively cheap to deploy for your proof of concept, and it helps when that initial deploy grows into a bigger thing, so it is worth considering from the beginning. It should be noted however that not all features are currently implemented in cells. We are working on this at the moment though.
    • Consider carefully what SQL database to use. Nova supports many SQL databases via sqlalchemy, but some are better tested and more widely deployed than others. For example, the Postgres back end is rarely deployed and is less tested. I'd recommend a variant of MySQL for your deployment. Personally I've seen good performance on Percona, but I know that many use the stock MySQL as well. There are known issues at the moment with Galera as well, so show caution there. There is active development happening on the select-for-update problems with Galera at the moment, so that might change by the time you get around to deploying in production. You can read more about our current Galera problems on Jay Pipes' blog.
    • We support read only replicas of the SQL database. Nova supports offloading read only SQL traffic to read only replicas of the main SQL database, but I do not believe this is widely deployed. It might be of interest to you though.
    • Expect a lot of SQL database connections. While Nova has the nova-conductor service to control the number of connections to the database server, other OpenStack services do not, and you will quickly outpace the number of default connections allowed, at least for a MySQL deployment. Actively monitor your SQL database connection counts so you know before you run out (there's a sketch of how to do that after this list). Additionally, there are many places in Nova where a user request will block on a database query, so if your SQL back end isn't keeping up this will affect performance of your entire Nova deployment.
    • There are options with message queues as well. We currently support rabbitmq, zeromq and qpid. However, rabbitmq is the original and by far the most widely deployed. rabbitmq is therefore a reasonable default choice for deployment.
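
    For the connection count monitoring mentioned above, something as simple as this is a reasonable starting point on a MySQL-family server (a sketch only -- wire the numbers into whatever monitoring system you already run):

    $ # How many connections are open right now, and what is the configured ceiling?
    $ mysql -e "SHOW STATUS LIKE 'Threads_connected'; SHOW VARIABLES LIKE 'max_connections';"
    $ # The high water mark since the server was last restarted
    $ mysql -e "SHOW STATUS LIKE 'Max_used_connections';"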


    Hypervisors
    • Not all hypervisor drivers are created equal. Let's be frank here -- some hypervisor drivers just aren't as actively developed as others. This is especially true for drivers which aren't in the Nova code base -- at least the ones the Nova team manage are updated when we change the internals of Nova. I'm not a hypervisor bigot -- there is a place in the world for many different hypervisor options. However, the start of a Nova deploy might be the right time to consider what hypervisor you want to use. I'd personally recommend drivers in the Nova code base with active development teams and good continuous integration, but ultimately you have to select a driver based on its merits in your situation. I've included some more detailed thoughts on how to evaluate hypervisor drivers later in this post, as I don't want to go off on a big tangent during my nicely formatted bullet list.
    • Remember that the hypervisor state is interesting debugging information. For example with the libvirt hypervisor, the contents of /var/lib/nova/instances are super useful for debugging misbehaving instances. Additionally, all of the existing libvirt tools work, so you can use those to investigate as well. However, I strongly recommend you only change instance state via Nova, and not go directly to the hypervisor.


    Networking
    • Avoid new deployments of nova-network. nova-network has been on the deprecation path for a very long time now, and we're currently working on the final steps of a migration plan for nova-network users to neutron. If you're a new deployment of Nova and therefore don't yet depend on any of the features of nova-network, I'd start with Neutron from the beginning. This will save you a possibly troublesome migration to Neutron later.


    Testing and upgrades
    • You need a test lab. For a non-trivial deployment, you need a realistic test environment. It's expected that you test all upgrades before you do them in production, and rollbacks can sometimes be problematic. For example, some database migrations are very hard to roll back, especially if new instances have been created in the time it took you to decide to roll back. Perhaps consider turning off API access (or putting the API into a read only state) while you are validating a production deploy post upgrade, that way you can restore a database snapshot if you need to undo the upgrade. We know this isn't perfect and are working on a better upgrade strategy for information stored in the database, but we will always expect you to test upgrades before deploying them.
    • Test database migrations on a copy of your production database before doing them for real. Another reason to test upgrades before doing them in production is because some database migrations can be very slow. It's hard for the Nova developers to predict which migrations will be slow, but we do try to test for this and minimize the pain. However, aspects of your deployment can affect this in ways we don't expect -- for example if you have large numbers of volumes per instance, then that could result in database tables being larger than we expect. You should always test database migrations in a lab and report any problems you see. There's a sketch of one way to do that after this list.
    • Think about your upgrade strategy in general. While we now support having the control infrastructure running a newer release than the services on hypervisor nodes, we only support that for one release (so you could have your control plane running Kilo for example while you are still running Juno on your hypervisors, you couldn't run Icehouse on the hypervisors though). Are you going to upgrade every six months? Or are you going to do it less frequently but step through a series of upgrades in one session? I suspect the latter option is more risky -- if you encounter a bug in a previous release we would need to backport a fix, which is a much slower process than fixing the most recent release. There are also deployments which choose to "continuously deploy" from trunk. This gets them access to features as they're added, but means that the deployments need to have more operational skill and a closer association with the upstream developers. In general continuous deployers are larger public clouds as best I can tell.
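
    A minimal version of that database migration test looks something like this, assuming MySQL and a throwaway lab database server (the lab-db hostname is just a placeholder):

    $ # Dump the production nova database without locking it up
    $ mysqldump --single-transaction nova > nova-backup.sql
    $ # Load the dump into the lab database server
    $ mysql -h lab-db -e "CREATE DATABASE nova"
    $ mysql -h lab-db nova < nova-backup.sql
    $ # Point a lab nova.conf at that copy, then run and time the schema migrations
    $ time nova-manage db sync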


    libvirt specific considerations
    • For those intending to run the libvirt hypervisor driver, not all libvirt hypervisors are created equal. libvirt implements pluggable hypervisors, so if you select the Nova libvirt hypervisor driver, you then need to select what hypervisor to use with libvirt as well. It should be noted however that some hypervisors work better than others, with kvm being the most widely deployed.
    • There are two types of storage for instances. There is "instance storage", which is block devices that exist for the life of the instance and are then cleaned up when the instance is destroyed. There is also block storage provided by Cinder, which is persistent and arguably easier to manage than instance storage. I won't discuss storage provided by Cinder any further however, because it is outside the scope of this post. Instance storage is provided by a plug-in layer in the libvirt hypervisor driver, which presents you with another set of deployment decisions (see the sketch after this list).
    • Shared instance storage is attractive, but it comes at a cost. Shared instance storage is an attractive option, but isn't required for live migration of instances using the libvirt hypervisor. Think about the costs of shared storage though -- for example putting everything on network attached storage is likely to be expensive, especially if most of your instances don't need the facility. There are other options such as Ceph, but the storage interface layer in libvirt is one of the areas of code where we need to improve testing so be wary of bugs before relying on those storage back ends.
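
    As a rough illustration of that plug-in layer, this is how you'd check which instance storage back end a given hypervisor node is using (the option values listed are the ones I'm aware of -- double check against the documentation for your release before changing anything, and the output line shown is just an example):

    $ # Which instance storage back end is this hypervisor configured for?
    $ grep images_type /etc/nova/nova.conf
    images_type = rbd
    $ # Values include default (flat files under the instances directory),
    $ # lvm (LVM logical volumes) and rbd (a Ceph pool).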


    Thoughts on how to evaluate hypervisor drivers

    As promised, I also have some thoughts on how to evaluate which hypervisor driver is the right choice for you. First off, if your organization has a lot of experience with a particular hypervisor, then there is always value in that. If that is the case, then you should seriously consider running the hypervisor you already have experience with, as long as that hypervisor has a driver for Nova which meets the criteria below.

    What's important is to be looking for a driver which works well with Nova, and a good measure of that is how well the driver development team works with the Nova development team. The obvious best case here is where both teams are the same people -- which is true for drivers that are in the Nova code base. I am aware there are drivers that live outside of Nova's code repository, but you need to remember that the interface these drivers plug into isn't a stable or versioned interface. The risk of those drivers being broken by the ongoing development of Nova is very high. Additionally, only a very small number of those "out of tree" drivers contribute to our continuous integration testing. That means that the Nova team also doesn't know when those drivers are broken. The breakages can also be subtle, so if your vendor isn't at the very least doing tempest runs against their out of tree driver before shipping it to you then I'd be very worried.

    You should also check out how many bugs are open in LaunchPad for your chosen driver (this assumes the Nova team is aware of the existence of the driver I suppose). Here's an example link to the libvirt driver bugs currently open. As well as total bug count, I'd be looking for bug close activity -- it's nice if there is a very small number of bugs filed, but perhaps that's because there aren't many users. It doesn't necessarily mean the team for that driver is super awesome at closing bugs. The easiest way to look into bug close rates (and general code activity) would be to check out the code for Nova and then look at the log for your chosen driver. For example for the libvirt driver again:

    $ git clone http://git.openstack.org/openstack/nova
    $ cd nova/nova/virt/libvirt
    $ git log .
    


    That will give you a report on all the commits ever for that driver. You don't need to read the entire report, but it will give you an idea of what the driver authors have recently been thinking about.
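
    If you want slightly more quantitative numbers from the same place, a couple of extra git commands run from that same driver directory will give you a rough feel for activity levels:

    $ # How many commits have touched this driver in the last six months?
    $ git log --oneline --since="6 months ago" . | wc -l
    $ # And who has been doing that work?
    $ git shortlog -s -n --since="6 months ago" -- .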

    Another good metric is the specification activity for your driver. Specifications are the formal design documents that Nova adopted for the Juno release, and they document all the features that we're currently working on. I write summaries of the current state of Nova specs regularly, which you can see posted at stillhq.com with this being the most recent summary at the time of writing this post. You should also check how much your driver authors interact with the core Nova team. The easiest way to do that is probably to keep an eye on the Nova team meeting minutes, which are posted online.

    Finally, the OpenStack project believes strongly in continuous integration testing. Testing has clear value in the number of bugs it finds in code before our users experience them, and I would be very wary of driver code which isn't continuously integrated with Nova. Thus, you need to ensure that your driver has well maintained continuous integration testing. This is easy for "in tree" drivers, as we do that for all of them. For out of tree drivers, continuous integration testing is done with a thing called "third party CI".

    How do you determine if a third party CI system is well maintained? First off, I'd start by determining if a third party CI system actually exists by looking at OpenStack's list of known third party CI systems. If the third party isn't listed on that page, then that's a very big warning sign. Next you can use Joe Gordon's lastcomment tool to see when a given CI system last reported a result:

    $ git clone https://github.com/jogo/lastcomment
    $ ./lastcomment.py --name "DB Datasets CI"
    last 5 comments from 'DB Datasets CI'
    [0] 2015-01-07 00:46:33 (1:35:13 old) https://review.openstack.org/145378 'Ignore 'dynamic' addr flag on gateway initialization' 
    [1] 2015-01-07 00:37:24 (1:44:22 old) https://review.openstack.org/136931 'Use session with neutronclient' 
    [2] 2015-01-07 00:35:33 (1:46:13 old) https://review.openstack.org/145377 'libvirt: Expanded test libvirt driver' 
    [3] 2015-01-07 00:29:50 (1:51:56 old) https://review.openstack.org/142450 'ephemeral file names should reflect fs type and mkfs command' 
    [4] 2015-01-07 00:15:59 (2:05:47 old) https://review.openstack.org/142534 'Support for ext4 as default filesystem for ephemeral disks' 
    


    You can see here that the most recent run is 1 hour 35 minutes old when I ran this command. That's actually pretty good given that I wrote this while most of America was asleep. If the most recent run is days old, that's another warning sign. If you're left in doubt, then I'd recommend appearing in the OpenStack IRC channels on freenode and asking for advice. OpenStack has a number of requirements for third party CI systems, and I haven't discussed many of them here. There is more detail on what OpenStack considers a "well run CI system" on the OpenStack Infrastructure documentation page.

    General operational advice

    Finally, I have some general advice for operators of OpenStack. There is an active community of operators who discuss their use of the various OpenStack components at the openstack-operators mailing list; if you're deploying Nova you should consider joining that mailing list. While you're welcome to ask questions about deploying OpenStack at that list, you can also ask questions at the more general OpenStack mailing list if you want to.

    There are also many companies now which will offer to operate an OpenStack cloud for you. For some organizations engaging a subject matter expert will be the right decision. Probably the most obvious way to evaluate which of those companies to use is to look at their track record of successful deployments, as well as their overall involvement in the OpenStack community. You need a partner who can advocate for you with the OpenStack developers, as well as keeping an eye on what's happening upstream to ensure it meets your needs.

    Conclusion

    Thanks for reading so far! I hope this document is useful to someone out there. I'd love to hear your feedback -- are there other things we wish deployers considered before committing to a plan? Am I simply wrong somewhere? Finally, this is the first time that I've posted an essay form of a conference talk instead of just the slide deck, and I'd be interested in whether people find this format more useful than a YouTube video post conference. Please drop me a line and let me know if you find this useful!

    Tags for this post: openstack nova
    Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic

posted at: 14:11 | path: /openstack | permanent link to this entry


Sun, 14 Dec 2014



How are we going with Nova Kilo specs after our review day?

posted at: 15:15 | path: /openstack/kilo | permanent link to this entry


Soft deleting instances and the reclaim_instance_interval in Nova

    I got asked the other day how the reclaim_instance_interval in Nova works, so I thought I'd write it up here in case it's useful to other people.

    First off, there is a periodic task run by the nova-compute process (or the compute manager, as a developer would know it), which runs every reclaim_instance_interval seconds. It looks for instances in the SOFT_DELETED state which don't have any tasks running at the moment for the hypervisor node that nova-compute is running on.

    For each instance it finds, it checks if the instance has been soft deleted for at least reclaim_instance_interval seconds. From my reading of the code, this has the side effect that an instance needs to be deleted for at least reclaim_instance_interval seconds before it will be removed from disk, but the instance might be up to approximately twice that age (if it was deleted just as the periodic task ran, it would skip the next run and therefore not be deleted for two intervals).

    Once these conditions are met, the instance is deleted from disk.
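
    To make that concrete from an operator's point of view, this is roughly what the feature looks like in use (a sketch; "my-instance" is just a placeholder name, and the option lives in nova.conf on the compute node):

    $ # reclaim_instance_interval = 3600 would reclaim instances (at the earliest)
    $ # an hour after deletion; the default of 0 disables soft delete entirely
    $ nova delete my-instance        # instance moves to SOFT_DELETED, disk is kept
    $ nova restore my-instance       # undo the delete while still inside the window
    $ nova force-delete my-instance  # or skip the wait and remove it immediately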

    Tags for this post: openstack nova instance delete
    Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Historical revisionism; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler

posted at: 13:51 | path: /openstack | permanent link to this entry


Mon, 01 Dec 2014



Specs for Kilo, an update

posted at: 20:13 | path: /openstack/kilo | permanent link to this entry


Thu, 23 Oct 2014



Specs for Kilo

posted at: 19:27 | path: /openstack/kilo | permanent link to this entry


Mon, 13 Oct 2014



One week of Nova Kilo specifications

posted at: 03:27 | path: /openstack/kilo | permanent link to this entry


Sun, 12 Oct 2014



Compute Kilo specs are open

    From my email last week on the topic:
    I am pleased to announce that the specs process for nova in kilo is
    now open. There are some tweaks to the previous process, so please
    read this entire email before uploading your spec!
    
    Blueprints approved in Juno
    ===========================
    
    For specs approved in Juno, there is a fast track approval process for
    Kilo. The steps to get your spec re-approved are:
    
     - Copy your spec from the specs/juno/approved directory to the
    specs/kilo/approved directory. Note that if we declared your spec to
    be a "partial" implementation in Juno, it might be in the implemented
    directory. This was rare however.
     - Update the spec to match the new template
     - Commit, with the "Previously-approved: juno" commit message tag
     - Upload using git review as normal
    
    Reviewers will still do a full review of the spec, we are not offering
    a rubber stamp of previously approved specs. However, we are requiring
    only one +2 to merge these previously approved specs, so the process
    should be a lot faster.
    
    A note for core reviewers here -- please include a short note on why
    you're doing a single +2 approval on the spec so future generations
    remember why.
    
    Trivial blueprints
    ==================
    
    We are not requiring specs for trivial blueprints in Kilo. Instead,
    create a blueprint in Launchpad
    at https://blueprints.launchpad.net/nova/+addspec and target the
    specification to Kilo. New, targeted, unapproved specs will be
    reviewed in weekly nova meetings. If it is agreed they are indeed
    trivial in the meeting, they will be approved.
    
    Other proposals
    ===============
    
    For other proposals, the process is the same as Juno... Propose a spec
    review against the specs/kilo/approved directory and we'll review it
    from there.
    

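    For previously approved specs, the fast track steps above boil down to something like this (a sketch; "my-feature.rst" is a placeholder for your own spec's filename):

    $ git clone http://git.openstack.org/openstack/nova-specs
    $ cd nova-specs
    $ cp specs/juno/approved/my-feature.rst specs/kilo/approved/
    $ # ... edit the copy so it matches the new template ...
    $ git add specs/kilo/approved/my-feature.rst
    $ git commit   # include the "Previously-approved: juno" tag in the commit message
    $ git review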

    After a week I'm seeing something interesting. In Juno the specs process was new, and we saw a pause in the development cycle while people actually wrote down their designs before sending the code. This time around people know what to expect, and there are leftover specs from Juno lying around. We're therefore seeing specs approved much faster than in Juno. This should reduce the effect of the "pipeline flush" that we saw in Juno.

    So far we have five approved specs after only a week.

    Tags for this post: openstack kilo blueprints spec nova
    Related posts: One week of Nova Kilo specifications; Specs for Kilo; How are we going with Nova Kilo specs after our review day?; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy

posted at: 16:39 | path: /openstack/kilo | permanent link to this entry


Tue, 30 Sep 2014



On layers

    There's been a lot of talk recently about what we should include in OpenStack and what is out of scope. This is interesting, in that many of us used to believe that we should do "everything". I think what's changed is that we're learning that solving all the problems in the world is hard, and that we need to re-focus on our core products. In this post I want to talk through the various "layers" proposals that have been made in the last month or so. Layers don't directly address what we should include in OpenStack or not, but they are a useful mechanism for trying to break up OpenStack into simpler to examine chunks, and I think that makes them useful in their own right.

    I would also address what I believe the scope of the OpenStack project should be here, but I feel that would make this post so long that no one would ever actually read it. Instead, I'll cover that in a later post in this series. For now, let's explore what people are proposing as a layering model for OpenStack.

    What are layers?

    Dean Troyer did a good job of describing a layers model for the OpenStack project on his blog quite a while ago. He proposed the following layers (this is a summary, you should really read his post):

    • layer 0: operating system and Oslo
    • layer 1: basic services -- Keystone, Glance, Nova
    • layer 2: extended basics -- Neutron, Cinder, Swift, Ironic
    • layer 3: optional services -- Horizon and Ceilometer
    • layer 4: turtles all the way up -- Heat, Trove, Moniker / Designate, Marconi / Zaqar


    Dean notes that Neutron would move to layer 1 when nova-network goes away and Neutron becomes required for all compute deployments. Dean's post was also over a year ago, so it misses services like Barbican that have appeared since then. Services are only allowed to require services from lower-numbered layers, but can use services from higher-numbered layers as optional add-ins. So Nova for example can use Neutron, but cannot require it until it moves into layer 1. Similarly, there have been proposals to add Ceilometer as a dependency to schedule instances in Nova, and if we were to do that then we would need to move Ceilometer down to layer 1 as well. (I think doing that would be a mistake by the way, and have argued against it during at least two summits).

    Sean Dague re-ignited this discussion with his own blog post relatively recently. Sean proposes new names for most of the layers, but the intent remains the same -- a compute-centric view of the services that are required to build a working OpenStack deployment. Sean and Dean's layer definitions are otherwise strongly aligned, and Sean notes that the probability of seeing something deployed at a given installation reduces as the layer count increases -- so for example Trove is way less commonly deployed than Nova, because the set of people who want a managed database as a service is smaller than the set of people who just want to be able to boot instances.

    Now, I'm not sure I agree with the compute centric nature of the two layers proposals mentioned so far. I see people installing just Swift to solve a storage problem, and I think that's a completely valid use of OpenStack and should be supported as a first class citizen. On the other hand, resolving my concern with the layers model there is trivial -- we just move Swift to layer 1.

    What do layers give us?

    Sean makes a good point about the complexity of OpenStack installs and how we scare away new users. I agree completely -- we show people our architecture diagrams which are deliberately confusing, and then we wonder why they're not impressed. I think we do it because we're proud of the scope of the thing we've built, but I think our audiences walk away thinking that we don't really know what problem we're trying to solve. Do I really need to deploy Horizon to have working compute? No of course not, but our architecture diagrams don't make that obvious. I gave a talk along these lines at pyconau, and I think as a community we need to be better at explaining to people what we're trying to do, while remembering that not everyone is as excited about writing a whole heap of cloud infrastructure code as we are. This is also why the OpenStack miniconf at linux.conf.au 2015 has pivoted from being a generic OpenStack chatfest to being something more solidly focussed on issues of interest to deployers -- we're just not great at talking to our users and we need to reboot the conversation at community conferences until it's something which meets their needs.


    We intend this diagram to amaze and confuse our victims


    Agreeing on a set of layers gives us a framework within which to describe OpenStack to our users. It lets us communicate the services we think are basic and always required, versus those which are icing on the cake. It also lets us explain the dependency between projects better, and that helps deployers work out what order to deploy things in.

    Do layers help us work out what OpenStack should focus on?

    Sean's blog post then pivots and starts talking about the size of the OpenStack ecosystem -- or the "size of our tent" as he phrases it. While I agree that we need to shrink the number of projects we're working on at the moment, I feel that the blog post is missing a logical link between the previous layers discussion and the tent size conundrum. It feels to me that Sean wanted to propose that OpenStack focus on a specific set of layers, but didn't quite get there for whatever reason.

    Next Monty Taylor had a go at furthering this conversation with his own blog post on the topic. Monty starts by making a very important point -- he (like everyone involved) wants the OpenStack community to be as inclusive as possible. I want lots of interesting people at the design summits, even if they don't work directly on projects that OpenStack ships. You can be a part of the OpenStack community without having our logo on your product.

    A concrete example of including non-OpenStack projects in our wider community was visible at the Atlanta summit -- I know for a fact that there were software engineers at the summit who work on Google Compute Engine. I know this because I used to work with them at Google when I was an SRE there. I have no problem with people working on competing products being at our summits, as long as they are there to contribute meaningfully in the sessions, and not just take from us. It needs to be a two-way street. Another concrete example is Ceph. I think Ceph is cool, and I'm completely fine with people using it as part of their OpenStack deploy. What upsets me is when people conflate Ceph with OpenStack. They are different. They're separate. And that is fine. Let's just not confuse people by saying Ceph is part of the OpenStack project -- it simply isn't because it doesn't fall under our governance model. Ceph is still a valued member of our community and more than welcome at our summits.

    Do layers help us work out what to focus OpenStack on for now? I think they do. Should we simply say that we're only going to work on a single layer? Absolutely not. What we've tried to do up until now is have OpenStack be a single big thing, what we call "the integrated release". I think layers gives us a tool to find logical ways to break that thing up. Perhaps we need a smaller integrated release, but then continue with the other projects but on their own release cycles? Or perhaps they release at the same time, but we don't block the release of a layer 1 service on the basis of release critical bugs in a layer 4 service?

    Is there consensus on what sits in each layer?

    Looking at the posts I can find on this topic so far, I'd have to say the answer is no. We're close, but we're not aligned yet. For example, one proposal has a tweak to the previously proposed layer model that adds Cinder, Designate and Neutron down into layer 1 (basic services). The author argues that this is because stateless cloud isn't particularly useful to users of OpenStack. However, I think this is wrong to be honest. I can see that stateless cloud isn't super useful by itself, but that argument assumes OpenStack is the only piece of infrastructure that a given organization has. Perhaps that's true for the public cloud case, but the vast majority of OpenStack deployments at this point are private clouds. So, you're an existing IT organization and you're deploying OpenStack to increase the level of flexibility in compute resources. You don't need to deploy Cinder or Designate to do that. Let's take the storage case for a second -- our hypothetical IT organization probably already has some form of storage -- a SAN, or NFS appliances, or something like that. So stateful cloud is easy for them -- they just have their instances mount resources from those existing storage pools like they would any other machine. Eventually they'll decide that hand managing that is horrible and move to Cinder, but that's probably later once they've gotten through the initial baby step of deploying Nova, Glance and Keystone.

    The first step to using layers to decide what we should focus on is to decide what is in each layer. I think the conversation needs to revolve around that for now, because if we drift off into whether existing in a given layer means you're voted off the OpenStack island, then we'll never even come up with a set of agreed layers.

    Let's ignore tents for now

    The size of the OpenStack "tent" is the metaphor being used at the moment for working out what to include in OpenStack. As I say above, I think we need to reach agreement on what is in each layer before we can move on to that very important conversation.

    Conclusion

    Given the focus of this post is the layers model, I want to stop introducing new concepts here for now. Instead let me summarize where I stand so far -- I think the layers model is useful. I also think the layers should be an inverted pyramid -- layer 1 should be as small as possible for example. This is because of the dependency model that the layers model proposes -- it is important to keep the list of things that a layer 2 service must use as small and coherent as possible. Another reason to keep the lower layers as small as possible is because each layer represents the smallest possible increment of an OpenStack deployment that we think is reasonable. We believe it is currently reasonable to deploy Nova without Cinder or Neutron for example.

    Most importantly of all, having those incremental stages of OpenStack deployment gives us a framework we have been missing in talking to our deployers and users. It makes OpenStack less confusing to outsiders, as it gives them bite sized morsels to consume one at a time.

    So here are the layers as I see them for now:

    • layer 0: operating system, and Oslo
    • layer 1: basic services -- Keystone, Glance, Nova, and Swift
    • layer 2: extended basics -- Neutron, Cinder, and Ironic
    • layer 3: optional services -- Horizon, and Ceilometer
    • layer 4: application services -- Heat, Trove, Designate, and Zaqar


    I am not saying that everything inside a single layer is required to be deployed simultaneously, but I do think it's reasonable for Ceilometer to assume that Swift is installed and functioning. The big difference here between my view of layers and that of Dean, Sean and Monty is that I think that Swift is a layer 1 service -- it provides basic functionality that may be assumed to exist by services above it in the model.

    I believe that when projects come to the Technical Committee requesting incubation or integration, they should specify what layer they see their project sitting at, and the justification for a lower layer number should be harder than that for a higher layer. So for example, we should be reasonably willing to accept proposals at layer 4, whilst we should be super concerned about the implications of adding another project at layer 1.

    In the next post in this series I'll try to address the size of the OpenStack "tent", and what projects we should be focussing on.

    Tags for this post: openstack kilo technical committee tc layers
    Related posts: One week of Nova Kilo specifications; Specs for Kilo; Working on review comments for Chapters 2, 3 and 4 tonight; What do you do when you care about a standard...; Compute Kilo specs are open; Specs for Kilo

posted at: 18:57 | path: /openstack/kilo | permanent link to this entry


Blueprints implemented in Nova during Juno

posted at: 13:56 | path: /openstack/juno | permanent link to this entry


Mon, 29 Sep 2014



Chronological list of Juno Nova mid-cycle meetup posts

posted at: 23:10 | path: /openstack/juno | permanent link to this entry


My candidacy for Kilo Compute PTL

    This is mostly historical at this point, but I forgot to post it here when I emailed it a week or so ago. So, for future reference:

    I'd like another term as Compute PTL, if you'll have me.
    
    We live in interesting times. openstack has clearly gained a large
    amount of mind share in the open cloud marketplace, with Nova being a
    very commonly deployed component. Yet, we don't have a fantastic
    container solution, which is our biggest feature gap at this point.
    Worse -- we have a code base with a huge number of bugs filed against
    it, an unreliable gate because of subtle bugs in our code and
    interactions with other openstack code, and have a continued need to
    add features to stay relevant. These are hard problems to solve.
    
    Interestingly, I think the solution to these problems calls for a
    social approach, much like I argued for in my Juno PTL candidacy
    email. The problems we face aren't purely technical -- we need to work
    out how to pay down our technical debt without blocking all new
    features. We also need to ask for understanding and patience from
    those feature authors as we try and improve the foundation they are
    building on.
    
    The specifications process we used in Juno helped with these problems,
    but one of the things we've learned from the experiment is that we
    don't require specifications for all changes. Let's take an approach
    where trivial changes (no API changes, only one review to implement)
    don't require a specification. There will of course sometimes be
    variations on that rule if we discover something, but it means that
    many micro-features will be unblocked.
    
    In terms of technical debt, I don't personally believe that pulling
    all hypervisor drivers out of Nova fixes the problems we face, it just
    moves the technical debt to a different repository. However, we
    clearly need to discuss the way forward at the summit, and come up
    with some sort of plan. If we do something like this, then I am not
    sure that the hypervisor driver interface is the right place to do
    that work -- I'd rather see something closer to the hypervisor itself
    so that the Nova business logic stays with Nova.
    
    Kilo is also the release where we need to get the v2.1 API work done
    now that we finally have a shared vision for how to progress. It took
    us a long time to get to a good shared vision there, so we need to
    ensure that we see that work through to the end.
    
    We live in interesting times, but they're also exciting as well.
    


    I have since been elected unopposed, so thanks for that!

    Tags for this post: openstack kilo compute ptl
    Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno Nova PTL Candidacy; Review priorities as we approach juno-3; Thoughts from the PTL; Compute Kilo specs are open

posted at: 18:34 | path: /openstack/kilo | permanent link to this entry


Thu, 21 Aug 2014



Juno nova mid-cycle meetup summary: conclusion

posted at: 23:47 | path: /openstack/juno | permanent link to this entry


Juno nova mid-cycle meetup summary: the next generation Nova API

    This is the final post in my series covering the highlights from the Juno Nova mid-cycle meetup. In this post I will cover our next generation API, which used to be called the v3 API but is largely now referred to as the v2.1 API. Getting to this point has been one of the more painful processes I think I've ever seen in Nova's development history, and I think we've learnt some important things about how large distributed projects operate along the way. My hope is that we remember these lessons next time we hit something as contentious as our API re-write has been.

    Now on to the API itself. It started out as an attempt to improve our current API to be more maintainable and less confusing to our users. We deliberately decided that we would not focus on adding features, but instead attempt to reduce as much technical debt as possible. This development effort went on for about a year before we realized we'd made a mistake. The mistake we made is that we assumed that our users would agree it was trivial to move to a new API, and that they'd do that even if there weren't compelling new features, which it turned out was entirely incorrect.

    I want to make it clear that this wasn't a mistake on the part of the v3 API team. They implemented what the technical leadership of Nova at the time asked for, and were very surprised when we discovered our mistake. We've now spent over a release cycle trying to recover from that mistake as gracefully as possible, but the upside is that the API we will be delivering is significantly more future proof than what we have in the current v2 API.

    At the Atlanta Juno summit, it was agreed that the v3 API would never ship in its current form, and that what we would instead do is provide a v2.1 API. This API would be 99% compatible with the current v2 API, with the incompatible things being stuff like if you pass a malformed parameter to the API we will now tell you instead of silently ignoring it, which we call 'input validation'. The other thing we are going to add in the v2.1 API is a system of 'micro-versions', which allow a client to specify what version of the API it understands, and for the server to gracefully degrade to older versions if required.

    This micro-version system is important, because the next step is to then start adding the v3 cleanups and fixes into the v2.1 API, but as a series of micro-versions. That way we can drag the majority of our users with us into a better future, without abandoning users of older API versions. I should note at this point that the mechanics for deciding the minimum micro-version that a version of Nova will support are largely undefined at the moment. My instinct is that we will tie to stable release versions in some way; if your client dates back to a release of Nova that we no longer support, then we might expect you to upgrade. However, that hasn't been debated yet, so don't take my thoughts on that as rigid truth.
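
    To make the idea a little more concrete, the client side of this will probably end up looking something like the sketch below. Treat the header name and version number as purely illustrative -- as I said above, the mechanics were still being worked out when this was written (and a real request would also need an authentication token):

    $ # Ask the compute API for a specific micro-version of its behaviour
    $ curl -H "X-OpenStack-Nova-API-Version: 2.3" http://compute.example.com:8774/v2.1/servers
    $ # A client that sends no version header gets the oldest supported behaviour,
    $ # and the server advertises the range of versions it is able to speak.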

    Frustratingly, the intent of the v2.1 API has been agreed and unchanged since the Atlanta summit, yet we're late in the Juno release and most of the work isn't done yet. This is because we got bogged down in the mechanics of how micro-versions will work, and how the translation for older API versions will work inside the Nova code later on. We finally unblocked this at the mid-cycle meetup, which means this work can finally progress again.

    The main concern that we needed to resolve at the mid-cycle was the belief that if the v2.1 API was implemented as a series of translations on top of the v3 code, then the translation layer would be quite thick and complicated. This raises issues of maintainability, as well as the amount of code we need to understand. The API team has now agreed to produce an API implementation that is just the v2.1 functionality, and will then layer things on top of that. This is actually invisible to users of the API, but it leaves us with an implementation where changes after v2.1 are additive, which should be easier to maintain.

    One of the other changes in the original v3 code is that we stopped proxying functionality for Neutron, Cinder and Glance. With the decision to implement a v2.1 API instead, we will need to rebuild that proxying implementation. To unblock v2.1, and based on advice from the HP and Rackspace public cloud teams, we have decided to delay implementing these proxies. So, the first version of the v2.1 API we ship will not have proxies, but later versions will add them in. The current v2 API implementation will not be removed until all the proxies have been added to v2.1. This is prompted by the belief that many advanced API users don't use the Nova API proxies, and therefore could move to v2.1 without them being implemented.

    Finally, I want to thank the Nova API team, especially Chris Yeoh and Kenichi Oomichi for their patience with us while we have worked through these complicated issues. It's much appreciated, and I find them a consistent pleasure to work with.

    That brings us to the end of my summary of the Nova Juno mid-cycle meetup. I'll write up a quick summary post that ties all of the posts together, but apart from that this series is now finished. Thanks for following along.

    Tags for this post: openstack juno nova mid-cycle summary api v3 v2.1
    Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues

posted at: 16:52 | path: /openstack/juno | permanent link to this entry


Tue, 19 Aug 2014



Juno nova mid-cycle meetup summary: nova-network to Neutron migration

    This will be my second-to-last post about the Juno Nova mid-cycle meetup; it covers the state of play for work on the nova-network to Neutron upgrade.

    First off, some background information. Neutron (formerly Quantum) was developed over a long period of time to replace nova-network, and was added in the OpenStack Folsom release. Development of new features for nova-network was then frozen in the Nova code base, so that users would transition to Neutron. Unfortunately the transition period took longer than expected. We ended up having to unfreeze nova-network development in order to fix reliability problems that were affecting our CI gating and the reliability of deployments for existing nova-network users. Additionally, at least two OpenStack companies were carrying significant feature patches for nova-network which we wanted to merge into the main code base.

    You can see the announcement at http://lists.openstack.org/pipermail/openstack-dev/2014-January/025824.html. The main enhancements post-freeze were a conversion to use our new objects infrastructure (and therefore conductor), as well as features that were being developed by Nebula. I can't find any contributions from the other OpenStack company in the code base at this time, so I assume they haven't been proposed.

    The nova-network to Neutron migration path has come to the attention of the OpenStack Technical Committee, which has asked for a more formal plan to address Neutron feature gaps and deprecate nova-network. That plan is tracked at https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage. As you can see, there are still some things targeted for juno-3 which are yet to merge. At the time of writing these include grenade testing; Neutron being the devstack default; a replacement for nova-network multi-host; a migration plan; and some documentation. They are all making good progress, but until these action items are completed, Nova can't start the process of deprecating nova-network.

    The discussion at the Nova mid-cycle meetup was around the migration planning item in that plan. There is a Nova specification that outlines one possible plan for live upgrading instances (i.e., no instance downtime) at https://review.openstack.org/#/c/101921/, but this will probably now be replaced with a simpler migration path involving cold migrations. This is prompted by our not being able to find a user who absolutely has to have a live upgrade. There was some confusion, because of a belief that the TC was requiring a live upgrade plan. But as Russell Bryant says in the meetup etherpad:

    "Note that the TC has made no such statement on migration expectations other than a migration path must exist, both projects must agree on the plan, and that plan must be submitted to the TC as a part of the project's graduation review (or project gap review in this case). I wouldn't expect the TC to make much of a fuss about the plan if both Nova and Neutron teams are in agreement."


    The current plan is to go forward with a cold upgrade path, unless a user comes forward with a hard requirement for a live upgrade, and a plan to fund developers to work on it.

    At this point, it looks like we are on track to get all of the functionality we need from Neutron in the Juno release. If that happens, we will start the nova-network deprecation timer in Kilo, with my expectation being that nova-network would be removed in the "M" release. There is also an option to change the default networking implementation to Neutron before the deprecation of nova-network is complete, which would mean that new deployments default to the long-term supported option.

    In the next (and probably final) post in this series, I'll talk about the API formerly known as Nova API v3.

    Tags for this post: openstack juno nova mid-cycle summary nova-network neutron migration
    Related posts: Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots

posted at: 20:37 | path: /openstack/juno | permanent link to this entry


Juno nova mid-cycle meetup summary: slots

    If I had to guess which topic from the mid-cycle meetup would prove controversial, it would have to be this slots proposal. I was actually in a Technical Committee meeting when the proposal was first made, but I'm told there were plenty of people in the room keen to give the idea a try. Since the mid-cycle, Joe Gordon has written up a more formal proposal, which can be found at https://review.openstack.org/#/c/112733.

    If you look at the last few Nova releases, core reviewers have been drowning under code reviews, so we need to control the review workload. What is currently happening is that everyone throws up their thing into Gerrit, and then each core tries to identify the important things and review them. There is a list of prioritized blueprints in Launchpad, but it is not used much as a way of determining what to review. The result of this is that there are hundreds of reviews outstanding for Nova (500 when I wrote this post). Many of these will get a review, but it is hard for authors to get two cores to pay attention to a review long enough for it to be approved and merged.

    If we could rate limit the number of proposed reviews in Gerrit, then cores would be able to focus their attention on the smaller number of outstanding reviews, and land more code. Because each review would merge faster, we believe this rate limiting would help us land more code rather than less, as our workload would be better managed. You could argue that this will mean we just say 'no' more often, but that's not the intent, it's more about bringing focus to what we're reviewing, so that we can get patches through the process completely. There's nothing more frustrating to a code author than having one +2 on their code and then hitting some merge freeze deadline.

    The proposal is therefore to designate a number of blueprints that can be under review at any one time. The initial proposal was for ten, and the term 'slot' was coined to describe the available review capacity. If your blueprint was not allocated a slot, then it would either not be proposed in Gerrit yet, or if it was it would have a procedural -2 on it (much like code reviews associated with unapproved specifications do now).

    The number of slots is arbitrary at this point. Ten is our best guess of how much we can dilute cores' focus without losing efficiency. We would tweak the number as we gained experience if we went ahead with this proposal. Remember, too, that a slot isn't always a single code review. If the VMware refactor were in a slot, for example, we might find that there were ten code reviews associated with that single slot.

    How do you determine what occupies a review slot? The proposal is to groom the list of approved specifications more carefully. We would collaboratively produce a ranked list of blueprints in the order of their importance to Nova and OpenStack overall. As slots become available, the next highest ranked blueprint with code ready for review would be moved into one of the review slots. A blueprint would be considered 'ready for review' once the specification is merged, and the code is complete and ready for intensive code review.

    What happens if code is in a slot and something goes wrong? Imagine a proposer goes on vacation and stops responding to review comments. If that happened, we would bump the code out of the slot, but put it back on the backlog in the position dictated by its priority. In other words, there is no penalty for being bumped; you just need to wait for a slot to reappear when you're available again.
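
    To make the mechanics a little more concrete, here is a toy model of how a slot pool might be refilled from a ranked backlog, including the "bump with no penalty" behaviour described above. The numbers, field names and functions are entirely made up for illustration; nothing like this has been agreed by the team.

        # Toy model of the slots idea; entirely illustrative.
        NUM_SLOTS = 10  # the arbitrary starting point discussed above


        def refill_slots(slots, backlog):
            """Fill free slots from a priority-ordered backlog of blueprints."""
            for blueprint in list(backlog):
                if len(slots) >= NUM_SLOTS:
                    break
                # Only blueprints whose spec has merged and whose code is
                # ready for intensive review are eligible to occupy a slot.
                if blueprint["ready_for_review"]:
                    backlog.remove(blueprint)
                    slots.append(blueprint)


        def bump(slots, backlog, blueprint):
            """Bump stalled work with no penalty: it returns to the backlog
            in the position its priority dictates, to wait for a free slot."""
            slots.remove(blueprint)
            backlog.append(blueprint)
            backlog.sort(key=lambda bp: bp["priority"])  # lower = more important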

    We also talked about whether we were requiring specifications for changes which are too simple. If something is relatively uncontroversial and simple (a better tag for internationalization for example), but not a bug, it falls through the cracks of our process at the moment and ends up needing to have a specification written. There was talk of finding another way to track this work. I'm not sure I agree with this part, because a trivial specification is a relatively cheap thing to do. However, it's something I'm happy to talk about.

    We also know that Nova needs to spend more time paying down its accrued technical debt, which you can see in the huge number of bugs we have outstanding at the moment. There is no shortage of people willing to write code for Nova, but there is a shortage of people fixing bugs and working on strategic things instead of new features. If we could reserve slots for technical debt, it would help us get people to work on those aspects, because they wouldn't risk spending time on a less interesting problem only to discover they can't even get their code reviewed. We even talked about having an alternating focus for Nova releases: one release focused on paying down technical debt and stability, and the next focused on new features. The Linux kernel does something quite similar and it seems to work well for them.

    Using slots would allow us to land more valuable code faster. Of course, it also means that some patches will get dropped on the floor, but if the system is working properly, those features will be ones that aren't important to OpenStack. Considering that right now we're not landing many features at all, this would be an improvement.

    This proposal is obviously complicated, and everyone will have an opinion. We haven't thought through all the mechanics fully yet, and it's certainly not a done deal at this point. The ranking process seems to be the most contentious point. We could encourage the community to help us rank things by priority, but it's not clear how that process would work. Regardless, I feel we need to be more systematic about what code we're trying to land. It's embarrassing how little has landed in Juno for Nova, and we need to work on that. I would like to continue discussing this as a community, to make sure we end up with something that works well and that everyone is happy with.

    This series is nearly done, but in the next post I'll cover the current status of the nova-network to Neutron upgrade path.

    Tags for this post: openstack juno nova mid-cycle summary review slots blueprint priority project management
    Related posts: Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support

posted at: 00:34 | path: /openstack/juno | permanent link to this entry


Sun, 17 Aug 2014



Juno nova mid-cycle meetup summary: scheduler

    This post is in a series covering the discussions at the Juno Nova mid-cycle meetup. This post will cover the current state of play of our scheduler refactoring efforts. The scheduler refactor has been running for a fair while now, dating back to at least the Hong Kong summit (so about 1.5 release cycles ago).

    The original intent of the scheduler sub-team's effort was to pull the scheduling code out of Nova so that it could be rapidly iterated on its own, with the eventual goal being to support a single scheduler across the various OpenStack services. For example, the scheduler that makes placement decisions about your instances could also be making decisions about the placement of your storage resources and could therefore ensure that they are co-located as much as possible.

    During this process we realized that a big bang replacement is actually much harder than we thought, and the plan has morphed into a multi-phase effort. The first step is to make the interface for the scheduler more clearly defined inside the Nova code base. For example, in previous releases it was the scheduler that launched instances: the API would ask the scheduler to find available hypervisor nodes, and the scheduler would then instruct those nodes to boot the instances. We need to refactor this so that the scheduler picks a set of nodes, but the API is the one which actually does the instance launch. That way, when the scheduler does move out, it isn't trusted to perform actions that change hypervisor state; the Nova code does that for it. This refactoring work is under way, along with work to isolate the SQL database accesses inside the scheduler.
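
    The before-and-after split described here boils down to the scheduler only answering "where should these instances go?", while the caller performs the boot. The sketch below shows that shape; all of the class, method and field names are invented for illustration and are not the real Nova interfaces.

        # Simplified sketch of the target split; names are invented.
        class Scheduler:
            def __init__(self, hosts):
                self.hosts = hosts  # e.g. [{"name": "node1", "free_ram_mb": 4096}, ...]

            def select_destinations(self, request_spec):
                """Return one candidate host per requested instance.

                After the refactor the scheduler only makes placement
                decisions; it never talks to the hypervisors itself.
                """
                suitable = [
                    h for h in self.hosts
                    if h["free_ram_mb"] >= request_spec["ram_mb"]
                ]
                suitable.sort(key=lambda h: h["free_ram_mb"], reverse=True)
                return suitable[: request_spec["num_instances"]]


        def boot_instances(scheduler, compute_api, request_spec):
            # The caller (the API/conductor layer, in Nova terms) asks for placements...
            hosts = scheduler.select_destinations(request_spec)
            # ...and then performs the state-changing work itself, so a
            # split-out scheduler never needs to be trusted with hypervisor state.
            for host in hosts:
                compute_api.build_instance_on(host, request_spec)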

    I would like to set expectations that this interface work is all that will land in Juno. It has little visible impact for users, but positions us to better solve these problems in Kilo.

    We discussed the need to ensure that any new scheduler is at least as fast and accurate as the current one. Jay Pipes has volunteered to work with the scheduler sub-team to build a testing framework to validate this work. Jay also has some concerns about the resource tracker work being done at the moment, which he is going to discuss with the scheduler sub-team. Since the mid-cycle meetup there has been a thread on the openstack-dev mailing list about similar resource tracker concerns (here), which might be of interest to people following the scheduler work.

    We also need to test our assumption, at some point, that other OpenStack services such as Neutron and Cinder would even be willing to share a scheduler service if a central one were implemented. We believe that Neutron is interested, but we shouldn't surprise our fellow OpenStack projects by just appearing with a complete solution. There is a plan to propose a cross-project session at the Paris summit to cover this work.

    In the next post in this series we'll discuss possibly the most controversial part of the mid-cycle meetup: the proposal for "slots" for landing blueprints during Kilo.

    Tags for this post: openstack juno nova mid-cycle summary scheduler
    Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots

posted at: 20:06 | path: /openstack/juno | permanent link to this entry


Juno nova mid-cycle meetup summary: bug management

    Welcome to the next exciting installment of the Nova Juno mid-cycle meetup summary. In the previous chapter, our hero battled a partially complete cells implementation, by using his +2 smile of good intentions. In this next exciting chapter, watch him battle our seemingly never ending pile of bugs! Sorry, now that I'm on to my sixth post in this series I feel like it's time to get more adventurous in the introductions.

    For at least the last cycle, and probably longer, Nova has been struggling with the number of bugs filed in Launchpad. I don't think the problem is that Nova has terrible code; it is instead that we have a lot of users filing bugs, and the team triaging and closing bugs is small. The complexity of Nova's deployment options makes this problem worse, and that complexity increases as we allow new drivers for things like different storage engines to land in the code base.

    The increasing number of possible permutations of Nova configurations is a problem for our CI systems as well: we don't cover all of these options, which sometimes leads us to discover that they don't work as expected in the field. CI is a tangent from the main intent of this post though, so I will reserve further discussion of our CI systems for a later post.

    Tracy Jones and Joe Gordon have been doing good work this cycle trying to get a grip on the state of the bugs filed against Nova. For example, a very large number of bugs (hundreds) were for problems we'd already fixed, but where the bug bot had failed to close the bug when the fix merged. Many other bugs had been waiting for feedback from users for longer than six months. In both cases the response was to close the bug, with the understanding that the user can always reopen it if they come back to talk to us again. Doing "quick hit" things like this has reduced our open bug count to about one thousand bugs. You can see a dashboard that Tracy has produced showing the state of our bugs at http://54.201.139.117/nova-bugs.html. I believe that Joe has been working on moving this onto OpenStack-hosted infrastructure, but that hasn't happened yet.

    At the mid-cycle meetup, the goal of the conversation was to try to find other ways to get our bug queue further under control. Some of the suggestions were largely mechanical, like tightening up our definitions of the confirmed (we agree this is a bug) and triaged (and we know how to fix it) bug states. Others were things like auto-abandoning bugs which have been marked incomplete for more than 60 days without a reply from the person who filed the bug, or unassigning bugs when the review that proposed a fix is abandoned in Gerrit.
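
    As an example of the sort of automation we're talking about, auto-abandoning long-idle incomplete bugs is only a handful of lines of launchpadlib. Treat this as an unverified sketch: login_with and searchTasks are real launchpadlib calls, but the date attribute, the "Expired" status string and whether it can be set this way are assumptions to check against the Launchpad API documentation before anyone runs it.

        # Sketch of auto-expiring stale incomplete bugs; unverified, and the
        # attribute names / status strings are assumptions to confirm against
        # the Launchpad API docs before use.
        from datetime import datetime, timedelta

        import pytz
        from launchpadlib.launchpad import Launchpad

        STALE = timedelta(days=60)

        lp = Launchpad.login_with("nova-bug-expirer", "production")
        nova = lp.projects["nova"]
        cutoff = datetime.now(pytz.UTC) - STALE

        for task in nova.searchTasks(status="Incomplete"):
            bug = task.bug
            # If nobody (including the reporter) has touched the bug in 60
            # days, close it out; the reporter can always reopen it later.
            if bug.date_last_updated < cutoff:
                task.status = "Expired"  # assumed to be settable via the API
                task.lp_save()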

    Unfortunately, we have more ideas for how to automate dealing with bugs than we have people writing that automation. If there's someone out there who wants to have a big impact on Nova but isn't sure where to get started, helping us out with this automation would be a super helpful way to begin. Let Tracy or me know if you're interested.

    We also talked about having more targeted bug days. This was prompted by our last bug day being largely unsuccessful. Instead, we're proposing that the next bug day have a really well defined theme, such as moving things from the "undecided" to the "confirmed" state, or similar. I believe the current plan is to run a bug day like this after J-3, when we're winding down from feature development and starting to focus on stabilization.

    Finally, I would encourage people fixing bugs in Nova to do a quick search for duplicate bugs when they are closing a bug. I wouldn't be at all surprised to discover that there are many bugs where you can close duplicates at the same time with minimal effort.

    In the next post I'll cover our discussions of the state of the current scheduler work in Nova.

    Tags for this post: openstack juno nova mid-cycle summary bugs
    Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues

posted at: 19:38 | path: /openstack/juno | permanent link to this entry


Thu, 14 Aug 2014



Juno nova mid-cycle meetup summary: cells

    This is the next post summarizing the Juno Nova mid-cycle meetup. This post covers the cells functionality used by some deployments to scale Nova.

    For those unfamiliar with cells, it's a way of combining smaller Nova installations into a thing which feels like a single large Nova install. So for example, Rackspace deploys Nova in cells of hundreds of machines, and these cells form a Nova availability zone which might contain thousands of machines. The cells in one of these deployments form a tree: users talk to the top level of the tree, which might only contain API services. That cell then routes requests to child cells which can actually perform the operation requested.

    There are a few reasons why Rackspace does this. Firstly, it keeps the MySQL databases smaller, which can improve the performance of database operations and backups. Additionally, cells can contain different types of hardware, which are then partitioned logically. For example, OnMetal (Rackspace's Ironic-based baremetal product) instances come from a cell which contains OnMetal machines and only publishes OnMetal flavors to the parent cell.
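
    To make the tree idea concrete, here is a toy illustration of an API-only top cell routing boot requests to whichever child cell publishes the requested flavor. The class and method names are invented for this sketch and bear no relation to the real nova-cells code.

        # Conceptual illustration of a cells tree; not the real nova-cells code.
        class Cell:
            def __init__(self, name, flavors=None, children=None):
                self.name = name
                self.flavors = set(flavors or [])   # flavors this cell publishes upward
                self.children = children or []

            def published_flavors(self):
                """A parent cell advertises the union of its children's flavors."""
                flavors = set(self.flavors)
                for child in self.children:
                    flavors |= child.published_flavors()
                return flavors

            def boot(self, flavor):
                """Route a build request down the tree to a cell that can run it."""
                if flavor in self.flavors:
                    return "scheduled in cell %s" % self.name
                for child in self.children:
                    if flavor in child.published_flavors():
                        return child.boot(flavor)
                raise ValueError("no cell offers flavor %r" % flavor)


        # An API-only top cell with two compute cells, one of them OnMetal-style.
        top = Cell("api", children=[
            Cell("general-purpose", flavors=["m1.small", "m1.large"]),
            Cell("onmetal", flavors=["onmetal.io1"]),
        ])
        print(top.boot("onmetal.io1"))  # -> scheduled in cell onmetal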

    Cells was originally written by Rackspace to meet its deployment needs, but is now used by other sites as well. However, I think it would be a stretch to say that cells is commonly used, and it is certainly not the deployment default. In fact, most deployments don't run any of the cells code, so you can't even really call them a "single cell" install. One of the reasons cells isn't more widely deployed is that it doesn't implement the entire Nova API, which means some features are missing. As a simple example, you can't live-migrate an instance between two child cells.

    At the meetup, the first thing we discussed regarding cells was a general desire to see cells finished and become the default deployment method for Nova. Perhaps most people end up running a single cell, but in that case at least the cells code paths are well used. The first step to get there is improving the Tempest coverage for cells. There was a recent openstack-dev mailing list thread on this topic, which was discussed at the meetup. There was commitment from several Nova developers to work on this, and notably not all of them are from Rackspace.

    It's important that we improve the Tempest coverage for cells, because it positions us for the next step in the process, which is bringing feature parity to cells compared with a non-cells deployment. There is some level of frustration that the work on cells hasn't really progressed in Juno, and that it is currently incomplete. At the meetup, we made a commitment to bringing a well-researched plan to the Kilo summit for implementing feature parity for a single cell deployment compared with a current default deployment. We also made a commitment to make cells the default deployment model when this work is complete. If this doesn't happen in time for Kilo, then we will be forced to seriously consider removing cells from Nova. A half-done cells deployment has so far stopped other development teams from trying to solve the problems that cells addresses, so we either need to finish cells, or get out of the way so that someone else can have a go. I am confident that the cells team will take this feedback on board and come to the summit with a good plan. Once we have a plan we can ask the whole community to rally around and help finish this effort, which I think will benefit all of us.

    In the next blog post I will cover something we've been struggling with for the last few releases: how we get our bug count down to a reasonable level.

    Tags for this post: openstack juno nova mid-cycle summary cells
    Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues

posted at: 21:20 | path: /openstack/juno | permanent link to this entry