So you want to set up a Ceph dev environment using OSA

    Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I need a Ceph development environment it seems logical to build it as an openstack-ansible Ocata AIO. There were a few gotchas along the way, so I want to explain the process I used.

    First off, Ceph is enabled in an openstack-ansible AIO using a thing I'd never seen before called a "scenario". Basically this means that you need to export an environment variable called "SCENARIO" before running the AIO install. Something like this will do the trick:

      export SCENARIO=ceph
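    In case the ordering isn't obvious, here's a rough sketch of the flow (paths and the branch name assume a stock openstack-ansible checkout, so adjust to taste). The important part is that SCENARIO is exported in the same shell that later runs the bootstrap, because bootstrap-aio.sh is what reads it from the environment:

```shell
# Rough sketch, assuming a stock openstack-ansible checkout under /opt.
# SCENARIO must be in the environment before bootstrap-aio.sh runs, since
# that script is what reads it.
export SCENARIO=ceph
echo "Building an AIO with scenario: ${SCENARIO}"

# The rest of the usual AIO flow, uncommented on a real lab host:
# cd /opt/openstack-ansible
# git checkout stable/ocata
# scripts/bootstrap-ansible.sh
# scripts/bootstrap-aio.sh
```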

    Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

      --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:55:07.803635173 +1000
      +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:58:30.417019878 +1000
      @@ -338,7 +338,10 @@
       #     foo: 1234
       #     bar: 5678
      -ceph_conf_overrides: {}
      +ceph_conf_overrides:
      +  global:
      +    osd_pool_default_pg_num: 8
      @@ -373,4 +375,4 @@
       # Set this to true to enable File access via NFS.  Requires an MDS role.
       nfs_file_gw: true
       # Set this to true to enable Object access via NFS. Requires an RGW role.
      -nfs_obj_gw: false
      \ No newline at end of file
      +nfs_obj_gw: false

    That of course needs to be done after the Ceph role has been fetched, but before it is executed; in other words, after the AIO bootstrap but before the install.
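    If you'd rather script that step than hand-apply a diff, something like this sed invocation makes the same change. This is a sketch that operates on a scratch copy so it's safe to try anywhere; on a real AIO you'd point DEFAULTS at /etc/ansible/roles/ceph.ceph-common/defaults/main.yml instead:

```shell
# Sketch of the same edit as the diff above, done with sed. DEFAULTS points
# at a scratch copy for demonstration; on a real AIO it would be
# /etc/ansible/roles/ceph.ceph-common/defaults/main.yml.
DEFAULTS=$(mktemp)
echo 'ceph_conf_overrides: {}' > "$DEFAULTS"

# Replace the empty dict with a global pg_num override (uses GNU sed's
# newline escapes in the replacement text).
sed -i 's/^ceph_conf_overrides: {}$/ceph_conf_overrides:\n  global:\n    osd_pool_default_pg_num: 8/' "$DEFAULTS"
cat "$DEFAULTS"
```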

    And that was about it (although of course it took a fair while to work out). I have this automated in my little install helper thing, so I'll never need to think about it again, which is nice.

    Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.
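    For example, something like this pulls the config and admin keyring onto the host so the CLI works outside the container. This is a sketch: the container name is from my AIO and will differ on yours, and it assumes ceph-common is installed wherever you run it:

```shell
# Sketch: the mon container is only special because it holds the config and
# admin keyring; copy those to the host and the CLI works anywhere that has
# ceph-common installed. The container name below is from my AIO and will
# differ on yours. Guarded so it's a no-op on machines without LXC.
MON=aio1_ceph-mon_container-a3d8b8b1
if command -v lxc-attach >/dev/null 2>&1; then
    mkdir -p /etc/ceph
    lxc-attach -n "$MON" -- cat /etc/ceph/ceph.conf > /etc/ceph/ceph.conf
    lxc-attach -n "$MON" -- cat /etc/ceph/ceph.client.admin.keyring \
        > /etc/ceph/ceph.client.admin.keyring
    ceph -s
fi
```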

      root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
      root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
          cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
           health HEALTH_OK
           monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=}
                  election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
           osdmap e20: 3 osds: 3 up, 3 in
                  flags sortbitwise,require_jewel_osds
            pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                  102156 kB used, 3070 GB / 3070 GB avail
                        40 active+clean
      root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
      ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
      -1 2.99817 root default                                      
      -2 2.99817     host labosa                                   
       0 0.99939         osd.0        up  1.00000          1.00000 
       1 0.99939         osd.1        up  1.00000          1.00000 
       2 0.99939         osd.2        up  1.00000          1.00000 

    Tags for this post: openstack osa ceph openstack-ansible
    Related posts: Configuring docker to use rexray and Ceph for persistent storage

posted at: 18:30 | path: /openstack/osa | permanent link to this entry