Nova presents configuration information to instances it starts via a mechanism
called metadata. This metadata is made available via either a configdrive or
the metadata service. These mechanisms are widely used via helpers such as
cloud-init to specify things like the root password the instance should use.
There are three separate groups of people who need to be able to specify
metadata for an instance.
User provided data
The user who booted the instance can pass metadata to the instance in several
ways. For authentication keypairs, the keypairs functionality of the Nova APIs
can be used to upload a key and then specify that key during the Nova boot API
request. For less structured data, a small opaque blob of data may be passed
via the user-data feature of the Nova API. Examples of such unstructured data
would be the puppet role that the instance should use, or the HTTP address of a
server to fetch post-boot configuration information from.
Nova provided data
Nova itself needs to pass information to the instance via its internal
implementation of the metadata system. Such information includes the network
configuration for the instance, as well as the requested hostname for the
instance. This happens by default and requires no configuration by the user or deployer.
Deployer provided data
There is however a third type of data. It is possible that the deployer of
OpenStack needs to pass data to an instance. It is also possible that this data
is not known to the user starting the instance. An example might be a
cryptographic token to be used to register the instance with Active Directory
post boot -- the user starting the instance should not have access to Active
Directory to create this token, but the Nova deployment might have permissions
to generate the token on the user's behalf.
Nova supports a mechanism to add "vendordata" to the metadata handed to
instances. This is done by loading named modules, which must appear in the nova
source code. We provide two such modules:
- StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don't change between instances, such as the location of the corporate puppet server.
- DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.
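As a sketch of the StaticJSON case, nova's configuration reference provides a vendordata_jsonfile_path option pointing at the file to serve; the path below is a placeholder, and depending on your nova release these options live in the [api] group or in [DEFAULT]:

```ini
[api]
vendordata_providers = StaticJSON
# Placeholder path; point this at your own static JSON file.
vendordata_jsonfile_path = /etc/nova/vendor_data.json
```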
Tell me more about DynamicJSON
Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it's the most interesting bit here.
To use DynamicJSON, you configure it like this:
- Add "DynamicJSON" to the vendordata_providers configuration option. This can also include "StaticJSON" if you'd like.
- Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.
The format for an entry in vendordata_dynamic_targets is name@url, where name is a
short string not including the '@' character, and where the URL can include a
port number if so required.
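For example, a hypothetical target named testing served on a local port would look like:

```ini
vendordata_dynamic_targets = testing@http://127.0.0.1:8888
```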
Metadata fetched from this target will appear in the metadata service at a
new file called vendor_data2.json, with a path (either in the metadata service
URL or in the configdrive) like this:
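Concretely, the relative path is the one we read back from the configdrive later in this post (the metadata service serves the same path under its base URL):

```
openstack/latest/vendor_data2.json
```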
For each dynamic target, there will be an entry in the JSON file named after
that target. For example:
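With a single dynamic target named testing (a hypothetical name) whose REST service returned {"key": "value"}, vendor_data2.json would contain:

```json
{
    "testing": {
        "key": "value"
    }
}
```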
Do not specify the same name more than once. If you do, we will ignore
subsequent uses of a previously used name.
The following data is passed to your REST service as a JSON encoded POST:
- project-id: the UUID of the project that owns the instance
- instance-id: the UUID of the instance
- image-id: the UUID of the image used to boot this instance
- user-data: as specified by the user at boot time
- hostname: the hostname of the instance
- metadata: as specified by the user at boot time
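A sketch of such a POST body (the UUIDs and values here are invented placeholders, except the image UUID, which reuses the one from the boot example later in this post):

```json
{
    "project-id": "8db176e8-6a6f-4c56-ba04-2e8d1bcbb4ee",
    "instance-id": "4f0a4df1-61ee-4d98-b25c-53bcda7a7b0a",
    "image-id": "2f6e96ca-9f58-4832-9136-21ed6c1e3b1f",
    "user-data": "#cloud-config\n...",
    "hostname": "foo",
    "metadata": {"role": "webserver"}
}
```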
Nova provides authentication to external metadata services in order to provide
some level of certainty that the request came from nova. This is done by
providing a service token with the request -- you can then just deploy your
metadata service with the keystone authentication WSGI middleware. This is
configured using the keystone authentication parameters in the
vendordata_dynamic_auth configuration group.
This behavior is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.
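To make the service side concrete, here is a minimal sketch of an unauthenticated vendordata REST service using only the Python standard library; the build_metadata logic and the port are invented for illustration, and a real deployment would sit behind the keystone middleware described above:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def build_metadata(request_body):
    # request_body is the dict nova POSTs: project-id, instance-id,
    # image-id, user-data, hostname and metadata. Return whatever extra
    # metadata this instance should receive (invented example logic).
    return {"hostname-seen": request_body.get("hostname", "unknown")}


class VendordataHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body nova sends, build a response, and return it.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        response = json.dumps(build_metadata(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)


# To serve requests on a local port:
# HTTPServer(("127.0.0.1", 8888), VendordataHandler).serve_forever()
```

Whatever JSON object build_metadata returns is what appears under this target's name in vendor_data2.json.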
Deploying the sample vendordata service
There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:
$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper (only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt
We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it's configured to work with an openstack-ansible all-in-one install that I set up for my private testing, which probably isn't what you're using:
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne
Per the README file in the vendordata sample repository, you can test the vendordata server in a standalone manner by generating a token manually from keystone:
$ curl -d @credentials.json -H "Content-Type: application/json" http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print(json.loads(sys.stdin.read())['access']['token']['id'])"`
We then include that token in a test request to the vendordata service:
$ curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/
Configuring nova to use the external metadata service
Now we're ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:
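A sketch of the relevant nova.conf entries, assuming the service runs on a host called vendordata.example.com on port 8888 (swap in your own values; depending on your nova release these options live in the [api] group or in [DEFAULT]):

```ini
[api]
vendordata_providers = DynamicJSON
vendordata_dynamic_targets = testing@http://vendordata.example.com:8888
```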
The host in each vendordata_dynamic_targets URL is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:
$ nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo
We end up with a config drive which contains the information our external metadata service returned (in this example, handy Carrie Fisher quotes):
# cat openstack/latest/vendor_data2.json | python -m json.tool
"carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."