My work with Puppet

So I have been busy in the past weeks. I'm currently working a lot with OpenStack and Ceph. This post is about an issue I found inside the puppet-ceph module; I forked it and solved it. More here: osd::devices allow working on dmcrypt block devices.

Let's get started:

When you want to run Ceph, there are different ways to handle it. Inktank provides a tool called ceph-deploy. It's software developed in Python, but it's no option for an environment that has to work automatically. We ran into some bugs, and a moment later the disks were messed up.

Besides, how does this work when you want to scale? We're using Puppet for this, so we need some good modules.

Some days ago the puppet-ceph module was finished, so I started to work with it. The basic module was quite fine, but for our Dtagcloud environment we needed some extras.

For this I created, with some help, a dmcrypt module. (I'll release it at some point later.)
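To give an idea of its shape, here is a minimal sketch of such a define. The name dmcrypt::device, the key handling and the parameters are my simplification for illustration, not the module I'll release:

# Minimal dmcrypt sketch, assuming a pre-provisioned key file on the node.
define dmcrypt::device (
  $key_file,
  $map_name = regsubst($name, '.*/', ''),
) {
  # Format the raw device as LUKS once; 'cryptsetup isLuks' keeps this idempotent.
  exec { "luksformat_${map_name}":
    command => "cryptsetup luksFormat --batch-mode --key-file ${key_file} ${name}",
    unless  => "cryptsetup isLuks ${name}",
    path    => ['/sbin', '/usr/sbin', '/bin', '/usr/bin'],
  }

  # Open the container so it appears as /dev/mapper/<map_name>.
  exec { "luksopen_${map_name}":
    command => "cryptsetup luksOpen --key-file ${key_file} ${name} ${map_name}",
    creates => "/dev/mapper/${map_name}",
    path    => ['/sbin', '/usr/sbin', '/bin', '/usr/bin'],
    require => Exec["luksformat_${map_name}"],
  }
}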

I did a test with the Vagrant environment: a small trip with Vagrant on KVM. Vagrant is great, but in my opinion the biggest issue is VirtualBox. For a Linux user it's hard to work with. KVM is more efficient in terms of virtualization and would accelerate the work quite a lot. But I'm no Ruby guy, so getting it running will take some more time. I'm looking forward to it!

Back to puppet-ceph

The basic integration works fine. I built up a basic site.pp that includes everything I need. I added a second disk and encrypted it.
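For illustration, a stripped-down version of that site.pp. The node name, the fsid and the parameter set are placeholders; the exact class interface depends on your puppet-ceph version:

# Stripped-down site.pp sketch; node name and fsid are placeholders.
node 'ceph-osd-01' {
  class { 'ceph::conf':
    fsid      => 'e7e5c1c6-0000-0000-0000-000000000000',
    auth_type => 'cephx',
  }

  # The second, encrypted disk ends up as a mapped device below /dev/mapper.
  ceph::osd::device { '/dev/mapper/OSD-0': }
}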

Here is where the issue kicks in.

Error: mkfs.xfs -f -d agcount=1 -l size=1024m -n size=64k /dev/mapper/osd-0 returned 1 instead of one of [0]
Error: /Stage[main]/Dtagcloud::Osd/Ceph::Osd::Device[/dev/mapper/osd-0]/Exec[mkfs_OSD-0]/returns: change from notrun to 0 failed: mkfs.xfs -f -d agcount=1 -l size=1024m -n size=64k /dev/mapper/OSD-01 returned 1 instead of one of [0]

Wait, what happens here? I looked into the code and saw this:

exec { "mktable_gpt_${devname}":
  command => "parted -a optimal --script ${name} mktable gpt",
  unless  => "parted --script ${name} print | grep -sq 'Partition Table: gpt'",
  require => Package['parted'],
}

exec { "mkpart_${devname}":
  command => "parted -a optimal -s ${name} mkpart ceph 0% 100%",
  unless  => "parted ${name} print | egrep '^ 1.*ceph$'",
  require => [Package['parted'], Exec["mktable_gpt_${devname}"]],
}

exec { "mkfs_${devname}":
  command => "mkfs.xfs -f -d agcount=${::processorcount} -l size=1024m -n size=64k ${name}1",
  unless  => "xfs_admin -l ${name}1",
  require => [Package['xfsprogs'], Exec["mkpart_${devname}"]],
}

That's interesting: first the module creates this partition layout, and then it assumes that the first partition is always called ${name}1. That doesn't work for dmcrypt devices.

Why? Simple: we're creating a partition layout on top of an already encrypted device. Sure, you can handle it this way. The better way would be to create the partition before the encryption, i.e. encrypt /dev/sdb1 instead of partitioning /dev/mapper/OSD-0.

But there is a small question with this: what for? When running a Ceph OSD, the disk will be occupied completely. Partition tables are a logical separation of disks, and there is no need for that here.

When a dmcrypt device carries a partition table with a logical partition on it, the disk will be addressed as /dev/mapper/OSD-0p1, not /dev/mapper/OSD-01 as the ${name}1 pattern in the module expects.
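A side note if you do go down that road: the kernel does not create those partition mappings on its own for device-mapper targets. Something like kpartx (from the multipath-tools/kpartx package) has to run first; a sketch, not part of the module:

# Sketch: expose the partition of a mapped device as /dev/mapper/OSD-0p1.
exec { 'kpartx_OSD-0':
  command => 'kpartx -a /dev/mapper/OSD-0',
  creates => '/dev/mapper/OSD-0p1',
  path    => ['/sbin', '/usr/sbin', '/bin', '/usr/bin'],
}

That is the same trick the [clug] thread in the sources describes for loop devices.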

The code adds a 1 to each disk-related command:

device => "${name}1",

That's bad. For the moment the solution was to comment out the code and keep going. I'll build a parameter to allow everyone to handle this themselves.
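A sketch of what that parameter could look like. The name partition_suffix is my choice for illustration, not the merged API: normal disks keep '1', dmcrypt devices pass an empty string so every command hits the mapped device directly.

# Sketch only; 'partition_suffix' is an illustrative parameter name.
define ceph::osd::device (
  $partition_suffix = '1',  # set to '' for dmcrypt mapper devices
) {
  $devname = regsubst($name, '.*/', '')

  exec { "mkfs_${devname}":
    command => "mkfs.xfs -f -d agcount=${::processorcount} -l size=1024m -n size=64k ${name}${partition_suffix}",
    unless  => "xfs_admin -l ${name}${partition_suffix}",
    require => Package['xfsprogs'],
  }
}

# Usage for an encrypted device:
# ceph::osd::device { '/dev/mapper/OSD-0': partition_suffix => '' }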

But what’s that? Still not working?

So the next issue is here:

Puppet converts names internally. What? Wait, aren't we in 2013? So let's see some more detail about this:

ceph::osd::device { "/dev/mapper/OSD-${id}": }

My goal was to highlight the mounted disks for administrators with uppercase names, but Puppet exports this to facter only in lowercase. That's where the lowercase /dev/mapper/osd-0 in the error above comes from; see the resource references documentation:
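If you still want uppercase names in the manifests, one workaround is to stop fighting the lowercasing and normalize the name yourself before any comparison against facter data. This is a workaround sketch, not the module's code; downcase() comes from puppetlabs-stdlib:

# Workaround sketch (requires puppetlabs-stdlib for downcase()):
# keep the readable uppercase title, compare in lowercase.
$devname_lc = downcase($devname)
notice("facter will report this device as ${devname_lc}")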

http://docs.puppetlabs.com/puppet/3/reference/lang_datatypes.html#resource-references

Summary:

Sources: [clug] Accessing partitions on loop devices