Just some blog

Sometimes I do use this blog

What Open Source Means to Me

For me, open source is about collaboration, independence and freedom. My intellectual freedom: knowledge doesn't belong to anyone but to all of us. It allows us to learn and gain a deeper understanding. There is also an economic side to it: open source is how I make my living.

What does this mean? Often there is a problem that needs to be solved, and someone has already solved it. I don't have to redo the work; I can just use their solution. By using open source I also gain independence: I can modify it to my needs. A tweak here, a fix there, and the software works even better than before.

While it often saves me time, therein lies its downside as well. Someone else's code might cause problems, and I need to invest time in fixing them. While open source may offer a better degree of security or code quality, this can only be verified or achieved by people with the right depth of understanding. Those people are where the costs are: years of training and work are necessary for them to accomplish this. Hence, getting a problem fixed in a professional manner almost always implies a high cost.

Open source works well when you have enough time and money for it. But too often there is just a half-baked solution to a problem. At some point you need more developers to keep it working. In fact, time, money and people are just quantities. Your project also needs sufficient significance; without it, no community will develop and the project may die.

In the end it's a long-term investment that often has no immediate payoff. Only when you keep on track and stay focused does it become a rewarding and valuable experience.

My Failure With Users

The failure of a technical guy, a retrospective.

Last April I started university. It all began with an introduction week: a get-together for everyone new to the subject, with some games to get to know each other.

One session was about finding a tool everyone agreed to use. One of my wishes was not to use services like Google or Facebook. Most agreed, and I proposed using ownCloud. For chatting there would be Jabber, and for everything else we would use the mailing list of the university. During this session we met some students from the master's programme, our 'student buddies'. As a helpful gesture, I took all of their notes from the previous years and put the data into the ownCloud.

I started right away and installed a fresh ownCloud. At first I used a Docker container, but had to change that due to performance issues. After some handwork it was done! Via a special script, I allowed users to log in to ejabberd with their ownCloud credentials.
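
The script itself is not reproduced here, but the idea can be sketched. A minimal version, assuming a hypothetical cloud URL, treats a username/password pair as valid exactly when ownCloud's WebDAV endpoint accepts it (ejabberd can call such a check from an external authentication script):

```shell
# Sketch only: the URL is an assumption, not my real setup.
# A login is considered valid when the WebDAV endpoint does not answer 401.
check_owncloud_login() {
  user=$1
  pass=$2
  code=$(curl -s -o /dev/null -w '%{http_code}' \
    -u "$user:$pass" "https://cloud.example.org/remote.php/webdav/")
  [ "$code" != "401" ]
}
```

The real integration talked to the ownCloud user database directly; this WebDAV probe is just the simplest way to express the same idea.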

Managing all the users and importing them into the ownCloud was the most annoying part.

I was happy! But I didn't consider one important factor: the users.

Despite all the data and 'our' agreement, most of the users never used the cloud at all. Only some managed to use it.

As a technical person, you tend to focus differently than others. I missed this, and it became one of my biggest issues.

For example: most of the users accessed the ownCloud with a smartphone. I never expected that; my focus was on desktop computers, which can use the native client to access the cloud. Nevertheless, most only used a smartphone. It might be related to changed baseline expectations: users just want to access things in the fastest way possible, something that is a bit odd to me.

An additional matter of fact: when you want to use the ownCloud app (on iPhone/Android), you need to pay money. I mostly use F-Droid, where the ownCloud app comes for free, but there is no free alternative on the iPhone.

The chat had a similar issue: getting a good chat app also implies paying money.

Add to that the fact that most of them don't read the documentation or even try to work with a service someone manages for them. Just the effort it takes to get to know the technology seems to be too much.

As a result, everyone uses what they know best: Google and Facebook services.

Which brings me back to a blog post I read some time ago. I can't find it at the moment, but the message was simple: what's the point of open source software when the users limit themselves in using it?

This brings me to the conclusion that you have to push users, but that only works when you have the power to do so. In every other case, the user will look for a simpler alternative. Besides that, teaching users to work with the tools seems important, something I failed to do well enough.

As an additional note: I have to say ownCloud is quite an impressive project and works very well. There is only one element I dislike: PHP. I have some concerns about it, but it manages to do a good job.

The same goes for my expectations of the Jabber server. There is real work in there to get the ejabberd service to authenticate against the ownCloud user database. It even has an integration into the website!

But the use case again assumed that users rely on a desktop machine, not on a smartphone.

Another issue I missed is how afraid users are when they have to do something new. As a user I tend to do the same, but in the sense of: what's the best workflow?

But it’s a good lesson to be learned.

That's Why It's Called Bleeding

Why bleeding edge is sometimes bad!

A while ago I had a problem with my Arch Linux, and it was necessary to get the latest kernel from testing. In general, Arch takes some days to weeks to push the upstream kernel into the main repository.

Since then I had kept the testing repository enabled, not minding it, until this one night. I fetched an update, and after a quick look something was strange: PAM had stopped working. I wasn't able to log in anymore. Damn it!

It was late that night, so I went to bed hoping the issue might be fixed by whoever caused it by the next morning.

So I started my laptop with the kernel option single and took a look into /var/log/pacman.log for the newly updated packages:

[2015-06-17 22:29] [ALPM] upgraded gnutls (3.4.1-1 -> 3.4.2-1)
[2015-06-17 22:29] [ALPM] upgraded libtirpc (0.3.1-1 -> 0.3.2-1)
[2015-06-17 22:29] [ALPM] upgraded plasma-framework (5.11.0-1 -> 5.11.0-2)

PAM did not receive an update? So one of these three packages might be causing the issue. Next was to look into the systemd journal:

Jun 18 19:53:05 slaxy sudo[573]: root : TTY=tty1 ; PWD=/root ; USER=root ; COMMAND=list /usr/sbin/pacman --color auto -U /var/cache/pacman/pkg/libtirpc-0.3.1-1-x86_64.pkg.tar.xz
Jun 18 19:53:05 slaxy sudo[574]: PAM unable to dlopen(/usr/lib/security/pam_unix.so): /usr/lib/libtirpc.so.1: undefined symbol: __rpc_get_default_domain
Jun 18 19:53:05 slaxy sudo[574]: PAM adding faulty module: /usr/lib/security/pam_unix.so

Aye… after a look into libtirpc, it was clear to me: libtirpc was broken. I downgraded the package.
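
Downgrading is easy because pacman keeps the previous package files in its cache. A sketch (the file name is the one from my log above; the small helper for listing the cache is hypothetical):

```shell
# list_cached DIR PKG: show the package files pacman has cached for PKG,
# so you can pick the known-good build to roll back to.
list_cached() {
  ls "$1/$2"-*.pkg.tar.xz 2>/dev/null
}

# On the real system the cache is /var/cache/pacman/pkg, and the downgrade is:
#   pacman -U /var/cache/pacman/pkg/libtirpc-0.3.1-1-x86_64.pkg.tar.xz
```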

I took a look at the Arch Linux site and checked the latest changes for the package, finding the following commit for libtirpc:

add upstream patch to fix broken sudo and pam

And they had made this commit only 30 minutes before… ayee. The update was stopped and not rolled out. So what did I learn?

Live by the edge, die by the edge… or: I should only use testing when I really think I have to. It might mean getting half-baked updates. As far as I can tell that hasn't happened very often; this was probably the first time. Still, I disabled testing for production reasons, unless I really have to change that.

best regards Akendo

Writing Ahead!

The intention of this blog was, in the beginning, to have some sort of documentation. Yet I haven't managed to really keep up with it. Many reasons can be given for why I didn't: time, focus, lack of motivation. But somehow I have developed an anxiety about making mistakes in my texts, something that frightens me and stops me from publishing entries, using any kind of argument to not do it.

In order to change this, I'm going to write a blog entry every day over the next days. My goal is to embed this as a behavior into my everyday routine. The focus will not be only on technical things, but I'm trying to keep it there. The target of this exercise is to improve my English writing skills and to lose the fear of blogging.

best regards akendo

Some Work About Privacy

Over the last few days I have done some research on privacy and reflected on it, so that I'm able to get a better view of the topic.

Privacy is something complex, and maybe one of the things closest to being the foundation of our freedom. There are four core elements to it:

The right to be left alone

This is, from my point of view, the most important part: the physical right to be left alone and to be away from others. While humans are social animals, it's important to gather ourselves by being alone; this is essential to come closer to ourselves.

A friend of mine once said:

“We're only truly ourselves when we're alone. Everyone carries a mask in front of each other. Most of us can't live without it.”

Discretion and Intimacy

When not alone, we enjoy being around people we like, and we want to keep these relationships intimate. Often we think out loud in front of friends, maybe our parents or coworkers. And often we would never say directly what is on our minds, to not hurt them or maybe to protect ourselves from harm.

We care for intimacy and discretion.

Anonymity

What should be open to the public without reflecting back on us? Also, as an act of free speech, it is sometimes necessary to remain nameless so we're able to share our thoughts and ideas with others.

We also understand the need for anonymity in a democratic voting system.

Self-determined information disclosure

Diminishing and controlling the data that is directly or indirectly related to us.

What kind of information do we care to reveal to others? Maybe it is just about limiting the possibility of its usage: corporations we entrust with our bank data should keep it only for billing and should not forward it to someone else.

Yet our desire to be found and recognized for our work gives us the need to be seen in public: information we care to share and that we think makes others interested in us.

At Last

I'd like to highlight this video by Glenn Greenwald, 'Why privacy matters', because he explains it very well:

so far Akendo

Edit: Thanks to crowbar for his help!

Something on My Mind

There has been something on my mind for a while:

Is encryption an expression of free speech?

I think so, at least.

[ArchLinux] Random MAC Address for New Wireless Connections

I have been traveling more over the past year, to different places: England, Belgium. This means I also have to use untrusted wireless connections.

This leaves a good trace wherever you go, simply because your MAC address is used every time you connect to any WLAN. It is often stored, but for how long? There are good examples of this information being harvested for money.

Besides, you never know who else is listening and might want to use this data. To mitigate the problem I do the following: I generate a random MAC address for each new connection.

As an Arch Linux user I prefer the netctl tool. For each new connection via a profile[1], netctl will by default source the scripts in these two directories:

/etc/netctl/hooks           # General scripts that are always executed
/etc/netctl/interfaces      # Interface-related scripts that run when a profile uses that interface

So I simply call macchanger for the interface being used.

The script should be placed in /etc/netctl/interfaces/ with the name of the interface, here wlan0 as an example:

/etc/netctl/interfaces/wlan0

The content of this file:

#!/usr/bin/env sh
/usr/bin/macchanger -r wlan0

Ensure that it can be executed:

chmod +x /etc/netctl/interfaces/wlan0

Now, whenever you start a profile that uses the wlan0 interface, it will additionally execute the /etc/netctl/interfaces/wlan0 script.

See:

ip link show wlan0 | grep -m1 'link/ether'
 link/ether a2:0d:98:2d:ec:b5 brd ff:ff:ff:ff:ff:ff

A new connection, in this case a restart:

netctl restart SomeWlan 
ip link show wlan0 | grep -m1 'link/ether'
 link/ether a2:4e:8d:2f:4a:3c brd ff:ff:ff:ff:ff:ff

Another try:

netctl restart SomeWlan 
ip link show wlan0 | grep -m1 'link/ether'
 link/ether 5a:d4:e5:4c:8f:ad brd ff:ff:ff:ff:ff:ff

You can do the same with any other interface.

so far 4k3nd0

[1] man netctl.profile

New Year

Happy new year everyone!

Sorry for posting so little here; I had some things to do. I know: 'not enough discipline'.

Also not enough time. But I hope to do some more work in the next days. No, not a New Year's resolution. I'll just work fewer hours, so there will be more time for this blog.

so far, and a good start into the next year.

Akendo

An OpenWrt Sysupgrade Trap

So yesterday I upgraded my router to the latest version via sysupgrade. The upgrade went fine without any problems. But when I tried to log in to the router, something went wrong: the SSH connection was reset and closed.

Fuck! Locked out of my router! Is something wrong with dropbear? No, no, no… SSH is working fine. OK, check the documentation… what might I have changed on the system that isn't normal? Ah! I installed bash!

It turns out that when you do a sysupgrade on OpenWrt, all installed packages are lost. Because of my bash habit, I had set bash as the default shell for root within /etc/passwd.

That must be the problem! After the upgrade, the bash package was gone. To fix it, I had to start the router in failsafe mode. The default shell of OpenWrt is ash (the Almquist shell). Once in failsafe mode, I just had to run mount_root, then change the entry within /etc/passwd back to ash. A reboot later, my login was working again. Lucky me.

But what do I do to get my bash back? Simple: .profile was made for this. Here is the best solution:

[ -x /bin/bash ] && exec /bin/bash

So, things to be learned: don't try to be smart and redefine system shells within /etc/passwd. Use your .profile for this.
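
Since sysupgrade drops all extra packages anyway, a small habit helps for the next time: record what is installed before flashing. This is only a sketch; `opkg list-installed` prints one `name - version` line per package, and the tiny helper that strips it down to names is hypothetical:

```shell
# pkg_names: reduce `opkg list-installed` output on stdin to bare package names.
pkg_names() {
  cut -d' ' -f1
}

# Before the sysupgrade, save the list to a path that survives the flash
# (for example one listed in /etc/sysupgrade.conf):
#   opkg list-installed | pkg_names > /etc/installed-packages.txt
# Afterwards, reinstall everything in one go:
#   opkg update && opkg install $(cat /etc/installed-packages.txt)
```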

fpm+reprepro=Awesome

In the last days I had to work on an outdated version of Etherpad. Etherpad is a collaborative editing tool that runs on NodeJS. It's used a lot for planning or maintenance. I was looking for a good way to deploy it onto different nodes. We had a version running with a MySQL database, so I wanted to migrate that as well. But I had some issues getting Etherpad deployed onto my system.

Problem

The installation was a git-managed version (1.15) of Etherpad. It was a quick-and-dirty installation: no init or upstart script was in place, and automation was left out. It was hosted via nginx as a reverse proxy to NodeJS, but started via nohup with a provided script as a normal user. Random crashes were normal; someone had to go onto the server and start the service again.

I did not want to touch the server. One problem when upgrading: I can't roll back as easily as I wish when something goes wrong. I need an own VM instance for it. It should be able to run without any human interaction, even when something does go wrong. I also want to extend the current version with some more features.

So my tasks were to:

  • Automate the setup via puppet.
  • Install it via puppet onto its own VM.
  • Migrate the database to this VM.
  • Extend Etherpad with some more useful things.

Creating the VM is a piece of cake. But there are no real packages, because the official website of Etherpad just points to the GitHub master branch zip, and I couldn't find packages from anyone else.

fpm

I dislike deploying via git onto a node. This is where fpm comes into place. It allows you to create a simple package: deb or rpm, everything is possible. I heard about it at Puppet Camp in Berlin. It's a Ruby tool that tries to be as easy as possible. You can install it via gem install, on Arch Linux via yaourt ruby-fpm, or have a look at the project site.

Note: I use rvm to maintain my Ruby environments.

rvm use ruby-2.1.0
gem install fpm
fpm --version
1.1.0

Make sure that you're using at least version 1.1.0. I had some issues when I tried to unpack from a zip with an older version; that was version 0.9.8 (if I remember right). The project is moving quite fast, so make sure it's up to date. Here is an example with the unzipped folder:

fpm -s dir\
     -t deb\
     -C /tmp \
     -n etherpad\
     -v '1.4.0-1-gc3a6a23'\
     -m 'Akendo <4k3nd0@gmail.com>'\
     --license 'GPLv3'\
     --url 'http://etherpad.org/'\
     --deb-user etherpad \
     --deb-group etherpad \
     --prefix /opt/ \
     --description 'Etherpad is a highly customizable Open Source online editor providing collaborative editing in really real-time' \
     etherpad-lite

This will generate the package etherpad_1.4.0-1-gc3a6a23_amd64.deb. I tested it in my vagrant environment: on Ubuntu and Debian a simple dpkg -i etherpad_1.4.0-1-gc3a6a23_amd64.deb was enough. This installs the package to the folder /opt/etherpad.
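
Before shipping a package built this way, it's worth a quick sanity check of what fpm actually produced. The dpkg inspection commands below are standard; the small helper that splits the version out of the file name is hypothetical:

```shell
# deb_version FILE: extract the version part from a name_version_arch.deb path.
deb_version() {
  f=${1##*/}         # strip any directory prefix
  f=${f%_*.deb}      # drop the trailing _amd64.deb
  printf '%s\n' "${f#*_}"   # drop the leading package name
}

# Inspect the package itself before installing it anywhere:
#   dpkg -I etherpad_1.4.0-1-gc3a6a23_amd64.deb   # control metadata
#   dpkg -c etherpad_1.4.0-1-gc3a6a23_amd64.deb   # file listing
```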

puppet

I had looked into some puppet modules for Etherpad a while ago. There wasn't much promising; I only found two projects on GitHub. This one seems to do the job in a basic way:

velaluqa/puppet-etherpad

But it had some weaknesses. As said before, it uses the git branch to deploy, and git depends on user input: to check out or update things, but mostly to resolve merge issues. Doing this via puppet may overwrite changes I need later.

Besides, it doesn't track the dependencies of the module. A further problem was that the module had some mistakes within the template of the settings.json.

The other GitHub project for Etherpad is in an even more unusable state than the first one.

So lucky me, I got at least something.

I started to deploy on Ubuntu 12.04, but it was not working: wrong packages were installed, some logical mistakes. I forked the module and fixed them. I also added a list of the dependencies to track them via puppet-librarian. This allowed it to run in my vagrant environment. No one has responded to my merge request since, which is bad. Doing a review now, I see the puppet code there wasn't the 'best'.

So I started to do more rework.

I replaced the module for node with a native PPA for Ubuntu to have the latest version of node. This allows removing more puppet-related dependencies, but it binds the module more to Ubuntu: PPAs only seem to work with Ubuntu.

Things will be done more cleanly. But how do I get my self-built package onto the VM?

Reprepro

A colleague of mine, working on another project, showed me this neat piece of software. It allows me to create a simple Debian repository with any packages I want. At Puppet Camp there was a recommendation to host your packages on bintray, but I need to be able to place this on a local mirror without an Internet connection. reprepro allows me to deploy it everywhere I need to.

In the Debian wiki you can find a simple how-to for building a repository that is then hosted with Apache. I followed the process as explained on the wiki and made a proof of concept. When I was done, I could add this repository to /etc/apt/sources.list.d/ on my test VM.

The steps for this are quite simple. On a Debian-based system install it via apt-get install reprepro, on Arch Linux via yaourt reprepro.

You then need a GPG key to sign the packages with. Then you create the folder structure:

cd wished/place/to/create
mkdir -p ./repos/apt/debian/conf

Then you create a distributions file in the conf directory. It configures which releases of which distributions the repository will serve. For the moment I'll only support Ubuntu 12.04, matching the target system.

Origin: Your project name
Label: Your project name
Codename: <osrelease>
Architectures: i386 amd64
Components: main
Description: Apt repository for project x
SignWith: <key-id>

SignWith takes the id of your signing key.
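
To find the id to put after SignWith, list your keys in short format; the small helper that pulls the id out of the `pub` line is just an illustration:

```shell
# key_id: read `gpg --list-keys --keyid-format short` on stdin and print
# the short id of the first public key (the part after the slash).
key_id() {
  awk '/^pub/ { split($2, a, "/"); print a[2]; exit }'
}

# Usage on a real system:
#   gpg --list-keys --keyid-format short | key_id
```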

Then you need a file for reprepro to read its options from. Edit the file ./repos/apt/debian/conf/options with the following content:

verbose
basedir /var/www/repos/apt/debian
ask-passphrase

Now you can add the Debian package to the repository:

reprepro includedeb precise etherpad-lite_1.4.0-1-gc3a6a23_amd64.deb

This will create the following folders:

ls
conf    db  dists  pool

If you followed the Debian wiki, this folder is hosted via the Apache web server. The URL is then added under /etc/apt/sources.list.d/ like this:

cat /etc/apt/sources.list.d/etherpad-lite.list
# etherpad-lite
deb http://192.168.200.201 precise main

Deploy

Let's put this together. I'll use nginx for my repository.

When you're using the local reprepro folder, you have to deny access to everything but dists/ and pool/. I create the repository on my local laptop and then deploy it to a web server.

This is my PoC nginx configuration file:

server {
  listen 192.168.200.201:80;

  access_log /var/log/nginx/packages-access.log;
  error_log /var/log/nginx/packages-error.log;

  location / {
    root /var/www/reprepro/debian;
    index index.html;
  }
}

What's left to do is to sync the files from reprepro to my test VM:

rsync -rauvPh dists pool 192.168.56.201:/var/www/reprepro/

I also host this package via apt.akendo.eu/etherpad. I'll create some more packages in the future.

Remarks

The way these packages are built and maintained is currently very simple. I don't have that much understanding yet of what fpm is and isn't able to do. Debian packages are very powerful and allow configuring the main elements; this package is quite simple and only places a folder with the correct permissions on the system.

No dependencies are declared or anything; that remains to be done. It works as long as you use this package with puppet, but it cannot work well without it. I also still have some things to do, for example removing the git folders from the package.

This package can be tested via my vagrant project.

Update

I updated my vagrant environment; you can test this Etherpad via:

git clone git@github.com:Akendo/vagrant-skel.git -b etherpad-lite
cd vagrant-skel/
librarian-puppet install
vagrant up

Have fun.