Update of My Blog 3

2 minute read

Status

A few days ago I updated my blog. I migrated to Octopress, an awesome tool that allows me to write my blog in Markdown. My main problem was that Wordpress had become too fat; at some point I started to drop everything from the server that I didn't need, but Apache still ran into "out of memory" problems and connections were failing. Due to an outdated kernel version (which I can't control) there is no OOM killer, and at some point I wasn't able to log in to the server at all. The only solution was to reboot.

To prevent problems like this I moved to Nginx, but Wordpress doesn't work too well with it. Apache has a nice module for PHP, which Nginx is missing. The solution is to run a FastCGI process hosted on localhost; Nginx just forwards the requests to the FastCGI socket.
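For illustration, a minimal sketch of such a setup, assuming PHP-FPM listening on a local socket (server name, document root and socket path are placeholders, not my actual config):

server {
    listen 80;
    server_name blog.example.org;
    root /var/www/wordpress;
    index index.php;

    # forward PHP requests to the local FastCGI (PHP-FPM) socket
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}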

After this my web server used less memory, but the FastCGI process now held everything instead. Still, I could save some memory (around 100 MB). Besides that, Wordpress needs a MySQL database, which doesn't make me too happy.

Originally Django was the framework of my choice, but due to missing time and skills I wasn't able to build a blog with it.

Installation

For the server it's quite simple: I'm using the normal webserver.

The "client" is where I write and generate the actual entries. I followed the documentation from Octopress for the setup here.

Note: There is a bug in Gentoo that prevents rbenv from starting correctly; this can be fixed by running: unset RUBYOPT.

rake generate
/home/akendo/.rbenv/versions/1.9.3-p194/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- auto_gem (LoadError)
from /home/akendo/.rbenv/versions/1.9.3-p194/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'

Migration

For my blog

Workflow

First

Then I write my entry; to check that the Markdown looks fine, I verify it with this online converter.

Deploy

Summary

The webserver now only needs 50 MB of memory, I don't have to use PHP or MySQL, and overall I save 900 MB of memory. Awesome! I'm not very used to blogging like this yet, but it makes me happy.

so far 4k3nd0

My Gentoo Mirror

1 minute read

The Idea

I wanted to give something back to the Gentoo community for doing such a great job. Using Gentoo for some years has made me happy. So how can I give something back?

For this I'll try to host a mirror of the Gentoo Portage tree. I just set up the rsyncd service following this documentation (warning: it's in German). The VPS may not meet the requirements, but I'll give it a try.

I sent a request to the mirror admins, in the hope that they will add my mirror to the official list.

How to use

Add GENTOO_MIRRORS="rsync://gentoo-mirror.akendo.eu/" to my /etc/make.conf, then run eix-sync. Here is the output:

eix-sync
* Copying old database to /var/cache/eix/previous.eix
* Running emerge --sync
>>> Starting rsync with rsync://88.80.202.107/gentoo-portage...
>>> Checking server timestamp ...
This is a simple gentoo rsync mirror gentoo-mirror.akendo.eu (88.80.202.107) 

Serveradmin: 4k3nd0@gmail.com

Linux Version 2.6.18-028stab091.2-ent, Compiled #1 SMP Fri Jun 3 01:00:01 MSD 2011
Four 2GHz Intel Pentium Xeon Processors, 2M RAM, 15960 Bogomips Total

receiving incremental file list
timestamp.chk

[.....]

Awesome! I added a cronjob that syncs with the main mirror twice an hour. Maybe I'll add HTTP(S) support someday.
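For reference, a sketch of such a cronjob, assuming a hypothetical wrapper script /usr/local/bin/sync-portage.sh that runs the rsync against the upstream mirror (script and log paths are placeholders):

# /etc/crontab: sync from the upstream mirror every 30 minutes
*/30 * * * * root /usr/local/bin/sync-portage.sh >> /var/log/portage-sync.log 2>&1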

so far 4k3nd0

KVM and I/O problems

2 minute read

KVM and PostgreSQL problem: slow I/O with software RAID 1 and LVM2. We had some trouble with a software RAID 1 and our KVM virtual machines: read and write speeds were just bad, but only for parallel write/read requests, so single tests like dd or even bonnie didn't show the problem. A friend of mine helped me out: the problem is the I/O scheduler. Once I knew this, I found it explained in a post here, following the first link:
This approach is great, but the fatal flaw is that it assumes a single, physical disk, attached to a single physical SCSI controller in a single physical host. How does the elevator algorithm know what to do when the disk is actually a RAID array? Does it? Or, what if that one Linux kernel isn’t the only kernel running on a physical host? Does the elevator mechanism still help in virtual environments? No, no it doesn’t. Hypervisors have elevators, too. So do disk arrays. Remember that in virtual environments the hypervisor can’t tell what is happening inside the VM[0]. It’s a black box, and all it sees is the stream of I/O requests that eventually get passed to the hypervisor. It doesn’t know if they got reordered, how they got reordered, or why. It doesn’t know how long the request has been outstanding. As a result it probably won’t make the best decision about handling those requests, adding latency and extra work for the array. Worst case, all that I/O ends up looking very random to the disk array.
The fix is to change the I/O scheduler for the SCSI HDD / RAID 1. To disable the elevator (switch to noop) on a live system:
echo noop > /sys/block/<device>/queue/scheduler  # e.g. sda
To make it permanent, edit /etc/grub.conf and add the kernel option
elevator=noop
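For example, the kernel line could then look like this (kernel image and root device here are placeholders, not my actual setup):

kernel /boot/vmlinuz-2.6.18 root=/dev/md0 ro elevator=noop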
 
Sources:
http://www.gnutoolbox.com/linux-io-elevator/
http://blog.bodhizazen.net/linux/improve-kvm-performance/
http://lonesysadmin.net/2008/02/21/elevatornoop/
https://www.redhat.com/magazine/008jun05/features/schedulers/

MySQL to Postgresql

2 minute read

I dislike MySQL. I never liked it, and since Oracle bought Sun it has become more than just dislike. I prefer to work with PostgreSQL, so I want to migrate some of my MySQL databases to PostgreSQL.

My first try was mysqldump with the --compatible=postgresql option, which looks quite promising. But it isn't; it needs extra work. On https://en.wikibooks.org/wiki/Converting_MySQL_to_PostgreSQL I found a reference to a Ruby gem, but like Ruby gems always do, it didn't work (what do people like about Ruby? It never seems to work for me...). Luckily the internet is a wide space, and I found this blog post: http://onestoryeveryday.com/mysql-to-postgresql-conversionmigration.html. The Perl script referenced there wasn't working either, but it helped me get the Ruby gem to work. Where was the mysql2psql.yml? Ah, you have to run the program once; it then creates a mysql2psql.yml in the working directory.

But here a problem kicks in: you need a MySQL connection. So if you only have a dump, that's bad; it means installing a new MySQL database, importing the dump and then exporting it again. Otherwise you could save that time and work directly from the live database, but I had to do it this way anyway. It's a fine program, though it took a bit of time. Thanks for it, but some better shell handling would be welcome. So far, these are my steps:
sudo apt-get install mysql-server
sudo  apt-get install ruby-dev rake libmysql++-dev libpq-dev
sudo gem install mysql2psql
cd /tmp # Here is where my dump file is stored.
mysql -u root -p << EOF
CREATE DATABASE mail;
EOF
mysql -u root -p mail < mail.dump
mysql2psql # Need to edit the config file
mysql2psql # This can take a while
createdb -U postgres mail
psql -U postgres -d mail < mysql_db_to_pg.dump
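As an optional sanity check (my addition, not part of the original steps), you can list the imported tables afterwards:

psql -U postgres -d mail -c '\dt'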
Here is an example of the mysql2psql.yml file:
mysql:
  hostname: localhost
  port: 3306
  socket: /var/run/mysqld/mysqld.sock
  username: root
  password: apassword
  database: mail

destination:
  # if file is given, output goes to file, else postgres
  file: mysql_db_to_pg.dump
  postgres:
    hostname: localhost
    port: 5432
    username: mysql2psql
    password:
    database: mysql2psql_test

# if tables is given, only the listed tables will be converted. leave empty to convert all tables.
#tables:
#- table1
#- table2

# if exclude_tables is given, exclude the listed tables from the conversion.
#exclude_tables:
#- table3
#- table4

# if supress_data is true, only the schema definition will be exported/migrated, and not the data
supress_data: false

# if supress_ddl is true, only the data will be exported/imported, and not the schema
supress_ddl: false

# if force_truncate is true, forces a table truncate before table loading
force_truncate: false
  so far Akendo

How to deploy WSGI with Apache

1 minute read

Related to the previous post, I now had to move from the virtualenv setup to an Apache webserver. I deployed it with WSGI. For that you need to enable mod_wsgi in Apache2. To enable it on Debian:
apt-get install libapache2-mod-wsgi
a2enmod wsgi
service apache2 restart
Now disable the old mod_python in the configuration files. Add this to the top of your virtual host file:
WSGIDaemonProcess $GROUPNAME python-path=/path/to/django/project/.env/lib/python2.6/site-packages user=apache group=apache processes=2 threads=25
WSGIProcessGroup $GROUPNAME
WSGIScriptAlias / "/path/to/django/project/wsgi.py"
DocumentRoot /path/to/django/project/

<Directory /path/to/django/project/>
 Order allow,deny
 Allow from all
</Directory>
That's it. Some details: it is important to set a WSGIDaemonProcess, since it allows you to specify the resources that the WSGI process can use. Then there is the $GROUPNAME (set it to whatever you like), so that when you have more than one WSGI deployment, they don't interfere with each other. There are still more options possible; see the sketch below.
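As a sketch of what I mean, the same directive with a couple of extra options taken from the mod_wsgi documentation (the values here are just placeholders, not a recommendation):

WSGIDaemonProcess $GROUPNAME display-name=%{GROUP} processes=2 threads=25 maximum-requests=1000 inactivity-timeout=300 python-path=/path/to/django/project/.env/lib/python2.6/site-packages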

Building a virtualenv

1 minute read

virtualenv

Virtualenv is a Python program that allows you to create a container, separate from the system, with its own version of python/pip. All packages installed with pip are stored there. This allows a developer to install all necessary packages in userspace. It's smoother and allows having separate versions of the same project, for example Django. Starting command:
virtualenv --no-site-packages .env
To apply the virtualenv to your local environment run
source .env/bin/activate
(.env)akendo @ akendo :: .../Django
That will change your bash environment to use the python/pip version of the virtualenv. Now you can install your Python packages via pip without interfering with other versions on the system. As an example for our Django project, this is the list of needed packages, all stored in a "requirements.txt" file. To create a requirements.txt, put all the required packages into the file; you can also use the pip freeze command to generate one from all installed packages (see the one-liner after the install command below).
 django==1.2.7
 django-celery
 psycopg2
 PIL
 BeautifulSoup
 Markdown
 django-tastypie
 django-oauth
 oauth2
 simplejson
When you have it all, just pass the -r option to the pip call:
pip install -r path/to/requirements.txt
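And to capture the packages of an existing environment into such a file (assuming the virtualenv is active):

pip freeze > requirements.txt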
 
Sources:
Setting up Django virtual environment via python  
 

Enable remote access for PostgreSQL

2 minute read

Remote access to a PostgreSQL database

In the last days I have worked a lot with PostgreSQL. We have some Django applications which need some extra SQL love, and I have an installation script that inserts all the extra SQL into the database. The problem: how do I do this on a remote database host without copying everything over every time? A try with the psql command showed that it also supports remote hosts. The great thing about psql is that it will automatically use an SSL connection. The command is:
psql -h $DATABASE_HOST -d $DATABASE -U $ROLE
To make this work, I first had to make the socket listen on an interface other than localhost. Change listen_addresses in postgresql.conf:
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '0.0.0.0' # what IP address(es) to listen on;
                             # comma-separated list of addresses;
                             # defaults to 'localhost', '*' = all
                             # (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
# Note: Increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).
Add your accessing host to pg_hba.conf:
# Database administrative login by UNIX sockets
local all postgres trust
# TYPE DATABASE USER CIDR-ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
host myDB all 172.17.7.2/27 trust
Now restart PostgreSQL, and I can access it via psql from my host:
psql -h POSTGRESQL_HOST -U $USER -d myDB
psql (8.4.11)
Type "help" for help.
myDB=#
 

Security note:

The "trust" is taking every request reguardless who is doing. This IS very DANGEROUS! I did it becourse i'm a lazy admin and don't want to insert everytime a password during deployments. Be aware of Spoofing Attacks!

rsync - the better scp

3 minute read

Regarding this post, I'm writing this one. First: scp is great. But I recommend using rsync instead; it has replaced the cp and scp commands for me. Why? Simple: rsync is more powerful. It allows me to do dry runs, and it's very efficient. Moving files should be easy and safe, and that is what rsync does. It's a tool that needs a little getting used to. rsync is mostly known for backups, but that is almost the same as the copying around you do with cp/scp.

Some examples

Start with a dry run:
rsync -nv  somefile user@server:/path/to/copy
This will connect as user to server and show what it would copy. That is very useful to check when you're running recursively. A trailing / makes a lot of difference in the paths shown:
 rsync -rnv  somefolder user@server:/path/to/copy/
somefolder/a
rsync -rnv  somefolder/ user@server:/path/to/copy/
./a
 

Problems with scp

You have a big file you want to push to your vServer, and you don't want to fill up all of your bandwidth, so you set a limit. With scp -l (which takes the limit in Kbit/s) we set a limit of 150:
scp -l 150 somefile user@server:/path/to/save
I got some weird problems with the scp limiter: sometimes it worked, sometimes it didn't. Why? This is the version with rsync:
rsync --bwlimit 150 somefile user@server:/path/to/save
It's basically the same, but it works reliably with rsync. However, when the copy process stops before it's done (a lost connection, for example), you have to start over again. The solution is rsync's --inplace option: rsync is able to update a file in place, which is important because it lets you suspend and resume a copy process.
rsync --inplace somefile user@server:/path/to/save
(Lost Connection)
rsync --inplace somefile user@server:/path/to/save
Keeping the permissions: scp will copy and place the file with the default umask of the system user. rsync allows you to keep the right permissions with the -a option:
ls -l somefile
-rw-r--r-- 1 akendo akendo 347  4. Dez 16:21 somefile
scp somefile user@server:/path/to/save
user@server: ls -l somefile
-rw-r----- 1 user user 347  Mar  7 00:19 somefile
rsync -a somefile user@server:/path/to/save
user@server: ls -l somefile
-rw-r--r-- 1 akendo akendo 347  4. Dez 16:21 somefile
As you can see, the user/group and the timestamp are lost with scp, while with rsync they remain the same.

Using human-readable output:
rsync -nvh somefile user@server:/path/to/copy
somefile
sent 2.02K bytes received 112 bytes 1.42K bytes/sec
total size is 964.25M speedup is 452912.60 (DRY RUN)
A nice example to clone a folder:
rsync -rauvh /file user@server:/path/to/save
This will copy the folder "file" to "/path/to/save" on the host server as user. In some other post I'll show the best way of using it as a backup method. Feel free to add comments or improve this post.
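Putting the options from this post together, a sketch of a typical transfer (paths and host are placeholders):

rsync -avh --inplace --bwlimit 150 /path/to/folder/ user@server:/path/to/save/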

How to fix libgeos_c.so not found

1 minute read

I'm working with Python, Django and PostGIS. When I tried to add geo support to the project, I hit this error message:
OSError: /usr/local/lib/libgeos_c.so: cannot open shared object file: No such file or directory
It told me it was trying to load the library from the /usr/local/ path, which is wrong. I found the lib correctly installed at /usr/lib/libgeos_c.so, so I simply linked it into the /usr/local/lib/ folder:
sudo ln -s /usr/lib/libgeos_c.so /usr/local/lib/
That does the job.
Make sure that you have sci-libs/geos installed.
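An alternative, if you'd rather not create the symlink, is to point GeoDjango directly at the library in your settings.py (using the path found above):

GEOS_LIBRARY_PATH = '/usr/lib/libgeos_c.so'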