Archive
How to create a DokuWiki farm
Let’s see how to install DokuWiki, create a farm of wikis and run it privately, i.e. with our wikis not reachable from the Internet.
I assume you have an Apache web server and PHP 7 running on your box. I’ve done my installation on an Ubuntu box.
Install the wiki
First we download the tarball from here and uncompress it. The result is a directory called dokuwiki. Then follow these steps:
$ sudo mv /home/vicent/Downloads/dokuwiki /var/www/dokuwiki
$ cd /var/www
$ sudo chown -R www-data:www-data dokuwiki
$ sudo a2enmod rewrite
Then, using /etc/apache2/sites-available/000-default.conf as a template, we create and enable the following virtual host (/etc/apache2/sites-available/dokuwiki.conf):
<VirtualHost *:80>
ServerName dokuwiki
ServerAdmin webmaster@localhost
DocumentRoot /var/www/dokuwiki
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
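Enabling the new site can be done with a2ensite (the Apache restart below will pick it up):
$ sudo a2ensite dokuwiki.conf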
Then edit /etc/hosts and add the line:
127.0.0.1 dokuwiki
and restart the Apache service:
$ sudo systemctl restart apache2
Finally run the installation script with the web browser:
http://dokuwiki/install.php
Just fill in the fields (it is pretty easy) and you are done 🙂 Now you can browse your wiki at http://dokuwiki
About the farm
Suppose that you want to have a wiki for every member of your family (for instance you, your wife and your daughter). Of course you can do it simply creating and organizing properly pages on your wiki instance. But just for fun you can also create a farm of wikis with three independent wikis: one for you, one for your wife and one more for your daughter.
Let’s see how to manually set up a farm of wikis. Usually there is only one farm per server. The farm is made of a farmer (the actual DokuWiki installation, described in the previous section) and one or more animals (i.e. individual wiki instances). The setup of a farm directory is what is needed to start farming. Our setup will be:
/var/www/dokuwiki -> the dokuwiki engine
/var/www/farm -> the dokuwiki farm directory which contains the animals
In the farm directory we can have as many animals as we like:
/var/www/farm/wiki_1 -> one wiki
/var/www/farm/wiki_2 -> another wiki
There are two different setups: virtual host based and .htaccess based. We will use the first one. Its main advantage is that it can create much more flexible URLs which are independent of the underlying file structure. The disadvantage is that this method requires access to the Apache configuration files.
Beware that in our case (a wiki running locally, not visible from the Internet) the mentioned advantage is not really important and the disadvantage simply doesn’t apply.
To access the farm from your local machine you have to edit the /etc/hosts file as described above.
Create the farm directory
Create an empty directory named /var/www/farm. That will be the farm directory and it needs to be writable by the web server.
$ sudo mkdir /var/www/farm
$ sudo chown -R www-data:www-data /var/www/farm
$ sudo chmod -R ug+rwx /var/www/farm
$ sudo chmod -R o-rwx /var/www/farm
Activate the farm
This is easy too.
$ sudo cp /var/www/dokuwiki/inc/preload.php.dist /var/www/dokuwiki/inc/preload.php
Open that file, uncomment the two relevant lines and set the farm directory:
if(!defined('DOKU_FARMDIR')) define('DOKU_FARMDIR', '/var/www/farm');
include(fullpath(dirname(__FILE__)).'/farm.php');
Add an animal
Download the animal template and extract it in the farm directory. The archive contains a directory called _animal which includes an empty data directory and a pre-filled conf directory. Rename the directory. Beware that the virtual host setup needs animal directory names that reflect their URL, e.g. the URL vicent.uvemas.org works with a directory named vicent.uvemas.org.
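For example, assuming the downloaded template is a tarball called animal.tgz (the actual archive name may differ), the steps could look like this:
$ cd /var/www/farm
$ sudo tar -xzf ~/Downloads/animal.tgz
$ sudo mv _animal vicent.uvemas.org
$ sudo chown -R www-data:www-data vicent.uvemas.org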
In your /etc/hosts file you should add the line:
127.0.0.1 vicent.uvemas.org
Virtual Host Based Setup
For this setup we create and enable a new site in /etc/apache2/sites-available for each new animal. For example, for vicent.uvemas.org we will create the file vicent.uvemas.org.conf with the following content:
<VirtualHost *:80>
ServerName vicent.uvemas.org # this is the URL of the wiki animal
ServerAdmin webmaster@localhost
DocumentRoot /var/www/dokuwiki # the document root always needs to be the DokuWiki *farmer* directory
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
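Then enable the site and restart Apache:
$ sudo a2ensite vicent.uvemas.org.conf
$ sudo systemctl restart apache2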
Now you should be able to visit the animal’s URL, e.g. http://vicent.uvemas.org, and enjoy your wiki.
Change the admin password
As the animal template includes a default admin user account with the password admin, we should change that password as soon as possible and change the admin’s email address too.
I will stop here but there are a lot of things that you can do now: set up ACLs, install plugins, create namespaces… Please visit the DokuWiki website. There you will find tons of info about everything.
How to install Arch Linux on a VM VirtualBox (II)
In the last entry we began to see how to install Arch Linux on a VirtualBox VM. We created and set up the virtual machine and installed Arch Linux. Now we are going to see how to set up Arch Linux so that it runs smoothly on the virtual machine.
So we log on as root and continue our work. The first thing to do is to make sure that you can use the keyboard. If your keyboard layout is English you have to do nothing, but if it is not then you need to set up the proper keyboard layout. In my case (Spanish layout) I have to run the command:
# localectl set-keymap --no-convert es
which sets the value of the KEYMAP variable in the /etc/vconsole.conf file:
KEYMAP=es
This configuration is persistent and also applies to the current session. You can find more information about how to configure the keyboard in the console here.
The next thing is to automatically connect to the Internet when the system boots. We can achieve this goal by enabling the DHCP client daemon as a service:
# systemctl enable dhcpcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dhcpcd.service to /usr/lib/systemd/system/dhcpcd.service
# reboot
We log on again and check the network connection using the ping command:
# ping -c 3 www.google.com
PING www.l.google.com (74.125.224.146) 56(84) bytes of data.
64 bytes from 74.125.224.146: icmp_req=1 ttl=50 time=437 ms
64 bytes from 74.125.224.146: icmp_req=2 ttl=50 time=385 ms
64 bytes from 74.125.224.146: icmp_req=3 ttl=50 time=298 ms
--- www.l.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 298.107/373.642/437.202/57.415 ms
Now let’s pay attention to the time synchronization. It is an important topic on a virtual machine because the CPU is shared among several systems. For instance you can see time delays on your virtual machine if the host system goes to sleep. There are several options for getting the time synchronized. The following works fine for me:
# pacman -S ntp
# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service
i.e. I install the ntp package (which contains an NTP server and client) and start it as a service every time the system boots, but I don’t set up my system as an NTP server. This setup causes the hardware clock to be re-synchronised every 11 minutes. In theory there are simpler ways to achieve the synchronization goal (like using SNTP) but I’ve not been able to make them work properly. You can get more information about this topic here and here.
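Once the service is running, a quick sanity check is to list the peers the daemon is talking to (the exact output will vary):
# ntpq -p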
After checking that time synchronization works fine we can go to the next task: adding a new user. It is a typical task when administering a Linux system and can be done easily:
# useradd -m -s /bin/bash vicent
# passwd vicent
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
The above commands create a user called vicent, create his home directory in /home/vicent, give him a bash shell and set his password.
Next we’ll add the new user to the sudoers file. This way vicent will be able to execute a command with root privileges temporarily granted to that single command. How privileges are escalated depends on how the sudoers file is changed. In order to get both the sudo command and the sudoers file we install the sudo package:
# pacman -S sudo
Instead of editing the sudoers file directly we create files under the /etc/sudoers.d directory. These files are automatically included in the sudoers file every time the sudo command is issued. This way we keep the sudoers file clean and easy to read. The sudoers file and the files under /etc/sudoers.d are edited with the visudo command, which edits the files in a safe fashion (see the man page of the visudo command for details):
# visudo -f /etc/sudoers.d/90-vicent
We add the following line to the file:
vicent ALL=(ALL) ALL
It means that, on all hosts where this sudoers file has been distributed, vicent can execute any command with root privileges (after being prompted for vicent’s password).
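For example, vicent can now update the whole system:
$ sudo pacman -Syu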
Now it’s time to install the graphical components. We begin by installing the X Window System as follows:
# pacman -S xorg-server xorg-server-utils xorg-apps xorg-twm xorg-xinit xterm xorg-xclock ttf-dejavu --noconfirm
The above command will install the main components of X, including the twm window manager. The X configuration directories are:
- /usr/share/X11/xorg.conf.d
- /etc/X11/xorg.conf.d
Neither of those directories contains the keyboard configuration, so in order to keep my non-English layout when X is running I execute the command:
# localectl --no-convert set-x11-keymap es
which creates the file /etc/X11/xorg.conf.d/00-keyboard.conf.
Now, before starting the X, we install the VirtualBox Guest Additions package and configure it:
# pacman -S virtualbox-guest-utils --noconfirm
we load the following modules:
# modprobe -a vboxguest vboxsf vboxvideo
and make them load at every boot by creating the virtualbox.conf configuration file:
# echo vboxguest >> /etc/modules-load.d/virtualbox.conf
# echo vboxsf >> /etc/modules-load.d/virtualbox.conf
# echo vboxvideo >> /etc/modules-load.d/virtualbox.conf
Now we ensure that the user created before will be able to access the shared folder with read-write permissions (we created the shared folder in the first part of this tutorial):
# usermod -a -G vboxsf vicent
# chown root.vboxsf /media
# chmod 770 /media
Finally we enable the guest additions service so it will be started at every system boot:
# systemctl enable vboxservice.service
Created symlink from /etc/systemd/system/multi-user.target.wants/vboxservice.service to /usr/lib/systemd/system/vboxservice.service
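After a reboot the shared folder should then be auto-mounted under /media with an sf_ prefix, e.g. (the folder name here is hypothetical):
$ ls /media/sf_shared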
Now we’re ready to come back to the X Window System. Before starting it we have to create the /root/.xinitrc file (indeed we need that file in the $HOME of every user starting X) with the following contents:
# Make sure the root user uses the right keyboard map
setxkbmap -model pc104 -layout es
# Start the VirtualBox Guest Additions
/usr/bin/VBoxClient-all
# Start the window manager
exec twm
Then we issue the startx command, which in turn sources the .xinitrc file, so the result is a screen like this:
Reboot the system and log on again in a virtual console. We have reached the last step of the process, i.e. the installation of a desktop environment. As I adhere to the ‘keep it simple’ philosophy of Arch Linux my choice was LXDE. In order to install it we have to issue the following commands:
# pacman -S lxde
# systemctl enable lxdm.service
Created symlink from /etc/systemd/system/display-manager.service to /usr/lib/systemd/system/lxdm.service
# vi /etc/lxdm/lxdm.conf
and uncomment the line:
session=/usr/bin/startlxde
It is important to note that the startx command is not called and so the .xinitrc file is not sourced: the display manager (which is started as a service every time the system boots) directly calls the startlxde command, which is in charge of starting the LXDE desktop environment.
To make sure that the non-English keyboard map persists between LXDE sessions we edit the file /etc/xdg/lxsession/LXDE/autostart and append the line:
setxkbmap -model pc104 -layout es
Now reboot, log on and you will get a nice LXDE screen. In my case, after some tweaking, it looks like this:
How to install Arch Linux on a VM VirtualBox (I)
In this entry I’ll describe the steps I followed to successfully install Arch Linux on a VirtualBox virtual machine. The host system is Windows 8.1.
My main source of documentation has been the excellent Arch Linux Installation Guide wiki. Like the rest of the wiki, it is of very high quality.
The first thing to do is to download the latest Arch ISO. While the ISO is downloading you can create the virtual machine. The properties of the VM I created are:
- name: ArchLinuxVM
- type of operating system: ArchLinux (64bit)
- RAM memory size: 2GB
- hard drive type: VDI, dynamically allocated
- hard drive size: 20GB
- bidirectional clipboard: yes
- shared folder: yes (remember that the shared folder must exist on the host system before setting this property)
- shared folder auto-mount: yes
If you aren’t new to VirtualBox and know how to setup the machine described above you can skip the next section.
Creating and Configuring the Virtual Machine
Open the VirtualBox program, click the New button on the toolbar, enter the machine name, and choose the OS type and version.
Choose the RAM size. In my case the host system has 8GB so 2GB of RAM was a sensible choice for my VM.
The next step is to create a virtual hard drive.
In the next screens we choose the disk type to be VDI and to allocate the space dynamically. Then we choose the hard disk size. The default size is 8 GB, which is probably too small, so we increase it to 20 GB.
We click the Create button and then we start the setup of the VM by clicking the Settings button on the toolbar.
Now we go to the General -> Advanced tab and set up the bidirectional clipboard.
Afterward we set up a shared folder. It will be useful to share data between the host and guest systems. In the host system it is seen as a regular folder. In the guest system it is a folder with the same name but living in the /media directory. Before setting up the shared folder it must be created on the host system.
We go to the Shared Folders tab, enter the path of the shared folder and tick the auto-mount check box.
If everything went O.K. it should look like this:
Finally we select the Storage tab. Click the Add CD button (the small CD with a plus sign) and virtually insert the previously downloaded ISO into the CD drive of the VM.
The VM is now created and configured so we can proceed with the Arch Linux installation.
Installing Arch Linux
Now we are ready: in the VirtualBox program click Start on the toolbar and a boot screen will appear, showing several boot options. Press Enter (i.e. choose Boot Arch Linux x86_64). After a few seconds you will get a terminal with the root user automatically logged on.
The first thing to do if you’re not using an English keyboard is to set the keyboard layout. I’m living in Spain and using a keyboard with Spanish layout so I have to run the command:
# loadkeys /usr/share/kbd/keymaps/i386/qwerty/es
Next you have to partition the virtual hard disk. But first you need to know how your disk is named, so you issue the command lsblk.
In my case the name is sda (I know it because its type is disk and it is 20 GB big). The last thing to do before partitioning is to choose the format of the partition table. You have two options: the classic MBR format and the modern GPT format. In general, if your boot loader is not GRUB Legacy and you are not running a multi-boot system with Windows using BIOS, then it is recommended to use the GPT format, so we will use it (you can read more about both formats here).
Now that we know the disk name and the partition table format we can issue the proper command to partition the disk, in our case:
# gdisk /dev/sda
gdisk is the GPT version of fdisk. A prompt asks us what we want to do (create new partitions, set partition start and end sectors, etc.). The following screenshot shows an example:
After partitioning the disk the partitions table looks like:
Partition 1 is for installing the GRUB bootloader, partition 2 is for the / filesystem, partition 3 is for the /boot filesystem and partition 4 is for the /home filesystem. As you can see we aren’t creating a swap partition. This is because we have a large amount of RAM and a swap partition will probably not be necessary.
Next we format our partitions with the proper flavors of the mkfs command.
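For instance, assuming we choose ext4 for the three filesystems (the BIOS boot partition /dev/sda1 is left unformatted):
# mkfs.ext4 /dev/sda2
# mkfs.ext4 /dev/sda3
# mkfs.ext4 /dev/sda4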
Now we have to create the mount points for the partitions and mount them (beware that we don’t mount the BIOS boot partition, /dev/sda1):
# mkdir /mnt/boot
# mkdir /mnt/home
# mount /dev/sda2 /mnt
# mount /dev/sda3 /mnt/boot
# mount /dev/sda4 /mnt/home
The next step is to test our connection to the Internet using the ping command:
# ping -c 3 www.google.com
PING www.l.google.com (74.125.224.146) 56(84) bytes of data.
64 bytes from 74.125.224.146: icmp_req=1 ttl=50 time=437 ms
64 bytes from 74.125.224.146: icmp_req=2 ttl=50 time=385 ms
64 bytes from 74.125.224.146: icmp_req=3 ttl=50 time=298 ms
--- www.l.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 298.107/373.642/437.202/57.415 ms
Everything seems O.K. (no packet loss) so we go to the next step, the selection of download mirrors. We edit the /etc/pacman.d/mirrorlist file and select the desired mirrors. Regional mirrors usually work best, but it may be necessary to consider other concerns. In my case I simply selected the first five mirrors in the list (to select a mirror just uncomment the line containing the server). The lower the score, the better the server works.
Now we download from the Internet the base system and install it:
# pacstrap /mnt base
This is the base system so don’t expect a graphical web browser to be installed 🙂
Now we generate the fstab file:
# genfstab -pU /mnt >> /mnt/etc/fstab
At this point we are ready to change root into the system:
# arch-chroot /mnt
The next steps are pretty easy. First we set the hostname and the time zone (in my case Europe, Madrid):
# echo ArchLinuxVM > /etc/hostname
# ln -sf /usr/share/zoneinfo/Europe/Madrid /etc/localtime
Then we have to generate and set up the wanted locales. It is a three-step process. First we edit the /etc/locale.gen file and uncomment the needed locales (es_ES.UTF-8 in my case).
Second, we generate the required locales:
# locale-gen
And third, we set the locale preferences in the /etc/locale.conf file:
# echo LANG=es_ES.UTF-8 > /etc/locale.conf
Now we set the password for the root user:
# passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Now it’s time to install the GRUB bootloader in the boot partition (/dev/sda1 in our case):
# pacman -S grub
# grub-install --target=i386-pc --recheck --debug /dev/sda
# grub-mkconfig -o /boot/grub/grub.cfg
Note that grub-install installs the bootloader to the desired device and copies the GRUB images and modules to /boot/grub/i386-pc. In addition, grub-mkconfig generates the GRUB configuration file grub.cfg and saves it under /boot/grub.
Once the bootloader has been installed we can reboot the virtual machine:
- leave the change root environment
# exit
- optionally unmount all the partitions
# umount -R /mnt
- remove the installation media (i.e. go to the VM VirtualBox top menu and, in the Devices menu, choose CD/DVD devices and remove the disk from the virtual drive)
- issue the reboot command
# reboot
And that’s enough for today. In the next blog entry we’ll complete the virtual machine configuration (with a permanent setup of the Internet connection, user’s creation, installation of X Window System, etc.).
Some whitespace pitfalls in Bash programming
Bash is a Unix shell, a command interpreter. It is a layer between the system function calls and the user that provides its own syntax for executing commands. It can work in interactive mode or in batch mode (i.e., via scripts). In interactive mode the Bash prompt, which usually ends with the $ symbol, tells you that Bash is ready and waiting for your commands.
Because Bash doesn’t have as many features as other programming languages like Java or Python, many people suffer from the misconception that it is a sort of simple tool and that it takes only a few hours to get proficient with it. Wrong approach. As with any `decent` programming language, if one doesn’t pay attention and learn it properly, she will see lots of unexpected errors when running her scripts.
An endless source of errors in Bash programming is the management of whitespaces. In this blog entry we will collect some whitespace pitfalls that usually hit Bash beginners and also developers that only write Bash scripts from time to time (like me).
Commands and arguments
Bash reads commands from its input (a file, a terminal emulator…). In general, each read line is treated as a command, i.e., an instruction to be carried out. Bash divides each line into words at each whitespace (spaces or tabs). The first word it finds is the name of the command to be executed and the remaining words become arguments to that command.
The above paragraph is IMHO the golden rule that one should never forget if she wants to use Bash in a painless way. It seems trivial (and in fact, it is) but most programming languages are not so strict about whitespace, so if someone regularly develops in Perl, Java, C…, it is likely that she tends to use whitespace in a wrong way when writing Bash scripts. For instance, in Python one can set a variable as follows:
my_var = 4
In fact, it is recommended to delimit the operator with whitespaces for improving readability. However, doing the same thing in Bash will raise an error:
$ my_var = 4
my_var: command not found
because Bash interprets the above line as: execute the my_var command with arguments = and 4. But the my_var command doesn’t exist so we get an error. The proper way to do the assignment is:
$ my_var=4
Parameter expansion
In order to access the data stored in a variable, Bash uses parameter expansion, i.e., the replacement of a variable by its value (at expansion time the parameter or its value can be modified in a number of ways but we will only use simple replacements here). The syntax for telling Bash that we want to do a parameter expansion is to prepend the variable name with a $ symbol. Although not always necessary, it is recommended to put curly braces around the variable name:
$ echo "my_var = ${my_var}"
my_var = 4
Non double-quoted results of expansions are subject to word splitting. Double-quoted results of expansions are not. And because after the replacement Bash can still execute actions on the result (due to the golden rule mentioned above) we have just met a pervasive mistake involving whitespaces: the wrong quotation of results of parameter expansions. For instance, if we want to create an empty file named My favourite quotations.txt then we shouldn’t issue the following command:
$ filename="My favourite quotations.txt"
$ touch ${filename}
the result of the parameter expansion is not double-quoted so word splitting happens and we execute the touch command with three arguments, My, favourite and quotations.txt. Instead we should double-quote the result of the expansion to avoid word splitting after the parameter expansion:
$ touch "${filename}"
The above example is trivial and harmless but the same considerations apply to potentially dangerous commands like rm
. If we forget the double quotes we may end up removing the wrong files (which can be annoying if we don’t have a backup of those files). When a variable contains whitespaces we have to decide if its expansion needs double quotation or not. The general advice is to put double quotes around every parameter expansion; bugs caused by unquoted parameter expansions are harder to debug than those due to quoted expansions.
Another source of problems related with parameter expansions and whitespaces is the use of metacharacters because unquoted expansions are also subject to globbing. Globbing happens after word splitting. For instance, if in our current directory we have a TODO.txt
and a README.txt
files, issuing the following similar commands will produce very different results:
$ msg="There are *.txt files in this directory"
$ echo $msg
There are README.txt TODO.txt files in this directory
$ echo "$msg"
There are *.txt files in this directory
Testing conditions
Another common misuse of whitespaces hits many people when they call the [ builtin or the [[ keyword for testing some expression (you can find a nice explanation about the types of Bash commands here). It seems to be very easy to forget that both of them are commands whose arguments are an expression and a closing ] (or ]]). So one can write the following for checking if the file TODO.txt in the current directory is indeed a file:
$ [-f TODO.txt]
[-f: command not found
$ [ -f TODO.txt]
-bash: [: missing `]'
$ [ -f TODO.txt ] && echo OK
OK
In the first case we are trying to execute a non-existing command because the [ is not followed by a whitespace. In the second case the last argument passed is not a ] because the ] is not preceded by a whitespace. The third case is fine.
In addition, [ and [[ manage whitespaces in a different way. Parameter expansions inside a [ are subject to word splitting and globbing so quoting those expansions can be critical. However [[ is a keyword and it parses its arguments and does the expansion itself, taking the result as a single argument even if the result contains whitespaces. So if we want to check the content of:
$ my_var="I like quotes"
we can do:
$ [ "$my_var" = "I like quotes" ] && echo "Me too!"
Me too!
$ [[ $my_var = "I like quotes" ]] && echo "Me too!"
Me too!
[[ has more nice features (pattern matching, support of control operators for combining expressions) that make it more powerful than [, but those features are not directly related to whitespaces so they will not be considered here.
Also notice that many beginners tend to think that the if keyword must be followed by a [ or a [[, just as if must be followed by ( in C and other programming languages. This is not the case in Bash: if must be followed by one or more commands. Those commands can be [ or [[ but they don’t have to be.
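For instance, any command’s exit status can drive an if directly:
$ if grep -q root /etc/passwd; then echo "root found"; fi
root found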
Grouping commands
The last example of a common mistake related to whitespaces is the use of the one-line version of grouping commands. The proper syntax is:
{ <LIST>; }
So we have to pay attention to the whitespaces between the curly braces and the list of commands and also to the semicolon after the list of commands or we can get errors like these:
$ {rm TODO.txt || echo "Unable to delete file" >&2; }
-bash: syntax error near unexpected token `}'
$ { rm TODO.txt || echo "Unable to delete file" >&2 }
> ^C
$
In the first case the closing brace is unexpected because there is no opening brace (instead we have issued a non-existing {rm command). In the second case there is no semicolon after the last command. As a consequence Bash thinks that the closing } is an argument of the last command and waits for the user to close the group of commands. We have aborted the unfinished command and come back to the default interactive prompt by pressing Ctrl-C.
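For completeness, the correct version has a whitespace after the opening brace and a semicolon before the closing one:
$ { rm TODO.txt || echo "Unable to delete file" >&2; }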
Further reading
A couple of great sources of information about Bash are:
- tutorial for newbies: the Bash Guide in Greg’s wiki
- not for newbies: the Bash hackers wiki
Setup an Android development environment on Ubuntu
Recently I’ve been involved in a project for developing an Android smartphone application. This is a new working field for me, as I had never developed applications for mobile devices before, so it requires an important extra effort on my side (tons of things to learn). As I always do when I find myself turned into a newbie, I’ve started to read documentation… and I’ve set up my development environment. For the setup I’ve followed the instructions found here. In theory it is an easy process. In practice it can be a little bit complicated, so I decided to write this post. The setup described in this post has been tested on my Kubuntu Oneiric laptop.
First of all I’ve installed the openJDK implementation of the Java6 SDK:
$ sudo apt-get install openjdk-6-jdk
Other Java implementations have been discarded due to different reasons:
- Java7 is not supported by Android
- Java Oracle packages are not available on Ubuntu Oneiric official/partner repositories
- the GNU Java compiler is not compatible with the Eclipse IDE so it is not an option if you plan to develop with Eclipse
The recommended IDE for developing Android applications is Eclipse because there is a plugin for integrating the Android SDK with it. The Ubuntu Eclipse package uses openJDK by default but it depends on the GNU Java compiler which, as I said, is not compatible with the Android SDK, so I don’t know if it is a good idea to install Eclipse from the Ubuntu repos. Just in case, I’ve downloaded Eclipse Classic (the version recommended by Android) from its website and installed it in the /opt/ folder. Installing Eclipse is trivial: just untar it and add the eclipse folder to your PATH.
Next I’ve installed the Android SDK Starter Package under /opt/. Again, the installation is trivial: just untar the package and add the android-sdk-yourplatform/tools and android-sdk-yourplatform/platform-tools folders to your PATH.
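For instance, assuming Eclipse and the SDK were untarred into /opt (the exact directory names will depend on your downloads), one could append something like this to ~/.bashrc:
export PATH=$PATH:/opt/eclipse:/opt/android-sdk-linux/tools:/opt/android-sdk-linux/platform-tools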
Once the Starter Package is installed one should execute the command
$ android &
which launches the Android SDK Manager, a tool included in the Starter Package. It is a graphical program, with a simple UI, that allows you to setup your SDK by downloading the essential packages for your development environment. In my case I’ve installed the following packages:
- SDK Tools (latest version is required)
- SDK Platform-tools (latest version is required)
- SDK Platform (latest one is recommended)
- Documentation for Android SDK
Additional packages that I’ve installed include the Google API, SDK Samples and the sources for Android SDK.
If you plan to publish your application, you will want to download additional SDK platforms corresponding to the Android platform versions on which you want the application to run.
Downloading those packages is sometimes a very slow process. If this problem hits you just cancel the installation and try again later (it is a simple workaround but it worked for me).
The last step is to install the Android Development Tool plugin for Eclipse. It must be done using the Update Manager feature of Eclipse as described here. The plugin configuration is very easy, just follow the wizard steps. At the end you will have an Android toolbar on your Eclipse main window. This new toolbar will contain buttons for launching the Android SDK Manager, managing Android Virtual Devices, etc.
Using this plugin is not mandatory but it seems to be highly recommended. If you don’t want to use it then you aren’t forced to use the Eclipse IDE.
That’s all. Now I have to see how it works and decide whether I like it or prefer to look for alternative environments. If you’re writing Android apps with a different development environment (for instance, not using Eclipse or not using an IDE at all) please, leave a comment.
Irssi and tmux (or screen)
In my last post I described a basic setup for bitlbee and irssi. Now I’ll describe my current irssi configuration. The setup described here has been tested on my Kubuntu Oneiric laptop.
Irssi is highly configurable via Perl scripts. You can write your own scripts, use those included in your GNU/Linux distro or download them from the scripts section of the irssi website or from other places. I’ve used the last two methods.
Currently I’m using three scripts: adv_windowlist.pl (downloaded from here), nicklist.pl and hilightwin.pl (both of them from the irssi-scripts Kubuntu package).
The advanced window list script allows you to customise the channels status bar and the active windows list.
The nicklist script places all the nicknames in a channel in a bar at the side of the window, like many other IRC clients do. You can use it in two modes: fifo or screen.
With the hilightwin.pl script, every time you get highlighted (someone types your nickname or any other highlighted word or sends you a private message) a copy of that message is sent to a separate window.
In order to run the scripts automatically at irssi startup I’ve created the recommended ~/.irssi/scripts/autorun subtree. I’ve put adv_windowlist.pl under the scripts directory and created symbolic links to all three scripts mentioned above under the autorun directory.
And now for the scripts configuration. Start irssi and from the status window run the following commands to set up the hilightwin.pl script so that it displays a window at the top of the terminal at all times. That way it will be difficult for you to miss important messages:
/run autorun/hilightwin.pl
/window new split
/window name hilight
/window size 6
/layout save
Next configure the adv_windowlist.pl script. The settings and explanations for them are at the top of the script, in the OPTIONS section. From the status window run the commands:
/set awl_display_key $Q%K|%n$H$C$S
/set awl_block -15
where $Q is the meta-keymap key (ALT on my system), %K changes the color of the pipe character, %n changes the color of the window name, $H means start highlighting, $S means stop highlighting and $C is the name of the window. The second line defines the width of the status bar region dedicated to every window. Unfortunately the hilight part fails for me (maybe I’m misunderstanding something, suggestions are welcome 🙂
Next configure the nicklist.pl script. Here I’ll assume you’re running irssi in a tmux session. From the status window execute (comments have been added for clarity; obviously you don’t have to type them):
# split the terminal window in two panes
CTRL+b %
# resize the right pane to its minimum width
CTRL+b right_arrow (repeat until the desired width is reached)
# back to the pane where irssi is running
CTRL+b o
# Configure the nicklist script in FIFO mode
/nicklist fifo
# back to the right pane and get its size (rows, columns)
$ stty size
42 20
$ cat ~/.irssi/nicklistfifo
# back to the pane where irssi is running
/set nicklist_height 42
/set nicklist_width 20
/nicklist fifo
If you use screen instead of tmux the setup is easier, but it doesn’t work as smoothly:
/nicklist screen
Eventually save your configuration:
/save
Here you can see the whole thing in action:
PS: As an added bonus of running irssi inside a tmux session, the nicklist script works even if you are not running an X session. It is fun to see it working in a Linux console 🙂
BitlBee and Irssi
If you are a real geek and want to do your chatting (IRC or IM) on text mode instead of using a GUI then this entry is for you.
After searching the Internet and trying some nice applications for chatting using the command line (e.g. CenterIM) my choice is the pair BitlBee/Irssi. The setup described in this entry has been tested on my Kubuntu Oneiric laptop.
BitlBee is an IRC gateway program for MSN, ICQ, AIM, Jabber and Google Talk. It behaves as an IRC server, creates an IRC channel with all your contacts and allows you to talk to them as if they were normal IRC users. So it must be combined with an IRC client such as Irssi (another cool combination you can try is with web-based IRC clients such as cgi-irc).
BitlBee can be used in two different ways: via its public servers or by installing it on your computer. Using a public server involves a security risk, whereas running your own BitlBee server does not. So my advice is to install your own server locally and run it via xinetd, as it seems the safer option.
I’ve installed the following packages:
- xinetd
- bitlbee
- irssi
- irssi-scripts
It is recommended to run bitlbee via xinetd but the bitlbee package doesn’t provide/create the right file under /etc/xinetd.d/ so we have to add it by hand. The filename is bitlbee and its contents are:
service ircd
{
socket_type = stream
protocol = tcp
wait = no
user = bitlbee
server = /usr/sbin/bitlbee
port = 6667
disable = no
bind = localhost # prevent non-local access by binding to the loopback device
}
At this point one should stop the bitlbee daemon if it is running and restart xinetd:
# /etc/init.d/bitlbee stop
# /etc/init.d/xinetd restart
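As a quick sanity check, you can verify that xinetd is now listening on the IRC port:
$ netstat -ltn | grep 6667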
The next step is to configure the BitlBee server. You have to launch irssi and connect it to the BitlBee server:
$ irssi
/connect localhost
This is done in the status window (whose prompt is [(status)]). If everything goes fine the control channel window (whose prompt is [&bitlbee]) will be created. Change to the control channel window with the command:
/window 2
or with ALT+2 (you can cycle between windows using ALT+left/right arrow). Then you register yourself in the server using a password:
register your_password
(note that this is not an IRC command but a BitlBee one so it is not prefixed with a slash). After registering all your IM settings (passwords, contacts, etc.) will be saved on the BitlBee server. Finally you add your IM accounts. For instance, if you want to add a Google Talk account enter the following command:
account add jabber example@gmail.com
Then you will be asked to use the /OPER command to enter the account password. Do it:
/OPER
At the prompt, enter your account password (it won’t be visible). That’s all, the account has been added.
Once you’ve created all your accounts you’ll need to activate them. The following command does it:
account on
Lastly, save the settings on your account and quit the program:
save
/quit
Now that the basic BitlBee setup is done let’s see how a typical session is run. On your favorite terminal launch the Irssi client:
$ irssi
A new Irssi session will be started and you will see the status window (with prompt [(status)]). Type the command:
/connect localhost
A message saying that the connection has been established should be displayed. Change to the bitlbee control channel pressing ALT+2 or typing the command:
/window 2
The prompt on this window is [&bitlbee]. Now identify yourself with the password used for registering in the BitlBee server:
identify your_password
Now you are recognized and logged on to all your IM accounts automatically. In the bitlbee control channel there are 2 users now, @your_nick and @root. You can see the list of IM accounts you’re connected to:
account list
Or you can see the list of all your contacts (buddies):
blist
You can chat with your buddies on the bitlbee control channel:
buddy_nick: Hi, how are you?
If you prefer to create a dedicated window for private chatting you can use the /msg or /query IRC commands:
/msg buddy_nick Hi, how are you?
Move to the just created window (which will have a [buddy_nick] prompt) and chat normally.
You can close this window with the /q command or with the /wc command (which is useful too for parting channels on disconnected networks).
Irssi can handle multiple IRC connections simultaneously, thus it is possible to be active in channels on different networks at the same time. So while connected to your Google Talk account you can move to the status window, connect to, let’s say, the Freenode network, switch to that network and join the #ubuntu channel:
/connect irc.freenode.net
CTRL+X
/join #ubuntu
You can use Ctrl-X to switch between network connections and see which network is active by looking at the status bar.
As expected, you can leave a given channel on a connected network using /part, disconnect from a network using /disconnect and quit your IRC session using /quit.
This has been just a brief introduction to BitlBee/Irssi but there are lots of things you can still do: customise your Irssi installation, enhance it via themes and scripts, run it in a screen session (this is really cool :-)…
Integrating keychain with KDE
In the last post I introduced keychain and compared it with pam_ssh. I described some nice features of keychain, in particular how it can use long-term running SSH/GPG agents. However, the setup explained there is not well integrated with KDE because of the environment problem described in that post. Although KMail can be configured to sign sent messages with GPG keys, this feature doesn’t work with keychain out of the box. Let’s suppose that we want to sign our messages with the key 5E653DA8 -without entering the passphrase- and we want to use keychain for managing our GPG keys. In order to achieve our goal we configure the gpg.conf file properly (ensuring that GPG will use the gpg-agent) and add to our .bashrc file a block like:
eval `keychain --nogui --noinherit --stop others id 5E653DA8`
if [ -f "${HOME}/.keychain/${HOSTNAME}-sh-gpg" ]; then
. "${HOME}/.keychain/${HOSTNAME}-sh-gpg"
fi
We restart the X session just to be sure that keychain will add the key to the gpg-agent. Now we start KMail from the KMenu and try to send a signed message using the GPG support. The result is a dialog displaying the message “Signing failed: Bad passphrase”. The reason, as you can guess, is that the environment variables that keychain uses to expose the GPG agent are not known by KMail (in fact they are not available to KDE). This can be fixed in several ways. We can launch KMail from the command line in a shell running keychain. But we prefer to launch it from the KMenu so we discard this workaround. Another possibility is to use the KMenuEdit tool and change the launch command for KMail to something like:
GPG_AGENT_INFO=/tmp/gpg-xJRtSl/S.gpg-agent:2249:1; kmail
(of course we get the GPG_AGENT_INFO value from ~/.keychain/${HOSTNAME}-sh-gpg). But this doesn’t work if we use KMail as a component of Kontact. We can try to do the KMenuEdit trick with Kontact but then KMail will show us the error message again if we try to sign a message (it seems that Kontact doesn’t pass the environment to the KMail plugin).
The proper way to deal with this problem is to use the ~/.kde/env folder, of course. After all, it is an environment problem. So we put the following script in this folder:
#!/bin/sh
eval `keychain --nogui --noinherit --stop others`
if [ -f "${HOME}/.keychain/${HOSTNAME}-sh-gpg" ]; then
. "${HOME}/.keychain/${HOSTNAME}-sh-gpg"
fi
This way the environment variables set up by keychain will be available to KDE. Now we can start KMail in every possible way and we will be able to send signed messages without entering the passphrase (we may need to adjust the TTL of the passphrase cache to a value suitable for our needs; this change can be done in the gpg-agent.conf file).
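For instance, a gpg-agent.conf like the following caches the passphrase for two hours (the values, in seconds, are just an example):
default-cache-ttl 7200
max-cache-ttl 86400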
Single sign-on: keychain vs pam_ssh
As an unexpected consequence of the previous post about single sign-on in kdm via pam_ssh, I met keychain. It is a nice tool for dealing with both SSH keys and GPG keys. Its main goal is to share a unique ssh-agent between logins. In this post I’ll describe briefly some nice features of keychain and will explain how it can be used for getting single sign-on. As usual, everything shown here has been done on a (testing) Debian box with KDE SC4.
Before starting I assume that your /etc/pam.d/kdm is not using pam_ssh, that OpenSSH is properly installed on your system and that you have created an RSA key. In your ~/.bashrc file you have added the line:
eval `keychain --nogui id_rsa`
If you restart your X session -so that the current ssh-agent and gpg-agent are killed and new agents are created during the X session startup sequence- and open a konsole, you’ll see something like this:
The already running agents are, by default, inherited by keychain. Then it uses ssh-add to add the SSH keys specified on the command line to the ssh-agent, and sets up the shell environment so that ssh can find the running agent. Because this is the first time we log in on this system, the ssh-agent doesn’t know the required SSH keys and we’ll be prompted for a passphrase. If the supplied passphrase is correct then the SSH key will be added to the ssh-agent. If we want to add more identities we can do it via the ssh-add command.
Your ~/.keychain directory is now populated with the files initialised during the keychain startup (see the above screenshot).
Let’s suppose you start a new konsole (or whatever terminal emulator you like). It doesn’t matter if it is a subshell of the current konsole or not. The .bashrc file will be sourced and keychain executed, allowing you to reuse the running ssh-agent, so the SSH key added in the first opened shell is available to this new shell too:
Things get interesting when you want to use ssh in situations in which the environment needed by the ssh-agent (the SSH_AUTH_SOCK and SSH_AGENT_PID variables) is not known by the shell. Normally you would need to start new agents. But keychain solves this problem in a clever way: the required environment is described in the files under .keychain and those files can be sourced, exposing the environment to the shell. Let’s see some examples.
You will face the environment problem if you want to run ssh commands in a non-interactive shell, for instance in cron jobs. A simple working example of a cron job (assuming that your job is a bash-like script and that the job will be run by the user running the agent) follows:
#!/bin/sh
source /home/vmas/.keychain/rachael-sh
ssh vmas@a_remote_server "ls -l" >> ~/output.txt
As an alternative you can do:
#!/bin/sh
eval `keychain --noask --eval id_rsa` || exit 1
ssh vmas@a_remote_server "ls -l" >> ~/output.txt
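Either way, the script can then be scheduled with a crontab entry; for example, this hypothetical line would run it hourly:
0 * * * * /home/vmas/bin/remote_ls.sh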
Another example: if you connect remotely (for instance via ssh) to your X session you will see something like this:
As you can see, the problem is fixed by sourcing the appropriate file.
As a last example, you can log in on a virtual console (for instance tty3 via Alt+Ctrl+F3). You will be presented with the usual keychain stuff. However, no identities will be added to the ssh-agent due, one more time, to the environment problem. So the ssh-add -l command will display the message:
Could not open a connection to your authentication agent.
This problem is fixed again by sourcing the .keychain/${HOSTNAME}-sh file.
You can make things easier by adding the next lines to your .bashrc, after the line calling keychain. They remove the need to explicitly source files in interactive sessions:
if [ -f "${HOME}/.keychain/${HOSTNAME}-sh" ]; then
. "${HOME}/.keychain/${HOSTNAME}-sh"
fi
Last but not least, keychain can provide you with long-term running agents (one of my favorite features). Until now we’ve launched keychain in a way that inherits the agents provided by the X session. It means that if we restart that session the agents will be killed and created again, so keychain will use a new pair of agents every time an X session starts. We can force keychain to keep running the agents used the first time it was invoked. In order to do that we change our .bashrc, replacing the old keychain invocation with this one:
eval `keychain --nogui --noinherit --stop others id_rsa`
meaning that keychain will not inherit the agents started by the X session (in fact they will be killed). Instead keychain will use its own agents.
In summary, we can say that using keychain we’ll have a unique, long-term running ssh-agent shared between user logins instead of an ssh-agent per login, and we’ll be able to use SSH keys in non-interactive sessions too. All the examples above use SSH keys but keychain also supports GPG keys.
Even more, we can use keychain to get single sign-on and a unique ssh-agent shared between logins all at once. Simply add the following lines to your .bashrc:
eval `keychain --nogui --noinherit --stop others id_rsa`
if [ -f "${HOME}/.keychain/${HOSTNAME}-sh" ]; then
. "${HOME}/.keychain/${HOSTNAME}-sh"
fi
This is an interesting combination. Now the very first time that you log in to an X session you will have to authenticate twice: first with your regular password in order to start the session, and then with your passphrase (required by keychain). But from now on, every time you restart your X session you will enjoy nice single sign-on using just your regular password (something I’ve not been able to do with pam_ssh) plus the flexible management of SSH/GPG keys provided by keychain.
Single sign-on with kdm for Debian via pam_ssh (III)
In my previous post I thought I had got pam_ssh and gpg-agent working together in a seamless way. As Sheldon Cooper would say, “In the world of emoticons, I was colon capital D”. But although all the configurations included there worked for me like a charm, they didn’t work for Ivan. Indeed, things worked for me better than I expected: when looking for the reasons for Ivan’s problems I realized that I could remove any reference to pam_ssh from my /etc/pam.d/kdm file, start a new X session using my regular password for login and still have my SSH keys added to the agent! Obviously it was all a mirage. I wasn’t aware that a ~/.gnupg/sshcontrol file containing references to all my SSH keys was living in my system. It seems that due to this file the SSH keys were automatically added to the agent every time I started an X session, because ssh-add -L always returned a list of keys, even when I removed every pam_ssh reference from /etc/pam.d/kdm.
When I removed the sshcontrol file the unconditional addition of SSH keys went away. So let me start again from the beginning. For the moment we forget about the gpg-agent. The following configurations work for me using the ssh-agent:
auth required pam_ssh.so
#@include common-auth
…
@include common-session
session optional pam_ssh.so
This config forces me to authenticate with my passphrase. The SSH keys are then added to the ssh-agent and I can use them during my X session without entering the passphrase. So far so good.
@include common-auth
auth optional pam_ssh.so use_first_pass
…
@include common-session
session optional pam_ssh.so
With this config I can log in using my password. But my SSH keys are not added to the ssh-agent. However the README.Debian, in the paragraph talking about this configuration, says:
“By thus adding ssh-auth after common-auth, ssh-auth can use the user’s
password to decrypt the user’s traditional SSH keys (identity, id_rsa,
or id_dsa)…”
So, if I understand it properly, pam_ssh should be able to add my SSH keys to the agent when I authenticate using my password. But it doesn’t (unless the password equals the passphrase, which doesn’t make sense to me) and I feel a little bit disappointed. The same happens with the next configuration:
auth sufficient pam_ssh.so try_first_pass
@include common-auth
…
@include common-session
session optional pam_ssh.so
With this config I can log in using my passphrase or my password. If I use my passphrase then my SSH keys are added to the ssh-agent, but again, if I use my password they are not.
And now let’s consider the replacement of ssh-agent with gpg-agent. I’ve set up my system as follows in order to use only the gpg-agent (detailed information can be found, for instance, here):
– in /etc/X11/Xsession.options I’ve commented out the line use-ssh-agent
– in /etc/X11/Xsession.d/90gpg-agent I’ve added the --enable-ssh-support option to the STARTUP line
– I’ve disabled ssh at the gnome-keyring:
$ gconftool-2 --set -t bool /apps/gnome-keyring/daemon-components/ssh false
However, all the above /etc/pam.d/kdm configurations fail with this setup. The “session optional pam_ssh.so” line always starts an ssh-agent and SSH keys are never added to it. If I remove that line the ssh-agent is not run, but SSH keys are not loaded into the gpg-agent either, so pam_ssh doesn’t appear to be compatible with gpg-agent. If any of you knows how to make them work together, please let me know. In the meantime I’ll have a look at alternatives to pam_ssh: keychain, libpam-gnome-keyring… As usual, suggestions are welcome 🙂