Coding every day

For the last two months I have been coding every single day, or at least I have tried. By coding every day I mean pushing new code; that is my goal. Let me tell you why I did this.

Bad coding skills

Coding has always been a kind of love/hate relationship for me. I like coding in the sense that sometimes I'm capable of building something I made up from scratch.

Although most of what I do are toy projects, the feeling is really nice. A long time ago I struggled trying to code the simplest things - and sometimes I still do - however I realized that coding is no different from any other activity you do.

If you like doing sports and you want to run faster, there is only one way to reach that goal: training.

Practice helps you to reinforce the information you learned days before. Suddenly some technique becomes natural to you and you no longer have to think about it.

A routine

One of the problems I have/had with coding was getting started and persevering. For instance, I love doing sports and I practice CrossFit; normally I work out 4-5 days per week and I take it seriously.

Let's say that one week I work out two days, the next week one day, and then I work out five days in a row. You are always getting started because you do not have a routine, you spend too much time relearning the basic moves, and you also get discouraged because you do not see any improvement.

I have/had exactly this issue with coding. As you may have realized, if you do not stick to a routine, you are most probably going to fail. No matter the language, I had to review its basics over and over, and I didn't get the feeling I was improving.

The problem is always the same: we are trying to get too many things done, and we are always complaining about time.

Daily coding

Right now I cannot remember where, but I read a write-up about somebody who was coding every single day, and I thought I could give it a try. I had never tried such a thing, but I was willing to.

Ideally I like to have some pet projects where I can hack around for a week or even more. To me, the biggest issue is creativity; I need to think and read a lot to get inspired. Sometimes you are not in the mood, or you are on vacation, or you do not have your equipment with you. Even simpler than that, you just do not feel like coding.

However, I found the perfect way to code almost every day. I say almost because, for sure, I will skip some days. I guess the point is, it is ok to code every day as long as it does not turn into a duty.


The best way to code something like 20-30 minutes per day was with snippets. A snippet is just a small chunk of code that shows a functionality. Let's say you want to parse XML information using Python; that would be the task to complete in that time. Sometimes I even spend more time, but that is not the point. The key is to learn something new and write some new code.
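Just as an illustration of the size I mean, a snippet-sized task could fit in a couple of lines; here I use xmllint for the XML example instead of Python, to stay in shell (feed.xml is a made-up input file):

#!/bin/bash
# Snippet: print the text of every <title> inside <item> elements of an RSS feed.
# xmllint ships with libxml2.
xmllint --xpath '//item/title/text()' feed.xml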

This has worked really well for me. Think of it like reading: in order to build a reading habit, some people read either for a certain time or a number of pages per day. Well, this is what I wanted: coding being something I like to do and a habit I keep, although you do not always have the time to code even ten minutes.

My job

I work as a Service Engineer and I have to travel all around the world doing my thing; I should explain that too, but probably in another post.

The last five weeks I have been working from Mexico. I'm used to spending a lot of time abroad, but this time was different: I had to work at night and I was travelling to different states during the day. I barely had time and I was really tired because my sleep was not very good at all.

In the end I have been able to code every day, but it was really hard, and just in the first week I came up with something that I kept hacking on for about a week. I really like those small projects, because you only need to define a new feature and every day you can do a small part of it.

I will go on with this experiment; who knows if I will quit in two days or two months. The point is that there are things beyond our willingness that can delay our personal projects, and if it happens every now and then, it's ok.

Enabling SSL

Nowadays getting an SSL certificate is pretty affordable and I thought it was time to get one. In this post, I will take you through the procedure of getting and installing one. Let's keep it interesting and imagine we want to do the same for our new awesome domain.

Create a certificate request

In order to get a certificate we need to create a certificate request (CSR). Basically this request contains all the information regarding the domain we want the certificate for. Some important things here:

  • FQDN: This must match your domain, otherwise the certificate will not validate.

Also you should know that the www subdomain and the bare domain are two different things.

Let's get started and generate a CSR for our new domain.

$ openssl req -newkey rsa:2048 -nodes -keyout lovingsystemd.key -out lovingsystemd.csr
Generating a 2048 bit RSA private key
writing new private key to 'lovingsystemd.key'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:ES
State or Province Name (full name) [Some-State]:Madrid
Locality Name (eg, city) []:Madrid
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Geek on the road
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: xxxxxx
An optional company name []:

You may notice the challenge password field. This is different from the password you would use for your private key. In this case we don't use a password for the key, because otherwise it would be prompted for every time the service is restarted.

Get a SSL certificate

There are a bunch of options, but finally I got mine from Namecheap. If you have any trouble, take a look at the excellent resource from DigitalOcean [1] on how to proceed and get yours.

Verify your certificates

At this point you should have a mail with two certificates embedded as plain text: your own and the intermediate one. Now you need to copy them into two different files. If you want to save some time, check that neither certificate prompts any error:

$ openssl x509 -in lovingsystemd.crt -text -noout 
unable to load certificate
1103315939110079:error:0906D064:PEM routines:PEM_read_bio:bad base64 decode:pem_lib.c:818:

This could happen if you copied some strange character into the file. If either of them prompts an error, the webserver will not start.

$ openssl x509 -in intermediate.crt -text -noout 

At this point you have to chain both certificates. I recommend you follow exactly the instructions from your provider and nothing else.

$ cat intermediate.crt >>  lovingsystemd.crt
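If you kept a copy of the plain certificate, you can also confirm that it really validates against the intermediate (this assumes the CA root is already in your system trust store):

$ openssl verify -untrusted intermediate.crt lovingsystemd.crt
lovingsystemd.crt: OK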

Install certificate

I run nginx, so here is the configuration for the website:

server {
    listen 80;
    # Redirect all plain HTTP traffic to HTTPS
    rewrite ^/(.*)$ https://$host/$1 permanent;
}

server {
    listen 443 ssl;
    # Chained certificate
    ssl_certificate /opt/nginx/ssl/lovingsystemd.crt;
    ssl_certificate_key /opt/nginx/ssl/lovingsystemd.key;

    # Options for better grading on SSL tests
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers AES256+EECDH:AES256+EDH:!aNULL;
    ssl_dhparam /opt/nginx/ssl/dhparam.pem;
}
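The dhparam.pem file referenced above is not there by default; you can generate it with openssl (2048 bits is a sane minimum):

$ openssl dhparam -out /opt/nginx/ssl/dhparam.pem 2048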

I benchmarked the SSL score from SSL Labs after each small modification. In this way I could see how the grade was changing and what the problem was about. I think this was a good approach. Check your grade at SSL Labs [2][3].


[1] Installing SSL certificates - DigitalOcean

[2] Strong SSL security

[3] Setting HSTS in Nginx - Scott Helme

Git best practices @Atlassian

This morning was the first day at Silicon Valley Code Camp, and I have to say I did not expect such a huge effort and such a well organized event.

Bear in mind this event was based on donations; despite this, they provided coffee, lunch and very nice stands from different companies such as JetBrains, IBM or Pivotal.

I thought that SaaS Workflows & Git Best Practices by Tim Pettersen and Erik van Zijst could be a good talk; although I'm not a software engineer, I do keep my personal projects under Git, and I wanted to learn more about it.

The speakers made the talk very easy to follow; here are some of the ideas I found quite useful.


In Git, branching is very cheap, and the message was clear:

Branch all the things!!

People not familiar with Git use a more traditional, linear workflow.

If somebody pushes something that breaks, the whole team is affected. On the other hand, you may use the merge workflow.

The idea is pretty simple, and very useful if you have a CI infrastructure. Below are the main branches in the repository.

Master Branch

All the code that is already in production.

Development Branch

All the code that is in staging. It branches off from Master, and feature branches merge back into it.

Feature Branch

It branches off from Development.

Branch names follow a convention; judging by the hotfix naming below, a feature branch would be called something like feature/JIRA-123, after the ticket it belongs to.

How hotfixes work is a bit different:

  • Branch off from Master to a new branch with the naming hotfix/JIRA-.
  • Merge the hotfix into Development.
  • If that worked, merge the hotfix into Master, as shown below.
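In plain git commands, the whole hotfix flow might look like this (JIRA-123 is a made-up ticket id):

$ git checkout -b hotfix/JIRA-123 master
# ... commit the fix ...
$ git checkout development
$ git merge hotfix/JIRA-123     # verify it in staging first
$ git checkout master
$ git merge hotfix/JIRA-123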

Although rebase enables a very neat workflow, it is also dangerous and pretty easy to mess your repository up with. The most important thing to remember: do not rebase public branches, because rebase rewrites your history, which means it changes the SHA-1 of your commits.

Merge commit

  • ugly
  • full traceability
  • hard to screw up

Rebase (fast forward)

  • no merge commits

Rebase (squash)

  • easy to read
  • more difficult to trace changes

And this is pretty much it; at the end of the talk there was a very funny Git quiz. Thanks to the organizers for this event.

Docker Palo Alto Meetup

Today I attended a new Docker Palo Alto Meetup; although it was crowded, I got a good spot and I learnt a few more things about Docker and CoreOS. There were two talks, and although the second covered quite an interesting topic, Mesos, a cluster resource manager, I did not take too many notes on it.

In the talk Building infrastructure based on CoreOS and Docker, Damien Metzler from Nuxeo explained how they use CoreOS and Docker, together with some custom tools they developed. The Nuxeo platform provides content management in an easy and fast way; below are the main points:

  • Fully open source
  • Testing has to be easy and fast
  • Provide quick trial
  • Provide a software factory to the customer
  • Choose models
  • Run the app

Basically, they run in a container every application a customer chooses from the main dashboard. An interesting concept:

Design your cluster for failure

It turns out they heavily use Java processes that can eat up to 1 GB of memory each, and eventually you reach the max capacity of your bare metal.


CoreOS

CoreOS comes as a minimal Linux distribution with:

  • Docker
  • etcd (key/value distribution)
  • fleet (launch jobs on your cluster)
  • systemd (replaces init)
  • Active/Passive root partition

I really like the active/passive root partition. You have two partitions for the OS; if a patch is required, you stay on your current partition while the secondary one is upgraded, then the machine reboots and runs the new one.


fleet

In order to run a job in the cluster, you write a systemd-like unit definition that fleet processes; see the sketch below.
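A minimal unit might look like this (the service name and image are made up, not from the talk):

[Unit]
Description=Demo web container

[Service]
ExecStart=/usr/bin/docker run --rm --name demo nginx
ExecStop=/usr/bin/docker stop demo

[X-Fleet]
Conflicts=demo@*.service

You would then launch it on the cluster with fleetctl start demo.service.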


etcd

etcd helps to distribute information to the members of the cluster. With it you can:

  • register a service on etcd
  • location
  • status
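On the client side, registering a service comes down to writing a few keys (the names and values here are just illustrative):

$ etcdctl set /services/demo/location
$ etcdctl set /services/demo/status running
$ etcdctl get /services/demo/status
running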


Gogeta

Gogeta is a reverse proxy written in Go. Some ideas:

  • maintains the last access time as a key in etcd
  • restarts the service if required
  • kills the container after inactivity

Why would you use Go?

  • easy to use
  • etcd channels
  • static status pages (error, wait)
  • first working prototype in one week (while learning Go)
  • build a static executable (good for containers)

A very useful tip Damien gave: don't ever put data on the container; if the container goes down, you lose the data. You can use Elasticsearch and spread your data across the cluster.
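In Docker terms that means mounting the state from the host instead of writing it inside the container's filesystem; something like this (the host path and image are illustrative):

$ docker run -d -v /srv/es-data:/usr/share/elasticsearch/data elasticsearch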

In summary, Docker and CoreOS are hot topics, together with the set of tools that provide a different way to manage your clusters. Increasingly I see how most of these tools are written in Go, and how its community grows.

IPv4 subnetting

During the preparation for the CCNA I had to brush up on subnetting. I finally found a pretty straightforward way to calculate subnets. Here is some advice.

1. Finding subnet network, broadcast and last host

The simplest thing to do is to work with groups.

Prefix  /+1 /+2 /+3 /+4 /+5 /+6 /+7 /+8
Netmask 128 192 224 240 248 252 254 255
Group   128 64  32  16  8   4   2   1

Say I get the IP address with netmask, which means I need to look for groups of 16 according to the table. Now you need to find the closest group to that IP address.

Here are the groups of networks based on that group size:,,, and so on, just moving the third octet in groups of 16. In this case is the closest one to the previous IP address.

In order to find out the broadcast address and the last host available, you just need to add the group number (16) to get the next subnet available, which in this case would be

Broadcast address = Next Subnet - 1
Last host = Next Subnet - 2

From the previous example:


Last Host:


As you can see, this is a very simple way to find out what subnet an IP address belongs to.

2. Start splitting from higher prefix to lower

First and foremost, you need to know that subnetting is always done from the biggest subnets to the smallest ones in order to avoid overlapping.

Assuming I want to split the network into several subnets, I should start with the biggest block I can use, a /25, and then take one of the resulting halves and split it into smaller subnets.

The problem appears when you decide to carve out a smaller subnet first and a bigger one later. For example, you take first. Afterwards you start subnetting on what you think is the next subnet available, which according to the previous subnet you might think is

As you may already notice, this is wrong: it would cover the range of IP addresses -, which is not aligned to a /25 boundary and actually includes addresses inside

Nowadays I normally use an IP calculator; however, I find this an excellent workout for mental calculations.
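If you have the ipcalc utility installed, it will do the same exercise for you, which is handy to double-check the mental math:

$ ipcalc
# prints the network, netmask, broadcast and host range for that /20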

Troubleshooting EIGRP

Preparing for the CCNA is being challenging; troubleshooting just by guessing is time consuming, and you get quicker results if you define some steps for how to proceed. This is the procedure I will normally follow if something is not working properly.

1. Checking the interfaces are in UP/UP state

Before starting with routing protocols, verify the interfaces are working properly:

R1> show ip int brief

2. Checking L2

Review whether there is any problem on the serial link:

  • Keepalive removed from one router: that interface will appear up/up, while the other end of the link shows up/down.
  • Authentication with either a wrong username or a wrong password will show down/down on both ends of the link.
  • Mismatched encapsulation will show down/down on both ends of the link.

3. EIGRP neighbors and AS

Confirm that you have the expected neighbors; besides, verify the AS number is the same on all the routers.

R1> show ip eigrp neighbor

4. EIGRP interfaces

It may occur that either some interfaces are not enabled, or some interfaces are enabled with a wrong network command.

R1> show ip eigrp interface

If some interface is enabled (from the previous step) but neighbor routers do not see that network, review the configuration. Regarding a network command, review the command itself or the wildcard.

i.e: network (but the interface actually sits in
i.e: network (for a /30 you actually want the wildcard)

R1> show ip protocols 

The above command will show if there is any error with the network definition. If the network command for that interface was not added, the interface will not appear.

5. K-values and passive interfaces

Actually, we can get the next information from the previous check; however, I think it is much clearer to review it separately.

R1> show ip protocols 

This command shows you any passive-interface, which would prevent neighbor relationships from being established through it. Remember that K-values must match on both routers; you can check the values using the previous command as well.

R1(config-router)# no passive-interface s0/0/0

6. EIGRP Authentication

I'm not sure if this topic is covered in the CCNA, however it is for sure one of the issues you may find. In this case, you configure:

  • Key chain
  • Key ID
  • Key String

The previous parameters must agree on both routers when setting up EIGRP authentication.
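A minimal configuration sketch, assuming AS 10 and made-up key names, would be:

R1(config)# key chain EIGRP-KEYS
R1(config-keychain)# key 1
R1(config-keychain-key)# key-string secret123
R1(config)# interface s0/0/0
R1(config-if)# ip authentication mode eigrp 10 md5
R1(config-if)# ip authentication key-chain eigrp 10 EIGRP-KEYS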

7. Multicast in serial links

If Frame Relay is configured on a physical interface, broadcast will not be supported; the same occurs for point-to-multipoint links. Define subinterfaces or modify the network type.

In multipoint networks, add the broadcast keyword at the end. The show frame-relay map output should show broadcast, otherwise it will not work.

R1(config-if)# frame-relay map ip <IPADDRESS> <DLCI> broadcast

8. Access lists filtering

At this point everything has been configured properly, yet you can see one router showing the adjacency going up and down. Take a look at whether there is any access list blocking the IP traffic.

R1# show access-list

Definitely, guessing is not an option when you are trying to narrow down some issues and the clock is not on your side.

In my opinion, an analysis through the layers L1, L2, L3 and then the EIGRP details should point you to the error.

Debugging NAT Overload

It turns out I was asked to get the CCNA certification at work. It's being quite difficult to find time to prepare for it; besides, I'm traveling quite often, which makes it even more complicated. I was tinkering with Packet Tracer, reviewing some concepts about NAT, and I wound up with an interesting case I did not expect, basically because I did not understand NAT at all; now I'm starting to.

A router implementing NAT overload keeps a table mapping a private IP address (RFC 1918) and source port to one external routable IP address and port. The lab I prepared today consists of two routers, R1 and R2, connected through their outside interfaces, each with a host on its inside network.

Here is the configuration for R1, with placeholders where the lab addresses go:

interface GigabitEthernet0/0
 ip address <outside-ip> <mask>
 ip nat outside
interface GigabitEthernet0/1
 ip address <inside-ip> <mask>
 ip nat inside
ip nat inside source list 1 interface GigabitEthernet0/0 overload
ip route GigabitEthernet0/0
access-list 1 permit host <inside-host-ip>

The configuration for R2 was exactly the same, but using a different inside network:

interface GigabitEthernet0/0
 ip address <outside-ip> <mask>
 ip nat outside
interface GigabitEthernet0/1
 ip address <inside-ip> <mask>
 ip nat inside
ip nat inside source list 1 interface GigabitEthernet0/0 overload
ip route GigabitEthernet0/0
access-list 1 permit host <inside-host-ip>

The first test was a simple ping from R2 towards R1; it did not work, the requests timed out. Taking a look at the translation table on both routers showed me the error.

R2# show ip nat translations
Pro  Inside global     Inside local       Outside local      Outside global

On R1 we found:

R1# show ip nat translations
Pro  Inside global     Inside local       Outside local      Outside global

The problem: R1 was performing NAT and modifying the source IP address of the packet to its outside address. When the packet arrived at R2, R2 looked it up as an outside global address, trying to find the inside local it should forward the packet to; however, there was no translation for this packet. Verifying the statistics, I saw the misses increasing during the ping:

R2# show ip nat statistics 
Total translations: 4 (0 static, 4 dynamic, 3 extended)
Outside Interfaces: GigabitEthernet0/0
Inside Interfaces: GigabitEthernet0/1
Hits: 1  Misses: 3
Expired translations: 0
Dynamic mappings:

R2# show ip nat statistics 
Total translations: 5 (0 static, 5 dynamic, 4 extended)
Outside Interfaces: GigabitEthernet0/0
Inside Interfaces: GigabitEthernet0/1
Hits: 1  Misses: 5
Expired translations: 0

One of the solutions was to disable NAT on either R1 or R2. However, defining a static mapping for that miss would also make the ping work. The only drawback: the mapping only works for that one IP address; any other host would fail to answer.

R2(config)# ip nat outside source static <outside-global-ip> <outside-local-ip>

The conclusion is that the best approach, before debugging, is to understand how things work.

Recovering a VDI disk

Recently I ran into an issue regarding a VM's storage. It turns out one of the VDIs on my virtual machine was faulty. I had some data inside and I didn't want to lose it.

First of all, we can convert from VDI to raw. I did the conversion from Windows, but I guess it should be the same from Linux.

C:\VDIs> VBoxManage internalcommands converttoraw disk-vm-testing.vdi vdisk.raw

Now let’s see what is inside the raw disk:

dm@testing:#{~} fdisk  -l vdisk.raw
You must set cylinders.
You can do this from the extra functions menu.

Disk vdisk.raw: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d598a

    Device Boot      Start         End      Blocks   Id  System
vdisk.raw1   *           1         996     7993344   83  Linux
Partition 1 does not end on cylinder boundary.
vdisk.raw2             996        1045      392193    5  Extended
Partition 2 has different physical/logical endings:
     phys=(1023, 254, 63) logical=(1044, 52, 32)
vdisk.raw5             996        1045      392192   82  Linux swap / Solaris

In this case I am only interested in the first partition, because the second one was used as a swap device. I have to find out the offset where the data starts.

dm@testing:#{~}  parted vdisk.raw
GNU Parted 2.3
Using /media/sf_Downloads/vdisk.raw
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit
Unit?  [compact]? B
(parted) p
Model:  (file)
Disk /media/sf_Downloads/vdisk.raw: 8589934592B
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start        End          Size         Type      File system     Flags
 1      1048576B     8186232831B  8185184256B  primary   ext4            boot
 2      8187280384B  8588886015B  401605632B   extended
 5      8187281408B  8588886015B  401604608B   logical   linux-swap(v1)


The 'Start' column shows me the offset for the partition I'm interested in. The next step is to map this offset to a loopback device and mount it.

dm@testing:#{~} losetup -o 1048576  /dev/loop0  vdisk.raw
dm@testing:#{~} mount /dev/loop0 /mnt/vdi
dm@testing:#{~} mount | grep loop
/dev/loop0 on /mnt/vdi type ext4 (rw)
dm@testing:#{~} ls /mnt/vdi
bin  boot  dev  etc  home  initrd.img  lib  lost+found  media  mnt  opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var  vmlinuz

At this point you should be able to access your data, or at least some part of it.
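When you are done copying your data out, you can undo the mapping:

dm@testing:#{~} umount /mnt/vdi
dm@testing:#{~} losetup -d /dev/loop0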

Strace to the rescue

Today on IRC somebody asked what would be the best way to know whether a process already exists on the system. The choice was between using test -d /proc/PID or kill -0 PID.

Both of them do the job; the question is which one is best. Suddenly I remembered an option that comes with strace that lets you count the syscalls of a given trace; besides, we can sort by the number of syscalls.

adm@testing:${~} strace -c -S calls kill -0 1234
kill: No such process
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  -nan    0.000000           0        12           mmap2
  -nan    0.000000           0         6           close
  -nan    0.000000           0         5           open
  -nan    0.000000           0         5           fstat64
  -nan    0.000000           0         4           read
  -nan    0.000000           0         4         4 access
  -nan    0.000000           0         3           brk
  -nan    0.000000           0         3           munmap
  -nan    0.000000           0         2           mprotect
  -nan    0.000000           0         1           write
  -nan    0.000000           0         1           execve
  -nan    0.000000           0         1           getpid
  -nan    0.000000           0         1         1 kill
  -nan    0.000000           0         1           dup
  -nan    0.000000           0         1         1 _llseek
  -nan    0.000000           0         1           fcntl64
  -nan    0.000000           0         1           set_thread_area
------ ----------- ----------- --------- --------- ----------------
100.00    0.000000                        52         6 total

On the other hand, here is the second command's output:

adm@testing:${~} strace -c -S calls test -d /proc/1234
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  -nan    0.000000           0         7           mmap2
  -nan    0.000000           0         5           close
  -nan    0.000000           0         3           open
  -nan    0.000000           0         3         3 access
  -nan    0.000000           0         3           brk
  -nan    0.000000           0         3           fstat64
  -nan    0.000000           0         2           mprotect
  -nan    0.000000           0         1           read
  -nan    0.000000           0         1           execve
  -nan    0.000000           0         1           munmap
  -nan    0.000000           0         1         1 stat64
  -nan    0.000000           0         1           set_thread_area
------ ----------- ----------- --------- --------- ----------------
100.00    0.000000                        31         4 total

The calls column gives the number of syscalls performed. test -d /proc/PID performs better due to the smaller number of syscalls (31 against 52).
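Wrapped up as a tiny bash helper (Linux-only, since it relies on /proc), it might look like this:

pid_exists() {
    # Returns 0 when a process with that PID exists
    test -d "/proc/$1"
}

pid_exists 1234 && echo "process 1234 is running"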

I really like strace; it is a tool you had better know. Here I got syscall statistics, but you can trace specific syscalls or a bunch of them, follow forked processes and much more; this is only a simple example. I hope it helps.

Interesting shell

Finally, here is my last compilation of shell environment variables.


CDPATH

If this is set, it keeps a list of directories separated by ':', and every time you type cd DIR bash searches for the directory in that list, even if your current directory is not the right one. Let's see an example: I add the directory holding all my git repositories, so if I type cd blog from anywhere on the system, even if I am not in the right place, I will get there:

export CDPATH=~/mygits
devadm@testing$(~) ls mygits
blog scripts org-mode cv
devadm@testing$(/usr/share/doc) cd blog
In my opinion it is useful to keep just one directory here, or maybe two; more than that and it could get messy.


FIGNORE

If you want to narrow down the output when performing filename completion, this is without a doubt your shell variable. When set, bash ignores the listed suffixes while performing filename completion.

devadm@testing$(~/application) export FIGNORE="#:.o:~"
devadm@testing$(~/application) ls file*
file1.o file2.o file1 file2 file~
devadm@testing$(~/application) ls [TAB] [TAB]
file1 file2

I mention this variable because it was curious to me. I guess I have never found a case where I had to use such a thing, but that does not mean it would not be useful to somebody else.


HISTCONTROL

It can be set to ignorespace, which does not record command lines starting with a space. On the other hand, I do not like having duplicated lines; ignoredups avoids repeated entries in the history. Finally, I like to combine both options, so I use ignoreboth instead.

export HISTCONTROL=ignoreboth


HOSTFILE

It holds the path to a file containing a list of hosts in the same format as /etc/hosts; if it is set, bash tries to complete hostnames with the entries of that file. Otherwise it will look at /etc/hosts.

It could be pretty useful if you want to keep a personal file with your hosts.

export HOSTFILE="~/.myhosts"

The only problem I see here is when you also want to reach hosts defined in /etc/hosts. You could make a script that checks whether there are new entries in /etc/hosts and appends them to your personal list. Once again, this is only an idea I came up with; I did not actually follow through.


TMPDIR

If set, bash uses its value as the name of the directory in which it creates temporary files. With this option I can set my personal temp directory.

export TMPDIR="~/.tmpbash"


TMOUT

Sets a timeout that affects the read and select builtin commands when no input is given. An interesting case: if it is set in your interactive shell, after N seconds without any input bash will kill your shell, so keep this in mind.


TMOUT=3   # times out the read below after 3 seconds

printf 'Could you please give me an absolute path? '
read -r absolute_path

[ -n "$absolute_path" ] && printf 'Your path is: %s\n' "$absolute_path"

Well, the last posts were focused on the bash shell environment because I was digging into man bash and found quite a few interesting things I did not know; I hope some of them were useful for you as well.

Shell environment variables

In the previous post I wrote about different topics such as BASH_ENV, subshells and expressions. Today I'm going to talk about shell environment variables.


1. BASHPID

This is the PID of the current bash process. Its behaviour differs from $$ in cases such as a subshell, where $BASHPID holds the PID of the subshell, whereas $$ shows the PID of the bash holding the subshell.

$ echo $$ $BASHPID                            # 23353 23353
$ echo 'Subshell' $(echo $$; echo $BASHPID)   # Subshell 23353 21060


2. BASH_LINENO

Number of lines in the current script, from the beginning to the line the function was called from.

function show_env_vars() {
  echo "$FUNCNAME"    # show_env_vars
  echo "$BASH_LINENO" # Line where show_env_vars was called from (11)
  echo "$LINENO"      # Current line (7)
}


3. DIRSTACK

An array with the directories you are moving between, using the popd and pushd builtins to add/remove directories.

# After doing some pushd 
$ for i in  ${DIRSTACK[@]}; do printf  'DIR: %s\n' $i; done                                                                                                                                                               
DIR: /tmp
DIR: /var/tmp/testing
DIR: /home/cartoon
DIR: /usr/share/doc
DIR: /var/tmp
DIR: /var/www


4. EUID and GROUPS

EUID holds the effective user ID of the current user, initialized at shell startup; this variable is read-only. On the other hand, GROUPS is an array with the groups the current user is a member of.

$ for ((i=0; i < ${#GROUPS[@]}; i++)); do printf 'gid: %d\n' "${GROUPS[$i]}"; done
gid: 1002
gid: 1001

5. Gathering OS info

Below are some of the shell environment vars you could use, for instance if you need to check the architecture. I have not tried them on SPARC yet, but I would like to.

HOSTNAME='Self explanatory'
HOSTTYPE='Architecture, i.e: i486'
OSTYPE='Operating system where bash is running, i.e: linux-gnu'
PPID='Parent process ID of the current shell'

6. Random numbers

RANDOM holds a random number between 0 and 32767.

# Getting a random number between 0 and 99
echo "RANDOM: $(( RANDOM % 100 ))"

In sum, there is a good number of shell environment variables to keep in mind when you are scripting. Next time I will post more shell vars, but focused on customizing your bash shell.

Nifty bash scripting

One of the things I should do more often is read the man pages. Recently I have spent some time digging into the bash man page, and I would like to share some interesting stuff.


1. BASH_ENV

When you run a script, bash looks for this variable and, if it is set, expands its value to the name of a file and reads its content. The value should contain an absolute pathname; otherwise bash will not be able to locate the file.

export BASH_ENV="$HOME/.customs"

Take a look at the above example. From now on, because I set BASH_ENV in my .bashrc, my scripts will have the set of functions or whatever else I defined in there. It would be the same as doing source $HOME/.customs in every script. This is really good: you have a set of custom functions and variables available to all your scripts.
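As an illustration, .customs could hold a couple of helpers that every script then gets for free (the helpers themselves are made up):

# ~/.customs - sourced by every script thanks to BASH_ENV
log() { printf '%s %s\n' "$(date +%T)" "$*"; }
die() { log "ERROR: $*"; exit 1; }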

A good idea would be to set this variable in either /etc/profile or /etc/bash.bashrc if you want to share it with the rest of the users on the system.

2. Run in a subshell $( ) vs the current shell { }

This is something you might not pay attention to. In order to assign the output of some shell commands to a variable, try first to use { }. Let's see an example:

# Current shell
declare -A colors=([red]=31 [green]=32)

{ for key in "${!colors[@]}"; do line+="$key"; done; }

# Subshell
$(for key in "${!colors[@]}"; do line+="$key"; done)

Perhaps you already see the problem that crops up here.

Current shell { }

Variables are available while the script is running: in the previous example I can access both variables, colors and line. Commands are separated by ';'. It is also quite handy when using conditionals:

 if_true  &&  { cmd1; cmd2; cmd3; }

Subshell $( )

Variable assignments do not persist: any change to any variable inside would not take effect after the command execution ends. However, it is possible to get the output by means of either echo or printf. Besides the variable scope, performance could be worse due to the new subshell execution.

3. Using [[ ]] expressions

You might know the old form [ ], but this is the new one, and it has some pretty cool properties such as pattern matching. I guess the best way to get an idea is to look at an example:

[[ $ip_address =~ ^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})$ ]] && echo "Valid IP address"

# Iterating matches
for n in ${BASH_REMATCH[@]}; do echo $n; done

Of course, the above example is just for illustration; it does not actually validate a real IP address (any octet can exceed 255), but it does the trick. Most interesting is the BASH_REMATCH variable, an array holding each substring matched by the parenthesized subexpressions. Regarding regular expressions, you should look at regex(3) and regex(7) in the man pages.

4. Case and ;& operator

This operator continues execution into the body of the next option if available. I came up with an example so you can see how it works:

while getopts "$options" flag; do
    case $flag in
        a)  list_all="on" ;&          # ;& falls through into the next body
        c)  [ "$list_all" == "on" ]  && shift
            [ "$list_all" == "off" ] && shift && shift
            if [ "$list_all" == "on" ]; then
                for color in "${!colors[@]}"; do
                    draw_with_color "$color" "$msg"
                done
            else
                check_color "$c" && draw_with_color "$c" "$msg"
            fi ;;
        l)  printf 'Available colors: \n'
            _show_colors && exit ;;
        h)  usage && exit ;;
        *)  usage && exit ;;
    esac
done

The previous chunk of code is part of my pickcolor script. The main idea was to jazz some text up with a chosen color; then I thought it would be more practical to also enable the option of painting the same text with all the available colors.
usage: pickcolor [OPTIONS] message
       -a --all    Test all colors for message
       -c --color  Set font color.
       -l --list   List available colors.
       -h --help   Help

# Colors up using all the colors
$ pickcolor -a  Show me the colors

# Color up using just one color
$ pickcolor -c red Paint my room !!

The special operator ;& was really useful and did the trick. In the next part I will write about shell vars and some builtin commands to keep in mind.

Shared Libraries

If you have your own shared library with the whole set of your favorite functions, you have probably seen this common error:

./myapp: error while loading shared libraries: cannot open shared object file: No such file or directory

Let's take a look inside the binary; I will call the missing library in this example:

tuxedo@host:$> ldd test => (0xb7ef6000) => not found => /lib/i686/cmov/ (0xb7d81000)

By default the system looks for in the paths defined in /etc/, which recursively adds the definitions under /etc/ Here's the quick trick:

tuxedo@host:$> export LD_LIBRARY_PATH=`pwd` 

I use this while I'm still developing the library; after that you can put it wherever you feel like.
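Once the library has a final home (say /home/tuxedo/syslib, as below), a more permanent option is to register that path system-wide and refresh the linker cache (run as root):

# echo "/home/tuxedo/syslib" > /etc/
# ldconfig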

tuxedo@host:$> ldd test => (0xb7f5e000) => /home/tuxedo/syslib/ (0xb7f56000) => /lib/i686/cmov/ (0xb7de3000)
    /lib/ (0xb7f5f000)

Now the executable works. I'll come back to this topic; I have some more interesting things to tell about shared libraries. Happy coding!!

Sudo LDAP (II)

Here is the second part of this article; if you missed the first one, you might take a look at Part One.

LDAP setup

Let's say your root suffix is dc=company,dc=com; you need to append the next entry to your directory:

dn: ou=sudoers,dc=company,dc=com
  objectClass: top
  objectClass: organizationalunit
  description: Sudo Configuration
  ou: sudoers

Besides we will need a default profile:

dn: cn=defaults,ou=sudoers,dc=company,dc=com
  sudoOption: ignore_local_sudoers
  objectClass: top
  objectClass: sudoRole
  cn: defaults
  description: Our default options
  sudooption: log_host
  sudooption: logfile=/var/log/sudolog
  sudooption: !syslog

Perhaps you would like to get the most out of sudo's power; take a look at its website. You can add as many profiles as you like. Suppose you want to add one for system administration:

dn: cn=sysadmin,ou=sudoers,dc=company,dc=com
 objectClass: top
 objectClass: sudoRole
 cn: sysadmin
 sudoUser: tuxman
 sudoUser: darkman
 sudoUser: bill
 sudoHost: ALL
 sudoCommand: /usr/bin/ls

As far as I'm concerned, how to configure sudo itself is out of the scope of this post; however, together with the sudo sources there is a utility, sudoers2ldif, a perl script that helps you translate your sudoers configuration file. The next step requires modifying our client profile; you probably have one similar to this:

dn: cn=default,ou=profile,dc=company,dc=com
objectClass: DUAConfigProfile
defaultSearchBase: dc=company,dc=com
cn: default
credentialLevel: proxy
profileTTL: 300
searchTimeLimit: 60
authenticationMethod: simple
serviceSearchDescriptor: sudoers:ou=sudoers,dc=company,dc=com

After these modifications you must reinitialize your client.

Setting up /etc/ldap.conf and nsswitch.conf

It's time to tell our client where to find the sudoers entries, by means of /etc/ldap.conf, which looks something like this:

   uri ldap://
   sudoers_base ou=sudoers,dc=company,dc=com
   binddn  cn=proxyagent,ou=profile,dc=company,dc=com
   bindpw  password
   sudoers_debug 0

You might use anonymous access; that's your choice, just remember to check your ACIs. Pretty interesting is the sudoers_debug option, which helps you to debug; at level 3 it will show you as much information as possible. The last step is telling nsswitch.conf where to find sudoers:

sudoers: ldap

Let's check if it is working:

tuxman@host:$> sudo ls
[sudo] password for client: 

sudo ls
LDAP Config Summary
uri              ldap://
ldap_version     3
sudoers_base     ou=sudoers,dc=company,dc=com
binddn           (anonymous)
bindpw           (anonymous)
ssl              (no)
sudo: ldap_initialize(ld, ldap://
sudo: ldap_set_option: debug -> 0
sudo: ldap_set_option: ldap_version -> 3
sudo: ldap_sasl_bind_s() ok
sudo: found:cn=defaults,ou=sudoers,dc=company,dc=com
sudo: ldap sudoOption: 'ignore_local_sudoers'
sudo: ldap sudoOption: 'log_host'
sudo: ldap sudoOption: 'logfile=/var/log/sudolog'
sudo: ldap sudoOption: '!syslog'
sudo: ldap search '(|(sudoUser=tuxman)(sudoUser=%other)(sudoUser=ALL))'
sudo: found:cn=sysadmin,ou=sudoers,dc=company,dc=com
sudo: ldap sudoHost 'ALL' ... MATCH!
sudo: ldap sudoCommand '/usr/bin/ls' ... MATCH!
sudo: Command allowed
sudo: user_matches=1
sudo: host_matches=1
sudo: sudo_ldap_lookup(0)=0x02
files/

At this point everything should be working. The last step: translating our existing sudoers file.

Rotating backups with Ruby

Today I've made some simple scripts to keep my backups updated. Two entries in our crontab will do the rest of the work for us.

The first one takes a backup of our database; the second rotates the files. I also added Syslog support, because I would like to know whether my script worked out.

DATE=$(date +%Y-%m-%d)
# The destination path is an assumption; point it at your own backup directory
mysqldump -h host -u userdb --password=1234 database | gzip > \
    "${HOME}/backups/database-${DATE}.sql.gz"

Let's add some entries to our crontab:

user@home:~> crontab -e

As far as I'm concerned, I would review the entries afterwards; it's pretty easy to make a mistake while writing them, and besides, I guess it's a best practice. The following command shows our entries:

user@home:~> crontab -l

# m h  dom mon dow   command
# backupdb.sh is the mysqldump script above (the name is an assumption)
0 00 * * * ${HOME}/scripts/backupdb.sh
0 00 * * * ${HOME}/scripts/rotatedb.rb

It's a MUST to leave an empty line at the end of the crontab, otherwise the cron will not run. There is another thing to take into account: PATHs. Don't forget to set them.


Since I wanted Syslog support, I had to set it up correctly in /etc/syslog.conf:

local7.* /var/log/backups.log

Aside from creating the above log file, you must reload the syslog daemon. Afterwards, I wanted to know whether that configuration would work. There is a command that will help you:

user@home:~> logger -p local7.debug "Sending a message to debug"
user@home:~> more /var/log/backups.log
Aug  6 20:31:22 home user: Sending a message to debug

The first argument, local7, is the Syslog FACILITY and the other one is the PRIORITY. You must adapt them to your own script.

Here is the script to rotate. When more than ten backups have piled up, it will remove the five oldest ones.

#!/usr/bin/env ruby
%w(syslog).each { |c| require c }

# Directory holding the dated dumps; adjust it to your own layout
BACKUP = "#{ENV['HOME']}/backups"

module SyslogMsg
  Syslog.open("rotatedb", Syslog::LOG_PID | Syslog::LOG_CONS, Syslog::LOG_LOCAL7)

  def self.send(msg = "Message sent")
    Syslog.log(Syslog::LOG_DEBUG, msg)
  end
end

files = []
# The dated filenames sort oldest-first
Dir.entries(BACKUP).sort.each do |e|
  files << e if e !~ /^\./
end

if files.length > 10
  0.upto(4) do |index|
    File.delete(BACKUP + "/" + files[index])
  end
  SyslogMsg::send("Backups for mysql rotated.")
end

Finally, if you run into trouble, review the above steps: check that you have the right permissions and that you have reloaded your syslog configuration. If you try the logger command and you see the message you have just sent, everything should work out.

Sudo LDAP (I)

Last day at work I had to get sudo working with LDAP. I'm not gonna get into a discussion about whether it's worth using sudo or not. I can just say from my own experience: if you have a large number of users and hosts, they are clearly distinguishable, and you are using roles (i.e. sysadmin, backup, any kind of group...), it's totally worth it.

Moreover, bear in mind you would otherwise have to update every single sudo config file, which would definitely be tedious, and here is where LDAP comes in.

I brought into play two virtual machines, both of them running Solaris 10. I commonly prefer doing that before making some huge mistake in a real environment, so here's what I did: I called the client box0 and the server ldapbox. For the rest of the post, I'll assume you have a Directory Server and the Native LDAP client service already working fine.

Extending Schema

This step is pretty straightforward; you only need to add the schema below to your directory instance's user schema file and restart the server. Assuming the default paths, that file would be /var/opt/SUNWdsee/dsins1/config/schema/99users.ldif.

attributeTypes: ( NAME 'sudoUser' DESC 'User(s) who may run sudo'
 EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMatch SYNTAX X-ORIGIN 'SUDO' )

attributeTypes: ( NAME 'sudoHost' DESC 'Host(s) who may run sudo'
 EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMatch SYNTAX X-ORIGIN 'SUDO' )

attributeTypes: ( NAME 'sudoCommand' DESC 'Command(s) to be executed by sudo'
 EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMatch SYNTAX X-ORIGIN 'SUDO' )

attributeTypes: ( NAME 'sudoRunAs' DESC 'User(s) impersonated by sudo'
 EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMatch SYNTAX X-ORIGIN 'SUDO' )

attributeTypes: ( NAME 'sudoOption' DESC 'Option(s) followed by sudo'
 EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMatch SYNTAX X-ORIGIN 'SUDO' )

objectClasses: ( NAME 'sudoRole' SUP top STRUCTURAL
 DESC 'Sudoer Entries' MUST ( cn ) MAY ( sudoUser $ sudoHost
 $ sudoCommand $ sudoRunAs $ sudoOption $ description ) X-ORIGIN 'SUDO' )

Now you only need to restart the server. Why do I need to restart the server? The thing is, the first time you started your server the schema file was read into memory, so any change you make later will not take effect unless you restart the instance.

LDAP support for sudo

You will need to get the source code of sudo, version 1.7 or higher; the reason is that earlier versions will not read nsswitch.conf. I used sudo-1.7.2p7, which you can get from Sunfreeware. Of course, if you want to compile it you will need to solve some dependencies. Here is the list; however, you had better confirm it yourself.

         |-* gcc-3.4.6-sol10-x86-local  
         |-* libiconv-1.13.1-sol10-x86-local
         |-* libintl-3.4.0-sol10-x86-local 
         |-* openssl-1.0.0a-sol10-x86-local

 /---|( OpenLdap )
         |- * db-4.7.25.NC-sol10-x86-local
         |- * libtool-2.2.6b-sol10-x86-local
         |- * sasl-2.1.21-sol10-x86-local
         |- * openldap-2.4.22-sol10-x86-local

At this point, after installing all the dependencies, we just need to compile:

./configure --with-ldap && make 

For those who like tinkering with Unix tools, it is time to call ldd and take a look into sudo: =>   /lib/ =>    /lib/
*          =>   /usr/local/lib/
*          =>   /usr/local/lib/ =>  /usr/local/lib/ =>        /lib/ =>   /lib/ =>     /lib/ =>   /lib/ =>        /usr/lib/ =>   /usr/lib/ =>         /usr/local/lib/ =>       /usr/local/ssl/lib/ =>    /usr/local/ssl/lib/ =>         /usr/local/lib/ =>         /usr/local/lib/ =>   /usr/lib/ =>    /lib/ =>    /lib/ =>   /lib/ =>   /lib/ =>  /lib/ =>         /lib/ =>     /lib/

If the LDAP libraries (marked with *) do not appear, remember to add their path:

# crle -l -u PATH_TO_LIBRARIES

Obviously I wouldn't like to have to install OpenLDAP on every client (if you want to apply this to more than one), so I thought of shipping just the libraries I needed. In a nutshell, we have just extended the schema and enabled LDAP support for sudo. The two remaining points come in the next post.

Jobs in bash

In most cases when I'm working with the shell, I send my applications to the background, mostly my emacs. Nonetheless, I forget quickly, and I end up opening the same file too many times.

Due to my lack of memory I decided to make a function that shows me how many programs I have in the background. Here are the functions:

function get_njobs {
  njobs=$(jobs | wc -l)

  if [ "$njobs" -gt 0 ]; then
    echo "$njobs" | sed -e 's/\([0-9]*\)/(\1)/g'
  fi
}

function prompt {
  # A shell inside Emacs reports TERM=dumb; keep colors out of it
  if [ "$TERM" != "dumb" ]; then
    alias ls='ls --color=auto'
  fi
  # The exact layout is up to you; the important part is calling get_njobs
  PS1='\u@\h:{\W} $(get_njobs)> '
}
prompt

Now send some work to the background and you will see what I mean:

user@host:{~} emacs &

As you may notice, the number inside the parentheses reminds me whether there is some program running in the background. Do not forget to add the functions at the end of your bashrc.

UPDATE: If you add \j to your PS1 variable you get the same effect, but if there are no jobs you will always see a (0).

WPA roaming

Sometimes I go to visit my parents, and some troubles with the wireless crop up and everything gets messed up on my laptop. If you are constantly moving around, for example between work, home and university, you should be using roaming in order to connect to the suitable network automatically.

    id_str="mom"                    # Tha'ts an ID.

    id_str="home"                 # And that's another ID

Now it's time to set up the network in /etc/network/interfaces:

auto wlan0
iface wlan0 inet manual
    wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface mom inet dhcp
iface home inet dhcp

It's important to set the interface to manual, and don't forget to give an id_str to each network; each id maps to an iface stanza. You can also use a static configuration instead of dhcp:

iface home inet static
    # Illustrative addressing
    address
    gateway

If you get into trouble, please take a look at the documentation in /usr/share/doc/; it's really useful.

LVM: Mirroring /home

I usually keep my home directory on a different partition. Last week I had to reinstall my Debian, so I decided to add a little bit of redundancy in order to prevent data loss. I had two disks, so I decided to migrate the data from the old home to a mirror.

I would recommend people try this, because it is easier than it seems and will not take longer than ten minutes.

Before doing anything, you need to install the lvm2 package if you work with Debian or a derivative:

hlab:# aptitude install lvm2
hlab:# sfdisk -d /dev/sda | sfdisk /dev/sdb

This makes the second disk's partition table (/dev/sdb) identical to the first hard drive's (/dev/sda).

hlab:/mnt/vdata# sfdisk -d /dev/sda
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=       63, size=976768002, Id= 5
/dev/sda2 : start=        0, size=        0, Id= 0
/dev/sda3 : start=        0, size=        0, Id= 0
/dev/sda4 : start=        0, size=        0, Id= 0
/dev/sda5 : start=      126, size=238275954, Id=83
/dev/sda6 : start=238276143, size=738491922, Id=83

1. Physical volumes

The first thing is to add physical volumes, and by physical I mean either partitions or whole disks. Here is the way:

hlab:# pvcreate /dev/sda5 /dev/sdb5

It's suitable to check the results:

hlab:# pvdisplay

--- Physical volume ---
PV Name               /dev/sda5
VG Name               volhome
PV Size               113.62 GB / not usable 1.68 MB
Allocatable           yes
PE Size (KByte)       4096
Total PE              29086
Free PE               926
Allocated PE          28160
PV UUID               1ZPA7A-gXwn-O0G6-my3b-PUVt-mk9o-0A4sW1

--- Physical volume ---
PV Name               /dev/sdb5
VG Name               volhome
PV Size               113.62 GB / not usable 1.68 MB
Allocatable           yes
PE Size (KByte)       4096
Total PE              29086
Free PE               926
Allocated PE          28160
PV UUID               0DfPHc-tXBn-dcYR-yKag-EL7j-RI4A-K9Eo0j

2. Volume groups

The above step is just a formality (I see it like that) to declare that you have some partitions or disks available. Now you can create a volume group, which will appear under /dev/volname. Just take a look at the command:

hlab:# vgcreate -A y vols /dev/sda5 /dev/sdb5 

If you are familiar with ZFS, at this point we have something like a zpool.

3. Logical volumes

Now it's time to set up the mirror. Here is the command:

hlab:# lvcreate -m1 --nosync -L 110G --mirror-log core -n home vols

By default you would need a third device to keep the mirror log, so instead of creating another partition I used --mirror-log core, which keeps the log in memory. The --nosync flag skips the initial synchronization, which is fine for a brand new, empty volume.

At this point is possible to show the mirror:

hlab:# lvs -a -o +devices
LV              VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
home            vols mwi-ao 110.00G                        100.00         home_mimage_0(0),home_mimage_1(0)
[home_mimage_0] vols iwi-ao 110.00G                                       /dev/sda5(0)
[home_mimage_1] vols iwi-ao 110.00G                                       /dev/sdb5(0)

The interesting part here is the meaning of the attributes (the mwi/iwi flags in the Attr column); you can make them out by means of man lvs.

4. Data migration

The last point is to copy the home data and add a line to /etc/fstab. Instead of using a command such as cp -R I prefer using tar, because it is cleaner and will copy absolutely all the files. Here are the commands:

First move your old home:

hlab:#  mv /home /home_tmp

Second, add the volume to /etc/fstab and mount it:

/dev/vols/home /home         auto    rw 0 0

hlab:#  mount -a; cd /home_tmp
hlab:#  tar cvf - . | (cd /home; tar xvf -)

Summing up, having a mirror with LVM is cheap and pretty straightforward. However, LVM mirroring is just for volumes; I guess the best way would be using Linux software RAID (mdadm) and then LVM on top of the raid.
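That setup is beyond this post, but the idea would be something along these lines, reusing the partitions from the example above:

hlab:# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
hlab:# pvcreate /dev/md0
hlab:# vgcreate vols /dev/md0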

PAM mkhomedir for solaris

There is a PAM module available to create home directories on the fly. This is quite useful if you have an LDAP server (in this case Directory Server 6.3) and you are inserting users whose home directories were not created.

First of all, you will need to download the Linux-PAM sources (the include paths below belong to its tree) and compile the module:

PATH=/usr/sfw/bin:/usr/ccs/bin:$PATH; export PATH

gcc -c -g -O2 -D_REENTRANT -DPAM_DYNAMIC -Wall -fPIC \
    -I../../libpam/include \
    -I../../libpamc/include \
    -I../pammodutil/include pam_mkhomedir.c

After the compilation the module did not work. What should I do now? Well, I tried to debug why the module was not working properly. First I enabled debug mode in the syslog daemon; you only need to add

*.debug /var/adm/pam_log

in the /etc/syslog.conf. Here is what I found out after poking around the logs:

May 18 10:27:25 kestod sshd[26177]: 
[ID 547715 auth.debug] PAM[26177]: load_function: successful load of 
May 18 10:27:25 kestodd sshd[26177]:
[ID 482737 auth.debug] PAM[26177]: pam_open_session(8a828, 0) 
May 18 10:27:25 des-to16-d sshd[26177]: [ID 926797 auth.debug]
PAM[26177]: load_modules(8a828,

Nothing special pointed me to how to solve this, so I tried a different approach: perhaps by trying with an LDAP user through different services I could find out something. I tried SSH first and I was kicked out of the system. My second thought was to try telnet, and I got this:

login: user1 
Password: login: fatal: relocation error:
file /usr/lib/security/ symbol _pammodutil_getpwnam: referenced symbol not found 
Connection to localhost closed by foreign host.

This gave me some clues. I edited pam_mkhomedir.c and found that it relies on four _pammodutil_* helper functions, among them _pammodutil_getpwnam, the symbol from the error above.

Those functions are not available in Solaris 10 (nor, of course, in earlier versions). What did I do? I put all those functions together in the same file and added some includes; that is all the code you need to add to pam_mkhomedir.c before compiling again.

You have to copy and paste both their declarations and their implementations. Then enable the module, for instance in /etc/pam.conf:

other session required skel=/etc/skel umask=0022

Now you can try to log into the system with an LDAP user:

  ssh -l user5 localhost 
  Creating directory '/export/home/user5'. 

  Last login: Thu May 14 17:16:21 2009 from localhost 

You can also try to access using telnet. There is backward compatibility among different versions of Solaris; that means it will work on Solaris 8 and 9 as well. I hope this information can be useful to somebody.


I currently work as an Integration Engineer @Ericsson Spain. I spend most of the time travelling all around the world. I like learning about Linux, system infrastructures, performance, and programming.

In the last years I've been trying to post something useful; blogging is about writing regularly and, above everything, being unique and original, something that might not seem easy to achieve.

If you can’t explain it simply, you don’t understand it well enough. Albert Einstein

Mastering any discipline requires time and practice; be passionate but not obsessed.