Geek on the road

open source, open knowledge

Git Best Practices @Atlassian

This morning was the first day of Silicon Valley Code Camp, and I have to say I did not expect such a big, well-organized event.

Bear in mind this event was based on donations; despite this, they provided coffee, lunch and very nice stands from companies such as JetBrains, IBM and Pivotal.

I thought that SaaS Workflows & Git Best Practices by Tim Pettersen and Erik van Zijst could be a good talk; although I’m not a software engineer, I do keep my personal projects under Git, and I wanted to learn more about it.

The speakers made the talk very easy to follow; here are some of the ideas I found quite useful.

Branching

In Git, branching is very cheap, and the message was clear:

Branch all the things!!

People not familiar with Git tend to use a more traditional, linear workflow.

If somebody pushes something that breaks, the whole team is affected. Alternatively, you may use the merge workflow, which looks something like this:

The idea is pretty simple, and very useful if you have a CI infrastructure. Below are the main branches in the repository.

Master Branch

All the code that is already in production.

Development Branch

All the code that is in staging. It branches off from Master.

Feature Branch

It branches off from Development and merges back into Development once the feature is done.

The branch names follow a convention:

feature/JIRA-<ID_JIRA><-DESCRIPTION>
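As a quick illustration, creating and publishing such a feature branch could look like the sketch below (the JIRA key and description are made up, and I assume branches literally named Master and Development):

git checkout Development
git pull
git checkout -b feature/JIRA-123-add-login
# ...work and commit as usual...
git push -u origin feature/JIRA-123-add-login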

Hotfix/Bugfix

Hotfixes work a bit differently (a command sketch follows the steps):

  • Branch off from Master to a new branch named hotfix/JIRA-<ID_JIRA>.
  • Merge the hotfix into Development.
  • If that worked, merge the hotfix into Master.
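A minimal sketch of that flow, again assuming branches literally named Master and Development and a made-up JIRA key:

git checkout -b hotfix/JIRA-456 Master
# ...fix the bug and commit...
git checkout Development
git merge hotfix/JIRA-456
# if everything still works in staging:
git checkout Master
git merge hotfix/JIRA-456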

Although rebase makes for a very neat workflow, it is also dangerous and pretty easy to mess your repository up with. The most important thing to remember: do not rebase public branches, because rebase rewrites your history, which means it changes the SHA-1 of your commits. Below are the three integration options that were compared, with a rough command sketch after the lists.

Merge commit

  • ugly
  • full traceability
  • hard to screw up

Rebase (fast forward)

  • no merge commits

Rebase (squash)

  • easy to read
  • more difficult to trace changes
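For reference, here is my own rough mapping of those three options to commands (not the speakers' exact commands), assuming a feature branch being integrated into Development:

# Merge commit: full traceability, adds a merge commit
git checkout Development
git merge --no-ff feature/JIRA-123-add-login

# Rebase (fast forward): linear history, no merge commit
git checkout feature/JIRA-123-add-login
git rebase Development
git checkout Development
git merge --ff-only feature/JIRA-123-add-login

# Rebase (squash): squash the feature into fewer commits, then fast forward
git checkout feature/JIRA-123-add-login
git rebase -i Development        # mark commits as "squash" in the editor
git checkout Development
git merge --ff-only feature/JIRA-123-add-login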

And that is pretty much it. At the end of the talk there was a very funny Git quiz. Thanks to the organizers for this event.

Docker Palo Alto Meetup

Today I attended a new Docker Palo Alto Meetup. Although it was crowded, I got a good spot and learned a few more things about Docker and CoreOS. There were two talks; the second one covered quite an interesting topic, Mesos, a cluster resource manager, but I did not take many notes on it.

In the talk Building infrastructure based on CoreOS and Docker, Damien Metzler from Nuxeo explained how they use CoreOS and Docker, together with some custom tools they developed. The Nuxeo platform provides content management in an easy and fast way; below are the main points:

  • Fully open source
  • Testing has to be easy and fast
  • Provide quick trial
  • Provide a software factory to the customer
  • Choose models
  • Run the app

Basically, they run in containers all the applications a customer picks from the main dashboard. An interesting concept:

Design your cluster for failure

It turns out they rely heavily on Java processes that can eat up to 1 GB of memory each, and eventually you reach the maximum capacity of your bare metal.

CoreOS

CoreOS is a minimal Linux distribution that comes with:

  • Docker
  • etcd (distributed key/value store)
  • fleet (launch jobs on your cluster)
  • systemd (replaces init)
  • Active/Passive root partition

I really like the active/passive root partition. You have two partitions for the OS: if a patch is required, you stay on your current partition while the secondary one is upgraded, and the system then reboots and runs the new one.

fleet

In order to run a job in the cluster, you use a systemd-like definition that fleet processes.
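From what I understood, submitting and starting such a unit looks roughly like this (the unit name is made up; check the fleet documentation for the exact workflow):

fleetctl submit myapp.service    # upload the systemd-like unit to the cluster
fleetctl start myapp.service     # schedule it on one of the machines
fleetctl list-units              # see where it is running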

etcd

Helps distribute information to the members of the cluster (a small command sketch follows the list):

  • register a service on etcd
  • location
  • status
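A hedged sketch of what that could look like with etcdctl (the key layout here is just an assumption, not Nuxeo's actual schema):

etcdctl set /services/app1/location '10.0.0.12:8080'
etcdctl set /services/app1/status 'running'
etcdctl get /services/app1/status
etcdctl ls --recursive /services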

Gogeta

Gogeta is a reverse proxy written in Go. Some ideas:

  • maintains the last access time
  • keeps the last access key in etcd
  • restarts the service if required
  • kills the container after inactivity

Why would you use Go?

  • easy to use
  • etcd channels
  • static status pages (error, wait)
  • first working prototype in one week (while learning go)
  • build a static executable (good for containers)

A very useful tip Damien gave:

don’t ever put data in the container; if the container goes down, you lose the data.

You can use Elasticsearch and spread your data across the cluster.

In summary, Docker and CoreOS are hot topics, together with a set of tools that provide a different way to manage your clusters. Increasingly I see how most of these tools are written in Go, and how its community grows.

IPv4 Subnetting

During the preparation for the CCNA I had to brush up on network subnetting. I finally found a pretty straightforward way to calculate subnets. Here are some tips.

1. Finding subnet network, broadcast and last host

The simplest thing to do is to work with groups.

Prefix    /+1   /+2   /+3   /+4   /+5   /+6   /+7   /+8
Netmask   128   192   224   240   248   252   254   255
Group     128   64    32    16    8     4     2     1

If I get the IP address 191.10.10.243/28, I know that the netmask is 255.255.255.240, which means I need to look for groups of 16 according to the table. Now you need to find the closest group to that IP address.

Here are the groups of networks based on that group size. In this case, 191.10.10.240/28 is the closest one to the previous IP address.

Starting with 191.10.10.0/28 up to 191.10.10.240/28, just moving along the fourth octet in groups of 16.

In order to find the broadcast address and the last available host, you just need to add the group size (16) to get the next subnet, which in this case would be 191.10.10.256.

Broadcast address = Next Subnet - 1
Last host = Next Subnet - 2

From the previous example:

Subnet: 191.10.10.240/28

Last Host: 191.10.10.254/28

Broadcast: 191.10.10.255/28

As you can see, this is a very simple way to find out which subnet an IP address belongs to.
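If you want to double-check the mental math, here is a small bash sketch of my own (not from any course material) that computes the network, broadcast and last host for a given address and prefix:

#!/bin/bash
# Usage: ./subnet.sh 191.10.10.243 28
ip=$1; prefix=$2

# Convert a dotted quad to a 32-bit integer and back.
to_int()  { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }
to_quad() { echo "$(( ($1>>24)&255 )).$(( ($1>>16)&255 )).$(( ($1>>8)&255 )).$(( $1&255 ))"; }

mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF ))
net=$(( $(to_int "$ip") & mask ))
bcast=$(( net | (~mask & 0xFFFFFFFF) ))

echo "Network:   $(to_quad $net)/$prefix"
echo "Broadcast: $(to_quad $bcast)"
echo "Last host: $(to_quad $((bcast - 1)))"

For 191.10.10.243 and /28 it prints the same 191.10.10.240, 191.10.10.255 and 191.10.10.254 obtained above.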

2. Split from larger subnets to smaller ones

First and foremost, you need to know that subnetting is always done from the largest subnets (shortest prefixes) down to the smallest in order to avoid overlapping.

Assuming I want to split the network 10.20.0.0/24 into several subnets, I should start with the largest subnet I can carve out, a /25, and then take one of those and split it into smaller subnets.

10.20.0.0/25
10.20.0.128/25

The problem appears when you decide to take a smaller subnet first and a bigger one later. For example, take 10.20.0.0/30 first. Afterwards you start subnetting on what you think is the next available subnet, which based on the previous one you might believe is 10.20.0.4/27.

As you may already notice, this is wrong: that /27 would actually cover the host range 10.20.0.1 – 10.20.0.30, which includes the IP addresses inside 10.20.0.0/30.

Nowadays I normally use an IP calculator, but I find this an excellent workout for mental calculations.

Troubleshooting EIGRP

Preparing for the CCNA is proving challenging. Troubleshooting by guessing wastes time; you get quicker results if you define some steps for how to proceed. This is the procedure I normally follow when something is not working properly.

1. Checking the interfaces are in UP/UP state

Before starting with the routing protocols, verify that the interfaces are working properly:

R1> show ip int brief

2. Checking L2

Review if there is any problem on the serial link:

  • A keepalive removed from one router: that interface will appear up/up, while the other end of the link shows up/down.
  • Authentication with either a wrong username or a wrong password will show up/down on both ends of the link.
  • A mismatched encapsulation will show up/down on both ends of the link.

3. EIGRP neighbors and AS

Confirm that you have the expected neighbors, and check that the AS number is the same on all the routers.

R1> show ip eigrp neighbor

4. EIGRP interfaces

It may happen that some interfaces are not enabled for EIGRP, or that some interfaces are enabled through a wrong network command.

R1> show ip eigrp interface

If some interface is enabled (from the previous step) but neighbor routers do not see that network, review the configuration. Regarding the network command, check both the network itself and the wildcard.

e.g.: network 10.0.0.0 (but you actually have 10.4.0.0)
e.g.: network 191.1.1.0 0.0.0.1 (when you actually want 0.0.0.3 for a /30)

R1> show ip protocols 

The above command will show whether there is any error in the network definition. If the network command for an interface was not added, that network simply will not appear.

5. K-values and passive interfaces

Actually, we can get this information from the previous check, but I think it is much clearer to review it separately.

R1> show ip protocols 

This command shows any passive interfaces, which prevent neighbor relationships from being established. Remember that the K-values must match on both routers; you can check them with the same command.

R1(config-router)# no passive-interface s0/0/0

6. EIGRP Authentication

I’m not sure whether this topic is covered in the CCNA, but it is for sure one of the issues you may find. In this case, check the configuration of:

  • Key chain
  • Key ID
  • Key String

These parameters must agree on both routers when setting up EIGRP authentication.

7. Multicast in serial links

If Frame Relay is configured on a physical interface, broadcast traffic will not be supported; the same occurs for point-to-multipoint links. Define subinterfaces or modify the OSPF network type.

In multipoint networks, add the broadcast keyword at the end. The show frame-relay map output should show broadcast; otherwise it will not work.

R1(config-if)# frame-relay map ip <IPADDRESS> <DLCI> broadcast

8. Access lists filtering

At this point everything has been configured properly, yet you may see a router whose adjacency keeps going up and down. Check whether there is any access list blocking the IP traffic.

R1# show access-list

Definitely, guessing is not an option when you are trying to narrow down an issue and the clock is not on your side.

In my opinion, an analysis through L1, L2, L3 and the EIGRP details should point you to the error.

Debugging NAT Overload

It turns out I was asked to get the CCNA certification at work. It’s proving quite difficult to find time to prepare for it, and besides I’m traveling quite often, which makes it even more complicated. I was tinkering with Packet Tracer, reviewing some concepts about NAT, and I wound up with an interesting case I did not expect, basically because I did not understand NAT at all; now I’m starting to.

A router implementing NAT overload keeps a table mapping a private IP address (RFC 1918) and source port to one external routable IP address and a unique port. Below is the image of the lab I prepared today:

Here is the configuration for R1

interface GigabitEthernet0/0
 ip address 191.1.1.1 255.255.255.252
 ip nat outside
!
interface GigabitEthernet0/1
 ip address 192.168.1.1 255.255.255.0
 ip nat inside
!
ip nat inside source list 1 interface GigabitEthernet0/0 overload
ip route 172.16.1.0 255.255.255.0 GigabitEthernet0/0 
!
access-list 1 permit host 192.168.1.2

The configuration for R2 was exactly the same, but using a different network.

interface GigabitEthernet0/0
 ip address 191.1.1.2 255.255.255.252
 ip nat outside
!
interface GigabitEthernet0/1
 ip address 172.16.1.1 255.255.255.0
 ip nat inside
!
ip nat inside source list 1 interface GigabitEthernet0/0 overload
ip route 192.168.1.0 255.255.255.0 GigabitEthernet0/0 
!
access-list 1 permit host 172.16.1.2

The first test was a simple ping from R2 towards R1; it did not work, the requests timed out. Taking a look at the translation tables on both routers showed me the error.

R2# show ip nat translations
Pro  Inside global     Inside local       Outside local      Outside global
icmp 191.1.1.2:21      172.16.1.2:21      192.168.1.2:21     192.168.1.2:21

On R1 we found:

R1# show ip nat translations
Pro  Inside global     Inside local       Outside local      Outside global
icmp 191.1.1.1:21      192.168.1.2:21     191.1.1.2:21       191.1.1.2:21

The problem: R1 was performing NAT as well, modifying the source IP address of the reply from 192.168.1.2 to 191.1.1.1. When the packet arrived at R2, R2 expected 192.168.1.2 (the outside global address in its table) in order to find the inside local address and forward the packet; since the source no longer matched, there was no translation for this packet. Verifying the statistics, I could see the misses increasing during the ping:

R2# show ip nat statistics 
Total translations: 4 (0 static, 4 dynamic, 3 extended)
Outside Interfaces: GigabitEthernet0/0
Inside Interfaces: GigabitEthernet0/1
Hits: 1  Misses: 3
Expired translations: 0
Dynamic mappings:

R2# show ip nat statistics 
Total translations: 5 (0 static, 5 dynamic, 4 extended)
Outside Interfaces: GigabitEthernet0/0
Inside Interfaces: GigabitEthernet0/1
Hits: 1  Misses: 5
Expired translations: 0

One of the solutions was to disable NAT on either R1 or R2. However, defining a static mapping for that miss makes the ping work. The only drawback is that this mapping only works for the IP address 192.168.1.2; replies from any other host would still fail.

R2(config)# ip nat outside source static 191.1.1.1 192.168.1.2

The conclusion is that the best approach before debugging is to understand how things work.

Recovering a VDI Disk

Recently I ran into an issue regarding a VM’s storage. It turns out one of the VDIs of my virtual machine was faulty. I had some data inside and I didn’t want to lose it.

First of all, we can convert the VDI to a raw image. I did the conversion from Windows; I guess it should be the same from Linux.

converting
C:/VDIs/VBoxManage internalcommands converttoraw disk-vm-testing.vdi vdisk.raw

Now let’s see what is inside the raw disk:

fdisk
dm@testing:#{~} fdisk  -l vdisk.raw
You must set cylinders.
You can do this from the extra functions menu.

Disk vdisk.raw: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d598a

    Device Boot      Start         End      Blocks   Id  System
vdisk.raw1   *           1         996     7993344   83  Linux
Partition 1 does not end on cylinder boundary.
vdisk.raw2             996        1045      392193    5  Extended
Partition 2 has different physical/logical endings:
     phys=(1023, 254, 63) logical=(1044, 52, 32)
vdisk.raw5             996        1045      392192   82  Linux swap / Solaris

In this case I am only interested in the first partition, because the second one was used as a swap device. I have to find the offset where the data starts.

offset
dm@testing:#{~}  parted vdisk.raw
GNU Parted 2.3
Using /media/sf_Downloads/vdisk.raw
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit
Unit?  [compact]? B
(parted) p
Model:  (file)
Disk /media/sf_Downloads/vdisk.raw: 8589934592B
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start        End          Size         Type      File system     Flags
 1      1048576B     8186232831B  8185184256B  primary   ext4            boot
 2      8187280384B  8588886015B  401605632B   extended
 5      8187281408B  8588886015B  401604608B   logical   linux-swap(v1)

(parted)q

The ‘Start’ column shows me the offset for the partition I’m interested in. Next step is to map this offset with a loopback device and mount it.

mount
dm@testing:#{~} losetup -o 1048576  /dev/loop0  vdisk.raw
dm@testing:#{~} mount /dev/loop0 /mnt/vdi
dm@testing:#{~} mount | grep loop
/dev/loop0 on /mnt/vdi type ext4 (rw)
dm@testing:#{~} ls /mnt/vdi
bin  boot  dev  etc  home  initrd.img  lib  lost+found  media  mnt  opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var  vmlinuz

At this point you should be able to access your data, or at least some part of it.
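Once you have copied the data out, remember to undo the mapping (using the same paths as above):

umount /mnt/vdi
losetup -d /dev/loop0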

Strace to the Rescue

Today on IRC somebody asked what would be the best way to know whether a process exists on the system. The choice was between using test -d /proc/PID or kill -0 PID.

Both of them do the job; the question is which one is better. Then I remembered an option that comes with strace and lets you summarize the syscalls of a given trace. Besides, we can sort the output by the number of calls.

cmd1
adm@testing:${~} strace -c -S calls kill -0 1234
kill: No such process
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  -nan    0.000000           0        12           mmap2
  -nan    0.000000           0         6           close
  -nan    0.000000           0         5           open
  -nan    0.000000           0         5           fstat64
  -nan    0.000000           0         4           read
  -nan    0.000000           0         4         4 access
  -nan    0.000000           0         3           brk
  -nan    0.000000           0         3           munmap
  -nan    0.000000           0         2           mprotect
  -nan    0.000000           0         1           write
  -nan    0.000000           0         1           execve
  -nan    0.000000           0         1           getpid
  -nan    0.000000           0         1         1 kill
  -nan    0.000000           0         1           dup
  -nan    0.000000           0         1         1 _llseek
  -nan    0.000000           0         1           fcntl64
  -nan    0.000000           0         1           set_thread_area
------ ----------- ----------- --------- --------- ----------------
100.00    0.000000                          52         6 total

On the other hand, the second command’s output:

cmd2
adm@testing:${~}strace -c -S calls test -d /proc/1234
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  -nan    0.000000           0         7           mmap2
  -nan    0.000000           0         5           close
  -nan    0.000000           0         3           open
  -nan    0.000000           0         3         3 access
  -nan    0.000000           0         3           brk
  -nan    0.000000           0         3           fstat64
  -nan    0.000000           0         2           mprotect
  -nan    0.000000           0         1           read
  -nan    0.000000           0         1           execve
  -nan    0.000000           0         1           munmap
  -nan    0.000000           0         1         1 stat64
  -nan    0.000000           0         1           set_thread_area
------ ----------- ----------- --------- --------- ----------------
100.00    0.000000                          31         4 total

The calls column shows the number of syscalls performed. Using test -d /proc/PID gives better performance due to the smaller number of syscalls.

I really like strace; it is a tool you had better know. Here I only gathered syscall statistics, but you can trace specific syscalls or a bunch of them, follow forked processes and much more; this is only a simple example, and I hope it helps. To close, a couple of other invocations I find handy.
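These are standard strace options, shown here only as a reminder (the PID and paths are made up):

# Trace only open() calls, following any forked children
strace -f -e trace=open ls /tmp

# Attach to a running process and write the trace to a file
strace -p 1234 -o /tmp/trace.out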

Interesting Shell

Finally, here is my last compilation about shell environment variables.

1.CDPATH

If set, it holds a directory list separated by ‘:’, and every time you type “cd DIR” bash searches for that directory in the list, even if your current directory is not the right one. Let’s see an example: I add the directory holding all my git repositories, so if I try to cd into one of those repositories from anywhere on the system, even if I am not in the right place, I will get there:

shell
export CDPATH="$HOME/mygits"
devadm@testing$(~)ls mygits
blog scripts org-mode cv
devadm@testing$(/usr/share/doc) cd  blog
devadm@testing$(~/mygits/blog)

In my opinion it is useful to keep just one directory, or maybe up to two; more than that, it could get messy.

2.FIGNORE

If you want to narrow down the output when performing filename completion, this is without a doubt the variable for you. When set, bash ignores the listed suffixes while performing filename completion.

shell
devadm@testing$(~)export FIGNORE="#:.o:~"
devadm@testing$(~/application)ls file*
file1.o file2.o file1 file2 file~
devadm@testing$(~/application)ls [TAB] [TAB]
file1 file2

I mention this variable because it was curious to me. I guess I have never found a case where I had to use such a thing, but that does not mean it would not be useful to somebody else.

3.HISTCONTROL

It can be set to ignorespace, which does not record lines starting with a space. On the other hand, I do not like having duplicated lines; ignoredups does not record a line matching the previous history entry. Finally, since I like both behaviours, I use ignoreboth instead.

.bashrc
export HISTCONTROL=ignoreboth

4.HOSTFILE

It holds the path to a file containing a list of hosts in the same format as /etc/hosts. If it is set, bash tries to complete hostnames with the entries in that file; otherwise it will look at /etc/hosts.

It could be pretty useful if you want to keep a personal file with your hosts.

.bashrc
export HOSTFILE="$HOME/.myhosts"

The only problem I see here is if you also want to access hosts defined in /etc/hosts. You could make a script that checks whether there are new entries in /etc/hosts and then appends them to your personal list. Once again, this is only an idea I came up with, but I did not actually follow through.
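Just to sketch that idea (untested, and assuming the ~/.myhosts path used above):

#!/bin/bash
# Append any /etc/hosts entry that is not yet in the personal host file.
personal="$HOME/.myhosts"
touch "$personal"

grep -Ev '^[[:space:]]*(#|$)' /etc/hosts | while IFS= read -r line; do
    # -x: match the whole line, -F: fixed string, -q: quiet
    grep -qxF "$line" "$personal" || printf '%s\n' "$line" >> "$personal"
done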

5.TMPDIR

If set, bash uses its value as the name of a directory in which it creates temporary files. With this variable I can set my personal temp directory.

.bashrc
export TMPDIR="$HOME/.tmpbash"

6.TMOUT

Sets a timeout that affects the read and select builtin commands when no input is given. An interesting case is setting it in your interactive shell: after N seconds without any input it will kill your shell, so keep this in mind.

tmout-read.sh
#!/bin/bash

# Set a 3-second timeout for read
TMOUT="3"

printf 'Could you please give me an absolute path? '
read -r -s -n10 absolute_path

[ -n "$absolute_path" ] && printf 'Your path is: %s\n' "$absolute_path"

Well, the last posts were focused on the bash shell environment because I was digging into man bash and found quite a few interesting things I did not know. I hope some of them were useful for you as well.

Shell Environment Variables

In the previous post I wrote about different topics such as BASH_ENV, subshells and expressions. Today I’m going to talk about shell environment variables.

1. BASHPID

This is the PID of the current bash process. Its behaviour differs from $$ in cases such as a subshell, where $BASHPID gives the PID of the subshell, whereas $$ still shows the PID of the bash process holding the subshell.

bashpid.sh
$ echo $$ $BASHPID                           # 23353 23353
$ echo 'Subshell' $(echo $$; echo $BASHPID)  # Subshell 23353 21060

2. BASH_LINENO

The line number in the script at which the current function was called (it is actually an array, with one entry per function in the call stack).

bash_lineno.sh
#!/bin/bash

function show_env_vars()
{
  echo "$FUNCNAME"    # show_env_vars
  echo "$BASH_LINENO" # line from which show_env_vars was called (11)
  echo "$LINENO"      # current line (7)

}

show_env_vars

3. DIRSTACK

An array with the directories you have moved through using the pushd and popd builtins to add and remove entries.

dirstack.sh
# After doing some pushd 

$ for i in  ${DIRSTACK[@]}; do printf  'DIR: %s\n' $i; done                                                                                                                                                               

DIR: /tmp
DIR: /var/tmp/testing
DIR: /home/cartoon
DIR: /usr/share/doc
DIR: /var/tmp
DIR: /var/www

4. EUID and GROUPS

The effective user ID of the current user, initialized at shell startup; this variable is read-only. On the other hand, GROUPS is an array of the groups the current user is a member of.

euid_groups.sh
for ((i=0; i < ${#GROUPS[@]}; i++)); do printf 'gid: %d\n' "${GROUPS[$i]}"; done
gid: 1002
gid: 1001

5. Gathering OS info

Below are some of the shell environment variables you could use, for instance if you need to check the architecture. I have not tried them on SPARC yet, but I would like to.

HOSTNAME='Self explanatory'
HOSTTYPE='Architecture, e.g. i486'
OSTYPE='Operating system where bash is running, e.g. linux-gnu'
PPID='Parent process ID of the current shell'

6. Random numbers

# RANDOM expands to a random number between 0 and 32767.
# Getting a random number between 0 and 99:

echo "RANDOM: $(( RANDOM % 100 ))"

In sum, there is a wide range of shell environment variables to keep in mind when you are scripting. Next time I will post more shell variables, but focused on customizing your bash shell.

Nifty Bash Scripting

One of the things I should do more often is to read the man pages. Recently I have spent some time digging into the bash man pages, and I would like to share some interesting stuff.

1.BASH_ENV

When you run a script, bash looks for this variable and, if it is set, expands its value to a filename and reads that file before executing the script. The value should contain an absolute pathname; otherwise bash will not be able to locate the file.

.bashrc
export BASH_ENV="$HOME/.customs"

Take a look at the above example. From now on, and because I set BASH_ENV in my .bashrc, my scripts will have the set of functions (or whatever else) I defined in that file. It is the same as doing source $HOME/.customs in every script. This is really good: you have a set of custom functions or variables available to all your scripts.

A good idea would be to set this variable in either /etc/profile or /etc/bash.bashrc if you want to share it with the rest of the users on the system.
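As an illustration (the file name is the one used above and the function is just an example of mine), ~/.customs could hold a small logging helper that every script then gets for free:

# ~/.customs -- shared helpers picked up through BASH_ENV
log() { printf '%s %s\n' "$(date '+%F %T')" "$*"; }

# any script run afterwards can call log without sourcing anything
log "starting backup"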

2.Run in a subshell ( ) vs the current shell { }

This is something you might not pay attention to. When you want the result of a group of shell commands to stay in a variable, try first to use { }. Let’s see an example:

colors.sh
# Current shell
declare -A colors=();

line=''
{ for key in ${!colors[@]}; do line+="$key" ; done; }

# Subshell
line=$(for key in ${!colors[@]}; do line+="$key" ; done)

Perhaps you have already noticed the problem that crops up here.

Current shell {}

-Variables are available while the script is running. In the previous example I can access both variables, colors and line. Commands are separated by ‘;’. It’s also quite handy when using conditionals.

 if_true  &&  { cmd1; cmd2; cmd3; }

Subshell $()

-Variable assignments do not remain. Any change to a variable inside the subshell does not take effect after the command execution ends. However, it is possible to get the output by means of either the echo or printf commands. Besides variable scope, performance can be worse due to the extra subshell execution.
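A tiny illustration of the scope difference (my own example, not from the man page):

x=1
( x=2 )                 # subshell: the assignment does not survive
echo "$x"               # prints 1

{ x=2; }                # current shell: the assignment persists
echo "$x"               # prints 2

y=$( echo captured )    # a subshell's output can still be captured
echo "$y"               # prints captured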

3.Using [[ ]] expressions

You might know the old form [ ], but this is the new one, and it has some pretty cool properties such as pattern matching. I guess the best way to get an idea is to look at an example.

bash_regex.sh
ip_address="10.20.30.40"
[[ $ip_address =~ ^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})$ ]] && echo "Valid IP address"

# Iterating matches
for n in ${BASH_REMATCH[@]}; do echo $n; done
10.20.30.40
10
20
30
40

Of course the above example is only for illustration; it does not really validate an IP address, but it does the trick. Most interesting is the BASH_REMATCH variable, an array holding each substring matched by the parenthesized subexpressions. For more on regular expressions, look at the man pages regex(3) and regex(7).

4. Case and ;& operator

This operator makes execution fall through to the next option, if available. I came up with an example so you can see how it works:

pickcolor.sh
options="ac:lh"
while getopts $options  flag
do
        case $flag in
                a)
                        list_all="on"
                        ;&
                c)

                        c=${OPTARG,,}
                        [ $list_all == "on" ]  && shift
                        [ $list_all == "off" ] && shift && shift

                        args="$*"
                        msg="${args:-$default}"


                        if [ $list_all == "on" ];then
                                for color in ${!colors[@]}
                                do
                                        draw_with_color "$color" "$msg"
                                done
                        else
                                check_color "$c" && draw_with_color "$c" "$msg"
                        fi

                           ;;
                l|-list)
                        printf 'Available colors: \n'
                        _show_colors && exit
                        ;;
                h|-help)
                        usage && exit
                        ;;
                *)
                        usage && exit
                        ;;
        esac
done

The previous chunk of code is part of the pickcolor script. The main idea was to jazz some text up with a chosen color; then I thought it would be more practical to add the option of painting the same text with all the available colors.


pickcolor.sh
usage: pickcolor.sh [OPTIONS] message
Options:
       -a --all    Test all colors for message
       -c --color  Set font color.
       -l --list   List available colors.
       -h --help   Help

# Colors up using all the colors
$ pickcolor -a  Show me the colors

# Color up using just one color
$ pickcolor -c red Paint my room !!

The special operator ;& was really useful and did the trick. In the next part I will write about shell variables and some builtin commands to keep in mind.