Archive for the 'Linux' Category


FreeIPA and automount NIS maps

I have recently been playing with FreeIPA, Red Hat's identity management solution built on top of the 389 Directory Server (389 DS). One of the main reasons I decided on FreeIPA (apart from integrated Kerberos for single sign-on and possible integration with Microsoft Active Directory in the future) is its integrated NIS server: a proxy that receives requests from NIS clients, fetches the data from the LDAP server and sends it back to the clients. Now, in order to understand why I need to support both LDAP and NIS, you need to know a few things about the environment I'm in charge of.

I work for a software development company. We produce billing software for telecommunication operators, mainly mobile operators. That means that when you make a call, your call needs to be tracked, recorded and, in the end, properly billed, all done by our software (called BSCS, by the way). Sounds simple enough. Multiply that by one hundred million customers making calls and it's not so simple anymore. :) Anyway, our customers run our software on different platforms: most of them use HP-UX, some are on Solaris, some on AIX, some on Linux, and we even have some customers on Tru64. In order to support all those customers we need to have all those systems as well, so in the end we end up with 100+ servers covering all types of UNIX systems. That's not a big problem, it's even interesting, but a problem comes up when those systems are not being upgraded. We have Solaris 2.6 servers and Tru64 4.0D servers, and until recently we even had AIX 4.3 and HP-UX 10.30 servers. All of the mentioned systems are 13 years old! Scary!

As you can imagine, those outdated systems do not support many things we take for granted today; shadow passwords and LDAP authentication are just two of them. And this gets us back to the main topic of this post. FreeIPA (or rather 389 DS) provides an integrated NIS server for unlucky people like myself via the SLAPI-NIS plugin. All you have to do to use it is enable the compat and NIS plugins.

# ipa-compat-manage enable
Directory Manager password:
Enabling plugin
This setting will not take effect until you restart Directory Server.
# ipa-nis-manage enable
Directory Manager password:
Enabling plugin
This setting will not take effect until you restart Directory Server.
The rpcbind service may need to be started.

And after a directory server restart you have a working NIS server.

# rpcinfo -p
program vers proto   port  service
100000     4   tcp    111  portmapper
100000     3   tcp    111  portmapper
100000     2   tcp    111  portmapper
100000     4   udp    111  portmapper
100000     3   udp    111  portmapper
100000     2   udp    111  portmapper
100024     1   udp  49833  status
100024     1   tcp  36837  status
100004     2   udp    699  ypserv
100004     2   tcp    699  ypserv

By default only the passwd, group and netgroup maps are supported, but other maps can easily be added. In our environment we rely heavily on automounter maps, so I had to find a way to add them to the FreeIPA NIS server. Luckily, like everything else with FreeIPA, this is very simple. But first let me show you how to add automount entries in FreeIPA; it is surprisingly easy.

When it comes to the automounter, FreeIPA has support for different locations. So, for example, you can have different maps for your production, test and DMZ environments. Pretty neat. In my example, I will create a new location for our DMZ environment.

# ipa automountlocation-add dmz
  Location: dmz

A new location is automatically created together with its default maps.

# ipa automountmap-find dmz
  Map: auto.master
  Number of entries returned 2

I would like to add a new map for user home folders.

# ipa automountmap-add dmz auto.home
  Map: auto.home

Then we need to add an entry to the auto.master map to associate the /home mount point with the auto.home map.

# ipa automountkey-add dmz auto.master /home --info=auto.home
  Key: /home
  Mount information: auto.home

Finally, we add an entry to the auto.home map specifying which share to mount for the user miljan.

# ipa automountkey-add dmz auto.home miljan --info=filer01:/vol/users/home/miljan
  Key: miljan
  Mount information: filer01:/vol/users/home/miljan

And voilà, when user miljan logs on, his home folder will be mounted automatically.

The final step is to expose this through NIS as well. For this we need to manually add a few entries to the LDAP server. The example below adds support for the auto.master map. There are a few things you will probably need to change, though. First, the domain name in the DN and nis-domain lines; in the example I am using example.com as the domain. Second, the nis-base line; the value of this attribute needs to be the DN of your automount map.

# ldapadd -x -D "cn=Directory Manager" -W
dn: nis-domain=example.com+nis-map=auto.master,cn=NIS Server,cn=plugins,cn=config
objectClass: extensibleObject
nis-domain: example.com
nis-map: auto.master
nis-base: automountmapname=auto.master,cn=dmz,cn=automount,dc=example,dc=com
nis-filter: (objectclass=*)
nis-key-format: %{automountKey}
nis-value-format: %{automountInformation}

Repeat the same for the auto.home map and you are good to go.
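For completeness, the corresponding entry for the auto.home map would look something like this; a sketch assuming the same dmz location and example.com domain as above, so adjust both to your environment:

```
dn: nis-domain=example.com+nis-map=auto.home,cn=NIS Server,cn=plugins,cn=config
objectClass: extensibleObject
nis-domain: example.com
nis-map: auto.home
nis-base: automountmapname=auto.home,cn=dmz,cn=automount,dc=example,dc=com
nis-filter: (objectclass=*)
nis-key-format: %{automountKey}
nis-value-format: %{automountInformation}
```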

$ ypcat -d -h -k auto.master
/home auto.home
$ ypcat -d -h -k auto.home
miljan filer01:/vol/users/home/miljan

Nice and easy. :)


Linux LVM: logical volume migration

I was recently confronted with the task of migrating a logical volume holding MySQL databases to a separate physical volume. Due to an important application running on top of MySQL, any downtime was out of the question.

The easiest way to perform this operation is to use the pvmove command.

[root@r2d2 ~]# pvmove -n test /dev/sda5 /dev/sdb1
/dev/sda5: Moved: 6.2%
/dev/sda5: Moved: 18.8%
/dev/sda5: Moved: 28.1%
/dev/sda5: Moved: 37.5%
/dev/sda5: Moved: 46.9%
/dev/sda5: Moved: 56.2%
/dev/sda5: Moved: 65.6%
/dev/sda5: Moved: 75.0%
/dev/sda5: Moved: 84.4%
/dev/sda5: Moved: 93.8%
/dev/sda5: Moved: 100.0%

In this example we moved the logical volume 'test' from physical volume /dev/sda5 to /dev/sdb1. While pvmove is running, we can get all the relevant information, including the progress, by issuing the lvs command.

[root@r2d2 ~]# lvs -a -o+devices
LV        VG  Attr   LSize  Move     Copy% Devices
home      sys -wi-ao 20.00G                /dev/sda5(544)
mysql     sys -wi-a- 1.00G                 /dev/sda5(1888)
[pvmove0] sys p-C-ao 1.00G /dev/sda5 56.25 /dev/sda5(1920),/dev/sdb1(0)
root      sys -wi-ao 7.00G                 /dev/sda5(256)
swap      sys -wi-ao 2.00G                 /dev/sda5(480)
test      sys -wI-ao 1.00G                 pvmove0(0)
usr       sys -wi-ao 8.00G                 /dev/sda5(0)
usr       sys -wi-ao 8.00G                 /dev/sda5(1856)
var       sys -wi-ao 2.00G                 /dev/sda5(224)
var       sys -wi-ao 2.00G                 /dev/sda5(1824)

Another, somewhat trickier, way of doing this is to use LV mirroring. I say trickier because it requires more work and the last part of the operation is not really intuitive, so it may be prone to errors.

We start with this:

[root@r2d2 ~]# pvs
PV        VG  Fmt  Attr PSize   PFree
/dev/sda5 sys lvm2 a-   220.75G 179.75G
/dev/sdb1 sys lvm2 a-   3.72G   3.72G
[root@r2d2 ~]# lvs -a -o+devices
LV   VG  Attr   LSize Devices
test sys -wi-a- 1.00G /dev/sda5(1920)

Two PVs: the source, sda5, and the unused target PV, sdb1. The logical volume test is the LV we want to move to sdb1.

In order to verify at the end that the content of the LV is consistent, we will create a sample file and compare its checksum before and after.

[root@r2d2 ~]# dd if=/dev/zero of=/mnt/random bs=512 count=20000
20000+0 records in
20000+0 records out
10240000 bytes (10 MB) copied, 0.122046 s, 83.9 MB/s
[root@r2d2 ~]# md5sum /mnt/random
596c35b949baf46b721744a13f76a258 /mnt/random

We start by adding another copy of the 'test' LV and placing it on the new PV.

[root@r2d2 ~]# lvconvert -m 1 sys/test /dev/sdb1
Not enough PVs with free space available for parallel allocation.
Consider --alloc anywhere if desperate.
Unable to allocate extents for mirror(s).

Hm, a confusing error. :) The reason for it is that the Linux LVM implementation requires a mirror to use a log volume, and this log volume needs to reside on a PV of its own. To get around this we can instruct LVM to keep the log in memory. This is not recommended when mirroring is needed in the long run, since the mirror then has to be rebuilt every time the LV is activated, but it is a perfect solution for our needs, as our mirror will not live for long.

[root@r2d2 ~]# lvconvert -m 1 sys/test /dev/sdb1 --mirrorlog core
sys/test: Converted: 6.2%
sys/test: Converted: 15.6%
sys/test: Converted: 28.1%
sys/test: Converted: 37.5%
sys/test: Converted: 46.9%
sys/test: Converted: 56.2%
sys/test: Converted: 65.6%
sys/test: Converted: 75.0%
sys/test: Converted: 84.4%
sys/test: Converted: 93.8%
sys/test: Converted: 100.0%
Logical volume test converted.

Again, we can get all the information with the lvs command.

[root@r2d2 ~]# lvs -a -o+devices
LV              VG  Attr   LSize Copy%  Devices
home            sys -wi-ao 20.00G       /dev/sda5(544)
mysql           sys -wi-a- 1.00G        /dev/sda5(1888)
root            sys -wi-ao 7.00G        /dev/sda5(256)
swap            sys -wi-ao 2.00G        /dev/sda5(480)
test            sys mwi-a- 1.00G 15.62  test_mimage_0(0),test_mimage_1(0)
[test_mimage_0] sys Iwi-ao 1.00G        /dev/sda5(1920)
[test_mimage_1] sys Iwi-ao 1.00G        /dev/sdb1(0)
usr             sys -wi-ao 8.00G        /dev/sda5(0)
usr             sys -wi-ao 8.00G        /dev/sda5(1856)
var             sys -wi-ao 2.00G        /dev/sda5(224)
var             sys -wi-ao 2.00G        /dev/sda5(1824)

So now we have our LV mirrored, with copies on both PVs. Just as a test, we can verify the content of the file.

[root@r2d2 ~]# md5sum /mnt/random
596c35b949baf46b721744a13f76a258 /mnt/random

Seems fine. :)

The last step, the one where we have to be more careful, is removing the mirror copy from the old PV. We have to be careful because the PV name given here is the name of the PV we want to remove from the mirror, which is the opposite of what we had when establishing the mirror. If we use the wrong PV name, we will end up back at the beginning: with the LV on the wrong PV and without mirroring.

[root@r2d2 ~]# lvconvert -m 0 sys/test /dev/sda5
Logical volume test converted.

Again we use lvs to see the status.

[root@r2d2 ~]# lvs -a -o+devices
LV   VG  Attr   LSize Devices
test sys -wi-a- 1.00G /dev/sdb1(0)

In the end, to verify that we have a clean situation, we can remount the LV and compare the content.

[root@r2d2 ~]# umount /mnt/
[root@r2d2 ~]# lvchange -an /dev/sys/test
[root@r2d2 ~]# lvchange -ay /dev/sys/test
[root@r2d2 ~]# mount /dev/sys/test /mnt
[root@r2d2 ~]# md5sum /mnt/random
596c35b949baf46b721744a13f76a258 /mnt/random

Everything fine. :)

One small note for the end: if you try to create a mirror on a logical volume that is not yet activated, the sync of the mirror will start only after activation.

[root@r2d2 ~]# lvconvert -m 1 sys/test /dev/sdb1 --mirrorlog core
Conversion starts after activation


PS1 for your Shell?

A few years ago I went on a quest to find the perfect shell prompt. I asked the mighty Internets for ideas, but it seemed futile. I tried many things, simple prompts, complex prompts, but nothing could satisfy my requirements (I don't even remember what my requirements were back then). So I picked the best of both worlds and got this little monster.

:) oscar:~#

Happy face! And in case of an error, it looks sad.

:( 2 oscar:~#

Cute, eh? It even prints the exit code. Useful and cute at the same time! And here is the definition of the prompt. As you can see, it uses a simple function to determine the return code of the executed command and adjusts its feelings accordingly.

smiley() {
    RC=$?
    [[ ${RC} == 0 ]] && echo ':)' || echo ":( ${RC}"
}
export PS1="\$(smiley) \u@\h:\w\\$ "
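Here is a standalone sketch of the same logic outside of PS1 (with a POSIX test instead of [[ ]] so it also runs in plain sh); it shows how $? is picked up inside the command substitution:

```shell
# Sketch of the smiley logic outside of PS1; RC is captured from $?,
# the exit code of the previously executed command
smiley() {
    RC=$?
    [ "${RC}" -eq 0 ] && echo ':)' || echo ":( ${RC}"
}
true            # a command that succeeds
happy=$(smiley)
false           # a command that fails with exit code 1
sad=$(smiley)
echo "$happy"   # :)
echo "$sad"     # :( 1
```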

I think I got the idea for the smiley thing somewhere, but unfortunately I don't remember where from anymore.
For more adventurous people, maybe this prompt would be more interesting.

export PS1="\[\033[0;36m\]\033(0l\033(B\[\033[0m\][\[\033[1;31m\]\u\[\033[0m\]]\[\033[0;36m\]\033(0q\033(B\[\033[0m\][\[\033[1;33m\]@\h\[\033[0m\]]\[\033[0;36m\]\033(0q\033(B\[\033[0m\][\[\033[0;37m\]\T\[\033[0m\]]\[\033[0;36m\]\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\[\033[0m\][\[\033[1;33m\]\w\[\033[0m\]]\n\[\033[0;36m\]\033(0m\033(B\[\033[0m\]\$ "

Magic! :)

What is your favorite prompt? Please leave boring \u@\h:\w for dinner with your parents. :P

Load Average > 680

I just found this old screenshot from one of my previous jobs. It was taken on December 9th, 2003, when one of the web hosting servers went woowoo due to a badly optimized web site. The load average went sky high, to 682! Has anyone else had such a high load before, or am I the absolute champion? :)

PS: I was using Fluxbox back then! Wow, crazy youth! :)


LVM Snapshots and XFS

A small note for everyone planning to use Linux LVM snapshots and XFS at the same time. Every XFS filesystem has a UUID, a unique identifier of the filesystem, and two filesystems with the same UUID cannot be mounted on the same server. Now, if we know that a snapshot of a logical volume represents a point-in-time copy of the original logical volume, it doesn't take long to realize that the filesystem on the snapshot is also a copy, and thus has the same UUID as the filesystem on the original logical volume. So here is what happens when you try to mount the snapshot:

[root@server ~]# df -hT /var/lib/mysql_backup/
Filesystem    Type    Size  Used Avail Use% Mounted on
              xfs    25G   19G  6.3G  76% /var/lib/mysql_backup
[root@server ~]# lvcreate -s -n bkp-snap -L1G /dev/sys/mysql_backup
  Logical volume "bkp-snap" created
[root@server ~]# mount /dev/sys/bkp-snap /mnt/misc/
mount: wrong fs type, bad option, bad superblock on /dev/sys/bkp-snap,
       missing codepage or other
       In some cases useful info is found in syslog - try
       dmesg | tail or so
[root@server ~]# dmesg | tail -1
XFS: Filesystem dm-9 has duplicate UUID - can't mount

That doesn't look very good. There are two solutions to this problem. One is to use the nouuid option of the mount command.

[root@server ~]# mount -o nouuid /dev/sys/bkp-snap /mnt/misc/
[root@server ~]# dmesg | tail -3
XFS mounting filesystem dm-9
Starting XFS recovery on filesystem: dm-9 (logdev: internal)
Ending XFS recovery on filesystem: dm-9 (logdev: internal)

The other option is to change the UUID of the filesystem on the snapshot using the xfs_admin command.

[root@server ~]# xfs_admin -U generate /dev/sys/bkp-snap
Clearing log and setting UUID
writing all SBs
new UUID = 1bdcf6e1-62fb-47f2-83e4-dc398bb7a1cd
[root@server ~]# dmesg | tail -2
XFS mounting filesystem dm-9
Ending clean XFS mount for filesystem: dm-9

I am in favour of the first option (mount -o nouuid) since it does not perform any modification of the filesystem. It just feels safer, that's all... :)

More information on Linux memory management

I have just noticed that I forgot to mention one very important thing in my previous post.

The file /proc/meminfo contains a very useful field named Committed_AS. This field indicates the TOTAL amount of committed memory. If all applications were to actually use all the memory allocated to them, your server would need this amount of memory.

If we look at the example from my previous post, we find the following values:

loreto:/tmp # cat /proc/meminfo
MemTotal:     33274944 kB

Committed_AS: 49751960 kB

So my server has 32GB of RAM, but the total amount of memory allocated is 48GB. That is 150%! If all this memory were required at once, the server would crash pretty badly (or the OOM killer would start butchering my Oracle databases to get some memory back!). :-)
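A quick sketch of the arithmetic behind that 150%, using the two values quoted above:

```shell
# Overcommit percentage from the MemTotal and Committed_AS values
# quoted above (both in kB)
awk -v total=33274944 -v committed=49751960 \
    'BEGIN { printf "%.0f%%\n", committed / total * 100 }'
```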


Linux memory management

Yesterday I had a request for a memory usage report on the Oracle servers in my company. As we are using Centreon, a Nagios frontend which makes good use of the performance data reported by Nagios plugins and draws nice graphs from it, it was just a matter of pasting the images into a mail and sending it. But then an interesting question was raised: how come a server with 32GB of RAM and 30+ databases running reports only 5GB of RAM as used? Strange indeed.

I quickly logged in to the server and checked memory usage:

loreto:/tmp # free
             total       used       free    shared   buffers     cached
Mem:      33274944   32931032     343912         0        20   27013200
-/+ buffers/cache:    5917812   27357132
Swap:     16779884    5603256   11176628

Really only 5GB; the check_memory plugin was not wrong. The next thing I checked was the shared memory segments. Oracle uses shared memory in huge quantities, so this is also a very important parameter.

loreto:/tmp # a=0; for i in $(ipcs -m|grep ^0x|awk '{print $5}'); do let a+=$i; done; echo $a
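The one-liner simply sums the segment-size column of the ipcs -m output. The same mechanics, fed with two made-up 10 GB segments instead of live ipcs output:

```shell
# Sum the size column of `ipcs -m`-style lines; the two 10 GB segments
# below are fabricated sample data, not real ipcs output
a=0
while read -r key shmid owner perms bytes; do
    a=$((a + bytes))
done <<'EOF'
0x0000162e 32768 oracle 640 10737418240
0x0000162f 65536 oracle 640 10737418240
EOF
echo $a   # 21474836480 (20 GB)
```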

Ugh, 20GB allocated for shared memory, while the system reports only 5GB used. Something is very wrong here. Confused, I took out the heavy artillery.

loreto:/tmp # cat /proc/meminfo
MemTotal:     33274944 kB
MemFree:        198580 kB
Buffers:            20 kB
Cached:       27439580 kB
SwapCached:     223880 kB
Active:       16333936 kB
Inactive:     15428724 kB
HighTotal:    32634140 kB
HighFree:        33516 kB
LowTotal:       640804 kB
LowFree:        165064 kB
SwapTotal:    16779884 kB
SwapFree:     10856700 kB
Dirty:            1668 kB
Writeback:           0 kB
AnonPages:     4089416 kB
Mapped:       10222968 kB
Slab:           427584 kB
CommitLimit:  33417356 kB
Committed_AS: 49751960 kB
PageTables:     826016 kB
VmallocTotal:   112632 kB
VmallocUsed:     22228 kB
VmallocChunk:    90180 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB

Usually, used memory on Linux is calculated as (Total Memory - (Unused Memory + Buffers + Page Cache)). Why are buffers and caches not counted as used memory? Simply because they contain data that is not really critical for the operating system and the running applications: data that can be flushed and dropped from memory at any time.

So in my case that was:

33274944 - (198580 + 20 + 27439580) = 5636764
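The same calculation can be scripted; a sketch with awk, fed with the values from the dump above (on a live system you would point it at /proc/meminfo directly):

```shell
# Used memory the way free(1) computes it, using the values from the
# /proc/meminfo dump above (all in kB)
used=$(awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2}
            /^Buffers:/  {b=$2} /^Cached:/  {c=$2}
            END { print t - (f + b + c) }' <<'EOF'
MemTotal:     33274944 kB
MemFree:        198580 kB
Buffers:            20 kB
Cached:       27439580 kB
EOF
)
echo $used   # 5636764
```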

OK, this matches the output of the free command. But what about those 20GB of allocated shared memory?

I spent the next few hours searching through and reading Linux documentation on memory management, and found a few interesting things.

Linux uses the principle of memory overcommitment. Basically, what this means is that when an application requests memory, the kernel will always "give" it the memory, hoping that the application will not really use it, or at least not the whole allocated size. Only when the application actually writes data into the memory will the kernel mark it as used. This can lead to a situation where the amount of allocated memory is higher than the amount of physical memory in the machine. But as long as there is no demand for the allocated memory, the system runs without problems.

And this is the core of my dilemma: the shared memory is allocated, but since there is no data in it yet, it is not counted as used memory.

Memory overcommit can be configured via two parameters:

loreto:/tmp # sysctl -a|grep overcommit
vm.overcommit_ratio = 50
vm.overcommit_memory = 0

From Red Hat manual:

  • overcommit_memory — Configures the conditions under which a large memory request is accepted or denied. The following three modes are available:
    • 0 — The kernel performs heuristic memory over commit handling by estimating the amount of memory available and failing requests that are blatantly invalid. Unfortunately, since memory is allocated using a heuristic rather than a precise algorithm, this setting can sometimes allow available memory on the system to be overloaded. This is the default setting.
    • 1 — The kernel performs no memory over commit handling. Under this setting, the potential for memory overload is increased, but so is performance for memory intensive tasks (such as those executed by some scientific software).
    • 2 — The kernel fails requests for memory that add up to all of swap plus the percent of physical RAM specified in /proc/sys/vm/overcommit_ratio. This setting is best for those who desire less risk of memory overcommitment.
      Note This setting is only recommended for systems with swap areas larger than physical memory.
  • overcommit_ratio — Specifies the percentage of physical RAM considered when /proc/sys/vm/overcommit_memory is set to 2. The default value is 50.

Pidgin status v2

As requested by bleketux, I made some modifications to the script.

The main news is that it is now possible to change the Pidgin status message periodically. The script goes into the background (it is a real daemon now :P), changes the status, waits for the set time interval, changes the message again, and so on: wait-change-wait-change.

To change status message every 5 minutes with a random line from file /home/miljan/quotes/dusko_radovic.txt: -d -t 5 -f /home/miljan/quote/dusko_radovic.txt

And in Pidgin you would get something like this every five minutes:

To show the song you are listening to as the status message: -s "Mukeka di Rato – Kustapassaaessedrmobral"

And in Pidgin it would look like:

You can see all possible options by running script with -h argument for help.

bleketux, I hope you are still around to enjoy this. ;)



Why I love strace

strace is a tool that should be in the toolbox of every system administrator. Not only can it help in troubleshooting simple problems (e.g. missing libraries in a newly created chroot, which ldd mysteriously fails to report), it also helps in debugging very complex system problems and performance issues.

Recently I experienced a very strange problem on one of our RHEL 3 servers. The problem manifested itself in strange ways: SSH and su logins hung, other daemons also hung during startup, the only way to reboot or shut down the server was to physically press the restart/power-off button, and so on. All this could have been caused by problems at either the software or the hardware level. The first suspect was a bad RAID controller, but tests proved this to be a dead end. After more tests and brainstorming, hardware problems were definitely excluded, so the problem had to be on the software side. But what could it be?

After a few more misleading steps I traced the system calls made by the su command and found very interesting results.

$ strace -f -s 1024 -o /tmp/su.strace.out su -
[-- cut --]
3138 open("/dev/audit", O_RDWR) = 3
3138 fcntl64(3, F_GETFD) = 0
3138 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
3138 ioctl(3, 0x801c406f

And this is where the strace output ends and the su command hangs. The audit device file is opened (file descriptor 3), and as soon as the first request is dispatched to this device (the ioctl system call on file descriptor 3) the command freezes. Based on this, disabling audit on the server should make the problem go away.
As a test, the audit daemon was temporarily stopped, I tried to switch to another user, and the problem was indeed gone.

After searching for similar problems with the audit daemon, I found an article in the Red Hat knowledge base describing exactly the same issue. From the article:

When the free space in the filesystem holding the audit logs is less than 20%, the above notify command will error out and auditd will enter suspend mode. This causes all system calls to block.

So this behaviour is not a bug but an actual feature of the software. :o) From a security point of view it is even expected: an attacker could fill up the filesystem where the audit logs are stored before the attack, and audit would be disabled, meaning no logs of his activity; so better not to allow ANY activity on the system if audit is not able to write its logs. But still, this kind of behaviour renders the system completely useless to legitimate users.
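A crude sketch of how you could watch for that threshold yourself (the / filesystem is used here only for illustration; point df at whichever filesystem actually holds your audit logs):

```shell
# Warn when free space drops below the 20% threshold described above
df -P / | awk 'NR == 2 {
    sub("%", "", $5)                  # $5 is the Use% column, e.g. "76%"
    if (100 - $5 < 20)
        print "warning: less than 20% free, audit would suspend"
    else
        print "ok: more than 20% free"
}'
```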

The topic of this post is not audit, so I will stop here. The important thing is that strace led us directly to the source of the problem. Resolving issues like this would be much more complex and time consuming without this great little tool. :)


Setting Pidgin Status with Python or How to Waste Perfectly Good Saturday

I was very bored today. Tired from working on Ratuus (don't go there, the site is under heavy construction :)) I needed something to take my mind off everything. And what better way to do it than playing with Python, Pidgin and D-Bus. :D

To cut a long story short, I needed something that would update my Pidgin status message with information about the song I am currently listening to. Until recently I was using the Rhythmbox player, and there is a perfect little Pidgin plugin called Current Track that worked with it. Last week I discovered gmusicbrowser and fell in love immediately. It is fast, rich in functionality, yet still simple to use; exactly what I want from an audio player. (Hm, I just noticed it is written in Perl. Now that Python is used for everything, this comes as a big surprise.)

gmusicbrowser already has a plugin called NowPlaying, which triggers a command whenever the song changes. I just needed to write the command that informs Pidgin about the change. So, this seemed like a perfect exercise for a slow Saturday. :)

A quick search on Pidgin and D-Bus turned up extensive documentation about the Pidgin API accessible through D-Bus. There is even a working example of how to change the status message! :)

But that was too simple, so I got another idea. Some time ago I wrote a small daemon in C that binds to a specific port and displays random bofh-excuses fortune messages when someone telnets to it. (Seems like I have a lot of spare time. I should really find a hobby!) Something similar to telnet 666. So I was thinking about implementing the same for my Pidgin status. Random BOFH excuses in your status message! How geeky is that!

The result of all this is a short (~60 lines of code) Python script that will set your Pidgin status message to:

a) your current song: -m The Real McKenzies – Outta Scotch

b) a random line from a file: -f /usr/local/share/bofh-example

c) anything you give as the command line argument: Some very interesting and funny status message

The only difference between a) and c) is the icon that will be shown. In example a) there will be a small musical note, while in examples b) and c) a nice arrow pointing to the right will be shown.

In the middle of testing I noticed this strange status message:

Being from Serbia myself, I find this extremely funny. Although I didn't know Serbian hackers were so notorious! :)

I hope someone will find it useful. In any case, I am accepting donations for a long and adventurous vacation. As you can see, I really need it! :D