Author Archive for miljan

25
Jan

Solaris MPxIO: LUN distribution

Lately my work tasks have been leading me to learn a lot about Solaris storage configuration. I have a “test” environment with 13 HP ProLiant servers running Solaris x86 and an HP EVA8400 storage array.

The EVA8400 is advertised as active-active (AA) storage – a LUN can be accessed via both controllers. But if you read the fine print you will see that it is actually asymmetric active-active (AAA) storage, a cheaper class of AA arrays. While every LUN can still be accessed through both controllers, each LUN has an owning controller. If an I/O request arrives at the non-owning controller it is first transferred to the owner and only then performed, which of course has an impact on performance. To avoid this the EVA monitors the requests, and if more than 60% of them come through the non-owning controller, ownership is switched.

The preferred method of access for AAA storage is asymmetric logical unit access (ALUA) – a method that lets the target (the storage system) assign different access characteristics to different paths based on the owning controller (this is known as target port group support, TPGS). For example, if Controller_A owns LUN_1, all paths to LUN_1 through Controller_A will be marked as Active/Optimized, while all paths through Controller_B will be marked as Active/NonOptimized.

Knowing all this, it is obvious that the ideal setup for AAA storage is to have one half of the LUNs owned by one controller and the other half by the second one. That lets us balance the load across both controllers and get the maximum performance out of the array.

Now, on the client side some systems have the Veritas suite installed, but a couple of servers use Solaris native MPxIO as the multipathing solution. By specification MPxIO is ALUA aware, and it also comes with a nice feature that balances traffic across all Active paths. The EVA8400 controllers have 4 x 4Gbps Fibre Channel ports each, while my servers have 2 x 8Gbps HBAs, so spreading the traffic across all four target ports would help me get the performance I need.
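For reference, that load balancing behaviour is MPxIO's default and is controlled in /kernel/drv/scsi_vhci.conf; on a stock Solaris 10 install the relevant lines look roughly like this (shown only as a pointer – defaults may differ between releases):

load-balance="round-robin";
auto-failback="enable";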

Everything sounds perfect in theory, but I am having trouble making it work in practice. For some reason MPxIO sends all the I/O through Controller A and, since path priority cannot be set manually with MPxIO, after a while all LUNs get moved to that controller, leaving the second one idle. Since TPGS implementations are vendor specific, there seems to be some incompatibility between the HP and Sun versions. Or maybe there are some hidden, undocumented options that have to be set – so far I have had no luck finding them. :-/

During the tests, and in order to see how the LUNs were spread, I wrote a script that counts Active paths per target port. Maybe someone else will find it useful – the download can be found here. And here is how it looks in action:

root@xdb1-ora:~# get_san_paths.py
Initiator ports found:    2
Target ports found:       8
LUNs found:              87
 
Path \ LUNs:
  50001fe150229f78: 77 LUNs
  50001fe150229f79: 77 LUNs
  50001fe150229f7c: 10 LUNs
  50001fe150229f7d: 10 LUNs
  50001fe150229f7a: 77 LUNs
  50001fe150229f7b: 77 LUNs
  50001fe150229f7e: 10 LUNs
  50001fe150229f7f: 10 LUNs
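The idea behind the script is simple: iterate over the multipathed LUNs that mpathadm reports and count, per target port WWN, the paths whose target port group is in an active state. A rough sketch of that logic (the mpathadm field names below are assumptions based on typical Solaris 10 output; the real script parses more carefully):

#!/usr/bin/env python
# Rough sketch of the idea behind get_san_paths.py: for every multipathed LUN,
# count the target ports that currently offer an active path to it.
import subprocess
from collections import defaultdict

def run(args):
    return subprocess.Popen(args, stdout=subprocess.PIPE).communicate()[0].decode()

# Every /dev/rdsk line in 'mpathadm list lu' is a multipathed logical unit.
luns = [line.strip() for line in run(['mpathadm', 'list', 'lu']).splitlines()
        if line.strip().startswith('/dev/rdsk')]

port_luns = defaultdict(int)
for lun in luns:
    state = ''
    for line in run(['mpathadm', 'show', 'lu', lun]).splitlines():
        line = line.strip()
        if line.startswith('Access State:'):             # per target port group
            state = line.split(':', 1)[1].strip()
        elif line.startswith('Name:') and state.startswith('active'):
            port_luns[line.split(':', 1)[1].strip()] += 1  # target port WWN

print('LUNs found: %d' % len(luns))
for port in sorted(port_luns):
    print('  %s: %d LUNs' % (port, port_luns[port]))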

18
Oct

FreeIPA and automount NIS maps

I have recently started playing with FreeIPA, Red Hat's identity management solution built on top of Red Hat's DS389 directory server. One of the main reasons I decided on FreeIPA (apart from the integrated Kerberos for single sign-on and possible integration with Microsoft Active Directory in the future) is the integrated NIS server – a proxy that receives requests from NIS clients, fetches the data from the LDAP server and sends it back to the clients. Now, in order to understand why I need to support both LDAP and NIS, you need to know a few things about the environment I am in charge of.

I work for a software development company. We produce billing software for telecommunication operators – mainly mobile operators. That means that when you make a call, it needs to be tracked, recorded and properly billed at the end, all done by our software (called BSCS, by the way). Sounds simple enough. Multiply that by one hundred million customers making calls and it's not so simple anymore. :) Anyway, our customers run our software on different platforms: most of them use HP-UX, some are on Solaris, some on AIX, some on Linux, and we even have a few customers on Tru64. In order to support all those customers we need to have all of those systems as well, so in the end we run 100+ servers covering every flavour of UNIX. That's not a big problem, it's even interesting, but the problem comes when those systems are not upgraded. We have Solaris 2.6 servers and Tru64 4.0D servers, and until recently we even had AIX 4.3 and HP-UX 10.30 servers. All of the mentioned systems are 13 years old! Scary!

As you can imagine, those outdated systems do not support many of the things we take for granted today; shadow passwords and LDAP authentication are just a few of them. And this gets us back to the main topic of this post. FreeIPA (or rather DS389) provides an integrated NIS server for unlucky people like me via the SLAPI-NIS plugin. All you have to do in order to use it is to enable the compat and NIS plugins.

# ipa-compat-manage enable
Directory Manager password:
 
Enabling plugin
This setting will not take effect until you restart Directory Server.
 
# ipa-nis-manage enable
Directory Manager password:
 
Enabling plugin
This setting will not take effect until you restart Directory Server.
The rpcbind service may need to be started.

And after a directory server restart you have a working NIS server.

# rpcinfo -p
program vers proto   port  service
100000     4   tcp    111  portmapper
100000     3   tcp    111  portmapper
100000     2   tcp    111  portmapper
100000     4   udp    111  portmapper
100000     3   udp    111  portmapper
100000     2   udp    111  portmapper
100024     1   udp  49833  status
100024     1   tcp  36837  status
100004     2   udp    699  ypserv
100004     2   tcp    699  ypserv

By default only the passwd, group and netgroup maps are served, but other maps can easily be added. In our environment we rely heavily on automounter maps, so I had to find a way to add them to the FreeIPA NIS server. Luckily, like everything else with FreeIPA, this is very simple. First let me show you how to add automount entries in FreeIPA; it is surprisingly easy.

When it comes to the automounter, FreeIPA supports different locations, so you can, for example, have different maps for your production, test and DMZ environments. Pretty neat. In my example I will create a new location for our DMZ environment.

# ipa automountlocation-add dmz
  Location: dmz

The new location is automatically created with auto.master and auto.direct maps.

# ipa automountmap-find dmz
  Map: auto.master
 
  Map: auto.direct
  ----------------------------
  Number of entries returned 2
  ----------------------------

I would like to add a new map for user home folders.

# ipa automountmap-add dmz auto.home
  Map: auto.home

Then we need to add an entry to the auto.master map to associate the /home mount point with the auto.home map.

# ipa automountkey-add dmz auto.master /home --info=auto.home
  Key: /home
  Mount information: auto.home

Finally, we add an entry to the auto.home map specifying which share to mount for user miljan.

# ipa automountkey-add dmz auto.home miljan --info=filer01:/vol/users/home/miljan
  Key: miljan
  Mount information: filer01:/vol/users/home/miljan

And voilà, when user miljan logs on, his home folder will be mounted.

The final step is to have this in NIS as well. For that we need to manually add a few entries to the LDAP server. In the example below we add support for the auto.master map. There are a few things you will probably need to change, though. First, the domain name in the DN and nis-domain lines – in the example I am using example.com as the domain. Second, the nis-base line – the value of this attribute needs to be the DN of your automount map.

# ldapadd -x -D "cn=Directory Manager" -W
dn: nis-domain=example.com+nis-map=auto.master,cn=NIS Server,cn=plugins,cn=config
objectClass: extensibleObject
nis-domain: example.com
nis-map: auto.master
nis-base: automountmapname=auto.master,cn=dmz,cn=automount,dc=example,dc=com
nis-filter: (objectclass=*)
nis-key-format: %{automountKey}
nis-value-format: %{automountInformation}

Repeat the same for the auto.home map and you are set to go.
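Under the same assumptions as above (the dmz location and the example.com domain), the matching entry for auto.home would look like this:

# ldapadd -x -D "cn=Directory Manager" -W
dn: nis-domain=example.com+nis-map=auto.home,cn=NIS Server,cn=plugins,cn=config
objectClass: extensibleObject
nis-domain: example.com
nis-map: auto.home
nis-base: automountmapname=auto.home,cn=dmz,cn=automount,dc=example,dc=com
nis-filter: (objectclass=*)
nis-key-format: %{automountKey}
nis-value-format: %{automountInformation}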

$ ypcat -d example.com -h freeipa.example.com -k auto.master
/home auto.home
 
$ ypcat -d example.com -h freeipa.example.com -k auto.home
miljan filer01:/vol/users/home/miljan

Nice and easy. :)

06
Oct

SVC lsmigrate

A year or so ago I got annoyed by the ugly and unreadable output of the IBM SVC lsmigrate command, so I sat down and wrote a short script that provides much nicer and more informative output. The output includes information about the VDisk that is being migrated (name, ID and size), the destination MDisk group and the migration itself (number of threads and progress). If started in verbose mode, information about the source MDisk group is printed as well.

Non-verbose mode:

$ ./svc_lsmigrate.py -H svccluster
# (ID  ) Vdisk         Size      (ID ) Mdisk Group    Threads Progress
======================================================================
1 (67  ) esx_srvf05_d  1000.00GB (8  ) DS482_5r10_SK1 1       48 %
2 (157 ) esx_srvf06_g  1000.00GB (5  ) DS483_8r5_2SK3 1       96 %
3 (118 ) esx_srvf01_h  1022.00GB (1  ) DS482_8r5_SK3  1       63 %
4 (117 ) esx_srvf01_g     1.00TB (1  ) DS482_8r5_SK3  1       63 %
5 (120 ) esx_srvf01_i  1023.00GB (1  ) DS482_8r5_SK3  1       63 %
6 (39  ) tsm_disk4        5.00GB (10 ) DS484_9r5_1SK2 1       98 %
7 (19  ) oracode_tunis  800.00GB (5  ) DS483_8r5_2SK3 1       59 %

If you find the script useful, you can download it here.

Note: in order for the script to work, you need to have the SVC connection parameters set in your SSH config file. For example:

$ grep -p svccluster ~/.ssh/config
Host svccluster
  Hostname 192.168.1.34
  User admin
  IdentityFile ~/.ssh/admin.key

Note #2: the script was tested on SVC software levels 4.3 and 5.1.
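For anyone curious how it works under the hood: the script does little more than run the SVC CLI over that SSH host entry and reshape the output. A minimal sketch of that part (the pretty-printing and the VDisk lookups of the real script are left out):

#!/usr/bin/env python
# Minimal sketch: run SVC CLI commands over the SSH host alias from ~/.ssh/config.
# 'svccluster' is the Host entry shown in the note above; the real script also
# parses the output and looks up VDisk details (name, size) via lsvdisk.
import subprocess

def svcinfo(host, command):
    return subprocess.Popen(['ssh', host, 'svcinfo', command],
                            stdout=subprocess.PIPE).communicate()[0].decode()

if __name__ == '__main__':
    print(svcinfo('svccluster', 'lsmigrate'))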

04
May

Linux LVM: logical volume migration

I was recently confronted with the task of migrating a logical volume holding MySQL databases to a separate physical volume. Due to the important application running on top of MySQL, any downtime was out of the question.

The easiest way to perform this operation is to use the pvmove command.

[root@r2d2 ~]# pvmove -n test /dev/sda5 /dev/sdb1
/dev/sda5: Moved: 6.2%
/dev/sda5: Moved: 18.8%
/dev/sda5: Moved: 28.1%
/dev/sda5: Moved: 37.5%
/dev/sda5: Moved: 46.9%
/dev/sda5: Moved: 56.2%
/dev/sda5: Moved: 65.6%
/dev/sda5: Moved: 75.0%
/dev/sda5: Moved: 84.4%
/dev/sda5: Moved: 93.8%
/dev/sda5: Moved: 100.0%

In the example we moved the logical volume 'test' from physical volume /dev/sda5 to /dev/sdb1. While pvmove is running, we can get all the relevant information, including the progress, with the lvs command.

[root@r2d2 ~]# lvs -a -o+devices
LV        VG  Attr   LSize  Move     Copy% Devices
home      sys -wi-ao 20.00G                /dev/sda5(544)
mysql     sys -wi-a- 1.00G                 /dev/sda5(1888)
[pvmove0] sys p-C-ao 1.00G /dev/sda5 56.25 /dev/sda5(1920),/dev/sdb1(0)
root      sys -wi-ao 7.00G                 /dev/sda5(256)
swap      sys -wi-ao 2.00G                 /dev/sda5(480)
test      sys -wI-ao 1.00G                 pvmove0(0)
usr       sys -wi-ao 8.00G                 /dev/sda5(0)
usr       sys -wi-ao 8.00G                 /dev/sda5(1856)
var       sys -wi-ao 2.00G                 /dev/sda5(224)
var       sys -wi-ao 2.00G                 /dev/sda5(1824)

Another, somewhat trickier, way of doing this is to use LV mirroring. I say trickier because it requires more work and the last part of the operation is not really intuitive, so it may be prone to errors.

We start with this:

[root@r2d2 ~]# pvs
PV        VG  Fmt  Attr PSize   PFree
/dev/sda5 sys lvm2 a-   220.75G 179.75G
/dev/sdb1 sys lvm2 a-   3.72G   3.72G
 
[root@r2d2 ~]# lvs -a -o+devices
LV   VG  Attr   LSize Devices
[---]
test sys -wi-a- 1.00G /dev/sda5(1920)
[---]

Two PVs: the source, sda5, and the unused target PV, sdb1. The logical volume 'test' is the one we want to move to sdb1.

In order to verify that the content of the LV is consistent at the end, we will create a sample file and compare its checksum before and after.

[root@r2d2 ~]# dd if=/dev/zero of=/mnt/random bs=512 count=20000
20000+0 records in
20000+0 records out
10240000 bytes (10 MB) copied, 0.122046 s, 83.9 MB/s
 
[root@r2d2 ~]# md5sum /mnt/random
596c35b949baf46b721744a13f76a258 /mnt/random

We start by adding another copy to the 'test' LV and placing it on the new PV.

[root@r2d2 ~]# lvconvert -m 1 sys/test /dev/sdb1
Not enough PVs with free space available for parallel allocation.
Consider --alloc anywhere if desperate.
Unable to allocate extents for mirror(s).

Hm, a confusing error. :) The reason for it is that the Linux LVM implementation requires a mirror to use a log volume, and this log volume needs to reside on a PV of its own. To get around this we can instruct LVM to keep the log in memory. This is not recommended when mirroring is needed in the long run, since the mirror has to be resynchronized every time the LV is activated, but it is a perfect solution for our needs as our mirror will not live for long.

[root@r2d2 ~]# lvconvert -m 1 sys/test /dev/sdb1 --mirrorlog core
sys/test: Converted: 6.2%
sys/test: Converted: 15.6%
sys/test: Converted: 28.1%
sys/test: Converted: 37.5%
sys/test: Converted: 46.9%
sys/test: Converted: 56.2%
sys/test: Converted: 65.6%
sys/test: Converted: 75.0%
sys/test: Converted: 84.4%
sys/test: Converted: 93.8%
sys/test: Converted: 100.0%
Logical volume test converted.

Again, we can get all the information with the lvs command.

[root@r2d2 ~]# lvs -a -o+devices
LV              VG  Attr   LSize Copy%  Devices
home            sys -wi-ao 20.00G       /dev/sda5(544)
mysql           sys -wi-a- 1.00G        /dev/sda5(1888)
root            sys -wi-ao 7.00G        /dev/sda5(256)
swap            sys -wi-ao 2.00G        /dev/sda5(480)
test            sys mwi-a- 1.00G 15.62  test_mimage_0(0),test_mimage_1(0)
[test_mimage_0] sys Iwi-ao 1.00G        /dev/sda5(1920)
[test_mimage_1] sys Iwi-ao 1.00G        /dev/sdb1(0)
usr             sys -wi-ao 8.00G        /dev/sda5(0)
usr             sys -wi-ao 8.00G        /dev/sda5(1856)
var             sys -wi-ao 2.00G        /dev/sda5(224)
var             sys -wi-ao 2.00G        /dev/sda5(1824)

So now we have our LV mirrored, with copies on both PVs. Just as a test, we can verify the content of the file.

[root@r2d2 ~]# md5sum /mnt/random
596c35b949baf46b721744a13f76a258 /mnt/random

Seems fine. :)

The last step, the one where we have to be more careful, is removing the mirror copy from the old PV. We have to be careful because the PV name given here is the name of the PV we want to remove from the mirror, which is the opposite of what we specified when establishing it. If we use the wrong PV name we will end up back at the beginning, with the LV on the wrong PV and without mirroring.

[root@r2d2 ~]# lvconvert -m 0 sys/test /dev/sda5
Logical volume test converted.

Again we use lvs to see the status.

[root@r2d2 ~]# lvs -a -o+devices
LV   VG  Attr   LSize Devices
[---]
test sys -wi-a- 1.00G /dev/sdb1(0)
[---]

In the end, to verify that everything is clean, we can remount the LV and compare the content again.

[root@r2d2 ~]# umount /mnt/
[root@r2d2 ~]# lvchange -an /dev/sys/test
[root@r2d2 ~]# lvchange -ay /dev/sys/test
[root@r2d2 ~]# mount /dev/sys/test /mnt
 
[root@r2d2 ~]# md5sum /mnt/random
596c35b949baf46b721744a13f76a258 /mnt/random

Everything fine. :)

Just a small note for the end: if you try to create a mirror on a logical volume that has not been activated yet, the mirror sync will start only after activation.

[root@r2d2 ~]# lvconvert -m 1 sys/test /dev/sdb1 --mirrorlog core
Conversion starts after activation

30
Nov

PS1 for your Shell?

A few years ago I went on a quest to find the perfect shell prompt. I asked the mighty Internets for ideas, but it seemed futile. I tried many things, simple prompts, complex prompts, but nothing could satisfy my requirements (I don't even remember what my requirements were back then). So I picked the best of both worlds and got this little monster.

:) oscar:~#

Happy face! And in case of an error, it looks sad.

:( 2 oscar:~#

Cute, eh? It even prints the exit code. Useful and cute at the same time! And here is the definition of the prompt. As you can see, it uses a simple function to check the return code of the last command and adjusts its feelings accordingly.

smiley() {
    RC=$?
    [[ ${RC} == 0 ]] && echo ':)' || echo ":( ${RC}"
}
export PS1="\$(smiley) \u@\h:\w\\$ "

I think I got the idea for the smiley thing from somewhere, but unfortunately I no longer remember where.
For more adventurous people, maybe this prompt would be more interesting.

export PS1="\[\033[0;36m\]\033(0l\033(B\[\033[0m\][\[\033[1;31m\]\u\[\033[0m\]]\[\033[0;36m\]\033(0q\033(B\[\033[0m\][\[\033[1;33m\]@\h\[\033[0m\]]\[\033[0;36m\]\033(0q\033(B\[\033[0m\][\[\033[0;37m\]\T\[\033[0m\]]\[\033[0;36m\]\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\033(0q\033(B\[\033[0m\][\[\033[1;33m\]\w\[\033[0m\]]\n\[\033[0;36m\]\033(0m\033(B\[\033[0m\]\$ "

Magic! :)

What is your favorite prompt? Please leave boring \u@\h:\w for dinner with your parents. :P

23
Nov

Load Average > 680

I just found this old screenshot from one of my previous jobs. It was taken on December 9th, 2003, while one of the web hosting servers went woowoo due to a badly optimized web site. The load average went sky high, to 682! Has anyone else had such a high load, or am I the absolute champion? :)

PS: I was using Fluxbox back then! Wow, crazy youth! :)

16
Nov

LVM Snapshots and XFS

A small note for everyone planning to use Linux LVM snapshots together with XFS. Every XFS filesystem has a UUID, a unique identifier of that filesystem, and two filesystems with the same UUID cannot be mounted on the same server at the same time. Now, since a snapshot of a logical volume is a point-in-time copy of the original logical volume, it doesn't take long to realize that the filesystem on the snapshot is also a copy, and thus has the same UUID as the filesystem on the original logical volume. So here is what happens when you try to mount the snapshot:

[root@server ~]# df -hT /var/lib/mysql_backup/
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/sys-mysql_backup
              xfs    25G   19G  6.3G  76% /var/lib/mysql_backup
 
[root@server ~]# lvcreate -s -n bkp-snap -L1G /dev/sys/mysql_backup
  Logical volume "bkp-snap" created
 
[root@server ~]# mount /dev/sys/bkp-snap /mnt/misc/
mount: wrong fs type, bad option, bad superblock on /dev/sys/bkp-snap,
       missing codepage or other
       In some cases useful info is found in syslog - try
       dmesg | tail or so
 
[root@server ~]# dmesg | tail -1
XFS: Filesystem dm-9 has duplicate UUID - can't mount

It doesn't look very good. There are two solutions to this problem. One is to use the nouuid option of the mount command.

[root@server ~]# mount -o nouuid /dev/sys/bkp-snap /mnt/misc/
 
[root@server ~]# dmesg | tail -3
XFS mounting filesystem dm-9
Starting XFS recovery on filesystem: dm-9 (logdev: internal)
Ending XFS recovery on filesystem: dm-9 (logdev: internal)

The other option is to change the UUID of the filesystem on the snapshot using the xfs_admin command.

[root@server ~]# xfs_admin -U generate /dev/sys/bkp-snap
Clearing log and setting UUID
writing all SBs
new UUID = 1bdcf6e1-62fb-47f2-83e4-dc398bb7a1cd
 
[root@server ~]# dmesg | tail -2
XFS mounting filesystem dm-9
Ending clean XFS mount for filesystem: dm-9

I am in favour of the first option (mount -o nouuid) since it does not modify the filesystem at all. It just feels safer, that's all... :)

10
Nov

More information on Linux memory management

I have just noticed that I forgot to mention one very important thing in my previous post.

The file /proc/meminfo contains a very useful field named Committed_AS. This field shows the TOTAL amount of committed memory: if all applications actually used all the memory allocated to them, this is how much memory your server would need.

If we look at the example from my previous post, we find the following values:

loreto:/tmp # cat /proc/meminfo
MemTotal:     33274944 kB

Committed_AS: 49751960 kB

So my server has 32GB of RAM, but the total amount of memory allocated is 48GB. That is almost 150%! If all this memory were required at once, the server would crash pretty badly (or the OOM killer would start butchering my Oracle databases to get some memory back!). :-)
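If you want to keep an eye on this ratio, it is trivial to compute from /proc/meminfo. A small sketch, nothing more than reading two fields:

#!/usr/bin/env python
# Report Committed_AS as a percentage of MemTotal, straight from /proc/meminfo.
values = {}
for line in open('/proc/meminfo'):
    key, rest = line.split(':', 1)
    values[key.strip()] = int(rest.split()[0])   # numeric value (kB for most fields)

ratio = 100.0 * values['Committed_AS'] / values['MemTotal']
print('Committed_AS: %d kB (%.0f%% of MemTotal)' % (values['Committed_AS'], ratio))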

10
Jul

Linux memory management

Yesterday I got a request for a memory usage report on the Oracle servers in my company. As we are using Centreon, a Nagios frontend which makes good use of the performance data reported by Nagios plugins and turns it into nice graphs, it was just a matter of pasting the images into a mail and sending it. But then an interesting question was raised: how come a server with 32GB of RAM and 30+ databases running reports only 5GB of RAM as used? Strange indeed.

I quickly logged in to the server and checked the memory usage:

loreto:/tmp # free
             total       used       free    shared   buffers     cached
Mem:      33274944   32931032     343912         0        20   27013200
-/+ buffers/cache:    5917812   27357132
Swap:     16779884    5603256   11176628

Really only 5GB; the check_memory plugin was not wrong. The next thing I checked were the shared memory segments – Oracle uses shared memory in huge quantities, so this is also a very important parameter.

loreto:/tmp # a=0; for i in $(ipcs -m|grep ^0x|awk '{print $5}'); do let a+=$i; done; echo $a
20443037885

Ugh, 20GB allocated for shared memory, while the system reports only 5GB used. Something is very wrong here. Confused, I took out the heavy artillery.

loreto:/tmp # cat /proc/meminfo
MemTotal:     33274944 kB
MemFree:        198580 kB
Buffers:            20 kB
Cached:       27439580 kB
SwapCached:     223880 kB
Active:       16333936 kB
Inactive:     15428724 kB
HighTotal:    32634140 kB
HighFree:        33516 kB
LowTotal:       640804 kB
LowFree:        165064 kB
SwapTotal:    16779884 kB
SwapFree:     10856700 kB
Dirty:            1668 kB
Writeback:           0 kB
AnonPages:     4089416 kB
Mapped:       10222968 kB
Slab:           427584 kB
CommitLimit:  33417356 kB
Committed_AS: 49751960 kB
PageTables:     826016 kB
VmallocTotal:   112632 kB
VmallocUsed:     22228 kB
VmallocChunk:    90180 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB

Usually, used memory on Linux is calculated as (Total Memory - (Unused Memory + Buffers + Page Cache)). Why are buffers and caches not counted as used memory? Simply because they contain data that is not critical for the operating system or the running applications – data that can be flushed and dropped from memory at any time.

So in my case that was:

33274944 - (198580 + 20 + 27439580) = 5636764

OK, this is in line with what the free command reported. But what about those 20GB of allocated shared memory?

I spent the next few hours searching and reading Linux documentation on memory management and found a few interesting things.

Linux uses the principle of memory overcommitment. Basically, what this means is that when an application requests memory, the kernel will always “give” it the memory, hoping that the application will not really use it, or at least not the whole allocated size. Only when the application writes data into the memory does the kernel actually mark it as used. This can lead to a situation where the amount of allocated memory is higher than the amount of physical memory in the machine, but as long as there is no demand for the allocated memory, the system runs without problems.

And this is the core of my dilemma: the shared memory is allocated, but since there is no data in it yet, it is not counted as used memory.
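It is easy to see this behaviour with a few lines of Python: reserve a large anonymous mapping without touching it and the resident size barely moves; write into every page and it shoots up. A small experiment sketch, assuming a Linux box with default overcommit settings:

#!/usr/bin/env python
# Small experiment: memory that is merely allocated costs (almost) nothing;
# only memory that is actually written to has to be backed by real pages.
import mmap
import resource

def rss_kb():
    # ru_maxrss is reported in kilobytes on Linux
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

size = 512 * 1024 * 1024             # reserve 512 MB of anonymous memory
buf = mmap.mmap(-1, size)
print('after allocating 512 MB, max RSS: %d kB' % rss_kb())

for offset in range(0, size, 4096):  # now dirty every page
    buf[offset:offset + 1] = b'\x01'
print('after touching every page, max RSS: %d kB' % rss_kb())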

Memory overcommit can be configured via two parameters:

loreto:/tmp # sysctl -a|grep overcommit
vm.overcommit_ratio = 50
vm.overcommit_memory = 0

From the Red Hat manual:

  • overcommit_memory — Configures the conditions under which a large memory request is accepted or denied. The following three modes are available:
    • 0 — The kernel performs heuristic memory over commit handling by estimating the amount of memory available and failing requests that are blatantly invalid. Unfortunately, since memory is allocated using a heuristic rather than a precise algorithm, this setting can sometimes allow available memory on the system to be overloaded. This is the default setting.
    • 1 — The kernel performs no memory over commit handling. Under this setting, the potential for memory overload is increased, but so is performance for memory intensive tasks (such as those executed by some scientific software).
    • 2 — The kernel fails requests for memory that add up to all of swap plus the percent of physical RAM specified in /proc/sys/vm/overcommit_ratio. This setting is best for those who desire less risk of memory overcommitment.
      Note This setting is only recommended for systems with swap areas larger than physical memory.
  • overcommit_ratio — Specifies the percentage of physical RAM considered when /proc/sys/vm/overcommit_memory is set to 2. The default value is 50.

10
Oct

Pidgin status v2

As requested by bleketux, I made some modifications to the pidgin_status.py script.

The main news is that it is now possible to change the Pidgin status message periodically. The script will go to the background (it is a real daemon now :P), change the status, wait for the configured time interval, change the message again, and so on: wait-change-wait-change.

To change the status message every 5 minutes to a random line from the file /home/miljan/quotes/dusko_radovic.txt:

pidgin_status.py -d -t 5 -f /home/miljan/quote/dusko_radovic.txt

And in Pidgin you would get something like this every five minutes:

To show the song you are listening to as the status message:

pidgin_status.py -s "Mukeka di Rato – Kustapassaaessedrmobral"

And in Pidgin it would look like:

You can see all available options by running the script with the -h argument.

bleketux, I hope you are still around to enjoy this. ;)

Download: https://github.com/miljank/pidgin-status