Request for feedback
clusterfork is a fairly new program, designed for myself. If you think it could benefit from other options or interface tweaks, please let me know.
1. Availability
clusterfork is released under the Free Software Foundation's Affero General Public License (AGPL) version 3.
The Perl code is available here.
2. Introduction
While modern cluster technologies (Perceus, ROCKS) should obviate the need for this kind of utility, there are many non-Perceus/ROCKS clusters, and even more cluster-like aggregations of nodes, that often need this kind of tool. Even in Perceus/ROCKS clusters there is often a need to issue a command to each node (to evaluate hardware, search logs, check memory errors, etc.) that is not met by the provisioning system.
3. Features
Why use clusterfork rather than the tools noted below?
clusterfork:
- is config-file based (and will write an example template if one doesn't exist). You can also specify alternative config files.
- has an easy way to specify large, discontinuous IP # ranges with negations: i.e. 128.200.34.[23:45 -25 77:155 -100:-120] will send the cmd to the nodes (on net 128.200.34.0) 23 to 45 EXCEPT 25, and then 77 to 155 EXCEPT the nodes 100 to 120. Such specifications can also be chained or instantiated in the config file. See Specifying Ranges below.
- can, via the config file, specify IP ranges based on arbitrary scripts such as SGE's qhost.
- can combine IP ranges into larger groups via named group addition (GRP1 + GRP2 + GRP3).
- is pretty fast (by default, forks commands so that they execute in parallel).
- comes with at least a decent amount of documentation, both external (this file) and internal help (clusterfork -h).
- is short (<1000 LOC, including the rc file template and help text), pretty well documented and easy to modify.
- provides a mechanism to evaluate the results of the command (like pssh, but better).
- can archive the results, although in a fairly primitive way.
- can be used to (crudely) monitor the status of a cluster's locally installed software.
- notes which IP #s overlap in a command so nodes won't receive multiple copies of commands.
- provides multiple mechanisms for showing results: an MD5 hash and wordcount of machine-specific output to show both exact and similar output, as well as saving the output for later perusal in a newly created subdir which also contains a summary file. You can view the output as a summary, or via Midnight Commander browsing of the results dir.
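The negation-range behavior in the features above (ranges add numbers, negated entries then remove them) can be illustrated with a short sketch. This is Python for brevity, not clusterfork's actual Perl implementation; it illustrates only the semantics of a spec like 23:45 -25 77:155 -100:-120.

```python
# Expand a clusterfork-style range spec: positive entries add numbers,
# negated entries ("-25", "-100:-120") remove them afterwards.
def expand(spec):
    keep, drop = set(), set()
    for tok in spec.split():
        if ':' in tok:
            a, b = tok.split(':')
            if a.startswith('-'):            # negated range, e.g. -100:-120
                lo, hi = sorted((abs(int(a)), abs(int(b))))
                drop.update(range(lo, hi + 1))
            else:                            # plain range, e.g. 23:45
                keep.update(range(int(a), int(b) + 1))
        elif tok.startswith('-'):            # single negation, e.g. -25
            drop.add(abs(int(tok)))
        else:                                # single number, e.g. 26
            keep.add(int(tok))
    return sorted(keep - drop)

hosts = expand("23:45 -25 77:155 -100:-120")
```

With this spec, 25 and the whole 100-120 block drop out, leaving 80 host numbers.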
4. Known Problems
clusterfork is a fairly new program and while it works pretty well, there are some known or possible problems, mostly having to do with regular expressions.
- regular expressions passed as part of the remote command may suffer in the ssh-to-shell translation and be garbled on the remote end. I've been using it in production for about 4 months and haven't found any problems, but it's a consideration.
- the 1st 20 characters of the commandline are used as part of the directory name, and regular expressions included in those 20 characters may sometimes be garbled into impossible dir names which are rejected by the OS. An error message should be emitted if this happens, and I'm trying to catch and retranslate these regexes.
- if the Linux distro from which you issue the cf command aliases /bin/sh to dash instead of bash (as most recent Ubuntu-derived distros do), the redirection of the output when using the default fork behavior will be odd. It will not redirect to the usual files, but will be written to the screen in one blurb. The dash/bash weirdness also contributes other instabilities, so I usually change this so that /bin/sh → /bin/bash. If too many people complain, I'll add some work-around code to the script.
- in versions prior to 1.55, if the host/IP specification included a negation of the last number in the series, it would write an undef in the last position. i.e. in the spec a64-[001:040 -012 -013 -040], 040 would be undef'ed, leading to a warning when running and the last host not being processed (but since it was negated, it wasn't meant to be processed anyway).
PLEASE let me know if you run into any of these regex problems or other problems.
5. Prerequisites
Note that it does have a few Perl dependencies beyond the usual strict and Env:
- Getopt::Long, to process options
- Config::Simple, to process the configuration file
- Socket, to provide hostname resolution
It also requires some apps that are usually installed on most Linux boxen anyway:
- ssh, for executing the commands (and obviously the nodes have to share ssh keys to provide passwordless ssh)
- mutt, for emailing out notifications (if desired)
- diff, for doing comparisons between files of IP #s
- yum or apt-get, if you want to use it for updating / installing apps
- mc (Midnight Commander), a Norton Commander-like clone to view/manipulate files
- whatever local apps, scripts, etc. that you need to generate IP # lists, if this is of interest (SGE's qhost to see what nodes are alive, for example)
6. Installation
For recent Ubuntu-based distros, the following will install the prerequisite packages:
sudo apt-get install libgetopt-mixed-perl libconfig-simple-perl \
     libio-interface-perl mc diff yum apt mutt
For CentOS 5 and comparable RedHat-based systems:
sudo yum install perl-Config-Simple.noarch perl-Getopt-Mixed.noarch \
     perl-IO-Interface.<arch> mc.<arch> diffutils.<arch> \
     yum.noarch mutt.<arch>
where <arch> is either x86 or x86_64.
Beyond that, the installation requires:
- download the clusterfork script itself.
- move it to /usr/local/bin as clusterfork (and optionally, symlink it to cf).
- chmod it to make it executable.
- run it once to write a .clusterforkrc file to your $HOME (see below).
- edit that file to adjust it to your local requirements.
- start clusterforking.
7. Initialization
The 1st time you use clusterfork, you should get this message (unless you've already copied a ~/.clusterforkrc file from somewhere else). Just follow the instructions.
$ clusterfork

It looks like this is the 1st time you've run clusterfork as this
user on this system. An example .clusterforkrc file will be written
to your home dir. Once you edit it to your specifications, run a
non-destructive command with it (ie 'ls -lSh') to make sure it's
working and examine the output so that you understand the workflow
and the output.

Remember that in order for clusterfork to work, passwordless ssh
keys must be operational from the node where you execute clusterfork
to the client nodes. If you're going to use sudo to execute
clusterfork, the root user public ssh key must be shared out to the
clients. Typical cluster use implies a shared /home file system
which means that the shared keys should only have to be installed
once in /home/$USER/.ssh/authorized_keys.

Please edit the ~/.clusterforkrc template that's just been written
so that the next time things go smoother.
8. The .clusterforkrc configuration file
The .clusterforkrc config file (by default in your $HOME) is arranged like a Windows .INI file, with stanza headers indicated with [STANZA]. Each stanza can have an arbitrary number of entries, but only the stanzas shown are supported by cf. Nothing prevents you from adding more, but you'll have to process them yourself. Within each stanza, you can edit the entries to suit your site.
The stanzas named [IPRANGE] and [GROUPS] can be expanded arbitrarily and cf should pick them up. Additionally, if you specify groups which have overlapping IP ranges, cf will detect the overlap and will only issue the command once per IP #.
# This is the config file for the 'clusterfork' application (aka cf) which
# executes commands on a range of machines defined as below. Use
# 'clusterfork -h' to view the help file.
# Comments start with a pound ('#') sign and //cannot share the same line//
# with other configuration data.
# Strings do not need to be quoted unless they contain commas (imply list entries)

[ADMIN]
# RPMDB - file that lists the RPMs that cf has been used to install
RPMDB = /home/hmangala/BDUC_RPM_LIST.DB
# ALLNODESFILE holds a list of ALL the IP nodes that this will support.
# this should actually be generated outside of cf and written out if required.
ALLNODESFILE = /home/hmangala/ALLNODESFILE
# emails to notify of newly installed packages (note the escaping of the '@')
EMAIL_LIST = "hmangala\@uci.edu, jsaska\@uci.edu, lopez\@uci.edu"
# command to install apps - if this is found in the command, triggers a routine
# to email admins with updated install info.
INSTALLCMD = "yum install -y"

[SGE]
CELL = bduc_nacs
JOB_DIR = /sge62/bduc_nacs/spool/qmaster/jobs
EXECD_PORT = 537
QMASTER_PORT = 536
ROOT = /sge62

[APPS]
# these will probably not change much among distros, but YMMV
yum  = /usr/bin/yum
diff = /usr/bin/diff
mutt = /usr/bin/mutt
mc   = /usr/bin/mc

[IPRANGE]
# you //definitely// need to change these.
# use ';' as separators, not commas. Spaces are ignored.
ADC_2X = 10.255.78.[10:22 26 35:49] ; 10.255.78.[77:90] ; 12.23.34.[13:25 33:44 56:75]
ADC_4X = 10.255.78.[50:76]
ICS_2X = 10.255.89.[5:44]
CLAWS  = 10.255.78.[5:9]
# for a definition based on a script, the value must be in the form of:
#   [SCRIPT:"whatever the script is"]
# with required escaping being embedded in the submitted script
# (see below for an example in QHOST)
# the following QHOST example uses the host-local SGE 'qhost' and 'scut'
# utilities to generate a list of hosts to process and filters only 'a64'
# hosts which are responsive (don't have ' - ' entries). Returns the list
# as a space-delimited set of names.
#QHOST = SCRIPT:"qhost |grep a64 | grep -v ' - ' | scut --c1=0 | perl -e 's/\\n/ /gi' -p"
# Set temporarily dead nodes in here if required.
IGNORE = 10.255.78.12 ; 10.255.78.48 ; 12.23.34.[22:25]

[GROUPS]
# GROUPS can be composed of primary IPRANGE groups as well as other
# GROUP groups as long as they have been previously defined.
ALL_2X  = ICS_2X + ADC_2X
CENTOS  = ICS_2X + ADC_2X + ADC_4X
ADC_ALL = ALL_2X + ADC_4X + CLAWS
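The GROUP addition shown in the config ('+'-composed names, with previously defined GROUPs usable on the right-hand side, and overlapping IP #s issued only once) amounts to set union. A minimal sketch of that resolution logic (Python, illustrative only; the group names and counts mirror the example config, not real hosts):

```python
# Resolve '+'-composed GROUP definitions into deduplicated host sets.
# Primary IPRANGE groups, shrunk here to small illustrative ranges:
iprange = {
    "ADC_2X": {"10.255.78.%d" % i for i in range(10, 23)},   # 13 hosts
    "ADC_4X": {"10.255.78.%d" % i for i in range(50, 77)},   # 27 hosts
    "ICS_2X": {"10.255.89.%d" % i for i in range(5, 45)},    # 40 hosts
}

defs = {
    "ALL_2X": "ICS_2X + ADC_2X",
    # ADC_2X appears again below, but set union issues each IP only once:
    "CENTOS": "ALL_2X + ADC_2X + ADC_4X",
}

groups = {}
for name, rhs in defs.items():
    members = set()
    for term in rhs.split('+'):
        term = term.strip()
        # previously defined GROUPs are looked up first, then primary IPRANGEs
        members |= groups.get(term, iprange.get(term, set()))
    groups[name] = members
```

CENTOS ends up with 80 hosts, not 93, because the ADC_2X hosts already present via ALL_2X are counted once.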
9. Specifying Ranges
The range specifier in cf is fairly flexible (mostly taken from the slice'n'dice utility scut). You can specify the target ranges with either IP #s (128.200.15.[34:88 -55]) or alphanumeric hostnames (node_[211:345 -255:-258].somenet.podunk.edu).
BUT only 1 variable specification per hostname string, please (don’t try cn_[45:299].net-[23:35].podunk.edu; you’ll regret it.)
Also, if the 1st number is entered using leading/padding zeros, the numbers emitted will also have the same number of characters. For example, [0005:0023] will generate 0005 0006 0007 … 0022 0023, as will [0005:23] - it’s just the 1st number that sets the pad length.
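The padding rule (the 1st number's digit count sets the pad length for the whole range) can be mimicked in a few lines. A Python sketch of the semantics, not clusterfork's Perl code:

```python
# Generate zero-padded names per the rule: the FIRST number's width
# sets the pad length for every emitted number.
def padded_range(first, last):
    width = len(first)                      # e.g. "0005" -> width 4
    return [str(i).zfill(width) for i in range(int(first), int(last) + 1)]

nums = padded_range("0005", "23")           # same result as [0005:0023]
```

So [0005:23] and [0005:0023] both yield 0005 through 0023.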
You can also chain specifications like this (taken from an example ~/.clusterforkrc file):

# ADC_2X can be specified either with a central negation range:
ADC_2X = a64-[104:181 -141:-167]
# or as a chain of 2 separated ranges
# (you must use a ';' as a chain character)
ADC_2X = a64-[104:140] ; a64-[168:181]
# you can also specify hostgroups with chains of mixed IP #s and hostnames
# they should resolve to each other and both are listed on output
BOOBOO = 10.255.78.[22:46] ; a64-[168:177]
10. Options
The options are spelled out fairly well by the cf -h command. However, here they are again, verbatim.
--help / -h ............ dump usage, tips
--version / -v ......... dump version #
--config=/alt/config/file .. an alternative config file. On 1st execution,
        clusterfork will write a template config file to ~/.clusterforkrc
        and exit. You must edit the template to provide parameters for
        your particular setup.
--target=[quoted IP_Range or predefined GROUP name]
        where IP_Range -> 12.23.23.[27:45 -33 54:88]
                       or 'a64-[00023:35 -25:-28].bduc'
        (Note that leading zeros in the FIRST range specifier will be
        replicated in the output; the above pattern will generate:
        a64-00023.bduc, a64-00024.bduc, a64-00029.bduc, etc)
        where GROUP -> 'ICS_2X,ADC_4X,CLAWS' (from config file)
        (see docs for longer exposition on IP_Ranges and GROUP definition)
--listgroup=[GROUP,GROUP..] (GROUPs from config file)
        If no GROUP specified, dumps IP #s for ALL GROUPS. This option
        does not require a 'remote command' and ignores it if given.
--fork  (default) Sends 'remote command' to all nodes in parallel and
        saves output from nodes into a dated subdir for later perusal.
        If you submit a command to run in parallel, it must run to
        completion without intervention. ie: to install on a CentOS node,
        the 'yum install' command must have the '-y' flag as well:
        'yum install -y' to signify that 'Y' is assumed to be the answer
        to all questions.
        If you use the --fork option (as above), instead of producing the
        stdout/err immediately, a new subdir will be created with a name
        of the format:
            REMOTE_CMD-(20 chars of the command)-time_date
        and the output for each node will be directed into a separate
        file named for the IP number or hostname (whichever the input
        spec was).
--nofork .... Execs 'remote command' serially on each specified node and
        emits the output from each node as it's generated. If executed
        with this option, it will produce a list of stanzas corresponding
        to the nodes requested:
        --------------------------------
        a64-101 [192.168.0.10]:
        <output from the command>

        a64-102 [192.168.0.11]:
        <output from the command>
        --------------------------------
--debug ..... causes voluminous debug messages to spew forth
11. Some real-world examples
To cause cf to dump its help file into the less pager
$ clusterfork -h
Have cf read the alternative config file ./this_cf_file and list the groups that are defined there.
$ clusterfork --config=./this_cf_file --listgroup
Have cf read the default config file and target the group CLAWS with the command ls -lSh
$ clusterfork --target=CLAWS 'ls -lSh'
Check the memory error counts for the nodes 192.168.1.15 thru 192.168.1.75 except 192.168.1.66
$ clusterfork --target=192.168.1.[15:75 -66] \
  'cd /sys/devices/system/edac/mc && grep [0-9]* mc*/csrow*/[cu]e_count'
Tell the nodes in the group ALL_ADC to send a single ping to the login node
$ clusterfork --target=ALL_ADC 'ping -c 1 bduc-login'
Ask the nodes [10.255.78.12 to 10.255.78.45] and [10.255.35.101 to 10.255.35.165] to dump the catalog of their installed packages.
$ clusterfork --target='10.255.78.[12:45] 10.255.35.[101:165]' 'dpkg -l'
Ask the nodes in the ADC_2X group to /serially/ dump their hardware memory configuration
$ sudo clusterfork --target=ADC_2X --nofork 'lshw -short -class memory'
Do all the nodes have libXp.so.6?
$ cf --target=QHOST 'locate libXp.so.6'
You can use hostnames as well as IP #s:
$ sudo clusterfork --target=a64-[0005:0078 -22 -52 -57].bduc 'ps aux |grep gromacs'
Using a script to specify the list of nodes to target:
$ sudo clusterfork --target=QHOST 'yum install -y tree'
In the above example, the ~/.clusterforkrc file has the following line:
QHOST = SCRIPT:"qhost |grep a64 | grep -v ' - ' | scut --c1=0 | perl -e 's/\\n/ /gi' -p"
which calls the SGE utility qhost, filters the output and stream-edits the STDOUT to create a space-delimited list that can be processed by cf.
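The SCRIPT mechanism boils down to: run the user-supplied pipeline through a shell, capture its stdout, and split it into a whitespace-delimited host list. A sketch of that idea (Python, illustrative; the echo below stands in for the real qhost pipeline, which only exists on an SGE host):

```python
import subprocess

def hosts_from_script(script):
    # Run the pipeline through a shell and split its stdout into a
    # whitespace-delimited host list, as cf does for SCRIPT: targets.
    out = subprocess.run(script, shell=True, capture_output=True, text=True)
    return out.stdout.split()

# stand-in for the real "qhost | grep ... | scut ..." pipeline:
nodes = hosts_from_script("echo 'a64-001 a64-002 a64-003'")
```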
12. Output
When cf is executed in forking mode, it waits for all the slave processes to finish, then allows you to view both the Summary and the full results, which are also written to the newly created directory named with a combination of the command, the date and the time:
./clusterfork.pl --target=QHOST 'ls -lSh *gz'
INFO: Creating dir [REMOTE_CMD-ls--lSh--gz-13.53.12_2010-08-17]....success
INFO: Processing name: [QHOST]
======================================================
Processing [QHOST]
======================================================
...
Host: a64-166 [10.255.78.75]:
Host: a64-167 [10.255.78.76]:
Host: a64-182.bduc [10.255.78.91]:
# of process at start: [119]
Waiting for [1] running processess
...
All slave processes finished!
You can find the results of your command in the dir
[ REMOTE_CMD-ls--lSh--gz-13.53.12_2010-08-17 ]
Summary printed to REMOTE_CMD-ls--lSh--gz-13.53.12_2010-08-17/Summary.
View in 'less'? [Yn]
If you hit [n], the Summary will be skipped. If you hit [Enter] or [Y] the Summary of the command execution will be shown, with both wc output to indicate similar output (line/word/chars) and MD5 sum to indicate identical output:
Analysis of contents for files in REMOTE_CMD-ls--lSh--gz-13.53.12_2010-08-17
Command: [ls -lSh *gz]
========================================================================
line / word / chars | md5 sum                          | #   | hosts ->
21   189   1443   538ca54bd6f10af5da3872b3a6f14c3e   120   a64-001 a64-002 a64-003 ..
REMOTE_CMD-ls--lSh--gz-13.53.12_2010-08-17/Summary (END)
In the above case, because of the shared dir structure, the Summary shows that the result is identical on all nodes. In the case below, where the result is a network latency measurement, there's quite a bit more variability.
Analysis of contents for files in REMOTE_CMD-ping--c-1-bduc-login-13.58.33_2010-08-17
Command: [ping -c 1 bduc-login]
========================================================================
line / word / chars | md5 sum                          | # | hosts ->
6   36   270   c86c8f74e14ef6e6b42d51d00ade483a   1   a64-021
6   36   270   86e1ee2d631857ae7e65cbe6a7615fb8   1   a64-012
6   36   270   19a407cf64c01f03ceaa508be0269c40   1   a64-024
6   36   270   ceba5ef2b4cb4b647c36e7de9361ca46   2   a64-106 a64-139
6   36   270   36be0140aaee1d45a6565a6c6783c06d   1   a64-145
6   36   270   be484f980418af69358be51f3ef2184b   1   a64-179
6   36   270   abbf9229964805d06cabb4a0b8a361ec   1   a64-123
6   36   270   102f114c2607222132bfed67822fc57f   1   a64-016
6   36   270   e78da0916e4b5aea70f1f1dd828a477b   2   a64-141 a64-161
6   36   270   a73cb10e936aa0cf2061329c8e92c03b   2   a64-023 a64-028
6   36   270   24d5417a54458bc9f502b79b514a8f2f   1   a64-010
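The Summary's grouping logic is straightforward to reproduce: for each node's output file, record its line/word/char counts and MD5 digest, then bucket hosts by digest so byte-identical output collapses to one row. A sketch (Python, illustrative only; the hostnames and output strings are made up):

```python
import hashlib
from collections import defaultdict

def summarize(outputs):
    # outputs: {hostname: captured output string}
    # Bucket hosts whose output is byte-identical (same MD5); the
    # wc-style (lines, words, chars) triple flags output that is
    # merely similar in shape.
    buckets = defaultdict(list)
    for host, text in sorted(outputs.items()):
        md5 = hashlib.md5(text.encode()).hexdigest()
        wc = (len(text.splitlines()), len(text.split()), len(text))
        buckets[(wc, md5)].append(host)
    return buckets

b = summarize({"a64-001": "ok\n", "a64-002": "ok\n", "a64-003": "FAIL\n"})
```

Here the two "ok" hosts share one row and the failing host stands out in its own row, which is exactly how an odd node shows up in the real Summary.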
The last option of the clusterfork analysis allows you to choose to see the results in Midnight Commander (aka mc), browsing the results dir with one output file per node.
13. Alternatives
There are existing tools that do something similar:
- pssh is a very nice set of tools written in Python. It's fairly mature and has been packaged nicely. It can use a configuration file (and in fact doesn't allow IP range specification from the commandline), and it can write the results to a dir (but doesn't write a summary or allow in-line viewing). A nice example is described here. It doesn't allow as easy an IP range specification, nor grouping, as clusterfork. It is also a set of tools rather than one. But since it is available via both RPM and deb, it is very convenient to install. If you're using pssh and are familiar with it, I'd suggest staying with it.
- ClusterIt is a fairly large hammer when all I wanted was to send commands to a set of nodes. ClusterIt is written in C, supposedly for speed, though since what it's doing is just issuing commands, speed of execution shouldn't be an issue. It's also trying to be a scheduling tool, which complicates the core functionality of what should be a pretty simple tool.
- clusterssh (aka cssh) is a similar tool (but requires tcl/tk). As such it has some advantages (you can set up hosts and de/select hosts via mouse), but you interact with the targets via 1 xterm per host, hardly an efficient use of your desktop.
- gexec (now part of the ganglia project) is both considerably more complex and (possibly) more capable than cf. It requires the installation of authd and the ganglia system. By comparison, cf installs its own config file on 1st run and is then fairly independent. If you're forking commands over several thousand nodes instead of a few hundred, gexec may be worth the extra effort.
None of the above utils has cf's very slick host range specification, though... :)
14. Latest Version
The latest version of the clusterfork code and documentation will always be found here.