"Fossies" - the Fresh Open Source Software Archive  

Source code changes of the file "src/parallel.pod" between
parallel-20210122.tar.bz2 and parallel-20210222.tar.bz2

About: GNU Parallel is a shell tool for executing jobs in parallel using multiple CPU cores and/or multiple computers.

parallel.pod (parallel-20210122.tar.bz2) vs. parallel.pod (parallel-20210222.tar.bz2)
skipping to change at line 281
without extension. It is a combination of B<{>I<n>B<}>, B<{/}>, and
B<{.}>.
This positional replacement string will be replaced by the input from
input source I<n> (when used with B<-a> or B<::::>) or with the
I<n>'th argument (when used with B<-N>). The input will have the
directory (if any) and extension removed.
To understand positional replacement strings see B<{>I<n>B<}>.
=item B<{=>I<perl expression>B<=}> (alpha testing)
Replace with calculated I<perl expression>. B<$_> will contain the
same as B<{}>. After evaluating I<perl expression> B<$_> will be used
as the value. It is recommended to only change $_ but you have full
access to all of GNU B<parallel>'s internal functions and data
structures.
The expression must give the same result if evaluated twice -
otherwise the behaviour is undefined. E.g. this will not work as expected:
parallel echo '{= $_= ++$wrong_counter =}' ::: a b c
A few convenience functions and data structures have been made:
=over 15
=item Z<> B<Q(>I<string>B<)>
shell quote a string
=item Z<> B<pQ(>I<string>B<)>
perl quote a string
skipping to change at line 692
parallel --csv echo {1} of {2} at {3}
Even quoted newlines are parsed correctly:
(echo '"Start of field 1 with newline'
echo 'Line 2 in field 1";value 2') |
parallel --csv --colsep ';' echo Field 1: {1} Field 2: {2}
When used with B<--pipe> only pass full CSV-records.
=item B<--delay> I<mytime>
Delay starting next job by I<mytime>. GNU B<parallel> will pause
I<mytime> after starting each job. I<mytime> is normally in seconds,
but can be floats postfixed with B<s>, B<m>, B<h>, or B<d> which would
multiply the float by 1, 60, 3600, or 86400. Thus these are
equivalent: B<--delay 100000> and B<--delay 1d3.5h16.6m4s>.
If you append 'auto' to I<mytime> (e.g. 13m3sauto) GNU B<parallel> will
automatically try to find the optimal value: If a job fails, I<mytime>
is doubled. If a job succeeds, I<mytime> is decreased by 10%.
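A sketch of how this might be used (the URLs are placeholders): start
with a 1 second delay, let GNU B<parallel> tune it from failures, and
allow failed jobs to be retried:

  parallel --delay 1sauto --retries 5 wget ::: url1 url2 url3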
skipping to change at line 814
B<--pipepart> will give data to the program on stdin (standard
input). With B<--fifo> GNU B<parallel> will create a temporary fifo
with the name in B<{}>, so you can do: B<parallel --pipe --fifo wc {}>.
Beware: If data is not read from the fifo, the job will block forever.
Implies B<--pipe> unless B<--pipepart> is used.
See also: B<--cat>.
=item B<--filter> I<filter> (alpha testing)
Only run jobs where I<filter> is true. I<filter> can contain
replacement strings and Perl code. Example:
parallel --filter '{1} < {2}+1' echo ::: {1..3} ::: {1..3}
Outputs: 1,1 1,2 1,3 2,2 2,3 3,3
=item B<--filter-hosts>
Remove down hosts. For each remote host: check that login through ssh
works. If not: do not use this host.
For performance reasons, this check is performed only at the start and
every time B<--sshloginfile> is changed. If a host goes down after
the first check, it will go undetected until B<--sshloginfile> is
changed; B<--retries> can be used to mitigate this.
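For example (the hostnames are placeholders; hosts that cannot be
reached by ssh are silently dropped before any jobs are dispatched):

  parallel --filter-hosts -S server1,server2,server3 echo ::: a b c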
skipping to change at line 858
followed by stderr (standard error).
This takes in the order of 0.5ms per job and depends on the speed of
your disk for larger output. It can be disabled with B<-u>, but this
means output from different commands can get mixed.
B<--group> is the default. Can be reversed with B<-u>.
See also: B<--line-buffer> B<--ungroup>
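To see the difference, compare the interleaving of these two (the
second may mix the output of the two jobs):

  parallel --group 'echo start {}; sleep {}; echo end {}' ::: 2 1
  parallel -u 'echo start {}; sleep {}; echo end {}' ::: 2 1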
=item B<--group-by> I<val>
Group input by value. Combined with B<--pipe>/B<--pipepart>
B<--group-by> groups lines with the same value into a record.
The value can be computed from the full line or from a single column.
I<val> can be:
=over 15
skipping to change at line 1044
replacement variables: B<{column name}>, B<{column name/}>, B<{column
name//}>, B<{column name/.}>, B<{column name.}>, B<{=column name perl
expression =}>, ..
For B<--pipe> the matched header will be prepended to each output.
B<--header :> is an alias for B<--header '.*\n'>.
If I<regexp> is a number, it is a fixed number of lines.
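For example, using the first value of each input source as a column
name:

  parallel --header : echo {colour} {size} ::: colour red blue ::: size S M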
=item B<--hostgroups> (beta testing)
=item B<--hgrp> (beta testing)
Enable hostgroups on arguments. If an argument contains '@' the string
after '@' will be removed and treated as a list of hostgroups on which
this job is allowed to run. If there is no B<--sshlogin> with a
corresponding group, the job will run on any hostgroup.
Example:
parallel --hostgroups \
--sshlogin @grp1/myserver1 -S @grp1+grp2/myserver2 \
skipping to change at line 1292
job continuously while it is running, then lines from the second job
while that is running. It will buffer full lines, but jobs will not
mix. Compare:
parallel -j0 'echo {};sleep {};echo {}' ::: 1 3 2 4
parallel -j0 --lb 'echo {};sleep {};echo {}' ::: 1 3 2 4
parallel -j0 -k --lb 'echo {};sleep {};echo {}' ::: 1 3 2 4
See also: B<--group> B<--ungroup>
=item B<--xapply> (alpha testing)
=item B<--link> (alpha testing)
Link input sources. Read multiple input sources like B<xapply>. If
multiple input sources are given, one argument will be read from each
of the input sources. The arguments can be accessed in the command as
B<{1}> .. B<{>I<n>B<}>, so B<{1}> will be a line from the first input
source, and B<{6}> will refer to the line with the same line number
from the 6th input source.
Compare these two:
skipping to change at line 1364
only start as many as there is memory for. If less than I<size> bytes
are free, no more jobs will be started. If less than 50% I<size> bytes
are free, the youngest job will be killed, and put back on the queue
to be run later.
B<--retries> must be set to determine how many times GNU B<parallel>
should retry a given job.
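A sketch (the job name and inputs are placeholders): only start jobs
when at least 1 GB is free, and retry killed jobs up to 10 times:

  parallel --memfree 1G --retries 10 ./memory_hungry_job ::: input1 input2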
See also: B<--memsuspend>
=item B<--memsuspend> I<size> (beta testing)
Suspend jobs when there is less than 2 * I<size> memory free. The
I<size> can be postfixed with K, M, G, T, P, k, m, g, t, or p which
would multiply the size with 1024, 1048576, 1073741824, 1099511627776,
1125899906842624, 1000, 1000000, 1000000000, 1000000000000, or
1000000000000000, respectively.
If the available memory falls below 2 * I<size>, GNU B<parallel>
will suspend some of the running jobs. If the available memory falls
below I<size>, only one job will be running.
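A sketch (the program name and inputs are placeholders): jobs start
being suspended when free memory drops below 2 GB, and only one job
keeps running below 1 GB:

  parallel --memsuspend 1G ./big_simulation ::: run1 run2 run3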
skipping to change at line 1467
=item B<--outputasfiles>
=item B<--files>
Instead of printing the output to stdout (standard output) the output
of each job is saved in a file and the filename is then printed.
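For example, this prints one temporary file name per job instead of
the jobs' output:

  parallel --files echo ::: a b c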
See also: B<--results>
=item B<--pipe>
=item B<--spreadstdin>
Spread input to jobs on stdin (standard input). Read a block of data
from stdin (standard input) and give one block of data as input to one
job.
The block size is determined by B<--block>. The strings B<--recstart>
and B<--recend> tell GNU B<parallel> how a record starts and/or
ends. The block read will have the final partial record removed before
the block is passed on to the job. The partial record will be
prepended to the next block.
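For example, this splits stdin (standard input) into blocks of
roughly 10 MB and counts the lines in each block (I<bigfile> is a
placeholder):

  cat bigfile | parallel --pipe --block 10M wc -l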
skipping to change at line 1539
=item B<--plus>
Activate additional replacement strings: {+/} {+.} {+..} {+...} {..}
{...} {/..} {/...} {##}. The idea is that '{+foo}' matches the opposite of
'{foo}' and {} = {+/}/{/} = {.}.{+.} = {+/}/{/.}.{+.} = {..}.{+..} =
{+/}/{/..}.{+..} = {...}.{+...} = {+/}/{/...}.{+...}
B<{##}> is the total number of jobs to be run. It is incompatible with
B<-X>/B<-m>/B<--xargs>.
B<{0%}> zero-padded jobslot. (alpha testing)
B<{0#}> zero-padded sequence number. (alpha testing)
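A sketch of both (they are in alpha testing, so the exact behaviour
may change):

  parallel -j2 --plus echo job {0#} on slot {0%} ::: a b c d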
B<{choose_k}> is inspired by n choose k: Given a list of n elements,
choose k. k is the number of input sources and n is the number of
arguments in an input source. The content of the input sources must
be the same and the arguments must be unique.
Shorthands for variables:
{slot} $PARALLEL_JOBSLOT (see {%})
{sshlogin} $PARALLEL_SSHLOGIN
{host} $PARALLEL_SSHHOST
skipping to change at line 1813
If I<name> ends in B<.csv>/B<.tsv> the output will be a CSV-file
named I<name>.
B<.csv> gives a comma separated value file. B<.tsv> gives a TAB
separated value file.
B<-.csv>/B<-.tsv> are special: It will give the file on stdout
(standard output).
B<JSON file output>
If I<name> ends in B<.json> the output will be a JSON-file
named I<name>.
B<-.json> is special: It will give the file on stdout (standard
output).
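For example, this writes the JSON records for each job to stdout
(standard output):

  parallel --results -.json echo ::: a b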
B<Replacement string output file>
If I<name> contains a replacement string and the replaced result does
not end in /, then the standard output will be stored in a file named
by this result. Standard error will be stored in the same file name
with '.err' added, and the sequence number will be stored in the same
file name with '.seq' added.
E.g.
parallel --results my_{} echo ::: foo bar baz
skipping to change at line 2354
If B<--sqlworker> runs on the local machine, the hostname in the SQL
table will not be ':' but instead the hostname of the machine.
=item B<--ssh> I<sshcommand>
GNU B<parallel> defaults to using B<ssh> for remote access. This can
be overridden with B<--ssh>. It can also be set on a per server
basis (see B<--sshlogin>).
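For example (the key path and hostname are placeholders):

  parallel --ssh 'ssh -i ~/.ssh/mykey' -S server.example.com echo ::: a b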
=item B<--sshdelay> I<mytime>
Delay starting next ssh by I<mytime>. GNU B<parallel> will not start
another ssh for the next I<mytime>.
For details on I<mytime> see B<--delay>.
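For example, wait 200 ms between each new ssh connection (the
hostname is a placeholder):

  parallel --sshdelay 0.2 -S server.example.com echo ::: a b c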
=item B<-S> I<[@hostgroups/][ncpus/]sshlogin[,[@hostgroups/][ncpus/]sshlogin[,...]]>
=item B<-S> I<@hostgroup>
skipping to change at line 2486
=item B<--slotreplace> I<replace-str>
Use the replacement string I<replace-str> instead of B<{%}> for
job slot number.
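For example, using the string I<SLOT> instead of B<{%}> (the choice
of string is arbitrary):

  parallel --slotreplace SLOT -j2 'echo SLOT: {}' ::: a b c d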
=item B<--silent>
Silent. The job to be run will not be printed. This is the default.
Can be reversed with B<-v>.
=item B<--template> I<file>=I<repl> (alpha testing)
=item B<--tmpl> I<file>=I<repl> (alpha testing)
Copy I<file> to I<repl>. All replacement strings in the contents of
I<file> will be replaced. All replacement strings in the name I<repl>
will be replaced.
With B<--cleanup> the new file will be removed when the job is done.
If I<my.tmpl> contains this:
Xval: {x}
Yval: {y}
FixedValue: 9
# x with 2 decimals
DecimalX: {=x $_=sprintf("%.2f",$_) =}
TenX: {=x $_=$_*10 =}
RandomVal: {=1 $_=rand() =}
it can be used like this:
myprog() { echo Using "$@"; cat "$@"; }
export -f myprog
parallel --cleanup --header : --tmpl my.tmpl={#}.t myprog {#}.t \
::: x 1.234 2.345 3.45678 ::: y 1 2 3
=item B<--tty>
Open terminal tty. If GNU B<parallel> is used for starting a program
that accesses the tty (such as an interactive program) then this
option may be needed. It will default to starting only one job at a
time (i.e. B<-j1>), not buffer the output (i.e. B<-u>), and it will
open a tty for the job.
You can of course override B<-j1> and B<-u>.
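For example, to edit files one at a time in an interactive editor
(the file names are placeholders):

  parallel --tty vim ::: file1 file2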
skipping to change at line 2850
line. If B<{}> is used multiple times each B<{}> will be replaced
with all the arguments.
Support for B<--xargs> with B<--sshlogin> is limited and may fail.
See also B<-X> for context replace. If in doubt use B<-X> as that will
most likely do what is needed.
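For example, this fits as many arguments on each command line as
possible, so B<echo> will normally only be run once:

  seq 10 | parallel --xargs echo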
=back
=head1 EXAMPLES
=head2 EXAMPLE: Working as xargs -n1. Argument appending
GNU B<parallel> can work similarly to B<xargs -n1>.
To compress all html files using B<gzip> run:
find . -name '*.html' | parallel gzip --best
If the file names may contain a newline use B<-0>. Substitute FOO BAR with
FUBAR in all files in this dir and subdirs:
find . -type f -print0 | \
parallel -q0 perl -i -pe 's/FOO BAR/FUBAR/g'
Note B<-q> is needed because of the space in 'FOO BAR'.
=head2 EXAMPLE: Simple network scanner
B<prips> can generate IP-addresses from CIDR notation. With GNU
B<parallel> you can build a simple network scanner to see which
addresses respond to B<ping>:
prips 130.229.16.0/20 | \
parallel --timeout 2 -j0 \
'ping -c 1 {} >/dev/null && echo {}' 2>/dev/null
=head2 EXAMPLE: Reading arguments from command line
GNU B<parallel> can take the arguments from command line instead of
stdin (standard input). To compress all html files in the current dir
using B<gzip> run:
parallel gzip --best ::: *.html
To convert *.wav to *.mp3 using LAME running one process per CPU run:
parallel lame {} -o {.}.mp3 ::: *.wav
=head2 EXAMPLE: Inserting multiple arguments
When moving a lot of files like this: B<mv *.log destdir> you will
sometimes get the error:
bash: /bin/mv: Argument list too long
because there are too many files. You can instead do:
ls | grep -E '\.log$' | parallel mv {} destdir
This will run B<mv> for each file. It can be done faster if B<mv> gets
as many arguments as will fit on the line:
ls | grep -E '\.log$' | parallel -m mv {} destdir
In many shells you can also use B<printf>:
printf '%s\0' *.log | parallel -0 -m mv {} destdir
=head2 EXAMPLE: Context replace
To remove the files I<pict0000.jpg> .. I<pict9999.jpg> you could do:
seq -w 0 9999 | parallel rm pict{}.jpg
You could also do:
seq -w 0 9999 | perl -pe 's/(.*)/pict$1.jpg/' | parallel -m rm
The first will run B<rm> 10000 times, while the last will only run
B<rm> as many times as needed to keep the command line length short
enough to avoid B<Argument list too long> (it typically runs 1-2 times).
You could also run:
seq -w 0 9999 | parallel -X rm pict{}.jpg
This will also only run B<rm> as many times as needed to keep the command
line length short enough.
=head2 EXAMPLE: Compute intensive jobs and substitution
If ImageMagick is installed this will generate a thumbnail of a jpg
file:
convert -geometry 120 foo.jpg thumb_foo.jpg
This will run with number-of-cpus jobs in parallel for all jpg files
in a directory:
ls *.jpg | parallel convert -geometry 120 {} thumb_{}
skipping to change at line 2959
(e.g. running B<convert -geometry 120 ./foo/bar.jpg
thumb_./foo/bar.jpg> would clearly be wrong). The command will
generate files like ./foo/bar.jpg_thumb.jpg.
Use B<{.}> to avoid the extra .jpg in the file name. This command will
make files like ./foo/bar_thumb.jpg:
find . -name '*.jpg' | \
parallel convert -geometry 120 {} {.}_thumb.jpg
=head2 EXAMPLE: Substitution and redirection
This will generate an uncompressed version of .gz-files next to the .gz-file:
parallel zcat {} ">"{.} ::: *.gz
Quoting of > is necessary to postpone the redirection. Another
solution is to quote the whole command:
parallel "zcat {} >{.}" ::: *.gz
Other special shell characters (such as * ; $ > < | >> <<) also need
to be put in quotes, as they may otherwise be interpreted by the shell
and not given to GNU B<parallel>.
=head2 EXAMPLE: Composed commands
A job can consist of several commands. This will print the number of
files in each directory:
ls | parallel 'echo -n {}" "; ls {}|wc -l'
To put the output in a file called <name>.dir:
ls | parallel '(echo -n {}" "; ls {}|wc -l) >{}.dir'
skipping to change at line 3008
Create a mirror directory with the same filenames except all files and
symlinks are empty files.
cp -rs /the/source/dir mirror_dir
find mirror_dir -type l | parallel -m rm {} '&&' touch {}
Find the files in a list that do not exist:
cat file_list | parallel 'if [ ! -e {} ] ; then echo {}; fi'
=head2 EXAMPLE: Composed command with perl replacement string
You have a bunch of files. You want them sorted into dirs. The dir of
each file should be named the first letter of the file name.
parallel 'mkdir -p {=s/(.).*/$1/=}; mv {} {=s/(.).*/$1/=}' ::: *
=head2 EXAMPLE: Composed command with multiple input sources
You have a dir with files named as 24 hours in 5 minute intervals:
00:00, 00:05, 00:10 .. 23:55. You want to find the files missing:
parallel [ -f {1}:{2} ] "||" echo {1}:{2} does not exist \
::: {00..23} ::: {00..55..5}
=head2 EXAMPLE: Calling Bash functions
If the composed command is longer than a line, it becomes hard to
read. In Bash you can use functions. Just remember to B<export -f> the
function.
doit() {
echo Doing it for $1
sleep 2
echo Done with $1
}
skipping to change at line 3055
To do this on remote servers you need to transfer the function using
B<--env>:
parallel --env doit -S server doit ::: 1 2 3
parallel --env doubleit -S server doubleit ::: 1 2 3 ::: a b
If your environment (aliases, variables, and functions) is small you
can copy the full environment without having to B<export -f>
anything. See B<env_parallel>.
=head2 EXAMPLE: Function tester
To test a program with different parameters:
tester() {
if (eval "$@") >&/dev/null; then
perl -e 'printf "\033[30;102m[ OK ]\033[0m @ARGV\n"' "$@"
else
perl -e 'printf "\033[30;101m[FAIL]\033[0m @ARGV\n"' "$@"
fi
}
export -f tester
parallel tester my_program ::: arg1 arg2
parallel tester exit ::: 1 0 2 0
If B<my_program> fails a red FAIL will be printed followed by the failing
command; otherwise a green OK will be printed followed by the command.
=head2 EXAMPLE: Continuously show the latest line of output
It can be useful to monitor the output of running jobs.
This shows the most recent output line until a job finishes, after
which the output of the job is printed in full:
parallel '{} | tee >(cat >&3)' ::: 'command 1' 'command 2' \
3> >(perl -ne '$|=1;chomp;printf"%.'$COLUMNS's\r",$_." "x100')
=head2 EXAMPLE: Log rotate
Log rotation renames a logfile to an extension with a higher number:
log.1 becomes log.2, log.2 becomes log.3, and so on. The oldest log is
removed. To avoid overwriting files the process starts backwards from
the high number to the low number. This will keep 10 old versions of
the log:
seq 9 -1 1 | parallel -j1 mv log.{} log.'{= $_++ =}'
mv log log.1
=head2 EXAMPLE: Removing file extension when processing files
When processing files, removing the file extension using B<{.}> is
often useful.
Create a directory for each zip-file and unzip it in that dir:
parallel 'mkdir {.}; cd {.}; unzip ../{}' ::: *.zip
Recompress all .gz files in current directory using B<bzip2> running 1
job per CPU in parallel:
skipping to change at line 3117
Convert all WAV files to MP3 using LAME:
find sounddir -type f -name '*.wav' | parallel lame {} -o {.}.mp3
Put all converted files in the same directory:
find sounddir -type f -name '*.wav' | \
parallel lame {} -o mydir/{/.}.mp3
=head2 EXAMPLE: Removing strings from the argument
If you have a directory with tar.gz files and want these extracted in
the corresponding dir (e.g. foo.tar.gz will be extracted in the dir
foo) you can do:
parallel --plus 'mkdir {..}; tar -C {..} -xf {}' ::: *.tar.gz
If you want to remove a different ending, you can use {%string}:
parallel --plus echo {%_demo} ::: mycode_demo keep_demo_here
You can also remove a starting string with {#string}:
parallel --plus echo {#demo_} ::: demo_mycode keep_demo_here
To remove a string anywhere you can use regular expressions with
{/regexp/replacement} and leave the replacement empty:
parallel --plus echo {/demo_/} ::: demo_mycode remove_demo_here
=head2 EXAMPLE: Download 24 images for each of the past 30 days
Let us assume a website stores images like:
http://www.example.com/path/to/YYYYMMDD_##.jpg
where YYYYMMDD is the date and ## is the number 01-24. This will
download images for the past 30 days:
getit() {
date=$(date -d "today -$1 days" +%Y%m%d)
num=$2
echo wget http://www.example.com/path/to/${date}_${num}.jpg
}
export -f getit
parallel getit ::: $(seq 30) ::: $(seq -w 24)
B<$(date -d "today -$1 days" +%Y%m%d)> will give the dates in
YYYYMMDD with B<$1> days subtracted.
=head2 EXAMPLE: Download world map from NASA
NASA provides tiles to download on earthdata.nasa.gov. Download tiles
for Blue Marble world map and create a 10240x20480 map.
base=https://map1a.vis.earthdata.nasa.gov/wmts-geo/wmts.cgi
service="SERVICE=WMTS&REQUEST=GetTile&VERSION=1.0.0"
layer="LAYER=BlueMarble_ShadedRelief_Bathymetry"
set="STYLE=&TILEMATRIXSET=EPSG4326_500m&TILEMATRIX=5"
tile="TILEROW={1}&TILECOL={2}"
format="FORMAT=image%2Fjpeg"
url="$base?$service&$layer&$set&$tile&$format"
parallel -j0 -q wget "$url" -O {1}_{2}.jpg ::: {0..19} ::: {0..39}
parallel eval convert +append {}_{0..39}.jpg line{}.jpg ::: {0..19}
convert -append line{0..19}.jpg world.jpg
=head2 EXAMPLE: Download Apollo-11 images from NASA using jq
Search NASA using their API to get JSON for images related to 'apollo
11' that have 'moon landing' in the description.
The search query returns JSON containing URLs to JSON containing
collections of pictures. One of the pictures in each of these
collections is I<large>.
B<wget> is used to get the JSON for the search query. B<jq> is then
used to extract the URLs of the collections. B<parallel> then calls
skipping to change at line 3202
q="q=apollo 11" q="q=apollo 11"
description="description=moon landing" description="description=moon landing"
media_type="media_type=image" media_type="media_type=image"
wget -O - "$base?$q&$description&$media_type" | wget -O - "$base?$q&$description&$media_type" |
jq -r .collection.items[].href | jq -r .collection.items[].href |
parallel wget -O - | parallel wget -O - |
jq -r .[] | jq -r .[] |
grep large | grep large |
parallel wget parallel wget
=head2 EXAMPLE: Download video playlist in parallel
B<youtube-dl> is an excellent tool to download videos. It cannot,
however, download videos in parallel. This takes a playlist and
downloads 10 videos in parallel.
url='youtu.be/watch?v=0wOf2Fgi3DE&list=UU_cznB5YZZmvAmeq7Y3EriQ'
export url
youtube-dl --flat-playlist "https://$url" |
parallel --tagstring {#} --lb -j10 \
youtube-dl --playlist-start {#} --playlist-end {#} '"https://$url"'
=head2 EXAMPLE: Prepend last modified date (ISO8601) to file name
parallel mv {} '{= $a=pQ($_); $b=$_;' \
'$_=qx{date -r "$a" +%FT%T}; chomp; $_="$_ $b" =}' ::: *
B<{=> and B<=}> mark a perl expression. B<pQ> perl-quotes the
string. B<date +%FT%T> is the date in ISO8601 with time.
=head2 EXAMPLE: Save output in ISO8601 dirs
Save output from B<ps aux> every second into dirs named
yyyy-mm-ddThh:mm:ss+zz:zz.
seq 1000 | parallel -N0 -j1 --delay 1 \
--results '{= $_=`date -Isec`; chomp=}/' ps aux
=head2 EXAMPLE: Digital clock with "blinking" :
The : in a digital clock blinks. To make every other line have a ':'
and the rest a ' ' a perl expression is used to look at the 3rd input
source. If the value modulo 2 is 1: Use ":" otherwise use " ":
parallel -k echo {1}'{=3 $_=$_%2?":":" "=}'{2}{3} \
::: {0..12} ::: {0..5} ::: {0..9}
=head2 EXAMPLE: Aggregating content of files
This:
parallel --header : echo x{X}y{Y}z{Z} \> x{X}y{Y}z{Z} \
::: X {1..5} ::: Y {01..10} ::: Z {1..5}
will generate the files x1y01z1 .. x5y10z5. If you want to aggregate
the output grouping on x and z you can do this:
parallel eval 'cat {=s/y01/y*/=} > {=s/y01//=}' ::: *y01*
For all values of x and z it runs commands like:
cat x1y*z1 > x1z1
So you end up with x1z1 .. x5z5 each containing the content of all
values of y.
=head2 EXAMPLE: Breadth first parallel web crawler/mirrorer
The script below will crawl and mirror a URL in parallel. It
downloads first pages that are 1 click down, then 2 clicks down, then
3; instead of the normal depth first, where the first link on
each page is fetched first.
Run like this:
PARALLEL=-j100 ./parallel-crawl http://gatt.org.yeslab.org/
skipping to change at line 3304
wget -qm -l1 -Q1 {} \; echo Spidered: {} \>\&2 |
perl -ne 's/#.*//; s/\s+\d+.\s(\S+)$/$1/ and
do { $seen{$1}++ or print }' |
grep -F $BASEURL |
grep -v -x -F -f $SEEN | tee -a $SEEN > $URLLIST2
mv $URLLIST2 $URLLIST
done
rm -f $URLLIST $URLLIST2 $SEEN
=head2 EXAMPLE: Process files from a tar file while unpacking
If the files to be processed are in a tar file then unpacking one file
and processing it immediately may be faster than first unpacking all
files.
tar xvf foo.tgz | perl -ne 'print $l;$l=$_;END{print $l}' | \
parallel echo
The Perl one-liner is needed to make sure the file is complete before
handing it to GNU B<parallel>.
=head2 EXAMPLE: Rewriting a for-loop and a while-read-loop
for-loops like this:
(for x in `cat list` ; do
do_something $x
done) | process_output
and while-read-loops like this:
cat list | (while read x ; do
skipping to change at line 3381
can both be rewritten as:
doit() {
x=$1
do_something $x
[... 100 lines that do something with $x ...]
}
export -f doit
cat list | parallel doit
=head2 EXAMPLE: Rewriting nested for-loops
Nested for-loops like this:
(for x in `cat xlist` ; do
for y in `cat ylist` ; do
do_something $x $y
done
done) | process_output
can be written like this:
skipping to change at line 3407
(for colour in red green blue ; do
for size in S M L XL XXL ; do
echo $colour $size
done
done) | sort
can be written like this:
parallel echo {1} {2} ::: red green blue ::: S M L XL XXL | sort
=head2 EXAMPLE: Finding the lowest difference between files
B<diff> is good for finding differences in text files. B<diff | wc -l>
gives an indication of the size of the difference. To find the
differences between all files in the current dir do:
parallel --tag 'diff {1} {2} | wc -l' ::: * ::: * | sort -nk3
This way it is possible to see if some files are closer to other
files.
=head2 EXAMPLE: for-loops with column names
When doing multiple nested for-loops it can be easier to keep track of
the loop variable if it is named instead of just having a number. Use
B<--header :> to let the first argument be a named alias for the
positional replacement string:
parallel --header : echo {colour} {size} \
::: colour red green blue ::: size S M L XL XXL
This also works if the input file is a file with columns:
cat addressbook.tsv | \
parallel --colsep '\t' --header : echo {Name} {E-mail address}
=head2 EXAMPLE: All combinations in a list
GNU B<parallel> makes all combinations when given two lists.
To make all combinations in a single list with unique values, you
repeat the list and use replacement string B<{choose_k}>:
parallel --plus echo {choose_k} ::: A B C D ::: A B C D
parallel --plus echo 2{2choose_k} 1{1choose_k} ::: A B C D ::: A B C D
B<{choose_k}> works for any number of input sources:
parallel --plus echo {choose_k} ::: A B C D ::: A B C D ::: A B C D
=head2 EXAMPLE: From a to b and b to c
Assume you have input like:
aardvark
babble
cab
dab
each
and want to run combinations like:
skipping to change at line 3475
If the input is in the file in.txt:
parallel echo {1} - {2} ::::+ <(head -n -1 in.txt) <(tail -n +2 in.txt)
If the input is in the array $a here are two solutions:
seq $((${#a[@]}-1)) | \
env_parallel --env a echo '${a[{=$_--=}]} - ${a[{}]}'
parallel echo {1} - {2} ::: "${a[@]::${#a[@]}-1}" :::+ "${a[@]:1}"
=head2 EXAMPLE: Count the differences between all files in a dir
Using B<--results> the results are saved in /tmp/diffcount*.
parallel --results /tmp/diffcount "diff -U 0 {1} {2} | \
tail -n +3 |grep -v '^@'|wc -l" ::: * ::: *
To see the difference between file A and file B look at the file
'/tmp/diffcount/1/A/2/B'.
=head2 EXAMPLE: Speeding up fast jobs
Starting a job on the local machine takes around 10 ms. This can be a
big overhead if the job takes very few ms to run. Often you can group
small jobs together using B<-X> which will make the overhead less
significant. Compare the speed of these:
seq -w 0 9999 | parallel touch pict{}.jpg
seq -w 0 9999 | parallel -X touch pict{}.jpg
If your program cannot take multiple arguments, then you can use GNU
skipping to change at line 3527
E.g.
mygenerator() {
seq 10000000 | perl -pe 'print "echo This is fast job number "';
}
mygenerator | parallel --pipe --block 10M sh
The overhead is 100000 times smaller, namely around 100 nanoseconds per
job.
=head2 EXAMPLE: Using shell variables
When using shell variables you need to quote them correctly as they
may otherwise be interpreted by the shell.
Notice the difference between:
ARR=("My brother's 12\" records are worth <\$\$\$>"'!' Foo Bar)
parallel echo ::: ${ARR[@]} # This is probably not what you want
and:
skipping to change at line 3565
parallel echo "'$VAR'" ::: '!' parallel echo "'$VAR'" ::: '!'
If you use them in a function you just quote as you normally would do: If you use them in a function you just quote as you normally would do:
VAR="My brother's 12\" records are worth <\$\$\$>" VAR="My brother's 12\" records are worth <\$\$\$>"
export VAR export VAR
myfunc() { echo "$VAR" "$1"; } myfunc() { echo "$VAR" "$1"; }
export -f myfunc export -f myfunc
parallel myfunc ::: '!' parallel myfunc ::: '!'
=head2 EXAMPLE: Group output lines
When running jobs that output data, you often do not want the output
of multiple jobs to run together. GNU B<parallel> defaults to grouping
the output of each job, so the output is printed when the job
finishes. If you want full lines to be printed while the job is
running you can use B<--line-buffer>. If you want output to be
printed as soon as possible you can use B<-u>.
Compare the output of:
parallel wget --limit-rate=100k \
https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
::: {12..16}
parallel --line-buffer wget --limit-rate=100k \
https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
::: {12..16}
parallel -u wget --limit-rate=100k \
https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
::: {12..16}
=head2 EXAMPLE: Tag output lines
GNU B<parallel> groups the output lines, but it can be hard to see
where the different jobs begin. B<--tag> prepends the argument to make
that more visible:
parallel --tag wget --limit-rate=100k \
https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
::: {12..16}
B<--tag> works with B<--line-buffer> but not with B<-u>:
parallel --tag --line-buffer wget --limit-rate=100k \
https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
::: {12..16}
Check the uptime of the servers in I<~/.parallel/sshloginfile>:
parallel --tag -S .. --nonall uptime
=head2 EXAMPLE: Colorize output
Give each job a new color. Most terminals support ANSI colors with the
escape code "\033[30;3Xm" where 0 <= X <= 7:
seq 10 | \
parallel --tagstring '\033[30;3{=$_=++$::color%8=}m' seq {}
parallel --rpl '{color} $_="\033[30;3".(++$::color%8)."m"' \
--tagstring {color} seq {} ::: {1..10}
To get rid of the initial \t (which comes from B<--tagstring>):
... | perl -pe 's/\t//'

=head2 EXAMPLE: Keep order of output same as order of input

Normally the output of a job will be printed as soon as it
completes. Sometimes you want the order of the output to remain the
same as the order of the input. This is often important if the output
is used as input for another system. B<-k> will make sure the order of
output will be in the same order as input even if later jobs end
before earlier jobs.

Append a string to every line in a text file:

  cat textfile | parallel -k echo {} append_string

To download byte 10000000-19999999 you can use B<curl>:

  curl -r 10000000-19999999 http://example.com/the/big/file >file.part

To download a 1 GB file we need 100 10MB chunks downloaded and
combined in the correct order.

  seq 0 99 | parallel -k curl -r \
    {}0000000-{}9999999 http://example.com/the/big/file > file

=head2 EXAMPLE: Parallel grep

B<grep -r> greps recursively through directories. On multicore CPUs
GNU B<parallel> can often speed this up.

  find . -type f | parallel -k -j150% -n 1000 -m grep -H -n STRING {}

This will run 1.5 jobs per CPU, and give 1000 arguments to B<grep>.

=head2 EXAMPLE: Grepping n lines for m regular expressions.

The simplest solution to grep a big file for a lot of regexps is:

  grep -f regexps.txt bigfile

Or if the regexps are fixed strings:

  grep -F -f regexps.txt bigfile

There are 3 limiting factors: CPU, RAM, and disk I/O.

RAM is easy to measure: If the B<grep> process takes up most of your
free memory (e.g. when running B<top>), then RAM is a limiting factor.

CPU is also easy to measure: If the B<grep> takes >90% CPU in B<top>,
then the CPU is a limiting factor, and parallelization will speed this
up.

It is harder to see if disk I/O is the limiting factor, and depending
on the disk system it may be faster or slower to parallelize. The only
way to know for certain is to test and measure.
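
A rough way to test, as a sketch (assuming the same I<regexps.txt>
and I<bigfile> as above; clear the disk cache between runs for honest
numbers):

  time grep -f regexps.txt bigfile >/dev/null
  time parallel --pipepart --block 100M -a bigfile -k \
    grep -f regexps.txt >/dev/null

If the parallelized run is not faster, disk I/O is likely the
limiting factor.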

=head3 Limiting factor: RAM

The normal B<grep -f regexps.txt bigfile> works no matter the size of
bigfile, but if regexps.txt is so big it cannot fit into memory, then
you need to split this.

B<grep -F> takes around 100 bytes of RAM and B<grep> takes about 500
bytes of RAM per 1 byte of regexp. So if regexps.txt is 1% of your
RAM, then it may be too big (e.g. 100 MB of regexps would need roughly
50 GB of RAM with B<grep>).

If you can convert your regexps into fixed strings, do that: B<grep -F>
needs far less RAM than B<grep>.

If the regexps still take too much RAM, split I<regexps.txt> into
blocks sized to the free memory per CPU and run one B<grep -F> per
block; below, I<$percpu> is assumed to hold such a block size.
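
One way to compute it, as a sketch (the divisor 200 is illustrative
headroom based on B<grep -F>'s ~100 bytes of RAM per regexp byte):

  # free RAM in KB, divided among the cores, with headroom for grep -F
  free=$(free | awk '/^Mem/ { print $4 }')
  percpu=$((free / 200 / $(parallel --number-of-cores)))k

With I<$percpu> set, B<grep -n> prefixes each match with its line
number, so B<sort -un> can restore the input order and remove
duplicates: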

  parallel --pipepart -a regexps.txt --block $percpu --compress \
    grep -F -f - -n bigfile | \
    sort -un | perl -pe 's/^\d+://'

If you can live with duplicated lines and wrong order, it is faster to do:

  parallel --pipepart -a regexps.txt --block $percpu --compress \
    grep -F -f - bigfile

=head3 Limiting factor: CPU

If the CPU is the limiting factor, parallelization should be done on
the regexps:

  cat regexps.txt | parallel --pipe -L1000 --roundrobin --compress \
    grep -f - -n bigfile | \
    sort -un | perl -pe 's/^\d+://'

The command will start one B<grep> per CPU and read I<bigfile> one
time per CPU, but as that is done in parallel, all reads except the
first will be cached in RAM.

=head3 Limiting factor: disk I/O

If disk I/O is the limiting factor, split I<bigfile> instead of the
regexps:

  parallel --pipepart --block 100M -a bigfile -k --compress \
    grep -f regexps.txt

This will split I<bigfile> into 100MB chunks and run B<grep> on each of
these chunks. To parallelize both reading of I<bigfile> and I<regexps.txt>
combine the two using B<--cat>:

  parallel --pipepart --block 100M -a bigfile --cat cat regexps.txt \
    \| parallel --pipe -L1000 --roundrobin grep -f - {}

If a line matches multiple regexps, the line may be duplicated.

=head3 Bigger problem

If the problem is too big to be solved by this, you are probably ready
for Lucene.

=head2 EXAMPLE: Using remote computers

To run commands on a remote computer SSH needs to be set up and you
must be able to login without entering a password (the commands
B<ssh-copy-id>, B<ssh-agent>, and B<sshpass> may help you do that).
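
For key-based login this can be as simple as this sketch (creates a
key if you have none, then copies it to the server):

  ssh-keygen -t ed25519
  ssh-copy-id server.example.com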

If you need to login to a whole cluster, you typically do not want to
accept the host key for every host. You want to accept them the first
time and be warned if they are ever changed. To do that:

  # Add the servers to the sshloginfile
  (echo servera; echo serverb) > .parallel/my_cluster
  # Make sure .ssh/config exist
  touch .ssh/config
  cp .ssh/config .ssh/config.backup
  # Disable StrictHostKeyChecking temporarily
  (echo 'Host *'; echo StrictHostKeyChecking no) >> .ssh/config
  parallel --slf my_cluster --nonall true
  # Remove the disabling of StrictHostKeyChecking
  mv .ssh/config.backup .ssh/config

GNU B<parallel> will try to determine the number of CPUs on each of
the remote computers, and run one job per CPU - even if the remote
computers do not have the same number of CPUs.

If the number of CPUs on the remote computers is not identified
correctly, the number of CPUs can be added in front. Here the computer
has 8 CPUs:

  seq 10 | parallel --sshlogin 8/server.example.com echo

=head2 EXAMPLE: Transferring of files

To recompress gzipped files with B<bzip2> using a remote computer run:

  find logs/ -name '*.gz' | \
    parallel --sshlogin server.example.com \
    --transfer "zcat {} | bzip2 -9 >{.}.bz2"

This will list the .gz-files in the I<logs> directory and all
directories below. Then it will transfer the files to
I<server.example.com> to the corresponding directory in
I<$HOME/logs>, where the command will be run on the transferred file.

To have the resulting .bz2-files transferred back and the originals
cleaned up on the remote side, use B<--trc {.}.bz2> with a list of
computers from the file I<mycomputers>:

  find logs/ -name '*.gz' | parallel --sshloginfile mycomputers \
    --trc {.}.bz2 "zcat {} | bzip2 -9 >{.}.bz2"

If the file I<~/.parallel/sshloginfile> contains the list of computers
the special shorthand I<-S ..> can be used:

  find logs/ -name '*.gz' | parallel -S .. \
    --trc {.}.bz2 "zcat {} | bzip2 -9 >{.}.bz2"

=head2 EXAMPLE: Distributing work to local and remote computers

Convert *.mp3 to *.ogg running one process per CPU on the local
computer and server2:

  parallel --trc {.}.ogg -S server2,: \
    'mpg321 -w - {} | oggenc -q0 - -o {.}.ogg' ::: *.mp3

=head2 EXAMPLE: Running the same command on remote computers

To run the command B<uptime> on remote computers you can do:

  parallel --tag --nonall -S server1,server2 uptime

B<--nonall> reads no arguments. If you have a list of jobs you want
to run on each computer you can do:

  parallel --tag --onall -S server1,server2 echo ::: 1 2 3

Remove B<--tag> if you do not want the sshlogin added before the
output.

If you have a lot of hosts, use B<-j0> to access more hosts in parallel.
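
E.g. this sketch, assuming the hosts are listed in
I<~/.parallel/sshloginfile>:

  parallel -j0 --tag --nonall -S .. uptime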

=head2 EXAMPLE: Running 'sudo' on remote computers

Put the password into passwordfile then run:

  parallel --ssh 'cat passwordfile | ssh' --nonall \
    -S user@server1,user@server2 sudo -S ls -l /root

=head2 EXAMPLE: Using remote computers behind NAT wall

If the workers are behind a NAT wall, you need some trickery to get to
them.

If you can B<ssh> to a jumphost, and reach the workers from there,
then the obvious solution would be this, but it B<does not work>:

  parallel --ssh 'ssh jumphost ssh' -S host1 echo ::: DOES NOT WORK

It does not work because the command is dequoted by B<ssh> twice,
whereas GNU B<parallel> only expects it to be dequoted once.

One solution is to put this in B<~/.ssh/config>:

  Host host1 host2 host3
    ProxyCommand ssh jumphost.domain nc -w 1 %h 22

It requires B<nc> (netcat) to be installed on the jumphost. With this
you can simply:

  parallel -S host1,host2,host3 echo ::: This does work

=head3 No jumphost, but port forwards

If there is no jumphost but each server has port 22 forwarded from the
firewall (e.g. the firewall's port 22001 = port 22 on host1, 22002 = host2,
22003 = host3) then you can use B<~/.ssh/config>:

  Host host1.v
    Port 22001
  Host host2.v
    Port 22002
  Host host3.v
    Port 22003
  Host *.v
    Hostname firewall

And then use host{1..3}.v as normal hosts:

  parallel -S host1.v,host2.v,host3.v echo ::: a b c

=head3 No jumphost, no port forwards

If ports cannot be forwarded, you need some sort of VPN to traverse
the NAT-wall. TOR is one option for that, as it is very easy to get
working.

You need to install TOR and set up a hidden service. In B<torrc> put:

  HiddenServiceDir /var/lib/tor/hidden_service/
  HiddenServicePort 22 127.0.0.1:22

Then restart TOR and look up the B<.onion> address of each host in
I</var/lib/tor/hidden_service/hostname>. If the hosts are accessible
through TOR:

  parallel --ssh 'torsocks ssh' -S izjafdceobowklhz.onion \
    -S zfcdaeiojoklbwhz.onion,auclucjzobowklhi.onion echo ::: a b c

If not all hosts are accessible through TOR:

  parallel -S 'torsocks ssh izjafdceobowklhz.onion,host2,host3' \
    echo ::: a b c

See more B<ssh> tricks on
https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_Jump_Hosts

=head2 EXAMPLE: Parallelizing rsync

B<rsync> is a great tool, but sometimes it will not fill up the
available bandwidth. Running multiple B<rsync> in parallel can fix
this.

  cd src-dir
  find . -type f |
    parallel -j10 -X rsync -zR -Ha ./{} fooserver:/dest-dir/

Adjust B<-j10> until you find the optimal number.
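
A crude way to compare settings, as a sketch (B<rsync> skips files
that are already transferred, so empty I</dest-dir/> on fooserver
between runs to keep the timings comparable):

  for j in 5 10 20; do
    echo "== -j$j =="
    time ( find . -type f |
      parallel -j$j -X rsync -zR -Ha ./{} fooserver:/dest-dir/ )
  done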

B<rsync -R> creates the needed subdirectories, so all files are not
put into the same dir. The B<./> in B<./{}> makes B<rsync> transfer a
path like I<sub/dir/file> as:

  rsync -zR ././sub/dir/file fooserver:/dest-dir/

The B</./> is what B<rsync -R> works on.

If you are unable to push data, but need to pull them and the files
are called digits.png (e.g. 000000.png) you might be able to do:

  seq -w 0 99 | parallel rsync -Havessh fooserver:src/*{}.png destdir/

=head2 EXAMPLE: Use multiple inputs in one command

Copy files like foo.es.ext to foo.ext:

  ls *.es.* | perl -pe 'print; s/\.es//' | parallel -N2 cp {1} {2}

The perl command spits out 2 lines for each input. GNU B<parallel>
takes 2 inputs (using B<-N2>) and replaces {1} and {2} with the inputs.

Count in binary:

  parallel -k echo ::: 0 1 ::: 0 1 ::: 0 1 ::: 0 1 ::: 0 1 ::: 0 1

Convert files from all subdirs to PNG-files with consecutive numbers
(useful for making input PNGs for B<ffmpeg>):

  parallel --link -a <(find . -type f | sort) \
    -a <(seq $(find . -type f|wc -l)) convert {1} {2}.png

Alternative version:

  find . -type f | sort | parallel convert {} {#}.png

=head2 EXAMPLE: Use a table as input

Content of table_file.tsv:

  foo<TAB>bar
  baz <TAB> quux

To run:

  cmd -o bar -i foo
  cmd -o quux -i baz

you can run:

  parallel -a table_file.tsv --colsep '\t' cmd -o {2} -i {1}

Note: The default for GNU B<parallel> is to remove the spaces around
the columns. To keep the spaces:

  parallel -a table_file.tsv --trim n --colsep '\t' cmd -o {2} -i {1}

=head2 EXAMPLE: Output to database

GNU B<parallel> can output to a database table and a CSV-file:

  dburl=csv:///%2Ftmp%2Fmydir
  dbtableurl=$dburl/mytable.csv
  parallel --sqlandworker $dbtableurl seq ::: {1..10}

It is rather slow and takes up a lot of CPU time because GNU
B<parallel> parses the whole CSV file for each update.

A better approach is to use an SQLite-base and then convert that:

  dburl=sqlite3:///%2Ftmp%2Fmydatabase.sqlite
  dbtableurl=$dburl/mytable
  parallel --sqlandworker $dbtableurl seq ::: {1..10}
  sql -p -B $dburl "SELECT * FROM mytable;" > mytable.tsv

Or MySQL:

  dburl=mysql://user:pass@host/mydb
  dbtableurl=$dburl/mytable
  parallel --sqlandworker $dbtableurl seq ::: {1..10}
  sql -p -B $dburl "SELECT * FROM mytable;" > mytable.tsv

  perl -pe 's/"/""/g; s/\t/","/g; s/^/"/; s/$/"/;
    %s=("\\" => "\\", "t" => "\t", "n" => "\n");
    s/\\([\\tn])/$s{$1}/g;' mytable.tsv

=head2 EXAMPLE: Output to CSV-file for R

If you have no need for the advanced job distribution control that a
database provides, but you simply want output into a CSV file that you
can read into R or LibreCalc, then you can use B<--results>:

  parallel --results my.csv seq ::: 10 20 30
  R
  > mydf <- read.csv("my.csv");
  > print(mydf[2,])
  > write(as.character(mydf[2,c("Stdout")]),'')

=head2 EXAMPLE: Use XML as input

The show Aflyttet on Radio 24syv publishes an RSS feed with their audio
podcasts on: http://arkiv.radio24syv.dk/audiopodcast/channel/4466232

Using B<xpath> you can extract the URLs for 2019 and download them
using GNU B<parallel>:

  wget -O - http://arkiv.radio24syv.dk/audiopodcast/channel/4466232 | \
    xpath -e "//pubDate[contains(text(),'2019')]/../enclosure/@url" | \
    parallel -u wget '{= s/ url="//; s/"//; =}'

=head2 EXAMPLE: Run the same command 10 times

If you want to run the same command with the same arguments 10 times
in parallel you can do:

  seq 10 | parallel -n0 my_command my_args

=head2 EXAMPLE: Working as cat | sh. Resource inexpensive jobs and evaluation

GNU B<parallel> can work similarly to B<cat | sh>.

A resource inexpensive job is a job that takes very little CPU, disk
I/O and network I/O. Ping is an example of a resource inexpensive
job. wget is too - if the webpages are small.

The content of the file jobs_to_run:

  ping -c 1 10.0.0.1
  wget http://example.com/status.cgi?ip=10.0.0.1
  ...
  ping -c 1 10.0.0.255
  wget http://example.com/status.cgi?ip=10.0.0.255

To run 100 processes simultaneously do:

  parallel -j 100 < jobs_to_run

As there is no I<command> the jobs will be evaluated by the shell.

=head2 EXAMPLE: Call program with FASTA sequence

FASTA files have the format:

  >Sequence name1
  sequence
  sequence continued
  >Sequence name2
  sequence
  sequence continued
  more sequence

To call B<myprog> with the sequence as argument run:

  cat file.fasta |
    parallel --pipe -N1 --recstart '>' --rrs \
    'read a; echo Name: "$a"; myprog $(tr -d "\n")'

=head2 EXAMPLE: Processing a big file using more CPUs

To process a big file or some output you can use B<--pipe> to split up
the data into blocks and pipe the blocks into the processing program.

If the program is B<gzip -9> you can do:

  cat bigfile | parallel --pipe --recend '' -k gzip -9 > bigfile.gz

This will split B<bigfile> into blocks of 1 MB and pass that to B<gzip
-9> in parallel. One B<gzip> will be run per CPU, and the output will
be sent back in the correct order and saved to B<bigfile.gz>.

Using B<--pipe> a big file can also be sorted:

  cat bigfile | parallel --pipe --files sort |\
    parallel -Xj1 sort -m {} ';' rm {} >bigfile.sort

Here I<bigfile> is split into blocks which are passed to B<sort>. The
sorted blocks are saved as temporary files (B<--files>), and the
second B<parallel> merges them with B<sort -m>, removes them, and
saves the result to B<bigfile.sort>.

GNU B<parallel>'s B<--pipe> maxes out at around 100 MB/s because every
byte has to be copied through GNU B<parallel>. But if B<bigfile> is a
real (seekable) file GNU B<parallel> can bypass the copying and send
the parts directly to the program:

  parallel --pipepart --block 100m -a bigfile --files sort |\
    parallel -Xj1 sort -m {} ';' rm {} >bigfile.sort

=head2 EXAMPLE: Grouping input lines

When processing with B<--pipe> you may have lines grouped by a
value. Here is I<my.csv>:

  Transaction Customer Item
  1           a        53
  2           b        65
  3           b        82
  4           c        96
  5           c        67

To pass all lines for one customer to the same process, we can insert
a record separator between customers. For that we generate
a 50 character random string, which we then use as the separator:

  sep=`perl -e 'print map { ("a".."z","A".."Z")[rand(52)] } (1..50);'`
  cat my.csv | \
    perl -ape '$F[1] ne $l and print "'$sep'"; $l = $F[1]' | \
    parallel --recend $sep --rrs --pipe -N1 wc

If your program can process multiple customers replace B<-N1> with a
reasonable B<--blocksize>.
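
E.g. a sketch with an illustrative 10 MB block size:

  cat my.csv | \
    perl -ape '$F[1] ne $l and print "'$sep'"; $l = $F[1]' | \
    parallel --recend $sep --rrs --pipe --blocksize 10M wc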

=head2 EXAMPLE: Running more than 250 jobs workaround

If you need to run a massive number of jobs in parallel, then you will
likely hit the filehandle limit which is often around 250 jobs. If you
are super user you can raise the limit in /etc/security/limits.conf
but you can also use this workaround. The filehandle limit is per
process. That means that if you just spawn more GNU B<parallel>s then
each of them can run 250 jobs. This will spawn up to 2500 jobs:

  cat myinput |\
    parallel --pipe -N 50 --roundrobin -j50 parallel -j50 your_prg

This will spawn up to 62500 jobs (use with caution - you need 64 GB
RAM to do this, and you may need to increase /proc/sys/kernel/pid_max):

  cat myinput |\
    parallel --pipe -N 250 --roundrobin -j250 parallel -j250 your_prg

=head2 EXAMPLE: Working as mutex and counting semaphore

The command B<sem> is an alias for B<parallel --semaphore>.

A counting semaphore will allow a given number of jobs to be started
in the background. When that number of jobs are running in the
background, GNU B<sem> will wait for one of these to complete before
starting another command. B<sem --wait> will wait for all jobs to
complete.

Run 10 jobs concurrently in the background:

  for i in *.log ; do
    echo $i
    sem -j10 gzip $i ";" echo done
  done
  sem --wait

A mutex is a counting semaphore allowing only one job to run at a
time. This will edit the file I<myfile>, prepending lines with the
numbers 1 to 3:

  seq 3 | parallel sem sed -i -e '1i{}' myfile

As I<myfile> can be very big it is important only one process edits
the file at the same time.

Name the semaphore to have multiple different semaphores active at the
same time:

  seq 3 | parallel sem --id mymutex sed -i -e '1i{}' myfile

=head2 EXAMPLE: Mutex for a script

Assume a script is called from cron or from a web service, but only
one instance can be run at a time. With B<sem> and B<--shebang-wrap>
the script can be made to wait for other instances to finish. Here in
B<bash>:

  #!/usr/bin/sem --shebang-wrap -u --id $0 --fg /bin/bash

  echo This will run
  sleep 5
  echo exclusively

Here B<python>:

  #!/usr/local/bin/sem --shebang-wrap -u --id $0 --fg /usr/bin/python

  import time
  print "This will run ";
  time.sleep(5)
  print "exclusively";

=head2 EXAMPLE: Start editor with filenames from stdin (standard input)

You can use GNU B<parallel> to start interactive programs like emacs or vi:

  cat filelist | parallel --tty -X emacs
  cat filelist | parallel --tty -X vi

If there are more files than will fit on a single command line, the
editor will be started again with the remaining files.

=head2 EXAMPLE: Running sudo

B<sudo> requires a password to run a command as root. It caches the
access, so you only need to enter the password again if you have not
used B<sudo> for a while.

The command:

  parallel sudo echo ::: This is a bad idea

is no good, as you would be prompted for the sudo password for each of
the jobs. You can either do:

  sudo echo This
  parallel sudo echo ::: is a good idea

or:

  sudo parallel echo ::: This is a good idea

This way you only have to enter the sudo password once.

=head2 EXAMPLE: GNU Parallel as queue system/batch manager

GNU B<parallel> can work as a simple job queue system or batch manager.
The idea is to put the jobs into a file and have GNU B<parallel> read
from that continuously. As GNU B<parallel> will stop at end of file we
use B<tail> to continue reading:

  true >jobqueue; tail -n+0 -f jobqueue | parallel

To submit your jobs to the queue:

  echo my_command my_arg >> jobqueue

There is a small issue when using GNU B<parallel> as queue
system/batch manager: You have to submit JobSlot number of jobs before
they will start, and after that you can submit one at a time, and jobs
will start immediately if free slots are available. Output from the
running or completed jobs is held back and will only be printed when
JobSlots more jobs have been started (unless you use B<--ungroup> or
B<--line-buffer>, in which case the output from the jobs is printed
immediately). E.g. if you have 10 jobslots then the output from the
first completed job will only be printed when job 11 has started, and
the output of the second completed job will only be printed when job
12 has started.
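
A sketch that makes this visible (2 jobslots, so the output of job1
should only appear once job3 has started):

  true >jobqueue; tail -n+0 -f jobqueue | parallel -j2 &
  echo 'echo job1' >>jobqueue
  echo 'echo job2' >>jobqueue
  echo 'echo job3' >>jobqueue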

=head2 EXAMPLE: GNU Parallel as dir processor

If you have a dir in which users drop files that need to be processed
you can do this on GNU/Linux (if you know what B<inotifywait> is
called on other platforms file a bug report):

  inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |\
    parallel -u echo

This will run the command B<echo> on each file put into B<my_dir> or
subdirs of B<my_dir>.

You can of course use B<-S> to distribute the jobs to remote
computers:

  inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |\
    parallel -S .. -u echo

If the files to be processed are in a tar file then unpacking one file
and processing it immediately may be faster than first unpacking all
files. Set up the dir processor as above and unpack into the dir.

Using GNU B<parallel> as dir processor has the same limitations as
using GNU B<parallel> as queue system/batch manager.

=head2 EXAMPLE: Locate the missing package

If you have downloaded source and tried compiling it, you may have seen:

  $ ./configure
  [...]
  checking for something.h... no
  configure: error: "libsomething not found"

Often it is not obvious which package you should install to get that
file. Debian has `apt-file` to search for a file, and `tracefile` from
the B<tangetools> collection can tell which files a program tried to
access.
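
With B<apt-file> installed you can search for several candidate files
in parallel; a sketch (the names come from the B<configure> output
above, and the matching packages depend on your distribution):

  parallel -j0 apt-file search ::: something.h libsomething.so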

=head1 ENVIRONMENT VARIABLES

=over 9

=item $PARALLEL_HOME

Dir where GNU B<parallel> stores config files, semaphores, and caches
information between invocations. Default: $HOME/.parallel.
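
E.g. to use a throw-away dir instead, as a sketch:

  PARALLEL_HOME=$(mktemp -d) parallel echo ::: a b c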

=item $PARALLEL_ARGHOSTGROUPS (beta testing)

When using B<--hostgroups> GNU B<parallel> sets this to the hostgroups
of the job.

Remember to quote the $, so it gets evaluated by the correct shell. Or
use B<--plus> and {agrp}.
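
A sketch (hostgroup and server names are illustrative; the group of
an argument is given after I<@>, and the single quotes make the $ be
evaluated by the remote shell):

  parallel --hostgroups -S @grp1/server1 -S @grp2/server2 \
    echo {} from '$PARALLEL_ARGHOSTGROUPS' ::: a@grp1 b@grp2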

=item $PARALLEL_HOSTGROUPS

When using B<--hostgroups> GNU B<parallel> sets this to the hostgroups
of the sshlogin that the job is run on.

=back

=head1 PROFILE FILES

If B<--profile> is set, GNU B<parallel> will read the profile from that
file rather than the global or user configuration files. You can have
multiple B<--profiles>.

Profiles are searched for in B<~/.parallel>. If the name starts with
B</> it is seen as an absolute path. If the name starts with B<./> it
is seen as a relative path from the current dir.

Example: Profile for running a command on every sshlogin in
~/.ssh/sshlogins and prepending the output with the sshlogin:

  echo --tag -S .. --nonall > ~/.parallel/nonall_profile
  parallel -J nonall_profile uptime

Example: Profile for running every command with B<-j-1> and B<nice>

  echo -j-1 nice > ~/.parallel/nice_profile
  parallel -J nice_profile bzip2 -9 ::: *

Example: Profile for running a perl script before every command:

  echo "perl -e '\$a=\$\$; print \$a,\" \",'\$PARALLEL_SEQ',\" \";';" \
    > ~/.parallel/pre_perl
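
It is used like the other profiles (a sketch):

  parallel -J pre_perl echo ::: a b c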