"Fossies" - the Fresh Open Source Software Archive  

Source code changes of the file "doc/mlr.1" between
mlr-5.9.0.tar.gz and mlr-5.9.1.tar.gz

About: Miller is like sed, awk, cut, join, and sort for name-indexed data such as CSV and tabular JSON.

mlr.1 (mlr-5.9.0) : mlr.1 (mlr-5.9.1)
skipping to change at line 14
miller - like awk, sed, cut, join, and sort for name-indexed data such as CSV and tabular JSON.
SYNOPSIS
Usage: mlr [I/O options] {verb} [verb-dependent options ...] {zero or more file names}
DESCRIPTION
Miller operates on key-value-pair data while the familiar Unix tools operate on integer-indexed fields:
if the natural data structure for the latter is the array, then Miller's natural data structure is the
insertion-ordered hash map. This encompasses a variety of data formats, including but not limited to the
familiar CSV, TSV, and JSON. (Miller can handle positionally-indexed data as a special case.) This
manpage documents Miller v5.9.0.                manpage documents Miller v5.9.1.
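As a quick illustration of the key-value model (a sketch only; the example.csv file and its field names below are hypothetical, not part of this manpage):
  $ cat example.csv
  hostname,uptime
  web01,12345
  $ mlr --icsv --odkvp cat example.csv
  hostname=web01,uptime=12345
The CSV header supplies the keys, so each record becomes an ordered map rather than a list of positional fields.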
EXAMPLES
COMMAND-LINE SYNTAX
mlr --csv cut -f hostname,uptime mydata.csv
mlr --tsv --rs lf filter '$status != "down" && $upsec >= 10000' *.tsv
mlr --nidx put '$sum = $7 < 0.0 ? 3.5 : $7 + 2.1*$8' *.dat
grep -v '^#' /etc/group | mlr --ifs : --nidx --opprint label group,pass,gid,member then sort -f group
mlr join -j account_id -f accounts.dat then group-by account_name balances.dat
mlr --json put '$attr = sub($attr, "([0-9]+)_([0-9]+)_.*", "\1:\2")' data/*.json
mlr stats1 -a min,mean,max,p10,p50,p90 -f flag,u,v data/*
skipping to change at line 253
--j2c --j2t --j2d --j2n --j2x --j2p --j2m
--x2c --x2t --x2d --x2n --x2j --x2p --x2m
--p2c --p2t --p2d --p2n --p2j --p2x --p2m
The letters c t d n j x p m refer to formats CSV, TSV, DKVP, NIDX, JSON, XTAB,
PPRINT, and markdown, respectively. Note that markdown format is available for
output only.
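For instance, a keystroke-saver such as --c2j should behave like spelling out the input and output formats separately (a sketch; the data file name is hypothetical):
  mlr --c2j cat mydata.csv
  mlr --icsv --ojson cat mydata.csv
Both read CSV and write JSON; the shorthand simply packs the two format choices into a single flag.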
COMPRESSED I/O
--prepipe {command} This allows Miller to handle compressed inputs. You can do
without this for single input files, e.g. "gunzip < myfile.csv.gz | mlr ...".
However, when multiple input files are present, between-file separations are
lost; also, the FILENAME variable doesn't iterate. Using --prepipe you can
specify an action to be taken on each input file. This pre-pipe command must
be able to read from standard input; it will be invoked with
{command} < {filename}.
Examples:
mlr --prepipe 'gunzip'
mlr --prepipe 'zcat -cf'
mlr --prepipe 'xz -cd'
mlr --prepipe cat
mlr --prepipe-gunzip
mlr --prepipe-zcat
Note that this feature is quite general and is not limited to decompression
utilities. You can use it to apply per-file filters of your choice.
For output compression (or other) utilities, simply pipe the output:
mlr ... | {your compression command}
There are shorthands --prepipe-zcat and --prepipe-gunzip which are
valid in .mlrrc files. The --prepipe flag is not valid in .mlrrc
files since that would put execution of the prepipe command under
control of the .mlrrc file.
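As a sketch of how --prepipe combines with the FILENAME variable mentioned above (the directory, file names, and the $source field are hypothetical):
  mlr --prepipe 'gunzip' --icsv --odkvp put '$source = FILENAME' logs/*.csv.gz
Each .gz file is decompressed separately, so per-file headers and the FILENAME variable are handled per input file rather than being lost in one concatenated stream.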
SEPARATORS
--rs --irs --ors Record separators, e.g. 'lf' or '\r\n'
--fs --ifs --ofs --repifs Field separators, e.g. comma
--ps --ips --ops Pair separators, e.g. equals sign
Notes about line endings:
* Default line endings (--irs and --ors) are "auto" which means autodetect from
the input file format, as long as the input file(s) have lines ending in either
LF (also known as linefeed, '\n', 0x0a, Unix-style) or CRLF (also known as
carriage-return/linefeed pairs, '\r\n', 0x0d 0x0a, Windows style).
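A sketch of the input-separator flags on DKVP-style data (the data here is made up):
  echo 'a:1;b:2;c:3' | mlr --ifs ';' --ips ':' cat
This should print a=1,b=2,c=3: only the input separators are overridden, so the output falls back to the default comma field separator and equals-sign pair separator; use --fs and --ps to set both sides at once.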
skipping to change at line 916                  skipping to change at line 924
been lost.
* The combination "--implode --values --across-records" is non-streaming:
no output records are produced until all input records have been read. In
particular, this means it won't work in tail -f contexts. But all other flag
combinations result in streaming (tail -f friendly) data processing.
* It's up to you to ensure that the nested-fs is distinct from your data's IFS:
e.g. by default the former is semicolon and the latter is comma.
See also mlr reshape.
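A sketch of the explode/implode round trip described above, using the default semicolon nested-fs (field names and values are made up):
  echo 'x=a;b;c,y=1' | mlr nest --explode --values --across-records -f x
This should yield x=a,y=1, then x=b,y=1, then x=c,y=1; piping those back through mlr nest --implode --values --across-records -f x reassembles the original record, which is the non-streaming case noted above.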
nothing
Usage: mlr nothing [options]                    Usage: mlr nothing
Drops all input records. Useful for testing, or after tee/print/etc. have
produced other output.
put
Usage: mlr put [options] {expression}
Adds/updates specified field(s). Expressions are semicolon-separated and must
either be assignments, or evaluate to boolean. Booleans with following
statements in curly braces control whether those statements are executed;
booleans without following curly braces do nothing except side effects (e.g.
regex-captures into \1, \2, etc.).
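A sketch showing both statement forms described above, on hypothetical fields x and y in a hypothetical data.dkvp:
  mlr put '$z = $x + $y; $x > 0 { $flag = "positive" }' data.dkvp
The first statement is an assignment; the second is a boolean followed by statements in curly braces, so $flag is set only on records where $x is positive.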
skipping to change at line 2339                 skipping to change at line 2347
AUTHOR
Miller is written by John Kerl <kerl.john.r@gmail.com>.
This manual page has been composed from Miller's help output by Eric MSP Veith <eveith@veith-m.de>.
SEE ALSO
awk(1), sed(1), cut(1), join(1), sort(1), RFC 4180: Common Format and MIME Type for Comma-Separated
Values (CSV) Files, the miller website http://johnkerl.org/miller/doc
2020-08-19          MILLER(1)                   2020-09-03          MILLER(1)
 End of changes. 6 change blocks. 
2 lines changed or deleted, 10 lines changed or added
