2opml - Convert list of URLs to OPML.


NAME

2opml - Convert list of URLs to OPML.


SYNOPSIS

2opml [--add-attributes <ATTRIBUTES>] < urls.txt


DESCRIPTION

Convert a text file containing "<TITLE> <URL>"-style lines to OPML.
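The conversion can be sketched with standard awk (an illustration only, not 2opml's actual implementation; XML escaping is limited to double quotes here, and --add-attributes handling is omitted):

```shell
# Sketch: turn "TITLE URL" lines into minimal OPML outline elements.
to_opml() {
	printf '<opml version="2.0"><body>\n'
	awk '{
		url = $NF                   # last field is the URL
		title = $0
		sub(/ *[^ ]+$/, "", title)  # everything before it is the title
		gsub(/"/, "\\&quot;", title)
		printf "<outline text=\"%s\" xmlUrl=\"%s\"/>\n", title, url
	}'
	printf '</body></opml>\n'
}

out=$(printf 'Example Feed https://example.net/feed\n' | to_opml)
```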

 a8e - Abbreviate words in the input stream



NAME

a8e - Abbreviate words in the input stream


SYNOPSIS

a8e [OPTIONS]

Abbreviate words by keeping their first and last letters and replacing the internal letters with a number indicating how many there were, as in l10n, i18n, and a11y, the conventional abbreviations of localization, internationalization, and accessibility respectively.


OPTIONS

-m, --minlength N

Abbreviate only words at least N characters long (default 4). N is best kept greater than the number of boundary letters kept (see -l, -t, and -k) plus one.

-l, --leading-letters N
-t, --trailing-letters N
-k, --keep-letters N

Set how many letters to keep at the beginning of words with -l, at the end with -t, or both at once with -k (default is 1 for both).

-r, --word-pattern REGEX

What counts as a word? (default [a-zA-Z]+)
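With the default settings, the abbreviation rule can be sketched with standard awk (this sketch is not the tool itself and hardcodes the defaults: minimum length 4, one boundary letter on each side, word pattern [a-zA-Z]+):

```shell
# Sketch of the a8e rule: keep the first and last letter,
# replace the middle with the count of replaced letters.
abbr() {
	awk '{
		out = ""
		n = split($0, w, / /)
		for (i = 1; i <= n; i++) {
			len = length(w[i])
			if (len >= 4 && w[i] ~ /^[a-zA-Z]+$/)
				w[i] = substr(w[i], 1, 1) (len - 2) substr(w[i], len, 1)
			out = out (i > 1 ? " " : "") w[i]
		}
		print out
	}'
}

out=$(echo "localization and accessibility" | abbr)
```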

 adr2html - Convert Opera Hostlist 2.0 bookmarks to HTML



NAME

adr2html - Convert Opera Hostlist 2.0 bookmarks to HTML

 args2env - Turns command arguments into environment variables and executes the command with the remaining arguments



NAME

args2env - Turns command arguments into environment variables and executes the command with the remaining arguments


SYNOPSIS

args2env [OPTIONS] COMMAND ARG_1 ARG_2 ... ARG_R2 ARG_R1


DESCRIPTION


OPTIONS

-a, --arg NUM

Move the NUMth argument to the environment under the name ARG_NUM (the name may be overridden by the --template option). Counting starts from 1; the 0th argument would be the COMMAND itself. NUM may be a negative number, in which case it is counted from the end backwards.

-r, --right-arg NUM

Same as --arg -NUM.

-A, --all

Move all arguments to environment.

-k, --keep NUM

Keep the first NUM arguments as arguments, and move the rest of them to environment. Don't use it with -A, -a, or -r.

-t, --template TEMPLATE

How to name environment variables? Must contain a %d macro. Default is ARG_%d. So the value of argument given by --arg 1 goes to ARG_1 variable.

-nt, --negative-template TEMPLATE

How to name environment variables for arguments specified by negative number? Must contain a %d macro. Default is ARG_R%d, R is for "right", because this arg is counted from the right. So the value of argument given by --arg -1 goes to ARG_R1 variable.

-s, --set NAME=NUM

Set the NAME variable to the NUMth argument (negative numbers may also be given) and remove the argument from the argument list (keeping the numbering of the remaining arguments unchanged). Number-based variables (ARG_n and ARG_Rn) are still available.
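What args2env -a 1 effectively does can be sketched in plain shell (an illustration of the semantics, not the tool's code):

```shell
# Sketch: the 1st argument is removed from the argument list
# and exported to the environment as ARG_1.
set -- first rest                # the original ARG list
ARG_1=$1                         # the NUMth argument goes to the environment...
shift                            # ...and is removed from the argument list
export ARG_1

out=$(sh -c 'echo "$ARG_1 $1"' sh "$@")   # stands in for: COMMAND "$@"
```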


SEE ALSO

args2stdin(1)

 args2stdin - Turns command arguments into input stream on STDIN



NAME

args2stdin - Turns command arguments into input stream on STDIN


SYNOPSIS

args2stdin [OPTIONS] COMMAND ARG_1 [ARG_2 [...]]


DESCRIPTION

Execute COMMAND with the ARG_n arguments, except those specified in OPTIONS, which are removed from the argument list and written to the command's STDIN instead.
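The effect of moving all arguments to STDIN (as with -A) can be sketched in plain shell (an illustration of the semantics, not the tool's code):

```shell
# Sketch: what `args2stdin -A sort b c a` effectively does --
# the arguments are removed and written, one per line, to sort's stdin.
set -- b c a
out=$(printf '%s\n' "$@" | sort)
```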


OPTIONS

-a, --arg NUM

Remove the NUMth argument and write it on STDIN. Counting starts from 1; the 0th argument would be the COMMAND itself. NUM may be a negative number, in which case it is counted from the end backwards.

-r, --right-arg NUM

Same as --arg -NUM.

-A, --all

Move all arguments to STDIN.

-e, --all-after STRING

STRING marks the end of arguments; all arguments after it are passed on STDIN. This marker argument itself is not passed to COMMAND at all. It is usually --. args2stdin(1) has no default for this, so without it no particular argument makes the rest of them go to STDIN.

-k, --keep NUM

Keep the first NUM arguments as arguments, and move the rest of them to STDIN. Don't use it with -A, -a, or -r.

-d, --delimiter STRING

Delimit arguments by the STRING string. Default is linefeed (\n).

-t, --tab-delimiter

Delimit arguments by TAB char.

-0, --null

Delimit arguments by NUL char.


SEE ALSO

args2env(1)

 asterisk-log-separator - Split up Asterisk PBX log file into multiple files based on which process wrote each part



NAME

asterisk-log-separator - Split up Asterisk PBX log file into multiple files based on which process wrote each part


 awk-cut - Select fields from input stream with awk


NAME

awk-cut - Select fields from input stream with awk


SYNOPSIS

awk-cut [COLUMNS-SPEC]

Where COLUMNS-SPEC is a variation of these:

COLUMN-
-COLUMN
COLUMN-COLUMN
COLUMN[,COLUMN[,COLUMN[,...]]]


SEE ALSO

cut.awk(1)

 base58 - Encode to Base58



NAME

base58 - Encode to (decode from) Base58

 base64url - Encode to Base64-URL encoding



NAME

base64url - Encode to (decode from) Base64-URL encoding
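The relationship between standard Base64 and Base64-URL can be sketched with base64(1) and tr(1) (an illustration of the encoding, presumably close to what the tool does, but not its actual code):

```shell
# Sketch: Base64-URL is standard Base64 with '+' -> '-', '/' -> '_'
# and the '=' padding dropped.
b64url_encode() { base64 | tr -d '\n=' | tr '+/' '-_'; }

# bytes 0xFB 0xFF encode to "+/8=" in plain Base64, hence "-_8" in Base64-URL
out=$(printf '\373\377' | b64url_encode)
```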

 bencode2json - Convert Bencode to JSON


NAME

bencode2json - Convert Bencode (BitTorrent's loosely structured data) to JSON


 cdexec - Run a given command in the given directory


NAME

cdexec - Run a given command in the given directory


SYNOPSIS

cdexec [--home | <DIRECTORY>] [--] <COMMAND> [<ARGS>]

Run a given command in the given directory. If DIRECTORY is not given, the target directory is the command's own directory.


SEE ALSO

execline-cd by execlineb(1)

 chattr-cow - try hard to enable Copy-on-Write attribute on files


NAME

chattr-cow - try hard to enable Copy-on-Write attribute on files

chattr-nocow - try hard to disable Copy-on-Write attribute on files


 chromium_cookie_decrypt.py - Decrypt Chromium web browser stored cookies and output cleartext


NAME

chromium_cookie_decrypt.py - Decrypt Chromium web browser stored cookies and output cleartext

 chshebang - Change a script's default interpreter


NAME

chshebang - Change a script's default interpreter

 cred - Credentials and secrets management in command line


NAME

cred - Credentials and secrets management in command line


SYNOPSIS

cred SUBCOMMAND SITE [ARGUMENTS]

cred site SITE SUBCOMMAND [ARGUMENTS]


DESCRIPTION

SITE, most often a website name, is a container of one or more properties. It can be anything you want to tie properties to: typically passwords, keys, PIN codes, and API tokens as secrets, and username, email address, etc. as ordinary properties.

SITE is represented as a directory in the credentials base dir. You may also enter a filesystem directory path for SITE. You don't need to create a SITE: it is created automatically when you write into it.

For websites and other services you have more than one account or identity for, it is recommended to organize them into sub-directories like SITE/IDENTITY, eg: mail.example.net/joe@example.net and mail.example.net/jane@example.net.


SUBCOMMANDS

compscript

Output a bash script to set up tab-completion for the cred command. Use it eg. by: eval "$(cred compscript)"

list-sites
dump [reveal-secrets | mask-secrets | hash-secrets | blank-secrets] [subdirs]

Display all properties (and their values) of a given site. The optional parameter controls how secrets are displayed: mask-secrets is the default and replaces a secret string with 5 asterisks (*****) uniformly (so the number of characters is not leaked). hash-secrets replaces secrets with a hash; the checksum algorithm's name is appended to the hash with a tab, like: <TAB>hash-algo=NAME. blank-secrets displays the secret property name but leaves the value field empty. Finally reveal-secrets displays secret strings in clear text just like ordinary properties.

The option subdirs dumps properties from the sub-directories too.

Properties are currently considered secret if their name contains at least one of these words (case-insensitive): pass, key, cvc, secret, pin, code, token, totp (but not totp-issuer).

generate-password

Generate a new password and put it in the PASSWORD property; append its old value to the OLDPASSWORDS property; copy the new one to the clipboard.

list-props
prop PROPERTY [set NEW-VALUE | edit | read | del | show | reveal | clip]

Manage properties of a given site. See the individual instruction descriptions at the subcommands below, which are aliases of these prop ... commands.

set PROPERTY NEW-VALUE
edit PROPERTY

Open up the $EDITOR (falling back to $VISUAL) to edit the given property's value.

read PROPERTY

Read the new value from STDIN (readline is supported if bash supports it, see help read in bash(1)). Secrets are read in no-echo mode.

del PROPERTY
show PROPERTY
reveal PROPERTY

Subcommand show shows only non-secrets. Use reveal to show secrets as well.

clip PROPERTY

With clip you may copy the value to the clipboard. If you use CopyQ(1), secrets are prevented from getting into CopyQ's clipboard item history.

fill-form PROPERTY [PROPERTY [...]]

Takes one or more property names and types their values into the window reachable by pressing Alt+Tab on your desktop. Also presses <TAB> after each string, but does not press <RETURN>. A single dot (.) is a pseudo PROPERTY name: if it's given, nothing is typed in its place, but <TAB> is still pressed after it. Use it if the form has fields which you don't want to fill in. Obviously it's useful only with a $DISPLAY. Depends on xdotool(1).


SPECIAL PROPERTIES

TOTP

The TOTP property (Timed One-Time Passcode) can be set (simply by cred ... set TOTP, no value needed), deleted, shown, and revealed. When accessed, the cotp(1) program is called to search for a TOTP code whose ISSUER (combined with LABEL, if taking the ISSUER alone would be ambiguous) matches the selected SITE.

How SITE and ISSUER (LABEL) are matched: if the site has an OTP-ISSUER property, that is searched for. Otherwise the site's name itself is taken as the ISSUER name. If the site is more than one directory level deep under the credentials base dir, the first path component alone satisfies the search criteria as well. For example, TOTP codes for a site like "example.com/my-2nd-account" are searched under both the "example.com/my-2nd-account" and "example.com" issuers.

If the above filtering yields more than one cotp(1) record, the result is further filtered by LABEL. The following properties are tried as LABEL in order: EMAIL, USERNAME, LOGIN. Once exactly one cotp(1) record remains, it is taken as the TOTP code.


FILES

The credentials base directory is hardcoded to ~/cred.


SEE ALSO


 convert_chromium_cookies_to_netscape.sh - Convert Chromium and derivative web browser's cookies to Netscape format


NAME

convert_chromium_cookies_to_netscape.sh - Convert Chromium and derivative web browser's cookies to Netscape format (used by wget and curl)

 corner_time - Place a digital clock in the upper right hand corner of the terminal


NAME

corner_time - Place a digital clock in the upper right hand corner of the terminal

 cpyfattr - Copy file attributes


NAME

cpyfattr - Copy file attributes (xattr)


SYNOPSIS

cpyfattr SOURCE DESTINATION [OPTIONS]


DESCRIPTION

Copy all of the SOURCE file's extended attributes to DESTINATION using getfattr(1) and setfattr(1).


OPTIONS

All options are passed to setfattr(1). Note that OPTIONS come at the end of the argument list.


SEE ALSO

getfattr(1), setfattr(1)

 cronrun - convenience features to run commands in task scheduler environment


NAME

cronrun - convenience features to run commands in task scheduler environment


SYNOPSIS

cronrun [OPTIONS] <COMMAND> [ARGS]

Run COMMAND in a way most scheduled jobs are intended to run, ie:

Set computing priority (nice(1), ionice(1)) to low
Delay start for random amount of time, thus avoiding load-burst when multiple jobs start at the same time
Allow only one instance at a time (by locking)


OPTIONS

--random-delay, -d TIME

Delay program execution by at most TIME. The default is no delay. It can also be set by the CRONRUN_DELAY environment variable.

TIME is a series of AMOUNT and UNIT pairs one after another without spaces, ie:

 AMOUNT UNIT [ AMOUNT UNIT [ AMOUNT UNIT [...] ] ]

Where UNIT is s, m, h, d for seconds, minutes, hours, days respectively.

Example: 1h30m

A single number without UNIT is seconds.
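The TIME format above can be sketched as an awk one-liner converting a spec like 1h30m to seconds (an illustration of the format, not cronrun's actual parser):

```shell
# Sketch: parse AMOUNT/UNIT pairs (s, m, h, d) into a total in seconds.
to_seconds() {
	echo "$1" | awk '{
		u["s"]=1; u["m"]=60; u["h"]=3600; u["d"]=86400
		total = 0
		while (match($0, /[0-9]+[smhd]?/)) {
			tok = substr($0, RSTART, RLENGTH)
			unit = substr(tok, length(tok), 1)
			if (unit in u) total += substr(tok, 1, length(tok)-1) * u[unit]
			else           total += tok   # bare number means seconds
			$0 = substr($0, RSTART + RLENGTH)
		}
		print total
	}'
}

out=$(to_seconds 1h30m)   # 3600 + 1800 seconds
```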

--wait-lock, -W

Wait for the lock to release. By default cronrun(1) fails immediately if locked.


DESCRIPTION

Lock is based on CRONJOBID environment, or COMMAND if CRONJOBID is not set.

If CRONJOBID is set, STDIO goes to syslog too, in the "cron" facility, stdout at info level, stderr at error level. If not set, STDIO is not redirected.


FILES

~/.cache/cronrun

Lock files are stored in this directory.


ENVIRONMENT

CRONJOBID

Recommended practice is to set CRONJOBID=something in your crontab before each cronrun ... job definition.

CRONRUN_DELAY

Set value for the --random-delay option.


LIMITATIONS


SEE ALSO

 cut.awk - Output only the selected fields from the input stream, parameters follow awk syntax


NAME

cut.awk - Output only the selected fields from the input stream, parameters follow awk(1) syntax


SEE ALSO

awk-cut(1)

 daemonctl - Manage preconfigured libslack daemon daemons more conveniently


NAME

daemonctl - Manage preconfigured libslack daemon(1) daemons more conveniently


DESCRIPTION

Daemonctl presumes some facts about the system:

daemons are configured in /etc/daemon.conf
daemons log to /syslog/daemon/daemon.<DAEMON>/today.log

 dataurl2bin - Decode "data:..." URLs from input stream and output the raw binary data



NAME

dataurl2bin - Decode "data:..." URLs from input stream and output the raw binary data

 dbus-call - Browse DBus and call its methods



NAME

dbus-call - Browse DBus and call its methods


SYNOPSIS

dbus-call [OPTIONS] [SERVICE [OBJECT [INTERFACE [METHOD [ARGUMENTS]]]]]


DESCRIPTION

Any parameters may be left out from the right; in that case possible values for the first omitted parameter are listed.


OPTIONS

--system

Connect to the system DBus.

--session

Connect to the session DBus.

--bus ADDRESS

Connect to ADDRESS DBus.

-r, --raw

Output the raw value if the output is a single string or number.

 debdiff - Display differences between 2 Debian packages


NAME

debdiff - Display differences between 2 Debian packages (*.deb files)

 delfattr - Removes given attributes from files


NAME

delfattr - Removes given attributes (xattr) from files


SYNOPSIS

delfattr -n NAME [-n NAME [..]] FILE [FILE [...]]


DESCRIPTION

Remove NAME xattribute(s) from the given files.


SEE ALSO

setfattr(1)

 descpids - List all descendant process PIDs of the given process


NAME

descpids - List all descendant process PIDs of the given process(es)
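The notion of descendants (children, their children, and so on) can be sketched by walking pgrep -P recursively (an illustration only; the actual tool may collect PIDs differently):

```shell
# Sketch: list all descendant PIDs of a process by recursing over children.
descpids_sketch() {
	for child in $(pgrep -P "$1"); do
		echo "$child"
		descpids_sketch "$child"
	done
}

sleep 2 & bgpid=$!                # spawn a child to find
out=$(descpids_sketch $$)         # descendants of the current shell
```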

 dfbar - Display disk space usage with simple bar chart



NAME

dfbar - Display disk space usage with simple bar chart (as reported by df(1))

 digasn - Query Autonom System Number from DNS


NAME

digasn - Query Autonom System Number (ASN) from DNS

 diu - Display Inode usage, similar to du for space usage


NAME

diu - Display Inode usage, similar to du(1) for space usage

 dlnew - Download web resource if local copy is older



NAME

dlnew - Download web resource if local copy is older


SYNOPSIS

dlnew [-C] <url> <file>


DESCRIPTION

Download content from web if newer than local copy (based on Last-Modified and caching headers).


PARAMETERS

-C

Bypass validating cache.

url

URL to be downloaded. Schema can be HTTP or HTTPS.

file

Local file to write the data into. If omitted, the last component (basename) of url is used.


EXIT STATUS

  1. Url is found and downloaded.

  2. General error, system errors.

  3. Local file's freshness validated by saved cache metadata; not downloaded.

  4. Download not OK (usually Not Found).

  5. Url found but not modified (HTTP 304).

  6. Url found but not updated, based on the Last-Modified header.


 eat - Read and echo back input


NAME

eat - Read and echo back input (like cat(1)) until interrupted (ie. ignore end-of-file)

 errorlevel - Exit with the given status code


NAME

errorlevel - Exit with the given status code


 evhand - Process new events in a textfile, events described per lines



NAME

evhand - Process new events in a textfile, events described per lines


SYNOPSIS

evhand [OPTIONS] EVENT-FILE STATE-FILE HANDLER [ARGS]


DESCRIPTION

evhand(1) iterates through EVENT-FILE and runs the HANDLER command on each new line. What counts as new is decided by STATE-FILE: handled events are recorded in STATE-FILE (either verbatim or by checksum), so new events are those not in the state file.

If HANDLER command fails, the event is not considered to have been handled.
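The core "which events are new" check in verbatim-state mode can be sketched with grep (an illustration; evhand's actual bookkeeping, locking, and checksum mode are richer):

```shell
# Sketch: new events are the event-file lines not present in the state file.
tmp=$(mktemp -d)
printf 'ev1\nev2\n'      > "$tmp/state"    # already-handled events
printf 'ev1\nev2\nev3\n' > "$tmp/events"   # the event file

# -F fixed strings, -x whole lines, -v invert, -f patterns from file
new=$(grep -Fvxf "$tmp/state" "$tmp/events")
```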


OPTIONS

-e, --errexit

Exit at the first failed HANDLER command. The exit status will be the failed handler command's exit status if it terminated normally, and 128 + the signal number if it was killed by a signal. By default, HANDLER is run for all events, and evhand exits with zero regardless of the handler commands' exit status.

-C, --checksum-state

Record and check the event's checksum in STATE-FILE instead of the verbatim event string itself.

--shrink-state

Remove those entries from the state file which are not encountered in the event file. Shrinking happens only when the whole event file could be read, so not if interrupted by a failed handler command (in --errexit mode), nor if any other error prevented reading all the events in the event file.

This is useful if you regularly purge old events from the event file and don't want the state file to grow indefinitely.


ENVIRONMENT

EVENT

The string representing the event to be handled. This is passed by evhand(1) to the HANDLER program.


LIMITATIONS

EVENT should not contain NUL byte as it can not be put in the environment.

stdin(3) is closed for the HANDLER process.

STATE-FILE is locked during the event handling process, so only 1 process can handle events per each STATE-FILE.


NON-FEATURES

Out-of-scope features for evhand(1) and suggestions what to do instead:

record any output from the event hander

See eg. logto(1), redirexec(1), ...

record the date/time when the event is handled

See eg. ts(1), timestamper(1), ...

automatic retry

Just re-run evhand(1).

Or wrap it by repeat(1) like:

 env REPEAT_UNTIL=0 repeat evhand -e ...

It restarts evhand until its exit status is zero, assuming the failure is temporary.

watch the event file for new events

Use an inotify(7) frontend, like iwatch(1) to trigger evhand(1).

parallel event processing

Sort events into multiple separate event files and run other evhand(1) sessions on them.


SEE ALSO

uniproc(1)

 fcomplete - Complete a smaller file with the data from a bigger one



NAME

fcomplete - Complete a smaller file with the data from a bigger one

 fc-search-codepoint - Print the names of available X11 fonts containing the given code point


NAME

fc-search-codepoint - Print the names of available X11 fonts containing the given code point(s)

 fdupes-hardlink - Make hardlinks from identical files as reported by fdupes


NAME

fdupes-hardlink - Make hardlinks from identical files as reported by fdupes(1)

 ff - Find files horizontally, ie. a whole directory level at a time, across subtrees



NAME

ff - Find files horizontally, ie. a whole directory level at a time, across subtrees


SYNOPSIS

ff <pattern> [path-1] [path-2] ... [path-n]


DESCRIPTION

Search for files whose name matches pattern in the path directories recursively, case-insensitively. The file's path is matched if pattern contains '/'. Searching is done horizontally, ie. the upper-most directory level is scanned completely first, then the next level is entered and those directories scanned before moving to the 3rd level, and so on. This way users usually find what they are searching for more quickly.
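The level-by-level scan order can be sketched with find(1) (an illustration of breadth-first scanning; ff's matching rules are richer):

```shell
# Sketch: scan one directory depth at a time, shallowest first.
ffsketch() {  # ffsketch PATTERN PATH
	depth=1
	while [ -n "$(find "$2" -mindepth $depth -maxdepth $depth -print -quit)" ]; do
		find "$2" -mindepth $depth -maxdepth $depth -iname "$1"
		depth=$((depth + 1))
	done
}

tmp=$(mktemp -d)
mkdir -p "$tmp/deep/deeper"
touch "$tmp/hit.txt" "$tmp/deep/deeper/hit.txt"
out=$(ffsketch 'hit*' "$tmp")   # shallow match is printed before the deep one
```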

 ffilt - Filter a file via a command's STDIO and write back to the file


NAME

ffilt - Filter a file via a command's STDIO and write back to the file


SYNOPSIS

ffilt FILE COMMAND [ARGS]


DESCRIPTION

Feed FILE into COMMAND's stdin, then save its stdout back to FILE if COMMAND ran successfully.

ffilt(1) is a quasi shorthand for this shell construct:

 output=$(COMMAND < FILE)
 [ $? = 0 ] && echo "$output" > FILE


LIMITATIONS


SEE ALSO

sponge(1), insitu(1) https://github.com/athas/insitu

 fgat - Execute command in foreground at a given time



NAME

fgat - Execute command in foreground at a given time


SYNOPSIS

fgat <time-spec> <command> [arguments]


DESCRIPTION

In contrast to at(1), fgat(1) stays in the console's foreground and waits for time-spec, then runs command. time-spec can be any string accepted by date(1).

 filesets - Set operations on text files, lines being set elements



NAME

filesets - Set operations on text files, lines being set elements


SYNOPSIS

filesets [OPTIONS] EXPRESSION FILE-1 FILE-2 [...]


EXPRESSION

Sets are identified by the number of the file, 1-indexed.

These are the supported operators; each may be given by word or by symbol:

union, +
intersect, ^
difference, -
complement, !

Nested parentheses are supported.


DESCRIPTION

Output the resulting set.
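A single set operation, the intersection of two line sets, can be sketched with grep (an illustration of the semantics on line sets; the real tool evaluates full expressions):

```shell
# Sketch: intersection of two files as line sets --
# lines of file 2 that also occur (as whole lines) in file 1.
tmp=$(mktemp -d)
printf 'a\nb\nc\n' > "$tmp/1"
printf 'b\nc\nd\n' > "$tmp/2"

inter=$(grep -Fxf "$tmp/1" "$tmp/2")
```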


OPTIONS


ENVIRONMENT


LIMITATIONS


SEE ALSO

comm(1), uniq(1), setop(1)

 filterexec - Echo those arguments with which the given command returns zero.



NAME

filterexec - Echo those arguments with which the given command returns zero.


SYNOPSIS

filterexec COMMAND [ARGS] -- DATA-1 [DATA-2 [... DATA-n]]


DESCRIPTION

Prints each DATA (one per line) only if the command COMMAND ARGS DATA exits successfully, ie. with zero exit status.

If you want to evaluate data read on STDIN rather than command line arguments, combine filterexec(1) with foreach(1).


EXAMPLE

  filterexec test -d -- $(ls)

Shows only the directories. The shell's tokenization may wrongly split up file names containing spaces; perhaps set IFS to newline only.

  ls -1 | foreach filterexec test -d --

Same, but file names are supplied one by one, not all at once, hence filterexec(1) is invoked multiple times.

 find-by-date - Find files with GNU find but with easier to comprehend time interval formats



NAME

find-by-date - Find files with GNU find(1) but with easier to comprehend time interval formats


SYNOPSIS

find-by-date [FROM--][TO] [FIND-ARGS]


DESCRIPTION

Takes your FROM--TO date-time specification, turns it into the appropriate -mmin -MINUTES and -mmin +MINUTES parameters for find(1), then calls find(1).


SUPPORTED DATE FORMATS

These date-time formats are recognized in FROM and TO:

  YYYY-mm-dd_HH:MM
  YYYY-mm-dd_HH
  YYYY-mm-dd
  YYYY-mm
  YYYY
       mm-dd
          dd
       mm-dd_HH:MM
       mm-dd_HH
          dd_HH:MM
          dd_HH
             HH:
            _HH

Enter 0--TO to select any time up to TO. Enter FROM-- to select any time starting from FROM.
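The idea behind the conversion, turning an absolute date into a -mmin argument, can be sketched with GNU date (an illustration, not the tool's own code):

```shell
# Sketch: minutes elapsed since a given date becomes find's -mmin bound.
from="2024-01-01"
mins_ago=$(( ( $(date +%s) - $(date -d "$from" +%s) ) / 60 ))

# find . -mmin -"$mins_ago"   # would select files modified since $from
```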

 findnewestfile - Search for the newest file in a given path recursively and always show the most recent while scanning



NAME

findnewestfile - Search for the newest file in a given path recursively and always show the most recent while scanning

findoldestfile - Search for the oldest file in a given path recursively and always show the oldest so far while scanning


SYNOPSIS

findnewestfile [path]

findoldestfile [path]


DESCRIPTION

Search for the newest/oldest file in the given directory and its subdirectories, showing a file immediately when one is found that is newer/older than those seen before.



 fixlogfiledatetime - Set the target files modification time to their respective last log entry's timestamp


NAME

fixlogfiledatetime - Set the target files modification time to their respective last log entry's timestamp

 fixRFC822filemtime - Set a file's last modification time, which contains an email message in RFC-822 format, to the email's Date


NAME

fixRFC822filemtime - Set a file's last modification time, which contains an email message in RFC-822 format, to the email's Date

 fmtkv - Transform key=value pairs into one pair per line on the output



NAME

fmtkv - Transform key=value (each optionally double-quoted) pairs into one pair per line on the output
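The transformation can be sketched with standard awk (an illustration of the described behavior, not fmtkv's actual implementation):

```shell
# Sketch: emit each key=value pair (value optionally double-quoted)
# on its own line, with the quotes stripped.
fmtkv_sketch() {
	awk '{
		while (match($0, /[A-Za-z0-9_]+=("[^"]*"|[^ ]*)/)) {
			pair = substr($0, RSTART, RLENGTH)
			gsub(/"/, "", pair)
			print pair
			$0 = substr($0, RSTART + RLENGTH)
		}
	}'
}

out=$(echo 'user=joe msg="hello world" n=3' | fmtkv_sketch)
```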

 foreach - Run an OS or shell command on each input line, similar to xargs


NAME

foreach - Run an OS or shell command on each input line, similar to xargs(1)


SYNOPSIS

foreach [OPTIONS] COMMAND [ARGS ...]


DESCRIPTION

Take each input line from stdin as DATA, and run COMMAND with DATA appended to the end of ARGS as a single argument.

If {} appears in ARGS, it is substituted with DATA rather than DATA being appended to the end, unless --no-placeholder is given, in which case {} is taken literally. Additionally, if --fields is given, foreach(1) parses DATA into fields and adds each of them to the end of ARGS. Numbered placeholders, like {0}, {1}, ... are substituted with the respective field's value. A stand-alone {@} (curly bracket open, at sign, curly bracket close) argument is substituted with all fields as separate arguments.

So, for example, if you have not specified any ARGS on the command line and give both --data and --fields, then DATA goes into argv[1], the first field into argv[2], the second into argv[3], and so on. If neither --data nor --fields is given, --data is implied.

If called with --sh option, COMMAND is run within a shell context; input line goes to $DATA, individual fields go to ${FIELD[@]} (0-indexed).

Both in command and shell (--sh) modes, individual fields are available in $F0, $F1, ... environment variables.

Set -d DELIM if you want to split DATA not by $IFS but by other delimiter chars, eg. -d ',:' for comma and colon. There is also -t/--tab option to set delimiter to TAB for your convenience.
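The basic per-line behavior can be sketched in plain shell (an illustration of what "run COMMAND once per input line, with the line as an extra argument" means; foreach itself does much more):

```shell
# Sketch: one COMMAND invocation per input line, the line appended
# as a single argument after the user-specified ARGS.
out=$(printf 'one\ntwo\n' | while IFS= read -r DATA; do
	echo "got:" "$DATA"     # stands in for: COMMAND ARGS "$DATA"
done)
```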


OPTIONS

-e, --sh

COMMAND is a shell script and, for each DATA, it runs in the same shell context, so variables are preserved across invocations.

-l, --data

Pass DATA in the arguments after the user-specified ARGS.

-f, --fields

Pass individual fields of DATA in the arguments after DATA if --data is given, or after the user-specified ARGS if --data is not given.

-i, --input DATA

Don't read any DATA from stdin, but take DATA given at --input option(s). This option is repeatable.

-d, --delimiter DELIM

Cut up DATA into fields at DELIM chars. Default is $IFS.

-t, --tab

Cut up DATA into fields at TAB chars.

-P, --no-placeholder

Do not substitute {} with DATA.

-p, --prefix TEMPLATE

Print something before each command execution. TEMPLATE is a bash-interpolated string and may contain $DATA and ${FIELD[n]}. You probably need to put it in single quotes when passing it to foreach(1) from the invoking shell. It is designed to be evaluated, so backticks, command substitutions, semicolons, and other shell expressions are eval'ed by bash.

--prefix-add TEMPLATE

Append TEMPLATE to the prefix template. See --prefix option.

--prefix-add-data

Add DATA to the prefix which is printed before each command execution. See --prefix option.

--prefix-add-tab

Add a TAB char to the prefix which is printed before each command execution. See --prefix option.

-v, --verbose
-n, --dry-run
-E, --errexit

Stop executing if a COMMAND returns non-zero. Rather exit with the said command's exit status code.


EXAMPLES

 ls -l --time-style +%FT%T%z | foreach --data --fields sh -c 'echo size: $5, file: $7'
 ls -l --time-style +%FT%T%z | foreach --sh 'echo size: ${FIELD[4]}, file: ${FIELD[6]}'


LIMITS

Placeholders for field values ({0}, {1}, ...) are considered from 0 up to 99. There must be a limit somewhere, otherwise I would have had to write a more complex replace routine.


CAVEATS

Placeholder {} is substituted in all ARGS anywhere, not just in stand-alone {} arguments, but IS NOT ESCAPED! So be careful using it in shell command arguments like sh -c 'echo "data is: {}"'.


SEE ALSO

xargs(1), xe(1) https://github.com/leahneukirchen/xe, apply(1), xapply(1) https://www.databits.net/~ksb/msrc/local/bin/xapply/xapply.html


 g_filename_to_uri - Mimic g_filename_to_uri GLib function creating a file:// url from path string


NAME

g_filename_to_uri - Mimic g_filename_to_uri() GLib function creating a file:// url from path string

 getcvt - Print the current active Virtual Terminal


NAME

getcvt - Print the current active Virtual Terminal


SYNOPSIS

getcvt


SEE ALSO

chvt(1)


 gitconfigexec - Change git settings for a given command run only



NAME

gitconfigexec - Change git settings for a given command run only


SYNOPSIS

gitconfigexec KEY=VALUE [KEY=VALUE [...]] [--] COMMAND ARGS


DESCRIPTION

KEY is a valid git config option (see git-config(1)). Sets the GIT_CONFIG_COUNT, GIT_CONFIG_KEY_n, and GIT_CONFIG_VALUE_n environment variables, so git(1) takes them as session-override settings.

 git_diff - View two files' diff by git-diff, even not under git version control


NAME

git_diff - View two files' diff by git-diff(1), even not under git version control

 git-submodule-auto-add - Automatically add submodules to a git repo according to .gitmodules file


NAME

git-submodule-auto-add - Automatically add submodules to a git repo according to .gitmodules file


SYNOPSIS

git submodule-auto-add [OPTIONS]


OPTIONS

Those which git-submodule(1) add accepts.


DESCRIPTION

Runs as many git submodule add ... commands as there are submodules defined in the .gitmodules file in the current repo's root, automatically adding the submodules this way.

An extra feature is the ability to define what name the submodule's remote should be given ("origin" or the tracking remote of the superproject's current branch, see git-submodule(1) for details). Add a remotename option to the submodule's section in .gitmodules to achieve this.


CAVEATS

Does not fail if a submodule cannot be added, but continues with the next one.


 glob - Expand shell-wildcard patterns



NAME

glob - Expand shell-wildcard patterns


SYNOPSIS

glob [OPTIONS] [--] PATTERN [PATTERN [PATTERN [...]]]


DESCRIPTION

Expand each PATTERN as a shell-wildcard pattern and output the matching filenames. All matched file names are output once and sorted alphabetically.


OPTIONS

-0

Output filenames as NUL byte terminated strings.

-f

Fail if a directory cannot be read. See GLOB_ERR in File::Glob(3perl).

-E

Fail if any PATTERN did not match. Exit code is 2 in this case.

-i

Match case-insensitively. Default is case-sensitive.

-b

Support curly bracket expansion. See GLOB_BRACE in File::Glob(3perl).


LIMITATIONS

Uses perl(1)'s bsd_glob function from File::Glob(3perl).


SEE ALSO

File::Glob(3perl), perldoc(1): glob


 Head - output as many lines from the first part of files as there are lines on the terminal currently


NAME

Head - output as many lines from the first part of files as there are lines on the terminal currently

 header - Echo the input stream up to the first empty line



NAME

header - Echo the input stream up to the first empty line (usual end-of-header marker)

body - Skip everything in the input stream up the the first empty line (usual end-of-header marker) and echo the rest


SYNOPSIS

header FILE [FILE [FILE [...]]]

header < FILE

body FILE [FILE [FILE [...]]]

body < FILE
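The pair's behavior can be sketched with sed (an illustration of the split at the first empty line; in this sketch header does not echo the empty line itself):

```shell
# Sketch: header prints up to the first empty line, body prints what follows.
tmp=$(mktemp)
printf 'From: joe\nSubject: hi\n\nbody text\n' > "$tmp"

hdr=$(sed -n '/^$/q;p' "$tmp")    # like: header < FILE
bdy=$(sed '1,/^$/d' "$tmp")       # like: body < FILE
```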

 hlcal - Highlight BSD cal output



NAME

hlcal - Highlight BSD cal(1) output

hlncal - Highlight BSD ncal(1) output


SYNOPSIS

hlcal [OPTIONS] [CAL-OPTIONS]

hlncal [OPTIONS] [NCAL-OPTIONS]


DESCRIPTION

Wraps cal(1) / ncal(1) and highlights specific days.


OPTIONS

DOW=COLOR
DATE=COLOR
START-DATE...END-DATE[,DOW[,DOW[,...]]]=COLOR

Where DOW is a three-letter day-of-week name, COLOR is a space- or hyphen-delimited list of ANSI color or other formatting style names, and DATE (as well as START-DATE and END-DATE) is in [[YYYY-]MM-]DD format, ie. year and month are optional; leaving them out is interpreted as "every year" and "every month" respectively.

In a single-date definition (DATE), you may enter an asterisk * as the month to select the given day in every month of the given year, or of every year if you leave out the year as well. Example: 1917-*-15

In an interval definition, you may add one or more DOW days, which highlights only those days within the specified interval. Examples: 04-01...06-30,WED means every Wednesday in the second quarter. 1...7,FRI means the first Friday of every month.


SUPPORTED ANSI COLORS AND STYLES

Colors: black, red, green, yellow, blue, magenta, cyan, white, default.

May be preceded by bright, eg: bright red. May be followed by bg to set the background color instead of the foreground, eg: yellow-bg.

Styles: bold, faint, italic, underline, blink_slow, blink_rapid, inverse, conceal, crossed.

Note that not all styles are supported by all terminal emulators.


EXAMPLE

  hlncal today=inverse `ncal -e`=yellow_bg-red SUN=bright-red SAT=red -bM3
 htmlentities - Convert plain text into HTML-safe text



NAME

htmlentities - Convert plain text into HTML-safe text


OPTIONS

--[no-]control

escape control chars (0x00-0x1F except TAB, LF, and CR)

--[no-]meta

escape meta chars (less-than, greater-than, ampersand, double- and single-quote)

--[no-]highbit

escape non-ASCII chars
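
The meta-char escaping can be sketched with sed(1); the ampersand rule must come first, otherwise already-inserted entities would get escaped again. (The actual tool may emit different entity spellings for the quotes; &quot; and &#39; are assumptions here.)

```shell
printf '%s\n' '<a href="x">&</a>' |
  sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' \
      -e 's/"/\&quot;/g' -e "s/'/\&#39;/g"
# prints: &lt;a href=&quot;x&quot;&gt;&amp;&lt;/a&gt;
```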

 indent2tree - Make TAB-indented text into an ASCII tree chart



NAME

indent2tree - Make TAB-indented text into an ASCII tree chart


OPTIONS

-a, --ascii

Set -v, -h, -c, and -l options' values to ASCII line-art chars.

-v, --vertical CHAR
-h, --horizontal CHAR
-c, --child CHAR
-l, --last CHAR
-p, --paths [ SEP ]

Output path-like strings, one per line, instead of a tree-like diagram. If SEP is specified, use it as the path separator instead of the default slash (/) char.
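
The gist of --paths mode can be sketched in awk (an approximation only: it assumes single-line records, the default / separator, and input that descends one level at a time):

```shell
printf 'a\n\tb\n\t\tc\n\td\n' | awk -F'\t' '{
  depth = NF - 1; part[depth] = $NF   # leading TABs give the depth
  path = part[0]
  for (i = 1; i <= depth; i++) path = path "/" part[i]
  print path
}'
# prints: a, a/b, a/b/c, a/d -- one per line
```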


DESCRIPTION

Input: lines with leading TAB chars representing the depth in the tree. Multiline records are supported by terminating each line (all but the last one) with a backslash.

Output: tree diagram with (ASCII or Unicode) drawing chars. Set custom drawing chars with the -v, -h, -c, and -l options.


LIMITATIONS

Input data must have at least one "root" item, ie. text starting at the beginning of the line, without a preceding TAB.

Tree depth needs to be denoted by TAB chars, not any other whitespace. Pre-format it if you need to.

Since there can be multiple root items and root items do not have ancestry lines, a multiline root item can be confused with multiple items all having zero children (except maybe the last one). If it matters to you, put a common parent above the tree by inserting a root item to the 0th line and indenting all other lines by 1 level.

Multiline items are not supported in --paths mode.


SEE ALSO

paths2indent(1)
https://github.com/jez/as-tree
 indent2graph - Generate graph out of whitespace-indented hierarchical text



NAME

indent2graph - Generate graph out of whitespace-indented hierarchical text


SYNOPSIS

indent2graph < tree.txt > tree.dot


DESCRIPTION

Take line-based input and output a directed graph in a given format, eg. dot(1) (see graphviz(1)). Each input line is a node. How much a line is indented (by leading spaces or TABs) determines its relation to the nodes of the surrounding lines. Lines indented to the same level go to the same rank in the tree-like output graph. The graph may contain loops: lines with the same text (apart from leading whitespace) are considered the same node (except when the --tree option is set).


EXAMPLE

Input:

  /usr/bin/ssh
    libselinux
      libpcre2-8
    libgssapi_krb5
      libkrb5
        libkeyutils
        libresolv
      libk5crypto
      libcom_err
      libkrb5support
    libcrypto
    libz
    libc

Command:

  indent2graph -f clojure | vijual draw-tree -

Output:

                                +------------+
                                | /usr/bin/s |
                                |     sh     |
                                +-----+------+
                                      |
        +------------------------+----+---------+----------+--------+
        |                        |              |          |        |
  +-----+------+           +-----+------+ +-----+-----+ +--+---+ +--+---+
  | libselinux |           | libgssapi_ | | libcrypto | | libz | | libc |
  +-----+------+           |    krb5    | +-----------+ +------+ +------+
        |                  +-----+------+
        |                        |
        |             +----------+-+--------------+--------------+
  +-----+------+      |            |              |              |
  | libpcre2-8 | +----+----+ +-----+------+ +-----+------+ +-----+------+
  +------------+ | libkrb5 | | libk5crypt | | libcom_err | | libkrb5sup |
                 +----+----+ |     o      | +------------+ |    port    |
                      |      +------------+                +------------+
             +--------+-----+
             |              |
       +-----+------+ +-----+-----+
       | libkeyutil | | libresolv |
       |     s      | +-----------+
       +------------+


OPTIONS

-f, --format FORMAT

Output format.

dot (default)

The graphviz(1) (dot(1)) format.

pairs

Simple TAB-separated node name pairs, each describing a graph edge, one per line.

clojure

Clojure-style nested vectors (represented as string). Useful for vijual(1).

grapheasy

Graph::Easy(3pl)'s own "txt" format. With graph-easy(1) you can transform it further into other formats, like GDL, VCG, ...

mermaid

TODO

-a, --ascendent

Indentation in the input represents ascendants, not descendants. Default is a descendant chart. This influences where arrows point.

-t, --tree

Interpret input strictly as a tree with no cycles. By default, without --tree, lines with the same text represent the same node, so you can build an arbitrary graph. With --tree, you can build a tree-like graph in which different nodes may have the same text (label).

-d, --rankdir DIR

This is the dot(1) graph's rankdir parameter. Although this option is specific to the dot(1) format, it is translated for grapheasy if that is the chosen output format. DIR is one of TB, BT, LR, RL. Default is LR, ie. left-to-right. See the graphviz(1) documentation for details.


SEE ALSO

indent2tree(1), graphviz(1), dot(1), vijual(1), Graph::Easy(3pl)

 cpfx2indent - Filter lines of text by replacing common prefixes with indentation



NAME

cpfx2indent - Filter lines of text by replacing common prefixes with indentation


SYNOPSIS

cpfx2indent [OPTIONS]


DESCRIPTION

Analyzes input lines on STDIN to detect common prefixes and replaces each line’s leading segment with a number of TABs proportional to the length of the prefix it shares with other lines.


OPTIONS

-d, --delimiter PATTERN

Tokenize input lines by PATTERN regexp pattern. Default is any whitespace (\s+).

-i, --indent-by STRING

Indent output by STRING. Default is TAB. Other useful STRINGs are, for example, a single space or a double space.


ENVIRONMENT


LIMITATIONS


SEE ALSO

indent2graph(1), indent2tree(1), paths2indent(1)

 inisort - Sort keys in an INI file according to the order of keys in another INI file



NAME

inisort - Sort keys in an INI file according to the order of keys in another INI file


SYNOPSIS

inisort [<UNSORTED>] [<REFERENCE>] > [<SORTED>]

 is_gzip - Return 0 if the file in argument has gzip signature


NAME

is_gzip - Return 0 if the file in argument has gzip signature

 jobsel - Improved job control frontend for bash



NAME

jobsel - Improved job control frontend for bash


SYNOPSIS

jobsel <joblist> [COLUMNS]


DESCRIPTION

Improved job control frontend for bash. joblist is a jobs -l output from which jobsel builds a menu. COLUMNS is an optional parameter; if omitted, jobsel calls tput(1) to obtain the number of columns on the terminal.


KEYS

 Left,Right  Select item
 Enter       Switch to job in foreground
 U           Hangup selected process             SIGHUP
 I           Interrupt process                   SIGINT
 S,T,Space   Suspend, Resume job        SIGTSTP,SIGCONT
 K           Kill process                       SIGKILL
 D           Process details
 X,C,L       Expanded, collapsed, in-line display mode
 Q           Dismiss menu


EXAMPLE

eval $(jobsel "$(jobs -l)" $COLUMNS)


HINTS

Use as an alias

 alias j='eval $(jobsel "$(jobs -l)" $COLUMNS)'

Bind a function key for it

 bind -x '"\204"':"eval \$(jobsel \"\$(jobs -l)\" \$COLUMNS)"
 bind '"\ej"':"\"\204\"" # ESC-J
 Where 204 is an arbitrary free key code
 

 

 json2bencode - Convert JSON to Bencode


NAME

json2bencode - Convert JSON to Bencode (BitTorrent's loosely structured data)

 

 killp - Send signal to processes by PID until they end



NAME

killp - Send signal to processes (kill, terminate, ...) by PID until they end

killpgrp - Send signal to processes (kill, terminate, ...) by PGID until they end

killcmd - Send signal to processes (kill, terminate, ...) by command line until they end

killexe - Send signal to processes (kill, terminate, ...) by executable path until they end


SYNOPSIS

killp [OPTIONS] <PID> [<PID> [...]]


DESCRIPTION

Send signal to process(es) by PID, PGID (process group ID), command name, or executable path until the selected process(es) exit. Ie. in the usual invocation, eg. killcmd java keeps sending SIGTERM to all java processes as long as at least 1 exists, and returns only afterwards.


OPTIONS

The following options control how killcmd and killexe find processes. Semantics are the same as in grep(1):

  -E --extended-regexp
  -F --fixed-strings
  -G --basic-regexp
  -P --perl-regexp
  -i --ignore-case
  -w --word-regexp
  -x --line-regexp

Other options:

-a

killcmd looks for matching substring in the command's arguments too. By default, only the command name is considered (first word in the command line).

-f

killcmd and killexe look for matching substring in the command's full path too. By default, only the basename is considered.

[--]signal=SIG, [-]s=SIG

Which signal to send. See kill(1) and signal(7) for valid SIG signal names and numbers.

[--]interval=IVAL

How much to wait between attempts. See sleep(1) for valid IVAL intervals.

-q, --quiet
-v, --verbose

By default, prints what is being killed on the second attempt onward. With --verbose, prints the first attempt too. With --quiet, does not print what is being killed.

-n, --dryrun


SEE ALSO

kill(1), pkill(1), pgrep(1), killall(1), signal(7)

 kt - Run command in background terminal; keept convenience wrapper


NAME

kt - Run command in background terminal; keept(1) convenience wrapper


SYNOPSIS

kt [jobs | COMMAND ARGS]


DESCRIPTION

Run COMMAND in a keept(1) session, so you may send it to the background with all of its terminal I/O, and recall with the same kt COMMAND ARGS command.

Call kt jobs to show running command sessions.


FILES

Stores control files in ~/.cache/keept.


SEE ALSO

keept(1)

 LevelDB - Commandline interface for Google's leveldb key-value storage



NAME

LevelDB - Commandline interface for Google's leveldb key-value storage

 levenshtein-distance - Calculate the Levenshtein distance of given strings


NAME

levenshtein-distance - Calculate the Levenshtein distance of given strings

jaro-metric - Calculate the Jaro metric of given strings

jaro-winkler-metric - Calculate the Jaro-Winkler metric of given strings

 lines - Output only the given lines of the input stream



NAME

lines - Output only the given lines of the input stream


SYNOPSIS

lines [RANGES [RANGES [...]]] [-- FILE [FILE [...]] | < FILE]


DESCRIPTION

Read from FILEs if specified, STDIN otherwise. RANGES is a comma-delimited list of line numbers and inclusive ranges. The special word "EOF" as a range's upper limit represents the end of the file.

Starts the line numbering from 1.

If multiple files are given, restart the line numbering on each file.

Always displays the lines in in-file order, not in the order they were given in the RANGES arguments; ie. does not buffer or seek in the input files. So lines 1,2 and lines 2,1 both display the 1st line before the 2nd.


EXAMPLES

lines 1
lines 2-10
lines 1,5-10 3
lines 2-4 6,8 10-EOF
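
For a single contiguous range, the behavior matches a plain awk line filter (an illustration with standard tools, not the lines(1) tool itself):

```shell
seq 10 | awk 'NR >= 2 && NR <= 4'
# prints: 2, 3, 4 -- one per line
```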


EXIT STATUS

Exits with 2 if there was a range which was not found, ie. a file had fewer lines than requested.

 lnto - Convenience wrapper for ln. User enters link target paths relative to the current directory


NAME

lnto - Convenience wrapper for ln(1). User enters link target paths relative to the current directory

 

 loggerexec - Run a command and send STDOUT and STDERR to syslog


NAME

loggerexec - Run a command and send STDOUT and STDERR to syslog


SYNOPSIS

loggerexec [-s] FACILITY IDENT COMMAND [ARGS]

Send COMMAND's stdout and stderr to syslog. FACILITY is one of the standard syslog facility names (user, mail, daemon, auth, local0, ...). IDENT is a freely chosen identity name, also known as tag or program name. COMMAND's stdout goes at info log level, stderr at error log level. Option -s copies the output to stdout/stderr too.


SEE ALSO

logger(1), stdsyslog(1)

 logto - Run a command and append its STDOUT and STDERR to a file


NAME

logto - Run a command and append its STDOUT and STDERR to a file


SYNOPSIS

logto FILENAME COMMAND [ARGS]


DESCRIPTION

Save the command's output (stdout and stderr) to a file while keeping normal stdout and stderr as well.
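
A rough tee(1) equivalent (a simplification: logto presumably keeps stdout and stderr as separate streams, while this sketch merges them):

```shell
log=$(mktemp)
# logto FILE CMD is roughly: CMD 2>&1 | tee -a FILE
{ echo out; echo err >&2; } 2>&1 | tee -a "$log"
cat "$log"   # both lines got appended to the file too
```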

 lpjobs - Show printer queue jobs


NAME

lpjobs - Show printer queue jobs (wrapper for lpq and lpstat)

 

 

 lsata - List ATA devices on the system


NAME

lsata - List ATA devices on the system

 lsenv - List environment variables of a process



NAME

lsenv - List environment variables of a process


SYNOPSIS

lsenv <pid>

 

 

 mail-extract-raw-headers - Get named headers from RFC822-format input.



NAME

mail-extract-raw-headers - Get named headers from RFC822-format input.


SYNOPSIS

mail-extract-raw-headers [OPTIONS] <NAME> [<NAME> [...]]


OPTIONS

-k, --keep-newlines, --keep-linefeeds

Keep linefeeds in multiline text.

-n, --header-names

Output the header name(s) too, not only the contents.
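
Extracting one single-line header can be sketched in awk (the real tool also handles folded multi-line headers, which this does not):

```shell
printf 'From: a@b\nSubject: hello\n\nSubject: not-a-header\n' |
  awk -F': ' 'tolower($1) == "subject" { print $2 } /^$/ { exit }'
# prints: hello  (the body line is never reached)
```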

 maskfiles - Lay over several text files on top of each other like transparency sheets for overhead projectors



NAME

maskfiles - Lay over several text files on top of each other like transparency sheets for overhead projectors


SYNOPSIS

maskfiles [OPTIONS] [--] FILE_1 FILE_2 [FILE_3 ... FILE_n]


DESCRIPTION

Take files from 1 to n and virtually put them on top of each other, matching byte offsets. If a file on an upper layer has a hole (space by default, otherwise see the --hole-chars option), then the char on the lower layers "looks through" it. Non-hole chars block the lower layers, so they themselves are visible in the end.

Output is STDOUT. No input files are written.


OPTIONS

-h, --hole-chars CHARS

Which chars are looked through. By default space is the only hole char. For example, add underscore to it: --hole-chars=" _"

--nul-hole

Make NUL chars look through as well.

--linewise

Respect line breaks.

 

 

 mime_extract - Extract parts from a MIME multipart file and save them into separate files



NAME

mime_extract - Extract parts from a MIME multipart file and save them into separate files

 mime-header-decode - Decode MIME-encoded stream on stdin line-by-line



NAME

mime-header-decode - Decode MIME-encoded stream on stdin line-by-line

 mkdeb - Create a Debian package


NAME

mkdeb - Create a Debian package (.deb)


SYNOPSIS

mkdeb [-m | --multiarch]


DESCRIPTION

Create a *.deb file according to the package name and version info found in the ./deb/DEBIAN/control file, and include in the package all files found in the ./deb folder. Updates some of the control file's fields, eg. Version (increased by 1 if any file in the package is newer than the control file), Installed-Size, ...

In multiarch mode, instead of the ./deb folder, it takes data from all folders in the current working directory whose name is a valid Debian architecture name (eg. amd64, i386, ...), and stores temporary files in ./deb while building each architecture's package.

Mkdeb also considers the mkdeb-perms.txt file in the current working directory to set some file attributes in the package; otherwise all file attributes will be the same as the originals'. Each line in this file looks like:

<MODE> <OWNER> <GROUP> <PATH>

Where

<MODE>

is an octal file permission mode, 3 or 4 digits, or "-" to ignore

<OWNER>

UID or name of the owner user

<GROUP>

GID or name of the owner group

<PATH>

the path of the file to which the attributes are applied, relative to the ./deb directory.
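
A hypothetical mkdeb-perms.txt (the paths and names are examples only):

```
0755 root root usr/bin/mytool
0640 root adm  etc/mytool.conf
-    1000 1000 var/lib/mytool
```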

 mkmagnetlink - Create a "magnet:" link out of a torrent file



NAME

mkmagnetlink - Create a "magnet:" link out of a torrent file

 movesymlinks - Rename a file and correct its symlinks to keep pointing to it.


NAME

movesymlinks - Rename a file and correct its symlinks to keep pointing to it.


SYNOPSIS

movesymlinks OLDNAME NEWNAME [DIR [DIR [...]]]

Rename file OLDNAME to NEWNAME, search the DIR directories for symlinks pointing to OLDNAME, and change them to point to NEWNAME.
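
The essence of the operation, sketched with standard tools (a simplification: the real tool presumably handles relative vs. absolute link targets more carefully):

```shell
cd "$(mktemp -d)"
touch old.txt
ln -s old.txt link.txt
# movesymlinks old.txt new.txt . is roughly:
mv old.txt new.txt
find . -lname old.txt -exec sh -c 'ln -sfn new.txt "$1"' _ {} \;
readlink link.txt   # prints: new.txt
```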

 moz_bookmarks - Read Mozilla bookmarks database and display titles and URLs line-by-line


NAME

moz_bookmarks - Read Mozilla bookmarks database and display titles and URLs line-by-line

 msg - Write to given user's open terminals


NAME

msg - Write to given user's open terminals

 

 multicmd - Run multiple commands in series



NAME

multicmd - Run multiple commands in series


SYNOPSIS

multicmd [OPTIONS] [--] COMMAND-1 ARGS-1 ";" COMMAND-2 ARGS-2 ";" ...

Run COMMAND-1, COMMAND-2, ... COMMAND-n one after the other, similarly to how shells would, except without involving any shell.


OPTIONS

-d, --delimiter STRING

Set the command delimiter to STRING. Default is a literal semicolon (;). You probably need to shell-escape it. If you want -- (double dash) as the delimiter, put it as --delimiter=-- to avoid confusion.

-e, --errexit

Exit if a command did not run successfully (ie. non-zero exit status or killed by a signal) and do not run further commands. Similar to bash(1)'s errexit (set -e) mode. multicmd(1)'s exit code will be the failed command's exit code (128+n if terminated by signal n).


CAVEATS

Note that ; (or the non-default delimiter set by --delimiter) is a shell meta-char in your shell, so you need to escape/quote it; but it is a separate literal argument when you call multicmd(1) from other layers (eg. execve(2)), so don't stick it to the preceding word. Ie:

WRONG: multicmd date\; ls

WRONG: multicmd 'date; ls'

WRONG: multicmd 'date ; ls'

CORRECT: multicmd date \; ls

CORRECT: multicmd date ';' ls


EXIT STATUS

multicmd(1) exits with the exit code of the last command.

 multithrottler - Run the given command if the defined rate limit has not been reached



NAME

multithrottler - Run the given command if the defined rate limit has not been reached

 mysql-fix-orphan-privileges.php - Suggest SQL commands to clean up unused records in system tables which hold permission data



NAME

mysql-fix-orphan-privileges.php - Suggest SQL commands to clean up unused records in system tables which hold permission data

 netrc - manage ~/.netrc file



NAME

netrc - manage ~/.netrc file


SYNOPSIS

netrc list [PROPERTY_NAME [PROPERTY_NAME [...]]]

netrc set [machine MACHINE_NAME | default] PROPERTY_NAME PROPERTY_VALUE [PROPERTY_NAME PROPERTY_VALUE [...]]


DESCRIPTION

Query entries from ~/.netrc file. Set and add properties as well as new entries.

The netrc list command lists machine and login names by default in tabular format. Supply PROPERTY_NAMEs to display other properties besides machine names. The machine name is the key, so it is always displayed.

The netrc set command sets one or more properties of the given MACHINE_NAME machine. If a property does not exist yet, it is appended after the last property. If the machine does not exist yet, it is appended after the last machine entry.

As the machine name is the key, if there are multiple entries with the same machine name but different login names, refer to one of them by LOGIN_NAME@MACHINE_NAME. A login token has to be present in the entry in this case. The simple MACHINE_NAME keeps referring to the first occurrence.

Refer to the default entry by an empty machine name.
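
For example, with a ~/.netrc like this (see netrc(5) for the format; names and passwords are made up), alice@example.com and bob@example.com select the two entries, while plain example.com refers to alice's, the first occurrence:

```
machine example.com login alice password s3cret
machine example.com login bob password hunter2
default login anonymous password guest@example.com
```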


ENVIRONMENT

NETRC_PATH

Alternative path instead of ~/.netrc.


LIMITATIONS

File is not locked during read/write.

Does not support macdef token.


SEE ALSO

netrc(5)

 

 noacute - Strip diacritics from letters on the input stream


NAME

noacute - Strip diacritics (acute, umlaut, ...) from letters on the input stream

 nocomment - remove comment lines from input stream



NAME

nocomment - remove comment lines from input stream


SYNOPSIS

nocomment [grep-arguments]


DESCRIPTION

This command does not overwrite or write files, it just prints them without comments, ie. removing lines starting with a hashmark or a semicolon.


SEE ALSO

grep(1)

 notashell - A non-interactive shell lacking any shell syntax



NAME

notashell - A non-interactive shell lacking any shell syntax


SYNOPSIS

notashell -c COMMANDLINE


DESCRIPTION

notashell(1) is a program with a non-interactive shell interface (ie. sh -c commandLine); it intentionally does not understand any shell syntax or meta characters, but takes the first word of COMMANDLINE and executes it as a single command with all the rest of COMMANDLINE as its arguments.

This is useful when you have a program which normally calls other commands via the shell (eg. system(3)), notably with user-controlled parts in it, ie. data from an untrusted source, which potentially makes the call vulnerable to shell injection. One example is incrond(8) since 2015, which prompted the author to make this defense tool.

These kinds of programs usually try to guard themselves by escaping user input, but it often turns out that the re-implemented shell-escape mechanism was flawed or incomplete.

Using notashell(1) lets you fully evade this type of shell-injection attack. If you control at least the first word of COMMANDLINE, you can safely call a program (wrapper script) in which the supplied COMMANDLINE can be re-examined, accepted, rejected, rewritten, etc., passing execution onward with verified user input.

No need to think about "is it safe to run by shell?" or quotation-mark/escape-backslash forests ever again.


FILES

Customize how COMMANDLINE is parsed via /etc/notashell/custom.pl. If this file exists, notashell(1) executes it inside its main context, so custom.pl can build in custom logic. Some perl variables are accessible: $CommandString, @CommandArgs, and $ExecName.

$CommandString is just the COMMANDLINE; it is recommended to only read it in custom.pl, because changing it does not affect what will be executed. @CommandArgs is COMMANDLINE split into parts by spaces. You may change or redefine it to control the arguments of the command executed at the end. $ExecName is the command's name or path ($CommandArgs[0] by default) which will be executed; you may change this one too, and it does not need to match $CommandArgs[0].

You are also given some utility functions at your disposal in custom.pl: stripQuotes() and setupIORedirects(). stripQuotes() currently just returns the supplied string without surrounding single and double quotes.

setupIORedirects() scans the supplied list for common shell IO redirection syntax, sets up these redirections on the current process, and returns the input list without those elements which were found to be part of a redirection.

Example:

 setupIORedirects("date", "-R", ">", "/tmp/date.txt")
 # returns: ("date", "-R")
 # and have STDOUT redirected to the file.

Recognized representation:

operators:

write (>) and append (>>)

an integer before the operator;

optional, defaults are the same as in sh(1)

filename

just right after the operator or in the next argument; only strings matching [a-zA-Z0-9_,./-]+ are considered filenames.

Don't forget to exit from custom.pl with a true value.

Typical custom.pl script:

  @CommandArgs = setupIORedirects(@CommandArgs);
  @CommandArgs = map {stripQuotes($_)} @CommandArgs;
  1;


SETUP

You probably need a tool to force the negligent program (which is the attack vector for shell injection) to run notashell(1) in place of the normal shell (sh(1), bash(1)). See for example the noshellinject tool (in the ../root-tools directory in notashell's source git repo) to accomplish this.

 

 

 

 organizebydate - Rename files based on their date-time


NAME

organizebydate - Rename files based on their date-time


SYNOPSIS

organizebydate [OPTIONS] PATHS [FIND-PARAM]


DESCRIPTION

Organize files by date and time, typically into a directory structure.

PATHS are file and/or directory paths.

FIND-PARAM are find(1) expressions (predicates) to filter which files to work on, or -H, -L, or -P options - see find(1).


OPTIONS

-t, --template TMPL

Target path name template using strftime(3) macros.

Default: %Y/%m/%d/

Extra macros accepted:

%@

File's directory path

%.

File's name itself (basename)
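
Since TMPL is expanded with strftime(3) macros, you can preview what a template resolves to (apart from the extra %@ and %. macros) with date(1), which understands the same macros. The fixed timestamp below is only for illustration:

```shell
# Preview the default template for a fixed point in time (GNU date assumed)
TZ=UTC date -d @1700000000 '+%Y/%m/%d/'
# prints: 2023/11/14/
```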

--move, --copy

Move or copy files. Default is copy.

--move-success-template TMPL

Move successfully copied files according to the TMPL template. This is useful only with --copy. Default is not to move away successfully copied files. Use it if you want to keep the backed up files on the source side too, but in another directory, so they won't be processed again.

--overwrite

Overwrite already existing target files. Default is to silently ignore them. Note, this affects only --copy and --move, not --handler.

--handler PROG

Execute PROG to handle files one by one instead of the internal copy or move. You may do --handler "rsync -Pvit --inplace --mkpath" --template HOSTNAME:PATH to upload via ssh/rsync (beware, --set-*time and conflicting-filename checking work only on local paths) or implement any file transfer method here. Arguments passed (after those which are given in PROG) are: first, the source file path, and second, the target file path. A conflicting target path is still checked and the resolver is run before PROG if --conflict-resolver-cmd or --conflict-resolver-script is specified; if not, PROG should implement the conflicting file name resolution logic.

--conflict-resolver-cmd CMD, --conflict-resolver-script SCRIPT

Run custom conflict resolver logic on already existing target files. Unless a conflict resolver is given, organizebydate(1) ignores conflicts silently, or overwrites the target unconditionally if --overwrite is specified.

The conflict resolver can be either a single-word command, or a command with arguments when CMD contains IFS chars (like spaces) - in this case you can not pass arguments which themselves contain spaces, because each space-delimited word goes into a separate argument - or a whole bash(1) script if --conflict-resolver-script is given. SCRIPT is run as a separate command too, not in organizebydate(1)'s own shell context.

COMMAND ARGUMENTS

Arguments passed to the conflict resolver command/script (after the arguments included in CMD, if any) are the source file's path first and the target path second:

  1. source file path
  2. target file path

EXAMPLES

 --conflict-resolver-cmd some-command
 # runs this: some-command SOURCE TARGET
 
 --conflict-resolver-cmd "some-command --option x"
 # runs this: some-command --option x SOURCE TARGET
 --conflict-resolver-cmd "some-command --option \"a and b\""
 # WRONG: "a and b" goes into 3 separate arguments, not one
 
 --conflict-resolver-script "some-command --option \"a and b\" \"$@\""
 # RIGHT: runs this within a bash script
 # and in the end this is run: some-command --option "a and b" SOURCE TARGET

ENVIRONMENT

Environment variables passed to conflict resolver programs:

ORGANIZEBYDATE_MODE

copy or move

SOURCE_FILE_MTIME
TARGET_FILE_MTIME
SOURCE_FILE_CTIME
TARGET_FILE_CTIME
SOURCE_FILE_ATIME
TARGET_FILE_ATIME
SOURCE_FILE_SIZE
TARGET_FILE_SIZE

Some attributes of the source and target files to help the resolver. File time attributes are unix timestamps; size is in bytes.

EXIT STATUS

If the conflict resolver program returns a non-zero exit status, it is considered a failure (and recorded if --faillog is given). On zero exit status, the conflict resolution is taken from the last line of the command's output. Don't write more than 1 newline char (\n) at the very end, otherwise the last line would only contain the empty string.

SIGNALS

Conflict resolution signals, ie. what the resolver program can signal back to organizebydate(1) by its last STDOUT line:

skip

Don't copy (move) source file, and don't do any processing (eg. don't move successfully copied source file).

proceed [NEW-TARGET]

Copy (move) source to target, optionally with a new target path. This always (attempts to) overwrite the target, even if --overwrite is not given and whether or not NEW-TARGET is given. So it is the resolver program's responsibility to prevent unwanted overwrites.

done

Indicate that the source file is already there on the target path, so there is no need to copy/move. organizebydate(1) may still set the target's mtime (atime) when the --set-mtime (--set-atime) option is given; and still moves the source file when --move-success-template is given. Emit the done signal if the same file, with binary content equal to the source file, is already present at the target location, or the conflict resolver put it there.

fail

Similar to done but explicitly fails the current item's copy/move. This is recorded in the fail log if the --faillog option is given.

Any other words are considered failure for now.

You may do extra steps in the conflict resolver's logic: eg. rename the old target or move it to another directory and signal proceed at the end, or eg. remove the source file and signal skip - the latter is useful in move mode.
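
As a sketch, a resolver could look like this in bash(1). The keep-the-larger-file policy is only an illustration, not something organizebydate(1) prescribes, and the resolve function wraps the logic just so it can be demonstrated with made-up sizes below:

```shell
# Illustrative resolver policy: overwrite the target only if the source
# is larger. organizebydate(1) exports SOURCE_FILE_SIZE/TARGET_FILE_SIZE.
resolve() {
    if [ "${SOURCE_FILE_SIZE:-0}" -gt "${TARGET_FILE_SIZE:-0}" ]; then
        echo proceed   # overwrite the target
    else
        echo skip      # keep the existing target
    fi
}

# Demonstration with made-up sizes:
SOURCE_FILE_SIZE=2048 TARGET_FILE_SIZE=1024 resolve   # prints: proceed
SOURCE_FILE_SIZE=512  TARGET_FILE_SIZE=1024 resolve   # prints: skip
```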

If you want to ask the user interactively, don't read from stdin(3); rather re-open the tty(4).

stdout(3) is buffered and then echoed except the last line. stderr(3) is let through as-is.

-c, -m, -a, --ctime, --mtime, --atime

Determining timestamps is based on the file's change-, modify-, or access-time. Default is mtime.

-E, --email

Files are raw Emails.

Determining timestamps is based on the Date header.

-J, --jpeg

Files are JPEG images.

Determining timestamps is based on EXIF tags.

--fallback-to-filetime

Fall back to file mtime (ctime, atime) if datetime info is not found in embedded metadata (RFC-822, Exif, ...)

--set-mtime, --set-atime

Set the copied (moved) files' mtime (atime) to the datetime used in the template.

--faillog FILE

Save failed paths to FILE.

-v, --verbose

Verbose mode

-n, --dry-run

Dry run. Do not copy (move) files. Output what would be done in OPERATION TAB SOURCE TAB TARGET format, where OPERATION is one of:

copy
move
custom

for --copy, --move, and --handler operation modes respectively, when the target does not exist.

skip

when the target already exists (and neither --overwrite nor --conflict-resolver-* option is given).

overwrite

when --overwrite is allowed.

conflict

when a --conflict-resolver-* option is given.

--help, --pod, --troff

Output documentation in plain text, POD, or troff (for man(1)) formats.

-i, --min-depth

Minimum directory level to traverse. Equivalent to find(1)'s -mindepth option.

-x, --max-depth

Maximum directory level to traverse. Equivalent to find(1)'s -maxdepth option.


EXIT STATUS

Exit 0 if all files processed successfully.

Exit 1 on parameter error.

Exit 2 if at least 1 file failed.

 organizebydate-conflict-resolve-filename-version - Filename conflict resolver script for organizebydate



NAME

organizebydate-conflict-resolve-filename-version - Filename conflict resolver script for organizebydate(1)


SYNOPSIS

organizebydate-conflict-resolve-filename-version [OPTIONS] SOURCE TARGET


DESCRIPTION

This is a helper program used by organizebydate(1) as a filename conflict resolver command. It signals that the SOURCE is already equivalent to the TARGET if their SHA-256 checksums match. If not, it sets a new target file name for organizebydate(1). The new target includes a version number between the file's basename and extension, taking into account any already existing versioned file names, so no files will be overwritten (unless there is a race condition with other processes writing to the target directory).


OPTIONS

-s STR

Set STR as the string separating the filename (basename) from the version number. Default is a dot (.).

-t STR

Set STR as the string separating the version number from the filename suffix (extension). Default is empty, so the version number is directly followed by the dot which is part of the suffix, if there is an extension.


EXAMPLES

 organizebydate-conflict-resolve-filename-version -s '(v' -t ')' ...


SEE ALSO

organizebydate(1)

 palemoon-current-urls - Display Palemoon web browser's currently opened URLs per window and per tab


NAME

palemoon-current-urls - Display Palemoon web browser's currently opened URLs per window and per tab


LIMITATIONS

Assuming the "default" browser profile.

Assuming the "default" browser profile is in the *.default folder in Pale Moon's profiles folder.

Assuming sessionstore.js is up-to-date.

 pararun - run commands parallelly



NAME

pararun - run commands parallelly


SYNOPSIS

pararun [OPTIONS] [COMMON_ARGS] --- PARTICULAR_ARGS [+ PARTICULAR_ARGS [+ ...]] [--- COMMON_ARGS]


DESCRIPTION

Start several processes simultaneously. Starting several different commands and starting the same command with different arguments are not distinguished: COMMON_ARGS may be empty - in this case each PARTICULAR_ARGS is a command followed by its arguments. When COMMON_ARGS consists of at least 1 argument, it is the command to be started, with the rest of the COMMON_ARGS arguments followed by each PARTICULAR_ARGS per child process.


EXAMPLES

 pararun --- ./server + ./client
 
Runs ./server and ./client programs in parallel.

 pararun ls --- /usr + /etc + /var

Runs ls /usr, ls /etc, and ls /var.

 pararun --- ./server + ./client --- --port=12345

Runs ./server and ./client programs in parallel with the same command line argument.


OPTIONS

-s, --common-sep SEP

Let the string SEP close the common arguments (including the command if it is common as well) instead of the default triple dash (---).

-S, --particular-sep SEP

The string SEP separates the particular arguments instead of the default plus sign (+).

-i, --particular-args-stdin

Read additional PARTICULAR_ARGS from STDIN. Each line is taken as 1 argument unless -d is given.

-d, --stdin-delimiter PATTERN

When reading PARTICULAR_ARGS from STDIN, split up lines into arguments by PATTERN regex pattern. Useful delimiter is \t TAB which you may need to quote in your shell, like '\t' in bash(1).

-a, --success-any

Exit with the lowest status code of the child processes, ie. exit with a zero status code if at least one of the parallel commands succeeded. It still waits for all of them to complete.

-p, --prefix-first-particular-arg

Prefix each output line with the given command's first particular argument.

-C, --colorize-prefix

Colorize each particular command's prefix. Implies -p.

-T, --prefix-trailer STR

Separate prefix from the rest of the line with this string. Default is one space.

-e, --end-summary

Show a textual summary at the end about how each command exited (exit code, exit signal).

-B, --no-bold

Don't use ANSI bold colors.


EXIT STATUS

Exit with the highest exit status of the child processes.


LIMITATIONS

If a command terminates due to a signal, and prefixing and/or prefix coloring is turned on, then the signaled state is not preserved because pararun(1) pipes commands through stdfilt(1) to get them prefixed and/or colored.


SEE ALSO

parallel(1)


INSPIRED BY

polysh https://github.com/innogames/polysh/

 parsel - Select parts of a HTML document based on CSS selectors


NAME

parsel - Select parts of a HTML document based on CSS selectors


INVOCATION

parsel <SELECTOR> [<SELECTOR> [...]] < document.html


DESCRIPTION

This command takes an HTML document on STDIN and some CSS selectors as arguments. See the 'parsel' and 'cssselect' Python modules for which selectors and pseudo selectors are supported.

Each SELECTOR selects a part in the DOM, but unlike CSS, does not narrow the DOM tree down for subsequent selectors. So a sequence of div p arguments (2 arguments) selects all <DIV> and then all <P> in the document; in other words it is NOT equivalent to the div p css selector which selects only those <P> which are under any <DIV>. To combine selectors, see the / (slash) operator below.

Each SELECTOR also outputs what it matched, in the following format: first an integer telling how many distinct HTML parts were selected, then the selected parts themselves, each on its own line. CR, LF, and Backslash chars are escaped by one Backslash char. This is useful for programmatic consumption, because you only have to first read a line which tells how many subsequent lines to read: each one is one selected DOM sub-tree on its own (or text, see ::text and [[ATTRIB]] below). Then just unescape Backslash-R, Backslash-N, and doubled Backslashes (for example with sed -e 's/\\\\/\\/g; s/\\r/\r/g; s/\\n/\n/g') to get the HTML content.
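
For example, unescaping one escaped record works like this (the record content is made up for illustration; "\n" in the data is a literal backslash followed by n, not a real newline yet):

```shell
# One escaped record as a single line:
esc='Domain:\nUsername:\nPassword:'
# Unescape it into real newlines (GNU sed assumed):
printf '%s\n' "$esc" | sed -e 's/\\\\/\\/g; s/\\r/\r/g; s/\\n/\n/g'
# prints the three labels on three separate lines
```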

Additionally it takes these special arguments as well:

@SELECTOR

Prefix your selector with an @ at sign to suppress output. Mnemonic: Command line echo suppression in DOS batch and in Makefile.

text{} or ::text

Removes HTML tags and leaves only text content in the output. The text{} syntax is borrowed from pup(1). The ::text form is there for you if curly brackets are magical in your shell and you don't want to escape them. Note, ::text is not a standard CSS pseudo selector at the moment.

attr{ATTRIB} or [[ATTRIB]]

Output only the value of the uppermost selected element's ATTRIB attribute. attr{} syntax is borrowed from pup(1). Mnemonic for the [[ATTRIB]] form: in CSS you filter by tag attribute with [attr] square brackets, but as it's a valid selector, parsel(1) takes double square brackets to actually output the attribute.

/ (forward slash)

A stand-alone / takes the current selection as a base for the rest of the selectors. Therefore the subsequent SELECTORs work on the previously selected elements, not on the document root. Mnemonic: one directory level deeper. So this arg sequence: .content / p div selects only those P and DIV elements which are inside a "content" class. This is useful because with CSS alone you can not group P and DIV together here. In other words, neither .content p, div nor .content > p, div provides the same result.

SEL1/SEL2/SEL3

A series of selectors delimited by / forward slashes in a single argument delves into the DOM tree, but shows only those elements which the last selector yields. In contrast, the multi-argument variant SEL1 / SEL2 / SEL3 shows everything SEL1, SEL2, SEL3, etc. produce. It is similar to this 5-argument sequence: @SEL1 / @SEL2 / SEL3, except SEL1/SEL2/SEL3 rewinds the base selection to the one before SEL1, while the former moves the base selection to SEL3 at the end.

You may still silence its output by prepending @, like: @SEL1/SEL2/SEL3, so not even SEL3 will be shown. This is useful when you want only its attributes or inner text (see text{} and attr{}).

Since slashes may occur normally in valid CSS selectors, please double those / slashes which are not meant to separate selectors but are part of a selector - usually a URL in a tag attribute. Eg. instead of a[href="http://example.net/page"], input a[href="http:////example.net//page"].

.. (double period)

A stand-alone .. rewinds the base DOM selection to the previous base selection before the last /. Mnemonic: parent directory. Note, it does not select the parent element in the DOM tree, but the stuff previously selected in this parsel(1) run. To select the parent element(s) use parent{}.

parent{} or :parent

Select the currently selected elements' parent elements on the DOM tree. Note, :parent is not a standard CSS selector at the moment. Use the parent{} form to disambiguate it from real (standardized) CSS selectors in your code.

@:root

Rewind the base selection back to the DOM's root. Note, :root is also a valid CSS pseudo selector, but in a subtree (entered into by /) it would yield only that subtree, not the original DOM, so parsel(1) goes back to it at this point. You likely need @ too, to suppress outputting the whole document here.


OPTIONS

-1

Show only the first element found. The output is not escaped in this case.


EXAMPLE OUTPUT

  $ parsel input[type=text] < page.html
  2
  <input type="text" name="domain" />
  <input type="text" name="username" />
  $ parsel input[type=text] [[name]] < page.html
  2
  <input type="text" name="domain" />
  <input type="text" name="username" />
  2
  domain
  username
  $ parsel @input[type=text] [[name]] < page.html
  2
  domain
  username
  $ parsel @form ::text < page.html
  1
  Enter your logon details:\n\nDomain:\n\nUsername:\n\nPassword:\n\nClick here to login:\n\n


REFERENCE

https://www.w3schools.com/cssref/css_selectors.php
https://developer.mozilla.org/en-US/docs/Web/CSS/Reference#selectors
https://github.com/scrapy/cssselect
https://cssselect.readthedocs.io/en/latest/#supported-selectors


SIMILAR TOOLS

https://github.com/ericchiang/pup
https://github.com/suntong/cascadia
https://github.com/mgdm/htmlq
 partial - Show an earlier started long-running command's partial output


NAME

partial - Show an earlier started long-running command's partial output


SYNOPSIS

partial [--restart|--forget|--wait|--pid] <COMMAND> [<ARGUMENTS>]


DESCRIPTION

On first invocation, partial(1) starts COMMAND in the background. On subsequent invocations, it prints to stdout the output the command has generated so far, including the parts which were shown before, and keeps it running in the background. Hence the name 'partial': it shows a command's partial output. When the command has finished, partial(1) prints the whole output and exits with COMMAND's exit code.


OPTIONS

-f, --forget

Terminate (SIGTERM) previous instance of the same command and clean up status directory, even if it's running.

-r, --restart

Terminate command if running (like with --forget) and start it again.

-w, --wait

On first run, wait for the complete output.

-p, --pid

display PID

-q, --quiet

less verbose


STATUS CODES

  1. command started

  2. partial output shown

  nnn. called command returned with this status code nnn


LIMITS

If COMMAND does not exit normally but gets terminated by a signal, the exit code is indistinguishable from a normal exit's status code, because bash(1) uses the value of 128+N as the exit status when a command terminates on a fatal signal N.

 pathmod - Run command with a modified PATH



NAME

pathmod - Run command with a modified PATH


SYNOPSIS

pathmod [OPTIONS] [--] COMMAND [ARGS]


DESCRIPTION


OPTIONS

-d, --direct

Look up only COMMAND according to the modified PATH. Commands called by COMMAND and its children still inherit the PATH environment variable from pathmod(1)'s caller - unless, of course, COMMAND changes it on its own.

If neither -d nor -s is given, the default mode is -d.

-s, --subsequent

Modify the PATH environment for COMMAND, so COMMAND itself is still looked up according to the same PATH as pathmod(1), but its children are going to be looked up according to the modified PATH.

-d -s

Simultaneous --direct and --subsequent is supported. In this case COMMAND is looked up according to the modified PATH and the PATH environment is changed too. This is nearly the same as env PATH=MOD_PATH COMMAND ARGS.

-r, --remove DIR

Remove DIR directory from the PATH. Note, items in PATH are normalized first. Normalization rules:

an empty item is the self (. "dot") directory
trailing slashes are removed

--remove-regex PATTERN

Remove items matching PATTERN regexp from the PATH.

-a, --append DIR
-p, --prepend DIR

Append or prepend DIR directory to the PATH.

-i, --insert-before PATTERN:DIR

Insert DIR before each item in the PATH which matches to PATTERN regexp.


LIMITATIONS


SEE ALSO

 paths2indent - Transform list of filesystem paths to an indented list of the leaf elements



NAME

paths2indent - Transform list of filesystem paths to an indented list of the leaf elements


SYNOPSIS

paths2indent [OPTIONS]


DESCRIPTION

Input: list of file paths line-by-line

Output: leaf file names indented by as many tabs as the file's depth in the tree


OPTIONS

-d, --separator CHAR
-t, --stop PATTERN
-s, --sort


LIMITATIONS

Input paths can not have empty path elements (ie. consecutive slashes).


SEE ALSO

https://github.com/jez/as-tree
 pcut - Cut given fields of text input separated by the given Perl regex



NAME

pcut - Cut given fields of text input separated by the given Perl regex


SYNOPSIS

pcut [OPTIONS] [FILE [FILE [...]]]


DESCRIPTION

Standard cut(1) breaks up input lines by a given single char. pcut(1) does this by the given perl(1)-compatible regular expression. cut(1) outputs fields always in ascending order, without duplication. pcut(1) outputs fields in the requested order, even multiple times if asked so by the -f option.


OPTIONS

-f, --fields NUMBERS

Counted from 1. See cut(1) for syntax.

-d, --delimiter REGEX

Default is whitespace (\s+).

-s, --only-delimited

See the same option in cut(1).

-D, --output-delimiter STRING

Define the output field delimiter. Default is not to use a constant output delimiter, but to preserve the separator substrings as they matched to the pattern of -d option (see --prefer-preceding-delimiter and --prefer-succeeding-delimiter options).

-P, --prefer-preceding-delimiter
--prefer-succeeding-delimiter (default)

Contrary to cut(1), pcut(1) does not always use a constant delimiter char, but a regexp pattern which may match to different substrings between fields in the input lines.

Each output field (except the last one) is followed by that substring which was matched to the delimiter pattern just right after that field in the input.

With --prefer-preceding-delimiter, each output field (except the first one) is similarly preceded by that substring which was matched to the delimiter pattern just before that field in the input.

--delimiter-before-first STRING

Write STRING before field 1 if it is not the first field on the output (in --prefer-preceding-delimiter mode).

--delimiter-after-last STRING

Write STRING after the last field if it is written not as the last field on the output.

-z, --zero-terminated

Terminate output records (lines) by NUL char instead of LineFeed.


LIMITATIONS


SEE ALSO

cut(1), hck, tuc, rextr(1), arr(1), choose

 

 perl-repl - Read-Evaluate-Print-Loop wrapper for perl


NAME

perl-repl - Read-Evaluate-Print-Loop wrapper for perl(1)

 pfx2pem - Convert PFX certificate file to PEM format


NAME

pfx2pem - Convert PFX (PKCS#12) certificate file to PEM format
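
The conversion itself presumably boils down to openssl(1)'s pkcs12 subcommand. A self-contained sketch - the throwaway certificate and all the file names here are made up only for demonstration:

```shell
cd "$(mktemp -d)"
# Setup only: create a throwaway key+certificate and pack them into a PFX.
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=demo \
        -keyout key.pem -out crt.pem -days 1 2>/dev/null
openssl pkcs12 -export -inkey key.pem -in crt.pem -passout pass: -out bundle.pfx
# The PFX-to-PEM conversion (roughly what pfx2pem wraps):
openssl pkcs12 -in bundle.pfx -passin pass: -nodes -out bundle.pem 2>/dev/null
grep -c 'BEGIN' bundle.pem   # the PEM now holds the key and the certificate
```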

 pipecmd - Run a command and pipe its output to an other one


NAME

pipecmd - Run a command and pipe its output to an other one


SYNOPSIS

pipecmd CMD_1 [ARGS] -- CMD_2 [ARGS]


DESCRIPTION

Equivalent to this shell command:

 CMD_1 | CMD_2

The first command's (CMD_1) arguments can not contain a double-dash (--), because it's the command separator for pipecmd(1). However, since only a total of 2 commands are supported, arguments for CMD_2 may contain double-dash(es).

You can chain pipecmd(1) commands together to get a pipeline equivalent to CMD_1 | CMD_2 | CMD_3, like:

 pipecmd CMD_1 -- pipecmd CMD_2 -- CMD_3


RATIONALE

It is sometimes more convenient not to involve a shell command-line parser.


SEE ALSO

pipexec(1)

 pipekill - Send signal to a process on the other end of the given pipe filedescriptor


NAME

pipekill - Send signal to a process on the other end of the given pipe filedescriptor

 PMbwmon - Poor man's bandwidth monitor


NAME

PMbwmon - Poor man's bandwidth monitor


SYNOPSIS

PMbwmon [kMG][bit | Byte] [INTERFACES...]

 PMdirindex - Poor man's directory index generator, output HTML


NAME

PMdirindex - Poor man's directory index generator, output HTML

 PMdirindex - Poor man's hex diff viewer


NAME

PMdirindex - Poor man's hex diff viewer

 PMnslist - Poor man's namespace list


NAME

PMnslist - Poor man's namespace list

 PMpwgen - Poor man's password generator


NAME

PMpwgen - Poor man's password generator

 PMrecdiff - Poor man's directory tree difference viewer, comparing file names and sizes recursively


NAME

PMrecdiff - Poor man's directory tree difference viewer, comparing file names and sizes recursively

 PMwrite - poor man's write - BSD write program alternative


NAME

PMwrite - poor man's write - BSD write program alternative


SYNOPSIS

PMwrite USER


DESCRIPTION

Write a message to USER, who is currently logged in on the local host and has messaging enabled (eg. by mesg y).

PMwrite writes the message on all the terminals on which USER enabled messaging.


SEE ALSO

write(1)

 pngmetatext - Put metadata text into PNG file


NAME

pngmetatext - Put metadata text into PNG file

 prefixlines - Prefix lines from STDIN



NAME

prefixlines - Prefix lines from STDIN


SYNOPSIS

prefixlines [PREFIX]

 pvalve - Control how much a given command should run by an other command's exit code



NAME

pvalve - Control how much a given command should run by an other command's exit code


SYNOPSIS

pvalve [<CONTROL COMMAND>] -- [<LONG RUNNING COMMAND>]

Controls when LONG RUNNING COMMAND should run, by pause and unpause it according to the CONTROL COMMAND's exit status.


DESCRIPTION

Pause LONG RUNNING COMMAND process group with STOP signal(7) if CONTROL COMMAND exits non-zero. Unpause LONG RUNNING COMMAND process group with CONT signal(7) if CONTROL COMMAND exits zero.

Pvalve takes the last line of CONTROL COMMAND's stdout, and if it looks like a time interval (ie. a positive number with an optional fraction, followed by an optional "s", "m", or "h" suffix) then the next check of CONTROL COMMAND will start after that much time. Otherwise it takes the PVALVE_INTERVAL environment variable, or starts the next check immediately if that is not set either.

Pvalve won't bombard LONG RUNNING COMMAND with more consecutive STOP or CONT signals.


USEFULNESS

It is useful eg. for basic load control. Start a CPU-intensive program as LONG RUNNING COMMAND and check hardware temperature in CONTROL COMMAND. Make it exit 0 when the temperature is below a certain value, and exit 1 if it is above another, higher value.
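
Such a control command could be sketched like this. TEMP_FILE is a made-up stand-in for a real temperature sensor query, a single threshold is used for simplicity (the two-threshold hysteresis described above is omitted), and the logic sits in a function only so it can be demonstrated with a fake reading below:

```shell
# Hypothetical pvalve control command: the last stdout line is the next
# check interval, the exit status signals run (0) or pause (non-zero).
check_temp() {
    temp=$(cat "${TEMP_FILE:-/dev/null}" 2>/dev/null)
    echo 10s                    # re-check the temperature in 10 seconds
    [ "${temp:-0}" -lt 80 ]     # exit 0 below 80 degrees, 1 otherwise
}

echo 75 > /tmp/fake_temp        # pretend the sensor reads 75 degrees
TEMP_FILE=/tmp/fake_temp check_temp; echo "exit status: $?"
# prints: 10s, then: exit status: 0
```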


ENVIRONMENT

PVALVE_INTERVAL

Default interval between two CONTROL COMMAND runs.

PVALVE_STATUS

PVALVE_STATUS describes whether LONG RUNNING COMMAND should be in running or in paused state. Possible values: RUN, STOP. This environment variable is available to CONTROL COMMAND.

PVALVE_PID

PID of LONG RUNNING COMMAND.


CAVEATS

Further process groups which are created by LONG RUNNING COMMAND will not be affected.

 pyzor-files - Run a pyzor command on the given files


NAME

pyzor-files - Run a pyzor(1) command on the given files

 qrwifi - Generate a string, used in WiFi-setup QR codes, containing a hotspot name and password


NAME

qrwifi - Generate a string, used in WiFi-setup QR codes, containing a hotspot name and password

 

 randstr - Generate random string from a given set of characters and with a given length.



NAME

randstr - Generate random string from a given set of characters and with a given length.


SYNOPSIS

randstr <LENGTH> [<CHARS>]


DESCRIPTION

CHARS is a character set expression, see tr(1). Default CHARS is [a-zA-Z0-9_]
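
A rough shell equivalent of randstr 16 with the default character set (a sketch, not the actual implementation):

```shell
# Keep only chars from the default set out of the random byte stream,
# stop after 16 of them.
tr -dc 'a-zA-Z0-9_' < /dev/urandom | head -c 16; echo
```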

 rcmod - Run a given command and modify its Return Code according to the rules given by the user



NAME

rcmod - Run a given command and modify its Return Code according to the rules given by the user


SYNOPSIS

rcmod [<FROM>=<TO> [<FROM>=<TO> [...]]] <COMMAND> [<ARGS>]


DESCRIPTION

If COMMAND returned with code FROM, then rcmod(1) returns with TO. FROM may be a comma-delimited list. The keyword any means any return code not specified in other FROM parameters. The keyword same causes the listed exit codes to be preserved.

  rcmod any=0 1=13 2,3=same user-command

It runs user-command, then exits with status 13 if user-command exited with 1, with 2 if 2, with 3 if 3, and with 0 for any other return value.

If COMMAND was terminated by a signal, rcmod(1) exits with 128 + signal number like bash(1) does.


SEE ALSO

reportcmdstatus(1), sigdispatch(1)

 

 

 redirexec - Execute a command with some file descriptors redirected.



NAME

redirexec - Execute a command with some file descriptors redirected.


SYNOPSIS

redirexec [FILENO:MODE:file:PATH] [--] COMMAND ARGS

redirexec [FILENO:MODE:fd:FILENO] [--] COMMAND ARGS

redirexec [FILENO:-] [--] COMMAND ARGS


DESCRIPTION

Setup redirections before executing COMMAND. You can setup the same type of file and file descriptor redirections as in shell.

FILENO is a file descriptor integer or one of the names "stdin", "stdout", and "stderr" for the standard file descriptors.

MODE is one of:

r

read

c

create/clobber

rw

read and write

a

append


SHORTHANDS

--STD_FD_NAME-file=PATH
--std[out | err]-append=PATH
--STD_FD_NAME-fd=FILENO
--STD_FD_NAME-close


EXAMPLES

  +-----------------+-------------------------------+
  | shell syntax    | redirexec(1) equivalents      |
  +=================+===============================+
  | > output.txt    | stdout:c:file:output.txt      |
  |                 | 1:c:file:output.txt           |
  |                 | --stdout-file=output.txt      |
  +-----------------+-------------------------------+
  | 2>&1            | stderr:c:fd:stdout            |
  |                 | 2:c:fd:1                      |
  |                 | --stderr-fd=1                 |
  |                 | --stderr-fd=stdout            |
  +-----------------+-------------------------------+
  | < /dev/null     | 0:r:file:/dev/null            |
  |                 | 0:-                           |
  |                 | --stdin-close                 |
  +-----------------+-------------------------------+
  | 10< pwd         | 10:r:file:pwd                 |
  +-----------------+-------------------------------+
  | >/dev/null 2>&1 | 1:- 2:-                       |
  |                 | --stdout-close --stderr-close |
  +-----------------+-------------------------------+


SEE ALSO

redirfd by execlineb(1)

 regargwrap - Replace non-regular file arguments to regular ones



NAME

regargwrap - Replace non-regular file arguments to regular ones


SYNOPSIS

regargwrap [OPTIONS] COMMAND [ARGS]


DESCRIPTION

Saves the content of non-regular files found in ARGS into temporary files, then runs COMMAND ARGS with the non-regular file arguments replaced with the regular (yet temporary) ones.

This is useful if COMMAND does not support reading from pipes or other non-seekable files.


OPTIONS

--pipes
--sockets
--blocks
--chars

Replace only pipe/socket/block/char special files. If none of these options is specified, replace any of them by default.


EXAMPLES

  regargwrap git diff --no-index <(ls -1 dir_a) <(ls -1 dir_b)


LIMITATIONS

Impractical with huge files, because they possibly do not fit on the temporary files' filesystem.


SEE ALSO

regargwrap(1) is a generalization of seekstdin(1).

 renamemanual - Interactive file rename tool


NAME

renamemanual - Interactive file rename tool


SYNOPSIS

renamemanual FILE [FILE [...]]


DESCRIPTION

Prompt the user for new names for the files given in arguments. Won't overwrite existing files; rather it keeps asking for a new name until the file can be renamed (without overwriting an existing file). Skip a file by entering an empty name.


SEE ALSO

mv(1), rename(1), file-rename(1p) (prename(1)), rename.ul (rename(1)), rename.td(1)

 rename.td - rename multiple files by a Perl expression


NAME

rename.td - rename multiple files by a Perl expression


SYNOPSIS

rename.td [ -v[v] ] [ -n ] [ -f ] perlexpr [ files ]

cat files.list | rename.td [ -v[v] ] [ -n ] [ -f ] perlexpr


DESCRIPTION

rename.td renames the files supplied according to the rule specified as the first argument. The perlexpr argument is a Perl expression which is expected to modify the $_ string in Perl for at least some of the filenames specified. If a given filename is not modified by the expression, it will not be renamed. If no filenames are given on the command line, filenames will be read via standard input.

For example, to rename all files matching *.bak to strip the extension, you might say

        rename.td 's/\.bak$//' *.bak

To translate uppercase names to lower, you'd use

        rename.td 'y/A-Z/a-z/' *


OPTIONS

-v, --verbose

Verbose: print names of files successfully renamed.

-vv

Verbose extra: print names of files whose name is not changed.

-n, --dry-run, --no-act

No Action: show what files would have been renamed, or skipped.

-f, --force

Force: overwrite existing files.

--mkdir

Create missing directories.


OUTPUT

Output Tab-delimited fields line-by-line. The first line contains the headers. Each subsequent line describes a file in this way:

  1st field - status
    KEEP - no change in file name, shown in -vv mode
    SKIP - destination already exists, not in --force mode
    WOULD - rename would be attempted, in --dry-run mode
    OK - successfully renamed
    ERR nnn - error happened during rename, error code is nnn
  2nd field - old file name
  3rd field - new file name


EXIT STATUS

Zero when all renames succeeded, otherwise the highest error number among the failed renames, if any. See rename(2) for these error numbers.


ENVIRONMENT

No environment variables are used.


CREDITS

Larry Wall (author of the original)

Robin Barker


SEE ALSO

mv(1), perl(1), rename(2), file-rename(1p) (prename(1)), rename.ul (rename(1)), renamemanual(1)


DIAGNOSTICS

If you give an invalid Perl expression you'll get a syntax error.

 repeat - Run the given command repeatedly


NAME

repeat - Run the given command repeatedly


SYNOPSIS

repeat COMMAND [ARGS]


ENVIRONMENT

REPEAT_TIMES

How many times to repeat the given command. Default is -1 which means infinite.

REPEAT_COUNT

How many times the command has been run. It is not a variable repeat(1) itself takes as input; rather it is passed to COMMAND for its information.

REPEAT_UNTIL

Stop repeat(1) if COMMAND exits with this return code. By default the return code is not checked.

REPEAT_DELAY

Sleep interval between invocations, in seconds by default. See sleep(1) for valid parameters, eg. "10m" for 10 minutes. Default is no delay.

The exceptional value for REPEAT_DELAY is enter, for which repeat(1) waits until the user presses Enter on the terminal before repeating the given command.
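The loop described above can be sketched in Python. This is an illustrative approximation, not the real implementation: the "enter" value of REPEAT_DELAY and signal handling are omitted, and the assumption that REPEAT_COUNT starts at 0 is mine.

```python
import os
import subprocess
import time

def repeat(argv, env=os.environ):
    """Sketch of repeat(1)'s main loop, driven by the variables above."""
    times = int(env.get("REPEAT_TIMES", "-1"))        # -1 means infinite
    until = env.get("REPEAT_UNTIL")                   # stop on this exit code
    delay = float(env.get("REPEAT_DELAY", "0") or 0)  # "enter" mode not sketched
    count = 0
    status = None
    while times < 0 or count < times:
        # expose the run counter to COMMAND (assumed to start at 0)
        status = subprocess.call(argv, env=dict(env, REPEAT_COUNT=str(count)))
        count += 1
        if until is not None and status == int(until):
            break
        if delay:
            time.sleep(delay)
    return status
```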

 replcmd - Wrap any command in a REPL interface


NAME

replcmd - Wrap any command in a REPL interface


SYNOPSIS

replcmd COMMAND [ARGS]


DESCRIPTION

Run COMMAND repeatedly with words read from STDIN appended to its argument list after ARGS. You may add a prompt, history, and other CLI goodies on top of replcmd(1) with eg. rlwrap(1).


RUNTIME COMMANDS

WORDS

Run COMMAND ARGS WORDS. WORDS get split on $IFS.

# [PARAM-1 [PARAM-2 [...]]]

Prefix the line with a # hash mark to set fixed parameters for COMMAND. These will be inserted between ARGS and the WORDS read from STDIN.


EXAMPLE

rlwrap --remember --command-name dict --substitute-prompt "dict> " replcmd dict

 reportcmdstatus - Textually show how the given command finished



NAME

reportcmdstatus - Textually show how the given command finished (exit status/signal)


SYNOPSIS

reportcmdstatus [OPTIONS] [--] COMMAND [ARGS]


OPTIONS

-c, --clone-status

Take COMMAND's exit status and exit with it. Default is to exit with 0.

If COMMAND did not exit normally but was terminated by a signal, exit with 128 + SIGNAL, like most shells do.

-s, --report-start

Report what is being started, ie. COMMAND ARGS, on STDERR.

-w, --wait-end

Wait, after COMMAND has ended, for the user to press Enter before quitting.

 

 rotate-counters - Increment numbers in file names


NAME

rotate-counters - Increment numbers in file names

 

 rsacrypt - Encrypt/decrypt files with RSA


NAME

rsacrypt - Encrypt/decrypt files with RSA

 

 rsysrq - Send SysRQ commands remotely over the network


NAME

rsysrq - Send SysRQ commands remotely over the network

 saveout - Save a program's output to dynamically named files



NAME

saveout - Save a program's output to dynamically named files


SYNOPSIS

saveout OPTIONS [--] COMMAND [ARGS]


DESCRIPTION

Run COMMAND and redirect its STDOUT, and/or STDERR, and/or other file descriptors, line-by-line, to dynamically named files. Always appends to output files. Useful eg. for saving logs of a long-running command (service) in separate files per day.

You can set flush rules (see below) for each output (STDOUT, STDERR, specific FD...). A particular file is always flushed when the filename of the given output changes (as the old file is closed). Output is written as complete lines, so don't expect long data not delimited by linefeeds to appear chunk-by-chunk, even with bytes- or time-based flushing. Only linefeed is taken as the line terminator; not even a sole carriage-return counts.


OPTIONS

--out TEMPLATE

Equivalent to --fd-1 TEMPLATE.

--err TEMPLATE

Equivalent to --fd-2 TEMPLATE.

--fd-N TEMPLATE

Write COMMAND's output on the Nth file descriptor (STDOUT, STDERR, ...) to a file whose path and name are constructed according to TEMPLATE. TEMPLATE may contain the following macros:

time macros

Supports all strftime(3) macros, eg. %c, %s, %F, %T, ...

%[pid]

The PID of the process running COMMAND.

%[substr:POS[:LEN]]

The line's substring at the given POS position and LEN length, or to the end of line (excluding the terminating linefeed char) if :LEN is not given. Both POS and LEN can be negative; see perldoc -f substr for details.

Beware of potential unwanted path traversal! Make sure that the resulting file path does not go outside of the directory you intended to write to, eg. by output, controlled by an untrusted party, containing something like ../../../etc/.
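Since the macro refers to perldoc -f substr, negative POS and LEN follow Perl's semantics. A minimal Python sketch of that rule (my approximation; the real tool is Perl):

```python
def substr_macro(line, pos, length=None):
    """Approximate Perl substr() semantics for %[substr:POS[:LEN]].
    Negative POS counts from the end of the line; a negative LEN
    leaves that many characters off the end (perldoc -f substr)."""
    line = line.rstrip("\n")                 # terminating linefeed excluded
    n = len(line)
    start = pos if pos >= 0 else max(0, n + pos)
    if length is None:
        end = n                              # to the end of line
    elif length >= 0:
        end = start + length
    else:
        end = n + length                     # stop -LEN chars before the end
    return line[start:max(start, end)]
```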

%[regex:PATTERN:CAPTURE]

Not implemented.

%[perl:EXPR]

Not implemented.

-L, --flush-lines [LINES]

Flush output files after every LINES lines. Flush on each line if LINES is not given (default LINES is 1). By default, flushing is left to the underlying IO layer, which usually buffers 4-8k blocks. If you want to set different flushing rules on different outputs, other than buffered IO or other than the default given by the -L option, override with -Lo, -Le, and/or -Ln. See below.

-Lo, --flush-lines-stdout [LINES]

Equivalent to -Ln 1=LINES.

-Le, --flush-lines-stderr [LINES]

Equivalent to -Ln 2=LINES.

-Ln, --flush-lines-fd FD[=LINES]

Set file descriptor FD's output to be flushed by each LINES lines. Default LINES is 1.

-B, --flush-bytes BYTES
-Bo, --flush-bytes-stdout BYTES
-Be, --flush-bytes-stderr BYTES
-Bn, --flush-bytes-fd FD[=BYTES]

Similar to the --flush-lines option group, except flush after at least BYTES bytes are written to the selected outputs. It probably does not make sense to set this larger than the buffered-IO block size.

-S, --flush-sec SEC
-So, --flush-sec-stdout SEC
-Se, --flush-sec-stderr SEC
-Sn, --flush-sec-fd FD[=SEC]

Similar to the --flush-lines and --flush-bytes option groups, except flush after at least SEC seconds have passed since the last write to the selected outputs.

-f, --failure ACTION

If the output file can not be written, or can not even be opened, print the failed line of text to saveout's own STDERR, then, depending on ACTION:

TERM

Terminate COMMAND by SIGTERM, then continue running, but it will also exit soon, as COMMAND probably terminates upon the signal. This is the default.

PIPE

Send SIGPIPE to COMMAND, then continue running. COMMAND may recover from the error condition itself.

IGNORE

Just ignore.


SECURITY

See the above comment on %[substr] template macro.


SEE ALSO

savelog(8), logto(1), stdsyslog(1), loggerexec(1), redirexec(1), logger(1), stdfilt(1)

 screenconsole - Interactive CLI to run GNU/screen commands against current or specified screen session


NAME

screenconsole - Interactive CLI to run GNU/screen commands against current or specified screen session

 

 screen-notify - Send status-line message to the current GNU/Screen instance


NAME

screen-notify - Send status-line message to the current GNU/Screen instance

 screenreattach - Reattach to GNU/screen and import environment variables


NAME

screenreattach - Reattach to GNU/screen and import environment variables

 screens - List all GNU/Screen sessions accessible by the user and all of their inner windows as well


NAME

screens - List all GNU/Screen sessions accessible by the user and all of their inner windows as well


OPTIONS

-W

don't show individual windows in each GNU/Screen session

 seekstdin - Makes STDIN seekable for a given command



NAME

seekstdin - Makes STDIN seekable for a given command


SYNOPSIS

seekstdin COMMAND [ARGS]


DESCRIPTION

Saves the content of STDIN into a temporary file, then runs COMMAND. This is useful if COMMAND does not support reading from a pipe. One of the reasons why reading from a pipe is often not supported is that it is not seekable. seekstdin(1) makes COMMAND's STDIN seekable by saving its own input to a file which is unlinked right away, so it won't occupy disk space once COMMAND ends.
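The mechanism described above can be sketched in Python (an illustration of the technique, not the actual implementation):

```python
import io
import os
import shutil
import sys
import tempfile

def seekable_copy(src):
    """Drain a (possibly non-seekable) stream into an unlinked temp file."""
    tmp = tempfile.TemporaryFile()   # already unlinked: no disk space held after close
    shutil.copyfileobj(src, tmp)
    tmp.seek(0)
    return tmp

def seekstdin(argv):
    """Sketch of seekstdin(1): put a seekable copy of STDIN on fd 0, then exec."""
    tmp = seekable_copy(sys.stdin.buffer)
    os.dup2(tmp.fileno(), 0)         # COMMAND now reads a regular file
    os.execvp(argv[0], argv)
```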


LIMITATIONS

Impractical with huge files, because they may not fit on the filesystem holding the temporary files.


SEE ALSO

ordinargs(1)

 set-sys-path - Set PATH according to /etc/environment and run the given command


NAME

set-sys-path - Set PATH according to /etc/environment and run the given command

 set-xcursor-lock-and-run - Set X11 cursor to a padlock and run a command


NAME

set-xcursor-lock-and-run - Set X11 cursor to a padlock and run a command

 

 

 spoolprocess - process files in a spool directory



NAME

spoolprocess - process files in a spool directory


SYNOPSIS

spoolprocess [OPTIONS] -d DIRECTORY


DESCRIPTION

Take all files in the DIRECTORY specified by the -d option, group them by their basename, ie. the name without an optional "dot + number" suffix (.1, .2, ..., also known as the version number), and call the /etc/spoolprocess/BASENAME program for each group to handle its files.

The handler program (usually a script) gets the spool file's path as an argument.

If the program succeeds, spoolprocess(1) deletes the files for which the handler script was successful, or all files in the group if --latest was given and it succeeded.
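The grouping rule above can be sketched in Python (my reading of the "dot + number" suffix convention, not the tool's actual code):

```python
import re
from collections import defaultdict

def group_spool_files(names):
    """Group spool file names by basename (name minus an optional '.NUMBER'
    version suffix), ordering each group by ascending version number."""
    groups = defaultdict(list)
    for name in names:
        m = re.fullmatch(r"(.+)\.(\d+)", name)
        # a file without a version suffix is assumed to sort first
        base, version = (m.group(1), int(m.group(2))) if m else (name, 0)
        groups[base].append((version, name))
    return {base: [n for _, n in sorted(files)]
            for base, files in groups.items()}
```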


OPTIONS

-d, --directory DIRECTORY

This option is repeatable.

-g, --group BASENAME

Process only those files with BASENAME. This option is repeatable.

-L, --latest

Process only the latest (highest version number) file in each group. The default is to process all files in ascending order of version numbers.

-S, --scriptdir DIR

Look up programs in DIR instead of /etc/spoolprocess. This option is repeatable.

-w, --wrapper COMMAND

Prepend COMMAND to handler scripts found in --scriptdir. COMMAND is tokenized on whitespace, so -w "bash -x" makes the script invoked like this, for example:

 bash -x /etc/spoolprocess/something spooldir/something.1

-v, --verbose


LIMITATIONS

spoolprocess(1) does not do locking. Run it under flock(1), singleinstance(1), cronrun(1), or similar if you deem it necessary.

DIRECTORY is scanned non-recursively.


SEE ALSO

uniproc(1)

 ssh-agent-finder - Find a working ssh agent on the system so you get the same in each of your logon sessions


NAME

ssh-agent-finder - Find a working ssh agent on the system so you get the same in each of your logon sessions


USAGE EXAMPLE

. ssh-agent-finder -Iva

 stdfilt - Run a command but filter its STDOUT and STDERR



NAME

stdfilt - Run a command but filter its STDOUT and STDERR


SYNOPSIS

stdfilt [OPTIONS] [--] COMMAND [ARGS]


DESCRIPTION

Run COMMAND and match each of its output lines (both stdout and stderr, separately) against the filter rules given as command arguments (-f) or in files (-F). All filter expressions are evaluated and the last matching rule wins, so it's a good idea to put broader matching patterns first and the more specific ones later.
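The "last matching rule wins" evaluation can be sketched in Python (a deliberately minimal model: tags, offsets, and replacements are omitted, and default-pass behavior is my assumption):

```python
import re

def line_passes(line, rules):
    """rules is a list of (pattern, allow) pairs, evaluated in order;
    the verdict of the last pattern that matches the line decides."""
    verdict = True                      # assumed: lines pass unless blocked
    for pattern, allow in rules:
        if re.search(pattern, line):
            verdict = allow
    return verdict

# Block everything, then re-allow lines mentioning errors
# (the shape of the [STDOUT] example below):
rules = [(r"", False), (r"error", True)]
```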


OPTIONS

-F, --filter-file FILE
-f, --filter EXPR


FILTER FILE FORMAT

Empty lines and comments are ignored, as well as leading whitespace. A comment is everything after a hashmark (#) preceded by whitespace, or the whole line if it starts with a hashmark.

Each line is a filter rule, whose syntax is:

[match_tags] [pattern [offset]] [replacer] [set_tags]

match_tags

Tag names, each of them in square brackets (eg. [blue] [red]). The rest of the rule will be evaluated only if the tags are on the current stream. Tags can be added and removed by the set_tags element.

If a rule consists only of match_tags, it opens a section in the filter file (and in -f arguments too). In this section, all rules are interpreted as if they had the section's match_tags written in them. For example, this filter-set selects all ranges in the output (and stderr) stream bounded inclusively by those regexp patterns, and blocks everything in them except "errors":

 /begin checking procedure/ [checking]
 /checking finished/+1 [/checking]
 [checking]
 !//
 /error/i
 [/checking]

The two streams, stdout and stderr, are tagged by default with "STDOUT" and "STDERR" respectively. So this filters out everything in stdout except "errors":

 [STDOUT]
 !//
 /error/i
 [/STDOUT]

pattern

Regexp pattern (perlre(1)) to match against the streams' (stdout and stderr) lines, in the form /PATTERN/MODIFIERS. Optionally prefixed with an exclamation mark (!), which negates the result.

Pass every line by //. Exclude every line by !//.

If there is a pattern in the rule, replacement or tagging will only take place if the pattern matched (or not matched if it was negated).

If there is no pattern, only match_tags controls if the rest will be applied or not.

You may escape a slash (/) in the PATTERN normally, as is customary in Perl, by backslash; but to keep the filter expression parsing simple, an escaped backslash itself (a double backslash) at the end of the regexp pattern, ie. just before the closing slash, won't be recognized. Type it as \x5C instead.

A further limitation is that only slash (/) delimiters can be used; other forms, eg. m{...}, are not supported.

offset

A pattern may be followed by a plus sign and a number (+N) to denote that the given action (string replacement, or tagging) should take effect after the given number of lines.

This way you can exclude the triggering line from the tagging.

A pattern with offset but without replacer or set_tags is meaningless.

replacer

A s/// string substitution Perl expression. Optionally with modifiers. This can be abused to execute any perl code (with the "e" modifier).

set_tags

The syntax is the same as for match_tags, but if the square-bracketed tags are on the right side of the pattern, then the tags are applied to the stream.

Remove tags by a leading slash, like [/blue].

set_tags is useful with a pattern.

Example filter:

 /BEGIN/ [keyblock]
 /END/ [/keyblock]
 [keyblock] s/^/\t/

This prepends a TAB char to each line in the output stream which is between the lines containing "BEGIN" and "END".


SIGNALS

HUP - re-read filter files given at command line


EXAMPLES

Prefix each output (and stderr) line with the COMMAND process's PID:

 stdfilt -f 's/^/$CHILD_PID: /' some_command...

Prefix each line with literal STDOUT/STDERR string:

 stdfilt -f '[STDOUT]' -f 's/^/STDOUT: /' -f '[/STDOUT]' -f '[STDERR]' -f 's/^/STDERR: /' -f '[/STDERR]' some_command...


SEE ALSO

grep(1), stdbuf(1), logwall(8), perlre(1)

 stdmux - Multiplex the given command's STDOUT and STDERR by prefixing lines



NAME

stdmux - Multiplex the given command's STDOUT and STDERR by prefixing lines


SYNOPSIS

stdmux [-o STDOUT_PREFIX | -e STDERR_PREFIX] [--] COMMAND [ARGS]


OPTIONS

-o
-e
-u, --unbuffered

TODO


EXIT STATUS

stdmux(1) exits with the COMMAND's exit status.


USAGE EXAMPLE

  mux_output=`stdmux command`
  demux() { local prefix=$1; sed -ne "s/^$prefix//p"; }
  output_text=`echo "$mux_output" | demux 1`
  error_text=`echo "$mux_output" | demux 2`

 stdout2env - Substitute other command's STDOUT in command arguments and run the resulting command



NAME

stdout2env - Substitute other command's STDOUT in command arguments and run the resulting command


SYNOPSIS

stdout2env [OPTIONS] -- ENVNAME-1 CMD-1 [ARG [ARG [...]]] [-- ENVNAME-2 CMD-2 [ARG [ARG [...]]] [-- ...]] [--] COMMAND [ARGS]


DESCRIPTION

Run all the CMD-1, CMD-2, ... commands in series, and after each run, set the ENVNAME-n environment variable to the corresponding command's STDOUT output. Then at the end, run the last COMMAND with all of the environment set up. Very similar to the backtick notation `CMD` (and $(CMD)) in ordinary shells.

ENVNAME variables set earlier are visible in later commands with their new value.
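The series-of-captures behavior can be sketched in Python (an approximation of the semantics described above; --keep-eol and --delimiter parsing are omitted):

```python
import os
import subprocess

def stdout2env(stages, final_cmd):
    """Sketch of stdout2env(1): each stage is (ENVNAME, argv). Run the
    stages in series, capturing stdout into the environment, so earlier
    variables are visible to later stages; then run the final command."""
    env = dict(os.environ)
    for name, argv in stages:
        out = subprocess.run(argv, env=env, check=True,
                             capture_output=True, text=True).stdout
        env[name] = out.rstrip("\n")   # strip trailing newline (no --keep-eol)
    return subprocess.run(final_cmd, env=env).returncode
```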


OPTIONS

-d, --delimiter STRING

Take STRING as the command argv-list delimiter. Default is double-dash --.

--keep-eol

Keep the newline char at the very end of each output.


CAVEATS

Overwriting the PATH environment variable may cause subsequent commands not to be found.


RATIONALE

Sometimes you don't want a shell to be in the picture when composing commands.


SEE ALSO

backtick by execlineb(1), multicmd(1), substenv(1)

 

 strip-ansi-seq - Dumb script removing more-or-less any ANSI escape sequences from the input stream


NAME

strip-ansi-seq - Dumb script removing more-or-less any ANSI escape sequences from the input stream

 substenv - Substitute environment variables in parameters and run the resulting command



NAME

substenv - Substitute environment variables in parameters and run the resulting command


SYNOPSIS

substenv [OPTIONS] [--] COMMAND [ARGS]


DESCRIPTION

Replace all occurrences of $NAME in COMMAND and ARGS with the NAME environment variable's value, whatever NAME may be, then run COMMAND ARGS. Supports the ${NAME} curly-bracket notation too.
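The replacement rule, including the uppercase-only restriction described under LIMITATIONS, can be sketched in Python (an illustration, not the tool's code):

```python
import os
import re

def substenv(args, env=os.environ, keep_undefined=False):
    """Sketch of substenv(1): expand $NAME and ${NAME} in each argument.
    The bare $NAME form only matches [A-Z0-9_]+ names, per LIMITATIONS;
    undefined variables become empty unless keep_undefined (-k)."""
    pattern = re.compile(r"\$([A-Z0-9_]+)|\$\{(\w+)\}")
    def repl(m):
        name = m.group(1) or m.group(2)
        if name in env:
            return env[name]
        return m.group(0) if keep_undefined else ""
    return [pattern.sub(repl, a) for a in args]
```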


OPTIONS

-a, --all

Replace all occurrences of any $NAME (and ${NAME}) substring (for details see LIMITATIONS). This is the default behaviour, unless -e is given.

-e, --environment NAME

Replace the occurrences of the NAME environment variable. May be specified more than once. If the -a option is NOT given, ONLY these NAMEs are replaced.

-k, --keep-undefined

Do not replace variables which are not defined (ie. not in the environment), but keep them as-is. By default they are replaced with the empty string.

--dryrun, --dry-run

Do not run COMMAND, just print what would be executed.


EXAMPLE

This function call, in C, runs substenv(1); note that there is no dollar-interpolation in C.

 execve("substenv", ["substenv", "ls", "$HOME/.config"])

Then substenv issues this system call:

 execve("ls", ["ls", "/home/jdoe/.config"])


LIMITATIONS

In "substitute all" mode (without -e flag) it replaces only names with uppercase letters, digits, and underscore ([A-Z0-9_]+), as env vars usually contain only these chars. However it still replaces variables with lowercase letters in ${NAME} notation, and specific variable(s) given in -e option(s).

Does not honour escaped dollar marks, ie. \$.


NOTES

Does not support full shell-like variable interpolation. Use a real shell for it.


RATIONALE

Sometimes you don't want a shell to be in the picture when composing commands, yet need to weave some environment variable into it.


SEE ALSO

envsubst(1) from gettext-base package

 subst_sudo_user - Sudo helper program


NAME

subst_sudo_user - Sudo helper program


SYNOPSIS

subst_sudo_user <COMMAND> [<ARGUMENTS>]

Substitute literal $SUDO_USER in the ARGUMENTS and run COMMAND.


RATIONALE

It enables sysadmins to define a sudoers(5) rule in which each user is allowed to call a privileged command with their own username in the parameters. Example:

  %users ALL=(root:root) NOPASSWD: /usr/tool/subst_sudo_user passwd $SUDO_USER

This rule allows users to run subst_sudo_user (and subsequently passwd(1)) as root with a verbatim $SUDO_USER parameter, so no shell variable resolution happens up to this point. subst_sudo_user in turn, running as root, replaces $SUDO_USER with the value of the SUDO_USER environment variable, which sudo(1) guarantees to be the caller's username. Then it runs passwd(1) (still as root) to change the given user's password. So effectively, with this rule, each user can change their password without knowing the current one first (because passwd(1) usually does not ask root for the current password).


EXAMPLES

  %USERS ALL=(root:root) NOPASSWD: /usr/tool/subst_sudo_user /usr/bin/install -o $SUDO_USER -m 0750 -d /var/backup/user/$SUDO_USER
 swap - swaps two files' names


NAME

swap - swaps two files' names

 symlinks2dot - Generate a graph in dot format representing the symlink-target relations among the given files



NAME

symlinks2dot - Generate a graph in dot(1) format representing the symlink-target relations among the given files

 symlinks-analyze - Discover where symlinks point at, recursively


NAME

symlinks-analyze - Discover where symlinks point at, recursively

 tabularize - Takes TAB-delimited lines of text and outputs formatted table.



NAME

tabularize - Takes TAB-delimited lines of text and outputs formatted table.


SYNOPSIS

COMMAND | tabularize [OPTIONS]


OPTIONS

-a, --ascii

7-bit ascii borders

-u, --unicode

borders with nice graphical chars

-H, --no-horizontal

no horizontal lines in the output

-M, --no-margins

no margins, ie. no right-most and left-most vertical borders

-p, --padding NUM

add padding space to left and right side of cells. NUM is how many spaces. Default is no padding.

-v, --output-vertical-separator CHAR

vertical separator character(s) in the output

-r, --align-right NUM

align these columns (0-indexed) to the right. Other columns are auto-detected: if they seem to hold mostly numeric data they are aligned to the right, otherwise to the left. This option is repeatable.

-l, --align-left NUM

similar to --align-right option


ENVIRONMENT

PAGER

If $PAGER is set and standard output is a terminal and the resulting table is wider than the terminal, then pipe the table through $PAGER.


SEE ALSO

column(1), untabularize(1)

 Tail - output as many lines from the end of files as there are lines on the terminal currently


NAME

Tail - output as many lines from the end of files as there are lines on the terminal currently

 takeown - Take ownership on files, even for unprivileged users



NAME

takeown - Take ownership on files, even for unprivileged users


SYNOPSIS

takeown [options] <files and directories>


DESCRIPTION

The chown(1) command (and the chown(2) syscall) is permitted only to root (and processes with CAP_CHOWN), but normal users can imitate this behavior: you can copy another user's file, in a directory writable by you, and then replace the original file with your copy. It is quite tricky and can be expensive (copying huge files), but it gives you an option, say, when somebody forgot to use the right user account when saving files directly into your folders.

takeown(1) uses the *.takeown and *.tookown filename extensions to create new files and to rename existing files to, respectively.

See takeown --help for option list.


TECH REFERENCE

Call stack

  script --> main --> takeown
                      /  |  \
                     /   |   \
                    /    |    \
            takeown   takeown  takeown
            _file    _symlink  _directory
              |         |           |
  - - - - - - | - - - - | - - - - - | - - - - - - - - - - - - - -
  error       |         |           |
  handler     |         |           V         ,---> register_created_dir
  function:   |         |  ,--> takeown       |
  cleanup     |         |  |    _directory ---+---> register_moved_file
              |         |  |    _recursive    |
              |         |  |      | |     \   `---> register_
              |         |  `------´ |      \    ,-> copied_file
              |         V           V       \   |
              |      copy_out <-- copy_out --\--'
              |      _symlink     /           \
              V                  /            |
          copy_out <------------´             |
            _file                             |
              |                               |
              `-------> copy_attributes <-----´

 taslis - WM's Window List



NAME

taslis - WM's Window List


DESCRIPTION

Taslis stands for tasklist. It lists the X11 clients provided by wmctrl(1) in an ANSI-compatible terminal.


KEYS

Left,Right

Select item

Enter

Switch to workspace and raise window

C

Close window gracefully

H

Hangup selected process

I

Interrupt process

S,T,Space

Suspend, Resume process

K

Kill process

D

Process's details

Q

Dismiss

?

Help

 terminaltitle - Set the current terminal's title string


NAME

terminaltitle - Set the current terminal's title string

 tests - Show all attributes of the given files which can be tested by test(1) in the same color as ls(1) shows them


NAME

tests - Show all attributes of the given files which can be tested by test(1) in the same color as ls(1) shows them

 text2img-dataurl - Convert text input to image in "data:..." URL representation


NAME

text2img-dataurl - Convert text input to image in "data:..." URL representation

 timestamper - Prepend a timestamp to each input line



NAME

timestamper - Prepend a timestamp to each input line


SYNOPSIS

timestamper


DESCRIPTION

Read STDIN and put everything on STDOUT, prepending to each line a timestamp and a TAB char.


ENVIRONMENT

TIMESTAMP_FMT

Timestamp format, see strftime(3). Default is "%F %T %z".
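The behavior above amounts to a few lines; a Python sketch (illustrative only, the real tool is a separate program):

```python
import os
import time

def timestamper(lines, fmt=None, now=time.localtime):
    """Yield each input line prefixed with a strftime(3) timestamp and a
    TAB, honoring the TIMESTAMP_FMT environment variable as described."""
    fmt = fmt or os.environ.get("TIMESTAMP_FMT", "%F %T %z")
    for line in lines:
        yield time.strftime(fmt, now()) + "\t" + line
```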


SEE ALSO

ts(1) from moreutils

 touchx - set execution bit on files and create them if necessary


NAME

touchx - set execution bit on files and create them if necessary

 trackrun - Record when the given command was started and ended and expose it to the command in environment variables



NAME

trackrun - Record when the given command was started and ended and expose it to the command in environment variables


SYNOPSIS

trackrun [OPTIONS] [--] COMMAND [ARGS]


DESCRIPTION

It records when it starts COMMAND and when it ends, identifying COMMAND by one of these 4 options:

Full command line including ARGS.
Only the command name, COMMAND.
By the name given by the user in NAME.
By the environment variable value given by name ENV.

Sets the TRACKRUN_LAST_STARTED and TRACKRUN_LAST_ENDED environment variables for COMMAND to the ISO 8601 representation of the date and time when COMMAND was last started and ended respectively. Sets TRACKRUN_LAST_STATUS to the status COMMAND last exited with. These are left empty if there is no data yet.

On every run, a UUID is generated, so you can connect events of concurrent runs in the track report. It is exposed in TRACKRUN_UUID env.


OPTIONS

-f, --full-command
-b, --command-basename (default)
-n, --name NAME
-e, --env-var ENV
-h, --show-hash

Show the hash generated from one of the options above before running COMMAND. This hash is used as the filename in which command-related events are stored.

-u, --show-uuid

Show the current run's UUID before actually starting the command.

-U, --write-uuid FILE

Write the current run's UUID into the given file before starting the command.

-R, --report

Do not run COMMAND, instead display its tracked history.


FILES

Store tracking data in ~/.trackrun directory.


ENVIRONMENT

TRACKRUN_LAST_STARTED
TRACKRUN_LAST_ENDED
TRACKRUN_LAST_STATUS
TRACKRUN_LAST_UUID
TRACKRUN_LAST_SUCCESSFUL_STARTED
TRACKRUN_LAST_SUCCESSFUL_ENDED
TRACKRUN_LAST_SUCCESSFUL_UUID

The last successful run's UUID, date-time when started and ended.

TRACKRUN_UUID

The current run's UUID


LIMITATIONS

Trackrun does not do locking. Take care of it yourself if needed, using flock(1), cronrun(1), or similar.

 triggerexec - Run a command and do various specified actions depending on what command does



NAME

triggerexec - Run a command and do various specified actions depending on what command does


SYNOPSIS

triggerexec [EVENT ACTION [EVENT ACTION [...]]] [--] COMMAND [ARGS]


DESCRIPTION

Run COMMAND and execute specific actions depending on what COMMAND does.

Supported EVENT events:

stdout:PATTERN
stderr:PATTERN

Match PATTERN regex pattern to stdout/stderr line-wise.

Supported ACTION actions:

perl:EXPR

Evaluate a perl expression in triggerexec(1)'s own context. Useful variables: $COMMAND_PID is COMMAND's PID. $PARAM is a hash ref containing event parameters; for example $PARAM->{line} is the text which triggered the action, if applicable (stdout:/stderr: events).


LIMITATIONS


SEE ALSO

expect(1)

 ttinput - Inject console input in a terminal as if the user typed


NAME

ttinput - Inject console input in a terminal as if the user typed


SYNOPSIS

echo Lorem ipsum | ttinput /dev/pts/1


CREDITS

https://johnlane.ie/injecting-terminal-input.html

 uchmod - chmod files according to umask



NAME

uchmod - chmod files according to umask


SYNOPSIS

uchmod [-v] [-R] [path-1] [path-2] ... [path-n]


DESCRIPTION

Change mode bits of files and directories according to the umask(1) setting, using chmod(1). Use it when file modes were messed up; uchmod changes them to be like the modes of newly created files.
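The mode computation this implies can be sketched in Python. The 0666/0777 base modes are an assumption drawn from how creat(2) and mkdir(2) are conventionally called, not from the tool itself:

```python
import os

def umask_mode(is_dir, umask):
    """Mode a newly created file or directory would get under the given
    umask, which is what uchmod (per the description above) re-applies."""
    base = 0o777 if is_dir else 0o666   # typical creat(2)/mkdir(2) requests
    return base & ~umask

def uchmod(path):
    """Sketch: chmod PATH back to the umask-derived mode."""
    umask = os.umask(0)
    os.umask(umask)                     # read the current umask non-destructively
    os.chmod(path, umask_mode(os.path.isdir(path), umask))
```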

 unicodestyle - Add font styles to input text using Unicode



NAME

unicodestyle - Add font styles to input text using Unicode

 uniproc - Universal data processing tool



NAME

uniproc - Universal data processing tool


SYNOPSIS

uniproc [OPTIONS] INPUTFILE COMMAND [ARGS]


DESCRIPTION

Take each line from INPUTFILE as DATA (chopping end-of-line chars), pass the TAB-delimited fields of DATA to COMMAND as arguments after ARGS (unless a placeholder is in COMMAND or ARGS, see below), run COMMAND, then record the exit status.

It can be parallelized well. uniproc(1) itself does not run multiple instances of COMMAND in parallel, just in series, but if you start multiple instances of uniproc(1), you can run COMMANDs concurrently. Locking ensures that no overlapping data is processed, so you don't need special precautions (locking, data partitioning) when starting uniproc(1) multiple times on the same INPUTFILE.

Use a wrapper command/script for COMMAND if you want either of these:

save COMMAND's output as well.

By default it goes to STDOUT. Use redirexec(1) for example.

pass DATA on STDIN or in an environment variable instead of as command arguments.

Use args2env(1) or args2stdin(1) for example.

If re-run after an interruption, it won't process already-processed data, but you may retry the failed ones with the --retry option.

The user is allowed to append new lines of data to INPUTFILE between executions or during runtime; it won't mess up the processing. However, editing or reordering lines which are already in the file confuses the results, so don't do it.

ARGS (and COMMAND too, somewhat usefully) support placeholders: a curly-bracket pair {} is replaced with DATA as one argument, including TAB chars if any, anywhere in COMMAND ARGS. If there is a number in it, {N}, then the Nth TAB-delimited field (1-indexed) is substituted in. A lone {@} argument expands to as many arguments as there are TAB-delimited fields in DATA. Multiple numbers in the placeholder, like {5,3,4}, expand to all of the data fields specified by the index numbers, as multiple arguments. Note that in this case, the multi-index placeholder must stand in its own separate argument, just like the all-fields {@} placeholder. Indexing a non-existing field expands to the empty string.

Be aware that your shell (eg. bash(1)) may expand arguments like {5,3,4} before they get to uniproc(1), so escape them if necessary (eg. '{5,3,4}'). If there is any curly-bracket placeholder like these, DATA fields won't be added to ARGS as the last argument.
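The placeholder rules above can be sketched in Python (my reading of the described semantics, not the tool's code):

```python
import re

def expand_placeholders(args, fields):
    """Expand uniproc(1)-style placeholders: {} is the whole DATA line,
    {N} the Nth field (1-indexed), while {@} and {5,3,4} expand to
    several arguments and must each stand as a separate argument."""
    out = []
    for arg in args:
        if arg == "{@}":
            out.extend(fields)
        elif re.fullmatch(r"\{\d+(,\d+)+\}", arg):
            idxs = [int(i) for i in arg[1:-1].split(",")]
            # a non-existing field expands to the empty string
            out.extend(fields[i - 1] if 0 < i <= len(fields) else ""
                       for i in idxs)
        else:
            def one(m):
                s = m.group(1)
                if s == "":
                    return "\t".join(fields)   # {} is DATA as one argument
                i = int(s)
                return fields[i - 1] if 0 < i <= len(fields) else ""
            out.append(re.sub(r"\{(\d*)\}", one, arg))
    return out
```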


OPTIONS

-r, --retry

Process those data items which failed earlier (according to the INPUTFILE.uniproc state file) too, besides the unprocessed ones.

-f, --failed

Process only the earlier failed items.

-1, --one-item

Process only 1 item, then exit. Default is to process as many items in series as possible.

-n, --items NUM

How many items to process.

-e, --errexit

Stop processing items as soon as the first COMMAND exits non-zero, and uniproc(1) itself exits with that exit code (or 128+signal if signaled).

-Q, --quasilock

Create and check locks using lock files instead of flock(2). Useful for network filesystems which do not support shared locks (eg. sshfs). It is assumed that either all instances of uniproc(1), across all hosts working on a given INPUTFILE, run in quasi-lock mode, or all in flock(2)-lock mode; do not mix. These quasi lock files are:

INPUTFILE.uniproc.lock

locking the INPUTFILE.uniproc state file, and

INPUTFILE.uniproc.NUM

locking the command processing the NUMth item. Note, this is the same file which is locked by flock(2) in real-lock mode.

Beware when using quasi-locks: the user may have to manually clean up lock files which are left behind by an interrupted process. While atomic lock acquisition is approximated using general filesystem primitives, there is no simple race-free way to automatically release the lock when a process terminates. Therefore uniproc(1) does not even try to emulate such a lock-release mechanism, so it neither detects nor reclaims stale lock files. However, to help the user identify possibly alive processes which expect resources to be exclusively allocated to them, uniproc(1) writes some useful info about the current process into the lock files: PID START_TIMESTAMP HOSTNAME.
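That recorded info lets the user check who holds a lock; a sketch with a fabricated lock file (a real one is created by uniproc(1) itself):

```shell
# Simulated quasi-lock file in the documented "PID START_TIMESTAMP HOSTNAME" format:
printf '12345 1700000000 buildhost\n' > demo.uniproc.lock
read -r pid ts host < demo.uniproc.lock
echo "item locked by PID $pid on $host (since $ts)"   # -> item locked by PID 12345 on buildhost (since 1700000000)
rm -f demo.uniproc.lock
```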

-sp, --show-progress

Show which item is about to be processed.

-sd, --show-data

Show the raw data of the item about to be processed.

-ss, --show-summary

Show a stats summary on exit.

--debug

Output debug messages.


FILES

It maintains the INPUTFILE.uniproc file by writing the processing status of each line of input data in it, line-by-line. Processing status is either:

all spaces ( )

processing not yet started

periods (...)

in progress

digits, possibly padded by spaces ( 0)

result status (exit code)

exclamation mark (!) followed by hexadecimal digits (!0f)

termination signal (COMMAND terminated abnormally)

EOF (ie. fewer lines than input data)

processing of this item has not started yet

INPUTFILE.uniproc is locked while being read or written to ensure consistency. INPUTFILE.uniproc.NUM files hold the lock for the currently in-progress processes, where NUM is the line number of the corresponding piece of data in INPUTFILE. A lock is held on each of these INPUTFILE.uniproc.NUM files by the respective instance of COMMAND to detect whether the processing is still going on or the process crashed.
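The status markers above can be decoded mechanically; a sketch with a fabricated state file (not produced by a real run):

```shell
# Fabricated datafile.uniproc: one status marker per input line
printf ' 0\n...\n 1\n!0f\n' > demo.uniproc
awk '{
  if ($0 ~ /^ *0$/)           s = "succeeded"
  else if ($0 == "...")       s = "in progress"
  else if ($0 ~ /^!/)         s = "signaled"    # !0f = signal 0x0f = 15 (SIGTERM)
  else if ($0 ~ /^ *[0-9]+$/) s = "failed"
  else                        s = "pending"
  print NR ": " s
}' demo.uniproc
rm -f demo.uniproc
```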


LIMITATION

Due to the currently used locking mechanism (Fcntl(3perl)), running on multiple hosts may disrespect locking, depending on the network filesystem. See the --quasilock option.


ENVIRONMENT

When running COMMAND, the following environment is set:

UNIPROC_DATANUM

Number of the particular piece of data (ie. line number in INPUTFILE, 0-indexed) which is to be processed by the current process.

UNIPROC_DATANUM_1INDEX

Same as UNIPROC_DATANUM but 1-indexed instead of 0-indexed.

UNIPROC_TOTALNUM

Total number of items (processed and unprocessed). Note, this figure may be outdated, because INPUTFILE is not always re-measured before each COMMAND start.


EXAMPLES

Display the data processing status before each line of data:

  paste datafile.uniproc datafile

How much completed?

  awk -v total=$(wc -l < datafile) 'BEGIN{ok=ip=fail=0} {if($1==0){ok++} else if($1=="..."){ip++} else if($1!=""){fail++}} END{print "total: "total", completed: "ok" ("(ok*100/total)"%), in-progress: "ip" ("(ip*100/total)"%), failed: "fail" ("(fail*100/total)"%)"}' datafile.uniproc
  
Output:
  total: 8, completed: 4 (50%), in-progress: 1 (12.5%), failed: 1 (12.5%)

Record output of data processing into a file per each data item:

  uniproc datafile sh -c 'some-command "$@" | tee output-$UNIPROC_DATANUM' --
  uniproc datafile substenv -e UNIPROC_DATANUM redirexec '1:a:file:output-$UNIPROC_DATANUM' some-command

Same as above, plus keep the output on STDOUT as well as in separate files. Note, the {} argument is there to pass DATA to the right command:

  uniproc datafile pipecmd some-command {} -- substenv -e UNIPROC_DATANUM tee -a 'output-$UNIPROC_DATANUM'

Display data number, processing status, input data, (last line of) output data in a table:

  join -t $'\t' <(nl -ba -v0 datafile.uniproc) <(nl -ba -v0 datafile) | foreach -t --prefix-add-data --prefix-add-tab tail -n1 output-{0}
 untabularize - Revert the formatting done by tabularize



NAME

untabularize - Revert the formatting done by tabularize(1)


SYNOPSIS

untabularize [OPTIONS]


DESCRIPTION


OPTIONS

-P, --no-pipe-in-header

Expect no pipe char (|) in column names, so it is less ambiguous to determine the vertical gridlines.

-p, --padding NUM

Untabularize the input as if it had been tabularized with -p NUM padding.

-w, --allow-whitespace

Strip leading whitespace in column names to learn each column's left margin.

-F, --no-trim-filler

Don't remove trailing (or leading, in case of right-aligned cells) space, which is often just a filler.


LIMITATIONS

Does not reliably distinguish filler space from semantically significant space, so either significant space sometimes gets removed, or filler space is left in place (with the -F option). The default mode is to trim space in cell data from the right, if any, else from the left. The padding, if specified by the -p option, is always trimmed (even if it's non-space).
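A minimal sketch of that default trim rule (not untabularize(1)'s actual implementation):

```shell
# Right-trim a cell if it has trailing spaces, otherwise left-trim it:
trim_cell() {
  case $1 in
    *' ') printf '%s\n' "$1" | sed 's/ *$//' ;;   # trailing filler: trim right
    *)    printf '%s\n' "$1" | sed 's/^ *//' ;;   # else: trim left
  esac
}
trim_cell 'left-aligned   '   # -> "left-aligned"
trim_cell '  right-aligned'   # -> "right-aligned"
```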


SEE ALSO

tabularize(1)

 upsidedown - Transliterate input stream to text with upsidedown-looking chars


NAME

upsidedown - Transliterate input stream to text with upsidedown-looking chars

 url_encode - Escape URL-unsafe chars in text given either in parameters or in stdin by percent-encoding



NAME

url_encode - Escape URL-unsafe chars in text given either in parameters or in stdin by percent-encoding

url_decode - Unescape percent-encoded sequences given either in parameters or in stdin
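Percent-encoding itself can be illustrated with Python's urllib (the exact set of chars url_encode(1) considers unsafe is not specified here, so this is only an approximation):

```shell
# Encode: space and ? become %XX escapes (urllib keeps / unescaped by default):
python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))' 'a b/c?'
# -> a%20b/c%3F
# Decode: unescape percent-encoded sequences:
python3 -c 'import urllib.parse, sys; print(urllib.parse.unquote(sys.argv[1]))' 'a%20b%2Fc'
# -> a b/c
```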


 url_encode_bf - Make all chars given either in parameters or in stdin to percent-encoded sequence


NAME

url_encode_bf - Make all chars given either in parameters or in stdin to percent-encoded sequence
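Encoding every byte, not just the unsafe ones, can be sketched as follows (the case of the hex digits is an assumption):

```shell
# Percent-encode every byte of the argument unconditionally:
python3 -c 'import sys; print("".join("%%%02X" % b for b in sys.argv[1].encode()))' 'ab'
# -> %61%62
```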

 url-parts - Extract specified parts from URLs given in input stream


NAME

url-parts - Extract specified parts from URLs given in input stream


SYNOPSIS

echo <URL> | url-parts <PART> [<PART> [<PART> [...]]]


DESCRIPTION

Supported parts: fragment, hostname, netloc, password, path, port, query, scheme, username, plus query.NAME for the query parameter NAME, and query.NAME.N for the Nth element of the array parameter NAME.

Run url-parts --help for the definitive list of URL part names supported by the python urlparse module installed on your system.
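Since url-parts is built on Python's urlparse module, its part names map onto that API; a sketch (the URL is made up):

```shell
python3 -c '
from urllib.parse import urlsplit, parse_qs
u = urlsplit("https://alice:secret@example.com:8080/a/b?tag=x&tag=y#top")
# the scheme, hostname, port, path and fragment parts:
print(u.scheme, u.hostname, u.port, u.path, u.fragment)
# the query.tag.2 analogue (2nd element of the array parameter "tag"):
print(parse_qs(u.query)["tag"][1])
'
```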

 verscmp - Compare version numbers



NAME

verscmp - Compare version numbers


SYNOPSIS

verscmp VERSION_A [gt | lt | ge | le | eq | ne] VERSION_B

verscmp VERSION_A between VERSION_START VERSION_END [VERSION_START VERSION_END [...]]

verscmp VERSION_A in VERSION_B1 VERSION_B2 [VERSION_B3 [...]]


EXIT CODE

  1. Comparison is satisfied

  2. Runtime error

  3. Parameter error

  4. Comparison is NOT satisfied
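A rough approximation of a single le comparison using GNU sort -V, for comparison (note that verscmp(1)'s own exit codes above differ from the usual shell true/false convention):

```shell
# Approximate "verscmp A le B" with version sort (GNU coreutils assumed):
verscmp_le() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
verscmp_le 1.2.9 1.2.10 && echo 'satisfied'       # 1.2.9 <= 1.2.10
verscmp_le 1.10  1.9    || echo 'not satisfied'   # 1.10 > 1.9
```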


SEE ALSO

vercmp(1) from makepkg package, Version::Util(3pm)

 vidir-sanitize - Helper script to change tricky filenames in a directory


NAME

vidir-sanitize - Helper script to change tricky filenames in a directory


INVOCATION

No need to invoke vidir-sanitize directly. vidir(1) calls it internally.


USAGE

VISUAL=vidir-sanitize vidir


SEE ALSO

vidir(1) from moreutils

 vifiles - Edit multiple files as one


NAME

vifiles - Edit multiple files as one


CAVEATS

If the LF char at the end of any file is missing, it will be added after editing.


SEE ALSO

vidir(1) from moreutils

 visymlinks - Bulk edit symlinks names and targets



NAME

visymlinks - Bulk edit symlinks names and targets


SYNOPSIS

visymlinks [PATH [PATH [...]]]


DESCRIPTION

Open up your default editor (see sensible-editor(1)) to edit the targets of the PATH symlinks given in command arguments, as well as their own filenames. If no PATH is given, all symlinks in the current working directory are loaded into the editor. Once editing is finished, visymlinks(1) changes the target of those symlinks which were edited.

Contrary to visymlinks(1)'s relative, vidir(1), if a PATH symlink is removed in the editor, it won't be removed from the filesystem.


RETURN VALUE

Returns zero if everything went well.

Returns the exit status of the editor if it was non-zero (in which case the symlinks are not changed either).

Returns the error code of symlink(2) if any of such calls failed.


LIMITATIONS

Special characters disallowed in PATH filenames and symlink targets: TAB and LF (newline).


SEE ALSO

vidir(1) from moreutils, vifiles(1)

 waitpid - Wait for a process to end


NAME

waitpid - Wait for a process to end (even if not child of current shell)
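A common technique for waiting on a non-child process is a poll loop; a minimal sketch (this is an assumption about the approach, not waitpid(1)'s actual implementation):

```shell
# Poll until the given PID disappears (kill -0 probes existence without signaling).
# Note: PIDs can be reused, so a long-lived poll can be fooled by a new process.
waitpid_poll() { while kill -0 "$1" 2>/dev/null; do sleep 1; done; }
sleep 2 & pid=$!
waitpid_poll "$pid"
echo "process $pid finished"
```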

 whisper-retention-info - Show data retention policy in Whisper timeseries database file


NAME

whisper-retention-info - Show data retention policy in Whisper timeseries database file

 wikibot - Update Wikimedia article



NAME

wikibot - Update Wikimedia (Wikipedia) article

 xdg-autostart - Start XDG autostart programs


NAME

xdg-autostart - Start XDG autostart programs

 xml2json - Convert XML input to JSON


NAME

xml2json - Convert XML input to JSON