| 2opml - Convert list of URLs to OPML. |
2opml - Convert list of URLs to OPML.
2opml [--add-attributes <ATTRIBUTES>] < urls.txt
Convert a text file containing "<TITLE> <URL>"-like lines to OPML.
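As an illustration of the transformation involved, here is a plain awk sketch (not the 2opml tool itself; the exact outline attributes 2opml emits are an assumption):

```shell
# turn "<TITLE> <URL>" lines into OPML-style outline elements;
# the last field is taken as the URL, the rest as the title
printf 'Example http://example.net/feed\nOther http://other.org/rss\n' |
awk '{
  url = $NF                 # last whitespace-separated field
  sub(/ +[^ ]+$/, "")      # strip it from the line, leaving the title
  printf "<outline text=\"%s\" xmlUrl=\"%s\"/>\n", $0, url
}'
```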
| a8e - Abbreviate words in the input stream |
a8e - Abbreviate words in the input stream
a8e [OPTIONS]
Abbreviate words by keeping their first and last letter and replacing the internal letters with the number of letters replaced. For example l10n, i18n, and a11y are the conventional abbreviations of localization, internationalization, and accessibility respectively.
Abbreviate only words at least N (default 4) chars long. N is best kept greater than the number of boundary letters kept (see -l, -t, and -k) plus one.
Set how many letters to keep at the beginning of words with -l, at the end with -t, or both at once with -k (default is 1 for both)
What counts as a word? (default [a-zA-Z]+)
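The abbreviation rule can be sketched with plain awk (defaults assumed: keep 1 leading and 1 trailing letter, abbreviate words of 4+ chars; the real a8e has more options):

```shell
# abbreviate each word: first letter + count of middle letters + last letter
echo "localization internationalization accessibility" |
awk '{
  for (i = 1; i <= NF; i++) {
    w = $i
    if (length(w) >= 4)
      $i = substr(w, 1, 1) (length(w) - 2) substr(w, length(w), 1)
  }
  print
}'
# prints: l10n i18n a11y
```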
| adr2html - Convert Opera Hostlist 2.0 bookmarks to HTML |
adr2html - Convert Opera Hostlist 2.0 bookmarks to HTML
| args2env - Turns command arguments into environment variables and executes command with the remaining arguments |
args2env - Turns command arguments into environment variables and executes command with the remaining arguments
args2env [OPTIONS] COMMAND ARG_1 ARG_2 ... ARG_R2 ARG_R1
Move the NUMth argument to the environment under the name ARG_NUM (may be overridden by the --template option). Counting starts from 1. The 0th argument would be the COMMAND itself. NUM may be a negative number, in which case it's counted backwards from the end.
Same as --arg -NUM.
Move all arguments to environment.
Keep the first NUM arguments as arguments, and move the rest of them to environment. Don't use it with -A, -a, or -r.
How to name environment variables? Must contain a %d macro. Default is ARG_%d. So the value of argument given by --arg 1 goes to ARG_1 variable.
How to name environment variables for arguments specified by negative number? Must contain a %d macro. Default is ARG_R%d, R is for "right", because this arg is counted from the right. So the value of argument given by --arg -1 goes to ARG_R1 variable.
Set NAME variable to the NUMth argument (negative numbers may also be given) and remove the argument from the argument list (keeping the numbering of remaining arguments unchanged). Number-based variables (ARG_n and ARG_Rn) are still available.
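Conceptually, `args2env --all COMMAND alpha beta` behaves like the following plain env(1) invocation (an illustration of the effect, not the tool itself):

```shell
# the arguments land in ARG_1, ARG_2, ... and the command sees them
# only through the environment
env ARG_1=alpha ARG_2=beta sh -c 'echo "$ARG_1 $ARG_2"'
# prints: alpha beta
```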
args2stdin(1)
| args2stdin - Turns command arguments into input stream on STDIN |
args2stdin - Turns command arguments into input stream on STDIN
args2stdin [OPTIONS] COMMAND ARG_1 [ARG_2 [...]]
Execute COMMAND command with ARG_n arguments, except remove those which are specified in OPTIONS and write them on the command's STDIN instead.
Remove the NUMth argument and write it on STDIN. Counting starts from 1. The 0th argument would be the COMMAND itself. NUM may be a negative number, in which case it's counted backwards from the end.
Same as --arg -NUM.
Move all arguments to STDIN.
STRING marks the end of arguments.
All arguments after this will be passed in STDIN.
This argument won't be passed to COMMAND anywhere.
It's usually --.
args2stdin(1) does not have any default for this, so no particular argument makes the rest of them go to STDIN.
Keep the first NUM arguments as arguments, and move the rest of them. Don't use it with -A, -a, or -r.
Delimit arguments by the STRING string. Default is linefeed (\n).
Delimit arguments by TAB char.
Delimit arguments by NUL char.
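The core idea can be sketched with printf(1): selected arguments become delimiter-separated records on the command's STDIN (here delimited by the default linefeed):

```shell
# three arguments turned into three lines on the pipe's STDIN
printf '%s\n' alpha beta gamma | wc -l
# counts 3 lines
```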
args2env(1)
| asterisk-log-separator - Split up Asterisk PBX log file into multiple files based on which process wrote each part |
asterisk-log-separator - Split up Asterisk PBX log file into multiple files based on which process wrote each part
| awk-cut - Select fields from input stream with awk |
awk-cut - Select fields from input stream with awk
awk-cut [COLUMNS-SPEC]
Where COLUMNS-SPEC is a variation of these:
cut.awk(1)
| base58 - Encode to Base58 |
base58 - Encode to (decode from) Base58
| base64url - Encode to Base64-URL encoding |
base64url - Encode to (decode from) Base64-URL encoding
| bencode2json - Convert Bencode to JSON |
bencode2json - Convert Bencode (BitTorrent's loosely structured data) to JSON
| header - Echo the input stream up to the first empty line |
header - Echo the input stream up to the first empty line (usual end-of-header marker)
body - Skip everything in the input stream up to the first empty line (usual end-of-header marker) and echo the rest
header FILE [FILE [FILE [...]]]
header < FILE
body FILE [FILE [FILE [...]]]
body < FILE
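Approximate awk one-liners for the same behavior (illustration only, not the tools themselves):

```shell
# like header: print until the first empty line
printf 'To: joe\nSubject: hi\n\nhello\n' | awk '/^$/ { exit } { print }'
# prints the two header lines

# like body: print everything after the first empty line
printf 'To: joe\nSubject: hi\n\nhello\n' | awk 'in_body { print } /^$/ { in_body = 1 }'
# prints: hello
```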
| cdexec - Run a given command in the given directory |
cdexec - Run a given command in the given directory
cdexec [--home | <DIRECTORY>] [--] <COMMAND> [<ARGS>]
Run a given command in the given directory. If no directory is given, the target directory is set to the directory the command itself resides in.
execline-cd by execlineb(1)
| chattr-cow - try hard to enable Copy-on-Write attribute on files |
chattr-cow - try hard to enable Copy-on-Write attribute on files
chattr-nocow - try hard to disable Copy-on-Write attribute on files
| chromium_cookie_decrypt.py - Decrypt Chromium web browser stored cookies and output cleartext |
chromium_cookie_decrypt.py - Decrypt Chromium web browser stored cookies and output cleartext
| chshebang - Change a script's default interpreter |
chshebang - Change a script's default interpreter
| cred - Credentials and secrets management in command line |
cred - Credentials and secrets management in command line
cred SUBCOMMAND SITE [ARGUMENTS]
cred site SITE SUBCOMMAND [ARGUMENTS]
SITE, most often a website name, is a container of one or more properties. But it can be anything you want to tie properties to, typically passwords, keys, pin codes, API tokens as secrets and username, email address, etc. as ordinary properties.
SITE is represented as a directory in the credentials base dir. You may also enter a directory path on the filesystem for SITE. You don't need to create a SITE: it's created automatically when you write into it.
For websites and other services you have more than one account or identity for,
recommended to organize them into sub-directories like: SITE/IDENTITY,
eg: mail.example.net/joe@example.net and mail.example.net/jane@example.net.
Output a bash script to setup tab-completion for the cred command.
Use it by eg: eval "$(cred compscript)"
Display all properties (and their values) of a given site.
Optional parameter is how secrets are displayed:
mask-secrets is the default and replaces a secret string with 5 asterisks (*****) uniformly (so number of chars are not leaked).
hash-secrets replaces secrets with a hash; the checksum algorithm's name
is appended to the hash with a tab, like: <TAB>hash-algo=NAME.
blank-secrets displays the secret property name but leaves the value field empty.
Finally reveal-secrets displays secret strings in clear text just like ordinary properties.
The option subdirs dumps properties from the sub-directories too.
Currently those properties are considered to be secret whose name contains at least one of these words (case-insensitive): pass, key, cvc, secret, pin, code, token, totp (but not totp-issuer).
Generate a new password and put it in the PASSWORD property; append its old value to the OLDPASSWORDS property; copy the new one to the clipboard.
Manage properties of a given site. See individual instruction descriptions at the subcommands below which are aliases to these prop ... commands.
Open up the $EDITOR (falling back to $VISUAL) to edit the given property's value.
Read the new value from STDIN (readline is supported if your bash supports it, see help read in bash(1)).
Secrets are read in no-echo mode.
Subcommand show shows only non-secrets. Enter reveal to show secrets as well.
By clip you may copy the value to the clipboard.
If you use CopyQ(1), secrets are prevented from getting into CopyQ's clipboard item history.
Takes one or more property names and types their values to the window accessible by pressing Alt+Tab on your desktop.
Also presses <TAB> after each string, but does not press <RETURN>.
A single dot (.) is a pseudo PROPERTY name: if it's given, nothing will be typed in its place,
but <TAB> is still pressed after it.
Use it if the form has fields which you don't want to fill in.
Obviously it's useful only with a $DISPLAY.
Depends on xdotool(1).
TOTP property (Time-based One-Time Password) can be set (simply by cred ... set TOTP, no value needed), deleted, shown, and revealed.
When accessed, the cotp(1) program is called to search for a TOTP code whose ISSUER (combined with LABEL, if taking the ISSUER only would be ambiguous)
matches the selected SITE.
How SITE and ISSUER (LABEL) are matched: if the site has an OTP-ISSUER property, it is searched for. Otherwise the site's name itself is taken as the ISSUER name. If the site is more than one directory level deep under the credentials base dir, then its first path component alone also satisfies the search criteria. For example, TOTP codes for a site like "example.com/my-2nd-account" are searched under both "example.com/my-2nd-account" and "example.com" issuers.
If the above filtering yields more than 1 cotp(1) records, it's further filtered by LABEL.
The following properties are tried as LABEL in order: EMAIL, USERNAME, LOGIN.
Once only 1 cotp(1) record is yielded, it is taken as the TOTP code.
Credentials directory is hardcoded to ~/cred.
| convert_chromium_cookies_to_netscape.sh - Convert Chromium and derivative web browser's cookies to Netscape format |
convert_chromium_cookies_to_netscape.sh - Convert Chromium and derivative web browser's cookies to Netscape format (used by wget and curl)
| corner_time - Place a digital clock in the upper right hand corner of the terminal |
corner_time - Place a digital clock in the upper right hand corner of the terminal
| cpyfattr - Copy file attributes |
cpyfattr - Copy file attributes (xattr)
cpyfattr SOURCE DESTINATION [OPTIONS]
Copy all of the SOURCE file's extended attributes (xattrs) to DESTINATION
using getfattr(1) and setfattr(1).
All options are passed to setfattr(1).
Note that OPTIONS are at the end of argument list.
getfattr(1), setfattr(1)
| cronrun - convenience features to run commands in task scheduler environment |
cronrun - convenience features to run commands in task scheduler environment
cronrun [OPTIONS] <COMMAND> [ARGS]
Run COMMAND in a way most scheduled jobs are intended to run, ie:
Delay program execution by at most TIME amount of time. Default is not to delay at all. Can also be set by the CRONRUN_DELAY environment variable.
TIME is a series of AMOUNT and UNIT pairs after each other without space, ie:
AMOUNT UNIT [ AMOUNT UNIT [ AMOUNT UNIT [...] ] ]
Where UNIT is s, m, h, d for seconds, minutes, hours, days respectively.
Example: 1h30m
A single number without UNIT is seconds.
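The TIME spec above can be parsed with a short awk sketch (the real cronrun may parse it differently; parse_time is a made-up name):

```shell
# sum up AMOUNT+UNIT pairs like "1h30m" into seconds
parse_time() {
  echo "$1" | awk '{
    total = 0
    while (match($0, /[0-9]+[smhd]?/)) {
      t = substr($0, RSTART, RLENGTH)
      u = substr(t, RLENGTH, 1)          # unit char (or a digit if none)
      v = t + 0                          # numeric prefix
      if (u == "m") v *= 60; else if (u == "h") v *= 3600; else if (u == "d") v *= 86400
      total += v
      $0 = substr($0, RSTART + RLENGTH)
    }
    print total
  }'
}
parse_time 1h30m   # prints: 5400
```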
Wait for the lock to release.
By default cronrun(1) fails immediately if locked.
Lock is based on CRONJOBID environment, or COMMAND if CRONJOBID is not set.
If CRONJOBID is set, STDIO goes to syslog too, in the "cron" facility, stdout at info level, stderr at error level.
If not set, STDIO is not redirected.
Lock files are stored in this directory.
Recommended practice is to set CRONJOBID=something in your crontab before each cronrun ... job definition.
Set value for the --random-delay option.
| cut.awk - Output only the selected fields from the input stream, parameters follow awk syntax |
cut.awk - Output only the selected fields from the input stream, parameters follow awk(1) syntax
awk-cut(1)
| daemonctl - Manage preconfigured libslack daemon daemons more conveniently |
daemonctl - Manage preconfigured libslack daemon(1) daemons more conveniently
Daemonctl presumes some facts about the system:
| dataurl2bin - Decode "data:..." URLs from input stream and output the raw binary data |
dataurl2bin - Decode "data:..." URLs from input stream and output the raw binary data
| dbus-call - Browse DBus and call its methods |
dbus-call - Browse DBus and call its methods
dbus-call [OPTIONS] [SERVICE [OBJECT [INTERFACE [METHOD [ARGUMENTS]]]]]
May leave out any parameters from the right, in which case possible values for the first left-out parameter are listed.
Connect to the system DBus.
Connect to the session DBus.
Connect to ADDRESS DBus.
Output in raw form if the output is a single string or number.
| debdiff - Display differences between 2 Debian packages |
debdiff - Display differences between 2 Debian packages (*.deb files)
| delfattr - Removes given attributes from files |
delfattr - Removes given attributes (xattr) from files
delfattr -n NAME [-n NAME [..]] FILE [FILE [...]]
Remove NAME xattribute(s) from the given files.
setfattr(1)
| descpids - List all descendant process PIDs of the given process |
descpids - List all descendant process PIDs of the given process(es)
| dfbar - Display disk space usage with simple bar chart |
dfbar - Display disk space usage with simple bar chart (as reported by df(1))
| digasn - Query Autonomous System Number from DNS |
digasn - Query Autonomous System Number (ASN) from DNS
| diu - Display Inode usage, similar to du for space usage |
diu - Display Inode usage, similar to du(1) for space usage
| dlnew - Download web resource if local copy is older |
dlnew - Download web resource if local copy is older
dlnew [-C] <url> <file>
Download content from web if newer than local copy (based on Last-Modified and caching headers).
Bypass validating cache.
URL to be downloaded. Scheme can be HTTP or HTTPS.
Local file the data has to be written to. If omitted, the last component (basename) of the URL is used.
URL is found and downloaded.
General error, system errors.
Local file's freshness validated by saved cache metadata, not downloaded.
Download not OK (usually Not Found).
URL found but not modified (HTTP 304).
URL found but not updated, based on the Last-Modified header.
| eat - Read and echo back input |
eat - Read and echo back input (like cat(1)) until interrupted (ie. ignore end-of-file)
| errorlevel - Exit with the given status code |
errorlevel - Exit with the given status code
| evhand - Process new events in a text file, one event described per line |
evhand - Process new events in a text file, one event described per line
evhand [OPTIONS] EVENT-FILE STATE-FILE HANDLER [ARGS]
evhand(1) iterates through EVENT-FILE and runs the HANDLER command on each new line.
What is considered new is decided by STATE-FILE.
Handled events are recorded in STATE-FILE (either verbatim or by checksum),
so new events are those not in the state file.
If HANDLER command fails, the event is not considered to have been handled.
Exit at the first failed HANDLER command. Exit status will be the failed handler command's exit status if terminated normally, and 128 + signal number if killed by a signal. By default, run HANDLER for all events, and exit with zero regardless of handler commands exit status.
Record and check the event's checksum in STATE-FILE instead of the verbatim event string itself.
Remove those entries from the state file which are not encountered in the event file. Shrinks only when the whole event file could be read up, so not if interrupted by a failed handler command (in --errexit mode), nor if any other error prevented reading all the events in the event file.
This is useful if you regularly purge old events from the event file and don't want the state file to grow indefinitely.
The string representing the event to be handled.
This is passed by evhand(1) to the HANDLER program.
EVENT should not contain NUL byte as it can not be put in the environment.
stdin(3) is closed for the HANDLER process.
STATE-FILE is locked during the event handling process, so only 1 process can handle events per each STATE-FILE.
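The core "which events are new" step can be sketched with grep(1); evhand additionally locks the state file, runs the handler, and records handled events:

```shell
# new events = lines of the event file not present in the state file
dir=$(mktemp -d)
printf 'event-a\nevent-b\nevent-c\n' > "$dir/events.txt"
printf 'event-a\n' > "$dir/state.txt"
grep -F -x -v -f "$dir/state.txt" "$dir/events.txt"
# prints: event-b and event-c
```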
Out-of-scope features for evhand(1) and suggestions what to do instead:
See eg. logto(1), redirexec(1), ...
See eg. ts(1), timestamper(1), ...
Just re-run evhand(1).
Or wrap it by repeat(1) like:
env REPEAT_UNTIL=0 repeat evhand -e ...
It restarts evhand until its exit status is zero. Assumed that the failure is temporary.
Use an inotify(7) frontend, like iwatch(1) to trigger evhand(1).
Sort events into multiple separate event files and run other evhand(1) sessions on them.
uniproc(1)
| fcomplete - Complete a smaller file with the data from a bigger one |
fcomplete - Complete a smaller file with the data from a bigger one
| fc-search-codepoint - Print the names of available X11 fonts containing the given code point |
fc-search-codepoint - Print the names of available X11 fonts containing the given code point(s)
| fdupes-hardlink - Make hardlinks from identical files as reported by fdupes |
fdupes-hardlink - Make hardlinks from identical files as reported by fdupes(1)
| ff - Find files horizontally, ie. a whole directory level at a time, across subtrees |
ff - Find files horizontally, ie. a whole directory level at a time, across subtrees
ff <pattern> [path-1] [path-2] ... [path-n]
Search for files whose name matches the pattern in the given paths recursively, case-insensitively. The file's path is matched if the pattern contains '/'. Searching is done horizontally, ie. the top-most directory level is scanned completely first, then the next level's directories are scanned before moving to the 3rd level, and so on. This way users usually find what they are looking for more quickly.
| ffilt - Filter a file via a command's STDIO and write back to the file |
ffilt - Filter a file via a command's STDIO and write back to the file
ffilt FILE COMMAND [ARGS]
Feed FILE into COMMAND's stdin, then save its stdout back to FILE if COMMAND ran successfully.
ffilt(1) is a quasi shorthand for this shell construct:
output=`cat FILE | COMMAND`
[ $? = 0 ] && echo "$output" > FILE
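A safer variant of the same idea, sketched with a temporary file (an illustration, not ffilt's actual implementation; ffilt_sketch is a made-up name):

```shell
# filter FILE through COMMAND; write back only on success
ffilt_sketch() {
  file=$1; shift
  tmp=$(mktemp) || return 1
  if "$@" < "$file" > "$tmp"; then
    mv "$tmp" "$file"
  else
    rm -f "$tmp"; return 1
  fi
}

f=$(mktemp)
printf '3\n1\n2\n' > "$f"
ffilt_sketch "$f" sort
cat "$f"   # prints 1, 2, 3 each on its own line
```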
sponge(1), insitu(1) https://github.com/athas/insitu
| fgat - Execute command in foreground at a given time |
fgat - Execute command in foreground at a given time
fgat <time-spec> <command> [arguments]
Unlike at(1), fgat(1) stays in the console's foreground and waits until time-spec, then runs command.
time-spec can be any string accepted by date(1).
| filesets - Set operations on text files, lines being set elements |
filesets - Set operations on text files, lines being set elements
filesets [OPTIONS] EXPRESSION FILE-1 FILE-2 [...]
Sets are identified by the file's number, 1-indexed.
These are the supported operators, may be given by word or by symbol:
Nested parentheses are supported.
Output the resulting set.
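The classic building blocks for such line-based set operations can be shown with comm(1) on sorted inputs (filesets wraps this idea in an expression language):

```shell
# two "sets", one line per element, sorted as comm(1) requires
a=$(mktemp); b=$(mktemp)
printf 'apple\nbanana\ncherry\n' > "$a"
printf 'banana\ncherry\ndate\n' > "$b"
comm -12 "$a" "$b"   # intersection: banana, cherry
comm -23 "$a" "$b"   # difference (set 1 minus set 2): apple
```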
comm(1), uniq(1), setop(1)
| filterexec - Echo those arguments with which the given command returns zero. |
filterexec - Echo those arguments with which the given command returns zero.
filterexec COMMAND [ARGS] -- DATA-1 [DATA-2 [... DATA-n]]
Prints each DATA (1 per line) only if the command COMMAND ARGS DATA exits successfully, ie. with zero exit status.
If you want to evaluate not command line arguments, but data read on STDIN,
then combine filterexec(1) with foreach(1).
filterexec test -d -- $(ls)
Shows only the directories.
The shell's tokenization may wrongly split up file names containing spaces.
Perhaps set IFS to newline only.
ls -1 | foreach filterexec test -d --
Same, but file names are supplied 1-by-1, not all at once,
hence filterexec(1) is invoked multiple times.
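The filtering loop can be sketched in bash (the real filterexec may differ; filterexec_sketch is a made-up name):

```shell
# run CMD ARGS DATA for each DATA after "--"; echo DATA on success
filterexec_sketch() {
  local -a cmd=()
  while [ "$1" != -- ]; do cmd+=("$1"); shift; done
  shift   # drop the "--"
  local d
  for d in "$@"; do
    "${cmd[@]}" "$d" >/dev/null 2>&1 && printf '%s\n' "$d"
  done
}

d=$(mktemp -d); f=$(mktemp)
filterexec_sketch test -d -- "$d" "$f"   # prints only the directory
```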
| find-by-date - Find files with GNU find but with easier to comprehend time interval formats |
find-by-date - Find files with GNU find(1) but with easier to comprehend time interval formats
find-by-date [FROM--][TO] [FIND-ARGS]
Takes your FROM--TO date-time specification and turns it into the
appropriate -mmin -MINUTES and -mmin +MINUTES
parameters for find(1), then calls find(1).
Recognize these date-time formats in FROM and TO:
YYYY-mm-dd_HH:MM
YYYY-mm-dd_HH
YYYY-mm-dd
YYYY-mm
YYYY
mm-dd
dd
mm-dd_HH:MM
mm-dd_HH
dd_HH:MM
dd_HH
HH:
_HH
Enter 0--TO to select any time up to TO.
Enter FROM-- to select any time starting from FROM.
| findnewestfile - Search for the newest file in a given path recursively and always show the most recent while scanning |
findnewestfile - Search for the newest file in a given path recursively and always show the most recent while scanning
findoldestfile - Search for the oldest file in a given path recursively and always show the oldest found so far while scanning
findnewestfile [path]
findoldestfile [path]
Search for the newest/oldest file in the given directory and its subdirectories, showing a file immediately whenever one newer/older than all previous ones is found.
| fixlogfiledatetime - Set the target files modification time to their respective last log entry's timestamp |
fixlogfiledatetime - Set the target files modification time to their respective last log entry's timestamp
| fixRFC822filemtime - Set a file's last modification time, which contains an email message in RFC-822 format, to the email's Date |
fixRFC822filemtime - Set a file's last modification time, which contains an email message in RFC-822 format, to the email's Date
| fmtkv - Transform key=value pairs into one pair per line on the output |
fmtkv - Transform key=value (each optionally double-quoted) pairs into one pair per line on the output
| foreach - Run an OS or shell command on each input line, similar to xargs |
foreach - Run an OS or shell command on each input line, similar to xargs(1)
foreach [OPTIONS] COMMAND [ARGS ...]
Take each input line from stdin as DATA, and run COMMAND with DATA appended to the end of ARGS as a single argument.
If {} is present in ARGS then it's substituted with DATA rather than appending DATA to the end,
unless --no-placeholder is given, in which case {} is taken literally.
Additionally, foreach(1) parses DATA into fields and adds each of them to the end of ARGS if --fields is given.
Numbered placeholders, like {0}, {1}, ... are substituted with the respective field's value.
A stand-alone {@} (curly bracket open, at sign, curly bracket close) argument is substituted to all fields as separate arguments.
So, for example, if you have not specified any ARGS in the command line and type both --data and --fields, then DATA goes into argv[1], the first field goes into argv[2], the second into argv[3] and so on. If neither --data nor --fields is given, then --data is implied.
If called with --sh option, COMMAND is run within a shell context;
input line goes to $DATA, individual fields go to ${FIELD[@]} (0-indexed).
Both in command and shell (--sh) modes, individual fields are available in
$F0, $F1, ... environment variables.
Set -d DELIM if you want to split DATA not by $IFS but by other delimiter chars,
eg. -d ',:' for comma and colon.
There is also -t/--tab option to set delimiter to TAB for your convenience.
COMMAND is a shell script and for each DATA, it runs in the same shell context, so variables are preserved across invocations.
Pass DATA in the arguments after the user-specified ARGS.
Pass individual fields of DATA in the arguments after DATA if --data is given, or after the user-specified ARGS if --data is not given.
Don't read any DATA from stdin, but take DATA given at --input option(s). This option is repeatable.
Cut up DATA into fields at DELIM chars.
Default is $IFS.
Cut up DATA into fields at TAB chars.
Do not substitute {} with DATA.
Print something before each command execution.
TEMPLATE is a bash-interpolated string,
may contain $DATA and ${FIELD[n]}.
You probably need to put it in single quotes when passing to foreach(1) from the invoking shell.
It's designed to be evaluated, so backtick, command substitution, semicolon, and other shell expressions are eval'ed by bash.
Append TEMPLATE to the prefix template. See --prefix option.
Add DATA to the prefix which is printed before each command execution. See --prefix option.
Add a TAB char to the prefix which is printed before each command execution. See --prefix option.
Stop executing if a COMMAND returns non-zero, and exit with that command's exit status code.
ls -l --time-style +%FT%T%z | foreach --data --fields sh -c 'echo size: $5, file: $7'
ls -l --time-style +%FT%T%z | foreach --sh 'echo size: ${FIELD[4]}, file: ${FIELD[6]}'
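foreach's field splitting resembles a plain-shell read loop over $IFS; here is a rough equivalent with static input (illustration only):

```shell
# each input line is split into fields, available per invocation
printf 'alice 30\nbob 25\n' | while read -r F0 F1; do
  echo "name=$F0 age=$F1"
done
# prints: name=alice age=30, then name=bob age=25
```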
Placeholders for field values ({0}, {1}, ...) are considered from 0 up to 99.
There must be a limit somewhere, otherwise I would have had to write a more complex replace routine.
Placeholder {} is substituted in all ARGS anywhere, not just stand-alone {} arguments,
but IS NOT ESCAPED!
So be careful using it in shell command arguments like sh -c 'echo "data is: {}"'.
xargs(1), xe(1) https://github.com/leahneukirchen/xe, apply(1), xapply(1) https://www.databits.net/~ksb/msrc/local/bin/xapply/xapply.html
| g_filename_to_uri - Mimic g_filename_to_uri GLib function creating a file:// url from path string |
g_filename_to_uri - Mimic g_filename_to_uri() GLib function creating a file:// url from path string
| getcvt - Print the current active Virtual Terminal |
getcvt - Print the current active Virtual Terminal
getcvt
chvt(1)
| gitconfigexec - Change git settings for a given command run only |
gitconfigexec - Change git settings for a given command run only
gitconfigexec KEY=VALUE [KEY=VALUE [...]] [--] COMMAND ARGS
KEY is a valid git config option (see git-config(1)).
Set GIT_CONFIG_COUNT, GIT_CONFIG_KEY_n, and GIT_CONFIG_VALUE_n
environment variables, so git(1) takes them as session-override settings.
| git_diff - View two files' diff by git-diff, even not under git version control |
git_diff - View two files' diff by git-diff(1), even not under git version control
| git-submodule-auto-add - Automatically add submodules to a git repo according to .gitmodules file |
git-submodule-auto-add - Automatically add submodules to a git repo according to .gitmodules file
git submodule-auto-add [OPTIONS]
Those which git-submodule(1) add accepts.
Calls one git submodule add ... command for each submodule defined in
the .gitmodules file in the current repo's root,
automatically adding the submodules this way.
An extra feature is being able to define what the submodule's remote should be called ("origin" or the tracking remote of the superproject's current branch, see git-submodule(1) for details). Add a remotename option to the submodule's section in .gitmodules to achieve this.
Does not fail if a submodule can not be added, but continues with the next one.
| glob - Expand shell-wildcard patterns |
glob - Expand shell-wildcard patterns
glob [OPTIONS] [--] PATTERN [PATTERN [PATTERN [...]]]
Expand PATTERN as shell-wildcard patterns and output matching filenames. Output all matched file names once and sorted alphabetically.
Output filenames as NUL byte terminated strings.
Fail if can not read a directory. See GLOB_ERR in File::Glob(3perl).
Fail if any PATTERN did not match. Exit code is 2 in this case.
Match case-insensitively. Default is case-sensitive.
Support curly bracket expansion. See GLOB_BRACE in File::Glob(3perl).
Uses perl(1)'s bsd_glob function from File::Glob(3perl).
File::Glob(3perl), perldoc(1): glob
| Head - output as many lines from the first part of files as there are lines on the terminal currently |
Head - output as many lines from the first part of files as there are lines on the terminal currently
| header - Echo the input stream up to the first empty line |
header - Echo the input stream up to the first empty line (usual end-of-header marker)
body - Skip everything in the input stream up to the first empty line (usual end-of-header marker) and echo the rest
header FILE [FILE [FILE [...]]]
header < FILE
body FILE [FILE [FILE [...]]]
body < FILE
| hlcal - Highlight BSD cal output |
hlcal - Highlight BSD cal(1) output
hlncal - Highlight BSD ncal(1) output
hlcal [OPTIONS] [CAL-OPTIONS]
hlncal [OPTIONS] [NCAL-OPTIONS]
Wrap cal(1), ncal(1) around and highlight specific days.
Where DOW is a day-of-week name (3 letters), COLOR is a space- or hyphen-delimited list of ANSI color or other formatting style names, and DATE (and START-DATE, END-DATE) is in [[YYYY-]MM-]DD format, ie. year and month are optional, and their lack is interpreted as "every year" and "every month" respectively.
In a single date definition, DATE, you may enter an asterisk * as the month
to select a given date in every month of the given year, or of every
year if you leave out the year as well.
Example: 1917-*-15
In the interval definition, you may add several DOW days, which makes only
those days highlighted in the specified interval.
Examples:
04-01...06-30,WED means every Wednesday in the second quarter.
1...7,FRI means the first Friday in every month.
Colors: black, red, green, yellow, blue, magenta, cyan, white, default.
May be preceded by bright, eg: bright red.
May be followed by bg to set the background color instead of the
foreground, eg: yellow-bg.
Styles: bold, faint, italic, underline, blink_slow, blink_rapid, inverse, conceal, crossed.
Note, not all styles are supported by all terminal emulators.
hlncal today=inverse `ncal -e`=yellow_bg-red SUN=bright-red SAT=red -bM3
| htmlentities - Convert plain text into HTML-safe text |
htmlentities - Convert plain text into HTML-safe text
escape control chars (0x00-0x1F except TAB, LF, and CR)
escape meta chars (less-than, greater-than, ampersand, double- and single-quote)
escape non-ASCII chars
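The meta-char escaping step can be sketched with sed(1) (order matters: the ampersand must be escaped first; the real htmlentities also handles control and non-ASCII chars):

```shell
# escape &, <, > and double quotes for safe HTML embedding
printf '<a href="x">&\n' |
sed -e 's/&/\&amp;/g' \
    -e 's/</\&lt;/g' \
    -e 's/>/\&gt;/g' \
    -e 's/"/\&quot;/g'
# prints: &lt;a href=&quot;x&quot;&gt;&amp;
```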
| indent2tree - Make TAB-indented text into an ASCII tree chart |
indent2tree - Make TAB-indented text into an ASCII tree chart
Set -v, -h, -c, and -l options' values to ASCII line-art chars.
Output path-like strings per line, instead of tree-like diagram.
If SEP is specified, take it as path separator
instead of the default slash (/) char.
Input: lines with leading TAB chars representing the depth in the tree. Multiline records are supported by terminating lines (all but the last one) by backslash.
Output: tree diagram with (ASCII or Unicode) drawing chars. Set custom drawing chars by the -v, -h, -c, and -l options.
Input data must have at least one "root" item, ie. text starting at the beginning of the line, without preceding TAB.
Tree depth needs to be denoted by TAB chars, not any other whitespace. Pre-format it if you need to.
Since there can be multiple root items and root items do not have ancestry lines, a multiline root item can be confused with multiple items all having zero children (except maybe the last one). If it matters to you, put a common parent above the tree by inserting a root item to the 0th line and indenting all other lines by 1 level.
Multiline items are not supported in --paths mode.
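The --paths output can be sketched with awk(1). This is an illustrative approximation only (single-line records, TAB depth, default slash separator); indent2tree's real output may differ:

```shell
# Convert TAB-indented lines to slash-separated paths.
indent_to_paths() {
	awk -F'\t' '{
		depth = NF - 1        # number of leading TABs gives the depth
		name[depth] = $NF     # remember the node text at this depth
		path = name[0]
		for (i = 1; i <= depth; i++) path = path "/" name[i]
		print path
	}'
}

printf 'root\n\tchild\n\t\tleaf\n\tother\n' | indent_to_paths
# prints:
# root
# root/child
# root/child/leaf
# root/other
```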
| indent2graph - Generate graph out of whitespace-indented hierarchical text |
indent2graph - Generate graph out of whitespace-indented hierarchical text
indent2graph < tree.txt > tree.dot
Take line-based input, and output a directed graph in a given format, eg. dot(1) (see graphviz(1)).
Each input line is a node.
How much the line is indented (by leading spaces or TABs) determines its relation to the nodes of the surrounding lines.
Lines which are indented to the same level, go to the same rank on the tree-like graph in the output.
The graph may contain loops:
lines with the same text (apart from the leading whitespace) are considered the same node
(except when --tree option is set).
Input:
/usr/bin/ssh
libselinux
libpcre2-8
libgssapi_krb5
libkrb5
libkeyutils
libresolv
libk5crypto
libcom_err
libkrb5support
libcrypto
libz
libc
Command:
indent2graph -f clojure | vijual draw-tree -
Output:
+------------+
| /usr/bin/s |
| sh |
+-----+------+
|
+------------------------+----+---------+----------+--------+
| | | | |
+-----+------+ +-----+------+ +-----+-----+ +--+---+ +--+---+
| libselinux | | libgssapi_ | | libcrypto | | libz | | libc |
+-----+------+ | krb5 | +-----------+ +------+ +------+
| +-----+------+
| |
| +----------+-+--------------+--------------+
+-----+------+ | | | |
| libpcre2-8 | +----+----+ +-----+------+ +-----+------+ +-----+------+
+------------+ | libkrb5 | | libk5crypt | | libcom_err | | libkrb5sup |
+----+----+ | o | +------------+ | port |
| +------------+ +------------+
+--------+-----+
| |
+-----+------+ +-----+-----+
| libkeyutil | | libresolv |
| s | +-----------+
+------------+
Output format.
The graphviz(1) (dot(1)) format.
Simple TAB-separated node name pairs, each describes a graph edge, 1 per line.
Clojure-style nested vectors (represented as string).
Useful for vijual(1).
Graph::Easy(3pl)'s own "txt" format. With graph-easy(1) you can transform further into other formats, like GDL, VCG, ...
TODO
Indentation in the input represents ascendants, not descendants. Default is a descendant chart. This influences where the arrows point.
Interpret input strictly as a tree with no cycles. By default, without --tree, lines with the same text represent the same node, so you can build arbitrary graph. With --tree, you can build a tree-like graph in which different nodes may have the same text (label).
This is the dot(1) graph's rankdir parameter.
Although this option is specific to the dot(1) format,
it is translated for Graph::Easy if that is the chosen output format.
DIR is one of TB, BT, LR, RL.
Default is LR, ie. left-to-right.
See graphviz(1) documentation for details.
indent2tree(1), graphviz(1), dot(1), vijual(1), Graph::Easy(3pl)
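The TAB-separated pairs format can be sketched with awk(1). A hedged approximation: it assumes TAB-only indentation, while the real indent2graph also accepts spaces and merges identically-named nodes:

```shell
# Emit parent<TAB>child edge pairs from TAB-indented input.
indent_to_pairs() {
	awk '{
		d = 0
		while (substr($0, d + 1, 1) == "\t") d++   # count leading TABs
		node = substr($0, d + 1)
		seen[d] = node
		if (d > 0) print seen[d - 1] "\t" node
	}'
}

printf 'a\n\tb\n\t\tc\n\td\n' | indent_to_pairs
```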
| cpfx2indent - Filter text of lines by replacing common prefixes to indentation |
cpfx2indent - Filter text of lines by replacing common prefixes to indentation
cpfx2indent [OPTIONS]
Analyzes input lines on STDIN to detect common prefixes and replaces each line’s leading segment with a number of TABs proportional to the length of the prefix it shares with other lines.
Tokenize input lines by PATTERN regexp pattern.
Default is any whitespace (\s+).
Indent output by STRING. Default is TAB. Other useful STRINGs are, for example, a space or a double space.
indent2graph(1), indent2tree(1), paths2indent(1)
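The core idea can be sketched in awk(1). This simplified version only compares each line with its predecessor and indents one TAB per shared token, whereas the real tool analyzes the whole input:

```shell
# Replace the token prefix shared with the previous line by TABs.
cpfx_sketch() {
	awk '{
		n = split($0, tok, /[ \t]+/)
		common = 0
		while (common < n && common < pn && tok[common + 1] == prev[common + 1])
			common++
		line = ""
		for (i = 1; i <= common; i++) line = line "\t"
		for (i = common + 1; i <= n; i++)
			line = line tok[i] (i < n ? " " : "")
		print line
		pn = n
		for (i = 1; i <= n; i++) prev[i] = tok[i]
	}'
}

printf 'usr bin ls\nusr bin cat\nusr lib x\n' | cpfx_sketch
```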
| inisort - Sort keys in an INI file according to the order of keys in an other INI file |
inisort - Sort keys in an INI file according to the order of keys in an other INI file
inisort [<UNSORTED>] [<REFERENCE>] > [<SORTED>]
| is_gzip - Return 0 if the file in argument has gzip signature |
is_gzip - Return 0 if the file in argument has gzip signature
| levenshtein-distance - Calculate the Levenshtein distance of given strings |
levenshtein-distance - Calculate the Levenshtein distance of given strings
jaro-metric - Calculate the Jaro metric of given strings
jaro-winkler-metric - Calculate the Jaro-Winkler metric of given strings
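For reference, the textbook dynamic-programming recurrence behind the Levenshtein distance, as an awk(1) sketch (not the tool itself, whose interface and output may differ):

```shell
# Levenshtein distance of two strings by the classic DP table.
levdist() {
	awk -v a="$1" -v b="$2" 'BEGIN {
		la = length(a); lb = length(b)
		for (i = 0; i <= la; i++) d[i,0] = i
		for (j = 0; j <= lb; j++) d[0,j] = j
		for (i = 1; i <= la; i++)
			for (j = 1; j <= lb; j++) {
				cost = (substr(a,i,1) == substr(b,j,1)) ? 0 : 1
				m = d[i-1,j] + 1                        # deletion
				if (d[i,j-1] + 1 < m) m = d[i,j-1] + 1  # insertion
				if (d[i-1,j-1] + cost < m) m = d[i-1,j-1] + cost  # substitution
				d[i,j] = m
			}
		print d[la, lb]
	}'
}

levdist kitten sitting   # prints 3
```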
jobsel - Improved job control frontend for bash
jobsel <joblist> [COLUMNS]
Improved job control frontend for bash. joblist is a jobs -l output from which jobsel builds a menu.
COLUMNS is an optional parameter, if omitted jobsel calls tput(1) to obtain number of columns on the terminal.
Left,Right  Select item
Enter       Switch to job in foreground
U           Hangup selected process (SIGHUP)
I           Interrupt process (SIGINT)
S,T,Space   Suspend, Resume job (SIGCONT,SIGTSTP)
K           Kill process (SIGKILL)
D           Process details
X,C,L       Expanded, collapsed, in-line display mode
Q           Dismiss menu
eval $(jobsel "$(jobs -l)" $COLUMNS)
alias j='eval $(jobsel "$(jobs -l)" $COLUMNS)'
bind -x '"\204"':"eval \$(jobsel \"\$(jobs -l)\" \$COLUMNS)" bind '"\ej"':"\"\204\"" # ESC-J
Where \204 is an arbitrary free key code
| json2bencode - Convert JSON to Bencode |
json2bencode - Convert JSON to Bencode (BitTorrent's loosely structured data)
| killp - Send signal to processes by PID until they end |
killp - Send signal to processes (kill, terminate, ...) by PID until they end
killpgrp - Send signal to processes (kill, terminate, ...) by PGID until they end
killcmd - Send signal to processes (kill, terminate, ...) by command line until they end
killexe - Send signal to processes (kill, terminate, ...) by executable path until they end
killp [OPTIONS] <PID> [<PID> [...]]
Send signal to process(es) by PID, PGID (process group ID), command name, or by executable path
until the selected process(es) exit.
Ie. in a usual invocation, eg. killcmd java tries to SIGTERM all java processes as long as at
least 1 exists, and returns only afterwards.
The following options control how killcmd and killexe find processes. Semantics are the same as in grep(1):
-E --extended-regexp
-F --fixed-strings
-G --basic-regexp
-P --perl-regexp
-i --ignore-case
-w --word-regexp
-x --line-regexp
Other options:
killcmd looks for matching substring in the command's arguments too. By default, only the command name is considered (first word in the command line).
killcmd and killexe look for matching substring in the command's full path too. By default, only the basename is considered.
Which signal to send.
See kill(1) and signal(7) for valid SIG signal names and numbers.
How much to wait between attempts.
See sleep(1) for valid IVAL intervals.
By default, prints what is being killed on the second attempt onward. With --verbose, prints the first attempt too. With --quiet, does not print what is being killed.
kill(1), pkill(1), pgrep(1), killall(1), signal(7)
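The retry loop at the core of these tools can be sketched in plain shell. This is a simplified single-PID version; the real tools add process lookup, multiple targets, and verbosity control:

```shell
# Send SIG to PID repeatedly until the process no longer exists;
# kill(1) fails with ESRCH once the target is gone, ending the loop.
kill_until_gone() {
	pid=$1; sig=${2:-TERM}
	while kill -s "$sig" "$pid" 2>/dev/null; do
		sleep 1
	done
}
```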
| kt - Run command in background terminal; keept convenience wrapper |
kt - Run command in background terminal; keept(1) convenience wrapper
kt [jobs | COMMAND ARGS]
Run COMMAND in a keept(1) session, so you may send it to the background
with all of its terminal I/O, and recall with the same kt COMMAND ARGS
command.
Call kt jobs to show running command sessions.
Stores control files in ~/.cache/keept.
keept(1)
| LevelDB - Commandline interface for Google's leveldb key-value storage |
LevelDB - Commandline interface for Google's leveldb key-value storage
| lines - Output only the given lines of the input stream |
lines - Output only the given lines of the input stream
lines [RANGES [RANGES [...]]] [-- FILE [FILE [...]] | < FILE]
Read from FILEs if specified, STDIN otherwise. RANGES is a comma-delimited list of line numbers and inclusive ranges. The special word "EOF" in a range's upper limit represents the end of the file.
Starts the line numbering from 1.
If multiple files are given, restart the line numbering on each file.
Always displays the lines in in-file order, not in the order they were given in the RANGES arguments; ie. it does not buffer or seek in the input files. So lines 1,2 and lines 2,1 both display the 1st line before the 2nd.
Exit 2 if there was a range which was not found, ie. a file had fewer lines than requested.
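For fixed numeric ranges the behaviour is close to sed(1)'s print commands. A rough equivalent only: the EOF keyword, multiple files, and the exit-status logic have no sed counterpart:

```shell
# lines 2,4-5 < file is roughly:
printf 'a\nb\nc\nd\ne\n' | sed -n -e 2p -e 4,5p
# prints:
# b
# d
# e
```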
| lnto - Convenience wrapper for ln. User enters link target paths relative to the current directory |
lnto - Convenience wrapper for ln(1). User enters link target paths relative to the current directory
| loggerexec - Run a command and send STDOUT and STDERR to syslog |
loggerexec - Run a command and send STDOUT and STDERR to syslog
loggerexec [-s] FACILITY IDENT COMMAND [ARGS]
Send COMMAND's stdout and stderr to syslog.
FACILITY is one of standard syslog facility names (user, mail, daemon, auth, local0, ...).
IDENT is a freely chosen identity name, also known as tag or programname.
COMMAND's stdout goes as info log level, stderr goes as error log level.
Option -s puts the output on stdout/stderr too.
logger(1), stdsyslog(1)
| logto - Run a command and append its STDOUT and STDERR to a file |
logto - Run a command and append its STDOUT and STDERR to a file
logto FILENAME COMMAND [ARGS]
Save command's output (stdout and stderr) to file and keep normal stdout and stderr as well.
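A rough stand-in can be built on tee(1). Note a key difference: unlike logto, this merges stdout and stderr into a single stream. The function name is made up for illustration:

```shell
# Run a command, appending its (merged) output to a log file while
# still passing it through to the terminal.
log_merged() {
	logfile=$1; shift
	"$@" 2>&1 | tee -a "$logfile"
}
```

Usage would look like log_merged build.log make (file name hypothetical).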
| lpjobs - Show printer queue jobs |
lpjobs - Show printer queue jobs (wrapper for lpq and lpstat)
| lsata - List ATA devices on the system |
lsata - List ATA devices on the system
| lsenv - List environment variables of a process |
lsenv - List environment variables of a process
lsenv <pid>
| mail-extract-raw-headers - Get named headers from RFC822-format input. |
mail-extract-raw-headers - Get named headers from RFC822-format input.
mail-extract-raw-headers [OPTIONS] <NAME> [<NAME> [...]]
Keep linefeeds in multiline text.
Output the header name(s) too, not only the contents.
| maskfiles - Lay over several text files on top of each other like transparency sheets for overhead projectors |
maskfiles - Lay over several text files on top of each other like transparency sheets for overhead projectors
maskfiles [OPTIONS] [--] FILE_1 FILE_2 [FILE_3 ... FILE_n]
Take files from 1 to n and virtually put them on top of each other by matching byte offsets. If a file on an upper layer has a hole (space by default, otherwise see the --hole-chars option), then the char on the lower layers "looks through" it. Non-hole chars block the lower layers, so they themselves are visible in the output.
Output is STDOUT. No input files are written.
Which chars are to be looked through. By default space is the only hole char. Add underscore to it, for example: --hole-chars=" _"
Make NUL chars look through as well.
Respect line breaks.
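The overlay idea can be sketched with awk(1). This approximates the line-respecting mode with exactly two layers; the real tool matches raw byte offsets and accepts any number of files:

```shell
# Overlay two files line-by-line: where the top layer has a space
# (or is shorter), the bottom layer's char shows through.
mask2() {
	awk 'NR == FNR { top[FNR] = $0; next }
	{
		t = top[FNR]
		len = (length(t) > length($0)) ? length(t) : length($0)
		out = ""
		for (i = 1; i <= len; i++) {
			c = substr(t, i, 1)
			if (c == "" || c == " ") c = substr($0, i, 1)
			if (c == "") c = " "
			out = out c
		}
		print out
	}' "$1" "$2"
}
```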
| mime_extract - Extract parts from a MIME multipart file and save them into separate files |
mime_extract - Extract parts from a MIME multipart file and save them into separate files
| mime-header-decode - Decode MIME-encoded stream on stdin line-by-line |
mime-header-decode - Decode MIME-encoded stream on stdin line-by-line
| mkdeb - Create a Debian package |
mkdeb - Create a Debian package (.deb)
mkdeb [-m | --multiarch]
Create a *.deb file according to the package name and version info found in the ./deb/DEBIAN/control file
and include all files in the package found in the ./deb folder. Updates some of the control file's fields, eg.
Version (increased by 1 if any file in the package is newer than the control file), Installed-Size...
In multiarch mode, instead of the ./deb folder, it takes data from all folders in the current working directory whose name is a valid Debian architecture name (eg. amd64, i386, ...), and stores temporary files in ./deb while building each architecture's package.
Mkdeb also considers the mkdeb-perms.txt file in the current working directory to set
some file attributes in the package; otherwise all file attributes will be the same as the originals'.
Each line in this file looks like:
<MODE> <OWNER> <GROUP> <PATH>
Where
MODE is an octal file permission mode, 3 or 4 digits, or "-" to ignore
OWNER is the UID or name of the owner user
GROUP is the GID or name of the owner group
PATH is the file's path itself to which the attributes are applied, relative to the ./deb directory.
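A sample mkdeb-perms.txt might look like this (the paths and names are hypothetical):

```
0755 root     root     usr/bin/mytool
0640 root     adm      etc/mytool.conf
-    www-data www-data var/lib/mytool
```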
| mkmagnetlink - Create a "magnet:" link out of a torrent file |
mkmagnetlink - Create a "magnet:" link out of a torrent file
| movesymlinks - Rename file and correct its symlinks to keep them pointing to it. |
movesymlinks - Rename file and correct its symlinks to keep them pointing to it.
movesymlinks OLDNAME NEWNAME [DIR [DIR [...]]]
Rename file OLDNAME to NEWNAME and search DIR directories for symlinks pointing to OLDNAME and change them to point to NEWNAME.
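The same effect can be sketched with mv(1) and GNU find(1)'s -lname test. A sketch under the assumption that the symlinks store exactly OLDNAME as their target (relative symlinks with path components would need more care); the function name is illustrative:

```shell
# Rename a file, then repoint any symlinks in the given directories
# whose target is exactly the old name.
move_and_fix_symlinks() {
	old=$1; new=$2; shift 2
	mv -- "$old" "$new"
	find "$@" -lname "$old" -exec ln -sfn -- "$new" {} \;
}
```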
| moz_bookmarks - Read Mozilla bookmarks database and display titles and URLs line-by-line |
moz_bookmarks - Read Mozilla bookmarks database and display titles and URLs line-by-line
| msg - Write to given user's open terminals |
msg - Write to given user's open terminals
| multicmd - Run multiple commands in series |
multicmd - Run multiple commands in series
multicmd [OPTIONS] [--] COMMAND-1 ARGS-1 ";" COMMAND-2 ARGS-2 ";" ...
Run COMMAND-1, COMMAND-2, ... COMMAND-n after each other, similarly like shells would do, except not involving any shell.
Set command delimiter to STRING.
Default is a literal ; semicolon.
You probably need to shell-escape it.
If you want -- (double dash) for delimiter, to avoid confusion, put it as:
--delimiter=--.
Exit if a command did not run successfully (ie. non-zero exit status or signaled) and do not run further commands. Similar to bash(1)'s errexit (set -e) mode. multicmd(1)'s exit code will be the failed command's exit code (128+n if terminated by signal n).
Note, that ; (or the non-default delimiter set by --delimiter) is a shell meta-char
in your shell, so you need to escape/quote it, but it's a separate literal argument
when you call multicmd(1) in other layers (eg. execve(2)),
so don't just stick it to the preceding word. Ie:
WRONG: multicmd date\; ls
WRONG: multicmd 'date; ls'
WRONG: multicmd 'date ; ls'
CORRECT: multicmd date \; ls
CORRECT: multicmd date ';' ls
multicmd(1) exits with the exit code of the last command.
| multithrottler - Run given command if not reached the defined rate limit |
multithrottler - Run given command if not reached the defined rate limit
| mysql-fix-orphan-privileges.php - Suggest SQL commands to clean up unused records in system tables which hold permission data |
mysql-fix-orphan-privileges.php - Suggest SQL commands to clean up unused records in system tables which hold permission data
| netrc - manage ~/.netrc file |
netrc - manage ~/.netrc file
netrc list [PROPERTY_NAME [PROPERTY_NAME [...]]]
netrc set [machine MACHINE_NAME | default] PROPERTY_NAME PROPERTY_VALUE [PROPERTY_NAME PROPERTY_VALUE [...]]
Query entries from ~/.netrc file. Set and add properties as well as new entries.
netrc list command lists machine and login names by default in tabular data format.
Supply PROPERTY_NAMEs to display other properties besides machine names.
Machine name is the key, so it's always displayed.
netrc set command sets one or more property of the given MACHINE_NAME machine.
If the property does not exist yet, it's appended after the last property.
If the machine does not exist yet, it's appended after the last machine entry.
As the machine name is the key, if there are multiple entries with the same machine name, yet different login names, refer to one of those by LOGIN_NAME@MACHINE_NAME; a login token has to be present in this case. The simple MACHINE_NAME keeps referring to the first occurrence.
Refer to the default entry by an empty machine name.
Alternative path instead of ~/.netrc.
File is not locked during read/write.
Does not support macdef token.
netrc(5)
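For reference, a minimal ~/.netrc the above commands operate on could look like this (values are hypothetical; see netrc(5) for the format):

```
machine example.com
login alice
password s3cret

default
login anonymous
password anonymous@example.com
```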
| noacute - Strip diacritics from letters on the input stream |
noacute - Strip diacritics (acute, umlaut, ...) from letters on the input stream
| nocomment - remove comment lines from input stream |
nocomment - remove comment lines from input stream
nocomment [grep-arguments]
This command does not overwrite nor write files, just prints them without comments, ie. it removes lines starting with a hash mark or semicolon.
grep(1)
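Roughly what nocomment does, expressed as a grep(1) filter (a sketch; whether leading whitespace before the comment char is tolerated may differ from the real tool):

```shell
# Drop lines whose first non-blank char is # or ;
printf '# one\nkeep\n; two\nother\n' | grep -v '^[[:space:]]*[#;]'
# prints:
# keep
# other
```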
| notashell - A non-interactive shell lacking any shell syntax |
notashell - A non-interactive shell lacking any shell syntax
notashell -c COMMANDLINE
notashell(1) is a program with non-interactive shell interface (ie. sh -c commandLine),
and intentionally does not understand any shell syntax or meta character,
rather takes the first word of COMMANDLINE and executes it as a single command
with all of the rest of COMMANDLINE as its arguments.
This is useful when you have a program which normally calls other commands via a shell (eg. system(3)),
notably with user-controlled parts in it, ie. data from an untrusted source.
This potentially makes the call vulnerable to shell injection.
Like incrond(8) since 2015, which prompted the author to make this defense tool.
These kinds of programs usually try to guard by escaping user input, but it often turns out that the re-implemented shell-escape mechanism is bad or incomplete.
Using notashell(1) enables you to fully evade this type of shell-injection attacks.
Since if you control at least the first word of COMMANDLINE,
you can safely call a program (wrapper script) in which the supplied COMMANDLINE
can be re-examined, accepted, rejected, rewritten, etc.,
and pass the execution forward, now with verified user input.
No need to think about "is it safe to run by shell?" or quotation-mark/escape-backslash forests ever again.
Customize how COMMANDLINE is parsed by /etc/notashell/custom.pl.
If this file exists, notashell(1) executes it inside its main context,
so in custom.pl you can build in custom logic.
There are some perl variables accessible:
$CommandString, @CommandArgs, and $ExecName.
$CommandString is just the COMMANDLINE; it is recommended to only read it in custom.pl,
because changing it does not affect what will be executed.
@CommandArgs is COMMANDLINE split into parts by spaces.
You may change or redefine it to control what will be the arguments of the executed command at the end.
$ExecName is the command's name or path ($CommandArgs[0] by default) what will be executed at the end.
You may change this one too; it does not need to be aligned with $CommandArgs[0].
You are also given some utility functions to use in custom.pl at your disposal: stripQuotes(), setupIORedirects(). stripQuotes() currently just returns the supplied string without surrounding single and double quotes.
setupIORedirects() scans the supplied list for common shell IO redirection syntax, sets up these redirections on the current process, and returns the input list without those elements which were found to be part of a redirection.
Example:
setupIORedirects("date", "-R", ">", "/tmp/date.txt")
# returns: ("date", "-R")
# and have STDOUT redirected to the file.
Recognized representations:
operator: write (>) and append (>>)
file descriptor: optional, defaults are the same as in sh(1)
filename: just right after the operator or in the next argument; only strings matching [a-zA-Z0-9_,./-]+ are considered filenames.
Don't forget to exit from custom.pl with a true value.
Typical custom.pl script:
@CommandArgs = setupIORedirects(@CommandArgs);
@CommandArgs = map {stripQuotes($_)} @CommandArgs;
1;
You probably need a tool to force the negligent program (which is the attack vector to shell-injection)
to run notashell(1) in place of normal shell (sh(1), bash(1)).
See for example noshellinject tool to accomplish this (in ../root-tools directory in notashell's source git repo).
| organizebydate - Rename files based on their date-time |
organizebydate - Rename files based on their date-time
organizebydate [OPTIONS] PATHS [FIND-PARAM]
Organize files by date and time, typically into a directory structure.
PATHS are file and/or directory paths.
FIND-PARAM are find(1) expressions (predicates) to filter which files to work on,
or -H, -L, or -P options - see find(1).
Target path name template using strftime(3) macros.
Default: %Y/%m/%d/
Extra macros accepted:
File's directory path
File's name itself (basename)
Move or copy files. Default is copy.
Move successfully copied files according to TMPL template. This is useful only with --copy. Default is not to move away successfully copied files. This is useful if you want to keep backed-up files on the source too, but in another directory, so they won't be processed again.
Overwrite already existing target files. Default is to silently ignore them. Note, this affects only --copy and --move, not --handler.
Execute PROG to handle files 1-by-1 instead of internal copy or move.
You may do --handler "rsync -Pvit --inplace --mkpath" --template HOSTNAME:PATH to upload via ssh/rsync
(beware, --set-*time and conflicting-filename checking work only on local paths)
or implement any file transfer method here.
Arguments passed (after those which are given in PROG) are
first, the source file path, and second, the target file path.
Conflicting target path is still checked and resolver is run before PROG
if --conflict-resolver-cmd or --conflict-resolver-script is specified;
if not, PROG should implement conflicting file name resolution logic.
Run a custom conflict resolver logic on already existing target files.
Unless conflict resolver is given, organizebydate(1) ignores conflicts silently
or overwrites target unconditionally if --overwrite is specified.
The conflict resolver can either be a single word command
or a command and arguments - when CMD contains IFS chars (like space)
(in this case you can not pass arguments which themselves contain spaces to the command
because each space-delimited word goes to a separate argument),
or a whole bash(1) script if --conflict-resolver-script is given.
SCRIPT is run as a separate command too, not in organizebydate(1)'s own shell context.
COMMAND ARGUMENTS
Arguments passed to conflict resolver command/script (after the arguments included in CMD, if any) are the source file's path first and the target path secondly:
EXAMPLES
--conflict-resolver-cmd some-command
# runs this: some-command SOURCE TARGET
--conflict-resolver-cmd "some-command --option x"
# runs this: some-command --option x SOURCE TARGET
--conflict-resolver-cmd "some-command --option \"a and b\""
# WRONG: "a and b" goes into 3 separate arguments, not one
--conflict-resolver-script "some-command --option \"a and b\" \"$@\""
# RIGHT: runs within a bash script,
# and in the end this is run: some-command --option "a and b" SOURCE TARGET
ENVIRONMENT
Environment variables passed to conflict resolver programs:
ORGANIZEBYDATE_MODE: copy or move
SOURCE_FILE_MTIME, TARGET_FILE_MTIME, SOURCE_FILE_CTIME, TARGET_FILE_CTIME, SOURCE_FILE_ATIME, TARGET_FILE_ATIME, SOURCE_FILE_SIZE, TARGET_FILE_SIZE: some attributes of the source and target files to help the resolver. File time attributes are unix timestamps, size is in bytes.
EXIT STATUS
If the conflict resolver program returns a non-zero exit status,
it is considered a failure (and recorded if --faillog is given).
On zero exit status, the conflict resolution is taken from the last line of the command's output.
Don't write more than 1 newline char (\n) at the very end,
otherwise the last line would contain only the empty string.
SIGNALS
Conflict resolution signals,
ie. what the resolver program can signal back to organizebydate(1) by its last STDOUT line.
Don't copy (move) source file, and don't do any processing (eg. don't move successfully copied source file).
Copy (move) source to target, optionally with a new target path. This always (attempts to) overwrite the target even if --overwrite is not given, whether or not NEW-TARGET is given. So it's the resolver program's responsibility to prevent unwanted overwrites.
Indicate that the source file is already there on the target path,
and no need to copy/move.
organizebydate(1) may still set the target's mtime (atime)
when --set-mtime (--set-atime) option is given;
and still moves the source file when --move-success-template is given.
Emit done signal if the same file is already present on the target location,
or the conflict resolver put it there,
with equal binary content to the source file.
Similar to done but explicitly fails the current item's copy/move. This is recorded in the fail log if the --faillog option is given.
You may do extra steps in the conflict resolver's logic: eg. rename the old target or move it to another directory and signal proceed at the end, or eg. remove the source file and signal skip - this is useful in move mode.
If you want to ask the user interactively, don't read from stdin(3),
rather re-open the tty(4).
stdout(3) is buffered and then echoed except the last line.
stderr(3) is let through as-is.
Determining timestamps is based on the file's change-, modify-, or access-time. Default is mtime.
Files are raw Emails.
Determining timestamps is based on the Date header.
Files are JPEG images.
Determining timestamps is based on EXIF tags.
Fall back to file mtime (ctime, atime) if datetime info is not found in embedded metadata (RFC-822, Exif, ...)
Set the copied (moved) files' mtime (atime) to the datetime used in the template.
Save failed paths to FILE.
Verbose mode
Dry run. Do not copy (move) files.
Output what would be done in OPERATION TAB SOURCE TAB TARGET format.
Where OPERATION is one of:
for --copy, --move, and --handler operation modes respectively and the target does not exist.
when the target already exists (and neither --overwrite nor --conflict-resolver-* option is given).
when --overwrite is allowed.
when a --conflict-resolver-* option is given.
Output documentation in plain text, POD, or troff (for man(1)) formats.
Minimum directory level to traverse. Equivalent to find(1)'s -mindepth option.
Maximum directory level to traverse. Equivalent to find(1)'s -maxdepth option.
Exit 0 if all files processed successfully.
Exit 1 on parameter error.
Exit 2 if at least 1 file is failed.
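The core of the default --copy mode with the %Y/%m/%d/ template can be sketched in plain shell. A sketch only, assuming GNU date(1); there is no conflict handling, no --move, and no metadata-based (mail/EXIF) dates here:

```shell
# Copy each file into a YYYY/MM/DD directory derived from its mtime.
organize_sketch() {
	for f in "$@"; do
		dir=$(date -r "$f" +%Y/%m/%d)   # directory from the file's mtime
		mkdir -p "$dir"
		cp -- "$f" "$dir/"
	done
}
```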
| organizebydate-conflict-resolve-filename-version - Filename conflict resolver script for organizebydate |
organizebydate-conflict-resolve-filename-version - Filename conflict resolver script for organizebydate(1)
organizebydate-conflict-resolve-filename-version [OPTIONS] SOURCE TARGET
This is a helper program used by organizebydate(1) as a filename conflict resolver command.
It signals that the SOURCE is already equivalent to the TARGET if their SHA-256 checksums match.
If not, then sets a new target file name for organizebydate(1).
The new target includes a version number in between the file's basename and extension,
taking into account any already existing versioned file names, so no files will be overwritten
(unless there is a race condition with other processes writing to the target directory).
Set STR as the string separating the filename (basename) from the version number.
Default is a dot (.).
Set STR as the string separating the version number from the filename suffix (extension). Default is empty, so the version number is followed by the dot directly which is the part of the suffix, if there is an extension.
organizebydate-conflict-resolve-filename-version -s '(v' -t ')' ...
organizebydate(1)
| palemoon-current-urls - Display Palemoon web browser's currently opened URLs per window and per tab |
palemoon-current-urls - Display Palemoon web browser's currently opened URLs per window and per tab
Assuming the "default" browser profile is in the *.default folder in Pale Moon's profiles folder.
Assuming sessionstore.js is up-to-date.
| pararun - run commands in parallel |
pararun - run commands in parallel
pararun [OPTIONS] [COMMON_ARGS] --- PARTICULAR_ARGS [+ PARTICULAR_ARGS [+ ...]] [--- COMMON_ARGS]
Start several processes simultaneously. Starting several different commands and starting the same command with different arguments are not distinguished: COMMON_ARGS may be empty, in which case each PARTICULAR_ARGS is a command followed by its arguments. When COMMON_ARGS consists of at least 1 argument, it is the command to be started, with the rest of the COMMON_ARGS arguments followed by each PARTICULAR_ARGS group, per child process.
pararun --- ./server + ./client
Runs ./server and ./client programs in parallel.
pararun ls --- /usr + /etc + /var
Runs ls /usr, ls /etc, and ls /var.
pararun --- ./server + ./client --- --port=12345
Runs ./server and ./client programs in parallel with the same command line argument.
Let the string SEP close the common arguments (including the command if it is common as well)
instead of the default triple dash (---).
The string SEP separates the particular arguments
instead of the default plus sign (+).
Read additional PARTICULAR_ARGS from STDIN. Each line is taken as 1 argument unless -d is given.
When reading PARTICULAR_ARGS from STDIN, split up lines into arguments by PATTERN regex pattern.
Useful delimiter is \t TAB which you may need to quote in your shell, like '\t' in bash(1).
Exit with the lowest status code of the child processes. I.e. exit with zero status code if at least one of the parallel commands succeeded. It still waits for all of them to complete.
Prefix each output line with the given command's first particular argument.
Colorize each particular command's prefix. Implies -p.
Separate prefix from the rest of the line with this string. Default is one space.
Show textual summary at the end about how each command exited. Exit code, exit signal.
Don't use ANSI bold colors.
Exit with the highest exit status of the child processes.
If a command terminates due to a signal, and prefixing and/or prefix coloring is turned on,
then the signaled state is not preserved because pararun(1) pipes commands through
stdfilt(1) to get them prefixed and/or colored.
parallel(1)
polysh https://github.com/innogames/polysh/
| parsel - Select parts of an HTML document based on CSS selectors |
parsel - Select parts of an HTML document based on CSS selectors
parsel <SELECTOR> [<SELECTOR> [...]] < document.html
This command takes an HTML document on STDIN and some CSS selectors as arguments. See the 'parsel' and 'cssselect' python modules to see which selectors and pseudo selectors are supported.
Each SELECTOR selects a part in the DOM, but unlike CSS, does not
narrow the DOM tree down for subsequent selectors. So a sequence of
div p arguments (2 arguments) selects all <DIV> and then all <P> in
the document; in other words it is NOT equivalent to the div p css
selector which selects only those <P> which are under any <DIV>.
To combine selectors, see the / (slash) operator below.
Each SELECTOR also outputs what was matched, in the following format:
First output an integer how many distinct HTML parts were selected, then
output the selected parts themselves, each on its own line.
CR, LF, and Backslash chars are escaped by one Backslash char. It's
useful for programmatic consumption, because you only have to first read
a line which tells how many subsequent lines to read: each one is one
selected DOM sub-tree on its own (or text, see ::text and [[ATTRIB]] below).
Then just unescape Backslash-R, Backslash-N, and double Backslashes
(for example with GNU sed -e 's/\\\\/\\/g; s/\\r/\r/g; s/\\n/\n/g')
to get the HTML content.
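The count-prefixed format described above can be consumed with a plain read loop. The sample input below stands in for actual parsel(1) output (a sketch, not taken from a real run):

```shell
# Read one count line, then exactly that many part lines
# (sample data hand-written to mimic the described format).
printf '2\n<p>a</p>\n<p>b</p>\n' | {
    read -r count                     # first line: number of selected parts
    i=0
    while [ "$i" -lt "$count" ]; do
        read -r part
        echo "part $((i + 1)): $part"
        i=$((i + 1))
    done
}
```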
Additionally it takes these special arguments as well:
Prefix your selector with an @ at sign to suppress output.
Mnemonic: Command line echo suppression in DOS batch and in Makefile.
Remove HTML tags and leave only the text content before output.
text{} syntax is borrowed from pup(1).
::text form is there for you if curly brackets are magical in your shell and you don't want to type escaping.
Note, ::text is not a standard CSS pseudo selector at the moment.
Output only the value of the uppermost selected element's ATTRIB attribute.
attr{} syntax is borrowed from pup(1).
Mnemonic for the [[ATTRIB]] form: in CSS you filter by tag attribute
with [attr] square brackets, but as it's a valid selector,
parsel(1) takes double square brackets to actually output the attribute.
A stand-alone / takes the current selection as a base for the rest of the selectors.
Therefore the subsequent SELECTORs work on the previously selected elements,
not on the document root.
Mnemonic: one directory level deeper.
So this arg sequence: .content / p div selects only those P and DIV elements
which are inside a "content" class.
This is useful because with CSS alone you can not group P and DIV together here.
In other words neither .content p, div nor .content > p, div provides
the same result.
A series of selectors delimited by / forward slashes in a single argument
is to delve into the DOM tree, but show only those elements which the last selector yields.
In contrast to the multi-argument variant SEL1 / SEL2 / SEL3, which shows everything
SEL1, SEL2, SEL3, etc produces.
Similar to this 5-argument sequence: @SEL1 / @SEL2 / SEL3, except that SEL1/SEL2/SEL3
rewinds the base selection to the one before SEL1, while the former one moves the
base selection to SEL3 at the end.
You may still silence its output by prepending @, like: @SEL1/SEL2/SEL3, so
not even SEL3 will be shown.
This is useful when you want only its attributes or inner text (see text{} and attr{}).
Since slashes may occur in valid CSS selectors,
please double those / slashes which are not meant to separate selectors,
but are part of a selector - usually a URL in a tag attribute.
E.g. instead of a[href="http://example.net/page"], input a[href="http:////example.net//page"].
A stand-alone .. rewinds the base DOM selection to the
previous base selection before the last /.
Mnemonic: parent directory.
Note, it does not select the parent element in the DOM tree,
but the stuff previously selected in this parsel(1) run.
To select the parent element(s) use parent{}.
Select the currently selected elements' parent elements on the DOM tree.
Note, :parent is not a standard CSS selector at the moment.
Use the parent{} form to disambiguate it from real (standardized) CSS selectors in your code.
Rewind base selection back to the DOM's root.
Note, :root is also a valid CSS pseudo selector, but in a subtree (entered into by /)
it would yield only that subtree, not the original DOM, so parsel(1) goes back to it at this point.
You likely need @ too, to suppress outputting the whole document here.
Show only the first element found. The output is not escaped in this case.
$ parsel input[type=text] < page.html
2
<input type="text" name="domain" />
<input type="text" name="username" />

$ parsel input[type=text] [[name]] < page.html
2
<input type="text" name="domain" />
<input type="text" name="username" />
2
domain
username

$ parsel @input[type=text] [[name]] < page.html
2
domain
username

$ parsel @form ::text < page.html
1
Enter your logon details:\ \ Domain:\ \ Username:\ \ Password:\ \ Click here to login:\ \
| partial - Show an earlier started long-running command's partial output |
partial - Show an earlier started long-running command's partial output
partial [--restart|--forget|--wait|--pid] <COMMAND> [<ARGUMENTS>]
On first invocation partial(1) starts COMMAND in the background.
On subsequent invocations, it prints to stdout the output the command
has generated so far, including the parts which were shown before,
and keeps it running in the background.
Hence the name 'partial', because it shows a command's partial output.
When the command has finished, partial(1) prints the whole output
and exits with COMMAND's exit code.
Terminate (SIGTERM) previous instance of the same command and clean up status directory, even if it's running.
Terminate command if running (like with --forget) and start it again.
On first run, wait for the complete output.
display PID
less verbose
command started
partial output shown
the called command returned with status code nnn
If COMMAND does not exit normally, but gets terminated by a signal,
the exit code is indistinguishable from a normal exit's status code,
because bash(1) uses the value of 128+N as the exit status
when a command terminates on a fatal signal N.
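The 128+N convention can be observed directly in the shell:

```shell
# A child killed by SIGTERM (signal 15) is reported as 128 + 15 = 143
status=0
sh -c 'kill -TERM $$' || status=$?
echo "exit status: $status"
```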
| pathmod - Run command with a modified PATH |
pathmod - Run command with a modified PATH
pathmod [OPTIONS] [--] COMMAND [ARGS]
Look up only COMMAND according to the modified PATH. Commands called by COMMAND and its children still inherit the PATH environment variable from pathmod(1)'s caller. Unless of course COMMAND changes it on its own.
If neither -d nor -s is given, the default mode is -d.
Modify PATH environment for COMMAND,
so COMMAND is still looked up according to the same PATH as pathmod(1),
but its children are going to be looked up according to the modified path.
Simultaneous --direct and --subsequent is supported.
In this case COMMAND is looked up according to the modified PATH
and the PATH environment is changed too.
This is nearly the same as env PATH=MOD_PATH COMMAND ARGS.
Remove DIR directory from the PATH. Note, items in PATH are normalized first. Normalization rules:
Insert DIR before each item in the PATH which matches to PATTERN regexp.
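As noted above, the --subsequent mode is nearly the same as env PATH=MOD_PATH COMMAND ARGS, which can be demonstrated with env(1) alone (/opt/tools is a hypothetical directory here):

```shell
# The child process (and its descendants) see the modified PATH;
# the caller's own PATH is untouched.
env PATH="/opt/tools:$PATH" sh -c 'printf "%s\n" "$PATH"' |
grep '^/opt/tools:' > /dev/null && echo ok
```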
| paths2indent - Transform list of filesystem paths to an indented list of the leaf elements |
paths2indent - Transform list of filesystem paths to an indented list of the leaf elements
paths2indent [OPTIONS]
Input: list of file paths line-by-line
Output: leaf file names indented by as many tabs as the file's depth in the tree
Paths can not have empty elements (i.e. consecutive slashes).
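The transform can be emulated with awk(1) under the assumptions above (leaf name only, one tab per level of depth); the real tool may differ in details:

```shell
# Emulation: split each path on "/", print NF-1 tabs, then the leaf name.
printf '%s\n' dir/sub/file1 dir/sub/file2 top.txt |
awk -F/ '{ for (i = 1; i < NF; i++) printf "\t"; print $NF }'
```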
| pcut - Cut given fields of text input separated by the given Perl regex |
pcut - Cut given fields of text input separated by the given Perl regex
pcut [OPTIONS] [FILE [FILE [...]]]
Standard cut(1) breaks up input lines by a given single char.
pcut(1) does this by the given perl(1)-compatible regular expression.
cut(1) outputs fields always in ascending order, without duplication.
pcut(1) outputs fields in the requested order, even multiple times if asked so by the -f option.
Counted from 1.
See cut(1) for syntax.
Default is whitespace (\s+).
See the same option in cut(1).
Define the output field delimiter. Default is not to use a constant output delimiter, but to preserve the separator substrings as they matched to the pattern of -d option (see --prefer-preceding-delimiter and --prefer-succeeding-delimiter options).
Contrary to cut(1), pcut(1) does not always use a constant delimiter char,
but a regexp pattern which may match to different substrings between fields in the input lines.
Each output field (except the last one) is followed by that substring which was matched to the delimiter pattern just right after that field in the input.
With --prefer-preceding-delimiter, each output field (except the first one) is similarly preceded by that substring which was matched to the delimiter pattern just before that field in the input.
Write STRING before field 1 if it is not the first field on the output (in --prefer-preceding-delimiter mode).
Write STRING after the last field if it is written not as the last field on the output.
Terminate output records (lines) by NUL char instead of LineFeed.
cut(1),
hck,
tuc,
rextr(1),
arr(1) arr,
choose
| perl-repl - Read-Evaluate-Print-Loop wrapper for perl |
perl-repl - Read-Evaluate-Print-Loop wrapper for perl(1)
| pfx2pem - Convert PFX certificate file to PEM format |
pfx2pem - Convert PFX (PKCS#12) certificate file to PEM format
| pipecmd - Run a command and pipe its output to an other one |
pipecmd - Run a command and pipe its output to an other one
pipecmd CMD_1 [ARGS] -- CMD_2 [ARGS]
Equivalent to this shell command:
CMD_1 | CMD_2
The first command's (CMD_1) arguments can not contain a double-dash (--),
because it's the command separator for pipecmd(1).
However, since only a total of 2 commands are supported,
arguments for CMD_2 may contain double-dash(es).
You can chain pipecmd(1) commands together to get a pipeline equivalent to
CMD_1 | CMD_2 | CMD_3, like:
pipecmd CMD_1 -- pipecmd CMD_2 -- CMD_3
It's sometimes more convenient not to involve the shell's command-line parser.
pipexec(1)
| pipekill - Send signal to a process on the other end of the given pipe filedescriptor |
pipekill - Send signal to a process on the other end of the given pipe filedescriptor
| PMbwmon - Poor man's bandwidth monitor |
PMbwmon - Poor man's bandwidth monitor
PMbwmon [kMG][bit | Byte] [INTERFACES...]
| PMdirindex - Poor man's directory index generator, output HTML |
PMdirindex - Poor man's directory index generator, output HTML
| PMdirindex - Poor man's hex diff viewer |
PMdirindex - Poor man's hex diff viewer
| PMnslist - Poor man's namespace list |
PMnslist - Poor man's namespace list
| PMpwgen - Poor man's password generator |
PMpwgen - Poor man's password generator
| PMrecdiff - Poor man's directory tree difference viewer, comparing file names and sizes recursively |
PMrecdiff - Poor man's directory tree difference viewer, comparing file names and sizes recursively
| PMwrite - poor man's write - BSD write program alternative |
PMwrite - poor man's write - BSD write program alternative
PMwrite USER
Write a message to the terminals of USER, who is currently logged in on the local host
and has messaging enabled (e.g. by mesg y).
PMwrite writes the message to all the terminals on which USER enabled messaging.
write(1)
| pngmetatext - Put metadata text into PNG file |
pngmetatext - Put metadata text into PNG file
| prefixlines - Prefix lines from STDIN |
prefixlines - Prefix lines from STDIN
prefixlines [PREFIX]
| pvalve - Control how much a given command should run by an other command's exit code |
pvalve - Control how much a given command should run by an other command's exit code
pvalve [<CONTROL COMMAND>] -- [<LONG RUNNING COMMAND>]
Controls when LONG RUNNING COMMAND should run, by pausing and unpausing it according to the CONTROL COMMAND's exit status.
Pause LONG RUNNING COMMAND process group with STOP signal(7) if CONTROL COMMAND exits non-zero.
Unpause LONG RUNNING COMMAND process group with CONT signal(7) if CONTROL COMMAND exits zero.
Pvalve takes the last line from CONTROL COMMAND's stdout, and if it looks like a time interval (i.e. a positive number with optional fraction, followed by an optional "s", "m", or "h" suffix) then the next check of CONTROL COMMAND will start after that much time. Otherwise it takes the PVALVE_INTERVAL environment variable, or starts the next check immediately if it's not set.
Pvalve won't bombard LONG RUNNING COMMAND with more consecutive STOP or CONT signals.
It's useful e.g. for basic load control. Start a CPU-intensive program as LONG RUNNING COMMAND and check hardware temperature in CONTROL COMMAND. Make it exit 0 when the temperature is below a certain value, and exit 1 if above another, higher value.
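The interval format described above (positive number, optional fraction, optional s/m/h suffix) can be written as an extended regular expression, checked here with grep(1):

```shell
# Lines matching the interval format pass through; "oops" is filtered out.
printf '%s\n' 1.5m 90 2h oops | grep -E '^[0-9]+(\.[0-9]+)?[smh]?$'
```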
Default interval between two CONTROL COMMAND runs.
PVALVE_STATUS describes whether LONG RUNNING COMMAND should be in running or in paused state. Possible values: RUN, STOP. This environment variable is available to CONTROL COMMAND.
PID of LONG RUNNING COMMAND.
Further process groups which are created by LONG RUNNING COMMAND will not be affected.
| pyzor-files - Run a pyzor command on the given files |
pyzor-files - Run a pyzor(1) command on the given files
| qrwifi - Generate a string, used in WiFi-setup QR codes, containing a hotspot name and password |
qrwifi - Generate a string, used in WiFi-setup QR codes, containing a hotspot name and password
| randstr - Generate random string from a given set of characters and with a given length. |
randstr - Generate random string from a given set of characters and with a given length.
randstr <LENGTH> [<CHARS>]
CHARS is a character set expression, see tr(1).
Default CHARS is [a-zA-Z0-9_]
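A conceptually similar result can be produced with standard tools (an emulation, not randstr itself):

```shell
# 12 random characters from the default set, drawn from /dev/urandom.
# 512 raw bytes comfortably yield at least 12 matching characters.
head -c 512 /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9_' | head -c 12
echo
```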
| rcmod - Run a given command and modify its Return Code according to the rules given by the user |
rcmod - Run a given command and modify its Return Code according to the rules given by the user
rcmod [<FROM>=<TO> [<FROM>=<TO> [...]]] <COMMAND> [<ARGS>]
If COMMAND returned with code FROM then rcmod(1) returns with TO.
FROM may be a comma-delimited list.
Keyword any means any return code not specified in FROM parameters.
Keyword same causes the listed exit codes to be preserved.
rcmod any=0 1=13 2,3=same user-command
It runs user-command, then exits with status 13 if user-command exited with 1, 2 if 2, 3 if 3, and 0 for any other return value.
If COMMAND was terminated by a signal, rcmod(1) exits with 128 + signal number
like bash(1) does.
reportcmdstatus(1), sigdispatch(1)
| redirexec - Execute a command with some file descriptors redirected. |
redirexec - Execute a command with some file descriptors redirected.
redirexec [FILENO:MODE:file:PATH] [--] COMMAND ARGS
redirexec [FILENO:MODE:fd:FILENO] [--] COMMAND ARGS
redirexec [FILENO:-] [--] COMMAND ARGS
Setup redirections before executing COMMAND. You can set up the same types of file and file descriptor redirections as in the shell.
FILENO is a file descriptor integer or name: "stdin", "stdout", and "stderr" for the standard file descriptors.
MODE is one of:
read
create/clobber
read and write
append
+-----------------+-------------------------------+
| shell syntax    | redirexec(1) equivalents      |
+=================+===============================+
| > output.txt    | stdout:c:file:output.txt      |
|                 | 1:c:file:output.txt           |
|                 | --stdout-file=output.txt      |
+-----------------+-------------------------------+
| 2>&1            | stderr:c:fd:stdout            |
|                 | 2:c:fd:1                      |
|                 | --stderr-fd=1                 |
|                 | --stderr-fd=stdout            |
+-----------------+-------------------------------+
| < /dev/null     | 0:r:file:/dev/null            |
|                 | 0:-                           |
|                 | --stdin-close                 |
+-----------------+-------------------------------+
| 10< pwd         | 10:r:file:pwd                 |
+-----------------+-------------------------------+
| >/dev/null 2>&1 | 1:- 2:-                       |
|                 | --stdout-close --stderr-close |
+-----------------+-------------------------------+
redirfd by execlineb(1)
| regargwrap - Replace non-regular file arguments to regular ones |
regargwrap - Replace non-regular file arguments to regular ones
regargwrap [OPTIONS] COMMAND [ARGS]
Saves the content of non-regular files found in ARGS into temporary files, then runs COMMAND ARGS with the non-regular file arguments replaced with the regular (yet temporary) ones.
This is useful if COMMAND does not support reading from pipes or other non-seekable files.
Replace only pipe/socket/block/char special files. If none of these options is specified, all of them are replaced by default.
regargwrap git diff --no-index <(ls -1 dir_a) <(ls -1 dir_b)
Impractical with huge files, because they possibly do not fit on the temporary files' filesystem.
regargwrap(1) is a generalization of seekstdin(1).
| renamemanual - Interactive file rename tool |
renamemanual - Interactive file rename tool
renamemanual FILE [FILE [...]]
Prompts the user for new names for the files given in arguments. Won't overwrite existing files, rather keeps asking for a new name until the file can be renamed without overwriting an existing one. Skip a file by entering an empty name.
mv(1), rename(1), file-rename(1p) (prename(1)), rename.ul (rename(1)), rename.td(1)
| rename.td - rename multiple files by a Perl expression |
rename.td - rename multiple files by a Perl expression
rename.td [ -v[v] ] [ -n ] [ -f ] perlexpr [ files ]
cat files.list | rename.td [ -v[v] ] [ -n ] [ -f ] perlexpr
rename.td renames the files supplied according to the rule specified as the first argument.
The perlexpr argument is a Perl expression which is expected to modify the $_
string in Perl for at least some of the filenames specified.
If a given filename is not modified by the expression, it will not be renamed.
If no filenames are given on the command line, filenames will be read via standard input.
For example, to rename all files matching *.bak to strip the extension,
you might say
rename.td 's/\.bak$//' *.bak
To translate uppercase names to lower, you'd use
rename.td 'y/A-Z/a-z/' *
Verbose: print names of files successfully renamed.
Verbose extra: print names of files of which name is not changed.
No Action: show what files would have been renamed, or skipped.
Force: overwrite existing files.
Create missing directories.
Output Tab-delimited fields line-by-line. First line is the headers. Each subsequent line describes a file in this way:
Zero when all rename succeeded, otherwise the highest error number of all the failed renames, if any.
See rename(2) for these error numbers.
No environment variables are used.
Larry Wall (author of the original)
Robin Barker
mv(1), perl(1), rename(2), file-rename(1p) (prename(1)), rename.ul (rename(1)), renamemanual(1)
If you give an invalid Perl expression you'll get a syntax error.
| repeat - Run the given command repeatedly |
repeat - Run the given command repeatedly
repeat COMMAND [ARGS]
How many times to repeat the given command. Default is -1 which means infinite.
How many times the command has been run.
It is not a variable repeat(1) itself takes as input,
but passes to COMMAND for its information.
Stop repeat(1) if COMMAND exits with this return code.
By default the return code is not checked.
Sleep interval between invocations.
In seconds, by default.
See sleep(1) for valid parameters, eg. "10m" for 10 minutes.
Default is no delay.
The exceptional value for REPEAT_DELAY is enter,
for which repeat(1) waits until the user presses Enter on the terminal, to repeat the given command.
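The counter pass-through described for REPEAT_COUNT above can be emulated with a plain loop (a sketch of the mechanism, not repeat(1) itself):

```shell
# Each invocation receives the current count in its environment,
# just as described for REPEAT_COUNT.
for i in 1 2 3; do
    REPEAT_COUNT=$i sh -c 'echo "run #$REPEAT_COUNT"'
done
```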
| replcmd - Wrap any command in a REPL interface |
replcmd - Wrap any command in a REPL interface
replcmd COMMAND [ARGS]
Run COMMAND repeatedly with words read from STDIN appended to its argument list after ARGS.
You may add prompt, history, and other CLI-goodies on top of replcmd(1) by eg. rlwrap(1).
Run COMMAND ARGS WORDS.
WORDS get split on $IFS.
Prefix the line with a # hash mark to
set fixed parameters for COMMAND.
These will be inserted between ARGS and the WORDS read from STDIN.
rlwrap --remember --command-name dict --substitute-prompt "dict> " replcmd dict
| reportcmdstatus - Textually show how the given command finished |
reportcmdstatus - Textually show how the given command finished (exit status/signal)
reportcmdstatus [OPTIONS] [--] COMMAND [ARGS]
Take COMMAND's status and exit with it. Default is to exit 0.
If COMMAND did not exit normally, but was terminated by a signal, exit with 128 + SIGNAL, like most shells do.
Report what is being started, i.e. COMMAND ARGS, to STDERR.
After COMMAND has ended, wait for the user to press Enter before quitting.
| rotate-counters - Increment numbers in file names |
rotate-counters - Increment numbers in file names
| rsacrypt - Encrypt/decrypt files with RSA |
rsacrypt - Encrypt/decrypt files with RSA
| rsysrq - Send SysRQ commands remotely over the network |
rsysrq - Send SysRQ commands remotely over the network
| saveout - Save a program's output to dynamically named files |
saveout - Save a program's output to dynamically named files
saveout OPTIONS [--] COMMAND [ARGS]
Run COMMAND and redirect its STDOUT, and/or STDERR, and/or other file descriptors, line-by-line, to dynamically named files. Always appends to output files. Useful e.g. to save logs of a long running command (service) in separate files per day.
You can set flush rules (see below) for each output (STDOUT, STDERR, specific FD...). A particular file is always flushed when the filename of the given output changes (as the old file is closed). Output is written per complete lines, so don't expect long data not delimited by linefeed to appear chunk-by-chunk, even with bytes- or time-based flushing. Only linefeed is taken as line terminator, not even a sole carriage-return.
Equivalent to --fd-1 TEMPLATE.
Equivalent to --fd-2 TEMPLATE.
Write COMMAND's output on the Nth file descriptor (STDOUT, STDERR, ...) to a file of which path and name is constructed according to TEMPLATE. TEMPLATE may contain the following macros:
Support all strftime(3) macros, eg. %c, %s, %F, %T, ...
The PID of the process running COMMAND.
The line's substring at the given POS position and LEN length, or to the end of line (excluding the terminating linefeed char) if :LEN is not given. Both POS and LEN can be negative; see perldoc -f substr for details.
Beware of potential unwanted path traversal!
Make sure that the resulting file path does not go outside the directory you intended to write to,
e.g. by output, controlled by an untrusted party, containing something like ../../../etc/.
Not implemented.
Not implemented.
Flush output files after LINES number of lines. Flush per each line if LINES is not given (default LINES is 1). By default, flushing is left to the underlying IO layer, which usually buffers 4-8k blocks. If you want to set different flushing rules on different outputs, other than buffered-IO or other than the default given by the -L option, override by -Lo, -Le, and/or -Ln. See below.
Equivalent to -Ln 1=LINES.
Equivalent to -Ln 2=LINES.
Set file descriptor FD's output to be flushed by each LINES lines. Default LINES is 1.
Similar to the --flush-lines option group, except flush after at least BYTES bytes are written to the selected outputs. Maybe does not make sense to set more than the buffered-IO block size.
Similar to the --flush-lines and --flush-bytes option groups, except flush after at least SEC seconds passed since last write to the selected outputs.
If it can not write to the output file, or can not even open it, it prints the failed line of text to its own STDERR, then depending on the ACTION:
Terminate COMMAND by SIGTERM, then continue running, but exit soon as well, since COMMAND probably terminates upon the signal. This is the default.
Send SIGPIPE to COMMAND, then continue running. COMMAND may recover from the error condition itself.
Just ignore.
See the above comment on %[substr] template macro.
savelog(8), logto(1), stdsyslog(1), loggerexec(1), redirexec(1), logger(1), stdfilt(1)
| screenconsole - Interactive CLI to run GNU/screen commands against current or specified screen session |
screenconsole - Interactive CLI to run GNU/screen commands against current or specified screen session
| screen-notify - Send status-line message to the current GNU/Screen instance |
screen-notify - Send status-line message to the current GNU/Screen instance
| screenreattach - Reattach to GNU/screen and import environment variables |
screenreattach - Reattach to GNU/screen and import environment variables
| screens - List all GNU/Screen sessions accessible by the user and all of their inner windows as well |
screens - List all GNU/Screen sessions accessible by the user and all of their inner windows as well
don't show individual windows in each GNU/Screen session
| seekstdin - Makes STDIN seekable for a given command |
seekstdin - Makes STDIN seekable for a given command
seekstdin COMMAND [ARGS]
Saves the content of STDIN into a temporary file,
then runs COMMAND.
This is useful if COMMAND does not support reading from pipe.
One of the reasons why reading from pipe is usually not supported
is that it is not seekable.
seekstdin(1) makes COMMAND's STDIN seekable by saving its own input
to a file which is unlinked right away,
so it won't occupy disk space once COMMAND ends.
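The unlink-after-open technique described above can be demonstrated in plain shell: a deleted file remains readable through an already-open file descriptor.

```shell
# Open the temp file on fd 3, then delete it; the content stays readable
# until the descriptor is closed, and no directory entry remains.
tmp=$(mktemp)
printf 'buffered input\n' > "$tmp"
exec 3< "$tmp"
rm -f "$tmp"         # file is unlinked, but fd 3 still works
cat <&3              # prints "buffered input"
exec 3<&-
```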
Impractical with huge files, because they possibly do not fit on the temporary files' filesystem.
ordinargs(1)
| set-sys-path - Set PATH according to /etc/environment and run the given command |
set-sys-path - Set PATH according to /etc/environment and run the given command
| set-xcursor-lock-and-run - Set X11 cursor to a padlock and run a command |
set-xcursor-lock-and-run - Set X11 cursor to a padlock and run a command
| spoolprocess - process files in a spool directory |
spoolprocess - process files in a spool directory
spoolprocess [OPTIONS] -d DIRECTORY
Take all files in DIRECTORY specified by -d option,
group them by their basename, i.e. the name without an optional "dot + number" suffix
(.1, .2, ..., also known as version number),
and call the /etc/spoolprocess/BASENAME program for each group to handle its files.
The handler program (usually a script) gets the spool file's path as an argument.
If the program succeeds, spoolprocess(1) deletes the files for which the handler script was successful,
or all files in the group if --latest was asked for and it was successful.
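The grouping by basename can be sketched with sed(1) (an illustration of the rule, not the actual implementation):

```shell
# Strip an optional ".N" version suffix and collapse to unique basenames.
printf '%s\n' job.1 job.2 report report.10 |
sed -E 's/\.[0-9]+$//' | sort -u
```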
This option is repeatable.
Process only those files with BASENAME. This option is repeatable.
Process only the latest (highest version number) file in each group. The default is to process all files in ascending order of version numbers.
Look up programs in DIR instead of /etc/spoolprocess. This option is repeatable.
Prepend COMMAND to handler scripts found in --scriptdir. COMMAND is tokenized by whitespace. So -w "bash -x" makes a script invoked like this, for example:
bash -x /etc/spoolprocess/something spooldir/something.1
spoolprocess(1) does not do locking.
Run it under flock(1), singleinstance(1), cronrun(1), or similar
if you deem it necessary.
DIRECTORY is scanned non-recursively.
uniproc(1)
| ssh-agent-finder - Find a working ssh agent on the system so you get the same in each of your logon sessions |
ssh-agent-finder - Find a working ssh agent on the system so you get the same in each of your logon sessions
. ssh-agent-finder -Iva
| stdfilt - Run a command but filter its STDOUT and STDERR |
stdfilt - Run a command but filter its STDOUT and STDERR
stdfilt [OPTIONS] [--] COMMAND [ARGS]
Run COMMAND and match each of its output lines (both stdout and stderr separately) against the filter rules given by command arguments (-f) or in files (-F). All filter expressions are evaluated and the last matching rule wins. So it's a good idea to add wider matching patterns first, then the more specific ones later.
Empty lines and comments are ignored, as well as leading whitespace.
A comment is everything after a hashmark (#) preceded by whitespace, or the whole line if it starts with a hashmark.
Each line is a filter rule, of which syntax is:
[match_tags] [pattern [offset]] [replacer] [set_tags]
Tag names, each of them in square-bracket (eg. [blue] [red]).
The rest of the rule will be evaluated only if the tags are on the current stream.
Tags can be added, removed by the set_tags element.
If a rule consists only of match_tags tags, it opens a section in the filter file (and in -f arguments too). In this section, all rules are interpreted as if they had the section's match_tags written in them. For example this filter-set selects all ranges in the output (and stderr) stream bounded by those regexp patterns inclusively, and blocks everything in them except "errors":
/begin checking procedure/ [checking]
/checking finished/+1 [/checking]
[checking]
!//
/error/i
[/checking]
The 2 streams, stdout and stderr, are tagged by default with "STDOUT" and "STDERR" respectively. So this filters out everything in stdout except "errors":
[STDOUT]
!//
/error/i
[/STDOUT]
Regexp pattern (perlre(1)) to match to the streams' (stdout and stderr) lines.
In the form of /PATTERN/MODIFIERS.
Optionally prefixed with an exclamation mark (!) which negates the result.
Pass every line by //.
Exclude every line by !//.
If there is a pattern in the rule, replacement or tagging will only take place if the pattern matched (or not matched if it was negated).
If there is no pattern, only match_tags controls if the rest will be applied or not.
You may escape a slash (/) in the PATTERN by backslash, as is customary in Perl,
but to keep the filter expression parsing simple,
an escaped backslash itself (a double backslash) at the end of the regexp pattern,
ie. just before the closing slash,
won't be noticed.
So type it as \x5C instead.
A further limitation is that only the slash (/) can be used as delimiter; others, eg. m{...}, can not.
A pattern may be followed by a plus sign and a number (+N)
to denote that the given action (string replacement, or tagging)
should take effect after the given number of lines.
This way you can exclude the triggering line from the tagging.
A pattern with offset but without replacer or set_tags is meaningless.
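As a hedged sketch using the rule syntax described above (whether the marker lines themselves get tagged depends on rule ordering, so treat this as an illustration, not a definitive filter), a filter file using an offset so that tagging starts one line after the opening marker:

```
# indent everything after the BEGIN line, up to the END line
/BEGIN/+1 [block]
/END/ [/block]
[block] s/^/\t/
```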
A s/// string substitution Perl expression.
Optionally with modifiers.
This can be abused to execute any perl code (with the "e" modifier).
The syntax is the same as for match_tags. But if the square-bracketed tags are on the right side of the pattern, then the tags are applied to the stream.
Remove tags by a leading slash, like [/blue].
set_tags is useful with a pattern.
Example filter:
/BEGIN/ [keyblock] /END/ [/keyblock] [keyblock] s/^/\t/
This prepends a TAB char to each line in the output stream which is between the lines containing "BEGIN" and "END".
HUP - re-read filter files given at command line
Prefix each output (and stderr) line with the COMMAND process's PID:
stdfilt -f 's/^/$CHILD_PID: /' some_command...
Prefix each line with literal STDOUT/STDERR string:
stdfilt -f '[STDOUT]' -f 's/^/STDOUT: /' -f '[/STDOUT]' -f '[STDERR]' -f 's/^/STDERR: /' -f '[/STDERR]' some_command...
grep(1), stdbuf(1), logwall(8), perlre(1)
| stdmux - Multiplex the given command's STDOUT and STDERR by prefixing lines |
stdmux - Multiplex the given command's STDOUT and STDERR by prefixing lines
stdmux [-o STDOUT_PREFIX | -e STDERR_PREFIX] [--] COMMAND [ARGS]
TODO
stdmux(1) exits with the COMMAND's exit status.
mux_output=`stdmux command`
demux() { local prefix=$1; sed -ne "s/^$prefix//p"; }
output_text=`echo "$mux_output" | demux 1`
error_text=`echo "$mux_output" | demux 2`
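The demux helper above only needs sed(1), so it can be exercised on hand-crafted muxed output. Here the prefixes are assumed to be the literal strings 1 and 2 (an assumption for illustration; check stdmux --help for the actual defaults):

```shell
# hand-crafted muxed output: stdout lines prefixed "1", stderr lines prefixed "2"
mux_output='1hello
2oops
1world'

demux() { local prefix=$1; sed -ne "s/^$prefix//p"; }

output_text=$(echo "$mux_output" | demux 1)
error_text=$(echo "$mux_output" | demux 2)

printf '%s\n' "$output_text"   # hello / world
printf '%s\n' "$error_text"    # oops
```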
| stdout2env - Substitute other command's STDOUT in command arguments and run the resulting command |
stdout2env - Substitute other command's STDOUT in command arguments and run the resulting command
stdout2env [OPTIONS] -- ENVNAME-1 CMD-1 [ARG [ARG [...]]] [-- ENVNAME-2 CMD-2 [ARG [ARG [...]]] [-- ...]] [--] COMMAND [ARGS]
Run all CMD-1, CMD-2, ... commands in series,
and after each run, set the ENVNAME-n environment variable to the corresponding command's STDOUT output.
Then at the end, run the last COMMAND with all the environment set up.
Very similar to backtick notation `CMD` (and $(CMD)) in ordinary shells.
Earlier set ENVNAME variables are visible in later commands with their new value.
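In plain shell the same effect is achieved with command substitution; a hypothetical invocation like stdout2env -- GREETING echo hello -- printenv GREETING corresponds roughly to:

```shell
# command substitution strips the trailing newline,
# like stdout2env without --keep-eol
GREETING=$(echo hello) printenv GREETING   # prints: hello
```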
-d, --delimiter STRING
Take STRING as the command argv-list delimiter.
Default is double-dash --.
--keep-eol
Keep the newline char at the very end of each output.
Overwriting the PATH environment variable may render the subsequent commands not found.
Sometimes you don't want a shell to be in the picture when composing commands.
backtick by execlineb(1), multicmd(1), substenv(1)
| strip-ansi-seq - Dumb script removing more-or-less any ANSI escape sequences from the input stream |
strip-ansi-seq - Dumb script removing more-or-less any ANSI escape sequences from the input stream
| substenv - Substitute environment variables in parameters and run the resulting command |
substenv - Substitute environment variables in parameters and run the resulting command
substenv [OPTIONS] [--] COMMAND [ARGS]
Replace all occurrences of $NAME in COMMAND and ARGS with the NAME environment
variable's value, whatever NAME would be, then run COMMAND ARGS.
Supports the ${NAME} curly bracket notation too.
Replace all occurrences of any $NAME (and ${NAME}) substring
(for details see LIMITATIONS).
This is the default behaviour, unless -e is given.
Replace the occurrences of the NAME environment variable. May be specified more than once. If the -a option is NOT given, ONLY these NAMEs are replaced.
Do not replace variables which are not defined (ie. not in the environment), but keep them as-is. By default they are replaced with the empty string.
Do not run COMMAND, just print what would be executed.
This function call, in C, runs substenv(1);
note, there is no dollar-interpolation in C:
execve("substenv", ["substenv", "ls", "$HOME/.config"])
Then substenv issues this system call:
execve("ls", ["ls", "/home/jdoe/.config"])
In "substitute all" mode (without -e flag) it replaces only names
with uppercase letters, digits, and underscore ([A-Z0-9_]+),
as env vars usually contain only these chars.
However it still replaces variables with lowercase letters in ${NAME} notation,
and specific variable(s) given in -e option(s).
Does not honour escaped dollar marks, ie. \$.
Does not support full shell-like variable interpolation. Use a real shell for it.
Sometimes you don't want a shell to be in the picture when composing commands, yet need to weave some environment variable into it.
envsubst(1) from gettext-base package
| subst_sudo_user - Sudo helper program |
subst_sudo_user - Sudo helper program
subst_sudo_user <COMMAND> [<ARGUMENTS>]
Substitute literal $SUDO_USER in the ARGUMENTS and run COMMAND.
It enables sys admins to define sudoers(5) rule in which each user is allowed to
call a privileged command with their own username in parameters. Example:
%users ALL=(root:root) NOPASSWD: /usr/tool/subst_sudo_user passwd $SUDO_USER
This rule allows users to run subst_sudo_user (and subsequently
passwd(1)) as root with a verbatim $SUDO_USER parameter, so no shell
variable resolution happens so far. Subst_sudo_user in turn, running
as root, replaces $SUDO_USER with the value of the SUDO_USER environment
variable, which is, by sudo(1), guaranteed to be the caller's username.
Then it runs passwd(1) (still as root) to change the given user's
password. So effectively, with this rule, each user can change their
password without knowing the current one first (because passwd(1)
usually does not ask root for the current password).
%USERS ALL=(root:root) NOPASSWD: /usr/tool/subst_sudo_user /usr/bin/install -o $SUDO_USER -m 0750 -d /var/backup/user/$SUDO_USER
| swap - swaps two files' names |
swap - swaps two files' names
| symlinks2dot - Generate a graph in dot format representing the symlink-target relations among the given files |
symlinks2dot - Generate a graph in dot(1) format representing the symlink-target relations among the given files
| symlinks-analyze - Discover where symlinks point at, recursively |
symlinks-analyze - Discover where symlinks point at, recursively
| tabularize - Takes TAB-delimited lines of text and outputs formatted table. |
tabularize - Takes TAB-delimited lines of text and outputs formatted table.
COMMAND | tabularize [OPTIONS]
7-bit ascii borders
borders with nice graphical chars
no horizontal lines in the output
no margins, ie. no right-most and left-most vertical borders
add padding space to left and right side of cells. NUM is how many spaces. Default is no padding.
vertical separator character(s) in the output
align these columns (0-indexed) to the right. Other columns are auto-detected: if they seem to hold mostly numeric data, they are aligned to the right, otherwise to the left. This option is repeatable.
similar to --align-right option
If $PAGER is set and standard output is a terminal and the resulting table is wider than the terminal, then pipe the table through $PAGER.
column(1), untabularize(1)
| Tail - output as many lines from the end of files as many lines on the terminal currently |
Tail - output as many lines from the end of files as many lines on the terminal currently
| takeown - Take ownership on files, even for unprivileged users |
takeown - Take ownership on files, even for unprivileged users
takeown [options] <files and directories>
Command chown(1) or chown(2) is permitted only for root (and processes with CAP_CHOWN),
but normal users can imitate this behavior.
You can copy another user's file to your own in a directory writable by you,
and then replace the original file with your copy.
It is quite tricky and maybe expensive (copying huge files), but gives you an option.
Say, when somebody forgot to use the right user account when saving files directly to your folders.
takeown(1) uses the *.takeown and *.tookown filename extensions to create new files
and to rename existing files to, respectively.
See takeown --help for option list.
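By hand, the copy-and-replace trick looks like this (sketch only; takeown's real bookkeeping and attribute copying are more involved):

```shell
cd "$(mktemp -d)"
echo 'their data' > theirs.txt       # pretend this file is owned by someone else

cp theirs.txt theirs.txt.takeown     # the copy is owned by you
mv theirs.txt theirs.txt.tookown     # set the original aside
mv theirs.txt.takeown theirs.txt     # your copy takes the original name
```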
script --> main --> takeown
/ | \
/ | \
/ | \
takeown takeown takeown
_file _symlink _directory
| | |
- - - - - - | - - - - | - - - - - | - - - - - - - - - - - - - -
error | | |
handler | | V ,---> register_created_dir
function: | | ,--> takeown |
cleanup | | | _directory ---+---> register_moved_file
| | | _recursive |
| | | | | \ `---> register_
| | `------´ | \ ,-> copied_file
| V V \ |
| copy_out <-- copy_out --\--'
| _symlink / \
V / |
copy_out <------------´ |
_file |
| |
`-------> copy_attributes <-----´
| taslis - WM's Window List |
taslis - WM's Window List
Taslis stands for tasklist. It lists X11 clients provided by wmctrl(1) in an ANSI-compatible terminal.
Select item
Switch to workspace and raise window
Close window gracefully
Hangup selected process
Interrupt process
Suspend, Resume process
Kill process
Process's details
Dismiss
Help
| terminaltitle - Set the current terminal's title string |
terminaltitle - Set the current terminal's title string
| tests - Show all attributes of the given files which can be tested by test in the same color as ls shows them |
tests - Show all attributes of the given files which can be tested by test(1) in the same color as ls(1) shows them
| text2img-dataurl - Convert text input to image in "data:..." URL representation |
text2img-dataurl - Convert text input to image in "data:..." URL representation
| timestamper - Prepend a timestamp to each input line |
timestamper - Prepend a timestamp to each input line
timestamper
Read STDIN and put everything on STDOUT, only prepending each line by a timestamp and a TAB char.
Timestamp format, see strftime(3).
Default is "%F %T %z".
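A rough shell approximation of the default behavior (one date(1) call per line, so far slower than timestamper itself):

```shell
# prepend "<timestamp>\t" to each input line
stamp_lines() {
	while IFS= read -r line; do
		printf '%s\t%s\n' "$(date '+%F %T %z')" "$line"
	done
}

printf 'one\ntwo\n' | stamp_lines
```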
ts(1) from moreutils
| touchx - set execution bit on files and create them if necessary |
touchx - set execution bit on files and create them if necessary
| trackrun - Record when the given command was started and ended and expose it to the command in environment variables |
trackrun - Record when the given command was started and ended and expose it to the command in environment variables
trackrun [OPTIONS] [--] COMMAND [ARGS]
It records when it starts COMMAND and when it ends, identifying COMMAND by one of these 4 options:
Set the TRACKRUN_LAST_STARTED and TRACKRUN_LAST_ENDED environment variables for COMMAND to the ISO 8601 representation of the date and time when COMMAND was last started and ended respectively. Set TRACKRUN_LAST_STATUS to the status COMMAND last exited with. These are left empty if there is no data yet.
On every run, a UUID is generated, so you can connect events of concurrent runs in the track report. It is exposed in TRACKRUN_UUID env.
Show the hash generated from either of those options above before running COMMAND. This hash is used for the filename in which command-related events are stored.
Show the current run's UUID before actually starting the command.
Write the current run's UUID into the given file before starting the command.
Do not run COMMAND, instead display its tracked history.
Store tracking data in ~/.trackrun directory.
The last successful run's UUID, date-time when started and ended.
The current run's UUID
Trackrun does not do locking. If you need it, take care of it yourself using flock(1), cronrun(1), or similar.
| triggerexec - Run a command and do various specified actions depending on what command does |
triggerexec - Run a command and do various specified actions depending on what command does
triggerexec [EVENT ACTION [EVENT ACTION [...]]] [--] COMMAND [ARGS]
Run COMMAND and execute specific actions depending on what COMMAND does.
Supported EVENT events:
Match PATTERN regex pattern to stdout/stderr line-wise.
Supported ACTION actions:
Evaluate a perl expression in triggerexec(1)'s own context. Useful variables: $COMMAND_PID is the COMMAND's PID. $PARAM is a hash ref containing event parameters, for example $PARAM->{line} is the text which triggered the action - if applicable (stdout:/stderr: events).
expect(1)
| ttinput - Inject console input in a terminal as if the user typed |
ttinput - Inject console input in a terminal as if the user typed
echo Lorem ipsum | ttinput /dev/pts/1
https://johnlane.ie/injecting-terminal-input.html
| uchmod - chmod files according to umask |
uchmod - chmod files according to umask
uchmod [-v] [-R] [path-1] [path-2] ... [path-n]
Change mode bits of files and directories according to umask(1) settings using chmod(1).
Use it when file modes were messed up; uchmod changes them to be like the mode of newly created files.
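Assuming uchmod applies the standard creation defaults, the target modes follow the usual umask arithmetic: newly created files get 666 & ~umask, directories 777 & ~umask. For the common umask 022:

```shell
umask_val=022
printf 'file mode: %o\n' $(( 0666 & ~0$umask_val ))   # 644
printf 'dir mode:  %o\n' $(( 0777 & ~0$umask_val ))   # 755
```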
| unicodestyle - Add font styles to input text using Unicode |
unicodestyle - Add font styles to input text using Unicode
| uniproc - Universal data processing tool |
uniproc - Universal data processing tool
uniproc [OPTIONS] INPUTFILE COMMAND [ARGS]
Take each line from INPUTFILE as DATA (chopping end-of-line chars), pass the TAB-delimited fields of DATA to COMMAND as arguments after ARGS (unless a placeholder is in COMMAND or ARGS, see below), run COMMAND, and then record the exit status.
Can be parallelized well.
uniproc(1) itself does not run multiple instances of COMMAND in parallel, just in series,
but if you start multiple instances of uniproc(1), then you can run COMMANDs concurrently.
Locking ensures no overlapping data being processed.
So you don't need special precautions (locking, data partitioning) when starting uniproc(1) multiple times on the same INPUTFILE.
Use a wrapper command/script for COMMAND if you want either of these:
By default it goes to STDOUT.
Use redirexec(1) for example.
Use args2env(1) or args2stdin(1) for example.
If re-run after an interrupt, it won't process already processed data. But you may re-try the failed ones with the --retry option.
The user is allowed to append new lines of data to INPUTFILE between executions or during runtime - it won't mess up the processing. However editing or reordering lines which are already in the file confuses the results - don't do it.
ARGS (and COMMAND too, somewhat usefully) supports placeholders:
A curly bracket-pair {} is replaced to DATA as one argument, including TAB chars if any, anywhere in COMMAND ARGS.
If there is a number in it, {N}, then the Nth TAB-delimited field (1-indexed) is substituted in.
A lone {@} argument expands to as many arguments as TAB-delimited fields there are in DATA.
Multiple numbers in the placeholder like {5,3,4} expands to all of the data fields specified by the index numbers, into multiple arguments.
Note that in this case, the multi-index placeholder must stand in its own separate argument, just as the all-fields {@} placeholder.
Indexing a non-existing field expands to empty string.
Be aware that your shell (eg. bash(1)) may expand arguments like {5,3,4} before they get to uniproc(1),
so escape them if necessary (eg. '{5,3,4}').
If there is any curly bracket placeholder like these, DATA fields won't be added to ARGS as the last argument.
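Placeholder expansion on one TAB-delimited DATA line can be illustrated with cut(1) (illustration only; uniproc does the substitution itself, and some-command is a hypothetical name):

```shell
data=$(printf 'alice\t42\tadmin')                 # one DATA line with 3 fields
field() { printf '%s\n' "$data" | cut -f "$1"; }  # Nth TAB-delimited field

# uniproc datafile some-command --user {1} --id {2}
# would run roughly: some-command --user alice --id 42
printf 'some-command --user %s --id %s\n' "$(field 1)" "$(field 2)"
```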
Process those data which earlier failed (according to the INPUTFILE.uniproc state file) too, besides the unprocessed ones.
Process only the earlier failed items.
Process only 1 item, then exit. Default is to process as many items in series as possible.
How many items to process.
Stop processing items as soon as the first COMMAND exits non-zero,
and uniproc(1) itself exits with that exit code (or 128+signal if signaled).
Create and check locks using lock files instead of flock(2).
Useful for network filesystems which do not support shared locks (eg. sshfs).
It is assumed that either all instances of uniproc(1),
across all hosts that are working on a given INPUTFILE,
are run in quasi-lock mode, or all in flock(2)-lock mode - do not mix.
These quasi lock files are:
locking the INPUTFILE.uniproc state file, and
locking the command processing the NUMth item.
Note, this is the same file which is locked by flock(2) in real-lock mode.
Beware, when using quasi-locks: the user may manually clean up lock files
which are left there after an interrupted process.
While atomic lock acquisition is approximated using general filesystem primitives,
there is no simple race-free way to automatically release the lock when a process terminates.
Therefore uniproc(1) does not even try to emulate such lock-release mechanism,
so it neither detects nor reclaims stale lock files.
However, to help the user identify possibly still-alive processes which expect resources to be exclusively allocated to them,
uniproc(1) writes some useful info about the current process in the lock files:
PID START_TIMESTAMP HOSTNAME.
Show which item is being started to process.
Show the raw data what is being started to process.
Show stats summary when exit.
Output debug messages.
It maintains the INPUTFILE.uniproc file by writing the processing status of each line of input data in it, line-by-line. Processing status is either:
(empty) - processing of this item has not started yet
... - in progress
0 (or other number) - result status (exit code)
! followed by hexadecimal digits (eg. !0f) - termination signal (COMMAND terminated abnormally)
INPUTFILE.uniproc is locked while being read/written to ensure consistency. INPUTFILE.uniproc.NUM are the names of the files which hold the lock for the currently in-progress processes, where NUM is the line number of the corresponding piece of data in INPUTFILE. A lock is held on each of these INPUTFILE.uniproc.NUM files by the respective instance of COMMAND to detect whether the processing is still going on or the process crashed.
Due to the currently used locking mechanism (Fcntl(3perl)), running on multiple hosts may disrespect locking, depending on the network filesystem. See the --quasilock option.
When running COMMAND, the following environment is set:
Number of the particular piece of data (ie. line number in INPUTFILE, 0-indexed) which needs to be processed by the current process.
Same as UNIPROC_DATANUM but 1-indexed instead of 0-indexed.
Total number of items (processed and unprocessed). Note this figure may be outdated because INPUTFILE is not always measured before each COMMAND start.
Display the data processing status before each line of data:
paste datafile.uniproc datafile
How much completed?
awk -v total=$(wc -l < datafile) 'BEGIN{ok=ip=fail=0} {if($1==0){ok++} else if($1=="..."){ip++} else if($1!=""){fail++}} END{print "total: "total", completed: "ok" ("(ok*100/total)"%), in-progress: "ip" ("(ip*100/total)"%), failed: "fail" ("(fail*100/total)"%)"}' datafile.uniproc
Output:
total: 8, completed: 4 (50%), in-progress: 1 (12.5%), failed: 1 (12.5%)
Record output of data processing into a file per each data item:
uniproc datafile sh -c 'some-command "$@" | tee output-$UNIPROC_DATANUM' --
uniproc datafile substenv -e UNIPROC_DATANUM redirexec '1:a:file:output-$UNIPROC_DATANUM' some-command
Same as above, plus keep the output on STDOUT as well as in separate files.
Note, the {} argument is there to pass DATA to the right command:
uniproc datafile pipecmd some-command {} -- substenv -e UNIPROC_DATANUM tee -a 'output-$UNIPROC_DATANUM'
Display data number, processing status, input data, (last line of) output data in a table:
join -t $'\t' <(nl -ba -v0 datafile.uniproc) <(nl -ba -v0 datafile) | foreach -t --prefix-add-data --prefix-add-tab tail -n1 output-{0}
| untabularize - Revert the formatting done by tabularize |
untabularize - Revert the formatting done by tabularize(1)
untabularize [OPTIONS]
Expect no pipe char (|) in column names,
so it's less ambiguous to determine vertical gridlines.
Untabularize the input as if it was tabularized with -p NUM padding.
Strip leading whitespace in column names to learn each column's left margin.
Don't remove trailing (or leading, in case of right-aligned cells) space, which is often just a filler.
Does not reliably distinguish filler space from semantically significant space, so sometimes either significant space gets removed or filler space is left there (with the -F option). The default mode is to trim space in cell data from the right, if any, else trim at the left. The padding, if specified by the -p option, is always trimmed (even if it's non-space).
tabularize(1)
| upsidedown - Transliterate input stream to text with upsidedown-looking chars |
upsidedown - Transliterate input stream to text with upsidedown-looking chars
| url_encode - Escape URL-unsafe chars in text given either in parameters or in stdin by percent-encoding |
url_encode - Escape URL-unsafe chars in text given either in parameters or in stdin by percent-encoding
url_decode - Unescape percent-encoded sequences given either in parameters or in stdin
| url_encode - Escape URL-unsafe chars in text given either in parameters or in stdin by percent-encoding |
url_encode - Escape URL-unsafe chars in text given either in parameters or in stdin by percent-encoding
url_decode - Unescape percent-encoded sequences given either in parameters or in stdin
| url_encode_bf - Make all chars given either in parameters or in stdin to percent-encoded sequence |
url_encode_bf - Make all chars given either in parameters or in stdin to percent-encoded sequence
| url-parts - Extract specified parts from URLs given in input stream |
url-parts - Extract specified parts from URLs given in input stream
echo <URL> | url-parts <PART> [<PART> [<PART> [...]]]
Supported parts: fragment, hostname, netloc, password, path, port, query, scheme, username, and query.NAME for the query parameter NAME, and query.NAME.N for Nth element of the array parameter NAME.
Run url-parts --help for the definitive list of URL part names
supported by the python urlparse module installed on your system.
| verscmp - Compare version numbers |
verscmp - Compare version numbers
verscmp VERSION_A [gt | lt | ge | le | eq | ne] VERSION_B
verscmp VERSION_A between VERSION_START VERSION_END [VERSION_START VERSION_END [...]]
verscmp VERSION_A in VERSION_B1 VERSION_B2 [VERSION_B3 [...]]
Comparison is satisfied
Runtime error
Parameter error
Comparison is NOT satisfied
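Without verscmp, a version-aware less-or-equal can be approximated with GNU sort -V (an approximation, not verscmp's exact semantics):

```shell
# true if version $1 <= version $2, version-aware
ver_le() { [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }

ver_le 1.2.9 1.2.10 && echo 'satisfied'   # version-aware: 1.2.9 <= 1.2.10
ver_le 1.10 1.9 || echo 'not satisfied'   # 1.10 is newer than 1.9
```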
vercmp(1) from makepkg package, Version::Util(3pm)
| vidir-sanitize - Helper script to change tricky filenames in a directory |
vidir-sanitize - Helper script to change tricky filenames in a directory
No need to invoke vidir-sanitize directly. vidir(1) calls it internally.
VISUAL=vidir-sanitize vidir
vidir(1) from moreutils
| vifiles - Edit multiple files as one |
vifiles - Edit multiple files as one
If the LF char at the end of any file is missing, it will be added after editing.
vidir(1) from moreutils
| visymlinks - Bulk edit symlinks names and targets |
visymlinks - Bulk edit symlinks names and targets
visymlinks [PATH [PATH [...]]]
Open up your default editor (see sensible-editor(1)) to edit the targets of PATH symlinks
given in command arguments as well as their own filenames.
If no PATH given, all symlinks in the current working directory will be loaded into the editor.
Once finished editing, visymlinks(1) changes the target of those symlinks which were edited.
Contrary to visymlinks(1)'s relative, vidir(1),
if a PATH symlink is removed in the editor, it won't be removed from the filesystem.
Returns zero if everything went well.
Returns the exit status of the editor if it was not zero (also won't change symlinks).
Returns the error code of symlink(2) if any of such calls failed.
Special characters disallowed in PATH filenames and symlink targets: TAB and LF (newline).
vidir(1) from moreutils, vifiles(1)
| waitpid - Wait for a process to end |
waitpid - Wait for a process to end (even if not child of current shell)
| whisper-retention-info - Show data retention policy in Whisper timeseries database file |
whisper-retention-info - Show data retention policy in Whisper timeseries database file
| wikibot - Update Wikimedia article |
wikibot - Update Wikimedia (Wikipedia) article
| xdg-autostart - Start XDG autostart programs |
xdg-autostart - Start XDG autostart programs
| xml2json - Convert XML input to JSON |
xml2json - Convert XML input to JSON