argrelay: A data server for CLI tools with attribute search & Tab-completion in Bash shell


Supported:

  • Linux
  • Bash >= 4.0
  • Python >= 3.9


What is argrelay?

A human interface into "automation arsenal" with built-in search for scripts & their input data.

A longer answer:

A wrapper for command line interface (CLI) tools to simplify input selection.
It integrates shell + client + server to search loaded custom data
navigating custom schema via CLI args directly in the standard shell.

  • Although its initial purpose was command auto-completion, that became a trivial byproduct of...
  • Its primary feature: (tag|label|keyword|attribute)-based structured data search.

Probably the best answer:

⭐ ⭐ ⭐ ⭐ ⭐

You may find it easier to get the idea through examples in the_problem_argrelay_solves.md
and ignore the rest of this readme.md (to avoid reading distilled claims).

When is it needed?

It solves the single most annoying problem: knowing what to type next in the middle of typing.

  • When command args relate to sizeable data that is incomprehensible or difficult to
    review, select, browse, or discover on the spot.

  • When knowledge about ( non-standard | undocumented ) "enterprise automation" scripts is
    constantly invalidated by frequent updates of code & data.

argrelay-registered functions are queryable - they use schemas with the
loaded data to contextually limit the possible options needed to locate or auto-complete them.
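
The attribute-based narrowing described above can be illustrated with a minimal sketch. The data and property names below are hypothetical and the logic is a simplification, not the actual argrelay implementation:

```python
# Sketch of attribute-based search: each object is a flat dict of properties;
# typed args match property values in any order, and the distinct values left
# for each still-ambiguous property become the suggestions.
hosts = [
    {"code_maturity": "dev", "flow_stage": "backend", "host_name": "apstra"},
    {"code_maturity": "dev", "flow_stage": "frontend", "host_name": "amur"},
    {"code_maturity": "prod", "flow_stage": "backend", "host_name": "wert"},
]

def narrow(objects, typed_args):
    """Keep only objects where every typed arg matches some property value."""
    return [
        obj for obj in objects
        if all(arg in obj.values() for arg in typed_args)
    ]

def remaining_options(objects, typed_args):
    """Distinct values still selectable per not-yet-determined property."""
    options = {}
    for obj in narrow(objects, typed_args):
        for prop, value in obj.items():
            options.setdefault(prop, set()).add(value)
    # Properties with a single remaining value are already decided:
    return {p: sorted(v) for p, v in options.items() if len(v) > 1}

print(remaining_options(hosts, ["dev"]))
# {'flow_stage': ['backend', 'frontend'], 'host_name': ['amur', 'apstra']}
```

Note that args are matched in any order and against any property, which is why the command syntax can stay relaxed.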

Why is it for shell?

It is not necessarily only for shell,
but there is an oversupply of GUIs (and they do not integrate with scripts).

Vacant niche:

CLI is indispensable for rapidly evolving custom tools:

  • [A] ubiquitous automation (any command is effectively replay-able code)
  • [B] quick implementation (get it done "in the afternoon" without drilling through a full-stack API for a whole month)
  • [C] ultimate manual intervention (when everything else has already failed or is unavailable)

Accepting the problem:

Achieving all three [A, B, C] is nearly impossible without CLI.

Devising a solution:

Make CLI more human-efficient by reducing manual and guess work:

  • enable inline search directly in shell (without copying-and-pasting args from other apps)
  • reduce cognitive load via feedback eliminating syntax and options memorization
  • unify interaction for all registered commands via data-driven framework

See also "general dilemma".

How does it work?

It employs the same shell API as auto-completion for any other command (e.g. ls or git),
except that requests go to the server with its plugins to run search logic against indexed data
(instead of processing requests locally by querying the file system).

It is in the name:

Essentially, sitting between the user shell and the user program,
argrelay "relays" command line args (hence, the name) to the server on each request,
then it uses response data to complete/explain/invoke the command for the user:

sequenceDiagram
    autonumber
    actor U as <br/>User with shell
    box transparent <br/>argrelay
    participant C as Client
    participant S as Server
    end
    participant P as Any program:<br/>user-specific<br/>client-side-local
    U ->> C: use shell hotkeys
    activate C
    C ->> S: "relay" all args
    activate S
    S ->> C: provide data to<br/>complete<br/>explain<br/>invoke<br/>the command
    deactivate S
    C ->> U: if<br/>complete<br/>or<br/>explain
    C ->> P: if invoke
    deactivate C
    activate P
    P ->> U: produce invocation results
    deactivate P

See "full picture" below.
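
The relay round-trip can be caricatured in a few lines. The request/response shapes below are purely illustrative, not the real client-server protocol:

```python
# Sketch: the client "relays" all args to the server and the response decides
# whether to complete, explain, or invoke.
def server_respond(args, hotkey):
    # Stand-in for the server: the real one queries indexed data via plugins.
    if hotkey == "Tab":
        return {"action": "complete", "options": ["host", "service"]}
    if hotkey == "Alt+Shift+Q":
        return {"action": "explain", "status": f"args so far: {args}"}
    return {"action": "invoke", "command": args}

def client_dispatch(args, hotkey):
    response = server_respond(args, hotkey)
    if response["action"] in ("complete", "explain"):
        return response                      # fed back to the user's shell
    return "running: " + " ".join(response["command"])  # handed to the program

print(client_dispatch(["lay", "goto"], "Enter"))  # running: lay goto
```

The key design point this sketch preserves: the client stays thin, and all data-dependent decisions happen server-side.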

Typical scenario:

  • Human types in some remembered args (via Tab-auto-completion) in relaxed syntax and order.

  • Machine provides feedback (via Alt+Shift+Q-query) on the progress of the input interrogation.

    It tells:

    • Which args the machine has already matched with the data according to the command schema.

    • What else the machine needs from the human to disambiguate and select the remaining command args.

Try "interactive demo" below.

Request hotkeys:

| Local Bash: | Remote Server: | Local Client: |
|---|---|---|
| Alt+Shift+Q | reports existing and missing input | displays command input interrogation status |
| Tab | suggests options for missing input | lists options to Bash for auto-completion |
| Enter | provides data to invoke the command | executes the command |

Apart from the trivial Enter to execute the command,
the Tab and Alt+Shift+Q hotkeys are configured via the Readline library that Bash relies on.

How to use it?

The simplest way is to fork|branch the repo and wrap the intended logic following an example:

  • Implement loader plugin to load objects and index their properties for selection via CLI.

    For example, an ssh wrapper may use the generic loader with a static data file -
    see the few destinations with multiple properties in ConfigOnlyLoader.ssh_dst_group.data.yaml.

  • Implement delegator plugin to wrap the logic requiring arguments selection.

    For example, the ssh wrapper may use the selected destinations -
    see the invoke_action func in DelegatorSshDst.py.

Try "interactive demo" while changing code and data from that example to
see how CLI arguments are matched against properties of the ssh destinations loaded by the loader.
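
As a rough sketch of what the delegator's invocation step boils down to: it receives the fully disambiguated object selected via CLI args and turns its properties into the command to run. The function name, signature, and property values below are illustrative, not the actual DelegatorSshDst API:

```python
import shlex

# Sketch: build an ssh command line from the properties of one selected
# destination object (the kind of dict argrelay narrows down via CLI args).
def invoke_ssh(selected_dst):
    """Turn a selected destination's properties into an ssh command."""
    target = f"{selected_dst['user_name']}@{selected_dst['host_name']}"
    # Quote the path in case it contains shell metacharacters:
    remote_cmd = f"cd {shlex.quote(selected_dst['dir_path'])} && exec $SHELL"
    # A real wrapper would exec this; here we just return the argv list:
    return ["ssh", "-t", target, remote_cmd]

dst = {"user_name": "alice", "host_name": "localhost", "dir_path": "/tmp"}
print(invoke_ssh(dst))
```

The loader decides which objects exist and which properties index them; the delegator only consumes the one object the user has fully selected.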

Ultimately, argrelay can be used as a dependency of a separate project (in its own separate repo).

Interactive demo: starting and stopping

There are two ways to start demo:

  • Using docker image:

    docker run -it argrelay/demo
  • Using git sources:

    This is a non-intrusive demo (e.g. without permanent changes to ~/.bashrc).

    Clone this repo somewhere (@/ is the project root).

    Start @/exe/relay_demo.bash sub-shell (it may take a couple of minutes to start for the first time):

    ./exe/relay_demo.bash

    This sub-shell configures request hotkeys to bind the lay command to @/exe/run_argrelay_client.

Optionally, review env state (any time):

./exe/check_env.bash

To stop & clean up at any point, simply exit the sub-shell:

exit

See project_walkthrough.md for more details.

Interactive demo: elaborated test data

While still in the started "interactive demo" sub-shell:

  • Try request hotkeys with lay command (which uses demo test data):

    lay goto                 # press `Alt+Shift+Q` to describe available options
    lay goto host            # press `Tab` one or multiple times
    lay goto host dev        # press `Alt+Shift+Q` to observe changes in the output

    If executed (press Enter), it runs stub implementations (in a real app it would do a remote ssh login, for example).

  • Command lay goto binds to only one of the functions.

    To see all bindings for the lay command, run the help function accessible via this command:

    lay help

    A lot of the functions are stubs and tests for demo purposes only.

Interactive demo: changes to code & data

The following ssh-related command is intentionally added to demo simple changes to code & data.

While still in the started "interactive demo" sub-shell:

  • For ssh wrapper example, try lay ssh or ar_ssh commands:

    Note that this is an example when the same function is bound to more than one command.

    lay ssh                  # press `Alt+Shift+Q` to describe available options
    ar_ssh                   # press `Alt+Shift+Q` to describe available options
  • Modify code or data (or both) in the files this ssh-wrapper function uses:

    • ConfigOnlyLoader.ssh_dst_group.data.yaml (the data)
    • DelegatorSshDst.py (the code)

  • Restart "interactive demo" and try lay ssh or ar_ssh command again to see the changes.

You may apply many examples from the_problem_argrelay_solves.md to this ssh-wrapper.

Interactive demo: browse and replace server data

Data is accessed by snapshots.

A snapshot does not need to include the entire indexed data set - it can be any sub-set of it.

The data sub-set (sub-snapshot) is defined by search properties within the data.

For consistency, all entries added/deleted/replaced should have the same values for the specified search properties.
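
The sub-snapshot replacement semantics described above can be sketched with a simplified in-memory model (not the server's actual storage code):

```python
# Sketch: a sub-snapshot is defined by fixed values of chosen search
# properties; replacing it deletes every stored entry matching those values
# and inserts the new entries, which must all share those same values.
def replace_sub_snapshot(store, search_props, new_entries):
    for entry in new_entries:
        # Consistency rule: every new entry must match the search properties.
        assert all(entry.get(k) == v for k, v in search_props.items())
    kept = [
        e for e in store
        if not all(e.get(k) == v for k, v in search_props.items())
    ]
    return kept + list(new_entries)

store = [
    {"envelope_class": "class_ssh_dst", "service_name": "tmp"},
    {"envelope_class": "class_other", "service_name": "xyz"},
]
new = [
    {"envelope_class": "class_ssh_dst", "service_name": "tmp"},
    {"envelope_class": "class_ssh_dst", "service_name": "new_tmp"},
]
store = replace_sub_snapshot(store, {"envelope_class": "class_ssh_dst"}, new)
print(len(store))  # 3: the class_other entry is kept, ssh entries replaced
```

Entries outside the sub-snapshot (different search-property values) are untouched, which is why the consistency rule matters: a new entry with mismatched values would silently fall outside the set being replaced.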

While still in the started "interactive demo" sub-shell:

  • Browse and retrieve data used by ssh wrapper:

    lay get class_ssh_dst

    stdout (one JSON per line):

    {"live_status": "offline", "service_name": "tmp", "run_mode": "active", "group_label": "core", "cluster_name": "demo", "data_center": "local", "flow_stage": "backend", "geo_region": "apac", "dir_path": "/tmp", "ip_address": "ip.127.0.0.1", "user_name": "~", "envelope_class": "class_ssh_dst", "host_name": "localhost", "code_maturity": "dev"}
    {"live_status": "online", "service_name": "root", "run_mode": "active", "group_label": "core", "cluster_name": "demo", "data_center": "local", "flow_stage": "backend", "geo_region": "apac", "dir_path": "/", "ip_address": "ip.127.0.0.1", "user_name": "~", "envelope_class": "class_ssh_dst", "host_name": "localhost", "code_maturity": "test"}
    {"live_status": "online", "service_name": "home", "run_mode": "passive", "group_label": "core", "cluster_name": "demo", "data_center": "local", "flow_stage": "backend", "geo_region": "apac", "dir_path": "~", "ip_address": "ip.127.0.0.1", "user_name": "~", "envelope_class": "class_ssh_dst", "host_name": "localhost", "code_maturity": "test"}
  • Replace data used by ssh wrapper:

    stdin (one JSON per line):

    echo '
    {"live_status": "offline", "service_name":     "tmp", "run_mode": "active", "group_label": "core", "cluster_name": "demo", "data_center": "local", "flow_stage": "backend", "geo_region": "apac", "dir_path": "/tmp", "ip_address": "ip.127.0.0.1", "user_name": "~", "envelope_class": "class_ssh_dst", "host_name": "localhost", "code_maturity": "dev"}
    {"live_status": "offline", "service_name": "new_tmp", "run_mode": "active", "group_label": "core", "cluster_name": "demo", "data_center": "local", "flow_stage": "backend", "geo_region": "apac", "dir_path": "/tmp", "ip_address": "ip.127.0.0.1", "user_name": "~", "envelope_class": "class_ssh_dst", "host_name": "localhost", "code_maturity": "dev"}
    {"live_status": "online", "service_name":     "root", "run_mode": "active", "group_label": "core", "cluster_name": "demo", "data_center": "local", "flow_stage": "backend", "geo_region": "apac", "dir_path": "/", "ip_address": "ip.127.0.0.1", "user_name": "~", "envelope_class": "class_ssh_dst", "host_name": "localhost", "code_maturity": "test"}
    {"live_status": "online", "service_name": "new_root", "run_mode": "active", "group_label": "core", "cluster_name": "demo", "data_center": "local", "flow_stage": "backend", "geo_region": "apac", "dir_path": "/", "ip_address": "ip.127.0.0.1", "user_name": "~", "envelope_class": "class_ssh_dst", "host_name": "localhost", "code_maturity": "test"}
    {"live_status": "online", "service_name":     "home", "run_mode": "passive", "group_label": "core", "cluster_name": "demo", "data_center": "local", "flow_stage": "backend", "geo_region": "apac", "dir_path": "~", "ip_address": "ip.127.0.0.1", "user_name": "~", "envelope_class": "class_ssh_dst", "host_name": "localhost", "code_maturity": "test"}
    {"live_status": "online", "service_name": "new_home", "run_mode": "passive", "group_label": "core", "cluster_name": "demo", "data_center": "local", "flow_stage": "backend", "geo_region": "apac", "dir_path": "~", "ip_address": "ip.127.0.0.1", "user_name": "~", "envelope_class": "class_ssh_dst", "host_name": "localhost", "code_maturity": "test"}
    ' |
    lay set class_ssh_dst

    Try lay ssh or ar_ssh to see more options (those service_name-s prefixed with new_).

    NOTE: Normally, all data is loaded on server startup by loader plugins. Replacing data on an already running server is still in preview and will require more explanation.

See project_walkthrough.md for more details.

Primary executables

This table summarizes all executables most users ever need to know:

| Executable from @/exe/ dir | Purpose |
|---|---|
| check_env.bash | checks Bash/Python environments for any issues |
| bootstrap_env.bash | bootstraps the environment (installs or upgrades argrelay) |
| dev_shell.bash | starts configured shell with activated venv and argrelay-linked commands |
| shell_env.bash | script source-able by ~/.bashrc to avoid starting dev_shell.bash |
| run_argrelay_server | runs argrelay server (in foreground) |
| run_argrelay_client | not used directly (invoked by Alt+Shift+Q-query and Tab-completion) |

See FS_29_54_67_86.dir_structure.md for details.

Data backend

There are two options at the moment - both using MongoDB API:

| Category | mongomock (default) | pymongo |
|---|---|---|
| Data set size: | practical convenience limit ~ 10K objects | tested with ~ 1M objects |
| Pro: | nothing else to install | no practical data set size limit found (yet) for argrelay intended use cases |
| Con: | understandably, does not meet performance requirements for large data sets | requires some knowledge of MongoDB, additional setup, additional running processes |

Quantitative comparison tables between the two can be seen in the docstring for the DistinctValuesQuery enum.
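
Both backends are driven through the same MongoDB API; the core query argrelay needs is "distinct values of a property under a filter", which in the MongoDB API is `col.distinct(field, query)`. A stdlib stand-in computing the same thing over plain dicts (illustrative field names, not the project's schema):

```python
# Equivalent of the MongoDB-style `col.distinct("host_name", {...})` query,
# simulated over a list of plain dicts.
def distinct(docs, field, query):
    """Sorted distinct values of `field` among docs matching `query`."""
    matched = (
        d for d in docs
        if all(d.get(k) == v for k, v in query.items())
    )
    return sorted({d[field] for d in matched if field in d})

docs = [
    {"code_maturity": "dev", "host_name": "apstra"},
    {"code_maturity": "dev", "host_name": "amur"},
    {"code_maturity": "prod", "host_name": "wert"},
]
print(distinct(docs, "host_name", {"code_maturity": "dev"}))
# ['amur', 'apstra']
```

Because only this API surface is used, mongomock and pymongo are interchangeable behind the server; the trade-off is purely operational (nothing to install vs. performance at scale).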

Full picture

sequenceDiagram
    autonumber
    actor U as <br/>User
    participant B as Bash
    box transparent <br/>argrelay
    participant C as Client
    participant S as Server
    participant DB as Data backend<br/>(internal or external)
    end
    participant DS as Data sources
    participant P as Any program:<br/>user-specific<br/>client-side-local
    DS ->> S: load data
    activate S
    S ->> DB: load data
    deactivate S
    Note over S: <br/>stand-by<br/>
    U ->> B: type command and use hotkeys
    B ->> C: invoke
    activate C
    C ->> S: "relay" all args
    activate S
    S ->> DB: query request
    activate DB
    DB ->> S: query result
    deactivate DB
    S ->> C: "relay" enriched lookup details
    deactivate S
    Note over C: next steps depend on hotkeys
    C ->> U: show results
    C ->> P: "relay" details to invoke
    deactivate C
    activate P
    P ->> U: show results
    deactivate P

Project status

Past stage:

It grew:

  • out of an attempt to solve the original use case
  • into an attempt to make it generic

All the core features have already been added:

  • search logic
  • extensibility via plugins
  • automated shell config and check scripts
  • demo samples for various use cases

Next stage:

Simplifying adoption:

  • Focus on feedback, target relevant features, avoid unnecessary ones.
  • Stabilize APIs, split project into individual components with semantic versioning.
  • Widen dev comprehension by refactoring for clarity, removing obsoleted parts, improving docs.

Feedback

Feel free to raise issues or discussions.
