K9s - Kubernetes CLI To Manage Your Clusters In Style!
K9s provides a terminal UI to interact with your Kubernetes clusters.
The aim of this project is to make it easier to navigate, observe and manage
your applications in the wild. K9s continually watches Kubernetes
for changes and offers subsequent commands to interact with your observed resources.
Note...
K9s is not pimped out by a big corporation with deep pockets.
It is a complex OSS project that demands a lot of my time to maintain and support.
K9s will always remain OSS and therefore free! That said, if you feel k9s makes your day to day Kubernetes journey a tad brighter, saves you time and makes you more productive, please consider sponsoring us!
Your donations will go a long way in keeping our servers lights on and beers in our fridge!
Building From Source
K9s is currently using Go v1.23.X or above.
In order to build K9s from source, you must:
Clone the repo
Build and run the executable
make build && ./execs/k9s
Running with Docker
Running the official Docker image
You can run k9s as a Docker container by mounting your KUBECONFIG:
docker run --rm -it -v $KUBECONFIG:/root/.kube/config quay.io/derailed/k9s
For the default path, it would be:
docker run --rm -it -v ~/.kube/config:/root/.kube/config quay.io/derailed/k9s
Building your own Docker image
You can build your own Docker image of k9s from the Dockerfile with the following:
docker build -t k9s-docker:v0.0.1 .
You can get the latest stable kubectl version and pass it to the docker build command with the --build-arg option, which accepts any valid kubectl version (like v1.18.0 or v1.19.1).
Run your container:
docker run --rm -it -v ~/.kube/config:/root/.kube/config k9s-docker:v0.0.1
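For example, assuming the Dockerfile exposes a KUBECTL_VERSION build argument (check the repo's Dockerfile for the actual ARG name; the version shown is illustrative), a pinned build might look like:

```shell
# Build a k9s image pinned to a specific kubectl version (ARG name assumed)
docker build --build-arg KUBECTL_VERSION=v1.19.1 -t k9s-docker:v0.0.1 .
```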
PreFlight Checks
K9s uses 256-color terminal mode. On *nix systems, make sure TERM is set accordingly.
export TERM=xterm-256color
In order to issue resource edit commands make sure your EDITOR and KUBE_EDITOR env vars are set.
# Kubectl edit command will use this env var.
export KUBE_EDITOR=my_fav_editor
K9s prefers recent Kubernetes versions, i.e. 1.28+.
K8S Compatibility Matrix
k9s                  k8s client
>= v0.27.0           1.26.1
v0.26.7 - v0.26.6    1.25.3
v0.26.5 - v0.26.4    1.25.1
v0.26.3 - v0.26.1    1.24.3
v0.26.0 - v0.25.19   1.24.2
v0.25.18 - v0.25.3   1.22.3
v0.25.2 - v0.25.0    1.22.0
<= v0.24             1.21.3
The Command Line
# List current version
k9s version
# To get info about K9s runtime (logs, configs, etc..)
k9s info
# List all available CLI options
k9s help
# To run K9s in a given namespace
k9s -n mycoolns
# Start K9s in an existing KubeConfig context
k9s --context coolCtx
# Start K9s in readonly mode - with all cluster modification commands disabled
k9s --readonly
Logs And Debug Logs
Given the nature of the UI, K9s writes its logs to a specific location instead of the terminal.
To find out where the logs are stored:
k9s info
You can override the default log file destination with the --logFile argument.
K9s keeps its configurations as YAML files inside a k9s directory, and the location depends on your operating system. K9s leverages XDG to load its various configuration files. For information on the default locations for your OS, please see this link. If you are still confused, a quick k9s info will reveal where k9s is loading its configurations from. Alternatively, you can set K9S_CONFIG_DIR to tell K9s which directory to pull its configurations from.
Unix        ~/.config/k9s
macOS       ~/Library/Application Support/k9s
Windows     %LOCALAPPDATA%\k9s
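If you prefer a single known location regardless of OS, K9S_CONFIG_DIR does exactly that. The path below is just an example:

```shell
# Tell K9s to load all of its configuration from a custom directory (example path)
export K9S_CONFIG_DIR="$HOME/.config/k9s-work"
```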
NOTE: This is still in flux and will change while in pre-release stage!
You can now override the default portForward local address for all clusters by setting the env variable K9S_DEFAULT_PF_ADDRESS=a.b.c.d
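For example, to bind every cluster's port-forwards to loopback (the address is illustrative):

```shell
# Default all port-forwards to the loopback address across clusters
export K9S_DEFAULT_PF_ADDRESS=127.0.0.1
```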
# $XDG_CONFIG_HOME/k9s/config.yaml
k9s:
  # Enable periodic refresh of resource browser windows. Default false
  liveViewAutoRefresh: false
  # The path to screen dump. Default: '%temp_dir%/k9s-screens-%username%' (k9s info)
  screenDumpDir: /tmp/dumps
  # Represents ui poll intervals in seconds. Default 2secs
  refreshRate: 2
  # Overrides the default k8s api server requests timeout. Defaults 120s
  apiServerTimeout: 15s
  # Number of retries once the connection to the api-server is lost. Default 15.
  maxConnRetry: 5
  # Indicates whether modification commands like delete/kill/edit are disabled. Default is false
  readOnly: false
  # This setting allows users to specify the default view, but it is not set by default.
  defaultView: ""
  # Toggles whether k9s should exit when CTRL-C is pressed. When set to true, you will need to exit k9s via the :quit command. Default is false.
  noExitOnCtrlC: false
  # UI settings
  ui:
    # Enable mouse support. Default false
    enableMouse: false
    # Set to true to hide K9s header. Default false
    headless: false
    # Set to true to hide the K9s logo. Default false
    logoless: false
    # Set to true to hide K9s crumbs. Default false
    crumbsless: false
    # Set to true to suppress the K9s splash screen on start. Default false. Note that for larger clusters or higher latency connections, there may be no resources visible initially until local caches have finished populating.
    splashless: false
    # Toggles icons display as not all terminals support these chars. Default: true
    noIcons: false
    # Toggles reactive UI. This option provides for watching on-disk artifact changes and updating the UI live. Defaults to false.
    reactive: false
    # By default all contexts will use the dracula skin unless explicitly overridden in the context config file.
    skin: dracula # => assumes the file skins/dracula.yaml is present in the $XDG_DATA_HOME/k9s/skins directory
    # Allows setting certain views to default to fullscreen mode (yaml, helm history, describe, value_extender, details, logs). Default false
    defaultsToFullScreen: false
    # Show full resource GVR (Group/Version/Resource) vs just R. Default: false.
    useFullGVRTitle: false
    # Toggles whether k9s should check for the latest revision from the GitHub repository releases. Default is false.
    skipLatestRevCheck: false
    # When altering kubeconfig or using multiple kube configs, k9s will clean up cluster configurations that are no longer in use. Setting this flag to true will keep k9s from cleaning up inactive cluster configs. Defaults to false.
    keepMissingClusters: false
  # Logs configuration
  logger:
    # Defines the number of lines to return. Default 100
    tail: 200
    # Defines the total number of log lines to allow in the view. Default 1000
    buffer: 500
    # Represents how far to go back in the log timeline in seconds. Setting to -1 will tail logs. Default is -1.
    sinceSeconds: 300 # => tail the last 5 mins.
    # Toggles log line wrap. Default false
    textWrap: false
    # Autoscroll in logs will be disabled. Default is false.
    disableAutoscroll: false
    # Toggles log line timestamp info. Default false
    showTime: false
  # Provide shell pod customization when nodeShell feature gate is enabled!
  shellPod:
    # The shell pod image to use.
    image: killerAdmin
    # The namespace to launch the shell pod into.
    namespace: default
    # The resource limits to set on the shell pod.
    limits:
      cpu: 100m
      memory: 100Mi
    # Enable TTY
    tty: true
    hostPathVolume:
      - name: docker-socket
        # Mount the Docker socket into the shell pod
        mountPath: /var/run/docker.sock
        # The path on the host to mount
        hostPath: /var/run/docker.sock
        readOnly: true
Popeye Configuration
K9s has integration with Popeye, which is a Kubernetes cluster sanitizer. Popeye itself uses a configuration called spinach.yml, but when integrating with K9s the cluster-specific file should be named $XDG_CONFIG_HOME/share/k9s/clusters/clusterX/contextY/spinach.yml. This allows you to have a different spinach config per cluster.
Node Shell
By enabling the nodeShell feature gate on a given cluster, K9s allows you to shell into your cluster nodes. Once enabled, you will have a new s (shell) menu option while in node view. K9s will launch a pod on the selected node using a special k9s_shell pod. Furthermore, you can refine your shell pod by using a custom docker image preloaded with the shell tools you love. By default k9s uses a BusyBox image, but you can configure it as follows:
Alternatively, you can override the nodeShell feature gate for all clusters by setting the env variable K9S_FEATURE_GATE_NODE_SHELL=true|false
# $XDG_CONFIG_HOME/k9s/config.yaml
k9s:
  # You can also further tune the shell pod specification
  shellPod:
    image: cool_kid_admin:42
    namespace: blee
    limits:
      cpu: 100m
      memory: 100Mi
Then in your cluster configuration file...
# $XDG_DATA_HOME/k9s/clusters/cluster-1/context-1
k9s:
  cluster: cluster-1
  readOnly: false
  namespace:
    active: default
    lockFavorites: false
    favorites:
      - kube-system
      - default
  view:
    active: po
  featureGates:
    nodeShell: true # => Enable this feature gate to make nodeShell available on this cluster
  portForwardAddress: localhost
Customizing the Shell Pod
You can also customize the shell pod by adding a hostPathVolume to your shell pod. This allows you to mount a local directory or file into the shell pod. For example, if you want to mount the Docker socket into the shell pod, you can do so as follows:
k9s:
  shellPod:
    hostPathVolume:
      - name: docker-socket
        # Mount the Docker socket into the shell pod
        mountPath: /var/run/docker.sock
        # The path on the host to mount
        hostPath: /var/run/docker.sock
        readOnly: true
This will mount the Docker socket into the shell pod at /var/run/docker.sock and make it read-only. You can also mount any other directory or file in a similar way.
Command Aliases
In K9s, you can define your very own command aliases (shortnames) to access your resources. In your $HOME/.config/k9s directory, define a file called aliases.yaml.
A K9s alias defines pairs of alias:gvr. A gvr (Group/Version/Resource) represents a fully qualified Kubernetes resource identifier. Here is an example of an alias file:
# $XDG_DATA_HOME/k9s/aliases.yaml
aliases:
  pp: v1/pods
  crb: rbac.authorization.k8s.io/v1/clusterrolebindings
  # As of v0.30.0 you can also refer to another command alias...
  fred: pod fred app=blee # => view pods in namespace fred with labels matching app=blee
Using this aliases file, you can now type :pp or :crb or :fred to activate their respective commands.
HotKey Support
Entering command mode and typing a resource name or alias can be cumbersome when navigating through often-used resources.
We're introducing hotkeys that allow users to define their own key combinations to activate their favorite resource views.
Additionally, you can define context-specific hotkeys by adding a context-level configuration file in $XDG_DATA_HOME/k9s/clusters/clusterX/contextY/hotkeys.yaml
In order to surface hotkeys globally please follow these steps:
Create a file named $XDG_CONFIG_HOME/k9s/hotkeys.yaml
Add the following to your hotkeys.yaml. You can use resource name/short name to specify a command ie same as typing it while in command mode.
# $XDG_CONFIG_HOME/k9s/hotkeys.yaml
hotKeys:
  # Hitting Shift-0 navigates to your pod view
  shift-0:
    shortCut: Shift-0
    description: Viewing pods
    command: pods
  # Hitting Shift-1 navigates to your deployments
  shift-1:
    shortCut: Shift-1
    description: View deployments
    command: dp
  # Hitting Shift-2 navigates to your xray deployments
  shift-2:
    shortCut: Shift-2
    description: Xray Deployments
    command: xray deploy
  # Hitting Shift-S views the resources in the namespace of your current selection
  shift-s:
    shortCut: Shift-S
    override: true # => will override the default shortcut related action if set to true (defaults to false)
    description: Namespaced resources
    command: "$RESOURCE_NAME $NAMESPACE"
    keepHistory: true # whether you can return to the previous view
Not feeling so hot? Your custom hotkeys will be listed in the help view ?.
Also your hotkeys file will be automatically reloaded so you can readily use your hotkeys as you define them.
You can choose any keyboard shortcuts that make sense to you, provided they are not part of the standard K9s shortcuts list.
Similarly, referencing environment variables in hotkeys is also supported. For the available environment variables, refer to the Plugins section.
NOTE: This feature/configuration might change in future releases!
Port Forwarding over websockets
K9s follows kubectl feature flag environment variables to enable/disable port-forwarding over websockets. (default enabled in >1.30)
To disable Websocket support, set KUBECTL_PORT_FORWARD_WEBSOCKETS=false
FastForwards
As of v0.25.0, you can leverage the FastForwards feature to tell K9s how to default port-forwards. In situations where you are dealing with multiple containers or containers exposing multiple ports, it can be cumbersome to specify the desired port-forward from the dialog as in most cases, you already know which container/port tuple you desire. For these use cases, you can now annotate your manifests with the following annotations:
k9scli.io/auto-port-forwards - activates one or more port-forwards directly, bypassing the port-forward dialog altogether.
k9scli.io/port-forwards - pre-selects one or more port-forwards when launching the port-forward dialog.
The annotation value takes the shape container-name::[local-port:]container-port
NOTE: in either case you can specify the container port by name or number in your annotation!
Example
# Pod fred
apiVersion: v1
kind: Pod
metadata:
  name: fred
  annotations:
    k9scli.io/auto-port-forwards: zorg::5556 # => will default to container zorg port 5556 and local port 5556. No port-forward dialog will be shown.
    # Or...
    k9scli.io/port-forwards: bozo::9090:p1 # => launches the port-forward dialog selecting default port-forward on container bozo port named p1(8081), mapping to local port 9090...
spec:
  containers:
    - name: zorg
      ports:
        - name: p1
          containerPort: 5556
    - name: bozo
      ports:
        - name: p1
          containerPort: 8081
        - name: p2
          containerPort: 5555
The annotation value must specify a container to forward to as well as a local port and container port. The container port may be specified as either a port number or port name. If the local port is omitted then the local port will default to the container port number. Here are a few examples:
bozo::http - creates a pf on container bozo with port name http. If http specifies port number 8080 then the local port will be 8080 as well.
bozo::9090:http - creates a pf on container bozo mapping local port 9090->http(8080)
bozo::9090:8080 - creates a pf on container bozo mapping local port 9090->8080
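The decomposition rules above can be sketched with a tiny shell helper. This is illustrative only (K9s parses these annotations internally, and the function name is made up); note that when a port is given by name, K9s resolves it to the port number declared in the container spec:

```shell
#!/bin/sh
# Illustrative parser for the container-name::[local-port:]container-port shape.
parse_pf() {
  container="${1%%::*}"            # text before '::' is the container name
  rest="${1#*::}"                  # text after '::' is [local-port:]container-port
  case "$rest" in
    *:*) local_port="${rest%%:*}"; container_port="${rest#*:}" ;;
    *)   local_port="$rest";       container_port="$rest" ;;   # local port defaults to the container port
  esac
  echo "container=$container local=$local_port port=$container_port"
}

parse_pf "bozo::http"
parse_pf "bozo::9090:http"
parse_pf "bozo::9090:8080"
```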
Resource Custom Columns
You can change which columns show up for a given resource via custom views. To surface this feature, you will need to create a new configuration file, namely $XDG_CONFIG_HOME/k9s/views.yaml. This file leverages GVR (Group/Version/Resource) to configure the associated table view columns. If no GVR is found for a view, the default rendering takes over (ie what we have now). Going wide will add all the remaining columns that are available on the given resource after your custom columns. To boot, you can edit your views config file and tune your resource views live!
📢 🎉 As of release v0.40.0 you can specify json parse expressions to further customize your resource rendering.
A column is declared as COLUMN_NAME:json_parse_expression, where :json_parse_expression represents an expression to pull a specific snippet out of the resource manifest,
similar to the kubectl -o custom-columns command. This expression is optional.
IMPORTANT! Columns must be valid YAML strings. Thus if your column definition contains non-alpha chars,
they must be wrapped in single/double quotes or escaped via \
NOTE! Be sure to watch k9s logs as any issues with the custom views specification are only surfaced in the logs.
Additionally, you can specify column attributes to further tailor the column rendering.
To use this you will need to add a | indicator followed by your rendering bits.
You can have one or more of the following attributes:
T -> time column indicator
N -> number column indicator
W -> turns on wide column aka only shows while in wide mode. Defaults to the standard resource definition when present.
S -> Ensures a column is visible and not wide. Overrides wide std resource definition if present.
H -> Hides the column
L -> Left align (default)
R -> Right align
Here is a sample views configuration that customizes the pods and services views.
# $XDG_CONFIG_HOME/k9s/views.yaml
views:
  v1/pods:
    columns:
      - AGE
      - NAMESPACE|WR # => 🌚 Specifies the NAMESPACE column to be right aligned and only visible while in wide mode
      - ZORG:.metadata.labels.fred\.io\.kubernetes\.blee # => 🌚 extract fred.io.kubernetes.blee label into its own column
      - BLEE:.metadata.annotations.blee|R # => 🌚 extract annotation blee into its own column and right align it
      - NAME
      - IP
      - NODE
      - STATUS
      - READY
      - MEM/RL|S # => 🌚 Overrides std resource default wide attribute via `S` for `Show`
      - '%MEM/R|' # => NOTE! column names with non alpha chars need to be quoted as columns must be strings!
  v1/pods@fred: # => 🌚 New v0.40.6! Customize columns for a given resource and namespace!
    columns:
      - AGE
      - NAME|WR
  v1/pods@kube*: # => 🌚 New v0.40.6! You can also specify a namespace using a regular expression.
    columns:
      - NAME
      - AGE
      - LABELS
  cool-kid: # => 🌚 New v0.40.8! You can also reference a specific alias and display a custom view for it
    columns:
      - AGE
      - NAMESPACE|WR
  v1/services:
    columns:
      - AGE
      - NAMESPACE
      - NAME
      - TYPE
      - CLUSTER-IP
🩻 NOTE: This is experimental and will most likely change as we iron this out!
Plugins
K9s allows you to extend your command line and tooling by defining your very own cluster commands via plugins.
Minimally we look at $XDG_CONFIG_HOME/k9s/plugins.yaml to locate all available plugins.
Additionally, K9s will scan the following directories for additional plugins:
$XDG_CONFIG_HOME/k9s/plugins
$XDG_DATA_HOME/k9s/plugins
$XDG_DATA_DIRS/k9s/plugins
The plugin file content can be either a single plugin snippet, a collection of snippets, or a complete plugins definition (see examples below...).
A plugin is defined as follows:
Shortcut option represents the key combination a user would type to activate the plugin. Valid values are [a-z], Shift-[A-Z], Ctrl-[A-Z].
Override option makes the plugin override the default action associated with the shortcut
Confirm option (when enabled) lets you see the command that is going to be executed and gives you an option to confirm or prevent execution
Description will be printed next to the shortcut in the k9s menu
Scopes defines a collection of resources names/short-names for the views associated with the plugin. You can specify all to provide this shortcut for all views.
Command represents ad-hoc commands the plugin runs upon activation
Background specifies whether or not the command runs in the background
Args specifies the various arguments that should apply to the command above
OverwriteOutput boolean option allows plugin developers to provide custom messages on plugin stdout execution. See example in #2644
Dangerous boolean option enables disabling the plugin when read-only mode is set. See #2604
K9s does provide additional environment variables for you to customize your plugins arguments. Currently, the available environment variables are as follows:
$RESOURCE_GROUP -- the selected resource group
$RESOURCE_VERSION -- the selected resource api version
$RESOURCE_NAME -- the selected resource name
$NAMESPACE -- the selected resource namespace
$NAME -- the selected resource name
$CONTAINER -- the current container if applicable
$FILTER -- the current filter if any
$KUBECONFIG -- the KubeConfig location.
$CLUSTER the active cluster name
$CONTEXT the active context name
$USER the active user
$GROUPS the active groups
$POD while in a container view
$COL-<RESOURCE_COLUMN_NAME> use a given column name for a viewed resource. Must be prefixed by COL-!
Curly braces can be used to embed an environment variable inside another string, or if the column name contains special characters. (e.g. ${NAME}-example or ${COL-%CPU/L})
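For instance, the brace form matters when a variable abuts other text or when a column name contains special characters. Here is a hypothetical args fragment (the surrounding plugin and the column name are invented for illustration):

```yaml
# Hypothetical plugin args demonstrating brace-wrapped env vars
args:
  - ${NAME}-report      # NAME embedded inside a larger string
  - ${COL-%CPU/L}       # column name containing special characters
```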
Plugin Examples
Define several plugins and host them in a single file. These can live in the K9s root config so that they are available on all clusters. Additionally, you can define cluster/context specific plugins for your clusters of choice by adding a clusterA/contextB/plugins.yaml file.
The following defines a plugin for viewing logs on a selected pod using ctrl-l as shortcut.
# Define several plugins in a single file in the K9s root configuration
# $XDG_DATA_HOME/k9s/plugins.yaml
plugins:
  # Defines a plugin to provide a `ctrl-l` shortcut to tail the logs while in pod view.
  fred:
    shortCut: Ctrl-L
    override: false
    overwriteOutput: false
    confirm: false
    dangerous: false
    description: Pod logs
    scopes:
      - pods
    command: kubectl
    background: false
    args:
      - logs
      - -f
      - $NAME
      - -n
      - $NAMESPACE
      - --context
      - $CONTEXT
Similarly, you can define the plugin above in a directory using either a file per plugin or several plugins per file as follows...
The following defines two plugins namely fred and zorg.
# Multiple plugins in a single file...
# Note: as of v0.40.9 you can have ad-hoc plugin dirs
# Loads plugins fred and zorg
# $XDG_DATA_HOME/k9s/plugins/misc-plugins/blee.yaml
fred:
  shortCut: Shift-B
  description: Bozo
  scopes:
    - deploy
  command: bozo
zorg:
  shortCut: Shift-Z
  description: Pod logs
  scopes:
    - svc
  command: zorg
Lastly you can define plugin snippets in their own file. The snippet will be named from the file name. In this case, we define a bozo plugin using a plugin snippet.
NOTE: This is an experimental feature! Options and layout may change in future K9s releases as this feature solidifies.
Benchmark Your Applications
K9s integrates Hey from the brilliant and super talented Jaana Dogan. Hey is a CLI tool to benchmark HTTP endpoints, similar to AB bench. This preliminary feature currently supports benchmarking port-forwards and services (read: the paint on this is way fresh!).
To setup a port-forward, you will need to navigate to the PodView, select a pod and a container that exposes a given port. Using SHIFT-F a dialog comes up to allow you to specify a local port to forward. Once acknowledged, you can navigate to the PortForward view (alias pf) listing out your active port-forwards. Selecting a port-forward and using CTRL-B will run a benchmark on that HTTP endpoint. To view the results of your benchmark runs, go to the Benchmarks view (alias be). You should now be able to select a benchmark and view the run stats details by pressing <ENTER>. NOTE: Port-forwards only last for the duration of the K9s session and will be terminated upon exit.
Initially, the benchmarks will run with the following defaults:
Concurrency Level: 1
Number of Requests: 200
HTTP Verb: GET
Path: /
The PortForward view is backed by a new K9s config file namely: $XDG_DATA_HOME/k9s/clusters/clusterX/contextY/benchmarks.yaml. Each cluster you connect to will have its own bench config file, containing the name of the K8s context for the cluster. Changes to this file should automatically update the PortForward view to indicate how you want to run your benchmarks.
Benchmarks result reports are stored in $XDG_STATE_HOME/k9s/clusters/clusterX/contextY
Here is a sample benchmarks.yaml configuration. Please keep in mind this file will likely change in subsequent releases!
# This file resides in $XDG_DATA_HOME/k9s/clusters/clusterX/contextY/benchmarks.yaml
benchmarks:
  # Indicates the default concurrency and number of requests settings if a container or service rule does not match.
  defaults:
    # One concurrent connection
    concurrency: 1
    # Number of requests that will be sent to an endpoint
    requests: 500
  containers:
    # Containers section allows you to configure your http container's endpoints and benchmarking settings.
    # NOTE: the container ID syntax uses namespace/pod-name:container-name
    default/nginx:nginx:
      # Benchmark a container named nginx using the POST HTTP verb using http://localhost:port/bozo URL and headers.
      concurrency: 1
      requests: 10000
      http:
        path: /bozo
        method: POST
        body:
          {"fred":"blee"}
        header:
          Accept:
            - text/html
          Content-Type:
            - application/json
  services:
    # Similarly you can benchmark an HTTP service exposed via either NodePort or LoadBalancer types.
    # Service ID is ns/svc-name
    default/nginx:
      # Set the concurrency level
      concurrency: 5
      # Number of requests to be sent
      requests: 500
      http:
        method: GET
        # This setting will depend on whether the service is NodePort or LoadBalancer. NodePort may require a vendor port tunneling setting.
        # Set this to a node if NodePort, or to the LB if applicable. IP or dns name.
        host: A.B.C.D
        path: /bumblebeetuna
      auth:
        user: jean-baptiste-emmanuel
        password: Zorg!
K9s RBAC FU
On RBAC enabled clusters, you would need to give your users/groups capabilities so that they can use K9s to explore their Kubernetes cluster. K9s needs minimally read privileges at both the cluster and namespace level to display resources and metrics.
These rules below are just suggestions. You will need to customize them based on your environment policies. If you need to edit/delete resources extra Fu will be necessary.
NOTE! Cluster/Namespace access may change in the future as K9s evolves.
NOTE! We expect K9s to keep running even in atrophied clusters/namespaces. Please file issues if this is not the case!
If your users are constrained to certain namespaces, K9s will need the following role to enable read access to namespaced resources.
---
# K9s Reader Role (default namespace)
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k9s
  namespace: default
rules:
  # Grants RO access to most namespaced resources
  - apiGroups: ["", "apps", "autoscaling", "batch", "extensions"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  # Grants RO access to metric server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs:
      - get
      - list
      - watch
---
# Sample K9s user RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k9s
  namespace: default
subjects:
  - kind: User
    name: fernand
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: k9s
  apiGroup: rbac.authorization.k8s.io
Skins
Example: Dracula Skin ;)
You can style K9s based on your own sense of look and style. Skins are YAML files, that enable a user to change the K9s presentation layer. See this repo skins directory for examples.
You can skin k9s by default by specifying a ui.skin attribute. You can also change K9s skins based on the context you are connecting to.
In this case, you can specify a skin field in your cluster config, aka skin: dracula (just the name of the skin file without the extension!), and copy this repo's skins/dracula.yaml to the $XDG_CONFIG_HOME/k9s/skins/ directory.
In the case where your cluster spans several contexts, you can add a skin context configuration to your context configuration.
This is a collection of {context_name, skin} tuples (please see example below!)
Colors can be defined by name or using a hex representation. Recently, we've added a color named default to indicate a transparent background color, to preserve your terminal background color settings if so desired.
NOTE: This is very much an experimental feature at this time; more will be added/modified if this feature has legs, so tread accordingly!
NOTE: Please see K9s Skins for a list of available colors.
To skin a specific context, make sure the file in-the-navy.yaml is present in your skins directory and reference it from that context's configuration.
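As a sketch only (the file path is assumed from the context config layout shown in the Node Shell section, and the field placement should be checked against your K9s version), a per-context skin might look like:

```yaml
# Sketch: $XDG_DATA_HOME/k9s/clusters/cluster-1/context-1 (assumed path)
k9s:
  cluster: cluster-1
  skin: in-the-navy # => loads skins/in-the-navy.yaml for this context only
```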
You can also specify a default skin for all contexts in the root k9s config file as so:
# $XDG_CONFIG_HOME/k9s/config.yaml
k9s:
  liveViewAutoRefresh: false
  screenDumpDir: /tmp/dumps
  refreshRate: 2
  maxConnRetry: 5
  readOnly: false
  noExitOnCtrlC: false
  ui:
    enableMouse: false
    headless: false
    logoless: false
    crumbsless: false
    splashless: false
    noIcons: false
    # Toggles reactive UI. This option provides for watching on-disk artifact changes and updating the UI live. Defaults to false.
    reactive: false
    # By default all contexts will use the dracula skin unless explicitly overridden in the context config file.
    skin: dracula # => assumes the file skins/dracula.yaml is present in the $XDG_DATA_HOME/k9s/skins directory
    defaultsToFullScreen: false
    skipLatestRevCheck: false
  disablePodCounting: false
  shellPod:
    image: busybox
    namespace: default
    limits:
      cpu: 100m
      memory: 100Mi
  imageScans:
    enable: false
    exclusions:
      namespaces: []
      labels: {}
  logger:
    tail: 100
    buffer: 5000
    sinceSeconds: -1
    textWrap: false
    disableAutoscroll: false
    showTime: false
  thresholds:
    cpu:
      critical: 90
      warn: 70
    memory:
      critical: 90
      warn: 70
# $XDG_DATA_HOME/k9s/skins/in-the-navy.yaml
# Skin InTheNavy!
k9s:
  # General K9s styles
  body:
    fgColor: dodgerblue
    bgColor: '#ffffff'
    logoColor: '#0000ff'
  # ClusterInfoView styles.
  info:
    fgColor: lightskyblue
    sectionColor: steelblue
  # Help panel styles
  help:
    fgColor: white
    bgColor: black
    keyColor: cyan
    numKeyColor: blue
    sectionColor: gray
  frame:
    # Borders styles.
    border:
      fgColor: dodgerblue
      focusColor: aliceblue
    # MenuView attributes and styles.
    menu:
      fgColor: darkblue
      # Style of menu text. Supported options are "dim" (default), "normal", and "bold"
      fgStyle: dim
      keyColor: cornflowerblue
      # Used for favorite namespaces
      numKeyColor: cadetblue
    # CrumbView attributes for history navigation.
    crumbs:
      fgColor: white
      bgColor: steelblue
      activeColor: skyblue
    # Resource status and update styles
    status:
      newColor: '#00ff00'
      modifyColor: powderblue
      addColor: lightskyblue
      errorColor: indianred
      highlightColor: royalblue
      killColor: slategray
      completedColor: gray
    # Border title styles.
    title:
      fgColor: aqua
      bgColor: white
      highlightColor: skyblue
      counterColor: slateblue
      filterColor: slategray
  views:
    # TableView attributes.
    table:
      fgColor: blue
      bgColor: darkblue
      cursorColor: aqua
      # Header row styles.
      header:
        fgColor: white
        bgColor: darkblue
        sorterColor: orange
    # YAML info styles.
    yaml:
      keyColor: steelblue
      colonColor: blue
      valueColor: royalblue
    # Logs styles.
    logs:
      fgColor: lightskyblue
      bgColor: black
      indicator:
        fgColor: dodgerblue
        bgColor: black
        toggleOnColor: limegreen
        toggleOffColor: gray
Contributors
Without the contributions from these fine folks, this project would be a total dud!
Known Issues
This is still work in progress! If something is broken or there's a feature
that you want, please file an issue and if so inclined submit a PR!
K9s will most likely blow up if...
You're running older versions of Kubernetes. K9s works best on later Kubernetes versions.
You don't have enough RBAC fu to manage your cluster.
ATTA Girls/Boys!
K9s sits on top of many open source projects and libraries. Our sincere
appreciations to all the OSS contributors that work nights and weekends
to make this project a reality!
Meet The Core Team!
If you have chops in GO and K8s and would like to offer your time to help maintain and enhance this project, please reach out to me.
Thank you!
Screenshots
Demo Videos/Recordings
Documentation
Please refer to our K9s documentation site for installation, usage, customization and tips.
Slack Channel
Wanna discuss K9s features with your fellow K9sers or simply show your support for this tool? 🥳
A Word From Our Rhodium Sponsors...
Below are organizations that have opted to show their support and sponsor K9s.
Installation
K9s is available on Linux, macOS and Windows platforms.
Binaries for Linux, Windows and Mac are available as tarballs in the release page.
Via Homebrew for macOS or Linux
Via MacPorts
Via snap for Linux
On Arch Linux
On OpenSUSE Linux distribution
On FreeBSD
On Ubuntu
Via Winget for Windows
Via Scoop for Windows
Via Chocolatey for Windows
Via a GO install
# NOTE: The dev version will be in effect!
go install github.com/derailed/k9s@latest
Via Webi for Linux and macOS
curl -sS https://webinstall.dev/k9s | bash
Via pkgx for Linux and macOS
Via Webi for Windows
curl.exe -A MS https://webinstall.dev/k9s | powershell
As a Docker Desktop Extension (for the Docker Desktop built in Kubernetes Server)
Building From Source
K9s currently requires Go v1.23.X or above.
In order to build K9s from source you must:
Clone the repo
Build and run the executable
make build && ./execs/k9s
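The two steps above amount to the following (repository URL per the project's GitHub home):

```shell
# Clone the repo
git clone https://github.com/derailed/k9s.git
cd k9s

# Build and run the executable
make build && ./execs/k9s
```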
Running with Docker
Running the official Docker image
You can run k9s as a Docker container by mounting your KUBECONFIG:
docker run --rm -it -v $KUBECONFIG:/root/.kube/config quay.io/derailed/k9s
For the default path it would be:
docker run --rm -it -v ~/.kube/config:/root/.kube/config quay.io/derailed/k9s
Building your own Docker image
You can build your own Docker image of k9s from the Dockerfile with the following:
docker build -t k9s-docker:v0.0.1 .
You can get the latest stable kubectl version and pass it to the docker build command with the --build-arg option. You can use the --build-arg option to pass any valid kubectl version (like v1.18.0 or v1.19.1).
Run your container:
docker run --rm -it -v ~/.kube/config:/root/.kube/config k9s-docker:v0.0.1
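The --build-arg flow described above might look like this. Note the build-arg name KUBECTL_VERSION is an assumption here; verify it against the project's Dockerfile before relying on it:

```shell
# Grab the latest stable kubectl version tag from the official endpoint
KUBECTL_VERSION=$(curl -sL https://dl.k8s.io/release/stable.txt)

# Pass it to the image build (build-arg name assumed; check the Dockerfile)
docker build --build-arg KUBECTL_VERSION="$KUBECTL_VERSION" -t k9s-docker:v0.0.1 .
```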
PreFlight Checks
K9s uses 256-color terminal mode. On *nix systems, make sure TERM is set accordingly.
export TERM=xterm-256color
In order to issue resource edit commands make sure your EDITOR and KUBE_EDITOR env vars are set.
K9s prefers recent Kubernetes versions, ie 1.28+.
K8S Compatibility Matrix
The Command Line
Logs And Debug Logs
Given the nature of the UI, K9s produces logs to a specific location.
To view the logs and turn on debug mode, use the following commands:
# Find out where the logs are stored
k9s info
View K9s logs
Start K9s in debug mode
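The commands above, sketched out. The log path shown is illustrative; use the one reported by k9s info on your system:

```shell
# Find out where the logs are stored
k9s info

# View K9s logs (path varies by OS; take it from `k9s info`)
tail -f /tmp/k9s-$USER.log

# Start K9s in debug mode
k9s -l debug
```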
Customize logs destination
You can override the default log file destination either with the --logFile argument, or through the K9S_LOGS_DIR environment variable.
Key Bindings
K9s uses aliases to navigate most K8s resources.
? -> show active keyboard mnemonics and help
ctrl-a -> show all available resource aliases
:quit, :q, ctrl-c -> exit K9s
esc -> bail out of view/command/filter mode
:pod⏎ -> view pods (the view accepts singular, plural, short-name or alias)
:pod ns-x⏎ -> view pods in namespace ns-x
:pod /fred⏎ -> view all pods filtered by fred
:pod app=fred,env=dev⏎ -> view all pods matching the given label selector
:pod @cTx1⏎ -> view pods in context cTx1
/filter⏎ -> filter the current resource view given a filter
/! filter⏎ -> inverse regex filter
/-l label-selector⏎ -> filter the resource view by labels
/-f filter⏎ -> fuzzy-find resources given a filter
<esc> -> exit filter mode
d, v, e, l, ... -> key mappings to describe, view, edit, view logs, ...
:ctx⏎ -> view and switch to another Kubernetes context
:ctx context-name⏎ -> switch directly to the context context-name
:ns⏎ -> view and switch to another Kubernetes namespace
-, [, ] -> navigate back and forward through the view history
:screendump or sd⏎ -> view all saved screen dumps
ctrl-d -> delete a resource (with confirmation)
ctrl-k -> kill a resource (no confirmation dialog!)
:pulses or pu⏎ -> launch the pulses view
:xray RESOURCE [NAMESPACE]⏎ -> launch the xray view
:popeye or pop⏎ -> launch the popeye view
K9s Configuration
K9s keeps its configurations as YAML files inside of a k9s directory, and the location depends on your operating system. K9s leverages XDG to load its various configuration files. For information on the default locations for your OS please see this link. If you are still confused, a quick k9s info will reveal where k9s is loading its configurations from. Alternatively, you can set K9S_CONFIG_DIR to tell K9s the directory location to pull its configurations from.
Linux: ~/.config/k9s
macOS: ~/Library/Application Support/k9s
Windows: %LOCALAPPDATA%\k9s
You can now override the context portForward default address configuration by setting an env variable that can override all clusters' portForward local address: K9S_DEFAULT_PF_ADDRESS=a.b.c.d
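For example, to make all port-forwards bind every local interface (the address is illustrative):

```shell
# Override the default port-forward local address for all clusters
export K9S_DEFAULT_PF_ADDRESS=0.0.0.0
echo "$K9S_DEFAULT_PF_ADDRESS"
```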
Popeye Configuration
K9s has integration with Popeye, which is a Kubernetes cluster sanitizer. Popeye itself uses a configuration called spinach.yml, but when integrating with K9s the cluster-specific file should be named $XDG_CONFIG_HOME/share/k9s/clusters/clusterX/contextY/spinach.yml. This allows you to have a different spinach config per cluster.
Node Shell
By enabling the nodeShell feature gate on a given cluster, K9s allows you to shell into your cluster nodes. Once enabled, you will have a new s (for shell) menu option while in node view. K9s will launch a pod on the selected node using a special k9s_shell pod. Furthermore, you can refine your shell pod by using a custom docker image preloaded with the shell tools you love. By default k9s uses a BusyBox image, but you can configure it as follows:
Alternatively, you can now override the context configuration by setting an env variable that can override all clusters' node shell gate: K9S_FEATURE_GATE_NODE_SHELL=true|false
Then in your cluster configuration file...
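A sketch of such a context configuration, assuming the current context-level config layout; the image, namespace and limits are illustrative, so check the config K9s generates for your context:

```yaml
# $XDG_DATA_HOME/k9s/clusters/clusterX/contextY/config.yaml
k9s:
  cluster: clusterX
  featureGates:
    # Enables the `s` (shell) menu option in node view
    nodeShell: true
  shellPod:
    # Custom image preloaded with the shell tools you love
    image: busybox:1.35.0
    namespace: default
    limits:
      cpu: 100m
      memory: 100Mi
```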
Customizing the Shell Pod
You can also customize the shell pod by adding a hostPathVolume to your shell pod. This allows you to mount a local directory or file into the shell pod. For example, if you want to mount the Docker socket into the shell pod, you can do so as follows:
This will mount the Docker socket into the shell pod at /var/run/docker.sock and make it read-only. You can also mount any other directory or file in a similar way.
Command Aliases
In K9s, you can define your very own command aliases (shortnames) to access your resources. In your $HOME/.config/k9s directory, define a file called aliases.yaml.
A K9s alias defines pairs of alias:gvr. A gvr (Group/Version/Resource) represents a fully qualified Kubernetes resource identifier. Here is an example of an alias file:
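A sketch of such an alias file, matching the :pp, :crb and :fred commands mentioned in this section; the fred entry maps to a hypothetical CRD:

```yaml
# $HOME/.config/k9s/aliases.yaml
aliases:
  pp: v1/pods
  crb: rbac.authorization.k8s.io/v1/clusterrolebindings
  # Aliases can also point at custom resources (illustrative CRD)
  fred: acme.io/v1alpha1/fredericks
```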
Using this aliases file, you can now type :pp or :crb or :fred to activate their respective commands.
HotKey Support
Entering the command mode and typing a resource name or alias can be cumbersome when navigating through often-used resources.
We're introducing hotkeys that allow users to define their own key combinations to activate their favorite resource views.
Additionally, you can define context-specific hotkeys by adding a context-level configuration file in $XDG_DATA_HOME/k9s/clusters/clusterX/contextY/hotkeys.yaml
In order to surface hotkeys globally please follow these steps:
Create a file named $XDG_CONFIG_HOME/k9s/hotkeys.yaml
Add the following to your hotkeys.yaml. You can use a resource name/short name to specify a command, ie the same as typing it while in command mode.
Not feeling so hot? Your custom hotkeys will be listed in the help view (?).
Also, your hotkeys file is automatically reloaded, so you can use your hotkeys as soon as you define them.
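A minimal hotkeys.yaml sketch; the hotKeys schema follows the K9s docs, and the Shift-0 binding is purely illustrative:

```yaml
# $XDG_CONFIG_HOME/k9s/hotkeys.yaml
hotKeys:
  # Hitting Shift-0 brings up the pod view
  shift-0:
    shortCut: Shift-0
    description: Viewing pods
    command: pods
```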
You can choose any keyboard shortcuts that make sense to you, provided they are not part of the standard K9s shortcuts list.
Similarly, referencing environment variables in hotkeys is also supported; see the Plugins section for the available environment variables.
Port Forwarding over websockets
K9s follows kubectl feature-flag environment variables to enable/disable port-forwarding over websockets (default enabled in kubectl >1.30).
To disable websocket support, set KUBECTL_PORT_FORWARD_WEBSOCKETS=false
FastForwards
As of v0.25.0, you can leverage the FastForwards feature to tell K9s how to default port-forwards. In situations where you are dealing with multiple containers or containers exposing multiple ports, it can be cumbersome to specify the desired port-forward from the dialog, as in most cases you already know which container/port tuple you desire. For these use cases, you can now annotate your manifests with the following annotations:
@ k9scli.io/auto-port-forwards activates one or more port-forwards directly, bypassing the port-forward dialog altogether.
@ k9scli.io/port-forwards pre-selects one or more port-forwards when launching the port-forward dialog.
The annotation value takes on the shape container-name::[local-port:]container-port
Example
The annotation value must specify a container to forward to as well as a local port and container port. The container port may be specified as either a port number or port name. If the local port is omitted then the local port will default to the container port number. Here are a few examples:
bozo::http -> forwards to container bozo with port name http. If http specifies port number 8080 then the local port will be 8080 as well.
bozo::9090:http -> forwards to container bozo, mapping local port 9090->http(8080)
bozo::9090:8080 -> forwards to container bozo, mapping local port 9090->8080
Custom Views
SneakCast v0.17.0 on The Beach! - Yup! sound is sucking but what a setting!
You can change which columns show up for a given resource via custom views. To surface this feature, you will need to create a new configuration file, namely $XDG_CONFIG_HOME/k9s/views.yaml. This file leverages GVR (Group/Version/Resource) to configure the associated table view columns. If no GVR is found for a view, the default rendering will take over (ie what we have now). Going wide will add all the remaining columns that are available on the given resource after your custom columns. To boot, you can edit your views config file and tune your resources views live!
📢🎉 As of release v0.40.0 you can specify json parse expressions to further customize your resources rendering.
The new column syntax is as follows:
COLUMN_NAME[:json_parse_expression][|column_attributes]
Where :json_parse_expression represents an expression to pull a specific snippet out of the resource manifest, similar to the kubectl -o custom-columns command. This expression is optional.
Additionally, you can specify column attributes to further tailor the column rendering. To use this, you will need to add a | indicator followed by your rendering bits.
You can have one or more of the following attributes:
T -> time column indicator
N -> number column indicator
W -> turns on wide column, aka only shows while in wide mode. Defaults to the standard resource definition when present.
S -> ensures a column is visible and not wide. Overrides the wide std resource definition if present.
H -> hides the column
L -> left align (default)
R -> right align
Here is a sample views configuration that customizes the pods and services views.
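A sketch of such a views file; the column lists and the parse-expression/attribute examples are illustrative, so verify the syntax against your K9s version:

```yaml
# $XDG_CONFIG_HOME/k9s/views.yaml
views:
  v1/pods:
    columns:
      - AGE
      - NAMESPACE|WR          # wide-only, right-aligned
      - NAME
      - PHASE:.status.phase   # json parse expression
      - IP
      - NODE
      - STATUS
  v1/services:
    columns:
      - AGE
      - NAMESPACE
      - NAME
      - TYPE
      - CLUSTER-IP
```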
Plugins
K9s allows you to extend your command line and tooling by defining your very own cluster commands via plugins.
Minimally, we look at $XDG_CONFIG_HOME/k9s/plugins.yaml to locate all available plugins.
Additionally, K9s will scan the following directories for additional plugins:
$XDG_CONFIG_HOME/k9s/plugins
$XDG_DATA_HOME/k9s/plugins
$XDG_DATA_DIRS/k9s/plugins
The plugin file content can be either a single plugin snippet, a collection of snippets, or a complete plugins definition (see examples below...).
A plugin is defined as follows:
Use the scope all to provide this shortcut for all views.
K9s does provide additional environment variables for you to customize your plugin arguments. Currently, the available environment variables are as follows:
$RESOURCE_GROUP -- the selected resource group
$RESOURCE_VERSION -- the selected resource api version
$RESOURCE_NAME -- the selected resource name
$NAMESPACE -- the selected resource namespace
$NAME -- the selected resource name
$CONTAINER -- the current container if applicable
$FILTER -- the current filter if any
$KUBECONFIG -- the KubeConfig location
$CLUSTER -- the active cluster name
$CONTEXT -- the active context name
$USER -- the active user
$GROUPS -- the active groups
$POD -- while in a container view
$COL-<RESOURCE_COLUMN_NAME> -- use a given column name for a viewed resource. Must be prefixed by COL-!
Curly braces can be used to embed an environment variable inside another string, or if the column name contains special characters (e.g. ${NAME}-example or ${COL-%CPU/L}).
Plugin Examples
Define several plugins and host them in a single file. These can live in the K9s root config so that they are available on any cluster. Additionally, you can define cluster/context-specific plugins for your clusters of choice by adding a clusterA/contextB/plugins.yaml file.
The following defines a plugin for viewing logs on a selected pod using ctrl-l as the shortcut.
Similarly, you can define the plugin above in a directory, using either a file per plugin or several plugins per file, as follows...
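The ctrl-l logs plugin might look like this; the schema follows the K9s plugin docs, and the kubectl invocation is illustrative:

```yaml
# $XDG_CONFIG_HOME/k9s/plugins.yaml
plugins:
  # Tail the logs of the selected pod
  fred:
    shortCut: Ctrl-L
    description: Pod logs
    scopes:
      - pods
    command: kubectl
    background: false
    args:
      - logs
      - -f
      - $NAME
      - -n
      - $NAMESPACE
      - --context
      - $CONTEXT
      - --kubeconfig
      - $KUBECONFIG
```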
The following defines two plugins namely fred and zorg.
Lastly, you can define plugin snippets in their own file. The snippet will be named from the file name. In this case, we define a bozo plugin using a plugin snippet.
Benchmark Your Applications
K9s integrates Hey from the brilliant and super talented Jaana Dogan. Hey is a CLI tool to benchmark HTTP endpoints, similar to AB bench. This preliminary feature currently supports benchmarking port-forwards and services (read: the paint on this is way fresh!).
To set up a port-forward, you will need to navigate to the PodView, select a pod and a container that exposes a given port. Using SHIFT-F, a dialog comes up to allow you to specify a local port to forward. Once acknowledged, you can navigate to the PortForward view (alias pf), listing out your active port-forwards. Selecting a port-forward and using CTRL-B will run a benchmark on that HTTP endpoint. To view the results of your benchmark runs, go to the Benchmarks view (alias be). You should now be able to select a benchmark and view the run stats details by pressing <ENTER>. NOTE: Port-forwards only last for the duration of the K9s session and will be terminated upon exit.
Initially, the benchmarks will run with the following defaults:
The PortForward view is backed by a new K9s config file, namely $XDG_DATA_HOME/k9s/clusters/clusterX/contextY/benchmarks.yaml. Each cluster you connect to will have its own bench config file, containing the name of the K8s context for the cluster. Changes to this file should automatically update the PortForward view to indicate how you want to run your benchmarks.
Benchmark result reports are stored in $XDG_STATE_HOME/k9s/clusters/clusterX/contextY
Here is a sample benchmarks.yaml configuration. Please keep in mind this file will likely change in subsequent releases!
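A hedged sketch of such a file; the keys and values below are indicative of the benchmark schema, so check the file K9s generates for your cluster:

```yaml
# $XDG_DATA_HOME/k9s/clusters/clusterX/contextY/benchmarks.yaml
benchmarks:
  # Defaults applied when no container/service override exists
  defaults:
    concurrency: 2
    requests: 1000
  containers:
    # Container-level override keyed as namespace/pod-name/container-name
    default/nginx/nginx:
      concurrency: 2
      requests: 10000
      http:
        path: /
  services:
    # Service-level override keyed as namespace/service-name
    default/nginx:
      concurrency: 5
      requests: 500
      http:
        method: GET
        path: /
```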
K9s RBAC FU
On RBAC enabled clusters, you would need to give your users/groups capabilities so that they can use K9s to explore their Kubernetes cluster. K9s needs minimally read privileges at both the cluster and namespace level to display resources and metrics.
The rules below are just suggestions. You will need to customize them based on your environment policies. If you need to edit/delete resources, extra Fu will be necessary.
Cluster RBAC scope
Namespace RBAC scope
If your users are constrained to certain namespaces, K9s will need the following role to enable read access to namespaced resources.
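A sketch of such a namespace-scoped role using the standard Kubernetes RBAC schema; the namespace, resource list and user name are illustrative, so tailor them to your policies:

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k9s
  namespace: default
rules:
  # Read access to core resources K9s needs to render its views
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "events"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k9s
  namespace: default
subjects:
  # Illustrative user name
  - kind: User
    name: fernand
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: k9s
  apiGroup: rbac.authorization.k8s.io
```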
Skins
Example: Dracula Skin ;)
You can style K9s based on your own sense of look and style. Skins are YAML files that enable a user to change the K9s presentation layer. See this repo's skins directory for examples.
You can skin k9s by default by specifying a UI.skin attribute. You can also change K9s skins based on the context you are connecting to. In this case, you can specify a skin field on your cluster config, aka skin: dracula (just the name of the skin file without the extension!), and copy this repo's skins/dracula.yaml to the $XDG_CONFIG_HOME/k9s/skins/ directory.
In the case where your cluster spans several contexts, you can add a skin context configuration to your context configuration. This is a collection of {context_name, skin} tuples (please see example below!)
Colors can be defined by name or using a hex representation. Of recent, we've added a color named default to indicate a transparent background color, to preserve your terminal background color settings if so desired.
To skin a specific context, the file in-the-navy.yaml must be present in your skins directory.
You can also specify a default skin for all contexts in the root k9s config file as so:
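A sketch of that root-config setting, per the UI.skin attribute mentioned above; it assumes the named skin file (here in-the-navy.yaml) is installed in your skins directory:

```yaml
# $XDG_CONFIG_HOME/k9s/config.yaml
k9s:
  ui:
    # Name of a skin file in $XDG_CONFIG_HOME/k9s/skins/, without the extension
    skin: in-the-navy
```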
Contributors
Without the contributions from these fine folks, this project would be a total dud!
Known Issues
This is still work in progress! If something is broken or there's a feature
that you want, please file an issue and if so inclined submit a PR!
K9s will most likely blow up if...
You're running older versions of Kubernetes. K9s works best on later Kubernetes versions.
You don't have enough RBAC fu to manage your cluster.
ATTA Girls/Boys!
K9s sits on top of many open source projects and libraries. Our sincere
appreciations to all the OSS contributors that work nights and weekends
to make this project a reality!
Meet The Core Team!
If you have chops in GO and K8s and would like to offer your time to help maintain and enhance this project, please reach out to me.
We always enjoy hearing from folks who benefit from our work!
Contributions Guideline