Vault, developed by HashiCorp, is an identity-based platform for managing secrets and encryption. It provides encryption services that are gated by authentication and authorization methods, ensuring secure, auditable, and restricted access to secrets. Secrets and other sensitive data can be secured, stored, and protected through a UI, CLI, or HTTP API. A secret is anything important that you want to keep safe, such as passwords, keys, or certificates. Vault makes it easy to manage all these secrets securely, controlling who can access them and keeping track of who does.
Vault operates by using tokens, which are linked to client policies. These policies determine what actions and paths a client can access. Tokens can be manually created and assigned to clients, or clients can obtain them by logging in. The main steps in Vault's workflow are:
- Authenticate: Clients prove their identity to Vault, which then generates a token linked to their policy
- Validation: Vault validates the client against third-party trusted sources, such as GitHub
- Access: Vault grants access to secrets, keys, and encryption capabilities by issuing a token based on the policies associated with the client's identity. The client can then use this Vault token for future operations
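For illustration only, here is a hedged CLI sketch of this flow using the userpass auth method; the username, password, and secret path are placeholders rather than values used by this project:

```bash
# Authenticate: log in with the userpass auth method (GitHub, LDAP, AppRole, etc. work similarly)
vault login -method=userpass username=alice password='example-password'

# Access: the token returned by the login is cached by the CLI and sent with subsequent requests
vault kv get secret/myapp/config
```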
- Many companies nowadays face a challenge known as "credentials sprawl." This means that passwords, API keys, and other credentials are scattered throughout their systems, stored in various places like plain text, application source code, configuration files, and more. This widespread distribution of credentials makes it hard to keep track of who has access to what. Moreover, storing credentials in plain text poses a significant security risk, leaving companies vulnerable to both internal and external threats.
- Vault offers secure secret storage by allowing the storage of arbitrary key/value secrets. Before writing these secrets to persistent storage, Vault encrypts them. This means that even if someone gains access to the raw storage, they won't be able to access your secrets without the proper authorization.
- Leasing and Renewal: All secrets in Vault have a lease associated with them. At the end of the lease, Vault automatically revokes that secret. Clients can renew leases via the built-in renew APIs (see the sketch below).
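As a small sketch of the KV and lease workflow from the CLI (the secret/ mount, paths, and lease ID below are placeholders, not values from this repository):

```bash
# Store an arbitrary key/value secret; Vault encrypts it before writing it to storage
vault kv put secret/myapp db_password='s3cr3t'

# Read it back (allowed only if the client's token policy covers this path)
vault kv get secret/myapp

# Dynamic secrets come with a lease; renew one before it expires (lease ID is a placeholder)
vault lease renew database/creds/readonly/2f6a614c-4aa2-example
```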
The Bank-Vaults Secret Operator is a Kubernetes operator that automates the lifecycle management of Vault instances running within Kubernetes clusters. It leverages Kubernetes custom resources to define and manage Vault configurations, such as authentication methods, policies, and secret engines. In some scenarios, organizations may choose to deploy Vault outside of Kubernetes clusters, either for centralized management or due to existing infrastructure constraints. An external Vault cluster refers to a Vault deployment that runs independently of Kubernetes, typically managed on virtual machines, cloud instances, or on-premises servers.
Bank-Vaults Secret Injection Webhook is a specialized tool designed to facilitate secure secret management for individual applications running within a Kubernetes cluster. By integrating with HashiCorp Vault, the webhook retrieves secrets and injects them as environment variables tailored to the needs of the target application. By leveraging it, organizations can effectively manage secrets, enhance application security, and maintain operational integrity within their Kubernetes deployments.
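As a rough illustration (not part of this repository's roles), a Deployment can reference a Vault path in an environment variable and let the webhook resolve it at injection time; the annotation, Vault address, and secret path below are assumptions based on the Bank-Vaults documentation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        # Tells the mutating webhook which Vault instance to use (assumed address)
        vault.security.banzaicloud.io/vault-addr: "https://vault.example.com:8200"
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          env:
            # The webhook replaces this reference with the actual secret value
            - name: DB_PASSWORD
              value: "vault:secret/data/myapp#db_password"
```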
- Reduced Configuration: With Raft backend, there's no need to configure Vault to connect to external providers as a client, which eliminates additional setup steps and potential points of failure.
- Enhanced Security: Since Raft is an integral part of Vault, it allows for tighter integration and control over security measures. This can lead to improved security posture as there are fewer external dependencies and potential attack vectors.
- Performance: The Raft backend can offer improved performance compared to external storage systems, as it operates within the same infrastructure as Vault itself, reducing the latency and overhead of external network communication (a sample storage stanza is sketched after this list).
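For reference, a minimal Raft storage stanza in the Vault server configuration looks roughly like this; the paths, node ID, and addresses are placeholders, not the exact values this role generates:

```hcl
storage "raft" {
  path    = "/opt/vault/data"   # local directory where Raft data is kept
  node_id = "leader"            # unique per node (leader, follower1, follower2)
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = true            # illustration only; enable TLS in production
}

# Addresses the other cluster members use to reach this node
api_addr     = "http://leader_ip:8200"
cluster_addr = "http://leader_ip:8201"
```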
- High Availability: With multiple nodes, Vault can continue to operate even if one or more nodes fail. This ensures that critical services relying on Vault can continue running without interruption.
- Scalability: As the demand for secrets management grows, a multi-node cluster allows for horizontal scaling by adding more nodes to distribute the workload and handle increased traffic.
- Fault Tolerance: Multi-node clusters provide fault tolerance by replicating data across nodes. In case of node failure, data can be seamlessly retrieved from other nodes, preventing data loss or service disruptions.
- Load Balancing: A cluster enables load balancing of client requests across multiple nodes, improving performance and resource utilization, reducing the risk of single points of failure, and enhancing overall system reliability (commands for verifying cluster membership are shown after this list).
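Once the nodes are running, cluster membership can be checked with the standard Vault CLI; the leader address below is a placeholder:

```bash
# On a follower, join the Raft cluster by pointing at the leader's API address
vault operator raft join http://leader_ip:8200

# List all peers and their roles (leader / follower)
vault operator raft list-peers
```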
To use these Ansible roles, you only need to change the variables in the defaults/main.yml file inside each role!
This Ansible role performs the following tasks:
- Installs Vault on all servers.
- Configures Vault according to specified settings.
- Starts Vault using systemd.
- Establishes a cluster with three nodes (the roles are adapted to a cluster of 3 nodes)
- Enables interconnection of Raft storage among the nodes.
Replace the IP addresses with your own:
vault_server_1: # * leader ip address
vault_server_2: # * follower 1 ip address
vault_server_3: # * follower 2 ip address
Exposes cluster metrics at the /v1/sys/metrics path, which lets you build monitoring on top of them (via Prometheus and Grafana). Enable it by setting:
prometheus: true
Saves all unseal keys and the root token on the leader server under /root/key. Enable it by setting:
vault_save_unseal_file: true
Automatically unseals Vault when it restarts. Enable it by setting the value to true; a bash script is then generated and stored at /usr/local/bin/unseal_vault.sh, and systemd runs it to unseal Vault after a restart (a sketch of such a script is shown below the snippet).
Warning
Unseal keys can be dangerous to store. This may give others access to the Vault!
vault_auto_unseal: true
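The real script is generated by the role; purely as a sketch of the idea, assuming the unseal keys were saved to /root/key by vault_save_unseal_file, such a script might look like this:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of an auto-unseal script; the role generates the actual one.
set -euo pipefail

export VAULT_ADDR="http://127.0.0.1:8200"

# Assume /root/key contains lines like "Unseal Key 1: <key>"; three keys are needed
for key in $(grep 'Unseal Key' /root/key | awk '{print $NF}' | head -n 3); do
  vault operator unseal "$key"
done
```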
leader
follower1
follower2
These names must be used as the hostnames of the servers; the tasks rely on them! A matching inventory is sketched below.
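A matching inventory might look like the sketch below; the group name and IP addresses are assumptions, since the repository's actual inventory layout is not shown here:

```ini
[vault]
leader     ansible_host=10.0.0.1
follower1  ansible_host=10.0.0.2
follower2  ansible_host=10.0.0.3
```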
This Ansible role performs the following tasks:
- Installs Certbot and Nginx
- Configures Nginx as a load balancer for the Vault cluster
- Obtains a TLS certificate for security!
- Point your domain name at the load balancer's IP
- Change the variables in the defaults/main.yml
- Change the email address to your own
- Use the domain which you set for the load balancer server's IP
- Specify the IP addresses of the Vault cluster servers (the roles are adapted to a cluster of 3 nodes); a sample Nginx configuration is sketched after this list
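The actual Nginx configuration is produced by the role's templates; as a rough sketch of the idea (server name, certificate paths, and IPs are placeholders), a load-balancing config could look like this:

```nginx
upstream vault_backend {
    # The three Vault nodes behind the load balancer
    server leader_ip:8200;
    server follower1_ip:8200;
    server follower2_ip:8200;
}

server {
    listen 443 ssl;
    server_name vault.example.com;  # the domain pointed at this server

    # Certificate paths as issued by Certbot (placeholders)
    ssl_certificate     /etc/letsencrypt/live/vault.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vault.example.com/privkey.pem;

    location / {
        proxy_pass http://vault_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```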
loadbalancer
This name must be used as the hostname of the load balancer server; it is used during the task!
Tip
You can run the Vault and Nginx load balancer roles at the same time!
This Ansible role performs the following tasks:
- Installs the Vault CLI
- Takes a backup at any time (through the raft operator)
- Sets a cronjob for backups
Change variables in the defaults/main.yml
- Create and specify a token from Vault for backup
- Specify the address of your Vault cluster
- Edit the path where backups are saved
- If the value of backup_cronjob is true, a cronjob is set to take a backup every night at 00:00 (see the example below)
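For reference, a manual Raft snapshot and the matching nightly cron entry would look roughly like this; the address, token, and backup path are placeholders:

```bash
# Take a snapshot of the Raft storage (requires a token with a suitable policy)
export VAULT_ADDR="https://your_vault_address:8200"
export VAULT_TOKEN="your_backup_token"
vault operator raft snapshot save /opt/vault/backups/vault-$(date +%F).snap

# Equivalent crontab entry for a backup every night at 00:00 (note the escaped %)
# 0 0 * * * vault operator raft snapshot save /opt/vault/backups/vault-$(date +\%F).snap
```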
Note
This Ansible role automates the setup and configuration of a Vault Secret Operator, facilitating seamless integration between Vault and Kubernetes for managing secrets.
This Ansible role performs the following tasks:
- Creates a ServiceAccount, Secret, and ClusterRoleBinding for Vault
- Creates a Vault policy for reading the collection data
- Creates a Kubernetes secret from the data at the specified path in the Vault collection and updates it continuously
Change the variables in the defaults/main.yml:
- Change the value of vso_create_directory to true if you don't already have a defined secret and collection!
- If you already have a secret, collection, and data, change the value of vso_create_directory to false and write the path
- Write your secret name to vso_secret_name
- Write your collection name to vso_secret_collection
- Set vso_vault_address to your domain or the IP address of your Vault
- Set vso_vault_token to your Vault token (example values are sketched after this list)
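Put together, the variables might be filled in roughly as follows; the values are placeholders and the exact variable set may differ from the role's defaults/main.yml:

```yaml
# Hypothetical example values for the variables described above
vso_create_directory: false            # a secret and collection already exist
vso_secret_name: myapp-secret          # name of the Kubernetes Secret to create
vso_secret_collection: myapp           # Vault collection (KV path) to read from
vso_vault_address: "https://vault.example.com:8200"
vso_vault_token: "s.xxxxxxxxxxxxxxxx"  # token with read access to the collection
```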
Note
Since this process is not automated, you have to do it manually, and you must set prometheus: true in the Vault role!
- Export your Vault address and token:
export VAULT_ADDR=https://your_vault_address
export VAULT_TOKEN="your vault token"
- Create a Vault policy:
vault policy write prometheus-metrics - << EOF
path "/sys/metrics" {
capabilities = ["read"]
}
EOF
- Create a token with the policy:
vault token create \
-field=token \
-policy prometheus-metrics \
> /tmp/prometheus-token
- Install Prometheus if it is not already installed
- Add a job to /etc/prometheus/prometheus.yml (requires sudo):
- job_name: vault
metrics_path: /v1/sys/metrics
params:
format: ['prometheus']
scheme: http
authorization:
credentials_file: /tmp/prometheus-token
static_configs:
- targets: ['leader_ip:8200', 'follower1_ip:8200', 'follower2_ip:8200'] # change ips with your vault servers
Warning
Don't use the domain name of the load balancer! Use the IP addresses of the Vault servers.
- Restart Prometheus.
systemctl restart prometheus
You should see all 3 nodes UP in your targets!
- Install Grafana if it is not already installed
- Select Connections and find Prometheus
- Enter the IP address where Prometheus is located and specify port 9090
- Select build dashboard and then select import dashboard
- Enter ID 12904 and select Load
- Result
Your thoughts, suggestions, and questions are invaluable to us! If you have any inquiries, ideas, or requests regarding your GitHub repository or project, please don't hesitate to reach out.
📧 Email: bexruzturobjonov0955@gmail.com
💬 Telegram: https://t.me/blvck_sudo
We would be delighted to collaborate with you in enhancing your project together!