[WIP] doc: Add Documentation by poornimag · Pull Request #152 · gluster/gcs · GitHub
This repository was archived by the owner on May 24, 2020. It is now read-only.

[WIP] doc: Add Documentation #152

Open · wants to merge 1 commit into base: master
32 changes: 32 additions & 0 deletions doc/Administration Guide/dynamic provisioning.md
@@ -0,0 +1,32 @@
## Dynamic provisioning of persistent volumes
Dynamic provisioning is a Kubernetes-native feature where the cluster can dynamically provision a volume when a user's PersistentVolumeClaim (PVC) does not match any pre-created persistent volume. For more information about dynamic provisioning, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic).

## Storage Classes
To allow dynamic provisioning of volumes, the administrator must create storage classes that define the classes of storage a user can provision volumes from. For more information about storage classes, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/).

When Gluster Container Storage is deployed in the Kubernetes cluster, two storage classes are created by default:

```
$ kubectl get storageclass
NAME PROVISIONER AGE
glusterfs-csi (default) org.gluster.glusterfs 70m
glustervirtblock-csi org.gluster.glustervirtblock 69m
local-storage kubernetes.io/no-provisioner 80m
```
#### glusterfs-csi Storage Class

##### Parameters and default values
Thin arbiter setup -> why use this as compared to replica 3

#### glustervirtblock-csi Storage Class

##### Parameters and default values

#### Create your own Storage Class
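
A minimal sketch of what a custom class could look like, reusing the `org.gluster.glusterfs` provisioner from the listing above (the class name is a placeholder, and driver-specific `parameters` are omitted since the supported keys depend on the CSI driver version):

```
# Hypothetical custom storage class for the GlusterFS CSI provisioner.
# metadata.name is arbitrary; driver-specific parameters are intentionally left out.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi-custom
provisioner: org.gluster.glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Once applied with `kubectl create -f <file>.yaml`, the new class appears in `kubectl get storageclass` alongside the defaults.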

## Create RWO PersistentVolumeClaim
- Specify storage classes?
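
A minimal sketch of an RWO claim against the default `glustervirtblock-csi` class, mirroring the example in the Quickstart Guide (the claim name and requested size are placeholders):

```
# Hypothetical RWO claim; name and size are placeholders.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterblock-csi-pv
spec:
  storageClassName: glustervirtblock-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```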

## Create RWX PersistentVolumeClaim
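
An RWX claim would typically target the file-based `glusterfs-csi` class rather than the block class, since block volumes are generally limited to a single writer. A minimal sketch under that assumption (name and size are placeholders):

```
# Hypothetical RWX claim; assumes the glusterfs-csi class supports ReadWriteMany.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-csi-rwx-pv
spec:
  storageClassName: glusterfs-csi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```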

## Create ROX PersistentVolumeClaim
14 changes: 14 additions & 0 deletions doc/Administration Guide/index.md
@@ -0,0 +1,14 @@
# Administration Guide

1. Creating Persistent Volumes

    * [Static provisioning](./static provisioning.md)
    * [Dynamic provisioning](./dynamic provisioning.md)

2. [Snapshot and Cloning Volumes](./snapshot.md)

3. [Scaling of the Storage](./scaling.md)

4. [Performance](./performance.md)

5. [Uninstalling Gluster Container Storage](./uninstall.md)
Empty file.
Empty file.
1 change: 1 addition & 0 deletions doc/Administration Guide/static provisioning.md
@@ -0,0 +1 @@

Empty file.
59 changes: 59 additions & 0 deletions doc/Deployment Guide/deploy.md
@@ -0,0 +1,59 @@
To deploy Gluster Container Storage (GCS), a Kubernetes cluster is required. If you do not already have a Kubernetes cluster, you can deploy Kubernetes on CentOS ([guide](https://github.com/kotreshhr/gcs-setup/tree/master/kube-cluster)), CoreOS[] and many others.

Once you have the Kubernetes cluster up and running, deploying GCS takes only a few simple steps.

#### Deploy GCS on a Kubernetes Cluster

The python tool way:

https://github.com/kotreshhr/gcs-setup/tree/master/kube-cluster
https://github.com/kotreshhr/gcs-setup/tree/master/gcs-setup
Note: Make sure that the devices do not have a filesystem and are not part of any PV/VG/LVM setup; if this is not ensured, the device addition step of the GCS deployment will fail.
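
One way to sanity-check a device before handing it to GCS (assumes the standard util-linux and LVM tools are installed on the node; replace `<device>` with the actual device name):

```
# List any filesystem signatures on the device; the FSTYPE column should be empty.
$ lsblk -f /dev/<device>
# Check whether the device is already registered as an LVM physical volume.
$ pvs /dev/<device>
```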

Or the Ansible way:

1. Install Ansible on the master node:<br/>
`$ yum install ansible`

2. To ensure Ansible is able to SSH into the nodes, add the public key of the master node (kube1) as an authorized
key on the other kube nodes<br/>
`$ cat ~/.ssh/id_rsa.pub | ssh root@kube2 'cat >> ~/.ssh/authorized_keys'`
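If `ssh-copy-id` is available on the master node, the same result can be achieved with `$ ssh-copy-id root@kube2` (repeat for each of the other kube nodes).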

3. On the Kubernetes master node, clone the gcs repo:
`$ git clone --recurse-submodules git@github.com:gluster/gcs.git`

4. Create an inventory file to be used by the GCS ansible playbook:
```
$ cat ~/gcs/deploy/inventory.yml
kube1 ansible_host=<node-ip> gcs_disks='["<device1>", "<device2>"]'
kube2 ansible_host=<node-ip> gcs_disks='["<device1>", "<device2>"]'
kube3 ansible_host=<node-ip> gcs_disks='["<device1>", "<device2>"]'

## Hosts that will be kubernetes master nodes
[kube-master]
kube1

## Hosts that will be kubernetes nodes
[kube-node]
kube1
kube2
kube3

## Hosts that will be used for GCS.
## Systems grouped here need to define 'gcs_disks' as hostvars, which are the disks that will be used
## by GCS to provision storage.
[gcs-node]
kube1
kube2
kube3
```
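
Optionally, before running the playbook, you can confirm that Ansible can reach every host in the inventory (path as in the example above):

```
$ ansible -i ~/gcs/deploy/inventory.yml all -m ping
```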

5. Deploy GCS on the Kubernetes cluster
Execute the deploy-gcs.yml playbook using Ansible from the master node.
`$ ansible-playbook -i <path to inventory file>/<name of inventory file> <path to deploy-gcs.yml file>/deploy-gcs.yml`
e.g.: `$ cd ~/gcs ; ansible-playbook -i deploy/inventory.yml deploy/deploy-gcs.yml`

6. Verify that the deployment was successful
`$ kubectl get pods -n gcs`
All the pods should be in the `Running` state.
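
If you prefer to watch the pods come up instead of re-running the command, the watch flag streams status changes as they happen:

```
$ kubectl get pods -n gcs -w
```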

99 changes: 99 additions & 0 deletions doc/Quickstart Guide/cluster.md
@@ -0,0 +1,99 @@
The Vagrant tooling in the gcs repository can be used to bring up a small cluster with 3 nodes, with Kubernetes and Gluster Container Storage (GCS) set up. Please follow the instructions below to deploy the cluster:

1. Clone the gcs repository
```
$ git clone --recurse-submodules git@github.com:gluster/gcs.git
```

2. Set up the cluster
```
$ cd gcs/deploy
$ vagrant up
```

If there are any timeout failures with GCS components during the deploy, please retry:
```
$ vagrant destroy -f
$ vagrant up
```

3. Verify the cluster is up and running
```
$ vagrant ssh kube1
[vagrant@kube1 ~]$ kubectl get pods -ngcs
NAME READY STATUS RESTARTS AGE
alertmanager-alert-0 2/2 Running 0 64m
alertmanager-alert-1 2/2 Running 0 64m
anthill-58b9b9b6f-c8rhw 1/1 Running 0 74m
csi-glusterfsplugin-attacher-0 2/2 Running 0 68m
csi-glusterfsplugin-nodeplugin-cvnzw 2/2 Running 0 68m
csi-glusterfsplugin-nodeplugin-h9hf6 2/2 Running 0 68m
csi-glusterfsplugin-nodeplugin-nhvr4 2/2 Running 0 68m
csi-glusterfsplugin-provisioner-0 4/4 Running 0 68m
csi-glustervirtblock-attacher-0 2/2 Running 0 66m
csi-glustervirtblock-nodeplugin-8kdjt 2/2 Running 0 66m
csi-glustervirtblock-nodeplugin-jwb77 2/2 Running 0 66m
csi-glustervirtblock-nodeplugin-q5cnp 2/2 Running 0 66m
csi-glustervirtblock-provisioner-0 3/3 Running 0 66m
etcd-988db4s64f 1/1 Running 0 73m
etcd-gqds2t99bn 1/1 Running 0 74m
etcd-kbw8rpxfmr 1/1 Running 0 74m
etcd-operator-77bfcd6595-zvrdt 1/1 Running 0 75m
gluster-kube1-0 2/2 Running 1 73m
gluster-kube2-0 2/2 Running 1 72m
gluster-kube3-0 2/2 Running 1 72m
gluster-mixins-62jkz 0/1 Completed 0 64m
grafana-9df95dfb5-tqrqw 1/1 Running 0 64m
kube-state-metrics-86bc74fd4c-mh6kl 4/4 Running 0 65m
node-exporter-855rg 2/2 Running 0 64m
node-exporter-bxwg9 2/2 Running 0 64m
node-exporter-tm98k 2/2 Running 0 65m
prometheus-operator-6c4b6cfc76-lsgxb 1/1 Running 0 65m
prometheus-prometheus-0 3/3 Running 1 65m
prometheus-prometheus-1 3/3 Running 1 63m
```

4. Create your first PVC from Gluster Container Storage

RWO PVC:
```
$ cat pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterblock-csi-pv
spec:
  storageClassName: glustervirtblock-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```

RWX PVC:
```
$ cat pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-csi-pv
spec:
  storageClassName: glusterfs-csi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```
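
Whichever claim you use, apply it and check that it gets bound (file name as in the examples above):

```
$ kubectl create -f pvc.yaml
$ kubectl get pvc
```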

Note: This is a test cluster with very limited resources allocated to the cluster nodes, so any kind of scale or stress test on it is likely to lead to failures and unresponsive nodes. If you need to run scale or stress tests, please use the Deployment Guide[] to deploy GCS on nodes that meet the resource requirements.

5. Destroy the cluster
From the node where `vagrant up` was executed, run the following commands to destroy the Kubernetes cluster:
```
$ cd gcs/deploy
$ vagrant destroy -f
```
1 change: 1 addition & 0 deletions doc/Quickstart Guide/minikube.md
@@ -0,0 +1 @@
Steps to deploy gcs on minikube