Add docs for install customization by qmhu · Pull Request #607 · gocrane/crane
Merged · 1 commit merged on Nov 9, 2022
15 changes: 0 additions & 15 deletions deploy/metric-adapter/apiservice.yaml
@@ -1,20 +1,5 @@
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  service:
    name: metric-adapter
    namespace: crane-system
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100

---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
@@ -36,7 +36,7 @@ Since Crane is installed, the result is as follows.
```sh
NAME SERVICE AVAILABLE AGE
v1beta1.batch Local True 35d
v1beta1.custom.metrics.k8s.io crane-system/metric-adapter True 18d
v1beta1.custom.metrics.k8s.io Local True 18d
v1beta1.discovery.k8s.io Local True 35d
v1beta1.events.k8s.io Local True 35d
v1beta1.external.metrics.k8s.io crane-system/metric-adapter True 18d
@@ -47,7 +47,6 @@ v1beta1.metrics.k8s.io kube-system/metrics-service True
Remove the APIService installed by Crane:

```bash
kubectl delete apiservice v1beta1.custom.metrics.k8s.io
kubectl delete apiservice v1beta1.external.metrics.k8s.io
```

@@ -98,7 +97,7 @@ spec:

#### RemoteAdapter Capabilities

![](/images/remote-adapter.png)
![remote adapter](/images/remote-adapter.png)

Kubernetes allows an APIService to be backed by only one service, so to use both the metrics provided by Crane and the metrics provided by PrometheusAdapter within a cluster, Crane supports a RemoteAdapter to solve this problem.
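As a rough illustration, the forwarding target is configured on the metric-adapter Deployment; a minimal sketch, assuming the flag names below and a PrometheusAdapter service at `prometheus-adapter.monitoring:443` (both the flag names and the service coordinates are assumptions, so check the chart for the exact options):

```yaml
# Sketch only: forward metric queries that Crane cannot serve to an existing
# PrometheusAdapter. Flag names and service coordinates are assumptions.
spec:
  template:
    spec:
      containers:
        - name: metric-adapter
          args:
            - --remote-adapter=true
            - --remote-adapter-service-namespace=monitoring
            - --remote-adapter-service-name=prometheus-adapter
            - --remote-adapter-service-port=443
```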

@@ -241,8 +240,8 @@ Annotation in the Effective HPA adds the configuration according to the following rules

```yaml
annotations:
  # metric-query.autoscaling.crane.io is a fixed prefix, followed by the Metric name, which must match Metric.name in spec.metrics; Pods and External types are supported
  metric-query.autoscaling.crane.io/http_requests: "sum(rate(http_requests_total[5m])) by (pod)"
  # metric-query.autoscaling.crane.io is a fixed prefix, followed by the Metric type and Metric name; the name must match Metric.name in spec.metrics, and supported Metric types are Pods (pods) and External (external)
  metric-query.autoscaling.crane.io/pods.http_requests: "sum(rate(http_requests_total[5m])) by (pod)"
```
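Once an EffectiveHPA with such an annotation is applied, you can verify that the metric-adapter serves the metric; a sketch, assuming the `default` namespace and the metric name from the example above:

```bash
# List all custom metrics exposed by the metric-adapter.
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .
# Query the example metric for pods in the default namespace (namespace is an assumption).
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/pods.http_requests" | jq .
```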

<summary>sample-app-hpa.yaml</summary>
@@ -253,8 +252,8 @@ kind: EffectiveHorizontalPodAutoscaler
metadata:
  name: php-apache
  annotations:
    # metric-query.autoscaling.crane.io is a fixed prefix, followed by the Metric name, which must match Metric.name in spec.metrics; Pods and External types are supported
    metric-query.autoscaling.crane.io/http_requests: "sum(rate(http_requests_total[5m])) by (pod)"
    # metric-query.autoscaling.crane.io is a fixed prefix, followed by the Metric type and Metric name; the name must match Metric.name in spec.metrics, and supported Metric types are Pods (pods) and External (external)
    metric-query.autoscaling.crane.io/pods.http_requests: "sum(rate(http_requests_total[5m])) by (pod)"
spec:
  # ScaleTargetRef is the reference to the workload that should be scaled.
  scaleTargetRef:
@@ -328,7 +327,7 @@ spec:
    sampleInterval: 60s
    expressionQuery:
      expression: sum(rate(http_requests_total[5m])) by (pod)
    resourceIdentifier: crane_custom.pods_http_requests
    resourceIdentifier: pods.http_requests
    type: ExpressionQuery
  predictionWindowSeconds: 3600
  targetRef:
@@ -357,7 +356,7 @@ status:
value: "0.01683"
......
ready: true
resourceIdentifier: crane_custom.pods_http_requests
resourceIdentifier: pods.http_requests
```

Looking at the HPA object created by Effective HPA, you can observe that a Metric has been created based on custom metric predictions: `crane_custom.pods_http_requests`.
@@ -386,7 +385,7 @@ spec:
    type: Pods
  - pods:
      metric:
        name: crane_custom.pods_http_requests
        name: pods.http_requests
        selector:
          matchLabels:
            autoscaling.crane.io/effective-hpa-uid: 1322c5ac-a1c6-4c71-98d6-e85d07b22da0
50 changes: 49 additions & 1 deletion site/content/en/docs/Getting started/installation.md
@@ -85,6 +85,55 @@ helm install fadvisor -n crane-system --create-namespace crane/fadvisor
{{< /tab >}}
{{% /tabpane %}}

### Using an Existing Prometheus (Optional)

A production environment usually already has a Prometheus deployed. You can point Crane at it by modifying the Crane Chart release configuration with the following command, or by modifying the Craned Deployment container args directly (a sketch of the direct edit follows the command).

```bash
helm upgrade crane -n crane-system --set craned.containerArgs.prometheus-address=http://{prometheus-ip}:{port} --create-namespace crane/crane
```
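Alternatively, a minimal sketch of editing the Deployment in place, assuming the craned container is the first in the pod template and that the chart maps `craned.containerArgs.prometheus-address` to a `--prometheus-address` flag (replace the address with your Prometheus endpoint):

```bash
# Append --prometheus-address to the craned container args (sketch; if the flag
# already exists, edit it with `kubectl -n crane-system edit deployment craned` instead).
kubectl -n crane-system patch deployment craned --type=json -p='[
  {"op":"add","path":"/spec/template/spec/containers/0/args/-",
   "value":"--prometheus-address=http://prometheus-server.monitoring.svc.cluster.local:8080"}
]'
```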

In addition, the cost analytics of the Crane Dashboard require [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) to be deployed (the Prometheus chart installs it by default), and you need to add extra scrape configs (`extraScrapeConfigs`) to your Prometheus; see [this example](https://github.com/gocrane/helm-charts/blob/main/integration/prometheus/override_values.yaml#L56).

Finally, Fadvisor needs recording rules configured to aggregate cost data; see [this example](https://github.com/gocrane/helm-charts/blob/main/integration/prometheus/override_values.yaml#L6) for the rules to add to your Prometheus.
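If your Prometheus was installed from the community Helm chart, one way to apply both the scrape configs and the recording rules is to upgrade the release with the referenced override file; a sketch, assuming the release is named `prometheus` in namespace `crane-system`:

```bash
# Fetch the example override values referenced above (raw view of the same file).
curl -LO https://raw.githubusercontent.com/gocrane/helm-charts/main/integration/prometheus/override_values.yaml
# Re-apply the release with the extra scrape configs and recording rules.
helm upgrade prometheus prometheus-community/prometheus \
  -n crane-system -f override_values.yaml
```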

### Using an Existing Grafana (Optional)

The Crane Dashboard can embed Grafana reports in an iframe to show the cost distribution. If you want to embed an external Grafana into the Crane Dashboard, first modify the nginx configuration in the ConfigMap.

```bash
kubectl edit configmap -n monitor grafana
```

Change `grafana.{{ .Release.Namespace }}.svc.cluster.local` to the address of the existing Grafana server, and change the port in `http://$upstream_grafana:8082` to the existing Grafana server's port.

```nginx
location /grafana {
set $upstream_grafana grafana.{{ .Release.Namespace }}.svc.cluster.local;
proxy_connect_timeout 180;
proxy_send_timeout 180;
proxy_read_timeout 180;
proxy_pass http://$upstream_grafana:8082;
proxy_redirect off;
rewrite /grafana/(.*) /$1 break;
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

```
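For example, with a hypothetical external Grafana reachable at `grafana.example.com:3000`, the two edited lines would become:

```nginx
set $upstream_grafana grafana.example.com;  # external Grafana address (hypothetical)
proxy_pass http://$upstream_grafana:3000;   # external Grafana port (hypothetical)
```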

Next, configure your Grafana based on [this example](https://github.com/gocrane/helm-charts/blob/main/integration/grafana/override_values.yaml). Grafana supports embedding its panels, but the corresponding permission settings must be enabled:

- Make sure the Service configuration matches the nginx configuration above
- Configure datasources
- Configure dashboardProviders
- Configure dashboards
- Configure grafana.ini (a sketch follows this list)
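For the `grafana.ini` item, a minimal sketch of the Grafana chart values that allow iframe embedding; `allow_embedding` and anonymous access are standard Grafana options rather than Crane-specific settings, and whether to enable anonymous viewers is a policy decision:

```yaml
grafana.ini:
  security:
    # Allow Grafana panels to be rendered inside an iframe.
    allow_embedding: true
  auth.anonymous:
    # Optional: let embedded panels load without a Grafana login prompt.
    enabled: true
    org_role: Viewer
```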

### Deploying Crane-scheduler (Optional)
```bash
helm install scheduler -n crane-system --create-namespace crane/scheduler
```
@@ -176,7 +225,6 @@ echo "Dashboard link: http://${NODE_IP}:${PORT}"

### LoadBalancer


#### Quick Start

```bash
```
@@ -36,7 +36,7 @@ kubectl get apiservice
```bash
NAME SERVICE AVAILABLE AGE
v1beta1.batch Local True 35d
v1beta1.custom.metrics.k8s.io crane-system/metric-adapter True 18d
v1beta1.custom.metrics.k8s.io Local True 18d
v1beta1.discovery.k8s.io Local True 35d
v1beta1.events.k8s.io Local True 35d
v1beta1.external.metrics.k8s.io crane-system/metric-adapter True 18d
@@ -47,7 +47,6 @@ v1beta1.metrics.k8s.io kube-system/metrics-service True
Remove the APIService installed by Crane:

```bash
kubectl delete apiservice v1beta1.custom.metrics.k8s.io
kubectl delete apiservice v1beta1.external.metrics.k8s.io
```

@@ -98,7 +97,7 @@ spec:

#### RemoteAdapter Capabilities

![](../images/remote-adapter.png)
![remote adapter](/images/remote-adapter.png)

Kubernetes allows an APIService to be backed by only one service, so to use both the metrics provided by Crane and the metrics provided by PrometheusAdapter within a cluster, Crane supports a RemoteAdapter to solve this problem.

@@ -241,8 +240,8 @@ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .

```yaml
annotations:
  # metric-query.autoscaling.crane.io is a fixed prefix, followed by the Metric name, which must match Metric.name in spec.metrics; Pods and External types are supported
  metric-query.autoscaling.crane.io/http_requests: "sum(rate(http_requests_total[5m])) by (pod)"
  # metric-query.autoscaling.crane.io is a fixed prefix, followed by the Metric type and Metric name; the name must match Metric.name in spec.metrics, and supported Metric types are Pods (pods) and External (external)
  metric-query.autoscaling.crane.io/pods.http_requests: "sum(rate(http_requests_total[5m])) by (pod)"
```

<summary>sample-app-hpa.yaml</summary>
@@ -253,8 +252,8 @@ kind: EffectiveHorizontalPodAutoscaler
metadata:
  name: php-apache
  annotations:
    # metric-query.autoscaling.crane.io is a fixed prefix, followed by the Metric name, which must match Metric.name in spec.metrics; Pods and External types are supported
    metric-query.autoscaling.crane.io/http_requests: "sum(rate(http_requests_total[5m])) by (pod)"
    # metric-query.autoscaling.crane.io is a fixed prefix, followed by the Metric type and Metric name; the name must match Metric.name in spec.metrics, and supported Metric types are Pods (pods) and External (external)
    metric-query.autoscaling.crane.io/pods.http_requests: "sum(rate(http_requests_total[5m])) by (pod)"
spec:
  # ScaleTargetRef is the reference to the workload that should be scaled.
  scaleTargetRef:
@@ -328,7 +327,7 @@ spec:
    sampleInterval: 60s
    expressionQuery:
      expression: sum(rate(http_requests_total[5m])) by (pod)
    resourceIdentifier: crane_custom.pods_http_requests
    resourceIdentifier: pods.http_requests
    type: ExpressionQuery
  predictionWindowSeconds: 3600
  targetRef:
@@ -357,7 +356,7 @@ status:
value: "0.01683"
......
ready: true
resourceIdentifier: crane_custom.pods_http_requests
resourceIdentifier: pods.http_requests
```

Looking at the HPA object created by Effective HPA, you can observe that a Metric based on custom metric predictions has been created: `crane_custom.pods_http_requests`.
@@ -386,7 +385,7 @@ spec:
    type: Pods
  - pods:
      metric:
        name: crane_custom.pods_http_requests
        name: pods.http_requests
        selector:
          matchLabels:
            autoscaling.crane.io/effective-hpa-uid: 1322c5ac-a1c6-4c71-98d6-e85d07b22da0
49 changes: 49 additions & 0 deletions site/content/zh/docs/Getting started/installation.md
@@ -95,6 +95,55 @@ helm install fadvisor -n crane-system --create-namespace crane/fadvisor
{{< /tab >}}
{{% /tabpane %}}

### Using an External Prometheus (Optional)

In a production environment, you usually need to configure an external Prometheus at installation time. You can modify the Crane Chart release configuration with the following command, or modify the Craned Deployment container args directly.

```bash
helm upgrade crane -n crane-system --set craned.containerArgs.prometheus-address=http://{prometheus-ip}:{port} --create-namespace crane/crane
```

In addition, the cost views of the Crane Dashboard require [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) to be deployed (the Prometheus chart installs it by default), and extra scrape configs (`extraScrapeConfigs`) must be added to your Prometheus; see [this example](https://github.com/gocrane/helm-charts/blob/main/integration/prometheus/override_values.yaml#L56).

Finally, Fadvisor needs recording rules configured to aggregate cost data; see [this example](https://github.com/gocrane/helm-charts/blob/main/integration/prometheus/override_values.yaml#L6) for the rules to add to your Prometheus.

### Using an External Grafana (Optional)

The Crane Dashboard can embed Grafana reports in an iframe to show the cost distribution. If you want to embed an external Grafana into the Crane Dashboard, first modify the nginx configuration in the ConfigMap.

```bash
kubectl edit configmap -n monitor grafana
```

Change `grafana.{{ .Release.Namespace }}.svc.cluster.local` to the address of the external Grafana server, and change the port in `http://$upstream_grafana:8082` to the external Grafana server's port.

```nginx
location /grafana {
set $upstream_grafana grafana.{{ .Release.Namespace }}.svc.cluster.local;
proxy_connect_timeout 180;
proxy_send_timeout 180;
proxy_read_timeout 180;
proxy_pass http://$upstream_grafana:8082;
proxy_redirect off;
rewrite /grafana/(.*) /$1 break;
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

```

Next, configure your Grafana based on [this example](https://github.com/gocrane/helm-charts/blob/main/integration/grafana/override_values.yaml). Grafana supports embedding its panels, but the corresponding permission settings must be enabled:

- Make sure the Service configuration matches the nginx configuration
- Configure datasources
- Configure dashboardProviders
- Configure dashboards
- Configure grafana.ini

### Installing Crane-scheduler (Optional)
```console
helm install scheduler -n crane-system --create-namespace crane/scheduler
```