Duplicate Key in Helm Chart #123

Open
sheepforce opened this issue Mar 12, 2025 · 0 comments

Dear developers,
I'm currently trying to set up a NOMAD-Oasis instance on a Kubernetes cluster via the Helm chart published at https://gitlab.mpcdf.mpg.de/api/v4/projects/2187/packages/helm/latest.
We are using Flux for Helm deployments on our cluster, and Flux refuses to deploy the chart with the following error:

error":"error while running post render on files: map[string]interface {}(nil): yaml: unmarshal errors:\n  line 9: mapping key \"app.kubernetes.io/name\" already defined at line 7","errorVerbose":"map[string]interface {}(nil): yaml: unmarshal errors:\n  line 9: mapping key \"app.kubernetes.io/name\" already defined at line 7

Inspecting the rendered chart via helm template nomad/nomad --version=1.1.0, I can find the duplicate key that Flux complains about, namely metadata.labels."app.kubernetes.io/name" in nomad/templates/configmap.yml, around line 194 of my rendered output.
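
For quick reference, this is the offending metadata block as it appears in the full rendered output further down; note the two app.kubernetes.io/name entries:

# Source: nomad/templates/configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-nomad-configmap
  labels:
    app.kubernetes.io/name: nomad-configmap
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm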

Here is the rendered template:

---
# Source: nomad/charts/elasticsearch/templates/poddisruptionbudget.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: "elasticsearch-master-pdb"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "elasticsearch-master"
---
# Source: nomad/charts/mongodb/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-mongodb
  namespace: "default"
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/version: 7.0.2
    helm.sh/chart: mongodb-14.0.4
secrets:
  - name: release-name-mongodb
automountServiceAccountToken: true
---
# Source: nomad/charts/rabbitmq/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-rabbitmq
  namespace: "default"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-11.2.2
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "3.11.5"
automountServiceAccountToken: true
secrets:
  - name: release-name-rabbitmq
---
# Source: nomad/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-nomad
  labels:
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
  # automountServiceAccountToken: true
---
# Source: nomad/charts/mongodb/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-mongodb
  namespace: default
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/version: 7.0.2
    helm.sh/chart: mongodb-14.0.4
    app.kubernetes.io/component: mongodb
type: Opaque
data:
  mongodb-root-password: "ZXBrU3pFejVOaw=="
---
# Source: nomad/charts/rabbitmq/templates/config-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-rabbitmq-config
  namespace: "default"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-11.2.2
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "3.11.5"
type: Opaque
data:
  rabbitmq.conf: |-
    IyMgVXNlcm5hbWUgYW5kIHBhc3N3b3JkCiMjCmRlZmF1bHRfdXNlciA9IHJhYmJpdG1xCiMjIENsdXN0ZXJpbmcKIyMKY2x1c3Rlcl9mb3JtYXRpb24ucGVlcl9kaXNjb3ZlcnlfYmFja2VuZCAgPSByYWJiaXRfcGVlcl9kaXNjb3ZlcnlfazhzCmNsdXN0ZXJfZm9ybWF0aW9uLms4cy5ob3N0ID0ga3ViZXJuZXRlcy5kZWZhdWx0CmNsdXN0ZXJfZm9ybWF0aW9uLm5vZGVfY2xlYW51cC5pbnRlcnZhbCA9IDEwCmNsdXN0ZXJfZm9ybWF0aW9uLm5vZGVfY2xlYW51cC5vbmx5X2xvZ193YXJuaW5nID0gdHJ1ZQpjbHVzdGVyX3BhcnRpdGlvbl9oYW5kbGluZyA9IGF1dG9oZWFsCiMgcXVldWUgbWFzdGVyIGxvY2F0b3IKcXVldWVfbWFzdGVyX2xvY2F0b3IgPSBtaW4tbWFzdGVycwojIGVuYWJsZSBndWVzdCB1c2VyCmxvb3BiYWNrX3VzZXJzLmd1ZXN0ID0gZmFsc2UKI2RlZmF1bHRfdmhvc3QgPSBkZWZhdWx0LXZob3N0CiNkaXNrX2ZyZWVfbGltaXQuYWJzb2x1dGUgPSA1ME1C
---
# Source: nomad/charts/rabbitmq/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-rabbitmq
  namespace: "default"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-11.2.2
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "3.11.5"
type: Opaque
data:
  rabbitmq-password: "cmFiYml0bXE="
  
  rabbitmq-erlang-cookie: "U1dRT0tPRFNRQUxSUENMTk1FUUc="
---
# Source: nomad/charts/mongodb/templates/common-scripts-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-mongodb-common-scripts
  namespace: "default"
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/version: 7.0.2
    helm.sh/chart: mongodb-14.0.4
    app.kubernetes.io/component: mongodb
data:
  startup-probe.sh: |
    #!/bin/bash
    mongosh  $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep 'true'
  readiness-probe.sh: |
    #!/bin/bash
    # Run the proper check depending on the version
    [[ $(mongod -version | grep "db version") =~ ([0-9]+\.[0-9]+\.[0-9]+) ]] && VERSION=${BASH_REMATCH[1]}
    . /opt/bitnami/scripts/libversion.sh
    VERSION_MAJOR="$(get_sematic_version "$VERSION" 1)"
    VERSION_MINOR="$(get_sematic_version "$VERSION" 2)"
    VERSION_PATCH="$(get_sematic_version "$VERSION" 3)"
    readiness_test='db.isMaster().ismaster || db.isMaster().secondary'
    if [[ ( "$VERSION_MAJOR" -ge 5 ) || ( "$VERSION_MAJOR" -ge 4 && "$VERSION_MINOR" -ge 4 && "$VERSION_PATCH" -ge 2 ) ]]; then
        readiness_test='db.hello().isWritablePrimary || db.hello().secondary'
    fi
    mongosh  $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval "${readiness_test}" | grep 'true'
  ping-mongodb.sh: |
    #!/bin/bash
    mongosh  $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval "db.adminCommand('ping')"
---
# Source: nomad/templates/app/configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-nomad-configmap-app-uvicorn-log-config
  labels:
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: app
data:
  uvicorn.log.conf: |
    [loggers]
    keys=root

    [handlers]
    keys=console, logstash

    [formatters]
    keys=generic, logstash

    [logger_root]
    level=INFO
    handlers=console, logstash

    [handler_console]
    class=StreamHandler
    formatter=generic
    args=(sys.stdout, )

    [handler_logstash]
    class=nomad.utils.structlogging.LogstashHandler
    formatter=logstash

    [formatter_generic]
    format=%(asctime)s [%(process)d] [%(levelname)s] %(message)s
    datefmt=%Y-%m-%d %H:%M:%S
    class=logging.Formatter

    [formatter_logstash]
    class=nomad.utils.structlogging.LogstashFormatter
---
# Source: nomad/templates/configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-nomad-configmap
  labels:
    app.kubernetes.io/name: nomad-configmap
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
data:
  nomad.yaml: |
    meta:
      deployment: "release-name"
      service: "app"
      homepage: "https://nomad-lab.eu"
      source_url: "https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR"
      maintainer_email: "markus.scheidgen@physik.hu-berlin.de"
      beta:
        label: "latest"
        isBeta: false
        isTest: false
        usesBetaData: false
        officialUrl: "https://nomad-lab.eu/prod/v1/gui"
    process:
      reuse_parser: true
      index_materials: true
      rfc3161_skip_published: false
    reprocess:
      rematch_published: true
      reprocess_existing_entries: true
      use_original_parser: false
      add_matched_entries_to_published:  false
      delete_unmatched_published_entries: false
      index_individual_entries: false
    fs:
      tmp: ".volumes/fs/staging/tmp"
      staging_external: /nomad/fairdi/latest/fs/staging
      public_external: /nomad/fairdi/latest/fs/public
      north_home_external: /nomad/fairdi/latest/fs/north/users
      prefix_size: 1
      working_directory: /app
      
    logstash:
      enabled: true
      host: ""
      tcp_port: 5000
    services:
      api_host: "nomad-lab.eu"
      api_port: 80
      api_base_path: "/fairdi/nomad/latest"
      api_secret: "defaultApiSecret"
      https: true
      upload_limit: 10
      admin_user_id: 00000000-0000-0000-0000-000000000000
      aitoolkit_enabled: false
    rabbitmq:
      host: "release-name-rabbitmq"
    elastic:
      host: "elasticsearch-master"
      port: 9200
      timeout: 60
      bulk_timeout: 600
      bulk_size: 1000
      entries_per_material_cap: 1000
      
      entries_index: "fairdi_nomad_latest_entries_v1"
      materials_index: "fairdi_nomad_latest_materials_v1"
      
    mongo:
      
      
      host: "mongodb://"
      
      port: 27017
      db_name: "fairdi_nomad_latest"
    mail:
      enabled: false
      host: "localhost"
      
      port: 25
      
      
      
      from_address: "support@nomad-lab.eu"
      
      cc_address: null
      
    celery:
      routing: "queue"
      timeout: 7200
      acks_late: false
    client:
      user: "admin"
    keycloak:
      server_url: "https://nomad-lab.eu/keycloak/auth/"
      realm_name: "fairdi_nomad_test"
      username: "admin"
      client_id: "nomad_public"
    datacite:
      enabled: false
      prefix: "10.17172"
    
    north:
      enabled: false
      hub_host: "nomad-lab.eu"
      hub_port: 80
      hub_service_api_token: "secret-token"
---
# Source: nomad/templates/proxy/configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-nomad-configmap-proxy
  labels:
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: proxy
data:
  nginx.conf: |
    server {
      listen        80;
      server_name   www.example.com;
      proxy_set_header Host $host;

      proxy_connect_timeout 10;
      proxy_read_timeout 60;
      proxy_pass_request_headers      on;
      underscores_in_headers          on;
      gzip_min_length     1000;
      gzip_buffers        4 8k;
      gzip_http_version   1.0;
      gzip_disable        "msie6";
      gzip_vary           on;
      gzip on;
      gzip_proxied any;
      gzip_types
          text/css
          text/javascript
          text/xml
          text/plain
          application/javascript
          application/x-javascript
          application/json;

      location / {
        proxy_pass http://release-name-nomad-app:8000;
      }

      location ~ /fairdi/nomad/latest\/?(gui)?$ {
        rewrite ^ /fairdi/nomad/latest/gui/ permanent;
      }

      location /fairdi/nomad/latest/gui/ {
        proxy_intercept_errors on;
        error_page 404 = @redirect_to_index;
        proxy_pass http://release-name-nomad-app:8000;
      }

      location @redirect_to_index {
        rewrite ^ /fairdi/nomad/latest/gui/index.html break;
        proxy_pass http://release-name-nomad-app:8000;
      }

      location /fairdi/nomad/latest/docs/ {
        proxy_intercept_errors on;
        error_page 404 = @redirect_to_index_docs;
        proxy_pass http://release-name-nomad-app:8000;
      }

      location @redirect_to_index_docs {
        rewrite ^ /fairdi/nomad/latest/docs/index.html break;
        proxy_pass http://release-name-nomad-app:8000;
      }

      location ~ \/gui\/(service-worker\.js|meta\.json)$ {
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        if_modified_since off;
        expires off;
        etag off;
        proxy_pass http://release-name-nomad-app:8000;
      }

      location ~ /api/v1/uploads(/?$|.*/raw|.*/bundle?$)  {
        client_max_body_size 35g;
        proxy_request_buffering off;
        proxy_pass http://release-name-nomad-app:8000;
      }

      location ~ /api/v1/.*/download {
        proxy_buffering off;
        proxy_pass http://release-name-nomad-app:8000;
      }

      location ~ /api/v1/entries/edit {
        proxy_buffering off;
        proxy_read_timeout 60;
        proxy_pass http://release-name-nomad-app:8000;
      }
    }
---
# Source: nomad/charts/mongodb/templates/standalone/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: release-name-mongodb
  namespace: "default"
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/version: 7.0.2
    helm.sh/chart: mongodb-14.0.4
    app.kubernetes.io/component: mongodb
  annotations:
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "8Gi"
---
# Source: nomad/charts/rabbitmq/templates/role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: release-name-rabbitmq-endpoint-reader
  namespace: "default"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-11.2.2
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "3.11.5"
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create"]
---
# Source: nomad/charts/rabbitmq/templates/rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: release-name-rabbitmq-endpoint-reader
  namespace: "default"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-11.2.2
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "3.11.5"
subjects:
  - kind: ServiceAccount
    name: release-name-rabbitmq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: release-name-rabbitmq-endpoint-reader
---
# Source: nomad/charts/elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Helm"
    release: "release-name"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    {}
spec:
  type: ClusterIP
  selector:
    release: "release-name"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  publishNotReadyAddresses: false
  ports:
  - name: http
    protocol: TCP
    port: 9200
  - name: transport
    protocol: TCP
    port: 9300
---
# Source: nomad/charts/elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master-headless
  labels:
    heritage: "Helm"
    release: "release-name"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
  # Create endpoints also if the related pod isn't ready
  publishNotReadyAddresses: true
  selector:
    app: "elasticsearch-master"
  ports:
  - name: http
    port: 9200
  - name: transport
    port: 9300
---
# Source: nomad/charts/mongodb/templates/standalone/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-mongodb
  namespace: "default"
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/version: 7.0.2
    helm.sh/chart: mongodb-14.0.4
    app.kubernetes.io/component: mongodb
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: "mongodb"
      port: 27017
      targetPort: mongodb
      nodePort: null
  selector:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/component: mongodb
---
# Source: nomad/charts/rabbitmq/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-rabbitmq-headless
  namespace: "default"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-11.2.2
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "3.11.5"
spec:
  clusterIP: None
  ports:
    - name: epmd
      port: 4369
      targetPort: epmd
    - name: amqp
      port: 5672
      targetPort: amqp
    - name: dist
      port: 25672
      targetPort: dist
    - name: http-stats
      port: 15672
      targetPort: stats
  selector: 
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/instance: release-name
  publishNotReadyAddresses: true
---
# Source: nomad/charts/rabbitmq/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-rabbitmq
  namespace: "default"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-11.2.2
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "3.11.5"
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: amqp
      port: 5672
      targetPort: amqp
      nodePort: null
    - name: epmd
      port: 4369
      targetPort: epmd
      nodePort: null
    - name: dist
      port: 25672
      targetPort: dist
      nodePort: null
    - name: http-stats
      port: 15672
      targetPort: stats
      nodePort: null
  selector: 
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/instance: release-name
---
# Source: nomad/templates/app/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-nomad-app
  labels:
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: app
spec:
  type: ClusterIP
  ports:
    - port: 8000
      # targetPort: http
      targetPort: 8000
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: app
---
# Source: nomad/templates/proxy/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-nomad-proxy
  labels:
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: proxy
spec:
  type: ClusterIP
  ports:
    - port: 80
      # targetPort: http
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: proxy
---
# Source: nomad/charts/mongodb/templates/standalone/dep-sts.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-mongodb
  namespace: "default"
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/version: 7.0.2
    helm.sh/chart: mongodb-14.0.4
    app.kubernetes.io/component: mongodb
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/name: mongodb
      app.kubernetes.io/component: mongodb
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: mongodb
        app.kubernetes.io/version: 7.0.2
        helm.sh/chart: mongodb-14.0.4
        app.kubernetes.io/component: mongodb
    spec:
      
      serviceAccountName: release-name-mongodb
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: release-name
                    app.kubernetes.io/name: mongodb
                    app.kubernetes.io/component: mongodb
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      securityContext:
        fsGroup: 1001
        sysctls: []
      
      containers:
        - name: mongodb
          image: docker.io/bitnami/mongodb:7.0.2-debian-11-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            runAsGroup: 0
            runAsNonRoot: true
            runAsUser: 1001
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MONGODB_ROOT_USER
              value: "root"
            - name: MONGODB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: release-name-mongodb
                  key: mongodb-root-password
            - name: ALLOW_EMPTY_PASSWORD
              value: "no"
            - name: MONGODB_SYSTEM_LOG_VERBOSITY
              value: "0"
            - name: MONGODB_DISABLE_SYSTEM_LOG
              value: "no"
            - name: MONGODB_DISABLE_JAVASCRIPT
              value: "no"
            - name: MONGODB_ENABLE_JOURNAL
              value: "yes"
            - name: MONGODB_PORT_NUMBER
              value: "27017"
            - name: MONGODB_ENABLE_IPV6
              value: "no"
            - name: MONGODB_ENABLE_DIRECTORY_PER_DB
              value: "no"
          ports:
            - name: mongodb
              containerPort: 27017
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 30
            periodSeconds: 20
            successThreshold: 1
            timeoutSeconds: 10
            exec:
              command:
                - /bitnami/scripts/ping-mongodb.sh
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - /bitnami/scripts/readiness-probe.sh
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: datadir
              mountPath: /bitnami/mongodb
              subPath: 
            - name: common-scripts
              mountPath: /bitnami/scripts
      volumes:
        - name: common-scripts
          configMap:
            name: release-name-mongodb-common-scripts
            defaultMode: 0550
        - name: datadir
          persistentVolumeClaim:
            claimName: release-name-mongodb
---
# Source: nomad/templates/app/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-nomad-app
  labels:
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nomad
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: app
  template:
    metadata:
      labels:
        helm.sh/chart: nomad-1.1.0
        app.kubernetes.io/name: nomad
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: "1.2.2"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: app
    spec:
      serviceAccountName: release-name-nomad
      containers:
        - name: nomad-app
          image: "gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:latest"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: "/fairdi/nomad/latest/alive"
              port: 8000
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 70
          readinessProbe:
            httpGet:
              path: "/fairdi/nomad/latest/alive"
              port: 8000
            initialDelaySeconds: 90
            periodSeconds: 3
            timeoutSeconds: 63
          volumeMounts:
            - mountPath: /app/nomad.yaml
              name: nomad-conf
              subPath: nomad.yaml
            - mountPath: /app/uvicorn.log.conf
              name: uvicorn-log-conf
              subPath: uvicorn.log.conf
            - mountPath: /app/.volumes/fs/public
              name: public-volume
            - mountPath: /app/.volumes/fs/staging
              name: staging-volume
            - mountPath: /app/.volumes/fs/north/users
              name: north-home-volume
            - mountPath: /nomad
              name: nomad-volume
          env:
            - name: NOMAD_META_SERVICE
              value: "app"
            - name: NOMAD_CONSOLE_LOGLEVEL
              value: "INFO"
            - name: NOMAD_LOGSTASH_LEVEL
              value: "INFO"
          command: ["python", "-m", "nomad.cli", "admin", "run", "app", "--log-config", "uvicorn.log.conf", "--with-gui", "--host", "0.0.0.0"]
      volumes:
        - name: uvicorn-log-conf
          configMap:
            name: release-name-nomad-configmap-app-uvicorn-log-config
        - name: app-run-script
          configMap:
            name: release-name-nomad-app-run-script
        - name: nomad-conf
          configMap:
            name: release-name-nomad-configmap
        - name: public-volume
          hostPath:
            path: /nomad/fairdi/latest/fs/public
            # type: Directory
        - name: staging-volume
          
          hostPath:
            path: /nomad/fairdi/latest/fs/staging
            # type: Directory
          
        - name: north-home-volume
          hostPath:
            path: /nomad/fairdi/latest/fs/north/users
            # type: Directory
        - name: nomad-volume
          hostPath:
            path: /nomad
            # type: Directory
---
# Source: nomad/templates/proxy/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-nomad-proxy
  labels:
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nomad
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: proxy
  template:
    metadata:
      labels:
        helm.sh/chart: nomad-1.1.0
        app.kubernetes.io/name: nomad
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: "1.2.2"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: proxy
    spec:
      # serviceAccountName: release-name-nomad
      containers:
        - name: nomad-proxy
          image: "nginx:1.13.9-alpine"
          imagePullPolicy: IfNotPresent
          command:
            - nginx
          args:
            - -g
            - daemon off;
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: "/fairdi/nomad/latest/gui/index.html"
              port: http
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 70
          readinessProbe:
            httpGet:
              path: "/fairdi/nomad/latest/gui/index.html"
              port: http
            initialDelaySeconds: 90
            periodSeconds: 3
            timeoutSeconds: 63
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              readOnly: true
              name: nginx-conf
            - mountPath: /var/log/nginx
              name: log
      volumes:
        - name: nginx-conf
          configMap:
            name: release-name-nomad-configmap-proxy
            items:
            - key: nginx.conf
              path: default.conf
        - name: log
          emptyDir: {}
---
# Source: nomad/templates/worker/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-nomad-worker
  labels:
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nomad
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: worker
  template:
    metadata:
      labels:
        helm.sh/chart: nomad-1.1.0
        app.kubernetes.io/name: nomad
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: "1.2.2"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: worker
    spec:
      serviceAccountName: release-name-nomad
      containers:
        - name: nomad-worker
          image: "gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:latest"
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /app/nomad.yaml
              name: nomad-conf
              subPath: nomad.yaml
            - mountPath: /app/.volumes/fs/public
              name: public-volume
            - mountPath: /app/.volumes/fs/staging
              name: staging-volume
            - mountPath: /nomad
              name: nomad-volume
          env:
            - name: CELERY_ACKS_LATE
              value: "True"
            - name: NOMAD_META_SERVICE
              value: "worker"
            - name: NOMAD_CONSOLE_LOG_LEVEL
              value: "ERROR"
            - name: NOMAD_LOGSTASH_LEVEL
              value: "INFO"
            - name: NOMAD_CELERY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          command: ["python", "-m", "celery", "-A", "nomad.processing", "worker", "-n", "$(NOMAD_CELERY_NODE_NAME)" , "--max-tasks-per-child", "128"]
          livenessProbe:
            exec:
              command:
              - bash
              - -c
              - NOMAD_LOGSTASH_LEVEL=WARNING python -m celery -A nomad.processing status | grep "${NOMAD_CELERY_NODE_NAME}:.*OK"
            initialDelaySeconds: 30
            periodSeconds: 120
            timeoutSeconds: 60
          readinessProbe:
            exec:
              command:
              - bash
              - -c
              - NOMAD_LOGSTASH_LEVEL=WARNING python -m celery -A nomad.processing status | grep "${NOMAD_CELERY_NODE_NAME}:.*OK"
            initialDelaySeconds: 30
            periodSeconds: 120
            timeoutSeconds: 60
      volumes:
        - name: nomad-conf
          configMap:
            name: release-name-nomad-configmap
        - name: public-volume
          hostPath:
            path: /nomad/fairdi/latest/fs/public
            # type: Directory
        - name: staging-volume
          
          hostPath:
            path: /nomad/fairdi/latest/fs/staging
            # type: Directory
          
        - name: nomad-volume
          hostPath:
            path: /nomad
            # type: Directory
---
# Source: nomad/charts/elasticsearch/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Helm"
    release: "release-name"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    esMajorVersion: "7"
spec:
  serviceName: elasticsearch-master-headless
  selector:
    matchLabels:
      app: "elasticsearch-master"
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 30Gi
  template:
    metadata:
      name: "elasticsearch-master"
      labels:
        release: "release-name"
        chart: "elasticsearch"
        app: "elasticsearch-master"
      annotations:
        
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      automountServiceAccountToken: true
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - "elasticsearch-master"
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 120
      volumes:
      enableServiceLinks: true
      initContainers:
      - name: configure-sysctl
        securityContext:
          runAsUser: 0
          privileged: true
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.17.3"
        imagePullPolicy: "IfNotPresent"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        resources:
          {}

      containers:
      - name: "elasticsearch"
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.17.3"
        imagePullPolicy: "IfNotPresent"
        readinessProbe:
          exec:
            command:
              - bash
              - -c
              - |
                set -e
                # If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
                # Once it has started only check that the node itself is responding
                START_FILE=/tmp/.es_start_file

                # Disable nss cache to avoid filling dentry cache when calling curl
                # This is required with Elasticsearch Docker using nss < 3.52
                export NSS_SDB_USE_CACHE=no

                http () {
                  local path="${1}"
                  local args="${2}"
                  set -- -XGET -s

                  if [ "$args" != "" ]; then
                    set -- "$@" $args
                  fi

                  if [ -n "${ELASTIC_PASSWORD}" ]; then
                    set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
                  fi

                  curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
                }

                if [ -f "${START_FILE}" ]; then
                  echo 'Elasticsearch is already running, lets check the node is healthy'
                  HTTP_CODE=$(http "/" "-w %{http_code}")
                  RC=$?
                  if [[ ${RC} -ne 0 ]]; then
                    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
                    exit ${RC}
                  fi
                  # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
                  if [[ ${HTTP_CODE} == "200" ]]; then
                    exit 0
                  elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
                    exit 0
                  else
                    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
                    exit 1
                  fi

                else
                  echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
                  if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
                    touch ${START_FILE}
                    exit 0
                  else
                    echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                    exit 1
                  fi
                fi
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 1000m
            memory: 2Gi
        env:
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: cluster.initial_master_nodes
            value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,"
          - name: discovery.seed_hosts
            value: "elasticsearch-master-headless"
          - name: cluster.name
            value: "elasticsearch"
          - name: network.host
            value: "0.0.0.0"
          - name: cluster.deprecation_indexing.enabled
            value: "false"
          - name: node.data
            value: "true"
          - name: node.ingest
            value: "true"
          - name: node.master
            value: "true"
          - name: node.ml
            value: "true"
          - name: node.remote_cluster_client
            value: "true"
        volumeMounts:
          - name: "elasticsearch-master"
            mountPath: /usr/share/elasticsearch/data
---
# Source: nomad/charts/rabbitmq/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-rabbitmq
  namespace: "default"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-11.2.2
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "3.11.5"
spec:
  serviceName: release-name-rabbitmq-headless
  podManagementPolicy: OrderedReady
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: rabbitmq
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rabbitmq
        helm.sh/chart: rabbitmq-11.2.2
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/version: "3.11.5"
      annotations:
        checksum/config: 67220425ebb08439a779a2f2a5745b9bc31ddc73f5aee789d43906f76bd0ad78
        checksum/secret: 69aec1d29567549f44699b0c6f69dad635f203041bc35cff95c5324947b40474
    spec:
      
      serviceAccountName: release-name-rabbitmq
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: release-name
                    app.kubernetes.io/name: rabbitmq
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      securityContext:
        fsGroup: 1001
      terminationGracePeriodSeconds: 120
      initContainers:
      containers:
        - name: rabbitmq
          image: docker.io/bitnami/rabbitmq:3.11.5-debian-11-r2
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/bash
                  - -ec
                  - |
                    if [[ -f /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh ]]; then
                        /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh -t "120" -d "false"
                    else
                        rabbitmqctl stop_app
                    fi
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: K8S_SERVICE_NAME
              value: release-name-rabbitmq-headless
            - name: K8S_ADDRESS_TYPE
              value: hostname
            - name: RABBITMQ_FORCE_BOOT
              value: "no"
            - name: RABBITMQ_NODE_NAME
              value: "rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
            - name: K8S_HOSTNAME_SUFFIX
              value: ".$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
            - name: RABBITMQ_MNESIA_DIR
              value: "/bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)"
            - name: RABBITMQ_LDAP_ENABLE
              value: "no"
            - name: RABBITMQ_LOGS
              value: "-"
            - name: RABBITMQ_ULIMIT_NOFILES
              value: "65536"
            - name: RABBITMQ_USE_LONGNAME
              value: "true"
            - name: RABBITMQ_ERL_COOKIE
              valueFrom:
                secretKeyRef:
                  name: release-name-rabbitmq
                  key: rabbitmq-erlang-cookie
            - name: RABBITMQ_LOAD_DEFINITIONS
              value: "no"
            - name: RABBITMQ_DEFINITIONS_FILE
              value: "/app/load_definition.json"
            - name: RABBITMQ_SECURE_PASSWORD
              value: "yes"
            - name: RABBITMQ_USERNAME
              value: "rabbitmq"
            - name: RABBITMQ_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: release-name-rabbitmq
                  key: rabbitmq-password
            - name: RABBITMQ_PLUGINS
              value: "rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_auth_backend_ldap"
          envFrom:
          ports:
            - name: amqp
              containerPort: 5672
            - name: dist
              containerPort: 25672
            - name: stats
              containerPort: 15672
            - name: epmd
              containerPort: 4369
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 120
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 20
            exec:
              command:
                - /bin/bash
                - -ec
                - rabbitmq-diagnostics -q ping
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 20
            exec:
              command:
                - /bin/bash
                - -ec
                - rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: configuration
              mountPath: /bitnami/rabbitmq/conf
            - name: data
              mountPath: /bitnami/rabbitmq/mnesia
      volumes:
        - name: configuration
          secret:
            secretName: release-name-rabbitmq-config
            items:
              - key: rabbitmq.conf
                path: rabbitmq.conf
        - name: data
          emptyDir: {}
---
# Source: nomad/charts/elasticsearch/templates/test/test-elasticsearch-health.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "release-name-znlzw-test"
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  securityContext:
    fsGroup: 1000
    runAsUser: 1000
  containers:
  - name: "release-name-gvupd-test"
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.17.3"
    imagePullPolicy: "IfNotPresent"
    command:
      - "sh"
      - "-c"
      - |
        #!/usr/bin/env bash -e
        curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s'
  restartPolicy: Never

Would it be possible to fix that duplication and publish an updated Helm chart?
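
Without having looked at the chart source itself, I assume the fix is simply to drop (or rename) the extra app.kubernetes.io/name: nomad-configmap label set in nomad/templates/configmap.yml, so that the ConfigMap metadata renders with a single app.kubernetes.io/name key, roughly like this (a sketch of the desired rendered output, not of the template source):

metadata:
  name: release-name-nomad-configmap
  labels:
    helm.sh/chart: nomad-1.1.0
    app.kubernetes.io/name: nomad
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.2.2"
    app.kubernetes.io/managed-by: Helm
    # if the nomad-configmap value is still needed, it could move to a
    # distinct key, e.g. app.kubernetes.io/component: configmap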

Thanks and all the best,
Phillip
