
Added imageSpec for registry/repo:tag overrides #447


Merged
17 commits merged on May 27, 2025

Conversation

blang9238 (Contributor)

Signed-off-by: Ben Lang blang@anchore.com

blang9238 requested a review from a team as a code owner on March 20, 2025 04:05
blang9238 (Contributor, Author) commented Mar 20, 2025

Helm Template output:
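For context on the render below: the change splits the flat image string into separately overridable registry, repository, and tag parts. A minimal values sketch of the idea (field names here are illustrative; the authoritative schema is the values.yaml changed in this PR):

image:
  registry: docker.io            # swap this to point at an internal mirror
  repository: anchore/enterprise
  tag: v5.15.0

With the defaults left in place, the manifests render unchanged, e.g. image: docker.io/anchore/enterprise:v5.15.0 in the deployments below.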

---
# Source: enterprise/charts/ui-redis/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: true
metadata:
  name: release-name-ui-redis
  namespace: "default"
  labels:
    app.kubernetes.io/name: ui-redis
    helm.sh/chart: ui-redis-17.11.8
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
---
# Source: enterprise/charts/postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-postgresql
  namespace: "default"
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-12.5.9
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
type: Opaque
data:
  postgres-password: "Y2lHc2ZaSmFRRg=="
  password: "YW5jaG9yZS1wb3N0Z3JlcywxMjM="
  # We don't auto-generate LDAP password when it's not provided as we do for other passwords
---
# Source: enterprise/charts/ui-redis/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-ui-redis
  namespace: "default"
  labels:
    app.kubernetes.io/name: ui-redis
    helm.sh/chart: ui-redis-17.11.8
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
type: Opaque
data:
  redis-password: "YW5jaG9yZS1yZWRpcywxMjM="
---
# Source: enterprise/templates/anchore_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-enterprise
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
type: Opaque
stringData:
  ANCHORE_ADMIN_PASSWORD: "ZIamC5OYhbWJRJJJmFFUzHggG6Dony2X"
  ANCHORECTL_PASSWORD: "ZIamC5OYhbWJRJJJmFFUzHggG6Dony2X"
  ANCHORE_DB_HOST: "release-name-postgresql"
  ANCHORE_DB_NAME: "anchore"
  ANCHORE_DB_USER: "anchore"
  ANCHORE_DB_PASSWORD: "anchore-postgres,123"
  ANCHORE_DB_PORT: "5432"
  ANCHORE_SAML_SECRET: "S6lROwti7pzdRfCYrzG7N0lbwiGkfRfE"
---
# Source: enterprise/templates/ui_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-enterprise-ui
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: ui
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
type: Opaque
stringData:
  ANCHORE_APPDB_URI: 'postgresql://anchore:anchore-postgres,123@release-name-postgresql/anchore'
  ANCHORE_REDIS_URI: 'redis://:anchore-redis,123@release-name-ui-redis-master:6379'
---
# Source: enterprise/charts/ui-redis/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-ui-redis-configuration
  namespace: "default"
  labels:
    app.kubernetes.io/name: ui-redis
    helm.sh/chart: ui-redis-17.11.8
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
data:
  redis.conf: |-
    # User-supplied common configuration:
    # Enable AOF https://redis.io/topics/persistence#append-only-file
    appendonly yes
    # Disable RDB persistence, AOF persistence already enabled.
    save ""
    # End of common configuration
  master.conf: |-
    dir /data
    # User-supplied master configuration:
    rename-command FLUSHDB ""
    rename-command FLUSHALL ""
    # End of master configuration
  replica.conf: |-
    dir /data
    # User-supplied replica configuration:
    rename-command FLUSHDB ""
    rename-command FLUSHALL ""
    # End of replica configuration
---
# Source: enterprise/charts/ui-redis/templates/health-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-ui-redis-health
  namespace: "default"
  labels:
    app.kubernetes.io/name: ui-redis
    helm.sh/chart: ui-redis-17.11.8
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
data:
  ping_readiness_local.sh: |-
    #!/bin/bash

    [[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
    [[ -n "$REDIS_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_PASSWORD"
    response=$(
      timeout -s 15 $1 \
      redis-cli \
        -h localhost \
        -p $REDIS_PORT \
        ping
    )
    if [ "$?" -eq "124" ]; then
      echo "Timed out"
      exit 1
    fi
    if [ "$response" != "PONG" ]; then
      echo "$response"
      exit 1
    fi
  ping_liveness_local.sh: |-
    #!/bin/bash

    [[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
    [[ -n "$REDIS_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_PASSWORD"
    response=$(
      timeout -s 15 $1 \
      redis-cli \
        -h localhost \
        -p $REDIS_PORT \
        ping
    )
    if [ "$?" -eq "124" ]; then
      echo "Timed out"
      exit 1
    fi
    responseFirstWord=$(echo $response | head -n1 | awk '{print $1;}')
    if [ "$response" != "PONG" ] && [ "$responseFirstWord" != "LOADING" ] && [ "$responseFirstWord" != "MASTERDOWN" ]; then
      echo "$response"
      exit 1
    fi
  ping_readiness_master.sh: |-
    #!/bin/bash

    [[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
    [[ -n "$REDIS_MASTER_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_MASTER_PASSWORD"
    response=$(
      timeout -s 15 $1 \
      redis-cli \
        -h $REDIS_MASTER_HOST \
        -p $REDIS_MASTER_PORT_NUMBER \
        ping
    )
    if [ "$?" -eq "124" ]; then
      echo "Timed out"
      exit 1
    fi
    if [ "$response" != "PONG" ]; then
      echo "$response"
      exit 1
    fi
  ping_liveness_master.sh: |-
    #!/bin/bash

    [[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
    [[ -n "$REDIS_MASTER_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_MASTER_PASSWORD"
    response=$(
      timeout -s 15 $1 \
      redis-cli \
        -h $REDIS_MASTER_HOST \
        -p $REDIS_MASTER_PORT_NUMBER \
        ping
    )
    if [ "$?" -eq "124" ]; then
      echo "Timed out"
      exit 1
    fi
    responseFirstWord=$(echo $response | head -n1 | awk '{print $1;}')
    if [ "$response" != "PONG" ] && [ "$responseFirstWord" != "LOADING" ]; then
      echo "$response"
      exit 1
    fi
  ping_readiness_local_and_master.sh: |-
    script_dir="$(dirname "$0")"
    exit_status=0
    "$script_dir/ping_readiness_local.sh" $1 || exit_status=$?
    "$script_dir/ping_readiness_master.sh" $1 || exit_status=$?
    exit $exit_status
  ping_liveness_local_and_master.sh: |-
    script_dir="$(dirname "$0")"
    exit_status=0
    "$script_dir/ping_liveness_local.sh" $1 || exit_status=$?
    "$script_dir/ping_liveness_master.sh" $1 || exit_status=$?
    exit $exit_status
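For reference, the $1 these health scripts receive is a timeout in seconds for the inner timeout -s 15 call; in the Bitnami-derived redis chart the pod probes invoke them roughly like this (a sketch for illustration, not part of the rendered output):

readinessProbe:
  exec:
    command:
      - sh
      - -c
      - /health/ping_readiness_local.sh 1   # 1 = seconds granted to redis-cli ping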
---
# Source: enterprise/charts/ui-redis/templates/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-ui-redis-scripts
  namespace: "default"
  labels:
    app.kubernetes.io/name: ui-redis
    helm.sh/chart: ui-redis-17.11.8
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
data:
  start-master.sh: |
    #!/bin/bash

    [[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
    if [[ -f /opt/bitnami/redis/mounted-etc/master.conf ]];then
        cp /opt/bitnami/redis/mounted-etc/master.conf /opt/bitnami/redis/etc/master.conf
    fi
    if [[ -f /opt/bitnami/redis/mounted-etc/redis.conf ]];then
        cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
    fi
    ARGS=("--port" "${REDIS_PORT}")
    ARGS+=("--requirepass" "${REDIS_PASSWORD}")
    ARGS+=("--masterauth" "${REDIS_PASSWORD}")
    ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
    ARGS+=("--include" "/opt/bitnami/redis/etc/master.conf")
    exec redis-server "${ARGS[@]}"
---
# Source: enterprise/templates/analyzer_configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: release-name-enterprise-analyzer
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: analyzer
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
data:
  analyzer_config.yaml: |
    # Anchore analyzer configuration
    malware:
      clamav:
        db_update_enabled: true
        enabled: <ALLOW_API_CONFIGURATION>
        max_scan_time: 180000
    retrieve_files:
      file_list:
      - /etc/passwd
    secret_search:
      match_params:
      - MAXFILESIZE=10000
      - STOREONMATCH=n
      regexp_match:
      - AWS_ACCESS_KEY=(?i).*aws_access_key_id( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9]).*
      - AWS_SECRET_KEY=(?i).*aws_secret_access_key( *=+ *).*(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=]).*
      - PRIV_KEY=(?i)-+BEGIN(.*)PRIVATE KEY-+
      - 'DOCKER_AUTH=(?i).*"auth": *".+"'
      - API_KEY=(?i).*api(-|_)key( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20,60}(?![A-Z0-9]).*
---
# Source: enterprise/templates/anchore_configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: release-name-enterprise
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}

data:
  config.yaml: |
    # Anchore Service Configuration File, mounted from a configmap
    #
    service_dir: ${ANCHORE_SERVICE_DIR}
    tmp_dir: ${ANCHORE_TMP_DIR}
    log_level: ${ANCHORE_LOG_LEVEL} # Deprecated - prefer use of logging.log_level
    
    logging:
      colored_logging: false
      exception_backtrace_logging: false
      exception_diagnose_logging: false
      file_retention_rule: 10
      file_rotation_rule: 10 MB
      log_level: <ALLOW_API_CONFIGURATION>
      server_access_logging: true
      server_log_level: info
      server_response_debug_logging: false
      structured_logging: false
    
    server:
      max_connection_backlog: 2048
      max_wsgi_middleware_worker_count: 50
      max_wsgi_middleware_worker_queue_size: 100
      timeout_graceful_shutdown: false
      timeout_keep_alive: 5
    
    allow_awsecr_iam_auto: ${ANCHORE_ALLOW_ECR_IAM_AUTO}
    host_id: "${ANCHORE_HOST_ID}"
    internal_ssl_verify: ${ANCHORE_INTERNAL_SSL_VERIFY}
    image_analyze_timeout_seconds: ${ANCHORE_IMAGE_ANALYZE_TIMEOUT_SECONDS}
    
    global_client_connect_timeout: ${ANCHORE_GLOBAL_CLIENT_CONNECT_TIMEOUT}
    global_client_read_timeout: ${ANCHORE_GLOBAL_CLIENT_READ_TIMEOUT}
    server_request_timeout_seconds: ${ANCHORE_GLOBAL_SERVER_REQUEST_TIMEOUT_SEC}
    
    license_file: ${ANCHORE_LICENSE_FILE}
    auto_restart_services: false
    
    max_source_import_size_mb: ${ANCHORE_MAX_IMPORT_SOURCE_SIZE_MB}
    max_import_content_size_mb: ${ANCHORE_MAX_IMPORT_CONTENT_SIZE_MB}
    
    max_compressed_image_size_mb: ${ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB}
    
    audit:
      enabled: true
      mode: log
      verbs:
        - post
        - put
        - delete
        - patch
      resource_uris:
        - "/accounts"
        - "/accounts/{account_name}"
        - "/accounts/{account_name}/state"
        - "/accounts/{account_name}/users"
        - "/accounts/{account_name}/users/{username}"
        - "/accounts/{account_name}/users/{username}/api-keys"
        - "/accounts/{account_name}/users/{username}/api-keys/{key_name}"
        - "/accounts/{account_name}/users/{username}/credentials"
        - "/rbac-manager/roles"
        - "/rbac-manager/roles/{role_name}/members"
        - "/rbac-manager/saml/idps"
        - "/rbac-manager/saml/idps/{name}"
        - "/rbac-manager/saml/idps/{name}/user-group-mappings"
        - "/system/user-groups"
        - "/system/user-groups/{group_uuid}"
        - "/system/user-groups/{group_uuid}/roles"
        - "/system/user-groups/{group_uuid}/users"
        - "/user/api-keys"
        - "/user/api-keys/{key_name}"
        - "/user/credentials"
    
    metrics:
      enabled: ${ANCHORE_ENABLE_METRICS}
      auth_disabled: ${ANCHORE_DISABLE_METRICS_AUTH}
    
    webhooks:
      {}
    
    default_admin_password: "${ANCHORE_ADMIN_PASSWORD}"
    default_admin_email: ${ANCHORE_ADMIN_EMAIL}
    
    configuration: 
      api_driven_configuration_enabled: ${ANCHORE_API_DRIVEN_CONFIGURATION_ENABLED}
    
    keys:
      secret: "${ANCHORE_SAML_SECRET}"
      public_key_path: ${ANCHORE_AUTH_PRIVKEY}
      private_key_path: ${ANCHORE_AUTH_PUBKEY}
    
    user_authentication:
      oauth:
        enabled: ${ANCHORE_OAUTH_ENABLED}
        default_token_expiration_seconds: ${ANCHORE_OAUTH_TOKEN_EXPIRATION}
        refresh_token_expiration_seconds: ${ANCHORE_OAUTH_REFRESH_TOKEN_EXPIRATION}
      hashed_passwords: ${ANCHORE_AUTH_ENABLE_HASHED_PASSWORDS}
      sso_require_existing_users: ${ANCHORE_SSO_REQUIRES_EXISTING_USERS}
      allow_api_keys_for_saml_users: false
      max_api_key_age_days: 365
      max_api_keys_per_user: 100
      remove_deleted_user_api_keys_older_than_days: 365
      disallow_native_users: false
      log_saml_assertions: false
    credentials:
      database:
        user: "${ANCHORE_DB_USER}"
        password: "${ANCHORE_DB_PASSWORD}"
        host: "${ANCHORE_DB_HOST}"
        port: "${ANCHORE_DB_PORT}"
        name: "${ANCHORE_DB_NAME}"
        db_connect_args:
          timeout: ${ANCHORE_DB_TIMEOUT}
          ssl: ${ANCHORE_DB_SSL}
        db_pool_size: ${ANCHORE_DB_POOL_SIZE}
        db_pool_max_overflow: ${ANCHORE_DB_POOL_MAX_OVERFLOW}
    
    account_gc:
      max_resource_gc_chunk: 4096
      max_deletion_threads: 4
    
    services:
      apiext:
        enabled: true
        require_auth: true
        endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME}
        max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS}
        listen: '0.0.0.0'
        port: ${ANCHORE_PORT}
        ssl_enable: ${ANCHORE_SSL_ENABLED}
        ssl_cert: ${ANCHORE_SSL_CERT}
        ssl_key: ${ANCHORE_SSL_KEY}
    
      analyzer:
        enabled: true
        require_auth: true
        endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME}
        listen: '0.0.0.0'
        port: ${ANCHORE_PORT}
        max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS}
        cycle_timer_seconds: 1
        cycle_timers:
          image_analyzer: 1
        analyzer_driver: 'nodocker'
        layer_cache_enable: ${ANCHORE_LAYER_CACHE_ENABLED}
        layer_cache_max_gigabytes: ${ANCHORE_LAYER_CACHE_SIZE_GB}
        enable_hints: ${ANCHORE_HINTS_ENABLED}
        enable_owned_package_filtering: ${ANCHORE_OWNED_PACKAGE_FILTERING_ENABLED}
        keep_image_analysis_tmpfiles: ${ANCHORE_KEEP_IMAGE_ANALYSIS_TMPFILES}
        ssl_enable: ${ANCHORE_SSL_ENABLED}
        ssl_cert: ${ANCHORE_SSL_CERT}
        ssl_key: ${ANCHORE_SSL_KEY}
    
      catalog:
        enabled: true
        require_auth: true
        endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME}
        listen: '0.0.0.0'
        port: ${ANCHORE_PORT}
        max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS}
        cycle_timer_seconds: 1
        cycle_timers:
          analyzer_queue: 1
          archive_tasks: 43200
          artifact_lifecycle_policy_tasks: 43200
          events_gc: 43200
          image_gc: 60
          image_watcher: 3600
          k8s_image_watcher: 150
          notifications: 30
          policy_bundle_sync: 300
          policy_eval: 3600
          repo_watcher: 60
          resource_metrics: 60
          service_watcher: 15
          vulnerability_scan: 14400
        event_log:
          max_retention_age_days: 180
          notification:
            enabled: false
            level:
            - error
        runtime_inventory:
          inventory_ttl_days: ${ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_TTL_DAYS}
          inventory_ingest_overwrite: ${ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_INGEST_OVERWRITE}
        integrations:
          integration_health_report_ttl_days: ${ANCHORE_ENTERPRISE_INTEGRATION_HEALTH_REPORTS_TTL_DAYS}
        image_gc:
          max_worker_threads: ${ANCHORE_CATALOG_IMAGE_GC_WORKERS}
        runtime_compliance:
          object_store_bucket: "runtime_compliance_check"
        down_analyzer_task_requeue: ${ANCHORE_ANALYZER_TASK_REQUEUE}
        import_operation_expiration_days: ${ANCHORE_IMPORT_OPERATION_EXPIRATION_DAYS}
        analysis_archive:
          {}
        object_store:
          compression:
            enabled: true
            min_size_kbytes: 100
          storage_driver:
            config: {}
            name: db
          verify_content_digests: true
        ssl_enable: ${ANCHORE_SSL_ENABLED}
        ssl_cert: ${ANCHORE_SSL_CERT}
        ssl_key: ${ANCHORE_SSL_KEY}
    
      simplequeue:
        enabled: true
        require_auth: true
        endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME}
        listen: '0.0.0.0'
        port: ${ANCHORE_PORT}
        max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS}
        ssl_enable: ${ANCHORE_SSL_ENABLED}
        ssl_cert: ${ANCHORE_SSL_CERT}
        ssl_key: ${ANCHORE_SSL_KEY}
    
      policy_engine:
        enabled: true
        require_auth: true
        max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS}
        endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME}
        listen: '0.0.0.0'
        port: ${ANCHORE_PORT}
        policy_evaluation_cache_ttl: ${ANCHORE_POLICY_EVAL_CACHE_TTL_SECONDS}
        enable_package_db_load: ${ANCHORE_POLICY_ENGINE_ENABLE_PACKAGE_DB_LOAD}
        enable_user_base_image: true
        vulnerabilities:
          sync:
            enabled: true
            ssl_verify: ${ANCHORE_FEEDS_SSL_VERIFY}
            connection_timeout_seconds: 3
            read_timeout_seconds: 60
            data:
              grypedb:
                enabled: true
          matching:
            exclude:
              providers: []
              package_types: []
            default:
              search:
                by_cpe:
                  enabled: ${ANCHORE_VULN_MATCHING_DEFAULT_SEARCH_BY_CPE_ENABLED}
            ecosystem_specific:
              dotnet:
                search:
                  by_cpe:
                    enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_DOTNET_SEARCH_BY_CPE_ENABLED}
              golang:
                search:
                  by_cpe:
                    enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_GOLANG_SEARCH_BY_CPE_ENABLED}
              java:
                search:
                  by_cpe:
                    enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVA_SEARCH_BY_CPE_ENABLED}
              javascript:
                search:
                  by_cpe:
                    enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVASCRIPT_SEARCH_BY_CPE_ENABLED}
              python:
                search:
                  by_cpe:
                    enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_PYTHON_SEARCH_BY_CPE_ENABLED}
              ruby:
                search:
                  by_cpe:
                    enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_RUBY_SEARCH_BY_CPE_ENABLED}
              stock:
                search:
                  by_cpe:
                    # Disabling search by CPE for the stock matcher will entirely disable binary-only matches and is not advised
                    enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_STOCK_SEARCH_BY_CPE_ENABLED}
        ssl_enable: ${ANCHORE_SSL_ENABLED}
        ssl_cert: ${ANCHORE_SSL_CERT}
        ssl_key: ${ANCHORE_SSL_KEY}
    
      reports:
        enabled: true
        require_auth: true
        endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME}
        listen: '0.0.0.0'
        port: ${ANCHORE_PORT}
        max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS}
        enable_graphiql: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_GRAPHIQL}
        cycle_timers:
          reports_scheduled_queries: 600
        max_async_execution_threads: ${ANCHORE_ENTERPRISE_REPORTS_MAX_ASYNC_EXECUTION_THREADS}
        async_execution_timeout: ${ANCHORE_ENTERPRISE_REPORTS_ASYNC_EXECUTION_TIMEOUT}
        ssl_enable: ${ANCHORE_SSL_ENABLED}
        ssl_cert: ${ANCHORE_SSL_CERT}
        ssl_key: ${ANCHORE_SSL_KEY}
        use_volume: false
    
      reports_worker:
        enabled: true
        require_auth: true
        endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME}
        listen: '0.0.0.0'
        port: ${ANCHORE_PORT}
        max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS}
        enable_data_ingress: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_INGRESS}
        enable_data_egress: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_EGRESS}
        data_egress_window: ${ANCHORE_ENTERPRISE_REPORTS_DATA_EGRESS_WINDOW}
        data_refresh_max_workers: ${ANCHORE_ENTERPRISE_REPORTS_DATA_REFRESH_MAX_WORKERS}
        data_load_max_workers: ${ANCHORE_ENTERPRISE_REPORTS_DATA_LOAD_MAX_WORKERS}
        cycle_timers:
          reports_extended_runtime_vuln_load: 1800
          reports_image_egress: 600
          reports_image_load: 600
          reports_image_refresh: 7200
          reports_metrics: 3600
          reports_runtime_inventory_load: 600
          reports_tag_egress: 600
          reports_tag_load: 600
          reports_tag_refresh: 7200
        runtime_report_generation:
          use_legacy_loaders_and_queries: false
          inventory_images_by_vulnerability: true
          vulnerabilities_by_k8s_namespace: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_NAMESPACE}
          vulnerabilities_by_k8s_container: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_CONTAINER}
          vulnerabilities_by_ecs_container: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_ECS_CONTAINER}
        ssl_enable: ${ANCHORE_SSL_ENABLED}
        ssl_cert: ${ANCHORE_SSL_CERT}
        ssl_key: ${ANCHORE_SSL_KEY}
    
      notifications:
        enabled: true
        require_auth: true
        endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME}
        listen: '0.0.0.0'
        port: ${ANCHORE_PORT}
        max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS}
        cycle_timers:
          notifications: 30
        ui_url: ${ANCHORE_ENTERPRISE_UI_URL}
        ssl_enable: ${ANCHORE_SSL_ENABLED}
        ssl_cert: ${ANCHORE_SSL_CERT}
        ssl_key: ${ANCHORE_SSL_KEY}
    
      data_syncer:
        enabled: true
        require_auth: true
        endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME}
        listen: 0.0.0.0
        port: ${ANCHORE_PORT}
        auto_sync_enabled: ${ANCHORE_DATA_SYNC_AUTO_SYNC_ENABLED}
        upload_dir: /analysis_scratch
        datasets:
          vulnerability_db:
            versions: ["5"]
          clamav_db:
            versions: ["1"]
          kev_db:
            versions: ["1"]
        ssl_enable: ${ANCHORE_SSL_ENABLED}
        ssl_cert: ${ANCHORE_SSL_CERT}
        ssl_key: ${ANCHORE_SSL_KEY}
---
# Source: enterprise/templates/envvars_configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: release-name-enterprise-config-env-vars
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}

data:
  ANCHORE_ADMIN_EMAIL: "admin@myanchore"
  ANCHORE_API_DRIVEN_CONFIGURATION_ENABLED: "true"
  ANCHORE_ALLOW_ECR_IAM_AUTO: "true"
  ANCHORE_ANALYZER_TASK_REQUEUE: "true"
  ANCHORE_AUTH_ENABLE_HASHED_PASSWORDS: "true"
  ANCHORE_AUTH_PRIVKEY: "null"
  ANCHORE_AUTH_PUBKEY: "null"
  ANCHORE_CATALOG_IMAGE_GC_WORKERS: "4"
  ANCHORE_CLI_URL: "http://localhost:8228"
  ANCHORE_CLI_USER: "admin"
  ANCHORECTL_URL: "http://localhost:8228"
  ANCHORECTL_USERNAME: "admin"
  ANCHORE_DATA_SYNC_AUTO_SYNC_ENABLED: "true"
  ANCHORE_DISABLE_METRICS_AUTH: "false"
  ANCHORE_DB_POOL_MAX_OVERFLOW: "100"
  ANCHORE_DB_POOL_SIZE: "30"
  ANCHORE_DB_SSL: "false"
  ANCHORE_DB_SSL_MODE: "verify-full"
  ANCHORE_DB_SSL_ROOT_CERT: "null"
  ANCHORE_DB_TIMEOUT: "120"
  ANCHORE_ENABLE_METRICS: "false"
  ANCHORE_ENTERPRISE_REPORTS_ASYNC_EXECUTION_TIMEOUT: "48h"
  ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_INGRESS: "true"
  ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_EGRESS: "false"
  ANCHORE_ENTERPRISE_REPORTS_DATA_EGRESS_WINDOW: "0"
  ANCHORE_ENTERPRISE_REPORTS_DATA_REFRESH_MAX_WORKERS: "10"
  ANCHORE_ENTERPRISE_REPORTS_DATA_LOAD_MAX_WORKERS: "10"
  ANCHORE_ENTERPRISE_REPORTS_ENABLE_GRAPHIQL: "true"
  ANCHORE_ENTERPRISE_REPORTS_MAX_ASYNC_EXECUTION_THREADS: "1"
  ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_ECS_CONTAINER: "true"
  ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_CONTAINER: "true"
  ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_NAMESPACE: "true"
  ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_TTL_DAYS: "120"
  ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_INGEST_OVERWRITE: "false"
  ANCHORE_ENTERPRISE_INTEGRATION_HEALTH_REPORTS_TTL_DAYS: "2"
  ANCHORE_ENTERPRISE_UI_URL: "release-name-enterprise-ui"
  ANCHORE_GLOBAL_CLIENT_CONNECT_TIMEOUT: "0"
  ANCHORE_GLOBAL_CLIENT_READ_TIMEOUT: "0"
  ANCHORE_GLOBAL_SERVER_REQUEST_TIMEOUT_SEC: "180"
  ANCHORE_HINTS_ENABLED: "false"
  ANCHORE_IMAGE_ANALYZE_TIMEOUT_SECONDS: "3600"
  ANCHORE_IMPORT_OPERATION_EXPIRATION_DAYS: "7"
  ANCHORE_INTERNAL_SSL_VERIFY: "false"
  ANCHORE_KEEP_IMAGE_ANALYSIS_TMPFILES: "false"
  ANCHORE_LAYER_CACHE_ENABLED: "false"
  ANCHORE_LAYER_CACHE_SIZE_GB: "0"
  ANCHORE_LICENSE_FILE: "/home/anchore/license.yaml"
  ANCHORE_LOG_LEVEL: "<ALLOW_API_CONFIGURATION>"
  ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB: "-1"
  ANCHORE_MAX_IMPORT_CONTENT_SIZE_MB: "100"
  ANCHORE_MAX_IMPORT_SOURCE_SIZE_MB: "100"
  ANCHORE_MAX_REQUEST_THREADS: "50"
  ANCHORE_OAUTH_ENABLED: "true"
  ANCHORE_OAUTH_TOKEN_EXPIRATION: "3600"
  ANCHORE_OAUTH_REFRESH_TOKEN_EXPIRATION: "86400"
  ANCHORE_OWNED_PACKAGE_FILTERING_ENABLED: "true"
  ANCHORE_POLICY_ENGINE_ENABLE_PACKAGE_DB_LOAD: "true"
  ANCHORE_POLICY_EVAL_CACHE_TTL_SECONDS: "3600"
  ANCHORE_SAML_SECRET: "null"
  ANCHORE_SERVICE_DIR: "/anchore_service"
  ANCHORE_SSL_ENABLED: "false"
  ANCHORE_SSL_CERT: "null"
  ANCHORE_SSL_KEY: "null"
  ANCHORE_SSO_REQUIRES_EXISTING_USERS: "false"
  ANCHORE_TMP_DIR: "/analysis_scratch"
  ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_DOTNET_SEARCH_BY_CPE_ENABLED: "true"
  ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_GOLANG_SEARCH_BY_CPE_ENABLED: "true"
  ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVA_SEARCH_BY_CPE_ENABLED: "true"
  ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVASCRIPT_SEARCH_BY_CPE_ENABLED: "false"
  ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_PYTHON_SEARCH_BY_CPE_ENABLED: "true"
  ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_RUBY_SEARCH_BY_CPE_ENABLED: "true"
  ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_STOCK_SEARCH_BY_CPE_ENABLED: "true"
---
# Source: enterprise/templates/scripts_configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: release-name-enterprise-scripts
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
data:
  
  anchore-config: |
    #!/bin/bash
    # Stream the mounted config and expand every ${VAR} placeholder
    # from the container environment before printing each line.
    while IFS= read -r line; do
      while [[ "$line" =~ (\$\{[a-zA-Z_][a-zA-Z_0-9]*\}) ]]; do
        # Strip the ${ } wrapper to get the bare variable name ...
        VAR_NAME=${BASH_REMATCH[1]#*\{}; VAR_NAME=${VAR_NAME%\}};
        # ... then splice in its value via indirect expansion.
        line=${line//${BASH_REMATCH[1]}/${!VAR_NAME}};
      done;
      printf '%s\n' "$line";
    done < /config/config.yaml
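
This helper is why the rendered config.yaml above can keep its ${VAR} placeholders: at container start it re-reads the mounted file and expands each placeholder from the pod environment, which the env-vars ConfigMap and the Secret populate. For example (value invented for illustration), a mounted line port: ${ANCHORE_PORT} is printed as port: 8228 when ANCHORE_PORT=8228 is set in the container.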
---
# Source: enterprise/templates/ui_configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-enterprise-ui
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: ui
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
data:
  config-ui.yaml: |
    # Anchore UI configuration
    reports_uri: 'http://release-name-enterprise-api:8228/v2'
    notifications_uri: 'http://release-name-enterprise-api:8228/v2'
    enterprise_uri: 'http://release-name-enterprise-api:8228/v2'
    # redis_uri: overridden in deployment using the `ANCHORE_REDIS_URI` environment variable
    # appdb_uri: overridden in deployment using the `ANCHORE_APPDB_URI` environment variable
    license_path: '/home/anchore/'
    enable_ssl: false
    enable_proxy: false
    allow_shared_login: true
    redis_flushdb: true
    force_websocket: false
    authentication_lock:
      count: 5
      expires: 300
    appdb_config: 
      native: true
      pool:
        acquire: 30000
        idle: 10000
        max: 10
        min: 0
    log_level: 'http'
    enrich_inventory_view: true
    enable_prometheus_metrics: false
    sso_auth_only: false
---
# Source: enterprise/charts/postgresql/templates/primary/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-postgresql-hl
  namespace: "default"
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-12.5.9
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: primary
    # Use this annotation in addition to the actual publishNotReadyAddresses
    # field below because the annotation will stop being respected soon but the
    # field is broken in some versions of Kubernetes:
    # https://github.com/kubernetes/kubernetes/issues/58662
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  type: ClusterIP
  clusterIP: None
  # We want all pods in the StatefulSet to have their addresses published for
  # the sake of the other Postgresql pods even before they're ready, since they
  # have to be able to talk to each other in order to become ready.
  publishNotReadyAddresses: true
  ports:
    - name: tcp-postgresql
      port: 5432
      targetPort: tcp-postgresql
  selector:
    app.kubernetes.io/name: postgresql
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: primary
---
# Source: enterprise/charts/postgresql/templates/primary/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-postgresql
  namespace: "default"
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-12.5.9
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: primary
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: tcp-postgresql
      port: 5432
      targetPort: tcp-postgresql
      nodePort: null
  selector:
    app.kubernetes.io/name: postgresql
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: primary
---
# Source: enterprise/charts/ui-redis/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-ui-redis-headless
  namespace: "default"
  labels:
    app.kubernetes.io/name: ui-redis
    helm.sh/chart: ui-redis-17.11.8
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
  annotations:
    
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: tcp-redis
      port: 6379
      targetPort: redis
  selector:
    app.kubernetes.io/name: ui-redis
    app.kubernetes.io/instance: release-name
---
# Source: enterprise/charts/ui-redis/templates/master/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-ui-redis-master
  namespace: "default"
  labels:
    app.kubernetes.io/name: ui-redis
    helm.sh/chart: ui-redis-17.11.8
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: master
spec:
  type: ClusterIP
  internalTrafficPolicy: Cluster
  sessionAffinity: None
  ports:
    - name: tcp-redis
      port: 6379
      targetPort: redis
      nodePort: null
  selector:
    app.kubernetes.io/name: ui-redis
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: master
---
# Source: enterprise/templates/api_deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-enterprise-api
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: api
      port: 8228
      targetPort: 8228
      protocol: TCP
      
  selector:
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: api
---
# Source: enterprise/templates/catalog_deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-enterprise-catalog
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: catalog
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: catalog
      port: 8082
      targetPort: 8082
      protocol: TCP
      
  selector:
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: catalog
---
# Source: enterprise/templates/datasyncer_deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-enterprise-datasyncer
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: datasyncer
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: datasyncer
      port: 8778
      targetPort: 8778
      protocol: TCP
      
  selector:
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: datasyncer
---
# Source: enterprise/templates/notifications_deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-enterprise-notifications
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: notifications
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: notifications
      port: 8668
      targetPort: 8668
      protocol: TCP
      
  selector:
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: notifications
---
# Source: enterprise/templates/policyengine_deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-enterprise-policy
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: policyengine
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: policyengine
      port: 8087
      targetPort: 8087
      protocol: TCP
      
  selector:
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: policyengine
---
# Source: enterprise/templates/reports_deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-enterprise-reports
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: reports
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: reports
      port: 8558
      targetPort: 8558
      protocol: TCP
      
  selector:
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: reports
---
# Source: enterprise/templates/reportsworker_deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-enterprise-reportsworker
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: reportsworker
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: reportsworker
      port: 8559
      targetPort: 8559
      protocol: TCP
      
  selector:
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: reportsworker
---
# Source: enterprise/templates/simplequeue_deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-enterprise-simplequeue
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: simplequeue
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: simplequeue
      port: 8083
      targetPort: 8083
      protocol: TCP
      
  selector:
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: simplequeue
---
# Source: enterprise/templates/ui_deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-enterprise-ui
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: ui
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  sessionAffinity: ClientIP
  type: ClusterIP
  ports:
    - name: ui
      port: 80
      protocol: TCP
      targetPort: 3000
      
  selector:
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: ui
---
# Source: enterprise/templates/analyzer_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-analyzer
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: analyzer
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: analyzer
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: analyzer
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/secrets: 680f92e2ebc0f3c760e54f6028a5ff6e4dcd7895c0faf68e5fa09ee9d87bb849
        checksum/enterprise-config: e29e447ba055f81f31f12ebdde80cbc639a6c4f8e10e46a181aa753d405702aa
        checksum/analyzer-config: ce1f2ea989dace3c83db98ee671a9d8ba4b069906c4e8ac26905ef302bd6d044
        checksum/enterprise-envvar: 98f9a6efa52b5e29b2c6fad7ce1b8e99b33f6eedf292478421259fe0ff2d0148
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-scripts
          configMap:
            name: release-name-enterprise-scripts
            defaultMode: 0755
        - name: config-volume
          configMap:
            name: release-name-enterprise
        - name: "anchore-scratch"
          
          emptyDir: {}
        - name: analyzer-config-volume
          configMap:
            name: release-name-enterprise-analyzer
      containers:
        - name: "enterprise-analyzer"
          image: docker.io/anchore/enterprise:v5.15.0          
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -  /docker-entrypoint.sh anchore-enterprise-manager service start --no-auto-upgrade analyzer
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-analyzer.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "8084"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - name: analyzer
              containerPort: 8084
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: config-volume
              mountPath: /config/config.yaml
              subPath: config.yaml
            - name: anchore-scripts
              mountPath: /scripts
            - name: analyzer-config-volume
              mountPath: "/anchore_service/analyzer_config.yaml"
              subPath: analyzer_config.yaml
            - name: "anchore-scratch"
              mountPath: /analysis_scratch
          livenessProbe:
            httpGet:
              path: /health
              port: analyzer
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: analyzer
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/templates/api_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-api
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: api
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: api
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/secrets: 87b9ca4a25b2997bec25ffa141f63878a980aaff25bc3b69993e65a05da9990d
        checksum/enterprise-config: e29e447ba055f81f31f12ebdde80cbc639a6c4f8e10e46a181aa753d405702aa
        checksum/enterprise-envvar: 98f9a6efa52b5e29b2c6fad7ce1b8e99b33f6eedf292478421259fe0ff2d0148
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-scripts
          configMap:
            name: release-name-enterprise-scripts
            defaultMode: 0755
        - name: config-volume
          configMap:
            name: release-name-enterprise
      containers:
        - name: "enterprise-api"
          image: docker.io/anchore/enterprise:v5.15.0  
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -  /docker-entrypoint.sh anchore-enterprise-manager service start --no-auto-upgrade apiext
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-api.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "8228"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ANCHORE_CLI_PASS
              valueFrom:
                secretKeyRef:
                  name: release-name-enterprise
                  key: ANCHORE_ADMIN_PASSWORD
          ports:
            - name: api
              containerPort: 8228
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: config-volume
              mountPath: /config/config.yaml
              subPath: config.yaml
            - name: anchore-scripts
              mountPath: /scripts
          livenessProbe:
            httpGet:
              path: /health
              port: api
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: api
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/templates/catalog_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-catalog
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: catalog
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: catalog
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: catalog
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/enterprise-config: e29e447ba055f81f31f12ebdde80cbc639a6c4f8e10e46a181aa753d405702aa
        checksum/secrets: 4be50d27a1405e6ef893bc19ea9c0307fc9a916c1c9420927b5b41bdba643c12
        checksum/enterprise-envvar: 98f9a6efa52b5e29b2c6fad7ce1b8e99b33f6eedf292478421259fe0ff2d0148
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-scripts
          configMap:
            name: release-name-enterprise-scripts
            defaultMode: 0755
        - name: config-volume
          configMap:
            name: release-name-enterprise
        - name: anchore-scratch
          
          emptyDir: {}
      containers:
        - name: "enterprise-catalog"
          image: docker.io/anchore/enterprise:v5.15.0  
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -  /docker-entrypoint.sh anchore-enterprise-manager service start --no-auto-upgrade catalog
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-catalog.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "8082"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - name: catalog
              containerPort: 8082
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: config-volume
              mountPath: /config/config.yaml
              subPath: config.yaml
            - name: anchore-scripts
              mountPath: /scripts
            - name: anchore-scratch
              mountPath: /analysis_scratch
          livenessProbe:
            httpGet:
              path: /health
              port: catalog
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: catalog
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/templates/datasyncer_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-datasyncer
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: datasyncer
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: datasyncer
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: datasyncer
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/enterprise-config: e29e447ba055f81f31f12ebdde80cbc639a6c4f8e10e46a181aa753d405702aa
        checksum/secrets: a3945c81c4cc34efbf2759f7866690b4c8b8f0d12beccea6fc1664a7f064d9f0
        checksum/enterprise-envvar: 98f9a6efa52b5e29b2c6fad7ce1b8e99b33f6eedf292478421259fe0ff2d0148
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-scripts
          configMap:
            name: release-name-enterprise-scripts
            defaultMode: 0755
        - name: config-volume
          configMap:
            name: release-name-enterprise
        - name: anchore-scratch
          
          emptyDir: {}
      containers:
        - name: "enterprise-datasyncer"
          image: docker.io/anchore/enterprise:v5.15.0  
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -  /docker-entrypoint.sh anchore-enterprise-manager service start --no-auto-upgrade data_syncer
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-datasyncer.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "8778"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - name: datasyncer
              containerPort: 8778
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: config-volume
              mountPath: /config/config.yaml
              subPath: config.yaml
            - name: anchore-scripts
              mountPath: /scripts
            - name: anchore-scratch
              mountPath: /analysis_scratch
          livenessProbe:
            httpGet:
              path: /health
              port: datasyncer
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: datasyncer
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/templates/notifications_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-notifications
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: notifications
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: notifications
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: notifications
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/secrets: 6bc649378c1e0450a549ae2eb42245ca443027f473274aa4f26300e40b8ca1b3
        checksum/enterprise-config: e29e447ba055f81f31f12ebdde80cbc639a6c4f8e10e46a181aa753d405702aa
        checksum/enterprise-envvar: 98f9a6efa52b5e29b2c6fad7ce1b8e99b33f6eedf292478421259fe0ff2d0148
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-scripts
          configMap:
            name: release-name-enterprise-scripts
            defaultMode: 0755
        - name: config-volume
          configMap:
            name: release-name-enterprise
      containers:
        - name: "enterprise-notifications"
          image: docker.io/anchore/enterprise:v5.15.0  
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -  /docker-entrypoint.sh anchore-enterprise-manager service start --no-auto-upgrade notifications
          ports:
            - containerPort: 8668
              name: notifications
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-notifications.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "8668"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: config-volume
              mountPath: /config/config.yaml
              subPath: config.yaml
            - name: anchore-scripts
              mountPath: /scripts
          livenessProbe:
            httpGet:
              path: /health
              port: notifications
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: notifications
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/templates/policyengine_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-policy
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: policyengine
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: policyengine
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: policyengine
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/secrets: f8cfe8678728e877cf8d6f6f7194a5c79c760310fc0e2f09e620df58af77ad19
        checksum/enterprise-config: e29e447ba055f81f31f12ebdde80cbc639a6c4f8e10e46a181aa753d405702aa
        checksum/enterprise-envvar: 98f9a6efa52b5e29b2c6fad7ce1b8e99b33f6eedf292478421259fe0ff2d0148
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-scripts
          configMap:
            name: release-name-enterprise-scripts
            defaultMode: 0755
        - name: config-volume
          configMap:
            name: release-name-enterprise
        - name: anchore-scratch
          
          emptyDir: {}
      containers:
        - name: "enterprise-policyengine"
          image: docker.io/anchore/enterprise:v5.15.0  
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -  /docker-entrypoint.sh anchore-enterprise-manager service start --no-auto-upgrade policy_engine
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-policy.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "8087"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - name: policyengine
              containerPort: 8087
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: config-volume
              mountPath: /config/config.yaml
              subPath: config.yaml
            - name: anchore-scripts
              mountPath: /scripts
            - name: "anchore-scratch"
              mountPath: /analysis_scratch
          livenessProbe:
            httpGet:
              path: /health
              port: policyengine
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: policyengine
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/templates/reports_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-reports
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: reports
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: reports
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: reports
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/secrets: 250042c4795b6ecf5d3f6e7f2e7ca783a84eceef1eff2498f23dca16b9ef420e
        checksum/enterprise-config: e29e447ba055f81f31f12ebdde80cbc639a6c4f8e10e46a181aa753d405702aa
        checksum/enterprise-envvar: 98f9a6efa52b5e29b2c6fad7ce1b8e99b33f6eedf292478421259fe0ff2d0148
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-scripts
          configMap:
            name: release-name-enterprise-scripts
            defaultMode: 0755
        - name: config-volume
          configMap:
            name: release-name-enterprise
      containers:
        - name: "enterprise-reports"
          image: docker.io/anchore/enterprise:v5.15.0  
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -   /docker-entrypoint.sh anchore-enterprise-manager service start --no-auto-upgrade reports
          ports:
            - containerPort: 8558
              name: reports
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-reports.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "8558"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: config-volume
              mountPath: /config/config.yaml
              subPath: config.yaml
            - name: anchore-scripts
              mountPath: /scripts
          livenessProbe:
            httpGet:
              path: /health
              port: reports
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: reports
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/templates/reportsworker_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-reportsworker
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: reportsworker
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: reportsworker
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: reportsworker
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/secrets: 4accf8d87648af669d22fb7b235fb2d695a602f97d150227039e2289164ed6d2
        checksum/enterprise-config: 0beab45980e330f6899f9a9c75560419cbee002dbf81be23062991c8f3a7cd40
        checksum/enterprise-envvar: 98f9a6efa52b5e29b2c6fad7ce1b8e99b33f6eedf292478421259fe0ff2d0148
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-scripts
          configMap:
            name: release-name-enterprise-scripts
            defaultMode: 0755
        - name: config-volume
          configMap:
            name: release-name-enterprise
      containers:
        - name: "enterprise-reportsworker"
          image: docker.io/anchore/enterprise:v5.15.0  
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -   /docker-entrypoint.sh anchore-enterprise-manager service start --no-auto-upgrade reports_worker
          ports:
            - containerPort: 8559
              name: reportsworker
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-reportsworker.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "8559"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: config-volume
              mountPath: /config/config.yaml
              subPath: config.yaml
            - name: anchore-scripts
              mountPath: /scripts
          livenessProbe:
            httpGet:
              path: /health
              port: reportsworker
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: reportsworker
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/templates/simplequeue_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-simplequeue
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: simplequeue
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: simplequeue
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: simplequeue
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/secrets: abd0656f3b5348a5bf346dc7dcc9aa2e914d7d2157eb5b4a257c80f8e2bdcda0
        checksum/enterprise-config: e29e447ba055f81f31f12ebdde80cbc639a6c4f8e10e46a181aa753d405702aa
        checksum/enterprise-envvar: 98f9a6efa52b5e29b2c6fad7ce1b8e99b33f6eedf292478421259fe0ff2d0148
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-scripts
          configMap:
            name: release-name-enterprise-scripts
            defaultMode: 0755
        - name: config-volume
          configMap:
            name: release-name-enterprise
      containers:
        - name: enterprise-simplequeue
          image: docker.io/anchore/enterprise:v5.15.0  
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -  /docker-entrypoint.sh anchore-enterprise-manager service start --no-auto-upgrade simplequeue
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-simplequeue.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "8083"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - name: simplequeue
              containerPort: 8083
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: config-volume
              mountPath: /config/config.yaml
              subPath: config.yaml
            - name: anchore-scripts
              mountPath: /scripts
          livenessProbe:
            httpGet:
              path: /health
              port: simplequeue
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: simplequeue
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/templates/ui_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-enterprise-ui
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: ui
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-enterprise
      app.kubernetes.io/component: ui
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: ui
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/secrets: 0fda98e1dab68f2c5b03e0e5c95e2c273e3e4738cae1eb6ebb3bcd42190b31f1
        checksum/ui-config: 531107975963ae870007fcbbf368fe5b43402256d282bcd08239c1344984614a
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      volumes:
        
        - name: anchore-license
          secret:
            
            secretName: anchore-enterprise-license
        - name: anchore-ui-config
          configMap:
            name: release-name-enterprise-ui
      containers:
        - name: "enterprise-ui"
          image: docker.io/anchore/enterprise-ui:v5.15.0  
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            -  /docker-entrypoint.sh node /home/node/aui/build/server.js 
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-ui.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "80"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          envFrom:
            - secretRef:
                name: release-name-enterprise-ui
          ports:
            - containerPort: 3000
              protocol: TCP
              name: ui
          volumeMounts:
            
            - name: anchore-license
              mountPath: /home/anchore/license.yaml
              subPath: license.yaml
            - name: anchore-ui-config
              mountPath: /config/config-ui.yaml
              subPath: config-ui.yaml
          livenessProbe:
            tcpSocket:
              port: ui
            initialDelaySeconds: 120
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /service/health
              port: ui
              scheme: HTTP
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
---
# Source: enterprise/charts/postgresql/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-postgresql
  namespace: "default"
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-12.5.9
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: primary
spec:
  replicas: 1
  serviceName: release-name-postgresql-hl
  updateStrategy:
    rollingUpdate: {}
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: postgresql
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: primary
  template:
    metadata:
      name: release-name-postgresql
      labels:
        app.kubernetes.io/name: postgresql
        helm.sh/chart: postgresql-12.5.9
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: primary
    spec:
      serviceAccountName: default
      
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: postgresql
                    app.kubernetes.io/instance: release-name
                    app.kubernetes.io/component: primary
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      securityContext:
        fsGroup: 1001
      hostNetwork: false
      hostIPC: false
      containers:
        - name: postgresql
          image: docker.io/bitnami/postgresql:13.11.0-debian-11-r15
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 1001
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: POSTGRESQL_PORT_NUMBER
              value: "5432"
            - name: POSTGRESQL_VOLUME_DIR
              value: "/bitnami/postgresql"
            - name: PGDATA
              value: "/bitnami/postgresql/data"
            # Authentication
            - name: POSTGRES_USER
              value: "anchore"
            - name: POSTGRES_POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: release-name-postgresql
                  key: postgres-password
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: release-name-postgresql
                  key: password
            - name: POSTGRES_DB
              value: "anchore"
            # Replication
            # Initdb
            # Standby
            # LDAP
            - name: POSTGRESQL_ENABLE_LDAP
              value: "no"
            # TLS
            - name: POSTGRESQL_ENABLE_TLS
              value: "no"
            # Audit
            - name: POSTGRESQL_LOG_HOSTNAME
              value: "false"
            - name: POSTGRESQL_LOG_CONNECTIONS
              value: "false"
            - name: POSTGRESQL_LOG_DISCONNECTIONS
              value: "false"
            - name: POSTGRESQL_PGAUDIT_LOG_CATALOG
              value: "off"
            # Others
            - name: POSTGRESQL_CLIENT_MIN_MESSAGES
              value: "error"
            - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
              value: "pgaudit"
          ports:
            - name: tcp-postgresql
              containerPort: 5432
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - /bin/sh
                - -c
                - exec pg_isready -U "anchore" -d "dbname=anchore" -h 127.0.0.1 -p 5432
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - /bin/sh
                - -c
                - -e
                
                - |
                  exec pg_isready -U "anchore" -d "dbname=anchore" -h 127.0.0.1 -p 5432
                  [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
          resources:
            limits: {}
            requests:
              cpu: 250m
              memory: 256Mi
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm
            - name: data
              mountPath: /bitnami/postgresql
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "20Gi"
---
# Source: enterprise/charts/ui-redis/templates/master/application.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-ui-redis-master
  namespace: "default"
  labels:
    app.kubernetes.io/name: ui-redis
    helm.sh/chart: ui-redis-17.11.8
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: master
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ui-redis
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: master
  serviceName: release-name-ui-redis-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ui-redis
        helm.sh/chart: ui-redis-17.11.8
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: master
      annotations:
        checksum/configmap: 72223ca78492745b4bf5df10d040b582000e30f0e1bc0c18154a27868da81587
        checksum/health: 09b1e1e055ba7fcca57a4bea6f6cc8da24e283146707f5c4a8d9d5b7f6ceef9a
        checksum/scripts: c0abb531c2f099ed0ac466d8c3a151214545c09233a4eae460d6d9a422896170
        checksum/secret: 68f83a0c8ed93c792e215eb79987da567ee95e0d7572a4275fe57d0a2d9c571a
    spec:
      
      securityContext:
        fsGroup: 1001
      serviceAccountName: release-name-ui-redis
      automountServiceAccountToken: true
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: ui-redis
                    app.kubernetes.io/instance: release-name
                    app.kubernetes.io/component: master
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      terminationGracePeriodSeconds: 30
      containers:
        - name: redis
          image: docker.io/bitnami/redis:7.0.12-debian-11-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 1001
          command:
            - /bin/bash
          args:
            - -c
            - /opt/bitnami/scripts/start-scripts/start-master.sh
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: REDIS_REPLICATION_MODE
              value: master
            - name: ALLOW_EMPTY_PASSWORD
              value: "no"
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: release-name-ui-redis
                  key: redis-password
            - name: REDIS_TLS_ENABLED
              value: "no"
            - name: REDIS_PORT
              value: "6379"
          ports:
            - name: redis
              containerPort: 6379
          livenessProbe:
            initialDelaySeconds: 20
            periodSeconds: 5
            # One second longer than command timeout should prevent generation of zombie processes.
            timeoutSeconds: 6
            successThreshold: 1
            failureThreshold: 5
            exec:
              command:
                - sh
                - -c
                - /health/ping_liveness_local.sh 5
          readinessProbe:
            initialDelaySeconds: 20
            periodSeconds: 5
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 5
            exec:
              command:
                - sh
                - -c
                - /health/ping_readiness_local.sh 1
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: start-scripts
              mountPath: /opt/bitnami/scripts/start-scripts
            - name: health
              mountPath: /health
            - name: redis-data
              mountPath: /data
            - name: config
              mountPath: /opt/bitnami/redis/mounted-etc
            - name: redis-tmp-conf
              mountPath: /opt/bitnami/redis/etc/
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: start-scripts
          configMap:
            name: release-name-ui-redis-scripts
            defaultMode: 0755
        - name: health
          configMap:
            name: release-name-ui-redis-health
            defaultMode: 0755
        - name: config
          configMap:
            name: release-name-ui-redis-configuration
        - name: redis-tmp-conf
          emptyDir: {}
        - name: tmp
          emptyDir: {}
        - name: redis-data
          emptyDir: {}
---
# Source: enterprise/templates/hooks/pre-upgrade/upgrade_rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-enterprise-upgrade-sa
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: upgradejob
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
---
# Source: enterprise/templates/hooks/pre-upgrade/upgrade_rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: release-name-enterprise-upgrade-role
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: upgradejob
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
rules:
  - apiGroups:
      - extensions
      - apps
    resources:
      - deployments
    verbs:
      - get
      - list
      - watch
      - update
      - patch
  - apiGroups:
      - apps
    resources:
      - deployments/scale
    verbs:
      - patch
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - watch
      - list
      - get
---
# Source: enterprise/templates/hooks/pre-upgrade/upgrade_rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: release-name-enterprise-upgrade-role-binding
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: upgradejob
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: release-name-enterprise-upgrade-role
subjects:
  - kind: ServiceAccount
    name: release-name-enterprise-upgrade-sa
    namespace: default
---
# Source: enterprise/templates/tests/anchorectl_smoketest.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "release-name-enterprise-5150-smoke-test"
  namespace: "default"
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: smoketest
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    
    "helm.sh/hook": test
spec:
  volumes:
  
  - name: anchore-license
    secret:
      
      secretName: anchore-enterprise-license
  - name: anchore-scripts
    configMap:
      name: release-name-enterprise-scripts
      defaultMode: 0755
  - name: config-volume
    configMap:
      name: release-name-enterprise
  imagePullSecrets:
    - name: anchore-enterprise-pullcreds
  containers:
    - name: "anchorectl-smoketest"
      image: docker.io/anchore/enterprise:v5.15.0
      imagePullPolicy: IfNotPresent
      envFrom:
      - configMapRef:
          name: release-name-enterprise-config-env-vars
      - secretRef:
          name: release-name-enterprise
      env:
      
      
      # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
      
      - name: ANCHORE_ENDPOINT_HOSTNAME
        value: release-name-enterprise-5150-smoke-test.default.svc.cluster.local
      - name: ANCHORE_PORT
        value: "null"
      - name: ANCHORE_HOST_ID
        valueFrom:
          fieldRef:
            fieldPath: metadata.name

      command: ["/bin/bash", "-c"]
      args:
        - |
          anchorectl system smoke-tests run || true

      volumeMounts:
      
      - name: anchore-license
        mountPath: /home/anchore/license.yaml
        subPath: license.yaml
      - name: config-volume
        mountPath: /config/config.yaml
        subPath: config.yaml
      - name: anchore-scripts
        mountPath: /scripts
  restartPolicy: Never
---
# Source: enterprise/templates/hooks/pre-upgrade/upgrade_job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: release-name-enterprise-5150-upgrade
  namespace: default
  labels:
    
    app.kubernetes.io/name: release-name-enterprise
    app.kubernetes.io/component: upgradejob
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: 5.15.0
    app.kubernetes.io/part-of: anchore
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: enterprise-3.5.1
  annotations:
    
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "3"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    metadata:
      name: release-name-enterprise-5150-upgrade
      labels:
        
        app.kubernetes.io/name: release-name-enterprise
        app.kubernetes.io/component: upgradejob
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: 5.15.0
        app.kubernetes.io/part-of: anchore
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: enterprise-3.5.1
      annotations:
        
        checksum/secrets: 8d588250937ffa3a80e7b2e32f8ba2fdb82ce05978e1f35674e47547b91cb936
    spec:      
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      serviceAccountName: release-name-enterprise-upgrade-sa
      imagePullSecrets:
        - name: anchore-enterprise-pullcreds
      restartPolicy: Never
      volumes:
        
      initContainers:
        - name: scale-down-anchore
          image: bitnami/kubectl:1.30
          command: ["/bin/bash", "-c"]
          args:
            - |
              kubectl scale deployments --all --replicas=0 -l app.kubernetes.io/name=release-name-enterprise;
              while [[ $(kubectl get pods -l app.kubernetes.io/name=release-name-enterprise --field-selector=status.phase=Running --no-headers | tee /dev/stderr | wc -l) -gt 0 ]]; do
                echo 'waiting for pods to go down...' && sleep 5;
              done
        - name: wait-for-db
          image: docker.io/anchore/enterprise:v5.15.0
          imagePullPolicy: IfNotPresent
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-5150-upgrade.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "null"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command: ["/bin/bash", "-c"]
          args:
            - |
              while true; do
                CONNSTR=postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}"
                if [[ ${ANCHORE_DB_SSL_MODE} != null ]]; then
                  CONNSTR=${CONNSTR}?sslmode=${ANCHORE_DB_SSL_MODE}
                fi
                if [[ ${ANCHORE_DB_SSL_ROOT_CERT} != null ]]; then
                  CONNSTR=${CONNSTR}\&sslrootcert=${ANCHORE_DB_SSL_ROOT_CERT}
                fi
                err=$(anchore-enterprise-manager db --db-connect ${CONNSTR} pre-upgrade-check 2>&1 > /dev/null)
                if [[ !$err ]]; then
                  echo "Database is ready"
                  exit 0
                fi
                echo "Database is not ready yet, sleeping 10 seconds..."
                sleep 10
              done
      containers:
        - name: upgrade-enterprise-db
          image: docker.io/anchore/enterprise:v5.15.0
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: release-name-enterprise-config-env-vars
            - secretRef:
                name: release-name-enterprise
          env:
            
            
            # check if the domainSuffix is set on the service level of the component, if it is, use that, else use the global domainSuffix
            
            - name: ANCHORE_ENDPOINT_HOSTNAME
              value: release-name-enterprise-5150-upgrade.default.svc.cluster.local
            - name: ANCHORE_PORT
              value: "null"
            - name: ANCHORE_HOST_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            
          command: ["/bin/bash", "-c"]
          args:
            - |
               anchore-enterprise-manager db --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" upgrade --dontask;


@zhill
Member
zhill commented Apr 2, 2025

@blang9238 what is the reason for this PR?

gnyahay
gnyahay previously approved these changes Apr 2, 2025
Contributor
@gnyahay gnyahay left a comment

It appears to be backwards compatible.

@blang9238
Contributor Author

@zhill P1 asked if we could support separating out the registry / repo / tag (similar to other charts) but mentioned that BB will only pull it down if it's in our upstream. I'm reworking this currently to use the common.tpl. I'll send you the internal discussion.
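
(For illustration, a hypothetical internal-mirror override using the imageSpec keys from this PR might look like this; the registry hostname is made up.)

imageSpec:
  registry: registry1.example.com
  repository: anchore/enterprise
  tag: v5.15.0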

@@ -39,7 +39,11 @@ spec:
{{- include "enterprise.common.cloudsqlContainer" . | nindent 8 }}
{{- end }}
- name: "{{ .Chart.Name }}-{{ $component | lower }}"
image: {{ .Values.image }}
image: {{- if and .Values.imageSpec.registry .Values.imageSpec.repository .Values.imageSpec.tag -}}
{{ printf "%s/%s:%s" .Values.imageSpec.registry .Values.imageSpec.repository .Values.imageSpec.tag | trim | indent 1}}
Member
@zhill zhill Apr 2, 2025

this doesn't handle digests instead of a tag.
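
(For reference, a digest-pinned image reference uses @ instead of a :tag, e.g. the following, where the digest value is a placeholder.)

image: docker.io/anchore/enterprise@sha256:<digest>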

Member

but similar bitnami format does support digest as well in the object via the "Digest" field, so not sure if that needs to also be supported here as well.
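
(A sketch of that bitnami-style layout, using the postgresql subchart image from this very chart as the example; digest stays empty unless pinned.)

image:
  registry: docker.io
  repository: bitnami/postgresql
  tag: 13.11.0-debian-11-r15
  digest: ""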

Contributor Author

Good point, can certainly add it.

image: {{- if and .Values.ui.imageSpec.registry .Values.ui.imageSpec.repository .Values.ui.imageSpec.tag -}}
{{ printf "%s/%s:%s" .Values.ui.imageSpec.registry .Values.ui.imageSpec.repository .Values.ui.imageSpec.tag | trim | indent 1}}
{{- else -}}
{{ .Values.image | trim | indent 1}}
Contributor Author

This is what's breaking testing. I'll push a fix shortly.

Contributor Author

Fixed.

Create an image specification template that can override the default image
based on global settings
*/}}
{{- define "enterprise.common.imageOverride" -}}
Member

"Override" is confusing here, it should just be "enterprise.common.image".

Contributor Author
@blang9238 blang9238 Apr 7, 2025

Ack, so should it be:

image: docker.io/anchore/enterprise:v5.16.0
  # registry: docker.io
  # repository: anchore/enterprise
  # tag: v5.16.0
  # digest: sha256:12345abcde

Then if registry / repo / tag / digest are defined, use that, otherwise default to just the pullstring?

or should I split it out into something like:

image: 
  pullstring: docker.io/anchore/enterprise:v5.16.0
  # registry: docker.io
  # repository: anchore/enterprise
  # tag: v5.16.0
  # digest: sha256:12345abcde

And then do the same checks but modify .Values.image to .Values.image.pullstring?

Contributor Author

Disregard, saw your later comment about checking the 'image' value either being a map or a string

Create an image specification template for the UI that can override the default image
based on component-specific or global settings
*/}}
{{- define "enterprise.ui.imageOverride" -}}
Member

Same here as above, just use "enterprise.ui.image".

Create an image specification template for the UI that can override the default image
based on component-specific or global settings
*/}}
{{- define "enterprise.kubectl.imageOverride" -}}
Member

And here.

@@ -1,6 +1,6 @@
apiVersion: v2
name: enterprise
version: "3.5.2"
version: "3.5.3"
Member

Since this is a new feature, though backwards compatible, I think this is "3.6.0" not a patch update.

Contributor Author

@zhill since v5.16.0 took the chart to 3.6.0, should I then bump this to 3.7.0?

Member
@zhill zhill left a comment

@blang9238 I also checked on how we can make this a bit easier for the user, and something like this snippet would allow the 'image' value to be either a map or a string, so we wouldn't need different keys:

{{- define "enterprise.common.image" -}}
# Check if the 'image' is a string or not, if not, use it as a map.
{{- if eq (printf "%T" .Values.image) "string" }}
  {{- .Values.image | trim -}}
{{- else -}}
  # Assume a map (but verify non-nil..) and construct string from the parts.
  {{- printf "%s/%s:%s" .Values.image.Registry .Values.image.Repository .Values.image.Tag | trim -}}
{{- end -}}
{{- end }}

I'd like others' opinions on whether allowing different types there would be more confusing, but at least it is a single place in the Values to set the images rather than 2 that interact. @HN23 or @Btodhunter, thoughts?
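
(Under that proposal, both of these values forms would render the same pull string; a sketch assuming the lowercase map keys used later in the thread.)

image: docker.io/anchore/enterprise:v5.16.0

image:
  registry: docker.io
  repository: anchore/enterprise
  tag: v5.16.0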

@@ -39,7 +39,7 @@ spec:
{{- include "enterprise.common.cloudsqlContainer" . | nindent 8 }}
{{- end }}
- name: "{{ .Chart.Name }}-{{ $component | lower }}"
image: {{ .Values.image }}
image: {{ include "enterprise.common.imageOverride" . | trim }}
Member

Is the trim needed here? Shouldn't that be applied once in the setting of the value?
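
(That is, the helper itself could own the trim once; a minimal sketch.)

{{- define "enterprise.common.image" -}}
{{- .Values.image | trim -}}
{{- end }}

image: {{ include "enterprise.common.image" . }}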

@blang9238
Contributor Author

@zhill pushed some updates for your review.

Member
@zhill zhill left a comment

Looks like there is a breaking change, so we need to fix that (the new way for kubectl works but is breaking if a user had set it previously). Otherwise just a few questions and minor value checks to ensure the image refs are constructed well. Overall looking good. Getting close!

{{- define "enterprise.common.image" -}}
{{- if eq (printf "%T" .Values.image) "string" }}
{{- .Values.image | trim -}}
{{- else if and .Values.image.digest -}}
Member

Is the "and" necessary here? I may be wrong but I don't think the "and" is helpful here.

Contributor Author

nope, removed.

{{- define "enterprise.ui.image" -}}
{{- if eq (printf "%T" .Values.ui.image) "string" }}
{{- .Values.ui.image | trim -}}
{{- else if and .Values.ui.image.digest -}}
Member

same "and" question

Contributor Author

removed.

{{- if eq (printf "%T" .Values.image) "string" }}
{{- .Values.image | trim -}}
{{- else if and .Values.image.digest -}}
{{- printf "%s/%s@%s" .Values.image.registry .Values.image.repository .Values.image.digest | trim -}}
Member

In both of these cases (tag or digest), if the user only fills out part of the dict, it will output a bad reference and the pods will fail to run. I'd like to see some checks that the registry and repository properties exist and are non-nil.
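
(One way to express such a guard, as a sketch: fail the render with a clear message instead of emitting a broken pull string.)

{{- if not (and .Values.image.registry .Values.image.repository) -}}
{{- fail "image.registry and image.repository must both be set when image is a map" -}}
{{- end -}}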

Contributor Author

Ack thanks @zhill, I'll revisit and get checks in place for this.

Contributor Author
@blang9238 blang9238 Apr 16, 2025

Ok latest commit should handle it better.

@blang9238
Contributor Author

@zhill Is this aligned with what you were thinking? I left the trims in, renders correctly when I tested locally and with helm templating.

@HN23
Member
HN23 commented Apr 16, 2025

@blang9238
Can you update stable/enterprise/templates/tests/anchorectl_smoketest.yaml as well? It's currently {{ .Values.image }}, which won't render a valid image reference when a dict is passed in.
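
(Presumably by switching the smoke test to the shared helper, along the lines of:)

image: {{ include "enterprise.common.image" . | trim }}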

@blang9238 blang9238 requested review from zhill and HN23 April 25, 2025 16:53
blang9238 added 11 commits May 27, 2025 09:57
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
…erride'

Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
blang9238 added 5 commits May 27, 2025 10:15
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
Signed-off-by: Ben Lang <blang@anchore.com>
@blang9238 blang9238 force-pushed the blang/allow-registry-repo-tag-image-spec branch from ab96e9a to ce04dc4 May 27, 2025 17:54
@@ -17,9 +17,16 @@ global:
## Common params used by all Anchore k8s resources
###################################################

## @param image Image used for all Anchore Enterprise deployments, excluding Anchore UI
## @param imageOverride imageOverride used for all Anchore Enterprise deployments by separating out registry/repo:tag, excluding Anchore UI
Member

is imageOverride an old/deleted value? If so, we should remove mentions of it.

@@ -1,6 +1,6 @@
apiVersion: v2
name: enterprise
version: "3.8.0"
version: "3.8.1"
Member

We should probably bump this to the next minor version instead of a patch since it's changing/adding a new way the chart works. Also, we should probably update the README to include the changes (both the kubectlImage change and the image/ui.image change).

Signed-off-by: Ben Lang <blang@anchore.com>
Member
@HN23 HN23 left a comment

lgtm. Thanks for the PR @blang9238 !

Member
@zhill zhill left a comment

lgtm

@blang9238 blang9238 merged commit 8d17b6c into main May 27, 2025
12 checks passed