[BUG] Backuprepo `ERROR : precheck.txt: Failed to copy: AccessDenied: not entitled` · Issue #9463 · apecloud/kubeblocks
Closed
@ToshY

Description


Describe the bug

Created a BackupRepo using the `s3-compatible` storage provider with Backblaze B2 (which requires `forcePathStyle` to be `true`), with an access key that is restricted to a single bucket.

With such a bucket-scoped access key, the pre-check fails with the following error:

$  kubectl get backuprepo -n postgres s3-repo -o jsonpath='{.status.conditions[?(@.type == "PreCheckPassed")].message}'
 pre-check
  2025/06/17 20:31:57 ERROR : precheck.txt: Failed to copy: AccessDenied: not entitled
        status code: 403, request id: 8ba2a5dab95c2807, host id: aYU1kFDIdOcU2fjP7NAU01Tj9MrsxRDfX
  Error: push to "/precheck.txt": AccessDenied: not entitled
        status code: 403, request id: 8ba2a5dab95c2807, host id: aYU1kFDIdOcU2fjP7NAU01Tj9MrsxRDf
More details
$  kubectl get backuprepo -n postgres s3-repo -o jsonpath='{.status.conditions[?(@.type == "PreCheckPassed")].message}'

Pre-check job failed, information collected for diagnosis.

Job failure message: BackoffLimitExceeded:Job has reached the specified backoff limit

Logs from the pre-check job:
  + export 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/bin/datasafed'
  + + datasafed push - /precheck.txt
  echo pre-check
  2025/06/17 20:31:57 ERROR : precheck.txt: Failed to copy: AccessDenied: not entitled
        status code: 403, request id: 8ba2a5dab95c2807, host id: aYU1kFDIdOcU2fjP7NAU01Tj9MrsxRDfX
  Error: push to "/precheck.txt": AccessDenied: not entitled
        status code: 403, request id: 8ba2a5dab95c2807, host id: aYU1kFDIdOcU2fjP7NAU01Tj9MrsxRDfX

Events from Pod/kb-system/pre-check-999055bb-s3-repo-6s7q8:
  Type  Reason  Age     From    Message
  ----  ------  ----    ----    -------
  Normal        Scheduled       4s      default-scheduler       Successfully assigned kb-system/pre-check-999055bb-s3-repo-6s7q8 to k3s-agent-nbg1-3-zvt
  Normal        Pulled  4s      kubelet Container image "apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/datasafed:0.2.0" already present on machine
  Normal        Created 4s      kubelet Created container: dp-copy-datasafed
  Normal        Started 4s      kubelet Started container dp-copy-datasafed
  Normal        Pulled  4s      kubelet Container image "apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools:1.0.1-beta.2" already present on machine
  Normal        Created 3s      kubelet Created container: pre-check
  Normal        Started 3s      kubelet Started container pre-check

Events from Job/kb-system/pre-check-999055bb-s3-repo:
  Type  Reason  Age     From    Message
  ----  ------  ----    ----    -------
  Normal        SuccessfulCreate        37s     job-controller  Created pod: pre-check-999055bb-s3-repo-mbkm7
  Normal        SuccessfulCreate        25s     job-controller  Created pod: pre-check-999055bb-s3-repo-kxbdb
  Normal        SuccessfulCreate        4s      job-controller  Created pod: pre-check-999055bb-s3-repo-6s7q8
  Warning       BackoffLimitExceeded    0s      job-controller  Job has reached the specified backoff limit

To Reproduce
Steps to reproduce the behavior:

  1. Use the following config files.

s3-repo

---
apiVersion: dataprotection.kubeblocks.io/v1alpha1
kind: BackupRepo
metadata:
  name: s3-repo
  namespace: postgres
  annotations:
    dataprotection.kubeblocks.io/is-default-repo: 'true'
spec:
  storageProviderRef: s3-compatible
  accessMethod: Tool
  pvReclaimPolicy: Retain
  volumeCapacity: 100Gi
  config:
    bucket: cluster-backup-001
    endpoint: s3.eu-central-001.backblazeb2.com
    region: eu-central-001
    forcePathStyle: 'true'
    mountOptions: --memory-limit 1000 --dir-mode 0777 --file-mode 0666
  credential:
    name: s3-backup-secret
    namespace: postgres

s3-backup-secret

---
apiVersion: v1
kind: Secret
metadata:
  name: s3-backup-secret
  namespace: postgres
type: Opaque
data:
  accessKeyId: ${AWS_ACCESS_KEY_ID}
  secretAccessKey: ${AWS_SECRET_ACCESS_KEY}

^ make sure the access key only has rights to the specific bucket (no master key)
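Note that Secret `data:` fields must be base64-encoded, so the `${...}` placeholders above need to expand to encoded values before the manifest is applied. A minimal sketch, assuming the raw key is available in hypothetical `B2_KEY_ID`/`B2_APP_KEY` variables:

```shell
#!/bin/sh
# Kubernetes Secret `data:` values must be base64-encoded, so encode the
# raw Backblaze key before substituting it into the manifest.
# B2_KEY_ID / B2_APP_KEY are hypothetical names; the defaults are dummies.
AWS_ACCESS_KEY_ID=$(printf '%s' "${B2_KEY_ID:-0012345abcdef}" | base64)
AWS_SECRET_ACCESS_KEY=$(printf '%s' "${B2_APP_KEY:-examplesecret}" | base64)
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
# Then render and apply the secret, e.g.:
#   envsubst < s3-backup-secret.yaml | kubectl apply -f -
```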

  2. Deploy the manifests.
  3. Check the BackupRepo status and see the error.

Expected behavior
No error; the pre-check should pass with a bucket-scoped access key.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: -
  • Browser: -
  • KubeBlocks version: v1.0.1-beta.2

Additional context

Add an option `no_check_bucket` to the `s3-compatible` storage provider. The `AccessDenied` error appears to come from the bucket-existence check during the pre-check, which a bucket-scoped access key is not permitted to perform.
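The log format above looks like rclone output, which datasafed appears to use for S3 transfers; rclone's S3 backend has a `no_check_bucket` option that skips the bucket-existence/creation check. A sketch of what an rclone remote for this repo could look like with the option enabled — the remote name, `provider` value, and overall layout are illustrative assumptions, not datasafed's actual generated config:

```ini
# Hypothetical rclone remote mirroring the BackupRepo config above;
# only no_check_bucket is new.
[s3-repo]
type = s3
provider = Other
endpoint = s3.eu-central-001.backblazeb2.com
region = eu-central-001
force_path_style = true
access_key_id = ${AWS_ACCESS_KEY_ID}
secret_access_key = ${AWS_SECRET_ACCESS_KEY}
# Skip the bucket check, which a bucket-scoped key cannot perform:
no_check_bucket = true
```

On the command line, the same behavior is available as the `--s3-no-check-bucket` flag.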

Labels

kind/bug (Something isn't working)