cluster authentication fails after a while. · Issue #80 · paralus/relay
Open

Description

@rogelioorihuela

Expected vs actual behavior

Some time after the initial connection, I am no longer able to connect to the cluster via Lens. The relay agent needs to be restarted to allow connections to the target cluster again.
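
As a minimal sketch of the restart workaround, assuming the agent runs as a Deployment named relay-agent in a paralus-system namespace (both names are assumptions and may differ per install), a rollout restart avoids deleting pods by hand:

# Assumed namespace and Deployment name; adjust to match your install.
kubectl -n paralus-system rollout restart deployment/relay-agent
# Wait until the restarted pod is ready before reconnecting via Lens.
kubectl -n paralus-system rollout status deployment/relay-agent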

relay-agent logs:
{"level":"info","ts":"2023-08-21T16:11:04.457Z","caller":"tunnel/client.go:562","msg":"Relay Agent.Client.paralus-core-relay-agent::action handshake addr 18.204.85.83:443 "}
{"level":"info","ts":"2023-08-21T16:11:13.757Z","caller":"agent/agent.go:444","msg":"Relay Agent::setting hash of configmap data hash: 6b4af4b720c5f97671316f2d25f3a7802ef690289f7e0bd465b1c407616e36e2 "}
{"level":"info","ts":"2023-08-21T16:16:04.350Z","caller":"cleanup/authz.go:143","msg":"finding authz matching","selector":"authz-expiry<1692634564,paralus-relay=true"}
{"level":"info","ts":"2023-08-21T16:16:04.388Z","caller":"tunnel/client.go:209","msg":"Relay Agent.Client::scaled client timer totalScaledClient : 0 "}

Steps to reproduce the bug

  1. Deploy paralus
  2. Connect to cluster (do not disconnect)
  3. Try to connect after 5

Are you using the latest version of the project?

NAME      NAMESPACE  REVISION  UPDATED                               STATUS    CHART       APP VERSION
paralus   paralus    5         2023-08-16 17:14:50.752987 -0400 EDT  deployed  ztka-0.2.5  v0.2.4

You can check your version by running helm ls | grep '^<deployment-name>' or, if you are using pctl, pctl version, and provide the output.
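
For the release shown above (name paralus, namespace paralus), the equivalent command is:

helm ls -n paralus | grep '^paralus'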

What is your environment setup? Please tell us your cloud provider and operating system, and include the output of kubectl version --output=yaml and helm version. Any other information that you have, e.g. logs and custom values, is highly appreciated!

helm version
version.BuildInfo{Version:"v3.11.0", GitCommit:"472c5736ab01133de504a826bd9ee12cbe4e7904", GitTreeState:"clean", GoVersion:"go1.18.10"}

kubectl version --output=yaml
clientVersion:
  buildDate: "2022-01-25T21:25:17Z"
  compiler: gc
  gitCommit: 816c97ab8cff8a1c72eccca1026f7820e93e0d25
  gitTreeState: clean
  gitVersion: v1.23.3
  goVersion: go1.17.6
  major: "1"
  minor: "23"
  platform: darwin/amd64
serverVersion:
  buildDate: "2023-07-28T16:53:07Z"
  compiler: gc
  gitCommit: 222fd0c4adcf243ebeaee986cc0d41a39e570d8f
  gitTreeState: clean
  gitVersion: v1.23.17-eks-2d98532
  goVersion: go1.19.6
  major: "1"
  minor: "23+"
  platform: linux/amd64

Kubernetes cluster running on AWS

(optional) If you have ideas on why the bug happens or how it can be solved, please provide them here

  • I've described the bug, included steps to reproduce it, and included my environment setup with all customizations.
  • I'm using the latest version of the project.

Metadata

Assignees

No one assigned

    Labels

    bug: Something isn't working
    needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
