Expose DaemonSet transformer state #2149
Comments
FWIW it turns out our endpoints have been becoming stale due to the above Kubernetes issue.

Whoa!!! Well, you know what they say: "It's never a Kubernetes bug -- except when it's a Kubernetes bug"

@adleong can you please assign this issue to me?
dtacalau added a commit to dtacalau/linkerd that referenced this issue on Oct 14, 2019:
…rrent view of each DaemonSet namer will be exposed to :9990/namer_state//io.l5d.k8s.daemonset/// , where namespace/port/service are the required config for the DaemonSet transformer. Signed-off-by: dst4096 <dst4096@gmail.com>
dtacalau added a commit to dtacalau/linkerd that referenced this issue on Oct 15, 2019:
…rrent view of each DaemonSet namer will be exposed to :9990/namer_state/io.l5d.k8s.daemonset/namespace/port/service , where namespace/port/service are the required config for the DaemonSet transformer. Signed-off-by: dst4096 <dst4096@gmail.com>
dtacalau added a commit to dtacalau/linkerd that referenced this issue on Oct 15, 2019:
…rrent view of each DaemonSet namer will be exposed to :9990/namer_state/io.l5d.k8s.daemonset/namespace/port/service , where namespace/port/service are the required config for the DaemonSet transformer. State is exposed for both namer and interpreter transformers. Signed-off-by: dst4096 <dst4096@gmail.com>
dtacalau added a commit to dtacalau/linkerd that referenced this issue on Oct 16, 2019:
…rrent view of each DaemonSet namer will be exposed to :9990/namer_state/io.l5d.k8s.daemonset/namespace/port/service , where namespace/port/service are the required config for the DaemonSet transformer. State is exposed for both namer and interpreter transformers. Multiple stacked transformers are supported, so, if a namer/interpreter has multiple transformers defined, state will be exposed for all of them. Signed-off-by: dst4096 <dst4096@gmail.com>
dtacalau added a commit to dtacalau/linkerd that referenced this issue on Oct 17, 2019:
…rrent view of each DaemonSet namer will be exposed to :9990/namer_state/io.l5d.k8s.daemonset/namespace/port/service , where namespace/port/service are the required config for the DaemonSet transformer. State is exposed for both namer and interpreter transformers. Multiple stacked transformers are supported, so, if a namer/interpreter has multiple transformers defined, state will be exposed for all of them. Signed-off-by: dst4096 <dst4096@gmail.com>
Issue Type: Feature request
What happened: I run linkerd on Kubernetes in linker-to-linker mode. For whatever reason, the Kubernetes
kube-system/l5d
service that linkerd uses in my deployment to discover the pods in its DaemonSets (to route downstream traffic) had a stale and inaccurate set of pods. You can see more in #2148. This was not a linkerd bug, but it might have been easier to debug if it were possible to inspect linkerd's DaemonSet transformer state in the same way we can inspect the Kubernetes namer state today.

What you expected to happen:
An endpoint such as
http://:9990/transformer_state/io.l5d.k8s.daemonset.json
that exposes linkerd's current view of each DaemonSet namer. (I suppose it may make sense to nest this under namer_state somehow?)

Perhaps there would also be value in exposing the localnode transformer state? My understanding is that that one doesn't actually make any separate calls to the Kubernetes API, so I usually feel I can infer the localnode transformer state from the namer state. That said, more visibility can't hurt.
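The commits referenced above settle on a path of the shape `:9990/namer_state/io.l5d.k8s.daemonset/namespace/port/service`, where the three trailing segments are the transformer's required config. A minimal sketch of how a client might compose that admin URL — the host, the `kube-system`/`l5d` values (taken from the service named in this issue), the `incoming` port name, and the function itself are all illustrative assumptions, not part of linkerd's API:

```python
# Hypothetical helper: build the admin URL for inspecting a DaemonSet
# transformer's state, following the endpoint shape proposed in this
# issue (:9990/namer_state/io.l5d.k8s.daemonset/<ns>/<port>/<svc>).
# All concrete values below are illustrative assumptions.

def daemonset_state_url(host: str, namespace: str, port_name: str,
                        service: str, admin_port: int = 9990) -> str:
    """Compose the namer_state URL for one DaemonSet transformer."""
    return (f"http://{host}:{admin_port}/namer_state/"
            f"io.l5d.k8s.daemonset/{namespace}/{port_name}/{service}")

# e.g. the kube-system/l5d service mentioned in this issue:
print(daemonset_state_url("localhost", "kube-system", "incoming", "l5d"))
# -> http://localhost:9990/namer_state/io.l5d.k8s.daemonset/kube-system/incoming/l5d
```

Fetching that URL (with `curl` or any HTTP client) against a linkerd admin port would then return the transformer's current view, one URL per configured transformer when several are stacked.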
Anything else we need to know?: Here's my DaemonSet transformer configuration:
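The reporter's actual configuration did not survive the page capture. As a representative sketch only — built from the `namespace`/`port`/`service` parameters the commit messages call the transformer's required config, with all concrete values (the `io.l5d.k8s` namer, `kube-system`, `incoming`, `l5d`) assumed for illustration — a DaemonSet transformer block might look like:

```yaml
# Illustrative linkerd namer config with a DaemonSet transformer.
# The namer kind and all three parameter values are assumptions;
# namespace/port/service are the transformer's required fields.
namers:
- kind: io.l5d.k8s
  transformers:
  - kind: io.l5d.k8s.daemonset
    namespace: kube-system   # namespace the DaemonSet runs in
    port: incoming           # named port on the DaemonSet pods
    service: l5d             # service fronting the DaemonSet
```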
Environment: