fix(policy-controller)!: Disallow requests to undefined service ports #14149


Merged · adleong merged 9 commits into main from alex/port-hiding-v2 on Jul 1, 2025

Conversation

adleong (Member) commented Jun 17, 2025

Fixes #13922

When a meshed client attempts to connect to a ClusterIP Service on a port that is not present in that service's spec, Linkerd allows that connection through anyway. This is in contrast to bare kube-proxy behavior, where only the ports defined in the service spec are forwarded to the service's endpoints.
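To make the gap concrete: if a Service defines only port 80, kube-proxy forwards only port 80, while Linkerd would previously proxy a connection to, say, port 81 straight through to the endpoints. The check this change implies can be sketched with the k8s-openapi types the policy controller builds on; this port_is_defined helper is hypothetical, not the PR's actual lookup code:

// Hypothetical sketch of the "is this port defined?" check implied by this
// change; not the PR's actual lookup code.
use k8s_openapi::api::core::v1::Service;

fn port_is_defined(svc: &Service, port: u16) -> bool {
    svc.spec
        .as_ref()
        .and_then(|spec| spec.ports.as_ref())
        .map(|ports| ports.iter().any(|p| p.port == i32::from(port)))
        .unwrap_or(false)
}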

We update the policy-controller to return an opaque route with a forbidden filter for such connections. This causes the outbound Linkerd proxy to immediately reset the connection if a client attempts to connect to a port not defined in the service spec.

This shows up in Linkerd proxy metrics as a Forbidden error with the "undefined-port" synthetic route:

outbound_tcp_route_close_total{
	parent_group="core",
	parent_kind="Service",
	parent_namespace="default",
	parent_name="nginx",
	parent_port="80",
	parent_section_name="",
	route_group="",
	route_kind="default",
	route_namespace="",
	route_name="undefined-port",
	target_ip="",
	target_port="80",
	error="forbidden"
} 1

adleong added 4 commits April 30, 2025 20:54
@adleong requested a review from a team as a code owner June 17, 2025 23:40
adleong added 2 commits June 18, 2025 23:41
@adleong requested a review from zaharidichev June 26, 2025 16:33
zaharidichev (Member) left a comment

Great work and use of our APIs! LGTM, pending some simplification comments.

Comment on lines 436 to 449 of policy-controller/grpc/src/outbound.rs:
    // Since we set a Forbidden filter, this backend will never be used.
    // However, it is required to have a backend in the OpaqueRoute in order
    // for the config to be valid. Therefore, we use an arbitrary dummy backend.
    let backend = outbound::Backend {
        metadata: metadata.clone(),
        queue: Some(default_queue_config()),
        kind: Some(outbound::backend::Kind::Forward(
            destination::WeightedAddr {
                addr: Some(SocketAddr::new(IpAddr::V4(Ipv4Addr::new(192, 0, 2, 1)), 1).into()),
                weight: 1,
                ..Default::default()
            },
        )),
    };
zaharidichev (Member) commented:

TIOLI: Can't you use an empty distribution instead? For example, this seems to work:

diff --git a/policy-controller/grpc/src/outbound.rs b/policy-controller/grpc/src/outbound.rs
index e85a339d4..a0df569e0 100644
--- a/policy-controller/grpc/src/outbound.rs
+++ b/policy-controller/grpc/src/outbound.rs
@@ -20,13 +20,7 @@ use linkerd_policy_controller_core::{
     },
     routes::GroupKindNamespaceName,
 };
-use std::{
-    net::{IpAddr, Ipv4Addr, SocketAddr},
-    num::NonZeroU16,
-    str::FromStr,
-    sync::Arc,
-    time,
-};
+use std::{net::SocketAddr, num::NonZeroU16, str::FromStr, sync::Arc, time};

 mod grpc;
 mod http;
@@ -433,21 +427,6 @@ fn undefined_port(target: ResourceTarget) -> outbound::OutboundPolicy {
         })),
     });

-    // Since we set a Forbidden filter, this backend will never be used.
-    // However, it is required to have a backend in the OpaqueRoute in order
-    // for the config to be valid. Therefore, we use an arbitrary dummy backend.
-    let backend = outbound::Backend {
-        metadata: metadata.clone(),
-        queue: Some(default_queue_config()),
-        kind: Some(outbound::backend::Kind::Forward(
-            destination::WeightedAddr {
-                addr: Some(SocketAddr::new(IpAddr::V4(Ipv4Addr::new(192, 0, 2, 1)), 1).into()),
-                weight: 1,
-                ..Default::default()
-            },
-        )),
-    };
-
     let opaque = outbound::proxy_protocol::Opaque {
         routes: vec![outbound::OpaqueRoute {
             metadata: Some(Metadata {
@@ -455,13 +434,8 @@ fn undefined_port(target: ResourceTarget) -> outbound::OutboundPolicy {
             }),
             rules: vec![outbound::opaque_route::Rule {
                 backends: Some(outbound::opaque_route::Distribution {
-                    kind: Some(outbound::opaque_route::distribution::Kind::FirstAvailable(
-                        outbound::opaque_route::distribution::FirstAvailable {
-                            backends: vec![outbound::opaque_route::RouteBackend {
-                                backend: Some(backend),
-                                ..Default::default()
-                            }],
-                        },
+                    kind: Some(outbound::opaque_route::distribution::Kind::Empty(
+                        outbound::opaque_route::distribution::Empty {},
                     )),
                 }),
                 filters: vec![outbound::opaque_route::Filter {

This way you should be able to avoid the dummy backend and the bogus IP address. You should verify this yourself, but some superficial testing with the proposed change seems to yield the desired result and passes policy validation in the proxy.

~ $ curl localhost:4191/metrics | grep 81
outbound_tcp_route_open_total{parent_group="core",parent_kind="Service",parent_namespace="default",parent_name="server",parent_port="81",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="undefined-port",target_ip="",target_port="81"} 1
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="default",parent_name="server",parent_port="81",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="undefined-port",target_ip="",target_port="81",error="invalid_policy"} 0
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="default",parent_name="server",parent_port="81",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="undefined-port",target_ip="",target_port="81",error=""} 0
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="default",parent_name="server",parent_port="81",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="undefined-port",target_ip="",target_port="81",error="forbidden"} 1
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="default",parent_name="server",parent_port="81",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="undefined-port",target_ip="",target_port="81",error="invalid_backend"} 0
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="default",parent_name="server",parent_port="81",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="undefined-port",target_ip="",target_port="81",error="unexpected"} 0

Here are the logs from the client proxy for reference. You can see the proxy log that an empty distribution will never become ready, but we do not care about this since we error anyway.

[   786.544512s] DEBUG ThreadId(01) linkerd_idle_cache: Caching new value key=OrigDstAddr(10.96.209.116:81)
[   786.544600s] DEBUG ThreadId(01) linkerd_app_outbound::discover: Discover addr=10.96.209.116:81
[   786.544618s] DEBUG ThreadId(01) linkerd_service_profiles::allowlist: Discovery allowed addr=10.96.209.116:81
[   786.544742s] DEBUG ThreadId(01) linkerd_idle_cache: Awaiting idleness
[   786.544887s] DEBUG ThreadId(01) h2::codec::framed_write: send frame=Headers { stream_id: StreamId(11), flags: (0x4: END_HEADERS) }
[   786.544966s] DEBUG ThreadId(01) h2::codec::framed_write: send frame=Data { stream_id: StreamId(11) }
[   786.544977s] DEBUG ThreadId(01) h2::codec::framed_write: send frame=Data { stream_id: StreamId(11), flags: (0x1: END_STREAM) }
[   786.545083s] DEBUG ThreadId(01) h2::codec::framed_write: send frame=Headers { stream_id: StreamId(11), flags: (0x4: END_HEADERS) }
[   786.545180s] DEBUG ThreadId(01) h2::codec::framed_write: send frame=Data { stream_id: StreamId(11) }
[   786.545198s] DEBUG ThreadId(01) h2::codec::framed_write: send frame=Data { stream_id: StreamId(11), flags: (0x1: END_STREAM) }
[   786.548621s] DEBUG ThreadId(01) h2::codec::framed_read: received frame=Headers { stream_id: StreamId(11), flags: (0x4: END_HEADERS) }
[   786.548641s] DEBUG ThreadId(01) h2::codec::framed_read: received frame=Data { stream_id: StreamId(11) }
[   786.548730s] DEBUG ThreadId(01) linkerd_app_outbound::policy::api: policy=ClientPolicy { parent: Resource { group: "core", kind: "Service", name: "server", namespace: "default", section: None, port: Some(81) }, protocol: Opaque(Opaque { routes: Some(Route { policy: RoutePolicy { meta: Default { name: "undefined-port" }, filters: [Forbidden], distribution: Empty, params: () } }) }), backends: [] }
[   786.550824s] DEBUG ThreadId(01) h2::codec::framed_read: received frame=Headers { stream_id: StreamId(11), flags: (0x4: END_HEADERS) }
[   786.550844s] DEBUG ThreadId(01) h2::codec::framed_read: received frame=Data { stream_id: StreamId(11) }
[   786.550887s] DEBUG ThreadId(01) linkerd_service_profiles::client: Resolved profile profile=Profile { addr: Some(LogicalAddr(server.default.svc.cluster.local:81)), http_routes: [], targets: [], opaque_protocol: false, endpoint: None }
[   786.550906s] DEBUG ThreadId(01) linkerd_app_outbound::discover: Discovered
[   786.550939s] DEBUG ThreadId(01) tower::buffer::worker: service.ready=true processing request
[   786.550964s] DEBUG ThreadId(01) linkerd_app_outbound::opaq: Using ClientPolicy routes addr=server.default.svc.cluster.local:81
[   786.550983s] DEBUG ThreadId(01) linkerd_idle_cache: Caching new value key=Opaq(OpaqSidecar { orig_dst: OrigDstAddr(10.96.209.116:81), routes: Receiver { shared: Shared { value: RwLock(PhantomData<std::sync::rwlock::RwLock<linkerd_app_outbound::opaq::logical::Routes>>, RwLock { data: Routes { logical: Logical { meta: ParentRef(Resource { group: "core", kind: "Service", name: "server", namespace: "default", section: None, port: Some(81) }), addr: Name(NameAddr { name: Name("server.default.svc.cluster.local"), port: 81 }) }, routes: Some(Route { policy: RoutePolicy { meta: Default { name: "undefined-port" }, filters: [Forbidden], distribution: Empty, params: () } }), backends: [] } }), version: Version(0), is_closed: false, ref_count_rx: 1 }, version: Version(0) } })
[   786.551030s] DEBUG ThreadId(01) linkerd_app_outbound::opaq::logical::router: Selected route meta=RouteRef(Default { name: "undefined-port" })
[   786.551049s] DEBUG ThreadId(01) linkerd_distribute::stack: New distribution backends=Empty
[   786.551069s] DEBUG ThreadId(01) linkerd_distribute::service: empty distribution will never become ready
[   786.551074s] DEBUG ThreadId(01) linkerd_stack::loadshed: Service has become unavailable
[   786.551145s]  INFO ThreadId(01) linkerd_app_core::serve: Connection closed error=logical service server.default.svc.cluster.local:81: route default.undefined-port: forbidden TCP route error.sources=[route default.undefined-port: forbidden TCP route, forbidden TCP route] client.addr=10.244.0.8:46740 server.addr=10.96.209.116:81
[   786.551186s] DEBUG ThreadId(01) linkerd_idle_cache: Awaiting idleness

adleong (Member, Author) replied:

Good call. I've tested this and it works as expected!
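
For reference, with the suggestion applied, the rule built by undefined_port reduces to roughly the sketch below, assembled from the diff above. The Forbidden filter construction is elided because the corresponding hunk is truncated; forbidden_filter() is a hypothetical stand-in for it:

use linkerd2_proxy_api::{
    meta::{metadata, Metadata},
    outbound,
};

// Sketch of the merged shape: an empty distribution (no backends) plus a
// Forbidden filter, so the proxy resets the connection instead of
// forwarding it. forbidden_filter() is a hypothetical stand-in for the
// filter construction truncated in the diff above.
let opaque = outbound::proxy_protocol::Opaque {
    routes: vec![outbound::OpaqueRoute {
        metadata: Some(Metadata {
            // Surfaced in metrics as route_kind="default",
            // route_name="undefined-port".
            kind: Some(metadata::Kind::Default("undefined-port".to_string())),
        }),
        rules: vec![outbound::opaque_route::Rule {
            backends: Some(outbound::opaque_route::Distribution {
                // No backends: this distribution never becomes ready, which
                // is fine because the filter rejects the connection first.
                kind: Some(outbound::opaque_route::distribution::Kind::Empty(
                    outbound::opaque_route::distribution::Empty {},
                )),
            }),
            filters: vec![forbidden_filter()],
            ..Default::default()
        }],
        ..Default::default()
    }],
};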

@adleong merged commit 9256082 into main Jul 1, 2025
70 of 75 checks passed
@adleong deleted the alex/port-hiding-v2 branch July 1, 2025 00:25
Closes: Linkerd exposes all ports on pod regardless of Service definition (#13922)