Description
What steps did you take and what happened:
Hi there! I tried to deploy the pod gateway and admission controller with your Helm chart, using the following values:
```yaml
image:
  repository: ghcr.io/angelnu/pod-gateway
  # I am using dev version for testing - others should be using latest
  tag: v1.8.1

DNSPolicy: ClusterFirst

webhook:
  image:
    repository: ghcr.io/angelnu/gateway-admision-controller
    # Use dev version
    pullPolicy: Always
    tag: v3.9.0
  namespaceSelector:
    type: label
    label: routed-gateway
  gatewayDefault: true
  gatewayLabel: setGateway

addons:
  vpn:
    enabled: true
    type: gluetun
    gluetun:
      image:
        repository: docker.io/qmcgaw/gluetun
        tag: latest
    env:
      - name: VPN_SERVICE_PROVIDER
        value: custom
      - name: VPN_TYPE
        value: wireguard
      - name: VPN_INTERFACE
        value: wg0
      - name: FIREWALL
        value: "off"
      - name: DOT
        value: "off"
      - name: DNS_KEEP_NAMESERVER
        value: "on"
    envFrom:
      - secretRef:
          name: wireguard-config
    livenessProbe:
      exec:
        command:
          - sh
          - -c
          - if [ $(wget -q -O- https://ipinfo.io/country) == 'NL' ]; then exit 0; else exit $?; fi
      initialDelaySeconds: 30
      periodSeconds: 60
      failureThreshold: 3
    networkPolicy:
      enabled: true
      egress:
        - to:
            - ipBlock:
                cidr: 0.0.0.0/0
          ports:
            # VPN traffic
            - port: 51820
              protocol: UDP
        - to:
            - ipBlock:
                cidr: 10.0.0.0/8

settings:
  # -- If using a VPN, interface name created by it
  VPN_INTERFACE: wg0
  # -- Prevent non-VPN traffic from leaving the gateway
  VPN_BLOCK_OTHER_TRAFFIC: true
  # -- If VPN_BLOCK_OTHER_TRAFFIC is true, allow VPN traffic over this port
  VPN_TRAFFIC_PORT: 51820
  # -- Traffic to these IPs will be sent through the K8S gateway
  VPN_LOCAL_CIDRS: "10.0.0.0/8 192.168.0.0/16"

# -- settings to expose ports, usually through a VPN provider.
# NOTE: if you change it you will need to manually restart the gateway POD
publicPorts:
  - hostname: transmission-client.media-center
    IP: 10
    ports:
      - type: udp
        port: 51413
      - type: tcp
        port: 51413
```
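For completeness, the `wireguard-config` secret referenced by `envFrom` above just carries gluetun's custom WireGuard provider settings; it looks roughly like this (key names taken from gluetun's custom-provider docs, actual values redacted):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: wireguard-config
  namespace: vpn-gateway
stringData:
  # gluetun custom WireGuard provider settings (values redacted)
  WIREGUARD_PRIVATE_KEY: <redacted>
  WIREGUARD_ADDRESSES: <redacted>
  VPN_ENDPOINT_IP: <redacted>
  VPN_ENDPOINT_PORT: "51820"
```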
So far, so good: I've got the pod gateway and admission controller up and running in my vpn-gateway namespace, with the WireGuard VPN client working on the pod gateway. Now I'm trying to actually route a pod in my media-center routed namespace.
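For context, the media-center namespace is labeled to match the webhook's `namespaceSelector` from the values above, along these lines:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: media-center
  labels:
    routed-gateway: "true"
```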
These are the logs of the gateway-init container of my transmission-client pod:

```
❯ kubectl logs transmission-client-7bbc685b44-xkk6h -n media-center -c gateway-init
...
++ dig +short vpn-gateway-pod-gateway.vpn-gateway.svc.cluster.local @10.43.0.10
+ GATEWAY_IP=';; connection timed out; no servers could be reached'
```
It looks like the init container is unable to resolve vpn-gateway-pod-gateway.vpn-gateway.svc.cluster.local, even though cluster-local DNS otherwise works fine. I tried running the same pod in a non-routed namespace, exec'd into it, and the same lookup worked fine:
```
root@transmission-client-7bbc685b44-m8l4r:/# nslookup vpn-gateway-pod-gateway.vpn-gateway.svc.cluster.local 10.43.0.10
Server:    10.43.0.10
Address:   10.43.0.10:53

Name:    vpn-gateway-pod-gateway.vpn-gateway.svc.cluster.local
Address: 10.42.2.61
```
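In case it helps to reproduce, a minimal pod along these lines (image choice is arbitrary) can be dropped into either namespace to compare DNS behavior:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
  namespace: media-center  # or a non-routed namespace for comparison
spec:
  containers:
    - name: dns-test
      image: busybox:1.36
      # keep the pod alive so we can exec in and run nslookup
      command: ["sleep", "3600"]
```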
Any idea what could be causing this behavior?
What did you expect to happen:
I was expecting the routed pod's gateway to be updated successfully and the pod to start up.
Anything else you would like to add:
Any help is appreciated; there is probably something I'm missing here. Happy to provide more information to debug this, thanks :)