[1.13 rc3 Swarm] When restarting a local container on a worker which connects to an overlay network, the worker is added as a peer every time · Issue #29276 · moby/moby
Open
@hamburml

Description

I am using 1.13 rc3 to test the newest swarm features. Currently I am using overlay networks in the swarm to let locally running containers communicate with each other. To be precise, the manager runs the zabbix-server, zabbix-web and zabbix-postgres containers. One of the workers runs a zabbix-agent which should connect to the zabbix-server. They are all in the same zabbix-swarm-network.

Now I used docker restart to restart the container on the worker. After inspecting the overlay network via docker network inspect zabbix-swarm-network, I found that the worker is added to the network's peer list on every restart. This happens ONLY on the worker; the manager shows everything correctly.

Steps to reproduce the issue:

  1. Create an overlay network networkName
  2. Create a container with --network networkName
  3. Restart the container with docker restart containerName
  4. Run docker network inspect networkName

Describe the results you received:

root@jar:~# docker network inspect zabbix-swarm-network
[
    {
        "Name": "zabbix-swarm-network",
        ...,
        "Containers": {
            "3a2c1e4b122eb5358deb3b67120874717592c8f7508c917b47d90dcb1dafd95c": {
                "Name": "zabbix-agent-jar",
                ...
            }
        },
        ...
        "Peers": [
            {
                "Name": "jar-b6c5612d7641",
                "IP": "5.39.83.204"
            },
            {
                "Name": "bottle.haembi.de-74a51b212452",
                "IP": "5.9.24.226"
            },
            {
                "Name": "jar-b6c5612d7641",
                "IP": "5.39.83.204"
            },
            {
                "Name": "jar-b6c5612d7641",
                "IP": "5.39.83.204"
            }
        ]
    }
]

Describe the results you expected:
Only one Peers entry for jar.
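To make the discrepancy easy to spot on larger clusters, the duplication can be checked programmatically. The sketch below inlines a trimmed copy of the Peers array from the inspect output above; in practice you would feed it the full JSON from docker network inspect (the node names and IPs are the ones from this report, used only as sample data):

```python
import json
from collections import Counter

# Trimmed `docker network inspect` output containing only the Peers
# array shown above (sample data from this report).
inspect_output = """
[
  {
    "Name": "zabbix-swarm-network",
    "Peers": [
      {"Name": "jar-b6c5612d7641", "IP": "5.39.83.204"},
      {"Name": "bottle.haembi.de-74a51b212452", "IP": "5.9.24.226"},
      {"Name": "jar-b6c5612d7641", "IP": "5.39.83.204"},
      {"Name": "jar-b6c5612d7641", "IP": "5.39.83.204"}
    ]
  }
]
"""

networks = json.loads(inspect_output)
peers = networks[0]["Peers"]

# Count how often each (Name, IP) pair appears; a healthy network
# should list every peer exactly once.
counts = Counter((p["Name"], p["IP"]) for p in peers)
duplicates = {peer: n for peer, n in counts.items() if n > 1}

for (name, ip), n in duplicates.items():
    print(f"{name} ({ip}) listed {n} times")
```

Run against the output above, this flags jar-b6c5612d7641 as listed three times, matching the bug described here.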

Additional information you deem important (e.g. issue happens only occasionally):

I don't know if this is important, but the zabbix-server (which runs on the manager) can't reach the zabbix-agent on the worker. Maybe this is due to the duplicated peer entries, or I am doing something wrong here.
(Update: that reachability problem turned out to be a misconfiguration on my side and is not related to this issue.)

Output of docker version on manager:

Client:
 Version:      1.13.0-rc3
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   4d92237
 Built:        Mon Dec  5 18:49:08 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0-rc3
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   4d92237
 Built:        Mon Dec  5 18:49:08 2016
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info on manager:

Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 4
Server Version: 1.13.0-rc3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 33
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: 2roi10et2p908xc60w8ap8i6x
 Is Manager: true
 ClusterID: 69a3wiv7ja5bxs7isbndnmglv
 Managers: 1
 Nodes: 6
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 5.9.24.226
 Manager Addresses:
  5.9.24.226:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 51371867a01c467f08af739783b8beafc154c4d7
init version: 949e6fa
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.62 GiB
Name: bottle.haembi.de
ID: W4ZV:L457:5MAK:WYYG:UE3Y:ECG6:A5UI:7EFA:IW7Q:5BE2:UPBP:G5YW
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Output of docker version on worker:

Client:
 Version:      1.13.0-rc3
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   4d92237
 Built:        Mon Dec  5 18:49:08 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0-rc3
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   4d92237
 Built:        Mon Dec  5 18:49:08 2016
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info on worker:

Containers: 2
 Running: 1
 Paused: 0
 Stopped: 1
Images: 7
Server Version: 1.13.0-rc3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 34
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: zqe2l9ft41guntg448vqxtpf5
 Is Manager: false
 Node Address: 5.39.83.204
 Manager Addresses:
  5.9.24.226:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 51371867a01c467f08af739783b8beafc154c4d7
init version: 949e6fa
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.854 GiB
Name: jar
ID: N47Z:CACH:MYSN:7R2P:YO74:O2AP:CCJA:GSVL:NPPN:4VXQ:SZPS:HS2R
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Metadata

Labels

area/networking
kind/bug (Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed.)
version/1.13