After stopping docker, previously running containers cannot be started or removed #5684
Comments
@vieira Please reboot the machine and let us know if you're still having troubles.
The above steps are reproducible even after rebooting the machine.
@alexlarsson can you please take a look? It seems to be related to devicemapper.
The problem just seems related to devicemapper. I think it's really something else, though. At this point the container (tail) is still running, so the devicemapper device will be mounted, which means we can't mount it again or remove it. This is why these operations fail.
@alexlarsson do you know an easy way to clean up the system once this goes wrong?
Well, if you find the runaway container process, maybe you could force-kill it.
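A sketch of how one might locate the leftover process and the stale mount (the process name and PID here are placeholders, not taken from this report):

```
# Look for processes still running from the old container (e.g. the 'tail' mentioned above)
ps aux | grep 'tail -f'

# Check whether the container's devicemapper mount is still present
grep devicemapper /proc/mounts

# If a runaway process is found, force-kill it, then retry docker start / docker rm
sudo kill -9 <pid>
```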
@vieira you can unmount the container's mount point and start the container again; it should work.
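A minimal sketch of that workaround, assuming the devicemapper storage driver; the container ID is a placeholder:

```
# Unmount the stale devicemapper mount for the container, then start it again
sudo umount /var/lib/docker/devicemapper/mnt/<container-id>
docker start <container-id>
```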
I can see that my docker was started with -d and -r. First, when docker is restarted, the containers don't get restarted. Then the above-mentioned error happens (when trying to start the container(s)). My CentOS 6.5 is still getting 1.0.0.6 from EPEL. Has this ever been identified as a bug in 1.0 and fixed in 1.1? Can somebody please confirm? Thanks.
Hello everyone, still not fixed in 1.1.1.
I am getting this a lot as well, but it does seem to remove the container in some sense (in that I can start a new container with the same name).
Is there a workaround for this issue?
Looking for a workaround as well.
Seems like stopping all containers before stopping the docker daemon fixes the issue. I've added this to my setup.
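The exact snippet isn't shown above; the idea, roughly, is to stop every container before stopping the daemon (a sketch using the init-script commands typical of Ubuntu 14.04):

```
# Stop all running containers first, then stop the daemon
docker stop $(docker ps -q)
sudo service docker stop
```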
Here is a gist with my debugging steps: https://gist.github.com/rochacon/4dfa7bd4de3c5f933f0d
@rochacon Thanks for your workaround. I will test it today or tomorrow with 1.2 (seems you tested with 1.1.1, right?). Hope it works.
@vieira I also tried with 1.2.0, same results.
After 4 weeks running, one of my containers stopped... Not sure why... How can I find the root cause? Anyway, I had the same problem... It was solved with the suggestion from @aroragagan: umount, then docker start the container. I'm on RHEL 6.5, by the way.
We're seeing this on 1.3.0 now, on an EC2 Ubuntu system that was upgraded from 12.04 to 14.04. My dev instance is a direct 14.04 install into Vagrant and does not have this problem. Unmounting and then restarting the containers seems to work, but that defeats the purpose of having them configured to restart automatically when the instance reboots or when docker restarts. Let me know if there's any further information I can provide on versions of supporting packages, etc., since I have a working and a non-working system available.
Seeing the same issue with docker 1.3 on Ubuntu 14.04 with either Linux kernel 3.13 or 3.14.
@srobertson are you referring to "containers not being restarted when the daemon restarts"? Are you using the new per-container restart policy? The daemon-wide restart option has been superseded by per-container restart policies; the new (per-container) restart policy is described in the CLI reference.
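For reference, a per-container restart policy is set on docker run; the image and container names below are examples only:

```
# Restart this container automatically whenever it exits or the daemon restarts
docker run -d --restart=always --name web nginx

# Or retry a limited number of times on non-zero exit
docker run -d --restart=on-failure:5 --name worker my-worker-image
```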
+1, got this issue on docker 1.3 @ ArchLinux x86_64 with 3.17.2-1-ARCH kernel.
Umount solves the problem.
umount is a workaround; I wouldn't say it solves the problem. Simply restarting the daemon with containers running will reproduce the issue.
No, the problem already existed using docker 1.10 with the default Ubuntu 14.04 kernel (~3.10, I think) and using aufs. Then I upgraded (step by step) the storage driver, kernel, and docker. No significant change in the experienced problem... Do you think it's worth trying overlay for this problem? (Performance is not a big issue in my case.)
@thaJeztah I never saw this issue before and since I
I have this issue :(
Still got this on
Also got the issue:
docker version
docker info
uname -a
This is a mix of different issues. I think we need to close this. None of the latest reported cases are anything like the OP. @guenhter I suspect this is related to another issue with mounting either /var/run into a container (any other container on your host) or mounting /var/lib/docker.
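For illustration, the kind of mount the previous comment describes (paths and image names here are examples only, not taken from any reporter's setup); a bind mount like this can pull the daemon's devicemapper mounts into another container's mount namespace and keep them busy:

```
# Containers that bind-mount the Docker state directory or /var/run
# can hold references to other containers' devicemapper mount points:
docker run -d -v /var/lib/docker:/var/lib/docker --name monitor some-monitoring-image
docker run -d -v /var/run:/var/run --name agent some-agent-image
```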
Also, many of the pre-1.11 issues with "device or resource busy" type errors are most likely from killing the daemon (ungracefully) and then starting it back up. Closing for reasons stated above.
Sorry - I'm not sure if I understand this. What do you mean by "None of the latest reported cases are anything like the OP"?
@dsteinkopf Yes, with as much detail as you can provide (compose files, daemon logs, etc.).
Hi, just to note on the issue I described earlier: I have upgraded my kernel to 4.4.0-21-generic (the docker version server output is not reproduced here), and the issue reported earlier seems to have stopped occurring. I have since used Docker for a considerable time on the upgraded kernels and the problem has not reappeared.
Found a workaround for the problem, at least when used with docker-compose; see #3786 (comment)
Same issue with a container that is failing to restart. Ubuntu 14.04.
Error:
Unmount fails:
This is still an issue for us (using 1.11.2 on Ubuntu 14.04.4 LTS (with KVM) (3.13.0-88-generic)). Is there an open ticket I can subscribe to for updates?
@GameScripting See #21704
Linux zk1 3.10.0-327.28.3.el7.x86_64 (CentOS 7): Error response from daemon: Driver devicemapper failed to remove root filesystem 228f2c2da3de4d5abd3881184aeb330a4c18e4311ecf404e2fb8cd4ffe15e901: devicemapper: Error running DeleteDevice dm_task_run failed
Just ran into this.
Still getting this too.
Same issue; it has been happening over many, many versions of Docker. I use Arch Linux; devicemapper containers on an ext4 FS.
If it helps... I believe I am having the same/similar issue here as well. If I deploy a service using compose up -d, then update the image name to a different one in the compose.yaml and do another compose up -d, compose fails with an error around devicemapper (the version information and full error output are not reproduced here). As a temporary workaround, I have added a docker-compose down --rmi all prior to rerunning the up.
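A minimal sketch of that workaround sequence, assuming a compose project in the current directory (the service definitions themselves are not shown here):

```
# Tear down the stack and remove its images before re-deploying
docker-compose down --rmi all
docker-compose up -d
```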
I also have the same issue in Docker version 1.12.3.
I'm pretty sure that, for the rest of the people experiencing this, the issue is related to #27381.
I'm seeing this in docker 1.12.3 on CentOS 7 (dc2-elk-02:/root/staging/ls-helper$ docker --version). P.S. I am not using docker compose.
Bitten after the host ran out of disk space.
I had a similar issue; I saw these error lines in my /var/log/syslog file:
The issue can be reproduced as follows:
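(The exact commands are not preserved above; what follows is a sketch consistent with the thread's description of a long-running tail container, with the container name as a placeholder, not the reporter's literal steps.)

```
# Start a long-running container, then restart the daemon underneath it
docker run -d --name test ubuntu tail -f /dev/null
sudo service docker stop     # stop the daemon while the container is running
sudo service docker start
docker start test            # fails with a devicemapper "device or resource busy" error
docker rm test               # also fails while the leftover mount/process is present
```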
This is an up-to-date Ubuntu 14.04 host running lxc-docker 0.11.1. The storage driver is devicemapper and the kernel version is 3.13.0.
This is a regression from docker 0.9 (from the official Ubuntu repos). The problem is also present in 0.10.