Kubelet-wrapper won't start after upgrade to 1353.2.0 with /var/log already mounted #1892
Eeck, I guess I'll have this problem as well if it reaches Stable.
We won't let it reach stable. If we don't find a better solution, we can replace the hardcoded volume name in kubelet-wrapper with a UUID to ensure we don't conflict with users' names, but I wanted to try to find a better solution first.
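As a rough sketch only (not the actual kubelet-wrapper code), the UUID idea mentioned above could look something like this in a wrapper script; the variable name is hypothetical:

# Hypothetical sketch of the UUID fallback, not the real kubelet-wrapper patch:
# give the wrapper's /var/log volume a unique name so it cannot collide with a user-chosen one.
VAR_LOG_VOLUME="var-log-$(uuidgen)"
RKT_RUN_ARGS="${RKT_RUN_ARGS} --volume ${VAR_LOG_VOLUME},kind=host,source=/var/log --mount volume=${VAR_LOG_VOLUME},target=/var/log"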
It turns out this is far more complicated than I initially thought. I shouldn't have put off looking into this for so long! If two mounts have the same target, stage1-fly actually "dedupes" them. Basically, the bug is:

$ sudo rkt run --stage1-name=coreos.com/rkt/stage1-fly --insecure-options=image,ondisk --volume=name1,kind=host,source=/tmp --mount volume=name1,target=/tmp --volume=name2,kind=host,source=/tmp --mount volume=name2,target=/tmp docker://busybox -- -c 'ls /tmp'
run: can't evaluate mounts: missing mount for volume "name2"

However, if the targets differ, the mounts don't get merged and things work; and if the names are identical, they merge identically and also work. It seems like the actual fix is going to be a patch to rkt fly to bring its mount logic closer to the other stage1s' mount logic.
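For contrast, a rough sketch of the two variants described above as working at the time (note the later comment that rkt 1.26.0 made duplicate volume names invalid); the volume names and paths here are purely illustrative:

# Identical names: the duplicates merge identically, so this worked (pre rkt 1.26.0).
$ sudo rkt run --stage1-name=coreos.com/rkt/stage1-fly --insecure-options=image,ondisk --volume=name1,kind=host,source=/tmp --mount volume=name1,target=/tmp --volume=name1,kind=host,source=/tmp --mount volume=name1,target=/tmp docker://busybox -- -c 'ls /tmp'
# Different targets: the mounts are not merged, so this also works.
$ sudo rkt run --stage1-name=coreos.com/rkt/stage1-fly --insecure-options=image,ondisk --volume=name1,kind=host,source=/tmp --mount volume=name1,target=/data1 --volume=name2,kind=host,source=/tmp --mount volume=name2,target=/data2 docker://busybox -- -c 'ls /data1 /data2'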
The upstream rkt bug has been fixed; this is now pending the release of rkt v1.26. It won't make this alpha, but it will make the next one.
So, if I understand your comments correctly, the current problem is when you have two mounts with different names but the same target? Is this in stable?
@edevil for reference, these are the discussions on the rkt side: rkt/rkt#3663 and rkt/rkt#3666. The root cause is a mix of: 1) not checking for multiple volumes, 2) non-uniform deduplication of mounts, and 3) wrong logic in fly which didn't allow for unused volumes.
OK, I just wanted to be sure that in my case the upgrade will not break my Kubernetes clusters. This is the value that I use for RKT_RUN_ARGS: Of these, only /var/log seems to be duplicated. And since it has the same name as in here, this will not be a problem, right?
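The value itself is not preserved in this thread; purely as a hypothetical illustration, an RKT_RUN_ARGS that duplicates the wrapper's /var/log mount under the same var-log name might look like:

# Hypothetical example only, not the commenter's actual configuration.
RKT_RUN_ARGS="--volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"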
@edevil that will be fine, I think. Regardless, we're not updating any mounts in kubelet-wrapper on stable until rkt is updated, so we can do so without fear of breaking things any further than they are.
Since rkt 1.26.0, duplicate volume names are invalid. This avoids clashing with common user volume names like var-log.
Bug
Container Linux Version
Expected Behavior
An upgrade of kubelet-wrapper shouldn't prevent it from starting.
Actual Behavior
We use RKT_OPTIONS to mount /var/log, as the kubelet symlinks the docker logs with the pod information. It's useful for fluentd. The latest kubelet-wrapper seems to have added the /var/log mount itself, resulting in having it twice on the rkt command line. rkt errors with the following message:

wrapper[10308]: run: can't evaluate mounts: missing mount for volume "var-log"
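As an illustration of the failure mode (the user-supplied volume name and the elided flags are assumptions, not taken from the real unit file), the duplicated command line looks roughly like this:

# Hypothetical reconstruction: a user volume (assumed name "logs") and the wrapper's var-log
# volume both target /var/log; stage1-fly dedupes the mount and then reports the second
# volume as missing its mount.
$ rkt run ... \
  --volume logs,kind=host,source=/var/log --mount volume=logs,target=/var/log \
  ... \
  --volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log \
  ...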
Reproduction Steps