Acceptance tests using different clients concurrently. Environment deployed from packages.
Build: #3228 failed
Job: QoS was successful
Job result summary
- Completed
- Duration: 14 minutes
- Agent: bamboo-agent-os-03-s3-proxy-dont-disable
- Revision: d704831f382422ac250de6a2a1b215bef0202e9c
- Total tests: 4
- Successful since: #3058 ()
Tests
- 4 tests in total
- 4 minutes taken in total.
Error summary
The build generated some errors. See the full build log for more details.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4822 100 4822 0 0 54179 0 --:--:-- --:--:-- --:--:-- 54179
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
Digest: sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5
Status: Downloaded newer image for alpine:latest
Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.158089 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.158473 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.159995 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.161488 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.163253 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
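The errors above mean kubectl found no usable kubeconfig and fell back to its compiled-in default, `localhost:8080`, where no apiserver was listening. A minimal sketch for diagnosing this, using kubectl's documented config lookup order (the paths are kubectl's defaults, not anything from this build):

```shell
# kubectl reads $KUBECONFIG first, then ~/.kube/config; with neither
# present it silently targets localhost:8080, producing exactly the
# "connection refused" storm above.
cfg="${KUBECONFIG:-$HOME/.kube/config}"
if [ -r "$cfg" ]; then
  echo "kubeconfig found at $cfg"
else
  echo "no kubeconfig at $cfg -- kubectl will fall back to localhost:8080"
fi
```

In a CI step this check runs before any kubectl invocation, turning a wall of retries into one actionable message.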
Error response from daemon: Cannot kill container: 42ba76768a4e: Container 42ba76768a4e63decebbe3068da50398e083a9cf8e0be4a457d5a9dddc14670b is not running
Unable to find image 'ubuntu:14.10' locally
14.10: Pulling from library/ubuntu
[DEPRECATION NOTICE] Docker Image Format v1, and Docker Image manifest version 2, schema 1 support will be removed in an upcoming release. Suggest the author of docker.io/library/ubuntu:14.10 to upgrade the image to the OCI Format, or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
Digest: sha256:6341c688b4b0b82ec735389b3c97df8cf2831b8cb8bd1856779130a86574ac5c
Status: Downloaded newer image for ubuntu:14.10
W0905 10:47:58.664714 11884 out.go:239] ! kubectl and minikube configuration will be stored in /root
W0905 10:47:58.664727 11884 out.go:239] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0905 10:47:58.664735 11884 out.go:239] *
W0905 10:47:58.664763 11884 out.go:239] - sudo mv /root/.kube /root/.minikube $HOME
W0905 10:47:58.664775 11884 out.go:239] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0905 10:47:58.664783 11884 out.go:239] *
W0905 10:47:58.664792 11884 out.go:239] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
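These warnings are minikube's "none" driver writing root-owned config files. As the log itself suggests, the manual mv/chown dance can be replaced by exporting one variable before `minikube start`; a sketch (the actual `minikube start` is commented out since it needs root and a host to install onto):

```shell
# With CHANGE_MINIKUBE_NONE_USER=true, minikube chowns .kube/.minikube
# to the invoking user automatically on the none driver.
export CHANGE_MINIKUBE_NONE_USER=true
echo "CHANGE_MINIKUBE_NONE_USER=$CHANGE_MINIKUBE_NONE_USER"
# sudo -E minikube start --driver=none   # -E preserves the env var
```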
I0905 10:47:58.666772 11884 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0905 10:47:58.666793 11884 addons.go:231] Setting addon storage-provisioner=true in "minikube"
I0905 10:47:58.666842 11884 host.go:66] Checking if "minikube" exists ...
I0905 10:47:58.667491 11884 kubeconfig.go:92] found "minikube" server: "https://10.87.23.58:8443"
I0905 10:47:58.667507 11884 api_server.go:166] Checking apiserver status ...
I0905 10:47:58.667553 11884 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0905 10:47:58.685124 11884 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0905 10:47:58.685269 11884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0905 10:47:58.688502 11884 kubeconfig.go:92] found "minikube" server: "https://10.87.23.58:8443"
I0905 10:47:58.688923 11884 api_server.go:166] Checking apiserver status ...
I0905 10:47:58.692760 11884 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0905 10:47:58.700757 11884 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/13438/cgroup
I0905 10:47:58.716723 11884 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c"
I0905 10:47:58.716815 11884 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c/freezer.state
I0905 10:47:58.731947 11884 api_server.go:204] freezer state: "THAWED"
I0905 10:47:58.731990 11884 api_server.go:253] Checking apiserver healthz at https://10.87.23.58:8443/healthz ...
I0905 10:47:58.749782 11884 api_server.go:279] https://10.87.23.58:8443/healthz returned 200:
ok
I0905 10:47:58.753827 11884 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0905 10:47:58.752731 11884 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0905 10:47:58.753896 11884 start.go:223] Will wait 6m0s for node &{Name: IP:10.87.23.58 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0905 10:47:58.767680 11884 out.go:177] * Verifying Kubernetes components...
I0905 10:47:58.758061 11884 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/13438/cgroup
I0905 10:47:58.790694 11884 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0905 10:47:58.791008 11884 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0905 10:47:58.791030 11884 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0905 10:47:58.791038 11884 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0905 10:47:58.791120 11884 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0905 10:47:58.791262 11884 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1200172290 /etc/kubernetes/addons/storage-provisioner.yaml
I0905 10:47:58.806150 11884 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c"
I0905 10:47:58.806217 11884 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c/freezer.state
I0905 10:47:58.806588 11884 api_server.go:52] waiting for apiserver process to appear ...
I0905 10:47:58.806645 11884 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0905 10:47:58.807776 11884 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0905 10:47:58.815754 11884 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0905 10:47:58.820800 11884 api_server.go:204] freezer state: "THAWED"
I0905 10:47:58.820835 11884 api_server.go:253] Checking apiserver healthz at https://10.87.23.58:8443/healthz ...
I0905 10:47:58.830818 11884 api_server.go:279] https://10.87.23.58:8443/healthz returned 200:
ok
I0905 10:47:58.842584 11884 api_server.go:72] duration metric: took 88.645948ms to wait for apiserver process to appear ...
I0905 10:47:58.842611 11884 api_server.go:88] waiting for apiserver healthz status ...
I0905 10:47:58.842639 11884 api_server.go:253] Checking apiserver healthz at https://10.87.23.58:8443/healthz ...
I0905 10:47:58.851322 11884 api_server.go:279] https://10.87.23.58:8443/healthz returned 200:
ok
I0905 10:47:58.852825 11884 api_server.go:141] control plane version: v1.28.1
I0905 10:47:58.852857 11884 api_server.go:131] duration metric: took 10.23663ms to wait for apiserver health ...
I0905 10:47:58.852868 11884 system_pods.go:43] waiting for kube-system pods to appear ...
I0905 10:47:58.856887 11884 addons.go:231] Setting addon default-storageclass=true in "minikube"
I0905 10:47:58.856936 11884 host.go:66] Checking if "minikube" exists ...
I0905 10:47:58.857883 11884 kubeconfig.go:92] found "minikube" server: "https://10.87.23.58:8443"
I0905 10:47:58.857907 11884 api_server.go:166] Checking apiserver status ...
I0905 10:47:58.857948 11884 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0905 10:47:58.864679 11884 system_pods.go:59] 4 kube-system pods found
I0905 10:47:58.864727 11884 system_pods.go:61] "etcd-bamboo-agent-os-03-s3-proxy-dont-disable" [b11e3923-b6ec-47be-a563-e0a24a2cc86e] Pending
I0905 10:47:58.864740 11884 system_pods.go:61] "kube-apiserver-bamboo-agent-os-03-s3-proxy-dont-disable" [32a54773-f475-4fa2-860f-361418b1b6f6] Pending
I0905 10:47:58.864749 11884 system_pods.go:61] "kube-controller-manager-bamboo-agent-os-03-s3-proxy-dont-disable" [81e0e47f-d84d-47d9-bb63-ad369c32e039] Pending
I0905 10:47:58.864757 11884 system_pods.go:61] "kube-scheduler-bamboo-agent-os-03-s3-proxy-dont-disable" [009758cb-1bee-46a7-966f-9bdab0b63498] Pending
I0905 10:47:58.864766 11884 system_pods.go:74] duration metric: took 11.890739ms to wait for pod list to return data ...
I0905 10:47:58.864779 11884 kubeadm.go:581] duration metric: took 110.850242ms to wait for : map[apiserver:true system_pods:true] ...
I0905 10:47:58.864795 11884 node_conditions.go:102] verifying NodePressure condition ...
I0905 10:47:58.871777 11884 node_conditions.go:122] node storage ephemeral capacity is 25215872Ki
I0905 10:47:58.871815 11884 node_conditions.go:123] node cpu capacity is 4
I0905 10:47:58.871832 11884 node_conditions.go:105] duration metric: took 7.027873ms to run NodePressure ...
I0905 10:47:58.871847 11884 start.go:228] waiting for startup goroutines ...
I0905 10:47:58.882799 11884 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/13438/cgroup
I0905 10:47:58.891879 11884 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c"
I0905 10:47:58.891950 11884 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c/freezer.state
I0905 10:47:58.906326 11884 api_server.go:204] freezer state: "THAWED"
I0905 10:47:58.906360 11884 api_server.go:253] Checking apiserver healthz at https://10.87.23.58:8443/healthz ...
I0905 10:47:58.914375 11884 api_server.go:279] https://10.87.23.58:8443/healthz returned 200:
ok
I0905 10:47:58.914465 11884 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0905 10:47:58.914504 11884 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0905 10:47:58.914683 11884 exec_runner.go:51] Run: sudo cp -a /tmp/minikube326244435 /etc/kubernetes/addons/storageclass.yaml
I0905 10:47:58.929980 11884 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0905 10:47:59.863193 11884 exec_runner.go:84] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.047373725s)
I0905 10:47:59.863234 11884 start.go:901] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0905 10:48:00.081506 11884 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273689569s)
I0905 10:48:00.092216 11884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0905 10:48:00.096395 11884 addons.go:502] enable addons completed in 1.445314895s: enabled=[default-storageclass storage-provisioner]
I0905 10:48:00.096458 11884 start.go:233] waiting for cluster config update ...
I0905 10:48:00.096478 11884 start.go:242] writing updated cluster config ...
I0905 10:48:00.096828 11884 exec_runner.go:51] Run: rm -f paused
I0905 10:48:00.164968 11884 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
I0905 10:48:00.169774 11884 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Submodule 'automation-examples' (ssh://git@git.onedata.org:7999/vfs/automation-examples.git) registered for path 'automation-examples'
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'bamboos'
Submodule 'cdmi_swagger' (ssh://git@git.onedata.org:7999/vfs/cdmi-swagger.git) registered for path 'cdmi_swagger'
Submodule 'one_env' (ssh://git@git.onedata.org:7999/vfs/one-env.git) registered for path 'one_env'
Submodule 'onepanel_swagger' (ssh://git@git.onedata.org:7999/vfs/onepanel-swagger.git) registered for path 'onepanel_swagger'
Submodule 'oneprovider_swagger' (ssh://git@git.onedata.org:7999/vfs/oneprovider-swagger.git) registered for path 'oneprovider_swagger'
Submodule 'onezone_swagger' (ssh://git@git.onedata.org:7999/vfs/onezone-swagger.git) registered for path 'onezone_swagger'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/automation-examples'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/bamboos'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/cdmi_swagger'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/one_env'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/onepanel_swagger'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/oneprovider_swagger'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/onezone_swagger'...
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'onepanel_swagger/bamboos'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/onepanel_swagger/bamboos'...
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'oneprovider_swagger/bamboos'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/oneprovider_swagger/bamboos'...
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'onezone_swagger/bamboos'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/onezone_swagger/bamboos'...
Unable to find image 'docker.onedata.org/one_env:v39' locally
v39: Pulling from one_env
Digest: sha256:e39dbddda3d96a874a4a2aebf2f188bfeb8ffdeb25e3fd27678ab5197f387940
Status: Downloaded newer image for docker.onedata.org/one_env:v39
error: failed to create clusterrolebinding: clusterrolebindings.rbac.authorization.k8s.io "serviceaccounts-cluster-admin" already exists
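The "already exists" error is what a plain `kubectl create clusterrolebinding` returns on reruns. A hedged sketch of an idempotent alternative (binding name and subjects copied from the error message; whether this binding is actually wanted is up to the pipeline): generate the manifest client-side, then pipe it to `kubectl apply -f -`, which tolerates pre-existing objects. The sketch only prints the generated manifest and is guarded so it is a no-op where kubectl is absent.

```shell
# --dry-run=client renders the object without contacting a cluster;
# piping this YAML to `kubectl apply -f -` (instead of `create`)
# makes the step safe to re-run.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create clusterrolebinding serviceaccounts-cluster-admin \
    --clusterrole=cluster-admin --group=system:serviceaccounts \
    --dry-run=client -o yaml
else
  echo "kubectl not installed; skipping sketch"
fi
```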
Unable to find image 'docker.onedata.org/swagger-aggregator:1.5.0' locally
1.5.0: Pulling from swagger-aggregator
Digest: sha256:e2e8e762a03a0acdd49e63c4168157cb4e0e79f31f4e815561e9f4c65dbf8ac8
Status: Downloaded newer image for docker.onedata.org/swagger-aggregator:1.5.0
Unable to find image 'swaggerapi/swagger-codegen-cli:2.4.20' locally
2.4.20: Pulling from swaggerapi/swagger-codegen-cli
Digest: sha256:e961c734f4a232ea050293e9b16aed4cc131ffecf4a7d8671f15f1d79bca8796
Status: Downloaded newer image for swaggerapi/swagger-codegen-cli:2.4.20
/bin/sh: 2: [[: not found
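`/bin/sh: 2: [[: not found` means a script using the bash-only `[[ ]]` test was run under a POSIX `/bin/sh` (dash on Debian/Ubuntu). The fix is either to run the script with bash explicitly or to use the portable single-bracket test; a minimal sketch of the portable form:

```shell
# `[ ]` is the POSIX test builtin and works under any /bin/sh;
# `[[ ]]` is a bash/ksh extension and triggers the error above.
status="ok"
if [ "$status" = "ok" ]; then
  echo "POSIX test works"
fi
```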
v12: Pulling from onedata/acceptance_mixed
Digest: sha256:96db83e9518bd8b75168f158bad519b54bb4b61336e7ba414e24be2483b152bb
Status: Downloaded newer image for onedata/acceptance_mixed:v12
/usr/local/lib/python3.8/dist-packages/pytest_selenium/drivers/crossbrowsertesting.py:72: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if report.when == 'setup' or info.get('test_score') is not 'fail':
/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/tests/gui/steps/oneprovider/qos.py:357: SyntaxWarning: assertion is always true, perhaps remove parentheses?
assert ('qos-status-impossible' in
/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/tests/gui/steps/oneprovider/qos.py:360: SyntaxWarning: assertion is always true, perhaps remove parentheses?
assert ('qos-status-fulfilled' in
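Both SyntaxWarnings flag real bugs-in-waiting: `is not 'fail'` compares object identity rather than equality, and an assert written as `assert (cond, msg)` builds a two-element tuple, which is always truthy. A sketch of the corrected forms (the dict and strings are made up for illustration, not taken from the test code), run via python3 and guarded in case it is absent:

```shell
if command -v python3 >/dev/null 2>&1; then
python3 - <<'EOF'
info = {'test_score': 'fail'}
# equality, not identity -- `is not` against a literal is the bug
print(info.get('test_score') != 'fail')
# assert with a message: no enclosing parentheses around (cond, msg)
assert 'qos' in 'qos-status-fulfilled', "membership check failed"
print("asserts ok")
EOF
else
  echo "python3 not installed; skipping sketch"
fi
```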
error: failed to create clusterrolebinding: clusterrolebindings.rbac.authorization.k8s.io "serviceaccounts-cluster-admin" already exists
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
id: ‘user1’: no such user
command terminated with exit code 1
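`id: 'user1': no such user` followed by exit code 1 means the expected test user was never provisioned inside the oneclient container. A guarded lookup (user names here are illustrative, not the build's real provisioning) lets a setup step report the missing account instead of dying:

```shell
# `id -u` exits non-zero for unknown users; branch on it rather than
# letting the raw failure terminate the exec.
for user in root no-such-user-xyz; do
  if id -u "$user" >/dev/null 2>&1; then
    echo "$user exists"
  else
    echo "$user missing"
  fi
done
```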
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
id: ‘user2’: no such user
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
id: ‘user1’: no such user
command terminated with exit code 1
id: ‘user2’: no such user
command terminated with exit code 1
command terminated with exit code 1
command terminated with exit code 1
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
rsync: change_dir#3 "/tmp/logs" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(828) [sender=3.1.3]
command terminated with exit code 3
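rsync exit code 3 ("errors selecting input/output files") here comes from the source directory `/tmp/logs` not existing in the container. Creating the directory before the transfer avoids the failure; a self-contained sketch using temporary paths (not the build's real ones), with a plain-cp fallback where rsync is not installed:

```shell
# Guarantee the source tree exists before handing it to rsync;
# an absent source dir is exactly the code-3 failure in the log.
src=$(mktemp -d)/logs
mkdir -p "$src"
touch "$src/build.log"
dst=$(mktemp -d)
if command -v rsync >/dev/null 2>&1; then
  rsync -a "$src/" "$dst/"
else
  cp -a "$src/." "$dst/"
fi
ls "$dst"
```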
cp: cannot stat 'onedata-acceptance/one_env/sources_info.yaml': No such file or directory
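The `cp: cannot stat` failure is the same pattern: a cleanup step copying a file that a failed earlier phase never produced. A guarded copy (the path is taken from the log line; the `/tmp/` destination is illustrative) keeps artifact collection from erroring on missing files:

```shell
# Skip, rather than fail, when the expected artifact was not generated.
f="onedata-acceptance/one_env/sources_info.yaml"
if [ -f "$f" ]; then
  cp "$f" /tmp/ && echo "copied $f"
else
  echo "skip: $f not present"
fi
```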
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4822 100 4822 0 0 20965 0 --:--:-- --:--:-- --:--:-- 20965
Error response from daemon: Cannot kill container: 1baf89e4fcfd: Container 1baf89e4fcfd8835edd1fd0474d480645583d295b407f79f96ca1f3df5e5080a is not running
Error response from daemon: Cannot kill container: c8197596f18b: Container c8197596f18b00bb67d02eb685191533b9dc64673e1380cddb730e9240a18909 is not running
Error response from daemon: Cannot kill container: 66e99790d120: Container 66e99790d12033e43d3fcd2a0c804f1ee5498629aca523a04fa5dbd38fd3703c is not running
Error response from daemon: Cannot kill container: 26cb412b9ee5: Container 26cb412b9ee5ef3b89a9a26cb23f88895620c2a2096593dbfe1afb2d1c448dfa is not running
Error response from daemon: Cannot kill container: 053cb5ec4448: Container 053cb5ec444866ba50aedcc5d046962b75505e133b94d0f5227912d0b2693be5 is not running
Error response from daemon: Cannot kill container: cd37de466065: Container cd37de46606520c3101550cfb3a754f29413edc35609b547ebe9fe91e45d193f is not running
Error response from daemon: Cannot kill container: 6e8b963d54e8: Container 6e8b963d54e8a859136d1517b483873b5a157560c6682ae1c25cc900da9fe680 is not running
Error response from daemon: Cannot kill container: 2ca9939b3785: Container 2ca9939b37859a2421e204c140fc594547d0c3e3221f8524dac145c316b9441a is not running
Error response from daemon: Cannot kill container: 21ecffca763d: Container 21ecffca763d427786fd1ffc4c0f538c6b17beb7dd0eb60b5fbfebf781a2bc6c is not running
Error response from daemon: Cannot kill container: 82a02def72ef: Container 82a02def72ef57f5199a83029662fd83605e995842b4011db8c229a82530250e is not running
Error response from daemon: Cannot kill container: 6a9de10f451a: Container 6a9de10f451abcbca4f5919e6790465558eda0473a0b1f07fdb20e87bd3e810a is not running
Error response from daemon: Cannot kill container: 9bc9ecd24fc2: Container 9bc9ecd24fc2c7e6f279c56e3e85393480a5fd691f761756a371387b63f220d4 is not running
Error response from daemon: Cannot kill container: 658db340fc03: Container 658db340fc038ed8453f9786aba1efff1ffd8a013112b44b0e08b98308ed90ea is not running
Error response from daemon: Cannot kill container: afc60a4268a3: Container afc60a4268a3205a0842cda0f78cee42372e4ae60607dc47cccf43c68067665e is not running
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4822 100 4822 0 0 54179 0 --:--:-- --:--:-- --:--:-- 54179
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
c6a83fedfae6: Pulling fs layer
c6a83fedfae6: Verifying Checksum
c6a83fedfae6: Download complete
c6a83fedfae6: Pull complete
Digest: sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5
Status: Downloaded newer image for alpine:latest
Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.158089 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.158473 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.159995 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.161488 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.163253 7608 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0905 10:45:31.238791 7618 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.239602 7618 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.241279 7618 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.242841 7618 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.244658 7618 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0905 10:45:31.316915 7624 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.317448 7624 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.320379 7624 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.320929 7624 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.322407 7624 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0905 10:45:31.391763 7630 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.392522 7630 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.394030 7630 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.395684 7630 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.397422 7630 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0905 10:45:31.469267 7637 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.469613 7637 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.471330 7637 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.472698 7637 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0905 10:45:31.474304 7637 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Error response from daemon: Cannot kill container: 42ba76768a4e: Container 42ba76768a4e63decebbe3068da50398e083a9cf8e0be4a457d5a9dddc14670b is not running
Unable to find image 'ubuntu:14.10' locally
14.10: Pulling from library/ubuntu
[DEPRECATION NOTICE] Docker Image Format v1, and Docker Image manifest version 2, schema 1 support will be removed in an upcoming release. Suggest the author of docker.io/library/ubuntu:14.10 to upgrade the image to the OCI Format, or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
b0efe5c05b4c: Pulling fs layer
0a1f1b169319: Pulling fs layer
1ceb0a3c7c48: Pulling fs layer
a3ed95caeb02: Pulling fs layer
1ceb0a3c7c48: Verifying Checksum
1ceb0a3c7c48: Download complete
0a1f1b169319: Download complete
a3ed95caeb02: Download complete
b0efe5c05b4c: Verifying Checksum
b0efe5c05b4c: Download complete
b0efe5c05b4c: Pull complete
0a1f1b169319: Pull complete
1ceb0a3c7c48: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:6341c688b4b0b82ec735389b3c97df8cf2831b8cb8bd1856779130a86574ac5c
Status: Downloaded newer image for ubuntu:14.10
W0905 10:47:58.664714 11884 out.go:239] ! kubectl and minikube configuration will be stored in /root
W0905 10:47:58.664727 11884 out.go:239] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0905 10:47:58.664735 11884 out.go:239] *
W0905 10:47:58.664763 11884 out.go:239] - sudo mv /root/.kube /root/.minikube $HOME
W0905 10:47:58.664775 11884 out.go:239] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0905 10:47:58.664783 11884 out.go:239] *
W0905 10:47:58.664792 11884 out.go:239] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0905 10:47:58.666772 11884 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0905 10:47:58.666793 11884 addons.go:231] Setting addon storage-provisioner=true in "minikube"
I0905 10:47:58.666842 11884 host.go:66] Checking if "minikube" exists ...
I0905 10:47:58.667491 11884 kubeconfig.go:92] found "minikube" server: "https://10.87.23.58:8443"
I0905 10:47:58.667507 11884 api_server.go:166] Checking apiserver status ...
I0905 10:47:58.667553 11884 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0905 10:47:58.685124 11884 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0905 10:47:58.685269 11884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0905 10:47:58.688502 11884 kubeconfig.go:92] found "minikube" server: "https://10.87.23.58:8443"
I0905 10:47:58.688923 11884 api_server.go:166] Checking apiserver status ...
I0905 10:47:58.692760 11884 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0905 10:47:58.700757 11884 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/13438/cgroup
I0905 10:47:58.716723 11884 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c"
I0905 10:47:58.716815 11884 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c/freezer.state
I0905 10:47:58.731947 11884 api_server.go:204] freezer state: "THAWED"
I0905 10:47:58.731990 11884 api_server.go:253] Checking apiserver healthz at https://10.87.23.58:8443/healthz ...
I0905 10:47:58.749782 11884 api_server.go:279] https://10.87.23.58:8443/healthz returned 200:
ok
I0905 10:47:58.753827 11884 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0905 10:47:58.752731 11884 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0905 10:47:58.753896 11884 start.go:223] Will wait 6m0s for node &{Name: IP:10.87.23.58 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0905 10:47:58.767680 11884 out.go:177] * Verifying Kubernetes components...
I0905 10:47:58.758061 11884 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/13438/cgroup
I0905 10:47:58.790694 11884 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0905 10:47:58.791008 11884 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0905 10:47:58.791030 11884 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0905 10:47:58.791038 11884 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0905 10:47:58.791120 11884 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0905 10:47:58.791262 11884 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1200172290 /etc/kubernetes/addons/storage-provisioner.yaml
I0905 10:47:58.806150 11884 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c"
I0905 10:47:58.806217 11884 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c/freezer.state
I0905 10:47:58.806588 11884 api_server.go:52] waiting for apiserver process to appear ...
I0905 10:47:58.806645 11884 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0905 10:47:58.807776 11884 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0905 10:47:58.815754 11884 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0905 10:47:58.820800 11884 api_server.go:204] freezer state: "THAWED"
I0905 10:47:58.820835 11884 api_server.go:253] Checking apiserver healthz at https://10.87.23.58:8443/healthz ...
I0905 10:47:58.830818 11884 api_server.go:279] https://10.87.23.58:8443/healthz returned 200:
ok
I0905 10:47:58.842584 11884 api_server.go:72] duration metric: took 88.645948ms to wait for apiserver process to appear ...
I0905 10:47:58.842611 11884 api_server.go:88] waiting for apiserver healthz status ...
I0905 10:47:58.842639 11884 api_server.go:253] Checking apiserver healthz at https://10.87.23.58:8443/healthz ...
I0905 10:47:58.851322 11884 api_server.go:279] https://10.87.23.58:8443/healthz returned 200:
ok
I0905 10:47:58.852825 11884 api_server.go:141] control plane version: v1.28.1
I0905 10:47:58.852857 11884 api_server.go:131] duration metric: took 10.23663ms to wait for apiserver health ...
I0905 10:47:58.852868 11884 system_pods.go:43] waiting for kube-system pods to appear ...
I0905 10:47:58.856887 11884 addons.go:231] Setting addon default-storageclass=true in "minikube"
I0905 10:47:58.856936 11884 host.go:66] Checking if "minikube" exists ...
I0905 10:47:58.857883 11884 kubeconfig.go:92] found "minikube" server: "https://10.87.23.58:8443"
I0905 10:47:58.857907 11884 api_server.go:166] Checking apiserver status ...
I0905 10:47:58.857948 11884 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0905 10:47:58.864679 11884 system_pods.go:59] 4 kube-system pods found
I0905 10:47:58.864727 11884 system_pods.go:61] "etcd-bamboo-agent-os-03-s3-proxy-dont-disable" [b11e3923-b6ec-47be-a563-e0a24a2cc86e] Pending
I0905 10:47:58.864740 11884 system_pods.go:61] "kube-apiserver-bamboo-agent-os-03-s3-proxy-dont-disable" [32a54773-f475-4fa2-860f-361418b1b6f6] Pending
I0905 10:47:58.864749 11884 system_pods.go:61] "kube-controller-manager-bamboo-agent-os-03-s3-proxy-dont-disable" [81e0e47f-d84d-47d9-bb63-ad369c32e039] Pending
I0905 10:47:58.864757 11884 system_pods.go:61] "kube-scheduler-bamboo-agent-os-03-s3-proxy-dont-disable" [009758cb-1bee-46a7-966f-9bdab0b63498] Pending
I0905 10:47:58.864766 11884 system_pods.go:74] duration metric: took 11.890739ms to wait for pod list to return data ...
I0905 10:47:58.864779 11884 kubeadm.go:581] duration metric: took 110.850242ms to wait for : map[apiserver:true system_pods:true] ...
I0905 10:47:58.864795 11884 node_conditions.go:102] verifying NodePressure condition ...
I0905 10:47:58.871777 11884 node_conditions.go:122] node storage ephemeral capacity is 25215872Ki
I0905 10:47:58.871815 11884 node_conditions.go:123] node cpu capacity is 4
I0905 10:47:58.871832 11884 node_conditions.go:105] duration metric: took 7.027873ms to run NodePressure ...
I0905 10:47:58.871847 11884 start.go:228] waiting for startup goroutines ...
I0905 10:47:58.882799 11884 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/13438/cgroup
I0905 10:47:58.891879 11884 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c"
I0905 10:47:58.891950 11884 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod59a379e8716bbd825eba17c7cd4d9b47/d463f41c5bec85c6e9c5f03ae2cde511e783a9a98ec0f22470ab91c76c63855c/freezer.state
I0905 10:47:58.906326 11884 api_server.go:204] freezer state: "THAWED"
I0905 10:47:58.906360 11884 api_server.go:253] Checking apiserver healthz at https://10.87.23.58:8443/healthz ...
I0905 10:47:58.914375 11884 api_server.go:279] https://10.87.23.58:8443/healthz returned 200:
ok
I0905 10:47:58.914465 11884 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0905 10:47:58.914504 11884 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0905 10:47:58.914683 11884 exec_runner.go:51] Run: sudo cp -a /tmp/minikube326244435 /etc/kubernetes/addons/storageclass.yaml
I0905 10:47:58.929980 11884 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0905 10:47:59.863193 11884 exec_runner.go:84] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.047373725s)
I0905 10:47:59.863234 11884 start.go:901] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0905 10:48:00.081506 11884 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273689569s)
I0905 10:48:00.092216 11884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0905 10:48:00.096395 11884 addons.go:502] enable addons completed in 1.445314895s: enabled=[default-storageclass storage-provisioner]
I0905 10:48:00.096458 11884 start.go:233] waiting for cluster config update ...
I0905 10:48:00.096478 11884 start.go:242] writing updated cluster config ...
I0905 10:48:00.096828 11884 exec_runner.go:51] Run: rm -f paused
I0905 10:48:00.164968 11884 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
I0905 10:48:00.169774 11884 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Submodule 'automation-examples' (ssh://git@git.onedata.org:7999/vfs/automation-examples.git) registered for path 'automation-examples'
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'bamboos'
Submodule 'cdmi_swagger' (ssh://git@git.onedata.org:7999/vfs/cdmi-swagger.git) registered for path 'cdmi_swagger'
Submodule 'one_env' (ssh://git@git.onedata.org:7999/vfs/one-env.git) registered for path 'one_env'
Submodule 'onepanel_swagger' (ssh://git@git.onedata.org:7999/vfs/onepanel-swagger.git) registered for path 'onepanel_swagger'
Submodule 'oneprovider_swagger' (ssh://git@git.onedata.org:7999/vfs/oneprovider-swagger.git) registered for path 'oneprovider_swagger'
Submodule 'onezone_swagger' (ssh://git@git.onedata.org:7999/vfs/onezone-swagger.git) registered for path 'onezone_swagger'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/automation-examples'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/bamboos'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/cdmi_swagger'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/one_env'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/onepanel_swagger'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/oneprovider_swagger'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/onezone_swagger'...
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'onepanel_swagger/bamboos'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/onepanel_swagger/bamboos'...
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'oneprovider_swagger/bamboos'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/oneprovider_swagger/bamboos'...
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'onezone_swagger/bamboos'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/onezone_swagger/bamboos'...
Unable to find image 'docker.onedata.org/one_env:v39' locally
v39: Pulling from one_env
41af1b5f0f51: Pulling fs layer
da812d3cf979: Pulling fs layer
6ea2f9c9eaa1: Pulling fs layer
ba301be5517a: Pulling fs layer
c41cf17cec29: Pulling fs layer
718cbc35a0e0: Pulling fs layer
7640532158c9: Pulling fs layer
6eba1fc2d1b6: Pulling fs layer
caf52bb4b365: Pulling fs layer
3dfbdb225edd: Pulling fs layer
b62f693fa900: Pulling fs layer
79ae03a86415: Pulling fs layer
ba301be5517a: Waiting
6eba1fc2d1b6: Waiting
c41cf17cec29: Waiting
718cbc35a0e0: Waiting
7640532158c9: Waiting
caf52bb4b365: Waiting
3dfbdb225edd: Waiting
79ae03a86415: Waiting
6ea2f9c9eaa1: Verifying Checksum
6ea2f9c9eaa1: Download complete
ba301be5517a: Verifying Checksum
ba301be5517a: Download complete
41af1b5f0f51: Verifying Checksum
41af1b5f0f51: Download complete
c41cf17cec29: Verifying Checksum
c41cf17cec29: Download complete
7640532158c9: Verifying Checksum
7640532158c9: Download complete
6eba1fc2d1b6: Verifying Checksum
6eba1fc2d1b6: Download complete
caf52bb4b365: Verifying Checksum
caf52bb4b365: Download complete
41af1b5f0f51: Pull complete
3dfbdb225edd: Verifying Checksum
3dfbdb225edd: Download complete
b62f693fa900: Verifying Checksum
b62f693fa900: Download complete
79ae03a86415: Verifying Checksum
79ae03a86415: Download complete
718cbc35a0e0: Download complete
da812d3cf979: Verifying Checksum
da812d3cf979: Download complete
da812d3cf979: Pull complete
6ea2f9c9eaa1: Pull complete
ba301be5517a: Pull complete
c41cf17cec29: Pull complete
718cbc35a0e0: Pull complete
7640532158c9: Pull complete
6eba1fc2d1b6: Pull complete
caf52bb4b365: Pull complete
3dfbdb225edd: Pull complete
b62f693fa900: Pull complete
79ae03a86415: Pull complete
Digest: sha256:e39dbddda3d96a874a4a2aebf2f188bfeb8ffdeb25e3fd27678ab5197f387940
Status: Downloaded newer image for docker.onedata.org/one_env:v39
error: failed to create clusterrolebinding: clusterrolebindings.rbac.authorization.k8s.io "serviceaccounts-cluster-admin" already exists
Unable to find image 'docker.onedata.org/swagger-aggregator:1.5.0' locally
1.5.0: Pulling from swagger-aggregator
3e51df1a11ab: Pulling fs layer
afe1a672db39: Pulling fs layer
80bad90eec10: Pulling fs layer
ca328de5836f: Pulling fs layer
4f4fb700ef54: Pulling fs layer
f9032f2640a0: Pulling fs layer
982a4aef97ed: Pulling fs layer
aca789ddad1c: Pulling fs layer
a3e17b3016f6: Pulling fs layer
b67d253c6e22: Pulling fs layer
302c9eabb2bb: Pulling fs layer
bb6e4824f633: Pulling fs layer
4f4fb700ef54: Waiting
f9032f2640a0: Waiting
982a4aef97ed: Waiting
aca789ddad1c: Waiting
ca328de5836f: Waiting
a3e17b3016f6: Waiting
b67d253c6e22: Waiting
302c9eabb2bb: Waiting
bb6e4824f633: Waiting
80bad90eec10: Download complete
afe1a672db39: Verifying Checksum
afe1a672db39: Download complete
ca328de5836f: Verifying Checksum
ca328de5836f: Download complete
4f4fb700ef54: Verifying Checksum
4f4fb700ef54: Download complete
f9032f2640a0: Verifying Checksum
f9032f2640a0: Download complete
3e51df1a11ab: Verifying Checksum
3e51df1a11ab: Download complete
a3e17b3016f6: Verifying Checksum
a3e17b3016f6: Download complete
aca789ddad1c: Verifying Checksum
aca789ddad1c: Download complete
b67d253c6e22: Verifying Checksum
b67d253c6e22: Download complete
302c9eabb2bb: Verifying Checksum
302c9eabb2bb: Download complete
bb6e4824f633: Verifying Checksum
bb6e4824f633: Download complete
982a4aef97ed: Verifying Checksum
982a4aef97ed: Download complete
3e51df1a11ab: Pull complete
afe1a672db39: Pull complete
80bad90eec10: Pull complete
ca328de5836f: Pull complete
4f4fb700ef54: Pull complete
f9032f2640a0: Pull complete
982a4aef97ed: Pull complete
aca789ddad1c: Pull complete
a3e17b3016f6: Pull complete
b67d253c6e22: Pull complete
302c9eabb2bb: Pull complete
bb6e4824f633: Pull complete
Digest: sha256:e2e8e762a03a0acdd49e63c4168157cb4e0e79f31f4e815561e9f4c65dbf8ac8
Status: Downloaded newer image for docker.onedata.org/swagger-aggregator:1.5.0
Unable to find image 'swaggerapi/swagger-codegen-cli:2.4.20' locally
2.4.20: Pulling from swaggerapi/swagger-codegen-cli
e7c96db7181b: Pulling fs layer
f910a506b6cb: Pulling fs layer
b6abafe80f63: Pulling fs layer
0d9127f69a1f: Pulling fs layer
0d9127f69a1f: Waiting
f910a506b6cb: Verifying Checksum
f910a506b6cb: Download complete
e7c96db7181b: Verifying Checksum
e7c96db7181b: Pull complete
f910a506b6cb: Pull complete
0d9127f69a1f: Verifying Checksum
0d9127f69a1f: Download complete
b6abafe80f63: Verifying Checksum
b6abafe80f63: Download complete
b6abafe80f63: Pull complete
0d9127f69a1f: Pull complete
Digest: sha256:e961c734f4a232ea050293e9b16aed4cc131ffecf4a7d8671f15f1d79bca8796
Status: Downloaded newer image for swaggerapi/swagger-codegen-cli:2.4.20
/bin/sh: 2: [[: not found
v12: Pulling from onedata/acceptance_mixed
d5fd17ec1767: Pulling fs layer
635970228e5b: Pulling fs layer
e6b8b518c4b7: Pulling fs layer
262a85701f26: Pulling fs layer
56fb40ac7433: Pulling fs layer
72938d7adcac: Pulling fs layer
cdf99cc302c9: Pulling fs layer
99e960dff935: Pulling fs layer
543cdc7d97b9: Pulling fs layer
c5eae658fb83: Pulling fs layer
e8c6e757d9d8: Pulling fs layer
c313dba7f09c: Pulling fs layer
cdf99cc302c9: Waiting
99e960dff935: Waiting
543cdc7d97b9: Waiting
c5eae658fb83: Waiting
e8c6e757d9d8: Waiting
c313dba7f09c: Waiting
262a85701f26: Waiting
56fb40ac7433: Waiting
72938d7adcac: Waiting
e6b8b518c4b7: Verifying Checksum
e6b8b518c4b7: Download complete
d5fd17ec1767: Verifying Checksum
d5fd17ec1767: Download complete
56fb40ac7433: Verifying Checksum
56fb40ac7433: Download complete
262a85701f26: Verifying Checksum
262a85701f26: Download complete
72938d7adcac: Download complete
99e960dff935: Verifying Checksum
99e960dff935: Download complete
d5fd17ec1767: Pull complete
543cdc7d97b9: Verifying Checksum
543cdc7d97b9: Download complete
cdf99cc302c9: Verifying Checksum
cdf99cc302c9: Download complete
e8c6e757d9d8: Verifying Checksum
e8c6e757d9d8: Download complete
c313dba7f09c: Verifying Checksum
c313dba7f09c: Download complete
635970228e5b: Verifying Checksum
635970228e5b: Download complete
c5eae658fb83: Download complete
635970228e5b: Pull complete
e6b8b518c4b7: Pull complete
262a85701f26: Pull complete
56fb40ac7433: Pull complete
72938d7adcac: Pull complete
cdf99cc302c9: Pull complete
99e960dff935: Pull complete
543cdc7d97b9: Pull complete
c5eae658fb83: Pull complete
e8c6e757d9d8: Pull complete
c313dba7f09c: Pull complete
Digest: sha256:96db83e9518bd8b75168f158bad519b54bb4b61336e7ba414e24be2483b152bb
Status: Downloaded newer image for onedata/acceptance_mixed:v12
/usr/local/lib/python3.8/dist-packages/pytest_selenium/drivers/crossbrowsertesting.py:72: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if report.when == 'setup' or info.get('test_score') is not 'fail':
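This warning flags a genuine bug rather than style noise: `is not` compares object identity, so the check can pass or fail depending on CPython string interning even when the value equals `'fail'`. A minimal sketch of the value-based fix (the `info` dict and the variable name below are illustrative stand-ins for the pytest-selenium plugin's session info):

```python
# Illustrative reproduction of the crossbrowsertesting.py:72 warning; the
# dict below stands in for the plugin's CrossBrowserTesting session info.
info = {"test_score": "fail"}

# Buggy identity check (emits SyntaxWarning: "is not" with a literal):
#     info.get('test_score') is not 'fail'
# Fixed value check:
keep_going = info.get("test_score") != "fail"
print(keep_going)  # False: the value really does equal "fail"
```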
/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/tests/gui/steps/oneprovider/qos.py:357: SyntaxWarning: assertion is always true, perhaps remove parentheses?
assert ('qos-status-impossible' in
/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-MAOPT-CQT/onedata-acceptance/tests/gui/steps/oneprovider/qos.py:360: SyntaxWarning: assertion is always true, perhaps remove parentheses?
assert ('qos-status-fulfilled' in
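These two warnings are what CPython emits when an assert's condition and failure message are wrapped together in one pair of parentheses: Python parses that as a two-element tuple, and a non-empty tuple is always truthy, so the assertion can never fire. A hedged sketch of the pattern and its fix (`status_classes` is an illustrative stand-in for whatever the GUI step in qos.py actually inspects):

```python
# Stand-in for the element classes checked by the qos.py GUI steps.
status_classes = ["qos-status-impossible"]

# Always-true form flagged by the SyntaxWarning: the parentheses create a
# two-element tuple, and any non-empty tuple is truthy:
#     assert ('qos-status-fulfilled' in status_classes, 'QoS not fulfilled')
# Correct form: the message follows a comma, outside the parentheses.
try:
    assert "qos-status-fulfilled" in status_classes, "QoS not fulfilled"
except AssertionError as err:
    print(err)  # prints: QoS not fulfilled -- the assertion now fires
```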
error: failed to create clusterrolebinding: clusterrolebindings.rbac.authorization.k8s.io "serviceaccounts-cluster-admin" already exists
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
id: ‘user1’: no such user
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
id: ‘user2’: no such user
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
command terminated with exit code 1
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)
Defaulted container "oneclient" out of: oneclient, wait-for-onezone (init), wait-for-token-dispenser (init)