Acceptance tests exercising different clients concurrently, with the environment deployed from packages.

Build: #2309 failed

Job: Onepanel failed

Job result summary

Status: Completed
Duration: 8 minutes
Revision: d6888cd402c1e2c666417f7e0c3bb45e62d1469e
Fixed in: #2310 (Changes by Łukasz Opioła)

No failed tests were found; the failure most likely occurred before any tests ran (a compilation or environment setup error).

Error summary

The build generated some errors. See the full build log for more details.

Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?   (repeated 5 times)
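These "localhost:8080" refusals are the standard kubectl/helm fallback when no kubeconfig is available: with KUBECONFIG unset and no usable ~/.kube/config, the clients default to the local API-server address, and nothing is listening there because the minikube cluster never came up (see the kubeadm output below). A minimal sketch of the checks a build agent could run at this point, assuming kubectl is on the PATH and the default kubeconfig location is used:

        # Which kubeconfig, if any, are the clients picking up?
        echo "KUBECONFIG=${KUBECONFIG:-<unset>}"
        ls -l "$HOME/.kube/config" 2>/dev/null || echo "no kubeconfig at \$HOME/.kube/config"

        # If a kubeconfig exists, check the configured API server and its reachability.
        kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo
        kubectl cluster-info --request-timeout=10s || echo "API server unreachable"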
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument   (repeated 8 times)
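The repeated "readlink /mnt/storage/docker/overlay2/l: invalid argument" responses usually indicate a corrupted overlay2 layer-link directory under the Docker data root (for example after an unclean shutdown or a full disk), which makes subsequent docker commands against existing containers fail. A rough sketch of how an agent-recovery step might check for this; the paths are taken from the message above, and the reset at the end is a destructive assumption for disposable agents, not something this build did:

        # Rule out a full or read-only storage root first.
        df -h /mnt/storage/docker
        docker info 2>&1 | grep -iE 'storage driver|docker root dir'

        # Destructive recovery (wipes all local images and containers) - only if the
        # agent's Docker state can be rebuilt from scratch:
        #   systemctl stop docker
        #   rm -rf /mnt/storage/docker        # removes the entire data root
        #   systemctl start docker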
Error response from daemon: Cannot kill container: cc0cb1f7a453: Container cc0cb1f7a453285791ea90e9dd46c585e1e01e4b2f5e8069caa57174935214d9 is not running
Error response from daemon: Cannot kill container: 71e54aaabfb4: Container 71e54aaabfb4291a61d06b1c4b92fc139689c30fd07d013026da4151de9a77a1 is not running
Error response from daemon: Cannot kill container: e7b56aef7841: Container e7b56aef78415c3a96be10a4c2626a16675178f4575eec5075c840ffd12dabdb is not running
Error response from daemon: Cannot kill container: 7ec0072dc8df: Container 7ec0072dc8dfe5dd0d407a03587c3984df6e9baa745c6fd948eee8a137052e39 is not running
Error response from daemon: Cannot kill container: 5a7753615dbb: Container 5a7753615dbb4e81d0217e000d49c1da129db9dd581554d81d6a5e6e2b28aba8 is not running
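The "Cannot kill container ... is not running" responses are themselves harmless: the teardown step issues docker kill for containers that have already exited. A minimal sketch of a more tolerant cleanup, using the container IDs from the messages above; the '|| true' guards are an assumption about how the job is allowed to swallow these errors:

        # Kill (if still running) and remove each test container; ignore races
        # where the container has already stopped or been removed.
        for id in cc0cb1f7a453 71e54aaabfb4 e7b56aef7841 7ec0072dc8df 5a7753615dbb; do
            docker kill "$id" 2>/dev/null || true
            docker rm -f "$id" 2>/dev/null || true
        done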
        (kubeadm troubleshooting output omitted here - it is identical to the output embedded in the "X Exiting due to K8S_KUBELET_NOT_RUNNING" message below)

*
╭──────────────────────────────────────────────────────────────────╮
│                                                                  │
│    * If the above advice does not help, please let us know:      │
│      https://github.com/kubernetes/minikube/issues/new/choose    │
│                                                                  │
│    * Please attach the following file to the GitHub issue:       │
│    * - /root/.minikube/logs/lastStart.txt                        │
│                                                                  │
╰──────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'


stderr:
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
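The root failure is that the kubelet never brought the control-plane static Pods up within kubeadm's 4-minute wait, and the stderr warnings point at the likely culprits (cgroupfs vs. systemd cgroup driver, missing socat, kubelet service not enabled). The triage the message itself recommends, collected into one pass and assuming a systemd host with Docker as the container runtime:

        # Kubelet health and the most recent failure reasons.
        systemctl status kubelet --no-pager
        journalctl -xeu kubelet --no-pager | tail -n 100

        # Were the control-plane containers started (or are they crash-looping)?
        docker ps -a | grep kube | grep -v pause
        # For a failing container ID from the listing above:
        #   docker logs <CONTAINERID>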

* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
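The suggestion above targets the cgroup-driver mismatch reported in the preflight warnings (Docker on cgroupfs, systemd recommended). A common remediation, sketched under the assumption that the agent's Docker daemon may be reconfigured and restarted and that any existing /etc/docker/daemon.json options are merged rather than overwritten:

        # /etc/docker/daemon.json - add (or merge with existing options):
        #   { "exec-opts": ["native.cgroupdriver=systemd"] }
        systemctl restart docker

        # Recreate the cluster so kubeadm picks up the new driver.
        minikube delete
        minikube start --extra-config=kubelet.cgroup-driver=systemd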
cp: cannot stat 'onedata/test-reports/*.xml': No such file or directory
cp: cannot stat 'onedata/one_env/sources_info.yaml': No such file or directory
mv: cannot stat 'onedata/tests/mixed/logs/report.*': No such file or directory
mv: cannot stat 'onedata/tests/mixed/logs/gui_report/images.yaml': No such file or directory
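The trailing cp/mv failures are secondary: the artifact-collection steps assume the test suite produced reports, but nothing ran because the environment never deployed. A minimal sketch of a collection step that tolerates missing artifacts; ARTIFACT_DIR and the guard-and-skip behaviour are assumptions for illustration, not part of the original job:

        # Collect whatever reports exist without failing the step when none do.
        shopt -s nullglob
        reports=(onedata/test-reports/*.xml)
        if [ ${#reports[@]} -gt 0 ]; then
            cp "${reports[@]}" "$ARTIFACT_DIR/"
        else
            echo "no test reports to collect"
        fi
        [ -f onedata/one_env/sources_info.yaml ] && cp onedata/one_env/sources_info.yaml "$ARTIFACT_DIR/" || true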