GUI acceptance tests using an environment deployed from packages.

Build: #2112 failed

Job: Onezone spaces multiuser failed

Stages & jobs

  1. Acceptance Test

Job result summary

Completed
Duration
8 minutes
Revision
d6888cd402c1e2c666417f7e0c3bb45e62d1469e
Failing since
#2110 (Child of ODSRV-OPRPM-2125)
Fixed in
#2113 (Changes by Łukasz Opioła)
No failed tests were found; the build most likely failed before the tests ran, during compilation or environment setup (the Kubernetes test environment did not start, see the errors below).

Error summary

The build generated the errors excerpted below. See the full build log for more details.

Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
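The repeated "connection refused" messages above typically mean that kubectl/helm fell back to the default http://localhost:8080 endpoint because no usable kubeconfig was available; here, because the minikube cluster never came up (see the kubeadm output further below). A minimal sketch for confirming this on the build agent, assuming kubectl is on the PATH of the build user:

        # Check whether any cluster is reachable with the current kubeconfig
        kubectl cluster-info
        # Show which API server address the current context actually points at
        kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'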
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: Cannot kill container: a7298dd7649b: Container a7298dd7649bd315ea1db5542c8c251978801ff2e0899addd938d566bc8c42fc is not running
Error response from daemon: Cannot kill container: 35280939f5f9: Container 35280939f5f9562ee564a6e177d52eb23762bc6fb385ef332a1a55bbbd5aa12d is not running
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'


stderr:
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
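To reproduce the diagnosis on the agent, the commands suggested by kubeadm above can be run as one short sequence (a sketch assuming a systemd-based agent with the docker runtime, as the warnings above indicate):

        # Kubelet state and its most recent log entries
        systemctl status kubelet
        journalctl -xeu kubelet --no-pager | tail -n 100
        # Control-plane containers started by the kubelet, including exited ones
        docker ps -a | grep kube | grep -v pause
        # Inspect the failing one; CONTAINERID is a placeholder
        docker logs CONTAINERID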

*
╭──────────────────────────────────────────────────────────────────╮
│                                                                  │
│    * If the above advice does not help, please let us know:      │
│      https://github.com/kubernetes/minikube/issues/new/choose    │
│                                                                  │
│    * Please attach the following file to the GitHub issue:       │
│    * - /root/.minikube/logs/lastStart.txt                        │
│                                                                  │
╰──────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'


stderr:
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
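The [WARNING IsDockerSystemdCheck] line above points at a likely cause: a cgroup-driver mismatch between Docker (cgroupfs) and the kubelet. A sketch for checking both sides on the agent; the kubelet config path is the one written by kubeadm in the log above:

        # Cgroup driver used by Docker
        docker info --format '{{.CgroupDriver}}'
        # Cgroup driver configured for the kubelet
        grep -i cgroupDriver /var/lib/kubelet/config.yaml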

* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
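Two standard ways to act on the suggestion above, depending on which side is easier to change on the agent (a sketch, not project-specific configuration):

        # Option 1: make the kubelet started by minikube use the systemd driver
        minikube start --extra-config=kubelet.cgroup-driver=systemd

        # Option 2: switch Docker itself to the recommended systemd driver by adding
        #   { "exec-opts": ["native.cgroupdriver=systemd"] }
        # to /etc/docker/daemon.json, then restarting Docker
        sudo systemctl restart docker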
cp: cannot stat 'onedata/test-reports/*.xml': No such file or directory
cp: cannot stat 'onedata/one_env/sources_info.yaml': No such file or directory
mv: cannot stat 'onedata/tests/gui/logs/report.*': No such file or directory
mv: cannot stat 'onedata/tests/gui/logs/gui_report/images.yaml': No such file or directory
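The cp/mv failures above are fallout from the primary failure rather than a separate problem: the environment never started, so no test reports or GUI logs existed for the artifact-collection step to pick up. If that step is meant to be best-effort, it could guard the copies. This is a sketch: the source paths are taken from the log, while the nullglob guard and $ARTIFACT_DIR destination are assumptions about the build script:

        # nullglob makes an unmatched glob expand to nothing instead of a literal pattern
        shopt -s nullglob
        for f in onedata/test-reports/*.xml; do
                cp "$f" "$ARTIFACT_DIR"   # $ARTIFACT_DIR is a placeholder destination
        done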
Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: readlink /mnt/storage/docker/overlay2/l: invalid argument
Error response from daemon: Cannot kill container: 9a0f861d3641: Container 9a0f861d36416b73eab1ea2aed40ca8930fde26ac6c1c1de8dc0d618d8874fdb is not running
Error response from daemon: Cannot kill container: baf4addd15b2: Container baf4addd15b2884c9a1148d36b2cb9b06458bb51e6d4adac2a8baad1b4d67c2d is not running
Error response from daemon: Cannot kill container: d8f186f56af2: Container d8f186f56af25435da0cff50e0c5bab16352f930e43e3f3dcee53e843fa16592 is not running
Error response from daemon: Cannot kill container: d596354016fd: Container d596354016fda0f004c1020b64a0fcddbe332393a9564331ee9860ef1e8c6d76 is not running
Error response from daemon: Cannot kill container: 386a93db3ad8: Container 386a93db3ad87ac1c4e0fd350c68991fa5ba06800e5c12ae7029646a59b751ad is not running
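The trailing "Cannot kill container ... is not running" and "readlink .../overlay2/l: invalid argument" messages come from the post-build cleanup trying to kill containers that had already exited (plus an overlay2 storage-driver hiccup on /mnt/storage). If the cleanup is meant to be best-effort, it can ignore already-stopped containers. This is a sketch; the loop is an assumption about the cleanup script, not its actual contents:

        # Force-remove all containers regardless of state; ignore ones already gone
        for id in $(docker ps -aq); do
                docker rm -f "$id" >/dev/null 2>&1 || true
        done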