GUI acceptance tests using an environment deployed from packages.

Build: #2261 failed

Job: Multiprovider multiuser failed

Stages & jobs

  1. Acceptance Test

Job result summary

Status: Completed
Duration: 8 minutes
Revision: 32a8637a9a01b830c75d008c42ff27e7892513dd
Total tests: 5
Fixed in: #2262 (Child of ODSRV-OPRPM-2186)

Tests

  • 5 tests in total
  • 5 tests failed
  • 5 failures are new
  • 5 minutes taken in total
New test failures: 5
Failed: test_multiprovider_multiuser :: test_user_supports_space_by_two_providers_and_sees_that_there_are_two_provider_in_file_browser[1oz_2op_deployed]
Duration: 5 mins
AssertionError: no info notify with ".*[Aa]dded.*support.*space.*" msg found
request = <FixtureRequest for <Function 'test_user_supports_space_by_two_providers_and_sees_that_there_are_two_provider_in_file_browser[1oz_2op_deployed]'>>

    @pytest.mark.usefixtures(*function_args)
    def scenario_wrapper(request):
>       _execute_scenario(feature, scenario, request, encoding)

/usr/local/lib/python3.6/dist-packages/pytest_bdd/scenario.py:227: 
(40 more lines...)
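The failure above is the GUI assertion layer timing out while waiting for an info notification whose text matches ".*[Aa]dded.*support.*space.*". A minimal sketch of such a polling check, assuming a hypothetical fetch_notifications callable in place of the real Selenium page-object helpers:

    import re
    import time

    def wait_for_info_notify(fetch_notifications, pattern, timeout=30, interval=1):
        """Poll until an info notification matching `pattern` is visible.

        fetch_notifications: zero-argument callable returning the texts of
        the notifications currently shown in the GUI (hypothetical stand-in
        for the real page-object helpers).
        """
        regex = re.compile(pattern)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            for msg in fetch_notifications():
                if regex.search(msg):
                    return msg
            time.sleep(interval)
        # Mirrors the failure message recorded above.
        raise AssertionError('no info notify with "{}" msg found'.format(pattern))

Polling with a deadline rather than a single check matters here because the notification is raised asynchronously after the support operation completes.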
Failed: test_multiprovider_multiuser :: test_user_uploads_file_on_one_provider_sees_its_distribution_downloads_on_other_provider_and_again_sees_its_distribution[1oz_2op_deployed]
Duration: 1 sec
tests.utils.http_exceptions.HTTPBadRequest: [400] Bad Request: {"error":{"id":"errorOnNodes","details":{"hostnames":["dev-oneprovider-krakow-0.dev-oneprovider-krakow.default.svc.cluster.local"],"error":{"id":"badValueToken","details":{"tokenError":{"id":"badToken","description":"Provided token could not be understood by the server."},"key":"token"},"description":"Bad value: provided \"token\" is not a valid token (see details)."}},"description":"Error on nodes dev-oneprovider-krakow-0.dev-oneprovider-krakow.default.svc.cluster.local: Bad value: provided \"token\" is not a valid token (see details)."}}
request = <FixtureRequest for <Function 'test_user_uploads_file_on_one_provider_sees_its_distribution_downloads_on_other_provider_and_again_sees_its_distribution[1oz_2op_deployed]'>>

    @pytest.mark.usefixtures(*function_args)
    def scenario_wrapper(request):
>       _execute_scenario(feature, scenario, request, encoding)

/usr/local/lib/python3.6/dist-packages/pytest_bdd/scenario.py:227: 
(47 more lines...)
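This and the remaining failures share one root cause: Onepanel rejects the token used during environment setup, returning a nested "badValueToken"/"badToken" error. The innermost cause can be dug out of the error body shown above; a sketch (field names taken from that body, not from any framework API):

    import json

    def innermost_error(body):
        # Unwrap the nested Onepanel error body ("errorOnNodes" ->
        # "badValueToken" -> "tokenError") down to its root cause.
        err = json.loads(body)["error"]
        while True:
            details = err.get("details", {})
            inner = details.get("error") or details.get("tokenError")
            if inner is None:
                return err
            err = inner

    body = (
        '{"error":{"id":"errorOnNodes","details":{"hostnames":["h1"],'
        '"error":{"id":"badValueToken","details":{"tokenError":'
        '{"id":"badToken","description":"Provided token could not be '
        'understood by the server."},"key":"token"},'
        '"description":"Bad value: provided \\"token\\" is not a valid token"}},'
        '"description":"Error on nodes h1: ..."}}'
    )

    print(innermost_error(body)["id"])  # prints: badToken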
Failed: test_multiprovider_multiuser :: test_user_uses_autocleaning[1oz_2op_deployed]
Duration: 1 sec
tests.utils.http_exceptions.HTTPBadRequest: [400] Bad Request: {"error":{"id":"errorOnNodes","details":{"hostnames":["dev-oneprovider-paris-0.dev-oneprovider-paris.default.svc.cluster.local"],"error":{"id":"badValueToken","details":{"tokenError":{"id":"badToken","description":"Provided token could not be understood by the server."},"key":"token"},"description":"Bad value: provided \"token\" is not a valid token (see details)."}},"description":"Error on nodes dev-oneprovider-paris-0.dev-oneprovider-paris.default.svc.cluster.local: Bad value: provided \"token\" is not a valid token (see details)."}}
request = <FixtureRequest for <Function 'test_user_uses_autocleaning[1oz_2op_deployed]'>>

    @pytest.mark.usefixtures(*function_args)
    def scenario_wrapper(request):
>       _execute_scenario(feature, scenario, request, encoding)

/usr/local/lib/python3.6/dist-packages/pytest_bdd/scenario.py:227: 
(47 more lines...)
Failed: test_multiprovider_multiuser :: test_user_uses_autocleaning_with_lower_size_limit_which_skips_too_small_files[1oz_2op_deployed]
Duration: 1 sec
tests.utils.http_exceptions.HTTPBadRequest: [400] Bad Request: {"error":{"id":"errorOnNodes","details":{"hostnames":["dev-oneprovider-paris-0.dev-oneprovider-paris.default.svc.cluster.local"],"error":{"id":"badValueToken","details":{"tokenError":{"id":"badToken","description":"Provided token could not be understood by the server."},"key":"token"},"description":"Bad value: provided \"token\" is not a valid token (see details)."}},"description":"Error on nodes dev-oneprovider-paris-0.dev-oneprovider-paris.default.svc.cluster.local: Bad value: provided \"token\" is not a valid token (see details)."}}
request = <FixtureRequest for <Function 'test_user_uses_autocleaning_with_lower_size_limit_which_skips_too_small_files[1oz_2op_deployed]'>>

    @pytest.mark.usefixtures(*function_args)
    def scenario_wrapper(request):
>       _execute_scenario(feature, scenario, request, encoding)

/usr/local/lib/python3.6/dist-packages/pytest_bdd/scenario.py:227: 
(47 more lines...)
Failed: test_multiprovider_multiuser :: test_user_uses_autocleaning_with_upper_size_limit_which_skips_too_big_files[1oz_2op_deployed]
Duration: 17 secs
tests.utils.http_exceptions.HTTPBadRequest: [400] Bad Request: {"error":{"id":"errorOnNodes","details":{"hostnames":["dev-oneprovider-paris-0.dev-oneprovider-paris.default.svc.cluster.local"],"error":{"id":"badValueToken","details":{"tokenError":{"id":"badToken","description":"Provided token could not be understood by the server."},"key":"token"},"description":"Bad value: provided \"token\" is not a valid token (see details)."}},"description":"Error on nodes dev-oneprovider-paris-0.dev-oneprovider-paris.default.svc.cluster.local: Bad value: provided \"token\" is not a valid token (see details)."}}
request = <FixtureRequest for <Function 'test_user_uses_autocleaning_with_upper_size_limit_which_skips_too_big_files[1oz_2op_deployed]'>>

    @pytest.mark.usefixtures(*function_args)
    def scenario_wrapper(request):
>       _execute_scenario(feature, scenario, request, encoding)

/usr/local/lib/python3.6/dist-packages/pytest_bdd/scenario.py:227: 
(47 more lines...)

Error summary

The build generated some errors. See the full build log for more details.

Error response from daemon: Cannot kill container: f15e99b0f8b6: Container f15e99b0f8b63de4f609f7b3ba32b894f0e95e0d0de605277fa06739d1c7f80d is not running
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'bamboos'
Submodule 'cdmi_swagger' (ssh://git@git.onedata.org:7999/vfs/cdmi-swagger.git) registered for path 'cdmi_swagger'
Submodule 'one_env' (ssh://git@git.onedata.org:7999/vfs/one-env.git) registered for path 'one_env'
Submodule 'onepanel_swagger' (ssh://git@git.onedata.org:7999/vfs/onepanel-swagger.git) registered for path 'onepanel_swagger'
Submodule 'oneprovider_swagger' (ssh://git@git.onedata.org:7999/vfs/oneprovider-swagger.git) registered for path 'oneprovider_swagger'
Submodule 'onezone_swagger' (ssh://git@git.onedata.org:7999/vfs/onezone-swagger.git) registered for path 'onezone_swagger'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-GAPT-CMMT/onedata/bamboos'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-GAPT-CMMT/onedata/cdmi_swagger'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-GAPT-CMMT/onedata/one_env'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-GAPT-CMMT/onedata/onepanel_swagger'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-GAPT-CMMT/onedata/oneprovider_swagger'...
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-GAPT-CMMT/onedata/onezone_swagger'...
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'onepanel_swagger/bamboos'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-GAPT-CMMT/onedata/onepanel_swagger/bamboos'...
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'oneprovider_swagger/bamboos'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-GAPT-CMMT/onedata/oneprovider_swagger/bamboos'...
Submodule 'bamboos' (ssh://git@git.onedata.org:7999/vfs/bamboos.git) registered for path 'onezone_swagger/bamboos'
Cloning into '/mnt/storage/bamboo-agent-home/xml-data/build-dir/ODSRV-GAPT-CMMT/onedata/onezone_swagger/bamboos'...
error: failed to create clusterrolebinding: clusterrolebindings.rbac.authorization.k8s.io "serviceaccounts-cluster-admin" already exists
Error: could not find tiller
Error: could not find tiller
error: failed to create clusterrolebinding: clusterrolebindings.rbac.authorization.k8s.io "serviceaccounts-cluster-admin" already exists
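Two distinct setup problems surface here. "failed to create clusterrolebinding ... already exists" means the deployment re-ran against a cluster that already holds the binding; "Error: could not find tiller" means the Helm 2 server-side component (Tiller, normally installed by "helm init") is absent from the cluster. A sketch of an idempotent binding step (the binding name comes from the log; the subject bound to cluster-admin is an assumption):

    import subprocess

    def ensure_clusterrolebinding(name="serviceaccounts-cluster-admin"):
        # Probe first so that re-running the deployment does not emit
        # "already exists" errors (output silenced; Python 3.6 compatible).
        probe = subprocess.run(
            ["kubectl", "get", "clusterrolebinding", name],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        if probe.returncode != 0:  # not found, so safe to create
            subprocess.run(
                ["kubectl", "create", "clusterrolebinding", name,
                 "--clusterrole=cluster-admin",
                 "--group=system:serviceaccounts"],
                check=True,
            )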
protocol version mismatch -- is your shell clean?
(see the rsync man page for an explanation)
rsync error: protocol incompatibility (code 2) at compat.c(178) [Receiver=3.1.2]
command terminated with exit code 126
(17 more repetitions of this rsync error and one more occurrence of the clusterrolebinding error omitted)
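The repeated rsync failure is the "dirty shell" problem the message itself points to: the transport injected bytes of its own before rsync's handshake, and exit code 126 additionally suggests the remote command could not be executed. A diagnostic sketch, assuming the files were synced into pods via kubectl exec (an assumption; the log does not show the exact transport):

    import subprocess

    def transport_is_clean(pod, namespace="default"):
        # Per the rsync man page, the remote shell must write nothing of
        # its own; any extra bytes break the protocol handshake. Run a
        # no-op in the pod and verify that zero bytes come back.
        result = subprocess.run(
            ["kubectl", "exec", "-n", namespace, pod, "--", "/bin/true"],
            stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0 and result.stdout == b""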
cp: cannot stat 'onedata/one_env/sources_info.yaml': No such file or directory
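If sources_info.yaml is only produced for source-based deployments (an assumption suggested by this build deploying from packages), the copy step should treat it as optional rather than letting a bare cp fail:

    import shutil
    from pathlib import Path

    def copy_sources_info(src="onedata/one_env/sources_info.yaml", dst="."):
        # The file may legitimately be absent in package-based runs, so
        # skip silently rather than aborting the artifact-collection step.
        path = Path(src)
        if path.is_file():
            shutil.copy(str(path), dst)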
Error from server (NotFound): pods "dev-oneprovider-paris-0" not found
Error response from daemon: Cannot kill container: ffdf59424dfa: Container ffdf59424dfa2cbe20e0771fd1a38581cd2b493ae3eab80bef56a05467d06ede is not running
Error response from daemon: Cannot kill container: 504ad6e79fd5: Container 504ad6e79fd57905351d9151251d92096bac0324a833dfb4a76883ed0189e7d5 is not running
Error response from daemon: Cannot kill container: 2fe8e0acd4eb: Container 2fe8e0acd4eb70eecf920f4bd8bb5d15ea486d601c41aa3a9648aaff5fe6e484 is not running
Error response from daemon: Cannot kill container: 55c6f73b4035: Container 55c6f73b403526d17379d57e244763fb08d030af4f3f3326f7df2fa1a2dc6e6b is not running
Error response from daemon: Cannot kill container: 8fe354eb3698: Container 8fe354eb369841d957e458190026f0da98c785c1eeacb3705537330d66d0cdcb is not running
Error response from daemon: Cannot kill container: d9d6c6f40f43: Container d9d6c6f40f43a77ef8b8218af0288a067e008feed5341e796dcf556df2a1667b is not running
Error response from daemon: Cannot kill container: 4c3d37106534: Container 4c3d3710653405ac97408c22f931b3c033cd52e20484f03fdb9ac5457347f005 is not running
Error response from daemon: Cannot kill container: 1fc0182e668c: Container 1fc0182e668c23a8490c47820e2aedf38c97035c85a7c765e56a459f063a9411 is not running
Error response from daemon: Cannot kill container: 38ce1c56a450: Container 38ce1c56a4502ea6a915bcc80421bce335f916a17262348969ea643f0553d3a9 is not running
Error response from daemon: Cannot kill container: dacb44de4258: Container dacb44de42589fb41106d324e0ca12aa925e55400bac5a9798a111d0919f42b6 is not running
Error response from daemon: Cannot kill container: 6a0460b71f03: Container 6a0460b71f0377f5c8017a75c71843b48162a5d7b2d49db97ad26e4e52471d22 is not running
Error response from daemon: Cannot kill container: 9aa6af223357: Container 9aa6af2233573df404c65b58c0c17a89c94dfeb23a0f59e46f35c7a313412c26 is not running
Error response from daemon: Cannot kill container: ad794e7f53af: Container ad794e7f53af97da22c47f9010a3fd9ea971bf5c28d88a6b845dc96a089a0d1b is not running
Error response from daemon: Cannot kill container: 0e7844703d33: Container 0e7844703d33d93c11141b89f7733f7a99b8e307aa03778ea931bab68b2cac15 is not running
Error response from daemon: Cannot kill container: 87eda5c10ed5: Container 87eda5c10ed54158eb38f881c160fb04b97da00da2e61cfd898f63c9bf305b79 is not running
Error response from daemon: Cannot kill container: 6edb74309c69: Container 6edb74309c69f4603fbf325cc1678409ade754e1a0dc2a7bf3a76f77833f6e22 is not running
Error response from daemon: Cannot kill container: 1311a51a442d: Container 1311a51a442d5a0c0a86b80e8edfd8cc6455a92f8e13cd7bc80b090be540e221 is not running
Error response from daemon: Cannot kill container: 6939a3ff537d: Container 6939a3ff537d0d8f89bdf54224df589799a41d1e766975cfa4fcbbb0d7e84fb3 is not running
Error response from daemon: Cannot kill container: ac311a732f74: Container ac311a732f74f00bff14960da4c04cbf766fa7a576e6ecf11a562168a83d6d23 is not running
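The trailing "Cannot kill container ... is not running" messages are teardown noise: cleanup force-kills any leftover containers, most of which had already exited. A tolerant cleanup sketch (the actual teardown script is not shown in this log):

    import subprocess

    def kill_leftover_containers(container_ids):
        # "docker kill" fails on containers that already exited; that is
        # expected after a failed build, so ignore per-container errors.
        for cid in container_ids:
            subprocess.run(
                ["docker", "kill", cid],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
            )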