SUMMARY
Might be a repeat of #336: playbooks with multiple tasks that use the vmware.vmware_rest modules against multiple vCenters at the same time may fail. It is not completely reproducible but happens often; sometimes a retry fixes it. Unable to reproduce with a single vCenter. Running cloud.common 2.1.2 and vmware.vmware_rest 2.2.0.
ISSUE TYPE
Bug Report
COMPONENT NAME
Observed in
appliance_access_ssh
appliance_networking_proxy
and possibly others
ANSIBLE VERSION
COLLECTION VERSION
cloud.common 2.1.2
vmware.vmware_rest 2.2.0
CONFIGURATION
OS / ENVIRONMENT
Ansible Execution Environment from ansible-builder
vCenter 7.0 U3
STEPS TO REPRODUCE
Playbook with several tasks in a row all using the same collection. After
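For illustration only, a minimal sketch of the kind of playbook described above. The inventory group name, credential variables, and exact module arguments are assumptions, not taken from this report:

```yaml
# Hypothetical repro sketch: several vmware.vmware_rest tasks in a row,
# run in parallel against every vCenter in the inventory.
# Group name, credentials, and module arguments are placeholders.
- hosts: vcenters            # assumed inventory group of 12 vCenter appliances
  gather_facts: false
  connection: local          # modules talk to the vCenter REST API, not SSH
  tasks:
    - name: Enable SSH access on the appliance
      vmware.vmware_rest.appliance_access_ssh:
        vcenter_hostname: "{{ inventory_hostname }}"
        vcenter_username: "{{ vcenter_user }}"       # placeholder variable
        vcenter_password: "{{ vcenter_pass }}"       # placeholder variable
        enabled: true
        state: set

    - name: Configure the appliance HTTP proxy
      vmware.vmware_rest.appliance_networking_proxy:
        vcenter_hostname: "{{ inventory_hostname }}"
        vcenter_username: "{{ vcenter_user }}"
        vcenter_password: "{{ vcenter_pass }}"
        protocol: http
        server: proxy.example.com                    # placeholder proxy
        port: 3128
        state: set
```

With twelve hosts in the group, Ansible runs each task against all vCenters concurrently (forks permitting), which is the pattern that triggers the failure.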
EXPECTED RESULTS
All tasks complete the same as if they had been run with serial: 1 set.
ACTUAL RESULTS
On the 7th task, one of the 12 vCenters in the inventory threw the error below while all of the others completed successfully. A retry may succeed on all of them, or fail on a different vCenter with the same error.
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.cloud.common.plugins.module_utils.turbo.exceptions.EmbeddedModuleUnexpectedFailure: Cannot decode plugin answer: b''
fatal: [randomvcenter.domain.com -> localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_vmware.vmware_rest.appliance_access_ssh_payload_ypjpjcua/ansible_vmware.vmware_rest.appliance_access_ssh_payload.zip/ansible_collections/cloud/common/plugins/module_utils/turbo/common.py\", line 106, in communicate\n File \"/usr/lib64/python3.8/json/__init__.py\", line 357, in loads\n return _default_decoder.decode(s)\n File \"/usr/lib64/python3.8/json/decoder.py\", line 337, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File \"/usr/lib64/python3.8/json/decoder.py\", line 355, in raw_decode\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1668545865.656959-11481-114100221361235/AnsiballZ_appliance_access_ssh.py\", line 107, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1668545865.656959-11481-114100221361235/AnsiballZ_appliance_access_ssh.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1668545865.656959-11481-114100221361235/AnsiballZ_appliance_access_ssh.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.vmware.vmware_rest.plugins.modules.appliance_access_ssh', init_globals=dict(_module_fqn='ansible_collections.vmware.vmware_rest.plugins.modules.appliance_access_ssh', _modlib_path=modlib_path),\n File \"/usr/lib64/python3.8/runpy.py\", line 207, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.8/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.8/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_vmware.vmware_rest.appliance_access_ssh_payload_ypjpjcua/ansible_vmware.vmware_rest.appliance_access_ssh_payload.zip/ansible_collections/vmware/vmware_rest/plugins/modules/appliance_access_ssh.py\", line 261, in <module>\n File \"/usr/lib64/python3.8/asyncio/base_events.py\", line 616, in run_until_complete\n return future.result()\n File \"/tmp/ansible_vmware.vmware_rest.appliance_access_ssh_payload_ypjpjcua/ansible_vmware.vmware_rest.appliance_access_ssh_payload.zip/ansible_collections/vmware/vmware_rest/plugins/modules/appliance_access_ssh.py\", line 177, in main\n File \"/tmp/ansible_vmware.vmware_rest.appliance_access_ssh_payload_ypjpjcua/ansible_vmware.vmware_rest.appliance_access_ssh_payload.zip/ansible_collections/cloud/common/plugins/module_utils/turbo/module.py\", line 119, in __init__\n File \"/tmp/ansible_vmware.vmware_rest.appliance_access_ssh_payload_ypjpjcua/ansible_vmware.vmware_rest.appliance_access_ssh_payload.zip/ansible_collections/cloud/common/plugins/module_utils/turbo/module.py\", line 154, in run_on_daemon\n File \"/tmp/ansible_vmware.vmware_rest.appliance_access_ssh_payload_ypjpjcua/ansible_vmware.vmware_rest.appliance_access_ssh_payload.zip/ansible_collections/cloud/common/plugins/module_utils/turbo/common.py\", line 109, in communicate\nansible_collections.cloud.common.plugins.module_utils.turbo.exceptions.EmbeddedModuleUnexpectedFailure: Cannot decode plugin answer: b''\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I believe this is an issue with the turbo server. If you switch clusters but the turbo server doesn't reset (its reset is time-based), the old cluster is used instead of the new one. This will probably be fixed by #499.
Not ideal, but as a workaround you can add a pause between the tasks: do all of cluster A's tasks first, pause 15 seconds, then do all of cluster B's tasks.
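A sketch of that workaround, assuming one representative task per cluster (hostnames and credential handling are placeholders):

```yaml
# Workaround sketch: group tasks per vCenter and pause between groups
# so the turbo server's time-based reset can occur.
- name: Cluster A tasks
  vmware.vmware_rest.appliance_access_ssh:
    vcenter_hostname: vcenter-a.example.com   # placeholder
    enabled: true
    state: set

- name: Give the turbo server time to reset before switching vCenters
  ansible.builtin.pause:
    seconds: 15

- name: Cluster B tasks
  vmware.vmware_rest.appliance_access_ssh:
    vcenter_hostname: vcenter-b.example.com   # placeholder
    enabled: true
    state: set
```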