Occasional inaccessible socket with simultaneous calls #262

Open

jamesturner246 opened this issue Oct 11, 2024 · 1 comment

Hi drunc devs. @cc-a and I ran into a bit of a bump today while working with the drunc process manager Docker image.

Occasionally the socket is inaccessible and an error 500 is returned. The issue occurs sporadically when two or more gRPC calls arrive in quick succession. It isn't disrupting our work, and it's an opportunity to make things a bit more robust on our side, but I thought I'd bring it up in any case.

Cheers.

Stack trace:

❯ docker compose exec app python scripts/talk_to_process_manager.py & docker compose exec app python scripts/talk_to_process_manager.py
[4] 2463345
[4]  + 2463345 suspended (tty output)  docker compose exec app python scripts/talk_to_process_manager.py
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/drunc/utils/shell_utils.py", line 204, in send_command_aio
    response = await cmd(request)
               ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "Socket closed"
	debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"Socket closed", grpc_status:14, created_time:"2024-10-11T13:58:59.404908282+00:00"}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/src/app/scripts/talk_to_process_manager.py", line 59, in <module>
    val = asyncio.run(main())
          ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/usr/src/app/scripts/talk_to_process_manager.py", line 54, in main
    await create_session(pmd)
  File "/usr/src/app/scripts/talk_to_process_manager.py", line 31, in create_session
    return [
           ^
  File "/usr/src/app/scripts/talk_to_process_manager.py", line 31, in <listcomp>
    return [
           ^
  File "/usr/local/lib/python3.11/site-packages/drunc/process_manager/process_manager_driver.py", line 175, in dummy_boot
    yield await self.send_command_aio(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/drunc/utils/shell_utils.py", line 207, in send_command_aio
    self.__handle_grpc_error(e, command)
  File "/usr/local/lib/python3.11/site-packages/drunc/utils/shell_utils.py", line 92, in __handle_grpc_error
    rethrow_if_unreachable_server(error)
  File "/usr/local/lib/python3.11/site-packages/drunc/utils/grpc_utils.py", line 222, in rethrow_if_unreachable_server
    raise ServerUnreachable(grpc_error._details) from grpc_error
drunc.utils.grpc_utils.ServerUnreachable: ('Socket closed', 14)
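
On the robustness point above, a minimal client-side retry sketch, assuming only the ServerUnreachable exception visible in this trace; the helper name, the coroutine factory, and the retry parameters are illustrative rather than drunc API:

import asyncio

from drunc.utils.grpc_utils import ServerUnreachable

async def call_with_retry(send_command, retries=3, base_delay=0.5):
    # send_command is a zero-argument coroutine factory (e.g. a lambda
    # wrapping a process_manager_driver call); retry when the server
    # drops the socket mid-call.
    for attempt in range(1, retries + 1):
        try:
            return await send_command()
        except ServerUnreachable:
            if attempt == retries:
                raise
            # Linear backoff before retrying the transient "Socket closed".
            await asyncio.sleep(base_delay * attempt)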
plasorak (Collaborator) commented Dec 5, 2024

Hi James, regarding this, would you be able to modify https://github.com/DUNE-DAQ/drunc/blob/develop/src/drunc/process_manager/interface/process_manager.py#L59C18-L59C35 to

from concurrent import futures

# pass a dedicated thread pool to the aio server
grpc.aio.server(futures.ThreadPoolExecutor(max_workers=10))

and check whether the problem persists?
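
(For context: the first positional argument of grpc.aio.server() is migration_thread_pool, the executor the server uses to run synchronous handlers, so supplying an explicit pool plausibly changes how concurrent calls are scheduled.)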
