tl;dr
We don't want to start bpftrace as a CLI anymore: we want to start a server that
can parse bpftrace programs, and move the whole CLI experience to kubectl-trace.
Long version
While developing and maintaining kubectl-trace with the current user experience we face a number of challenges.
But first, let's describe what the current experience looks like:
Currently, kubectl-trace starts a job containing a container with the bpftrace binary in it.
Then a tool called the "tracerunner" starts bpftrace with a set of parameters and a TTY connection. That TTY connection
is used to stream the stdout and stderr of bpftrace directly to the TTY of the terminal that started kubectl-trace, or that
attached to a particular trace.
The first challenge has been to develop a stable TTY-over-HTTP flow that can sustain the output coming
from bpftrace and serve it to the user via kubectl.
The second challenge is to manage errors accurately. Right now, we are not managing them at all.
The third challenge is that since day zero we have wanted kubectl-trace to be able to connect to
multiple containers/hosts at once.
Because of those challenges, we have been looking for a solution.
The most natural one we came up with consists in moving the CLI experience from bpftrace to kubectl-trace directly.
That means that the new "trace" user experience will be:
1. The user starts a trace via kubectl-trace
2. The local kubectl-trace parses the program and serializes it for execution
3. The job is created with the serialized program
4. Inside the job, instead of the tracerunner we start a gRPC server exposing an API that contains the bpftrace functionality
5. kubectl-trace connects to that API and gets the traces back
While the "attach" user experience will be:
1. The user attaches to a trace via kubectl-trace
2. kubectl-trace just connects to the existing server and gets the trace back
Doing this brings a set of requirements, which can however be met:
- The server speaks gRPC over TLS
- The connection between kubectl-trace and the server is authenticated using the usual Kubernetes auth mechanisms
On the other hand, we also gain a set of advantages:
- Since we have a gRPC server/client, we can connect to multiple servers and have an API that supports that
- We will not need to connect to a remote TTY anymore; in fact, we don't need a TTY at all
- By parsing and serializing the program on the client side we can manage errors better, e.g. parsing errors are reported right away, without having to start the job at all