Describe the bug
Spark tasks running with Gazelle are expected to use mostly off-heap memory. However, when we run the 2TB TPC-DS benchmark, we find that for most applications the on-heap memory usage is much higher than the off-heap usage, and some SQL statements do not use off-heap memory at all, which complicates our parameter configuration.

Notes: we traverse all SparkListenerTaskEnd events in the event log and take the maximum values of JVMHeapMemory and OffHeapExecutionMemory across all Task Executor Metrics, and use these maxima as the basis for judging on-heap versus off-heap memory usage. A sketch of this extraction is shown below. The test results are as follows (memory unit: MB):
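For reference, here is a minimal sketch of the measurement described above, assuming an uncompressed Spark 3.x JSON-lines event log; the script name and command-line usage are illustrative and not part of the original report. It scans every SparkListenerTaskEnd event and keeps the maximum JVMHeapMemory and OffHeapExecutionMemory reported in "Task Executor Metrics".

```python
# max_task_memory.py -- sketch of the event-log scan used for the numbers above.
# Assumptions: Spark 3.x event log, JSON lines, uncompressed.
import json
import sys


def max_task_memory(eventlog_path):
    """Return (max JVMHeapMemory, max OffHeapExecutionMemory) in bytes
    across all SparkListenerTaskEnd events in the event log."""
    max_heap = 0
    max_offheap_exec = 0
    with open(eventlog_path, "r", encoding="utf-8") as f:
        for line in f:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip truncated or non-JSON lines
            if event.get("Event") != "SparkListenerTaskEnd":
                continue
            metrics = event.get("Task Executor Metrics", {})
            max_heap = max(max_heap, metrics.get("JVMHeapMemory", 0))
            max_offheap_exec = max(
                max_offheap_exec, metrics.get("OffHeapExecutionMemory", 0)
            )
    return max_heap, max_offheap_exec


if __name__ == "__main__":
    heap, offheap = max_task_memory(sys.argv[1])
    mb = 1024 * 1024
    print(f"max JVMHeapMemory:          {heap / mb:.1f} MB")
    print(f"max OffHeapExecutionMemory: {offheap / mb:.1f} MB")
```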